Compare commits


90 Commits

Author SHA1 Message Date
Daniel J Walsh
41991bab70 Bump to version v0.1.36
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
2019-05-18 06:37:01 -04:00
Miloslav Trmač
2b5086167f Merge pull request #653 from TristanCacqueray/master
rootless: don't create a namespace unless needed for containers-storage
2019-05-18 05:49:21 +02:00
Tristan Cacqueray
b46d16f48c rootless: don't create a namespace unless needed for containers-storage
This change fixes skopeo usage in restricted environments such as
bubblewrap, where it doesn't need extra capabilities or a user namespace
to perform its action.

Close #649
Signed-off-by: Tristan Cacqueray <tdecacqu@redhat.com>
2019-05-18 02:53:20 +00:00
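For illustration, a minimal sketch of the kind of restricted invocation this enables; the bubblewrap flags and image name here are assumptions, not taken from the PR:

```sh
# Hypothetical: run skopeo inside bubblewrap with no extra capabilities.
# Since neither image uses containers-storage:, no user namespace is created.
bwrap --ro-bind / / --dev /dev --proc /proc \
    skopeo inspect docker://docker.io/library/busybox:latest
```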
Tristan Cacqueray
9fef0eb3f3 vendor: update containers/image
Depends-On: https://github.com/containers/image/pull/631
Signed-off-by: Tristan Cacqueray <tdecacqu@redhat.com>
2019-05-18 02:53:20 +00:00
Miloslav Trmač
30b0a1741e Merge pull request #650 from csomh/fix-man-page-typo
Fix typo on the main man page
2019-05-15 18:51:37 +02:00
Hunor Csomortáni
945b9dc08f Fix typo on the main man page
Signed-off-by: Hunor Csomortáni <csomh@redhat.com>
2019-05-15 17:20:26 +02:00
Miloslav Trmač
904b064da4 Merge pull request #647 from nalind/config
inspect: add a --config flag
2019-05-09 15:37:12 +02:00
Nalin Dahyabhai
7ae62af073 inspect: add a --config flag
Add a --config option to "skopeo inspect" to dump an image's
configuration blob in the OCI format, or the original format
if --config and --raw are specified.

Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>
2019-05-08 11:07:52 -04:00
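A hedged usage sketch based on the description above (the image name is illustrative):

```sh
# dump the configuration blob converted to OCI format
skopeo inspect --config docker://docker.io/library/fedora:latest
# dump the original, unconverted configuration blob
skopeo inspect --config --raw docker://docker.io/library/fedora:latest
```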
Miloslav Trmač
7525a79c93 Merge pull request #646 from QiWang19/creds
Add --no-creds flag to skopeo inspect
2019-05-07 20:53:02 +02:00
juanluisvaladas
07287b5783 Add --no-creds flag to skopeo inspect
Follow PR #433
Close #421

Currently skopeo inspect allows you to:
Use the default credentials in $HOME/.docker.config
Explicitly define credentials via the --creds flag

This implements a --no-creds flag which will query docker registries anonymously.

Signed-off-by: Qi Wang <qiwan@redhat.com>
2019-05-03 13:30:33 -04:00
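A usage sketch (image name illustrative):

```sh
# query the registry anonymously, ignoring any stored credentials
skopeo inspect --no-creds docker://docker.io/library/alpine:latest
# specifying both is rejected; per the utils.go hunk below this fails with
# "creds and no-creds cannot be specified at the same time"
skopeo inspect --no-creds --creds user:pass docker://docker.io/library/alpine:latest
```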
Daniel J Walsh
0a2a62ac20 Merge pull request #618 from SUSE/registry-mirror
Add skopeo registry mirror integration tests
2019-04-25 06:26:08 -04:00
Daniel J Walsh
5581c62a3a Merge pull request #632 from hakandilek/master
build image updated to ubuntu:18.10
2019-04-25 06:24:47 -04:00
Sascha Grunert
6b5bdb7563 Add skopeo registry mirror integration tests
- Update toml to latest release
- Update containers/image
- Add integration tests
- Add hidden `--registry-conf` flag used by the integration tests

Signed-off-by: Sascha Grunert <sgrunert@suse.com>
2019-04-25 11:35:12 +02:00
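A sketch of how the hidden flag might be exercised; the file path and registry are illustrative, and the flag is registered as `registries-conf` in the main.go hunk below:

```sh
# point skopeo at a registries.conf that defines a mirror, then pull through it
skopeo --registries-conf ./fixtures/registries.conf \
    copy docker://registry.example.com/busybox:latest dir:/tmp/busybox
```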
Valentin Rothberg
2bdffc89c2 Merge pull request #640 from rhatdan/vendor
Vendor update container/storage
2019-04-25 11:17:15 +02:00
Daniel J Walsh
65e6449c95 Vendor update container/storage
overlay: propagate errors from mountProgram
utils: root in a userns uses global conf file
Fix handling of additional stores
Correctly check permissions on rootless directory
Fix possible integer overflow on 32bit builds
Evaluate device path for lvm
lockfile test: make concurrent RW test deterministic
lockfile test: make concurrent read tests deterministic

Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
2019-04-24 20:32:46 -04:00
Valentin Rothberg
2829f7da9e Merge pull request #638 from giuseppe/skip-namespace-if-not-needed
rootless: do not create a user namespace if not needed
2019-04-24 14:27:48 +02:00
Giuseppe Scrivano
ece44c2842 rootless: do not create a user namespace if not needed
do not create a user namespace if we already have the capabilities we
need for pulling and storing an image.

Closes: https://github.com/containers/skopeo/issues/637

Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
2019-04-24 13:48:31 +02:00
Miloslav Trmač
0fa335c149 Merge pull request #635 from SUSE/buildah-update
Vendor the latest buildah master
2019-04-24 07:16:29 +02:00
Sascha Grunert
5c0ad57c2c Vendor the latest buildah master
This commit contains the necessary split-up between buildah/pkg and
buildah/util to avoid dependency breaks.

Signed-off-by: Sascha Grunert <sgrunert@suse.com>
2019-04-23 15:07:37 +02:00
Hakan Dilek
b2934e7cf6 build image updated to ubuntu:18.10
fixes #621

Signed-off-by: Hakan Dilek <hakandilek@gmail.com>
2019-04-17 22:04:12 +02:00
Daniel J Walsh
2af7114ea1 Merge pull request #629 from chuanchang/add_help_to_makefile
added help to Makefile
2019-04-17 12:04:02 -04:00
Alex Jia
0e1cc9203e Merge branch 'master' into add_help_to_makefile
2019-04-17 09:49:44 +08:00
Miloslav Trmač
e255ccc145 Merge pull request #630 from lsm5/go-envvar
use GO envvar throughout in Makefile
2019-04-16 19:50:51 +02:00
Alex Jia
9447a55b61 added help to Makefile
Signed-off-by: Alex Jia <chuanchang.jia@gmail.com>
2019-04-16 09:29:10 +08:00
Lokesh Mandvekar
9fdceeb2b2 use GO envvar throughout in Makefile
Signed-off-by: Lokesh Mandvekar <lsm5@fedoraproject.org>
2019-04-16 00:04:57 +00:00
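With the Makefile consistently using `$(GO)`, an alternate toolchain can be selected per invocation; a hedged example (the `go1.12` shim name is an assumption):

```sh
# build with a specific Go toolchain instead of whatever `go` resolves to
make binary-local GO=go1.12
```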
Daniel J Walsh
18ee5f8119 Merge pull request #628 from vrothberg/update-bolt
Switch to github.com/etcd-io/bbolt
2019-04-12 12:36:01 -04:00
Valentin Rothberg
ab6a17059c Switch to github.com/etcd-io/bbolt
github.com/boltdb/bolt is no longer maintained.

Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
2019-04-12 17:27:09 +02:00
Miloslav Trmač
81c5e94850 Merge pull request #624 from giuseppe/skopeo-rootless
skopeo: add rootless support
2019-04-11 21:05:59 +02:00
Daniel J Walsh
99dc83062a Merge pull request #627 from SUSE/storage-v1.12.2
Update containers/storage to v1.12.2
2019-04-11 07:44:29 -04:00
Sascha Grunert
4d8ea6729f Update containers/storage to v1.12.2
This commit simply bumps containers/storage to the latest version to
unblock the containers/image integration test runs.

Signed-off-by: Sascha Grunert <sgrunert@suse.com>
2019-04-11 10:52:28 +02:00
Giuseppe Scrivano
ac85091ecd skopeo: create a userns when running rootless
Closes: https://github.com/containers/skopeo/issues/623

Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
2019-04-10 16:51:46 +02:00
Giuseppe Scrivano
ffa640c2b0 vendor: add and update containers/{buildah,image}
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
2019-04-10 09:33:13 +02:00
Miloslav Trmač
c73bcba7e6 Merge pull request #626 from grdryn/fix-links
Update broken links in info docs
2019-04-08 14:56:45 +02:00
Gerard Ryan
329e1cf61c Update broken links in info docs
2019-04-07 14:37:14 +01:00
Valentin Rothberg
854f766dc7 Merge pull request #622 from rhatdan/man
Make sure we install man pages
2019-03-27 13:22:03 +01:00
Daniel J Walsh
5c73fdbfdc Make sure we install man pages
Currently we are only installing the skopeo.1 man page.  This
change will generate and install all man pages.

Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
2019-03-27 05:52:48 -04:00
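After this change the docs targets cover every page under docs/; a sketch, assuming the default install paths from the Makefile hunk below:

```sh
make docs               # generate docs/*.1 from docs/*.1.md
sudo make install-docs  # install docs/*.1 into ${MANINSTALLDIR}/man1/
```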
Valentin Rothberg
097549748a Merge pull request #620 from rhatdan/vendor
Vendor in latest containers/storage and containers/image
2019-03-25 10:49:46 +01:00
Daniel J Walsh
032309941b Vendor in latest containers/storage and containers/image
Update containers/storage and containers/image to define the location of local storage.

Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
2019-03-24 13:32:35 -04:00
Valentin Rothberg
d93a581fb8 Merge pull request #615 from vrothberg/fix-613
vendor: don't remove containers/image/registries.conf
2019-03-13 17:23:05 +01:00
Valentin Rothberg
52075ab386 Merge branch 'master' into fix-613
2019-03-13 14:20:44 +01:00
Miloslav Trmač
d65ae4b1d7 Merge pull request #616 from vrothberg/vendor-image
vendor containers/image
2019-03-13 14:19:05 +01:00
Valentin Rothberg
c32d27f59e Merge branch 'master' into fix-613
2019-03-13 13:51:16 +01:00
Valentin Rothberg
883d65a54a vendor containers/image
The progress bars now show messages on completion of the copy
operations.

Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
2019-03-13 08:39:40 +01:00
Miloslav Trmač
94728fb73f Merge pull request #614 from vrothberg/vendor-storage-image
WIP - Vendor storage image
2019-03-12 17:04:11 +01:00
Valentin Rothberg
520f0e5ddb WIP - update storage & image
TEST PR for: https://github.com/containers/image/pull/603

Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
2019-03-12 14:38:48 +01:00
Valentin Rothberg
fa39b49a5c vendor: don't remove containers/image/registries.conf
Instruct vndr to not remove image/registries.conf to ease packaging on
Ubuntu.

Fixes: #618
Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
2019-03-11 17:37:14 +01:00
Valentin Rothberg
0490018903 Merge pull request #611 from eramoto/completions-global-option
completions: Fix completions with a global option and indentation
2019-03-06 11:04:11 +01:00
ERAMOTO Masaya
b434c8f424 completions: Use only spaces in indent
Since the indentation mixed tabs and spaces, with some tabs expected to
be 4 spaces wide and others 8, use only spaces for indentation.

Signed-off-by: ERAMOTO Masaya <eramoto.masaya@jp.fujitsu.com>
2019-03-06 11:45:41 +09:00
ERAMOTO Masaya
79de2d9f09 completions: Fix completions with a global option
After a global option was specified, the following string for global
options, commands, and command options was not completed.

Signed-off-by: ERAMOTO Masaya <eramoto.masaya@jp.fujitsu.com>
2019-03-06 11:45:13 +09:00
Valentin Rothberg
2031e17b3c Merge pull request #609 from rhatdan/release
Release 0.1.35
2019-03-05 14:43:59 +01:00
Daniel J Walsh
5a050c1383 version: bump to v0.1.36-dev
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
2019-03-04 16:18:07 -05:00
Daniel J Walsh
404c5bd341 version: bump to v0.1.35
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
2019-03-04 16:18:07 -05:00
Valentin Rothberg
2134209960 Merge pull request #608 from rhatdan/vendor
Vendor in latest containers/storage and image
2019-03-01 15:05:56 +01:00
Daniel J Walsh
1e8c029562 Vendor in latest containers/storage and image
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
2019-03-01 07:16:57 -05:00
Miloslav Trmač
932b037d66 Merge pull request #606 from vrothberg/vendor-vendor-vendor
Vendor updates
2019-02-23 03:37:46 +01:00
Valentin Rothberg
26a48586a0 Travis: add vendor checks
Add checks to Travis to make sure that the vendor.conf is in sync with
the code and the dependencies in ./vendor.  Do this by first running
`make vendor` followed by running `./hack/tree_status.sh` to check if
any file in the tree has been changed.

Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
2019-02-22 12:21:36 +01:00
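The CI check described here is reproducible locally; a minimal sketch:

```sh
# regenerate ./vendor from vendor.conf, then fail if the tree is dirty
make vendor && ./hack/tree_status.sh
```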
Valentin Rothberg
683f4263ef vendor.conf: remove unused dependencies
Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
2019-02-21 14:07:33 +01:00
Valentin Rothberg
ebfa1e936b vendor.conf: pin branches to releases or commits
Most of the dependencies have been copied from libpod's vendor.conf
where such a cleanup has been executed recently.

Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
2019-02-21 14:03:14 +01:00
Valentin Rothberg
509782e78b add hack/tree_status.sh
This script is meant to be used in CI after a `make vendor` run.  Its
sole purpose is to execute `git status --porcelain` and fail with the
list of files reported by it.

Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
2019-02-21 13:50:00 +01:00
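A sketch of what such a script can look like, built around `git status --porcelain` as described (not necessarily the exact file added by the commit):

```sh
#!/usr/bin/env bash
# fail if `make vendor` left any modified or untracked files behind
STATUS=$(git status --porcelain)
if [[ -z "$STATUS" ]]; then
    echo "tree is clean"
else
    echo "tree is dirty, please run 'make vendor' and commit all changes:"
    echo "$STATUS"
    exit 1
fi
```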
Valentin Rothberg
776b408f76 make vendor: always fetch the latest vndr
Make sure to always use the latest version of vndr by fetching it prior
to execution.

Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
2019-02-21 13:48:22 +01:00
Miloslav Trmač
fee5981ebf Merge pull request #604 from eramoto/transports-completions
completions: Introduce transports completions
2019-02-16 20:10:27 +01:00
Valentin Rothberg
d9e9604979 Merge pull request #602 from vrothberg/mpb-progress-bars
update containers/image
2019-02-16 10:37:27 +01:00
Valentin Rothberg
3606380bdb vendor latest containers/image
containers/image moved to a new progress-bar library to fix various
issues related to overlapping bars and redundant entries.

Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
2019-02-16 10:08:35 +01:00
ERAMOTO Masaya
640b967463 completions: Introduce transports completions
Introduce bash completions for the transports that the copy, delete,
and inspect commands support.

Signed-off-by: ERAMOTO Masaya <eramoto.masaya@jp.fujitsu.com>
2019-02-15 14:27:55 +09:00
Valentin Rothberg
b8b9913695 Merge pull request #603 from eramoto/modify-gitignore
Modify .gitignore for generated man pages
2019-02-13 10:16:44 +01:00
ERAMOTO Masaya
9e2720dfcc Modify .gitignore for generated man pages
Modify .gitignore to target any man page, since the skopeo man page was split up
in #598.

Signed-off-by: ERAMOTO Masaya <eramoto.masaya@jp.fujitsu.com>
2019-02-13 10:03:26 +09:00
Miloslav Trmač
b329dd0d4e Merge pull request #600 from nalind/storage-multiple-manifests
Vendor latest github.com/containers/storage
2019-02-08 01:02:50 +01:00
Nalin Dahyabhai
1b10352591 Vendor latest github.com/containers/storage
Update github.com/containers/storage to master (06b6c2e4cf254f5922a79da058c94ac2a65bb92f).

Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>
2019-02-07 17:20:45 -05:00
Daniel J Walsh
bba2874451 Merge pull request #598 from rhatdan/man
split up skopeo man pages
2019-02-01 15:15:46 -05:00
Daniel J Walsh
0322441640 Merge branch 'master' into man
2019-02-01 13:28:45 -05:00
Daniel J Walsh
8868d2ebe4 Merge pull request #596 from eramoto/fix-bash-completions
completions: Fix bash completions when an option requires an argument
2019-02-01 13:28:14 -05:00
Daniel J Walsh
f19acc1c90 split up skopeo man pages
Create a different man page for each of the subcommands.
Also replace some crufty references to kpod with podman

Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
2019-02-01 11:21:51 -05:00
Daniel J Walsh
47f24b4097 Merge branch 'master' into fix-bash-completions
2019-02-01 09:58:46 -05:00
Daniel J Walsh
c2597aab22 Merge pull request #599 from rhatdan/quiet
Add --quiet option to skopeo copy
2019-02-01 09:55:57 -05:00
Daniel J Walsh
47065938da Add --quiet option to skopeo copy
People are using skopeo copy in batch commands and do not need
all of the logging.

Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
2019-02-01 01:38:39 +00:00
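A hedged example of the batch use case the commit describes:

```sh
# copy without progress output, e.g. from a cron job
skopeo copy --quiet docker://docker.io/library/busybox:latest dir:/tmp/busybox
```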
ERAMOTO Masaya
790620024e completions: Fix bash completions when an option requires an argument
Since the options string used as the pattern in the case statement was
not delimited, it does not match the value of the prev variable, so
bash completion tries to complete any option even when the specified
option requires an argument.
This fix stops completing options when an option requires an argument.

Signed-off-by: ERAMOTO Masaya <eramoto.masaya@jp.fujitsu.com>
2019-01-23 19:14:26 +09:00
Daniel J Walsh
42b01df89e Merge pull request #586 from Silvanoc/update-contributing
docs: consolidate CONTRIBUTING
2019-01-17 10:15:18 -05:00
Silvano Cirujano Cuesta
aafae2bc50 docs: consolidate CONTRIBUTING
Move documentation about dependencies management from README.md to
CONTRIBUTING.md.

Closes #583

Signed-off-by: Silvano Cirujano Cuesta <silvano.cirujano-cuesta@siemens.com>
2019-01-17 16:04:13 +01:00
Daniel J Walsh
e5b9ea5ee6 Merge pull request #593 from vrothberg/progress-bar-tty-check
vendor latest c/image
2019-01-17 06:45:48 -05:00
Valentin Rothberg
1c2ff140cb vendor latest c/image
When copying images and the output is not a tty (e.g., when piping to a
file), print single lines instead of using progress bars. This avoids
long and hard-to-parse output.

Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
2019-01-16 17:59:52 +01:00
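Illustrative only: the same copy shows progress bars on a terminal but plain single-line updates once stdout is redirected:

```sh
skopeo copy docker://docker.io/library/alpine:latest dir:/tmp/alpine             # progress bars
skopeo copy docker://docker.io/library/alpine:latest dir:/tmp/alpine > copy.log  # single lines
```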
Valentin Rothberg
f7c608e65e Merge pull request #592 from eramoto/build-in-container
Makefile: Build docs in a container
2019-01-15 14:12:49 +01:00
ERAMOTO Masaya
ec810c91fe Makefile: Build docs in a container
Enables building docs in a container even when go-md2man is not installed
locally.

Signed-off-by: ERAMOTO Masaya <eramoto.masaya@jp.fujitsu.com>
2019-01-15 18:57:30 +09:00
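Per the Makefile hunk below, the new target can be invoked directly; a sketch:

```sh
# build man pages inside the skopeobuildimage container; no local go-md2man needed
make docs-in-container
```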
Daniel J Walsh
17bea86e89 Merge pull request #581 from afbjorklund/build-tag
Allow building without btrfs and ostree
2019-01-04 09:13:25 -05:00
Anders F Björklund
3e0026d907 Allow building without btrfs and ostree
Copy the build tag scripts from Buildah

Signed-off-by: Anders F Björklund <anders.f.bjorklund@gmail.com>
2019-01-03 20:32:53 +01:00
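With the tag scripts in place, a build on a machine without the btrfs and ostree development headers might look like this; the tag names are the conventional containers/storage and containers/image ones and are an assumption here:

```sh
# skip the btrfs graph driver and stub out ostree support
make binary-local BUILDTAGS='exclude_graphdriver_btrfs containers_image_ostree_stub'
```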
Antonio Murdaca
3e98377bf2 Merge pull request #579 from runcom/v0134
release v0.1.34
2018-12-21 16:10:05 +01:00
Antonio Murdaca
0658bc80f3 version: bump to v0.1.35-dev
Signed-off-by: Antonio Murdaca <runcom@linux.com>
2018-12-21 15:52:49 +01:00
Antonio Murdaca
e96a9b0e1b version: bump to v0.1.34
Signed-off-by: Antonio Murdaca <runcom@linux.com>
2018-12-21 15:52:36 +01:00
Antonio Murdaca
08c30b8f06 bump(github.com/containers/image)
Signed-off-by: Antonio Murdaca <runcom@linux.com>
2018-12-21 15:52:01 +01:00
Antonio Murdaca
05212df1c5 Merge pull request #577 from runcom/0133
bump to v0.1.33
2018-12-19 12:11:04 +01:00
Antonio Murdaca
7ec68dd463 version: bump to v0.1.34-dev
Signed-off-by: Antonio Murdaca <runcom@linux.com>
2018-12-19 11:14:07 +01:00
369 changed files with 14411 additions and 29714 deletions

.gitignore

@@ -1,3 +1,3 @@
/docs/skopeo.1
*.1
/layers-*
/skopeo

.travis.yml

@@ -1,3 +1,4 @@
language: go
matrix:
include:
@@ -21,4 +22,4 @@ install:
script:
- if [[ "$TRAVIS_OS_NAME" == "osx" ]]; then hack/travis_osx.sh ; fi
- if [[ "$TRAVIS_OS_NAME" == "linux" ]]; then make check ; fi
- if [[ "$TRAVIS_OS_NAME" == "linux" ]]; then make vendor && ./hack/tree_status.sh && make check ; fi

CONTRIBUTING.md

@@ -115,6 +115,35 @@ Use your real name (sorry, no pseudonyms or anonymous contributions.)
If you set your `user.name` and `user.email` git configs, you can sign your
commit automatically with `git commit -s`.
### Dependencies management
Make sure [`vndr`](https://github.com/LK4D4/vndr) is installed.
In order to add a new dependency to this project:
- add a new line to `vendor.conf` according to `vndr` rules (e.g. `github.com/pkg/errors master`)
- run `make vendor`
In order to update an existing dependency:
- update the relevant dependency line in `vendor.conf`
- run `make vendor`
When new PRs for [containers/image](https://github.com/containers/image) break `skopeo` (i.e. `containers/image` tests fail in `make test-skopeo`):
- create out a new branch in your `skopeo` checkout and switch to it
- update `vendor.conf`. Find out the `containers/image` dependency; update it to vendor from your own branch and your own repository fork (e.g. `github.com/containers/image my-branch https://github.com/runcom/image`)
- run `make vendor`
- make any other necessary changes in the skopeo repo (e.g. add other dependencies now requied by `containers/image`, or update skopeo for changed `containers/image` API)
- optionally add new integration tests to the skopeo repo
- submit the resulting branch as a skopeo PR, marked “DO NOT MERGE”
- iterate until tests pass and the PR is reviewed
- then the original `containers/image` PR can be merged, disregarding its `make test-skopeo` failure
- as soon as possible after that, in the skopeo PR, restore the `containers/image` line in `vendor.conf` to use `containers/image:master`
- run `make vendor`
- update the skopeo PR with the result, drop the “DO NOT MERGE” marking
- after tests complete succcesfully again, merge the skopeo PR
## Communications
For general questions, or discussions, please use the
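Condensing the dependency workflow from this hunk into commands (the dependency name is the example from the text, and `vndr` is assumed to be installed):

```sh
# add a new dependency, then re-vendor
echo 'github.com/pkg/errors master' >> vendor.conf
make vendor
```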

Dockerfile.build

@@ -1,4 +1,4 @@
FROM ubuntu:17.10
FROM ubuntu:18.10
RUN apt-get update && apt-get install -y \
golang \

Makefile

@@ -1,4 +1,4 @@
.PHONY: all binary build-container docs build-local clean install install-binary install-completions shell test-integration vendor
.PHONY: all binary build-container docs docs-in-container build-local clean install install-binary install-completions shell test-integration .install.vndr vendor
export GO15VENDOREXPERIMENT=1
@@ -22,15 +22,15 @@ CONTAINERSSYSCONFIGDIR=${DESTDIR}/etc/containers
REGISTRIESDDIR=${CONTAINERSSYSCONFIGDIR}/registries.d
SIGSTOREDIR=${DESTDIR}/var/lib/atomic/sigstore
BASHINSTALLDIR=${PREFIX}/share/bash-completion/completions
GO_MD2MAN ?= go-md2man
GO ?= go
CONTAINER_RUNTIME := $(shell command -v podman 2> /dev/null || echo docker)
GOMD2MAN ?= $(shell command -v go-md2man || echo '$(GOBIN)/go-md2man')
ifeq ($(DEBUG), 1)
override GOGCFLAGS += -N -l
endif
ifeq ($(shell go env GOOS), linux)
ifeq ($(shell $(GO) env GOOS), linux)
GO_DYN_FLAGS="-buildmode=pie"
endif
@@ -50,10 +50,12 @@ CONTAINER_RUN := $(CONTAINER_CMD) "$(IMAGE)"
GIT_COMMIT := $(shell git rev-parse HEAD 2> /dev/null || true)
MANPAGES_MD = $(wildcard docs/*.md)
MANPAGES ?= $(MANPAGES_MD:%.md=%)
BTRFS_BUILD_TAG = $(shell hack/btrfs_tag.sh)
BTRFS_BUILD_TAG = $(shell hack/btrfs_tag.sh) $(shell hack/btrfs_installed_tag.sh)
LIBDM_BUILD_TAG = $(shell hack/libdm_tag.sh)
LOCAL_BUILD_TAGS = $(BTRFS_BUILD_TAG) $(LIBDM_BUILD_TAG) $(DARWIN_BUILD_TAG)
OSTREE_BUILD_TAG = $(shell hack/ostree_tag.sh)
LOCAL_BUILD_TAGS = $(BTRFS_BUILD_TAG) $(LIBDM_BUILD_TAG) $(OSTREE_BUILD_TAG) $(DARWIN_BUILD_TAG)
BUILDTAGS += $(LOCAL_BUILD_TAGS)
ifeq ($(DISABLE_CGO), 1)
@@ -64,7 +66,20 @@ endif
# Note: Uses the -N -l go compiler options to disable compiler optimizations
# and inlining. Using these build options allows you to subsequently
# use source debugging tools like delve.
all: binary docs
all: binary docs-in-container
help:
@echo "Usage: make <target>"
@echo
@echo " * 'install' - Install binaries and documents to system locations"
@echo " * 'binary' - Build skopeo with a container"
@echo " * 'binary-local' - Build skopeo locally"
@echo " * 'test-unit' - Execute unit tests"
@echo " * 'test-integration' - Execute integration tests"
@echo " * 'validate' - Verify whether there is no conflict and all Go source files have been formatted, linted and vetted"
@echo " * 'check' - Including above validate, test-integration and test-unit"
@echo " * 'shell' - Run the built image and attach to a shell"
@echo " * 'clean' - Clean artifacts"
# Build a container image (skopeobuild) that has everything we need to build.
# Then do the build and the output (skopeo) should appear in current dir
@@ -88,10 +103,15 @@ binary-local-static:
build-container:
${CONTAINER_RUNTIME} build ${BUILD_ARGS} -t "$(IMAGE)" .
docs/%.1: docs/%.1.md
$(GO_MD2MAN) -in $< -out $@.tmp && touch $@.tmp && mv $@.tmp $@
$(MANPAGES): %: %.md
@sed -e 's/\((skopeo.*\.md)\)//' -e 's/\[\(skopeo.*\)\]/\1/' $< | $(GOMD2MAN) -in /dev/stdin -out $@
docs: $(MANPAGES_MD:%.md=%)
docs: $(MANPAGES)
docs-in-container:
${CONTAINER_RUNTIME} build ${BUILD_ARGS} -f Dockerfile.build -t skopeobuildimage .
${CONTAINER_RUNTIME} run --rm --security-opt label=disable -v $$(pwd):/src/github.com/containers/skopeo \
skopeobuildimage make docs $(if $(DEBUG),DEBUG=$(DEBUG)) BUILDTAGS='$(BUILDTAGS)'
clean:
rm -f skopeo docs/*.1
@@ -107,9 +127,9 @@ install-binary: ./skopeo
install -d -m 755 ${INSTALLDIR}
install -m 755 skopeo ${INSTALLDIR}/skopeo
install-docs: docs/skopeo.1
install-docs: docs
install -d -m 755 ${MANINSTALLDIR}/man1
install -m 644 docs/skopeo.1 ${MANINSTALLDIR}/man1/skopeo.1
install -m 644 docs/*.1 ${MANINSTALLDIR}/man1/
install-completions:
install -m 755 -d ${BASHINSTALLDIR}
@@ -140,5 +160,10 @@ validate-local:
test-unit-local:
$(GPGME_ENV) $(GO) test -tags "$(BUILDTAGS)" $$($(GO) list -tags "$(BUILDTAGS)" -e ./... | grep -v '^github\.com/containers/skopeo/\(integration\|vendor/.*\)$$')
vendor: vendor.conf
vndr -whitelist '^github.com/containers/image/docs/.*'
.install.vndr:
$(GO) get -u github.com/LK4D4/vndr
vendor: vendor.conf .install.vndr
$(GOPATH)/bin/vndr \
-whitelist '^github.com/containers/image/docs/.*' \
-whitelist '^github.com/containers/image/registries.conf'

README.md

@@ -229,34 +229,7 @@ NOT TODO
CONTRIBUTING
-
### Dependencies management
Make sure [`vndr`](https://github.com/LK4D4/vndr) is installed.
In order to add a new dependency to this project:
- add a new line to `vendor.conf` according to `vndr` rules (e.g. `github.com/pkg/errors master`)
- run `make vendor`
In order to update an existing dependency:
- update the relevant dependency line in `vendor.conf`
- run `make vendor`
When new PRs for [containers/image](https://github.com/containers/image) break `skopeo` (i.e. `containers/image` tests fail in `make test-skopeo`):
- create out a new branch in your `skopeo` checkout and switch to it
- update `vendor.conf`. Find out the `containers/image` dependency; update it to vendor from your own branch and your own repository fork (e.g. `github.com/containers/image my-branch https://github.com/runcom/image`)
- run `make vendor`
- make any other necessary changes in the skopeo repo (e.g. add other dependencies now requied by `containers/image`, or update skopeo for changed `containers/image` API)
- optionally add new integration tests to the skopeo repo
- submit the resulting branch as a skopeo PR, marked “DO NOT MERGE”
- iterate until tests pass and the PR is reviewed
- then the original `containers/image` PR can be merged, disregarding its `make test-skopeo` failure
- as soon as possible after that, in the skopeo PR, restore the `containers/image` line in `vendor.conf` to use `containers/image:master`
- run `make vendor`
- update the skopeo PR with the result, drop the “DO NOT MERGE” marking
- after tests complete succcesfully again, merge the skopeo PR
Please read the [contribution guide](CONTRIBUTING.md) if you want to collaborate in the project.
License
-

cmd/skopeo/copy.go

@@ -23,6 +23,8 @@ type copyOptions struct {
removeSignatures bool // Do not copy signatures from the source image
signByFingerprint string // Sign the image using a GPG key with the specified fingerprint
format optionalString // Force conversion of the image to a specified format
quiet bool // Suppress output information when copying images
}
func copyCmd(global *globalOptions) cli.Command {
@@ -55,6 +57,11 @@ func copyCmd(global *globalOptions) cli.Command {
Usage: "additional tags (supports docker-archive)",
Value: &opts.additionalTags, // Surprisingly StringSliceFlag does not support Destination:, but modifies Value: in place.
},
cli.BoolFlag{
Name: "quiet, q",
Usage: "Suppress output information when copying images",
Destination: &opts.quiet,
},
cli.BoolFlag{
Name: "remove-signatures",
Usage: "Do not copy signatures from SOURCE-IMAGE",
@@ -78,6 +85,11 @@ func (opts *copyOptions) run(args []string, stdout io.Writer) error {
if len(args) != 2 {
return errorShouldDisplayUsage{errors.New("Exactly two arguments expected")}
}
imageNames := args
if err := reexecIfNecessaryForImages(imageNames...); err != nil {
return err
}
policyContext, err := opts.global.getPolicyContext()
if err != nil {
@@ -85,13 +97,13 @@ func (opts *copyOptions) run(args []string, stdout io.Writer) error {
}
defer policyContext.Destroy()
srcRef, err := alltransports.ParseImageName(args[0])
srcRef, err := alltransports.ParseImageName(imageNames[0])
if err != nil {
return fmt.Errorf("Invalid source name %s: %v", args[0], err)
return fmt.Errorf("Invalid source name %s: %v", imageNames[0], err)
}
destRef, err := alltransports.ParseImageName(args[1])
destRef, err := alltransports.ParseImageName(imageNames[1])
if err != nil {
return fmt.Errorf("Invalid destination name %s: %v", args[1], err)
return fmt.Errorf("Invalid destination name %s: %v", imageNames[1], err)
}
sourceCtx, err := opts.srcImage.newSystemContext()
@@ -132,6 +144,9 @@ func (opts *copyOptions) run(args []string, stdout io.Writer) error {
ctx, cancel := opts.global.commandTimeoutContext()
defer cancel()
if opts.quiet {
stdout = nil
}
_, err = copy.Image(ctx, policyContext, destRef, srcRef, &copy.Options{
RemoveSignatures: opts.removeSignatures,
SignBy: opts.signByFingerprint,

cmd/skopeo/delete.go

@@ -44,10 +44,15 @@ func (opts *deleteOptions) run(args []string, stdout io.Writer) error {
if len(args) != 1 {
return errors.New("Usage: delete imageReference")
}
imageName := args[0]
ref, err := alltransports.ParseImageName(args[0])
if err := reexecIfNecessaryForImages(imageName); err != nil {
return err
}
ref, err := alltransports.ParseImageName(imageName)
if err != nil {
return fmt.Errorf("Invalid source name %s: %v", args[0], err)
return fmt.Errorf("Invalid source name %s: %v", imageName, err)
}
sys, err := opts.image.newSystemContext()

cmd/skopeo/inspect.go

@@ -34,6 +34,7 @@ type inspectOptions struct {
global *globalOptions
image *imageOptions
raw bool // Output the raw manifest instead of parsing information about the image
config bool // Output the raw config blob instead of parsing information about the image
}
func inspectCmd(global *globalOptions) cli.Command {
@@ -58,9 +59,14 @@ func inspectCmd(global *globalOptions) cli.Command {
Flags: append(append([]cli.Flag{
cli.BoolFlag{
Name: "raw",
Usage: "output raw manifest",
Usage: "output raw manifest or configuration",
Destination: &opts.raw,
},
cli.BoolFlag{
Name: "config",
Usage: "output configuration",
Destination: &opts.config,
},
}, sharedFlags...), imageFlags...),
Action: commandAction(opts.run),
}
@@ -73,7 +79,13 @@ func (opts *inspectOptions) run(args []string, stdout io.Writer) (retErr error)
if len(args) != 1 {
return errors.New("Exactly one argument expected")
}
img, err := parseImage(ctx, opts.image, args[0])
imageName := args[0]
if err := reexecIfNecessaryForImages(imageName); err != nil {
return err
}
img, err := parseImage(ctx, opts.image, imageName)
if err != nil {
return err
}
@@ -88,12 +100,32 @@ func (opts *inspectOptions) run(args []string, stdout io.Writer) (retErr error)
if err != nil {
return err
}
if opts.raw {
if opts.config && opts.raw {
configBlob, err := img.ConfigBlob(ctx)
if err != nil {
return fmt.Errorf("Error reading configuration blob: %v", err)
}
_, err = stdout.Write(configBlob)
if err != nil {
return fmt.Errorf("Error writing configuration blob to standard output: %v", err)
}
return nil
} else if opts.raw {
_, err := stdout.Write(rawManifest)
if err != nil {
return fmt.Errorf("Error writing manifest to standard output: %v", err)
}
return nil
} else if opts.config {
config, err := img.OCIConfig(ctx)
if err != nil {
return fmt.Errorf("Error reading OCI-formatted configuration data: %v", err)
}
err = json.NewEncoder(stdout).Encode(config)
if err != nil {
return fmt.Errorf("Error writing OCI-formatted configuration data to standard output: %v", err)
}
return nil
}
imgInspect, err := img.Inspect(ctx)
if err != nil {

cmd/skopeo/layers.go

@@ -43,6 +43,11 @@ func (opts *layersOptions) run(args []string, stdout io.Writer) (retErr error) {
if len(args) == 0 {
return errors.New("Usage: layers imageReference [layer...]")
}
imageName := args[0]
if err := reexecIfNecessaryForImages(imageName); err != nil {
return err
}
ctx, cancel := opts.global.commandTimeoutContext()
defer cancel()
@@ -52,7 +57,7 @@ func (opts *layersOptions) run(args []string, stdout io.Writer) (retErr error) {
return err
}
cache := blobinfocache.DefaultCache(sys)
rawSource, err := parseImageSource(ctx, opts.image, args[0])
rawSource, err := parseImageSource(ctx, opts.image, imageName)
if err != nil {
return err
}

cmd/skopeo/main.go

@@ -18,14 +18,15 @@ import (
var gitCommit = ""
type globalOptions struct {
debug bool // Enable debug output
tlsVerify optionalBool // Require HTTPS and verify certificates (for docker: and docker-daemon:)
policyPath string // Path to a signature verification policy file
insecurePolicy bool // Use an "allow everything" signature verification policy
registriesDirPath string // Path to a "registries.d" registry configuratio directory
overrideArch string // Architecture to use for choosing images, instead of the runtime one
overrideOS string // OS to use for choosing images, instead of the runtime one
commandTimeout time.Duration // Timeout for the command execution
debug bool // Enable debug output
tlsVerify optionalBool // Require HTTPS and verify certificates (for docker: and docker-daemon:)
policyPath string // Path to a signature verification policy file
insecurePolicy bool // Use an "allow everything" signature verification policy
registriesDirPath string // Path to a "registries.d" registry configuratio directory
overrideArch string // Architecture to use for choosing images, instead of the runtime one
overrideOS string // OS to use for choosing images, instead of the runtime one
commandTimeout time.Duration // Timeout for the command execution
registriesConfPath string // Path to the "registries.conf" file
}
// createApp returns a cli.App, and the underlying globalOptions object, to be run or tested.
@@ -83,6 +84,12 @@ func createApp() (*cli.App, *globalOptions) {
Usage: "timeout for the command execution",
Destination: &opts.commandTimeout,
},
cli.StringFlag{
Name: "registries-conf",
Usage: "path to the registries.conf file",
Destination: &opts.registriesConfPath,
Hidden: true,
},
}
app.Before = opts.before
app.Commands = []cli.Command{
@@ -99,7 +106,7 @@ func createApp() (*cli.App, *globalOptions) {
}
// before is run by the cli package for any command, before running the command-specific handler.
func (opts *globalOptions) before(_ *cli.Context) error {
func (opts *globalOptions) before(ctx *cli.Context) error {
if opts.debug {
logrus.SetLevel(logrus.DebugLevel)
}

cmd/skopeo/unshare.go

@@ -0,0 +1,11 @@
// +build !linux
package main
func maybeReexec() error {
return nil
}
func reexecIfNecessaryForImages(inputImageNames ...string) error {
return nil
}

cmd/skopeo/unshare_linux.go

@@ -0,0 +1,46 @@
package main
import (
"github.com/containers/buildah/pkg/unshare"
"github.com/containers/image/storage"
"github.com/containers/image/transports/alltransports"
"github.com/pkg/errors"
"github.com/syndtr/gocapability/capability"
)
var neededCapabilities = []capability.Cap{
capability.CAP_CHOWN,
capability.CAP_DAC_OVERRIDE,
capability.CAP_FOWNER,
capability.CAP_FSETID,
capability.CAP_MKNOD,
capability.CAP_SETFCAP,
}
func maybeReexec() error {
// With Skopeo we need only the subset of the root capabilities necessary
// for pulling an image to the storage. Do not attempt to create a namespace
// if we already have the capabilities we need.
capabilities, err := capability.NewPid(0)
if err != nil {
return errors.Wrapf(err, "error reading the current capabilities sets")
}
for _, cap := range neededCapabilities {
if !capabilities.Get(capability.EFFECTIVE, cap) {
// We miss a capability we need, create a user namespaces
unshare.MaybeReexecUsingUserNamespace(true)
return nil
}
}
return nil
}
func reexecIfNecessaryForImages(imageNames ...string) error {
// Check if container-storage are used before doing unshare
for _, imageName := range imageNames {
if alltransports.TransportFromImageName(imageName).Name() == storage.Transport.Name() {
return maybeReexec()
}
}
return nil
}

cmd/skopeo/utils.go

@@ -59,6 +59,7 @@ type imageOptions struct {
tlsVerify optionalBool // Require HTTPS and verify certificates (for docker: and docker-daemon:)
sharedBlobDir string // A directory to use for OCI blobs, shared across repositories
dockerDaemonHost string // docker-daemon: host to connect to
noCreds bool // Access the registry anonymously
}
// imageFlags prepares a collection of CLI flags writing into imageOptions, and the managed imageOptions structure.
@@ -101,6 +102,11 @@ func imageFlags(global *globalOptions, shared *sharedImageOptions, flagPrefix, c
Usage: "use docker daemon host at `HOST` (docker-daemon: only)",
Destination: &opts.dockerDaemonHost,
},
cli.BoolFlag{
Name: flagPrefix + "no-creds",
Usage: "Access the registry anonymously",
Destination: &opts.noCreds,
},
}, &opts
}
@@ -108,14 +114,15 @@ func imageFlags(global *globalOptions, shared *sharedImageOptions, flagPrefix, c
// It is guaranteed to return a fresh instance, so it is safe to make additional updates to it.
func (opts *imageOptions) newSystemContext() (*types.SystemContext, error) {
ctx := &types.SystemContext{
RegistriesDirPath: opts.global.registriesDirPath,
ArchitectureChoice: opts.global.overrideArch,
OSChoice: opts.global.overrideOS,
DockerCertPath: opts.dockerCertPath,
OCISharedBlobDirPath: opts.sharedBlobDir,
AuthFilePath: opts.shared.authFilePath,
DockerDaemonHost: opts.dockerDaemonHost,
DockerDaemonCertPath: opts.dockerCertPath,
RegistriesDirPath: opts.global.registriesDirPath,
ArchitectureChoice: opts.global.overrideArch,
OSChoice: opts.global.overrideOS,
DockerCertPath: opts.dockerCertPath,
OCISharedBlobDirPath: opts.sharedBlobDir,
AuthFilePath: opts.shared.authFilePath,
DockerDaemonHost: opts.dockerDaemonHost,
DockerDaemonCertPath: opts.dockerCertPath,
SystemRegistriesConfPath: opts.global.registriesConfPath,
}
if opts.tlsVerify.present {
ctx.DockerDaemonInsecureSkipTLSVerify = !opts.tlsVerify.value
@@ -127,6 +134,9 @@ func (opts *imageOptions) newSystemContext() (*types.SystemContext, error) {
if opts.tlsVerify.present {
ctx.DockerInsecureSkipTLSVerify = types.NewOptionalBool(!opts.tlsVerify.value)
}
if opts.credsOption.present && opts.noCreds {
return nil, errors.New("creds and no-creds cannot be specified at the same time")
}
if opts.credsOption.present {
var err error
ctx.DockerAuthConfig, err = getDockerAuth(opts.credsOption.value)
@@ -134,6 +144,9 @@ func (opts *imageOptions) newSystemContext() (*types.SystemContext, error) {
return nil, err
}
}
if opts.noCreds {
ctx.DockerAuthConfig = &types.DockerAuthConfig{}
}
return ctx, nil
}

completions/bash/skopeo

@@ -5,20 +5,37 @@
_complete_() {
local options_with_args=$1
local boolean_options="$2 -h --help"
local transports=$3
case "$prev" in
$options_with_args)
return
;;
esac
local option_with_args
for option_with_args in $options_with_args $transports
do
if [ "$option_with_args" == "$prev" -o "$option_with_args" == "$cur" ]
then
return
fi
done
case "$cur" in
-*)
COMPREPLY=( $( compgen -W "$boolean_options $options_with_args" -- "$cur" ) )
;;
-*)
COMPREPLY=( $( compgen -W "$boolean_options $options_with_args" -- "$cur" ) )
;;
*)
if [ -n "$transports" ]
then
compopt -o nospace
COMPREPLY=( $( compgen -W "$transports" -- "$cur" ) )
fi
;;
esac
}
_skopeo_supported_transports() {
local subcommand=$1
${PROG} $subcommand --help | grep "Supported transports" -A 1 | tail -n 1 | sed -e 's/,/:/g' -e 's/$/:/'
}
_skopeo_copy() {
local options_with_args="
--authfile
@@ -38,9 +55,15 @@ _skopeo_copy() {
local boolean_options="
--dest-compress
--remove-signatures
--src-no-creds
--dest-no-creds
"
_complete_ "$options_with_args" "$boolean_options"
local transports="
$(_skopeo_supported_transports $(echo $FUNCNAME | sed 's/_skopeo_//'))
"
_complete_ "$options_with_args" "$boolean_options" "$transports"
}
_skopeo_inspect() {
@@ -50,15 +73,22 @@ _skopeo_inspect() {
--cert-dir
"
local boolean_options="
--config
--raw
--tls-verify
--no-creds
"
_complete_ "$options_with_args" "$boolean_options"
local transports="
$(_skopeo_supported_transports $(echo $FUNCNAME | sed 's/_skopeo_//'))
"
_complete_ "$options_with_args" "$boolean_options" "$transports"
}
_skopeo_standalone_sign() {
local options_with_args="
-o --output
-o --output
"
local boolean_options="
"
@@ -89,50 +119,56 @@ _skopeo_delete() {
"
local boolean_options="
--tls-verify
--no-creds
"
_complete_ "$options_with_args" "$boolean_options"
local transports="
$(_skopeo_supported_transports $(echo $FUNCNAME | sed 's/_skopeo_//'))
"
_complete_ "$options_with_args" "$boolean_options" "$transports"
}
_skopeo_layers() {
local options_with_args="
--creds
--cert-dir
--creds
--cert-dir
"
local boolean_options="
--tls-verify
--tls-verify
"
_complete_ "$options_with_args" "$boolean_options"
}
_skopeo_skopeo() {
local options_with_args="
--policy
--registries.d
--policy
--registries.d
--override-arch
--override-os
--command-timeout
"
local boolean_options="
--insecure-policy
--debug
--version -v
--help -h
--insecure-policy
--debug
--version -v
--help -h
"
commands=$( ${COMP_WORDS[@]:0:$COMP_CWORD} --generate-bash-completion )
case "$prev" in
$main_options_with_args_glob )
return
;;
$main_options_with_args_glob )
return
;;
esac
case "$cur" in
-*)
COMPREPLY=( $( compgen -W "$boolean_options $options_with_args" -- "$cur" ) )
;;
*)
COMPREPLY=( $( compgen -W "${commands[*]} help" -- "$cur" ) )
;;
-*)
COMPREPLY=( $( compgen -W "$boolean_options $options_with_args" -- "$cur" ) )
;;
*)
COMPREPLY=( $( compgen -W "${commands[*]} help" -- "$cur" ) )
;;
esac
}
@@ -150,15 +186,17 @@ _cli_bash_autocomplete() {
local counter=1
counter=1
while [ $counter -lt $cword ]; do
case "!${words[$counter]}" in
*)
command=$(echo "${words[$counter]}" | sed 's/-/_/g')
cpos=$counter
(( cpos++ ))
break
;;
esac
(( counter++ ))
case "${words[$counter]}" in
-*)
;;
*)
command=$(echo "${words[$counter]}" | sed 's/-/_/g')
cpos=$counter
(( cpos++ ))
break
;;
esac
(( counter++ ))
done
local completions_func=_skopeo_${command}

docs/skopeo-copy.1.md

@@ -0,0 +1,83 @@
% skopeo-copy(1)
## NAME
skopeo\-copy - Copy an image (manifest, filesystem layers, signatures) from one location to another.
## SYNOPSIS
**skopeo copy** [**--sign-by=**_key-ID_] _source-image destination-image_
## DESCRIPTION
Copy an image (manifest, filesystem layers, signatures) from one location to another.
Uses the system's trust policy to validate images, rejects images not trusted by the policy.
_source-image_ use the "image name" format described above
_destination-image_ use the "image name" format described above
## OPTIONS
**--authfile** _path_
Path of the authentication file. Default is ${XDG_RUNTIME\_DIR}/containers/auth.json, which is set using `podman login`.
If the authorization state is not found there, $HOME/.docker/config.json is checked, which is set using `docker login`.
**--format, -f** _manifest-type_ Manifest type (oci, v2s1, or v2s2) to use when saving image to directory using the 'dir:' transport (default is manifest type of source)
**--quiet, -q** suppress output information when copying images
**--remove-signatures** do not copy signatures, if any, from _source-image_. Necessary when copying a signed image to a destination which does not support signatures.
**--sign-by=**_key-id_ add a signature using that key ID for an image name corresponding to _destination-image_
**--src-creds** _username[:password]_ for accessing the source registry
**--dest-compress** _bool-value_ Compress tarball image layers when saving to directory using the 'dir' transport. (default is same compression type as source)
**--dest-creds** _username[:password]_ for accessing the destination registry
**--src-cert-dir** _path_ Use certificates at _path_ (*.crt, *.cert, *.key) to connect to the source registry or daemon
**--src-no-creds** _bool-value_ Access the registry anonymously.
**--src-tls-verify** _bool-value_ Require HTTPS and verify certificates when talking to container source registry or daemon (defaults to true)
**--dest-cert-dir** _path_ Use certificates at _path_ (*.crt, *.cert, *.key) to connect to the destination registry or daemon
**--dest-no-creds** _bool-value_ Access the registry anonymously.
**--dest-ostree-tmp-dir** _path_ Directory to use for OSTree temporary files.
**--dest-tls-verify** _bool-value_ Require HTTPS and verify certificates when talking to container destination registry or daemon (defaults to true)
**--src-daemon-host** _host_ Copy from docker daemon at _host_. If _host_ starts with `tcp://`, HTTPS is enabled by default. To use plain HTTP, use the form `http://` (default is `unix:///var/run/docker.sock`).
**--dest-daemon-host** _host_ Copy to docker daemon at _host_. If _host_ starts with `tcp://`, HTTPS is enabled by default. To use plain HTTP, use the form `http://` (default is `unix:///var/run/docker.sock`).
Existing signatures, if any, are preserved as well.
## EXAMPLES
To copy the layers of the docker.io busybox image to a local directory:
```sh
$ mkdir -p /var/lib/images/busybox
$ skopeo copy docker://busybox:latest dir:/var/lib/images/busybox
$ ls /var/lib/images/busybox/*
/tmp/busybox/2b8fd9751c4c0f5dd266fcae00707e67a2545ef34f9a29354585f93dac906749.tar
/tmp/busybox/manifest.json
/tmp/busybox/8ddc19f16526912237dd8af81971d5e4dd0587907234be2b83e249518d5b673f.tar
```
To copy and sign an image:
```sh
$ skopeo copy --sign-by dev@example.com atomic:example/busybox:streaming atomic:example/busybox:gold
```
## SEE ALSO
skopeo(1), podman-login(1), docker-login(1)
## AUTHORS
Antonio Murdaca <runcom@redhat.com>, Miloslav Trmac <mitr@redhat.com>, Jhon Honce <jhonce@redhat.com>

docs/skopeo-delete.1.md

@@ -0,0 +1,52 @@
% skopeo-delete(1)
## NAME
skopeo\-delete - Mark _image-name_ for deletion.
## SYNOPSIS
**skopeo delete** _image-name_
Mark _image-name_ for deletion. To release the allocated disk space, you must login to the container registry server and execute the container registry garbage collector. E.g.,
```
/usr/bin/registry garbage-collect /etc/docker-distribution/registry/config.yml
Note: sometimes the config.yml is stored in /etc/docker/registry/config.yml
If you are running the container registry inside of a container you would execute something like:
$ docker exec -it registry /usr/bin/registry garbage-collect /etc/docker-distribution/registry/config.yml
```
**--authfile** _path_
Path of the authentication file. Default is ${XDG_RUNTIME\_DIR}/containers/auth.json, which is set using `podman login`.
If the authorization state is not found there, $HOME/.docker/config.json is checked, which is set using `docker login`.
**--creds** _username[:password]_ for accessing the registry
**--cert-dir** _path_ Use certificates at _path_ (*.crt, *.cert, *.key) to connect to the registry
**--tls-verify** _bool-value_ Require HTTPS and verify certificates when talking to container registries (defaults to true)
**--no-creds** _bool-value_ Access the registry anonymously.
Additionally, the registry must allow deletions by setting `REGISTRY_STORAGE_DELETE_ENABLED=true` for the registry daemon.
## EXAMPLES
Mark image example/pause for deletion from the registry.example.com registry:
```sh
$ skopeo delete --force docker://registry.example.com/example/pause:latest
```
See above for additional details on using the command **delete**.
## SEE ALSO
skopeo(1), podman-login(1), docker-login(1)
## AUTHORS
Antonio Murdaca <runcom@redhat.com>, Miloslav Trmac <mitr@redhat.com>, Jhon Honce <jhonce@redhat.com>

docs/skopeo-inspect.1.md

@@ -0,0 +1,71 @@
% skopeo-inspect(1)
## NAME
skopeo\-inspect - Return low-level information about _image-name_ in a registry
## SYNOPSIS
**skopeo inspect** [**--raw**] [**--config**] _image-name_
Return low-level information about _image-name_ in a registry
**--raw** output raw manifest, default is to format in JSON
_image-name_ name of image to retrieve information about
**--config** output configuration in OCI format, default is to format in JSON
_image-name_ name of image to retrieve configuration for
**--config** **--raw** output configuration in raw format
_image-name_ name of image to retrieve configuration for
**--authfile** _path_
Path of the authentication file. Default is ${XDG\_RUNTIME\_DIR}/containers/auth.json, which is set using `podman login`.
If the authorization state is not found there, $HOME/.docker/config.json is checked, which is set using `docker login`.
**--creds** _username[:password]_ for accessing the registry
**--cert-dir** _path_ Use certificates at _path_ (\*.crt, \*.cert, \*.key) to connect to the registry
**--tls-verify** _bool-value_ Require HTTPS and verify certificates when talking to container registries (defaults to true)
**--no-creds** _bool-value_ Access the registry anonymously.
## EXAMPLES
To review information for the image fedora from the docker.io registry:
```sh
$ skopeo inspect docker://docker.io/fedora
{
"Name": "docker.io/library/fedora",
"Digest": "sha256:a97914edb6ba15deb5c5acf87bd6bd5b6b0408c96f48a5cbd450b5b04509bb7d",
"RepoTags": [
"20",
"21",
"22",
"23",
"24",
"heisenbug",
"latest",
"rawhide"
],
"Created": "2016-06-20T19:33:43.220526898Z",
"DockerVersion": "1.10.3",
"Labels": {},
"Architecture": "amd64",
"Os": "linux",
"Layers": [
"sha256:7c91a140e7a1025c3bc3aace4c80c0d9933ac4ee24b8630a6b0b5d8b9ce6b9d4"
]
}
```
# SEE ALSO
skopeo(1), podman-login(1), docker-login(1)
## AUTHORS
Antonio Murdaca <runcom@redhat.com>, Miloslav Trmac <mitr@redhat.com>, Jhon Honce <jhonce@redhat.com>

docs/skopeo-manifest-digest.1.md

@@ -0,0 +1,26 @@
% skopeo-manifest-digest(1)
## NAME
skopeo\-manifest\-digest - Compute a manifest digest of manifest-file and write it to standard output.
## SYNOPSIS
**skopeo manifest-digest** _manifest-file_
## DESCRIPTION
Compute a manifest digest of _manifest-file_ and write it to standard output.
## EXAMPLES
```sh
$ skopeo manifest-digest manifest.json
sha256:a59906e33509d14c036c8678d687bd4eec81ed7c4b8ce907b888c607f6a1e0e6
```
## SEE ALSO
skopeo(1)
## AUTHORS
Antonio Murdaca <runcom@redhat.com>, Miloslav Trmac <mitr@redhat.com>, Jhon Honce <jhonce@redhat.com>

docs/skopeo-standalone-sign.1.md

@@ -0,0 +1,34 @@
% skopeo-standalone-sign(1)
## NAME
skopeo\-standalone-sign - Simple Sign an image
## SYNOPSIS
**skopeo standalone-sign** _manifest docker-reference key-fingerprint_ **--output**|**-o** _signature_
## DESCRIPTION
This is primarily a debugging tool, or useful for special cases,
and usually should not be a part of your normal operational workflow; use `skopeo copy --sign-by` instead to publish and sign an image in one step.
_manifest_ Path to a file containing the image manifest
_docker-reference_ A docker reference to identify the image with
_key-fingerprint_ Key identity to use for signing
**--output**|**-o** output file
## EXAMPLES
```sh
$ skopeo standalone-sign busybox-manifest.json registry.example.com/example/busybox 1D8230F6CDB6A06716E414C1DB72F2188BB46CC8 --output busybox.signature
$
```
## SEE ALSO
skopeo(1), skopeo-copy(1)
## AUTHORS
Antonio Murdaca <runcom@redhat.com>, Miloslav Trmac <mitr@redhat.com>, Jhon Honce <jhonce@redhat.com>

docs/skopeo-standalone-verify.1.md

@@ -0,0 +1,36 @@
% skopeo-standalone-verify(1)
## NAME
skopeo\-standalone\-verify - Verify an image signature
## SYNOPSIS
**skopeo standalone-verify** _manifest docker-reference key-fingerprint signature_
## DESCRIPTION
Verify a signature using local files, digest will be printed on success.
_manifest_ Path to a file containing the image manifest
_docker-reference_ A docker reference expected to identify the image in the signature
_key-fingerprint_ Expected identity of the signing key
_signature_ Path to signature file
**Note:** If you do use this, make sure that the image can not be changed at the source location between the times of its verification and use.
## EXAMPLES
```sh
$ skopeo standalone-verify busybox-manifest.json registry.example.com/example/busybox 1D8230F6CDB6A06716E414C1DB72F2188BB46CC8 busybox.signature
Signature verified, digest sha256:20bf21ed457b390829cdbeec8795a7bea1626991fda603e0d01b4e7f60427e55
```
## SEE ALSO
skopeo(1)
## AUTHORS
Antonio Murdaca <runcom@redhat.com>, Miloslav Trmac <mitr@redhat.com>, Jhon Honce <jhonce@redhat.com>

docs/skopeo.1.md

@@ -1,11 +1,13 @@
% SKOPEO(1) Skopeo Man Pages
% Jhon Honce
% August 2016
# NAME
## NAME
skopeo -- Command line utility used to interact with local and remote container images and container image registries
# SYNOPSIS
## SYNOPSIS
**skopeo** [_global options_] _command_ [_command options_]
# DESCRIPTION
## DESCRIPTION
`skopeo` is a command line utility providing various operations with container images and container image registries.
`skopeo` can copy container images between various containers image stores, converting them as necessary. For example you can use `skopeo` to copy container images from one container registry to another.
@@ -31,7 +33,7 @@ Most commands refer to container images, using a _transport_`:`_details_ format.
An existing local directory _path_ storing the manifest, layer tarballs and signatures as individual files. This is a non-standardized format, primarily useful for debugging or noninvasive container inspection.
**docker://**_docker-reference_
An image in a registry implementing the "Docker Registry HTTP API V2". By default, uses the authorization state in either `$XDG_RUNTIME_DIR/containers/auth.json`, which is set using `(kpod login)`. If the authorization state is not found there, `$HOME/.docker/config.json` is checked, which is set using `(docker login)`.
An image in a registry implementing the "Docker Registry HTTP API V2". By default, uses the authorization state in either `$XDG_RUNTIME_DIR/containers/auth.json`, which is set using `(podman login)`. If the authorization state is not found there, `$HOME/.docker/config.json` is checked, which is set using `(docker login)`.
**docker-archive:**_path_[**:**_docker-reference_]
An image is stored in the `docker save` formatted file. _docker-reference_ is only used when creating such a file, and it must not contain a digest.
@@ -45,7 +47,7 @@ Most commands refer to container images, using a _transport_`:`_details_ format.
**ostree:**_image_[**@**_/absolute/repo/path_]
An image in local OSTree repository. _/absolute/repo/path_ defaults to _/ostree/repo_.
# OPTIONS
## OPTIONS
**--debug** enable debug output
@@ -65,230 +67,29 @@ Most commands refer to container images, using a _transport_`:`_details_ format.
**--version**|**-v** print the version number
# COMMANDS
## COMMANDS
## skopeo copy
**skopeo copy** [**--sign-by=**_key-ID_] _source-image destination-image_
| Command | Description |
| ----------------------------------------- | ------------------------------------------------------------------------------ |
| [skopeo-copy(1)](skopeo-copy.1.md) | Copy an image (manifest, filesystem layers, signatures) from one location to another. |
| [skopeo-delete(1)](skopeo-delete.1.md) | Mark image-name for deletion. |
| [skopeo-inspect(1)](skopeo-inspect.1.md) | Return low-level information about image-name in a registry. |
| [skopeo-manifest-digest(1)](skopeo-manifest-digest.1.md) | Compute a manifest digest of manifest-file and write it to standard output.|
| [skopeo-standalone-sign(1)](skopeo-standalone-sign.1.md) | Sign an image. |
| [skopeo-standalone-verify(1)](skopeo-standalone-verify.1.md)| Verify an image. |
Copy an image (manifest, filesystem layers, signatures) from one location to another.
Uses the system's trust policy to validate images, rejects images not trusted by the policy.
_source-image_ use the "image name" format described above
_destination-image_ use the "image name" format described above
**--authfile** _path_
Path of the authentication file. Default is ${XDG_RUNTIME\_DIR}/containers/auth.json, which is set using `kpod login`.
If the authorization state is not found there, $HOME/.docker/config.json is checked, which is set using `docker login`.
**--format, -f** _manifest-type_ Manifest type (oci, v2s1, or v2s2) to use when saving image to directory using the 'dir:' transport (default is manifest type of source)
**--remove-signatures** do not copy signatures, if any, from _source-image_. Necessary when copying a signed image to a destination which does not support signatures.
**--sign-by=**_key-id_ add a signature using that key ID for an image name corresponding to _destination-image_
**--src-creds** _username[:password]_ for accessing the source registry
**--dest-compress** _bool-value_ Compress tarball image layers when saving to directory using the 'dir' transport. (default is same compression type as source)
**--dest-creds** _username[:password]_ for accessing the destination registry
**--src-cert-dir** _path_ Use certificates at _path_ (*.crt, *.cert, *.key) to connect to the source registry or daemon
**--src-tls-verify** _bool-value_ Require HTTPS and verify certificates when talking to container source registry or daemon (defaults to true)
**--dest-cert-dir** _path_ Use certificates at _path_ (*.crt, *.cert, *.key) to connect to the destination registry or daemon
**--dest-ostree-tmp-dir** _path_ Directory to use for OSTree temporary files.
**--dest-tls-verify** _bool-value_ Require HTTPS and verify certificates when talking to container destination registry or daemon (defaults to true)
**--src-daemon-host** _host_ Copy from docker daemon at _host_. If _host_ starts with `tcp://`, HTTPS is enabled by default. To use plain HTTP, use the form `http://` (default is `unix:///var/run/docker.sock`).
**--dest-daemon-host** _host_ Copy to docker daemon at _host_. If _host_ starts with `tcp://`, HTTPS is enabled by default. To use plain HTTP, use the form `http://` (default is `unix:///var/run/docker.sock`).
Existing signatures, if any, are preserved as well.
## skopeo delete
**skopeo delete** _image-name_
Mark _image-name_ for deletion. To release the allocated disk space, you must login to the container registry server and execute the container registry garbage collector. E.g.,
```
/usr/bin/registry garbage-collect /etc/docker-distribution/registry/config.yml
Note: sometimes the config.yml is stored in /etc/docker/registry/config.yml
If you are running the container registry inside of a container you would execute something like:
$ docker exec -it registry /usr/bin/registry garbage-collect /etc/docker-distribution/registry/config.yml
```
**--authfile** _path_
Path of the authentication file. Default is ${XDG_RUNTIME\_DIR}/containers/auth.json, which is set using `kpod login`.
If the authorization state is not found there, $HOME/.docker/config.json is checked, which is set using `docker login`.
**--creds** _username[:password]_ for accessing the registry
**--cert-dir** _path_ Use certificates at _path_ (*.crt, *.cert, *.key) to connect to the registry
**--tls-verify** _bool-value_ Require HTTPS and verify certificates when talking to container registries (defaults to true)
Additionally, the registry must allow deletions by setting `REGISTRY_STORAGE_DELETE_ENABLED=true` for the registry daemon.
## skopeo inspect
**skopeo inspect** [**--raw**] _image-name_
Return low-level information about _image-name_ in a registry
**--raw** output the raw manifest; by default the information is formatted as JSON
_image-name_ name of image to retrieve information about
**--authfile** _path_
Path of the authentication file. Default is ${XDG_RUNTIME\_DIR}/containers/auth.json, which is set using `podman login`.
If the authorization state is not found there, $HOME/.docker/config.json is checked, which is set using `docker login`.
**--creds** _username[:password]_ for accessing the registry
**--cert-dir** _path_ Use certificates at _path_ (*.crt, *.cert, *.key) to connect to the registry
**--tls-verify** _bool-value_ Require HTTPS and verify certificates when talking to container registries (defaults to true)
## skopeo manifest-digest
**skopeo manifest-digest** _manifest-file_
Compute a manifest digest of _manifest-file_ and write it to standard output.
## skopeo standalone-sign
**skopeo standalone-sign** _manifest docker-reference key-fingerprint_ **--output**|**-o** _signature_
This is primarily a debugging tool, or useful for special cases,
and usually should not be a part of your normal operational workflow; use `skopeo copy --sign-by` instead to publish and sign an image in one step.
_manifest_ Path to a file containing the image manifest
_docker-reference_ A docker reference to identify the image with
_key-fingerprint_ Key identity to use for signing
**--output**|**-o** output file
## skopeo standalone-verify
**skopeo standalone-verify** _manifest docker-reference key-fingerprint signature_
Verify a signature using local files; the digest will be printed on success.
_manifest_ Path to a file containing the image manifest
_docker-reference_ A docker reference expected to identify the image in the signature
_key-fingerprint_ Expected identity of the signing key
_signature_ Path to signature file
**Note:** If you do use this, make sure that the image cannot be changed at the source location between the time of its verification and the time of its use.
## skopeo help
Show help for `skopeo`.
# FILES
**/etc/containers/policy.json**
Default trust policy file, if **--policy** is not specified.
The policy format is documented in https://github.com/containers/image/blob/master/docs/containers-policy.json.5.md .
**/etc/containers/registries.d**
Default directory containing registry configuration, if **--registries.d** is not specified.
The contents of this directory are documented in https://github.com/containers/image/blob/master/docs/registries.d.md .
# EXAMPLES
## skopeo copy
To copy the layers of the docker.io busybox image to a local directory:
```sh
$ mkdir -p /var/lib/images/busybox
$ skopeo copy docker://busybox:latest dir:/var/lib/images/busybox
$ ls /var/lib/images/busybox/*
/var/lib/images/busybox/2b8fd9751c4c0f5dd266fcae00707e67a2545ef34f9a29354585f93dac906749.tar
/var/lib/images/busybox/manifest.json
/var/lib/images/busybox/8ddc19f16526912237dd8af81971d5e4dd0587907234be2b83e249518d5b673f.tar
```
To copy and sign an image:
```sh
$ skopeo copy --sign-by dev@example.com atomic:example/busybox:streaming atomic:example/busybox:gold
```
## skopeo delete
Mark image example/pause for deletion from the registry.example.com registry:
```sh
$ skopeo delete docker://registry.example.com/example/pause:latest
```
See above for additional details on using the command **delete**.
## skopeo inspect
To review information for the image fedora from the docker.io registry:
```sh
$ skopeo inspect docker://docker.io/fedora
{
"Name": "docker.io/library/fedora",
"Digest": "sha256:a97914edb6ba15deb5c5acf87bd6bd5b6b0408c96f48a5cbd450b5b04509bb7d",
"RepoTags": [
"20",
"21",
"22",
"23",
"24",
"heisenbug",
"latest",
"rawhide"
],
"Created": "2016-06-20T19:33:43.220526898Z",
"DockerVersion": "1.10.3",
"Labels": {},
"Architecture": "amd64",
"Os": "linux",
"Layers": [
"sha256:7c91a140e7a1025c3bc3aace4c80c0d9933ac4ee24b8630a6b0b5d8b9ce6b9d4"
]
}
```
## skopeo layers
Another method to retrieve the layers for the busybox image from the docker.io registry:
```sh
$ skopeo layers docker://busybox
$ ls layers-500650331/
8ddc19f16526912237dd8af81971d5e4dd0587907234be2b83e249518d5b673f.tar
manifest.json
a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4.tar
```
## skopeo manifest-digest
```sh
$ skopeo manifest-digest manifest.json
sha256:a59906e33509d14c036c8678d687bd4eec81ed7c4b8ce907b888c607f6a1e0e6
```
## skopeo standalone-sign
```sh
$ skopeo standalone-sign busybox-manifest.json registry.example.com/example/busybox 1D8230F6CDB6A06716E414C1DB72F2188BB46CC8 --output busybox.signature
$
```
See `skopeo copy` above for the preferred method of signing images.
## skopeo standalone-verify
```sh
$ skopeo standalone-verify busybox-manifest.json registry.example.com/example/busybox 1D8230F6CDB6A06716E414C1DB72F2188BB46CC8 busybox.signature
Signature verified, digest sha256:20bf21ed457b390829cdbeec8795a7bea1626991fda603e0d01b4e7f60427e55
```
# SEE ALSO
podman-login(1), docker-login(1)
# AUTHORS
Antonio Murdaca <runcom@redhat.com>, Miloslav Trmac <mitr@redhat.com>, Jhon Honce <jhonce@redhat.com>

hack/btrfs_installed_tag.sh Executable file

@@ -0,0 +1,7 @@
#!/bin/bash
cc -E - > /dev/null 2> /dev/null << EOF
#include <btrfs/ioctl.h>
EOF
if test $? -ne 0 ; then
echo exclude_graphdriver_btrfs
fi

hack/ostree_tag.sh Executable file

@@ -0,0 +1,6 @@
#!/bin/bash
if pkg-config ostree-1 2> /dev/null ; then
echo ostree
else
echo containers_image_ostree_stub
fi

hack/tree_status.sh Executable file

@@ -0,0 +1,13 @@
#!/bin/bash
set -e
STATUS=$(git status --porcelain)
if [[ -z $STATUS ]]
then
echo "tree is clean"
else
echo "tree is dirty, please commit all changes and sync the vendor.conf"
echo ""
echo "$STATUS"
exit 1
fi


@@ -662,3 +662,37 @@ func verifyManifestMIMEType(c *check.C, dir string, expectedMIMEType string) {
mimeType := manifest.GuessMIMEType(manifestBlob)
c.Assert(mimeType, check.Equals, expectedMIMEType)
}
const regConfFixture = "./fixtures/registries.conf"
func (s *SkopeoSuite) TestSuccessCopySrcWithMirror(c *check.C) {
dir, err := ioutil.TempDir("", "copy-mirror")
c.Assert(err, check.IsNil)
assertSkopeoSucceeds(c, "", "--registries-conf="+regConfFixture, "copy",
"docker://mirror.invalid/busybox", "dir:"+dir)
}
func (s *SkopeoSuite) TestFailureCopySrcWithMirrorsUnavailable(c *check.C) {
dir, err := ioutil.TempDir("", "copy-mirror")
c.Assert(err, check.IsNil)
assertSkopeoFails(c, ".*no such host.*", "--registries-conf="+regConfFixture, "copy",
"docker://invalid.invalid/busybox", "dir:"+dir)
}
func (s *SkopeoSuite) TestSuccessCopySrcWithMirrorAndPrefix(c *check.C) {
dir, err := ioutil.TempDir("", "copy-mirror")
c.Assert(err, check.IsNil)
assertSkopeoSucceeds(c, "", "--registries-conf="+regConfFixture, "copy",
"docker://gcr.invalid/foo/bar/busybox", "dir:"+dir)
}
func (s *SkopeoSuite) TestFailureCopySrcWithMirrorAndPrefixUnavailable(c *check.C) {
dir, err := ioutil.TempDir("", "copy-mirror")
c.Assert(err, check.IsNil)
assertSkopeoFails(c, ".*no such host.*", "--registries-conf="+regConfFixture, "copy",
"docker://gcr.invalid/wrong/prefix/busybox", "dir:"+dir)
}


@@ -0,0 +1,28 @@
[[registry]]
location = "mirror.invalid"
mirror = [
{ location = "mirror-0.invalid" },
{ location = "mirror-1.invalid" },
{ location = "gcr.io/google-containers" },
]
# This entry is currently unused and exists only to ensure
# that the mirror.invalid/busybox is not rewritten twice.
[[registry]]
location = "gcr.io"
prefix = "gcr.io/google-containers"
[[registry]]
location = "invalid.invalid"
mirror = [
{ location = "invalid-mirror-0.invalid" },
{ location = "invalid-mirror-1.invalid" },
]
[[registry]]
location = "gcr.invalid"
prefix = "gcr.invalid/foo/bar"
mirror = [
{ location = "wrong-mirror-0.invalid" },
{ location = "gcr.io/google-containers" },
]


@@ -1,21 +1,24 @@
github.com/urfave/cli v1.20.0
github.com/kr/pretty v0.1.0
github.com/kr/text v0.1.0
-github.com/containers/image 50e5e55e46a391df8fce1291b2337f1af879b822
+github.com/containers/image 2c0349c99af7d90694b3faa0e9bde404d407b145
github.com/containers/buildah 810efa340ab43753034e2ed08ec290e4abab7e72
github.com/vbauerster/mpb v3.3.4
github.com/mattn/go-isatty v0.0.4
github.com/VividCortex/ewma v1.1.1
golang.org/x/sync 42b317875d0fa942474b76e1b46a6060d720ae6e
github.com/opencontainers/go-digest c9281466c8b2f606084ac71339773efd177436e7
gopkg.in/cheggaaa/pb.v1 v1.0.27
github.com/mattn/go-runewidth 14207d285c6c197daabb5c9793d63e7af9ab2d50
-github.com/containers/storage master
+github.com/containers/storage v1.12.3
github.com/sirupsen/logrus v1.0.0
github.com/go-check/check v1
github.com/stretchr/testify v1.1.3
-github.com/davecgh/go-spew master
-github.com/pmezard/go-difflib master
-github.com/pkg/errors master
-golang.org/x/crypto master
+github.com/davecgh/go-spew v1.1.1
+github.com/pmezard/go-difflib 5d4384ee4fb2527b0a1256a821ebfc92f91efefc
+github.com/pkg/errors v0.8.1
+golang.org/x/crypto a4c6cb3142f211c99e4bf4cd769535b29a9b616f
github.com/ulikunitz/xz v0.5.4
github.com/boltdb/bolt master
github.com/etcd-io/bbolt v1.3.2
# docker deps from https://github.com/docker/docker/blob/v1.11.2/hack/vendor.sh
github.com/docker/docker da99009bbb1165d1ac5688b5c81d2f589d418341
github.com/docker/go-connections 7beb39f0b969b075d1325fecb092faf27fd357b6
@@ -24,49 +27,41 @@ github.com/vbatts/tar-split v0.10.2
github.com/gorilla/context 14f550f51a
github.com/gorilla/mux e444e69cbd
github.com/docker/go-units 8a7beacffa3009a9ac66bad506b18ffdd110cf97
-golang.org/x/net master
+golang.org/x/net 45ffb0cd1ba084b73e26dee67e667e1be5acce83
github.com/gogo/protobuf fcdc5011193ff531a548e9b0301828d5a5b97fd8
# end docker deps
-golang.org/x/text master
-github.com/docker/distribution master
+golang.org/x/text e6919f6577db79269a6443b9dc46d18f2238fb5d
+github.com/docker/distribution 5f6282db7d65e6d72ad7c2cc66310724a57be716
# docker/distributions dependencies
github.com/docker/go-metrics 399ea8c73916000c64c2c76e8da00ca82f8387ab
github.com/prometheus/client_golang c332b6f63c0658a65eca15c0e5247ded801cf564
github.com/prometheus/client_model 99fa1f4be8e564e8a6b613da7fa6f46c9edafc6c
github.com/prometheus/common 89604d197083d4781071d3c65855d24ecfb0a563
github.com/prometheus/procfs cb4147076ac75738c9a7d279075a253c0cc5acbd
github.com/beorn7/perks 4c0e84591b9aa9e6dcfdf3e020114cd81f89d5f9
github.com/matttproud/golang_protobuf_extensions c12348ce28de40eed0136aa2b644d0ee0650e56c
github.com/golang/protobuf 8d92cf5fc15a4382f8964b08e1f42a75c0591aa3
# end of docker/distribution dependencies
-github.com/docker/libtrust master
+github.com/docker/libtrust aabc10ec26b754e797f9028f4589c5b7bd90dc20
github.com/docker/docker-credential-helpers d68f9aeca33f5fd3f08eeae5e9d175edf4e731d1
-github.com/opencontainers/runc master
+github.com/opencontainers/runc v1.0.0-rc6
github.com/opencontainers/image-spec 7b1e489870acb042978a3935d2fb76f8a79aff81
# -- start OCI image validation requirements.
github.com/opencontainers/runtime-spec v1.0.0
github.com/opencontainers/image-tools 6d941547fa1df31900990b3fb47ec2468c9c6469
-github.com/xeipuuv/gojsonschema master
-github.com/xeipuuv/gojsonreference master
-github.com/xeipuuv/gojsonpointer master
-go4.org master https://github.com/camlistore/go4
+github.com/xeipuuv/gojsonpointer 4e3ac2762d5f479393488629ee9370b50873b3a6
+github.com/xeipuuv/gojsonreference bd5ef7bd5415a7ac448318e64f11a24cd21e594b
+github.com/xeipuuv/gojsonschema v1.1.0
+go4.org ce4c26f7be8eb27dc77f996b08d286dd80bc4a01 https://github.com/camlistore/go4
github.com/ostreedev/ostree-go 56f3a639dbc0f2f5051c6d52dade28a882ba78ce
# -- end OCI image validation requirements
-github.com/mtrmac/gpgme master
+github.com/mtrmac/gpgme b2432428689ca58c2b8e8dea9449d3295cf96fc9
# openshift/origin' k8s dependencies as of OpenShift v1.1.5
-k8s.io/client-go master
+k8s.io/client-go kubernetes-1.10.13-beta.0
github.com/ghodss/yaml 73d445a93680fa1a78ae23a5839bad48f32ba1ee
gopkg.in/yaml.v2 d466437aa4adc35830964cffc5b5f262c63ddcb4
github.com/imdario/mergo 6633656539c1639d9d78127b7d47c622b5d7b6dc
# containers/storage's dependencies that aren't already being pulled in
github.com/mistifyio/go-zfs 22c9b32c84eb0d0c6f4043b6e90fc94073de92fa
github.com/pborman/uuid v1.0
-github.com/opencontainers/selinux master
+github.com/opencontainers/selinux v1.1
golang.org/x/sys 43e60d72a8e2bd92ee98319ba9a384a0e9837c08
github.com/tchap/go-patricia v2.2.6
-github.com/BurntSushi/toml master
+github.com/BurntSushi/toml v0.3.1
github.com/pquerna/ffjson d49c2bc1aa135aad0c6f4fc2056623ec78f5d5ac
-github.com/syndtr/gocapability master
+github.com/syndtr/gocapability d98352740cb2c55f81556b63d4a1ec64c5a319c2
github.com/klauspost/pgzip v1.2.1
github.com/klauspost/compress v1.4.1
github.com/klauspost/cpuid v1.2.0


@@ -1,6 +1,6 @@
-The MIT License (MIT)
+The MIT License
-Copyright (c) 2016 Yasuhiro Matsumoto
+Copyright (c) 2013 VividCortex
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
@@ -9,13 +9,13 @@ to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
-The above copyright notice and this permission notice shall be included in all
-copies or substantial portions of the Software.
+The above copyright notice and this permission notice shall be included in
+all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
-OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
-SOFTWARE.
+OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
+THE SOFTWARE.

vendor/github.com/VividCortex/ewma/README.md generated vendored Normal file

@@ -0,0 +1,140 @@
# EWMA [![GoDoc](https://godoc.org/github.com/VividCortex/ewma?status.svg)](https://godoc.org/github.com/VividCortex/ewma) ![Build Status](https://circleci.com/gh/VividCortex/moving_average.png?circle-token=1459fa37f9ca0e50cef05d1963146d96d47ea523)
This repo provides Exponentially Weighted Moving Average algorithms, or EWMAs for short, [based on our
Quantifying Abnormal Behavior talk](https://vividcortex.com/blog/2013/07/23/a-fast-go-library-for-exponential-moving-averages/).
### Exponentially Weighted Moving Average
An exponentially weighted moving average is a way to continuously compute a type of
average for a series of numbers, as the numbers arrive. After a value in the series is
added to the average, its weight in the average decreases exponentially over time. This
biases the average towards more recent data. EWMAs are useful for several reasons, chiefly
their inexpensive computational and memory cost, as well as the fact that they represent
the recent central tendency of the series of values.
The EWMA algorithm requires a decay factor, alpha. The larger the alpha, the more the average
is biased towards recent history. The alpha must be between 0 and 1, and is typically
a fairly small number, such as 0.04. We will discuss the choice of alpha later.
The algorithm works thus, in pseudocode:
1. Multiply the next number in the series by alpha.
2. Multiply the current value of the average by 1 minus alpha.
3. Add the result of steps 1 and 2, and store it as the new current value of the average.
4. Repeat for each number in the series.
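For illustration only (not part of this package), here is a minimal Go sketch of the pseudocode above, initializing the average with the first value in the series:
```go
package main

import "fmt"

// ewmaStep applies one update of the pseudocode above:
// newAvg = (next * alpha) + (current * (1 - alpha)).
func ewmaStep(current, next, alpha float64) float64 {
	return next*alpha + current*(1-alpha)
}

func main() {
	series := []float64{300, 280, 310, 295, 305}
	alpha := 0.5 // illustrative; choosing alpha is discussed below

	avg := series[0] // one of the initialization strategies discussed below
	for _, v := range series[1:] {
		avg = ewmaStep(avg, v, alpha)
	}
	fmt.Printf("EWMA: %.4f\n", avg) // EWMA: 301.2500
}
```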
There are special-case behaviors for how to initialize the current value, and these vary
between implementations. One approach is to start with the first value in the series;
another is to average the first 10 or so values in the series using an arithmetic average,
and then begin the incremental updating of the average. Each method has pros and cons.
It may help to look at it pictorially. Suppose the series has five numbers, and we choose
alpha to be 0.50 for simplicity. Here's the series, with numbers in the neighborhood of 300.
![Data Series](https://user-images.githubusercontent.com/279875/28242350-463289a2-6977-11e7-88ca-fd778ccef1f0.png)
Now let's take the moving average of those numbers. First we set the average to the value
of the first number.
![EWMA Step 1](https://user-images.githubusercontent.com/279875/28242353-464c96bc-6977-11e7-9981-dc4e0789c7ba.png)
Next we multiply the next number by alpha, multiply the current value by 1-alpha, and add
them to generate a new value.
![EWMA Step 2](https://user-images.githubusercontent.com/279875/28242351-464abefa-6977-11e7-95d0-43900f29bef2.png)
This continues until we are done.
![EWMA Step N](https://user-images.githubusercontent.com/279875/28242352-464c58f0-6977-11e7-8cd0-e01e4efaac7f.png)
Notice how each of the values in the series decays by half each time a new value
is added, and the top of the bars in the lower portion of the image represents the
size of the moving average. It is a smoothed, or low-pass, average of the original
series.
For further reading, see [Exponentially weighted moving average](http://en.wikipedia.org/wiki/Moving_average#Exponential_moving_average) on wikipedia.
### Choosing Alpha
Consider a fixed-size sliding-window moving average (not an exponentially weighted moving average)
that averages over the previous N samples. What is the average age of each sample? It is N/2.
Now suppose that you wish to construct an EWMA whose samples have the same average age. The formula
to compute the alpha required for this is: alpha = 2/(N+1). The proof is in the book
"Production and Operations Analysis" by Steven Nahmias.
So, for example, if you have a time-series with samples once per second, and you want to get the
moving average over the previous minute, you should use an alpha of .032786885. This, by the way,
is the constant alpha used for this repository's SimpleEWMA.
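A one-line helper makes the relationship concrete (a sketch, not part of the package):
```go
package main

import "fmt"

// alphaForWindow returns the EWMA decay factor whose samples have the same
// average age as a sliding window over the previous n samples: alpha = 2/(N+1).
func alphaForWindow(n float64) float64 {
	return 2 / (n + 1)
}

func main() {
	// One sample per second, averaged over the previous minute:
	fmt.Println(alphaForWindow(60)) // 0.03278688524590164
}
```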
### Implementations
This repository contains two implementations of the EWMA algorithm, with different properties.
The implementations all conform to the MovingAverage interface, and the constructor returns
that type.
Current implementations assume an implicit time interval of 1.0 between every sample added.
That is, the passage of time is treated as though it's the same as the arrival of samples.
If you need time-based decay when samples are not arriving precisely at set intervals, then
this package will not support your needs at present.
#### SimpleEWMA
A SimpleEWMA is designed for low CPU and memory consumption. It **will** have different behavior than the VariableEWMA
for multiple reasons. It has no warm-up period and it uses a constant
decay. These properties let it use less memory. It will also behave
differently when it's equal to zero, which is assumed to mean
uninitialized, so if a value is likely to actually become zero over time,
then any non-zero value will cause a sharp jump instead of a small change.
#### VariableEWMA
Unlike SimpleEWMA, this supports a custom age which must be stored, and thus uses more memory.
It also has a "warmup" time when you start adding values to it. It will report a value of 0.0
until you have added the required number of samples to it. It uses some memory to store the
number of samples added to it. As a result it uses a little over twice the memory of SimpleEWMA.
## Usage
### API Documentation
View the GoDoc generated documentation [here](http://godoc.org/github.com/VividCortex/ewma).
```go
package main
import "github.com/VividCortex/ewma"
func main() {
samples := [100]float64{
4599, 5711, 4746, 4621, 5037, 4218, 4925, 4281, 5207, 5203, 5594, 5149,
}
e := ewma.NewMovingAverage() //=> Returns a SimpleEWMA if called without params
a := ewma.NewMovingAverage(5) //=> returns a VariableEWMA with a decay of 2 / (5 + 1)
for _, f := range samples {
e.Add(f)
a.Add(f)
}
e.Value() //=> 13.577404704631077
a.Value() //=> 1.5806140565521463e-12
}
```
## Contributing
We only accept pull requests for minor fixes or improvements. This includes:
* Small bug fixes
* Typos
* Documentation or comments
Please open issues to discuss new features. Pull requests for new features will be rejected,
so we recommend forking the repository and making changes in your fork for your use case.
## License
This repository is Copyright (c) 2013 VividCortex, Inc. All rights reserved.
It is licensed under the MIT license. Please see the LICENSE file for applicable license terms.

vendor/github.com/VividCortex/ewma/ewma.go generated vendored Normal file

@@ -0,0 +1,126 @@
// Package ewma implements exponentially weighted moving averages.
package ewma
// Copyright (c) 2013 VividCortex, Inc. All rights reserved.
// Please see the LICENSE file for applicable license terms.
const (
// By default, we average over a one-minute period, which means the average
// age of the metrics in the period is 30 seconds.
AVG_METRIC_AGE float64 = 30.0
// The formula for computing the decay factor from the average age comes
// from "Production and Operations Analysis" by Steven Nahmias.
DECAY float64 = 2 / (float64(AVG_METRIC_AGE) + 1)
// For best results, the moving average should not be initialized to the
// samples it sees immediately. The book "Production and Operations
// Analysis" by Steven Nahmias suggests initializing the moving average to
// the mean of the first 10 samples. Until the VariableEwma has seen this
// many samples, it is not "ready" to be queried for the value of the
// moving average. This adds some memory cost.
WARMUP_SAMPLES uint8 = 10
)
// MovingAverage is the interface that computes a moving average over a time-
// series stream of numbers. The average may be over a window or exponentially
// decaying.
type MovingAverage interface {
Add(float64)
Value() float64
Set(float64)
}
// NewMovingAverage constructs a MovingAverage that computes an average with the
// desired characteristics in the moving window or exponential decay. If no
// age is given, it constructs a default exponentially weighted implementation
// that consumes minimal memory. The age is related to the decay factor alpha
// by the formula given for the DECAY constant. It signifies the average age
// of the samples as time goes to infinity.
func NewMovingAverage(age ...float64) MovingAverage {
if len(age) == 0 || age[0] == AVG_METRIC_AGE {
return new(SimpleEWMA)
}
return &VariableEWMA{
decay: 2 / (age[0] + 1),
}
}
// A SimpleEWMA represents the exponentially weighted moving average of a
// series of numbers. It WILL have different behavior than the VariableEWMA
// for multiple reasons. It has no warm-up period and it uses a constant
// decay. These properties let it use less memory. It will also behave
// differently when it's equal to zero, which is assumed to mean
// uninitialized, so if a value is likely to actually become zero over time,
// then any non-zero value will cause a sharp jump instead of a small change.
// However, note that this takes a long time, and the value may just
// decay to a stable value that's close to zero, but which won't be mistaken
// for uninitialized. See http://play.golang.org/p/litxBDr_RC for example.
type SimpleEWMA struct {
// The current value of the average. After adding with Add(), this is
// updated to reflect the average of all values seen thus far.
value float64
}
// Add adds a value to the series and updates the moving average.
func (e *SimpleEWMA) Add(value float64) {
if e.value == 0 { // this is a proxy for "uninitialized"
e.value = value
} else {
e.value = (value * DECAY) + (e.value * (1 - DECAY))
}
}
// Value returns the current value of the moving average.
func (e *SimpleEWMA) Value() float64 {
return e.value
}
// Set sets the EWMA's value.
func (e *SimpleEWMA) Set(value float64) {
e.value = value
}
// VariableEWMA represents the exponentially weighted moving average of a series of
// numbers. Unlike SimpleEWMA, it supports a custom age, and thus uses more memory.
type VariableEWMA struct {
// The multiplier factor by which the previous samples decay.
decay float64
// The current value of the average.
value float64
// The number of samples added to this instance.
count uint8
}
// Add adds a value to the series and updates the moving average.
func (e *VariableEWMA) Add(value float64) {
switch {
case e.count < WARMUP_SAMPLES:
e.count++
e.value += value
case e.count == WARMUP_SAMPLES:
e.count++
e.value = e.value / float64(WARMUP_SAMPLES)
e.value = (value * e.decay) + (e.value * (1 - e.decay))
default:
e.value = (value * e.decay) + (e.value * (1 - e.decay))
}
}
// Value returns the current value of the average, or 0.0 if the series hasn't
// warmed up yet.
func (e *VariableEWMA) Value() float64 {
if e.count <= WARMUP_SAMPLES {
return 0.0
}
return e.value
}
// Set sets the EWMA's value.
func (e *VariableEWMA) Set(value float64) {
e.value = value
if e.count <= WARMUP_SAMPLES {
e.count = WARMUP_SAMPLES + 1
}
}
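As a usage note for the file above (a hedged sketch, not part of the vendored code): SimpleEWMA's Add treats a current value of exactly zero as "uninitialized", so the first sample is adopted verbatim and later samples decay by the constant DECAY factor:
```go
package main

import (
	"fmt"

	"github.com/VividCortex/ewma"
)

func main() {
	var e ewma.SimpleEWMA  // the zero value is ready to use
	e.Add(100)             // first Add: the value jumps straight to 100
	e.Add(0)               // later samples decay by DECAY = 2/31
	fmt.Println(e.Value()) // prints roughly 93.55, not 50
}
```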


@@ -1,20 +0,0 @@
Copyright (C) 2013 Blake Mizerany
Permission is hereby granted, free of charge, to any person obtaining
a copy of this software and associated documentation files (the
"Software"), to deal in the Software without restriction, including
without limitation the rights to use, copy, modify, merge, publish,
distribute, sublicense, and/or sell copies of the Software, and to
permit persons to whom the Software is furnished to do so, subject to
the following conditions:
The above copyright notice and this permission notice shall be
included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.


@@ -1,292 +0,0 @@
// Package quantile computes approximate quantiles over an unbounded data
// stream within low memory and CPU bounds.
//
// A small amount of accuracy is traded to achieve the above properties.
//
// Multiple streams can be merged before calling Query to generate a single set
// of results. This is meaningful when the streams represent the same type of
// data. See Merge and Samples.
//
// For more detailed information about the algorithm used, see:
//
// Effective Computation of Biased Quantiles over Data Streams
//
// http://www.cs.rutgers.edu/~muthu/bquant.pdf
package quantile
import (
"math"
"sort"
)
// Sample holds an observed value and meta information for compression. JSON
// tags have been added for convenience.
type Sample struct {
Value float64 `json:",string"`
Width float64 `json:",string"`
Delta float64 `json:",string"`
}
// Samples represents a slice of samples. It implements sort.Interface.
type Samples []Sample
func (a Samples) Len() int { return len(a) }
func (a Samples) Less(i, j int) bool { return a[i].Value < a[j].Value }
func (a Samples) Swap(i, j int) { a[i], a[j] = a[j], a[i] }
type invariant func(s *stream, r float64) float64
// NewLowBiased returns an initialized Stream for low-biased quantiles
// (e.g. 0.01, 0.1, 0.5) where the needed quantiles are not known a priori, but
// error guarantees can still be given even for the lower ranks of the data
// distribution.
//
// The provided epsilon is a relative error, i.e. the true quantile of a value
// returned by a query is guaranteed to be within (1±Epsilon)*Quantile.
//
// See http://www.cs.rutgers.edu/~muthu/bquant.pdf for time, space, and error
// properties.
func NewLowBiased(epsilon float64) *Stream {
ƒ := func(s *stream, r float64) float64 {
return 2 * epsilon * r
}
return newStream(ƒ)
}
// NewHighBiased returns an initialized Stream for high-biased quantiles
// (e.g. 0.01, 0.1, 0.5) where the needed quantiles are not known a priori, but
// error guarantees can still be given even for the higher ranks of the data
// distribution.
//
// The provided epsilon is a relative error, i.e. the true quantile of a value
// returned by a query is guaranteed to be within 1-(1±Epsilon)*(1-Quantile).
//
// See http://www.cs.rutgers.edu/~muthu/bquant.pdf for time, space, and error
// properties.
func NewHighBiased(epsilon float64) *Stream {
ƒ := func(s *stream, r float64) float64 {
return 2 * epsilon * (s.n - r)
}
return newStream(ƒ)
}
// NewTargeted returns an initialized Stream concerned with a particular set of
// quantile values that are supplied a priori. Knowing these a priori reduces
// space and computation time. The targets map maps the desired quantiles to
// their absolute errors, i.e. the true quantile of a value returned by a query
// is guaranteed to be within (Quantile±Epsilon).
//
// See http://www.cs.rutgers.edu/~muthu/bquant.pdf for time, space, and error properties.
func NewTargeted(targets map[float64]float64) *Stream {
ƒ := func(s *stream, r float64) float64 {
var m = math.MaxFloat64
var f float64
for quantile, epsilon := range targets {
if quantile*s.n <= r {
f = (2 * epsilon * r) / quantile
} else {
f = (2 * epsilon * (s.n - r)) / (1 - quantile)
}
if f < m {
m = f
}
}
return m
}
return newStream(ƒ)
}
// Stream computes quantiles for a stream of float64s. It is not thread-safe by
// design. Take care when using across multiple goroutines.
type Stream struct {
*stream
b Samples
sorted bool
}
func newStream(ƒ invariant) *Stream {
x := &stream{ƒ: ƒ}
return &Stream{x, make(Samples, 0, 500), true}
}
// Insert inserts v into the stream.
func (s *Stream) Insert(v float64) {
s.insert(Sample{Value: v, Width: 1})
}
func (s *Stream) insert(sample Sample) {
s.b = append(s.b, sample)
s.sorted = false
if len(s.b) == cap(s.b) {
s.flush()
}
}
// Query returns the computed qth percentiles value. If s was created with
// NewTargeted, and q is not in the set of quantiles provided a priori, Query
// will return an unspecified result.
func (s *Stream) Query(q float64) float64 {
if !s.flushed() {
// Fast path when there hasn't been enough data for a flush;
// this also yields better accuracy for small sets of data.
l := len(s.b)
if l == 0 {
return 0
}
i := int(math.Ceil(float64(l) * q))
if i > 0 {
i -= 1
}
s.maybeSort()
return s.b[i].Value
}
s.flush()
return s.stream.query(q)
}
// Merge merges samples into the underlying streams samples. This is handy when
// merging multiple streams from separate threads, database shards, etc.
//
// ATTENTION: This method is broken and does not yield correct results. The
// underlying algorithm is not capable of merging streams correctly.
func (s *Stream) Merge(samples Samples) {
sort.Sort(samples)
s.stream.merge(samples)
}
// Reset reinitializes and clears the list reusing the samples buffer memory.
func (s *Stream) Reset() {
s.stream.reset()
s.b = s.b[:0]
}
// Samples returns stream samples held by s.
func (s *Stream) Samples() Samples {
if !s.flushed() {
return s.b
}
s.flush()
return s.stream.samples()
}
// Count returns the total number of samples observed in the stream
// since initialization.
func (s *Stream) Count() int {
return len(s.b) + s.stream.count()
}
func (s *Stream) flush() {
s.maybeSort()
s.stream.merge(s.b)
s.b = s.b[:0]
}
func (s *Stream) maybeSort() {
if !s.sorted {
s.sorted = true
sort.Sort(s.b)
}
}
func (s *Stream) flushed() bool {
return len(s.stream.l) > 0
}
type stream struct {
n float64
l []Sample
ƒ invariant
}
func (s *stream) reset() {
s.l = s.l[:0]
s.n = 0
}
func (s *stream) insert(v float64) {
s.merge(Samples{{v, 1, 0}})
}
func (s *stream) merge(samples Samples) {
// TODO(beorn7): This tries to merge not only individual samples, but
// whole summaries. The paper doesn't mention merging summaries at
// all. Unittests show that the merging is inaccurate. Find out how to
// do merges properly.
var r float64
i := 0
for _, sample := range samples {
for ; i < len(s.l); i++ {
c := s.l[i]
if c.Value > sample.Value {
// Insert at position i.
s.l = append(s.l, Sample{})
copy(s.l[i+1:], s.l[i:])
s.l[i] = Sample{
sample.Value,
sample.Width,
math.Max(sample.Delta, math.Floor(s.ƒ(s, r))-1),
// TODO(beorn7): How to calculate delta correctly?
}
i++
goto inserted
}
r += c.Width
}
s.l = append(s.l, Sample{sample.Value, sample.Width, 0})
i++
inserted:
s.n += sample.Width
r += sample.Width
}
s.compress()
}
func (s *stream) count() int {
return int(s.n)
}
func (s *stream) query(q float64) float64 {
t := math.Ceil(q * s.n)
t += math.Ceil(s.ƒ(s, t) / 2)
p := s.l[0]
var r float64
for _, c := range s.l[1:] {
r += p.Width
if r+c.Width+c.Delta > t {
return p.Value
}
p = c
}
return p.Value
}
func (s *stream) compress() {
if len(s.l) < 2 {
return
}
x := s.l[len(s.l)-1]
xi := len(s.l) - 1
r := s.n - 1 - x.Width
for i := len(s.l) - 2; i >= 0; i-- {
c := s.l[i]
if c.Width+x.Width+x.Delta <= s.ƒ(s, r) {
x.Width += c.Width
s.l[xi] = x
// Remove element at i.
copy(s.l[i:], s.l[i+1:])
s.l = s.l[:len(s.l)-1]
xi -= 1
} else {
x = c
xi = i
}
r -= c.Width
}
}
func (s *stream) samples() Samples {
samples := make(Samples, len(s.l))
copy(samples, s.l)
return samples
}
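For reference, the package removed here was typically driven through NewTargeted, Insert, and Query, as the comments above describe; a minimal sketch, assuming the conventional import path github.com/beorn7/perks/quantile:
```go
package main

import (
	"fmt"

	"github.com/beorn7/perks/quantile"
)

func main() {
	// Track the median and the 99th percentile with the stated
	// absolute errors, per the NewTargeted documentation above.
	s := quantile.NewTargeted(map[float64]float64{0.50: 0.005, 0.99: 0.001})
	for i := 1; i <= 1000; i++ {
		s.Insert(float64(i))
	}
	fmt.Println("p50:", s.Query(0.50))
	fmt.Println("p99:", s.Query(0.99))
}
```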


@@ -1,252 +0,0 @@
package bolt
import (
"fmt"
"sort"
"unsafe"
)
// freelist represents a list of all pages that are available for allocation.
// It also tracks pages that have been freed but are still in use by open transactions.
type freelist struct {
ids []pgid // all free and available free page ids.
pending map[txid][]pgid // mapping of soon-to-be free page ids by tx.
cache map[pgid]bool // fast lookup of all free and pending page ids.
}
// newFreelist returns an empty, initialized freelist.
func newFreelist() *freelist {
return &freelist{
pending: make(map[txid][]pgid),
cache: make(map[pgid]bool),
}
}
// size returns the size of the page after serialization.
func (f *freelist) size() int {
n := f.count()
if n >= 0xFFFF {
// The first element will be used to store the count. See freelist.write.
n++
}
return pageHeaderSize + (int(unsafe.Sizeof(pgid(0))) * n)
}
// count returns count of pages on the freelist
func (f *freelist) count() int {
return f.free_count() + f.pending_count()
}
// free_count returns count of free pages
func (f *freelist) free_count() int {
return len(f.ids)
}
// pending_count returns count of pending pages
func (f *freelist) pending_count() int {
var count int
for _, list := range f.pending {
count += len(list)
}
return count
}
// copyall copies into dst a list of all free ids and all pending ids in one sorted list.
// f.count returns the minimum length required for dst.
func (f *freelist) copyall(dst []pgid) {
m := make(pgids, 0, f.pending_count())
for _, list := range f.pending {
m = append(m, list...)
}
sort.Sort(m)
mergepgids(dst, f.ids, m)
}
// allocate returns the starting page id of a contiguous list of pages of a given size.
// If a contiguous block cannot be found then 0 is returned.
func (f *freelist) allocate(n int) pgid {
if len(f.ids) == 0 {
return 0
}
var initial, previd pgid
for i, id := range f.ids {
if id <= 1 {
panic(fmt.Sprintf("invalid page allocation: %d", id))
}
// Reset initial page if this is not contiguous.
if previd == 0 || id-previd != 1 {
initial = id
}
// If we found a contiguous block then remove it and return it.
if (id-initial)+1 == pgid(n) {
// If we're allocating off the beginning then take the fast path
// and just adjust the existing slice. This will use extra memory
// temporarily but the append() in free() will realloc the slice
// as is necessary.
if (i + 1) == n {
f.ids = f.ids[i+1:]
} else {
copy(f.ids[i-n+1:], f.ids[i+1:])
f.ids = f.ids[:len(f.ids)-n]
}
// Remove from the free cache.
for i := pgid(0); i < pgid(n); i++ {
delete(f.cache, initial+i)
}
return initial
}
previd = id
}
return 0
}
// free releases a page and its overflow for a given transaction id.
// If the page is already free then a panic will occur.
func (f *freelist) free(txid txid, p *page) {
if p.id <= 1 {
panic(fmt.Sprintf("cannot free page 0 or 1: %d", p.id))
}
// Free page and all its overflow pages.
var ids = f.pending[txid]
for id := p.id; id <= p.id+pgid(p.overflow); id++ {
// Verify that page is not already free.
if f.cache[id] {
panic(fmt.Sprintf("page %d already freed", id))
}
// Add to the freelist and cache.
ids = append(ids, id)
f.cache[id] = true
}
f.pending[txid] = ids
}
// release moves all page ids for a transaction id (or older) to the freelist.
func (f *freelist) release(txid txid) {
m := make(pgids, 0)
for tid, ids := range f.pending {
if tid <= txid {
// Move transaction's pending pages to the available freelist.
// Don't remove from the cache since the page is still free.
m = append(m, ids...)
delete(f.pending, tid)
}
}
sort.Sort(m)
f.ids = pgids(f.ids).merge(m)
}
// rollback removes the pages from a given pending tx.
func (f *freelist) rollback(txid txid) {
// Remove page ids from cache.
for _, id := range f.pending[txid] {
delete(f.cache, id)
}
// Remove pages from pending list.
delete(f.pending, txid)
}
// freed returns whether a given page is in the free list.
func (f *freelist) freed(pgid pgid) bool {
return f.cache[pgid]
}
// read initializes the freelist from a freelist page.
func (f *freelist) read(p *page) {
// If the page.count is at the max uint16 value (64k) then it's considered
// an overflow and the size of the freelist is stored as the first element.
idx, count := 0, int(p.count)
if count == 0xFFFF {
idx = 1
count = int(((*[maxAllocSize]pgid)(unsafe.Pointer(&p.ptr)))[0])
}
// Copy the list of page ids from the freelist.
if count == 0 {
f.ids = nil
} else {
ids := ((*[maxAllocSize]pgid)(unsafe.Pointer(&p.ptr)))[idx:count]
f.ids = make([]pgid, len(ids))
copy(f.ids, ids)
// Make sure they're sorted.
sort.Sort(pgids(f.ids))
}
// Rebuild the page cache.
f.reindex()
}
// write writes the page ids onto a freelist page. All free and pending ids are
// saved to disk since in the event of a program crash, all pending ids will
// become free.
func (f *freelist) write(p *page) error {
// Combine the old free pgids and pgids waiting on an open transaction.
// Update the header flag.
p.flags |= freelistPageFlag
// The page.count can only hold up to 64k elements so if we overflow that
// number then we handle it by putting the size in the first element.
lenids := f.count()
if lenids == 0 {
p.count = uint16(lenids)
} else if lenids < 0xFFFF {
p.count = uint16(lenids)
f.copyall(((*[maxAllocSize]pgid)(unsafe.Pointer(&p.ptr)))[:])
} else {
p.count = 0xFFFF
((*[maxAllocSize]pgid)(unsafe.Pointer(&p.ptr)))[0] = pgid(lenids)
f.copyall(((*[maxAllocSize]pgid)(unsafe.Pointer(&p.ptr)))[1:])
}
return nil
}
// reload reads the freelist from a page and filters out pending items.
func (f *freelist) reload(p *page) {
f.read(p)
// Build a cache of only pending pages.
pcache := make(map[pgid]bool)
for _, pendingIDs := range f.pending {
for _, pendingID := range pendingIDs {
pcache[pendingID] = true
}
}
// Check each page in the freelist and build a new available freelist
// with any pages not in the pending lists.
var a []pgid
for _, id := range f.ids {
if !pcache[id] {
a = append(a, id)
}
}
f.ids = a
// Once the available list is rebuilt then rebuild the free cache so that
// it includes the available and pending free pages.
f.reindex()
}
// reindex rebuilds the free cache based on available and pending free lists.
func (f *freelist) reindex() {
f.cache = make(map[pgid]bool, len(f.ids))
for _, id := range f.ids {
f.cache[id] = true
}
for _, pendingIDs := range f.pending {
for _, pendingID := range pendingIDs {
f.cache[pendingID] = true
}
}
}
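One detail worth calling out from read and write above: the page header's count field is a uint16, so a freelist longer than 0xFFFF entries stores 0xFFFF in the header and keeps the real length in the first list element. A standalone sketch of that convention (not bolt's actual code):
```go
package main

import "fmt"

// encodeCount mirrors the overflow convention in freelist.write: counts below
// 0xFFFF fit in the page header; larger freelists set the header to 0xFFFF
// and store the real length in the first list element.
func encodeCount(n int) (header uint16, overflowLen int) {
	if n < 0xFFFF {
		return uint16(n), 0
	}
	return 0xFFFF, n
}

func main() {
	fmt.Println(encodeCount(42))    // 42 0
	fmt.Println(encodeCount(70000)) // 65535 70000
}
```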

vendor/github.com/containers/buildah/README.md generated vendored Normal file

@@ -0,0 +1,129 @@
![buildah logo](https://cdn.rawgit.com/containers/buildah/master/logos/buildah-logo_large.png)
# [Buildah](https://www.youtube.com/embed/YVk5NgSiUw8) - a tool that facilitates building [Open Container Initiative (OCI)](https://www.opencontainers.org/) container images
[![Go Report Card](https://goreportcard.com/badge/github.com/containers/buildah)](https://goreportcard.com/report/github.com/containers/buildah)
[![Travis](https://travis-ci.org/containers/buildah.svg?branch=master)](https://travis-ci.org/containers/buildah)
The Buildah package provides a command line tool that can be used to
* create a working container, either from scratch or using an image as a starting point
* create an image, either from a working container or via the instructions in a Dockerfile
* images can be built in either the OCI image format or the traditional upstream docker image format
* mount a working container's root filesystem for manipulation
* unmount a working container's root filesystem
* use the updated contents of a container's root filesystem as a filesystem layer to create a new image
* delete a working container or an image
* rename a local container
## Buildah Information for Developers
For blogs, release announcements and more, please check out the [buildah.io](https://buildah.io) website!
**[Buildah Demos](demos)**
**[Changelog](CHANGELOG.md)**
**[Contributing](CONTRIBUTING.md)**
**[Development Plan](developmentplan.md)**
**[Installation notes](install.md)**
**[Troubleshooting Guide](troubleshooting.md)**
**[Tutorials](docs/tutorials)**
## Buildah and Podman relationship
Buildah and Podman are two complementary open-source projects that are
available on most Linux platforms and both projects reside at
[GitHub.com](https://github.com) with Buildah
[here](https://github.com/containers/buildah) and Podman
[here](https://github.com/containers/libpod). Both Buildah and Podman are
command line tools that work on Open Container Initiative (OCI) images and
containers. The two projects differ in their areas of specialization.
Buildah specializes in building OCI images. Buildah's commands replicate all
of the commands that are found in a Dockerfile. This allows building images
with and without Dockerfiles while not requiring any root privileges.
Buildah's ultimate goal is to provide a lower-level coreutils interface to
build images. The flexibility of building images without Dockerfiles allows
for the integration of other scripting languages into the build process.
Buildah follows a simple fork-exec model and does not run as a daemon
but it is based on a comprehensive API in golang, which can be vendored
into other tools.
Podman specializes in all of the commands and functions that help you to maintain and modify
OCI images, such as pulling and tagging. It also allows you to create, run, and maintain those containers
created from those images.
A major difference between Podman and Buildah is their concept of a container. Podman
allows users to create "traditional containers" that are intended to be long lived,
while Buildah containers are really just created to allow content
to be added back to the container image. An easy way to think of it is that the
`buildah run` command emulates the RUN command in a Dockerfile, while the `podman run`
command emulates the `docker run` command in functionality. Because of this and their underlying
storage differences, you cannot see Podman containers from within Buildah or vice versa.
In short, Buildah is an efficient way to create OCI images while Podman allows
you to manage and maintain those images and containers in a production environment using
familiar container CLI commands. For more details, see the
[Container Tools Guide](https://github.com/containers/buildah/tree/master/docs/containertools).
## Example
From [`./examples/lighttpd.sh`](examples/lighttpd.sh):
```bash
$ cat > lighttpd.sh <<"EOF"
#!/bin/bash -x
ctr1=$(buildah from "${1:-fedora}")
## Get all updates and install our minimal httpd server
buildah run "$ctr1" -- dnf update -y
buildah run "$ctr1" -- dnf install -y lighttpd
## Include some buildtime annotations
buildah config --annotation "com.example.build.host=$(uname -n)" "$ctr1"
## Run our server and expose the port
buildah config --cmd "/usr/sbin/lighttpd -D -f /etc/lighttpd/lighttpd.conf" "$ctr1"
buildah config --port 80 "$ctr1"
## Commit this container to an image name
buildah commit "$ctr1" "${2:-$USER/lighttpd}"
EOF
$ chmod +x lighttpd.sh
$ sudo ./lighttpd.sh
```
## Commands
| Command | Description |
| ---------------------------------------------------- | ---------------------------------------------------------------------------------------------------- |
| [buildah-add(1)](/docs/buildah-add.md) | Add the contents of a file, URL, or a directory to the container. |
| [buildah-bud(1)](/docs/buildah-bud.md) | Build an image using instructions from Dockerfiles. |
| [buildah-commit(1)](/docs/buildah-commit.md) | Create an image from a working container. |
| [buildah-config(1)](/docs/buildah-config.md) | Update image configuration settings. |
| [buildah-containers(1)](/docs/buildah-containers.md) | List the working containers and their base images. |
| [buildah-copy(1)](/docs/buildah-copy.md) | Copies the contents of a file, URL, or directory into a container's working directory. |
| [buildah-from(1)](/docs/buildah-from.md) | Creates a new working container, either from scratch or using a specified image as a starting point. |
| [buildah-images(1)](/docs/buildah-images.md) | List images in local storage. |
| [buildah-info(1)](/docs/buildah-info.md) | Display Buildah system information. |
| [buildah-inspect(1)](/docs/buildah-inspect.md) | Inspects the configuration of a container or image. |
| [buildah-mount(1)](/docs/buildah-mount.md) | Mount the working container's root filesystem. |
| [buildah-pull(1)](/docs/buildah-pull.md) | Pull an image from the specified location. |
| [buildah-push(1)](/docs/buildah-push.md) | Push an image from local storage to elsewhere. |
| [buildah-rename(1)](/docs/buildah-rename.md) | Rename a local container. |
| [buildah-rm(1)](/docs/buildah-rm.md) | Removes one or more working containers. |
| [buildah-rmi(1)](/docs/buildah-rmi.md) | Removes one or more images. |
| [buildah-run(1)](/docs/buildah-run.md) | Run a command inside of the container. |
| [buildah-tag(1)](/docs/buildah-tag.md) | Add an additional name to a local image. |
| [buildah-umount(1)](/docs/buildah-umount.md) | Unmount a working container's root file system. |
| [buildah-unshare(1)](/docs/buildah-unshare.md) | Launch a command in a user namespace with modified ID mappings. |
| [buildah-version(1)](/docs/buildah-version.md) | Display the Buildah Version Information |
**Future goals include:**
* more CI tests
* additional CLI commands (?)


@@ -0,0 +1,276 @@
#define _GNU_SOURCE
#include <sys/types.h>
#include <sys/ioctl.h>
#include <sys/stat.h>
#include <sys/syscall.h>
#include <linux/memfd.h>
#include <fcntl.h>
#include <grp.h>
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <termios.h>
#include <errno.h>
#include <unistd.h>
#ifndef F_LINUX_SPECIFIC_BASE
#define F_LINUX_SPECIFIC_BASE 1024
#endif
#ifndef F_ADD_SEALS
#define F_ADD_SEALS (F_LINUX_SPECIFIC_BASE + 9)
#define F_GET_SEALS (F_LINUX_SPECIFIC_BASE + 10)
#endif
#ifndef F_SEAL_SEAL
#define F_SEAL_SEAL 0x0001LU
#endif
#ifndef F_SEAL_SHRINK
#define F_SEAL_SHRINK 0x0002LU
#endif
#ifndef F_SEAL_GROW
#define F_SEAL_GROW 0x0004LU
#endif
#ifndef F_SEAL_WRITE
#define F_SEAL_WRITE 0x0008LU
#endif
#define BUFSTEP 1024
static const char *_max_user_namespaces = "/proc/sys/user/max_user_namespaces";
static const char *_unprivileged_user_namespaces = "/proc/sys/kernel/unprivileged_userns_clone";
static int _containers_unshare_parse_envint(const char *envname) {
char *p, *q;
long l;
p = getenv(envname);
if (p == NULL) {
return -1;
}
q = NULL;
l = strtol(p, &q, 10);
if ((q == NULL) || (*q != '\0')) {
fprintf(stderr, "Error parsing \"%s\"=\"%s\"!\n", envname, p);
_exit(1);
}
unsetenv(envname);
return l;
}
static void _check_proc_sys_file(const char *path)
{
FILE *fp;
char buf[32];
size_t n_read;
long r;
fp = fopen(path, "r");
if (fp == NULL) {
if (errno != ENOENT)
fprintf(stderr, "Error reading %s: %m\n", _max_user_namespaces);
} else {
memset(buf, 0, sizeof(buf));
n_read = fread(buf, 1, sizeof(buf) - 1, fp);
if (n_read > 0) {
r = atoi(buf);
if (r == 0) {
fprintf(stderr, "User namespaces are not enabled in %s.\n", path);
}
} else {
fprintf(stderr, "Error reading %s: no contents, should contain a number greater than 0.\n", path);
}
fclose(fp);
}
}
static char **parse_proc_stringlist(const char *list) {
int fd, n, i, n_strings;
char *buf, *new_buf, **ret;
size_t size, new_size, used;
fd = open(list, O_RDONLY);
if (fd == -1) {
return NULL;
}
buf = NULL;
size = 0;
used = 0;
for (;;) {
new_size = used + BUFSTEP;
new_buf = realloc(buf, new_size);
if (new_buf == NULL) {
free(buf);
fprintf(stderr, "realloc(%ld): out of memory\n", (long)(size + BUFSTEP));
return NULL;
}
buf = new_buf;
size = new_size;
memset(buf + used, '\0', size - used);
n = read(fd, buf + used, size - used - 1);
if (n < 0) {
fprintf(stderr, "read(): %m\n");
return NULL;
}
if (n == 0) {
break;
}
used += n;
}
close(fd);
n_strings = 0;
for (n = 0; n < used; n++) {
if ((n == 0) || (buf[n-1] == '\0')) {
n_strings++;
}
}
ret = calloc(n_strings + 1, sizeof(char *));
if (ret == NULL) {
fprintf(stderr, "calloc(): out of memory\n");
return NULL;
}
i = 0;
for (n = 0; n < used; n++) {
if ((n == 0) || (buf[n-1] == '\0')) {
ret[i++] = &buf[n];
}
}
ret[i] = NULL;
return ret;
}
static int containers_reexec(void) {
char **argv, *exename;
int fd, mmfd, n_read, n_written;
struct stat st;
char buf[2048];
argv = parse_proc_stringlist("/proc/self/cmdline");
if (argv == NULL) {
return -1;
}
fd = open("/proc/self/exe", O_RDONLY | O_CLOEXEC);
if (fd == -1) {
fprintf(stderr, "open(\"/proc/self/exe\"): %m\n");
return -1;
}
if (fstat(fd, &st) == -1) {
fprintf(stderr, "fstat(\"/proc/self/exe\"): %m\n");
return -1;
}
exename = basename(argv[0]);
mmfd = syscall(SYS_memfd_create, exename, (long) MFD_ALLOW_SEALING | MFD_CLOEXEC);
if (mmfd == -1) {
fprintf(stderr, "memfd_create(): %m\n");
return -1;
}
for (;;) {
n_read = read(fd, buf, sizeof(buf));
if (n_read < 0) {
fprintf(stderr, "read(\"/proc/self/exe\"): %m\n");
return -1;
}
if (n_read == 0) {
break;
}
n_written = write(mmfd, buf, n_read);
if (n_written < 0) {
fprintf(stderr, "write(anonfd): %m\n");
return -1;
}
if (n_written != n_read) {
fprintf(stderr, "write(anonfd): short write (%d != %d)\n", n_written, n_read);
return -1;
}
}
close(fd);
if (fcntl(mmfd, F_ADD_SEALS, F_SEAL_SHRINK | F_SEAL_GROW | F_SEAL_WRITE | F_SEAL_SEAL) == -1) {
close(mmfd);
fprintf(stderr, "Error sealing memfd copy: %m\n");
return -1;
}
if (fexecve(mmfd, argv, environ) == -1) {
close(mmfd);
fprintf(stderr, "Error during reexec(...): %m\n");
return -1;
}
return 0;
}
void _containers_unshare(void)
{
int flags, pidfd, continuefd, n, pgrp, sid, ctty;
char buf[2048];
flags = _containers_unshare_parse_envint("_Containers-unshare");
if (flags == -1) {
return;
}
if ((flags & CLONE_NEWUSER) != 0) {
if (unshare(CLONE_NEWUSER) == -1) {
fprintf(stderr, "Error during unshare(CLONE_NEWUSER): %m\n");
_check_proc_sys_file (_max_user_namespaces);
_check_proc_sys_file (_unprivileged_user_namespaces);
_exit(1);
}
}
pidfd = _containers_unshare_parse_envint("_Containers-pid-pipe");
if (pidfd != -1) {
snprintf(buf, sizeof(buf), "%llu", (unsigned long long) getpid());
size_t size = write(pidfd, buf, strlen(buf));
if (size != strlen(buf)) {
fprintf(stderr, "Error writing PID to pipe on fd %d: %m\n", pidfd);
_exit(1);
}
close(pidfd);
}
continuefd = _containers_unshare_parse_envint("_Containers-continue-pipe");
if (continuefd != -1) {
n = read(continuefd, buf, sizeof(buf));
if (n > 0) {
fprintf(stderr, "Error: %.*s\n", n, buf);
_exit(1);
}
close(continuefd);
}
sid = _containers_unshare_parse_envint("_Containers-setsid");
if (sid == 1) {
if (setsid() == -1) {
fprintf(stderr, "Error during setsid: %m\n");
_exit(1);
}
}
pgrp = _containers_unshare_parse_envint("_Containers-setpgrp");
if (pgrp == 1) {
if (setpgrp() == -1) {
fprintf(stderr, "Error during setpgrp: %m\n");
_exit(1);
}
}
ctty = _containers_unshare_parse_envint("_Containers-ctty");
if (ctty != -1) {
if (ioctl(ctty, TIOCSCTTY, 0) == -1) {
fprintf(stderr, "Error while setting controlling terminal to %d: %m\n", ctty);
_exit(1);
}
}
if ((flags & CLONE_NEWUSER) != 0) {
if (setresgid(0, 0, 0) != 0) {
fprintf(stderr, "Error during setresgid(0): %m\n");
_exit(1);
}
if (setresuid(0, 0, 0) != 0) {
fprintf(stderr, "Error during setresuid(0): %m\n");
_exit(1);
}
}
if ((flags & ~CLONE_NEWUSER) != 0) {
if (unshare(flags & ~CLONE_NEWUSER) == -1) {
fprintf(stderr, "Error during unshare(...): %m\n");
_exit(1);
}
}
if (containers_reexec() != 0) {
_exit(1);
}
return;
}


@@ -0,0 +1,545 @@
// +build linux
package unshare
import (
"bufio"
"bytes"
"fmt"
"io"
"os"
"os/exec"
"os/user"
"runtime"
"strconv"
"strings"
"sync"
"syscall"
"github.com/containers/storage/pkg/idtools"
"github.com/containers/storage/pkg/reexec"
"github.com/opencontainers/runtime-spec/specs-go"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
"github.com/syndtr/gocapability/capability"
)
// Cmd wraps an exec.Cmd created by the reexec package in unshare(), and
// handles setting ID maps and other related settings by triggering
// initialization code in the child.
type Cmd struct {
*exec.Cmd
UnshareFlags int
UseNewuidmap bool
UidMappings []specs.LinuxIDMapping
UseNewgidmap bool
GidMappings []specs.LinuxIDMapping
GidMappingsEnableSetgroups bool
Setsid bool
Setpgrp bool
Ctty *os.File
OOMScoreAdj *int
Hook func(pid int) error
}
// Command creates a new Cmd which can be customized.
func Command(args ...string) *Cmd {
cmd := reexec.Command(args...)
return &Cmd{
Cmd: cmd,
}
}
func (c *Cmd) Start() error {
runtime.LockOSThread()
defer runtime.UnlockOSThread()
// Set an environment variable to tell the child to synchronize its startup.
if c.Env == nil {
c.Env = os.Environ()
}
c.Env = append(c.Env, fmt.Sprintf("_Containers-unshare=%d", c.UnshareFlags))
// Please the libpod "rootless" package to find the expected env variables.
if os.Geteuid() != 0 {
c.Env = append(c.Env, "_CONTAINERS_USERNS_CONFIGURED=done")
c.Env = append(c.Env, fmt.Sprintf("_CONTAINERS_ROOTLESS_UID=%d", os.Geteuid()))
}
// Create the pipe for reading the child's PID.
pidRead, pidWrite, err := os.Pipe()
if err != nil {
return errors.Wrapf(err, "error creating pid pipe")
}
c.Env = append(c.Env, fmt.Sprintf("_Containers-pid-pipe=%d", len(c.ExtraFiles)+3))
c.ExtraFiles = append(c.ExtraFiles, pidWrite)
// Create the pipe for letting the child know to proceed.
continueRead, continueWrite, err := os.Pipe()
if err != nil {
pidRead.Close()
pidWrite.Close()
return errors.Wrapf(err, "error creating pid pipe")
}
c.Env = append(c.Env, fmt.Sprintf("_Containers-continue-pipe=%d", len(c.ExtraFiles)+3))
c.ExtraFiles = append(c.ExtraFiles, continueRead)
// Pass along other instructions.
if c.Setsid {
c.Env = append(c.Env, "_Containers-setsid=1")
}
if c.Setpgrp {
c.Env = append(c.Env, "_Containers-setpgrp=1")
}
if c.Ctty != nil {
c.Env = append(c.Env, fmt.Sprintf("_Containers-ctty=%d", len(c.ExtraFiles)+3))
c.ExtraFiles = append(c.ExtraFiles, c.Ctty)
}
// Make sure we clean up our pipes.
defer func() {
if pidRead != nil {
pidRead.Close()
}
if pidWrite != nil {
pidWrite.Close()
}
if continueRead != nil {
continueRead.Close()
}
if continueWrite != nil {
continueWrite.Close()
}
}()
// Start the new process.
err = c.Cmd.Start()
if err != nil {
return err
}
// Close the ends of the pipes that the parent doesn't need.
continueRead.Close()
continueRead = nil
pidWrite.Close()
pidWrite = nil
// Read the child's PID from the pipe.
pidString := ""
b := new(bytes.Buffer)
io.Copy(b, pidRead)
pidString = b.String()
pid, err := strconv.Atoi(pidString)
if err != nil {
fmt.Fprintf(continueWrite, "error parsing PID %q: %v", pidString, err)
return errors.Wrapf(err, "error parsing PID %q", pidString)
}
pidString = fmt.Sprintf("%d", pid)
// If we created a new user namespace, set any specified mappings.
if c.UnshareFlags&syscall.CLONE_NEWUSER != 0 {
// Always set "setgroups".
setgroups, err := os.OpenFile(fmt.Sprintf("/proc/%s/setgroups", pidString), os.O_TRUNC|os.O_WRONLY, 0)
if err != nil {
fmt.Fprintf(continueWrite, "error opening setgroups: %v", err)
return errors.Wrapf(err, "error opening /proc/%s/setgroups", pidString)
}
defer setgroups.Close()
if c.GidMappingsEnableSetgroups {
if _, err := fmt.Fprintf(setgroups, "allow"); err != nil {
fmt.Fprintf(continueWrite, "error writing \"allow\" to setgroups: %v", err)
return errors.Wrapf(err, "error opening \"allow\" to /proc/%s/setgroups", pidString)
}
} else {
if _, err := fmt.Fprintf(setgroups, "deny"); err != nil {
fmt.Fprintf(continueWrite, "error writing \"deny\" to setgroups: %v", err)
return errors.Wrapf(err, "error writing \"deny\" to /proc/%s/setgroups", pidString)
}
}
if len(c.UidMappings) == 0 || len(c.GidMappings) == 0 {
uidmap, gidmap, err := GetHostIDMappings("")
if err != nil {
fmt.Fprintf(continueWrite, "error reading ID mappings in parent: %v", err)
return errors.Wrapf(err, "error reading ID mappings in parent")
}
if len(c.UidMappings) == 0 {
c.UidMappings = uidmap
for i := range c.UidMappings {
c.UidMappings[i].HostID = c.UidMappings[i].ContainerID
}
}
if len(c.GidMappings) == 0 {
c.GidMappings = gidmap
for i := range c.GidMappings {
c.GidMappings[i].HostID = c.GidMappings[i].ContainerID
}
}
}
if len(c.GidMappings) > 0 {
// Build the GID map, since writing to the proc file has to be done all at once.
g := new(bytes.Buffer)
for _, m := range c.GidMappings {
fmt.Fprintf(g, "%d %d %d\n", m.ContainerID, m.HostID, m.Size)
}
// Set the GID map.
if c.UseNewgidmap {
cmd := exec.Command("newgidmap", append([]string{pidString}, strings.Fields(strings.Replace(g.String(), "\n", " ", -1))...)...)
g.Reset()
cmd.Stdout = g
cmd.Stderr = g
err := cmd.Run()
if err != nil {
fmt.Fprintf(continueWrite, "error running newgidmap: %v: %s", err, g.String())
return errors.Wrapf(err, "error running newgidmap: %s", g.String())
}
} else {
gidmap, err := os.OpenFile(fmt.Sprintf("/proc/%s/gid_map", pidString), os.O_TRUNC|os.O_WRONLY, 0)
if err != nil {
fmt.Fprintf(continueWrite, "error opening /proc/%s/gid_map: %v", pidString, err)
return errors.Wrapf(err, "error opening /proc/%s/gid_map", pidString)
}
defer gidmap.Close()
if _, err := fmt.Fprintf(gidmap, "%s", g.String()); err != nil {
fmt.Fprintf(continueWrite, "error writing %q to /proc/%s/gid_map: %v", g.String(), pidString, err)
return errors.Wrapf(err, "error writing %q to /proc/%s/gid_map", g.String(), pidString)
}
}
}
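// Editorial aside (not part of the vendored source): the strings.Fields call
// above flattens the newline-separated "containerID hostID size" triples into
// a single argument list, so mappings "0 1000 1" and "1 100000 65536" become:
//   newgidmap <pid> 0 1000 1 1 100000 65536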
if len(c.UidMappings) > 0 {
// Build the UID map, since writing to the proc file has to be done all at once.
u := new(bytes.Buffer)
for _, m := range c.UidMappings {
fmt.Fprintf(u, "%d %d %d\n", m.ContainerID, m.HostID, m.Size)
}
// Set the UID map.
if c.UseNewuidmap {
cmd := exec.Command("newuidmap", append([]string{pidString}, strings.Fields(strings.Replace(u.String(), "\n", " ", -1))...)...)
u.Reset()
cmd.Stdout = u
cmd.Stderr = u
err := cmd.Run()
if err != nil {
fmt.Fprintf(continueWrite, "error running newuidmap: %v: %s", err, u.String())
return errors.Wrapf(err, "error running newuidmap: %s", u.String())
}
} else {
uidmap, err := os.OpenFile(fmt.Sprintf("/proc/%s/uid_map", pidString), os.O_TRUNC|os.O_WRONLY, 0)
if err != nil {
fmt.Fprintf(continueWrite, "error opening /proc/%s/uid_map: %v", pidString, err)
return errors.Wrapf(err, "error opening /proc/%s/uid_map", pidString)
}
defer uidmap.Close()
if _, err := fmt.Fprintf(uidmap, "%s", u.String()); err != nil {
fmt.Fprintf(continueWrite, "error writing %q to /proc/%s/uid_map: %v", u.String(), pidString, err)
return errors.Wrapf(err, "error writing %q to /proc/%s/uid_map", u.String(), pidString)
}
}
}
}
if c.OOMScoreAdj != nil {
oomScoreAdj, err := os.OpenFile(fmt.Sprintf("/proc/%s/oom_score_adj", pidString), os.O_TRUNC|os.O_WRONLY, 0)
if err != nil {
fmt.Fprintf(continueWrite, "error opening oom_score_adj: %v", err)
return errors.Wrapf(err, "error opening /proc/%s/oom_score_adj", pidString)
}
defer oomScoreAdj.Close()
if _, err := fmt.Fprintf(oomScoreAdj, "%d\n", *c.OOMScoreAdj); err != nil {
fmt.Fprintf(continueWrite, "error writing \"%d\" to oom_score_adj: %v", c.OOMScoreAdj, err)
return errors.Wrapf(err, "error writing \"%d\" to /proc/%s/oom_score_adj", c.OOMScoreAdj, pidString)
}
}
// Run any additional setup that we want to do before the child starts running proper.
if c.Hook != nil {
if err = c.Hook(pid); err != nil {
fmt.Fprintf(continueWrite, "hook error: %v", err)
return err
}
}
return nil
}
func (c *Cmd) Run() error {
if err := c.Start(); err != nil {
return err
}
return c.Wait()
}
func (c *Cmd) CombinedOutput() ([]byte, error) {
return nil, errors.New("unshare: CombinedOutput() not implemented")
}
func (c *Cmd) Output() ([]byte, error) {
return nil, errors.New("unshare: Output() not implemented")
}
var (
isRootlessOnce sync.Once
isRootless bool
)
const (
// UsernsEnvName is the name of the environment variable which, when set, indicates that we are running in rootless mode
UsernsEnvName = "_CONTAINERS_USERNS_CONFIGURED"
)
// IsRootless tells us if we are running in rootless mode
func IsRootless() bool {
isRootlessOnce.Do(func() {
isRootless = os.Geteuid() != 0 || os.Getenv(UsernsEnvName) != ""
})
return isRootless
}
// GetRootlessUID returns the UID of the user in the parent userNS
func GetRootlessUID() int {
uidEnv := os.Getenv("_CONTAINERS_ROOTLESS_UID")
if uidEnv != "" {
u, _ := strconv.Atoi(uidEnv)
return u
}
return os.Getuid()
}
// RootlessEnv returns the environment settings for the rootless containers
func RootlessEnv() []string {
return append(os.Environ(), UsernsEnvName+"=done")
}
type Runnable interface {
Run() error
}
func bailOnError(err error, format string, a ...interface{}) {
if err != nil {
if format != "" {
logrus.Errorf("%s: %v", fmt.Sprintf(format, a...), err)
} else {
logrus.Errorf("%v", err)
}
os.Exit(1)
}
}
// MaybeReexecUsingUserNamespace re-execs the process in a new user namespace, if appropriate
func MaybeReexecUsingUserNamespace(evenForRoot bool) {
// If we've already been through this once, no need to try again.
if os.Geteuid() == 0 && IsRootless() {
return
}
var uidNum, gidNum uint64
// Figure out who we are.
me, err := user.Current()
if !os.IsNotExist(err) {
bailOnError(err, "error determining current user")
uidNum, err = strconv.ParseUint(me.Uid, 10, 32)
bailOnError(err, "error parsing current UID %s", me.Uid)
gidNum, err = strconv.ParseUint(me.Gid, 10, 32)
bailOnError(err, "error parsing current GID %s", me.Gid)
}
runtime.LockOSThread()
defer runtime.UnlockOSThread()
// ID mappings to use to reexec ourselves.
var uidmap, gidmap []specs.LinuxIDMapping
if uidNum != 0 || evenForRoot {
// Read the set of ID mappings that we're allowed to use. Each
// range in the /etc/subuid and /etc/subgid files is a starting host
// ID and a range size.
uidmap, gidmap, err = GetSubIDMappings(me.Username, me.Username)
bailOnError(err, "error reading allowed ID mappings")
if len(uidmap) == 0 {
logrus.Warnf("Found no UID ranges set aside for user %q in /etc/subuid.", me.Username)
}
if len(gidmap) == 0 {
logrus.Warnf("Found no GID ranges set aside for user %q in /etc/subgid.", me.Username)
}
// Map our UID and GID, then the subuid and subgid ranges,
// consecutively, starting at 0, to get the mappings to use for
// a copy of ourselves.
uidmap = append([]specs.LinuxIDMapping{{HostID: uint32(uidNum), ContainerID: 0, Size: 1}}, uidmap...)
gidmap = append([]specs.LinuxIDMapping{{HostID: uint32(gidNum), ContainerID: 0, Size: 1}}, gidmap...)
var rangeStart uint32
for i := range uidmap {
uidmap[i].ContainerID = rangeStart
rangeStart += uidmap[i].Size
}
rangeStart = 0
for i := range gidmap {
gidmap[i].ContainerID = rangeStart
rangeStart += gidmap[i].Size
}
} else {
// If we have CAP_SYS_ADMIN, then we don't need to create a new namespace in order to be able
// to use unshare(), so don't bother creating a new user namespace at this point.
capabilities, err := capability.NewPid(0)
bailOnError(err, "error reading the current capabilities sets")
if capabilities.Get(capability.EFFECTIVE, capability.CAP_SYS_ADMIN) {
return
}
// Read the set of ID mappings that we're currently using.
uidmap, gidmap, err = GetHostIDMappings("")
bailOnError(err, "error reading current ID mappings")
// Just reuse them.
for i := range uidmap {
uidmap[i].HostID = uidmap[i].ContainerID
}
for i := range gidmap {
gidmap[i].HostID = gidmap[i].ContainerID
}
}
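// Editorial worked example (values assumed): for a user with UID/GID 1000 and
// "user:100000:65536" entries in /etc/subuid and /etc/subgid, the
// subordinate-ID branch above produces
//   uidmap = [{ContainerID:0 HostID:1000 Size:1} {ContainerID:1 HostID:100000 Size:65536}]
// so the invoking user maps to root inside the namespace and the subordinate
// range supplies IDs 1 through 65536.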
// Unlike most uses of reexec or unshare, we're using a name that
// _won't_ be recognized as a registered reexec handler, since we
// _want_ to fall through reexec.Init() to the normal main().
cmd := Command(append([]string{fmt.Sprintf("%s-in-a-user-namespace", os.Args[0])}, os.Args[1:]...)...)
// If, somehow, we don't become UID 0 in our child, indicate that the child shouldn't try again.
err = os.Setenv(UsernsEnvName, "1")
bailOnError(err, "error setting %s=1 in environment", UsernsEnvName)
// Set the default isolation type to use the "rootless" method.
if _, present := os.LookupEnv("BUILDAH_ISOLATION"); !present {
if err = os.Setenv("BUILDAH_ISOLATION", "rootless"); err != nil {
if err := os.Setenv("BUILDAH_ISOLATION", "rootless"); err != nil {
logrus.Errorf("error setting BUILDAH_ISOLATION=rootless in environment: %v", err)
os.Exit(1)
}
}
}
// Reuse our stdio.
cmd.Stdin = os.Stdin
cmd.Stdout = os.Stdout
cmd.Stderr = os.Stderr
// Set up a new user namespace with the ID mapping.
cmd.UnshareFlags = syscall.CLONE_NEWUSER | syscall.CLONE_NEWNS
cmd.UseNewuidmap = uidNum != 0
cmd.UidMappings = uidmap
cmd.UseNewgidmap = uidNum != 0
cmd.GidMappings = gidmap
cmd.GidMappingsEnableSetgroups = true
// Finish up.
logrus.Debugf("running %+v with environment %+v, UID map %+v, and GID map %+v", cmd.Cmd.Args, os.Environ(), cmd.UidMappings, cmd.GidMappings)
ExecRunnable(cmd)
}
// ExecRunnable runs the specified unshare command, captures its exit status,
// and exits with the same status.
func ExecRunnable(cmd Runnable) {
if err := cmd.Run(); err != nil {
if exitError, ok := errors.Cause(err).(*exec.ExitError); ok {
if exitError.ProcessState.Exited() {
if waitStatus, ok := exitError.ProcessState.Sys().(syscall.WaitStatus); ok {
if waitStatus.Exited() {
logrus.Errorf("%v", exitError)
os.Exit(waitStatus.ExitStatus())
}
if waitStatus.Signaled() {
logrus.Errorf("%v", exitError)
os.Exit(int(waitStatus.Signal()) + 128)
}
}
}
}
logrus.Errorf("%v", err)
logrus.Errorf("(unable to determine exit status)")
os.Exit(1)
}
os.Exit(0)
}
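// Editorial note: the Signaled() branch above follows the usual shell
// convention of reporting death-by-signal as 128 + signal number; e.g. a
// child killed by SIGKILL (signal 9) makes the parent exit with status 137.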
// getHostIDMappings reads mappings from the named node under /proc.
func getHostIDMappings(path string) ([]specs.LinuxIDMapping, error) {
var mappings []specs.LinuxIDMapping
f, err := os.Open(path)
if err != nil {
return nil, errors.Wrapf(err, "error reading ID mappings from %q", path)
}
defer f.Close()
scanner := bufio.NewScanner(f)
for scanner.Scan() {
line := scanner.Text()
fields := strings.Fields(line)
if len(fields) != 3 {
return nil, errors.Errorf("line %q from %q has %d fields, not 3", line, path, len(fields))
}
cid, err := strconv.ParseUint(fields[0], 10, 32)
if err != nil {
return nil, errors.Wrapf(err, "error parsing container ID value %q from line %q in %q", fields[0], line, path)
}
hid, err := strconv.ParseUint(fields[1], 10, 32)
if err != nil {
return nil, errors.Wrapf(err, "error parsing host ID value %q from line %q in %q", fields[1], line, path)
}
size, err := strconv.ParseUint(fields[2], 10, 32)
if err != nil {
return nil, errors.Wrapf(err, "error parsing size value %q from line %q in %q", fields[2], line, path)
}
mappings = append(mappings, specs.LinuxIDMapping{ContainerID: uint32(cid), HostID: uint32(hid), Size: uint32(size)})
}
return mappings, nil
}
// GetHostIDMappings reads mappings for the specified process (or the current
// process if pid is "self" or an empty string) from the kernel.
func GetHostIDMappings(pid string) ([]specs.LinuxIDMapping, []specs.LinuxIDMapping, error) {
if pid == "" {
pid = "self"
}
uidmap, err := getHostIDMappings(fmt.Sprintf("/proc/%s/uid_map", pid))
if err != nil {
return nil, nil, err
}
gidmap, err := getHostIDMappings(fmt.Sprintf("/proc/%s/gid_map", pid))
if err != nil {
return nil, nil, err
}
return uidmap, gidmap, nil
}
// GetSubIDMappings reads mappings from /etc/subuid and /etc/subgid.
func GetSubIDMappings(user, group string) ([]specs.LinuxIDMapping, []specs.LinuxIDMapping, error) {
mappings, err := idtools.NewIDMappings(user, group)
if err != nil {
return nil, nil, errors.Wrapf(err, "error reading subuid mappings for user %q and subgid mappings for group %q", user, group)
}
var uidmap, gidmap []specs.LinuxIDMapping
for _, m := range mappings.UIDs() {
uidmap = append(uidmap, specs.LinuxIDMapping{
ContainerID: uint32(m.ContainerID),
HostID: uint32(m.HostID),
Size: uint32(m.Size),
})
}
for _, m := range mappings.GIDs() {
gidmap = append(gidmap, specs.LinuxIDMapping{
ContainerID: uint32(m.ContainerID),
HostID: uint32(m.HostID),
Size: uint32(m.Size),
})
}
return uidmap, gidmap, nil
}
// ParseIDMappings parses mapping triples.
func ParseIDMappings(uidmap, gidmap []string) ([]idtools.IDMap, []idtools.IDMap, error) {
uid, err := idtools.ParseIDMap(uidmap, "userns-uid-map")
if err != nil {
return nil, nil, err
}
gid, err := idtools.ParseIDMap(gidmap, "userns-gid-map")
if err != nil {
return nil, nil, err
}
return uid, gid, nil
}
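
A minimal editorial sketch of how a caller might drive the package above (the import path is an assumption for this vendored copy; everything else mirrors the functions defined in the file):

package main

import (
"fmt"

"github.com/containers/buildah/unshare" // assumed import path
)

func main() {
// On the first pass, as an ordinary user, this re-execs us as apparent
// root inside a new user namespace; on the second pass it is a no-op,
// because geteuid() == 0 and _CONTAINERS_USERNS_CONFIGURED is set.
unshare.MaybeReexecUsingUserNamespace(false)

fmt.Println("rootless:", unshare.IsRootless(), "invoking UID:", unshare.GetRootlessUID())
}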

View File

@@ -0,0 +1,10 @@
// +build linux,cgo,!gccgo
package unshare
// #cgo CFLAGS: -Wall
// extern void _containers_unshare(void);
// void __attribute__((constructor)) init(void) {
// _containers_unshare();
// }
import "C"

View File

@@ -0,0 +1,25 @@
// +build linux,cgo,gccgo
package unshare
// #cgo CFLAGS: -Wall -Wextra
// extern void _containers_unshare(void);
// void __attribute__((constructor)) init(void) {
// _containers_unshare();
// }
import "C"
// This next bit is straight out of libcontainer.
// AlwaysFalse is here to stay false
// (and be exported so the compiler doesn't optimize out its reference)
var AlwaysFalse bool
func init() {
if AlwaysFalse {
// by referencing this C init() in a noop test, we ensure that the compiler
// links in the C function.
// https://gcc.gnu.org/bugzilla/show_bug.cgi?id=65134
C.init()
}
}

View File

@@ -0,0 +1,45 @@
// +build !linux
package unshare
import (
"os"
"github.com/containers/storage/pkg/idtools"
"github.com/opencontainers/runtime-spec/specs-go"
)
const (
// UsernsEnvName is the name of the environment variable which, when set, indicates that we are running in rootless mode
UsernsEnvName = "_CONTAINERS_USERNS_CONFIGURED"
)
// IsRootless tells us if we are running in rootless mode
func IsRootless() bool {
return false
}
// GetRootlessUID returns the UID of the user in the parent userNS
func GetRootlessUID() int {
return os.Getuid()
}
// RootlessEnv returns the environment settings for the rootless containers
func RootlessEnv() []string {
return append(os.Environ(), UsernsEnvName+"=")
}
// MaybeReexecUsingUserNamespace re-execs the process in a new user namespace, if appropriate
func MaybeReexecUsingUserNamespace(evenForRoot bool) {
}
// GetHostIDMappings reads mappings for the specified process (or the current
// process if pid is "self" or an empty string) from the kernel.
func GetHostIDMappings(pid string) ([]specs.LinuxIDMapping, []specs.LinuxIDMapping, error) {
return nil, nil, nil
}
// ParseIDMappings parses mapping triples.
func ParseIDMappings(uidmap, gidmap []string) ([]idtools.IDMap, []idtools.IDMap, error) {
return nil, nil, nil
}

vendor/github.com/containers/buildah/vendor.conf generated vendored Normal file (68 lines)
View File

@@ -0,0 +1,68 @@
github.com/Azure/go-ansiterm d6e3b3328b783f23731bc4d058875b0371ff8109
github.com/blang/semver v3.5.0
github.com/BurntSushi/toml v0.2.0
github.com/containerd/continuity 004b46473808b3e7a4a3049c20e4376c91eb966d
github.com/containernetworking/cni v0.7.0-rc2
github.com/containers/image f52cf78ebfa1916da406f8b6210d8f7764ec1185
github.com/vbauerster/mpb v3.3.4
github.com/mattn/go-isatty v0.0.4
github.com/VividCortex/ewma v1.1.1
github.com/boltdb/bolt v1.3.1
github.com/containers/storage v1.12.2
github.com/docker/distribution 5f6282db7d65e6d72ad7c2cc66310724a57be716
github.com/docker/docker 54dddadc7d5d89fe0be88f76979f6f6ab0dede83
github.com/docker/docker-credential-helpers v0.6.1
github.com/docker/go-connections v0.4.0
github.com/docker/go-units v0.3.2
github.com/docker/libtrust aabc10ec26b754e797f9028f4589c5b7bd90dc20
github.com/docker/libnetwork 1a06131fb8a047d919f7deaf02a4c414d7884b83
github.com/fsouza/go-dockerclient v1.3.0
github.com/ghodss/yaml v1.0.0
github.com/gogo/protobuf v1.2.0
github.com/gorilla/context v1.1.1
github.com/gorilla/mux v1.6.2
github.com/hashicorp/errwrap v1.0.0
github.com/hashicorp/go-multierror v1.0.0
github.com/imdario/mergo v0.3.6
github.com/mattn/go-shellwords v1.0.3
github.com/Microsoft/go-winio v0.4.11
github.com/Microsoft/hcsshim v0.8.3
github.com/mistifyio/go-zfs v2.1.1
github.com/moby/moby f8806b18b4b92c5e1980f6e11c917fad201cd73c
github.com/mtrmac/gpgme b2432428689ca58c2b8e8dea9449d3295cf96fc9
# TODO: Gotty has not been updated since 2012. Can we find a replacement?
github.com/Nvveen/Gotty cd527374f1e5bff4938207604a14f2e38a9cf512
github.com/opencontainers/go-digest c9281466c8b2f606084ac71339773efd177436e7
github.com/opencontainers/image-spec v1.0.0
github.com/opencontainers/runc v1.0.0-rc6
github.com/opencontainers/runtime-spec v1.0.0
github.com/opencontainers/runtime-tools v0.8.0
github.com/opencontainers/selinux v1.1
github.com/openshift/imagebuilder v1.1.0
github.com/ostreedev/ostree-go 9ab99253d365aac3a330d1f7281cf29f3d22820b
github.com/pkg/errors v0.8.1
github.com/pquerna/ffjson d49c2bc1aa135aad0c6f4fc2056623ec78f5d5ac
github.com/seccomp/libseccomp-golang v0.9.0
github.com/seccomp/containers-golang v0.1
github.com/sirupsen/logrus v1.0.0
github.com/syndtr/gocapability d98352740cb2c55f81556b63d4a1ec64c5a319c2
github.com/tchap/go-patricia v2.2.6
github.com/ulikunitz/xz v0.5.5
github.com/vbatts/tar-split v0.10.2
github.com/xeipuuv/gojsonpointer 4e3ac2762d5f479393488629ee9370b50873b3a6
github.com/xeipuuv/gojsonreference bd5ef7bd5415a7ac448318e64f11a24cd21e594b
github.com/xeipuuv/gojsonschema v1.1.0
golang.org/x/crypto ff983b9c42bc9fbf91556e191cc8efb585c16908 https://github.com/golang/crypto
golang.org/x/net 45ffb0cd1ba084b73e26dee67e667e1be5acce83 https://github.com/golang/net
golang.org/x/sync 37e7f081c4d4c64e13b10787722085407fe5d15f https://github.com/golang/sync
golang.org/x/sys 7fbe1cd0fcc20051e1fcb87fbabec4a1bacaaeba https://github.com/golang/sys
golang.org/x/text e6919f6577db79269a6443b9dc46d18f2238fb5d https://github.com/golang/text
gopkg.in/yaml.v2 v2.2.2
k8s.io/client-go kubernetes-1.10.13-beta.0 https://github.com/kubernetes/client-go
github.com/klauspost/pgzip v1.2.1
github.com/klauspost/compress v1.4.1
github.com/klauspost/cpuid v1.2.0
github.com/onsi/gomega v1.4.3
github.com/spf13/cobra v0.0.3
github.com/spf13/pflag v1.0.3
github.com/ishidawataru/sctp 07191f837fedd2f13d1ec7b5f885f0f3ec54b1cb

View File

@@ -65,7 +65,7 @@ the primary downside is that creating new signatures with the Golang-only implem
- `containers_image_ostree_stub`: Instead of importing `ostree:` transport in `github.com/containers/image/transports/alltransports`, use a stub which reports that the transport is not supported. This allows building the library without requiring the `libostree` development libraries. The `github.com/containers/image/ostree` package is completely disabled
and impossible to import when this build tag is in use.
## [Contributing](CONTRIBUTING.md)**
## [Contributing](CONTRIBUTING.md)
Information about contributing to this project.

View File

@@ -6,24 +6,29 @@ import (
"fmt"
"io"
"io/ioutil"
"os"
"reflect"
"runtime"
"strings"
"sync"
"time"
"github.com/containers/image/docker/reference"
"github.com/containers/image/image"
"github.com/containers/image/manifest"
"github.com/containers/image/pkg/blobinfocache"
"github.com/containers/image/pkg/compression"
"github.com/containers/image/signature"
"github.com/containers/image/transports"
"github.com/containers/image/types"
"github.com/klauspost/pgzip"
"github.com/opencontainers/go-digest"
digest "github.com/opencontainers/go-digest"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
"github.com/vbauerster/mpb"
"github.com/vbauerster/mpb/decor"
"golang.org/x/crypto/ssh/terminal"
"golang.org/x/sync/semaphore"
pb "gopkg.in/cheggaaa/pb.v1"
)
type digestingReader struct {
@@ -84,6 +89,7 @@ type copier struct {
dest types.ImageDestination
rawSource types.ImageSource
reportWriter io.Writer
progressOutput io.Writer
progressInterval time.Duration
progress chan types.ProgressProperties
blobInfoCache types.BlobInfoCache
@@ -152,11 +158,19 @@ func Image(ctx context.Context, policyContext *signature.PolicyContext, destRef,
}
}()
// If reportWriter is not a TTY (e.g., when piping to a file), do not
// print the progress bars to avoid long and hard to parse output.
// createProgressBar() will print a single line instead.
progressOutput := reportWriter
if !isTTY(reportWriter) {
progressOutput = ioutil.Discard
}
copyInParallel := dest.HasThreadSafePutBlob() && rawSource.HasThreadSafeGetBlob()
c := &copier{
dest: dest,
rawSource: rawSource,
reportWriter: reportWriter,
progressOutput: progressOutput,
progressInterval: options.ProgressInterval,
progress: options.Progress,
copyInParallel: copyInParallel,
@@ -201,7 +215,7 @@ func Image(ctx context.Context, policyContext *signature.PolicyContext, destRef,
// Image copies a single (non-manifest-list) image unparsedImage, using policyContext to validate
// source image admissibility.
func (c *copier) copyOneImage(ctx context.Context, policyContext *signature.PolicyContext, options *Options, unparsedImage *image.UnparsedImage) (manifest []byte, retErr error) {
func (c *copier) copyOneImage(ctx context.Context, policyContext *signature.PolicyContext, options *Options, unparsedImage *image.UnparsedImage) (manifestBytes []byte, retErr error) {
// The caller is handling manifest lists; this could happen only if a manifest list contains a manifest list.
// Make sure we fail cleanly in such cases.
multiImage, err := isMultiImage(ctx, unparsedImage)
@@ -224,6 +238,26 @@ func (c *copier) copyOneImage(ctx context.Context, policyContext *signature.Poli
return nil, errors.Wrapf(err, "Error initializing image from source %s", transports.ImageName(c.rawSource.Reference()))
}
// If the destination is a digested reference, make a note of that, determine what digest value we're
// expecting, and check that the source manifest matches it.
destIsDigestedReference := false
if named := c.dest.Reference().DockerReference(); named != nil {
if digested, ok := named.(reference.Digested); ok {
destIsDigestedReference = true
sourceManifest, _, err := src.Manifest(ctx)
if err != nil {
return nil, errors.Wrapf(err, "Error reading manifest from source image")
}
matches, err := manifest.MatchesDigest(sourceManifest, digested.Digest())
if err != nil {
return nil, errors.Wrapf(err, "Error computing digest of source image's manifest")
}
if !matches {
return nil, errors.New("Digest of source image's manifest would not match destination reference")
}
}
}
if err := checkImageDestinationForCurrentRuntimeOS(ctx, options.DestinationCtx, src, c.dest); err != nil {
return nil, err
}
@@ -251,15 +285,15 @@ func (c *copier) copyOneImage(ctx context.Context, policyContext *signature.Poli
manifestUpdates: &types.ManifestUpdateOptions{InformationOnly: types.ManifestUpdateInformation{Destination: c.dest}},
src: src,
// diffIDsAreNeeded is computed later
canModifyManifest: len(sigs) == 0,
// Ensure _this_ copy sees exactly the intended data when either processing a signed image or signing it.
// This may be too conservative, but for now, better safe than sorry, _especially_ on the SignBy path:
// The signature makes the content non-repudiable, so it very much matters that the signature is made over exactly what the user intended.
// We do intend the RecordDigestUncompressedPair calls to only work with reliable data, but at least there's a risk
// that the compressed version coming from a third party may be designed to attack some other decompressor implementation,
// and we would reuse and sign it.
canSubstituteBlobs: len(sigs) == 0 && options.SignBy == "",
canModifyManifest: len(sigs) == 0 && !destIsDigestedReference,
}
// Ensure _this_ copy sees exactly the intended data when either processing a signed image or signing it.
// This may be too conservative, but for now, better safe than sorry, _especially_ on the SignBy path:
// The signature makes the content non-repudiable, so it very much matters that the signature is made over exactly what the user intended.
// We do intend the RecordDigestUncompressedPair calls to only work with reliable data, but at least there's a risk
// that the compressed version coming from a third party may be designed to attack some other decompressor implementation,
// and we would reuse and sign it.
ic.canSubstituteBlobs = ic.canModifyManifest && options.SignBy == ""
if err := ic.updateEmbeddedDockerReference(); err != nil {
return nil, err
@@ -283,7 +317,7 @@ func (c *copier) copyOneImage(ctx context.Context, policyContext *signature.Poli
// and at least with the OpenShift registry "acceptschema2" option, there is no way to detect the support
// without actually trying to upload something and getting a types.ManifestTypeRejectedError.
// So, try the preferred manifest MIME type. If the process succeeds, fine…
manifest, err = ic.copyUpdatedConfigAndManifest(ctx)
manifestBytes, err = ic.copyUpdatedConfigAndManifest(ctx)
if err != nil {
logrus.Debugf("Writing manifest using preferred type %s failed: %v", preferredManifestMIMEType, err)
// … if it fails, _and_ the failure is because the manifest is rejected, we may have other options.
@@ -314,7 +348,7 @@ func (c *copier) copyOneImage(ctx context.Context, policyContext *signature.Poli
}
// We have successfully uploaded a manifest.
manifest = attemptedManifest
manifestBytes = attemptedManifest
errs = nil // Mark this as a success so that we don't abort below.
break
}
@@ -324,7 +358,7 @@ func (c *copier) copyOneImage(ctx context.Context, policyContext *signature.Poli
}
if options.SignBy != "" {
newSig, err := c.createSignature(manifest, options.SignBy)
newSig, err := c.createSignature(manifestBytes, options.SignBy)
if err != nil {
return nil, err
}
@@ -336,7 +370,7 @@ func (c *copier) copyOneImage(ctx context.Context, policyContext *signature.Poli
return nil, errors.Wrap(err, "Error writing signatures")
}
return manifest, nil
return manifestBytes, nil
}
// Printf writes a formatted string to c.reportWriter.
@@ -389,19 +423,12 @@ func (ic *imageCopier) updateEmbeddedDockerReference() error {
return nil
}
// shortDigest returns the first 12 characters of the digest.
func shortDigest(d digest.Digest) string {
return d.Encoded()[:12]
}
// createProgressBar creates a pb.ProgressBar.
func createProgressBar(srcInfo types.BlobInfo, kind string) *pb.ProgressBar {
bar := pb.New(int(srcInfo.Size)).SetUnits(pb.U_BYTES)
bar.SetMaxWidth(80)
bar.ShowTimeLeft = false
bar.ShowPercent = false
bar.Prefix(fmt.Sprintf("Copying %s %s:", kind, shortDigest(srcInfo.Digest)))
return bar
// isTTY returns true if the io.Writer is a file and a tty.
func isTTY(w io.Writer) bool {
if f, ok := w.(*os.File); ok {
return terminal.IsTerminal(int(f.Fd()))
}
return false
}
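// Editorial sketch of how the helper above is wired up (mirroring the copier
// construction earlier in this diff): when output is piped rather than a
// terminal, progress output is routed to ioutil.Discard and createProgressBar
// prints a single "Copying ..." line instead.
//
//	progressOutput := io.Writer(reportWriter)
//	if !isTTY(reportWriter) {
//		progressOutput = ioutil.Discard
//	}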
// copyLayers copies layers from ic.src/ic.c.rawSource to dest, using and updating ic.manifestUpdates if necessary and ic.canModifyManifest.
@@ -430,6 +457,7 @@ func (ic *imageCopier) copyLayers(ctx context.Context) error {
// copyGroup is used to determine if all layers are copied
copyGroup := sync.WaitGroup{}
copyGroup.Add(numLayers)
// copySemaphore is used to limit the number of parallel downloads to
// avoid malicious images causing troubles and to be nice to servers.
var copySemaphore *semaphore.Weighted
@@ -440,8 +468,7 @@ func (ic *imageCopier) copyLayers(ctx context.Context) error {
}
data := make([]copyLayerData, numLayers)
copyLayerHelper := func(index int, srcLayer types.BlobInfo, bar *pb.ProgressBar) {
defer bar.Finish()
copyLayerHelper := func(index int, srcLayer types.BlobInfo, pool *mpb.Progress) {
defer copySemaphore.Release(1)
defer copyGroup.Done()
cld := copyLayerData{}
@@ -451,43 +478,31 @@ func (ic *imageCopier) copyLayers(ctx context.Context) error {
// does not support them.
if ic.diffIDsAreNeeded {
cld.err = errors.New("getting DiffID for foreign layers is unimplemented")
bar.Prefix(fmt.Sprintf("Skipping blob %s (DiffID foreign layer unimplemented):", shortDigest(srcLayer.Digest)))
bar.Finish()
} else {
cld.destInfo = srcLayer
logrus.Debugf("Skipping foreign layer %q copy to %s\n", cld.destInfo.Digest, ic.c.dest.Reference().Transport().Name())
bar.Prefix(fmt.Sprintf("Skipping blob %s (foreign layer):", shortDigest(srcLayer.Digest)))
bar.Add64(bar.Total)
bar.Finish()
logrus.Debugf("Skipping foreign layer %q copy to %s", cld.destInfo.Digest, ic.c.dest.Reference().Transport().Name())
}
} else {
cld.destInfo, cld.diffID, cld.err = ic.copyLayer(ctx, srcLayer, bar)
cld.destInfo, cld.diffID, cld.err = ic.copyLayer(ctx, srcLayer, pool)
}
data[index] = cld
}
progressBars := make([]*pb.ProgressBar, numLayers)
for i, srcInfo := range srcInfos {
bar := createProgressBar(srcInfo, "blob")
progressBars[i] = bar
}
func() { // A scope for defer
progressPool, progressCleanup := ic.c.newProgressPool(ctx)
defer progressCleanup()
progressPool := pb.NewPool(progressBars...)
if err := progressPool.Start(); err != nil {
return errors.Wrapf(err, "error creating progress-bar pool")
}
for i, srcLayer := range srcInfos {
copySemaphore.Acquire(ctx, 1)
go copyLayerHelper(i, srcLayer, progressPool)
}
for i, srcLayer := range srcInfos {
copySemaphore.Acquire(ctx, 1)
go copyLayerHelper(i, srcLayer, progressBars[i])
}
// Wait for all layers to be copied
copyGroup.Wait()
}()
destInfos := make([]types.BlobInfo, numLayers)
diffIDs := make([]digest.Digest, numLayers)
copyGroup.Wait()
progressPool.Stop()
for i, cld := range data {
if cld.err != nil {
return cld.err
@@ -558,6 +573,45 @@ func (ic *imageCopier) copyUpdatedConfigAndManifest(ctx context.Context) ([]byte
return manifest, nil
}
// newProgressPool creates a *mpb.Progress and a cleanup function.
// The caller must eventually call the returned cleanup function once the pool is no longer going to be updated.
func (c *copier) newProgressPool(ctx context.Context) (*mpb.Progress, func()) {
ctx, cancel := context.WithCancel(ctx)
pool := mpb.New(mpb.WithWidth(40), mpb.WithOutput(c.progressOutput), mpb.WithContext(ctx))
return pool, func() {
cancel()
pool.Wait()
}
}
// createProgressBar creates a mpb.Bar in pool. Note that if the copier's reportWriter
// is ioutil.Discard, the progress bar's output will be discarded
func (c *copier) createProgressBar(pool *mpb.Progress, info types.BlobInfo, kind string, onComplete string) *mpb.Bar {
// shortDigestLen is the length of the digest used for blobs.
const shortDigestLen = 12
prefix := fmt.Sprintf("Copying %s %s", kind, info.Digest.Encoded())
// Truncate the prefix (chopping off part of the digest) to make all progress bars aligned in a column.
maxPrefixLen := len("Copying blob ") + shortDigestLen
if len(prefix) > maxPrefixLen {
prefix = prefix[:maxPrefixLen]
}
bar := pool.AddBar(info.Size,
mpb.BarClearOnComplete(),
mpb.PrependDecorators(
decor.Name(prefix),
),
mpb.AppendDecorators(
decor.OnComplete(decor.CountersKibiByte("%.1f / %.1f"), " "+onComplete),
),
)
if c.progressOutput == ioutil.Discard {
c.Printf("Copying %s %s\n", kind, info.Digest)
}
return bar
}
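// Editorial sketch of the pool/bar lifecycle used by the callers below:
// create a pool, add a bar, wrap the source stream, and mark the bar
// complete once the blob has been written.
//
//	pool, cleanup := c.newProgressPool(ctx)
//	defer cleanup()
//	bar := c.createProgressBar(pool, srcInfo, "blob", "done")
//	reader := bar.ProxyReader(srcStream)
//	// ... copy from reader ...
//	bar.SetTotal(srcInfo.Size, true)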
// copyConfig copies config.json, if any, from src to dest.
func (c *copier) copyConfig(ctx context.Context, src types.Image) error {
srcInfo := src.ConfigInfo()
@@ -566,12 +620,20 @@ func (c *copier) copyConfig(ctx context.Context, src types.Image) error {
if err != nil {
return errors.Wrapf(err, "Error reading config blob %s", srcInfo.Digest)
}
bar := createProgressBar(srcInfo, "config")
defer bar.Finish()
bar.Start()
destInfo, err := c.copyBlobFromStream(ctx, bytes.NewReader(configBlob), srcInfo, nil, false, true, bar)
destInfo, err := func() (types.BlobInfo, error) { // A scope for defer
progressPool, progressCleanup := c.newProgressPool(ctx)
defer progressCleanup()
bar := c.createProgressBar(progressPool, srcInfo, "config", "done")
destInfo, err := c.copyBlobFromStream(ctx, bytes.NewReader(configBlob), srcInfo, nil, false, true, bar)
if err != nil {
return types.BlobInfo{}, err
}
bar.SetTotal(int64(len(configBlob)), true)
return destInfo, nil
}()
if err != nil {
return err
return nil
}
if destInfo.Digest != srcInfo.Digest {
return errors.Errorf("Internal error: copying uncompressed config blob %s changed digest to %s", srcInfo.Digest, destInfo.Digest)
@@ -589,7 +651,7 @@ type diffIDResult struct {
// copyLayer copies a layer with srcInfo (with known Digest and possibly known Size) in src to dest, perhaps compressing it if canCompress,
// and returns a complete blobInfo of the copied layer, and a value for LayerDiffIDs if diffIDIsNeeded
func (ic *imageCopier) copyLayer(ctx context.Context, srcInfo types.BlobInfo, bar *pb.ProgressBar) (types.BlobInfo, digest.Digest, error) {
func (ic *imageCopier) copyLayer(ctx context.Context, srcInfo types.BlobInfo, pool *mpb.Progress) (types.BlobInfo, digest.Digest, error) {
cachedDiffID := ic.c.blobInfoCache.UncompressedDigest(srcInfo.Digest) // May be ""
diffIDIsNeeded := ic.diffIDsAreNeeded && cachedDiffID == ""
@@ -600,9 +662,9 @@ func (ic *imageCopier) copyLayer(ctx context.Context, srcInfo types.BlobInfo, ba
return types.BlobInfo{}, "", errors.Wrapf(err, "Error trying to reuse blob %s at destination", srcInfo.Digest)
}
if reused {
bar.Prefix(fmt.Sprintf("Skipping blob %s (already present):", shortDigest(srcInfo.Digest)))
bar.Add64(bar.Total)
bar.Finish()
logrus.Debugf("Skipping blob %s (already present):", srcInfo.Digest)
bar := ic.c.createProgressBar(pool, srcInfo, "blob", "skipped: already exists")
bar.SetTotal(0, true)
return blobInfo, cachedDiffID, nil
}
}
@@ -614,11 +676,14 @@ func (ic *imageCopier) copyLayer(ctx context.Context, srcInfo types.BlobInfo, ba
}
defer srcStream.Close()
blobInfo, diffIDChan, err := ic.copyLayerFromStream(ctx, srcStream, types.BlobInfo{Digest: srcInfo.Digest, Size: srcBlobSize},
diffIDIsNeeded, bar)
bar := ic.c.createProgressBar(pool, srcInfo, "blob", "done")
blobInfo, diffIDChan, err := ic.copyLayerFromStream(ctx, srcStream, types.BlobInfo{Digest: srcInfo.Digest, Size: srcBlobSize}, diffIDIsNeeded, bar)
if err != nil {
return types.BlobInfo{}, "", err
}
diffID := cachedDiffID
if diffIDIsNeeded {
select {
case <-ctx.Done():
@@ -631,11 +696,12 @@ func (ic *imageCopier) copyLayer(ctx context.Context, srcInfo types.BlobInfo, ba
// This is safe because we have just computed diffIDResult.Digest ourselves, and in the process
// we have read all of the input blob, so srcInfo.Digest must have been validated by digestingReader.
ic.c.blobInfoCache.RecordDigestUncompressedPair(srcInfo.Digest, diffIDResult.digest)
return blobInfo, diffIDResult.digest, nil
diffID = diffIDResult.digest
}
} else {
return blobInfo, cachedDiffID, nil
}
bar.SetTotal(srcInfo.Size, true)
return blobInfo, diffID, nil
}
// copyLayerFromStream is an implementation detail of copyLayer; mostly providing a separate “defer” scope.
@@ -643,7 +709,7 @@ func (ic *imageCopier) copyLayer(ctx context.Context, srcInfo types.BlobInfo, ba
// perhaps compressing the stream if canCompress,
// and returns a complete blobInfo of the copied blob and perhaps a <-chan diffIDResult if diffIDIsNeeded, to be read by the caller.
func (ic *imageCopier) copyLayerFromStream(ctx context.Context, srcStream io.Reader, srcInfo types.BlobInfo,
diffIDIsNeeded bool, bar *pb.ProgressBar) (types.BlobInfo, <-chan diffIDResult, error) {
diffIDIsNeeded bool, bar *mpb.Bar) (types.BlobInfo, <-chan diffIDResult, error) {
var getDiffIDRecorder func(compression.DecompressorFunc) io.Writer // = nil
var diffIDChan chan diffIDResult
@@ -704,7 +770,7 @@ func computeDiffID(stream io.Reader, decompressor compression.DecompressorFunc)
// and returns a complete blobInfo of the copied blob.
func (c *copier) copyBlobFromStream(ctx context.Context, srcStream io.Reader, srcInfo types.BlobInfo,
getOriginalLayerCopyWriter func(decompressor compression.DecompressorFunc) io.Writer,
canModifyBlob bool, isConfig bool, bar *pb.ProgressBar) (types.BlobInfo, error) {
canModifyBlob bool, isConfig bool, bar *mpb.Bar) (types.BlobInfo, error) {
// The copying happens through a pipeline of connected io.Readers.
// === Input: srcStream
@@ -727,7 +793,7 @@ func (c *copier) copyBlobFromStream(ctx context.Context, srcStream io.Reader, sr
return types.BlobInfo{}, errors.Wrapf(err, "Error reading blob %s", srcInfo.Digest)
}
isCompressed := decompressor != nil
destStream = bar.NewProxyReader(destStream)
destStream = bar.ProxyReader(destStream)
// === Send a copy of the original, uncompressed, stream, to a separate path if necessary.
var originalLayerReader io.Reader // DO NOT USE this other than to drain the input if no other consumer in the pipeline has done so.

View File

@@ -23,7 +23,7 @@ import (
"github.com/containers/image/types"
"github.com/docker/distribution/registry/client"
"github.com/docker/go-connections/tlsconfig"
"github.com/opencontainers/go-digest"
digest "github.com/opencontainers/go-digest"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
)
@@ -81,29 +81,30 @@ type bearerToken struct {
// dockerClient is configuration for dealing with a single Docker registry.
type dockerClient struct {
// The following members are set by newDockerClient and do not change afterwards.
sys *types.SystemContext
registry string
client *http.Client
insecureSkipTLSVerify bool
sys *types.SystemContext
registry string
// tlsClientConfig is set up by newDockerClient and will be used and updated
// by detectProperties(). Callers can edit tlsClientConfig.InsecureSkipVerify in the meantime.
tlsClientConfig *tls.Config
// The following members are not set by newDockerClient and must be set by callers if needed.
username string
password string
signatureBase signatureStorageBase
scope authScope
extraScope *authScope // If non-nil, a temporary extra token scope (necessary for mounting from another repo)
// The following members are detected registry properties:
// They are set after a successful detectProperties(), and never change afterwards.
scheme string // Empty value also used to indicate detectProperties() has not yet succeeded.
client *http.Client
scheme string
challenges []challenge
supportsSignatures bool
// Private state for setupRequestAuth (key: string, value: bearerToken)
tokenCache sync.Map
// detectPropertiesError caches the initial error.
detectPropertiesError error
// detectPropertiesOnce is used to execuute detectProperties() at most once in in makeRequest().
detectPropertiesOnce sync.Once
// Private state for detectProperties:
detectPropertiesOnce sync.Once // detectPropertiesOnce is used to execute detectProperties() at most once.
detectPropertiesError error // detectPropertiesError caches the initial error.
}
type authScope struct {
@@ -198,7 +199,7 @@ func dockerCertDir(sys *types.SystemContext, hostPort string) (string, error) {
// “write” specifies whether the client will be used for "write" access (in particular passed to lookaside.go:toplevelFromSection)
func newDockerClientFromRef(sys *types.SystemContext, ref dockerReference, write bool, actions string) (*dockerClient, error) {
registry := reference.Domain(ref.ref)
username, password, err := config.GetAuthentication(sys, reference.Domain(ref.ref))
username, password, err := config.GetAuthentication(sys, registry)
if err != nil {
return nil, errors.Wrapf(err, "error getting username and password")
}
@@ -230,8 +231,7 @@ func newDockerClient(sys *types.SystemContext, registry, reference string) (*doc
if registry == dockerHostname {
registry = dockerRegistry
}
tr := tlsclientconfig.NewTransport()
tr.TLSClientConfig = serverDefault()
tlsClientConfig := serverDefault()
// It is undefined whether the host[:port] string for dockerHostname should be dockerHostname or dockerRegistry,
// because docker/docker does not read the certs.d subdirectory at all in that case. We use the user-visible
@@ -242,38 +242,31 @@ func newDockerClient(sys *types.SystemContext, registry, reference string) (*doc
if err != nil {
return nil, err
}
if err := tlsclientconfig.SetupCertificates(certDir, tr.TLSClientConfig); err != nil {
if err := tlsclientconfig.SetupCertificates(certDir, tlsClientConfig); err != nil {
return nil, err
}
// Check if TLS verification shall be skipped (default=false) which can
// either be specified in the sysregistriesv2 configuration or via the
// SystemContext, whereas the SystemContext is prioritized.
// be specified in the sysregistriesv2 configuration.
skipVerify := false
if sys != nil && sys.DockerInsecureSkipTLSVerify != types.OptionalBoolUndefined {
// Only use the SystemContext if the actual value is defined.
skipVerify = sys.DockerInsecureSkipTLSVerify == types.OptionalBoolTrue
} else {
reg, err := sysregistriesv2.FindRegistry(sys, reference)
if err != nil {
return nil, errors.Wrapf(err, "error loading registries")
}
if reg != nil {
skipVerify = reg.Insecure
}
reg, err := sysregistriesv2.FindRegistry(sys, reference)
if err != nil {
return nil, errors.Wrapf(err, "error loading registries")
}
tr.TLSClientConfig.InsecureSkipVerify = skipVerify
if reg != nil {
skipVerify = reg.Insecure
}
tlsClientConfig.InsecureSkipVerify = skipVerify
return &dockerClient{
sys: sys,
registry: registry,
client: &http.Client{Transport: tr},
insecureSkipTLSVerify: skipVerify,
sys: sys,
registry: registry,
tlsClientConfig: tlsClientConfig,
}, nil
}
// CheckAuth validates the credentials by attempting to log into the registry
// returns an error if an error occcured while making the http request or the status code received was 401
// returns an error if an error occurred while making the http request or the status code received was 401
func CheckAuth(ctx context.Context, sys *types.SystemContext, username, password, registry string) error {
client, err := newDockerClient(sys, registry, registry)
if err != nil {
@@ -282,7 +275,7 @@ func CheckAuth(ctx context.Context, sys *types.SystemContext, username, password
client.username = username
client.password = password
resp, err := client.makeRequest(ctx, "GET", "/v2/", nil, nil, v2Auth)
resp, err := client.makeRequest(ctx, "GET", "/v2/", nil, nil, v2Auth, nil)
if err != nil {
return err
}
@@ -362,8 +355,8 @@ func SearchRegistry(ctx context.Context, sys *types.SystemContext, registry, ima
q.Set("n", strconv.Itoa(limit))
u.RawQuery = q.Encode()
logrus.Debugf("trying to talk to v1 search endpoint\n")
resp, err := client.makeRequest(ctx, "GET", u.String(), nil, nil, noAuth)
logrus.Debugf("trying to talk to v1 search endpoint")
resp, err := client.makeRequest(ctx, "GET", u.String(), nil, nil, noAuth, nil)
if err != nil {
logrus.Debugf("error getting search results from v1 endpoint %q: %v", registry, err)
} else {
@@ -379,8 +372,8 @@ func SearchRegistry(ctx context.Context, sys *types.SystemContext, registry, ima
}
}
logrus.Debugf("trying to talk to v2 search endpoint\n")
resp, err := client.makeRequest(ctx, "GET", "/v2/_catalog", nil, nil, v2Auth)
logrus.Debugf("trying to talk to v2 search endpoint")
resp, err := client.makeRequest(ctx, "GET", "/v2/_catalog", nil, nil, v2Auth, nil)
if err != nil {
logrus.Debugf("error getting search results from v2 endpoint %q: %v", registry, err)
} else {
@@ -409,20 +402,20 @@ func SearchRegistry(ctx context.Context, sys *types.SystemContext, registry, ima
// makeRequest creates and executes a http.Request with the specified parameters, adding authentication and TLS options for the Docker client.
// The host name and schema is taken from the client or autodetected, and the path is relative to it, i.e. the path usually starts with /v2/.
func (c *dockerClient) makeRequest(ctx context.Context, method, path string, headers map[string][]string, stream io.Reader, auth sendAuth) (*http.Response, error) {
func (c *dockerClient) makeRequest(ctx context.Context, method, path string, headers map[string][]string, stream io.Reader, auth sendAuth, extraScope *authScope) (*http.Response, error) {
if err := c.detectProperties(ctx); err != nil {
return nil, err
}
url := fmt.Sprintf("%s://%s%s", c.scheme, c.registry, path)
return c.makeRequestToResolvedURL(ctx, method, url, headers, stream, -1, auth)
return c.makeRequestToResolvedURL(ctx, method, url, headers, stream, -1, auth, extraScope)
}
// makeRequestToResolvedURL creates and executes a http.Request with the specified parameters, adding authentication and TLS options for the Docker client.
// streamLen, if not -1, specifies the length of the data expected on stream.
// makeRequest should generally be preferred.
// TODO(runcom): too many arguments here, use a struct
func (c *dockerClient) makeRequestToResolvedURL(ctx context.Context, method, url string, headers map[string][]string, stream io.Reader, streamLen int64, auth sendAuth) (*http.Response, error) {
func (c *dockerClient) makeRequestToResolvedURL(ctx context.Context, method, url string, headers map[string][]string, stream io.Reader, streamLen int64, auth sendAuth, extraScope *authScope) (*http.Response, error) {
req, err := http.NewRequest(method, url, stream)
if err != nil {
return nil, err
@@ -441,7 +434,7 @@ func (c *dockerClient) makeRequestToResolvedURL(ctx context.Context, method, url
req.Header.Add("User-Agent", c.sys.DockerRegistryUserAgent)
}
if auth == v2Auth {
if err := c.setupRequestAuth(req); err != nil {
if err := c.setupRequestAuth(req, extraScope); err != nil {
return nil, err
}
}
@@ -460,7 +453,7 @@ func (c *dockerClient) makeRequestToResolvedURL(ctx context.Context, method, url
// 2) gcr.io is sending 401 without a WWW-Authenticate header in the real request
//
// debugging: https://github.com/containers/image/pull/211#issuecomment-273426236 and follows up
func (c *dockerClient) setupRequestAuth(req *http.Request) error {
func (c *dockerClient) setupRequestAuth(req *http.Request, extraScope *authScope) error {
if len(c.challenges) == 0 {
return nil
}
@@ -474,10 +467,10 @@ func (c *dockerClient) setupRequestAuth(req *http.Request) error {
case "bearer":
cacheKey := ""
scopes := []authScope{c.scope}
if c.extraScope != nil {
if extraScope != nil {
// Using ':' as a separator here is unambiguous because getBearerToken below uses the same separator when formatting a remote request (and because repository names can't contain colons).
cacheKey = fmt.Sprintf("%s:%s", c.extraScope.remoteName, c.extraScope.actions)
scopes = append(scopes, *c.extraScope)
cacheKey = fmt.Sprintf("%s:%s", extraScope.remoteName, extraScope.actions)
scopes = append(scopes, *extraScope)
}
var token bearerToken
t, inCache := c.tokenCache.Load(cacheKey)
@@ -558,13 +551,18 @@ func (c *dockerClient) getBearerToken(ctx context.Context, challenge challenge,
// detectPropertiesHelper performs the work of detectProperties which executes
// it at most once.
func (c *dockerClient) detectPropertiesHelper(ctx context.Context) error {
if c.scheme != "" {
return nil
// We overwrite the TLS client's `InsecureSkipVerify` only if explicitly
// specified by the system context
if c.sys != nil && c.sys.DockerInsecureSkipTLSVerify != types.OptionalBoolUndefined {
c.tlsClientConfig.InsecureSkipVerify = c.sys.DockerInsecureSkipTLSVerify == types.OptionalBoolTrue
}
tr := tlsclientconfig.NewTransport()
tr.TLSClientConfig = c.tlsClientConfig
c.client = &http.Client{Transport: tr}
ping := func(scheme string) error {
url := fmt.Sprintf(resolvedPingV2URL, scheme, c.registry)
resp, err := c.makeRequestToResolvedURL(ctx, "GET", url, nil, nil, -1, noAuth)
resp, err := c.makeRequestToResolvedURL(ctx, "GET", url, nil, nil, -1, noAuth, nil)
if err != nil {
logrus.Debugf("Ping %s err %s (%#v)", url, err.Error(), err)
return err
@@ -580,7 +578,7 @@ func (c *dockerClient) detectPropertiesHelper(ctx context.Context) error {
return nil
}
err := ping("https")
if err != nil && c.insecureSkipTLSVerify {
if err != nil && c.tlsClientConfig.InsecureSkipVerify {
err = ping("http")
}
if err != nil {
@@ -591,7 +589,7 @@ func (c *dockerClient) detectPropertiesHelper(ctx context.Context) error {
// best effort to understand if we're talking to a V1 registry
pingV1 := func(scheme string) bool {
url := fmt.Sprintf(resolvedPingV1URL, scheme, c.registry)
resp, err := c.makeRequestToResolvedURL(ctx, "GET", url, nil, nil, -1, noAuth)
resp, err := c.makeRequestToResolvedURL(ctx, "GET", url, nil, nil, -1, noAuth, nil)
if err != nil {
logrus.Debugf("Ping %s err %s (%#v)", url, err.Error(), err)
return false
@@ -604,7 +602,7 @@ func (c *dockerClient) detectPropertiesHelper(ctx context.Context) error {
return true
}
isV1 := pingV1("https")
if !isV1 && c.insecureSkipTLSVerify {
if !isV1 && c.tlsClientConfig.InsecureSkipVerify {
isV1 = pingV1("http")
}
if isV1 {
@@ -625,7 +623,7 @@ func (c *dockerClient) detectProperties(ctx context.Context) error {
// using the original data structures.
func (c *dockerClient) getExtensionsSignatures(ctx context.Context, ref dockerReference, manifestDigest digest.Digest) (*extensionSignatureList, error) {
path := fmt.Sprintf(extensionsSignaturePath, reference.Path(ref.ref), manifestDigest)
res, err := c.makeRequest(ctx, "GET", path, nil, nil, v2Auth)
res, err := c.makeRequest(ctx, "GET", path, nil, nil, v2Auth, nil)
if err != nil {
return nil, err
}
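
To make the new plumbing concrete, here is an editorial sketch (the repository name is invented; authScope and makeRequest are the package-internal items shown above, and c, ctx, and checkPath are assumed to be in scope) of how a caller inside this package passes a temporary extra scope instead of mutating dockerClient state:

// Request "pull" access to a second repository for one request only,
// e.g. when checking whether a blob exists there before mounting it.
scope := &authScope{
	remoteName: "library/alpine", // hypothetical source repository
	actions:    "pull",
}
res, err := c.makeRequest(ctx, "HEAD", checkPath, nil, nil, v2Auth, scope)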

View File

@@ -25,7 +25,7 @@ type Image struct {
// a client to the registry hosting the given image.
// The caller must call .Close() on the returned Image.
func newImage(ctx context.Context, sys *types.SystemContext, ref dockerReference) (types.ImageCloser, error) {
s, err := newImageSource(sys, ref)
s, err := newImageSource(ctx, sys, ref)
if err != nil {
return nil, err
}
@@ -66,7 +66,7 @@ func GetRepositoryTags(ctx context.Context, sys *types.SystemContext, ref types.
tags := make([]string, 0)
for {
res, err := client.makeRequest(ctx, "GET", path, nil, nil, v2Auth)
res, err := client.makeRequest(ctx, "GET", path, nil, nil, v2Auth, nil)
if err != nil {
return nil, err
}

View File

@@ -12,10 +12,11 @@ import (
"net/url"
"os"
"path/filepath"
"strings"
"github.com/containers/image/docker/reference"
"github.com/containers/image/manifest"
"github.com/containers/image/pkg/blobinfocache"
"github.com/containers/image/pkg/blobinfocache/none"
"github.com/containers/image/types"
"github.com/docker/distribution/registry/api/errcode"
"github.com/docker/distribution/registry/api/v2"
@@ -128,7 +129,7 @@ func (d *dockerImageDestination) PutBlob(ctx context.Context, stream io.Reader,
// This should not really be necessary, at least the copy code calls TryReusingBlob automatically.
// Still, we need to check, if only because the "initiate upload" endpoint does not have a documented "blob already exists" return value.
// But we do that with NoCache, so that it _only_ checks the primary destination, instead of trying all mount candidates _again_.
haveBlob, reusedInfo, err := d.TryReusingBlob(ctx, inputInfo, blobinfocache.NoCache, false)
haveBlob, reusedInfo, err := d.TryReusingBlob(ctx, inputInfo, none.NoCache, false)
if err != nil {
return types.BlobInfo{}, err
}
@@ -140,7 +141,7 @@ func (d *dockerImageDestination) PutBlob(ctx context.Context, stream io.Reader,
// FIXME? Chunked upload, progress reporting, etc.
uploadPath := fmt.Sprintf(blobUploadPath, reference.Path(d.ref.ref))
logrus.Debugf("Uploading %s", uploadPath)
res, err := d.c.makeRequest(ctx, "POST", uploadPath, nil, nil, v2Auth)
res, err := d.c.makeRequest(ctx, "POST", uploadPath, nil, nil, v2Auth, nil)
if err != nil {
return types.BlobInfo{}, err
}
@@ -157,7 +158,7 @@ func (d *dockerImageDestination) PutBlob(ctx context.Context, stream io.Reader,
digester := digest.Canonical.Digester()
sizeCounter := &sizeCounter{}
tee := io.TeeReader(stream, io.MultiWriter(digester.Hash(), sizeCounter))
res, err = d.c.makeRequestToResolvedURL(ctx, "PATCH", uploadLocation.String(), map[string][]string{"Content-Type": {"application/octet-stream"}}, tee, inputInfo.Size, v2Auth)
res, err = d.c.makeRequestToResolvedURL(ctx, "PATCH", uploadLocation.String(), map[string][]string{"Content-Type": {"application/octet-stream"}}, tee, inputInfo.Size, v2Auth, nil)
if err != nil {
logrus.Debugf("Error uploading layer chunked, response %#v", res)
return types.BlobInfo{}, err
@@ -176,7 +177,7 @@ func (d *dockerImageDestination) PutBlob(ctx context.Context, stream io.Reader,
// TODO: check inputInfo.Digest == computedDigest https://github.com/containers/image/pull/70#discussion_r77646717
locationQuery.Set("digest", computedDigest.String())
uploadLocation.RawQuery = locationQuery.Encode()
res, err = d.c.makeRequestToResolvedURL(ctx, "PUT", uploadLocation.String(), map[string][]string{"Content-Type": {"application/octet-stream"}}, nil, -1, v2Auth)
res, err = d.c.makeRequestToResolvedURL(ctx, "PUT", uploadLocation.String(), map[string][]string{"Content-Type": {"application/octet-stream"}}, nil, -1, v2Auth, nil)
if err != nil {
return types.BlobInfo{}, err
}
@@ -194,10 +195,10 @@ func (d *dockerImageDestination) PutBlob(ctx context.Context, stream io.Reader,
// blobExists returns true iff repo contains a blob with digest, and if so, also its size.
// If the destination does not contain the blob, or it is unknown, blobExists ordinarily returns (false, -1, nil);
// it returns a non-nil error only on an unexpected failure.
func (d *dockerImageDestination) blobExists(ctx context.Context, repo reference.Named, digest digest.Digest) (bool, int64, error) {
func (d *dockerImageDestination) blobExists(ctx context.Context, repo reference.Named, digest digest.Digest, extraScope *authScope) (bool, int64, error) {
checkPath := fmt.Sprintf(blobsPath, reference.Path(repo), digest.String())
logrus.Debugf("Checking %s", checkPath)
res, err := d.c.makeRequest(ctx, "HEAD", checkPath, nil, nil, v2Auth)
res, err := d.c.makeRequest(ctx, "HEAD", checkPath, nil, nil, v2Auth, extraScope)
if err != nil {
return false, -1, err
}
@@ -218,7 +219,7 @@ func (d *dockerImageDestination) blobExists(ctx context.Context, repo reference.
}
// mountBlob tries to mount blob srcDigest from srcRepo to the current destination.
func (d *dockerImageDestination) mountBlob(ctx context.Context, srcRepo reference.Named, srcDigest digest.Digest) error {
func (d *dockerImageDestination) mountBlob(ctx context.Context, srcRepo reference.Named, srcDigest digest.Digest, extraScope *authScope) error {
u := url.URL{
Path: fmt.Sprintf(blobUploadPath, reference.Path(d.ref.ref)),
RawQuery: url.Values{
@@ -228,7 +229,7 @@ func (d *dockerImageDestination) mountBlob(ctx context.Context, srcRepo referenc
}
mountPath := u.String()
logrus.Debugf("Trying to mount %s", mountPath)
res, err := d.c.makeRequest(ctx, "POST", mountPath, nil, nil, v2Auth)
res, err := d.c.makeRequest(ctx, "POST", mountPath, nil, nil, v2Auth, extraScope)
if err != nil {
return err
}
@@ -246,7 +247,7 @@ func (d *dockerImageDestination) mountBlob(ctx context.Context, srcRepo referenc
return errors.Wrap(err, "Error determining upload URL after a mount attempt")
}
logrus.Debugf("... started an upload instead of mounting, trying to cancel at %s", uploadLocation.String())
res2, err := d.c.makeRequestToResolvedURL(ctx, "DELETE", uploadLocation.String(), nil, nil, -1, v2Auth)
res2, err := d.c.makeRequestToResolvedURL(ctx, "DELETE", uploadLocation.String(), nil, nil, -1, v2Auth, extraScope)
if err != nil {
logrus.Debugf("Error trying to cancel an inadvertent upload: %s", err)
} else {
@@ -276,7 +277,7 @@ func (d *dockerImageDestination) TryReusingBlob(ctx context.Context, info types.
}
// First, check whether the blob happens to already exist at the destination.
exists, size, err := d.blobExists(ctx, d.ref.ref, info.Digest)
exists, size, err := d.blobExists(ctx, d.ref.ref, info.Digest, nil)
if err != nil {
return false, types.BlobInfo{}, err
}
@@ -286,15 +287,6 @@ func (d *dockerImageDestination) TryReusingBlob(ctx context.Context, info types.
}
// Then try reusing blobs from other locations.
// Checking candidateRepo, and mounting from it, requires an expanded token scope.
// We still want to reuse the ping information and other aspects of the client, so rather than make a fresh copy, there is this somewhat ugly extraScope hack.
if d.c.extraScope != nil {
return false, types.BlobInfo{}, errors.New("Internal error: dockerClient.extraScope was set before TryReusingBlob")
}
defer func() {
d.c.extraScope = nil
}()
for _, candidate := range cache.CandidateLocations(d.ref.Transport(), bicTransportScope(d.ref), info.Digest, canSubstitute) {
candidateRepo, err := parseBICLocationReference(candidate.Location)
if err != nil {
@@ -314,7 +306,10 @@ func (d *dockerImageDestination) TryReusingBlob(ctx context.Context, info types.
}
// Whatever happens here, don't abort the entire operation. It's likely we just don't have permissions, and if it is a critical network error, we will find out soon enough anyway.
d.c.extraScope = &authScope{
// Checking candidateRepo, and mounting from it, requires an
// expanded token scope.
extraScope := &authScope{
remoteName: reference.Path(candidateRepo),
actions: "pull",
}
@@ -325,7 +320,7 @@ func (d *dockerImageDestination) TryReusingBlob(ctx context.Context, info types.
// Even worse, docker/distribution does not actually reasonably implement canceling uploads
// (it would require a "delete" action in the token, and Quay does not give that to anyone, so we can't ask);
// so, be a nice client and don't create unnecessary upload sessions on the server.
exists, size, err := d.blobExists(ctx, candidateRepo, candidate.Digest)
exists, size, err := d.blobExists(ctx, candidateRepo, candidate.Digest, extraScope)
if err != nil {
logrus.Debugf("... Failed: %v", err)
continue
@@ -335,7 +330,7 @@ func (d *dockerImageDestination) TryReusingBlob(ctx context.Context, info types.
continue // logrus.Debug() already happened in blobExists
}
if candidateRepo.Name() != d.ref.ref.Name() {
if err := d.mountBlob(ctx, candidateRepo, candidate.Digest); err != nil {
if err := d.mountBlob(ctx, candidateRepo, candidate.Digest, extraScope); err != nil {
logrus.Debugf("... Mount failed: %v", err)
continue
}
@@ -369,7 +364,7 @@ func (d *dockerImageDestination) PutManifest(ctx context.Context, m []byte) erro
if mimeType != "" {
headers["Content-Type"] = []string{mimeType}
}
res, err := d.c.makeRequest(ctx, "PUT", path, headers, bytes.NewReader(m), v2Auth)
res, err := d.c.makeRequest(ctx, "PUT", path, headers, bytes.NewReader(m), v2Auth, nil)
if err != nil {
return err
}
@@ -396,14 +391,29 @@ func isManifestInvalidError(err error) bool {
if !ok || len(errors) == 0 {
return false
}
ec, ok := errors[0].(errcode.ErrorCoder)
err = errors[0]
ec, ok := err.(errcode.ErrorCoder)
if !ok {
return false
}
switch ec.ErrorCode() {
// ErrorCodeManifestInvalid is returned by OpenShift with acceptschema2=false.
case v2.ErrorCodeManifestInvalid:
return true
// ErrorCodeTagInvalid is returned by docker/distribution (at least as of commit ec87e9b6971d831f0eff752ddb54fb64693e51cd)
// when uploading to a tag (because it can't find a matching tag inside the manifest)
return ec.ErrorCode() == v2.ErrorCodeManifestInvalid || ec.ErrorCode() == v2.ErrorCodeTagInvalid
case v2.ErrorCodeTagInvalid:
return true
// ErrorCodeUnsupported with 'Invalid JSON syntax' is returned by AWS ECR when
// uploading an OCI manifest that is (correctly, according to the spec) missing
// a top-level media type. See libpod issue #1719
// FIXME: remove this case when ECR behavior is fixed
case errcode.ErrorCodeUnsupported:
return strings.Contains(err.Error(), "Invalid JSON syntax")
default:
return false
}
}
func (d *dockerImageDestination) PutSignatures(ctx context.Context, signatures [][]byte) error {
@@ -574,7 +584,7 @@ sigExists:
}
path := fmt.Sprintf(extensionsSignaturePath, reference.Path(d.ref.ref), d.manifestDigest.String())
res, err := d.c.makeRequest(ctx, "PUT", path, nil, bytes.NewReader(body), v2Auth)
res, err := d.c.makeRequest(ctx, "PUT", path, nil, bytes.NewReader(body), v2Auth, nil)
if err != nil {
return err
}


@@ -13,9 +13,10 @@ import (
"github.com/containers/image/docker/reference"
"github.com/containers/image/manifest"
"github.com/containers/image/pkg/sysregistriesv2"
"github.com/containers/image/types"
"github.com/docker/distribution/registry/client"
"github.com/opencontainers/go-digest"
digest "github.com/opencontainers/go-digest"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
)
@@ -28,17 +29,94 @@ type dockerImageSource struct {
cachedManifestMIMEType string // Only valid if cachedManifest != nil
}
// newImageSource creates a new ImageSource for the specified image reference.
// The caller must call .Close() on the returned ImageSource.
func newImageSource(sys *types.SystemContext, ref dockerReference) (*dockerImageSource, error) {
c, err := newDockerClientFromRef(sys, ref, false, "pull")
// newImageSource creates a new `ImageSource` for the specified image reference
// `ref`.
//
// The following steps will be done during the instance creation:
//
// - Look up the registry within the configured location in
// `sys.SystemRegistriesConfPath`. If there is no configured registry available,
// we fall back to the provided docker reference `ref`.
//
// - References which contain a configured prefix will be automatically rewritten
// to the correct target reference. For example, if the configured
// `prefix = "example.com/foo"`, `location = "example.com"`, and the image is
// pulled from the ref `example.com/foo/image`, then the resulting pull will
// effectively point to `example.com/image`.
//
// - If the rewritten reference succeeds, it will be used as the `dockerRef`
// in the client. If the rewrite fails, the function immediately returns an error.
//
// - Each mirror will be used (in the configured order) to test the
// availability of the image manifest on the remote location. For example,
// if the manifest is not reachable due to connectivity issues, then the next
// mirror will be tested instead. If no mirror is configured or contains the
// target manifest, then the initial `ref` will be tested as fallback. The
// creation of the new `dockerImageSource` only succeeds if a remote
// location with the available manifest was found.
//
// A cleanup call to `.Close()` is needed when the caller is done using the returned
// `ImageSource`.
func newImageSource(ctx context.Context, sys *types.SystemContext, ref dockerReference) (*dockerImageSource, error) {
registry, err := sysregistriesv2.FindRegistry(sys, ref.ref.Name())
if err != nil {
return nil, err
return nil, errors.Wrapf(err, "error loading registries configuration")
}
return &dockerImageSource{
ref: ref,
c: c,
}, nil
if registry == nil {
// No configuration was found for the provided reference, so we create
// a fallback registry by hand to make the client creation below work
// as intended.
registry = &sysregistriesv2.Registry{
Endpoint: sysregistriesv2.Endpoint{
Location: ref.ref.String(),
},
Prefix: ref.ref.String(),
}
}
primaryDomain := reference.Domain(ref.ref)
// Found the registry within the sysregistriesv2 configuration. Now we test
// all endpoints for the manifest availability. If a working image source
// was found, it will be used for all future pull actions.
manifestLoadErr := errors.New("Internal error: newImageSource returned without trying any endpoint")
for _, endpoint := range append(registry.Mirrors, registry.Endpoint) {
logrus.Debugf("Trying to pull %q from endpoint %q", ref.ref, endpoint.Location)
newRef, err := endpoint.RewriteReference(ref.ref, registry.Prefix)
if err != nil {
return nil, err
}
dockerRef, err := newReference(newRef)
if err != nil {
return nil, err
}
endpointSys := sys
// sys.DockerAuthConfig does not explicitly specify a registry; we must not blindly send the credentials intended for the primary endpoint to mirrors.
if endpointSys != nil && endpointSys.DockerAuthConfig != nil && reference.Domain(dockerRef.ref) != primaryDomain {
copy := *endpointSys
copy.DockerAuthConfig = nil
endpointSys = &copy
}
client, err := newDockerClientFromRef(endpointSys, dockerRef, false, "pull")
if err != nil {
return nil, err
}
client.tlsClientConfig.InsecureSkipVerify = endpoint.Insecure
testImageSource := &dockerImageSource{
ref: dockerRef,
c: client,
}
manifestLoadErr = testImageSource.ensureManifestIsLoaded(ctx)
if manifestLoadErr == nil {
return testImageSource, nil
}
}
return nil, manifestLoadErr
}
// Reference returns the reference used to set up this source, _as specified by the user_
@@ -89,7 +167,7 @@ func (s *dockerImageSource) fetchManifest(ctx context.Context, tagOrDigest strin
path := fmt.Sprintf(manifestPath, reference.Path(s.ref.ref), tagOrDigest)
headers := make(map[string][]string)
headers["Accept"] = manifest.DefaultRequestedManifestMIMETypes
res, err := s.c.makeRequest(ctx, "GET", path, headers, nil, v2Auth)
res, err := s.c.makeRequest(ctx, "GET", path, headers, nil, v2Auth, nil)
if err != nil {
return nil, "", err
}
@@ -137,7 +215,7 @@ func (s *dockerImageSource) getExternalBlob(ctx context.Context, urls []string)
err error
)
for _, url := range urls {
resp, err = s.c.makeRequestToResolvedURL(ctx, "GET", url, nil, nil, -1, noAuth)
resp, err = s.c.makeRequestToResolvedURL(ctx, "GET", url, nil, nil, -1, noAuth, nil)
if err == nil {
if resp.StatusCode != http.StatusOK {
err = errors.Errorf("error fetching external blob from %q: %d (%s)", url, resp.StatusCode, http.StatusText(resp.StatusCode))
@@ -147,10 +225,10 @@ func (s *dockerImageSource) getExternalBlob(ctx context.Context, urls []string)
break
}
}
if resp.Body != nil && err == nil {
return resp.Body, getBlobSize(resp), nil
if err != nil {
return nil, 0, err
}
return nil, 0, err
return resp.Body, getBlobSize(resp), nil
}
func getBlobSize(resp *http.Response) int64 {
@@ -176,7 +254,7 @@ func (s *dockerImageSource) GetBlob(ctx context.Context, info types.BlobInfo, ca
path := fmt.Sprintf(blobsPath, reference.Path(s.ref.ref), info.Digest.String())
logrus.Debugf("Downloading %s", path)
res, err := s.c.makeRequest(ctx, "GET", path, nil, nil, v2Auth)
res, err := s.c.makeRequest(ctx, "GET", path, nil, nil, v2Auth, nil)
if err != nil {
return nil, 0, err
}
@@ -340,7 +418,7 @@ func deleteImage(ctx context.Context, sys *types.SystemContext, ref dockerRefere
return err
}
getPath := fmt.Sprintf(manifestPath, reference.Path(ref.ref), refTail)
get, err := c.makeRequest(ctx, "GET", getPath, headers, nil, v2Auth)
get, err := c.makeRequest(ctx, "GET", getPath, headers, nil, v2Auth, nil)
if err != nil {
return err
}
@@ -362,7 +440,7 @@ func deleteImage(ctx context.Context, sys *types.SystemContext, ref dockerRefere
// When retrieving the digest from a registry >= 2.3 use the following header:
// "Accept": "application/vnd.docker.distribution.manifest.v2+json"
delete, err := c.makeRequest(ctx, "DELETE", deletePath, headers, nil, v2Auth)
delete, err := c.makeRequest(ctx, "DELETE", deletePath, headers, nil, v2Auth, nil)
if err != nil {
return err
}


@@ -61,8 +61,13 @@ func ParseReference(refString string) (types.ImageReference, error) {
// NewReference returns a Docker reference for a named reference. The reference must satisfy !reference.IsNameOnly().
func NewReference(ref reference.Named) (types.ImageReference, error) {
return newReference(ref)
}
// newReference returns a dockerReference for a named reference.
func newReference(ref reference.Named) (dockerReference, error) {
if reference.IsNameOnly(ref) {
return nil, errors.Errorf("Docker reference %s has neither a tag nor a digest", reference.FamiliarString(ref))
return dockerReference{}, errors.Errorf("Docker reference %s has neither a tag nor a digest", reference.FamiliarString(ref))
}
// A github.com/distribution/reference value can have a tag and a digest at the same time!
// The docker/distribution API does not really support that (we can't ask for an image with a specific
@@ -72,8 +77,9 @@ func NewReference(ref reference.Named) (types.ImageReference, error) {
_, isTagged := ref.(reference.NamedTagged)
_, isDigested := ref.(reference.Canonical)
if isTagged && isDigested {
return nil, errors.Errorf("Docker references with both a tag and digest are currently not supported")
return dockerReference{}, errors.Errorf("Docker references with both a tag and digest are currently not supported")
}
return dockerReference{
ref: ref,
}, nil
@@ -135,7 +141,7 @@ func (ref dockerReference) NewImage(ctx context.Context, sys *types.SystemContex
// NewImageSource returns a types.ImageSource for this reference.
// The caller must call .Close() on the returned ImageSource.
func (ref dockerReference) NewImageSource(ctx context.Context, sys *types.SystemContext) (types.ImageSource, error) {
return newImageSource(sys, ref)
return newImageSource(ctx, sys, ref)
}
// NewImageDestination returns a types.ImageDestination for this reference.


@@ -9,6 +9,7 @@ import (
"io/ioutil"
"os"
"path"
"sync"
"github.com/containers/image/internal/tmpdir"
"github.com/containers/image/manifest"
@@ -21,8 +22,10 @@ import (
// Source is a partial implementation of types.ImageSource for reading from tarPath.
type Source struct {
tarPath string
removeTarPathOnClose bool // Remove temp file on close if true
removeTarPathOnClose bool // Remove temp file on close if true
cacheDataLock sync.Once // Atomic way to ensure that ensureCachedDataIsPresent is only invoked once
// The following data is only available after ensureCachedDataIsPresent() succeeds
cacheDataResult error // The return value of ensureCachedDataIsPresent, since it should be as safe to cache as the side effects
tarManifest *ManifestItem // nil if not available yet.
configBytes []byte
configDigest digest.Digest
@@ -199,43 +202,46 @@ func (s *Source) readTarComponent(path string) ([]byte, error) {
// ensureCachedDataIsPresent loads data necessary for any of the public accessors.
func (s *Source) ensureCachedDataIsPresent() error {
if s.tarManifest != nil {
return nil
}
s.cacheDataLock.Do(func() {
// Read and parse manifest.json
tarManifest, err := s.loadTarManifest()
if err != nil {
s.cacheDataResult = err
return
}
// Read and parse manifest.json
tarManifest, err := s.loadTarManifest()
if err != nil {
return err
}
// Check to make sure length is 1
if len(tarManifest) != 1 {
s.cacheDataResult = errors.Errorf("Unexpected tar manifest.json: expected 1 item, got %d", len(tarManifest))
return
}
// Check to make sure length is 1
if len(tarManifest) != 1 {
return errors.Errorf("Unexpected tar manifest.json: expected 1 item, got %d", len(tarManifest))
}
// Read and parse config.
configBytes, err := s.readTarComponent(tarManifest[0].Config)
if err != nil {
s.cacheDataResult = err
return
}
var parsedConfig manifest.Schema2Image // There's a lot of info there, but we only really care about layer DiffIDs.
if err := json.Unmarshal(configBytes, &parsedConfig); err != nil {
s.cacheDataResult = errors.Wrapf(err, "Error decoding tar config %s", tarManifest[0].Config)
return
}
// Read and parse config.
configBytes, err := s.readTarComponent(tarManifest[0].Config)
if err != nil {
return err
}
var parsedConfig manifest.Schema2Image // There's a lot of info there, but we only really care about layer DiffIDs.
if err := json.Unmarshal(configBytes, &parsedConfig); err != nil {
return errors.Wrapf(err, "Error decoding tar config %s", tarManifest[0].Config)
}
knownLayers, err := s.prepareLayerData(&tarManifest[0], &parsedConfig)
if err != nil {
s.cacheDataResult = err
return
}
knownLayers, err := s.prepareLayerData(&tarManifest[0], &parsedConfig)
if err != nil {
return err
}
// Success; commit.
s.tarManifest = &tarManifest[0]
s.configBytes = configBytes
s.configDigest = digest.FromBytes(configBytes)
s.orderedDiffIDList = parsedConfig.RootFS.DiffIDs
s.knownLayers = knownLayers
return nil
// Success; commit.
s.tarManifest = &tarManifest[0]
s.configBytes = configBytes
s.configDigest = digest.FromBytes(configBytes)
s.orderedDiffIDList = parsedConfig.RootFS.DiffIDs
s.knownLayers = knownLayers
})
return s.cacheDataResult
}
// loadTarManifest loads and decodes the manifest.json.
@@ -399,7 +405,7 @@ func (r uncompressedReadCloser) Close() error {
// HasThreadSafeGetBlob indicates whether GetBlob can be executed concurrently.
func (s *Source) HasThreadSafeGetBlob() bool {
return false
return true
}
// GetBlob returns a stream for the specified blob, and the blob's size (or -1 if unknown).


@@ -0,0 +1,66 @@
{
"title": "JSON embedded in an atomic container signature",
"description": "This schema is a supplement to atomic-signature.md in this directory.\n\nConsumers of the JSON MUST use the processing rules documented in atomic-signature.md, especially the requirements for the 'critical' subjobject.\n\nWhenever this schema and atomic-signature.md, or the github.com/containers/image/signature implementation, differ,\nit is the atomic-signature.md document, or the github.com/containers/image/signature implementation, which governs.\n\nUsers are STRONGLY RECOMMENDED to use the github.com/containeres/image/signature implementation instead of writing\ntheir own, ESPECIALLY when consuming signatures, so that the policy.json format can be shared by all image consumers.\n",
"type": "object",
"required": [
"critical",
"optional"
],
"additionalProperties": false,
"properties": {
"critical": {
"type": "object",
"required": [
"type",
"image",
"identity"
],
"additionalProperties": false,
"properties": {
"type": {
"type": "string",
"enum": [
"atomic container signature"
]
},
"image": {
"type": "object",
"required": [
"docker-manifest-digest"
],
"additionalProperties": false,
"properties": {
"docker-manifest-digest": {
"type": "string"
}
}
},
"identity": {
"type": "object",
"required": [
"docker-reference"
],
"additionalProperties": false,
"properties": {
"docker-reference": {
"type": "string"
}
}
}
}
},
"optional": {
"type": "object",
"description": "All members are optional, but if they are included, they must be valid.",
"additionalProperties": true,
"properties": {
"creator": {
"type": "string"
},
"timestamp": {
"type": "integer"
}
}
}
}
}


@@ -0,0 +1,28 @@
% containers-certs.d(5)
# NAME
containers-certs.d - Directory for storing custom container-registry TLS configurations
# DESCRIPTION
A custom TLS configuration for a container registry can be configured by creating a directory under `/etc/containers/certs.d`.
The name of the directory must correspond to the `host:port` of the registry (e.g., `my-registry.com:5000`).
## Directory Structure
A certs directory can contain one or more files with the following extensions:
* `*.crt` files with this extension will be interpreted as CA certificates
* `*.cert` files with this extension will be interpreted as client certificates
* `*.key` files with this extension will be interpreted as client keys
Note that the client certificate-key pair will be selected by the file name (e.g., `client.{cert,key}`).
An example setup for a registry running at `my-registry.com:5000` may look as follows:
```
/etc/containers/certs.d/ <- Certificate directory
└── my-registry.com:5000 <- Hostname:port
├── client.cert <- Client certificate
├── client.key <- Client key
└── ca.crt <- Certificate authority that signed the registry certificate
```
# HISTORY
Feb 2019, Originally compiled by Valentin Rothberg <rothberg@redhat.com>


@@ -0,0 +1,283 @@
% CONTAINERS-POLICY.JSON(5) policy.json Man Page
% Miloslav Trmač
% September 2016
# NAME
containers-policy.json - syntax for the signature verification policy file
## DESCRIPTION
Signature verification policy files are used to specify policy, e.g. trusted keys,
applicable when deciding whether to accept an image, or individual signatures of that image, as valid.
The default policy is stored (unless overridden at compile-time) at `/etc/containers/policy.json`;
applications performing verification may allow using a different policy instead.
## FORMAT
The signature verification policy file, usually called `policy.json`,
uses a JSON format. Unlike some other JSON files, its parsing is fairly strict:
unrecognized, duplicated or otherwise invalid fields cause the entire file,
and usually the entire operation, to be rejected.
The purpose of the policy file is to define a set of *policy requirements* for a container image,
usually depending on its location (where it is being pulled from) or otherwise defined identity.
Policy requirements can be defined for:
- An individual *scope* in a *transport*.
The *transport* values are the same as the transport prefixes when pushing/pulling images (e.g. `docker:`, `atomic:`),
and *scope* values are defined by each transport; see below for more details.
Usually, a scope can be defined to match a single image, and various prefixes of
such a most specific scope define namespaces of matching images.
- A default policy for a single transport, expressed using an empty string as a scope
- A global default policy.
If multiple policy requirements match a given image, only the requirements from the most specific match apply;
the more general policy requirement definitions are ignored.
This is expressed in JSON using the top-level syntax
```js
{
"default": [/* policy requirements: global default */]
"transports": {
transport_name: {
"": [/* policy requirements: default for transport $transport_name */],
scope_1: [/* policy requirements: default for $scope_1 in $transport_name */],
scope_2: [/*…*/]
/*…*/
},
transport_name_2: {/*…*/}
/*…*/
}
}
```
The global `default` set of policy requirements is mandatory; all of the other fields
(`transports` itself, any specific transport, the transport-specific default, etc.) are optional.
<!-- NOTE: Keep this in sync with transports/transports.go! -->
## Supported transports and their scopes
### `atomic:`
The `atomic:` transport refers to images in an Atomic Registry.
Supported scopes use the form _hostname_[`:`_port_][`/`_namespace_[`/`_imagestream_ [`:`_tag_]]],
i.e. either specifying a complete name of a tagged image, or a prefix denoting
a host/namespace/image stream.
*Note:* The _hostname_ and _port_ refer to the Docker registry host and port (the one used
e.g. for `docker pull`), _not_ to the OpenShift API host and port.
### `dir:`
The `dir:` transport refers to images stored in local directories.
Supported scopes are paths of directories (either containing a single image or
subdirectories possibly containing images).
*Note:* The paths must be absolute and contain no symlinks. Paths violating these requirements may be silently ignored.
The top-level scope `"/"` is forbidden; use the transport default scope `""`,
for consistency with other transports.
### `docker:`
The `docker:` transport refers to images in a registry implementing the "Docker Registry HTTP API V2".
Scopes matching individual images are named Docker references *in the fully expanded form*, either
using a tag or digest. For example, `docker.io/library/busybox:latest` (*not* `busybox:latest`).
More general scopes are prefixes of individual-image scopes, and specify a repository (by omitting the tag or digest),
a repository namespace, or a registry host (by only specifying the host name).
### `oci:`
The `oci:` transport refers to images in directories compliant with "Open Container Image Layout Specification".
Supported scopes use the form _directory_`:`_tag_, with _directory_ referring to
a directory containing one or more tags, or any of the parent directories.
*Note:* See `dir:` above for semantics and restrictions on the directory paths, they apply to `oci:` equivalently.
### `tarball:`
The `tarball:` transport refers to tarred up container root filesystems.
Scopes are ignored.
## Policy Requirements
Using the mechanisms above, a set of policy requirements is looked up. The policy requirements
are represented as a JSON array of individual requirement objects. For an image to be accepted,
*all* of the requirements must be satisfied simultaneously.
The policy requirements can also be used to decide whether an individual signature is accepted (= is signed by a recognized key of a known author);
in that case some requirements may apply only to some signatures, but each signature must be accepted by *at least one* requirement object.
The following requirement objects are supported:
### `insecureAcceptAnything`
A simple requirement with the following syntax
```json
{"type":"insecureAcceptAnything"}
```
This requirement accepts any image (but note that other requirements in the array still apply).
When deciding to accept an individual signature, this requirement does not have any effect; it does *not* cause the signature to be accepted, though.
This is useful primarily for policy scopes where no signature verification is required;
because the array of policy requirements must not be empty, this requirement is used
to represent the lack of requirements explicitly.
### `reject`
A simple requirement with the following syntax:
```json
{"type":"reject"}
```
This requirement rejects every image, and every signature.
### `signedBy`
This requirement requires an image to be signed with an expected identity, or accepts a signature if it is using an expected identity and key.
```js
{
"type": "signedBy",
"keyType": "GPGKeys", /* The only currently supported value */
"keyPath": "/path/to/local/keyring/file",
"keyData": "base64-encoded-keyring-data",
"signedIdentity": identity_requirement
}
```
<!-- Later: other keyType values -->
Exactly one of `keyPath` and `keyData` must be present, containing a GPG keyring of one or more public keys. Only signatures made by these keys are accepted.
The `signedIdentity` field, a JSON object, specifies what image identity the signature claims about the image.
One of the following alternatives is supported:
- The identity in the signature must exactly match the image identity. Note that with this, referencing an image by digest (with a signature claiming a _repository_`:`_tag_ identity) will fail.
```json
{"type":"matchExact"}
```
- If the image identity carries a tag, the identity in the signature must exactly match;
if the image identity uses a digest reference, the identity in the signature must be in the same repository as the image identity (using any tag).
(Note that with images identified using digest references, the digest from the reference is validated even before signature verification starts.)
```json
{"type":"matchRepoDigestOrExact"}
```
- The identity in the signature must be in the same repository as the image identity. This is useful e.g. to pull an image using the `:latest` tag when the image is signed with a tag specifying an exact image version.
```json
{"type":"matchRepository"}
```
- The identity in the signature must exactly match a specified identity.
This is useful e.g. when locally mirroring images signed using their public identity.
```js
{
"type": "exactReference",
"dockerReference": docker_reference_value
}
```
- The identity in the signature must be in the same repository as a specified identity.
This combines the properties of `matchRepository` and `exactReference`.
```js
{
"type": "exactRepository",
"dockerRepository": docker_repository_value
}
```
If the `signedIdentity` field is missing, it is treated as `matchRepoDigestOrExact`.
*Note*: `matchExact`, `matchRepoDigestOrExact` and `matchRepository` can only be used if a Docker-like image identity is
provided by the transport. In particular, the `dir:` and `oci:` transports can only be
used with `exactReference` or `exactRepository`.
<!-- ### `signedBaseLayer` -->
## Examples
It is *strongly* recommended to set the `default` policy to `reject`, and then
selectively allow individual transports and scopes as desired.
### A reasonably locked-down system
(Note that the `/*`…`*/` comments are not valid in JSON, and must not be used in real policies.)
```js
{
"default": [{"type": "reject"}], /* Reject anything not explicitly allowed */
"transports": {
"docker": {
/* Allow installing images from a specific repository namespace, without cryptographic verification.
This namespace includes images like openshift/hello-openshift and openshift/origin. */
"docker.io/openshift": [{"type": "insecureAcceptAnything"}],
/* Similarly, allow installing the “official” busybox images. Note how the fully expanded
form, with the explicit /library/, must be used. */
"docker.io/library/busybox": [{"type": "insecureAcceptAnything"}]
/* Other docker: images use the global default policy and are rejected */
},
"dir": {
"": [{"type": "insecureAcceptAnything"}] /* Allow any images originating in local directories */
},
"atomic": {
/* The common case: using a known key for a repository or set of repositories */
"hostname:5000/myns/official": [
{
"type": "signedBy",
"keyType": "GPGKeys",
"keyPath": "/path/to/official-pubkey.gpg"
}
],
/* A more complex example, for a repository which contains a mirror of a third-party product,
which must be signed-off by local IT */
"hostname:5000/vendor/product": [
{ /* Require the image to be signed by the original vendor, using the vendor's repository location. */
"type": "signedBy",
"keyType": "GPGKeys",
"keyPath": "/path/to/vendor-pubkey.gpg",
"signedIdentity": {
"type": "exactRepository",
"dockerRepository": "vendor-hostname/product/repository"
}
},
{ /* Require the image to _also_ be signed by a local reviewer. */
"type": "signedBy",
"keyType": "GPGKeys",
"keyPath": "/path/to/reviewer-pubkey.gpg"
}
]
}
}
}
```
### Completely disable security, allow all images, do not trust any signatures
```json
{
"default": [{"type": "insecureAcceptAnything"}]
}
```
## SEE ALSO
atomic(1)
## HISTORY
August 2018, Rename to containers-policy.json(5) by Valentin Rothberg <vrothberg@suse.com>
September 2016, Originally compiled by Miloslav Trmač <mitr@redhat.com>


@@ -0,0 +1,107 @@
% CONTAINERS-REGISTRIES.CONF(5) System-wide registry configuration file
% Brent Baude
% Aug 2017
# NAME
containers-registries.conf - Syntax of System Registry Configuration File
# DESCRIPTION
The CONTAINERS-REGISTRIES configuration file is a system-wide configuration
file for container image registries. The file format is TOML.
By default, the configuration file is located at `/etc/containers/registries.conf`.
# FORMATS
## VERSION 2
VERSION 2 is the latest format of `registries.conf` and is currently in beta,
which means that VERSION 1 should generally be used in production environments
for now.
Every registry can have its own mirrors configured. The mirrors will be tested
in order for the availability of the remote manifest. This happens currently
only during an image pull. If the manifest is not reachable due to connectivity
issues or the unavailability of the remote manifest, then the next mirror will
be tested instead. If no mirror is configured or contains the manifest to be
pulled, then the initially provided reference will be used as fallback. It is
possible to set the `insecure` option per mirror, too.
Furthermore it is possible to specify a `prefix` for a registry. The `prefix`
is used to find the relevant target registry from where the image has to be
pulled. During the test for the availability of the image, the prefixed
location will be rewritten to the correct remote location. This applies to
mirrors as well as the fallback `location`. If no prefix is specified, it
defaults to the specified `location`. For example, if
`prefix = "example.com/foo"`, `location = "example.com"`, and the image is
pulled from `example.com/foo/image`, then the resulting pull will effectively
point to `example.com/image`.
By default container runtimes use TLS when retrieving images from a registry.
If the registry is not set up with TLS, then the container runtime will fail to
pull images from the registry. If you set `insecure = true` for a registry or a
mirror, you override the `insecure` flag for that specific entry. This means
that the container runtime will attempt to use unencrypted HTTP to pull the image.
It also allows you to pull from a registry with self-signed certificates.
If you set `unqualified-search = true` for the registry, then it is possible
to omit the registry hostname when pulling images. This feature does not work
together with a specified `prefix`.
If `blocked = true`, then pulling images from that registry is not allowed.
### EXAMPLE
```
[[registry]]
location = "example.com"
insecure = false
prefix = "example.com/foo"
unqualified-search = false
blocked = false
mirror = [
{ location = "example-mirror-0.local", insecure = false },
{ location = "example-mirror-1.local", insecure = true }
]
```
## VERSION 1
VERSION 1 can be used as an alternative to VERSION 2, but it does not support
registry mirrors or a prefix.
The TOML format is used to build a simple list of registries under three
categories: `registries.search`, `registries.insecure`, and `registries.block`.
You can list multiple registries using a comma-separated list.
Search registries are used when the caller of a container runtime does not fully specify the
container image that they want to execute. These registries are prepended to the specified
container image name until the named image is found at a registry.
Note that any registry can be marked insecure, not just the registries listed
under `registries.search`.
The fields `registries.insecure` and `registries.block` work like the
`insecure` and `blocked` options from VERSION 2.
### EXAMPLE
The following example configuration defines two searchable registries, one
insecure registry, and two blocked registries.
```
[registries.search]
registries = ['registry1.com', 'registry2.com']
[registries.insecure]
registries = ['registry3.com']
[registries.block]
registries = ['registry.untrusted.com', 'registry.unsafe.com']
```
# HISTORY
Mar 2019, Added additional configuration format by Sascha Grunert <sgrunert@suse.com>
Aug 2018, Renamed to containers-registries.conf(5) by Valentin Rothberg <vrothberg@suse.com>
Jun 2018, Updated by Tom Sweeney <tsweeney@redhat.com>
Aug 2017, Originally compiled by Brent Baude <bbaude@redhat.com>


@@ -0,0 +1,128 @@
% CONTAINERS-REGISTRIES.D(5) Registries.d Man Page
% Miloslav Trmač
% August 2016
# NAME
containers-registries.d - Directory for various registries configurations
# DESCRIPTION
The registries configuration directory contains configuration for various registries
(servers storing remote container images), and for content stored in them,
so that the configuration does not have to be provided in command-line options over and over for every command,
and so that it can be shared by all users of containers/image.
By default (unless overridden at compile-time), the registries configuration directory is `/etc/containers/registries.d`;
applications may allow using a different directory instead.
## Directory Structure
The directory may contain any number of files with the extension `.yaml`,
each using the YAML format. Other than the mandatory extension, names of the files
don't matter.
The contents of these files are merged together; to have a well-defined and easy to understand
behavior, there can be only one configuration section describing a single namespace within a registry
(in particular there can be at most one `default-docker` section across all files,
and there can be at most one instance of any key under the `docker` section;
these sections are documented later).
Thus, it is forbidden to have two conflicting configurations for a single registry or scope,
and it is also forbidden to split a configuration for a single registry or scope across
more than one file (even if they are not semantically in conflict).
## Registries, Scopes and Search Order
Each YAML file must contain a “YAML mapping” (key-value pairs). Two top-level keys are defined:
- `default-docker` is the _configuration section_ (as documented below)
for registries implementing "Docker Registry HTTP API V2".
This key is optional.
- `docker` is a mapping, using individual registries implementing "Docker Registry HTTP API V2",
or namespaces and individual images within these registries, as keys;
the value assigned to any such key is a _configuration section_.
This key is optional.
Scopes matching individual images are named Docker references *in the fully expanded form*, either
using a tag or digest. For example, `docker.io/library/busybox:latest` (*not* `busybox:latest`).
More general scopes are prefixes of individual-image scopes, and specify a repository (by omitting the tag or digest),
a repository namespace, or a registry host (and a port if it differs from the default).
Note that if a registry is accessed using a hostname+port configuration, the port-less hostname
is _not_ used as parent scope.
When searching for a configuration to apply for an individual container image, only
the configuration for the most-precisely matching scope is used; configuration using
more general scopes is ignored. For example, if _any_ configuration exists for
`docker.io/library/busybox`, the configuration for `docker.io` is ignored
(even if some element of the configuration is defined for `docker.io` and not for `docker.io/library/busybox`).
## Individual Configuration Sections
A single configuration section is selected for a container image using the process
described above. The configuration section is a YAML mapping, with the following keys:
- `sigstore-staging` defines a URL of the signature storage, used for editing it (adding or deleting signatures).
This key is optional; if it is missing, `sigstore` below is used.
- `sigstore` defines a URL of the signature storage.
This URL is used for reading existing signatures,
and if `sigstore-staging` does not exist, also for adding or removing them.
This key is optional; if it is missing, no signature storage is defined (no signatures
are downloaded along with images, and adding new signatures is possible only if `sigstore-staging` is defined).
## Examples
### Using Containers from Various Origins
The following demonstrates how to consume and run images from various registries and namespaces:
```yaml
docker:
registry.database-supplier.com:
sigstore: https://sigstore.database-supplier.com
distribution.great-middleware.org:
sigstore: https://security-team.great-middleware.org/sigstore
docker.io/web-framework:
sigstore: https://sigstore.web-framework.io:8080
```
### Developing and Signing Containers, Staging Signatures
For developers in `example.com`:
- Consume most container images using the public servers also used by clients.
- Use a separate signature storage for container images in a namespace corresponding to the developers' department, with a staging storage used before publishing signatures.
- Craft an individual exception for a single branch a specific developer is working on locally.
```yaml
docker:
registry.example.com:
sigstore: https://registry-sigstore.example.com
registry.example.com/mydepartment:
sigstore: https://sigstore.mydepartment.example.com
sigstore-staging: file:///mnt/mydepartment/sigstore-staging
registry.example.com/mydepartment/myproject:mybranch:
sigstore: http://localhost:4242/sigstore
sigstore-staging: file:///home/useraccount/webroot/sigstore
```
### A Global Default
If a company publishes its products using a different domain and a different registry hostname for each of them, it is still possible to use a single signature storage server
without listing each domain individually. This is expected to happen rarely, usually only for staging new signatures.
```yaml
default-docker:
sigstore-staging: file:///mnt/company/common-sigstore-staging
```
# AUTHORS
Miloslav Trmač <mitr@redhat.com>


@@ -0,0 +1,241 @@
% container-signature(5) Container signature format
% Miloslav Trmač
% March 2017
# Container signature format
This document describes the format of container signatures,
as implemented by the `github.com/containers/image/signature` package.
Most users should be able to consume these signatures by using the `github.com/containers/image/signature` package
(preferably through the higher-level `signature.PolicyContext` interface)
without having to care about the details of the format described below.
This documentation exists primarily for maintainers of the package
and to allow independent reimplementations.
## High-level overview
The signature provides an end-to-end authenticated claim that a container image
has been approved by a specific party (e.g. the creator of the image as their work,
an automated build system as a result of an automated build,
a company IT department approving the image for production) under a specified _identity_
(e.g. an OS base image / specific application, with a specific version).
A container signature consists of a cryptographic signature which identifies
and authenticates who signed the image, and carries as a signed payload a JSON document.
The JSON document identifies the image being signed, claims a specific identity of the
image and if applicable, contains other information about the image.
The signatures do not modify the container image (the layers, configuration, manifest, …);
e.g. their presence does not change the manifest digest used to identify the image in
docker/distribution servers; rather, the signatures are associated with an immutable image.
An image can have any number of signatures, so signature distribution systems SHOULD support
associating more than one signature with an image.
## The cryptographic signature
As distributed, the container signature is a blob which contains a cryptographic signature
in an industry-standard format, carrying a signed JSON payload (i.e. the blob contains both the
JSON document and a signature of the JSON document; it is not a “detached signature” with
independent blobs containing the JSON document and a cryptographic signature).
Currently the only defined cryptographic signature format is an OpenPGP signature (RFC 4880),
but others may be added in the future. (The blob does not contain metadata identifying the
cryptographic signature format. It is expected that most formats are sufficiently self-describing
that this is not necessary and the configured expected public key provides another indication
of the expected cryptographic signature format. Such metadata may be added in the future for
newly added cryptographic signature formats, if necessary.)
Consumers of container signatures SHOULD verify the cryptographic signature
against one or more trusted public keys
(e.g. defined in a [policy.json signature verification policy file](policy.json.md))
before parsing or processing the JSON payload in _any_ way,
in particular they SHOULD stop processing the container signature
if the cryptographic signature verification fails, without even starting to process the JSON payload.
(Consumers MAY extract identification of the signing key and other metadata from the cryptographic signature,
and the JSON payload, without verifying the signature, if the purpose is to allow managing the signature blobs,
e.g. to list the authors and image identities of signatures associated with a single container image;
if so, they SHOULD design the output of such processing to minimize the risk of users considering the output trusted
or in any way usable for making policy decisions about the image.)
### OpenPGP signature verification
When verifying a cryptographic signature in the OpenPGP format,
the consumer MUST verify at least the following aspects of the signature
(like the `github.com/containers/image/signature` package does):
- The blob MUST be a “Signed Message” as defined in RFC 4880 section 11.3.
(e.g. it MUST NOT be an unsigned “Literal Message”, or any other non-signature format).
- The signature MUST have been made by an expected key trusted for the purpose (and the specific container image).
- The signature MUST be correctly formed and pass the cryptographic validation.
- The signature MUST correctly authenticate the included JSON payload
(in particular, the parsing of the JSON payload MUST NOT start before the complete payload has been cryptographically authenticated).
- The signature MUST NOT be expired.
The consumer SHOULD have tests for its verification code which verify that signatures failing any of the above are rejected.
## JSON processing and forward compatibility
The payload of the cryptographic signature is a JSON document (RFC 7159).
Consumers SHOULD parse it very strictly,
refusing any signature which violates the expected format (e.g. missing members, incorrect member types)
or can be interpreted ambiguously (e.g. a duplicated member in a JSON object).
Any violations of the JSON format or of other requirements in this document MAY be accepted if the JSON document can be recognized
to have been created by a known-incorrect implementation (see [`optional.creator`](#optionalcreator) below)
and if the semantics of the invalid document, as created by such an implementation, is clear.
The top-level value of the JSON document MUST be a JSON object with exactly two members, `critical` and `optional`,
each a JSON object.
The `critical` object MUST contain a `type` member identifying the document as a container signature
(as defined [below](#criticaltype))
and signature consumers MUST reject signatures which do not have this member or in which this member does not have the expected value.
To ensure forward compatibility (allowing older signature consumers to correctly
accept or reject signatures created at a later date, with possible extensions to this format),
consumers MUST reject the signature if the `critical` object, or _any_ of its subobjects,
contain _any_ member or data value which is unrecognized, unsupported, invalid, or in any other way unexpected.
At a minimum, this includes unrecognized members in a JSON object, or incorrect types of expected members.
For the same reason, consumers SHOULD accept any members with unrecognized names in the `optional` object,
and MAY accept signatures where the object member is recognized but unsupported, or the value of the member is unsupported.
Consumers still SHOULD reject signatures where a member of an `optional` object is supported but the value is recognized as invalid.
## JSON data format
An example of the full format follows, with detailed description below.
To reiterate, consumers of the signature SHOULD perform successful cryptographic verification,
and MUST reject unexpected data in the `critical` object, or in the top-level object, as described above.
```json
{
"critical": {
"type": "atomic container signature",
"image": {
"docker-manifest-digest": "sha256:817a12c32a39bbe394944ba49de563e085f1d3c5266eb8e9723256bc4448680e"
},
"identity": {
"docker-reference": "docker.io/library/busybox:latest"
}
},
"optional": {
"creator": "some software package v1.0.1-35",
"timestamp": 1483228800,
}
}
```
### `critical`
This MUST be a JSON object which contains data critical to correctly evaluating the validity of a signature.
Consumers MUST reject any signature where the `critical` object contains any unrecognized, unsupported, invalid or in any other way unexpected member or data.
### `critical.type`
This MUST be a string with a string value exactly equal to `atomic container signature` (three words, including the spaces).
Signature consumers MUST reject signatures which do not have this member or this member does not have exactly the expected value.
(The consumers MAY support signatures with a different value of the `type` member, if any is defined in the future;
if so, the rest of the JSON document is interpreted according to rules defining that value of `critical.type`,
not by this document.)
### `critical.image`
This MUST be a JSON object which identifies the container image this signature applies to.
Consumers MUST reject any signature where the `critical.image` object contains any unrecognized, unsupported, invalid or in any other way unexpected member or data.
(Currently only the `docker-manifest-digest` way of identifying a container image is defined;
alternatives to this may be defined in the future,
but existing consumers are required to reject signatures which use formats they do not support.)
### `critical.image.docker-manifest-digest`
This MUST be a JSON string, in the `github.com/opencontainers/go-digest.Digest` string format.
The value of this member MUST match the manifest of the signed container image, as implemented in the docker/distribution manifest addressing system.
The consumer of the signature SHOULD verify the manifest digest against a fully verified signature before processing the contents of the image manifest in any other way
(e.g. parsing the manifest further or downloading layers of the image).
Implementation notes:
* A single container image manifest may have several valid manifest digest values, using different algorithms.
* For “signed” [docker/distribution schema 1](https://github.com/docker/distribution/blob/master/docs/spec/manifest-v2-1.md) manifests,
the manifest digest applies to the payload of the JSON web signature, not to the raw manifest blob.
### `critical.identity`
This MUST be a JSON object which identifies the claimed identity of the image (usually the purpose of the image, or the application, along with a version information),
as asserted by the author of the signature.
Consumers MUST reject any signature where the `critical.identity` object contains any unrecognized, unsupported, invalid or in any other way unexpected member or data.
(Currently only the `docker-reference` way of claiming an image identity/purpose is defined;
alternatives to this may be defined in the future,
but existing consumers are required to reject signatures which use formats they do not support.)
### `critical.identity.docker-reference`
This MUST be a JSON string, in the `github.com/docker/distribution/reference` string format,
and using the same normalization semantics (where e.g. `busybox:latest` is equivalent to `docker.io/library/busybox:latest`).
If the normalization semantics allows multiple string representations of the claimed identity with equivalent meaning,
the `critical.identity.docker-reference` member SHOULD use the fully explicit form (including the full host name and namespaces).
The value of this member MUST match the image identity/purpose expected by the consumer of the image signature and the image
(again, accounting for the `docker/distribution/reference` normalization semantics).
In the most common case, this means that the `critical.identity.docker-reference` value must be equal to the docker/distribution reference used to refer to or download the image.
However, depending on the specific application, users or system administrators may accept less specific matches
(e.g. ignoring the tag value in the signature when pulling the `:latest` tag or when referencing an image by digest),
or they may require `critical.identity.docker-reference` values with a completely different namespace to the reference used to refer to/download the image
(e.g. requiring a `critical.identity.docker-reference` value which identifies the image as coming from a supplier when fetching it from a company-internal mirror of approved images).
The software performing this verification SHOULD allow the users to define such a policy using the [policy.json signature verification policy file format](policy.json.md).
The `critical.identity.docker-reference` value SHOULD contain either a tag or digest;
in most cases, it SHOULD use a tag rather than a digest. (See also the default [`matchRepoDigestOrExact` matching semantics in `policy.json`](policy.json.md#signedby).)
### `optional`
This MUST be a JSON object.
Consumers SHOULD accept any members with unrecognized names in the `optional` object,
and MAY accept a signature where the object member is recognized but unsupported, or the value of the member is valid but unsupported.
Consumers still SHOULD reject any signature where a member of an `optional` object is supported but the value is recognized as invalid.
### `optional.creator`
If present, this MUST be a JSON string, identifying the name and version of the software which has created the signature.
The contents of this string is not defined in detail; however each implementation creating container signatures:
- SHOULD define the contents to unambiguously define the software in practice (e.g. it SHOULD contain the name of the software, not only the version number)
- SHOULD use a build and versioning process which ensures that the contents of this string (e.g. an included version number)
changes whenever the format or semantics of the generated signature changes in any way;
it SHOULD NOT be possible for two implementations which use a different format or semantics to have the same `optional.creator` value
- SHOULD use a format which is reasonably easy to parse in software (perhaps using a regexp),
and which makes it easy enough to recognize a range of versions of a specific implementation
(e.g. the version of the implementation SHOULD NOT be only a git hash, because git hashes don't have an easily defined ordering;
the string should contain a version number, or at least a date of the commit).
Consumers of container signatures MAY recognize specific values or sets of values of `optional.creator`
(perhaps augmented with `optional.timestamp`),
and MAY change their processing of the signature based on these values
(usually to accommodate violations of this specification in past versions of the signing software which cannot be fixed retroactively),
as long as the semantics of the invalid document, as created by such an implementation, is clear.
If consumers of signatures do change their behavior based on the `optional.creator` value,
they SHOULD take care that the way they process the signatures is not inconsistent with
strictly validating signature consumers.
(I.e. it is acceptable for a consumer to accept a signature based on a specific `optional.creator` value
if other implementations would completely reject the signature,
but it would be very undesirable for the two kinds of implementations to accept the signature in different
and inconsistent situations.)
### `optional.timestamp`
If present, this MUST be a JSON number, which is representable as a 64-bit integer, and identifies the time when the signature was created
as the number of seconds since the UNIX epoch (Jan 1 1970 00:00 UTC).


@@ -0,0 +1,109 @@
% CONTAINERS-TRANSPORTS(5) Containers Transports Man Page
% Valentin Rothberg
% April 2019
## NAME
containers-transports - description of supported transports for copying and storing container images
## DESCRIPTION
Tools which use the containers/image library, including skopeo(1), buildah(1), and podman(1), all share a common syntax for referring to container images in various locations.
The general form of the syntax is _transport:details_, where the details depend on the specified transport; the supported transports are documented below.
### **containers-storage:** [storage-specifier]{image-id|docker-reference[@image-id]}
An image located in a local containers storage.
The format of _docker-reference_ is described in detail in the **docker** transport.
The _storage-specifier_ allows for referencing storage locations on the file system and has the format `[[driver@]root[+run-root][:options]]` where the optional `driver` refers to the storage driver (e.g., overlay or btrfs) and where `root` is an absolute path to the storage's root directory.
The optional `run-root` can be used to specify the run directory of the storage where all temporary writable content is stored.
The optional `options` are a comma-separated list of driver-specific options.
Please refer to containers-storage.conf(5) for further information on the drivers and supported options.
### **dir:**_path_
An existing local directory _path_ storing the manifest, layer tarballs and signatures as individual files.
This is a non-standardized format, primarily useful for debugging or noninvasive container inspection.
### **docker://**_docker-reference_
An image in a registry implementing the "Docker Registry HTTP API V2".
By default, uses the authorization state in `$XDG_RUNTIME_DIR/containers/auth.json`, which is set using podman-login(1).
If the authorization state is not found there, `$HOME/.docker/config.json` is checked, which is set using docker-login(1).
The containers-registries.conf(5) further allows for configuring various settings of a registry.
Note that a _docker-reference_ has the following format: `name[:tag|@digest]`.
While the docker transport does not support both a tag and a digest at the same time, some formats like containers-storage do.
Digests can also be used in an image destination as long as the manifest matches the provided digest.
The digest of images can be explored with skopeo-inspect(1).
If `name` does not contain a slash, it is treated as `docker.io/library/name`.
Otherwise, the component before the first slash is checked to see whether it is recognized as a `hostname[:port]` (i.e., it contains either a `.` or a `:`, or the component is exactly `localhost`).
If the first component of `name` is not recognized as a `hostname[:port]`, `name` is treated as `docker.io/name`.
### **docker-archive:**_path[:docker-reference]_
An image stored in a docker-save(1) formatted file.
_docker-reference_ is only used when creating such a file, and it must not contain a digest.
It is further possible to read from stdin by specifying `docker-archive:/dev/stdin`, but note that the file used must be seekable.
### **docker-daemon:**_docker-reference|algo:digest_
An image stored in the docker daemon's internal storage.
The image must be specified as a _docker-reference_ or in an alternative _algo:digest_ format when being used as an image source.
The _algo:digest_ refers to the image ID reported by docker-inspect(1).
### **oci:**_path[:tag]_
An image compliant with the "Open Container Image Layout Specification" at _path_.
Using a _tag_ is optional and allows for storing multiple images at the same _path_.
### **oci-archive:**_path[:tag]_
An image compliant with the "Open Container Image Layout Specification" stored as a tar(1) archive at _path_.
### **ostree:**_docker-reference[@/absolute/repo/path]_
An image in the local ostree(1) repository.
_/absolute/repo/path_ defaults to _/ostree/repo_.
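For example, a sketch of copying into the default repository location (assuming the tools were built with ostree support):
```
$ skopeo copy docker://docker.io/library/busybox:latest ostree:busybox@/ostree/repo
```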
## EXAMPLES
The following examples demonstrate how some of the containers transports can be used.
The examples use skopeo-copy(1) for copying container images.
**Copying an image from one registry to another**:
```
$ skopeo copy docker://docker.io/library/alpine:latest docker://localhost:5000/alpine:latest
```
**Copying an image from a running Docker daemon to a directory in the OCI layout**:
```
$ mkdir alpine-oci
$ skopeo copy docker-daemon:alpine:latest oci:alpine-oci
$ tree alpine-oci
alpine-oci/
├── blobs
│   └── sha256
│       ├── 83ef92b73cf4595aa7fe214ec6747228283d585f373d8f6bc08d66bebab531b7
│       ├── 9a6259e911dcd0a53535a25a9760ad8f2eded3528e0ad5604c4488624795cecc
│       └── ff8df268d29ccbe81cdf0a173076dcfbbea4bb2b6df1dd26766a73cb7b4ae6f7
├── index.json
└── oci-layout

2 directories, 5 files
```
**Copying an image from a registry to the local storage**:
```
$ skopeo copy docker://docker.io/library/alpine:latest containers-storage:alpine:latest
```
## SEE ALSO
docker-login(1), docker-save(1), ostree(1), podman-login(1), skopeo-copy(1), skopeo-inspect(1), tar(1), containers-registries.conf(5), containers-storage.conf(5)
## AUTHORS
Miloslav Trmač <mitr@redhat.com>
Valentin Rothberg <rothberg@redhat.com>

View File

@@ -0,0 +1,136 @@
# Signature access protocols
The `github.com/containers/image` library supports signatures implemented as blobs “attached to” an image.
Some image transports (local storage formats and remote protocols) implement these signatures natively
or trivially; for others, the protocol extensions described below are necessary.
## docker/distribution registries—separate storage
### Usage
Any existing docker/distribution registry, whether or not it natively supports signatures,
can be augmented with separate signature storage by configuring a signature storage URL in [`registries.d`](registries.d.md).
`registries.d` can be configured to use one storage URL for a whole docker/distribution server,
or separate URLs for smaller namespaces or individual repositories within the server
(which e.g. allows image authors to manage their own signature storage while publishing
the images on the public `docker.io` server).
The signature storage URL defines a root of a path hierarchy.
It can be either a `file:///…` URL, pointing to a local directory structure,
or a `http`/`https` URL, pointing to a remote server.
`file:///` signature storage can be both read and written; `http`/`https` only supports reading.
The same path hierarchy is used in both cases, so the HTTP/HTTPS server can be
a simple static web server serving a directory structure created by writing to a `file:///` signature storage.
(This of course does not prevent other server implementations,
e.g. an HTTP server reading signatures from a database.)
The usual workflow for producing and distributing images using the separate storage mechanism
is to configure the repository in `registries.d` with a `sigstore-staging` URL pointing to a private
`file:///` staging area, and a `sigstore` URL pointing to a public web server.
To publish an image, the image author would sign the image as necessary (e.g. using `skopeo copy`),
and then copy the created directory structure from the `file:///` staging area
to a subdirectory of a webroot of the public web server so that they are accessible using the public `sigstore` URL.
The author would also instruct consumers of the image to set up a `sigstore` URL pointing to the public web server, or provide them with a `registries.d` configuration file which does so.
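For illustration, a minimal `registries.d` entry implementing this workflow might look like the sketch below; the staging path and the `example.com` URL are placeholders, and [`registries.d`](registries.d.md) remains the authoritative reference for the syntax:
```
docker:
  docker.io/library/busybox:
    sigstore-staging: file:///var/lib/containers/sigstore-staging
    sigstore: https://example.com/sigstore
```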
### Path structure
Given a _base_ signature storage URL configured in `registries.d` as mentioned above,
and a container image stored in a docker/distribution registry using the _fully-expanded_ name
_hostname_`/`_namespaces_`/`_name_{`@`_digest_,`:`_tag_} (e.g. for `docker.io/library/busybox:latest`,
_namespaces_ is `library`, even if the user refers to the image using the shorter `busybox:latest` syntax),
signatures are accessed using URLs of the form
> _base_`/`_namespaces_`/`_name_`@`_digest-algo_`=`_digest-value_`/signature-`_index_
where _digest-algo_`:`_digest-value_ is a manifest digest usable for referencing the relevant image manifest
(i.e. even if the user referenced the image using a tag,
the signature storage is always disambiguated using digest references).
Note that in the URLs used for signatures,
_digest-algo_ and _digest-value_ are separated using the `=` character,
not `:` as when accessing the manifest using the docker/distribution API.
Within the URL, _index_ is a decimal integer (in the canonical form), starting with 1.
Signatures are stored at URLs with successive _index_ values; to read all of them, start with _index_=1,
and continue reading signatures and increasing _index_ as long as signatures with these _index_ values exist.
Similarly, to add one more signature to an image, find the first _index_ which does not exist, and
then store the new signature using that _index_ value.
There is no way to list existing signatures other than iterating through the successive _index_ values,
and no way to download all of the signatures at once.
### Examples
For a docker/distribution image available as `busybox@sha256:817a12c32a39bbe394944ba49de563e085f1d3c5266eb8e9723256bc4448680e`
(or as `busybox:latest` if the `latest` tag points to a manifest with the same digest),
and with a `registries.d` configuration specifying a `sigstore` URL `https://example.com/sigstore` for the same image,
the following URLs would be accessed to download all signatures:
> - `https://example.com/sigstore/library/busybox@sha256=817a12c32a39bbe394944ba49de563e085f1d3c5266eb8e9723256bc4448680e/signature-1`
> - `https://example.com/sigstore/library/busybox@sha256=817a12c32a39bbe394944ba49de563e085f1d3c5266eb8e9723256bc4448680e/signature-2`
> - …
For a docker/distribution image available as `example.com/ns1/ns2/ns3/repo@somedigest:digestvalue` and the same
`sigstore` URL, the signatures would be available at
> `https://example.com/sigstore/ns1/ns2/ns3/repo@somedigest=digestvalue/signature-1`
and so on.
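A client implementing the read loop described above could, as a minimal sketch, fetch signatures with curl until the first missing index:
```
i=1
while curl -fsS -o "signature-$i" \
    "https://example.com/sigstore/library/busybox@sha256=817a12c32a39bbe394944ba49de563e085f1d3c5266eb8e9723256bc4448680e/signature-$i"; do
  # -f makes curl fail on HTTP errors such as 404, which terminates the loop
  i=$((i + 1))
done
```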
## (OpenShift) docker/distribution API extension
As of https://github.com/openshift/origin/pull/12504/, the OpenShift-embedded registry also provides
an extension of the docker/distribution API which allows simpler access to the signatures,
using only the docker/distribution API endpoint.
This API is not inherently OpenShift-specific (e.g. the client does not need to know the OpenShift API endpoint,
and credentials sufficient to access the docker/distribution API server are sufficient to access signatures as well),
and it is the preferred way to implement signature storage in registries.
See https://github.com/openshift/openshift-docs/pull/3556 for the upstream documentation of the API.
To read the signature, any user with access to an image can use the `/extensions/v2/…/signatures/…`
path to read an array of signatures. Use only the signature objects
which have `version` equal to `2`, `type` equal to `atomic`, and read the signature from `content`;
ignore the other fields of the signature object.
To add a single signature, `PUT` a new object with `version` set to `2`, `type` set to `atomic`,
and `content` set to the signature. Also set `name` to a unique name with the form
_digest_`@`_per-image-name_, where _digest_ is an image manifest digest (also used in the URL),
and _per-image-name_ is any unique identifier.
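For illustration, the body of such a `PUT` might look like the following sketch; the `mysignature-1` suffix is an arbitrary client-chosen identifier, and the base64 encoding of `content` is an assumption about how the binary signature blob is represented in JSON:
```
{
  "version": 2,
  "type": "atomic",
  "name": "sha256:817a12c32a39bbe394944ba49de563e085f1d3c5266eb8e9723256bc4448680e@mysignature-1",
  "content": "<base64-encoded signature blob>"
}
```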
To add more than one signature, add them one at a time. This API does not allow deleting signatures.
Note that because signatures are stored within the cluster-wide image objects (i.e. different namespaces cannot associate different sets of signatures with the same image), updating signatures requires cluster-wide access to the `imagesignatures` resource (by default available to the `system:image-signer` role).
## OpenShift-embedded registries
The OpenShift-embedded registry implements the ordinary docker/distribution API,
and it also exposes images through the OpenShift REST API (available through the “API master” servers).
Note: OpenShift versions 1.5 and later support the above-described [docker/distribution API extension](#openshift-dockerdistribution-api-extension),
which is easier to set up and should usually be preferred.
Continue reading for details on using older versions of OpenShift.
As of https://github.com/openshift/origin/pull/9181,
signatures are exposed through the OpenShift API
(i.e. to access the complete image, it is necessary to use both APIs,
in particular to know the URLs for both the docker/distribution and the OpenShift API master endpoints).
To read the signature, any user with access to an image can use the `imagestreamimages` namespaced
resource to read an `Image` object and its `Signatures` array. Use only the `ImageSignature` objects
which have `Type` equal to `atomic`, and read the signature from `Content`; ignore the other fields of
the `ImageSignature` object.
To add or remove signatures, use the cluster-wide (non-namespaced) `imagesignatures` resource,
with `Type` set to `atomic` and `Content` set to the signature. Signature names must have the form
_digest_`@`_per-image-name_, where _digest_ is an image manifest digest (OpenShift “image name”),
and _per-image-name_ is any unique identifier.
Note that because signatures are stored within the cluster-wide image objects (i.e. different namespaces cannot associate different sets of signatures with the same image), updating signatures requires cluster-wide access to the `imagesignatures` resource (by default available to the `system:image-signer` role),
and deleting signatures is strongly discouraged
(it deletes the signature from all namespaces which contain the same image).

View File

@@ -11,7 +11,7 @@ import (
"github.com/containers/image/docker/reference"
"github.com/containers/image/manifest"
"github.com/containers/image/pkg/blobinfocache"
"github.com/containers/image/pkg/blobinfocache/none"
"github.com/containers/image/types"
"github.com/opencontainers/go-digest"
imgspecv1 "github.com/opencontainers/image-spec/specs-go/v1"
@@ -96,7 +96,7 @@ func (m *manifestSchema2) ConfigBlob(ctx context.Context) ([]byte, error) {
if m.src == nil {
return nil, errors.Errorf("Internal error: neither src nor configBlob set in manifestSchema2")
}
stream, _, err := m.src.GetBlob(ctx, manifest.BlobInfoFromSchema2Descriptor(m.m.ConfigDescriptor), blobinfocache.NoCache)
stream, _, err := m.src.GetBlob(ctx, manifest.BlobInfoFromSchema2Descriptor(m.m.ConfigDescriptor), none.NoCache)
if err != nil {
return nil, err
}
@@ -252,7 +252,7 @@ func (m *manifestSchema2) convertToManifestSchema1(ctx context.Context, dest typ
logrus.Debugf("Uploading empty layer during conversion to schema 1")
// Ideally we should update the relevant BlobInfoCache about this layer, but that would require passing it down here,
// and anyway this blob is so small that it's easier to just copy it than to worry about figuring out another location where to get it.
info, err := dest.PutBlob(ctx, bytes.NewReader(GzippedEmptyLayer), types.BlobInfo{Digest: GzippedEmptyLayerDigest, Size: int64(len(GzippedEmptyLayer))}, blobinfocache.NoCache, false)
info, err := dest.PutBlob(ctx, bytes.NewReader(GzippedEmptyLayer), types.BlobInfo{Digest: GzippedEmptyLayerDigest, Size: int64(len(GzippedEmptyLayer))}, none.NoCache, false)
if err != nil {
return nil, errors.Wrap(err, "Error uploading empty layer")
}

View File

@@ -7,7 +7,7 @@ import (
"github.com/containers/image/docker/reference"
"github.com/containers/image/manifest"
"github.com/containers/image/pkg/blobinfocache"
"github.com/containers/image/pkg/blobinfocache/none"
"github.com/containers/image/types"
"github.com/opencontainers/go-digest"
imgspecv1 "github.com/opencontainers/image-spec/specs-go/v1"
@@ -61,7 +61,7 @@ func (m *manifestOCI1) ConfigBlob(ctx context.Context) ([]byte, error) {
if m.src == nil {
return nil, errors.Errorf("Internal error: neither src nor configBlob set in manifestOCI1")
}
stream, _, err := m.src.GetBlob(ctx, manifest.BlobInfoFromOCI1Descriptor(m.m.Config), blobinfocache.NoCache)
stream, _, err := m.src.GetBlob(ctx, manifest.BlobInfoFromOCI1Descriptor(m.m.Config), none.NoCache)
if err != nil {
return nil, err
}

View File

@@ -17,7 +17,7 @@ import (
"github.com/containers/image/types"
"github.com/containers/storage/pkg/ioutils"
"github.com/klauspost/pgzip"
"github.com/opencontainers/go-digest"
digest "github.com/opencontainers/go-digest"
glib "github.com/ostreedev/ostree-go/pkg/glibobject"
"github.com/pkg/errors"
"github.com/vbatts/tar-split/tar/asm"
@@ -313,24 +313,19 @@ func (s *ostreeImageSource) GetBlob(ctx context.Context, info types.BlobInfo, ca
if err != nil {
return nil, 0, err
}
defer mfz.Close()
metaUnpacker := storage.NewJSONUnpacker(mfz)
getter, err := newOSTreePathFileGetter(s.repo, branch)
if err != nil {
mfz.Close()
return nil, 0, err
}
ots := asm.NewOutputTarStream(getter, metaUnpacker)
pipeReader, pipeWriter := io.Pipe()
go func() {
io.Copy(pipeWriter, ots)
pipeWriter.Close()
}()
rc := ioutils.NewReadCloserWrapper(pipeReader, func() error {
rc := ioutils.NewReadCloserWrapper(ots, func() error {
getter.Close()
mfz.Close()
return ots.Close()
})
return rc, layerSize, nil

View File

@@ -1,4 +1,5 @@
package blobinfocache
// Package boltdb implements a BlobInfoCache backed by BoltDB.
package boltdb
import (
"fmt"
@@ -6,8 +7,9 @@ import (
"sync"
"time"
"github.com/boltdb/bolt"
"github.com/containers/image/pkg/blobinfocache/internal/prioritize"
"github.com/containers/image/types"
bolt "github.com/etcd-io/bbolt"
"github.com/opencontainers/go-digest"
"github.com/sirupsen/logrus"
)
@@ -81,22 +83,23 @@ func unlockPath(path string) {
}
}
// boltDBCache si a BlobInfoCache implementation which uses a BoltDB file at the specified path.
// cache is a BlobInfoCache implementation which uses a BoltDB file at the specified path.
//
// Note that we don't keep the database open across operations, because that would lock the file and block any other
// users; instead, we need to open/close it for every single write or lookup.
type boltDBCache struct {
type cache struct {
path string
}
// NewBoltDBCache returns a BlobInfoCache implementation which uses a BoltDB file at path.
// Most users should call DefaultCache instead.
func NewBoltDBCache(path string) types.BlobInfoCache {
return &boltDBCache{path: path}
// New returns a BlobInfoCache implementation which uses a BoltDB file at path.
//
// Most users should call blobinfocache.DefaultCache instead.
func New(path string) types.BlobInfoCache {
return &cache{path: path}
}
// view runs the specified fn within a read-only transaction on the database.
func (bdc *boltDBCache) view(fn func(tx *bolt.Tx) error) (retErr error) {
func (bdc *cache) view(fn func(tx *bolt.Tx) error) (retErr error) {
// bolt.Open(bdc.path, 0600, &bolt.Options{ReadOnly: true}) will, if the file does not exist,
// nevertheless create it, but with an O_RDONLY file descriptor, try to initialize it, and fail — while holding
// a read lock, blocking any future writes.
@@ -122,7 +125,7 @@ func (bdc *boltDBCache) view(fn func(tx *bolt.Tx) error) (retErr error) {
}
// update runs the specified fn within a read-write transaction on the database.
func (bdc *boltDBCache) update(fn func(tx *bolt.Tx) error) (retErr error) {
func (bdc *cache) update(fn func(tx *bolt.Tx) error) (retErr error) {
lockPath(bdc.path)
defer unlockPath(bdc.path)
db, err := bolt.Open(bdc.path, 0600, nil)
@@ -139,7 +142,7 @@ func (bdc *boltDBCache) update(fn func(tx *bolt.Tx) error) (retErr error) {
}
// uncompressedDigest implements BlobInfoCache.UncompressedDigest within the provided read-only transaction.
func (bdc *boltDBCache) uncompressedDigest(tx *bolt.Tx, anyDigest digest.Digest) digest.Digest {
func (bdc *cache) uncompressedDigest(tx *bolt.Tx, anyDigest digest.Digest) digest.Digest {
if b := tx.Bucket(uncompressedDigestBucket); b != nil {
if uncompressedBytes := b.Get([]byte(anyDigest.String())); uncompressedBytes != nil {
d, err := digest.Parse(string(uncompressedBytes))
@@ -166,7 +169,7 @@ func (bdc *boltDBCache) uncompressedDigest(tx *bolt.Tx, anyDigest digest.Digest)
// UncompressedDigest returns an uncompressed digest corresponding to anyDigest.
// May return anyDigest if it is known to be uncompressed.
// Returns "" if nothing is known about the digest (it may be compressed or uncompressed).
func (bdc *boltDBCache) UncompressedDigest(anyDigest digest.Digest) digest.Digest {
func (bdc *cache) UncompressedDigest(anyDigest digest.Digest) digest.Digest {
var res digest.Digest
if err := bdc.view(func(tx *bolt.Tx) error {
res = bdc.uncompressedDigest(tx, anyDigest)
@@ -182,7 +185,7 @@ func (bdc *boltDBCache) UncompressedDigest(anyDigest digest.Digest) digest.Diges
// WARNING: Only call this for LOCALLY VERIFIED data; don't record a digest pair just because some remote author claims so (e.g.
// because a manifest/config pair exists); otherwise the cache could be poisoned and allow substituting unexpected blobs.
// (Eventually, the DiffIDs in image config could detect the substitution, but that may be too late, and not all image formats contain that data.)
func (bdc *boltDBCache) RecordDigestUncompressedPair(anyDigest digest.Digest, uncompressed digest.Digest) {
func (bdc *cache) RecordDigestUncompressedPair(anyDigest digest.Digest, uncompressed digest.Digest) {
_ = bdc.update(func(tx *bolt.Tx) error {
b, err := tx.CreateBucketIfNotExists(uncompressedDigestBucket)
if err != nil {
@@ -219,7 +222,7 @@ func (bdc *boltDBCache) RecordDigestUncompressedPair(anyDigest digest.Digest, un
// RecordKnownLocation records that a blob with the specified digest exists within the specified (transport, scope) scope,
// and can be reused given the opaque location data.
func (bdc *boltDBCache) RecordKnownLocation(transport types.ImageTransport, scope types.BICTransportScope, blobDigest digest.Digest, location types.BICLocationReference) {
func (bdc *cache) RecordKnownLocation(transport types.ImageTransport, scope types.BICTransportScope, blobDigest digest.Digest, location types.BICLocationReference) {
_ = bdc.update(func(tx *bolt.Tx) error {
b, err := tx.CreateBucketIfNotExists(knownLocationsBucket)
if err != nil {
@@ -248,8 +251,8 @@ func (bdc *boltDBCache) RecordKnownLocation(transport types.ImageTransport, scop
}) // FIXME? Log error (but throttle the log volume on repeated accesses)?
}
// appendReplacementCandiates creates candidateWithTime values for digest in scopeBucket, and returns the result of appending them to candidates.
func (bdc *boltDBCache) appendReplacementCandidates(candidates []candidateWithTime, scopeBucket *bolt.Bucket, digest digest.Digest) []candidateWithTime {
// appendReplacementCandidates creates prioritize.CandidateWithTime values for digest in scopeBucket, and returns the result of appending them to candidates.
func (bdc *cache) appendReplacementCandidates(candidates []prioritize.CandidateWithTime, scopeBucket *bolt.Bucket, digest digest.Digest) []prioritize.CandidateWithTime {
b := scopeBucket.Bucket([]byte(digest.String()))
if b == nil {
return candidates
@@ -259,12 +262,12 @@ func (bdc *boltDBCache) appendReplacementCandidates(candidates []candidateWithTi
if err := t.UnmarshalBinary(v); err != nil {
return err
}
candidates = append(candidates, candidateWithTime{
candidate: types.BICReplacementCandidate{
candidates = append(candidates, prioritize.CandidateWithTime{
Candidate: types.BICReplacementCandidate{
Digest: digest,
Location: types.BICLocationReference{Opaque: string(k)},
},
lastSeen: t,
LastSeen: t,
})
return nil
}) // FIXME? Log error (but throttle the log volume on repeated accesses)?
@@ -277,8 +280,8 @@ func (bdc *boltDBCache) appendReplacementCandidates(candidates []candidateWithTi
// If !canSubstitute, the returned candidates will match the submitted digest exactly; if canSubstitute,
// data from previous RecordDigestUncompressedPair calls is used to also look up variants of the blob which have the same
// uncompressed digest.
func (bdc *boltDBCache) CandidateLocations(transport types.ImageTransport, scope types.BICTransportScope, primaryDigest digest.Digest, canSubstitute bool) []types.BICReplacementCandidate {
res := []candidateWithTime{}
func (bdc *cache) CandidateLocations(transport types.ImageTransport, scope types.BICTransportScope, primaryDigest digest.Digest, canSubstitute bool) []types.BICReplacementCandidate {
res := []prioritize.CandidateWithTime{}
var uncompressedDigestValue digest.Digest // = ""
if err := bdc.view(func(tx *bolt.Tx) error {
scopeBucket := tx.Bucket(knownLocationsBucket)
@@ -325,5 +328,5 @@ func (bdc *boltDBCache) CandidateLocations(transport types.ImageTransport, scope
return []types.BICReplacementCandidate{} // FIXME? Log err (but throttle the log volume on repeated accesses)?
}
return destructivelyPrioritizeReplacementCandidates(res, primaryDigest, uncompressedDigestValue)
return prioritize.DestructivelyPrioritizeReplacementCandidates(res, primaryDigest, uncompressedDigestValue)
}

View File

@@ -4,7 +4,10 @@ import (
"fmt"
"os"
"path/filepath"
"strconv"
"github.com/containers/image/pkg/blobinfocache/boltdb"
"github.com/containers/image/pkg/blobinfocache/memory"
"github.com/containers/image/types"
"github.com/sirupsen/logrus"
)
@@ -45,19 +48,28 @@ func blobInfoCacheDir(sys *types.SystemContext, euid int) (string, error) {
return filepath.Join(dataDir, "containers", "cache"), nil
}
func getRootlessUID() int {
uidEnv := os.Getenv("_CONTAINERS_ROOTLESS_UID")
if uidEnv != "" {
u, _ := strconv.Atoi(uidEnv)
return u
}
return os.Geteuid()
}
// DefaultCache returns the default BlobInfoCache implementation appropriate for sys.
func DefaultCache(sys *types.SystemContext) types.BlobInfoCache {
dir, err := blobInfoCacheDir(sys, os.Geteuid())
dir, err := blobInfoCacheDir(sys, getRootlessUID())
if err != nil {
logrus.Debugf("Error determining a location for %s, using a memory-only cache", blobInfoCacheFilename)
return NewMemoryCache()
return memory.New()
}
path := filepath.Join(dir, blobInfoCacheFilename)
if err := os.MkdirAll(dir, 0700); err != nil {
logrus.Debugf("Error creating parent directories for %s, using a memory-only cache: %v", blobInfoCacheFilename, err)
return NewMemoryCache()
return memory.New()
}
logrus.Debugf("Using blob info cache at %s", path)
return NewBoltDBCache(path)
return boltdb.New(path)
}

View File

@@ -1,4 +1,6 @@
package blobinfocache
// Package prioritize provides utilities for prioritizing locations in
// types.BlobInfoCache.CandidateLocations.
package prioritize
import (
"sort"
@@ -13,16 +15,16 @@ import (
// This is a heuristic/guess, and could well use a different value.
const replacementAttempts = 5
// candidateWithTime is the input to types.BICReplacementCandidate prioritization.
type candidateWithTime struct {
candidate types.BICReplacementCandidate // The replacement candidate
lastSeen time.Time // Time the candidate was last known to exist (either read or written)
// CandidateWithTime is the input to types.BICReplacementCandidate prioritization.
type CandidateWithTime struct {
Candidate types.BICReplacementCandidate // The replacement candidate
LastSeen time.Time // Time the candidate was last known to exist (either read or written)
}
// candidateSortState is a local state implementing sort.Interface on candidates to prioritize,
// along with the specially-treated digest values for the implementation of sort.Interface.Less
type candidateSortState struct {
cs []candidateWithTime // The entries to sort
cs []CandidateWithTime // The entries to sort
primaryDigest digest.Digest // The digest the user actually asked for
uncompressedDigest digest.Digest // The uncompressed digest corresponding to primaryDigest. May be "", or even equal to primaryDigest
}
@@ -40,35 +42,35 @@ func (css *candidateSortState) Less(i, j int) bool {
// Other digest values are primarily sorted by time (more recent first), secondarily by digest (to provide a deterministic order)
// First, deal with the primaryDigest/uncompressedDigest cases:
if xi.candidate.Digest != xj.candidate.Digest {
if xi.Candidate.Digest != xj.Candidate.Digest {
// - The two digests are different, and one (or both) of the digests is primaryDigest or uncompressedDigest: time does not matter
if xi.candidate.Digest == css.primaryDigest {
if xi.Candidate.Digest == css.primaryDigest {
return true
}
if xj.candidate.Digest == css.primaryDigest {
if xj.Candidate.Digest == css.primaryDigest {
return false
}
if css.uncompressedDigest != "" {
if xi.candidate.Digest == css.uncompressedDigest {
if xi.Candidate.Digest == css.uncompressedDigest {
return false
}
if xj.candidate.Digest == css.uncompressedDigest {
if xj.Candidate.Digest == css.uncompressedDigest {
return true
}
}
} else { // xi.candidate.Digest == xj.candidate.Digest
} else { // xi.Candidate.Digest == xj.Candidate.Digest
// The two digests are the same, and are either primaryDigest or uncompressedDigest: order by time
if xi.candidate.Digest == css.primaryDigest || (css.uncompressedDigest != "" && xi.candidate.Digest == css.uncompressedDigest) {
return xi.lastSeen.After(xj.lastSeen)
if xi.Candidate.Digest == css.primaryDigest || (css.uncompressedDigest != "" && xi.Candidate.Digest == css.uncompressedDigest) {
return xi.LastSeen.After(xj.LastSeen)
}
}
// Neither of the digests are primaryDigest/uncompressedDigest:
if !xi.lastSeen.Equal(xj.lastSeen) { // Order primarily by time
return xi.lastSeen.After(xj.lastSeen)
if !xi.LastSeen.Equal(xj.LastSeen) { // Order primarily by time
return xi.LastSeen.After(xj.LastSeen)
}
// Fall back to digest, if timestamps end up _exactly_ the same (how?!)
return xi.candidate.Digest < xj.candidate.Digest
return xi.Candidate.Digest < xj.Candidate.Digest
}
func (css *candidateSortState) Swap(i, j int) {
@@ -77,7 +79,7 @@ func (css *candidateSortState) Swap(i, j int) {
// destructivelyPrioritizeReplacementCandidatesWithMax is destructivelyPrioritizeReplacementCandidates with a parameter for the
// number of entries to limit, only to make testing simpler.
func destructivelyPrioritizeReplacementCandidatesWithMax(cs []candidateWithTime, primaryDigest, uncompressedDigest digest.Digest, maxCandidates int) []types.BICReplacementCandidate {
func destructivelyPrioritizeReplacementCandidatesWithMax(cs []CandidateWithTime, primaryDigest, uncompressedDigest digest.Digest, maxCandidates int) []types.BICReplacementCandidate {
// We don't need to use sort.Stable() because nanosecond timestamps are (presumably?) unique, so no two elements should
// compare equal.
sort.Sort(&candidateSortState{
@@ -92,17 +94,17 @@ func destructivelyPrioritizeReplacementCandidatesWithMax(cs []candidateWithTime,
}
res := make([]types.BICReplacementCandidate, resLength)
for i := range res {
res[i] = cs[i].candidate
res[i] = cs[i].Candidate
}
return res
}
// destructivelyPrioritizeReplacementCandidates consumes AND DESTROYS an array of possible replacement candidates with their last known existence times,
// DestructivelyPrioritizeReplacementCandidates consumes AND DESTROYS an array of possible replacement candidates with their last known existence times,
// the primary digest the user actually asked for, and the corresponding uncompressed digest (if known, possibly equal to the primary digest),
// and returns an appropriately prioritized and/or trimmed result suitable for a return value from types.BlobInfoCache.CandidateLocations.
//
// WARNING: The array of candidates is destructively modified. (The implementation of this function could of course
// make a copy, but all CandidateLocations implementations build the slice of candidates only for the single purpose of calling this function anyway.)
func destructivelyPrioritizeReplacementCandidates(cs []candidateWithTime, primaryDigest, uncompressedDigest digest.Digest) []types.BICReplacementCandidate {
func DestructivelyPrioritizeReplacementCandidates(cs []CandidateWithTime, primaryDigest, uncompressedDigest digest.Digest) []types.BICReplacementCandidate {
return destructivelyPrioritizeReplacementCandidatesWithMax(cs, primaryDigest, uncompressedDigest, replacementAttempts)
}

View File

@@ -1,10 +1,13 @@
package blobinfocache
// Package memory implements an in-memory BlobInfoCache.
package memory
import (
"sync"
"time"
"github.com/containers/image/pkg/blobinfocache/internal/prioritize"
"github.com/containers/image/types"
"github.com/opencontainers/go-digest"
digest "github.com/opencontainers/go-digest"
"github.com/sirupsen/logrus"
)
@@ -15,19 +18,25 @@ type locationKey struct {
blobDigest digest.Digest
}
// memoryCache implements an in-memory-only BlobInfoCache
type memoryCache struct {
// cache implements an in-memory-only BlobInfoCache
type cache struct {
mutex sync.Mutex
// The following fields can only be accessed with mutex held.
uncompressedDigests map[digest.Digest]digest.Digest
digestsByUncompressed map[digest.Digest]map[digest.Digest]struct{} // stores a set of digests for each uncompressed digest
knownLocations map[locationKey]map[types.BICLocationReference]time.Time // stores last known existence time for each location reference
}
// NewMemoryCache returns a BlobInfoCache implementation which is in-memory only.
// This is primarily intended for tests, but also used as a fallback if DefaultCache
// can't determine, or set up, the location for a persistent cache.
// Manual users of types.{ImageSource,ImageDestination} might also use this instead of a persistent cache.
func NewMemoryCache() types.BlobInfoCache {
return &memoryCache{
// New returns a BlobInfoCache implementation which is in-memory only.
//
// This is primarily intended for tests, but also used as a fallback
// if blobinfocache.DefaultCache can't determine, or set up, the
// location for a persistent cache. Most users should use
// blobinfocache.DefaultCache instead of calling this directly.
// Manual users of types.{ImageSource,ImageDestination} might also use
// this instead of a persistent cache.
func New() types.BlobInfoCache {
return &cache{
uncompressedDigests: map[digest.Digest]digest.Digest{},
digestsByUncompressed: map[digest.Digest]map[digest.Digest]struct{}{},
knownLocations: map[locationKey]map[types.BICLocationReference]time.Time{},
@@ -37,7 +46,14 @@ func NewMemoryCache() types.BlobInfoCache {
// UncompressedDigest returns an uncompressed digest corresponding to anyDigest.
// May return anyDigest if it is known to be uncompressed.
// Returns "" if nothing is known about the digest (it may be compressed or uncompressed).
func (mem *memoryCache) UncompressedDigest(anyDigest digest.Digest) digest.Digest {
func (mem *cache) UncompressedDigest(anyDigest digest.Digest) digest.Digest {
mem.mutex.Lock()
defer mem.mutex.Unlock()
return mem.uncompressedDigestLocked(anyDigest)
}
// uncompressedDigestLocked implements types.BlobInfoCache.UncompressedDigest, but must be called only with mem.mutex held.
func (mem *cache) uncompressedDigestLocked(anyDigest digest.Digest) digest.Digest {
if d, ok := mem.uncompressedDigests[anyDigest]; ok {
return d
}
@@ -55,7 +71,9 @@ func (mem *memoryCache) UncompressedDigest(anyDigest digest.Digest) digest.Diges
// WARNING: Only call this for LOCALLY VERIFIED data; don't record a digest pair just because some remote author claims so (e.g.
// because a manifest/config pair exists); otherwise the cache could be poisoned and allow substituting unexpected blobs.
// (Eventually, the DiffIDs in image config could detect the substitution, but that may be too late, and not all image formats contain that data.)
func (mem *memoryCache) RecordDigestUncompressedPair(anyDigest digest.Digest, uncompressed digest.Digest) {
func (mem *cache) RecordDigestUncompressedPair(anyDigest digest.Digest, uncompressed digest.Digest) {
mem.mutex.Lock()
defer mem.mutex.Unlock()
if previous, ok := mem.uncompressedDigests[anyDigest]; ok && previous != uncompressed {
logrus.Warnf("Uncompressed digest for blob %s previously recorded as %s, now %s", anyDigest, previous, uncompressed)
}
@@ -71,7 +89,9 @@ func (mem *memoryCache) RecordDigestUncompressedPair(anyDigest digest.Digest, un
// RecordKnownLocation records that a blob with the specified digest exists within the specified (transport, scope) scope,
// and can be reused given the opaque location data.
func (mem *memoryCache) RecordKnownLocation(transport types.ImageTransport, scope types.BICTransportScope, blobDigest digest.Digest, location types.BICLocationReference) {
func (mem *cache) RecordKnownLocation(transport types.ImageTransport, scope types.BICTransportScope, blobDigest digest.Digest, location types.BICLocationReference) {
mem.mutex.Lock()
defer mem.mutex.Unlock()
key := locationKey{transport: transport.Name(), scope: scope, blobDigest: blobDigest}
locationScope, ok := mem.knownLocations[key]
if !ok {
@@ -81,16 +101,16 @@ func (mem *memoryCache) RecordKnownLocation(transport types.ImageTransport, scop
locationScope[location] = time.Now() // Possibly overwriting an older entry.
}
// appendReplacementCandiates creates candidateWithTime values for (transport, scope, digest), and returns the result of appending them to candidates.
func (mem *memoryCache) appendReplacementCandidates(candidates []candidateWithTime, transport types.ImageTransport, scope types.BICTransportScope, digest digest.Digest) []candidateWithTime {
// appendReplacementCandidates creates prioritize.CandidateWithTime values for (transport, scope, digest), and returns the result of appending them to candidates.
func (mem *cache) appendReplacementCandidates(candidates []prioritize.CandidateWithTime, transport types.ImageTransport, scope types.BICTransportScope, digest digest.Digest) []prioritize.CandidateWithTime {
locations := mem.knownLocations[locationKey{transport: transport.Name(), scope: scope, blobDigest: digest}] // nil if not present
for l, t := range locations {
candidates = append(candidates, candidateWithTime{
candidate: types.BICReplacementCandidate{
candidates = append(candidates, prioritize.CandidateWithTime{
Candidate: types.BICReplacementCandidate{
Digest: digest,
Location: l,
},
lastSeen: t,
LastSeen: t,
})
}
return candidates
@@ -102,12 +122,14 @@ func (mem *memoryCache) appendReplacementCandidates(candidates []candidateWithTi
// If !canSubstitute, the returned candidates will match the submitted digest exactly; if canSubstitute,
// data from previous RecordDigestUncompressedPair calls is used to also look up variants of the blob which have the same
// uncompressed digest.
func (mem *memoryCache) CandidateLocations(transport types.ImageTransport, scope types.BICTransportScope, primaryDigest digest.Digest, canSubstitute bool) []types.BICReplacementCandidate {
res := []candidateWithTime{}
func (mem *cache) CandidateLocations(transport types.ImageTransport, scope types.BICTransportScope, primaryDigest digest.Digest, canSubstitute bool) []types.BICReplacementCandidate {
mem.mutex.Lock()
defer mem.mutex.Unlock()
res := []prioritize.CandidateWithTime{}
res = mem.appendReplacementCandidates(res, transport, scope, primaryDigest)
var uncompressedDigest digest.Digest // = ""
if canSubstitute {
if uncompressedDigest = mem.UncompressedDigest(primaryDigest); uncompressedDigest != "" {
if uncompressedDigest = mem.uncompressedDigestLocked(primaryDigest); uncompressedDigest != "" {
otherDigests := mem.digestsByUncompressed[uncompressedDigest] // nil if not present in the map
for d := range otherDigests {
if d != primaryDigest && d != uncompressedDigest {
@@ -119,5 +141,5 @@ func (mem *memoryCache) CandidateLocations(transport types.ImageTransport, scope
}
}
}
return destructivelyPrioritizeReplacementCandidates(res, primaryDigest, uncompressedDigest)
return prioritize.DestructivelyPrioritizeReplacementCandidates(res, primaryDigest, uncompressedDigest)
}

View File

@@ -1,4 +1,5 @@
package blobinfocache
// Package none implements a dummy BlobInfoCache which records no data.
package none
import (
"github.com/containers/image/types"
@@ -11,9 +12,10 @@ type noCache struct {
// NoCache implements BlobInfoCache by not recording any data.
//
// This exists primarily for implementations of configGetter for Manifest.Inspect,
// because configs only have one representation.
// Any use of BlobInfoCache with blobs should usually use at least a short-lived cache.
// This exists primarily for implementations of configGetter for
// Manifest.Inspect, because configs only have one representation.
// Any use of BlobInfoCache with blobs should usually use at least a
// short-lived cache, ideally blobinfocache.DefaultCache.
var NoCache types.BlobInfoCache = noCache{}
// UncompressedDigest returns an uncompressed digest corresponding to anyDigest.

View File

@@ -85,21 +85,6 @@ func GetAuthentication(sys *types.SystemContext, registry string) (string, strin
return "", "", nil
}
// GetUserLoggedIn returns the username logged in to registry from either
// auth.json or XDG_RUNTIME_DIR
// Used to tell the user if someone is logged in to the registry when logging in
func GetUserLoggedIn(sys *types.SystemContext, registry string) (string, error) {
path, err := getPathToAuth(sys)
if err != nil {
return "", err
}
username, _, _ := findAuthentication(registry, path, false)
if username != "" {
return username, nil
}
return "", nil
}
// RemoveAuthentication deletes the credentials stored in auth.json
func RemoveAuthentication(sys *types.SystemContext, registry string) error {
return modifyJSON(sys, func(auths *dockerConfigFile) (bool, error) {

View File

@@ -10,6 +10,10 @@ import (
"github.com/BurntSushi/toml"
"github.com/containers/image/types"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
"github.com/containers/image/docker/reference"
)
// systemRegistriesConfPath is the path to the system-wide registry
@@ -22,51 +26,68 @@ var systemRegistriesConfPath = builtinRegistriesConfPath
// DO NOT change this, instead see systemRegistriesConfPath above.
const builtinRegistriesConfPath = "/etc/containers/registries.conf"
// Mirror represents a mirror. Mirrors can be used as pull-through caches for
// registries.
type Mirror struct {
// The mirror's URL.
URL string `toml:"url"`
// Endpoint describes a remote location of a registry.
type Endpoint struct {
// The endpoint's remote location.
Location string `toml:"location"`
// If true, certs verification will be skipped and HTTP (non-TLS)
// connections will be allowed.
Insecure bool `toml:"insecure"`
}
// RewriteReference substitutes the provided reference `prefix` with the
// endpoint's `location` in `ref` and creates a new named reference from it.
// The function errors if the newly created reference is not parsable.
func (e *Endpoint) RewriteReference(ref reference.Named, prefix string) (reference.Named, error) {
refString := ref.String()
if !refMatchesPrefix(refString, prefix) {
return nil, fmt.Errorf("invalid prefix '%v' for reference '%v'", prefix, refString)
}
newNamedRef := strings.Replace(refString, prefix, e.Location, 1)
newParsedRef, err := reference.ParseNamed(newNamedRef)
if err != nil {
return nil, errors.Wrapf(err, "error rewriting reference")
}
logrus.Debugf("reference rewritten from '%v' to '%v'", refString, newParsedRef.String())
return newParsedRef, nil
}
// Registry represents a registry.
type Registry struct {
// Serializable registry URL.
URL string `toml:"url"`
// A registry is an Endpoint too
Endpoint
// The registry's mirrors.
Mirrors []Mirror `toml:"mirror"`
Mirrors []Endpoint `toml:"mirror"`
// If true, pulling from the registry will be blocked.
Blocked bool `toml:"blocked"`
// If true, certs verification will be skipped and HTTP (non-TLS)
// connections will be allowed.
Insecure bool `toml:"insecure"`
// If true, the registry can be used when pulling an unqualified image.
Search bool `toml:"unqualified-search"`
// Prefix is used for matching images, and to translate one namespace to
// another. If `Prefix="example.com/bar"`, `URL="example.com/foo/bar"`
// another. If `Prefix="example.com/bar"`, `location="example.com/foo/bar"`
// and we pull from "example.com/bar/myimage:latest", the image will
// effectively be pulled from "example.com/foo/bar/myimage:latest".
// If no Prefix is specified, it defaults to the specified URL.
// If no Prefix is specified, it defaults to the specified location.
Prefix string `toml:"prefix"`
}
// backwards compatability to sysregistries v1
type v1TOMLregistries struct {
// V1TOMLregistries is for backwards compatibility to sysregistries v1
type V1TOMLregistries struct {
Registries []string `toml:"registries"`
}
// V1TOMLConfig is for backwards compatibility to sysregistries v1
type V1TOMLConfig struct {
Search V1TOMLregistries `toml:"search"`
Insecure V1TOMLregistries `toml:"insecure"`
Block V1TOMLregistries `toml:"block"`
}
// tomlConfig is the data type used to unmarshal the toml config.
type tomlConfig struct {
Registries []Registry `toml:"registry"`
// backwards compatability to sysregistries v1
V1Registries struct {
Search v1TOMLregistries `toml:"search"`
Insecure v1TOMLregistries `toml:"insecure"`
Block v1TOMLregistries `toml:"block"`
} `toml:"registries"`
V1TOMLConfig `toml:"registries"`
}
// InvalidRegistries represents an invalid registry configuration. An example
@@ -81,18 +102,18 @@ func (e *InvalidRegistries) Error() string {
return e.s
}
// parseURL parses the input string, performs some sanity checks and returns
// parseLocation parses the input string, performs some sanity checks and returns
// the sanitized input string. An error is returned if the input string is
// empty or if contains an "http{s,}://" prefix.
func parseURL(input string) (string, error) {
func parseLocation(input string) (string, error) {
trimmed := strings.TrimRight(input, "/")
if trimmed == "" {
return "", &InvalidRegistries{s: "invalid URL: cannot be empty"}
return "", &InvalidRegistries{s: "invalid location: cannot be empty"}
}
if strings.HasPrefix(trimmed, "http://") || strings.HasPrefix(trimmed, "https://") {
msg := fmt.Sprintf("invalid URL '%s': URI schemes are not supported", input)
msg := fmt.Sprintf("invalid location '%s': URI schemes are not supported", input)
return "", &InvalidRegistries{s: msg}
}
@@ -108,42 +129,42 @@ func getV1Registries(config *tomlConfig) ([]Registry, error) {
// to minimize behavior inconsistency and not contribute to difficult-to-reproduce situations.
registryOrder := []string{}
getRegistry := func(url string) (*Registry, error) { // Note: _pointer_ to a long-lived object
getRegistry := func(location string) (*Registry, error) { // Note: _pointer_ to a long-lived object
var err error
url, err = parseURL(url)
location, err = parseLocation(location)
if err != nil {
return nil, err
}
reg, exists := regMap[url]
reg, exists := regMap[location]
if !exists {
reg = &Registry{
URL: url,
Mirrors: []Mirror{},
Prefix: url,
Endpoint: Endpoint{Location: location},
Mirrors: []Endpoint{},
Prefix: location,
}
regMap[url] = reg
registryOrder = append(registryOrder, url)
regMap[location] = reg
registryOrder = append(registryOrder, location)
}
return reg, nil
}
// Note: config.V1Registries.Search needs to be processed first to ensure registryOrder is populated in the right order
// if one of the search registries is also in one of the other lists.
for _, search := range config.V1Registries.Search.Registries {
for _, search := range config.V1TOMLConfig.Search.Registries {
reg, err := getRegistry(search)
if err != nil {
return nil, err
}
reg.Search = true
}
for _, blocked := range config.V1Registries.Block.Registries {
for _, blocked := range config.V1TOMLConfig.Block.Registries {
reg, err := getRegistry(blocked)
if err != nil {
return nil, err
}
reg.Blocked = true
}
for _, insecure := range config.V1Registries.Insecure.Registries {
for _, insecure := range config.V1TOMLConfig.Insecure.Registries {
reg, err := getRegistry(insecure)
if err != nil {
return nil, err
@@ -152,15 +173,15 @@ func getV1Registries(config *tomlConfig) ([]Registry, error) {
}
registries := []Registry{}
for _, url := range registryOrder {
reg := regMap[url]
for _, location := range registryOrder {
reg := regMap[location]
registries = append(registries, *reg)
}
return registries, nil
}
// postProcessRegistries checks the consistency of all registries (e.g., set
// the Prefix to URL if not set) and applies conflict checks. It returns an
// the Prefix to Location if not set) and applies conflict checks. It returns an
// array of cleaned registries and error in case of conflicts.
func postProcessRegistries(regs []Registry) ([]Registry, error) {
var registries []Registry
@@ -169,16 +190,16 @@ func postProcessRegistries(regs []Registry) ([]Registry, error) {
for _, reg := range regs {
var err error
// make sure URL and Prefix are valid
reg.URL, err = parseURL(reg.URL)
// make sure Location and Prefix are valid
reg.Location, err = parseLocation(reg.Location)
if err != nil {
return nil, err
}
if reg.Prefix == "" {
reg.Prefix = reg.URL
reg.Prefix = reg.Location
} else {
reg.Prefix, err = parseURL(reg.Prefix)
reg.Prefix, err = parseLocation(reg.Prefix)
if err != nil {
return nil, err
}
@@ -186,13 +207,13 @@ func postProcessRegistries(regs []Registry) ([]Registry, error) {
// make sure mirrors are valid
for _, mir := range reg.Mirrors {
mir.URL, err = parseURL(mir.URL)
mir.Location, err = parseLocation(mir.Location)
if err != nil {
return nil, err
}
}
registries = append(registries, reg)
regMap[reg.URL] = append(regMap[reg.URL], reg)
regMap[reg.Location] = append(regMap[reg.Location], reg)
}
// Given a registry can be mentioned multiple times (e.g., to have
@@ -202,15 +223,15 @@ func postProcessRegistries(regs []Registry) ([]Registry, error) {
// Note: we need to iterate over the registries array to ensure a
// deterministic behavior which is not guaranteed by maps.
for _, reg := range registries {
others, _ := regMap[reg.URL]
others, _ := regMap[reg.Location]
for _, other := range others {
if reg.Insecure != other.Insecure {
msg := fmt.Sprintf("registry '%s' is defined multiple times with conflicting 'insecure' setting", reg.URL)
msg := fmt.Sprintf("registry '%s' is defined multiple times with conflicting 'insecure' setting", reg.Location)
return nil, &InvalidRegistries{s: msg}
}
if reg.Blocked != other.Blocked {
msg := fmt.Sprintf("registry '%s' is defined multiple times with conflicting 'blocked' setting", reg.URL)
msg := fmt.Sprintf("registry '%s' is defined multiple times with conflicting 'blocked' setting", reg.Location)
return nil, &InvalidRegistries{s: msg}
}
}

65 vendor/github.com/containers/image/registries.conf generated vendored Normal file
View File

@@ -0,0 +1,65 @@
# For more information on this configuration file, see containers-registries.conf(5).
#
# There are multiple versions of the configuration syntax available, where the
# second iteration is backwards compatible with the first one. Mixing up both
# formats will result in a runtime error.
#
# The initial configuration format looks like this:
#
# Registries to search for images that are not fully-qualified.
# i.e. foobar.com/my_image:latest vs my_image:latest
[registries.search]
registries = []
# Registries that do not use TLS when pulling images or use self-signed
# certificates.
[registries.insecure]
registries = []
# Blocked registries prevent the `docker daemon` from pulling from them. If you specify
# "*", then the docker daemon will only be allowed to pull from registries listed above in the search
# registries. Blocked Registries is deprecated because other container runtimes and tools will not use it.
# It is recommended that you use the trust policy file /etc/containers/policy.json to control which
# registries you want to allow users to pull and push from. policy.json gives greater flexibility, and
# supports all container runtimes and tools including the docker daemon, cri-o, buildah ...
# The atomic CLI `atomic trust` can be used to easily configure the policy.json file.
[registries.block]
registries = []
# The second version of the configuration format allows specifying registry
# mirrors:
#
# [[registry]]
# # The main location of the registry
# location = "example.com"
#
# # If true, certs verification will be skipped and HTTP (non-TLS) connections
# # will be allowed.
# insecure = false
#
# # Prefix is used for matching images, and to translate one namespace to
# # another. If `prefix = "example.com/foo"`, `location = "example.com"` and
# # we pull from "example.com/foo/myimage:latest", the image will effectively be
# # pulled from "example.com/myimage:latest". If no Prefix is specified,
# # it defaults to the specified `location`. When a prefix is used, then a pull
# # without specifying the prefix is not possible any more.
# prefix = "example.com/foo"
#
# # If true, the registry can be used when pulling an unqualified image. If a
# # prefix is specified, unqualified pull is not possible any more.
# unqualified-search = false
#
# # If true, pulling from the registry will be blocked.
# blocked = false
#
# # All available mirrors of the registry. The mirrors will be evaluated in
# # order during an image pull. Furthermore it is possible to specify the
# # `insecure` flag per registry mirror, too.
# mirror = [
# { location = "example-mirror-0.local", insecure = false },
# { location = "example-mirror-1.local", insecure = true },
# # It is also possible to specify an additional path within the `location`.
# # A pull to `example.com/foo/image:latest` will then result in
# # `example-mirror-2.local/path/image:latest`.
# { location = "example-mirror-2.local/path" },
# ]

View File

@@ -30,7 +30,7 @@ import (
// -ldflags '-X github.com/containers/image/signature.systemDefaultPolicyPath=$your_path'
var systemDefaultPolicyPath = builtinDefaultPolicyPath
// builtinDefaultPolicyPath is the policy pat used for DefaultPolicy().
// builtinDefaultPolicyPath is the policy path used for DefaultPolicy().
// DO NOT change this, instead see systemDefaultPolicyPath above.
const builtinDefaultPolicyPath = "/etc/containers/policy.json"

View File

@@ -14,10 +14,11 @@ import (
"sync"
"sync/atomic"
"github.com/containers/image/docker/reference"
"github.com/containers/image/image"
"github.com/containers/image/internal/tmpdir"
"github.com/containers/image/manifest"
"github.com/containers/image/pkg/blobinfocache"
"github.com/containers/image/pkg/blobinfocache/none"
"github.com/containers/image/types"
"github.com/containers/storage"
"github.com/containers/storage/pkg/archive"
@@ -70,6 +71,13 @@ type storageImageCloser struct {
size int64
}
// manifestBigDataKey returns a key suitable for recording a manifest with the specified digest using storage.Store.ImageBigData and related functions.
// If a specific manifest digest is explicitly requested by the user, the key returned by this function should be used preferably;
// for compatibility, if a manifest is not available under this key, also check storage.ImageDigestBigDataKey
func manifestBigDataKey(digest digest.Digest) string {
return storage.ImageDigestManifestBigDataNamePrefix + "-" + digest.String()
}
// newImageSource sets up an image for reading.
func newImageSource(imageRef storageReference) (*storageImageSource, error) {
// First, locate the image.
@@ -177,12 +185,29 @@ func (s *storageImageSource) GetManifest(ctx context.Context, instanceDigest *di
return nil, "", ErrNoManifestLists
}
if len(s.cachedManifest) == 0 {
// We stored the manifest as an item named after storage.ImageDigestBigDataKey.
cachedBlob, err := s.imageRef.transport.store.ImageBigData(s.image.ID, storage.ImageDigestBigDataKey)
if err != nil {
return nil, "", err
// The manifest is stored as a big data item.
// Prefer the manifest corresponding to the user-specified digest, if available.
if s.imageRef.named != nil {
if digested, ok := s.imageRef.named.(reference.Digested); ok {
key := manifestBigDataKey(digested.Digest())
blob, err := s.imageRef.transport.store.ImageBigData(s.image.ID, key)
if err != nil && !os.IsNotExist(err) { // os.IsNotExist is true if the image exists but there is no data corresponding to key
return nil, "", err
}
if err == nil {
s.cachedManifest = blob
}
}
}
// If the user did not specify a digest, or this is an old image stored before manifestBigDataKey was introduced, use the default manifest.
// Note that the manifest may not match the expected digest, and that is likely to fail eventually, e.g. in c/image/image/UnparsedImage.Manifest().
if len(s.cachedManifest) == 0 {
cachedBlob, err := s.imageRef.transport.store.ImageBigData(s.image.ID, storage.ImageDigestBigDataKey)
if err != nil {
return nil, "", err
}
s.cachedManifest = cachedBlob
}
s.cachedManifest = cachedBlob
}
return s.cachedManifest, manifest.GuessMIMEType(s.cachedManifest), err
}
@@ -331,7 +356,7 @@ func (s *storageImageDestination) computeNextBlobCacheFile() string {
}
// HasThreadSafePutBlob indicates whether PutBlob can be executed concurrently.
func (d *storageImageDestination) HasThreadSafePutBlob() bool {
func (s *storageImageDestination) HasThreadSafePutBlob() bool {
return true
}
@@ -570,12 +595,12 @@ func (s *storageImageDestination) Commit(ctx context.Context) error {
if !haveDiffID {
// Check if it's elsewhere and the caller just forgot to pass it to us in a PutBlob(),
// or to even check if we had it.
// Use blobinfocache.NoCache to avoid a repeated DiffID lookup in the BlobInfoCache; a caller
// Use none.NoCache to avoid a repeated DiffID lookup in the BlobInfoCache; a caller
// that relies on using a blob digest that has never been seen by the store had better call
// TryReusingBlob; not calling PutBlob already violates the documented API, so there's only
// so far we are going to accommodate that (if we should be doing that at all).
logrus.Debugf("looking for diffID for blob %+v", blob.Digest)
has, _, err := s.TryReusingBlob(ctx, blob.BlobInfo, blobinfocache.NoCache, false)
has, _, err := s.TryReusingBlob(ctx, blob.BlobInfo, none.NoCache, false)
if err != nil {
return errors.Wrapf(err, "error checking for a layer based on blob %q", blob.Digest.String())
}
@@ -660,6 +685,7 @@ func (s *storageImageDestination) Commit(ctx context.Context) error {
}
lastLayer = layer.ID
}
// If one of those blobs was a configuration blob, then we can try to dig out the date when the image
// was originally created, in case we're just copying it. If not, no harm done.
options := &storage.ImageOptions{}
@@ -667,9 +693,6 @@ func (s *storageImageDestination) Commit(ctx context.Context) error {
logrus.Debugf("setting image creation date to %s", inspect.Created)
options.CreationDate = *inspect.Created
}
if manifestDigest, err := manifest.Digest(s.manifest); err == nil {
options.Digest = manifestDigest
}
// Create the image record, pointing to the most-recently added layer.
intendedID := s.imageRef.id
if intendedID == "" {
@@ -709,7 +732,7 @@ func (s *storageImageDestination) Commit(ctx context.Context) error {
if err != nil {
return errors.Wrapf(err, "error copying non-layer blob %q to image", blob)
}
if err := s.imageRef.transport.store.SetImageBigData(img.ID, blob.String(), v); err != nil {
if err := s.imageRef.transport.store.SetImageBigData(img.ID, blob.String(), v, manifest.Digest); err != nil {
if _, err2 := s.imageRef.transport.store.DeleteImage(img.ID, true); err2 != nil {
logrus.Debugf("error deleting incomplete image %q: %v", img.ID, err2)
}
@@ -735,9 +758,21 @@ func (s *storageImageDestination) Commit(ctx context.Context) error {
}
logrus.Debugf("set names of image %q to %v", img.ID, names)
}
// Save the manifest. Use storage.ImageDigestBigDataKey as the item's
// name, so that its digest can be used to locate the image in the Store.
if err := s.imageRef.transport.store.SetImageBigData(img.ID, storage.ImageDigestBigDataKey, s.manifest); err != nil {
// Save the manifest. Allow looking it up by digest by using the key convention defined by the Store.
// Record the manifest twice: using a digest-specific key to allow references to that specific digest instance,
// and using storage.ImageDigestBigDataKey for future users that don't specify any digest and for compatibility with older readers.
manifestDigest, err := manifest.Digest(s.manifest)
if err != nil {
return errors.Wrapf(err, "error computing manifest digest")
}
if err := s.imageRef.transport.store.SetImageBigData(img.ID, manifestBigDataKey(manifestDigest), s.manifest, manifest.Digest); err != nil {
if _, err2 := s.imageRef.transport.store.DeleteImage(img.ID, true); err2 != nil {
logrus.Debugf("error deleting incomplete image %q: %v", img.ID, err2)
}
logrus.Debugf("error saving manifest for image %q: %v", img.ID, err)
return err
}
if err := s.imageRef.transport.store.SetImageBigData(img.ID, storage.ImageDigestBigDataKey, s.manifest, manifest.Digest); err != nil {
if _, err2 := s.imageRef.transport.store.DeleteImage(img.ID, true); err2 != nil {
logrus.Debugf("error deleting incomplete image %q: %v", img.ID, err2)
}
@@ -746,7 +781,7 @@ func (s *storageImageDestination) Commit(ctx context.Context) error {
}
// Save the signatures, if we have any.
if len(s.signatures) > 0 {
if err := s.imageRef.transport.store.SetImageBigData(img.ID, "signatures", s.signatures); err != nil {
if err := s.imageRef.transport.store.SetImageBigData(img.ID, "signatures", s.signatures, manifest.Digest); err != nil {
if _, err2 := s.imageRef.transport.store.DeleteImage(img.ID, true); err2 != nil {
logrus.Debugf("error deleting incomplete image %q: %v", img.ID, err2)
}
@@ -788,9 +823,21 @@ func (s *storageImageDestination) SupportedManifestMIMETypes() []string {
}
// PutManifest writes the manifest to the destination.
func (s *storageImageDestination) PutManifest(ctx context.Context, manifest []byte) error {
s.manifest = make([]byte, len(manifest))
copy(s.manifest, manifest)
func (s *storageImageDestination) PutManifest(ctx context.Context, manifestBlob []byte) error {
if s.imageRef.named != nil {
if digested, ok := s.imageRef.named.(reference.Digested); ok {
matches, err := manifest.MatchesDigest(manifestBlob, digested.Digest())
if err != nil {
return err
}
if !matches {
return fmt.Errorf("Manifest does not match expected digest %s", digested.Digest())
}
}
}
s.manifest = make([]byte, len(manifestBlob))
copy(s.manifest, manifestBlob)
return nil
}
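
The digest-specific key used by the SetImageBigData calls above comes from a manifestBigDataKey helper defined elsewhere in this change; a minimal sketch of it, assuming it simply suffixes the compatibility key with the digest:

func manifestBigDataKey(digest digest.Digest) string {
	// storage.ImageDigestBigDataKey is "manifest", so digest-specific copies
	// land under names like "manifest-sha256:<hex>".
	return storage.ImageDigestBigDataKey + "-" + digest.String()
}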

View File

@@ -55,7 +55,7 @@ func imageMatchesRepo(image *storage.Image, ref reference.Named) bool {
// one present with the same name or ID, and return the image.
func (s *storageReference) resolveImage() (*storage.Image, error) {
var loadedImage *storage.Image
if s.id == "" {
if s.id == "" && s.named != nil {
// Look for an image that has the expanded reference name as an explicit Name value.
image, err := s.transport.store.Image(s.named.String())
if image != nil && err == nil {
@@ -69,7 +69,7 @@ func (s *storageReference) resolveImage() (*storage.Image, error) {
// though possibly with a different tag or digest, as a Name value, so
// that the canonical reference can be implicitly resolved to the image.
images, err := s.transport.store.ImagesByDigest(digested.Digest())
if images != nil && err == nil {
if err == nil && len(images) > 0 {
for _, image := range images {
if imageMatchesRepo(image, s.named) {
loadedImage = image
@@ -97,6 +97,24 @@ func (s *storageReference) resolveImage() (*storage.Image, error) {
return nil, ErrNoSuchImage
}
}
// Default to having the image digest that we hand back match the most recently
// added manifest...
if digest, ok := loadedImage.BigDataDigests[storage.ImageDigestBigDataKey]; ok {
loadedImage.Digest = digest
}
// ... unless the named reference says otherwise, and it matches one of the digests
// in the image. For those cases, set the Digest field to that value, for the
// sake of older consumers that don't know there's a whole list in there now.
if s.named != nil {
if digested, ok := s.named.(reference.Digested); ok {
for _, digest := range loadedImage.Digests {
if digest == digested.Digest() {
loadedImage.Digest = digest
break
}
}
}
}
return loadedImage, nil
}
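
A caller-side sketch of what the digest matching above enables; the repository name and digest are invented, and the helpers come from the reference and go-digest packages already vendored here:

repo, err := reference.ParseNormalizedNamed("example.com/app")
if err != nil {
	return err
}
d := digest.FromString("manifest bytes") // stand-in digest for illustration
canonical, err := reference.WithDigest(repo, d)
if err != nil {
	return err
}
// A storageReference built from canonical resolves through ImagesByDigest(d),
// accepts only an image whose repository matches, and then reports
// loadedImage.Digest == d for the sake of older consumers.
_ = canonical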

View File

@@ -180,7 +180,10 @@ func (s *storageTransport) GetStore() (storage.Store, error) {
// Return the transport's previously-set store. If we don't have one
// of those, initialize one now.
if s.store == nil {
options := storage.DefaultStoreOptions
options, err := storage.DefaultStoreOptionsAutoDetectUID()
if err != nil {
return nil, err
}
options.UIDMap = s.defaultUIDMap
options.GIDMap = s.defaultGIDMap
store, err := storage.GetStore(options)
@@ -284,11 +287,6 @@ func (s storageTransport) GetStoreImage(store storage.Store, ref types.ImageRefe
}
}
if sref, ok := ref.(*storageReference); ok {
if sref.id != "" {
if img, err := store.Image(sref.id); err == nil {
return img, nil
}
}
tmpRef := *sref
if img, err := tmpRef.resolveImage(); err == nil {
return img, nil
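
The net effect of the GetStore change above, in caller terms: rootless processes now pick up per-UID defaults instead of the global configuration. A short sketch using only the calls shown in the diff:

options, err := storage.DefaultStoreOptionsAutoDetectUID()
if err != nil {
	return nil, err
}
// For non-root UIDs this yields rootless graph and run roots; for root it
// should match the previous DefaultStoreOptions defaults.
store, err := storage.GetStore(options)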

View File

@@ -207,7 +207,7 @@ func (is *tarballImageSource) Close() error {
}
// HasThreadSafeGetBlob indicates whether GetBlob can be executed concurrently.
func (s *tarballImageSource) HasThreadSafeGetBlob() bool {
func (is *tarballImageSource) HasThreadSafeGetBlob() bool {
return false
}

View File

@@ -22,6 +22,7 @@ import (
// ParseImageName converts a URL-like image name to a types.ImageReference.
func ParseImageName(imgName string) (types.ImageReference, error) {
// Keep this in sync with TransportFromImageName!
parts := strings.SplitN(imgName, ":", 2)
if len(parts) != 2 {
return nil, errors.Errorf(`Invalid image name "%s", expected colon-separated transport:reference`, imgName)
@@ -32,3 +33,14 @@ func ParseImageName(imgName string) (types.ImageReference, error) {
}
return transport.ParseReference(parts[1])
}
// TransportFromImageName converts a URL-like name to a types.ImageTransport or nil when
// the transport is unknown or when the input is invalid.
func TransportFromImageName(imageName string) types.ImageTransport {
// Keep this in sync with ParseImageName!
parts := strings.SplitN(imageName, ":", 2)
if len(parts) == 2 {
return transports.Get(parts[0])
}
return nil
}
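
A small usage sketch, assuming this function lives in the alltransports package next to ParseImageName:

if t := alltransports.TransportFromImageName("docker://docker.io/library/busybox:latest"); t != nil {
	fmt.Println(t.Name()) // "docker"
}
if alltransports.TransportFromImageName("no-such-transport:something") == nil {
	fmt.Println("unknown transport or invalid input")
}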

View File

@@ -401,6 +401,7 @@ type ImageInspectInfo struct {
}
// DockerAuthConfig contains authorization information for connecting to a registry.
// The Username and Password values can be empty, in which case the registry is accessed anonymously.
type DockerAuthConfig struct {
Username string
Password string
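
Following the comment above, a zero-valued DockerAuthConfig is how a caller expresses anonymous access; a hedged sketch:

sys := &types.SystemContext{
	// Empty Username and Password: query the registry anonymously.
	DockerAuthConfig: &types.DockerAuthConfig{},
}
_ = sys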

View File

@@ -1,7 +1,7 @@
github.com/containers/image
github.com/sirupsen/logrus v1.0.0
github.com/containers/storage master
github.com/containers/storage v1.12.2
github.com/davecgh/go-spew 346938d642f2ec3594ed81d874461961cd0faa76
github.com/docker/docker-credential-helpers d68f9aeca33f5fd3f08eeae5e9d175edf4e731d1
github.com/docker/distribution 5f6282db7d65e6d72ad7c2cc66310724a57be716
@@ -13,7 +13,6 @@ github.com/containerd/continuity d8fb8589b0e8e85b8c8bbaa8840226d0dfeb7371
github.com/ghodss/yaml 04f313413ffd65ce25f2541bfd2b2ceec5c0908c
github.com/gorilla/mux 94e7d24fd285520f3d12ae998f7fdd6b5393d453
github.com/imdario/mergo 50d4dbd4eb0e84778abe37cefef140271d96fade
github.com/mattn/go-runewidth 14207d285c6c197daabb5c9793d63e7af9ab2d50
github.com/mistifyio/go-zfs c0224de804d438efd11ea6e52ada8014537d6062
github.com/mtrmac/gpgme b2432428689ca58c2b8e8dea9449d3295cf96fc9
github.com/opencontainers/go-digest c9281466c8b2f606084ac71339773efd177436e7
@@ -28,7 +27,6 @@ golang.org/x/crypto 453249f01cfeb54c3d549ddb75ff152ca243f9d8
golang.org/x/net 6b27048ae5e6ad1ef927e72e437531493de612fe
golang.org/x/sync 42b317875d0fa942474b76e1b46a6060d720ae6e
golang.org/x/sys 43e60d72a8e2bd92ee98319ba9a384a0e9837c08
gopkg.in/cheggaaa/pb.v1 v1.0.27
gopkg.in/yaml.v2 a3f3340b5840cee44f372bddb5880fcbc419b46a
k8s.io/client-go bcde30fb7eaed76fd98a36b4120321b94995ffb6
github.com/xeipuuv/gojsonschema master
@@ -44,7 +42,10 @@ github.com/syndtr/gocapability master
github.com/Microsoft/go-winio ab35fc04b6365e8fcb18e6e9e41ea4a02b10b175
github.com/Microsoft/hcsshim eca7177590cdcbd25bbc5df27e3b693a54b53a6a
github.com/ulikunitz/xz v0.5.4
github.com/boltdb/bolt master
github.com/etcd-io/bbolt v1.3.2
github.com/klauspost/pgzip v1.2.1
github.com/klauspost/compress v1.4.1
github.com/klauspost/cpuid v1.2.0
github.com/vbauerster/mpb v3.3.4
github.com/mattn/go-isatty v0.0.4
github.com/VividCortex/ewma v1.1.1

View File

@@ -4,9 +4,9 @@ import "fmt"
const (
// VersionMajor is for API-incompatible changes
VersionMajor = 0
VersionMajor = 1
// VersionMinor is for functionality added in a backwards-compatible manner
VersionMinor = 1
VersionMinor = 7
// VersionPatch is for backwards-compatible bug fixes
VersionPatch = 0

View File

@@ -71,7 +71,7 @@ type Container struct {
type ContainerStore interface {
FileBasedStore
MetadataStore
BigDataStore
ContainerBigDataStore
FlaggableStore
// Create creates a container that has a specified ID (or generates a
@@ -456,7 +456,7 @@ func (r *containerStore) BigDataSize(id, key string) (int64, error) {
return size, nil
}
if data, err := r.BigData(id, key); err == nil && data != nil {
if r.SetBigData(id, key, data) == nil {
if err = r.SetBigData(id, key, data); err == nil {
c, ok := r.lookup(id)
if !ok {
return -1, ErrContainerUnknown
@@ -464,6 +464,8 @@ func (r *containerStore) BigDataSize(id, key string) (int64, error) {
if size, ok := c.BigDataSizes[key]; ok {
return size, nil
}
} else {
return -1, err
}
}
return -1, ErrSizeUnknown
@@ -484,7 +486,7 @@ func (r *containerStore) BigDataDigest(id, key string) (digest.Digest, error) {
return d, nil
}
if data, err := r.BigData(id, key); err == nil && data != nil {
if r.SetBigData(id, key, data) == nil {
if err = r.SetBigData(id, key, data); err == nil {
c, ok := r.lookup(id)
if !ok {
return "", ErrContainerUnknown
@@ -492,6 +494,8 @@ func (r *containerStore) BigDataDigest(id, key string) (digest.Digest, error) {
if d, ok := c.BigDataDigests[key]; ok {
return d, nil
}
} else {
return "", err
}
}
return "", ErrDigestUnknown
@@ -568,6 +572,9 @@ func (r *containerStore) Lock() {
r.lockfile.Lock()
}
func (r *containerStore) RLock() {
r.lockfile.RLock()
}
func (r *containerStore) Unlock() {
r.lockfile.Unlock()
}

View File

@@ -253,6 +253,11 @@ func (a *Driver) AdditionalImageStores() []string {
return nil
}
// CreateFromTemplate creates a layer with the same contents and parent as another layer.
func (a *Driver) CreateFromTemplate(id, template string, templateIDMappings *idtools.IDMappings, parent string, parentIDMappings *idtools.IDMappings, opts *graphdriver.CreateOpts, readWrite bool) error {
return graphdriver.NaiveCreateFromTemplate(a, id, template, templateIDMappings, parent, parentIDMappings, opts, readWrite)
}
// CreateReadWrite creates a layer that is writable for use as a container
// file system.
func (a *Driver) CreateReadWrite(id, parent string, opts *graphdriver.CreateOpts) error {

View File

@@ -490,6 +490,11 @@ func (d *Driver) quotasDirID(id string) string {
return path.Join(d.quotasDir(), id)
}
// CreateFromTemplate creates a layer with the same contents and parent as another layer.
func (d *Driver) CreateFromTemplate(id, template string, templateIDMappings *idtools.IDMappings, parent string, parentIDMappings *idtools.IDMappings, opts *graphdriver.CreateOpts, readWrite bool) error {
return d.Create(id, template, opts)
}
// CreateReadWrite creates a layer that is writable for use as a container
// file system.
func (d *Driver) CreateReadWrite(id, parent string, opts *graphdriver.CreateOpts) error {

View File

@@ -1,4 +1,4 @@
// +build linux
// +build cgo
package copy
@@ -19,6 +19,7 @@ import (
"syscall"
"time"
"github.com/containers/storage/pkg/idtools"
"github.com/containers/storage/pkg/pools"
"github.com/containers/storage/pkg/system"
rsystem "github.com/opencontainers/runc/libcontainer/system"
@@ -152,8 +153,8 @@ func DirCopy(srcDir, dstDir string, copyMode Mode, copyXattrs bool) error {
isHardlink := false
switch f.Mode() & os.ModeType {
case 0: // Regular file
switch mode := f.Mode(); {
case mode.IsRegular():
id := fileID{dev: stat.Dev, ino: stat.Ino}
if copyMode == Hardlink {
isHardlink = true
@@ -171,12 +172,12 @@ func DirCopy(srcDir, dstDir string, copyMode Mode, copyXattrs bool) error {
copiedFiles[id] = dstPath
}
case os.ModeDir:
case mode.IsDir():
if err := os.Mkdir(dstPath, f.Mode()); err != nil && !os.IsExist(err) {
return err
}
case os.ModeSymlink:
case mode&os.ModeSymlink != 0:
link, err := os.Readlink(srcPath)
if err != nil {
return err
@@ -186,14 +187,15 @@ func DirCopy(srcDir, dstDir string, copyMode Mode, copyXattrs bool) error {
return err
}
case os.ModeNamedPipe:
case mode&os.ModeNamedPipe != 0:
fallthrough
case os.ModeSocket:
case mode&os.ModeSocket != 0:
if err := unix.Mkfifo(dstPath, stat.Mode); err != nil {
return err
}
case os.ModeDevice:
case mode&os.ModeDevice != 0:
if rsystem.RunningInUserNS() {
// cannot create a device if running in user namespace
return nil
@@ -203,7 +205,7 @@ func DirCopy(srcDir, dstDir string, copyMode Mode, copyXattrs bool) error {
}
default:
return fmt.Errorf("unknown file type for %s", srcPath)
return fmt.Errorf("unknown file type with mode %v for %s", mode, srcPath)
}
// Everything below is copying metadata from src to dst. All this metadata
@@ -212,7 +214,7 @@ func DirCopy(srcDir, dstDir string, copyMode Mode, copyXattrs bool) error {
return nil
}
if err := os.Lchown(dstPath, int(stat.Uid), int(stat.Gid)); err != nil {
if err := idtools.SafeLchown(dstPath, int(stat.Uid), int(stat.Gid)); err != nil {
return err
}
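
A hypothetical call through the rewritten switch above; the paths are invented, and Hardlink mode links regular files between the trees while directories, symlinks, pipes, sockets, and devices are recreated:

if err := copy.DirCopy("/var/lib/layers/src", "/var/lib/layers/dst", copy.Hardlink, true); err != nil {
	return err
}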

View File

@@ -0,0 +1,19 @@
// +build !linux !cgo
package copy
import "github.com/containers/storage/pkg/chrootarchive"
// Mode indicates whether to use hardlink or copy content
type Mode int
const (
// Content creates a new file, and copies the content of the file
Content Mode = iota
)
// DirCopy copies or hardlinks the contents of one directory to another,
// properly handling soft links
func DirCopy(srcDir, dstDir string, _ Mode, _ bool) error {
return chrootarchive.NewArchiver(nil).CopyWithTar(srcDir, dstDir)
}

View File

@@ -119,10 +119,17 @@ func checkDevHasFS(dev string) error {
}
func verifyBlockDevice(dev string, force bool) error {
if err := checkDevAvailable(dev); err != nil {
realPath, err := filepath.Abs(dev)
if err != nil {
return errors.Errorf("unable to get absolute path for %s: %s", dev, err)
}
if realPath, err = filepath.EvalSymlinks(realPath); err != nil {
return errors.Errorf("failed to canonicalise path for %s: %s", dev, err)
}
if err := checkDevAvailable(realPath); err != nil {
return err
}
if err := checkDevInVG(dev); err != nil {
if err := checkDevInVG(realPath); err != nil {
return err
}
@@ -130,7 +137,7 @@ func verifyBlockDevice(dev string, force bool) error {
return nil
}
if err := checkDevHasFS(dev); err != nil {
if err := checkDevHasFS(realPath); err != nil {
return err
}
return nil
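
The canonicalisation above matters because LVM-style device paths are usually symlinks; a minimal illustration with a hypothetical path:

realPath, err := filepath.Abs("/dev/my-vg/my-lv") // hypothetical LVM symlink
if err == nil {
	// EvalSymlinks resolves it to the real node, e.g. /dev/dm-3, which is
	// what the checkDev* helpers need to see.
	realPath, err = filepath.EvalSymlinks(realPath)
}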

View File

@@ -123,6 +123,11 @@ func (d *Driver) Cleanup() error {
return err
}
// CreateFromTemplate creates a layer with the same contents and parent as another layer.
func (d *Driver) CreateFromTemplate(id, template string, templateIDMappings *idtools.IDMappings, parent string, parentIDMappings *idtools.IDMappings, opts *graphdriver.CreateOpts, readWrite bool) error {
return d.Create(id, template, opts)
}
// CreateReadWrite creates a layer that is writable for use as a container
// file system.
func (d *Driver) CreateReadWrite(id, parent string, opts *graphdriver.CreateOpts) error {

View File

@@ -72,6 +72,9 @@ type ProtoDriver interface {
// specified id and parent and options passed in opts. Parent
// may be "" and opts may be nil.
Create(id, parent string, opts *CreateOpts) error
// CreateFromTemplate creates a new filesystem layer with the specified id
// and parent, with contents identical to the specified template layer.
CreateFromTemplate(id, template string, templateIDMappings *idtools.IDMappings, parent string, parentIDMappings *idtools.IDMappings, opts *CreateOpts, readWrite bool) error
// Remove attempts to remove the filesystem layer with this id.
Remove(id string) error
// Get returns the mountpoint for the layered filesystem referred

View File

@@ -10,6 +10,8 @@ import (
"path/filepath"
"syscall"
"github.com/containers/storage/pkg/ioutils"
"github.com/containers/storage/pkg/mount"
"github.com/containers/storage/pkg/system"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
@@ -57,10 +59,11 @@ func doesSupportNativeDiff(d, mountOpts string) error {
}
opts := fmt.Sprintf("lowerdir=%s:%s,upperdir=%s,workdir=%s", path.Join(td, "l2"), path.Join(td, "l1"), path.Join(td, "l3"), path.Join(td, "work"))
if mountOpts != "" {
opts = fmt.Sprintf("%s,%s", opts, mountOpts)
flags, data := mount.ParseOptions(mountOpts)
if data != "" {
opts = fmt.Sprintf("%s,%s", opts, data)
}
if err := unix.Mount("overlay", filepath.Join(td, "merged"), "overlay", 0, opts); err != nil {
if err := unix.Mount("overlay", filepath.Join(td, "merged"), "overlay", uintptr(flags), opts); err != nil {
return errors.Wrap(err, "failed to mount overlay")
}
defer func() {
@@ -103,3 +106,60 @@ func doesSupportNativeDiff(d, mountOpts string) error {
return nil
}
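
A sketch of the flags/data split that mount.ParseOptions (from containers/storage/pkg/mount) performs in the code above: recognized flag words become mount(2) flags, anything else stays in the option string.

flags, data := mount.ParseOptions("noatime,metacopy=on")
// flags now carries MS_NOATIME for the mount(2) call; data is "metacopy=on",
// which gets appended to the lowerdir/upperdir/workdir option string.
_, _ = flags, data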
// doesMetacopy checks if the filesystem is going to optimize changes to
// metadata by using nodes marked with an "overlay.metacopy" attribute to avoid
// copying up a file from a lower layer unless/until its contents are being
// modified
func doesMetacopy(d, mountOpts string) (bool, error) {
td, err := ioutil.TempDir(d, "metacopy-check")
if err != nil {
return false, err
}
defer func() {
if err := os.RemoveAll(td); err != nil {
logrus.Warnf("Failed to remove check directory %v: %v", td, err)
}
}()
// Make directories l1, l2, work, merged
if err := os.MkdirAll(filepath.Join(td, "l1"), 0755); err != nil {
return false, err
}
if err := ioutils.AtomicWriteFile(filepath.Join(td, "l1", "f"), []byte{0xff}, 0700); err != nil {
return false, err
}
if err := os.MkdirAll(filepath.Join(td, "l2"), 0755); err != nil {
return false, err
}
if err := os.Mkdir(filepath.Join(td, "work"), 0755); err != nil {
return false, err
}
if err := os.Mkdir(filepath.Join(td, "merged"), 0755); err != nil {
return false, err
}
// Mount using the mandatory options and configured options
opts := fmt.Sprintf("lowerdir=%s,upperdir=%s,workdir=%s", path.Join(td, "l1"), path.Join(td, "l2"), path.Join(td, "work"))
flags, data := mount.ParseOptions(mountOpts)
if data != "" {
opts = fmt.Sprintf("%s,%s", opts, data)
}
if err := unix.Mount("overlay", filepath.Join(td, "merged"), "overlay", uintptr(flags), opts); err != nil {
return false, errors.Wrap(err, "failed to mount overlay for metacopy check")
}
defer func() {
if err := unix.Unmount(filepath.Join(td, "merged"), 0); err != nil {
logrus.Warnf("Failed to unmount check directory %v: %v", filepath.Join(td, "merged"), err)
}
}()
// Make a change that only impacts the inode, and check if the pulled-up copy is marked
// as a metadata-only copy
if err := os.Chmod(filepath.Join(td, "merged", "f"), 0600); err != nil {
return false, errors.Wrap(err, "error changing permissions on file for metacopy check")
}
metacopy, err := system.Lgetxattr(filepath.Join(td, "l2", "f"), "trusted.overlay.metacopy")
if err != nil {
return false, errors.Wrap(err, "metacopy flag was not set on file in upper layer")
}
return metacopy != nil, nil
}

View File

@@ -14,6 +14,7 @@ import (
"strconv"
"strings"
"sync"
"syscall"
"github.com/containers/storage/drivers"
"github.com/containers/storage/drivers/overlayutils"
@@ -84,13 +85,12 @@ const (
)
type overlayOptions struct {
overrideKernelCheck bool
imageStores []string
quota quota.Quota
mountProgram string
ostreeRepo string
skipMountHome bool
mountOptions string
imageStores []string
quota quota.Quota
mountProgram string
ostreeRepo string
skipMountHome bool
mountOptions string
}
// Driver contains information about the home directory and the list of active mounts that are created using this driver.
@@ -104,6 +104,7 @@ type Driver struct {
options overlayOptions
naiveDiff graphdriver.DiffDriver
supportsDType bool
usingMetacopy bool
locker *locker.Locker
convert map[string]bool
}
@@ -157,6 +158,7 @@ func Init(home string, options []string, uidMaps, gidMaps []idtools.IDMap) (grap
return nil, err
}
var usingMetacopy bool
var supportsDType bool
if opts.mountProgram != "" {
supportsDType = true
@@ -165,8 +167,23 @@ func Init(home string, options []string, uidMaps, gidMaps []idtools.IDMap) (grap
if err != nil {
os.Remove(filepath.Join(home, linkDir))
os.Remove(home)
patherr, ok := err.(*os.PathError)
if ok && patherr.Err == syscall.ENOSPC {
return nil, err
}
return nil, errors.Wrap(err, "kernel does not support overlay fs")
}
usingMetacopy, err = doesMetacopy(home, opts.mountOptions)
if err == nil {
if usingMetacopy {
logrus.Debugf("overlay test mount indicated that metacopy is being used")
} else {
logrus.Debugf("overlay test mount indicated that metacopy is not being used")
}
} else {
logrus.Warnf("overlay test mount did not indicate whether or not metacopy is being used: %v", err)
return nil, err
}
}
if !opts.skipMountHome {
@@ -188,6 +205,7 @@ func Init(home string, options []string, uidMaps, gidMaps []idtools.IDMap) (grap
gidMaps: gidMaps,
ctr: graphdriver.NewRefCounter(graphdriver.NewFsChecker(graphdriver.FsMagicOverlay)),
supportsDType: supportsDType,
usingMetacopy: usingMetacopy,
locker: locker.New(),
options: *opts,
convert: make(map[string]bool),
@@ -207,7 +225,7 @@ func Init(home string, options []string, uidMaps, gidMaps []idtools.IDMap) (grap
return nil, fmt.Errorf("Storage option overlay.size only supported for backingFS XFS. Found %v", backingFs)
}
logrus.Debugf("backingFs=%s, projectQuotaSupported=%v, useNativeDiff=%v", backingFs, projectQuotaSupported, !d.useNaiveDiff())
logrus.Debugf("backingFs=%s, projectQuotaSupported=%v, useNativeDiff=%v, usingMetacopy=%v", backingFs, projectQuotaSupported, !d.useNaiveDiff(), d.usingMetacopy)
return d, nil
}
@@ -222,11 +240,7 @@ func parseOptions(options []string) (*overlayOptions, error) {
key = strings.ToLower(key)
switch key {
case ".override_kernel_check", "overlay.override_kernel_check", "overlay2.override_kernel_check":
logrus.Debugf("overlay: override_kernelcheck=%s", val)
o.overrideKernelCheck, err = strconv.ParseBool(val)
if err != nil {
return nil, err
}
logrus.Warnf("overlay: override_kernel_check option was specified, but is no longer necessary")
case ".mountopt", "overlay.mountopt", "overlay2.mountopt":
o.mountOptions = val
case ".size", "overlay.size", "overlay2.size":
@@ -285,6 +299,12 @@ func supportsOverlay(home string, homeMagic graphdriver.FsMagic, rootUID, rootGI
exec.Command("modprobe", "overlay").Run()
layerDir, err := ioutil.TempDir(home, "compat")
if err != nil {
patherr, ok := err.(*os.PathError)
if ok && patherr.Err == syscall.ENOSPC {
return false, err
}
}
if err == nil {
// Check if reading the directory's contents populates the d_type field, which is required
// for proper operation of the overlay filesystem.
@@ -364,6 +384,7 @@ func (d *Driver) Status() [][2]string {
{"Backing Filesystem", backingFs},
{"Supports d_type", strconv.FormatBool(d.supportsDType)},
{"Native Overlay Diff", strconv.FormatBool(!d.useNaiveDiff())},
{"Using metacopy", strconv.FormatBool(d.usingMetacopy)},
}
}
@@ -399,6 +420,14 @@ func (d *Driver) Cleanup() error {
return mount.Unmount(d.home)
}
// CreateFromTemplate creates a layer with the same contents and parent as another layer.
func (d *Driver) CreateFromTemplate(id, template string, templateIDMappings *idtools.IDMappings, parent string, parentIDMappings *idtools.IDMappings, opts *graphdriver.CreateOpts, readWrite bool) error {
if readWrite {
return d.CreateReadWrite(id, template, opts)
}
return d.Create(id, template, opts)
}
// CreateReadWrite creates a layer that is writable for use as a container
// file system.
func (d *Driver) CreateReadWrite(id, parent string, opts *graphdriver.CreateOpts) error {
@@ -767,7 +796,17 @@ func (d *Driver) get(id string, disableShifting bool, options graphdriver.MountO
mountProgram := exec.Command(d.options.mountProgram, "-o", label, target)
mountProgram.Dir = d.home
return mountProgram.Run()
var b bytes.Buffer
mountProgram.Stderr = &b
err := mountProgram.Run()
if err != nil {
output := b.String()
if output == "" {
output = "<stderr empty>"
}
return errors.Wrapf(err, "using mount program %s: %s", d.options.mountProgram, output)
}
return nil
}
} else if len(mountData) > pageSize {
//FIXME: We need to figure out to get this to work with additional stores
@@ -782,6 +821,7 @@ func (d *Driver) get(id string, disableShifting bool, options graphdriver.MountO
mountTarget = path.Join(id, "merged")
}
flags, data := mount.ParseOptions(mountData)
logrus.Debugf("overlay: mount_data=%s", mountData)
if err := mountFunc("overlay", mountTarget, "overlay", uintptr(flags), data); err != nil {
return "", fmt.Errorf("error creating overlay mount to %s: %v", mountTarget, err)
}
@@ -975,6 +1015,7 @@ func (d *Driver) UpdateLayerIDMap(id string, toContainer, toHost *idtools.IDMapp
// Mount the new layer and handle ownership changes and possible copy_ups in it.
options := graphdriver.MountOpts{
MountLabel: mountLabel,
Options: strings.Split(d.options.mountOptions, ","),
}
layerFs, err := d.get(id, true, options)
if err != nil {

View File

@@ -0,0 +1,45 @@
package graphdriver
import (
"github.com/sirupsen/logrus"
"github.com/containers/storage/pkg/idtools"
)
// TemplateDriver is just barely enough of a driver that we can implement a
// naive version of CreateFromTemplate on top of it.
type TemplateDriver interface {
DiffDriver
CreateReadWrite(id, parent string, opts *CreateOpts) error
Create(id, parent string, opts *CreateOpts) error
Remove(id string) error
}
// NaiveCreateFromTemplate creates a layer with the same contents and parent as
// another layer. Internally, it may even depend on that other layer
// continuing to exist, as if the new layer were actually a child of the template layer.
func NaiveCreateFromTemplate(d TemplateDriver, id, template string, templateIDMappings *idtools.IDMappings, parent string, parentIDMappings *idtools.IDMappings, opts *CreateOpts, readWrite bool) error {
var err error
if readWrite {
err = d.CreateReadWrite(id, parent, opts)
} else {
err = d.Create(id, parent, opts)
}
if err != nil {
return err
}
diff, err := d.Diff(template, templateIDMappings, parent, parentIDMappings, opts.MountLabel)
if err != nil {
if err2 := d.Remove(id); err2 != nil {
logrus.Errorf("error removing layer %q: %v", id, err2)
}
return err
}
if _, err = d.ApplyDiff(id, templateIDMappings, parent, opts.MountLabel, diff); err != nil {
if err2 := d.Remove(id); err2 != nil {
logrus.Errorf("error removing layer %q: %v", id, err2)
}
return err
}
return nil
}
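
Caller-side, the layers.go changes further below thread a TemplateLayer option down to this code path. A hypothetical sketch; the field name comes from that diff, while the exact Put signature is assumed:

opts := &storage.LayerOptions{TemplateLayer: existingLayerID}
layer, _, err := rwLayerStore.Put("", parentLayer, names, mountLabel, nil, opts, true, nil, nil)
if err != nil {
	return err
}
_ = layer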

View File

@@ -99,6 +99,14 @@ func (d *Driver) Cleanup() error {
return nil
}
// CreateFromTemplate creates a layer with the same contents and parent as another layer.
func (d *Driver) CreateFromTemplate(id, template string, templateIDMappings *idtools.IDMappings, parent string, parentIDMappings *idtools.IDMappings, opts *graphdriver.CreateOpts, readWrite bool) error {
if readWrite {
return d.CreateReadWrite(id, template, opts)
}
return d.Create(id, template, opts)
}
// CreateReadWrite creates a layer that is writable for use as a container
// file system.
func (d *Driver) CreateReadWrite(id, parent string, opts *graphdriver.CreateOpts) error {

View File

@@ -185,6 +185,11 @@ func (d *Driver) Exists(id string) bool {
return result
}
// CreateFromTemplate creates a layer with the same contents and parent as another layer.
func (d *Driver) CreateFromTemplate(id, template string, templateIDMappings *idtools.IDMappings, parent string, parentIDMappings *idtools.IDMappings, opts *graphdriver.CreateOpts, readWrite bool) error {
return graphdriver.NaiveCreateFromTemplate(d, id, template, templateIDMappings, parent, parentIDMappings, opts, readWrite)
}
// CreateReadWrite creates a layer that is writable for use as a container
// file system.
func (d *Driver) CreateReadWrite(id, parent string, opts *graphdriver.CreateOpts) error {

View File

@@ -1,4 +1,4 @@
// +build linux freebsd solaris
// +build linux freebsd
package zfs
@@ -16,7 +16,7 @@ import (
"github.com/containers/storage/pkg/idtools"
"github.com/containers/storage/pkg/mount"
"github.com/containers/storage/pkg/parsers"
zfs "github.com/mistifyio/go-zfs"
"github.com/mistifyio/go-zfs"
"github.com/opencontainers/selinux/go-selinux/label"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
@@ -38,7 +38,7 @@ type Logger struct{}
// Log wraps log message from ZFS driver with a prefix '[zfs]'.
func (*Logger) Log(cmd []string) {
logrus.Debugf("[zfs] %s", strings.Join(cmd, " "))
logrus.WithField("storage-driver", "zfs").Debugf("%s", strings.Join(cmd, " "))
}
// Init returns a new ZFS driver.
@@ -47,14 +47,16 @@ func (*Logger) Log(cmd []string) {
func Init(base string, opt []string, uidMaps, gidMaps []idtools.IDMap) (graphdriver.Driver, error) {
var err error
logger := logrus.WithField("storage-driver", "zfs")
if _, err := exec.LookPath("zfs"); err != nil {
logrus.Debugf("[zfs] zfs command is not available: %v", err)
logger.Debugf("zfs command is not available: %v", err)
return nil, errors.Wrap(graphdriver.ErrPrerequisites, "the 'zfs' command is not available")
}
file, err := os.OpenFile("/dev/zfs", os.O_RDWR, 0600)
if err != nil {
logrus.Debugf("[zfs] cannot open /dev/zfs: %v", err)
logger.Debugf("cannot open /dev/zfs: %v", err)
return nil, errors.Wrapf(graphdriver.ErrPrerequisites, "could not open /dev/zfs: %v", err)
}
defer file.Close()
@@ -109,9 +111,6 @@ func Init(base string, opt []string, uidMaps, gidMaps []idtools.IDMap) (graphdri
return nil, fmt.Errorf("Failed to create '%s': %v", base, err)
}
if err := mount.MakePrivate(base); err != nil {
return nil, err
}
d := &Driver{
dataset: rootDataset,
options: options,
@@ -157,7 +156,7 @@ func lookupZfsDataset(rootdir string) (string, error) {
}
for _, m := range mounts {
if err := unix.Stat(m.Mountpoint, &stat); err != nil {
logrus.Debugf("[zfs] failed to stat '%s' while scanning for zfs mount: %v", m.Mountpoint, err)
logrus.WithField("storage-driver", "zfs").Debugf("failed to stat '%s' while scanning for zfs mount: %v", m.Mountpoint, err)
continue // may fail on fuse file systems
}
@@ -184,7 +183,7 @@ func (d *Driver) String() string {
return "zfs"
}
// Cleanup is used to implement graphdriver.ProtoDriver. There is no cleanup required for this driver.
// Cleanup is called when the program exits; it is a no-op for ZFS.
func (d *Driver) Cleanup() error {
return nil
}
@@ -260,6 +259,11 @@ func (d *Driver) mountPath(id string) string {
return path.Join(d.options.mountPath, "graph", getMountpoint(id))
}
// CreateFromTemplate creates a layer with the same contents and parent as another layer.
func (d *Driver) CreateFromTemplate(id, template string, templateIDMappings *idtools.IDMappings, parent string, parentIDMappings *idtools.IDMappings, opts *graphdriver.CreateOpts, readWrite bool) error {
return d.Create(id, template, opts)
}
// CreateReadWrite creates a layer that is writable for use as a container
// file system.
func (d *Driver) CreateReadWrite(id, parent string, opts *graphdriver.CreateOpts) error {
@@ -360,11 +364,25 @@ func (d *Driver) Remove(id string) error {
}
// Get returns the mountpoint for the given id after creating the target directories if necessary.
func (d *Driver) Get(id string, options graphdriver.MountOpts) (string, error) {
func (d *Driver) Get(id string, options graphdriver.MountOpts) (_ string, retErr error) {
mountpoint := d.mountPath(id)
if count := d.ctr.Increment(mountpoint); count > 1 {
return mountpoint, nil
}
defer func() {
if retErr != nil {
if c := d.ctr.Decrement(mountpoint); c <= 0 {
if mntErr := unix.Unmount(mountpoint, 0); mntErr != nil {
logrus.WithField("storage-driver", "zfs").Errorf("Error unmounting %v: %v", mountpoint, mntErr)
}
if rmErr := unix.Rmdir(mountpoint); rmErr != nil && !os.IsNotExist(rmErr) {
logrus.WithField("storage-driver", "zfs").Debugf("Failed to remove %s: %v", id, rmErr)
}
}
}
}()
mountOptions := d.options.mountOptions
if len(options.Options) > 0 {
@@ -373,29 +391,24 @@ func (d *Driver) Get(id string, options graphdriver.MountOpts) (string, error) {
filesystem := d.zfsPath(id)
opts := label.FormatMountLabel(mountOptions, options.MountLabel)
logrus.Debugf(`[zfs] mount("%s", "%s", "%s")`, filesystem, mountpoint, opts)
logrus.WithField("storage-driver", "zfs").Debugf(`mount("%s", "%s", "%s")`, filesystem, mountpoint, opts)
rootUID, rootGID, err := idtools.GetRootUIDGID(d.uidMaps, d.gidMaps)
if err != nil {
d.ctr.Decrement(mountpoint)
return "", err
}
// Create the target directories if they don't exist
if err := idtools.MkdirAllAs(mountpoint, 0755, rootUID, rootGID); err != nil {
d.ctr.Decrement(mountpoint)
return "", err
}
if err := mount.Mount(filesystem, mountpoint, "zfs", opts); err != nil {
d.ctr.Decrement(mountpoint)
return "", fmt.Errorf("error creating zfs mount of %s to %s: %v", filesystem, mountpoint, err)
return "", errors.Wrap(err, "error creating zfs mount")
}
// this could be our first mount after creation of the filesystem, and the root dir may still have root
// permissions instead of the remapped root uid:gid (if user namespaces are enabled):
if err := os.Chown(mountpoint, rootUID, rootGID); err != nil {
mount.Unmount(mountpoint)
d.ctr.Decrement(mountpoint)
return "", fmt.Errorf("error modifying zfs mountpoint (%s) directory ownership: %v", mountpoint, err)
}
@@ -408,16 +421,18 @@ func (d *Driver) Put(id string) error {
if count := d.ctr.Decrement(mountpoint); count > 0 {
return nil
}
mounted, err := graphdriver.Mounted(graphdriver.FsMagicZfs, mountpoint)
if err != nil || !mounted {
return err
logger := logrus.WithField("storage-driver", "zfs")
logger.Debugf(`unmount("%s")`, mountpoint)
if err := unix.Unmount(mountpoint, unix.MNT_DETACH); err != nil {
logger.Warnf("Failed to unmount %s mount %s: %v", id, mountpoint, err)
}
if err := unix.Rmdir(mountpoint); err != nil && !os.IsNotExist(err) {
logger.Debugf("Failed to remove %s mount point %s: %v", id, mountpoint, err)
}
logrus.Debugf(`[zfs] unmount("%s")`, mountpoint)
if err := mount.Unmount(mountpoint); err != nil {
return fmt.Errorf("error unmounting to %s: %v", mountpoint, err)
}
return nil
}
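
The rewritten Get signature above uses a named error return so a single deferred block can undo the reference-count increment and the mount on every failure path, rather than repeating Decrement before each early return. The same idiom in a self-contained toy (errors is the standard library package):

func acquire(ctr map[string]int, key string) (_ string, retErr error) {
	ctr[key]++
	defer func() {
		if retErr != nil {
			ctr[key]-- // roll back the increment on any error return
		}
	}()
	if key == "" {
		return "", errors.New("empty key")
	}
	return "/mnt/" + key, nil
}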

View File

@@ -18,7 +18,7 @@ func checkRootdirFs(rootdir string) error {
// on FreeBSD buf.Fstypename contains ['z', 'f', 's', 0 ... ]
if (buf.Fstypename[0] != 122) || (buf.Fstypename[1] != 102) || (buf.Fstypename[2] != 115) || (buf.Fstypename[3] != 0) {
logrus.Debugf("[zfs] no zfs dataset found for rootdir '%s'", rootdir)
logrus.WithField("storage-driver", "zfs").Debugf("no zfs dataset found for rootdir '%s'", rootdir)
return errors.Wrapf(graphdriver.ErrPrerequisites, "no zfs dataset found for rootdir '%s'", rootdir)
}

View File

@@ -1,23 +1,24 @@
package zfs
import (
"fmt"
"github.com/containers/storage/drivers"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
"golang.org/x/sys/unix"
)
func checkRootdirFs(rootdir string) error {
var buf unix.Statfs_t
if err := unix.Statfs(rootdir, &buf); err != nil {
return fmt.Errorf("Failed to access '%s': %s", rootdir, err)
func checkRootdirFs(rootDir string) error {
fsMagic, err := graphdriver.GetFSMagic(rootDir)
if err != nil {
return err
}
backingFS := "unknown"
if fsName, ok := graphdriver.FsNames[fsMagic]; ok {
backingFS = fsName
}
if graphdriver.FsMagic(buf.Type) != graphdriver.FsMagicZfs {
logrus.Debugf("[zfs] no zfs dataset found for rootdir '%s'", rootdir)
return errors.Wrapf(graphdriver.ErrPrerequisites, "no zfs dataset found for rootdir '%s'", rootdir)
if fsMagic != graphdriver.FsMagicZfs {
logrus.WithField("root", rootDir).WithField("backingFS", backingFS).WithField("storage-driver", "zfs").Error("No zfs dataset found for root")
return errors.Wrapf(graphdriver.ErrPrerequisites, "no zfs dataset found for rootdir '%s'", rootDir)
}
return nil

View File

@@ -1,59 +0,0 @@
// +build solaris,cgo
package zfs
/*
#include <sys/statvfs.h>
#include <stdlib.h>
static inline struct statvfs *getstatfs(char *s) {
struct statvfs *buf;
int err;
buf = (struct statvfs *)malloc(sizeof(struct statvfs));
err = statvfs(s, buf);
return buf;
}
*/
import "C"
import (
"path/filepath"
"strings"
"unsafe"
"github.com/containers/storage/drivers"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
)
func checkRootdirFs(rootdir string) error {
cs := C.CString(filepath.Dir(rootdir))
defer C.free(unsafe.Pointer(cs))
buf := C.getstatfs(cs)
defer C.free(unsafe.Pointer(buf))
// on Solaris buf.f_basetype contains ['z', 'f', 's', 0 ... ]
if (buf.f_basetype[0] != 122) || (buf.f_basetype[1] != 102) || (buf.f_basetype[2] != 115) ||
(buf.f_basetype[3] != 0) {
logrus.Debugf("[zfs] no zfs dataset found for rootdir '%s'", rootdir)
return errors.Wrapf(graphdriver.ErrPrerequisites, "no zfs dataset found for rootdir '%s'", rootdir)
}
return nil
}
/* rootfs is introduced to comply with the OCI spec
which states that root filesystem must be mounted at <CID>/rootfs/ instead of <CID>/
*/
func getMountpoint(id string) string {
maxlen := 12
// we need to preserve filesystem suffix
suffix := strings.SplitN(id, "-", 2)
if len(suffix) > 1 {
return filepath.Join(id[:maxlen]+"-"+suffix[1], "rootfs", "root")
}
return filepath.Join(id[:maxlen], "rootfs", "root")
}

View File

@@ -1,4 +1,4 @@
// +build !linux,!freebsd,!solaris
// +build !linux,!freebsd
package zfs

View File

@@ -43,8 +43,6 @@ var (
ErrSizeUnknown = errors.New("size is not known")
// ErrStoreIsReadOnly is returned when the caller makes a call to a read-only store that would require modifying its contents.
ErrStoreIsReadOnly = errors.New("called a write method on a read-only store")
// ErrLockReadOnly indicates that the caller only took a read-only lock, and is not allowed to write.
ErrLockReadOnly = errors.New("lock is not a read-write lock")
// ErrDuplicateImageNames indicates that the read-only store uses the same name for multiple images.
ErrDuplicateImageNames = errors.New("read-only image store assigns the same name to multiple images")
// ErrDuplicateLayerNames indicates that the read-only store uses the same name for multiple layers.

View File

@@ -5,6 +5,7 @@ import (
"io/ioutil"
"os"
"path/filepath"
"strings"
"time"
"github.com/containers/storage/pkg/ioutils"
@@ -15,9 +16,13 @@ import (
)
const (
// ImageDigestBigDataKey is the name of the big data item whose
// contents we consider useful for computing a "digest" of the
// image, by which we can locate the image later.
// ImageDigestManifestBigDataNamePrefix is a prefix of big data item
// names which we consider to be manifests, used for computing a
// "digest" value for the image as a whole, by which we can locate the
// image later.
ImageDigestManifestBigDataNamePrefix = "manifest"
// ImageDigestBigDataKey is provided for compatibility with older
// versions of the image library. It will be removed in the future.
ImageDigestBigDataKey = "manifest"
)
@@ -27,12 +32,19 @@ type Image struct {
// value which was generated by the library.
ID string `json:"id"`
// Digest is a digest value that we can use to locate the image.
// Digest is a digest value that we can use to locate the image, if one
// was specified at creation-time.
Digest digest.Digest `json:"digest,omitempty"`
// Digests is a list of digest values of the image's manifests, and
// possibly a manually-specified value, that we can use to locate the
// image. If Digest is set, its value is also in this list.
Digests []digest.Digest `json:"-"`
// Names is an optional set of user-defined convenience values. The
// image can be referred to by its ID or any of its names. Names are
// unique among images.
// unique among images, and are often the text representation of tagged
// or canonical references.
Names []string `json:"names,omitempty"`
// TopLayer is the ID of the topmost layer of the image itself, if the
@@ -42,7 +54,9 @@ type Image struct {
// MappedTopLayers are the IDs of alternate versions of the top layer
// which have the same contents and parent, and which differ from
// TopLayer only in which ID mappings they use.
// TopLayer only in which ID mappings they use. When the image is
// to be removed, they should be removed before the TopLayer, as the
// graph driver may depend on that.
MappedTopLayers []string `json:"mapped-layers,omitempty"`
// Metadata is data we keep for the convenience of the caller. It is not
@@ -90,8 +104,10 @@ type ROImageStore interface {
// Images returns a slice enumerating the known images.
Images() ([]Image, error)
// Images returns a slice enumerating the images which have a big data
// item with the name ImageDigestBigDataKey and the specified digest.
// ByDigest returns a slice enumerating the images which have either an
// explicitly-set digest, or a big data item with a name that starts
// with ImageDigestManifestBigDataNamePrefix, which matches the
// specified digest.
ByDigest(d digest.Digest) ([]*Image, error)
}
@@ -100,7 +116,7 @@ type ImageStore interface {
ROImageStore
RWFileBasedStore
RWMetadataStore
RWBigDataStore
RWImageBigDataStore
FlaggableStore
// Create creates an image that has a specified ID (or a random one) and
@@ -109,7 +125,8 @@ type ImageStore interface {
Create(id string, names []string, layer, metadata string, created time.Time, searchableDigest digest.Digest) (*Image, error)
// SetNames replaces the list of names associated with an image with the
// supplied values.
// supplied values. The values are expected to be valid normalized
// named image references.
SetNames(id string, names []string) error
// Delete removes the record of the image.
@@ -133,6 +150,7 @@ func copyImage(i *Image) *Image {
return &Image{
ID: i.ID,
Digest: i.Digest,
Digests: copyDigestSlice(i.Digests),
Names: copyStringSlice(i.Names),
TopLayer: i.TopLayer,
MappedTopLayers: copyStringSlice(i.MappedTopLayers),
@@ -145,6 +163,17 @@ func copyImage(i *Image) *Image {
}
}
func copyImageSlice(slice []*Image) []*Image {
if len(slice) > 0 {
cp := make([]*Image, len(slice))
for i := range slice {
cp[i] = copyImage(slice[i])
}
return cp
}
return nil
}
func (r *imageStore) Images() ([]Image, error) {
images := make([]Image, len(r.images))
for i := range r.images {
@@ -165,6 +194,46 @@ func (r *imageStore) datapath(id, key string) string {
return filepath.Join(r.datadir(id), makeBigDataBaseName(key))
}
// bigDataNameIsManifest determines if a big data item with the specified name
// is considered to be representative of the image, in that its digest can be
// said to also be the image's digest. Currently, if its name is, or begins
// with, "manifest", we say that it is.
func bigDataNameIsManifest(name string) bool {
return strings.HasPrefix(name, ImageDigestManifestBigDataNamePrefix)
}
// recomputeDigests takes a fixed digest and a name-to-digest map and builds a
// list of the unique values that would identify the image.
func (image *Image) recomputeDigests() error {
validDigests := make([]digest.Digest, 0, len(image.BigDataDigests)+1)
digests := make(map[digest.Digest]struct{})
if image.Digest != "" {
if err := image.Digest.Validate(); err != nil {
return errors.Wrapf(err, "error validating image digest %q", string(image.Digest))
}
digests[image.Digest] = struct{}{}
validDigests = append(validDigests, image.Digest)
}
for name, digest := range image.BigDataDigests {
if !bigDataNameIsManifest(name) {
continue
}
if digest.Validate() != nil {
return errors.Wrapf(digest.Validate(), "error validating digest %q for big data item %q", string(digest), name)
}
// Deduplicate the digest values.
if _, known := digests[digest]; !known {
digests[digest] = struct{}{}
validDigests = append(validDigests, digest)
}
}
if image.Digest == "" && len(validDigests) > 0 {
image.Digest = validDigests[0]
}
image.Digests = validDigests
return nil
}
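
An illustration of the invariant recomputeDigests maintains, with invented digest values: only big data items whose names begin with "manifest" contribute, and values are deduplicated against the fixed Digest.

dA := digest.FromString("manifest A")
dB := digest.FromString("manifest B")
dC := digest.FromString("config blob")
img := &Image{
	Digest: dA,
	BigDataDigests: map[string]digest.Digest{
		"manifest":                dA, // equal to img.Digest: deduplicated
		"manifest-" + dB.String(): dB, // a second manifest instance: kept
		"config":                  dC, // not a manifest item: ignored
	},
}
// After img.recomputeDigests(): img.Digests == []digest.Digest{dA, dB}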
func (r *imageStore) Load() error {
shouldSave := false
rpath := r.imagespath()
@@ -187,21 +256,22 @@ func (r *imageStore) Load() error {
r.removeName(conflict, name)
shouldSave = true
}
names[name] = images[n]
}
// Implicit digest
if digest, ok := image.BigDataDigests[ImageDigestBigDataKey]; ok {
digests[digest] = append(digests[digest], images[n])
// Compute the digest list.
err = image.recomputeDigests()
if err != nil {
return errors.Wrapf(err, "error computing digests for image with ID %q (%v)", image.ID, image.Names)
}
// Explicit digest
if image.Digest == "" {
image.Digest = image.BigDataDigests[ImageDigestBigDataKey]
} else if image.Digest != image.BigDataDigests[ImageDigestBigDataKey] {
digests[image.Digest] = append(digests[image.Digest], images[n])
for _, name := range image.Names {
names[name] = image
}
for _, digest := range image.Digests {
list := digests[digest]
digests[digest] = append(list, image)
}
}
}
if shouldSave && !r.IsReadWrite() {
if shouldSave && (!r.IsReadWrite() || !r.Locked()) {
return ErrDuplicateImageNames
}
r.images = images
@@ -220,7 +290,7 @@ func (r *imageStore) Save() error {
return errors.Wrapf(ErrStoreIsReadOnly, "not allowed to modify the image store at %q", r.imagespath())
}
if !r.Locked() {
return errors.New("image store is not locked")
return errors.New("image store is not locked for writing")
}
rpath := r.imagespath()
if err := os.MkdirAll(filepath.Dir(rpath), 0700); err != nil {
@@ -331,12 +401,12 @@ func (r *imageStore) Create(id string, names []string, layer, metadata string, c
}
}
if _, idInUse := r.byid[id]; idInUse {
return nil, ErrDuplicateID
return nil, errors.Wrapf(ErrDuplicateID, "an image with ID %q already exists", id)
}
names = dedupeNames(names)
for _, name := range names {
if _, nameInUse := r.byname[name]; nameInUse {
return nil, ErrDuplicateName
if image, nameInUse := r.byname[name]; nameInUse {
return nil, errors.Wrapf(ErrDuplicateName, "image name %q is already associated with image %q", name, image.ID)
}
}
if created.IsZero() {
@@ -346,6 +416,7 @@ func (r *imageStore) Create(id string, names []string, layer, metadata string, c
image = &Image{
ID: id,
Digest: searchableDigest,
Digests: nil,
Names: names,
TopLayer: layer,
Metadata: metadata,
@@ -355,16 +426,20 @@ func (r *imageStore) Create(id string, names []string, layer, metadata string, c
Created: created,
Flags: make(map[string]interface{}),
}
err := image.recomputeDigests()
if err != nil {
return nil, errors.Wrapf(err, "error validating digests for new image")
}
r.images = append(r.images, image)
r.idindex.Add(id)
r.byid[id] = image
if searchableDigest != "" {
list := r.bydigest[searchableDigest]
r.bydigest[searchableDigest] = append(list, image)
}
for _, name := range names {
r.byname[name] = image
}
for _, digest := range image.Digests {
list := r.bydigest[digest]
r.bydigest[digest] = append(list, image)
}
err = r.Save()
image = copyImage(image)
}
@@ -442,6 +517,14 @@ func (r *imageStore) Delete(id string) error {
for _, name := range image.Names {
delete(r.byname, name)
}
for _, digest := range image.Digests {
prunedList := imageSliceWithoutValue(r.bydigest[digest], image)
if len(prunedList) == 0 {
delete(r.bydigest, digest)
} else {
r.bydigest[digest] = prunedList
}
}
if toDeleteIndex != -1 {
// delete the image at toDeleteIndex
if toDeleteIndex == len(r.images)-1 {
@@ -450,28 +533,6 @@ func (r *imageStore) Delete(id string) error {
r.images = append(r.images[:toDeleteIndex], r.images[toDeleteIndex+1:]...)
}
}
if digest, ok := image.BigDataDigests[ImageDigestBigDataKey]; ok {
// remove the image from the digest-based index
if list, ok := r.bydigest[digest]; ok {
prunedList := imageSliceWithoutValue(list, image)
if len(prunedList) == 0 {
delete(r.bydigest, digest)
} else {
r.bydigest[digest] = prunedList
}
}
}
if image.Digest != "" {
// remove the image's hard-coded digest from the digest-based index
if list, ok := r.bydigest[image.Digest]; ok {
prunedList := imageSliceWithoutValue(list, image)
if len(prunedList) == 0 {
delete(r.bydigest, image.Digest)
} else {
r.bydigest[image.Digest] = prunedList
}
}
}
if err := r.Save(); err != nil {
return err
}
@@ -502,7 +563,7 @@ func (r *imageStore) Exists(id string) bool {
func (r *imageStore) ByDigest(d digest.Digest) ([]*Image, error) {
if images, ok := r.bydigest[d]; ok {
return images, nil
return copyImageSlice(images), nil
}
return nil, ErrImageUnknown
}
@@ -533,15 +594,7 @@ func (r *imageStore) BigDataSize(id, key string) (int64, error) {
return size, nil
}
if data, err := r.BigData(id, key); err == nil && data != nil {
if r.SetBigData(id, key, data) == nil {
image, ok := r.lookup(id)
if !ok {
return -1, ErrImageUnknown
}
if size, ok := image.BigDataSizes[key]; ok {
return size, nil
}
}
return int64(len(data)), nil
}
return -1, ErrSizeUnknown
}
@@ -560,17 +613,6 @@ func (r *imageStore) BigDataDigest(id, key string) (digest.Digest, error) {
if d, ok := image.BigDataDigests[key]; ok {
return d, nil
}
if data, err := r.BigData(id, key); err == nil && data != nil {
if r.SetBigData(id, key, data) == nil {
image, ok := r.lookup(id)
if !ok {
return "", ErrImageUnknown
}
if d, ok := image.BigDataDigests[key]; ok {
return d, nil
}
}
}
return "", ErrDigestUnknown
}
@@ -593,7 +635,7 @@ func imageSliceWithoutValue(slice []*Image, value *Image) []*Image {
return modified
}
func (r *imageStore) SetBigData(id, key string, data []byte) error {
func (r *imageStore) SetBigData(id, key string, data []byte, digestManifest func([]byte) (digest.Digest, error)) error {
if key == "" {
return errors.Wrapf(ErrInvalidBigDataName, "can't set empty name for image big data item")
}
@@ -604,10 +646,22 @@ func (r *imageStore) SetBigData(id, key string, data []byte) error {
if !ok {
return ErrImageUnknown
}
if err := os.MkdirAll(r.datadir(image.ID), 0700); err != nil {
err := os.MkdirAll(r.datadir(image.ID), 0700)
if err != nil {
return err
}
err := ioutils.AtomicWriteFile(r.datapath(image.ID, key), data, 0600)
var newDigest digest.Digest
if bigDataNameIsManifest(key) {
if digestManifest == nil {
return errors.Wrapf(ErrDigestUnknown, "error digesting manifest: no manifest digest callback provided")
}
if newDigest, err = digestManifest(data); err != nil {
return errors.Wrapf(err, "error digesting manifest")
}
} else {
newDigest = digest.Canonical.FromBytes(data)
}
err = ioutils.AtomicWriteFile(r.datapath(image.ID, key), data, 0600)
if err == nil {
save := false
if image.BigDataSizes == nil {
@@ -619,7 +673,6 @@ func (r *imageStore) SetBigData(id, key string, data []byte) error {
image.BigDataDigests = make(map[string]digest.Digest)
}
oldDigest, digestOk := image.BigDataDigests[key]
newDigest := digest.Canonical.FromBytes(data)
image.BigDataDigests[key] = newDigest
if !sizeOk || oldSize != image.BigDataSizes[key] || !digestOk || oldDigest != newDigest {
save = true
@@ -635,20 +688,21 @@ func (r *imageStore) SetBigData(id, key string, data []byte) error {
image.BigDataNames = append(image.BigDataNames, key)
save = true
}
if key == ImageDigestBigDataKey {
if oldDigest != "" && oldDigest != newDigest && oldDigest != image.Digest {
// remove the image from the list of images in the digest-based
// index which corresponds to the old digest for this item, unless
// it's also the hard-coded digest
if list, ok := r.bydigest[oldDigest]; ok {
prunedList := imageSliceWithoutValue(list, image)
if len(prunedList) == 0 {
delete(r.bydigest, oldDigest)
} else {
r.bydigest[oldDigest] = prunedList
}
for _, oldDigest := range image.Digests {
// remove the image from the list of images in the digest-based index
if list, ok := r.bydigest[oldDigest]; ok {
prunedList := imageSliceWithoutValue(list, image)
if len(prunedList) == 0 {
delete(r.bydigest, oldDigest)
} else {
r.bydigest[oldDigest] = prunedList
}
}
}
if err = image.recomputeDigests(); err != nil {
return errors.Wrapf(err, "error loading recomputing image digest information for %s", image.ID)
}
for _, newDigest := range image.Digests {
// add the image to the list of images in the digest-based index which
// corresponds to the new digest for this item, unless it's already there
list := r.bydigest[newDigest]
@@ -685,6 +739,10 @@ func (r *imageStore) Lock() {
r.lockfile.Lock()
}
func (r *imageStore) RLock() {
r.lockfile.RLock()
}
func (r *imageStore) Unlock() {
r.lockfile.Unlock()
}
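
Caller-side, the new digestManifest parameter only matters for manifest items; the storage transport earlier in this diff passes containers/image's manifest.Digest. A short sketch, with istore and the IDs standing in for real values:

// Manifest items must supply a digest callback so non-canonical digests survive.
if err := istore.SetBigData(imageID, "manifest", manifestBytes, manifest.Digest); err != nil {
	return err
}
// Non-manifest items may pass nil; digest.Canonical.FromBytes(data) is used.
if err := istore.SetBigData(imageID, "signatures", sigBytes, nil); err != nil {
	return err
}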

View File

@@ -229,6 +229,7 @@ type LayerStore interface {
type layerStore struct {
lockfile Locker
mountsLockfile Locker
rundir string
driver drivers.Driver
layerdir string
@@ -291,7 +292,6 @@ func (r *layerStore) Load() error {
idlist := []string{}
ids := make(map[string]*Layer)
names := make(map[string]*Layer)
mounts := make(map[string]*Layer)
compressedsums := make(map[digest.Digest][]string)
uncompressedsums := make(map[digest.Digest][]string)
if r.lockfile.IsReadWrite() {
@@ -319,39 +319,29 @@ func (r *layerStore) Load() error {
label.ReserveLabel(layer.MountLabel)
}
}
err = nil
}
if shouldSave && !r.IsReadWrite() {
if shouldSave && (!r.IsReadWrite() || !r.Locked()) {
return ErrDuplicateLayerNames
}
mpath := r.mountspath()
data, err = ioutil.ReadFile(mpath)
if err != nil && !os.IsNotExist(err) {
return err
}
layerMounts := []layerMountPoint{}
if err = json.Unmarshal(data, &layerMounts); len(data) == 0 || err == nil {
for _, mount := range layerMounts {
if mount.MountPoint != "" {
if layer, ok := ids[mount.ID]; ok {
mounts[mount.MountPoint] = layer
layer.MountPoint = mount.MountPoint
layer.MountCount = mount.MountCount
}
}
}
}
r.layers = layers
r.idindex = truncindex.NewTruncIndex(idlist)
r.byid = ids
r.byname = names
r.bymount = mounts
r.bycompressedsum = compressedsums
r.byuncompressedsum = uncompressedsums
err = nil
// Load and merge information about which layers are mounted, and where.
if r.IsReadWrite() {
r.mountsLockfile.RLock()
defer r.mountsLockfile.Unlock()
if err = r.loadMounts(); err != nil {
return err
}
}
// Last step: if we're writable, try to remove anything that a previous
// user of this storage area marked for deletion but didn't manage to
// actually delete.
if r.IsReadWrite() {
if r.IsReadWrite() && r.Locked() {
for _, layer := range r.layers {
if layer.Flags == nil {
layer.Flags = make(map[string]interface{})
@@ -373,12 +363,36 @@ func (r *layerStore) Load() error {
return err
}
func (r *layerStore) loadMounts() error {
mounts := make(map[string]*Layer)
mpath := r.mountspath()
data, err := ioutil.ReadFile(mpath)
if err != nil && !os.IsNotExist(err) {
return err
}
layerMounts := []layerMountPoint{}
if err = json.Unmarshal(data, &layerMounts); len(data) == 0 || err == nil {
for _, mount := range layerMounts {
if mount.MountPoint != "" {
if layer, ok := r.lookup(mount.ID); ok {
mounts[mount.MountPoint] = layer
layer.MountPoint = mount.MountPoint
layer.MountCount = mount.MountCount
}
}
}
err = nil
}
r.bymount = mounts
return err
}
func (r *layerStore) Save() error {
if !r.IsReadWrite() {
return errors.Wrapf(ErrStoreIsReadOnly, "not allowed to modify the layer store at %q", r.layerspath())
}
if !r.Locked() {
return errors.New("layer store is not locked")
return errors.New("layer store is not locked for writing")
}
rpath := r.layerspath()
if err := os.MkdirAll(filepath.Dir(rpath), 0700); err != nil {
@@ -388,6 +402,25 @@ func (r *layerStore) Save() error {
if err != nil {
return err
}
if err := ioutils.AtomicWriteFile(rpath, jldata, 0600); err != nil {
return err
}
if !r.IsReadWrite() {
return nil
}
r.mountsLockfile.Lock()
defer r.mountsLockfile.Unlock()
defer r.mountsLockfile.Touch()
return r.saveMounts()
}
func (r *layerStore) saveMounts() error {
if !r.IsReadWrite() {
return errors.Wrapf(ErrStoreIsReadOnly, "not allowed to modify the layer store at %q", r.layerspath())
}
if !r.mountsLockfile.Locked() {
return errors.New("layer store mount information is not locked for writing")
}
mpath := r.mountspath()
if err := os.MkdirAll(filepath.Dir(mpath), 0700); err != nil {
return err
@@ -406,11 +439,10 @@ func (r *layerStore) Save() error {
if err != nil {
return err
}
if err := ioutils.AtomicWriteFile(rpath, jldata, 0600); err != nil {
if err = ioutils.AtomicWriteFile(mpath, jmdata, 0600); err != nil {
return err
}
defer r.Touch()
return ioutils.AtomicWriteFile(mpath, jmdata, 0600)
return r.loadMounts()
}
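
Condensing the discipline introduced here, which the Mounted and Mount changes below follow: mount state has its own lockfile, is re-read whenever another process modified it, and is touched after each write so other readers notice.

r.mountsLockfile.Lock() // or RLock() on read-only paths such as Mounted()
defer r.mountsLockfile.Unlock()
if modified, err := r.mountsLockfile.Modified(); modified || err != nil {
	if err = r.loadMounts(); err != nil {
		return err
	}
}
defer r.mountsLockfile.Touch() // bump the timestamp once the update is written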
func newLayerStore(rundir string, layerdir string, driver drivers.Driver, uidMap, gidMap []idtools.IDMap) (LayerStore, error) {
@@ -426,16 +458,21 @@ func newLayerStore(rundir string, layerdir string, driver drivers.Driver, uidMap
}
lockfile.Lock()
defer lockfile.Unlock()
mountsLockfile, err := GetLockfile(filepath.Join(rundir, "mountpoints.lock"))
if err != nil {
return nil, err
}
rlstore := layerStore{
lockfile: lockfile,
driver: driver,
rundir: rundir,
layerdir: layerdir,
byid: make(map[string]*Layer),
bymount: make(map[string]*Layer),
byname: make(map[string]*Layer),
uidMap: copyIDMap(uidMap),
gidMap: copyIDMap(gidMap),
lockfile: lockfile,
mountsLockfile: mountsLockfile,
driver: driver,
rundir: rundir,
layerdir: layerdir,
byid: make(map[string]*Layer),
bymount: make(map[string]*Layer),
byname: make(map[string]*Layer),
uidMap: copyIDMap(uidMap),
gidMap: copyIDMap(gidMap),
}
if err := rlstore.Load(); err != nil {
return nil, err
@@ -451,13 +488,14 @@ func newROLayerStore(rundir string, layerdir string, driver drivers.Driver) (ROL
lockfile.Lock()
defer lockfile.Unlock()
rlstore := layerStore{
lockfile: lockfile,
driver: driver,
rundir: rundir,
layerdir: layerdir,
byid: make(map[string]*Layer),
bymount: make(map[string]*Layer),
byname: make(map[string]*Layer),
lockfile: lockfile,
mountsLockfile: nil,
driver: driver,
rundir: rundir,
layerdir: layerdir,
byid: make(map[string]*Layer),
bymount: make(map[string]*Layer),
byname: make(map[string]*Layer),
}
if err := rlstore.Load(); err != nil {
return nil, err
@@ -551,9 +589,20 @@ func (r *layerStore) Put(id string, parentLayer *Layer, names []string, mountLab
         }
     }
     parent := ""
-    var parentMappings *idtools.IDMappings
     if parentLayer != nil {
         parent = parentLayer.ID
+    }
+    var parentMappings, templateIDMappings, oldMappings *idtools.IDMappings
+    if moreOptions.TemplateLayer != "" {
+        templateLayer, ok := r.lookup(moreOptions.TemplateLayer)
+        if !ok {
+            return nil, -1, ErrLayerUnknown
+        }
+        templateIDMappings = idtools.NewIDMappingsFromMaps(templateLayer.UIDMap, templateLayer.GIDMap)
+    } else {
+        templateIDMappings = &idtools.IDMappings{}
+    }
+    if parentLayer != nil {
         parentMappings = idtools.NewIDMappingsFromMaps(parentLayer.UIDMap, parentLayer.GIDMap)
     } else {
         parentMappings = &idtools.IDMappings{}
@@ -566,23 +615,34 @@ func (r *layerStore) Put(id string, parentLayer *Layer, names []string, mountLab
         MountLabel: mountLabel,
         StorageOpt: options,
     }
-    if writeable {
-        if err = r.driver.CreateReadWrite(id, parent, &opts); err != nil {
+    if moreOptions.TemplateLayer != "" {
+        if err = r.driver.CreateFromTemplate(id, moreOptions.TemplateLayer, templateIDMappings, parent, parentMappings, &opts, writeable); err != nil {
             if id != "" {
-                return nil, -1, errors.Wrapf(err, "error creating read-write layer with ID %q", id)
+                return nil, -1, errors.Wrapf(err, "error creating copy of template layer %q with ID %q", moreOptions.TemplateLayer, id)
             }
-            return nil, -1, errors.Wrapf(err, "error creating read-write layer")
+            return nil, -1, errors.Wrapf(err, "error creating copy of template layer %q", moreOptions.TemplateLayer)
         }
+        oldMappings = templateIDMappings
     } else {
-        if err = r.driver.Create(id, parent, &opts); err != nil {
-            if id != "" {
-                return nil, -1, errors.Wrapf(err, "error creating layer with ID %q", id)
+        if writeable {
+            if err = r.driver.CreateReadWrite(id, parent, &opts); err != nil {
+                if id != "" {
+                    return nil, -1, errors.Wrapf(err, "error creating read-write layer with ID %q", id)
+                }
+                return nil, -1, errors.Wrapf(err, "error creating read-write layer")
+            }
+        } else {
+            if err = r.driver.Create(id, parent, &opts); err != nil {
+                if id != "" {
+                    return nil, -1, errors.Wrapf(err, "error creating layer with ID %q", id)
+                }
+                return nil, -1, errors.Wrapf(err, "error creating layer")
             }
-            return nil, -1, errors.Wrapf(err, "error creating layer")
         }
+        oldMappings = parentMappings
     }
-    if !reflect.DeepEqual(parentMappings.UIDs(), idMappings.UIDs()) || !reflect.DeepEqual(parentMappings.GIDs(), idMappings.GIDs()) {
-        if err = r.driver.UpdateLayerIDMap(id, parentMappings, idMappings, mountLabel); err != nil {
+    if !reflect.DeepEqual(oldMappings.UIDs(), idMappings.UIDs()) || !reflect.DeepEqual(oldMappings.GIDs(), idMappings.GIDs()) {
+        if err = r.driver.UpdateLayerIDMap(id, oldMappings, idMappings, mountLabel); err != nil {
             // We don't have a record of this layer, but at least
             // try to clean it up underneath us.
             r.driver.Remove(id)
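
The control flow added here has three creation paths (template clone, read-write, read-only) but a single follow-up decision: which "old" ID mappings to compare against the requested ones. A toy restatement in plain Go, standing in for the driver calls and *idtools.IDMappings:

package main

import "fmt"

// createLayer condenses the branch above: when a template layer is
// named, the driver clones it and the template's ID mappings become
// the baseline; otherwise the parent's mappings do.
func createLayer(haveTemplate, writeable bool, templateMaps, parentMaps string) (baseline string) {
	if haveTemplate {
		// r.driver.CreateFromTemplate(...) path
		baseline = templateMaps
	} else if writeable {
		// r.driver.CreateReadWrite(...) path
		baseline = parentMaps
	} else {
		// r.driver.Create(...) path
		baseline = parentMaps
	}
	// The caller then compares baseline against the requested mappings
	// and runs UpdateLayerIDMap only when they differ, as above.
	return baseline
}

func main() {
	fmt.Println(createLayer(true, false, "template-mappings", "parent-mappings"))
	fmt.Println(createLayer(false, true, "template-mappings", "parent-mappings"))
}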
@@ -651,6 +711,16 @@ func (r *layerStore) Create(id string, parent *Layer, names []string, mountLabel
 }
 
 func (r *layerStore) Mounted(id string) (int, error) {
+    if !r.IsReadWrite() {
+        return 0, errors.Wrapf(ErrStoreIsReadOnly, "no mount information for layers at %q", r.mountspath())
+    }
+    r.mountsLockfile.RLock()
+    defer r.mountsLockfile.Unlock()
+    if modified, err := r.mountsLockfile.Modified(); modified || err != nil {
+        if err = r.loadMounts(); err != nil {
+            return 0, err
+        }
+    }
     layer, ok := r.lookup(id)
     if !ok {
         return 0, ErrLayerUnknown
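
Mounted() now answers from mount state guarded by the mounts lock, reloading it first if another party has written it since the last read. The same take-lock / check-Modified / reload sequence recurs in Mount, Unmount, and ParentOwners below; a compact single-goroutine sketch of the pattern with toy types:

package main

import "fmt"

// mountState is a single-goroutine sketch: check a "modified" flag
// (standing in for r.mountsLockfile.Modified()) and reload the
// on-disk state before answering.
type mountState struct {
	modifiedOnDisk bool
	counts         map[string]int
}

func (m *mountState) reload() {
	// re-read the mounts file here; the sketch just clears the flag
	m.modifiedOnDisk = false
}

// mounted mirrors Mounted() above: reload if another process wrote the
// mounts file since we last read it, then answer from memory.
func (m *mountState) mounted(id string) int {
	if m.modifiedOnDisk {
		m.reload()
	}
	return m.counts[id]
}

func main() {
	m := &mountState{modifiedOnDisk: true, counts: map[string]int{"layer1": 2}}
	fmt.Println(m.mounted("layer1")) // reloads, then reports 2
}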
@@ -662,13 +732,21 @@ func (r *layerStore) Mount(id string, options drivers.MountOpts) (string, error)
     if !r.IsReadWrite() {
         return "", errors.Wrapf(ErrStoreIsReadOnly, "not allowed to update mount locations for layers at %q", r.mountspath())
     }
+    r.mountsLockfile.Lock()
+    defer r.mountsLockfile.Unlock()
+    if modified, err := r.mountsLockfile.Modified(); modified || err != nil {
+        if err = r.loadMounts(); err != nil {
+            return "", err
+        }
+    }
+    defer r.mountsLockfile.Touch()
     layer, ok := r.lookup(id)
     if !ok {
         return "", ErrLayerUnknown
     }
     if layer.MountCount > 0 {
         layer.MountCount++
-        return layer.MountPoint, r.Save()
+        return layer.MountPoint, r.saveMounts()
     }
     if options.MountLabel == "" {
         options.MountLabel = layer.MountLabel
@@ -687,7 +765,7 @@ func (r *layerStore) Mount(id string, options drivers.MountOpts) (string, error)
         layer.MountPoint = filepath.Clean(mountpoint)
         layer.MountCount++
         r.bymount[layer.MountPoint] = layer
-        err = r.Save()
+        err = r.saveMounts()
     }
     return mountpoint, err
 }
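
Note that every mount-count change in Mount() now persists through saveMounts() rather than a full Save(), so repeated mounts of the same layer only rewrite the mount records. A toy of the reference-counting contract the code above keeps:

package main

import "fmt"

// mountRef mimics the MountCount bookkeeping above: the first mount
// performs the real work, later ones only bump the counter.
type mountRef struct {
	count      int
	mountpoint string
}

func (m *mountRef) mount() string {
	if m.count > 0 {
		m.count++ // already mounted: just take another reference
		return m.mountpoint
	}
	m.mountpoint = "/var/lib/toy/overlay/layer1/merged" // driver.Get(...) stand-in
	m.count = 1
	return m.mountpoint
}

func main() {
	var m mountRef
	fmt.Println(m.mount(), m.count) // first mount, count 1
	fmt.Println(m.mount(), m.count) // reused mountpoint, count 2
}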
@@ -696,6 +774,14 @@ func (r *layerStore) Unmount(id string, force bool) (bool, error) {
     if !r.IsReadWrite() {
         return false, errors.Wrapf(ErrStoreIsReadOnly, "not allowed to update mount locations for layers at %q", r.mountspath())
     }
+    r.mountsLockfile.Lock()
+    defer r.mountsLockfile.Unlock()
+    if modified, err := r.mountsLockfile.Modified(); modified || err != nil {
+        if err = r.loadMounts(); err != nil {
+            return false, err
+        }
+    }
+    defer r.mountsLockfile.Touch()
     layer, ok := r.lookup(id)
     if !ok {
         layerByMount, ok := r.bymount[filepath.Clean(id)]
@@ -709,7 +795,7 @@ func (r *layerStore) Unmount(id string, force bool) (bool, error) {
     }
     if layer.MountCount > 1 {
         layer.MountCount--
-        return true, r.Save()
+        return true, r.saveMounts()
     }
     err := r.driver.Put(id)
     if err == nil || os.IsNotExist(err) {
@@ -718,12 +804,22 @@ func (r *layerStore) Unmount(id string, force bool) (bool, error) {
         }
         layer.MountCount--
         layer.MountPoint = ""
-        return false, r.Save()
+        return false, r.saveMounts()
     }
     return true, err
 }
 
 func (r *layerStore) ParentOwners(id string) (uids, gids []int, err error) {
+    if !r.IsReadWrite() {
+        return nil, nil, errors.Wrapf(ErrStoreIsReadOnly, "no mount information for layers at %q", r.mountspath())
+    }
+    r.mountsLockfile.RLock()
+    defer r.mountsLockfile.Unlock()
+    if modified, err := r.mountsLockfile.Modified(); modified || err != nil {
+        if err = r.loadMounts(); err != nil {
+            return nil, nil, err
+        }
+    }
     layer, ok := r.lookup(id)
     if !ok {
         return nil, nil, ErrLayerUnknown
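
Unmount()'s boolean result, visible in the hunks above, tracks whether the layer remains mounted after the call: a plain reference drop returns true, while the final unmount through driver.Put clears the mountpoint and returns false. A toy mirror of that contract:

package main

import "fmt"

type layerMount struct {
	count      int
	mountpoint string
}

// unmount returns true while other users still hold the mount, false
// once the last reference is gone, echoing Unmount() above.
func (l *layerMount) unmount() bool {
	if l.count > 1 {
		l.count--
		return true
	}
	// last reference: really unmount (driver.Put stand-in), then clear
	l.count--
	l.mountpoint = ""
	return false
}

func main() {
	l := &layerMount{count: 2, mountpoint: "/tmp/layer"}
	fmt.Println(l.unmount()) // true: still mounted once
	fmt.Println(l.unmount()) // false: fully unmounted
}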
@@ -840,14 +936,23 @@ func (r *layerStore) Delete(id string) error {
         return ErrLayerUnknown
     }
     id = layer.ID
-    // This check is needed for idempotency of delete where the layer could have been
-    // already unmounted (since c/storage gives you that API directly)
-    for layer.MountCount > 0 {
+    // The layer may already have been explicitly unmounted, but if not, we
+    // should try to clean that up before we start deleting anything at the
+    // driver level.
+    mountCount, err := r.Mounted(id)
+    if err != nil {
+        return errors.Wrapf(err, "error checking if layer %q is still mounted", id)
+    }
+    for mountCount > 0 {
         if _, err := r.Unmount(id, false); err != nil {
             return err
         }
+        mountCount, err = r.Mounted(id)
+        if err != nil {
+            return errors.Wrapf(err, "error checking if layer %q is still mounted", id)
+        }
     }
-    err := r.driver.Remove(id)
+    err = r.driver.Remove(id)
     if err == nil {
         os.Remove(r.tspath(id))
         delete(r.byid, id)
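
Delete() now re-queries the mount count on every iteration instead of trusting the in-memory MountCount, so a layer already unmounted through the public API (or by another process) no longer confuses the loop. A sketch of the loop shape against a stub store:

package main

import "fmt"

type stubStore struct{ count int }

func (s *stubStore) Mounted(id string) (int, error) { return s.count, nil }
func (s *stubStore) Unmount(id string) error        { s.count--; return nil }

// drainMounts mirrors the Delete() loop above: check, unmount, re-check,
// so stale in-memory counts cannot cause extra or missed unmounts.
func drainMounts(s *stubStore, id string) error {
	mountCount, err := s.Mounted(id)
	if err != nil {
		return err
	}
	for mountCount > 0 {
		if err := s.Unmount(id); err != nil {
			return err
		}
		if mountCount, err = s.Mounted(id); err != nil {
			return err
		}
	}
	return nil
}

func main() {
	s := &stubStore{count: 3}
	fmt.Println(drainMounts(s, "layer1"), s.count) // <nil> 0
}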
@@ -1200,6 +1305,10 @@ func (r *layerStore) Lock() {
     r.lockfile.Lock()
 }
 
+func (r *layerStore) RLock() {
+    r.lockfile.RLock()
+}
+
 func (r *layerStore) Unlock() {
     r.lockfile.Unlock()
 }
@@ -1209,7 +1318,20 @@ func (r *layerStore) Touch() error {
 }
 
 func (r *layerStore) Modified() (bool, error) {
-    return r.lockfile.Modified()
+    var mmodified bool
+    lmodified, err := r.lockfile.Modified()
+    if err != nil {
+        return lmodified, err
+    }
+    if r.IsReadWrite() {
+        r.mountsLockfile.RLock()
+        defer r.mountsLockfile.Unlock()
+        mmodified, err = r.mountsLockfile.Modified()
+        if err != nil {
+            return lmodified, err
+        }
+    }
+    return lmodified || mmodified, nil
 }
 
 func (r *layerStore) IsReadWrite() bool {
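
With two lock files, "has anything changed?" must consult both: the store counts as modified if either the layer metadata or, for read-write stores, the mount state was touched by another party. A toy version of that OR, using function fields in place of the two lockfiles:

package main

import "fmt"

type twoLockStore struct {
	layersModified func() (bool, error)
	mountsModified func() (bool, error)
	readWrite      bool
}

// modified mirrors layerStore.Modified() above: read-only stores have
// no mounts lock, so only the main lockfile is consulted.
func (s *twoLockStore) modified() (bool, error) {
	lmod, err := s.layersModified()
	if err != nil || !s.readWrite {
		return lmod, err
	}
	mmod, err := s.mountsModified()
	if err != nil {
		return lmod, err
	}
	return lmod || mmod, nil
}

func main() {
	s := &twoLockStore{
		layersModified: func() (bool, error) { return false, nil },
		mountsModified: func() (bool, error) { return true, nil },
		readWrite:      true,
	}
	fmt.Println(s.modified()) // true <nil>
}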

vendor/github.com/containers/storage/lockfile.go

@@ -1,7 +1,6 @@
 package storage
 
 import (
-    "fmt"
     "path/filepath"
     "sync"
     "time"
@@ -13,7 +12,14 @@ import (
 // identifier of the last party that made changes to whatever's being protected
 // by the lock.
 type Locker interface {
-    sync.Locker
+    // Acquire a writer lock.
+    Lock()
+
+    // Unlock the lock.
+    Unlock()
+
+    // Acquire a reader lock.
+    RLock()
 
     // Touch records, for others sharing the lock, that the caller was the
     // last writer. It should only be called with the lock held.
@@ -29,7 +35,7 @@ type Locker interface {
     // IsReadWrite() checks if the lock file is read-write
     IsReadWrite() bool
 
-    // Locked() checks if lock is locked
+    // Locked() checks if lock is locked for writing by a thread in this process
     Locked() bool
 }
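
Spelling out Lock, Unlock, and RLock instead of embedding sync.Locker lets implementations distinguish shared from exclusive acquisition behind a single Unlock. A hedged in-process sketch of that shape (the mode flag is simplistic bookkeeping for the sketch, not something the real lockfile does):

package main

import "sync"

// rwLocker is a toy with the three methods the interface above names.
type rwLocker struct {
	mu     sync.RWMutex
	shared bool
}

func (l *rwLocker) Lock()  { l.mu.Lock(); l.shared = false } // writer lock
func (l *rwLocker) RLock() { l.mu.RLock(); l.shared = true } // reader lock

// Unlock releases whichever mode was last taken, mirroring the single
// Unlock() in the Locker interface; safe only for sequential use here.
func (l *rwLocker) Unlock() {
	if l.shared {
		l.mu.RUnlock()
	} else {
		l.mu.Unlock()
	}
}

func main() {
	var l rwLocker
	l.RLock() // shared: many readers could hold this concurrently
	l.Unlock()
	l.Lock() // exclusive
	l.Unlock()
}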
@@ -39,44 +45,50 @@ var (
 )
 
 // GetLockfile opens a read-write lock file, creating it if necessary. The
-// Locker object it returns will be returned unlocked.
+// Locker object may already be locked if the path has already been requested
+// by the current process.
 func GetLockfile(path string) (Locker, error) {
-    lockfilesLock.Lock()
-    defer lockfilesLock.Unlock()
-    if lockfiles == nil {
-        lockfiles = make(map[string]Locker)
-    }
-    cleanPath := filepath.Clean(path)
-    if locker, ok := lockfiles[cleanPath]; ok {
-        if !locker.IsReadWrite() {
-            return nil, errors.Wrapf(ErrLockReadOnly, "lock %q is a read-only lock", cleanPath)
-        }
-        return locker, nil
-    }
-    locker, err := getLockFile(path, false) // platform dependent locker
-    if err != nil {
-        return nil, err
-    }
-    lockfiles[filepath.Clean(path)] = locker
-    return locker, nil
+    return getLockfile(path, false)
 }
 
-// GetROLockfile opens a read-only lock file. The Locker object it returns
-// will be returned unlocked.
+// GetROLockfile opens a read-only lock file, creating it if necessary. The
+// Locker object may already be locked if the path has already been requested
+// by the current process.
 func GetROLockfile(path string) (Locker, error) {
+    return getLockfile(path, true)
+}
+
+// getLockfile returns a Locker object, possibly (depending on the platform)
+// working inter-process, and associated with the specified path.
+//
+// If ro, the lock is a read-write lock and the returned Locker should correspond to the
+// “lock for reading” (shared) operation; otherwise, the lock is either an exclusive lock,
+// or a read-write lock and Locker should correspond to the “lock for writing” (exclusive) operation.
+//
+// WARNING:
+// - The lock may or MAY NOT be inter-process.
+// - There may or MAY NOT be an actual object on the filesystem created for the specified path.
+// - Even if ro, the lock MAY be exclusive.
+func getLockfile(path string, ro bool) (Locker, error) {
     lockfilesLock.Lock()
     defer lockfilesLock.Unlock()
     if lockfiles == nil {
         lockfiles = make(map[string]Locker)
     }
-    cleanPath := filepath.Clean(path)
+    cleanPath, err := filepath.Abs(path)
+    if err != nil {
+        return nil, errors.Wrapf(err, "error ensuring that path %q is an absolute path", path)
+    }
     if locker, ok := lockfiles[cleanPath]; ok {
-        if locker.IsReadWrite() {
-            return nil, fmt.Errorf("lock %q is a read-write lock", cleanPath)
+        if ro && locker.IsReadWrite() {
+            return nil, errors.Errorf("lock %q is not a read-only lock", cleanPath)
         }
+        if !ro && !locker.IsReadWrite() {
+            return nil, errors.Errorf("lock %q is not a read-write lock", cleanPath)
+        }
         return locker, nil
     }
-    locker, err := getLockFile(path, true) // platform dependent locker
+    locker, err := createLockerForPath(path, ro) // platform-dependent locker
     if err != nil {
         return nil, err
     }
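
Two behavioral changes land in this hunk: lock objects are now keyed by absolute path (filepath.Abs rather than filepath.Clean), and requesting an already-registered path with a mismatched read-only/read-write mode is an explicit error in both directions. A hedged usage sketch (paths illustrative; the lock file's parent directory must already exist):

package main

import (
	"fmt"

	"github.com/containers/storage"
)

func main() {
	// Same file requested via two spellings: the process-wide map in
	// getLockfile hands back the one existing Locker, now keyed by
	// absolute path, so both calls resolve to the same entry.
	a, err := storage.GetLockfile("run/x.lock")
	if err != nil {
		panic(err)
	}
	b, err := storage.GetLockfile("./run/x.lock")
	if err != nil {
		panic(err)
	}
	fmt.Println(a == b) // true: one Locker per canonical path

	// Asking for the same path read-only now fails cleanly instead of
	// returning a lock with the wrong semantics.
	if _, err := storage.GetROLockfile("run/x.lock"); err != nil {
		fmt.Println("expected mode mismatch:", err)
	}
}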

Some files were not shown because too many files have changed in this diff.