Compare commits

..

103 Commits

Author SHA1 Message Date
Antonio Murdaca
b08008c5b2 bump to v0.1.18
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2017-02-02 17:42:32 +01:00
Antonio Murdaca
c011e81b38 Merge pull request #301 from runcom/perf-gain
vendor c/image@c1893ff40c
2017-02-02 17:41:13 +01:00
Antonio Murdaca
683d45ffd6 vendor c/image@c1893ff40c
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2017-02-02 17:23:47 +01:00
Antonio Murdaca
07d6e7db03 Merge pull request #300 from runcom/fix-v2s2-oci-conversion
oci: fix config conversion from docker v2s2
2017-01-31 18:42:31 +01:00
Antonio Murdaca
02ea29a99d oci: fix config conversion from docker v2s2
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2017-01-31 18:25:18 +01:00
Antonio Murdaca
f1849c6a47 Merge pull request #297 from runcom/remove-engine-api
docker: remove github.com/docker/engine-api
2017-01-31 17:27:13 +01:00
Antonio Murdaca
03bb3c2f74 docker: remove github.com/docker/engine-api
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2017-01-31 17:09:02 +01:00
Antonio Murdaca
fdf0bec556 Merge pull request #296 from runcom/fix-registry-auth
docker: fix registry authentication issues
2017-01-30 17:00:52 +01:00
Antonio Murdaca
c736e69d48 docker: fix registry authentication issues
vendor containers/image@d23efe9c5a
fix https://bugzilla.redhat.com/show_bug.cgi?id=1413987

Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2017-01-30 16:43:00 +01:00
Antonio Murdaca
d7156f9b3d Merge pull request #294 from runcom/fix-image-spec-dep
fix OCI image-spec dependency
2017-01-28 13:31:46 +01:00
Antonio Murdaca
15b3fdf6f4 fix OCI image-spec dependency
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2017-01-28 12:53:41 +01:00
Miloslav Trmač
0e1ba1fb70 Merge pull request #293 from mtrmac/unverified-contents
REPLACE: Vendor in mtrmac/image:unverified-contents and add a (skopeo dump-untrusted-signature-contents-without-verification)
2017-01-23 17:34:11 +01:00
Miloslav Trmač
ee590a9795 Add an undocumented (skopeo untrusted-signature-dump-without-verification)
This is a bit better than raw (gpg -d $signature), and it allows testing
of the signature.GetSignatureInformationWithoutVerification function;
but, still, keeping it hidden because relying on this in common
workflows is probably a bad idea and we don’t _need_ to expose it right
now.

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2017-01-23 16:45:18 +01:00
Miloslav Trmač
076d41d627 Vendor after merging mtrmac/image:unverified-contents
2017-01-23 16:45:03 +01:00
Miloslav Trmač
4830d90c32 Merge pull request #292 from mtrmac/ParseNormalizedNamed
REPLACE: Vendor in mtrmac/image:ParseNormalizedNamed
2017-01-20 00:02:14 +01:00
Miloslav Trmač
2f8cc39a1a Vendor after merging mtrmac/image:ParseNormalizedNamed
… and use the master branch of docker/distribution which provides
docker/distribution/reference.ParseNormalizedNamed.
2017-01-19 23:11:08 +01:00
Miloslav Trmač
8602471486 Merge pull request #291 from mtrmac/erikh-kubernetes-apimachinery
Vendor after merging erikh/image:kube-fix
2017-01-19 20:37:33 +01:00
Erik Hollensbe
1ee74864e9 Vendor after merging erikh/image:kube-fix
Based on https://github.com/projectatomic/skopeo/pull/289 by Erik
Hollensbe <github@hollensbe.org>

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2017-01-19 20:17:36 +01:00
Antonio Murdaca
81404fb71c Merge pull request #286 from AkihiroSuda/man2
Makefile: /usr/bin/go-md2man -> go-md2man
2017-01-12 15:29:57 +01:00
Akihiro Suda
845ad88cec Makefile: /usr/bin/go-md2man -> go-md2man
Signed-off-by: Akihiro Suda <suda.akihiro@lab.ntt.co.jp>
2017-01-12 09:57:52 +00:00
Antonio Murdaca
9b6b57df50 Merge pull request #284 from runcom/switch-to-vndr
switch to vndr
2017-01-09 18:11:22 +01:00
Antonio Murdaca
fefeeb4c70 switch to vndr
vndr is almost exactly the same as our good old hack/vendor.sh, except
it's cleaner and it allows re-vendoring just one dependency when needed
(which we do a lot for containers/image).

Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2017-01-09 17:54:17 +01:00
Antonio Murdaca
bbc0c69624 Merge pull request #283 from runcom/oci-digest
*: switch to opencontainers/go-digest
2017-01-09 16:51:47 +01:00
Antonio Murdaca
dcfcfdaa1e *: switch to opencontainers/go-digest
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2017-01-09 16:20:00 +01:00
Antonio Murdaca
e41b0d67d6 Merge pull request #282 from runcom/rev-c-image-2
bump c/image
2017-01-08 13:26:08 +01:00
Antonio Murdaca
1ec992abd1 bump c/image
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2017-01-08 13:10:07 +01:00
Antonio Murdaca
b8ae5c6054 Merge pull request #278 from runcom/up-oci-image
update opencontainers/image-spec to v1.0.0-rc3
2016-12-23 12:07:14 +01:00
Antonio Murdaca
7c530bc55f update opencontainers/image-spec to v1.0.0-rc3
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2016-12-23 11:39:27 +01:00
Antonio Murdaca
3377542e27 Merge pull request #277 from runcom/up-c-image
update c/image
2016-12-22 17:59:22 +01:00
Antonio Murdaca
cf5d9ffa49 update c/image
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2016-12-22 17:43:37 +01:00
Antonio Murdaca
686c3fcd7a Merge pull request #265 from masters-of-cats/vendor-errorspkg
Add pkg/errors dependency
2016-12-19 17:47:15 +01:00
George Lestaris
78a24cea81 Vendors new containers/image version using pkg/errors
Signed-off-by: George Lestaris <glestaris@pivotal.io>
2016-12-19 16:25:50 +00:00
Antonio Murdaca
fd93ebb78d Merge pull request #275 from glestaris/patch-1
Fix typo in CONTRIBUTING.md
2016-12-16 12:30:43 +01:00
George Lestaris
56dd3fc928 Fix typo
Signed-off-by: Gareth Clay <gclay@pivotal.io>
Signed-off-by: George Lestaris <glestaris@pivotal.io>
2016-12-16 10:56:57 +00:00
Antonio Murdaca
d830304e6d Merge pull request #179 from nalind/vendor-storage
Vendor containers/storage
2016-12-15 18:49:51 +01:00
Nalin Dahyabhai
4960f390e2 Vendor containers/storage, update containers/image
Vendor containers/storage, and its dependencies github.com/pborman/uuid
and github.com/mistifyio/go-zfs, which we didn't already use.

Update the build Dockerfile to install their dependencies.

Add scriptlets that try to detect whether or not we need to use the
"libdm_no_deferred_remove" and/or "btrfs_noversion" build tags.

Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>
2016-12-15 12:29:07 -05:00
Nalin Dahyabhai
9ba6dd71d7 Initialize reexec() handlers at startup-time
When we start up, initialize handlers so that we can import blobs
correctly when using the storage library.

Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>
2016-12-09 13:31:54 -05:00
Antonio Murdaca
dd6441b546 Merge pull request #273 from mtrmac/FIXMEs
Remove an obsolete FIXME
2016-12-09 17:28:19 +01:00
Miloslav Trmač
09cc6c3199 Remove an obsolete FIXME
With https://github.com/containers/image/pull/183 , this no longer
applies.
2016-12-09 17:10:32 +01:00
Antonio Murdaca
7f7b648443 Merge pull request #272 from runcom/bump-v0.1.17-and-again
Bump v0.1.17 and again to v0.1.18-dev
2016-12-08 21:15:36 +01:00
Antonio Murdaca
cc571eb1ea bump to v0.1.18-dev
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2016-12-08 20:58:26 +01:00
Antonio Murdaca
b3b4e2b8f8 bump to v0.1.17
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2016-12-08 20:58:03 +01:00
Antonio Murdaca
28647cf29f Merge pull request #271 from runcom/fix-nokey-err
provide better error when key not found
2016-12-08 19:49:49 +01:00
Antonio Murdaca
0c8511f222 provide better error when key not found
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2016-12-08 18:51:30 +01:00
Antonio Murdaca
a515fefda9 Merge pull request #264 from runcom/split-flags
cmd: per command tls flags
2016-12-08 18:12:11 +01:00
Antonio Murdaca
1215f5fe69 cmd: per command tls flags
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2016-12-08 17:56:22 +01:00
Antonio Murdaca
93cde78d9b cmd/skopeo: hide layers command
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2016-12-08 17:56:22 +01:00
Antonio Murdaca
1730fd0d5f Merge pull request #269 from nalind/buildtags
Makefile: run "go" with $(BUILDTAGS)
2016-12-08 17:10:45 +01:00
Nalin Dahyabhai
7d58309a4f Makefile: run "go" with $(BUILDTAGS)
Run the "go" command with the $(BUILDTAGS) makefile variable passed in
as build tags.  We don't currently set it, but we'll need to eventually,
and adding it now does no harm.

Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>
2016-12-08 10:49:36 -05:00
Antonio Murdaca
a865c07818 Merge pull request #267 from cyphar/vendor-image
vendor: update to containers/image@a791c54467
2016-12-08 10:03:48 +01:00
Aleksa Sarai
cd269a4558 vendor: update to containers/image@a791c54467
Signed-off-by: Aleksa Sarai <asarai@suse.de>
2016-12-08 12:32:36 +11:00
Miloslav Trmač
2b3af4ad51 Merge pull request #266 from runcom/update-readme-contrib
README.md: add dependencies management tips
2016-12-05 17:38:21 +01:00
Antonio Murdaca
6ec338aa30 README.md: add dependencies management tips
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2016-12-05 13:07:20 +01:00
Antonio Murdaca
fb61d0c98f Merge pull request #262 from runcom/fix-quay-io
docker: fix ping routine
2016-12-02 19:16:31 +01:00
Antonio Murdaca
7620193722 docker: fix ping routine
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2016-12-02 19:00:38 +01:00
Miloslav Trmač
f8bd406deb Merge pull request #261 from runcom/fix-inline-df-comments
Revert "Dockerfile: remove inline comments"
2016-12-02 18:28:40 +01:00
Antonio Murdaca
d9b60e7fc9 Revert "Dockerfile: remove inline comments"
This reverts commit 3dcdb1ff7d.

Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2016-12-01 22:03:10 +01:00
Miloslav Trmač
d0a41799da Merge pull request #260 from runcom/rerevendor-cimage
revendor containers/image
2016-12-01 19:19:44 +01:00
Antonio Murdaca
dcc5395140 revendor containers/image
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2016-12-01 19:03:10 +01:00
Antonio Murdaca
980ff3eadd Merge pull request #249 from runcom/layers-federation
Support layers federation
2016-11-30 22:14:30 +01:00
Antonio Murdaca
f36fde92d6 Supports layers federation
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2016-11-30 19:46:54 +01:00
Antonio Murdaca
6b616d1730 Merge pull request #256 from rhatdan/bash_completions
Complete bash completions for skopeo
2016-11-30 19:39:25 +01:00
Dan Walsh
6dc36483f4 Complete bash completions for skopeo
Current code only handled commands, not the options.
2016-11-30 13:22:57 -05:00
Antonio Murdaca
574b764391 Merge pull request #257 from runcom/revendor-cimage
revendor containers/image
2016-11-30 18:24:05 +01:00
Antonio Murdaca
1e795e038b revendor containers/image
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2016-11-30 18:07:35 +01:00
Antonio Murdaca
7cca84ba57 Merge pull request #254 from runcom/enable-cli-userpass
use user/pass flags
2016-11-30 17:29:48 +01:00
Antonio Murdaca
342ba18561 use user/pass flags
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2016-11-30 17:10:42 +01:00
Antonio Murdaca
d69c51e958 Merge pull request #248 from Crazykev/use-docker-digest
Use docker/distribution/digest
2016-11-28 17:31:33 +01:00
Crazykev
8b73542d89 use docker/distribution/digest
Signed-off-by: Crazykev <crazykev@zju.edu.cn>
2016-11-28 16:14:51 +00:00
Crazykev
1c76bc950d update containers/image vendor
Signed-off-by: Crazykev <crazykev@zju.edu.cn>
2016-11-28 11:36:18 +00:00
Antonio Murdaca
141212f27d Merge pull request #255 from runcom/fix-Dockerfile
Dockerfile: remove inline comments
2016-11-25 18:27:05 +01:00
Antonio Murdaca
3dcdb1ff7d Dockerfile: remove inline comments
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2016-11-25 17:35:59 +01:00
Antonio Murdaca
4620d5849c Merge pull request #252 from mtrmac/one-skopeo-in-all
Only build one skopeo binary by (make all)
2016-11-23 17:48:35 +01:00
Miloslav Trmač
a0af3619d3 Only build one skopeo binary by (make all)
Both (make binary) and (make binary-static) compile the code and create
a skopeo binary, so (make all) should only depend on one of them.

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2016-11-23 16:57:37 +01:00
Antonio Murdaca
fcdf9c1b91 Merge pull request #243 from hustcat/static-branch
Add static compile target in Makefile
2016-11-03 09:00:26 +01:00
fightingdu
12cc3a9cbf Add static compile target in Makefile.
Install `go-md2man` in Dockerfile.build

Signed-off-by: Ye Yin <eyniy@qq.com>
2016-11-03 15:31:50 +08:00
Antonio Murdaca
bd816574ed Merge pull request #242 from runcom/fix-skopeo-layers
Fix skopeo layers and deprecate it
2016-11-02 17:36:36 +01:00
Antonio Murdaca
b3b322e10b deprecate skopeo layers
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2016-11-02 17:10:25 +01:00
Antonio Murdaca
2c5532746f cmd/skopeo/layers: fix index out of range
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2016-11-02 09:20:43 +01:00
Miloslav Trmač
c70e58e6b5 Merge pull request #224 from aweiteka/default-policy
Add insecureAcceptAnything to default docker-daemon transport
2016-10-31 20:18:30 +01:00
Aaron Weitekamp
879dbc3757 Add insecureAcceptAnything to default docker-daemon transport
Signed-off-by: Aaron Weitekamp <aweiteka@redhat.com>
2016-10-31 14:43:35 -04:00
Antonio Murdaca
1f655f3f09 Merge pull request #238 from runcom/integrate-all-the-things-into-master
Pull in schema1 and docker-daemon
2016-10-21 17:13:01 +02:00
Antonio Murdaca
69e08d78ad Pull in schema1 and docker-daemon
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2016-10-21 16:48:39 +02:00
Antonio Murdaca
f4f69742ad Merge pull request #237 from so0k/add-osx-instructions
Add OSX instructions to Readme
2016-10-21 12:22:49 +02:00
Vincent De Smet
066463201a Add OSX instructions to Readme
2016-10-21 17:58:55 +08:00
Antonio Murdaca
5d589d6d54 Merge pull request #233 from runcom/add-defyaml
add sigstore default configuration
2016-10-12 19:19:15 +02:00
Antonio Murdaca
012f89d16b add sigstore default configuration
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2016-10-12 18:58:14 +02:00
Antonio Murdaca
ce42c70d4c Merge pull request #232 from runcom/add-better-errors
vendor containers/image for better registry errors
2016-10-12 15:14:46 +02:00
Antonio Murdaca
a48f7597e3 vendor containers/image for better registry errors
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2016-10-12 14:53:46 +02:00
Antonio Murdaca
d166555fb4 Merge pull request #231 from runcom/add-reg-user-agent
vendor containers/image for DockerRegistryUserAgent
2016-10-11 18:45:42 +02:00
Antonio Murdaca
5721355da7 vendor containers/image for DockerRegistryUserAgent
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2016-10-11 18:23:09 +02:00
Antonio Murdaca
c00868148e Merge pull request #229 from runcom/fix-fork-docker-reference
vendor containers/image with docker/docker/reference forked
2016-10-11 18:04:50 +02:00
Antonio Murdaca
7f757cd253 vendor containers/image with docker/docker/reference forked
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2016-10-11 17:42:19 +02:00
Antonio Murdaca
a720c22303 Merge pull request #230 from mtrmac/image-refactor
Refactor c/i/image
2016-10-11 16:13:42 +02:00
Miloslav Trmač
bd992e3872 Vendor after merging mtrmac/image:image-refactor
… and update for API changes
2016-10-11 15:53:20 +02:00
Antonio Murdaca
5207447327 Merge pull request #228 from runcom/fix-authConfig
vendor containers/image for DockerAuthConfig
2016-10-11 12:36:57 +02:00
Antonio Murdaca
f957e894e6 vendor containers/image for DockerAuthConfig
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2016-10-11 12:12:45 +02:00
Antonio Murdaca
0eb841ec8b Merge pull request #226 from runcom/fix-copy-test-manlist
vendor containers/image
2016-10-10 20:06:12 +02:00
Antonio Murdaca
dc1e560d4e vendor containers/image
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2016-10-10 19:46:10 +02:00
Antonio Murdaca
f69a78fa0b Merge pull request #221 from runcom/fix-oci-image-spec
update opencontainers/image-spec
2016-10-01 20:31:42 +02:00
Antonio Murdaca
efb47cf374 update opencontainers/image-spec
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2016-10-01 20:07:43 +02:00
Antonio Murdaca
507d09876d Merge pull request #219 from runcom/bump-containers-image-1
Bump v0.1.16
2016-09-27 21:25:21 +02:00
Antonio Murdaca
34fe924aff bump to v0.1.17-dev
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2016-09-27 21:00:11 +02:00
717 changed files with 64178 additions and 45301 deletions

View File

@@ -8,7 +8,7 @@ that we follow.
* [Reporting Issues](#reporting-issues)
* [Submitting Pull Requests](#submitting-pull-requests)
* [Communications](#communications)
* [Becomign a Maintainer](#becoming-a-maintainer)
* [Becoming a Maintainer](#becoming-a-maintainer)
## Reporting Issues

View File

@@ -1,6 +1,9 @@
FROM fedora
RUN dnf -y update && dnf install -y make git golang golang-github-cpuguy83-go-md2man \
# storage deps
btrfs-progs-devel \
device-mapper-devel \
# gpgme bindings deps
libassuan-devel gpgme-devel \
gnupg \

View File

@@ -1,5 +1,5 @@
FROM ubuntu:16.04
RUN apt-get update
RUN apt-get install -y golang git-core libgpgme11-dev
RUN apt-get install -y golang btrfs-tools git-core libdevmapper-dev libgpgme11-dev go-md2man
ENV GOPATH=/
WORKDIR /src/github.com/projectatomic/skopeo

View File

@@ -7,8 +7,9 @@ INSTALLDIR=${PREFIX}/bin
MANINSTALLDIR=${PREFIX}/share/man
CONTAINERSSYSCONFIGDIR=${DESTDIR}/etc/containers
REGISTRIESDDIR=${CONTAINERSSYSCONFIGDIR}/registries.d
SIGSTOREDIR=${DESTDIR}/var/lib/atomic/sigstore
BASHINSTALLDIR=${PREFIX}/share/bash-completion/completions
GO_MD2MAN ?= /usr/bin/go-md2man
GO_MD2MAN ?= go-md2man
ifeq ($(DEBUG), 1)
override GOGCFLAGS += -N -l
@@ -31,6 +32,11 @@ GIT_COMMIT := $(shell git rev-parse HEAD 2> /dev/null || true)
MANPAGES_MD = $(wildcard docs/*.md)
BTRFS_BUILD_TAG = $(shell hack/btrfs_tag.sh)
LIBDM_BUILD_TAG = $(shell hack/libdm_tag.sh)
LOCAL_BUILD_TAGS = $(BTRFS_BUILD_TAG) $(LIBDM_BUILD_TAG)
BUILDTAGS += $(LOCAL_BUILD_TAGS)
# make all DEBUG=1
# Note: Uses the -N -l go compiler options to disable compiler optimizations
# and inlining. Using these build options allows you to subsequently
@@ -44,10 +50,17 @@ binary: cmd/skopeo
docker run --rm --security-opt label:disable -v $$(pwd):/src/github.com/projectatomic/skopeo \
skopeobuildimage make binary-local $(if $(DEBUG),DEBUG=$(DEBUG))
binary-static: cmd/skopeo
docker build ${DOCKER_BUILD_ARGS} -f Dockerfile.build -t skopeobuildimage .
docker run --rm --security-opt label:disable -v $$(pwd):/src/github.com/projectatomic/skopeo \
skopeobuildimage make binary-local-static $(if $(DEBUG),DEBUG=$(DEBUG))
# Build w/o using Docker containers
binary-local:
go build -ldflags "-X main.gitCommit=${GIT_COMMIT}" -gcflags "$(GOGCFLAGS)" -o skopeo ./cmd/skopeo
go build -ldflags "-X main.gitCommit=${GIT_COMMIT}" -gcflags "$(GOGCFLAGS)" -tags "$(BUILDTAGS)" -o skopeo ./cmd/skopeo
binary-local-static:
go build -ldflags "-extldflags \"-static\" -X main.gitCommit=${GIT_COMMIT}" -gcflags "$(GOGCFLAGS)" -tags "$(BUILDTAGS)" -o skopeo ./cmd/skopeo
build-container:
docker build ${DOCKER_BUILD_ARGS} -t "$(DOCKER_IMAGE)" .
@@ -62,8 +75,9 @@ clean:
rm -f skopeo docs/*.1
install: install-binary install-docs install-completions
install -d -m 755 ${SIGSTOREDIR}
install -D -m 644 default-policy.json ${CONTAINERSSYSCONFIGDIR}/policy.json
install -d -m 755 ${REGISTRIESDDIR}
install -D -m 644 default.yaml ${REGISTRIESDDIR}/default.yaml
install-binary: ./skopeo
install -D -m 755 skopeo ${INSTALLDIR}/skopeo
@@ -72,7 +86,7 @@ install-docs: docs/skopeo.1
install -D -m 644 docs/skopeo.1 ${MANINSTALLDIR}/man1/skopeo.1
install-completions:
install -m 644 -D hack/make/bash_autocomplete ${BASHINSTALLDIR}/skopeo
install -m 644 -D completions/bash/skopeo ${BASHINSTALLDIR}/skopeo
shell: build-container
$(DOCKER_RUN_DOCKER) bash
@@ -97,4 +111,4 @@ validate-local:
hack/make.sh validate-git-marks validate-gofmt validate-lint validate-vet
test-unit-local:
go test $$(go list -e ./... | grep -v '^github\.com/projectatomic/skopeo/\(integration\|vendor/.*\)$$')
go test -tags "$(BUILDTAGS)" $$(go list -tags "$(BUILDTAGS)" -e ./... | grep -v '^github\.com/projectatomic/skopeo/\(integration\|vendor/.*\)$$')
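
Taken together, these Makefile changes wire the storage build tags into every build and test target. A minimal sketch of the resulting invocations, assuming nothing beyond the targets and scriptlets shown above:
```sh
# Containerized static build (Dockerfile.build now installs the
# btrfs-progs-devel and device-mapper-devel build dependencies):
make binary-static

# Local build with optimizations and inlining disabled (-N -l):
make binary-local DEBUG=1

# What the binary-local recipe effectively runs, with BUILDTAGS
# assembled from the hack/ scriptlets:
go build -tags "$(hack/btrfs_tag.sh) $(hack/libdm_tag.sh)" -o skopeo ./cmd/skopeo
```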

View File

@@ -49,11 +49,12 @@ Copying images
-
`skopeo` can copy container images between various storage mechanisms,
e.g. Docker registries (including the Docker Hub), the Atomic Registry,
and local directories:
local directories, and local OCI-layout directories:
```sh
$ skopeo copy docker://busybox:1-glibc atomic:myns/unsigned:streaming
$ skopeo copy docker://busybox:latest dir:existingemptydirectory
$ skopeo copy docker://busybox:latest oci:busybox_ocilayout
```
Deleting images
@@ -65,14 +66,10 @@ $ skopeo delete docker://localhost:5000/imagename:latest
Private registries with authentication
-
When interacting with private registries, `skopeo` first looks for the Docker's cli config file (usually located at `$HOME/.docker/config.json`) to get the credentials needed to authenticate. When the file isn't available it falls back looking for `--username` and `--password` flags. The ultimate fallback, as Docker does, is to provide an empty authentication when interacting with those registries.
When interacting with private registries, `skopeo` first looks for the `--creds` (for `skopeo inspect|delete`) or `--src-creds|--dest-creds` (for `skopeo copy`) flags. If those aren't provided, it looks for Docker's CLI config file (usually located at `$HOME/.docker/config.json`) to get the credentials needed to authenticate. The ultimate fallback, as with Docker, is to send empty authentication when interacting with those registries.
Examples:
```sh
# on my system
$ skopeo --help | grep docker-cfg
--docker-cfg "/home/runcom/.docker" Docker's cli config for auth
$ cat /home/runcom/.docker/config.json
{
"auths": {
@@ -88,16 +85,22 @@ $ skopeo inspect docker://myregistrydomain.com:5000/busybox
{"Tag":"latest","Digest":"sha256:473bb2189d7b913ed7187a33d11e743fdc2f88931122a44d91a301b64419f092","RepoTags":["latest"],"Comment":"","Created":"2016-01-15T18:06:41.282540103Z","ContainerConfig":{"Hostname":"aded96b43f48","Domainname":"","User":"","AttachStdin":false,"AttachStdout":false,"AttachStderr":false,"Tty":false,"OpenStdin":false,"StdinOnce":false,"Env":null,"Cmd":["/bin/sh","-c","#(nop) CMD [\"sh\"]"],"Image":"9e77fef7a1c9f989988c06620dabc4020c607885b959a2cbd7c2283c91da3e33","Volumes":null,"WorkingDir":"","Entrypoint":null,"OnBuild":null,"Labels":null},"DockerVersion":"1.8.3","Author":"","Config":{"Hostname":"aded96b43f48","Domainname":"","User":"","AttachStdin":false,"AttachStdout":false,"AttachStderr":false,"Tty":false,"OpenStdin":false,"StdinOnce":false,"Env":null,"Cmd":["sh"],"Image":"9e77fef7a1c9f989988c06620dabc4020c607885b959a2cbd7c2283c91da3e33","Volumes":null,"WorkingDir":"","Entrypoint":null,"OnBuild":null,"Labels":null},"Architecture":"amd64","Os":"linux"}
# let's try now to fake a non existent Docker's config file
$ skopeo --docker-cfg="" inspect docker://myregistrydomain.com:5000/busybox
FATA[0000] Get https://myregistrydomain.com:5000/v2/busybox/manifests/latest: no basic auth credentials
$ cat /home/runcom/.docker/config.json
{}
# passing --username and --password - we can see that everything goes fine
$ skopeo --docker-cfg="" --username=testuser --password=testpassword inspect docker://myregistrydomain.com:5000/busybox
$ skopeo inspect docker://myregistrydomain.com:5000/busybox
FATA[0000] unauthorized: authentication required
# passing --creds - we can see that everything goes fine
$ skopeo inspect --creds=testuser:testpassword docker://myregistrydomain.com:5000/busybox
{"Tag":"latest","Digest":"sha256:473bb2189d7b913ed7187a33d11e743fdc2f88931122a44d91a301b64419f092","RepoTags":["latest"],"Comment":"","Created":"2016-01-15T18:06:41.282540103Z","ContainerConfig":{"Hostname":"aded96b43f48","Domainname":"","User":"","AttachStdin":false,"AttachStdout":false,"AttachStderr":false,"Tty":false,"OpenStdin":false,"StdinOnce":false,"Env":null,"Cmd":["/bin/sh","-c","#(nop) CMD [\"sh\"]"],"Image":"9e77fef7a1c9f989988c06620dabc4020c607885b959a2cbd7c2283c91da3e33","Volumes":null,"WorkingDir":"","Entrypoint":null,"OnBuild":null,"Labels":null},"DockerVersion":"1.8.3","Author":"","Config":{"Hostname":"aded96b43f48","Domainname":"","User":"","AttachStdin":false,"AttachStdout":false,"AttachStderr":false,"Tty":false,"OpenStdin":false,"StdinOnce":false,"Env":null,"Cmd":["sh"],"Image":"9e77fef7a1c9f989988c06620dabc4020c607885b959a2cbd7c2283c91da3e33","Volumes":null,"WorkingDir":"","Entrypoint":null,"OnBuild":null,"Labels":null},"Architecture":"amd64","Os":"linux"}
# skopeo copy example:
$ skopeo copy --src-creds=testuser:testpassword docker://myregistrydomain.com:5000/private oci:local_oci_image
```
If your cli config is found but it doesn't contain the necessary credentials for the queried registry
you'll get an error. You can fix this by either logging in (via `docker login`) or providing `--username`
and `--password`.
you'll get an error. You can fix this by either logging in (via `docker login`) or providing `--creds` or `--src-creds|--dest-creds`.
Building
-
To build the manual you will need go-md2man.
@@ -110,7 +113,13 @@ $ git clone https://github.com/projectatomic/skopeo $GOPATH/src/github.com/proje
$ cd $GOPATH/src/github.com/projectatomic/skopeo && make all
```
You may need to install additional development packages: gpgme-devel and libassuan-devel
To build locally on OSX:
```sh
$ brew install gpgme
$ make binary-local
```
You may need to install additional development packages: `gpgme-devel` and `libassuan-devel`
```sh
$ dnf install gpgme-devel libassuan-devel
```
@@ -137,6 +146,28 @@ NOT TODO
-
- provide a _format_ flag - just use the awesome [jq](https://stedolan.github.io/jq/)
CONTRIBUTING
-
### Dependency management
`skopeo` uses [`vndr`](https://github.com/LK4D4/vndr) for dependency management.
In order to add a new dependency to this project:
- add a new line to `vendor.conf` according to `vndr` rules (e.g. `github.com/pkg/errors master`)
- run `vndr github.com/pkg/errors`
In order to update an existing dependency:
- update the relevant dependency line in `vendor.conf`
- run `vndr github.com/pkg/errors`
In order to test that new PRs from [containers/image](https://github.com/containers/image) don't break `skopeo` (see the sketch after this list):
- update `vendor.conf`. Find out the `containers/image` dependency; update it to vendor from your own branch and your own repository fork (e.g. `github.com/containers/image my-branch https://github.com/runcom/image`)
- run `vndr github.com/containers/image`
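
A minimal sketch of the workflow described in the three list items above, using `github.com/pkg/errors` and the `runcom/image` fork from the examples:
```sh
# Add or update a dependency: edit vendor.conf first, e.g.
#   github.com/pkg/errors master
# then re-vendor just that one dependency:
vndr github.com/pkg/errors

# Test an unmerged containers/image PR: point vendor.conf at a fork, e.g.
#   github.com/containers/image my-branch https://github.com/runcom/image
vndr github.com/containers/image
```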
License
-
skopeo is licensed under the Apache License, Version 2.0. See

View File

@@ -7,9 +7,25 @@ import (
"github.com/containers/image/copy"
"github.com/containers/image/transports"
"github.com/containers/image/types"
"github.com/urfave/cli"
)
// contextsFromGlobalOptions returns source and destination types.SystemContext depending on c.
func contextsFromGlobalOptions(c *cli.Context) (*types.SystemContext, *types.SystemContext, error) {
sourceCtx, err := contextFromGlobalOptions(c, "src-")
if err != nil {
return nil, nil, err
}
destinationCtx, err := contextFromGlobalOptions(c, "dest-")
if err != nil {
return nil, nil, err
}
return sourceCtx, destinationCtx, nil
}
func copyHandler(context *cli.Context) error {
if len(context.Args()) != 2 {
return errors.New("Usage: copy source destination")
@@ -32,10 +48,17 @@ func copyHandler(context *cli.Context) error {
signBy := context.String("sign-by")
removeSignatures := context.Bool("remove-signatures")
return copy.Image(contextFromGlobalOptions(context), policyContext, destRef, srcRef, &copy.Options{
sourceCtx, destinationCtx, err := contextsFromGlobalOptions(context)
if err != nil {
return err
}
return copy.Image(policyContext, destRef, srcRef, &copy.Options{
RemoveSignatures: removeSignatures,
SignBy: signBy,
ReportWriter: os.Stdout,
SourceCtx: sourceCtx,
DestinationCtx: destinationCtx,
})
}
@@ -54,5 +77,33 @@ var copyCmd = cli.Command{
Name: "sign-by",
Usage: "Sign the image using a GPG key with the specified `FINGERPRINT`",
},
cli.StringFlag{
Name: "src-creds, screds",
Value: "",
Usage: "Use `USERNAME[:PASSWORD]` for accessing the source registry",
},
cli.StringFlag{
Name: "dest-creds, dcreds",
Value: "",
Usage: "Use `USERNAME[:PASSWORD]` for accessing the destination registry",
},
cli.StringFlag{
Name: "src-cert-dir",
Value: "",
Usage: "use certificates at `PATH` (*.crt, *.cert, *.key) to connect to the source registry",
},
cli.BoolTFlag{
Name: "src-tls-verify",
Usage: "require HTTPS and verify certificates when talking to the docker source registry (defaults to true)",
},
cli.StringFlag{
Name: "dest-cert-dir",
Value: "",
Usage: "use certificates at `PATH` (*.crt, *.cert, *.key) to connect to the destination registry",
},
cli.BoolTFlag{
Name: "dest-tls-verify",
Usage: "require HTTPS and verify certificates when talking to the docker destination registry (defaults to true)",
},
},
}
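
With the per-command flags above, each side of a copy gets its own credentials and TLS settings. A usage sketch; the registry hosts and image names are hypothetical:
```sh
skopeo copy \
    --src-creds=testuser:testpassword --src-tls-verify=false \
    --dest-creds=otheruser:otherpassword --dest-tls-verify=false \
    docker://src-registry.example.com/myimage:latest \
    docker://dest-registry.example.com/myimage:latest
```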

View File

@@ -18,7 +18,11 @@ func deleteHandler(context *cli.Context) error {
return fmt.Errorf("Invalid source name %s: %v", context.Args()[0], err)
}
if err := ref.DeleteImage(contextFromGlobalOptions(context)); err != nil {
ctx, err := contextFromGlobalOptions(context, "")
if err != nil {
return err
}
if err := ref.DeleteImage(ctx); err != nil {
return err
}
return nil
@@ -29,4 +33,20 @@ var deleteCmd = cli.Command{
Usage: "Delete image IMAGE-NAME",
ArgsUsage: "IMAGE-NAME",
Action: deleteHandler,
Flags: []cli.Flag{
cli.StringFlag{
Name: "creds",
Value: "",
Usage: "Use `USERNAME[:PASSWORD]` for accessing the registry",
},
cli.StringFlag{
Name: "cert-dir",
Value: "",
Usage: "use certificates at `PATH` (*.crt, *.cert, *.key) to connect to the registry",
},
cli.BoolTFlag{
Name: "tls-verify",
Usage: "require HTTPS and verify certificates when talking to docker registries (defaults to true)",
},
},
}
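
The same flag family applies to `skopeo delete`; a sketch against a hypothetical local registry:
```sh
skopeo delete --tls-verify=false --creds=testuser:testpassword \
    docker://localhost:5000/imagename:latest
```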

View File

@@ -7,6 +7,7 @@ import (
"github.com/containers/image/docker"
"github.com/containers/image/manifest"
"github.com/opencontainers/go-digest"
"github.com/urfave/cli"
)
@@ -14,7 +15,7 @@ import (
type inspectOutput struct {
Name string `json:",omitempty"`
Tag string `json:",omitempty"`
Digest string
Digest digest.Digest
RepoTags []string
Created time.Time
DockerVersion string
@@ -29,10 +30,24 @@ var inspectCmd = cli.Command{
Usage: "Inspect image IMAGE-NAME",
ArgsUsage: "IMAGE-NAME",
Flags: []cli.Flag{
cli.StringFlag{
Name: "cert-path",
Value: "",
Usage: "use certificates at `PATH` (*.crt, *.cert, *.key) to connect to the registry",
},
cli.BoolTFlag{
Name: "tls-verify",
Usage: "require HTTPS and verify certificates when talking to docker registries (defaults to true)",
},
cli.BoolFlag{
Name: "raw",
Usage: "output raw manifest",
},
cli.StringFlag{
Name: "creds",
Value: "",
Usage: "Use `USERNAME[:PASSWORD]` for accessing the registry",
},
},
Action: func(c *cli.Context) error {
img, err := parseImage(c)

View File

@@ -1,24 +1,32 @@
package main
import (
"errors"
"fmt"
"io/ioutil"
"os"
"strings"
"github.com/containers/image/directory"
"github.com/containers/image/image"
"github.com/containers/image/manifest"
"github.com/containers/image/types"
"github.com/opencontainers/go-digest"
"github.com/urfave/cli"
)
// TODO(runcom): document args and usage
var layersCmd = cli.Command{
Name: "layers",
Usage: "Get layers of IMAGE-NAME",
ArgsUsage: "IMAGE-NAME",
ArgsUsage: "IMAGE-NAME [LAYER...]",
Hidden: true,
Action: func(c *cli.Context) error {
fmt.Fprintln(os.Stderr, `DEPRECATED: skopeo layers is deprecated in favor of skopeo copy`)
if c.NArg() == 0 {
return errors.New("Usage: layers imageReference [layer...]")
}
rawSource, err := parseImageSource(c, c.Args()[0], []string{
// TODO: skopeo layers only support these now
// TODO: skopeo layers only supports these now
// eventually we'll remove this command altogether...
manifest.DockerV2Schema1SignedMediaType,
manifest.DockerV2Schema1MediaType,
@@ -26,26 +34,35 @@ var layersCmd = cli.Command{
if err != nil {
return err
}
src := image.FromSource(rawSource)
src, err := image.FromSource(rawSource)
if err != nil {
rawSource.Close()
return err
}
defer src.Close()
blobDigests := c.Args().Tail()
if len(blobDigests) == 0 {
layers, err := src.LayerInfos()
var blobDigests []digest.Digest
for _, dString := range c.Args().Tail() {
if !strings.HasPrefix(dString, "sha256:") {
dString = "sha256:" + dString
}
d, err := digest.Parse(dString)
if err != nil {
return err
}
seenLayers := map[string]struct{}{}
blobDigests = append(blobDigests, d)
}
if len(blobDigests) == 0 {
layers := src.LayerInfos()
seenLayers := map[digest.Digest]struct{}{}
for _, info := range layers {
if _, ok := seenLayers[info.Digest]; !ok {
blobDigests = append(blobDigests, info.Digest)
seenLayers[info.Digest] = struct{}{}
}
}
configInfo, err := src.ConfigInfo()
if err != nil {
return err
}
configInfo := src.ConfigInfo()
if configInfo.Digest != "" {
blobDigests = append(blobDigests, configInfo.Digest)
}
@@ -66,10 +83,7 @@ var layersCmd = cli.Command{
defer dest.Close()
for _, digest := range blobDigests {
if !strings.HasPrefix(digest, "sha256:") {
digest = "sha256:" + digest
}
r, blobSize, err := rawSource.GetBlob(digest)
r, blobSize, err := rawSource.GetBlob(types.BlobInfo{Digest: digest, Size: -1})
if err != nil {
return err
}
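
The reworked argument parsing means layer digests may now be passed with or without the `sha256:` prefix; both invocations below resolve to the same `digest.Digest` (the digest value is a made-up placeholder):
```sh
# Equivalent: the command prepends "sha256:" when the prefix is missing.
skopeo layers docker://busybox:latest sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef
skopeo layers docker://busybox:latest 0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef
```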

View File

@@ -6,6 +6,7 @@ import (
"github.com/Sirupsen/logrus"
"github.com/containers/image/signature"
"github.com/containers/storage/pkg/reexec"
"github.com/projectatomic/skopeo/version"
"github.com/urfave/cli"
)
@@ -25,31 +26,15 @@ func createApp() *cli.App {
app.Version = version.Version
}
app.Usage = "Various operations with container images and container image registries"
// TODO(runcom)
//app.EnableBashCompletion = true
app.Flags = []cli.Flag{
cli.BoolFlag{
Name: "debug",
Usage: "enable debug output",
},
cli.StringFlag{
Name: "username",
Value: "",
Usage: "use `USERNAME` for accessing the registry",
},
cli.StringFlag{
Name: "password",
Value: "",
Usage: "use `PASSWORD` for accessing the registry",
},
cli.StringFlag{
Name: "cert-path",
Value: "",
Usage: "use certificates at `PATH` (cert.pem, key.pem) to connect to the registry",
},
cli.BoolTFlag{
Name: "tls-verify",
Usage: "require HTTPS and verify certificates when talking to docker registries (defaults to true)",
Name: "tls-verify",
Usage: "require HTTPS and verify certificates when talking to docker registries (defaults to true)",
Hidden: true,
},
cli.StringFlag{
Name: "policy",
@@ -66,6 +51,9 @@ func createApp() *cli.App {
if c.GlobalBool("debug") {
logrus.SetLevel(logrus.DebugLevel)
}
if c.GlobalIsSet("tls-verify") {
logrus.Warn("'--tls-verify' is deprecated, please set this on the specific subcommand")
}
return nil
}
app.Commands = []cli.Command{
@@ -76,11 +64,15 @@ func createApp() *cli.App {
manifestDigestCmd,
standaloneSignCmd,
standaloneVerifyCmd,
untrustedSignatureDumpCmd,
}
return app
}
func main() {
if reexec.Init() {
return
}
app := createApp()
if err := app.Run(os.Args); err != nil {
logrus.Fatal(err)
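
The global `--tls-verify` flag is kept (hidden) purely for backward compatibility: it still works but triggers the warning above. A before/after sketch:
```sh
# Deprecated: global flag, now logs the deprecation warning
skopeo --tls-verify=false inspect docker://localhost:5000/busybox:latest

# Preferred: per-command flag
skopeo inspect --tls-verify=false docker://localhost:5000/busybox:latest
```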

View File

@@ -27,5 +27,5 @@ func TestManifestDigest(t *testing.T) {
// Success
out, err = runSkopeo("manifest-digest", "fixtures/image.manifest.json")
assert.NoError(t, err)
assert.Equal(t, fixturesTestImageManifestDigest+"\n", out)
assert.Equal(t, fixturesTestImageManifestDigest.String()+"\n", out)
}

View File

@@ -1,6 +1,7 @@
package main
import (
"encoding/json"
"errors"
"fmt"
"io/ioutil"
@@ -88,3 +89,40 @@ var standaloneVerifyCmd = cli.Command{
ArgsUsage: "MANIFEST DOCKER-REFERENCE KEY-FINGERPRINT SIGNATURE",
Action: standaloneVerify,
}
func untrustedSignatureDump(context *cli.Context) error {
if len(context.Args()) != 1 {
return errors.New("Usage: skopeo untrusted-signature-dump-without-verification signature")
}
untrustedSignaturePath := context.Args()[0]
untrustedSignature, err := ioutil.ReadFile(untrustedSignaturePath)
if err != nil {
return fmt.Errorf("Error reading untrusted signature from %s: %v", untrustedSignaturePath, err)
}
untrustedInfo, err := signature.GetUntrustedSignatureInformationWithoutVerifying(untrustedSignature)
if err != nil {
return fmt.Errorf("Error decoding untrusted signature: %v", err)
}
untrustedOut, err := json.MarshalIndent(untrustedInfo, "", " ")
if err != nil {
return err
}
fmt.Fprintln(context.App.Writer, string(untrustedOut))
return nil
}
// WARNING: Do not use the contents of this for ANY security decisions,
// and be VERY CAREFUL about showing this information to humans in any way which suggests that these values “are probably” reliable.
// There is NO REASON to expect the values to be correct, or not intentionally misleading
// (including things like “✅ Verified by $authority”)
//
// The subcommand is undocumented, and it may be renamed or entirely disappear in the future.
var untrustedSignatureDumpCmd = cli.Command{
Name: "untrusted-signature-dump-without-verification",
Usage: "Dump contents of a signature WITHOUT VERIFYING IT",
ArgsUsage: "SIGNATURE",
Hidden: true,
Action: untrustedSignatureDump,
}
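
A usage sketch for the new hidden subcommand, matching its ArgsUsage; the output is the indented JSON of signature.UntrustedSignatureInformation, and (per the warning above) none of it should be treated as verified:
```sh
skopeo untrusted-signature-dump-without-verification fixtures/image.signature
```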

View File

@@ -1,20 +1,25 @@
package main
import (
"encoding/json"
"io/ioutil"
"os"
"testing"
"time"
"github.com/containers/image/signature"
"github.com/opencontainers/go-digest"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
const (
// fixturesTestImageManifestDigest is the Docker manifest digest of "image.manifest.json"
fixturesTestImageManifestDigest = "sha256:20bf21ed457b390829cdbeec8795a7bea1626991fda603e0d01b4e7f60427e55"
fixturesTestImageManifestDigest = digest.Digest("sha256:20bf21ed457b390829cdbeec8795a7bea1626991fda603e0d01b4e7f60427e55")
// fixturesTestKeyFingerprint is the fingerprint of the private key.
fixturesTestKeyFingerprint = "1D8230F6CDB6A06716E414C1DB72F2188BB46CC8"
// fixturesTestKeyShortID is the short key ID of the private key.
fixturesTestKeyShortID = "DB72F2188BB46CC8"
)
// Test that results of runSkopeo failed with nothing on stdout, and substring
@@ -54,7 +59,7 @@ func TestStandaloneSign(t *testing.T) {
manifestPath, "" /* empty reference */, fixturesTestKeyFingerprint)
assertTestFailed(t, out, err, "empty signature content")
// Unknown key. (FIXME? The error is 'Error creating signature: End of file")
// Unknown key.
out, err = runSkopeo("standalone-sign", "-o", "/dev/null",
manifestPath, dockerReference, "UNKNOWN GPG FINGERPRINT")
assert.Error(t, err)
@@ -122,5 +127,44 @@ func TestStandaloneVerify(t *testing.T) {
out, err = runSkopeo("standalone-verify", manifestPath,
dockerReference, fixturesTestKeyFingerprint, signaturePath)
assert.NoError(t, err)
assert.Equal(t, "Signature verified, digest "+fixturesTestImageManifestDigest+"\n", out)
assert.Equal(t, "Signature verified, digest "+fixturesTestImageManifestDigest.String()+"\n", out)
}
func TestUntrustedSignatureDump(t *testing.T) {
// Invalid command-line arguments
for _, args := range [][]string{
{},
{"a1", "a2"},
{"a1", "a2", "a3", "a4"},
} {
out, err := runSkopeo(append([]string{"untrusted-signature-dump-without-verification"}, args...)...)
assertTestFailed(t, out, err, "Usage")
}
// Error reading manifest
out, err := runSkopeo("untrusted-signature-dump-without-verification",
"/this/doesnt/exist")
assertTestFailed(t, out, err, "/this/doesnt/exist")
// Error reading signature (input is not a signature)
out, err = runSkopeo("untrusted-signature-dump-without-verification", "fixtures/image.manifest.json")
assertTestFailed(t, out, err, "Error decoding untrusted signature")
// Success
for _, path := range []string{"fixtures/image.signature", "fixtures/corrupt.signature"} {
// Success
out, err = runSkopeo("untrusted-signature-dump-without-verification", path)
require.NoError(t, err)
var info signature.UntrustedSignatureInformation
err := json.Unmarshal([]byte(out), &info)
require.NoError(t, err)
assert.Equal(t, fixturesTestImageManifestDigest, info.UntrustedDockerManifestDigest)
assert.Equal(t, "testing/manifest", info.UntrustedDockerReference)
assert.NotNil(t, info.UntrustedCreatorID)
assert.Equal(t, "atomic ", *info.UntrustedCreatorID)
assert.NotNil(t, info.UntrustedTimestamp)
assert.True(t, time.Unix(1458239713, 0).Equal(*info.UntrustedTimestamp))
assert.Equal(t, fixturesTestKeyShortID, info.UntrustedShortKeyIdentifier)
}
}

View File

@@ -1,22 +1,61 @@
package main
import (
"errors"
"strings"
"github.com/containers/image/transports"
"github.com/containers/image/types"
"github.com/urfave/cli"
)
// contextFromGlobalOptions returns a types.SystemContext depending on c.
func contextFromGlobalOptions(c *cli.Context) *types.SystemContext {
tlsVerify := c.GlobalBoolT("tls-verify")
return &types.SystemContext{
RegistriesDirPath: c.GlobalString("registries.d"),
DockerCertPath: c.GlobalString("cert-path"),
DockerInsecureSkipTLSVerify: !tlsVerify,
func contextFromGlobalOptions(c *cli.Context, flagPrefix string) (*types.SystemContext, error) {
ctx := &types.SystemContext{
RegistriesDirPath: c.GlobalString("registries.d"),
DockerCertPath: c.String(flagPrefix + "cert-dir"),
// DEPRECATED: keep this here for backward compatibility, but override
// it if per-subcommand flags are provided (see below).
DockerInsecureSkipTLSVerify: !c.GlobalBoolT("tls-verify"),
}
if c.IsSet(flagPrefix + "tls-verify") {
ctx.DockerInsecureSkipTLSVerify = !c.BoolT(flagPrefix + "tls-verify")
}
if c.IsSet(flagPrefix + "creds") {
var err error
ctx.DockerAuthConfig, err = getDockerAuth(c.String(flagPrefix + "creds"))
if err != nil {
return nil, err
}
}
return ctx, nil
}
// ParseImage converts image URL-like string to an initialized handler for that image.
func parseCreds(creds string) (string, string, error) {
if creds == "" {
return "", "", errors.New("credentials can't be empty")
}
up := strings.SplitN(creds, ":", 2)
if len(up) == 1 {
return up[0], "", nil
}
if up[0] == "" {
return "", "", errors.New("username can't be empty")
}
return up[0], up[1], nil
}
func getDockerAuth(creds string) (*types.DockerAuthConfig, error) {
username, password, err := parseCreds(creds)
if err != nil {
return nil, err
}
return &types.DockerAuthConfig{
Username: username,
Password: password,
}, nil
}
// parseImage converts image URL-like string to an initialized handler for that image.
// The caller must call .Close() on the returned Image.
func parseImage(c *cli.Context) (types.Image, error) {
imgName := c.Args().First()
@@ -24,7 +63,11 @@ func parseImage(c *cli.Context) (types.Image, error) {
if err != nil {
return nil, err
}
return ref.NewImage(contextFromGlobalOptions(c))
ctx, err := contextFromGlobalOptions(c, "")
if err != nil {
return nil, err
}
return ref.NewImage(ctx)
}
// parseImageSource converts image URL-like string to an ImageSource.
@@ -35,5 +78,9 @@ func parseImageSource(c *cli.Context, name string, requestedManifestMIMETypes []
if err != nil {
return nil, err
}
return ref.NewImageSource(contextFromGlobalOptions(c), requestedManifestMIMETypes)
ctx, err := contextFromGlobalOptions(c, "")
if err != nil {
return nil, err
}
return ref.NewImageSource(ctx, requestedManifestMIMETypes)
}
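
parseCreds above accepts both `USERNAME` and `USERNAME:PASSWORD` forms and rejects an empty username; in CLI terms (account names are examples):
```sh
# Username and password:
skopeo inspect --creds=testuser:testpassword docker://myregistrydomain.com:5000/busybox
# Username only; the password is left empty:
skopeo inspect --creds=testuser docker://myregistrydomain.com:5000/busybox
# Fails: a leading ":" yields "username can't be empty".
skopeo inspect --creds=:testpassword docker://myregistrydomain.com:5000/busybox
```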

completions/bash/skopeo (new file, 157 lines)
View File

@@ -0,0 +1,157 @@
#! /bin/bash
: ${PROG:=$(basename ${BASH_SOURCE})}
_complete_() {
local options_with_args=$1
local boolean_options="$2 -h --help"
case "$prev" in
$options_with_args)
return
;;
esac
case "$cur" in
-*)
COMPREPLY=( $( compgen -W "$boolean_options $options_with_args" -- "$cur" ) )
;;
esac
}
_skopeo_copy() {
local options_with_args="
--sign-by
--src-creds --screds
--src-cert-path
--src-tls-verify
--dest-creds --dcreds
--dest-cert-path
--dest-tls-verify
"
local boolean_options="
--remove-signatures
"
_complete_ "$options_with_args" "$boolean_options"
}
_skopeo_inspect() {
local options_with_args="
--creds
--cert-path
"
local boolean_options="
--raw
--tls-verify
"
_complete_ "$options_with_args" "$boolean_options"
}
_skopeo_standalone_sign() {
local options_with_args="
-o --output
"
local boolean_options="
"
_complete_ "$options_with_args" "$boolean_options"
}
_skopeo_standalone_verify() {
local options_with_args="
"
local boolean_options="
"
_complete_ "$options_with_args" "$boolean_options"
}
_skopeo_manifest_digest() {
local options_with_args="
"
local boolean_options="
"
_complete_ "$options_with_args" "$boolean_options"
}
_skopeo_delete() {
local options_with_args="
--creds
--cert-path
"
local boolean_options="
--tls-verify
"
_complete_ "$options_with_args" "$boolean_options"
}
_skopeo_layers() {
local options_with_args="
--creds
--cert-path
"
local boolean_options="
--tls-verify
"
_complete_ "$options_with_args" "$boolean_options"
}
_skopeo_skopeo() {
local options_with_args="
--policy
--registries.d
"
local boolean_options="
--debug
--version -v
--help -h
"
commands=$( ${COMP_WORDS[@]:0:$COMP_CWORD} --generate-bash-completion )
case "$prev" in
$main_options_with_args_glob )
return
;;
esac
case "$cur" in
-*)
COMPREPLY=( $( compgen -W "$boolean_options $options_with_args" -- "$cur" ) )
;;
*)
COMPREPLY=( $( compgen -W "${commands[*]} help" -- "$cur" ) )
;;
esac
}
_cli_bash_autocomplete() {
local cur opts base
COMPREPLY=()
cur="${COMP_WORDS[COMP_CWORD]}"
COMPREPLY=()
local cur prev words cword
_get_comp_words_by_ref -n : cur prev words cword
local command=${PROG} cpos=0
local counter=1
counter=1
while [ $counter -lt $cword ]; do
case "!${words[$counter]}" in
*)
command=$(echo "${words[$counter]}" | sed 's/-/_/g')
cpos=$counter
(( cpos++ ))
break
;;
esac
(( counter++ ))
done
local completions_func=_skopeo_${command}
declare -F $completions_func >/dev/null && $completions_func
eval "$previous_extglob_setting"
return 0
}
complete -F _cli_bash_autocomplete $PROG
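
To try out the new completions, the script can be sourced directly or installed where bash-completion looks for it; the install line below mirrors the Makefile's install-completions target, assuming PREFIX=/usr:
```sh
# One-off, in the current shell:
source completions/bash/skopeo
# System-wide (what "make install-completions" does with PREFIX=/usr):
install -m 644 -D completions/bash/skopeo /usr/share/bash-completion/completions/skopeo
```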

View File

@@ -3,5 +3,12 @@
{
"type": "insecureAcceptAnything"
}
]
}
],
"transports":
{
"docker-daemon":
{
"": [{"type":"insecureAcceptAnything"}]
}
}
}

default.yaml (new file, 26 lines)
View File

@@ -0,0 +1,26 @@
# This is a default registries.d configuration file. You may
# add to this file or create additional files in registries.d/.
#
# sigstore: indicates a location that is read and write
# sigstore-staging: indicates a location that is only for write
#
# sigstore and sigstore-staging take a value of the following:
# sigstore: {schema}://location
#
# For reading signatures, schema may be http, https, or file.
# For writing signatures, schema may only be file.
# This is the default signature write location for docker registries.
default-docker:
# sigstore: file:///var/lib/atomic/sigstore
sigstore-staging: file:///var/lib/atomic/sigstore
# The 'docker' indicator here is the start of the configuration
# for docker registries.
#
# docker:
#
# privateregistry.com:
# sigstore: http://privateregistry.com/sigstore/
# sigstore-staging: /mnt/nfs/privateregistry/sigstore
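
Given this default registries.d configuration, signatures produced for docker registries should land in the staging location. A sketch; the key fingerprint (taken from the test fixtures) and the local registry are examples only:
```sh
skopeo copy --sign-by 1D8230F6CDB6A06716E414C1DB72F2188BB46CC8 \
    --dest-tls-verify=false \
    docker://busybox:latest docker://localhost:5000/busybox:signed
# Per default.yaml, the written signature is staged under
# file:///var/lib/atomic/sigstore
```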

View File

@@ -37,18 +37,10 @@ Most commands refer to container images, using a _transport_`:`_details_ format.
**--debug** enable debug output
**--username** _username_ for accessing the registry
**--password** _password_ for accessing the registry
**--cert-path** _path_ Use certificates at _path_ (cert.pem, key.pem) to connect to the registry
**--policy** _path-to-policy_ Path to a policy.json file to use for verifying signatures and deciding whether an image is trusted, overriding the default trust policy file.
**--registries.d** _dir_ use registry configuration files in _dir_ (e.g. for docker signature storage), overriding the default path.
**--tls-verify** _bool-value_ Require HTTPS and verify certificates when talking to docker registries (defaults to true)
**--help**|**-h** Show help
**--version**|**-v** print the version number
@@ -70,6 +62,18 @@ Uses the system's trust policy to validate images, rejects images not trusted by
**--sign-by=**_key-id_ add a signature using that key ID for an image name corresponding to _destination-image_
**--src-creds** _username[:password]_ for accessing the source registry
**--dest-creds** _username[:password]_ for accessing the destination registry
**--src-cert-dir** _path_ Use certificates at _path_ (*.crt, *.cert, *.key) to connect to the source registry
**--src-tls-verify** _bool-value_ Require HTTPS and verify certificates when talking to docker source registry (defaults to true)
**--dest-cert-dir** _path_ Use certificates at _path_ (*.crt, *.cert, *.key) to connect to the destination registry
**--dest-tls-verify** _bool-value_ Require HTTPS and verify certificates when talking to docker destination registry (defaults to true)
Existing signatures, if any, are preserved as well.
## skopeo delete
@@ -81,6 +85,12 @@ Mark _image-name_ for deletion. To release the allocated disk space, you need t
$ docker exec -it registry bin/registry garbage-collect /etc/docker/registry/config.yml
```
**--creds** _username[:password]_ for accessing the registry
**--cert-dir** _path_ Use certificates at _path_ (*.crt, *.cert, *.key) to connect to the registry
**--tls-verify** _bool-value_ Require HTTPS and verify certificates when talking to docker registries (defaults to true)
Additionally, the registry must allow deletions by setting `REGISTRY_STORAGE_DELETE_ENABLED=true` for the registry daemon.
## skopeo inspect
@@ -92,12 +102,11 @@ Return low-level information about _image-name_ in a registry
_image-name_ name of image to retrieve information about
## skopeo layers
**skopeo layers** _image-name_
**--creds** _username[:password]_ for accessing the registry
Get image layers of _image-name_
**--cert-dir** _path_ Use certificates at _path_ (*.crt, *.cert, *.key) to connect to the registry
_image-name_ name of the image to retrieve layers
**--tls-verify** _bool-value_ Require HTTPS and verify certificates when talking to docker registries (defaults to true)
## skopeo manifest-digest
**skopeo manifest-digest** _manifest-file_

View File

@@ -1,115 +0,0 @@
#!/usr/bin/env bash
PROJECT=github.com/projectatomic/skopeo
# Downloads dependencies into vendor/ directory
mkdir -p vendor
original_GOPATH=$GOPATH
export GOPATH="${PWD}/vendor:$GOPATH"
find="/usr/bin/find"
clone() {
local delete_vendor=true
if [ "x$1" = x--keep-vendor ]; then
delete_vendor=false
shift
fi
local vcs="$1"
local pkg="$2"
local rev="$3"
local url="$4"
: ${url:=https://$pkg}
local target="vendor/src/$pkg"
echo -n "$pkg @ $rev: "
if [ -d "$target" ]; then
echo -n 'rm old, '
rm -rf "$target"
fi
echo -n 'clone, '
case "$vcs" in
git)
git clone --quiet --no-checkout "$url" "$target"
( cd "$target" && git checkout --quiet "$rev" && git reset --quiet --hard "$rev" -- )
;;
hg)
hg clone --quiet --updaterev "$rev" "$url" "$target"
;;
esac
echo -n 'rm VCS, '
( cd "$target" && rm -rf .{git,hg} )
if $delete_vendor; then
echo -n 'rm vendor, '
( cd "$target" && rm -rf vendor Godeps/_workspace )
fi
echo done
}
clean() {
# If $GOPATH starts with ./vendor, (go list) shows the short-form import paths for packages inside ./vendor.
# So, reset GOPATH to the external value (without ./vendor), so that the grep -v works.
local packages=($(GOPATH=$original_GOPATH go list -e ./... | grep -v "^${PROJECT}/vendor"))
local platforms=( linux/amd64 linux/386 )
local buildTags=( )
echo
echo -n 'collecting import graph, '
local IFS=$'\n'
local imports=( $(
for platform in "${platforms[@]}"; do
export GOOS="${platform%/*}";
export GOARCH="${platform##*/}";
go list -e -tags "$buildTags" -f '{{join .Deps "\n"}}' "${packages[@]}"
go list -e -tags "$buildTags" -f '{{join .TestImports "\n"}}' "${packages[@]}"
done | grep -vE "^${PROJECT}" | sort -u
) )
# .TestImports does not include indirect dependencies, so do one more iteration.
imports+=( $(
go list -e -f '{{join .Deps "\n"}}' "${imports[@]}" | grep -vE "^${PROJECT}" | sort -u
) )
imports=( $(go list -e -f '{{if not .Standard}}{{.ImportPath}}{{end}}' "${imports[@]}") )
unset IFS
echo -n 'pruning unused packages, '
findArgs=(
# This directory contains only .c and .h files which are necessary
# -path vendor/src/github.com/mattn/go-sqlite3/code
)
for import in "${imports[@]}"; do
[ "${#findArgs[@]}" -eq 0 ] || findArgs+=( -or )
findArgs+=( -path "vendor/src/$import" )
done
local IFS=$'\n'
local prune=( $($find vendor -depth -type d -not '(' "${findArgs[@]}" ')') )
unset IFS
for dir in "${prune[@]}"; do
$find "$dir" -maxdepth 1 -not -type d -not -name 'LICENSE*' -not -name 'COPYING*' -exec rm -v -f '{}' ';'
rmdir "$dir" 2>/dev/null || true
done
echo -n 'pruning unused files, '
$find vendor -type f -name '*_test.go' -exec rm -v '{}' ';'
echo done
}
# Fix up hard-coded imports that refer to Godeps paths so they'll work with our vendoring
fix_rewritten_imports () {
local pkg="$1"
local remove="${pkg}/Godeps/_workspace/src/"
local target="vendor/src/$pkg"
echo "$pkg: fixing rewritten imports"
$find "$target" -name \*.go -exec sed -i -e "s|\"${remove}|\"|g" {} \;
}

hack/btrfs_tag.sh (new executable file, 7 lines)
View File

@@ -0,0 +1,7 @@
#!/bin/bash
cc -E - > /dev/null 2> /dev/null << EOF
#include <btrfs/version.h>
EOF
if test $? -ne 0 ; then
echo btrfs_noversion
fi

hack/libdm_tag.sh (new executable file, 14 lines)
View File

@@ -0,0 +1,14 @@
#!/bin/bash
tmpdir="$PWD/tmp.$RANDOM"
mkdir -p "$tmpdir"
trap 'rm -fr "$tmpdir"' EXIT
cc -c -o "$tmpdir"/libdm_tag.o -x c - > /dev/null 2> /dev/null << EOF
#include <libdevmapper.h>
int main() {
struct dm_task *task;
return 0;
}
EOF
if test $? -ne 0 ; then
echo libdm_no_deferred_remove
fi
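
Both scriptlets follow the same pattern: compile a tiny probe against the relevant development headers, and emit a build tag only when the probe fails, which the Makefile then folds into $(BUILDTAGS). They can be run directly to see what a given machine will get:
```sh
# Prints "btrfs_noversion" if <btrfs/version.h> is missing, else nothing:
hack/btrfs_tag.sh
# Prints "libdm_no_deferred_remove" if the libdevmapper probe fails to compile:
hack/libdm_tag.sh
```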

View File

@@ -1,14 +0,0 @@
#! /bin/bash
: ${PROG:=$(basename ${BASH_SOURCE})}
_cli_bash_autocomplete() {
local cur opts base
COMPREPLY=()
cur="${COMP_WORDS[COMP_CWORD]}"
opts=$( ${COMP_WORDS[@]:0:$COMP_CWORD} --generate-bash-completion )
COMPREPLY=( $(compgen -W "${opts}" -- ${cur}) )
return 0
}
complete -F _cli_bash_autocomplete $PROG

hack/vendor.sh (mode change: executable → normal file, 48 lines)
View File

@@ -1,39 +1,15 @@
#!/usr/bin/env bash
#!/bin/bash
# This file is just wrapper around vndr (github.com/LK4D4/vndr) tool.
# For updating dependencies you should change `vendor.conf` file in root of the
# project. Please refer to https://github.com/LK4D4/vndr/blob/master/README.md for
# vndr usage.
set -e
cd "$(dirname "$BASH_SOURCE")/.."
rm -rf vendor/
source 'hack/.vendor-helpers.sh'
if ! hash vndr; then
echo "Please install vndr with \"go get github.com/LK4D4/vndr\" and put it in your \$GOPATH"
exit 1
fi
clone git github.com/urfave/cli v1.17.0
clone git github.com/containers/image master
clone git gopkg.in/cheggaaa/pb.v1 ad4efe000aa550bb54918c06ebbadc0ff17687b9 https://github.com/cheggaaa/pb
clone git github.com/Sirupsen/logrus v0.10.0
clone git github.com/go-check/check v1
clone git github.com/stretchr/testify v1.1.3
clone git github.com/davecgh/go-spew master
clone git github.com/pmezard/go-difflib master
# docker deps from https://github.com/docker/docker/blob/v1.11.2/hack/vendor.sh
clone git github.com/docker/docker v1.11.2
clone git github.com/docker/engine-api v0.3.3
clone git github.com/docker/go-connections v0.2.0
clone git github.com/vbatts/tar-split v0.9.11
clone git github.com/gorilla/context 14f550f51a
clone git github.com/docker/go-units 651fc226e7441360384da338d0fd37f2440ffbe3
clone git golang.org/x/net master https://github.com/golang/net.git
# end docker deps
clone git github.com/docker/distribution master
clone git github.com/docker/libtrust master
clone git github.com/opencontainers/runc master
clone git github.com/opencontainers/image-spec master
clone git github.com/mtrmac/gpgme master
# openshift/origin' k8s dependencies as of OpenShift v1.1.5
clone git github.com/golang/glog 44145f04b68cf362d9c4df2182967c2275eaefed
clone git k8s.io/kubernetes 4a3f9c5b19c7ff804cbc1bf37a15c044ca5d2353 https://github.com/openshift/kubernetes
clone git github.com/ghodss/yaml 73d445a93680fa1a78ae23a5839bad48f32ba1ee
clone git gopkg.in/yaml.v2 d466437aa4adc35830964cffc5b5f262c63ddcb4
clone git github.com/imdario/mergo 6633656539c1639d9d78127b7d47c622b5d7b6dc
clean
mv vendor/src/* vendor/
vndr "$@"

View File

@@ -75,22 +75,14 @@ func (s *SkopeoSuite) TestVersion(c *check.C) {
assertSkopeoSucceeds(c, wanted, "--version")
}
const (
errFetchManifestRegexp = ".*error fetching manifest: status code: %s.*"
)
func (s *SkopeoSuite) TestCanAuthToPrivateRegistryV2WithoutDockerCfg(c *check.C) {
// TODO(runcom)
c.Skip("we need to restore --username --password flags!")
wanted := fmt.Sprintf(errFetchManifestRegexp, "401")
assertSkopeoFails(c, wanted, "--docker-cfg=''", "--username="+s.regV2WithAuth.username, "--password="+s.regV2WithAuth.password, "inspect", fmt.Sprintf("docker://%s/busybox:latest", s.regV2WithAuth.url))
wanted := ".*manifest unknown: manifest unknown.*"
assertSkopeoFails(c, wanted, "--tls-verify=false", "inspect", "--creds="+s.regV2WithAuth.username+":"+s.regV2WithAuth.password, fmt.Sprintf("docker://%s/busybox:latest", s.regV2WithAuth.url))
}
func (s *SkopeoSuite) TestNeedAuthToPrivateRegistryV2WithoutDockerCfg(c *check.C) {
// TODO(runcom): mock the empty docker-cfg by removing it in the test itself (?)
c.Skip("mock empty docker config")
wanted := fmt.Sprintf(errFetchManifestRegexp, "401")
assertSkopeoFails(c, wanted, "--docker-cfg=''", "inspect", fmt.Sprintf("docker://%s/busybox:latest", s.regV2WithAuth.url))
wanted := ".*unauthorized: authentication required.*"
assertSkopeoFails(c, wanted, "--tls-verify=false", "inspect", fmt.Sprintf("docker://%s/busybox:latest", s.regV2WithAuth.url))
}
// TODO(runcom): as soon as we can push to registries ensure you can inspect here
@@ -98,8 +90,8 @@ func (s *SkopeoSuite) TestNeedAuthToPrivateRegistryV2WithoutDockerCfg(c *check.C
func (s *SkopeoSuite) TestNoNeedAuthToPrivateRegistryV2ImageNotFound(c *check.C) {
out, err := exec.Command(skopeoBinary, "--tls-verify=false", "inspect", fmt.Sprintf("docker://%s/busybox:latest", s.regV2.url)).CombinedOutput()
c.Assert(err, check.NotNil, check.Commentf(string(out)))
wanted := fmt.Sprintf(errFetchManifestRegexp, "404")
wanted := ".*manifest unknown.*"
c.Assert(string(out), check.Matches, "(?s)"+wanted) // (?s) : '.' will also match newlines
wanted = fmt.Sprintf(errFetchManifestRegexp, "401")
wanted = ".*unauthorized: authentication required.*"
c.Assert(string(out), check.Not(check.Matches), "(?s)"+wanted) // (?s) : '.' will also match newlines
}

View File

@@ -12,6 +12,7 @@ import (
"github.com/containers/image/manifest"
"github.com/go-check/check"
"github.com/opencontainers/go-digest"
)
func init() {
@@ -96,8 +97,11 @@ func fileFromFixture(c *check.C, inputPath string, edits map[string]string) stri
return path
}
// The most basic (skopeo copy) use:
func (s *CopySuite) TestCopySimple(c *check.C) {
func (s *CopySuite) TestCopyFailsWithManifestList(c *check.C) {
assertSkopeoFails(c, ".*can not copy docker://estesp/busybox:latest: manifest contains multiple images.*", "copy", "docker://estesp/busybox:latest", "dir:somedir")
}
func (s *CopySuite) TestCopySimpleAtomicRegistry(c *check.C) {
dir1, err := ioutil.TempDir("", "copy-1")
c.Assert(err, check.IsNil)
defer os.RemoveAll(dir1)
@@ -107,13 +111,35 @@ func (s *CopySuite) TestCopySimple(c *check.C) {
// FIXME: It would be nice to use one of the local Docker registries instead of needing an Internet connection.
// "pull": docker: → dir:
assertSkopeoSucceeds(c, "", "copy", "docker://estesp/busybox:latest", "dir:"+dir1)
assertSkopeoSucceeds(c, "", "copy", "docker://estesp/busybox:amd64", "dir:"+dir1)
// "push": dir: → atomic:
assertSkopeoSucceeds(c, "", "--tls-verify=false", "--debug", "copy", "dir:"+dir1, "atomic:localhost:5000/myns/unsigned:unsigned")
// The result of pushing and pulling is an unmodified image.
assertSkopeoSucceeds(c, "", "--tls-verify=false", "copy", "atomic:localhost:5000/myns/unsigned:unsigned", "dir:"+dir2)
out := combinedOutputOfCommand(c, "diff", "-urN", dir1, dir2)
c.Assert(out, check.Equals, "")
}
// The most basic (skopeo copy) use:
func (s *CopySuite) TestCopySimple(c *check.C) {
const ourRegistry = "docker://" + v2DockerRegistryURL + "/"
dir1, err := ioutil.TempDir("", "copy-1")
c.Assert(err, check.IsNil)
defer os.RemoveAll(dir1)
dir2, err := ioutil.TempDir("", "copy-2")
c.Assert(err, check.IsNil)
defer os.RemoveAll(dir2)
// FIXME: It would be nice to use one of the local Docker registries instead of needing an Internet connection.
// "pull": docker: → dir:
assertSkopeoSucceeds(c, "", "copy", "docker://busybox", "dir:"+dir1)
// "push": dir: → docker(v2s2):
assertSkopeoSucceeds(c, "", "--tls-verify=false", "--debug", "copy", "dir:"+dir1, ourRegistry+"busybox:unsigned")
// The result of pushing and pulling is an unmodified image.
assertSkopeoSucceeds(c, "", "--tls-verify=false", "copy", ourRegistry+"busybox:unsigned", "dir:"+dir2)
out := combinedOutputOfCommand(c, "diff", "-urN", dir1, dir2)
c.Assert(out, check.Equals, "")
// docker v2s2 -> OCI image layout
// ociDest will be created by oci: if it doesn't exist
@@ -123,8 +149,6 @@ func (s *CopySuite) TestCopySimple(c *check.C) {
assertSkopeoSucceeds(c, "", "copy", "docker://busybox:latest", "oci:"+ociDest)
_, err = os.Stat(ociDest)
c.Assert(err, check.IsNil)
// FIXME: Also check pushing to docker://
}
// Streaming (skopeo copy)
@@ -144,7 +168,7 @@ func (s *CopySuite) TestCopyStreaming(c *check.C) {
assertSkopeoSucceeds(c, "", "--tls-verify=false", "copy", "atomic:localhost:5000/myns/unsigned:streaming", "dir:"+dir2)
// The manifests will have different JWS signatures; so, compare the manifests by digests, which
// strips the signatures, and remove them, comparing the rest file by file.
digests := []string{}
digests := []digest.Digest{}
for _, dir := range []string{dir1, dir2} {
manifestPath := filepath.Join(dir, "manifest.json")
m, err := ioutil.ReadFile(manifestPath)
@@ -373,3 +397,20 @@ func (s *CopySuite) TestCopyDockerSigstore(c *check.C) {
splitSigstoreReadServerHandler = http.FileServer(http.Dir(splitSigstoreStaging))
assertSkopeoSucceeds(c, "", "--tls-verify=false", "--policy", policy, "--registries.d", registriesDir, "copy", ourRegistry+"public/busybox", dirDest)
}
func (s *SkopeoSuite) TestCopySrcWithAuth(c *check.C) {
assertSkopeoSucceeds(c, "", "--tls-verify=false", "copy", "--dest-creds=testuser:testpassword", "docker://busybox", fmt.Sprintf("docker://%s/busybox:latest", s.regV2WithAuth.url))
dir1, err := ioutil.TempDir("", "copy-1")
c.Assert(err, check.IsNil)
defer os.RemoveAll(dir1)
assertSkopeoSucceeds(c, "", "--tls-verify=false", "copy", "--src-creds=testuser:testpassword", fmt.Sprintf("docker://%s/busybox:latest", s.regV2WithAuth.url), "dir:"+dir1)
}
func (s *SkopeoSuite) TestCopyDestWithAuth(c *check.C) {
assertSkopeoSucceeds(c, "", "--tls-verify=false", "copy", "--dest-creds=testuser:testpassword", "docker://busybox", fmt.Sprintf("docker://%s/busybox:latest", s.regV2WithAuth.url))
}
func (s *SkopeoSuite) TestCopySrcAndDestWithAuth(c *check.C) {
assertSkopeoSucceeds(c, "", "--tls-verify=false", "copy", "--dest-creds=testuser:testpassword", "docker://busybox", fmt.Sprintf("docker://%s/busybox:latest", s.regV2WithAuth.url))
assertSkopeoSucceeds(c, "", "--tls-verify=false", "copy", "--src-creds=testuser:testpassword", "--dest-creds=testuser:testpassword", fmt.Sprintf("docker://%s/busybox:latest", s.regV2WithAuth.url), fmt.Sprintf("docker://%s/test:auth", s.regV2WithAuth.url))
}

vendor.conf Normal file
View File

@@ -0,0 +1,35 @@
github.com/urfave/cli v1.17.0
github.com/containers/image master
github.com/opencontainers/go-digest master
gopkg.in/cheggaaa/pb.v1 ad4efe000aa550bb54918c06ebbadc0ff17687b9 https://github.com/cheggaaa/pb
github.com/containers/storage master
github.com/Sirupsen/logrus v0.10.0
github.com/go-check/check v1
github.com/stretchr/testify v1.1.3
github.com/davecgh/go-spew master
github.com/pmezard/go-difflib master
github.com/pkg/errors master
golang.org/x/crypto/openpgp master
# docker deps from https://github.com/docker/docker/blob/v1.11.2/hack/vendor.sh
github.com/docker/docker v1.13.0
github.com/docker/go-connections 4ccf312bf1d35e5dbda654e57a9be4c3f3cd0366
github.com/vbatts/tar-split v0.10.1
github.com/gorilla/context 14f550f51a
github.com/gorilla/mux e444e69cbd
github.com/docker/go-units 8a7beacffa3009a9ac66bad506b18ffdd110cf97
golang.org/x/net master
# end docker deps
github.com/docker/distribution master
github.com/docker/libtrust master
github.com/opencontainers/runc master
github.com/opencontainers/image-spec master
github.com/mtrmac/gpgme master
# openshift/origin's k8s dependencies as of OpenShift v1.1.5
github.com/golang/glog 44145f04b68cf362d9c4df2182967c2275eaefed
k8s.io/client-go master
github.com/ghodss/yaml 73d445a93680fa1a78ae23a5839bad48f32ba1ee
gopkg.in/yaml.v2 d466437aa4adc35830964cffc5b5f262c63ddcb4
github.com/imdario/mergo 6633656539c1639d9d78127b7d47c622b5d7b6dc
# containers/storage's dependencies that aren't already being pulled in
github.com/mistifyio/go-zfs 22c9b32c84eb0d0c6f4043b6e90fc94073de92fa
github.com/pborman/uuid v1.0

View File

@@ -1 +0,0 @@
logrus

View File

@@ -1,9 +0,0 @@
language: go
go:
- 1.3
- 1.4
- 1.5
- tip
install:
- go get -t ./...
script: GOMAXPROCS=4 GORACE="halt_on_error=1" go test -race -v ./...

View File

@@ -1,66 +0,0 @@
# 0.10.0
* feature: Add a test hook (#180)
* feature: `ParseLevel` is now case-insensitive (#326)
* feature: `FieldLogger` interface that generalizes `Logger` and `Entry` (#308)
* performance: avoid re-allocations on `WithFields` (#335)
# 0.9.0
* logrus/text_formatter: don't emit empty msg
* logrus/hooks/airbrake: move out of main repository
* logrus/hooks/sentry: move out of main repository
* logrus/hooks/papertrail: move out of main repository
* logrus/hooks/bugsnag: move out of main repository
* logrus/core: run tests with `-race`
* logrus/core: detect TTY based on `stderr`
* logrus/core: support `WithError` on logger
* logrus/core: Solaris support
# 0.8.7
* logrus/core: fix possible race (#216)
* logrus/doc: small typo fixes and doc improvements
# 0.8.6
* hooks/raven: allow passing an initialized client
# 0.8.5
* logrus/core: revert #208
# 0.8.4
* formatter/text: fix data race (#218)
# 0.8.3
* logrus/core: fix entry log level (#208)
* logrus/core: improve performance of text formatter by 40%
* logrus/core: expose `LevelHooks` type
* logrus/core: add support for DragonflyBSD and NetBSD
* formatter/text: print structs more verbosely
# 0.8.2
* logrus: fix more Fatal family functions
# 0.8.1
* logrus: fix not exiting on `Fatalf` and `Fatalln`
# 0.8.0
* logrus: defaults to stderr instead of stdout
* hooks/sentry: add special field for `*http.Request`
* formatter/text: ignore Windows for colors
# 0.7.3
* formatter/\*: allow configuration of timestamp layout
# 0.7.2
* formatter/text: Add configuration option for time format (#158)

View File

@@ -1,388 +0,0 @@
# Logrus <img src="http://i.imgur.com/hTeVwmJ.png" width="40" height="40" alt=":walrus:" class="emoji" title=":walrus:"/>&nbsp;[![Build Status](https://travis-ci.org/Sirupsen/logrus.svg?branch=master)](https://travis-ci.org/Sirupsen/logrus)&nbsp;[![GoDoc](https://godoc.org/github.com/Sirupsen/logrus?status.svg)](https://godoc.org/github.com/Sirupsen/logrus)
Logrus is a structured logger for Go (golang), completely API compatible with
the standard library logger. [Godoc][godoc]. **Please note the Logrus API is not
yet stable (pre 1.0). Logrus itself is completely stable and has been used in
many large deployments. The core API is unlikely to change much but please
version control your Logrus to make sure you aren't fetching latest `master` on
every build.**
Nicely color-coded in development (when a TTY is attached, otherwise just
plain text):
![Colored](http://i.imgur.com/PY7qMwd.png)
With `log.SetFormatter(&log.JSONFormatter{})`, for easy parsing by logstash
or Splunk:
```json
{"animal":"walrus","level":"info","msg":"A group of walrus emerges from the
ocean","size":10,"time":"2014-03-10 19:57:38.562264131 -0400 EDT"}
{"level":"warning","msg":"The group's number increased tremendously!",
"number":122,"omg":true,"time":"2014-03-10 19:57:38.562471297 -0400 EDT"}
{"animal":"walrus","level":"info","msg":"A giant walrus appears!",
"size":10,"time":"2014-03-10 19:57:38.562500591 -0400 EDT"}
{"animal":"walrus","level":"info","msg":"Tremendously sized cow enters the ocean.",
"size":9,"time":"2014-03-10 19:57:38.562527896 -0400 EDT"}
{"level":"fatal","msg":"The ice breaks!","number":100,"omg":true,
"time":"2014-03-10 19:57:38.562543128 -0400 EDT"}
```
With the default `log.SetFormatter(&log.TextFormatter{})` when a TTY is not
attached, the output is compatible with the
[logfmt](http://godoc.org/github.com/kr/logfmt) format:
```text
time="2015-03-26T01:27:38-04:00" level=debug msg="Started observing beach" animal=walrus number=8
time="2015-03-26T01:27:38-04:00" level=info msg="A group of walrus emerges from the ocean" animal=walrus size=10
time="2015-03-26T01:27:38-04:00" level=warning msg="The group's number increased tremendously!" number=122 omg=true
time="2015-03-26T01:27:38-04:00" level=debug msg="Temperature changes" temperature=-4
time="2015-03-26T01:27:38-04:00" level=panic msg="It's over 9000!" animal=orca size=9009
time="2015-03-26T01:27:38-04:00" level=fatal msg="The ice breaks!" err=&{0x2082280c0 map[animal:orca size:9009] 2015-03-26 01:27:38.441574009 -0400 EDT panic It's over 9000!} number=100 omg=true
exit status 1
```
#### Example
The simplest way to use Logrus is simply the package-level exported logger:
```go
package main
import (
log "github.com/Sirupsen/logrus"
)
func main() {
log.WithFields(log.Fields{
"animal": "walrus",
}).Info("A walrus appears")
}
```
Note that it's completely api-compatible with the stdlib logger, so you can
replace your `log` imports everywhere with `log "github.com/Sirupsen/logrus"`
and you'll now have the flexibility of Logrus. You can customize it all you
want:
```go
package main
import (
"os"
log "github.com/Sirupsen/logrus"
)
func init() {
// Log as JSON instead of the default ASCII formatter.
log.SetFormatter(&log.JSONFormatter{})
// Output to stderr instead of stdout, could also be a file.
log.SetOutput(os.Stderr)
// Only log the warning severity or above.
log.SetLevel(log.WarnLevel)
}
func main() {
log.WithFields(log.Fields{
"animal": "walrus",
"size": 10,
}).Info("A group of walrus emerges from the ocean")
log.WithFields(log.Fields{
"omg": true,
"number": 122,
}).Warn("The group's number increased tremendously!")
log.WithFields(log.Fields{
"omg": true,
"number": 100,
}).Fatal("The ice breaks!")
// A common pattern is to re-use fields between logging statements by re-using
// the logrus.Entry returned from WithFields()
contextLogger := log.WithFields(log.Fields{
"common": "this is a common field",
"other": "I also should be logged always",
})
contextLogger.Info("I'll be logged with common and other field")
contextLogger.Info("Me too")
}
```
For more advanced usage such as logging to multiple locations from the same
application, you can also create an instance of the `logrus` Logger:
```go
package main
import (
"github.com/Sirupsen/logrus"
)
// Create a new instance of the logger. You can have any number of instances.
var log = logrus.New()
func main() {
// The API for setting attributes is a little different than the package level
// exported logger. See Godoc.
log.Out = os.Stderr
log.WithFields(logrus.Fields{
"animal": "walrus",
"size": 10,
}).Info("A group of walrus emerges from the ocean")
}
```
#### Fields
Logrus encourages careful, structured logging through logging fields instead of
long, unparseable error messages. For example, instead of: `log.Fatalf("Failed
to send event %s to topic %s with key %d")`, you should log the much more
discoverable:
```go
log.WithFields(log.Fields{
"event": event,
"topic": topic,
"key": key,
}).Fatal("Failed to send event")
```
We've found this API forces you to think about logging in a way that produces
much more useful logging messages. We've been in countless situations where just
a single added field to a log statement that was already there would've saved us
hours. The `WithFields` call is optional.
In general, with Logrus using any of the `printf`-family functions should be
seen as a hint you should add a field, however, you can still use the
`printf`-family functions with Logrus.
#### Hooks
You can add hooks for logging levels. For example to send errors to an exception
tracking service on `Error`, `Fatal` and `Panic`, info to StatsD or log to
multiple places simultaneously, e.g. syslog.
Logrus comes with [built-in hooks](hooks/). Add those, or your custom hook, in
`init`:
```go
import (
log "github.com/Sirupsen/logrus"
"gopkg.in/gemnasium/logrus-airbrake-hook.v2" // the package is named "aibrake"
logrus_syslog "github.com/Sirupsen/logrus/hooks/syslog"
"log/syslog"
)
func init() {
// Use the Airbrake hook to report errors that have Error severity or above to
// an exception tracker. You can create custom hooks, see the Hooks section.
log.AddHook(airbrake.NewHook(123, "xyz", "production"))
hook, err := logrus_syslog.NewSyslogHook("udp", "localhost:514", syslog.LOG_INFO, "")
if err != nil {
log.Error("Unable to connect to local syslog daemon")
} else {
log.AddHook(hook)
}
}
```
Note: the syslog hook also supports connecting to a local syslog daemon (e.g. "/dev/log", "/var/run/syslog", or "/var/run/log"). For details, please check the [syslog hook README](hooks/syslog/README.md).
| Hook | Description |
| ----- | ----------- |
| [Airbrake](https://github.com/gemnasium/logrus-airbrake-hook) | Send errors to the Airbrake API V3. Uses the official [`gobrake`](https://github.com/airbrake/gobrake) behind the scenes. |
| [Airbrake "legacy"](https://github.com/gemnasium/logrus-airbrake-legacy-hook) | Send errors to an exception tracking service compatible with the Airbrake API V2. Uses [`airbrake-go`](https://github.com/tobi/airbrake-go) behind the scenes. |
| [Papertrail](https://github.com/polds/logrus-papertrail-hook) | Send errors to the [Papertrail](https://papertrailapp.com) hosted logging service via UDP. |
| [Syslog](https://github.com/Sirupsen/logrus/blob/master/hooks/syslog/syslog.go) | Send errors to remote syslog server. Uses standard library `log/syslog` behind the scenes. |
| [Bugsnag](https://github.com/Shopify/logrus-bugsnag/blob/master/bugsnag.go) | Send errors to the Bugsnag exception tracking service. |
| [Sentry](https://github.com/evalphobia/logrus_sentry) | Send errors to the Sentry error logging and aggregation service. |
| [Hiprus](https://github.com/nubo/hiprus) | Send errors to a channel in hipchat. |
| [Logrusly](https://github.com/sebest/logrusly) | Send logs to [Loggly](https://www.loggly.com/) |
| [Slackrus](https://github.com/johntdyer/slackrus) | Hook for Slack chat. |
| [Journalhook](https://github.com/wercker/journalhook) | Hook for logging to `systemd-journald` |
| [Graylog](https://github.com/gemnasium/logrus-graylog-hook) | Hook for logging to [Graylog](http://graylog2.org/) |
| [Raygun](https://github.com/squirkle/logrus-raygun-hook) | Hook for logging to [Raygun.io](http://raygun.io/) |
| [LFShook](https://github.com/rifflock/lfshook) | Hook for logging to the local filesystem |
| [Honeybadger](https://github.com/agonzalezro/logrus_honeybadger) | Hook for sending exceptions to Honeybadger |
| [Mail](https://github.com/zbindenren/logrus_mail) | Hook for sending exceptions via mail |
| [Rollrus](https://github.com/heroku/rollrus) | Hook for sending errors to rollbar |
| [Fluentd](https://github.com/evalphobia/logrus_fluent) | Hook for logging to fluentd |
| [Mongodb](https://github.com/weekface/mgorus) | Hook for logging to mongodb |
| [InfluxDB](https://github.com/Abramovic/logrus_influxdb) | Hook for logging to influxdb |
| [Octokit](https://github.com/dorajistyle/logrus-octokit-hook) | Hook for logging to github via octokit |
| [DeferPanic](https://github.com/deferpanic/dp-logrus) | Hook for logging to DeferPanic |
| [Redis-Hook](https://github.com/rogierlommers/logrus-redis-hook) | Hook for logging to a ELK stack (through Redis) |
| [Amqp-Hook](https://github.com/vladoatanasov/logrus_amqp) | Hook for logging to Amqp broker (Like RabbitMQ) |
| [KafkaLogrus](https://github.com/goibibo/KafkaLogrus) | Hook for logging to kafka |
| [Typetalk](https://github.com/dragon3/logrus-typetalk-hook) | Hook for logging to [Typetalk](https://www.typetalk.in/) |
| [ElasticSearch](https://github.com/sohlich/elogrus) | Hook for logging to ElasticSearch|
#### Level logging
Logrus has six logging levels: Debug, Info, Warning, Error, Fatal and Panic.
```go
log.Debug("Useful debugging information.")
log.Info("Something noteworthy happened!")
log.Warn("You should probably take a look at this.")
log.Error("Something failed but I'm not quitting.")
// Calls os.Exit(1) after logging
log.Fatal("Bye.")
// Calls panic() after logging
log.Panic("I'm bailing.")
```
You can set the logging level on a `Logger`, then it will only log entries with
that severity or anything above it:
```go
// Will log anything that is info or above (warn, error, fatal, panic). Default.
log.SetLevel(log.InfoLevel)
```
It may be useful to set `log.Level = logrus.DebugLevel` in a debug or verbose
environment if your application has that.
#### Entries
Besides the fields added with `WithField` or `WithFields` some fields are
automatically added to all logging events:
1. `time`. The timestamp when the entry was created.
2. `msg`. The logging message passed to `{Info,Warn,Error,Fatal,Panic}` after
the `WithFields` call. E.g. `Failed to send event.`
3. `level`. The logging level. E.g. `info`.
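For instance (a minimal sketch mirroring the examples above), a single call carries the automatic fields alongside any user-supplied ones:

```go
package main

import (
	log "github.com/Sirupsen/logrus"
)

func main() {
	// time, msg, and level are added automatically; animal is a user field.
	log.WithFields(log.Fields{"animal": "walrus"}).Info("A walrus appears")
	// With the default TextFormatter and no TTY this renders roughly as:
	// time="2015-03-26T01:27:38-04:00" level=info msg="A walrus appears" animal=walrus
}
```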
#### Environments
Logrus has no notion of environment.
If you wish for hooks and formatters to only be used in specific environments,
you should handle that yourself. For example, if your application has a global
variable `Environment`, which is a string representation of the environment you
could do:
```go
import (
log "github.com/Sirupsen/logrus"
)
func init() {
// do something here to set environment depending on an environment variable
// or command-line flag
if Environment == "production" {
log.SetFormatter(&log.JSONFormatter{})
} else {
// The TextFormatter is the default; you don't actually have to do this.
log.SetFormatter(&log.TextFormatter{})
}
}
```
This configuration is how `logrus` was intended to be used, but JSON in
production is mostly only useful if you do log aggregation with tools like
Splunk or Logstash.
#### Formatters
The built-in logging formatters are:
* `logrus.TextFormatter`. Logs the event in colors if stdout is a tty, otherwise
without colors.
* *Note:* to force colored output when there is no TTY, set the `ForceColors`
field to `true`. To force no colored output even if there is a TTY set the
`DisableColors` field to `true`
* `logrus.JSONFormatter`. Logs fields as JSON.
* `logrus/formatters/logstash.LogstashFormatter`. Logs fields as [Logstash](http://logstash.net) Events.
```go
logrus.SetFormatter(&logstash.LogstashFormatter{Type: "application_name"})
```
Third party logging formatters:
* [`prefixed`](https://github.com/x-cray/logrus-prefixed-formatter). Displays log entry source along with alternative layout.
* [`zalgo`](https://github.com/aybabtme/logzalgo). Invoking the P͉̫o̳̼̊w̖͈̰͎e̬͔̭͂r͚̼̹̲ ̫͓͉̳͈ō̠͕͖̚f̝͍̠ ͕̲̞͖͑Z̖̫̤̫ͪa͉̬͈̗l͖͎g̳̥o̰̥̅!̣͔̲̻͊̄ ̙̘̦̹̦.
You can define your formatter by implementing the `Formatter` interface,
requiring a `Format` method. `Format` takes an `*Entry`. `entry.Data` is a
`Fields` type (`map[string]interface{}`) with all your fields as well as the
default ones (see Entries section above):
```go
type MyJSONFormatter struct {
}
log.SetFormatter(new(MyJSONFormatter))
func (f *MyJSONFormatter) Format(entry *Entry) ([]byte, error) {
// Note this doesn't include Time, Level and Message which are available on
// the Entry. Consult `godoc` on information about those fields or read the
// source of the official loggers.
serialized, err := json.Marshal(entry.Data)
if err != nil {
return nil, fmt.Errorf("Failed to marshal fields to JSON, %v", err)
}
return append(serialized, '\n'), nil
}
```
#### Logger as an `io.Writer`
Logrus can be transformed into an `io.Writer`. That writer is the end of an `io.Pipe` and it is your responsibility to close it.
```go
w := logger.Writer()
defer w.Close()
srv := http.Server{
// create a stdlib log.Logger that writes to
// logrus.Logger.
ErrorLog: log.New(w, "", 0),
}
```
Each line written to that writer will be printed the usual way, using formatters
and hooks. The level for those entries is `info`.
#### Rotation
Log rotation is not provided with Logrus. Log rotation should be done by an
external program (like `logrotate(8)`) that can compress and delete old log
entries. It should not be a feature of the application-level logger.
#### Tools
| Tool | Description |
| ---- | ----------- |
|[Logrus Mate](https://github.com/gogap/logrus_mate)|Logrus Mate is a tool for Logrus that manages loggers: you can initialize a logger's level, hooks, and formatter from a config file, generating loggers with different configurations for different environments.|
#### Testing
Logrus has a built-in facility for asserting the presence of log messages. This is implemented through the `test` hook and provides:
* decorators for an existing logger (`test.NewLocal` and `test.NewGlobal`) which basically just add the `test` hook
* a test logger (`test.NewNullLogger`) that just records log messages (and does not output any):
```go
logger, hook := NewNullLogger()
logger.Error("Hello error")
assert.Equal(1, len(hook.Entries))
assert.Equal(logrus.ErrorLevel, hook.LastEntry().Level)
assert.Equal("Hello error", hook.LastEntry().Message)
hook.Reset()
assert.Nil(hook.LastEntry())
```

vendor/github.com/containers/image/copy/compression.go generated vendored Normal file
View File

@@ -0,0 +1,62 @@
package copy
import (
"bytes"
"compress/bzip2"
"compress/gzip"
"io"
"github.com/pkg/errors"
"github.com/Sirupsen/logrus"
)
// decompressorFunc, given a compressed stream, returns the decompressed stream.
type decompressorFunc func(io.Reader) (io.Reader, error)
func gzipDecompressor(r io.Reader) (io.Reader, error) {
return gzip.NewReader(r)
}
func bzip2Decompressor(r io.Reader) (io.Reader, error) {
return bzip2.NewReader(r), nil
}
func xzDecompressor(r io.Reader) (io.Reader, error) {
return nil, errors.New("Decompressing xz streams is not supported")
}
// compressionAlgos is an internal implementation detail of detectCompression
var compressionAlgos = map[string]struct {
prefix []byte
decompressor decompressorFunc
}{
"gzip": {[]byte{0x1F, 0x8B, 0x08}, gzipDecompressor}, // gzip (RFC 1952)
"bzip2": {[]byte{0x42, 0x5A, 0x68}, bzip2Decompressor}, // bzip2 (decompress.c:BZ2_decompress)
"xz": {[]byte{0xFD, 0x37, 0x7A, 0x58, 0x5A, 0x00}, xzDecompressor}, // xz (/usr/share/doc/xz/xz-file-format.txt)
}
// detectCompression returns a decompressorFunc if the input is recognized as a compressed format, nil otherwise.
// Because it consumes the start of input, other consumers must use the returned io.Reader instead to also read from the beginning.
func detectCompression(input io.Reader) (decompressorFunc, io.Reader, error) {
buffer := [8]byte{}
n, err := io.ReadAtLeast(input, buffer[:], len(buffer))
if err != nil && err != io.EOF && err != io.ErrUnexpectedEOF {
// This is a “real” error. We could just ignore it this time, process the data we have, and hope that the source will report the same error again.
// Instead, fail immediately with the original error cause instead of a possibly secondary/misleading error returned later.
return nil, nil, err
}
var decompressor decompressorFunc
for name, algo := range compressionAlgos {
if bytes.HasPrefix(buffer[:n], algo.prefix) {
logrus.Debugf("Detected compression format %s", name)
decompressor = algo.decompressor
break
}
}
if decompressor == nil {
logrus.Debugf("No compression detected")
}
return decompressor, io.MultiReader(bytes.NewReader(buffer[:n]), input), nil
}
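For orientation, a minimal sketch of the intended calling convention (the helper `readPossiblyCompressed` is hypothetical, not part of this file): keep reading from the reader that `detectCompression` returns, never from the original input:

```go
package copy

import (
	"io"
	"io/ioutil"
)

// readPossiblyCompressed is a hypothetical helper showing the calling
// convention: always continue reading from the io.Reader returned by
// detectCompression, because the original input has already had its
// first bytes consumed.
func readPossiblyCompressed(input io.Reader) ([]byte, error) {
	decompressor, wrapped, err := detectCompression(input)
	if err != nil {
		return nil, err
	}
	r := wrapped // NOT input
	if decompressor != nil {
		r, err = decompressor(r)
		if err != nil {
			return nil, err
		}
	}
	return ioutil.ReadAll(r)
}
```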

View File

@@ -3,60 +3,63 @@ package copy
import (
"bytes"
"compress/gzip"
"crypto/sha256"
"crypto/subtle"
"encoding/hex"
"errors"
"fmt"
"hash"
"io"
"io/ioutil"
"reflect"
"strings"
pb "gopkg.in/cheggaaa/pb.v1"
"github.com/Sirupsen/logrus"
"github.com/containers/image/image"
"github.com/containers/image/manifest"
"github.com/containers/image/signature"
"github.com/containers/image/transports"
"github.com/containers/image/types"
"github.com/opencontainers/go-digest"
"github.com/pkg/errors"
)
// supportedDigests lists the supported blob digest types.
var supportedDigests = map[string]func() hash.Hash{
"sha256": sha256.New,
}
// preferredManifestMIMETypes lists manifest MIME types in order of our preference, if we can't use the original manifest and need to convert.
// Prefer v2s2 to v2s1 because v2s2 does not need to be changed when uploading to a different location.
// Include v2s1 signed but not v2s1 unsigned, because docker/distribution requires a signature even if the unsigned MIME type is used.
var preferredManifestMIMETypes = []string{manifest.DockerV2Schema2MediaType, manifest.DockerV2Schema1SignedMediaType}
type digestingReader struct {
source io.Reader
digest hash.Hash
expectedDigest []byte
digester digest.Digester
expectedDigest digest.Digest
validationFailed bool
}
// imageCopier allows us to keep track of diffID values for blobs, and other
// data, that we're copying between images, and cache other information that
// might allow us to take some shortcuts
type imageCopier struct {
copiedBlobs map[digest.Digest]digest.Digest
cachedDiffIDs map[digest.Digest]digest.Digest
manifestUpdates *types.ManifestUpdateOptions
dest types.ImageDestination
src types.Image
rawSource types.ImageSource
diffIDsAreNeeded bool
canModifyManifest bool
reportWriter io.Writer
}
// newDigestingReader returns an io.Reader implementation with contents of source, which will eventually return a non-EOF error
// and set validationFailed to true if the source stream does not match expectedDigestString.
func newDigestingReader(source io.Reader, expectedDigestString string) (*digestingReader, error) {
fields := strings.SplitN(expectedDigestString, ":", 2)
if len(fields) != 2 {
return nil, fmt.Errorf("Invalid digest specification %s", expectedDigestString)
// and set validationFailed to true if the source stream does not match expectedDigest.
func newDigestingReader(source io.Reader, expectedDigest digest.Digest) (*digestingReader, error) {
if err := expectedDigest.Validate(); err != nil {
return nil, errors.Errorf("Invalid digest specification %s", expectedDigest)
}
fn, ok := supportedDigests[fields[0]]
if !ok {
return nil, fmt.Errorf("Invalid digest specification %s: unknown digest type %s", expectedDigestString, fields[0])
}
digest := fn()
expectedDigest, err := hex.DecodeString(fields[1])
if err != nil {
return nil, fmt.Errorf("Invalid digest value %s: %v", expectedDigestString, err)
}
if len(expectedDigest) != digest.Size() {
return nil, fmt.Errorf("Invalid digest specification %s: length %d does not match %d", expectedDigestString, len(expectedDigest), digest.Size())
digestAlgorithm := expectedDigest.Algorithm()
if !digestAlgorithm.Available() {
return nil, errors.Errorf("Invalid digest specification %s: unsupported digest algorithm %s", expectedDigest, digestAlgorithm)
}
return &digestingReader{
source: source,
digest: digest,
digester: digestAlgorithm.Digester(),
expectedDigest: expectedDigest,
validationFailed: false,
}, nil
@@ -65,18 +68,18 @@ func newDigestingReader(source io.Reader, expectedDigestString string) (*digesti
func (d *digestingReader) Read(p []byte) (int, error) {
n, err := d.source.Read(p)
if n > 0 {
if n2, err := d.digest.Write(p[:n]); n2 != n || err != nil {
if n2, err := d.digester.Hash().Write(p[:n]); n2 != n || err != nil {
// Coverage: This should not happen, the hash.Hash interface requires
// d.digest.Write to never return an error, and the io.Writer interface
// requires n2 == len(input) if no error is returned.
return 0, fmt.Errorf("Error updating digest during verification: %d vs. %d, %v", n2, n, err)
return 0, errors.Wrapf(err, "Error updating digest during verification: %d vs. %d", n2, n)
}
}
if err == io.EOF {
actualDigest := d.digest.Sum(nil)
if subtle.ConstantTimeCompare(actualDigest, d.expectedDigest) != 1 {
actualDigest := d.digester.Digest()
if actualDigest != d.expectedDigest {
d.validationFailed = true
return 0, fmt.Errorf("Digest did not match, expected %s, got %s", hex.EncodeToString(d.expectedDigest), hex.EncodeToString(actualDigest))
return 0, errors.Errorf("Digest did not match, expected %s, got %s", d.expectedDigest, actualDigest)
}
}
return n, err
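As a rough illustration of the verification contract (the helper `verifyStream` is hypothetical and assumes the same package): wrap the stream, drain it, and any digest mismatch surfaces as the final read error:

```go
package copy

import (
	"io"
	"io/ioutil"

	"github.com/opencontainers/go-digest"
)

// verifyStream is an illustrative helper, not part of the diff: it drains
// source through a digestingReader, so a digest mismatch surfaces as the
// error returned from the final Read at EOF.
// e.g. verifyStream(strings.NewReader("hello"), digest.FromString("hello"))
func verifyStream(source io.Reader, expected digest.Digest) error {
	dr, err := newDigestingReader(source, expected)
	if err != nil {
		return err
	}
	_, err = io.Copy(ioutil.Discard, dr)
	return err
}
```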
@@ -87,10 +90,12 @@ type Options struct {
RemoveSignatures bool // Remove any pre-existing signatures. SignBy will still add a new signature.
SignBy string // If non-empty, asks for a signature to be added during the copy, and specifies a key ID, as accepted by signature.NewGPGSigningMechanism().SignDockerManifest(),
ReportWriter io.Writer
SourceCtx *types.SystemContext
DestinationCtx *types.SystemContext
}
// Image copies image from srcRef to destRef, using policyContext to validate source image admissibility.
func Image(ctx *types.SystemContext, policyContext *signature.PolicyContext, destRef, srcRef types.ImageReference, options *Options) error {
func Image(policyContext *signature.PolicyContext, destRef, srcRef types.ImageReference, options *Options) error {
reportWriter := ioutil.Discard
if options != nil && options.ReportWriter != nil {
reportWriter = options.ReportWriter
@@ -99,28 +104,37 @@ func Image(ctx *types.SystemContext, policyContext *signature.PolicyContext, des
fmt.Fprintf(reportWriter, f, a...)
}
dest, err := destRef.NewImageDestination(ctx)
dest, err := destRef.NewImageDestination(options.DestinationCtx)
if err != nil {
return fmt.Errorf("Error initializing destination %s: %v", transports.ImageName(destRef), err)
return errors.Wrapf(err, "Error initializing destination %s", transports.ImageName(destRef))
}
defer dest.Close()
destSupportedManifestMIMETypes := dest.SupportedManifestMIMETypes()
rawSource, err := srcRef.NewImageSource(ctx, dest.SupportedManifestMIMETypes())
rawSource, err := srcRef.NewImageSource(options.SourceCtx, destSupportedManifestMIMETypes)
if err != nil {
return fmt.Errorf("Error initializing source %s: %v", transports.ImageName(srcRef), err)
return errors.Wrapf(err, "Error initializing source %s", transports.ImageName(srcRef))
}
src := image.FromSource(rawSource)
defer src.Close()
unparsedImage := image.UnparsedFromSource(rawSource)
defer func() {
if unparsedImage != nil {
unparsedImage.Close()
}
}()
// Please keep this policy check BEFORE reading any other information about the image.
if allowed, err := policyContext.IsRunningImageAllowed(src); !allowed || err != nil { // Be paranoid and fail if either return value indicates so.
return fmt.Errorf("Source image rejected: %v", err)
if allowed, err := policyContext.IsRunningImageAllowed(unparsedImage); !allowed || err != nil { // Be paranoid and fail if either return value indicates so.
return errors.Wrap(err, "Source image rejected")
}
writeReport("Getting image source manifest\n")
manifest, _, err := src.Manifest()
src, err := image.FromUnparsedImage(unparsedImage)
if err != nil {
return fmt.Errorf("Error reading manifest: %v", err)
return errors.Wrapf(err, "Error initializing image from source %s", transports.ImageName(srcRef))
}
unparsedImage = nil
defer src.Close()
if src.IsMultiImage() {
return errors.Errorf("can not copy %s: manifest contains multiple images", transports.ImageName(srcRef))
}
var sigs [][]byte
@@ -130,103 +144,135 @@ func Image(ctx *types.SystemContext, policyContext *signature.PolicyContext, des
writeReport("Getting image source signatures\n")
s, err := src.Signatures()
if err != nil {
return fmt.Errorf("Error reading signatures: %v", err)
return errors.Wrap(err, "Error reading signatures")
}
sigs = s
}
if len(sigs) != 0 {
writeReport("Checking if image destination supports signatures\n")
if err := dest.SupportsSignatures(); err != nil {
return fmt.Errorf("Can not copy signatures: %v", err)
return errors.Wrap(err, "Can not copy signatures")
}
}
canModifyManifest := len(sigs) == 0
writeReport("Getting image source configuration\n")
srcConfigInfo, err := src.ConfigInfo()
if err != nil {
return fmt.Errorf("Error parsing manifest: %v", err)
}
if srcConfigInfo.Digest != "" {
writeReport("Uploading blob %s\n", srcConfigInfo.Digest)
destConfigInfo, err := copyBlob(dest, rawSource, srcConfigInfo, false, reportWriter)
if err != nil {
return err
}
if destConfigInfo.Digest != srcConfigInfo.Digest {
return fmt.Errorf("Internal error: copying uncompressed config blob %s changed digest to %s", srcConfigInfo.Digest, destConfigInfo.Digest)
}
}
srcLayerInfos, err := src.LayerInfos()
if err != nil {
return fmt.Errorf("Error parsing manifest: %v", err)
}
destLayerInfos := []types.BlobInfo{}
copiedLayers := map[string]types.BlobInfo{}
for _, srcLayer := range srcLayerInfos {
destLayer, ok := copiedLayers[srcLayer.Digest]
if !ok {
writeReport("Uploading blob %s\n", srcLayer.Digest)
destLayer, err = copyBlob(dest, rawSource, srcLayer, canModifyManifest, reportWriter)
if err != nil {
return err
}
copiedLayers[srcLayer.Digest] = destLayer
}
destLayerInfos = append(destLayerInfos, destLayer)
}
manifestUpdates := types.ManifestUpdateOptions{}
if layerDigestsDiffer(srcLayerInfos, destLayerInfos) {
manifestUpdates.LayerInfos = destLayerInfos
if err := determineManifestConversion(&manifestUpdates, src, destSupportedManifestMIMETypes, canModifyManifest); err != nil {
return err
}
if !reflect.DeepEqual(manifestUpdates, types.ManifestUpdateOptions{}) {
// If src.UpdatedImageNeedsLayerDiffIDs(manifestUpdates) will be true, it needs to be true by the time we get here.
ic := imageCopier{
copiedBlobs: make(map[digest.Digest]digest.Digest),
cachedDiffIDs: make(map[digest.Digest]digest.Digest),
manifestUpdates: &manifestUpdates,
dest: dest,
src: src,
rawSource: rawSource,
diffIDsAreNeeded: src.UpdatedImageNeedsLayerDiffIDs(manifestUpdates),
canModifyManifest: canModifyManifest,
reportWriter: reportWriter,
}
if err := ic.copyLayers(); err != nil {
return err
}
pendingImage := src
if !reflect.DeepEqual(manifestUpdates, types.ManifestUpdateOptions{InformationOnly: manifestUpdates.InformationOnly}) {
if !canModifyManifest {
return fmt.Errorf("Internal error: copy needs an updated manifest but that was known to be forbidden")
return errors.Errorf("Internal error: copy needs an updated manifest but that was known to be forbidden")
}
manifest, err = src.UpdatedManifest(manifestUpdates)
manifestUpdates.InformationOnly.Destination = dest
pendingImage, err = src.UpdatedImage(manifestUpdates)
if err != nil {
return fmt.Errorf("Error creating an updated manifest: %v", err)
return errors.Wrap(err, "Error creating an updated image manifest")
}
}
manifest, _, err := pendingImage.Manifest()
if err != nil {
return errors.Wrap(err, "Error reading manifest")
}
if err := ic.copyConfig(pendingImage); err != nil {
return err
}
if options != nil && options.SignBy != "" {
mech, err := signature.NewGPGSigningMechanism()
if err != nil {
return fmt.Errorf("Error initializing GPG: %v", err)
return errors.Wrap(err, "Error initializing GPG")
}
dockerReference := dest.Reference().DockerReference()
if dockerReference == nil {
return fmt.Errorf("Cannot determine canonical Docker reference for destination %s", transports.ImageName(dest.Reference()))
return errors.Errorf("Cannot determine canonical Docker reference for destination %s", transports.ImageName(dest.Reference()))
}
writeReport("Signing manifest\n")
newSig, err := signature.SignDockerManifest(manifest, dockerReference.String(), mech, options.SignBy)
if err != nil {
return fmt.Errorf("Error creating signature: %v", err)
return errors.Wrap(err, "Error creating signature")
}
sigs = append(sigs, newSig)
}
writeReport("Uploading manifest to image destination\n")
writeReport("Writing manifest to image destination\n")
if err := dest.PutManifest(manifest); err != nil {
return fmt.Errorf("Error writing manifest: %v", err)
return errors.Wrap(err, "Error writing manifest")
}
writeReport("Storing signatures\n")
if err := dest.PutSignatures(sigs); err != nil {
return fmt.Errorf("Error writing signatures: %v", err)
return errors.Wrap(err, "Error writing signatures")
}
if err := dest.Commit(); err != nil {
return fmt.Errorf("Error committing the finished image: %v", err)
return errors.Wrap(err, "Error committing the finished image")
}
return nil
}
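To make the new entry point concrete, here is a minimal, hypothetical caller of the updated `copy.Image` signature; the accept-anything policy and the image references are illustrative only, and assume the `containers/image` packages as vendored here:

```go
package main

import (
	"os"

	"github.com/containers/image/copy"
	"github.com/containers/image/signature"
	"github.com/containers/image/transports"
)

func main() {
	// An insecure accept-anything policy, for illustration only.
	policy := &signature.Policy{
		Default: []signature.PolicyRequirement{signature.NewPRInsecureAcceptAnything()},
	}
	policyContext, err := signature.NewPolicyContext(policy)
	if err != nil {
		panic(err)
	}
	defer policyContext.Destroy()

	srcRef, err := transports.ParseImageName("docker://busybox:latest")
	if err != nil {
		panic(err)
	}
	destRef, err := transports.ParseImageName("dir:/tmp/busybox")
	if err != nil {
		panic(err)
	}
	// Note that ctx is gone from the signature; per-side contexts now
	// ride in Options (SourceCtx/DestinationCtx).
	if err := copy.Image(policyContext, destRef, srcRef, &copy.Options{
		ReportWriter: os.Stdout,
	}); err != nil {
		panic(err)
	}
}
```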
// copyLayers copies layers from src/rawSource to dest, using and updating ic.manifestUpdates if necessary and ic.canModifyManifest.
func (ic *imageCopier) copyLayers() error {
srcInfos := ic.src.LayerInfos()
destInfos := []types.BlobInfo{}
diffIDs := []digest.Digest{}
for _, srcLayer := range srcInfos {
var (
destInfo types.BlobInfo
diffID digest.Digest
err error
)
if ic.dest.AcceptsForeignLayerURLs() && len(srcLayer.URLs) != 0 {
// DiffIDs are, currently, needed only when converting from schema1.
// In which case src.LayerInfos will not have URLs because schema1
// does not support them.
if ic.diffIDsAreNeeded {
return errors.New("getting DiffID for foreign layers is unimplemented")
}
destInfo = srcLayer
fmt.Fprintf(ic.reportWriter, "Skipping foreign layer %q copy to %s\n", destInfo.Digest, ic.dest.Reference().Transport().Name())
} else {
destInfo, diffID, err = ic.copyLayer(srcLayer)
if err != nil {
return err
}
}
destInfos = append(destInfos, destInfo)
diffIDs = append(diffIDs, diffID)
}
ic.manifestUpdates.InformationOnly.LayerInfos = destInfos
if ic.diffIDsAreNeeded {
ic.manifestUpdates.InformationOnly.LayerDiffIDs = diffIDs
}
if layerDigestsDiffer(srcInfos, destInfos) {
ic.manifestUpdates.LayerInfos = destInfos
}
return nil
}
// layerDigestsDiffer returns true iff the digests in a and b differ (ignoring sizes and possible other fields)
func layerDigestsDiffer(a, b []types.BlobInfo) bool {
if len(a) != len(b) {
@@ -240,15 +286,154 @@ func layerDigestsDiffer(a, b []types.BlobInfo) bool {
return false
}
// copyBlob copies a blob with srcInfo (with known Digest and possibly known Size) in src to dest, perhaps compressing it if canCompress,
// and returns a complete blobInfo of the copied blob.
func copyBlob(dest types.ImageDestination, src types.ImageSource, srcInfo types.BlobInfo, canCompress bool, reportWriter io.Writer) (types.BlobInfo, error) {
srcStream, srcBlobSize, err := src.GetBlob(srcInfo.Digest) // We currently completely ignore srcInfo.Size throughout.
// copyConfig copies config.json, if any, from src to dest.
func (ic *imageCopier) copyConfig(src types.Image) error {
srcInfo := src.ConfigInfo()
if srcInfo.Digest != "" {
fmt.Fprintf(ic.reportWriter, "Copying config %s\n", srcInfo.Digest)
configBlob, err := src.ConfigBlob()
if err != nil {
return errors.Wrapf(err, "Error reading config blob %s", srcInfo.Digest)
}
destInfo, err := ic.copyBlobFromStream(bytes.NewReader(configBlob), srcInfo, nil, false)
if err != nil {
return err
}
if destInfo.Digest != srcInfo.Digest {
return errors.Errorf("Internal error: copying uncompressed config blob %s changed digest to %s", srcInfo.Digest, destInfo.Digest)
}
}
return nil
}
// diffIDResult contains both a digest value and an error from diffIDComputationGoroutine.
// We could also send the error through the pipeReader, but this more cleanly separates the copying of the layer and the DiffID computation.
type diffIDResult struct {
digest digest.Digest
err error
}
// copyLayer copies a layer with srcInfo (with known Digest and possibly known Size) in src to dest, perhaps compressing it if canCompress,
// and returns a complete blobInfo of the copied layer, and a value for LayerDiffIDs if diffIDIsNeeded
func (ic *imageCopier) copyLayer(srcInfo types.BlobInfo) (types.BlobInfo, digest.Digest, error) {
// Check if we already have a blob with this digest
haveBlob, extantBlobSize, err := ic.dest.HasBlob(srcInfo)
if err != nil && err != types.ErrBlobNotFound {
return types.BlobInfo{}, "", errors.Wrapf(err, "Error checking for blob %s at destination", srcInfo.Digest)
}
// If we already have a cached diffID for this blob, we don't need to compute it
diffIDIsNeeded := ic.diffIDsAreNeeded && (ic.cachedDiffIDs[srcInfo.Digest] == "")
// If we already have the blob, and we don't need to recompute the diffID, then we might be able to avoid reading it again
if haveBlob && !diffIDIsNeeded {
// Check the blob sizes match, if we were given a size this time
if srcInfo.Size != -1 && srcInfo.Size != extantBlobSize {
return types.BlobInfo{}, "", errors.Errorf("Error: blob %s is already present, but with size %d instead of %d", srcInfo.Digest, extantBlobSize, srcInfo.Size)
}
srcInfo.Size = extantBlobSize
// Tell the image destination that this blob's delta is being applied again. For some image destinations, this can be faster than using GetBlob/PutBlob
blobinfo, err := ic.dest.ReapplyBlob(srcInfo)
if err != nil {
return types.BlobInfo{}, "", errors.Wrapf(err, "Error reapplying blob %s at destination", srcInfo.Digest)
}
fmt.Fprintf(ic.reportWriter, "Skipping fetch of repeat blob %s\n", srcInfo.Digest)
return blobinfo, ic.cachedDiffIDs[srcInfo.Digest], err
}
// Fallback: copy the layer, computing the diffID if we need to do so
fmt.Fprintf(ic.reportWriter, "Copying blob %s\n", srcInfo.Digest)
srcStream, srcBlobSize, err := ic.rawSource.GetBlob(srcInfo)
if err != nil {
return types.BlobInfo{}, fmt.Errorf("Error reading blob %s: %v", srcInfo.Digest, err)
return types.BlobInfo{}, "", errors.Wrapf(err, "Error reading blob %s", srcInfo.Digest)
}
defer srcStream.Close()
blobInfo, diffIDChan, err := ic.copyLayerFromStream(srcStream, types.BlobInfo{Digest: srcInfo.Digest, Size: srcBlobSize},
diffIDIsNeeded)
if err != nil {
return types.BlobInfo{}, "", err
}
var diffIDResult diffIDResult // = {digest:""}
if diffIDIsNeeded {
diffIDResult = <-diffIDChan
if diffIDResult.err != nil {
return types.BlobInfo{}, "", errors.Wrap(diffIDResult.err, "Error computing layer DiffID")
}
logrus.Debugf("Computed DiffID %s for layer %s", diffIDResult.digest, srcInfo.Digest)
ic.cachedDiffIDs[srcInfo.Digest] = diffIDResult.digest
}
return blobInfo, diffIDResult.digest, nil
}
// copyLayerFromStream is an implementation detail of copyLayer; mostly providing a separate “defer” scope.
// it copies a blob with srcInfo (with known Digest and possibly known Size) from srcStream to dest,
// perhaps compressing the stream if canCompress,
// and returns a complete blobInfo of the copied blob and perhaps a <-chan diffIDResult if diffIDIsNeeded, to be read by the caller.
func (ic *imageCopier) copyLayerFromStream(srcStream io.Reader, srcInfo types.BlobInfo,
diffIDIsNeeded bool) (types.BlobInfo, <-chan diffIDResult, error) {
var getDiffIDRecorder func(decompressorFunc) io.Writer // = nil
var diffIDChan chan diffIDResult
err := errors.New("Internal error: unexpected panic in copyLayer") // For pipeWriter.CloseWithError below
if diffIDIsNeeded {
diffIDChan = make(chan diffIDResult, 1) // Buffered, so that sending a value after this or our caller has failed and exited does not block.
pipeReader, pipeWriter := io.Pipe()
defer func() { // Note that this is not the same as {defer pipeWriter.CloseWithError(err)}; we need err to be evaluated lazily.
pipeWriter.CloseWithError(err) // CloseWithError(nil) is equivalent to Close()
}()
getDiffIDRecorder = func(decompressor decompressorFunc) io.Writer {
// If this fails, e.g. because we have exited and due to pipeWriter.CloseWithError() above further
// reading from the pipe has failed, we don't really care.
// We only read from diffIDChan if the rest of the flow has succeeded, and when we do read from it,
// the return value includes an error indication, which we do check.
//
// If this never gets called, pipeReader will not be used anywhere, but pipeWriter will only be
// closed above, so we are happy enough with both pipeReader and pipeWriter to just get collected by GC.
go diffIDComputationGoroutine(diffIDChan, pipeReader, decompressor) // Closes pipeReader
return pipeWriter
}
}
blobInfo, err := ic.copyBlobFromStream(srcStream, srcInfo, getDiffIDRecorder, ic.canModifyManifest) // Sets err to nil on success
return blobInfo, diffIDChan, err
// We need the defer … pipeWriter.CloseWithError() to happen HERE so that the caller can block on reading from diffIDChan
}
// diffIDComputationGoroutine reads all input from layerStream, uncompresses using decompressor if necessary, and sends its digest, and status, if any, to dest.
func diffIDComputationGoroutine(dest chan<- diffIDResult, layerStream io.ReadCloser, decompressor decompressorFunc) {
result := diffIDResult{
digest: "",
err: errors.New("Internal error: unexpected panic in diffIDComputationGoroutine"),
}
defer func() { dest <- result }()
defer layerStream.Close() // We do not care to bother the other end of the pipe with other failures; we send them to dest instead.
result.digest, result.err = computeDiffID(layerStream, decompressor)
}
// computeDiffID reads all input from layerStream, uncompresses it using decompressor if necessary, and returns its digest.
func computeDiffID(stream io.Reader, decompressor decompressorFunc) (digest.Digest, error) {
if decompressor != nil {
s, err := decompressor(stream)
if err != nil {
return "", err
}
stream = s
}
return digest.Canonical.FromReader(stream)
}
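As a quick sanity check of what `computeDiffID` produces (hypothetical snippet assuming the same package): for a gzip-compressed layer, the DiffID equals the canonical digest of the uncompressed bytes:

```go
package copy

import (
	"bytes"
	"compress/gzip"

	"github.com/opencontainers/go-digest"
)

// diffIDExample is illustrative, not part of the diff: it shows that the
// DiffID of a gzipped payload equals the digest of the uncompressed payload.
func diffIDExample(payload []byte) (bool, error) {
	var buf bytes.Buffer
	zw := gzip.NewWriter(&buf)
	if _, err := zw.Write(payload); err != nil {
		return false, err
	}
	if err := zw.Close(); err != nil {
		return false, err
	}
	diffID, err := computeDiffID(&buf, gzipDecompressor)
	if err != nil {
		return false, err
	}
	return diffID == digest.Canonical.FromBytes(payload), nil
}
```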
// copyBlobFromStream copies a blob with srcInfo (with known Digest and possibly known Size) from srcStream to dest,
// perhaps sending a copy to an io.Writer if getOriginalLayerCopyWriter != nil,
// perhaps compressing it if canCompress,
// and returns a complete blobInfo of the copied blob.
func (ic *imageCopier) copyBlobFromStream(srcStream io.Reader, srcInfo types.BlobInfo,
getOriginalLayerCopyWriter func(decompressor decompressorFunc) io.Writer,
canCompress bool) (types.BlobInfo, error) {
// The copying happens through a pipeline of connected io.Readers.
// === Input: srcStream
// === Process input through digestingReader to validate against the expected digest.
// Be paranoid; in case PutBlob somehow managed to ignore an error from digestingReader,
// use a separate validation failure indicator.
// Note that we don't use a stronger "validationSucceeded" indicator, because
@@ -256,29 +441,40 @@ func copyBlob(dest types.ImageDestination, src types.ImageSource, srcInfo types.
// read stream to the end, and validation does not happen.
digestingReader, err := newDigestingReader(srcStream, srcInfo.Digest)
if err != nil {
return types.BlobInfo{}, fmt.Errorf("Error preparing to verify blob %s: %v", srcInfo.Digest, err)
return types.BlobInfo{}, errors.Wrapf(err, "Error preparing to verify blob %s", srcInfo.Digest)
}
var destStream io.Reader = digestingReader
isCompressed, destStream, err := isStreamCompressed(destStream) // We could skip this in some cases, but let's keep the code path uniform
if err != nil {
return types.BlobInfo{}, fmt.Errorf("Error reading blob %s: %v", srcInfo.Digest, err)
}
// === Detect compression of the input stream.
// This requires us to “peek ahead” into the stream to read the initial part, which requires us to chain through another io.Reader returned by detectCompression.
decompressor, destStream, err := detectCompression(destStream) // We could skip this in some cases, but let's keep the code path uniform
if err != nil {
return types.BlobInfo{}, errors.Wrapf(err, "Error reading blob %s", srcInfo.Digest)
}
isCompressed := decompressor != nil
// === Report progress using a pb.Reader.
bar := pb.New(int(srcInfo.Size)).SetUnits(pb.U_BYTES)
bar.Output = reportWriter
bar.Output = ic.reportWriter
bar.SetMaxWidth(80)
bar.ShowTimeLeft = false
bar.ShowPercent = false
bar.Start()
destStream = bar.NewProxyReader(destStream)
defer fmt.Fprint(ic.reportWriter, "\n")
defer fmt.Fprint(reportWriter, "\n")
// === Send a copy of the original, uncompressed, stream, to a separate path if necessary.
var originalLayerReader io.Reader // DO NOT USE this other than to drain the input if no other consumer in the pipeline has done so.
if getOriginalLayerCopyWriter != nil {
destStream = io.TeeReader(destStream, getOriginalLayerCopyWriter(decompressor))
originalLayerReader = destStream
}
// === Compress the layer if it is uncompressed and compression is desired
var inputInfo types.BlobInfo
if !canCompress || isCompressed || !dest.ShouldCompressLayers() {
if !canCompress || isCompressed || !ic.dest.ShouldCompressLayers() {
logrus.Debugf("Using original blob without modification")
inputInfo.Digest = srcInfo.Digest
inputInfo.Size = srcBlobSize
inputInfo = srcInfo
} else {
logrus.Debugf("Compressing blob on the fly")
pipeReader, pipeWriter := io.Pipe()
@@ -293,51 +489,31 @@ func copyBlob(dest types.ImageDestination, src types.ImageSource, srcInfo types.
inputInfo.Size = -1
}
uploadedInfo, err := dest.PutBlob(destStream, inputInfo)
// === Finally, send the layer stream to dest.
uploadedInfo, err := ic.dest.PutBlob(destStream, inputInfo)
if err != nil {
return types.BlobInfo{}, fmt.Errorf("Error writing blob: %v", err)
}
if digestingReader.validationFailed { // Coverage: This should never happen.
return types.BlobInfo{}, fmt.Errorf("Internal error uploading blob %s, digest verification failed but was ignored", srcInfo.Digest)
}
if inputInfo.Digest != "" && uploadedInfo.Digest != inputInfo.Digest {
return types.BlobInfo{}, fmt.Errorf("Internal error uploading blob %s, blob with digest %s uploaded with digest %s", srcInfo.Digest, inputInfo.Digest, uploadedInfo.Digest)
}
return uploadedInfo, nil
}
// compressionPrefixes is an internal implementation detail of isStreamCompressed
var compressionPrefixes = map[string][]byte{
"gzip": {0x1F, 0x8B, 0x08}, // gzip (RFC 1952)
"bzip2": {0x42, 0x5A, 0x68}, // bzip2 (decompress.c:BZ2_decompress)
"xz": {0xFD, 0x37, 0x7A, 0x58, 0x5A, 0x00}, // xz (/usr/share/doc/xz/xz-file-format.txt)
}
// isStreamCompressed returns true if input is recognized as a compressed format.
// Because it consumes the start of input, other consumers must use the returned io.Reader instead to also read from the beginning.
func isStreamCompressed(input io.Reader) (bool, io.Reader, error) {
buffer := [8]byte{}
n, err := io.ReadAtLeast(input, buffer[:], len(buffer))
if err != nil && err != io.EOF && err != io.ErrUnexpectedEOF {
// This is a “real” error. We could just ignore it this time, process the data we have, and hope that the source will report the same error again.
// Instead, fail immediately with the original error cause instead of a possibly secondary/misleading error returned later.
return false, nil, err
return types.BlobInfo{}, errors.Wrap(err, "Error writing blob")
}
isCompressed := false
for algo, prefix := range compressionPrefixes {
if bytes.HasPrefix(buffer[:n], prefix) {
logrus.Debugf("Detected compression format %s", algo)
isCompressed = true
break
// This is fairly horrible: the writer from getOriginalLayerCopyWriter wants to consume
// all of the input (to compute DiffIDs), even if dest.PutBlob does not need it.
// So, read everything from originalLayerReader, which will cause the rest to be
// sent there if we are not already at EOF.
if getOriginalLayerCopyWriter != nil {
logrus.Debugf("Consuming rest of the original blob to satisfy getOriginalLayerCopyWriter")
_, err := io.Copy(ioutil.Discard, originalLayerReader)
if err != nil {
return types.BlobInfo{}, errors.Wrapf(err, "Error reading input blob %s", srcInfo.Digest)
}
}
if !isCompressed {
logrus.Debugf("No compression detected")
}
return isCompressed, io.MultiReader(bytes.NewReader(buffer[:n]), input), nil
if digestingReader.validationFailed { // Coverage: This should never happen.
return types.BlobInfo{}, errors.Errorf("Internal error writing blob %s, digest verification failed but was ignored", srcInfo.Digest)
}
if inputInfo.Digest != "" && uploadedInfo.Digest != inputInfo.Digest {
return types.BlobInfo{}, errors.Errorf("Internal error writing blob %s, blob with digest %s saved with digest %s", srcInfo.Digest, inputInfo.Digest, uploadedInfo.Digest)
}
return uploadedInfo, nil
}
// compressGoroutine reads all input from src and writes its compressed equivalent to dest.
@@ -352,3 +528,41 @@ func compressGoroutine(dest *io.PipeWriter, src io.Reader) {
_, err = io.Copy(zipper, src) // Sets err to nil, i.e. causes dest.Close()
}
// determineManifestConversion updates manifestUpdates to convert manifest to a supported MIME type, if necessary and canModifyManifest.
// Note that the conversion will only happen later, through src.UpdatedImage
func determineManifestConversion(manifestUpdates *types.ManifestUpdateOptions, src types.Image, destSupportedManifestMIMETypes []string, canModifyManifest bool) error {
if len(destSupportedManifestMIMETypes) == 0 {
return nil // Anything goes
}
supportedByDest := map[string]struct{}{}
for _, t := range destSupportedManifestMIMETypes {
supportedByDest[t] = struct{}{}
}
_, srcType, err := src.Manifest()
if err != nil { // This should have been cached?!
return errors.Wrap(err, "Error reading manifest")
}
if _, ok := supportedByDest[srcType]; ok {
logrus.Debugf("Manifest MIME type %s is declared supported by the destination", srcType)
return nil
}
// OK, we should convert the manifest.
if !canModifyManifest {
logrus.Debugf("Manifest MIME type %s is not supported by the destination, but we can't modify the manifest, hoping for the best...")
return nil // Take our chances - FIXME? Or should we fail without trying?
}
var chosenType = destSupportedManifestMIMETypes[0] // This one is known to be supported.
for _, t := range preferredManifestMIMETypes {
if _, ok := supportedByDest[t]; ok {
chosenType = t
break
}
}
logrus.Debugf("Will convert manifest from MIME type %s to %s", srcType, chosenType)
manifestUpdates.ManifestMIMEType = chosenType
return nil
}

View File

@@ -1,14 +1,13 @@
package directory
import (
"crypto/sha256"
"encoding/hex"
"fmt"
"io"
"io/ioutil"
"os"
"github.com/containers/image/types"
"github.com/opencontainers/go-digest"
"github.com/pkg/errors"
)
type dirImageDestination struct {
@@ -45,6 +44,12 @@ func (d *dirImageDestination) ShouldCompressLayers() bool {
return false
}
// AcceptsForeignLayerURLs returns false iff foreign layers in manifest should be actually
// uploaded to the image destination, true otherwise.
func (d *dirImageDestination) AcceptsForeignLayerURLs() bool {
return false
}
// PutBlob writes contents of stream and returns data representing the result (with all data filled in).
// inputInfo.Digest can be optionally provided if known; it is not mandatory for the implementation to verify it.
// inputInfo.Size is the expected length of stream, if known.
@@ -64,16 +69,16 @@ func (d *dirImageDestination) PutBlob(stream io.Reader, inputInfo types.BlobInfo
}
}()
h := sha256.New()
tee := io.TeeReader(stream, h)
digester := digest.Canonical.Digester()
tee := io.TeeReader(stream, digester.Hash())
size, err := io.Copy(blobFile, tee)
if err != nil {
return types.BlobInfo{}, err
}
computedDigest := hex.EncodeToString(h.Sum(nil))
computedDigest := digester.Digest()
if inputInfo.Size != -1 && size != inputInfo.Size {
return types.BlobInfo{}, fmt.Errorf("Size mismatch when copying %s, expected %d, got %d", computedDigest, inputInfo.Size, size)
return types.BlobInfo{}, errors.Errorf("Size mismatch when copying %s, expected %d, got %d", computedDigest, inputInfo.Size, size)
}
if err := blobFile.Sync(); err != nil {
return types.BlobInfo{}, err
@@ -86,7 +91,26 @@ func (d *dirImageDestination) PutBlob(stream io.Reader, inputInfo types.BlobInfo
return types.BlobInfo{}, err
}
succeeded = true
return types.BlobInfo{Digest: "sha256:" + computedDigest, Size: size}, nil
return types.BlobInfo{Digest: computedDigest, Size: size}, nil
}
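The switch from sha256.New()/hex to digest.Canonical.Digester() works because go-digest's Digest() already carries the algorithm prefix; a small runnable sketch (made-up input bytes):
package main
import (
	"bytes"
	"fmt"
	"io"
	"io/ioutil"
	"github.com/opencontainers/go-digest"
)
func main() {
	stream := bytes.NewReader([]byte("layer bytes"))
	digester := digest.Canonical.Digester()
	tee := io.TeeReader(stream, digester.Hash()) // every byte also feeds the hash
	n, err := io.Copy(ioutil.Discard, tee)       // stands in for copying into blobFile
	if err != nil {
		panic(err)
	}
	fmt.Println(digester.Digest(), n) // sha256:<hex> 11
}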
func (d *dirImageDestination) HasBlob(info types.BlobInfo) (bool, int64, error) {
if info.Digest == "" {
return false, -1, errors.Errorf(`Can not check for a blob with unknown digest`)
}
blobPath := d.ref.layerPath(info.Digest)
finfo, err := os.Stat(blobPath)
if err != nil && os.IsNotExist(err) {
return false, -1, types.ErrBlobNotFound
}
if err != nil {
return false, -1, err
}
return true, finfo.Size(), nil
}
func (d *dirImageDestination) ReapplyBlob(info types.BlobInfo) (types.BlobInfo, error) {
return info, nil
}
func (d *dirImageDestination) PutManifest(manifest []byte) error {

View File

@@ -5,7 +5,10 @@ import (
"io/ioutil"
"os"
"github.com/containers/image/manifest"
"github.com/containers/image/types"
"github.com/opencontainers/go-digest"
"github.com/pkg/errors"
)
type dirImageSource struct {
@@ -28,18 +31,23 @@ func (s *dirImageSource) Reference() types.ImageReference {
func (s *dirImageSource) Close() {
}
// it's up to the caller to determine the MIME type of the returned manifest's bytes
// GetManifest returns the image's manifest along with its MIME type (which may be empty when it can't be determined but the manifest is available).
// It may use a remote (= slow) service.
func (s *dirImageSource) GetManifest() ([]byte, string, error) {
m, err := ioutil.ReadFile(s.ref.manifestPath())
if err != nil {
return nil, "", err
}
return m, "", err
return m, manifest.GuessMIMEType(m), err
}
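A usage sketch of the new return value (the path is hypothetical); manifest.GuessMIMEType inspects the JSON contents and returns "" when the type cannot be determined:
package main
import (
	"fmt"
	"io/ioutil"
	"github.com/containers/image/manifest"
)
func main() {
	m, err := ioutil.ReadFile("/some/dir/manifest.json") // hypothetical path
	if err != nil {
		panic(err)
	}
	fmt.Println(manifest.GuessMIMEType(m)) // "" if undecidable
}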
func (s *dirImageSource) GetTargetManifest(digest digest.Digest) ([]byte, string, error) {
return nil, "", errors.Errorf(`Getting target manifest not supported by "dir:"`)
}
// GetBlob returns a stream for the specified blob, and the blobs size (or -1 if unknown).
func (s *dirImageSource) GetBlob(digest string) (io.ReadCloser, int64, error) {
r, err := os.Open(s.ref.layerPath(digest))
func (s *dirImageSource) GetBlob(info types.BlobInfo) (io.ReadCloser, int64, error) {
r, err := os.Open(s.ref.layerPath(info.Digest))
if err != nil {
return nil, 0, err
}

View File

@@ -1,15 +1,17 @@
package directory
import (
"errors"
"fmt"
"path/filepath"
"strings"
"github.com/pkg/errors"
"github.com/containers/image/directory/explicitfilepath"
"github.com/containers/image/docker/reference"
"github.com/containers/image/image"
"github.com/containers/image/types"
"github.com/docker/docker/reference"
"github.com/opencontainers/go-digest"
)
// Transport is an ImageTransport for directory paths.
@@ -32,7 +34,7 @@ func (t dirTransport) ParseReference(reference string) (types.ImageReference, er
// scope passed to this function will not be "", that value is always allowed.
func (t dirTransport) ValidatePolicyConfigurationScope(scope string) error {
if !strings.HasPrefix(scope, "/") {
return fmt.Errorf("Invalid scope %s: Must be an absolute path", scope)
return errors.Errorf("Invalid scope %s: Must be an absolute path", scope)
}
// Refuse also "/", otherwise "/" and "" would have the same semantics,
// and "" could be unexpectedly shadowed by the "/" entry.
@@ -41,7 +43,7 @@ func (t dirTransport) ValidatePolicyConfigurationScope(scope string) error {
}
cleaned := filepath.Clean(scope)
if cleaned != scope {
return fmt.Errorf(`Invalid scope %s: Uses non-canonical format, perhaps try %s`, scope, cleaned)
return errors.Errorf(`Invalid scope %s: Uses non-canonical format, perhaps try %s`, scope, cleaned)
}
return nil
}
@@ -127,11 +129,13 @@ func (ref dirReference) PolicyConfigurationNamespaces() []string {
return res
}
// NewImage returns a types.Image for this reference.
// NewImage returns a types.Image for this reference, possibly specialized for this ImageTransport.
// The caller must call .Close() on the returned Image.
// NOTE: If any kind of signature verification should happen, build an UnparsedImage from the value returned by NewImageSource,
// verify that UnparsedImage, and convert it into a real Image via image.FromUnparsedImage.
func (ref dirReference) NewImage(ctx *types.SystemContext) (types.Image, error) {
src := newImageSource(ref)
return image.FromSource(src), nil
return image.FromSource(src)
}
// NewImageSource returns a types.ImageSource for this reference,
@@ -150,7 +154,7 @@ func (ref dirReference) NewImageDestination(ctx *types.SystemContext) (types.Ima
// DeleteImage deletes the named image from the registry, if supported.
func (ref dirReference) DeleteImage(ctx *types.SystemContext) error {
return fmt.Errorf("Deleting images not implemented for dir: images")
return errors.Errorf("Deleting images not implemented for dir: images")
}
// manifestPath returns a path for the manifest within a directory using our conventions.
@@ -159,9 +163,9 @@ func (ref dirReference) manifestPath() string {
}
// layerPath returns a path for a layer tarball within a directory using our conventions.
func (ref dirReference) layerPath(digest string) string {
func (ref dirReference) layerPath(digest digest.Digest) string {
// FIXME: Should we keep the digest identification?
return filepath.Join(ref.path, strings.TrimPrefix(digest, "sha256:")+".tar")
return filepath.Join(ref.path, digest.Hex()+".tar")
}
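A short illustration of the go-digest accessors layerPath now relies on (directory and input are made up):
package main
import (
	"fmt"
	"path/filepath"
	"github.com/opencontainers/go-digest"
)
func main() {
	d := digest.FromString("example layer")
	fmt.Println(d.Algorithm())                              // sha256
	fmt.Println(filepath.Join("/some/dir", d.Hex()+".tar")) // /some/dir/<hex>.tar
}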
// signaturePath returns a path for a signature within a directory using our conventions.

View File

@@ -1,9 +1,10 @@
package explicitfilepath
import (
"fmt"
"os"
"path/filepath"
"github.com/pkg/errors"
)
// ResolvePathToFullyExplicit returns the input path converted to an absolute, no-symlinks, cleaned up path.
@@ -25,14 +26,14 @@ func ResolvePathToFullyExplicit(path string) (string, error) {
// This can still happen if there is a filesystem race condition, causing the Lstat() above to fail but the later resolution to succeed.
// We do not care to promise anything if such filesystem race conditions can happen, but we definitely don't want to return "."/".." components
// in the resulting path, and especially not at the end.
return "", fmt.Errorf("Unexpectedly missing special filename component in %s", path)
return "", errors.Errorf("Unexpectedly missing special filename component in %s", path)
}
resolvedPath := filepath.Join(resolvedParent, file)
// As a sanity check, ensure that there are no "." or ".." components.
cleanedResolvedPath := filepath.Clean(resolvedPath)
if cleanedResolvedPath != resolvedPath {
// Coverage: This should never happen.
return "", fmt.Errorf("Internal inconsistency: Path %s resolved to %s still cleaned up to %s", path, resolvedPath, cleanedResolvedPath)
return "", errors.Errorf("Internal inconsistency: Path %s resolved to %s still cleaned up to %s", path, resolvedPath, cleanedResolvedPath)
}
return resolvedPath, nil
default: // err != nil, unrecognized

View File

@@ -0,0 +1,319 @@
package daemon
import (
"archive/tar"
"bytes"
"encoding/json"
"fmt"
"io"
"io/ioutil"
"os"
"time"
"github.com/Sirupsen/logrus"
"github.com/containers/image/docker/reference"
"github.com/containers/image/manifest"
"github.com/containers/image/types"
"github.com/docker/docker/client"
"github.com/opencontainers/go-digest"
"github.com/pkg/errors"
"golang.org/x/net/context"
)
type daemonImageDestination struct {
ref daemonReference
namedTaggedRef reference.NamedTagged // Strictly speaking redundant with ref above; having the field makes it structurally impossible for later users to fail.
// For talking to imageLoadGoroutine
goroutineCancel context.CancelFunc
statusChannel <-chan error
writer *io.PipeWriter
tar *tar.Writer
// Other state
committed bool // writer has been closed
blobs map[digest.Digest]types.BlobInfo // list of already-sent blobs
}
// newImageDestination returns a types.ImageDestination for the specified image reference.
func newImageDestination(systemCtx *types.SystemContext, ref daemonReference) (types.ImageDestination, error) {
if ref.ref == nil {
return nil, errors.Errorf("Invalid destination docker-daemon:%s: a destination must be a name:tag", ref.StringWithinTransport())
}
namedTaggedRef, ok := ref.ref.(reference.NamedTagged)
if !ok {
return nil, errors.Errorf("Invalid destination docker-daemon:%s: a destination must be a name:tag", ref.StringWithinTransport())
}
c, err := client.NewClient(client.DefaultDockerHost, "1.22", nil, nil) // FIXME: overridable host
if err != nil {
return nil, errors.Wrap(err, "Error initializing docker engine client")
}
reader, writer := io.Pipe()
// Commit() may never be called, so we may never read from this channel; so, make this buffered to allow imageLoadGoroutine to write status and terminate even if we never read it.
statusChannel := make(chan error, 1)
ctx, goroutineCancel := context.WithCancel(context.Background())
go imageLoadGoroutine(ctx, c, reader, statusChannel)
return &daemonImageDestination{
ref: ref,
namedTaggedRef: namedTaggedRef,
goroutineCancel: goroutineCancel,
statusChannel: statusChannel,
writer: writer,
tar: tar.NewWriter(writer),
committed: false,
blobs: make(map[digest.Digest]types.BlobInfo),
}, nil
}
// imageLoadGoroutine accepts tar stream on reader, sends it to c, and reports error or success by writing to statusChannel
func imageLoadGoroutine(ctx context.Context, c *client.Client, reader *io.PipeReader, statusChannel chan<- error) {
err := errors.New("Internal error: unexpected panic in imageLoadGoroutine")
defer func() {
logrus.Debugf("docker-daemon: sending done, status %v", err)
statusChannel <- err
}()
defer func() {
if err == nil {
reader.Close()
} else {
reader.CloseWithError(err)
}
}()
resp, err := c.ImageLoad(ctx, reader, true)
if err != nil {
err = errors.Wrap(err, "Error saving image to docker engine")
return
}
defer resp.Body.Close()
}
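A stripped-down sketch of the concurrency pattern above, with io.Copy standing in for c.ImageLoad(): the goroutine reports exactly one status value on a buffered channel, so it can terminate even if the channel is never read, and the deferred send evaluates err lazily:
package main
import (
	"errors"
	"fmt"
	"io"
	"io/ioutil"
)
func startConsumer(reader *io.PipeReader) <-chan error {
	status := make(chan error, 1)
	go func() {
		err := errors.New("internal error: unexpected panic")
		defer func() { status <- err }() // runs last; sees the final err
		defer func() {
			if err == nil {
				reader.Close()
			} else {
				reader.CloseWithError(err)
			}
		}()
		_, err = io.Copy(ioutil.Discard, reader) // stands in for c.ImageLoad(ctx, reader, true)
	}()
	return status
}
func main() {
	pr, pw := io.Pipe()
	status := startConsumer(pr)
	pw.Write([]byte("tar stream"))
	pw.Close()
	fmt.Println(<-status) // <nil>
}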
// Close removes resources associated with an initialized ImageDestination, if any.
func (d *daemonImageDestination) Close() {
if !d.committed {
logrus.Debugf("docker-daemon: Closing tar stream to abort loading")
// In principle, goroutineCancel() should abort the HTTP request and stop the process from continuing.
// In practice, though, various HTTP implementations used by client.Client.ImageLoad() (including
// https://github.com/golang/net/blob/master/context/ctxhttp/ctxhttp_pre17.go and the
// net/http version with native Context support in Go 1.7) do not always actually immediately cancel
// the operation: they may process the HTTP request, or a part of it, to completion in a goroutine, and
// return early if the context is canceled without terminating the goroutine at all.
// So we need this CloseWithError to terminate sending the HTTP request Body
// immediately, and hopefully, through terminating the sending which uses "Transfer-Encoding: chunked" without sending
// immediately, and hopefully, through terminating the sending which uses "Transfer-Encoding: chunked"" without sending
// the terminating zero-length chunk, prevent the docker daemon from processing the tar stream at all.
// Whether that works or not, closing the PipeWriter seems desirable in any case.
d.writer.CloseWithError(errors.New("Aborting upload, daemonImageDestination closed without a previous .Commit()"))
}
d.goroutineCancel()
}
// Reference returns the reference used to set up this destination. Note that this should directly correspond to user's intent,
// e.g. it should use the public hostname instead of the result of resolving CNAMEs or following redirects.
func (d *daemonImageDestination) Reference() types.ImageReference {
return d.ref
}
// SupportedManifestMIMETypes tells which manifest MIME types the destination supports.
// If an empty slice or nil is returned, then any MIME type can be tried when uploading.
func (d *daemonImageDestination) SupportedManifestMIMETypes() []string {
return []string{
manifest.DockerV2Schema2MediaType, // We rely on the types.Image.UpdatedImage schema conversion capabilities.
}
}
// SupportsSignatures returns an error (to be displayed to the user) if the destination certainly can't store signatures.
// Note: It is still possible for PutSignatures to fail if SupportsSignatures returns nil.
func (d *daemonImageDestination) SupportsSignatures() error {
return errors.Errorf("Storing signatures for docker-daemon: destinations is not supported")
}
// ShouldCompressLayers returns true iff it is desirable to compress layer blobs written to this destination.
func (d *daemonImageDestination) ShouldCompressLayers() bool {
return false
}
// AcceptsForeignLayerURLs returns false iff foreign layers in the manifest should actually be
// uploaded to the image destination; true otherwise.
func (d *daemonImageDestination) AcceptsForeignLayerURLs() bool {
return false
}
// PutBlob writes contents of stream and returns data representing the result (with all data filled in).
// inputInfo.Digest can be optionally provided if known; it is not mandatory for the implementation to verify it.
// inputInfo.Size is the expected length of stream, if known.
// WARNING: The contents of stream are being verified on the fly. Until stream.Read() returns io.EOF, the contents of the data SHOULD NOT be available
// to any other readers for download using the supplied digest.
// If stream.Read() at any time, ESPECIALLY at end of input, returns an error, PutBlob MUST 1) fail, and 2) delete any data stored so far.
func (d *daemonImageDestination) PutBlob(stream io.Reader, inputInfo types.BlobInfo) (types.BlobInfo, error) {
if inputInfo.Digest.String() == "" {
return types.BlobInfo{}, errors.Errorf(`Can not stream a blob with unknown digest to "docker-daemon:"`)
}
if ok, size, err := d.HasBlob(inputInfo); err == nil && ok {
return types.BlobInfo{Digest: inputInfo.Digest, Size: size}, nil
}
if inputInfo.Size == -1 { // Ouch, we need to stream the blob into a temporary file just to determine the size.
logrus.Debugf("docker-daemon: input with unknown size, streaming to disk first…")
streamCopy, err := ioutil.TempFile(temporaryDirectoryForBigFiles, "docker-daemon-blob")
if err != nil {
return types.BlobInfo{}, err
}
defer os.Remove(streamCopy.Name())
defer streamCopy.Close()
size, err := io.Copy(streamCopy, stream)
if err != nil {
return types.BlobInfo{}, err
}
_, err = streamCopy.Seek(0, os.SEEK_SET)
if err != nil {
return types.BlobInfo{}, err
}
inputInfo.Size = size // inputInfo is a struct, so we are only modifying our copy.
stream = streamCopy
logrus.Debugf("… streaming done")
}
digester := digest.Canonical.Digester()
tee := io.TeeReader(stream, digester.Hash())
if err := d.sendFile(inputInfo.Digest.String(), inputInfo.Size, tee); err != nil {
return types.BlobInfo{}, err
}
d.blobs[inputInfo.Digest] = types.BlobInfo{Digest: digester.Digest(), Size: inputInfo.Size}
return types.BlobInfo{Digest: digester.Digest(), Size: inputInfo.Size}, nil
}
func (d *daemonImageDestination) HasBlob(info types.BlobInfo) (bool, int64, error) {
if info.Digest == "" {
return false, -1, errors.Errorf(`Can not check for a blob with unknown digest`)
}
if blob, ok := d.blobs[info.Digest]; ok {
return true, blob.Size, nil
}
return false, -1, types.ErrBlobNotFound
}
func (d *daemonImageDestination) ReapplyBlob(info types.BlobInfo) (types.BlobInfo, error) {
return info, nil
}
func (d *daemonImageDestination) PutManifest(m []byte) error {
var man schema2Manifest
if err := json.Unmarshal(m, &man); err != nil {
return errors.Wrap(err, "Error parsing manifest")
}
if man.SchemaVersion != 2 || man.MediaType != manifest.DockerV2Schema2MediaType {
return errors.Errorf("Unsupported manifest type, need a Docker schema 2 manifest")
}
layerPaths := []string{}
for _, l := range man.Layers {
layerPaths = append(layerPaths, l.Digest.String())
}
// For github.com/docker/docker consumers, this works just as well as
// refString := d.namedTaggedRef.String() [i.e. d.ref.ref.String()]
// because when reading the RepoTags strings, github.com/docker/docker/reference
// normalizes both of them to the same value.
//
// Doing it this way to include the normalized-out `docker.io[/library]` does make
// a difference for github.com/projectatomic/docker consumers, with the
// “Add --add-registry and --block-registry options to docker daemon” patch.
// These consumers treat reference strings which include a hostname and reference
// strings without a hostname differently.
//
// Using the host name here is more explicit about the intent, and it has the same
// effect as (docker pull) in projectatomic/docker, which tags the result using
// a hostname-qualified reference.
// See https://github.com/containers/image/issues/72 for a more detailed
// analysis and explanation.
refString := fmt.Sprintf("%s:%s", d.namedTaggedRef.FullName(), d.namedTaggedRef.Tag())
items := []manifestItem{{
Config: man.Config.Digest.String(),
RepoTags: []string{refString},
Layers: layerPaths,
Parent: "",
LayerSources: nil,
}}
itemsBytes, err := json.Marshal(&items)
if err != nil {
return err
}
// FIXME? Do we also need to support the legacy format?
return d.sendFile(manifestFileName, int64(len(itemsBytes)), bytes.NewReader(itemsBytes))
}
type tarFI struct {
path string
size int64
}
func (t *tarFI) Name() string {
return t.path
}
func (t *tarFI) Size() int64 {
return t.size
}
func (t *tarFI) Mode() os.FileMode {
return 0444
}
func (t *tarFI) ModTime() time.Time {
return time.Unix(0, 0)
}
func (t *tarFI) IsDir() bool {
return false
}
func (t *tarFI) Sys() interface{} {
return nil
}
// sendFile sends a file into the tar stream.
func (d *daemonImageDestination) sendFile(path string, expectedSize int64, stream io.Reader) error {
hdr, err := tar.FileInfoHeader(&tarFI{path: path, size: expectedSize}, "")
if err != nil {
return err
}
logrus.Debugf("Sending as tar file %s", path)
if err := d.tar.WriteHeader(hdr); err != nil {
return err
}
size, err := io.Copy(d.tar, stream)
if err != nil {
return err
}
if size != expectedSize {
return errors.Errorf("Size mismatch when copying %s, expected %d, got %d", path, expectedSize, size)
}
return nil
}
func (d *daemonImageDestination) PutSignatures(signatures [][]byte) error {
if len(signatures) != 0 {
return errors.Errorf("Storing signatures for docker-daemon: destinations is not supported")
}
return nil
}
// Commit marks the process of storing the image as successful and asks for the image to be persisted.
// WARNING: This does not have any transactional semantics:
// - Uploaded data MAY be visible to others before Commit() is called
// - Uploaded data MAY be removed or MAY remain around if Close() is called without Commit() (i.e. rollback is allowed but not guaranteed)
func (d *daemonImageDestination) Commit() error {
logrus.Debugf("docker-daemon: Closing tar stream")
if err := d.tar.Close(); err != nil {
return err
}
if err := d.writer.Close(); err != nil {
return err
}
d.committed = true // We may still fail, but we are done sending to imageLoadGoroutine.
logrus.Debugf("docker-daemon: Waiting for status")
err := <-d.statusChannel
return err
}

View File

@@ -0,0 +1,361 @@
package daemon
import (
"archive/tar"
"bytes"
"encoding/json"
"io"
"io/ioutil"
"os"
"path"
"github.com/containers/image/manifest"
"github.com/containers/image/types"
"github.com/docker/docker/client"
"github.com/opencontainers/go-digest"
"github.com/pkg/errors"
"golang.org/x/net/context"
)
const temporaryDirectoryForBigFiles = "/var/tmp" // Do not use the system default of os.TempDir(), usually /tmp, because with systemd it could be a tmpfs.
type daemonImageSource struct {
ref daemonReference
tarCopyPath string
// The following data is only available after ensureCachedDataIsPresent() succeeds
tarManifest *manifestItem // nil if not available yet.
configBytes []byte
configDigest digest.Digest
orderedDiffIDList []diffID
knownLayers map[diffID]*layerInfo
// Other state
generatedManifest []byte // Private cache for GetManifest(), nil if not set yet.
}
type layerInfo struct {
path string
size int64
}
// newImageSource returns a types.ImageSource for the specified image reference.
// The caller must call .Close() on the returned ImageSource.
//
// It would be great if we were able to stream the input tar as it is being
// sent; but Docker sends the top-level manifest, which determines which paths
// to look for, at the end, so we will need to seek back and re-read, several times.
// (We could, perhaps, expect an exact sequence, assume that the first plaintext file
// is the config, and that the following len(RootFS) files are the layers, but that feels
// way too brittle.)
func newImageSource(ctx *types.SystemContext, ref daemonReference) (types.ImageSource, error) {
c, err := client.NewClient(client.DefaultDockerHost, "1.22", nil, nil) // FIXME: overridable host
if err != nil {
return nil, errors.Wrap(err, "Error initializing docker engine client")
}
// Per NewReference(), ref.StringWithinTransport() is either an image ID (config digest), or a !reference.NameOnly() reference.
// Either way ImageSave should create a tarball with exactly one image.
inputStream, err := c.ImageSave(context.TODO(), []string{ref.StringWithinTransport()})
if err != nil {
return nil, errors.Wrap(err, "Error loading image from docker engine")
}
defer inputStream.Close()
// FIXME: use SystemContext here.
tarCopyFile, err := ioutil.TempFile(temporaryDirectoryForBigFiles, "docker-daemon-tar")
if err != nil {
return nil, err
}
defer tarCopyFile.Close()
succeeded := false
defer func() {
if !succeeded {
os.Remove(tarCopyFile.Name())
}
}()
if _, err := io.Copy(tarCopyFile, inputStream); err != nil {
return nil, err
}
succeeded = true
return &daemonImageSource{
ref: ref,
tarCopyPath: tarCopyFile.Name(),
}, nil
}
// Reference returns the reference used to set up this source, _as specified by the user_
// (not as the image itself, or its underlying storage, claims). This can be used e.g. to determine which public keys are trusted for this image.
func (s *daemonImageSource) Reference() types.ImageReference {
return s.ref
}
// Close removes resources associated with an initialized ImageSource, if any.
func (s *daemonImageSource) Close() {
_ = os.Remove(s.tarCopyPath)
}
// tarReadCloser is a way to close the backing file of a tar.Reader when the user no longer needs the tar component.
type tarReadCloser struct {
*tar.Reader
backingFile *os.File
}
func (t *tarReadCloser) Close() error {
return t.backingFile.Close()
}
// openTarComponent returns a ReadCloser for the specific file within the archive.
// This is a linear scan; we assume that the tar file will have a fairly small number of files (~layers),
// and that filesystem caching will make the repeated seeking over the (uncompressed) tarCopyPath cheap enough.
// The caller should call .Close() on the returned stream.
func (s *daemonImageSource) openTarComponent(componentPath string) (io.ReadCloser, error) {
f, err := os.Open(s.tarCopyPath)
if err != nil {
return nil, err
}
succeeded := false
defer func() {
if !succeeded {
f.Close()
}
}()
tarReader, header, err := findTarComponent(f, componentPath)
if err != nil {
return nil, err
}
if header == nil {
return nil, os.ErrNotExist
}
if header.FileInfo().Mode()&os.ModeType == os.ModeSymlink { // FIXME: untested
// We follow only one symlink; so no loops are possible.
if _, err := f.Seek(0, os.SEEK_SET); err != nil {
return nil, err
}
// The new path could easily point "outside" the archive, but we only compare it to existing tar headers without extracting the archive,
// so we don't care.
tarReader, header, err = findTarComponent(f, path.Join(path.Dir(componentPath), header.Linkname))
if err != nil {
return nil, err
}
if header == nil {
return nil, os.ErrNotExist
}
}
if !header.FileInfo().Mode().IsRegular() {
return nil, errors.Errorf("Error reading tar archive component %s: not a regular file", header.Name)
}
succeeded = true
return &tarReadCloser{Reader: tarReader, backingFile: f}, nil
}
// findTarComponent returns a header and a reader matching path within inputFile,
// or (nil, nil, nil) if not found.
func findTarComponent(inputFile io.Reader, path string) (*tar.Reader, *tar.Header, error) {
t := tar.NewReader(inputFile)
for {
h, err := t.Next()
if err == io.EOF {
break
}
if err != nil {
return nil, nil, err
}
if h.Name == path {
return t, h, nil
}
}
return nil, nil, nil
}
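A runnable sketch of this linear scan over an in-memory one-file tar (file name and contents are made up):
package main
import (
	"archive/tar"
	"bytes"
	"fmt"
	"io"
	"io/ioutil"
)
func main() {
	var buf bytes.Buffer
	tw := tar.NewWriter(&buf)
	body := []byte(`[{"Config":"abc.json"}]`)
	tw.WriteHeader(&tar.Header{Name: "manifest.json", Mode: 0444, Size: int64(len(body))})
	tw.Write(body)
	tw.Close()
	tr := tar.NewReader(&buf)
	for {
		h, err := tr.Next()
		if err == io.EOF {
			fmt.Println("not found") // the (nil, nil, nil) case above
			return
		}
		if err != nil {
			panic(err)
		}
		if h.Name == "manifest.json" {
			data, _ := ioutil.ReadAll(tr)
			fmt.Printf("%s: %s\n", h.Name, data)
			return
		}
	}
}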
// readTarComponent returns full contents of componentPath.
func (s *daemonImageSource) readTarComponent(path string) ([]byte, error) {
file, err := s.openTarComponent(path)
if err != nil {
return nil, errors.Wrapf(err, "Error loading tar component %s", path)
}
defer file.Close()
bytes, err := ioutil.ReadAll(file)
if err != nil {
return nil, err
}
return bytes, nil
}
// ensureCachedDataIsPresent loads data necessary for any of the public accessors.
func (s *daemonImageSource) ensureCachedDataIsPresent() error {
if s.tarManifest != nil {
return nil
}
// Read and parse manifest.json
tarManifest, err := s.loadTarManifest()
if err != nil {
return err
}
// Read and parse config.
configBytes, err := s.readTarComponent(tarManifest.Config)
if err != nil {
return err
}
var parsedConfig dockerImage // Most fields omitted, we only care about layer DiffIDs.
if err := json.Unmarshal(configBytes, &parsedConfig); err != nil {
return errors.Wrapf(err, "Error decoding tar config %s", tarManifest.Config)
}
knownLayers, err := s.prepareLayerData(tarManifest, &parsedConfig)
if err != nil {
return err
}
// Success; commit.
s.tarManifest = tarManifest
s.configBytes = configBytes
s.configDigest = digest.FromBytes(configBytes)
s.orderedDiffIDList = parsedConfig.RootFS.DiffIDs
s.knownLayers = knownLayers
return nil
}
// loadTarManifest loads and decodes the manifest.json.
func (s *daemonImageSource) loadTarManifest() (*manifestItem, error) {
// FIXME? Do we need to deal with the legacy format?
bytes, err := s.readTarComponent(manifestFileName)
if err != nil {
return nil, err
}
var items []manifestItem
if err := json.Unmarshal(bytes, &items); err != nil {
return nil, errors.Wrap(err, "Error decoding tar manifest.json")
}
if len(items) != 1 {
return nil, errors.Errorf("Unexpected tar manifest.json: expected 1 item, got %d", len(items))
}
return &items[0], nil
}
func (s *daemonImageSource) prepareLayerData(tarManifest *manifestItem, parsedConfig *dockerImage) (map[diffID]*layerInfo, error) {
// Collect layer data available in manifest and config.
if len(tarManifest.Layers) != len(parsedConfig.RootFS.DiffIDs) {
return nil, errors.Errorf("Inconsistent layer count: %d in manifest, %d in config", len(tarManifest.Layers), len(parsedConfig.RootFS.DiffIDs))
}
knownLayers := map[diffID]*layerInfo{}
unknownLayerSizes := map[string]*layerInfo{} // Points into knownLayers, a "to do list" of items with unknown sizes.
for i, diffID := range parsedConfig.RootFS.DiffIDs {
if _, ok := knownLayers[diffID]; ok {
// Apparently it really can happen that a single image contains the same layer diff more than once.
// In that case, the diffID validation ensures that both layers truly are the same, and it should not matter
// which of the tarManifest.Layers paths is used; (docker save) actually makes the duplicates symlinks to the original.
continue
}
layerPath := tarManifest.Layers[i]
if _, ok := unknownLayerSizes[layerPath]; ok {
return nil, errors.Errorf("Layer tarfile %s used for two different DiffID values", layerPath)
}
li := &layerInfo{ // A new element in each iteration
path: layerPath,
size: -1,
}
knownLayers[diffID] = li
unknownLayerSizes[layerPath] = li
}
// Scan the tar file to collect layer sizes.
file, err := os.Open(s.tarCopyPath)
if err != nil {
return nil, err
}
defer file.Close()
t := tar.NewReader(file)
for {
h, err := t.Next()
if err == io.EOF {
break
}
if err != nil {
return nil, err
}
if li, ok := unknownLayerSizes[h.Name]; ok {
li.size = h.Size
delete(unknownLayerSizes, h.Name)
}
}
if len(unknownLayerSizes) != 0 {
return nil, errors.Errorf("Some layer tarfiles are missing in the tarball") // This could do with a better error reporting, if this ever happened in practice.
}
return knownLayers, nil
}
// GetManifest returns the image's manifest along with its MIME type (which may be empty when it can't be determined but the manifest is available).
// It may use a remote (= slow) service.
func (s *daemonImageSource) GetManifest() ([]byte, string, error) {
if s.generatedManifest == nil {
if err := s.ensureCachedDataIsPresent(); err != nil {
return nil, "", err
}
m := schema2Manifest{
SchemaVersion: 2,
MediaType: manifest.DockerV2Schema2MediaType,
Config: distributionDescriptor{
MediaType: manifest.DockerV2Schema2ConfigMediaType,
Size: int64(len(s.configBytes)),
Digest: s.configDigest,
},
Layers: []distributionDescriptor{},
}
for _, diffID := range s.orderedDiffIDList {
li, ok := s.knownLayers[diffID]
if !ok {
return nil, "", errors.Errorf("Internal inconsistency: Information about layer %s missing", diffID)
}
m.Layers = append(m.Layers, distributionDescriptor{
Digest: digest.Digest(diffID), // diffID is a digest of the uncompressed tarball
MediaType: manifest.DockerV2Schema2LayerMediaType,
Size: li.size,
})
}
manifestBytes, err := json.Marshal(&m)
if err != nil {
return nil, "", err
}
s.generatedManifest = manifestBytes
}
return s.generatedManifest, manifest.DockerV2Schema2MediaType, nil
}
// GetTargetManifest returns an image's manifest given a digest. This is mainly used to retrieve a single image's manifest
// out of a manifest list.
func (s *daemonImageSource) GetTargetManifest(digest digest.Digest) ([]byte, string, error) {
// How did we even get here? GetManifest() above has returned a manifest.DockerV2Schema2MediaType.
return nil, "", errors.Errorf(`Manifest lists are not supported by "docker-daemon:"`)
}
// GetBlob returns a stream for the specified blob, and the blobs size (or -1 if unknown).
func (s *daemonImageSource) GetBlob(info types.BlobInfo) (io.ReadCloser, int64, error) {
if err := s.ensureCachedDataIsPresent(); err != nil {
return nil, 0, err
}
if info.Digest == s.configDigest { // FIXME? Implement a more general algorithm matching instead of assuming sha256.
return ioutil.NopCloser(bytes.NewReader(s.configBytes)), int64(len(s.configBytes)), nil
}
if li, ok := s.knownLayers[diffID(info.Digest)]; ok { // diffID is a digest of the uncompressed tarball,
stream, err := s.openTarComponent(li.path)
if err != nil {
return nil, 0, err
}
return stream, li.size, nil
}
return nil, 0, errors.Errorf("Unknown blob %s", info.Digest)
}
// GetSignatures returns the image's signatures. It may use a remote (= slow) service.
func (s *daemonImageSource) GetSignatures() ([][]byte, error) {
return [][]byte{}, nil
}

View File

@@ -0,0 +1,178 @@
package daemon
import (
"github.com/pkg/errors"
"github.com/containers/image/docker/reference"
"github.com/containers/image/image"
"github.com/containers/image/types"
"github.com/opencontainers/go-digest"
)
// Transport is an ImageTransport for images managed by a local Docker daemon.
var Transport = daemonTransport{}
type daemonTransport struct{}
// Name returns the name of the transport, which must be unique among other transports.
func (t daemonTransport) Name() string {
return "docker-daemon"
}
// ParseReference converts a string, which should not start with the ImageTransport.Name prefix, into an ImageReference.
func (t daemonTransport) ParseReference(reference string) (types.ImageReference, error) {
return ParseReference(reference)
}
// ValidatePolicyConfigurationScope checks that scope is a valid name for a signature.PolicyTransportScopes keys
// (i.e. a valid PolicyConfigurationIdentity() or PolicyConfigurationNamespaces() return value).
// It is acceptable to allow an invalid value which will never be matched, it can "only" cause user confusion.
// scope passed to this function will not be "", that value is always allowed.
func (t daemonTransport) ValidatePolicyConfigurationScope(scope string) error {
// See the explanation in daemonReference.PolicyConfigurationIdentity.
return errors.New(`docker-daemon: does not support any scopes except the default "" one`)
}
// daemonReference is an ImageReference for images managed by a local Docker daemon
// Exactly one of id and ref can be set.
// For daemonImageSource, both id and ref are acceptable, ref must not be a NameOnly (interpreted as all tags in that repository by the daemon)
// For daemonImageDestination, it must be a ref, which is NamedTagged.
// (We could, in principle, also allow storing images without tagging them, and the user would have to refer to them using the docker image ID = config digest.
// Using the config digest requires the caller to parse the manifest themselves, which is very cumbersome; so, for now, we don't bother.)
type daemonReference struct {
id digest.Digest
ref reference.Named // !reference.IsNameOnly
}
// ParseReference converts a string, which should not start with the ImageTransport.Name prefix, into an ImageReference.
func ParseReference(refString string) (types.ImageReference, error) {
// This is intended to be compatible with reference.ParseIDOrReference, but more strict about refusing some of the ambiguous cases.
// In particular, this rejects unprefixed digest values (64 hex chars), and sha256 digest prefixes (sha256:fewer-than-64-hex-chars).
// digest:hexstring is structurally the same as a reponame:tag (meaning docker.io/library/reponame:tag).
// reference.ParseIDOrReference interprets such strings as digests.
if dgst, err := digest.Parse(refString); err == nil {
// The daemon explicitly refuses to tag images with a reponame equal to digest.Canonical - but _only_ this digest name.
// Other digest references are ambiguous, so refuse them.
if dgst.Algorithm() != digest.Canonical {
return nil, errors.Errorf("Invalid docker-daemon: reference %s: only digest algorithm %s accepted", refString, digest.Canonical)
}
return NewReference(dgst, nil)
}
ref, err := reference.ParseNamed(refString) // This also rejects unprefixed digest values
if err != nil {
return nil, err
}
if ref.Name() == digest.Canonical.String() {
return nil, errors.Errorf("Invalid docker-daemon: reference %s: The %s repository name is reserved for (non-shortened) digest references", refString, digest.Canonical)
}
return NewReference("", ref)
}
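To illustrate the classification boundary (sample inputs are made up): only strings that parse as a full algorithm-prefixed digest take the image-ID branch; everything else falls through to name parsing, where the reserved sha256 repository name is then rejected separately:
package main
import (
	"fmt"
	"github.com/opencontainers/go-digest"
)
func main() {
	for _, s := range []string{
		"sha256:" + digest.FromString("x").Hex(), // valid digest: image-ID branch
		"busybox:latest",                         // digest.Parse fails: name branch
		"sha256:short",                           // not a valid digest: name branch (then rejected above)
	} {
		if _, err := digest.Parse(s); err == nil {
			fmt.Println(s, "-> image-ID branch")
		} else {
			fmt.Println(s, "-> name-parsing branch")
		}
	}
}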
// NewReference returns a docker-daemon reference for either the supplied image ID (config digest) or the supplied reference (which must satisfy !reference.IsNameOnly)
func NewReference(id digest.Digest, ref reference.Named) (types.ImageReference, error) {
if id != "" && ref != nil {
return nil, errors.New("docker-daemon: reference must not have an image ID and a reference string specified at the same time")
}
if ref != nil {
if reference.IsNameOnly(ref) {
return nil, errors.Errorf("docker-daemon: reference %s has neither a tag nor a digest", ref.String())
}
// A github.com/distribution/reference value can have a tag and a digest at the same time!
// docker/reference does not handle that, so fail.
_, isTagged := ref.(reference.NamedTagged)
_, isDigested := ref.(reference.Canonical)
if isTagged && isDigested {
return nil, errors.Errorf("docker-daemon: references with both a tag and digest are currently not supported")
}
}
return daemonReference{
id: id,
ref: ref,
}, nil
}
func (ref daemonReference) Transport() types.ImageTransport {
return Transport
}
// StringWithinTransport returns a string representation of the reference, which MUST be such that
// reference.Transport().ParseReference(reference.StringWithinTransport()) returns an equivalent reference.
// NOTE: The returned string is not promised to be equal to the original input to ParseReference;
// e.g. default attribute values omitted by the user may be filled in in the return value, or vice versa.
// WARNING: Do not use the return value in the UI to describe an image, it does not contain the Transport().Name() prefix;
// instead, see transports.ImageName().
func (ref daemonReference) StringWithinTransport() string {
switch {
case ref.id != "":
return ref.id.String()
case ref.ref != nil:
return ref.ref.String()
default: // Coverage: Should never happen, NewReference above should refuse such values.
panic("Internal inconsistency: daemonReference has empty id and nil ref")
}
}
// DockerReference returns a Docker reference associated with this reference
// (fully explicit, i.e. !reference.IsNameOnly, but reflecting user intent,
// not e.g. after redirect or alias processing), or nil if unknown/not applicable.
func (ref daemonReference) DockerReference() reference.Named {
return ref.ref // May be nil
}
// PolicyConfigurationIdentity returns a string representation of the reference, suitable for policy lookup.
// This MUST reflect user intent, not e.g. after processing of third-party redirects or aliases;
// The value SHOULD be fully explicit about its semantics, with no hidden defaults, AND canonical
// (i.e. various references with exactly the same semantics should return the same configuration identity)
// It is fine for the return value to be equal to StringWithinTransport(), and it is desirable but
// not required/guaranteed that it will be a valid input to Transport().ParseReference().
// Returns "" if configuration identities for these references are not supported.
func (ref daemonReference) PolicyConfigurationIdentity() string {
// We must allow referring to images in the daemon by image ID, otherwise untagged images would not be accessible.
// But the existence of image IDs means that we can't truly namespace the input; the untagged images would have to fall into the default policy,
// which can be unexpected. So, punt.
return "" // This still allows using the default "" scope to define a policy for this transport.
}
// PolicyConfigurationNamespaces returns a list of other policy configuration namespaces to search
// for if explicit configuration for PolicyConfigurationIdentity() is not set. The list will be processed
// in order, terminating on first match, and an implicit "" is always checked at the end.
// It is STRONGLY recommended for the first element, if any, to be a prefix of PolicyConfigurationIdentity(),
// and each following element to be a prefix of the element preceding it.
func (ref daemonReference) PolicyConfigurationNamespaces() []string {
// See the explanation in daemonReference.PolicyConfigurationIdentity.
return []string{}
}
// NewImage returns a types.Image for this reference.
// The caller must call .Close() on the returned Image.
func (ref daemonReference) NewImage(ctx *types.SystemContext) (types.Image, error) {
src, err := newImageSource(ctx, ref)
if err != nil {
return nil, err
}
return image.FromSource(src)
}
// NewImageSource returns a types.ImageSource for this reference,
// asking the backend to use a manifest from requestedManifestMIMETypes if possible.
// nil requestedManifestMIMETypes means manifest.DefaultRequestedManifestMIMETypes.
// The caller must call .Close() on the returned ImageSource.
func (ref daemonReference) NewImageSource(ctx *types.SystemContext, requestedManifestMIMETypes []string) (types.ImageSource, error) {
return newImageSource(ctx, ref)
}
// NewImageDestination returns a types.ImageDestination for this reference.
// The caller must call .Close() on the returned ImageDestination.
func (ref daemonReference) NewImageDestination(ctx *types.SystemContext) (types.ImageDestination, error) {
return newImageDestination(ctx, ref)
}
// DeleteImage deletes the named image from the registry, if supported.
func (ref daemonReference) DeleteImage(ctx *types.SystemContext) error {
// Should this just untag the image? Should this stop running containers?
// The semantics is not quite as clear as for remote repositories.
// The user can run (docker rmi) directly anyway, so, for now(?), punt instead of trying to guess what the user meant.
return errors.Errorf("Deleting images not implemented for docker-daemon: images")
}

View File

@@ -0,0 +1,53 @@
package daemon
import "github.com/opencontainers/go-digest"
// Various data structures.
// Based on github.com/docker/docker/image/tarexport/tarexport.go
const (
manifestFileName = "manifest.json"
// legacyLayerFileName = "layer.tar"
// legacyConfigFileName = "json"
// legacyVersionFileName = "VERSION"
// legacyRepositoriesFileName = "repositories"
)
type manifestItem struct {
Config string
RepoTags []string
Layers []string
Parent imageID `json:",omitempty"`
LayerSources map[diffID]distributionDescriptor `json:",omitempty"`
}
type imageID string
type diffID digest.Digest
// Based on github.com/docker/distribution/blobs.go
type distributionDescriptor struct {
MediaType string `json:"mediaType,omitempty"`
Size int64 `json:"size,omitempty"`
Digest digest.Digest `json:"digest,omitempty"`
URLs []string `json:"urls,omitempty"`
}
// Based on github.com/docker/distribution/manifest/schema2/manifest.go
// FIXME: We are repeating this all over the place; make a public copy?
type schema2Manifest struct {
SchemaVersion int `json:"schemaVersion"`
MediaType string `json:"mediaType,omitempty"`
Config distributionDescriptor `json:"config"`
Layers []distributionDescriptor `json:"layers"`
}
// Based on github.com/docker/docker/image/image.go
// MOST CONTENT OMITTED AS UNNECESSARY
type dockerImage struct {
RootFS *rootFS `json:"rootfs,omitempty"`
}
type rootFS struct {
Type string `json:"type"`
DiffIDs []diffID `json:"diff_ids,omitempty"`
}
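For orientation, decoding a made-up manifest.json entry of this shape (paths and tags are illustrative; a local struct stands in for the unexported manifestItem):
package main
import (
	"encoding/json"
	"fmt"
)
func main() {
	blob := []byte(`[{"Config":"abc.json","RepoTags":["docker.io/library/busybox:latest"],"Layers":["deadbeef/layer.tar"]}]`)
	var items []struct {
		Config   string
		RepoTags []string
		Layers   []string
	}
	if err := json.Unmarshal(blob, &items); err != nil {
		panic(err)
	}
	fmt.Printf("%+v\n", items[0])
}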

View File

@@ -7,14 +7,19 @@ import (
"fmt"
"io"
"io/ioutil"
"net"
"net/http"
"os"
"path/filepath"
"strings"
"time"
"github.com/Sirupsen/logrus"
"github.com/containers/image/types"
"github.com/docker/docker/pkg/homedir"
"github.com/containers/storage/pkg/homedir"
"github.com/docker/go-connections/sockets"
"github.com/docker/go-connections/tlsconfig"
"github.com/pkg/errors"
)
const (
@@ -27,55 +32,161 @@ const (
dockerCfgObsolete = ".dockercfg"
baseURL = "%s://%s/v2/"
baseURLV1 = "%s://%s/v1/_ping"
tagsURL = "%s/tags/list"
manifestURL = "%s/manifests/%s"
blobsURL = "%s/blobs/%s"
blobUploadURL = "%s/blobs/uploads/"
minimumTokenLifetimeSeconds = 60
)
// ErrV1NotSupported is returned when we're trying to talk to a
// docker V1 registry.
var ErrV1NotSupported = errors.New("can't talk to a V1 docker registry")
type bearerToken struct {
Token string `json:"token"`
ExpiresIn int `json:"expires_in"`
IssuedAt time.Time `json:"issued_at"`
}
// dockerClient is configuration for dealing with a single Docker registry.
type dockerClient struct {
ctx *types.SystemContext
registry string
username string
password string
wwwAuthenticate string // Cache of a value set by ping() if scheme is not empty
scheme string // Cache of a value returned by a successful ping() if not empty
client *http.Client
signatureBase signatureStorageBase
challenges []challenge
scope authScope
token *bearerToken
tokenExpiration time.Time
}
type authScope struct {
remoteName string
actions string
}
// this is cloned from docker/go-connections because upstream docker has changed
// it and make deps here fails otherwise.
// We'll drop this once we upgrade to docker 1.13.x deps.
func serverDefault() *tls.Config {
return &tls.Config{
// Avoid fallback to SSL protocols < TLS1.0
MinVersion: tls.VersionTLS10,
PreferServerCipherSuites: true,
CipherSuites: tlsconfig.DefaultServerAcceptedCiphers,
}
}
func newTransport() *http.Transport {
direct := &net.Dialer{
Timeout: 30 * time.Second,
KeepAlive: 30 * time.Second,
DualStack: true,
}
tr := &http.Transport{
Proxy: http.ProxyFromEnvironment,
Dial: direct.Dial,
TLSHandshakeTimeout: 10 * time.Second,
// TODO(dmcgowan): Call close idle connections when complete and use keep alive
DisableKeepAlives: true,
}
proxyDialer, err := sockets.DialerFromEnvironment(direct)
if err == nil {
tr.Dial = proxyDialer.Dial
}
return tr
}
func setupCertificates(dir string, tlsc *tls.Config) error {
if dir == "" {
return nil
}
fs, err := ioutil.ReadDir(dir)
if err != nil && !os.IsNotExist(err) {
return err
}
for _, f := range fs {
fullPath := filepath.Join(dir, f.Name())
if strings.HasSuffix(f.Name(), ".crt") {
systemPool, err := tlsconfig.SystemCertPool()
if err != nil {
return errors.Wrap(err, "unable to get system cert pool")
}
tlsc.RootCAs = systemPool
logrus.Debugf("crt: %s", fullPath)
data, err := ioutil.ReadFile(fullPath)
if err != nil {
return err
}
tlsc.RootCAs.AppendCertsFromPEM(data)
}
if strings.HasSuffix(f.Name(), ".cert") {
certName := f.Name()
keyName := certName[:len(certName)-5] + ".key"
logrus.Debugf("cert: %s", fullPath)
if !hasFile(fs, keyName) {
return errors.Errorf("missing key %s for client certificate %s. Note that CA certificates should use the extension .crt", keyName, certName)
}
cert, err := tls.LoadX509KeyPair(filepath.Join(dir, certName), filepath.Join(dir, keyName))
if err != nil {
return err
}
tlsc.Certificates = append(tlsc.Certificates, cert)
}
if strings.HasSuffix(f.Name(), ".key") {
keyName := f.Name()
certName := keyName[:len(keyName)-4] + ".cert"
logrus.Debugf("key: %s", fullPath)
if !hasFile(fs, certName) {
return errors.Errorf("missing client certificate %s for key %s", certName, keyName)
}
}
}
return nil
}
func hasFile(files []os.FileInfo, name string) bool {
for _, f := range files {
if f.Name() == name {
return true
}
}
return false
}
// newDockerClient returns a new dockerClient instance for refHostname (a host as specified in the Docker image reference, not canonicalized to dockerRegistry)
// “write” specifies whether the client will be used for "write" access (in particular passed to lookaside.go:toplevelFromSection)
func newDockerClient(ctx *types.SystemContext, ref dockerReference, write bool) (*dockerClient, error) {
func newDockerClient(ctx *types.SystemContext, ref dockerReference, write bool, actions string) (*dockerClient, error) {
registry := ref.ref.Hostname()
if registry == dockerHostname {
registry = dockerRegistry
}
username, password, err := getAuth(ref.ref.Hostname())
username, password, err := getAuth(ctx, ref.ref.Hostname())
if err != nil {
return nil, err
}
var tr *http.Transport
tr := newTransport()
if ctx != nil && (ctx.DockerCertPath != "" || ctx.DockerInsecureSkipTLSVerify) {
tlsc := &tls.Config{}
if ctx.DockerCertPath != "" {
cert, err := tls.LoadX509KeyPair(filepath.Join(ctx.DockerCertPath, "cert.pem"), filepath.Join(ctx.DockerCertPath, "key.pem"))
if err != nil {
return nil, fmt.Errorf("Error loading x509 key pair: %s", err)
}
tlsc.Certificates = append(tlsc.Certificates, cert)
if err := setupCertificates(ctx.DockerCertPath, tlsc); err != nil {
return nil, err
}
tlsc.InsecureSkipVerify = ctx.DockerInsecureSkipTLSVerify
tr = &http.Transport{
TLSClientConfig: tlsc,
}
tr.TLSClientConfig = tlsc
}
client := &http.Client{}
if tr != nil {
client.Transport = tr
if tr.TLSClientConfig == nil {
tr.TLSClientConfig = serverDefault()
}
client := &http.Client{Transport: tr}
sigBase, err := configuredSignatureStorageBase(ctx, ref, write)
if err != nil {
@@ -89,6 +200,10 @@ func newDockerClient(ctx *types.SystemContext, ref dockerReference, write bool)
password: password,
client: client,
signatureBase: sigBase,
scope: authScope{
actions: actions,
remoteName: ref.ref.RemoteName(),
},
}, nil
}
@@ -96,22 +211,20 @@ func newDockerClient(ctx *types.SystemContext, ref dockerReference, write bool)
// url is NOT an absolute URL, but a path relative to the /v2/ top-level API path. The host name and schema is taken from the client or autodetected.
func (c *dockerClient) makeRequest(method, url string, headers map[string][]string, stream io.Reader) (*http.Response, error) {
if c.scheme == "" {
pr, err := c.ping()
if err != nil {
if err := c.ping(); err != nil {
return nil, err
}
c.wwwAuthenticate = pr.WWWAuthenticate
c.scheme = pr.scheme
}
url = fmt.Sprintf(baseURL, c.scheme, c.registry) + url
return c.makeRequestToResolvedURL(method, url, headers, stream, -1)
return c.makeRequestToResolvedURL(method, url, headers, stream, -1, true)
}
// makeRequestToResolvedURL creates and executes a http.Request with the specified parameters, adding authentication and TLS options for the Docker client.
// streamLen, if not -1, specifies the length of the data expected on stream.
// makeRequest should generally be preferred.
func (c *dockerClient) makeRequestToResolvedURL(method, url string, headers map[string][]string, stream io.Reader, streamLen int64) (*http.Response, error) {
// TODO(runcom): too many arguments here, use a struct
func (c *dockerClient) makeRequestToResolvedURL(method, url string, headers map[string][]string, stream io.Reader, streamLen int64, sendAuth bool) (*http.Response, error) {
req, err := http.NewRequest(method, url, stream)
if err != nil {
return nil, err
@@ -125,7 +238,10 @@ func (c *dockerClient) makeRequestToResolvedURL(method, url string, headers map[
req.Header.Add(n, hh)
}
}
if c.wwwAuthenticate != "" {
if c.ctx != nil && c.ctx.DockerRegistryUserAgent != "" {
req.Header.Add("User-Agent", c.ctx.DockerRegistryUserAgent)
}
if sendAuth {
if err := c.setupRequestAuth(req); err != nil {
return nil, err
}
@@ -138,63 +254,48 @@ func (c *dockerClient) makeRequestToResolvedURL(method, url string, headers map[
return res, nil
}
// we're using the challenges from the /v2/ ping response and not the ones from the destination
// URL in this request because:
//
// 1) docker does that as well
// 2) gcr.io is sending 401 without a WWW-Authenticate header in the real request
//
// debugging: https://github.com/containers/image/pull/211#issuecomment-273426236 and follows up
func (c *dockerClient) setupRequestAuth(req *http.Request) error {
tokens := strings.SplitN(strings.TrimSpace(c.wwwAuthenticate), " ", 2)
if len(tokens) != 2 {
return fmt.Errorf("expected 2 tokens in WWW-Authenticate: %d, %s", len(tokens), c.wwwAuthenticate)
if len(c.challenges) == 0 {
return nil
}
switch tokens[0] {
case "Basic":
// assume just one...
challenge := c.challenges[0]
switch challenge.Scheme {
case "basic":
req.SetBasicAuth(c.username, c.password)
return nil
case "Bearer":
// FIXME? This gets a new token for every API request;
// we may be easily able to reuse a previous token, e.g.
// for OpenShift the token only identifies the user and does not vary
// across operations. Should we just try the request first, and
// only get a new token on failure?
// OTOH what to do with the single-use body stream in that case?
// Try performing the request, expecting it to fail.
testReq := *req
// Do not use the body stream, or we couldn't reuse it for the "real" call later.
testReq.Body = nil
testReq.ContentLength = 0
res, err := c.client.Do(&testReq)
if err != nil {
return err
case "bearer":
if c.token == nil || time.Now().After(c.tokenExpiration) {
realm, ok := challenge.Parameters["realm"]
if !ok {
return errors.Errorf("missing realm in bearer auth challenge")
}
service, _ := challenge.Parameters["service"] // Will be "" if not present
scope := fmt.Sprintf("repository:%s:%s", c.scope.remoteName, c.scope.actions)
token, err := c.getBearerToken(realm, service, scope)
if err != nil {
return err
}
c.token = token
c.tokenExpiration = token.IssuedAt.Add(time.Duration(token.ExpiresIn) * time.Second)
}
chs := parseAuthHeader(res.Header)
if res.StatusCode != http.StatusUnauthorized || chs == nil || len(chs) == 0 {
// no need for bearer? wtf?
return nil
}
// Arbitrarily use the first challenge, there is no reason to expect more than one.
challenge := chs[0]
if challenge.Scheme != "bearer" { // Another artifact of trying to handle WWW-Authenticate before it actually happens.
return fmt.Errorf("Unimplemented: WWW-Authenticate Bearer replaced by %#v", challenge.Scheme)
}
realm, ok := challenge.Parameters["realm"]
if !ok {
return fmt.Errorf("missing realm in bearer auth challenge")
}
service, _ := challenge.Parameters["service"] // Will be "" if not present
scope, _ := challenge.Parameters["scope"] // Will be "" if not present
token, err := c.getBearerToken(realm, service, scope)
if err != nil {
return err
}
req.Header.Set("Authorization", fmt.Sprintf("Bearer %s", token))
req.Header.Set("Authorization", fmt.Sprintf("Bearer %s", c.token.Token))
return nil
}
return fmt.Errorf("no handler for %s authentication", tokens[0])
// support docker bearer with authconfig's Auth string? see docker2aci
return errors.Errorf("no handler for %s authentication", challenge.Scheme)
}
func (c *dockerClient) getBearerToken(realm, service, scope string) (string, error) {
func (c *dockerClient) getBearerToken(realm, service, scope string) (*bearerToken, error) {
authReq, err := http.NewRequest("GET", realm, nil)
if err != nil {
return "", err
return nil, err
}
getParams := authReq.URL.Query()
if service != "" {
@@ -207,145 +308,151 @@ func (c *dockerClient) getBearerToken(realm, service, scope string) (string, err
if c.username != "" && c.password != "" {
authReq.SetBasicAuth(c.username, c.password)
}
// insecure for now to contact the external token service
tr := &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}}
tr := newTransport()
// TODO(runcom): insecure for now to contact the external token service
tr.TLSClientConfig = &tls.Config{InsecureSkipVerify: true}
client := &http.Client{Transport: tr}
res, err := client.Do(authReq)
if err != nil {
return "", err
return nil, err
}
defer res.Body.Close()
switch res.StatusCode {
case http.StatusUnauthorized:
return "", fmt.Errorf("unable to retrieve auth token: 401 unauthorized")
return nil, errors.Errorf("unable to retrieve auth token: 401 unauthorized")
case http.StatusOK:
break
default:
return "", fmt.Errorf("unexpected http code: %d, URL: %s", res.StatusCode, authReq.URL)
return nil, errors.Errorf("unexpected http code: %d, URL: %s", res.StatusCode, authReq.URL)
}
tokenBlob, err := ioutil.ReadAll(res.Body)
if err != nil {
return "", err
return nil, err
}
tokenStruct := struct {
Token string `json:"token"`
}{}
if err := json.Unmarshal(tokenBlob, &tokenStruct); err != nil {
return "", err
var token bearerToken
if err := json.Unmarshal(tokenBlob, &token); err != nil {
return nil, err
}
// TODO(runcom): reuse tokens?
//hostAuthTokens, ok = rb.hostsV2AuthTokens[req.URL.Host]
//if !ok {
//hostAuthTokens = make(map[string]string)
//rb.hostsV2AuthTokens[req.URL.Host] = hostAuthTokens
//}
//hostAuthTokens[repo] = tokenStruct.Token
return tokenStruct.Token, nil
if token.ExpiresIn < minimumTokenLifetimeSeconds {
token.ExpiresIn = minimumTokenLifetimeSeconds
logrus.Debugf("Increasing token expiration to: %d seconds", token.ExpiresIn)
}
if token.IssuedAt.IsZero() {
token.IssuedAt = time.Now().UTC()
}
return &token, nil
}
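A self-contained sketch of the token bookkeeping introduced above: short or missing lifetimes are clamped to a minimum, and a missing issued_at falls back to the current time (the sample JSON is made up):
package main
import (
	"encoding/json"
	"fmt"
	"time"
)
type bearerToken struct {
	Token     string    `json:"token"`
	ExpiresIn int       `json:"expires_in"`
	IssuedAt  time.Time `json:"issued_at"`
}
func main() {
	const minimumTokenLifetimeSeconds = 60
	var token bearerToken
	if err := json.Unmarshal([]byte(`{"token":"abc","expires_in":5}`), &token); err != nil {
		panic(err)
	}
	if token.ExpiresIn < minimumTokenLifetimeSeconds {
		token.ExpiresIn = minimumTokenLifetimeSeconds // clamp, as above
	}
	if token.IssuedAt.IsZero() {
		token.IssuedAt = time.Now().UTC()
	}
	expiration := token.IssuedAt.Add(time.Duration(token.ExpiresIn) * time.Second)
	fmt.Println(token.Token, expiration.After(time.Now())) // abc true
}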
func getAuth(hostname string) (string, string, error) {
// TODO(runcom): get this from *cli.Context somehow
//if username != "" && password != "" {
//return username, password, nil
//}
if hostname == dockerHostname {
hostname = dockerAuthRegistry
func getAuth(ctx *types.SystemContext, registry string) (string, string, error) {
if ctx != nil && ctx.DockerAuthConfig != nil {
return ctx.DockerAuthConfig.Username, ctx.DockerAuthConfig.Password, nil
}
var dockerAuth dockerConfigFile
dockerCfgPath := filepath.Join(getDefaultConfigDir(".docker"), dockerCfgFileName)
if _, err := os.Stat(dockerCfgPath); err == nil {
j, err := ioutil.ReadFile(dockerCfgPath)
if err != nil {
return "", "", err
}
var dockerAuth dockerConfigFile
if err := json.Unmarshal(j, &dockerAuth); err != nil {
return "", "", err
}
// try the normal case
if c, ok := dockerAuth.AuthConfigs[hostname]; ok {
return decodeDockerAuth(c.Auth)
}
} else if os.IsNotExist(err) {
// try old config path
oldDockerCfgPath := filepath.Join(getDefaultConfigDir(dockerCfgObsolete))
if _, err := os.Stat(oldDockerCfgPath); err != nil {
return "", "", nil //missing file is not an error
if os.IsNotExist(err) {
return "", "", nil
}
return "", "", errors.Wrap(err, oldDockerCfgPath)
}
j, err := ioutil.ReadFile(oldDockerCfgPath)
if err != nil {
return "", "", err
}
var dockerAuthOld map[string]dockerAuthConfigObsolete
if err := json.Unmarshal(j, &dockerAuthOld); err != nil {
if err := json.Unmarshal(j, &dockerAuth.AuthConfigs); err != nil {
return "", "", err
}
if c, ok := dockerAuthOld[hostname]; ok {
return decodeDockerAuth(c.Auth)
}
} else {
// if file is there but we can't stat it for any reason other
// than it doesn't exist then stop
return "", "", fmt.Errorf("%s - %v", dockerCfgPath, err)
} else if err != nil {
return "", "", errors.Wrap(err, dockerCfgPath)
}
+// I'm feeling lucky
+if c, exists := dockerAuth.AuthConfigs[registry]; exists {
+return decodeDockerAuth(c.Auth)
+}
+// bad luck; let's normalize the entries first
+registry = normalizeRegistry(registry)
+normalizedAuths := map[string]dockerAuthConfig{}
+for k, v := range dockerAuth.AuthConfigs {
+normalizedAuths[normalizeRegistry(k)] = v
+}
+if c, exists := normalizedAuths[registry]; exists {
+return decodeDockerAuth(c.Auth)
+}
return "", "", nil
}
type apiErr struct {
Code string
Message string
Detail interface{}
}
type pingResponse struct {
WWWAuthenticate string
APIVersion string
scheme string
errors []apiErr
}
-func (c *dockerClient) ping() (*pingResponse, error) {
-ping := func(scheme string) (*pingResponse, error) {
+func (c *dockerClient) ping() error {
+ping := func(scheme string) error {
url := fmt.Sprintf(baseURL, scheme, c.registry)
-resp, err := c.client.Get(url)
+resp, err := c.makeRequestToResolvedURL("GET", url, nil, nil, -1, true)
logrus.Debugf("Ping %s err %#v", url, err)
if err != nil {
-return nil, err
+return err
}
defer resp.Body.Close()
logrus.Debugf("Ping %s status %d", scheme+"://"+c.registry+"/v2/", resp.StatusCode)
if resp.StatusCode != http.StatusOK && resp.StatusCode != http.StatusUnauthorized {
return nil, fmt.Errorf("error pinging repository, response code %d", resp.StatusCode)
return errors.Errorf("error pinging repository, response code %d", resp.StatusCode)
}
pr := &pingResponse{}
pr.WWWAuthenticate = resp.Header.Get("WWW-Authenticate")
pr.APIVersion = resp.Header.Get("Docker-Distribution-Api-Version")
pr.scheme = scheme
if resp.StatusCode == http.StatusUnauthorized {
type APIErrors struct {
Errors []apiErr
}
errs := &APIErrors{}
if err := json.NewDecoder(resp.Body).Decode(errs); err != nil {
return nil, err
}
pr.errors = errs.Errors
}
return pr, nil
c.challenges = parseAuthHeader(resp.Header)
c.scheme = scheme
return nil
}
pr, err := ping("https")
err := ping("https")
if err != nil && c.ctx != nil && c.ctx.DockerInsecureSkipTLSVerify {
pr, err = ping("http")
err = ping("http")
}
return pr, err
if err != nil {
err = errors.Wrap(err, "pinging docker registry returned")
if c.ctx != nil && c.ctx.DockerDisableV1Ping {
return err
}
// best effort to understand if we're talking to a V1 registry
pingV1 := func(scheme string) bool {
url := fmt.Sprintf(baseURLV1, scheme, c.registry)
resp, err := c.makeRequestToResolvedURL("GET", url, nil, nil, -1, true)
logrus.Debugf("Ping %s err %#v", url, err)
if err != nil {
return false
}
defer resp.Body.Close()
logrus.Debugf("Ping %s status %d", scheme+"://"+c.registry+"/v1/_ping", resp.StatusCode)
if resp.StatusCode != http.StatusOK && resp.StatusCode != http.StatusUnauthorized {
return false
}
return true
}
isV1 := pingV1("https")
if !isV1 && c.ctx != nil && c.ctx.DockerInsecureSkipTLSVerify {
isV1 = pingV1("http")
}
if isV1 {
err = ErrV1NotSupported
}
}
return err
}
func getDefaultConfigDir(confPath string) string {
return filepath.Join(homedir.Get(), confPath)
}
type dockerAuthConfigObsolete struct {
Auth string `json:"auth"`
}
type dockerAuthConfig struct {
Auth string `json:"auth,omitempty"`
}
@@ -368,3 +475,28 @@ func decodeDockerAuth(s string) (string, string, error) {
password := strings.Trim(parts[1], "\x00")
return user, password, nil
}
// convertToHostname converts a registry url which has http|https prepended
// to just a hostname.
// Copied from github.com/docker/docker/registry/auth.go
func convertToHostname(url string) string {
stripped := url
if strings.HasPrefix(url, "http://") {
stripped = strings.TrimPrefix(url, "http://")
} else if strings.HasPrefix(url, "https://") {
stripped = strings.TrimPrefix(url, "https://")
}
nameParts := strings.SplitN(stripped, "/", 2)
return nameParts[0]
}
func normalizeRegistry(registry string) string {
normalized := convertToHostname(registry)
switch normalized {
case "registry-1.docker.io", "docker.io":
return "index.docker.io"
}
return normalized
}


@@ -7,6 +7,7 @@ import (
"github.com/containers/image/image"
"github.com/containers/image/types"
"github.com/pkg/errors"
)
// Image is a Docker-specific implementation of types.Image with a few extra methods
@@ -24,7 +25,11 @@ func newImage(ctx *types.SystemContext, ref dockerReference) (types.Image, error
if err != nil {
return nil, err
}
return &Image{Image: image.FromSource(s), src: s}, nil
img, err := image.FromSource(s)
if err != nil {
return nil, err
}
return &Image{Image: img, src: s}, nil
}
// SourceRefFullName returns a fully expanded name for the repository this image is in.
@@ -42,7 +47,7 @@ func (i *Image) GetRepositoryTags() ([]string, error) {
defer res.Body.Close()
if res.StatusCode != http.StatusOK {
// print url also
return nil, fmt.Errorf("Invalid status code returned when fetching tags list %d", res.StatusCode)
return nil, errors.Errorf("Invalid status code returned when fetching tags list %d", res.StatusCode)
}
type tagsRes struct {
Tags []string


@@ -2,8 +2,6 @@ package docker
import (
"bytes"
"crypto/sha256"
"encoding/hex"
"fmt"
"io"
"io/ioutil"
@@ -11,23 +9,39 @@ import (
"net/url"
"os"
"path/filepath"
"strconv"
"github.com/Sirupsen/logrus"
"github.com/containers/image/manifest"
"github.com/containers/image/types"
"github.com/opencontainers/go-digest"
"github.com/pkg/errors"
)
+var manifestMIMETypes = []string{
+// TODO(runcom): we'll add OCI as part of another PR here
+manifest.DockerV2Schema2MediaType,
+manifest.DockerV2Schema1SignedMediaType,
+manifest.DockerV2Schema1MediaType,
+}
+func supportedManifestMIMETypesMap() map[string]bool {
+m := make(map[string]bool, len(manifestMIMETypes))
+for _, mt := range manifestMIMETypes {
+m[mt] = true
+}
+return m
+}
type dockerImageDestination struct {
ref dockerReference
c *dockerClient
// State
-manifestDigest string // or "" if not yet known.
+manifestDigest digest.Digest // or "" if not yet known.
}
// newImageDestination creates a new ImageDestination for the specified image reference.
func newImageDestination(ctx *types.SystemContext, ref dockerReference) (types.ImageDestination, error) {
-c, err := newDockerClient(ctx, ref, true)
+c, err := newDockerClient(ctx, ref, true, "push")
if err != nil {
return nil, err
}
@@ -48,18 +62,13 @@ func (d *dockerImageDestination) Close() {
}
func (d *dockerImageDestination) SupportedManifestMIMETypes() []string {
-return []string{
-// TODO(runcom): we'll add OCI as part of another PR here
-manifest.DockerV2Schema2MediaType,
-manifest.DockerV2Schema1SignedMediaType,
-manifest.DockerV2Schema1MediaType,
-}
+return manifestMIMETypes
}
// SupportsSignatures returns an error (to be displayed to the user) if the destination certainly can't store signatures.
// Note: It is still possible for PutSignatures to fail if SupportsSignatures returns nil.
func (d *dockerImageDestination) SupportsSignatures() error {
return fmt.Errorf("Pushing signatures to a Docker Registry is not supported")
return errors.Errorf("Pushing signatures to a Docker Registry is not supported")
}
// ShouldCompressLayers returns true iff it is desirable to compress layer blobs written to this destination.
@@ -67,6 +76,12 @@ func (d *dockerImageDestination) ShouldCompressLayers() bool {
return true
}
+// AcceptsForeignLayerURLs returns false iff foreign layers in manifest should be actually
+// uploaded to the image destination, true otherwise.
+func (d *dockerImageDestination) AcceptsForeignLayerURLs() bool {
+return true
+}
// sizeCounter is an io.Writer which only counts the total size of its input.
type sizeCounter struct{ size int64 }
@@ -82,8 +97,8 @@ func (c *sizeCounter) Write(p []byte) (n int, err error) {
// to any other readers for download using the supplied digest.
// If stream.Read() at any time, ESPECIALLY at end of input, returns an error, PutBlob MUST 1) fail, and 2) delete any data stored so far.
func (d *dockerImageDestination) PutBlob(stream io.Reader, inputInfo types.BlobInfo) (types.BlobInfo, error) {
if inputInfo.Digest != "" {
checkURL := fmt.Sprintf(blobsURL, d.ref.ref.RemoteName(), inputInfo.Digest)
if inputInfo.Digest.String() != "" {
checkURL := fmt.Sprintf(blobsURL, d.ref.ref.RemoteName(), inputInfo.Digest.String())
logrus.Debugf("Checking %s", checkURL)
res, err := d.c.makeRequest("HEAD", checkURL, nil, nil)
@@ -94,18 +109,14 @@ func (d *dockerImageDestination) PutBlob(stream io.Reader, inputInfo types.BlobI
switch res.StatusCode {
case http.StatusOK:
logrus.Debugf("... already exists, not uploading")
-blobLength, err := strconv.ParseInt(res.Header.Get("Content-Length"), 10, 64)
-if err != nil {
-return types.BlobInfo{}, err
-}
-return types.BlobInfo{Digest: inputInfo.Digest, Size: blobLength}, nil
+return types.BlobInfo{Digest: inputInfo.Digest, Size: getBlobSize(res)}, nil
case http.StatusUnauthorized:
logrus.Debugf("... not authorized")
return types.BlobInfo{}, fmt.Errorf("not authorized to read from destination repository %s", d.ref.ref.RemoteName())
return types.BlobInfo{}, errors.Errorf("not authorized to read from destination repository %s", d.ref.ref.RemoteName())
case http.StatusNotFound:
// noop
default:
return types.BlobInfo{}, fmt.Errorf("failed to read from destination repository %s: %v", d.ref.ref.RemoteName(), http.StatusText(res.StatusCode))
return types.BlobInfo{}, errors.Errorf("failed to read from destination repository %s: %v", d.ref.ref.RemoteName(), http.StatusText(res.StatusCode))
}
logrus.Debugf("... failed, status %d", res.StatusCode)
}
@@ -120,50 +131,82 @@ func (d *dockerImageDestination) PutBlob(stream io.Reader, inputInfo types.BlobI
defer res.Body.Close()
if res.StatusCode != http.StatusAccepted {
logrus.Debugf("Error initiating layer upload, response %#v", *res)
return types.BlobInfo{}, fmt.Errorf("Error initiating layer upload to %s, status %d", uploadURL, res.StatusCode)
return types.BlobInfo{}, errors.Errorf("Error initiating layer upload to %s, status %d", uploadURL, res.StatusCode)
}
uploadLocation, err := res.Location()
if err != nil {
return types.BlobInfo{}, fmt.Errorf("Error determining upload URL: %s", err.Error())
return types.BlobInfo{}, errors.Wrap(err, "Error determining upload URL")
}
-h := sha256.New()
+digester := digest.Canonical.Digester()
sizeCounter := &sizeCounter{}
-tee := io.TeeReader(stream, io.MultiWriter(h, sizeCounter))
-res, err = d.c.makeRequestToResolvedURL("PATCH", uploadLocation.String(), map[string][]string{"Content-Type": {"application/octet-stream"}}, tee, inputInfo.Size)
+tee := io.TeeReader(stream, io.MultiWriter(digester.Hash(), sizeCounter))
+res, err = d.c.makeRequestToResolvedURL("PATCH", uploadLocation.String(), map[string][]string{"Content-Type": {"application/octet-stream"}}, tee, inputInfo.Size, true)
if err != nil {
logrus.Debugf("Error uploading layer chunked, response %#v", *res)
return types.BlobInfo{}, err
}
defer res.Body.Close()
-hash := h.Sum(nil)
-computedDigest := "sha256:" + hex.EncodeToString(hash[:])
+computedDigest := digester.Digest()
uploadLocation, err = res.Location()
if err != nil {
return types.BlobInfo{}, fmt.Errorf("Error determining upload URL: %s", err.Error())
return types.BlobInfo{}, errors.Wrap(err, "Error determining upload URL")
}
// FIXME: DELETE uploadLocation on failure
locationQuery := uploadLocation.Query()
// TODO: check inputInfo.Digest == computedDigest https://github.com/containers/image/pull/70#discussion_r77646717
locationQuery.Set("digest", computedDigest)
locationQuery.Set("digest", computedDigest.String())
uploadLocation.RawQuery = locationQuery.Encode()
res, err = d.c.makeRequestToResolvedURL("PUT", uploadLocation.String(), map[string][]string{"Content-Type": {"application/octet-stream"}}, nil, -1)
res, err = d.c.makeRequestToResolvedURL("PUT", uploadLocation.String(), map[string][]string{"Content-Type": {"application/octet-stream"}}, nil, -1, true)
if err != nil {
return types.BlobInfo{}, err
}
defer res.Body.Close()
if res.StatusCode != http.StatusCreated {
logrus.Debugf("Error uploading layer, response %#v", *res)
return types.BlobInfo{}, fmt.Errorf("Error uploading layer to %s, status %d", uploadLocation, res.StatusCode)
return types.BlobInfo{}, errors.Errorf("Error uploading layer to %s, status %d", uploadLocation, res.StatusCode)
}
logrus.Debugf("Upload of layer %s complete", computedDigest)
return types.BlobInfo{Digest: computedDigest, Size: sizeCounter.size}, nil
}
func (d *dockerImageDestination) HasBlob(info types.BlobInfo) (bool, int64, error) {
if info.Digest == "" {
return false, -1, errors.Errorf("Cannot check for a blob with unknown digest")
}
checkURL := fmt.Sprintf(blobsURL, d.ref.ref.RemoteName(), info.Digest.String())
logrus.Debugf("Checking %s", checkURL)
res, err := d.c.makeRequest("HEAD", checkURL, nil, nil)
if err != nil {
return false, -1, err
}
defer res.Body.Close()
switch res.StatusCode {
case http.StatusOK:
logrus.Debugf("... already exists")
return true, getBlobSize(res), nil
case http.StatusUnauthorized:
logrus.Debugf("... not authorized")
return false, -1, errors.Errorf("not authorized to read from destination repository %s", d.ref.ref.RemoteName())
case http.StatusNotFound:
logrus.Debugf("... not present")
return false, -1, types.ErrBlobNotFound
default:
logrus.Errorf("failed to read from destination repository %s: %v", d.ref.ref.RemoteName(), http.StatusText(res.StatusCode))
}
logrus.Debugf("... failed, status %d, ignoring", res.StatusCode)
return false, -1, types.ErrBlobNotFound
}
func (d *dockerImageDestination) ReapplyBlob(info types.BlobInfo) (types.BlobInfo, error) {
return info, nil
}
func (d *dockerImageDestination) PutManifest(m []byte) error {
digest, err := manifest.Digest(m)
if err != nil {
@@ -193,7 +236,7 @@ func (d *dockerImageDestination) PutManifest(m []byte) error {
logrus.Debugf("Error body %s", string(body))
}
logrus.Debugf("Error uploading manifest, status %d, %#v", res.StatusCode, res)
return fmt.Errorf("Error uploading manifest to %s, status %d", url, res.StatusCode)
return errors.Errorf("Error uploading manifest to %s, status %d", url, res.StatusCode)
}
return nil
}
@@ -207,18 +250,18 @@ func (d *dockerImageDestination) PutSignatures(signatures [][]byte) error {
return nil
}
if d.c.signatureBase == nil {
return fmt.Errorf("Pushing signatures to a Docker Registry is not supported, and there is no applicable signature storage configured")
return errors.Errorf("Pushing signatures to a Docker Registry is not supported, and there is no applicable signature storage configured")
}
// FIXME: This assumption that signatures are stored after the manifest rather breaks the model.
if d.manifestDigest == "" {
return fmt.Errorf("Unknown manifest digest, can't add signatures")
if d.manifestDigest.String() == "" {
// This shouldnt happen, ImageDestination users are required to call PutManifest before PutSignatures
return errors.Errorf("Unknown manifest digest, can't add signatures")
}
for i, signature := range signatures {
url := signatureStorageURL(d.c.signatureBase, d.manifestDigest, i)
if url == nil {
return fmt.Errorf("Internal error: signatureStorageURL with non-nil base returned nil")
return errors.Errorf("Internal error: signatureStorageURL with non-nil base returned nil")
}
err := d.putOneSignature(url, signature)
if err != nil {
@@ -233,7 +276,7 @@ func (d *dockerImageDestination) PutSignatures(signatures [][]byte) error {
for i := len(signatures); ; i++ {
url := signatureStorageURL(d.c.signatureBase, d.manifestDigest, i)
if url == nil {
return fmt.Errorf("Internal error: signatureStorageURL with non-nil base returned nil")
return errors.Errorf("Internal error: signatureStorageURL with non-nil base returned nil")
}
missing, err := d.c.deleteOneSignature(url)
if err != nil {
@@ -263,9 +306,9 @@ func (d *dockerImageDestination) putOneSignature(url *url.URL, signature []byte)
return nil
case "http", "https":
return fmt.Errorf("Writing directly to a %s sigstore %s is not supported. Configure a sigstore-staging: location.", url.Scheme, url.String())
return errors.Errorf("Writing directly to a %s sigstore %s is not supported. Configure a sigstore-staging: location", url.Scheme, url.String())
default:
return fmt.Errorf("Unsupported scheme when writing signature to %s", url.String())
return errors.Errorf("Unsupported scheme when writing signature to %s", url.String())
}
}
@@ -282,9 +325,9 @@ func (c *dockerClient) deleteOneSignature(url *url.URL) (missing bool, err error
return false, err
case "http", "https":
return false, fmt.Errorf("Writing directly to a %s sigstore %s is not supported. Configure a sigstore-staging: location.", url.Scheme, url.String())
return false, errors.Errorf("Writing directly to a %s sigstore %s is not supported. Configure a sigstore-staging: location", url.Scheme, url.String())
default:
return false, fmt.Errorf("Unsupported scheme when deleting signature from %s", url.String())
return false, errors.Errorf("Unsupported scheme when deleting signature from %s", url.String())
}
}


@@ -13,17 +13,11 @@ import (
"github.com/Sirupsen/logrus"
"github.com/containers/image/manifest"
"github.com/containers/image/types"
"github.com/docker/distribution/registry/client"
"github.com/opencontainers/go-digest"
"github.com/pkg/errors"
)
-type errFetchManifest struct {
-statusCode int
-body []byte
-}
-func (e errFetchManifest) Error() string {
-return fmt.Sprintf("error fetching manifest: status code: %d, body: %s", e.statusCode, string(e.body))
-}
type dockerImageSource struct {
ref dockerReference
requestedManifestMIMETypes []string
@@ -38,13 +32,24 @@ type dockerImageSource struct {
// nil requestedManifestMIMETypes means manifest.DefaultRequestedManifestMIMETypes.
// The caller must call .Close() on the returned ImageSource.
func newImageSource(ctx *types.SystemContext, ref dockerReference, requestedManifestMIMETypes []string) (*dockerImageSource, error) {
-c, err := newDockerClient(ctx, ref, false)
+c, err := newDockerClient(ctx, ref, false, "pull")
if err != nil {
return nil, err
}
if requestedManifestMIMETypes == nil {
requestedManifestMIMETypes = manifest.DefaultRequestedManifestMIMETypes
}
+supportedMIMEs := supportedManifestMIMETypesMap()
+acceptableRequestedMIMEs := false
+for _, mtrequested := range requestedManifestMIMETypes {
+if supportedMIMEs[mtrequested] {
+acceptableRequestedMIMEs = true
+break
+}
+}
+if !acceptableRequestedMIMEs {
+requestedManifestMIMETypes = manifest.DefaultRequestedManifestMIMETypes
+}
return &dockerImageSource{
ref: ref,
requestedManifestMIMETypes: requestedManifestMIMETypes,
@@ -75,6 +80,8 @@ func simplifyContentType(contentType string) string {
return mimeType
}
// GetManifest returns the image's manifest along with its MIME type (which may be empty when it can't be determined but the manifest is available).
// It may use a remote (= slow) service.
func (s *dockerImageSource) GetManifest() ([]byte, string, error) {
err := s.ensureManifestIsLoaded()
if err != nil {
@@ -83,6 +90,31 @@ func (s *dockerImageSource) GetManifest() ([]byte, string, error) {
return s.cachedManifest, s.cachedManifestMIMEType, nil
}
func (s *dockerImageSource) fetchManifest(tagOrDigest string) ([]byte, string, error) {
url := fmt.Sprintf(manifestURL, s.ref.ref.RemoteName(), tagOrDigest)
headers := make(map[string][]string)
headers["Accept"] = s.requestedManifestMIMETypes
res, err := s.c.makeRequest("GET", url, headers, nil)
if err != nil {
return nil, "", err
}
defer res.Body.Close()
if res.StatusCode != http.StatusOK {
return nil, "", client.HandleErrorResponse(res)
}
manblob, err := ioutil.ReadAll(res.Body)
if err != nil {
return nil, "", err
}
return manblob, simplifyContentType(res.Header.Get("Content-Type")), nil
}
// GetTargetManifest returns an image's manifest given a digest.
// This is mainly used to retrieve a single image's manifest out of a manifest list.
func (s *dockerImageSource) GetTargetManifest(digest digest.Digest) ([]byte, string, error) {
return s.fetchManifest(digest.String())
}
// ensureManifestIsLoaded sets s.cachedManifest and s.cachedManifestMIMEType
//
// ImageSource implementations are not required or expected to do any caching,
@@ -99,32 +131,53 @@ func (s *dockerImageSource) ensureManifestIsLoaded() error {
if err != nil {
return err
}
-url := fmt.Sprintf(manifestURL, s.ref.ref.RemoteName(), reference)
-// TODO(runcom) set manifest version header! schema1 for now - then schema2 etc etc and v1
-// TODO(runcom) NO, switch on the resulter manifest like Docker is doing
-headers := make(map[string][]string)
-headers["Accept"] = s.requestedManifestMIMETypes
-res, err := s.c.makeRequest("GET", url, headers, nil)
+manblob, mt, err := s.fetchManifest(reference)
if err != nil {
return err
}
-defer res.Body.Close()
-manblob, err := ioutil.ReadAll(res.Body)
-if err != nil {
-return err
-}
-if res.StatusCode != http.StatusOK {
-return errFetchManifest{res.StatusCode, manblob}
-}
// We might validate manblob against the Docker-Content-Digest header here to protect against transport errors.
s.cachedManifest = manblob
-s.cachedManifestMIMEType = simplifyContentType(res.Header.Get("Content-Type"))
+s.cachedManifestMIMEType = mt
return nil
}
func (s *dockerImageSource) getExternalBlob(urls []string) (io.ReadCloser, int64, error) {
var (
resp *http.Response
err error
)
for _, url := range urls {
resp, err = s.c.makeRequestToResolvedURL("GET", url, nil, nil, -1, false)
if err == nil {
if resp.StatusCode != http.StatusOK {
err = errors.Errorf("error fetching external blob from %q: %d", url, resp.StatusCode)
logrus.Debug(err)
continue
}
break
}
}
if resp.Body != nil && err == nil {
return resp.Body, getBlobSize(resp), nil
}
return nil, 0, err
}
func getBlobSize(resp *http.Response) int64 {
size, err := strconv.ParseInt(resp.Header.Get("Content-Length"), 10, 64)
if err != nil {
size = -1
}
return size
}
// GetBlob returns a stream for the specified blob, and the blobs size (or -1 if unknown).
-func (s *dockerImageSource) GetBlob(digest string) (io.ReadCloser, int64, error) {
-url := fmt.Sprintf(blobsURL, s.ref.ref.RemoteName(), digest)
+func (s *dockerImageSource) GetBlob(info types.BlobInfo) (io.ReadCloser, int64, error) {
+if len(info.URLs) != 0 {
+return s.getExternalBlob(info.URLs)
+}
+url := fmt.Sprintf(blobsURL, s.ref.ref.RemoteName(), info.Digest.String())
logrus.Debugf("Downloading %s", url)
res, err := s.c.makeRequest("GET", url, nil, nil)
if err != nil {
@@ -132,13 +185,9 @@ func (s *dockerImageSource) GetBlob(digest string) (io.ReadCloser, int64, error)
}
if res.StatusCode != http.StatusOK {
// print url also
return nil, 0, fmt.Errorf("Invalid status code returned when fetching blob %d", res.StatusCode)
return nil, 0, errors.Errorf("Invalid status code returned when fetching blob %d", res.StatusCode)
}
-size, err := strconv.ParseInt(res.Header.Get("Content-Length"), 10, 64)
-if err != nil {
-size = -1
-}
-return res.Body, size, nil
+return res.Body, getBlobSize(res), nil
}
func (s *dockerImageSource) GetSignatures() ([][]byte, error) {
@@ -158,7 +207,7 @@ func (s *dockerImageSource) GetSignatures() ([][]byte, error) {
for i := 0; ; i++ {
url := signatureStorageURL(s.c.signatureBase, manifestDigest, i)
if url == nil {
return nil, fmt.Errorf("Internal error: signatureStorageURL with non-nil base returned nil")
return nil, errors.Errorf("Internal error: signatureStorageURL with non-nil base returned nil")
}
signature, missing, err := s.getOneSignature(url)
if err != nil {
@@ -197,7 +246,7 @@ func (s *dockerImageSource) getOneSignature(url *url.URL) (signature []byte, mis
if res.StatusCode == http.StatusNotFound {
return nil, true, nil
} else if res.StatusCode != http.StatusOK {
return nil, false, fmt.Errorf("Error reading signature from %s: status %d", url.String(), res.StatusCode)
return nil, false, errors.Errorf("Error reading signature from %s: status %d", url.String(), res.StatusCode)
}
sig, err := ioutil.ReadAll(res.Body)
if err != nil {
@@ -206,13 +255,13 @@ func (s *dockerImageSource) getOneSignature(url *url.URL) (signature []byte, mis
return sig, false, nil
default:
return nil, false, fmt.Errorf("Unsupported scheme when reading signature from %s", url.String())
return nil, false, errors.Errorf("Unsupported scheme when reading signature from %s", url.String())
}
}
// deleteImage deletes the named image from the registry, if supported.
func deleteImage(ctx *types.SystemContext, ref dockerReference) error {
-c, err := newDockerClient(ctx, ref, true)
+c, err := newDockerClient(ctx, ref, true, "push")
if err != nil {
return err
}
@@ -239,9 +288,9 @@ func deleteImage(ctx *types.SystemContext, ref dockerReference) error {
switch get.StatusCode {
case http.StatusOK:
case http.StatusNotFound:
return fmt.Errorf("Unable to delete %v. Image may not exist or is not stored with a v2 Schema in a v2 registry.", ref.ref)
return errors.Errorf("Unable to delete %v. Image may not exist or is not stored with a v2 Schema in a v2 registry", ref.ref)
default:
return fmt.Errorf("Failed to delete %v: %s (%v)", ref.ref, manifestBody, get.Status)
return errors.Errorf("Failed to delete %v: %s (%v)", ref.ref, manifestBody, get.Status)
}
digest := get.Header.Get("Docker-Content-Digest")
@@ -260,7 +309,7 @@ func deleteImage(ctx *types.SystemContext, ref dockerReference) error {
return err
}
if delete.StatusCode != http.StatusAccepted {
return fmt.Errorf("Failed to delete %v: %s (%v)", deleteURL, string(body), delete.Status)
return errors.Errorf("Failed to delete %v: %s (%v)", deleteURL, string(body), delete.Status)
}
if c.signatureBase != nil {
@@ -272,7 +321,7 @@ func deleteImage(ctx *types.SystemContext, ref dockerReference) error {
for i := 0; ; i++ {
url := signatureStorageURL(c.signatureBase, manifestDigest, i)
if url == nil {
return fmt.Errorf("Internal error: signatureStorageURL with non-nil base returned nil")
return errors.Errorf("Internal error: signatureStorageURL with non-nil base returned nil")
}
missing, err := c.deleteOneSignature(url)
if err != nil {


@@ -5,8 +5,9 @@ import (
"strings"
"github.com/containers/image/docker/policyconfiguration"
"github.com/containers/image/docker/reference"
"github.com/containers/image/types"
"github.com/docker/docker/reference"
"github.com/pkg/errors"
)
// Transport is an ImageTransport for Docker registry-hosted images.
@@ -42,7 +43,7 @@ type dockerReference struct {
// ParseReference converts a string, which should not start with the ImageTransport.Name prefix, into an Docker ImageReference.
func ParseReference(refString string) (types.ImageReference, error) {
if !strings.HasPrefix(refString, "//") {
return nil, fmt.Errorf("docker: image reference %s does not start with //", refString)
return nil, errors.Errorf("docker: image reference %s does not start with //", refString)
}
ref, err := reference.ParseNamed(strings.TrimPrefix(refString, "//"))
if err != nil {
@@ -55,7 +56,7 @@ func ParseReference(refString string) (types.ImageReference, error) {
// NewReference returns a Docker reference for a named reference. The reference must satisfy !reference.IsNameOnly().
func NewReference(ref reference.Named) (types.ImageReference, error) {
if reference.IsNameOnly(ref) {
return nil, fmt.Errorf("Docker reference %s has neither a tag nor a digest", ref.String())
return nil, errors.Errorf("Docker reference %s has neither a tag nor a digest", ref.String())
}
// A github.com/distribution/reference value can have a tag and a digest at the same time!
// docker/reference does not handle that, so fail.
@@ -64,7 +65,7 @@ func NewReference(ref reference.Named) (types.ImageReference, error) {
_, isTagged := ref.(reference.NamedTagged)
_, isDigested := ref.(reference.Canonical)
if isTagged && isDigested {
return nil, fmt.Errorf("Docker references with both a tag and digest are currently not supported")
return nil, errors.Errorf("Docker references with both a tag and digest are currently not supported")
}
return dockerReference{
ref: ref,
@@ -115,8 +116,10 @@ func (ref dockerReference) PolicyConfigurationNamespaces() []string {
return policyconfiguration.DockerReferenceNamespaces(ref.ref)
}
-// NewImage returns a types.Image for this reference.
+// NewImage returns a types.Image for this reference, possibly specialized for this ImageTransport.
// The caller must call .Close() on the returned Image.
+// NOTE: If any kind of signature verification should happen, build an UnparsedImage from the value returned by NewImageSource,
+// verify that UnparsedImage, and convert it into a real Image via image.FromUnparsedImage.
func (ref dockerReference) NewImage(ctx *types.SystemContext) (types.Image, error) {
return newImage(ctx, ref)
}
@@ -149,5 +152,5 @@ func (ref dockerReference) tagOrDigest() (string, error) {
return ref.Tag(), nil
}
// This should not happen, NewReference above refuses reference.IsNameOnly values.
return "", fmt.Errorf("Internal inconsistency: Reference %s unexpectedly has neither a digest nor a tag", ref.ref.String())
return "", errors.Errorf("Internal inconsistency: Reference %s unexpectedly has neither a digest nor a tag", ref.ref.String())
}


@@ -10,6 +10,8 @@ import (
"strings"
"github.com/ghodss/yaml"
"github.com/opencontainers/go-digest"
"github.com/pkg/errors"
"github.com/Sirupsen/logrus"
"github.com/containers/image/types"
@@ -59,12 +61,12 @@ func configuredSignatureStorageBase(ctx *types.SystemContext, ref dockerReferenc
url, err := url.Parse(topLevel)
if err != nil {
return nil, fmt.Errorf("Invalid signature storage URL %s: %v", topLevel, err)
return nil, errors.Wrapf(err, "Invalid signature storage URL %s", topLevel)
}
// FIXME? Restrict to explicitly supported schemes?
repo := ref.ref.FullName() // Note that this is without a tag or digest.
if path.Clean(repo) != repo { // Coverage: This should not be reachable because /./ and /../ components are not valid in docker references
return nil, fmt.Errorf("Unexpected path elements in Docker reference %s for signature storage", ref.ref.String())
return nil, errors.Errorf("Unexpected path elements in Docker reference %s for signature storage", ref.ref.String())
}
url.Path = url.Path + "/" + repo
return url, nil
@@ -113,12 +115,12 @@ func loadAndMergeConfig(dirPath string) (*registryConfiguration, error) {
var config registryConfiguration
err = yaml.Unmarshal(configBytes, &config)
if err != nil {
return nil, fmt.Errorf("Error parsing %s: %v", configPath, err)
return nil, errors.Wrapf(err, "Error parsing %s", configPath)
}
if config.DefaultDocker != nil {
if mergedConfig.DefaultDocker != nil {
-return nil, fmt.Errorf(`Error parsing signature storage configuration: "default-docker" defined both in "%s" and "%s"`,
+return nil, errors.Errorf(`Error parsing signature storage configuration: "default-docker" defined both in "%s" and "%s"`,
dockerDefaultMergedFrom, configPath)
}
mergedConfig.DefaultDocker = config.DefaultDocker
@@ -127,7 +129,7 @@ func loadAndMergeConfig(dirPath string) (*registryConfiguration, error) {
for nsName, nsConfig := range config.Docker { // includes config.Docker == nil
if _, ok := mergedConfig.Docker[nsName]; ok {
-return nil, fmt.Errorf(`Error parsing signature storage configuration: "docker" namespace "%s" defined both in "%s" and "%s"`,
+return nil, errors.Errorf(`Error parsing signature storage configuration: "docker" namespace "%s" defined both in "%s" and "%s"`,
nsName, nsMergedFrom[nsName], configPath)
}
mergedConfig.Docker[nsName] = nsConfig
@@ -188,11 +190,11 @@ func (ns registryNamespace) signatureTopLevel(write bool) string {
// signatureStorageURL returns a URL usable for accessing signature index in base with known manifestDigest, or nil if not applicable.
// Returns nil iff base == nil.
-func signatureStorageURL(base signatureStorageBase, manifestDigest string, index int) *url.URL {
+func signatureStorageURL(base signatureStorageBase, manifestDigest digest.Digest, index int) *url.URL {
if base == nil {
return nil
}
url := *base
url.Path = fmt.Sprintf("%s@%s/signature-%d", url.Path, manifestDigest, index+1)
url.Path = fmt.Sprintf("%s@%s/signature-%d", url.Path, manifestDigest.String(), index+1)
return &url
}


@@ -1,11 +1,11 @@
package policyconfiguration
import (
"errors"
"fmt"
"strings"
"github.com/docker/docker/reference"
"github.com/pkg/errors"
"github.com/containers/image/docker/reference"
)
// DockerReferenceIdentity returns a string representation of the reference, suitable for policy lookup,
@@ -17,9 +17,9 @@ func DockerReferenceIdentity(ref reference.Named) (string, error) {
digested, isDigested := ref.(reference.Canonical)
switch {
case isTagged && isDigested: // This should not happen, docker/reference.ParseNamed drops the tag.
return "", fmt.Errorf("Unexpected Docker reference %s with both a name and a digest", ref.String())
return "", errors.Errorf("Unexpected Docker reference %s with both a name and a digest", ref.String())
case !isTagged && !isDigested: // This should not happen, the caller is expected to ensure !reference.IsNameOnly()
return "", fmt.Errorf("Internal inconsistency: Docker reference %s with neither a tag nor a digest", ref.String())
return "", errors.Errorf("Internal inconsistency: Docker reference %s with neither a tag nor a digest", ref.String())
case isTagged:
res = res + ":" + tagged.Tag()
case isDigested:


@@ -0,0 +1,6 @@
// Package reference is a fork of the upstream docker/docker/reference package.
// The package is forked because we need consistency especially when storing and
// checking signatures (RH patches break this consistency because they modify
// docker/docker/reference as part of a patch carried in projectatomic/docker).
// The version of this package is v1.12.1 from upstream, update as necessary.
package reference


@@ -1,13 +1,16 @@
package reference
import (
"errors"
"fmt"
"regexp"
"strings"
"github.com/docker/distribution/digest"
// "opencontainers/go-digest" requires us to load the algorithms that we
// want to use into the binary (it calls .Available).
_ "crypto/sha256"
distreference "github.com/docker/distribution/reference"
"github.com/docker/docker/image/v1"
"github.com/opencontainers/go-digest"
"github.com/pkg/errors"
)
const (
@@ -53,16 +56,20 @@ type Canonical interface {
// returned.
// If an error was encountered it is returned, along with a nil Reference.
func ParseNamed(s string) (Named, error) {
-named, err := distreference.ParseNamed(s)
+named, err := distreference.ParseNormalizedNamed(s)
if err != nil {
return nil, fmt.Errorf("Error parsing reference: %q is not a valid repository/tag", s)
return nil, errors.Wrapf(err, "Error parsing reference: %q is not a valid repository/tag", s)
}
r, err := WithName(named.Name())
if err != nil {
return nil, err
}
if canonical, isCanonical := named.(distreference.Canonical); isCanonical {
-return WithDigest(r, canonical.Digest())
+r, err := distreference.WithDigest(r, canonical.Digest())
+if err != nil {
+return nil, err
+}
+return &canonicalRef{namedRef{r}}, nil
}
if tagged, isTagged := named.(distreference.NamedTagged); isTagged {
return WithTag(r, tagged.Tag())
@@ -97,16 +104,6 @@ func WithTag(name Named, tag string) (NamedTagged, error) {
return &taggedRef{namedRef{r}}, nil
}
-// WithDigest combines the name from "name" and the digest from "digest" to form
-// a reference incorporating both the name and the digest.
-func WithDigest(name Named, digest digest.Digest) (Canonical, error) {
-r, err := distreference.WithDigest(name, digest)
-if err != nil {
-return nil, err
-}
-return &canonicalRef{namedRef{r}}, nil
-}
type namedRef struct {
distreference.Named
}
@@ -133,7 +130,7 @@ func (r *taggedRef) Tag() string {
return r.namedRef.Named.(distreference.NamedTagged).Tag()
}
func (r *canonicalRef) Digest() digest.Digest {
-return r.namedRef.Named.(distreference.Canonical).Digest()
+return digest.Digest(r.namedRef.Named.(distreference.Canonical).Digest())
}
// WithDefaultTag adds a default tag to a reference if it only has a repo name.
@@ -155,13 +152,13 @@ func IsNameOnly(ref Named) bool {
return true
}
-// ParseIDOrReference parses string for a image ID or a reference. ID can be
+// ParseIDOrReference parses string for an image ID or a reference. ID can be
// without a default prefix.
func ParseIDOrReference(idOrRef string) (digest.Digest, Named, error) {
-if err := v1.ValidateID(idOrRef); err == nil {
+if err := validateID(idOrRef); err == nil {
idOrRef = "sha256:" + idOrRef
}
-if dgst, err := digest.ParseDigest(idOrRef); err == nil {
+if dgst, err := digest.Parse(idOrRef); err == nil {
return dgst, nil, nil
}
ref, err := ParseNamed(idOrRef)
@@ -203,9 +200,18 @@ func normalize(name string) (string, error) {
return name, nil
}
-func validateName(name string) error {
-if err := v1.ValidateID(name); err == nil {
-return fmt.Errorf("Invalid repository name (%s), cannot specify 64-byte hexadecimal strings", name)
+var validHex = regexp.MustCompile(`^([a-f0-9]{64})$`)
+func validateID(id string) error {
+if ok := validHex.MatchString(id); !ok {
+return errors.Errorf("image ID %q is invalid", id)
+}
+return nil
+}
+func validateName(name string) error {
+if err := validateID(name); err == nil {
+return errors.Errorf("Invalid repository name (%s), cannot specify 64-byte hexadecimal strings", name)
}
return nil
}


@@ -0,0 +1,63 @@
package image
import (
"encoding/json"
"runtime"
"github.com/containers/image/manifest"
"github.com/containers/image/types"
"github.com/opencontainers/go-digest"
"github.com/pkg/errors"
)
type platformSpec struct {
Architecture string `json:"architecture"`
OS string `json:"os"`
OSVersion string `json:"os.version,omitempty"`
OSFeatures []string `json:"os.features,omitempty"`
Variant string `json:"variant,omitempty"`
Features []string `json:"features,omitempty"`
}
// A manifestDescriptor references a platform-specific manifest.
type manifestDescriptor struct {
descriptor
Platform platformSpec `json:"platform"`
}
type manifestList struct {
SchemaVersion int `json:"schemaVersion"`
MediaType string `json:"mediaType"`
Manifests []manifestDescriptor `json:"manifests"`
}
func manifestSchema2FromManifestList(src types.ImageSource, manblob []byte) (genericManifest, error) {
list := manifestList{}
if err := json.Unmarshal(manblob, &list); err != nil {
return nil, err
}
var targetManifestDigest digest.Digest
for _, d := range list.Manifests {
if d.Platform.Architecture == runtime.GOARCH && d.Platform.OS == runtime.GOOS {
targetManifestDigest = d.Digest
break
}
}
if targetManifestDigest == "" {
return nil, errors.New("no supported platform found in manifest list")
}
manblob, mt, err := src.GetTargetManifest(targetManifestDigest)
if err != nil {
return nil, err
}
matches, err := manifest.MatchesDigest(manblob, targetManifestDigest)
if err != nil {
return nil, errors.Wrap(err, "Error computing manifest digest")
}
if !matches {
return nil, errors.Errorf("Manifest image does not match selected manifest digest %s", targetManifestDigest)
}
return manifestInstanceFromBlob(src, manblob, mt)
}


@@ -2,12 +2,15 @@ package image
import (
"encoding/json"
"errors"
"fmt"
"regexp"
"strings"
"time"
"github.com/containers/image/docker/reference"
"github.com/containers/image/manifest"
"github.com/containers/image/types"
"github.com/opencontainers/go-digest"
"github.com/pkg/errors"
)
var (
@@ -15,18 +18,33 @@ var (
)
type fsLayersSchema1 struct {
-BlobSum string `json:"blobSum"`
+BlobSum digest.Digest `json:"blobSum"`
}
+type historySchema1 struct {
+V1Compatibility string `json:"v1Compatibility"`
+}
+// v1Compatibility is the structure stored as a JSON string in historySchema1.V1Compatibility. It is similar to v1Image but not the same, in particular note the ThrowAway field.
+type v1Compatibility struct {
+ID string `json:"id"`
+Parent string `json:"parent,omitempty"`
+Comment string `json:"comment,omitempty"`
+Created time.Time `json:"created"`
+ContainerConfig struct {
+Cmd []string
+} `json:"container_config,omitempty"`
+Author string `json:"author,omitempty"`
+ThrowAway bool `json:"throwaway,omitempty"`
+}
type manifestSchema1 struct {
-Name string `json:"name"`
-Tag string `json:"tag"`
-Architecture string `json:"architecture"`
-FSLayers []fsLayersSchema1 `json:"fsLayers"`
-History []struct {
-V1Compatibility string `json:"v1Compatibility"`
-} `json:"history"`
-SchemaVersion int `json:"schemaVersion"`
+Name string `json:"name"`
+Tag string `json:"tag"`
+Architecture string `json:"architecture"`
+FSLayers []fsLayersSchema1 `json:"fsLayers"`
+History []historySchema1 `json:"history"`
+SchemaVersion int `json:"schemaVersion"`
}
func manifestSchema1FromManifest(manifest []byte) (genericManifest, error) {
@@ -34,23 +52,69 @@ func manifestSchema1FromManifest(manifest []byte) (genericManifest, error) {
if err := json.Unmarshal(manifest, mschema1); err != nil {
return nil, err
}
+if mschema1.SchemaVersion != 1 {
+return nil, errors.Errorf("unsupported schema version %d", mschema1.SchemaVersion)
+}
+if len(mschema1.FSLayers) != len(mschema1.History) {
+return nil, errors.New("length of history not equal to number of layers")
+}
+if len(mschema1.FSLayers) == 0 {
+return nil, errors.New("no FSLayers in manifest")
+}
+if err := fixManifestLayers(mschema1); err != nil {
+return nil, err
+}
-// TODO(runcom): verify manifest schema 1, 2 etc
-//if len(m.FSLayers) != len(m.History) {
-//return nil, fmt.Errorf("length of history not equal to number of layers for %q", ref.String())
-//}
-//if len(m.FSLayers) == 0 {
-//return nil, fmt.Errorf("no FSLayers in manifest for %q", ref.String())
-//}
return mschema1, nil
}
// manifestSchema1FromComponents builds a new manifestSchema1 from the supplied data.
func manifestSchema1FromComponents(ref reference.Named, fsLayers []fsLayersSchema1, history []historySchema1, architecture string) genericManifest {
var name, tag string
if ref != nil { // Well, what to do if it _is_ nil? Most consumers actually don't use these fields nowadays, so we might as well try not supplying them.
name = ref.RemoteName()
if tagged, ok := ref.(reference.NamedTagged); ok {
tag = tagged.Tag()
}
}
return &manifestSchema1{
Name: name,
Tag: tag,
Architecture: architecture,
FSLayers: fsLayers,
History: history,
SchemaVersion: 1,
}
}
func (m *manifestSchema1) serialize() ([]byte, error) {
// docker/distribution requires a signature even if the incoming data uses the nominally unsigned DockerV2Schema1MediaType.
unsigned, err := json.Marshal(*m)
if err != nil {
return nil, err
}
return manifest.AddDummyV2S1Signature(unsigned)
}
func (m *manifestSchema1) manifestMIMEType() string {
return manifest.DockerV2Schema1SignedMediaType
}
// ConfigInfo returns a complete BlobInfo for the separate config object, or a BlobInfo{Digest:""} if there isn't a separate object.
// Note that the config object may not exist in the underlying storage in the return value of UpdatedImage! Use ConfigBlob() below.
func (m *manifestSchema1) ConfigInfo() types.BlobInfo {
return types.BlobInfo{}
}
// ConfigBlob returns the blob described by ConfigInfo, iff ConfigInfo().Digest != ""; nil otherwise.
// The result is cached; it is OK to call this however often you need.
func (m *manifestSchema1) ConfigBlob() ([]byte, error) {
return nil, nil
}
// LayerInfos returns a list of BlobInfos of layers referenced by this image, in order (the root layer first, and then successive layered layers).
// The Digest field is guaranteed to be provided; Size may be -1.
// WARNING: The list may contain duplicates, and they are semantically relevant.
func (m *manifestSchema1) LayerInfos() []types.BlobInfo {
layers := make([]types.BlobInfo, len(m.FSLayers))
for i, layer := range m.FSLayers { // NOTE: This includes empty layers (where m.History.V1Compatibility->ThrowAway)
@@ -59,17 +123,9 @@ func (m *manifestSchema1) LayerInfos() []types.BlobInfo {
return layers
}
-func (m *manifestSchema1) Config() ([]byte, error) {
-return []byte(m.History[0].V1Compatibility), nil
-}
-func (m *manifestSchema1) ImageInspectInfo() (*types.ImageInspectInfo, error) {
+func (m *manifestSchema1) imageInspectInfo() (*types.ImageInspectInfo, error) {
v1 := &v1Image{}
-config, err := m.Config()
-if err != nil {
-return nil, err
-}
-if err := json.Unmarshal(config, v1); err != nil {
+if err := json.Unmarshal([]byte(m.History[0].V1Compatibility), v1); err != nil {
return nil, err
}
return &types.ImageInspectInfo{
@@ -82,12 +138,21 @@ func (m *manifestSchema1) ImageInspectInfo() (*types.ImageInspectInfo, error) {
}, nil
}
-func (m *manifestSchema1) UpdatedManifest(options types.ManifestUpdateOptions) ([]byte, error) {
+// UpdatedImageNeedsLayerDiffIDs returns true iff UpdatedImage(options) needs InformationOnly.LayerDiffIDs.
+// This is a horribly specific interface, but computing InformationOnly.LayerDiffIDs can be very expensive to compute
+// (most importantly it forces us to download the full layers even if they are already present at the destination).
+func (m *manifestSchema1) UpdatedImageNeedsLayerDiffIDs(options types.ManifestUpdateOptions) bool {
+return options.ManifestMIMEType == manifest.DockerV2Schema2MediaType
+}
+// UpdatedImage returns a types.Image modified according to options.
+// This does not change the state of the original Image object.
+func (m *manifestSchema1) UpdatedImage(options types.ManifestUpdateOptions) (types.Image, error) {
copy := *m
if options.LayerInfos != nil {
// Our LayerInfos includes empty layers (where m.History.V1Compatibility->ThrowAway), so expect them to be included here as well.
if len(copy.FSLayers) != len(options.LayerInfos) {
return nil, fmt.Errorf("Error preparing updated manifest: layer count changed from %d to %d", len(copy.FSLayers), len(options.LayerInfos))
return nil, errors.Errorf("Error preparing updated manifest: layer count changed from %d to %d", len(copy.FSLayers), len(options.LayerInfos))
}
for i, info := range options.LayerInfos {
// (docker push) sets up m.History.V1Compatibility->{Id,Parent} based on values of info.Digest,
@@ -96,12 +161,19 @@ func (m *manifestSchema1) UpdatedManifest(options types.ManifestUpdateOptions) (
copy.FSLayers[(len(options.LayerInfos)-1)-i].BlobSum = info.Digest
}
}
-// docker/distribution requires a signature even if the incoming data uses the nominally unsigned DockerV2Schema1MediaType.
-unsigned, err := json.Marshal(copy)
-if err != nil {
-return nil, err
-}
-return manifest.AddDummyV2S1Signature(unsigned)
+switch options.ManifestMIMEType {
+case "": // No conversion, OK
+case manifest.DockerV2Schema1MediaType, manifest.DockerV2Schema1SignedMediaType:
+// We have 2 MIME types for schema 1, which are basically equivalent (even the un-"Signed" MIME type will be rejected if there isn't a signature); so,
+// handle conversions between them by doing nothing.
+case manifest.DockerV2Schema2MediaType:
+return copy.convertToManifestSchema2(options.InformationOnly.LayerInfos, options.InformationOnly.LayerDiffIDs)
+default:
+return nil, errors.Errorf("Conversion of image manifest from %s to %s is not implemented", manifest.DockerV2Schema1SignedMediaType, options.ManifestMIMEType)
+}
+return memoryImageFromManifest(&copy), nil
}
// fixManifestLayers, after validating the supplied manifest
@@ -130,7 +202,7 @@ func fixManifestLayers(manifest *manifestSchema1) error {
}
}
if imgs[len(imgs)-1].Parent != "" {
return errors.New("Invalid parent ID in the base layer of the image.")
return errors.New("Invalid parent ID in the base layer of the image")
}
// check general duplicates to error instead of a deadlock
idmap := make(map[string]struct{})
@@ -138,7 +210,7 @@ func fixManifestLayers(manifest *manifestSchema1) error {
for _, img := range imgs {
// skip IDs that appear after each other, we handle those later
if _, exists := idmap[img.ID]; img.ID != lastID && exists {
return fmt.Errorf("ID %+v appears multiple times in manifest", img.ID)
return errors.Errorf("ID %+v appears multiple times in manifest", img.ID)
}
lastID = img.ID
idmap[lastID] = struct{}{}
@@ -149,7 +221,7 @@ func fixManifestLayers(manifest *manifestSchema1) error {
manifest.FSLayers = append(manifest.FSLayers[:i], manifest.FSLayers[i+1:]...)
manifest.History = append(manifest.History[:i], manifest.History[i+1:]...)
} else if imgs[i].Parent != imgs[i+1].ID {
return fmt.Errorf("Invalid parent ID. Expected %v, got %v.", imgs[i+1].ID, imgs[i].Parent)
return errors.Errorf("Invalid parent ID. Expected %v, got %v", imgs[i+1].ID, imgs[i].Parent)
}
}
return nil
@@ -157,7 +229,98 @@ func fixManifestLayers(manifest *manifestSchema1) error {
func validateV1ID(id string) error {
if ok := validHex.MatchString(id); !ok {
return fmt.Errorf("image ID %q is invalid", id)
return errors.Errorf("image ID %q is invalid", id)
}
return nil
}
// Based on github.com/docker/docker/distribution/pull_v2.go
func (m *manifestSchema1) convertToManifestSchema2(uploadedLayerInfos []types.BlobInfo, layerDiffIDs []digest.Digest) (types.Image, error) {
if len(m.History) == 0 {
// What would this even mean?! Anyhow, the rest of the code depends on fsLayers[0] and history[0] existing.
return nil, errors.Errorf("Cannot convert an image with 0 history entries to %s", manifest.DockerV2Schema2MediaType)
}
if len(m.History) != len(m.FSLayers) {
return nil, errors.Errorf("Inconsistent schema 1 manifest: %d history entries, %d fsLayers entries", len(m.History), len(m.FSLayers))
}
if len(uploadedLayerInfos) != len(m.FSLayers) {
return nil, errors.Errorf("Internal error: uploaded %d blobs, but schema1 manifest has %d fsLayers", len(uploadedLayerInfos), len(m.FSLayers))
}
if len(layerDiffIDs) != len(m.FSLayers) {
return nil, errors.Errorf("Internal error: collected %d DiffID values, but schema1 manifest has %d fsLayers", len(layerDiffIDs), len(m.FSLayers))
}
rootFS := rootFS{
Type: "layers",
DiffIDs: []digest.Digest{},
BaseLayer: "",
}
var layers []descriptor
history := make([]imageHistory, len(m.History))
for v1Index := len(m.History) - 1; v1Index >= 0; v1Index-- {
v2Index := (len(m.History) - 1) - v1Index
var v1compat v1Compatibility
if err := json.Unmarshal([]byte(m.History[v1Index].V1Compatibility), &v1compat); err != nil {
return nil, errors.Wrapf(err, "Error decoding history entry %d", v1Index)
}
history[v2Index] = imageHistory{
Created: v1compat.Created,
Author: v1compat.Author,
CreatedBy: strings.Join(v1compat.ContainerConfig.Cmd, " "),
Comment: v1compat.Comment,
EmptyLayer: v1compat.ThrowAway,
}
if !v1compat.ThrowAway {
layers = append(layers, descriptor{
MediaType: "application/vnd.docker.image.rootfs.diff.tar.gzip",
Size: uploadedLayerInfos[v2Index].Size,
Digest: m.FSLayers[v1Index].BlobSum,
})
rootFS.DiffIDs = append(rootFS.DiffIDs, layerDiffIDs[v2Index])
}
}
configJSON, err := configJSONFromV1Config([]byte(m.History[0].V1Compatibility), rootFS, history)
if err != nil {
return nil, err
}
configDescriptor := descriptor{
MediaType: "application/vnd.docker.container.image.v1+json",
Size: int64(len(configJSON)),
Digest: digest.FromBytes(configJSON),
}
m2 := manifestSchema2FromComponents(configDescriptor, nil, configJSON, layers)
return memoryImageFromManifest(m2), nil
}
func configJSONFromV1Config(v1ConfigJSON []byte, rootFS rootFS, history []imageHistory) ([]byte, error) {
// github.com/docker/docker/image/v1/imagev1.go:MakeConfigFromV1Config unmarshals and re-marshals the input if docker_version is < 1.8.3 to remove blank fields;
// we don't do that here. FIXME? Should we? AFAICT it would only affect the digest value of the schema2 manifest, and we don't particularly need that to be
// a consistently reproducible value.
// Preserve everything we don't specifically know about.
// (This must be a *json.RawMessage, even though *[]byte is fairly redundant, because only *RawMessage implements json.Marshaler.)
rawContents := map[string]*json.RawMessage{}
if err := json.Unmarshal(v1ConfigJSON, &rawContents); err != nil { // We have already unmarshaled it before, using a more detailed schema?!
return nil, err
}
delete(rawContents, "id")
delete(rawContents, "parent")
delete(rawContents, "Size")
delete(rawContents, "parent_id")
delete(rawContents, "layer_id")
delete(rawContents, "throwaway")
updates := map[string]interface{}{"rootfs": rootFS, "history": history}
for field, value := range updates {
encoded, err := json.Marshal(value)
if err != nil {
return nil, err
}
rawContents[field] = (*json.RawMessage)(&encoded)
}
return json.Marshal(rawContents)
}


@@ -1,25 +1,47 @@
package image
import (
"bytes"
"crypto/sha256"
"encoding/hex"
"encoding/json"
"fmt"
"io/ioutil"
"strings"
"github.com/Sirupsen/logrus"
"github.com/containers/image/manifest"
"github.com/containers/image/types"
"github.com/opencontainers/go-digest"
imgspecv1 "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/pkg/errors"
)
// gzippedEmptyLayer is a gzip-compressed version of an empty tar file (1024 NULL bytes)
// This comes from github.com/docker/distribution/manifest/schema1/config_builder.go; there is
// a non-zero embedded timestamp; we could zero that, but that would just waste storage space
// in registries, so let's use the same values.
var gzippedEmptyLayer = []byte{
31, 139, 8, 0, 0, 9, 110, 136, 0, 255, 98, 24, 5, 163, 96, 20, 140, 88,
0, 8, 0, 0, 255, 255, 46, 175, 181, 239, 0, 4, 0, 0,
}
// gzippedEmptyLayerDigest is a digest of gzippedEmptyLayer
const gzippedEmptyLayerDigest = digest.Digest("sha256:a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4")
type descriptor struct {
-MediaType string `json:"mediaType"`
-Size int64 `json:"size"`
-Digest string `json:"digest"`
+MediaType string `json:"mediaType"`
+Size int64 `json:"size"`
+Digest digest.Digest `json:"digest"`
+URLs []string `json:"urls,omitempty"`
}
type manifestSchema2 struct {
-src types.ImageSource
-SchemaVersion int `json:"schemaVersion"`
-MediaType string `json:"mediaType"`
-ConfigDescriptor descriptor `json:"config"`
-LayersDescriptors []descriptor `json:"layers"`
+src types.ImageSource // May be nil if configBlob is not nil
+configBlob []byte // If set, corresponds to contents of ConfigDescriptor.
+SchemaVersion int `json:"schemaVersion"`
+MediaType string `json:"mediaType"`
+ConfigDescriptor descriptor `json:"config"`
+LayersDescriptors []descriptor `json:"layers"`
}
func manifestSchema2FromManifest(src types.ImageSource, manifest []byte) (genericManifest, error) {
@@ -30,30 +52,78 @@ func manifestSchema2FromManifest(src types.ImageSource, manifest []byte) (generi
return &v2s2, nil
}
// manifestSchema2FromComponents builds a new manifestSchema2 from the supplied data:
func manifestSchema2FromComponents(config descriptor, src types.ImageSource, configBlob []byte, layers []descriptor) genericManifest {
return &manifestSchema2{
src: src,
configBlob: configBlob,
SchemaVersion: 2,
MediaType: manifest.DockerV2Schema2MediaType,
ConfigDescriptor: config,
LayersDescriptors: layers,
}
}
func (m *manifestSchema2) serialize() ([]byte, error) {
return json.Marshal(*m)
}
func (m *manifestSchema2) manifestMIMEType() string {
return m.MediaType
}
// ConfigInfo returns a complete BlobInfo for the separate config object, or a BlobInfo{Digest:""} if there isn't a separate object.
// Note that the config object may not exist in the underlying storage in the return value of UpdatedImage! Use ConfigBlob() below.
func (m *manifestSchema2) ConfigInfo() types.BlobInfo {
return types.BlobInfo{Digest: m.ConfigDescriptor.Digest, Size: m.ConfigDescriptor.Size}
}
// ConfigBlob returns the blob described by ConfigInfo, iff ConfigInfo().Digest != ""; nil otherwise.
// The result is cached; it is OK to call this however often you need.
func (m *manifestSchema2) ConfigBlob() ([]byte, error) {
if m.configBlob == nil {
if m.src == nil {
return nil, errors.Errorf("Internal error: neither src nor configBlob set in manifestSchema2")
}
stream, _, err := m.src.GetBlob(types.BlobInfo{
Digest: m.ConfigDescriptor.Digest,
Size: m.ConfigDescriptor.Size,
URLs: m.ConfigDescriptor.URLs,
})
if err != nil {
return nil, err
}
defer stream.Close()
blob, err := ioutil.ReadAll(stream)
if err != nil {
return nil, err
}
computedDigest := digest.FromBytes(blob)
if computedDigest != m.ConfigDescriptor.Digest {
return nil, errors.Errorf("Download config.json digest %s does not match expected %s", computedDigest, m.ConfigDescriptor.Digest)
}
m.configBlob = blob
}
return m.configBlob, nil
}
// LayerInfos returns a list of BlobInfos of layers referenced by this image, in order (the root layer first, and then successive layered layers).
// The Digest field is guaranteed to be provided; Size may be -1.
// WARNING: The list may contain duplicates, and they are semantically relevant.
func (m *manifestSchema2) LayerInfos() []types.BlobInfo {
blobs := []types.BlobInfo{}
for _, layer := range m.LayersDescriptors {
-blobs = append(blobs, types.BlobInfo{Digest: layer.Digest, Size: layer.Size})
+blobs = append(blobs, types.BlobInfo{
+Digest: layer.Digest,
+Size: layer.Size,
+URLs: layer.URLs,
+})
}
return blobs
}
-func (m *manifestSchema2) Config() ([]byte, error) {
-rawConfig, _, err := m.src.GetBlob(m.ConfigDescriptor.Digest)
-if err != nil {
-return nil, err
-}
-config, err := ioutil.ReadAll(rawConfig)
-rawConfig.Close()
-return config, err
-}
-func (m *manifestSchema2) ImageInspectInfo() (*types.ImageInspectInfo, error) {
-config, err := m.Config()
+func (m *manifestSchema2) imageInspectInfo() (*types.ImageInspectInfo, error) {
+config, err := m.ConfigBlob()
if err != nil {
return nil, err
}
@@ -70,16 +140,202 @@ func (m *manifestSchema2) ImageInspectInfo() (*types.ImageInspectInfo, error) {
}, nil
}
-func (m *manifestSchema2) UpdatedManifest(options types.ManifestUpdateOptions) ([]byte, error) {
-copy := *m
+// UpdatedImageNeedsLayerDiffIDs returns true iff UpdatedImage(options) needs InformationOnly.LayerDiffIDs.
+// This is a horribly specific interface, but computing InformationOnly.LayerDiffIDs can be very expensive to compute
+// (most importantly it forces us to download the full layers even if they are already present at the destination).
+func (m *manifestSchema2) UpdatedImageNeedsLayerDiffIDs(options types.ManifestUpdateOptions) bool {
+return false
+}
+// UpdatedImage returns a types.Image modified according to options.
+// This does not change the state of the original Image object.
+func (m *manifestSchema2) UpdatedImage(options types.ManifestUpdateOptions) (types.Image, error) {
+copy := *m // NOTE: This is not a deep copy, it still shares slices etc.
if options.LayerInfos != nil {
if len(copy.LayersDescriptors) != len(options.LayerInfos) {
return nil, fmt.Errorf("Error preparing updated manifest: layer count changed from %d to %d", len(copy.LayersDescriptors), len(options.LayerInfos))
return nil, errors.Errorf("Error preparing updated manifest: layer count changed from %d to %d", len(copy.LayersDescriptors), len(options.LayerInfos))
}
copy.LayersDescriptors = make([]descriptor, len(options.LayerInfos))
for i, info := range options.LayerInfos {
copy.LayersDescriptors[i].Digest = info.Digest
copy.LayersDescriptors[i].Size = info.Size
copy.LayersDescriptors[i].URLs = info.URLs
}
}
switch options.ManifestMIMEType {
case "": // No conversion, OK
case manifest.DockerV2Schema1SignedMediaType, manifest.DockerV2Schema1MediaType:
return copy.convertToManifestSchema1(options.InformationOnly.Destination)
case imgspecv1.MediaTypeImageManifest:
return copy.convertToManifestOCI1()
default:
return nil, errors.Errorf("Conversion of image manifest from %s to %s is not implemented", manifest.DockerV2Schema2MediaType, options.ManifestMIMEType)
}
return memoryImageFromManifest(&copy), nil
}
func (m *manifestSchema2) convertToManifestOCI1() (types.Image, error) {
configBlob, err := m.ConfigBlob()
if err != nil {
return nil, err
}
// docker v2s2 and OCI v1 are mostly compatible, but v2s2 contains more fields
// than OCI v1. This unmarshal-then-re-marshal round trip makes sure we drop the
// docker v2s2 fields that aren't needed in OCI v1.
configOCI := &imgspecv1.Image{}
if err := json.Unmarshal(configBlob, configOCI); err != nil {
return nil, err
}
configOCIBytes, err := json.Marshal(configOCI)
if err != nil {
return nil, err
}
config := descriptor{
MediaType: imgspecv1.MediaTypeImageConfig,
Size: int64(len(configOCIBytes)),
Digest: digest.FromBytes(configOCIBytes),
}
layers := make([]descriptor, len(m.LayersDescriptors))
for idx := range layers {
layers[idx] = m.LayersDescriptors[idx]
if m.LayersDescriptors[idx].MediaType == manifest.DockerV2Schema2ForeignLayerMediaType {
layers[idx].MediaType = imgspecv1.MediaTypeImageLayerNonDistributable
} else {
// we assume layers are gzip'ed because docker v2s2 only deals with
// gzip'ed layers. However, OCI has non-gzip'ed layers as well.
layers[idx].MediaType = imgspecv1.MediaTypeImageLayerGzip
}
}
m1 := manifestOCI1FromComponents(config, m.src, configOCIBytes, layers)
return memoryImageFromManifest(m1), nil
}
// Based on docker/distribution/manifest/schema1/config_builder.go
func (m *manifestSchema2) convertToManifestSchema1(dest types.ImageDestination) (types.Image, error) {
configBytes, err := m.ConfigBlob()
if err != nil {
return nil, err
}
imageConfig := &image{}
if err := json.Unmarshal(configBytes, imageConfig); err != nil {
return nil, err
}
// Build fsLayers and History, discarding all configs. We will patch the top-level config in later.
fsLayers := make([]fsLayersSchema1, len(imageConfig.History))
history := make([]historySchema1, len(imageConfig.History))
nonemptyLayerIndex := 0
var parentV1ID string // Set in the loop
v1ID := ""
haveGzippedEmptyLayer := false
if len(imageConfig.History) == 0 {
// What would this even mean?! Anyhow, the rest of the code depends on fsLayers[0] and history[0] existing.
return nil, errors.Errorf("Cannot convert an image with 0 history entries to %s", manifest.DockerV2Schema1SignedMediaType)
}
for v2Index, historyEntry := range imageConfig.History {
parentV1ID = v1ID
v1Index := len(imageConfig.History) - 1 - v2Index
var blobDigest digest.Digest
if historyEntry.EmptyLayer {
if !haveGzippedEmptyLayer {
logrus.Debugf("Uploading empty layer during conversion to schema 1")
info, err := dest.PutBlob(bytes.NewReader(gzippedEmptyLayer), types.BlobInfo{Digest: gzippedEmptyLayerDigest, Size: int64(len(gzippedEmptyLayer))})
if err != nil {
return nil, errors.Wrap(err, "Error uploading empty layer")
}
if info.Digest != gzippedEmptyLayerDigest {
return nil, errors.Errorf("Internal error: Uploaded empty layer has digest %#v instead of %s", info.Digest, gzippedEmptyLayerDigest)
}
haveGzippedEmptyLayer = true
}
blobDigest = gzippedEmptyLayerDigest
} else {
if nonemptyLayerIndex >= len(m.LayersDescriptors) {
return nil, errors.Errorf("Invalid image configuration, needs more than the %d distributed layers", len(m.LayersDescriptors))
}
blobDigest = m.LayersDescriptors[nonemptyLayerIndex].Digest
nonemptyLayerIndex++
}
// AFAICT pull ignores these ID values, at least nowadays, so we could use anything unique, including a simple counter. Use what Docker uses for cargo-cult consistency.
v, err := v1IDFromBlobDigestAndComponents(blobDigest, parentV1ID)
if err != nil {
return nil, err
}
v1ID = v
fakeImage := v1Compatibility{
ID: v1ID,
Parent: parentV1ID,
Comment: historyEntry.Comment,
Created: historyEntry.Created,
Author: historyEntry.Author,
ThrowAway: historyEntry.EmptyLayer,
}
fakeImage.ContainerConfig.Cmd = []string{historyEntry.CreatedBy}
v1CompatibilityBytes, err := json.Marshal(&fakeImage)
if err != nil {
return nil, errors.Errorf("Internal error: Error creating v1compatibility for %#v", fakeImage)
}
fsLayers[v1Index] = fsLayersSchema1{BlobSum: blobDigest}
history[v1Index] = historySchema1{V1Compatibility: string(v1CompatibilityBytes)}
// Note that parentV1ID of the top layer is preserved when exiting this loop
}
// Now patch in real configuration for the top layer (v1Index == 0)
v1ID, err = v1IDFromBlobDigestAndComponents(fsLayers[0].BlobSum, parentV1ID, string(configBytes)) // See above WRT v1ID value generation and cargo-cult consistency.
if err != nil {
return nil, err
}
v1Config, err := v1ConfigFromConfigJSON(configBytes, v1ID, parentV1ID, imageConfig.History[len(imageConfig.History)-1].EmptyLayer)
if err != nil {
return nil, err
}
history[0].V1Compatibility = string(v1Config)
m1 := manifestSchema1FromComponents(dest.Reference().DockerReference(), fsLayers, history, imageConfig.Architecture)
return memoryImageFromManifest(m1), nil
}
func v1IDFromBlobDigestAndComponents(blobDigest digest.Digest, others ...string) (string, error) {
if err := blobDigest.Validate(); err != nil {
return "", err
}
parts := append([]string{blobDigest.Hex()}, others...)
v1IDHash := sha256.Sum256([]byte(strings.Join(parts, " ")))
return hex.EncodeToString(v1IDHash[:]), nil
}
func v1ConfigFromConfigJSON(configJSON []byte, v1ID, parentV1ID string, throwaway bool) ([]byte, error) {
// Preserve everything we don't specifically know about.
// (This must be a *json.RawMessage, even though *[]byte is fairly redundant, because only *RawMessage implements json.Marshaler.)
rawContents := map[string]*json.RawMessage{}
if err := json.Unmarshal(configJSON, &rawContents); err != nil { // We have already unmarshaled it before, using a more detailed schema?!
return nil, err
}
delete(rawContents, "rootfs")
delete(rawContents, "history")
updates := map[string]interface{}{"id": v1ID}
if parentV1ID != "" {
updates["parent"] = parentV1ID
}
if throwaway {
updates["throwaway"] = throwaway
}
for field, value := range updates {
encoded, err := json.Marshal(value)
if err != nil {
return nil, err
}
rawContents[field] = (*json.RawMessage)(&encoded)
}
return json.Marshal(rawContents)
}
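The v1 ID chaining is the subtle part of this conversion; a minimal standalone sketch of the same hashing scheme (package, helper name, and digest values are illustrative, not part of the vendored code):
package example
import (
"crypto/sha256"
"encoding/hex"
"strings"
)
// v1ID mirrors v1IDFromBlobDigestAndComponents: join the blob digest's hex
// with any further components by single spaces, then SHA-256 the result.
func v1ID(blobHex string, others ...string) string {
parts := append([]string{blobHex}, others...)
sum := sha256.Sum256([]byte(strings.Join(parts, " ")))
return hex.EncodeToString(sum[:])
}
// The bottom layer has no parent component; each later ID hashes in its
// parent, and the top layer additionally hashes in the raw config JSON.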


@@ -1,186 +0,0 @@
// Package image consolidates knowledge about various container image formats
// (as opposed to image storage mechanisms, which are handled by types.ImageSource)
// and exposes all of them using a unified interface.
package image
import (
"errors"
"fmt"
"time"
"github.com/containers/image/manifest"
"github.com/containers/image/types"
)
// genericImage is a general set of utilities for working with container images,
// whatever their underlying location (i.e. dockerImageSource-independent).
// Note the existence of skopeo/docker.Image: some instances of a `types.Image`
// may not be a `genericImage` directly. However, most users of `types.Image`
// do not care, and those who care about `skopeo/docker.Image` know they do.
type genericImage struct {
src types.ImageSource
// private cache for Manifest(); nil if not yet known.
cachedManifest []byte
// private cache for the manifest media type w/o having to guess it
// this may be the empty string in case the MIME Type wasn't guessed correctly
// this field is valid only if cachedManifest is not nil
cachedManifestMIMEType string
// private cache for Signatures(); nil if not yet known.
cachedSignatures [][]byte
}
// FromSource returns a types.Image implementation for source.
// The caller must call .Close() on the returned Image.
//
// FromSource “takes ownership” of the input ImageSource and will call src.Close()
// when the image is closed. (This does not prevent callers from using both the
// Image and ImageSource objects simultaneously, but it means that they only need to
// keep a reference to the Image.)
func FromSource(src types.ImageSource) types.Image {
return &genericImage{src: src}
}
// Reference returns the reference used to set up this source, _as specified by the user_
// (not as the image itself, or its underlying storage, claims). This can be used e.g. to determine which public keys are trusted for this image.
func (i *genericImage) Reference() types.ImageReference {
return i.src.Reference()
}
// Close removes resources associated with an initialized Image, if any.
func (i *genericImage) Close() {
i.src.Close()
}
// Manifest is like ImageSource.GetManifest, but the result is cached; it is OK to call this however often you need.
// NOTE: It is essential for signature verification that Manifest returns the manifest from which ConfigInfo and LayerInfos is computed.
func (i *genericImage) Manifest() ([]byte, string, error) {
if i.cachedManifest == nil {
m, mt, err := i.src.GetManifest()
if err != nil {
return nil, "", err
}
i.cachedManifest = m
if mt == "" || mt == "text/plain" {
// Crane registries can return "text/plain".
// This makes no real sense, but it happens
// because requests for manifests are
// redirected to a content distribution
// network which is configured that way.
mt = manifest.GuessMIMEType(i.cachedManifest)
}
i.cachedManifestMIMEType = mt
}
return i.cachedManifest, i.cachedManifestMIMEType, nil
}
// Signatures is like ImageSource.GetSignatures, but the result is cached; it is OK to call this however often you need.
func (i *genericImage) Signatures() ([][]byte, error) {
if i.cachedSignatures == nil {
sigs, err := i.src.GetSignatures()
if err != nil {
return nil, err
}
i.cachedSignatures = sigs
}
return i.cachedSignatures, nil
}
type config struct {
Labels map[string]string
}
type v1Image struct {
// Config is the configuration of the container received from the client
Config *config `json:"config,omitempty"`
// DockerVersion specifies version on which image is built
DockerVersion string `json:"docker_version,omitempty"`
// Created timestamp when image was created
Created time.Time `json:"created"`
// Architecture is the hardware that the image is build and runs on
Architecture string `json:"architecture,omitempty"`
// OS is the operating system used to build and run the image
OS string `json:"os,omitempty"`
}
// will support v1 one day...
type genericManifest interface {
Config() ([]byte, error)
ConfigInfo() types.BlobInfo
LayerInfos() []types.BlobInfo
ImageInspectInfo() (*types.ImageInspectInfo, error) // The caller will need to fill in Layers
UpdatedManifest(types.ManifestUpdateOptions) ([]byte, error)
}
// getParsedManifest parses the manifest into a data structure, cleans it up, and returns it.
// NOTE: The manifest may have been modified in the process; DO NOT reserialize and store the return value
// if you want to preserve the original manifest; use the blob returned by Manifest() directly.
// NOTE: It is essential for signature verification that the object is computed from the same manifest which is returned by Manifest().
func (i *genericImage) getParsedManifest() (genericManifest, error) {
manblob, mt, err := i.Manifest()
if err != nil {
return nil, err
}
switch mt {
// "application/json" is a valid v2s1 value per https://github.com/docker/distribution/blob/master/docs/spec/manifest-v2-1.md .
// This works for now, when nothing else seems to return "application/json"; if that were not true, the mapping/detection might
// need to happen within the ImageSource.
case manifest.DockerV2Schema1MediaType, manifest.DockerV2Schema1SignedMediaType, "application/json":
return manifestSchema1FromManifest(manblob)
case manifest.DockerV2Schema2MediaType:
return manifestSchema2FromManifest(i.src, manblob)
case "":
return nil, errors.New("could not guess manifest media type")
default:
return nil, fmt.Errorf("unsupported manifest media type %s", mt)
}
}
func (i *genericImage) Inspect() (*types.ImageInspectInfo, error) {
// TODO(runcom): unused version param for now, default to docker v2-1
m, err := i.getParsedManifest()
if err != nil {
return nil, err
}
info, err := m.ImageInspectInfo()
if err != nil {
return nil, err
}
layers := m.LayerInfos()
info.Layers = make([]string, len(layers))
for i, layer := range layers {
info.Layers[i] = layer.Digest
}
return info, nil
}
// ConfigInfo returns a complete BlobInfo for the separate config object, or a BlobInfo{Digest:""} if there isn't a separate object.
// NOTE: It is essential for signature verification that ConfigInfo is computed from the same manifest which is returned by Manifest().
func (i *genericImage) ConfigInfo() (types.BlobInfo, error) {
m, err := i.getParsedManifest()
if err != nil {
return types.BlobInfo{}, err
}
return m.ConfigInfo(), nil
}
// LayerInfos returns a list of BlobInfos of layers referenced by this image, in order (the root layer first, and then successive layered layers).
// The Digest field is guaranteed to be provided; Size may be -1.
// NOTE: It is essential for signature verification that LayerInfos is computed from the same manifest which is returned by Manifest().
// WARNING: The list may contain duplicates, and they are semantically relevant.
func (i *genericImage) LayerInfos() ([]types.BlobInfo, error) {
m, err := i.getParsedManifest()
if err != nil {
return nil, err
}
return m.LayerInfos(), nil
}
// UpdatedManifest returns the image's manifest modified according to updateOptions.
// This does not change the state of the Image object.
func (i *genericImage) UpdatedManifest(options types.ManifestUpdateOptions) ([]byte, error) {
m, err := i.getParsedManifest()
if err != nil {
return nil, err
}
return m.UpdatedManifest(options)
}

vendor/github.com/containers/image/image/manifest.go generated vendored Normal file

@@ -0,0 +1,120 @@
package image
import (
"time"
"github.com/containers/image/manifest"
"github.com/containers/image/types"
"github.com/docker/docker/api/types/strslice"
"github.com/opencontainers/go-digest"
imgspecv1 "github.com/opencontainers/image-spec/specs-go/v1"
)
type config struct {
Cmd strslice.StrSlice
Labels map[string]string
}
type v1Image struct {
ID string `json:"id,omitempty"`
Parent string `json:"parent,omitempty"`
Comment string `json:"comment,omitempty"`
Created time.Time `json:"created"`
ContainerConfig *config `json:"container_config,omitempty"`
DockerVersion string `json:"docker_version,omitempty"`
Author string `json:"author,omitempty"`
// Config is the configuration of the container received from the client
Config *config `json:"config,omitempty"`
// Architecture is the hardware that the image is build and runs on
Architecture string `json:"architecture,omitempty"`
// OS is the operating system used to build and run the image
OS string `json:"os,omitempty"`
}
type image struct {
v1Image
History []imageHistory `json:"history,omitempty"`
RootFS *rootFS `json:"rootfs,omitempty"`
}
type imageHistory struct {
Created time.Time `json:"created"`
Author string `json:"author,omitempty"`
CreatedBy string `json:"created_by,omitempty"`
Comment string `json:"comment,omitempty"`
EmptyLayer bool `json:"empty_layer,omitempty"`
}
type rootFS struct {
Type string `json:"type"`
DiffIDs []digest.Digest `json:"diff_ids,omitempty"`
BaseLayer string `json:"base_layer,omitempty"`
}
// genericManifest is an interface for parsing, modifying image manifests and related data.
// Note that the public methods are intended to be a subset of types.Image
// so that embedding a genericManifest into structs works.
// will support v1 one day...
type genericManifest interface {
serialize() ([]byte, error)
manifestMIMEType() string
// ConfigInfo returns a complete BlobInfo for the separate config object, or a BlobInfo{Digest:""} if there isn't a separate object.
// Note that the config object may not exist in the underlying storage in the return value of UpdatedImage! Use ConfigBlob() below.
ConfigInfo() types.BlobInfo
// ConfigBlob returns the blob described by ConfigInfo, iff ConfigInfo().Digest != ""; nil otherwise.
// The result is cached; it is OK to call this however often you need.
ConfigBlob() ([]byte, error)
// LayerInfos returns a list of BlobInfos of layers referenced by this image, in order (the root layer first, and then successive layered layers).
// The Digest field is guaranteed to be provided; Size may be -1.
// WARNING: The list may contain duplicates, and they are semantically relevant.
LayerInfos() []types.BlobInfo
imageInspectInfo() (*types.ImageInspectInfo, error) // To be called by inspectManifest
// UpdatedImageNeedsLayerDiffIDs returns true iff UpdatedImage(options) needs InformationOnly.LayerDiffIDs.
// This is a horribly specific interface, but computing InformationOnly.LayerDiffIDs can be very expensive
// (most importantly it forces us to download the full layers even if they are already present at the destination).
UpdatedImageNeedsLayerDiffIDs(options types.ManifestUpdateOptions) bool
// UpdatedImage returns a types.Image modified according to options.
// This does not change the state of the original Image object.
UpdatedImage(options types.ManifestUpdateOptions) (types.Image, error)
}
func manifestInstanceFromBlob(src types.ImageSource, manblob []byte, mt string) (genericManifest, error) {
switch mt {
// "application/json" is a valid v2s1 value per https://github.com/docker/distribution/blob/master/docs/spec/manifest-v2-1.md .
// This works for now, when nothing else seems to return "application/json"; if that were not true, the mapping/detection might
// need to happen within the ImageSource.
case manifest.DockerV2Schema1MediaType, manifest.DockerV2Schema1SignedMediaType, "application/json":
return manifestSchema1FromManifest(manblob)
case imgspecv1.MediaTypeImageManifest:
return manifestOCI1FromManifest(src, manblob)
case manifest.DockerV2Schema2MediaType:
return manifestSchema2FromManifest(src, manblob)
case manifest.DockerV2ListMediaType:
return manifestSchema2FromManifestList(src, manblob)
default:
// If it's not a recognized manifest media type, or we have failed determining the type, we'll try one last time
// to deserialize using v2s1 as per https://github.com/docker/distribution/blob/master/manifests.go#L108
// and https://github.com/docker/distribution/blob/master/manifest/schema1/manifest.go#L50
//
// Crane registries can also return "text/plain", or pretty much anything else depending on a file extension “recognized” in the tag.
// This makes no real sense, but it happens
// because requests for manifests are
// redirected to a content distribution
// network which is configured that way. See https://bugzilla.redhat.com/show_bug.cgi?id=1389442
return manifestSchema1FromManifest(manblob)
}
}
// inspectManifest is an implementation of types.Image.Inspect
func inspectManifest(m genericManifest) (*types.ImageInspectInfo, error) {
info, err := m.imageInspectInfo()
if err != nil {
return nil, err
}
layers := m.LayerInfos()
info.Layers = make([]string, len(layers))
for i, layer := range layers {
info.Layers[i] = layer.Digest.String()
}
return info, nil
}
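A rough illustration of the fallback above: GuessMIMEType returns a recognized MIME type, or "" for anything else, in which case manifestInstanceFromBlob hands the blob to the v2s1 parser as a last resort (sketch only; the sample blobs and function name are hypothetical):
package example
import (
"fmt"
"github.com/containers/image/manifest"
)
func guessExample() {
v2s2 := []byte(`{"schemaVersion": 2, "mediaType": "application/vnd.docker.distribution.manifest.v2+json"}`)
fmt.Println(manifest.GuessMIMEType(v2s2)) // the v2s2 MIME type
fmt.Println(manifest.GuessMIMEType([]byte(`{"foo": 1}`))) // "" — unrecognized
}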

vendor/github.com/containers/image/image/memory.go generated vendored Normal file

@@ -0,0 +1,70 @@
package image
import (
"github.com/pkg/errors"
"github.com/containers/image/types"
)
// memoryImage is a mostly complete implementation of types.Image assembled from data
// created in memory, used primarily as a return value of types.Image.UpdatedImage
// as a way to carry various structured information in a type-safe and easy-to-use way.
// Note that this _only_ carries the immediate metadata; it is _not_ a stand-alone
// collection of all related information, e.g. there is no way to get layer blobs
// from a memoryImage.
type memoryImage struct {
genericManifest
serializedManifest []byte // A private cache for Manifest()
}
func memoryImageFromManifest(m genericManifest) types.Image {
return &memoryImage{
genericManifest: m,
serializedManifest: nil,
}
}
// Reference returns the reference used to set up this source, _as specified by the user_
// (not as the image itself, or its underlying storage, claims). This can be used e.g. to determine which public keys are trusted for this image.
func (i *memoryImage) Reference() types.ImageReference {
// It would really be inappropriate to return the ImageReference of the image this was based on.
return nil
}
// Close removes resources associated with an initialized UnparsedImage, if any.
func (i *memoryImage) Close() {
}
// Size returns the size of the image as stored, if known, or -1 if not.
func (i *memoryImage) Size() (int64, error) {
return -1, nil
}
// Manifest is like ImageSource.GetManifest, but the result is cached; it is OK to call this however often you need.
func (i *memoryImage) Manifest() ([]byte, string, error) {
if i.serializedManifest == nil {
m, err := i.genericManifest.serialize()
if err != nil {
return nil, "", err
}
i.serializedManifest = m
}
return i.serializedManifest, i.genericManifest.manifestMIMEType(), nil
}
// Signatures is like ImageSource.GetSignatures, but the result is cached; it is OK to call this however often you need.
func (i *memoryImage) Signatures() ([][]byte, error) {
// Modifying an image invalidates signatures; a caller asking the updated image for signatures
// is probably confused.
return nil, errors.New("Internal error: Image.Signatures() is not supported for images modified in memory")
}
// Inspect returns various information for (skopeo inspect) parsed from the manifest and configuration.
func (i *memoryImage) Inspect() (*types.ImageInspectInfo, error) {
return inspectManifest(i.genericManifest)
}
// IsMultiImage returns true if the image's manifest is a list of images, false otherwise.
func (i *memoryImage) IsMultiImage() bool {
return false
}
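A hedged sketch of how these in-memory images surface in practice, assuming an existing types.Image img obtained elsewhere (the conversion target is just an example):
package example
import (
"github.com/containers/image/types"
imgspecv1 "github.com/opencontainers/image-spec/specs-go/v1"
)
func updatedExample(img types.Image) error {
// Conversions hand back a memoryImage: its manifest is serialized on demand...
updated, err := img.UpdatedImage(types.ManifestUpdateOptions{
ManifestMIMEType: imgspecv1.MediaTypeImageManifest,
})
if err != nil {
return err
}
if _, _, err := updated.Manifest(); err != nil { // served from memory
return err
}
_, err = updated.Signatures() // ...but this always errors, by design
return err
}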

vendor/github.com/containers/image/image/oci.go generated vendored Normal file

@@ -0,0 +1,165 @@
package image
import (
"encoding/json"
"io/ioutil"
"github.com/containers/image/manifest"
"github.com/containers/image/types"
"github.com/opencontainers/go-digest"
imgspecv1 "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/pkg/errors"
)
type manifestOCI1 struct {
src types.ImageSource // May be nil if configBlob is not nil
configBlob []byte // If set, corresponds to contents of ConfigDescriptor.
SchemaVersion int `json:"schemaVersion"`
ConfigDescriptor descriptor `json:"config"`
LayersDescriptors []descriptor `json:"layers"`
}
func manifestOCI1FromManifest(src types.ImageSource, manifest []byte) (genericManifest, error) {
oci := manifestOCI1{src: src}
if err := json.Unmarshal(manifest, &oci); err != nil {
return nil, err
}
return &oci, nil
}
// manifestOCI1FromComponents builds a new manifestOCI1 from the supplied data:
func manifestOCI1FromComponents(config descriptor, src types.ImageSource, configBlob []byte, layers []descriptor) genericManifest {
return &manifestOCI1{
src: src,
configBlob: configBlob,
SchemaVersion: 2,
ConfigDescriptor: config,
LayersDescriptors: layers,
}
}
func (m *manifestOCI1) serialize() ([]byte, error) {
return json.Marshal(*m)
}
func (m *manifestOCI1) manifestMIMEType() string {
return imgspecv1.MediaTypeImageManifest
}
// ConfigInfo returns a complete BlobInfo for the separate config object, or a BlobInfo{Digest:""} if there isn't a separate object.
// Note that the config object may not exist in the underlying storage in the return value of UpdatedImage! Use ConfigBlob() below.
func (m *manifestOCI1) ConfigInfo() types.BlobInfo {
return types.BlobInfo{Digest: m.ConfigDescriptor.Digest, Size: m.ConfigDescriptor.Size}
}
// ConfigBlob returns the blob described by ConfigInfo, iff ConfigInfo().Digest != ""; nil otherwise.
// The result is cached; it is OK to call this however often you need.
func (m *manifestOCI1) ConfigBlob() ([]byte, error) {
if m.configBlob == nil {
if m.src == nil {
return nil, errors.Errorf("Internal error: neither src nor configBlob set in manifestOCI1")
}
stream, _, err := m.src.GetBlob(types.BlobInfo{
Digest: m.ConfigDescriptor.Digest,
Size: m.ConfigDescriptor.Size,
URLs: m.ConfigDescriptor.URLs,
})
if err != nil {
return nil, err
}
defer stream.Close()
blob, err := ioutil.ReadAll(stream)
if err != nil {
return nil, err
}
computedDigest := digest.FromBytes(blob)
if computedDigest != m.ConfigDescriptor.Digest {
return nil, errors.Errorf("Download config.json digest %s does not match expected %s", computedDigest, m.ConfigDescriptor.Digest)
}
m.configBlob = blob
}
return m.configBlob, nil
}
// LayerInfos returns a list of BlobInfos of layers referenced by this image, in order (the root layer first, and then successive layered layers).
// The Digest field is guaranteed to be provided; Size may be -1.
// WARNING: The list may contain duplicates, and they are semantically relevant.
func (m *manifestOCI1) LayerInfos() []types.BlobInfo {
blobs := []types.BlobInfo{}
for _, layer := range m.LayersDescriptors {
blobs = append(blobs, types.BlobInfo{Digest: layer.Digest, Size: layer.Size})
}
return blobs
}
func (m *manifestOCI1) imageInspectInfo() (*types.ImageInspectInfo, error) {
config, err := m.ConfigBlob()
if err != nil {
return nil, err
}
v1 := &v1Image{}
if err := json.Unmarshal(config, v1); err != nil {
return nil, err
}
return &types.ImageInspectInfo{
DockerVersion: v1.DockerVersion,
Created: v1.Created,
Labels: v1.Config.Labels,
Architecture: v1.Architecture,
Os: v1.OS,
}, nil
}
// UpdatedImageNeedsLayerDiffIDs returns true iff UpdatedImage(options) needs InformationOnly.LayerDiffIDs.
// This is a horribly specific interface, but computing InformationOnly.LayerDiffIDs can be very expensive
// (most importantly it forces us to download the full layers even if they are already present at the destination).
func (m *manifestOCI1) UpdatedImageNeedsLayerDiffIDs(options types.ManifestUpdateOptions) bool {
return false
}
// UpdatedImage returns a types.Image modified according to options.
// This does not change the state of the original Image object.
func (m *manifestOCI1) UpdatedImage(options types.ManifestUpdateOptions) (types.Image, error) {
copy := *m // NOTE: This is not a deep copy, it still shares slices etc.
if options.LayerInfos != nil {
if len(copy.LayersDescriptors) != len(options.LayerInfos) {
return nil, errors.Errorf("Error preparing updated manifest: layer count changed from %d to %d", len(copy.LayersDescriptors), len(options.LayerInfos))
}
copy.LayersDescriptors = make([]descriptor, len(options.LayerInfos))
for i, info := range options.LayerInfos {
copy.LayersDescriptors[i].Digest = info.Digest
copy.LayersDescriptors[i].Size = info.Size
}
}
switch options.ManifestMIMEType {
case "": // No conversion, OK
case manifest.DockerV2Schema2MediaType:
return copy.convertToManifestSchema2()
default:
return nil, errors.Errorf("Conversion of image manifest from %s to %s is not implemented", imgspecv1.MediaTypeImageManifest, options.ManifestMIMEType)
}
return memoryImageFromManifest(&copy), nil
}
func (m *manifestOCI1) convertToManifestSchema2() (types.Image, error) {
// Create a copy of the descriptor.
config := m.ConfigDescriptor
// The only difference between OCI and DockerSchema2 is the media types. The
// media type of the manifest is handled by manifestSchema2FromComponents.
config.MediaType = manifest.DockerV2Schema2ConfigMediaType
layers := make([]descriptor, len(m.LayersDescriptors))
for idx := range layers {
layers[idx] = m.LayersDescriptors[idx]
layers[idx].MediaType = manifest.DockerV2Schema2LayerMediaType
}
// Rather than copying the ConfigBlob now, we just pass m.src to the
// translated manifest; since the only difference is the media type of the
// descriptors, there is no change to any blob stored in m.src.
m1 := manifestSchema2FromComponents(config, m.src, nil, layers)
return memoryImageFromManifest(m1), nil
}
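Both conversion directions boil down to media-type rewrites. Roughly, as a sketch (these tables are illustrative, not the package's actual representation):
package example
import (
"github.com/containers/image/manifest"
imgspecv1 "github.com/opencontainers/image-spec/specs-go/v1"
)
// Schema 2 → OCI, per convertToManifestOCI1: foreign layers become
// non-distributable; everything else is assumed to be gzip'ed.
var schema2ToOCILayer = map[string]string{
manifest.DockerV2Schema2ForeignLayerMediaType: imgspecv1.MediaTypeImageLayerNonDistributable,
manifest.DockerV2Schema2LayerMediaType:        imgspecv1.MediaTypeImageLayerGzip,
}
// OCI → Schema 2, per convertToManifestSchema2: every layer becomes the
// schema 2 gzip layer type, and the config gets the schema 2 config type.
var ociToSchema2 = map[string]string{
imgspecv1.MediaTypeImageLayerGzip: manifest.DockerV2Schema2LayerMediaType,
imgspecv1.MediaTypeImageConfig:    manifest.DockerV2Schema2ConfigMediaType,
}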

vendor/github.com/containers/image/image/sourced.go generated vendored Normal file

@@ -0,0 +1,90 @@
// Package image consolidates knowledge about various container image formats
// (as opposed to image storage mechanisms, which are handled by types.ImageSource)
// and exposes all of them using a unified interface.
package image
import (
"github.com/containers/image/manifest"
"github.com/containers/image/types"
)
// FromSource returns a types.Image implementation for source.
// The caller must call .Close() on the returned Image.
//
// FromSource “takes ownership” of the input ImageSource and will call src.Close()
// when the image is closed. (This does not prevent callers from using both the
// Image and ImageSource objects simultaneously, but it means that they only need to
// keep a reference to the Image.)
//
// NOTE: If any kind of signature verification should happen, build an UnparsedImage from the value returned by NewImageSource,
// verify that UnparsedImage, and convert it into a real Image via image.FromUnparsedImage instead of calling this function.
func FromSource(src types.ImageSource) (types.Image, error) {
return FromUnparsedImage(UnparsedFromSource(src))
}
// sourcedImage is a general set of utilities for working with container images,
// whatever their underlying location (i.e. dockerImageSource-independent).
// Note the existence of skopeo/docker.Image: some instances of a `types.Image`
// may not be a `sourcedImage` directly. However, most users of `types.Image`
// do not care, and those who care about `skopeo/docker.Image` know they do.
type sourcedImage struct {
*UnparsedImage
manifestBlob []byte
manifestMIMEType string
// genericManifest contains data corresponding to manifestBlob.
// NOTE: The manifest may have been modified in the process; DO NOT reserialize and store genericManifest
// if you want to preserve the original manifest; use manifestBlob directly.
genericManifest
}
// FromUnparsedImage returns a types.Image implementation for unparsed.
// The caller must call .Close() on the returned Image.
//
// FromUnparsedImage “takes ownership” of the input UnparsedImage and will call unparsed.Close()
// when the image is closed. (This does not prevent callers from using both the
// UnparsedImage and ImageSource objects simultaneously, but it means that they only need to
// keep a reference to the Image.)
func FromUnparsedImage(unparsed *UnparsedImage) (types.Image, error) {
// Note that the input parameter above is specifically *image.UnparsedImage, not types.UnparsedImage:
// we want to be able to use unparsed.src. We could make that an explicit interface, but, well,
// this is the only UnparsedImage implementation around, anyway.
// Also, we do not explicitly implement types.Image.Close; we let the implementation fall through to
// unparsed.Close.
// NOTE: It is essential for signature verification that all parsing done in this object happens on the same manifest which is returned by unparsed.Manifest().
manifestBlob, manifestMIMEType, err := unparsed.Manifest()
if err != nil {
return nil, err
}
parsedManifest, err := manifestInstanceFromBlob(unparsed.src, manifestBlob, manifestMIMEType)
if err != nil {
return nil, err
}
return &sourcedImage{
UnparsedImage: unparsed,
manifestBlob: manifestBlob,
manifestMIMEType: manifestMIMEType,
genericManifest: parsedManifest,
}, nil
}
// Size returns the size of the image as stored, if it's known, or -1 if it isn't.
func (i *sourcedImage) Size() (int64, error) {
return -1, nil
}
// Manifest overrides the UnparsedImage.Manifest to always use the fields which we have already fetched.
func (i *sourcedImage) Manifest() ([]byte, string, error) {
return i.manifestBlob, i.manifestMIMEType, nil
}
func (i *sourcedImage) Inspect() (*types.ImageInspectInfo, error) {
return inspectManifest(i.genericManifest)
}
func (i *sourcedImage) IsMultiImage() bool {
return i.manifestMIMEType == manifest.DockerV2ListMediaType
}
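A minimal sketch of the intended call sequence, assuming an already-opened types.ImageSource (verification-sensitive callers would check policy/signatures against the UnparsedImage before parsing; the helper name is hypothetical):
package example
import (
"github.com/containers/image/image"
"github.com/containers/image/types"
)
func openImage(src types.ImageSource) (types.Image, error) {
unparsed := image.UnparsedFromSource(src) // manifest digest checks happen here
// ...signature/policy verification against unparsed would go here...
return image.FromUnparsedImage(unparsed) // closing the Image also closes src
}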

vendor/github.com/containers/image/image/unparsed.go generated vendored Normal file

@@ -0,0 +1,82 @@
package image
import (
"github.com/containers/image/docker/reference"
"github.com/containers/image/manifest"
"github.com/containers/image/types"
"github.com/pkg/errors"
)
// UnparsedImage implements types.UnparsedImage .
type UnparsedImage struct {
src types.ImageSource
cachedManifest []byte // A private cache for Manifest(); nil if not yet known.
// A private cache for Manifest(), may be the empty string if guessing failed.
// Valid iff cachedManifest is not nil.
cachedManifestMIMEType string
cachedSignatures [][]byte // A private cache for Signatures(); nil if not yet known.
}
// UnparsedFromSource returns a types.UnparsedImage implementation for source.
// The caller must call .Close() on the returned UnparsedImage.
//
// UnparsedFromSource “takes ownership” of the input ImageSource and will call src.Close()
// when the image is closed. (This does not prevent callers from using both the
// UnparsedImage and ImageSource objects simultaneously, but it means that they only need to
// keep a reference to the UnparsedImage.)
func UnparsedFromSource(src types.ImageSource) *UnparsedImage {
return &UnparsedImage{src: src}
}
// Reference returns the reference used to set up this source, _as specified by the user_
// (not as the image itself, or its underlying storage, claims). This can be used e.g. to determine which public keys are trusted for this image.
func (i *UnparsedImage) Reference() types.ImageReference {
return i.src.Reference()
}
// Close removes resources associated with an initialized UnparsedImage, if any.
func (i *UnparsedImage) Close() {
i.src.Close()
}
// Manifest is like ImageSource.GetManifest, but the result is cached; it is OK to call this however often you need.
func (i *UnparsedImage) Manifest() ([]byte, string, error) {
if i.cachedManifest == nil {
m, mt, err := i.src.GetManifest()
if err != nil {
return nil, "", err
}
// ImageSource.GetManifest does not do digest verification, but we do;
// this immediately protects also any user of types.Image.
ref := i.Reference().DockerReference()
if ref != nil {
if canonical, ok := ref.(reference.Canonical); ok {
digest := canonical.Digest()
matches, err := manifest.MatchesDigest(m, digest)
if err != nil {
return nil, "", errors.Wrap(err, "Error computing manifest digest")
}
if !matches {
return nil, "", errors.Errorf("Manifest does not match provided manifest digest %s", digest)
}
}
}
i.cachedManifest = m
i.cachedManifestMIMEType = mt
}
return i.cachedManifest, i.cachedManifestMIMEType, nil
}
// Signatures is like ImageSource.GetSignatures, but the result is cached; it is OK to call this however often you need.
func (i *UnparsedImage) Signatures() ([][]byte, error) {
if i.cachedSignatures == nil {
sigs, err := i.src.GetSignatures()
if err != nil {
return nil, err
}
i.cachedSignatures = sigs
}
return i.cachedSignatures, nil
}
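The canonical-reference check above, isolated as a sketch (the helper name and setup are hypothetical):
package example
import (
"github.com/containers/image/docker/reference"
"github.com/containers/image/manifest"
)
// verifyPinned returns true if ref is not digest-pinned, or if blob matches
// the pinned digest; it mirrors the body of UnparsedImage.Manifest above.
func verifyPinned(ref reference.Named, blob []byte) (bool, error) {
canonical, ok := ref.(reference.Canonical)
if !ok {
return true, nil // no digest to enforce
}
return manifest.MatchesDigest(blob, canonical.Digest())
}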


@@ -1,11 +1,10 @@
package manifest
import (
"crypto/sha256"
"encoding/hex"
"encoding/json"
"github.com/docker/libtrust"
"github.com/opencontainers/go-digest"
imgspecv1 "github.com/opencontainers/image-spec/specs-go/v1"
)
@@ -19,8 +18,14 @@ const (
DockerV2Schema1SignedMediaType = "application/vnd.docker.distribution.manifest.v1+prettyjws"
// DockerV2Schema2MediaType MIME type represents Docker manifest schema 2
DockerV2Schema2MediaType = "application/vnd.docker.distribution.manifest.v2+json"
// DockerV2Schema2ConfigMediaType is the MIME type used for schema 2 config blobs.
DockerV2Schema2ConfigMediaType = "application/vnd.docker.container.image.v1+json"
// DockerV2Schema2LayerMediaType is the MIME type used for schema 2 layers.
DockerV2Schema2LayerMediaType = "application/vnd.docker.image.rootfs.diff.tar.gzip"
// DockerV2ListMediaType MIME type represents Docker manifest schema 2 list
DockerV2ListMediaType = "application/vnd.docker.distribution.manifest.list.v2+json"
// DockerV2Schema2ForeignLayerMediaType is the MIME type used for schema 2 foreign layers.
DockerV2Schema2ForeignLayerMediaType = "application/vnd.docker.image.rootfs.foreign.diff.tar.gzip"
)
// DefaultRequestedManifestMIMETypes is a list of MIME types a types.ImageSource
@@ -30,6 +35,7 @@ var DefaultRequestedManifestMIMETypes = []string{
DockerV2Schema2MediaType,
DockerV2Schema1SignedMediaType,
DockerV2Schema1MediaType,
DockerV2ListMediaType,
}
// GuessMIMEType guesses MIME type of a manifest and returns it _if it is recognized_, or "" if unknown or unrecognized.
@@ -65,7 +71,7 @@ func GuessMIMEType(manifest []byte) string {
}
// Digest returns the digest of a docker manifest, with any necessary implied transformations like stripping v1s1 signatures.
func Digest(manifest []byte) (digest.Digest, error) {
if GuessMIMEType(manifest) == DockerV2Schema1SignedMediaType {
sig, err := libtrust.ParsePrettySignature(manifest, "signatures")
if err != nil {
@@ -79,15 +85,14 @@ func Digest(manifest []byte) (string, error) {
}
}
return digest.FromBytes(manifest), nil
}
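With the return type switched to digest.Digest, callers now compare typed values; a small usage sketch (the function name is illustrative):
package example
import (
"fmt"
"github.com/containers/image/manifest"
)
func digestExample(blob []byte) error {
d, err := manifest.Digest(blob) // a typed digest.Digest, e.g. "sha256:..."
if err != nil {
return err
}
ok, err := manifest.MatchesDigest(blob, d) // trivially true for the blob's own digest
fmt.Println(d, ok)
return err
}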
// MatchesDigest returns true iff the manifest matches expectedDigest.
// Error may be set if this returns false.
// Note that this is not doing ConstantTimeCompare; by the time we get here, the cryptographic signature must already have been verified,
// or we are not using a cryptographic channel and the attacker can modify the digest along with the manifest blob.
func MatchesDigest(manifest []byte, expectedDigest digest.Digest) (bool, error) {
// This should eventually support various digest types.
actualDigest, err := Digest(manifest)
if err != nil {


@@ -1,19 +1,17 @@
package layout
import (
"crypto/sha256"
"encoding/hex"
"encoding/json"
"errors"
"fmt"
"io"
"io/ioutil"
"os"
"path/filepath"
"github.com/pkg/errors"
"github.com/containers/image/manifest"
"github.com/containers/image/types"
imgspec "github.com/opencontainers/image-spec/specs-go"
"github.com/opencontainers/go-digest"
imgspecv1 "github.com/opencontainers/image-spec/specs-go/v1"
)
@@ -39,18 +37,23 @@ func (d *ociImageDestination) Close() {
func (d *ociImageDestination) SupportedManifestMIMETypes() []string {
return []string{
imgspecv1.MediaTypeImageManifest,
manifest.DockerV2Schema2MediaType,
}
}
// SupportsSignatures returns an error (to be displayed to the user) if the destination certainly can't store signatures.
// Note: It is still possible for PutSignatures to fail if SupportsSignatures returns nil.
func (d *ociImageDestination) SupportsSignatures() error {
return fmt.Errorf("Pushing signatures for OCI images is not supported")
return errors.Errorf("Pushing signatures for OCI images is not supported")
}
// ShouldCompressLayers returns true iff it is desirable to compress layer blobs written to this destination.
func (d *ociImageDestination) ShouldCompressLayers() bool {
return true
}
// AcceptsForeignLayerURLs returns false iff foreign layers in manifest should be actually
// uploaded to the image destination, true otherwise.
func (d *ociImageDestination) AcceptsForeignLayerURLs() bool {
return false
}
@@ -76,16 +79,16 @@ func (d *ociImageDestination) PutBlob(stream io.Reader, inputInfo types.BlobInfo
}
}()
digester := digest.Canonical.Digester()
tee := io.TeeReader(stream, digester.Hash())
size, err := io.Copy(blobFile, tee)
if err != nil {
return types.BlobInfo{}, err
}
computedDigest := "sha256:" + hex.EncodeToString(h.Sum(nil))
computedDigest := digester.Digest()
if inputInfo.Size != -1 && size != inputInfo.Size {
return types.BlobInfo{}, fmt.Errorf("Size mismatch when copying %s, expected %d, got %d", computedDigest, inputInfo.Size, size)
return types.BlobInfo{}, errors.Errorf("Size mismatch when copying %s, expected %d, got %d", computedDigest, inputInfo.Size, size)
}
if err := blobFile.Sync(); err != nil {
return types.BlobInfo{}, err
@@ -108,56 +111,38 @@ func (d *ociImageDestination) PutBlob(stream io.Reader, inputInfo types.BlobInfo
return types.BlobInfo{Digest: computedDigest, Size: size}, nil
}
func (d *ociImageDestination) HasBlob(info types.BlobInfo) (bool, int64, error) {
if info.Digest == "" {
return false, -1, errors.Errorf("Can not check for a blob with unknown digest")
}
blobPath, err := d.ref.blobPath(info.Digest)
if err != nil {
return false, -1, err
}
finfo, err := os.Stat(blobPath)
if err != nil && os.IsNotExist(err) {
return false, -1, types.ErrBlobNotFound
}
if err != nil {
return false, -1, err
}
return true, finfo.Size(), nil
}
func (d *ociImageDestination) ReapplyBlob(info types.BlobInfo) (types.BlobInfo, error) {
return info, nil
}
func (d *ociImageDestination) PutManifest(m []byte) error {
// TODO(mitr, runcom): this breaks signatures entirely since at this point we're creating a new manifest
// and signatures don't apply anymore. Will fix.
digest, err := manifest.Digest(m)
if err != nil {
return err
}
desc := imgspecv1.Descriptor{}
desc.Digest = digest
// TODO(runcom): be aware and add support for OCI manifest list
desc.MediaType = imgspecv1.MediaTypeImageManifest
desc.Size = int64(len(m))
data, err := json.Marshal(desc)
if err != nil {
return err
@@ -167,7 +152,10 @@ func (d *ociImageDestination) PutManifest(m []byte) error {
if err != nil {
return err
}
if err := ensureParentDirectoryExists(blobPath); err != nil {
return err
}
if err := ioutil.WriteFile(blobPath, m, 0644); err != nil {
return err
}
// TODO(runcom): ugly here?
@@ -197,7 +185,7 @@ func ensureParentDirectoryExists(path string) error {
func (d *ociImageDestination) PutSignatures(signatures [][]byte) error {
if len(signatures) != 0 {
return fmt.Errorf("Pushing signatures for OCI images is not supported")
return errors.Errorf("Pushing signatures for OCI images is not supported")
}
return nil
}
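The digester change above is the generic go-digest streaming pattern; a standalone sketch (the helper name is illustrative):
package example
import (
"io"
"github.com/opencontainers/go-digest"
)
// streamDigest copies src to dst while hashing it, as PutBlob does above:
// no whole-blob buffering, and the result is a typed digest.
func streamDigest(dst io.Writer, src io.Reader) (digest.Digest, int64, error) {
digester := digest.Canonical.Digester()
n, err := io.Copy(dst, io.TeeReader(src, digester.Hash()))
if err != nil {
return "", n, err
}
return digester.Digest(), n, nil
}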


@@ -0,0 +1,94 @@
package layout
import (
"encoding/json"
"io"
"io/ioutil"
"os"
"github.com/containers/image/manifest"
"github.com/containers/image/types"
"github.com/opencontainers/go-digest"
imgspecv1 "github.com/opencontainers/image-spec/specs-go/v1"
)
type ociImageSource struct {
ref ociReference
}
// newImageSource returns an ImageSource for reading from an existing directory.
func newImageSource(ref ociReference) types.ImageSource {
return &ociImageSource{ref: ref}
}
// Reference returns the reference used to set up this source.
func (s *ociImageSource) Reference() types.ImageReference {
return s.ref
}
// Close removes resources associated with an initialized ImageSource, if any.
func (s *ociImageSource) Close() {
}
// GetManifest returns the image's manifest along with its MIME type (which may be empty when it can't be determined but the manifest is available).
// It may use a remote (= slow) service.
func (s *ociImageSource) GetManifest() ([]byte, string, error) {
descriptorPath := s.ref.descriptorPath(s.ref.tag)
data, err := ioutil.ReadFile(descriptorPath)
if err != nil {
return nil, "", err
}
desc := imgspecv1.Descriptor{}
err = json.Unmarshal(data, &desc)
if err != nil {
return nil, "", err
}
manifestPath, err := s.ref.blobPath(digest.Digest(desc.Digest))
if err != nil {
return nil, "", err
}
m, err := ioutil.ReadFile(manifestPath)
if err != nil {
return nil, "", err
}
return m, manifest.GuessMIMEType(m), nil
}
func (s *ociImageSource) GetTargetManifest(digest digest.Digest) ([]byte, string, error) {
manifestPath, err := s.ref.blobPath(digest)
if err != nil {
return nil, "", err
}
m, err := ioutil.ReadFile(manifestPath)
if err != nil {
return nil, "", err
}
return m, manifest.GuessMIMEType(m), nil
}
// GetBlob returns a stream for the specified blob, and the blob's size.
func (s *ociImageSource) GetBlob(info types.BlobInfo) (io.ReadCloser, int64, error) {
path, err := s.ref.blobPath(info.Digest)
if err != nil {
return nil, 0, err
}
r, err := os.Open(path)
if err != nil {
return nil, 0, err
}
fi, err := r.Stat()
if err != nil {
return nil, 0, err
}
return r, fi.Size(), nil
}
func (s *ociImageSource) GetSignatures() ([][]byte, error) {
return [][]byte{}, nil
}


@@ -1,15 +1,17 @@
package layout
import (
"errors"
"fmt"
"path/filepath"
"regexp"
"strings"
"github.com/containers/image/directory/explicitfilepath"
"github.com/containers/image/docker/reference"
"github.com/containers/image/image"
"github.com/containers/image/types"
"github.com/docker/docker/reference"
"github.com/opencontainers/go-digest"
"github.com/pkg/errors"
)
// Transport is an ImageTransport for OCI directories.
@@ -41,16 +43,16 @@ func (t ociTransport) ValidatePolicyConfigurationScope(scope string) error {
dir = scope[:sep]
tag := scope[sep+1:]
if !refRegexp.MatchString(tag) {
return fmt.Errorf("Invalid tag %s", tag)
return errors.Errorf("Invalid tag %s", tag)
}
}
if strings.Contains(dir, ":") {
return fmt.Errorf("Invalid OCI reference %s: path contains a colon", scope)
return errors.Errorf("Invalid OCI reference %s: path contains a colon", scope)
}
if !strings.HasPrefix(dir, "/") {
return fmt.Errorf("Invalid scope %s: must be an absolute path", scope)
return errors.Errorf("Invalid scope %s: must be an absolute path", scope)
}
// Refuse also "/", otherwise "/" and "" would have the same semantics,
// and "" could be unexpectedly shadowed by the "/" entry.
@@ -60,7 +62,7 @@ func (t ociTransport) ValidatePolicyConfigurationScope(scope string) error {
}
cleaned := filepath.Clean(dir)
if cleaned != dir {
return errors.Errorf(`Invalid scope %s: Uses non-canonical path format, perhaps try with path %s`, scope, cleaned)
}
return nil
}
@@ -104,10 +106,10 @@ func NewReference(dir, tag string) (types.ImageReference, error) {
// This is necessary to prevent directory paths returned by PolicyConfigurationNamespaces
// from being ambiguous with values of PolicyConfigurationIdentity.
if strings.Contains(resolved, ":") {
return nil, fmt.Errorf("Invalid OCI reference %s:%s: path %s contains a colon", dir, tag, resolved)
return nil, errors.Errorf("Invalid OCI reference %s:%s: path %s contains a colon", dir, tag, resolved)
}
if !refRegexp.MatchString(tag) {
return nil, fmt.Errorf("Invalid tag %s", tag)
return nil, errors.Errorf("Invalid tag %s", tag)
}
return ociReference{dir: dir, resolvedDir: resolved, tag: tag}, nil
}
@@ -164,10 +166,13 @@ func (ref ociReference) PolicyConfigurationNamespaces() []string {
return res
}
// NewImage returns a types.Image for this reference, possibly specialized for this ImageTransport.
// The caller must call .Close() on the returned Image.
// NOTE: If any kind of signature verification should happen, build an UnparsedImage from the value returned by NewImageSource,
// verify that UnparsedImage, and convert it into a real Image via image.FromUnparsedImage.
func (ref ociReference) NewImage(ctx *types.SystemContext) (types.Image, error) {
return nil, errors.New("Full Image support not implemented for oci: image names")
src := newImageSource(ref)
return image.FromSource(src)
}
// NewImageSource returns a types.ImageSource for this reference,
@@ -175,7 +180,7 @@ func (ref ociReference) NewImage(ctx *types.SystemContext) (types.Image, error)
// nil requestedManifestMIMETypes means manifest.DefaultRequestedManifestMIMETypes.
// The caller must call .Close() on the returned ImageSource.
func (ref ociReference) NewImageSource(ctx *types.SystemContext, requestedManifestMIMETypes []string) (types.ImageSource, error) {
return nil, errors.New("Reading images not implemented for oci: image names")
return newImageSource(ref), nil
}
// NewImageDestination returns a types.ImageDestination for this reference.
@@ -186,7 +191,7 @@ func (ref ociReference) NewImageDestination(ctx *types.SystemContext) (types.Ima
// DeleteImage deletes the named image from the registry, if supported.
func (ref ociReference) DeleteImage(ctx *types.SystemContext) error {
return fmt.Errorf("Deleting images not implemented for oci: images")
return errors.Errorf("Deleting images not implemented for oci: images")
}
// ociLayoutPath returns a path for the oci-layout within a directory using OCI conventions.
@@ -195,12 +200,11 @@ func (ref ociReference) ociLayoutPath() string {
}
// blobPath returns a path for a blob within a directory using OCI image-layout conventions.
func (ref ociReference) blobPath(digest digest.Digest) (string, error) {
if err := digest.Validate(); err != nil {
return "", errors.Wrapf(err, "unexpected digest reference %s", digest)
}
return filepath.Join(ref.dir, "blobs", digest.Algorithm().String(), digest.Hex()), nil
}
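The resulting on-disk layout maps "sha256:abcd..." to <dir>/blobs/sha256/abcd...; the same mapping as a standalone sketch (the helper name is illustrative):
package example
import (
"path/filepath"
"github.com/opencontainers/go-digest"
)
func layoutBlobPath(dir string, d digest.Digest) (string, error) {
if err := d.Validate(); err != nil {
return "", err
}
return filepath.Join(dir, "blobs", d.Algorithm().String(), d.Hex()), nil
}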
// descriptorPath returns a path for the manifest within a directory using OCI conventions.


@@ -4,7 +4,6 @@ import (
"crypto/tls"
"crypto/x509"
"encoding/json"
"errors"
"fmt"
"io/ioutil"
"net"
@@ -14,13 +13,14 @@ import (
"path"
"path/filepath"
"reflect"
"strings"
"time"
"github.com/ghodss/yaml"
"github.com/imdario/mergo"
utilerrors "k8s.io/kubernetes/pkg/util/errors"
"k8s.io/kubernetes/pkg/util/homedir"
utilnet "k8s.io/kubernetes/pkg/util/net"
"github.com/pkg/errors"
"golang.org/x/net/http2"
"k8s.io/client-go/util/homedir"
)
// restTLSClientConfig is a modified copy of k8s.io/kubernetes/pkg/client/restclient.TLSClientConfig.
@@ -348,20 +348,20 @@ func validateClusterInfo(clusterName string, clusterInfo clientcmdCluster) []err
if len(clusterInfo.Server) == 0 {
if len(clusterName) == 0 {
validationErrors = append(validationErrors, fmt.Errorf("default cluster has no server defined"))
validationErrors = append(validationErrors, errors.Errorf("default cluster has no server defined"))
} else {
validationErrors = append(validationErrors, fmt.Errorf("no server found for cluster %q", clusterName))
validationErrors = append(validationErrors, errors.Errorf("no server found for cluster %q", clusterName))
}
}
// Make sure CA data and CA file aren't both specified
if len(clusterInfo.CertificateAuthority) != 0 && len(clusterInfo.CertificateAuthorityData) != 0 {
validationErrors = append(validationErrors, fmt.Errorf("certificate-authority-data and certificate-authority are both specified for %v. certificate-authority-data will override", clusterName))
validationErrors = append(validationErrors, errors.Errorf("certificate-authority-data and certificate-authority are both specified for %v. certificate-authority-data will override", clusterName))
}
if len(clusterInfo.CertificateAuthority) != 0 {
clientCertCA, err := os.Open(clusterInfo.CertificateAuthority)
defer clientCertCA.Close()
if err != nil {
validationErrors = append(validationErrors, fmt.Errorf("unable to read certificate-authority %v for %v due to %v", clusterInfo.CertificateAuthority, clusterName, err))
validationErrors = append(validationErrors, errors.Errorf("unable to read certificate-authority %v for %v due to %v", clusterInfo.CertificateAuthority, clusterName, err))
}
}
@@ -385,36 +385,36 @@ func validateAuthInfo(authInfoName string, authInfo clientcmdAuthInfo) []error {
if len(authInfo.ClientCertificate) != 0 || len(authInfo.ClientCertificateData) != 0 {
// Make sure cert data and file aren't both specified
if len(authInfo.ClientCertificate) != 0 && len(authInfo.ClientCertificateData) != 0 {
validationErrors = append(validationErrors, fmt.Errorf("client-cert-data and client-cert are both specified for %v. client-cert-data will override", authInfoName))
validationErrors = append(validationErrors, errors.Errorf("client-cert-data and client-cert are both specified for %v. client-cert-data will override", authInfoName))
}
// Make sure key data and file aren't both specified
if len(authInfo.ClientKey) != 0 && len(authInfo.ClientKeyData) != 0 {
validationErrors = append(validationErrors, fmt.Errorf("client-key-data and client-key are both specified for %v; client-key-data will override", authInfoName))
validationErrors = append(validationErrors, errors.Errorf("client-key-data and client-key are both specified for %v; client-key-data will override", authInfoName))
}
// Make sure a key is specified
if len(authInfo.ClientKey) == 0 && len(authInfo.ClientKeyData) == 0 {
validationErrors = append(validationErrors, fmt.Errorf("client-key-data or client-key must be specified for %v to use the clientCert authentication method", authInfoName))
validationErrors = append(validationErrors, errors.Errorf("client-key-data or client-key must be specified for %v to use the clientCert authentication method", authInfoName))
}
if len(authInfo.ClientCertificate) != 0 {
clientCertFile, err := os.Open(authInfo.ClientCertificate)
defer clientCertFile.Close()
if err != nil {
validationErrors = append(validationErrors, fmt.Errorf("unable to read client-cert %v for %v due to %v", authInfo.ClientCertificate, authInfoName, err))
validationErrors = append(validationErrors, errors.Errorf("unable to read client-cert %v for %v due to %v", authInfo.ClientCertificate, authInfoName, err))
}
}
if len(authInfo.ClientKey) != 0 {
clientKeyFile, err := os.Open(authInfo.ClientKey)
defer clientKeyFile.Close()
if err != nil {
validationErrors = append(validationErrors, fmt.Errorf("unable to read client-key %v for %v due to %v", authInfo.ClientKey, authInfoName, err))
validationErrors = append(validationErrors, errors.Errorf("unable to read client-key %v for %v due to %v", authInfo.ClientKey, authInfoName, err))
}
}
}
// authPath also provides information for the client to identify the server, so allow multiple auth methods in that case
if (len(methods) > 1) && (!usingAuthPath) {
validationErrors = append(validationErrors, fmt.Errorf("more than one authentication method found for %v; found %v, only one is allowed", authInfoName, methods))
validationErrors = append(validationErrors, errors.Errorf("more than one authentication method found for %v; found %v, only one is allowed", authInfoName, methods))
}
return validationErrors
@@ -450,6 +450,55 @@ func (config *directClientConfig) getCluster() clientcmdCluster {
return mergedClusterInfo
}
// aggregateErr is a modified copy of k8s.io/apimachinery/pkg/util/errors.aggregate.
// This helper implements the error interface. Keeping it private
// prevents people from making an aggregate of 0 errors, which is not
// an error, but does satisfy the error interface.
type aggregateErr []error
// newAggregate is a modified copy of k8s.io/apimachinery/pkg/util/errors.NewAggregate.
// NewAggregate converts a slice of errors into an Aggregate interface, which
// is itself an implementation of the error interface. If the slice is empty,
// this returns nil.
// It checks whether any element of the input error list is nil, to avoid a
// nil pointer panic when Error() is called.
func newAggregate(errlist []error) error {
if len(errlist) == 0 {
return nil
}
// In case the input error list contains nil entries
var errs []error
for _, e := range errlist {
if e != nil {
errs = append(errs, e)
}
}
if len(errs) == 0 {
return nil
}
return aggregateErr(errs)
}
// Error is a modified copy of k8s.io/apimachinery/pkg/util/errors.aggregate.Error.
// Error is part of the error interface.
func (agg aggregateErr) Error() string {
if len(agg) == 0 {
// This should never happen, really.
return ""
}
if len(agg) == 1 {
return agg[0].Error()
}
result := fmt.Sprintf("[%s", agg[0].Error())
for i := 1; i < len(agg); i++ {
result += fmt.Sprintf(", %s", agg[i].Error())
}
result += "]"
return result
}
// REMOVED: aggregateErr.Errors
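As a usage sketch (hypothetical, assuming the same package as the helpers above; errors here is the vendored github.com/pkg/errors, whose errors.New matches the standard library's):
func exampleAggregate() {
	errs := []error{nil, errors.New("first"), nil, errors.New("second")}
	fmt.Println(newAggregate(errs))       // prints: [first, second]
	fmt.Println(newAggregate(nil) == nil) // true: an empty or all-nil list is not an error
}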
// errConfigurationInvalid is a modified copy of k8s.io/kubernetes/pkg/client/unversioned/clientcmd.errConfigurationInvalid.
// errConfigurationInvalid is a set of errors indicating the configuration is invalid.
type errConfigurationInvalid []error
@@ -470,7 +519,7 @@ func newErrConfigurationInvalid(errs []error) error {
// Error implements the error interface
func (e errConfigurationInvalid) Error() string {
return fmt.Sprintf("invalid configuration: %v", utilerrors.NewAggregate(e).Error())
return fmt.Sprintf("invalid configuration: %v", newAggregate(e).Error())
}
// clientConfigLoadingRules is a modified copy of k8s.io/kubernetes/pkg/client/unversioned/clientcmd.ClientConfigLoadingRules
@@ -518,7 +567,7 @@ func (rules *clientConfigLoadingRules) Load() (*clientcmdConfig, error) {
continue
}
if err != nil {
errlist = append(errlist, fmt.Errorf("Error loading config file \"%s\": %v", filename, err))
errlist = append(errlist, errors.Wrapf(err, "Error loading config file \"%s\"", filename))
continue
}
@@ -550,7 +599,7 @@ func (rules *clientConfigLoadingRules) Load() (*clientcmdConfig, error) {
errlist = append(errlist, err)
}
return config, utilerrors.NewAggregate(errlist)
return config, newAggregate(errlist)
}
// loadFromFile is a modified copy of k8s.io/kubernetes/pkg/client/unversioned/clientcmd.LoadFromFile
@@ -623,7 +672,7 @@ func resolveLocalPaths(config *clientcmdConfig) error {
}
base, err := filepath.Abs(filepath.Dir(cluster.LocationOfOrigin))
if err != nil {
return fmt.Errorf("Could not determine the absolute path of config file %s: %v", cluster.LocationOfOrigin, err)
return errors.Wrapf(err, "Could not determine the absolute path of config file %s", cluster.LocationOfOrigin)
}
if err := resolvePaths(getClusterFileReferences(cluster), base); err != nil {
@@ -636,7 +685,7 @@ func resolveLocalPaths(config *clientcmdConfig) error {
}
base, err := filepath.Abs(filepath.Dir(authInfo.LocationOfOrigin))
if err != nil {
return fmt.Errorf("Could not determine the absolute path of config file %s: %v", authInfo.LocationOfOrigin, err)
return errors.Wrapf(err, "Could not determine the absolute path of config file %s", authInfo.LocationOfOrigin)
}
if err := resolvePaths(getAuthInfoFileReferences(authInfo), base); err != nil {
@@ -706,7 +755,7 @@ func restClientFor(config *restConfig) (*url.URL, *http.Client, error) {
// Kubernetes API.
func defaultServerURL(host string, defaultTLS bool) (*url.URL, error) {
if host == "" {
return nil, fmt.Errorf("host must be a URL or a host:port pair")
return nil, errors.Errorf("host must be a URL or a host:port pair")
}
base := host
hostURL, err := url.Parse(base)
@@ -723,7 +772,7 @@ func defaultServerURL(host string, defaultTLS bool) (*url.URL, error) {
return nil, err
}
if hostURL.Path != "" && hostURL.Path != "/" {
return nil, fmt.Errorf("host must be a URL or a host:port pair: %q", base)
return nil, errors.Errorf("host must be a URL or a host:port pair: %q", base)
}
}
@@ -793,12 +842,58 @@ func transportNew(config *restConfig) (http.RoundTripper, error) {
// REMOVED: HTTPWrappersForConfig(config, rt) in favor of the caller setting HTTP headers itself based on restConfig. Only this inlined check remains.
if len(config.Username) != 0 && len(config.BearerToken) != 0 {
return nil, fmt.Errorf("username/password or bearer token may be set, but not both")
return nil, errors.Errorf("username/password or bearer token may be set, but not both")
}
return rt, nil
}
// newProxierWithNoProxyCIDR is a modified copy of k8s.io/apimachinery/pkg/util/net.NewProxierWithNoProxyCIDR.
// NewProxierWithNoProxyCIDR constructs a Proxier function that respects CIDRs in NO_PROXY and delegates if
// no matching CIDRs are found
func newProxierWithNoProxyCIDR(delegate func(req *http.Request) (*url.URL, error)) func(req *http.Request) (*url.URL, error) {
// we wrap the default method, so we only need to perform our check if the NO_PROXY envvar has a CIDR in it
noProxyEnv := os.Getenv("NO_PROXY")
noProxyRules := strings.Split(noProxyEnv, ",")
cidrs := []*net.IPNet{}
for _, noProxyRule := range noProxyRules {
_, cidr, _ := net.ParseCIDR(noProxyRule)
if cidr != nil {
cidrs = append(cidrs, cidr)
}
}
if len(cidrs) == 0 {
return delegate
}
return func(req *http.Request) (*url.URL, error) {
host := req.URL.Host
// for some URLs, the Host is already the host, not the host:port
if net.ParseIP(host) == nil {
var err error
host, _, err = net.SplitHostPort(req.URL.Host)
if err != nil {
return delegate(req)
}
}
ip := net.ParseIP(host)
if ip == nil {
return delegate(req)
}
for _, cidr := range cidrs {
if cidr.Contains(ip) {
return nil, nil
}
}
return delegate(req)
}
}
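A hypothetical usage sketch (the CIDR and request URL are invented); note that NO_PROXY is read once, when the proxier is constructed:
os.Setenv("NO_PROXY", "10.0.0.0/8,internal.example.com")
proxier := newProxierWithNoProxyCIDR(http.ProxyFromEnvironment)
req, _ := http.NewRequest("GET", "http://10.1.2.3:8080/healthz", nil)
proxyURL, err := proxier(req)
// proxyURL == nil and err == nil: 10.1.2.3 is inside 10.0.0.0/8, so the proxy is bypassed.
// Non-IP hosts such as internal.example.com fall through to http.ProxyFromEnvironment,
// which applies the usual NO_PROXY name matching.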
// tlsCacheGet is a modified copy of k8s.io/kubernetes/pkg/client/transport.tlsTransportCache.get.
func tlsCacheGet(config *restConfig) (http.RoundTripper, error) {
// REMOVED: any actual caching
@@ -813,15 +908,23 @@ func tlsCacheGet(config *restConfig) (http.RoundTripper, error) {
return http.DefaultTransport, nil
}
return utilnet.SetTransportDefaults(&http.Transport{ // FIXME??
Proxy: http.ProxyFromEnvironment,
// REMOVED: Call to k8s.io/apimachinery/pkg/util/net.SetTransportDefaults; instead of the generic machinery and conditionals, hard-coded the result here.
t := &http.Transport{
// http.ProxyFromEnvironment doesn't respect CIDRs and that makes it impossible to exclude things like pod and service IPs from proxy settings
// ProxierWithNoProxyCIDR allows CIDR rules in NO_PROXY
Proxy: newProxierWithNoProxyCIDR(http.ProxyFromEnvironment),
TLSHandshakeTimeout: 10 * time.Second,
TLSClientConfig: tlsConfig,
Dial: (&net.Dialer{
Timeout: 30 * time.Second,
KeepAlive: 30 * time.Second,
}).Dial,
}), nil
}
// Allow clients to disable http2 if needed.
if s := os.Getenv("DISABLE_HTTP2"); len(s) == 0 {
_ = http2.ConfigureTransport(t)
}
return t, nil
}
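Note that the check reads inverted relative to the variable name: HTTP/2 is configured only while DISABLE_HTTP2 is unset or empty. A minimal sketch of the same opt-out pattern, using the names from the function above:
t := &http.Transport{TLSClientConfig: tlsConfig}
if os.Getenv("DISABLE_HTTP2") == "" {
	_ = http2.ConfigureTransport(t) // any non-empty DISABLE_HTTP2 skips this and stays on HTTP/1.1
}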
// tlsConfigFor is a modified copy of k8s.io/kubernetes/pkg/client/transport.TLSConfigFor.
@@ -832,7 +935,7 @@ func tlsConfigFor(c *restConfig) (*tls.Config, error) {
return nil, nil
}
if c.HasCA() && c.Insecure {
return nil, fmt.Errorf("specifying a root certificates file with the insecure flag is not allowed")
return nil, errors.Errorf("specifying a root certificates file with the insecure flag is not allowed")
}
if err := loadTLSFiles(c); err != nil {
return nil, err


@@ -4,7 +4,6 @@ import (
"bytes"
"crypto/rand"
"encoding/json"
"errors"
"fmt"
"io"
"io/ioutil"
@@ -17,6 +16,8 @@ import (
"github.com/containers/image/manifest"
"github.com/containers/image/types"
"github.com/containers/image/version"
"github.com/opencontainers/go-digest"
"github.com/pkg/errors"
)
// openshiftClient is configuration for dealing with a single image stream, for reading or writing.
@@ -123,7 +124,7 @@ func (c *openshiftClient) doRequest(method, path string, requestBody []byte) ([]
if statusValid {
return nil, errors.New(status.Message)
}
return nil, fmt.Errorf("HTTP error: status code: %d, body: %s", res.StatusCode, string(body))
return nil, errors.Errorf("HTTP error: status code: %d, body: %s", res.StatusCode, string(body))
}
return body, nil
@@ -150,7 +151,7 @@ func (c *openshiftClient) getImage(imageStreamImageName string) (*image, error)
func (c *openshiftClient) convertDockerImageReference(ref string) (string, error) {
parts := strings.SplitN(ref, "/", 2)
if len(parts) != 2 {
return "", fmt.Errorf("Invalid format of docker reference %s: missing '/'", ref)
return "", errors.Errorf("Invalid format of docker reference %s: missing '/'", ref)
}
return c.ref.dockerReference.Hostname() + "/" + parts[1], nil
}
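A sketch of the rewrite this performs (hostname and reference are hypothetical):
// With c.ref.dockerReference.Hostname() == "registry.example.com":
converted, err := c.convertDockerImageReference("172.30.1.1:5000/ns/app:latest")
// converted == "registry.example.com/ns/app:latest", err == nil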
@@ -196,6 +197,15 @@ func (s *openshiftImageSource) Close() {
}
}
func (s *openshiftImageSource) GetTargetManifest(digest digest.Digest) ([]byte, string, error) {
if err := s.ensureImageIsResolved(); err != nil {
return nil, "", err
}
return s.docker.GetTargetManifest(digest)
}
// GetManifest returns the image's manifest along with its MIME type (which may be empty when it can't be determined but the manifest is available).
// It may use a remote (= slow) service.
func (s *openshiftImageSource) GetManifest() ([]byte, string, error) {
if err := s.ensureImageIsResolved(); err != nil {
return nil, "", err
@@ -204,11 +214,11 @@ func (s *openshiftImageSource) GetManifest() ([]byte, string, error) {
}
// GetBlob returns a stream for the specified blob, and the blobs size (or -1 if unknown).
func (s *openshiftImageSource) GetBlob(digest string) (io.ReadCloser, int64, error) {
func (s *openshiftImageSource) GetBlob(info types.BlobInfo) (io.ReadCloser, int64, error) {
if err := s.ensureImageIsResolved(); err != nil {
return nil, 0, err
}
return s.docker.GetBlob(digest)
return s.docker.GetBlob(info)
}
func (s *openshiftImageSource) GetSignatures() ([][]byte, error) {
@@ -257,7 +267,7 @@ func (s *openshiftImageSource) ensureImageIsResolved() error {
}
}
if te == nil {
return fmt.Errorf("No matching tag found")
return errors.Errorf("No matching tag found")
}
logrus.Debugf("tag event %#v", te)
dockerRefString, err := s.client.convertDockerImageReference(te.DockerImageReference)
@@ -340,6 +350,12 @@ func (d *openshiftImageDestination) ShouldCompressLayers() bool {
return true
}
// AcceptsForeignLayerURLs returns false iff foreign layers in manifest should be actually
// uploaded to the image destination, true otherwise.
func (d *openshiftImageDestination) AcceptsForeignLayerURLs() bool {
return true
}
// PutBlob writes contents of stream and returns data representing the result (with all data filled in).
// inputInfo.Digest can be optionally provided if known; it is not mandatory for the implementation to verify it.
// inputInfo.Size is the expected length of stream, if known.
@@ -350,19 +366,27 @@ func (d *openshiftImageDestination) PutBlob(stream io.Reader, inputInfo types.Bl
return d.docker.PutBlob(stream, inputInfo)
}
func (d *openshiftImageDestination) HasBlob(info types.BlobInfo) (bool, int64, error) {
return d.docker.HasBlob(info)
}
func (d *openshiftImageDestination) ReapplyBlob(info types.BlobInfo) (types.BlobInfo, error) {
return d.docker.ReapplyBlob(info)
}
func (d *openshiftImageDestination) PutManifest(m []byte) error {
manifestDigest, err := manifest.Digest(m)
if err != nil {
return err
}
d.imageStreamImageName = manifestDigest
d.imageStreamImageName = manifestDigest.String()
return d.docker.PutManifest(m)
}
func (d *openshiftImageDestination) PutSignatures(signatures [][]byte) error {
if d.imageStreamImageName == "" {
return fmt.Errorf("Internal error: Unknown manifest digest, can't add signatures")
return errors.Errorf("Internal error: Unknown manifest digest, can't add signatures")
}
// Because image signatures are a shared resource in Atomic Registry, the default upload
// always adds signatures. Eventually we should also allow removing signatures.
@@ -394,7 +418,7 @@ sigExists:
randBytes := make([]byte, 16)
n, err := rand.Read(randBytes)
if err != nil || n != 16 {
return fmt.Errorf("Error generating random signature ID: %v, len %d", err, n)
return errors.Wrapf(err, "Error generating random signature ID, len %d", n)
}
signatureName = fmt.Sprintf("%s@%032x", d.imageStreamImageName, randBytes)
if _, ok := existingSigNames[signatureName]; !ok {


@@ -6,9 +6,10 @@ import (
"strings"
"github.com/containers/image/docker/policyconfiguration"
"github.com/containers/image/docker/reference"
genericImage "github.com/containers/image/image"
"github.com/containers/image/types"
"github.com/docker/docker/reference"
"github.com/pkg/errors"
)
// Transport is an ImageTransport for OpenShift registry-hosted images.
@@ -36,7 +37,7 @@ var scopeRegexp = regexp.MustCompile("^[^/]*(/[^:/]*(/[^:/]*(:[^:/]*)?)?)?$")
// scope passed to this function will not be "", that value is always allowed.
func (t openshiftTransport) ValidatePolicyConfigurationScope(scope string) error {
if scopeRegexp.FindStringIndex(scope) == nil {
return fmt.Errorf("Invalid scope name %s", scope)
return errors.Errorf("Invalid scope name %s", scope)
}
return nil
}
@@ -52,11 +53,11 @@ type openshiftReference struct {
func ParseReference(ref string) (types.ImageReference, error) {
r, err := reference.ParseNamed(ref)
if err != nil {
return nil, fmt.Errorf("failed to parse image reference %q, %v", ref, err)
return nil, errors.Wrapf(err, "failed to parse image reference %q", ref)
}
tagged, ok := r.(reference.NamedTagged)
if !ok {
return nil, fmt.Errorf("invalid image reference %s, %#v", ref, r)
return nil, errors.Errorf("invalid image reference %s, %#v", ref, r)
}
return NewReference(tagged)
}
@@ -65,7 +66,7 @@ func ParseReference(ref string) (types.ImageReference, error) {
func NewReference(dockerRef reference.NamedTagged) (types.ImageReference, error) {
r := strings.SplitN(dockerRef.RemoteName(), "/", 3)
if len(r) != 2 {
return nil, fmt.Errorf("invalid image reference %s", dockerRef.String())
return nil, errors.Errorf("invalid image reference %s", dockerRef.String())
}
return openshiftReference{
namespace: r[0],
@@ -118,14 +119,16 @@ func (ref openshiftReference) PolicyConfigurationNamespaces() []string {
return policyconfiguration.DockerReferenceNamespaces(ref.dockerReference)
}
// NewImage returns a types.Image for this reference.
// NewImage returns a types.Image for this reference, possibly specialized for this ImageTransport.
// The caller must call .Close() on the returned Image.
// NOTE: If any kind of signature verification should happen, build an UnparsedImage from the value returned by NewImageSource,
// verify that UnparsedImage, and convert it into a real Image via image.FromUnparsedImage.
func (ref openshiftReference) NewImage(ctx *types.SystemContext) (types.Image, error) {
src, err := newImageSource(ctx, ref, nil)
if err != nil {
return nil, err
}
return genericImage.FromSource(src), nil
return genericImage.FromSource(src)
}
// NewImageSource returns a types.ImageSource for this reference,
@@ -144,5 +147,5 @@ func (ref openshiftReference) NewImageDestination(ctx *types.SystemContext) (typ
// DeleteImage deletes the named image from the registry, if supported.
func (ref openshiftReference) DeleteImage(ctx *types.SystemContext) error {
return fmt.Errorf("Deleting images not implemented for atomic: images")
return errors.Errorf("Deleting images not implemented for atomic: images")
}


@@ -5,7 +5,9 @@ package signature
import (
"fmt"
"github.com/containers/image/docker/reference"
"github.com/containers/image/manifest"
"github.com/opencontainers/go-digest"
)
// SignDockerManifest returns a signature for manifest as the specified dockerReference,
@@ -15,12 +17,7 @@ func SignDockerManifest(m []byte, dockerReference string, mech SigningMechanism,
if err != nil {
return nil, err
}
sig := privateSignature{
Signature{
DockerManifestDigest: manifestDigest,
DockerReference: dockerReference,
},
}
sig := newUntrustedSignature(manifestDigest, dockerReference)
return sig.sign(mech, keyIdentity)
}
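A hypothetical end-to-end signing call (the manifest bytes and key fingerprint are placeholders; NewGPGSigningMechanism is the package's mechanism constructor, used elsewhere in this package):
func signExample(manifestBytes []byte, keyFingerprint string) ([]byte, error) {
	mech, err := NewGPGSigningMechanism()
	if err != nil {
		return nil, err
	}
	return SignDockerManifest(manifestBytes, "example.com/ns/app:latest", mech, keyFingerprint)
}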
@@ -28,6 +25,10 @@ func SignDockerManifest(m []byte, dockerReference string, mech SigningMechanism,
// using mech.
func VerifyDockerManifestSignature(unverifiedSignature, unverifiedManifest []byte,
expectedDockerReference string, mech SigningMechanism, expectedKeyIdentity string) (*Signature, error) {
expectedRef, err := reference.ParseNamed(expectedDockerReference)
if err != nil {
return nil, err
}
sig, err := verifyAndExtractSignature(mech, unverifiedSignature, signatureAcceptanceRules{
validateKeyIdentity: func(keyIdentity string) error {
if keyIdentity != expectedKeyIdentity {
@@ -36,13 +37,17 @@ func VerifyDockerManifestSignature(unverifiedSignature, unverifiedManifest []byt
return nil
},
validateSignedDockerReference: func(signedDockerReference string) error {
if signedDockerReference != expectedDockerReference {
signedRef, err := reference.ParseNamed(signedDockerReference)
if err != nil {
return InvalidSignatureError{msg: fmt.Sprintf("Invalid docker reference %s in signature", signedDockerReference)}
}
if signedRef.String() != expectedRef.String() {
return InvalidSignatureError{msg: fmt.Sprintf("Docker reference %s does not match %s",
signedDockerReference, expectedDockerReference)}
}
return nil
},
validateSignedDockerManifestDigest: func(signedDockerManifestDigest string) error {
validateSignedDockerManifestDigest: func(signedDockerManifestDigest digest.Digest) error {
matches, err := manifest.MatchesDigest(unverifiedManifest, signedDockerManifestDigest)
if err != nil {
return err


@@ -29,6 +29,23 @@ func validateExactMapKeys(m map[string]interface{}, expectedKeys ...string) erro
return nil
}
// int64Field returns a member fieldName of m, if it is an int64, or an error.
func int64Field(m map[string]interface{}, fieldName string) (int64, error) {
untyped, ok := m[fieldName]
if !ok {
return -1, jsonFormatError(fmt.Sprintf("Field %s missing", fieldName))
}
f, ok := untyped.(float64)
if !ok {
return -1, jsonFormatError(fmt.Sprintf("Field %s is not a number", fieldName))
}
v := int64(f)
if float64(v) != f {
return -1, jsonFormatError(fmt.Sprintf("Field %s is not an integer", fieldName))
}
return v, nil
}
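A usage sketch (input JSON invented; assumes encoding/json is available in this package) showing the round-trip integrality check at work:
var m map[string]interface{}
_ = json.Unmarshal([]byte(`{"timestamp": 1485215118, "skew": 0.5}`), &m)
v, err := int64Field(m, "timestamp") // v == 1485215118, err == nil
_, err = int64Field(m, "skew")       // error: int64(0.5) == 0, and float64(0) != 0.5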
// mapField returns a member fieldName of m, if it is a JSON map, or an error.
func mapField(m map[string]interface{}, fieldName string) (map[string]interface{}, error) {
untyped, ok := m[fieldName]
@@ -50,7 +67,7 @@ func stringField(m map[string]interface{}, fieldName string) (string, error) {
}
v, ok := untyped.(string)
if !ok {
return "", jsonFormatError(fmt.Sprintf("Field %s is not a JSON object", fieldName))
return "", jsonFormatError(fmt.Sprintf("Field %s is not a string", fieldName))
}
return v, nil
}


@@ -4,9 +4,13 @@ package signature
import (
"bytes"
"errors"
"fmt"
"io/ioutil"
"strings"
"github.com/mtrmac/gpgme"
"golang.org/x/crypto/openpgp"
)
// SigningMechanism abstracts a way to sign binary blobs and verify their signatures.
@@ -21,6 +25,12 @@ type SigningMechanism interface {
Sign(input []byte, keyIdentity string) ([]byte, error)
// Verify parses unverifiedSignature and returns the content and the signer's identity
Verify(unverifiedSignature []byte) (contents []byte, keyIdentity string, err error)
// UntrustedSignatureContents returns UNTRUSTED contents of the signature WITHOUT ANY VERIFICATION,
// along with a short identifier of the key used for signing.
// WARNING: The short key identifier (which corresponds to "Key ID" for OpenPGP keys)
// is NOT the same as a "key identity" used in other calls to this interface, and
// the values may have no recognizable relationship if the public key is not available.
UntrustedSignatureContents(untrustedSignature []byte) (untrustedContents []byte, shortKeyIdentifier string, err error)
}
// A GPG/OpenPGP signing mechanism.
@@ -119,3 +129,31 @@ func (m gpgSigningMechanism) Verify(unverifiedSignature []byte) (contents []byte
}
return signedBuffer.Bytes(), sig.Fingerprint, nil
}
// UntrustedSignatureContents returns UNTRUSTED contents of the signature WITHOUT ANY VERIFICATION,
// along with a short identifier of the key used for signing.
// WARNING: The short key identifier (which corresponds to "Key ID" for OpenPGP keys)
// is NOT the same as a "key identity" used in other calls to this interface, and
// the values may have no recognizable relationship if the public key is not available.
func (m gpgSigningMechanism) UntrustedSignatureContents(untrustedSignature []byte) (untrustedContents []byte, shortKeyIdentifier string, err error) {
// This uses the Golang-native OpenPGP implementation instead of gpgme because we are not doing any cryptography.
md, err := openpgp.ReadMessage(bytes.NewReader(untrustedSignature), openpgp.EntityList{}, nil, nil)
if err != nil {
return nil, "", err
}
if !md.IsSigned {
return nil, "", errors.New("The input is not a signature")
}
content, err := ioutil.ReadAll(md.UnverifiedBody)
if err != nil {
// Coverage: An error during reading the body can happen only if
// 1) the message is encrypted, which is not our case (and we don't give ReadMessage the key
// to decrypt the contents anyway), or
// 2) the message is signed AND we give ReadMessage a corresponding public key, which we don't.
return nil, "", err
}
// Uppercase the key ID for minimal consistency with the gpgme-returned fingerprints
// (but note that key ID is a suffix of the fingerprint only for V4 keys, not V3)!
return content, strings.ToUpper(fmt.Sprintf("%016X", md.SignedByKeyId)), nil
}
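A hypothetical caller (signature bytes elided) must treat both results as display-only, untrusted data:
contents, shortKeyID, err := mech.UntrustedSignatureContents(untrustedSigBytes)
if err == nil {
	fmt.Printf("UNVERIFIED claimed contents %q, short key ID %s\n", contents, shortKeyID)
}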


@@ -15,14 +15,15 @@ package signature
import (
"encoding/json"
"errors"
"fmt"
"io/ioutil"
"path/filepath"
"github.com/pkg/errors"
"github.com/containers/image/docker/reference"
"github.com/containers/image/transports"
"github.com/containers/image/types"
"github.com/docker/docker/reference"
)
// systemDefaultPolicyPath is the policy path used for DefaultPolicy().
@@ -41,8 +42,6 @@ func (err InvalidPolicyFormatError) Error() string {
return string(err)
}
// FIXME: NewDefaultPolicy, from default file (or environment if trusted?)
// DefaultPolicy returns the default policy of the system.
// Most applications should be using this method to get the policy configured
// by the system administrator.
@@ -386,7 +385,7 @@ func (pr *prSignedBy) UnmarshalJSON(data []byte) error {
return InvalidPolicyFormatError(fmt.Sprintf("Unexpected policy requirement type \"%s\"", tmp.Type))
}
if signedIdentity == nil {
tmp.SignedIdentity = NewPRMMatchExact()
tmp.SignedIdentity = NewPRMMatchRepoDigestOrExact()
} else {
si, err := newPolicyReferenceMatchFromJSON(signedIdentity)
if err != nil {
@@ -407,7 +406,7 @@ func (pr *prSignedBy) UnmarshalJSON(data []byte) error {
case !gotKeyPath && !gotKeyData:
return InvalidPolicyFormatError("At least one of keyPath and keyData must be specified")
default: // Coverage: This should never happen
return fmt.Errorf("Impossible keyPath/keyData presence combination!?")
return errors.Errorf("Impossible keyPath/keyData presence combination!?")
}
if err != nil {
return err
@@ -501,7 +500,7 @@ func (pr *prSignedBaseLayer) UnmarshalJSON(data []byte) error {
return nil
}
// newPolicyRequirementFromJSON parses JSON data into a PolicyReferenceMatch implementation.
// newPolicyReferenceMatchFromJSON parses JSON data into a PolicyReferenceMatch implementation.
func newPolicyReferenceMatchFromJSON(data []byte) (PolicyReferenceMatch, error) {
var typeField prmCommon
if err := json.Unmarshal(data, &typeField); err != nil {
@@ -511,6 +510,8 @@ func newPolicyReferenceMatchFromJSON(data []byte) (PolicyReferenceMatch, error)
switch typeField.Type {
case prmTypeMatchExact:
res = &prmMatchExact{}
case prmTypeMatchRepoDigestOrExact:
res = &prmMatchRepoDigestOrExact{}
case prmTypeMatchRepository:
res = &prmMatchRepository{}
case prmTypeExactReference:
@@ -561,6 +562,41 @@ func (prm *prmMatchExact) UnmarshalJSON(data []byte) error {
return nil
}
// newPRMMatchRepoDigestOrExact is NewPRMMatchRepoDigestOrExact, except it returns the private type.
func newPRMMatchRepoDigestOrExact() *prmMatchRepoDigestOrExact {
return &prmMatchRepoDigestOrExact{prmCommon{Type: prmTypeMatchRepoDigestOrExact}}
}
// NewPRMMatchRepoDigestOrExact returns a new "matchRepoDigestOrExact" PolicyReferenceMatch.
func NewPRMMatchRepoDigestOrExact() PolicyReferenceMatch {
return newPRMMatchRepoDigestOrExact()
}
// Compile-time check that prmMatchRepoDigestOrExact implements json.Unmarshaler.
var _ json.Unmarshaler = (*prmMatchRepoDigestOrExact)(nil)
// UnmarshalJSON implements the json.Unmarshaler interface.
func (prm *prmMatchRepoDigestOrExact) UnmarshalJSON(data []byte) error {
*prm = prmMatchRepoDigestOrExact{}
var tmp prmMatchRepoDigestOrExact
if err := paranoidUnmarshalJSONObject(data, func(key string) interface{} {
switch key {
case "type":
return &tmp.Type
default:
return nil
}
}); err != nil {
return err
}
if tmp.Type != prmTypeMatchRepoDigestOrExact {
return InvalidPolicyFormatError(fmt.Sprintf("Unexpected policy requirement type \"%s\"", tmp.Type))
}
*prm = *newPRMMatchRepoDigestOrExact()
return nil
}
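A sketch exercising the unmarshaler (same package assumed):
var prm prmMatchRepoDigestOrExact
err := json.Unmarshal([]byte(`{"type": "matchRepoDigestOrExact"}`), &prm) // err == nil
err = json.Unmarshal([]byte(`{"type": "matchExact"}`), &prm)              // rejected: InvalidPolicyFormatError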
// newPRMMatchRepository is NewPRMMatchRepository, except it returns the private type.
func newPRMMatchRepository() *prmMatchRepository {
return &prmMatchRepository{prmCommon{Type: prmTypeMatchRepository}}


@@ -6,10 +6,9 @@
package signature
import (
"fmt"
"github.com/Sirupsen/logrus"
"github.com/containers/image/types"
"github.com/pkg/errors"
)
// PolicyRequirementError is an explanatory text for rejecting a signature or an image.
@@ -54,14 +53,14 @@ type PolicyRequirement interface {
// a container based on this image; use IsRunningImageAllowed instead.
// - Just because a signature is accepted does not automatically mean the contents of the
// signature are authorized to run code as root, or to affect system or cluster configuration.
isSignatureAuthorAccepted(image types.Image, sig []byte) (signatureAcceptanceResult, *Signature, error)
isSignatureAuthorAccepted(image types.UnparsedImage, sig []byte) (signatureAcceptanceResult, *Signature, error)
// isRunningImageAllowed returns true if the requirement allows running an image.
// If it returns false, err must be non-nil, and should be an PolicyRequirementError if evaluation
// succeeded but the result was rejection.
// WARNING: This validates signatures and the manifest, but does not download or validate the
// layers. Users must validate that the layers match their expected digests.
isRunningImageAllowed(image types.Image) (bool, error)
isRunningImageAllowed(image types.UnparsedImage) (bool, error)
}
// PolicyReferenceMatch specifies a set of image identities accepted in PolicyRequirement.
@@ -70,7 +69,7 @@ type PolicyReferenceMatch interface {
// matchesDockerReference decides whether a specific image identity is accepted for an image
// (or, usually, for the image's Reference().DockerReference()). Note that
// image.Reference().DockerReference() may be nil.
matchesDockerReference(image types.Image, signatureDockerReference string) bool
matchesDockerReference(image types.UnparsedImage, signatureDockerReference string) bool
}
// PolicyContext encapsulates a policy and possible cached state
@@ -95,7 +94,7 @@ const (
// changeContextState changes pc.state, or fails if the state is unexpected
func (pc *PolicyContext) changeState(expected, new policyContextState) error {
if pc.state != expected {
return fmt.Errorf(`"Invalid PolicyContext state, expected "%s", found "%s"`, expected, pc.state)
return errors.Errorf(`"Invalid PolicyContext state, expected "%s", found "%s"`, expected, pc.state)
}
pc.state = new
return nil
@@ -174,7 +173,7 @@ func (pc *PolicyContext) requirementsForImageRef(ref types.ImageReference) Polic
// a container based on this image; use IsRunningImageAllowed instead.
// - Just because a signature is accepted does not automatically mean the contents of the
// signature are authorized to run code as root, or to affect system or cluster configuration.
func (pc *PolicyContext) GetSignaturesWithAcceptedAuthor(image types.Image) (sigs []*Signature, finalErr error) {
func (pc *PolicyContext) GetSignaturesWithAcceptedAuthor(image types.UnparsedImage) (sigs []*Signature, finalErr error) {
if err := pc.changeState(pcReady, pcInUse); err != nil {
return nil, err
}
@@ -254,7 +253,7 @@ func (pc *PolicyContext) GetSignaturesWithAcceptedAuthor(image types.Image) (sig
// succeeded but the result was rejection.
// WARNING: This validates signatures and the manifest, but does not download or validate the
// layers. Users must validate that the layers match their expected digests.
func (pc *PolicyContext) IsRunningImageAllowed(image types.Image) (res bool, finalErr error) {
func (pc *PolicyContext) IsRunningImageAllowed(image types.UnparsedImage) (res bool, finalErr error) {
if err := pc.changeState(pcReady, pcInUse); err != nil {
return false, err
}


@@ -7,11 +7,11 @@ import (
"github.com/containers/image/types"
)
func (pr *prSignedBaseLayer) isSignatureAuthorAccepted(image types.Image, sig []byte) (signatureAcceptanceResult, *Signature, error) {
func (pr *prSignedBaseLayer) isSignatureAuthorAccepted(image types.UnparsedImage, sig []byte) (signatureAcceptanceResult, *Signature, error) {
return sarUnknown, nil, nil
}
func (pr *prSignedBaseLayer) isRunningImageAllowed(image types.Image) (bool, error) {
func (pr *prSignedBaseLayer) isRunningImageAllowed(image types.UnparsedImage) (bool, error) {
// FIXME? Reject this at policy parsing time already?
logrus.Errorf("signedBaseLayer not implemented yet!")
return false, PolicyRequirementError("signedBaseLayer not implemented yet!")


@@ -3,25 +3,27 @@
package signature
import (
"errors"
"fmt"
"io/ioutil"
"os"
"strings"
"github.com/pkg/errors"
"github.com/containers/image/manifest"
"github.com/containers/image/types"
"github.com/opencontainers/go-digest"
)
func (pr *prSignedBy) isSignatureAuthorAccepted(image types.Image, sig []byte) (signatureAcceptanceResult, *Signature, error) {
func (pr *prSignedBy) isSignatureAuthorAccepted(image types.UnparsedImage, sig []byte) (signatureAcceptanceResult, *Signature, error) {
switch pr.KeyType {
case SBKeyTypeGPGKeys:
case SBKeyTypeSignedByGPGKeys, SBKeyTypeX509Certificates, SBKeyTypeSignedByX509CAs:
// FIXME? Reject this at policy parsing time already?
return sarRejected, nil, fmt.Errorf(`"Unimplemented "keyType" value "%s"`, string(pr.KeyType))
return sarRejected, nil, errors.Errorf(`"Unimplemented "keyType" value "%s"`, string(pr.KeyType))
default:
// This should never happen, newPRSignedBy ensures KeyType.IsValid()
return sarRejected, nil, fmt.Errorf(`"Unknown "keyType" value "%s"`, string(pr.KeyType))
return sarRejected, nil, errors.Errorf(`"Unknown "keyType" value "%s"`, string(pr.KeyType))
}
if pr.KeyPath != "" && pr.KeyData != nil {
@@ -75,7 +77,7 @@ func (pr *prSignedBy) isSignatureAuthorAccepted(image types.Image, sig []byte) (
}
return nil
},
validateSignedDockerManifestDigest: func(digest string) error {
validateSignedDockerManifestDigest: func(digest digest.Digest) error {
m, _, err := image.Manifest()
if err != nil {
return err
@@ -97,7 +99,7 @@ func (pr *prSignedBy) isSignatureAuthorAccepted(image types.Image, sig []byte) (
return sarAccepted, signature, nil
}
func (pr *prSignedBy) isRunningImageAllowed(image types.Image) (bool, error) {
func (pr *prSignedBy) isRunningImageAllowed(image types.UnparsedImage) (bool, error) {
sigs, err := image.Signatures()
if err != nil {
return false, err
@@ -115,7 +117,7 @@ func (pr *prSignedBy) isRunningImageAllowed(image types.Image) (bool, error) {
// Huh?! This should not happen at all; treat it as any other invalid value.
fallthrough
default:
reason = fmt.Errorf(`Internal error: Unexpected signature verification result "%s"`, string(res))
reason = errors.Errorf(`Internal error: Unexpected signature verification result "%s"`, string(res))
}
rejections = append(rejections, reason)
}


@@ -9,20 +9,20 @@ import (
"github.com/containers/image/types"
)
func (pr *prInsecureAcceptAnything) isSignatureAuthorAccepted(image types.Image, sig []byte) (signatureAcceptanceResult, *Signature, error) {
func (pr *prInsecureAcceptAnything) isSignatureAuthorAccepted(image types.UnparsedImage, sig []byte) (signatureAcceptanceResult, *Signature, error) {
// prInsecureAcceptAnything semantics: Every image is allowed to run,
// but this does not consider the signature as verified.
return sarUnknown, nil, nil
}
func (pr *prInsecureAcceptAnything) isRunningImageAllowed(image types.Image) (bool, error) {
func (pr *prInsecureAcceptAnything) isRunningImageAllowed(image types.UnparsedImage) (bool, error) {
return true, nil
}
func (pr *prReject) isSignatureAuthorAccepted(image types.Image, sig []byte) (signatureAcceptanceResult, *Signature, error) {
func (pr *prReject) isSignatureAuthorAccepted(image types.UnparsedImage, sig []byte) (signatureAcceptanceResult, *Signature, error) {
return sarRejected, nil, PolicyRequirementError(fmt.Sprintf("Any signatures for image %s are rejected by policy.", transports.ImageName(image.Reference())))
}
func (pr *prReject) isRunningImageAllowed(image types.Image) (bool, error) {
func (pr *prReject) isRunningImageAllowed(image types.UnparsedImage) (bool, error) {
return false, PolicyRequirementError(fmt.Sprintf("Running image %s is rejected by policy.", transports.ImageName(image.Reference())))
}


@@ -5,13 +5,13 @@ package signature
import (
"fmt"
"github.com/containers/image/docker/reference"
"github.com/containers/image/transports"
"github.com/containers/image/types"
"github.com/docker/docker/reference"
)
// parseImageAndDockerReference converts an image and a reference string into two parsed entities, failing on any error and handling unidentified images.
func parseImageAndDockerReference(image types.Image, s2 string) (reference.Named, reference.Named, error) {
func parseImageAndDockerReference(image types.UnparsedImage, s2 string) (reference.Named, reference.Named, error) {
r1 := image.Reference().DockerReference()
if r1 == nil {
return nil, nil, PolicyRequirementError(fmt.Sprintf("Docker reference match attempted on image %s with no known Docker reference identity",
@@ -24,7 +24,7 @@ func parseImageAndDockerReference(image types.Image, s2 string) (reference.Named
return r1, r2, nil
}
func (prm *prmMatchExact) matchesDockerReference(image types.Image, signatureDockerReference string) bool {
func (prm *prmMatchExact) matchesDockerReference(image types.UnparsedImage, signatureDockerReference string) bool {
intended, signature, err := parseImageAndDockerReference(image, signatureDockerReference)
if err != nil {
return false
@@ -36,7 +36,30 @@ func (prm *prmMatchExact) matchesDockerReference(image types.Image, signatureDoc
return signature.String() == intended.String()
}
func (prm *prmMatchRepository) matchesDockerReference(image types.Image, signatureDockerReference string) bool {
func (prm *prmMatchRepoDigestOrExact) matchesDockerReference(image types.UnparsedImage, signatureDockerReference string) bool {
intended, signature, err := parseImageAndDockerReference(image, signatureDockerReference)
if err != nil {
return false
}
// Do not add default tags: image.Reference().DockerReference() should contain it already, and signatureDockerReference should be exact; so, verify that now.
if reference.IsNameOnly(signature) {
return false
}
switch intended.(type) {
case reference.NamedTagged: // Includes the case when intended has both a tag and a digest.
return signature.String() == intended.String()
case reference.Canonical:
// We don't actually compare the manifest digest against the signature here; that happens in prSignedBy, via UnparsedImage.Manifest.
// Because UnparsedImage.Manifest verifies intended.Digest() against the manifest, and prSignedBy verifies the signature digest against the manifest,
// we know that the signature digest matches intended.Digest() (though the two may use different digest algorithms).
return signature.Name() == intended.Name()
default: // !reference.IsNameOnly(intended)
return false
}
}
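An illustrative pairing for the Canonical branch (hypothetical references; the digest is sha256 of the empty string, used only as a syntactically valid value):
intended, _ := reference.ParseNamed("example.com/ns/app@sha256:e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855")
signed, _ := reference.ParseNamed("example.com/ns/app:v1")
// Only the repository names are compared here; the digest itself is verified
// against the manifest elsewhere, as the comment above explains.
fmt.Println(signed.Name() == intended.Name()) // true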
func (prm *prmMatchRepository) matchesDockerReference(image types.UnparsedImage, signatureDockerReference string) bool {
intended, signature, err := parseImageAndDockerReference(image, signatureDockerReference)
if err != nil {
return false
@@ -57,7 +80,7 @@ func parseDockerReferences(s1, s2 string) (reference.Named, reference.Named, err
return r1, r2, nil
}
func (prm *prmExactReference) matchesDockerReference(image types.Image, signatureDockerReference string) bool {
func (prm *prmExactReference) matchesDockerReference(image types.UnparsedImage, signatureDockerReference string) bool {
intended, signature, err := parseDockerReferences(prm.DockerReference, signatureDockerReference)
if err != nil {
return false
@@ -69,7 +92,7 @@ func (prm *prmExactReference) matchesDockerReference(image types.Image, signatur
return signature.String() == intended.String()
}
func (prm *prmExactRepository) matchesDockerReference(image types.Image, signatureDockerReference string) bool {
func (prm *prmExactRepository) matchesDockerReference(image types.UnparsedImage, signatureDockerReference string) bool {
intended, signature, err := parseDockerReferences(prm.DockerRepository, signatureDockerReference)
if err != nil {
return false


@@ -6,7 +6,9 @@
package signature
// Policy defines requirements for considering a signature valid.
// NOTE: Keep this in sync with docs/policy.json.md!
// Policy defines requirements for considering a signature, or an image, valid.
type Policy struct {
// Default applies to any image which does not have a matching policy in Transports.
// Note that this can happen even if a matching PolicyTransportScopes exists in Transports
@@ -114,10 +116,11 @@ type prmCommon struct {
type prmTypeIdentifier string
const (
prmTypeMatchExact prmTypeIdentifier = "matchExact"
prmTypeMatchRepository prmTypeIdentifier = "matchRepository"
prmTypeExactReference prmTypeIdentifier = "exactReference"
prmTypeExactRepository prmTypeIdentifier = "exactRepository"
prmTypeMatchExact prmTypeIdentifier = "matchExact"
prmTypeMatchRepoDigestOrExact prmTypeIdentifier = "matchRepoDigestOrExact"
prmTypeMatchRepository prmTypeIdentifier = "matchRepository"
prmTypeExactReference prmTypeIdentifier = "exactReference"
prmTypeExactRepository prmTypeIdentifier = "exactRepository"
)
// prmMatchExact is a PolicyReferenceMatch with type = prmMatchExact: the two references must match exactly.
@@ -125,6 +128,12 @@ type prmMatchExact struct {
prmCommon
}
// prmMatchRepoDigestOrExact is a PolicyReferenceMatch with type = prmMatchRepoDigestOrExact: the two references must match exactly,
// except that digest references are also accepted if the repository name matches (regardless of tag/digest) and the signature applies to the referenced digest
type prmMatchRepoDigestOrExact struct {
prmCommon
}
// prmMatchRepository is a PolicyReferenceMatch with type = prmMatchRepository: the two references must use the same repository, may differ in the tag.
type prmMatchRepository struct {
prmCommon


@@ -4,11 +4,13 @@ package signature
import (
"encoding/json"
"errors"
"fmt"
"time"
"github.com/pkg/errors"
"github.com/containers/image/version"
"github.com/opencontainers/go-digest"
)
const (
@@ -25,37 +27,74 @@ func (err InvalidSignatureError) Error() string {
}
// Signature is a parsed content of a signature.
// The only way to get this structure from a blob should be as a return value from a successful call to verifyAndExtractSignature below.
type Signature struct {
DockerManifestDigest string // FIXME: more precise type?
DockerManifestDigest digest.Digest
DockerReference string // FIXME: more precise type?
}
// Wrap signature to add to it some methods which we don't want to make public.
type privateSignature struct {
Signature
// untrustedSignature is a parsed content of a signature.
type untrustedSignature struct {
UntrustedDockerManifestDigest digest.Digest
UntrustedDockerReference string // FIXME: more precise type?
UntrustedCreatorID *string
// This is intentionally an int64; the native JSON float64 type would allow representing _some_ sub-second precision,
// but not nearly enough (with current timestamp values, a single unit in the last place is on the order of hundreds of nanoseconds).
// So, this is explicitly an int64, and we reject fractional values. If we did need more precise timestamps eventually,
// we would add another field, UntrustedTimestampNS int64.
UntrustedTimestamp *int64
}
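A quick numeric check of that precision claim (standalone sketch): at 2017-era timestamp magnitudes, one float64 unit in the last place is a few hundred nanoseconds.
ts := float64(1485215118) // a January 2017 Unix timestamp, in seconds
ulp := math.Nextafter(ts, math.Inf(1)) - ts
fmt.Printf("ULP is %g s (~%d ns)\n", ulp, int64(ulp*1e9)) // ~2.4e-07 s (~238 ns)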
// Compile-time check that privateSignature implements json.Marshaler
var _ json.Marshaler = (*privateSignature)(nil)
// UntrustedSignatureInformation is information available in an untrusted signature.
// This may be useful when debugging signature verification failures,
// or when managing a set of signatures on a single image.
//
// WARNING: Do not use the contents of this for ANY security decisions,
// and be VERY CAREFUL about showing this information to humans in any way which suggests that these values “are probably” reliable.
// There is NO REASON to expect the values to be correct, or not intentionally misleading
// (including things like “✅ Verified by $authority”)
type UntrustedSignatureInformation struct {
UntrustedDockerManifestDigest digest.Digest
UntrustedDockerReference string // FIXME: more precise type?
UntrustedCreatorID *string
UntrustedTimestamp *time.Time
UntrustedShortKeyIdentifier string
}
// newUntrustedSignature returns an untrustedSignature object with
// the specified primary contents and appropriate metadata.
func newUntrustedSignature(dockerManifestDigest digest.Digest, dockerReference string) untrustedSignature {
// Use intermediate variables for these values so that we can take their addresses.
// Go guarantees that they will have distinct addresses on each invocation.
creatorID := "atomic " + version.Version
timestamp := time.Now().Unix()
return untrustedSignature{
UntrustedDockerManifestDigest: dockerManifestDigest,
UntrustedDockerReference: dockerReference,
UntrustedCreatorID: &creatorID,
UntrustedTimestamp: &timestamp,
}
}
// Compile-time check that untrustedSignature implements json.Marshaler
var _ json.Marshaler = (*untrustedSignature)(nil)
// MarshalJSON implements the json.Marshaler interface.
func (s privateSignature) MarshalJSON() ([]byte, error) {
return s.marshalJSONWithVariables(time.Now().UTC().Unix(), "atomic "+version.Version)
}
// Implementation of MarshalJSON, with a caller-chosen values of the variable items to help testing.
func (s privateSignature) marshalJSONWithVariables(timestamp int64, creatorID string) ([]byte, error) {
if s.DockerManifestDigest == "" || s.DockerReference == "" {
func (s untrustedSignature) MarshalJSON() ([]byte, error) {
if s.UntrustedDockerManifestDigest == "" || s.UntrustedDockerReference == "" {
return nil, errors.New("Unexpected empty signature content")
}
critical := map[string]interface{}{
"type": signatureType,
"image": map[string]string{"docker-manifest-digest": s.DockerManifestDigest},
"identity": map[string]string{"docker-reference": s.DockerReference},
"image": map[string]string{"docker-manifest-digest": s.UntrustedDockerManifestDigest.String()},
"identity": map[string]string{"docker-reference": s.UntrustedDockerReference},
}
optional := map[string]interface{}{
"creator": creatorID,
"timestamp": timestamp,
optional := map[string]interface{}{}
if s.UntrustedCreatorID != nil {
optional["creator"] = *s.UntrustedCreatorID
}
if s.UntrustedTimestamp != nil {
optional["timestamp"] = *s.UntrustedTimestamp
}
signature := map[string]interface{}{
"critical": critical,
@@ -64,11 +103,11 @@ func (s privateSignature) marshalJSONWithVariables(timestamp int64, creatorID st
return json.Marshal(signature)
}
// Compile-time check that privateSignature implements json.Unmarshaler
var _ json.Unmarshaler = (*privateSignature)(nil)
// Compile-time check that untrustedSignature implements json.Unmarshaler
var _ json.Unmarshaler = (*untrustedSignature)(nil)
// UnmarshalJSON implements the json.Unmarshaler interface
func (s *privateSignature) UnmarshalJSON(data []byte) error {
func (s *untrustedSignature) UnmarshalJSON(data []byte) error {
err := s.strictUnmarshalJSON(data)
if err != nil {
if _, ok := err.(jsonFormatError); ok {
@@ -80,7 +119,7 @@ func (s *privateSignature) UnmarshalJSON(data []byte) error {
// strictUnmarshalJSON is UnmarshalJSON, except that it may return the internal jsonFormatError error type.
// Splitting it into a separate function allows us to do the jsonFormatError → InvalidSignatureError in a single place, the caller.
func (s *privateSignature) strictUnmarshalJSON(data []byte) error {
func (s *untrustedSignature) strictUnmarshalJSON(data []byte) error {
var untyped interface{}
if err := json.Unmarshal(data, &untyped); err != nil {
return err
@@ -105,7 +144,20 @@ func (s *privateSignature) strictUnmarshalJSON(data []byte) error {
if err != nil {
return err
}
_ = optional // We don't use anything from here for now.
if _, ok := optional["creator"]; ok {
creatorID, err := stringField(optional, "creator")
if err != nil {
return err
}
s.UntrustedCreatorID = &creatorID
}
if _, ok := optional["timestamp"]; ok {
timestamp, err := int64Field(optional, "timestamp")
if err != nil {
return err
}
s.UntrustedTimestamp = &timestamp
}
t, err := stringField(c, "type")
if err != nil {
@@ -122,11 +174,11 @@ func (s *privateSignature) strictUnmarshalJSON(data []byte) error {
if err := validateExactMapKeys(image, "docker-manifest-digest"); err != nil {
return err
}
digest, err := stringField(image, "docker-manifest-digest")
digestString, err := stringField(image, "docker-manifest-digest")
if err != nil {
return err
}
s.DockerManifestDigest = digest
s.UntrustedDockerManifestDigest = digest.Digest(digestString)
identity, err := mapField(c, "identity")
if err != nil {
@@ -139,13 +191,18 @@ func (s *privateSignature) strictUnmarshalJSON(data []byte) error {
if err != nil {
return err
}
s.DockerReference = reference
s.UntrustedDockerReference = reference
return nil
}
// Sign formats the signature and returns a blob signed using mech and keyIdentity
func (s privateSignature) sign(mech SigningMechanism, keyIdentity string) ([]byte, error) {
// (If it seems surprising that this is a method on untrustedSignature, note that there
// isn't a good reason to think that a key used by the user is trusted by any component
// of the system just because it is a private key; if anything, the presence of a private key
// on the system increases the likelihood of a successful attack on that private key
// on that particular system.)
func (s untrustedSignature) sign(mech SigningMechanism, keyIdentity string) ([]byte, error) {
json, err := json.Marshal(s)
if err != nil {
return nil, err
@@ -157,12 +214,12 @@ func (s privateSignature) sign(mech SigningMechanism, keyIdentity string) ([]byt
// signatureAcceptanceRules specifies how to decide whether an untrusted signature is acceptable.
// We centralize the actual parsing and data extraction in verifyAndExtractSignature; this supplies
// the policy. We use an object instead of supplying func parameters to verifyAndExtractSignature
// because all of the functions have the same type, so there is a risk of exchanging the functions;
// because the functions have the same or similar types, so there is a risk of exchanging the functions;
// named members of this struct are more explicit.
type signatureAcceptanceRules struct {
validateKeyIdentity func(string) error
validateSignedDockerReference func(string) error
validateSignedDockerManifestDigest func(string) error
validateSignedDockerManifestDigest func(digest.Digest) error
}
// verifyAndExtractSignature verifies that unverifiedSignature has been signed, and that its principal components
@@ -176,16 +233,58 @@ func verifyAndExtractSignature(mech SigningMechanism, unverifiedSignature []byte
return nil, err
}
var unmatchedSignature privateSignature
var unmatchedSignature untrustedSignature
if err := json.Unmarshal(signed, &unmatchedSignature); err != nil {
return nil, InvalidSignatureError{msg: err.Error()}
}
if err := rules.validateSignedDockerManifestDigest(unmatchedSignature.DockerManifestDigest); err != nil {
if err := rules.validateSignedDockerManifestDigest(unmatchedSignature.UntrustedDockerManifestDigest); err != nil {
return nil, err
}
if err := rules.validateSignedDockerReference(unmatchedSignature.DockerReference); err != nil {
if err := rules.validateSignedDockerReference(unmatchedSignature.UntrustedDockerReference); err != nil {
return nil, err
}
signature := unmatchedSignature.Signature // Policy OK.
return &signature, nil
// signatureAcceptanceRules have accepted this value.
return &Signature{
DockerManifestDigest: unmatchedSignature.UntrustedDockerManifestDigest,
DockerReference: unmatchedSignature.UntrustedDockerReference,
}, nil
}
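A sketch of wiring the acceptance rules (the permissive closures, mech, and unverifiedSignature are illustrative only; real callers supply the policy checks shown elsewhere in this package):
sig, err := verifyAndExtractSignature(mech, unverifiedSignature, signatureAcceptanceRules{
	validateKeyIdentity:                func(keyIdentity string) error { return nil },
	validateSignedDockerReference:      func(ref string) error { return nil },
	validateSignedDockerManifestDigest: func(d digest.Digest) error { return nil },
})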
// GetUntrustedSignatureInformationWithoutVerifying extracts information available in an untrusted signature,
// WITHOUT doing any cryptographic verification.
// This may be useful when debugging signature verification failures,
// or when managing a set of signatures on a single image.
//
// WARNING: Do not use the contents of this for ANY security decisions,
// and be VERY CAREFUL about showing this information to humans in any way which suggests that these values “are probably” reliable.
// There is NO REASON to expect the values to be correct, or not intentionally misleading
// (including things like “✅ Verified by $authority”)
func GetUntrustedSignatureInformationWithoutVerifying(untrustedSignatureBytes []byte) (*UntrustedSignatureInformation, error) {
// NOTE: This should eventually do format autodetection.
mech, err := NewGPGSigningMechanism()
if err != nil {
return nil, err
}
untrustedContents, shortKeyIdentifier, err := mech.UntrustedSignatureContents(untrustedSignatureBytes)
if err != nil {
return nil, err
}
var untrustedDecodedContents untrustedSignature
if err := json.Unmarshal(untrustedContents, &untrustedDecodedContents); err != nil {
return nil, InvalidSignatureError{msg: err.Error()}
}
var timestamp *time.Time // = nil
if untrustedDecodedContents.UntrustedTimestamp != nil {
ts := time.Unix(*untrustedDecodedContents.UntrustedTimestamp, 0)
timestamp = &ts
}
return &UntrustedSignatureInformation{
UntrustedDockerManifestDigest: untrustedDecodedContents.UntrustedDockerManifestDigest,
UntrustedDockerReference: untrustedDecodedContents.UntrustedDockerReference,
UntrustedCreatorID: untrustedDecodedContents.UntrustedCreatorID,
UntrustedTimestamp: timestamp,
UntrustedShortKeyIdentifier: shortKeyIdentifier,
}, nil
}
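A hypothetical debugging call (the signature blob is elided); every printed field is untrusted:
info, err := GetUntrustedSignatureInformationWithoutVerifying(untrustedBlob)
if err == nil {
	fmt.Printf("UNVERIFIED: digest %s, reference %s, short key ID %s\n",
		info.UntrustedDockerManifestDigest, info.UntrustedDockerReference, info.UntrustedShortKeyIdentifier)
}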


@@ -0,0 +1,570 @@
package storage
import (
"bytes"
"encoding/json"
"io"
"io/ioutil"
"time"
"github.com/pkg/errors"
"github.com/Sirupsen/logrus"
"github.com/containers/image/image"
"github.com/containers/image/manifest"
"github.com/containers/image/types"
"github.com/containers/storage/pkg/archive"
"github.com/containers/storage/pkg/ioutils"
"github.com/containers/storage/storage"
ddigest "github.com/opencontainers/go-digest"
)
var (
// ErrBlobDigestMismatch is returned when PutBlob() is given a blob
// with a digest-based name that doesn't match its contents.
ErrBlobDigestMismatch = errors.New("blob digest mismatch")
// ErrBlobSizeMismatch is returned when PutBlob() is given a blob
// with an expected size that doesn't match the reader.
ErrBlobSizeMismatch = errors.New("blob size mismatch")
// ErrNoManifestLists is returned when GetTargetManifest() is
// called.
ErrNoManifestLists = errors.New("manifest lists are not supported by this transport")
// ErrNoSuchImage is returned when we attempt to access an image which
// doesn't exist in the storage area.
ErrNoSuchImage = storage.ErrNotAnImage
)
type storageImageSource struct {
imageRef storageReference
Tag string `json:"tag,omitempty"`
Created time.Time `json:"created-time,omitempty"`
ID string `json:"id"`
BlobList []types.BlobInfo `json:"blob-list,omitempty"` // Ordered list of every blob the image has been told to handle
Layers map[ddigest.Digest][]string `json:"layers,omitempty"` // Map from digests of blobs to lists of layer IDs
LayerPosition map[ddigest.Digest]int `json:"-"` // Where we are in reading a blob's layers
SignatureSizes []int `json:"signature-sizes"` // List of sizes of each signature slice
}
type storageImageDestination struct {
imageRef storageReference
Tag string `json:"tag,omitempty"`
Created time.Time `json:"created-time,omitempty"`
ID string `json:"id"`
BlobList []types.BlobInfo `json:"blob-list,omitempty"` // Ordered list of every blob the image has been told to handle
Layers map[ddigest.Digest][]string `json:"layers,omitempty"` // Map from digests of blobs to lists of layer IDs
BlobData map[ddigest.Digest][]byte `json:"-"` // Map from names of blobs that aren't layers to contents, temporary
Manifest []byte `json:"-"` // Manifest contents, temporary
Signatures []byte `json:"-"` // Signature contents, temporary
SignatureSizes []int `json:"signature-sizes"` // List of sizes of each signature slice
}
type storageLayerMetadata struct {
Digest string `json:"digest,omitempty"`
Size int64 `json:"size"`
CompressedSize int64 `json:"compressed-size,omitempty"`
}
type storageImage struct {
types.Image
size int64
}
// newImageSource sets us up to read out an image, which needs to already exist.
func newImageSource(imageRef storageReference) (*storageImageSource, error) {
id := imageRef.resolveID()
if id == "" {
logrus.Errorf("no image matching reference %q found", imageRef.StringWithinTransport())
return nil, ErrNoSuchImage
}
img, err := imageRef.transport.store.GetImage(id)
if err != nil {
return nil, errors.Wrapf(err, "error reading image %q", id)
}
image := &storageImageSource{
imageRef: imageRef,
Created: time.Now(),
ID: img.ID,
BlobList: []types.BlobInfo{},
Layers: make(map[ddigest.Digest][]string),
LayerPosition: make(map[ddigest.Digest]int),
SignatureSizes: []int{},
}
if err := json.Unmarshal([]byte(img.Metadata), image); err != nil {
return nil, errors.Wrap(err, "error decoding metadata for source image")
}
return image, nil
}
// newImageDestination sets us up to write a new image.
func newImageDestination(imageRef storageReference) (*storageImageDestination, error) {
image := &storageImageDestination{
imageRef: imageRef,
Tag: imageRef.reference,
Created: time.Now(),
ID: imageRef.id,
BlobList: []types.BlobInfo{},
Layers: make(map[ddigest.Digest][]string),
BlobData: make(map[ddigest.Digest][]byte),
SignatureSizes: []int{},
}
return image, nil
}
func (s storageImageSource) Reference() types.ImageReference {
return s.imageRef
}
func (s storageImageDestination) Reference() types.ImageReference {
return s.imageRef
}
func (s storageImageSource) Close() {
}
func (s storageImageDestination) Close() {
}
func (s storageImageDestination) ShouldCompressLayers() bool {
// We ultimately have to decompress layers to populate trees on disk,
// so callers shouldn't bother compressing them before handing them to
// us, if they're not already compressed.
return false
}
// PutBlob is used to both store filesystem layers and binary data that is part
// of the image. Filesystem layers are assumed to be imported in order, as
// that is required by some of the underlying storage drivers.
func (s *storageImageDestination) PutBlob(stream io.Reader, blobinfo types.BlobInfo) (types.BlobInfo, error) {
blobSize := int64(-1)
digest := blobinfo.Digest
errorBlobInfo := types.BlobInfo{
Digest: "",
Size: -1,
}
// Try to read an initial snippet of the blob.
header := make([]byte, 10240)
n, err := stream.Read(header)
if err != nil && err != io.EOF {
return errorBlobInfo, err
}
// Set up to read the whole blob (the initial snippet, plus the rest)
// while digesting it with either the default, or the passed-in digest,
// if one was specified.
hasher := ddigest.Canonical.Digester()
if digest.Validate() == nil {
if a := digest.Algorithm(); a.Available() {
hasher = a.Digester()
}
}
hash := ""
counter := ioutils.NewWriteCounter(hasher.Hash())
defragmented := io.MultiReader(bytes.NewBuffer(header[:n]), stream)
multi := io.TeeReader(defragmented, counter)
if (n > 0) && archive.IsArchive(header[:n]) {
// It's a filesystem layer. If it's not the first one in the
// image, we assume that the most recently added layer is its
// parent.
parentLayer := ""
for _, blob := range s.BlobList {
if layerList, ok := s.Layers[blob.Digest]; ok {
parentLayer = layerList[len(layerList)-1]
}
}
// If we have an expected content digest, generate a layer ID
// based on the parent's ID and the expected content digest.
id := ""
if digest.Validate() == nil {
id = ddigest.Canonical.FromBytes([]byte(parentLayer + "+" + digest.String())).Hex()
}
// Attempt to create the identified layer and import its contents.
layer, uncompressedSize, err := s.imageRef.transport.store.PutLayer(id, parentLayer, nil, "", true, multi)
if err != nil && err != storage.ErrDuplicateID {
logrus.Debugf("error importing layer blob %q as %q: %v", blobinfo.Digest, id, err)
return errorBlobInfo, err
}
if err == storage.ErrDuplicateID {
// We specified an ID, and there's already a layer with
// the same ID. Drain the input so that we can look at
// its length and digest.
_, err := io.Copy(ioutil.Discard, multi)
if err != nil && err != io.EOF {
logrus.Debugf("error digesting layer blob %q: %v", blobinfo.Digest, id, err)
return errorBlobInfo, err
}
hash = hasher.Digest().String()
} else {
// Applied the layer with the specified ID. Note the
// size info and computed digest.
hash = hasher.Digest().String()
layerMeta := storageLayerMetadata{
Digest: hash,
CompressedSize: counter.Count,
Size: uncompressedSize,
}
if metadata, err := json.Marshal(&layerMeta); len(metadata) != 0 && err == nil {
s.imageRef.transport.store.SetMetadata(layer.ID, string(metadata))
}
// Hang on to the new layer's ID.
id = layer.ID
}
blobSize = counter.Count
// Check if the size looks right.
if blobinfo.Size >= 0 && blobSize != blobinfo.Size {
logrus.Debugf("blob %q size is %d, not %d, rejecting", blobinfo.Digest, blobSize, blobinfo.Size)
if layer != nil {
// Something's wrong; delete the newly-created layer.
s.imageRef.transport.store.DeleteLayer(layer.ID)
}
return errorBlobInfo, ErrBlobSizeMismatch
}
// If the content digest was specified, verify it.
if digest.Validate() == nil && digest.String() != hash {
logrus.Debugf("blob %q digests to %q, rejecting", blobinfo.Digest, hash)
if layer != nil {
// Something's wrong; delete the newly-created layer.
s.imageRef.transport.store.DeleteLayer(layer.ID)
}
return errorBlobInfo, ErrBlobDigestMismatch
}
// If we didn't get a digest, construct one.
if digest == "" {
digest = ddigest.Digest(hash)
}
// Record that this layer blob is a layer, and the layer ID it
// ended up having. This is a list, in case the same blob is
// being applied more than once.
s.Layers[digest] = append(s.Layers[digest], id)
s.BlobList = append(s.BlobList, types.BlobInfo{Digest: digest, Size: blobSize})
if layer != nil {
logrus.Debugf("blob %q imported as a filesystem layer %q", blobinfo.Digest, id)
} else {
logrus.Debugf("layer blob %q already present as layer %q", blobinfo.Digest, id)
}
} else {
// It's just data. Finish scanning it in, check that our
// computed digest matches the passed-in digest, and store it,
// but leave it out of the blob-to-layer-ID map so that we can
// tell that it's not a layer.
blob, err := ioutil.ReadAll(multi)
if err != nil && err != io.EOF {
return errorBlobInfo, err
}
blobSize = int64(len(blob))
hash = hasher.Digest().String()
if blobinfo.Size >= 0 && blobSize != blobinfo.Size {
logrus.Debugf("blob %q size is %d, not %d, rejecting", blobinfo.Digest, blobSize, blobinfo.Size)
return errorBlobInfo, ErrBlobSizeMismatch
}
// If we were given a digest, verify that the content matches
// it.
if digest.Validate() == nil && digest.String() != hash {
logrus.Debugf("blob %q digests to %q, rejecting", blobinfo.Digest, hash)
return errorBlobInfo, ErrBlobDigestMismatch
}
// If we didn't get a digest, construct one.
if digest == "" {
digest = ddigest.Digest(hash)
}
// Save the blob for when we Commit().
s.BlobData[digest] = blob
s.BlobList = append(s.BlobList, types.BlobInfo{Digest: digest, Size: blobSize})
logrus.Debugf("blob %q imported as opaque data %q", blobinfo.Digest, digest)
}
return types.BlobInfo{
Digest: digest,
Size: blobSize,
}, nil
}
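A minimal sketch of the non-layer path above, seen from a caller's side. storeConfig is a hypothetical helper, not part of this package; it assumes the imports already used in this file (bytes, ddigest for go-digest, types) and precomputes the digest so that PutBlob verifies it. Leaving Digest empty or Size at -1 simply skips the corresponding check, as the code above shows.
// storeConfig is an illustrative helper (names made up).
func storeConfig(dest types.ImageDestination, configJSON []byte) (types.BlobInfo, error) {
	expected := ddigest.Canonical.FromBytes(configJSON) // sha256 by default
	return dest.PutBlob(bytes.NewReader(configJSON), types.BlobInfo{
		Digest: expected,               // "" would skip the digest check
		Size:   int64(len(configJSON)), // -1 would skip the size check
	})
}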
func (s *storageImageDestination) HasBlob(blobinfo types.BlobInfo) (bool, int64, error) {
if blobinfo.Digest == "" {
return false, -1, errors.Errorf("can not check for a blob with unknown digest")
}
for _, blob := range s.BlobList {
if blob.Digest == blobinfo.Digest {
return true, blob.Size, nil
}
}
return false, -1, types.ErrBlobNotFound
}
func (s *storageImageDestination) ReapplyBlob(blobinfo types.BlobInfo) (types.BlobInfo, error) {
err := blobinfo.Digest.Validate()
if err != nil {
return types.BlobInfo{}, err
}
if layerList, ok := s.Layers[blobinfo.Digest]; !ok || len(layerList) < 1 {
b, err := s.imageRef.transport.store.GetImageBigData(s.ID, blobinfo.Digest.String())
if err != nil {
return types.BlobInfo{}, err
}
return types.BlobInfo{Digest: blobinfo.Digest, Size: int64(len(b))}, nil
}
layerList := s.Layers[blobinfo.Digest]
rc, _, err := diffLayer(s.imageRef.transport.store, layerList[len(layerList)-1])
if err != nil {
return types.BlobInfo{}, err
}
return s.PutBlob(rc, blobinfo)
}
func (s *storageImageDestination) Commit() error {
// Create the image record.
lastLayer := ""
for _, blob := range s.BlobList {
if layerList, ok := s.Layers[blob.Digest]; ok {
lastLayer = layerList[len(layerList)-1]
}
}
img, err := s.imageRef.transport.store.CreateImage(s.ID, nil, lastLayer, "", nil)
if err != nil {
logrus.Debugf("error creating image: %q", err)
return err
}
logrus.Debugf("created new image ID %q", img.ID)
s.ID = img.ID
if s.Tag != "" {
// We have a name to set, so move the name to this image.
if err := s.imageRef.transport.store.SetNames(img.ID, []string{s.Tag}); err != nil {
if _, err2 := s.imageRef.transport.store.DeleteImage(img.ID, true); err2 != nil {
logrus.Debugf("error deleting incomplete image %q: %v", img.ID, err2)
}
logrus.Debugf("error setting names on image %q: %v", img.ID, err)
return err
}
logrus.Debugf("set name of image %q to %q", img.ID, s.Tag)
}
// Save the data blobs to disk, and drop their contents from memory.
keys := []ddigest.Digest{}
for k, v := range s.BlobData {
if err := s.imageRef.transport.store.SetImageBigData(img.ID, k.String(), v); err != nil {
if _, err2 := s.imageRef.transport.store.DeleteImage(img.ID, true); err2 != nil {
logrus.Debugf("error deleting incomplete image %q: %v", img.ID, err2)
}
logrus.Debugf("error saving big data %q for image %q: %v", k, img.ID, err)
return err
}
keys = append(keys, k)
}
for _, key := range keys {
delete(s.BlobData, key)
}
// Save the manifest, if we have one.
if err := s.imageRef.transport.store.SetImageBigData(s.ID, "manifest", s.Manifest); err != nil {
if _, err2 := s.imageRef.transport.store.DeleteImage(img.ID, true); err2 != nil {
logrus.Debugf("error deleting incomplete image %q: %v", img.ID, err2)
}
logrus.Debugf("error saving manifest for image %q: %v", img.ID, err)
return err
}
// Save the signatures, if we have any.
if err := s.imageRef.transport.store.SetImageBigData(s.ID, "signatures", s.Signatures); err != nil {
if _, err2 := s.imageRef.transport.store.DeleteImage(img.ID, true); err2 != nil {
logrus.Debugf("error deleting incomplete image %q: %v", img.ID, err2)
}
logrus.Debugf("error saving signatures for image %q: %v", img.ID, err)
return err
}
// Save our metadata.
metadata, err := json.Marshal(s)
if err != nil {
if _, err2 := s.imageRef.transport.store.DeleteImage(img.ID, true); err2 != nil {
logrus.Debugf("error deleting incomplete image %q: %v", img.ID, err2)
}
logrus.Debugf("error encoding metadata for image %q: %v", img.ID, err)
return err
}
if len(metadata) != 0 {
if err = s.imageRef.transport.store.SetMetadata(s.ID, string(metadata)); err != nil {
if _, err2 := s.imageRef.transport.store.DeleteImage(img.ID, true); err2 != nil {
logrus.Debugf("error deleting incomplete image %q: %v", img.ID, err2)
}
logrus.Debugf("error saving metadata for image %q: %v", img.ID, err)
return err
}
logrus.Debugf("saved image metadata %q", string(metadata))
}
return nil
}
func (s *storageImageDestination) SupportedManifestMIMETypes() []string {
return nil
}
func (s *storageImageDestination) PutManifest(manifest []byte) error {
s.Manifest = make([]byte, len(manifest))
copy(s.Manifest, manifest)
return nil
}
// SupportsSignatures returns an error if we can't expect GetSignatures() to
// return data that was previously supplied to PutSignatures().
func (s *storageImageDestination) SupportsSignatures() error {
return nil
}
// AcceptsForeignLayerURLs returns false iff foreign layers in the manifest should actually
// be uploaded to the image destination, true otherwise.
func (s *storageImageDestination) AcceptsForeignLayerURLs() bool {
return false
}
func (s *storageImageDestination) PutSignatures(signatures [][]byte) error {
sizes := []int{}
sigblob := []byte{}
for _, sig := range signatures {
sizes = append(sizes, len(sig))
newblob := make([]byte, len(sigblob)+len(sig))
copy(newblob, sigblob)
copy(newblob[len(sigblob):], sig)
sigblob = newblob
}
s.Signatures = sigblob
s.SignatureSizes = sizes
return nil
}
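The flattened layout above (one concatenated blob plus a parallel list of lengths) is exactly what GetSignatures later reverses via SignatureSizes. A self-contained sketch with made-up signatures:
// splitExample round-trips the layout used by PutSignatures/GetSignatures.
func splitExample() [][]byte {
	sigs := [][]byte{[]byte("sig-one"), []byte("sig-two")}
	var blob []byte
	var sizes []int
	for _, sig := range sigs {
		sizes = append(sizes, len(sig))
		blob = append(blob, sig...)
	}
	// GetSignatures (below in this file) reverses this using SignatureSizes:
	var split [][]byte
	offset := 0
	for _, n := range sizes {
		split = append(split, blob[offset:offset+n])
		offset += n
	}
	return split // element-wise equal to sigs; offset == len(blob)
}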
func (s *storageImageSource) GetBlob(info types.BlobInfo) (rc io.ReadCloser, n int64, err error) {
rc, n, _, err = s.getBlobAndLayerID(info)
return rc, n, err
}
func (s *storageImageSource) getBlobAndLayerID(info types.BlobInfo) (rc io.ReadCloser, n int64, layerID string, err error) {
err = info.Digest.Validate()
if err != nil {
return nil, -1, "", err
}
if layerList, ok := s.Layers[info.Digest]; !ok || len(layerList) < 1 {
b, err := s.imageRef.transport.store.GetImageBigData(s.ID, info.Digest.String())
if err != nil {
return nil, -1, "", err
}
r := bytes.NewReader(b)
logrus.Debugf("exporting opaque data as blob %q", info.Digest.String())
return ioutil.NopCloser(r), int64(r.Len()), "", nil
}
// If the blob was "put" more than once, we have multiple layer IDs
// which should all produce the same diff. For the sake of tests that
// want to make sure we created different layers each time the blob was
// "put", though, cycle through the layers.
layerList := s.Layers[info.Digest]
position, ok := s.LayerPosition[info.Digest]
if !ok {
position = 0
}
s.LayerPosition[info.Digest] = (position + 1) % len(layerList)
logrus.Debugf("exporting filesystem layer %q for blob %q", layerList[position], info.Digest)
rc, n, err = diffLayer(s.imageRef.transport.store, layerList[position])
return rc, n, layerList[position], err
}
func diffLayer(store storage.Store, layerID string) (rc io.ReadCloser, n int64, err error) {
layer, err := store.GetLayer(layerID)
if err != nil {
return nil, -1, err
}
layerMeta := storageLayerMetadata{
CompressedSize: -1,
}
if layer.Metadata != "" {
if err := json.Unmarshal([]byte(layer.Metadata), &layerMeta); err != nil {
return nil, -1, errors.Wrapf(err, "error decoding metadata for layer %q", layerID)
}
}
if layerMeta.CompressedSize <= 0 {
n = -1
} else {
n = layerMeta.CompressedSize
}
diff, err := store.Diff("", layer.ID)
if err != nil {
return nil, -1, err
}
return diff, n, nil
}
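The metadata decoded here is the record PutBlob marshals after importing a layer (the storageLayerMetadata literal above). Assuming the fields encode under their Go names (any json tags on the struct are not visible in this diff), a stored record would look roughly like this, with illustrative values:
{"Digest":"sha256:<hex-of-compressed-stream>","CompressedSize":1000,"Size":2048}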
func (s *storageImageSource) GetManifest() (manifestBlob []byte, MIMEType string, err error) {
manifestBlob, err = s.imageRef.transport.store.GetImageBigData(s.ID, "manifest")
return manifestBlob, manifest.GuessMIMEType(manifestBlob), err
}
func (s *storageImageSource) GetTargetManifest(digest ddigest.Digest) (manifestBlob []byte, MIMEType string, err error) {
return nil, "", ErrNoManifestLists
}
func (s *storageImageSource) GetSignatures() (signatures [][]byte, err error) {
var offset int
signature, err := s.imageRef.transport.store.GetImageBigData(s.ID, "signatures")
if err != nil {
return nil, err
}
sigslice := [][]byte{}
for _, length := range s.SignatureSizes {
sigslice = append(sigslice, signature[offset:offset+length])
offset += length
}
if offset != len(signature) {
return nil, errors.Errorf("signatures data contained %d extra bytes", len(signatures)-offset)
}
return sigslice, nil
}
func (s *storageImageSource) getSize() (int64, error) {
var sum int64
names, err := s.imageRef.transport.store.ListImageBigData(s.imageRef.id)
if err != nil {
return -1, errors.Wrapf(err, "error reading image %q", s.imageRef.id)
}
for _, name := range names {
bigSize, err := s.imageRef.transport.store.GetImageBigDataSize(s.imageRef.id, name)
if err != nil {
return -1, errors.Wrapf(err, "error reading data blob size %q for %q", name, s.imageRef.id)
}
sum += bigSize
}
for _, sigSize := range s.SignatureSizes {
sum += int64(sigSize)
}
for _, layerList := range s.Layers {
for _, layerID := range layerList {
layer, err := s.imageRef.transport.store.GetLayer(layerID)
if err != nil {
return -1, err
}
layerMeta := storageLayerMetadata{
Size: -1,
}
if layer.Metadata != "" {
if err := json.Unmarshal([]byte(layer.Metadata), &layerMeta); err != nil {
return -1, errors.Wrapf(err, "error decoding metadata for layer %q", layerID)
}
}
if layerMeta.Size < 0 {
return -1, errors.Errorf("size for layer %q is unknown, failing getSize()", layerID)
}
sum += layerMeta.Size
}
}
return sum, nil
}
func (s *storageImage) Size() (int64, error) {
return s.size, nil
}
// newImage creates an image that also knows its size
func newImage(s storageReference) (types.Image, error) {
src, err := newImageSource(s)
if err != nil {
return nil, err
}
img, err := image.FromSource(src)
if err != nil {
return nil, err
}
size, err := src.getSize()
if err != nil {
return nil, err
}
return &storageImage{Image: img, size: size}, nil
}

View File

@@ -0,0 +1,127 @@
package storage
import (
"strings"
"github.com/Sirupsen/logrus"
"github.com/containers/image/docker/reference"
"github.com/containers/image/types"
)
// A storageReference holds an arbitrary name and/or an ID, which is a 32-byte
// value hex-encoded into a 64-character string, and a reference to a Store
// where an image is, or would be, kept.
type storageReference struct {
transport storageTransport
reference string
id string
name reference.Named
}
func newReference(transport storageTransport, reference, id string, name reference.Named) *storageReference {
// We take a copy of the transport, which contains a pointer to the
// store that it used for resolving this reference, so that the
// transport that we'll return from Transport() won't be affected by
// further calls to the original transport's SetStore() method.
return &storageReference{
transport: transport,
reference: reference,
id: id,
name: name,
}
}
// Resolve the reference's name to an image ID in the store, if there's already
// one present with the same name or ID.
func (s *storageReference) resolveID() string {
if s.id == "" {
image, err := s.transport.store.GetImage(s.reference)
if image != nil && err == nil {
s.id = image.ID
}
}
return s.id
}
// Return a Transport object that defaults to using the same store that we used
// to build this reference object.
func (s storageReference) Transport() types.ImageTransport {
return &storageTransport{
store: s.transport.store,
}
}
// Return a name with a tag, if we have a name to base them on.
func (s storageReference) DockerReference() reference.Named {
return s.name
}
// Return a name with a tag, prefixed with the graph root and driver name, to
// disambiguate between images which may be present in multiple stores and
// share only their names.
func (s storageReference) StringWithinTransport() string {
storeSpec := "[" + s.transport.store.GetGraphDriverName() + "@" + s.transport.store.GetGraphRoot() + "]"
if s.name == nil {
return storeSpec + "@" + s.id
}
if s.id == "" {
return storeSpec + s.reference
}
return storeSpec + s.reference + "@" + s.id
}
func (s storageReference) PolicyConfigurationIdentity() string {
return s.StringWithinTransport()
}
// Also accept policy that's tied to the combination of the graph root and
// driver name, to apply to all images stored in the Store, and to just the
// graph root, in case we're using multiple drivers in the same directory for
// some reason.
func (s storageReference) PolicyConfigurationNamespaces() []string {
storeSpec := "[" + s.transport.store.GetGraphDriverName() + "@" + s.transport.store.GetGraphRoot() + "]"
driverlessStoreSpec := "[" + s.transport.store.GetGraphRoot() + "]"
namespaces := []string{}
if s.name != nil {
if s.id != "" {
// The reference without the ID is also a valid namespace.
namespaces = append(namespaces, storeSpec+s.reference)
}
components := strings.Split(s.name.FullName(), "/")
for len(components) > 0 {
namespaces = append(namespaces, storeSpec+strings.Join(components, "/"))
components = components[:len(components)-1]
}
}
namespaces = append(namespaces, storeSpec)
namespaces = append(namespaces, driverlessStoreSpec)
return namespaces
}
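For a hypothetical reference using the overlay driver with graph root /var/lib/storage, the name docker.io/library/busybox:latest, and an ID, PolicyConfigurationIdentity plus the namespaces above line up as follows (values illustrative; the exact name normalization depends on the vendored reference package):
[overlay@/var/lib/storage]docker.io/library/busybox:latest@<id>   identity
[overlay@/var/lib/storage]docker.io/library/busybox:latest        namespaces, most specific first
[overlay@/var/lib/storage]docker.io/library/busybox
[overlay@/var/lib/storage]docker.io/library
[overlay@/var/lib/storage]docker.io
[overlay@/var/lib/storage]
[/var/lib/storage]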
func (s storageReference) NewImage(ctx *types.SystemContext) (types.Image, error) {
return newImage(s)
}
func (s storageReference) DeleteImage(ctx *types.SystemContext) error {
id := s.resolveID()
if id == "" {
logrus.Errorf("reference %q does not resolve to an image ID", s.StringWithinTransport())
return ErrNoSuchImage
}
layers, err := s.transport.store.DeleteImage(id, true)
if err == nil {
logrus.Debugf("deleted image %q", id)
for _, layer := range layers {
logrus.Debugf("deleted layer %q", layer)
}
}
return err
}
func (s storageReference) NewImageSource(ctx *types.SystemContext, requestedManifestMIMETypes []string) (types.ImageSource, error) {
return newImageSource(s)
}
func (s storageReference) NewImageDestination(ctx *types.SystemContext) (types.ImageDestination, error) {
return newImageDestination(s)
}

View File

@@ -0,0 +1,286 @@
package storage
import (
"path/filepath"
"regexp"
"strings"
"github.com/pkg/errors"
"github.com/Sirupsen/logrus"
"github.com/containers/image/docker/reference"
"github.com/containers/image/types"
"github.com/containers/storage/storage"
"github.com/opencontainers/go-digest"
ddigest "github.com/opencontainers/go-digest"
)
var (
// Transport is an ImageTransport that uses either a default
// storage.Store or one that it's explicitly told to use.
Transport StoreTransport = &storageTransport{}
// ErrInvalidReference is returned when ParseReference() is passed an
// empty reference.
ErrInvalidReference = errors.New("invalid reference")
// ErrPathNotAbsolute is returned when a graph root is not an absolute
// path name.
ErrPathNotAbsolute = errors.New("path name is not absolute")
idRegexp = regexp.MustCompile("^(sha256:)?([0-9a-fA-F]{64})$")
)
// StoreTransport is an ImageTransport that uses a storage.Store to parse
// references, either its own default or one that it's told to use.
type StoreTransport interface {
types.ImageTransport
// SetStore sets the default store for this transport.
SetStore(storage.Store)
// GetImage retrieves the image from the transport's store that's named
// by the reference.
GetImage(types.ImageReference) (*storage.Image, error)
// GetStoreImage retrieves the image from a specified store that's named
// by the reference.
GetStoreImage(storage.Store, types.ImageReference) (*storage.Image, error)
// ParseStoreReference parses a reference, overriding any store
// specification that it may contain.
ParseStoreReference(store storage.Store, reference string) (*storageReference, error)
}
type storageTransport struct {
store storage.Store
}
func (s *storageTransport) Name() string {
// Still haven't really settled on a name.
return "containers-storage"
}
// SetStore sets the Store object which the Transport will use for parsing
// references when information about a Store is not directly specified as part
// of the reference. If one is not set, the library will attempt to initialize
// one with default settings when a reference needs to be parsed. Calling
// SetStore does not affect previously parsed references.
func (s *storageTransport) SetStore(store storage.Store) {
s.store = store
}
// ParseStoreReference takes a name or an ID, tries to figure out which it is
// relative to the given store, and returns it in a reference object.
func (s storageTransport) ParseStoreReference(store storage.Store, ref string) (*storageReference, error) {
var name reference.Named
var sum digest.Digest
var err error
if ref == "" {
return nil, ErrInvalidReference
}
if ref[0] == '[' {
// Ignore the store specifier.
closeIndex := strings.IndexRune(ref, ']')
if closeIndex < 1 {
return nil, ErrInvalidReference
}
ref = ref[closeIndex+1:]
}
refInfo := strings.SplitN(ref, "@", 2)
if len(refInfo) == 1 {
// A name.
name, err = reference.ParseNamed(refInfo[0])
if err != nil {
return nil, err
}
} else if len(refInfo) == 2 {
// An ID, possibly preceded by a name.
if refInfo[0] != "" {
name, err = reference.ParseNamed(refInfo[0])
if err != nil {
return nil, err
}
}
sum, err = digest.Parse("sha256:" + refInfo[1])
if err != nil {
return nil, err
}
} else { // Coverage: len(refInfo) is always 1 or 2
// Anything else: store specified in a form we don't
// recognize.
return nil, ErrInvalidReference
}
storeSpec := "[" + store.GetGraphDriverName() + "@" + store.GetGraphRoot() + "]"
id := ""
if sum.Validate() == nil {
id = sum.Hex()
}
refname := ""
if name != nil {
name = reference.WithDefaultTag(name)
refname = verboseName(name)
}
if refname == "" {
logrus.Debugf("parsed reference into %q", storeSpec+"@"+id)
} else if id == "" {
logrus.Debugf("parsed reference into %q", storeSpec+refname)
} else {
logrus.Debugf("parsed reference into %q", storeSpec+refname+"@"+id)
}
return newReference(storageTransport{store: store}, refname, id, name), nil
}
func (s *storageTransport) GetStore() (storage.Store, error) {
// Return the transport's previously-set store. If we don't have one
// of those, initialize one now.
if s.store == nil {
store, err := storage.GetStore(storage.DefaultStoreOptions)
if err != nil {
return nil, err
}
s.store = store
}
return s.store, nil
}
// ParseReference takes a name and/or an ID ("_name_"/"@_id_"/"_name_@_id_"),
// possibly prefixed with a store specifier in the form "[_graphroot_]" or
// "[_driver_@_graphroot_]", tries to figure out which it is, and returns it in
// a reference object. If the _graphroot_ is a location other than the default,
// it needs to have been previously opened using storage.GetStore(), so that it
// can figure out which run root goes with the graph root.
func (s *storageTransport) ParseReference(reference string) (types.ImageReference, error) {
store, err := s.GetStore()
if err != nil {
return nil, err
}
// Check if there's a store location prefix. If there is, then it
// needs to match a store that was previously initialized using
// storage.GetStore(), or be enough to let the storage library fill out
// the rest using knowledge that it has from elsewhere.
if reference[0] == '[' {
closeIndex := strings.IndexRune(reference, ']')
if closeIndex < 1 {
return nil, ErrInvalidReference
}
storeSpec := reference[1:closeIndex]
reference = reference[closeIndex+1:]
storeInfo := strings.SplitN(storeSpec, "@", 2)
if len(storeInfo) == 1 && storeInfo[0] != "" {
// One component: the graph root.
if !filepath.IsAbs(storeInfo[0]) {
return nil, ErrPathNotAbsolute
}
store2, err := storage.GetStore(storage.StoreOptions{
GraphRoot: storeInfo[0],
})
if err != nil {
return nil, err
}
store = store2
} else if len(storeInfo) == 2 && storeInfo[0] != "" && storeInfo[1] != "" {
// Two components: the driver type and the graph root.
if !filepath.IsAbs(storeInfo[1]) {
return nil, ErrPathNotAbsolute
}
store2, err := storage.GetStore(storage.StoreOptions{
GraphDriverName: storeInfo[0],
GraphRoot: storeInfo[1],
})
if err != nil {
return nil, err
}
store = store2
} else {
// Anything else: store specified in a form we don't
// recognize.
return nil, ErrInvalidReference
}
}
return s.ParseStoreReference(store, reference)
}
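Tying ParseReference and ParseStoreReference together from a caller's perspective: openImage below is a hypothetical helper (assuming imports of this package and types), and the spec forms in its comment are the ones handled above. As noted, a non-default graph root must already be known to the storage library.
// openImage is an illustrative helper, not part of this package.
// Accepted specs include "busybox:latest", "@<64-hex-id>",
// "busybox:latest@<64-hex-id>", "[/var/lib/storage]busybox:latest",
// and "[overlay@/var/lib/storage]busybox:latest".
func openImage(spec string) (types.Image, error) {
	ref, err := storage.Transport.ParseReference(spec)
	if err != nil {
		return nil, err
	}
	return ref.NewImage(nil) // nil *types.SystemContext: use defaults
}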
func (s storageTransport) GetStoreImage(store storage.Store, ref types.ImageReference) (*storage.Image, error) {
dref := ref.DockerReference()
if dref == nil {
if sref, ok := ref.(*storageReference); ok {
if sref.id != "" {
if img, err := store.GetImage(sref.id); err == nil {
return img, nil
}
}
}
return nil, ErrInvalidReference
}
return store.GetImage(verboseName(dref))
}
func (s *storageTransport) GetImage(ref types.ImageReference) (*storage.Image, error) {
store, err := s.GetStore()
if err != nil {
return nil, err
}
return s.GetStoreImage(store, ref)
}
func (s storageTransport) ValidatePolicyConfigurationScope(scope string) error {
// Check that there's a store location prefix. Values we're passed are
// expected to come from PolicyConfigurationIdentity or
// PolicyConfigurationNamespaces, so if there's no store location,
// something's wrong.
if scope[0] != '[' {
return ErrInvalidReference
}
// Parse the store location prefix.
closeIndex := strings.IndexRune(scope, ']')
if closeIndex < 1 {
return ErrInvalidReference
}
storeSpec := scope[1:closeIndex]
scope = scope[closeIndex+1:]
storeInfo := strings.SplitN(storeSpec, "@", 2)
if len(storeInfo) == 1 && storeInfo[0] != "" {
// One component: the graph root.
if !filepath.IsAbs(storeInfo[0]) {
return ErrPathNotAbsolute
}
} else if len(storeInfo) == 2 && storeInfo[0] != "" && storeInfo[1] != "" {
// Two components: the driver type and the graph root.
if !filepath.IsAbs(storeInfo[1]) {
return ErrPathNotAbsolute
}
} else {
// Anything else: store specified in a form we don't
// recognize.
return ErrInvalidReference
}
// That might be all of it, and that's okay.
if scope == "" {
return nil
}
// But if there is anything left, it has to be a name, with or without
// a tag, with or without an ID, since we don't return namespace values
// that are just bare IDs.
scopeInfo := strings.SplitN(scope, "@", 2)
if len(scopeInfo) == 1 && scopeInfo[0] != "" {
_, err := reference.ParseNamed(scopeInfo[0])
if err != nil {
return err
}
} else if len(scopeInfo) == 2 && scopeInfo[0] != "" && scopeInfo[1] != "" {
_, err := reference.ParseNamed(scopeInfo[0])
if err != nil {
return err
}
_, err = ddigest.Parse("sha256:" + scopeInfo[1])
if err != nil {
return err
}
} else {
return ErrInvalidReference
}
return nil
}
func verboseName(name reference.Named) string {
name = reference.WithDefaultTag(name)
tag := ""
if tagged, ok := name.(reference.NamedTagged); ok {
tag = tagged.Tag()
}
return name.FullName() + ":" + tag
}
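For example, a hypothetical name parsed from "busybox" would, after WithDefaultTag, come back from verboseName as something like "docker.io/library/busybox:latest"; whether the "library/" segment appears depends on the vendored reference package's normalization.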

View File

@@ -6,9 +6,12 @@ import (
"github.com/containers/image/directory"
"github.com/containers/image/docker"
"github.com/containers/image/docker/daemon"
ociLayout "github.com/containers/image/oci/layout"
"github.com/containers/image/openshift"
"github.com/containers/image/storage"
"github.com/containers/image/types"
"github.com/pkg/errors"
)
// KnownTransports is a registry of known ImageTransport instances.
@@ -16,11 +19,15 @@ var KnownTransports map[string]types.ImageTransport
func init() {
KnownTransports = make(map[string]types.ImageTransport)
// NOTE: Make sure docs/policy.json.md is updated when adding or updating
// a transport.
for _, t := range []types.ImageTransport{
directory.Transport,
docker.Transport,
daemon.Transport,
ociLayout.Transport,
openshift.Transport,
storage.Transport,
} {
name := t.Name()
if _, ok := KnownTransports[name]; ok {
@@ -34,11 +41,11 @@ func init() {
func ParseImageName(imgName string) (types.ImageReference, error) {
parts := strings.SplitN(imgName, ":", 2)
if len(parts) != 2 {
return nil, errors.Errorf(`Invalid image name "%s", expected colon-separated transport:reference`, imgName)
}
transport, ok := KnownTransports[parts[0]]
if !ok {
return nil, errors.Errorf(`Invalid image name "%s", unknown transport "%s"`, imgName, parts[0])
}
return transport.ParseReference(parts[1])
}
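From a caller's perspective (importing github.com/containers/image/transports), the split-on-first-colon rule means the transport name comes first and everything after it is handed to that transport verbatim. A sketch (error handling elided; image names made up):
func exampleLookups() {
	// "docker" is the registry transport; "containers-storage" is the
	// local-storage transport registered above.
	refDocker, _ := transports.ParseImageName("docker://busybox")
	refLocal, _ := transports.ParseImageName("containers-storage:busybox:latest")
	_, _ = refDocker, refLocal
}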

View File

@@ -4,7 +4,10 @@ import (
"io"
"time"
"github.com/docker/docker/reference"
"github.com/pkg/errors"
"github.com/containers/image/docker/reference"
"github.com/opencontainers/go-digest"
)
// ImageTransport is a top-level namespace for ways to store/load an image.
@@ -70,8 +73,10 @@ type ImageReference interface {
// and each following element to be a prefix of the element preceding it.
PolicyConfigurationNamespaces() []string
// NewImage returns a types.Image for this reference, possibly specialized for this ImageTransport.
// The caller must call .Close() on the returned Image.
// NOTE: If any kind of signature verification should happen, build an UnparsedImage from the value returned by NewImageSource,
// verify that UnparsedImage, and convert it into a real Image via image.FromUnparsedImage.
NewImage(ctx *SystemContext) (Image, error)
// NewImageSource returns a types.ImageSource for this reference,
// asking the backend to use a manifest from requestedManifestMIMETypes if possible.
@@ -89,25 +94,33 @@ type ImageReference interface {
// BlobInfo collects known information about a blob (layer/config).
// In some situations, some fields may be unknown, in others they may be mandatory; documenting an “unknown” value here does not override that.
type BlobInfo struct {
Digest string // "" if unknown.
Size int64 // -1 if unknown
Digest digest.Digest // "" if unknown.
Size int64 // -1 if unknown
URLs []string
}
// ImageSource is a service, possibly remote (= slow), to download components of a single image.
// This is primarily useful for copying images around; for examining their properties, Image (below)
// is usually more useful.
// Each ImageSource should eventually be closed by calling Close().
//
// WARNING: Various methods which return an object identified by digest generally do not
// validate that the returned data actually matches that digest; this is the caller's responsibility.
type ImageSource interface {
// Reference returns the reference used to set up this source, _as specified by the user_
// (not as the image itself, or its underlying storage, claims). This can be used e.g. to determine which public keys are trusted for this image.
Reference() ImageReference
// Close removes resources associated with an initialized ImageSource, if any.
Close()
// GetManifest returns the image's manifest along with its MIME type (which may be empty when it can't be determined but the manifest is available).
// It may use a remote (= slow) service.
GetManifest() ([]byte, string, error)
// GetTargetManifest returns an image's manifest given a digest. This is mainly used to retrieve a single image's manifest
// out of a manifest list.
GetTargetManifest(digest digest.Digest) ([]byte, string, error)
// GetBlob returns a stream for the specified blob, and the blob's size (or -1 if unknown).
// The Digest field in BlobInfo is guaranteed to be provided; Size may be -1.
GetBlob(BlobInfo) (io.ReadCloser, int64, error)
// GetSignatures returns the image's signatures. It may use a remote (= slow) service.
GetSignatures() ([][]byte, error)
}
@@ -116,6 +129,7 @@ type ImageSource interface {
//
// There is a specific required order for some of the calls:
// PutBlob on the various blobs, if any, MUST be called before PutManifest (manifest references blobs, which may be created or compressed only at push time)
// ReapplyBlob, if used, MUST only be called if HasBlob returned true for the same blob digest
// PutSignatures, if called, MUST be called after PutManifest (signatures reference manifest contents)
// Finally, Commit MUST be called if the caller wants the image, as formed by the components saved above, to persist.
//
@@ -136,6 +150,10 @@ type ImageDestination interface {
// ShouldCompressLayers returns true iff it is desirable to compress layer blobs written to this destination.
ShouldCompressLayers() bool
// AcceptsForeignLayerURLs returns false iff foreign layers in the manifest should actually
// be uploaded to the image destination, true otherwise.
AcceptsForeignLayerURLs() bool
// PutBlob writes contents of stream and returns data representing the result (with all data filled in).
// inputInfo.Digest can be optionally provided if known; it is not mandatory for the implementation to verify it.
// inputInfo.Size is the expected length of stream, if known.
@@ -143,6 +161,10 @@ type ImageDestination interface {
// to any other readers for download using the supplied digest.
// If stream.Read() at any time, ESPECIALLY at end of input, returns an error, PutBlob MUST 1) fail, and 2) delete any data stored so far.
PutBlob(stream io.Reader, inputInfo BlobInfo) (BlobInfo, error)
// HasBlob returns true iff the image destination already contains a blob with the matching digest which can be reapplied using ReapplyBlob. Unlike PutBlob, the digest can not be empty. If HasBlob returns true, the size of the blob must also be returned. A false result will often be accompanied by an ErrBlobNotFound error.
HasBlob(info BlobInfo) (bool, int64, error)
// ReapplyBlob informs the image destination that a blob for which HasBlob previously returned true would have been passed to PutBlob if it had returned false. Like HasBlob and unlike PutBlob, the digest can not be empty. If the blob is a filesystem layer, this signifies that the changes it describes need to be applied again when composing a filesystem tree.
ReapplyBlob(info BlobInfo) (BlobInfo, error)
// FIXME? This should also receive a MIME type if known, to differentiate between schema versions.
PutManifest([]byte) error
PutSignatures(signatures [][]byte) error
@@ -153,38 +175,69 @@ type ImageDestination interface {
Commit() error
}
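The ordering contract above reads naturally as a push loop. pushImage is a hypothetical caller (assuming imports of io and this types package; all inputs are made up) that follows the required sequence:
// pushImage illustrates the mandated call order for an ImageDestination.
func pushImage(dest ImageDestination, streams []io.Reader, infos []BlobInfo,
	manifestBytes []byte, sigs [][]byte) error {
	for i, stream := range streams {
		if _, err := dest.PutBlob(stream, infos[i]); err != nil { // 1. blobs first
			return err
		}
	}
	if err := dest.PutManifest(manifestBytes); err != nil { // 2. then the manifest
		return err
	}
	if err := dest.PutSignatures(sigs); err != nil { // 3. signatures after the manifest
		return err
	}
	return dest.Commit() // 4. nothing persists until Commit succeeds
}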
// UnparsedImage is an Image-to-be; until it is verified and accepted, it only carries its identity and caches manifest and signature blobs.
// Thus, an UnparsedImage can be created from an ImageSource simply by fetching blobs without interpreting them,
// allowing cryptographic signature verification to happen first, before even fetching the manifest, or parsing anything else.
// This also makes the UnparsedImage→Image conversion an explicitly visible step.
// Each UnparsedImage should eventually be closed by calling Close().
type UnparsedImage interface {
// Reference returns the reference used to set up this source, _as specified by the user_
// (not as the image itself, or its underlying storage, claims). This can be used e.g. to determine which public keys are trusted for this image.
Reference() ImageReference
// Close removes resources associated with an initialized UnparsedImage, if any.
Close()
// ref to repository?
// Manifest is like ImageSource.GetManifest, but the result is cached; it is OK to call this however often you need.
// NOTE: It is essential for signature verification that Manifest returns the manifest from which ConfigInfo and LayerInfos is computed.
Manifest() ([]byte, string, error)
// Signatures is like ImageSource.GetSignatures, but the result is cached; it is OK to call this however often you need.
Signatures() ([][]byte, error)
}
// Image is the primary API for inspecting properties of images.
// Each Image should eventually be closed by calling Close().
type Image interface {
// Note that Reference may return nil in the return value of UpdatedImage!
UnparsedImage
// ConfigInfo returns a complete BlobInfo for the separate config object, or a BlobInfo{Digest:""} if there isn't a separate object.
// NOTE: It is essential for signature verification that ConfigInfo is computed from the same manifest which is returned by Manifest().
// Note that the config object may not exist in the underlying storage in the return value of UpdatedImage! Use ConfigBlob() below.
ConfigInfo() BlobInfo
// ConfigBlob returns the blob described by ConfigInfo, iff ConfigInfo().Digest != ""; nil otherwise.
// The result is cached; it is OK to call this however often you need.
ConfigBlob() ([]byte, error)
// LayerInfos returns a list of BlobInfos of layers referenced by this image, in order (the root layer first, and then successive layered layers).
// The Digest field is guaranteed to be provided; Size may be -1.
// NOTE: It is essential for signature verification that LayerInfos is computed from the same manifest which is returned by Manifest().
// WARNING: The list may contain duplicates, and they are semantically relevant.
LayerInfos() []BlobInfo
// Inspect returns various information for (skopeo inspect) parsed from the manifest and configuration.
Inspect() (*ImageInspectInfo, error)
// UpdatedManifest returns the image's manifest modified according to options.
// This does not change the state of the Image object.
UpdatedManifest(options ManifestUpdateOptions) ([]byte, error)
// UpdatedImageNeedsLayerDiffIDs returns true iff UpdatedImage(options) needs InformationOnly.LayerDiffIDs.
// This is a horribly specific interface, but computing InformationOnly.LayerDiffIDs can be very expensive to compute
// (most importantly it forces us to download the full layers even if they are already present at the destination).
UpdatedImageNeedsLayerDiffIDs(options ManifestUpdateOptions) bool
// UpdatedImage returns a types.Image modified according to options.
// Everything in options.InformationOnly should be provided, other fields should be set only if a modification is desired.
// This does not change the state of the original Image object.
UpdatedImage(options ManifestUpdateOptions) (Image, error)
// IsMultiImage returns true if the image's manifest is a list of images, false otherwise.
IsMultiImage() bool
// Size returns an approximation of the amount of disk space which is consumed by the image in its current
// location. If the size is not known, -1 will be returned.
Size() (int64, error)
}
// ManifestUpdateOptions is a way to pass named optional arguments to Image.UpdatedManifest
type ManifestUpdateOptions struct {
LayerInfos []BlobInfo // Complete BlobInfos (size+digest+urls) which should replace the originals, in order (the root layer first, and then successive layered layers)
ManifestMIMEType string
// The values below are NOT requests to modify the image; they provide optional context which may or may not be used.
InformationOnly ManifestUpdateInformation
}
// ManifestUpdateInformation is a component of ManifestUpdateOptions, named here
// only to make writing struct literals possible.
type ManifestUpdateInformation struct {
Destination ImageDestination // and yes, UpdatedManifest may write to Destination (see the schema2 → schema1 conversion logic in image/docker_schema2.go)
LayerInfos []BlobInfo // Complete BlobInfos (size+digest) which have been uploaded, in order (the root layer first, and then successive layered layers)
LayerDiffIDs []digest.Digest // Digest values for the _uncompressed_ contents of the blobs which have been uploaded, in the same order.
}
// ImageInspectInfo is a set of metadata describing Docker images, primarily their manifest and configuration.
@@ -200,6 +253,12 @@ type ImageInspectInfo struct {
Layers []string
}
// DockerAuthConfig contains authorization information for connecting to a registry.
type DockerAuthConfig struct {
Username string
Password string
}
// SystemContext allows parametrizing access to implicitly-accessed resources,
// like configuration files in /etc and users' login state in their home directory.
// Various components can share the same field only if their semantics is exactly
@@ -221,6 +280,22 @@ type SystemContext struct {
RegistriesDirPath string
// === docker.Transport overrides ===
DockerCertPath string // If not "", a directory containing "cert.pem" and "key.pem" used when talking to a Docker Registry
DockerInsecureSkipTLSVerify bool // Allow contacting docker registries over HTTP, or HTTPS with failed TLS verification. Note that this does not affect other TLS connections.
// If not "", a directory containing a CA certificate (ending with ".crt"),
// a client certificate (ending with ".cert") and a client ceritificate key
// (ending with ".key") used when talking to a Docker Registry.
DockerCertPath string
DockerInsecureSkipTLSVerify bool // Allow contacting docker registries over HTTP, or HTTPS with failed TLS verification. Note that this does not affect other TLS connections.
// if nil, the library tries to parse ~/.docker/config.json to retrieve credentials
DockerAuthConfig *DockerAuthConfig
// if not "", an User-Agent header is added to each request when contacting a registry.
DockerRegistryUserAgent string
// if true, a V1 ping attempt isn't done to give users a better error. Default is false.
// Note that this field is used mainly to integrate containers/image into projectatomic/docker
// in order to not break any existing docker's integration tests.
DockerDisableV1Ping bool
}
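As a sketch (field names as declared above; the credentials are made up), a caller talking to a throwaway test registry might bypass ~/.docker/config.json and TLS verification like so:
var ctx = &SystemContext{
	DockerInsecureSkipTLSVerify: true, // e.g. a test registry over HTTP or self-signed TLS
	DockerAuthConfig: &DockerAuthConfig{
		Username: "testuser", // illustrative credentials
		Password: "testpass",
	},
}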
var (
// ErrBlobNotFound can be returned by an ImageDestination's HasBlob() method
ErrBlobNotFound = errors.New("no such blob present")
)

View File

@@ -176,7 +176,7 @@
END OF TERMS AND CONDITIONS
Copyright 2013-2016 Docker, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.

vendor/github.com/containers/storage/NOTICE generated vendored Normal file
View File

@@ -0,0 +1,19 @@
Docker
Copyright 2012-2016 Docker, Inc.
This product includes software developed at Docker, Inc. (https://www.docker.com).
This product contains software (https://github.com/kr/pty) developed
by Keith Rarick, licensed under the MIT License.
The following is courtesy of our legal counsel:
Use and transfer of Docker may be subject to certain restrictions by the
United States and other governments.
It is your responsibility to ensure that your use and/or transfer does not
violate applicable laws.
For more information, please see https://www.bis.doc.gov
See also https://www.apache.org/dev/crypto.html and/or seek legal counsel.

View File

@@ -0,0 +1,586 @@
// +build linux
/*
aufs driver directory structure
.
├── layers // Metadata of layers
│   ├── 1
│   ├── 2
│   └── 3
├── diff   // Content of the layer
│   ├── 1  // Contains layers that need to be mounted for the id
│   ├── 2
│   └── 3
└── mnt    // Mount points for the rw layers to be mounted
    ├── 1
    ├── 2
    └── 3
*/
package aufs
import (
"bufio"
"fmt"
"io/ioutil"
"os"
"os/exec"
"path"
"path/filepath"
"strings"
"sync"
"syscall"
"github.com/Sirupsen/logrus"
"github.com/vbatts/tar-split/tar/storage"
"github.com/containers/storage/drivers"
"github.com/containers/storage/pkg/archive"
"github.com/containers/storage/pkg/chrootarchive"
"github.com/containers/storage/pkg/directory"
"github.com/containers/storage/pkg/idtools"
mountpk "github.com/containers/storage/pkg/mount"
"github.com/containers/storage/pkg/stringid"
"github.com/opencontainers/runc/libcontainer/label"
rsystem "github.com/opencontainers/runc/libcontainer/system"
)
var (
// ErrAufsNotSupported is returned if aufs is not supported by the host.
ErrAufsNotSupported = fmt.Errorf("AUFS was not found in /proc/filesystems")
// ErrAufsNested means aufs cannot be used because we are in a user namespace
ErrAufsNested = fmt.Errorf("AUFS cannot be used in non-init user namespace")
backingFs = "<unknown>"
enableDirpermLock sync.Once
enableDirperm bool
)
func init() {
graphdriver.Register("aufs", Init)
}
// Driver contains information about the filesystem mounted.
type Driver struct {
sync.Mutex
root string
uidMaps []idtools.IDMap
gidMaps []idtools.IDMap
ctr *graphdriver.RefCounter
pathCacheLock sync.Mutex
pathCache map[string]string
}
// Init returns a new AUFS driver.
// An error is returned if AUFS is not supported.
func Init(root string, options []string, uidMaps, gidMaps []idtools.IDMap) (graphdriver.Driver, error) {
// Try to load the aufs kernel module
if err := supportsAufs(); err != nil {
return nil, graphdriver.ErrNotSupported
}
fsMagic, err := graphdriver.GetFSMagic(root)
if err != nil {
return nil, err
}
if fsName, ok := graphdriver.FsNames[fsMagic]; ok {
backingFs = fsName
}
switch fsMagic {
case graphdriver.FsMagicAufs, graphdriver.FsMagicBtrfs, graphdriver.FsMagicEcryptfs:
logrus.Errorf("AUFS is not supported over %s", backingFs)
return nil, graphdriver.ErrIncompatibleFS
}
paths := []string{
"mnt",
"diff",
"layers",
}
a := &Driver{
root: root,
uidMaps: uidMaps,
gidMaps: gidMaps,
pathCache: make(map[string]string),
ctr: graphdriver.NewRefCounter(graphdriver.NewFsChecker(graphdriver.FsMagicAufs)),
}
rootUID, rootGID, err := idtools.GetRootUIDGID(uidMaps, gidMaps)
if err != nil {
return nil, err
}
// Create the root aufs driver dir and return
// if it already exists
// If not populate the dir structure
if err := idtools.MkdirAllAs(root, 0700, rootUID, rootGID); err != nil {
if os.IsExist(err) {
return a, nil
}
return nil, err
}
if err := mountpk.MakePrivate(root); err != nil {
return nil, err
}
// Populate the dir structure
for _, p := range paths {
if err := idtools.MkdirAllAs(path.Join(root, p), 0700, rootUID, rootGID); err != nil {
return nil, err
}
}
return a, nil
}
// Return a nil error if the kernel supports aufs
// We cannot modprobe because inside dind modprobe fails
// to run
func supportsAufs() error {
// We can try to modprobe aufs first before looking at
// proc/filesystems for when aufs is supported
exec.Command("modprobe", "aufs").Run()
if rsystem.RunningInUserNS() {
return ErrAufsNested
}
f, err := os.Open("/proc/filesystems")
if err != nil {
return err
}
defer f.Close()
s := bufio.NewScanner(f)
for s.Scan() {
if strings.Contains(s.Text(), "aufs") {
return nil
}
}
return ErrAufsNotSupported
}
func (a *Driver) rootPath() string {
return a.root
}
func (*Driver) String() string {
return "aufs"
}
// Status returns current information about the filesystem such as root directory, number of directories mounted, etc.
func (a *Driver) Status() [][2]string {
ids, _ := loadIds(path.Join(a.rootPath(), "layers"))
return [][2]string{
{"Root Dir", a.rootPath()},
{"Backing Filesystem", backingFs},
{"Dirs", fmt.Sprintf("%d", len(ids))},
{"Dirperm1 Supported", fmt.Sprintf("%v", useDirperm())},
}
}
// GetMetadata not implemented
func (a *Driver) GetMetadata(id string) (map[string]string, error) {
return nil, nil
}
// Exists returns true if the given id is registered with
// this driver
func (a *Driver) Exists(id string) bool {
if _, err := os.Lstat(path.Join(a.rootPath(), "layers", id)); err != nil {
return false
}
return true
}
// CreateReadWrite creates a layer that is writable for use as a container
// file system.
func (a *Driver) CreateReadWrite(id, parent, mountLabel string, storageOpt map[string]string) error {
return a.Create(id, parent, mountLabel, storageOpt)
}
// Create three folders for each id
// mnt, layers, and diff
func (a *Driver) Create(id, parent, mountLabel string, storageOpt map[string]string) error {
if len(storageOpt) != 0 {
return fmt.Errorf("--storage-opt is not supported for aufs")
}
if err := a.createDirsFor(id); err != nil {
return err
}
// Write the layers metadata
f, err := os.Create(path.Join(a.rootPath(), "layers", id))
if err != nil {
return err
}
defer f.Close()
if parent != "" {
ids, err := getParentIds(a.rootPath(), parent)
if err != nil {
return err
}
if _, err := fmt.Fprintln(f, parent); err != nil {
return err
}
for _, i := range ids {
if _, err := fmt.Fprintln(f, i); err != nil {
return err
}
}
}
return nil
}
// createDirsFor creates two directories for the given id.
// mnt and diff
func (a *Driver) createDirsFor(id string) error {
paths := []string{
"mnt",
"diff",
}
rootUID, rootGID, err := idtools.GetRootUIDGID(a.uidMaps, a.gidMaps)
if err != nil {
return err
}
// Directory permission is 0755.
// The paths of the directories are <aufs_root_path>/mnt/<image_id>
// and <aufs_root_path>/diff/<image_id>
for _, p := range paths {
if err := idtools.MkdirAllAs(path.Join(a.rootPath(), p, id), 0755, rootUID, rootGID); err != nil {
return err
}
}
return nil
}
// Remove will unmount and remove the given id.
func (a *Driver) Remove(id string) error {
a.pathCacheLock.Lock()
mountpoint, exists := a.pathCache[id]
a.pathCacheLock.Unlock()
if !exists {
mountpoint = a.getMountpoint(id)
}
if err := a.unmount(mountpoint); err != nil {
// no need to return here, we can still try to remove since the `Rename` will fail below if still mounted
logrus.Debugf("aufs: error while unmounting %s: %v", mountpoint, err)
}
// Atomically remove each directory in turn by first moving it out of the
// way (so that docker doesn't find it anymore) before doing removal of
// the whole tree.
tmpMntPath := path.Join(a.mntPath(), fmt.Sprintf("%s-removing", id))
if err := os.Rename(mountpoint, tmpMntPath); err != nil && !os.IsNotExist(err) {
return err
}
defer os.RemoveAll(tmpMntPath)
tmpDiffpath := path.Join(a.diffPath(), fmt.Sprintf("%s-removing", id))
if err := os.Rename(a.getDiffPath(id), tmpDiffpath); err != nil && !os.IsNotExist(err) {
return err
}
defer os.RemoveAll(tmpDiffpath)
// Remove the layers file for the id
if err := os.Remove(path.Join(a.rootPath(), "layers", id)); err != nil && !os.IsNotExist(err) {
return err
}
a.pathCacheLock.Lock()
delete(a.pathCache, id)
a.pathCacheLock.Unlock()
return nil
}
// Get returns the rootfs path for the id.
// This will mount the dir at its given path
func (a *Driver) Get(id, mountLabel string) (string, error) {
parents, err := a.getParentLayerPaths(id)
if err != nil && !os.IsNotExist(err) {
return "", err
}
a.pathCacheLock.Lock()
m, exists := a.pathCache[id]
a.pathCacheLock.Unlock()
if !exists {
m = a.getDiffPath(id)
if len(parents) > 0 {
m = a.getMountpoint(id)
}
}
if count := a.ctr.Increment(m); count > 1 {
return m, nil
}
// If a dir does not have a parent (no layers), do not try to mount;
// just return the diff path to the data
if len(parents) > 0 {
if err := a.mount(id, m, mountLabel, parents); err != nil {
return "", err
}
}
a.pathCacheLock.Lock()
a.pathCache[id] = m
a.pathCacheLock.Unlock()
return m, nil
}
// Put unmounts and updates list of active mounts.
func (a *Driver) Put(id string) error {
a.pathCacheLock.Lock()
m, exists := a.pathCache[id]
if !exists {
m = a.getMountpoint(id)
a.pathCache[id] = m
}
a.pathCacheLock.Unlock()
if count := a.ctr.Decrement(m); count > 0 {
return nil
}
err := a.unmount(m)
if err != nil {
logrus.Debugf("Failed to unmount %s aufs: %v", id, err)
}
return err
}
// Diff produces an archive of the changes between the specified
// layer and its parent layer which may be "".
func (a *Driver) Diff(id, parent string) (archive.Archive, error) {
// AUFS doesn't need the parent layer to produce a diff.
return archive.TarWithOptions(path.Join(a.rootPath(), "diff", id), &archive.TarOptions{
Compression: archive.Uncompressed,
ExcludePatterns: []string{archive.WhiteoutMetaPrefix + "*", "!" + archive.WhiteoutOpaqueDir},
UIDMaps: a.uidMaps,
GIDMaps: a.gidMaps,
})
}
type fileGetNilCloser struct {
storage.FileGetter
}
func (f fileGetNilCloser) Close() error {
return nil
}
// DiffGetter returns a FileGetCloser that can read files from the directory that
// contains files for the layer differences. Used for direct access for tar-split.
func (a *Driver) DiffGetter(id string) (graphdriver.FileGetCloser, error) {
p := path.Join(a.rootPath(), "diff", id)
return fileGetNilCloser{storage.NewPathFileGetter(p)}, nil
}
func (a *Driver) applyDiff(id string, diff archive.Reader) error {
return chrootarchive.UntarUncompressed(diff, path.Join(a.rootPath(), "diff", id), &archive.TarOptions{
UIDMaps: a.uidMaps,
GIDMaps: a.gidMaps,
})
}
// DiffSize calculates the changes between the specified id
// and its parent and returns the size in bytes of the changes
// relative to its base filesystem directory.
func (a *Driver) DiffSize(id, parent string) (size int64, err error) {
// AUFS doesn't need the parent layer to calculate the diff size.
return directory.Size(path.Join(a.rootPath(), "diff", id))
}
// ApplyDiff extracts the changeset from the given diff into the
// layer with the specified id and parent, returning the size of the
// new layer in bytes.
func (a *Driver) ApplyDiff(id, parent string, diff archive.Reader) (size int64, err error) {
// AUFS doesn't need the parent id to apply the diff.
if err = a.applyDiff(id, diff); err != nil {
return
}
return a.DiffSize(id, parent)
}
// Changes produces a list of changes between the specified layer
// and its parent layer. If parent is "", then all changes will be ADD changes.
func (a *Driver) Changes(id, parent string) ([]archive.Change, error) {
// AUFS doesn't have snapshots, so we need to get changes from all parent
// layers.
layers, err := a.getParentLayerPaths(id)
if err != nil {
return nil, err
}
return archive.Changes(layers, path.Join(a.rootPath(), "diff", id))
}
func (a *Driver) getParentLayerPaths(id string) ([]string, error) {
parentIds, err := getParentIds(a.rootPath(), id)
if err != nil {
return nil, err
}
layers := make([]string, len(parentIds))
// Get the diff paths for all the parent ids
for i, p := range parentIds {
layers[i] = path.Join(a.rootPath(), "diff", p)
}
return layers, nil
}
func (a *Driver) mount(id string, target string, mountLabel string, layers []string) error {
a.Lock()
defer a.Unlock()
// If the id is mounted or we get an error return
if mounted, err := a.mounted(target); err != nil || mounted {
return err
}
rw := a.getDiffPath(id)
if err := a.aufsMount(layers, rw, target, mountLabel); err != nil {
return fmt.Errorf("error creating aufs mount to %s: %v", target, err)
}
return nil
}
func (a *Driver) unmount(mountPath string) error {
a.Lock()
defer a.Unlock()
if mounted, err := a.mounted(mountPath); err != nil || !mounted {
return err
}
if err := Unmount(mountPath); err != nil {
return err
}
return nil
}
func (a *Driver) mounted(mountpoint string) (bool, error) {
return graphdriver.Mounted(graphdriver.FsMagicAufs, mountpoint)
}
// Cleanup aufs and unmount all mountpoints
func (a *Driver) Cleanup() error {
var dirs []string
if err := filepath.Walk(a.mntPath(), func(path string, info os.FileInfo, err error) error {
if err != nil {
return err
}
if !info.IsDir() {
return nil
}
dirs = append(dirs, path)
return nil
}); err != nil {
return err
}
for _, m := range dirs {
if err := a.unmount(m); err != nil {
logrus.Debugf("aufs error unmounting %s: %s", stringid.TruncateID(m), err)
}
}
return mountpk.Unmount(a.root)
}
func (a *Driver) aufsMount(ro []string, rw, target, mountLabel string) (err error) {
defer func() {
if err != nil {
Unmount(target)
}
}()
// Mount options are clipped to the page size (4096 bytes). If there are more
// layers than fit, the rest are remounted individually using append.
offset := 54
if useDirperm() {
offset += len("dirperm1")
}
b := make([]byte, syscall.Getpagesize()-len(mountLabel)-offset) // room for xino & mountLabel
bp := copy(b, fmt.Sprintf("br:%s=rw", rw))
firstMount := true
i := 0
for {
for ; i < len(ro); i++ {
layer := fmt.Sprintf(":%s=ro+wh", ro[i])
if firstMount {
if bp+len(layer) > len(b) {
break
}
bp += copy(b[bp:], layer)
} else {
data := label.FormatMountLabel(fmt.Sprintf("append%s", layer), mountLabel)
if err = mount("none", target, "aufs", syscall.MS_REMOUNT, data); err != nil {
return
}
}
}
if firstMount {
opts := "dio,xino=/dev/shm/aufs.xino"
if useDirperm() {
opts += ",dirperm1"
}
data := label.FormatMountLabel(fmt.Sprintf("%s,%s", string(b[:bp]), opts), mountLabel)
if err = mount("none", target, "aufs", 0, data); err != nil {
return
}
firstMount = false
}
if i == len(ro) {
break
}
}
return
}
// useDirperm checks whether the dirperm1 mount option can be used with the current
// version of aufs.
func useDirperm() bool {
enableDirpermLock.Do(func() {
base, err := ioutil.TempDir("", "docker-aufs-base")
if err != nil {
logrus.Errorf("error checking dirperm1: %v", err)
return
}
defer os.RemoveAll(base)
union, err := ioutil.TempDir("", "docker-aufs-union")
if err != nil {
logrus.Errorf("error checking dirperm1: %v", err)
return
}
defer os.RemoveAll(union)
opts := fmt.Sprintf("br:%s,dirperm1,xino=/dev/shm/aufs.xino", base)
if err := mount("none", union, "aufs", 0, opts); err != nil {
return
}
enableDirperm = true
if err := Unmount(union); err != nil {
logrus.Errorf("error checking dirperm1: failed to unmount %v", err)
}
})
return enableDirperm
}

View File

@@ -0,0 +1,64 @@
// +build linux
package aufs
import (
"bufio"
"io/ioutil"
"os"
"path"
)
// Return the ids of all layers: the names of the non-directory entries under root
func loadIds(root string) ([]string, error) {
dirs, err := ioutil.ReadDir(root)
if err != nil {
return nil, err
}
out := []string{}
for _, d := range dirs {
if !d.IsDir() {
out = append(out, d.Name())
}
}
return out, nil
}
// Read the layers file for the current id and return all the
// layers represented by new lines in the file
//
// If there are no lines in the file then the id has no parent
// and an empty slice is returned.
func getParentIds(root, id string) ([]string, error) {
f, err := os.Open(path.Join(root, "layers", id))
if err != nil {
return nil, err
}
defer f.Close()
out := []string{}
s := bufio.NewScanner(f)
for s.Scan() {
if t := s.Text(); t != "" {
out = append(out, s.Text())
}
}
return out, s.Err()
}
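The layers file parsed here is the one Driver.Create writes above: the direct parent on the first line, then the remainder of the ancestor chain. For a layer three deep, the file for the topmost id would contain (ids illustrative):
<parent-id>
<grandparent-id>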
func (a *Driver) getMountpoint(id string) string {
return path.Join(a.mntPath(), id)
}
func (a *Driver) mntPath() string {
return path.Join(a.rootPath(), "mnt")
}
func (a *Driver) getDiffPath(id string) string {
return path.Join(a.diffPath(), id)
}
func (a *Driver) diffPath() string {
return path.Join(a.rootPath(), "diff")
}

View File

@@ -0,0 +1,21 @@
// +build linux
package aufs
import (
"os/exec"
"syscall"
"github.com/Sirupsen/logrus"
)
// Unmount flushes pending aufs writes via auplink, then unmounts the target.
func Unmount(target string) error {
if err := exec.Command("auplink", target, "flush").Run(); err != nil {
logrus.Warnf("Couldn't run auplink before unmount %s: %s", target, err)
}
if err := syscall.Unmount(target, 0); err != nil {
return err
}
return nil
}

View File

@@ -0,0 +1,7 @@
package aufs
import "syscall"
func mount(source string, target string, fstype string, flags uintptr, data string) error {
return syscall.Mount(source, target, fstype, flags, data)
}

View File

@@ -0,0 +1,12 @@
// +build !linux
package aufs
import "errors"
// MsRemount is declared so that non-Linux builds compile; mounting is not
// supported on these platforms.
const MsRemount = 0
func mount(source string, target string, fstype string, flags uintptr, data string) (err error) {
return errors.New("mount is not implemented on this platform")
}

View File

@@ -0,0 +1,520 @@
// +build linux
package btrfs
/*
#include <stdlib.h>
#include <dirent.h>
#include <btrfs/ioctl.h>
#include <btrfs/ctree.h>
static void set_name_btrfs_ioctl_vol_args_v2(struct btrfs_ioctl_vol_args_v2* btrfs_struct, const char* value) {
snprintf(btrfs_struct->name, BTRFS_SUBVOL_NAME_MAX, "%s", value);
}
*/
import "C"
import (
"fmt"
"os"
"path"
"path/filepath"
"strings"
"syscall"
"unsafe"
"github.com/containers/storage/drivers"
"github.com/containers/storage/pkg/idtools"
"github.com/containers/storage/pkg/mount"
"github.com/containers/storage/pkg/parsers"
"github.com/docker/go-units"
"github.com/opencontainers/runc/libcontainer/label"
)
func init() {
graphdriver.Register("btrfs", Init)
}
var (
quotaEnabled = false
userDiskQuota = false
)
type btrfsOptions struct {
minSpace uint64
size uint64
}
// Init returns a new BTRFS driver.
// An error is returned if BTRFS is not supported.
func Init(home string, options []string, uidMaps, gidMaps []idtools.IDMap) (graphdriver.Driver, error) {
fsMagic, err := graphdriver.GetFSMagic(home)
if err != nil {
return nil, err
}
if fsMagic != graphdriver.FsMagicBtrfs {
return nil, graphdriver.ErrPrerequisites
}
rootUID, rootGID, err := idtools.GetRootUIDGID(uidMaps, gidMaps)
if err != nil {
return nil, err
}
if err := idtools.MkdirAllAs(home, 0700, rootUID, rootGID); err != nil {
return nil, err
}
if err := mount.MakePrivate(home); err != nil {
return nil, err
}
opt, err := parseOptions(options)
if err != nil {
return nil, err
}
if userDiskQuota {
if err := subvolEnableQuota(home); err != nil {
return nil, err
}
quotaEnabled = true
}
driver := &Driver{
home: home,
uidMaps: uidMaps,
gidMaps: gidMaps,
options: opt,
}
return graphdriver.NewNaiveDiffDriver(driver, uidMaps, gidMaps), nil
}
func parseOptions(opt []string) (btrfsOptions, error) {
var options btrfsOptions
for _, option := range opt {
key, val, err := parsers.ParseKeyValueOpt(option)
if err != nil {
return options, err
}
key = strings.ToLower(key)
switch key {
case "btrfs.min_space":
minSpace, err := units.RAMInBytes(val)
if err != nil {
return options, err
}
userDiskQuota = true
options.minSpace = uint64(minSpace)
default:
return options, fmt.Errorf("Unknown option %s", key)
}
}
return options, nil
}
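The only key parseOptions accepts today is btrfs.min_space, and its value goes through units.RAMInBytes, which uses binary suffixes. A hedged standalone sketch (splitting on '=' by hand rather than calling pkg/parsers):
package main
import (
	"fmt"
	"strings"

	units "github.com/docker/go-units"
)
func main() {
	opt := "btrfs.min_space=1G"
	parts := strings.SplitN(opt, "=", 2)
	size, err := units.RAMInBytes(parts[1]) // binary units: "1G" is 1024^3
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s -> %d bytes\n", strings.ToLower(parts[0]), size) // 1073741824
}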
// Driver contains information about the filesystem mounted.
type Driver struct {
//root of the file system
home string
uidMaps []idtools.IDMap
gidMaps []idtools.IDMap
options btrfsOptions
}
// String returns the name of the driver (btrfs).
func (d *Driver) String() string {
return "btrfs"
}
// Status returns current driver information in a two dimensional string array.
// Output contains "Build Version" and "Library Version" of the btrfs libraries used.
// Version information can be used to check compatibility with your kernel.
func (d *Driver) Status() [][2]string {
status := [][2]string{}
if bv := btrfsBuildVersion(); bv != "-" {
status = append(status, [2]string{"Build Version", bv})
}
if lv := btrfsLibVersion(); lv != -1 {
status = append(status, [2]string{"Library Version", fmt.Sprintf("%d", lv)})
}
return status
}
// GetMetadata returns empty metadata for this driver.
func (d *Driver) GetMetadata(id string) (map[string]string, error) {
return nil, nil
}
// Cleanup unmounts the home directory.
func (d *Driver) Cleanup() error {
if quotaEnabled {
if err := subvolDisableQuota(d.home); err != nil {
return err
}
}
return mount.Unmount(d.home)
}
func free(p *C.char) {
C.free(unsafe.Pointer(p))
}
func openDir(path string) (*C.DIR, error) {
Cpath := C.CString(path)
defer free(Cpath)
dir := C.opendir(Cpath)
if dir == nil {
return nil, fmt.Errorf("Can't open dir")
}
return dir, nil
}
func closeDir(dir *C.DIR) {
if dir != nil {
C.closedir(dir)
}
}
func getDirFd(dir *C.DIR) uintptr {
return uintptr(C.dirfd(dir))
}
func subvolCreate(path, name string) error {
dir, err := openDir(path)
if err != nil {
return err
}
defer closeDir(dir)
var args C.struct_btrfs_ioctl_vol_args
for i, c := range []byte(name) {
args.name[i] = C.char(c)
}
_, _, errno := syscall.Syscall(syscall.SYS_IOCTL, getDirFd(dir), C.BTRFS_IOC_SUBVOL_CREATE,
uintptr(unsafe.Pointer(&args)))
if errno != 0 {
return fmt.Errorf("Failed to create btrfs subvolume: %v", errno.Error())
}
return nil
}
func subvolSnapshot(src, dest, name string) error {
srcDir, err := openDir(src)
if err != nil {
return err
}
defer closeDir(srcDir)
destDir, err := openDir(dest)
if err != nil {
return err
}
defer closeDir(destDir)
var args C.struct_btrfs_ioctl_vol_args_v2
args.fd = C.__s64(getDirFd(srcDir))
var cs = C.CString(name)
C.set_name_btrfs_ioctl_vol_args_v2(&args, cs)
C.free(unsafe.Pointer(cs))
_, _, errno := syscall.Syscall(syscall.SYS_IOCTL, getDirFd(destDir), C.BTRFS_IOC_SNAP_CREATE_V2,
uintptr(unsafe.Pointer(&args)))
if errno != 0 {
return fmt.Errorf("Failed to create btrfs snapshot: %v", errno.Error())
}
return nil
}
func isSubvolume(p string) (bool, error) {
var bufStat syscall.Stat_t
if err := syscall.Lstat(p, &bufStat); err != nil {
return false, err
}
// the root of a btrfs subvolume always has inode number BTRFS_FIRST_FREE_OBJECTID (256)
return bufStat.Ino == C.BTRFS_FIRST_FREE_OBJECTID, nil
}
func subvolDelete(dirpath, name string) error {
dir, err := openDir(dirpath)
if err != nil {
return err
}
defer closeDir(dir)
fullPath := path.Join(dirpath, name)
var args C.struct_btrfs_ioctl_vol_args
// walk the btrfs subvolumes
walkSubvolumes := func(p string, f os.FileInfo, err error) error {
if err != nil {
if os.IsNotExist(err) && p != fullPath {
// most likely missing because the path was a subvolume that got removed
// in a previous iteration; since it is gone anyway, we don't care
return nil
}
return fmt.Errorf("error walking subvolumes: %v", err)
}
// we want to check children only, so skip the root itself;
// it is removed after the filepath walk anyway
if f.IsDir() && p != fullPath {
sv, err := isSubvolume(p)
if err != nil {
return fmt.Errorf("Failed to test if %s is a btrfs subvolume: %v", p, err)
}
if sv {
if err := subvolDelete(path.Dir(p), f.Name()); err != nil {
return fmt.Errorf("Failed to destroy btrfs child subvolume (%s) of parent (%s): %v", p, dirpath, err)
}
}
}
return nil
}
if err := filepath.Walk(path.Join(dirpath, name), walkSubvolumes); err != nil {
return fmt.Errorf("Recursively walking subvolumes for %s failed: %v", dirpath, err)
}
// all subvolumes have been removed
// now remove the one originally passed in
for i, c := range []byte(name) {
args.name[i] = C.char(c)
}
_, _, errno := syscall.Syscall(syscall.SYS_IOCTL, getDirFd(dir), C.BTRFS_IOC_SNAP_DESTROY,
uintptr(unsafe.Pointer(&args)))
if errno != 0 {
return fmt.Errorf("Failed to destroy btrfs snapshot %s for %s: %v", dirpath, name, errno.Error())
}
return nil
}
func subvolEnableQuota(path string) error {
dir, err := openDir(path)
if err != nil {
return err
}
defer closeDir(dir)
var args C.struct_btrfs_ioctl_quota_ctl_args
args.cmd = C.BTRFS_QUOTA_CTL_ENABLE
_, _, errno := syscall.Syscall(syscall.SYS_IOCTL, getDirFd(dir), C.BTRFS_IOC_QUOTA_CTL,
uintptr(unsafe.Pointer(&args)))
if errno != 0 {
return fmt.Errorf("Failed to enable btrfs quota for %s: %v", dir, errno.Error())
}
return nil
}
func subvolDisableQuota(path string) error {
dir, err := openDir(path)
if err != nil {
return err
}
defer closeDir(dir)
var args C.struct_btrfs_ioctl_quota_ctl_args
args.cmd = C.BTRFS_QUOTA_CTL_DISABLE
_, _, errno := syscall.Syscall(syscall.SYS_IOCTL, getDirFd(dir), C.BTRFS_IOC_QUOTA_CTL,
uintptr(unsafe.Pointer(&args)))
if errno != 0 {
return fmt.Errorf("Failed to disable btrfs quota for %s: %v", dir, errno.Error())
}
return nil
}
func subvolRescanQuota(path string) error {
dir, err := openDir(path)
if err != nil {
return err
}
defer closeDir(dir)
var args C.struct_btrfs_ioctl_quota_rescan_args
_, _, errno := syscall.Syscall(syscall.SYS_IOCTL, getDirFd(dir), C.BTRFS_IOC_QUOTA_RESCAN_WAIT,
uintptr(unsafe.Pointer(&args)))
if errno != 0 {
return fmt.Errorf("Failed to rescan btrfs quota for %s: %v", dir, errno.Error())
}
return nil
}
func subvolLimitQgroup(path string, size uint64) error {
dir, err := openDir(path)
if err != nil {
return err
}
defer closeDir(dir)
var args C.struct_btrfs_ioctl_qgroup_limit_args
args.lim.max_referenced = C.__u64(size)
args.lim.flags = C.BTRFS_QGROUP_LIMIT_MAX_RFER
_, _, errno := syscall.Syscall(syscall.SYS_IOCTL, getDirFd(dir), C.BTRFS_IOC_QGROUP_LIMIT,
uintptr(unsafe.Pointer(&args)))
if errno != 0 {
return fmt.Errorf("Failed to limit qgroup for %s: %v", dir, errno.Error())
}
return nil
}
func (d *Driver) subvolumesDir() string {
return path.Join(d.home, "subvolumes")
}
func (d *Driver) subvolumesDirID(id string) string {
return path.Join(d.subvolumesDir(), id)
}
// CreateReadWrite creates a layer that is writable for use as a container
// file system.
func (d *Driver) CreateReadWrite(id, parent, mountLabel string, storageOpt map[string]string) error {
return d.Create(id, parent, mountLabel, storageOpt)
}
// Create the filesystem with given id.
func (d *Driver) Create(id, parent, mountLabel string, storageOpt map[string]string) error {
subvolumes := path.Join(d.home, "subvolumes")
rootUID, rootGID, err := idtools.GetRootUIDGID(d.uidMaps, d.gidMaps)
if err != nil {
return err
}
if err := idtools.MkdirAllAs(subvolumes, 0700, rootUID, rootGID); err != nil {
return err
}
if parent == "" {
if err := subvolCreate(subvolumes, id); err != nil {
return err
}
} else {
parentDir := d.subvolumesDirID(parent)
st, err := os.Stat(parentDir)
if err != nil {
return err
}
if !st.IsDir() {
return fmt.Errorf("%s: not a directory", parentDir)
}
if err := subvolSnapshot(parentDir, subvolumes, id); err != nil {
return err
}
}
if _, ok := storageOpt["size"]; ok {
driver := &Driver{}
if err := d.parseStorageOpt(storageOpt, driver); err != nil {
return err
}
if err := d.setStorageSize(path.Join(subvolumes, id), driver); err != nil {
return err
}
}
// if we have a remapped root (user namespaces enabled), change the created snapshot
// dir ownership to match
if rootUID != 0 || rootGID != 0 {
if err := os.Chown(path.Join(subvolumes, id), rootUID, rootGID); err != nil {
return err
}
}
return label.Relabel(path.Join(subvolumes, id), mountLabel, false)
}
// Parse btrfs storage options
func (d *Driver) parseStorageOpt(storageOpt map[string]string, driver *Driver) error {
// Read size to change the subvolume disk quota per container
for key, val := range storageOpt {
key := strings.ToLower(key)
switch key {
case "size":
size, err := units.RAMInBytes(val)
if err != nil {
return err
}
driver.options.size = uint64(size)
default:
return fmt.Errorf("Unknown option %s", key)
}
}
return nil
}
// Set btrfs storage size
func (d *Driver) setStorageSize(dir string, driver *Driver) error {
if driver.options.size <= 0 {
return fmt.Errorf("btrfs: invalid storage size: %s", units.HumanSize(float64(driver.options.size)))
}
if d.options.minSpace > 0 && driver.options.size < d.options.minSpace {
return fmt.Errorf("btrfs: storage size cannot be less than %s", units.HumanSize(float64(d.options.minSpace)))
}
if !quotaEnabled {
if err := subvolEnableQuota(d.home); err != nil {
return err
}
quotaEnabled = true
}
if err := subvolLimitQgroup(dir, driver.options.size); err != nil {
return err
}
return nil
}
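Taken together, the quota helpers are sequenced in three steps by this driver; a compressed sketch of the call order (a reading aid, not new code):
// 1. subvolEnableQuota(d.home):        once, lazily, the first time a size is requested
// 2. subvolLimitQgroup(subvolPath, n): cap the new container subvolume at n bytes
// 3. subvolRescanQuota(d.home):        after Remove, so usage accounting settles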
// Remove the filesystem with given id.
func (d *Driver) Remove(id string) error {
dir := d.subvolumesDirID(id)
if _, err := os.Stat(dir); err != nil {
return err
}
if err := subvolDelete(d.subvolumesDir(), id); err != nil {
return err
}
if err := os.RemoveAll(dir); err != nil && !os.IsNotExist(err) {
return err
}
if err := subvolRescanQuota(d.home); err != nil {
return err
}
return nil
}
// Get the requested filesystem id.
func (d *Driver) Get(id, mountLabel string) (string, error) {
dir := d.subvolumesDirID(id)
st, err := os.Stat(dir)
if err != nil {
return "", err
}
if !st.IsDir() {
return "", fmt.Errorf("%s: not a directory", dir)
}
return dir, nil
}
// Put is not implemented for BTRFS as there is no cleanup required for the id.
func (d *Driver) Put(id string) error {
// Get() creates no runtime resources (like e.g. mounts)
// so this doesn't need to do anything.
return nil
}
// Exists checks if the id exists in the filesystem.
func (d *Driver) Exists(id string) bool {
dir := d.subvolumesDirID(id)
_, err := os.Stat(dir)
return err == nil
}

View File

@@ -0,0 +1,3 @@
// +build !linux !cgo
package btrfs

View File

@@ -0,0 +1,26 @@
// +build linux,!btrfs_noversion
package btrfs
/*
#include <btrfs/version.h>
// around version 3.16, they did not define lib version yet
#ifndef BTRFS_LIB_VERSION
#define BTRFS_LIB_VERSION -1
#endif
// upstream had removed it, but now it will be coming back
#ifndef BTRFS_BUILD_VERSION
#define BTRFS_BUILD_VERSION "-"
#endif
*/
import "C"
func btrfsBuildVersion() string {
return string(C.BTRFS_BUILD_VERSION)
}
func btrfsLibVersion() int {
return int(C.BTRFS_LIB_VERSION)
}

View File

@@ -0,0 +1,14 @@
// +build linux,btrfs_noversion
package btrfs
// TODO(vbatts) remove this work-around once supported Linux distros ship
// btrfs utilities >= 3.16.1
func btrfsBuildVersion() string {
return "-"
}
func btrfsLibVersion() int {
return -1
}

View File

@@ -0,0 +1,67 @@
package graphdriver
import "sync"
type minfo struct {
check bool
count int
}
// RefCounter is a generic counter for use by graphdriver Get/Put calls
type RefCounter struct {
counts map[string]*minfo
mu sync.Mutex
checker Checker
}
// NewRefCounter returns a new RefCounter
func NewRefCounter(c Checker) *RefCounter {
return &RefCounter{
checker: c,
counts: make(map[string]*minfo),
}
}
// Increment increases the ref count for the given path and returns the current count
func (c *RefCounter) Increment(path string) int {
c.mu.Lock()
m := c.counts[path]
if m == nil {
m = &minfo{}
c.counts[path] = m
}
// If this path is seen for the first time, check whether it is already
// mounted on the system; if so, start the count one higher so the
// existing mount is reflected as an active user.
if !m.check {
m.check = true
if c.checker.IsMounted(path) {
m.count++
}
}
m.count++
c.mu.Unlock()
return m.count
}
// Decrement decreases the ref count for the given path and returns the current count
func (c *RefCounter) Decrement(path string) int {
c.mu.Lock()
m := c.counts[path]
if m == nil {
m = &minfo{}
c.counts[path] = m
}
// If this path is seen for the first time, check whether it is already
// mounted on the system; if so, start the count one higher so the
// existing mount is reflected as an active user.
if !m.check {
m.check = true
if c.checker.IsMounted(path) {
m.count++
}
}
m.count--
c.mu.Unlock()
return m.count
}
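The counter only helps if Get and Put agree on the transition points: mount on 0 -> 1, unmount on 1 -> 0. A runnable sketch of that pairing (Linux-only, like the drivers; the mount and unmount steps stubbed out with prints), mirroring how the devmapper driver below uses it:
package main
import (
	"fmt"

	graphdriver "github.com/containers/storage/drivers"
)
var ctr = graphdriver.NewRefCounter(graphdriver.NewDefaultChecker())
// get performs the real mount only on the 0 -> 1 transition.
func get(mountpoint string) string {
	if count := ctr.Increment(mountpoint); count > 1 {
		return mountpoint // already mounted for another user
	}
	fmt.Println("mounting", mountpoint) // a real driver mounts here
	return mountpoint
}
// put performs the real unmount only when the last user is gone.
func put(mountpoint string) {
	if count := ctr.Decrement(mountpoint); count > 0 {
		return // still in use elsewhere
	}
	fmt.Println("unmounting", mountpoint) // a real driver unmounts here
}
func main() {
	get("/var/lib/storage/mnt/abc")
	get("/var/lib/storage/mnt/abc") // nested user: no second mount
	put("/var/lib/storage/mnt/abc") // one user left: no unmount yet
	put("/var/lib/storage/mnt/abc") // last user: unmount happens
}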

File diff suppressed because it is too large

View File

@@ -0,0 +1,106 @@
package devmapper
// Definition of struct dm_task and sub structures (from lvm2)
//
// struct dm_ioctl {
// /*
// * The version number is made up of three parts:
// * major - no backward or forward compatibility,
// * minor - only backwards compatible,
// * patch - both backwards and forwards compatible.
// *
// * All clients of the ioctl interface should fill in the
// * version number of the interface that they were
// * compiled with.
// *
// * All recognized ioctl commands (ie. those that don't
// * return -ENOTTY) fill out this field, even if the
// * command failed.
// */
// uint32_t version[3]; /* in/out */
// uint32_t data_size; /* total size of data passed in
// * including this struct */
// uint32_t data_start; /* offset to start of data
// * relative to start of this struct */
// uint32_t target_count; /* in/out */
// int32_t open_count; /* out */
// uint32_t flags; /* in/out */
// /*
// * event_nr holds either the event number (input and output) or the
// * udev cookie value (input only).
// * The DM_DEV_WAIT ioctl takes an event number as input.
// * The DM_SUSPEND, DM_DEV_REMOVE and DM_DEV_RENAME ioctls
// * use the field as a cookie to return in the DM_COOKIE
// * variable with the uevents they issue.
// * For output, the ioctls return the event number, not the cookie.
// */
// uint32_t event_nr; /* in/out */
// uint32_t padding;
// uint64_t dev; /* in/out */
// char name[DM_NAME_LEN]; /* device name */
// char uuid[DM_UUID_LEN]; /* unique identifier for
// * the block device */
// char data[7]; /* padding or data */
// };
// struct target {
// uint64_t start;
// uint64_t length;
// char *type;
// char *params;
// struct target *next;
// };
// typedef enum {
// DM_ADD_NODE_ON_RESUME, /* add /dev/mapper node with dmsetup resume */
// DM_ADD_NODE_ON_CREATE /* add /dev/mapper node with dmsetup create */
// } dm_add_node_t;
// struct dm_task {
// int type;
// char *dev_name;
// char *mangled_dev_name;
// struct target *head, *tail;
// int read_only;
// uint32_t event_nr;
// int major;
// int minor;
// int allow_default_major_fallback;
// uid_t uid;
// gid_t gid;
// mode_t mode;
// uint32_t read_ahead;
// uint32_t read_ahead_flags;
// union {
// struct dm_ioctl *v4;
// } dmi;
// char *newname;
// char *message;
// char *geometry;
// uint64_t sector;
// int no_flush;
// int no_open_count;
// int skip_lockfs;
// int query_inactive_table;
// int suppress_identical_reload;
// dm_add_node_t add_node;
// uint64_t existing_table_size;
// int cookie_set;
// int new_uuid;
// int secure_data;
// int retry_remove;
// int enable_checks;
// int expected_errno;
// char *uuid;
// char *mangled_uuid;
// };
//

View File

@@ -0,0 +1,226 @@
// +build linux
package devmapper
import (
"fmt"
"io/ioutil"
"os"
"path"
"strconv"
"github.com/Sirupsen/logrus"
"github.com/containers/storage/drivers"
"github.com/containers/storage/pkg/devicemapper"
"github.com/containers/storage/pkg/idtools"
"github.com/containers/storage/pkg/mount"
"github.com/docker/go-units"
)
func init() {
graphdriver.Register("devicemapper", Init)
}
// Driver contains the device set mounted and the home directory
type Driver struct {
*DeviceSet
home string
uidMaps []idtools.IDMap
gidMaps []idtools.IDMap
ctr *graphdriver.RefCounter
}
// Init creates a driver with the given home and the set of options.
func Init(home string, options []string, uidMaps, gidMaps []idtools.IDMap) (graphdriver.Driver, error) {
deviceSet, err := NewDeviceSet(home, true, options, uidMaps, gidMaps)
if err != nil {
return nil, err
}
if err := mount.MakePrivate(home); err != nil {
return nil, err
}
d := &Driver{
DeviceSet: deviceSet,
home: home,
uidMaps: uidMaps,
gidMaps: gidMaps,
ctr: graphdriver.NewRefCounter(graphdriver.NewDefaultChecker()),
}
return graphdriver.NewNaiveDiffDriver(d, uidMaps, gidMaps), nil
}
func (d *Driver) String() string {
return "devicemapper"
}
// Status returns the status about the driver in a printable format.
// Information returned contains Pool Name, Data File, Metadata file, disk usage by
// the data and metadata, etc.
func (d *Driver) Status() [][2]string {
s := d.DeviceSet.Status()
status := [][2]string{
{"Pool Name", s.PoolName},
{"Pool Blocksize", fmt.Sprintf("%s", units.HumanSize(float64(s.SectorSize)))},
{"Base Device Size", fmt.Sprintf("%s", units.HumanSize(float64(s.BaseDeviceSize)))},
{"Backing Filesystem", s.BaseDeviceFS},
{"Data file", s.DataFile},
{"Metadata file", s.MetadataFile},
{"Data Space Used", fmt.Sprintf("%s", units.HumanSize(float64(s.Data.Used)))},
{"Data Space Total", fmt.Sprintf("%s", units.HumanSize(float64(s.Data.Total)))},
{"Data Space Available", fmt.Sprintf("%s", units.HumanSize(float64(s.Data.Available)))},
{"Metadata Space Used", fmt.Sprintf("%s", units.HumanSize(float64(s.Metadata.Used)))},
{"Metadata Space Total", fmt.Sprintf("%s", units.HumanSize(float64(s.Metadata.Total)))},
{"Metadata Space Available", fmt.Sprintf("%s", units.HumanSize(float64(s.Metadata.Available)))},
{"Thin Pool Minimum Free Space", fmt.Sprintf("%s", units.HumanSize(float64(s.MinFreeSpace)))},
{"Udev Sync Supported", fmt.Sprintf("%v", s.UdevSyncSupported)},
{"Deferred Removal Enabled", fmt.Sprintf("%v", s.DeferredRemoveEnabled)},
{"Deferred Deletion Enabled", fmt.Sprintf("%v", s.DeferredDeleteEnabled)},
{"Deferred Deleted Device Count", fmt.Sprintf("%v", s.DeferredDeletedDeviceCount)},
}
if len(s.DataLoopback) > 0 {
status = append(status, [2]string{"Data loop file", s.DataLoopback})
}
if len(s.MetadataLoopback) > 0 {
status = append(status, [2]string{"Metadata loop file", s.MetadataLoopback})
}
if vStr, err := devicemapper.GetLibraryVersion(); err == nil {
status = append(status, [2]string{"Library Version", vStr})
}
return status
}
// GetMetadata returns a map of information about the device.
func (d *Driver) GetMetadata(id string) (map[string]string, error) {
m, err := d.DeviceSet.exportDeviceMetadata(id)
if err != nil {
return nil, err
}
metadata := make(map[string]string)
metadata["DeviceId"] = strconv.Itoa(m.deviceID)
metadata["DeviceSize"] = strconv.FormatUint(m.deviceSize, 10)
metadata["DeviceName"] = m.deviceName
return metadata, nil
}
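For orientation, the returned map looks roughly like this (all values hypothetical):
// GetMetadata("0fb2a...") ->
//   map[string]string{
//       "DeviceId":   "24",
//       "DeviceSize": "10737418240",
//       "DeviceName": "storage-8:1-1234-0fb2a...",
//   }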
// Cleanup unmounts a device.
func (d *Driver) Cleanup() error {
err := d.DeviceSet.Shutdown(d.home)
if err2 := mount.Unmount(d.home); err == nil {
err = err2
}
return err
}
// CreateReadWrite creates a layer that is writable for use as a container
// file system.
func (d *Driver) CreateReadWrite(id, parent, mountLabel string, storageOpt map[string]string) error {
return d.Create(id, parent, mountLabel, storageOpt)
}
// Create adds a device with a given id and the parent.
func (d *Driver) Create(id, parent, mountLabel string, storageOpt map[string]string) error {
if err := d.DeviceSet.AddDevice(id, parent, storageOpt); err != nil {
return err
}
return nil
}
// Remove removes a device with a given id, unmounts the filesystem.
func (d *Driver) Remove(id string) error {
if !d.DeviceSet.HasDevice(id) {
// Consider removing a non-existing device a no-op
// This is useful to be able to progress on container removal
// if the underlying device has gone away due to earlier errors
return nil
}
// This assumes the device has been properly Get/Put:ed and thus is unmounted
if err := d.DeviceSet.DeleteDevice(id, false); err != nil {
return err
}
mp := path.Join(d.home, "mnt", id)
if err := os.RemoveAll(mp); err != nil && !os.IsNotExist(err) {
return err
}
return nil
}
// Get mounts a device with given id into the root filesystem
func (d *Driver) Get(id, mountLabel string) (string, error) {
mp := path.Join(d.home, "mnt", id)
rootFs := path.Join(mp, "rootfs")
if count := d.ctr.Increment(mp); count > 1 {
return rootFs, nil
}
uid, gid, err := idtools.GetRootUIDGID(d.uidMaps, d.gidMaps)
if err != nil {
d.ctr.Decrement(mp)
return "", err
}
// Create the target directories if they don't exist
if err := idtools.MkdirAllAs(path.Join(d.home, "mnt"), 0755, uid, gid); err != nil && !os.IsExist(err) {
d.ctr.Decrement(mp)
return "", err
}
if err := idtools.MkdirAs(mp, 0755, uid, gid); err != nil && !os.IsExist(err) {
d.ctr.Decrement(mp)
return "", err
}
// Mount the device
if err := d.DeviceSet.MountDevice(id, mp, mountLabel); err != nil {
d.ctr.Decrement(mp)
return "", err
}
if err := idtools.MkdirAllAs(rootFs, 0755, uid, gid); err != nil && !os.IsExist(err) {
d.ctr.Decrement(mp)
d.DeviceSet.UnmountDevice(id, mp)
return "", err
}
idFile := path.Join(mp, "id")
if _, err := os.Stat(idFile); err != nil && os.IsNotExist(err) {
// Create an "id" file with the container/image id in it to help reconstruct this in case
// of later problems
if err := ioutil.WriteFile(idFile, []byte(id), 0600); err != nil {
d.ctr.Decrement(mp)
d.DeviceSet.UnmountDevice(id, mp)
return "", err
}
}
return rootFs, nil
}
// Put unmounts a device and removes it.
func (d *Driver) Put(id string) error {
mp := path.Join(d.home, "mnt", id)
if count := d.ctr.Decrement(mp); count > 0 {
return nil
}
err := d.DeviceSet.UnmountDevice(id, mp)
if err != nil {
logrus.Errorf("devmapper: Error unmounting device %s: %s", id, err)
}
return err
}
// Exists checks to see if the device exists.
func (d *Driver) Exists(id string) bool {
return d.DeviceSet.HasDevice(id)
}

View File

@@ -0,0 +1,89 @@
// +build linux
package devmapper
import (
"bytes"
"fmt"
"os"
"path/filepath"
"syscall"
)
// FIXME: this is copy-pasted from the aufs driver.
// It should be moved into the core.
// Mounted returns true if a mount point exists.
func Mounted(mountpoint string) (bool, error) {
mntpoint, err := os.Stat(mountpoint)
if err != nil {
if os.IsNotExist(err) {
return false, nil
}
return false, err
}
parent, err := os.Stat(filepath.Join(mountpoint, ".."))
if err != nil {
return false, err
}
mntpointSt := mntpoint.Sys().(*syscall.Stat_t)
parentSt := parent.Sys().(*syscall.Stat_t)
return mntpointSt.Dev != parentSt.Dev, nil
}
type probeData struct {
fsName string
magic string
offset uint64
}
// ProbeFsType returns the filesystem name for the given device path.
func ProbeFsType(device string) (string, error) {
probes := []probeData{
{"btrfs", "_BHRfS_M", 0x10040},
{"ext4", "\123\357", 0x438},
{"xfs", "XFSB", 0},
}
maxLen := uint64(0)
for _, p := range probes {
l := p.offset + uint64(len(p.magic))
if l > maxLen {
maxLen = l
}
}
file, err := os.Open(device)
if err != nil {
return "", err
}
defer file.Close()
buffer := make([]byte, maxLen)
l, err := file.Read(buffer)
if err != nil {
return "", err
}
if uint64(l) != maxLen {
return "", fmt.Errorf("devmapper: unable to detect filesystem type of %s, short read", device)
}
for _, p := range probes {
if bytes.Equal([]byte(p.magic), buffer[p.offset:p.offset+uint64(len(p.magic))]) {
return p.fsName, nil
}
}
return "", fmt.Errorf("devmapper: Unknown filesystem type on %s", device)
}
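A hedged usage sketch (the device path is hypothetical). Note that the probe reads only maxLen bytes, dominated by the btrfs entry (0x10040 + 8 = 65608):
// fs, err := ProbeFsType("/dev/mapper/storage-thin-base")
// if err == nil {
//     fmt.Println(fs) // "btrfs", "ext4" or "xfs"
// }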
func joinMountOptions(a, b string) string {
if a == "" {
return b
}
if b == "" {
return a
}
return a + "," + b
}

View File

@@ -10,8 +10,8 @@ import (
"github.com/Sirupsen/logrus"
"github.com/vbatts/tar-split/tar/storage"
"github.com/docker/docker/pkg/archive"
"github.com/docker/docker/pkg/idtools"
"github.com/containers/storage/pkg/archive"
"github.com/containers/storage/pkg/idtools"
)
// FsMagic unsigned id of the filesystem in use.
@@ -46,9 +46,12 @@ type InitFunc func(root string, options []string, uidMaps, gidMaps []idtools.IDM
type ProtoDriver interface {
// String returns a string representation of this driver.
String() string
// CreateReadWrite creates a new, empty filesystem layer that is ready
// to be used as the storage for a container.
CreateReadWrite(id, parent, mountLabel string, storageOpt map[string]string) error
// Create creates a new, empty, filesystem layer with the
// specified id and parent and mountLabel. Parent and mountLabel may be "".
Create(id, parent, mountLabel string) error
Create(id, parent, mountLabel string, storageOpt map[string]string) error
// Remove attempts to remove the filesystem layer with this id.
Remove(id string) error
// Get returns the mountpoint for the layered filesystem referred
@@ -110,11 +113,17 @@ type FileGetCloser interface {
Close() error
}
// Checker makes checks on specified filesystems.
type Checker interface {
// IsMounted returns true if the provided path is mounted for the specific checker
IsMounted(path string) bool
}
func init() {
drivers = make(map[string]InitFunc)
}
// Register registers a InitFunc for the driver.
// Register registers an InitFunc for the driver.
func Register(name string, initFunc InitFunc) error {
if _, exists := drivers[name]; exists {
return fmt.Errorf("Name already registered %s", name)

View File

@@ -5,6 +5,8 @@ package graphdriver
import (
"path/filepath"
"syscall"
"github.com/containers/storage/pkg/mount"
)
const (
@@ -14,6 +16,8 @@ const (
FsMagicBtrfs = FsMagic(0x9123683E)
// FsMagicCramfs filesystem id for Cramfs
FsMagicCramfs = FsMagic(0x28cd3d45)
// FsMagicEcryptfs filesystem id for eCryptfs
FsMagicEcryptfs = FsMagic(0xf15f)
// FsMagicExtfs filesystem id for Extfs
FsMagicExtfs = FsMagic(0x0000EF53)
// FsMagicF2fs filesystem id for F2fs
@@ -89,6 +93,36 @@ func GetFSMagic(rootpath string) (FsMagic, error) {
return FsMagic(buf.Type), nil
}
// NewFsChecker returns a Checker configured for the provided FsMagic
func NewFsChecker(t FsMagic) Checker {
return &fsChecker{
t: t,
}
}
type fsChecker struct {
t FsMagic
}
func (c *fsChecker) IsMounted(path string) bool {
m, _ := Mounted(c.t, path)
return m
}
// NewDefaultChecker returns a Checker that parses /proc/mountinfo to
// determine whether the specified path is mounted.
func NewDefaultChecker() Checker {
return &defaultChecker{}
}
type defaultChecker struct {
}
func (c *defaultChecker) IsMounted(path string) bool {
m, _ := mount.Mounted(path)
return m
}
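For contrast, the two concrete Checker choices (the devmapper driver above uses the second; a magic-number driver such as aufs would use the first, answering via statfs instead of parsing /proc/mountinfo):
// byMagic := graphdriver.NewFsChecker(graphdriver.FsMagicAufs)
// byMagic.IsMounted("/var/lib/storage/aufs/mnt/abc") // statfs f_type test
//
// byTable := graphdriver.NewDefaultChecker()
// byTable.IsMounted("/var/lib/storage/devicemapper/mnt/abc") // mountinfo scan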
// Mounted checks if the given path is mounted as the fs type
func Mounted(fsType FsMagic, mountPath string) (bool, error) {
var buf syscall.Statfs_t

Some files were not shown because too many files have changed in this diff