Compare commits


151 Commits

Author SHA1 Message Date
Antonio Murdaca
62e3747a11 bump to v0.1.19
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2017-04-14 12:21:09 +02:00
Miloslav Trmač
882ba36ef8 Merge pull request #327 from cyphar/bump-c-i
vendor: update containers/image@fb36437
2017-04-13 16:39:07 +02:00
Aleksa Sarai
3b699c5248 vendor: update c/i@fb36437e0f
This change includes the docker-archive: transport, allowing for
entirely local manipulation of Docker images.

Signed-off-by: Aleksa Sarai <asarai@suse.de>
2017-04-13 23:58:45 +10:00
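
A quick sketch of what the new transport enables (image name and paths here are illustrative; the transport syntax is docker-archive:path[:docker-reference]):

```sh
# Pull an image into a local "docker save"-style tarball, then back out
# into a directory, without ever talking to a Docker daemon:
$ skopeo copy docker://busybox:latest docker-archive:/tmp/busybox.tar:busybox:latest
$ skopeo copy docker-archive:/tmp/busybox.tar dir:/tmp/busybox-unpacked
```
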
Aleksa Sarai
b82945b689 vendor: fix non-root imports
vndr has never supported non-root imports, but it used to not produce
errors. Newer versions of vndr will not clone anything if the
vendor.conf doesn't "look right".

Signed-off-by: Aleksa Sarai <asarai@suse.de>
2017-04-13 23:54:23 +10:00
Antonio Murdaca
3851d89b17 Merge pull request #326 from estesp/gracefully-allow-no-tag-list
Allow inspect to work even if tag list blocked
2017-04-06 20:24:00 +02:00
Phil Estes
4360db9f6d Allow inspect to work even if tag list blocked
Some registries may choose to block the "list all tags" endpoint for
performance or other reasons. In this case we should still allow an
inspect which will not include the "tag list" in the output.

Signed-off-by: Phil Estes <estesp@gmail.com>
2017-04-06 13:48:19 -04:00
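
A sketch of the resulting behavior (registry name and output are illustrative; the warning text matches the inspect.go change below):

```sh
$ skopeo inspect docker://registry.example.com/myimage:latest
WARN[0000] Registry disallows tag list retrieval; skipping
{
    "Tag": "latest",
    "RepoTags": null,
    ...
}
```
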
Antonio Murdaca
355de6c757 Merge pull request #324 from runcom/revndr-c/image
revendor c/image for OCIConfig
2017-04-04 18:29:55 +02:00
Antonio Murdaca
ceacd8d885 revendor c/image for OCIConfig
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2017-04-04 17:49:51 +02:00
Antonio Murdaca
bc36bb416b Merge pull request #316 from mtrmac/change-sigstore-layout
Vendor mtrmac/image:change-sigstore-layout
2017-04-03 23:48:41 +02:00
Miloslav Trmač
e86ff0e79d Vendor after merging mtrmac/image:change-sigstore-layout 2017-04-03 23:26:18 +02:00
Miloslav Trmač
590703db95 Merge pull request #322 from mtrmac/x-registry-supports-signatures
Add support for the X-Registry-Supports-Signatures API extension
2017-04-03 16:40:01 +02:00
Miloslav Trmač
727212b12f Add CopySuite.TestCopyAtomicExtension
… testing signature reading and writing using the
X-Registry-Supports-Signatures extension, and its
interoperability/equivalence with the atomic: native OpenShift API.
2017-03-30 19:35:44 +02:00
Miloslav Trmač
593bdfe098 Vendor after merging mtrmac/image:x-registry-supports-signatures 2017-03-30 19:35:36 +02:00
Miloslav Trmač
ba22d17d1f Merge pull request #298 from mtrmac/openpgp
Vendor in non-gpgme support, and update tests and build machinery
2017-03-29 22:12:36 +02:00
Miloslav Trmač
0caee746fb Vendor after merging mtrmac/image:openpgp, + other updates
Primarily vendor after merging mtrmac/image:openpgp.

Then update for the SigningMechanism API change.

Also skip signing tests if the GPG mechanism does not support signing.

Also abort some of the tests early instead of trying to use invalid (or
nil) values.

The current master of image-tools does not build with Go 1.6, so keep
using an older release.

Also requires adding a few more dependencies of our updated
dependencies.
2017-03-29 20:54:18 +02:00
Miloslav Trmač
d2d41ebc33 Propagate BUILDTAGS through the build process
… so that any command-line overrides are respected.
2017-03-29 20:50:45 +02:00
Miloslav Trmač
9272b5177e Merge pull request #321 from mtrmac/remove-temp-dir
Trivial bug fixes to `CopySuite.TestCopyDockerSigstore`
2017-03-28 17:10:52 +02:00
Miloslav Trmač
62967259a4 Remove a copy&pasted comment 2017-03-28 16:41:15 +02:00
Miloslav Trmač
224b54c367 Uncomment a “defer os.RemoveAll(tmpDir)”
We are running the integration tests in a container, so this does not
_really_ matter, at least right now—mostly a matter of principle.
2017-03-28 16:40:28 +02:00
Miloslav Trmač
0734c4ccb3 Merge pull request #320 from mtrmac/openshift-shell
Add a commented-out CopySuite.TestRunShell
2017-03-27 17:36:14 +02:00
Miloslav Trmač
6b9345a5f9 Add a commented-out CopySuite.TestRunShell
We are maintaining code to set up and run registries, including the
fairly complex setup for Atomic Registry, in the integration tests.
This is all useful for experimentation in shell, and the easiest way to
do that is to add a “test” which, after all the setup is done, simply
starts a shell.

This is gated by a build tag, so it does not affect normal test runs.

A possible alternative would be to convert all of the setup code not to
depend on check.C and testing.T, but that would be fairly cumbersome due
to how prevalent c.Logf and c.Assert are throughout the setup code.
Especially the natural replacement of c.Assert with a panic() would be
pretty ugly, and adding real error handling to all of that would make
the code noticeably longer.  The build tag and copy&pasting a command
works just as well, at least for now.

(It is not conveniently possible to create a new “main program” which
manually creates a check.C and testing.T just for the purpose of running
the setup code either; check.C can be created given a testing.T, but
testing.T is only created by testing.MainStart, which does not allow us
to submit a non-test method; and testing.MainStart is excluded from the
Go compatibility promise.)
2017-03-27 17:01:29 +02:00
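
Assuming the test body is uncommented, running it would look something like this (the gating tag name below is a placeholder, not necessarily the one used in the PR):

```sh
# The BUILDTAGS plumbing added elsewhere in this changeset passes the tag
# through to the integration-test run inside the container:
$ make test-integration BUILDTAGS='run_shell_tag'
```
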
Miloslav Trmač
ff5694b1a6 Merge pull request #319 from kofalt/insecure-policy-flag-redux
Insecure policy flag redux
2017-03-25 00:39:43 +01:00
Nathaniel Kofalt
467a574e34 Add documentation for --insecure-policy flag
Signed-off-by: Nathaniel Kofalt <nathaniel@kofalt.com>
2017-03-24 17:30:01 -05:00
Kushal Das
4043ecf922 Adds --insecure-policy flag
This patch adds a new flag --insecure-policy.
Closes #181; we can now use the tool directly with the
above-mentioned flag without using a policy file.

Signed-off-by: Kushal Das <mail@kushaldas.in>
2017-03-24 17:10:44 -05:00
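
Usage sketch (destination path illustrative):

```sh
# Skip the signature-policy check entirely instead of consulting
# /etc/containers/policy.json:
$ skopeo --insecure-policy copy docker://busybox:latest dir:/tmp/busybox
```
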
Antonio Murdaca
e052488674 Merge pull request #318 from enoodle/inspect_cmd_cert_path_to_cert_dir
Fix wrong naming of cert-dir argument in inspect command
2017-03-21 10:40:47 +01:00
Erez Freiberger
52aade5356 fix cert-path references in completions/bash/skopeo 2017-03-20 16:16:57 +02:00
Erez Freiberger
1491651ea9 adding tests 2017-03-20 12:53:34 +02:00
Erez Freiberger
d969934fa4 Fix wrong naming of cert-dir argument 2017-03-20 11:27:43 +02:00
Antonio Murdaca
bda45f0d60 Merge pull request #317 from mtrmac/openshift-distribution-api
Update OpenShift in integration tests to get the docker/distribution API extensions
2017-03-20 08:42:35 +01:00
Miloslav Trmač
96e579720e Update OpenShift from 1.3.0-alpha.3 to 1.5.0-alpha.3
This is primarily to get the signature access docker/distribution API
extension.

To make it work, two updates to the test harness are necessary:

- Change the expected output of (oadm policy add-cluster-role-to-group)
- Don't expect (openshift start master) to create .kubeconfig files
  for the registry service.

  As of https://github.com/openshift/origin/pull/10830 ,
  openshift.local.config/master/openshift-registry.kubeconfig is no longer
  autogenerated.  Instead, do what (oadm registry) does, creating a
  service account and a cluster policy role binding.  Then manually create
  the necessary certificates and a .kubeconfig instead of using the
  service account in a pod.
2017-03-18 20:25:01 +01:00
Miloslav Trmač
9d88725a97 Update tests to handle OpenShift changing the schema1 signatures
The integrated registry used to return the original signature unmodified
in 1.3.0-alpha.3; in 1.5.0-alpha.3 it regenerates a new one, so allow for that
when comparing the original and copied image.
2017-03-18 20:24:22 +01:00
Miloslav Trmač
def5f4a11a Add an openshiftCluster.clusterCmd helper method
… to help with creating an exec.Cmd in the cluster’s directory and with
the appropriate environment variables.
2017-03-18 20:24:22 +01:00
Miloslav Trmač
c4275519ae Split runExecCmdWithInput from runCommandWithInput
This does not change behavior yet; runExecCmdWithInput will be used in
the future by callers who need to modify the exec.Cmd.
2017-03-18 20:24:22 +01:00
Antonio Murdaca
b164a261cf Merge pull request #315 from runcom/fix-panic
fix copy panic on http response with tls-verify true
2017-03-04 00:17:10 +01:00
Antonio Murdaca
f89bd82dcd fix copy panic on http response with tls-verify true
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2017-03-03 23:59:59 +01:00
Antonio Murdaca
ab4912a5a1 Merge pull request #288 from runcom/pluggable-transports
vendor c/image for pluggable transports
2017-03-02 15:39:52 +01:00
Antonio Murdaca
85d737fc29 vendor c/image for pluggable transports
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2017-03-02 15:21:58 +01:00
Miloslav Trmač
0224d8cd38 Merge pull request #309 from erikh/check-close-errors
cmd/skopeo: check errors on close functions for image handling.
2017-02-27 15:28:15 +01:00
Erik Hollensbe
f0730043c6 vendor.conf,vendor: vndr update for containers/image
Signed-off-by: Erik Hollensbe <github@hollensbe.org>
2017-02-27 02:15:36 -08:00
Erik Hollensbe
e0efa0c2b3 cmd/skopeo: check errors on close functions for image handling.
Signed-off-by: Erik Hollensbe <github@hollensbe.org>
2017-02-27 01:55:49 -08:00
Antonio Murdaca
b9826f0c42 Merge pull request #307 from cyphar/vendor-ci193
[donotmerge] vendor: update to c/i@copy-obey-destination-compression
2017-02-17 14:03:28 +01:00
Aleksa Sarai
94d6767d07 vendor: update to c/i@3f493f2e5d
This includes fixes to docker-daemon's GetBlob, which will now
decompress blobs (making c/i/copy act sanely when trying to copy from a
docker-daemon to uncompressed destinations, as well as making
verification actually work properly).

Signed-off-by: Aleksa Sarai <asarai@suse.de>
2017-02-17 23:45:05 +11:00
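
The case this fixes, sketched (image name illustrative):

```sh
# docker-daemon: blobs now come out decompressed, so copying to an
# uncompressed destination such as dir: verifies correctly:
$ skopeo copy docker-daemon:alpine:latest dir:/tmp/alpine
```
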
Antonio Murdaca
226dc99ad4 Merge pull request #306 from cyphar/oci-roundtrip-tests
integration: add OCI <-> Docker roundtrip tests
2017-02-16 13:07:42 +01:00
Aleksa Sarai
eea384cdf7 integration: add upstream validator to OCI roundtrip tests
In order to make sure that we don't create OCI images that are
consistently invalid, add additional checks to ensure that both of the
generated OCI images in the round-trip test are valid according to the
upstream validator.

This commit vendors the following packages (deep breath):
* oci/image-tools@7575a09363, which requires
* oci/image-spec@v1.0.0-rc4 [revendor, but is technically an update
  because I couldn't figure out what version was vendored last time]
* oci/runtime-spec@v1.0.0-rc4
* xeipuuv/gojsonschema@6b67b3fab7
* xeipuuv/gojsonreference@e02fc20de9
* xeipuuv/gojsonpointer@e0fe6f6830
* camlistore/go4@7ce08ca145

Signed-off-by: Aleksa Sarai <asarai@suse.de>
2017-02-16 22:48:32 +11:00
Aleksa Sarai
76f5c6d4c5 integration: add OCI <-> Docker roundtrip tests
This test is just a general smoke test to make sure there are no errors
with skopeo, but also verifying that after passing through several
translation steps an OCI image will remain in fully working order.

Signed-off-by: Aleksa Sarai <asarai@suse.de>
2017-02-16 22:48:32 +11:00
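
A manual version of the round trip the test exercises (names and paths illustrative):

```sh
$ skopeo copy docker://busybox:latest oci:/tmp/busybox-oci
$ skopeo copy oci:/tmp/busybox-oci docker://localhost:5000/busybox:roundtrip
```
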
Aleksa Sarai
79ef111398 vendor: update c/image@a074c669cf
This includes fixes required to add OCI roundtrip integration tests
(namely f9214e1d9d5d ("oci: remove MIME type autodetection")).

Signed-off-by: Aleksa Sarai <asarai@suse.de>
2017-02-16 21:22:37 +11:00
Antonio Murdaca
af2998040a Merge pull request #302 from runcom/new-rel-v0.1.18
New release: v0.1.18
2017-02-02 18:08:18 +01:00
Antonio Murdaca
1f6c140716 bump to v0.1.19-dev
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2017-02-02 17:42:48 +01:00
Antonio Murdaca
b08008c5b2 bump to v0.1.18
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2017-02-02 17:42:32 +01:00
Antonio Murdaca
c011e81b38 Merge pull request #301 from runcom/perf-gain
vendor c/image@c1893ff40c
2017-02-02 17:41:13 +01:00
Antonio Murdaca
683d45ffd6 vendor c/image@c1893ff40c
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2017-02-02 17:23:47 +01:00
Antonio Murdaca
07d6e7db03 Merge pull request #300 from runcom/fix-v2s2-oci-conversion
oci: fix config conversion from docker v2s2
2017-01-31 18:42:31 +01:00
Antonio Murdaca
02ea29a99d oci: fix config conversion from docker v2s2
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2017-01-31 18:25:18 +01:00
Antonio Murdaca
f1849c6a47 Merge pull request #297 from runcom/remove-engine-api
docker: remove github.com/docker/engine-api
2017-01-31 17:27:13 +01:00
Antonio Murdaca
03bb3c2f74 docker: remove github.com/docker/engine-api
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2017-01-31 17:09:02 +01:00
Antonio Murdaca
fdf0bec556 Merge pull request #296 from runcom/fix-registry-auth
docker: fix registry authentication issues
2017-01-30 17:00:52 +01:00
Antonio Murdaca
c736e69d48 docker: fix registry authentication issues
vendor containers/image@d23efe9c5a
fix https://bugzilla.redhat.com/show_bug.cgi?id=1413987

Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2017-01-30 16:43:00 +01:00
Antonio Murdaca
d7156f9b3d Merge pull request #294 from runcom/fix-image-spec-dep
fix OCI image-spec dependency
2017-01-28 13:31:46 +01:00
Antonio Murdaca
15b3fdf6f4 fix OCI image-spec dependency
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2017-01-28 12:53:41 +01:00
Miloslav Trmač
0e1ba1fb70 Merge pull request #293 from mtrmac/unverified-contents
REPLACE: Vendor in mtrmac/image:unverified-contents and add a (skopeo dump-untrusted-signature-contents-without-verification)
2017-01-23 17:34:11 +01:00
Miloslav Trmač
ee590a9795 Add an undocumented (skopeo untrusted-signature-dump-without-verification)
This is a bit better than raw (gpg -d $signature), and it allows testing
of the signature.GetSignatureInformationWithoutVerification function;
but, still, keeping it hidden because relying on this in common
workflows is probably a bad idea and we don't _need_ to expose it right
now.

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2017-01-23 16:45:18 +01:00
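
Usage sketch (signature path illustrative; the field names appear in the accompanying tests):

```sh
$ skopeo untrusted-signature-dump-without-verification ./image.signature
{
    "UntrustedDockerManifestDigest": "sha256:20bf21ed...",
    "UntrustedDockerReference": "testing/manifest",
    ...
}
```
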
Miloslav Trmač
076d41d627 Vendor after merging mtrmac/image:unverified-contents 2017-01-23 16:45:03 +01:00
Miloslav Trmač
4830d90c32 Merge pull request #292 from mtrmac/ParseNormalizedNamed
REPLACE: Vendor in mtrmac/image:ParseNormalizedNamed
2017-01-20 00:02:14 +01:00
Miloslav Trmač
2f8cc39a1a Vendor after merging mtrmac/image:ParseNormalizedNamed
… and use the master branch of docker/distribution which provides
docker/distribution/reference.ParseNormalizedNamed.
2017-01-19 23:11:08 +01:00
Miloslav Trmač
8602471486 Merge pull request #291 from mtrmac/erikh-kubernetes-apimachinery
Vendor after merging erikh/image:kube-fix
2017-01-19 20:37:33 +01:00
Erik Hollensbe
1ee74864e9 Vendor after merging erikh/image:kube-fix
Based on https://github.com/projectatomic/skopeo/pull/289 by Erik
Hollensbe <github@hollensbe.org>

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2017-01-19 20:17:36 +01:00
Antonio Murdaca
81404fb71c Merge pull request #286 from AkihiroSuda/man2
Makefile: /usr/bin/go-md2man -> go-md2man
2017-01-12 15:29:57 +01:00
Akihiro Suda
845ad88cec Makefile: /usr/bin/go-md2man -> go-md2man
Signed-off-by: Akihiro Suda <suda.akihiro@lab.ntt.co.jp>
2017-01-12 09:57:52 +00:00
Antonio Murdaca
9b6b57df50 Merge pull request #284 from runcom/switch-to-vndr
switch to vndr
2017-01-09 18:11:22 +01:00
Antonio Murdaca
fefeeb4c70 switch to vndr
vndr is almost exactly the same as our good old hack/vendor.sh, except
it's cleaner and it allows re-vendoring just one dependency when needed
(which we do a lot for containers/image).

Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2017-01-09 17:54:17 +01:00
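
The resulting workflow, as documented in the README changes below:

```sh
# Add or update a dependency: edit its line in vendor.conf, then
$ vndr github.com/pkg/errors
# To test a containers/image PR, point its vendor.conf line at your fork
# and branch, then re-vendor just that dependency:
$ vndr github.com/containers/image
```
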
Antonio Murdaca
bbc0c69624 Merge pull request #283 from runcom/oci-digest
*: switch to opencontainers/go-digest
2017-01-09 16:51:47 +01:00
Antonio Murdaca
dcfcfdaa1e *: switch to opencontainers/go-digest
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2017-01-09 16:20:00 +01:00
Antonio Murdaca
e41b0d67d6 Merge pull request #282 from runcom/rev-c-image-2
bump c/image
2017-01-08 13:26:08 +01:00
Antonio Murdaca
1ec992abd1 bump c/image
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2017-01-08 13:10:07 +01:00
Antonio Murdaca
b8ae5c6054 Merge pull request #278 from runcom/up-oci-image
update opencontainers/image-spec to v1.0.0-rc3
2016-12-23 12:07:14 +01:00
Antonio Murdaca
7c530bc55f update opencontainers/image-spec to v1.0.0-rc3
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2016-12-23 11:39:27 +01:00
Antonio Murdaca
3377542e27 Merge pull request #277 from runcom/up-c-image
update c/image
2016-12-22 17:59:22 +01:00
Antonio Murdaca
cf5d9ffa49 update c/image
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2016-12-22 17:43:37 +01:00
Antonio Murdaca
686c3fcd7a Merge pull request #265 from masters-of-cats/vendor-errorspkg
Add pkg/errors dependency
2016-12-19 17:47:15 +01:00
George Lestaris
78a24cea81 Vendors new containers/image version using pkg/errors
Signed-off-by: George Lestaris <glestaris@pivotal.io>
2016-12-19 16:25:50 +00:00
Antonio Murdaca
fd93ebb78d Merge pull request #275 from glestaris/patch-1
Fix typo in CONTRIBUTING.md
2016-12-16 12:30:43 +01:00
George Lestaris
56dd3fc928 Fix typo
Signed-off-by: Gareth Clay <gclay@pivotal.io>
Signed-off-by: George Lestaris <glestaris@pivotal.io>
2016-12-16 10:56:57 +00:00
Antonio Murdaca
d830304e6d Merge pull request #179 from nalind/vendor-storage
Vendor containers/storage
2016-12-15 18:49:51 +01:00
Nalin Dahyabhai
4960f390e2 Vendor containers/storage, update containers/image
Vendor containers/storage, and its dependencies github.com/pborman/uuid
and github.com/mistifyio/go-zfs, which we didn't already use.

Update the build Dockerfile to install their dependencies.

Add scriptlets that try to detect whether or not we need to use the
"libdm_no_deferred_remove" and/or "btrfs_noversion" build tags.

Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>
2016-12-15 12:29:07 -05:00
Nalin Dahyabhai
9ba6dd71d7 Initialize reexec() handlers at startup-time
When we start up, initialize handlers so that we can import blobs
correctly when using the storage library.

Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>
2016-12-09 13:31:54 -05:00
Antonio Murdaca
dd6441b546 Merge pull request #273 from mtrmac/FIXMEs
Remove an obsolete FIXME
2016-12-09 17:28:19 +01:00
Miloslav Trmač
09cc6c3199 Remove an obsolete FIXME
With https://github.com/containers/image/pull/183 , this no longer
applies.
2016-12-09 17:10:32 +01:00
Antonio Murdaca
7f7b648443 Merge pull request #272 from runcom/bump-v0.1.17-and-again
Bump v0.1.17 and again to v0.1.18-dev
2016-12-08 21:15:36 +01:00
Antonio Murdaca
cc571eb1ea bump to v0.1.18-dev
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2016-12-08 20:58:26 +01:00
Antonio Murdaca
b3b4e2b8f8 bump to v0.1.17
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2016-12-08 20:58:03 +01:00
Antonio Murdaca
28647cf29f Merge pull request #271 from runcom/fix-nokey-err
provide better error when key not found
2016-12-08 19:49:49 +01:00
Antonio Murdaca
0c8511f222 provide better error when key not found
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2016-12-08 18:51:30 +01:00
Antonio Murdaca
a515fefda9 Merge pull request #264 from runcom/split-flags
cmd: per command tls flags
2016-12-08 18:12:11 +01:00
Antonio Murdaca
1215f5fe69 cmd: per command tls flags
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2016-12-08 17:56:22 +01:00
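
A sketch of the new per-command flags (registry names and credentials illustrative; the flag names are from the copy.go diff below):

```sh
$ skopeo copy \
    --src-tls-verify=false \
    --dest-creds=testuser:testpassword \
    docker://localhost:5000/myimage:latest \
    docker://registry.example.com/myimage:latest
```
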
Antonio Murdaca
93cde78d9b cmd/skopeo: hide layers command
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2016-12-08 17:56:22 +01:00
Antonio Murdaca
1730fd0d5f Merge pull request #269 from nalind/buildtags
Makefile: run "go" with $(BUILDTAGS)
2016-12-08 17:10:45 +01:00
Nalin Dahyabhai
7d58309a4f Makefile: run "go" with $(BUILDTAGS)
Run the "go" command with the $(BUILDTAGS) makefile variable passed in
as build tags.  We don't currently set it, but we'll need to eventually,
and adding it now does no harm.

Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>
2016-12-08 10:49:36 -05:00
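
For example (these two tag values are the storage-related ones mentioned elsewhere in this changeset):

```sh
$ make binary-local BUILDTAGS='btrfs_noversion libdm_no_deferred_remove'
```
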
Antonio Murdaca
a865c07818 Merge pull request #267 from cyphar/vendor-image
vendor: update to containers/image@a791c54467
2016-12-08 10:03:48 +01:00
Aleksa Sarai
cd269a4558 vendor: update to containers/image@a791c54467
Signed-off-by: Aleksa Sarai <asarai@suse.de>
2016-12-08 12:32:36 +11:00
Miloslav Trmač
2b3af4ad51 Merge pull request #266 from runcom/update-readme-contrib
README.md: add dependencies management tips
2016-12-05 17:38:21 +01:00
Antonio Murdaca
6ec338aa30 README.md: add dependencies management tips
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2016-12-05 13:07:20 +01:00
Antonio Murdaca
fb61d0c98f Merge pull request #262 from runcom/fix-quay-io
docker: fix ping routine
2016-12-02 19:16:31 +01:00
Antonio Murdaca
7620193722 docker: fix ping routine
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2016-12-02 19:00:38 +01:00
Miloslav Trmač
f8bd406deb Merge pull request #261 from runcom/fix-inline-df-comments
Revert "Dockerfile: remove inline comments"
2016-12-02 18:28:40 +01:00
Antonio Murdaca
d9b60e7fc9 Revert "Dockerfile: remove inline comments"
This reverts commit 3dcdb1ff7d.

Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2016-12-01 22:03:10 +01:00
Miloslav Trmač
d0a41799da Merge pull request #260 from runcom/rerevendor-cimage
revendor containers/image
2016-12-01 19:19:44 +01:00
Antonio Murdaca
dcc5395140 revendor containers/image
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2016-12-01 19:03:10 +01:00
Antonio Murdaca
980ff3eadd Merge pull request #249 from runcom/layers-federation
Support layers federation
2016-11-30 22:14:30 +01:00
Antonio Murdaca
f36fde92d6 Supports layers federation
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2016-11-30 19:46:54 +01:00
Antonio Murdaca
6b616d1730 Merge pull request #256 from rhatdan/bash_completions
Complete bash completions for skopeo
2016-11-30 19:39:25 +01:00
Dan Walsh
6dc36483f4 Complete bash completions for skopeo
Current code only handled commands, not the options.
2016-11-30 13:22:57 -05:00
Antonio Murdaca
574b764391 Merge pull request #257 from runcom/revendor-cimage
revendor containers/image
2016-11-30 18:24:05 +01:00
Antonio Murdaca
1e795e038b revendor containers/image
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2016-11-30 18:07:35 +01:00
Antonio Murdaca
7cca84ba57 Merge pull request #254 from runcom/enable-cli-userpass
use user/pass flags
2016-11-30 17:29:48 +01:00
Antonio Murdaca
342ba18561 use user/pass flags
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2016-11-30 17:10:42 +01:00
Antonio Murdaca
d69c51e958 Merge pull request #248 from Crazykev/use-docker-digest
Use docker/distribution/digest
2016-11-28 17:31:33 +01:00
Crazykev
8b73542d89 use docker/distribution/digest
Signed-off-by: Crazykev <crazykev@zju.edu.cn>
2016-11-28 16:14:51 +00:00
Crazykev
1c76bc950d update containers/image vendor
Signed-off-by: Crazykev <crazykev@zju.edu.cn>
2016-11-28 11:36:18 +00:00
Antonio Murdaca
141212f27d Merge pull request #255 from runcom/fix-Dockerfile
Dockerfile: remove inline comments
2016-11-25 18:27:05 +01:00
Antonio Murdaca
3dcdb1ff7d Dockerfile: remove inline comments
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2016-11-25 17:35:59 +01:00
Antonio Murdaca
4620d5849c Merge pull request #252 from mtrmac/one-skopeo-in-all
Only build one skopeo binary by (make all)
2016-11-23 17:48:35 +01:00
Miloslav Trmač
a0af3619d3 Only build one skopeo binary by (make all)
Both (make binary) and (make binary-static) compile the code and create
a skopeo binary, so (make all) should only depend on one of them.

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2016-11-23 16:57:37 +01:00
Antonio Murdaca
fcdf9c1b91 Merge pull request #243 from hustcat/static-branch
Add static compile target in Makefile
2016-11-03 09:00:26 +01:00
fightingdu
12cc3a9cbf Add static compile target in Makefile.
Install `go-md2man` in Dockerfile.build.

Signed-off-by: Ye Yin <eyniy@qq.com>
2016-11-03 15:31:50 +08:00
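
Usage sketch of the new targets (see the Makefile diff below):

```sh
# containerized static build
$ make binary-static
# or directly on the host
$ make binary-local-static
```
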
Antonio Murdaca
bd816574ed Merge pull request #242 from runcom/fix-skopeo-layers
Fix skopeo layers and deprecate it
2016-11-02 17:36:36 +01:00
Antonio Murdaca
b3b322e10b deprecate skopeo layers
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2016-11-02 17:10:25 +01:00
Antonio Murdaca
2c5532746f cmd/skopeo/layers: fix index out of range
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2016-11-02 09:20:43 +01:00
Miloslav Trmač
c70e58e6b5 Merge pull request #224 from aweiteka/default-policy
Add insecureAcceptAnything to default docker-daemon transport
2016-10-31 20:18:30 +01:00
Aaron Weitekamp
879dbc3757 Add insecureAcceptAnything to default docker-daemon transport
Signed-off-by: Aaron Weitekamp <aweiteka@redhat.com>
2016-10-31 14:43:35 -04:00
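
A sketch of the resulting default policy shape (contents illustrative; the scoping follows the containers/image policy.json format):

```sh
$ cat default-policy.json
{
    "default": [
        { "type": "insecureAcceptAnything" }
    ],
    "transports": {
        "docker-daemon": {
            "": [
                { "type": "insecureAcceptAnything" }
            ]
        }
    }
}
```
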
Antonio Murdaca
1f655f3f09 Merge pull request #238 from runcom/integrate-all-the-things-into-master
Pull in schema1 and docker-daemon
2016-10-21 17:13:01 +02:00
Antonio Murdaca
69e08d78ad Pull in schema1 and docker-daemon
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2016-10-21 16:48:39 +02:00
Antonio Murdaca
f4f69742ad Merge pull request #237 from so0k/add-osx-instructions
Add OSX instructions to Readme
2016-10-21 12:22:49 +02:00
Vincent De Smet
066463201a Add OSX instructions to Readme 2016-10-21 17:58:55 +08:00
Antonio Murdaca
5d589d6d54 Merge pull request #233 from runcom/add-defyaml
add sigstore default configuration
2016-10-12 19:19:15 +02:00
Antonio Murdaca
012f89d16b add sigstore default configuration
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2016-10-12 18:58:14 +02:00
Antonio Murdaca
ce42c70d4c Merge pull request #232 from runcom/add-better-errors
vendor containers/image for better registry errors
2016-10-12 15:14:46 +02:00
Antonio Murdaca
a48f7597e3 vendor containers/image for better registry errors
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2016-10-12 14:53:46 +02:00
Antonio Murdaca
d166555fb4 Merge pull request #231 from runcom/add-reg-user-agent
vendor containers/image for DockerRegistryUserAgent
2016-10-11 18:45:42 +02:00
Antonio Murdaca
5721355da7 vendor containers/image for DockerRegistryUserAgent
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2016-10-11 18:23:09 +02:00
Antonio Murdaca
c00868148e Merge pull request #229 from runcom/fix-fork-docker-reference
vendor containers/image with docker/docker/reference forked
2016-10-11 18:04:50 +02:00
Antonio Murdaca
7f757cd253 vendor containers/image with docker/docker/reference forked
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2016-10-11 17:42:19 +02:00
Antonio Murdaca
a720c22303 Merge pull request #230 from mtrmac/image-refactor
Refactor c/i/image
2016-10-11 16:13:42 +02:00
Miloslav Trmač
bd992e3872 Vendor after merging mtrmac/image:image-refactor
… and update for API changes
2016-10-11 15:53:20 +02:00
Antonio Murdaca
5207447327 Merge pull request #228 from runcom/fix-authConfig
vendor containers/image for DockerAuthConfig
2016-10-11 12:36:57 +02:00
Antonio Murdaca
f957e894e6 vendor containers/image for DockerAuthConfig
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2016-10-11 12:12:45 +02:00
Antonio Murdaca
0eb841ec8b Merge pull request #226 from runcom/fix-copy-test-manlist
vendor containers/image
2016-10-10 20:06:12 +02:00
Antonio Murdaca
dc1e560d4e vendor containers/image
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2016-10-10 19:46:10 +02:00
Antonio Murdaca
f69a78fa0b Merge pull request #221 from runcom/fix-oci-image-spec
update opencontainers/image-spec
2016-10-01 20:31:42 +02:00
Antonio Murdaca
efb47cf374 update opencontainers/image-spec
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2016-10-01 20:07:43 +02:00
Antonio Murdaca
507d09876d Merge pull request #219 from runcom/bump-containers-image-1
Bump v0.1.16
2016-09-27 21:25:21 +02:00
Antonio Murdaca
34fe924aff bump to v0.1.17-dev
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2016-09-27 21:00:11 +02:00
867 changed files with 113849 additions and 44499 deletions

CONTRIBUTING.md

@@ -8,7 +8,7 @@ that we follow.
* [Reporting Issues](#reporting-issues)
* [Submitting Pull Requests](#submitting-pull-requests)
* [Communications](#communications)
* [Becomign a Maintainer](#becoming-a-maintainer)
* [Becoming a Maintainer](#becoming-a-maintainer)
## Reporting Issues

Dockerfile

@@ -1,6 +1,9 @@
FROM fedora
RUN dnf -y update && dnf install -y make git golang golang-github-cpuguy83-go-md2man \
# storage deps
btrfs-progs-devel \
device-mapper-devel \
# gpgme bindings deps
libassuan-devel gpgme-devel \
gnupg \
@@ -43,7 +46,7 @@ RUN set -x \
RUN set -x \
&& yum install -y which git tar wget hostname util-linux bsdtar socat ethtool device-mapper iptables tree findutils nmap-ncat e2fsprogs xfsprogs lsof docker iproute \
&& export GOPATH=$(mktemp -d) \
&& git clone -b v1.3.0-alpha.3 git://github.com/openshift/origin "$GOPATH/src/github.com/openshift/origin" \
&& git clone -b v1.5.0-alpha.3 git://github.com/openshift/origin "$GOPATH/src/github.com/openshift/origin" \
&& (cd "$GOPATH/src/github.com/openshift/origin" && make clean build && make all WHAT=cmd/dockerregistry) \
&& cp -a "$GOPATH/src/github.com/openshift/origin/_output/local/bin/linux"/*/* /usr/local/bin \
&& cp "$GOPATH/src/github.com/openshift/origin/images/dockerregistry/config.yml" /atomic-registry-config.yml \

Dockerfile.build

@@ -1,5 +1,5 @@
FROM ubuntu:16.04
RUN apt-get update
RUN apt-get install -y golang git-core libgpgme11-dev
RUN apt-get install -y golang btrfs-tools git-core libdevmapper-dev libgpgme11-dev go-md2man
ENV GOPATH=/
WORKDIR /src/github.com/projectatomic/skopeo

Makefile

@@ -7,8 +7,9 @@ INSTALLDIR=${PREFIX}/bin
MANINSTALLDIR=${PREFIX}/share/man
CONTAINERSSYSCONFIGDIR=${DESTDIR}/etc/containers
REGISTRIESDDIR=${CONTAINERSSYSCONFIGDIR}/registries.d
SIGSTOREDIR=${DESTDIR}/var/lib/atomic/sigstore
BASHINSTALLDIR=${PREFIX}/share/bash-completion/completions
GO_MD2MAN ?= /usr/bin/go-md2man
GO_MD2MAN ?= go-md2man
ifeq ($(DEBUG), 1)
override GOGCFLAGS += -N -l
@@ -31,6 +32,11 @@ GIT_COMMIT := $(shell git rev-parse HEAD 2> /dev/null || true)
MANPAGES_MD = $(wildcard docs/*.md)
BTRFS_BUILD_TAG = $(shell hack/btrfs_tag.sh)
LIBDM_BUILD_TAG = $(shell hack/libdm_tag.sh)
LOCAL_BUILD_TAGS = $(BTRFS_BUILD_TAG) $(LIBDM_BUILD_TAG)
BUILDTAGS += $(LOCAL_BUILD_TAGS)
# make all DEBUG=1
# Note: Uses the -N -l go compiler options to disable compiler optimizations
# and inlining. Using these build options allows you to subsequently
@@ -42,12 +48,19 @@ all: binary docs
binary: cmd/skopeo
docker build ${DOCKER_BUILD_ARGS} -f Dockerfile.build -t skopeobuildimage .
docker run --rm --security-opt label:disable -v $$(pwd):/src/github.com/projectatomic/skopeo \
skopeobuildimage make binary-local $(if $(DEBUG),DEBUG=$(DEBUG))
skopeobuildimage make binary-local $(if $(DEBUG),DEBUG=$(DEBUG)) BUILDTAGS='$(BUILDTAGS)'
binary-static: cmd/skopeo
docker build ${DOCKER_BUILD_ARGS} -f Dockerfile.build -t skopeobuildimage .
docker run --rm --security-opt label:disable -v $$(pwd):/src/github.com/projectatomic/skopeo \
skopeobuildimage make binary-local-static $(if $(DEBUG),DEBUG=$(DEBUG)) BUILDTAGS='$(BUILDTAGS)'
# Build w/o using Docker containers
binary-local:
go build -ldflags "-X main.gitCommit=${GIT_COMMIT}" -gcflags "$(GOGCFLAGS)" -o skopeo ./cmd/skopeo
go build -ldflags "-X main.gitCommit=${GIT_COMMIT}" -gcflags "$(GOGCFLAGS)" -tags "$(BUILDTAGS)" -o skopeo ./cmd/skopeo
binary-local-static:
go build -ldflags "-extldflags \"-static\" -X main.gitCommit=${GIT_COMMIT}" -gcflags "$(GOGCFLAGS)" -tags "$(BUILDTAGS)" -o skopeo ./cmd/skopeo
build-container:
docker build ${DOCKER_BUILD_ARGS} -t "$(DOCKER_IMAGE)" .
@@ -62,8 +75,9 @@ clean:
rm -f skopeo docs/*.1
install: install-binary install-docs install-completions
install -d -m 755 ${SIGSTOREDIR}
install -D -m 644 default-policy.json ${CONTAINERSSYSCONFIGDIR}/policy.json
install -d -m 755 ${REGISTRIESDDIR}
install -D -m 644 default.yaml ${REGISTRIESDDIR}/default.yaml
install-binary: ./skopeo
install -D -m 755 skopeo ${INSTALLDIR}/skopeo
@@ -72,7 +86,7 @@ install-docs: docs/skopeo.1
install -D -m 644 docs/skopeo.1 ${MANINSTALLDIR}/man1/skopeo.1
install-completions:
install -m 644 -D hack/make/bash_autocomplete ${BASHINSTALLDIR}/skopeo
install -m 644 -D completions/bash/skopeo ${BASHINSTALLDIR}/skopeo
shell: build-container
$(DOCKER_RUN_DOCKER) bash
@@ -81,11 +95,11 @@ check: validate test-unit test-integration
# The tests can run out of entropy and block in containers, so replace /dev/random.
test-integration: build-container
$(DOCKER_RUN_DOCKER) bash -c 'rm -f /dev/random; ln -sf /dev/urandom /dev/random; SKOPEO_CONTAINER_TESTS=1 hack/make.sh test-integration'
$(DOCKER_RUN_DOCKER) bash -c 'rm -f /dev/random; ln -sf /dev/urandom /dev/random; SKOPEO_CONTAINER_TESTS=1 BUILDTAGS="$(BUILDTAGS)" hack/make.sh test-integration'
test-unit: build-container
# Just call (make test unit-local) here instead of worrying about environment differences, e.g. GO15VENDOREXPERIMENT.
$(DOCKER_RUN_DOCKER) make test-unit-local
$(DOCKER_RUN_DOCKER) make test-unit-local BUILDTAGS='$(BUILDTAGS)'
validate: build-container
$(DOCKER_RUN_DOCKER) hack/make.sh validate-git-marks validate-gofmt validate-lint validate-vet
@@ -97,4 +111,4 @@ validate-local:
hack/make.sh validate-git-marks validate-gofmt validate-lint validate-vet
test-unit-local:
go test $$(go list -e ./... | grep -v '^github\.com/projectatomic/skopeo/\(integration\|vendor/.*\)$$')
go test -tags "$(BUILDTAGS)" $$(go list -tags "$(BUILDTAGS)" -e ./... | grep -v '^github\.com/projectatomic/skopeo/\(integration\|vendor/.*\)$$')

README.md

@@ -49,11 +49,12 @@ Copying images
-
`skopeo` can copy container images between various storage mechanisms,
e.g. Docker registries (including the Docker Hub), the Atomic Registry,
and local directories:
local directories, and local OCI-layout directories:
```sh
$ skopeo copy docker://busybox:1-glibc atomic:myns/unsigned:streaming
$ skopeo copy docker://busybox:latest dir:existingemptydirectory
$ skopeo copy docker://busybox:latest oci:busybox_ocilayout
```
Deleting images
@@ -65,14 +66,10 @@ $ skopeo delete docker://localhost:5000/imagename:latest
Private registries with authentication
-
When interacting with private registries, `skopeo` first looks for the Docker's cli config file (usually located at `$HOME/.docker/config.json`) to get the credentials needed to authenticate. When the file isn't available it falls back looking for `--username` and `--password` flags. The ultimate fallback, as Docker does, is to provide an empty authentication when interacting with those registries.
When interacting with private registries, `skopeo` first looks for `--creds` (for `skopeo inspect|delete`) or `--src-creds|--dest-creds` (for `skopeo copy`) flags. If those aren't provided, it looks for the Docker's cli config file (usually located at `$HOME/.docker/config.json`) to get the credentials needed to authenticate. The ultimate fallback, as Docker does, is to provide an empty authentication when interacting with those registries.
Examples:
```sh
# on my system
$ skopeo --help | grep docker-cfg
--docker-cfg "/home/runcom/.docker" Docker's cli config for auth
$ cat /home/runcom/.docker/config.json
{
"auths": {
@@ -88,16 +85,22 @@ $ skopeo inspect docker://myregistrydomain.com:5000/busybox
{"Tag":"latest","Digest":"sha256:473bb2189d7b913ed7187a33d11e743fdc2f88931122a44d91a301b64419f092","RepoTags":["latest"],"Comment":"","Created":"2016-01-15T18:06:41.282540103Z","ContainerConfig":{"Hostname":"aded96b43f48","Domainname":"","User":"","AttachStdin":false,"AttachStdout":false,"AttachStderr":false,"Tty":false,"OpenStdin":false,"StdinOnce":false,"Env":null,"Cmd":["/bin/sh","-c","#(nop) CMD [\"sh\"]"],"Image":"9e77fef7a1c9f989988c06620dabc4020c607885b959a2cbd7c2283c91da3e33","Volumes":null,"WorkingDir":"","Entrypoint":null,"OnBuild":null,"Labels":null},"DockerVersion":"1.8.3","Author":"","Config":{"Hostname":"aded96b43f48","Domainname":"","User":"","AttachStdin":false,"AttachStdout":false,"AttachStderr":false,"Tty":false,"OpenStdin":false,"StdinOnce":false,"Env":null,"Cmd":["sh"],"Image":"9e77fef7a1c9f989988c06620dabc4020c607885b959a2cbd7c2283c91da3e33","Volumes":null,"WorkingDir":"","Entrypoint":null,"OnBuild":null,"Labels":null},"Architecture":"amd64","Os":"linux"}
# let's try now to fake a non existent Docker's config file
$ skopeo --docker-cfg="" inspect docker://myregistrydomain.com:5000/busybox
FATA[0000] Get https://myregistrydomain.com:5000/v2/busybox/manifests/latest: no basic auth credentials
$ cat /home/runcom/.docker/config.json
{}
# passing --username and --password - we can see that everything goes fine
$ skopeo --docker-cfg="" --username=testuser --password=testpassword inspect docker://myregistrydomain.com:5000/busybox
$ skopeo inspect docker://myregistrydomain.com:5000/busybox
FATA[0000] unauthorized: authentication required
# passing --creds - we can see that everything goes fine
$ skopeo inspect --creds=testuser:testpassword docker://myregistrydomain.com:5000/busybox
{"Tag":"latest","Digest":"sha256:473bb2189d7b913ed7187a33d11e743fdc2f88931122a44d91a301b64419f092","RepoTags":["latest"],"Comment":"","Created":"2016-01-15T18:06:41.282540103Z","ContainerConfig":{"Hostname":"aded96b43f48","Domainname":"","User":"","AttachStdin":false,"AttachStdout":false,"AttachStderr":false,"Tty":false,"OpenStdin":false,"StdinOnce":false,"Env":null,"Cmd":["/bin/sh","-c","#(nop) CMD [\"sh\"]"],"Image":"9e77fef7a1c9f989988c06620dabc4020c607885b959a2cbd7c2283c91da3e33","Volumes":null,"WorkingDir":"","Entrypoint":null,"OnBuild":null,"Labels":null},"DockerVersion":"1.8.3","Author":"","Config":{"Hostname":"aded96b43f48","Domainname":"","User":"","AttachStdin":false,"AttachStdout":false,"AttachStderr":false,"Tty":false,"OpenStdin":false,"StdinOnce":false,"Env":null,"Cmd":["sh"],"Image":"9e77fef7a1c9f989988c06620dabc4020c607885b959a2cbd7c2283c91da3e33","Volumes":null,"WorkingDir":"","Entrypoint":null,"OnBuild":null,"Labels":null},"Architecture":"amd64","Os":"linux"}
# skopeo copy example:
$ skopeo copy --src-creds=testuser:testpassword docker://myregistrydomain.com:5000/private oci:local_oci_image
```
If your cli config is found but it doesn't contain the necessary credentials for the queried registry
you'll get an error. You can fix this by either logging in (via `docker login`) or providing `--username`
and `--password`.
you'll get an error. You can fix this by either logging in (via `docker login`) or providing `--creds` or `--src-creds|--dest-creds`.
Building
-
To build the manual you will need go-md2man.
@@ -110,7 +113,13 @@ $ git clone https://github.com/projectatomic/skopeo $GOPATH/src/github.com/proje
$ cd $GOPATH/src/github.com/projectatomic/skopeo && make all
```
You may need to install additional development packages: gpgme-devel and libassuan-devel
To build locally on OSX:
```sh
$ brew install gpgme
$ make binary-local
```
You may need to install additional development packages: `gpgme-devel` and `libassuan-devel`
```sh
$ dnf install gpgme-devel libassuan-devel
```
@@ -137,6 +146,28 @@ NOT TODO
-
- provide a _format_ flag - just use the awesome [jq](https://stedolan.github.io/jq/)
CONTRIBUTING
-
### Dependencies management
`skopeo` uses [`vndr`](https://github.com/LK4D4/vndr) for dependencies management.
In order to add a new dependency to this project:
- add a new line to `vendor.conf` according to `vndr` rules (e.g. `github.com/pkg/errors master`)
- run `vndr github.com/pkg/errors`
In order to update an existing dependency:
- update the relevant dependency line in `vendor.conf`
- run `vndr github.com/pkg/errors`
In order to test out new PRs from [containers/image](https://github.com/containers/image) to not break `skopeo`:
- update `vendor.conf`. Find out the `containers/image` dependency; update it to vendor from your own branch and your own repository fork (e.g. `github.com/containers/image my-branch https://github.com/runcom/image`)
- run `vndr github.com/containers/image`
License
-
skopeo is licensed under the Apache License, Version 2.0. See

cmd/skopeo/copy.go

@@ -6,10 +6,26 @@ import (
"os"
"github.com/containers/image/copy"
"github.com/containers/image/transports"
"github.com/containers/image/transports/alltransports"
"github.com/containers/image/types"
"github.com/urfave/cli"
)
// contextsFromGlobalOptions returns source and destination types.SystemContext depending on c.
func contextsFromGlobalOptions(c *cli.Context) (*types.SystemContext, *types.SystemContext, error) {
sourceCtx, err := contextFromGlobalOptions(c, "src-")
if err != nil {
return nil, nil, err
}
destinationCtx, err := contextFromGlobalOptions(c, "dest-")
if err != nil {
return nil, nil, err
}
return sourceCtx, destinationCtx, nil
}
func copyHandler(context *cli.Context) error {
if len(context.Args()) != 2 {
return errors.New("Usage: copy source destination")
@@ -21,21 +37,28 @@ func copyHandler(context *cli.Context) error {
}
defer policyContext.Destroy()
srcRef, err := transports.ParseImageName(context.Args()[0])
srcRef, err := alltransports.ParseImageName(context.Args()[0])
if err != nil {
return fmt.Errorf("Invalid source name %s: %v", context.Args()[0], err)
}
destRef, err := transports.ParseImageName(context.Args()[1])
destRef, err := alltransports.ParseImageName(context.Args()[1])
if err != nil {
return fmt.Errorf("Invalid destination name %s: %v", context.Args()[1], err)
}
signBy := context.String("sign-by")
removeSignatures := context.Bool("remove-signatures")
return copy.Image(contextFromGlobalOptions(context), policyContext, destRef, srcRef, &copy.Options{
sourceCtx, destinationCtx, err := contextsFromGlobalOptions(context)
if err != nil {
return err
}
return copy.Image(policyContext, destRef, srcRef, &copy.Options{
RemoveSignatures: removeSignatures,
SignBy: signBy,
ReportWriter: os.Stdout,
SourceCtx: sourceCtx,
DestinationCtx: destinationCtx,
})
}
@@ -54,5 +77,33 @@ var copyCmd = cli.Command{
Name: "sign-by",
Usage: "Sign the image using a GPG key with the specified `FINGERPRINT`",
},
cli.StringFlag{
Name: "src-creds, screds",
Value: "",
Usage: "Use `USERNAME[:PASSWORD]` for accessing the source registry",
},
cli.StringFlag{
Name: "dest-creds, dcreds",
Value: "",
Usage: "Use `USERNAME[:PASSWORD]` for accessing the destination registry",
},
cli.StringFlag{
Name: "src-cert-dir",
Value: "",
Usage: "use certificates at `PATH` (*.crt, *.cert, *.key) to connect to the source registry",
},
cli.BoolTFlag{
Name: "src-tls-verify",
Usage: "require HTTPS and verify certificates when talking to the docker source registry (defaults to true)",
},
cli.StringFlag{
Name: "dest-cert-dir",
Value: "",
Usage: "use certificates at `PATH` (*.crt, *.cert, *.key) to connect to the destination registry",
},
cli.BoolTFlag{
Name: "dest-tls-verify",
Usage: "require HTTPS and verify certificates when talking to the docker destination registry (defaults to true)",
},
},
}

cmd/skopeo/delete.go

@@ -4,7 +4,7 @@ import (
"errors"
"fmt"
"github.com/containers/image/transports"
"github.com/containers/image/transports/alltransports"
"github.com/urfave/cli"
)
@@ -13,12 +13,16 @@ func deleteHandler(context *cli.Context) error {
return errors.New("Usage: delete imageReference")
}
ref, err := transports.ParseImageName(context.Args()[0])
ref, err := alltransports.ParseImageName(context.Args()[0])
if err != nil {
return fmt.Errorf("Invalid source name %s: %v", context.Args()[0], err)
}
if err := ref.DeleteImage(contextFromGlobalOptions(context)); err != nil {
ctx, err := contextFromGlobalOptions(context, "")
if err != nil {
return err
}
if err := ref.DeleteImage(ctx); err != nil {
return err
}
return nil
@@ -29,4 +33,20 @@ var deleteCmd = cli.Command{
Usage: "Delete image IMAGE-NAME",
ArgsUsage: "IMAGE-NAME",
Action: deleteHandler,
Flags: []cli.Flag{
cli.StringFlag{
Name: "creds",
Value: "",
Usage: "Use `USERNAME[:PASSWORD]` for accessing the registry",
},
cli.StringFlag{
Name: "cert-dir",
Value: "",
Usage: "use certificates at `PATH` (*.crt, *.cert, *.key) to connect to the registry",
},
cli.BoolTFlag{
Name: "tls-verify",
Usage: "require HTTPS and verify certificates when talking to docker registries (defaults to true)",
},
},
}

cmd/skopeo/inspect.go

@@ -3,10 +3,14 @@ package main
import (
"encoding/json"
"fmt"
"strings"
"time"
"github.com/Sirupsen/logrus"
"github.com/containers/image/docker"
"github.com/containers/image/manifest"
"github.com/opencontainers/go-digest"
"github.com/pkg/errors"
"github.com/urfave/cli"
)
@@ -14,7 +18,7 @@ import (
type inspectOutput struct {
Name string `json:",omitempty"`
Tag string `json:",omitempty"`
Digest string
Digest digest.Digest
RepoTags []string
Created time.Time
DockerVersion string
@@ -29,17 +33,36 @@ var inspectCmd = cli.Command{
Usage: "Inspect image IMAGE-NAME",
ArgsUsage: "IMAGE-NAME",
Flags: []cli.Flag{
cli.StringFlag{
Name: "cert-dir",
Value: "",
Usage: "use certificates at `PATH` (*.crt, *.cert, *.key) to connect to the registry",
},
cli.BoolTFlag{
Name: "tls-verify",
Usage: "require HTTPS and verify certificates when talking to docker registries (defaults to true)",
},
cli.BoolFlag{
Name: "raw",
Usage: "output raw manifest",
},
cli.StringFlag{
Name: "creds",
Value: "",
Usage: "Use `USERNAME[:PASSWORD]` for accessing the registry",
},
},
Action: func(c *cli.Context) error {
Action: func(c *cli.Context) (retErr error) {
img, err := parseImage(c)
if err != nil {
return err
}
defer img.Close()
defer func() {
if err := img.Close(); err != nil {
retErr = errors.Wrapf(retErr, fmt.Sprintf("(could not close image: %v) ", err))
}
}()
rawManifest, _, err := img.Manifest()
if err != nil {
@@ -76,7 +99,13 @@ var inspectCmd = cli.Command{
outputData.Name = dockerImg.SourceRefFullName()
outputData.RepoTags, err = dockerImg.GetRepositoryTags()
if err != nil {
return fmt.Errorf("Error determining repository tags: %v", err)
// some registries may decide to block the "list all tags" endpoint
// gracefully allow the inspect to continue in this case. Currently
// the IBM Bluemix container registry has this restriction.
if !strings.Contains(err.Error(), "401") {
return fmt.Errorf("Error determining repository tags: %v", err)
}
logrus.Warnf("Registry disallows tag list retrieval; skipping")
}
}
out, err := json.MarshalIndent(outputData, "", " ")

cmd/skopeo/layers.go

@@ -1,24 +1,32 @@
package main
import (
"fmt"
"io/ioutil"
"os"
"strings"
"github.com/containers/image/directory"
"github.com/containers/image/image"
"github.com/containers/image/manifest"
"github.com/containers/image/types"
"github.com/opencontainers/go-digest"
"github.com/pkg/errors"
"github.com/urfave/cli"
)
// TODO(runcom): document args and usage
var layersCmd = cli.Command{
Name: "layers",
Usage: "Get layers of IMAGE-NAME",
ArgsUsage: "IMAGE-NAME",
Action: func(c *cli.Context) error {
ArgsUsage: "IMAGE-NAME [LAYER...]",
Hidden: true,
Action: func(c *cli.Context) (retErr error) {
fmt.Fprintln(os.Stderr, `DEPRECATED: skopeo layers is deprecated in favor of skopeo copy`)
if c.NArg() == 0 {
return errors.New("Usage: layers imageReference [layer...]")
}
rawSource, err := parseImageSource(c, c.Args()[0], []string{
// TODO: skopeo layers only support these now
// TODO: skopeo layers only supports these now
// eventually we'll remove this command altogether...
manifest.DockerV2Schema1SignedMediaType,
manifest.DockerV2Schema1MediaType,
@@ -26,26 +34,42 @@ var layersCmd = cli.Command{
if err != nil {
return err
}
src := image.FromSource(rawSource)
defer src.Close()
src, err := image.FromSource(rawSource)
if err != nil {
if closeErr := rawSource.Close(); closeErr != nil {
return errors.Wrapf(err, " (close error: %v)", closeErr)
}
blobDigests := c.Args().Tail()
if len(blobDigests) == 0 {
layers, err := src.LayerInfos()
return err
}
defer func() {
if err := src.Close(); err != nil {
retErr = errors.Wrapf(retErr, " (close error: %v)", err)
}
}()
var blobDigests []digest.Digest
for _, dString := range c.Args().Tail() {
if !strings.HasPrefix(dString, "sha256:") {
dString = "sha256:" + dString
}
d, err := digest.Parse(dString)
if err != nil {
return err
}
seenLayers := map[string]struct{}{}
blobDigests = append(blobDigests, d)
}
if len(blobDigests) == 0 {
layers := src.LayerInfos()
seenLayers := map[digest.Digest]struct{}{}
for _, info := range layers {
if _, ok := seenLayers[info.Digest]; !ok {
blobDigests = append(blobDigests, info.Digest)
seenLayers[info.Digest] = struct{}{}
}
}
configInfo, err := src.ConfigInfo()
if err != nil {
return err
}
configInfo := src.ConfigInfo()
if configInfo.Digest != "" {
blobDigests = append(blobDigests, configInfo.Digest)
}
@@ -63,21 +87,24 @@ var layersCmd = cli.Command{
if err != nil {
return err
}
defer dest.Close()
defer func() {
if err := dest.Close(); err != nil {
retErr = errors.Wrapf(retErr, " (close error: %v)", err)
}
}()
for _, digest := range blobDigests {
if !strings.HasPrefix(digest, "sha256:") {
digest = "sha256:" + digest
}
r, blobSize, err := rawSource.GetBlob(digest)
r, blobSize, err := rawSource.GetBlob(types.BlobInfo{Digest: digest, Size: -1})
if err != nil {
return err
}
if _, err := dest.PutBlob(r, types.BlobInfo{Digest: digest, Size: blobSize}); err != nil {
r.Close()
if closeErr := r.Close(); closeErr != nil {
return errors.Wrapf(err, " (close error: %v)", closeErr)
}
return err
}
r.Close()
}
manifest, _, err := src.Manifest()

cmd/skopeo/main.go

@@ -6,6 +6,7 @@ import (
"github.com/Sirupsen/logrus"
"github.com/containers/image/signature"
"github.com/containers/storage/pkg/reexec"
"github.com/projectatomic/skopeo/version"
"github.com/urfave/cli"
)
@@ -25,37 +26,25 @@ func createApp() *cli.App {
app.Version = version.Version
}
app.Usage = "Various operations with container images and container image registries"
// TODO(runcom)
//app.EnableBashCompletion = true
app.Flags = []cli.Flag{
cli.BoolFlag{
Name: "debug",
Usage: "enable debug output",
},
cli.StringFlag{
Name: "username",
Value: "",
Usage: "use `USERNAME` for accessing the registry",
},
cli.StringFlag{
Name: "password",
Value: "",
Usage: "use `PASSWORD` for accessing the registry",
},
cli.StringFlag{
Name: "cert-path",
Value: "",
Usage: "use certificates at `PATH` (cert.pem, key.pem) to connect to the registry",
},
cli.BoolTFlag{
Name: "tls-verify",
Usage: "require HTTPS and verify certificates when talking to docker registries (defaults to true)",
Name: "tls-verify",
Usage: "require HTTPS and verify certificates when talking to docker registries (defaults to true)",
Hidden: true,
},
cli.StringFlag{
Name: "policy",
Value: "",
Usage: "Path to a trust policy file",
},
cli.BoolFlag{
Name: "insecure-policy",
Usage: "run the tool without any policy check",
},
cli.StringFlag{
Name: "registries.d",
Value: "",
@@ -66,6 +55,9 @@ func createApp() *cli.App {
if c.GlobalBool("debug") {
logrus.SetLevel(logrus.DebugLevel)
}
if c.GlobalIsSet("tls-verify") {
logrus.Warn("'--tls-verify' is deprecated, please set this on the specific subcommand")
}
return nil
}
app.Commands = []cli.Command{
@@ -76,11 +68,15 @@ func createApp() *cli.App {
manifestDigestCmd,
standaloneSignCmd,
standaloneVerifyCmd,
untrustedSignatureDumpCmd,
}
return app
}
func main() {
if reexec.Init() {
return
}
app := createApp()
if err := app.Run(os.Args); err != nil {
logrus.Fatal(err)
@@ -92,7 +88,9 @@ func getPolicyContext(c *cli.Context) (*signature.PolicyContext, error) {
policyPath := c.GlobalString("policy")
var policy *signature.Policy // This could be cached across calls, if we had an application context.
var err error
if policyPath == "" {
if c.GlobalBool("insecure-policy") {
policy = &signature.Policy{Default: []signature.PolicyRequirement{signature.NewPRInsecureAcceptAnything()}}
} else if policyPath == "" {
policy, err = signature.DefaultPolicy(nil)
} else {
policy, err = signature.NewPolicyFromFile(policyPath)


@@ -27,5 +27,5 @@ func TestManifestDigest(t *testing.T) {
// Success
out, err = runSkopeo("manifest-digest", "fixtures/image.manifest.json")
assert.NoError(t, err)
assert.Equal(t, fixturesTestImageManifestDigest+"\n", out)
assert.Equal(t, fixturesTestImageManifestDigest.String()+"\n", out)
}

cmd/skopeo/signing.go

@@ -1,6 +1,7 @@
package main
import (
"encoding/json"
"errors"
"fmt"
"io/ioutil"
@@ -27,6 +28,7 @@ func standaloneSign(context *cli.Context) error {
if err != nil {
return fmt.Errorf("Error initializing GPG: %v", err)
}
defer mech.Close()
signature, err := signature.SignDockerManifest(manifest, dockerReference, mech, fingerprint)
if err != nil {
return fmt.Errorf("Error creating signature: %v", err)
@@ -73,6 +75,7 @@ func standaloneVerify(context *cli.Context) error {
if err != nil {
return fmt.Errorf("Error initializing GPG: %v", err)
}
defer mech.Close()
sig, err := signature.VerifyDockerManifestSignature(unverifiedSignature, unverifiedManifest, expectedDockerReference, mech, expectedFingerprint)
if err != nil {
return fmt.Errorf("Error verifying signature: %v", err)
@@ -88,3 +91,40 @@ var standaloneVerifyCmd = cli.Command{
ArgsUsage: "MANIFEST DOCKER-REFERENCE KEY-FINGERPRINT SIGNATURE",
Action: standaloneVerify,
}
func untrustedSignatureDump(context *cli.Context) error {
if len(context.Args()) != 1 {
return errors.New("Usage: skopeo untrusted-signature-dump-without-verification signature")
}
untrustedSignaturePath := context.Args()[0]
untrustedSignature, err := ioutil.ReadFile(untrustedSignaturePath)
if err != nil {
return fmt.Errorf("Error reading untrusted signature from %s: %v", untrustedSignaturePath, err)
}
untrustedInfo, err := signature.GetUntrustedSignatureInformationWithoutVerifying(untrustedSignature)
if err != nil {
return fmt.Errorf("Error decoding untrusted signature: %v", err)
}
untrustedOut, err := json.MarshalIndent(untrustedInfo, "", " ")
if err != nil {
return err
}
fmt.Fprintln(context.App.Writer, string(untrustedOut))
return nil
}
// WARNING: Do not use the contents of this for ANY security decisions,
// and be VERY CAREFUL about showing this information to humans in any way which suggest that these values “are probably” reliable.
// There is NO REASON to expect the values to be correct, or not intentionally misleading
// (including things like “✅ Verified by $authority”)
//
// The subcommand is undocumented, and it may be renamed or entirely disappear in the future.
var untrustedSignatureDumpCmd = cli.Command{
Name: "untrusted-signature-dump-without-verification",
Usage: "Dump contents of a signature WITHOUT VERIFYING IT",
ArgsUsage: "SIGNATURE",
Hidden: true,
Action: untrustedSignatureDump,
}

cmd/skopeo/signing_test.go

@@ -1,20 +1,25 @@
package main
import (
"encoding/json"
"io/ioutil"
"os"
"testing"
"time"
"github.com/containers/image/signature"
"github.com/opencontainers/go-digest"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
const (
// fixturesTestImageManifestDigest is the Docker manifest digest of "image.manifest.json"
fixturesTestImageManifestDigest = "sha256:20bf21ed457b390829cdbeec8795a7bea1626991fda603e0d01b4e7f60427e55"
fixturesTestImageManifestDigest = digest.Digest("sha256:20bf21ed457b390829cdbeec8795a7bea1626991fda603e0d01b4e7f60427e55")
// fixturesTestKeyFingerprint is the fingerprint of the private key.
fixturesTestKeyFingerprint = "1D8230F6CDB6A06716E414C1DB72F2188BB46CC8"
// fixturesTestKeyShortID is the key ID of the private key.
fixturesTestKeyShortID = "DB72F2188BB46CC8"
)
// Test that results of runSkopeo failed with nothing on stdout, and substring
@@ -26,6 +31,13 @@ func assertTestFailed(t *testing.T, stdout string, err error, substring string)
}
func TestStandaloneSign(t *testing.T) {
mech, _, err := signature.NewEphemeralGPGSigningMechanism([]byte{})
require.NoError(t, err)
defer mech.Close()
if err := mech.SupportsSigning(); err != nil {
t.Skipf("Signing not supported: %v", err)
}
manifestPath := "fixtures/image.manifest.json"
dockerReference := "testing/manifest"
os.Setenv("GNUPGHOME", "fixtures")
@@ -54,7 +66,7 @@ func TestStandaloneSign(t *testing.T) {
manifestPath, "" /* empty reference */, fixturesTestKeyFingerprint)
assertTestFailed(t, out, err, "empty signature content")
// Unknown key. (FIXME? The error is 'Error creating signature: End of file")
// Unknown key.
out, err = runSkopeo("standalone-sign", "-o", "/dev/null",
manifestPath, dockerReference, "UNKNOWN GPG FINGERPRINT")
assert.Error(t, err)
@@ -71,17 +83,18 @@ func TestStandaloneSign(t *testing.T) {
defer os.Remove(sigOutput.Name())
out, err = runSkopeo("standalone-sign", "-o", sigOutput.Name(),
manifestPath, dockerReference, fixturesTestKeyFingerprint)
assert.NoError(t, err)
require.NoError(t, err)
assert.Empty(t, out)
sig, err := ioutil.ReadFile(sigOutput.Name())
require.NoError(t, err)
manifest, err := ioutil.ReadFile(manifestPath)
require.NoError(t, err)
mech, err := signature.NewGPGSigningMechanism()
mech, err = signature.NewGPGSigningMechanism()
require.NoError(t, err)
defer mech.Close()
verified, err := signature.VerifyDockerManifestSignature(sig, manifest, dockerReference, mech, fixturesTestKeyFingerprint)
assert.NoError(t, err)
require.NoError(t, err)
assert.Equal(t, dockerReference, verified.DockerReference)
assert.Equal(t, fixturesTestImageManifestDigest, verified.DockerManifestDigest)
}
@@ -122,5 +135,44 @@ func TestStandaloneVerify(t *testing.T) {
out, err = runSkopeo("standalone-verify", manifestPath,
dockerReference, fixturesTestKeyFingerprint, signaturePath)
assert.NoError(t, err)
assert.Equal(t, "Signature verified, digest "+fixturesTestImageManifestDigest+"\n", out)
assert.Equal(t, "Signature verified, digest "+fixturesTestImageManifestDigest.String()+"\n", out)
}
func TestUntrustedSignatureDump(t *testing.T) {
// Invalid command-line arguments
for _, args := range [][]string{
{},
{"a1", "a2"},
{"a1", "a2", "a3", "a4"},
} {
out, err := runSkopeo(append([]string{"untrusted-signature-dump-without-verification"}, args...)...)
assertTestFailed(t, out, err, "Usage")
}
// Error reading manifest
out, err := runSkopeo("untrusted-signature-dump-without-verification",
"/this/doesnt/exist")
assertTestFailed(t, out, err, "/this/doesnt/exist")
// Error reading signature (input is not a signature)
out, err = runSkopeo("untrusted-signature-dump-without-verification", "fixtures/image.manifest.json")
assertTestFailed(t, out, err, "Error decoding untrusted signature")
// Success
for _, path := range []string{"fixtures/image.signature", "fixtures/corrupt.signature"} {
// Success
out, err = runSkopeo("untrusted-signature-dump-without-verification", path)
require.NoError(t, err)
var info signature.UntrustedSignatureInformation
err := json.Unmarshal([]byte(out), &info)
require.NoError(t, err)
assert.Equal(t, fixturesTestImageManifestDigest, info.UntrustedDockerManifestDigest)
assert.Equal(t, "testing/manifest", info.UntrustedDockerReference)
assert.NotNil(t, info.UntrustedCreatorID)
assert.Equal(t, "atomic ", *info.UntrustedCreatorID)
assert.NotNil(t, info.UntrustedTimestamp)
assert.True(t, time.Unix(1458239713, 0).Equal(*info.UntrustedTimestamp))
assert.Equal(t, fixturesTestKeyShortID, info.UntrustedShortKeyIdentifier)
}
}


@@ -1,39 +1,86 @@
package main
import (
"github.com/containers/image/transports"
"errors"
"strings"
"github.com/containers/image/transports/alltransports"
"github.com/containers/image/types"
"github.com/urfave/cli"
)
// contextFromGlobalOptions returns a types.SystemContext depending on c.
func contextFromGlobalOptions(c *cli.Context) *types.SystemContext {
tlsVerify := c.GlobalBoolT("tls-verify")
return &types.SystemContext{
RegistriesDirPath: c.GlobalString("registries.d"),
DockerCertPath: c.GlobalString("cert-path"),
DockerInsecureSkipTLSVerify: !tlsVerify,
func contextFromGlobalOptions(c *cli.Context, flagPrefix string) (*types.SystemContext, error) {
ctx := &types.SystemContext{
RegistriesDirPath: c.GlobalString("registries.d"),
DockerCertPath: c.String(flagPrefix + "cert-dir"),
// DEPRECATED: keep this here for backward compatibility, but override
// them if per subcommand flags are provided (see below).
DockerInsecureSkipTLSVerify: !c.GlobalBoolT("tls-verify"),
}
if c.IsSet(flagPrefix + "tls-verify") {
ctx.DockerInsecureSkipTLSVerify = !c.BoolT(flagPrefix + "tls-verify")
}
if c.IsSet(flagPrefix + "creds") {
var err error
ctx.DockerAuthConfig, err = getDockerAuth(c.String(flagPrefix + "creds"))
if err != nil {
return nil, err
}
}
return ctx, nil
}
// ParseImage converts image URL-like string to an initialized handler for that image.
// The caller must call .Close() on the returned Image.
func parseImage(c *cli.Context) (types.Image, error) {
imgName := c.Args().First()
ref, err := transports.ParseImageName(imgName)
func parseCreds(creds string) (string, string, error) {
if creds == "" {
return "", "", errors.New("credentials can't be empty")
}
up := strings.SplitN(creds, ":", 2)
if len(up) == 1 {
return up[0], "", nil
}
if up[0] == "" {
return "", "", errors.New("username can't be empty")
}
return up[0], up[1], nil
}
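// Illustration (not part of this file) of how parseCreds splits a --creds value:
//
//	parseCreds("user:pass") → ("user", "pass", nil)
//	parseCreds("user")      → ("user", "", nil)
//	parseCreds(":pass")     → error: username can't be empty
//	parseCreds("")          → error: credentials can't be empty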
func getDockerAuth(creds string) (*types.DockerAuthConfig, error) {
username, password, err := parseCreds(creds)
if err != nil {
return nil, err
}
return ref.NewImage(contextFromGlobalOptions(c))
return &types.DockerAuthConfig{
Username: username,
Password: password,
}, nil
}
// parseImage converts image URL-like string to an initialized handler for that image.
// The caller must call .Close() on the returned Image.
func parseImage(c *cli.Context) (types.Image, error) {
imgName := c.Args().First()
ref, err := alltransports.ParseImageName(imgName)
if err != nil {
return nil, err
}
ctx, err := contextFromGlobalOptions(c, "")
if err != nil {
return nil, err
}
return ref.NewImage(ctx)
}
// parseImageSource converts image URL-like string to an ImageSource.
// requestedManifestMIMETypes is as in types.ImageReference.NewImageSource.
// The caller must call .Close() on the returned ImageSource.
func parseImageSource(c *cli.Context, name string, requestedManifestMIMETypes []string) (types.ImageSource, error) {
ref, err := transports.ParseImageName(name)
ref, err := alltransports.ParseImageName(name)
if err != nil {
return nil, err
}
return ref.NewImageSource(contextFromGlobalOptions(c), requestedManifestMIMETypes)
ctx, err := contextFromGlobalOptions(c, "")
if err != nil {
return nil, err
}
return ref.NewImageSource(ctx, requestedManifestMIMETypes)
}

completions/bash/skopeo (new file, 158 lines)

@@ -0,0 +1,158 @@
#! /bin/bash
: ${PROG:=$(basename ${BASH_SOURCE})}
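# _complete_ fills COMPREPLY for the word being completed: when the previous
# word is a flag that expects an argument, it defers to default completion;
# when the current word starts with "-", it offers the flag lists passed in.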
_complete_() {
local options_with_args=$1
local boolean_options="$2 -h --help"
case "$prev" in
$options_with_args)
return
;;
esac
case "$cur" in
-*)
COMPREPLY=( $( compgen -W "$boolean_options $options_with_args" -- "$cur" ) )
;;
esac
}
_skopeo_copy() {
local options_with_args="
--sign-by
--src-creds --screds
--src-cert-dir
--src-tls-verify
--dest-creds --dcreds
--dest-cert-dir
--dest-tls-verify
"
local boolean_options="
--remove-signatures
"
_complete_ "$options_with_args" "$boolean_options"
}
_skopeo_inspect() {
local options_with_args="
--creds
--cert-dir
"
local boolean_options="
--raw
--tls-verify
"
_complete_ "$options_with_args" "$boolean_options"
}
_skopeo_standalone_sign() {
local options_with_args="
-o --output
"
local boolean_options="
"
_complete_ "$options_with_args" "$boolean_options"
}
_skopeo_standalone_verify() {
local options_with_args="
"
local boolean_options="
"
_complete_ "$options_with_args" "$boolean_options"
}
_skopeo_manifest_digest() {
local options_with_args="
"
local boolean_options="
"
_complete_ "$options_with_args" "$boolean_options"
}
_skopeo_delete() {
local options_with_args="
--creds
--cert-dir
"
local boolean_options="
--tls-verify
"
_complete_ "$options_with_args" "$boolean_options"
}
_skopeo_layers() {
local options_with_args="
--creds
--cert-dir
"
local boolean_options="
--tls-verify
"
_complete_ "$options_with_args" "$boolean_options"
}
_skopeo_skopeo() {
local options_with_args="
--policy
--registries.d
"
local boolean_options="
--insecure-policy
--debug
--version -v
--help -h
"
commands=$( ${COMP_WORDS[@]:0:$COMP_CWORD} --generate-bash-completion )
case "$prev" in
--policy|--registries.d)
return
;;
esac
case "$cur" in
-*)
COMPREPLY=( $( compgen -W "$boolean_options $options_with_args" -- "$cur" ) )
;;
*)
COMPREPLY=( $( compgen -W "${commands[*]} help" -- "$cur" ) )
;;
esac
}
_cli_bash_autocomplete() {
local cur prev words cword
COMPREPLY=()
_get_comp_words_by_ref -n : cur prev words cword
local command=${PROG} cpos=0
local counter=1
while [ $counter -lt $cword ]; do
case "!${words[$counter]}" in
*)
command=$(echo "${words[$counter]}" | sed 's/-/_/g')
cpos=$counter
(( cpos++ ))
break
;;
esac
(( counter++ ))
done
local completions_func=_skopeo_${command}
declare -F $completions_func >/dev/null && $completions_func
eval "$previous_extglob_setting"
return 0
}
complete -F _cli_bash_autocomplete $PROG


@@ -3,5 +3,12 @@
{
"type": "insecureAcceptAnything"
}
]
}
],
"transports":
{
"docker-daemon":
{
"": [{"type":"insecureAcceptAnything"}]
}
}
}
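The added `transports` section scopes trust requirements per transport, here accepting anything coming from the local `docker-daemon:` transport. As a hedged sketch of how such a file is consumed (the path is an assumption; the function names are from the containers/image `signature` package):

```
package main

import (
	"fmt"

	"github.com/containers/image/signature"
)

func main() {
	// Load a trust policy like the one above; the path is illustrative.
	policy, err := signature.NewPolicyFromFile("/etc/containers/policy.json")
	if err != nil {
		panic(err)
	}
	// The PolicyContext is what gets passed to copy.Image and
	// IsRunningImageAllowed for admission decisions.
	policyContext, err := signature.NewPolicyContext(policy)
	if err != nil {
		panic(err)
	}
	defer policyContext.Destroy()
	fmt.Println("policy loaded")
}
```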

default.yaml (new file, 26 lines)

@@ -0,0 +1,26 @@
# This is a default registries.d configuration file. You may
# add to this file or create additional files in registries.d/.
#
# sigstore: indicates a location that is read and write
# sigstore-staging: indicates a location that is only for write
#
# sigstore and sigstore-staging take a value of the following:
# sigstore: {schema}://location
#
# For reading signatures, schema may be http, https, or file.
# For writing signatures, schema may only be file.
# This is the default signature write location for docker registries.
default-docker:
# sigstore: file:///var/lib/atomic/sigstore
sigstore-staging: file:///var/lib/atomic/sigstore
# The 'docker' indicator here is the start of the configuration
# for docker registries.
#
# docker:
#
# privateregistry.com:
# sigstore: http://privateregistry.com/sigstore/
# sigstore-staging: /mnt/nfs/privateregistry/sigstore


@@ -37,17 +37,11 @@ Most commands refer to container images, using a _transport_`:`_details_ format.
**--debug** enable debug output
**--username** _username_ for accessing the registry
**--password** _password_ for accessing the registry
**--cert-path** _path_ Use certificates at _path_ (cert.pem, key.pem) to connect to the registry
**--policy** _path-to-policy_ Path to a policy.json file to use for verifying signatures and deciding whether an image is trusted, overriding the default trust policy file.
**--registries.d** _dir_ use registry configuration files in _dir_ (e.g. for docker signature storage), overriding the default path.
**--insecure-policy** Adopt an insecure, permissive policy that allows anything. This obviates the need for a policy file.
**--tls-verify** _bool-value_ Require HTTPS and verify certificates when talking to docker registries (defaults to true)
**--registries.d** _dir_ use registry configuration files in _dir_ (e.g. for docker signature storage), overriding the default path.
**--help**|**-h** Show help
@@ -70,6 +64,18 @@ Uses the system's trust policy to validate images, rejects images not trusted by
**--sign-by=**_key-id_ add a signature using that key ID for an image name corresponding to _destination-image_
**--src-creds** _username[:password]_ for accessing the source registry
**--dest-creds** _username[:password]_ for accessing the destination registry
**--src-cert-dir** _path_ Use certificates at _path_ (*.crt, *.cert, *.key) to connect to the source registry
**--src-tls-verify** _bool-value_ Require HTTPS and verify certificates when talking to docker source registry (defaults to true)
**--dest-cert-dir** _path_ Use certificates at _path_ (*.crt, *.cert, *.key) to connect to the destination registry
**--dest-tls-verify** _bool-value_ Require HTTPS and verify certificates when talking to docker destination registry (defaults to true)
Existing signatures, if any, are preserved as well.
## skopeo delete
@@ -81,6 +87,12 @@ Mark _image-name_ for deletion. To release the allocated disk space, you need t
$ docker exec -it registry bin/registry garbage-collect /etc/docker/registry/config.yml
```
**--creds** _username[:password]_ for accessing the registry
**--cert-dir** _path_ Use certificates at _path_ (*.crt, *.cert, *.key) to connect to the registry
**--tls-verify** _bool-value_ Require HTTPS and verify certificates when talking to docker registries (defaults to true)
Additionally, the registry must allow deletions by setting `REGISTRY_STORAGE_DELETE_ENABLED=true` for the registry daemon.
## skopeo inspect
@@ -92,12 +104,11 @@ Return low-level information about _image-name_ in a registry
_image-name_ name of image to retrieve information about
## skopeo layers
**skopeo layers** _image-name_
**--creds** _username[:password]_ for accessing the registry
Get image layers of _image-name_
**--cert-dir** _path_ Use certificates at _path_ (*.crt, *.cert, *.key) to connect to the registry
_image-name_ name of the image to retrieve layers
**--tls-verify** _bool-value_ Require HTTPS and verify certificates when talking to docker registries (defaults to true)
## skopeo manifest-digest
**skopeo manifest-digest** _manifest-file_


@@ -1,115 +0,0 @@
#!/usr/bin/env bash
PROJECT=github.com/projectatomic/skopeo
# Downloads dependencies into vendor/ directory
mkdir -p vendor
original_GOPATH=$GOPATH
export GOPATH="${PWD}/vendor:$GOPATH"
find="/usr/bin/find"
clone() {
local delete_vendor=true
if [ "x$1" = x--keep-vendor ]; then
delete_vendor=false
shift
fi
local vcs="$1"
local pkg="$2"
local rev="$3"
local url="$4"
: ${url:=https://$pkg}
local target="vendor/src/$pkg"
echo -n "$pkg @ $rev: "
if [ -d "$target" ]; then
echo -n 'rm old, '
rm -rf "$target"
fi
echo -n 'clone, '
case "$vcs" in
git)
git clone --quiet --no-checkout "$url" "$target"
( cd "$target" && git checkout --quiet "$rev" && git reset --quiet --hard "$rev" -- )
;;
hg)
hg clone --quiet --updaterev "$rev" "$url" "$target"
;;
esac
echo -n 'rm VCS, '
( cd "$target" && rm -rf .{git,hg} )
if $delete_vendor; then
echo -n 'rm vendor, '
( cd "$target" && rm -rf vendor Godeps/_workspace )
fi
echo done
}
clean() {
# If $GOPATH starts with ./vendor, (go list) shows the short-form import paths for packages inside ./vendor.
# So, reset GOPATH to the external value (without ./vendor), so that the grep -v works.
local packages=($(GOPATH=$original_GOPATH go list -e ./... | grep -v "^${PROJECT}/vendor"))
local platforms=( linux/amd64 linux/386 )
local buildTags=( )
echo
echo -n 'collecting import graph, '
local IFS=$'\n'
local imports=( $(
for platform in "${platforms[@]}"; do
export GOOS="${platform%/*}";
export GOARCH="${platform##*/}";
go list -e -tags "$buildTags" -f '{{join .Deps "\n"}}' "${packages[@]}"
go list -e -tags "$buildTags" -f '{{join .TestImports "\n"}}' "${packages[@]}"
done | grep -vE "^${PROJECT}" | sort -u
) )
# .TestImports does not include indirect dependencies, so do one more iteration.
imports+=( $(
go list -e -f '{{join .Deps "\n"}}' "${imports[@]}" | grep -vE "^${PROJECT}" | sort -u
) )
imports=( $(go list -e -f '{{if not .Standard}}{{.ImportPath}}{{end}}' "${imports[@]}") )
unset IFS
echo -n 'pruning unused packages, '
findArgs=(
# This directory contains only .c and .h files which are necessary
# -path vendor/src/github.com/mattn/go-sqlite3/code
)
for import in "${imports[@]}"; do
[ "${#findArgs[@]}" -eq 0 ] || findArgs+=( -or )
findArgs+=( -path "vendor/src/$import" )
done
local IFS=$'\n'
local prune=( $($find vendor -depth -type d -not '(' "${findArgs[@]}" ')') )
unset IFS
for dir in "${prune[@]}"; do
$find "$dir" -maxdepth 1 -not -type d -not -name 'LICENSE*' -not -name 'COPYING*' -exec rm -v -f '{}' ';'
rmdir "$dir" 2>/dev/null || true
done
echo -n 'pruning unused files, '
$find vendor -type f -name '*_test.go' -exec rm -v '{}' ';'
echo done
}
# Fix up hard-coded imports that refer to Godeps paths so they'll work with our vendoring
fix_rewritten_imports () {
local pkg="$1"
local remove="${pkg}/Godeps/_workspace/src/"
local target="vendor/src/$pkg"
echo "$pkg: fixing rewritten imports"
$find "$target" -name \*.go -exec sed -i -e "s|\"${remove}|\"|g" {} \;
}

hack/btrfs_tag.sh (new executable file, 7 lines)

@@ -0,0 +1,7 @@
#!/bin/bash
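# Probe for <btrfs/version.h> by running the C preprocessor over a one-line
# include; if the header is missing, emit the btrfs_noversion build tag.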
cc -E - > /dev/null 2> /dev/null << EOF
#include <btrfs/version.h>
EOF
if test $? -ne 0 ; then
echo btrfs_noversion
fi

hack/libdm_tag.sh (new executable file, 14 lines)

@@ -0,0 +1,14 @@
#!/bin/bash
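# Try to compile a trivial program against <libdevmapper.h>; if that fails,
# emit the libdm_no_deferred_remove build tag.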
tmpdir="$PWD/tmp.$RANDOM"
mkdir -p "$tmpdir"
trap 'rm -fr "$tmpdir"' EXIT
cc -c -o "$tmpdir"/libdm_tag.o -x c - > /dev/null 2> /dev/null << EOF
#include <libdevmapper.h>
int main() {
struct dm_task *task;
return 0;
}
EOF
if test $? -ne 0 ; then
echo libdm_no_deferred_remove
fi


@@ -72,10 +72,10 @@ TESTFLAGS+=" -test.timeout=10m"
go_test_dir() {
dir=$1
(
echo '+ go test' $TESTFLAGS "${SKOPEO_PKG}${dir#.}"
echo '+ go test' $TESTFLAGS ${BUILDTAGS:+-tags "$BUILDTAGS"} "${SKOPEO_PKG}${dir#.}"
cd "$dir"
export DEST="$ABS_DEST" # we're in a subshell, so this is safe -- our integration-cli tests need DEST, and "cd" screws it up
go test $TESTFLAGS
go test $TESTFLAGS ${BUILDTAGS:+-tags "$BUILDTAGS"}
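# ${BUILDTAGS:+-tags "$BUILDTAGS"} expands to nothing when BUILDTAGS is unset
# or empty, so plain builds behave as before while tag overrides flow through.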
)
}


@@ -1,14 +0,0 @@
#! /bin/bash
: ${PROG:=$(basename ${BASH_SOURCE})}
_cli_bash_autocomplete() {
local cur opts base
COMPREPLY=()
cur="${COMP_WORDS[COMP_CWORD]}"
opts=$( ${COMP_WORDS[@]:0:$COMP_CWORD} --generate-bash-completion )
COMPREPLY=( $(compgen -W "${opts}" -- ${cur}) )
return 0
}
complete -F _cli_bash_autocomplete $PROG


@@ -8,7 +8,7 @@ bundle_test_integration() {
# subshell so that we can export PATH without breaking other things
(
make binary-local
make binary-local ${BUILDTAGS:+BUILDTAGS="$BUILDTAGS"}
make install
export GO15VENDOREXPERIMENT=1
bundle_test_integration

hack/vendor.sh (executable file → normal file, 48 lines)

@@ -1,39 +1,15 @@
#!/usr/bin/env bash
#!/bin/bash
# This file is just wrapper around vndr (github.com/LK4D4/vndr) tool.
# For updating dependencies you should change `vendor.conf` file in root of the
# project. Please refer to https://github.com/LK4D4/vndr/blob/master/README.md for
# vndr usage.
set -e
cd "$(dirname "$BASH_SOURCE")/.."
rm -rf vendor/
source 'hack/.vendor-helpers.sh'
if ! hash vndr; then
echo "Please install vndr with \"go get github.com/LK4D4/vndr\" and put it in your \$GOPATH"
exit 1
fi
clone git github.com/urfave/cli v1.17.0
clone git github.com/containers/image master
clone git gopkg.in/cheggaaa/pb.v1 ad4efe000aa550bb54918c06ebbadc0ff17687b9 https://github.com/cheggaaa/pb
clone git github.com/Sirupsen/logrus v0.10.0
clone git github.com/go-check/check v1
clone git github.com/stretchr/testify v1.1.3
clone git github.com/davecgh/go-spew master
clone git github.com/pmezard/go-difflib master
# docker deps from https://github.com/docker/docker/blob/v1.11.2/hack/vendor.sh
clone git github.com/docker/docker v1.11.2
clone git github.com/docker/engine-api v0.3.3
clone git github.com/docker/go-connections v0.2.0
clone git github.com/vbatts/tar-split v0.9.11
clone git github.com/gorilla/context 14f550f51a
clone git github.com/docker/go-units 651fc226e7441360384da338d0fd37f2440ffbe3
clone git golang.org/x/net master https://github.com/golang/net.git
# end docker deps
clone git github.com/docker/distribution master
clone git github.com/docker/libtrust master
clone git github.com/opencontainers/runc master
clone git github.com/opencontainers/image-spec master
clone git github.com/mtrmac/gpgme master
# openshift/origin's k8s dependencies as of OpenShift v1.1.5
clone git github.com/golang/glog 44145f04b68cf362d9c4df2182967c2275eaefed
clone git k8s.io/kubernetes 4a3f9c5b19c7ff804cbc1bf37a15c044ca5d2353 https://github.com/openshift/kubernetes
clone git github.com/ghodss/yaml 73d445a93680fa1a78ae23a5839bad48f32ba1ee
clone git gopkg.in/yaml.v2 d466437aa4adc35830964cffc5b5f262c63ddcb4
clone git github.com/imdario/mergo 6633656539c1639d9d78127b7d47c622b5d7b6dc
clean
mv vendor/src/* vendor/
vndr "$@"


@@ -75,22 +75,21 @@ func (s *SkopeoSuite) TestVersion(c *check.C) {
assertSkopeoSucceeds(c, wanted, "--version")
}
const (
errFetchManifestRegexp = ".*error fetching manifest: status code: %s.*"
)
func (s *SkopeoSuite) TestCanAuthToPrivateRegistryV2WithoutDockerCfg(c *check.C) {
// TODO(runcom)
c.Skip("we need to restore --username --password flags!")
wanted := fmt.Sprintf(errFetchManifestRegexp, "401")
assertSkopeoFails(c, wanted, "--docker-cfg=''", "--username="+s.regV2WithAuth.username, "--password="+s.regV2WithAuth.password, "inspect", fmt.Sprintf("docker://%s/busybox:latest", s.regV2WithAuth.url))
wanted := ".*manifest unknown: manifest unknown.*"
assertSkopeoFails(c, wanted, "--tls-verify=false", "inspect", "--creds="+s.regV2WithAuth.username+":"+s.regV2WithAuth.password, fmt.Sprintf("docker://%s/busybox:latest", s.regV2WithAuth.url))
}
func (s *SkopeoSuite) TestNeedAuthToPrivateRegistryV2WithoutDockerCfg(c *check.C) {
// TODO(runcom): mock the empty docker-cfg by removing it in the test itself (?)
c.Skip("mock empty docker config")
wanted := fmt.Sprintf(errFetchManifestRegexp, "401")
assertSkopeoFails(c, wanted, "--docker-cfg=''", "inspect", fmt.Sprintf("docker://%s/busybox:latest", s.regV2WithAuth.url))
wanted := ".*unauthorized: authentication required.*"
assertSkopeoFails(c, wanted, "--tls-verify=false", "inspect", fmt.Sprintf("docker://%s/busybox:latest", s.regV2WithAuth.url))
}
func (s *SkopeoSuite) TestCertDirInsteadOfCertPath(c *check.C) {
wanted := ".*flag provided but not defined: -cert-path.*"
assertSkopeoFails(c, wanted, "--tls-verify=false", "inspect", fmt.Sprintf("docker://%s/busybox:latest", s.regV2WithAuth.url), "--cert-path=/")
wanted = ".*unauthorized: authentication required.*"
assertSkopeoFails(c, wanted, "--tls-verify=false", "inspect", fmt.Sprintf("docker://%s/busybox:latest", s.regV2WithAuth.url), "--cert-dir=/etc/docker/certs.d/")
}
// TODO(runcom): as soon as we can push to registries ensure you can inspect here
@@ -98,8 +97,8 @@ func (s *SkopeoSuite) TestNeedAuthToPrivateRegistryV2WithoutDockerCfg(c *check.C
func (s *SkopeoSuite) TestNoNeedAuthToPrivateRegistryV2ImageNotFound(c *check.C) {
out, err := exec.Command(skopeoBinary, "--tls-verify=false", "inspect", fmt.Sprintf("docker://%s/busybox:latest", s.regV2.url)).CombinedOutput()
c.Assert(err, check.NotNil, check.Commentf(string(out)))
wanted := fmt.Sprintf(errFetchManifestRegexp, "404")
wanted := ".*manifest unknown.*"
c.Assert(string(out), check.Matches, "(?s)"+wanted) // (?s) : '.' will also match newlines
wanted = fmt.Sprintf(errFetchManifestRegexp, "401")
wanted = ".*unauthorized: authentication required.*"
c.Assert(string(out), check.Not(check.Matches), "(?s)"+wanted) // (?s) : '.' will also match newlines
}


@@ -4,6 +4,7 @@ import (
"bytes"
"fmt"
"io/ioutil"
"log"
"net/http"
"net/http/httptest"
"os"
@@ -11,7 +12,10 @@ import (
"strings"
"github.com/containers/image/manifest"
"github.com/containers/image/signature"
"github.com/go-check/check"
"github.com/opencontainers/go-digest"
"github.com/opencontainers/image-tools/image"
)
func init() {
@@ -96,8 +100,11 @@ func fileFromFixture(c *check.C, inputPath string, edits map[string]string) stri
return path
}
// The most basic (skopeo copy) use:
func (s *CopySuite) TestCopySimple(c *check.C) {
func (s *CopySuite) TestCopyFailsWithManifestList(c *check.C) {
assertSkopeoFails(c, ".*can not copy docker://estesp/busybox:latest: manifest contains multiple images.*", "copy", "docker://estesp/busybox:latest", "dir:somedir")
}
func (s *CopySuite) TestCopySimpleAtomicRegistry(c *check.C) {
dir1, err := ioutil.TempDir("", "copy-1")
c.Assert(err, check.IsNil)
defer os.RemoveAll(dir1)
@@ -107,11 +114,32 @@ func (s *CopySuite) TestCopySimple(c *check.C) {
// FIXME: It would be nice to use one of the local Docker registries instead of needing an Internet connection.
// "pull": docker: → dir:
assertSkopeoSucceeds(c, "", "copy", "docker://estesp/busybox:latest", "dir:"+dir1)
assertSkopeoSucceeds(c, "", "copy", "docker://estesp/busybox:amd64", "dir:"+dir1)
// "push": dir: → atomic:
assertSkopeoSucceeds(c, "", "--tls-verify=false", "--debug", "copy", "dir:"+dir1, "atomic:localhost:5000/myns/unsigned:unsigned")
// The result of pushing and pulling is an unmodified image.
assertSkopeoSucceeds(c, "", "--tls-verify=false", "copy", "atomic:localhost:5000/myns/unsigned:unsigned", "dir:"+dir2)
destructiveCheckDirImagesAreEqual(c, dir1, dir2)
}
// The most basic (skopeo copy) use:
func (s *CopySuite) TestCopySimple(c *check.C) {
const ourRegistry = "docker://" + v2DockerRegistryURL + "/"
dir1, err := ioutil.TempDir("", "copy-1")
c.Assert(err, check.IsNil)
defer os.RemoveAll(dir1)
dir2, err := ioutil.TempDir("", "copy-2")
c.Assert(err, check.IsNil)
defer os.RemoveAll(dir2)
// FIXME: It would be nice to use one of the local Docker registries instead of needing an Internet connection.
// "pull": docker: → dir:
assertSkopeoSucceeds(c, "", "copy", "docker://busybox", "dir:"+dir1)
// "push": dir: → docker(v2s2):
assertSkopeoSucceeds(c, "", "--tls-verify=false", "--debug", "copy", "dir:"+dir1, ourRegistry+"busybox:unsigned")
// The result of pushing and pulling is an unmodified image.
assertSkopeoSucceeds(c, "", "--tls-verify=false", "copy", ourRegistry+"busybox:unsigned", "dir:"+dir2)
out := combinedOutputOfCommand(c, "diff", "-urN", dir1, dir2)
c.Assert(out, check.Equals, "")
@@ -123,8 +151,28 @@ func (s *CopySuite) TestCopySimple(c *check.C) {
assertSkopeoSucceeds(c, "", "copy", "docker://busybox:latest", "oci:"+ociDest)
_, err = os.Stat(ociDest)
c.Assert(err, check.IsNil)
}
// FIXME: Also check pushing to docker://
// Check whether dir: images in dir1 and dir2 are equal.
// WARNING: This modifies the contents of dir1 and dir2!
func destructiveCheckDirImagesAreEqual(c *check.C, dir1, dir2 string) {
// The manifests may have different JWS signatures; so, compare the manifests by digests, which
// strips the signatures, and remove them, comparing the rest file by file.
digests := []digest.Digest{}
for _, dir := range []string{dir1, dir2} {
manifestPath := filepath.Join(dir, "manifest.json")
m, err := ioutil.ReadFile(manifestPath)
c.Assert(err, check.IsNil)
digest, err := manifest.Digest(m)
c.Assert(err, check.IsNil)
digests = append(digests, digest)
err = os.Remove(manifestPath)
c.Assert(err, check.IsNil)
c.Logf("Manifest file %s (digest %s) removed", manifestPath, digest)
}
c.Assert(digests[0], check.Equals, digests[1])
out := combinedOutputOfCommand(c, "diff", "-urN", dir1, dir2)
c.Assert(out, check.Equals, "")
}
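// Illustration (not part of this file): manifest.Digest is what makes the
// comparison above signature-independent; for schema1 manifests it hashes the
// payload with the JWS signatures stripped, so two copies that differ only in
// their signatures still report the same digest:
//
//	m, _ := ioutil.ReadFile(filepath.Join(dir, "manifest.json"))
//	d, _ := manifest.Digest(m) // identical for both copies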
// Streaming (skopeo copy)
@@ -142,28 +190,66 @@ func (s *CopySuite) TestCopyStreaming(c *check.C) {
// Compare (copies of) the original and the copy:
assertSkopeoSucceeds(c, "", "copy", "docker://estesp/busybox:amd64", "dir:"+dir1)
assertSkopeoSucceeds(c, "", "--tls-verify=false", "copy", "atomic:localhost:5000/myns/unsigned:streaming", "dir:"+dir2)
// The manifests will have different JWS signatures; so, compare the manifests by digests, which
// strips the signatures, and remove them, comparing the rest file by file.
digests := []string{}
for _, dir := range []string{dir1, dir2} {
manifestPath := filepath.Join(dir, "manifest.json")
m, err := ioutil.ReadFile(manifestPath)
c.Assert(err, check.IsNil)
digest, err := manifest.Digest(m)
c.Assert(err, check.IsNil)
digests = append(digests, digest)
err = os.Remove(manifestPath)
c.Assert(err, check.IsNil)
c.Logf("Manifest file %s (digest %s) removed", manifestPath, digest)
}
c.Assert(digests[0], check.Equals, digests[1])
out := combinedOutputOfCommand(c, "diff", "-urN", dir1, dir2)
c.Assert(out, check.Equals, "")
destructiveCheckDirImagesAreEqual(c, dir1, dir2)
// FIXME: Also check pushing to docker://
}
// OCI round-trip testing. It's very important to make sure that OCI <-> Docker
// conversion works (while skopeo handles many things, one of the most obvious
// benefits of a tool like skopeo is that you can use OCI tooling to create an
// image and then as the final step convert the image to a non-standard format
// like Docker). But this only works if we _test_ it.
func (s *CopySuite) TestCopyOCIRoundTrip(c *check.C) {
const ourRegistry = "docker://" + v2DockerRegistryURL + "/"
oci1, err := ioutil.TempDir("", "oci-1")
c.Assert(err, check.IsNil)
defer os.RemoveAll(oci1)
oci2, err := ioutil.TempDir("", "oci-2")
c.Assert(err, check.IsNil)
defer os.RemoveAll(oci2)
// Docker -> OCI
assertSkopeoSucceeds(c, "", "--tls-verify=false", "--debug", "copy", "docker://busybox", "oci:"+oci1+":latest")
// OCI -> Docker
assertSkopeoSucceeds(c, "", "--tls-verify=false", "--debug", "copy", "oci:"+oci1+":latest", ourRegistry+"original/busybox:oci_copy")
// Docker -> OCI
assertSkopeoSucceeds(c, "", "--tls-verify=false", "--debug", "copy", ourRegistry+"original/busybox:oci_copy", "oci:"+oci2+":latest")
// OCI -> Docker
assertSkopeoSucceeds(c, "", "--tls-verify=false", "--debug", "copy", "oci:"+oci2+":latest", ourRegistry+"original/busybox:oci_copy2")
// TODO: Add some more tags to output to and check those work properly.
// First, make sure the OCI blobs are the same. This should _always_ be true.
out := combinedOutputOfCommand(c, "diff", "-urN", oci1+"/blobs", oci2+"/blobs")
c.Assert(out, check.Equals, "")
// For some silly reason we pass a logger to the OCI library here...
logger := log.New(os.Stderr, "", 0)
// TODO: Verify using the upstream OCI image validator.
err = image.ValidateLayout(oci1, nil, logger)
c.Assert(err, check.IsNil)
err = image.ValidateLayout(oci2, nil, logger)
c.Assert(err, check.IsNil)
// Now verify that everything is identical. Currently this is true, but
// because we recompute the manifests on-the-fly this doesn't necessarily
// always have to be true (but if this breaks in the future __PLEASE__ make
// sure that the breakage actually makes sense before removing this check).
out = combinedOutputOfCommand(c, "diff", "-urN", oci1, oci2)
c.Assert(out, check.Equals, "")
}
// --sign-by and --policy copy, primarily using atomic:
func (s *CopySuite) TestCopySignatures(c *check.C) {
mech, _, err := signature.NewEphemeralGPGSigningMechanism([]byte{})
c.Assert(err, check.IsNil)
defer mech.Close()
if err := mech.SupportsSigning(); err != nil { // FIXME? Test that verification and policy enforcement works, using signatures from fixtures
c.Skip(fmt.Sprintf("Signing not supported: %v", err))
}
dir, err := ioutil.TempDir("", "signatures-dest")
c.Assert(err, check.IsNil)
defer os.RemoveAll(dir)
@@ -213,6 +299,13 @@ func (s *CopySuite) TestCopySignatures(c *check.C) {
// --policy copy for dir: sources
func (s *CopySuite) TestCopyDirSignatures(c *check.C) {
mech, _, err := signature.NewEphemeralGPGSigningMechanism([]byte{})
c.Assert(err, check.IsNil)
defer mech.Close()
if err := mech.SupportsSigning(); err != nil { // FIXME? Test that verification and policy enforcement works, using signatures from fixtures
c.Skip(fmt.Sprintf("Signing not supported: %v", err))
}
topDir, err := ioutil.TempDir("", "dir-signatures-top")
c.Assert(err, check.IsNil)
defer os.RemoveAll(topDir)
@@ -312,11 +405,18 @@ func findRegularFiles(c *check.C, root string) []string {
// --sign-by and policy use for docker: with sigstore
func (s *CopySuite) TestCopyDockerSigstore(c *check.C) {
mech, _, err := signature.NewEphemeralGPGSigningMechanism([]byte{})
c.Assert(err, check.IsNil)
defer mech.Close()
if err := mech.SupportsSigning(); err != nil { // FIXME? Test that verification and policy enforcement works, using signatures from fixtures
c.Skip(fmt.Sprintf("Signing not supported: %v", err))
}
const ourRegistry = "docker://" + v2DockerRegistryURL + "/"
tmpDir, err := ioutil.TempDir("", "signatures-sigstore")
c.Assert(err, check.IsNil)
//defer os.RemoveAll(tmpDir)
defer os.RemoveAll(tmpDir)
copyDest := filepath.Join(tmpDir, "dest")
err = os.Mkdir(copyDest, 0755)
c.Assert(err, check.IsNil)
@@ -357,7 +457,6 @@ func (s *CopySuite) TestCopyDockerSigstore(c *check.C) {
// Deleting the image succeeds,
assertSkopeoSucceeds(c, "", "--tls-verify=false", "--registries.d", registriesDir, "delete", ourRegistry+"signed/busybox")
// and the signature file has been deleted (but we leave the directories around).
// a signature file has been created,
foundFiles = findRegularFiles(c, plainSigstore)
c.Assert(foundFiles, check.HasLen, 0)
@@ -373,3 +472,87 @@ func (s *CopySuite) TestCopyDockerSigstore(c *check.C) {
splitSigstoreReadServerHandler = http.FileServer(http.Dir(splitSigstoreStaging))
assertSkopeoSucceeds(c, "", "--tls-verify=false", "--policy", policy, "--registries.d", registriesDir, "copy", ourRegistry+"public/busybox", dirDest)
}
// atomic: and docker: X-Registry-Supports-Signatures works and interoperates
func (s *CopySuite) TestCopyAtomicExtension(c *check.C) {
mech, _, err := signature.NewEphemeralGPGSigningMechanism([]byte{})
c.Assert(err, check.IsNil)
defer mech.Close()
if err := mech.SupportsSigning(); err != nil { // FIXME? Test that the reading/writing works using signatures from fixtures
c.Skip(fmt.Sprintf("Signing not supported: %v", err))
}
topDir, err := ioutil.TempDir("", "atomic-extension")
c.Assert(err, check.IsNil)
defer os.RemoveAll(topDir)
for _, subdir := range []string{"dirAA", "dirAD", "dirDA", "dirDD", "registries.d"} {
err := os.MkdirAll(filepath.Join(topDir, subdir), 0755)
c.Assert(err, check.IsNil)
}
registriesDir := filepath.Join(topDir, "registries.d")
dirDest := "dir:" + topDir
policy := fileFromFixture(c, "fixtures/policy.json", map[string]string{"@keydir@": s.gpgHome})
defer os.Remove(policy)
// Get an image to work with to an atomic: destination. Also verifies that we can use Docker repositories without X-Registry-Supports-Signatures
assertSkopeoSucceeds(c, "", "--tls-verify=false", "--registries.d", registriesDir, "copy", "docker://busybox", "atomic:localhost:5000/myns/extension:unsigned")
// Pulling an unsigned image using atomic: fails.
assertSkopeoFails(c, ".*Source image rejected: A signature was required, but no signature exists.*",
"--tls-verify=false", "--policy", policy,
"copy", "atomic:localhost:5000/myns/extension:unsigned", dirDest+"/dirAA")
// The same when pulling using docker:
assertSkopeoFails(c, ".*Source image rejected: A signature was required, but no signature exists.*",
"--tls-verify=false", "--policy", policy, "--registries.d", registriesDir,
"copy", "docker://localhost:5000/myns/extension:unsigned", dirDest+"/dirAD")
// Sign the image using atomic:
assertSkopeoSucceeds(c, "", "--tls-verify=false",
"copy", "--sign-by", "personal@example.com", "atomic:localhost:5000/myns/extension:unsigned", "atomic:localhost:5000/myns/extension:atomic")
// Pulling the image using atomic: now succeeds.
assertSkopeoSucceeds(c, "", "--tls-verify=false", "--policy", policy,
"copy", "atomic:localhost:5000/myns/extension:atomic", dirDest+"/dirAA")
// The same when pulling using docker:
assertSkopeoSucceeds(c, "", "--tls-verify=false", "--policy", policy, "--registries.d", registriesDir,
"copy", "docker://localhost:5000/myns/extension:atomic", dirDest+"/dirAD")
// Both access methods result in the same data.
destructiveCheckDirImagesAreEqual(c, filepath.Join(topDir, "dirAA"), filepath.Join(topDir, "dirAD"))
// Get another image (different so that they don't share signatures, and sign it using docker://)
assertSkopeoSucceeds(c, "", "--tls-verify=false", "--registries.d", registriesDir,
"copy", "--sign-by", "personal@example.com", "docker://estesp/busybox:ppc64le", "atomic:localhost:5000/myns/extension:extension")
c.Logf("%s", combinedOutputOfCommand(c, "oc", "get", "istag", "extension:extension", "-o", "json"))
// Pulling the image using atomic: succeeds.
assertSkopeoSucceeds(c, "", "--debug", "--tls-verify=false", "--policy", policy,
"copy", "atomic:localhost:5000/myns/extension:extension", dirDest+"/dirDA")
// The same when pulling using docker:
assertSkopeoSucceeds(c, "", "--debug", "--tls-verify=false", "--policy", policy, "--registries.d", registriesDir,
"copy", "docker://localhost:5000/myns/extension:extension", dirDest+"/dirDD")
// Both access methods result in the same data.
destructiveCheckDirImagesAreEqual(c, filepath.Join(topDir, "dirDA"), filepath.Join(topDir, "dirDD"))
}
func (s *SkopeoSuite) TestCopySrcWithAuth(c *check.C) {
assertSkopeoSucceeds(c, "", "--tls-verify=false", "copy", "--dest-creds=testuser:testpassword", "docker://busybox", fmt.Sprintf("docker://%s/busybox:latest", s.regV2WithAuth.url))
dir1, err := ioutil.TempDir("", "copy-1")
c.Assert(err, check.IsNil)
defer os.RemoveAll(dir1)
assertSkopeoSucceeds(c, "", "--tls-verify=false", "copy", "--src-creds=testuser:testpassword", fmt.Sprintf("docker://%s/busybox:latest", s.regV2WithAuth.url), "dir:"+dir1)
}
func (s *SkopeoSuite) TestCopyDestWithAuth(c *check.C) {
assertSkopeoSucceeds(c, "", "--tls-verify=false", "copy", "--dest-creds=testuser:testpassword", "docker://busybox", fmt.Sprintf("docker://%s/busybox:latest", s.regV2WithAuth.url))
}
func (s *SkopeoSuite) TestCopySrcAndDestWithAuth(c *check.C) {
assertSkopeoSucceeds(c, "", "--tls-verify=false", "copy", "--dest-creds=testuser:testpassword", "docker://busybox", fmt.Sprintf("docker://%s/busybox:latest", s.regV2WithAuth.url))
assertSkopeoSucceeds(c, "", "--tls-verify=false", "copy", "--src-creds=testuser:testpassword", "--dest-creds=testuser:testpassword", fmt.Sprintf("docker://%s/busybox:latest", s.regV2WithAuth.url), fmt.Sprintf("docker://%s/test:auth", s.regV2WithAuth.url))
}
func (s *CopySuite) TestCopyNoPanicOnHTTPResponseWOTLSVerifyFalse(c *check.C) {
const ourRegistry = "docker://" + v2DockerRegistryURL + "/"
// dir:test isn't created beforehand because we already expect this copy to
// fail while evaluating the src
assertSkopeoFails(c, ".*server gave HTTP response to HTTPS client.*",
"copy", ourRegistry+"foobar", "dir:test")
}


@@ -13,6 +13,13 @@
"keyPath": "@keydir@/personal-pubkey.gpg"
}
],
"localhost:5000/myns/extension": [
{
"type": "signedBy",
"keyType": "GPGKeys",
"keyPath": "@keydir@/personal-pubkey.gpg"
}
],
"docker.io/openshift": [
{
"type": "insecureAcceptAnything"
@@ -85,6 +92,13 @@
"keyType": "GPGKeys",
"keyPath": "@keydir@/personal-pubkey.gpg"
}
],
"localhost:5000/myns/extension": [
{
"type": "signedBy",
"keyType": "GPGKeys",
"keyPath": "@keydir@/personal-pubkey.gpg"
}
]
}
}


@@ -14,6 +14,10 @@ import (
"github.com/go-check/check"
)
var adminKUBECONFIG = map[string]string{
"KUBECONFIG": "openshift.local.config/master/admin.kubeconfig",
}
// openshiftCluster is an OpenShift API master and integrated registry
// running on localhost.
type openshiftCluster struct {
@@ -42,10 +46,20 @@ func startOpenshiftCluster(c *check.C) *openshiftCluster {
return cluster
}
// clusterCmd creates an exec.Cmd in c.workingDir with current environment modified by environment
func (c *openshiftCluster) clusterCmd(env map[string]string, name string, args ...string) *exec.Cmd {
cmd := exec.Command(name, args...)
cmd.Dir = c.workingDir
cmd.Env = os.Environ()
for key, value := range env {
cmd.Env = modifyEnviron(cmd.Env, key, value)
}
return cmd
}
// startMaster starts the OpenShift master (etcd+API server) and waits for it to be ready, or terminates on failure.
func (c *openshiftCluster) startMaster() {
c.master = exec.Command("openshift", "start", "master")
c.master.Dir = c.workingDir
c.master = c.clusterCmd(nil, "openshift", "start", "master")
stdout, err := c.master.StdoutPipe()
// Send both to the same pipe. This might cause the two streams to be mixed up,
// but logging actually goes only to stderr - this primarily ensure we log any
@@ -112,14 +126,35 @@ func (c *openshiftCluster) startMaster() {
// startRegistry starts the OpenShift registry and waits for it to be ready, or terminates on failure.
func (c *openshiftCluster) startRegistry() {
//KUBECONFIG=openshift.local.config/master/openshift-registry.kubeconfig DOCKER_REGISTRY_URL=127.0.0.1:5000
c.registry = exec.Command("dockerregistry", "/atomic-registry-config.yml")
c.registry.Dir = c.workingDir
c.registry.Env = os.Environ()
c.registry.Env = modifyEnviron(c.registry.Env, "KUBECONFIG", "openshift.local.config/master/openshift-registry.kubeconfig")
c.registry.Env = modifyEnviron(c.registry.Env, "DOCKER_REGISTRY_URL", "127.0.0.1:5000")
// This partially mimics the objects created by (oadm registry), except that we run the
// server directly as an ordinary process instead of a pod with an implicitly attached service account.
saJSON := `{
"apiVersion": "v1",
"kind": "ServiceAccount",
"metadata": {
"name": "registry"
}
}`
cmd := c.clusterCmd(adminKUBECONFIG, "oc", "create", "-f", "-")
runExecCmdWithInput(c.c, cmd, saJSON)
cmd = c.clusterCmd(adminKUBECONFIG, "oadm", "policy", "add-cluster-role-to-user", "system:registry", "-z", "registry")
out, err := cmd.CombinedOutput()
c.c.Assert(err, check.IsNil, check.Commentf("%s", string(out)))
c.c.Assert(string(out), check.Equals, "cluster role \"system:registry\" added: \"registry\"\n")
cmd = c.clusterCmd(adminKUBECONFIG, "oadm", "create-api-client-config", "--client-dir=openshift.local.registry", "--basename=openshift-registry", "--user=system:serviceaccount:default:registry")
out, err = cmd.CombinedOutput()
c.c.Assert(err, check.IsNil, check.Commentf("%s", string(out)))
c.c.Assert(string(out), check.Equals, "")
//KUBECONFIG=openshift.local.registry/openshift-registry.kubeconfig DOCKER_REGISTRY_URL=127.0.0.1:5000
c.registry = c.clusterCmd(map[string]string{
"KUBECONFIG": "openshift.local.registry/openshift-registry.kubeconfig",
"DOCKER_REGISTRY_URL": "127.0.0.1:5000",
}, "dockerregistry", "/atomic-registry-config.yml")
consumeAndLogOutputs(c.c, "registry", c.registry)
err := c.registry.Start()
err = c.registry.Start()
c.c.Assert(err, check.IsNil)
portOpen, terminatePortCheck := newPortChecker(c.c, 5000)
@@ -134,8 +169,7 @@ func (c *openshiftCluster) startRegistry() {
// ocLoginToProject runs (oc login) and (oc new-project) on the cluster, or terminates on failure.
func (c *openshiftCluster) ocLoginToProject() {
c.c.Logf("oc login")
cmd := exec.Command("oc", "login", "--certificate-authority=openshift.local.config/master/ca.crt", "-u", "myuser", "-p", "mypw", "https://localhost:8443")
cmd.Dir = c.workingDir
cmd := c.clusterCmd(nil, "oc", "login", "--certificate-authority=openshift.local.config/master/ca.crt", "-u", "myuser", "-p", "mypw", "https://localhost:8443")
out, err := cmd.CombinedOutput()
c.c.Assert(err, check.IsNil, check.Commentf("%s", out))
c.c.Assert(string(out), check.Matches, "(?s).*Login successful.*") // (?s) : '.' will also match newlines
@@ -170,13 +204,10 @@ func (c *openshiftCluster) dockerLogin() {
// FIXME: This also allows anyone to DoS anyone else; this design is really not all
// that workable, but it is the best we can do for now.
func (c *openshiftCluster) relaxImageSignerPermissions() {
cmd := exec.Command("oadm", "policy", "add-cluster-role-to-group", "system:image-signer", "system:authenticated")
cmd.Dir = c.workingDir
cmd.Env = os.Environ()
cmd.Env = modifyEnviron(cmd.Env, "KUBECONFIG", "openshift.local.config/master/admin.kubeconfig")
cmd := c.clusterCmd(adminKUBECONFIG, "oadm", "policy", "add-cluster-role-to-group", "system:image-signer", "system:authenticated")
out, err := cmd.CombinedOutput()
c.c.Assert(err, check.IsNil, check.Commentf("%s", string(out)))
c.c.Assert(string(out), check.Equals, "")
c.c.Assert(string(out), check.Equals, "cluster role \"system:image-signer\" added: \"system:authenticated\"\n")
}
// tearDown stops the cluster services and deletes (only some!) of the state.


@@ -0,0 +1,40 @@
// +build openshift_shell
package main
import (
"os"
"os/exec"
"github.com/go-check/check"
)
/*
TestRunShell is not really a test; it is a convenient way to use the registry setup code
in openshift.go and CopySuite to get an interactive environment for experimentation.
To use it, run:
sudo make shell
to start a container, then within the container:
SKOPEO_CONTAINER_TESTS=1 PS1='nested> ' go test -tags openshift_shell -timeout=24h ./integration -v -check.v -check.vv -check.f='CopySuite.TestRunShell'
An example of what can be done within the container:
cd ..; make binary-local install
./skopeo --tls-verify=false copy --sign-by=personal@example.com docker://busybox:latest atomic:localhost:5000/myns/personal:personal
oc get istag personal:personal -o json
curl -L -v 'http://localhost:5000/v2/'
cat ~/.docker/config.json
curl -L -v 'http://localhost:5000/openshift/token?scope=repository:myns/personal:pull' --header 'Authorization: Basic $auth_from_docker'
curl -L -v 'http://localhost:5000/v2/myns/personal/manifests/personal' --header 'Authorization: Bearer $token_from_oauth'
curl -L -v 'http://localhost:5000/extensions/v2/myns/personal/signatures/$manifest_digest' --header 'Authorization: Bearer $token_from_oauth'
*/
func (s *CopySuite) TestRunShell(c *check.C) {
cmd := exec.Command("bash", "-i")
tty, err := os.OpenFile("/dev/tty", os.O_RDWR, 0)
c.Assert(err, check.IsNil)
cmd.Stdin = tty
cmd.Stdout = tty
cmd.Stderr = tty
err = cmd.Run()
c.Assert(err, check.IsNil)
}


@@ -8,6 +8,7 @@ import (
"os/exec"
"strings"
"github.com/containers/image/signature"
"github.com/go-check/check"
)
@@ -36,7 +37,14 @@ func findFingerprint(lineBytes []byte) (string, error) {
}
func (s *SigningSuite) SetUpTest(c *check.C) {
_, err := exec.LookPath(skopeoBinary)
mech, _, err := signature.NewEphemeralGPGSigningMechanism([]byte{})
c.Assert(err, check.IsNil)
defer mech.Close()
if err := mech.SupportsSigning(); err != nil { // FIXME? Test that verification and policy enforcement works, using signatures from fixtures
c.Skip(fmt.Sprintf("Signing not supported: %v", err))
}
_, err = exec.LookPath(skopeoBinary)
c.Assert(err, check.IsNil)
s.gpgHome, err = ioutil.TempDir("", "skopeo-gpg")


@@ -74,9 +74,15 @@ func assertSkopeoFails(c *check.C, regexp string, args ...string) {
// runCommandWithInput runs a command as if exec.Command(), sending it the input to stdin,
// and verifies that the exit status is 0, or terminates c on failure.
func runCommandWithInput(c *check.C, input string, name string, args ...string) {
c.Logf("Running %s %s", name, strings.Join(args, " "))
cmd := exec.Command(name, args...)
consumeAndLogOutputs(c, name+" "+strings.Join(args, " "), cmd)
runExecCmdWithInput(c, cmd, input)
}
// runExecCmdWithInput runs an exec.Cmd, sending it the input to stdin,
// and verifies that the exit status is 0, or terminates c on failure.
func runExecCmdWithInput(c *check.C, cmd *exec.Cmd, input string) {
c.Logf("Running %s %s", cmd.Path, strings.Join(cmd.Args, " "))
consumeAndLogOutputs(c, cmd.Path+" "+strings.Join(cmd.Args, " "), cmd)
stdin, err := cmd.StdinPipe()
c.Assert(err, check.IsNil)
err = cmd.Start()

vendor.conf (new file, 45 lines)

@@ -0,0 +1,45 @@
github.com/urfave/cli v1.17.0
github.com/containers/image master
github.com/opencontainers/go-digest master
gopkg.in/cheggaaa/pb.v1 ad4efe000aa550bb54918c06ebbadc0ff17687b9 https://github.com/cheggaaa/pb
github.com/containers/storage master
github.com/Sirupsen/logrus v0.10.0
github.com/go-check/check v1
github.com/stretchr/testify v1.1.3
github.com/davecgh/go-spew master
github.com/pmezard/go-difflib master
github.com/pkg/errors master
golang.org/x/crypto master
# docker deps from https://github.com/docker/docker/blob/v1.11.2/hack/vendor.sh
github.com/docker/docker v1.13.0
github.com/docker/go-connections 4ccf312bf1d35e5dbda654e57a9be4c3f3cd0366
github.com/vbatts/tar-split v0.10.1
github.com/gorilla/context 14f550f51a
github.com/gorilla/mux e444e69cbd
github.com/docker/go-units 8a7beacffa3009a9ac66bad506b18ffdd110cf97
golang.org/x/net master
# end docker deps
golang.org/x/text master
github.com/docker/distribution master
github.com/docker/libtrust master
github.com/opencontainers/runc master
github.com/opencontainers/image-spec v1.0.0-rc4
# -- start OCI image validation requirements.
github.com/opencontainers/runtime-spec v1.0.0-rc4
github.com/opencontainers/image-tools v0.1.0
github.com/xeipuuv/gojsonschema master
github.com/xeipuuv/gojsonreference master
github.com/xeipuuv/gojsonpointer master
go4.org master https://github.com/camlistore/go4
# -- end OCI image validation requirements
github.com/mtrmac/gpgme master
# openshift/origin's k8s dependencies as of OpenShift v1.1.5
github.com/golang/glog 44145f04b68cf362d9c4df2182967c2275eaefed
k8s.io/client-go master
github.com/ghodss/yaml 73d445a93680fa1a78ae23a5839bad48f32ba1ee
gopkg.in/yaml.v2 d466437aa4adc35830964cffc5b5f262c63ddcb4
github.com/imdario/mergo 6633656539c1639d9d78127b7d47c622b5d7b6dc
# containers/storage's dependencies that aren't already being pulled in
github.com/mistifyio/go-zfs 22c9b32c84eb0d0c6f4043b6e90fc94073de92fa
github.com/pborman/uuid v1.0
github.com/opencontainers/selinux master


@@ -1 +0,0 @@
logrus


@@ -1,9 +0,0 @@
language: go
go:
- 1.3
- 1.4
- 1.5
- tip
install:
- go get -t ./...
script: GOMAXPROCS=4 GORACE="halt_on_error=1" go test -race -v ./...


@@ -1,66 +0,0 @@
# 0.10.0
* feature: Add a test hook (#180)
* feature: `ParseLevel` is now case-insensitive (#326)
* feature: `FieldLogger` interface that generalizes `Logger` and `Entry` (#308)
* performance: avoid re-allocations on `WithFields` (#335)
# 0.9.0
* logrus/text_formatter: don't emit empty msg
* logrus/hooks/airbrake: move out of main repository
* logrus/hooks/sentry: move out of main repository
* logrus/hooks/papertrail: move out of main repository
* logrus/hooks/bugsnag: move out of main repository
* logrus/core: run tests with `-race`
* logrus/core: detect TTY based on `stderr`
* logrus/core: support `WithError` on logger
* logrus/core: Solaris support
# 0.8.7
* logrus/core: fix possible race (#216)
* logrus/doc: small typo fixes and doc improvements
# 0.8.6
* hooks/raven: allow passing an initialized client
# 0.8.5
* logrus/core: revert #208
# 0.8.4
* formatter/text: fix data race (#218)
# 0.8.3
* logrus/core: fix entry log level (#208)
* logrus/core: improve performance of text formatter by 40%
* logrus/core: expose `LevelHooks` type
* logrus/core: add support for DragonflyBSD and NetBSD
* formatter/text: print structs more verbosely
# 0.8.2
* logrus: fix more Fatal family functions
# 0.8.1
* logrus: fix not exiting on `Fatalf` and `Fatalln`
# 0.8.0
* logrus: defaults to stderr instead of stdout
* hooks/sentry: add special field for `*http.Request`
* formatter/text: ignore Windows for colors
# 0.7.3
* formatter/\*: allow configuration of timestamp layout
# 0.7.2
* formatter/text: Add configuration option for time format (#158)

vendor/github.com/containers/image/README.md (generated, vendored; new file, 52 lines)

@@ -0,0 +1,52 @@
[![GoDoc](https://godoc.org/github.com/containers/image?status.svg)](https://godoc.org/github.com/containers/image) [![Build Status](https://travis-ci.org/containers/image.svg?branch=master)](https://travis-ci.org/containers/image)
=
`image` is a set of Go libraries aimed at working in various ways with
container images and container image registries.
The containers/image library allows applications to pull and push images from
container image registries, like the upstream docker registry. It also
implements "simple image signing".
The containers/image library also allows you to inspect a repository on a
container registry without pulling down the image: it fetches the repository's
manifest and can show you a `docker inspect`-like JSON output about a whole
repository or a tag. In contrast to `docker inspect`, this helps you gather
useful information about a repository or a tag without requiring you to run
`docker pull`.
The containers/image library also allows you to translate from one image format
to another, for example docker container images to OCI images. It also allows
you to copy container images between various registries, possibly converting
them as necessary, and to sign and verify images.
The [skopeo](https://github.com/projectatomic/skopeo) tool uses the
containers/image library and takes advantage of its many features.
## Dependencies
Dependencies that this library prefers will not be found in the `vendor`
directory. This is so you can make well-informed decisions about which
libraries you should use with this package in your own projects.
What this project tests against dependencies-wise is located
[here](https://github.com/containers/image/blob/master/vendor.conf).
## Building
For ordinary use, `go build ./...` is sufficient.
When developing this library, please use `make` to take advantage of the tests and validation.
Optionally, you can use the `containers_image_openpgp` build tag (using `go build -tags …`, or `make … BUILDTAGS=…`).
This will use a Golang-only OpenPGP implementation for signature verification instead of the default cgo/gpgme-based implementation;
the primary downside is that creating new signatures with the Golang-only implementation is not supported.
## License
ASL 2.0
## Contact
- Mailing list: [containers-dev](https://groups.google.com/forum/?hl=en#!forum/containers-dev)
- IRC: #[container-projects](irc://irc.freenode.net:6667/#container-projects) on freenode.net


@@ -3,60 +3,67 @@ package copy
import (
"bytes"
"compress/gzip"
"crypto/sha256"
"crypto/subtle"
"encoding/hex"
"errors"
"fmt"
"hash"
"io"
"io/ioutil"
"reflect"
"strings"
"time"
pb "gopkg.in/cheggaaa/pb.v1"
"github.com/Sirupsen/logrus"
"github.com/containers/image/image"
"github.com/containers/image/manifest"
"github.com/containers/image/pkg/compression"
"github.com/containers/image/signature"
"github.com/containers/image/transports"
"github.com/containers/image/types"
"github.com/opencontainers/go-digest"
"github.com/pkg/errors"
)
// supportedDigests lists the supported blob digest types.
var supportedDigests = map[string]func() hash.Hash{
"sha256": sha256.New,
}
// preferredManifestMIMETypes lists manifest MIME types in order of our preference, if we can't use the original manifest and need to convert.
// Prefer v2s2 to v2s1 because v2s2 does not need to be changed when uploading to a different location.
// Include v2s1 signed but not v2s1 unsigned, because docker/distribution requires a signature even if the unsigned MIME type is used.
var preferredManifestMIMETypes = []string{manifest.DockerV2Schema2MediaType, manifest.DockerV2Schema1SignedMediaType}
type digestingReader struct {
source io.Reader
digest hash.Hash
expectedDigest []byte
digester digest.Digester
expectedDigest digest.Digest
validationFailed bool
}
// imageCopier allows us to keep track of diffID values for blobs, and other
// data, that we're copying between images, and cache other information that
// might allow us to take some shortcuts
type imageCopier struct {
copiedBlobs map[digest.Digest]digest.Digest
cachedDiffIDs map[digest.Digest]digest.Digest
manifestUpdates *types.ManifestUpdateOptions
dest types.ImageDestination
src types.Image
rawSource types.ImageSource
diffIDsAreNeeded bool
canModifyManifest bool
reportWriter io.Writer
progressInterval time.Duration
progress chan types.ProgressProperties
}
// newDigestingReader returns an io.Reader implementation with contents of source, which will eventually return a non-EOF error
// and set validationFailed to true if the source stream does not match expectedDigestString.
func newDigestingReader(source io.Reader, expectedDigestString string) (*digestingReader, error) {
fields := strings.SplitN(expectedDigestString, ":", 2)
if len(fields) != 2 {
return nil, fmt.Errorf("Invalid digest specification %s", expectedDigestString)
// and set validationFailed to true if the source stream does not match expectedDigest.
func newDigestingReader(source io.Reader, expectedDigest digest.Digest) (*digestingReader, error) {
if err := expectedDigest.Validate(); err != nil {
return nil, errors.Errorf("Invalid digest specification %s", expectedDigest)
}
fn, ok := supportedDigests[fields[0]]
if !ok {
return nil, fmt.Errorf("Invalid digest specification %s: unknown digest type %s", expectedDigestString, fields[0])
}
digest := fn()
expectedDigest, err := hex.DecodeString(fields[1])
if err != nil {
return nil, fmt.Errorf("Invalid digest value %s: %v", expectedDigestString, err)
}
if len(expectedDigest) != digest.Size() {
return nil, fmt.Errorf("Invalid digest specification %s: length %d does not match %d", expectedDigestString, len(expectedDigest), digest.Size())
digestAlgorithm := expectedDigest.Algorithm()
if !digestAlgorithm.Available() {
return nil, errors.Errorf("Invalid digest specification %s: unsupported digest algorithm %s", expectedDigest, digestAlgorithm)
}
return &digestingReader{
source: source,
digest: digest,
digester: digestAlgorithm.Digester(),
expectedDigest: expectedDigest,
validationFailed: false,
}, nil
@@ -65,18 +72,18 @@ func newDigestingReader(source io.Reader, expectedDigestString string) (*digesti
func (d *digestingReader) Read(p []byte) (int, error) {
n, err := d.source.Read(p)
if n > 0 {
if n2, err := d.digest.Write(p[:n]); n2 != n || err != nil {
if n2, err := d.digester.Hash().Write(p[:n]); n2 != n || err != nil {
// Coverage: This should not happen, the hash.Hash interface requires
// d.digest.Write to never return an error, and the io.Writer interface
// requires n2 == len(input) if no error is returned.
return 0, fmt.Errorf("Error updating digest during verification: %d vs. %d, %v", n2, n, err)
return 0, errors.Wrapf(err, "Error updating digest during verification: %d vs. %d", n2, n)
}
}
if err == io.EOF {
actualDigest := d.digest.Sum(nil)
if subtle.ConstantTimeCompare(actualDigest, d.expectedDigest) != 1 {
actualDigest := d.digester.Digest()
if actualDigest != d.expectedDigest {
d.validationFailed = true
return 0, fmt.Errorf("Digest did not match, expected %s, got %s", hex.EncodeToString(d.expectedDigest), hex.EncodeToString(actualDigest))
return 0, errors.Errorf("Digest did not match, expected %s, got %s", d.expectedDigest, actualDigest)
}
}
return n, err
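// Illustration (not part of this file) of the go-digest pattern used above:
// a Digester accumulates the hash while data streams through it, and the
// result compares directly against the expected digest:
//
//	expected := digest.FromString("hello")
//	digester := expected.Algorithm().Digester()
//	_, _ = io.Copy(digester.Hash(), strings.NewReader("hello"))
//	match := digester.Digest() == expected // true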
@@ -87,146 +94,221 @@ type Options struct {
RemoveSignatures bool // Remove any pre-existing signatures. SignBy will still add a new signature.
SignBy string // If non-empty, asks for a signature to be added during the copy, and specifies a key ID, as accepted by signature.NewGPGSigningMechanism().SignDockerManifest(),
ReportWriter io.Writer
SourceCtx *types.SystemContext
DestinationCtx *types.SystemContext
ProgressInterval time.Duration // time to wait between reports to signal the progress channel
Progress chan types.ProgressProperties // Reported to when ProgressInterval has arrived for a single artifact+offset.
}
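// Illustration (not part of this file): a typical caller of the updated API
// parses both references with alltransports.ParseImageName and passes
// per-side contexts through Options (the reference strings are placeholders):
//
//	srcRef, _ := alltransports.ParseImageName("docker://busybox")
//	destRef, _ := alltransports.ParseImageName("dir:/tmp/busybox")
//	err := copy.Image(policyContext, destRef, srcRef, &copy.Options{
//		ReportWriter: os.Stdout,
//		SourceCtx:    &types.SystemContext{},
//	})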
// Image copies image from srcRef to destRef, using policyContext to validate
// source image admissibility.
func Image(policyContext *signature.PolicyContext, destRef, srcRef types.ImageReference, options *Options) (retErr error) {
// NOTE this function uses an output parameter for the error return value.
// Setting this and returning is the ideal way to return an error.
//
// the defers in this routine will wrap the error return with its own errors
// which can be valuable context in the middle of a multi-streamed copy.
if options == nil {
options = &Options{}
}
reportWriter := ioutil.Discard
if options.ReportWriter != nil {
reportWriter = options.ReportWriter
}
writeReport := func(f string, a ...interface{}) {
fmt.Fprintf(reportWriter, f, a...)
}
dest, err := destRef.NewImageDestination(options.DestinationCtx)
if err != nil {
return errors.Wrapf(err, "Error initializing destination %s", transports.ImageName(destRef))
}
defer func() {
if err := dest.Close(); err != nil {
retErr = errors.Wrapf(retErr, " (dest: %v)", err)
}
}()
destSupportedManifestMIMETypes := dest.SupportedManifestMIMETypes()
rawSource, err := srcRef.NewImageSource(options.SourceCtx, destSupportedManifestMIMETypes)
if err != nil {
return errors.Wrapf(err, "Error initializing source %s", transports.ImageName(srcRef))
}
unparsedImage := image.UnparsedFromSource(rawSource)
defer func() {
if unparsedImage != nil {
if err := unparsedImage.Close(); err != nil {
retErr = errors.Wrapf(retErr, " (unparsed: %v)", err)
}
}
}()
// Please keep this policy check BEFORE reading any other information about the image.
if allowed, err := policyContext.IsRunningImageAllowed(unparsedImage); !allowed || err != nil { // Be paranoid and fail if either return value indicates so.
return errors.Wrap(err, "Source image rejected")
}
src, err := image.FromUnparsedImage(unparsedImage)
if err != nil {
return errors.Wrapf(err, "Error initializing image from source %s", transports.ImageName(srcRef))
}
unparsedImage = nil
defer func() {
if err := src.Close(); err != nil {
retErr = errors.Wrapf(retErr, " (source: %v)", err)
}
}()
if src.IsMultiImage() {
return errors.Errorf("can not copy %s: manifest contains multiple images", transports.ImageName(srcRef))
}
var sigs [][]byte
if options.RemoveSignatures {
sigs = [][]byte{}
} else {
writeReport("Getting image source signatures\n")
s, err := src.Signatures()
if err != nil {
return errors.Wrap(err, "Error reading signatures")
}
}
sigs = s
}
if len(sigs) != 0 {
writeReport("Checking if image destination supports signatures\n")
if err := dest.SupportsSignatures(); err != nil {
return fmt.Errorf("Can not copy signatures: %v", err)
return errors.Wrap(err, "Can not copy signatures")
}
}
canModifyManifest := len(sigs) == 0
writeReport("Getting image source configuration\n")
srcConfigInfo, err := src.ConfigInfo()
if err != nil {
return fmt.Errorf("Error parsing manifest: %v", err)
}
if srcConfigInfo.Digest != "" {
writeReport("Uploading blob %s\n", srcConfigInfo.Digest)
destConfigInfo, err := copyBlob(dest, rawSource, srcConfigInfo, false, reportWriter)
if err != nil {
return err
}
if destConfigInfo.Digest != srcConfigInfo.Digest {
return fmt.Errorf("Internal error: copying uncompressed config blob %s changed digest to %s", srcConfigInfo.Digest, destConfigInfo.Digest)
}
}
srcLayerInfos, err := src.LayerInfos()
if err != nil {
return fmt.Errorf("Error parsing manifest: %v", err)
}
destLayerInfos := []types.BlobInfo{}
copiedLayers := map[string]types.BlobInfo{}
for _, srcLayer := range srcLayerInfos {
destLayer, ok := copiedLayers[srcLayer.Digest]
if !ok {
writeReport("Uploading blob %s\n", srcLayer.Digest)
destLayer, err = copyBlob(dest, rawSource, srcLayer, canModifyManifest, reportWriter)
if err != nil {
return err
}
copiedLayers[srcLayer.Digest] = destLayer
}
destLayerInfos = append(destLayerInfos, destLayer)
}
manifestUpdates := types.ManifestUpdateOptions{}
if layerDigestsDiffer(srcLayerInfos, destLayerInfos) {
manifestUpdates.LayerInfos = destLayerInfos
if err := determineManifestConversion(&manifestUpdates, src, destSupportedManifestMIMETypes, canModifyManifest); err != nil {
return err
}
if !reflect.DeepEqual(manifestUpdates, types.ManifestUpdateOptions{}) {
// If src.UpdatedImageNeedsLayerDiffIDs(manifestUpdates) will be true, it needs to be true by the time we get here.
ic := imageCopier{
copiedBlobs: make(map[digest.Digest]digest.Digest),
cachedDiffIDs: make(map[digest.Digest]digest.Digest),
manifestUpdates: &manifestUpdates,
dest: dest,
src: src,
rawSource: rawSource,
diffIDsAreNeeded: src.UpdatedImageNeedsLayerDiffIDs(manifestUpdates),
canModifyManifest: canModifyManifest,
reportWriter: reportWriter,
progressInterval: options.ProgressInterval,
progress: options.Progress,
}
if err := ic.copyLayers(); err != nil {
return err
}
pendingImage := src
if !reflect.DeepEqual(manifestUpdates, types.ManifestUpdateOptions{InformationOnly: manifestUpdates.InformationOnly}) {
if !canModifyManifest {
return fmt.Errorf("Internal error: copy needs an updated manifest but that was known to be forbidden")
return errors.Errorf("Internal error: copy needs an updated manifest but that was known to be forbidden")
}
manifest, err = src.UpdatedManifest(manifestUpdates)
manifestUpdates.InformationOnly.Destination = dest
pendingImage, err = src.UpdatedImage(manifestUpdates)
if err != nil {
return fmt.Errorf("Error creating an updated manifest: %v", err)
return errors.Wrap(err, "Error creating an updated image manifest")
}
}
manifest, _, err := pendingImage.Manifest()
if err != nil {
return errors.Wrap(err, "Error reading manifest")
}
if options != nil && options.SignBy != "" {
if err := ic.copyConfig(pendingImage); err != nil {
return err
}
if options.SignBy != "" {
mech, err := signature.NewGPGSigningMechanism()
if err != nil {
return fmt.Errorf("Error initializing GPG: %v", err)
return errors.Wrap(err, "Error initializing GPG")
}
defer mech.Close()
if err := mech.SupportsSigning(); err != nil {
return errors.Wrap(err, "Signing not supported")
}
dockerReference := dest.Reference().DockerReference()
if dockerReference == nil {
return fmt.Errorf("Cannot determine canonical Docker reference for destination %s", transports.ImageName(dest.Reference()))
return errors.Errorf("Cannot determine canonical Docker reference for destination %s", transports.ImageName(dest.Reference()))
}
writeReport("Signing manifest\n")
newSig, err := signature.SignDockerManifest(manifest, dockerReference.String(), mech, options.SignBy)
if err != nil {
return fmt.Errorf("Error creating signature: %v", err)
return errors.Wrap(err, "Error creating signature")
}
sigs = append(sigs, newSig)
}
writeReport("Uploading manifest to image destination\n")
writeReport("Writing manifest to image destination\n")
if err := dest.PutManifest(manifest); err != nil {
return fmt.Errorf("Error writing manifest: %v", err)
return errors.Wrap(err, "Error writing manifest")
}
writeReport("Storing signatures\n")
if err := dest.PutSignatures(sigs); err != nil {
return fmt.Errorf("Error writing signatures: %v", err)
return errors.Wrap(err, "Error writing signatures")
}
if err := dest.Commit(); err != nil {
return fmt.Errorf("Error committing the finished image: %v", err)
return errors.Wrap(err, "Error committing the finished image")
}
return nil
}
// copyLayers copies layers from src/rawSource to dest, using and updating ic.manifestUpdates if necessary and ic.canModifyManifest.
func (ic *imageCopier) copyLayers() error {
srcInfos := ic.src.LayerInfos()
destInfos := []types.BlobInfo{}
diffIDs := []digest.Digest{}
for _, srcLayer := range srcInfos {
var (
destInfo types.BlobInfo
diffID digest.Digest
err error
)
if ic.dest.AcceptsForeignLayerURLs() && len(srcLayer.URLs) != 0 {
// DiffIDs are, currently, needed only when converting from schema1.
// In which case src.LayerInfos will not have URLs because schema1
// does not support them.
if ic.diffIDsAreNeeded {
return errors.New("getting DiffID for foreign layers is unimplemented")
}
destInfo = srcLayer
fmt.Fprintf(ic.reportWriter, "Skipping foreign layer %q copy to %s\n", destInfo.Digest, ic.dest.Reference().Transport().Name())
} else {
destInfo, diffID, err = ic.copyLayer(srcLayer)
if err != nil {
return err
}
}
destInfos = append(destInfos, destInfo)
diffIDs = append(diffIDs, diffID)
}
ic.manifestUpdates.InformationOnly.LayerInfos = destInfos
if ic.diffIDsAreNeeded {
ic.manifestUpdates.InformationOnly.LayerDiffIDs = diffIDs
}
if layerDigestsDiffer(srcInfos, destInfos) {
ic.manifestUpdates.LayerInfos = destInfos
}
return nil
}
// layerDigestsDiffer returns true iff the digests in a and b differ (ignoring sizes and possible other fields)
func layerDigestsDiffer(a, b []types.BlobInfo) bool {
if len(a) != len(b) {
@@ -240,15 +322,154 @@ func layerDigestsDiffer(a, b []types.BlobInfo) bool {
return false
}
// copyConfig copies config.json, if any, from src to dest.
func (ic *imageCopier) copyConfig(src types.Image) error {
srcInfo := src.ConfigInfo()
if srcInfo.Digest != "" {
fmt.Fprintf(ic.reportWriter, "Copying config %s\n", srcInfo.Digest)
configBlob, err := src.ConfigBlob()
if err != nil {
return errors.Wrapf(err, "Error reading config blob %s", srcInfo.Digest)
}
destInfo, err := ic.copyBlobFromStream(bytes.NewReader(configBlob), srcInfo, nil, false)
if err != nil {
return err
}
if destInfo.Digest != srcInfo.Digest {
return errors.Errorf("Internal error: copying uncompressed config blob %s changed digest to %s", srcInfo.Digest, destInfo.Digest)
}
}
return nil
}
// diffIDResult contains both a digest value and an error from diffIDComputationGoroutine.
// We could also send the error through the pipeReader, but this more cleanly separates the copying of the layer and the DiffID computation.
type diffIDResult struct {
digest digest.Digest
err error
}
// copyLayer copies a layer with srcInfo (with known Digest and possibly known Size) in src to dest, perhaps compressing it if canCompress,
// and returns a complete blobInfo of the copied layer, and a value for LayerDiffIDs if diffIDIsNeeded
func (ic *imageCopier) copyLayer(srcInfo types.BlobInfo) (types.BlobInfo, digest.Digest, error) {
// Check if we already have a blob with this digest
haveBlob, extantBlobSize, err := ic.dest.HasBlob(srcInfo)
if err != nil {
return types.BlobInfo{}, fmt.Errorf("Error reading blob %s: %v", srcInfo.Digest, err)
return types.BlobInfo{}, "", errors.Wrapf(err, "Error checking for blob %s at destination", srcInfo.Digest)
}
// If we already have a cached diffID for this blob, we don't need to compute it
diffIDIsNeeded := ic.diffIDsAreNeeded && (ic.cachedDiffIDs[srcInfo.Digest] == "")
// If we already have the blob, and we don't need to recompute the diffID, then we might be able to avoid reading it again
if haveBlob && !diffIDIsNeeded {
// Check the blob sizes match, if we were given a size this time
if srcInfo.Size != -1 && srcInfo.Size != extantBlobSize {
return types.BlobInfo{}, "", errors.Errorf("Error: blob %s is already present, but with size %d instead of %d", srcInfo.Digest, extantBlobSize, srcInfo.Size)
}
srcInfo.Size = extantBlobSize
// Tell the image destination that this blob's delta is being applied again. For some image destinations, this can be faster than using GetBlob/PutBlob
blobinfo, err := ic.dest.ReapplyBlob(srcInfo)
if err != nil {
return types.BlobInfo{}, "", errors.Wrapf(err, "Error reapplying blob %s at destination", srcInfo.Digest)
}
fmt.Fprintf(ic.reportWriter, "Skipping fetch of repeat blob %s\n", srcInfo.Digest)
return blobinfo, ic.cachedDiffIDs[srcInfo.Digest], err
}
// Fallback: copy the layer, computing the diffID if we need to do so
fmt.Fprintf(ic.reportWriter, "Copying blob %s\n", srcInfo.Digest)
srcStream, srcBlobSize, err := ic.rawSource.GetBlob(srcInfo)
if err != nil {
return types.BlobInfo{}, "", errors.Wrapf(err, "Error reading blob %s", srcInfo.Digest)
}
defer srcStream.Close()
blobInfo, diffIDChan, err := ic.copyLayerFromStream(srcStream, types.BlobInfo{Digest: srcInfo.Digest, Size: srcBlobSize},
diffIDIsNeeded)
if err != nil {
return types.BlobInfo{}, "", err
}
var diffIDResult diffIDResult // = {digest:""}
if diffIDIsNeeded {
diffIDResult = <-diffIDChan
if diffIDResult.err != nil {
return types.BlobInfo{}, "", errors.Wrap(diffIDResult.err, "Error computing layer DiffID")
}
logrus.Debugf("Computed DiffID %s for layer %s", diffIDResult.digest, srcInfo.Digest)
ic.cachedDiffIDs[srcInfo.Digest] = diffIDResult.digest
}
return blobInfo, diffIDResult.digest, nil
}
// copyLayerFromStream is an implementation detail of copyLayer; mostly providing a separate “defer” scope.
// it copies a blob with srcInfo (with known Digest and possibly known Size) from srcStream to dest,
// perhaps compressing the stream if canCompress,
// and returns a complete blobInfo of the copied blob and perhaps a <-chan diffIDResult if diffIDIsNeeded, to be read by the caller.
func (ic *imageCopier) copyLayerFromStream(srcStream io.Reader, srcInfo types.BlobInfo,
diffIDIsNeeded bool) (types.BlobInfo, <-chan diffIDResult, error) {
var getDiffIDRecorder func(compression.DecompressorFunc) io.Writer // = nil
var diffIDChan chan diffIDResult
err := errors.New("Internal error: unexpected panic in copyLayer") // For pipeWriter.CloseWithError below
if diffIDIsNeeded {
diffIDChan = make(chan diffIDResult, 1) // Buffered, so that sending a value after this or our caller has failed and exited does not block.
pipeReader, pipeWriter := io.Pipe()
defer func() { // Note that this is not the same as {defer pipeWriter.CloseWithError(err)}; we need err to be evaluated lazily.
pipeWriter.CloseWithError(err) // CloseWithError(nil) is equivalent to Close()
}()
getDiffIDRecorder = func(decompressor compression.DecompressorFunc) io.Writer {
// If this fails, e.g. because we have exited and due to pipeWriter.CloseWithError() above further
// reading from the pipe has failed, we don't really care.
// We only read from diffIDChan if the rest of the flow has succeeded, and when we do read from it,
// the return value includes an error indication, which we do check.
//
// If this is never called, pipeReader will not be used anywhere, but pipeWriter will only be
// closed above, so we are happy enough with both pipeReader and pipeWriter to just get collected by GC.
go diffIDComputationGoroutine(diffIDChan, pipeReader, decompressor) // Closes pipeReader
return pipeWriter
}
}
blobInfo, err := ic.copyBlobFromStream(srcStream, srcInfo, getDiffIDRecorder, ic.canModifyManifest) // Sets err to nil on success
return blobInfo, diffIDChan, err
// We need the defer … pipeWriter.CloseWithError() to happen HERE so that the caller can block on reading from diffIDChan
}
// diffIDComputationGoroutine reads all input from layerStream, uncompresses using decompressor if necessary, and sends its digest, and status, if any, to dest.
func diffIDComputationGoroutine(dest chan<- diffIDResult, layerStream io.ReadCloser, decompressor compression.DecompressorFunc) {
result := diffIDResult{
digest: "",
err: errors.New("Internal error: unexpected panic in diffIDComputationGoroutine"),
}
defer func() { dest <- result }()
defer layerStream.Close() // We do not care to bother the other end of the pipe with other failures; we send them to dest instead.
result.digest, result.err = computeDiffID(layerStream, decompressor)
}
// computeDiffID reads all input from layerStream, uncompresses it using decompressor if necessary, and returns its digest.
func computeDiffID(stream io.Reader, decompressor compression.DecompressorFunc) (digest.Digest, error) {
if decompressor != nil {
s, err := decompressor(stream)
if err != nil {
return "", err
}
stream = s
}
return digest.Canonical.FromReader(stream)
}
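To make the DiffID flow concrete: a standalone sketch under the assumption of a gzip-compressed layer tarball on disk (the file name is hypothetical; in the real flow the decompressor is whatever compression.DetectCompression identified):

package main

import (
    "compress/gzip"
    "fmt"
    "os"

    "github.com/opencontainers/go-digest"
)

func main() {
    f, err := os.Open("layer.tar.gz") // assumed input file
    if err != nil {
        panic(err)
    }
    defer f.Close()

    zr, err := gzip.NewReader(f) // stands in for the detected decompressor
    if err != nil {
        panic(err)
    }
    defer zr.Close()

    // Same call as computeDiffID: digest of the *uncompressed* bytes.
    diffID, err := digest.Canonical.FromReader(zr)
    if err != nil {
        panic(err)
    }
    fmt.Println("DiffID:", diffID)
}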
// copyBlobFromStream copies a blob with srcInfo (with known Digest and possibly known Size) from srcStream to dest,
// perhaps sending a copy to an io.Writer if getOriginalLayerCopyWriter != nil,
// perhaps compressing it if canCompress,
// and returns a complete blobInfo of the copied blob.
func (ic *imageCopier) copyBlobFromStream(srcStream io.Reader, srcInfo types.BlobInfo,
getOriginalLayerCopyWriter func(decompressor compression.DecompressorFunc) io.Writer,
canCompress bool) (types.BlobInfo, error) {
// The copying happens through a pipeline of connected io.Readers.
// === Input: srcStream
// === Process input through digestingReader to validate against the expected digest.
// Be paranoid; in case PutBlob somehow managed to ignore an error from digestingReader,
// use a separate validation failure indicator.
// Note that we don't use a stronger "validationSucceeded" indicator, because
@@ -256,29 +477,40 @@ func copyBlob(dest types.ImageDestination, src types.ImageSource, srcInfo types.
// read stream to the end, and validation does not happen.
digestingReader, err := newDigestingReader(srcStream, srcInfo.Digest)
if err != nil {
return types.BlobInfo{}, fmt.Errorf("Error preparing to verify blob %s: %v", srcInfo.Digest, err)
return types.BlobInfo{}, errors.Wrapf(err, "Error preparing to verify blob %s", srcInfo.Digest)
}
var destStream io.Reader = digestingReader
isCompressed, destStream, err := isStreamCompressed(destStream) // We could skip this in some cases, but let's keep the code path uniform
if err != nil {
return types.BlobInfo{}, fmt.Errorf("Error reading blob %s: %v", srcInfo.Digest, err)
}
// === Detect compression of the input stream.
// This requires us to “peek ahead” into the stream to read the initial part, which requires us to chain through another io.Reader returned by DetectCompression.
decompressor, destStream, err := compression.DetectCompression(destStream) // We could skip this in some cases, but let's keep the code path uniform
if err != nil {
return types.BlobInfo{}, errors.Wrapf(err, "Error reading blob %s", srcInfo.Digest)
}
isCompressed := decompressor != nil
// === Report progress using a pb.Reader.
bar := pb.New(int(srcInfo.Size)).SetUnits(pb.U_BYTES)
bar.Output = ic.reportWriter
bar.SetMaxWidth(80)
bar.ShowTimeLeft = false
bar.ShowPercent = false
bar.Start()
destStream = bar.NewProxyReader(destStream)
defer fmt.Fprint(ic.reportWriter, "\n")
// === Send a copy of the original, uncompressed, stream, to a separate path if necessary.
var originalLayerReader io.Reader // DO NOT USE this other than to drain the input if no other consumer in the pipeline has done so.
if getOriginalLayerCopyWriter != nil {
destStream = io.TeeReader(destStream, getOriginalLayerCopyWriter(decompressor))
originalLayerReader = destStream
}
// === Compress the layer if it is uncompressed and compression is desired
var inputInfo types.BlobInfo
if !canCompress || isCompressed || !ic.dest.ShouldCompressLayers() {
logrus.Debugf("Using original blob without modification")
inputInfo = srcInfo
} else {
logrus.Debugf("Compressing blob on the fly")
pipeReader, pipeWriter := io.Pipe()
@@ -293,51 +525,42 @@ func copyBlob(dest types.ImageDestination, src types.ImageSource, srcInfo types.
inputInfo.Size = -1
}
// === Report progress using the ic.progress channel, if required.
if ic.progress != nil && ic.progressInterval > 0 {
destStream = &progressReader{
source: destStream,
channel: ic.progress,
interval: ic.progressInterval,
artifact: srcInfo,
lastTime: time.Now(),
}
}
// === Finally, send the layer stream to dest.
uploadedInfo, err := ic.dest.PutBlob(destStream, inputInfo)
if err != nil {
return types.BlobInfo{}, errors.Wrap(err, "Error writing blob")
}
// This is fairly horrible: the writer from getOriginalLayerCopyWriter wants to consume
// all of the input (to compute DiffIDs), even if dest.PutBlob does not need it.
// So, read everything from originalLayerReader, which will cause the rest to be
// sent there if we are not already at EOF.
if getOriginalLayerCopyWriter != nil {
logrus.Debugf("Consuming rest of the original blob to satisfy getOriginalLayerCopyWriter")
_, err := io.Copy(ioutil.Discard, originalLayerReader)
if err != nil {
return types.BlobInfo{}, errors.Wrapf(err, "Error reading input blob %s", srcInfo.Digest)
}
}
if digestingReader.validationFailed { // Coverage: This should never happen.
return types.BlobInfo{}, errors.Errorf("Internal error writing blob %s, digest verification failed but was ignored", srcInfo.Digest)
}
if inputInfo.Digest != "" && uploadedInfo.Digest != inputInfo.Digest {
return types.BlobInfo{}, errors.Errorf("Internal error writing blob %s, blob with digest %s saved with digest %s", srcInfo.Digest, inputInfo.Digest, uploadedInfo.Digest)
}
return uploadedInfo, nil
}
// compressGoroutine reads all input from src and writes its compressed equivalent to dest.
@@ -352,3 +575,41 @@ func compressGoroutine(dest *io.PipeWriter, src io.Reader) {
_, err = io.Copy(zipper, src) // Sets err to nil, i.e. causes dest.Close()
}
// determineManifestConversion updates manifestUpdates to convert manifest to a supported MIME type, if necessary and canModifyManifest.
// Note that the conversion will only happen later, through src.UpdatedImage
func determineManifestConversion(manifestUpdates *types.ManifestUpdateOptions, src types.Image, destSupportedManifestMIMETypes []string, canModifyManifest bool) error {
if len(destSupportedManifestMIMETypes) == 0 {
return nil // Anything goes
}
supportedByDest := map[string]struct{}{}
for _, t := range destSupportedManifestMIMETypes {
supportedByDest[t] = struct{}{}
}
_, srcType, err := src.Manifest()
if err != nil { // This should have been cached?!
return errors.Wrap(err, "Error reading manifest")
}
if _, ok := supportedByDest[srcType]; ok {
logrus.Debugf("Manifest MIME type %s is declared supported by the destination", srcType)
return nil
}
// OK, we should convert the manifest.
if !canModifyManifest {
logrus.Debugf("Manifest MIME type %s is not supported by the destination, but we can't modify the manifest, hoping for the best...")
return nil // Take our chances - FIXME? Or should we fail without trying?
}
var chosenType = destSupportedManifestMIMETypes[0] // This one is known to be supported.
for _, t := range preferredManifestMIMETypes {
if _, ok := supportedByDest[t]; ok {
chosenType = t
break
}
}
logrus.Debugf("Will convert manifest from MIME type %s to %s", srcType, chosenType)
manifestUpdates.ManifestMIMEType = chosenType
return nil
}
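The selection above reduces to roughly this standalone sketch; the helper name is hypothetical, and the caller is assumed to have checked that destSupported is non-empty, as determineManifestConversion does:

// chooseType returns the first preferred MIME type the destination supports,
// falling back to the destination's first supported type.
func chooseType(destSupported, preferred []string) string {
    supported := map[string]struct{}{}
    for _, t := range destSupported {
        supported[t] = struct{}{}
    }
    chosen := destSupported[0] // known to be supported
    for _, t := range preferred {
        if _, ok := supported[t]; ok {
            chosen = t
            break
        }
    }
    return chosen
}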

View File

@@ -0,0 +1,28 @@
package copy
import (
"io"
"time"
"github.com/containers/image/types"
)
// progressReader is a reader that reports its progress on an interval.
type progressReader struct {
source io.Reader
channel chan types.ProgressProperties
interval time.Duration
artifact types.BlobInfo
lastTime time.Time
offset uint64
}
func (r *progressReader) Read(p []byte) (int, error) {
n, err := r.source.Read(p)
r.offset += uint64(n)
if time.Since(r.lastTime) > r.interval {
r.channel <- types.ProgressProperties{Artifact: r.artifact, Offset: r.offset}
r.lastTime = time.Now()
}
return n, err
}
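Within this package, wrapping a blob stream would look roughly like the sketch below; the helper and the half-second interval are illustrative:

// wrapWithProgress reports read progress for blobStream to ch,
// at most once per half second.
func wrapWithProgress(blobStream io.Reader, info types.BlobInfo, ch chan types.ProgressProperties) io.Reader {
    return &progressReader{
        source:   blobStream,
        channel:  ch,
        interval: 500 * time.Millisecond,
        artifact: info,
        lastTime: time.Now(),
    }
}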

View File

@@ -1,14 +1,13 @@
package directory
import (
"io"
"io/ioutil"
"os"
"github.com/containers/image/types"
"github.com/opencontainers/go-digest"
"github.com/pkg/errors"
)
type dirImageDestination struct {
@@ -27,7 +26,8 @@ func (d *dirImageDestination) Reference() types.ImageReference {
}
// Close removes resources associated with an initialized ImageDestination, if any.
func (d *dirImageDestination) Close() error {
return nil
}
func (d *dirImageDestination) SupportedManifestMIMETypes() []string {
@@ -45,6 +45,12 @@ func (d *dirImageDestination) ShouldCompressLayers() bool {
return false
}
// AcceptsForeignLayerURLs returns false iff foreign layers in manifest should be actually
// uploaded to the image destination, true otherwise.
func (d *dirImageDestination) AcceptsForeignLayerURLs() bool {
return false
}
// PutBlob writes contents of stream and returns data representing the result (with all data filled in).
// inputInfo.Digest can be optionally provided if known; it is not mandatory for the implementation to verify it.
// inputInfo.Size is the expected length of stream, if known.
@@ -64,16 +70,16 @@ func (d *dirImageDestination) PutBlob(stream io.Reader, inputInfo types.BlobInfo
}
}()
digester := digest.Canonical.Digester()
tee := io.TeeReader(stream, digester.Hash())
size, err := io.Copy(blobFile, tee)
if err != nil {
return types.BlobInfo{}, err
}
computedDigest := digester.Digest()
if inputInfo.Size != -1 && size != inputInfo.Size {
return types.BlobInfo{}, errors.Errorf("Size mismatch when copying %s, expected %d, got %d", computedDigest, inputInfo.Size, size)
}
if err := blobFile.Sync(); err != nil {
return types.BlobInfo{}, err
@@ -86,7 +92,30 @@ func (d *dirImageDestination) PutBlob(stream io.Reader, inputInfo types.BlobInfo
return types.BlobInfo{}, err
}
succeeded = true
return types.BlobInfo{Digest: "sha256:" + computedDigest, Size: size}, nil
return types.BlobInfo{Digest: computedDigest, Size: size}, nil
}
// HasBlob returns true iff the image destination already contains a blob with the matching digest which can be reapplied using ReapplyBlob.
// Unlike PutBlob, the digest can not be empty. If HasBlob returns true, the size of the blob must also be returned.
// If the destination does not contain the blob, or it is unknown, HasBlob ordinarily returns (false, -1, nil);
// it returns a non-nil error only on an unexpected failure.
func (d *dirImageDestination) HasBlob(info types.BlobInfo) (bool, int64, error) {
if info.Digest == "" {
return false, -1, errors.Errorf(`Can not check for a blob with unknown digest`)
}
blobPath := d.ref.layerPath(info.Digest)
finfo, err := os.Stat(blobPath)
if err != nil && os.IsNotExist(err) {
return false, -1, nil
}
if err != nil {
return false, -1, err
}
return true, finfo.Size(), nil
}
func (d *dirImageDestination) ReapplyBlob(info types.BlobInfo) (types.BlobInfo, error) {
return info, nil
}
func (d *dirImageDestination) PutManifest(manifest []byte) error {

View File

@@ -5,7 +5,10 @@ import (
"io/ioutil"
"os"
"github.com/containers/image/manifest"
"github.com/containers/image/types"
"github.com/opencontainers/go-digest"
"github.com/pkg/errors"
)
type dirImageSource struct {
@@ -25,21 +28,27 @@ func (s *dirImageSource) Reference() types.ImageReference {
}
// Close removes resources associated with an initialized ImageSource, if any.
func (s *dirImageSource) Close() error {
return nil
}
// GetManifest returns the image's manifest along with its MIME type (which may be empty when it can't be determined but the manifest is available).
// It may use a remote (= slow) service.
func (s *dirImageSource) GetManifest() ([]byte, string, error) {
m, err := ioutil.ReadFile(s.ref.manifestPath())
if err != nil {
return nil, "", err
}
return m, "", err
return m, manifest.GuessMIMEType(m), err
}
func (s *dirImageSource) GetTargetManifest(digest digest.Digest) ([]byte, string, error) {
return nil, "", errors.Errorf(`Getting target manifest not supported by "dir:"`)
}
// GetBlob returns a stream for the specified blob, and the blob's size (or -1 if unknown).
func (s *dirImageSource) GetBlob(info types.BlobInfo) (io.ReadCloser, int64, error) {
r, err := os.Open(s.ref.layerPath(info.Digest))
if err != nil {
return nil, 0, nil
}

View File

@@ -1,17 +1,24 @@
package directory
import (
"fmt"
"path/filepath"
"strings"
"github.com/pkg/errors"
"github.com/containers/image/directory/explicitfilepath"
"github.com/containers/image/docker/reference"
"github.com/containers/image/image"
"github.com/containers/image/transports"
"github.com/containers/image/types"
"github.com/opencontainers/go-digest"
)
func init() {
transports.Register(Transport)
}
// Transport is an ImageTransport for directory paths.
var Transport = dirTransport{}
@@ -32,7 +39,7 @@ func (t dirTransport) ParseReference(reference string) (types.ImageReference, er
// scope passed to this function will not be "", that value is always allowed.
func (t dirTransport) ValidatePolicyConfigurationScope(scope string) error {
if !strings.HasPrefix(scope, "/") {
return fmt.Errorf("Invalid scope %s: Must be an absolute path", scope)
return errors.Errorf("Invalid scope %s: Must be an absolute path", scope)
}
// Refuse also "/", otherwise "/" and "" would have the same semantics,
// and "" could be unexpectedly shadowed by the "/" entry.
@@ -41,7 +48,7 @@ func (t dirTransport) ValidatePolicyConfigurationScope(scope string) error {
}
cleaned := filepath.Clean(scope)
if cleaned != scope {
return errors.Errorf(`Invalid scope %s: Uses non-canonical format, perhaps try %s`, scope, cleaned)
}
return nil
}
@@ -127,11 +134,13 @@ func (ref dirReference) PolicyConfigurationNamespaces() []string {
return res
}
// NewImage returns a types.Image for this reference, possibly specialized for this ImageTransport.
// The caller must call .Close() on the returned Image.
// NOTE: If any kind of signature verification should happen, build an UnparsedImage from the value returned by NewImageSource,
// verify that UnparsedImage, and convert it into a real Image via image.FromUnparsedImage.
func (ref dirReference) NewImage(ctx *types.SystemContext) (types.Image, error) {
src := newImageSource(ref)
return image.FromSource(src)
}
// NewImageSource returns a types.ImageSource for this reference,
@@ -150,7 +159,7 @@ func (ref dirReference) NewImageDestination(ctx *types.SystemContext) (types.Ima
// DeleteImage deletes the named image from the registry, if supported.
func (ref dirReference) DeleteImage(ctx *types.SystemContext) error {
return fmt.Errorf("Deleting images not implemented for dir: images")
return errors.Errorf("Deleting images not implemented for dir: images")
}
// manifestPath returns a path for the manifest within a directory using our conventions.
@@ -159,9 +168,9 @@ func (ref dirReference) manifestPath() string {
}
// layerPath returns a path for a layer tarball within a directory using our conventions.
func (ref dirReference) layerPath(digest digest.Digest) string {
// FIXME: Should we keep the digest identification?
return filepath.Join(ref.path, digest.Hex()+".tar")
}
// signaturePath returns a path for a signature within a directory using our conventions.

View File

@@ -1,9 +1,10 @@
package explicitfilepath
import (
"os"
"path/filepath"
"github.com/pkg/errors"
)
// ResolvePathToFullyExplicit returns the input path converted to an absolute, no-symlinks, cleaned up path.
@@ -25,14 +26,14 @@ func ResolvePathToFullyExplicit(path string) (string, error) {
// This can still happen if there is a filesystem race condition, causing the Lstat() above to fail but the later resolution to succeed.
// We do not care to promise anything if such filesystem race conditions can happen, but we definitely don't want to return "."/".." components
// in the resulting path, and especially not at the end.
return "", fmt.Errorf("Unexpectedly missing special filename component in %s", path)
return "", errors.Errorf("Unexpectedly missing special filename component in %s", path)
}
resolvedPath := filepath.Join(resolvedParent, file)
// As a sanity check, ensure that there are no "." or ".." components.
cleanedResolvedPath := filepath.Clean(resolvedPath)
if cleanedResolvedPath != resolvedPath {
// Coverage: This should never happen.
return "", fmt.Errorf("Internal inconsistency: Path %s resolved to %s still cleaned up to %s", path, resolvedPath, cleanedResolvedPath)
return "", errors.Errorf("Internal inconsistency: Path %s resolved to %s still cleaned up to %s", path, resolvedPath, cleanedResolvedPath)
}
return resolvedPath, nil
default: // err != nil, unrecognized

View File

@@ -0,0 +1,57 @@
package archive
import (
"io"
"os"
"github.com/containers/image/docker/tarfile"
"github.com/containers/image/types"
"github.com/pkg/errors"
)
type archiveImageDestination struct {
*tarfile.Destination // Implements most of types.ImageDestination
ref archiveReference
writer io.Closer
}
func newImageDestination(ctx *types.SystemContext, ref archiveReference) (types.ImageDestination, error) {
if ref.destinationRef == nil {
return nil, errors.Errorf("docker-archive: destination reference not supplied (must be of form <path>:<reference:tag>)")
}
fh, err := os.OpenFile(ref.path, os.O_WRONLY|os.O_EXCL|os.O_CREATE, 0644)
if err != nil {
// FIXME: It should be possible to modify archives, but the only really
// sane way of doing it is to create a copy of the image, modify
// it and then do a rename(2).
if os.IsExist(err) {
err = errors.New("docker-archive doesn't support modifying existing images")
}
return nil, err
}
return &archiveImageDestination{
Destination: tarfile.NewDestination(fh, ref.destinationRef),
ref: ref,
writer: fh,
}, nil
}
// Reference returns the reference used to set up this destination. Note that this should directly correspond to user's intent,
// e.g. it should use the public hostname instead of the result of resolving CNAMEs or following redirects.
func (d *archiveImageDestination) Reference() types.ImageReference {
return d.ref
}
// Close removes resources associated with an initialized ImageDestination, if any.
func (d *archiveImageDestination) Close() error {
return d.writer.Close()
}
// Commit marks the process of storing the image as successful and asks for the image to be persisted.
// WARNING: This does not have any transactional semantics:
// - Uploaded data MAY be visible to others before Commit() is called
// - Uploaded data MAY be removed or MAY remain around if Close() is called without Commit() (i.e. rollback is allowed but not guaranteed)
func (d *archiveImageDestination) Commit() error {
return d.Destination.Commit()
}

View File

@@ -0,0 +1,36 @@
package archive
import (
"github.com/Sirupsen/logrus"
"github.com/containers/image/docker/tarfile"
"github.com/containers/image/types"
)
type archiveImageSource struct {
*tarfile.Source // Implements most of types.ImageSource
ref archiveReference
}
// newImageSource returns a types.ImageSource for the specified image reference.
// The caller must call .Close() on the returned ImageSource.
func newImageSource(ctx *types.SystemContext, ref archiveReference) types.ImageSource {
if ref.destinationRef != nil {
logrus.Warnf("docker-archive: references are not supported for sources (ignoring)")
}
src := tarfile.NewSource(ref.path)
return &archiveImageSource{
Source: src,
ref: ref,
}
}
// Reference returns the reference used to set up this source, _as specified by the user_
// (not as the image itself, or its underlying storage, claims). This can be used e.g. to determine which public keys are trusted for this image.
func (s *archiveImageSource) Reference() types.ImageReference {
return s.ref
}
// Close removes resources associated with an initialized ImageSource, if any.
func (s *archiveImageSource) Close() error {
return nil
}

View File

@@ -0,0 +1,155 @@
package archive
import (
"fmt"
"strings"
"github.com/containers/image/docker/reference"
ctrImage "github.com/containers/image/image"
"github.com/containers/image/transports"
"github.com/containers/image/types"
"github.com/pkg/errors"
)
func init() {
transports.Register(Transport)
}
// Transport is an ImageTransport for local Docker archives.
var Transport = archiveTransport{}
type archiveTransport struct{}
func (t archiveTransport) Name() string {
return "docker-archive"
}
// ParseReference converts a string, which should not start with the ImageTransport.Name prefix, into an ImageReference.
func (t archiveTransport) ParseReference(reference string) (types.ImageReference, error) {
return ParseReference(reference)
}
// ValidatePolicyConfigurationScope checks that scope is a valid name for a signature.PolicyTransportScopes keys
// (i.e. a valid PolicyConfigurationIdentity() or PolicyConfigurationNamespaces() return value).
// It is acceptable to allow an invalid value which will never be matched, it can "only" cause user confusion.
// scope passed to this function will not be "", that value is always allowed.
func (t archiveTransport) ValidatePolicyConfigurationScope(scope string) error {
// See the explanation in archiveReference.PolicyConfigurationIdentity.
return errors.New(`docker-archive: does not support any scopes except the default "" one`)
}
// archiveReference is an ImageReference for Docker images.
type archiveReference struct {
destinationRef reference.NamedTagged // only used for destinations
path string
}
// ParseReference converts a string, which should not start with the ImageTransport.Name prefix, into a Docker ImageReference.
func ParseReference(refString string) (types.ImageReference, error) {
if refString == "" {
return nil, errors.Errorf("docker-archive reference %s isn't of the form <path>[:<reference>]", refString)
}
parts := strings.SplitN(refString, ":", 2)
path := parts[0]
var destinationRef reference.NamedTagged
// A :tag was specified, which is only necessary for destinations.
if len(parts) == 2 {
ref, err := reference.ParseNormalizedNamed(parts[1])
if err != nil {
return nil, errors.Wrapf(err, "docker-archive parsing reference")
}
ref = reference.TagNameOnly(ref)
if _, isDigest := ref.(reference.Canonical); isDigest {
return nil, errors.Errorf("docker-archive doesn't support digest references: %s", refString)
}
refTagged, isTagged := ref.(reference.NamedTagged)
if !isTagged {
// Really shouldn't be hit...
return nil, errors.Errorf("internal error: reference is not tagged even after reference.TagNameOnly: %s", refString)
}
destinationRef = refTagged
}
return archiveReference{
destinationRef: destinationRef,
path: path,
}, nil
}
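Some illustrative parse outcomes (the paths and tags are made up):

// Hypothetical inputs to the docker-archive transport:
ref, err := archive.ParseReference("/tmp/img.tar:busybox:latest")
if err != nil {
    log.Fatal(err)
}
// ref.StringWithinTransport() == "/tmp/img.tar:docker.io/library/busybox:latest"
// A bare "/tmp/img.tar" also parses, but is then usable only as a source.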
func (ref archiveReference) Transport() types.ImageTransport {
return Transport
}
// StringWithinTransport returns a string representation of the reference, which MUST be such that
// reference.Transport().ParseReference(reference.StringWithinTransport()) returns an equivalent reference.
// NOTE: The returned string is not promised to be equal to the original input to ParseReference;
// e.g. default attribute values omitted by the user may be filled in in the return value, or vice versa.
// WARNING: Do not use the return value in the UI to describe an image, it does not contain the Transport().Name() prefix.
func (ref archiveReference) StringWithinTransport() string {
if ref.destinationRef == nil {
return ref.path
}
return fmt.Sprintf("%s:%s", ref.path, ref.destinationRef.String())
}
// DockerReference returns a Docker reference associated with this reference
// (fully explicit, i.e. !reference.IsNameOnly, but reflecting user intent,
// not e.g. after redirect or alias processing), or nil if unknown/not applicable.
func (ref archiveReference) DockerReference() reference.Named {
return ref.destinationRef
}
// PolicyConfigurationIdentity returns a string representation of the reference, suitable for policy lookup.
// This MUST reflect user intent, not e.g. after processing of third-party redirects or aliases;
// The value SHOULD be fully explicit about its semantics, with no hidden defaults, AND canonical
// (i.e. various references with exactly the same semantics should return the same configuration identity)
// It is fine for the return value to be equal to StringWithinTransport(), and it is desirable but
// not required/guaranteed that it will be a valid input to Transport().ParseReference().
// Returns "" if configuration identities for these references are not supported.
func (ref archiveReference) PolicyConfigurationIdentity() string {
// Punt, the justification is similar to dockerReference.PolicyConfigurationIdentity.
return ""
}
// PolicyConfigurationNamespaces returns a list of other policy configuration namespaces to search
// for if explicit configuration for PolicyConfigurationIdentity() is not set. The list will be processed
// in order, terminating on first match, and an implicit "" is always checked at the end.
// It is STRONGLY recommended for the first element, if any, to be a prefix of PolicyConfigurationIdentity(),
// and each following element to be a prefix of the element preceding it.
func (ref archiveReference) PolicyConfigurationNamespaces() []string {
// TODO
return []string{}
}
// NewImage returns a types.Image for this reference, possibly specialized for this ImageTransport.
// The caller must call .Close() on the returned Image.
// NOTE: If any kind of signature verification should happen, build an UnparsedImage from the value returned by NewImageSource,
// verify that UnparsedImage, and convert it into a real Image via image.FromUnparsedImage.
func (ref archiveReference) NewImage(ctx *types.SystemContext) (types.Image, error) {
src := newImageSource(ctx, ref)
return ctrImage.FromSource(src)
}
// NewImageSource returns a types.ImageSource for this reference,
// asking the backend to use a manifest from requestedManifestMIMETypes if possible.
// nil requestedManifestMIMETypes means manifest.DefaultRequestedManifestMIMETypes.
// The caller must call .Close() on the returned ImageSource.
func (ref archiveReference) NewImageSource(ctx *types.SystemContext, requestedManifestMIMETypes []string) (types.ImageSource, error) {
return newImageSource(ctx, ref), nil
}
// NewImageDestination returns a types.ImageDestination for this reference.
// The caller must call .Close() on the returned ImageDestination.
func (ref archiveReference) NewImageDestination(ctx *types.SystemContext) (types.ImageDestination, error) {
return newImageDestination(ctx, ref)
}
// DeleteImage deletes the named image from the registry, if supported.
func (ref archiveReference) DeleteImage(ctx *types.SystemContext) error {
// Not really supported, for safety reasons.
return errors.New("Deleting images not implemented for docker-archive: images")
}

View File

@@ -0,0 +1,123 @@
package daemon
import (
"io"
"github.com/Sirupsen/logrus"
"github.com/containers/image/docker/reference"
"github.com/containers/image/docker/tarfile"
"github.com/containers/image/types"
"github.com/docker/docker/client"
"github.com/pkg/errors"
"golang.org/x/net/context"
)
type daemonImageDestination struct {
ref daemonReference
*tarfile.Destination // Implements most of types.ImageDestination
// For talking to imageLoadGoroutine
goroutineCancel context.CancelFunc
statusChannel <-chan error
writer *io.PipeWriter
// Other state
committed bool // writer has been closed
}
// newImageDestination returns a types.ImageDestination for the specified image reference.
func newImageDestination(systemCtx *types.SystemContext, ref daemonReference) (types.ImageDestination, error) {
if ref.ref == nil {
return nil, errors.Errorf("Invalid destination docker-daemon:%s: a destination must be a name:tag", ref.StringWithinTransport())
}
namedTaggedRef, ok := ref.ref.(reference.NamedTagged)
if !ok {
return nil, errors.Errorf("Invalid destination docker-daemon:%s: a destination must be a name:tag", ref.StringWithinTransport())
}
c, err := client.NewClient(client.DefaultDockerHost, "1.22", nil, nil) // FIXME: overridable host
if err != nil {
return nil, errors.Wrap(err, "Error initializing docker engine client")
}
reader, writer := io.Pipe()
// Commit() may never be called, so we may never read from this channel; so, make this buffered to allow imageLoadGoroutine to write status and terminate even if we never read it.
statusChannel := make(chan error, 1)
ctx, goroutineCancel := context.WithCancel(context.Background())
go imageLoadGoroutine(ctx, c, reader, statusChannel)
return &daemonImageDestination{
ref: ref,
Destination: tarfile.NewDestination(writer, namedTaggedRef),
goroutineCancel: goroutineCancel,
statusChannel: statusChannel,
writer: writer,
committed: false,
}, nil
}
// imageLoadGoroutine accepts tar stream on reader, sends it to c, and reports error or success by writing to statusChannel
func imageLoadGoroutine(ctx context.Context, c *client.Client, reader *io.PipeReader, statusChannel chan<- error) {
err := errors.New("Internal error: unexpected panic in imageLoadGoroutine")
defer func() {
logrus.Debugf("docker-daemon: sending done, status %v", err)
statusChannel <- err
}()
defer func() {
if err == nil {
reader.Close()
} else {
reader.CloseWithError(err)
}
}()
resp, err := c.ImageLoad(ctx, reader, true)
if err != nil {
err = errors.Wrap(err, "Error saving image to docker engine")
return
}
defer resp.Body.Close()
}
// Close removes resources associated with an initialized ImageDestination, if any.
func (d *daemonImageDestination) Close() error {
if !d.committed {
logrus.Debugf("docker-daemon: Closing tar stream to abort loading")
// In principle, goroutineCancel() should abort the HTTP request and stop the process from continuing.
// In practice, though, various HTTP implementations used by client.Client.ImageLoad() (including
// https://github.com/golang/net/blob/master/context/ctxhttp/ctxhttp_pre17.go and the
// net/http version with native Context support in Go 1.7) do not always actually immediately cancel
// the operation: they may process the HTTP request, or a part of it, to completion in a goroutine, and
// return early if the context is canceled without terminating the goroutine at all.
// So we need this CloseWithError to terminate sending the HTTP request Body
// immediately, and hopefully, through terminating the sending which uses "Transfer-Encoding: chunked" without sending
// the terminating zero-length chunk, prevent the docker daemon from processing the tar stream at all.
// Whether that works or not, closing the PipeWriter seems desirable in any case.
d.writer.CloseWithError(errors.New("Aborting upload, daemonImageDestination closed without a previous .Commit()"))
}
d.goroutineCancel()
return nil
}
func (d *daemonImageDestination) Reference() types.ImageReference {
return d.ref
}
// Commit marks the process of storing the image as successful and asks for the image to be persisted.
// WARNING: This does not have any transactional semantics:
// - Uploaded data MAY be visible to others before Commit() is called
// - Uploaded data MAY be removed or MAY remain around if Close() is called without Commit() (i.e. rollback is allowed but not guaranteed)
func (d *daemonImageDestination) Commit() error {
logrus.Debugf("docker-daemon: Closing tar stream")
if err := d.Destination.Commit(); err != nil {
return err
}
if err := d.writer.Close(); err != nil {
return err
}
d.committed = true // We may still fail, but we are done sending to imageLoadGoroutine.
logrus.Debugf("docker-daemon: Waiting for status")
err := <-d.statusChannel
return err
}

View File

@@ -0,0 +1,85 @@
package daemon
import (
"io"
"io/ioutil"
"os"
"github.com/containers/image/docker/tarfile"
"github.com/containers/image/types"
"github.com/docker/docker/client"
"github.com/pkg/errors"
"golang.org/x/net/context"
)
const temporaryDirectoryForBigFiles = "/var/tmp" // Do not use the system default of os.TempDir(), usually /tmp, because with systemd it could be a tmpfs.
type daemonImageSource struct {
ref daemonReference
*tarfile.Source // Implements most of types.ImageSource
tarCopyPath string
}
type layerInfo struct {
path string
size int64
}
// newImageSource returns a types.ImageSource for the specified image reference.
// The caller must call .Close() on the returned ImageSource.
//
// It would be great if we were able to stream the input tar as it is being
// sent; but Docker sends the top-level manifest, which determines which paths
// to look for, at the end, so we will need to seek back and re-read, several times.
// (We could, perhaps, expect an exact sequence, assume that the first plaintext file
// is the config, and that the following len(RootFS) files are the layers, but that feels
// way too brittle.)
func newImageSource(ctx *types.SystemContext, ref daemonReference) (types.ImageSource, error) {
c, err := client.NewClient(client.DefaultDockerHost, "1.22", nil, nil) // FIXME: overridable host
if err != nil {
return nil, errors.Wrap(err, "Error initializing docker engine client")
}
// Per NewReference(), ref.StringWithinTransport() is either an image ID (config digest), or a !reference.NameOnly() reference.
// Either way ImageSave should create a tarball with exactly one image.
inputStream, err := c.ImageSave(context.TODO(), []string{ref.StringWithinTransport()})
if err != nil {
return nil, errors.Wrap(err, "Error loading image from docker engine")
}
defer inputStream.Close()
// FIXME: use SystemContext here.
tarCopyFile, err := ioutil.TempFile(temporaryDirectoryForBigFiles, "docker-daemon-tar")
if err != nil {
return nil, err
}
defer tarCopyFile.Close()
succeeded := false
defer func() {
if !succeeded {
os.Remove(tarCopyFile.Name())
}
}()
if _, err := io.Copy(tarCopyFile, inputStream); err != nil {
return nil, err
}
succeeded = true
return &daemonImageSource{
ref: ref,
Source: tarfile.NewSource(tarCopyFile.Name()),
tarCopyPath: tarCopyFile.Name(),
}, nil
}
// Reference returns the reference used to set up this source, _as specified by the user_
// (not as the image itself, or its underlying storage, claims). This can be used e.g. to determine which public keys are trusted for this image.
func (s *daemonImageSource) Reference() types.ImageReference {
return s.ref
}
// Close removes resources associated with an initialized ImageSource, if any.
func (s *daemonImageSource) Close() error {
return os.Remove(s.tarCopyPath)
}

View File

@@ -0,0 +1,184 @@
package daemon
import (
"github.com/pkg/errors"
"github.com/containers/image/docker/reference"
"github.com/containers/image/image"
"github.com/containers/image/transports"
"github.com/containers/image/types"
"github.com/opencontainers/go-digest"
)
func init() {
transports.Register(Transport)
}
// Transport is an ImageTransport for images managed by a local Docker daemon.
var Transport = daemonTransport{}
type daemonTransport struct{}
// Name returns the name of the transport, which must be unique among other transports.
func (t daemonTransport) Name() string {
return "docker-daemon"
}
// ParseReference converts a string, which should not start with the ImageTransport.Name prefix, into an ImageReference.
func (t daemonTransport) ParseReference(reference string) (types.ImageReference, error) {
return ParseReference(reference)
}
// ValidatePolicyConfigurationScope checks that scope is a valid name for a signature.PolicyTransportScopes keys
// (i.e. a valid PolicyConfigurationIdentity() or PolicyConfigurationNamespaces() return value).
// It is acceptable to allow an invalid value which will never be matched, it can "only" cause user confusion.
// scope passed to this function will not be "", that value is always allowed.
func (t daemonTransport) ValidatePolicyConfigurationScope(scope string) error {
// See the explanation in daemonReference.PolicyConfigurationIdentity.
return errors.New(`docker-daemon: does not support any scopes except the default "" one`)
}
// daemonReference is an ImageReference for images managed by a local Docker daemon
// Exactly one of id and ref can be set.
// For daemonImageSource, both id and ref are acceptable, ref must not be a NameOnly (interpreted as all tags in that repository by the daemon)
// For daemonImageDestination, it must be a ref, which is NamedTagged.
// (We could, in principle, also allow storing images without tagging them, and the user would have to refer to them using the docker image ID = config digest.
// Using the config digest requires the caller to parse the manifest themselves, which is very cumbersome; so, for now, we don't bother.)
type daemonReference struct {
id digest.Digest
ref reference.Named // !reference.IsNameOnly
}
// ParseReference converts a string, which should not start with the ImageTransport.Name prefix, into an ImageReference.
func ParseReference(refString string) (types.ImageReference, error) {
// This is intended to be compatible with reference.ParseAnyReference, but more strict about refusing some of the ambiguous cases.
// In particular, this rejects unprefixed digest values (64 hex chars), and sha256 digest prefixes (sha256:fewer-than-64-hex-chars).
// digest:hexstring is structurally the same as a reponame:tag (meaning docker.io/library/reponame:tag).
// reference.ParseAnyReference interprets such strings as digests.
if dgst, err := digest.Parse(refString); err == nil {
// The daemon explicitly refuses to tag images with a reponame equal to digest.Canonical - but _only_ this digest name.
// Other digest references are ambiguous, so refuse them.
if dgst.Algorithm() != digest.Canonical {
return nil, errors.Errorf("Invalid docker-daemon: reference %s: only digest algorithm %s accepted", refString, digest.Canonical)
}
return NewReference(dgst, nil)
}
ref, err := reference.ParseNormalizedNamed(refString) // This also rejects unprefixed digest values
if err != nil {
return nil, err
}
if reference.FamiliarName(ref) == digest.Canonical.String() {
return nil, errors.Errorf("Invalid docker-daemon: reference %s: The %s repository name is reserved for (non-shortened) digest references", refString, digest.Canonical)
}
return NewReference("", ref)
}
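
For illustration, a minimal standalone sketch of the two parsing branches above (hypothetical input values; only the vendored go-digest package is assumed):

package main

import (
	"fmt"

	"github.com/opencontainers/go-digest"
)

func main() {
	// A full "sha256:<64 hex digits>" string parses as a digest and is
	// treated as a local image ID by the first branch of ParseReference.
	id := "sha256:e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"
	if d, err := digest.Parse(id); err == nil {
		fmt.Println("image ID:", d)
	}
	// An ordinary name:tag fails digest.Parse and falls through to
	// reference.ParseNormalizedNamed in the second branch.
	if _, err := digest.Parse("busybox:latest"); err != nil {
		fmt.Println("not a digest, parsed as a named reference instead")
	}
}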
// NewReference returns a docker-daemon reference for either the supplied image ID (config digest) or the supplied reference (which must satisfy !reference.IsNameOnly)
func NewReference(id digest.Digest, ref reference.Named) (types.ImageReference, error) {
if id != "" && ref != nil {
return nil, errors.New("docker-daemon: reference must not have an image ID and a reference string specified at the same time")
}
if ref != nil {
if reference.IsNameOnly(ref) {
return nil, errors.Errorf("docker-daemon: reference %s has neither a tag nor a digest", reference.FamiliarString(ref))
}
// A github.com/distribution/reference value can have a tag and a digest at the same time!
// Most versions of docker/reference do not handle that (ignoring the tag), so reject such input.
// This MAY be accepted in the future.
_, isTagged := ref.(reference.NamedTagged)
_, isDigested := ref.(reference.Canonical)
if isTagged && isDigested {
return nil, errors.Errorf("docker-daemon: references with both a tag and digest are currently not supported")
}
}
return daemonReference{
id: id,
ref: ref,
}, nil
}
func (ref daemonReference) Transport() types.ImageTransport {
return Transport
}
// StringWithinTransport returns a string representation of the reference, which MUST be such that
// reference.Transport().ParseReference(reference.StringWithinTransport()) returns an equivalent reference.
// NOTE: The returned string is not promised to be equal to the original input to ParseReference;
// e.g. default attribute values omitted by the user may be filled in in the return value, or vice versa.
// WARNING: Do not use the return value in the UI to describe an image, it does not contain the Transport().Name() prefix;
// instead, see transports.ImageName().
func (ref daemonReference) StringWithinTransport() string {
switch {
case ref.id != "":
return ref.id.String()
case ref.ref != nil:
return reference.FamiliarString(ref.ref)
default: // Coverage: Should never happen, NewReference above should refuse such values.
panic("Internal inconsistency: daemonReference has empty id and nil ref")
}
}
// DockerReference returns a Docker reference associated with this reference
// (fully explicit, i.e. !reference.IsNameOnly, but reflecting user intent,
// not e.g. after redirect or alias processing), or nil if unknown/not applicable.
func (ref daemonReference) DockerReference() reference.Named {
return ref.ref // May be nil
}
// PolicyConfigurationIdentity returns a string representation of the reference, suitable for policy lookup.
// This MUST reflect user intent, not e.g. after processing of third-party redirects or aliases;
// The value SHOULD be fully explicit about its semantics, with no hidden defaults, AND canonical
// (i.e. various references with exactly the same semantics should return the same configuration identity)
// It is fine for the return value to be equal to StringWithinTransport(), and it is desirable but
// not required/guaranteed that it will be a valid input to Transport().ParseReference().
// Returns "" if configuration identities for these references are not supported.
func (ref daemonReference) PolicyConfigurationIdentity() string {
// We must allow referring to images in the daemon by image ID, otherwise untagged images would not be accessible.
// But the existence of image IDs means that we cant truly well namespace the input; the untagged images would have to fall into the default policy,
// which can be unexpected. So, punt.
return "" // This still allows using the default "" scope to define a policy for this transport.
}
// PolicyConfigurationNamespaces returns a list of other policy configuration namespaces to search
// for if explicit configuration for PolicyConfigurationIdentity() is not set. The list will be processed
// in order, terminating on first match, and an implicit "" is always checked at the end.
// It is STRONGLY recommended for the first element, if any, to be a prefix of PolicyConfigurationIdentity(),
// and each following element to be a prefix of the element preceding it.
func (ref daemonReference) PolicyConfigurationNamespaces() []string {
// See the explanation in daemonReference.PolicyConfigurationIdentity.
return []string{}
}
// NewImage returns a types.Image for this reference.
// The caller must call .Close() on the returned Image.
func (ref daemonReference) NewImage(ctx *types.SystemContext) (types.Image, error) {
src, err := newImageSource(ctx, ref)
if err != nil {
return nil, err
}
return image.FromSource(src)
}
// NewImageSource returns a types.ImageSource for this reference,
// asking the backend to use a manifest from requestedManifestMIMETypes if possible.
// nil requestedManifestMIMETypes means manifest.DefaultRequestedManifestMIMETypes.
// The caller must call .Close() on the returned ImageSource.
func (ref daemonReference) NewImageSource(ctx *types.SystemContext, requestedManifestMIMETypes []string) (types.ImageSource, error) {
return newImageSource(ctx, ref)
}
// NewImageDestination returns a types.ImageDestination for this reference.
// The caller must call .Close() on the returned ImageDestination.
func (ref daemonReference) NewImageDestination(ctx *types.SystemContext) (types.ImageDestination, error) {
return newImageDestination(ctx, ref)
}
// DeleteImage deletes the named image from the registry, if supported.
func (ref daemonReference) DeleteImage(ctx *types.SystemContext) error {
// Should this just untag the image? Should this stop running containers?
// The semantics are not quite as clear as for remote repositories.
// The user can run (docker rmi) directly anyway, so, for now(?), punt instead of trying to guess what the user meant.
return errors.Errorf("Deleting images not implemented for docker-daemon: images")
}
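
A hedged usage sketch for the transport as a whole; the import path is assumed from the package layout shown here, and the printed form depends on transports.ImageName():

package main

import (
	"fmt"

	"github.com/containers/image/docker/daemon"
	"github.com/containers/image/transports"
)

func main() {
	// ParseReference accepts either a tagged/digested name or a full
	// sha256 image ID, per the rules above.
	ref, err := daemon.ParseReference("busybox:latest")
	if err != nil {
		panic(err)
	}
	// ImageName prepends the transport name, e.g. "docker-daemon:busybox:latest".
	fmt.Println(transports.ImageName(ref))
}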


@@ -7,14 +7,22 @@ import (
"fmt"
"io"
"io/ioutil"
"net"
"net/http"
"os"
"path/filepath"
"strings"
"time"
"github.com/Sirupsen/logrus"
"github.com/containers/image/docker/reference"
"github.com/containers/image/types"
"github.com/docker/docker/pkg/homedir"
"github.com/containers/storage/pkg/homedir"
"github.com/docker/distribution/registry/client"
"github.com/docker/go-connections/sockets"
"github.com/docker/go-connections/tlsconfig"
"github.com/opencontainers/go-digest"
"github.com/pkg/errors"
)
const (
@@ -26,56 +34,185 @@ const (
dockerCfgFileName = "config.json"
dockerCfgObsolete = ".dockercfg"
baseURL = "%s://%s/v2/"
tagsURL = "%s/tags/list"
manifestURL = "%s/manifests/%s"
blobsURL = "%s/blobs/%s"
blobUploadURL = "%s/blobs/uploads/"
resolvedPingV2URL = "%s://%s/v2/"
resolvedPingV1URL = "%s://%s/v1/_ping"
tagsPath = "/v2/%s/tags/list"
manifestPath = "/v2/%s/manifests/%s"
blobsPath = "/v2/%s/blobs/%s"
blobUploadPath = "/v2/%s/blobs/uploads/"
extensionsSignaturePath = "/extensions/v2/%s/signatures/%s"
minimumTokenLifetimeSeconds = 60
extensionSignatureSchemaVersion = 2 // extensionSignature.Version
extensionSignatureTypeAtomic = "atomic" // extensionSignature.Type
)
// ErrV1NotSupported is returned when we're trying to talk to a
// docker V1 registry.
var ErrV1NotSupported = errors.New("can't talk to a V1 docker registry")
// extensionSignature and extensionSignatureList come from github.com/openshift/origin/pkg/dockerregistry/server/signaturedispatcher.go:
// signature represents a Docker image signature.
type extensionSignature struct {
Version int `json:"schemaVersion"` // Version specifies the schema version
Name string `json:"name"` // Name must be in "sha256:<digest>@signatureName" format
Type string `json:"type"` // Type is optional; if not set it will default to "AtomicImageV1"
Content []byte `json:"content"` // Content contains the signature
}
// signatureList represents a list of Docker image signatures.
type extensionSignatureList struct {
Signatures []extensionSignature `json:"signatures"`
}
type bearerToken struct {
Token string `json:"token"`
ExpiresIn int `json:"expires_in"`
IssuedAt time.Time `json:"issued_at"`
}
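
A quick sketch of decoding a token-service response into this structure; the JSON body is hypothetical, but the field tags match the struct above:

package main

import (
	"encoding/json"
	"fmt"
	"time"
)

// A local copy of the bearerToken wire format above.
type bearerToken struct {
	Token     string    `json:"token"`
	ExpiresIn int       `json:"expires_in"`
	IssuedAt  time.Time `json:"issued_at"`
}

func main() {
	blob := []byte(`{"token":"abc123","expires_in":300,"issued_at":"2017-04-01T12:00:00Z"}`)
	var t bearerToken
	if err := json.Unmarshal(blob, &t); err != nil {
		panic(err)
	}
	// The client caches the token until IssuedAt + ExpiresIn, as in
	// setupRequestAuth below.
	fmt.Println("valid until", t.IssuedAt.Add(time.Duration(t.ExpiresIn)*time.Second))
}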
// dockerClient is configuration for dealing with a single Docker registry.
type dockerClient struct {
ctx *types.SystemContext
registry string
username string
password string
wwwAuthenticate string // Cache of a value set by ping() if scheme is not empty
scheme string // Cache of a value returned by a successful ping() if not empty
client *http.Client
signatureBase signatureStorageBase
// The following members are set by newDockerClient and do not change afterwards.
ctx *types.SystemContext
registry string
username string
password string
client *http.Client
signatureBase signatureStorageBase
scope authScope
// The following members are detected registry properties:
// They are set after a successful detectProperties(), and never change afterwards.
scheme string // Empty value also used to indicate detectProperties() has not yet succeeded.
challenges []challenge
supportsSignatures bool
// The following members are private state for setupRequestAuth, both are valid if token != nil.
token *bearerToken
tokenExpiration time.Time
}
type authScope struct {
remoteName string
actions string
}
// This is cloned from docker/go-connections because upstream docker has changed
// it, and "make deps" here fails otherwise.
// We'll drop this once we upgrade to docker 1.13.x deps.
func serverDefault() *tls.Config {
return &tls.Config{
// Avoid fallback to SSL protocols < TLS1.0
MinVersion: tls.VersionTLS10,
PreferServerCipherSuites: true,
CipherSuites: tlsconfig.DefaultServerAcceptedCiphers,
}
}
func newTransport() *http.Transport {
direct := &net.Dialer{
Timeout: 30 * time.Second,
KeepAlive: 30 * time.Second,
DualStack: true,
}
tr := &http.Transport{
Proxy: http.ProxyFromEnvironment,
Dial: direct.Dial,
TLSHandshakeTimeout: 10 * time.Second,
// TODO(dmcgowan): Call close idle connections when complete and use keep alive
DisableKeepAlives: true,
}
proxyDialer, err := sockets.DialerFromEnvironment(direct)
if err == nil {
tr.Dial = proxyDialer.Dial
}
return tr
}
func setupCertificates(dir string, tlsc *tls.Config) error {
if dir == "" {
return nil
}
fs, err := ioutil.ReadDir(dir)
if err != nil && !os.IsNotExist(err) {
return err
}
for _, f := range fs {
fullPath := filepath.Join(dir, f.Name())
if strings.HasSuffix(f.Name(), ".crt") {
systemPool, err := tlsconfig.SystemCertPool()
if err != nil {
return errors.Wrap(err, "unable to get system cert pool")
}
tlsc.RootCAs = systemPool
logrus.Debugf("crt: %s", fullPath)
data, err := ioutil.ReadFile(fullPath)
if err != nil {
return err
}
tlsc.RootCAs.AppendCertsFromPEM(data)
}
if strings.HasSuffix(f.Name(), ".cert") {
certName := f.Name()
keyName := certName[:len(certName)-5] + ".key"
logrus.Debugf("cert: %s", fullPath)
if !hasFile(fs, keyName) {
return errors.Errorf("missing key %s for client certificate %s. Note that CA certificates should use the extension .crt", keyName, certName)
}
cert, err := tls.LoadX509KeyPair(filepath.Join(dir, certName), filepath.Join(dir, keyName))
if err != nil {
return err
}
tlsc.Certificates = append(tlsc.Certificates, cert)
}
if strings.HasSuffix(f.Name(), ".key") {
keyName := f.Name()
certName := keyName[:len(keyName)-4] + ".cert"
logrus.Debugf("key: %s", fullPath)
if !hasFile(fs, certName) {
return errors.Errorf("missing client certificate %s for key %s", certName, keyName)
}
}
}
return nil
}
func hasFile(files []os.FileInfo, name string) bool {
for _, f := range files {
if f.Name() == name {
return true
}
}
return false
}
// newDockerClient returns a new dockerClient instance for refHostname (a host as specified in the Docker image reference, not canonicalized to dockerRegistry)
// “write” specifies whether the client will be used for "write" access (in particular passed to lookaside.go:toplevelFromSection)
func newDockerClient(ctx *types.SystemContext, ref dockerReference, write bool) (*dockerClient, error) {
registry := ref.ref.Hostname()
func newDockerClient(ctx *types.SystemContext, ref dockerReference, write bool, actions string) (*dockerClient, error) {
registry := reference.Domain(ref.ref)
if registry == dockerHostname {
registry = dockerRegistry
}
username, password, err := getAuth(ref.ref.Hostname())
username, password, err := getAuth(ctx, reference.Domain(ref.ref))
if err != nil {
return nil, err
}
var tr *http.Transport
tr := newTransport()
if ctx != nil && (ctx.DockerCertPath != "" || ctx.DockerInsecureSkipTLSVerify) {
tlsc := &tls.Config{}
if ctx.DockerCertPath != "" {
cert, err := tls.LoadX509KeyPair(filepath.Join(ctx.DockerCertPath, "cert.pem"), filepath.Join(ctx.DockerCertPath, "key.pem"))
if err != nil {
return nil, fmt.Errorf("Error loading x509 key pair: %s", err)
}
tlsc.Certificates = append(tlsc.Certificates, cert)
if err := setupCertificates(ctx.DockerCertPath, tlsc); err != nil {
return nil, err
}
tlsc.InsecureSkipVerify = ctx.DockerInsecureSkipTLSVerify
tr = &http.Transport{
TLSClientConfig: tlsc,
}
tr.TLSClientConfig = tlsc
}
client := &http.Client{}
if tr != nil {
client.Transport = tr
if tr.TLSClientConfig == nil {
tr.TLSClientConfig = serverDefault()
}
client := &http.Client{Transport: tr}
sigBase, err := configuredSignatureStorageBase(ctx, ref, write)
if err != nil {
@@ -89,29 +226,29 @@ func newDockerClient(ctx *types.SystemContext, ref dockerReference, write bool)
password: password,
client: client,
signatureBase: sigBase,
scope: authScope{
actions: actions,
remoteName: reference.Path(ref.ref),
},
}, nil
}
// makeRequest creates and executes a http.Request with the specified parameters, adding authentication and TLS options for the Docker client.
// url is NOT an absolute URL, but a path relative to the /v2/ top-level API path. The host name and scheme are taken from the client or autodetected.
func (c *dockerClient) makeRequest(method, url string, headers map[string][]string, stream io.Reader) (*http.Response, error) {
if c.scheme == "" {
pr, err := c.ping()
if err != nil {
return nil, err
}
c.wwwAuthenticate = pr.WWWAuthenticate
c.scheme = pr.scheme
// The host name and scheme are taken from the client or autodetected, and the path is relative to it, i.e. the path usually starts with /v2/.
func (c *dockerClient) makeRequest(method, path string, headers map[string][]string, stream io.Reader) (*http.Response, error) {
if err := c.detectProperties(); err != nil {
return nil, err
}
url = fmt.Sprintf(baseURL, c.scheme, c.registry) + url
return c.makeRequestToResolvedURL(method, url, headers, stream, -1)
url := fmt.Sprintf("%s://%s%s", c.scheme, c.registry, path)
return c.makeRequestToResolvedURL(method, url, headers, stream, -1, true)
}
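
The URL composition is simple enough to demonstrate standalone; a sketch with hypothetical detectProperties() results:

package main

import "fmt"

func main() {
	// Mirrors makeRequest above: paths are registry-relative, and the
	// detected scheme and host are prepended at request time.
	const tagsPath = "/v2/%s/tags/list"
	scheme, registry := "https", "registry-1.docker.io" // hypothetical detected values
	path := fmt.Sprintf(tagsPath, "library/busybox")
	fmt.Printf("%s://%s%s\n", scheme, registry, path)
	// Output: https://registry-1.docker.io/v2/library/busybox/tags/list
}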
// makeRequestToResolvedURL creates and executes a http.Request with the specified parameters, adding authentication and TLS options for the Docker client.
// streamLen, if not -1, specifies the length of the data expected on stream.
// makeRequest should generally be preferred.
func (c *dockerClient) makeRequestToResolvedURL(method, url string, headers map[string][]string, stream io.Reader, streamLen int64) (*http.Response, error) {
// TODO(runcom): too many arguments here, use a struct
func (c *dockerClient) makeRequestToResolvedURL(method, url string, headers map[string][]string, stream io.Reader, streamLen int64, sendAuth bool) (*http.Response, error) {
req, err := http.NewRequest(method, url, stream)
if err != nil {
return nil, err
@@ -125,7 +262,10 @@ func (c *dockerClient) makeRequestToResolvedURL(method, url string, headers map[
req.Header.Add(n, hh)
}
}
if c.wwwAuthenticate != "" {
if c.ctx != nil && c.ctx.DockerRegistryUserAgent != "" {
req.Header.Add("User-Agent", c.ctx.DockerRegistryUserAgent)
}
if sendAuth {
if err := c.setupRequestAuth(req); err != nil {
return nil, err
}
@@ -138,63 +278,48 @@ func (c *dockerClient) makeRequestToResolvedURL(method, url string, headers map[
return res, nil
}
// we're using the challenges from the /v2/ ping response and not the ones from the destination
// URL in this request because:
//
// 1) docker does that as well
// 2) gcr.io is sending 401 without a WWW-Authenticate header in the real request
//
// debugging: https://github.com/containers/image/pull/211#issuecomment-273426236 and follows up
func (c *dockerClient) setupRequestAuth(req *http.Request) error {
tokens := strings.SplitN(strings.TrimSpace(c.wwwAuthenticate), " ", 2)
if len(tokens) != 2 {
return fmt.Errorf("expected 2 tokens in WWW-Authenticate: %d, %s", len(tokens), c.wwwAuthenticate)
if len(c.challenges) == 0 {
return nil
}
switch tokens[0] {
case "Basic":
// assume just one...
challenge := c.challenges[0]
switch challenge.Scheme {
case "basic":
req.SetBasicAuth(c.username, c.password)
return nil
case "Bearer":
// FIXME? This gets a new token for every API request;
// we may be easily able to reuse a previous token, e.g.
// for OpenShift the token only identifies the user and does not vary
// across operations. Should we just try the request first, and
// only get a new token on failure?
// OTOH what to do with the single-use body stream in that case?
// Try performing the request, expecting it to fail.
testReq := *req
// Do not use the body stream, or we couldn't reuse it for the "real" call later.
testReq.Body = nil
testReq.ContentLength = 0
res, err := c.client.Do(&testReq)
if err != nil {
return err
case "bearer":
if c.token == nil || time.Now().After(c.tokenExpiration) {
realm, ok := challenge.Parameters["realm"]
if !ok {
return errors.Errorf("missing realm in bearer auth challenge")
}
service, _ := challenge.Parameters["service"] // Will be "" if not present
scope := fmt.Sprintf("repository:%s:%s", c.scope.remoteName, c.scope.actions)
token, err := c.getBearerToken(realm, service, scope)
if err != nil {
return err
}
c.token = token
c.tokenExpiration = token.IssuedAt.Add(time.Duration(token.ExpiresIn) * time.Second)
}
chs := parseAuthHeader(res.Header)
if res.StatusCode != http.StatusUnauthorized || chs == nil || len(chs) == 0 {
// no need for bearer? wtf?
return nil
}
// Arbitrarily use the first challenge, there is no reason to expect more than one.
challenge := chs[0]
if challenge.Scheme != "bearer" { // Another artifact of trying to handle WWW-Authenticate before it actually happens.
return fmt.Errorf("Unimplemented: WWW-Authenticate Bearer replaced by %#v", challenge.Scheme)
}
realm, ok := challenge.Parameters["realm"]
if !ok {
return fmt.Errorf("missing realm in bearer auth challenge")
}
service, _ := challenge.Parameters["service"] // Will be "" if not present
scope, _ := challenge.Parameters["scope"] // Will be "" if not present
token, err := c.getBearerToken(realm, service, scope)
if err != nil {
return err
}
req.Header.Set("Authorization", fmt.Sprintf("Bearer %s", token))
req.Header.Set("Authorization", fmt.Sprintf("Bearer %s", c.token.Token))
return nil
}
return fmt.Errorf("no handler for %s authentication", tokens[0])
// support docker bearer with authconfig's Auth string? see docker2aci
return errors.Errorf("no handler for %s authentication", challenge.Scheme)
}
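
A minimal sketch of the scope string used in the bearer branch above (the authScope values are hypothetical):

package main

import "fmt"

func main() {
	// The token scope is built from the reference path and the actions
	// requested when the client was created ("pull" or "pull,push").
	remoteName, actions := "library/busybox", "pull,push" // hypothetical authScope values
	fmt.Printf("repository:%s:%s\n", remoteName, actions)
	// Output: repository:library/busybox:pull,push
}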
func (c *dockerClient) getBearerToken(realm, service, scope string) (string, error) {
func (c *dockerClient) getBearerToken(realm, service, scope string) (*bearerToken, error) {
authReq, err := http.NewRequest("GET", realm, nil)
if err != nil {
return "", err
return nil, err
}
getParams := authReq.URL.Query()
if service != "" {
@@ -207,145 +332,182 @@ func (c *dockerClient) getBearerToken(realm, service, scope string) (string, err
if c.username != "" && c.password != "" {
authReq.SetBasicAuth(c.username, c.password)
}
// insecure for now to contact the external token service
tr := &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}}
tr := newTransport()
// TODO(runcom): insecure for now to contact the external token service
tr.TLSClientConfig = &tls.Config{InsecureSkipVerify: true}
client := &http.Client{Transport: tr}
res, err := client.Do(authReq)
if err != nil {
return "", err
return nil, err
}
defer res.Body.Close()
switch res.StatusCode {
case http.StatusUnauthorized:
return "", fmt.Errorf("unable to retrieve auth token: 401 unauthorized")
return nil, errors.Errorf("unable to retrieve auth token: 401 unauthorized")
case http.StatusOK:
break
default:
return "", fmt.Errorf("unexpected http code: %d, URL: %s", res.StatusCode, authReq.URL)
return nil, errors.Errorf("unexpected http code: %d, URL: %s", res.StatusCode, authReq.URL)
}
tokenBlob, err := ioutil.ReadAll(res.Body)
if err != nil {
return "", err
return nil, err
}
tokenStruct := struct {
Token string `json:"token"`
}{}
if err := json.Unmarshal(tokenBlob, &tokenStruct); err != nil {
return "", err
var token bearerToken
if err := json.Unmarshal(tokenBlob, &token); err != nil {
return nil, err
}
// TODO(runcom): reuse tokens?
//hostAuthTokens, ok = rb.hostsV2AuthTokens[req.URL.Host]
//if !ok {
//hostAuthTokens = make(map[string]string)
//rb.hostsV2AuthTokens[req.URL.Host] = hostAuthTokens
//}
//hostAuthTokens[repo] = tokenStruct.Token
return tokenStruct.Token, nil
if token.ExpiresIn < minimumTokenLifetimeSeconds {
token.ExpiresIn = minimumTokenLifetimeSeconds
logrus.Debugf("Increasing token expiration to: %d seconds", token.ExpiresIn)
}
if token.IssuedAt.IsZero() {
token.IssuedAt = time.Now().UTC()
}
return &token, nil
}
func getAuth(hostname string) (string, string, error) {
// TODO(runcom): get this from *cli.Context somehow
//if username != "" && password != "" {
//return username, password, nil
//}
if hostname == dockerHostname {
hostname = dockerAuthRegistry
func getAuth(ctx *types.SystemContext, registry string) (string, string, error) {
if ctx != nil && ctx.DockerAuthConfig != nil {
return ctx.DockerAuthConfig.Username, ctx.DockerAuthConfig.Password, nil
}
var dockerAuth dockerConfigFile
dockerCfgPath := filepath.Join(getDefaultConfigDir(".docker"), dockerCfgFileName)
if _, err := os.Stat(dockerCfgPath); err == nil {
j, err := ioutil.ReadFile(dockerCfgPath)
if err != nil {
return "", "", err
}
var dockerAuth dockerConfigFile
if err := json.Unmarshal(j, &dockerAuth); err != nil {
return "", "", err
}
// try the normal case
if c, ok := dockerAuth.AuthConfigs[hostname]; ok {
return decodeDockerAuth(c.Auth)
}
} else if os.IsNotExist(err) {
// try old config path
oldDockerCfgPath := filepath.Join(getDefaultConfigDir(dockerCfgObsolete))
if _, err := os.Stat(oldDockerCfgPath); err != nil {
return "", "", nil //missing file is not an error
if os.IsNotExist(err) {
return "", "", nil
}
return "", "", errors.Wrap(err, oldDockerCfgPath)
}
j, err := ioutil.ReadFile(oldDockerCfgPath)
if err != nil {
return "", "", err
}
var dockerAuthOld map[string]dockerAuthConfigObsolete
if err := json.Unmarshal(j, &dockerAuthOld); err != nil {
if err := json.Unmarshal(j, &dockerAuth.AuthConfigs); err != nil {
return "", "", err
}
if c, ok := dockerAuthOld[hostname]; ok {
return decodeDockerAuth(c.Auth)
}
} else {
// if file is there but we can't stat it for any reason other
// than it doesn't exist then stop
return "", "", fmt.Errorf("%s - %v", dockerCfgPath, err)
} else if err != nil {
return "", "", errors.Wrap(err, dockerCfgPath)
}
// I'm feeling lucky
if c, exists := dockerAuth.AuthConfigs[registry]; exists {
return decodeDockerAuth(c.Auth)
}
// bad luck; let's normalize the entries first
registry = normalizeRegistry(registry)
normalizedAuths := map[string]dockerAuthConfig{}
for k, v := range dockerAuth.AuthConfigs {
normalizedAuths[normalizeRegistry(k)] = v
}
if c, exists := normalizedAuths[registry]; exists {
return decodeDockerAuth(c.Auth)
}
return "", "", nil
}
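
The credential format getAuth ultimately decodes is easy to show in isolation; this sketch mirrors decodeDockerAuth below, with hypothetical credentials:

package main

import (
	"encoding/base64"
	"fmt"
	"strings"
)

// decodeAuth mirrors decodeDockerAuth: config.json stores credentials as
// base64("username:password") in each entry's "auth" field.
func decodeAuth(s string) (string, string, error) {
	decoded, err := base64.StdEncoding.DecodeString(s)
	if err != nil {
		return "", "", err
	}
	parts := strings.SplitN(string(decoded), ":", 2)
	if len(parts) != 2 {
		return "", "", fmt.Errorf("unexpected auth format")
	}
	return parts[0], strings.Trim(parts[1], "\x00"), nil
}

func main() {
	auth := base64.StdEncoding.EncodeToString([]byte("alice:secret")) // hypothetical credentials
	user, pass, err := decodeAuth(auth)
	if err != nil {
		panic(err)
	}
	fmt.Println(user, pass)
}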
type apiErr struct {
Code string
Message string
Detail interface{}
}
// detectProperties detects various properties of the registry.
// See the dockerClient documentation for members which are affected by this.
func (c *dockerClient) detectProperties() error {
if c.scheme != "" {
return nil
}
type pingResponse struct {
WWWAuthenticate string
APIVersion string
scheme string
errors []apiErr
}
func (c *dockerClient) ping() (*pingResponse, error) {
ping := func(scheme string) (*pingResponse, error) {
url := fmt.Sprintf(baseURL, scheme, c.registry)
resp, err := c.client.Get(url)
ping := func(scheme string) error {
url := fmt.Sprintf(resolvedPingV2URL, scheme, c.registry)
resp, err := c.makeRequestToResolvedURL("GET", url, nil, nil, -1, true)
logrus.Debugf("Ping %s err %#v", url, err)
if err != nil {
return nil, err
return err
}
defer resp.Body.Close()
logrus.Debugf("Ping %s status %d", scheme+"://"+c.registry+"/v2/", resp.StatusCode)
logrus.Debugf("Ping %s status %d", url, resp.StatusCode)
if resp.StatusCode != http.StatusOK && resp.StatusCode != http.StatusUnauthorized {
return nil, fmt.Errorf("error pinging repository, response code %d", resp.StatusCode)
return errors.Errorf("error pinging repository, response code %d", resp.StatusCode)
}
pr := &pingResponse{}
pr.WWWAuthenticate = resp.Header.Get("WWW-Authenticate")
pr.APIVersion = resp.Header.Get("Docker-Distribution-Api-Version")
pr.scheme = scheme
if resp.StatusCode == http.StatusUnauthorized {
type APIErrors struct {
Errors []apiErr
}
errs := &APIErrors{}
if err := json.NewDecoder(resp.Body).Decode(errs); err != nil {
return nil, err
}
pr.errors = errs.Errors
}
return pr, nil
c.challenges = parseAuthHeader(resp.Header)
c.scheme = scheme
c.supportsSignatures = resp.Header.Get("X-Registry-Supports-Signatures") == "1"
return nil
}
pr, err := ping("https")
err := ping("https")
if err != nil && c.ctx != nil && c.ctx.DockerInsecureSkipTLSVerify {
pr, err = ping("http")
err = ping("http")
}
return pr, err
if err != nil {
err = errors.Wrap(err, "pinging docker registry returned")
if c.ctx != nil && c.ctx.DockerDisableV1Ping {
return err
}
// best effort to understand if we're talking to a V1 registry
pingV1 := func(scheme string) bool {
url := fmt.Sprintf(resolvedPingV1URL, scheme, c.registry)
resp, err := c.makeRequestToResolvedURL("GET", url, nil, nil, -1, true)
logrus.Debugf("Ping %s err %#v", url, err)
if err != nil {
return false
}
defer resp.Body.Close()
logrus.Debugf("Ping %s status %d", url, resp.StatusCode)
if resp.StatusCode != http.StatusOK && resp.StatusCode != http.StatusUnauthorized {
return false
}
return true
}
isV1 := pingV1("https")
if !isV1 && c.ctx != nil && c.ctx.DockerInsecureSkipTLSVerify {
isV1 = pingV1("http")
}
if isV1 {
err = ErrV1NotSupported
}
}
return err
}
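
The v2 ping acceptance criteria above can be isolated into a few lines; a sketch (the URL is the public Docker registry, used only as an example):

package main

import (
	"fmt"
	"net/http"
)

// pingOK mirrors detectProperties: both 200 (an open registry) and 401
// (authentication required) prove a live v2 endpoint.
func pingOK(url string) bool {
	resp, err := http.Get(url)
	if err != nil {
		return false
	}
	defer resp.Body.Close()
	return resp.StatusCode == http.StatusOK || resp.StatusCode == http.StatusUnauthorized
}

func main() {
	fmt.Println(pingOK("https://registry-1.docker.io/v2/"))
}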
// getExtensionsSignatures returns signatures from the X-Registry-Supports-Signatures API extension,
// using the original data structures.
func (c *dockerClient) getExtensionsSignatures(ref dockerReference, manifestDigest digest.Digest) (*extensionSignatureList, error) {
path := fmt.Sprintf(extensionsSignaturePath, reference.Path(ref.ref), manifestDigest)
res, err := c.makeRequest("GET", path, nil, nil)
if err != nil {
return nil, err
}
defer res.Body.Close()
if res.StatusCode != http.StatusOK {
return nil, client.HandleErrorResponse(res)
}
body, err := ioutil.ReadAll(res.Body)
if err != nil {
return nil, err
}
var parsedBody extensionSignatureList
if err := json.Unmarshal(body, &parsedBody); err != nil {
return nil, errors.Wrapf(err, "Error decoding signature list")
}
return &parsedBody, nil
}
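
A sketch of the wire format this endpoint returns; the response body is hypothetical, and the structs are local copies of the ones defined above (Content is base64 in JSON because it is a []byte):

package main

import (
	"encoding/json"
	"fmt"
)

type extensionSignature struct {
	Version int    `json:"schemaVersion"`
	Name    string `json:"name"`
	Type    string `json:"type"`
	Content []byte `json:"content"`
}

type extensionSignatureList struct {
	Signatures []extensionSignature `json:"signatures"`
}

func main() {
	// A hypothetical response body from the signatures endpoint.
	body := []byte(`{"signatures":[{"schemaVersion":2,"name":"sha256:abcd@0001","type":"atomic","content":"c2ln"}]}`)
	var list extensionSignatureList
	if err := json.Unmarshal(body, &list); err != nil {
		panic(err)
	}
	fmt.Printf("%d signature(s); first is %q, %d bytes\n",
		len(list.Signatures), list.Signatures[0].Name, len(list.Signatures[0].Content))
}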
func getDefaultConfigDir(confPath string) string {
return filepath.Join(homedir.Get(), confPath)
}
type dockerAuthConfigObsolete struct {
Auth string `json:"auth"`
}
type dockerAuthConfig struct {
Auth string `json:"auth,omitempty"`
}
@@ -368,3 +530,28 @@ func decodeDockerAuth(s string) (string, string, error) {
password := strings.Trim(parts[1], "\x00")
return user, password, nil
}
// convertToHostname converts a registry url which has http|https prepended
// to just a hostname.
// Copied from github.com/docker/docker/registry/auth.go
func convertToHostname(url string) string {
stripped := url
if strings.HasPrefix(url, "http://") {
stripped = strings.TrimPrefix(url, "http://")
} else if strings.HasPrefix(url, "https://") {
stripped = strings.TrimPrefix(url, "https://")
}
nameParts := strings.SplitN(stripped, "/", 2)
return nameParts[0]
}
func normalizeRegistry(registry string) string {
normalized := convertToHostname(registry)
switch normalized {
case "registry-1.docker.io", "docker.io":
return "index.docker.io"
}
return normalized
}
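
A quick demonstration of these two helpers, with standalone copies so the snippet runs on its own:

package main

import (
	"fmt"
	"strings"
)

func convertToHostname(url string) string {
	stripped := strings.TrimPrefix(strings.TrimPrefix(url, "http://"), "https://")
	return strings.SplitN(stripped, "/", 2)[0]
}

func normalizeRegistry(registry string) string {
	normalized := convertToHostname(registry)
	switch normalized {
	case "registry-1.docker.io", "docker.io":
		return "index.docker.io"
	}
	return normalized
}

func main() {
	// Different spellings of the default registry normalize to one key,
	// which is why getAuth retries with normalized entries.
	for _, r := range []string{"https://registry-1.docker.io/v2/", "docker.io", "quay.io/ns"} {
		fmt.Println(r, "->", normalizeRegistry(r))
	}
}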


@@ -5,8 +5,10 @@ import (
"fmt"
"net/http"
"github.com/containers/image/docker/reference"
"github.com/containers/image/image"
"github.com/containers/image/types"
"github.com/pkg/errors"
)
// Image is a Docker-specific implementation of types.Image with a few extra methods
@@ -24,25 +26,29 @@ func newImage(ctx *types.SystemContext, ref dockerReference) (types.Image, error
if err != nil {
return nil, err
}
return &Image{Image: image.FromSource(s), src: s}, nil
img, err := image.FromSource(s)
if err != nil {
return nil, err
}
return &Image{Image: img, src: s}, nil
}
// SourceRefFullName returns a fully expanded name for the repository this image is in.
func (i *Image) SourceRefFullName() string {
return i.src.ref.ref.FullName()
return i.src.ref.ref.Name()
}
// GetRepositoryTags list all tags available in the repository. Note that this has no connection with the tag(s) used for this specific image, if any.
func (i *Image) GetRepositoryTags() ([]string, error) {
url := fmt.Sprintf(tagsURL, i.src.ref.ref.RemoteName())
res, err := i.src.c.makeRequest("GET", url, nil, nil)
path := fmt.Sprintf(tagsPath, reference.Path(i.src.ref.ref))
res, err := i.src.c.makeRequest("GET", path, nil, nil)
if err != nil {
return nil, err
}
defer res.Body.Close()
if res.StatusCode != http.StatusOK {
// print url also
return nil, fmt.Errorf("Invalid status code returned when fetching tags list %d", res.StatusCode)
return nil, errors.Errorf("Invalid status code returned when fetching tags list %d", res.StatusCode)
}
type tagsRes struct {
Tags []string


@@ -2,8 +2,8 @@ package docker
import (
"bytes"
"crypto/sha256"
"encoding/hex"
"crypto/rand"
"encoding/json"
"fmt"
"io"
"io/ioutil"
@@ -11,23 +11,40 @@ import (
"net/url"
"os"
"path/filepath"
"strconv"
"github.com/Sirupsen/logrus"
"github.com/containers/image/docker/reference"
"github.com/containers/image/manifest"
"github.com/containers/image/types"
"github.com/opencontainers/go-digest"
"github.com/pkg/errors"
)
var manifestMIMETypes = []string{
// TODO(runcom): we'll add OCI as part of another PR here
manifest.DockerV2Schema2MediaType,
manifest.DockerV2Schema1SignedMediaType,
manifest.DockerV2Schema1MediaType,
}
func supportedManifestMIMETypesMap() map[string]bool {
m := make(map[string]bool, len(manifestMIMETypes))
for _, mt := range manifestMIMETypes {
m[mt] = true
}
return m
}
type dockerImageDestination struct {
ref dockerReference
c *dockerClient
// State
manifestDigest string // or "" if not yet known.
manifestDigest digest.Digest // or "" if not yet known.
}
// newImageDestination creates a new ImageDestination for the specified image reference.
func newImageDestination(ctx *types.SystemContext, ref dockerReference) (types.ImageDestination, error) {
c, err := newDockerClient(ctx, ref, true)
c, err := newDockerClient(ctx, ref, true, "pull,push")
if err != nil {
return nil, err
}
@@ -44,22 +61,28 @@ func (d *dockerImageDestination) Reference() types.ImageReference {
}
// Close removes resources associated with an initialized ImageDestination, if any.
func (d *dockerImageDestination) Close() {
func (d *dockerImageDestination) Close() error {
return nil
}
func (d *dockerImageDestination) SupportedManifestMIMETypes() []string {
return []string{
// TODO(runcom): we'll add OCI as part of another PR here
manifest.DockerV2Schema2MediaType,
manifest.DockerV2Schema1SignedMediaType,
manifest.DockerV2Schema1MediaType,
}
return manifestMIMETypes
}
// SupportsSignatures returns an error (to be displayed to the user) if the destination certainly can't store signatures.
// Note: It is still possible for PutSignatures to fail if SupportsSignatures returns nil.
func (d *dockerImageDestination) SupportsSignatures() error {
return fmt.Errorf("Pushing signatures to a Docker Registry is not supported")
if err := d.c.detectProperties(); err != nil {
return err
}
switch {
case d.c.signatureBase != nil:
return nil
case d.c.supportsSignatures:
return nil
default:
return errors.Errorf("X-Registry-Supports-Signatures extension not supported, and lookaside is not configured")
}
}
// ShouldCompressLayers returns true iff it is desirable to compress layer blobs written to this destination.
@@ -67,6 +90,12 @@ func (d *dockerImageDestination) ShouldCompressLayers() bool {
return true
}
// AcceptsForeignLayerURLs returns false iff foreign layers in manifest should be actually
// uploaded to the image destination, true otherwise.
func (d *dockerImageDestination) AcceptsForeignLayerURLs() bool {
return true
}
// sizeCounter is an io.Writer which only counts the total size of its input.
type sizeCounter struct{ size int64 }
@@ -82,88 +111,104 @@ func (c *sizeCounter) Write(p []byte) (n int, err error) {
// to any other readers for download using the supplied digest.
// If stream.Read() at any time, ESPECIALLY at end of input, returns an error, PutBlob MUST 1) fail, and 2) delete any data stored so far.
func (d *dockerImageDestination) PutBlob(stream io.Reader, inputInfo types.BlobInfo) (types.BlobInfo, error) {
if inputInfo.Digest != "" {
checkURL := fmt.Sprintf(blobsURL, d.ref.ref.RemoteName(), inputInfo.Digest)
logrus.Debugf("Checking %s", checkURL)
res, err := d.c.makeRequest("HEAD", checkURL, nil, nil)
if inputInfo.Digest.String() != "" {
haveBlob, size, err := d.HasBlob(inputInfo)
if err != nil {
return types.BlobInfo{}, err
}
defer res.Body.Close()
switch res.StatusCode {
case http.StatusOK:
logrus.Debugf("... already exists, not uploading")
blobLength, err := strconv.ParseInt(res.Header.Get("Content-Length"), 10, 64)
if err != nil {
return types.BlobInfo{}, err
}
return types.BlobInfo{Digest: inputInfo.Digest, Size: blobLength}, nil
case http.StatusUnauthorized:
logrus.Debugf("... not authorized")
return types.BlobInfo{}, fmt.Errorf("not authorized to read from destination repository %s", d.ref.ref.RemoteName())
case http.StatusNotFound:
// noop
default:
return types.BlobInfo{}, fmt.Errorf("failed to read from destination repository %s: %v", d.ref.ref.RemoteName(), http.StatusText(res.StatusCode))
if haveBlob {
return types.BlobInfo{Digest: inputInfo.Digest, Size: size}, nil
}
logrus.Debugf("... failed, status %d", res.StatusCode)
}
// FIXME? Chunked upload, progress reporting, etc.
uploadURL := fmt.Sprintf(blobUploadURL, d.ref.ref.RemoteName())
logrus.Debugf("Uploading %s", uploadURL)
res, err := d.c.makeRequest("POST", uploadURL, nil, nil)
uploadPath := fmt.Sprintf(blobUploadPath, reference.Path(d.ref.ref))
logrus.Debugf("Uploading %s", uploadPath)
res, err := d.c.makeRequest("POST", uploadPath, nil, nil)
if err != nil {
return types.BlobInfo{}, err
}
defer res.Body.Close()
if res.StatusCode != http.StatusAccepted {
logrus.Debugf("Error initiating layer upload, response %#v", *res)
return types.BlobInfo{}, fmt.Errorf("Error initiating layer upload to %s, status %d", uploadURL, res.StatusCode)
return types.BlobInfo{}, errors.Errorf("Error initiating layer upload to %s, status %d", uploadPath, res.StatusCode)
}
uploadLocation, err := res.Location()
if err != nil {
return types.BlobInfo{}, fmt.Errorf("Error determining upload URL: %s", err.Error())
return types.BlobInfo{}, errors.Wrap(err, "Error determining upload URL")
}
h := sha256.New()
digester := digest.Canonical.Digester()
sizeCounter := &sizeCounter{}
tee := io.TeeReader(stream, io.MultiWriter(h, sizeCounter))
res, err = d.c.makeRequestToResolvedURL("PATCH", uploadLocation.String(), map[string][]string{"Content-Type": {"application/octet-stream"}}, tee, inputInfo.Size)
tee := io.TeeReader(stream, io.MultiWriter(digester.Hash(), sizeCounter))
res, err = d.c.makeRequestToResolvedURL("PATCH", uploadLocation.String(), map[string][]string{"Content-Type": {"application/octet-stream"}}, tee, inputInfo.Size, true)
if err != nil {
logrus.Debugf("Error uploading layer chunked, response %#v", *res)
logrus.Debugf("Error uploading layer chunked, response %#v", res)
return types.BlobInfo{}, err
}
defer res.Body.Close()
hash := h.Sum(nil)
computedDigest := "sha256:" + hex.EncodeToString(hash[:])
computedDigest := digester.Digest()
uploadLocation, err = res.Location()
if err != nil {
return types.BlobInfo{}, fmt.Errorf("Error determining upload URL: %s", err.Error())
return types.BlobInfo{}, errors.Wrap(err, "Error determining upload URL")
}
// FIXME: DELETE uploadLocation on failure
locationQuery := uploadLocation.Query()
// TODO: check inputInfo.Digest == computedDigest https://github.com/containers/image/pull/70#discussion_r77646717
locationQuery.Set("digest", computedDigest)
locationQuery.Set("digest", computedDigest.String())
uploadLocation.RawQuery = locationQuery.Encode()
res, err = d.c.makeRequestToResolvedURL("PUT", uploadLocation.String(), map[string][]string{"Content-Type": {"application/octet-stream"}}, nil, -1)
res, err = d.c.makeRequestToResolvedURL("PUT", uploadLocation.String(), map[string][]string{"Content-Type": {"application/octet-stream"}}, nil, -1, true)
if err != nil {
return types.BlobInfo{}, err
}
defer res.Body.Close()
if res.StatusCode != http.StatusCreated {
logrus.Debugf("Error uploading layer, response %#v", *res)
return types.BlobInfo{}, fmt.Errorf("Error uploading layer to %s, status %d", uploadLocation, res.StatusCode)
return types.BlobInfo{}, errors.Errorf("Error uploading layer to %s, status %d", uploadLocation, res.StatusCode)
}
logrus.Debugf("Upload of layer %s complete", computedDigest)
return types.BlobInfo{Digest: computedDigest, Size: sizeCounter.size}, nil
}
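
The digest-while-streaming pattern used above is worth isolating; a sketch with a stand-in stream (assuming only go-digest):

package main

import (
	"fmt"
	"io"
	"io/ioutil"
	"strings"

	"github.com/opencontainers/go-digest"
)

func main() {
	// Mirrors PutBlob above: digest and measure the stream while
	// forwarding it, so the blob is read exactly once.
	stream := strings.NewReader("layer data") // stand-in for the blob stream
	digester := digest.Canonical.Digester()
	tee := io.TeeReader(stream, digester.Hash())
	size, err := io.Copy(ioutil.Discard, tee) // stand-in for the upload request body
	if err != nil {
		panic(err)
	}
	fmt.Println(digester.Digest(), size)
}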
// HasBlob returns true iff the image destination already contains a blob with the matching digest which can be reapplied using ReapplyBlob.
// Unlike PutBlob, the digest can not be empty. If HasBlob returns true, the size of the blob must also be returned.
// If the destination does not contain the blob, or it is unknown, HasBlob ordinarily returns (false, -1, nil);
// it returns a non-nil error only on an unexpected failure.
func (d *dockerImageDestination) HasBlob(info types.BlobInfo) (bool, int64, error) {
if info.Digest == "" {
return false, -1, errors.Errorf("Can not check for a blob with unknown digest")
}
checkPath := fmt.Sprintf(blobsPath, reference.Path(d.ref.ref), info.Digest.String())
logrus.Debugf("Checking %s", checkPath)
res, err := d.c.makeRequest("HEAD", checkPath, nil, nil)
if err != nil {
return false, -1, err
}
defer res.Body.Close()
switch res.StatusCode {
case http.StatusOK:
logrus.Debugf("... already exists")
return true, getBlobSize(res), nil
case http.StatusUnauthorized:
logrus.Debugf("... not authorized")
return false, -1, errors.Errorf("not authorized to read from destination repository %s", reference.Path(d.ref.ref))
case http.StatusNotFound:
logrus.Debugf("... not present")
return false, -1, nil
default:
return false, -1, errors.Errorf("failed to read from destination repository %s: %v", reference.Path(d.ref.ref), http.StatusText(res.StatusCode))
}
}
func (d *dockerImageDestination) ReapplyBlob(info types.BlobInfo) (types.BlobInfo, error) {
return info, nil
}
func (d *dockerImageDestination) PutManifest(m []byte) error {
digest, err := manifest.Digest(m)
if err != nil {
@@ -171,18 +216,18 @@ func (d *dockerImageDestination) PutManifest(m []byte) error {
}
d.manifestDigest = digest
reference, err := d.ref.tagOrDigest()
refTail, err := d.ref.tagOrDigest()
if err != nil {
return err
}
url := fmt.Sprintf(manifestURL, d.ref.ref.RemoteName(), reference)
path := fmt.Sprintf(manifestPath, reference.Path(d.ref.ref), refTail)
headers := map[string][]string{}
mimeType := manifest.GuessMIMEType(m)
if mimeType != "" {
headers["Content-Type"] = []string{mimeType}
}
res, err := d.c.makeRequest("PUT", url, headers, bytes.NewReader(m))
res, err := d.c.makeRequest("PUT", path, headers, bytes.NewReader(m))
if err != nil {
return err
}
@@ -193,12 +238,32 @@ func (d *dockerImageDestination) PutManifest(m []byte) error {
logrus.Debugf("Error body %s", string(body))
}
logrus.Debugf("Error uploading manifest, status %d, %#v", res.StatusCode, res)
return fmt.Errorf("Error uploading manifest to %s, status %d", url, res.StatusCode)
return errors.Errorf("Error uploading manifest to %s, status %d", path, res.StatusCode)
}
return nil
}
func (d *dockerImageDestination) PutSignatures(signatures [][]byte) error {
// Do not fail if we don't really need to support signatures.
if len(signatures) == 0 {
return nil
}
if err := d.c.detectProperties(); err != nil {
return err
}
switch {
case d.c.signatureBase != nil:
return d.putSignaturesToLookaside(signatures)
case d.c.supportsSignatures:
return d.putSignaturesToAPIExtension(signatures)
default:
return errors.Errorf("X-Registry-Supports-Signatures extension not supported, and lookaside is not configured")
}
}
// putSignaturesToLookaside implements PutSignatures() from the lookaside location configured in s.c.signatureBase,
// which is not nil.
func (d *dockerImageDestination) putSignaturesToLookaside(signatures [][]byte) error {
// FIXME? This overwrites files one at a time, definitely not atomic.
// A failure when updating signatures with a reordered copy could lose some of them.
@@ -206,19 +271,17 @@ func (d *dockerImageDestination) PutSignatures(signatures [][]byte) error {
if len(signatures) == 0 {
return nil
}
if d.c.signatureBase == nil {
return fmt.Errorf("Pushing signatures to a Docker Registry is not supported, and there is no applicable signature storage configured")
}
// FIXME: This assumption that signatures are stored after the manifest rather breaks the model.
if d.manifestDigest == "" {
return fmt.Errorf("Unknown manifest digest, can't add signatures")
if d.manifestDigest.String() == "" {
// This shouldn't happen; ImageDestination users are required to call PutManifest before PutSignatures
return errors.Errorf("Unknown manifest digest, can't add signatures")
}
// NOTE: Keep this in sync with docs/signature-protocols.md!
for i, signature := range signatures {
url := signatureStorageURL(d.c.signatureBase, d.manifestDigest, i)
if url == nil {
return fmt.Errorf("Internal error: signatureStorageURL with non-nil base returned nil")
return errors.Errorf("Internal error: signatureStorageURL with non-nil base returned nil")
}
err := d.putOneSignature(url, signature)
if err != nil {
@@ -233,7 +296,7 @@ func (d *dockerImageDestination) PutSignatures(signatures [][]byte) error {
for i := len(signatures); ; i++ {
url := signatureStorageURL(d.c.signatureBase, d.manifestDigest, i)
if url == nil {
return fmt.Errorf("Internal error: signatureStorageURL with non-nil base returned nil")
return errors.Errorf("Internal error: signatureStorageURL with non-nil base returned nil")
}
missing, err := d.c.deleteOneSignature(url)
if err != nil {
@@ -248,6 +311,7 @@ func (d *dockerImageDestination) PutSignatures(signatures [][]byte) error {
}
// putOneSignature stores one signature to url.
// NOTE: Keep this in sync with docs/signature-protocols.md!
func (d *dockerImageDestination) putOneSignature(url *url.URL, signature []byte) error {
switch url.Scheme {
case "file":
@@ -263,14 +327,15 @@ func (d *dockerImageDestination) putOneSignature(url *url.URL, signature []byte)
return nil
case "http", "https":
return fmt.Errorf("Writing directly to a %s sigstore %s is not supported. Configure a sigstore-staging: location.", url.Scheme, url.String())
return errors.Errorf("Writing directly to a %s sigstore %s is not supported. Configure a sigstore-staging: location", url.Scheme, url.String())
default:
return fmt.Errorf("Unsupported scheme when writing signature to %s", url.String())
return errors.Errorf("Unsupported scheme when writing signature to %s", url.String())
}
}
// deleteOneSignature deletes a signature from url, if it exists.
// If it successfully determines that the signature does not exist, returns (true, nil)
// NOTE: Keep this in sync with docs/signature-protocols.md!
func (c *dockerClient) deleteOneSignature(url *url.URL) (missing bool, err error) {
switch url.Scheme {
case "file":
@@ -282,12 +347,88 @@ func (c *dockerClient) deleteOneSignature(url *url.URL) (missing bool, err error
return false, err
case "http", "https":
return false, fmt.Errorf("Writing directly to a %s sigstore %s is not supported. Configure a sigstore-staging: location.", url.Scheme, url.String())
return false, errors.Errorf("Writing directly to a %s sigstore %s is not supported. Configure a sigstore-staging: location", url.Scheme, url.String())
default:
return false, fmt.Errorf("Unsupported scheme when deleting signature from %s", url.String())
return false, errors.Errorf("Unsupported scheme when deleting signature from %s", url.String())
}
}
// putSignaturesToAPIExtension implements PutSignatures() using the X-Registry-Supports-Signatures API extension.
func (d *dockerImageDestination) putSignaturesToAPIExtension(signatures [][]byte) error {
// Skip dealing with the manifest digest, or reading the old state, if not necessary.
if len(signatures) == 0 {
return nil
}
if d.manifestDigest.String() == "" {
// This shouldn't happen; ImageDestination users are required to call PutManifest before PutSignatures
return errors.Errorf("Unknown manifest digest, can't add signatures")
}
// Because image signatures are a shared resource in Atomic Registry, the default upload
// always adds signatures. Eventually we should also allow removing signatures,
// but the X-Registry-Supports-Signatures API extension does not support that yet.
existingSignatures, err := d.c.getExtensionsSignatures(d.ref, d.manifestDigest)
if err != nil {
return err
}
existingSigNames := map[string]struct{}{}
for _, sig := range existingSignatures.Signatures {
existingSigNames[sig.Name] = struct{}{}
}
sigExists:
for _, newSig := range signatures {
for _, existingSig := range existingSignatures.Signatures {
if existingSig.Version == extensionSignatureSchemaVersion && existingSig.Type == extensionSignatureTypeAtomic && bytes.Equal(existingSig.Content, newSig) {
continue sigExists
}
}
// The API expects us to invent a new unique name. This is racy, but hopefully good enough.
var signatureName string
for {
randBytes := make([]byte, 16)
n, err := rand.Read(randBytes)
if err != nil || n != 16 {
return errors.Wrapf(err, "Error generating random signature len %d", n)
}
signatureName = fmt.Sprintf("%s@%032x", d.manifestDigest.String(), randBytes)
if _, ok := existingSigNames[signatureName]; !ok {
break
}
}
sig := extensionSignature{
Version: extensionSignatureSchemaVersion,
Name: signatureName,
Type: extensionSignatureTypeAtomic,
Content: newSig,
}
body, err := json.Marshal(sig)
if err != nil {
return err
}
path := fmt.Sprintf(extensionsSignaturePath, reference.Path(d.ref.ref), d.manifestDigest.String())
res, err := d.c.makeRequest("PUT", path, nil, bytes.NewReader(body))
if err != nil {
return err
}
defer res.Body.Close()
if res.StatusCode != http.StatusCreated {
body, err := ioutil.ReadAll(res.Body)
if err == nil {
logrus.Debugf("Error body %s", string(body))
}
logrus.Debugf("Error uploading signature, status %d, %#v", res.StatusCode, res)
return errors.Errorf("Error uploading signature to %s, status %d", path, res.StatusCode)
}
}
return nil
}
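
The name-invention step above reduces to one format string; a sketch with a hypothetical manifest digest:

package main

import (
	"crypto/rand"
	"fmt"
)

func main() {
	// A signature name is the manifest digest plus 16 random bytes,
	// hex-encoded to 32 characters, matching the loop above.
	manifestDigest := "sha256:e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855" // hypothetical
	randBytes := make([]byte, 16)
	if _, err := rand.Read(randBytes); err != nil {
		panic(err)
	}
	fmt.Printf("%s@%032x\n", manifestDigest, randBytes)
}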
// Commit marks the process of storing the image as successful and asks for the image to be persisted.
// WARNING: This does not have any transactional semantics:
// - Uploaded data MAY be visible to others before Commit() is called


@@ -11,19 +11,14 @@ import (
"strconv"
"github.com/Sirupsen/logrus"
"github.com/containers/image/docker/reference"
"github.com/containers/image/manifest"
"github.com/containers/image/types"
"github.com/docker/distribution/registry/client"
"github.com/opencontainers/go-digest"
"github.com/pkg/errors"
)
type errFetchManifest struct {
statusCode int
body []byte
}
func (e errFetchManifest) Error() string {
return fmt.Sprintf("error fetching manifest: status code: %d, body: %s", e.statusCode, string(e.body))
}
type dockerImageSource struct {
ref dockerReference
requestedManifestMIMETypes []string
@@ -38,13 +33,24 @@ type dockerImageSource struct {
// nil requestedManifestMIMETypes means manifest.DefaultRequestedManifestMIMETypes.
// The caller must call .Close() on the returned ImageSource.
func newImageSource(ctx *types.SystemContext, ref dockerReference, requestedManifestMIMETypes []string) (*dockerImageSource, error) {
c, err := newDockerClient(ctx, ref, false)
c, err := newDockerClient(ctx, ref, false, "pull")
if err != nil {
return nil, err
}
if requestedManifestMIMETypes == nil {
requestedManifestMIMETypes = manifest.DefaultRequestedManifestMIMETypes
}
supportedMIMEs := supportedManifestMIMETypesMap()
acceptableRequestedMIMEs := false
for _, mtrequested := range requestedManifestMIMETypes {
if supportedMIMEs[mtrequested] {
acceptableRequestedMIMEs = true
break
}
}
if !acceptableRequestedMIMEs {
requestedManifestMIMETypes = manifest.DefaultRequestedManifestMIMETypes
}
return &dockerImageSource{
ref: ref,
requestedManifestMIMETypes: requestedManifestMIMETypes,
@@ -59,7 +65,8 @@ func (s *dockerImageSource) Reference() types.ImageReference {
}
// Close removes resources associated with an initialized ImageSource, if any.
func (s *dockerImageSource) Close() {
func (s *dockerImageSource) Close() error {
return nil
}
// simplifyContentType drops parameters from an HTTP media type (see https://tools.ietf.org/html/rfc7231#section-3.1.1.1)
@@ -75,6 +82,8 @@ func simplifyContentType(contentType string) string {
return mimeType
}
// GetManifest returns the image's manifest along with its MIME type (which may be empty when it can't be determined but the manifest is available).
// It may use a remote (= slow) service.
func (s *dockerImageSource) GetManifest() ([]byte, string, error) {
err := s.ensureManifestIsLoaded()
if err != nil {
@@ -83,6 +92,31 @@ func (s *dockerImageSource) GetManifest() ([]byte, string, error) {
return s.cachedManifest, s.cachedManifestMIMEType, nil
}
func (s *dockerImageSource) fetchManifest(tagOrDigest string) ([]byte, string, error) {
path := fmt.Sprintf(manifestPath, reference.Path(s.ref.ref), tagOrDigest)
headers := make(map[string][]string)
headers["Accept"] = s.requestedManifestMIMETypes
res, err := s.c.makeRequest("GET", path, headers, nil)
if err != nil {
return nil, "", err
}
defer res.Body.Close()
if res.StatusCode != http.StatusOK {
return nil, "", client.HandleErrorResponse(res)
}
manblob, err := ioutil.ReadAll(res.Body)
if err != nil {
return nil, "", err
}
return manblob, simplifyContentType(res.Header.Get("Content-Type")), nil
}
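
A sketch of the content negotiation fetchManifest performs; the URL is hypothetical, and the MIME strings are the manifest media types named earlier in this diff:

package main

import (
	"fmt"
	"net/http"
)

func main() {
	// The requested manifest MIME types are sent as Accept values so the
	// registry can pick the best representation it has.
	req, err := http.NewRequest("GET", "https://registry.example.com/v2/library/busybox/manifests/latest", nil) // hypothetical URL
	if err != nil {
		panic(err)
	}
	for _, mt := range []string{
		"application/vnd.docker.distribution.manifest.v2+json",
		"application/vnd.docker.distribution.manifest.v1+prettyjws",
	} {
		req.Header.Add("Accept", mt)
	}
	fmt.Println(req.Header["Accept"])
}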
// GetTargetManifest returns an image's manifest given a digest.
// This is mainly used to retrieve a single image's manifest out of a manifest list.
func (s *dockerImageSource) GetTargetManifest(digest digest.Digest) ([]byte, string, error) {
return s.fetchManifest(digest.String())
}
// ensureManifestIsLoaded sets s.cachedManifest and s.cachedManifestMIMEType
//
// ImageSource implementations are not required or expected to do any caching,
@@ -99,53 +133,82 @@ func (s *dockerImageSource) ensureManifestIsLoaded() error {
if err != nil {
return err
}
url := fmt.Sprintf(manifestURL, s.ref.ref.RemoteName(), reference)
// TODO(runcom) set manifest version header! schema1 for now - then schema2 etc etc and v1
// TODO(runcom) NO, switch on the resulting manifest like Docker is doing
headers := make(map[string][]string)
headers["Accept"] = s.requestedManifestMIMETypes
res, err := s.c.makeRequest("GET", url, headers, nil)
manblob, mt, err := s.fetchManifest(reference)
if err != nil {
return err
}
defer res.Body.Close()
manblob, err := ioutil.ReadAll(res.Body)
if err != nil {
return err
}
if res.StatusCode != http.StatusOK {
return errFetchManifest{res.StatusCode, manblob}
}
// We might validate manblob against the Docker-Content-Digest header here to protect against transport errors.
s.cachedManifest = manblob
s.cachedManifestMIMEType = simplifyContentType(res.Header.Get("Content-Type"))
s.cachedManifestMIMEType = mt
return nil
}
func (s *dockerImageSource) getExternalBlob(urls []string) (io.ReadCloser, int64, error) {
var (
resp *http.Response
err error
)
for _, url := range urls {
resp, err = s.c.makeRequestToResolvedURL("GET", url, nil, nil, -1, false)
if err == nil {
if resp.StatusCode != http.StatusOK {
err = errors.Errorf("error fetching external blob from %q: %d", url, resp.StatusCode)
logrus.Debug(err)
continue
}
break
}
}
if resp.Body != nil && err == nil {
return resp.Body, getBlobSize(resp), nil
}
return nil, 0, err
}
func getBlobSize(resp *http.Response) int64 {
size, err := strconv.ParseInt(resp.Header.Get("Content-Length"), 10, 64)
if err != nil {
size = -1
}
return size
}
// GetBlob returns a stream for the specified blob, and the blobs size (or -1 if unknown).
func (s *dockerImageSource) GetBlob(digest string) (io.ReadCloser, int64, error) {
url := fmt.Sprintf(blobsURL, s.ref.ref.RemoteName(), digest)
logrus.Debugf("Downloading %s", url)
res, err := s.c.makeRequest("GET", url, nil, nil)
func (s *dockerImageSource) GetBlob(info types.BlobInfo) (io.ReadCloser, int64, error) {
if len(info.URLs) != 0 {
return s.getExternalBlob(info.URLs)
}
path := fmt.Sprintf(blobsPath, reference.Path(s.ref.ref), info.Digest.String())
logrus.Debugf("Downloading %s", path)
res, err := s.c.makeRequest("GET", path, nil, nil)
if err != nil {
return nil, 0, err
}
if res.StatusCode != http.StatusOK {
// print url also
return nil, 0, fmt.Errorf("Invalid status code returned when fetching blob %d", res.StatusCode)
return nil, 0, errors.Errorf("Invalid status code returned when fetching blob %d", res.StatusCode)
}
size, err := strconv.ParseInt(res.Header.Get("Content-Length"), 10, 64)
if err != nil {
size = -1
}
return res.Body, size, nil
return res.Body, getBlobSize(res), nil
}
func (s *dockerImageSource) GetSignatures() ([][]byte, error) {
if s.c.signatureBase == nil { // Skip dealing with the manifest digest if not necessary.
if err := s.c.detectProperties(); err != nil {
return nil, err
}
switch {
case s.c.signatureBase != nil:
return s.getSignaturesFromLookaside()
case s.c.supportsSignatures:
return s.getSignaturesFromAPIExtension()
default:
return [][]byte{}, nil
}
}
// getSignaturesFromLookaside implements GetSignatures() from the lookaside location configured in s.c.signatureBase,
// which is not nil.
func (s *dockerImageSource) getSignaturesFromLookaside() ([][]byte, error) {
if err := s.ensureManifestIsLoaded(); err != nil {
return nil, err
}
@@ -154,11 +217,12 @@ func (s *dockerImageSource) GetSignatures() ([][]byte, error) {
return nil, err
}
// NOTE: Keep this in sync with docs/signature-protocols.md!
signatures := [][]byte{}
for i := 0; ; i++ {
url := signatureStorageURL(s.c.signatureBase, manifestDigest, i)
if url == nil {
return nil, fmt.Errorf("Internal error: signatureStorageURL with non-nil base returned nil")
return nil, errors.Errorf("Internal error: signatureStorageURL with non-nil base returned nil")
}
signature, missing, err := s.getOneSignature(url)
if err != nil {
@@ -174,6 +238,7 @@ func (s *dockerImageSource) GetSignatures() ([][]byte, error) {
// getOneSignature downloads one signature from url.
// If it successfully determines that the signature does not exist, returns with missing set to true and error set to nil.
// NOTE: Keep this in sync with docs/signature-protocols.md!
func (s *dockerImageSource) getOneSignature(url *url.URL) (signature []byte, missing bool, err error) {
switch url.Scheme {
case "file":
@@ -197,7 +262,7 @@ func (s *dockerImageSource) getOneSignature(url *url.URL) (signature []byte, mis
if res.StatusCode == http.StatusNotFound {
return nil, true, nil
} else if res.StatusCode != http.StatusOK {
return nil, false, fmt.Errorf("Error reading signature from %s: status %d", url.String(), res.StatusCode)
return nil, false, errors.Errorf("Error reading signature from %s: status %d", url.String(), res.StatusCode)
}
sig, err := ioutil.ReadAll(res.Body)
if err != nil {
@@ -206,13 +271,37 @@ func (s *dockerImageSource) getOneSignature(url *url.URL) (signature []byte, mis
return sig, false, nil
default:
return nil, false, fmt.Errorf("Unsupported scheme when reading signature from %s", url.String())
return nil, false, errors.Errorf("Unsupported scheme when reading signature from %s", url.String())
}
}
// getSignaturesFromAPIExtension implements GetSignatures() using the X-Registry-Supports-Signatures API extension.
func (s *dockerImageSource) getSignaturesFromAPIExtension() ([][]byte, error) {
if err := s.ensureManifestIsLoaded(); err != nil {
return nil, err
}
manifestDigest, err := manifest.Digest(s.cachedManifest)
if err != nil {
return nil, err
}
parsedBody, err := s.c.getExtensionsSignatures(s.ref, manifestDigest)
if err != nil {
return nil, err
}
var sigs [][]byte
for _, sig := range parsedBody.Signatures {
if sig.Version == extensionSignatureSchemaVersion && sig.Type == extensionSignatureTypeAtomic {
sigs = append(sigs, sig.Content)
}
}
return sigs, nil
}
// deleteImage deletes the named image from the registry, if supported.
func deleteImage(ctx *types.SystemContext, ref dockerReference) error {
-c, err := newDockerClient(ctx, ref, true)
+c, err := newDockerClient(ctx, ref, true, "push")
if err != nil {
return err
}
@@ -222,12 +311,12 @@ func deleteImage(ctx *types.SystemContext, ref dockerReference) error {
headers := make(map[string][]string)
headers["Accept"] = []string{manifest.DockerV2Schema2MediaType}
-reference, err := ref.tagOrDigest()
+refTail, err := ref.tagOrDigest()
if err != nil {
return err
}
-getURL := fmt.Sprintf(manifestURL, ref.ref.RemoteName(), reference)
-get, err := c.makeRequest("GET", getURL, headers, nil)
+getPath := fmt.Sprintf(manifestPath, reference.Path(ref.ref), refTail)
+get, err := c.makeRequest("GET", getPath, headers, nil)
if err != nil {
return err
}
@@ -239,17 +328,17 @@ func deleteImage(ctx *types.SystemContext, ref dockerReference) error {
switch get.StatusCode {
case http.StatusOK:
case http.StatusNotFound:
return fmt.Errorf("Unable to delete %v. Image may not exist or is not stored with a v2 Schema in a v2 registry.", ref.ref)
return errors.Errorf("Unable to delete %v. Image may not exist or is not stored with a v2 Schema in a v2 registry", ref.ref)
default:
return fmt.Errorf("Failed to delete %v: %s (%v)", ref.ref, manifestBody, get.Status)
return errors.Errorf("Failed to delete %v: %s (%v)", ref.ref, manifestBody, get.Status)
}
digest := get.Header.Get("Docker-Content-Digest")
-deleteURL := fmt.Sprintf(manifestURL, ref.ref.RemoteName(), digest)
+deletePath := fmt.Sprintf(manifestPath, reference.Path(ref.ref), digest)
// When retrieving the digest from a registry >= 2.3 use the following header:
// "Accept": "application/vnd.docker.distribution.manifest.v2+json"
delete, err := c.makeRequest("DELETE", deleteURL, headers, nil)
delete, err := c.makeRequest("DELETE", deletePath, headers, nil)
if err != nil {
return err
}
@@ -260,7 +349,7 @@ func deleteImage(ctx *types.SystemContext, ref dockerReference) error {
return err
}
if delete.StatusCode != http.StatusAccepted {
return fmt.Errorf("Failed to delete %v: %s (%v)", deleteURL, string(body), delete.Status)
return errors.Errorf("Failed to delete %v: %s (%v)", deletePath, string(body), delete.Status)
}
if c.signatureBase != nil {
@@ -272,7 +361,7 @@ func deleteImage(ctx *types.SystemContext, ref dockerReference) error {
for i := 0; ; i++ {
url := signatureStorageURL(c.signatureBase, manifestDigest, i)
if url == nil {
return fmt.Errorf("Internal error: signatureStorageURL with non-nil base returned nil")
return errors.Errorf("Internal error: signatureStorageURL with non-nil base returned nil")
}
missing, err := c.deleteOneSignature(url)
if err != nil {

View File

@@ -5,10 +5,16 @@ import (
"strings"
"github.com/containers/image/docker/policyconfiguration"
"github.com/containers/image/docker/reference"
"github.com/containers/image/transports"
"github.com/containers/image/types"
"github.com/docker/docker/reference"
"github.com/pkg/errors"
)
func init() {
transports.Register(Transport)
}
// Transport is an ImageTransport for Docker registry-hosted images.
var Transport = dockerTransport{}
@@ -42,29 +48,30 @@ type dockerReference struct {
// ParseReference converts a string, which should not start with the ImageTransport.Name prefix, into an Docker ImageReference.
func ParseReference(refString string) (types.ImageReference, error) {
if !strings.HasPrefix(refString, "//") {
return nil, fmt.Errorf("docker: image reference %s does not start with //", refString)
return nil, errors.Errorf("docker: image reference %s does not start with //", refString)
}
-ref, err := reference.ParseNamed(strings.TrimPrefix(refString, "//"))
+ref, err := reference.ParseNormalizedNamed(strings.TrimPrefix(refString, "//"))
if err != nil {
return nil, err
}
-ref = reference.WithDefaultTag(ref)
+ref = reference.TagNameOnly(ref)
return NewReference(ref)
}
// NewReference returns a Docker reference for a named reference. The reference must satisfy !reference.IsNameOnly().
func NewReference(ref reference.Named) (types.ImageReference, error) {
if reference.IsNameOnly(ref) {
return nil, fmt.Errorf("Docker reference %s has neither a tag nor a digest", ref.String())
return nil, errors.Errorf("Docker reference %s has neither a tag nor a digest", reference.FamiliarString(ref))
}
// A github.com/distribution/reference value can have a tag and a digest at the same time!
-// docker/reference does not handle that, so fail.
+// The docker/distribution API does not really support that (we can't ask for an image with a specific
+// tag and digest), so fail. This MAY be accepted in the future.
// (Even if it were supported, the semantics of policy namespaces are unclear - should we drop
// the tag or the digest first?)
_, isTagged := ref.(reference.NamedTagged)
_, isDigested := ref.(reference.Canonical)
if isTagged && isDigested {
return nil, fmt.Errorf("Docker references with both a tag and digest are currently not supported")
return nil, errors.Errorf("Docker references with both a tag and digest are currently not supported")
}
return dockerReference{
ref: ref,
@@ -81,7 +88,7 @@ func (ref dockerReference) Transport() types.ImageTransport {
// e.g. default attribute values omitted by the user may be filled in in the return value, or vice versa.
// WARNING: Do not use the return value in the UI to describe an image, it does not contain the Transport().Name() prefix.
func (ref dockerReference) StringWithinTransport() string {
return "//" + ref.ref.String()
return "//" + reference.FamiliarString(ref.ref)
}
// DockerReference returns a Docker reference associated with this reference
@@ -115,8 +122,10 @@ func (ref dockerReference) PolicyConfigurationNamespaces() []string {
return policyconfiguration.DockerReferenceNamespaces(ref.ref)
}
-// NewImage returns a types.Image for this reference.
+// NewImage returns a types.Image for this reference, possibly specialized for this ImageTransport.
// The caller must call .Close() on the returned Image.
// NOTE: If any kind of signature verification should happen, build an UnparsedImage from the value returned by NewImageSource,
// verify that UnparsedImage, and convert it into a real Image via image.FromUnparsedImage.
func (ref dockerReference) NewImage(ctx *types.SystemContext) (types.Image, error) {
return newImage(ctx, ref)
}
@@ -149,5 +158,5 @@ func (ref dockerReference) tagOrDigest() (string, error) {
return ref.Tag(), nil
}
// This should not happen, NewReference above refuses reference.IsNameOnly values.
return "", fmt.Errorf("Internal inconsistency: Reference %s unexpectedly has neither a digest nor a tag", ref.ref.String())
return "", errors.Errorf("Internal inconsistency: Reference %s unexpectedly has neither a digest nor a tag", reference.FamiliarString(ref.ref))
}

View File

@@ -9,10 +9,12 @@ import (
"path/filepath"
"strings"
"github.com/ghodss/yaml"
"github.com/Sirupsen/logrus"
"github.com/containers/image/docker/reference"
"github.com/containers/image/types"
"github.com/ghodss/yaml"
"github.com/opencontainers/go-digest"
"github.com/pkg/errors"
)
// systemRegistriesDirPath is the path to registries.d, used for locating lookaside Docker signature storage.
@@ -59,12 +61,13 @@ func configuredSignatureStorageBase(ctx *types.SystemContext, ref dockerReferenc
url, err := url.Parse(topLevel)
if err != nil {
return nil, fmt.Errorf("Invalid signature storage URL %s: %v", topLevel, err)
return nil, errors.Wrapf(err, "Invalid signature storage URL %s", topLevel)
}
// NOTE: Keep this in sync with docs/signature-protocols.md!
// FIXME? Restrict to explicitly supported schemes?
-repo := ref.ref.FullName() // Note that this is without a tag or digest.
-if path.Clean(repo) != repo { // Coverage: This should not be reachable because /./ and /../ components are not valid in docker references
-return nil, fmt.Errorf("Unexpected path elements in Docker reference %s for signature storage", ref.ref.String())
+repo := reference.Path(ref.ref) // Note that this is without a tag or digest.
+if path.Clean(repo) != repo { // Coverage: This should not be reachable because /./ and /../ components are not valid in docker references
+return nil, errors.Errorf("Unexpected path elements in Docker reference %s for signature storage", ref.ref.String())
}
url.Path = url.Path + "/" + repo
return url, nil
@@ -113,12 +116,12 @@ func loadAndMergeConfig(dirPath string) (*registryConfiguration, error) {
var config registryConfiguration
err = yaml.Unmarshal(configBytes, &config)
if err != nil {
return nil, fmt.Errorf("Error parsing %s: %v", configPath, err)
return nil, errors.Wrapf(err, "Error parsing %s", configPath)
}
if config.DefaultDocker != nil {
if mergedConfig.DefaultDocker != nil {
return nil, fmt.Errorf(`Error parsing signature storage configuration: "default-docker" defined both in "%s" and "%s"`,
return nil, errors.Errorf(`Error parsing signature storage configuration: "default-docker" defined both in "%s" and "%s"`,
dockerDefaultMergedFrom, configPath)
}
mergedConfig.DefaultDocker = config.DefaultDocker
@@ -127,7 +130,7 @@ func loadAndMergeConfig(dirPath string) (*registryConfiguration, error) {
for nsName, nsConfig := range config.Docker { // includes config.Docker == nil
if _, ok := mergedConfig.Docker[nsName]; ok {
return nil, fmt.Errorf(`Error parsing signature storage configuration: "docker" namespace "%s" defined both in "%s" and "%s"`,
return nil, errors.Errorf(`Error parsing signature storage configuration: "docker" namespace "%s" defined both in "%s" and "%s"`,
nsName, nsMergedFrom[nsName], configPath)
}
mergedConfig.Docker[nsName] = nsConfig
@@ -188,11 +191,12 @@ func (ns registryNamespace) signatureTopLevel(write bool) string {
// signatureStorageURL returns a URL usable for accessing signature index in base with known manifestDigest, or nil if not applicable.
// Returns nil iff base == nil.
-func signatureStorageURL(base signatureStorageBase, manifestDigest string, index int) *url.URL {
+// NOTE: Keep this in sync with docs/signature-protocols.md!
+func signatureStorageURL(base signatureStorageBase, manifestDigest digest.Digest, index int) *url.URL {
if base == nil {
return nil
}
url := *base
url.Path = fmt.Sprintf("%s@%s/signature-%d", url.Path, manifestDigest, index+1)
url.Path = fmt.Sprintf("%s@%s=%s/signature-%d", url.Path, manifestDigest.Algorithm(), manifestDigest.Hex(), index+1)
return &url
}
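// Sketch of the layout change above (symbolic, no real digest): for a URL
// whose path is P and a manifest digest sha256:HEX, index 0 resolves to
//   old layout: P@sha256:HEX/signature-1
//   new layout: P@sha256=HEX/signature-1
// i.e. the algorithm and hex are now joined by "=", which avoids a ":" in the
// final path component.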

View File

@@ -1,25 +1,24 @@
package policyconfiguration
import (
"errors"
"fmt"
"strings"
"github.com/docker/docker/reference"
"github.com/containers/image/docker/reference"
"github.com/pkg/errors"
)
// DockerReferenceIdentity returns a string representation of the reference, suitable for policy lookup,
// as a backend for ImageReference.PolicyConfigurationIdentity.
// The reference must satisfy !reference.IsNameOnly().
func DockerReferenceIdentity(ref reference.Named) (string, error) {
-res := ref.FullName()
+res := ref.Name()
tagged, isTagged := ref.(reference.NamedTagged)
digested, isDigested := ref.(reference.Canonical)
switch {
case isTagged && isDigested: // This should not happen, docker/reference.ParseNamed drops the tag.
return "", fmt.Errorf("Unexpected Docker reference %s with both a name and a digest", ref.String())
case isTagged && isDigested: // Note that this CAN actually happen.
return "", errors.Errorf("Unexpected Docker reference %s with both a name and a digest", reference.FamiliarString(ref))
case !isTagged && !isDigested: // This should not happen, the caller is expected to ensure !reference.IsNameOnly()
return "", fmt.Errorf("Internal inconsistency: Docker reference %s with neither a tag nor a digest", ref.String())
return "", errors.Errorf("Internal inconsistency: Docker reference %s with neither a tag nor a digest", reference.FamiliarString(ref))
case isTagged:
res = res + ":" + tagged.Tag()
case isDigested:
@@ -43,7 +42,7 @@ func DockerReferenceNamespaces(ref reference.Named) []string {
// ref.Name() == ref.Domain() + "/" + ref.Path(), so the last
// iteration matches the host name (for any namespace).
res := []string{}
-name := ref.FullName()
+name := ref.Name()
for {
res = append(res, name)

View File

@@ -0,0 +1,2 @@
This is a copy of github.com/docker/distribution/reference as of commit fb0bebc4b64e3881cc52a2478d749845ed76d2a8,
except that ParseAnyReferenceWithSet has been removed to drop the dependency on github.com/docker/distribution/digestset.

View File

@@ -0,0 +1,42 @@
package reference
import "path"
// IsNameOnly returns true if reference only contains a repo name.
func IsNameOnly(ref Named) bool {
if _, ok := ref.(NamedTagged); ok {
return false
}
if _, ok := ref.(Canonical); ok {
return false
}
return true
}
// FamiliarName returns the familiar name string
// for the given named, familiarizing if needed.
func FamiliarName(ref Named) string {
if nn, ok := ref.(normalizedNamed); ok {
return nn.Familiar().Name()
}
return ref.Name()
}
// FamiliarString returns the familiar string representation
// for the given reference, familiarizing if needed.
func FamiliarString(ref Reference) string {
if nn, ok := ref.(normalizedNamed); ok {
return nn.Familiar().String()
}
return ref.String()
}
// FamiliarMatch reports whether ref matches the specified pattern.
// See https://godoc.org/path#Match for supported patterns.
func FamiliarMatch(pattern string, ref Reference) (bool, error) {
matched, err := path.Match(pattern, FamiliarString(ref))
if namedRef, isNamed := ref.(Named); isNamed && !matched {
matched, _ = path.Match(pattern, FamiliarName(namedRef))
}
return matched, err
}
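// Sketch (using the vendored ParseNormalizedNamed from this package):
//
//   ref, _ := ParseNormalizedNamed("docker.io/library/redis:alpine")
//   FamiliarMatch("redis:*", ref)   // true: the familiar string is "redis:alpine"
//   FamiliarMatch("library/*", ref) // false: the familiar form drops "library/"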

View File

@@ -0,0 +1,152 @@
package reference
import (
"errors"
"fmt"
"strings"
"github.com/opencontainers/go-digest"
)
var (
legacyDefaultDomain = "index.docker.io"
defaultDomain = "docker.io"
officialRepoName = "library"
defaultTag = "latest"
)
// normalizedNamed represents a name which has been
// normalized and has a familiar form. A familiar name
// is what is used in Docker UI. An example normalized
// name is "docker.io/library/ubuntu" and corresponding
// familiar name of "ubuntu".
type normalizedNamed interface {
Named
Familiar() Named
}
// ParseNormalizedNamed parses a string into a named reference
// transforming a familiar name from Docker UI to a fully
// qualified reference. If the value may be an identifier
// use ParseAnyReference.
func ParseNormalizedNamed(s string) (Named, error) {
if ok := anchoredIdentifierRegexp.MatchString(s); ok {
return nil, fmt.Errorf("invalid repository name (%s), cannot specify 64-byte hexadecimal strings", s)
}
domain, remainder := splitDockerDomain(s)
var remoteName string
if tagSep := strings.IndexRune(remainder, ':'); tagSep > -1 {
remoteName = remainder[:tagSep]
} else {
remoteName = remainder
}
if strings.ToLower(remoteName) != remoteName {
return nil, errors.New("invalid reference format: repository name must be lowercase")
}
ref, err := Parse(domain + "/" + remainder)
if err != nil {
return nil, err
}
named, isNamed := ref.(Named)
if !isNamed {
return nil, fmt.Errorf("reference %s has no name", ref.String())
}
return named, nil
}
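// Sketch: familiar Docker UI names normalize to fully qualified references.
//
//   ref, _ := ParseNormalizedNamed("ubuntu")
//   ref.String() // "docker.io/library/ubuntu"
//   ref, _ = ParseNormalizedNamed("quay.io/coreos/etcd")
//   ref.String() // "quay.io/coreos/etcd"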
// splitDockerDomain splits a repository name to domain and remotename string.
// If no valid domain is found, the default domain is used. Repository name
// needs to be already validated before.
func splitDockerDomain(name string) (domain, remainder string) {
i := strings.IndexRune(name, '/')
if i == -1 || (!strings.ContainsAny(name[:i], ".:") && name[:i] != "localhost") {
domain, remainder = defaultDomain, name
} else {
domain, remainder = name[:i], name[i+1:]
}
if domain == legacyDefaultDomain {
domain = defaultDomain
}
if domain == defaultDomain && !strings.ContainsRune(remainder, '/') {
remainder = officialRepoName + "/" + remainder
}
return
}
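// Sketch of splitDockerDomain results, derived from the rules above:
//   "ubuntu"                 -> ("docker.io", "library/ubuntu")
//   "index.docker.io/ubuntu" -> ("docker.io", "library/ubuntu")
//   "localhost/app"          -> ("localhost", "app")
//   "quay.io/coreos/etcd"    -> ("quay.io", "coreos/etcd")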
// familiarizeName returns a shortened version of the name familiar
// to the Docker UI. Familiar names have the default domain
// "docker.io" and "library/" repository prefix removed.
// For example, "docker.io/library/redis" will have the familiar
// name "redis" and "docker.io/dmcgowan/myapp" will be "dmcgowan/myapp".
// Returns a familiarized named only reference.
func familiarizeName(named namedRepository) repository {
repo := repository{
domain: named.Domain(),
path: named.Path(),
}
if repo.domain == defaultDomain {
repo.domain = ""
// Handle official repositories which have the pattern "library/<official repo name>"
if split := strings.Split(repo.path, "/"); len(split) == 2 && split[0] == officialRepoName {
repo.path = split[1]
}
}
return repo
}
func (r reference) Familiar() Named {
return reference{
namedRepository: familiarizeName(r.namedRepository),
tag: r.tag,
digest: r.digest,
}
}
func (r repository) Familiar() Named {
return familiarizeName(r)
}
func (t taggedReference) Familiar() Named {
return taggedReference{
namedRepository: familiarizeName(t.namedRepository),
tag: t.tag,
}
}
func (c canonicalReference) Familiar() Named {
return canonicalReference{
namedRepository: familiarizeName(c.namedRepository),
digest: c.digest,
}
}
// TagNameOnly adds the default tag "latest" to a reference if it only has
// a repo name.
func TagNameOnly(ref Named) Named {
if IsNameOnly(ref) {
namedTagged, err := WithTag(ref, defaultTag)
if err != nil {
// Default tag must be valid, to create a NamedTagged
// type with non-validated input the WithTag function
// should be used instead
panic(err)
}
return namedTagged
}
return ref
}
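// Sketch: TagNameOnly affects only name-only references.
//
//   n, _ := ParseNormalizedNamed("busybox")
//   TagNameOnly(n).String() // "docker.io/library/busybox:latest"
//   t, _ := ParseNormalizedNamed("busybox:musl")
//   TagNameOnly(t).String() // "docker.io/library/busybox:musl" (unchanged)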
// ParseAnyReference parses a reference string as a possible identifier,
// full digest, or familiar name.
func ParseAnyReference(ref string) (Reference, error) {
if ok := anchoredIdentifierRegexp.MatchString(ref); ok {
return digestReference("sha256:" + ref), nil
}
if dgst, err := digest.Parse(ref); err == nil {
return digestReference(dgst), nil
}
return ParseNormalizedNamed(ref)
}
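// Sketch: the three forms ParseAnyReference accepts, using the sha256 of the
// empty string as a stand-in identifier:
//
//   id := "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"
//   ParseAnyReference(id)             // digest reference "sha256:" + id
//   ParseAnyReference("sha256:" + id) // digest reference
//   ParseAnyReference("ubuntu")       // named "docker.io/library/ubuntu"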

View File

@@ -0,0 +1,433 @@
// Package reference provides a general type to represent any way of referencing images within the registry.
// Its main purpose is to abstract tags and digests (content-addressable hash).
//
// Grammar
//
// reference := name [ ":" tag ] [ "@" digest ]
// name := [domain '/'] path-component ['/' path-component]*
// domain := domain-component ['.' domain-component]* [':' port-number]
// domain-component := /([a-zA-Z0-9]|[a-zA-Z0-9][a-zA-Z0-9-]*[a-zA-Z0-9])/
// port-number := /[0-9]+/
// path-component := alpha-numeric [separator alpha-numeric]*
// alpha-numeric := /[a-z0-9]+/
// separator := /[_.]|__|[-]*/
//
// tag := /[\w][\w.-]{0,127}/
//
// digest := digest-algorithm ":" digest-hex
// digest-algorithm := digest-algorithm-component [ digest-algorithm-separator digest-algorithm-component ]
// digest-algorithm-separator := /[+.-_]/
// digest-algorithm-component := /[A-Za-z][A-Za-z0-9]*/
// digest-hex := /[0-9a-fA-F]{32,}/ ; At least 128 bit digest value
//
// identifier := /[a-f0-9]{64}/
// short-identifier := /[a-f0-9]{6,64}/
package reference
import (
"errors"
"fmt"
"strings"
"github.com/opencontainers/go-digest"
)
const (
// NameTotalLengthMax is the maximum total number of characters in a repository name.
NameTotalLengthMax = 255
)
var (
// ErrReferenceInvalidFormat represents an error while trying to parse a string as a reference.
ErrReferenceInvalidFormat = errors.New("invalid reference format")
// ErrTagInvalidFormat represents an error while trying to parse a string as a tag.
ErrTagInvalidFormat = errors.New("invalid tag format")
// ErrDigestInvalidFormat represents an error while trying to parse a string as a digest.
ErrDigestInvalidFormat = errors.New("invalid digest format")
// ErrNameContainsUppercase is returned for invalid repository names that contain uppercase characters.
ErrNameContainsUppercase = errors.New("repository name must be lowercase")
// ErrNameEmpty is returned for empty, invalid repository names.
ErrNameEmpty = errors.New("repository name must have at least one component")
// ErrNameTooLong is returned when a repository name is longer than NameTotalLengthMax.
ErrNameTooLong = fmt.Errorf("repository name must not be more than %v characters", NameTotalLengthMax)
// ErrNameNotCanonical is returned when a name is not canonical.
ErrNameNotCanonical = errors.New("repository name must be canonical")
)
// Reference is an opaque object reference identifier that may include
// modifiers such as a hostname, name, tag, and digest.
type Reference interface {
// String returns the full reference
String() string
}
// Field provides a wrapper type for resolving correct reference types when
// working with encoding.
type Field struct {
reference Reference
}
// AsField wraps a reference in a Field for encoding.
func AsField(reference Reference) Field {
return Field{reference}
}
// Reference unwraps the reference type from the field to
// return the Reference object. This object should be
// of the appropriate type to further check for different
// reference types.
func (f Field) Reference() Reference {
return f.reference
}
// MarshalText serializes the field to byte text which
// is the string of the reference.
func (f Field) MarshalText() (p []byte, err error) {
return []byte(f.reference.String()), nil
}
// UnmarshalText parses text bytes by invoking the
// reference parser to ensure the appropriately
// typed reference object is wrapped by field.
func (f *Field) UnmarshalText(p []byte) error {
r, err := Parse(string(p))
if err != nil {
return err
}
f.reference = r
return nil
}
// Named is an object with a full name
type Named interface {
Reference
Name() string
}
// Tagged is an object which has a tag
type Tagged interface {
Reference
Tag() string
}
// NamedTagged is an object including a name and tag.
type NamedTagged interface {
Named
Tag() string
}
// Digested is an object which has a digest
// in which it can be referenced by
type Digested interface {
Reference
Digest() digest.Digest
}
// Canonical reference is an object with a fully unique
// name including a name with domain and digest
type Canonical interface {
Named
Digest() digest.Digest
}
// namedRepository is a reference to a repository with a name.
// A namedRepository has both domain and path components.
type namedRepository interface {
Named
Domain() string
Path() string
}
// Domain returns the domain part of the Named reference
func Domain(named Named) string {
if r, ok := named.(namedRepository); ok {
return r.Domain()
}
domain, _ := splitDomain(named.Name())
return domain
}
// Path returns the name without the domain part of the Named reference
func Path(named Named) (name string) {
if r, ok := named.(namedRepository); ok {
return r.Path()
}
_, path := splitDomain(named.Name())
return path
}
func splitDomain(name string) (string, string) {
match := anchoredNameRegexp.FindStringSubmatch(name)
if len(match) != 3 {
return "", name
}
return match[1], match[2]
}
// SplitHostname splits a named reference into a
// hostname and name string. If no valid hostname is
// found, the hostname is empty and the full value
// is returned as name
// DEPRECATED: Use Domain or Path
func SplitHostname(named Named) (string, string) {
if r, ok := named.(namedRepository); ok {
return r.Domain(), r.Path()
}
return splitDomain(named.Name())
}
// Parse parses s and returns a syntactically valid Reference.
// If an error was encountered it is returned, along with a nil Reference.
// NOTE: Parse will not handle short digests.
func Parse(s string) (Reference, error) {
matches := ReferenceRegexp.FindStringSubmatch(s)
if matches == nil {
if s == "" {
return nil, ErrNameEmpty
}
if ReferenceRegexp.FindStringSubmatch(strings.ToLower(s)) != nil {
return nil, ErrNameContainsUppercase
}
return nil, ErrReferenceInvalidFormat
}
if len(matches[1]) > NameTotalLengthMax {
return nil, ErrNameTooLong
}
var repo repository
nameMatch := anchoredNameRegexp.FindStringSubmatch(matches[1])
if nameMatch != nil && len(nameMatch) == 3 {
repo.domain = nameMatch[1]
repo.path = nameMatch[2]
} else {
repo.domain = ""
repo.path = matches[1]
}
ref := reference{
namedRepository: repo,
tag: matches[2],
}
if matches[3] != "" {
var err error
ref.digest, err = digest.Parse(matches[3])
if err != nil {
return nil, err
}
}
r := getBestReferenceType(ref)
if r == nil {
return nil, ErrNameEmpty
}
return r, nil
}
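// Sketch: Parse is purely syntactic and never normalizes.
//
//   r, _ := Parse("example.com:5000/team/app:v1")
//   Domain(r.(Named)) // "example.com:5000"
//   Path(r.(Named))   // "team/app"
//   r, _ = Parse("ubuntu") // Named with an empty domain; contrast ParseNormalizedNamed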
// ParseNamed parses s and returns a syntactically valid reference implementing
// the Named interface. The reference must have a name and be in the canonical
// form, otherwise an error is returned.
// If an error was encountered it is returned, along with a nil Reference.
// NOTE: ParseNamed will not handle short digests.
func ParseNamed(s string) (Named, error) {
named, err := ParseNormalizedNamed(s)
if err != nil {
return nil, err
}
if named.String() != s {
return nil, ErrNameNotCanonical
}
return named, nil
}
// WithName returns a named object representing the given string. If the input
// is invalid ErrReferenceInvalidFormat will be returned.
func WithName(name string) (Named, error) {
if len(name) > NameTotalLengthMax {
return nil, ErrNameTooLong
}
match := anchoredNameRegexp.FindStringSubmatch(name)
if match == nil || len(match) != 3 {
return nil, ErrReferenceInvalidFormat
}
return repository{
domain: match[1],
path: match[2],
}, nil
}
// WithTag combines the name from "name" and the tag from "tag" to form a
// reference incorporating both the name and the tag.
func WithTag(name Named, tag string) (NamedTagged, error) {
if !anchoredTagRegexp.MatchString(tag) {
return nil, ErrTagInvalidFormat
}
var repo repository
if r, ok := name.(namedRepository); ok {
repo.domain = r.Domain()
repo.path = r.Path()
} else {
repo.path = name.Name()
}
if canonical, ok := name.(Canonical); ok {
return reference{
namedRepository: repo,
tag: tag,
digest: canonical.Digest(),
}, nil
}
return taggedReference{
namedRepository: repo,
tag: tag,
}, nil
}
// WithDigest combines the name from "name" and the digest from "digest" to form
// a reference incorporating both the name and the digest.
func WithDigest(name Named, digest digest.Digest) (Canonical, error) {
if !anchoredDigestRegexp.MatchString(digest.String()) {
return nil, ErrDigestInvalidFormat
}
var repo repository
if r, ok := name.(namedRepository); ok {
repo.domain = r.Domain()
repo.path = r.Path()
} else {
repo.path = name.Name()
}
if tagged, ok := name.(Tagged); ok {
return reference{
namedRepository: repo,
tag: tagged.Tag(),
digest: digest,
}, nil
}
return canonicalReference{
namedRepository: repo,
digest: digest,
}, nil
}
// TrimNamed removes any tag or digest from the named reference.
func TrimNamed(ref Named) Named {
domain, path := SplitHostname(ref)
return repository{
domain: domain,
path: path,
}
}
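// Sketch: TrimNamed keeps only the repository name.
//
//   n, _ := ParseNormalizedNamed("docker.io/library/redis:alpine")
//   TrimNamed(n).String() // "docker.io/library/redis"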
func getBestReferenceType(ref reference) Reference {
if ref.Name() == "" {
// Allow digest only references
if ref.digest != "" {
return digestReference(ref.digest)
}
return nil
}
if ref.tag == "" {
if ref.digest != "" {
return canonicalReference{
namedRepository: ref.namedRepository,
digest: ref.digest,
}
}
return ref.namedRepository
}
if ref.digest == "" {
return taggedReference{
namedRepository: ref.namedRepository,
tag: ref.tag,
}
}
return ref
}
type reference struct {
namedRepository
tag string
digest digest.Digest
}
func (r reference) String() string {
return r.Name() + ":" + r.tag + "@" + r.digest.String()
}
func (r reference) Tag() string {
return r.tag
}
func (r reference) Digest() digest.Digest {
return r.digest
}
type repository struct {
domain string
path string
}
func (r repository) String() string {
return r.Name()
}
func (r repository) Name() string {
if r.domain == "" {
return r.path
}
return r.domain + "/" + r.path
}
func (r repository) Domain() string {
return r.domain
}
func (r repository) Path() string {
return r.path
}
type digestReference digest.Digest
func (d digestReference) String() string {
return digest.Digest(d).String()
}
func (d digestReference) Digest() digest.Digest {
return digest.Digest(d)
}
type taggedReference struct {
namedRepository
tag string
}
func (t taggedReference) String() string {
return t.Name() + ":" + t.tag
}
func (t taggedReference) Tag() string {
return t.tag
}
type canonicalReference struct {
namedRepository
digest digest.Digest
}
func (c canonicalReference) String() string {
return c.Name() + "@" + c.digest.String()
}
func (c canonicalReference) Digest() digest.Digest {
return c.digest
}

View File

@@ -0,0 +1,143 @@
package reference
import "regexp"
var (
// alphaNumericRegexp defines the alpha numeric atom, typically a
// component of names. This only allows lower case characters and digits.
alphaNumericRegexp = match(`[a-z0-9]+`)
// separatorRegexp defines the separators allowed to be embedded in name
// components. This allows one period, one or two underscores, and multiple
// dashes.
separatorRegexp = match(`(?:[._]|__|[-]*)`)
// nameComponentRegexp restricts registry path component names to start
// with at least one letter or number, with following parts able to be
// separated by one period, one or two underscores, and multiple dashes.
nameComponentRegexp = expression(
alphaNumericRegexp,
optional(repeated(separatorRegexp, alphaNumericRegexp)))
// domainComponentRegexp restricts the registry domain component of a
// repository name to start with a component as defined by domainRegexp
// and followed by an optional port.
domainComponentRegexp = match(`(?:[a-zA-Z0-9]|[a-zA-Z0-9][a-zA-Z0-9-]*[a-zA-Z0-9])`)
// domainRegexp defines the structure of potential domain components
// that may be part of image names. This is purposely a subset of what is
// allowed by DNS to ensure backwards compatibility with Docker image
// names.
domainRegexp = expression(
domainComponentRegexp,
optional(repeated(literal(`.`), domainComponentRegexp)),
optional(literal(`:`), match(`[0-9]+`)))
// TagRegexp matches valid tag names. From docker/docker:graph/tags.go.
TagRegexp = match(`[\w][\w.-]{0,127}`)
// anchoredTagRegexp matches valid tag names, anchored at the start and
// end of the matched string.
anchoredTagRegexp = anchored(TagRegexp)
// DigestRegexp matches valid digests.
DigestRegexp = match(`[A-Za-z][A-Za-z0-9]*(?:[-_+.][A-Za-z][A-Za-z0-9]*)*[:][[:xdigit:]]{32,}`)
// anchoredDigestRegexp matches valid digests, anchored at the start and
// end of the matched string.
anchoredDigestRegexp = anchored(DigestRegexp)
// NameRegexp is the format for the name component of references. The
// regexp has capturing groups for the domain and name part omitting
// the separating forward slash from either.
NameRegexp = expression(
optional(domainRegexp, literal(`/`)),
nameComponentRegexp,
optional(repeated(literal(`/`), nameComponentRegexp)))
// anchoredNameRegexp is used to parse a name value, capturing the
// domain and trailing components.
anchoredNameRegexp = anchored(
optional(capture(domainRegexp), literal(`/`)),
capture(nameComponentRegexp,
optional(repeated(literal(`/`), nameComponentRegexp))))
// ReferenceRegexp is the full supported format of a reference. The regexp
// is anchored and has capturing groups for name, tag, and digest
// components.
ReferenceRegexp = anchored(capture(NameRegexp),
optional(literal(":"), capture(TagRegexp)),
optional(literal("@"), capture(DigestRegexp)))
// IdentifierRegexp is the format for string identifier used as a
// content addressable identifier using sha256. These identifiers
// are like digests without the algorithm, since sha256 is used.
IdentifierRegexp = match(`([a-f0-9]{64})`)
// ShortIdentifierRegexp is the format used to represent a prefix
// of an identifier. A prefix may be used to match a sha256 identifier
// within a list of trusted identifiers.
ShortIdentifierRegexp = match(`([a-f0-9]{6,64})`)
// anchoredIdentifierRegexp is used to check or match an
// identifier value, anchored at start and end of string.
anchoredIdentifierRegexp = anchored(IdentifierRegexp)
// anchoredShortIdentifierRegexp is used to check if a value
// is a possible identifier prefix, anchored at start and end
// of string.
anchoredShortIdentifierRegexp = anchored(ShortIdentifierRegexp)
)
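// Sketch: the three capture groups of ReferenceRegexp, where HEX stands for
// any 64-character hexadecimal string:
//
//   m := ReferenceRegexp.FindStringSubmatch("r.example.com/app:v1@sha256:HEX")
//   // m[1] == "r.example.com/app", m[2] == "v1", m[3] == "sha256:HEX"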
// match compiles the string to a regular expression.
var match = regexp.MustCompile
// literal compiles s into a literal regular expression, escaping any regexp
// reserved characters.
func literal(s string) *regexp.Regexp {
re := match(regexp.QuoteMeta(s))
if _, complete := re.LiteralPrefix(); !complete {
panic("must be a literal")
}
return re
}
// expression defines a full expression, where each regular expression must
// follow the previous.
func expression(res ...*regexp.Regexp) *regexp.Regexp {
var s string
for _, re := range res {
s += re.String()
}
return match(s)
}
// optional wraps the expression in a non-capturing group and makes the
// production optional.
func optional(res ...*regexp.Regexp) *regexp.Regexp {
return match(group(expression(res...)).String() + `?`)
}
// repeated wraps the regexp in a non-capturing group to get one or more
// matches.
func repeated(res ...*regexp.Regexp) *regexp.Regexp {
return match(group(expression(res...)).String() + `+`)
}
// group wraps the regexp in a non-capturing group.
func group(res ...*regexp.Regexp) *regexp.Regexp {
return match(`(?:` + expression(res...).String() + `)`)
}
// capture wraps the expression in a capturing group.
func capture(res ...*regexp.Regexp) *regexp.Regexp {
return match(`(` + expression(res...).String() + `)`)
}
// anchored anchors the regular expression by adding start and end delimiters.
func anchored(res ...*regexp.Regexp) *regexp.Regexp {
return match(`^` + expression(res...).String() + `$`)
}

View File

@@ -0,0 +1,250 @@
package tarfile
import (
"archive/tar"
"bytes"
"encoding/json"
"fmt"
"io"
"io/ioutil"
"os"
"time"
"github.com/Sirupsen/logrus"
"github.com/containers/image/docker/reference"
"github.com/containers/image/manifest"
"github.com/containers/image/types"
"github.com/opencontainers/go-digest"
"github.com/pkg/errors"
)
const temporaryDirectoryForBigFiles = "/var/tmp" // Do not use the system default of os.TempDir(), usually /tmp, because with systemd it could be a tmpfs.
// Destination is a partial implementation of types.ImageDestination for writing to an io.Writer.
type Destination struct {
writer io.Writer
tar *tar.Writer
repoTag string
// Other state.
blobs map[digest.Digest]types.BlobInfo // list of already-sent blobs
}
// NewDestination returns a tarfile.Destination for the specified io.Writer.
func NewDestination(dest io.Writer, ref reference.NamedTagged) *Destination {
// For github.com/docker/docker consumers, this works just as well as
// refString := ref.String()
// because when reading the RepoTags strings, github.com/docker/docker/reference
// normalizes both of them to the same value.
//
// Doing it this way to include the normalized-out `docker.io[/library]` does make
// a difference for github.com/projectatomic/docker consumers, with the
// “Add --add-registry and --block-registry options to docker daemon” patch.
// These consumers treat reference strings which include a hostname and reference
// strings without a hostname differently.
//
// Using the host name here is more explicit about the intent, and it has the same
// effect as (docker pull) in projectatomic/docker, which tags the result using
// a hostname-qualified reference.
// See https://github.com/containers/image/issues/72 for a more detailed
// analysis and explanation.
refString := fmt.Sprintf("%s:%s", ref.Name(), ref.Tag())
return &Destination{
writer: dest,
tar: tar.NewWriter(dest),
repoTag: refString,
blobs: make(map[digest.Digest]types.BlobInfo),
}
}
// SupportedManifestMIMETypes tells which manifest mime types the destination supports
// If an empty slice or nil it's returned, then any mime type can be tried to upload
func (d *Destination) SupportedManifestMIMETypes() []string {
return []string{
manifest.DockerV2Schema2MediaType, // We rely on the types.Image.UpdatedImage schema conversion capabilities.
}
}
// SupportsSignatures returns an error (to be displayed to the user) if the destination certainly can't store signatures.
// Note: It is still possible for PutSignatures to fail if SupportsSignatures returns nil.
func (d *Destination) SupportsSignatures() error {
return errors.Errorf("Storing signatures for docker tar files is not supported")
}
// ShouldCompressLayers returns true iff it is desirable to compress layer blobs written to this destination.
func (d *Destination) ShouldCompressLayers() bool {
return false
}
// AcceptsForeignLayerURLs returns false iff foreign layers in manifest should be actually
// uploaded to the image destination, true otherwise.
func (d *Destination) AcceptsForeignLayerURLs() bool {
return false
}
// PutBlob writes contents of stream and returns data representing the result (with all data filled in).
// inputInfo.Digest can be optionally provided if known; it is not mandatory for the implementation to verify it.
// inputInfo.Size is the expected length of stream, if known.
// WARNING: The contents of stream are being verified on the fly. Until stream.Read() returns io.EOF, the contents of the data SHOULD NOT be available
// to any other readers for download using the supplied digest.
// If stream.Read() at any time, ESPECIALLY at end of input, returns an error, PutBlob MUST 1) fail, and 2) delete any data stored so far.
func (d *Destination) PutBlob(stream io.Reader, inputInfo types.BlobInfo) (types.BlobInfo, error) {
if inputInfo.Digest.String() == "" {
return types.BlobInfo{}, errors.Errorf("Can not stream a blob with unknown digest to docker tarfile")
}
ok, size, err := d.HasBlob(inputInfo)
if err != nil {
return types.BlobInfo{}, err
}
if ok {
return types.BlobInfo{Digest: inputInfo.Digest, Size: size}, nil
}
if inputInfo.Size == -1 { // Ouch, we need to stream the blob into a temporary file just to determine the size.
logrus.Debugf("docker tarfile: input with unknown size, streaming to disk first ...")
streamCopy, err := ioutil.TempFile(temporaryDirectoryForBigFiles, "docker-tarfile-blob")
if err != nil {
return types.BlobInfo{}, err
}
defer os.Remove(streamCopy.Name())
defer streamCopy.Close()
size, err := io.Copy(streamCopy, stream)
if err != nil {
return types.BlobInfo{}, err
}
_, err = streamCopy.Seek(0, os.SEEK_SET)
if err != nil {
return types.BlobInfo{}, err
}
inputInfo.Size = size // inputInfo is a struct, so we are only modifying our copy.
stream = streamCopy
logrus.Debugf("... streaming done")
}
digester := digest.Canonical.Digester()
tee := io.TeeReader(stream, digester.Hash())
if err := d.sendFile(inputInfo.Digest.String(), inputInfo.Size, tee); err != nil {
return types.BlobInfo{}, err
}
d.blobs[inputInfo.Digest] = types.BlobInfo{Digest: digester.Digest(), Size: inputInfo.Size}
return types.BlobInfo{Digest: digester.Digest(), Size: inputInfo.Size}, nil
}
// HasBlob returns true iff the image destination already contains a blob with
// the matching digest which can be reapplied using ReapplyBlob. Unlike
// PutBlob, the digest can not be empty. If HasBlob returns true, the size of
// the blob must also be returned. If the destination does not contain the
// blob, or it is unknown, HasBlob ordinarily returns (false, -1, nil); it
// returns a non-nil error only on an unexpected failure.
func (d *Destination) HasBlob(info types.BlobInfo) (bool, int64, error) {
if info.Digest == "" {
return false, -1, errors.Errorf("Can not check for a blob with unknown digest")
}
if blob, ok := d.blobs[info.Digest]; ok {
return true, blob.Size, nil
}
return false, -1, nil
}
// ReapplyBlob informs the image destination that a blob for which HasBlob
// previously returned true would have been passed to PutBlob if it had
// returned false. Like HasBlob and unlike PutBlob, the digest can not be
// empty. If the blob is a filesystem layer, this signifies that the changes
// it describes need to be applied again when composing a filesystem tree.
func (d *Destination) ReapplyBlob(info types.BlobInfo) (types.BlobInfo, error) {
return info, nil
}
// PutManifest sends the given manifest blob to the destination.
// FIXME? This should also receive a MIME type if known, to differentiate
// between schema versions.
func (d *Destination) PutManifest(m []byte) error {
var man schema2Manifest
if err := json.Unmarshal(m, &man); err != nil {
return errors.Wrap(err, "Error parsing manifest")
}
if man.SchemaVersion != 2 || man.MediaType != manifest.DockerV2Schema2MediaType {
return errors.Errorf("Unsupported manifest type, need a Docker schema 2 manifest")
}
layerPaths := []string{}
for _, l := range man.Layers {
layerPaths = append(layerPaths, l.Digest.String())
}
items := []manifestItem{{
Config: man.Config.Digest.String(),
RepoTags: []string{d.repoTag},
Layers: layerPaths,
Parent: "",
LayerSources: nil,
}}
itemsBytes, err := json.Marshal(&items)
if err != nil {
return err
}
// FIXME? Do we also need to support the legacy format?
return d.sendFile(manifestFileName, int64(len(itemsBytes)), bytes.NewReader(itemsBytes))
}
type tarFI struct {
path string
size int64
}
func (t *tarFI) Name() string {
return t.path
}
func (t *tarFI) Size() int64 {
return t.size
}
func (t *tarFI) Mode() os.FileMode {
return 0444
}
func (t *tarFI) ModTime() time.Time {
return time.Unix(0, 0)
}
func (t *tarFI) IsDir() bool {
return false
}
func (t *tarFI) Sys() interface{} {
return nil
}
// sendFile sends a file into the tar stream.
func (d *Destination) sendFile(path string, expectedSize int64, stream io.Reader) error {
hdr, err := tar.FileInfoHeader(&tarFI{path: path, size: expectedSize}, "")
if err != nil {
return err
}
logrus.Debugf("Sending as tar file %s", path)
if err := d.tar.WriteHeader(hdr); err != nil {
return err
}
size, err := io.Copy(d.tar, stream)
if err != nil {
return err
}
if size != expectedSize {
return errors.Errorf("Size mismatch when copying %s, expected %d, got %d", path, expectedSize, size)
}
return nil
}
// PutSignatures adds the given signatures to the docker tarfile (currently not
// supported). MUST be called after PutManifest (signatures reference manifest
// contents)
func (d *Destination) PutSignatures(signatures [][]byte) error {
if len(signatures) != 0 {
return errors.Errorf("Storing signatures for docker tar files is not supported")
}
return nil
}
// Commit finishes writing data to the underlying io.Writer.
// It is the caller's responsibility to close it, if necessary.
func (d *Destination) Commit() error {
return d.tar.Close()
}
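// Hypothetical usage sketch (the file name and reference are assumptions):
//
//   f, _ := os.Create("app.tar")
//   named, _ := reference.ParseNormalizedNamed("example.com/team/app:v1")
//   dest := NewDestination(f, named.(reference.NamedTagged))
//   // PutBlob each layer and the config blob, then PutManifest, then:
//   _ = dest.Commit() // finishes the tar stream; the caller still closes f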

View File

@@ -0,0 +1,3 @@
// Package tarfile is an internal implementation detail of some transports.
// Do not use outside of the github.com/containers/image repo!
package tarfile

View File

@@ -0,0 +1,352 @@
package tarfile
import (
"archive/tar"
"bytes"
"encoding/json"
"io"
"io/ioutil"
"os"
"path"
"github.com/containers/image/manifest"
"github.com/containers/image/pkg/compression"
"github.com/containers/image/types"
"github.com/opencontainers/go-digest"
"github.com/pkg/errors"
)
// Source is a partial implementation of types.ImageSource for reading from tarPath.
type Source struct {
tarPath string
// The following data is only available after ensureCachedDataIsPresent() succeeds
tarManifest *manifestItem // nil if not available yet.
configBytes []byte
configDigest digest.Digest
orderedDiffIDList []diffID
knownLayers map[diffID]*layerInfo
// Other state
generatedManifest []byte // Private cache for GetManifest(), nil if not set yet.
}
type layerInfo struct {
path string
size int64
}
// NewSource returns a tarfile.Source for the specified path.
func NewSource(path string) *Source {
// TODO: We could add support for multiple images in a single archive, so
// that people could use docker-archive:opensuse.tar:opensuse:leap as
// the source of an image.
return &Source{
tarPath: path,
}
}
// tarReadCloser is a way to close the backing file of a tar.Reader when the user no longer needs the tar component.
type tarReadCloser struct {
*tar.Reader
backingFile *os.File
}
func (t *tarReadCloser) Close() error {
return t.backingFile.Close()
}
// openTarComponent returns a ReadCloser for the specific file within the archive.
// This is a linear scan; we assume that the tar file will have a fairly small number of files (~layers),
// and that filesystem caching will make the repeated seeking over the (uncompressed) tarPath cheap enough.
// The caller should call .Close() on the returned stream.
func (s *Source) openTarComponent(componentPath string) (io.ReadCloser, error) {
f, err := os.Open(s.tarPath)
if err != nil {
return nil, err
}
succeeded := false
defer func() {
if !succeeded {
f.Close()
}
}()
tarReader, header, err := findTarComponent(f, componentPath)
if err != nil {
return nil, err
}
if header == nil {
return nil, os.ErrNotExist
}
if header.FileInfo().Mode()&os.ModeType == os.ModeSymlink { // FIXME: untested
// We follow only one symlink; so no loops are possible.
if _, err := f.Seek(0, os.SEEK_SET); err != nil {
return nil, err
}
// The new path could easily point "outside" the archive, but we only compare it to existing tar headers without extracting the archive,
// so we don't care.
tarReader, header, err = findTarComponent(f, path.Join(path.Dir(componentPath), header.Linkname))
if err != nil {
return nil, err
}
if header == nil {
return nil, os.ErrNotExist
}
}
if !header.FileInfo().Mode().IsRegular() {
return nil, errors.Errorf("Error reading tar archive component %s: not a regular file", header.Name)
}
succeeded = true
return &tarReadCloser{Reader: tarReader, backingFile: f}, nil
}
// findTarComponent returns a header and a reader matching path within inputFile,
// or (nil, nil, nil) if not found.
func findTarComponent(inputFile io.Reader, path string) (*tar.Reader, *tar.Header, error) {
t := tar.NewReader(inputFile)
for {
h, err := t.Next()
if err == io.EOF {
break
}
if err != nil {
return nil, nil, err
}
if h.Name == path {
return t, h, nil
}
}
return nil, nil, nil
}
// readTarComponent returns full contents of componentPath.
func (s *Source) readTarComponent(path string) ([]byte, error) {
file, err := s.openTarComponent(path)
if err != nil {
return nil, errors.Wrapf(err, "Error loading tar component %s", path)
}
defer file.Close()
bytes, err := ioutil.ReadAll(file)
if err != nil {
return nil, err
}
return bytes, nil
}
// ensureCachedDataIsPresent loads data necessary for any of the public accessors.
func (s *Source) ensureCachedDataIsPresent() error {
if s.tarManifest != nil {
return nil
}
// Read and parse manifest.json
tarManifest, err := s.loadTarManifest()
if err != nil {
return err
}
// Read and parse config.
configBytes, err := s.readTarComponent(tarManifest.Config)
if err != nil {
return err
}
var parsedConfig image // Most fields omitted, we only care about layer DiffIDs.
if err := json.Unmarshal(configBytes, &parsedConfig); err != nil {
return errors.Wrapf(err, "Error decoding tar config %s", tarManifest.Config)
}
knownLayers, err := s.prepareLayerData(tarManifest, &parsedConfig)
if err != nil {
return err
}
// Success; commit.
s.tarManifest = tarManifest
s.configBytes = configBytes
s.configDigest = digest.FromBytes(configBytes)
s.orderedDiffIDList = parsedConfig.RootFS.DiffIDs
s.knownLayers = knownLayers
return nil
}
// loadTarManifest loads and decodes the manifest.json.
func (s *Source) loadTarManifest() (*manifestItem, error) {
// FIXME? Do we need to deal with the legacy format?
bytes, err := s.readTarComponent(manifestFileName)
if err != nil {
return nil, err
}
var items []manifestItem
if err := json.Unmarshal(bytes, &items); err != nil {
return nil, errors.Wrap(err, "Error decoding tar manifest.json")
}
if len(items) != 1 {
return nil, errors.Errorf("Unexpected tar manifest.json: expected 1 item, got %d", len(items))
}
return &items[0], nil
}
func (s *Source) prepareLayerData(tarManifest *manifestItem, parsedConfig *image) (map[diffID]*layerInfo, error) {
// Collect layer data available in manifest and config.
if len(tarManifest.Layers) != len(parsedConfig.RootFS.DiffIDs) {
return nil, errors.Errorf("Inconsistent layer count: %d in manifest, %d in config", len(tarManifest.Layers), len(parsedConfig.RootFS.DiffIDs))
}
knownLayers := map[diffID]*layerInfo{}
unknownLayerSizes := map[string]*layerInfo{} // Points into knownLayers, a "to do list" of items with unknown sizes.
for i, diffID := range parsedConfig.RootFS.DiffIDs {
if _, ok := knownLayers[diffID]; ok {
// Apparently it really can happen that a single image contains the same layer diff more than once.
// In that case, the diffID validation ensures that both layers truly are the same, and it should not matter
// which of the tarManifest.Layers paths is used; (docker save) actually makes the duplicates symlinks to the original.
continue
}
layerPath := tarManifest.Layers[i]
if _, ok := unknownLayerSizes[layerPath]; ok {
return nil, errors.Errorf("Layer tarfile %s used for two different DiffID values", layerPath)
}
li := &layerInfo{ // A new element in each iteration
path: layerPath,
size: -1,
}
knownLayers[diffID] = li
unknownLayerSizes[layerPath] = li
}
// Scan the tar file to collect layer sizes.
file, err := os.Open(s.tarPath)
if err != nil {
return nil, err
}
defer file.Close()
t := tar.NewReader(file)
for {
h, err := t.Next()
if err == io.EOF {
break
}
if err != nil {
return nil, err
}
if li, ok := unknownLayerSizes[h.Name]; ok {
li.size = h.Size
delete(unknownLayerSizes, h.Name)
}
}
if len(unknownLayerSizes) != 0 {
return nil, errors.Errorf("Some layer tarfiles are missing in the tarball") // This could do with a better error reporting, if this ever happened in practice.
}
return knownLayers, nil
}
// GetManifest returns the image's manifest along with its MIME type (which may be empty when it can't be determined but the manifest is available).
// It may use a remote (= slow) service.
func (s *Source) GetManifest() ([]byte, string, error) {
if s.generatedManifest == nil {
if err := s.ensureCachedDataIsPresent(); err != nil {
return nil, "", err
}
m := schema2Manifest{
SchemaVersion: 2,
MediaType: manifest.DockerV2Schema2MediaType,
Config: distributionDescriptor{
MediaType: manifest.DockerV2Schema2ConfigMediaType,
Size: int64(len(s.configBytes)),
Digest: s.configDigest,
},
Layers: []distributionDescriptor{},
}
for _, diffID := range s.orderedDiffIDList {
li, ok := s.knownLayers[diffID]
if !ok {
return nil, "", errors.Errorf("Internal inconsistency: Information about layer %s missing", diffID)
}
m.Layers = append(m.Layers, distributionDescriptor{
Digest: digest.Digest(diffID), // diffID is a digest of the uncompressed tarball
MediaType: manifest.DockerV2Schema2LayerMediaType,
Size: li.size,
})
}
manifestBytes, err := json.Marshal(&m)
if err != nil {
return nil, "", err
}
s.generatedManifest = manifestBytes
}
return s.generatedManifest, manifest.DockerV2Schema2MediaType, nil
}
// GetTargetManifest returns an image's manifest given a digest. This is mainly used to retrieve a single image's manifest
// out of a manifest list.
func (s *Source) GetTargetManifest(digest digest.Digest) ([]byte, string, error) {
// How did we even get here? GetManifest() above has returned a manifest.DockerV2Schema2MediaType.
return nil, "", errors.Errorf(`Manifest lists are not supported by "docker-daemon:"`)
}
type readCloseWrapper struct {
io.Reader
closeFunc func() error
}
func (r readCloseWrapper) Close() error {
if r.closeFunc != nil {
return r.closeFunc()
}
return nil
}
// GetBlob returns a stream for the specified blob, and the blobs size (or -1 if unknown).
func (s *Source) GetBlob(info types.BlobInfo) (io.ReadCloser, int64, error) {
if err := s.ensureCachedDataIsPresent(); err != nil {
return nil, 0, err
}
if info.Digest == s.configDigest { // FIXME? Implement a more general algorithm matching instead of assuming sha256.
return ioutil.NopCloser(bytes.NewReader(s.configBytes)), int64(len(s.configBytes)), nil
}
if li, ok := s.knownLayers[diffID(info.Digest)]; ok { // diffID is a digest of the uncompressed tarball,
stream, err := s.openTarComponent(li.path)
if err != nil {
return nil, 0, err
}
// In order to handle the fact that digests != diffIDs (and thus that a
// caller which is trying to verify the blob will run into problems),
// we need to decompress blobs. This is a bit ugly, but it's a
// consequence of making everything addressable by their DiffID rather
// than by their digest...
//
// In particular, because the v2s2 manifest being generated uses
// DiffIDs, any caller of GetBlob is going to be asking for DiffIDs of
// layers not their _actual_ digest. The result is that copy/... will
// be verifying a "digest" which is not the actual layer's digest (but
// is instead the DiffID).
decompressFunc, reader, err := compression.DetectCompression(stream)
if err != nil {
return nil, 0, errors.Wrapf(err, "Detecting compression in blob %s", info.Digest)
}
if decompressFunc != nil {
reader, err = decompressFunc(reader)
if err != nil {
return nil, 0, errors.Wrapf(err, "Decompressing blob %s stream", info.Digest)
}
}
newStream := readCloseWrapper{
Reader: reader,
closeFunc: stream.Close,
}
return newStream, li.size, nil
}
return nil, 0, errors.Errorf("Unknown blob %s", info.Digest)
}
// GetSignatures returns the image's signatures. It may use a remote (= slow) service.
func (s *Source) GetSignatures() ([][]byte, error) {
return [][]byte{}, nil
}
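// Hypothetical usage sketch (the tar path is an assumption):
//
//   src := NewSource("/var/tmp/app.tar")
//   manifestBytes, mimeType, _ := src.GetManifest()
//   // mimeType is manifest.DockerV2Schema2MediaType; the manifest itself is
//   // synthesized from manifest.json and the image config on first use.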

View File

@@ -0,0 +1,53 @@
package tarfile
import "github.com/opencontainers/go-digest"
// Various data structures.
// Based on github.com/docker/docker/image/tarexport/tarexport.go
const (
manifestFileName = "manifest.json"
// legacyLayerFileName = "layer.tar"
// legacyConfigFileName = "json"
// legacyVersionFileName = "VERSION"
// legacyRepositoriesFileName = "repositories"
)
type manifestItem struct {
Config string
RepoTags []string
Layers []string
Parent imageID `json:",omitempty"`
LayerSources map[diffID]distributionDescriptor `json:",omitempty"`
}
type imageID string
type diffID digest.Digest
// Based on github.com/docker/distribution/blobs.go
type distributionDescriptor struct {
MediaType string `json:"mediaType,omitempty"`
Size int64 `json:"size,omitempty"`
Digest digest.Digest `json:"digest,omitempty"`
URLs []string `json:"urls,omitempty"`
}
// Based on github.com/docker/distribution/manifest/schema2/manifest.go
// FIXME: We are repeating this all over the place; make a public copy?
type schema2Manifest struct {
SchemaVersion int `json:"schemaVersion"`
MediaType string `json:"mediaType,omitempty"`
Config distributionDescriptor `json:"config"`
Layers []distributionDescriptor `json:"layers"`
}
// Based on github.com/docker/docker/image/image.go
// MOST CONTENT OMITTED AS UNNECESSARY
type image struct {
RootFS *rootFS `json:"rootfs,omitempty"`
}
type rootFS struct {
Type string `json:"type"`
DiffIDs []diffID `json:"diff_ids,omitempty"`
}

View File

@@ -0,0 +1,63 @@
package image
import (
"encoding/json"
"runtime"
"github.com/containers/image/manifest"
"github.com/containers/image/types"
"github.com/opencontainers/go-digest"
"github.com/pkg/errors"
)
type platformSpec struct {
Architecture string `json:"architecture"`
OS string `json:"os"`
OSVersion string `json:"os.version,omitempty"`
OSFeatures []string `json:"os.features,omitempty"`
Variant string `json:"variant,omitempty"`
Features []string `json:"features,omitempty"`
}
// A manifestDescriptor references a platform-specific manifest.
type manifestDescriptor struct {
descriptor
Platform platformSpec `json:"platform"`
}
type manifestList struct {
SchemaVersion int `json:"schemaVersion"`
MediaType string `json:"mediaType"`
Manifests []manifestDescriptor `json:"manifests"`
}
func manifestSchema2FromManifestList(src types.ImageSource, manblob []byte) (genericManifest, error) {
list := manifestList{}
if err := json.Unmarshal(manblob, &list); err != nil {
return nil, err
}
var targetManifestDigest digest.Digest
for _, d := range list.Manifests {
if d.Platform.Architecture == runtime.GOARCH && d.Platform.OS == runtime.GOOS {
targetManifestDigest = d.Digest
break
}
}
if targetManifestDigest == "" {
return nil, errors.New("no supported platform found in manifest list")
}
manblob, mt, err := src.GetTargetManifest(targetManifestDigest)
if err != nil {
return nil, err
}
matches, err := manifest.MatchesDigest(manblob, targetManifestDigest)
if err != nil {
return nil, errors.Wrap(err, "Error computing manifest digest")
}
if !matches {
return nil, errors.Errorf("Manifest image does not match selected manifest digest %s", targetManifestDigest)
}
return manifestInstanceFromBlob(src, manblob, mt)
}

View File

@@ -2,12 +2,16 @@ package image
import (
"encoding/json"
"errors"
"fmt"
"regexp"
"strings"
"time"
"github.com/containers/image/docker/reference"
"github.com/containers/image/manifest"
"github.com/containers/image/types"
"github.com/opencontainers/go-digest"
imgspecv1 "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/pkg/errors"
)
var (
@@ -15,18 +19,33 @@ var (
)
type fsLayersSchema1 struct {
-BlobSum string `json:"blobSum"`
+BlobSum digest.Digest `json:"blobSum"`
}
type historySchema1 struct {
V1Compatibility string `json:"v1Compatibility"`
}
// Each historySchema1.V1Compatibility value is a JSON string containing this structure. It is similar to v1Image but not the same; in particular, note the ThrowAway field.
type v1Compatibility struct {
ID string `json:"id"`
Parent string `json:"parent,omitempty"`
Comment string `json:"comment,omitempty"`
Created time.Time `json:"created"`
ContainerConfig struct {
Cmd []string
} `json:"container_config,omitempty"`
Author string `json:"author,omitempty"`
ThrowAway bool `json:"throwaway,omitempty"`
}
type manifestSchema1 struct {
Name string `json:"name"`
Tag string `json:"tag"`
Architecture string `json:"architecture"`
FSLayers []fsLayersSchema1 `json:"fsLayers"`
History []struct {
V1Compatibility string `json:"v1Compatibility"`
} `json:"history"`
SchemaVersion int `json:"schemaVersion"`
Name string `json:"name"`
Tag string `json:"tag"`
Architecture string `json:"architecture"`
FSLayers []fsLayersSchema1 `json:"fsLayers"`
History []historySchema1 `json:"history"`
SchemaVersion int `json:"schemaVersion"`
}
func manifestSchema1FromManifest(manifest []byte) (genericManifest, error) {
@@ -34,23 +53,80 @@ func manifestSchema1FromManifest(manifest []byte) (genericManifest, error) {
if err := json.Unmarshal(manifest, mschema1); err != nil {
return nil, err
}
if mschema1.SchemaVersion != 1 {
return nil, errors.Errorf("unsupported schema version %d", mschema1.SchemaVersion)
}
if len(mschema1.FSLayers) != len(mschema1.History) {
return nil, errors.New("length of history not equal to number of layers")
}
if len(mschema1.FSLayers) == 0 {
return nil, errors.New("no FSLayers in manifest")
}
if err := fixManifestLayers(mschema1); err != nil {
return nil, err
}
// TODO(runcom): verify manifest schema 1, 2 etc
//if len(m.FSLayers) != len(m.History) {
//return nil, fmt.Errorf("length of history not equal to number of layers for %q", ref.String())
//}
//if len(m.FSLayers) == 0 {
//return nil, fmt.Errorf("no FSLayers in manifest for %q", ref.String())
//}
return mschema1, nil
}
// manifestSchema1FromComponents builds a new manifestSchema1 from the supplied data.
func manifestSchema1FromComponents(ref reference.Named, fsLayers []fsLayersSchema1, history []historySchema1, architecture string) genericManifest {
var name, tag string
if ref != nil { // Well, what to do if it _is_ nil? Most consumers actually don't use these fields nowadays, so we might as well try not supplying them.
name = reference.Path(ref)
if tagged, ok := ref.(reference.NamedTagged); ok {
tag = tagged.Tag()
}
}
return &manifestSchema1{
Name: name,
Tag: tag,
Architecture: architecture,
FSLayers: fsLayers,
History: history,
SchemaVersion: 1,
}
}
func (m *manifestSchema1) serialize() ([]byte, error) {
// docker/distribution requires a signature even if the incoming data uses the nominally unsigned DockerV2Schema1MediaType.
unsigned, err := json.Marshal(*m)
if err != nil {
return nil, err
}
return manifest.AddDummyV2S1Signature(unsigned)
}
func (m *manifestSchema1) manifestMIMEType() string {
return manifest.DockerV2Schema1SignedMediaType
}
// ConfigInfo returns a complete BlobInfo for the separate config object, or a BlobInfo{Digest:""} if there isn't a separate object.
// Note that the config object may not exist in the underlying storage in the return value of UpdatedImage! Use ConfigBlob() below.
func (m *manifestSchema1) ConfigInfo() types.BlobInfo {
return types.BlobInfo{}
}
// ConfigBlob returns the blob described by ConfigInfo, iff ConfigInfo().Digest != ""; nil otherwise.
// The result is cached; it is OK to call this however often you need.
func (m *manifestSchema1) ConfigBlob() ([]byte, error) {
return nil, nil
}
// OCIConfig returns the image configuration as per OCI v1 image-spec. Information about
// layers in the resulting configuration isn't guaranteed to be returned due to how
// old image manifests work (docker v2s1 especially).
func (m *manifestSchema1) OCIConfig() (*imgspecv1.Image, error) {
v2s2, err := m.convertToManifestSchema2(nil, nil)
if err != nil {
return nil, err
}
return v2s2.OCIConfig()
}
// LayerInfos returns a list of BlobInfos of layers referenced by this image, in order (the root layer first, and then successive layered layers).
// The Digest field is guaranteed to be provided; Size may be -1.
// WARNING: The list may contain duplicates, and they are semantically relevant.
func (m *manifestSchema1) LayerInfos() []types.BlobInfo {
layers := make([]types.BlobInfo, len(m.FSLayers))
for i, layer := range m.FSLayers { // NOTE: This includes empty layers (where m.History.V1Compatibility->ThrowAway)
@@ -59,17 +135,9 @@ func (m *manifestSchema1) LayerInfos() []types.BlobInfo {
return layers
}
func (m *manifestSchema1) Config() ([]byte, error) {
return []byte(m.History[0].V1Compatibility), nil
}
func (m *manifestSchema1) ImageInspectInfo() (*types.ImageInspectInfo, error) {
func (m *manifestSchema1) imageInspectInfo() (*types.ImageInspectInfo, error) {
v1 := &v1Image{}
config, err := m.Config()
if err != nil {
return nil, err
}
if err := json.Unmarshal(config, v1); err != nil {
if err := json.Unmarshal([]byte(m.History[0].V1Compatibility), v1); err != nil {
return nil, err
}
return &types.ImageInspectInfo{
@@ -82,12 +150,21 @@ func (m *manifestSchema1) ImageInspectInfo() (*types.ImageInspectInfo, error) {
}, nil
}
func (m *manifestSchema1) UpdatedManifest(options types.ManifestUpdateOptions) ([]byte, error) {
// UpdatedImageNeedsLayerDiffIDs returns true iff UpdatedImage(options) needs InformationOnly.LayerDiffIDs.
// This is a horribly specific interface, but computing InformationOnly.LayerDiffIDs can be very expensive
// (most importantly it forces us to download the full layers even if they are already present at the destination).
func (m *manifestSchema1) UpdatedImageNeedsLayerDiffIDs(options types.ManifestUpdateOptions) bool {
return options.ManifestMIMEType == manifest.DockerV2Schema2MediaType
}
// UpdatedImage returns a types.Image modified according to options.
// This does not change the state of the original Image object.
func (m *manifestSchema1) UpdatedImage(options types.ManifestUpdateOptions) (types.Image, error) {
copy := *m
if options.LayerInfos != nil {
// Our LayerInfos includes empty layers (where m.History.V1Compatibility->ThrowAway), so expect them to be included here as well.
if len(copy.FSLayers) != len(options.LayerInfos) {
return nil, fmt.Errorf("Error preparing updated manifest: layer count changed from %d to %d", len(copy.FSLayers), len(options.LayerInfos))
return nil, errors.Errorf("Error preparing updated manifest: layer count changed from %d to %d", len(copy.FSLayers), len(options.LayerInfos))
}
for i, info := range options.LayerInfos {
// (docker push) sets up m.History.V1Compatibility->{Id,Parent} based on values of info.Digest,
@@ -96,12 +173,19 @@ func (m *manifestSchema1) UpdatedManifest(options types.ManifestUpdateOptions) (
copy.FSLayers[(len(options.LayerInfos)-1)-i].BlobSum = info.Digest
}
}
// docker/distribution requires a signature even if the incoming data uses the nominally unsigned DockerV2Schema1MediaType.
unsigned, err := json.Marshal(copy)
if err != nil {
return nil, err
switch options.ManifestMIMEType {
case "": // No conversion, OK
case manifest.DockerV2Schema1MediaType, manifest.DockerV2Schema1SignedMediaType:
// We have 2 MIME types for schema 1, which are basically equivalent (even the un-"Signed" MIME type will be rejected if there isn't a signature); so,
// handle conversions between them by doing nothing.
case manifest.DockerV2Schema2MediaType:
return copy.convertToManifestSchema2(options.InformationOnly.LayerInfos, options.InformationOnly.LayerDiffIDs)
default:
return nil, errors.Errorf("Conversion of image manifest from %s to %s is not implemented", manifest.DockerV2Schema1SignedMediaType, options.ManifestMIMEType)
}
return manifest.AddDummyV2S1Signature(unsigned)
return memoryImageFromManifest(&copy), nil
}
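A hedged usage sketch of the conversion path just added. The helper name is hypothetical, and the ManifestUpdateInformation struct name is assumed from the types package; the option fields are exactly the ones UpdatedImage reads from options.InformationOnly:

package example

import (
	"github.com/containers/image/manifest"
	"github.com/containers/image/types"
	"github.com/opencontainers/go-digest"
)

// convertToSchema2 converts a schema1-backed types.Image to schema 2 in memory.
// Per UpdatedImageNeedsLayerDiffIDs above, this conversion needs per-layer
// sizes and uncompressed DiffIDs collected by the caller.
func convertToSchema2(img types.Image, uploaded []types.BlobInfo, diffIDs []digest.Digest) (types.Image, error) {
	return img.UpdatedImage(types.ManifestUpdateOptions{
		ManifestMIMEType: manifest.DockerV2Schema2MediaType,
		InformationOnly: types.ManifestUpdateInformation{
			LayerInfos:   uploaded, // one entry per fsLayer, in manifest order
			LayerDiffIDs: diffIDs,  // uncompressed digests, same order
		},
	})
}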
// fixManifestLayers, after validating the supplied manifest
@@ -130,7 +214,7 @@ func fixManifestLayers(manifest *manifestSchema1) error {
}
}
if imgs[len(imgs)-1].Parent != "" {
return errors.New("Invalid parent ID in the base layer of the image.")
return errors.New("Invalid parent ID in the base layer of the image")
}
// check general duplicates to error instead of a deadlock
idmap := make(map[string]struct{})
@@ -138,7 +222,7 @@ func fixManifestLayers(manifest *manifestSchema1) error {
for _, img := range imgs {
// skip IDs that appear after each other, we handle those later
if _, exists := idmap[img.ID]; img.ID != lastID && exists {
return fmt.Errorf("ID %+v appears multiple times in manifest", img.ID)
return errors.Errorf("ID %+v appears multiple times in manifest", img.ID)
}
lastID = img.ID
idmap[lastID] = struct{}{}
@@ -149,7 +233,7 @@ func fixManifestLayers(manifest *manifestSchema1) error {
manifest.FSLayers = append(manifest.FSLayers[:i], manifest.FSLayers[i+1:]...)
manifest.History = append(manifest.History[:i], manifest.History[i+1:]...)
} else if imgs[i].Parent != imgs[i+1].ID {
return fmt.Errorf("Invalid parent ID. Expected %v, got %v.", imgs[i+1].ID, imgs[i].Parent)
return errors.Errorf("Invalid parent ID. Expected %v, got %v", imgs[i+1].ID, imgs[i].Parent)
}
}
return nil
@@ -157,7 +241,106 @@ func fixManifestLayers(manifest *manifestSchema1) error {
func validateV1ID(id string) error {
if ok := validHex.MatchString(id); !ok {
return fmt.Errorf("image ID %q is invalid", id)
return errors.Errorf("image ID %q is invalid", id)
}
return nil
}
// Based on github.com/docker/docker/distribution/pull_v2.go
func (m *manifestSchema1) convertToManifestSchema2(uploadedLayerInfos []types.BlobInfo, layerDiffIDs []digest.Digest) (types.Image, error) {
if len(m.History) == 0 {
// What would this even mean?! Anyhow, the rest of the code depends on fsLayers[0] and history[0] existing.
return nil, errors.Errorf("Cannot convert an image with 0 history entries to %s", manifest.DockerV2Schema2MediaType)
}
if len(m.History) != len(m.FSLayers) {
return nil, errors.Errorf("Inconsistent schema 1 manifest: %d history entries, %d fsLayers entries", len(m.History), len(m.FSLayers))
}
if uploadedLayerInfos != nil && len(uploadedLayerInfos) != len(m.FSLayers) {
return nil, errors.Errorf("Internal error: uploaded %d blobs, but schema1 manifest has %d fsLayers", len(uploadedLayerInfos), len(m.FSLayers))
}
if layerDiffIDs != nil && len(layerDiffIDs) != len(m.FSLayers) {
return nil, errors.Errorf("Internal error: collected %d DiffID values, but schema1 manifest has %d fsLayers", len(layerDiffIDs), len(m.FSLayers))
}
rootFS := rootFS{
Type: "layers",
DiffIDs: []digest.Digest{},
BaseLayer: "",
}
var layers []descriptor
history := make([]imageHistory, len(m.History))
for v1Index := len(m.History) - 1; v1Index >= 0; v1Index-- {
v2Index := (len(m.History) - 1) - v1Index
var v1compat v1Compatibility
if err := json.Unmarshal([]byte(m.History[v1Index].V1Compatibility), &v1compat); err != nil {
return nil, errors.Wrapf(err, "Error decoding history entry %d", v1Index)
}
history[v2Index] = imageHistory{
Created: v1compat.Created,
Author: v1compat.Author,
CreatedBy: strings.Join(v1compat.ContainerConfig.Cmd, " "),
Comment: v1compat.Comment,
EmptyLayer: v1compat.ThrowAway,
}
if !v1compat.ThrowAway {
var size int64
if uploadedLayerInfos != nil {
size = uploadedLayerInfos[v2Index].Size
}
var d digest.Digest
if layerDiffIDs != nil {
d = layerDiffIDs[v2Index]
}
layers = append(layers, descriptor{
MediaType: "application/vnd.docker.image.rootfs.diff.tar.gzip",
Size: size,
Digest: m.FSLayers[v1Index].BlobSum,
})
rootFS.DiffIDs = append(rootFS.DiffIDs, d)
}
}
configJSON, err := configJSONFromV1Config([]byte(m.History[0].V1Compatibility), rootFS, history)
if err != nil {
return nil, err
}
configDescriptor := descriptor{
MediaType: "application/vnd.docker.container.image.v1+json",
Size: int64(len(configJSON)),
Digest: digest.FromBytes(configJSON),
}
m2 := manifestSchema2FromComponents(configDescriptor, nil, configJSON, layers)
return memoryImageFromManifest(m2), nil
}
func configJSONFromV1Config(v1ConfigJSON []byte, rootFS rootFS, history []imageHistory) ([]byte, error) {
// github.com/docker/docker/image/v1/imagev1.go:MakeConfigFromV1Config unmarshals and re-marshals the input if docker_version is < 1.8.3 to remove blank fields;
// we don't do that here. FIXME? Should we? AFAICT it would only affect the digest value of the schema2 manifest, and we don't particularly need that to be
// a consistently reproducible value.
// Preserve everything we don't specifically know about.
// (This must be a *json.RawMessage, even though *[]byte is fairly redundant, because only *RawMessage implements json.Marshaler.)
rawContents := map[string]*json.RawMessage{}
if err := json.Unmarshal(v1ConfigJSON, &rawContents); err != nil { // We have already unmarshaled it before, using a more detailed schema?!
return nil, err
}
delete(rawContents, "id")
delete(rawContents, "parent")
delete(rawContents, "Size")
delete(rawContents, "parent_id")
delete(rawContents, "layer_id")
delete(rawContents, "throwaway")
updates := map[string]interface{}{"rootfs": rootFS, "history": history}
for field, value := range updates {
encoded, err := json.Marshal(value)
if err != nil {
return nil, err
}
rawContents[field] = (*json.RawMessage)(&encoded)
}
return json.Marshal(rawContents)
}
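The RawMessage technique used by configJSONFromV1Config is worth a standalone illustration: unknown fields survive the round trip untouched, while selected fields are dropped or patched. This self-contained sketch uses invented input:

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	in := []byte(`{"id":"old","author":"a","custom":{"kept":true}}`)
	raw := map[string]*json.RawMessage{}
	if err := json.Unmarshal(in, &raw); err != nil {
		panic(err)
	}
	delete(raw, "id") // drop a v1-only field, as above
	hist, _ := json.Marshal([]string{"example"})
	raw["history"] = (*json.RawMessage)(&hist) // patch in a replacement field
	out, _ := json.Marshal(raw)
	fmt.Println(string(out)) // {"author":"a","custom":{"kept":true},"history":["example"]}
}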

View File

@@ -1,25 +1,47 @@
package image
import (
"bytes"
"crypto/sha256"
"encoding/hex"
"encoding/json"
"fmt"
"io/ioutil"
"strings"
"github.com/Sirupsen/logrus"
"github.com/containers/image/manifest"
"github.com/containers/image/types"
"github.com/opencontainers/go-digest"
imgspecv1 "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/pkg/errors"
)
// gzippedEmptyLayer is a gzip-compressed version of an empty tar file (1024 NULL bytes)
// This comes from github.com/docker/distribution/manifest/schema1/config_builder.go; there is
// a non-zero embedded timestamp; we could zero that, but that would just waste storage space
// in registries, so let's use the same values.
var gzippedEmptyLayer = []byte{
31, 139, 8, 0, 0, 9, 110, 136, 0, 255, 98, 24, 5, 163, 96, 20, 140, 88,
0, 8, 0, 0, 255, 255, 46, 175, 181, 239, 0, 4, 0, 0,
}
// gzippedEmptyLayerDigest is a digest of gzippedEmptyLayer
const gzippedEmptyLayerDigest = digest.Digest("sha256:a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4")
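As an illustration (not part of the vendored file), the digest constant can be re-derived from the bytes, which makes a cheap self-check possible:

func verifyGzippedEmptyLayer() bool {
	// digest comes from github.com/opencontainers/go-digest, imported above.
	return digest.FromBytes(gzippedEmptyLayer) == gzippedEmptyLayerDigest
}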
type descriptor struct {
MediaType string `json:"mediaType"`
Size int64 `json:"size"`
Digest string `json:"digest"`
MediaType string `json:"mediaType"`
Size int64 `json:"size"`
Digest digest.Digest `json:"digest"`
URLs []string `json:"urls,omitempty"`
}
type manifestSchema2 struct {
src types.ImageSource
SchemaVersion int `json:"schemaVersion"`
MediaType string `json:"mediaType"`
ConfigDescriptor descriptor `json:"config"`
LayersDescriptors []descriptor `json:"layers"`
src types.ImageSource // May be nil if configBlob is not nil
configBlob []byte // If set, corresponds to contents of ConfigDescriptor.
SchemaVersion int `json:"schemaVersion"`
MediaType string `json:"mediaType"`
ConfigDescriptor descriptor `json:"config"`
LayersDescriptors []descriptor `json:"layers"`
}
func manifestSchema2FromManifest(src types.ImageSource, manifest []byte) (genericManifest, error) {
@@ -30,30 +52,96 @@ func manifestSchema2FromManifest(src types.ImageSource, manifest []byte) (generi
return &v2s2, nil
}
// manifestSchema2FromComponents builds a new manifestSchema2 from the supplied data:
func manifestSchema2FromComponents(config descriptor, src types.ImageSource, configBlob []byte, layers []descriptor) genericManifest {
return &manifestSchema2{
src: src,
configBlob: configBlob,
SchemaVersion: 2,
MediaType: manifest.DockerV2Schema2MediaType,
ConfigDescriptor: config,
LayersDescriptors: layers,
}
}
func (m *manifestSchema2) serialize() ([]byte, error) {
return json.Marshal(*m)
}
func (m *manifestSchema2) manifestMIMEType() string {
return m.MediaType
}
// ConfigInfo returns a complete BlobInfo for the separate config object, or a BlobInfo{Digest:""} if there isn't a separate object.
// Note that the config object may not exist in the underlying storage in the return value of UpdatedImage! Use ConfigBlob() below.
func (m *manifestSchema2) ConfigInfo() types.BlobInfo {
return types.BlobInfo{Digest: m.ConfigDescriptor.Digest, Size: m.ConfigDescriptor.Size}
}
// OCIConfig returns the image configuration as per OCI v1 image-spec. Information about
// layers in the resulting configuration isn't guaranteed to be returned due to how
// old image manifests work (docker v2s1 especially).
func (m *manifestSchema2) OCIConfig() (*imgspecv1.Image, error) {
configBlob, err := m.ConfigBlob()
if err != nil {
return nil, err
}
// docker v2s2 and OCI v1 are mostly compatible but v2s2 contains more fields
// than OCI v1. This unmarshal makes sure we drop docker v2s2
// fields that aren't needed in OCI v1.
configOCI := &imgspecv1.Image{}
if err := json.Unmarshal(configBlob, configOCI); err != nil {
return nil, err
}
return configOCI, nil
}
// ConfigBlob returns the blob described by ConfigInfo, iff ConfigInfo().Digest != ""; nil otherwise.
// The result is cached; it is OK to call this however often you need.
func (m *manifestSchema2) ConfigBlob() ([]byte, error) {
if m.configBlob == nil {
if m.src == nil {
return nil, errors.Errorf("Internal error: neither src nor configBlob set in manifestSchema2")
}
stream, _, err := m.src.GetBlob(types.BlobInfo{
Digest: m.ConfigDescriptor.Digest,
Size: m.ConfigDescriptor.Size,
URLs: m.ConfigDescriptor.URLs,
})
if err != nil {
return nil, err
}
defer stream.Close()
blob, err := ioutil.ReadAll(stream)
if err != nil {
return nil, err
}
computedDigest := digest.FromBytes(blob)
if computedDigest != m.ConfigDescriptor.Digest {
return nil, errors.Errorf("Download config.json digest %s does not match expected %s", computedDigest, m.ConfigDescriptor.Digest)
}
m.configBlob = blob
}
return m.configBlob, nil
}
// LayerInfos returns a list of BlobInfos of layers referenced by this image, in order (the root layer first, and then successive layered layers).
// The Digest field is guaranteed to be provided; Size may be -1.
// WARNING: The list may contain duplicates, and they are semantically relevant.
func (m *manifestSchema2) LayerInfos() []types.BlobInfo {
blobs := []types.BlobInfo{}
for _, layer := range m.LayersDescriptors {
blobs = append(blobs, types.BlobInfo{Digest: layer.Digest, Size: layer.Size})
blobs = append(blobs, types.BlobInfo{
Digest: layer.Digest,
Size: layer.Size,
URLs: layer.URLs,
})
}
return blobs
}
func (m *manifestSchema2) Config() ([]byte, error) {
rawConfig, _, err := m.src.GetBlob(m.ConfigDescriptor.Digest)
if err != nil {
return nil, err
}
config, err := ioutil.ReadAll(rawConfig)
rawConfig.Close()
return config, err
}
func (m *manifestSchema2) ImageInspectInfo() (*types.ImageInspectInfo, error) {
config, err := m.Config()
func (m *manifestSchema2) imageInspectInfo() (*types.ImageInspectInfo, error) {
config, err := m.ConfigBlob()
if err != nil {
return nil, err
}
@@ -70,16 +158,195 @@ func (m *manifestSchema2) ImageInspectInfo() (*types.ImageInspectInfo, error) {
}, nil
}
func (m *manifestSchema2) UpdatedManifest(options types.ManifestUpdateOptions) ([]byte, error) {
copy := *m
// UpdatedImageNeedsLayerDiffIDs returns true iff UpdatedImage(options) needs InformationOnly.LayerDiffIDs.
// This is a horribly specific interface, but computing InformationOnly.LayerDiffIDs can be very expensive
// (most importantly it forces us to download the full layers even if they are already present at the destination).
func (m *manifestSchema2) UpdatedImageNeedsLayerDiffIDs(options types.ManifestUpdateOptions) bool {
return false
}
// UpdatedImage returns a types.Image modified according to options.
// This does not change the state of the original Image object.
func (m *manifestSchema2) UpdatedImage(options types.ManifestUpdateOptions) (types.Image, error) {
copy := *m // NOTE: This is not a deep copy, it still shares slices etc.
if options.LayerInfos != nil {
if len(copy.LayersDescriptors) != len(options.LayerInfos) {
return nil, fmt.Errorf("Error preparing updated manifest: layer count changed from %d to %d", len(copy.LayersDescriptors), len(options.LayerInfos))
return nil, errors.Errorf("Error preparing updated manifest: layer count changed from %d to %d", len(copy.LayersDescriptors), len(options.LayerInfos))
}
copy.LayersDescriptors = make([]descriptor, len(options.LayerInfos))
for i, info := range options.LayerInfos {
copy.LayersDescriptors[i].Digest = info.Digest
copy.LayersDescriptors[i].Size = info.Size
copy.LayersDescriptors[i].URLs = info.URLs
}
}
return json.Marshal(copy)
switch options.ManifestMIMEType {
case "": // No conversion, OK
case manifest.DockerV2Schema1SignedMediaType, manifest.DockerV2Schema1MediaType:
return copy.convertToManifestSchema1(options.InformationOnly.Destination)
case imgspecv1.MediaTypeImageManifest:
return copy.convertToManifestOCI1()
default:
return nil, errors.Errorf("Conversion of image manifest from %s to %s is not implemented", manifest.DockerV2Schema2MediaType, options.ManifestMIMEType)
}
return memoryImageFromManifest(&copy), nil
}
func (m *manifestSchema2) convertToManifestOCI1() (types.Image, error) {
configOCI, err := m.OCIConfig()
if err != nil {
return nil, err
}
configOCIBytes, err := json.Marshal(configOCI)
if err != nil {
return nil, err
}
config := descriptor{
MediaType: imgspecv1.MediaTypeImageConfig,
Size: int64(len(configOCIBytes)),
Digest: digest.FromBytes(configOCIBytes),
}
layers := make([]descriptor, len(m.LayersDescriptors))
for idx := range layers {
layers[idx] = m.LayersDescriptors[idx]
if m.LayersDescriptors[idx].MediaType == manifest.DockerV2Schema2ForeignLayerMediaType {
layers[idx].MediaType = imgspecv1.MediaTypeImageLayerNonDistributable
} else {
// we assume layers are gzip'ed because docker v2s2 only deals with
// gzip'ed layers. However, OCI has non-gzip'ed layers as well.
layers[idx].MediaType = imgspecv1.MediaTypeImageLayerGzip
}
}
m1 := manifestOCI1FromComponents(config, m.src, configOCIBytes, layers)
return memoryImageFromManifest(m1), nil
}
// Based on docker/distribution/manifest/schema1/config_builder.go
func (m *manifestSchema2) convertToManifestSchema1(dest types.ImageDestination) (types.Image, error) {
configBytes, err := m.ConfigBlob()
if err != nil {
return nil, err
}
imageConfig := &image{}
if err := json.Unmarshal(configBytes, imageConfig); err != nil {
return nil, err
}
// Build fsLayers and History, discarding all configs. We will patch the top-level config in later.
fsLayers := make([]fsLayersSchema1, len(imageConfig.History))
history := make([]historySchema1, len(imageConfig.History))
nonemptyLayerIndex := 0
var parentV1ID string // Set in the loop
v1ID := ""
haveGzippedEmptyLayer := false
if len(imageConfig.History) == 0 {
// What would this even mean?! Anyhow, the rest of the code depends on fsLayers[0] and history[0] existing.
return nil, errors.Errorf("Cannot convert an image with 0 history entries to %s", manifest.DockerV2Schema1SignedMediaType)
}
for v2Index, historyEntry := range imageConfig.History {
parentV1ID = v1ID
v1Index := len(imageConfig.History) - 1 - v2Index
var blobDigest digest.Digest
if historyEntry.EmptyLayer {
if !haveGzippedEmptyLayer {
logrus.Debugf("Uploading empty layer during conversion to schema 1")
info, err := dest.PutBlob(bytes.NewReader(gzippedEmptyLayer), types.BlobInfo{Digest: gzippedEmptyLayerDigest, Size: int64(len(gzippedEmptyLayer))})
if err != nil {
return nil, errors.Wrap(err, "Error uploading empty layer")
}
if info.Digest != gzippedEmptyLayerDigest {
return nil, errors.Errorf("Internal error: Uploaded empty layer has digest %#v instead of %s", info.Digest, gzippedEmptyLayerDigest)
}
haveGzippedEmptyLayer = true
}
blobDigest = gzippedEmptyLayerDigest
} else {
if nonemptyLayerIndex >= len(m.LayersDescriptors) {
return nil, errors.Errorf("Invalid image configuration, needs more than the %d distributed layers", len(m.LayersDescriptors))
}
blobDigest = m.LayersDescriptors[nonemptyLayerIndex].Digest
nonemptyLayerIndex++
}
// AFAICT pull ignores these ID values, at least nowadays, so we could use anything unique, including a simple counter. Use what Docker uses for cargo-cult consistency.
v, err := v1IDFromBlobDigestAndComponents(blobDigest, parentV1ID)
if err != nil {
return nil, err
}
v1ID = v
fakeImage := v1Compatibility{
ID: v1ID,
Parent: parentV1ID,
Comment: historyEntry.Comment,
Created: historyEntry.Created,
Author: historyEntry.Author,
ThrowAway: historyEntry.EmptyLayer,
}
fakeImage.ContainerConfig.Cmd = []string{historyEntry.CreatedBy}
v1CompatibilityBytes, err := json.Marshal(&fakeImage)
if err != nil {
return nil, errors.Errorf("Internal error: Error creating v1compatibility for %#v", fakeImage)
}
fsLayers[v1Index] = fsLayersSchema1{BlobSum: blobDigest}
history[v1Index] = historySchema1{V1Compatibility: string(v1CompatibilityBytes)}
// Note that parentV1ID of the top layer is preserved when exiting this loop
}
// Now patch in real configuration for the top layer (v1Index == 0)
v1ID, err = v1IDFromBlobDigestAndComponents(fsLayers[0].BlobSum, parentV1ID, string(configBytes)) // See above WRT v1ID value generation and cargo-cult consistency.
if err != nil {
return nil, err
}
v1Config, err := v1ConfigFromConfigJSON(configBytes, v1ID, parentV1ID, imageConfig.History[len(imageConfig.History)-1].EmptyLayer)
if err != nil {
return nil, err
}
history[0].V1Compatibility = string(v1Config)
m1 := manifestSchema1FromComponents(dest.Reference().DockerReference(), fsLayers, history, imageConfig.Architecture)
return memoryImageFromManifest(m1), nil
}
func v1IDFromBlobDigestAndComponents(blobDigest digest.Digest, others ...string) (string, error) {
if err := blobDigest.Validate(); err != nil {
return "", err
}
parts := append([]string{blobDigest.Hex()}, others...)
v1IDHash := sha256.Sum256([]byte(strings.Join(parts, " ")))
return hex.EncodeToString(v1IDHash[:]), nil
}
func v1ConfigFromConfigJSON(configJSON []byte, v1ID, parentV1ID string, throwaway bool) ([]byte, error) {
// Preserve everything we don't specifically know about.
// (This must be a *json.RawMessage, even though *[]byte is fairly redundant, because only *RawMessage implements json.Marshaler.)
rawContents := map[string]*json.RawMessage{}
if err := json.Unmarshal(configJSON, &rawContents); err != nil { // We have already unmarshaled it before, using a more detailed schema?!
return nil, err
}
delete(rawContents, "rootfs")
delete(rawContents, "history")
updates := map[string]interface{}{"id": v1ID}
if parentV1ID != "" {
updates["parent"] = parentV1ID
}
if throwaway {
updates["throwaway"] = throwaway
}
for field, value := range updates {
encoded, err := json.Marshal(value)
if err != nil {
return nil, err
}
rawContents[field] = (*json.RawMessage)(&encoded)
}
return json.Marshal(rawContents)
}
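The v1 ID scheme above reduces to plain hashing. This standalone sketch (invented inputs) reproduces v1IDFromBlobDigestAndComponents for a non-top layer:

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"strings"
)

func main() {
	// v1 ID = hex(SHA-256(blob digest hex + " " + parent v1 ID)); the top
	// layer additionally appends the serialized config JSON.
	blobHex := "a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4"
	parentID := "0123456789abcdef" // hypothetical parent v1 ID
	sum := sha256.Sum256([]byte(strings.Join([]string{blobHex, parentID}, " ")))
	fmt.Println(hex.EncodeToString(sum[:]))
}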

View File

@@ -1,186 +0,0 @@
// Package image consolidates knowledge about various container image formats
// (as opposed to image storage mechanisms, which are handled by types.ImageSource)
// and exposes all of them using a unified interface.
package image
import (
"errors"
"fmt"
"time"
"github.com/containers/image/manifest"
"github.com/containers/image/types"
)
// genericImage is a general set of utilities for working with container images,
// whatever their underlying location (i.e. dockerImageSource-independent).
// Note the existence of skopeo/docker.Image: some instances of a `types.Image`
// may not be a `genericImage` directly. However, most users of `types.Image`
// do not care, and those who care about `skopeo/docker.Image` know they do.
type genericImage struct {
src types.ImageSource
// private cache for Manifest(); nil if not yet known.
cachedManifest []byte
// private cache for the manifest media type w/o having to guess it
// this may be the empty string in case the MIME Type wasn't guessed correctly
// this field is valid only if cachedManifest is not nil
cachedManifestMIMEType string
// private cache for Signatures(); nil if not yet known.
cachedSignatures [][]byte
}
// FromSource returns a types.Image implementation for source.
// The caller must call .Close() on the returned Image.
//
// FromSource “takes ownership” of the input ImageSource and will call src.Close()
// when the image is closed. (This does not prevent callers from using both the
// Image and ImageSource objects simultaneously, but it means that they only need to
// keep a reference to the Image.)
func FromSource(src types.ImageSource) types.Image {
return &genericImage{src: src}
}
// Reference returns the reference used to set up this source, _as specified by the user_
// (not as the image itself, or its underlying storage, claims). This can be used e.g. to determine which public keys are trusted for this image.
func (i *genericImage) Reference() types.ImageReference {
return i.src.Reference()
}
// Close removes resources associated with an initialized Image, if any.
func (i *genericImage) Close() {
i.src.Close()
}
// Manifest is like ImageSource.GetManifest, but the result is cached; it is OK to call this however often you need.
// NOTE: It is essential for signature verification that Manifest returns the manifest from which ConfigInfo and LayerInfos is computed.
func (i *genericImage) Manifest() ([]byte, string, error) {
if i.cachedManifest == nil {
m, mt, err := i.src.GetManifest()
if err != nil {
return nil, "", err
}
i.cachedManifest = m
if mt == "" || mt == "text/plain" {
// Crane registries can return "text/plain".
// This makes no real sense, but it happens
// because requests for manifests are
// redirected to a content distribution
// network which is configured that way.
mt = manifest.GuessMIMEType(i.cachedManifest)
}
i.cachedManifestMIMEType = mt
}
return i.cachedManifest, i.cachedManifestMIMEType, nil
}
// Signatures is like ImageSource.GetSignatures, but the result is cached; it is OK to call this however often you need.
func (i *genericImage) Signatures() ([][]byte, error) {
if i.cachedSignatures == nil {
sigs, err := i.src.GetSignatures()
if err != nil {
return nil, err
}
i.cachedSignatures = sigs
}
return i.cachedSignatures, nil
}
type config struct {
Labels map[string]string
}
type v1Image struct {
// Config is the configuration of the container received from the client
Config *config `json:"config,omitempty"`
// DockerVersion specifies version on which image is built
DockerVersion string `json:"docker_version,omitempty"`
// Created timestamp when image was created
Created time.Time `json:"created"`
// Architecture is the hardware that the image is build and runs on
Architecture string `json:"architecture,omitempty"`
// OS is the operating system used to build and run the image
OS string `json:"os,omitempty"`
}
// will support v1 one day...
type genericManifest interface {
Config() ([]byte, error)
ConfigInfo() types.BlobInfo
LayerInfos() []types.BlobInfo
ImageInspectInfo() (*types.ImageInspectInfo, error) // The caller will need to fill in Layers
UpdatedManifest(types.ManifestUpdateOptions) ([]byte, error)
}
// getParsedManifest parses the manifest into a data structure, cleans it up, and returns it.
// NOTE: The manifest may have been modified in the process; DO NOT reserialize and store the return value
// if you want to preserve the original manifest; use the blob returned by Manifest() directly.
// NOTE: It is essential for signature verification that the object is computed from the same manifest which is returned by Manifest().
func (i *genericImage) getParsedManifest() (genericManifest, error) {
manblob, mt, err := i.Manifest()
if err != nil {
return nil, err
}
switch mt {
// "application/json" is a valid v2s1 value per https://github.com/docker/distribution/blob/master/docs/spec/manifest-v2-1.md .
// This works for now, when nothing else seems to return "application/json"; if that were not true, the mapping/detection might
// need to happen within the ImageSource.
case manifest.DockerV2Schema1MediaType, manifest.DockerV2Schema1SignedMediaType, "application/json":
return manifestSchema1FromManifest(manblob)
case manifest.DockerV2Schema2MediaType:
return manifestSchema2FromManifest(i.src, manblob)
case "":
return nil, errors.New("could not guess manifest media type")
default:
return nil, fmt.Errorf("unsupported manifest media type %s", mt)
}
}
func (i *genericImage) Inspect() (*types.ImageInspectInfo, error) {
// TODO(runcom): unused version param for now, default to docker v2-1
m, err := i.getParsedManifest()
if err != nil {
return nil, err
}
info, err := m.ImageInspectInfo()
if err != nil {
return nil, err
}
layers := m.LayerInfos()
info.Layers = make([]string, len(layers))
for i, layer := range layers {
info.Layers[i] = layer.Digest
}
return info, nil
}
// ConfigInfo returns a complete BlobInfo for the separate config object, or a BlobInfo{Digest:""} if there isn't a separate object.
// NOTE: It is essential for signature verification that ConfigInfo is computed from the same manifest which is returned by Manifest().
func (i *genericImage) ConfigInfo() (types.BlobInfo, error) {
m, err := i.getParsedManifest()
if err != nil {
return types.BlobInfo{}, err
}
return m.ConfigInfo(), nil
}
// LayerInfos returns a list of BlobInfos of layers referenced by this image, in order (the root layer first, and then successive layered layers).
// The Digest field is guaranteed to be provided; Size may be -1.
// NOTE: It is essential for signature verification that LayerInfos is computed from the same manifest which is returned by Manifest().
// WARNING: The list may contain duplicates, and they are semantically relevant.
func (i *genericImage) LayerInfos() ([]types.BlobInfo, error) {
m, err := i.getParsedManifest()
if err != nil {
return nil, err
}
return m.LayerInfos(), nil
}
// UpdatedManifest returns the image's manifest modified according to updateOptions.
// This does not change the state of the Image object.
func (i *genericImage) UpdatedManifest(options types.ManifestUpdateOptions) ([]byte, error) {
m, err := i.getParsedManifest()
if err != nil {
return nil, err
}
return m.UpdatedManifest(options)
}

vendor/github.com/containers/image/image/manifest.go generated vendored Normal file
View File

@@ -0,0 +1,124 @@
package image
import (
"time"
"github.com/containers/image/manifest"
"github.com/containers/image/pkg/strslice"
"github.com/containers/image/types"
"github.com/opencontainers/go-digest"
imgspecv1 "github.com/opencontainers/image-spec/specs-go/v1"
)
type config struct {
Cmd strslice.StrSlice
Labels map[string]string
}
type v1Image struct {
ID string `json:"id,omitempty"`
Parent string `json:"parent,omitempty"`
Comment string `json:"comment,omitempty"`
Created time.Time `json:"created"`
ContainerConfig *config `json:"container_config,omitempty"`
DockerVersion string `json:"docker_version,omitempty"`
Author string `json:"author,omitempty"`
// Config is the configuration of the container received from the client
Config *config `json:"config,omitempty"`
// Architecture is the hardware that the image is build and runs on
Architecture string `json:"architecture,omitempty"`
// OS is the operating system used to build and run the image
OS string `json:"os,omitempty"`
}
type image struct {
v1Image
History []imageHistory `json:"history,omitempty"`
RootFS *rootFS `json:"rootfs,omitempty"`
}
type imageHistory struct {
Created time.Time `json:"created"`
Author string `json:"author,omitempty"`
CreatedBy string `json:"created_by,omitempty"`
Comment string `json:"comment,omitempty"`
EmptyLayer bool `json:"empty_layer,omitempty"`
}
type rootFS struct {
Type string `json:"type"`
DiffIDs []digest.Digest `json:"diff_ids,omitempty"`
BaseLayer string `json:"base_layer,omitempty"`
}
// genericManifest is an interface for parsing, modifying image manifests and related data.
// Note that the public methods are intended to be a subset of types.Image
// so that embedding a genericManifest into structs works.
// will support v1 one day...
type genericManifest interface {
serialize() ([]byte, error)
manifestMIMEType() string
// ConfigInfo returns a complete BlobInfo for the separate config object, or a BlobInfo{Digest:""} if there isn't a separate object.
// Note that the config object may not exist in the underlying storage in the return value of UpdatedImage! Use ConfigBlob() below.
ConfigInfo() types.BlobInfo
// ConfigBlob returns the blob described by ConfigInfo, iff ConfigInfo().Digest != ""; nil otherwise.
// The result is cached; it is OK to call this however often you need.
ConfigBlob() ([]byte, error)
// OCIConfig returns the image configuration as per OCI v1 image-spec. Information about
// layers in the resulting configuration isn't guaranteed to be returned due to how
// old image manifests work (docker v2s1 especially).
OCIConfig() (*imgspecv1.Image, error)
// LayerInfos returns a list of BlobInfos of layers referenced by this image, in order (the root layer first, and then successive layered layers).
// The Digest field is guaranteed to be provided; Size may be -1.
// WARNING: The list may contain duplicates, and they are semantically relevant.
LayerInfos() []types.BlobInfo
imageInspectInfo() (*types.ImageInspectInfo, error) // To be called by inspectManifest
// UpdatedImageNeedsLayerDiffIDs returns true iff UpdatedImage(options) needs InformationOnly.LayerDiffIDs.
// This is a horribly specific interface, but computing InformationOnly.LayerDiffIDs can be very expensive
// (most importantly it forces us to download the full layers even if they are already present at the destination).
UpdatedImageNeedsLayerDiffIDs(options types.ManifestUpdateOptions) bool
// UpdatedImage returns a types.Image modified according to options.
// This does not change the state of the original Image object.
UpdatedImage(options types.ManifestUpdateOptions) (types.Image, error)
}
func manifestInstanceFromBlob(src types.ImageSource, manblob []byte, mt string) (genericManifest, error) {
switch mt {
// "application/json" is a valid v2s1 value per https://github.com/docker/distribution/blob/master/docs/spec/manifest-v2-1.md .
// This works for now, when nothing else seems to return "application/json"; if that were not true, the mapping/detection might
// need to happen within the ImageSource.
case manifest.DockerV2Schema1MediaType, manifest.DockerV2Schema1SignedMediaType, "application/json":
return manifestSchema1FromManifest(manblob)
case imgspecv1.MediaTypeImageManifest:
return manifestOCI1FromManifest(src, manblob)
case manifest.DockerV2Schema2MediaType:
return manifestSchema2FromManifest(src, manblob)
case manifest.DockerV2ListMediaType:
return manifestSchema2FromManifestList(src, manblob)
default:
// If it's not a recognized manifest media type, or we have failed determining the type, we'll try one last time
// to deserialize using v2s1 as per https://github.com/docker/distribution/blob/master/manifests.go#L108
// and https://github.com/docker/distribution/blob/master/manifest/schema1/manifest.go#L50
//
// Crane registries can also return "text/plain", or pretty much anything else depending on a file extension “recognized” in the tag.
// This makes no real sense, but it happens
// because requests for manifests are
// redirected to a content distribution
// network which is configured that way. See https://bugzilla.redhat.com/show_bug.cgi?id=1389442
return manifestSchema1FromManifest(manblob)
}
}
// inspectManifest is an implementation of types.Image.Inspect
func inspectManifest(m genericManifest) (*types.ImageInspectInfo, error) {
info, err := m.imageInspectInfo()
if err != nil {
return nil, err
}
layers := m.LayerInfos()
info.Layers = make([]string, len(layers))
for i, layer := range layers {
info.Layers[i] = layer.Digest.String()
}
return info, nil
}
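A small usage sketch (the helper is hypothetical, but the calls match this file): feeding an ImageSource's manifest through the dispatcher above, with its fall-back-to-schema1 behavior for unrecognized MIME types:

func parseFromSource(src types.ImageSource) (genericManifest, error) {
	manblob, mt, err := src.GetManifest()
	if err != nil {
		return nil, err
	}
	return manifestInstanceFromBlob(src, manblob, mt)
}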

vendor/github.com/containers/image/image/memory.go generated vendored Normal file
View File

@@ -0,0 +1,71 @@
package image
import (
"github.com/pkg/errors"
"github.com/containers/image/types"
)
// memoryImage is a mostly-complete implementation of types.Image assembled from data
// created in memory, used primarily as a return value of types.Image.UpdatedImage
// as a way to carry various structured information in a type-safe and easy-to-use way.
// Note that this _only_ carries the immediate metadata; it is _not_ a stand-alone
// collection of all related information, e.g. there is no way to get layer blobs
// from a memoryImage.
type memoryImage struct {
genericManifest
serializedManifest []byte // A private cache for Manifest()
}
func memoryImageFromManifest(m genericManifest) types.Image {
return &memoryImage{
genericManifest: m,
serializedManifest: nil,
}
}
// Reference returns the reference used to set up this source, _as specified by the user_
// (not as the image itself, or its underlying storage, claims). This can be used e.g. to determine which public keys are trusted for this image.
func (i *memoryImage) Reference() types.ImageReference {
// It would really be inappropriate to return the ImageReference of the image this was based on.
return nil
}
// Close removes resources associated with an initialized memoryImage, if any.
func (i *memoryImage) Close() error {
return nil
}
// Size returns the size of the image as stored, if known, or -1 if not.
func (i *memoryImage) Size() (int64, error) {
return -1, nil
}
// Manifest is like ImageSource.GetManifest, but the result is cached; it is OK to call this however often you need.
func (i *memoryImage) Manifest() ([]byte, string, error) {
if i.serializedManifest == nil {
m, err := i.genericManifest.serialize()
if err != nil {
return nil, "", err
}
i.serializedManifest = m
}
return i.serializedManifest, i.genericManifest.manifestMIMEType(), nil
}
// Signatures is like ImageSource.GetSignatures, but the result is cached; it is OK to call this however often you need.
func (i *memoryImage) Signatures() ([][]byte, error) {
// Modifying an image invalidates signatures; a caller asking the updated image for signatures
// is probably confused.
return nil, errors.New("Internal error: Image.Signatures() is not supported for images modified in memory")
}
// Inspect returns various information for (skopeo inspect) parsed from the manifest and configuration.
func (i *memoryImage) Inspect() (*types.ImageInspectInfo, error) {
return inspectManifest(i.genericManifest)
}
// IsMultiImage returns true if the image's manifest is a list of images, false otherwise.
func (i *memoryImage) IsMultiImage() bool {
return false
}
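Usage sketch (hypothetical helper): wrapping any genericManifest from this package yields a standalone in-memory types.Image whose Manifest() serves the cached serialization:

func exampleRoundTrip(m genericManifest) ([]byte, string, error) {
	img := memoryImageFromManifest(m)
	return img.Manifest() // serialized blob, MIME type, error
}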

vendor/github.com/containers/image/image/oci.go generated vendored Normal file
View File

@@ -0,0 +1,180 @@
package image
import (
"encoding/json"
"io/ioutil"
"github.com/containers/image/manifest"
"github.com/containers/image/types"
"github.com/opencontainers/go-digest"
imgspecv1 "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/pkg/errors"
)
type manifestOCI1 struct {
src types.ImageSource // May be nil if configBlob is not nil
configBlob []byte // If set, corresponds to contents of ConfigDescriptor.
SchemaVersion int `json:"schemaVersion"`
ConfigDescriptor descriptor `json:"config"`
LayersDescriptors []descriptor `json:"layers"`
}
func manifestOCI1FromManifest(src types.ImageSource, manifest []byte) (genericManifest, error) {
oci := manifestOCI1{src: src}
if err := json.Unmarshal(manifest, &oci); err != nil {
return nil, err
}
return &oci, nil
}
// manifestOCI1FromComponents builds a new manifestOCI1 from the supplied data:
func manifestOCI1FromComponents(config descriptor, src types.ImageSource, configBlob []byte, layers []descriptor) genericManifest {
return &manifestOCI1{
src: src,
configBlob: configBlob,
SchemaVersion: 2,
ConfigDescriptor: config,
LayersDescriptors: layers,
}
}
func (m *manifestOCI1) serialize() ([]byte, error) {
return json.Marshal(*m)
}
func (m *manifestOCI1) manifestMIMEType() string {
return imgspecv1.MediaTypeImageManifest
}
// ConfigInfo returns a complete BlobInfo for the separate config object, or a BlobInfo{Digest:""} if there isn't a separate object.
// Note that the config object may not exist in the underlying storage in the return value of UpdatedImage! Use ConfigBlob() below.
func (m *manifestOCI1) ConfigInfo() types.BlobInfo {
return types.BlobInfo{Digest: m.ConfigDescriptor.Digest, Size: m.ConfigDescriptor.Size}
}
// ConfigBlob returns the blob described by ConfigInfo, iff ConfigInfo().Digest != ""; nil otherwise.
// The result is cached; it is OK to call this however often you need.
func (m *manifestOCI1) ConfigBlob() ([]byte, error) {
if m.configBlob == nil {
if m.src == nil {
return nil, errors.Errorf("Internal error: neither src nor configBlob set in manifestOCI1")
}
stream, _, err := m.src.GetBlob(types.BlobInfo{
Digest: m.ConfigDescriptor.Digest,
Size: m.ConfigDescriptor.Size,
URLs: m.ConfigDescriptor.URLs,
})
if err != nil {
return nil, err
}
defer stream.Close()
blob, err := ioutil.ReadAll(stream)
if err != nil {
return nil, err
}
computedDigest := digest.FromBytes(blob)
if computedDigest != m.ConfigDescriptor.Digest {
return nil, errors.Errorf("Download config.json digest %s does not match expected %s", computedDigest, m.ConfigDescriptor.Digest)
}
m.configBlob = blob
}
return m.configBlob, nil
}
// OCIConfig returns the image configuration as per OCI v1 image-spec. Information about
// layers in the resulting configuration isn't guaranteed to be returned due to how
// old image manifests work (docker v2s1 especially).
func (m *manifestOCI1) OCIConfig() (*imgspecv1.Image, error) {
cb, err := m.ConfigBlob()
if err != nil {
return nil, err
}
configOCI := &imgspecv1.Image{}
if err := json.Unmarshal(cb, configOCI); err != nil {
return nil, err
}
return configOCI, nil
}
// LayerInfos returns a list of BlobInfos of layers referenced by this image, in order (the root layer first, and then successive layered layers).
// The Digest field is guaranteed to be provided; Size may be -1.
// WARNING: The list may contain duplicates, and they are semantically relevant.
func (m *manifestOCI1) LayerInfos() []types.BlobInfo {
blobs := []types.BlobInfo{}
for _, layer := range m.LayersDescriptors {
blobs = append(blobs, types.BlobInfo{Digest: layer.Digest, Size: layer.Size})
}
return blobs
}
func (m *manifestOCI1) imageInspectInfo() (*types.ImageInspectInfo, error) {
config, err := m.ConfigBlob()
if err != nil {
return nil, err
}
v1 := &v1Image{}
if err := json.Unmarshal(config, v1); err != nil {
return nil, err
}
return &types.ImageInspectInfo{
DockerVersion: v1.DockerVersion,
Created: v1.Created,
Labels: v1.Config.Labels,
Architecture: v1.Architecture,
Os: v1.OS,
}, nil
}
// UpdatedImageNeedsLayerDiffIDs returns true iff UpdatedImage(options) needs InformationOnly.LayerDiffIDs.
// This is a horribly specific interface, but computing InformationOnly.LayerDiffIDs can be very expensive
// (most importantly it forces us to download the full layers even if they are already present at the destination).
func (m *manifestOCI1) UpdatedImageNeedsLayerDiffIDs(options types.ManifestUpdateOptions) bool {
return false
}
// UpdatedImage returns a types.Image modified according to options.
// This does not change the state of the original Image object.
func (m *manifestOCI1) UpdatedImage(options types.ManifestUpdateOptions) (types.Image, error) {
copy := *m // NOTE: This is not a deep copy, it still shares slices etc.
if options.LayerInfos != nil {
if len(copy.LayersDescriptors) != len(options.LayerInfos) {
return nil, errors.Errorf("Error preparing updated manifest: layer count changed from %d to %d", len(copy.LayersDescriptors), len(options.LayerInfos))
}
copy.LayersDescriptors = make([]descriptor, len(options.LayerInfos))
for i, info := range options.LayerInfos {
copy.LayersDescriptors[i].Digest = info.Digest
copy.LayersDescriptors[i].Size = info.Size
}
}
switch options.ManifestMIMEType {
case "": // No conversion, OK
case manifest.DockerV2Schema2MediaType:
return copy.convertToManifestSchema2()
default:
return nil, errors.Errorf("Conversion of image manifest from %s to %s is not implemented", imgspecv1.MediaTypeImageManifest, options.ManifestMIMEType)
}
return memoryImageFromManifest(&copy), nil
}
func (m *manifestOCI1) convertToManifestSchema2() (types.Image, error) {
// Create a copy of the descriptor.
config := m.ConfigDescriptor
// The only difference between OCI and DockerSchema2 is the mediatypes. The
// media type of the manifest is handled by manifestSchema2FromComponents.
config.MediaType = manifest.DockerV2Schema2ConfigMediaType
layers := make([]descriptor, len(m.LayersDescriptors))
for idx := range layers {
layers[idx] = m.LayersDescriptors[idx]
layers[idx].MediaType = manifest.DockerV2Schema2LayerMediaType
}
// Rather than copying the ConfigBlob now, we just pass m.src to the
// translated manifest: since the only difference is the media type of the
// descriptors, there is no change to any blob stored in m.src.
m1 := manifestSchema2FromComponents(config, m.src, nil, layers)
return memoryImageFromManifest(m1), nil
}

vendor/github.com/containers/image/image/sourced.go generated vendored Normal file
View File

@@ -0,0 +1,90 @@
// Package image consolidates knowledge about various container image formats
// (as opposed to image storage mechanisms, which are handled by types.ImageSource)
// and exposes all of them using a unified interface.
package image
import (
"github.com/containers/image/manifest"
"github.com/containers/image/types"
)
// FromSource returns a types.Image implementation for source.
// The caller must call .Close() on the returned Image.
//
// FromSource “takes ownership” of the input ImageSource and will call src.Close()
// when the image is closed. (This does not prevent callers from using both the
// Image and ImageSource objects simultaneously, but it means that they only need to
// keep a reference to the Image.)
//
// NOTE: If any kind of signature verification should happen, build an UnparsedImage from the value returned by NewImageSource,
// verify that UnparsedImage, and convert it into a real Image via image.FromUnparsedImage instead of calling this function.
func FromSource(src types.ImageSource) (types.Image, error) {
return FromUnparsedImage(UnparsedFromSource(src))
}
// sourcedImage is a general set of utilities for working with container images,
// whatever their underlying location (i.e. dockerImageSource-independent).
// Note the existence of skopeo/docker.Image: some instances of a `types.Image`
// may not be a `sourcedImage` directly. However, most users of `types.Image`
// do not care, and those who care about `skopeo/docker.Image` know they do.
type sourcedImage struct {
*UnparsedImage
manifestBlob []byte
manifestMIMEType string
// genericManifest contains data corresponding to manifestBlob.
// NOTE: The manifest may have been modified in the process; DO NOT reserialize and store genericManifest
// if you want to preserve the original manifest; use manifestBlob directly.
genericManifest
}
// FromUnparsedImage returns a types.Image implementation for unparsed.
// The caller must call .Close() on the returned Image.
//
// FromUnparsedImage “takes ownership” of the input UnparsedImage and will call unparsed.Close()
// when the image is closed. (This does not prevent callers from using both the
// UnparsedImage and ImageSource objects simultaneously, but it means that they only need to
// keep a reference to the Image.)
func FromUnparsedImage(unparsed *UnparsedImage) (types.Image, error) {
// Note that the input parameter above is specifically *image.UnparsedImage, not types.UnparsedImage:
// we want to be able to use unparsed.src. We could make that an explicit interface, but, well,
// this is the only UnparsedImage implementation around, anyway.
// Also, we do not explicitly implement types.Image.Close; we let the implementation fall through to
// unparsed.Close.
// NOTE: It is essential for signature verification that all parsing done in this object happens on the same manifest which is returned by unparsed.Manifest().
manifestBlob, manifestMIMEType, err := unparsed.Manifest()
if err != nil {
return nil, err
}
parsedManifest, err := manifestInstanceFromBlob(unparsed.src, manifestBlob, manifestMIMEType)
if err != nil {
return nil, err
}
return &sourcedImage{
UnparsedImage: unparsed,
manifestBlob: manifestBlob,
manifestMIMEType: manifestMIMEType,
genericManifest: parsedManifest,
}, nil
}
// Size returns the size of the image as stored, if it's known, or -1 if it isn't.
func (i *sourcedImage) Size() (int64, error) {
return -1, nil
}
// Manifest overrides the UnparsedImage.Manifest to always use the fields which we have already fetched.
func (i *sourcedImage) Manifest() ([]byte, string, error) {
return i.manifestBlob, i.manifestMIMEType, nil
}
func (i *sourcedImage) Inspect() (*types.ImageInspectInfo, error) {
return inspectManifest(i.genericManifest)
}
func (i *sourcedImage) IsMultiImage() bool {
return i.manifestMIMEType == manifest.DockerV2ListMediaType
}
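A hedged sketch of the intended call pattern (error-path ownership is not spelled out in the comments above, so the src.Close() on failure is an assumption; the helper name is invented):

package example

import (
	"github.com/containers/image/image"
	"github.com/containers/image/types"
)

func openAndInspect(src types.ImageSource) (*types.ImageInspectInfo, error) {
	img, err := image.FromSource(src) // wraps UnparsedFromSource + FromUnparsedImage
	if err != nil {
		src.Close()
		return nil, err
	}
	defer img.Close() // also closes src, per the ownership comment above
	return img.Inspect()
}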

vendor/github.com/containers/image/image/unparsed.go generated vendored Normal file
View File

@@ -0,0 +1,83 @@
package image
import (
"github.com/containers/image/docker/reference"
"github.com/containers/image/manifest"
"github.com/containers/image/types"
"github.com/opencontainers/go-digest"
"github.com/pkg/errors"
)
// UnparsedImage implements types.UnparsedImage.
type UnparsedImage struct {
src types.ImageSource
cachedManifest []byte // A private cache for Manifest(); nil if not yet known.
// A private cache for Manifest(), may be the empty string if guessing failed.
// Valid iff cachedManifest is not nil.
cachedManifestMIMEType string
cachedSignatures [][]byte // A private cache for Signatures(); nil if not yet known.
}
// UnparsedFromSource returns a types.UnparsedImage implementation for source.
// The caller must call .Close() on the returned UnparsedImage.
//
// UnparsedFromSource “takes ownership” of the input ImageSource and will call src.Close()
// when the image is closed. (This does not prevent callers from using both the
// UnparsedImage and ImageSource objects simultaneously, but it means that they only need to
// keep a reference to the UnparsedImage.)
func UnparsedFromSource(src types.ImageSource) *UnparsedImage {
return &UnparsedImage{src: src}
}
// Reference returns the reference used to set up this source, _as specified by the user_
// (not as the image itself, or its underlying storage, claims). This can be used e.g. to determine which public keys are trusted for this image.
func (i *UnparsedImage) Reference() types.ImageReference {
return i.src.Reference()
}
// Close removes resources associated with an initialized UnparsedImage, if any.
func (i *UnparsedImage) Close() error {
return i.src.Close()
}
// Manifest is like ImageSource.GetManifest, but the result is cached; it is OK to call this however often you need.
func (i *UnparsedImage) Manifest() ([]byte, string, error) {
if i.cachedManifest == nil {
m, mt, err := i.src.GetManifest()
if err != nil {
return nil, "", err
}
// ImageSource.GetManifest does not do digest verification, but we do;
// this immediately protects also any user of types.Image.
ref := i.Reference().DockerReference()
if ref != nil {
if canonical, ok := ref.(reference.Canonical); ok {
digest := digest.Digest(canonical.Digest())
matches, err := manifest.MatchesDigest(m, digest)
if err != nil {
return nil, "", errors.Wrap(err, "Error computing manifest digest")
}
if !matches {
return nil, "", errors.Errorf("Manifest does not match provided manifest digest %s", digest)
}
}
}
i.cachedManifest = m
i.cachedManifestMIMEType = mt
}
return i.cachedManifest, i.cachedManifestMIMEType, nil
}
// Signatures is like ImageSource.GetSignatures, but the result is cached; it is OK to call this however often you need.
func (i *UnparsedImage) Signatures() ([][]byte, error) {
if i.cachedSignatures == nil {
sigs, err := i.src.GetSignatures()
if err != nil {
return nil, err
}
i.cachedSignatures = sigs
}
return i.cachedSignatures, nil
}

View File

@@ -1,11 +1,10 @@
package manifest
import (
"crypto/sha256"
"encoding/hex"
"encoding/json"
"github.com/docker/libtrust"
"github.com/opencontainers/go-digest"
imgspecv1 "github.com/opencontainers/image-spec/specs-go/v1"
)
@@ -19,8 +18,14 @@ const (
DockerV2Schema1SignedMediaType = "application/vnd.docker.distribution.manifest.v1+prettyjws"
// DockerV2Schema2MediaType MIME type represents Docker manifest schema 2
DockerV2Schema2MediaType = "application/vnd.docker.distribution.manifest.v2+json"
// DockerV2Schema2ConfigMediaType is the MIME type used for schema 2 config blobs.
DockerV2Schema2ConfigMediaType = "application/vnd.docker.container.image.v1+json"
// DockerV2Schema2LayerMediaType is the MIME type used for schema 2 layers.
DockerV2Schema2LayerMediaType = "application/vnd.docker.image.rootfs.diff.tar.gzip"
// DockerV2ListMediaType MIME type represents Docker manifest schema 2 list
DockerV2ListMediaType = "application/vnd.docker.distribution.manifest.list.v2+json"
// DockerV2Schema2ForeignLayerMediaType is the MIME type used for schema 2 foreign layers.
DockerV2Schema2ForeignLayerMediaType = "application/vnd.docker.image.rootfs.foreign.diff.tar.gzip"
)
// DefaultRequestedManifestMIMETypes is a list of MIME types a types.ImageSource
@@ -30,6 +35,7 @@ var DefaultRequestedManifestMIMETypes = []string{
DockerV2Schema2MediaType,
DockerV2Schema1SignedMediaType,
DockerV2Schema1MediaType,
DockerV2ListMediaType,
}
// GuessMIMEType guesses MIME type of a manifest and returns it _if it is recognized_, or "" if unknown or unrecognized.
@@ -65,7 +71,7 @@ func GuessMIMEType(manifest []byte) string {
}
// Digest returns a digest of a docker manifest, with any necessary implied transformations like stripping v1s1 signatures.
func Digest(manifest []byte) (string, error) {
func Digest(manifest []byte) (digest.Digest, error) {
if GuessMIMEType(manifest) == DockerV2Schema1SignedMediaType {
sig, err := libtrust.ParsePrettySignature(manifest, "signatures")
if err != nil {
@@ -79,15 +85,14 @@ func Digest(manifest []byte) (string, error) {
}
}
hash := sha256.Sum256(manifest)
return "sha256:" + hex.EncodeToString(hash[:]), nil
return digest.FromBytes(manifest), nil
}
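A sketch of the typed API after this hunk (illustrative helper, not part of the diff):

package example

import "github.com/containers/image/manifest"

// selfCheck recomputes a manifest's digest and confirms MatchesDigest agrees;
// with this change both values are digest.Digest rather than raw strings.
func selfCheck(manifestBytes []byte) (bool, error) {
    d, err := manifest.Digest(manifestBytes) // strips v1s1 signatures first, if present
    if err != nil {
        return false, err
    }
    return manifest.MatchesDigest(manifestBytes, d)
}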
// MatchesDigest returns true iff the manifest matches expectedDigest.
// Error may be set if this returns false.
// Note that this is not doing ConstantTimeCompare; by the time we get here, the cryptographic signature must already have been verified,
// or we are not using a cryptographic channel and the attacker can modify the digest along with the manifest blob.
func MatchesDigest(manifest []byte, expectedDigest string) (bool, error) {
func MatchesDigest(manifest []byte, expectedDigest digest.Digest) (bool, error) {
// This should eventually support various digest types.
actualDigest, err := Digest(manifest)
if err != nil {


@@ -1,19 +1,17 @@
package layout
import (
"crypto/sha256"
"encoding/hex"
"encoding/json"
"errors"
"fmt"
"io"
"io/ioutil"
"os"
"path/filepath"
"github.com/pkg/errors"
"github.com/containers/image/manifest"
"github.com/containers/image/types"
imgspec "github.com/opencontainers/image-spec/specs-go"
"github.com/opencontainers/go-digest"
imgspecv1 "github.com/opencontainers/image-spec/specs-go/v1"
)
@@ -33,24 +31,30 @@ func (d *ociImageDestination) Reference() types.ImageReference {
}
// Close removes resources associated with an initialized ImageDestination, if any.
func (d *ociImageDestination) Close() {
func (d *ociImageDestination) Close() error {
return nil
}
func (d *ociImageDestination) SupportedManifestMIMETypes() []string {
return []string{
imgspecv1.MediaTypeImageManifest,
manifest.DockerV2Schema2MediaType,
}
}
// SupportsSignatures returns an error (to be displayed to the user) if the destination certainly can't store signatures.
// Note: It is still possible for PutSignatures to fail if SupportsSignatures returns nil.
func (d *ociImageDestination) SupportsSignatures() error {
return fmt.Errorf("Pushing signatures for OCI images is not supported")
return errors.Errorf("Pushing signatures for OCI images is not supported")
}
// ShouldCompressLayers returns true iff it is desirable to compress layer blobs written to this destination.
func (d *ociImageDestination) ShouldCompressLayers() bool {
return true
}
// AcceptsForeignLayerURLs returns false iff foreign layers in the manifest should actually be
// uploaded to the image destination, true otherwise.
func (d *ociImageDestination) AcceptsForeignLayerURLs() bool {
return false
}
@@ -76,16 +80,16 @@ func (d *ociImageDestination) PutBlob(stream io.Reader, inputInfo types.BlobInfo
}
}()
h := sha256.New()
tee := io.TeeReader(stream, h)
digester := digest.Canonical.Digester()
tee := io.TeeReader(stream, digester.Hash())
size, err := io.Copy(blobFile, tee)
if err != nil {
return types.BlobInfo{}, err
}
computedDigest := "sha256:" + hex.EncodeToString(h.Sum(nil))
computedDigest := digester.Digest()
if inputInfo.Size != -1 && size != inputInfo.Size {
return types.BlobInfo{}, fmt.Errorf("Size mismatch when copying %s, expected %d, got %d", computedDigest, inputInfo.Size, size)
return types.BlobInfo{}, errors.Errorf("Size mismatch when copying %s, expected %d, got %d", computedDigest, inputInfo.Size, size)
}
if err := blobFile.Sync(); err != nil {
return types.BlobInfo{}, err
@@ -108,56 +112,42 @@ func (d *ociImageDestination) PutBlob(stream io.Reader, inputInfo types.BlobInfo
return types.BlobInfo{Digest: computedDigest, Size: size}, nil
}
func createManifest(m []byte) ([]byte, string, error) {
om := imgspecv1.Manifest{}
mt := manifest.GuessMIMEType(m)
switch mt {
case manifest.DockerV2Schema1MediaType, manifest.DockerV2Schema1SignedMediaType:
// There is a simple reason this is not implemented yet:
// the OCI image-spec guarantees backward compatibility with docker v2s2, but not with v2s1;
// generating a v2s2 manifest is a migration docker performs when upgrading to 1.10.3,
// and I don't think we should bother with that now (I don't want to have migration code here in skopeo)
return nil, "", errors.New("can't create an OCI manifest from Docker V2 schema 1 manifest")
case manifest.DockerV2Schema2MediaType:
if err := json.Unmarshal(m, &om); err != nil {
return nil, "", err
}
om.MediaType = imgspecv1.MediaTypeImageManifest
for i := range om.Layers {
om.Layers[i].MediaType = imgspecv1.MediaTypeImageLayer
}
om.Config.MediaType = imgspecv1.MediaTypeImageConfig
b, err := json.Marshal(om)
if err != nil {
return nil, "", err
}
return b, om.MediaType, nil
case manifest.DockerV2ListMediaType:
return nil, "", errors.New("can't create an OCI manifest from Docker V2 schema 2 manifest list")
case imgspecv1.MediaTypeImageManifestList:
return nil, "", errors.New("can't create an OCI manifest from OCI manifest list")
case imgspecv1.MediaTypeImageManifest:
return m, mt, nil
// HasBlob returns true iff the image destination already contains a blob with the matching digest which can be reapplied using ReapplyBlob.
// Unlike PutBlob, the digest can not be empty. If HasBlob returns true, the size of the blob must also be returned.
// If the destination does not contain the blob, or it is unknown, HasBlob ordinarily returns (false, -1, nil);
// it returns a non-nil error only on an unexpected failure.
func (d *ociImageDestination) HasBlob(info types.BlobInfo) (bool, int64, error) {
if info.Digest == "" {
return false, -1, errors.Errorf("Can not check for a blob with unknown digest")
}
return nil, "", fmt.Errorf("unrecognized manifest media type %q", mt)
blobPath, err := d.ref.blobPath(info.Digest)
if err != nil {
return false, -1, err
}
finfo, err := os.Stat(blobPath)
if err != nil && os.IsNotExist(err) {
return false, -1, nil
}
if err != nil {
return false, -1, err
}
return true, finfo.Size(), nil
}
func (d *ociImageDestination) ReapplyBlob(info types.BlobInfo) (types.BlobInfo, error) {
return info, nil
}
func (d *ociImageDestination) PutManifest(m []byte) error {
// TODO(mitr, runcom): this breaks signatures entirely since at this point we're creating a new manifest
// and signatures don't apply anymore. Will fix.
ociMan, mt, err := createManifest(m)
digest, err := manifest.Digest(m)
if err != nil {
return err
}
digest, err := manifest.Digest(ociMan)
if err != nil {
return err
}
desc := imgspec.Descriptor{}
desc := imgspecv1.Descriptor{}
desc.Digest = digest
// TODO(runcom): be aware of and add support for OCI manifest lists
desc.MediaType = mt
desc.Size = int64(len(ociMan))
desc.MediaType = imgspecv1.MediaTypeImageManifest
desc.Size = int64(len(m))
data, err := json.Marshal(desc)
if err != nil {
return err
@@ -167,7 +157,10 @@ func (d *ociImageDestination) PutManifest(m []byte) error {
if err != nil {
return err
}
if err := ioutil.WriteFile(blobPath, ociMan, 0644); err != nil {
if err := ensureParentDirectoryExists(blobPath); err != nil {
return err
}
if err := ioutil.WriteFile(blobPath, m, 0644); err != nil {
return err
}
// TODO(runcom): ugly here?
@@ -197,7 +190,7 @@ func ensureParentDirectoryExists(path string) error {
func (d *ociImageDestination) PutSignatures(signatures [][]byte) error {
if len(signatures) != 0 {
return fmt.Errorf("Pushing signatures for OCI images is not supported")
return errors.Errorf("Pushing signatures for OCI images is not supported")
}
return nil
}


@@ -0,0 +1,98 @@
package layout
import (
"encoding/json"
"io"
"io/ioutil"
"os"
"github.com/containers/image/types"
"github.com/opencontainers/go-digest"
imgspecv1 "github.com/opencontainers/image-spec/specs-go/v1"
)
type ociImageSource struct {
ref ociReference
}
// newImageSource returns an ImageSource for reading from an existing directory.
func newImageSource(ref ociReference) types.ImageSource {
return &ociImageSource{ref: ref}
}
// Reference returns the reference used to set up this source.
func (s *ociImageSource) Reference() types.ImageReference {
return s.ref
}
// Close removes resources associated with an initialized ImageSource, if any.
func (s *ociImageSource) Close() error {
return nil
}
// GetManifest returns the image's manifest along with its MIME type (which may be empty when it can't be determined but the manifest is available).
// It may use a remote (= slow) service.
func (s *ociImageSource) GetManifest() ([]byte, string, error) {
descriptorPath := s.ref.descriptorPath(s.ref.tag)
data, err := ioutil.ReadFile(descriptorPath)
if err != nil {
return nil, "", err
}
desc := imgspecv1.Descriptor{}
err = json.Unmarshal(data, &desc)
if err != nil {
return nil, "", err
}
manifestPath, err := s.ref.blobPath(digest.Digest(desc.Digest))
if err != nil {
return nil, "", err
}
m, err := ioutil.ReadFile(manifestPath)
if err != nil {
return nil, "", err
}
return m, desc.MediaType, nil
}
func (s *ociImageSource) GetTargetManifest(digest digest.Digest) ([]byte, string, error) {
manifestPath, err := s.ref.blobPath(digest)
if err != nil {
return nil, "", err
}
m, err := ioutil.ReadFile(manifestPath)
if err != nil {
return nil, "", err
}
// XXX: GetTargetManifest means that we don't have the context of what
// mediaType the manifest has. In OCI this means that we don't know
// what reference it came from, so we just *assume* that its
// MediaTypeImageManifest.
return m, imgspecv1.MediaTypeImageManifest, nil
}
// GetBlob returns a stream for the specified blob, and the blob's size.
func (s *ociImageSource) GetBlob(info types.BlobInfo) (io.ReadCloser, int64, error) {
path, err := s.ref.blobPath(info.Digest)
if err != nil {
return nil, 0, err
}
r, err := os.Open(path)
if err != nil {
return nil, 0, err
}
fi, err := r.Stat()
if err != nil {
return nil, 0, err
}
return r, fi.Size(), nil
}
func (s *ociImageSource) GetSignatures() ([][]byte, error) {
return [][]byte{}, nil
}


@@ -1,17 +1,24 @@
package layout
import (
"errors"
"fmt"
"path/filepath"
"regexp"
"strings"
"github.com/containers/image/directory/explicitfilepath"
"github.com/containers/image/docker/reference"
"github.com/containers/image/image"
"github.com/containers/image/transports"
"github.com/containers/image/types"
"github.com/docker/docker/reference"
"github.com/opencontainers/go-digest"
"github.com/pkg/errors"
)
func init() {
transports.Register(Transport)
}
// Transport is an ImageTransport for OCI directories.
var Transport = ociTransport{}
@@ -41,16 +48,16 @@ func (t ociTransport) ValidatePolicyConfigurationScope(scope string) error {
dir = scope[:sep]
tag := scope[sep+1:]
if !refRegexp.MatchString(tag) {
return fmt.Errorf("Invalid tag %s", tag)
return errors.Errorf("Invalid tag %s", tag)
}
}
if strings.Contains(dir, ":") {
return fmt.Errorf("Invalid OCI reference %s: path contains a colon", scope)
return errors.Errorf("Invalid OCI reference %s: path contains a colon", scope)
}
if !strings.HasPrefix(dir, "/") {
return fmt.Errorf("Invalid scope %s: must be an absolute path", scope)
return errors.Errorf("Invalid scope %s: must be an absolute path", scope)
}
// Refuse also "/", otherwise "/" and "" would have the same semantics,
// and "" could be unexpectedly shadowed by the "/" entry.
@@ -60,7 +67,7 @@ func (t ociTransport) ValidatePolicyConfigurationScope(scope string) error {
}
cleaned := filepath.Clean(dir)
if cleaned != dir {
return fmt.Errorf(`Invalid scope %s: Uses non-canonical path format, perhaps try with path %s`, scope, cleaned)
return errors.Errorf(`Invalid scope %s: Uses non-canonical path format, perhaps try with path %s`, scope, cleaned)
}
return nil
}
@@ -104,10 +111,10 @@ func NewReference(dir, tag string) (types.ImageReference, error) {
// This is necessary to prevent directory paths returned by PolicyConfigurationNamespaces
// from being ambiguous with values of PolicyConfigurationIdentity.
if strings.Contains(resolved, ":") {
return nil, fmt.Errorf("Invalid OCI reference %s:%s: path %s contains a colon", dir, tag, resolved)
return nil, errors.Errorf("Invalid OCI reference %s:%s: path %s contains a colon", dir, tag, resolved)
}
if !refRegexp.MatchString(tag) {
return nil, fmt.Errorf("Invalid tag %s", tag)
return nil, errors.Errorf("Invalid tag %s", tag)
}
return ociReference{dir: dir, resolvedDir: resolved, tag: tag}, nil
}
@@ -164,10 +171,13 @@ func (ref ociReference) PolicyConfigurationNamespaces() []string {
return res
}
// NewImage returns a types.Image for this reference.
// NewImage returns a types.Image for this reference, possibly specialized for this ImageTransport.
// The caller must call .Close() on the returned Image.
// NOTE: If any kind of signature verification should happen, build an UnparsedImage from the value returned by NewImageSource,
// verify that UnparsedImage, and convert it into a real Image via image.FromUnparsedImage.
func (ref ociReference) NewImage(ctx *types.SystemContext) (types.Image, error) {
return nil, errors.New("Full Image support not implemented for oci: image names")
src := newImageSource(ref)
return image.FromSource(src)
}
// NewImageSource returns a types.ImageSource for this reference,
@@ -175,7 +185,7 @@ func (ref ociReference) NewImage(ctx *types.SystemContext) (types.Image, error)
// nil requestedManifestMIMETypes means manifest.DefaultRequestedManifestMIMETypes.
// The caller must call .Close() on the returned ImageSource.
func (ref ociReference) NewImageSource(ctx *types.SystemContext, requestedManifestMIMETypes []string) (types.ImageSource, error) {
return nil, errors.New("Reading images not implemented for oci: image names")
return newImageSource(ref), nil
}
// NewImageDestination returns a types.ImageDestination for this reference.
@@ -186,7 +196,7 @@ func (ref ociReference) NewImageDestination(ctx *types.SystemContext) (types.Ima
// DeleteImage deletes the named image from the registry, if supported.
func (ref ociReference) DeleteImage(ctx *types.SystemContext) error {
return fmt.Errorf("Deleting images not implemented for oci: images")
return errors.Errorf("Deleting images not implemented for oci: images")
}
// ociLayoutPathPath returns a path for the oci-layout within a directory using OCI conventions.
@@ -195,12 +205,11 @@ func (ref ociReference) ociLayoutPath() string {
}
// blobPath returns a path for a blob within a directory using OCI image-layout conventions.
func (ref ociReference) blobPath(digest string) (string, error) {
pts := strings.SplitN(digest, ":", 2)
if len(pts) != 2 {
return "", fmt.Errorf("unexpected digest reference %s", digest)
func (ref ociReference) blobPath(digest digest.Digest) (string, error) {
if err := digest.Validate(); err != nil {
return "", errors.Wrapf(err, "unexpected digest reference %s", digest)
}
return filepath.Join(ref.dir, "blobs", pts[0], pts[1]), nil
return filepath.Join(ref.dir, "blobs", digest.Algorithm().String(), digest.Hex()), nil
}
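For reference, the path computation in isolation (a sketch; the root parameter and helper name are illustrative, not part of the diff):

package example

import (
    "path/filepath"

    "github.com/opencontainers/go-digest"
)

// blobPathFor mirrors the logic above for an arbitrary layout root:
// "sha256:abcd…" maps to <root>/blobs/sha256/abcd….
func blobPathFor(root string, d digest.Digest) (string, error) {
    if err := d.Validate(); err != nil { // rejects malformed digests up front
        return "", err
    }
    return filepath.Join(root, "blobs", d.Algorithm().String(), d.Hex()), nil
}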
// descriptorPath returns a path for the manifest within a directory using OCI conventions.


@@ -4,7 +4,6 @@ import (
"crypto/tls"
"crypto/x509"
"encoding/json"
"errors"
"fmt"
"io/ioutil"
"net"
@@ -14,13 +13,14 @@ import (
"path"
"path/filepath"
"reflect"
"strings"
"time"
"github.com/ghodss/yaml"
"github.com/imdario/mergo"
utilerrors "k8s.io/kubernetes/pkg/util/errors"
"k8s.io/kubernetes/pkg/util/homedir"
utilnet "k8s.io/kubernetes/pkg/util/net"
"github.com/pkg/errors"
"golang.org/x/net/http2"
"k8s.io/client-go/util/homedir"
)
// restTLSClientConfig is a modified copy of k8s.io/kubernets/pkg/client/restclient.TLSClientConfig.
@@ -348,20 +348,20 @@ func validateClusterInfo(clusterName string, clusterInfo clientcmdCluster) []err
if len(clusterInfo.Server) == 0 {
if len(clusterName) == 0 {
validationErrors = append(validationErrors, fmt.Errorf("default cluster has no server defined"))
validationErrors = append(validationErrors, errors.Errorf("default cluster has no server defined"))
} else {
validationErrors = append(validationErrors, fmt.Errorf("no server found for cluster %q", clusterName))
validationErrors = append(validationErrors, errors.Errorf("no server found for cluster %q", clusterName))
}
}
// Make sure CA data and CA file aren't both specified
if len(clusterInfo.CertificateAuthority) != 0 && len(clusterInfo.CertificateAuthorityData) != 0 {
validationErrors = append(validationErrors, fmt.Errorf("certificate-authority-data and certificate-authority are both specified for %v. certificate-authority-data will override", clusterName))
validationErrors = append(validationErrors, errors.Errorf("certificate-authority-data and certificate-authority are both specified for %v. certificate-authority-data will override", clusterName))
}
if len(clusterInfo.CertificateAuthority) != 0 {
clientCertCA, err := os.Open(clusterInfo.CertificateAuthority)
defer clientCertCA.Close()
if err != nil {
validationErrors = append(validationErrors, fmt.Errorf("unable to read certificate-authority %v for %v due to %v", clusterInfo.CertificateAuthority, clusterName, err))
validationErrors = append(validationErrors, errors.Errorf("unable to read certificate-authority %v for %v due to %v", clusterInfo.CertificateAuthority, clusterName, err))
}
}
@@ -385,36 +385,36 @@ func validateAuthInfo(authInfoName string, authInfo clientcmdAuthInfo) []error {
if len(authInfo.ClientCertificate) != 0 || len(authInfo.ClientCertificateData) != 0 {
// Make sure cert data and file aren't both specified
if len(authInfo.ClientCertificate) != 0 && len(authInfo.ClientCertificateData) != 0 {
validationErrors = append(validationErrors, fmt.Errorf("client-cert-data and client-cert are both specified for %v. client-cert-data will override", authInfoName))
validationErrors = append(validationErrors, errors.Errorf("client-cert-data and client-cert are both specified for %v. client-cert-data will override", authInfoName))
}
// Make sure key data and file aren't both specified
if len(authInfo.ClientKey) != 0 && len(authInfo.ClientKeyData) != 0 {
validationErrors = append(validationErrors, fmt.Errorf("client-key-data and client-key are both specified for %v; client-key-data will override", authInfoName))
validationErrors = append(validationErrors, errors.Errorf("client-key-data and client-key are both specified for %v; client-key-data will override", authInfoName))
}
// Make sure a key is specified
if len(authInfo.ClientKey) == 0 && len(authInfo.ClientKeyData) == 0 {
validationErrors = append(validationErrors, fmt.Errorf("client-key-data or client-key must be specified for %v to use the clientCert authentication method", authInfoName))
validationErrors = append(validationErrors, errors.Errorf("client-key-data or client-key must be specified for %v to use the clientCert authentication method", authInfoName))
}
if len(authInfo.ClientCertificate) != 0 {
clientCertFile, err := os.Open(authInfo.ClientCertificate)
defer clientCertFile.Close()
if err != nil {
validationErrors = append(validationErrors, fmt.Errorf("unable to read client-cert %v for %v due to %v", authInfo.ClientCertificate, authInfoName, err))
validationErrors = append(validationErrors, errors.Errorf("unable to read client-cert %v for %v due to %v", authInfo.ClientCertificate, authInfoName, err))
}
}
if len(authInfo.ClientKey) != 0 {
clientKeyFile, err := os.Open(authInfo.ClientKey)
defer clientKeyFile.Close()
if err != nil {
validationErrors = append(validationErrors, fmt.Errorf("unable to read client-key %v for %v due to %v", authInfo.ClientKey, authInfoName, err))
validationErrors = append(validationErrors, errors.Errorf("unable to read client-key %v for %v due to %v", authInfo.ClientKey, authInfoName, err))
}
}
}
// authPath also provides information for the client to identify the server, so allow multiple auth methods in that case
if (len(methods) > 1) && (!usingAuthPath) {
validationErrors = append(validationErrors, fmt.Errorf("more than one authentication method found for %v; found %v, only one is allowed", authInfoName, methods))
validationErrors = append(validationErrors, errors.Errorf("more than one authentication method found for %v; found %v, only one is allowed", authInfoName, methods))
}
return validationErrors
@@ -450,6 +450,55 @@ func (config *directClientConfig) getCluster() clientcmdCluster {
return mergedClusterInfo
}
// aggregateErr is a modified copy of k8s.io/apimachinery/pkg/util/errors.aggregate.
// This helper implements the error and Errors interfaces. Keeping it private
// prevents people from making an aggregate of 0 errors, which is not
// an error, but does satisfy the error interface.
type aggregateErr []error
// newAggregate is a modified copy of k8s.io/apimachinery/pkg/util/errors.NewAggregate.
// NewAggregate converts a slice of errors into an Aggregate interface, which
// is itself an implementation of the error interface. If the slice is empty,
// this returns nil.
// It checks whether any element of the input error list is nil, to avoid
// a nil pointer panic when Error() is called.
func newAggregate(errlist []error) error {
if len(errlist) == 0 {
return nil
}
// In case the input error list contains nil elements
var errs []error
for _, e := range errlist {
if e != nil {
errs = append(errs, e)
}
}
if len(errs) == 0 {
return nil
}
return aggregateErr(errs)
}
// Error is a modified copy of k8s.io/apimachinery/pkg/util/errors.aggregate.Error.
// Error is part of the error interface.
func (agg aggregateErr) Error() string {
if len(agg) == 0 {
// This should never happen, really.
return ""
}
if len(agg) == 1 {
return agg[0].Error()
}
result := fmt.Sprintf("[%s", agg[0].Error())
for i := 1; i < len(agg); i++ {
result += fmt.Sprintf(", %s", agg[i].Error())
}
result += "]"
return result
}
// REMOVED: aggregateErr.Errors
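Behavior sketch of the private aggregate (a hypothetical demo, assuming it lives in the same package as the unexported newAggregate; not part of the diff):

package example

import "errors"

// aggregateDemo illustrates the filtering and formatting rules: nil entries
// are dropped, all-nil input yields nil, and several errors join as "[a, b]".
func aggregateDemo() string {
    err := newAggregate([]error{nil, errors.New("a"), errors.New("b")})
    if err == nil {
        return ""
    }
    return err.Error() // "[a, b]"; a single survivor is returned verbatim
}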
// errConfigurationInvalid is a modified copy of k8s.io/kubernetes/pkg/client/unversioned/clientcmd.errConfigurationInvalid.
// errConfigurationInvalid is a set of errors indicating the configuration is invalid.
type errConfigurationInvalid []error
@@ -470,7 +519,7 @@ func newErrConfigurationInvalid(errs []error) error {
// Error implements the error interface
func (e errConfigurationInvalid) Error() string {
return fmt.Sprintf("invalid configuration: %v", utilerrors.NewAggregate(e).Error())
return fmt.Sprintf("invalid configuration: %v", newAggregate(e).Error())
}
// clientConfigLoadingRules is a modified copy of k8s.io/kubernetes/pkg/client/unversioned/clientcmd.ClientConfigLoadingRules
@@ -518,7 +567,7 @@ func (rules *clientConfigLoadingRules) Load() (*clientcmdConfig, error) {
continue
}
if err != nil {
errlist = append(errlist, fmt.Errorf("Error loading config file \"%s\": %v", filename, err))
errlist = append(errlist, errors.Wrapf(err, "Error loading config file \"%s\"", filename))
continue
}
@@ -550,7 +599,7 @@ func (rules *clientConfigLoadingRules) Load() (*clientcmdConfig, error) {
errlist = append(errlist, err)
}
return config, utilerrors.NewAggregate(errlist)
return config, newAggregate(errlist)
}
// loadFromFile is a modified copy of k8s.io/kubernetes/pkg/client/unversioned/clientcmd.LoadFromFile
@@ -623,7 +672,7 @@ func resolveLocalPaths(config *clientcmdConfig) error {
}
base, err := filepath.Abs(filepath.Dir(cluster.LocationOfOrigin))
if err != nil {
return fmt.Errorf("Could not determine the absolute path of config file %s: %v", cluster.LocationOfOrigin, err)
return errors.Wrapf(err, "Could not determine the absolute path of config file %s", cluster.LocationOfOrigin)
}
if err := resolvePaths(getClusterFileReferences(cluster), base); err != nil {
@@ -636,7 +685,7 @@ func resolveLocalPaths(config *clientcmdConfig) error {
}
base, err := filepath.Abs(filepath.Dir(authInfo.LocationOfOrigin))
if err != nil {
return fmt.Errorf("Could not determine the absolute path of config file %s: %v", authInfo.LocationOfOrigin, err)
return errors.Wrapf(err, "Could not determine the absolute path of config file %s", authInfo.LocationOfOrigin)
}
if err := resolvePaths(getAuthInfoFileReferences(authInfo), base); err != nil {
@@ -706,7 +755,7 @@ func restClientFor(config *restConfig) (*url.URL, *http.Client, error) {
// Kubernetes API.
func defaultServerURL(host string, defaultTLS bool) (*url.URL, error) {
if host == "" {
return nil, fmt.Errorf("host must be a URL or a host:port pair")
return nil, errors.Errorf("host must be a URL or a host:port pair")
}
base := host
hostURL, err := url.Parse(base)
@@ -723,7 +772,7 @@ func defaultServerURL(host string, defaultTLS bool) (*url.URL, error) {
return nil, err
}
if hostURL.Path != "" && hostURL.Path != "/" {
return nil, fmt.Errorf("host must be a URL or a host:port pair: %q", base)
return nil, errors.Errorf("host must be a URL or a host:port pair: %q", base)
}
}
@@ -793,12 +842,58 @@ func transportNew(config *restConfig) (http.RoundTripper, error) {
// REMOVED: HTTPWrappersForConfig(config, rt) in favor of the caller setting HTTP headers itself based on restConfig. Only this inlined check remains.
if len(config.Username) != 0 && len(config.BearerToken) != 0 {
return nil, fmt.Errorf("username/password or bearer token may be set, but not both")
return nil, errors.Errorf("username/password or bearer token may be set, but not both")
}
return rt, nil
}
// newProxierWithNoProxyCIDR is a modified copy of k8s.io/apimachinery/pkg/util/net.NewProxierWithNoProxyCIDR.
// NewProxierWithNoProxyCIDR constructs a Proxier function that respects CIDRs in NO_PROXY and delegates if
// no matching CIDRs are found
func newProxierWithNoProxyCIDR(delegate func(req *http.Request) (*url.URL, error)) func(req *http.Request) (*url.URL, error) {
// we wrap the default method, so we only need to perform our check if the NO_PROXY envvar has a CIDR in it
noProxyEnv := os.Getenv("NO_PROXY")
noProxyRules := strings.Split(noProxyEnv, ",")
cidrs := []*net.IPNet{}
for _, noProxyRule := range noProxyRules {
_, cidr, _ := net.ParseCIDR(noProxyRule)
if cidr != nil {
cidrs = append(cidrs, cidr)
}
}
if len(cidrs) == 0 {
return delegate
}
return func(req *http.Request) (*url.URL, error) {
host := req.URL.Host
// for some urls, the Host is already the host, not the host:port
if net.ParseIP(host) == nil {
var err error
host, _, err = net.SplitHostPort(req.URL.Host)
if err != nil {
return delegate(req)
}
}
ip := net.ParseIP(host)
if ip == nil {
return delegate(req)
}
for _, cidr := range cidrs {
if cidr.Contains(ip) {
return nil, nil
}
}
return delegate(req)
}
}
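A sketch of how this proxier slots into net/http (addresses are illustrative; assumes the same package as the unexported helper, not part of the diff):

package example

import "net/http"

// cidrAwareTransport wires the CIDR-aware proxier into a standard transport;
// with NO_PROXY=10.0.0.0/8, a request to http://10.1.2.3/ bypasses the proxy.
func cidrAwareTransport() *http.Transport {
    return &http.Transport{
        Proxy: newProxierWithNoProxyCIDR(http.ProxyFromEnvironment),
    }
}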
// tlsCacheGet is a modified copy of k8s.io/kubernetes/pkg/client/transport.tlsTransportCache.get.
func tlsCacheGet(config *restConfig) (http.RoundTripper, error) {
// REMOVED: any actual caching
@@ -813,15 +908,23 @@ func tlsCacheGet(config *restConfig) (http.RoundTripper, error) {
return http.DefaultTransport, nil
}
return utilnet.SetTransportDefaults(&http.Transport{ // FIXME??
Proxy: http.ProxyFromEnvironment,
// REMOVED: Call to k8s.io/apimachinery/pkg/util/net.SetTransportDefaults; instead of the generic machinery and conditionals, hard-coded the result here.
t := &http.Transport{
// http.ProxyFromEnvironment doesn't respect CIDRs and that makes it impossible to exclude things like pod and service IPs from proxy settings
// ProxierWithNoProxyCIDR allows CIDR rules in NO_PROXY
Proxy: newProxierWithNoProxyCIDR(http.ProxyFromEnvironment),
TLSHandshakeTimeout: 10 * time.Second,
TLSClientConfig: tlsConfig,
Dial: (&net.Dialer{
Timeout: 30 * time.Second,
KeepAlive: 30 * time.Second,
}).Dial,
}), nil
}
// Allow clients to disable http2 if needed.
if s := os.Getenv("DISABLE_HTTP2"); len(s) == 0 {
_ = http2.ConfigureTransport(t)
}
return t, nil
}
// tlsConfigFor is a modified copy of k8s.io/kubernetes/pkg/client/transport.TLSConfigFor.
@@ -832,7 +935,7 @@ func tlsConfigFor(c *restConfig) (*tls.Config, error) {
return nil, nil
}
if c.HasCA() && c.Insecure {
return nil, fmt.Errorf("specifying a root certificates file with the insecure flag is not allowed")
return nil, errors.Errorf("specifying a root certificates file with the insecure flag is not allowed")
}
if err := loadTLSFiles(c); err != nil {
return nil, err


@@ -4,7 +4,6 @@ import (
"bytes"
"crypto/rand"
"encoding/json"
"errors"
"fmt"
"io"
"io/ioutil"
@@ -14,9 +13,12 @@ import (
"github.com/Sirupsen/logrus"
"github.com/containers/image/docker"
"github.com/containers/image/docker/reference"
"github.com/containers/image/manifest"
"github.com/containers/image/types"
"github.com/containers/image/version"
"github.com/opencontainers/go-digest"
"github.com/pkg/errors"
)
// openshiftClient is configuration for dealing with a single image stream, for reading or writing.
@@ -123,7 +125,7 @@ func (c *openshiftClient) doRequest(method, path string, requestBody []byte) ([]
if statusValid {
return nil, errors.New(status.Message)
}
return nil, fmt.Errorf("HTTP error: status code: %d, body: %s", res.StatusCode, string(body))
return nil, errors.Errorf("HTTP error: status code: %d, body: %s", res.StatusCode, string(body))
}
return body, nil
@@ -150,9 +152,9 @@ func (c *openshiftClient) getImage(imageStreamImageName string) (*image, error)
func (c *openshiftClient) convertDockerImageReference(ref string) (string, error) {
parts := strings.SplitN(ref, "/", 2)
if len(parts) != 2 {
return "", fmt.Errorf("Invalid format of docker reference %s: missing '/'", ref)
return "", errors.Errorf("Invalid format of docker reference %s: missing '/'", ref)
}
return c.ref.dockerReference.Hostname() + "/" + parts[1], nil
return reference.Domain(c.ref.dockerReference) + "/" + parts[1], nil
}
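A short sketch of the reference helpers used in this hunk, which replace the old Hostname()/RemoteName() methods (values are illustrative, not part of the diff):

package example

import "github.com/containers/image/docker/reference"

// splitRef demonstrates reference.Domain and reference.Path on a parsed name.
func splitRef() (string, string, error) {
    ref, err := reference.ParseNormalizedNamed("registry.example.com/ns/stream:tag")
    if err != nil {
        return "", "", err
    }
    // Domain → "registry.example.com", Path → "ns/stream"
    return reference.Domain(ref), reference.Path(ref), nil
}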
type openshiftImageSource struct {
@@ -189,13 +191,26 @@ func (s *openshiftImageSource) Reference() types.ImageReference {
}
// Close removes resources associated with an initialized ImageSource, if any.
func (s *openshiftImageSource) Close() {
func (s *openshiftImageSource) Close() error {
if s.docker != nil {
s.docker.Close()
err := s.docker.Close()
s.docker = nil
return err
}
return nil
}
func (s *openshiftImageSource) GetTargetManifest(digest digest.Digest) ([]byte, string, error) {
if err := s.ensureImageIsResolved(); err != nil {
return nil, "", err
}
return s.docker.GetTargetManifest(digest)
}
// GetManifest returns the image's manifest along with its MIME type (which may be empty when it can't be determined but the manifest is available).
// It may use a remote (= slow) service.
func (s *openshiftImageSource) GetManifest() ([]byte, string, error) {
if err := s.ensureImageIsResolved(); err != nil {
return nil, "", err
@@ -204,11 +219,11 @@ func (s *openshiftImageSource) GetManifest() ([]byte, string, error) {
}
// GetBlob returns a stream for the specified blob, and the blobs size (or -1 if unknown).
func (s *openshiftImageSource) GetBlob(digest string) (io.ReadCloser, int64, error) {
func (s *openshiftImageSource) GetBlob(info types.BlobInfo) (io.ReadCloser, int64, error) {
if err := s.ensureImageIsResolved(); err != nil {
return nil, 0, err
}
return s.docker.GetBlob(digest)
return s.docker.GetBlob(info)
}
func (s *openshiftImageSource) GetSignatures() ([][]byte, error) {
@@ -257,7 +272,7 @@ func (s *openshiftImageSource) ensureImageIsResolved() error {
}
}
if te == nil {
return fmt.Errorf("No matching tag found")
return errors.Errorf("No matching tag found")
}
logrus.Debugf("tag event %#v", te)
dockerRefString, err := s.client.convertDockerImageReference(te.DockerImageReference)
@@ -295,7 +310,7 @@ func newImageDestination(ctx *types.SystemContext, ref openshiftReference) (type
// FIXME: Should this always use a digest, not a tag? Uploading to Docker by tag requires the tag _inside_ the manifest to match,
// i.e. a single signed image cannot be available under multiple tags. But with types.ImageDestination, we don't know
// the manifest digest at this point.
dockerRefString := fmt.Sprintf("//%s/%s/%s:%s", client.ref.dockerReference.Hostname(), client.ref.namespace, client.ref.stream, client.ref.dockerReference.Tag())
dockerRefString := fmt.Sprintf("//%s/%s/%s:%s", reference.Domain(client.ref.dockerReference), client.ref.namespace, client.ref.stream, client.ref.dockerReference.Tag())
dockerRef, err := docker.ParseReference(dockerRefString)
if err != nil {
return nil, err
@@ -318,8 +333,8 @@ func (d *openshiftImageDestination) Reference() types.ImageReference {
}
// Close removes resources associated with an initialized ImageDestination, if any.
func (d *openshiftImageDestination) Close() {
d.docker.Close()
func (d *openshiftImageDestination) Close() error {
return d.docker.Close()
}
func (d *openshiftImageDestination) SupportedManifestMIMETypes() []string {
@@ -340,6 +355,12 @@ func (d *openshiftImageDestination) ShouldCompressLayers() bool {
return true
}
// AcceptsForeignLayerURLs returns false iff foreign layers in the manifest should actually be
// uploaded to the image destination, true otherwise.
func (d *openshiftImageDestination) AcceptsForeignLayerURLs() bool {
return true
}
// PutBlob writes contents of stream and returns data representing the result (with all data filled in).
// inputInfo.Digest can be optionally provided if known; it is not mandatory for the implementation to verify it.
// inputInfo.Size is the expected length of stream, if known.
@@ -350,19 +371,31 @@ func (d *openshiftImageDestination) PutBlob(stream io.Reader, inputInfo types.Bl
return d.docker.PutBlob(stream, inputInfo)
}
// HasBlob returns true iff the image destination already contains a blob with the matching digest which can be reapplied using ReapplyBlob.
// Unlike PutBlob, the digest can not be empty. If HasBlob returns true, the size of the blob must also be returned.
// If the destination does not contain the blob, or it is unknown, HasBlob ordinarily returns (false, -1, nil);
// it returns a non-nil error only on an unexpected failure.
func (d *openshiftImageDestination) HasBlob(info types.BlobInfo) (bool, int64, error) {
return d.docker.HasBlob(info)
}
func (d *openshiftImageDestination) ReapplyBlob(info types.BlobInfo) (types.BlobInfo, error) {
return d.docker.ReapplyBlob(info)
}
func (d *openshiftImageDestination) PutManifest(m []byte) error {
manifestDigest, err := manifest.Digest(m)
if err != nil {
return err
}
d.imageStreamImageName = manifestDigest
d.imageStreamImageName = manifestDigest.String()
return d.docker.PutManifest(m)
}
func (d *openshiftImageDestination) PutSignatures(signatures [][]byte) error {
if d.imageStreamImageName == "" {
return fmt.Errorf("Internal error: Unknown manifest digest, can't add signatures")
return errors.Errorf("Internal error: Unknown manifest digest, can't add signatures")
}
// Because image signatures are a shared resource in Atomic Registry, the default upload
// always adds signatures. Eventually we should also allow removing signatures.
@@ -394,7 +427,7 @@ sigExists:
randBytes := make([]byte, 16)
n, err := rand.Read(randBytes)
if err != nil || n != 16 {
return fmt.Errorf("Error generating random signature ID: %v, len %d", err, n)
return errors.Wrapf(err, "Error generating random signature ID, len %d", n)
}
signatureName = fmt.Sprintf("%s@%032x", d.imageStreamImageName, randBytes)
if _, ok := existingSigNames[signatureName]; !ok {


@@ -6,11 +6,17 @@ import (
"strings"
"github.com/containers/image/docker/policyconfiguration"
"github.com/containers/image/docker/reference"
genericImage "github.com/containers/image/image"
"github.com/containers/image/transports"
"github.com/containers/image/types"
"github.com/docker/docker/reference"
"github.com/pkg/errors"
)
func init() {
transports.Register(Transport)
}
// Transport is an ImageTransport for OpenShift registry-hosted images.
var Transport = openshiftTransport{}
@@ -36,7 +42,7 @@ var scopeRegexp = regexp.MustCompile("^[^/]*(/[^:/]*(/[^:/]*(:[^:/]*)?)?)?$")
// scope passed to this function will not be "", that value is always allowed.
func (t openshiftTransport) ValidatePolicyConfigurationScope(scope string) error {
if scopeRegexp.FindStringIndex(scope) == nil {
return fmt.Errorf("Invalid scope name %s", scope)
return errors.Errorf("Invalid scope name %s", scope)
}
return nil
}
@@ -50,22 +56,23 @@ type openshiftReference struct {
// ParseReference converts a string, which should not start with the ImageTransport.Name prefix, into an OpenShift ImageReference.
func ParseReference(ref string) (types.ImageReference, error) {
r, err := reference.ParseNamed(ref)
r, err := reference.ParseNormalizedNamed(ref)
if err != nil {
return nil, fmt.Errorf("failed to parse image reference %q, %v", ref, err)
return nil, errors.Wrapf(err, "failed to parse image reference %q", ref)
}
tagged, ok := r.(reference.NamedTagged)
if !ok {
return nil, fmt.Errorf("invalid image reference %s, %#v", ref, r)
return nil, errors.Errorf("invalid image reference %s, expected format: 'hostname/namespace/stream:tag'", ref)
}
return NewReference(tagged)
}
// NewReference returns an OpenShift reference for a reference.NamedTagged
func NewReference(dockerRef reference.NamedTagged) (types.ImageReference, error) {
r := strings.SplitN(dockerRef.RemoteName(), "/", 3)
r := strings.SplitN(reference.Path(dockerRef), "/", 3)
if len(r) != 2 {
return nil, fmt.Errorf("invalid image reference %s", dockerRef.String())
return nil, errors.Errorf("invalid image reference: %s, expected format: 'hostname/namespace/stream:tag'",
reference.FamiliarString(dockerRef))
}
return openshiftReference{
namespace: r[0],
@@ -84,7 +91,7 @@ func (ref openshiftReference) Transport() types.ImageTransport {
// e.g. default attribute values omitted by the user may be filled in in the return value, or vice versa.
// WARNING: Do not use the return value in the UI to describe an image, it does not contain the Transport().Name() prefix.
func (ref openshiftReference) StringWithinTransport() string {
return ref.dockerReference.String()
return reference.FamiliarString(ref.dockerReference)
}
// DockerReference returns a Docker reference associated with this reference
@@ -118,14 +125,16 @@ func (ref openshiftReference) PolicyConfigurationNamespaces() []string {
return policyconfiguration.DockerReferenceNamespaces(ref.dockerReference)
}
// NewImage returns a types.Image for this reference.
// NewImage returns a types.Image for this reference, possibly specialized for this ImageTransport.
// The caller must call .Close() on the returned Image.
// NOTE: If any kind of signature verification should happen, build an UnparsedImage from the value returned by NewImageSource,
// verify that UnparsedImage, and convert it into a real Image via image.FromUnparsedImage.
func (ref openshiftReference) NewImage(ctx *types.SystemContext) (types.Image, error) {
src, err := newImageSource(ctx, ref, nil)
if err != nil {
return nil, err
}
return genericImage.FromSource(src), nil
return genericImage.FromSource(src)
}
// NewImageSource returns a types.ImageSource for this reference,
@@ -144,5 +153,5 @@ func (ref openshiftReference) NewImageDestination(ctx *types.SystemContext) (typ
// DeleteImage deletes the named image from the registry, if supported.
func (ref openshiftReference) DeleteImage(ctx *types.SystemContext) error {
return fmt.Errorf("Deleting images not implemented for atomic: images")
return errors.Errorf("Deleting images not implemented for atomic: images")
}


@@ -0,0 +1,67 @@
package compression
import (
"bytes"
"compress/bzip2"
"compress/gzip"
"io"
"github.com/pkg/errors"
"github.com/Sirupsen/logrus"
)
// DecompressorFunc returns the decompressed stream, given a compressed stream.
type DecompressorFunc func(io.Reader) (io.Reader, error)
// GzipDecompressor is a DecompressorFunc for the gzip compression algorithm.
func GzipDecompressor(r io.Reader) (io.Reader, error) {
return gzip.NewReader(r)
}
// Bzip2Decompressor is a DecompressorFunc for the bzip2 compression algorithm.
func Bzip2Decompressor(r io.Reader) (io.Reader, error) {
return bzip2.NewReader(r), nil
}
// XzDecompressor is a DecompressorFunc for the xz compression algorithm.
func XzDecompressor(r io.Reader) (io.Reader, error) {
return nil, errors.New("Decompressing xz streams is not supported")
}
// compressionAlgos is an internal implementation detail of DetectCompression
var compressionAlgos = map[string]struct {
prefix []byte
decompressor DecompressorFunc
}{
"gzip": {[]byte{0x1F, 0x8B, 0x08}, GzipDecompressor}, // gzip (RFC 1952)
"bzip2": {[]byte{0x42, 0x5A, 0x68}, Bzip2Decompressor}, // bzip2 (decompress.c:BZ2_decompress)
"xz": {[]byte{0xFD, 0x37, 0x7A, 0x58, 0x5A, 0x00}, XzDecompressor}, // xz (/usr/share/doc/xz/xz-file-format.txt)
}
// DetectCompression returns a DecompressorFunc if the input is recognized as a compressed format, nil otherwise.
// Because it consumes the start of input, other consumers must use the returned io.Reader instead to also read from the beginning.
func DetectCompression(input io.Reader) (DecompressorFunc, io.Reader, error) {
buffer := [8]byte{}
n, err := io.ReadAtLeast(input, buffer[:], len(buffer))
if err != nil && err != io.EOF && err != io.ErrUnexpectedEOF {
// This is a “real” error. We could just ignore it this time, process the data we have, and hope that the source will report the same error again.
// Instead, fail immediately with the original error cause instead of a possibly secondary/misleading error returned later.
return nil, nil, err
}
var decompressor DecompressorFunc
for name, algo := range compressionAlgos {
if bytes.HasPrefix(buffer[:n], algo.prefix) {
logrus.Debugf("Detected compression format %s", name)
decompressor = algo.decompressor
break
}
}
if decompressor == nil {
logrus.Debugf("No compression detected")
}
return decompressor, io.MultiReader(bytes.NewReader(buffer[:n]), input), nil
}
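Typical call pattern (a sketch, not part of the diff; note the returned reader must replace the original, since DetectCompression consumed its first bytes):

package example

import (
    "io"

    "github.com/containers/image/pkg/compression"
)

// maybeDecompress transparently unwraps recognized formats and passes
// unrecognized (e.g. already-uncompressed) streams through untouched.
func maybeDecompress(input io.Reader) (io.Reader, error) {
    decompressor, reader, err := compression.DetectCompression(input)
    if err != nil {
        return nil, err
    }
    if decompressor == nil {
        return reader, nil // no known magic prefix detected
    }
    return decompressor(reader) // reader, not input: the prefix was consumed
}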


@@ -0,0 +1 @@
This package was replicated from [github.com/docker/docker v17.04.0-ce](https://github.com/docker/docker/tree/v17.04.0-ce/api/types/strslice).


@@ -5,7 +5,9 @@ package signature
import (
"fmt"
"github.com/containers/image/docker/reference"
"github.com/containers/image/manifest"
"github.com/opencontainers/go-digest"
)
// SignDockerManifest returns a signature for manifest as the specified dockerReference,
@@ -15,12 +17,7 @@ func SignDockerManifest(m []byte, dockerReference string, mech SigningMechanism,
if err != nil {
return nil, err
}
sig := privateSignature{
Signature{
DockerManifestDigest: manifestDigest,
DockerReference: dockerReference,
},
}
sig := newUntrustedSignature(manifestDigest, dockerReference)
return sig.sign(mech, keyIdentity)
}
@@ -28,6 +25,10 @@ func SignDockerManifest(m []byte, dockerReference string, mech SigningMechanism,
// using mech.
func VerifyDockerManifestSignature(unverifiedSignature, unverifiedManifest []byte,
expectedDockerReference string, mech SigningMechanism, expectedKeyIdentity string) (*Signature, error) {
expectedRef, err := reference.ParseNormalizedNamed(expectedDockerReference)
if err != nil {
return nil, err
}
sig, err := verifyAndExtractSignature(mech, unverifiedSignature, signatureAcceptanceRules{
validateKeyIdentity: func(keyIdentity string) error {
if keyIdentity != expectedKeyIdentity {
@@ -36,13 +37,17 @@ func VerifyDockerManifestSignature(unverifiedSignature, unverifiedManifest []byt
return nil
},
validateSignedDockerReference: func(signedDockerReference string) error {
if signedDockerReference != expectedDockerReference {
signedRef, err := reference.ParseNormalizedNamed(signedDockerReference)
if err != nil {
return InvalidSignatureError{msg: fmt.Sprintf("Invalid docker reference %s in signature", signedDockerReference)}
}
if signedRef.String() != expectedRef.String() {
return InvalidSignatureError{msg: fmt.Sprintf("Docker reference %s does not match %s",
signedDockerReference, expectedDockerReference)}
}
return nil
},
validateSignedDockerManifestDigest: func(signedDockerManifestDigest string) error {
validateSignedDockerManifestDigest: func(signedDockerManifestDigest digest.Digest) error {
matches, err := manifest.MatchesDigest(unverifiedManifest, signedDockerManifestDigest)
if err != nil {
return err


@@ -14,47 +14,6 @@ func (err jsonFormatError) Error() string {
return string(err)
}
// validateExactMapKeys returns an error if the keys of m are not exactly expectedKeys, which must be pairwise distinct
func validateExactMapKeys(m map[string]interface{}, expectedKeys ...string) error {
if len(m) != len(expectedKeys) {
return jsonFormatError("Unexpected keys in a JSON object")
}
for _, k := range expectedKeys {
if _, ok := m[k]; !ok {
return jsonFormatError(fmt.Sprintf("Key %s missing in a JSON object", k))
}
}
// Assuming expectedKeys are pairwise distinct, we know m contains len(expectedKeys) different values in expectedKeys.
return nil
}
// mapField returns a member fieldName of m, if it is a JSON map, or an error.
func mapField(m map[string]interface{}, fieldName string) (map[string]interface{}, error) {
untyped, ok := m[fieldName]
if !ok {
return nil, jsonFormatError(fmt.Sprintf("Field %s missing", fieldName))
}
v, ok := untyped.(map[string]interface{})
if !ok {
return nil, jsonFormatError(fmt.Sprintf("Field %s is not a JSON object", fieldName))
}
return v, nil
}
// stringField returns a member fieldName of m, if it is a string, or an error.
func stringField(m map[string]interface{}, fieldName string) (string, error) {
untyped, ok := m[fieldName]
if !ok {
return "", jsonFormatError(fmt.Sprintf("Field %s missing", fieldName))
}
v, ok := untyped.(string)
if !ok {
return "", jsonFormatError(fmt.Sprintf("Field %s is not a JSON object", fieldName))
}
return v, nil
}
// paranoidUnmarshalJSONObject unmarshals data as a JSON object, but fails on the slightest unexpected aspect
// (including duplicated keys, unrecognized keys, and non-matching types). Uses fieldResolver to
// determine the destination for a field value, which should return a pointer to the destination if valid, or nil if the key is rejected.
@@ -105,3 +64,25 @@ func paranoidUnmarshalJSONObject(data []byte, fieldResolver func(string) interfa
}
return nil
}
// paranoidUnmarshalJSONObjectExactFields unmarshals data as a JSON object, but fails on the slightest unexpected aspect
// (including duplicated keys, unrecognized keys, and non-matching types). Each of the fields in exactFields
// must be present exactly once, and no other fields are accepted.
func paranoidUnmarshalJSONObjectExactFields(data []byte, exactFields map[string]interface{}) error {
seenKeys := map[string]struct{}{}
if err := paranoidUnmarshalJSONObject(data, func(key string) interface{} {
if valuePtr, ok := exactFields[key]; ok {
seenKeys[key] = struct{}{}
return valuePtr
}
return nil
}); err != nil {
return err
}
for key := range exactFields {
if _, ok := seenKeys[key]; !ok {
return jsonFormatError(fmt.Sprintf(`Key "%s" missing in a JSON object`, key))
}
}
return nil
}
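A usage sketch from within the package (the field names and helper are hypothetical, not part of the diff):

package signature

// decodeNameValue illustrates the exact-fields contract: both keys must
// appear exactly once; any extra or duplicated key fails the whole parse.
func decodeNameValue(data []byte) (name, value string, err error) {
    err = paranoidUnmarshalJSONObjectExactFields(data, map[string]interface{}{
        "name":  &name,
        "value": &value,
    })
    return
}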


@@ -4,118 +4,82 @@ package signature
import (
"bytes"
"errors"
"fmt"
"io/ioutil"
"strings"
"github.com/mtrmac/gpgme"
"golang.org/x/crypto/openpgp"
)
// SigningMechanism abstracts a way to sign binary blobs and verify their signatures.
// Each mechanism should eventually be closed by calling Close().
// FIXME: Eventually expand on keyIdentity (namespace them between mechanisms to
// eliminate ambiguities, support CA signatures and perhaps other key properties)
type SigningMechanism interface {
// ImportKeysFromBytes imports public keys from the supplied blob and returns their identities.
// The blob is assumed to have an appropriate format (the caller is expected to know which one).
// NOTE: This may modify long-term state (e.g. key storage in a directory underlying the mechanism).
ImportKeysFromBytes(blob []byte) ([]string, error)
// Sign creates a (non-detached) signature of input using keyIdentity
// Close removes resources associated with the mechanism, if any.
Close() error
// SupportsSigning returns nil if the mechanism supports signing, or a SigningNotSupportedError.
SupportsSigning() error
// Sign creates a (non-detached) signature of input using keyIdentity.
// Fails with a SigningNotSupportedError if the mechanism does not support signing.
Sign(input []byte, keyIdentity string) ([]byte, error)
// Verify parses unverifiedSignature and returns the content and the signer's identity
Verify(unverifiedSignature []byte) (contents []byte, keyIdentity string, err error)
// UntrustedSignatureContents returns UNTRUSTED contents of the signature WITHOUT ANY VERIFICATION,
// along with a short identifier of the key used for signing.
// WARNING: The short key identifier (which corresponds to “Key ID” for OpenPGP keys)
// is NOT the same as a "key identity" used in other calls to this interface, and
// the values may have no recognizable relationship if the public key is not available.
UntrustedSignatureContents(untrustedSignature []byte) (untrustedContents []byte, shortKeyIdentifier string, err error)
}
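Callers are expected to probe signing support before signing; a sketch of that calling convention (helper and key identity are hypothetical, not part of the diff):

package example

import "github.com/containers/image/signature"

// trySign signs payload only if the mechanism supports it; with the pure-Go
// openpgp build tag, SupportsSigning returns a SigningNotSupportedError.
func trySign(mech signature.SigningMechanism, payload []byte, keyIdentity string) ([]byte, error) {
    if err := mech.SupportsSigning(); err != nil {
        return nil, err
    }
    return mech.Sign(payload, keyIdentity)
}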
// A GPG/OpenPGP signing mechanism.
type gpgSigningMechanism struct {
ctx *gpgme.Context
// SigningNotSupportedError is returned when trying to sign using a mechanism which does not support that.
type SigningNotSupportedError string
func (err SigningNotSupportedError) Error() string {
return string(err)
}
// NewGPGSigningMechanism returns a new GPG/OpenPGP signing mechanism.
// NewGPGSigningMechanism returns a new GPG/OpenPGP signing mechanism for the user's default
// GPG configuration ($GNUPGHOME / ~/.gnupg).
// The caller must call .Close() on the returned SigningMechanism.
func NewGPGSigningMechanism() (SigningMechanism, error) {
return newGPGSigningMechanismInDirectory("")
}
// newGPGSigningMechanismInDirectory returns a new GPG/OpenPGP signing mechanism, using optionalDir if not empty.
func newGPGSigningMechanismInDirectory(optionalDir string) (SigningMechanism, error) {
ctx, err := gpgme.New()
if err != nil {
return nil, err
}
if err = ctx.SetProtocol(gpgme.ProtocolOpenPGP); err != nil {
return nil, err
}
if optionalDir != "" {
err := ctx.SetEngineInfo(gpgme.ProtocolOpenPGP, "", optionalDir)
if err != nil {
return nil, err
}
}
ctx.SetArmor(false)
ctx.SetTextMode(false)
return gpgSigningMechanism{ctx: ctx}, nil
// NewEphemeralGPGSigningMechanism returns a new GPG/OpenPGP signing mechanism which
// recognizes _only_ public keys from the supplied blob, and returns the identities
// of these keys.
// The caller must call .Close() on the returned SigningMechanism.
func NewEphemeralGPGSigningMechanism(blob []byte) (SigningMechanism, []string, error) {
return newEphemeralGPGSigningMechanism(blob)
}
// ImportKeysFromBytes implements SigningMechanism.ImportKeysFromBytes
func (m gpgSigningMechanism) ImportKeysFromBytes(blob []byte) ([]string, error) {
inputData, err := gpgme.NewDataBytes(blob)
if err != nil {
return nil, err
}
res, err := m.ctx.Import(inputData)
if err != nil {
return nil, err
}
keyIdentities := []string{}
for _, i := range res.Imports {
if i.Result == nil {
keyIdentities = append(keyIdentities, i.Fingerprint)
}
}
return keyIdentities, nil
}
// Sign implements SigningMechanism.Sign
func (m gpgSigningMechanism) Sign(input []byte, keyIdentity string) ([]byte, error) {
key, err := m.ctx.GetKey(keyIdentity, true)
if err != nil {
return nil, err
}
inputData, err := gpgme.NewDataBytes(input)
if err != nil {
return nil, err
}
var sigBuffer bytes.Buffer
sigData, err := gpgme.NewDataWriter(&sigBuffer)
if err != nil {
return nil, err
}
if err = m.ctx.Sign([]*gpgme.Key{key}, inputData, sigData, gpgme.SigModeNormal); err != nil {
return nil, err
}
return sigBuffer.Bytes(), nil
}
// Verify implements SigningMechanism.Verify
func (m gpgSigningMechanism) Verify(unverifiedSignature []byte) (contents []byte, keyIdentity string, err error) {
signedBuffer := bytes.Buffer{}
signedData, err := gpgme.NewDataWriter(&signedBuffer)
// gpgUntrustedSignatureContents returns UNTRUSTED contents of the signature WITHOUT ANY VERIFICATION,
// along with a short identifier of the key used for signing.
// WARNING: The short key identifier (which corresponds to “Key ID” for OpenPGP keys)
// is NOT the same as a "key identity" used in other calls to this interface, and
// the values may have no recognizable relationship if the public key is not available.
func gpgUntrustedSignatureContents(untrustedSignature []byte) (untrustedContents []byte, shortKeyIdentifier string, err error) {
// This uses the Golang-native OpenPGP implementation instead of gpgme because we are not doing any cryptography.
md, err := openpgp.ReadMessage(bytes.NewReader(untrustedSignature), openpgp.EntityList{}, nil, nil)
if err != nil {
return nil, "", err
}
unverifiedSignatureData, err := gpgme.NewDataBytes(unverifiedSignature)
if !md.IsSigned {
return nil, "", errors.New("The input is not a signature")
}
content, err := ioutil.ReadAll(md.UnverifiedBody)
if err != nil {
// Coverage: An error during reading the body can happen only if
// 1) the message is encrypted, which is not our case (and we don't give ReadMessage the key
// to decrypt the contents anyway), or
// 2) the message is signed AND we give ReadMessage a corresponding public key, which we don't.
return nil, "", err
}
_, sigs, err := m.ctx.Verify(unverifiedSignatureData, nil, signedData)
if err != nil {
return nil, "", err
}
if len(sigs) != 1 {
return nil, "", InvalidSignatureError{msg: fmt.Sprintf("Unexpected GPG signature count %d", len(sigs))}
}
sig := sigs[0]
// This is sig.Summary == gpgme.SigSumValid except for key trust, which we handle ourselves
if sig.Status != nil || sig.Validity == gpgme.ValidityNever || sig.ValidityReason != nil || sig.WrongKeyUsage {
// FIXME: Better error reporting eventually
return nil, "", InvalidSignatureError{msg: fmt.Sprintf("Invalid GPG signature: %#v", sig)}
}
return signedBuffer.Bytes(), sig.Fingerprint, nil
// Uppercase the key ID for minimal consistency with the gpgme-returned fingerprints
// (but note that key ID is a suffix of the fingerprint only for V4 keys, not V3)!
return content, strings.ToUpper(fmt.Sprintf("%016X", md.SignedByKeyId)), nil
}


@@ -0,0 +1,175 @@
// +build !containers_image_openpgp
package signature
import (
"bytes"
"fmt"
"io/ioutil"
"os"
"github.com/mtrmac/gpgme"
)
// A GPG/OpenPGP signing mechanism, implemented using gpgme.
type gpgmeSigningMechanism struct {
ctx *gpgme.Context
ephemeralDir string // If not "", a directory to be removed on Close()
}
// newGPGSigningMechanismInDirectory returns a new GPG/OpenPGP signing mechanism, using optionalDir if not empty.
// The caller must call .Close() on the returned SigningMechanism.
func newGPGSigningMechanismInDirectory(optionalDir string) (SigningMechanism, error) {
ctx, err := newGPGMEContext(optionalDir)
if err != nil {
return nil, err
}
return &gpgmeSigningMechanism{
ctx: ctx,
ephemeralDir: "",
}, nil
}
// newEphemeralGPGSigningMechanism returns a new GPG/OpenPGP signing mechanism which
// recognizes _only_ public keys from the supplied blob, and returns the identities
// of these keys.
// The caller must call .Close() on the returned SigningMechanism.
func newEphemeralGPGSigningMechanism(blob []byte) (SigningMechanism, []string, error) {
dir, err := ioutil.TempDir("", "containers-ephemeral-gpg-")
if err != nil {
return nil, nil, err
}
removeDir := true
defer func() {
if removeDir {
os.RemoveAll(dir)
}
}()
ctx, err := newGPGMEContext(dir)
if err != nil {
return nil, nil, err
}
mech := &gpgmeSigningMechanism{
ctx: ctx,
ephemeralDir: dir,
}
keyIdentities, err := mech.importKeysFromBytes(blob)
if err != nil {
return nil, nil, err
}
removeDir = false
return mech, keyIdentities, nil
}
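Usage sketch (pubKeyBlob is an exported public key blob; the helper is hypothetical, not part of the diff):

package example

import "github.com/containers/image/signature"

// verifyWithKeys verifies sig against only the keys in pubKeyBlob, using a
// throwaway GPG home directory that Close() removes again.
func verifyWithKeys(pubKeyBlob, sig []byte) (contents []byte, keyIdentity string, err error) {
    mech, keyIdentities, err := signature.NewEphemeralGPGSigningMechanism(pubKeyBlob)
    if err != nil {
        return nil, "", err
    }
    defer mech.Close()
    _ = keyIdentities // fingerprints of the imported keys
    return mech.Verify(sig)
}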
// newGPGMEContext returns a new *gpgme.Context, using optionalDir if not empty.
func newGPGMEContext(optionalDir string) (*gpgme.Context, error) {
ctx, err := gpgme.New()
if err != nil {
return nil, err
}
if err = ctx.SetProtocol(gpgme.ProtocolOpenPGP); err != nil {
return nil, err
}
if optionalDir != "" {
err := ctx.SetEngineInfo(gpgme.ProtocolOpenPGP, "", optionalDir)
if err != nil {
return nil, err
}
}
ctx.SetArmor(false)
ctx.SetTextMode(false)
return ctx, nil
}
func (m *gpgmeSigningMechanism) Close() error {
if m.ephemeralDir != "" {
os.RemoveAll(m.ephemeralDir) // Ignore an error, if any
}
return nil
}
// importKeysFromBytes imports public keys from the supplied blob and returns their identities.
// The blob is assumed to have an appropriate format (the caller is expected to know which one).
// NOTE: This may modify long-term state (e.g. key storage in a directory underlying the mechanism);
// but we do not make this public, it can only be used through newEphemeralGPGSigningMechanism.
func (m *gpgmeSigningMechanism) importKeysFromBytes(blob []byte) ([]string, error) {
inputData, err := gpgme.NewDataBytes(blob)
if err != nil {
return nil, err
}
res, err := m.ctx.Import(inputData)
if err != nil {
return nil, err
}
keyIdentities := []string{}
for _, i := range res.Imports {
if i.Result == nil {
keyIdentities = append(keyIdentities, i.Fingerprint)
}
}
return keyIdentities, nil
}
// SupportsSigning returns nil if the mechanism supports signing, or a SigningNotSupportedError.
func (m *gpgmeSigningMechanism) SupportsSigning() error {
return nil
}
// Sign creates a (non-detached) signature of input using keyIdentity.
// Fails with a SigningNotSupportedError if the mechanism does not support signing.
func (m *gpgmeSigningMechanism) Sign(input []byte, keyIdentity string) ([]byte, error) {
key, err := m.ctx.GetKey(keyIdentity, true)
if err != nil {
return nil, err
}
inputData, err := gpgme.NewDataBytes(input)
if err != nil {
return nil, err
}
var sigBuffer bytes.Buffer
sigData, err := gpgme.NewDataWriter(&sigBuffer)
if err != nil {
return nil, err
}
if err = m.ctx.Sign([]*gpgme.Key{key}, inputData, sigData, gpgme.SigModeNormal); err != nil {
return nil, err
}
return sigBuffer.Bytes(), nil
}
// Verify parses unverifiedSignature and returns the content and the signer's identity
func (m gpgmeSigningMechanism) Verify(unverifiedSignature []byte) (contents []byte, keyIdentity string, err error) {
signedBuffer := bytes.Buffer{}
signedData, err := gpgme.NewDataWriter(&signedBuffer)
if err != nil {
return nil, "", err
}
unverifiedSignatureData, err := gpgme.NewDataBytes(unverifiedSignature)
if err != nil {
return nil, "", err
}
_, sigs, err := m.ctx.Verify(unverifiedSignatureData, nil, signedData)
if err != nil {
return nil, "", err
}
if len(sigs) != 1 {
return nil, "", InvalidSignatureError{msg: fmt.Sprintf("Unexpected GPG signature count %d", len(sigs))}
}
sig := sigs[0]
// This is sig.Summary == gpgme.SigSumValid except for key trust, which we handle ourselves
if sig.Status != nil || sig.Validity == gpgme.ValidityNever || sig.ValidityReason != nil || sig.WrongKeyUsage {
// FIXME: Better error reporting eventually
return nil, "", InvalidSignatureError{msg: fmt.Sprintf("Invalid GPG signature: %#v", sig)}
}
return signedBuffer.Bytes(), sig.Fingerprint, nil
}
// UntrustedSignatureContents returns UNTRUSTED contents of the signature WITHOUT ANY VERIFICATION,
// along with a short identifier of the key used for signing.
// WARNING: The short key identifier (which corresponds to "Key ID" for OpenPGP keys)
// is NOT the same as a "key identity" used in other calls to this interface, and
// the values may have no recognizable relationship if the public key is not available.
func (m gpgmeSigningMechanism) UntrustedSignatureContents(untrustedSignature []byte) (untrustedContents []byte, shortKeyIdentifier string, err error) {
return gpgUntrustedSignatureContents(untrustedSignature)
}
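A minimal usage sketch for this mechanism, assuming the package-level NewGPGSigningMechanism constructor and a private key with the given (made-up) fingerprint in the caller's GPG home; with this gpgme build, signing works, whereas the openpgp build below returns SigningNotSupportedError:

package main

import (
	"fmt"

	"github.com/containers/image/signature"
)

func main() {
	mech, err := signature.NewGPGSigningMechanism()
	if err != nil {
		panic(err)
	}
	defer mech.Close()
	if err := mech.SupportsSigning(); err != nil {
		panic(err) // SigningNotSupportedError on a containers_image_openpgp build
	}
	// Placeholder fingerprint; must name a private key present in the GPG home.
	sig, err := mech.Sign([]byte("payload"), "0123456789ABCDEF0123456789ABCDEF01234567")
	if err != nil {
		panic(err)
	}
	contents, keyIdentity, err := mech.Verify(sig)
	if err != nil {
		panic(err)
	}
	fmt.Printf("round-tripped %q, signed by %s\n", contents, keyIdentity)
}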

View File

@@ -0,0 +1,153 @@
// +build containers_image_openpgp

package signature
import (
"bytes"
"errors"
"fmt"
"io/ioutil"
"os"
"path"
"strings"
"time"
"github.com/containers/storage/pkg/homedir"
"golang.org/x/crypto/openpgp"
)
// A GPG/OpenPGP signing mechanism, implemented using x/crypto/openpgp.
type openpgpSigningMechanism struct {
keyring openpgp.EntityList
}
// newGPGSigningMechanismInDirectory returns a new GPG/OpenPGP signing mechanism, using optionalDir if not empty.
// The caller must call .Close() on the returned SigningMechanism.
func newGPGSigningMechanismInDirectory(optionalDir string) (SigningMechanism, error) {
m := &openpgpSigningMechanism{
keyring: openpgp.EntityList{},
}
gpgHome := optionalDir
if gpgHome == "" {
gpgHome = os.Getenv("GNUPGHOME")
if gpgHome == "" {
gpgHome = path.Join(homedir.Get(), ".gnupg")
}
}
pubring, err := ioutil.ReadFile(path.Join(gpgHome, "pubring.gpg"))
if err != nil {
if !os.IsNotExist(err) {
return nil, err
}
} else {
_, err := m.importKeysFromBytes(pubring)
if err != nil {
return nil, err
}
}
return m, nil
}
// newEphemeralGPGSigningMechanism returns a new GPG/OpenPGP signing mechanism which
// recognizes _only_ public keys from the supplied blob, and returns the identities
// of these keys.
// The caller must call .Close() on the returned SigningMechanism.
func newEphemeralGPGSigningMechanism(blob []byte) (SigningMechanism, []string, error) {
m := &openpgpSigningMechanism{
keyring: openpgp.EntityList{},
}
keyIdentities, err := m.importKeysFromBytes(blob)
if err != nil {
return nil, nil, err
}
return m, keyIdentities, nil
}
func (m *openpgpSigningMechanism) Close() error {
return nil
}
// importKeysFromBytes imports public keys from the supplied blob and returns their identities.
// The blob is assumed to have an appropriate format (the caller is expected to know which one).
func (m *openpgpSigningMechanism) importKeysFromBytes(blob []byte) ([]string, error) {
keyring, err := openpgp.ReadKeyRing(bytes.NewReader(blob))
if err != nil {
k, e2 := openpgp.ReadArmoredKeyRing(bytes.NewReader(blob))
if e2 != nil {
return nil, err // The original error -- FIXME: is this better?
}
keyring = k
}
keyIdentities := []string{}
for _, entity := range keyring {
if entity.PrimaryKey == nil {
// Coverage: This should never happen, openpgp.ReadEntity fails with an
// openpgp.errors.StructuralError instead of returning an entity with this
// field set to nil.
continue
}
// Uppercase the fingerprint to be compatible with gpgme
keyIdentities = append(keyIdentities, strings.ToUpper(fmt.Sprintf("%x", entity.PrimaryKey.Fingerprint)))
m.keyring = append(m.keyring, entity)
}
return keyIdentities, nil
}
// SupportsSigning returns nil if the mechanism supports signing, or a SigningNotSupportedError.
func (m *openpgpSigningMechanism) SupportsSigning() error {
return SigningNotSupportedError("signing is not supported in github.com/containers/image built with the containers_image_openpgp build tag")
}
// Sign creates a (non-detached) signature of input using keyIdentity.
// Fails with a SigningNotSupportedError if the mechanism does not support signing.
func (m *openpgpSigningMechanism) Sign(input []byte, keyIdentity string) ([]byte, error) {
return nil, SigningNotSupportedError("signing is not supported in github.com/containers/image built with the containers_image_openpgp build tag")
}
// Verify parses unverifiedSignature and returns the content and the signer's identity
func (m *openpgpSigningMechanism) Verify(unverifiedSignature []byte) (contents []byte, keyIdentity string, err error) {
md, err := openpgp.ReadMessage(bytes.NewReader(unverifiedSignature), m.keyring, nil, nil)
if err != nil {
return nil, "", err
}
if !md.IsSigned {
return nil, "", errors.New("not signed")
}
content, err := ioutil.ReadAll(md.UnverifiedBody)
if err != nil {
// Coverage: md.UnverifiedBody.Read only fails if the body is encrypted
// (and possibly also signed, but it _must_ be encrypted) and the signing
// “modification detection code” detects a mismatch. But in that case,
// we would expect the signature verification to fail as well, and that is checked
// first. Besides, we are not supplying any decryption keys, so we really
// can never reach this “encrypted data MDC mismatch” path.
return nil, "", err
}
if md.SignatureError != nil {
return nil, "", fmt.Errorf("signature error: %v", md.SignatureError)
}
if md.SignedBy == nil {
return nil, "", InvalidSignatureError{msg: fmt.Sprintf("Invalid GPG signature: %#v", md.Signature)}
}
if md.Signature.SigLifetimeSecs != nil {
expiry := md.Signature.CreationTime.Add(time.Duration(*md.Signature.SigLifetimeSecs) * time.Second)
if time.Now().After(expiry) {
return nil, "", InvalidSignatureError{msg: fmt.Sprintf("Signature expired on %s", expiry)}
}
}
// Uppercase the fingerprint to be compatible with gpgme
return content, strings.ToUpper(fmt.Sprintf("%x", md.SignedBy.PublicKey.Fingerprint)), nil
}
// UntrustedSignatureContents returns UNTRUSTED contents of the signature WITHOUT ANY VERIFICATION,
// along with a short identifier of the key used for signing.
// WARNING: The short key identifier (which corresponds to "Key ID" for OpenPGP keys)
// is NOT the same as a "key identity" used in other calls to this interface, and
// the values may have no recognizable relationship if the public key is not available.
func (m openpgpSigningMechanism) UntrustedSignatureContents(untrustedSignature []byte) (untrustedContents []byte, shortKeyIdentifier string, err error) {
return gpgUntrustedSignatureContents(untrustedSignature)
}
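The two implementations are selected at build time: the gpgme-backed file carries // +build !containers_image_openpgp and this pure-Go file carries // +build containers_image_openpgp, so exactly one of them is compiled into any given binary (the constraint line must be followed by a blank line to take effect). Choosing between them is plain Go toolchain usage:

go build ./...                                  # default: gpgme (cgo) mechanism
go build -tags containers_image_openpgp ./...   # pure-Go x/crypto/openpgp mechanism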

View File

@@ -15,14 +15,14 @@ package signature
import (
"encoding/json"
"errors"
"fmt"
"io/ioutil"
"path/filepath"
"github.com/containers/image/docker/reference"
"github.com/containers/image/transports"
"github.com/containers/image/types"
"github.com/docker/docker/reference"
"github.com/pkg/errors"
)
// systemDefaultPolicyPath is the policy path used for DefaultPolicy().
@@ -41,8 +41,6 @@ func (err InvalidPolicyFormatError) Error() string {
return string(err)
}
// FIXME: NewDefaultPolicy, from default file (or environment if trusted?)
// DefaultPolicy returns the default policy of the system.
// Most applications should be using this method to get the policy configured
// by the system administrator.
@@ -124,10 +122,8 @@ func (m *policyTransportsMap) UnmarshalJSON(data []byte) error {
// So, use a temporary map of pointers-to-slices and convert.
tmpMap := map[string]*PolicyTransportScopes{}
if err := paranoidUnmarshalJSONObject(data, func(key string) interface{} {
transport, ok := transports.KnownTransports[key]
if !ok {
return nil
}
// transport can be nil
transport := transports.Get(key)
// paranoidUnmarshalJSONObject detects key duplication for us, check just to be safe.
if _, ok := tmpMap[key]; ok {
return nil
@@ -157,7 +153,7 @@ func (m *PolicyTransportScopes) UnmarshalJSON(data []byte) error {
}
// policyTransportScopesWithTransport is a way to unmarshal a PolicyTransportScopes
// while validating using a specific ImageTransport.
// while validating using a specific ImageTransport if not nil.
type policyTransportScopesWithTransport struct {
transport types.ImageTransport
dest *PolicyTransportScopes
@@ -176,7 +172,7 @@ func (m *policyTransportScopesWithTransport) UnmarshalJSON(data []byte) error {
if _, ok := tmpMap[key]; ok {
return nil
}
if key != "" {
if key != "" && m.transport != nil {
if err := m.transport.ValidatePolicyConfigurationScope(key); err != nil {
return nil
}
@@ -259,13 +255,8 @@ var _ json.Unmarshaler = (*prInsecureAcceptAnything)(nil)
func (pr *prInsecureAcceptAnything) UnmarshalJSON(data []byte) error {
*pr = prInsecureAcceptAnything{}
var tmp prInsecureAcceptAnything
if err := paranoidUnmarshalJSONObject(data, func(key string) interface{} {
switch key {
case "type":
return &tmp.Type
default:
return nil
}
if err := paranoidUnmarshalJSONObjectExactFields(data, map[string]interface{}{
"type": &tmp.Type,
}); err != nil {
return err
}
@@ -294,13 +285,8 @@ var _ json.Unmarshaler = (*prReject)(nil)
func (pr *prReject) UnmarshalJSON(data []byte) error {
*pr = prReject{}
var tmp prReject
if err := paranoidUnmarshalJSONObject(data, func(key string) interface{} {
switch key {
case "type":
return &tmp.Type
default:
return nil
}
if err := paranoidUnmarshalJSONObjectExactFields(data, map[string]interface{}{
"type": &tmp.Type,
}); err != nil {
return err
}
@@ -386,7 +372,7 @@ func (pr *prSignedBy) UnmarshalJSON(data []byte) error {
return InvalidPolicyFormatError(fmt.Sprintf("Unexpected policy requirement type \"%s\"", tmp.Type))
}
if signedIdentity == nil {
tmp.SignedIdentity = NewPRMMatchExact()
tmp.SignedIdentity = NewPRMMatchRepoDigestOrExact()
} else {
si, err := newPolicyReferenceMatchFromJSON(signedIdentity)
if err != nil {
@@ -407,7 +393,7 @@ func (pr *prSignedBy) UnmarshalJSON(data []byte) error {
case !gotKeyPath && !gotKeyData:
return InvalidPolicyFormatError("At least one of keyPath and keyData mus be specified")
default: // Coverage: This should never happen
return fmt.Errorf("Impossible keyPath/keyData presence combination!?")
return errors.Errorf("Impossible keyPath/keyData presence combination!?")
}
if err != nil {
return err
@@ -469,15 +455,9 @@ func (pr *prSignedBaseLayer) UnmarshalJSON(data []byte) error {
*pr = prSignedBaseLayer{}
var tmp prSignedBaseLayer
var baseLayerIdentity json.RawMessage
if err := paranoidUnmarshalJSONObject(data, func(key string) interface{} {
switch key {
case "type":
return &tmp.Type
case "baseLayerIdentity":
return &baseLayerIdentity
default:
return nil
}
if err := paranoidUnmarshalJSONObjectExactFields(data, map[string]interface{}{
"type": &tmp.Type,
"baseLayerIdentity": &baseLayerIdentity,
}); err != nil {
return err
}
@@ -485,9 +465,6 @@ func (pr *prSignedBaseLayer) UnmarshalJSON(data []byte) error {
if tmp.Type != prTypeSignedBaseLayer {
return InvalidPolicyFormatError(fmt.Sprintf("Unexpected policy requirement type \"%s\"", tmp.Type))
}
if baseLayerIdentity == nil {
return InvalidPolicyFormatError(fmt.Sprintf("baseLayerIdentity not specified"))
}
bli, err := newPolicyReferenceMatchFromJSON(baseLayerIdentity)
if err != nil {
return err
@@ -501,7 +478,7 @@ func (pr *prSignedBaseLayer) UnmarshalJSON(data []byte) error {
return nil
}
// newPolicyRequirementFromJSON parses JSON data into a PolicyReferenceMatch implementation.
// newPolicyReferenceMatchFromJSON parses JSON data into a PolicyReferenceMatch implementation.
func newPolicyReferenceMatchFromJSON(data []byte) (PolicyReferenceMatch, error) {
var typeField prmCommon
if err := json.Unmarshal(data, &typeField); err != nil {
@@ -511,6 +488,8 @@ func newPolicyReferenceMatchFromJSON(data []byte) (PolicyReferenceMatch, error)
switch typeField.Type {
case prmTypeMatchExact:
res = &prmMatchExact{}
case prmTypeMatchRepoDigestOrExact:
res = &prmMatchRepoDigestOrExact{}
case prmTypeMatchRepository:
res = &prmMatchRepository{}
case prmTypeExactReference:
@@ -543,13 +522,8 @@ var _ json.Unmarshaler = (*prmMatchExact)(nil)
func (prm *prmMatchExact) UnmarshalJSON(data []byte) error {
*prm = prmMatchExact{}
var tmp prmMatchExact
if err := paranoidUnmarshalJSONObject(data, func(key string) interface{} {
switch key {
case "type":
return &tmp.Type
default:
return nil
}
if err := paranoidUnmarshalJSONObjectExactFields(data, map[string]interface{}{
"type": &tmp.Type,
}); err != nil {
return err
}
@@ -561,6 +535,36 @@ func (prm *prmMatchExact) UnmarshalJSON(data []byte) error {
return nil
}
// newPRMMatchRepoDigestOrExact is NewPRMMatchRepoDigestOrExact, except it returns the private type.
func newPRMMatchRepoDigestOrExact() *prmMatchRepoDigestOrExact {
return &prmMatchRepoDigestOrExact{prmCommon{Type: prmTypeMatchRepoDigestOrExact}}
}
// NewPRMMatchRepoDigestOrExact returns a new "matchRepoDigestOrExact" PolicyReferenceMatch.
func NewPRMMatchRepoDigestOrExact() PolicyReferenceMatch {
return newPRMMatchRepoDigestOrExact()
}
// Compile-time check that prmMatchRepoDigestOrExact implements json.Unmarshaler.
var _ json.Unmarshaler = (*prmMatchRepoDigestOrExact)(nil)
// UnmarshalJSON implements the json.Unmarshaler interface.
func (prm *prmMatchRepoDigestOrExact) UnmarshalJSON(data []byte) error {
*prm = prmMatchRepoDigestOrExact{}
var tmp prmMatchRepoDigestOrExact
if err := paranoidUnmarshalJSONObjectExactFields(data, map[string]interface{}{
"type": &tmp.Type,
}); err != nil {
return err
}
if tmp.Type != prmTypeMatchRepoDigestOrExact {
return InvalidPolicyFormatError(fmt.Sprintf("Unexpected policy requirement type \"%s\"", tmp.Type))
}
*prm = *newPRMMatchRepoDigestOrExact()
return nil
}
// newPRMMatchRepository is NewPRMMatchRepository, except it returns the private type.
func newPRMMatchRepository() *prmMatchRepository {
return &prmMatchRepository{prmCommon{Type: prmTypeMatchRepository}}
@@ -578,13 +582,8 @@ var _ json.Unmarshaler = (*prmMatchRepository)(nil)
func (prm *prmMatchRepository) UnmarshalJSON(data []byte) error {
*prm = prmMatchRepository{}
var tmp prmMatchRepository
if err := paranoidUnmarshalJSONObject(data, func(key string) interface{} {
switch key {
case "type":
return &tmp.Type
default:
return nil
}
if err := paranoidUnmarshalJSONObjectExactFields(data, map[string]interface{}{
"type": &tmp.Type,
}); err != nil {
return err
}
@@ -598,7 +597,7 @@ func (prm *prmMatchRepository) UnmarshalJSON(data []byte) error {
// newPRMExactReference is NewPRMExactReference, except it returns the private type.
func newPRMExactReference(dockerReference string) (*prmExactReference, error) {
ref, err := reference.ParseNamed(dockerReference)
ref, err := reference.ParseNormalizedNamed(dockerReference)
if err != nil {
return nil, InvalidPolicyFormatError(fmt.Sprintf("Invalid format of dockerReference %s: %s", dockerReference, err.Error()))
}
@@ -623,15 +622,9 @@ var _ json.Unmarshaler = (*prmExactReference)(nil)
func (prm *prmExactReference) UnmarshalJSON(data []byte) error {
*prm = prmExactReference{}
var tmp prmExactReference
if err := paranoidUnmarshalJSONObject(data, func(key string) interface{} {
switch key {
case "type":
return &tmp.Type
case "dockerReference":
return &tmp.DockerReference
default:
return nil
}
if err := paranoidUnmarshalJSONObjectExactFields(data, map[string]interface{}{
"type": &tmp.Type,
"dockerReference": &tmp.DockerReference,
}); err != nil {
return err
}
@@ -650,7 +643,7 @@ func (prm *prmExactReference) UnmarshalJSON(data []byte) error {
// newPRMExactRepository is NewPRMExactRepository, except it returns the private type.
func newPRMExactRepository(dockerRepository string) (*prmExactRepository, error) {
if _, err := reference.ParseNamed(dockerRepository); err != nil {
if _, err := reference.ParseNormalizedNamed(dockerRepository); err != nil {
return nil, InvalidPolicyFormatError(fmt.Sprintf("Invalid format of dockerRepository %s: %s", dockerRepository, err.Error()))
}
return &prmExactRepository{
@@ -671,15 +664,9 @@ var _ json.Unmarshaler = (*prmExactRepository)(nil)
func (prm *prmExactRepository) UnmarshalJSON(data []byte) error {
*prm = prmExactRepository{}
var tmp prmExactRepository
if err := paranoidUnmarshalJSONObject(data, func(key string) interface{} {
switch key {
case "type":
return &tmp.Type
case "dockerRepository":
return &tmp.DockerRepository
default:
return nil
}
if err := paranoidUnmarshalJSONObjectExactFields(data, map[string]interface{}{
"type": &tmp.Type,
"dockerRepository": &tmp.DockerRepository,
}); err != nil {
return err
}
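The recurring refactor in this file swaps the callback-based paranoidUnmarshalJSONObject for paranoidUnmarshalJSONObjectExactFields, which rejects unknown, duplicate, and missing keys in one shot. Those helpers are unexported, so as a sketch only, one plausible shape of the exact-fields variant:

package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"io"
)

// paranoidUnmarshalExactFields is a hypothetical stand-in for the unexported
// helper: data must be a JSON object whose keys are exactly the keys of
// exactFields, each decoded into the corresponding destination pointer.
func paranoidUnmarshalExactFields(data []byte, exactFields map[string]interface{}) error {
	dec := json.NewDecoder(bytes.NewReader(data))
	if tok, err := dec.Token(); err != nil || tok != json.Delim('{') {
		return fmt.Errorf("not a JSON object")
	}
	seen := map[string]bool{}
	for dec.More() {
		tok, err := dec.Token()
		if err != nil {
			return err
		}
		key, ok := tok.(string)
		if !ok {
			return fmt.Errorf("non-string key")
		}
		dest, known := exactFields[key]
		if !known || seen[key] {
			return fmt.Errorf("unexpected or duplicate field %q", key)
		}
		seen[key] = true
		if err := dec.Decode(dest); err != nil {
			return err
		}
	}
	if _, err := dec.Token(); err != nil { // consume the closing '}'
		return err
	}
	if len(seen) != len(exactFields) {
		return fmt.Errorf("missing fields")
	}
	if _, err := dec.Token(); err != io.EOF { // reject trailing data
		return fmt.Errorf("trailing data after JSON object")
	}
	return nil
}

func main() {
	var typ string
	err := paranoidUnmarshalExactFields([]byte(`{"type":"reject"}`), map[string]interface{}{"type": &typ})
	fmt.Println(typ, err) // reject <nil>
}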

View File

@@ -6,10 +6,9 @@
package signature
import (
"fmt"
"github.com/Sirupsen/logrus"
"github.com/containers/image/types"
"github.com/pkg/errors"
)
// PolicyRequirementError is an explanatory text for rejecting a signature or an image.
@@ -54,14 +53,14 @@ type PolicyRequirement interface {
// a container based on this image; use IsRunningImageAllowed instead.
// - Just because a signature is accepted does not automatically mean the contents of the
// signature are authorized to run code as root, or to affect system or cluster configuration.
isSignatureAuthorAccepted(image types.Image, sig []byte) (signatureAcceptanceResult, *Signature, error)
isSignatureAuthorAccepted(image types.UnparsedImage, sig []byte) (signatureAcceptanceResult, *Signature, error)
// isRunningImageAllowed returns true if the requirement allows running an image.
// If it returns false, err must be non-nil, and should be an PolicyRequirementError if evaluation
// succeeded but the result was rejection.
// WARNING: This validates signatures and the manifest, but does not download or validate the
// layers. Users must validate that the layers match their expected digests.
isRunningImageAllowed(image types.Image) (bool, error)
isRunningImageAllowed(image types.UnparsedImage) (bool, error)
}
// PolicyReferenceMatch specifies a set of image identities accepted in PolicyRequirement.
@@ -70,7 +69,7 @@ type PolicyReferenceMatch interface {
// matchesDockerReference decides whether a specific image identity is accepted for an image
// (or, usually, for the image's Reference().DockerReference()). Note that
// image.Reference().DockerReference() may be nil.
matchesDockerReference(image types.Image, signatureDockerReference string) bool
matchesDockerReference(image types.UnparsedImage, signatureDockerReference string) bool
}
// PolicyContext encapsulates a policy and possible cached state
@@ -95,7 +94,7 @@ const (
// changeContextState changes pc.state, or fails if the state is unexpected
func (pc *PolicyContext) changeState(expected, new policyContextState) error {
if pc.state != expected {
return fmt.Errorf(`"Invalid PolicyContext state, expected "%s", found "%s"`, expected, pc.state)
return errors.Errorf(`"Invalid PolicyContext state, expected "%s", found "%s"`, expected, pc.state)
}
pc.state = new
return nil
@@ -174,7 +173,7 @@ func (pc *PolicyContext) requirementsForImageRef(ref types.ImageReference) Polic
// a container based on this image; use IsRunningImageAllowed instead.
// - Just because a signature is accepted does not automatically mean the contents of the
// signature are authorized to run code as root, or to affect system or cluster configuration.
func (pc *PolicyContext) GetSignaturesWithAcceptedAuthor(image types.Image) (sigs []*Signature, finalErr error) {
func (pc *PolicyContext) GetSignaturesWithAcceptedAuthor(image types.UnparsedImage) (sigs []*Signature, finalErr error) {
if err := pc.changeState(pcReady, pcInUse); err != nil {
return nil, err
}
@@ -254,7 +253,7 @@ func (pc *PolicyContext) GetSignaturesWithAcceptedAuthor(image types.Image) (sig
// succeeded but the result was rejection.
// WARNING: This validates signatures and the manifest, but does not download or validate the
// layers. Users must validate that the layers match their expected digests.
func (pc *PolicyContext) IsRunningImageAllowed(image types.Image) (res bool, finalErr error) {
func (pc *PolicyContext) IsRunningImageAllowed(image types.UnparsedImage) (res bool, finalErr error) {
if err := pc.changeState(pcReady, pcInUse); err != nil {
return false, err
}
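The state machine changeState guards boils down to a simple contract for callers: build the context, use it, Destroy it exactly once. A minimal sketch (obtaining an image to evaluate is elided, since it requires a transport):

package main

import (
	"fmt"

	"github.com/containers/image/signature"
)

func main() {
	policy, err := signature.DefaultPolicy(nil) // or NewPolicyFromFile / NewPolicyFromBytes
	if err != nil {
		panic(err)
	}
	pc, err := signature.NewPolicyContext(policy) // state: ready
	if err != nil {
		panic(err)
	}
	defer pc.Destroy() // state: destroyed; any later call fails the changeState check

	fmt.Println("policy context ready")
	// allowed, err := pc.IsRunningImageAllowed(img) // img is a types.UnparsedImage
}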

View File

@@ -7,11 +7,11 @@ import (
"github.com/containers/image/types"
)
func (pr *prSignedBaseLayer) isSignatureAuthorAccepted(image types.Image, sig []byte) (signatureAcceptanceResult, *Signature, error) {
func (pr *prSignedBaseLayer) isSignatureAuthorAccepted(image types.UnparsedImage, sig []byte) (signatureAcceptanceResult, *Signature, error) {
return sarUnknown, nil, nil
}
func (pr *prSignedBaseLayer) isRunningImageAllowed(image types.Image) (bool, error) {
func (pr *prSignedBaseLayer) isRunningImageAllowed(image types.UnparsedImage) (bool, error) {
// FIXME? Reject this at policy parsing time already?
logrus.Errorf("signedBaseLayer not implemented yet!")
return false, PolicyRequirementError("signedBaseLayer not implemented yet!")

View File

@@ -3,25 +3,26 @@
package signature
import (
"errors"
"fmt"
"io/ioutil"
"os"
"strings"
"github.com/pkg/errors"
"github.com/containers/image/manifest"
"github.com/containers/image/types"
"github.com/opencontainers/go-digest"
)
func (pr *prSignedBy) isSignatureAuthorAccepted(image types.Image, sig []byte) (signatureAcceptanceResult, *Signature, error) {
func (pr *prSignedBy) isSignatureAuthorAccepted(image types.UnparsedImage, sig []byte) (signatureAcceptanceResult, *Signature, error) {
switch pr.KeyType {
case SBKeyTypeGPGKeys:
case SBKeyTypeSignedByGPGKeys, SBKeyTypeX509Certificates, SBKeyTypeSignedByX509CAs:
// FIXME? Reject this at policy parsing time already?
return sarRejected, nil, fmt.Errorf(`"Unimplemented "keyType" value "%s"`, string(pr.KeyType))
return sarRejected, nil, errors.Errorf(`"Unimplemented "keyType" value "%s"`, string(pr.KeyType))
default:
// This should never happen, newPRSignedBy ensures KeyType.IsValid()
return sarRejected, nil, fmt.Errorf(`"Unknown "keyType" value "%s"`, string(pr.KeyType))
return sarRejected, nil, errors.Errorf(`"Unknown "keyType" value "%s"`, string(pr.KeyType))
}
if pr.KeyPath != "" && pr.KeyData != nil {
@@ -40,20 +41,11 @@ func (pr *prSignedBy) isSignatureAuthorAccepted(image types.Image, sig []byte) (
}
// FIXME: move this to per-context initialization
dir, err := ioutil.TempDir("", "skopeo-signedBy-")
if err != nil {
return sarRejected, nil, err
}
defer os.RemoveAll(dir)
mech, err := newGPGSigningMechanismInDirectory(dir)
if err != nil {
return sarRejected, nil, err
}
trustedIdentities, err := mech.ImportKeysFromBytes(data)
mech, trustedIdentities, err := NewEphemeralGPGSigningMechanism(data)
if err != nil {
return sarRejected, nil, err
}
defer mech.Close()
if len(trustedIdentities) == 0 {
return sarRejected, nil, PolicyRequirementError("No public keys imported")
}
@@ -75,7 +67,7 @@ func (pr *prSignedBy) isSignatureAuthorAccepted(image types.Image, sig []byte) (
}
return nil
},
validateSignedDockerManifestDigest: func(digest string) error {
validateSignedDockerManifestDigest: func(digest digest.Digest) error {
m, _, err := image.Manifest()
if err != nil {
return err
@@ -97,7 +89,7 @@ func (pr *prSignedBy) isSignatureAuthorAccepted(image types.Image, sig []byte) (
return sarAccepted, signature, nil
}
func (pr *prSignedBy) isRunningImageAllowed(image types.Image) (bool, error) {
func (pr *prSignedBy) isRunningImageAllowed(image types.UnparsedImage) (bool, error) {
sigs, err := image.Signatures()
if err != nil {
return false, err
@@ -115,7 +107,7 @@ func (pr *prSignedBy) isRunningImageAllowed(image types.Image) (bool, error) {
// Huh?! This should not happen at all; treat it as any other invalid value.
fallthrough
default:
reason = fmt.Errorf(`Internal error: Unexpected signature verification result "%s"`, string(res))
reason = errors.Errorf(`Internal error: Unexpected signature verification result "%s"`, string(res))
}
rejections = append(rejections, reason)
}
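End to end, the checks this requirement performs (trusted-key import, cryptographic verification, signed-reference and signed-digest validation) are also reachable through the public VerifyDockerManifestSignature helper; a sketch with hypothetical input files:

package main

import (
	"fmt"
	"io/ioutil"

	"github.com/containers/image/signature"
)

func main() {
	keyBlob, _ := ioutil.ReadFile("pubkey.gpg")      // hypothetical inputs;
	sigBlob, _ := ioutil.ReadFile("image.signature") // error handling elided
	manifestBlob, _ := ioutil.ReadFile("manifest.json")

	mech, trustedIdentities, err := signature.NewEphemeralGPGSigningMechanism(keyBlob)
	if err != nil {
		panic(err)
	}
	defer mech.Close()
	if len(trustedIdentities) == 0 {
		panic("no public keys imported")
	}

	// Verifies the signature, that it names the expected docker reference,
	// and that the signed digest matches manifestBlob.
	sig, err := signature.VerifyDockerManifestSignature(sigBlob, manifestBlob,
		"docker.io/library/busybox:latest", mech, trustedIdentities[0])
	if err != nil {
		panic(err)
	}
	fmt.Println("valid signature for", sig.DockerReference, "on", sig.DockerManifestDigest)
}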

View File

@@ -9,20 +9,20 @@ import (
"github.com/containers/image/types"
)
func (pr *prInsecureAcceptAnything) isSignatureAuthorAccepted(image types.Image, sig []byte) (signatureAcceptanceResult, *Signature, error) {
func (pr *prInsecureAcceptAnything) isSignatureAuthorAccepted(image types.UnparsedImage, sig []byte) (signatureAcceptanceResult, *Signature, error) {
// prInsecureAcceptAnything semantics: Every image is allowed to run,
// but this does not consider the signature as verified.
return sarUnknown, nil, nil
}
func (pr *prInsecureAcceptAnything) isRunningImageAllowed(image types.Image) (bool, error) {
func (pr *prInsecureAcceptAnything) isRunningImageAllowed(image types.UnparsedImage) (bool, error) {
return true, nil
}
func (pr *prReject) isSignatureAuthorAccepted(image types.Image, sig []byte) (signatureAcceptanceResult, *Signature, error) {
func (pr *prReject) isSignatureAuthorAccepted(image types.UnparsedImage, sig []byte) (signatureAcceptanceResult, *Signature, error) {
return sarRejected, nil, PolicyRequirementError(fmt.Sprintf("Any signatures for image %s are rejected by policy.", transports.ImageName(image.Reference())))
}
func (pr *prReject) isRunningImageAllowed(image types.Image) (bool, error) {
func (pr *prReject) isRunningImageAllowed(image types.UnparsedImage) (bool, error) {
return false, PolicyRequirementError(fmt.Sprintf("Running image %s is rejected by policy.", transports.ImageName(image.Reference())))
}

View File

@@ -5,26 +5,26 @@ package signature
import (
"fmt"
"github.com/containers/image/docker/reference"
"github.com/containers/image/transports"
"github.com/containers/image/types"
"github.com/docker/docker/reference"
)
// parseImageAndDockerReference converts an image and a reference string into two parsed entities, failing on any error and handling unidentified images.
func parseImageAndDockerReference(image types.Image, s2 string) (reference.Named, reference.Named, error) {
func parseImageAndDockerReference(image types.UnparsedImage, s2 string) (reference.Named, reference.Named, error) {
r1 := image.Reference().DockerReference()
if r1 == nil {
return nil, nil, PolicyRequirementError(fmt.Sprintf("Docker reference match attempted on image %s with no known Docker reference identity",
transports.ImageName(image.Reference())))
}
r2, err := reference.ParseNamed(s2)
r2, err := reference.ParseNormalizedNamed(s2)
if err != nil {
return nil, nil, err
}
return r1, r2, nil
}
func (prm *prmMatchExact) matchesDockerReference(image types.Image, signatureDockerReference string) bool {
func (prm *prmMatchExact) matchesDockerReference(image types.UnparsedImage, signatureDockerReference string) bool {
intended, signature, err := parseImageAndDockerReference(image, signatureDockerReference)
if err != nil {
return false
@@ -36,7 +36,30 @@ func (prm *prmMatchExact) matchesDockerReference(image types.Image, signatureDoc
return signature.String() == intended.String()
}
func (prm *prmMatchRepository) matchesDockerReference(image types.Image, signatureDockerReference string) bool {
func (prm *prmMatchRepoDigestOrExact) matchesDockerReference(image types.UnparsedImage, signatureDockerReference string) bool {
intended, signature, err := parseImageAndDockerReference(image, signatureDockerReference)
if err != nil {
return false
}
// Do not add default tags: image.Reference().DockerReference() should contain it already, and signatureDockerReference should be exact; so, verify that now.
if reference.IsNameOnly(signature) {
return false
}
switch intended.(type) {
case reference.NamedTagged: // Includes the case when intended has both a tag and a digest.
return signature.String() == intended.String()
case reference.Canonical:
// We don't actually compare the manifest digest against the signature here; that happens in prSignedBy, via UnparsedImage.Manifest.
// Because UnparsedImage.Manifest verifies intended.Digest() against the manifest, and prSignedBy verifies the signature digest against the manifest,
// we know that the signature digest matches intended.Digest() (but intended.Digest() and the signature digest may use different algorithms).
return signature.Name() == intended.Name()
default: // !reference.IsNameOnly(intended)
return false
}
}
func (prm *prmMatchRepository) matchesDockerReference(image types.UnparsedImage, signatureDockerReference string) bool {
intended, signature, err := parseImageAndDockerReference(image, signatureDockerReference)
if err != nil {
return false
@@ -46,18 +69,18 @@ func (prm *prmMatchRepository) matchesDockerReference(image types.Image, signatu
// parseDockerReferences converts two reference strings into parsed entities, failing on any error
func parseDockerReferences(s1, s2 string) (reference.Named, reference.Named, error) {
r1, err := reference.ParseNamed(s1)
r1, err := reference.ParseNormalizedNamed(s1)
if err != nil {
return nil, nil, err
}
r2, err := reference.ParseNamed(s2)
r2, err := reference.ParseNormalizedNamed(s2)
if err != nil {
return nil, nil, err
}
return r1, r2, nil
}
func (prm *prmExactReference) matchesDockerReference(image types.Image, signatureDockerReference string) bool {
func (prm *prmExactReference) matchesDockerReference(image types.UnparsedImage, signatureDockerReference string) bool {
intended, signature, err := parseDockerReferences(prm.DockerReference, signatureDockerReference)
if err != nil {
return false
@@ -69,7 +92,7 @@ func (prm *prmExactReference) matchesDockerReference(image types.Image, signatur
return signature.String() == intended.String()
}
func (prm *prmExactRepository) matchesDockerReference(image types.Image, signatureDockerReference string) bool {
func (prm *prmExactRepository) matchesDockerReference(image types.UnparsedImage, signatureDockerReference string) bool {
intended, signature, err := parseDockerReferences(prm.DockerRepository, signatureDockerReference)
if err != nil {
return false
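The decision table for the new matcher is worth restating: a tagged intended reference must match the signature string-for-string, while a digest (canonical) intended reference only needs the repository name to match, because the digest itself is validated against the manifest elsewhere. A standalone sketch using the same reference package:

package main

import (
	"fmt"

	"github.com/containers/image/docker/reference"
)

// matchesRepoDigestOrExact mirrors prmMatchRepoDigestOrExact for two raw
// reference strings (the real code takes the intended reference from the
// image, already tag/digest-qualified).
func matchesRepoDigestOrExact(intendedStr, signatureStr string) bool {
	intended, err := reference.ParseNormalizedNamed(intendedStr)
	if err != nil {
		return false
	}
	sig, err := reference.ParseNormalizedNamed(signatureStr)
	if err != nil || reference.IsNameOnly(sig) {
		return false
	}
	switch intended.(type) {
	case reference.NamedTagged: // includes references with both tag and digest
		return sig.String() == intended.String()
	case reference.Canonical: // digest-only: repository name must match
		return sig.Name() == intended.Name()
	default:
		return false
	}
}

func main() {
	fmt.Println(matchesRepoDigestOrExact(
		"busybox:latest", "docker.io/library/busybox:latest")) // true after normalization
	fmt.Println(matchesRepoDigestOrExact(
		"busybox@sha256:0000000000000000000000000000000000000000000000000000000000000000",
		"docker.io/library/busybox:1.0")) // true: same repository
}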

View File

@@ -6,7 +6,9 @@
package signature
// Policy defines requirements for considering a signature valid.
// NOTE: Keep this in sync with docs/policy.json.md!
// Policy defines requirements for considering a signature, or an image, valid.
type Policy struct {
// Default applies to any image which does not have a matching policy in Transports.
// Note that this can happen even if a matching PolicyTransportScopes exists in Transports
@@ -114,10 +116,11 @@ type prmCommon struct {
type prmTypeIdentifier string
const (
prmTypeMatchExact prmTypeIdentifier = "matchExact"
prmTypeMatchRepository prmTypeIdentifier = "matchRepository"
prmTypeExactReference prmTypeIdentifier = "exactReference"
prmTypeExactRepository prmTypeIdentifier = "exactRepository"
prmTypeMatchExact prmTypeIdentifier = "matchExact"
prmTypeMatchRepoDigestOrExact prmTypeIdentifier = "matchRepoDigestOrExact"
prmTypeMatchRepository prmTypeIdentifier = "matchRepository"
prmTypeExactReference prmTypeIdentifier = "exactReference"
prmTypeExactRepository prmTypeIdentifier = "exactRepository"
)
// prmMatchExact is a PolicyReferenceMatch with type = prmMatchExact: the two references must match exactly.
@@ -125,6 +128,12 @@ type prmMatchExact struct {
prmCommon
}
// prmMatchRepoDigestOrExact is a PolicyReferenceMatch with type = prmMatchRepoDigestOrExact: the two references must match exactly,
// except that digest references are also accepted if the repository name matches (regardless of tag/digest) and the signature applies to the referenced digest.
type prmMatchRepoDigestOrExact struct {
prmCommon
}
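In policy.json terms, the new type slots in wherever a signedIdentity is accepted (and, per the policy_config.go change above, it is now the default for signedBy). A hypothetical policy exercising it, parsed with the package's own loader (key path made up):

package main

import (
	"fmt"

	"github.com/containers/image/signature"
)

func main() {
	policyJSON := []byte(`{
    "default": [{"type": "reject"}],
    "transports": {
        "docker": {
            "docker.io": [
                {
                    "type": "signedBy",
                    "keyType": "GPGKeys",
                    "keyPath": "/etc/pki/example-pubkey.gpg",
                    "signedIdentity": {"type": "matchRepoDigestOrExact"}
                }
            ]
        }
    }
}`)
	policy, err := signature.NewPolicyFromBytes(policyJSON)
	if err != nil {
		panic(err)
	}
	fmt.Printf("parsed policy covering %d transport(s)\n", len(policy.Transports))
}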
// prmMatchRepository is a PolicyReferenceMatch with type = prmMatchRepository: the two references must use the same repository, may differ in the tag.
type prmMatchRepository struct {
prmCommon

View File

@@ -4,11 +4,13 @@ package signature
import (
"encoding/json"
"errors"
"fmt"
"time"
"github.com/pkg/errors"
"github.com/containers/image/version"
"github.com/opencontainers/go-digest"
)
const (
@@ -25,37 +27,74 @@ func (err InvalidSignatureError) Error() string {
}
// Signature is a parsed content of a signature.
// The only way to get this structure from a blob should be as a return value from a successful call to verifyAndExtractSignature below.
type Signature struct {
DockerManifestDigest string // FIXME: more precise type?
DockerManifestDigest digest.Digest
DockerReference string // FIXME: more precise type?
}
// Wrap signature to add to it some methods which we don't want to make public.
type privateSignature struct {
Signature
// untrustedSignature is a parsed content of a signature.
type untrustedSignature struct {
UntrustedDockerManifestDigest digest.Digest
UntrustedDockerReference string // FIXME: more precise type?
UntrustedCreatorID *string
// This is intentionally an int64; the native JSON float64 type would allow representing _some_ sub-second precision,
// but not nearly enough (with current timestamp values, a single unit in the last place is on the order of hundreds of nanoseconds).
// So, this is explicitly an int64, and we reject fractional values. If we did need more precise timestamps eventually,
// we would add another field, UntrustedTimestampNS int64.
UntrustedTimestamp *int64
}
// Compile-time check that privateSignature implements json.Marshaler
var _ json.Marshaler = (*privateSignature)(nil)
// UntrustedSignatureInformation is information available in an untrusted signature.
// This may be useful when debugging signature verification failures,
// or when managing a set of signatures on a single image.
//
// WARNING: Do not use the contents of this for ANY security decisions,
// and be VERY CAREFUL about showing this information to humans in any way which suggests that these values “are probably” reliable.
// There is NO REASON to expect the values to be correct, or not intentionally misleading
// (including things like “✅ Verified by $authority”)
type UntrustedSignatureInformation struct {
UntrustedDockerManifestDigest digest.Digest
UntrustedDockerReference string // FIXME: more precise type?
UntrustedCreatorID *string
UntrustedTimestamp *time.Time
UntrustedShortKeyIdentifier string
}
// newUntrustedSignature returns an untrustedSignature object with
// the specified primary contents and appropriate metadata.
func newUntrustedSignature(dockerManifestDigest digest.Digest, dockerReference string) untrustedSignature {
// Use intermediate variables for these values so that we can take their addresses.
// Go guarantees that they will have a new address on every execution.
creatorID := "atomic " + version.Version
timestamp := time.Now().Unix()
return untrustedSignature{
UntrustedDockerManifestDigest: dockerManifestDigest,
UntrustedDockerReference: dockerReference,
UntrustedCreatorID: &creatorID,
UntrustedTimestamp: &timestamp,
}
}
// Compile-time check that untrustedSignature implements json.Marshaler
var _ json.Marshaler = (*untrustedSignature)(nil)
// MarshalJSON implements the json.Marshaler interface.
func (s privateSignature) MarshalJSON() ([]byte, error) {
return s.marshalJSONWithVariables(time.Now().UTC().Unix(), "atomic "+version.Version)
}
// Implementation of MarshalJSON, with a caller-chosen values of the variable items to help testing.
func (s privateSignature) marshalJSONWithVariables(timestamp int64, creatorID string) ([]byte, error) {
if s.DockerManifestDigest == "" || s.DockerReference == "" {
func (s untrustedSignature) MarshalJSON() ([]byte, error) {
if s.UntrustedDockerManifestDigest == "" || s.UntrustedDockerReference == "" {
return nil, errors.New("Unexpected empty signature content")
}
critical := map[string]interface{}{
"type": signatureType,
"image": map[string]string{"docker-manifest-digest": s.DockerManifestDigest},
"identity": map[string]string{"docker-reference": s.DockerReference},
"image": map[string]string{"docker-manifest-digest": s.UntrustedDockerManifestDigest.String()},
"identity": map[string]string{"docker-reference": s.UntrustedDockerReference},
}
optional := map[string]interface{}{
"creator": creatorID,
"timestamp": timestamp,
optional := map[string]interface{}{}
if s.UntrustedCreatorID != nil {
optional["creator"] = *s.UntrustedCreatorID
}
if s.UntrustedTimestamp != nil {
optional["timestamp"] = *s.UntrustedTimestamp
}
signature := map[string]interface{}{
"critical": critical,
@@ -64,11 +103,11 @@ func (s privateSignature) marshalJSONWithVariables(timestamp int64, creatorID st
return json.Marshal(signature)
}
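Concretely, MarshalJSON emits a two-level object: the security-relevant fields under "critical" and the informational ones under "optional". A sketch of the wire format with made-up values (the type string is the package's signatureType constant):

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	sig := map[string]interface{}{
		"critical": map[string]interface{}{
			"type":     "atomic container signature",
			"image":    map[string]string{"docker-manifest-digest": "sha256:…"}, // placeholder digest
			"identity": map[string]string{"docker-reference": "docker.io/library/busybox:latest"},
		},
		"optional": map[string]interface{}{
			"creator":   "atomic 0.1.19", // emitted only when UntrustedCreatorID != nil
			"timestamp": 1492164069,      // emitted only when UntrustedTimestamp != nil
		},
	}
	out, _ := json.MarshalIndent(sig, "", "  ")
	fmt.Println(string(out))
}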
// Compile-time check that privateSignature implements json.Unmarshaler
var _ json.Unmarshaler = (*privateSignature)(nil)
// Compile-time check that untrustedSignature implements json.Unmarshaler
var _ json.Unmarshaler = (*untrustedSignature)(nil)
// UnmarshalJSON implements the json.Unmarshaler interface
func (s *privateSignature) UnmarshalJSON(data []byte) error {
func (s *untrustedSignature) UnmarshalJSON(data []byte) error {
err := s.strictUnmarshalJSON(data)
if err != nil {
if _, ok := err.(jsonFormatError); ok {
@@ -80,72 +119,81 @@ func (s *privateSignature) UnmarshalJSON(data []byte) error {
// strictUnmarshalJSON is UnmarshalJSON, except that it may return the internal jsonFormatError error type.
// Splitting it into a separate function allows us to do the jsonFormatError → InvalidSignatureError in a single place, the caller.
func (s *privateSignature) strictUnmarshalJSON(data []byte) error {
var untyped interface{}
if err := json.Unmarshal(data, &untyped); err != nil {
return err
}
o, ok := untyped.(map[string]interface{})
if !ok {
return InvalidSignatureError{msg: "Invalid signature format"}
}
if err := validateExactMapKeys(o, "critical", "optional"); err != nil {
func (s *untrustedSignature) strictUnmarshalJSON(data []byte) error {
var critical, optional json.RawMessage
if err := paranoidUnmarshalJSONObjectExactFields(data, map[string]interface{}{
"critical": &critical,
"optional": &optional,
}); err != nil {
return err
}
c, err := mapField(o, "critical")
if err != nil {
var creatorID string
var timestamp float64
var gotCreatorID, gotTimestamp = false, false
if err := paranoidUnmarshalJSONObject(optional, func(key string) interface{} {
switch key {
case "creator":
gotCreatorID = true
return &creatorID
case "timestamp":
gotTimestamp = true
return &timestamp
default:
var ignore interface{}
return &ignore
}
}); err != nil {
return err
}
if err := validateExactMapKeys(c, "type", "image", "identity"); err != nil {
return err
if gotCreatorID {
s.UntrustedCreatorID = &creatorID
}
if gotTimestamp {
intTimestamp := int64(timestamp)
if float64(intTimestamp) != timestamp {
return InvalidSignatureError{msg: "Field optional.timestamp is not is not an integer"}
}
s.UntrustedTimestamp = &intTimestamp
}
optional, err := mapField(o, "optional")
if err != nil {
return err
}
_ = optional // We don't use anything from here for now.
t, err := stringField(c, "type")
if err != nil {
var t string
var image, identity json.RawMessage
if err := paranoidUnmarshalJSONObjectExactFields(critical, map[string]interface{}{
"type": &t,
"image": &image,
"identity": &identity,
}); err != nil {
return err
}
if t != signatureType {
return InvalidSignatureError{msg: fmt.Sprintf("Unrecognized signature type %s", t)}
}
image, err := mapField(c, "image")
if err != nil {
var digestString string
if err := paranoidUnmarshalJSONObjectExactFields(image, map[string]interface{}{
"docker-manifest-digest": &digestString,
}); err != nil {
return err
}
if err := validateExactMapKeys(image, "docker-manifest-digest"); err != nil {
return err
}
digest, err := stringField(image, "docker-manifest-digest")
if err != nil {
return err
}
s.DockerManifestDigest = digest
s.UntrustedDockerManifestDigest = digest.Digest(digestString)
identity, err := mapField(c, "identity")
if err != nil {
if err := paranoidUnmarshalJSONObjectExactFields(identity, map[string]interface{}{
"docker-reference": &s.UntrustedDockerReference,
}); err != nil {
return err
}
if err := validateExactMapKeys(identity, "docker-reference"); err != nil {
return err
}
reference, err := stringField(identity, "docker-reference")
if err != nil {
return err
}
s.DockerReference = reference
return nil
}
// Sign formats the signature and returns a blob signed using mech and keyIdentity
func (s privateSignature) sign(mech SigningMechanism, keyIdentity string) ([]byte, error) {
// (If it seems surprising that this is a method on untrustedSignature, note that there
// isn't a good reason to think that a key used by the user is trusted by any component
// of the system just because it is a private key — actually the presence of a private key
// on the system increases the likelihood of a successful attack on that private key
// on that particular system.)
func (s untrustedSignature) sign(mech SigningMechanism, keyIdentity string) ([]byte, error) {
json, err := json.Marshal(s)
if err != nil {
return nil, err
@@ -157,12 +205,12 @@ func (s privateSignature) sign(mech SigningMechanism, keyIdentity string) ([]byt
// signatureAcceptanceRules specifies how to decide whether an untrusted signature is acceptable.
// We centralize the actual parsing and data extraction in verifyAndExtractSignature; this supplies
// the policy. We use an object instead of supplying func parameters to verifyAndExtractSignature
// because all of the functions have the same type, so there is a risk of exchanging the functions;
// because the functions have the same or similar types, there is a risk of exchanging the functions;
// named members of this struct are more explicit.
type signatureAcceptanceRules struct {
validateKeyIdentity func(string) error
validateSignedDockerReference func(string) error
validateSignedDockerManifestDigest func(string) error
validateSignedDockerManifestDigest func(digest.Digest) error
}
// verifyAndExtractSignature verifies that unverifiedSignature has been signed, and that its principal components
@@ -176,16 +224,59 @@ func verifyAndExtractSignature(mech SigningMechanism, unverifiedSignature []byte
return nil, err
}
var unmatchedSignature privateSignature
var unmatchedSignature untrustedSignature
if err := json.Unmarshal(signed, &unmatchedSignature); err != nil {
return nil, InvalidSignatureError{msg: err.Error()}
}
if err := rules.validateSignedDockerManifestDigest(unmatchedSignature.DockerManifestDigest); err != nil {
if err := rules.validateSignedDockerManifestDigest(unmatchedSignature.UntrustedDockerManifestDigest); err != nil {
return nil, err
}
if err := rules.validateSignedDockerReference(unmatchedSignature.DockerReference); err != nil {
if err := rules.validateSignedDockerReference(unmatchedSignature.UntrustedDockerReference); err != nil {
return nil, err
}
signature := unmatchedSignature.Signature // Policy OK.
return &signature, nil
// signatureAcceptanceRules have accepted this value.
return &Signature{
DockerManifestDigest: unmatchedSignature.UntrustedDockerManifestDigest,
DockerReference: unmatchedSignature.UntrustedDockerReference,
}, nil
}
// GetUntrustedSignatureInformationWithoutVerifying extracts information available in an untrusted signature,
// WITHOUT doing any cryptographic verification.
// This may be useful when debugging signature verification failures,
// or when managing a set of signatures on a single image.
//
// WARNING: Do not use the contents of this for ANY security decisions,
// and be VERY CAREFUL about showing this information to humans in any way which suggests that these values “are probably” reliable.
// There is NO REASON to expect the values to be correct, or not intentionally misleading
// (including things like “✅ Verified by $authority”)
func GetUntrustedSignatureInformationWithoutVerifying(untrustedSignatureBytes []byte) (*UntrustedSignatureInformation, error) {
// NOTE: This should eventually do format autodetection.
mech, _, err := NewEphemeralGPGSigningMechanism([]byte{})
if err != nil {
return nil, err
}
defer mech.Close()
untrustedContents, shortKeyIdentifier, err := mech.UntrustedSignatureContents(untrustedSignatureBytes)
if err != nil {
return nil, err
}
var untrustedDecodedContents untrustedSignature
if err := json.Unmarshal(untrustedContents, &untrustedDecodedContents); err != nil {
return nil, InvalidSignatureError{msg: err.Error()}
}
var timestamp *time.Time // = nil
if untrustedDecodedContents.UntrustedTimestamp != nil {
ts := time.Unix(*untrustedDecodedContents.UntrustedTimestamp, 0)
timestamp = &ts
}
return &UntrustedSignatureInformation{
UntrustedDockerManifestDigest: untrustedDecodedContents.UntrustedDockerManifestDigest,
UntrustedDockerReference: untrustedDecodedContents.UntrustedDockerReference,
UntrustedCreatorID: untrustedDecodedContents.UntrustedCreatorID,
UntrustedTimestamp: timestamp,
UntrustedShortKeyIdentifier: shortKeyIdentifier,
}, nil
}
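A minimal usage sketch for the new accessor (file name hypothetical); every field it prints is attacker-controlled, hence the shouting names:

package main

import (
	"fmt"
	"io/ioutil"

	"github.com/containers/image/signature"
)

func main() {
	blob, err := ioutil.ReadFile("image.signature")
	if err != nil {
		panic(err)
	}
	info, err := signature.GetUntrustedSignatureInformationWithoutVerifying(blob)
	if err != nil {
		panic(err)
	}
	fmt.Println("UNVERIFIED digest:   ", info.UntrustedDockerManifestDigest)
	fmt.Println("UNVERIFIED reference:", info.UntrustedDockerReference)
	if info.UntrustedTimestamp != nil {
		fmt.Println("UNVERIFIED timestamp:", *info.UntrustedTimestamp)
	}
	fmt.Println("UNVERIFIED key id:   ", info.UntrustedShortKeyIdentifier)
}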

View File

@@ -0,0 +1,588 @@
package storage
import (
"bytes"
"encoding/json"
"io"
"io/ioutil"
"time"
"github.com/pkg/errors"
"github.com/Sirupsen/logrus"
"github.com/containers/image/image"
"github.com/containers/image/manifest"
"github.com/containers/image/types"
"github.com/containers/storage/pkg/archive"
"github.com/containers/storage/pkg/ioutils"
"github.com/containers/storage/storage"
ddigest "github.com/opencontainers/go-digest"
)
var (
// ErrBlobDigestMismatch is returned when PutBlob() is given a blob
// with a digest-based name that doesn't match its contents.
ErrBlobDigestMismatch = errors.New("blob digest mismatch")
// ErrBlobSizeMismatch is returned when PutBlob() is given a blob
// with an expected size that doesn't match the reader.
ErrBlobSizeMismatch = errors.New("blob size mismatch")
// ErrNoManifestLists is returned when GetTargetManifest() is
// called.
ErrNoManifestLists = errors.New("manifest lists are not supported by this transport")
// ErrNoSuchImage is returned when we attempt to access an image which
// doesn't exist in the storage area.
ErrNoSuchImage = storage.ErrNotAnImage
)
type storageImageSource struct {
imageRef storageReference
Tag string `json:"tag,omitempty"`
Created time.Time `json:"created-time,omitempty"`
ID string `json:"id"`
BlobList []types.BlobInfo `json:"blob-list,omitempty"` // Ordered list of every blob the image has been told to handle
Layers map[ddigest.Digest][]string `json:"layers,omitempty"` // Map from digests of blobs to lists of layer IDs
LayerPosition map[ddigest.Digest]int `json:"-"` // Where we are in reading a blob's layers
SignatureSizes []int `json:"signature-sizes"` // List of sizes of each signature slice
}
type storageImageDestination struct {
imageRef storageReference
Tag string `json:"tag,omitempty"`
Created time.Time `json:"created-time,omitempty"`
ID string `json:"id"`
BlobList []types.BlobInfo `json:"blob-list,omitempty"` // Ordered list of every blob the image has been told to handle
Layers map[ddigest.Digest][]string `json:"layers,omitempty"` // Map from digests of blobs to lists of layer IDs
BlobData map[ddigest.Digest][]byte `json:"-"` // Map from names of blobs that aren't layers to contents, temporary
Manifest []byte `json:"-"` // Manifest contents, temporary
Signatures []byte `json:"-"` // Signature contents, temporary
SignatureSizes []int `json:"signature-sizes"` // List of sizes of each signature slice
}
type storageLayerMetadata struct {
Digest string `json:"digest,omitempty"`
Size int64 `json:"size"`
CompressedSize int64 `json:"compressed-size,omitempty"`
}
type storageImage struct {
types.Image
size int64
}
// newImageSource sets us up to read out an image, which needs to already exist.
func newImageSource(imageRef storageReference) (*storageImageSource, error) {
id := imageRef.resolveID()
if id == "" {
logrus.Errorf("no image matching reference %q found", imageRef.StringWithinTransport())
return nil, ErrNoSuchImage
}
img, err := imageRef.transport.store.GetImage(id)
if err != nil {
return nil, errors.Wrapf(err, "error reading image %q", id)
}
image := &storageImageSource{
imageRef: imageRef,
Created: time.Now(),
ID: img.ID,
BlobList: []types.BlobInfo{},
Layers: make(map[ddigest.Digest][]string),
LayerPosition: make(map[ddigest.Digest]int),
SignatureSizes: []int{},
}
if err := json.Unmarshal([]byte(img.Metadata), image); err != nil {
return nil, errors.Wrap(err, "error decoding metadata for source image")
}
return image, nil
}
// newImageDestination sets us up to write a new image.
func newImageDestination(imageRef storageReference) (*storageImageDestination, error) {
image := &storageImageDestination{
imageRef: imageRef,
Tag: imageRef.reference,
Created: time.Now(),
ID: imageRef.id,
BlobList: []types.BlobInfo{},
Layers: make(map[ddigest.Digest][]string),
BlobData: make(map[ddigest.Digest][]byte),
SignatureSizes: []int{},
}
return image, nil
}
func (s storageImageSource) Reference() types.ImageReference {
return s.imageRef
}
func (s storageImageDestination) Reference() types.ImageReference {
return s.imageRef
}
func (s storageImageSource) Close() error {
return nil
}
func (s storageImageDestination) Close() error {
return nil
}
func (s storageImageDestination) ShouldCompressLayers() bool {
// We ultimately have to decompress layers to populate trees on disk,
// so callers shouldn't bother compressing them before handing them to
// us, if they're not already compressed.
return false
}
// putBlob stores a layer or data blob, optionally enforcing that a digest in
// blobinfo matches the incoming data.
func (s *storageImageDestination) putBlob(stream io.Reader, blobinfo types.BlobInfo, enforceDigestAndSize bool) (types.BlobInfo, error) {
blobSize := blobinfo.Size
digest := blobinfo.Digest
errorBlobInfo := types.BlobInfo{
Digest: "",
Size: -1,
}
// Try to read an initial snippet of the blob.
header := make([]byte, 10240)
n, err := stream.Read(header)
if err != nil && err != io.EOF {
return errorBlobInfo, err
}
// Set up to read the whole blob (the initial snippet, plus the rest)
// while digesting it with either the default, or the passed-in digest,
// if one was specified.
hasher := ddigest.Canonical.Digester()
if digest.Validate() == nil {
if a := digest.Algorithm(); a.Available() {
hasher = a.Digester()
}
}
hash := ""
counter := ioutils.NewWriteCounter(hasher.Hash())
defragmented := io.MultiReader(bytes.NewBuffer(header[:n]), stream)
multi := io.TeeReader(defragmented, counter)
if (n > 0) && archive.IsArchive(header[:n]) {
// It's a filesystem layer. If it's not the first one in the
// image, we assume that the most recently added layer is its
// parent.
parentLayer := ""
for _, blob := range s.BlobList {
if layerList, ok := s.Layers[blob.Digest]; ok {
parentLayer = layerList[len(layerList)-1]
}
}
// If we have an expected content digest, generate a layer ID
// based on the parent's ID and the expected content digest.
id := ""
if digest.Validate() == nil {
id = ddigest.Canonical.FromBytes([]byte(parentLayer + "+" + digest.String())).Hex()
}
// Attempt to create the identified layer and import its contents.
layer, uncompressedSize, err := s.imageRef.transport.store.PutLayer(id, parentLayer, nil, "", true, multi)
if err != nil && err != storage.ErrDuplicateID {
logrus.Debugf("error importing layer blob %q as %q: %v", blobinfo.Digest, id, err)
return errorBlobInfo, err
}
if err == storage.ErrDuplicateID {
// We specified an ID, and there's already a layer with
// the same ID. Drain the input so that we can look at
// its length and digest.
_, err := io.Copy(ioutil.Discard, multi)
if err != nil && err != io.EOF {
logrus.Debugf("error digesting layer blob %q: %v", blobinfo.Digest, id, err)
return errorBlobInfo, err
}
hash = hasher.Digest().String()
} else {
// Applied the layer with the specified ID. Note the
// size info and computed digest.
hash = hasher.Digest().String()
layerMeta := storageLayerMetadata{
Digest: hash,
CompressedSize: counter.Count,
Size: uncompressedSize,
}
if metadata, err := json.Marshal(&layerMeta); len(metadata) != 0 && err == nil {
s.imageRef.transport.store.SetMetadata(layer.ID, string(metadata))
}
// Hang on to the new layer's ID.
id = layer.ID
}
// Check if the size looks right.
if enforceDigestAndSize && blobinfo.Size >= 0 && blobinfo.Size != counter.Count {
logrus.Debugf("layer blob %q size is %d, not %d, rejecting", blobinfo.Digest, counter.Count, blobinfo.Size)
if layer != nil {
// Something's wrong; delete the newly-created layer.
s.imageRef.transport.store.DeleteLayer(layer.ID)
}
return errorBlobInfo, ErrBlobSizeMismatch
}
// If the content digest was specified, verify it.
if enforceDigestAndSize && digest.Validate() == nil && digest.String() != hash {
logrus.Debugf("layer blob %q digests to %q, rejecting", blobinfo.Digest, hash)
if layer != nil {
// Something's wrong; delete the newly-created layer.
s.imageRef.transport.store.DeleteLayer(layer.ID)
}
return errorBlobInfo, ErrBlobDigestMismatch
}
// If we didn't get a blob size, return the one we calculated.
if blobSize == -1 {
blobSize = counter.Count
}
// If we didn't get a digest, construct one.
if digest == "" {
digest = ddigest.Digest(hash)
}
// Record that this layer blob is a layer, and the layer ID it
// ended up having. This is a list, in case the same blob is
// being applied more than once.
s.Layers[digest] = append(s.Layers[digest], id)
s.BlobList = append(s.BlobList, types.BlobInfo{Digest: digest, Size: counter.Count})
if layer != nil {
logrus.Debugf("blob %q imported as a filesystem layer %q", blobinfo.Digest, id)
} else {
logrus.Debugf("layer blob %q already present as layer %q", blobinfo.Digest, id)
}
} else {
// It's just data. Finish scanning it in, check that our
// computed digest matches the passed-in digest, and store it,
// but leave it out of the blob-to-layer-ID map so that we can
// tell that it's not a layer.
blob, err := ioutil.ReadAll(multi)
if err != nil && err != io.EOF {
return errorBlobInfo, err
}
hash = hasher.Digest().String()
if enforceDigestAndSize && blobinfo.Size >= 0 && int64(len(blob)) != blobinfo.Size {
logrus.Debugf("blob %q size is %d, not %d, rejecting", blobinfo.Digest, int64(len(blob)), blobinfo.Size)
return errorBlobInfo, ErrBlobSizeMismatch
}
// If we were given a digest, verify that the content matches
// it.
if enforceDigestAndSize && digest.Validate() == nil && digest.String() != hash {
logrus.Debugf("blob %q digests to %q, rejecting", blobinfo.Digest, hash)
return errorBlobInfo, ErrBlobDigestMismatch
}
// If we didn't get a blob size, return the one we calculated.
if blobSize == -1 {
blobSize = int64(len(blob))
}
// If we didn't get a digest, construct one.
if digest == "" {
digest = ddigest.Digest(hash)
}
// Save the blob for when we Commit().
s.BlobData[digest] = blob
s.BlobList = append(s.BlobList, types.BlobInfo{Digest: digest, Size: int64(len(blob))})
logrus.Debugf("blob %q imported as opaque data %q", blobinfo.Digest, digest)
}
return types.BlobInfo{
Digest: digest,
Size: blobSize,
}, nil
}
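// digestAndCount is a minimal standalone sketch (not part of the original
// file) of the pattern putBlob inlines above: io.TeeReader feeds every byte
// the consumer reads into a digester wrapped in a write counter, so the
// blob's digest and size are known once the stream is drained, without
// buffering the whole blob in memory.
func digestAndCount(stream io.Reader) (io.Reader, *ioutils.WriteCounter, ddigest.Digester) {
	hasher := ddigest.Canonical.Digester()
	counter := ioutils.NewWriteCounter(hasher.Hash())
	return io.TeeReader(stream, counter), counter, hasher
}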
// PutBlob is used to both store filesystem layers and binary data that is part
// of the image. Filesystem layers are assumed to be imported in order, as
// that is required by some of the underlying storage drivers.
func (s *storageImageDestination) PutBlob(stream io.Reader, blobinfo types.BlobInfo) (types.BlobInfo, error) {
return s.putBlob(stream, blobinfo, true)
}
// HasBlob returns true iff the image destination already contains a blob with the matching digest which can be reapplied using ReapplyBlob.
// Unlike PutBlob, the digest can not be empty. If HasBlob returns true, the size of the blob must also be returned.
// If the destination does not contain the blob, or it is unknown, HasBlob ordinarily returns (false, -1, nil);
// it returns a non-nil error only on an unexpected failure.
func (s *storageImageDestination) HasBlob(blobinfo types.BlobInfo) (bool, int64, error) {
if blobinfo.Digest == "" {
return false, -1, errors.Errorf("can not check for a blob with unknown digest")
}
for _, blob := range s.BlobList {
if blob.Digest == blobinfo.Digest {
return true, blob.Size, nil
}
}
return false, -1, nil
}
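// ReapplyBlob reuses a blob that HasBlob reported as already present: a blob
// recorded as a layer has its most recently created layer's diff re-applied
// through putBlob (without re-enforcing digest and size), while opaque data
// just has its stored size reported back.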
func (s *storageImageDestination) ReapplyBlob(blobinfo types.BlobInfo) (types.BlobInfo, error) {
err := blobinfo.Digest.Validate()
if err != nil {
return types.BlobInfo{}, err
}
if layerList, ok := s.Layers[blobinfo.Digest]; !ok || len(layerList) < 1 {
b, err := s.imageRef.transport.store.GetImageBigData(s.ID, blobinfo.Digest.String())
if err != nil {
return types.BlobInfo{}, err
}
return types.BlobInfo{Digest: blobinfo.Digest, Size: int64(len(b))}, nil
}
layerList := s.Layers[blobinfo.Digest]
rc, _, err := diffLayer(s.imageRef.transport.store, layerList[len(layerList)-1])
if err != nil {
return types.BlobInfo{}, err
}
return s.putBlob(rc, blobinfo, false)
}
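Together, HasBlob and ReapplyBlob support the usual copy-loop pattern; a hedged sketch, where "dest", "blob", and "openBlobFromSource" are illustrative names rather than anything in this diff:
present, size, err := dest.HasBlob(blob)
if err != nil {
	return err
}
if present {
	blob.Size = size
	if _, err := dest.ReapplyBlob(blob); err != nil {
		return err
	}
} else {
	rc, _, err := openBlobFromSource(blob) // yields an io.ReadCloser
	if err != nil {
		return err
	}
	defer rc.Close()
	if _, err := dest.PutBlob(rc, blob); err != nil {
		return err
	}
}
// Commit finishes the image: it creates the image record on top of the last
// layer recorded in BlobList, moves the requested name onto the new image, and
// persists the data blobs, manifest, signatures, and this object's metadata,
// deleting the half-built image if any step fails.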
func (s *storageImageDestination) Commit() error {
// Create the image record.
lastLayer := ""
for _, blob := range s.BlobList {
if layerList, ok := s.Layers[blob.Digest]; ok {
lastLayer = layerList[len(layerList)-1]
}
}
img, err := s.imageRef.transport.store.CreateImage(s.ID, nil, lastLayer, "", nil)
if err != nil {
logrus.Debugf("error creating image: %q", err)
return err
}
logrus.Debugf("created new image ID %q", img.ID)
s.ID = img.ID
if s.Tag != "" {
// We have a name to set, so move the name to this image.
if err := s.imageRef.transport.store.SetNames(img.ID, []string{s.Tag}); err != nil {
if _, err2 := s.imageRef.transport.store.DeleteImage(img.ID, true); err2 != nil {
logrus.Debugf("error deleting incomplete image %q: %v", img.ID, err2)
}
logrus.Debugf("error setting names on image %q: %v", img.ID, err)
return err
}
logrus.Debugf("set name of image %q to %q", img.ID, s.Tag)
}
// Save the data blobs to disk, and drop their contents from memory.
keys := []ddigest.Digest{}
for k, v := range s.BlobData {
if err := s.imageRef.transport.store.SetImageBigData(img.ID, k.String(), v); err != nil {
if _, err2 := s.imageRef.transport.store.DeleteImage(img.ID, true); err2 != nil {
logrus.Debugf("error deleting incomplete image %q: %v", img.ID, err2)
}
logrus.Debugf("error saving big data %q for image %q: %v", k, img.ID, err)
return err
}
keys = append(keys, k)
}
for _, key := range keys {
delete(s.BlobData, key)
}
// Save the manifest, if we have one.
if err := s.imageRef.transport.store.SetImageBigData(s.ID, "manifest", s.Manifest); err != nil {
if _, err2 := s.imageRef.transport.store.DeleteImage(img.ID, true); err2 != nil {
logrus.Debugf("error deleting incomplete image %q: %v", img.ID, err2)
}
logrus.Debugf("error saving manifest for image %q: %v", img.ID, err)
return err
}
// Save the signatures, if we have any.
if err := s.imageRef.transport.store.SetImageBigData(s.ID, "signatures", s.Signatures); err != nil {
if _, err2 := s.imageRef.transport.store.DeleteImage(img.ID, true); err2 != nil {
logrus.Debugf("error deleting incomplete image %q: %v", img.ID, err2)
}
logrus.Debugf("error saving signatures for image %q: %v", img.ID, err)
return err
}
// Save our metadata.
metadata, err := json.Marshal(s)
if err != nil {
if _, err2 := s.imageRef.transport.store.DeleteImage(img.ID, true); err2 != nil {
logrus.Debugf("error deleting incomplete image %q: %v", img.ID, err2)
}
logrus.Debugf("error encoding metadata for image %q: %v", img.ID, err)
return err
}
if len(metadata) != 0 {
if err = s.imageRef.transport.store.SetMetadata(s.ID, string(metadata)); err != nil {
if _, err2 := s.imageRef.transport.store.DeleteImage(img.ID, true); err2 != nil {
logrus.Debugf("error deleting incomplete image %q: %v", img.ID, err2)
}
logrus.Debugf("error saving metadata for image %q: %v", img.ID, err)
return err
}
logrus.Debugf("saved image metadata %q", string(metadata))
}
return nil
}
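The repeated delete-on-failure blocks above all follow one pattern; a hypothetical helper (not part of this change) that would capture it:
// rollback deletes the half-built image and passes the original error through.
func (s *storageImageDestination) rollback(imageID string, cause error) error {
	if _, err := s.imageRef.transport.store.DeleteImage(imageID, true); err != nil {
		logrus.Debugf("error deleting incomplete image %q: %v", imageID, err)
	}
	return cause
}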
func (s *storageImageDestination) SupportedManifestMIMETypes() []string {
return nil
}
func (s *storageImageDestination) PutManifest(manifest []byte) error {
s.Manifest = make([]byte, len(manifest))
copy(s.Manifest, manifest)
return nil
}
// SupportsSignatures returns an error if we can't expect GetSignatures() to
// return data that was previously supplied to PutSignatures().
func (s *storageImageDestination) SupportsSignatures() error {
return nil
}
// AcceptsForeignLayerURLs returns false iff foreign layers in the manifest
// should actually be uploaded to the image destination, true otherwise.
func (s *storageImageDestination) AcceptsForeignLayerURLs() bool {
return false
}
func (s *storageImageDestination) PutSignatures(signatures [][]byte) error {
sizes := []int{}
sigblob := []byte{}
for _, sig := range signatures {
sizes = append(sizes, len(sig))
newblob := make([]byte, len(sigblob)+len(sig))
copy(newblob, sigblob)
copy(newblob[len(sigblob):], sig)
sigblob = newblob
}
s.Signatures = sigblob
s.SignatureSizes = sizes
return nil
}
func (s *storageImageSource) GetBlob(info types.BlobInfo) (rc io.ReadCloser, n int64, err error) {
rc, n, _, err = s.getBlobAndLayerID(info)
return rc, n, err
}
func (s *storageImageSource) getBlobAndLayerID(info types.BlobInfo) (rc io.ReadCloser, n int64, layerID string, err error) {
err = info.Digest.Validate()
if err != nil {
return nil, -1, "", err
}
if layerList, ok := s.Layers[info.Digest]; !ok || len(layerList) < 1 {
b, err := s.imageRef.transport.store.GetImageBigData(s.ID, info.Digest.String())
if err != nil {
return nil, -1, "", err
}
r := bytes.NewReader(b)
logrus.Debugf("exporting opaque data as blob %q", info.Digest.String())
return ioutil.NopCloser(r), int64(r.Len()), "", nil
}
// If the blob was "put" more than once, we have multiple layer IDs
// which should all produce the same diff. For the sake of tests that
// want to make sure we created different layers each time the blob was
// "put", though, cycle through the layers.
layerList := s.Layers[info.Digest]
position, ok := s.LayerPosition[info.Digest]
if !ok {
position = 0
}
s.LayerPosition[info.Digest] = (position + 1) % len(layerList)
logrus.Debugf("exporting filesystem layer %q for blob %q", layerList[position], info.Digest)
rc, n, err = diffLayer(s.imageRef.transport.store, layerList[position])
return rc, n, layerList[position], err
}
func diffLayer(store storage.Store, layerID string) (rc io.ReadCloser, n int64, err error) {
layer, err := store.GetLayer(layerID)
if err != nil {
return nil, -1, err
}
layerMeta := storageLayerMetadata{
CompressedSize: -1,
}
if layer.Metadata != "" {
if err := json.Unmarshal([]byte(layer.Metadata), &layerMeta); err != nil {
return nil, -1, errors.Wrapf(err, "error decoding metadata for layer %q", layerID)
}
}
if layerMeta.CompressedSize <= 0 {
n = -1
} else {
n = layerMeta.CompressedSize
}
diff, err := store.Diff("", layer.ID)
if err != nil {
return nil, -1, err
}
return diff, n, nil
}
func (s *storageImageSource) GetManifest() (manifestBlob []byte, MIMEType string, err error) {
manifestBlob, err = s.imageRef.transport.store.GetImageBigData(s.ID, "manifest")
return manifestBlob, manifest.GuessMIMEType(manifestBlob), err
}
func (s *storageImageSource) GetTargetManifest(digest ddigest.Digest) (manifestBlob []byte, MIMEType string, err error) {
return nil, "", ErrNoManifestLists
}
func (s *storageImageSource) GetSignatures() (signatures [][]byte, err error) {
var offset int
signature, err := s.imageRef.transport.store.GetImageBigData(s.ID, "signatures")
if err != nil {
return nil, err
}
sigslice := [][]byte{}
for _, length := range s.SignatureSizes {
sigslice = append(sigslice, signature[offset:offset+length])
offset += length
}
if offset != len(signature) {
return nil, errors.Errorf("signatures data contained %d extra bytes", len(signatures)-offset)
}
return sigslice, nil
}
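The encoding shared by PutSignatures and GetSignatures is easiest to see in isolation; a self-contained sketch of the same scheme:
sigs := [][]byte{[]byte("first"), []byte("second, longer")}
var blob []byte  // all signatures, concatenated
sizes := []int{} // SignatureSizes: the length of each signature, in order
for _, sig := range sigs {
	sizes = append(sizes, len(sig))
	blob = append(blob, sig...)
}
// Slicing them back apart, as GetSignatures does above:
offset := 0
for _, length := range sizes {
	_ = blob[offset : offset+length] // one original signature
	offset += length
}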
func (s *storageImageSource) getSize() (int64, error) {
var sum int64
names, err := s.imageRef.transport.store.ListImageBigData(s.imageRef.id)
if err != nil {
return -1, errors.Wrapf(err, "error reading image %q", s.imageRef.id)
}
for _, name := range names {
bigSize, err := s.imageRef.transport.store.GetImageBigDataSize(s.imageRef.id, name)
if err != nil {
return -1, errors.Wrapf(err, "error reading data blob size %q for %q", name, s.imageRef.id)
}
sum += bigSize
}
for _, sigSize := range s.SignatureSizes {
sum += int64(sigSize)
}
for _, layerList := range s.Layers {
for _, layerID := range layerList {
layer, err := s.imageRef.transport.store.GetLayer(layerID)
if err != nil {
return -1, err
}
layerMeta := storageLayerMetadata{
Size: -1,
}
if layer.Metadata != "" {
if err := json.Unmarshal([]byte(layer.Metadata), &layerMeta); err != nil {
return -1, errors.Wrapf(err, "error decoding metadata for layer %q", layerID)
}
}
if layerMeta.Size < 0 {
return -1, errors.Errorf("size for layer %q is unknown, failing getSize()", layerID)
}
sum += layerMeta.Size
}
}
return sum, nil
}
func (s *storageImage) Size() (int64, error) {
return s.size, nil
}
// newImage creates an image that also knows its size
func newImage(s storageReference) (types.Image, error) {
src, err := newImageSource(s)
if err != nil {
return nil, err
}
img, err := image.FromSource(src)
if err != nil {
return nil, err
}
size, err := src.getSize()
if err != nil {
return nil, err
}
return &storageImage{Image: img, size: size}, nil
}


@@ -0,0 +1,127 @@
package storage
import (
"strings"
"github.com/Sirupsen/logrus"
"github.com/containers/image/docker/reference"
"github.com/containers/image/types"
)
// A storageReference holds an arbitrary name and/or an ID, which is a 32-byte
// value hex-encoded into a 64-character string, and a reference to a Store
// where an image is, or would be, kept.
type storageReference struct {
transport storageTransport
reference string
id string
name reference.Named
}
func newReference(transport storageTransport, reference, id string, name reference.Named) *storageReference {
// We take a copy of the transport, which contains a pointer to the
// store that it used for resolving this reference, so that the
// transport that we'll return from Transport() won't be affected by
// further calls to the original transport's SetStore() method.
return &storageReference{
transport: transport,
reference: reference,
id: id,
name: name,
}
}
// Resolve the reference's name to an image ID in the store, if there's already
// one present with the same name or ID.
func (s *storageReference) resolveID() string {
if s.id == "" {
image, err := s.transport.store.GetImage(s.reference)
if image != nil && err == nil {
s.id = image.ID
}
}
return s.id
}
// Return a Transport object that defaults to using the same store that we used
// to build this reference object.
func (s storageReference) Transport() types.ImageTransport {
return &storageTransport{
store: s.transport.store,
}
}
// Return a name with a tag, if we have a name to base it on.
func (s storageReference) DockerReference() reference.Named {
return s.name
}
// Return a name with a tag, prefixed with the graph root and driver name, to
// disambiguate between images which may be present in multiple stores and
// share only their names.
func (s storageReference) StringWithinTransport() string {
storeSpec := "[" + s.transport.store.GetGraphDriverName() + "@" + s.transport.store.GetGraphRoot() + "]"
if s.name == nil {
return storeSpec + "@" + s.id
}
if s.id == "" {
return storeSpec + s.reference
}
return storeSpec + s.reference + "@" + s.id
}
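For concreteness, hedged examples of the three shapes this can produce (driver name, graph root, and ID invented):
// name == nil:  "[overlay@/var/lib/containers/storage]@<64-hex-id>"
// id == "":     "[overlay@/var/lib/containers/storage]docker.io/library/busybox:latest"
// both present: "[overlay@/var/lib/containers/storage]docker.io/library/busybox:latest@<64-hex-id>"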
func (s storageReference) PolicyConfigurationIdentity() string {
return s.StringWithinTransport()
}
// Also accept policy that's tied to the combination of the graph root and
// driver name, to apply to all images stored in the Store, and to just the
// graph root, in case we're using multiple drivers in the same directory for
// some reason.
func (s storageReference) PolicyConfigurationNamespaces() []string {
storeSpec := "[" + s.transport.store.GetGraphDriverName() + "@" + s.transport.store.GetGraphRoot() + "]"
driverlessStoreSpec := "[" + s.transport.store.GetGraphRoot() + "]"
namespaces := []string{}
if s.name != nil {
if s.id != "" {
// The reference without the ID is also a valid namespace.
namespaces = append(namespaces, storeSpec+s.reference)
}
components := strings.Split(s.name.Name(), "/")
for len(components) > 0 {
namespaces = append(namespaces, storeSpec+strings.Join(components, "/"))
components = components[:len(components)-1]
}
}
namespaces = append(namespaces, storeSpec)
namespaces = append(namespaces, driverlessStoreSpec)
return namespaces
}
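A hedged worked example (store spec invented): for a reference whose name is docker.io/library/busybox:latest and which also carries an ID, the loop above yields, in order:
// [overlay@/var/lib/containers/storage]docker.io/library/busybox:latest
// [overlay@/var/lib/containers/storage]docker.io/library/busybox
// [overlay@/var/lib/containers/storage]docker.io/library
// [overlay@/var/lib/containers/storage]docker.io
// [overlay@/var/lib/containers/storage]
// [/var/lib/containers/storage]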
func (s storageReference) NewImage(ctx *types.SystemContext) (types.Image, error) {
return newImage(s)
}
func (s storageReference) DeleteImage(ctx *types.SystemContext) error {
id := s.resolveID()
if id == "" {
logrus.Errorf("reference %q does not resolve to an image ID", s.StringWithinTransport())
return ErrNoSuchImage
}
layers, err := s.transport.store.DeleteImage(id, true)
if err == nil {
logrus.Debugf("deleted image %q", id)
for _, layer := range layers {
logrus.Debugf("deleted layer %q", layer)
}
}
return err
}
func (s storageReference) NewImageSource(ctx *types.SystemContext, requestedManifestMIMETypes []string) (types.ImageSource, error) {
return newImageSource(s)
}
func (s storageReference) NewImageDestination(ctx *types.SystemContext) (types.ImageDestination, error) {
return newImageDestination(s)
}


@@ -0,0 +1,291 @@
package storage
import (
"path/filepath"
"regexp"
"strings"
"github.com/pkg/errors"
"github.com/Sirupsen/logrus"
"github.com/containers/image/docker/reference"
"github.com/containers/image/transports"
"github.com/containers/image/types"
"github.com/containers/storage/storage"
"github.com/opencontainers/go-digest"
ddigest "github.com/opencontainers/go-digest"
)
func init() {
transports.Register(Transport)
}
var (
// Transport is an ImageTransport that uses either a default
// storage.Store or one that it's explicitly told to use.
Transport StoreTransport = &storageTransport{}
// ErrInvalidReference is returned when ParseReference() is passed an
// empty reference.
ErrInvalidReference = errors.New("invalid reference")
// ErrPathNotAbsolute is returned when a graph root is not an absolute
// path name.
ErrPathNotAbsolute = errors.New("path name is not absolute")
idRegexp = regexp.MustCompile("^(sha256:)?([0-9a-fA-F]{64})$")
)
// StoreTransport is an ImageTransport that uses a storage.Store to parse
// references, either its own default or one that it's told to use.
type StoreTransport interface {
types.ImageTransport
// SetStore sets the default store for this transport.
SetStore(storage.Store)
// GetImage retrieves the image from the transport's store that's named
// by the reference.
GetImage(types.ImageReference) (*storage.Image, error)
// GetStoreImage retrieves the image from a specified store that's named
// by the reference.
GetStoreImage(storage.Store, types.ImageReference) (*storage.Image, error)
// ParseStoreReference parses a reference, overriding any store
// specification that it may contain.
ParseStoreReference(store storage.Store, reference string) (*storageReference, error)
}
type storageTransport struct {
store storage.Store
}
func (s *storageTransport) Name() string {
// Still haven't really settled on a name.
return "containers-storage"
}
// SetStore sets the Store object which the Transport will use for parsing
// references when information about a Store is not directly specified as part
// of the reference. If one is not set, the library will attempt to initialize
// one with default settings when a reference needs to be parsed. Calling
// SetStore does not affect previously parsed references.
func (s *storageTransport) SetStore(store storage.Store) {
s.store = store
}
// ParseStoreReference takes a name or an ID, tries to figure out which it is
// relative to the given store, and returns it in a reference object.
func (s storageTransport) ParseStoreReference(store storage.Store, ref string) (*storageReference, error) {
var name reference.Named
var sum digest.Digest
var err error
if ref == "" {
return nil, ErrInvalidReference
}
if ref[0] == '[' {
// Ignore the store specifier.
closeIndex := strings.IndexRune(ref, ']')
if closeIndex < 1 {
return nil, ErrInvalidReference
}
ref = ref[closeIndex+1:]
}
refInfo := strings.SplitN(ref, "@", 2)
if len(refInfo) == 1 {
// A name.
name, err = reference.ParseNormalizedNamed(refInfo[0])
if err != nil {
return nil, err
}
} else if len(refInfo) == 2 {
// An ID, possibly preceded by a name.
if refInfo[0] != "" {
name, err = reference.ParseNormalizedNamed(refInfo[0])
if err != nil {
return nil, err
}
}
sum, err = digest.Parse("sha256:" + refInfo[1])
if err != nil {
return nil, err
}
} else { // Coverage: len(refInfo) is always 1 or 2
// Anything else: store specified in a form we don't
// recognize.
return nil, ErrInvalidReference
}
storeSpec := "[" + store.GetGraphDriverName() + "@" + store.GetGraphRoot() + "]"
id := ""
if sum.Validate() == nil {
id = sum.Hex()
}
refname := ""
if name != nil {
name = reference.TagNameOnly(name)
refname = verboseName(name)
}
if refname == "" {
logrus.Debugf("parsed reference into %q", storeSpec+"@"+id)
} else if id == "" {
logrus.Debugf("parsed reference into %q", storeSpec+refname)
} else {
logrus.Debugf("parsed reference into %q", storeSpec+refname+"@"+id)
}
return newReference(storageTransport{store: store}, refname, id, name), nil
}
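Hedged examples of the reference forms accepted above (IDs abbreviated, store spec invented):
// "busybox"                  -> a name, normalized to docker.io/library/busybox:latest
// "@<64-hex-digits>"         -> an ID only
// "busybox@<64-hex-digits>"  -> a name and an ID
// "[overlay@/var/lib/containers/storage]busybox" -> same as "busybox"; the store prefix is ignored here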
func (s *storageTransport) GetStore() (storage.Store, error) {
// Return the transport's previously-set store. If we don't have one
// of those, initialize one now.
if s.store == nil {
store, err := storage.GetStore(storage.DefaultStoreOptions)
if err != nil {
return nil, err
}
s.store = store
}
return s.store, nil
}
// ParseReference takes a name and/or an ID ("_name_"/"@_id_"/"_name_@_id_"),
// possibly prefixed with a store specifier in the form "[_graphroot_]" or
// "[_driver_@_graphroot_]", tries to figure out which it is, and returns it in
// a reference object. If the _graphroot_ is a location other than the default,
// it needs to have been previously opened using storage.GetStore(), so that it
// can figure out which run root goes with the graph root.
func (s *storageTransport) ParseReference(reference string) (types.ImageReference, error) {
store, err := s.GetStore()
if err != nil {
return nil, err
}
// Check if there's a store location prefix. If there is, then it
// needs to match a store that was previously initialized using
// storage.GetStore(), or be enough to let the storage library fill out
// the rest using knowledge that it has from elsewhere.
if reference[0] == '[' {
closeIndex := strings.IndexRune(reference, ']')
if closeIndex < 1 {
return nil, ErrInvalidReference
}
storeSpec := reference[1:closeIndex]
reference = reference[closeIndex+1:]
storeInfo := strings.SplitN(storeSpec, "@", 2)
if len(storeInfo) == 1 && storeInfo[0] != "" {
// One component: the graph root.
if !filepath.IsAbs(storeInfo[0]) {
return nil, ErrPathNotAbsolute
}
store2, err := storage.GetStore(storage.StoreOptions{
GraphRoot: storeInfo[0],
})
if err != nil {
return nil, err
}
store = store2
} else if len(storeInfo) == 2 && storeInfo[0] != "" && storeInfo[1] != "" {
// Two components: the driver type and the graph root.
if !filepath.IsAbs(storeInfo[1]) {
return nil, ErrPathNotAbsolute
}
store2, err := storage.GetStore(storage.StoreOptions{
GraphDriverName: storeInfo[0],
GraphRoot: storeInfo[1],
})
if err != nil {
return nil, err
}
store = store2
} else {
// Anything else: store specified in a form we don't
// recognize.
return nil, ErrInvalidReference
}
}
return s.ParseStoreReference(store, reference)
}
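Hedged examples of full references this parses (paths invented):
// "busybox"                            -> uses the transport's default store
// "[/var/lib/mystore]busybox"          -> graph root only; its run root must already be known to the storage library
// "[overlay@/var/lib/mystore]busybox"  -> explicit driver and graph root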
func (s storageTransport) GetStoreImage(store storage.Store, ref types.ImageReference) (*storage.Image, error) {
dref := ref.DockerReference()
if dref == nil {
if sref, ok := ref.(*storageReference); ok {
if sref.id != "" {
if img, err := store.GetImage(sref.id); err == nil {
return img, nil
}
}
}
return nil, ErrInvalidReference
}
return store.GetImage(verboseName(dref))
}
func (s *storageTransport) GetImage(ref types.ImageReference) (*storage.Image, error) {
store, err := s.GetStore()
if err != nil {
return nil, err
}
return s.GetStoreImage(store, ref)
}
func (s storageTransport) ValidatePolicyConfigurationScope(scope string) error {
// Check that there's a store location prefix. Values we're passed are
// expected to come from PolicyConfigurationIdentity or
// PolicyConfigurationNamespaces, so if there's no store location,
// something's wrong.
if scope[0] != '[' {
return ErrInvalidReference
}
// Parse the store location prefix.
closeIndex := strings.IndexRune(scope, ']')
if closeIndex < 1 {
return ErrInvalidReference
}
storeSpec := scope[1:closeIndex]
scope = scope[closeIndex+1:]
storeInfo := strings.SplitN(storeSpec, "@", 2)
if len(storeInfo) == 1 && storeInfo[0] != "" {
// One component: the graph root.
if !filepath.IsAbs(storeInfo[0]) {
return ErrPathNotAbsolute
}
} else if len(storeInfo) == 2 && storeInfo[0] != "" && storeInfo[1] != "" {
// Two components: the driver type and the graph root.
if !filepath.IsAbs(storeInfo[1]) {
return ErrPathNotAbsolute
}
} else {
// Anything else: store specified in a form we don't
// recognize.
return ErrInvalidReference
}
// That might be all of it, and that's okay.
if scope == "" {
return nil
}
// But if there is anything left, it has to be a name, with or without
// a tag, with or without an ID, since we don't return namespace values
// that are just bare IDs.
scopeInfo := strings.SplitN(scope, "@", 2)
if len(scopeInfo) == 1 && scopeInfo[0] != "" {
_, err := reference.ParseNormalizedNamed(scopeInfo[0])
if err != nil {
return err
}
} else if len(scopeInfo) == 2 && scopeInfo[0] != "" && scopeInfo[1] != "" {
_, err := reference.ParseNormalizedNamed(scopeInfo[0])
if err != nil {
return err
}
_, err = ddigest.Parse("sha256:" + scopeInfo[1])
if err != nil {
return err
}
} else {
return ErrInvalidReference
}
return nil
}
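Hedged examples of what the validation above accepts and rejects (paths invented):
// accepted: "[/var/lib/containers/storage]"
// accepted: "[overlay@/var/lib/containers/storage]docker.io/library/busybox:latest"
// accepted: "[overlay@/var/lib/containers/storage]docker.io/library/busybox:latest@<64-hex-digits>"
// rejected: "docker.io/library/busybox"          (no store prefix)
// rejected: "[overlay@relative/path]busybox"     (graph root not absolute)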
func verboseName(name reference.Named) string {
name = reference.TagNameOnly(name)
tag := ""
if tagged, ok := name.(reference.NamedTagged); ok {
tag = tagged.Tag()
}
return name.Name() + ":" + tag
}
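Illustratively (named() is hypothetical shorthand for a call to reference.ParseNormalizedNamed):
// verboseName(named("docker.io/library/busybox"))      == "docker.io/library/busybox:latest"
// verboseName(named("docker.io/library/busybox:1.27")) == "docker.io/library/busybox:1.27"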

Some files were not shown because too many files have changed in this diff.