Compare commits


177 Commits

Author SHA1 Message Date
Antonio Murdaca
2e8377a708 version: bump to v0.1.26
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2017-11-15 14:48:39 +01:00
Daniel J Walsh
a76cfb7dc7 Merge pull request #450 from umohnani8/dir_transport
Add manifest type conversion to skopeo copy
2017-11-15 08:32:05 -05:00
Urvashi Mohnani
409dce8a89 Add manifest type conversion to skopeo with dir transport
Users can select from 3 manifest types: oci, v2s1, or v2s2.
skopeo copy defaults to an oci manifest if the --format flag is not set.
Adds an option to compress blobs when saving to the directory using the dir transport,
e.g. skopeo copy --format v2s1 --compress-blobs docker-archive:alp.tar dir:my-directory

Signed-off-by: Urvashi Mohnani <umohnani@redhat.com>
2017-11-14 14:45:32 -05:00
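A usage sketch based on this commit's description (image names and the target directory are illustrative; note that in the code change further down, the compression option surfaces as a `--dest-compress` flag):

```sh
# Copy into a directory, converting the manifest to Docker schema 2:
$ skopeo copy --format v2s2 docker://busybox:latest dir:busybox-v2s2

# Without --format, the copy defaults to an OCI manifest:
$ skopeo copy docker://busybox:latest dir:busybox-oci
```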
Urvashi Mohnani
5b14746045 Vendor in changes from containers/image
Adds manifest type conversion to dir transport

Signed-off-by: Urvashi Mohnani <umohnani@redhat.com>
2017-11-14 14:45:32 -05:00
Antonio Murdaca
a3d2e8323a Merge pull request #409 from runcom/add-logo
[do not merge yet] add logo :)
2017-11-14 20:10:20 +01:00
Antonio Murdaca
2be4deb980 README.md: add logo
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2017-11-14 20:09:19 +01:00
Antonio Murdaca
5f71547262 Merge pull request #451 from runcom/cve-error-log
Fix CVE in tar-split
2017-11-08 16:26:21 +01:00
Antonio Murdaca
6c791a0559 bump back to v0.1.26-dev
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2017-11-08 15:59:21 +01:00
Antonio Murdaca
7fd6f66b7f bump to v0.1.25
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2017-11-08 15:59:01 +01:00
Antonio Murdaca
a1b48be22e Fix CVE in tar-split
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2017-11-08 15:58:43 +01:00
Antonio Murdaca
bb5584bc4c Merge pull request #442 from mtrmac/fix-vendor
Revert mis-merged reverts of vendor.conf
2017-11-07 20:40:03 +01:00
Miloslav Trmač
3e57660394 Revert mis-merged reverts of vendor.conf
PR #440 reverted the vendor.conf edits of #426.  This passed CI
because the corresponding vendor/* subpackages were not modified.

Restore the vendor.conf changes, and re-run a full (vndr) to ensure
the two are consistent again.

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2017-11-07 19:34:26 +01:00
Miloslav Trmač
803619cabf Merge pull request #447 from marcosps/issue_446
README.md: Fix example of skopeo copy command
2017-11-07 17:53:25 +01:00
Marcos Paulo de Souza
8c1a69d1f6 README.md: Fix example of skopeo copy command
In README.md, there is an example of the skopeo copy command to download an
image in OCI format, but the current code returns an error:

skopeo copy docker://busybox:latest oci:busybox_ocilayout
FATA[0000] Error initializing destination oci:tmp:: cannot save image with empty image.ref.name

If we add a tag after the oci directory, the problem is gone:
skopeo copy docker://busybox:latest oci:busybox_ocilayout:latest

Fixes: #446

Signed-off-by: Marcos Paulo de Souza <marcos.souza.org@gmail.com>
2017-11-06 22:54:11 -02:00
Miloslav Trmač
24423ce4a7 Merge pull request #443 from nstack/shared-blobs
copy: add shared blob directory support for OCI sources/destinations
2017-11-06 18:46:34 +01:00
Jonathan Boulle
76b071cf74 cmd/copy: add {src,dst}-shared-blob-dir flags
Only works for OCI layout sources and destinations.
2017-11-06 17:00:39 +01:00
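A sketch of the new flags, using an illustrative blob-directory path; per the commit, they only work for `oci:` layout sources and destinations:

```sh
# Store layer blobs outside the layout so several layouts can share them:
$ skopeo copy --dest-shared-blob-dir /var/lib/oci-blobs \
    docker://busybox:latest oci:busybox-layout:latest

# Read the image back, fetching blobs from the shared directory:
$ skopeo copy --src-shared-blob-dir /var/lib/oci-blobs \
    oci:busybox-layout:latest dir:busybox-dir
```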
Jonathan Boulle
407a7d9e70 vendor: bump containers/image to master
To pick up containers/image#369

Signed-off-by: Jonathan Boulle <jonathanboulle@gmail.com>
2017-11-06 17:00:39 +01:00
Antonio Murdaca
0d0055df05 Merge pull request #444 from mtrmac/contributing-commits
Modify CONTRIBUTING.md to prefer smaller commits over squashing them
2017-11-06 16:31:37 +01:00
Miloslav Trmač
63b3be2f13 Modify CONTRIBUTING.md to prefer smaller commits over squashing them
See the updated text for the rationale.

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2017-11-06 16:21:50 +01:00
Antonio Murdaca
62c68998d7 Merge pull request #440 from hferentschik/container-image-issue-327
[DO NOT MERGE] - Aligning Docker version between containers/image and skopeo
2017-11-06 11:58:04 +01:00
Hardy Ferentschik
4125d741cf Aligning Docker version between containers/image and skopeo
Signed-off-by: Hardy Ferentschik <hardy@hibernate.org>
2017-11-06 11:45:03 +01:00
Antonio Murdaca
bd07ffb9f4 Merge pull request #426 from mtrmac/single-logrus
Update image-tools, and remove the duplicate Sirupsen/logrus vendor
2017-10-31 16:23:48 +01:00
Miloslav Trmač
700199c944 Update image-tools, and remove the duplicate Sirupsen/logrus vendor 2017-10-30 17:24:44 +01:00
Antonio Murdaca
40a5f48632 Merge pull request #441 from mtrmac/smaller-containers
Create smaller testing containers
2017-10-28 09:05:30 +02:00
Miloslav Trmač
83ca466071 Remove the openshift/origin checkout from /tmp after building it 2017-10-28 02:19:29 +02:00
Miloslav Trmač
a7e8a9b4d4 Run (dnf clean all) after finishing the installation
... to drop all caches which will never be needed again.
2017-10-28 02:17:15 +02:00
Miloslav Trmač
3f10c1726d Do not use a separate yum command to install OpenShift dependencies
We don't really need to pay the depsolving overhead twice.
2017-10-28 02:16:25 +02:00
Miloslav Trmač
832eaa1f67 Use ordinary shell variables instead of ENV for REGISTRY_COMMIT*
They are only used in the immediately following shell snippet,
no need to pollute the container environment with them, nor to add
two extra layers.
2017-10-28 02:14:44 +02:00
Antonio Murdaca
e2b2d25f24 Merge pull request #439 from TomSweeneyRedHat/dev/tsweeney/dockfix/4
A few wording touchups to CONTRIBUTING.md
2017-10-25 17:35:04 +02:00
TomSweeneyRedHat
3fa370fa2e A few wording touchups to CONTRIBUTING.md
Signed-off-by: TomSweeneyRedHat <tsweeney@redhat.com>
2017-10-25 10:30:26 -04:00
Antonio Murdaca
88cff614ed Merge pull request #408 from cyphar/use-buildmode-pie
makefile: use -buildmode=pie
2017-10-22 09:05:53 +02:00
Aleksa Sarai
b23cac9c05 makefile: use -buildmode=pie
The security benefits of PIC binaries are quite well known (since they
work with ASLR), and there is effectively no downside. In addition,
we've been seeing some weird linker errors on ppc64le that are resolved
by using -buildmode=pie.

Signed-off-by: Aleksa Sarai <asarai@suse.de>
2017-10-22 10:13:20 +11:00
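The Makefile diff below wires this up via GO_DYN_FLAGS on Linux; the equivalent manual invocation (build tags omitted for brevity) would be:

```sh
$ go build -buildmode=pie -o skopeo ./cmd/skopeo
```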
Antonio Murdaca
c928962ea8 Merge pull request #438 from runcom/bump-v0.1.24
Bump v0.1.24
2017-10-21 19:44:26 +02:00
Antonio Murdaca
ee011b1bf9 bump to v0.1.25-dev
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2017-10-21 19:27:29 +02:00
Antonio Murdaca
dd2c3e3a8e bump to v0.1.24
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2017-10-21 19:27:07 +02:00
Antonio Murdaca
f74e3fbb0f Merge pull request #437 from runcom/fix-config-nil
fix inspect with nil image config
2017-10-21 18:18:51 +02:00
Antonio Murdaca
e3f7733de1 fix inspect with nil image config
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2017-10-21 18:05:19 +02:00
Antonio Murdaca
5436111796 Merge pull request #435 from mtrmac/brew-failure
Work around broken brew in Travis
2017-10-20 15:24:26 +02:00
Miloslav Trmač
b8a502ae87 Work around broken brew in Travis
Per https://github.com/Homebrew/brew/issues/3299 , (brew update)
is needed to avoid a
> ==> Downloading https://homebrew.bintray.com/bottles-portable/portable-ruby-2.3.3.leopard_64.bottle.1.tar.gz
> ######################################################################## 100.0%
> ==> Pouring portable-ruby-2.3.3.leopard_64.bottle.1.tar.gz
...
> /usr/local/Homebrew/Library/Homebrew/brew.rb:12:in `<main>': Homebrew must be run under Ruby 2.3! You're running 2.0.0. (RuntimeError)

Ideally Travis should bake the (brew update) into its images
(https://github.com/travis-ci/travis-ci/issues/8552 ), but that’s only going
to happen around November 2017 per https://blog.travis-ci.com/2017-10-16-a-new-default-os-x-image-is-coming .

Until then, we have to do that ourselves.
2017-10-19 18:04:23 +02:00
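The workaround as it lands in .travis.yml (see the diff below) boils down to:

```sh
$ if [[ "$TRAVIS_OS_NAME" == "osx" ]]; then brew update && brew install gpgme ; fi
```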
Jhon Honce
28d4e08a4b Merge pull request #429 from giuseppe/vendor-containers-image
containers/image: vendor
2017-10-06 10:24:34 -07:00
Giuseppe Scrivano
ef464797c1 containers/image: vendor
Vendor in latest containers/image

Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
2017-10-06 18:16:39 +02:00
Antonio Murdaca
2966f794fc Merge pull request #406 from mtrmac/macOS
Simplify macOS builds
2017-10-03 13:01:44 +02:00
Miloslav Trmač
37e34aaff2 Automatically use the right CFLAGS and LDFLAGS for gpgme
On macOS, (brew install gpgme) installs it within /usr/local, but
/usr/local/include is not in the default search path.

Rather than hard-code this directory, use gpgme-config. Sadly that
must be done at the top-level user instead of locally in the gpgme
subpackage, because cgo supports only pkg-config, not general shell
scripts, and gpgme does not install a pkg-config file.

If gpgme is not installed or gpgme-config can’t be found for other reasons,
the error is silently ignored (and the user will probably find out because
the cgo compilation will fail); this is so that users can use the
containers_image_openpgp build tag without seeing ugly errors
(and without the Makefile having to detect that build tag in even more
shell scripts).
2017-10-02 21:58:23 +02:00
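Expressed as a one-off shell invocation (the Makefile variant in the diff below also passes build tags, omitted here), the change amounts to:

```sh
# Errors from gpgme-config are deliberately discarded, per the commit message.
$ CGO_CFLAGS="$(gpgme-config --cflags 2>/dev/null)" \
  CGO_LDFLAGS="$(gpgme-config --libs 2>/dev/null)" \
  go build -o skopeo ./cmd/skopeo
```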
Miloslav Trmač
aa6df53779 Only use the cgo workaround if using gpgme
Otherwise we would try to link with gpgme only for that unnecessary
workaround.
2017-10-02 20:49:49 +02:00
Miloslav Trmač
e3170801c5 Merge pull request #427 from rhatdan/vendor
Vendor in latest containers storage
2017-10-02 17:46:40 +02:00
Daniel J Walsh
4e9ef94365 Vendor in latest containers storage
We want to get support into skopeo for handling
override_kernel_checks so that we can use overlay
backend on RHEL.

Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
2017-09-30 10:40:45 +00:00
Miloslav Trmač
9f54acd6bd Merge pull request #424 from lsm5/Makefile-go-var
use Makefile var for go compiler
2017-09-26 17:54:37 +02:00
Lokesh Mandvekar
e735faac75 use Makefile var for go compiler
This will allow compilation with a custom go binary,
for example /usr/lib/go-1.8/bin/go instead of /usr/bin/go on Ubuntu
16.04, which still ships Go 1.6.

Signed-off-by: Lokesh Mandvekar <lsm5@fedoraproject.org>
2017-09-25 17:23:00 -04:00
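A hypothetical invocation using the new variable (the path is the Ubuntu 16.04 example from the commit message):

```sh
$ make binary-local GO=/usr/lib/go-1.8/bin/go
```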
Miloslav Trmač
3c67427272 Merge pull request #422 from umohnani8/auth
fixing error checking due to update in make lint
2017-09-21 16:11:37 +02:00
umohnani8
a1865e9d8b fixing error checking due to update in make lint
make lint complains about cases where the returned error is checked
for err != nil, and then returned anyway.

Signed-off-by: umohnani8 <umohnani@redhat.com>
2017-09-20 14:45:45 -04:00
Miloslav Trmač
875dd2e7a9 Merge pull request #417 from mtrmac/manifest-list-hofix
Vendor in mtrmac/image:manifest-list-hotfix
2017-09-14 17:41:55 +02:00
Miloslav Trmač
75dc703d6a Mark TestCopyFailsWithManifestList with ExpectFailure
This is one of the trade-offs we made.
2017-09-13 19:54:51 +02:00
Miloslav Trmač
fd6324f800 Vendor after merging mtrmac/image:manifest-list-hotfix 2017-09-13 18:47:43 +02:00
Miloslav Trmač
a41cd0a0ab Merge pull request #413 from TomSweeneyRedHat/dev/tsweeney/docfix/3
Spruce up the README.md a bit
2017-09-11 19:55:04 +02:00
TomSweeneyRedHat
b548b5f96f Spruce up the README.md a bit
Signed-off-by: TomSweeneyRedHat <tsweeney@redhat.com>
2017-09-11 12:54:04 -04:00
Miloslav Trmač
2bfbb4cbf2 Merge pull request #412 from thomasmckay/patch-1
update installing build dependencies
2017-09-11 18:17:04 +02:00
thomasmckay
b2297592f3 removed builddep for adding ostree-devel 2017-09-11 08:51:08 -04:00
thomasmckay
fe6073e87e update installing build dependencies 2017-09-08 09:37:32 -04:00
Antonio Murdaca
09557f308c Merge pull request #411 from TomSweeneyRedHat/dev/tsweeney/docfix2
Touch up a few tpyos
2017-09-08 01:07:48 +02:00
TomSweeneyRedHat
10c2053967 Touch up a few tpyos
Signed-off-by: TomSweeneyRedHat <tsweeney@redhat.com>
2017-09-07 14:19:08 -04:00
Miloslav Trmač
55e7b079f1 Merge pull request #403 from owtaylor/requested-manifest-mime-types
Update for removal of requestedMIMETypes from ImageReference.NewImageSource()
2017-09-07 18:04:32 +02:00
Owen W. Taylor
035fc3a817 Update for removal of requestedMIMETypes from containers/image/types.ImageReference.NewImageSource()
Signed-off-by: Owen W. Taylor <otaylor@fishsoup.net>
2017-09-07 10:28:37 -04:00
Miloslav Trmač
5faf7d8001 Merge pull request #407 from dlorenc/helper
Add docker-credential-helpers dependency.
2017-09-04 19:54:15 +02:00
dlorenc
2ada6b20a2 Add docker-credential-helpers dependency. 2017-09-04 10:00:08 -07:00
Miloslav Trmač
16181f1cfb Merge pull request #404 from mtrmac/shallow-clone
Use (git clone --depth=1) to speed up testing
2017-09-02 00:32:57 +02:00
Miloslav Trmač
8c07dec7a9 Use (git clone --depth=1) to speed up testing
This reduces the time used to clone openshift/origin on Travis from
> real	2m34.227s
> user	4m18.844s
> sys	0m8.144s
to
> real	0m8.816s
> user	0m2.640s
> sys	0m0.856s
, and the download size from 782.78 MiB to 70.05 MiB.

We can't trivially do this for docker/distribution because it is using
(git checkout $commit) on the cloned repo; we could do a clone+fetch+fetch
with --depth=1, but the full clone takes less than two seconds, so let's
keep that one simple.
2017-09-01 21:05:13 +02:00
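The resulting clone command, as it appears in the Dockerfile diff below:

```sh
$ git clone --depth 1 -b v1.5.0-alpha.3 git://github.com/openshift/origin "$GOPATH/src/github.com/openshift/origin"
```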
Miloslav Trmač
a9e8c588e9 Merge pull request #399 from umohnani8/oci_name
Modify skopeo tests
2017-08-31 19:03:37 +02:00
umohnani8
bf6812ea86 [DO NOT MERGE] Modify skopeo tests
The oci name changes in containers/image caused the skopeo test to fail

Signed-off-by: umohnani8 <umohnani@redhat.com>
2017-08-31 12:02:57 -04:00
Antonio Murdaca
5f6b0a00e2 Merge pull request #397 from cyphar/renable-verification-oci
integration: re-enable image-tools upstream validation
2017-08-17 04:58:20 +02:00
Aleksa Sarai
d55a17ee43 integration: re-enable image-tools upstream validation
This effectively reverts f4a44f00b8 ("integration: disable check with
image-tools for image-spec RC5"), which disabled the compliance
validation due to upstream bugs. Since those bugs have been fixed,
re-enable the tests (to make the smoke tests far more effective).

Fixes: f4a44f00b8 ("integration: disable check with image-tools for image-spec RC5")
Signed-off-by: Aleksa Sarai <asarai@suse.de>
2017-08-15 02:08:34 +10:00
Aleksa Sarai
96ce8b63bc vendor: revendor github.com/opencontainers/image-tools@da84dc9dddc823a32f543e60323f841d12429c51
This requires re-vendoring a bunch of other things (as well as the old
Sirupsen/logrus path), the relevant commits being:

* github.com/xeipuuv/gojsonschema@0c8571ac0ce161a5feb57375a9cdf148c98c0f70
* github.com/xeipuuv/gojsonpointer@6fe8760cad3569743d51ddbb243b26f8456742dc
* github.com/xeipuuv/gojsonreference@e02fc20de94c78484cd5ffb007f8af96be030a45
* go4.org@034d17a462f7b2dcd1a4a73553ec5357ff6e6c6e

Signed-off-by: Aleksa Sarai <asarai@suse.de>
2017-08-15 02:08:12 +10:00
Antonio Murdaca
1d1cc1ff5b Merge pull request #388 from mrunalp/update_deps
[DO NOT MERGE] Update dependencies to change to logrus 1.0.0
2017-08-04 20:09:16 +02:00
Mrunal Patel
6f3ed0ecd9 Update dependencies to change to logrus 1.0.0
Signed-off-by: Mrunal Patel <mrunalp@gmail.com>
2017-08-04 10:22:59 -07:00
Antonio Murdaca
7e90bc082a Merge pull request #392 from mtrmac/test-test-failures
Fix travis failures on macOS
2017-08-04 11:58:37 +02:00
Miloslav Trmač
d11d173db1 Install golint on macOS
On Linux this is done in the Dockerfile, but macOS tests do not use containers.
2017-08-04 00:13:55 +02:00
Miloslav Trmač
72e662bcb7 Merge pull request #387 from errm/errm/osx-install
Fix `make install` on OSX
2017-07-27 18:00:26 +02:00
Ed Robinson
a934622220 Install go-md2man on travis osx 2017-07-27 09:47:34 +01:00
Ed Robinson
c448bc0a29 Check make install on osx travis build 2017-07-26 20:14:49 +01:00
Ed Robinson
b0b85dc32f Fix make install on OSX
Fixes #383

Signed-off-by: Ed Robinson <ed.robinson@reevoo.com>
2017-07-26 20:11:03 +01:00
Miloslav Trmač
75811bd4b1 Merge pull request #382 from errm/errm/automate-osx-build
Adds automated build for OSX
2017-07-26 17:08:02 +02:00
Ed Robinson
bb84c696e2 Stub out libostree support when building on OSX
Signed-off-by: Ed Robinson <ed.robinson@reevoo.com>
2017-07-26 14:50:11 +01:00
Ed Robinson
4d5e442c25 Adds automated build for OSX re #380
Signed-off-by: Ed Robinson <ed.robinson@reevoo.com>
2017-07-25 15:54:46 +01:00
Antonio Murdaca
91606d49f2 Merge pull request #379 from runcom/bump-oci-v1
Bump v0.1.23 for OCIv1 support and bug fixes
2017-07-20 19:53:18 +02:00
Antonio Murdaca
85bbb497d3 bump back to v0.1.24-dev
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2017-07-20 19:36:46 +02:00
Antonio Murdaca
1bbd87f435 bump to v0.1.23
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2017-07-20 19:36:46 +02:00
Antonio Murdaca
bf149c6426 Merge pull request #378 from mtrmac/runtime-spec-v1.0.0
Update to image-spec v1.0.0 and revendor
2017-07-20 19:33:24 +02:00
Miloslav Trmač
ca03debe59 Update to image-spec v1.0.0 and revendor 2017-07-20 18:04:00 +02:00
Antonio Murdaca
2d168e3723 Merge pull request #377 from mtrmac/image-spec-1.0.0
Update to image-spec v1.0.0 and revendor
2017-07-20 14:36:33 +02:00
Miloslav Trmač
2c1ede8449 Update to image-spec v1.0.0 and revendor 2017-07-19 23:50:50 +02:00
Miloslav Trmač
b2a06ed720 Merge pull request #376 from runcom/fix-375
vendor c/image: fix auth handlers
2017-07-18 18:58:36 +02:00
Antonio Murdaca
2874584be4 vendor c/image: fix auth handlers
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2017-07-18 17:14:35 +02:00
Antonio Murdaca
91e801b451 Merge pull request #371 from nalind/vendor-storage
Bump and pin containers/storage and containers/image
2017-06-28 17:30:14 +02:00
Nalin Dahyabhai
b0648d79d4 Bump containers/storage and containers/image
Update containers/storage and containers/image to the
current-as-of-this-writing versions,
105f7c77aef0c797429e41552743bf5b03b63263 and
23bddaa64cc6bf3f3077cda0dbf1cdd7007434df respectively.

Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>
2017-06-28 11:05:26 -04:00
Miloslav Trmač
437d608772 Merge pull request #370 from 0x0916/2017-06-22/dockerfile
Dockerfile.build: using ubuntu 17.04
2017-06-22 13:03:44 +02:00
0x0916
d57934d529 Dockerfile.build: using ubuntu 17.04
ubuntu 16.04 does not have the `libostree-dev` package. Also, we should
install the `libglib2.0-dev` package when building skopeo with `make binary`.

Signed-off-by: 0x0916 <w@laoqinren.net>
2017-06-22 17:17:24 +08:00
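The combined dependency install, flattened from the Dockerfile.build diff below:

```sh
$ apt-get update && apt-get install -y golang btrfs-tools git-core \
    libdevmapper-dev libgpgme11-dev go-md2man libglib2.0-dev libostree-dev
```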
Antonio Murdaca
4470b88c50 Merge pull request #368 from runcom/cut-v0.1.22
Cut v0.1.22
2017-06-21 10:39:49 +02:00
Antonio Murdaca
03595a83d0 bump back to v0.1.23
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2017-06-21 10:22:58 +02:00
Antonio Murdaca
5d24b67f5e bump to v0.1.22
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2017-06-21 10:22:43 +02:00
Antonio Murdaca
3b9ee4f322 Merge pull request #366 from rhatdan/transports
Give more useful help when explaining usage
2017-06-20 17:02:22 +02:00
Daniel J Walsh
0ca26cce94 Give more useful help when explaining usage
Also specify container-storage as a valid transport

Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
2017-06-20 14:28:02 +00:00
Antonio Murdaca
29528d00ec Merge pull request #367 from runcom/vendor-c/image-list-names
vendor c/image for ListNames in transports pkg
2017-06-17 00:26:40 +02:00
Antonio Murdaca
af34f50b8c bump ostree-go
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2017-06-17 00:08:54 +02:00
Antonio Murdaca
e7b32b1e6a vendor c/image for ListNames in transports pkg
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2017-06-16 23:39:06 +02:00
Antonio Murdaca
4049bf2801 Merge pull request #365 from rhatdan/docker
Remove docker references wherever possible
2017-06-16 19:15:33 +02:00
Daniel J Walsh
ad33537769 Remove docker references wherever possible
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
2017-06-16 10:51:42 -04:00
Antonio Murdaca
d5e34c1b5e Merge pull request #362 from rhatdan/master
Vendor in ostree fixes
2017-06-16 11:58:52 +02:00
Dan Walsh
5e586f3781 Vendor in ostree fixes
This will fix the compiler issues.

Signed-off-by: Dan Walsh <dwalsh@redhat.com>
2017-06-16 05:42:47 -04:00
Antonio Murdaca
455177e749 Merge pull request #358 from runcom/bump-release-0.1.21
Bump release 0.1.21
2017-06-15 17:16:46 +02:00
Antonio Murdaca
b85b7319aa bump back to v0.1.22
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2017-06-15 16:58:43 +02:00
Antonio Murdaca
0b73154601 bump to v0.1.21
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2017-06-15 16:58:24 +02:00
Miloslav Trmač
da48399f89 Merge pull request #357 from runcom/up-cstorage
*: update c/storage
2017-06-15 15:12:33 +02:00
Antonio Murdaca
08504d913c *: update c/storage
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2017-06-15 14:29:56 +02:00
Miloslav Trmač
fe67466701 Merge pull request #351 from giuseppe/add-ostree-dep
vendor.conf: add ostree-go
2017-06-14 17:18:51 +02:00
Giuseppe Scrivano
47e5d0cd9e vendor.conf: add ostree-go
It is used by containers/image for pulling images to the OSTree storage.

Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
2017-06-14 15:35:34 +02:00
Antonio Murdaca
f0d830a8ca Merge pull request #355 from runcom/fail-on-diff-os
fail early when image os doesn't match host os
2017-06-06 13:55:26 +02:00
Antonio Murdaca
6d3f523c57 fail early when image os doesn't match host os
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2017-06-06 13:40:16 +02:00
Miloslav Trmač
87a36f9d5b Merge pull request #343 from mtrmac/readme-updates
README.md updates
2017-06-05 17:50:15 +02:00
Miloslav Trmač
9d12d72fb7 Improve documentation on what to do with containers/image failures in test-skopeo 2017-06-05 15:16:43 +02:00
Miloslav Trmač
fc88117065 Drop finished work from TODO
- We now have the docker-archive: transport
- Integration tests with built registries also exist
2017-06-05 15:16:43 +02:00
Miloslav Trmač
0debda0bbb Start all command examples with $
That has mostly been the case, with one outlier.  Fix it.
2017-06-05 15:16:43 +02:00
Miloslav Trmač
49f10736d1 Restructure the “Building without a container” version
Consolidate the Fedora and macOS instructions to prevent duplication,
and to suggest using $GOPATH for both.

Start with installing dependencies.
2017-06-05 15:15:04 +02:00
Miloslav Trmač
3d0d2ea6bb Document that there is a choice in using containers, and what the trade-off is 2017-06-05 15:15:02 +02:00
Miloslav Trmač
a2499d3451 Create “Building {without a container,in a container}” subsections
To make it clearer that the two are alternatives.

Document that a docker command is needed for the in-container build.

Also move the “checkout in $GOPATH” warning into the “without a
container” section, where it belongs.
2017-06-05 15:13:25 +02:00
Miloslav Trmač
7db0aab330 Move documentation build instructions to the end, with a separate header
We want to start with the Go 1.5 dependency and build/checkout
instructions.

Also create a separate subsection, to match the future “Building
in/without a container” subsections
2017-06-05 14:42:19 +02:00
Miloslav Trmač
150eb5bf18 Add Fedora instructions for installing go-md2man
It would be nice to have macOS instructions as well.
2017-06-05 14:40:58 +02:00
Miloslav Trmač
bcc0de69d4 Merge pull request #353 from surajssd/add-fedora-dependency
docs(README): add build dependencies for fedora
2017-06-05 14:39:58 +02:00
Suraj Deshmukh
cab89b9b9c docs(README): add build dependencies for fedora
Two more packages are needed to build skopeo locally
on Fedora, viz. btrfs-progs-devel & device-mapper-devel,
so they were added to the README.

Signed-off-by: Suraj Deshmukh <surajssd009005@gmail.com>
2017-06-04 16:03:23 +05:30
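The full Fedora dependency line, as it ends up in the README diff below:

```sh
$ sudo dnf install gpgme-devel libassuan-devel btrfs-progs-devel device-mapper-devel ostree-devel
```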
Miloslav Trmač
5b95a21401 Merge pull request #350 from mhrivnak/patch-1
Fixes a typo in the Name field
2017-05-31 20:49:28 +02:00
Michael Hrivnak
78c83dbcff Fixes a typo in the Name field 2017-05-31 14:22:22 -04:00
Miloslav Trmač
98ced5196c Merge pull request #347 from mtrmac/docker-certs.d
Support /etc/docker/certs.d
2017-05-30 19:40:36 +02:00
Miloslav Trmač
63272a10d7 Vendor after merging mtrmac/image:docker-certs.d 2017-05-30 18:26:43 +02:00
Antonio Murdaca
07c798ff82 Merge pull request #346 from jingqiuELE/master
Always combine RUN apt-get update with apt-get install in the same RUN statement.
2017-05-25 14:54:31 +02:00
Jing Qiu
750d72873d Always combine RUN apt-get update with apt-get install in the same RUN
statement.

From the [docs](https://docs.docker.com/engine/userguide/eng-image/dockerfile_best-practices/#build-cache) in March 2017:

Always combine RUN apt-get update with apt-get install in the same  RUN statement, for example

RUN apt-get update && apt-get install -y package-bar

Using apt-get update alone in a RUN statement causes caching issues and subsequent apt-get install instructions fail.

Signed-off-by: Jing Qiu <aqiu0720@gmail.com>
2017-05-25 11:38:35 +08:00
Antonio Murdaca
81dddac7d6 Merge pull request #345 from runcom/image-spec-rc6
update image-spec to v1.0.0-rc6
2017-05-24 16:55:18 +02:00
Antonio Murdaca
405b912f7e update image-spec to v1.0.0-rc6
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2017-05-24 16:35:57 +02:00
Miloslav Trmač
e1884aa8a8 Merge pull request #344 from mtrmac/storage-rebase
Pull in rhatdan/image:master
2017-05-17 22:20:50 +02:00
Miloslav Trmač
ffb01385dd Vendor after merging https://github.com/containers/image/pull/275 2017-05-17 17:12:23 +02:00
Miloslav Trmač
c5adc4b580 Merge pull request #331 from mtrmac/storage-rebase
Re-vendor, primarily for https://github.com/containers/storage/pull/11
2017-05-12 18:50:40 +02:00
Miloslav Trmač
69b9106646 Re-vendor, primarily for https://github.com/containers/storage/pull/11
containers/storage got new dependencies, so we will need to re-vendor
eventually anyway, and having this separate from other major work is
cleaner.

But the primary goal of this commit is to see whether it makes skopeo
buildable on OS X.
2017-05-11 13:07:14 +02:00
Antonio Murdaca
565688a963 Merge pull request #341 from projectatomic/jzb-patch-1
Update README.md
2017-05-10 21:21:00 +02:00
Joe Brockmeier
5205f3646d Update README.md
Just clarifying that Skopeo is available in later versions of Fedora as well.
2017-05-10 13:53:36 -04:00
Miloslav Trmač
3c57a0f084 Merge pull request #340 from mtrmac/setup-test
Simplify infrastructure setup, and make registry timeouts less strict
2017-05-10 15:05:52 +02:00
Miloslav Trmač
ed2088a4e5 Increase the time we wait for a registry from 0.5 to 5 seconds
We are not testing registry start-up performance, and killing the test
suite just because Travis is a bit busy doesn’t help; we’re much better
off with a test run which gives the registry a bit more time.
2017-05-10 14:46:28 +02:00
Miloslav Trmač
8b36001c0e Do not build docker/registry and remove remaining helpers 2017-05-10 14:46:28 +02:00
Miloslav Trmač
cd300805d1 Remove registry instances which we don’t use at all 2017-05-10 14:46:12 +02:00
Miloslav Trmač
8d3d0404fe Clean up SigningSuite test initialization
Move "skip if signing is not available" into the test, there may be
tests which only need verification.

Move GNUPGHOME creation from SetUpTest to SetUpSuite, sharing a single
key is fine.  We don’t change the GNUPGHOME contents at test runtime.
2017-05-10 14:45:10 +02:00
Miloslav Trmač
9985f12cd4 Move skopeoBinary check from *Suite.SetUpTest to SetUpSuite
The results are not going to vary across individual tests, so let’s only
check once.
2017-05-10 14:44:20 +02:00
Antonio Murdaca
5a42657cdb Merge pull request #339 from runcom/new-release-v0.1.20
New release v0.1.20
2017-05-10 13:01:27 +02:00
Antonio Murdaca
43d6128036 bump back to v0.1.21-dev
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2017-05-10 12:37:37 +02:00
Antonio Murdaca
e802625b7c bump to v0.1.20
- support image-spec v1.0.0-rc5

Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2017-05-10 12:37:07 +02:00
Miloslav Trmač
105be6a0ab Merge pull request #337 from mtrmac/docker-push-to-tag
Update name/tag embedded into schema1 manifests
2017-05-10 12:33:18 +02:00
Miloslav Trmač
8ec2a142c9 Enable tests for schema2→schema1 conversion with docker/distribution registries
Now that we can update the embedded name:tag, the test no longer fails
on a schema1→schema1 copy with the old schema1 server which verifies the
name:tag value.
2017-05-10 12:13:39 +02:00
Miloslav Trmač
c4ec970bb2 Uncomment TestCopyCompression test cases
We have been able to push to Docker tags for a long time, and recently
implemented s2→s1 autodetection.
2017-05-10 12:13:38 +02:00
Miloslav Trmač
cf7e58a297 Test that names are updated as expected when pushing to schema1
Before the update, we have loosened the equality check to ignore the
name/tag; now that we are generating them correctly, test for the
expected values.
2017-05-10 12:13:38 +02:00
Miloslav Trmač
03233a5ca7 Vendor after merging mtrmac/image:docker-push-to-tag 2017-05-10 12:13:24 +02:00
Miloslav Trmač
9bc847e656 Use a schema2 server in TestCopySignatures
TestCopySignatures, among other things, tests handling of a correctly
signed image to a different name without breaking the signature, which
will be impossible with schema1 after we start updating the names
embedded in the schema1 manifest.  So, use the schema2 server binary,
and docker://busybox image versions which use schema2.
2017-05-10 12:01:09 +02:00
Miloslav Trmač
22965c443f Allow updated names when comparing schema1 images
The new version of containers/image will update the name and tag fields
when pushing to schema1; so accept that before we update, so that tests
keep working.

For now, just ignore the name/tag fields, so that both the current and
updated versions of containers/image are acceptable; we will tighten
that after the update.
2017-05-10 12:01:09 +02:00
Miloslav Trmač
6f23c88e84 Make schema1 dir: comparisons nondestructive
Use (diff -x manifest.json) instead of removing the manifest.json files.
Also rename the helper from destructiveCheckDirImageAreEqual to
assertDirImagesAreEqual.
2017-05-10 12:01:09 +02:00
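A sketch of the nondestructive comparison the commit describes (directory names are illustrative; `-x` excludes manifest.json from the comparison):

```sh
$ diff -x manifest.json dir-image-a dir-image-b
```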
Miloslav Trmač
4afafe9538 Log output of docker/distribution registries instead of sending it to /dev/null
This is useful for diagnosing failures which are logged locally but not
reported to the clients.
2017-05-10 12:01:09 +02:00
Antonio Murdaca
50dda3492c Merge pull request #313 from runcom/update-imgspec-rc5
oci: update to image-spec v1 RC5
2017-05-10 11:57:20 +02:00
Antonio Murdaca
f4a44f00b8 integration: disable check with image-tools for image-spec RC5
We need https://github.com/opencontainers/image-tools/pull/144 first

Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2017-05-10 11:35:19 +02:00
Antonio Murdaca
dd13a0d60b oci: update to image-spec v1 RC5
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2017-05-10 11:35:16 +02:00
Antonio Murdaca
80b751a225 Merge pull request #332 from mtrmac/schema12
Run-time manifest type support (e.g. schema1/schema2) autodetection re-vendor + integration tests
2017-05-10 11:04:28 +02:00
Miloslav Trmač
8556dd1aa1 Add tests for automatic schema conversion depending on registry
In addition to the default registry in the OpenShift cluster, start two
more (one known to support s1 only, one known to support s1+s2), and
also a docker/distribution s1-only registry.

Then test that copying images around works as expected.

NOTE: The docker/distribution s1-only tests currently fail and are
disabled.  See the added comment for details.
2017-05-09 15:08:11 +02:00
Miloslav Trmač
c43e4cffaf Collect "processes" to kill in openshiftCluster instead of named members
We don’t really need to differentiate between the master/registry, we
just want to terminate them, maybe in the right order.  So, collect them
in an array instead of using separate members.

This will make it easier to have more registry instances in the near
future.

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2017-05-09 15:08:11 +02:00
Miloslav Trmač
44ee5be6db Do not store a *check.C in openshiftCluster
The *check.C object can not be reused across tests, so storing it in
openshiftCluster is incorrect (and leads to weird behavior like
assertion failures being silently ignored).  So far this hasn't really
been an issue because we have been using the *check.C only in SetUpSuite
and TearDownSuite, and the changes to this have turned out to be
unnecessary after all, but this is still the right thing to do.

This is more or less
> s/c\./cluster\./g; s/cluster\.c/c/g
(paying more attention to the syntax) and corresponding modifications
to the method declarations.

Does not change behavior, apart from using the correct *check.C in
CopySuite.TearDownSuite.
2017-05-09 15:08:11 +02:00
Miloslav Trmač
cfd1cf6def Abort in fileFromFixture if a replacement template is not found
This makes the fixture editation more robust against typos or unexpected
changes (if the “fixture” comes from third parties, like the OpenShift
registry configuration file).
2017-05-09 15:08:11 +02:00
Miloslav Trmač
15ce6488dd Move fileFromFixture from copy_test.go to utils.go
… to make it possible to call it from openshift.go.

Does not change behavior.
2017-05-09 15:08:11 +02:00
Miloslav Trmač
4f199f86f7 Split a startRegistryProcess from openshiftCluster.startRegistry
The helper will be reused in the future.  For now, this does not change
behavior.
2017-05-09 15:08:11 +02:00
Miloslav Trmač
1f1f6801bb Split openshiftCluster.startRegistry into prepareRegistryConfig and startRegistry
This separates creation of the account and configuration, which can be
shared across service instances, from actually starting the registry; we
will soon start several of them.

Only splits a function, does not change behavior.
2017-05-09 15:08:11 +02:00
Miloslav Trmač
1ee776b09b Vendor after merging mtrmac/image:schema12 2017-05-09 15:07:49 +02:00
Antonio Murdaca
8bb9d5f0f2 Merge pull request #335 from runcom/fix-crio-openshift
Fix docker-daemon to containers-storage copy
2017-05-06 12:29:16 +02:00
Antonio Murdaca
88ea901938 vendor c/image to fix docker-daemon -> containers-storage copy
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2017-05-06 12:09:29 +02:00
Antonio Murdaca
fd41f20bb8 Merge pull request #314 from giuseppe/ostree
skopeo: support copy to OSTree storage
2017-04-24 22:58:51 +02:00
Giuseppe Scrivano
1d5c681f0f skopeo: support copy to OSTree storage
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
2017-04-24 20:23:45 +02:00
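A usage sketch based on the `ostree:image[@/absolute/repo/path]` syntax documented in the README diff below (the repo path defaults to /ostree/repo):

```sh
$ skopeo copy docker://busybox:latest ostree:busybox@/ostree/repo
```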
Antonio Murdaca
c467afa37c Merge pull request #328 from runcom/new-release
New release: v0.1.19
2017-04-14 12:45:56 +02:00
Antonio Murdaca
53a90e51d4 bump again to v0.1.20-dev
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2017-04-14 12:21:28 +02:00
1008 changed files with 203748 additions and 27518 deletions

.travis.yml

@@ -1,10 +1,24 @@
sudo: required
services:
- docker
matrix:
include:
- os: linux
sudo: required
services:
- docker
- os: osx
notifications:
email: false
install:
# NOTE: The (brew update) should not be necessary, and slows things down;
# we include it as a workaround for https://github.com/Homebrew/brew/issues/3299
# ideally Travis should bake the (brew update) into its images
# (https://github.com/travis-ci/travis-ci/issues/8552 ), but that's only going
# to happen around November 2017 per https://blog.travis-ci.com/2017-10-16-a-new-default-os-x-image-is-coming .
# Remove the (brew update) at that time.
- if [[ "$TRAVIS_OS_NAME" == "osx" ]]; then brew update && brew install gpgme ; fi
script:
- make check
- if [[ "$TRAVIS_OS_NAME" == "osx" ]]; then hack/travis_osx.sh ; fi
- if [[ "$TRAVIS_OS_NAME" == "linux" ]]; then make check ; fi

CONTRIBUTING.md

@@ -8,7 +8,9 @@ that we follow.
* [Reporting Issues](#reporting-issues)
* [Submitting Pull Requests](#submitting-pull-requests)
* [Communications](#communications)
<!--
* [Becoming a Maintainer](#becoming-a-maintainer)
-->
## Reporting Issues
@@ -37,9 +39,9 @@ It's ok to just open up a PR with the fix, but make sure you include the same
information you would have included in an issue - like how to reproduce it.
PRs for new features should include some background on what use cases the
new code is trying to address. And, when possible and it makes, try to break-up
new code is trying to address. When possible and when it makes sense, try to break-up
larger PRs into smaller ones - it's easier to review smaller
code changes. But, only if those smaller ones make sense as stand-alone PRs.
code changes. But only if those smaller ones make sense as stand-alone PRs.
Regardless of the type of PR, all PRs should include:
* well documented code changes
@@ -47,9 +49,9 @@ Regardless of the type of PR, all PRs should include:
* documentation changes
Squash your commits into logical pieces of work that might want to be reviewed
separate from the rest of the PRs. But, squashing down to just one commit is ok
too since in the end the entire PR will be reviewed anyway. When in doubt,
squash.
separate from the rest of the PRs. Ideally, each commit should implement a single
idea, and the PR branch should pass the tests at every commit. GitHub makes it easy
to review the cumulative effect of many commits; so, when in doubt, use smaller commits.
PRs that fix issues should include a reference like `Closes #XXXX` in the
commit message so that github will automatically close the referenced issue
@@ -115,7 +117,7 @@ commit automatically with `git commit -s`.
## Communications
For general questions, or dicsussions, please use the
For general questions, or discussions, please use the
IRC group on `irc.freenode.net` called `container-projects`
that has been setup.

Dockerfile

@@ -6,23 +6,19 @@ RUN dnf -y update && dnf install -y make git golang golang-github-cpuguy83-go-md
device-mapper-devel \
# gpgme bindings deps
libassuan-devel gpgme-devel \
ostree-devel \
gnupg \
# registry v1 deps
xz-devel \
python-devel \
python-pip \
swig \
redhat-rpm-config \
openssl-devel \
patch
# OpenShift deps
which tar wget hostname util-linux bsdtar socat ethtool device-mapper iptables tree findutils nmap-ncat e2fsprogs xfsprogs lsof docker iproute \
&& dnf clean all
# Install three versions of the registry. The first is an older version that
# Install two versions of the registry. The first is an older version that
# only supports schema1 manifests. The second is a newer version that supports
# both. This allows integration-cli tests to cover push/pull with both schema1
# and schema2 manifests. Install registry v1 also.
ENV REGISTRY_COMMIT_SCHEMA1 ec87e9b6971d831f0eff752ddb54fb64693e51cd
ENV REGISTRY_COMMIT 47a064d4195a9b56133891bbb13620c3ac83a827
# and schema2 manifests.
RUN set -x \
&& REGISTRY_COMMIT_SCHEMA1=ec87e9b6971d831f0eff752ddb54fb64693e51cd \
&& REGISTRY_COMMIT=47a064d4195a9b56133891bbb13620c3ac83a827 \
&& export GOPATH="$(mktemp -d)" \
&& git clone https://github.com/docker/distribution.git "$GOPATH/src/github.com/docker/distribution" \
&& (cd "$GOPATH/src/github.com/docker/distribution" && git checkout -q "$REGISTRY_COMMIT") \
@@ -31,25 +27,15 @@ RUN set -x \
&& (cd "$GOPATH/src/github.com/docker/distribution" && git checkout -q "$REGISTRY_COMMIT_SCHEMA1") \
&& GOPATH="$GOPATH/src/github.com/docker/distribution/Godeps/_workspace:$GOPATH" \
go build -o /usr/local/bin/registry-v2-schema1 github.com/docker/distribution/cmd/registry \
&& rm -rf "$GOPATH" \
&& export DRV1="$(mktemp -d)" \
&& git clone https://github.com/docker/docker-registry.git "$DRV1" \
# no need for setuptools since we have a version conflict with fedora
&& sed -i.bak s/setuptools==5.8//g "$DRV1/requirements/main.txt" \
&& sed -i.bak s/setuptools==5.8//g "$DRV1/depends/docker-registry-core/requirements/main.txt" \
&& pip install "$DRV1/depends/docker-registry-core" \
&& pip install file://"$DRV1#egg=docker-registry[bugsnag,newrelic,cors]" \
&& patch $(python -c 'import boto; import os; print os.path.dirname(boto.__file__)')/connection.py \
< "$DRV1/contrib/boto_header_patch.diff" \
&& dnf -y update && dnf install -y m2crypto
&& rm -rf "$GOPATH"
RUN set -x \
&& yum install -y which git tar wget hostname util-linux bsdtar socat ethtool device-mapper iptables tree findutils nmap-ncat e2fsprogs xfsprogs lsof docker iproute \
&& export GOPATH=$(mktemp -d) \
&& git clone -b v1.5.0-alpha.3 git://github.com/openshift/origin "$GOPATH/src/github.com/openshift/origin" \
&& git clone --depth 1 -b v1.5.0-alpha.3 git://github.com/openshift/origin "$GOPATH/src/github.com/openshift/origin" \
&& (cd "$GOPATH/src/github.com/openshift/origin" && make clean build && make all WHAT=cmd/dockerregistry) \
&& cp -a "$GOPATH/src/github.com/openshift/origin/_output/local/bin/linux"/*/* /usr/local/bin \
&& cp "$GOPATH/src/github.com/openshift/origin/images/dockerregistry/config.yml" /atomic-registry-config.yml \
&& rm -rf "$GOPATH" \
&& mkdir /registry
ENV GOPATH /usr/share/gocode:/go

Dockerfile.build

@@ -1,5 +1,14 @@
FROM ubuntu:16.04
RUN apt-get update
RUN apt-get install -y golang btrfs-tools git-core libdevmapper-dev libgpgme11-dev go-md2man
FROM ubuntu:17.04
RUN apt-get update && apt-get install -y \
golang \
btrfs-tools \
git-core \
libdevmapper-dev \
libgpgme11-dev \
go-md2man \
libglib2.0-dev \
libostree-dev
ENV GOPATH=/
WORKDIR /src/github.com/projectatomic/skopeo

Makefile

@@ -2,7 +2,20 @@
export GO15VENDOREXPERIMENT=1
ifeq ($(shell uname),Darwin)
PREFIX ?= ${DESTDIR}/usr/local
DARWIN_BUILD_TAG=containers_image_ostree_stub
# On macOS, (brew install gpgme) installs it within /usr/local, but /usr/local/include is not in the default search path.
# Rather than hard-code this directory, use gpgme-config. Sadly that must be done at the top-level user
# instead of locally in the gpgme subpackage, because cgo supports only pkg-config, not general shell scripts,
# and gpgme does not install a pkg-config file.
# If gpgme is not installed or gpgme-config can't be found for other reasons, the error is silently ignored
# (and the user will probably find out because the cgo compilation will fail).
GPGME_ENV := CGO_CFLAGS="$(shell gpgme-config --cflags 2>/dev/null)" CGO_LDFLAGS="$(shell gpgme-config --libs 2>/dev/null)"
else
PREFIX ?= ${DESTDIR}/usr
endif
INSTALLDIR=${PREFIX}/bin
MANINSTALLDIR=${PREFIX}/share/man
CONTAINERSSYSCONFIGDIR=${DESTDIR}/etc/containers
@@ -10,11 +23,16 @@ REGISTRIESDDIR=${CONTAINERSSYSCONFIGDIR}/registries.d
SIGSTOREDIR=${DESTDIR}/var/lib/atomic/sigstore
BASHINSTALLDIR=${PREFIX}/share/bash-completion/completions
GO_MD2MAN ?= go-md2man
GO ?= go
ifeq ($(DEBUG), 1)
override GOGCFLAGS += -N -l
endif
ifeq ($(shell go env GOOS), linux)
GO_DYN_FLAGS="-buildmode=pie"
endif
GIT_BRANCH := $(shell git rev-parse --abbrev-ref HEAD 2>/dev/null)
DOCKER_IMAGE := skopeo-dev$(if $(GIT_BRANCH),:$(GIT_BRANCH))
# set env like gobuildtag?
@@ -34,7 +52,7 @@ MANPAGES_MD = $(wildcard docs/*.md)
BTRFS_BUILD_TAG = $(shell hack/btrfs_tag.sh)
LIBDM_BUILD_TAG = $(shell hack/libdm_tag.sh)
LOCAL_BUILD_TAGS = $(BTRFS_BUILD_TAG) $(LIBDM_BUILD_TAG)
LOCAL_BUILD_TAGS = $(BTRFS_BUILD_TAG) $(LIBDM_BUILD_TAG) $(DARWIN_BUILD_TAG)
BUILDTAGS += $(LOCAL_BUILD_TAGS)
# make all DEBUG=1
@@ -57,10 +75,10 @@ binary-static: cmd/skopeo
# Build w/o using Docker containers
binary-local:
go build -ldflags "-X main.gitCommit=${GIT_COMMIT}" -gcflags "$(GOGCFLAGS)" -tags "$(BUILDTAGS)" -o skopeo ./cmd/skopeo
$(GPGME_ENV) $(GO) build ${GO_DYN_FLAGS} -ldflags "-X main.gitCommit=${GIT_COMMIT}" -gcflags "$(GOGCFLAGS)" -tags "$(BUILDTAGS)" -o skopeo ./cmd/skopeo
binary-local-static:
go build -ldflags "-extldflags \"-static\" -X main.gitCommit=${GIT_COMMIT}" -gcflags "$(GOGCFLAGS)" -tags "$(BUILDTAGS)" -o skopeo ./cmd/skopeo
$(GPGME_ENV) $(GO) build -ldflags "-extldflags \"-static\" -X main.gitCommit=${GIT_COMMIT}" -gcflags "$(GOGCFLAGS)" -tags "$(BUILDTAGS)" -o skopeo ./cmd/skopeo
build-container:
docker build ${DOCKER_BUILD_ARGS} -t "$(DOCKER_IMAGE)" .
@@ -76,17 +94,22 @@ clean:
install: install-binary install-docs install-completions
install -d -m 755 ${SIGSTOREDIR}
install -D -m 644 default-policy.json ${CONTAINERSSYSCONFIGDIR}/policy.json
install -D -m 644 default.yaml ${REGISTRIESDDIR}/default.yaml
install -d -m 755 ${CONTAINERSSYSCONFIGDIR}
install -m 644 default-policy.json ${CONTAINERSSYSCONFIGDIR}/policy.json
install -d -m 755 ${REGISTRIESDDIR}
install -m 644 default.yaml ${REGISTRIESDDIR}/default.yaml
install-binary: ./skopeo
install -D -m 755 skopeo ${INSTALLDIR}/skopeo
install -d -m 755 ${INSTALLDIR}
install -m 755 skopeo ${INSTALLDIR}/skopeo
install-docs: docs/skopeo.1
install -D -m 644 docs/skopeo.1 ${MANINSTALLDIR}/man1/skopeo.1
install -d -m 755 ${MANINSTALLDIR}/man1
install -m 644 docs/skopeo.1 ${MANINSTALLDIR}/man1/skopeo.1
install-completions:
install -m 644 -D completions/bash/skopeo ${BASHINSTALLDIR}/skopeo
install -m 755 -d ${BASHINSTALLDIR}
install -m 644 completions/bash/skopeo ${BASHINSTALLDIR}/skopeo
shell: build-container
$(DOCKER_RUN_DOCKER) bash
@@ -111,4 +134,4 @@ validate-local:
hack/make.sh validate-git-marks validate-gofmt validate-lint validate-vet
test-unit-local:
go test -tags "$(BUILDTAGS)" $$(go list -tags "$(BUILDTAGS)" -e ./... | grep -v '^github\.com/projectatomic/skopeo/\(integration\|vendor/.*\)$$')
$(GPGME_ENV) $(GO) test -tags "$(BUILDTAGS)" $$($(GO) list -tags "$(BUILDTAGS)" -e ./... | grep -v '^github\.com/projectatomic/skopeo/\(integration\|vendor/.*\)$$')

README.md

@@ -1,16 +1,48 @@
skopeo [![Build Status](https://travis-ci.org/projectatomic/skopeo.svg?branch=master)](https://travis-ci.org/projectatomic/skopeo)
=
_Please be aware `skopeo` is still work in progress and it currently supports only registry API V2_
<img src="https://cdn.rawgit.com/projectatomic/skopeo/master/docs/skopeo.svg" width="250">
`skopeo` is a command line utility for various operations on container images and image repositories.
----
`skopeo` is a command line utility that performs various operations on container images and image repositories. Skopeo works with API V2 registries such as Docker registries, the Atomic registry, private registries, local directories and local OCI-layout directories. Skopeo does not require a daemon to be running to perform these operations which consist of:
* Inspecting an image showing its properties including its layers.
* Copying an image from and to various storage mechanisms.
* Deleting an image from an image repository.
* When required by the repository, skopeo can pass the appropriate credentials and certificates for authentication.
Skopeo operates on the following image and repository types:
* containers-storage:docker-reference
An image located in a local containers/storage image store. Location and image store specified in /etc/containers/storage.conf
* dir:path
An existing local directory path storing the manifest, layer tarballs and signatures as individual files. This is a non-standardized format, primarily useful for debugging or noninvasive container inspection.
* docker://docker-reference
An image in a registry implementing the "Docker Registry HTTP API V2". By default, uses the authorization state in $HOME/.docker/config.json, which is set e.g. using (docker login).
* docker-archive:path[:docker-reference]
An image is stored in the `docker save` formatted file. docker-reference is only used when creating such a file, and it must not contain a digest.
* docker-daemon:docker-reference
An image docker-reference stored in the docker daemon internal storage. docker-reference must contain either a tag or a digest. Alternatively, when reading images, the format can also be docker-daemon:algo:digest (an image ID).
* oci:path:tag
An image tag in a directory compliant with "Open Container Image Layout Specification" at path.
* ostree:image[@/absolute/repo/path]
An image in local OSTree repository. /absolute/repo/path defaults to /ostree/repo.
Inspecting a repository
-
`skopeo` is able to _inspect_ a repository on a Docker registry and fetch images layers.
By _inspect_ I mean it fetches the repository's manifest and it is able to show you a `docker inspect`-like
The _inspect_ command fetches the repository's manifest and it is able to show you a `docker inspect`-like
json output about a whole repository or a tag. This tool, in contrast to `docker inspect`, helps you gather useful information about
a repository or a tag before pulling it (using disk space) - e.g. - which tags are available for the given repository? which labels the image has?
a repository or a tag before pulling it (using disk space). The inspect command can show you which tags are available for the given
repository, the labels the image has, the creation date and operating system of the image and more.
Examples:
```sh
@@ -54,7 +86,7 @@ local directories, and local OCI-layout directories:
```sh
$ skopeo copy docker://busybox:1-glibc atomic:myns/unsigned:streaming
$ skopeo copy docker://busybox:latest dir:existingemptydirectory
$ skopeo copy docker://busybox:latest oci:busybox_ocilayout
$ skopeo copy docker://busybox:latest oci:busybox_ocilayout:latest
```
Deleting images
@@ -103,43 +135,64 @@ you'll get an error. You can fix this by either logging in (via `docker login`)
Building
-
To build the manual you will need go-md2man.
To build the `skopeo` binary you need at least Go 1.5 because it uses the latest `GO15VENDOREXPERIMENT` flag.
There are two ways to build skopeo: in a container, or locally without a container. Choose the one which better matches your needs and environment.
### Building without a container
Building without a container requires a bit more manual work and setup in your environment, but it is more flexible:
- It should work in more environments (e.g. for native macOS builds)
- It does not require root privileges (after dependencies are installed)
- It is faster, therefore more convenient for developing `skopeo`.
Install the necessary dependencies:
```sh
$ sudo apt-get install go-md2man
Fedora$ sudo dnf install gpgme-devel libassuan-devel btrfs-progs-devel device-mapper-devel ostree-devel
macOS$ brew install gpgme
```
To build the `skopeo` binary you need at least Go 1.5 because it uses the latest `GO15VENDOREXPERIMENT` flag. Also, make sure to clone the repository in your `GOPATH` - otherwise compilation fails.
Make sure to clone this repository in your `GOPATH` - otherwise compilation fails.
```sh
$ git clone https://github.com/projectatomic/skopeo $GOPATH/src/github.com/projectatomic/skopeo
$ cd $GOPATH/src/github.com/projectatomic/skopeo && make all
$ cd $GOPATH/src/github.com/projectatomic/skopeo && make binary-local
```
To build localy on OSX:
### Building in a container
Building in a container is simpler, but more restrictive:
- It requires the `docker` command and the ability to run Linux containers
- The created executable is a Linux executable, and depends on dynamic libraries which may be available only in a container of a similar Linux distribution.
```sh
$ brew install gpgme
$ make binary-local
$ make binary # Or (make all) to also build documentation, see below.
```
You may need to install additional development packages: `gpgme-devel` and `libassuan-devel`
### Building documentation
To build the manual you will need go-md2man.
```sh
$ dnf install gpgme-devel libassuan-devel
Debian$ sudo apt-get install go-md2man
Fedora$ sudo dnf install go-md2man
```
Then
```sh
$ make docs
```
Installing
-
If you built from source:
```sh
$ sudo make install
```
`skopeo` is also available from Fedora 23:
`skopeo` is also available from Fedora 23 (and later):
```sh
sudo dnf install skopeo
$ sudo dnf install skopeo
```
TODO
-
- list all images on registry?
- registry v2 search?
- support output to docker load tar(s)
- show repo tags via flag or when reference isn't tagged or digested
- add tests (integration with deployed registries in container - Docker-like)
- support rkt/appc image spec
NOT TODO
@@ -163,10 +216,20 @@ In order to update an existing dependency:
- update the relevant dependency line in `vendor.conf`
- run `vndr github.com/pkg/errors`
In order to test out new PRs from [containers/image](https://github.com/containers/image) to not break `skopeo`:
When new PRs for [containers/image](https://github.com/containers/image) break `skopeo` (i.e. `containers/image` tests fail in `make test-skopeo`):
- create a new branch in your `skopeo` checkout and switch to it
- update `vendor.conf`. Find out the `containers/image` dependency; update it to vendor from your own branch and your own repository fork (e.g. `github.com/containers/image my-branch https://github.com/runcom/image`)
- run `vndr github.com/containers/image`
- make any other necessary changes in the skopeo repo (e.g. add other dependencies now required by `containers/image`, or update skopeo for changed `containers/image` API)
- optionally add new integration tests to the skopeo repo
- submit the resulting branch as a skopeo PR, marked “DO NOT MERGE”
- iterate until tests pass and the PR is reviewed
- then the original `containers/image` PR can be merged, disregarding its `make test-skopeo` failure
- as soon as possible after that, in the skopeo PR, restore the `containers/image` line in `vendor.conf` to use `containers/image:master`
- run `vndr github.com/containers/image`
- update the skopeo PR with the result, drop the “DO NOT MERGE” marking
- after tests complete successfully again, merge the skopeo PR
License
-


@@ -1,3 +1,5 @@
// +build !containers_image_openpgp
package main
/*

cmd/skopeo/copy.go

@@ -4,10 +4,14 @@ import (
"errors"
"fmt"
"os"
"strings"
"github.com/containers/image/copy"
"github.com/containers/image/manifest"
"github.com/containers/image/transports"
"github.com/containers/image/transports/alltransports"
"github.com/containers/image/types"
imgspecv1 "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/urfave/cli"
)
@@ -53,18 +57,42 @@ func copyHandler(context *cli.Context) error {
return err
}
var manifestType string
if context.IsSet("format") {
switch context.String("format") {
case "oci":
manifestType = imgspecv1.MediaTypeImageManifest
case "v2s1":
manifestType = manifest.DockerV2Schema1SignedMediaType
case "v2s2":
manifestType = manifest.DockerV2Schema2MediaType
default:
return fmt.Errorf("unknown format %q. Choose on of the supported formats: 'oci', 'v2s1', or 'v2s2'", context.String("format"))
}
}
return copy.Image(policyContext, destRef, srcRef, &copy.Options{
RemoveSignatures: removeSignatures,
SignBy: signBy,
ReportWriter: os.Stdout,
SourceCtx: sourceCtx,
DestinationCtx: destinationCtx,
RemoveSignatures: removeSignatures,
SignBy: signBy,
ReportWriter: os.Stdout,
SourceCtx: sourceCtx,
DestinationCtx: destinationCtx,
ForceManifestMIMEType: manifestType,
})
}
var copyCmd = cli.Command{
Name: "copy",
Usage: "Copy an image from one location to another",
Name: "copy",
Usage: "Copy an IMAGE-NAME from one location to another",
Description: fmt.Sprintf(`
Container "IMAGE-NAME" uses a "transport":"details" format.
Supported transports:
%s
See skopeo(1) section "IMAGE NAMES" for the expected format
`, strings.Join(transports.ListNames(), ", ")),
ArgsUsage: "SOURCE-IMAGE DESTINATION-IMAGE",
Action: copyHandler,
// FIXME: Do we need to namespace the GPG aspect?
@@ -94,7 +122,7 @@ var copyCmd = cli.Command{
},
cli.BoolTFlag{
Name: "src-tls-verify",
Usage: "require HTTPS and verify certificates when talking to the docker source registry (defaults to true)",
Usage: "require HTTPS and verify certificates when talking to the container source registry (defaults to true)",
},
cli.StringFlag{
Name: "dest-cert-dir",
@@ -103,7 +131,30 @@ var copyCmd = cli.Command{
},
cli.BoolTFlag{
Name: "dest-tls-verify",
Usage: "require HTTPS and verify certificates when talking to the docker destination registry (defaults to true)",
Usage: "require HTTPS and verify certificates when talking to the container destination registry (defaults to true)",
},
cli.StringFlag{
Name: "dest-ostree-tmp-dir",
Value: "",
Usage: "`DIRECTORY` to use for OSTree temporary files",
},
cli.StringFlag{
Name: "src-shared-blob-dir",
Value: "",
Usage: "`DIRECTORY` to use to fetch retrieved blobs (OCI layout sources only)",
},
cli.StringFlag{
Name: "dest-shared-blob-dir",
Value: "",
Usage: "`DIRECTORY` to use to store retrieved blobs (OCI layout destinations only)",
},
cli.StringFlag{
Name: "format, f",
Usage: "`MANIFEST TYPE` (oci, v2s1, or v2s2) to use when saving image to directory using the 'dir:' transport (default is manifest type of source)",
},
cli.BoolFlag{
Name: "dest-compress",
Usage: "Compress tarball image layers when saving to directory using the 'dir' transport. (default is same compression type as source)",
},
},
}

cmd/skopeo/delete.go

@@ -3,7 +3,9 @@ package main
import (
"errors"
"fmt"
"strings"
"github.com/containers/image/transports"
"github.com/containers/image/transports/alltransports"
"github.com/urfave/cli"
)
@@ -22,15 +24,20 @@ func deleteHandler(context *cli.Context) error {
if err != nil {
return err
}
if err := ref.DeleteImage(ctx); err != nil {
return err
}
return nil
return ref.DeleteImage(ctx)
}
var deleteCmd = cli.Command{
Name: "delete",
Usage: "Delete image IMAGE-NAME",
Name: "delete",
Usage: "Delete image IMAGE-NAME",
Description: fmt.Sprintf(`
Delete an "IMAGE_NAME" from a transport
Supported transports:
%s
See skopeo(1) section "IMAGE NAMES" for the expected format
`, strings.Join(transports.ListNames(), ", ")),
ArgsUsage: "IMAGE-NAME",
Action: deleteHandler,
Flags: []cli.Flag{
@@ -46,7 +53,7 @@ var deleteCmd = cli.Command{
},
cli.BoolTFlag{
Name: "tls-verify",
Usage: "require HTTPS and verify certificates when talking to docker registries (defaults to true)",
Usage: "require HTTPS and verify certificates when talking to container registries (defaults to true)",
},
},
}
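A usage sketch for the simplified handler, assuming a plain-HTTP registry on localhost:5000:

```sh
skopeo delete --tls-verify=false docker://localhost:5000/myns/unsigned:unsigned
```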

View File

@@ -6,11 +6,12 @@ import (
"strings"
"time"
"github.com/Sirupsen/logrus"
"github.com/containers/image/docker"
"github.com/containers/image/manifest"
"github.com/containers/image/transports"
"github.com/opencontainers/go-digest"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
"github.com/urfave/cli"
)
@@ -29,8 +30,16 @@ type inspectOutput struct {
}
var inspectCmd = cli.Command{
Name: "inspect",
Usage: "Inspect image IMAGE-NAME",
Name: "inspect",
Usage: "Inspect image IMAGE-NAME",
Description: fmt.Sprintf(`
Return low-level information about "IMAGE-NAME" in a registry/transport
Supported transports:
%s
See skopeo(1) section "IMAGE NAMES" for the expected format
`, strings.Join(transports.ListNames(), ", ")),
ArgsUsage: "IMAGE-NAME",
Flags: []cli.Flag{
cli.StringFlag{
@@ -40,7 +49,7 @@ var inspectCmd = cli.Command{
},
cli.BoolTFlag{
Name: "tls-verify",
Usage: "require HTTPS and verify certificates when talking to docker registries (defaults to true)",
Usage: "require HTTPS and verify certificates when talking to container registries (defaults to true)",
},
cli.BoolFlag{
Name: "raw",

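A usage sketch of the command as wired above; the image name is illustrative:

```sh
# Human-oriented JSON summary of the image.
skopeo inspect docker://busybox:latest
# Raw manifest bytes, handy for checking the manifest media type.
skopeo inspect --raw docker://busybox:latest
```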
View File

@@ -8,7 +8,6 @@ import (
"github.com/containers/image/directory"
"github.com/containers/image/image"
"github.com/containers/image/manifest"
"github.com/containers/image/types"
"github.com/opencontainers/go-digest"
"github.com/pkg/errors"
@@ -25,12 +24,7 @@ var layersCmd = cli.Command{
if c.NArg() == 0 {
return errors.New("Usage: layers imageReference [layer...]")
}
rawSource, err := parseImageSource(c, c.Args()[0], []string{
// TODO: skopeo layers only supports these now
// eventually we'll remove this command altogether...
manifest.DockerV2Schema1SignedMediaType,
manifest.DockerV2Schema1MediaType,
})
rawSource, err := parseImageSource(c, c.Args()[0])
if err != nil {
return err
}
@@ -115,10 +109,6 @@ var layersCmd = cli.Command{
return err
}
if err := dest.Commit(); err != nil {
return err
}
return nil
return dest.Commit()
},
}

View File

@@ -4,10 +4,10 @@ import (
"fmt"
"os"
"github.com/Sirupsen/logrus"
"github.com/containers/image/signature"
"github.com/containers/storage/pkg/reexec"
"github.com/projectatomic/skopeo/version"
"github.com/sirupsen/logrus"
"github.com/urfave/cli"
)
@@ -33,7 +33,7 @@ func createApp() *cli.App {
},
cli.BoolTFlag{
Name: "tls-verify",
Usage: "require HTTPS and verify certificates when talking to docker registries (defaults to true)",
Usage: "require HTTPS and verify certificates when talking to container registries (defaults to true)",
Hidden: true,
},
cli.StringFlag{
@@ -48,7 +48,7 @@ func createApp() *cli.App {
cli.StringFlag{
Name: "registries.d",
Value: "",
Usage: "use registry configuration files in `DIR` (e.g. for docker signature storage)",
Usage: "use registry configuration files in `DIR` (e.g. for container signature storage)",
},
}
app.Before = func(c *cli.Context) error {

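A sketch of the global flag in use; the configuration directory is an assumption, not mandated by this change. Note that global flags precede the subcommand, as in the integration tests elsewhere in this diff:

```sh
# Point skopeo at an alternate signature-storage configuration directory.
skopeo --registries.d /etc/containers/registries.d inspect docker://busybox:latest
```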
View File

@@ -16,6 +16,9 @@ func contextFromGlobalOptions(c *cli.Context, flagPrefix string) (*types.SystemC
// DEPRECATED: keep this here for backward compatibility, but override
// them if per subcommand flags are provided (see below).
DockerInsecureSkipTLSVerify: !c.GlobalBoolT("tls-verify"),
OSTreeTmpDirPath: c.String(flagPrefix + "ostree-tmp-dir"),
OCISharedBlobDirPath: c.String(flagPrefix + "shared-blob-dir"),
DirForceCompress: c.Bool(flagPrefix + "compress"),
}
if c.IsSet(flagPrefix + "tls-verify") {
ctx.DockerInsecureSkipTLSVerify = !c.BoolT(flagPrefix + "tls-verify")
@@ -71,9 +74,8 @@ func parseImage(c *cli.Context) (types.Image, error) {
}
// parseImageSource converts image URL-like string to an ImageSource.
// requestedManifestMIMETypes is as in types.ImageReference.NewImageSource.
// The caller must call .Close() on the returned ImageSource.
func parseImageSource(c *cli.Context, name string, requestedManifestMIMETypes []string) (types.ImageSource, error) {
func parseImageSource(c *cli.Context, name string) (types.ImageSource, error) {
ref, err := alltransports.ParseImageName(name)
if err != nil {
return nil, err
@@ -82,5 +84,5 @@ func parseImageSource(c *cli.Context, name string, requestedManifestMIMETypes []
if err != nil {
return nil, err
}
return ref.NewImageSource(ctx, requestedManifestMIMETypes)
return ref.NewImageSource(ctx)
}
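In effect, the per-subcommand prefixed flags override the deprecated global --tls-verify. A sketch with hypothetical registry addresses:

```sh
skopeo copy --src-tls-verify=false --dest-tls-verify=true \
    docker://localhost:5000/myns/unsigned:unsigned \
    docker://registry.example.com/myns/unsigned:copy
```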

View File

@@ -20,19 +20,24 @@ _complete_() {
}
_skopeo_copy() {
local options_with_args="
--sign-by
--src-creds --screds
--src-cert-dir
--src-tls-verify
--dest-creds --dcreds
--dest-cert-dir
--dest-tls-verify
"
local boolean_options="
--remove-signatures
"
_complete_ "$options_with_args" "$boolean_options"
local options_with_args="
--format -f
--sign-by
--src-creds --screds
--src-cert-dir
--src-tls-verify
--dest-creds --dcreds
--dest-cert-dir
--dest-ostree-tmp-dir
--dest-tls-verify
"
local boolean_options="
--dest-compress
--remove-signatures
"
_complete_ "$options_with_args" "$boolean_options"
}
_skopeo_inspect() {

View File

@@ -2,36 +2,36 @@
% Jhon Honce
% August 2016
# NAME
skopeo -- Various operations with container images images and container image registries
skopeo -- Various operations with container images and container image registries
# SYNOPSIS
**skopeo** [_global options_] _command_ [_command options_]
# DESCRIPTION
`skopeo` is a command line utility providing various operations with container images and container image registries. For example, it is able to inspect a repository on a Docker registry and fetch images. It fetches the repository's manifest and it is able to show you a `docker inspect`-like json output about a whole repository or a tag. This tool, in contrast to `docker inspect`, helps you gather useful information about a repository or a tag without requiring you to run `docker pull` - e.g. - which tags are available for the given repository? which labels does the image have?
`skopeo` is a command line utility providing various operations with container images and container image registries. For example, it is able to inspect a repository on a container registry and fetch images. It fetches the repository's manifest and it is able to show you a `docker inspect`-like json output about a whole repository or a tag. This tool, in contrast to `docker inspect`, helps you gather useful information about a repository or a tag without requiring you to run `docker pull` - e.g. - which tags are available for the given repository? which labels does the image have?
It also allows you to copy container images between various registries, possibly converting them as necessary, and to sign and verify images.
## IMAGE NAMES
Most commands refer to container images, using a _transport_`:`_details_ format. The following formats are supported:
**atomic:**_namespace_**/**_stream_**:**_tag_
An image in the current project of the current default Atomic
Registry. The current project and Atomic Registry instance are by
default read from `$HOME/.kube/config`, which is set e.g. using
`(oc login)`.
**containers-storage:**_docker-reference_
An image located in a local containers/storage image store. Location and image store specified in /etc/containers/storage.conf
**dir:**_path_
An existing local directory _path_ storing the manifest, layer
tarballs and signatures as individual files. This is a
non-standardized format, primarily useful for debugging or
noninvasive container inspection.
An existing local directory _path_ storing the manifest, layer tarballs and signatures as individual files. This is a non-standardized format, primarily useful for debugging or noninvasive container inspection.
**docker://**_docker-reference_
An image in a registry implementing the "Docker Registry HTTP API V2".
By default, uses the authorization state in `$HOME/.docker/config.json`,
which is set e.g. using `(docker login)`.
An image in a registry implementing the "Docker Registry HTTP API V2". By default, uses the authorization state in `$HOME/.docker/config.json`, which is set e.g. using `(docker login)`.
**docker-archive:**_path_[**:**_docker-reference_]
An image is stored in the `docker save` formatted file. _docker-reference_ is only used when creating such a file, and it must not contain a digest.
**docker-daemon:**_docker-reference_
An image _docker-reference_ stored in the docker daemon internal storage. _docker-reference_ must contain either a tag or a digest. Alternatively, when reading images, the format can also be docker-daemon:algo:digest (an image ID).
**oci:**_path_**:**_tag_
An image _tag_ in a directory compliant with "Open Container Image
Layout Specification" at _path_.
An image _tag_ in a directory compliant with "Open Container Image Layout Specification" at _path_.
**ostree:**_image_[**@**_/absolute/repo/path_]
An image in a local OSTree repository. _/absolute/repo/path_ defaults to _/ostree/repo_.
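To make the transport syntax concrete, a few illustrative copies (paths and tags are hypothetical); note that oci: requires a tag:

```sh
skopeo copy docker://busybox:latest dir:/tmp/busybox-dir
skopeo copy docker://busybox:latest oci:/tmp/busybox-layout:latest
skopeo copy docker://busybox:latest docker-archive:/tmp/busybox.tar:busybox:latest
```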
# OPTIONS
@@ -41,7 +41,7 @@ Most commands refer to container images, using a _transport_`:`_details_ format.
**--insecure-policy** Adopt an insecure, permissive policy that allows anything. This obviates the need for a policy file.
**--registries.d** _dir_ use registry configuration files in _dir_ (e.g. for docker signature storage), overriding the default path.
**--registries.d** _dir_ use registry configuration files in _dir_ (e.g. for container signature storage), overriding the default path.
**--help**|**-h** Show help
@@ -60,28 +60,34 @@ Uses the system's trust policy to validate images, rejects images not trusted by
_destination-image_ use the "image name" format described above
**--format, -f** _manifest-type_ Manifest type (oci, v2s1, or v2s2) to use when saving image to directory using the 'dir:' transport (default is manifest type of source)
**--remove-signatures** do not copy signatures, if any, from _source-image_. Necessary when copying a signed image to a destination which does not support signatures.
**--sign-by=**_key-id_ add a signature using that key ID for an image name corresponding to _destination-image_
**--src-creds** _username[:password]_ for accessing the source registry
**--dest-compress** _bool-value_ Compress tarball image layers when saving to directory using the 'dir' transport. (default is same compression type as source)
**--dest-creds** _username[:password]_ for accessing the destination registry
**--src-cert-dir** _path_ Use certificates at _path_ (*.crt, *.cert, *.key) to connect to the source registry
**--src-tls-verify** _bool-value_ Require HTTPS and verify certificates when talking to docker source registry (defaults to true)
**--src-tls-verify** _bool-value_ Require HTTPS and verify certificates when talking to container source registry (defaults to true)
**--dest-cert-dir** _path_ Use certificates at _path_ (*.crt, *.cert, *.key) to connect to the destination registry
**--dest-tls-verify** _bool-value_ Require HTTPS and verify certificates when talking to docker destination registry (defaults to true)
**--dest-ostree-tmp-dir** _path_ Directory to use for OSTree temporary files.
**--dest-tls-verify** _bool-value_ Require HTTPS and verify certificates when talking to container destination registry (defaults to true)
Existing signatures, if any, are preserved as well.
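A signing sketch, assuming a hypothetical key ID and destination registry:

```sh
skopeo copy --sign-by dev@example.com \
    docker://busybox:latest docker://registry.example.com/busybox:signed
```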
## skopeo delete
**skopeo delete** _image-name_
Mark _image-name_ for deletion. To release the allocated disk space, you need to execute the docker registry garbage collector. E.g.,
Mark _image-name_ for deletion. To release the allocated disk space, you need to execute the container registry garbage collector. E.g.,
```sh
$ docker exec -it registry bin/registry garbage-collect /etc/docker/registry/config.yml
@@ -91,7 +97,7 @@ $ docker exec -it registry bin/registry garbage-collect /etc/docker/registry/con
**--cert-dir** _path_ Use certificates at _path_ (*.crt, *.cert, *.key) to connect to the registry
**--tls-verify** _bool-value_ Require HTTPS and verify certificates when talking to docker registries (defaults to true)
**--tls-verify** _bool-value_ Require HTTPS and verify certificates when talking to container registries (defaults to true)
Additionally, the registry must allow deletions by setting `REGISTRY_STORAGE_DELETE_ENABLED=true` for the registry daemon.
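For illustration, a docker/distribution registry started with deletions enabled might look like this (container name and port are assumptions):

```sh
docker run -d -p 5000:5000 --name registry \
    -e REGISTRY_STORAGE_DELETE_ENABLED=true registry:2
```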
@@ -108,7 +114,7 @@ Return low-level information about _image-name_ in a registry
**--cert-dir** _path_ Use certificates at _path_ (*.crt, *.cert, *.key) to connect to the registry
**--tls-verify** _bool-value_ Require HTTPS and verify certificates when talking to docker registries (defaults to true)
**--tls-verify** _bool-value_ Require HTTPS and verify certificates when talking to container registries (defaults to true)
## skopeo manifest-digest
**skopeo manifest-digest** _manifest-file_

docs/skopeo.svg Normal file
View File

@@ -0,0 +1,546 @@
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<!-- Created with Inkscape (http://www.inkscape.org/) -->
<svg
xmlns:dc="http://purl.org/dc/elements/1.1/"
xmlns:cc="http://creativecommons.org/ns#"
xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
xmlns:svg="http://www.w3.org/2000/svg"
xmlns="http://www.w3.org/2000/svg"
xmlns:xlink="http://www.w3.org/1999/xlink"
xmlns:sodipodi="http://sodipodi.sourceforge.net/DTD/sodipodi-0.dtd"
xmlns:inkscape="http://www.inkscape.org/namespaces/inkscape"
width="480.61456"
height="472.66098"
viewBox="0 0 127.1626 125.05822"
version="1.1"
id="svg8"
inkscape:version="0.92.2 5c3e80d, 2017-08-06"
sodipodi:docname="skopeo.svg"
inkscape:export-filename="/home/duffy/Documents/Projects/Favors/skopeo-logo/skopeo.color.png"
inkscape:export-xdpi="90"
inkscape:export-ydpi="90">
<defs
id="defs2">
<linearGradient
inkscape:collect="always"
id="linearGradient84477">
<stop
style="stop-color:#0093d9;stop-opacity:1"
offset="0"
id="stop84473" />
<stop
style="stop-color:#ffffff;stop-opacity:1"
offset="1"
id="stop84475" />
</linearGradient>
<linearGradient
inkscape:collect="always"
id="linearGradient84469">
<stop
style="stop-color:#f6e6c8;stop-opacity:1"
offset="0"
id="stop84465" />
<stop
style="stop-color:#dc9f2e;stop-opacity:1"
offset="1"
id="stop84467" />
</linearGradient>
<linearGradient
inkscape:collect="always"
id="linearGradient84461">
<stop
style="stop-color:#bfdce8;stop-opacity:1;"
offset="0"
id="stop84457" />
<stop
style="stop-color:#2a72ac;stop-opacity:1"
offset="1"
id="stop84459" />
</linearGradient>
<linearGradient
inkscape:collect="always"
id="linearGradient84420">
<stop
style="stop-color:#a7a9ac;stop-opacity:1;"
offset="0"
id="stop84416" />
<stop
style="stop-color:#e7e8e9;stop-opacity:1"
offset="1"
id="stop84418" />
</linearGradient>
<linearGradient
inkscape:collect="always"
id="linearGradient84347">
<stop
style="stop-color:#2c2d2f;stop-opacity:1;"
offset="0"
id="stop84343" />
<stop
style="stop-color:#000000;stop-opacity:1"
offset="1"
id="stop84345" />
</linearGradient>
<linearGradient
inkscape:collect="always"
id="linearGradient84339">
<stop
style="stop-color:#002442;stop-opacity:1;"
offset="0"
id="stop84335" />
<stop
style="stop-color:#151617;stop-opacity:1"
offset="1"
id="stop84337" />
</linearGradient>
<linearGradient
inkscape:collect="always"
id="linearGradient84331">
<stop
style="stop-color:#003d6e;stop-opacity:1;"
offset="0"
id="stop84327" />
<stop
style="stop-color:#59b5ff;stop-opacity:1"
offset="1"
id="stop84329" />
</linearGradient>
<linearGradient
inkscape:collect="always"
id="linearGradient84323">
<stop
style="stop-color:#dc9f2e;stop-opacity:1;"
offset="0"
id="stop84319" />
<stop
style="stop-color:#ffffff;stop-opacity:1"
offset="1"
id="stop84321" />
</linearGradient>
<linearGradient
inkscape:collect="always"
xlink:href="#linearGradient84323"
id="linearGradient84325"
x1="221.5741"
y1="250.235"
x2="219.20772"
y2="221.99771"
gradientUnits="userSpaceOnUse"
gradientTransform="translate(0,10.583333)" />
<linearGradient
inkscape:collect="always"
xlink:href="#linearGradient84331"
id="linearGradient84333"
x1="223.23239"
y1="212.83418"
x2="245.52328"
y2="129.64345"
gradientUnits="userSpaceOnUse"
gradientTransform="translate(0,10.583333)" />
<linearGradient
inkscape:collect="always"
xlink:href="#linearGradient84339"
id="linearGradient84341"
x1="190.36137"
y1="217.8925"
x2="205.20828"
y2="209.32063"
gradientUnits="userSpaceOnUse" />
<linearGradient
inkscape:collect="always"
xlink:href="#linearGradient84347"
id="linearGradient84349"
x1="212.05453"
y1="215.20055"
x2="237.73705"
y2="230.02835"
gradientUnits="userSpaceOnUse"
gradientTransform="translate(0,10.583333)" />
<linearGradient
inkscape:collect="always"
xlink:href="#linearGradient84323"
id="linearGradient84363"
x1="193.61516"
y1="225.045"
x2="224.08698"
y2="223.54327"
gradientUnits="userSpaceOnUse"
gradientTransform="translate(0,10.583333)" />
<linearGradient
inkscape:collect="always"
xlink:href="#linearGradient84323"
id="linearGradient84377"
x1="182.72513"
y1="222.54439"
x2="184.01024"
y2="210.35291"
gradientUnits="userSpaceOnUse"
gradientTransform="translate(0,10.583333)" />
<linearGradient
inkscape:collect="always"
xlink:href="#linearGradient84420"
id="linearGradient84408"
x1="211.73801"
y1="225.48302"
x2="204.24324"
y2="238.46432"
gradientUnits="userSpaceOnUse"
gradientTransform="translate(0,10.583333)" />
<linearGradient
inkscape:collect="always"
xlink:href="#linearGradient84420"
id="linearGradient84422"
x1="190.931"
y1="221.83777"
x2="187.53873"
y2="229.26593"
gradientUnits="userSpaceOnUse"
gradientTransform="translate(0,10.583333)" />
<linearGradient
inkscape:collect="always"
xlink:href="#linearGradient84339"
id="linearGradient84425"
gradientUnits="userSpaceOnUse"
x1="190.36137"
y1="217.8925"
x2="205.20828"
y2="209.32063"
gradientTransform="translate(0,10.583333)" />
<linearGradient
inkscape:collect="always"
xlink:href="#linearGradient84420"
id="linearGradient84441"
x1="169.95944"
y1="215.77036"
x2="174.0289"
y2="207.81528"
gradientUnits="userSpaceOnUse"
gradientTransform="translate(0,10.583333)" />
<linearGradient
inkscape:collect="always"
xlink:href="#linearGradient84420"
id="linearGradient84455"
x1="234.08092"
y1="252.39755"
x2="245.88477"
y2="251.21777"
gradientUnits="userSpaceOnUse"
gradientTransform="translate(0,10.583333)" />
<radialGradient
inkscape:collect="always"
xlink:href="#linearGradient84461"
id="radialGradient84463"
cx="213.19594"
cy="223.40646"
fx="214.12064"
fy="217.34077"
r="33.39888"
gradientUnits="userSpaceOnUse"
gradientTransform="matrix(2.6813748,0.05304973,-0.0423372,2.1399146,-349.74924,-255.6421)" />
<radialGradient
inkscape:collect="always"
xlink:href="#linearGradient84469"
id="radialGradient84471"
cx="207.18298"
cy="211.06483"
fx="207.18298"
fy="211.06483"
r="2.77954"
gradientTransform="matrix(1.4407627,0.18685239,-0.24637721,1.8997405,-38.989952,-218.98841)"
gradientUnits="userSpaceOnUse" />
<linearGradient
inkscape:collect="always"
xlink:href="#linearGradient84477"
id="linearGradient84479"
x1="241.60336"
y1="255.46982"
x2="244.45177"
y2="250.4846"
gradientUnits="userSpaceOnUse"
gradientTransform="translate(0,10.583333)" />
</defs>
<sodipodi:namedview
id="base"
pagecolor="#ffffff"
bordercolor="#666666"
borderopacity="1.0"
inkscape:pageopacity="0.0"
inkscape:pageshadow="2"
inkscape:zoom="1"
inkscape:cx="517.27113"
inkscape:cy="314.79773"
inkscape:document-units="mm"
inkscape:current-layer="layer1"
inkscape:document-rotation="0"
showgrid="false"
units="px"
inkscape:snap-global="false"
inkscape:window-width="2560"
inkscape:window-height="1376"
inkscape:window-x="0"
inkscape:window-y="27"
inkscape:window-maximized="1"
fit-margin-top="0"
fit-margin-left="0"
fit-margin-right="0"
fit-margin-bottom="0" />
<metadata
id="metadata5">
<rdf:RDF>
<cc:Work
rdf:about="">
<dc:format>image/svg+xml</dc:format>
<dc:type
rdf:resource="http://purl.org/dc/dcmitype/StillImage" />
<dc:title />
</cc:Work>
</rdf:RDF>
</metadata>
<g
inkscape:label="Layer 1"
inkscape:groupmode="layer"
id="layer1"
transform="translate(-149.15784,-175.92614)">
<g
id="g84497"
style="stroke-width:1.32291663;stroke-miterlimit:4;stroke-dasharray:none"
transform="translate(0,10.583333)">
<rect
style="fill:#ffffff;stroke:#000000;stroke-width:1.32291663;stroke-linecap:round;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:5.99999952"
id="rect84485"
width="31.605196"
height="19.16976"
x="299.48376"
y="87.963303"
transform="rotate(30)" />
<rect
style="fill:#ffffff;stroke:#000000;stroke-width:1.32291663;stroke-linecap:round;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:5.99999952"
id="rect84487"
width="16.725054"
height="9.8947001"
x="258.07639"
y="92.60083"
transform="rotate(30)" />
<rect
style="fill:#ffffff;stroke:#000000;stroke-width:1.32291663;stroke-linecap:round;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:5.99999952"
id="rect84489"
width="4.8383565"
height="11.503917"
x="253.2236"
y="91.796227"
transform="rotate(30)" />
<rect
y="86.859642"
x="331.21924"
height="21.377089"
width="4.521956"
id="rect84491"
style="fill:#ffffff;stroke:#000000;stroke-width:1.32291663;stroke-linecap:round;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:5.99999952"
transform="rotate(30)" />
</g>
<path
style="fill:#ffffff;stroke:#000000;stroke-width:1.32291663;stroke-linecap:round;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:5.99999952"
d="m 246.61693,255.0795 -9.11198,15.78242 a 2.6351497,9.1643514 30 0 0 6.60453,-6.7032 2.6351497,9.1643514 30 0 0 2.50745,-9.07922 z"
id="path84483"
inkscape:connector-curvature="0" />
<path
sodipodi:nodetypes="cccccc"
inkscape:connector-curvature="0"
id="path84481"
d="m 202.36709,199.05917 26.65552,8.43269 21.69622,19.51455 -8.68507,12.39398 -46.04559,-26.61429 z"
style="fill:#ffffff;stroke:#000000;stroke-width:1.32291663;stroke-linecap:round;stroke-linejoin:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:5.99999952" />
<circle
style="fill:#ffffff;stroke:#000000;stroke-width:1.32291663;stroke-linecap:round;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:5.99999952"
id="path84224"
cx="213.64427"
cy="234.18927"
r="35.482784" />
<circle
r="33.39888"
cy="234.18927"
cx="213.64427"
id="circle84226"
style="fill:url(#radialGradient84463);fill-opacity:1;stroke:none;stroke-width:0.52916664;stroke-linecap:round;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:5.99999952" />
<rect
style="fill:#ffffff;stroke:#000000;stroke-width:0.79375005;stroke-linecap:round;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:5.99999952"
id="rect84114"
width="31.605196"
height="19.16976"
x="304.77545"
y="97.128738"
transform="rotate(30)" />
<rect
style="fill:#ffffff;stroke:#000000;stroke-width:0.79374999;stroke-linecap:round;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:5.99999952"
id="rect84116"
width="4.521956"
height="21.377089"
x="300.27435"
y="96.025078"
transform="rotate(30)" />
<rect
y="99.087395"
x="283.71848"
height="15.252436"
width="16.459545"
id="rect84118"
style="fill:#ffffff;stroke:#000000;stroke-width:0.79375005;stroke-linecap:round;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:5.99999952"
transform="rotate(30)" />
<rect
y="98.190086"
x="280.00021"
height="17.047071"
width="3.617183"
id="rect84120"
style="fill:#ffffff;stroke:#000000;stroke-width:0.79374999;stroke-linecap:round;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:5.99999952"
transform="rotate(30)" />
<rect
style="fill:#ffffff;stroke:#000000;stroke-width:0.79375005;stroke-linecap:round;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:5.99999952"
id="rect84122"
width="16.725054"
height="9.8947001"
x="263.36807"
y="101.76627"
transform="rotate(30)" />
<rect
style="fill:#ffffff;stroke:#000000;stroke-width:0.79374999;stroke-linecap:round;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:5.99999952"
id="rect84124"
width="4.8383565"
height="11.503917"
x="258.51526"
y="100.96166"
transform="rotate(30)" />
<rect
y="96.025078"
x="336.51093"
height="21.377089"
width="4.521956"
id="rect84126"
style="fill:#ffffff;stroke:#000000;stroke-width:0.79374999;stroke-linecap:round;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:5.99999952"
transform="rotate(30)" />
<path
style="fill:url(#linearGradient84325);fill-opacity:1;stroke:none;stroke-width:0.79375005;stroke-linecap:round;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:5.99999952"
d="m 207.24023,252.71811 25.53907,14.74414 8.52539,-14.76953 -25.53711,-14.74415 z"
id="rect84313"
inkscape:connector-curvature="0" />
<path
inkscape:connector-curvature="0"
id="path84128"
d="m 215.3335,241.36799 22.49734,12.98884"
style="fill:#ffffff;fill-rule:evenodd;stroke:#000000;stroke-width:0.52916664;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1" />
<path
inkscape:connector-curvature="0"
id="path84130"
d="m 246.61693,255.0795 -9.11198,15.78242 a 2.6351497,9.1643514 30 0 0 6.60453,-6.7032 2.6351497,9.1643514 30 0 0 2.50745,-9.07922 z"
style="fill:#ffffff;stroke:#000000;stroke-width:0.79375005;stroke-linecap:round;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:5.99999952" />
<path
style="fill:#ffffff;stroke:#000000;stroke-width:0.79374999;stroke-linecap:round;stroke-linejoin:round;stroke-dashoffset:5.99999952"
d="m 195.97877,212.80238 46.0456,26.61429 -3.50256,6.07342 -46.0456,-26.61429 z"
id="path84134"
inkscape:connector-curvature="0"
sodipodi:nodetypes="ccccc" />
<path
style="fill:#ffffff;stroke:#000000;stroke-width:0.79374999;stroke-linecap:round;stroke-linejoin:round;stroke-dashoffset:5.99999952"
d="m 202.36709,199.05917 26.65552,8.43269 21.69622,19.51455 -8.68507,12.39398 -46.04559,-26.61429 z"
id="path84136"
inkscape:connector-curvature="0"
sodipodi:nodetypes="cccccc" />
<path
style="fill:url(#linearGradient84422);fill-opacity:1;stroke:none;stroke-width:0.79374999;stroke-linecap:round;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:5.99999952"
d="m 186.31445,239.41146 1.30078,0.75 7.46485,-12.92968 -1.30078,-0.75 z"
id="rect84410"
inkscape:connector-curvature="0" />
<path
style="fill:url(#linearGradient84349);fill-opacity:1;stroke:none;stroke-width:0.79374999;stroke-linecap:round;stroke-linejoin:round;stroke-dashoffset:5.99999952"
d="m 193.92188,218.48568 44.21289,25.55469 2.44335,-4.23242 -44.21289,-25.55664 z"
id="path84284"
inkscape:connector-curvature="0" />
<path
style="fill:url(#linearGradient84363);fill-opacity:1;stroke:none;stroke-width:0.79375005;stroke-linecap:round;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:5.99999952"
d="m 189.98438,240.4935 12.42187,7.16992 6.56641,-11.375 -12.42188,-7.16992 z"
id="rect84351"
inkscape:connector-curvature="0" />
<path
style="fill:url(#linearGradient84377);fill-opacity:1;stroke:none;stroke-width:0.79375005;stroke-linecap:round;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:5.99999952"
d="m 173.69727,227.99936 12.65234,7.30273 3.88867,-6.73633 -12.65234,-7.30273 z"
id="rect84365"
inkscape:connector-curvature="0" />
<path
sodipodi:nodetypes="ccccc"
inkscape:connector-curvature="0"
id="path84138"
d="m 192.47621,218.8758 -11.1013,8.29627 c 0,0 6.16202,4.57403 15.2798,4.67656 9.1178,0.1025 11.46925,-3.93799 11.46925,-3.93799 z"
style="fill:#ffffff;fill-rule:evenodd;stroke:#000000;stroke-width:0.79374999;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1" />
<ellipse
cy="223.01579"
cx="207.08998"
id="circle84140"
style="fill:#ffffff;stroke:#000000;stroke-width:0.79374999;stroke-linecap:round;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:5.99999952"
rx="3.8395541"
ry="3.8438656" />
<path
style="fill:url(#linearGradient84333);fill-opacity:1;stroke:none;stroke-width:0.79374999;stroke-linecap:round;stroke-linejoin:round;stroke-dashoffset:5.99999952"
d="m 197.35938,212.35287 44.36523,25.64453 7.58984,-10.83203 -20.82617,-18.73242 -25.55078,-8.08399 z"
id="path84272"
inkscape:connector-curvature="0" />
<path
inkscape:connector-curvature="0"
id="path84142"
d="m 200.6837,212.37603 11.49279,-6.98413 -8.11935,-2.73742"
style="fill:none;fill-rule:evenodd;stroke:#000000;stroke-width:0.5291667;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1" />
<path
inkscape:connector-curvature="0"
id="path84144"
d="m 241.31895,235.3047 -8.04514,-4.75769 10.057,-4.72299"
style="fill:none;fill-rule:evenodd;stroke:#000000;stroke-width:0.5291667;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
sodipodi:nodetypes="ccc" />
<path
sodipodi:nodetypes="ccc"
style="fill:none;fill-rule:evenodd;stroke:#2a72ac;stroke-width:0.52899998;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
d="m 241.06868,235.79543 -8.9307,-5.38071 10.81942,-5.07707"
id="path84280"
inkscape:connector-curvature="0" />
<path
style="fill:none;fill-rule:evenodd;stroke:#2a72ac;stroke-width:0.5291667;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
d="m 200.60886,211.70589 10.37702,-6.1817 -7.12581,-2.30459"
id="path84290"
inkscape:connector-curvature="0"
sodipodi:nodetypes="ccc" />
<path
style="fill:url(#radialGradient84471);fill-opacity:1;stroke:none;stroke-width:0.79374999;stroke-linecap:round;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:5.99999952"
d="m 206.89258,220.23959 -0.29297,0.0352 -0.23633,0.0527 -0.26953,0.0898 -0.2793,0.125 -0.23437,0.13477 -0.20508,0.14648 -0.2207,0.19532 -0.18946,0.20117 -0.006,0.008 0.004,-0.008 -0.006,0.01 -0.008,0.01 -0.004,0.004 -0.006,0.006 -0.12109,0.1582 -0.002,0.004 -0.002,0.002 -0.16406,0.26758 -0.12109,0.24804 -0.0996,0.28125 -0.0645,0.24219 -0.0371,0.26367 -0.0176,0.31641 0.008,0.18164 0.0332,0.28711 0.0527,0.23437 0.004,0.0117 0.0937,0.28516 0.11133,0.24805 0.13086,0.23046 0.16992,0.23829 0.1836,0.20898 0.21093,0.19727 0.19532,0.14843 0.25586,0.15625 0.24218,0.11719 0.26172,0.0977 0.27344,0.0684 0.27344,0.043 0.29297,0.0137 0.18164,-0.008 0.29687,-0.0351 0.24024,-0.0547 0.27539,-0.0898 0.24218,-0.10938 0.25,-0.14453 0.23047,-0.16406 0.20899,-0.1836 0.20508,-0.21875 0.125,-0.16406 0.004,-0.006 0.1582,-0.25781 0.004,-0.008 0.12695,-0.26172 0.0996,-0.27344 0.002,-0.006 0.0586,-0.24023 0.0391,-0.26563 0.0176,-0.3125 -0.008,-0.17968 -0.0332,-0.28711 -0.0527,-0.23438 -0.004,-0.0117 -0.0937,-0.28515 -0.11132,-0.24805 -0.13086,-0.23047 -0.16993,-0.23828 -0.18554,-0.20899 -0.19922,-0.18945 -0.21875,-0.16406 -0.23828,-0.14844 -0.26563,-0.12695 -0.01,-0.004 -0.21875,-0.0801 -0.28516,-0.0723 -0.27344,-0.043 -0.29492,-0.0137 z"
id="ellipse84292"
inkscape:connector-curvature="0" />
<path
style="fill:url(#linearGradient84425);fill-opacity:1;fill-rule:evenodd;stroke:none;stroke-width:0.79374999;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
d="m 183.23633,227.10092 c 5.59753,3.20336 12.36881,4.51528 18.71366,3.17108 1.59516,-0.38 3.17489,-0.99021 4.44874,-2.04739 -0.73893,-0.64617 -1.68301,-0.99544 -2.49844,-1.53493 -3.78032,-2.18293 -7.56064,-4.36587 -11.34096,-6.5488 -3.10767,2.32001 -6.21533,4.64003 -9.323,6.96004 z"
id="path84298"
inkscape:connector-curvature="0"
sodipodi:nodetypes="cccccc" />
<path
style="fill:url(#linearGradient84479);fill-opacity:1;stroke:none;stroke-width:0.79375005;stroke-linecap:round;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:5.99999952"
d="m 238.62695,269.97787 0.006,-0.002 0.39453,-0.27735 0.41797,-0.34179 0.002,-0.002 0.45703,-0.42382 0.47851,-0.49219 0.0156,-0.0176 0.47656,-0.53711 0.002,-0.002 0.0117,-0.0137 0.48438,-0.5918 0.0117,-0.0156 0.49023,-0.64257 0.01,-0.0137 0.49609,-0.69726 0.48047,-0.71875 0.01,-0.0137 0.46485,-0.74805 0.004,-0.008 0.002,-0.002 0.30468,-0.51562 0.008,-0.0117 0.4375,-0.78711 0.40625,-0.77734 0.008,-0.0137 0.37109,-0.77149 0.008,-0.0156 0.33789,-0.75977 0.006,-0.0156 0.30078,-0.73829 0.27148,-0.74609 0.21289,-0.66602 0.17969,-0.66796 v -0.002 l 0.12305,-0.58203 0.002,-0.0137 0.0723,-0.51562 0.0176,-0.31836 z"
id="path84379"
inkscape:connector-curvature="0" />
<path
style="fill:url(#linearGradient84408);fill-opacity:1;stroke:none;stroke-width:0.79374999;stroke-linecap:round;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:5.99999952"
d="m 202.78906,251.42318 2.08399,1.20118 9.6289,-16.67969 -2.08203,-1.20117 z"
id="rect84396"
inkscape:connector-curvature="0" />
<path
style="fill:url(#linearGradient84441);fill-opacity:1;stroke:none;stroke-width:0.79374999;stroke-linecap:round;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:5.99999952"
d="m 169.0918,226.26889 2.35937,1.36133 4.69336,-8.13086 -2.35937,-1.36133 z"
id="rect84429"
inkscape:connector-curvature="0" />
<path
style="fill:url(#linearGradient84455);fill-opacity:1;stroke:none;stroke-width:0.79374999;stroke-linecap:round;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:5.99999952"
d="m 234.17188,269.53842 2.08203,1.20312 9.63086,-16.67773 -2.08399,-1.20313 z"
id="rect84443"
inkscape:connector-curvature="0" />
<path
style="fill:#ffffff;fill-rule:evenodd;stroke:#f8ead2;stroke-width:0.52916664;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
d="m 215.55025,240.82707 22.49734,12.98884"
id="path84521"
inkscape:connector-curvature="0" />
</g>
</svg>


hack/travis_osx.sh Executable file
View File

@@ -0,0 +1,16 @@
#!/usr/bin/env bash
set -e
export GOPATH=$(pwd)/_gopath
export PATH=$GOPATH/bin:$PATH
_projectatomic="${GOPATH}/src/github.com/projectatomic"
mkdir -vp ${_projectatomic}
ln -vsf $(pwd) ${_projectatomic}/skopeo
go get -u github.com/cpuguy83/go-md2man github.com/golang/lint/golint
cd ${_projectatomic}/skopeo
make validate-local test-unit-local binary-local
sudo make install
skopeo -v

View File

@@ -12,9 +12,6 @@ import (
const (
privateRegistryURL0 = "127.0.0.1:5000"
privateRegistryURL1 = "127.0.0.1:5001"
privateRegistryURL2 = "127.0.0.1:5002"
privateRegistryURL3 = "127.0.0.1:5003"
privateRegistryURL4 = "127.0.0.1:5004"
)
func Test(t *testing.T) {
@@ -26,15 +23,13 @@ func init() {
}
type SkopeoSuite struct {
regV1 *testRegistryV1
regV2 *testRegistryV2
regV2Shema1 *testRegistryV2
regV1WithAuth *testRegistryV1 // does v1 support auth?
regV2WithAuth *testRegistryV2
}
func (s *SkopeoSuite) SetUpSuite(c *check.C) {
_, err := exec.LookPath(skopeoBinary)
c.Assert(err, check.IsNil)
}
func (s *SkopeoSuite) TearDownSuite(c *check.C) {
@@ -42,24 +37,14 @@ func (s *SkopeoSuite) TearDownSuite(c *check.C) {
}
func (s *SkopeoSuite) SetUpTest(c *check.C) {
_, err := exec.LookPath(skopeoBinary)
c.Assert(err, check.IsNil)
s.regV1 = setupRegistryV1At(c, privateRegistryURL0, false) // TODO:(runcom)
s.regV2 = setupRegistryV2At(c, privateRegistryURL1, false, false)
s.regV2Shema1 = setupRegistryV2At(c, privateRegistryURL2, false, true)
s.regV1WithAuth = setupRegistryV1At(c, privateRegistryURL3, true) // not used
s.regV2WithAuth = setupRegistryV2At(c, privateRegistryURL4, true, false)
s.regV2 = setupRegistryV2At(c, privateRegistryURL0, false, false)
s.regV2WithAuth = setupRegistryV2At(c, privateRegistryURL1, true, false)
}
func (s *SkopeoSuite) TearDownTest(c *check.C) {
// not checking V1 registries now...
if s.regV2 != nil {
s.regV2.Close()
}
if s.regV2Shema1 != nil {
s.regV2Shema1.Close()
}
if s.regV2WithAuth != nil {
//cmd := exec.Command("docker", "logout", s.regV2WithAuth)
//c.Assert(cmd.Run(), check.IsNil)

View File

@@ -1,7 +1,7 @@
package main
import (
"bytes"
"encoding/json"
"fmt"
"io/ioutil"
"log"
@@ -15,6 +15,7 @@ import (
"github.com/containers/image/signature"
"github.com/go-check/check"
"github.com/opencontainers/go-digest"
imgspecv1 "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/opencontainers/image-tools/image"
)
@@ -22,12 +23,16 @@ func init() {
check.Suite(&CopySuite{})
}
const v2DockerRegistryURL = "localhost:5555" // Update also policy.json
const (
v2DockerRegistryURL = "localhost:5555" // Update also policy.json
v2s1DockerRegistryURL = "localhost:5556"
)
type CopySuite struct {
cluster *openshiftCluster
registry *testRegistryV2
gpgHome string
cluster *openshiftCluster
registry *testRegistryV2
s1Registry *testRegistryV2
gpgHome string
}
func (s *CopySuite) SetUpSuite(c *check.C) {
@@ -37,7 +42,7 @@ func (s *CopySuite) SetUpSuite(c *check.C) {
s.cluster = startOpenshiftCluster(c) // FIXME: Set up TLS for the docker registry port instead of using "--tls-verify=false" all over the place.
for _, stream := range []string{"unsigned", "personal", "official", "naming", "cosigned", "compression"} {
for _, stream := range []string{"unsigned", "personal", "official", "naming", "cosigned", "compression", "schema1", "schema2"} {
isJSON := fmt.Sprintf(`{
"kind": "ImageStream",
"apiVersion": "v1",
@@ -49,7 +54,9 @@ func (s *CopySuite) SetUpSuite(c *check.C) {
runCommandWithInput(c, isJSON, "oc", "create", "-f", "-")
}
s.registry = setupRegistryV2At(c, v2DockerRegistryURL, false, false) // FIXME: Set up TLS for the docker registry port instead of using "--tls-verify=false" all over the place.
// FIXME: Set up TLS for the docker registry port instead of using "--tls-verify=false" all over the place.
s.registry = setupRegistryV2At(c, v2DockerRegistryURL, false, false)
s.s1Registry = setupRegistryV2At(c, v2s1DockerRegistryURL, false, true)
gpgHome, err := ioutil.TempDir("", "skopeo-gpg")
c.Assert(err, check.IsNil)
@@ -75,35 +82,24 @@ func (s *CopySuite) TearDownSuite(c *check.C) {
if s.registry != nil {
s.registry.Close()
}
if s.s1Registry != nil {
s.s1Registry.Close()
}
if s.cluster != nil {
s.cluster.tearDown()
s.cluster.tearDown(c)
}
}
// fileFromFixture applies edits to inputPath and returns a path to the temporary file.
// Callers should defer os.Remove(the_returned_path)
func fileFromFixture(c *check.C, inputPath string, edits map[string]string) string {
contents, err := ioutil.ReadFile(inputPath)
c.Assert(err, check.IsNil)
for template, value := range edits {
contents = bytes.Replace(contents, []byte(template), []byte(value), -1)
}
file, err := ioutil.TempFile("", "policy.json")
c.Assert(err, check.IsNil)
path := file.Name()
_, err = file.Write(contents)
c.Assert(err, check.IsNil)
err = file.Close()
c.Assert(err, check.IsNil)
return path
}
func (s *CopySuite) TestCopyFailsWithManifestList(c *check.C) {
c.ExpectFailure("manifest-list-hotfix sacrificed hotfixes for being able to copy images")
assertSkopeoFails(c, ".*can not copy docker://estesp/busybox:latest: manifest contains multiple images.*", "copy", "docker://estesp/busybox:latest", "dir:somedir")
}
func (s *CopySuite) TestCopyFailsWhenImageOSDoesntMatchRuntimeOS(c *check.C) {
c.Skip("can't run this on Travis")
assertSkopeoFails(c, `.*image operating system "windows" cannot be used on "linux".*`, "copy", "docker://microsoft/windowsservercore", "containers-storage:test")
}
func (s *CopySuite) TestCopySimpleAtomicRegistry(c *check.C) {
dir1, err := ioutil.TempDir("", "copy-1")
c.Assert(err, check.IsNil)
@@ -117,9 +113,9 @@ func (s *CopySuite) TestCopySimpleAtomicRegistry(c *check.C) {
assertSkopeoSucceeds(c, "", "copy", "docker://estesp/busybox:amd64", "dir:"+dir1)
// "push": dir: → atomic:
assertSkopeoSucceeds(c, "", "--tls-verify=false", "--debug", "copy", "dir:"+dir1, "atomic:localhost:5000/myns/unsigned:unsigned")
// The result of pushing and pulling is an unmodified image.
// The result of pushing and pulling is an equivalent image, except for schema1 embedded names.
assertSkopeoSucceeds(c, "", "--tls-verify=false", "copy", "atomic:localhost:5000/myns/unsigned:unsigned", "dir:"+dir2)
destructiveCheckDirImagesAreEqual(c, dir1, dir2)
assertSchema1DirImagesAreEqualExceptNames(c, dir1, "estesp/busybox:amd64", dir2, "myns/unsigned:unsigned")
}
// The most basic (skopeo copy) use:
@@ -143,21 +139,26 @@ func (s *CopySuite) TestCopySimple(c *check.C) {
out := combinedOutputOfCommand(c, "diff", "-urN", dir1, dir2)
c.Assert(out, check.Equals, "")
// docker v2s2 -> OCI image layout
// docker v2s2 -> OCI image layout with image name
// ociDest will be created by oci: if it doesn't exist
// so don't create it here to exercise auto-creation
ociDest := "busybox-latest"
ociDest := "busybox-latest-image"
ociImgName := "busybox"
defer os.RemoveAll(ociDest)
assertSkopeoSucceeds(c, "", "copy", "docker://busybox:latest", "oci:"+ociDest)
assertSkopeoSucceeds(c, "", "copy", "docker://busybox:latest", "oci:"+ociDest+":"+ociImgName)
_, err = os.Stat(ociDest)
c.Assert(err, check.IsNil)
// docker v2s2 -> OCI image layout without image name
ociDest = "busybox-latest-noimage"
defer os.RemoveAll(ociDest)
assertSkopeoFails(c, ".*Error initializing destination oci:busybox-latest-noimage:: cannot save image with empty image.ref.name.*", "copy", "docker://busybox:latest", "oci:"+ociDest)
}
// Check whether dir: images in dir1 and dir2 are equal.
// WARNING: This modifies the contents of dir1 and dir2!
func destructiveCheckDirImagesAreEqual(c *check.C, dir1, dir2 string) {
// Check whether dir: images in dir1 and dir2 are equal, ignoring schema1 signatures.
func assertDirImagesAreEqual(c *check.C, dir1, dir2 string) {
// The manifests may have different JWS signatures; so, compare the manifests by digests, which
// strips the signatures, and remove them, comparing the rest file by file.
// strips the signatures.
digests := []digest.Digest{}
for _, dir := range []string{dir1, dir2} {
manifestPath := filepath.Join(dir, "manifest.json")
@@ -166,12 +167,37 @@ func destructiveCheckDirImagesAreEqual(c *check.C, dir1, dir2 string) {
digest, err := manifest.Digest(m)
c.Assert(err, check.IsNil)
digests = append(digests, digest)
err = os.Remove(manifestPath)
c.Assert(err, check.IsNil)
c.Logf("Manifest file %s (digest %s) removed", manifestPath, digest)
}
c.Assert(digests[0], check.Equals, digests[1])
out := combinedOutputOfCommand(c, "diff", "-urN", dir1, dir2)
// Then compare the rest file by file.
out := combinedOutputOfCommand(c, "diff", "-urN", "-x", "manifest.json", dir1, dir2)
c.Assert(out, check.Equals, "")
}
// Check whether schema1 dir: images in dir1 and dir2 are equal, ignoring schema1 signatures and the embedded path/tag values, which should have the expected values.
func assertSchema1DirImagesAreEqualExceptNames(c *check.C, dir1, ref1, dir2, ref2 string) {
// The manifests may have different JWS signatures and names; so, unmarshal and delete these elements.
manifests := []map[string]interface{}{}
for dir, ref := range map[string]string{dir1: ref1, dir2: ref2} {
manifestPath := filepath.Join(dir, "manifest.json")
m, err := ioutil.ReadFile(manifestPath)
c.Assert(err, check.IsNil)
data := map[string]interface{}{}
err = json.Unmarshal(m, &data)
c.Assert(err, check.IsNil)
c.Assert(data["schemaVersion"], check.Equals, float64(1))
colon := strings.LastIndex(ref, ":")
c.Assert(colon, check.Not(check.Equals), -1)
c.Assert(data["name"], check.Equals, ref[:colon])
c.Assert(data["tag"], check.Equals, ref[colon+1:])
for _, key := range []string{"signatures", "name", "tag"} {
delete(data, key)
}
manifests = append(manifests, data)
}
c.Assert(manifests[0], check.DeepEquals, manifests[1])
// Then compare the rest file by file.
out := combinedOutputOfCommand(c, "diff", "-urN", "-x", "manifest.json", dir1, dir2)
c.Assert(out, check.Equals, "")
}
@@ -190,7 +216,7 @@ func (s *CopySuite) TestCopyStreaming(c *check.C) {
// Compare (copies of) the original and the copy:
assertSkopeoSucceeds(c, "", "copy", "docker://estesp/busybox:amd64", "dir:"+dir1)
assertSkopeoSucceeds(c, "", "--tls-verify=false", "copy", "atomic:localhost:5000/myns/unsigned:streaming", "dir:"+dir2)
destructiveCheckDirImagesAreEqual(c, dir1, dir2)
assertSchema1DirImagesAreEqualExceptNames(c, dir1, "estesp/busybox:amd64", dir2, "myns/unsigned:streaming")
// FIXME: Also check pushing to docker://
}
@@ -227,7 +253,9 @@ func (s *CopySuite) TestCopyOCIRoundTrip(c *check.C) {
// For some silly reason we pass a logger to the OCI library here...
logger := log.New(os.Stderr, "", 0)
// TODO: Verify using the upstream OCI image validator.
// Verify using the upstream OCI image validator; this should catch most
// non-compliance errors. DO NOT REMOVE THIS TEST UNLESS IT'S ABSOLUTELY
// NECESSARY.
err = image.ValidateLayout(oci1, nil, logger)
c.Assert(err, check.IsNil)
err = image.ValidateLayout(oci2, nil, logger)
@@ -267,34 +295,34 @@ func (s *CopySuite) TestCopySignatures(c *check.C) {
// type: signedBy
// Sign the images
assertSkopeoSucceeds(c, "", "--tls-verify=false", "copy", "--sign-by", "personal@example.com", "docker://busybox:1.23", "atomic:localhost:5000/myns/personal:personal")
assertSkopeoSucceeds(c, "", "--tls-verify=false", "copy", "--sign-by", "official@example.com", "docker://busybox:1.23.2", "atomic:localhost:5000/myns/official:official")
assertSkopeoSucceeds(c, "", "--tls-verify=false", "copy", "--sign-by", "personal@example.com", "docker://busybox:1.26", "atomic:localhost:5006/myns/personal:personal")
assertSkopeoSucceeds(c, "", "--tls-verify=false", "copy", "--sign-by", "official@example.com", "docker://busybox:1.26.1", "atomic:localhost:5006/myns/official:official")
// Verify that we can pull them
assertSkopeoSucceeds(c, "", "--tls-verify=false", "--policy", policy, "copy", "atomic:localhost:5000/myns/personal:personal", dirDest)
assertSkopeoSucceeds(c, "", "--tls-verify=false", "--policy", policy, "copy", "atomic:localhost:5000/myns/official:official", dirDest)
assertSkopeoSucceeds(c, "", "--tls-verify=false", "--policy", policy, "copy", "atomic:localhost:5006/myns/personal:personal", dirDest)
assertSkopeoSucceeds(c, "", "--tls-verify=false", "--policy", policy, "copy", "atomic:localhost:5006/myns/official:official", dirDest)
// Verify that mis-signed images are rejected
assertSkopeoSucceeds(c, "", "--tls-verify=false", "copy", "atomic:localhost:5000/myns/personal:personal", "atomic:localhost:5000/myns/official:attack")
assertSkopeoSucceeds(c, "", "--tls-verify=false", "copy", "atomic:localhost:5000/myns/official:official", "atomic:localhost:5000/myns/personal:attack")
assertSkopeoSucceeds(c, "", "--tls-verify=false", "copy", "atomic:localhost:5006/myns/personal:personal", "atomic:localhost:5006/myns/official:attack")
assertSkopeoSucceeds(c, "", "--tls-verify=false", "copy", "atomic:localhost:5006/myns/official:official", "atomic:localhost:5006/myns/personal:attack")
assertSkopeoFails(c, ".*Source image rejected: Invalid GPG signature.*",
"--tls-verify=false", "--policy", policy, "copy", "atomic:localhost:5000/myns/personal:attack", dirDest)
"--tls-verify=false", "--policy", policy, "copy", "atomic:localhost:5006/myns/personal:attack", dirDest)
assertSkopeoFails(c, ".*Source image rejected: Invalid GPG signature.*",
"--tls-verify=false", "--policy", policy, "copy", "atomic:localhost:5000/myns/official:attack", dirDest)
"--tls-verify=false", "--policy", policy, "copy", "atomic:localhost:5006/myns/official:attack", dirDest)
// Verify that signed identity is verified.
assertSkopeoSucceeds(c, "", "--tls-verify=false", "copy", "atomic:localhost:5000/myns/official:official", "atomic:localhost:5000/myns/naming:test1")
assertSkopeoFails(c, ".*Source image rejected: Signature for identity localhost:5000/myns/official:official is not accepted.*",
"--tls-verify=false", "--policy", policy, "copy", "atomic:localhost:5000/myns/naming:test1", dirDest)
assertSkopeoSucceeds(c, "", "--tls-verify=false", "copy", "atomic:localhost:5006/myns/official:official", "atomic:localhost:5006/myns/naming:test1")
assertSkopeoFails(c, ".*Source image rejected: Signature for identity localhost:5006/myns/official:official is not accepted.*",
"--tls-verify=false", "--policy", policy, "copy", "atomic:localhost:5006/myns/naming:test1", dirDest)
// signedIdentity works
assertSkopeoSucceeds(c, "", "--tls-verify=false", "copy", "atomic:localhost:5000/myns/official:official", "atomic:localhost:5000/myns/naming:naming")
assertSkopeoSucceeds(c, "", "--tls-verify=false", "--policy", policy, "copy", "atomic:localhost:5000/myns/naming:naming", dirDest)
assertSkopeoSucceeds(c, "", "--tls-verify=false", "copy", "atomic:localhost:5006/myns/official:official", "atomic:localhost:5006/myns/naming:naming")
assertSkopeoSucceeds(c, "", "--tls-verify=false", "--policy", policy, "copy", "atomic:localhost:5006/myns/naming:naming", dirDest)
// Verify that cosigning requirements are enforced
assertSkopeoSucceeds(c, "", "--tls-verify=false", "copy", "atomic:localhost:5000/myns/official:official", "atomic:localhost:5000/myns/cosigned:cosigned")
assertSkopeoSucceeds(c, "", "--tls-verify=false", "copy", "atomic:localhost:5006/myns/official:official", "atomic:localhost:5006/myns/cosigned:cosigned")
assertSkopeoFails(c, ".*Source image rejected: Invalid GPG signature.*",
"--tls-verify=false", "--policy", policy, "copy", "atomic:localhost:5000/myns/cosigned:cosigned", dirDest)
"--tls-verify=false", "--policy", policy, "copy", "atomic:localhost:5006/myns/cosigned:cosigned", dirDest)
assertSkopeoSucceeds(c, "", "--tls-verify=false", "copy", "--sign-by", "personal@example.com", "atomic:localhost:5000/myns/official:official", "atomic:localhost:5000/myns/cosigned:cosigned")
assertSkopeoSucceeds(c, "", "--tls-verify=false", "--policy", policy, "copy", "atomic:localhost:5000/myns/cosigned:cosigned", dirDest)
assertSkopeoSucceeds(c, "", "--tls-verify=false", "copy", "--sign-by", "personal@example.com", "atomic:localhost:5006/myns/official:official", "atomic:localhost:5006/myns/cosigned:cosigned")
assertSkopeoSucceeds(c, "", "--tls-verify=false", "--policy", policy, "copy", "atomic:localhost:5006/myns/cosigned:cosigned", dirDest)
}
// --policy copy for dir: sources
@@ -356,10 +384,10 @@ func (s *CopySuite) TestCopyCompression(c *check.C) {
defer os.RemoveAll(topDir)
for i, t := range []struct{ fixture, remote string }{
//{"uncompressed-image-s1", "docker://" + v2DockerRegistryURL + "/compression/compression:s1"}, // FIXME: depends on push to tag working
//{"uncompressed-image-s2", "docker://" + v2DockerRegistryURL + "/compression/compression:s2"}, // FIXME: depends on push to tag working
{"uncompressed-image-s1", "docker://" + v2DockerRegistryURL + "/compression/compression:s1"},
{"uncompressed-image-s2", "docker://" + v2DockerRegistryURL + "/compression/compression:s2"},
{"uncompressed-image-s1", "atomic:localhost:5000/myns/compression:s1"},
//{"uncompressed-image-s2", "atomic:localhost:5000/myns/compression:s2"}, // FIXME: The unresolved "MANIFEST_UNKNOWN"/"unexpected end of JSON input" failure
{"uncompressed-image-s2", "atomic:localhost:5000/myns/compression:s2"},
} {
dir := filepath.Join(topDir, fmt.Sprintf("case%d", i))
err := os.MkdirAll(dir, 0755)
@@ -515,7 +543,7 @@ func (s *CopySuite) TestCopyAtomicExtension(c *check.C) {
assertSkopeoSucceeds(c, "", "--tls-verify=false", "--policy", policy, "--registries.d", registriesDir,
"copy", "docker://localhost:5000/myns/extension:atomic", dirDest+"/dirAD")
// Both access methods result in the same data.
destructiveCheckDirImagesAreEqual(c, filepath.Join(topDir, "dirAA"), filepath.Join(topDir, "dirAD"))
assertDirImagesAreEqual(c, filepath.Join(topDir, "dirAA"), filepath.Join(topDir, "dirAD"))
// Get another image (different so that they don't share signatures, and sign it using docker://)
assertSkopeoSucceeds(c, "", "--tls-verify=false", "--registries.d", registriesDir,
@@ -528,7 +556,7 @@ func (s *CopySuite) TestCopyAtomicExtension(c *check.C) {
assertSkopeoSucceeds(c, "", "--debug", "--tls-verify=false", "--policy", policy, "--registries.d", registriesDir,
"copy", "docker://localhost:5000/myns/extension:extension", dirDest+"/dirDD")
// Both access methods result in the same data.
destructiveCheckDirImagesAreEqual(c, filepath.Join(topDir, "dirDA"), filepath.Join(topDir, "dirDD"))
assertDirImagesAreEqual(c, filepath.Join(topDir, "dirDA"), filepath.Join(topDir, "dirDD"))
}
func (s *SkopeoSuite) TestCopySrcWithAuth(c *check.C) {
@@ -556,3 +584,78 @@ func (s *CopySuite) TestCopyNoPanicOnHTTPResponseWOTLSVerifyFalse(c *check.C) {
assertSkopeoFails(c, ".*server gave HTTP response to HTTPS client.*",
"copy", ourRegistry+"foobar", "dir:test")
}
func (s *CopySuite) TestCopySchemaConversion(c *check.C) {
// Test conversion / schema autodetection both for the OpenShift embedded registry…
s.testCopySchemaConversionRegistries(c, "docker://localhost:5005/myns/schema1", "docker://localhost:5006/myns/schema2")
// … and for various docker/distribution registry versions.
s.testCopySchemaConversionRegistries(c, "docker://"+v2s1DockerRegistryURL+"/schema1", "docker://"+v2DockerRegistryURL+"/schema2")
}
func (s *CopySuite) TestCopyManifestConversion(c *check.C) {
topDir, err := ioutil.TempDir("", "manifest-conversion")
c.Assert(err, check.IsNil)
defer os.RemoveAll(topDir)
srcDir := filepath.Join(topDir, "source")
destDir1 := filepath.Join(topDir, "dest1")
destDir2 := filepath.Join(topDir, "dest2")
// oci to v2s1 and vice-versa not supported yet
// get v2s2 manifest type
assertSkopeoSucceeds(c, "", "copy", "docker://busybox", "dir:"+srcDir)
verifyManifestMIMEType(c, srcDir, manifest.DockerV2Schema2MediaType)
// convert from v2s2 to oci
assertSkopeoSucceeds(c, "", "copy", "--format=oci", "dir:"+srcDir, "dir:"+destDir1)
verifyManifestMIMEType(c, destDir1, imgspecv1.MediaTypeImageManifest)
// convert from oci to v2s2
assertSkopeoSucceeds(c, "", "copy", "--format=v2s2", "dir:"+destDir1, "dir:"+destDir2)
verifyManifestMIMEType(c, destDir2, manifest.DockerV2Schema2MediaType)
// convert from v2s2 to v2s1
assertSkopeoSucceeds(c, "", "copy", "--format=v2s1", "dir:"+srcDir, "dir:"+destDir1)
verifyManifestMIMEType(c, destDir1, manifest.DockerV2Schema1SignedMediaType)
// convert from v2s1 to v2s2
assertSkopeoSucceeds(c, "", "copy", "--format=v2s2", "dir:"+destDir1, "dir:"+destDir2)
verifyManifestMIMEType(c, destDir2, manifest.DockerV2Schema2MediaType)
}
func (s *CopySuite) testCopySchemaConversionRegistries(c *check.C, schema1Registry, schema2Registry string) {
topDir, err := ioutil.TempDir("", "schema-conversion")
c.Assert(err, check.IsNil)
defer os.RemoveAll(topDir)
for _, subdir := range []string{"input1", "input2", "dest2"} {
err := os.MkdirAll(filepath.Join(topDir, subdir), 0755)
c.Assert(err, check.IsNil)
}
input1Dir := filepath.Join(topDir, "input1")
input2Dir := filepath.Join(topDir, "input2")
destDir := filepath.Join(topDir, "dest2")
// Ensure we are working with a schema2 image.
// dir: accepts any manifest format, i.e. this makes …/input2 a schema2 source which cannot be asked to produce schema1 like ordinary docker: registries can.
assertSkopeoSucceeds(c, "", "copy", "docker://busybox", "dir:"+input2Dir)
verifyManifestMIMEType(c, input2Dir, manifest.DockerV2Schema2MediaType)
// 2→2 (the "f2t2" in tag means "from 2 to 2")
assertSkopeoSucceeds(c, "", "copy", "--dest-tls-verify=false", "dir:"+input2Dir, schema2Registry+":f2t2")
assertSkopeoSucceeds(c, "", "copy", "--src-tls-verify=false", schema2Registry+":f2t2", "dir:"+destDir)
verifyManifestMIMEType(c, destDir, manifest.DockerV2Schema2MediaType)
// 2→1; we will use the result as a schema1 image for further tests.
assertSkopeoSucceeds(c, "", "copy", "--dest-tls-verify=false", "dir:"+input2Dir, schema1Registry+":f2t1")
assertSkopeoSucceeds(c, "", "copy", "--src-tls-verify=false", schema1Registry+":f2t1", "dir:"+input1Dir)
verifyManifestMIMEType(c, input1Dir, manifest.DockerV2Schema1SignedMediaType)
// 1→1
assertSkopeoSucceeds(c, "", "copy", "--dest-tls-verify=false", "dir:"+input1Dir, schema1Registry+":f1t1")
assertSkopeoSucceeds(c, "", "copy", "--src-tls-verify=false", schema1Registry+":f1t1", "dir:"+destDir)
verifyManifestMIMEType(c, destDir, manifest.DockerV2Schema1SignedMediaType)
// 1→2: image stays unmodified schema1
assertSkopeoSucceeds(c, "", "copy", "--dest-tls-verify=false", "dir:"+input1Dir, schema2Registry+":f1t2")
assertSkopeoSucceeds(c, "", "copy", "--src-tls-verify=false", schema2Registry+":f1t2", "dir:"+destDir)
verifyManifestMIMEType(c, destDir, manifest.DockerV2Schema1SignedMediaType)
}
// Verify manifest in a dir: image at dir is expectedMIMEType.
func verifyManifestMIMEType(c *check.C, dir string, expectedMIMEType string) {
manifestBlob, err := ioutil.ReadFile(filepath.Join(dir, "manifest.json"))
c.Assert(err, check.IsNil)
mimeType := manifest.GuessMIMEType(manifestBlob)
c.Assert(mimeType, check.Equals, expectedMIMEType)
}
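For orientation, a minimal standalone sketch (not part of this test suite) of the same check the helper above performs, using containers/image's manifest.GuessMIMEType; the image path is hypothetical:

```go
package main

import (
	"fmt"
	"io/ioutil"

	"github.com/containers/image/manifest"
)

func main() {
	// Read the manifest that a `skopeo copy ... dir:/tmp/img` wrote out.
	blob, err := ioutil.ReadFile("/tmp/img/manifest.json") // hypothetical path
	if err != nil {
		panic(err)
	}
	// GuessMIMEType inspects the serialized manifest and returns its MIME type.
	fmt.Println(manifest.GuessMIMEType(blob))
}
```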


@@ -45,46 +45,46 @@
]
},
"atomic": {
"localhost:5000/myns/personal": [
"localhost:5006/myns/personal": [
{
"type": "signedBy",
"keyType": "GPGKeys",
"keyPath": "@keydir@/personal-pubkey.gpg"
}
],
"localhost:5000/myns/official": [
"localhost:5006/myns/official": [
{
"type": "signedBy",
"keyType": "GPGKeys",
"keyPath": "@keydir@/official-pubkey.gpg"
}
],
"localhost:5000/myns/naming:test1": [
"localhost:5006/myns/naming:test1": [
{
"type": "signedBy",
"keyType": "GPGKeys",
"keyPath": "@keydir@/official-pubkey.gpg"
}
],
"localhost:5000/myns/naming:naming": [
"localhost:5006/myns/naming:naming": [
{
"type": "signedBy",
"keyType": "GPGKeys",
"keyPath": "@keydir@/official-pubkey.gpg",
"signedIdentity": {
"type": "exactRepository",
"dockerRepository": "localhost:5000/myns/official"
"dockerRepository": "localhost:5006/myns/official"
}
}
],
"localhost:5000/myns/cosigned:cosigned": [
"localhost:5006/myns/cosigned:cosigned": [
{
"type": "signedBy",
"keyType": "GPGKeys",
"keyPath": "@keydir@/official-pubkey.gpg",
"signedIdentity": {
"type": "exactRepository",
"dockerRepository": "localhost:5000/myns/official"
"dockerRepository": "localhost:5006/myns/official"
}
},
{


@@ -21,35 +21,34 @@ var adminKUBECONFIG = map[string]string{
// openshiftCluster is an OpenShift API master and integrated registry
// running on localhost.
type openshiftCluster struct {
c *check.C
workingDir string
master *exec.Cmd
registry *exec.Cmd
processes []*exec.Cmd // Processes to terminate on teardown; append to the end, terminate from the end to the start.
}
// startOpenshiftCluster creates a new openshiftCluster.
// WARNING: This affects state in users' home directory! Only run
// in an isolated test environment.
func startOpenshiftCluster(c *check.C) *openshiftCluster {
cluster := &openshiftCluster{c: c}
cluster := &openshiftCluster{}
dir, err := ioutil.TempDir("", "openshift-cluster")
cluster.c.Assert(err, check.IsNil)
c.Assert(err, check.IsNil)
cluster.workingDir = dir
cluster.startMaster()
cluster.startRegistry()
cluster.ocLoginToProject()
cluster.dockerLogin()
cluster.relaxImageSignerPermissions()
cluster.startMaster(c)
cluster.prepareRegistryConfig(c)
cluster.startRegistry(c)
cluster.ocLoginToProject(c)
cluster.dockerLogin(c)
cluster.relaxImageSignerPermissions(c)
return cluster
}
// clusterCmd creates an exec.Cmd in c.workingDir with current environment modified by environment
func (c *openshiftCluster) clusterCmd(env map[string]string, name string, args ...string) *exec.Cmd {
// clusterCmd creates an exec.Cmd in cluster.workingDir with current environment modified by environment
func (cluster *openshiftCluster) clusterCmd(env map[string]string, name string, args ...string) *exec.Cmd {
cmd := exec.Command(name, args...)
cmd.Dir = c.workingDir
cmd.Dir = cluster.workingDir
cmd.Env = os.Environ()
for key, value := range env {
cmd.Env = modifyEnviron(cmd.Env, key, value)
@@ -58,19 +57,20 @@ func (c *openshiftCluster) clusterCmd(env map[string]string, name string, args .
}
// startMaster starts the OpenShift master (etcd+API server) and waits for it to be ready, or terminates on failure.
func (c *openshiftCluster) startMaster() {
c.master = c.clusterCmd(nil, "openshift", "start", "master")
stdout, err := c.master.StdoutPipe()
func (cluster *openshiftCluster) startMaster(c *check.C) {
cmd := cluster.clusterCmd(nil, "openshift", "start", "master")
cluster.processes = append(cluster.processes, cmd)
stdout, err := cmd.StdoutPipe()
// Send both to the same pipe. This might cause the two streams to be mixed up,
// but logging actually goes only to stderr - this primarily ensures we log any
// unexpected output to stdout.
c.master.Stderr = c.master.Stdout
err = c.master.Start()
c.c.Assert(err, check.IsNil)
cmd.Stderr = cmd.Stdout
err = cmd.Start()
c.Assert(err, check.IsNil)
portOpen, terminatePortCheck := newPortChecker(c.c, 8443)
portOpen, terminatePortCheck := newPortChecker(c, 8443)
defer func() {
c.c.Logf("Terminating port check")
c.Logf("Terminating port check")
terminatePortCheck <- true
}()
@@ -78,12 +78,12 @@ func (c *openshiftCluster) startMaster() {
logCheckFound := make(chan bool)
go func() {
defer func() {
c.c.Logf("Log checker exiting")
c.Logf("Log checker exiting")
}()
scanner := bufio.NewScanner(stdout)
for scanner.Scan() {
line := scanner.Text()
c.c.Logf("Log line: %s", line)
c.Logf("Log line: %s", line)
if strings.Contains(line, "Started Origin Controllers") {
logCheckFound <- true
return
@@ -92,7 +92,7 @@ func (c *openshiftCluster) startMaster() {
// Note: we can block before we get here.
select {
case <-terminateLogCheck:
c.c.Logf("terminated")
c.Logf("terminated")
return
default:
// Do not block here and read the next line.
@@ -101,31 +101,31 @@ func (c *openshiftCluster) startMaster() {
logCheckFound <- false
}()
defer func() {
c.c.Logf("Terminating log check")
c.Logf("Terminating log check")
terminateLogCheck <- true
}()
gotPortCheck := false
gotLogCheck := false
for !gotPortCheck || !gotLogCheck {
c.c.Logf("Waiting for master")
c.Logf("Waiting for master")
select {
case <-portOpen:
c.c.Logf("port check done")
c.Logf("port check done")
gotPortCheck = true
case found := <-logCheckFound:
c.c.Logf("log check done, found: %t", found)
c.Logf("log check done, found: %t", found)
if !found {
c.c.Fatal("log check done, success message not found")
c.Fatal("log check done, success message not found")
}
gotLogCheck = true
}
}
c.c.Logf("OK, master started!")
c.Logf("OK, master started!")
}
// startRegistry starts the OpenShift registry and waits for it to be ready, or terminates on failure.
func (c *openshiftCluster) startRegistry() {
// prepareRegistryConfig creates a registry service account and a related k8s client configuration in ${cluster.workingDir}/openshift.local.registry.
func (cluster *openshiftCluster) prepareRegistryConfig(c *check.C) {
// This partially mimics the objects created by (oadm registry), except that we run the
// server directly as an ordinary process instead of a pod with an implicitly attached service account.
saJSON := `{
@@ -135,90 +135,118 @@ func (c *openshiftCluster) startRegistry() {
"name": "registry"
}
}`
cmd := c.clusterCmd(adminKUBECONFIG, "oc", "create", "-f", "-")
runExecCmdWithInput(c.c, cmd, saJSON)
cmd := cluster.clusterCmd(adminKUBECONFIG, "oc", "create", "-f", "-")
runExecCmdWithInput(c, cmd, saJSON)
cmd = c.clusterCmd(adminKUBECONFIG, "oadm", "policy", "add-cluster-role-to-user", "system:registry", "-z", "registry")
cmd = cluster.clusterCmd(adminKUBECONFIG, "oadm", "policy", "add-cluster-role-to-user", "system:registry", "-z", "registry")
out, err := cmd.CombinedOutput()
c.c.Assert(err, check.IsNil, check.Commentf("%s", string(out)))
c.c.Assert(string(out), check.Equals, "cluster role \"system:registry\" added: \"registry\"\n")
c.Assert(err, check.IsNil, check.Commentf("%s", string(out)))
c.Assert(string(out), check.Equals, "cluster role \"system:registry\" added: \"registry\"\n")
cmd = c.clusterCmd(adminKUBECONFIG, "oadm", "create-api-client-config", "--client-dir=openshift.local.registry", "--basename=openshift-registry", "--user=system:serviceaccount:default:registry")
cmd = cluster.clusterCmd(adminKUBECONFIG, "oadm", "create-api-client-config", "--client-dir=openshift.local.registry", "--basename=openshift-registry", "--user=system:serviceaccount:default:registry")
out, err = cmd.CombinedOutput()
c.c.Assert(err, check.IsNil, check.Commentf("%s", string(out)))
c.c.Assert(string(out), check.Equals, "")
c.Assert(err, check.IsNil, check.Commentf("%s", string(out)))
c.Assert(string(out), check.Equals, "")
}
//KUBECONFIG=openshift.local.registry/openshift-registry.kubeconfig DOCKER_REGISTRY_URL=127.0.0.1:5000
c.registry = c.clusterCmd(map[string]string{
// startRegistryProcess starts the OpenShift registry with configPath on port, waits for it to be ready, and returns the process object, or terminates on failure.
func (cluster *openshiftCluster) startRegistryProcess(c *check.C, port int, configPath string) *exec.Cmd {
cmd := cluster.clusterCmd(map[string]string{
"KUBECONFIG": "openshift.local.registry/openshift-registry.kubeconfig",
"DOCKER_REGISTRY_URL": "127.0.0.1:5000",
}, "dockerregistry", "/atomic-registry-config.yml")
consumeAndLogOutputs(c.c, "registry", c.registry)
err = c.registry.Start()
c.c.Assert(err, check.IsNil)
"DOCKER_REGISTRY_URL": fmt.Sprintf("127.0.0.1:%d", port),
}, "dockerregistry", configPath)
consumeAndLogOutputs(c, fmt.Sprintf("registry-%d", port), cmd)
err := cmd.Start()
c.Assert(err, check.IsNil)
portOpen, terminatePortCheck := newPortChecker(c.c, 5000)
portOpen, terminatePortCheck := newPortChecker(c, port)
defer func() {
terminatePortCheck <- true
}()
c.c.Logf("Waiting for registry to start")
c.Logf("Waiting for registry to start")
<-portOpen
c.c.Logf("OK, Registry port open")
c.Logf("OK, Registry port open")
return cmd
}
// startRegistry starts the OpenShift registry and waits for it to be ready, or terminates on failure.
func (cluster *openshiftCluster) startRegistry(c *check.C) {
// Our “primary” registry
cluster.processes = append(cluster.processes, cluster.startRegistryProcess(c, 5000, "/atomic-registry-config.yml"))
// A registry configured with acceptschema2:false
schema1Config := fileFromFixture(c, "/atomic-registry-config.yml", map[string]string{
"addr: :5000": "addr: :5005",
"rootdirectory: /registry": "rootdirectory: /registry-schema1",
// The default configuration currently already contains acceptschema2: false
})
// Make sure the configuration contains "acceptschema2: false", because eventually it will be enabled upstream and this function will need to be updated.
configContents, err := ioutil.ReadFile(schema1Config)
c.Assert(err, check.IsNil)
c.Assert(string(configContents), check.Matches, "(?s).*acceptschema2: false.*")
cluster.processes = append(cluster.processes, cluster.startRegistryProcess(c, 5005, schema1Config))
// A registry configured with acceptschema2:true
schema2Config := fileFromFixture(c, "/atomic-registry-config.yml", map[string]string{
"addr: :5000": "addr: :5006",
"rootdirectory: /registry": "rootdirectory: /registry-schema2",
"acceptschema2: false": "acceptschema2: true",
})
cluster.processes = append(cluster.processes, cluster.startRegistryProcess(c, 5006, schema2Config))
}
// ocLogin runs (oc login) and (oc new-project) on the cluster, or terminates on failure.
func (c *openshiftCluster) ocLoginToProject() {
c.c.Logf("oc login")
cmd := c.clusterCmd(nil, "oc", "login", "--certificate-authority=openshift.local.config/master/ca.crt", "-u", "myuser", "-p", "mypw", "https://localhost:8443")
func (cluster *openshiftCluster) ocLoginToProject(c *check.C) {
c.Logf("oc login")
cmd := cluster.clusterCmd(nil, "oc", "login", "--certificate-authority=openshift.local.config/master/ca.crt", "-u", "myuser", "-p", "mypw", "https://localhost:8443")
out, err := cmd.CombinedOutput()
c.c.Assert(err, check.IsNil, check.Commentf("%s", out))
c.c.Assert(string(out), check.Matches, "(?s).*Login successful.*") // (?s) : '.' will also match newlines
c.Assert(err, check.IsNil, check.Commentf("%s", out))
c.Assert(string(out), check.Matches, "(?s).*Login successful.*") // (?s) : '.' will also match newlines
outString := combinedOutputOfCommand(c.c, "oc", "new-project", "myns")
c.c.Assert(outString, check.Matches, `(?s).*Now using project "myns".*`) // (?s) : '.' will also match newlines
outString := combinedOutputOfCommand(c, "oc", "new-project", "myns")
c.Assert(outString, check.Matches, `(?s).*Now using project "myns".*`) // (?s) : '.' will also match newlines
}
// dockerLogin simulates (docker login) to the cluster, or terminates on failure.
// We do not run (docker login) directly, because that requires a running daemon and a docker package.
func (c *openshiftCluster) dockerLogin() {
func (cluster *openshiftCluster) dockerLogin(c *check.C) {
dockerDir := filepath.Join(homedir.Get(), ".docker")
err := os.Mkdir(dockerDir, 0700)
c.c.Assert(err, check.IsNil)
c.Assert(err, check.IsNil)
out := combinedOutputOfCommand(c.c, "oc", "config", "view", "-o", "json", "-o", "jsonpath={.users[*].user.token}")
c.c.Logf("oc config value: %s", out)
configJSON := fmt.Sprintf(`{
"auths": {
"localhost:5000": {
out := combinedOutputOfCommand(c, "oc", "config", "view", "-o", "json", "-o", "jsonpath={.users[*].user.token}")
c.Logf("oc config value: %s", out)
authValue := base64.StdEncoding.EncodeToString([]byte("unused:" + out))
auths := []string{}
for _, port := range []int{5000, 5005, 5006} {
auths = append(auths, fmt.Sprintf(`"localhost:%d": {
"auth": "%s",
"email": "unused"
}
}
}`, base64.StdEncoding.EncodeToString([]byte("unused:"+out)))
}`, port, authValue))
}
configJSON := `{"auths": {` + strings.Join(auths, ",") + `}}`
err = ioutil.WriteFile(filepath.Join(dockerDir, "config.json"), []byte(configJSON), 0600)
c.c.Assert(err, check.IsNil)
c.Assert(err, check.IsNil)
}
// relaxImageSignerPermissions opens up the system:image-signer permissions so that
// anyone can work with signatures
// FIXME: This also allows anyone to DoS anyone else; this design is really not all
// that workable, but it is the best we can do for now.
func (c *openshiftCluster) relaxImageSignerPermissions() {
cmd := c.clusterCmd(adminKUBECONFIG, "oadm", "policy", "add-cluster-role-to-group", "system:image-signer", "system:authenticated")
func (cluster *openshiftCluster) relaxImageSignerPermissions(c *check.C) {
cmd := cluster.clusterCmd(adminKUBECONFIG, "oadm", "policy", "add-cluster-role-to-group", "system:image-signer", "system:authenticated")
out, err := cmd.CombinedOutput()
c.c.Assert(err, check.IsNil, check.Commentf("%s", string(out)))
c.c.Assert(string(out), check.Equals, "cluster role \"system:image-signer\" added: \"system:authenticated\"\n")
c.Assert(err, check.IsNil, check.Commentf("%s", string(out)))
c.Assert(string(out), check.Equals, "cluster role \"system:image-signer\" added: \"system:authenticated\"\n")
}
// tearDown stops the cluster services and deletes some (but not all!) of the state.
func (c *openshiftCluster) tearDown() {
if c.registry != nil && c.registry.Process != nil {
c.registry.Process.Kill()
func (cluster *openshiftCluster) tearDown(c *check.C) {
for i := len(cluster.processes) - 1; i >= 0; i-- {
cluster.processes[i].Process.Kill()
}
if c.master != nil && c.master.Process != nil {
c.master.Process.Kill()
}
if c.workingDir != "" {
os.RemoveAll(c.workingDir)
if cluster.workingDir != "" {
os.RemoveAll(cluster.workingDir)
}
}


@@ -13,23 +13,10 @@ import (
)
const (
binaryV1 = "docker-registry"
binaryV2 = "registry-v2"
binaryV2Schema1 = "registry-v2-schema1"
)
type testRegistryV1 struct {
cmd *exec.Cmd
url string
dir string
}
func setupRegistryV1At(c *check.C, url string, auth bool) *testRegistryV1 {
return &testRegistryV1{
url: url,
}
}
type testRegistryV2 struct {
cmd *exec.Cmd
url string
@@ -44,7 +31,7 @@ func setupRegistryV2At(c *check.C, url string, auth, schema1 bool) *testRegistry
c.Assert(err, check.IsNil)
// Wait for registry to be ready to serve requests.
for i := 0; i != 5; i++ {
for i := 0; i != 50; i++ {
if err = reg.Ping(); err == nil {
break
}
@@ -109,6 +96,7 @@ http:
}
cmd := exec.Command(binary, confPath)
consumeAndLogOutputs(c, fmt.Sprintf("registry-%s", url), cmd)
if err := cmd.Start(); err != nil {
os.RemoveAll(tmp)
if os.IsNotExist(err) {


@@ -36,15 +36,8 @@ func findFingerprint(lineBytes []byte) (string, error) {
return "", errors.New("No fingerprint found")
}
func (s *SigningSuite) SetUpTest(c *check.C) {
mech, _, err := signature.NewEphemeralGPGSigningMechanism([]byte{})
c.Assert(err, check.IsNil)
defer mech.Close()
if err := mech.SupportsSigning(); err != nil { // FIXME? Test that verification and policy enforcement works, using signatures from fixtures
c.Skip(fmt.Sprintf("Signing not supported: %v", err))
}
_, err = exec.LookPath(skopeoBinary)
func (s *SigningSuite) SetUpSuite(c *check.C) {
_, err := exec.LookPath(skopeoBinary)
c.Assert(err, check.IsNil)
s.gpgHome, err = ioutil.TempDir("", "skopeo-gpg")
@@ -59,7 +52,7 @@ func (s *SigningSuite) SetUpTest(c *check.C) {
c.Assert(err, check.IsNil)
}
func (s *SigningSuite) TearDownTest(c *check.C) {
func (s *SigningSuite) TearDownSuite(c *check.C) {
if s.gpgHome != "" {
err := os.RemoveAll(s.gpgHome)
c.Assert(err, check.IsNil)
@@ -70,6 +63,13 @@ func (s *SigningSuite) TearDownTest(c *check.C) {
}
func (s *SigningSuite) TestSignVerifySmoke(c *check.C) {
mech, _, err := signature.NewEphemeralGPGSigningMechanism([]byte{})
c.Assert(err, check.IsNil)
defer mech.Close()
if err := mech.SupportsSigning(); err != nil { // FIXME? Test that verification and policy enforcement works, using signatures from fixtures
c.Skip(fmt.Sprintf("Signing not supported: %v", err))
}
manifestPath := "fixtures/image.manifest.json"
dockerReference := "testing/smoketest"


@@ -1,7 +1,9 @@
package main
import (
"bytes"
"io"
"io/ioutil"
"net"
"os/exec"
"strings"
@@ -150,3 +152,25 @@ func modifyEnviron(env []string, name, value string) []string {
}
return append(res, prefix+value)
}
// fileFromFixture applies edits to inputPath and returns a path to the temporary file.
// Callers should defer os.Remove(the_returned_path)
func fileFromFixture(c *check.C, inputPath string, edits map[string]string) string {
contents, err := ioutil.ReadFile(inputPath)
c.Assert(err, check.IsNil)
for template, value := range edits {
updated := bytes.Replace(contents, []byte(template), []byte(value), -1)
c.Assert(bytes.Equal(updated, contents), check.Equals, false, check.Commentf("Replacing %s in %#v failed", template, string(contents))) // Verify that the template has matched something and we are not silently ignoring it.
contents = updated
}
file, err := ioutil.TempFile("", "policy.json")
c.Assert(err, check.IsNil)
path := file.Name()
_, err = file.Write(contents)
c.Assert(err, check.IsNil)
err = file.Close()
c.Assert(err, check.IsNil)
return path
}
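As a usage note, a hedged sketch of how startRegistry above consumes this helper; the port edit is illustrative only:

```go
// Derive a one-off registry config from the stock fixture; the caller owns
// the temporary file and should remove it when done.
configPath := fileFromFixture(c, "/atomic-registry-config.yml", map[string]string{
	"addr: :5000": "addr: :5007", // hypothetical port, for illustration
})
defer os.Remove(configPath)
```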


@@ -1,9 +1,9 @@
github.com/urfave/cli v1.17.0
github.com/containers/image master
github.com/containers/image f950aa3529148eb0dea90888c24b6682da641b13
github.com/opencontainers/go-digest master
gopkg.in/cheggaaa/pb.v1 ad4efe000aa550bb54918c06ebbadc0ff17687b9 https://github.com/cheggaaa/pb
github.com/containers/storage master
github.com/Sirupsen/logrus v0.10.0
github.com/sirupsen/logrus v1.0.0
github.com/go-check/check v1
github.com/stretchr/testify v1.1.3
github.com/davecgh/go-spew master
@@ -11,26 +11,29 @@ github.com/pmezard/go-difflib master
github.com/pkg/errors master
golang.org/x/crypto master
# docker deps from https://github.com/docker/docker/blob/v1.11.2/hack/vendor.sh
github.com/docker/docker v1.13.0
github.com/docker/go-connections 4ccf312bf1d35e5dbda654e57a9be4c3f3cd0366
github.com/vbatts/tar-split v0.10.1
github.com/docker/docker 30eb4d8cdc422b023d5f11f29a82ecb73554183b
github.com/docker/go-connections 3ede32e2033de7505e6500d6c868c2b9ed9f169d
github.com/vbatts/tar-split v0.10.2
github.com/gorilla/context 14f550f51a
github.com/gorilla/mux e444e69cbd
github.com/docker/go-units 8a7beacffa3009a9ac66bad506b18ffdd110cf97
golang.org/x/net master
github.com/gogo/protobuf fcdc5011193ff531a548e9b0301828d5a5b97fd8
# end docker deps
golang.org/x/text master
github.com/docker/distribution master
github.com/docker/libtrust master
github.com/docker/docker-credential-helpers d68f9aeca33f5fd3f08eeae5e9d175edf4e731d1
github.com/opencontainers/runc master
github.com/opencontainers/image-spec v1.0.0-rc4
github.com/opencontainers/image-spec v1.0.0
# -- start OCI image validation requirements.
github.com/opencontainers/runtime-spec v1.0.0-rc4
github.com/opencontainers/image-tools v0.1.0
github.com/opencontainers/runtime-spec v1.0.0
github.com/opencontainers/image-tools 6d941547fa1df31900990b3fb47ec2468c9c6469
github.com/xeipuuv/gojsonschema master
github.com/xeipuuv/gojsonreference master
github.com/xeipuuv/gojsonpointer master
go4.org master https://github.com/camlistore/go4
github.com/ostreedev/ostree-go aeb02c6b6aa2889db3ef62f7855650755befd460
# -- end OCI image validation requirements
github.com/mtrmac/gpgme master
# openshift/origin' k8s dependencies as of OpenShift v1.1.5
@@ -43,3 +46,7 @@ github.com/imdario/mergo 6633656539c1639d9d78127b7d47c622b5d7b6dc
github.com/mistifyio/go-zfs 22c9b32c84eb0d0c6f4043b6e90fc94073de92fa
github.com/pborman/uuid v1.0
github.com/opencontainers/selinux master
golang.org/x/sys master
github.com/tchap/go-patricia v2.2.6
github.com/BurntSushi/toml master
github.com/pquerna/ffjson d49c2bc1aa135aad0c6f4fc2056623ec78f5d5ac

vendor/github.com/BurntSushi/toml/COPYING generated vendored Normal file

@@ -0,0 +1,21 @@
The MIT License (MIT)
Copyright (c) 2013 TOML authors
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.

vendor/github.com/BurntSushi/toml/README.md generated vendored Normal file

@@ -0,0 +1,218 @@
## TOML parser and encoder for Go with reflection
TOML stands for Tom's Obvious, Minimal Language. This Go package provides a
reflection interface similar to Go's standard library `json` and `xml`
packages. This package also supports the `encoding.TextUnmarshaler` and
`encoding.TextMarshaler` interfaces so that you can define custom data
representations. (There is an example of this below.)
Spec: https://github.com/toml-lang/toml
Compatible with TOML version
[v0.4.0](https://github.com/toml-lang/toml/blob/master/versions/en/toml-v0.4.0.md)
Documentation: https://godoc.org/github.com/BurntSushi/toml
Installation:
```bash
go get github.com/BurntSushi/toml
```
Try the toml validator:
```bash
go get github.com/BurntSushi/toml/cmd/tomlv
tomlv some-toml-file.toml
```
[![Build Status](https://travis-ci.org/BurntSushi/toml.svg?branch=master)](https://travis-ci.org/BurntSushi/toml) [![GoDoc](https://godoc.org/github.com/BurntSushi/toml?status.svg)](https://godoc.org/github.com/BurntSushi/toml)
### Testing
This package passes all tests in
[toml-test](https://github.com/BurntSushi/toml-test) for both the decoder
and the encoder.
### Examples
This package works similarly to how the Go standard library handles `XML`
and `JSON`. Namely, data is loaded into Go values via reflection.
For the simplest example, consider some TOML file as just a list of keys
and values:
```toml
Age = 25
Cats = [ "Cauchy", "Plato" ]
Pi = 3.14
Perfection = [ 6, 28, 496, 8128 ]
DOB = 1987-07-05T05:45:00Z
```
Which could be defined in Go as:
```go
type Config struct {
Age int
Cats []string
Pi float64
Perfection []int
DOB time.Time // requires `import time`
}
```
And then decoded with:
```go
var conf Config
if _, err := toml.Decode(tomlData, &conf); err != nil {
// handle error
}
```
You can also use struct tags if your struct field name doesn't map to a TOML
key value directly:
```toml
some_key_NAME = "wat"
```
```go
type TOML struct {
ObscureKey string `toml:"some_key_NAME"`
}
```
### Using the `encoding.TextUnmarshaler` interface
Here's an example that automatically parses duration strings into
`time.Duration` values:
```toml
[[song]]
name = "Thunder Road"
duration = "4m49s"
[[song]]
name = "Stairway to Heaven"
duration = "8m03s"
```
Which can be decoded with:
```go
type song struct {
Name string
Duration duration
}
type songs struct {
Song []song
}
var favorites songs
if _, err := toml.Decode(blob, &favorites); err != nil {
log.Fatal(err)
}
for _, s := range favorites.Song {
fmt.Printf("%s (%s)\n", s.Name, s.Duration)
}
```
And you'll also need a `duration` type that satisfies the
`encoding.TextUnmarshaler` interface:
```go
type duration struct {
time.Duration
}
func (d *duration) UnmarshalText(text []byte) error {
var err error
d.Duration, err = time.ParseDuration(string(text))
return err
}
```
### More complex usage
Here's an example of how to load the example from the official spec page:
```toml
# This is a TOML document. Boom.
title = "TOML Example"
[owner]
name = "Tom Preston-Werner"
organization = "GitHub"
bio = "GitHub Cofounder & CEO\nLikes tater tots and beer."
dob = 1979-05-27T07:32:00Z # First class dates? Why not?
[database]
server = "192.168.1.1"
ports = [ 8001, 8001, 8002 ]
connection_max = 5000
enabled = true
[servers]
# You can indent as you please. Tabs or spaces. TOML don't care.
[servers.alpha]
ip = "10.0.0.1"
dc = "eqdc10"
[servers.beta]
ip = "10.0.0.2"
dc = "eqdc10"
[clients]
data = [ ["gamma", "delta"], [1, 2] ] # just an update to make sure parsers support it
# Line breaks are OK when inside arrays
hosts = [
"alpha",
"omega"
]
```
And the corresponding Go types are:
```go
type tomlConfig struct {
Title string
Owner ownerInfo
DB database `toml:"database"`
Servers map[string]server
Clients clients
}
type ownerInfo struct {
Name string
Org string `toml:"organization"`
Bio string
DOB time.Time
}
type database struct {
Server string
Ports []int
ConnMax int `toml:"connection_max"`
Enabled bool
}
type server struct {
IP string
DC string
}
type clients struct {
Data [][]interface{}
Hosts []string
}
```
Note that a case insensitive match will be tried if an exact match can't be
found.
A working example of the above can be found in `_examples/example.{go,toml}`.

vendor/github.com/BurntSushi/toml/decode.go generated vendored Normal file

@@ -0,0 +1,509 @@
package toml
import (
"fmt"
"io"
"io/ioutil"
"math"
"reflect"
"strings"
"time"
)
func e(format string, args ...interface{}) error {
return fmt.Errorf("toml: "+format, args...)
}
// Unmarshaler is the interface implemented by objects that can unmarshal a
// TOML description of themselves.
type Unmarshaler interface {
UnmarshalTOML(interface{}) error
}
// Unmarshal decodes the contents of `p` in TOML format into a pointer `v`.
func Unmarshal(p []byte, v interface{}) error {
_, err := Decode(string(p), v)
return err
}
// Primitive is a TOML value that hasn't been decoded into a Go value.
// When using the various `Decode*` functions, the type `Primitive` may
// be given to any value, and its decoding will be delayed.
//
// A `Primitive` value can be decoded using the `PrimitiveDecode` function.
//
// The underlying representation of a `Primitive` value is subject to change.
// Do not rely on it.
//
// N.B. Primitive values are still parsed, so using them will only avoid
// the overhead of reflection. They can be useful when you don't know the
// exact type of TOML data until run time.
type Primitive struct {
undecoded interface{}
context Key
}
// DEPRECATED!
//
// Use MetaData.PrimitiveDecode instead.
func PrimitiveDecode(primValue Primitive, v interface{}) error {
md := MetaData{decoded: make(map[string]bool)}
return md.unify(primValue.undecoded, rvalue(v))
}
// PrimitiveDecode is just like the other `Decode*` functions, except it
// decodes a TOML value that has already been parsed. Valid primitive values
// can *only* be obtained from values filled by the decoder functions,
// including this method. (i.e., `v` may contain more `Primitive`
// values.)
//
// Meta data for primitive values is included in the meta data returned by
// the `Decode*` functions with one exception: keys returned by the Undecoded
// method will only reflect keys that were decoded. Namely, any keys hidden
// behind a Primitive will be considered undecoded. Executing this method will
// update the undecoded keys in the meta data. (See the example.)
func (md *MetaData) PrimitiveDecode(primValue Primitive, v interface{}) error {
md.context = primValue.context
defer func() { md.context = nil }()
return md.unify(primValue.undecoded, rvalue(v))
}
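A hedged usage sketch of delayed decoding with Primitive, assuming a document with a [server] table and a main with log imported:

```go
var doc struct {
	Server toml.Primitive
}
md, err := toml.Decode(data, &doc) // data holds the TOML document
if err != nil {
	log.Fatal(err)
}
// Decode the delayed sub-table only once its concrete shape is known.
var server struct{ IP string }
if err := md.PrimitiveDecode(doc.Server, &server); err != nil {
	log.Fatal(err)
}
```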
// Decode will decode the contents of `data` in TOML format into a pointer
// `v`.
//
// TOML hashes correspond to Go structs or maps. (Dealer's choice. They can be
// used interchangeably.)
//
// TOML arrays of tables correspond to either a slice of structs or a slice
// of maps.
//
// TOML datetimes correspond to Go `time.Time` values.
//
// All other TOML types (float, string, int, bool and array) correspond
// to the obvious Go types.
//
// An exception to the above rules is if a type implements the
// encoding.TextUnmarshaler interface. In this case, any primitive TOML value
// (floats, strings, integers, booleans and datetimes) will be converted to
// a byte string and given to the value's UnmarshalText method. See the
// Unmarshaler example for a demonstration with time duration strings.
//
// Key mapping
//
// TOML keys can map to either keys in a Go map or field names in a Go
// struct. The special `toml` struct tag may be used to map TOML keys to
// struct fields that don't match the key name exactly. (See the example.)
// A case insensitive match to struct names will be tried if an exact match
// can't be found.
//
// The mapping between TOML values and Go values is loose. That is, there
// may exist TOML values that cannot be placed into your representation, and
// there may be parts of your representation that do not correspond to
// TOML values. This loose mapping can be made stricter by using the IsDefined
// and/or Undecoded methods on the MetaData returned.
//
// This decoder will not handle cyclic types. If a cyclic type is passed,
// `Decode` will not terminate.
func Decode(data string, v interface{}) (MetaData, error) {
rv := reflect.ValueOf(v)
if rv.Kind() != reflect.Ptr {
return MetaData{}, e("Decode of non-pointer %s", reflect.TypeOf(v))
}
if rv.IsNil() {
return MetaData{}, e("Decode of nil %s", reflect.TypeOf(v))
}
p, err := parse(data)
if err != nil {
return MetaData{}, err
}
md := MetaData{
p.mapping, p.types, p.ordered,
make(map[string]bool, len(p.ordered)), nil,
}
return md, md.unify(p.mapping, indirect(rv))
}
// DecodeFile is just like Decode, except it will automatically read the
// contents of the file at `fpath` and decode it for you.
func DecodeFile(fpath string, v interface{}) (MetaData, error) {
bs, err := ioutil.ReadFile(fpath)
if err != nil {
return MetaData{}, err
}
return Decode(string(bs), v)
}
// DecodeReader is just like Decode, except it will consume all bytes
// from the reader and decode it for you.
func DecodeReader(r io.Reader, v interface{}) (MetaData, error) {
bs, err := ioutil.ReadAll(r)
if err != nil {
return MetaData{}, err
}
return Decode(string(bs), v)
}
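A minimal usage sketch of DecodeFile, assuming a config.toml containing the keys shown:

```go
var conf struct {
	Age  int
	Cats []string
}
// DecodeFile reads the file at the given path and decodes it like Decode.
if _, err := toml.DecodeFile("config.toml", &conf); err != nil {
	log.Fatal(err)
}
```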
// unify performs a sort of type unification based on the structure of `rv`,
// which is the client representation.
//
// Any type mismatch produces an error. Finding a type that we don't know
// how to handle produces an unsupported type error.
func (md *MetaData) unify(data interface{}, rv reflect.Value) error {
// Special case. Look for a `Primitive` value.
if rv.Type() == reflect.TypeOf((*Primitive)(nil)).Elem() {
// Save the undecoded data and the key context into the primitive
// value.
context := make(Key, len(md.context))
copy(context, md.context)
rv.Set(reflect.ValueOf(Primitive{
undecoded: data,
context: context,
}))
return nil
}
// Special case. Unmarshaler Interface support.
if rv.CanAddr() {
if v, ok := rv.Addr().Interface().(Unmarshaler); ok {
return v.UnmarshalTOML(data)
}
}
// Special case. Handle time.Time values specifically.
// TODO: Remove this code when we decide to drop support for Go 1.1.
// This isn't necessary in Go 1.2 because time.Time satisfies the encoding
// interfaces.
if rv.Type().AssignableTo(rvalue(time.Time{}).Type()) {
return md.unifyDatetime(data, rv)
}
// Special case. Look for a value satisfying the TextUnmarshaler interface.
if v, ok := rv.Interface().(TextUnmarshaler); ok {
return md.unifyText(data, v)
}
// BUG(burntsushi)
// The behavior here is incorrect whenever a Go type satisfies the
// encoding.TextUnmarshaler interface but also corresponds to a TOML
// hash or array. In particular, the unmarshaler should only be applied
// to primitive TOML values. But at this point, it will be applied to
// all kinds of values and produce an incorrect error whenever those values
// are hashes or arrays (including arrays of tables).
k := rv.Kind()
// laziness
if k >= reflect.Int && k <= reflect.Uint64 {
return md.unifyInt(data, rv)
}
switch k {
case reflect.Ptr:
elem := reflect.New(rv.Type().Elem())
err := md.unify(data, reflect.Indirect(elem))
if err != nil {
return err
}
rv.Set(elem)
return nil
case reflect.Struct:
return md.unifyStruct(data, rv)
case reflect.Map:
return md.unifyMap(data, rv)
case reflect.Array:
return md.unifyArray(data, rv)
case reflect.Slice:
return md.unifySlice(data, rv)
case reflect.String:
return md.unifyString(data, rv)
case reflect.Bool:
return md.unifyBool(data, rv)
case reflect.Interface:
// we only support empty interfaces.
if rv.NumMethod() > 0 {
return e("unsupported type %s", rv.Type())
}
return md.unifyAnything(data, rv)
case reflect.Float32:
fallthrough
case reflect.Float64:
return md.unifyFloat64(data, rv)
}
return e("unsupported type %s", rv.Kind())
}
func (md *MetaData) unifyStruct(mapping interface{}, rv reflect.Value) error {
tmap, ok := mapping.(map[string]interface{})
if !ok {
if mapping == nil {
return nil
}
return e("type mismatch for %s: expected table but found %T",
rv.Type().String(), mapping)
}
for key, datum := range tmap {
var f *field
fields := cachedTypeFields(rv.Type())
for i := range fields {
ff := &fields[i]
if ff.name == key {
f = ff
break
}
if f == nil && strings.EqualFold(ff.name, key) {
f = ff
}
}
if f != nil {
subv := rv
for _, i := range f.index {
subv = indirect(subv.Field(i))
}
if isUnifiable(subv) {
md.decoded[md.context.add(key).String()] = true
md.context = append(md.context, key)
if err := md.unify(datum, subv); err != nil {
return err
}
md.context = md.context[0 : len(md.context)-1]
} else if f.name != "" {
// Bad user! No soup for you!
return e("cannot write unexported field %s.%s",
rv.Type().String(), f.name)
}
}
}
return nil
}
func (md *MetaData) unifyMap(mapping interface{}, rv reflect.Value) error {
tmap, ok := mapping.(map[string]interface{})
if !ok {
if tmap == nil {
return nil
}
return badtype("map", mapping)
}
if rv.IsNil() {
rv.Set(reflect.MakeMap(rv.Type()))
}
for k, v := range tmap {
md.decoded[md.context.add(k).String()] = true
md.context = append(md.context, k)
rvkey := indirect(reflect.New(rv.Type().Key()))
rvval := reflect.Indirect(reflect.New(rv.Type().Elem()))
if err := md.unify(v, rvval); err != nil {
return err
}
md.context = md.context[0 : len(md.context)-1]
rvkey.SetString(k)
rv.SetMapIndex(rvkey, rvval)
}
return nil
}
func (md *MetaData) unifyArray(data interface{}, rv reflect.Value) error {
datav := reflect.ValueOf(data)
if datav.Kind() != reflect.Slice {
if !datav.IsValid() {
return nil
}
return badtype("slice", data)
}
sliceLen := datav.Len()
if sliceLen != rv.Len() {
return e("expected array length %d; got TOML array of length %d",
rv.Len(), sliceLen)
}
return md.unifySliceArray(datav, rv)
}
func (md *MetaData) unifySlice(data interface{}, rv reflect.Value) error {
datav := reflect.ValueOf(data)
if datav.Kind() != reflect.Slice {
if !datav.IsValid() {
return nil
}
return badtype("slice", data)
}
n := datav.Len()
if rv.IsNil() || rv.Cap() < n {
rv.Set(reflect.MakeSlice(rv.Type(), n, n))
}
rv.SetLen(n)
return md.unifySliceArray(datav, rv)
}
func (md *MetaData) unifySliceArray(data, rv reflect.Value) error {
sliceLen := data.Len()
for i := 0; i < sliceLen; i++ {
v := data.Index(i).Interface()
sliceval := indirect(rv.Index(i))
if err := md.unify(v, sliceval); err != nil {
return err
}
}
return nil
}
func (md *MetaData) unifyDatetime(data interface{}, rv reflect.Value) error {
if _, ok := data.(time.Time); ok {
rv.Set(reflect.ValueOf(data))
return nil
}
return badtype("time.Time", data)
}
func (md *MetaData) unifyString(data interface{}, rv reflect.Value) error {
if s, ok := data.(string); ok {
rv.SetString(s)
return nil
}
return badtype("string", data)
}
func (md *MetaData) unifyFloat64(data interface{}, rv reflect.Value) error {
if num, ok := data.(float64); ok {
switch rv.Kind() {
case reflect.Float32:
fallthrough
case reflect.Float64:
rv.SetFloat(num)
default:
panic("bug")
}
return nil
}
return badtype("float", data)
}
func (md *MetaData) unifyInt(data interface{}, rv reflect.Value) error {
if num, ok := data.(int64); ok {
if rv.Kind() >= reflect.Int && rv.Kind() <= reflect.Int64 {
switch rv.Kind() {
case reflect.Int, reflect.Int64:
// No bounds checking necessary.
case reflect.Int8:
if num < math.MinInt8 || num > math.MaxInt8 {
return e("value %d is out of range for int8", num)
}
case reflect.Int16:
if num < math.MinInt16 || num > math.MaxInt16 {
return e("value %d is out of range for int16", num)
}
case reflect.Int32:
if num < math.MinInt32 || num > math.MaxInt32 {
return e("value %d is out of range for int32", num)
}
}
rv.SetInt(num)
} else if rv.Kind() >= reflect.Uint && rv.Kind() <= reflect.Uint64 {
unum := uint64(num)
switch rv.Kind() {
case reflect.Uint, reflect.Uint64:
// No bounds checking necessary.
case reflect.Uint8:
if num < 0 || unum > math.MaxUint8 {
return e("value %d is out of range for uint8", num)
}
case reflect.Uint16:
if num < 0 || unum > math.MaxUint16 {
return e("value %d is out of range for uint16", num)
}
case reflect.Uint32:
if num < 0 || unum > math.MaxUint32 {
return e("value %d is out of range for uint32", num)
}
}
rv.SetUint(unum)
} else {
panic("unreachable")
}
return nil
}
return badtype("integer", data)
}
func (md *MetaData) unifyBool(data interface{}, rv reflect.Value) error {
if b, ok := data.(bool); ok {
rv.SetBool(b)
return nil
}
return badtype("boolean", data)
}
func (md *MetaData) unifyAnything(data interface{}, rv reflect.Value) error {
rv.Set(reflect.ValueOf(data))
return nil
}
func (md *MetaData) unifyText(data interface{}, v TextUnmarshaler) error {
var s string
switch sdata := data.(type) {
case TextMarshaler:
text, err := sdata.MarshalText()
if err != nil {
return err
}
s = string(text)
case fmt.Stringer:
s = sdata.String()
case string:
s = sdata
case bool:
s = fmt.Sprintf("%v", sdata)
case int64:
s = fmt.Sprintf("%d", sdata)
case float64:
s = fmt.Sprintf("%f", sdata)
default:
return badtype("primitive (string-like)", data)
}
if err := v.UnmarshalText([]byte(s)); err != nil {
return err
}
return nil
}
// rvalue returns a reflect.Value of `v`. All pointers are resolved.
func rvalue(v interface{}) reflect.Value {
return indirect(reflect.ValueOf(v))
}
// indirect returns the value pointed to by a pointer.
// Pointers are followed until the value is not a pointer.
// New values are allocated for each nil pointer.
//
// An exception to this rule is if the value satisfies an interface of
// interest to us (like encoding.TextUnmarshaler).
func indirect(v reflect.Value) reflect.Value {
if v.Kind() != reflect.Ptr {
if v.CanSet() {
pv := v.Addr()
if _, ok := pv.Interface().(TextUnmarshaler); ok {
return pv
}
}
return v
}
if v.IsNil() {
v.Set(reflect.New(v.Type().Elem()))
}
return indirect(reflect.Indirect(v))
}
func isUnifiable(rv reflect.Value) bool {
if rv.CanSet() {
return true
}
if _, ok := rv.Interface().(TextUnmarshaler); ok {
return true
}
return false
}
func badtype(expected string, data interface{}) error {
return e("cannot load TOML value of type %T into a Go %s", data, expected)
}

vendor/github.com/BurntSushi/toml/decode_meta.go generated vendored Normal file

@@ -0,0 +1,121 @@
package toml
import "strings"
// MetaData allows access to meta information about TOML data that may not
// be inferrable via reflection. In particular, whether a key has been defined
// and the TOML type of a key.
type MetaData struct {
mapping map[string]interface{}
types map[string]tomlType
keys []Key
decoded map[string]bool
context Key // Used only during decoding.
}
// IsDefined returns true if the key given exists in the TOML data. The key
// should be specified hierarchically. e.g.,
//
// // access the TOML key 'a.b.c'
// IsDefined("a", "b", "c")
//
// IsDefined will return false if an empty key is given. Keys are case sensitive.
func (md *MetaData) IsDefined(key ...string) bool {
if len(key) == 0 {
return false
}
var hash map[string]interface{}
var ok bool
var hashOrVal interface{} = md.mapping
for _, k := range key {
if hash, ok = hashOrVal.(map[string]interface{}); !ok {
return false
}
if hashOrVal, ok = hash[k]; !ok {
return false
}
}
return true
}
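A short sketch of IsDefined against the spec example from this package's README; md is assumed to be the MetaData returned by a Decode* call:

```go
// True only if the document explicitly set servers.alpha.ip.
if md.IsDefined("servers", "alpha", "ip") {
	fmt.Println("alpha has an explicit IP")
}
```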
// Type returns a string representation of the type of the key specified.
//
// Type will return the empty string if given an empty key or a key that
// does not exist. Keys are case sensitive.
func (md *MetaData) Type(key ...string) string {
fullkey := strings.Join(key, ".")
if typ, ok := md.types[fullkey]; ok {
return typ.typeString()
}
return ""
}
// Key is the type of any TOML key, including key groups. Use (MetaData).Keys
// to get values of this type.
type Key []string
func (k Key) String() string {
return strings.Join(k, ".")
}
func (k Key) maybeQuotedAll() string {
var ss []string
for i := range k {
ss = append(ss, k.maybeQuoted(i))
}
return strings.Join(ss, ".")
}
func (k Key) maybeQuoted(i int) string {
quote := false
for _, c := range k[i] {
if !isBareKeyChar(c) {
quote = true
break
}
}
if quote {
return "\"" + strings.Replace(k[i], "\"", "\\\"", -1) + "\""
}
return k[i]
}
func (k Key) add(piece string) Key {
newKey := make(Key, len(k)+1)
copy(newKey, k)
newKey[len(k)] = piece
return newKey
}
// Keys returns a slice of every key in the TOML data, including key groups.
// Each key is itself a slice, where the first element is the top of the
// hierarchy and the last is the most specific.
//
// The list will have the same order as the keys appeared in the TOML data.
//
// All keys returned are non-empty.
func (md *MetaData) Keys() []Key {
return md.keys
}
// Undecoded returns all keys that have not been decoded in the order in which
// they appear in the original TOML document.
//
// This includes keys that haven't been decoded because of a Primitive value.
// Once the Primitive value is decoded, the keys will be considered decoded.
//
// Also note that decoding into an empty interface will result in no decoding,
// and so no keys will be considered decoded.
//
// In this sense, the Undecoded keys correspond to keys in the TOML document
// that do not have a concrete type in your representation.
func (md *MetaData) Undecoded() []Key {
undecoded := make([]Key, 0, len(md.keys))
for _, key := range md.keys {
if !md.decoded[key.String()] {
undecoded = append(undecoded, key)
}
}
return undecoded
}
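A hedged sketch of using Undecoded to surface keys the target struct ignored, assuming data and conf as in the earlier Decode sketch:

```go
md, err := toml.Decode(data, &conf)
if err != nil {
	log.Fatal(err)
}
for _, key := range md.Undecoded() {
	log.Printf("undecoded TOML key: %s", key) // Key implements String()
}
```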

vendor/github.com/BurntSushi/toml/doc.go generated vendored Normal file

@@ -0,0 +1,27 @@
/*
Package toml provides facilities for decoding and encoding TOML configuration
files via reflection. There is also support for delaying decoding with
the Primitive type, and querying the set of keys in a TOML document with the
MetaData type.
The specification implemented: https://github.com/toml-lang/toml
The sub-command github.com/BurntSushi/toml/cmd/tomlv can be used to verify
whether a file is a valid TOML document. It can also be used to print the
type of each key in a TOML document.
Testing
There are two important types of tests used for this package. The first is
contained inside '*_test.go' files and uses the standard Go unit testing
framework. These tests are primarily devoted to holistically testing the
decoder and encoder.
The second type of testing is used to verify the implementation's adherence
to the TOML specification. These tests have been factored into their own
project: https://github.com/BurntSushi/toml-test
The tests are kept in a separate project so that they can be used by any
implementation of TOML; the suite itself is language agnostic.
*/
package toml

vendor/github.com/BurntSushi/toml/encode.go generated vendored Normal file

@@ -0,0 +1,568 @@
package toml
import (
"bufio"
"errors"
"fmt"
"io"
"reflect"
"sort"
"strconv"
"strings"
"time"
)
type tomlEncodeError struct{ error }
var (
errArrayMixedElementTypes = errors.New(
"toml: cannot encode array with mixed element types")
errArrayNilElement = errors.New(
"toml: cannot encode array with nil element")
errNonString = errors.New(
"toml: cannot encode a map with non-string key type")
errAnonNonStruct = errors.New(
"toml: cannot encode an anonymous field that is not a struct")
errArrayNoTable = errors.New(
"toml: TOML array element cannot contain a table")
errNoKey = errors.New(
"toml: top-level values must be Go maps or structs")
errAnything = errors.New("") // used in testing
)
var quotedReplacer = strings.NewReplacer(
"\t", "\\t",
"\n", "\\n",
"\r", "\\r",
"\"", "\\\"",
"\\", "\\\\",
)
// Encoder controls the encoding of Go values to a TOML document to some
// io.Writer.
//
// The indentation level can be controlled with the Indent field.
type Encoder struct {
// A single indentation level. By default it is two spaces.
Indent string
// hasWritten is whether we have written any output to w yet.
hasWritten bool
w *bufio.Writer
}
// NewEncoder returns a TOML encoder that encodes Go values to the io.Writer
// given. By default, a single indentation level is 2 spaces.
func NewEncoder(w io.Writer) *Encoder {
return &Encoder{
w: bufio.NewWriter(w),
Indent: " ",
}
}
// Encode writes a TOML representation of the Go value to the underlying
// io.Writer. If the value given cannot be encoded to a valid TOML document,
// then an error is returned.
//
// The mapping between Go values and TOML values should be precisely the same
// as for the Decode* functions. Similarly, the TextMarshaler interface is
// supported by encoding the resulting bytes as strings. (If you want to write
// arbitrary binary data then you will need to use something like base64 since
// TOML does not have any binary types.)
//
// When encoding TOML hashes (i.e., Go maps or structs), keys without any
// sub-hashes are encoded first.
//
// If a Go map is encoded, then its keys are sorted alphabetically for
// deterministic output. More control over this behavior may be provided if
// there is demand for it.
//
// Encoding Go values without a corresponding TOML representation---like map
// types with non-string keys---will cause an error to be returned. Similarly
// for mixed arrays/slices, arrays/slices with nil elements, embedded
// non-struct types and nested slices containing maps or structs.
// (e.g., [][]map[string]string is not allowed but []map[string]string is OK
// and so is []map[string][]string.)
func (enc *Encoder) Encode(v interface{}) error {
rv := eindirect(reflect.ValueOf(v))
if err := enc.safeEncode(Key([]string{}), rv); err != nil {
return err
}
return enc.w.Flush()
}
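A minimal usage sketch of the encoder; conf is any encodable Go value, and os and log are assumed imports:

```go
// Write conf as a TOML document to stdout with the default two-space indent.
if err := toml.NewEncoder(os.Stdout).Encode(conf); err != nil {
	log.Fatal(err)
}
```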
func (enc *Encoder) safeEncode(key Key, rv reflect.Value) (err error) {
defer func() {
if r := recover(); r != nil {
if terr, ok := r.(tomlEncodeError); ok {
err = terr.error
return
}
panic(r)
}
}()
enc.encode(key, rv)
return nil
}
func (enc *Encoder) encode(key Key, rv reflect.Value) {
// Special case. Time needs to be in ISO8601 format.
// Special case. If we can marshal the type to text, then we use that.
// Basically, this prevents the encoder from handling these types as
// generic structs (or whatever the underlying type of a TextMarshaler is).
switch rv.Interface().(type) {
case time.Time, TextMarshaler:
enc.keyEqElement(key, rv)
return
}
k := rv.Kind()
switch k {
case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32,
reflect.Int64,
reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32,
reflect.Uint64,
reflect.Float32, reflect.Float64, reflect.String, reflect.Bool:
enc.keyEqElement(key, rv)
case reflect.Array, reflect.Slice:
if typeEqual(tomlArrayHash, tomlTypeOfGo(rv)) {
enc.eArrayOfTables(key, rv)
} else {
enc.keyEqElement(key, rv)
}
case reflect.Interface:
if rv.IsNil() {
return
}
enc.encode(key, rv.Elem())
case reflect.Map:
if rv.IsNil() {
return
}
enc.eTable(key, rv)
case reflect.Ptr:
if rv.IsNil() {
return
}
enc.encode(key, rv.Elem())
case reflect.Struct:
enc.eTable(key, rv)
default:
panic(e("unsupported type for key '%s': %s", key, k))
}
}
// eElement encodes any value that can be an array element (primitives and
// arrays).
func (enc *Encoder) eElement(rv reflect.Value) {
switch v := rv.Interface().(type) {
case time.Time:
// Special case time.Time as a primitive. Has to come before
// TextMarshaler below because time.Time implements
// encoding.TextMarshaler, but we need to always use UTC.
enc.wf(v.UTC().Format("2006-01-02T15:04:05Z"))
return
case TextMarshaler:
// Special case. Use text marshaler if it's available for this value.
if s, err := v.MarshalText(); err != nil {
encPanic(err)
} else {
enc.writeQuoted(string(s))
}
return
}
switch rv.Kind() {
case reflect.Bool:
enc.wf(strconv.FormatBool(rv.Bool()))
case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32,
reflect.Int64:
enc.wf(strconv.FormatInt(rv.Int(), 10))
case reflect.Uint, reflect.Uint8, reflect.Uint16,
reflect.Uint32, reflect.Uint64:
enc.wf(strconv.FormatUint(rv.Uint(), 10))
case reflect.Float32:
enc.wf(floatAddDecimal(strconv.FormatFloat(rv.Float(), 'f', -1, 32)))
case reflect.Float64:
enc.wf(floatAddDecimal(strconv.FormatFloat(rv.Float(), 'f', -1, 64)))
case reflect.Array, reflect.Slice:
enc.eArrayOrSliceElement(rv)
case reflect.Interface:
enc.eElement(rv.Elem())
case reflect.String:
enc.writeQuoted(rv.String())
default:
panic(e("unexpected primitive type: %s", rv.Kind()))
}
}
// By the TOML spec, all floats must have a decimal with at least one
// number on either side.
func floatAddDecimal(fstr string) string {
if !strings.Contains(fstr, ".") {
return fstr + ".0"
}
return fstr
}
func (enc *Encoder) writeQuoted(s string) {
enc.wf("\"%s\"", quotedReplacer.Replace(s))
}
func (enc *Encoder) eArrayOrSliceElement(rv reflect.Value) {
length := rv.Len()
enc.wf("[")
for i := 0; i < length; i++ {
elem := rv.Index(i)
enc.eElement(elem)
if i != length-1 {
enc.wf(", ")
}
}
enc.wf("]")
}
func (enc *Encoder) eArrayOfTables(key Key, rv reflect.Value) {
if len(key) == 0 {
encPanic(errNoKey)
}
for i := 0; i < rv.Len(); i++ {
trv := rv.Index(i)
if isNil(trv) {
continue
}
panicIfInvalidKey(key)
enc.newline()
enc.wf("%s[[%s]]", enc.indentStr(key), key.maybeQuotedAll())
enc.newline()
enc.eMapOrStruct(key, trv)
}
}
func (enc *Encoder) eTable(key Key, rv reflect.Value) {
panicIfInvalidKey(key)
if len(key) == 1 {
// Output an extra newline between top-level tables.
// (The newline isn't written if nothing else has been written though.)
enc.newline()
}
if len(key) > 0 {
enc.wf("%s[%s]", enc.indentStr(key), key.maybeQuotedAll())
enc.newline()
}
enc.eMapOrStruct(key, rv)
}
func (enc *Encoder) eMapOrStruct(key Key, rv reflect.Value) {
switch rv := eindirect(rv); rv.Kind() {
case reflect.Map:
enc.eMap(key, rv)
case reflect.Struct:
enc.eStruct(key, rv)
default:
panic("eTable: unhandled reflect.Value Kind: " + rv.Kind().String())
}
}
func (enc *Encoder) eMap(key Key, rv reflect.Value) {
rt := rv.Type()
if rt.Key().Kind() != reflect.String {
encPanic(errNonString)
}
// Sort keys so that we have deterministic output. And write keys directly
// underneath this key first, before writing sub-structs or sub-maps.
var mapKeysDirect, mapKeysSub []string
for _, mapKey := range rv.MapKeys() {
k := mapKey.String()
if typeIsHash(tomlTypeOfGo(rv.MapIndex(mapKey))) {
mapKeysSub = append(mapKeysSub, k)
} else {
mapKeysDirect = append(mapKeysDirect, k)
}
}
var writeMapKeys = func(mapKeys []string) {
sort.Strings(mapKeys)
for _, mapKey := range mapKeys {
mrv := rv.MapIndex(reflect.ValueOf(mapKey))
if isNil(mrv) {
// Don't write anything for nil fields.
continue
}
enc.encode(key.add(mapKey), mrv)
}
}
writeMapKeys(mapKeysDirect)
writeMapKeys(mapKeysSub)
}
func (enc *Encoder) eStruct(key Key, rv reflect.Value) {
// Write keys for fields directly under this key first, because if we write
// a field that creates a new table, then all keys under it will be in that
// table (not the one we're writing here).
rt := rv.Type()
var fieldsDirect, fieldsSub [][]int
var addFields func(rt reflect.Type, rv reflect.Value, start []int)
addFields = func(rt reflect.Type, rv reflect.Value, start []int) {
for i := 0; i < rt.NumField(); i++ {
f := rt.Field(i)
// skip unexported fields
if f.PkgPath != "" && !f.Anonymous {
continue
}
frv := rv.Field(i)
if f.Anonymous {
t := f.Type
switch t.Kind() {
case reflect.Struct:
// Treat anonymous struct fields with
// tag names as though they are not
// anonymous, like encoding/json does.
if getOptions(f.Tag).name == "" {
addFields(t, frv, f.Index)
continue
}
case reflect.Ptr:
if t.Elem().Kind() == reflect.Struct &&
getOptions(f.Tag).name == "" {
if !frv.IsNil() {
addFields(t.Elem(), frv.Elem(), f.Index)
}
continue
}
// Fall through to the normal field encoding logic below
// for non-struct anonymous fields.
}
}
if typeIsHash(tomlTypeOfGo(frv)) {
fieldsSub = append(fieldsSub, append(start, f.Index...))
} else {
fieldsDirect = append(fieldsDirect, append(start, f.Index...))
}
}
}
addFields(rt, rv, nil)
var writeFields = func(fields [][]int) {
for _, fieldIndex := range fields {
sft := rt.FieldByIndex(fieldIndex)
sf := rv.FieldByIndex(fieldIndex)
if isNil(sf) {
// Don't write anything for nil fields.
continue
}
opts := getOptions(sft.Tag)
if opts.skip {
continue
}
keyName := sft.Name
if opts.name != "" {
keyName = opts.name
}
if opts.omitempty && isEmpty(sf) {
continue
}
if opts.omitzero && isZero(sf) {
continue
}
enc.encode(key.add(keyName), sf)
}
}
writeFields(fieldsDirect)
writeFields(fieldsSub)
}
// tomlTypeOfGo returns the TOML type of a Go value. The type may be `nil`,
// which means no concrete TOML type could be found. It is also used to
// determine whether the types of array elements are mixed (which is
// forbidden); a nil Go value is illegal as an array element.
func tomlTypeOfGo(rv reflect.Value) tomlType {
if isNil(rv) || !rv.IsValid() {
return nil
}
switch rv.Kind() {
case reflect.Bool:
return tomlBool
case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32,
reflect.Int64,
reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32,
reflect.Uint64:
return tomlInteger
case reflect.Float32, reflect.Float64:
return tomlFloat
case reflect.Array, reflect.Slice:
if typeEqual(tomlHash, tomlArrayType(rv)) {
return tomlArrayHash
}
return tomlArray
case reflect.Ptr, reflect.Interface:
return tomlTypeOfGo(rv.Elem())
case reflect.String:
return tomlString
case reflect.Map:
return tomlHash
case reflect.Struct:
switch rv.Interface().(type) {
case time.Time:
return tomlDatetime
case TextMarshaler:
return tomlString
default:
return tomlHash
}
default:
panic("unexpected reflect.Kind: " + rv.Kind().String())
}
}
// tomlArrayType returns the element type of a TOML array. The type returned
// may be nil if it cannot be determined (e.g., a nil slice or a zero length
// slice). This function may also panic if it finds a type that cannot be
// expressed in TOML (such as nil elements, heterogeneous arrays or directly
// nested arrays of tables).
func tomlArrayType(rv reflect.Value) tomlType {
if isNil(rv) || !rv.IsValid() || rv.Len() == 0 {
return nil
}
firstType := tomlTypeOfGo(rv.Index(0))
if firstType == nil {
encPanic(errArrayNilElement)
}
rvlen := rv.Len()
for i := 1; i < rvlen; i++ {
elem := rv.Index(i)
switch elemType := tomlTypeOfGo(elem); {
case elemType == nil:
encPanic(errArrayNilElement)
case !typeEqual(firstType, elemType):
encPanic(errArrayMixedElementTypes)
}
}
// If we have a nested array, then we must make sure that the nested
// array contains ONLY primitives.
// This checks arbitrarily nested arrays.
if typeEqual(firstType, tomlArray) || typeEqual(firstType, tomlArrayHash) {
nest := tomlArrayType(eindirect(rv.Index(0)))
if typeEqual(nest, tomlHash) || typeEqual(nest, tomlArrayHash) {
encPanic(errArrayNoTable)
}
}
return firstType
}
type tagOptions struct {
skip bool // "-"
name string
omitempty bool
omitzero bool
}
func getOptions(tag reflect.StructTag) tagOptions {
t := tag.Get("toml")
if t == "-" {
return tagOptions{skip: true}
}
var opts tagOptions
parts := strings.Split(t, ",")
opts.name = parts[0]
for _, s := range parts[1:] {
switch s {
case "omitempty":
opts.omitempty = true
case "omitzero":
opts.omitzero = true
}
}
return opts
}
func isZero(rv reflect.Value) bool {
switch rv.Kind() {
case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
return rv.Int() == 0
case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64:
return rv.Uint() == 0
case reflect.Float32, reflect.Float64:
return rv.Float() == 0.0
}
return false
}
func isEmpty(rv reflect.Value) bool {
switch rv.Kind() {
case reflect.Array, reflect.Slice, reflect.Map, reflect.String:
return rv.Len() == 0
case reflect.Bool:
return !rv.Bool()
}
return false
}
func (enc *Encoder) newline() {
if enc.hasWritten {
enc.wf("\n")
}
}
func (enc *Encoder) keyEqElement(key Key, val reflect.Value) {
if len(key) == 0 {
encPanic(errNoKey)
}
panicIfInvalidKey(key)
enc.wf("%s%s = ", enc.indentStr(key), key.maybeQuoted(len(key)-1))
enc.eElement(val)
enc.newline()
}
func (enc *Encoder) wf(format string, v ...interface{}) {
if _, err := fmt.Fprintf(enc.w, format, v...); err != nil {
encPanic(err)
}
enc.hasWritten = true
}
func (enc *Encoder) indentStr(key Key) string {
return strings.Repeat(enc.Indent, len(key)-1)
}
func encPanic(err error) {
panic(tomlEncodeError{err})
}
func eindirect(v reflect.Value) reflect.Value {
switch v.Kind() {
case reflect.Ptr, reflect.Interface:
return eindirect(v.Elem())
default:
return v
}
}
func isNil(rv reflect.Value) bool {
switch rv.Kind() {
case reflect.Interface, reflect.Map, reflect.Ptr, reflect.Slice:
return rv.IsNil()
default:
return false
}
}
func panicIfInvalidKey(key Key) {
for _, k := range key {
if len(k) == 0 {
encPanic(e("Key '%s' is not a valid table name. Key names "+
"cannot be empty.", key.maybeQuotedAll()))
}
}
}
func isValidKeyName(s string) bool {
return len(s) != 0
}
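The tag options handled above (`omitempty`, `omitzero`, and `-`) are what `getOptions`, `isEmpty`, and `isZero` implement. A minimal usage sketch against this vendored encoder, assuming the library's public `toml.NewEncoder` entry point (defined earlier in encode.go, not in this excerpt):

```go
package main

import (
	"bytes"
	"fmt"

	"github.com/BurntSushi/toml"
)

type Server struct {
	Host     string   `toml:"host"`
	Port     int      `toml:"port,omitzero"`  // omitted while 0 (isZero)
	Tags     []string `toml:"tags,omitempty"` // omitted while nil/empty (isEmpty)
	Password string   `toml:"-"`              // opts.skip: never encoded
}

func main() {
	var buf bytes.Buffer
	if err := toml.NewEncoder(&buf).Encode(Server{Host: "example.com"}); err != nil {
		panic(err)
	}
	fmt.Print(buf.String()) // prints only: host = "example.com"
}
```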

19
vendor/github.com/BurntSushi/toml/encoding_types.go generated vendored Normal file
View File

@@ -0,0 +1,19 @@
// +build go1.2
package toml
// In order to support Go 1.1, we define our own TextMarshaler and
// TextUnmarshaler types. For Go 1.2+, we just alias them with the
// standard library interfaces.
import (
"encoding"
)
// TextMarshaler is a synonym for encoding.TextMarshaler. It is defined here
// so that Go 1.1 can be supported.
type TextMarshaler encoding.TextMarshaler
// TextUnmarshaler is a synonym for encoding.TextUnmarshaler. It is defined
// here so that Go 1.1 can be supported.
type TextUnmarshaler encoding.TextUnmarshaler

18
vendor/github.com/BurntSushi/toml/encoding_types_1.1.go generated vendored Normal file
View File

@@ -0,0 +1,18 @@
// +build !go1.2
package toml
// These interfaces were introduced in Go 1.2, so we add them manually when
// compiling for Go 1.1.
// TextMarshaler is a synonym for encoding.TextMarshaler. It is defined here
// so that Go 1.1 can be supported.
type TextMarshaler interface {
MarshalText() (text []byte, err error)
}
// TextUnmarshaler is a synonym for encoding.TextUnmarshaler. It is defined
// here so that Go 1.1 can be supported.
type TextUnmarshaler interface {
UnmarshalText(text []byte) error
}
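Because the encoder's `tomlTypeOfGo` treats any struct implementing `TextMarshaler` as a TOML string, a type can control its own serialized form. A short sketch under the same `toml.NewEncoder` assumption; the `Version` type is illustrative only:

```go
package main

import (
	"bytes"
	"fmt"

	"github.com/BurntSushi/toml"
)

// Version implements encoding.TextMarshaler (aliased as toml.TextMarshaler
// on Go 1.2+), so it is encoded as a TOML string instead of a table.
type Version struct{ Major, Minor int }

func (v Version) MarshalText() ([]byte, error) {
	return []byte(fmt.Sprintf("%d.%d", v.Major, v.Minor)), nil
}

func main() {
	var buf bytes.Buffer
	if err := toml.NewEncoder(&buf).Encode(struct {
		Ver Version `toml:"ver"`
	}{Ver: Version{1, 26}}); err != nil {
		panic(err)
	}
	fmt.Print(buf.String()) // ver = "1.26"
}
```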

953
vendor/github.com/BurntSushi/toml/lex.go generated vendored Normal file
View File

@@ -0,0 +1,953 @@
package toml
import (
"fmt"
"strings"
"unicode"
"unicode/utf8"
)
type itemType int
const (
itemError itemType = iota
itemNIL // used in the parser to indicate no type
itemEOF
itemText
itemString
itemRawString
itemMultilineString
itemRawMultilineString
itemBool
itemInteger
itemFloat
itemDatetime
itemArray // the start of an array
itemArrayEnd
itemTableStart
itemTableEnd
itemArrayTableStart
itemArrayTableEnd
itemKeyStart
itemCommentStart
itemInlineTableStart
itemInlineTableEnd
)
const (
eof = 0
comma = ','
tableStart = '['
tableEnd = ']'
arrayTableStart = '['
arrayTableEnd = ']'
tableSep = '.'
keySep = '='
arrayStart = '['
arrayEnd = ']'
commentStart = '#'
stringStart = '"'
stringEnd = '"'
rawStringStart = '\''
rawStringEnd = '\''
inlineTableStart = '{'
inlineTableEnd = '}'
)
type stateFn func(lx *lexer) stateFn
type lexer struct {
input string
start int
pos int
line int
state stateFn
items chan item
// Allow for backing up up to three runes.
// This is necessary because TOML contains 3-rune tokens (""" and ''').
prevWidths [3]int
nprev int // how many of prevWidths are in use
// If we emit an eof, we can still back up, but it is not OK to call
// next again.
atEOF bool
// A stack of state functions used to maintain context.
// The idea is to reuse parts of the state machine in various places.
// For example, values can appear at the top level or within arbitrarily
// nested arrays. The last state on the stack is used after a value has
// been lexed. Similarly for comments.
stack []stateFn
}
type item struct {
typ itemType
val string
line int
}
func (lx *lexer) nextItem() item {
for {
select {
case item := <-lx.items:
return item
default:
lx.state = lx.state(lx)
}
}
}
func lex(input string) *lexer {
lx := &lexer{
input: input,
state: lexTop,
line: 1,
items: make(chan item, 10),
stack: make([]stateFn, 0, 10),
}
return lx
}
func (lx *lexer) push(state stateFn) {
lx.stack = append(lx.stack, state)
}
func (lx *lexer) pop() stateFn {
if len(lx.stack) == 0 {
return lx.errorf("BUG in lexer: no states to pop")
}
last := lx.stack[len(lx.stack)-1]
lx.stack = lx.stack[0 : len(lx.stack)-1]
return last
}
func (lx *lexer) current() string {
return lx.input[lx.start:lx.pos]
}
func (lx *lexer) emit(typ itemType) {
lx.items <- item{typ, lx.current(), lx.line}
lx.start = lx.pos
}
func (lx *lexer) emitTrim(typ itemType) {
lx.items <- item{typ, strings.TrimSpace(lx.current()), lx.line}
lx.start = lx.pos
}
func (lx *lexer) next() (r rune) {
if lx.atEOF {
panic("next called after EOF")
}
if lx.pos >= len(lx.input) {
lx.atEOF = true
return eof
}
if lx.input[lx.pos] == '\n' {
lx.line++
}
lx.prevWidths[2] = lx.prevWidths[1]
lx.prevWidths[1] = lx.prevWidths[0]
if lx.nprev < 3 {
lx.nprev++
}
r, w := utf8.DecodeRuneInString(lx.input[lx.pos:])
lx.prevWidths[0] = w
lx.pos += w
return r
}
// ignore skips over the pending input before this point.
func (lx *lexer) ignore() {
lx.start = lx.pos
}
// backup steps back one rune. Can be called up to three times between calls
// to next, since the widths of the last three runes are tracked.
func (lx *lexer) backup() {
if lx.atEOF {
lx.atEOF = false
return
}
if lx.nprev < 1 {
panic("backed up too far")
}
w := lx.prevWidths[0]
lx.prevWidths[0] = lx.prevWidths[1]
lx.prevWidths[1] = lx.prevWidths[2]
lx.nprev--
lx.pos -= w
if lx.pos < len(lx.input) && lx.input[lx.pos] == '\n' {
lx.line--
}
}
// accept consumes the next rune if it's equal to `valid`.
func (lx *lexer) accept(valid rune) bool {
if lx.next() == valid {
return true
}
lx.backup()
return false
}
// peek returns but does not consume the next rune in the input.
func (lx *lexer) peek() rune {
r := lx.next()
lx.backup()
return r
}
// skip ignores all input that matches the given predicate.
func (lx *lexer) skip(pred func(rune) bool) {
for {
r := lx.next()
if pred(r) {
continue
}
lx.backup()
lx.ignore()
return
}
}
// errorf stops all lexing by emitting an error and returning `nil`.
// Values are formatted with fmt.Sprintf; callers generally pass runes with
// %q so that special characters (newlines, tabs, etc.) appear escaped.
func (lx *lexer) errorf(format string, values ...interface{}) stateFn {
lx.items <- item{
itemError,
fmt.Sprintf(format, values...),
lx.line,
}
return nil
}
// lexTop consumes elements at the top level of TOML data.
func lexTop(lx *lexer) stateFn {
r := lx.next()
if isWhitespace(r) || isNL(r) {
return lexSkip(lx, lexTop)
}
switch r {
case commentStart:
lx.push(lexTop)
return lexCommentStart
case tableStart:
return lexTableStart
case eof:
if lx.pos > lx.start {
return lx.errorf("unexpected EOF")
}
lx.emit(itemEOF)
return nil
}
// At this point, the only valid item can be a key, so we back up
// and let the key lexer do the rest.
lx.backup()
lx.push(lexTopEnd)
return lexKeyStart
}
// lexTopEnd is entered whenever a top-level item has been consumed. (A value
// or a table.) It must see only whitespace, and will turn back to lexTop
// upon a newline. If it sees EOF, it will quit the lexer successfully.
func lexTopEnd(lx *lexer) stateFn {
r := lx.next()
switch {
case r == commentStart:
// a comment will read to a newline for us.
lx.push(lexTop)
return lexCommentStart
case isWhitespace(r):
return lexTopEnd
case isNL(r):
lx.ignore()
return lexTop
case r == eof:
lx.emit(itemEOF)
return nil
}
return lx.errorf("expected a top-level item to end with a newline, "+
"comment, or EOF, but got %q instead", r)
}
// lexTable lexes the beginning of a table. Namely, it makes sure that
// it starts with a character other than '.' and ']'.
// It assumes that '[' has already been consumed.
// It also handles the case that this is an item in an array of tables.
// e.g., '[[name]]'.
func lexTableStart(lx *lexer) stateFn {
if lx.peek() == arrayTableStart {
lx.next()
lx.emit(itemArrayTableStart)
lx.push(lexArrayTableEnd)
} else {
lx.emit(itemTableStart)
lx.push(lexTableEnd)
}
return lexTableNameStart
}
func lexTableEnd(lx *lexer) stateFn {
lx.emit(itemTableEnd)
return lexTopEnd
}
func lexArrayTableEnd(lx *lexer) stateFn {
if r := lx.next(); r != arrayTableEnd {
return lx.errorf("expected end of table array name delimiter %q, "+
"but got %q instead", arrayTableEnd, r)
}
lx.emit(itemArrayTableEnd)
return lexTopEnd
}
func lexTableNameStart(lx *lexer) stateFn {
lx.skip(isWhitespace)
switch r := lx.peek(); {
case r == tableEnd || r == eof:
return lx.errorf("unexpected end of table name " +
"(table names cannot be empty)")
case r == tableSep:
return lx.errorf("unexpected table separator " +
"(table names cannot be empty)")
case r == stringStart || r == rawStringStart:
lx.ignore()
lx.push(lexTableNameEnd)
return lexValue // reuse string lexing
default:
return lexBareTableName
}
}
// lexBareTableName lexes the name of a table. It assumes that at least one
// valid character for the table has already been read.
func lexBareTableName(lx *lexer) stateFn {
r := lx.next()
if isBareKeyChar(r) {
return lexBareTableName
}
lx.backup()
lx.emit(itemText)
return lexTableNameEnd
}
// lexTableNameEnd reads the end of a piece of a table name, optionally
// consuming whitespace.
func lexTableNameEnd(lx *lexer) stateFn {
lx.skip(isWhitespace)
switch r := lx.next(); {
case isWhitespace(r):
return lexTableNameEnd
case r == tableSep:
lx.ignore()
return lexTableNameStart
case r == tableEnd:
return lx.pop()
default:
return lx.errorf("expected '.' or ']' to end table name, "+
"but got %q instead", r)
}
}
// lexKeyStart consumes a key name up until the first non-whitespace character.
// lexKeyStart will ignore whitespace.
func lexKeyStart(lx *lexer) stateFn {
r := lx.peek()
switch {
case r == keySep:
return lx.errorf("unexpected key separator %q", keySep)
case isWhitespace(r) || isNL(r):
lx.next()
return lexSkip(lx, lexKeyStart)
case r == stringStart || r == rawStringStart:
lx.ignore()
lx.emit(itemKeyStart)
lx.push(lexKeyEnd)
return lexValue // reuse string lexing
default:
lx.ignore()
lx.emit(itemKeyStart)
return lexBareKey
}
}
// lexBareKey consumes the text of a bare key. Assumes that the first character
// (which is not whitespace) has not yet been consumed.
func lexBareKey(lx *lexer) stateFn {
switch r := lx.next(); {
case isBareKeyChar(r):
return lexBareKey
case isWhitespace(r):
lx.backup()
lx.emit(itemText)
return lexKeyEnd
case r == keySep:
lx.backup()
lx.emit(itemText)
return lexKeyEnd
default:
return lx.errorf("bare keys cannot contain %q", r)
}
}
// lexKeyEnd consumes the end of a key and trims whitespace (up to the key
// separator).
func lexKeyEnd(lx *lexer) stateFn {
switch r := lx.next(); {
case r == keySep:
return lexSkip(lx, lexValue)
case isWhitespace(r):
return lexSkip(lx, lexKeyEnd)
default:
return lx.errorf("expected key separator %q, but got %q instead",
keySep, r)
}
}
// lexValue starts the consumption of a value anywhere a value is expected.
// lexValue will ignore whitespace.
// After a value is lexed, the last state on the next is popped and returned.
func lexValue(lx *lexer) stateFn {
// We allow whitespace to precede a value, but NOT newlines.
// In array syntax, the array states are responsible for ignoring newlines.
r := lx.next()
switch {
case isWhitespace(r):
return lexSkip(lx, lexValue)
case isDigit(r):
lx.backup() // avoid an extra state and use the same as above
return lexNumberOrDateStart
}
switch r {
case arrayStart:
lx.ignore()
lx.emit(itemArray)
return lexArrayValue
case inlineTableStart:
lx.ignore()
lx.emit(itemInlineTableStart)
return lexInlineTableValue
case stringStart:
if lx.accept(stringStart) {
if lx.accept(stringStart) {
lx.ignore() // Ignore """
return lexMultilineString
}
lx.backup()
}
lx.ignore() // ignore the '"'
return lexString
case rawStringStart:
if lx.accept(rawStringStart) {
if lx.accept(rawStringStart) {
lx.ignore() // Ignore '''
return lexMultilineRawString
}
lx.backup()
}
lx.ignore() // ignore the "'"
return lexRawString
case '+', '-':
return lexNumberStart
case '.': // special error case, be kind to users
return lx.errorf("floats must start with a digit, not '.'")
}
if unicode.IsLetter(r) {
// Be permissive here; lexBool will give a nice error if the
// user wrote something like
// x = foo
// (i.e. not 'true' or 'false' but is something else word-like.)
lx.backup()
return lexBool
}
return lx.errorf("expected value but found %q instead", r)
}
// lexArrayValue consumes one value in an array. It assumes that '[' or ','
// have already been consumed. All whitespace and newlines are ignored.
func lexArrayValue(lx *lexer) stateFn {
r := lx.next()
switch {
case isWhitespace(r) || isNL(r):
return lexSkip(lx, lexArrayValue)
case r == commentStart:
lx.push(lexArrayValue)
return lexCommentStart
case r == comma:
return lx.errorf("unexpected comma")
case r == arrayEnd:
// NOTE(caleb): The spec isn't clear about whether you can have
// a trailing comma or not, so we'll allow it.
return lexArrayEnd
}
lx.backup()
lx.push(lexArrayValueEnd)
return lexValue
}
// lexArrayValueEnd consumes everything between the end of an array value and
// the next value (or the end of the array): it ignores whitespace and newlines
// and expects either a ',' or a ']'.
func lexArrayValueEnd(lx *lexer) stateFn {
r := lx.next()
switch {
case isWhitespace(r) || isNL(r):
return lexSkip(lx, lexArrayValueEnd)
case r == commentStart:
lx.push(lexArrayValueEnd)
return lexCommentStart
case r == comma:
lx.ignore()
return lexArrayValue // move on to the next value
case r == arrayEnd:
return lexArrayEnd
}
return lx.errorf(
"expected a comma or array terminator %q, but got %q instead",
arrayEnd, r,
)
}
// lexArrayEnd finishes the lexing of an array.
// It assumes that a ']' has just been consumed.
func lexArrayEnd(lx *lexer) stateFn {
lx.ignore()
lx.emit(itemArrayEnd)
return lx.pop()
}
// lexInlineTableValue consumes one key/value pair in an inline table.
// It assumes that '{' or ',' have already been consumed. Whitespace is ignored.
func lexInlineTableValue(lx *lexer) stateFn {
r := lx.next()
switch {
case isWhitespace(r):
return lexSkip(lx, lexInlineTableValue)
case isNL(r):
return lx.errorf("newlines not allowed within inline tables")
case r == commentStart:
lx.push(lexInlineTableValue)
return lexCommentStart
case r == comma:
return lx.errorf("unexpected comma")
case r == inlineTableEnd:
return lexInlineTableEnd
}
lx.backup()
lx.push(lexInlineTableValueEnd)
return lexKeyStart
}
// lexInlineTableValueEnd consumes everything between the end of an inline table
// key/value pair and the next pair (or the end of the table):
// it ignores whitespace and expects either a ',' or a '}'.
func lexInlineTableValueEnd(lx *lexer) stateFn {
r := lx.next()
switch {
case isWhitespace(r):
return lexSkip(lx, lexInlineTableValueEnd)
case isNL(r):
return lx.errorf("newlines not allowed within inline tables")
case r == commentStart:
lx.push(lexInlineTableValueEnd)
return lexCommentStart
case r == comma:
lx.ignore()
return lexInlineTableValue
case r == inlineTableEnd:
return lexInlineTableEnd
}
return lx.errorf("expected a comma or an inline table terminator %q, "+
"but got %q instead", inlineTableEnd, r)
}
// lexInlineTableEnd finishes the lexing of an inline table.
// It assumes that a '}' has just been consumed.
func lexInlineTableEnd(lx *lexer) stateFn {
lx.ignore()
lx.emit(itemInlineTableEnd)
return lx.pop()
}
// lexString consumes the inner contents of a string. It assumes that the
// beginning '"' has already been consumed and ignored.
func lexString(lx *lexer) stateFn {
r := lx.next()
switch {
case r == eof:
return lx.errorf("unexpected EOF")
case isNL(r):
return lx.errorf("strings cannot contain newlines")
case r == '\\':
lx.push(lexString)
return lexStringEscape
case r == stringEnd:
lx.backup()
lx.emit(itemString)
lx.next()
lx.ignore()
return lx.pop()
}
return lexString
}
// lexMultilineString consumes the inner contents of a string. It assumes that
// the beginning '"""' has already been consumed and ignored.
func lexMultilineString(lx *lexer) stateFn {
switch lx.next() {
case eof:
return lx.errorf("unexpected EOF")
case '\\':
return lexMultilineStringEscape
case stringEnd:
if lx.accept(stringEnd) {
if lx.accept(stringEnd) {
lx.backup()
lx.backup()
lx.backup()
lx.emit(itemMultilineString)
lx.next()
lx.next()
lx.next()
lx.ignore()
return lx.pop()
}
lx.backup()
}
}
return lexMultilineString
}
// lexRawString consumes a raw string. Nothing can be escaped in such a string.
// It assumes that the beginning "'" has already been consumed and ignored.
func lexRawString(lx *lexer) stateFn {
r := lx.next()
switch {
case r == eof:
return lx.errorf("unexpected EOF")
case isNL(r):
return lx.errorf("strings cannot contain newlines")
case r == rawStringEnd:
lx.backup()
lx.emit(itemRawString)
lx.next()
lx.ignore()
return lx.pop()
}
return lexRawString
}
// lexMultilineRawString consumes a raw string. Nothing can be escaped in such
// a string. It assumes that the beginning "'''" has already been consumed and
// ignored.
func lexMultilineRawString(lx *lexer) stateFn {
switch lx.next() {
case eof:
return lx.errorf("unexpected EOF")
case rawStringEnd:
if lx.accept(rawStringEnd) {
if lx.accept(rawStringEnd) {
lx.backup()
lx.backup()
lx.backup()
lx.emit(itemRawMultilineString)
lx.next()
lx.next()
lx.next()
lx.ignore()
return lx.pop()
}
lx.backup()
}
}
return lexMultilineRawString
}
// lexMultilineStringEscape consumes an escaped character. It assumes that the
// preceding '\\' has already been consumed.
func lexMultilineStringEscape(lx *lexer) stateFn {
// Handle the special case first:
if isNL(lx.next()) {
return lexMultilineString
}
lx.backup()
lx.push(lexMultilineString)
return lexStringEscape(lx)
}
func lexStringEscape(lx *lexer) stateFn {
r := lx.next()
switch r {
case 'b':
fallthrough
case 't':
fallthrough
case 'n':
fallthrough
case 'f':
fallthrough
case 'r':
fallthrough
case '"':
fallthrough
case '\\':
return lx.pop()
case 'u':
return lexShortUnicodeEscape
case 'U':
return lexLongUnicodeEscape
}
return lx.errorf("invalid escape character %q; only the following "+
"escape characters are allowed: "+
`\b, \t, \n, \f, \r, \", \\, \uXXXX, and \UXXXXXXXX`, r)
}
func lexShortUnicodeEscape(lx *lexer) stateFn {
var r rune
for i := 0; i < 4; i++ {
r = lx.next()
if !isHexadecimal(r) {
return lx.errorf(`expected four hexadecimal digits after '\u', `+
"but got %q instead", lx.current())
}
}
return lx.pop()
}
func lexLongUnicodeEscape(lx *lexer) stateFn {
var r rune
for i := 0; i < 8; i++ {
r = lx.next()
if !isHexadecimal(r) {
return lx.errorf(`expected eight hexadecimal digits after '\U', `+
"but got %q instead", lx.current())
}
}
return lx.pop()
}
// lexNumberOrDateStart consumes either an integer, a float, or datetime.
func lexNumberOrDateStart(lx *lexer) stateFn {
r := lx.next()
if isDigit(r) {
return lexNumberOrDate
}
switch r {
case '_':
return lexNumber
case 'e', 'E':
return lexFloat
case '.':
return lx.errorf("floats must start with a digit, not '.'")
}
return lx.errorf("expected a digit but got %q", r)
}
// lexNumberOrDate consumes either an integer, float or datetime.
func lexNumberOrDate(lx *lexer) stateFn {
r := lx.next()
if isDigit(r) {
return lexNumberOrDate
}
switch r {
case '-':
return lexDatetime
case '_':
return lexNumber
case '.', 'e', 'E':
return lexFloat
}
lx.backup()
lx.emit(itemInteger)
return lx.pop()
}
// lexDatetime consumes a Datetime, to a first approximation.
// The parser validates that it matches one of the accepted formats.
func lexDatetime(lx *lexer) stateFn {
r := lx.next()
if isDigit(r) {
return lexDatetime
}
switch r {
case '-', 'T', ':', '.', 'Z', '+':
return lexDatetime
}
lx.backup()
lx.emit(itemDatetime)
return lx.pop()
}
// lexNumberStart consumes either an integer or a float. It assumes that a sign
// has already been read, but that *no* digits have been consumed.
// lexNumberStart will move to the appropriate integer or float states.
func lexNumberStart(lx *lexer) stateFn {
// We MUST see a digit. Even floats have to start with a digit.
r := lx.next()
if !isDigit(r) {
if r == '.' {
return lx.errorf("floats must start with a digit, not '.'")
}
return lx.errorf("expected a digit but got %q", r)
}
return lexNumber
}
// lexNumber consumes an integer or a float after seeing the first digit.
func lexNumber(lx *lexer) stateFn {
r := lx.next()
if isDigit(r) {
return lexNumber
}
switch r {
case '_':
return lexNumber
case '.', 'e', 'E':
return lexFloat
}
lx.backup()
lx.emit(itemInteger)
return lx.pop()
}
// lexFloat consumes the elements of a float. It allows any sequence of
// float-like characters, so floats emitted by the lexer are only a first
// approximation and must be validated by the parser.
func lexFloat(lx *lexer) stateFn {
r := lx.next()
if isDigit(r) {
return lexFloat
}
switch r {
case '_', '.', '-', '+', 'e', 'E':
return lexFloat
}
lx.backup()
lx.emit(itemFloat)
return lx.pop()
}
// lexBool consumes a bool string: 'true' or 'false'.
func lexBool(lx *lexer) stateFn {
var rs []rune
for {
r := lx.next()
if !unicode.IsLetter(r) {
lx.backup()
break
}
rs = append(rs, r)
}
s := string(rs)
switch s {
case "true", "false":
lx.emit(itemBool)
return lx.pop()
}
return lx.errorf("expected value but found %q instead", s)
}
// lexCommentStart begins the lexing of a comment. It will emit
// itemCommentStart and consume no characters, passing control to lexComment.
func lexCommentStart(lx *lexer) stateFn {
lx.ignore()
lx.emit(itemCommentStart)
return lexComment
}
// lexComment lexes an entire comment. It assumes that '#' has been consumed.
// It will consume *up to* the first newline character, and pass control
// back to the last state on the stack.
func lexComment(lx *lexer) stateFn {
r := lx.peek()
if isNL(r) || r == eof {
lx.emit(itemText)
return lx.pop()
}
lx.next()
return lexComment
}
// lexSkip ignores all slurped input and moves on to the next state.
func lexSkip(lx *lexer, nextState stateFn) stateFn {
return func(lx *lexer) stateFn {
lx.ignore()
return nextState
}
}
// isWhitespace returns true if `r` is a whitespace character according
// to the spec.
func isWhitespace(r rune) bool {
return r == '\t' || r == ' '
}
func isNL(r rune) bool {
return r == '\n' || r == '\r'
}
func isDigit(r rune) bool {
return r >= '0' && r <= '9'
}
func isHexadecimal(r rune) bool {
return (r >= '0' && r <= '9') ||
(r >= 'a' && r <= 'f') ||
(r >= 'A' && r <= 'F')
}
func isBareKeyChar(r rune) bool {
return (r >= 'A' && r <= 'Z') ||
(r >= 'a' && r <= 'z') ||
(r >= '0' && r <= '9') ||
r == '_' ||
r == '-'
}
func (itype itemType) String() string {
switch itype {
case itemError:
return "Error"
case itemNIL:
return "NIL"
case itemEOF:
return "EOF"
case itemText:
return "Text"
case itemString, itemRawString, itemMultilineString, itemRawMultilineString:
return "String"
case itemBool:
return "Bool"
case itemInteger:
return "Integer"
case itemFloat:
return "Float"
case itemDatetime:
return "DateTime"
case itemTableStart:
return "TableStart"
case itemTableEnd:
return "TableEnd"
case itemKeyStart:
return "KeyStart"
case itemArray:
return "Array"
case itemArrayEnd:
return "ArrayEnd"
case itemCommentStart:
return "CommentStart"
}
panic(fmt.Sprintf("BUG: Unknown type '%d'.", int(itype)))
}
func (item item) String() string {
return fmt.Sprintf("(%s, %s)", item.typ.String(), item.val)
}
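The lexer runs lazily: state functions execute only as `nextItem` drains the buffered channel. A hypothetical same-package driver (the `dumpItems` helper below is illustrative, not part of this file) showing the item stream the parser consumes:

```go
package toml

import "fmt"

// dumpItems is a hypothetical helper mirroring parser.next: each call to
// nextItem either pops a buffered item or advances the state machine
// until one is emitted.
func dumpItems(input string) {
	lx := lex(input)
	for {
		it := lx.nextItem()
		fmt.Printf("line %d: %s %q\n", it.line, it.typ, it.val)
		if it.typ == itemEOF || it.typ == itemError {
			return
		}
	}
}

// dumpItems(`title = "TOML"`) would print, roughly:
//   line 1: KeyStart ""
//   line 1: Text "title"
//   line 1: String "TOML"
//   line 1: EOF ""
```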

592
vendor/github.com/BurntSushi/toml/parse.go generated vendored Normal file
View File

@@ -0,0 +1,592 @@
package toml
import (
"fmt"
"strconv"
"strings"
"time"
"unicode"
"unicode/utf8"
)
type parser struct {
mapping map[string]interface{}
types map[string]tomlType
lx *lexer
// A list of keys in the order that they appear in the TOML data.
ordered []Key
// the full key for the current hash in scope
context Key
// the base key name for everything except hashes
currentKey string
// rough approximation of line number
approxLine int
// A map of 'key.group.names' to whether they were created implicitly.
implicits map[string]bool
}
type parseError string
func (pe parseError) Error() string {
return string(pe)
}
func parse(data string) (p *parser, err error) {
defer func() {
if r := recover(); r != nil {
var ok bool
if err, ok = r.(parseError); ok {
return
}
panic(r)
}
}()
p = &parser{
mapping: make(map[string]interface{}),
types: make(map[string]tomlType),
lx: lex(data),
ordered: make([]Key, 0),
implicits: make(map[string]bool),
}
for {
item := p.next()
if item.typ == itemEOF {
break
}
p.topLevel(item)
}
return p, nil
}
func (p *parser) panicf(format string, v ...interface{}) {
msg := fmt.Sprintf("Near line %d (last key parsed '%s'): %s",
p.approxLine, p.current(), fmt.Sprintf(format, v...))
panic(parseError(msg))
}
func (p *parser) next() item {
it := p.lx.nextItem()
if it.typ == itemError {
p.panicf("%s", it.val)
}
return it
}
func (p *parser) bug(format string, v ...interface{}) {
panic(fmt.Sprintf("BUG: "+format+"\n\n", v...))
}
func (p *parser) expect(typ itemType) item {
it := p.next()
p.assertEqual(typ, it.typ)
return it
}
func (p *parser) assertEqual(expected, got itemType) {
if expected != got {
p.bug("Expected '%s' but got '%s'.", expected, got)
}
}
func (p *parser) topLevel(item item) {
switch item.typ {
case itemCommentStart:
p.approxLine = item.line
p.expect(itemText)
case itemTableStart:
kg := p.next()
p.approxLine = kg.line
var key Key
for ; kg.typ != itemTableEnd && kg.typ != itemEOF; kg = p.next() {
key = append(key, p.keyString(kg))
}
p.assertEqual(itemTableEnd, kg.typ)
p.establishContext(key, false)
p.setType("", tomlHash)
p.ordered = append(p.ordered, key)
case itemArrayTableStart:
kg := p.next()
p.approxLine = kg.line
var key Key
for ; kg.typ != itemArrayTableEnd && kg.typ != itemEOF; kg = p.next() {
key = append(key, p.keyString(kg))
}
p.assertEqual(itemArrayTableEnd, kg.typ)
p.establishContext(key, true)
p.setType("", tomlArrayHash)
p.ordered = append(p.ordered, key)
case itemKeyStart:
kname := p.next()
p.approxLine = kname.line
p.currentKey = p.keyString(kname)
val, typ := p.value(p.next())
p.setValue(p.currentKey, val)
p.setType(p.currentKey, typ)
p.ordered = append(p.ordered, p.context.add(p.currentKey))
p.currentKey = ""
default:
p.bug("Unexpected type at top level: %s", item.typ)
}
}
// Gets a string for a key (or part of a key in a table name).
func (p *parser) keyString(it item) string {
switch it.typ {
case itemText:
return it.val
case itemString, itemMultilineString,
itemRawString, itemRawMultilineString:
s, _ := p.value(it)
return s.(string)
default:
p.bug("Unexpected key type: %s", it.typ)
panic("unreachable")
}
}
// value translates an expected value from the lexer into a Go value wrapped
// as an empty interface.
func (p *parser) value(it item) (interface{}, tomlType) {
switch it.typ {
case itemString:
return p.replaceEscapes(it.val), p.typeOfPrimitive(it)
case itemMultilineString:
trimmed := stripFirstNewline(stripEscapedWhitespace(it.val))
return p.replaceEscapes(trimmed), p.typeOfPrimitive(it)
case itemRawString:
return it.val, p.typeOfPrimitive(it)
case itemRawMultilineString:
return stripFirstNewline(it.val), p.typeOfPrimitive(it)
case itemBool:
switch it.val {
case "true":
return true, p.typeOfPrimitive(it)
case "false":
return false, p.typeOfPrimitive(it)
}
p.bug("Expected boolean value, but got '%s'.", it.val)
case itemInteger:
if !numUnderscoresOK(it.val) {
p.panicf("Invalid integer %q: underscores must be surrounded by digits",
it.val)
}
val := strings.Replace(it.val, "_", "", -1)
num, err := strconv.ParseInt(val, 10, 64)
if err != nil {
// Distinguish integer values. Normally, it'd be a bug if the lexer
// provides an invalid integer, but it's possible that the number is
// out of range of valid values (which the lexer cannot determine).
// So mark the former as a bug but the latter as a legitimate user
// error.
if e, ok := err.(*strconv.NumError); ok &&
e.Err == strconv.ErrRange {
p.panicf("Integer '%s' is out of the range of 64-bit "+
"signed integers.", it.val)
} else {
p.bug("Expected integer value, but got '%s'.", it.val)
}
}
return num, p.typeOfPrimitive(it)
case itemFloat:
parts := strings.FieldsFunc(it.val, func(r rune) bool {
switch r {
case '.', 'e', 'E':
return true
}
return false
})
for _, part := range parts {
if !numUnderscoresOK(part) {
p.panicf("Invalid float %q: underscores must be "+
"surrounded by digits", it.val)
}
}
if !numPeriodsOK(it.val) {
// As a special case, numbers like '123.' or '1.e2',
// which are valid as far as Go/strconv are concerned,
// must be rejected because TOML says that a fractional
// part consists of '.' followed by 1+ digits.
p.panicf("Invalid float %q: '.' must be followed "+
"by one or more digits", it.val)
}
val := strings.Replace(it.val, "_", "", -1)
num, err := strconv.ParseFloat(val, 64)
if err != nil {
if e, ok := err.(*strconv.NumError); ok &&
e.Err == strconv.ErrRange {
p.panicf("Float '%s' is out of the range of 64-bit "+
"IEEE-754 floating-point numbers.", it.val)
} else {
p.panicf("Invalid float value: %q", it.val)
}
}
return num, p.typeOfPrimitive(it)
case itemDatetime:
var t time.Time
var ok bool
var err error
for _, format := range []string{
"2006-01-02T15:04:05Z07:00",
"2006-01-02T15:04:05",
"2006-01-02",
} {
t, err = time.ParseInLocation(format, it.val, time.Local)
if err == nil {
ok = true
break
}
}
if !ok {
p.panicf("Invalid TOML Datetime: %q.", it.val)
}
return t, p.typeOfPrimitive(it)
case itemArray:
array := make([]interface{}, 0)
types := make([]tomlType, 0)
for it = p.next(); it.typ != itemArrayEnd; it = p.next() {
if it.typ == itemCommentStart {
p.expect(itemText)
continue
}
val, typ := p.value(it)
array = append(array, val)
types = append(types, typ)
}
return array, p.typeOfArray(types)
case itemInlineTableStart:
var (
hash = make(map[string]interface{})
outerContext = p.context
outerKey = p.currentKey
)
p.context = append(p.context, p.currentKey)
p.currentKey = ""
for it := p.next(); it.typ != itemInlineTableEnd; it = p.next() {
if it.typ != itemKeyStart {
p.bug("Expected key start but instead found %q, around line %d",
it.val, p.approxLine)
}
if it.typ == itemCommentStart {
p.expect(itemText)
continue
}
// retrieve key
k := p.next()
p.approxLine = k.line
kname := p.keyString(k)
// retrieve value
p.currentKey = kname
val, typ := p.value(p.next())
// make sure we keep metadata up to date
p.setType(kname, typ)
p.ordered = append(p.ordered, p.context.add(p.currentKey))
hash[kname] = val
}
p.context = outerContext
p.currentKey = outerKey
return hash, tomlHash
}
p.bug("Unexpected value type: %s", it.typ)
panic("unreachable")
}
// numUnderscoresOK checks whether each underscore in s is surrounded by
// characters that are not underscores.
func numUnderscoresOK(s string) bool {
accept := false
for _, r := range s {
if r == '_' {
if !accept {
return false
}
accept = false
continue
}
accept = true
}
return accept
}
// numPeriodsOK checks whether every period in s is followed by a digit.
func numPeriodsOK(s string) bool {
period := false
for _, r := range s {
if period && !isDigit(r) {
return false
}
period = r == '.'
}
return !period
}
// establishContext sets the current context of the parser,
// where the context is either a hash or an array of hashes. Which one is
// set depends on the value of the `array` parameter.
//
// Establishing the context also makes sure that the key isn't a duplicate, and
// will create implicit hashes automatically.
func (p *parser) establishContext(key Key, array bool) {
var ok bool
// Always start at the top level and drill down for our context.
hashContext := p.mapping
keyContext := make(Key, 0)
// We only need implicit hashes for key[0 : len(key)-1].
for _, k := range key[0 : len(key)-1] {
_, ok = hashContext[k]
keyContext = append(keyContext, k)
// No key? Make an implicit hash and move on.
if !ok {
p.addImplicit(keyContext)
hashContext[k] = make(map[string]interface{})
}
// If the hash context is actually an array of tables, then set
// the hash context to the last element in that array.
//
// Otherwise, it better be a table, since this MUST be a key group (by
// virtue of it not being the last element in a key).
switch t := hashContext[k].(type) {
case []map[string]interface{}:
hashContext = t[len(t)-1]
case map[string]interface{}:
hashContext = t
default:
p.panicf("Key '%s' was already created as a hash.", keyContext)
}
}
p.context = keyContext
if array {
// If this is the first element for this array, then allocate a new
// list of tables for it.
k := key[len(key)-1]
if _, ok := hashContext[k]; !ok {
hashContext[k] = make([]map[string]interface{}, 0, 5)
}
// Add a new table. But make sure the key hasn't already been used
// for something else.
if hash, ok := hashContext[k].([]map[string]interface{}); ok {
hashContext[k] = append(hash, make(map[string]interface{}))
} else {
p.panicf("Key '%s' was already created and cannot be used as "+
"an array.", keyContext)
}
} else {
p.setValue(key[len(key)-1], make(map[string]interface{}))
}
p.context = append(p.context, key[len(key)-1])
}
// setValue sets the given key to the given value in the current context.
// It makes sure that the key hasn't already been defined, accounting for
// implicit key groups.
func (p *parser) setValue(key string, value interface{}) {
var tmpHash interface{}
var ok bool
hash := p.mapping
keyContext := make(Key, 0)
for _, k := range p.context {
keyContext = append(keyContext, k)
if tmpHash, ok = hash[k]; !ok {
p.bug("Context for key '%s' has not been established.", keyContext)
}
switch t := tmpHash.(type) {
case []map[string]interface{}:
// The context is a table of hashes. Pick the most recent table
// defined as the current hash.
hash = t[len(t)-1]
case map[string]interface{}:
hash = t
default:
p.bug("Expected hash to have type 'map[string]interface{}', but "+
"it has '%T' instead.", tmpHash)
}
}
keyContext = append(keyContext, key)
if _, ok := hash[key]; ok {
// Typically, if the given key has already been set, then we have
// to raise an error since duplicate keys are disallowed. However,
// it's possible that a key was previously defined implicitly. In this
// case, it is allowed to be redefined concretely. (See the
// `tests/valid/implicit-and-explicit-after.toml` test in `toml-test`.)
//
// But we have to make sure to stop marking it as an implicit. (So that
// another redefinition provokes an error.)
//
// Note that since it has already been defined (as a hash), we don't
// want to overwrite it. So our business is done.
if p.isImplicit(keyContext) {
p.removeImplicit(keyContext)
return
}
// Otherwise, we have a concrete key trying to override a previous
// key, which is *always* wrong.
p.panicf("Key '%s' has already been defined.", keyContext)
}
hash[key] = value
}
// setType sets the type of a particular value at a given key.
// It should be called immediately AFTER setValue.
//
// Note that if `key` is empty, then the type given will be applied to the
// current context (which is either a table or an array of tables).
func (p *parser) setType(key string, typ tomlType) {
keyContext := make(Key, 0, len(p.context)+1)
for _, k := range p.context {
keyContext = append(keyContext, k)
}
if len(key) > 0 { // allow type setting for hashes
keyContext = append(keyContext, key)
}
p.types[keyContext.String()] = typ
}
// addImplicit sets the given Key as having been created implicitly.
func (p *parser) addImplicit(key Key) {
p.implicits[key.String()] = true
}
// removeImplicit stops tagging the given key as having been implicitly
// created.
func (p *parser) removeImplicit(key Key) {
p.implicits[key.String()] = false
}
// isImplicit returns true if the key group pointed to by the key was created
// implicitly.
func (p *parser) isImplicit(key Key) bool {
return p.implicits[key.String()]
}
// current returns the full key name of the current context.
func (p *parser) current() string {
if len(p.currentKey) == 0 {
return p.context.String()
}
if len(p.context) == 0 {
return p.currentKey
}
return fmt.Sprintf("%s.%s", p.context, p.currentKey)
}
func stripFirstNewline(s string) string {
if len(s) == 0 || s[0] != '\n' {
return s
}
return s[1:]
}
func stripEscapedWhitespace(s string) string {
esc := strings.Split(s, "\\\n")
if len(esc) > 1 {
for i := 1; i < len(esc); i++ {
esc[i] = strings.TrimLeftFunc(esc[i], unicode.IsSpace)
}
}
return strings.Join(esc, "")
}
func (p *parser) replaceEscapes(str string) string {
var replaced []rune
s := []byte(str)
r := 0
for r < len(s) {
if s[r] != '\\' {
c, size := utf8.DecodeRune(s[r:])
r += size
replaced = append(replaced, c)
continue
}
r += 1
if r >= len(s) {
p.bug("Escape sequence at end of string.")
return ""
}
switch s[r] {
default:
p.bug("Expected valid escape code after \\, but got %q.", s[r])
return ""
case 'b':
replaced = append(replaced, rune(0x0008))
r += 1
case 't':
replaced = append(replaced, rune(0x0009))
r += 1
case 'n':
replaced = append(replaced, rune(0x000A))
r += 1
case 'f':
replaced = append(replaced, rune(0x000C))
r += 1
case 'r':
replaced = append(replaced, rune(0x000D))
r += 1
case '"':
replaced = append(replaced, rune(0x0022))
r += 1
case '\\':
replaced = append(replaced, rune(0x005C))
r += 1
case 'u':
// At this point, we know we have a Unicode escape of the form
// `uXXXX` at [r, r+5). (Because the lexer guarantees this
// for us.)
escaped := p.asciiEscapeToUnicode(s[r+1 : r+5])
replaced = append(replaced, escaped)
r += 5
case 'U':
// At this point, we know we have a Unicode escape of the form
// `UXXXXXXXX` at [r, r+9). (Because the lexer guarantees this
// for us.)
escaped := p.asciiEscapeToUnicode(s[r+1 : r+9])
replaced = append(replaced, escaped)
r += 9
}
}
return string(replaced)
}
func (p *parser) asciiEscapeToUnicode(bs []byte) rune {
s := string(bs)
hex, err := strconv.ParseUint(strings.ToLower(s), 16, 32)
if err != nil {
p.bug("Could not parse '%s' as a hexadecimal number, but the "+
"lexer claims it's OK: %s", s, err)
}
if !utf8.ValidRune(rune(hex)) {
p.panicf("Escaped character '\\u%s' is not valid UTF-8.", s)
}
return rune(hex)
}
func isStringType(ty itemType) bool {
return ty == itemString || ty == itemMultilineString ||
ty == itemRawString || ty == itemRawMultilineString
}
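Because `parse` recovers `parseError` panics, malformed input surfaces as an ordinary error to callers. A small sketch assuming the library's public `toml.Decode` and `MetaData` API (defined in decode.go, not shown in this diff):

```go
package main

import (
	"fmt"
	"time"

	"github.com/BurntSushi/toml"
)

type Config struct {
	Title   string
	Limit   int64
	Created time.Time
}

func main() {
	doc := `
title = "demo"
limit = 1_000                    # numUnderscoresOK: digits on both sides
created = 2017-11-15T14:48:39Z   # one of the three accepted datetime layouts
`
	var cfg Config
	md, err := toml.Decode(doc, &cfg)
	if err != nil {
		// e.g. `limit = 1__000` or `x = 123.` would land here via panicf/recover
		panic(err)
	}
	fmt.Println(cfg.Title, cfg.Limit, cfg.Created.Year())
	fmt.Println(md.Keys()) // keys in document order, from parser.ordered
}
```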

91
vendor/github.com/BurntSushi/toml/type_check.go generated vendored Normal file
View File

@@ -0,0 +1,91 @@
package toml
// tomlType represents any Go type that corresponds to a TOML type.
// While the first draft of the TOML spec has a simplistic type system that
// probably doesn't need this level of sophistication, we seem to be moving
// toward adding real composite types.
type tomlType interface {
typeString() string
}
// typeEqual accepts any two types and returns true if they are equal.
func typeEqual(t1, t2 tomlType) bool {
if t1 == nil || t2 == nil {
return false
}
return t1.typeString() == t2.typeString()
}
func typeIsHash(t tomlType) bool {
return typeEqual(t, tomlHash) || typeEqual(t, tomlArrayHash)
}
type tomlBaseType string
func (btype tomlBaseType) typeString() string {
return string(btype)
}
func (btype tomlBaseType) String() string {
return btype.typeString()
}
var (
tomlInteger tomlBaseType = "Integer"
tomlFloat tomlBaseType = "Float"
tomlDatetime tomlBaseType = "Datetime"
tomlString tomlBaseType = "String"
tomlBool tomlBaseType = "Bool"
tomlArray tomlBaseType = "Array"
tomlHash tomlBaseType = "Hash"
tomlArrayHash tomlBaseType = "ArrayHash"
)
// typeOfPrimitive returns a tomlType of any primitive value in TOML.
// Primitive values are: Integer, Float, Datetime, String and Bool.
//
// Passing a lexer item other than the following will cause a BUG message
// to occur: itemString, itemBool, itemInteger, itemFloat, itemDatetime.
func (p *parser) typeOfPrimitive(lexItem item) tomlType {
switch lexItem.typ {
case itemInteger:
return tomlInteger
case itemFloat:
return tomlFloat
case itemDatetime:
return tomlDatetime
case itemString:
return tomlString
case itemMultilineString:
return tomlString
case itemRawString:
return tomlString
case itemRawMultilineString:
return tomlString
case itemBool:
return tomlBool
}
p.bug("Cannot infer primitive type of lex item '%s'.", lexItem)
panic("unreachable")
}
// typeOfArray returns a tomlType for an array given a list of types of its
// values.
//
// In the current spec, if an array is homogeneous, then its type is always
// "Array". If the array is not homogeneous, an error is generated.
func (p *parser) typeOfArray(types []tomlType) tomlType {
// Empty arrays are cool.
if len(types) == 0 {
return tomlArray
}
theType := types[0]
for _, t := range types[1:] {
if !typeEqual(theType, t) {
p.panicf("Array contains values of type '%s' and '%s', but "+
"arrays must be homogeneous.", theType, t)
}
}
return tomlArray
}
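`typeOfArray` is what ultimately rejects heterogeneous arrays; the panic raised through `panicf` is recovered in `parse` and returned as an error. A sketch of that failure mode, under the same `toml.Decode` assumption as above:

```go
package main

import (
	"fmt"

	"github.com/BurntSushi/toml"
)

func main() {
	var v struct{ Xs []interface{} }
	// Mixed Integer/String elements: typeOfArray panics via p.panicf,
	// and Decode returns it as an ordinary parse error.
	if _, err := toml.Decode(`xs = [1, "one"]`, &v); err != nil {
		fmt.Println("rejected:", err)
	}
}
```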

242
vendor/github.com/BurntSushi/toml/type_fields.go generated vendored Normal file
View File

@@ -0,0 +1,242 @@
package toml
// Struct field handling is adapted from code in encoding/json:
//
// Copyright 2010 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the Go distribution.
import (
"reflect"
"sort"
"sync"
)
// A field represents a single field found in a struct.
type field struct {
name string // the name of the field (`toml` tag included)
tag bool // whether field has a `toml` tag
index []int // represents the depth of an anonymous field
typ reflect.Type // the type of the field
}
// byName sorts field by name, breaking ties with depth,
// then breaking ties with "name came from toml tag", then
// breaking ties with index sequence.
type byName []field
func (x byName) Len() int { return len(x) }
func (x byName) Swap(i, j int) { x[i], x[j] = x[j], x[i] }
func (x byName) Less(i, j int) bool {
if x[i].name != x[j].name {
return x[i].name < x[j].name
}
if len(x[i].index) != len(x[j].index) {
return len(x[i].index) < len(x[j].index)
}
if x[i].tag != x[j].tag {
return x[i].tag
}
return byIndex(x).Less(i, j)
}
// byIndex sorts field by index sequence.
type byIndex []field
func (x byIndex) Len() int { return len(x) }
func (x byIndex) Swap(i, j int) { x[i], x[j] = x[j], x[i] }
func (x byIndex) Less(i, j int) bool {
for k, xik := range x[i].index {
if k >= len(x[j].index) {
return false
}
if xik != x[j].index[k] {
return xik < x[j].index[k]
}
}
return len(x[i].index) < len(x[j].index)
}
// typeFields returns a list of fields that TOML should recognize for the given
// type. The algorithm is breadth-first search over the set of structs to
// include - the top struct and then any reachable anonymous structs.
func typeFields(t reflect.Type) []field {
// Anonymous fields to explore at the current level and the next.
current := []field{}
next := []field{{typ: t}}
// Count of queued names for current level and the next.
count := map[reflect.Type]int{}
nextCount := map[reflect.Type]int{}
// Types already visited at an earlier level.
visited := map[reflect.Type]bool{}
// Fields found.
var fields []field
for len(next) > 0 {
current, next = next, current[:0]
count, nextCount = nextCount, map[reflect.Type]int{}
for _, f := range current {
if visited[f.typ] {
continue
}
visited[f.typ] = true
// Scan f.typ for fields to include.
for i := 0; i < f.typ.NumField(); i++ {
sf := f.typ.Field(i)
if sf.PkgPath != "" && !sf.Anonymous { // unexported
continue
}
opts := getOptions(sf.Tag)
if opts.skip {
continue
}
index := make([]int, len(f.index)+1)
copy(index, f.index)
index[len(f.index)] = i
ft := sf.Type
if ft.Name() == "" && ft.Kind() == reflect.Ptr {
// Follow pointer.
ft = ft.Elem()
}
// Record found field and index sequence.
if opts.name != "" || !sf.Anonymous || ft.Kind() != reflect.Struct {
tagged := opts.name != ""
name := opts.name
if name == "" {
name = sf.Name
}
fields = append(fields, field{name, tagged, index, ft})
if count[f.typ] > 1 {
// If there were multiple instances, add a second,
// so that the annihilation code will see a duplicate.
// It only cares about the distinction between 1 or 2,
// so don't bother generating any more copies.
fields = append(fields, fields[len(fields)-1])
}
continue
}
// Record new anonymous struct to explore in next round.
nextCount[ft]++
if nextCount[ft] == 1 {
f := field{name: ft.Name(), index: index, typ: ft}
next = append(next, f)
}
}
}
}
sort.Sort(byName(fields))
// Delete all fields that are hidden by the Go rules for embedded fields,
// except that fields with TOML tags are promoted.
// The fields are sorted in primary order of name, secondary order
// of field index length. Loop over names; for each name, delete
// hidden fields by choosing the one dominant field that survives.
out := fields[:0]
for advance, i := 0, 0; i < len(fields); i += advance {
// One iteration per name.
// Find the sequence of fields with the name of this first field.
fi := fields[i]
name := fi.name
for advance = 1; i+advance < len(fields); advance++ {
fj := fields[i+advance]
if fj.name != name {
break
}
}
if advance == 1 { // Only one field with this name
out = append(out, fi)
continue
}
dominant, ok := dominantField(fields[i : i+advance])
if ok {
out = append(out, dominant)
}
}
fields = out
sort.Sort(byIndex(fields))
return fields
}
// dominantField looks through the fields, all of which are known to
// have the same name, to find the single field that dominates the
// others using Go's embedding rules, modified by the presence of
// TOML tags. If there are multiple top-level fields, the boolean
// will be false: This condition is an error in Go and we skip all
// the fields.
func dominantField(fields []field) (field, bool) {
// The fields are sorted in increasing index-length order. The winner
// must therefore be one with the shortest index length. Drop all
// longer entries, which is easy: just truncate the slice.
length := len(fields[0].index)
tagged := -1 // Index of first tagged field.
for i, f := range fields {
if len(f.index) > length {
fields = fields[:i]
break
}
if f.tag {
if tagged >= 0 {
// Multiple tagged fields at the same level: conflict.
// Return no field.
return field{}, false
}
tagged = i
}
}
if tagged >= 0 {
return fields[tagged], true
}
// All remaining fields have the same length. If there's more than one,
// we have a conflict (two fields named "X" at the same level) and we
// return no field.
if len(fields) > 1 {
return field{}, false
}
return fields[0], true
}
var fieldCache struct {
sync.RWMutex
m map[reflect.Type][]field
}
// cachedTypeFields is like typeFields but uses a cache to avoid repeated work.
func cachedTypeFields(t reflect.Type) []field {
fieldCache.RLock()
f := fieldCache.m[t]
fieldCache.RUnlock()
if f != nil {
return f
}
// Compute fields without lock.
// Might duplicate effort but won't hold other computations back.
f = typeFields(t)
if f == nil {
f = []field{}
}
fieldCache.Lock()
if fieldCache.m == nil {
fieldCache.m = map[reflect.Type][]field{}
}
fieldCache.m[t] = f
fieldCache.Unlock()
return f
}
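`dominantField` means two tagged fields with the same name at the same embedding depth annihilate each other, as in `encoding/json`. A sketch of that behavior, again assuming the public `toml.Decode`/`MetaData` API:

```go
package main

import (
	"fmt"

	"github.com/BurntSushi/toml"
)

type A struct {
	Name string `toml:"name"`
}

type B struct {
	Name string `toml:"name"`
}

func main() {
	// Two tagged "name" fields at the same depth: dominantField returns
	// no field, so the key matches nothing and should be reported as
	// undecoded via md.Undecoded().
	var w struct {
		A
		B
	}
	md, err := toml.Decode(`name = "x"`, &w)
	if err != nil {
		panic(err)
	}
	fmt.Println(w.A.Name, w.B.Name) // both remain ""
	fmt.Println(md.Undecoded())
}
```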

41
vendor/github.com/Sirupsen/logrus/json_formatter.go generated vendored
View File

@@ -1,41 +0,0 @@
package logrus
import (
"encoding/json"
"fmt"
)
type JSONFormatter struct {
// TimestampFormat sets the format used for marshaling timestamps.
TimestampFormat string
}
func (f *JSONFormatter) Format(entry *Entry) ([]byte, error) {
data := make(Fields, len(entry.Data)+3)
for k, v := range entry.Data {
switch v := v.(type) {
case error:
// Otherwise errors are ignored by `encoding/json`
// https://github.com/Sirupsen/logrus/issues/137
data[k] = v.Error()
default:
data[k] = v
}
}
prefixFieldClashes(data)
timestampFormat := f.TimestampFormat
if timestampFormat == "" {
timestampFormat = DefaultTimestampFormat
}
data["time"] = entry.Time.Format(timestampFormat)
data["msg"] = entry.Message
data["level"] = entry.Level.String()
serialized, err := json.Marshal(data)
if err != nil {
return nil, fmt.Errorf("Failed to marshal fields to JSON, %v", err)
}
return append(serialized, '\n'), nil
}

212
vendor/github.com/Sirupsen/logrus/logger.go generated vendored
View File

@@ -1,212 +0,0 @@
package logrus
import (
"io"
"os"
"sync"
)
type Logger struct {
// The logs are `io.Copy`'d to this in a mutex. It's common to set this to a
// file, or leave it default which is `os.Stderr`. You can also set this to
// something more adventurous, such as logging to Kafka.
Out io.Writer
// Hooks for the logger instance. These allow firing events based on logging
// levels and log entries. For example, to send errors to an error tracking
// service, log to StatsD or dump the core on fatal errors.
Hooks LevelHooks
// All log entries pass through the formatter before logged to Out. The
// included formatters are `TextFormatter` and `JSONFormatter` for which
// TextFormatter is the default. In development (when a TTY is attached) it
// logs with colors, but to a file it wouldn't. You can easily implement your
// own that implements the `Formatter` interface, see the `README` or included
// formatters for examples.
Formatter Formatter
// The logging level the logger should log at. This is typically (and defaults
// to) `logrus.Info`, which allows Info(), Warn(), Error() and Fatal() to be
// logged. `logrus.Debug` is useful in development.
Level Level
// Used to sync writing to the log.
mu sync.Mutex
}
// Creates a new logger. Configuration should be set by changing `Formatter`,
// `Out` and `Hooks` directly on the default logger instance. You can also just
// instantiate your own:
//
// var log = &Logger{
// Out: os.Stderr,
// Formatter: new(JSONFormatter),
// Hooks: make(LevelHooks),
// Level: logrus.DebugLevel,
// }
//
// It's recommended to make this a global instance called `log`.
func New() *Logger {
return &Logger{
Out: os.Stderr,
Formatter: new(TextFormatter),
Hooks: make(LevelHooks),
Level: InfoLevel,
}
}
// Adds a field to the log entry; note that it doesn't log until you call
// Debug, Print, Info, Warn, Fatal or Panic. It only creates a log entry.
// If you want multiple fields, use `WithFields`.
func (logger *Logger) WithField(key string, value interface{}) *Entry {
return NewEntry(logger).WithField(key, value)
}
// Adds a struct of fields to the log entry. All it does is call `WithField` for
// each `Field`.
func (logger *Logger) WithFields(fields Fields) *Entry {
return NewEntry(logger).WithFields(fields)
}
// Adds an error as a single field to the log entry. All it does is call
// `WithError` for the given `error`.
func (logger *Logger) WithError(err error) *Entry {
return NewEntry(logger).WithError(err)
}
func (logger *Logger) Debugf(format string, args ...interface{}) {
if logger.Level >= DebugLevel {
NewEntry(logger).Debugf(format, args...)
}
}
func (logger *Logger) Infof(format string, args ...interface{}) {
if logger.Level >= InfoLevel {
NewEntry(logger).Infof(format, args...)
}
}
func (logger *Logger) Printf(format string, args ...interface{}) {
NewEntry(logger).Printf(format, args...)
}
func (logger *Logger) Warnf(format string, args ...interface{}) {
if logger.Level >= WarnLevel {
NewEntry(logger).Warnf(format, args...)
}
}
func (logger *Logger) Warningf(format string, args ...interface{}) {
if logger.Level >= WarnLevel {
NewEntry(logger).Warnf(format, args...)
}
}
func (logger *Logger) Errorf(format string, args ...interface{}) {
if logger.Level >= ErrorLevel {
NewEntry(logger).Errorf(format, args...)
}
}
func (logger *Logger) Fatalf(format string, args ...interface{}) {
if logger.Level >= FatalLevel {
NewEntry(logger).Fatalf(format, args...)
}
os.Exit(1)
}
func (logger *Logger) Panicf(format string, args ...interface{}) {
if logger.Level >= PanicLevel {
NewEntry(logger).Panicf(format, args...)
}
}
func (logger *Logger) Debug(args ...interface{}) {
if logger.Level >= DebugLevel {
NewEntry(logger).Debug(args...)
}
}
func (logger *Logger) Info(args ...interface{}) {
if logger.Level >= InfoLevel {
NewEntry(logger).Info(args...)
}
}
func (logger *Logger) Print(args ...interface{}) {
NewEntry(logger).Info(args...)
}
func (logger *Logger) Warn(args ...interface{}) {
if logger.Level >= WarnLevel {
NewEntry(logger).Warn(args...)
}
}
func (logger *Logger) Warning(args ...interface{}) {
if logger.Level >= WarnLevel {
NewEntry(logger).Warn(args...)
}
}
func (logger *Logger) Error(args ...interface{}) {
if logger.Level >= ErrorLevel {
NewEntry(logger).Error(args...)
}
}
func (logger *Logger) Fatal(args ...interface{}) {
if logger.Level >= FatalLevel {
NewEntry(logger).Fatal(args...)
}
os.Exit(1)
}
func (logger *Logger) Panic(args ...interface{}) {
if logger.Level >= PanicLevel {
NewEntry(logger).Panic(args...)
}
}
func (logger *Logger) Debugln(args ...interface{}) {
if logger.Level >= DebugLevel {
NewEntry(logger).Debugln(args...)
}
}
func (logger *Logger) Infoln(args ...interface{}) {
if logger.Level >= InfoLevel {
NewEntry(logger).Infoln(args...)
}
}
func (logger *Logger) Println(args ...interface{}) {
NewEntry(logger).Println(args...)
}
func (logger *Logger) Warnln(args ...interface{}) {
if logger.Level >= WarnLevel {
NewEntry(logger).Warnln(args...)
}
}
func (logger *Logger) Warningln(args ...interface{}) {
if logger.Level >= WarnLevel {
NewEntry(logger).Warnln(args...)
}
}
func (logger *Logger) Errorln(args ...interface{}) {
if logger.Level >= ErrorLevel {
NewEntry(logger).Errorln(args...)
}
}
func (logger *Logger) Fatalln(args ...interface{}) {
if logger.Level >= FatalLevel {
NewEntry(logger).Fatalln(args...)
}
os.Exit(1)
}
func (logger *Logger) Panicln(args ...interface{}) {
if logger.Level >= PanicLevel {
NewEntry(logger).Panicln(args...)
}
}

15
vendor/github.com/Sirupsen/logrus/terminal_solaris.go generated vendored
View File

@@ -1,15 +0,0 @@
// +build solaris
package logrus
import (
"os"
"golang.org/x/sys/unix"
)
// IsTerminal returns true if the given file descriptor is a terminal.
func IsTerminal() bool {
_, err := unix.IoctlGetTermios(int(os.Stdout.Fd()), unix.TCGETA)
return err == nil
}

27
vendor/github.com/Sirupsen/logrus/terminal_windows.go generated vendored
View File

@@ -1,27 +0,0 @@
// Based on ssh/terminal:
// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// +build windows
package logrus
import (
"syscall"
"unsafe"
)
var kernel32 = syscall.NewLazyDLL("kernel32.dll")
var (
procGetConsoleMode = kernel32.NewProc("GetConsoleMode")
)
// IsTerminal returns true if stderr's file descriptor is a terminal.
func IsTerminal() bool {
fd := syscall.Stderr
var st uint32
r, _, e := syscall.Syscall(procGetConsoleMode.Addr(), 2, uintptr(fd), uintptr(unsafe.Pointer(&st)), 0)
return r != 0 && e == 0
}

31
vendor/github.com/Sirupsen/logrus/writer.go generated vendored
View File

@@ -1,31 +0,0 @@
package logrus
import (
"bufio"
"io"
"runtime"
)
func (logger *Logger) Writer() *io.PipeWriter {
reader, writer := io.Pipe()
go logger.writerScanner(reader)
runtime.SetFinalizer(writer, writerFinalizer)
return writer
}
func (logger *Logger) writerScanner(reader *io.PipeReader) {
scanner := bufio.NewScanner(reader)
for scanner.Scan() {
logger.Print(scanner.Text())
}
if err := scanner.Err(); err != nil {
logger.Errorf("Error while reading from Writer: %s", err)
}
reader.Close()
}
func writerFinalizer(writer *io.PipeWriter) {
writer.Close()
}

vendor/github.com/containers/image/README.md generated vendored
View File

@@ -20,27 +20,54 @@ to another, for example docker container images to OCI images. It also allows
you to copy container images between various registries, possibly converting
them as necessary, and to sign and verify images.
## Command-line usage
The containers/image project is only a library with no user interface;
you can either incorporate it into your Go programs, or use the `skopeo` tool:
The [skopeo](https://github.com/projectatomic/skopeo) tool uses the
containers/image library and takes advantage of its many features.
containers/image library and takes advantage of many of its features,
e.g. `skopeo copy` exposes the `containers/image/copy.Image` functionality.
## Dependencies
Dependencies that this library prefers will not be found in the `vendor`
directory. This is so you can make well-informed decisions about which
libraries you should use with this package in your own projects.
This library does not ship a committed version of its dependencies in a `vendor`
subdirectory. This is so you can make well-informed decisions about which
libraries you should use with this package in your own projects, and because
types defined in the `vendor` directory would be impossible to use from your projects.
What this project tests against dependencies-wise is located
[here](https://github.com/containers/image/blob/master/vendor.conf).
[in vendor.conf](https://github.com/containers/image/blob/master/vendor.conf).
## Building
For ordinary use, `go build ./...` is sufficient.
If you want to see what the library can do, or an example of how it is called,
consider starting with the [skopeo](https://github.com/projectatomic/skopeo) tool
instead.
When developing this library, please use `make` to take advantage of the tests and validation.
To integrate this library into your project, put it into `$GOPATH` or use
your preferred vendoring tool to include a copy in your project.
Ensure that the dependencies documented [in vendor.conf](https://github.com/containers/image/blob/master/vendor.conf)
are also available
(using those exact versions or different versions of your choosing).
Optionally, you can use the `containers_image_openpgp` build tag (using `go build -tags …`, or `make … BUILDTAGS=…`).
This will use a Golang-only OpenPGP implementation for signature verification instead of the default cgo/gpgme-based implementation;
This library, by default, also depends on the GpgME and libostree C libraries. Either install them:
```sh
Fedora$ dnf install gpgme-devel libassuan-devel libostree-devel
macOS$ brew install gpgme
```
or use the build tags described below to avoid the dependencies (e.g. using `go build -tags …`)
### Supported build tags
- `containers_image_openpgp`: Use a Golang-only OpenPGP implementation for signature verification instead of the default cgo/gpgme-based implementation;
the primary downside is that creating new signatures with the Golang-only implementation is not supported.
- `containers_image_ostree_stub`: Instead of importing `ostree:` transport in `github.com/containers/image/transports/alltransports`, use a stub which reports that the transport is not supported. This allows building the library without requiring the `libostree` development libraries. The `github.com/containers/image/ostree` package is completely disabled
and impossible to import when this build tag is in use.
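For instance, to build a consumer of this library with neither C dependency, something like the following should work (an illustrative command line combining both tags; adjust the package path to your project):
```sh
$ go build -tags "containers_image_openpgp containers_image_ostree_stub" ./...
```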
## Contributing
When developing this library, please use `make` (or `make … BUILDTAGS=…`) to take advantage of the tests and validation.
## License


@@ -3,30 +3,26 @@ package copy
import (
"bytes"
"compress/gzip"
"context"
"fmt"
"io"
"io/ioutil"
"reflect"
"runtime"
"strings"
"time"
pb "gopkg.in/cheggaaa/pb.v1"
"github.com/Sirupsen/logrus"
"github.com/containers/image/image"
"github.com/containers/image/manifest"
"github.com/containers/image/pkg/compression"
"github.com/containers/image/signature"
"github.com/containers/image/transports"
"github.com/containers/image/types"
"github.com/opencontainers/go-digest"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
pb "gopkg.in/cheggaaa/pb.v1"
)
// preferredManifestMIMETypes lists manifest MIME types in order of our preference, if we can't use the original manifest and need to convert.
// Prefer v2s2 to v2s1 because v2s2 does not need to be changed when uploading to a different location.
// Include v2s1 signed but not v2s1 unsigned, because docker/distribution requires a signature even if the unsigned MIME type is used.
var preferredManifestMIMETypes = []string{manifest.DockerV2Schema2MediaType, manifest.DockerV2Schema1SignedMediaType}
type digestingReader struct {
source io.Reader
digester digest.Digester
@@ -98,6 +94,8 @@ type Options struct {
DestinationCtx *types.SystemContext
ProgressInterval time.Duration // time to wait between reports to signal the progress channel
Progress chan types.ProgressProperties // Reported to when ProgressInterval has arrived for a single artifact+offset.
// manifest MIME type of image set by user. "" is default and means use autodetection to choose the manifest MIME type
ForceManifestMIMEType string
}
// Image copies image from srcRef to destRef, using policyContext to validate
@@ -132,9 +130,7 @@ func Image(policyContext *signature.PolicyContext, destRef, srcRef types.ImageRe
}
}()
destSupportedManifestMIMETypes := dest.SupportedManifestMIMETypes()
rawSource, err := srcRef.NewImageSource(options.SourceCtx, destSupportedManifestMIMETypes)
rawSource, err := srcRef.NewImageSource(options.SourceCtx)
if err != nil {
return errors.Wrapf(err, "Error initializing source %s", transports.ImageName(srcRef))
}
@@ -162,6 +158,10 @@ func Image(policyContext *signature.PolicyContext, destRef, srcRef types.ImageRe
}
}()
if err := checkImageDestinationForCurrentRuntimeOS(src, dest); err != nil {
return err
}
if src.IsMultiImage() {
return errors.Errorf("can not copy %s: manifest contains multiple images", transports.ImageName(srcRef))
}
@@ -171,7 +171,7 @@ func Image(policyContext *signature.PolicyContext, destRef, srcRef types.ImageRe
sigs = [][]byte{}
} else {
writeReport("Getting image source signatures\n")
s, err := src.Signatures()
s, err := src.Signatures(context.TODO())
if err != nil {
return errors.Wrap(err, "Error reading signatures")
}
@@ -186,8 +186,16 @@ func Image(policyContext *signature.PolicyContext, destRef, srcRef types.ImageRe
canModifyManifest := len(sigs) == 0
manifestUpdates := types.ManifestUpdateOptions{}
manifestUpdates.InformationOnly.Destination = dest
if err := determineManifestConversion(&manifestUpdates, src, destSupportedManifestMIMETypes, canModifyManifest); err != nil {
if err := updateEmbeddedDockerReference(&manifestUpdates, dest, src, canModifyManifest); err != nil {
return err
}
// We compute preferredManifestMIMEType only to show it in error messages.
// Without having to add this context in an error message, we would be happy enough to know only that no conversion is needed.
preferredManifestMIMEType, otherManifestMIMETypeCandidates, err := determineManifestConversion(&manifestUpdates, src, dest.SupportedManifestMIMETypes(), canModifyManifest, options.ForceManifestMIMEType)
if err != nil {
return err
}
@@ -210,54 +218,58 @@ func Image(policyContext *signature.PolicyContext, destRef, srcRef types.ImageRe
return err
}
pendingImage := src
if !reflect.DeepEqual(manifestUpdates, types.ManifestUpdateOptions{InformationOnly: manifestUpdates.InformationOnly}) {
if !canModifyManifest {
return errors.Errorf("Internal error: copy needs an updated manifest but that was known to be forbidden")
}
manifestUpdates.InformationOnly.Destination = dest
pendingImage, err = src.UpdatedImage(manifestUpdates)
if err != nil {
return errors.Wrap(err, "Error creating an updated image manifest")
}
}
manifest, _, err := pendingImage.Manifest()
// With docker/distribution registries we do not know whether the registry accepts schema2 or schema1 only;
// and at least with the OpenShift registry "acceptschema2" option, there is no way to detect the support
// without actually trying to upload something and getting a types.ManifestTypeRejectedError.
// So, try the preferred manifest MIME type. If the process succeeds, fine…
manifest, err := ic.copyUpdatedConfigAndManifest()
if err != nil {
return errors.Wrap(err, "Error reading manifest")
}
logrus.Debugf("Writing manifest using preferred type %s failed: %v", preferredManifestMIMEType, err)
// … if it fails, _and_ the failure is because the manifest is rejected, we may have other options.
if _, isManifestRejected := errors.Cause(err).(types.ManifestTypeRejectedError); !isManifestRejected || len(otherManifestMIMETypeCandidates) == 0 {
// We don't have other options.
// In principle the code below would handle this as well, but the resulting error message is fairly ugly.
// Don't bother the user with MIME types if we have no choice.
return err
}
// If the original MIME type is acceptable, determineManifestConversion always uses it as preferredManifestMIMEType.
// So if we are here, we will definitely be trying to convert the manifest.
// With !canModifyManifest, that would just be a string of repeated failures for the same reason,
// so let's bail out early and with a better error message.
if !canModifyManifest {
return errors.Wrap(err, "Writing manifest failed (and converting it is not possible)")
}
if err := ic.copyConfig(pendingImage); err != nil {
return err
// errs is a list of errors when trying various manifest types. Also serves as an "upload succeeded" flag when set to nil.
errs := []string{fmt.Sprintf("%s(%v)", preferredManifestMIMEType, err)}
for _, manifestMIMEType := range otherManifestMIMETypeCandidates {
logrus.Debugf("Trying to use manifest type %s…", manifestMIMEType)
manifestUpdates.ManifestMIMEType = manifestMIMEType
attemptedManifest, err := ic.copyUpdatedConfigAndManifest()
if err != nil {
logrus.Debugf("Upload of manifest type %s failed: %v", manifestMIMEType, err)
errs = append(errs, fmt.Sprintf("%s(%v)", manifestMIMEType, err))
continue
}
// We have successfully uploaded a manifest.
manifest = attemptedManifest
errs = nil // Mark this as a success so that we don't abort below.
break
}
if errs != nil {
return fmt.Errorf("Uploading manifest failed, attempted the following formats: %s", strings.Join(errs, ", "))
}
}
if options.SignBy != "" {
mech, err := signature.NewGPGSigningMechanism()
newSig, err := createSignature(dest, manifest, options.SignBy, reportWriter)
if err != nil {
return errors.Wrap(err, "Error initializing GPG")
}
defer mech.Close()
if err := mech.SupportsSigning(); err != nil {
return errors.Wrap(err, "Signing not supported")
}
dockerReference := dest.Reference().DockerReference()
if dockerReference == nil {
return errors.Errorf("Cannot determine canonical Docker reference for destination %s", transports.ImageName(dest.Reference()))
}
writeReport("Signing manifest\n")
newSig, err := signature.SignDockerManifest(manifest, dockerReference.String(), mech, options.SignBy)
if err != nil {
return errors.Wrap(err, "Error creating signature")
return err
}
sigs = append(sigs, newSig)
}
writeReport("Writing manifest to image destination\n")
if err := dest.PutManifest(manifest); err != nil {
return errors.Wrap(err, "Error writing manifest")
}
writeReport("Storing signatures\n")
if err := dest.PutSignatures(sigs); err != nil {
return errors.Wrap(err, "Error writing signatures")
@@ -270,6 +282,40 @@ func Image(policyContext *signature.PolicyContext, destRef, srcRef types.ImageRe
return nil
}
func checkImageDestinationForCurrentRuntimeOS(src types.Image, dest types.ImageDestination) error {
if dest.MustMatchRuntimeOS() {
c, err := src.OCIConfig()
if err != nil {
return errors.Wrapf(err, "Error parsing image configuration")
}
osErr := fmt.Errorf("image operating system %q cannot be used on %q", c.OS, runtime.GOOS)
if runtime.GOOS == "windows" && c.OS == "linux" {
return osErr
} else if runtime.GOOS != "windows" && c.OS == "windows" {
return osErr
}
}
return nil
}
// updateEmbeddedDockerReference handles the Docker reference embedded in Docker schema1 manifests.
func updateEmbeddedDockerReference(manifestUpdates *types.ManifestUpdateOptions, dest types.ImageDestination, src types.Image, canModifyManifest bool) error {
destRef := dest.Reference().DockerReference()
if destRef == nil {
return nil // Destination does not care about Docker references
}
if !src.EmbeddedDockerReferenceConflicts(destRef) {
return nil // No reference embedded in the manifest, or it matches destRef already.
}
if !canModifyManifest {
return errors.Errorf("Copying a schema1 image with an embedded Docker reference to %s (Docker reference %s) would invalidate existing signatures. Explicitly enable signature removal to proceed anyway",
transports.ImageName(dest.Reference()), destRef.String())
}
manifestUpdates.EmbeddedDockerReference = destRef
return nil
}
// copyLayers copies layers from src/rawSource to dest, using and updating ic.manifestUpdates if necessary and ic.canModifyManifest.
func (ic *imageCopier) copyLayers() error {
srcInfos := ic.src.LayerInfos()
@@ -322,6 +368,45 @@ func layerDigestsDiffer(a, b []types.BlobInfo) bool {
return false
}
// copyUpdatedConfigAndManifest updates the image per ic.manifestUpdates, if necessary,
// stores the resulting config and manifest to the destination, and returns the stored manifest.
func (ic *imageCopier) copyUpdatedConfigAndManifest() ([]byte, error) {
pendingImage := ic.src
if !reflect.DeepEqual(*ic.manifestUpdates, types.ManifestUpdateOptions{InformationOnly: ic.manifestUpdates.InformationOnly}) {
if !ic.canModifyManifest {
return nil, errors.Errorf("Internal error: copy needs an updated manifest but that was known to be forbidden")
}
if !ic.diffIDsAreNeeded && ic.src.UpdatedImageNeedsLayerDiffIDs(*ic.manifestUpdates) {
// We have set ic.diffIDsAreNeeded based on the preferred MIME type returned by determineManifestConversion.
// So, this can only happen if we are trying to upload using one of the other MIME type candidates.
// Because UpdatedImageNeedsLayerDiffIDs is true only when converting from s1 to s2, this case should only arise
// when ic.dest.SupportedManifestMIMETypes() includes both s1 and s2, the upload using s1 failed, and we are now trying s2.
// Supposedly s2-only registries do not exist or are extremely rare, so failing with this error message is good enough for now.
// If handling such registries turns out to be necessary, we could compute ic.diffIDsAreNeeded based on the full list of manifest MIME type candidates.
return nil, errors.Errorf("Can not convert image to %s, preparing DiffIDs for this case is not supported", ic.manifestUpdates.ManifestMIMEType)
}
pi, err := ic.src.UpdatedImage(*ic.manifestUpdates)
if err != nil {
return nil, errors.Wrap(err, "Error creating an updated image manifest")
}
pendingImage = pi
}
manifest, _, err := pendingImage.Manifest()
if err != nil {
return nil, errors.Wrap(err, "Error reading manifest")
}
if err := ic.copyConfig(pendingImage); err != nil {
return nil, err
}
fmt.Fprintf(ic.reportWriter, "Writing manifest to image destination\n")
if err := ic.dest.PutManifest(manifest); err != nil {
return nil, errors.Wrap(err, "Error writing manifest")
}
return manifest, nil
}
// copyConfig copies config.json, if any, from src to dest.
func (ic *imageCopier) copyConfig(src types.Image) error {
srcInfo := src.ConfigInfo()
@@ -497,7 +582,7 @@ func (ic *imageCopier) copyBlobFromStream(srcStream io.Reader, srcInfo types.Blo
bar.ShowPercent = false
bar.Start()
destStream = bar.NewProxyReader(destStream)
defer fmt.Fprint(ic.reportWriter, "\n")
defer bar.Finish()
// === Send a copy of the original, uncompressed, stream, to a separate path if necessary.
var originalLayerReader io.Reader // DO NOT USE this other than to drain the input if no other consumer in the pipeline has done so.
@@ -575,41 +660,3 @@ func compressGoroutine(dest *io.PipeWriter, src io.Reader) {
_, err = io.Copy(zipper, src) // Sets err to nil, i.e. causes dest.Close()
}
// determineManifestConversion updates manifestUpdates to convert manifest to a supported MIME type, if necessary and canModifyManifest.
// Note that the conversion will only happen later, through src.UpdatedImage
func determineManifestConversion(manifestUpdates *types.ManifestUpdateOptions, src types.Image, destSupportedManifestMIMETypes []string, canModifyManifest bool) error {
if len(destSupportedManifestMIMETypes) == 0 {
return nil // Anything goes
}
supportedByDest := map[string]struct{}{}
for _, t := range destSupportedManifestMIMETypes {
supportedByDest[t] = struct{}{}
}
_, srcType, err := src.Manifest()
if err != nil { // This should have been cached?!
return errors.Wrap(err, "Error reading manifest")
}
if _, ok := supportedByDest[srcType]; ok {
logrus.Debugf("Manifest MIME type %s is declared supported by the destination", srcType)
return nil
}
// OK, we should convert the manifest.
if !canModifyManifest {
logrus.Debugf("Manifest MIME type %s is not supported by the destination, but we can't modify the manifest, hoping for the best...")
return nil // Take our chances - FIXME? Or should we fail without trying?
}
var chosenType = destSupportedManifestMIMETypes[0] // This one is known to be supported.
for _, t := range preferredManifestMIMETypes {
if _, ok := supportedByDest[t]; ok {
chosenType = t
break
}
}
logrus.Debugf("Will convert manifest from MIME type %s to %s", srcType, chosenType)
manifestUpdates.ManifestMIMEType = chosenType
return nil
}
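The fallback loop in copy.Image above is easier to see in isolation. Here is a toy, self-contained model of it (hypothetical `upload` function and error type; the real code uses types.ManifestTypeRejectedError, copyUpdatedConfigAndManifest, and errors.Cause from github.com/pkg/errors):
```go
package main

import (
	"fmt"
	"strings"
)

// manifestRejected stands in for types.ManifestTypeRejectedError.
type manifestRejected struct{ mime string }

func (e manifestRejected) Error() string { return "registry rejected " + e.mime }

// upload pretends to be a registry that only accepts "v2s1".
func upload(mime string) error {
	if mime != "v2s1" {
		return manifestRejected{mime}
	}
	return nil
}

// uploadWithFallback mirrors the strategy above: try the preferred type,
// and only on a type-rejection error walk the remaining candidates,
// collecting per-type errors for the final message.
func uploadWithFallback(preferred string, others []string) error {
	err := upload(preferred)
	if err == nil {
		return nil
	}
	if _, rejected := err.(manifestRejected); !rejected || len(others) == 0 {
		return err // not a type rejection, or no other candidates to try
	}
	attempts := []string{fmt.Sprintf("%s(%v)", preferred, err)}
	for _, mime := range others {
		if err := upload(mime); err != nil {
			attempts = append(attempts, fmt.Sprintf("%s(%v)", mime, err))
			continue
		}
		return nil // success with a fallback type
	}
	return fmt.Errorf("uploading manifest failed, attempted the following formats: %s", strings.Join(attempts, ", "))
}

func main() {
	fmt.Println(uploadWithFallback("v2s2", []string{"v2s1"})) // <nil>
	fmt.Println(uploadWithFallback("v2s2", nil))              // registry rejected v2s2
}
```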

vendor/github.com/containers/image/copy/manifest.go generated vendored Normal file

@@ -0,0 +1,106 @@
package copy
import (
"strings"
"github.com/containers/image/manifest"
"github.com/containers/image/types"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
)
// preferredManifestMIMETypes lists manifest MIME types in order of our preference, if we can't use the original manifest and need to convert.
// Prefer v2s2 to v2s1 because v2s2 does not need to be changed when uploading to a different location.
// Include v2s1 signed but not v2s1 unsigned, because docker/distribution requires a signature even if the unsigned MIME type is used.
var preferredManifestMIMETypes = []string{manifest.DockerV2Schema2MediaType, manifest.DockerV2Schema1SignedMediaType}
// orderedSet is a list of strings (MIME types in our case), with each string appearing at most once.
type orderedSet struct {
list []string
included map[string]struct{}
}
// newOrderedSet creates a correctly initialized orderedSet.
// [Sometimes it would be really nice if Golang had constructors…]
func newOrderedSet() *orderedSet {
return &orderedSet{
list: []string{},
included: map[string]struct{}{},
}
}
// append adds s to the end of os, only if it is not included already.
func (os *orderedSet) append(s string) {
if _, ok := os.included[s]; !ok {
os.list = append(os.list, s)
os.included[s] = struct{}{}
}
}
// determineManifestConversion updates manifestUpdates to convert manifest to a supported MIME type, if necessary and canModifyManifest.
// Note that the conversion will only happen later, through src.UpdatedImage
// Returns the preferred manifest MIME type (whether we are converting to it or using it unmodified),
// and a list of other possible alternatives, in order.
func determineManifestConversion(manifestUpdates *types.ManifestUpdateOptions, src types.Image, destSupportedManifestMIMETypes []string, canModifyManifest bool, forceManifestMIMEType string) (string, []string, error) {
_, srcType, err := src.Manifest()
if err != nil { // This should have been cached?!
return "", nil, errors.Wrap(err, "Error reading manifest")
}
if forceManifestMIMEType != "" {
destSupportedManifestMIMETypes = []string{forceManifestMIMEType}
}
if len(destSupportedManifestMIMETypes) == 0 {
return srcType, []string{}, nil // Anything goes; just use the original as is, do not try any conversions.
}
supportedByDest := map[string]struct{}{}
for _, t := range destSupportedManifestMIMETypes {
supportedByDest[t] = struct{}{}
}
// destSupportedManifestMIMETypes is a static guess; a particular registry may still only support a subset of the types.
// So, build a list of types to try in order of decreasing preference.
// FIXME? This treats manifest.DockerV2Schema1SignedMediaType and manifest.DockerV2Schema1MediaType as distinct,
// although we are not really making any conversion, and it is very unlikely that a destination would support one but not the other.
// In practice, schema1 is probably the lowest common denominator, so we would expect to try the first one of the MIME types
// and never attempt the other one.
prioritizedTypes := newOrderedSet()
// First of all, prefer to keep the original manifest unmodified.
if _, ok := supportedByDest[srcType]; ok {
prioritizedTypes.append(srcType)
}
if !canModifyManifest {
// We could also drop the !canModifyManifest parameter and have the caller
// make the choice; it is already doing that to an extent, to improve error
// messages. But it is nice to hide the “if !canModifyManifest, do no conversion”
// special case in here; the caller can then worry (or not) only about a good UI.
logrus.Debugf("We can't modify the manifest, hoping for the best...")
return srcType, []string{}, nil // Take our chances - FIXME? Or should we fail without trying?
}
// Then use our list of preferred types.
for _, t := range preferredManifestMIMETypes {
if _, ok := supportedByDest[t]; ok {
prioritizedTypes.append(t)
}
}
// Finally, try anything else the destination supports.
for _, t := range destSupportedManifestMIMETypes {
prioritizedTypes.append(t)
}
logrus.Debugf("Manifest has MIME type %s, ordered candidate list [%s]", srcType, strings.Join(prioritizedTypes.list, ", "))
if len(prioritizedTypes.list) == 0 { // Coverage: destSupportedManifestMIMETypes is not empty (or we would have exited in the “Anything goes” case above), so this should never happen.
return "", nil, errors.New("Internal error: no candidate MIME types")
}
preferredType := prioritizedTypes.list[0]
if preferredType != srcType {
manifestUpdates.ManifestMIMEType = preferredType
} else {
logrus.Debugf("... will first try using the original manifest unmodified")
}
return preferredType, prioritizedTypes.list[1:], nil
}
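To make the ordering rules concrete, here is a small runnable sketch of the same prioritization (a simplified stand-in with toy MIME-type names, not the vendored function):
```go
package main

import "fmt"

// prioritize reproduces the candidate ordering described above: the original
// type first (if the destination supports it), then our preferred types,
// then anything else the destination claims to support, de-duplicated while
// preserving order — the same job orderedSet does in manifest.go.
func prioritize(srcType string, destSupported, preferred []string) []string {
	seen := map[string]bool{}
	out := []string{}
	add := func(t string) {
		if !seen[t] {
			seen[t] = true
			out = append(out, t)
		}
	}
	supported := map[string]bool{}
	for _, t := range destSupported {
		supported[t] = true
	}
	if supported[srcType] {
		add(srcType)
	}
	for _, t := range preferred {
		if supported[t] {
			add(t)
		}
	}
	for _, t := range destSupported {
		add(t)
	}
	return out
}

func main() {
	// Source is v2s1; destination supports oci, v2s2 and v2s1; we prefer v2s2 then v2s1.
	fmt.Println(prioritize("v2s1", []string{"oci", "v2s2", "v2s1"}, []string{"v2s2", "v2s1"}))
	// Prints: [v2s1 v2s2 oci]
}
```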

vendor/github.com/containers/image/copy/sign.go generated vendored Normal file

@@ -0,0 +1,35 @@
package copy
import (
"fmt"
"io"
"github.com/containers/image/signature"
"github.com/containers/image/transports"
"github.com/containers/image/types"
"github.com/pkg/errors"
)
// createSignature creates a new signature of manifest at (identified by) dest using keyIdentity.
func createSignature(dest types.ImageDestination, manifest []byte, keyIdentity string, reportWriter io.Writer) ([]byte, error) {
mech, err := signature.NewGPGSigningMechanism()
if err != nil {
return nil, errors.Wrap(err, "Error initializing GPG")
}
defer mech.Close()
if err := mech.SupportsSigning(); err != nil {
return nil, errors.Wrap(err, "Signing not supported")
}
dockerReference := dest.Reference().DockerReference()
if dockerReference == nil {
return nil, errors.Errorf("Cannot determine canonical Docker reference for destination %s", transports.ImageName(dest.Reference()))
}
fmt.Fprintf(reportWriter, "Signing manifest\n")
newSig, err := signature.SignDockerManifest(manifest, dockerReference.String(), mech, keyIdentity)
if err != nil {
return nil, errors.Wrap(err, "Error creating signature")
}
return newSig, nil
}


@@ -4,19 +4,77 @@ import (
"io"
"io/ioutil"
"os"
"path/filepath"
"github.com/containers/image/types"
"github.com/opencontainers/go-digest"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
)
const version = "Directory Transport Version: 1.0\n"
// ErrNotContainerImageDir indicates that the directory doesn't match the expected contents of a directory created
// using the 'dir' transport
var ErrNotContainerImageDir = errors.New("not a containers image directory, don't want to overwrite important data")
type dirImageDestination struct {
ref dirReference
ref dirReference
compress bool
}
// newImageDestination returns an ImageDestination for writing to an existing directory.
func newImageDestination(ref dirReference) types.ImageDestination {
return &dirImageDestination{ref}
// newImageDestination returns an ImageDestination for writing to a directory.
func newImageDestination(ref dirReference, compress bool) (types.ImageDestination, error) {
d := &dirImageDestination{ref: ref, compress: compress}
// If the directory exists, check whether it is empty.
// If it is not empty, check whether its contents match those of a container image directory;
// if they do, overwrite the contents, otherwise return an error.
dirExists, err := pathExists(d.ref.resolvedPath)
if err != nil {
return nil, errors.Wrapf(err, "error checking for path %q", d.ref.resolvedPath)
}
if dirExists {
isEmpty, err := isDirEmpty(d.ref.resolvedPath)
if err != nil {
return nil, err
}
if !isEmpty {
versionExists, err := pathExists(d.ref.versionPath())
if err != nil {
return nil, errors.Wrapf(err, "error checking if path exists %q", d.ref.versionPath())
}
if versionExists {
contents, err := ioutil.ReadFile(d.ref.versionPath())
if err != nil {
return nil, err
}
// check whether the contents of the version file are what we expect
if string(contents) != version {
return nil, ErrNotContainerImageDir
}
} else {
return nil, ErrNotContainerImageDir
}
// delete directory contents so that only one image is in the directory at a time
if err = removeDirContents(d.ref.resolvedPath); err != nil {
return nil, errors.Wrapf(err, "error erasing contents in %q", d.ref.resolvedPath)
}
logrus.Debugf("overwriting existing container image directory %q", d.ref.resolvedPath)
}
} else {
// create directory if it doesn't exist
if err := os.MkdirAll(d.ref.resolvedPath, 0755); err != nil {
return nil, errors.Wrapf(err, "unable to create directory %q", d.ref.resolvedPath)
}
}
// create version file
err = ioutil.WriteFile(d.ref.versionPath(), []byte(version), 0755)
if err != nil {
return nil, errors.Wrapf(err, "error creating version file %q", d.ref.versionPath())
}
return d, nil
}
// Reference returns the reference used to set up this destination. Note that this should directly correspond to user's intent,
@@ -42,7 +100,7 @@ func (d *dirImageDestination) SupportsSignatures() error {
// ShouldCompressLayers returns true iff it is desirable to compress layer blobs written to this destination.
func (d *dirImageDestination) ShouldCompressLayers() bool {
return false
return d.compress
}
// AcceptsForeignLayerURLs returns false iff foreign layers in manifest should be actually
@@ -51,6 +109,11 @@ func (d *dirImageDestination) AcceptsForeignLayerURLs() bool {
return false
}
// MustMatchRuntimeOS returns true iff the destination can store only images targeted for the current runtime OS. False otherwise.
func (d *dirImageDestination) MustMatchRuntimeOS() bool {
return false
}
// PutBlob writes contents of stream and returns data representing the result (with all data filled in).
// inputInfo.Digest can be optionally provided if known; it is not mandatory for the implementation to verify it.
// inputInfo.Size is the expected length of stream, if known.
@@ -118,6 +181,10 @@ func (d *dirImageDestination) ReapplyBlob(info types.BlobInfo) (types.BlobInfo,
return info, nil
}
// PutManifest writes manifest to the destination.
// FIXME? This should also receive a MIME type if known, to differentiate between schema versions.
// If the destination is in principle available but refuses this manifest type (e.g. it does not recognize the schema),
// and may accept a different manifest type, the returned error must be a ManifestTypeRejectedError.
func (d *dirImageDestination) PutManifest(manifest []byte) error {
return ioutil.WriteFile(d.ref.manifestPath(), manifest, 0644)
}
@@ -138,3 +205,39 @@ func (d *dirImageDestination) PutSignatures(signatures [][]byte) error {
func (d *dirImageDestination) Commit() error {
return nil
}
// returns true if path exists
func pathExists(path string) (bool, error) {
_, err := os.Stat(path)
if err == nil {
return true, nil
}
if err != nil && os.IsNotExist(err) {
return false, nil
}
return false, err
}
// returns true if directory is empty
func isDirEmpty(path string) (bool, error) {
files, err := ioutil.ReadDir(path)
if err != nil {
return false, err
}
return len(files) == 0, nil
}
// deletes the contents of a directory
func removeDirContents(path string) error {
files, err := ioutil.ReadDir(path)
if err != nil {
return err
}
for _, file := range files {
if err := os.RemoveAll(filepath.Join(path, file.Name())); err != nil {
return err
}
}
return nil
}


@@ -1,6 +1,7 @@
package directory
import (
"context"
"io"
"io/ioutil"
"os"
@@ -59,7 +60,7 @@ func (s *dirImageSource) GetBlob(info types.BlobInfo) (io.ReadCloser, int64, err
return r, fi.Size(), nil
}
func (s *dirImageSource) GetSignatures() ([][]byte, error) {
func (s *dirImageSource) GetSignatures(ctx context.Context) ([][]byte, error) {
signatures := [][]byte{}
for i := 0; ; i++ {
signature, err := ioutil.ReadFile(s.ref.signaturePath(i))


@@ -143,18 +143,20 @@ func (ref dirReference) NewImage(ctx *types.SystemContext) (types.Image, error)
return image.FromSource(src)
}
// NewImageSource returns a types.ImageSource for this reference,
// asking the backend to use a manifest from requestedManifestMIMETypes if possible.
// nil requestedManifestMIMETypes means manifest.DefaultRequestedManifestMIMETypes.
// NewImageSource returns a types.ImageSource for this reference.
// The caller must call .Close() on the returned ImageSource.
func (ref dirReference) NewImageSource(ctx *types.SystemContext, requestedManifestMIMETypes []string) (types.ImageSource, error) {
func (ref dirReference) NewImageSource(ctx *types.SystemContext) (types.ImageSource, error) {
return newImageSource(ref), nil
}
// NewImageDestination returns a types.ImageDestination for this reference.
// The caller must call .Close() on the returned ImageDestination.
func (ref dirReference) NewImageDestination(ctx *types.SystemContext) (types.ImageDestination, error) {
return newImageDestination(ref), nil
compress := false
if ctx != nil {
compress = ctx.DirForceCompress
}
return newImageDestination(ref, compress)
}
// DeleteImage deletes the named image from the registry, if supported.
@@ -177,3 +179,8 @@ func (ref dirReference) layerPath(digest digest.Digest) string {
func (ref dirReference) signaturePath(index int) string {
return filepath.Join(ref.path, fmt.Sprintf("signature-%d", index+1))
}
// versionPath returns a path for the version file within a directory using our conventions.
func (ref dirReference) versionPath() string {
return filepath.Join(ref.path, "version")
}
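Taken together with the layerPath, signaturePath and versionPath helpers above, a dir: image written by this destination looks roughly like the following (an illustrative sketch; the manifest file name and blob naming are assumptions inferred from the helpers, not shown verbatim in this diff):
```
my-directory/
├── version          contains "Directory Transport Version: 1.0"
├── manifest.json    manifestPath() (assumed name)
├── <blob digest>    one file per config/layer blob (layerPath)
├── signature-1      signaturePath(0); numbering starts at 1
└── signature-2
```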


@@ -19,15 +19,24 @@ func newImageDestination(ctx *types.SystemContext, ref archiveReference) (types.
if ref.destinationRef == nil {
return nil, errors.Errorf("docker-archive: destination reference not supplied (must be of form <path>:<reference:tag>)")
}
fh, err := os.OpenFile(ref.path, os.O_WRONLY|os.O_EXCL|os.O_CREATE, 0644)
// ref.path can be either a pipe or a regular file
// in the case of a pipe, we require that we can open it for write
// in the case of a regular file, we don't want to overwrite any pre-existing file
// so we check for Size() == 0 below (This is racy, but using O_EXCL would also be racy,
// only in a different way. Either way, it's up to the user to not have two writers to the same path.)
fh, err := os.OpenFile(ref.path, os.O_WRONLY|os.O_CREATE, 0644)
if err != nil {
// FIXME: It should be possible to modify archives, but the only really
// sane way of doing it is to create a copy of the image, modify
// it and then do a rename(2).
if os.IsExist(err) {
err = errors.New("docker-archive doesn't support modifying existing images")
}
return nil, err
return nil, errors.Wrapf(err, "error opening file %q", ref.path)
}
fhStat, err := fh.Stat()
if err != nil {
return nil, errors.Wrapf(err, "error statting file %q", ref.path)
}
if fhStat.Mode().IsRegular() && fhStat.Size() != 0 {
return nil, errors.New("docker-archive doesn't support modifying existing images")
}
return &archiveImageDestination{


@@ -1,9 +1,9 @@
package archive
import (
"github.com/Sirupsen/logrus"
"github.com/containers/image/docker/tarfile"
"github.com/containers/image/types"
"github.com/sirupsen/logrus"
)
type archiveImageSource struct {


@@ -134,11 +134,9 @@ func (ref archiveReference) NewImage(ctx *types.SystemContext) (types.Image, err
return ctrImage.FromSource(src)
}
// NewImageSource returns a types.ImageSource for this reference,
// asking the backend to use a manifest from requestedManifestMIMETypes if possible.
// nil requestedManifestMIMETypes means manifest.DefaultRequestedManifestMIMETypes.
// NewImageSource returns a types.ImageSource for this reference.
// The caller must call .Close() on the returned ImageSource.
func (ref archiveReference) NewImageSource(ctx *types.SystemContext, requestedManifestMIMETypes []string) (types.ImageSource, error) {
func (ref archiveReference) NewImageSource(ctx *types.SystemContext) (types.ImageSource, error) {
return newImageSource(ctx, ref), nil
}


@@ -0,0 +1,69 @@
package daemon
import (
"net/http"
"path/filepath"
"github.com/containers/image/types"
dockerclient "github.com/docker/docker/client"
"github.com/docker/go-connections/tlsconfig"
)
const (
// The default API version to be used in case none is explicitly specified
defaultAPIVersion = "1.22"
)
// newDockerClient initializes a new API client based on the passed SystemContext.
func newDockerClient(ctx *types.SystemContext) (*dockerclient.Client, error) {
host := dockerclient.DefaultDockerHost
if ctx != nil && ctx.DockerDaemonHost != "" {
host = ctx.DockerDaemonHost
}
// Sadly, unix:// sockets don't work transparently with dockerclient.NewClient.
// They work fine with a nil httpClient; with a non-nil httpClient, the transport's
// TLSClientConfig must be nil (or the client will try using HTTPS over the PF_UNIX socket
// regardless of the values in the *tls.Config), and we would have to call sockets.ConfigureTransport.
//
// We don't really want to configure anything for unix:// sockets, so just pass a nil *http.Client.
proto, _, _, err := dockerclient.ParseHost(host)
if err != nil {
return nil, err
}
var httpClient *http.Client
if proto != "unix" {
hc, err := tlsConfig(ctx)
if err != nil {
return nil, err
}
httpClient = hc
}
return dockerclient.NewClient(host, defaultAPIVersion, httpClient, nil)
}
func tlsConfig(ctx *types.SystemContext) (*http.Client, error) {
options := tlsconfig.Options{}
if ctx != nil && ctx.DockerDaemonInsecureSkipTLSVerify {
options.InsecureSkipVerify = true
}
if ctx != nil && ctx.DockerDaemonCertPath != "" {
options.CAFile = filepath.Join(ctx.DockerDaemonCertPath, "ca.pem")
options.CertFile = filepath.Join(ctx.DockerDaemonCertPath, "cert.pem")
options.KeyFile = filepath.Join(ctx.DockerDaemonCertPath, "key.pem")
}
tlsc, err := tlsconfig.Client(options)
if err != nil {
return nil, err
}
return &http.Client{
Transport: &http.Transport{
TLSClientConfig: tlsc,
},
CheckRedirect: dockerclient.CheckRedirect,
}, nil
}
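A standalone sketch of the same TLS wiring (assuming certPath holds ca.pem/cert.pem/key.pem as in the diff; this mirrors tlsConfig() but is not the vendored function itself):
```go
package main

import (
	"fmt"
	"net/http"
	"path/filepath"

	"github.com/docker/go-connections/tlsconfig"
)

// httpClientFor builds an *http.Client whose transport trusts ca.pem and
// presents cert.pem/key.pem from certPath, optionally skipping verification.
func httpClientFor(certPath string, insecure bool) (*http.Client, error) {
	opts := tlsconfig.Options{InsecureSkipVerify: insecure}
	if certPath != "" {
		opts.CAFile = filepath.Join(certPath, "ca.pem")
		opts.CertFile = filepath.Join(certPath, "cert.pem")
		opts.KeyFile = filepath.Join(certPath, "key.pem")
	}
	tlsc, err := tlsconfig.Client(opts)
	if err != nil {
		return nil, err
	}
	return &http.Client{Transport: &http.Transport{TLSClientConfig: tlsc}}, nil
}

func main() {
	c, err := httpClientFor("", true) // no cert dir, skip verification
	fmt.Println(c != nil, err)
}
```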


@@ -3,12 +3,12 @@ package daemon
import (
"io"
"github.com/Sirupsen/logrus"
"github.com/containers/image/docker/reference"
"github.com/containers/image/docker/tarfile"
"github.com/containers/image/types"
"github.com/docker/docker/client"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
"golang.org/x/net/context"
)
@@ -24,7 +24,7 @@ type daemonImageDestination struct {
}
// newImageDestination returns a types.ImageDestination for the specified image reference.
func newImageDestination(systemCtx *types.SystemContext, ref daemonReference) (types.ImageDestination, error) {
func newImageDestination(ctx *types.SystemContext, ref daemonReference) (types.ImageDestination, error) {
if ref.ref == nil {
return nil, errors.Errorf("Invalid destination docker-daemon:%s: a destination must be a name:tag", ref.StringWithinTransport())
}
@@ -33,7 +33,7 @@ func newImageDestination(systemCtx *types.SystemContext, ref daemonReference) (t
return nil, errors.Errorf("Invalid destination docker-daemon:%s: a destination must be a name:tag", ref.StringWithinTransport())
}
c, err := client.NewClient(client.DefaultDockerHost, "1.22", nil, nil) // FIXME: overridable host
c, err := newDockerClient(ctx)
if err != nil {
return nil, errors.Wrap(err, "Error initializing docker engine client")
}
@@ -42,8 +42,8 @@ func newImageDestination(systemCtx *types.SystemContext, ref daemonReference) (t
// Commit() may never be called, so we may never read from this channel; so, make this buffered to allow imageLoadGoroutine to write status and terminate even if we never read it.
statusChannel := make(chan error, 1)
ctx, goroutineCancel := context.WithCancel(context.Background())
go imageLoadGoroutine(ctx, c, reader, statusChannel)
goroutineContext, goroutineCancel := context.WithCancel(context.Background())
go imageLoadGoroutine(goroutineContext, c, reader, statusChannel)
return &daemonImageDestination{
ref: ref,
@@ -78,6 +78,11 @@ func imageLoadGoroutine(ctx context.Context, c *client.Client, reader *io.PipeRe
defer resp.Body.Close()
}
// MustMatchRuntimeOS returns true iff the destination can store only images targeted for the current runtime OS. False otherwise.
func (d *daemonImageDestination) MustMatchRuntimeOS() bool {
return true
}
// Close removes resources associated with an initialized ImageDestination, if any.
func (d *daemonImageDestination) Close() error {
if !d.committed {


@@ -7,7 +7,6 @@ import (
"github.com/containers/image/docker/tarfile"
"github.com/containers/image/types"
"github.com/docker/docker/client"
"github.com/pkg/errors"
"golang.org/x/net/context"
)
@@ -35,7 +34,7 @@ type layerInfo struct {
// is the config, and that the following len(RootFS) files are the layers, but that feels
// way too brittle.)
func newImageSource(ctx *types.SystemContext, ref daemonReference) (types.ImageSource, error) {
c, err := client.NewClient(client.DefaultDockerHost, "1.22", nil, nil) // FIXME: overridable host
c, err := newDockerClient(ctx)
if err != nil {
return nil, errors.Wrap(err, "Error initializing docker engine client")
}


@@ -161,11 +161,9 @@ func (ref daemonReference) NewImage(ctx *types.SystemContext) (types.Image, erro
return image.FromSource(src)
}
// NewImageSource returns a types.ImageSource for this reference,
// asking the backend to use a manifest from requestedManifestMIMETypes if possible.
// nil requestedManifestMIMETypes means manifest.DefaultRequestedManifestMIMETypes.
// NewImageSource returns a types.ImageSource for this reference.
// The caller must call .Close() on the returned ImageSource.
func (ref daemonReference) NewImageSource(ctx *types.SystemContext, requestedManifestMIMETypes []string) (types.ImageSource, error) {
func (ref daemonReference) NewImageSource(ctx *types.SystemContext) (types.ImageSource, error) {
return newImageSource(ctx, ref)
}


@@ -1,38 +1,33 @@
package docker
import (
"context"
"crypto/tls"
"encoding/base64"
"encoding/json"
"fmt"
"io"
"io/ioutil"
"net"
"net/http"
"os"
"path/filepath"
"strings"
"time"
"github.com/Sirupsen/logrus"
"github.com/containers/image/docker/reference"
"github.com/containers/image/pkg/docker/config"
"github.com/containers/image/pkg/tlsclientconfig"
"github.com/containers/image/types"
"github.com/containers/storage/pkg/homedir"
"github.com/docker/distribution/registry/client"
"github.com/docker/go-connections/sockets"
"github.com/docker/go-connections/tlsconfig"
"github.com/opencontainers/go-digest"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
)
const (
dockerHostname = "docker.io"
dockerRegistry = "registry-1.docker.io"
dockerAuthRegistry = "https://index.docker.io/v1/"
dockerHostname = "docker.io"
dockerRegistry = "registry-1.docker.io"
dockerCfg = ".docker"
dockerCfgFileName = "config.json"
dockerCfgObsolete = ".dockercfg"
systemPerHostCertDirPath = "/etc/docker/certs.d"
resolvedPingV2URL = "%s://%s/v2/"
resolvedPingV1URL = "%s://%s/v1/_ping"
@@ -48,9 +43,13 @@ const (
extensionSignatureTypeAtomic = "atomic" // extensionSignature.Type
)
// ErrV1NotSupported is returned when we're trying to talk to a
// docker V1 registry.
var ErrV1NotSupported = errors.New("can't talk to a V1 docker registry")
var (
// ErrV1NotSupported is returned when we're trying to talk to a
// docker V1 registry.
ErrV1NotSupported = errors.New("can't talk to a V1 docker registry")
// ErrUnauthorizedForCredentials is returned when the status code returned is 401
ErrUnauthorizedForCredentials = errors.New("unable to retrieve auth token: invalid username/password")
)
// extensionSignature and extensionSignatureList come from github.com/openshift/origin/pkg/dockerregistry/server/signaturedispatcher.go:
// signature represents a Docker image signature.
@@ -109,150 +108,121 @@ func serverDefault() *tls.Config {
}
}
func newTransport() *http.Transport {
direct := &net.Dialer{
Timeout: 30 * time.Second,
KeepAlive: 30 * time.Second,
DualStack: true,
// dockerCertDir returns a path to a directory to be consumed by tlsclientconfig.SetupCertificates() depending on ctx and hostPort.
func dockerCertDir(ctx *types.SystemContext, hostPort string) string {
if ctx != nil && ctx.DockerCertPath != "" {
return ctx.DockerCertPath
}
tr := &http.Transport{
Proxy: http.ProxyFromEnvironment,
Dial: direct.Dial,
TLSHandshakeTimeout: 10 * time.Second,
// TODO(dmcgowan): Call close idle connections when complete and use keep alive
DisableKeepAlives: true,
var hostCertDir string
if ctx != nil && ctx.DockerPerHostCertDirPath != "" {
hostCertDir = ctx.DockerPerHostCertDirPath
} else if ctx != nil && ctx.RootForImplicitAbsolutePaths != "" {
hostCertDir = filepath.Join(ctx.RootForImplicitAbsolutePaths, systemPerHostCertDirPath)
} else {
hostCertDir = systemPerHostCertDirPath
}
proxyDialer, err := sockets.DialerFromEnvironment(direct)
if err == nil {
tr.Dial = proxyDialer.Dial
}
return tr
return filepath.Join(hostCertDir, hostPort)
}
func setupCertificates(dir string, tlsc *tls.Config) error {
if dir == "" {
return nil
}
fs, err := ioutil.ReadDir(dir)
if err != nil && !os.IsNotExist(err) {
return err
}
for _, f := range fs {
fullPath := filepath.Join(dir, f.Name())
if strings.HasSuffix(f.Name(), ".crt") {
systemPool, err := tlsconfig.SystemCertPool()
if err != nil {
return errors.Wrap(err, "unable to get system cert pool")
}
tlsc.RootCAs = systemPool
logrus.Debugf("crt: %s", fullPath)
data, err := ioutil.ReadFile(fullPath)
if err != nil {
return err
}
tlsc.RootCAs.AppendCertsFromPEM(data)
}
if strings.HasSuffix(f.Name(), ".cert") {
certName := f.Name()
keyName := certName[:len(certName)-5] + ".key"
logrus.Debugf("cert: %s", fullPath)
if !hasFile(fs, keyName) {
return errors.Errorf("missing key %s for client certificate %s. Note that CA certificates should use the extension .crt", keyName, certName)
}
cert, err := tls.LoadX509KeyPair(filepath.Join(dir, certName), filepath.Join(dir, keyName))
if err != nil {
return err
}
tlsc.Certificates = append(tlsc.Certificates, cert)
}
if strings.HasSuffix(f.Name(), ".key") {
keyName := f.Name()
certName := keyName[:len(keyName)-4] + ".cert"
logrus.Debugf("key: %s", fullPath)
if !hasFile(fs, certName) {
return errors.Errorf("missing client certificate %s for key %s", certName, keyName)
}
}
}
return nil
}
func hasFile(files []os.FileInfo, name string) bool {
for _, f := range files {
if f.Name() == name {
return true
}
}
return false
}
// newDockerClient returns a new dockerClient instance for refHostname (a host as specified in the Docker image reference, not canonicalized to dockerRegistry)
// newDockerClientFromRef returns a new dockerClient instance for refHostname (a host as specified in the Docker image reference, not canonicalized to dockerRegistry)
// “write” specifies whether the client will be used for "write" access (in particular passed to lookaside.go:toplevelFromSection)
func newDockerClient(ctx *types.SystemContext, ref dockerReference, write bool, actions string) (*dockerClient, error) {
func newDockerClientFromRef(ctx *types.SystemContext, ref dockerReference, write bool, actions string) (*dockerClient, error) {
registry := reference.Domain(ref.ref)
if registry == dockerHostname {
registry = dockerRegistry
}
username, password, err := getAuth(ctx, reference.Domain(ref.ref))
username, password, err := config.GetAuthentication(ctx, reference.Domain(ref.ref))
if err != nil {
return nil, err
return nil, errors.Wrapf(err, "error getting username and password")
}
tr := newTransport()
if ctx != nil && (ctx.DockerCertPath != "" || ctx.DockerInsecureSkipTLSVerify) {
tlsc := &tls.Config{}
if err := setupCertificates(ctx.DockerCertPath, tlsc); err != nil {
return nil, err
}
tlsc.InsecureSkipVerify = ctx.DockerInsecureSkipTLSVerify
tr.TLSClientConfig = tlsc
}
if tr.TLSClientConfig == nil {
tr.TLSClientConfig = serverDefault()
}
client := &http.Client{Transport: tr}
sigBase, err := configuredSignatureStorageBase(ctx, ref, write)
if err != nil {
return nil, err
}
remoteName := reference.Path(ref.ref)
return newDockerClientWithDetails(ctx, registry, username, password, actions, sigBase, remoteName)
}
// newDockerClientWithDetails returns a new dockerClient instance for the given parameters
func newDockerClientWithDetails(ctx *types.SystemContext, registry, username, password, actions string, sigBase signatureStorageBase, remoteName string) (*dockerClient, error) {
hostName := registry
if registry == dockerHostname {
registry = dockerRegistry
}
tr := tlsclientconfig.NewTransport()
tr.TLSClientConfig = serverDefault()
// It is undefined whether the host[:port] string for dockerHostname should be dockerHostname or dockerRegistry,
// because docker/docker does not read the certs.d subdirectory at all in that case. We use the user-visible
// dockerHostname here, because it is more symmetrical to read the configuration in that case as well, and because
// generally the UI hides the existence of the different dockerRegistry. But note that this behavior is
// undocumented and may change if docker/docker changes.
certDir := dockerCertDir(ctx, hostName)
if err := tlsclientconfig.SetupCertificates(certDir, tr.TLSClientConfig); err != nil {
return nil, err
}
if ctx != nil && ctx.DockerInsecureSkipTLSVerify {
tr.TLSClientConfig.InsecureSkipVerify = true
}
return &dockerClient{
ctx: ctx,
registry: registry,
username: username,
password: password,
client: client,
client: &http.Client{Transport: tr},
signatureBase: sigBase,
scope: authScope{
actions: actions,
remoteName: reference.Path(ref.ref),
remoteName: remoteName,
},
}, nil
}
// CheckAuth validates the credentials by attempting to log into the registry
// returns an error if an error occurred while making the HTTP request or the status code received was 401
func CheckAuth(ctx context.Context, sCtx *types.SystemContext, username, password, registry string) error {
newLoginClient, err := newDockerClientWithDetails(sCtx, registry, username, password, "", nil, "")
if err != nil {
return errors.Wrapf(err, "error creating new docker client")
}
resp, err := newLoginClient.makeRequest(ctx, "GET", "/v2/", nil, nil)
if err != nil {
return err
}
defer resp.Body.Close()
switch resp.StatusCode {
case http.StatusOK:
return nil
case http.StatusUnauthorized:
return ErrUnauthorizedForCredentials
default:
return errors.Errorf("error occured with status code %q", resp.StatusCode)
}
}
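A hypothetical caller of the new CheckAuth helper (illustrative registry and credentials; the signature and ErrUnauthorizedForCredentials come from the diff above):
```go
package main

import (
	"context"
	"fmt"

	"github.com/containers/image/docker"
	"github.com/containers/image/types"
)

func main() {
	// Probe the registry with the given credentials via GET /v2/.
	err := docker.CheckAuth(context.Background(), &types.SystemContext{}, "user", "secret", "registry.example.com")
	switch err {
	case nil:
		fmt.Println("credentials accepted")
	case docker.ErrUnauthorizedForCredentials:
		fmt.Println("bad username/password")
	default:
		fmt.Println("request failed:", err)
	}
}
```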
// makeRequest creates and executes a http.Request with the specified parameters, adding authentication and TLS options for the Docker client.
// The host name and schema is taken from the client or autodetected, and the path is relative to it, i.e. the path usually starts with /v2/.
func (c *dockerClient) makeRequest(method, path string, headers map[string][]string, stream io.Reader) (*http.Response, error) {
if err := c.detectProperties(); err != nil {
func (c *dockerClient) makeRequest(ctx context.Context, method, path string, headers map[string][]string, stream io.Reader) (*http.Response, error) {
if err := c.detectProperties(ctx); err != nil {
return nil, err
}
url := fmt.Sprintf("%s://%s%s", c.scheme, c.registry, path)
return c.makeRequestToResolvedURL(method, url, headers, stream, -1, true)
return c.makeRequestToResolvedURL(ctx, method, url, headers, stream, -1, true)
}
// makeRequestToResolvedURL creates and executes a http.Request with the specified parameters, adding authentication and TLS options for the Docker client.
// streamLen, if not -1, specifies the length of the data expected on stream.
// makeRequest should generally be preferred.
// TODO(runcom): too many arguments here, use a struct
func (c *dockerClient) makeRequestToResolvedURL(method, url string, headers map[string][]string, stream io.Reader, streamLen int64, sendAuth bool) (*http.Response, error) {
func (c *dockerClient) makeRequestToResolvedURL(ctx context.Context, method, url string, headers map[string][]string, stream io.Reader, streamLen int64, sendAuth bool) (*http.Response, error) {
req, err := http.NewRequest(method, url, stream)
if err != nil {
return nil, err
}
req = req.WithContext(ctx)
if streamLen != -1 { // Do not blindly overwrite if streamLen == -1, http.NewRequest above can figure out the length of bytes.Reader and similar objects without us having to compute it.
req.ContentLength = streamLen
}
@@ -289,38 +259,47 @@ func (c *dockerClient) setupRequestAuth(req *http.Request) error {
if len(c.challenges) == 0 {
return nil
}
// assume just one...
challenge := c.challenges[0]
switch challenge.Scheme {
case "basic":
req.SetBasicAuth(c.username, c.password)
return nil
case "bearer":
if c.token == nil || time.Now().After(c.tokenExpiration) {
realm, ok := challenge.Parameters["realm"]
if !ok {
return errors.Errorf("missing realm in bearer auth challenge")
schemeNames := make([]string, 0, len(c.challenges))
for _, challenge := range c.challenges {
schemeNames = append(schemeNames, challenge.Scheme)
switch challenge.Scheme {
case "basic":
req.SetBasicAuth(c.username, c.password)
return nil
case "bearer":
if c.token == nil || time.Now().After(c.tokenExpiration) {
realm, ok := challenge.Parameters["realm"]
if !ok {
return errors.Errorf("missing realm in bearer auth challenge")
}
service, _ := challenge.Parameters["service"] // Will be "" if not present
var scope string
if c.scope.remoteName != "" && c.scope.actions != "" {
scope = fmt.Sprintf("repository:%s:%s", c.scope.remoteName, c.scope.actions)
}
token, err := c.getBearerToken(req.Context(), realm, service, scope)
if err != nil {
return err
}
c.token = token
c.tokenExpiration = token.IssuedAt.Add(time.Duration(token.ExpiresIn) * time.Second)
}
service, _ := challenge.Parameters["service"] // Will be "" if not present
scope := fmt.Sprintf("repository:%s:%s", c.scope.remoteName, c.scope.actions)
token, err := c.getBearerToken(realm, service, scope)
if err != nil {
return err
}
c.token = token
c.tokenExpiration = token.IssuedAt.Add(time.Duration(token.ExpiresIn) * time.Second)
req.Header.Set("Authorization", fmt.Sprintf("Bearer %s", c.token.Token))
return nil
default:
logrus.Debugf("no handler for %s authentication", challenge.Scheme)
}
req.Header.Set("Authorization", fmt.Sprintf("Bearer %s", c.token.Token))
return nil
}
return errors.Errorf("no handler for %s authentication", challenge.Scheme)
logrus.Infof("None of the challenges sent by server (%s) are supported, trying an unauthenticated request anyway", strings.Join(schemeNames, ", "))
return nil
}
func (c *dockerClient) getBearerToken(realm, service, scope string) (*bearerToken, error) {
func (c *dockerClient) getBearerToken(ctx context.Context, realm, service, scope string) (*bearerToken, error) {
authReq, err := http.NewRequest("GET", realm, nil)
if err != nil {
return nil, err
}
authReq = authReq.WithContext(ctx)
getParams := authReq.URL.Query()
if service != "" {
getParams.Add("service", service)
@@ -332,7 +311,7 @@ func (c *dockerClient) getBearerToken(realm, service, scope string) (*bearerToke
if c.username != "" && c.password != "" {
authReq.SetBasicAuth(c.username, c.password)
}
tr := newTransport()
tr := tlsclientconfig.NewTransport()
// TODO(runcom): insecure for now to contact the external token service
tr.TLSClientConfig = &tls.Config{InsecureSkipVerify: true}
client := &http.Client{Transport: tr}
@@ -343,7 +322,7 @@ func (c *dockerClient) getBearerToken(realm, service, scope string) (*bearerToke
defer res.Body.Close()
switch res.StatusCode {
case http.StatusUnauthorized:
return nil, errors.Errorf("unable to retrieve auth token: 401 unauthorized")
return nil, ErrUnauthorizedForCredentials
case http.StatusOK:
break
default:
@@ -367,70 +346,16 @@ func (c *dockerClient) getBearerToken(realm, service, scope string) (*bearerToke
return &token, nil
}
func getAuth(ctx *types.SystemContext, registry string) (string, string, error) {
if ctx != nil && ctx.DockerAuthConfig != nil {
return ctx.DockerAuthConfig.Username, ctx.DockerAuthConfig.Password, nil
}
var dockerAuth dockerConfigFile
dockerCfgPath := filepath.Join(getDefaultConfigDir(".docker"), dockerCfgFileName)
if _, err := os.Stat(dockerCfgPath); err == nil {
j, err := ioutil.ReadFile(dockerCfgPath)
if err != nil {
return "", "", err
}
if err := json.Unmarshal(j, &dockerAuth); err != nil {
return "", "", err
}
} else if os.IsNotExist(err) {
// try old config path
oldDockerCfgPath := filepath.Join(getDefaultConfigDir(dockerCfgObsolete))
if _, err := os.Stat(oldDockerCfgPath); err != nil {
if os.IsNotExist(err) {
return "", "", nil
}
return "", "", errors.Wrap(err, oldDockerCfgPath)
}
j, err := ioutil.ReadFile(oldDockerCfgPath)
if err != nil {
return "", "", err
}
if err := json.Unmarshal(j, &dockerAuth.AuthConfigs); err != nil {
return "", "", err
}
} else if err != nil {
return "", "", errors.Wrap(err, dockerCfgPath)
}
// I'm feeling lucky
if c, exists := dockerAuth.AuthConfigs[registry]; exists {
return decodeDockerAuth(c.Auth)
}
// bad luck; let's normalize the entries first
registry = normalizeRegistry(registry)
normalizedAuths := map[string]dockerAuthConfig{}
for k, v := range dockerAuth.AuthConfigs {
normalizedAuths[normalizeRegistry(k)] = v
}
if c, exists := normalizedAuths[registry]; exists {
return decodeDockerAuth(c.Auth)
}
return "", "", nil
}
// detectProperties detects various properties of the registry.
// See the dockerClient documentation for members which are affected by this.
func (c *dockerClient) detectProperties() error {
func (c *dockerClient) detectProperties(ctx context.Context) error {
if c.scheme != "" {
return nil
}
ping := func(scheme string) error {
url := fmt.Sprintf(resolvedPingV2URL, scheme, c.registry)
resp, err := c.makeRequestToResolvedURL("GET", url, nil, nil, -1, true)
resp, err := c.makeRequestToResolvedURL(ctx, "GET", url, nil, nil, -1, true)
logrus.Debugf("Ping %s err %#v", url, err)
if err != nil {
return err
@@ -457,7 +382,7 @@ func (c *dockerClient) detectProperties() error {
// best effort to understand if we're talking to a V1 registry
pingV1 := func(scheme string) bool {
url := fmt.Sprintf(resolvedPingV1URL, scheme, c.registry)
resp, err := c.makeRequestToResolvedURL("GET", url, nil, nil, -1, true)
resp, err := c.makeRequestToResolvedURL(ctx, "GET", url, nil, nil, -1, true)
logrus.Debugf("Ping %s err %#v", url, err)
if err != nil {
return false
@@ -482,9 +407,9 @@ func (c *dockerClient) detectProperties() error {
// getExtensionsSignatures returns signatures from the X-Registry-Supports-Signatures API extension,
// using the original data structures.
func (c *dockerClient) getExtensionsSignatures(ref dockerReference, manifestDigest digest.Digest) (*extensionSignatureList, error) {
func (c *dockerClient) getExtensionsSignatures(ctx context.Context, ref dockerReference, manifestDigest digest.Digest) (*extensionSignatureList, error) {
path := fmt.Sprintf(extensionsSignaturePath, reference.Path(ref.ref), manifestDigest)
res, err := c.makeRequest("GET", path, nil, nil)
res, err := c.makeRequest(ctx, "GET", path, nil, nil)
if err != nil {
return nil, err
}
@@ -503,55 +428,3 @@ func (c *dockerClient) getExtensionsSignatures(ref dockerReference, manifestDige
}
return &parsedBody, nil
}
func getDefaultConfigDir(confPath string) string {
return filepath.Join(homedir.Get(), confPath)
}
type dockerAuthConfig struct {
Auth string `json:"auth,omitempty"`
}
type dockerConfigFile struct {
AuthConfigs map[string]dockerAuthConfig `json:"auths"`
}
func decodeDockerAuth(s string) (string, string, error) {
decoded, err := base64.StdEncoding.DecodeString(s)
if err != nil {
return "", "", err
}
parts := strings.SplitN(string(decoded), ":", 2)
if len(parts) != 2 {
// if it's invalid just skip, as docker does
return "", "", nil
}
user := parts[0]
password := strings.Trim(parts[1], "\x00")
return user, password, nil
}
// convertToHostname converts a registry url which has http|https prepended
// to just an hostname.
// Copied from github.com/docker/docker/registry/auth.go
func convertToHostname(url string) string {
stripped := url
if strings.HasPrefix(url, "http://") {
stripped = strings.TrimPrefix(url, "http://")
} else if strings.HasPrefix(url, "https://") {
stripped = strings.TrimPrefix(url, "https://")
}
nameParts := strings.SplitN(stripped, "/", 2)
return nameParts[0]
}
func normalizeRegistry(registry string) string {
normalized := convertToHostname(registry)
switch normalized {
case "registry-1.docker.io", "docker.io":
return "index.docker.io"
}
return normalized
}


@@ -1,6 +1,7 @@
package docker
import (
"context"
"encoding/json"
"fmt"
"net/http"
@@ -22,7 +23,7 @@ type Image struct {
// a client to the registry hosting the given image.
// The caller must call .Close() on the returned Image.
func newImage(ctx *types.SystemContext, ref dockerReference) (types.Image, error) {
s, err := newImageSource(ctx, ref, nil)
s, err := newImageSource(ctx, ref)
if err != nil {
return nil, err
}
@@ -41,7 +42,8 @@ func (i *Image) SourceRefFullName() string {
// GetRepositoryTags list all tags available in the repository. Note that this has no connection with the tag(s) used for this specific image, if any.
func (i *Image) GetRepositoryTags() ([]string, error) {
path := fmt.Sprintf(tagsPath, reference.Path(i.src.ref.ref))
res, err := i.src.c.makeRequest("GET", path, nil, nil)
// FIXME: Pass the context.Context
res, err := i.src.c.makeRequest(context.TODO(), "GET", path, nil, nil)
if err != nil {
return nil, err
}


@@ -2,6 +2,7 @@ package docker
import (
"bytes"
"context"
"crypto/rand"
"encoding/json"
"fmt"
@@ -12,29 +13,18 @@ import (
"os"
"path/filepath"
"github.com/Sirupsen/logrus"
"github.com/containers/image/docker/reference"
"github.com/containers/image/manifest"
"github.com/containers/image/types"
"github.com/docker/distribution/registry/api/errcode"
"github.com/docker/distribution/registry/api/v2"
"github.com/docker/distribution/registry/client"
"github.com/opencontainers/go-digest"
imgspecv1 "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
)
var manifestMIMETypes = []string{
// TODO(runcom): we'll add OCI as part of another PR here
manifest.DockerV2Schema2MediaType,
manifest.DockerV2Schema1SignedMediaType,
manifest.DockerV2Schema1MediaType,
}
func supportedManifestMIMETypesMap() map[string]bool {
m := make(map[string]bool, len(manifestMIMETypes))
for _, mt := range manifestMIMETypes {
m[mt] = true
}
return m
}
type dockerImageDestination struct {
ref dockerReference
c *dockerClient
@@ -44,7 +34,7 @@ type dockerImageDestination struct {
// newImageDestination creates a new ImageDestination for the specified image reference.
func newImageDestination(ctx *types.SystemContext, ref dockerReference) (types.ImageDestination, error) {
c, err := newDockerClient(ctx, ref, true, "pull,push")
c, err := newDockerClientFromRef(ctx, ref, true, "pull,push")
if err != nil {
return nil, err
}
@@ -66,13 +56,18 @@ func (d *dockerImageDestination) Close() error {
}
func (d *dockerImageDestination) SupportedManifestMIMETypes() []string {
return manifestMIMETypes
return []string{
imgspecv1.MediaTypeImageManifest,
manifest.DockerV2Schema2MediaType,
manifest.DockerV2Schema1SignedMediaType,
manifest.DockerV2Schema1MediaType,
}
}
// SupportsSignatures returns an error (to be displayed to the user) if the destination certainly can't store signatures.
// Note: It is still possible for PutSignatures to fail if SupportsSignatures returns nil.
func (d *dockerImageDestination) SupportsSignatures() error {
if err := d.c.detectProperties(); err != nil {
if err := d.c.detectProperties(context.TODO()); err != nil {
return err
}
switch {
@@ -96,6 +91,11 @@ func (d *dockerImageDestination) AcceptsForeignLayerURLs() bool {
return true
}
// MustMatchRuntimeOS returns true iff the destination can store only images targeted for the current runtime OS. False otherwise.
func (d *dockerImageDestination) MustMatchRuntimeOS() bool {
return false
}
// sizeCounter is an io.Writer which only counts the total size of its input.
type sizeCounter struct{ size int64 }
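The hunk does not show sizeCounter's Write method; a plausible sketch consistent with its use via io.MultiWriter below (an assumption, not part of the diff):
func (c *sizeCounter) Write(p []byte) (n int, err error) {
	c.size += int64(len(p)) // accumulate the total number of bytes seen
	return len(p), nil      // never rejects input
}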
@@ -124,7 +124,7 @@ func (d *dockerImageDestination) PutBlob(stream io.Reader, inputInfo types.BlobI
// FIXME? Chunked upload, progress reporting, etc.
uploadPath := fmt.Sprintf(blobUploadPath, reference.Path(d.ref.ref))
logrus.Debugf("Uploading %s", uploadPath)
res, err := d.c.makeRequest("POST", uploadPath, nil, nil)
res, err := d.c.makeRequest(context.TODO(), "POST", uploadPath, nil, nil)
if err != nil {
return types.BlobInfo{}, err
}
@@ -141,7 +141,7 @@ func (d *dockerImageDestination) PutBlob(stream io.Reader, inputInfo types.BlobI
digester := digest.Canonical.Digester()
sizeCounter := &sizeCounter{}
tee := io.TeeReader(stream, io.MultiWriter(digester.Hash(), sizeCounter))
res, err = d.c.makeRequestToResolvedURL("PATCH", uploadLocation.String(), map[string][]string{"Content-Type": {"application/octet-stream"}}, tee, inputInfo.Size, true)
res, err = d.c.makeRequestToResolvedURL(context.TODO(), "PATCH", uploadLocation.String(), map[string][]string{"Content-Type": {"application/octet-stream"}}, tee, inputInfo.Size, true)
if err != nil {
logrus.Debugf("Error uploading layer chunked, response %#v", res)
return types.BlobInfo{}, err
@@ -160,7 +160,7 @@ func (d *dockerImageDestination) PutBlob(stream io.Reader, inputInfo types.BlobI
// TODO: check inputInfo.Digest == computedDigest https://github.com/containers/image/pull/70#discussion_r77646717
locationQuery.Set("digest", computedDigest.String())
uploadLocation.RawQuery = locationQuery.Encode()
res, err = d.c.makeRequestToResolvedURL("PUT", uploadLocation.String(), map[string][]string{"Content-Type": {"application/octet-stream"}}, nil, -1, true)
res, err = d.c.makeRequestToResolvedURL(context.TODO(), "PUT", uploadLocation.String(), map[string][]string{"Content-Type": {"application/octet-stream"}}, nil, -1, true)
if err != nil {
return types.BlobInfo{}, err
}
@@ -185,7 +185,7 @@ func (d *dockerImageDestination) HasBlob(info types.BlobInfo) (bool, int64, erro
checkPath := fmt.Sprintf(blobsPath, reference.Path(d.ref.ref), info.Digest.String())
logrus.Debugf("Checking %s", checkPath)
res, err := d.c.makeRequest("HEAD", checkPath, nil, nil)
res, err := d.c.makeRequest(context.TODO(), "HEAD", checkPath, nil, nil)
if err != nil {
return false, -1, err
}
@@ -209,6 +209,10 @@ func (d *dockerImageDestination) ReapplyBlob(info types.BlobInfo) (types.BlobInf
return info, nil
}
// PutManifest writes manifest to the destination.
// FIXME? This should also receive a MIME type if known, to differentiate between schema versions.
// If the destination is in principle available, but refuses this manifest type (e.g. it does not recognize the schema)
// and may accept a different manifest type, the returned error must be a ManifestTypeRejectedError.
func (d *dockerImageDestination) PutManifest(m []byte) error {
digest, err := manifest.Digest(m)
if err != nil {
@@ -227,28 +231,49 @@ func (d *dockerImageDestination) PutManifest(m []byte) error {
if mimeType != "" {
headers["Content-Type"] = []string{mimeType}
}
res, err := d.c.makeRequest("PUT", path, headers, bytes.NewReader(m))
res, err := d.c.makeRequest(context.TODO(), "PUT", path, headers, bytes.NewReader(m))
if err != nil {
return err
}
defer res.Body.Close()
if res.StatusCode != http.StatusCreated {
body, err := ioutil.ReadAll(res.Body)
if err == nil {
logrus.Debugf("Error body %s", string(body))
if !successStatus(res.StatusCode) {
err = errors.Wrapf(client.HandleErrorResponse(res), "Error uploading manifest to %s", path)
if isManifestInvalidError(errors.Cause(err)) {
err = types.ManifestTypeRejectedError{Err: err}
}
logrus.Debugf("Error uploading manifest, status %d, %#v", res.StatusCode, res)
return errors.Errorf("Error uploading manifest to %s, status %d", path, res.StatusCode)
return err
}
return nil
}
// successStatus returns true if the argument is a successful HTTP response
// code (in the range 200 - 399 inclusive).
func successStatus(status int) bool {
return status >= 200 && status <= 399
}
// isManifestInvalidError returns true iff err from client.HandleErrorResponse is a “manifest invalid” error.
func isManifestInvalidError(err error) bool {
errors, ok := err.(errcode.Errors)
if !ok || len(errors) == 0 {
return false
}
ec, ok := errors[0].(errcode.ErrorCoder)
if !ok {
return false
}
// ErrorCodeManifestInvalid is returned by OpenShift with acceptschema2=false.
// ErrorCodeTagInvalid is returned by docker/distribution (at least as of commit ec87e9b6971d831f0eff752ddb54fb64693e51cd)
// when uploading to a tag (because it can't find a matching tag inside the manifest)
return ec.ErrorCode() == v2.ErrorCodeManifestInvalid || ec.ErrorCode() == v2.ErrorCodeTagInvalid
}
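Taken together with PutManifest above, a caller can detect the typed rejection and fall back to another manifest format. A hedged sketch (retryWithConvertedManifest is an invented helper):
if err := dest.PutManifest(m); err != nil {
	if _, ok := err.(types.ManifestTypeRejectedError); ok {
		// The registry is reachable but refused this schema; retry with a
		// manifest converted to another type from SupportedManifestMIMETypes().
		return retryWithConvertedManifest(dest, m) // invented helper
	}
	return err
}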
func (d *dockerImageDestination) PutSignatures(signatures [][]byte) error {
// Do not fail if we don't really need to support signatures.
if len(signatures) == 0 {
return nil
}
if err := d.c.detectProperties(); err != nil {
if err := d.c.detectProperties(context.TODO()); err != nil {
return err
}
switch {
@@ -369,7 +394,7 @@ func (d *dockerImageDestination) putSignaturesToAPIExtension(signatures [][]byte
// always adds signatures. Eventually we should also allow removing signatures,
// but the X-Registry-Supports-Signatures API extension does not support that yet.
existingSignatures, err := d.c.getExtensionsSignatures(d.ref, d.manifestDigest)
existingSignatures, err := d.c.getExtensionsSignatures(context.TODO(), d.ref, d.manifestDigest)
if err != nil {
return err
}
@@ -411,7 +436,7 @@ sigExists:
}
path := fmt.Sprintf(extensionsSignaturePath, reference.Path(d.ref.ref), d.manifestDigest.String())
res, err := d.c.makeRequest("PUT", path, nil, bytes.NewReader(body))
res, err := d.c.makeRequest(context.TODO(), "PUT", path, nil, bytes.NewReader(body))
if err != nil {
return err
}

View File

@@ -1,6 +1,7 @@
package docker
import (
"context"
"fmt"
"io"
"io/ioutil"
@@ -10,51 +11,33 @@ import (
"os"
"strconv"
"github.com/Sirupsen/logrus"
"github.com/containers/image/docker/reference"
"github.com/containers/image/manifest"
"github.com/containers/image/types"
"github.com/docker/distribution/registry/client"
"github.com/opencontainers/go-digest"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
)
type dockerImageSource struct {
ref dockerReference
requestedManifestMIMETypes []string
c *dockerClient
ref dockerReference
c *dockerClient
// State
cachedManifest []byte // nil if not loaded yet
cachedManifestMIMEType string // Only valid if cachedManifest != nil
}
// newImageSource creates a new ImageSource for the specified image reference,
// asking the backend to use a manifest from requestedManifestMIMETypes if possible.
// nil requestedManifestMIMETypes means manifest.DefaultRequestedManifestMIMETypes.
// newImageSource creates a new ImageSource for the specified image reference.
// The caller must call .Close() on the returned ImageSource.
func newImageSource(ctx *types.SystemContext, ref dockerReference, requestedManifestMIMETypes []string) (*dockerImageSource, error) {
c, err := newDockerClient(ctx, ref, false, "pull")
func newImageSource(ctx *types.SystemContext, ref dockerReference) (*dockerImageSource, error) {
c, err := newDockerClientFromRef(ctx, ref, false, "pull")
if err != nil {
return nil, err
}
if requestedManifestMIMETypes == nil {
requestedManifestMIMETypes = manifest.DefaultRequestedManifestMIMETypes
}
supportedMIMEs := supportedManifestMIMETypesMap()
acceptableRequestedMIMEs := false
for _, mtrequested := range requestedManifestMIMETypes {
if supportedMIMEs[mtrequested] {
acceptableRequestedMIMEs = true
break
}
}
if !acceptableRequestedMIMEs {
requestedManifestMIMETypes = manifest.DefaultRequestedManifestMIMETypes
}
return &dockerImageSource{
ref: ref,
requestedManifestMIMETypes: requestedManifestMIMETypes,
c: c,
c: c,
}, nil
}
@@ -85,18 +68,18 @@ func simplifyContentType(contentType string) string {
// GetManifest returns the image's manifest along with its MIME type (which may be empty when it can't be determined but the manifest is available).
// It may use a remote (= slow) service.
func (s *dockerImageSource) GetManifest() ([]byte, string, error) {
err := s.ensureManifestIsLoaded()
err := s.ensureManifestIsLoaded(context.TODO())
if err != nil {
return nil, "", err
}
return s.cachedManifest, s.cachedManifestMIMEType, nil
}
func (s *dockerImageSource) fetchManifest(tagOrDigest string) ([]byte, string, error) {
func (s *dockerImageSource) fetchManifest(ctx context.Context, tagOrDigest string) ([]byte, string, error) {
path := fmt.Sprintf(manifestPath, reference.Path(s.ref.ref), tagOrDigest)
headers := make(map[string][]string)
headers["Accept"] = s.requestedManifestMIMETypes
res, err := s.c.makeRequest("GET", path, headers, nil)
headers["Accept"] = manifest.DefaultRequestedManifestMIMETypes
res, err := s.c.makeRequest(ctx, "GET", path, headers, nil)
if err != nil {
return nil, "", err
}
@@ -114,7 +97,7 @@ func (s *dockerImageSource) fetchManifest(tagOrDigest string) ([]byte, string, e
// GetTargetManifest returns an image's manifest given a digest.
// This is mainly used to retrieve a single image's manifest out of a manifest list.
func (s *dockerImageSource) GetTargetManifest(digest digest.Digest) ([]byte, string, error) {
return s.fetchManifest(digest.String())
return s.fetchManifest(context.TODO(), digest.String())
}
// ensureManifestIsLoaded sets s.cachedManifest and s.cachedManifestMIMEType
@@ -124,7 +107,7 @@ func (s *dockerImageSource) GetTargetManifest(digest digest.Digest) ([]byte, str
// we need to ensure that the digest of the manifest returned by GetManifest
// and used by GetSignatures are consistent, otherwise we would get spurious
// signature verification failures when pulling while a tag is being updated.
func (s *dockerImageSource) ensureManifestIsLoaded() error {
func (s *dockerImageSource) ensureManifestIsLoaded(ctx context.Context) error {
if s.cachedManifest != nil {
return nil
}
@@ -134,7 +117,7 @@ func (s *dockerImageSource) ensureManifestIsLoaded() error {
return err
}
manblob, mt, err := s.fetchManifest(reference)
manblob, mt, err := s.fetchManifest(ctx, reference)
if err != nil {
return err
}
@@ -150,13 +133,14 @@ func (s *dockerImageSource) getExternalBlob(urls []string) (io.ReadCloser, int64
err error
)
for _, url := range urls {
resp, err = s.c.makeRequestToResolvedURL("GET", url, nil, nil, -1, false)
resp, err = s.c.makeRequestToResolvedURL(context.TODO(), "GET", url, nil, nil, -1, false)
if err == nil {
if resp.StatusCode != http.StatusOK {
err = errors.Errorf("error fetching external blob from %q: %d", url, resp.StatusCode)
logrus.Debug(err)
continue
}
break
}
}
if resp.Body != nil && err == nil {
@@ -181,7 +165,7 @@ func (s *dockerImageSource) GetBlob(info types.BlobInfo) (io.ReadCloser, int64,
path := fmt.Sprintf(blobsPath, reference.Path(s.ref.ref), info.Digest.String())
logrus.Debugf("Downloading %s", path)
res, err := s.c.makeRequest("GET", path, nil, nil)
res, err := s.c.makeRequest(context.TODO(), "GET", path, nil, nil)
if err != nil {
return nil, 0, err
}
@@ -192,27 +176,38 @@ func (s *dockerImageSource) GetBlob(info types.BlobInfo) (io.ReadCloser, int64,
return res.Body, getBlobSize(res), nil
}
func (s *dockerImageSource) GetSignatures() ([][]byte, error) {
if err := s.c.detectProperties(); err != nil {
func (s *dockerImageSource) GetSignatures(ctx context.Context) ([][]byte, error) {
if err := s.c.detectProperties(ctx); err != nil {
return nil, err
}
switch {
case s.c.signatureBase != nil:
return s.getSignaturesFromLookaside()
return s.getSignaturesFromLookaside(ctx)
case s.c.supportsSignatures:
return s.getSignaturesFromAPIExtension()
return s.getSignaturesFromAPIExtension(ctx)
default:
return [][]byte{}, nil
}
}
// manifestDigest returns a digest of the manifest, either from the supplied reference or from a fetched manifest.
func (s *dockerImageSource) manifestDigest(ctx context.Context) (digest.Digest, error) {
if digested, ok := s.ref.ref.(reference.Digested); ok {
d := digested.Digest()
if d.Algorithm() == digest.Canonical {
return d, nil
}
}
if err := s.ensureManifestIsLoaded(ctx); err != nil {
return "", err
}
return manifest.Digest(s.cachedManifest)
}
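The practical effect, sketched with invented references: for docker://busybox@sha256:… (a canonical-algorithm digest), s.ref.ref implements reference.Digested, so
d, err := s.manifestDigest(ctx) // returns the digest with no network round-trip
while for docker://busybox:latest the manifest is fetched, cached, and hashed instead.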
// getSignaturesFromLookaside implements GetSignatures() from the lookaside location configured in s.c.signatureBase,
// which is not nil.
func (s *dockerImageSource) getSignaturesFromLookaside() ([][]byte, error) {
if err := s.ensureManifestIsLoaded(); err != nil {
return nil, err
}
manifestDigest, err := manifest.Digest(s.cachedManifest)
func (s *dockerImageSource) getSignaturesFromLookaside(ctx context.Context) ([][]byte, error) {
manifestDigest, err := s.manifestDigest(ctx)
if err != nil {
return nil, err
}
@@ -224,7 +219,7 @@ func (s *dockerImageSource) getSignaturesFromLookaside() ([][]byte, error) {
if url == nil {
return nil, errors.Errorf("Internal error: signatureStorageURL with non-nil base returned nil")
}
signature, missing, err := s.getOneSignature(url)
signature, missing, err := s.getOneSignature(ctx, url)
if err != nil {
return nil, err
}
@@ -239,7 +234,7 @@ func (s *dockerImageSource) getSignaturesFromLookaside() ([][]byte, error) {
// getOneSignature downloads one signature from url.
// If it successfully determines that the signature does not exist, returns with missing set to true and error set to nil.
// NOTE: Keep this in sync with docs/signature-protocols.md!
func (s *dockerImageSource) getOneSignature(url *url.URL) (signature []byte, missing bool, err error) {
func (s *dockerImageSource) getOneSignature(ctx context.Context, url *url.URL) (signature []byte, missing bool, err error) {
switch url.Scheme {
case "file":
logrus.Debugf("Reading %s", url.Path)
@@ -254,7 +249,12 @@ func (s *dockerImageSource) getOneSignature(url *url.URL) (signature []byte, mis
case "http", "https":
logrus.Debugf("GET %s", url)
res, err := s.c.client.Get(url.String())
req, err := http.NewRequest("GET", url.String(), nil)
if err != nil {
return nil, false, err
}
req = req.WithContext(ctx)
res, err := s.c.client.Do(req)
if err != nil {
return nil, false, err
}
@@ -276,16 +276,13 @@ func (s *dockerImageSource) getOneSignature(url *url.URL) (signature []byte, mis
}
// getSignaturesFromAPIExtension implements GetSignatures() using the X-Registry-Supports-Signatures API extension.
func (s *dockerImageSource) getSignaturesFromAPIExtension() ([][]byte, error) {
if err := s.ensureManifestIsLoaded(); err != nil {
return nil, err
}
manifestDigest, err := manifest.Digest(s.cachedManifest)
func (s *dockerImageSource) getSignaturesFromAPIExtension(ctx context.Context) ([][]byte, error) {
manifestDigest, err := s.manifestDigest(ctx)
if err != nil {
return nil, err
}
parsedBody, err := s.c.getExtensionsSignatures(s.ref, manifestDigest)
parsedBody, err := s.c.getExtensionsSignatures(ctx, s.ref, manifestDigest)
if err != nil {
return nil, err
}
@@ -301,7 +298,7 @@ func (s *dockerImageSource) getSignaturesFromAPIExtension() ([][]byte, error) {
// deleteImage deletes the named image from the registry, if supported.
func deleteImage(ctx *types.SystemContext, ref dockerReference) error {
c, err := newDockerClient(ctx, ref, true, "push")
c, err := newDockerClientFromRef(ctx, ref, true, "push")
if err != nil {
return err
}
@@ -316,7 +313,7 @@ func deleteImage(ctx *types.SystemContext, ref dockerReference) error {
return err
}
getPath := fmt.Sprintf(manifestPath, reference.Path(ref.ref), refTail)
get, err := c.makeRequest("GET", getPath, headers, nil)
get, err := c.makeRequest(context.TODO(), "GET", getPath, headers, nil)
if err != nil {
return err
}
@@ -338,7 +335,7 @@ func deleteImage(ctx *types.SystemContext, ref dockerReference) error {
// When retrieving the digest from a registry >= 2.3 use the following header:
// "Accept": "application/vnd.docker.distribution.manifest.v2+json"
delete, err := c.makeRequest("DELETE", deletePath, headers, nil)
delete, err := c.makeRequest(context.TODO(), "DELETE", deletePath, headers, nil)
if err != nil {
return err
}

View File

@@ -130,12 +130,10 @@ func (ref dockerReference) NewImage(ctx *types.SystemContext) (types.Image, erro
return newImage(ctx, ref)
}
// NewImageSource returns a types.ImageSource for this reference,
// asking the backend to use a manifest from requestedManifestMIMETypes if possible.
// nil requestedManifestMIMETypes means manifest.DefaultRequestedManifestMIMETypes.
// NewImageSource returns a types.ImageSource for this reference.
// The caller must call .Close() on the returned ImageSource.
func (ref dockerReference) NewImageSource(ctx *types.SystemContext, requestedManifestMIMETypes []string) (types.ImageSource, error) {
return newImageSource(ctx, ref, requestedManifestMIMETypes)
func (ref dockerReference) NewImageSource(ctx *types.SystemContext) (types.ImageSource, error) {
return newImageSource(ctx, ref)
}
// NewImageDestination returns a types.ImageDestination for this reference.

View File

@@ -9,12 +9,12 @@ import (
"path/filepath"
"strings"
"github.com/Sirupsen/logrus"
"github.com/containers/image/docker/reference"
"github.com/containers/image/types"
"github.com/ghodss/yaml"
"github.com/opencontainers/go-digest"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
)
// systemRegistriesDirPath is the path to registries.d, used for locating lookaside Docker signature storage.

View File

@@ -10,12 +10,12 @@ import (
"os"
"time"
"github.com/Sirupsen/logrus"
"github.com/containers/image/docker/reference"
"github.com/containers/image/manifest"
"github.com/containers/image/types"
"github.com/opencontainers/go-digest"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
)
const temporaryDirectoryForBigFiles = "/var/tmp" // Do not use the system default of os.TempDir(), usually /tmp, because with systemd it could be a tmpfs.
@@ -81,6 +81,11 @@ func (d *Destination) AcceptsForeignLayerURLs() bool {
return false
}
// MustMatchRuntimeOS returns true iff the destination can store only images targeted for the current runtime OS. False otherwise.
func (d *Destination) MustMatchRuntimeOS() bool {
return false
}
// PutBlob writes contents of stream and returns data representing the result (with all data filled in).
// inputInfo.Digest can be optionally provided if known; it is not mandatory for the implementation to verify it.
// inputInfo.Size is the expected length of stream, if known.
@@ -156,10 +161,13 @@ func (d *Destination) ReapplyBlob(info types.BlobInfo) (types.BlobInfo, error) {
return info, nil
}
// PutManifest sends the given manifest blob to the destination.
// FIXME? This should also receive a MIME type if known, to differentiate
// between schema versions.
// PutManifest writes manifest to the destination.
// FIXME? This should also receive a MIME type if known, to differentiate between schema versions.
// If the destination is in principle available, but refuses this manifest type (e.g. it does not recognize the schema)
// and may accept a different manifest type, the returned error must be a ManifestTypeRejectedError.
func (d *Destination) PutManifest(m []byte) error {
// We do not bother with types.ManifestTypeRejectedError; our .SupportedManifestMIMETypes() above already provides only one alternative,
// so it would be pointless for the caller to try a different manifest kind.
var man schema2Manifest
if err := json.Unmarshal(m, &man); err != nil {
return errors.Wrap(err, "Error parsing manifest")
@@ -173,7 +181,7 @@ func (d *Destination) PutManifest(m []byte) error {
layerPaths = append(layerPaths, l.Digest.String())
}
items := []manifestItem{{
items := []ManifestItem{{
Config: man.Config.Digest.String(),
RepoTags: []string{d.repoTag},
Layers: layerPaths,

View File

@@ -3,6 +3,7 @@ package tarfile
import (
"archive/tar"
"bytes"
"context"
"encoding/json"
"io"
"io/ioutil"
@@ -20,7 +21,7 @@ import (
type Source struct {
tarPath string
// The following data is only available after ensureCachedDataIsPresent() succeeds
tarManifest *manifestItem // nil if not available yet.
tarManifest *ManifestItem // nil if not available yet.
configBytes []byte
configDigest digest.Digest
orderedDiffIDList []diffID
@@ -145,23 +146,28 @@ func (s *Source) ensureCachedDataIsPresent() error {
return err
}
// Check to make sure length is 1
if len(tarManifest) != 1 {
return errors.Errorf("Unexpected tar manifest.json: expected 1 item, got %d", len(tarManifest))
}
// Read and parse config.
configBytes, err := s.readTarComponent(tarManifest.Config)
configBytes, err := s.readTarComponent(tarManifest[0].Config)
if err != nil {
return err
}
var parsedConfig image // Most fields omitted, we only care about layer DiffIDs.
if err := json.Unmarshal(configBytes, &parsedConfig); err != nil {
return errors.Wrapf(err, "Error decoding tar config %s", tarManifest.Config)
return errors.Wrapf(err, "Error decoding tar config %s", tarManifest[0].Config)
}
knownLayers, err := s.prepareLayerData(tarManifest, &parsedConfig)
knownLayers, err := s.prepareLayerData(&tarManifest[0], &parsedConfig)
if err != nil {
return err
}
// Success; commit.
s.tarManifest = tarManifest
s.tarManifest = &tarManifest[0]
s.configBytes = configBytes
s.configDigest = digest.FromBytes(configBytes)
s.orderedDiffIDList = parsedConfig.RootFS.DiffIDs
@@ -170,23 +176,25 @@ func (s *Source) ensureCachedDataIsPresent() error {
}
// loadTarManifest loads and decodes the manifest.json.
func (s *Source) loadTarManifest() (*manifestItem, error) {
func (s *Source) loadTarManifest() ([]ManifestItem, error) {
// FIXME? Do we need to deal with the legacy format?
bytes, err := s.readTarComponent(manifestFileName)
if err != nil {
return nil, err
}
var items []manifestItem
var items []ManifestItem
if err := json.Unmarshal(bytes, &items); err != nil {
return nil, errors.Wrap(err, "Error decoding tar manifest.json")
}
if len(items) != 1 {
return nil, errors.Errorf("Unexpected tar manifest.json: expected 1 item, got %d", len(items))
}
return &items[0], nil
return items, nil
}
func (s *Source) prepareLayerData(tarManifest *manifestItem, parsedConfig *image) (map[diffID]*layerInfo, error) {
// LoadTarManifest loads and decodes the manifest.json
func (s *Source) LoadTarManifest() ([]ManifestItem, error) {
return s.loadTarManifest()
}
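A hedged sketch of consuming the newly exported accessor (variable names invented; src is a *tarfile.Source):
items, err := src.LoadTarManifest()
if err == nil && len(items) == 1 {
	fmt.Println(items[0].RepoTags) // e.g. ["busybox:latest"]
}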
func (s *Source) prepareLayerData(tarManifest *ManifestItem, parsedConfig *image) (map[diffID]*layerInfo, error) {
// Collect layer data available in manifest and config.
if len(tarManifest.Layers) != len(parsedConfig.RootFS.DiffIDs) {
return nil, errors.Errorf("Inconsistent layer count: %d in manifest, %d in config", len(tarManifest.Layers), len(parsedConfig.RootFS.DiffIDs))
@@ -347,6 +355,6 @@ func (s *Source) GetBlob(info types.BlobInfo) (io.ReadCloser, int64, error) {
}
// GetSignatures returns the image's signatures. It may use a remote (= slow) service.
func (s *Source) GetSignatures() ([][]byte, error) {
func (s *Source) GetSignatures(ctx context.Context) ([][]byte, error) {
return [][]byte{}, nil
}

View File

@@ -13,7 +13,8 @@ const (
// legacyRepositoriesFileName = "repositories"
)
type manifestItem struct {
// ManifestItem is an element of the array stored in the top-level manifest.json file.
type ManifestItem struct {
Config string
RepoTags []string
Layers []string

View File

@@ -16,7 +16,7 @@ type platformSpec struct {
OSVersion string `json:"os.version,omitempty"`
OSFeatures []string `json:"os.features,omitempty"`
Variant string `json:"variant,omitempty"`
Features []string `json:"features,omitempty"`
Features []string `json:"features,omitempty"` // removed in OCI
}
// A manifestDescriptor references a platform-specific manifest.

View File

@@ -135,19 +135,43 @@ func (m *manifestSchema1) LayerInfos() []types.BlobInfo {
return layers
}
// EmbeddedDockerReferenceConflicts returns whether a Docker reference embedded in the manifest, if any, conflicts with destination ref.
// It returns false if the manifest does not embed a Docker reference.
// (This embedding unfortunately happens for Docker schema1, please do not add support for this in any new formats.)
func (m *manifestSchema1) EmbeddedDockerReferenceConflicts(ref reference.Named) bool {
// This is a bit convoluted: We can't just have a "get embedded docker reference" method
// and have the “does it conflict” logic in the generic copy code, because the manifest does not actually
// embed a full docker/distribution reference, but only the repo name and tag (without the host name).
// So we would have to provide a “return repo without host name, and tag” getter for the generic code,
// which would be very awkward. Instead, we do the matching here in schema1-specific code, and all the
// generic copy code needs to know about is reference.Named and that a manifest may need updating
// for some destinations.
name := reference.Path(ref)
var tag string
if tagged, isTagged := ref.(reference.NamedTagged); isTagged {
tag = tagged.Tag()
} else {
tag = ""
}
return m.Name != name || m.Tag != tag
}
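Concretely (an invented example): for a schema1 manifest with Name "library/busybox" and Tag "latest",
// dest docker.io/library/busybox:latest -> no conflict (path and tag match)
// dest docker.io/library/busybox:v1     -> conflict (tag differs)
// dest quay.io/library/busybox:latest   -> no conflict (the host is ignored;
//                                          only reference.Path and the tag are compared)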
func (m *manifestSchema1) imageInspectInfo() (*types.ImageInspectInfo, error) {
v1 := &v1Image{}
if err := json.Unmarshal([]byte(m.History[0].V1Compatibility), v1); err != nil {
return nil, err
}
return &types.ImageInspectInfo{
i := &types.ImageInspectInfo{
Tag: m.Tag,
DockerVersion: v1.DockerVersion,
Created: v1.Created,
Labels: v1.Config.Labels,
Architecture: v1.Architecture,
Os: v1.OS,
}, nil
}
if v1.Config != nil {
i.Labels = v1.Config.Labels
}
return i, nil
}
// UpdatedImageNeedsLayerDiffIDs returns true iff UpdatedImage(options) needs InformationOnly.LayerDiffIDs.
@@ -173,6 +197,14 @@ func (m *manifestSchema1) UpdatedImage(options types.ManifestUpdateOptions) (typ
copy.FSLayers[(len(options.LayerInfos)-1)-i].BlobSum = info.Digest
}
}
if options.EmbeddedDockerReference != nil {
copy.Name = reference.Path(options.EmbeddedDockerReference)
if tagged, isTagged := options.EmbeddedDockerReference.(reference.NamedTagged); isTagged {
copy.Tag = tagged.Tag()
} else {
copy.Tag = ""
}
}
switch options.ManifestMIMEType {
case "": // No conversion, OK

View File

@@ -8,12 +8,13 @@ import (
"io/ioutil"
"strings"
"github.com/Sirupsen/logrus"
"github.com/containers/image/docker/reference"
"github.com/containers/image/manifest"
"github.com/containers/image/types"
"github.com/opencontainers/go-digest"
imgspecv1 "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
)
// gzippedEmptyLayer is a gzip-compressed version of an empty tar file (1024 NULL bytes)
@@ -140,6 +141,13 @@ func (m *manifestSchema2) LayerInfos() []types.BlobInfo {
return blobs
}
// EmbeddedDockerReferenceConflicts returns whether a Docker reference embedded in the manifest, if any, conflicts with destination ref.
// It returns false if the manifest does not embed a Docker reference.
// (This embedding unfortunately happens for Docker schema1, please do not add support for this in any new formats.)
func (m *manifestSchema2) EmbeddedDockerReferenceConflicts(ref reference.Named) bool {
return false
}
func (m *manifestSchema2) imageInspectInfo() (*types.ImageInspectInfo, error) {
config, err := m.ConfigBlob()
if err != nil {
@@ -149,13 +157,16 @@ func (m *manifestSchema2) imageInspectInfo() (*types.ImageInspectInfo, error) {
if err := json.Unmarshal(config, v1); err != nil {
return nil, err
}
return &types.ImageInspectInfo{
i := &types.ImageInspectInfo{
DockerVersion: v1.DockerVersion,
Created: v1.Created,
Labels: v1.Config.Labels,
Architecture: v1.Architecture,
Os: v1.OS,
}, nil
}
if v1.Config != nil {
i.Labels = v1.Config.Labels
}
return i, nil
}
// UpdatedImageNeedsLayerDiffIDs returns true iff UpdatedImage(options) needs InformationOnly.LayerDiffIDs.
@@ -175,11 +186,13 @@ func (m *manifestSchema2) UpdatedImage(options types.ManifestUpdateOptions) (typ
}
copy.LayersDescriptors = make([]descriptor, len(options.LayerInfos))
for i, info := range options.LayerInfos {
copy.LayersDescriptors[i].MediaType = m.LayersDescriptors[i].MediaType
copy.LayersDescriptors[i].Digest = info.Digest
copy.LayersDescriptors[i].Size = info.Size
copy.LayersDescriptors[i].URLs = info.URLs
}
}
// Ignore options.EmbeddedDockerReference: it may be set when converting from schema1 to schema2, but we really don't care.
switch options.ManifestMIMEType {
case "": // No conversion, OK
@@ -204,15 +217,17 @@ func (m *manifestSchema2) convertToManifestOCI1() (types.Image, error) {
return nil, err
}
config := descriptor{
MediaType: imgspecv1.MediaTypeImageConfig,
Size: int64(len(configOCIBytes)),
Digest: digest.FromBytes(configOCIBytes),
config := descriptorOCI1{
descriptor: descriptor{
MediaType: imgspecv1.MediaTypeImageConfig,
Size: int64(len(configOCIBytes)),
Digest: digest.FromBytes(configOCIBytes),
},
}
layers := make([]descriptor, len(m.LayersDescriptors))
layers := make([]descriptorOCI1, len(m.LayersDescriptors))
for idx := range layers {
layers[idx] = m.LayersDescriptors[idx]
layers[idx] = descriptorOCI1{descriptor: m.LayersDescriptors[idx]}
if m.LayersDescriptors[idx].MediaType == manifest.DockerV2Schema2ForeignLayerMediaType {
layers[idx].MediaType = imgspecv1.MediaTypeImageLayerNonDistributable
} else {

View File

@@ -3,6 +3,7 @@ package image
import (
"time"
"github.com/containers/image/docker/reference"
"github.com/containers/image/manifest"
"github.com/containers/image/pkg/strslice"
"github.com/containers/image/types"
@@ -72,6 +73,10 @@ type genericManifest interface {
// The Digest field is guaranteed to be provided; Size may be -1.
// WARNING: The list may contain duplicates, and they are semantically relevant.
LayerInfos() []types.BlobInfo
// EmbeddedDockerReferenceConflicts returns whether a Docker reference embedded in the manifest, if any, conflicts with destination ref.
// It returns false if the manifest does not embed a Docker reference.
// (This embedding unfortunately happens for Docker schema1, please do not add support for this in any new formats.)
EmbeddedDockerReferenceConflicts(ref reference.Named) bool
imageInspectInfo() (*types.ImageInspectInfo, error) // To be called by inspectManifest
// UpdatedImageNeedsLayerDiffIDs returns true iff UpdatedImage(options) needs InformationOnly.LayerDiffIDs.
// This is a horribly specific interface, but computing InformationOnly.LayerDiffIDs can be very expensive to compute

View File

@@ -1,6 +1,8 @@
package image
import (
"context"
"github.com/pkg/errors"
"github.com/containers/image/types"
@@ -54,7 +56,7 @@ func (i *memoryImage) Manifest() ([]byte, string, error) {
}
// Signatures is like ImageSource.GetSignatures, but the result is cached; it is OK to call this however often you need.
func (i *memoryImage) Signatures() ([][]byte, error) {
func (i *memoryImage) Signatures(ctx context.Context) ([][]byte, error) {
// Modifying an image invalidates signatures; a caller asking the updated image for signatures
// is probably confused.
return nil, errors.New("Internal error: Image.Signatures() is not supported for images modified in memory")

View File

@@ -4,6 +4,7 @@ import (
"encoding/json"
"io/ioutil"
"github.com/containers/image/docker/reference"
"github.com/containers/image/manifest"
"github.com/containers/image/types"
"github.com/opencontainers/go-digest"
@@ -11,12 +12,18 @@ import (
"github.com/pkg/errors"
)
type descriptorOCI1 struct {
descriptor
Annotations map[string]string `json:"annotations,omitempty"`
}
type manifestOCI1 struct {
src types.ImageSource // May be nil if configBlob is not nil
configBlob []byte // If set, corresponds to contents of ConfigDescriptor.
SchemaVersion int `json:"schemaVersion"`
ConfigDescriptor descriptor `json:"config"`
LayersDescriptors []descriptor `json:"layers"`
ConfigDescriptor descriptorOCI1 `json:"config"`
LayersDescriptors []descriptorOCI1 `json:"layers"`
Annotations map[string]string `json:"annotations,omitempty"`
}
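For orientation, a hedged literal showing the shape descriptorOCI1 adds (field values invented, digest elided):
cfg := descriptorOCI1{
	descriptor: descriptor{
		MediaType: imgspecv1.MediaTypeImageConfig,
		Digest:    "sha256:…",
		Size:      1024,
	},
	Annotations: map[string]string{"org.example.annotation": "value"},
}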
func manifestOCI1FromManifest(src types.ImageSource, manifest []byte) (genericManifest, error) {
@@ -28,7 +35,7 @@ func manifestOCI1FromManifest(src types.ImageSource, manifest []byte) (genericMa
}
// manifestOCI1FromComponents builds a new manifestOCI1 from the supplied data:
func manifestOCI1FromComponents(config descriptor, src types.ImageSource, configBlob []byte, layers []descriptor) genericManifest {
func manifestOCI1FromComponents(config descriptorOCI1, src types.ImageSource, configBlob []byte, layers []descriptorOCI1) genericManifest {
return &manifestOCI1{
src: src,
configBlob: configBlob,
@@ -49,7 +56,7 @@ func (m *manifestOCI1) manifestMIMEType() string {
// ConfigInfo returns a complete BlobInfo for the separate config object, or a BlobInfo{Digest:""} if there isn't a separate object.
// Note that the config object may not exist in the underlying storage in the return value of UpdatedImage! Use ConfigBlob() below.
func (m *manifestOCI1) ConfigInfo() types.BlobInfo {
return types.BlobInfo{Digest: m.ConfigDescriptor.Digest, Size: m.ConfigDescriptor.Size}
return types.BlobInfo{Digest: m.ConfigDescriptor.Digest, Size: m.ConfigDescriptor.Size, Annotations: m.ConfigDescriptor.Annotations}
}
// ConfigBlob returns the blob described by ConfigInfo, iff ConfigInfo().Digest != ""; nil otherwise.
@@ -102,11 +109,18 @@ func (m *manifestOCI1) OCIConfig() (*imgspecv1.Image, error) {
func (m *manifestOCI1) LayerInfos() []types.BlobInfo {
blobs := []types.BlobInfo{}
for _, layer := range m.LayersDescriptors {
blobs = append(blobs, types.BlobInfo{Digest: layer.Digest, Size: layer.Size})
blobs = append(blobs, types.BlobInfo{Digest: layer.Digest, Size: layer.Size, Annotations: layer.Annotations, URLs: layer.URLs, MediaType: layer.MediaType})
}
return blobs
}
// EmbeddedDockerReferenceConflicts returns whether a Docker reference embedded in the manifest, if any, conflicts with destination ref.
// It returns false if the manifest does not embed a Docker reference.
// (This embedding unfortunately happens for Docker schema1, please do not add support for this in any new formats.)
func (m *manifestOCI1) EmbeddedDockerReferenceConflicts(ref reference.Named) bool {
return false
}
func (m *manifestOCI1) imageInspectInfo() (*types.ImageInspectInfo, error) {
config, err := m.ConfigBlob()
if err != nil {
@@ -116,13 +130,16 @@ func (m *manifestOCI1) imageInspectInfo() (*types.ImageInspectInfo, error) {
if err := json.Unmarshal(config, v1); err != nil {
return nil, err
}
return &types.ImageInspectInfo{
i := &types.ImageInspectInfo{
DockerVersion: v1.DockerVersion,
Created: v1.Created,
Labels: v1.Config.Labels,
Architecture: v1.Architecture,
Os: v1.OS,
}, nil
}
if v1.Config != nil {
i.Labels = v1.Config.Labels
}
return i, nil
}
// UpdatedImageNeedsLayerDiffIDs returns true iff UpdatedImage(options) needs InformationOnly.LayerDiffIDs.
@@ -140,12 +157,16 @@ func (m *manifestOCI1) UpdatedImage(options types.ManifestUpdateOptions) (types.
if len(copy.LayersDescriptors) != len(options.LayerInfos) {
return nil, errors.Errorf("Error preparing updated manifest: layer count changed from %d to %d", len(copy.LayersDescriptors), len(options.LayerInfos))
}
copy.LayersDescriptors = make([]descriptor, len(options.LayerInfos))
copy.LayersDescriptors = make([]descriptorOCI1, len(options.LayerInfos))
for i, info := range options.LayerInfos {
copy.LayersDescriptors[i].MediaType = m.LayersDescriptors[i].MediaType
copy.LayersDescriptors[i].Digest = info.Digest
copy.LayersDescriptors[i].Size = info.Size
copy.LayersDescriptors[i].Annotations = info.Annotations
copy.LayersDescriptors[i].URLs = info.URLs
}
}
// Ignore options.EmbeddedDockerReference: it may be set when converting from schema1, but we really don't care.
switch options.ManifestMIMEType {
case "": // No conversion, OK
@@ -160,7 +181,7 @@ func (m *manifestOCI1) UpdatedImage(options types.ManifestUpdateOptions) (types.
func (m *manifestOCI1) convertToManifestSchema2() (types.Image, error) {
// Create a copy of the descriptor.
config := m.ConfigDescriptor
config := m.ConfigDescriptor.descriptor
// The only difference between OCI and DockerSchema2 is the mediatypes. The
// media type of the manifest is handled by manifestSchema2FromComponents.
@@ -168,7 +189,7 @@ func (m *manifestOCI1) convertToManifestSchema2() (types.Image, error) {
layers := make([]descriptor, len(m.LayersDescriptors))
for idx := range layers {
layers[idx] = m.LayersDescriptors[idx]
layers[idx] = m.LayersDescriptors[idx].descriptor
layers[idx].MediaType = manifest.DockerV2Schema2LayerMediaType
}

View File

@@ -1,6 +1,8 @@
package image
import (
"context"
"github.com/containers/image/docker/reference"
"github.com/containers/image/manifest"
"github.com/containers/image/types"
@@ -71,9 +73,9 @@ func (i *UnparsedImage) Manifest() ([]byte, string, error) {
}
// Signatures is like ImageSource.GetSignatures, but the result is cached; it is OK to call this however often you need.
func (i *UnparsedImage) Signatures() ([][]byte, error) {
func (i *UnparsedImage) Signatures(ctx context.Context) ([][]byte, error) {
if i.cachedSignatures == nil {
sigs, err := i.src.GetSignatures()
sigs, err := i.src.GetSignatures(ctx)
if err != nil {
return nil, err
}

View File

@@ -35,7 +35,7 @@ var DefaultRequestedManifestMIMETypes = []string{
DockerV2Schema2MediaType,
DockerV2Schema1SignedMediaType,
DockerV2Schema1MediaType,
DockerV2ListMediaType,
// DockerV2ListMediaType, // FIXME: Restore this ASAP
}
// GuessMIMEType guesses MIME type of a manifest and returns it _if it is recognized_, or "" if unknown or unrecognized.
@@ -54,7 +54,7 @@ func GuessMIMEType(manifest []byte) string {
}
switch meta.MediaType {
case DockerV2Schema2MediaType, DockerV2ListMediaType, imgspecv1.MediaTypeImageManifest, imgspecv1.MediaTypeImageManifestList: // A recognized type.
case DockerV2Schema2MediaType, DockerV2ListMediaType: // A recognized type.
return meta.MediaType
}
// this is the only way the function can return DockerV2Schema1MediaType, and recognizing that is essential for stripping the JWS signatures = computing the correct manifest digest.
@@ -64,7 +64,31 @@ func GuessMIMEType(manifest []byte) string {
return DockerV2Schema1SignedMediaType
}
return DockerV2Schema1MediaType
case 2: // Really should not happen, meta.MediaType should have been set. But given the data, this is our best guess.
case 2:
// Best effort to determine whether this is an OCI image, since mediaType
// is no longer present in OCI manifests.
// For Docker v2s2, meta.MediaType should have been set; given the data, this is our best guess.
ociMan := struct {
Config struct {
MediaType string `json:"mediaType"`
} `json:"config"`
Layers []imgspecv1.Descriptor `json:"layers"`
}{}
if err := json.Unmarshal(manifest, &ociMan); err != nil {
return ""
}
if ociMan.Config.MediaType == imgspecv1.MediaTypeImageConfig && len(ociMan.Layers) != 0 {
return imgspecv1.MediaTypeImageManifest
}
ociIndex := struct {
Manifests []imgspecv1.Descriptor `json:"manifests"`
}{}
if err := json.Unmarshal(manifest, &ociIndex); err != nil {
return ""
}
if len(ociIndex.Manifests) != 0 && ociIndex.Manifests[0].MediaType == imgspecv1.MediaTypeImageManifest {
return imgspecv1.MediaTypeImageIndex
}
return DockerV2Schema2MediaType
}
return ""

View File

@@ -0,0 +1,131 @@
package archive
import (
"io"
"os"
"github.com/containers/image/types"
"github.com/containers/storage/pkg/archive"
"github.com/pkg/errors"
)
type ociArchiveImageDestination struct {
ref ociArchiveReference
unpackedDest types.ImageDestination
tempDirRef tempDirOCIRef
}
// newImageDestination returns an ImageDestination for writing to an OCI archive file.
func newImageDestination(ctx *types.SystemContext, ref ociArchiveReference) (types.ImageDestination, error) {
tempDirRef, err := createOCIRef(ref.image)
if err != nil {
return nil, errors.Wrapf(err, "error creating oci reference")
}
unpackedDest, err := tempDirRef.ociRefExtracted.NewImageDestination(ctx)
if err != nil {
if err := tempDirRef.deleteTempDir(); err != nil {
return nil, errors.Wrapf(err, "error deleting temp directory", tempDirRef.tempDirectory)
}
return nil, err
}
return &ociArchiveImageDestination{ref: ref,
unpackedDest: unpackedDest,
tempDirRef: tempDirRef}, nil
}
// Reference returns the reference used to set up this destination.
func (d *ociArchiveImageDestination) Reference() types.ImageReference {
return d.ref
}
// Close removes resources associated with an initialized ImageDestination, if any
// Close deletes the temp directory of the oci-archive image
func (d *ociArchiveImageDestination) Close() error {
defer d.tempDirRef.deleteTempDir()
return d.unpackedDest.Close()
}
func (d *ociArchiveImageDestination) SupportedManifestMIMETypes() []string {
return d.unpackedDest.SupportedManifestMIMETypes()
}
// SupportsSignatures returns an error (to be displayed to the user) if the destination certainly can't store signatures
func (d *ociArchiveImageDestination) SupportsSignatures() error {
return d.unpackedDest.SupportsSignatures()
}
// ShouldCompressLayers returns true iff it is desirable to compress layer blobs written to this destination
func (d *ociArchiveImageDestination) ShouldCompressLayers() bool {
return d.unpackedDest.ShouldCompressLayers()
}
// AcceptsForeignLayerURLs returns false iff foreign layers in manifest should be actually
// uploaded to the image destination, true otherwise.
func (d *ociArchiveImageDestination) AcceptsForeignLayerURLs() bool {
return d.unpackedDest.AcceptsForeignLayerURLs()
}
// MustMatchRuntimeOS returns true iff the destination can store only images targeted for the current runtime OS. False otherwise
func (d *ociArchiveImageDestination) MustMatchRuntimeOS() bool {
return d.unpackedDest.MustMatchRuntimeOS()
}
// PutBlob writes contents of stream and returns data representing the result (with all data filled in).
// inputInfo.Digest can be optionally provided if known; it is not mandatory for the implementation to verify it.
// inputInfo.Size is the expected length of stream, if known.
func (d *ociArchiveImageDestination) PutBlob(stream io.Reader, inputInfo types.BlobInfo) (types.BlobInfo, error) {
return d.unpackedDest.PutBlob(stream, inputInfo)
}
// HasBlob returns true iff the image destination already contains a blob with the matching digest which can be reapplied using ReapplyBlob
func (d *ociArchiveImageDestination) HasBlob(info types.BlobInfo) (bool, int64, error) {
return d.unpackedDest.HasBlob(info)
}
func (d *ociArchiveImageDestination) ReapplyBlob(info types.BlobInfo) (types.BlobInfo, error) {
return d.unpackedDest.ReapplyBlob(info)
}
// PutManifest writes manifest to the destination
func (d *ociArchiveImageDestination) PutManifest(m []byte) error {
return d.unpackedDest.PutManifest(m)
}
func (d *ociArchiveImageDestination) PutSignatures(signatures [][]byte) error {
return d.unpackedDest.PutSignatures(signatures)
}
// Commit marks the process of storing the image as successful and asks for the image to be persisted
// After the directory is populated, it is tarred up into a file; the directory itself is deleted when the destination is closed.
func (d *ociArchiveImageDestination) Commit() error {
if err := d.unpackedDest.Commit(); err != nil {
return errors.Wrapf(err, "error storing image %q", d.ref.image)
}
// path of directory to tar up
src := d.tempDirRef.tempDirectory
// path to save tarred up file
dst := d.ref.resolvedFile
return tarDirectory(src, dst)
}
// tarDirectory tars up the directory at src and saves it to dst
func tarDirectory(src, dst string) error {
// input is a stream of bytes from the archive of the directory at path
input, err := archive.Tar(src, archive.Uncompressed)
if err != nil {
return errors.Wrapf(err, "error retrieving stream of bytes from %q", src)
}
// creates the tar file
outFile, err := os.Create(dst)
if err != nil {
return errors.Wrapf(err, "error creating tar file %q", dst)
}
defer outFile.Close()
// copies the contents of the directory to the tar file
_, err = io.Copy(outFile, input)
return err
}
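For example (paths invented), Commit on an image written under /var/tmp amounts to:
err := tarDirectory("/var/tmp/oci012345678", "/home/user/busybox-oci.tar")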

View File

@@ -0,0 +1,88 @@
package archive
import (
"context"
"io"
ocilayout "github.com/containers/image/oci/layout"
"github.com/containers/image/types"
digest "github.com/opencontainers/go-digest"
imgspecv1 "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/pkg/errors"
)
type ociArchiveImageSource struct {
ref ociArchiveReference
unpackedSrc types.ImageSource
tempDirRef tempDirOCIRef
}
// newImageSource untars the archive into a temporary directory
// and returns an ImageSource for reading from the extracted layout.
func newImageSource(ctx *types.SystemContext, ref ociArchiveReference) (types.ImageSource, error) {
tempDirRef, err := createUntarTempDir(ref)
if err != nil {
return nil, errors.Wrap(err, "error creating temp directory")
}
unpackedSrc, err := tempDirRef.ociRefExtracted.NewImageSource(ctx)
if err != nil {
if err := tempDirRef.deleteTempDir(); err != nil {
return nil, errors.Wrapf(err, "error deleting temp directory", tempDirRef.tempDirectory)
}
return nil, err
}
return &ociArchiveImageSource{ref: ref,
unpackedSrc: unpackedSrc,
tempDirRef: tempDirRef}, nil
}
// LoadManifestDescriptor loads the manifest descriptor of an OCI archive image reference
func LoadManifestDescriptor(imgRef types.ImageReference) (imgspecv1.Descriptor, error) {
ociArchRef, ok := imgRef.(ociArchiveReference)
if !ok {
return imgspecv1.Descriptor{}, errors.Errorf("error typecasting, need type ociArchiveReference")
}
tempDirRef, err := createUntarTempDir(ociArchRef)
if err != nil {
return imgspecv1.Descriptor{}, errors.Wrap(err, "error creating temp directory")
}
defer tempDirRef.deleteTempDir()
descriptor, err := ocilayout.LoadManifestDescriptor(tempDirRef.ociRefExtracted)
if err != nil {
return imgspecv1.Descriptor{}, errors.Wrap(err, "error loading index")
}
return descriptor, nil
}
// Reference returns the reference used to set up this source.
func (s *ociArchiveImageSource) Reference() types.ImageReference {
return s.ref
}
// Close removes resources associated with an initialized ImageSource, if any.
// Close deletes the temporary directory at dst
func (s *ociArchiveImageSource) Close() error {
defer s.tempDirRef.deleteTempDir()
return s.unpackedSrc.Close()
}
// GetManifest returns the image's manifest along with its MIME type
// (which may be empty when it can't be determined but the manifest is available).
func (s *ociArchiveImageSource) GetManifest() ([]byte, string, error) {
return s.unpackedSrc.GetManifest()
}
func (s *ociArchiveImageSource) GetTargetManifest(digest digest.Digest) ([]byte, string, error) {
return s.unpackedSrc.GetTargetManifest(digest)
}
// GetBlob returns a stream for the specified blob, and the blob's size.
func (s *ociArchiveImageSource) GetBlob(info types.BlobInfo) (io.ReadCloser, int64, error) {
return s.unpackedSrc.GetBlob(info)
}
func (s *ociArchiveImageSource) GetSignatures(c context.Context) ([][]byte, error) {
return s.unpackedSrc.GetSignatures(c)
}

View File

@@ -0,0 +1,225 @@
package archive
import (
"fmt"
"io/ioutil"
"os"
"path/filepath"
"regexp"
"strings"
"github.com/containers/image/directory/explicitfilepath"
"github.com/containers/image/docker/reference"
"github.com/containers/image/image"
ocilayout "github.com/containers/image/oci/layout"
"github.com/containers/image/transports"
"github.com/containers/image/types"
"github.com/containers/storage/pkg/archive"
"github.com/pkg/errors"
)
func init() {
transports.Register(Transport)
}
// Transport is an ImageTransport for OCI archives:
// it creates an oci-archive tar file by calling into the OCI transport,
// tarring the directory the OCI transport created, and deleting that directory.
var Transport = ociArchiveTransport{}
type ociArchiveTransport struct{}
// ociArchiveReference is an ImageReference for OCI Archive paths
type ociArchiveReference struct {
file string
resolvedFile string
image string
}
func (t ociArchiveTransport) Name() string {
return "oci-archive"
}
// ParseReference converts a string, which should not start with the ImageTransport.Name prefix
// into an ImageReference.
func (t ociArchiveTransport) ParseReference(reference string) (types.ImageReference, error) {
return ParseReference(reference)
}
// ValidatePolicyConfigurationScope checks that scope is a valid name for a signature.PolicyTransportScopes keys
func (t ociArchiveTransport) ValidatePolicyConfigurationScope(scope string) error {
var file string
sep := strings.SplitN(scope, ":", 2)
file = sep[0]
if len(sep) == 2 {
image := sep[1]
if !refRegexp.MatchString(image) {
return errors.Errorf("Invalid image %s", image)
}
}
if !strings.HasPrefix(file, "/") {
return errors.Errorf("Invalid scope %s: must be an absolute path", scope)
}
// Refuse also "/", otherwise "/" and "" would have the same semantics,
// and "" could be unexpectedly shadowed by the "/" entry.
// (Note: we do allow "/:someimage", a bit ridiculous but why refuse it?)
if scope == "/" {
return errors.New(`Invalid scope "/": Use the generic default scope ""`)
}
cleaned := filepath.Clean(file)
if cleaned != file {
return errors.Errorf(`Invalid scope %s: Uses non-canonical path format, perhaps try with path %s`, scope, cleaned)
}
return nil
}
// annotation spec from https://github.com/opencontainers/image-spec/blob/master/annotations.md#pre-defined-annotation-keys
const (
separator = `(?:[-._:@+]|--)`
alphanum = `(?:[A-Za-z0-9]+)`
component = `(?:` + alphanum + `(?:` + separator + alphanum + `)*)`
)
var refRegexp = regexp.MustCompile(`^` + component + `(?:/` + component + `)*$`)
// ParseReference converts a string, which should not start with the ImageTransport.Name prefix, into an OCI ImageReference.
func ParseReference(reference string) (types.ImageReference, error) {
var file, image string
sep := strings.SplitN(reference, ":", 2)
file = sep[0]
if len(sep) == 2 {
image = sep[1]
}
return NewReference(file, image)
}
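A hedged sketch of the split (path invented): "oci-archive:/tmp/busybox.tar:latest" reaches this function without its transport prefix, so
ref, err := ParseReference("/tmp/busybox.tar:latest")
// yields file == "/tmp/busybox.tar", image == "latest"; with no colon, image stays "".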
// NewReference returns an OCI reference for a file and an image.
func NewReference(file, image string) (types.ImageReference, error) {
resolved, err := explicitfilepath.ResolvePathToFullyExplicit(file)
if err != nil {
return nil, err
}
// This is necessary to prevent directory paths returned by PolicyConfigurationNamespaces
// from being ambiguous with values of PolicyConfigurationIdentity.
if strings.Contains(resolved, ":") {
return nil, errors.Errorf("Invalid OCI reference %s:%s: path %s contains a colon", file, image, resolved)
}
if len(image) > 0 && !refRegexp.MatchString(image) {
return nil, errors.Errorf("Invalid image %s", image)
}
return ociArchiveReference{file: file, resolvedFile: resolved, image: image}, nil
}
func (ref ociArchiveReference) Transport() types.ImageTransport {
return Transport
}
// StringWithinTransport returns a string representation of the reference, which MUST be such that
// reference.Transport().ParseReference(reference.StringWithinTransport()) returns an equivalent reference.
func (ref ociArchiveReference) StringWithinTransport() string {
return fmt.Sprintf("%s:%s", ref.file, ref.image)
}
// DockerReference returns a Docker reference associated with this reference
func (ref ociArchiveReference) DockerReference() reference.Named {
return nil
}
// PolicyConfigurationIdentity returns a string representation of the reference, suitable for policy lookup.
func (ref ociArchiveReference) PolicyConfigurationIdentity() string {
// NOTE: ref.image is not a part of the image identity, because "$dir:$someimage" and "$dir:" may mean the
// same image and the two can't be statically disambiguated. Using at least the repository directory is
// less granular but hopefully still useful.
return fmt.Sprintf("%s", ref.resolvedFile)
}
// PolicyConfigurationNamespaces returns a list of other policy configuration namespaces to search
// for if explicit configuration for PolicyConfigurationIdentity() is not set
func (ref ociArchiveReference) PolicyConfigurationNamespaces() []string {
res := []string{}
path := ref.resolvedFile
for {
lastSlash := strings.LastIndex(path, "/")
// Note that we do not include "/"; it is redundant with the default "" global default,
// and rejected by ociTransport.ValidatePolicyConfigurationScope above.
if lastSlash == -1 || path == "/" {
break
}
res = append(res, path)
path = path[:lastSlash]
}
return res
}
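For an invented resolvedFile of "/var/lib/images/busybox.tar", the loop above yields, from most to least specific:
// ["/var/lib/images/busybox.tar", "/var/lib/images", "/var/lib", "/var"]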
// NewImage returns a types.Image for this reference, possibly specialized for this ImageTransport.
// The caller must call .Close() on the returned Image.
func (ref ociArchiveReference) NewImage(ctx *types.SystemContext) (types.Image, error) {
src, err := newImageSource(ctx, ref)
if err != nil {
return nil, err
}
return image.FromSource(src)
}
// NewImageSource returns a types.ImageSource for this reference.
// The caller must call .Close() on the returned ImageSource.
func (ref ociArchiveReference) NewImageSource(ctx *types.SystemContext) (types.ImageSource, error) {
return newImageSource(ctx, ref)
}
// NewImageDestination returns a types.ImageDestination for this reference.
// The caller must call .Close() on the returned ImageDestination.
func (ref ociArchiveReference) NewImageDestination(ctx *types.SystemContext) (types.ImageDestination, error) {
return newImageDestination(ctx, ref)
}
// DeleteImage deletes the named image from the registry, if supported.
func (ref ociArchiveReference) DeleteImage(ctx *types.SystemContext) error {
return errors.Errorf("Deleting images not implemented for oci: images")
}
// tempDirOCIRef stores the ociReference and the temporary directory returned by createOCIRef
type tempDirOCIRef struct {
tempDirectory string
ociRefExtracted types.ImageReference
}
// deleteTempDir deletes the temporary directory created earlier
func (t *tempDirOCIRef) deleteTempDir() error {
return os.RemoveAll(t.tempDirectory)
}
// createOCIRef creates the oci reference of the image
func createOCIRef(image string) (tempDirOCIRef, error) {
dir, err := ioutil.TempDir("/var/tmp", "oci")
if err != nil {
return tempDirOCIRef{}, errors.Wrapf(err, "error creating temp directory")
}
ociRef, err := ocilayout.NewReference(dir, image)
if err != nil {
return tempDirOCIRef{}, err
}
tempDirRef := tempDirOCIRef{tempDirectory: dir, ociRefExtracted: ociRef}
return tempDirRef, nil
}
// createUntarTempDir creates a temporary directory and untars the archive's content into it
func createUntarTempDir(ref ociArchiveReference) (tempDirOCIRef, error) {
tempDirRef, err := createOCIRef(ref.image)
if err != nil {
return tempDirOCIRef{}, errors.Wrap(err, "error creating oci reference")
}
src := ref.resolvedFile
dst := tempDirRef.tempDirectory
if err := archive.UntarPath(src, dst); err != nil {
if err := tempDirRef.deleteTempDir(); err != nil {
return tempDirOCIRef{}, errors.Wrapf(err, "error deleting temp directory %q", tempDirRef.tempDirectory)
}
return tempDirOCIRef{}, errors.Wrapf(err, "error untarring file %q", tempDirRef.tempDirectory)
}
return tempDirRef, nil
}

View File

@@ -6,22 +6,59 @@ import (
"io/ioutil"
"os"
"path/filepath"
"runtime"
"github.com/pkg/errors"
"github.com/containers/image/manifest"
"github.com/containers/image/types"
"github.com/opencontainers/go-digest"
imgspec "github.com/opencontainers/image-spec/specs-go"
imgspecv1 "github.com/opencontainers/image-spec/specs-go/v1"
)
type ociImageDestination struct {
ref ociReference
ref ociReference
index imgspecv1.Index
sharedBlobDir string
}
// newImageDestination returns an ImageDestination for writing to an existing directory.
func newImageDestination(ref ociReference) types.ImageDestination {
return &ociImageDestination{ref: ref}
func newImageDestination(ctx *types.SystemContext, ref ociReference) (types.ImageDestination, error) {
if ref.image == "" {
return nil, errors.Errorf("cannot save image with empty image.ref.name")
}
var index *imgspecv1.Index
if indexExists(ref) {
var err error
index, err = ref.getIndex()
if err != nil {
return nil, err
}
} else {
index = &imgspecv1.Index{
Versioned: imgspec.Versioned{
SchemaVersion: 2,
},
}
}
d := &ociImageDestination{ref: ref, index: *index}
if ctx != nil {
d.sharedBlobDir = ctx.OCISharedBlobDirPath
}
if err := ensureDirectoryExists(d.ref.dir); err != nil {
return nil, err
}
// Per the OCI image specification, layouts MUST have a "blobs" subdirectory,
// but it MAY be empty (e.g. if we never end up calling PutBlob)
// https://github.com/opencontainers/image-spec/blame/7c889fafd04a893f5c5f50b7ab9963d5d64e5242/image-layout.md#L19
if err := ensureDirectoryExists(filepath.Join(d.ref.dir, "blobs")); err != nil {
return nil, err
}
return d, nil
}
// Reference returns the reference used to set up this destination. Note that this should directly correspond to user's intent,
@@ -55,6 +92,11 @@ func (d *ociImageDestination) ShouldCompressLayers() bool {
// AcceptsForeignLayerURLs returns false iff foreign layers in manifest should be actually
// uploaded to the image destination, true otherwise.
func (d *ociImageDestination) AcceptsForeignLayerURLs() bool {
return true
}
// MustMatchRuntimeOS returns true iff the destination can store only images targeted for the current runtime OS. False otherwise.
func (d *ociImageDestination) MustMatchRuntimeOS() bool {
return false
}
@@ -65,9 +107,6 @@ func (d *ociImageDestination) AcceptsForeignLayerURLs() bool {
// to any other readers for download using the supplied digest.
// If stream.Read() at any time, ESPECIALLY at end of input, returns an error, PutBlob MUST 1) fail, and 2) delete any data stored so far.
func (d *ociImageDestination) PutBlob(stream io.Reader, inputInfo types.BlobInfo) (types.BlobInfo, error) {
if err := ensureDirectoryExists(d.ref.dir); err != nil {
return types.BlobInfo{}, err
}
blobFile, err := ioutil.TempFile(d.ref.dir, "oci-put-blob")
if err != nil {
return types.BlobInfo{}, err
@@ -98,7 +137,7 @@ func (d *ociImageDestination) PutBlob(stream io.Reader, inputInfo types.BlobInfo
return types.BlobInfo{}, err
}
blobPath, err := d.ref.blobPath(computedDigest)
blobPath, err := d.ref.blobPath(computedDigest, d.sharedBlobDir)
if err != nil {
return types.BlobInfo{}, err
}
@@ -120,7 +159,7 @@ func (d *ociImageDestination) HasBlob(info types.BlobInfo) (bool, int64, error)
if info.Digest == "" {
return false, -1, errors.Errorf("Can not check for a blob with unknown digest")
}
blobPath, err := d.ref.blobPath(info.Digest)
blobPath, err := d.ref.blobPath(info.Digest, d.sharedBlobDir)
if err != nil {
return false, -1, err
}
@@ -138,6 +177,10 @@ func (d *ociImageDestination) ReapplyBlob(info types.BlobInfo) (types.BlobInfo,
return info, nil
}
// PutManifest writes manifest to the destination.
// FIXME? This should also receive a MIME type if known, to differentiate between schema versions.
// If the destination is in principle available, refuses this manifest type (e.g. it does not recognize the schema),
// but may accept a different manifest type, the returned error must be a ManifestTypeRejectedError.
func (d *ociImageDestination) PutManifest(m []byte) error {
digest, err := manifest.Digest(m)
if err != nil {
@@ -148,12 +191,8 @@ func (d *ociImageDestination) PutManifest(m []byte) error {
// TODO(runcom): be aware and add support for OCI manifest list
desc.MediaType = imgspecv1.MediaTypeImageManifest
desc.Size = int64(len(m))
data, err := json.Marshal(desc)
if err != nil {
return err
}
blobPath, err := d.ref.blobPath(digest)
blobPath, err := d.ref.blobPath(digest, d.sharedBlobDir)
if err != nil {
return err
}
@@ -163,15 +202,54 @@ func (d *ociImageDestination) PutManifest(m []byte) error {
if err := ioutil.WriteFile(blobPath, m, 0644); err != nil {
return err
}
// TODO(runcom): ugly here?
if d.ref.image == "" {
return errors.Errorf("cannot save image with empyt image.ref.name")
}
annotations := make(map[string]string)
annotations["org.opencontainers.image.ref.name"] = d.ref.image
desc.Annotations = annotations
desc.Platform = &imgspecv1.Platform{
Architecture: runtime.GOARCH,
OS: runtime.GOOS,
}
d.addManifest(&desc)
return nil
}
func (d *ociImageDestination) addManifest(desc *imgspecv1.Descriptor) {
for i, manifest := range d.index.Manifests {
if manifest.Annotations["org.opencontainers.image.ref.name"] == desc.Annotations["org.opencontainers.image.ref.name"] {
// TODO Should there first be a cleanup based on the descriptor we are going to replace?
d.index.Manifests[i] = *desc
return
}
}
d.index.Manifests = append(d.index.Manifests, *desc)
}
func (d *ociImageDestination) PutSignatures(signatures [][]byte) error {
if len(signatures) != 0 {
return errors.Errorf("Pushing signatures for OCI images is not supported")
}
return nil
}
// Commit marks the process of storing the image as successful and asks for the image to be persisted.
// WARNING: This does not have any transactional semantics:
// - Uploaded data MAY be visible to others before Commit() is called
// - Uploaded data MAY be removed or MAY remain around if Close() is called without Commit() (i.e. rollback is allowed but not guaranteed)
func (d *ociImageDestination) Commit() error {
if err := ioutil.WriteFile(d.ref.ociLayoutPath(), []byte(`{"imageLayoutVersion": "1.0.0"}`), 0644); err != nil {
return err
}
descriptorPath := d.ref.descriptorPath(d.ref.tag)
if err := ensureParentDirectoryExists(descriptorPath); err != nil {
indexJSON, err := json.Marshal(d.index)
if err != nil {
return err
}
return ioutil.WriteFile(descriptorPath, data, 0644)
return ioutil.WriteFile(d.ref.indexPath(), indexJSON, 0644)
}
func ensureDirectoryExists(path string) error {
@@ -188,17 +266,15 @@ func ensureParentDirectoryExists(path string) error {
return ensureDirectoryExists(filepath.Dir(path))
}
func (d *ociImageDestination) PutSignatures(signatures [][]byte) error {
if len(signatures) != 0 {
return errors.Errorf("Pushing signatures for OCI images is not supported")
// indexExists checks whether the index location specified in the OCI reference exists.
// The implementation is opinionated: false is returned in case of unexpected errors.
func indexExists(ref ociReference) bool {
_, err := os.Stat(ref.indexPath())
if err == nil {
return true
}
return nil
}
// Commit marks the process of storing the image as successful and asks for the image to be persisted.
// WARNING: This does not have any transactional semantics:
// - Uploaded data MAY be visible to others before Commit() is called
// - Uploaded data MAY be removed or MAY remain around if Close() is called without Commit() (i.e. rollback is allowed but not guaranteed)
func (d *ociImageDestination) Commit() error {
return nil
if os.IsNotExist(err) {
return false
}
return true
}
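To make the new index.json bookkeeping concrete, here is a standalone sketch of the structure that newImageDestination initializes, addManifest extends, and Commit serializes; the digest, size, and ref name are placeholder values.

package main

import (
    "encoding/json"
    "fmt"

    imgspec "github.com/opencontainers/image-spec/specs-go"
    imgspecv1 "github.com/opencontainers/image-spec/specs-go/v1"
)

func main() {
    // A fresh index, as newImageDestination creates when index.json does not exist yet.
    index := imgspecv1.Index{
        Versioned: imgspec.Versioned{SchemaVersion: 2},
    }

    // A manifest descriptor tagged via the ref.name annotation, as PutManifest builds it.
    index.Manifests = append(index.Manifests, imgspecv1.Descriptor{
        MediaType: imgspecv1.MediaTypeImageManifest,
        Digest:    "sha256:e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855", // placeholder
        Size:      0,                                                                        // placeholder
        Annotations: map[string]string{
            "org.opencontainers.image.ref.name": "latest",
        },
    })

    // What Commit writes to index.json (indented here for readability).
    out, err := json.MarshalIndent(index, "", "  ")
    if err != nil {
        panic(err)
    }
    fmt.Println(string(out))
}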

View File

@@ -1,23 +1,52 @@
package layout
import (
"encoding/json"
"context"
"io"
"io/ioutil"
"net/http"
"os"
"strconv"
"github.com/containers/image/pkg/tlsclientconfig"
"github.com/containers/image/types"
"github.com/docker/go-connections/tlsconfig"
"github.com/opencontainers/go-digest"
imgspecv1 "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/pkg/errors"
)
type ociImageSource struct {
ref ociReference
ref ociReference
descriptor imgspecv1.Descriptor
client *http.Client
sharedBlobDir string
}
// newImageSource returns an ImageSource for reading from an existing directory.
func newImageSource(ref ociReference) types.ImageSource {
return &ociImageSource{ref: ref}
func newImageSource(ctx *types.SystemContext, ref ociReference) (types.ImageSource, error) {
tr := tlsclientconfig.NewTransport()
tr.TLSClientConfig = tlsconfig.ServerDefault()
if ctx != nil && ctx.OCICertPath != "" {
if err := tlsclientconfig.SetupCertificates(ctx.OCICertPath, tr.TLSClientConfig); err != nil {
return nil, err
}
tr.TLSClientConfig.InsecureSkipVerify = ctx.OCIInsecureSkipTLSVerify
}
client := &http.Client{}
client.Transport = tr
descriptor, err := ref.getManifestDescriptor()
if err != nil {
return nil, err
}
d := &ociImageSource{ref: ref, descriptor: descriptor, client: client}
if ctx != nil {
// TODO(jonboulle): check dir existence?
d.sharedBlobDir = ctx.OCISharedBlobDirPath
}
return d, nil
}
// Reference returns the reference used to set up this source.
@@ -33,19 +62,7 @@ func (s *ociImageSource) Close() error {
// GetManifest returns the image's manifest along with its MIME type (which may be empty when it can't be determined but the manifest is available).
// It may use a remote (= slow) service.
func (s *ociImageSource) GetManifest() ([]byte, string, error) {
descriptorPath := s.ref.descriptorPath(s.ref.tag)
data, err := ioutil.ReadFile(descriptorPath)
if err != nil {
return nil, "", err
}
desc := imgspecv1.Descriptor{}
err = json.Unmarshal(data, &desc)
if err != nil {
return nil, "", err
}
manifestPath, err := s.ref.blobPath(digest.Digest(desc.Digest))
manifestPath, err := s.ref.blobPath(digest.Digest(s.descriptor.Digest), s.sharedBlobDir)
if err != nil {
return nil, "", err
}
@@ -54,11 +71,11 @@ func (s *ociImageSource) GetManifest() ([]byte, string, error) {
return nil, "", err
}
return m, desc.MediaType, nil
return m, s.descriptor.MediaType, nil
}
func (s *ociImageSource) GetTargetManifest(digest digest.Digest) ([]byte, string, error) {
manifestPath, err := s.ref.blobPath(digest)
manifestPath, err := s.ref.blobPath(digest, s.sharedBlobDir)
if err != nil {
return nil, "", err
}
@@ -77,7 +94,11 @@ func (s *ociImageSource) GetTargetManifest(digest digest.Digest) ([]byte, string
// GetBlob returns a stream for the specified blob, and the blob's size.
func (s *ociImageSource) GetBlob(info types.BlobInfo) (io.ReadCloser, int64, error) {
path, err := s.ref.blobPath(info.Digest)
if len(info.URLs) != 0 {
return s.getExternalBlob(info.URLs)
}
path, err := s.ref.blobPath(info.Digest, s.sharedBlobDir)
if err != nil {
return nil, 0, err
}
@@ -93,6 +114,35 @@ func (s *ociImageSource) GetBlob(info types.BlobInfo) (io.ReadCloser, int64, err
return r, fi.Size(), nil
}
func (s *ociImageSource) GetSignatures() ([][]byte, error) {
func (s *ociImageSource) GetSignatures(context.Context) ([][]byte, error) {
return [][]byte{}, nil
}
func (s *ociImageSource) getExternalBlob(urls []string) (io.ReadCloser, int64, error) {
errWrap := errors.New("failed fetching external blob from all urls")
for _, url := range urls {
resp, err := s.client.Get(url)
if err != nil {
errWrap = errors.Wrapf(errWrap, "fetching %s failed %s", url, err.Error())
continue
}
if resp.StatusCode != http.StatusOK {
resp.Body.Close()
errWrap = errors.Wrapf(errWrap, "fetching %s failed, response code not 200", url)
continue
}
return resp.Body, getBlobSize(resp), nil
}
return nil, 0, errWrap
}
func getBlobSize(resp *http.Response) int64 {
size, err := strconv.ParseInt(resp.Header.Get("Content-Length"), 10, 64)
if err != nil {
size = -1
}
return size
}
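The external-blob path above amounts to a plain HTTP GET plus the Content-Length fallback in getBlobSize; a minimal standalone sketch, with a placeholder URL standing in for an entry from info.URLs:

package main

import (
    "fmt"
    "io"
    "io/ioutil"
    "net/http"
    "os"
    "strconv"
)

// blobSize mirrors getBlobSize: -1 when Content-Length is missing or malformed.
func blobSize(resp *http.Response) int64 {
    size, err := strconv.ParseInt(resp.Header.Get("Content-Length"), 10, 64)
    if err != nil {
        return -1
    }
    return size
}

func main() {
    resp, err := http.Get("https://example.com/layer.tar.gz") // placeholder foreign-layer URL
    if err != nil {
        fmt.Fprintln(os.Stderr, "fetch failed:", err)
        return
    }
    defer resp.Body.Close()
    if resp.StatusCode != http.StatusOK {
        fmt.Fprintln(os.Stderr, "unexpected status:", resp.Status)
        return
    }
    n, _ := io.Copy(ioutil.Discard, resp.Body)
    fmt.Printf("read %d bytes (advertised %d)\n", n, blobSize(resp))
}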

View File

@@ -1,7 +1,9 @@
package layout
import (
"encoding/json"
"fmt"
"os"
"path/filepath"
"regexp"
"strings"
@@ -12,6 +14,7 @@ import (
"github.com/containers/image/transports"
"github.com/containers/image/types"
"github.com/opencontainers/go-digest"
imgspecv1 "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/pkg/errors"
)
@@ -33,7 +36,14 @@ func (t ociTransport) ParseReference(reference string) (types.ImageReference, er
return ParseReference(reference)
}
var refRegexp = regexp.MustCompile(`^([A-Za-z0-9._-]+)+$`)
// annotation spec from https://github.com/opencontainers/image-spec/blob/master/annotations.md#pre-defined-annotation-keys
const (
separator = `(?:[-._:@+]|--)`
alphanum = `(?:[A-Za-z0-9]+)`
component = `(?:` + alphanum + `(?:` + separator + alphanum + `)*)`
)
var refRegexp = regexp.MustCompile(`^` + component + `(?:/` + component + `)*$`)
// ValidatePolicyConfigurationScope checks that scope is a valid name for a signature.PolicyTransportScopes keys
// (i.e. a valid PolicyConfigurationIdentity() or PolicyConfigurationNamespaces() return value).
@@ -41,19 +51,14 @@ var refRegexp = regexp.MustCompile(`^([A-Za-z0-9._-]+)+$`)
// scope passed to this function will not be "", that value is always allowed.
func (t ociTransport) ValidatePolicyConfigurationScope(scope string) error {
var dir string
sep := strings.LastIndex(scope, ":")
if sep == -1 {
dir = scope
} else {
dir = scope[:sep]
tag := scope[sep+1:]
if !refRegexp.MatchString(tag) {
return errors.Errorf("Invalid tag %s", tag)
}
}
sep := strings.SplitN(scope, ":", 2)
dir = sep[0]
if strings.Contains(dir, ":") {
return errors.Errorf("Invalid OCI reference %s: path contains a colon", scope)
if len(sep) == 2 {
image := sep[1]
if !refRegexp.MatchString(image) {
return errors.Errorf("Invalid image %s", image)
}
}
if !strings.HasPrefix(dir, "/") {
@@ -61,7 +66,7 @@ func (t ociTransport) ValidatePolicyConfigurationScope(scope string) error {
}
// Refuse also "/", otherwise "/" and "" would have the same semantics,
// and "" could be unexpectedly shadowed by the "/" entry.
// (Note: we do allow "/:sometag", a bit ridiculous but why refuse it?)
// (Note: we do allow "/:someimage", a bit ridiculous but why refuse it?)
if scope == "/" {
return errors.New(`Invalid scope "/": Use the generic default scope ""`)
}
@@ -82,28 +87,26 @@ type ociReference struct {
// (But in general, we make no attempt to be completely safe against concurrent hostile filesystem modifications.)
dir string // As specified by the user. May be relative, contain symlinks, etc.
resolvedDir string // Absolute path with no symlinks, at least at the time of its creation. Primarily used for policy namespaces.
tag string
image string // If image=="", it means the only image in the index.json is used
}
// ParseReference converts a string, which should not start with the ImageTransport.Name prefix, into an OCI ImageReference.
func ParseReference(reference string) (types.ImageReference, error) {
var dir, tag string
sep := strings.LastIndex(reference, ":")
if sep == -1 {
dir = reference
tag = "latest"
} else {
dir = reference[:sep]
tag = reference[sep+1:]
var dir, image string
sep := strings.SplitN(reference, ":", 2)
dir = sep[0]
if len(sep) == 2 {
image = sep[1]
}
return NewReference(dir, tag)
return NewReference(dir, image)
}
// NewReference returns an OCI reference for a directory and a tag.
// NewReference returns an OCI reference for a directory and an image.
//
// We do not expose an API supplying the resolvedDir; we could, but recomputing it
// is generally cheap enough that we prefer being confident about the properties of resolvedDir.
func NewReference(dir, tag string) (types.ImageReference, error) {
func NewReference(dir, image string) (types.ImageReference, error) {
resolved, err := explicitfilepath.ResolvePathToFullyExplicit(dir)
if err != nil {
return nil, err
@@ -111,12 +114,12 @@ func NewReference(dir, tag string) (types.ImageReference, error) {
// This is necessary to prevent directory paths returned by PolicyConfigurationNamespaces
// from being ambiguous with values of PolicyConfigurationIdentity.
if strings.Contains(resolved, ":") {
return nil, errors.Errorf("Invalid OCI reference %s:%s: path %s contains a colon", dir, tag, resolved)
return nil, errors.Errorf("Invalid OCI reference %s:%s: path %s contains a colon", dir, image, resolved)
}
if !refRegexp.MatchString(tag) {
return nil, errors.Errorf("Invalid tag %s", tag)
if len(image) > 0 && !refRegexp.MatchString(image) {
return nil, errors.Errorf("Invalid image %s", image)
}
return ociReference{dir: dir, resolvedDir: resolved, tag: tag}, nil
return ociReference{dir: dir, resolvedDir: resolved, image: image}, nil
}
func (ref ociReference) Transport() types.ImageTransport {
@@ -129,7 +132,7 @@ func (ref ociReference) Transport() types.ImageTransport {
// e.g. default attribute values omitted by the user may be filled in in the return value, or vice versa.
// WARNING: Do not use the return value in the UI to describe an image, it does not contain the Transport().Name() prefix.
func (ref ociReference) StringWithinTransport() string {
return fmt.Sprintf("%s:%s", ref.dir, ref.tag)
return fmt.Sprintf("%s:%s", ref.dir, ref.image)
}
// DockerReference returns a Docker reference associated with this reference
@@ -147,7 +150,10 @@ func (ref ociReference) DockerReference() reference.Named {
// not required/guaranteed that it will be a valid input to Transport().ParseReference().
// Returns "" if configuration identities for these references are not supported.
func (ref ociReference) PolicyConfigurationIdentity() string {
return fmt.Sprintf("%s:%s", ref.resolvedDir, ref.tag)
// NOTE: ref.image is not a part of the image identity, because "$dir:$someimage" and "$dir:" may mean the
// same image and the two can't be statically disambiguated. Using at least the repository directory is
// less granular but hopefully still useful.
return ref.resolvedDir
}
// PolicyConfigurationNamespaces returns a list of other policy configuration namespaces to search
@@ -176,22 +182,86 @@ func (ref ociReference) PolicyConfigurationNamespaces() []string {
// NOTE: If any kind of signature verification should happen, build an UnparsedImage from the value returned by NewImageSource,
// verify that UnparsedImage, and convert it into a real Image via image.FromUnparsedImage.
func (ref ociReference) NewImage(ctx *types.SystemContext) (types.Image, error) {
src := newImageSource(ref)
src, err := newImageSource(ctx, ref)
if err != nil {
return nil, err
}
return image.FromSource(src)
}
// NewImageSource returns a types.ImageSource for this reference,
// asking the backend to use a manifest from requestedManifestMIMETypes if possible.
// nil requestedManifestMIMETypes means manifest.DefaultRequestedManifestMIMETypes.
// getIndex returns a pointer to the index referenced by this ociReference. If an error occurs opening the index, nil
// is returned together with an error.
func (ref ociReference) getIndex() (*imgspecv1.Index, error) {
indexJSON, err := os.Open(ref.indexPath())
if err != nil {
return nil, err
}
defer indexJSON.Close()
index := &imgspecv1.Index{}
if err := json.NewDecoder(indexJSON).Decode(index); err != nil {
return nil, err
}
return index, nil
}
func (ref ociReference) getManifestDescriptor() (imgspecv1.Descriptor, error) {
index, err := ref.getIndex()
if err != nil {
return imgspecv1.Descriptor{}, err
}
var d *imgspecv1.Descriptor
if ref.image == "" {
// return manifest if only one image is in the oci directory
if len(index.Manifests) == 1 {
d = &index.Manifests[0]
} else {
// ask the user to choose an image when more than one image is in the oci directory
return imgspecv1.Descriptor{}, errors.New("more than one image in oci, choose an image")
}
} else {
// if image specified, look through all manifests for a match
for _, md := range index.Manifests {
if md.MediaType != imgspecv1.MediaTypeImageManifest {
continue
}
refName, ok := md.Annotations["org.opencontainers.image.ref.name"]
if !ok {
continue
}
if refName == ref.image {
d = &md
break
}
}
}
if d == nil {
return imgspecv1.Descriptor{}, fmt.Errorf("no descriptor found for reference %q", ref.image)
}
return *d, nil
}
// LoadManifestDescriptor loads the manifest descriptor to be used to retrieve the image name
// when pulling an image
func LoadManifestDescriptor(imgRef types.ImageReference) (imgspecv1.Descriptor, error) {
ociRef, ok := imgRef.(ociReference)
if !ok {
return imgspecv1.Descriptor{}, errors.Errorf("error typecasting, need type ociRef")
}
return ociRef.getManifestDescriptor()
}
// NewImageSource returns a types.ImageSource for this reference.
// The caller must call .Close() on the returned ImageSource.
func (ref ociReference) NewImageSource(ctx *types.SystemContext, requestedManifestMIMETypes []string) (types.ImageSource, error) {
return newImageSource(ref), nil
func (ref ociReference) NewImageSource(ctx *types.SystemContext) (types.ImageSource, error) {
return newImageSource(ctx, ref)
}
// NewImageDestination returns a types.ImageDestination for this reference.
// The caller must call .Close() on the returned ImageDestination.
func (ref ociReference) NewImageDestination(ctx *types.SystemContext) (types.ImageDestination, error) {
return newImageDestination(ref), nil
return newImageDestination(ctx, ref)
}
// DeleteImage deletes the named image from the registry, if supported.
@@ -199,20 +269,24 @@ func (ref ociReference) DeleteImage(ctx *types.SystemContext) error {
return errors.Errorf("Deleting images not implemented for oci: images")
}
// ociLayoutPathPath returns a path for the oci-layout within a directory using OCI conventions.
// ociLayoutPath returns a path for the oci-layout within a directory using OCI conventions.
func (ref ociReference) ociLayoutPath() string {
return filepath.Join(ref.dir, "oci-layout")
}
// indexPath returns a path for the index.json within a directory using OCI conventions.
func (ref ociReference) indexPath() string {
return filepath.Join(ref.dir, "index.json")
}
// blobPath returns a path for a blob within a directory using OCI image-layout conventions.
func (ref ociReference) blobPath(digest digest.Digest) (string, error) {
func (ref ociReference) blobPath(digest digest.Digest, sharedBlobDir string) (string, error) {
if err := digest.Validate(); err != nil {
return "", errors.Wrapf(err, "unexpected digest reference %s", digest)
}
return filepath.Join(ref.dir, "blobs", digest.Algorithm().String(), digest.Hex()), nil
}
// descriptorPath returns a path for the manifest within a directory using OCI conventions.
func (ref ociReference) descriptorPath(digest string) string {
return filepath.Join(ref.dir, "refs", digest)
blobDir := filepath.Join(ref.dir, "blobs")
if sharedBlobDir != "" {
blobDir = sharedBlobDir
}
return filepath.Join(blobDir, digest.Algorithm().String(), digest.Hex()), nil
}
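The effect of the reference changes can be seen in isolation; this sketch reuses the component grammar above and the SplitN-based parsing, applied to a few sample strings (the paths are placeholders):

package main

import (
    "fmt"
    "regexp"
    "strings"
)

// The same image-name grammar the transport now uses.
const (
    separator = `(?:[-._:@+]|--)`
    alphanum  = `(?:[A-Za-z0-9]+)`
    component = `(?:` + alphanum + `(?:` + separator + alphanum + `)*)`
)

var refRegexp = regexp.MustCompile(`^` + component + `(?:/` + component + `)*$`)

func main() {
    for _, ref := range []string{"/tmp/layout:latest", "/tmp/layout", "/tmp/layout:bad name"} {
        // Split on the first colon only; everything after it is the image name.
        sep := strings.SplitN(ref, ":", 2)
        dir, image := sep[0], ""
        if len(sep) == 2 {
            image = sep[1]
        }
        // An empty image now means "the only image in index.json".
        ok := image == "" || refRegexp.MatchString(image)
        fmt.Printf("dir=%q image=%q valid=%v\n", dir, image, ok)
    }
}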

View File

@@ -2,6 +2,7 @@ package openshift
import (
"bytes"
"context"
"crypto/rand"
"encoding/json"
"fmt"
@@ -11,7 +12,6 @@ import (
"net/url"
"strings"
"github.com/Sirupsen/logrus"
"github.com/containers/image/docker"
"github.com/containers/image/docker/reference"
"github.com/containers/image/manifest"
@@ -19,6 +19,7 @@ import (
"github.com/containers/image/version"
"github.com/opencontainers/go-digest"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
)
// openshiftClient is configuration for dealing with a single image stream, for reading or writing.
@@ -70,7 +71,7 @@ func newOpenshiftClient(ref openshiftReference) (*openshiftClient, error) {
}
// doRequest performs a correctly authenticated request to a specified path, and returns response body or an error object.
func (c *openshiftClient) doRequest(method, path string, requestBody []byte) ([]byte, error) {
func (c *openshiftClient) doRequest(ctx context.Context, method, path string, requestBody []byte) ([]byte, error) {
url := *c.baseURL
url.Path = path
var requestBodyReader io.Reader
@@ -82,6 +83,7 @@ func (c *openshiftClient) doRequest(method, path string, requestBody []byte) ([]
if err != nil {
return nil, err
}
req = req.WithContext(ctx)
if len(c.bearerToken) != 0 {
req.Header.Set("Authorization", "Bearer "+c.bearerToken)
@@ -132,10 +134,10 @@ func (c *openshiftClient) doRequest(method, path string, requestBody []byte) ([]
}
// getImage loads the specified image object.
func (c *openshiftClient) getImage(imageStreamImageName string) (*image, error) {
func (c *openshiftClient) getImage(ctx context.Context, imageStreamImageName string) (*image, error) {
// FIXME: validate components per validation.IsValidPathSegmentName?
path := fmt.Sprintf("/oapi/v1/namespaces/%s/imagestreamimages/%s@%s", c.ref.namespace, c.ref.stream, imageStreamImageName)
body, err := c.doRequest("GET", path, nil)
body, err := c.doRequest(ctx, "GET", path, nil)
if err != nil {
return nil, err
}
@@ -160,18 +162,15 @@ func (c *openshiftClient) convertDockerImageReference(ref string) (string, error
type openshiftImageSource struct {
client *openshiftClient
// Values specific to this image
ctx *types.SystemContext
requestedManifestMIMETypes []string
ctx *types.SystemContext
// State
docker types.ImageSource // The Docker Registry endpoint, or nil if not resolved yet
imageStreamImageName string // Resolved image identifier, or "" if not known yet
}
// newImageSource creates a new ImageSource for the specified reference,
// asking the backend to use a manifest from requestedManifestMIMETypes if possible.
// nil requestedManifestMIMETypes means manifest.DefaultRequestedManifestMIMETypes.
// newImageSource creates a new ImageSource for the specified reference.
// The caller must call .Close() on the returned ImageSource.
func newImageSource(ctx *types.SystemContext, ref openshiftReference, requestedManifestMIMETypes []string) (types.ImageSource, error) {
func newImageSource(ctx *types.SystemContext, ref openshiftReference) (types.ImageSource, error) {
client, err := newOpenshiftClient(ref)
if err != nil {
return nil, err
@@ -180,7 +179,6 @@ func newImageSource(ctx *types.SystemContext, ref openshiftReference, requestedM
return &openshiftImageSource{
client: client,
ctx: ctx,
requestedManifestMIMETypes: requestedManifestMIMETypes,
}, nil
}
@@ -203,7 +201,7 @@ func (s *openshiftImageSource) Close() error {
}
func (s *openshiftImageSource) GetTargetManifest(digest digest.Digest) ([]byte, string, error) {
if err := s.ensureImageIsResolved(); err != nil {
if err := s.ensureImageIsResolved(context.TODO()); err != nil {
return nil, "", err
}
return s.docker.GetTargetManifest(digest)
@@ -212,7 +210,7 @@ func (s *openshiftImageSource) GetTargetManifest(digest digest.Digest) ([]byte,
// GetManifest returns the image's manifest along with its MIME type (which may be empty when it can't be determined but the manifest is available).
// It may use a remote (= slow) service.
func (s *openshiftImageSource) GetManifest() ([]byte, string, error) {
if err := s.ensureImageIsResolved(); err != nil {
if err := s.ensureImageIsResolved(context.TODO()); err != nil {
return nil, "", err
}
return s.docker.GetManifest()
@@ -220,18 +218,18 @@ func (s *openshiftImageSource) GetManifest() ([]byte, string, error) {
// GetBlob returns a stream for the specified blob, and the blob's size (or -1 if unknown).
func (s *openshiftImageSource) GetBlob(info types.BlobInfo) (io.ReadCloser, int64, error) {
if err := s.ensureImageIsResolved(); err != nil {
if err := s.ensureImageIsResolved(context.TODO()); err != nil {
return nil, 0, err
}
return s.docker.GetBlob(info)
}
func (s *openshiftImageSource) GetSignatures() ([][]byte, error) {
if err := s.ensureImageIsResolved(); err != nil {
func (s *openshiftImageSource) GetSignatures(ctx context.Context) ([][]byte, error) {
if err := s.ensureImageIsResolved(ctx); err != nil {
return nil, err
}
image, err := s.client.getImage(s.imageStreamImageName)
image, err := s.client.getImage(ctx, s.imageStreamImageName)
if err != nil {
return nil, err
}
@@ -245,14 +243,14 @@ func (s *openshiftImageSource) GetSignatures() ([][]byte, error) {
}
// ensureImageIsResolved sets up s.docker and s.imageStreamImageName
func (s *openshiftImageSource) ensureImageIsResolved() error {
func (s *openshiftImageSource) ensureImageIsResolved(ctx context.Context) error {
if s.docker != nil {
return nil
}
// FIXME: validate components per validation.IsValidPathSegmentName?
path := fmt.Sprintf("/oapi/v1/namespaces/%s/imagestreams/%s", s.client.ref.namespace, s.client.ref.stream)
body, err := s.client.doRequest("GET", path, nil)
body, err := s.client.doRequest(ctx, "GET", path, nil)
if err != nil {
return err
}
@@ -284,7 +282,7 @@ func (s *openshiftImageSource) ensureImageIsResolved() error {
if err != nil {
return err
}
d, err := dockerRef.NewImageSource(s.ctx, s.requestedManifestMIMETypes)
d, err := dockerRef.NewImageSource(s.ctx)
if err != nil {
return err
}
@@ -338,10 +336,7 @@ func (d *openshiftImageDestination) Close() error {
}
func (d *openshiftImageDestination) SupportedManifestMIMETypes() []string {
return []string{
manifest.DockerV2Schema1SignedMediaType,
manifest.DockerV2Schema1MediaType,
}
return d.docker.SupportedManifestMIMETypes()
}
// SupportsSignatures returns an error (to be displayed to the user) if the destination certainly can't store signatures.
@@ -361,6 +356,11 @@ func (d *openshiftImageDestination) AcceptsForeignLayerURLs() bool {
return true
}
// MustMatchRuntimeOS returns true iff the destination can store only images targeted for the current runtime OS. False otherwise.
func (d *openshiftImageDestination) MustMatchRuntimeOS() bool {
return false
}
// PutBlob writes contents of stream and returns data representing the result (with all data filled in).
// inputInfo.Digest can be optionally provided if known; it is not mandatory for the implementation to verify it.
// inputInfo.Size is the expected length of stream, if known.
@@ -383,6 +383,10 @@ func (d *openshiftImageDestination) ReapplyBlob(info types.BlobInfo) (types.Blob
return d.docker.ReapplyBlob(info)
}
// PutManifest writes manifest to the destination.
// FIXME? This should also receive a MIME type if known, to differentiate between schema versions.
// If the destination is in principle available, refuses this manifest type (e.g. it does not recognize the schema),
// but may accept a different manifest type, the returned error must be a ManifestTypeRejectedError.
func (d *openshiftImageDestination) PutManifest(m []byte) error {
manifestDigest, err := manifest.Digest(m)
if err != nil {
@@ -404,7 +408,7 @@ func (d *openshiftImageDestination) PutSignatures(signatures [][]byte) error {
return nil // No need to even read the old state.
}
image, err := d.client.getImage(d.imageStreamImageName)
image, err := d.client.getImage(context.TODO(), d.imageStreamImageName)
if err != nil {
return err
}
@@ -445,7 +449,7 @@ sigExists:
Content: newSig,
}
body, err := json.Marshal(sig)
if err != nil {
return err
}
_, err = d.client.doRequest("POST", "/oapi/v1/imagesignatures", body)
_, err = d.client.doRequest(context.TODO(), "POST", "/oapi/v1/imagesignatures", body)
if err != nil {
return err
}
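The context plumbing added to doRequest follows the standard net/http pattern; a minimal sketch with a placeholder URL and bearer token:

package main

import (
    "context"
    "fmt"
    "io/ioutil"
    "net/http"
    "time"
)

func main() {
    // A cancellable context, as the reworked doRequest now threads through.
    ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
    defer cancel()

    // Placeholder URL and token; the real client derives these from the reference.
    req, err := http.NewRequest("GET", "https://openshift.example.com/oapi/v1/namespaces/ns/imagestreams/is", nil)
    if err != nil {
        panic(err)
    }
    req = req.WithContext(ctx) // the one-line change that makes the call cancellable
    req.Header.Set("Authorization", "Bearer "+"placeholder-token")

    resp, err := http.DefaultClient.Do(req)
    if err != nil {
        fmt.Println("request failed:", err)
        return
    }
    defer resp.Body.Close()
    body, _ := ioutil.ReadAll(resp.Body)
    fmt.Println(resp.Status, len(body), "bytes")
}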

View File

@@ -130,19 +130,17 @@ func (ref openshiftReference) PolicyConfigurationNamespaces() []string {
// NOTE: If any kind of signature verification should happen, build an UnparsedImage from the value returned by NewImageSource,
// verify that UnparsedImage, and convert it into a real Image via image.FromUnparsedImage.
func (ref openshiftReference) NewImage(ctx *types.SystemContext) (types.Image, error) {
src, err := newImageSource(ctx, ref, nil)
src, err := newImageSource(ctx, ref)
if err != nil {
return nil, err
}
return genericImage.FromSource(src)
}
// NewImageSource returns a types.ImageSource for this reference,
// asking the backend to use a manifest from requestedManifestMIMETypes if possible.
// nil requestedManifestMIMETypes means manifest.DefaultRequestedManifestMIMETypes.
// NewImageSource returns a types.ImageSource for this reference.
// The caller must call .Close() on the returned ImageSource.
func (ref openshiftReference) NewImageSource(ctx *types.SystemContext, requestedManifestMIMETypes []string) (types.ImageSource, error) {
return newImageSource(ctx, ref, requestedManifestMIMETypes)
func (ref openshiftReference) NewImageSource(ctx *types.SystemContext) (types.ImageSource, error) {
return newImageSource(ctx, ref)
}
// NewImageDestination returns a types.ImageDestination for this reference.
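A minimal compile-only sketch of the updated call shape, assuming a caller that already holds a parsed types.ImageReference; requestedManifestMIMETypes is gone, and the caller owns (and must Close) the returned source:

package example

import (
    "github.com/containers/image/types"
)

// fetchManifest illustrates the new signature: NewImageSource takes only the
// SystemContext, and the source must be closed by the caller.
func fetchManifest(ctx *types.SystemContext, ref types.ImageReference) ([]byte, string, error) {
    src, err := ref.NewImageSource(ctx)
    if err != nil {
        return nil, "", err
    }
    defer src.Close()
    return src.GetManifest() // manifest bytes plus its MIME type
}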

View File

@@ -0,0 +1,332 @@
// +build !containers_image_ostree_stub
package ostree
import (
"bytes"
"encoding/json"
"fmt"
"io"
"io/ioutil"
"os"
"os/exec"
"path/filepath"
"strconv"
"strings"
"time"
"github.com/containers/image/manifest"
"github.com/containers/image/types"
"github.com/containers/storage/pkg/archive"
"github.com/opencontainers/go-digest"
"github.com/pkg/errors"
"github.com/ostreedev/ostree-go/pkg/otbuiltin"
)
type blobToImport struct {
Size int64
Digest digest.Digest
BlobPath string
}
type descriptor struct {
Size int64 `json:"size"`
Digest digest.Digest `json:"digest"`
}
type manifestSchema struct {
ConfigDescriptor descriptor `json:"config"`
LayersDescriptors []descriptor `json:"layers"`
}
type ostreeImageDestination struct {
ref ostreeReference
manifest string
schema manifestSchema
tmpDirPath string
blobs map[string]*blobToImport
digest digest.Digest
}
// newImageDestination returns an ImageDestination for writing to an existing ostree.
func newImageDestination(ref ostreeReference, tmpDirPath string) (types.ImageDestination, error) {
tmpDirPath = filepath.Join(tmpDirPath, ref.branchName)
if err := ensureDirectoryExists(tmpDirPath); err != nil {
return nil, err
}
return &ostreeImageDestination{ref, "", manifestSchema{}, tmpDirPath, map[string]*blobToImport{}, ""}, nil
}
// Reference returns the reference used to set up this destination. Note that this should directly correspond to user's intent,
// e.g. it should use the public hostname instead of the result of resolving CNAMEs or following redirects.
func (d *ostreeImageDestination) Reference() types.ImageReference {
return d.ref
}
// Close removes resources associated with an initialized ImageDestination, if any.
func (d *ostreeImageDestination) Close() error {
return os.RemoveAll(d.tmpDirPath)
}
func (d *ostreeImageDestination) SupportedManifestMIMETypes() []string {
return []string{
manifest.DockerV2Schema2MediaType,
}
}
// SupportsSignatures returns an error (to be displayed to the user) if the destination certainly can't store signatures.
// Note: It is still possible for PutSignatures to fail if SupportsSignatures returns nil.
func (d *ostreeImageDestination) SupportsSignatures() error {
return nil
}
// ShouldCompressLayers returns true iff it is desirable to compress layer blobs written to this destination.
func (d *ostreeImageDestination) ShouldCompressLayers() bool {
return false
}
// AcceptsForeignLayerURLs returns false iff foreign layers in manifest should be actually
// uploaded to the image destination, true otherwise.
func (d *ostreeImageDestination) AcceptsForeignLayerURLs() bool {
return false
}
// MustMatchRuntimeOS returns true iff the destination can store only images targeted for the current runtime OS. False otherwise.
func (d *ostreeImageDestination) MustMatchRuntimeOS() bool {
return true
}
func (d *ostreeImageDestination) PutBlob(stream io.Reader, inputInfo types.BlobInfo) (types.BlobInfo, error) {
tmpDir, err := ioutil.TempDir(d.tmpDirPath, "blob")
if err != nil {
return types.BlobInfo{}, err
}
blobPath := filepath.Join(tmpDir, "content")
blobFile, err := os.Create(blobPath)
if err != nil {
return types.BlobInfo{}, err
}
defer blobFile.Close()
digester := digest.Canonical.Digester()
tee := io.TeeReader(stream, digester.Hash())
size, err := io.Copy(blobFile, tee)
if err != nil {
return types.BlobInfo{}, err
}
computedDigest := digester.Digest()
if inputInfo.Size != -1 && size != inputInfo.Size {
return types.BlobInfo{}, errors.Errorf("Size mismatch when copying %s, expected %d, got %d", computedDigest, inputInfo.Size, size)
}
if err := blobFile.Sync(); err != nil {
return types.BlobInfo{}, err
}
hash := computedDigest.Hex()
d.blobs[hash] = &blobToImport{Size: size, Digest: computedDigest, BlobPath: blobPath}
return types.BlobInfo{Digest: computedDigest, Size: size}, nil
}
func fixFiles(dir string, usermode bool) error {
entries, err := ioutil.ReadDir(dir)
if err != nil {
return err
}
for _, info := range entries {
fullpath := filepath.Join(dir, info.Name())
if info.Mode()&(os.ModeNamedPipe|os.ModeSocket|os.ModeDevice) != 0 {
if err := os.Remove(fullpath); err != nil {
return err
}
continue
}
if info.IsDir() {
if usermode {
if err := os.Chmod(fullpath, info.Mode()|0700); err != nil {
return err
}
}
err = fixFiles(fullpath, usermode)
if err != nil {
return err
}
} else if usermode && (info.Mode().IsRegular()) {
if err := os.Chmod(fullpath, info.Mode()|0600); err != nil {
return err
}
}
}
return nil
}
func (d *ostreeImageDestination) ostreeCommit(repo *otbuiltin.Repo, branch string, root string, metadata []string) error {
opts := otbuiltin.NewCommitOptions()
opts.AddMetadataString = metadata
opts.Timestamp = time.Now()
// OCI layers have no parent OSTree commit
opts.Parent = "0000000000000000000000000000000000000000000000000000000000000000"
_, err := repo.Commit(root, branch, opts)
return err
}
func (d *ostreeImageDestination) importBlob(repo *otbuiltin.Repo, blob *blobToImport) error {
ostreeBranch := fmt.Sprintf("ociimage/%s", blob.Digest.Hex())
destinationPath := filepath.Join(d.tmpDirPath, blob.Digest.Hex(), "root")
if err := ensureDirectoryExists(destinationPath); err != nil {
return err
}
defer func() {
os.Remove(blob.BlobPath)
os.RemoveAll(destinationPath)
}()
if os.Getuid() == 0 {
if err := archive.UntarPath(blob.BlobPath, destinationPath); err != nil {
return err
}
if err := fixFiles(destinationPath, false); err != nil {
return err
}
} else {
os.MkdirAll(destinationPath, 0755)
if err := exec.Command("tar", "-C", destinationPath, "--no-same-owner", "--no-same-permissions", "--delay-directory-restore", "-xf", blob.BlobPath).Run(); err != nil {
return err
}
if err := fixFiles(destinationPath, true); err != nil {
return err
}
}
return d.ostreeCommit(repo, ostreeBranch, destinationPath, []string{fmt.Sprintf("docker.size=%d", blob.Size)})
}
func (d *ostreeImageDestination) importConfig(blob *blobToImport) error {
ostreeBranch := fmt.Sprintf("ociimage/%s", blob.Digest.Hex())
return exec.Command("ostree", "commit",
"--repo", d.ref.repo,
fmt.Sprintf("--add-metadata-string=docker.size=%d", blob.Size),
"--branch", ostreeBranch, filepath.Dir(blob.BlobPath)).Run()
}
func (d *ostreeImageDestination) HasBlob(info types.BlobInfo) (bool, int64, error) {
branch := fmt.Sprintf("ociimage/%s", info.Digest.Hex())
output, err := exec.Command("ostree", "show", "--repo", d.ref.repo, "--print-metadata-key=docker.size", branch).CombinedOutput()
if err != nil {
if bytes.Contains(output, []byte("not found")) || bytes.Contains(output, []byte("No such")) {
return false, -1, nil
}
return false, -1, err
}
size, err := strconv.ParseInt(strings.Trim(string(output), "'\n"), 10, 64)
if err != nil {
return false, -1, err
}
return true, size, nil
}
func (d *ostreeImageDestination) ReapplyBlob(info types.BlobInfo) (types.BlobInfo, error) {
return info, nil
}
// PutManifest writes manifest to the destination.
// FIXME? This should also receive a MIME type if known, to differentiate between schema versions.
// If the destination is in principle available, refuses this manifest type (e.g. it does not recognize the schema),
// but may accept a different manifest type, the returned error must be a ManifestTypeRejectedError.
func (d *ostreeImageDestination) PutManifest(manifestBlob []byte) error {
d.manifest = string(manifestBlob)
if err := json.Unmarshal(manifestBlob, &d.schema); err != nil {
return err
}
manifestPath := filepath.Join(d.tmpDirPath, d.ref.manifestPath())
if err := ensureParentDirectoryExists(manifestPath); err != nil {
return err
}
digest, err := manifest.Digest(manifestBlob)
if err != nil {
return err
}
d.digest = digest
return ioutil.WriteFile(manifestPath, manifestBlob, 0644)
}
func (d *ostreeImageDestination) PutSignatures(signatures [][]byte) error {
path := filepath.Join(d.tmpDirPath, d.ref.signaturePath(0))
if err := ensureParentDirectoryExists(path); err != nil {
return err
}
for i, sig := range signatures {
signaturePath := filepath.Join(d.tmpDirPath, d.ref.signaturePath(i))
if err := ioutil.WriteFile(signaturePath, sig, 0644); err != nil {
return err
}
}
return nil
}
func (d *ostreeImageDestination) Commit() error {
repo, err := otbuiltin.OpenRepo(d.ref.repo)
if err != nil {
return err
}
_, err = repo.PrepareTransaction()
if err != nil {
return err
}
for _, layer := range d.schema.LayersDescriptors {
hash := layer.Digest.Hex()
blob := d.blobs[hash]
// if the blob is not present in d.blobs then it is already stored in OSTree,
// and we don't need to import it.
if blob == nil {
continue
}
err := d.importBlob(repo, blob)
if err != nil {
return err
}
}
hash := d.schema.ConfigDescriptor.Digest.Hex()
blob := d.blobs[hash]
if blob != nil {
err := d.importConfig(blob)
if err != nil {
return err
}
}
manifestPath := filepath.Join(d.tmpDirPath, "manifest")
metadata := []string{fmt.Sprintf("docker.manifest=%s", string(d.manifest)), fmt.Sprintf("docker.digest=%s", string(d.digest))}
if err := d.ostreeCommit(repo, fmt.Sprintf("ociimage/%s", d.ref.branchName), manifestPath, metadata); err != nil {
return err
}
_, err = repo.CommitTransaction()
return err
}
func ensureDirectoryExists(path string) error {
if _, err := os.Stat(path); err != nil && os.IsNotExist(err) {
if err := os.MkdirAll(path, 0755); err != nil {
return err
}
}
return nil
}
func ensureParentDirectoryExists(path string) error {
return ensureDirectoryExists(filepath.Dir(path))
}
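To see how a layer lands in the repository, here is a small standalone sketch of the branch name and commit metadata that importBlob derives from a blob; the digest and size are placeholders:

package main

import (
    "fmt"

    "github.com/opencontainers/go-digest"
)

func main() {
    // Placeholder layer digest and size.
    d := digest.Digest("sha256:e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855")
    size := int64(1234)

    // importBlob commits each layer under ociimage/<hex> with a docker.size annotation.
    branch := fmt.Sprintf("ociimage/%s", d.Hex())
    metadata := []string{fmt.Sprintf("docker.size=%d", size)}
    fmt.Println(branch, metadata)
}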

View File

@@ -0,0 +1,226 @@
// +build !containers_image_ostree_stub
package ostree
import (
"bytes"
"fmt"
"os"
"path/filepath"
"regexp"
"strings"
"github.com/pkg/errors"
"github.com/containers/image/directory/explicitfilepath"
"github.com/containers/image/docker/reference"
"github.com/containers/image/transports"
"github.com/containers/image/types"
)
const defaultOSTreeRepo = "/ostree/repo"
// Transport is an ImageTransport for ostree paths.
var Transport = ostreeTransport{}
type ostreeTransport struct{}
func (t ostreeTransport) Name() string {
return "ostree"
}
func init() {
transports.Register(Transport)
}
// ValidatePolicyConfigurationScope checks that scope is a valid name for a signature.PolicyTransportScopes keys
// (i.e. a valid PolicyConfigurationIdentity() or PolicyConfigurationNamespaces() return value).
// It is acceptable to allow an invalid value which will never be matched, it can "only" cause user confusion.
// scope passed to this function will not be "", that value is always allowed.
func (t ostreeTransport) ValidatePolicyConfigurationScope(scope string) error {
sep := strings.Index(scope, ":")
if sep < 0 {
return errors.Errorf("Invalid ostree: scope %s: Must include a repo", scope)
}
repo := scope[:sep]
if !strings.HasPrefix(repo, "/") {
return errors.Errorf("Invalid ostree: scope %s: repository must be an absolute path", scope)
}
cleaned := filepath.Clean(repo)
if cleaned != repo {
return errors.Errorf(`Invalid ostree: scope %s: Uses non-canonical path format, perhaps try with path %s`, scope, cleaned)
}
// FIXME? In the namespaces within a repo,
// we could be verifying the various character set and length restrictions
// from docker/distribution/reference.regexp.go, but other than that there
// are few semantically invalid strings.
return nil
}
// ostreeReference is an ImageReference for ostree paths.
type ostreeReference struct {
image string
branchName string
repo string
}
func (t ostreeTransport) ParseReference(ref string) (types.ImageReference, error) {
var repo = ""
var image = ""
s := strings.SplitN(ref, "@/", 2)
if len(s) == 1 {
image, repo = s[0], defaultOSTreeRepo
} else {
image, repo = s[0], "/"+s[1]
}
return NewReference(image, repo)
}
// NewReference returns an OSTree reference for a specified repo and image.
func NewReference(image string, repo string) (types.ImageReference, error) {
// image is not _really_ in a containers/image/docker/reference format;
// as far as the libOSTree ociimage/* namespace is concerned, it is more or
// less an arbitrary string with an implied tag.
// Parse the image using reference.ParseNormalizedNamed so that we can
// check whether the image has a tag specified and add ":latest" if needed
ostreeImage, err := reference.ParseNormalizedNamed(image)
if err != nil {
return nil, err
}
if reference.IsNameOnly(ostreeImage) {
image = image + ":latest"
}
resolved, err := explicitfilepath.ResolvePathToFullyExplicit(repo)
if err != nil {
// With os.IsNotExist(err), the parent directory of repo does not exist either;
// that should ordinarily not happen, but it would be a bit weird to reject
// references which do not specify a repo just because the implicit defaultOSTreeRepo
// does not exist.
if os.IsNotExist(err) && repo == defaultOSTreeRepo {
resolved = repo
} else {
return nil, err
}
}
// This is necessary to prevent directory paths returned by PolicyConfigurationNamespaces
// from being ambiguous with values of PolicyConfigurationIdentity.
if strings.Contains(resolved, ":") {
return nil, errors.Errorf("Invalid OSTreeCI reference %s@%s: path %s contains a colon", image, repo, resolved)
}
return ostreeReference{
image: image,
branchName: encodeOStreeRef(image),
repo: resolved,
}, nil
}
func (ref ostreeReference) Transport() types.ImageTransport {
return Transport
}
// StringWithinTransport returns a string representation of the reference, which MUST be such that
// reference.Transport().ParseReference(reference.StringWithinTransport()) returns an equivalent reference.
// NOTE: The returned string is not promised to be equal to the original input to ParseReference;
// e.g. default attribute values omitted by the user may be filled in in the return value, or vice versa.
// WARNING: Do not use the return value in the UI to describe an image, it does not contain the Transport().Name() prefix.
func (ref ostreeReference) StringWithinTransport() string {
return fmt.Sprintf("%s@%s", ref.image, ref.repo)
}
// DockerReference returns a Docker reference associated with this reference
// (fully explicit, i.e. !reference.IsNameOnly, but reflecting user intent,
// not e.g. after redirect or alias processing), or nil if unknown/not applicable.
func (ref ostreeReference) DockerReference() reference.Named {
return nil
}
func (ref ostreeReference) PolicyConfigurationIdentity() string {
return fmt.Sprintf("%s:%s", ref.repo, ref.image)
}
// PolicyConfigurationNamespaces returns a list of other policy configuration namespaces to search
// for if explicit configuration for PolicyConfigurationIdentity() is not set. The list will be processed
// in order, terminating on first match, and an implicit "" is always checked at the end.
// It is STRONGLY recommended for the first element, if any, to be a prefix of PolicyConfigurationIdentity(),
// and each following element to be a prefix of the element preceding it.
func (ref ostreeReference) PolicyConfigurationNamespaces() []string {
s := strings.SplitN(ref.image, ":", 2)
if len(s) != 2 { // Coverage: Should never happen, NewReference above ensures ref.image has a :tag.
panic(fmt.Sprintf("Internal inconsistency: ref.image value %q does not have a :tag", ref.image))
}
name := s[0]
res := []string{}
for {
res = append(res, fmt.Sprintf("%s:%s", ref.repo, name))
lastSlash := strings.LastIndex(name, "/")
if lastSlash == -1 {
break
}
name = name[:lastSlash]
}
return res
}
// NewImage returns a types.Image for this reference, possibly specialized for this ImageTransport.
// The caller must call .Close() on the returned Image.
// NOTE: If any kind of signature verification should happen, build an UnparsedImage from the value returned by NewImageSource,
// verify that UnparsedImage, and convert it into a real Image via image.FromUnparsedImage.
func (ref ostreeReference) NewImage(ctx *types.SystemContext) (types.Image, error) {
return nil, errors.New("Reading ostree: images is currently not supported")
}
// NewImageSource returns a types.ImageSource for this reference.
// The caller must call .Close() on the returned ImageSource.
func (ref ostreeReference) NewImageSource(ctx *types.SystemContext) (types.ImageSource, error) {
return nil, errors.New("Reading ostree: images is currently not supported")
}
// NewImageDestination returns a types.ImageDestination for this reference.
// The caller must call .Close() on the returned ImageDestination.
func (ref ostreeReference) NewImageDestination(ctx *types.SystemContext) (types.ImageDestination, error) {
var tmpDir string
if ctx == nil || ctx.OSTreeTmpDirPath == "" {
tmpDir = os.TempDir()
} else {
tmpDir = ctx.OSTreeTmpDirPath
}
return newImageDestination(ref, tmpDir)
}
// DeleteImage deletes the named image from the registry, if supported.
func (ref ostreeReference) DeleteImage(ctx *types.SystemContext) error {
return errors.Errorf("Deleting images not implemented for ostree: images")
}
var ostreeRefRegexp = regexp.MustCompile(`^[A-Za-z0-9.-]$`)
func encodeOStreeRef(in string) string {
var buffer bytes.Buffer
for i := range in {
sub := in[i : i+1]
if ostreeRefRegexp.MatchString(sub) {
buffer.WriteString(sub)
} else {
buffer.WriteString(fmt.Sprintf("_%02X", sub[0]))
}
}
return buffer.String()
}
// manifestPath returns a path for the manifest within an ostree repository using our conventions.
func (ref ostreeReference) manifestPath() string {
return filepath.Join("manifest", "manifest.json")
}
// signaturePath returns a path for a signature within an ostree repository using our conventions.
func (ref ostreeReference) signaturePath(index int) string {
return filepath.Join("manifest", fmt.Sprintf("signature-%d", index+1))
}
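encodeOStreeRef is easiest to understand from its output; here is a standalone copy of the escaping loop above (renamed encode, otherwise unchanged) applied to a typical image name:

package main

import (
    "bytes"
    "fmt"
    "regexp"
)

var ostreeRefRegexp = regexp.MustCompile(`^[A-Za-z0-9.-]$`)

// encode escapes anything outside [A-Za-z0-9.-] as _XX with the byte's hex value.
func encode(in string) string {
    var buf bytes.Buffer
    for i := range in {
        sub := in[i : i+1]
        if ostreeRefRegexp.MatchString(sub) {
            buf.WriteString(sub)
        } else {
            buf.WriteString(fmt.Sprintf("_%02X", sub[0]))
        }
    }
    return buf.String()
}

func main() {
    // Slashes and the colon get escaped:
    fmt.Println(encode("docker.io/library/busybox:latest"))
    // prints docker.io_2Flibrary_2Fbusybox_3Alatest
}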

View File

@@ -8,7 +8,7 @@ import (
"github.com/pkg/errors"
"github.com/Sirupsen/logrus"
"github.com/sirupsen/logrus"
)
// DecompressorFunc returns the decompressed stream, given a compressed stream.

View File

@@ -0,0 +1,295 @@
package config
import (
"encoding/base64"
"encoding/json"
"fmt"
"io/ioutil"
"os"
"path/filepath"
"strconv"
"strings"
"github.com/containers/image/types"
helperclient "github.com/docker/docker-credential-helpers/client"
"github.com/docker/docker-credential-helpers/credentials"
"github.com/docker/docker/pkg/homedir"
"github.com/pkg/errors"
)
type dockerAuthConfig struct {
Auth string `json:"auth,omitempty"`
}
type dockerConfigFile struct {
AuthConfigs map[string]dockerAuthConfig `json:"auths"`
CredHelpers map[string]string `json:"credHelpers,omitempty"`
}
const (
defaultPath = "/run/user"
authCfg = "containers"
authCfgFileName = "auth.json"
dockerCfg = ".docker"
dockerCfgFileName = "config.json"
dockerLegacyCfg = ".dockercfg"
)
var (
// ErrNotLoggedIn is returned for users not logged into a registry
// that they are trying to logout of
ErrNotLoggedIn = errors.New("not logged in")
)
// SetAuthentication stores the username and password in the auth.json file
func SetAuthentication(ctx *types.SystemContext, registry, username, password string) error {
return modifyJSON(ctx, func(auths *dockerConfigFile) (bool, error) {
if ch, exists := auths.CredHelpers[registry]; exists {
return false, setAuthToCredHelper(ch, registry, username, password)
}
creds := base64.StdEncoding.EncodeToString([]byte(username + ":" + password))
newCreds := dockerAuthConfig{Auth: creds}
auths.AuthConfigs[registry] = newCreds
return true, nil
})
}
// GetAuthentication returns the registry credentials stored in
// either the auth.json file or .docker/config.json
// If an entry is not found, empty strings are returned for the username and password
func GetAuthentication(ctx *types.SystemContext, registry string) (string, string, error) {
if ctx != nil && ctx.DockerAuthConfig != nil {
return ctx.DockerAuthConfig.Username, ctx.DockerAuthConfig.Password, nil
}
dockerLegacyPath := filepath.Join(homedir.Get(), dockerLegacyCfg)
paths := [3]string{getPathToAuth(ctx), filepath.Join(homedir.Get(), dockerCfg, dockerCfgFileName), dockerLegacyPath}
for _, path := range paths {
legacyFormat := path == dockerLegacyPath
username, password, err := findAuthentication(registry, path, legacyFormat)
if err != nil {
return "", "", err
}
if username != "" && password != "" {
return username, password, nil
}
}
return "", "", nil
}
// GetUserLoggedIn returns the username logged in to the registry from either
// auth.json or XDG_RUNTIME_DIR
// Used to tell the user if someone is logged in to the registry when logging in
func GetUserLoggedIn(ctx *types.SystemContext, registry string) string {
path := getPathToAuth(ctx)
username, _, _ := findAuthentication(registry, path, false)
if username != "" {
return username
}
return ""
}
// RemoveAuthentication deletes the credentials stored in auth.json
func RemoveAuthentication(ctx *types.SystemContext, registry string) error {
return modifyJSON(ctx, func(auths *dockerConfigFile) (bool, error) {
// First try cred helpers.
if ch, exists := auths.CredHelpers[registry]; exists {
return false, deleteAuthFromCredHelper(ch, registry)
}
if _, ok := auths.AuthConfigs[registry]; ok {
delete(auths.AuthConfigs, registry)
} else if _, ok := auths.AuthConfigs[normalizeRegistry(registry)]; ok {
delete(auths.AuthConfigs, normalizeRegistry(registry))
} else {
return false, ErrNotLoggedIn
}
return true, nil
})
}
// RemoveAllAuthentication deletes all the credentials stored in auth.json
func RemoveAllAuthentication(ctx *types.SystemContext) error {
return modifyJSON(ctx, func(auths *dockerConfigFile) (bool, error) {
auths.CredHelpers = make(map[string]string)
auths.AuthConfigs = make(map[string]dockerAuthConfig)
return true, nil
})
}
// getPathToAuth gets the path of the auth.json file
// The path can be overridden by the user if the overwrite-path flag is set
// If the flag is not set and XDG_RUNTIME_DIR is set, the auth.json file is saved in XDG_RUNTIME_DIR/containers
// Otherwise, the auth.json file is stored in /run/user/UID/containers
func getPathToAuth(ctx *types.SystemContext) string {
if ctx != nil {
if ctx.AuthFilePath != "" {
return ctx.AuthFilePath
}
if ctx.RootForImplicitAbsolutePaths != "" {
return filepath.Join(ctx.RootForImplicitAbsolutePaths, defaultPath, strconv.Itoa(os.Getuid()), authCfg, authCfgFileName)
}
}
runtimeDir := os.Getenv("XDG_RUNTIME_DIR")
if runtimeDir == "" {
runtimeDir = filepath.Join(defaultPath, strconv.Itoa(os.Getuid()))
}
return filepath.Join(runtimeDir, authCfg, authCfgFileName)
}
// readJSONFile unmarshals the authentications stored in the auth.json file and returns it
// or returns an empty dockerConfigFile data structure if auth.json does not exist
// If the file exists but is empty, readJSONFile returns an error
func readJSONFile(path string, legacyFormat bool) (dockerConfigFile, error) {
var auths dockerConfigFile
raw, err := ioutil.ReadFile(path)
if err != nil {
if os.IsNotExist(err) {
auths.AuthConfigs = map[string]dockerAuthConfig{}
return auths, nil
}
return dockerConfigFile{}, err
}
if legacyFormat {
if err = json.Unmarshal(raw, &auths.AuthConfigs); err != nil {
return dockerConfigFile{}, errors.Wrapf(err, "error unmarshaling JSON at %q", path)
}
return auths, nil
}
if err = json.Unmarshal(raw, &auths); err != nil {
return dockerConfigFile{}, errors.Wrapf(err, "error unmarshaling JSON at %q", path)
}
return auths, nil
}
// modifyJSON writes to auth.json if the dockerConfigFile has been updated
func modifyJSON(ctx *types.SystemContext, editor func(auths *dockerConfigFile) (bool, error)) error {
path := getPathToAuth(ctx)
dir := filepath.Dir(path)
if _, err := os.Stat(dir); os.IsNotExist(err) {
if err = os.Mkdir(dir, 0700); err != nil {
return errors.Wrapf(err, "error creating directory %q", dir)
}
}
auths, err := readJSONFile(path, false)
if err != nil {
return errors.Wrapf(err, "error reading JSON file %q", path)
}
updated, err := editor(&auths)
if err != nil {
return errors.Wrapf(err, "error updating %q", path)
}
if updated {
newData, err := json.MarshalIndent(auths, "", "\t")
if err != nil {
return errors.Wrapf(err, "error marshaling JSON %q", path)
}
if err = ioutil.WriteFile(path, newData, 0600); err != nil {
return errors.Wrapf(err, "error writing to file %q", path)
}
}
return nil
}
func getAuthFromCredHelper(credHelper, registry string) (string, string, error) {
helperName := fmt.Sprintf("docker-credential-%s", credHelper)
p := helperclient.NewShellProgramFunc(helperName)
creds, err := helperclient.Get(p, registry)
if err != nil {
return "", "", err
}
return creds.Username, creds.Secret, nil
}
func setAuthToCredHelper(credHelper, registry, username, password string) error {
helperName := fmt.Sprintf("docker-credential-%s", credHelper)
p := helperclient.NewShellProgramFunc(helperName)
creds := &credentials.Credentials{
ServerURL: registry,
Username: username,
Secret: password,
}
return helperclient.Store(p, creds)
}
func deleteAuthFromCredHelper(credHelper, registry string) error {
helperName := fmt.Sprintf("docker-credential-%s", credHelper)
p := helperclient.NewShellProgramFunc(helperName)
return helperclient.Erase(p, registry)
}
// findAuthentication looks for auth of registry in path
func findAuthentication(registry, path string, legacyFormat bool) (string, string, error) {
auths, err := readJSONFile(path, legacyFormat)
if err != nil {
return "", "", errors.Wrapf(err, "error reading JSON file %q", path)
}
// First try cred helpers. They should always be normalized.
if ch, exists := auths.CredHelpers[registry]; exists {
return getAuthFromCredHelper(ch, registry)
}
// I'm feeling lucky
if val, exists := auths.AuthConfigs[registry]; exists {
return decodeDockerAuth(val.Auth)
}
// bad luck; let's normalize the entries first
registry = normalizeRegistry(registry)
normalizedAuths := map[string]dockerAuthConfig{}
for k, v := range auths.AuthConfigs {
normalizedAuths[normalizeRegistry(k)] = v
}
if val, exists := normalizedAuths[registry]; exists {
return decodeDockerAuth(val.Auth)
}
return "", "", nil
}
func decodeDockerAuth(s string) (string, string, error) {
decoded, err := base64.StdEncoding.DecodeString(s)
if err != nil {
return "", "", err
}
parts := strings.SplitN(string(decoded), ":", 2)
if len(parts) != 2 {
// if it's invalid just skip, as docker does
return "", "", nil
}
user := parts[0]
password := strings.Trim(parts[1], "\x00")
return user, password, nil
}
// convertToHostname converts a registry URL with an http:// or https:// prefix
// to just a hostname.
// Copied from github.com/docker/docker/registry/auth.go
func convertToHostname(url string) string {
stripped := url
if strings.HasPrefix(url, "http://") {
stripped = strings.TrimPrefix(url, "http://")
} else if strings.HasPrefix(url, "https://") {
stripped = strings.TrimPrefix(url, "https://")
}
nameParts := strings.SplitN(stripped, "/", 2)
return nameParts[0]
}
func normalizeRegistry(registry string) string {
normalized := convertToHostname(registry)
switch normalized {
case "registry-1.docker.io", "docker.io":
return "index.docker.io"
}
return normalized
}
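
findAuthentication above tries three sources in order: a credential helper registered under the exact key, an exact match in AuthConfigs, and finally the AuthConfigs keys normalized down to bare hostnames. A small self-contained sketch of that fallback order using plain maps; lookup, hostname, and the sample registry are illustrative:

package main

import (
    "encoding/base64"
    "fmt"
    "strings"
)

// lookup mirrors findAuthentication's fallback order: an exact
// credential-helper entry wins, then an exact auths key, then the
// auths keys normalized to bare hostnames.
func lookup(registry string, credHelpers, auths map[string]string) (string, bool) {
    if h, ok := credHelpers[registry]; ok {
        return "helper:" + h, true
    }
    if v, ok := auths[registry]; ok {
        return v, true
    }
    normalized := map[string]string{}
    for k, v := range auths {
        normalized[hostname(k)] = v
    }
    v, ok := normalized[hostname(registry)]
    return v, ok
}

// hostname mirrors convertToHostname: strip the scheme and any path.
func hostname(url string) string {
    s := strings.TrimPrefix(strings.TrimPrefix(url, "https://"), "http://")
    return strings.SplitN(s, "/", 2)[0]
}

func main() {
    auths := map[string]string{
        "https://registry.example.com/v1/": base64.StdEncoding.EncodeToString([]byte("alice:secret")),
    }
    // The normalized pass finds the scheme-prefixed key.
    v, ok := lookup("registry.example.com", nil, auths)
    fmt.Println(v, ok)
}
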


@@ -0,0 +1,102 @@
package tlsclientconfig
import (
"crypto/tls"
"io/ioutil"
"net"
"net/http"
"os"
"path/filepath"
"strings"
"time"
"github.com/docker/go-connections/sockets"
"github.com/docker/go-connections/tlsconfig"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
)
// SetupCertificates opens all .crt, .cert, and .key files in dir, appending CA certificates and loading client certificate/key pairs into tlsc as appropriate
func SetupCertificates(dir string, tlsc *tls.Config) error {
logrus.Debugf("Looking for TLS certificates and private keys in %s", dir)
fs, err := ioutil.ReadDir(dir)
if err != nil {
if os.IsNotExist(err) {
return nil
}
if os.IsPermission(err) {
logrus.Debugf("Skipping scan of %s due to permission error: %v", dir, err)
return nil
}
return err
}
for _, f := range fs {
fullPath := filepath.Join(dir, f.Name())
if strings.HasSuffix(f.Name(), ".crt") {
systemPool, err := tlsconfig.SystemCertPool()
if err != nil {
return errors.Wrap(err, "unable to get system cert pool")
}
tlsc.RootCAs = systemPool
logrus.Debugf(" crt: %s", fullPath)
data, err := ioutil.ReadFile(fullPath)
if err != nil {
return err
}
tlsc.RootCAs.AppendCertsFromPEM(data)
}
if strings.HasSuffix(f.Name(), ".cert") {
certName := f.Name()
keyName := certName[:len(certName)-5] + ".key"
logrus.Debugf(" cert: %s", fullPath)
if !hasFile(fs, keyName) {
return errors.Errorf("missing key %s for client certificate %s. Note that CA certificates should use the extension .crt", keyName, certName)
}
cert, err := tls.LoadX509KeyPair(filepath.Join(dir, certName), filepath.Join(dir, keyName))
if err != nil {
return err
}
tlsc.Certificates = append(tlsc.Certificates, cert)
}
if strings.HasSuffix(f.Name(), ".key") {
keyName := f.Name()
certName := keyName[:len(keyName)-4] + ".cert"
logrus.Debugf(" key: %s", fullPath)
if !hasFile(fs, certName) {
return errors.Errorf("missing client certificate %s for key %s", certName, keyName)
}
}
}
return nil
}
func hasFile(files []os.FileInfo, name string) bool {
for _, f := range files {
if f.Name() == name {
return true
}
}
return false
}
// NewTransport creates a default transport
func NewTransport() *http.Transport {
direct := &net.Dialer{
Timeout: 30 * time.Second,
KeepAlive: 30 * time.Second,
DualStack: true,
}
tr := &http.Transport{
Proxy: http.ProxyFromEnvironment,
Dial: direct.Dial,
TLSHandshakeTimeout: 10 * time.Second,
// TODO(dmcgowan): Call close idle connections when complete and use keep alive
DisableKeepAlives: true,
}
proxyDialer, err := sockets.DialerFromEnvironment(direct)
if err == nil {
tr.Dial = proxyDialer.Dial
}
return tr
}
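
A caller would typically point SetupCertificates at a docker-style per-registry certificate directory and mount the resulting tls.Config on the transport. A usage sketch, assuming the package is vendored at github.com/containers/image/pkg/tlsclientconfig; the certs.d path and registry host are placeholders:

package main

import (
    "crypto/tls"
    "log"
    "net/http"

    // Assumed vendored import path for the package shown above.
    "github.com/containers/image/pkg/tlsclientconfig"
)

func main() {
    // Collect CA certs (*.crt) and client pairs (*.cert/*.key) from a
    // docker-style per-registry directory; a missing directory is skipped.
    tlsc := &tls.Config{}
    if err := tlsclientconfig.SetupCertificates("/etc/docker/certs.d/registry.example.com", tlsc); err != nil {
        log.Fatal(err)
    }
    tr := tlsclientconfig.NewTransport()
    tr.TLSClientConfig = tlsc
    client := &http.Client{Transport: tr}
    if _, err := client.Get("https://registry.example.com/v2/"); err != nil {
        log.Println(err)
    }
}
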


@@ -132,11 +132,17 @@ func (m *openpgpSigningMechanism) Verify(unverifiedSignature []byte) (contents [
if md.SignedBy == nil {
return nil, "", InvalidSignatureError{msg: fmt.Sprintf("Invalid GPG signature: %#v", md.Signature)}
}
if md.Signature.SigLifetimeSecs != nil {
expiry := md.Signature.CreationTime.Add(time.Duration(*md.Signature.SigLifetimeSecs) * time.Second)
if time.Now().After(expiry) {
return nil, "", InvalidSignatureError{msg: fmt.Sprintf("Signature expired on %s", expiry)}
if md.Signature != nil {
if md.Signature.SigLifetimeSecs != nil {
expiry := md.Signature.CreationTime.Add(time.Duration(*md.Signature.SigLifetimeSecs) * time.Second)
if time.Now().After(expiry) {
return nil, "", InvalidSignatureError{msg: fmt.Sprintf("Signature expired on %s", expiry)}
}
}
} else if md.SignatureV3 == nil {
// Coverage: If md.SignedBy != nil, the final md.UnverifiedBody.Read() either sets one of md.Signature or md.SignatureV3,
// or sets md.SignatureError.
return nil, "", InvalidSignatureError{msg: "Unexpected openpgp.MessageDetails: neither Signature nor SignatureV3 is set"}
}
// Uppercase the fingerprint to be compatible with gpgme
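
The guarded block computes the expiry as the signature's creation time plus SigLifetimeSecs seconds. A tiny self-contained sketch of that arithmetic, with illustrative values standing in for the openpgp fields:

package main

import (
    "fmt"
    "time"
)

func main() {
    // A signature created at `creation` with SigLifetimeSecs = 86400
    // expires one day later; after that, verification must fail.
    creation := time.Date(2017, 11, 1, 0, 0, 0, 0, time.UTC)
    lifetime := uint32(86400)
    expiry := creation.Add(time.Duration(lifetime) * time.Second)
    fmt.Println("expired:", time.Now().After(expiry))
}
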


@@ -70,7 +70,11 @@ func NewPolicyFromFile(fileName string) (*Policy, error) {
if err != nil {
return nil, err
}
return NewPolicyFromBytes(contents)
policy, err := NewPolicyFromBytes(contents)
if err != nil {
return nil, errors.Wrapf(err, "invalid policy in %q", fileName)
}
return policy, nil
}
// NewPolicyFromBytes returns a policy parsed from the specified blob.
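
With the wrapping added above, a malformed policy file is now reported together with its file name rather than as a bare JSON error. A short usage sketch of the surrounding API; /etc/containers/policy.json is the conventional default path, assumed here:

package main

import (
    "log"

    "github.com/containers/image/signature"
)

func main() {
    // A syntax error now surfaces as `invalid policy in
    // "/etc/containers/policy.json": ...` instead of a bare JSON error.
    policy, err := signature.NewPolicyFromFile("/etc/containers/policy.json")
    if err != nil {
        log.Fatal(err)
    }
    pc, err := signature.NewPolicyContext(policy)
    if err != nil {
        log.Fatal(err)
    }
    defer pc.Destroy()
}
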


@@ -6,9 +6,11 @@
package signature
import (
"github.com/Sirupsen/logrus"
"context"
"github.com/containers/image/types"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
)
// PolicyRequirementError is an explanatory text for rejecting a signature or an image.
@@ -188,7 +190,8 @@ func (pc *PolicyContext) GetSignaturesWithAcceptedAuthor(image types.UnparsedIma
reqs := pc.requirementsForImageRef(image.Reference())
// FIXME: rename Signatures to UnverifiedSignatures
unverifiedSignatures, err := image.Signatures()
// FIXME: pass context.Context
unverifiedSignatures, err := image.Signatures(context.TODO())
if err != nil {
return nil, err
}


@@ -3,8 +3,8 @@
package signature
import (
"github.com/Sirupsen/logrus"
"github.com/containers/image/types"
"github.com/sirupsen/logrus"
)
func (pr *prSignedBaseLayer) isSignatureAuthorAccepted(image types.UnparsedImage, sig []byte) (signatureAcceptanceResult, *Signature, error) {


@@ -3,6 +3,7 @@
package signature
import (
"context"
"fmt"
"io/ioutil"
"strings"
@@ -90,7 +91,8 @@ func (pr *prSignedBy) isSignatureAuthorAccepted(image types.UnparsedImage, sig [
}
func (pr *prSignedBy) isRunningImageAllowed(image types.UnparsedImage) (bool, error) {
sigs, err := image.Signatures()
// FIXME: pass context.Context
sigs, err := image.Signatures(context.TODO())
if err != nil {
return false, err
}


@@ -1,5 +1,7 @@
// Note: Consider the API unstable until the code supports at least three different image formats or transports.
// NOTE: Keep this in sync with docs/atomic-signature.md and docs/atomic-signature-embedded.json!
package signature
import (
@@ -178,13 +180,9 @@ func (s *untrustedSignature) strictUnmarshalJSON(data []byte) error {
}
s.UntrustedDockerManifestDigest = digest.Digest(digestString)
if err := paranoidUnmarshalJSONObjectExactFields(identity, map[string]interface{}{
return paranoidUnmarshalJSONObjectExactFields(identity, map[string]interface{}{
"docker-reference": &s.UntrustedDockerReference,
}); err != nil {
return err
}
return nil
})
}
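
A rough standalone analogue of this exact-fields discipline is encoding/json's DisallowUnknownFields; the vendored paranoidUnmarshalJSONObjectExactFields is stricter still, since it also requires every listed field to be present:

package main

import (
    "bytes"
    "encoding/json"
    "fmt"
)

type identity struct {
    DockerReference string `json:"docker-reference"`
}

func main() {
    // Reject unknown keys, one property of the paranoid unmarshaler.
    dec := json.NewDecoder(bytes.NewReader([]byte(`{"docker-reference":"busybox","extra":1}`)))
    dec.DisallowUnknownFields()
    var id identity
    fmt.Println(dec.Decode(&id)) // json: unknown field "extra"
}
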
// Sign formats the signature and returns a blob signed using mech and keyIdentity


@@ -1,7 +1,10 @@
// +build !containers_image_storage_stub
package storage
import (
"bytes"
"context"
"encoding/json"
"io"
"io/ioutil"
@@ -9,14 +12,14 @@ import (
"github.com/pkg/errors"
"github.com/Sirupsen/logrus"
"github.com/containers/image/image"
"github.com/containers/image/manifest"
"github.com/containers/image/types"
"github.com/containers/storage"
"github.com/containers/storage/pkg/archive"
"github.com/containers/storage/pkg/ioutils"
"github.com/containers/storage/storage"
ddigest "github.com/opencontainers/go-digest"
"github.com/sirupsen/logrus"
)
var (
@@ -71,14 +74,9 @@ type storageImage struct {
// newImageSource sets us up to read out an image, which needs to already exist.
func newImageSource(imageRef storageReference) (*storageImageSource, error) {
id := imageRef.resolveID()
if id == "" {
logrus.Errorf("no image matching reference %q found", imageRef.StringWithinTransport())
return nil, ErrNoSuchImage
}
img, err := imageRef.transport.store.GetImage(id)
img, err := imageRef.resolveImage()
if err != nil {
return nil, errors.Wrapf(err, "error reading image %q", id)
return nil, err
}
image := &storageImageSource{
imageRef: imageRef,
@@ -143,9 +141,9 @@ func (s *storageImageDestination) putBlob(stream io.Reader, blobinfo types.BlobI
Size: -1,
}
// Try to read an initial snippet of the blob.
header := make([]byte, 10240)
n, err := stream.Read(header)
if err != nil && err != io.EOF {
buf := [archive.HeaderSize]byte{}
n, err := io.ReadAtLeast(stream, buf[:], len(buf))
if err != nil && err != io.EOF && err != io.ErrUnexpectedEOF {
return errorBlobInfo, err
}
// Set up to read the whole blob (the initial snippet, plus the rest)
@@ -159,9 +157,9 @@ func (s *storageImageDestination) putBlob(stream io.Reader, blobinfo types.BlobI
}
hash := ""
counter := ioutils.NewWriteCounter(hasher.Hash())
defragmented := io.MultiReader(bytes.NewBuffer(header[:n]), stream)
defragmented := io.MultiReader(bytes.NewBuffer(buf[:n]), stream)
multi := io.TeeReader(defragmented, counter)
if (n > 0) && archive.IsArchive(header[:n]) {
if (n > 0) && archive.IsArchive(buf[:n]) {
// It's a filesystem layer. If it's not the first one in the
// image, we assume that the most recently added layer is its
// parent.
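
The switch from a single stream.Read to io.ReadAtLeast above matters because one Read call may legally return fewer bytes than requested, leaving too short a snippet for archive.IsArchive to recognize a tar stream. A minimal standalone sketch of the peek-and-reassemble pattern; 512 stands in for archive.HeaderSize, and the input string is illustrative:

package main

import (
    "bytes"
    "fmt"
    "io"
    "io/ioutil"
    "strings"
)

const headerSize = 512 // stand-in for archive.HeaderSize, one tar block

func main() {
    stream := strings.NewReader("definitely not a tar archive")

    // Peek at up to headerSize bytes. ReadAtLeast keeps reading across
    // short reads; a short snippet could make a real archive look like
    // an opaque blob.
    buf := make([]byte, headerSize)
    n, err := io.ReadAtLeast(stream, buf, len(buf))
    if err != nil && err != io.EOF && err != io.ErrUnexpectedEOF {
        panic(err)
    }

    // Stitch the consumed snippet back in front of the remaining stream,
    // so downstream consumers see the blob from its first byte.
    defragmented := io.MultiReader(bytes.NewBuffer(buf[:n]), stream)
    all, _ := ioutil.ReadAll(defragmented)
    fmt.Printf("peeked %d bytes, reassembled %d bytes\n", n, len(all))
}
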
@@ -179,11 +177,11 @@ func (s *storageImageDestination) putBlob(stream io.Reader, blobinfo types.BlobI
}
// Attempt to create the identified layer and import its contents.
layer, uncompressedSize, err := s.imageRef.transport.store.PutLayer(id, parentLayer, nil, "", true, multi)
if err != nil && err != storage.ErrDuplicateID {
if err != nil && errors.Cause(err) != storage.ErrDuplicateID {
logrus.Debugf("error importing layer blob %q as %q: %v", blobinfo.Digest, id, err)
return errorBlobInfo, err
}
if err == storage.ErrDuplicateID {
if errors.Cause(err) == storage.ErrDuplicateID {
// We specified an ID, and there's already a layer with
// the same ID. Drain the input so that we can look at
// its length and digest.
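
The errors.Cause comparisons above are needed because containers/storage may return its sentinel error wrapped, in which case a direct equality test silently misses it. A small sketch of that failure mode; errDuplicateID is an illustrative stand-in for storage.ErrDuplicateID:

package main

import (
    "fmt"

    "github.com/pkg/errors"
)

var errDuplicateID = errors.New("that ID is already in use")

func main() {
    // A wrapped sentinel no longer compares equal directly, but
    // errors.Cause unwraps back to the original value.
    err := errors.Wrap(errDuplicateID, "error creating layer")
    fmt.Println(err == errDuplicateID)               // false
    fmt.Println(errors.Cause(err) == errDuplicateID) // true
}
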
@@ -296,7 +294,7 @@ func (s *storageImageDestination) PutBlob(stream io.Reader, blobinfo types.BlobI
// it returns a non-nil error only on an unexpected failure.
func (s *storageImageDestination) HasBlob(blobinfo types.BlobInfo) (bool, int64, error) {
if blobinfo.Digest == "" {
return false, -1, errors.Errorf(`"Can not check for a blob with unknown digest`)
return false, -1, errors.Errorf(`Can not check for a blob with unknown digest`)
}
for _, blob := range s.BlobList {
if blob.Digest == blobinfo.Digest {
@@ -312,7 +310,7 @@ func (s *storageImageDestination) ReapplyBlob(blobinfo types.BlobInfo) (types.Bl
return types.BlobInfo{}, err
}
if layerList, ok := s.Layers[blobinfo.Digest]; !ok || len(layerList) < 1 {
b, err := s.imageRef.transport.store.GetImageBigData(s.ID, blobinfo.Digest.String())
b, err := s.imageRef.transport.store.ImageBigData(s.ID, blobinfo.Digest.String())
if err != nil {
return types.BlobInfo{}, err
}
@@ -336,21 +334,37 @@ func (s *storageImageDestination) Commit() error {
}
img, err := s.imageRef.transport.store.CreateImage(s.ID, nil, lastLayer, "", nil)
if err != nil {
logrus.Debugf("error creating image: %q", err)
return err
if errors.Cause(err) != storage.ErrDuplicateID {
logrus.Debugf("error creating image: %q", err)
return errors.Wrapf(err, "error creating image %q", s.ID)
}
img, err = s.imageRef.transport.store.Image(s.ID)
if err != nil {
return errors.Wrapf(err, "error reading image %q", s.ID)
}
if img.TopLayer != lastLayer {
logrus.Debugf("error creating image: image with ID %q exists, but uses different layers", s.ID)
return errors.Wrapf(storage.ErrDuplicateID, "image with ID %q already exists, but uses a different top layer", s.ID)
}
logrus.Debugf("reusing image ID %q", img.ID)
} else {
logrus.Debugf("created new image ID %q", img.ID)
}
logrus.Debugf("created new image ID %q", img.ID)
s.ID = img.ID
names := img.Names
if s.Tag != "" {
// We have a name to set, so move the name to this image.
if err := s.imageRef.transport.store.SetNames(img.ID, []string{s.Tag}); err != nil {
names = append(names, s.Tag)
}
// We have names to set, so move those names to this image.
if len(names) > 0 {
if err := s.imageRef.transport.store.SetNames(img.ID, names); err != nil {
if _, err2 := s.imageRef.transport.store.DeleteImage(img.ID, true); err2 != nil {
logrus.Debugf("error deleting incomplete image %q: %v", img.ID, err2)
}
logrus.Debugf("error setting names on image %q: %v", img.ID, err)
return err
}
logrus.Debugf("set name of image %q to %q", img.ID, s.Tag)
logrus.Debugf("set names of image %q to %v", img.ID, names)
}
// Save the data blobs to disk, and drop their contents from memory.
keys := []ddigest.Digest{}
@@ -405,10 +419,21 @@ func (s *storageImageDestination) Commit() error {
return nil
}
func (s *storageImageDestination) SupportedManifestMIMETypes() []string {
return nil
var manifestMIMETypes = []string{
// TODO(runcom): we'll add OCI as part of another PR here
manifest.DockerV2Schema2MediaType,
manifest.DockerV2Schema1SignedMediaType,
manifest.DockerV2Schema1MediaType,
}
func (s *storageImageDestination) SupportedManifestMIMETypes() []string {
return manifestMIMETypes
}
// PutManifest writes manifest to the destination.
// FIXME? This should also receive a MIME type if known, to differentiate between schema versions.
// If the destination is in principle available but refuses this manifest type (e.g. it does not recognize the schema)
// while possibly accepting a different manifest type, the returned error must be a ManifestTypeRejectedError.
func (s *storageImageDestination) PutManifest(manifest []byte) error {
s.Manifest = make([]byte, len(manifest))
copy(s.Manifest, manifest)
@@ -427,6 +452,11 @@ func (s *storageImageDestination) AcceptsForeignLayerURLs() bool {
return false
}
// MustMatchRuntimeOS returns true iff the destination can store only images targeted for the current runtime OS. False otherwise.
func (s *storageImageDestination) MustMatchRuntimeOS() bool {
return true
}
func (s *storageImageDestination) PutSignatures(signatures [][]byte) error {
sizes := []int{}
sigblob := []byte{}
@@ -453,7 +483,7 @@ func (s *storageImageSource) getBlobAndLayerID(info types.BlobInfo) (rc io.ReadC
return nil, -1, "", err
}
if layerList, ok := s.Layers[info.Digest]; !ok || len(layerList) < 1 {
b, err := s.imageRef.transport.store.GetImageBigData(s.ID, info.Digest.String())
b, err := s.imageRef.transport.store.ImageBigData(s.ID, info.Digest.String())
if err != nil {
return nil, -1, "", err
}
@@ -477,7 +507,7 @@ func (s *storageImageSource) getBlobAndLayerID(info types.BlobInfo) (rc io.ReadC
}
func diffLayer(store storage.Store, layerID string) (rc io.ReadCloser, n int64, err error) {
layer, err := store.GetLayer(layerID)
layer, err := store.Layer(layerID)
if err != nil {
return nil, -1, err
}
@@ -494,7 +524,7 @@ func diffLayer(store storage.Store, layerID string) (rc io.ReadCloser, n int64,
} else {
n = layerMeta.CompressedSize
}
diff, err := store.Diff("", layer.ID)
diff, err := store.Diff("", layer.ID, nil)
if err != nil {
return nil, -1, err
}
@@ -502,7 +532,7 @@ func diffLayer(store storage.Store, layerID string) (rc io.ReadCloser, n int64,
}
func (s *storageImageSource) GetManifest() (manifestBlob []byte, MIMEType string, err error) {
manifestBlob, err = s.imageRef.transport.store.GetImageBigData(s.ID, "manifest")
manifestBlob, err = s.imageRef.transport.store.ImageBigData(s.ID, "manifest")
return manifestBlob, manifest.GuessMIMEType(manifestBlob), err
}
@@ -510,9 +540,9 @@ func (s *storageImageSource) GetTargetManifest(digest ddigest.Digest) (manifestB
return nil, "", ErrNoManifestLists
}
func (s *storageImageSource) GetSignatures() (signatures [][]byte, err error) {
func (s *storageImageSource) GetSignatures(ctx context.Context) (signatures [][]byte, err error) {
var offset int
signature, err := s.imageRef.transport.store.GetImageBigData(s.ID, "signatures")
signature, err := s.imageRef.transport.store.ImageBigData(s.ID, "signatures")
if err != nil {
return nil, err
}
@@ -534,7 +564,7 @@ func (s *storageImageSource) getSize() (int64, error) {
return -1, errors.Wrapf(err, "error reading image %q", s.imageRef.id)
}
for _, name := range names {
bigSize, err := s.imageRef.transport.store.GetImageBigDataSize(s.imageRef.id, name)
bigSize, err := s.imageRef.transport.store.ImageBigDataSize(s.imageRef.id, name)
if err != nil {
return -1, errors.Wrapf(err, "error reading data blob size %q for %q", name, s.imageRef.id)
}
@@ -545,7 +575,7 @@ func (s *storageImageSource) getSize() (int64, error) {
}
for _, layerList := range s.Layers {
for _, layerID := range layerList {
layer, err := s.imageRef.transport.store.GetLayer(layerID)
layer, err := s.imageRef.transport.store.Layer(layerID)
if err != nil {
return -1, err
}


@@ -1,11 +1,15 @@
// +build !containers_image_storage_stub
package storage
import (
"strings"
"github.com/Sirupsen/logrus"
"github.com/containers/image/docker/reference"
"github.com/containers/image/types"
"github.com/containers/storage"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
)
// A storageReference holds an arbitrary name and/or an ID, which is a 32-byte
@@ -32,22 +36,45 @@ func newReference(transport storageTransport, reference, id string, name referen
}
// Resolve the reference's name to an image ID in the store, if there's already
// one present with the same name or ID.
func (s *storageReference) resolveID() string {
// one present with the same name or ID, and return the image.
func (s *storageReference) resolveImage() (*storage.Image, error) {
if s.id == "" {
image, err := s.transport.store.GetImage(s.reference)
image, err := s.transport.store.Image(s.reference)
if image != nil && err == nil {
s.id = image.ID
}
}
return s.id
if s.id == "" {
logrus.Errorf("reference %q does not resolve to an image ID", s.StringWithinTransport())
return nil, ErrNoSuchImage
}
img, err := s.transport.store.Image(s.id)
if err != nil {
return nil, errors.Wrapf(err, "error reading image %q", s.id)
}
if s.reference != "" {
nameMatch := false
for _, name := range img.Names {
if name == s.reference {
nameMatch = true
break
}
}
if !nameMatch {
logrus.Errorf("no image matching reference %q found", s.StringWithinTransport())
return nil, ErrNoSuchImage
}
}
return img, nil
}
// Return a Transport object that defaults to using the same store that we used
// to build this reference object.
func (s storageReference) Transport() types.ImageTransport {
return &storageTransport{
store: s.transport.store,
store: s.transport.store,
defaultUIDMap: s.transport.defaultUIDMap,
defaultGIDMap: s.transport.defaultGIDMap,
}
}
@@ -60,7 +87,12 @@ func (s storageReference) DockerReference() reference.Named {
// disambiguate between images which may be present in multiple stores and
// share only their names.
func (s storageReference) StringWithinTransport() string {
storeSpec := "[" + s.transport.store.GetGraphDriverName() + "@" + s.transport.store.GetGraphRoot() + "]"
optionsList := ""
options := s.transport.store.GraphOptions()
if len(options) > 0 {
optionsList = ":" + strings.Join(options, ",")
}
storeSpec := "[" + s.transport.store.GraphDriverName() + "@" + s.transport.store.GraphRoot() + "+" + s.transport.store.RunRoot() + optionsList + "]"
if s.name == nil {
return storeSpec + "@" + s.id
}
@@ -71,7 +103,14 @@ func (s storageReference) StringWithinTransport() string {
}
func (s storageReference) PolicyConfigurationIdentity() string {
return s.StringWithinTransport()
storeSpec := "[" + s.transport.store.GraphDriverName() + "@" + s.transport.store.GraphRoot() + "]"
if s.name == nil {
return storeSpec + "@" + s.id
}
if s.id == "" {
return storeSpec + s.reference
}
return storeSpec + s.reference + "@" + s.id
}
// Also accept policy that's tied to the combination of the graph root and
@@ -79,8 +118,8 @@ func (s storageReference) PolicyConfigurationIdentity() string {
// graph root, in case we're using multiple drivers in the same directory for
// some reason.
func (s storageReference) PolicyConfigurationNamespaces() []string {
storeSpec := "[" + s.transport.store.GetGraphDriverName() + "@" + s.transport.store.GetGraphRoot() + "]"
driverlessStoreSpec := "[" + s.transport.store.GetGraphRoot() + "]"
storeSpec := "[" + s.transport.store.GraphDriverName() + "@" + s.transport.store.GraphRoot() + "]"
driverlessStoreSpec := "[" + s.transport.store.GraphRoot() + "]"
namespaces := []string{}
if s.name != nil {
if s.id != "" {
@@ -103,14 +142,13 @@ func (s storageReference) NewImage(ctx *types.SystemContext) (types.Image, error
}
func (s storageReference) DeleteImage(ctx *types.SystemContext) error {
id := s.resolveID()
if id == "" {
logrus.Errorf("reference %q does not resolve to an image ID", s.StringWithinTransport())
return ErrNoSuchImage
img, err := s.resolveImage()
if err != nil {
return err
}
layers, err := s.transport.store.DeleteImage(id, true)
layers, err := s.transport.store.DeleteImage(img.ID, true)
if err == nil {
logrus.Debugf("deleted image %q", id)
logrus.Debugf("deleted image %q", img.ID)
for _, layer := range layers {
logrus.Debugf("deleted layer %q", layer)
}
@@ -118,7 +156,7 @@ func (s storageReference) DeleteImage(ctx *types.SystemContext) error {
return err
}
func (s storageReference) NewImageSource(ctx *types.SystemContext, requestedManifestMIMETypes []string) (types.ImageSource, error) {
func (s storageReference) NewImageSource(ctx *types.SystemContext) (types.ImageSource, error) {
return newImageSource(s)
}


@@ -1,19 +1,21 @@
// +build !containers_image_storage_stub
package storage
import (
"path/filepath"
"regexp"
"strings"
"github.com/pkg/errors"
"github.com/Sirupsen/logrus"
"github.com/containers/image/docker/reference"
"github.com/containers/image/transports"
"github.com/containers/image/types"
"github.com/containers/storage/storage"
"github.com/containers/storage"
"github.com/containers/storage/pkg/idtools"
"github.com/opencontainers/go-digest"
ddigest "github.com/opencontainers/go-digest"
"github.com/sirupsen/logrus"
)
func init() {
@@ -30,7 +32,6 @@ var (
// ErrPathNotAbsolute is returned when a graph root is not an absolute
// path name.
ErrPathNotAbsolute = errors.New("path name is not absolute")
idRegexp = regexp.MustCompile("^(sha256:)?([0-9a-fA-F]{64})$")
)
// StoreTransport is an ImageTransport that uses a storage.Store to parse
@@ -48,10 +49,20 @@ type StoreTransport interface {
// ParseStoreReference parses a reference, overriding any store
// specification that it may contain.
ParseStoreReference(store storage.Store, reference string) (*storageReference, error)
// SetDefaultUIDMap sets the default UID map to use when opening stores.
SetDefaultUIDMap(idmap []idtools.IDMap)
// SetDefaultGIDMap sets the default GID map to use when opening stores.
SetDefaultGIDMap(idmap []idtools.IDMap)
// DefaultUIDMap returns the default UID map used when opening stores.
DefaultUIDMap() []idtools.IDMap
// DefaultGIDMap returns the default GID map used when opening stores.
DefaultGIDMap() []idtools.IDMap
}
type storageTransport struct {
store storage.Store
store storage.Store
defaultUIDMap []idtools.IDMap
defaultGIDMap []idtools.IDMap
}
func (s *storageTransport) Name() string {
@@ -68,6 +79,26 @@ func (s *storageTransport) SetStore(store storage.Store) {
s.store = store
}
// SetDefaultUIDMap sets the default UID map to use when opening stores.
func (s *storageTransport) SetDefaultUIDMap(idmap []idtools.IDMap) {
s.defaultUIDMap = idmap
}
// SetDefaultGIDMap sets the default GID map to use when opening stores.
func (s *storageTransport) SetDefaultGIDMap(idmap []idtools.IDMap) {
s.defaultGIDMap = idmap
}
// DefaultUIDMap returns the default UID map used when opening stores.
func (s *storageTransport) DefaultUIDMap() []idtools.IDMap {
return s.defaultUIDMap
}
// DefaultGIDMap returns the default GID map used when opening stores.
func (s *storageTransport) DefaultGIDMap() []idtools.IDMap {
return s.defaultGIDMap
}
// ParseStoreReference takes a name or an ID, tries to figure out which it is
// relative to the given store, and returns it in a reference object.
func (s storageTransport) ParseStoreReference(store storage.Store, ref string) (*storageReference, error) {
@@ -100,16 +131,24 @@ func (s storageTransport) ParseStoreReference(store storage.Store, ref string) (
return nil, err
}
}
sum, err = digest.Parse("sha256:" + refInfo[1])
if err != nil {
return nil, err
sum, err = digest.Parse(refInfo[1])
if err != nil || sum.Validate() != nil {
sum, err = digest.Parse("sha256:" + refInfo[1])
if err != nil || sum.Validate() != nil {
return nil, err
}
}
} else { // Coverage: len(refInfo) is always 1 or 2
// Anything else: store specified in a form we don't
// recognize.
return nil, ErrInvalidReference
}
storeSpec := "[" + store.GetGraphDriverName() + "@" + store.GetGraphRoot() + "]"
optionsList := ""
options := store.GraphOptions()
if len(options) > 0 {
optionsList = ":" + strings.Join(options, ",")
}
storeSpec := "[" + store.GraphDriverName() + "@" + store.GraphRoot() + "+" + store.RunRoot() + optionsList + "]"
id := ""
if sum.Validate() == nil {
id = sum.Hex()
@@ -126,14 +165,17 @@ func (s storageTransport) ParseStoreReference(store storage.Store, ref string) (
} else {
logrus.Debugf("parsed reference into %q", storeSpec+refname+"@"+id)
}
return newReference(storageTransport{store: store}, refname, id, name), nil
return newReference(storageTransport{store: store, defaultUIDMap: s.defaultUIDMap, defaultGIDMap: s.defaultGIDMap}, refname, id, name), nil
}
func (s *storageTransport) GetStore() (storage.Store, error) {
// Return the transport's previously-set store. If we don't have one
// of those, initialize one now.
if s.store == nil {
store, err := storage.GetStore(storage.DefaultStoreOptions)
options := storage.DefaultStoreOptions
options.UIDMap = s.defaultUIDMap
options.GIDMap = s.defaultGIDMap
store, err := storage.GetStore(options)
if err != nil {
return nil, err
}
@@ -144,15 +186,11 @@ func (s *storageTransport) GetStore() (storage.Store, error) {
// ParseReference takes a name and/or an ID ("_name_"/"@_id_"/"_name_@_id_"),
// possibly prefixed with a store specifier in the form "[_graphroot_]" or
// "[_driver_@_graphroot_]", tries to figure out which it is, and returns it in
// a reference object. If the _graphroot_ is a location other than the default,
// it needs to have been previously opened using storage.GetStore(), so that it
// can figure out which run root goes with the graph root.
// "[_driver_@_graphroot_]" or "[_driver_@_graphroot_+_runroot_]" or
// "[_driver_@_graphroot_:_options_]" or "[_driver_@_graphroot_+_runroot_:_options_]",
// tries to figure out which it is, and returns it in a reference object.
func (s *storageTransport) ParseReference(reference string) (types.ImageReference, error) {
store, err := s.GetStore()
if err != nil {
return nil, err
}
var store storage.Store
// Check if there's a store location prefix. If there is, then it
// needs to match a store that was previously initialized using
// storage.GetStore(), or be enough to let the storage library fill out
@@ -164,37 +202,65 @@ func (s *storageTransport) ParseReference(reference string) (types.ImageReferenc
}
storeSpec := reference[1:closeIndex]
reference = reference[closeIndex+1:]
storeInfo := strings.SplitN(storeSpec, "@", 2)
if len(storeInfo) == 1 && storeInfo[0] != "" {
// One component: the graph root.
if !filepath.IsAbs(storeInfo[0]) {
return nil, ErrPathNotAbsolute
// Peel off a "driver@" from the start.
driverInfo := ""
driverSplit := strings.SplitN(storeSpec, "@", 2)
if len(driverSplit) != 2 {
if storeSpec == "" {
return nil, ErrInvalidReference
}
store2, err := storage.GetStore(storage.StoreOptions{
GraphRoot: storeInfo[0],
})
if err != nil {
return nil, err
}
store = store2
} else if len(storeInfo) == 2 && storeInfo[0] != "" && storeInfo[1] != "" {
// Two components: the driver type and the graph root.
if !filepath.IsAbs(storeInfo[1]) {
return nil, ErrPathNotAbsolute
}
store2, err := storage.GetStore(storage.StoreOptions{
GraphDriverName: storeInfo[0],
GraphRoot: storeInfo[1],
})
if err != nil {
return nil, err
}
store = store2
} else {
// Anything else: store specified in a form we don't
// recognize.
return nil, ErrInvalidReference
driverInfo = driverSplit[0]
if driverInfo == "" {
return nil, ErrInvalidReference
}
storeSpec = driverSplit[1]
if storeSpec == "" {
return nil, ErrInvalidReference
}
}
// Peel off a ":options" from the end.
var options []string
optionsSplit := strings.SplitN(storeSpec, ":", 2)
if len(optionsSplit) == 2 {
options = strings.Split(optionsSplit[1], ",")
storeSpec = optionsSplit[0]
}
// Peel off a "+runroot" from the new end.
runRootInfo := ""
runRootSplit := strings.SplitN(storeSpec, "+", 2)
if len(runRootSplit) == 2 {
runRootInfo = runRootSplit[1]
storeSpec = runRootSplit[0]
}
// The rest is our graph root.
rootInfo := storeSpec
// Check that any paths are absolute paths.
if rootInfo != "" && !filepath.IsAbs(rootInfo) {
return nil, ErrPathNotAbsolute
}
if runRootInfo != "" && !filepath.IsAbs(runRootInfo) {
return nil, ErrPathNotAbsolute
}
store2, err := storage.GetStore(storage.StoreOptions{
GraphDriverName: driverInfo,
GraphRoot: rootInfo,
RunRoot: runRootInfo,
GraphDriverOptions: options,
UIDMap: s.defaultUIDMap,
GIDMap: s.defaultGIDMap,
})
if err != nil {
return nil, err
}
store = store2
} else {
// We didn't have a store spec, so use the default.
store2, err := s.GetStore()
if err != nil {
return nil, err
}
store = store2
}
return s.ParseStoreReference(store, reference)
}
@@ -204,14 +270,14 @@ func (s storageTransport) GetStoreImage(store storage.Store, ref types.ImageRefe
if dref == nil {
if sref, ok := ref.(*storageReference); ok {
if sref.id != "" {
if img, err := store.GetImage(sref.id); err == nil {
if img, err := store.Image(sref.id); err == nil {
return img, nil
}
}
}
return nil, ErrInvalidReference
}
return store.GetImage(verboseName(dref))
return store.Image(verboseName(dref))
}
func (s *storageTransport) GetImage(ref types.ImageReference) (*storage.Image, error) {
@@ -249,7 +315,7 @@ func (s storageTransport) ValidatePolicyConfigurationScope(scope string) error {
return ErrPathNotAbsolute
}
} else {
// Anything else: store specified in a form we don't
// Anything else: scope specified in a form we don't
// recognize.
return ErrInvalidReference
}
@@ -285,7 +351,7 @@ func verboseName(name reference.Named) string {
name = reference.TagNameOnly(name)
tag := ""
if tagged, ok := name.(reference.NamedTagged); ok {
tag = tagged.Tag()
tag = ":" + tagged.Tag()
}
return name.Name() + ":" + tag
return name.Name() + tag
}
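
To make the new specifier grammar concrete, here is a minimal sketch of the same peeling order applied to the inside of a bracketed store spec: the "driver@" prefix first, then the trailing ":options", then the "+runroot", with the remainder as the graph root. parseStoreSpec and the sample values are illustrative, not part of the vendored API:

package main

import (
    "fmt"
    "strings"
)

// parseStoreSpec peels a "driver@graphroot+runroot:opt1,opt2" store
// specifier (brackets already removed) in the same order as above.
func parseStoreSpec(spec string) (driver, graphRoot, runRoot string, options []string) {
    if parts := strings.SplitN(spec, "@", 2); len(parts) == 2 {
        driver, spec = parts[0], parts[1]
    }
    if parts := strings.SplitN(spec, ":", 2); len(parts) == 2 {
        spec, options = parts[0], strings.Split(parts[1], ",")
    }
    if parts := strings.SplitN(spec, "+", 2); len(parts) == 2 {
        spec, runRoot = parts[0], parts[1]
    }
    return driver, spec, runRoot, options
}

func main() {
    d, g, r, o := parseStoreSpec("overlay@/var/lib/containers/storage+/var/run/containers/storage:overlay.mountopt=nodev")
    fmt.Println(d, g, r, o)
    // overlay /var/lib/containers/storage /var/run/containers/storage [overlay.mountopt=nodev]
}
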

vendor/github.com/containers/image/tarball/doc.go (generated, vendored, new file, 48 lines)

@@ -0,0 +1,48 @@
// Package tarball provides a way to generate images using one or more layer
// tarballs and an optional template configuration.
//
// An example:
// package main
//
// import (
// "fmt"
//
// cp "github.com/containers/image/copy"
// "github.com/containers/image/tarball"
// "github.com/containers/image/transports/alltransports"
//
// imgspecv1 "github.com/containers/image/transports/alltransports"
// )
//
// func imageFromTarball() {
// src, err := alltransports.ParseImageName("tarball:/var/cache/mock/fedora-26-x86_64/root_cache/cache.tar.gz")
// // - or -
// // src, err := tarball.Transport.ParseReference("/var/cache/mock/fedora-26-x86_64/root_cache/cache.tar.gz")
// if err != nil {
// panic(err)
// }
// updater, ok := src.(tarball.ConfigUpdater)
// if !ok {
// panic("unexpected: a tarball reference should implement tarball.ConfigUpdater")
// }
// config := imgspecv1.Image{
// Config: imgspecv1.ImageConfig{
// Cmd: []string{"/bin/bash"},
// },
// }
// annotations := make(map[string]string)
// annotations[imgspecv1.AnnotationDescription] = "test image built from a mock root cache"
// err = updater.ConfigUpdate(config, annotations)
// if err != nil {
// panic(err)
// }
// dest, err := alltransports.ParseImageName("docker-daemon:mock:latest")
// if err != nil {
// panic(err)
// }
// err = cp.Image(nil, dest, src, nil)
// if err != nil {
// panic(err)
// }
// }
package tarball


@@ -0,0 +1,88 @@
package tarball
import (
"fmt"
"os"
"strings"
"github.com/containers/image/docker/reference"
"github.com/containers/image/image"
"github.com/containers/image/types"
imgspecv1 "github.com/opencontainers/image-spec/specs-go/v1"
)
// ConfigUpdater is an interface that ImageReferences for "tarball" images also
// implement. It can be used to set values for a configuration, and to set
// image annotations which will be present in the images returned by the
// reference's NewImage() or NewImageSource() methods.
type ConfigUpdater interface {
ConfigUpdate(config imgspecv1.Image, annotations map[string]string) error
}
type tarballReference struct {
transport types.ImageTransport
config imgspecv1.Image
annotations map[string]string
filenames []string
stdin []byte
}
// ConfigUpdate updates the image's default configuration and adds annotations
// which will be visible in source images created using this reference.
func (r *tarballReference) ConfigUpdate(config imgspecv1.Image, annotations map[string]string) error {
r.config = config
if r.annotations == nil {
r.annotations = make(map[string]string)
}
for k, v := range annotations {
r.annotations[k] = v
}
return nil
}
func (r *tarballReference) Transport() types.ImageTransport {
return r.transport
}
func (r *tarballReference) StringWithinTransport() string {
return strings.Join(r.filenames, ":")
}
func (r *tarballReference) DockerReference() reference.Named {
return nil
}
func (r *tarballReference) PolicyConfigurationIdentity() string {
return ""
}
func (r *tarballReference) PolicyConfigurationNamespaces() []string {
return nil
}
func (r *tarballReference) NewImage(ctx *types.SystemContext) (types.Image, error) {
src, err := r.NewImageSource(ctx)
if err != nil {
return nil, err
}
img, err := image.FromSource(src)
if err != nil {
src.Close()
return nil, err
}
return img, nil
}
func (r *tarballReference) DeleteImage(ctx *types.SystemContext) error {
for _, filename := range r.filenames {
if err := os.Remove(filename); err != nil && !os.IsNotExist(err) {
return fmt.Errorf("error removing %q: %v", filename, err)
}
}
return nil
}
func (r *tarballReference) NewImageDestination(ctx *types.SystemContext) (types.ImageDestination, error) {
return nil, fmt.Errorf("destination not implemented yet")
}


@@ -0,0 +1,250 @@
package tarball
import (
"bytes"
"compress/gzip"
"context"
"encoding/json"
"fmt"
"io"
"io/ioutil"
"os"
"runtime"
"strings"
"time"
"github.com/containers/image/types"
digest "github.com/opencontainers/go-digest"
imgspecs "github.com/opencontainers/image-spec/specs-go"
imgspecv1 "github.com/opencontainers/image-spec/specs-go/v1"
)
type tarballImageSource struct {
reference tarballReference
filenames []string
diffIDs []digest.Digest
diffSizes []int64
blobIDs []digest.Digest
blobSizes []int64
blobTypes []string
config []byte
configID digest.Digest
configSize int64
manifest []byte
}
func (r *tarballReference) NewImageSource(ctx *types.SystemContext) (types.ImageSource, error) {
// Gather up the digests, sizes, and date information for all of the files.
filenames := []string{}
diffIDs := []digest.Digest{}
diffSizes := []int64{}
blobIDs := []digest.Digest{}
blobSizes := []int64{}
blobTimes := []time.Time{}
blobTypes := []string{}
for _, filename := range r.filenames {
var file *os.File
var err error
var blobSize int64
var blobTime time.Time
var reader io.Reader
if filename == "-" {
blobSize = int64(len(r.stdin))
blobTime = time.Now()
reader = bytes.NewReader(r.stdin)
} else {
file, err = os.Open(filename)
if err != nil {
return nil, fmt.Errorf("error opening %q for reading: %v", filename, err)
}
defer file.Close()
reader = file
fileinfo, err := file.Stat()
if err != nil {
return nil, fmt.Errorf("error reading size of %q: %v", filename, err)
}
blobSize = fileinfo.Size()
blobTime = fileinfo.ModTime()
}
// Default to assuming the layer is compressed.
layerType := imgspecv1.MediaTypeImageLayerGzip
// Set up to digest the file as it is.
blobIDdigester := digest.Canonical.Digester()
reader = io.TeeReader(reader, blobIDdigester.Hash())
// Set up to digest the file after we maybe decompress it.
diffIDdigester := digest.Canonical.Digester()
uncompressed, err := gzip.NewReader(reader)
if err == nil {
// It is compressed, so the diffID is the digest of the uncompressed version
reader = io.TeeReader(uncompressed, diffIDdigester.Hash())
} else {
// It is not compressed, so the diffID and the blobID are going to be the same
diffIDdigester = blobIDdigester
layerType = imgspecv1.MediaTypeImageLayer
uncompressed = nil
}
n, err := io.Copy(ioutil.Discard, reader)
if err != nil {
return nil, fmt.Errorf("error reading %q: %v", filename, err)
}
if uncompressed != nil {
uncompressed.Close()
}
// Grab our uncompressed and possibly-compressed digests and sizes.
filenames = append(filenames, filename)
diffIDs = append(diffIDs, diffIDdigester.Digest())
diffSizes = append(diffSizes, n)
blobIDs = append(blobIDs, blobIDdigester.Digest())
blobSizes = append(blobSizes, blobSize)
blobTimes = append(blobTimes, blobTime)
blobTypes = append(blobTypes, layerType)
}
// Build the rootfs and history for the configuration blob.
rootfs := imgspecv1.RootFS{
Type: "layers",
DiffIDs: diffIDs,
}
created := time.Time{}
history := []imgspecv1.History{}
// Pick up the layer comment from the configuration's history list, if one is set.
comment := "imported from tarball"
if len(r.config.History) > 0 && r.config.History[0].Comment != "" {
comment = r.config.History[0].Comment
}
for i := range diffIDs {
createdBy := fmt.Sprintf("/bin/sh -c #(nop) ADD file:%s in %c", diffIDs[i].Hex(), os.PathSeparator)
history = append(history, imgspecv1.History{
Created: &blobTimes[i],
CreatedBy: createdBy,
Comment: comment,
})
// Use the mtime of the most recently modified file as the image's creation time.
if created.Before(blobTimes[i]) {
created = blobTimes[i]
}
}
// Pick up other defaults from the config in the reference.
config := r.config
if config.Created == nil {
config.Created = &created
}
if config.Architecture == "" {
config.Architecture = runtime.GOARCH
}
if config.OS == "" {
config.OS = runtime.GOOS
}
config.RootFS = rootfs
config.History = history
// Encode and digest the image configuration blob.
configBytes, err := json.Marshal(&config)
if err != nil {
return nil, fmt.Errorf("error generating configuration blob for %q: %v", strings.Join(r.filenames, separator), err)
}
configID := digest.Canonical.FromBytes(configBytes)
configSize := int64(len(configBytes))
// Populate a manifest with the configuration blob and the file as the single layer.
layerDescriptors := []imgspecv1.Descriptor{}
for i := range blobIDs {
layerDescriptors = append(layerDescriptors, imgspecv1.Descriptor{
Digest: blobIDs[i],
Size: blobSizes[i],
MediaType: blobTypes[i],
})
}
annotations := make(map[string]string)
for k, v := range r.annotations {
annotations[k] = v
}
manifest := imgspecv1.Manifest{
Versioned: imgspecs.Versioned{
SchemaVersion: 2,
},
Config: imgspecv1.Descriptor{
Digest: configID,
Size: configSize,
MediaType: imgspecv1.MediaTypeImageConfig,
},
Layers: layerDescriptors,
Annotations: annotations,
}
// Encode the manifest.
manifestBytes, err := json.Marshal(&manifest)
if err != nil {
return nil, fmt.Errorf("error generating manifest for %q: %v", strings.Join(r.filenames, separator), err)
}
// Return the image.
src := &tarballImageSource{
reference: *r,
filenames: filenames,
diffIDs: diffIDs,
diffSizes: diffSizes,
blobIDs: blobIDs,
blobSizes: blobSizes,
blobTypes: blobTypes,
config: configBytes,
configID: configID,
configSize: configSize,
manifest: manifestBytes,
}
return src, nil
}
func (is *tarballImageSource) Close() error {
return nil
}
func (is *tarballImageSource) GetBlob(blobinfo types.BlobInfo) (io.ReadCloser, int64, error) {
// We should only be asked about things in the manifest. Maybe the configuration blob.
if blobinfo.Digest == is.configID {
return ioutil.NopCloser(bytes.NewBuffer(is.config)), is.configSize, nil
}
// Maybe one of the layer blobs.
for i := range is.blobIDs {
if blobinfo.Digest == is.blobIDs[i] {
// We want to read that layer: open the file or memory block and hand it back.
if is.filenames[i] == "-" {
return ioutil.NopCloser(bytes.NewBuffer(is.reference.stdin)), int64(len(is.reference.stdin)), nil
}
reader, err := os.Open(is.filenames[i])
if err != nil {
return nil, -1, fmt.Errorf("error opening %q: %v", is.filenames[i], err)
}
return reader, is.blobSizes[i], nil
}
}
return nil, -1, fmt.Errorf("no blob with digest %q found", blobinfo.Digest.String())
}
func (is *tarballImageSource) GetManifest() ([]byte, string, error) {
return is.manifest, imgspecv1.MediaTypeImageManifest, nil
}
func (*tarballImageSource) GetSignatures(context.Context) ([][]byte, error) {
return nil, nil
}
func (*tarballImageSource) GetTargetManifest(digest.Digest) ([]byte, string, error) {
return nil, "", fmt.Errorf("manifest lists are not supported by the %q transport", transportName)
}
func (is *tarballImageSource) Reference() types.ImageReference {
return &is.reference
}
// UpdatedLayerInfos() returns updated layer info that should be used when reading, in preference to values in the manifest, if specified.
func (*tarballImageSource) UpdatedLayerInfos() []types.BlobInfo {
return nil
}
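
The digesting pipeline above tees every stored byte into one digester and, when gzip.NewReader succeeds, tees the decompressed bytes into a second one, so the blob digest and the diffID come out of a single pass over the file. A standalone sketch of the same pattern, with crypto/sha256 standing in for digest.Canonical:

package main

import (
    "bytes"
    "compress/gzip"
    "crypto/sha256"
    "fmt"
    "io"
    "io/ioutil"
)

func main() {
    // Build a small gzipped "layer" in memory.
    var blob bytes.Buffer
    zw := gzip.NewWriter(&blob)
    zw.Write([]byte("layer contents"))
    zw.Close()

    // The blob digest covers the bytes exactly as stored (compressed) ...
    blobHash := sha256.New()
    reader := io.TeeReader(&blob, blobHash)

    // ... while the diffID covers the decompressed bytes. If gzip.NewReader
    // fails, the stream was not compressed and the two digests coincide.
    diffHash := blobHash
    if zr, err := gzip.NewReader(reader); err == nil {
        diffHash = sha256.New()
        reader = io.TeeReader(zr, diffHash)
    }
    n, _ := io.Copy(ioutil.Discard, reader)
    fmt.Printf("uncompressed %d bytes\nblobID sha256:%x\ndiffID sha256:%x\n", n, blobHash.Sum(nil), diffHash.Sum(nil))
}
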


@@ -0,0 +1,66 @@
package tarball
import (
"errors"
"fmt"
"io/ioutil"
"os"
"strings"
"github.com/containers/image/transports"
"github.com/containers/image/types"
)
const (
transportName = "tarball"
separator = ":"
)
var (
// Transport implements the types.ImageTransport interface for "tarball:" images,
// which are makeshift images constructed using one or more possibly-compressed tar
// archives.
Transport = &tarballTransport{}
)
type tarballTransport struct {
}
func (t *tarballTransport) Name() string {
return transportName
}
func (t *tarballTransport) ParseReference(reference string) (types.ImageReference, error) {
var stdin []byte
var err error
filenames := strings.Split(reference, separator)
for _, filename := range filenames {
if filename == "-" {
stdin, err = ioutil.ReadAll(os.Stdin)
if err != nil {
return nil, fmt.Errorf("error buffering stdin: %v", err)
}
continue
}
f, err := os.Open(filename)
if err != nil {
return nil, fmt.Errorf("error opening %q: %v", filename, err)
}
f.Close()
}
ref := &tarballReference{
transport: t,
filenames: filenames,
stdin: stdin,
}
return ref, nil
}
func (t *tarballTransport) ValidatePolicyConfigurationScope(scope string) error {
// See the explanation in daemonReference.PolicyConfigurationIdentity.
return errors.New(`tarball: does not support any scopes except the default "" one`)
}
func init() {
transports.Register(Transport)
}


@@ -10,9 +10,12 @@ import (
_ "github.com/containers/image/docker"
_ "github.com/containers/image/docker/archive"
_ "github.com/containers/image/docker/daemon"
_ "github.com/containers/image/oci/archive"
_ "github.com/containers/image/oci/layout"
_ "github.com/containers/image/openshift"
_ "github.com/containers/image/storage"
_ "github.com/containers/image/tarball"
// The ostree transport is registered by ostree*.go
// The storage transport is registered by storage*.go
"github.com/containers/image/transports"
"github.com/containers/image/types"
"github.com/pkg/errors"

Some files were not shown because too many files have changed in this diff.