Compare commits


251 Commits

Author SHA1 Message Date
Antonio Murdaca
1bbd87f435 bump to v0.1.23
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2017-07-20 19:36:46 +02:00
Antonio Murdaca
bf149c6426 Merge pull request #378 from mtrmac/runtime-spec-v1.0.0
Update to image-spec v1.0.0 and revendor
2017-07-20 19:33:24 +02:00
Miloslav Trmač
ca03debe59 Update to image-spec v1.0.0 and revendor 2017-07-20 18:04:00 +02:00
Antonio Murdaca
2d168e3723 Merge pull request #377 from mtrmac/image-spec-1.0.0
Update to image-spec v1.0.0 and revendor
2017-07-20 14:36:33 +02:00
Miloslav Trmač
2c1ede8449 Update to image-spec v1.0.0 and revendor 2017-07-19 23:50:50 +02:00
Miloslav Trmač
b2a06ed720 Merge pull request #376 from runcom/fix-375
vendor c/image: fix auth handlers
2017-07-18 18:58:36 +02:00
Antonio Murdaca
2874584be4 vendor c/image: fix auth handlers
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2017-07-18 17:14:35 +02:00
Antonio Murdaca
91e801b451 Merge pull request #371 from nalind/vendor-storage
Bump and pin containers/storage and containers/image
2017-06-28 17:30:14 +02:00
Nalin Dahyabhai
b0648d79d4 Bump containers/storage and containers/image
Update containers/storage and containers/image to the
current-as-of-this-writing versions,
105f7c77aef0c797429e41552743bf5b03b63263 and
23bddaa64cc6bf3f3077cda0dbf1cdd7007434df respectively.

Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>
2017-06-28 11:05:26 -04:00
Miloslav Trmač
437d608772 Merge pull request #370 from 0x0916/2017-06-22/dockerfile
Dockerfile.build: using ubuntu 17.04
2017-06-22 13:03:44 +02:00
0x0916
d57934d529 Dockerfile.build: using ubuntu 17.04
ubuntu 16.04 does not have the `libostree-dev` package. Also, we should
install the `libglib2.0-dev` package when building skopeo with `make binary`.

Signed-off-by: 0x0916 <w@laoqinren.net>
2017-06-22 17:17:24 +08:00
Antonio Murdaca
4470b88c50 Merge pull request #368 from runcom/cut-v0.1.22
Cut v0.1.22
2017-06-21 10:39:49 +02:00
Antonio Murdaca
03595a83d0 bump back to v0.1.23
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2017-06-21 10:22:58 +02:00
Antonio Murdaca
5d24b67f5e bump to v0.1.22
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2017-06-21 10:22:43 +02:00
Antonio Murdaca
3b9ee4f322 Merge pull request #366 from rhatdan/transports
Give more useful help when explaining usage
2017-06-20 17:02:22 +02:00
Daniel J Walsh
0ca26cce94 Give more useful help when explaining usage
Also specify container-storage as a valid transport

Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
2017-06-20 14:28:02 +00:00
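The improved help text enumerates the supported transports. For illustration, a copy from a registry into local container storage uses the transport prefix directly (a sketch; the image name is arbitrary, and the available transports depend on how skopeo was built):
```sh
$ skopeo copy docker://busybox:latest containers-storage:busybox:latest
```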
Antonio Murdaca
29528d00ec Merge pull request #367 from runcom/vendor-c/image-list-names
vendor c/image for ListNames in transports pkg
2017-06-17 00:26:40 +02:00
Antonio Murdaca
af34f50b8c bump ostree-go
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2017-06-17 00:08:54 +02:00
Antonio Murdaca
e7b32b1e6a vendor c/image for ListNames in transports pkg
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2017-06-16 23:39:06 +02:00
Antonio Murdaca
4049bf2801 Merge pull request #365 from rhatdan/docker
Remove docker references wherever possible
2017-06-16 19:15:33 +02:00
Daniel J Walsh
ad33537769 Remove docker references wherever possible
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
2017-06-16 10:51:42 -04:00
Antonio Murdaca
d5e34c1b5e Merge pull request #362 from rhatdan/master
Vendor in ostree fixes
2017-06-16 11:58:52 +02:00
Dan Walsh
5e586f3781 Vendor in ostree fixes
This will fix the compiler issues.

Signed-off-by: Dan Walsh <dwalsh@redhat.com>
2017-06-16 05:42:47 -04:00
Antonio Murdaca
455177e749 Merge pull request #358 from runcom/bump-release-0.1.21
Bump release 0.1.21
2017-06-15 17:16:46 +02:00
Antonio Murdaca
b85b7319aa bump back to v0.1.22
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2017-06-15 16:58:43 +02:00
Antonio Murdaca
0b73154601 bump to v0.1.21
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2017-06-15 16:58:24 +02:00
Miloslav Trmač
da48399f89 Merge pull request #357 from runcom/up-cstorage
*: update c/storage
2017-06-15 15:12:33 +02:00
Antonio Murdaca
08504d913c *: update c/storage
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2017-06-15 14:29:56 +02:00
Miloslav Trmač
fe67466701 Merge pull request #351 from giuseppe/add-ostree-dep
vendor.conf: add ostree-go
2017-06-14 17:18:51 +02:00
Giuseppe Scrivano
47e5d0cd9e vendor.conf: add ostree-go
it is used by containers/image for pulling images to the OSTree storage.

Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
2017-06-14 15:35:34 +02:00
Antonio Murdaca
f0d830a8ca Merge pull request #355 from runcom/fail-on-diff-os
fail early when image os doesn't match host os
2017-06-06 13:55:26 +02:00
Antonio Murdaca
6d3f523c57 fail early when image os doesn't match host os
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2017-06-06 13:40:16 +02:00
Miloslav Trmač
87a36f9d5b Merge pull request #343 from mtrmac/readme-updates
README.md updates
2017-06-05 17:50:15 +02:00
Miloslav Trmač
9d12d72fb7 Improve documentation on what to do with containers/image failures in test-skopeo 2017-06-05 15:16:43 +02:00
Miloslav Trmač
fc88117065 Drop finished work from TODO
- We now have the docker-archive: transport
- Integration tests with built registries also exist
2017-06-05 15:16:43 +02:00
Miloslav Trmač
0debda0bbb Start all command examples with $
That has mostly been the case, with one outlier.  Fix it.
2017-06-05 15:16:43 +02:00
Miloslav Trmač
49f10736d1 Restructure the “Building without a container” version
Consolidate the Fedora and macOS instructions to prevent duplication,
and to suggest using $GOPATH for both.

Start with installing dependencies.
2017-06-05 15:15:04 +02:00
Miloslav Trmač
3d0d2ea6bb Document that there is a choice in using containers, and what the trade-off is 2017-06-05 15:15:02 +02:00
Miloslav Trmač
a2499d3451 Create “Building {without a container,in a container}” subsections
To make it clearer that the two are alternatives.

Document that a docker command is needed for the in-container build.

Also move the “checkout in $GOPATH” warning into the “without a
container” section, where it belongs.
2017-06-05 15:13:25 +02:00
Miloslav Trmač
7db0aab330 Move documentation build instructions to the end, with a separate header
We want to start with the Go 1.5 dependency and build/checkout
instructions.

Also create a separate subsection, to match the future “Building
in/without a container” subsections
2017-06-05 14:42:19 +02:00
Miloslav Trmač
150eb5bf18 Add Fedora instructions for installing go-md2man
It would be nice to have macOS instructions as well.
2017-06-05 14:40:58 +02:00
Miloslav Trmač
bcc0de69d4 Merge pull request #353 from surajssd/add-fedora-dependency
docs(README): add build dependencies for fedora
2017-06-05 14:39:58 +02:00
Suraj Deshmukh
cab89b9b9c docs(README): add build dependencies for fedora
Two more packages are needed to locally build skopeo
on Fedora, viz. btrfs-progs-devel and device-mapper-devel,
so they have been added to the README.

Signed-off-by: Suraj Deshmukh <surajssd009005@gmail.com>
2017-06-04 16:03:23 +05:30
Miloslav Trmač
5b95a21401 Merge pull request #350 from mhrivnak/patch-1
Fixes a typo in the Name field
2017-05-31 20:49:28 +02:00
Michael Hrivnak
78c83dbcff Fixes a typo in the Name field 2017-05-31 14:22:22 -04:00
Miloslav Trmač
98ced5196c Merge pull request #347 from mtrmac/docker-certs.d
Support /etc/docker/certs.d
2017-05-30 19:40:36 +02:00
Miloslav Trmač
63272a10d7 Vendor after merging mtrmac/image:docker-certs.d 2017-05-30 18:26:43 +02:00
Antonio Murdaca
07c798ff82 Merge pull request #346 from jingqiuELE/master
Always combine RUN apt-get update with apt-get install in the same RUN statement.
2017-05-25 14:54:31 +02:00
Jing Qiu
750d72873d Always combine RUN apt-get update with apt-get install in the same RUN statement.

From the [docs](https://docs.docker.com/engine/userguide/eng-image/dockerfile_best-practices/#build-cache) in March 2017:

Always combine RUN apt-get update with apt-get install in the same RUN statement, for example

RUN apt-get update && apt-get install -y package-bar

Using apt-get update alone in a RUN statement causes caching issues and subsequent apt-get install instructions fail.

Signed-off-by: Jing Qiu <aqiu0720@gmail.com>
2017-05-25 11:38:35 +08:00
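The same pattern, side by side (`package-bar` is the illustrative name from the quoted docs):
```dockerfile
# Fragile: the cached `apt-get update` layer can go stale, so a later
# rebuild may miss packages or install outdated versions.
RUN apt-get update
RUN apt-get install -y package-bar

# Robust: update and install are cached (and re-run) as one unit.
RUN apt-get update && apt-get install -y package-bar
```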
Antonio Murdaca
81dddac7d6 Merge pull request #345 from runcom/image-spec-rc6
update image-spec to v1.0.0-rc6
2017-05-24 16:55:18 +02:00
Antonio Murdaca
405b912f7e update image-spec to v1.0.0-rc6
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2017-05-24 16:35:57 +02:00
Miloslav Trmač
e1884aa8a8 Merge pull request #344 from mtrmac/storage-rebase
Pull in rhatdan/image:master
2017-05-17 22:20:50 +02:00
Miloslav Trmač
ffb01385dd Vendor after merging https://github.com/containers/image/pull/275 2017-05-17 17:12:23 +02:00
Miloslav Trmač
c5adc4b580 Merge pull request #331 from mtrmac/storage-rebase
Re-vendor, primarily for https://github.com/containers/storage/pull/11
2017-05-12 18:50:40 +02:00
Miloslav Trmač
69b9106646 Re-vendor, primarily for https://github.com/containers/storage/pull/11
containers/storage got new dependencies, so we will need to re-vendor
eventually anyway, and having this separate from other major work is
cleaner.

But the primary goal of this commit is to see whether it makes skopeo
buildable on OS X.
2017-05-11 13:07:14 +02:00
Antonio Murdaca
565688a963 Merge pull request #341 from projectatomic/jzb-patch-1
Update README.md
2017-05-10 21:21:00 +02:00
Joe Brockmeier
5205f3646d Update README.md
Just clarifying that Skopeo is available in later versions of Fedora as well.
2017-05-10 13:53:36 -04:00
Miloslav Trmač
3c57a0f084 Merge pull request #340 from mtrmac/setup-test
Simplify infrastructure setup, and make registry timeouts less strict
2017-05-10 15:05:52 +02:00
Miloslav Trmač
ed2088a4e5 Increase the time we wait for a registry from 0.5 to 5 seconds
We are not testing registry start-up performance, and killing the test
suite just because Travis is a bit busy doesn’t help; we’re much better
off with a test run which gives the registry a bit more time.
2017-05-10 14:46:28 +02:00
Miloslav Trmač
8b36001c0e Do not build docker/registry and remove remaining helpers 2017-05-10 14:46:28 +02:00
Miloslav Trmač
cd300805d1 Remove registry instances which we don’t use at all 2017-05-10 14:46:12 +02:00
Miloslav Trmač
8d3d0404fe Clean up SigningSuite test initialization
Move "skip if signing is not available" into the test, there may be
tests which only need verification.

Move GNUPGHOME creation from SetUpTest to SetUpSuite, sharing a single
key is fine.  We don’t change the GNUPGHOME contents at test runtime.
2017-05-10 14:45:10 +02:00
Miloslav Trmač
9985f12cd4 Move skopeoBinary check from *Suite.SetUpTest to SetUpSuite
The results are not going to vary across individual tests, so let’s only
check once.
2017-05-10 14:44:20 +02:00
Antonio Murdaca
5a42657cdb Merge pull request #339 from runcom/new-release-v0.1.20
New release v0.1.20
2017-05-10 13:01:27 +02:00
Antonio Murdaca
43d6128036 bump back to v0.1.21-dev
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2017-05-10 12:37:37 +02:00
Antonio Murdaca
e802625b7c bump to v0.1.20
- support image-spec v1.0.0-rc5

Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2017-05-10 12:37:07 +02:00
Miloslav Trmač
105be6a0ab Merge pull request #337 from mtrmac/docker-push-to-tag
Update name/tag embedded into schema1 manifests
2017-05-10 12:33:18 +02:00
Miloslav Trmač
8ec2a142c9 Enable tests for schema2→schema1 conversion with docker/distribution registries
Now that we can update the embedded name:tag, the test no longer fails
on a schema1→schema1 copy with the old schema1 server which verifies the
name:tag value.
2017-05-10 12:13:39 +02:00
Miloslav Trmač
c4ec970bb2 Uncomment TestCopyCompression test cases
We have been able to push to Docker tags for a long time, and recently
implemented s2→s1 autodetection.
2017-05-10 12:13:38 +02:00
Miloslav Trmač
cf7e58a297 Test that names are updated as expected when pushing to schema1
Before the update, we have loosened the equality check to ignore the
name/tag; now that we are generating them correctly, test for the
expected values.
2017-05-10 12:13:38 +02:00
Miloslav Trmač
03233a5ca7 Vendor after merging mtrmac/image:docker-push-to-tag 2017-05-10 12:13:24 +02:00
Miloslav Trmač
9bc847e656 Use a schema2 server in TestCopySignatures
TestCopySignatures, among other things, tests handling of a correctly
signed image to a different name without breaking the signature, which
will be impossible with schema1 after we start updating the names
embedded in the schema1 manifest.  So, use the schema2 server binary,
and docker://busybox image versions which use schema2.
2017-05-10 12:01:09 +02:00
Miloslav Trmač
22965c443f Allow updated names when comparing schema1 images
The new version of containers/image will update the name and tag fields
when pushing to schema1; so accept that before we update, so that tests
keep working.

For now, just ignore the name/tag fields, so that both the current and
updated versions of containers/image are acceptable; we will tighten
that after the update.
2017-05-10 12:01:09 +02:00
Miloslav Trmač
6f23c88e84 Make schema1 dir: comparisons nondestructive
Use (diff -x manifest.json) instead of removing the manifest.json files.
Also rename the helper from destructiveCheckDirImageAreEqual to
assertDirImagesAreEqual.
2017-05-10 12:01:09 +02:00
Miloslav Trmač
4afafe9538 Log output of docker/distribution registries instead of sending it to /dev/null
This is useful for diagnosing failures which are logged locally but not
reported to the clients.
2017-05-10 12:01:09 +02:00
Antonio Murdaca
50dda3492c Merge pull request #313 from runcom/update-imgspec-rc5
oci: update to image-spec v1 RC5
2017-05-10 11:57:20 +02:00
Antonio Murdaca
f4a44f00b8 integration: disable check with image-tools for image-spec RC5
We need https://github.com/opencontainers/image-tools/pull/144 first

Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2017-05-10 11:35:19 +02:00
Antonio Murdaca
dd13a0d60b oci: update to image-spec v1 RC5
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2017-05-10 11:35:16 +02:00
Antonio Murdaca
80b751a225 Merge pull request #332 from mtrmac/schema12
Run-time manifest type support (e.g. schema1/schema2) autodetection re-vendor + integration tests
2017-05-10 11:04:28 +02:00
Miloslav Trmač
8556dd1aa1 Add tests for automatic schema conversion depending on registry
In addition to the default registry in the OpenShift cluster, start two
more (one known to support s1 only, one known to support s1+s2), and
also a docker/distribution s1-only registry.

Then test that copying images around works as expected.

NOTE: The docker/distribution s1-only tests currently fail and are
disabled.  See the added comment for details.
2017-05-09 15:08:11 +02:00
Miloslav Trmač
c43e4cffaf Collect "processes" to kill in openshiftCluster instead of named members
We don’t really need to differentiate between the master/registry, we
just want to terminate them, maybe in the right order.  So, collect them
in an array instead of using separate members.

This will make it easier to have more registry instances in the near
future.

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2017-05-09 15:08:11 +02:00
Miloslav Trmač
44ee5be6db Do not store a *check.C in openshiftCluster
The *check.C object can not be reused across tests, so storing it in
openshiftCluster is incorrect (and leads to weird behavior like
assertion failures being silently ignored).  So far this hasn't really
been an issue because we have been using the *check.C only in SetUpSuite
and TearDownSuite, and the changes to this have turned out to be
unnecessary after all, but this is still the right thing to do.

This is more or less
> s/c\./cluster\./g; s/cluster\.c/c/g
(paying more attention to the syntax) and corresponding modifications
to the method declarations.

Does not change behavior, apart from using the correct *check.C in
CopySuite.TearDownSuite.
2017-05-09 15:08:11 +02:00
Miloslav Trmač
cfd1cf6def Abort in fileFromFixture if a replacement template is not found
This makes the fixture editing more robust against typos or unexpected
changes (if the “fixture” comes from third parties, like the OpenShift
registry configuration file).
2017-05-09 15:08:11 +02:00
Miloslav Trmač
15ce6488dd Move fileFromFixture from copy_test.go to utils.go
… to make it possible to call it from openshift.go.

Does not change behavior.
2017-05-09 15:08:11 +02:00
Miloslav Trmač
4f199f86f7 Split a startRegistryProcess from openshiftCluster.startRegistry
The helper will be reused in the future.  For now, this does not change
behavior.
2017-05-09 15:08:11 +02:00
Miloslav Trmač
1f1f6801bb Split openshiftCluster.startRegistry into prepareRegistryConfig and startRegistry
This separates creation of the account and configuration, which can be
shared across service instances, from actually starting the registry; we
will soon start several of them.

Only splits a function, does not change behavior.
2017-05-09 15:08:11 +02:00
Miloslav Trmač
1ee776b09b Vendor after merging mtrmac/image:schema12 2017-05-09 15:07:49 +02:00
Antonio Murdaca
8bb9d5f0f2 Merge pull request #335 from runcom/fix-crio-openshift
Fix docker-daemon to containers-storage copy
2017-05-06 12:29:16 +02:00
Antonio Murdaca
88ea901938 vendor c/image to fix docker-daemon -> containers-storage copy
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2017-05-06 12:09:29 +02:00
Antonio Murdaca
fd41f20bb8 Merge pull request #314 from giuseppe/ostree
skopeo: support copy to OSTree storage
2017-04-24 22:58:51 +02:00
Giuseppe Scrivano
1d5c681f0f skopeo: support copy to OSTree storage
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
2017-04-24 20:23:45 +02:00
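Usage would look roughly like this (a sketch: the image name is arbitrary, and the `@/path` suffix selecting the OSTree repository is an assumption about the default layout):
```sh
$ skopeo copy docker://busybox:latest ostree:busybox@/ostree/repo
```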
Antonio Murdaca
c467afa37c Merge pull request #328 from runcom/new-release
New release: v0.1.19
2017-04-14 12:45:56 +02:00
Antonio Murdaca
53a90e51d4 bump again to v0.1.20-dev
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2017-04-14 12:21:28 +02:00
Antonio Murdaca
62e3747a11 bump to v0.1.19
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2017-04-14 12:21:09 +02:00
Miloslav Trmač
882ba36ef8 Merge pull request #327 from cyphar/bump-c-i
vendor: update containers/image@fb36437
2017-04-13 16:39:07 +02:00
Aleksa Sarai
3b699c5248 vendor: update c/i@fb36437e0f
This change includes the docker-archive: transport, allowing for
entirely local manipulation of Docker images.

Signed-off-by: Aleksa Sarai <asarai@suse.de>
2017-04-13 23:58:45 +10:00
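That enables daemon-less, registry-less round trips along these lines (hypothetical paths):
```sh
# Save a registry image into a Docker-format tarball...
$ skopeo copy docker://busybox:latest docker-archive:/tmp/busybox.tar:busybox:latest
# ...then read it back out locally, with no daemon or registry involved.
$ skopeo copy docker-archive:/tmp/busybox.tar dir:/tmp/busybox-unpacked
```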
Aleksa Sarai
b82945b689 vendor: fix non-root imports
vndr has never supported non-root imports but it used to not produce
errors. Newer versions of vndr will not clone anything if the
vendor.conf doesn't "look right".

Signed-off-by: Aleksa Sarai <asarai@suse.de>
2017-04-13 23:54:23 +10:00
Antonio Murdaca
3851d89b17 Merge pull request #326 from estesp/gracefully-allow-no-tag-list
Allow inspect to work even if tag list blocked
2017-04-06 20:24:00 +02:00
Phil Estes
4360db9f6d Allow inspect to work even if tag list blocked
Some registries may choose to block the "list all tags" endpoint for
performance or other reasons. In this case we should still allow an
inspect which will not include the "tag list" in the output.

Signed-off-by: Phil Estes <estesp@gmail.com>
2017-04-06 13:48:19 -04:00
Antonio Murdaca
355de6c757 Merge pull request #324 from runcom/revndr-c/image
revendor c/image for OCIConfig
2017-04-04 18:29:55 +02:00
Antonio Murdaca
ceacd8d885 revendor c/image for OCIConfig
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2017-04-04 17:49:51 +02:00
Antonio Murdaca
bc36bb416b Merge pull request #316 from mtrmac/change-sigstore-layout
Vendor mtrmac/image:change-sigstore-layout
2017-04-03 23:48:41 +02:00
Miloslav Trmač
e86ff0e79d Vendor after merging mtrmac/image:change-sigstore-layout 2017-04-03 23:26:18 +02:00
Miloslav Trmač
590703db95 Merge pull request #322 from mtrmac/x-registry-supports-signatures
Add support for the X-Registry-Supports-Signatures API extension
2017-04-03 16:40:01 +02:00
Miloslav Trmač
727212b12f Add CopySuite.TestCopyAtomicExtension
… testing signature reading and writing using the
X-Registry-Supports-Signatures extension, and its
interoperability/equivalence with the atomic: native OpenShift API.
2017-03-30 19:35:44 +02:00
Miloslav Trmač
593bdfe098 Vendor after merging mtrmac/image:x-registry-supports-signatures 2017-03-30 19:35:36 +02:00
Miloslav Trmač
ba22d17d1f Merge pull request #298 from mtrmac/openpgp
Vendor in non-gpgme support, and update tests and build machinery
2017-03-29 22:12:36 +02:00
Miloslav Trmač
0caee746fb Vendor after merging mtrmac/image:openpgp, + other updates
Primarily vendor after merging mtrmac/image:openpgp.

Then update for the SigningMechanism API change.

Also skip signing tests if the GPG mechanism does not support signing.

Also abort some of the tests early instead of trying to use invalid (or
nil) values.

The current master of image-tools does not build with Go 1.6, so keep
using an older release.

Also requires adding a few more dependencies of our updated
dependencies.
2017-03-29 20:54:18 +02:00
Miloslav Trmač
d2d41ebc33 Propagate BUILDTAGS through the build process
… so that any command-line overrides are respected.
2017-03-29 20:50:45 +02:00
Miloslav Trmač
9272b5177e Merge pull request #321 from mtrmac/remove-temp-dir
Trivial bug fixes to `CopySuite.TestCopyDockerSigstore`
2017-03-28 17:10:52 +02:00
Miloslav Trmač
62967259a4 Remove a copy&pasted comment 2017-03-28 16:41:15 +02:00
Miloslav Trmač
224b54c367 Uncomment a “defer os.RemoveAll(tmpDir)”
We are running the integration tests in a container, so this does not
_really_ matter, at least right now—mostly a matter of principle.
2017-03-28 16:40:28 +02:00
Miloslav Trmač
0734c4ccb3 Merge pull request #320 from mtrmac/openshift-shell
Add a commented-out CopySuite.TestRunShell
2017-03-27 17:36:14 +02:00
Miloslav Trmač
6b9345a5f9 Add a commented-out CopySuite.TestRunShell
We are maintaining code to set up and run registries, including the
fairly complex setup for Atomic Registry, in the integration tests.
This is all useful for experimentation in shell, and the easiest way to
do that is to add a “test” which, after all the set up is done, simply
starts a shell.

This is gated by a build tag, so it does not affect normal test runs.

A possible alternative would be to convert all of the setup code not to
depend on check.C and testing.T, but that would be fairly cumbersome due
to how prevalent c.Logf and c.Assert are throughout the setup code.
Especially the natural replacement of c.Assert with a panic() would be
pretty ugly, and adding real error handling to all of that would make
the code noticeably longer.  The build tag and copy&pasting a command
works just as well, at least for now.

(It is not conveniently possible to create a new “main program” which
manually creates a check.C and testing.T just for the purpose of running
the setup code either; check.C can be created given a testing.T, but
testing.T is only created by testing.MainStart, which does not allow us
to submit a non-test method; and testing.MainStart is excluded from the
Go compatibility promise.)
2017-03-27 17:01:29 +02:00
Miloslav Trmač
ff5694b1a6 Merge pull request #319 from kofalt/insecure-policy-flag-redux
Insecure policy flag redux
2017-03-25 00:39:43 +01:00
Nathaniel Kofalt
467a574e34 Add documentation for --insecure-policy flag
Signed-off-by: Nathaniel Kofalt <nathaniel@kofalt.com>
2017-03-24 17:30:01 -05:00
Kushal Das
4043ecf922 Adds --insecure-policy flag
This patch adds a new flag --insecure-policy.
Closes #181: we can now directly use the tool with the
above-mentioned flag without using a policy file.

Signed-off-by: Kushal Das <mail@kushaldas.in>
2017-03-24 17:10:44 -05:00
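With the flag, skopeo runs without consulting a policy file at all, e.g. (hypothetical destination path):
```sh
$ skopeo --insecure-policy copy docker://busybox:latest dir:/tmp/busybox
```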
Antonio Murdaca
e052488674 Merge pull request #318 from enoodle/inspect_cmd_cert_path_to_cert_dir
Fix wrong naming of cert-dir argument in inspect command
2017-03-21 10:40:47 +01:00
Erez Freiberger
52aade5356 fix cert-path references in completions/bash/skopeo 2017-03-20 16:16:57 +02:00
Erez Freiberger
1491651ea9 adding tests 2017-03-20 12:53:34 +02:00
Erez Freiberger
d969934fa4 Fix wrong naming of cert-dir argument
Antonio Murdaca
bda45f0d60 Merge pull request #317 from mtrmac/openshift-distribution-api
Update OpenShift in integration tests to get the docker/distribution API extensions
2017-03-20 08:42:35 +01:00
Miloslav Trmač
96e579720e Update OpenShift from 1.3.0-alpha.3 to 1.5.0-alpha.3
This is primarily to get the signature access docker/distribution API
extension.

To make it work, two updates to the test harness are necessary:

- Change the expected output of (oadm policy add-cluster-role-to-group)
- Don't expect (openshift start master) to create .kubeconfig files
  for the registry service.

  As of https://github.com/openshift/origin/pull/10830 ,
  openshift.local.config/master/openshift-registry.kubeconfig is no longer
  autogenerated.  Instead, do what (oadm registry) does, creating a
  service account and a cluster policy role binding.  Then manually create
  the necessary certificates and a .kubeconfig instead of using the
  service account in a pod.
2017-03-18 20:25:01 +01:00
Miloslav Trmač
9d88725a97 Update tests to handle OpenShift changing the schema1 signatures
The integrated registry used to return the original signature unmodified
in 1.3.0-alpha.3; in 1.5.0-alpha.3 it regenerates a new one, so allow that
when comparing the original and copied image.
2017-03-18 20:24:22 +01:00
Miloslav Trmač
def5f4a11a Add an openshiftCluster.clusterCmd helper method
… to help with creating an exec.Cmd in the cluster’s directory and with
the appropriate environment variables.
2017-03-18 20:24:22 +01:00
Miloslav Trmač
c4275519ae Split runExecCmdWithInput from runCommandWithInput
This does not change behavior yet; runExecCmdWithInput will be used in
the future by callers who need to modify the exec.Cmd.
2017-03-18 20:24:22 +01:00
Antonio Murdaca
b164a261cf Merge pull request #315 from runcom/fix-panic
fix copy panic on http response with tls-verify true
2017-03-04 00:17:10 +01:00
Antonio Murdaca
f89bd82dcd fix copy panic on http response with tls-verify true
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2017-03-03 23:59:59 +01:00
Antonio Murdaca
ab4912a5a1 Merge pull request #288 from runcom/pluggable-transports
vendor c/image for pluggable transports
2017-03-02 15:39:52 +01:00
Antonio Murdaca
85d737fc29 vendor c/image for pluggable transports
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2017-03-02 15:21:58 +01:00
Miloslav Trmač
0224d8cd38 Merge pull request #309 from erikh/check-close-errors
cmd/skopeo: check errors on close functions for image handling.
2017-02-27 15:28:15 +01:00
Erik Hollensbe
f0730043c6 vendor.conf,vendor: vndr update for containers/image
Signed-off-by: Erik Hollensbe <github@hollensbe.org>
2017-02-27 02:15:36 -08:00
Erik Hollensbe
e0efa0c2b3 cmd/skopeo: check errors on close functions for image handling.
Signed-off-by: Erik Hollensbe <github@hollensbe.org>
2017-02-27 01:55:49 -08:00
Antonio Murdaca
b9826f0c42 Merge pull request #307 from cyphar/vendor-ci193
[donotmerge] vendor: update to c/i@copy-obey-destination-compression
2017-02-17 14:03:28 +01:00
Aleksa Sarai
94d6767d07 vendor: update to c/i@3f493f2e5d
This includes fixes to docker-daemon's GetBlob, which will now
decompress blobs (making c/i/copy act sanely when trying to copy from a
docker-daemon to uncompressed destinations, as well as making
verification actually work properly).

Signed-off-by: Aleksa Sarai <asarai@suse.de>
2017-02-17 23:45:05 +11:00
Antonio Murdaca
226dc99ad4 Merge pull request #306 from cyphar/oci-roundtrip-tests
integration: add OCI <-> Docker roundtrip tests
2017-02-16 13:07:42 +01:00
Aleksa Sarai
eea384cdf7 integration: add upstream validator to OCI roundtrip tests
In order to make sure that we don't create invalid OCI images that are
consistently invalid, add additional checks to ensure that both of the
generated OCI images in the round-trip test are valid according to the
upstream validator.

This commit vendors the following packages (deep breath):
* oci/image-tools@7575a09363, which requires
* oci/image-spec@v1.0.0-rc4 [revendor, but is technically an update
  because I couldn't figure out what version was vendored last time]
* oci/runtime-spec@v1.0.0-rc4
* xeipuuv/gojsonschema@6b67b3fab7
* xeipuuv/gojsonreference@e02fc20de9
* xeipuuv/gojsonpointer@e0fe6f6830
* camlistore/go4@7ce08ca145

Signed-off-by: Aleksa Sarai <asarai@suse.de>
2017-02-16 22:48:32 +11:00
Aleksa Sarai
76f5c6d4c5 integration: add OCI <-> Docker roundtrip tests
This test is just a general smoke test to make sure there are no errors
with skopeo, but also verifying that after passing through several
translation steps an OCI image will remain in fully working order.

Signed-off-by: Aleksa Sarai <asarai@suse.de>
2017-02-16 22:48:32 +11:00
Aleksa Sarai
79ef111398 vendor: update c/image@a074c669cf
This includes fixes required to add OCI roundtrip integration tests
(namely f9214e1d9d5d ("oci: remove MIME type autodetection")).

Signed-off-by: Aleksa Sarai <asarai@suse.de>
2017-02-16 21:22:37 +11:00
Antonio Murdaca
af2998040a Merge pull request #302 from runcom/new-rel-v0.1.18
New release: v0.1.18
2017-02-02 18:08:18 +01:00
Antonio Murdaca
1f6c140716 bump to v0.1.19-dev
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2017-02-02 17:42:48 +01:00
Antonio Murdaca
b08008c5b2 bump to v0.1.18
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2017-02-02 17:42:32 +01:00
Antonio Murdaca
c011e81b38 Merge pull request #301 from runcom/perf-gain
vendor c/image@c1893ff40c
2017-02-02 17:41:13 +01:00
Antonio Murdaca
683d45ffd6 vendor c/image@c1893ff40c
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2017-02-02 17:23:47 +01:00
Antonio Murdaca
07d6e7db03 Merge pull request #300 from runcom/fix-v2s2-oci-conversion
oci: fix config conversion from docker v2s2
2017-01-31 18:42:31 +01:00
Antonio Murdaca
02ea29a99d oci: fix config conversion from docker v2s2
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2017-01-31 18:25:18 +01:00
Antonio Murdaca
f1849c6a47 Merge pull request #297 from runcom/remove-engine-api
docker: remove github.com/docker/engine-api
2017-01-31 17:27:13 +01:00
Antonio Murdaca
03bb3c2f74 docker: remove github.com/docker/engine-api
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2017-01-31 17:09:02 +01:00
Antonio Murdaca
fdf0bec556 Merge pull request #296 from runcom/fix-registry-auth
docker: fix registry authentication issues
2017-01-30 17:00:52 +01:00
Antonio Murdaca
c736e69d48 docker: fix registry authentication issues
vendor containers/image@d23efe9c5a
fix https://bugzilla.redhat.com/show_bug.cgi?id=1413987

Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2017-01-30 16:43:00 +01:00
Antonio Murdaca
d7156f9b3d Merge pull request #294 from runcom/fix-image-spec-dep
fix OCI image-spec dependency
2017-01-28 13:31:46 +01:00
Antonio Murdaca
15b3fdf6f4 fix OCI image-spec dependency
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2017-01-28 12:53:41 +01:00
Miloslav Trmač
0e1ba1fb70 Merge pull request #293 from mtrmac/unverified-contents
REPLACE: Vendor in mtrmac/image:unverified-contents and add a (skopeo dump-untrusted-signature-contents-without-verification)
2017-01-23 17:34:11 +01:00
Miloslav Trmač
ee590a9795 Add an undocumented (skopeo untrusted-signature-dump-without-verification)
This is a bit better than raw (gpg -d $signature), and it allows testing
of the signature.GetSignatureInformationWithoutVerification function;
but, still, keeping it hidden because relying on this in common
workflows is probably a bad idea and we don't _need_ to expose it right
now.

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2017-01-23 16:45:18 +01:00
Miloslav Trmač
076d41d627 Vendor after merging mtrmac/image:unverified-contents 2017-01-23 16:45:03 +01:00
Miloslav Trmač
4830d90c32 Merge pull request #292 from mtrmac/ParseNormalizedNamed
REPLACE: Vendor in mtrmac/image:ParseNormalizedNamed
2017-01-20 00:02:14 +01:00
Miloslav Trmač
2f8cc39a1a Vendor after merging mtrmac/image:ParseNormalizedNamed
… and use the master branch of docker/distribution which provides
docker/distribution/reference.ParseNormalizedNamed.
2017-01-19 23:11:08 +01:00
Miloslav Trmač
8602471486 Merge pull request #291 from mtrmac/erikh-kubernetes-apimachinery
Vendor after merging erikh/image:kube-fix
2017-01-19 20:37:33 +01:00
Erik Hollensbe
1ee74864e9 Vendor after merging erikh/image:kube-fix
Based on https://github.com/projectatomic/skopeo/pull/289 by Erik
Hollensbe <github@hollensbe.org>

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2017-01-19 20:17:36 +01:00
Antonio Murdaca
81404fb71c Merge pull request #286 from AkihiroSuda/man2
Makefile: /usr/bin/go-md2man -> go-md2man
2017-01-12 15:29:57 +01:00
Akihiro Suda
845ad88cec Makefile: /usr/bin/go-md2man -> go-md2man
Signed-off-by: Akihiro Suda <suda.akihiro@lab.ntt.co.jp>
2017-01-12 09:57:52 +00:00
Antonio Murdaca
9b6b57df50 Merge pull request #284 from runcom/switch-to-vndr
switch to vndr
2017-01-09 18:11:22 +01:00
Antonio Murdaca
fefeeb4c70 switch to vndr
vndr is almost exactly the same as our good old hack/vendor.sh, except
it's cleaner and it allows re-vendoring just one dependency if needed
(which we do a lot for containers/image).

Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2017-01-09 17:54:17 +01:00
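For reference, `vendor.conf` lists one dependency per line: import path, revision, and optionally an alternate clone URL (illustrative entries, not the revisions pinned in this range):
```
github.com/containers/image master
github.com/containers/storage master https://github.com/containers/storage
```
A single dependency is then re-vendored by naming it, e.g. `vndr github.com/containers/image`.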
Antonio Murdaca
bbc0c69624 Merge pull request #283 from runcom/oci-digest
*: switch to opencontainers/go-digest
2017-01-09 16:51:47 +01:00
Antonio Murdaca
dcfcfdaa1e *: switch to opencontainers/go-digest
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2017-01-09 16:20:00 +01:00
Antonio Murdaca
e41b0d67d6 Merge pull request #282 from runcom/rev-c-image-2
bump c/image
2017-01-08 13:26:08 +01:00
Antonio Murdaca
1ec992abd1 bump c/image
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2017-01-08 13:10:07 +01:00
Antonio Murdaca
b8ae5c6054 Merge pull request #278 from runcom/up-oci-image
update opencontainers/image-spec to v1.0.0-rc3
2016-12-23 12:07:14 +01:00
Antonio Murdaca
7c530bc55f update opencontainers/image-spec to v1.0.0-rc3
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2016-12-23 11:39:27 +01:00
Antonio Murdaca
3377542e27 Merge pull request #277 from runcom/up-c-image
update c/image
2016-12-22 17:59:22 +01:00
Antonio Murdaca
cf5d9ffa49 update c/image
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2016-12-22 17:43:37 +01:00
Antonio Murdaca
686c3fcd7a Merge pull request #265 from masters-of-cats/vendor-errorspkg
Add pkg/errors dependency
2016-12-19 17:47:15 +01:00
George Lestaris
78a24cea81 Vendors new containers/image version using pkg/errors
Signed-off-by: George Lestaris <glestaris@pivotal.io>
2016-12-19 16:25:50 +00:00
Antonio Murdaca
fd93ebb78d Merge pull request #275 from glestaris/patch-1
Fix typo in CONTRIBUTING.md
2016-12-16 12:30:43 +01:00
George Lestaris
56dd3fc928 Fix typo
Signed-off-by: Gareth Clay <gclay@pivotal.io>
Signed-off-by: George Lestaris <glestaris@pivotal.io>
2016-12-16 10:56:57 +00:00
Antonio Murdaca
d830304e6d Merge pull request #179 from nalind/vendor-storage
Vendor containers/storage
2016-12-15 18:49:51 +01:00
Nalin Dahyabhai
4960f390e2 Vendor containers/storage, update containers/image
Vendor containers/storage, and its dependencies github.com/pborman/uuid
and github.com/mistifyio/go-zfs, which we didn't already use.

Update the build Dockerfile to install their dependencies.

Add scriptlets that try to detect whether or not we need to use the
"libdm_no_deferred_remove" and/or "btrfs_noversion" build tags.

Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>
2016-12-15 12:29:07 -05:00
Nalin Dahyabhai
9ba6dd71d7 Initialize reexec() handlers at startup-time
When we start up, initialize handlers so that we can import blobs
correctly when using the storage library.

Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>
2016-12-09 13:31:54 -05:00
Antonio Murdaca
dd6441b546 Merge pull request #273 from mtrmac/FIXMEs
Remove an obsolete FIXME
2016-12-09 17:28:19 +01:00
Miloslav Trmač
09cc6c3199 Remove an obsolete FIXME
With https://github.com/containers/image/pull/183 , this no longer
applies.
2016-12-09 17:10:32 +01:00
Antonio Murdaca
7f7b648443 Merge pull request #272 from runcom/bump-v0.1.17-and-again
Bump v0.1.17 and again to v0.1.18-dev
2016-12-08 21:15:36 +01:00
Antonio Murdaca
cc571eb1ea bump to v0.1.18-dev
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2016-12-08 20:58:26 +01:00
Antonio Murdaca
b3b4e2b8f8 bump to v0.1.17
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2016-12-08 20:58:03 +01:00
Antonio Murdaca
28647cf29f Merge pull request #271 from runcom/fix-nokey-err
provide better error when key not found
2016-12-08 19:49:49 +01:00
Antonio Murdaca
0c8511f222 provide better error when key not found
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2016-12-08 18:51:30 +01:00
Antonio Murdaca
a515fefda9 Merge pull request #264 from runcom/split-flags
cmd: per command tls flags
2016-12-08 18:12:11 +01:00
Antonio Murdaca
1215f5fe69 cmd: per command tls flags
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2016-12-08 17:56:22 +01:00
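After the split, TLS verification is controlled per command (and per side for `copy`) rather than by one global flag, e.g. (a sketch using a local registry):
```sh
$ skopeo inspect --tls-verify=false docker://localhost:5000/busybox
$ skopeo copy --src-tls-verify=false docker://localhost:5000/busybox dir:/tmp/busybox
```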
Antonio Murdaca
93cde78d9b cmd/skopeo: hide layers command
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2016-12-08 17:56:22 +01:00
Antonio Murdaca
1730fd0d5f Merge pull request #269 from nalind/buildtags
Makefile: run "go" with $(BUILDTAGS)
2016-12-08 17:10:45 +01:00
Nalin Dahyabhai
7d58309a4f Makefile: run "go" with $(BUILDTAGS)
Run the "go" command with the $(BUILDTAGS) makefile variable passed in
as build tags.  We don't currently set it, but we'll need to eventually,
and adding it now does no harm.

Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>
2016-12-08 10:49:36 -05:00
Antonio Murdaca
a865c07818 Merge pull request #267 from cyphar/vendor-image
vendor: update to containers/image@a791c54467
2016-12-08 10:03:48 +01:00
Aleksa Sarai
cd269a4558 vendor: update to containers/image@a791c54467
Signed-off-by: Aleksa Sarai <asarai@suse.de>
2016-12-08 12:32:36 +11:00
Miloslav Trmač
2b3af4ad51 Merge pull request #266 from runcom/update-readme-contrib
README.md: add dependencies management tips
2016-12-05 17:38:21 +01:00
Antonio Murdaca
6ec338aa30 README.md: add dependencies management tips
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2016-12-05 13:07:20 +01:00
Antonio Murdaca
fb61d0c98f Merge pull request #262 from runcom/fix-quay-io
docker: fix ping routine
2016-12-02 19:16:31 +01:00
Antonio Murdaca
7620193722 docker: fix ping routine
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2016-12-02 19:00:38 +01:00
Miloslav Trmač
f8bd406deb Merge pull request #261 from runcom/fix-inline-df-comments
Revert "Dockerfile: remove inline comments"
2016-12-02 18:28:40 +01:00
Antonio Murdaca
d9b60e7fc9 Revert "Dockerfile: remove inline comments"
This reverts commit 3dcdb1ff7d.

Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2016-12-01 22:03:10 +01:00
Miloslav Trmač
d0a41799da Merge pull request #260 from runcom/rerevendor-cimage
revendor containers/image
2016-12-01 19:19:44 +01:00
Antonio Murdaca
dcc5395140 revendor containers/image
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2016-12-01 19:03:10 +01:00
Antonio Murdaca
980ff3eadd Merge pull request #249 from runcom/layers-federation
Support layers federation
2016-11-30 22:14:30 +01:00
Antonio Murdaca
f36fde92d6 Supports layers federation
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2016-11-30 19:46:54 +01:00
Antonio Murdaca
6b616d1730 Merge pull request #256 from rhatdan/bash_completions
Complete bash completions for skopeo
2016-11-30 19:39:25 +01:00
Dan Walsh
6dc36483f4 Complete bash completions for skopeo
The current code only handled commands, not the options.
2016-11-30 13:22:57 -05:00
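A minimal sketch of what option-aware completion involves (hypothetical, far simpler than the shipped `completions/bash/skopeo`):
```sh
_skopeo() {
    local cur=${COMP_WORDS[COMP_CWORD]}
    if [[ "$cur" == -* ]]; then
        # Complete options, not just subcommands.
        COMPREPLY=( $(compgen -W "--help --version --insecure-policy" -- "$cur") )
    else
        COMPREPLY=( $(compgen -W "copy inspect delete" -- "$cur") )
    fi
}
complete -F _skopeo skopeo
```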
Antonio Murdaca
574b764391 Merge pull request #257 from runcom/revendor-cimage
revendor containers/image
2016-11-30 18:24:05 +01:00
Antonio Murdaca
1e795e038b revendor containers/image
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2016-11-30 18:07:35 +01:00
Antonio Murdaca
7cca84ba57 Merge pull request #254 from runcom/enable-cli-userpass
use user/pass flags
2016-11-30 17:29:48 +01:00
Antonio Murdaca
342ba18561 use user/pass flags
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2016-11-30 17:10:42 +01:00
Antonio Murdaca
d69c51e958 Merge pull request #248 from Crazykev/use-docker-digest
Use docker/distribution/digest
2016-11-28 17:31:33 +01:00
Crazykev
8b73542d89 use docker/distribution/digest
Signed-off-by: Crazykev <crazykev@zju.edu.cn>
2016-11-28 16:14:51 +00:00
Crazykev
1c76bc950d update containers/image vendor
Signed-off-by: Crazykev <crazykev@zju.edu.cn>
2016-11-28 11:36:18 +00:00
Antonio Murdaca
141212f27d Merge pull request #255 from runcom/fix-Dockerfile
Dockerfile: remove inline comments
2016-11-25 18:27:05 +01:00
Antonio Murdaca
3dcdb1ff7d Dockerfile: remove inline comments
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2016-11-25 17:35:59 +01:00
Antonio Murdaca
4620d5849c Merge pull request #252 from mtrmac/one-skopeo-in-all
Only build one skopeo binary by (make all)
2016-11-23 17:48:35 +01:00
Miloslav Trmač
a0af3619d3 Only build one skopeo binary by (make all)
Both (make binary) and (make binary-static) compile the code and create
a skopeo binary, so (make all) should only depend on one of them.

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2016-11-23 16:57:37 +01:00
Antonio Murdaca
fcdf9c1b91 Merge pull request #243 from hustcat/static-branch
Add static compile target in Makefile
2016-11-03 09:00:26 +01:00
fightingdu
12cc3a9cbf Add static compile target in Makefile.
Also install `go-md2man` in Dockerfile.build.

Signed-off-by: Ye Yin <eyniy@qq.com>
2016-11-03 15:31:50 +08:00
Antonio Murdaca
bd816574ed Merge pull request #242 from runcom/fix-skopeo-layers
Fix skopeo layers and deprecate it
2016-11-02 17:36:36 +01:00
Antonio Murdaca
b3b322e10b deprecate skopeo layers
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2016-11-02 17:10:25 +01:00
Antonio Murdaca
2c5532746f cmd/skopeo/layers: fix index out of range
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2016-11-02 09:20:43 +01:00
Miloslav Trmač
c70e58e6b5 Merge pull request #224 from aweiteka/default-policy
Add insecureAcceptAnything to default docker-daemon transport
2016-10-31 20:18:30 +01:00
Aaron Weitekamp
879dbc3757 Add insecureAcceptAnything to default docker-daemon transport
Signed-off-by: Aaron Weitekamp <aweiteka@redhat.com>
2016-10-31 14:43:35 -04:00
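The resulting default policy has roughly this shape (a sketch of `default-policy.json`; the `docker-daemon` entry is what this commit adds):
```json
{
    "default": [
        {"type": "insecureAcceptAnything"}
    ],
    "transports": {
        "docker-daemon": {
            "": [{"type": "insecureAcceptAnything"}]
        }
    }
}
```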
Antonio Murdaca
1f655f3f09 Merge pull request #238 from runcom/integrate-all-the-things-into-master
Pull in schema1 and docker-daemon
2016-10-21 17:13:01 +02:00
Antonio Murdaca
69e08d78ad Pull in schema1 and docker-daemon
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2016-10-21 16:48:39 +02:00
Antonio Murdaca
f4f69742ad Merge pull request #237 from so0k/add-osx-instructions
Add OSX instructions to Readme
2016-10-21 12:22:49 +02:00
Vincent De Smet
066463201a Add OSX instructions to Readme 2016-10-21 17:58:55 +08:00
Antonio Murdaca
5d589d6d54 Merge pull request #233 from runcom/add-defyaml
add sigstore default configuration
2016-10-12 19:19:15 +02:00
Antonio Murdaca
012f89d16b add sigstore default configuration
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2016-10-12 18:58:14 +02:00
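Judging by the Makefile change later in this comparison (which installs `default.yaml` into `registries.d` and creates `/var/lib/atomic/sigstore`), the shipped configuration is roughly (a sketch, not the verbatim file):
```yaml
default-docker:
    sigstore-staging: file:///var/lib/atomic/sigstore
```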
Antonio Murdaca
ce42c70d4c Merge pull request #232 from runcom/add-better-errors
vendor containers/image for better registry errors
2016-10-12 15:14:46 +02:00
Antonio Murdaca
a48f7597e3 vendor containers/image for better registry errors
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2016-10-12 14:53:46 +02:00
Antonio Murdaca
d166555fb4 Merge pull request #231 from runcom/add-reg-user-agent
vendor containers/image for DockerRegistryUserAgent
2016-10-11 18:45:42 +02:00
Antonio Murdaca
5721355da7 vendor containers/image for DockerRegistryUserAgent
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2016-10-11 18:23:09 +02:00
Antonio Murdaca
c00868148e Merge pull request #229 from runcom/fix-fork-docker-reference
vendor containers/image with docker/docker/reference forked
2016-10-11 18:04:50 +02:00
Antonio Murdaca
7f757cd253 vendor containers/image with docker/docker/reference forked
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2016-10-11 17:42:19 +02:00
Antonio Murdaca
a720c22303 Merge pull request #230 from mtrmac/image-refactor
Refactor c/i/image
2016-10-11 16:13:42 +02:00
Miloslav Trmač
bd992e3872 Vendor after merging mtrmac/image:image-refactor
… and update for API changes
2016-10-11 15:53:20 +02:00
Antonio Murdaca
5207447327 Merge pull request #228 from runcom/fix-authConfig
vendor containers/image for DockerAuthConfig
2016-10-11 12:36:57 +02:00
Antonio Murdaca
f957e894e6 vendor containers/image for DockerAuthConfig
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2016-10-11 12:12:45 +02:00
Antonio Murdaca
0eb841ec8b Merge pull request #226 from runcom/fix-copy-test-manlist
vendor containers/image
2016-10-10 20:06:12 +02:00
Antonio Murdaca
dc1e560d4e vendor containers/image
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2016-10-10 19:46:10 +02:00
Antonio Murdaca
f69a78fa0b Merge pull request #221 from runcom/fix-oci-image-spec
update opencontainers/image-spec
2016-10-01 20:31:42 +02:00
Antonio Murdaca
efb47cf374 update opencontainers/image-spec
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2016-10-01 20:07:43 +02:00
Antonio Murdaca
507d09876d Merge pull request #219 from runcom/bump-containers-image-1
Bump v0.1.16
2016-09-27 21:25:21 +02:00
Antonio Murdaca
34fe924aff bump to v0.1.17-dev
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2016-09-27 21:00:11 +02:00
Antonio Murdaca
03bac73f3a bump to v0.1.16
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2016-09-27 20:59:53 +02:00
Antonio Murdaca
fb51eb21e8 vendor containers/image
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2016-09-27 20:59:34 +02:00
Miloslav Trmač
d0bc564259 Merge pull request #177 from mtrmac/docker-lookaside-tests
Docker lookaside tests
2016-09-27 16:01:16 +02:00
Miloslav Trmač
fc3d809ce2 Add sigstore tests
Also includes a smoke test for (skopeo delete) (really verifying the
sigstore deletion).
2016-09-27 15:21:24 +02:00
Miloslav Trmač
947ac8b2ab Replace preparePolicyFixture by fileFromFixture
This will make it useful for other template files.

Also rewrite it to do the edits internally instead of calling sed.
2016-09-27 15:21:20 +02:00
Antonio Murdaca
fa72d057db Merge pull request #217 from runcom/bump
Bump to 0.1.15 and then again to 0.1.16-dev
2016-09-26 18:21:02 +02:00
Antonio Murdaca
64eb855338 bump 0.1.16-dev
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2016-09-26 17:55:40 +02:00
1074 changed files with 228106 additions and 44993 deletions

CONTRIBUTING.md

@@ -8,7 +8,7 @@ that we follow.
 * [Reporting Issues](#reporting-issues)
 * [Submitting Pull Requests](#submitting-pull-requests)
 * [Communications](#communications)
-* [Becomign a Maintainer](#becoming-a-maintainer)
+* [Becoming a Maintainer](#becoming-a-maintainer)

 ## Reporting Issues

Dockerfile

@@ -1,22 +1,18 @@
 FROM fedora
 RUN dnf -y update && dnf install -y make git golang golang-github-cpuguy83-go-md2man \
    # storage deps
    btrfs-progs-devel \
    device-mapper-devel \
    # gpgme bindings deps
    libassuan-devel gpgme-devel \
-   gnupg \
-   # registry v1 deps
-   xz-devel \
-   python-devel \
-   python-pip \
-   swig \
-   redhat-rpm-config \
-   openssl-devel \
-   patch
+   ostree-devel \
+   gnupg
-# Install three versions of the registry. The first is an older version that
+# Install two versions of the registry. The first is an older version that
 # only supports schema1 manifests. The second is a newer version that supports
 # both. This allows integration-cli tests to cover push/pull with both schema1
-# and schema2 manifests. Install registry v1 also.
+# and schema2 manifests.
 ENV REGISTRY_COMMIT_SCHEMA1 ec87e9b6971d831f0eff752ddb54fb64693e51cd
 ENV REGISTRY_COMMIT 47a064d4195a9b56133891bbb13620c3ac83a827
 RUN set -x \
@@ -28,22 +24,12 @@ RUN set -x \
 && (cd "$GOPATH/src/github.com/docker/distribution" && git checkout -q "$REGISTRY_COMMIT_SCHEMA1") \
 && GOPATH="$GOPATH/src/github.com/docker/distribution/Godeps/_workspace:$GOPATH" \
    go build -o /usr/local/bin/registry-v2-schema1 github.com/docker/distribution/cmd/registry \
-&& rm -rf "$GOPATH" \
-&& export DRV1="$(mktemp -d)" \
-&& git clone https://github.com/docker/docker-registry.git "$DRV1" \
-# no need for setuptools since we have a version conflict with fedora
-&& sed -i.bak s/setuptools==5.8//g "$DRV1/requirements/main.txt" \
-&& sed -i.bak s/setuptools==5.8//g "$DRV1/depends/docker-registry-core/requirements/main.txt" \
-&& pip install "$DRV1/depends/docker-registry-core" \
-&& pip install file://"$DRV1#egg=docker-registry[bugsnag,newrelic,cors]" \
-&& patch $(python -c 'import boto; import os; print os.path.dirname(boto.__file__)')/connection.py \
-   < "$DRV1/contrib/boto_header_patch.diff" \
-&& dnf -y update && dnf install -y m2crypto
+&& rm -rf "$GOPATH"
 RUN set -x \
 && yum install -y which git tar wget hostname util-linux bsdtar socat ethtool device-mapper iptables tree findutils nmap-ncat e2fsprogs xfsprogs lsof docker iproute \
 && export GOPATH=$(mktemp -d) \
-&& git clone -b v1.3.0-alpha.3 git://github.com/openshift/origin "$GOPATH/src/github.com/openshift/origin" \
+&& git clone -b v1.5.0-alpha.3 git://github.com/openshift/origin "$GOPATH/src/github.com/openshift/origin" \
 && (cd "$GOPATH/src/github.com/openshift/origin" && make clean build && make all WHAT=cmd/dockerregistry) \
 && cp -a "$GOPATH/src/github.com/openshift/origin/_output/local/bin/linux"/*/* /usr/local/bin \
 && cp "$GOPATH/src/github.com/openshift/origin/images/dockerregistry/config.yml" /atomic-registry-config.yml \

Dockerfile.build

@@ -1,5 +1,14 @@
-FROM ubuntu:16.04
-RUN apt-get update
-RUN apt-get install -y golang git-core libgpgme11-dev
+FROM ubuntu:17.04
+RUN apt-get update && apt-get install -y \
+   golang \
+   btrfs-tools \
+   git-core \
+   libdevmapper-dev \
+   libgpgme11-dev \
+   go-md2man \
+   libglib2.0-dev \
+   libostree-dev
 ENV GOPATH=/
 WORKDIR /src/github.com/projectatomic/skopeo

Makefile

@@ -7,8 +7,9 @@ INSTALLDIR=${PREFIX}/bin
 MANINSTALLDIR=${PREFIX}/share/man
 CONTAINERSSYSCONFIGDIR=${DESTDIR}/etc/containers
 REGISTRIESDDIR=${CONTAINERSSYSCONFIGDIR}/registries.d
+SIGSTOREDIR=${DESTDIR}/var/lib/atomic/sigstore
 BASHINSTALLDIR=${PREFIX}/share/bash-completion/completions
-GO_MD2MAN ?= /usr/bin/go-md2man
+GO_MD2MAN ?= go-md2man

 ifeq ($(DEBUG), 1)
   override GOGCFLAGS += -N -l
@@ -31,6 +32,11 @@ GIT_COMMIT := $(shell git rev-parse HEAD 2> /dev/null || true)
 MANPAGES_MD = $(wildcard docs/*.md)
+BTRFS_BUILD_TAG = $(shell hack/btrfs_tag.sh)
+LIBDM_BUILD_TAG = $(shell hack/libdm_tag.sh)
+LOCAL_BUILD_TAGS = $(BTRFS_BUILD_TAG) $(LIBDM_BUILD_TAG)
+BUILDTAGS += $(LOCAL_BUILD_TAGS)

 # make all DEBUG=1
 # Note: Uses the -N -l go compiler options to disable compiler optimizations
 # and inlining. Using these build options allows you to subsequently
@@ -42,12 +48,19 @@ all: binary docs
 binary: cmd/skopeo
    docker build ${DOCKER_BUILD_ARGS} -f Dockerfile.build -t skopeobuildimage .
    docker run --rm --security-opt label:disable -v $$(pwd):/src/github.com/projectatomic/skopeo \
-       skopeobuildimage make binary-local $(if $(DEBUG),DEBUG=$(DEBUG))
+       skopeobuildimage make binary-local $(if $(DEBUG),DEBUG=$(DEBUG)) BUILDTAGS='$(BUILDTAGS)'
+
+binary-static: cmd/skopeo
+   docker build ${DOCKER_BUILD_ARGS} -f Dockerfile.build -t skopeobuildimage .
+   docker run --rm --security-opt label:disable -v $$(pwd):/src/github.com/projectatomic/skopeo \
+       skopeobuildimage make binary-local-static $(if $(DEBUG),DEBUG=$(DEBUG)) BUILDTAGS='$(BUILDTAGS)'

 # Build w/o using Docker containers
 binary-local:
-   go build -ldflags "-X main.gitCommit=${GIT_COMMIT}" -gcflags "$(GOGCFLAGS)" -o skopeo ./cmd/skopeo
+   go build -ldflags "-X main.gitCommit=${GIT_COMMIT}" -gcflags "$(GOGCFLAGS)" -tags "$(BUILDTAGS)" -o skopeo ./cmd/skopeo
+
+binary-local-static:
+   go build -ldflags "-extldflags \"-static\" -X main.gitCommit=${GIT_COMMIT}" -gcflags "$(GOGCFLAGS)" -tags "$(BUILDTAGS)" -o skopeo ./cmd/skopeo

 build-container:
    docker build ${DOCKER_BUILD_ARGS} -t "$(DOCKER_IMAGE)" .
@@ -62,8 +75,9 @@ clean:
    rm -f skopeo docs/*.1

 install: install-binary install-docs install-completions
+   install -d -m 755 ${SIGSTOREDIR}
    install -D -m 644 default-policy.json ${CONTAINERSSYSCONFIGDIR}/policy.json
    install -d -m 755 ${REGISTRIESDDIR}
    install -D -m 644 default.yaml ${REGISTRIESDDIR}/default.yaml

 install-binary: ./skopeo
    install -D -m 755 skopeo ${INSTALLDIR}/skopeo
@@ -72,7 +86,7 @@ install-docs: docs/skopeo.1
    install -D -m 644 docs/skopeo.1 ${MANINSTALLDIR}/man1/skopeo.1

 install-completions:
-   install -m 644 -D hack/make/bash_autocomplete ${BASHINSTALLDIR}/skopeo
+   install -m 644 -D completions/bash/skopeo ${BASHINSTALLDIR}/skopeo

 shell: build-container
    $(DOCKER_RUN_DOCKER) bash
@@ -81,11 +95,11 @@ check: validate test-unit test-integration
 # The tests can run out of entropy and block in containers, so replace /dev/random.
 test-integration: build-container
-   $(DOCKER_RUN_DOCKER) bash -c 'rm -f /dev/random; ln -sf /dev/urandom /dev/random; SKOPEO_CONTAINER_TESTS=1 hack/make.sh test-integration'
+   $(DOCKER_RUN_DOCKER) bash -c 'rm -f /dev/random; ln -sf /dev/urandom /dev/random; SKOPEO_CONTAINER_TESTS=1 BUILDTAGS="$(BUILDTAGS)" hack/make.sh test-integration'

 test-unit: build-container
    # Just call (make test unit-local) here instead of worrying about environment differences, e.g. GO15VENDOREXPERIMENT.
-   $(DOCKER_RUN_DOCKER) make test-unit-local
+   $(DOCKER_RUN_DOCKER) make test-unit-local BUILDTAGS='$(BUILDTAGS)'

 validate: build-container
    $(DOCKER_RUN_DOCKER) hack/make.sh validate-git-marks validate-gofmt validate-lint validate-vet
@@ -97,4 +111,4 @@ validate-local:
    hack/make.sh validate-git-marks validate-gofmt validate-lint validate-vet

 test-unit-local:
-   go test $$(go list -e ./... | grep -v '^github\.com/projectatomic/skopeo/\(integration\|vendor/.*\)$$')
+   go test -tags "$(BUILDTAGS)" $$(go list -tags "$(BUILDTAGS)" -e ./... | grep -v '^github\.com/projectatomic/skopeo/\(integration\|vendor/.*\)$$')

README.md

@@ -49,11 +49,12 @@ Copying images
-
`skopeo` can copy container images between various storage mechanisms,
e.g. Docker registries (including the Docker Hub), the Atomic Registry,
and local directories:
local directories, and local OCI-layout directories:
```sh
$ skopeo copy docker://busybox:1-glibc atomic:myns/unsigned:streaming
$ skopeo copy docker://busybox:latest dir:existingemptydirectory
$ skopeo copy docker://busybox:latest oci:busybox_ocilayout
```
Deleting images
@@ -65,14 +66,10 @@ $ skopeo delete docker://localhost:5000/imagename:latest
Private registries with authentication
-
When interacting with private registries, `skopeo` first looks for the Docker's cli config file (usually located at `$HOME/.docker/config.json`) to get the credentials needed to authenticate. When the file isn't available it falls back looking for `--username` and `--password` flags. The ultimate fallback, as Docker does, is to provide an empty authentication when interacting with those registries.
When interacting with private registries, `skopeo` first looks for `--creds` (for `skopeo inspect|delete`) or `--src-creds|--dest-creds` (for `skopeo copy`) flags. If those aren't provided, it looks for the Docker's cli config file (usually located at `$HOME/.docker/config.json`) to get the credentials needed to authenticate. The ultimate fallback, as Docker does, is to provide an empty authentication when interacting with those registries.
Examples:
```sh
# on my system
$ skopeo --help | grep docker-cfg
--docker-cfg "/home/runcom/.docker" Docker's cli config for auth
$ cat /home/runcom/.docker/config.json
{
"auths": {
@@ -88,55 +85,120 @@ $ skopeo inspect docker://myregistrydomain.com:5000/busybox
{"Tag":"latest","Digest":"sha256:473bb2189d7b913ed7187a33d11e743fdc2f88931122a44d91a301b64419f092","RepoTags":["latest"],"Comment":"","Created":"2016-01-15T18:06:41.282540103Z","ContainerConfig":{"Hostname":"aded96b43f48","Domainname":"","User":"","AttachStdin":false,"AttachStdout":false,"AttachStderr":false,"Tty":false,"OpenStdin":false,"StdinOnce":false,"Env":null,"Cmd":["/bin/sh","-c","#(nop) CMD [\"sh\"]"],"Image":"9e77fef7a1c9f989988c06620dabc4020c607885b959a2cbd7c2283c91da3e33","Volumes":null,"WorkingDir":"","Entrypoint":null,"OnBuild":null,"Labels":null},"DockerVersion":"1.8.3","Author":"","Config":{"Hostname":"aded96b43f48","Domainname":"","User":"","AttachStdin":false,"AttachStdout":false,"AttachStderr":false,"Tty":false,"OpenStdin":false,"StdinOnce":false,"Env":null,"Cmd":["sh"],"Image":"9e77fef7a1c9f989988c06620dabc4020c607885b959a2cbd7c2283c91da3e33","Volumes":null,"WorkingDir":"","Entrypoint":null,"OnBuild":null,"Labels":null},"Architecture":"amd64","Os":"linux"}
# let's try now to fake a non existent Docker's config file
$ skopeo --docker-cfg="" inspect docker://myregistrydomain.com:5000/busybox
FATA[0000] Get https://myregistrydomain.com:5000/v2/busybox/manifests/latest: no basic auth credentials
$ cat /home/runcom/.docker/config.json
{}
# passing --username and --password - we can see that everything goes fine
$ skopeo --docker-cfg="" --username=testuser --password=testpassword inspect docker://myregistrydomain.com:5000/busybox
$ skopeo inspect docker://myregistrydomain.com:5000/busybox
FATA[0000] unauthorized: authentication required
# passing --creds - we can see that everything goes fine
$ skopeo inspect --creds=testuser:testpassword docker://myregistrydomain.com:5000/busybox
{"Tag":"latest","Digest":"sha256:473bb2189d7b913ed7187a33d11e743fdc2f88931122a44d91a301b64419f092","RepoTags":["latest"],"Comment":"","Created":"2016-01-15T18:06:41.282540103Z","ContainerConfig":{"Hostname":"aded96b43f48","Domainname":"","User":"","AttachStdin":false,"AttachStdout":false,"AttachStderr":false,"Tty":false,"OpenStdin":false,"StdinOnce":false,"Env":null,"Cmd":["/bin/sh","-c","#(nop) CMD [\"sh\"]"],"Image":"9e77fef7a1c9f989988c06620dabc4020c607885b959a2cbd7c2283c91da3e33","Volumes":null,"WorkingDir":"","Entrypoint":null,"OnBuild":null,"Labels":null},"DockerVersion":"1.8.3","Author":"","Config":{"Hostname":"aded96b43f48","Domainname":"","User":"","AttachStdin":false,"AttachStdout":false,"AttachStderr":false,"Tty":false,"OpenStdin":false,"StdinOnce":false,"Env":null,"Cmd":["sh"],"Image":"9e77fef7a1c9f989988c06620dabc4020c607885b959a2cbd7c2283c91da3e33","Volumes":null,"WorkingDir":"","Entrypoint":null,"OnBuild":null,"Labels":null},"Architecture":"amd64","Os":"linux"}
# skopeo copy example:
$ skopeo copy --src-creds=testuser:testpassword docker://myregistrydomain.com:5000/private oci:local_oci_image
```
If your cli config is found but it doesn't contain the necessary credentials for the queried registry
you'll get an error. You can fix this by either logging in (via `docker login`) or providing `--username`
and `--password`.
you'll get an error. You can fix this by either logging in (via `docker login`) or providing `--creds` or `--src-creds|--dest-creds`.
Building
-
To build the manual you will need go-md2man.
To build the `skopeo` binary you need at least Go 1.5 because it uses the latest `GO15VENDOREXPERIMENT` flag.
There are two ways to build skopeo: in a container, or locally without a container. Choose the one which better matches your needs and environment.
### Building without a container
Building without a container requires a bit more manual work and setup in your environment, but it is more flexible:
- It should work in more environments (e.g. for native macOS builds)
- It does not require root privileges (after dependencies are installed)
- It is faster, and therefore more convenient, for developing `skopeo`.
Install the necessary dependencies:
```sh
$ sudo apt-get install go-md2man
```
To build the `skopeo` binary you need at least Go 1.5 because it uses the latest `GO15VENDOREXPERIMENT` flag. Also, make sure to clone the repository in your `GOPATH` - otherwise compilation fails.
```sh
$ git clone https://github.com/projectatomic/skopeo $GOPATH/src/github.com/projectatomic/skopeo
$ cd $GOPATH/src/github.com/projectatomic/skopeo && make all
Fedora$ sudo dnf install gpgme-devel libassuan-devel btrfs-progs-devel device-mapper-devel
macOS$ brew install gpgme
```
You may need to install additional development packages: gpgme-devel and libassuan-devel
Make sure to clone this repository in your `GOPATH` - otherwise compilation fails.
```sh
$ dnf install gpgme-devel libassuan-devel
$ git clone https://github.com/projectatomic/skopeo $GOPATH/src/github.com/projectatomic/skopeo
$ cd $GOPATH/src/github.com/projectatomic/skopeo && make binary-local
```
### Building in a container
Building in a container is simpler, but more restrictive:
- It requires the `docker` command and the ability to run Linux containers
- The created executable is a Linux executable, and depends on dynamic libraries which may be available only in a container of a similar Linux distribution.
```sh
$ make binary # Or (make all) to also build documentation, see below.
```
### Building documentation
To build the manual you will need go-md2man.
```sh
Debian$ sudo apt-get install go-md2man
Fedora$ sudo dnf install go-md2man
```
Then
```sh
$ make docs
```
Installing
-
If you built from source:
```sh
$ sudo make install
```
`skopeo` is also available from Fedora 23:
`skopeo` is also available from Fedora 23 (and later):
```sh
sudo dnf install skopeo
$ sudo dnf install skopeo
```
TODO
-
- list all images on registry?
- registry v2 search?
- support output to docker load tar(s)
- show repo tags via flag or when reference isn't tagged or digested
- add tests (integration with deployed registries in container - Docker-like)
- support rkt/appc image spec
NOT TODO
-
- provide a _format_ flag - just use the awesome [jq](https://stedolan.github.io/jq/)
CONTRIBUTING
-
### Dependencies management
`skopeo` uses [`vndr`](https://github.com/LK4D4/vndr) for dependencies management.
In order to add a new dependency to this project:
- add a new line to `vendor.conf` according to `vndr` rules (e.g. `github.com/pkg/errors master`)
- run `vndr github.com/pkg/errors`
In order to update an existing dependency:
- update the relevant dependency line in `vendor.conf`
- run `vndr github.com/pkg/errors` (see the sketch below)
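A minimal sketch of that update flow, assuming the dependency is pinned to a branch name in `vendor.conf`:
```sh
# Edit the dependency's line in vendor.conf, then re-vendor that package.
$ sed -i 's|^github.com/pkg/errors .*|github.com/pkg/errors master|' vendor.conf
$ vndr github.com/pkg/errors
```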
When new PRs for [containers/image](https://github.com/containers/image) break `skopeo` (i.e. `containers/image` tests fail in `make test-skopeo`):
- create a new branch in your `skopeo` checkout and switch to it
- update `vendor.conf`. Find the `containers/image` dependency and update it to vendor from your own branch and your own repository fork (e.g. `github.com/containers/image my-branch https://github.com/runcom/image`; see the example after this list)
- run `vndr github.com/containers/image`
- make any other necessary changes in the skopeo repo (e.g. add other dependencies now required by `containers/image`, or update skopeo for changed `containers/image` API)
- optionally add new integration tests to the skopeo repo
- submit the resulting branch as a skopeo PR, marked “DO NOT MERGE”
- iterate until tests pass and the PR is reviewed
- then the original `containers/image` PR can be merged, disregarding its `make test-skopeo` failure
- as soon as possible after that, in the skopeo PR, restore the `containers/image` line in `vendor.conf` to use `containers/image:master`
- run `vndr github.com/containers/image`
- update the skopeo PR with the result, drop the “DO NOT MERGE” marking
- after tests complete successfully again, merge the skopeo PR
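For example, the temporary `vendor.conf` pointer from the workflow above might look like this (the branch name and fork URL are the placeholders from the list):
```sh
# While the containers/image PR is open:
$ grep '^github.com/containers/image' vendor.conf
github.com/containers/image my-branch https://github.com/runcom/image
$ vndr github.com/containers/image
# Before the skopeo PR is merged, restore the line and re-vendor:
$ grep '^github.com/containers/image' vendor.conf
github.com/containers/image master
$ vndr github.com/containers/image
```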
License
-
skopeo is licensed under the Apache License, Version 2.0. See

View File

@@ -4,12 +4,30 @@ import (
"errors"
"fmt"
"os"
"strings"
"github.com/containers/image/copy"
"github.com/containers/image/transports"
"github.com/containers/image/transports/alltransports"
"github.com/containers/image/types"
"github.com/urfave/cli"
)
// contextsFromGlobalOptions returns source and destination types.SystemContext depending on c.
func contextsFromGlobalOptions(c *cli.Context) (*types.SystemContext, *types.SystemContext, error) {
sourceCtx, err := contextFromGlobalOptions(c, "src-")
if err != nil {
return nil, nil, err
}
destinationCtx, err := contextFromGlobalOptions(c, "dest-")
if err != nil {
return nil, nil, err
}
return sourceCtx, destinationCtx, nil
}
func copyHandler(context *cli.Context) error {
if len(context.Args()) != 2 {
return errors.New("Usage: copy source destination")
@@ -21,27 +39,43 @@ func copyHandler(context *cli.Context) error {
}
defer policyContext.Destroy()
srcRef, err := transports.ParseImageName(context.Args()[0])
srcRef, err := alltransports.ParseImageName(context.Args()[0])
if err != nil {
return fmt.Errorf("Invalid source name %s: %v", context.Args()[0], err)
}
destRef, err := transports.ParseImageName(context.Args()[1])
destRef, err := alltransports.ParseImageName(context.Args()[1])
if err != nil {
return fmt.Errorf("Invalid destination name %s: %v", context.Args()[1], err)
}
signBy := context.String("sign-by")
removeSignatures := context.Bool("remove-signatures")
return copy.Image(contextFromGlobalOptions(context), policyContext, destRef, srcRef, &copy.Options{
sourceCtx, destinationCtx, err := contextsFromGlobalOptions(context)
if err != nil {
return err
}
return copy.Image(policyContext, destRef, srcRef, &copy.Options{
RemoveSignatures: removeSignatures,
SignBy: signBy,
ReportWriter: os.Stdout,
SourceCtx: sourceCtx,
DestinationCtx: destinationCtx,
})
}
var copyCmd = cli.Command{
Name: "copy",
Usage: "Copy an image from one location to another",
Name: "copy",
Usage: "Copy an IMAGE-NAME from one location to another",
Description: fmt.Sprintf(`
Container "IMAGE-NAME" uses a "transport":"details" format.
Supported transports:
%s
See skopeo(1) section "IMAGE NAMES" for the expected format
`, strings.Join(transports.ListNames(), ", ")),
ArgsUsage: "SOURCE-IMAGE DESTINATION-IMAGE",
Action: copyHandler,
// FIXME: Do we need to namespace the GPG aspect?
@@ -54,5 +88,38 @@ var copyCmd = cli.Command{
Name: "sign-by",
Usage: "Sign the image using a GPG key with the specified `FINGERPRINT`",
},
cli.StringFlag{
Name: "src-creds, screds",
Value: "",
Usage: "Use `USERNAME[:PASSWORD]` for accessing the source registry",
},
cli.StringFlag{
Name: "dest-creds, dcreds",
Value: "",
Usage: "Use `USERNAME[:PASSWORD]` for accessing the destination registry",
},
cli.StringFlag{
Name: "src-cert-dir",
Value: "",
Usage: "use certificates at `PATH` (*.crt, *.cert, *.key) to connect to the source registry",
},
cli.BoolTFlag{
Name: "src-tls-verify",
Usage: "require HTTPS and verify certificates when talking to the container source registry (defaults to true)",
},
cli.StringFlag{
Name: "dest-cert-dir",
Value: "",
Usage: "use certificates at `PATH` (*.crt, *.cert, *.key) to connect to the destination registry",
},
cli.BoolTFlag{
Name: "dest-tls-verify",
Usage: "require HTTPS and verify certificates when talking to the container destination registry (defaults to true)",
},
cli.StringFlag{
Name: "dest-ostree-tmp-dir",
Value: "",
Usage: "`DIRECTORY` to use for OSTree temporary files",
},
},
}
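A hedged sketch of these new per-stream flags in use; the registry hosts and credentials are placeholders, not taken from this change:
```sh
$ skopeo copy \
    --src-creds=testuser:testpassword --src-tls-verify=false \
    --dest-creds=produser:prodpassword \
    docker://localhost:5000/busybox:latest \
    docker://registry.example.com/busybox:latest
```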

View File

@@ -3,8 +3,10 @@ package main
import (
"errors"
"fmt"
"strings"
"github.com/containers/image/transports"
"github.com/containers/image/transports/alltransports"
"github.com/urfave/cli"
)
@@ -13,20 +15,48 @@ func deleteHandler(context *cli.Context) error {
return errors.New("Usage: delete imageReference")
}
ref, err := transports.ParseImageName(context.Args()[0])
ref, err := alltransports.ParseImageName(context.Args()[0])
if err != nil {
return fmt.Errorf("Invalid source name %s: %v", context.Args()[0], err)
}
if err := ref.DeleteImage(contextFromGlobalOptions(context)); err != nil {
ctx, err := contextFromGlobalOptions(context, "")
if err != nil {
return err
}
if err := ref.DeleteImage(ctx); err != nil {
return err
}
return nil
}
var deleteCmd = cli.Command{
Name: "delete",
Usage: "Delete image IMAGE-NAME",
Name: "delete",
Usage: "Delete image IMAGE-NAME",
Description: fmt.Sprintf(`
Delete an "IMAGE_NAME" from a transport
Supported transports:
%s
See skopeo(1) section "IMAGE NAMES" for the expected format
`, strings.Join(transports.ListNames(), ", ")),
ArgsUsage: "IMAGE-NAME",
Action: deleteHandler,
Flags: []cli.Flag{
cli.StringFlag{
Name: "creds",
Value: "",
Usage: "Use `USERNAME[:PASSWORD]` for accessing the registry",
},
cli.StringFlag{
Name: "cert-dir",
Value: "",
Usage: "use certificates at `PATH` (*.crt, *.cert, *.key) to connect to the registry",
},
cli.BoolTFlag{
Name: "tls-verify",
Usage: "require HTTPS and verify certificates when talking to container registries (defaults to true)",
},
},
}
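Illustrative use of the new subcommand-level flags (the registry address is a placeholder):
```sh
$ skopeo delete --tls-verify=false --creds=testuser:testpassword \
    docker://localhost:5000/imagename:latest
```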

View File

@@ -3,10 +3,15 @@ package main
import (
"encoding/json"
"fmt"
"strings"
"time"
"github.com/Sirupsen/logrus"
"github.com/containers/image/docker"
"github.com/containers/image/manifest"
"github.com/containers/image/transports"
"github.com/opencontainers/go-digest"
"github.com/pkg/errors"
"github.com/urfave/cli"
)
@@ -14,7 +19,7 @@ import (
type inspectOutput struct {
Name string `json:",omitempty"`
Tag string `json:",omitempty"`
Digest string
Digest digest.Digest
RepoTags []string
Created time.Time
DockerVersion string
@@ -25,21 +30,48 @@ type inspectOutput struct {
}
var inspectCmd = cli.Command{
Name: "inspect",
Usage: "Inspect image IMAGE-NAME",
Name: "inspect",
Usage: "Inspect image IMAGE-NAME",
Description: fmt.Sprintf(`
Return low-level information about "IMAGE-NAME" in a registry/transport
Supported transports:
%s
See skopeo(1) section "IMAGE NAMES" for the expected format
`, strings.Join(transports.ListNames(), ", ")),
ArgsUsage: "IMAGE-NAME",
Flags: []cli.Flag{
cli.StringFlag{
Name: "cert-dir",
Value: "",
Usage: "use certificates at `PATH` (*.crt, *.cert, *.key) to connect to the registry",
},
cli.BoolTFlag{
Name: "tls-verify",
Usage: "require HTTPS and verify certificates when talking to container registries (defaults to true)",
},
cli.BoolFlag{
Name: "raw",
Usage: "output raw manifest",
},
cli.StringFlag{
Name: "creds",
Value: "",
Usage: "Use `USERNAME[:PASSWORD]` for accessing the registry",
},
},
Action: func(c *cli.Context) error {
Action: func(c *cli.Context) (retErr error) {
img, err := parseImage(c)
if err != nil {
return err
}
defer img.Close()
defer func() {
if err := img.Close(); err != nil {
retErr = errors.Wrapf(retErr, "(could not close image: %v) ", err)
}
}()
rawManifest, _, err := img.Manifest()
if err != nil {
@@ -76,7 +108,13 @@ var inspectCmd = cli.Command{
outputData.Name = dockerImg.SourceRefFullName()
outputData.RepoTags, err = dockerImg.GetRepositoryTags()
if err != nil {
return fmt.Errorf("Error determining repository tags: %v", err)
// Some registries may decide to block the "list all tags" endpoint;
// gracefully allow the inspect to continue in this case. Currently
// the IBM Bluemix container registry has this restriction.
if !strings.Contains(err.Error(), "401") {
return fmt.Errorf("Error determining repository tags: %v", err)
}
logrus.Warnf("Registry disallows tag list retrieval; skipping")
}
}
out, err := json.MarshalIndent(outputData, "", " ")

View File

@@ -1,24 +1,32 @@
package main
import (
"fmt"
"io/ioutil"
"os"
"strings"
"github.com/containers/image/directory"
"github.com/containers/image/image"
"github.com/containers/image/manifest"
"github.com/containers/image/types"
"github.com/opencontainers/go-digest"
"github.com/pkg/errors"
"github.com/urfave/cli"
)
// TODO(runcom): document args and usage
var layersCmd = cli.Command{
Name: "layers",
Usage: "Get layers of IMAGE-NAME",
ArgsUsage: "IMAGE-NAME",
Action: func(c *cli.Context) error {
ArgsUsage: "IMAGE-NAME [LAYER...]",
Hidden: true,
Action: func(c *cli.Context) (retErr error) {
fmt.Fprintln(os.Stderr, `DEPRECATED: skopeo layers is deprecated in favor of skopeo copy`)
if c.NArg() == 0 {
return errors.New("Usage: layers imageReference [layer...]")
}
rawSource, err := parseImageSource(c, c.Args()[0], []string{
// TODO: skopeo layers only support these now
// TODO: skopeo layers only supports these now
// eventually we'll remove this command altogether...
manifest.DockerV2Schema1SignedMediaType,
manifest.DockerV2Schema1MediaType,
@@ -26,26 +34,42 @@ var layersCmd = cli.Command{
if err != nil {
return err
}
src := image.FromSource(rawSource)
defer src.Close()
src, err := image.FromSource(rawSource)
if err != nil {
if closeErr := rawSource.Close(); closeErr != nil {
return errors.Wrapf(err, " (close error: %v)", closeErr)
}
blobDigests := c.Args().Tail()
if len(blobDigests) == 0 {
layers, err := src.LayerInfos()
return err
}
defer func() {
if err := src.Close(); err != nil {
retErr = errors.Wrapf(retErr, " (close error: %v)", err)
}
}()
var blobDigests []digest.Digest
for _, dString := range c.Args().Tail() {
if !strings.HasPrefix(dString, "sha256:") {
dString = "sha256:" + dString
}
d, err := digest.Parse(dString)
if err != nil {
return err
}
seenLayers := map[string]struct{}{}
blobDigests = append(blobDigests, d)
}
if len(blobDigests) == 0 {
layers := src.LayerInfos()
seenLayers := map[digest.Digest]struct{}{}
for _, info := range layers {
if _, ok := seenLayers[info.Digest]; !ok {
blobDigests = append(blobDigests, info.Digest)
seenLayers[info.Digest] = struct{}{}
}
}
configInfo, err := src.ConfigInfo()
if err != nil {
return err
}
configInfo := src.ConfigInfo()
if configInfo.Digest != "" {
blobDigests = append(blobDigests, configInfo.Digest)
}
@@ -63,21 +87,24 @@ var layersCmd = cli.Command{
if err != nil {
return err
}
defer dest.Close()
defer func() {
if err := dest.Close(); err != nil {
retErr = errors.Wrapf(retErr, " (close error: %v)", err)
}
}()
for _, digest := range blobDigests {
if !strings.HasPrefix(digest, "sha256:") {
digest = "sha256:" + digest
}
r, blobSize, err := rawSource.GetBlob(digest)
r, blobSize, err := rawSource.GetBlob(types.BlobInfo{Digest: digest, Size: -1})
if err != nil {
return err
}
if _, err := dest.PutBlob(r, types.BlobInfo{Digest: digest, Size: blobSize}); err != nil {
r.Close()
if closeErr := r.Close(); closeErr != nil {
return errors.Wrapf(err, " (close error: %v)", closeErr)
}
return err
}
r.Close()
}
manifest, _, err := src.Manifest()

View File

@@ -6,6 +6,7 @@ import (
"github.com/Sirupsen/logrus"
"github.com/containers/image/signature"
"github.com/containers/storage/pkg/reexec"
"github.com/projectatomic/skopeo/version"
"github.com/urfave/cli"
)
@@ -25,47 +26,38 @@ func createApp() *cli.App {
app.Version = version.Version
}
app.Usage = "Various operations with container images and container image registries"
// TODO(runcom)
//app.EnableBashCompletion = true
app.Flags = []cli.Flag{
cli.BoolFlag{
Name: "debug",
Usage: "enable debug output",
},
cli.StringFlag{
Name: "username",
Value: "",
Usage: "use `USERNAME` for accessing the registry",
},
cli.StringFlag{
Name: "password",
Value: "",
Usage: "use `PASSWORD` for accessing the registry",
},
cli.StringFlag{
Name: "cert-path",
Value: "",
Usage: "use certificates at `PATH` (cert.pem, key.pem) to connect to the registry",
},
cli.BoolTFlag{
Name: "tls-verify",
Usage: "require HTTPS and verify certificates when talking to docker registries (defaults to true)",
Name: "tls-verify",
Usage: "require HTTPS and verify certificates when talking to container registries (defaults to true)",
Hidden: true,
},
cli.StringFlag{
Name: "policy",
Value: "",
Usage: "Path to a trust policy file",
},
cli.BoolFlag{
Name: "insecure-policy",
Usage: "run the tool without any policy check",
},
cli.StringFlag{
Name: "registries.d",
Value: "",
Usage: "use registry configuration files in `DIR` (e.g. for docker signature storage)",
Usage: "use registry configuration files in `DIR` (e.g. for container signature storage)",
},
}
app.Before = func(c *cli.Context) error {
if c.GlobalBool("debug") {
logrus.SetLevel(logrus.DebugLevel)
}
if c.GlobalIsSet("tls-verify") {
logrus.Warn("'--tls-verify' is deprecated, please set this on the specific subcommand")
}
return nil
}
app.Commands = []cli.Command{
@@ -76,11 +68,15 @@ func createApp() *cli.App {
manifestDigestCmd,
standaloneSignCmd,
standaloneVerifyCmd,
untrustedSignatureDumpCmd,
}
return app
}
func main() {
if reexec.Init() {
return
}
app := createApp()
if err := app.Run(os.Args); err != nil {
logrus.Fatal(err)
@@ -92,7 +88,9 @@ func getPolicyContext(c *cli.Context) (*signature.PolicyContext, error) {
policyPath := c.GlobalString("policy")
var policy *signature.Policy // This could be cached across calls, if we had an application context.
var err error
if policyPath == "" {
if c.GlobalBool("insecure-policy") {
policy = &signature.Policy{Default: []signature.PolicyRequirement{signature.NewPRInsecureAcceptAnything()}}
} else if policyPath == "" {
policy, err = signature.DefaultPolicy(nil)
} else {
policy, err = signature.NewPolicyFromFile(policyPath)

View File

@@ -27,5 +27,5 @@ func TestManifestDigest(t *testing.T) {
// Success
out, err = runSkopeo("manifest-digest", "fixtures/image.manifest.json")
assert.NoError(t, err)
assert.Equal(t, fixturesTestImageManifestDigest+"\n", out)
assert.Equal(t, fixturesTestImageManifestDigest.String()+"\n", out)
}

View File

@@ -1,6 +1,7 @@
package main
import (
"encoding/json"
"errors"
"fmt"
"io/ioutil"
@@ -27,6 +28,7 @@ func standaloneSign(context *cli.Context) error {
if err != nil {
return fmt.Errorf("Error initializing GPG: %v", err)
}
defer mech.Close()
signature, err := signature.SignDockerManifest(manifest, dockerReference, mech, fingerprint)
if err != nil {
return fmt.Errorf("Error creating signature: %v", err)
@@ -73,6 +75,7 @@ func standaloneVerify(context *cli.Context) error {
if err != nil {
return fmt.Errorf("Error initializing GPG: %v", err)
}
defer mech.Close()
sig, err := signature.VerifyDockerManifestSignature(unverifiedSignature, unverifiedManifest, expectedDockerReference, mech, expectedFingerprint)
if err != nil {
return fmt.Errorf("Error verifying signature: %v", err)
@@ -88,3 +91,40 @@ var standaloneVerifyCmd = cli.Command{
ArgsUsage: "MANIFEST DOCKER-REFERENCE KEY-FINGERPRINT SIGNATURE",
Action: standaloneVerify,
}
func untrustedSignatureDump(context *cli.Context) error {
if len(context.Args()) != 1 {
return errors.New("Usage: skopeo untrusted-signature-dump-without-verification signature")
}
untrustedSignaturePath := context.Args()[0]
untrustedSignature, err := ioutil.ReadFile(untrustedSignaturePath)
if err != nil {
return fmt.Errorf("Error reading untrusted signature from %s: %v", untrustedSignaturePath, err)
}
untrustedInfo, err := signature.GetUntrustedSignatureInformationWithoutVerifying(untrustedSignature)
if err != nil {
return fmt.Errorf("Error decoding untrusted signature: %v", err)
}
untrustedOut, err := json.MarshalIndent(untrustedInfo, "", " ")
if err != nil {
return err
}
fmt.Fprintln(context.App.Writer, string(untrustedOut))
return nil
}
// WARNING: Do not use the contents of this for ANY security decisions,
// and be VERY CAREFUL about showing this information to humans in any way which suggests that these values “are probably” reliable.
// There is NO REASON to expect the values to be correct, or not intentionally misleading
// (including things like “✅ Verified by $authority”)
//
// The subcommand is undocumented, and it may be renamed or entirely disappear in the future.
var untrustedSignatureDumpCmd = cli.Command{
Name: "untrusted-signature-dump-without-verification",
Usage: "Dump contents of a signature WITHOUT VERIFYING IT",
ArgsUsage: "SIGNATURE",
Hidden: true,
Action: untrustedSignatureDump,
}
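For illustration only (per the warning above, the output must never feed a security decision), the command can be pointed at the test fixture used elsewhere in this change; it prints the indented JSON produced by `GetUntrustedSignatureInformationWithoutVerifying`:
```sh
$ skopeo untrusted-signature-dump-without-verification fixtures/image.signature
```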

View File

@@ -1,20 +1,25 @@
package main
import (
"encoding/json"
"io/ioutil"
"os"
"testing"
"time"
"github.com/containers/image/signature"
"github.com/opencontainers/go-digest"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
const (
// fixturesTestImageManifestDigest is the Docker manifest digest of "image.manifest.json"
fixturesTestImageManifestDigest = "sha256:20bf21ed457b390829cdbeec8795a7bea1626991fda603e0d01b4e7f60427e55"
fixturesTestImageManifestDigest = digest.Digest("sha256:20bf21ed457b390829cdbeec8795a7bea1626991fda603e0d01b4e7f60427e55")
// fixturesTestKeyFingerprint is the fingerprint of the private key.
fixturesTestKeyFingerprint = "1D8230F6CDB6A06716E414C1DB72F2188BB46CC8"
// fixturesTestKeyShortID is the short key ID of the private key.
fixturesTestKeyShortID = "DB72F2188BB46CC8"
)
// Test that results of runSkopeo failed with nothing on stdout, and substring
@@ -26,6 +31,13 @@ func assertTestFailed(t *testing.T, stdout string, err error, substring string)
}
func TestStandaloneSign(t *testing.T) {
mech, _, err := signature.NewEphemeralGPGSigningMechanism([]byte{})
require.NoError(t, err)
defer mech.Close()
if err := mech.SupportsSigning(); err != nil {
t.Skipf("Signing not supported: %v", err)
}
manifestPath := "fixtures/image.manifest.json"
dockerReference := "testing/manifest"
os.Setenv("GNUPGHOME", "fixtures")
@@ -54,7 +66,7 @@ func TestStandaloneSign(t *testing.T) {
manifestPath, "" /* empty reference */, fixturesTestKeyFingerprint)
assertTestFailed(t, out, err, "empty signature content")
// Unknown key. (FIXME? The error is 'Error creating signature: End of file")
// Unknown key.
out, err = runSkopeo("standalone-sign", "-o", "/dev/null",
manifestPath, dockerReference, "UNKNOWN GPG FINGERPRINT")
assert.Error(t, err)
@@ -71,17 +83,18 @@ func TestStandaloneSign(t *testing.T) {
defer os.Remove(sigOutput.Name())
out, err = runSkopeo("standalone-sign", "-o", sigOutput.Name(),
manifestPath, dockerReference, fixturesTestKeyFingerprint)
assert.NoError(t, err)
require.NoError(t, err)
assert.Empty(t, out)
sig, err := ioutil.ReadFile(sigOutput.Name())
require.NoError(t, err)
manifest, err := ioutil.ReadFile(manifestPath)
require.NoError(t, err)
mech, err := signature.NewGPGSigningMechanism()
mech, err = signature.NewGPGSigningMechanism()
require.NoError(t, err)
defer mech.Close()
verified, err := signature.VerifyDockerManifestSignature(sig, manifest, dockerReference, mech, fixturesTestKeyFingerprint)
assert.NoError(t, err)
require.NoError(t, err)
assert.Equal(t, dockerReference, verified.DockerReference)
assert.Equal(t, fixturesTestImageManifestDigest, verified.DockerManifestDigest)
}
@@ -122,5 +135,44 @@ func TestStandaloneVerify(t *testing.T) {
out, err = runSkopeo("standalone-verify", manifestPath,
dockerReference, fixturesTestKeyFingerprint, signaturePath)
assert.NoError(t, err)
assert.Equal(t, "Signature verified, digest "+fixturesTestImageManifestDigest+"\n", out)
assert.Equal(t, "Signature verified, digest "+fixturesTestImageManifestDigest.String()+"\n", out)
}
func TestUntrustedSignatureDump(t *testing.T) {
// Invalid command-line arguments
for _, args := range [][]string{
{},
{"a1", "a2"},
{"a1", "a2", "a3", "a4"},
} {
out, err := runSkopeo(append([]string{"untrusted-signature-dump-without-verification"}, args...)...)
assertTestFailed(t, out, err, "Usage")
}
// Error reading manifest
out, err := runSkopeo("untrusted-signature-dump-without-verification",
"/this/doesnt/exist")
assertTestFailed(t, out, err, "/this/doesnt/exist")
// Error reading signature (input is not a signature)
out, err = runSkopeo("untrusted-signature-dump-without-verification", "fixtures/image.manifest.json")
assertTestFailed(t, out, err, "Error decoding untrusted signature")
// Success
for _, path := range []string{"fixtures/image.signature", "fixtures/corrupt.signature"} {
// Success
out, err = runSkopeo("untrusted-signature-dump-without-verification", path)
require.NoError(t, err)
var info signature.UntrustedSignatureInformation
err := json.Unmarshal([]byte(out), &info)
require.NoError(t, err)
assert.Equal(t, fixturesTestImageManifestDigest, info.UntrustedDockerManifestDigest)
assert.Equal(t, "testing/manifest", info.UntrustedDockerReference)
assert.NotNil(t, info.UntrustedCreatorID)
assert.Equal(t, "atomic ", *info.UntrustedCreatorID)
assert.NotNil(t, info.UntrustedTimestamp)
assert.True(t, time.Unix(1458239713, 0).Equal(*info.UntrustedTimestamp))
assert.Equal(t, fixturesTestKeyShortID, info.UntrustedShortKeyIdentifier)
}
}

View File

@@ -1,39 +1,87 @@
package main
import (
"github.com/containers/image/transports"
"errors"
"strings"
"github.com/containers/image/transports/alltransports"
"github.com/containers/image/types"
"github.com/urfave/cli"
)
// contextFromGlobalOptions returns a types.SystemContext depending on c.
func contextFromGlobalOptions(c *cli.Context) *types.SystemContext {
tlsVerify := c.GlobalBoolT("tls-verify")
return &types.SystemContext{
RegistriesDirPath: c.GlobalString("registries.d"),
DockerCertPath: c.GlobalString("cert-path"),
DockerInsecureSkipTLSVerify: !tlsVerify,
func contextFromGlobalOptions(c *cli.Context, flagPrefix string) (*types.SystemContext, error) {
ctx := &types.SystemContext{
RegistriesDirPath: c.GlobalString("registries.d"),
DockerCertPath: c.String(flagPrefix + "cert-dir"),
// DEPRECATED: keep this here for backward compatibility, but override
// them if per subcommand flags are provided (see below).
DockerInsecureSkipTLSVerify: !c.GlobalBoolT("tls-verify"),
OSTreeTmpDirPath: c.String(flagPrefix + "ostree-tmp-dir"),
}
if c.IsSet(flagPrefix + "tls-verify") {
ctx.DockerInsecureSkipTLSVerify = !c.BoolT(flagPrefix + "tls-verify")
}
if c.IsSet(flagPrefix + "creds") {
var err error
ctx.DockerAuthConfig, err = getDockerAuth(c.String(flagPrefix + "creds"))
if err != nil {
return nil, err
}
}
return ctx, nil
}
// ParseImage converts image URL-like string to an initialized handler for that image.
// The caller must call .Close() on the returned Image.
func parseImage(c *cli.Context) (types.Image, error) {
imgName := c.Args().First()
ref, err := transports.ParseImageName(imgName)
func parseCreds(creds string) (string, string, error) {
if creds == "" {
return "", "", errors.New("credentials can't be empty")
}
up := strings.SplitN(creds, ":", 2)
if len(up) == 1 {
return up[0], "", nil
}
if up[0] == "" {
return "", "", errors.New("username can't be empty")
}
return up[0], up[1], nil
}
func getDockerAuth(creds string) (*types.DockerAuthConfig, error) {
username, password, err := parseCreds(creds)
if err != nil {
return nil, err
}
return ref.NewImage(contextFromGlobalOptions(c))
return &types.DockerAuthConfig{
Username: username,
Password: password,
}, nil
}
// parseImage converts image URL-like string to an initialized handler for that image.
// The caller must call .Close() on the returned Image.
func parseImage(c *cli.Context) (types.Image, error) {
imgName := c.Args().First()
ref, err := alltransports.ParseImageName(imgName)
if err != nil {
return nil, err
}
ctx, err := contextFromGlobalOptions(c, "")
if err != nil {
return nil, err
}
return ref.NewImage(ctx)
}
// parseImageSource converts image URL-like string to an ImageSource.
// requestedManifestMIMETypes is as in types.ImageReference.NewImageSource.
// The caller must call .Close() on the returned ImageSource.
func parseImageSource(c *cli.Context, name string, requestedManifestMIMETypes []string) (types.ImageSource, error) {
ref, err := transports.ParseImageName(name)
ref, err := alltransports.ParseImageName(name)
if err != nil {
return nil, err
}
return ref.NewImageSource(contextFromGlobalOptions(c), requestedManifestMIMETypes)
ctx, err := contextFromGlobalOptions(c, "")
if err != nil {
return nil, err
}
return ref.NewImageSource(ctx, requestedManifestMIMETypes)
}

159
completions/bash/skopeo Normal file
View File

@@ -0,0 +1,159 @@
#! /bin/bash
: ${PROG:=$(basename ${BASH_SOURCE})}
_complete_() {
local options_with_args=$1
local boolean_options="$2 -h --help"
case "$prev" in
$options_with_args)
return
;;
esac
case "$cur" in
-*)
COMPREPLY=( $( compgen -W "$boolean_options $options_with_args" -- "$cur" ) )
;;
esac
}
_skopeo_copy() {
local options_with_args="
--sign-by
--src-creds --screds
--src-cert-dir
--src-tls-verify
--dest-creds --dcreds
--dest-cert-dir
--dest-ostree-tmp-dir
--dest-tls-verify
"
local boolean_options="
--remove-signatures
"
_complete_ "$options_with_args" "$boolean_options"
}
_skopeo_inspect() {
local options_with_args="
--creds
--cert-dir
"
local boolean_options="
--raw
--tls-verify
"
_complete_ "$options_with_args" "$boolean_options"
}
_skopeo_standalone_sign() {
local options_with_args="
-o --output
"
local boolean_options="
"
_complete_ "$options_with_args" "$boolean_options"
}
_skopeo_standalone_verify() {
local options_with_args="
"
local boolean_options="
"
_complete_ "$options_with_args" "$boolean_options"
}
_skopeo_manifest_digest() {
local options_with_args="
"
local boolean_options="
"
_complete_ "$options_with_args" "$boolean_options"
}
_skopeo_delete() {
local options_with_args="
--creds
--cert-dir
"
local boolean_options="
--tls-verify
"
_complete_ "$options_with_args" "$boolean_options"
}
_skopeo_layers() {
local options_with_args="
--creds
--cert-dir
"
local boolean_options="
--tls-verify
"
_complete_ "$options_with_args" "$boolean_options"
}
_skopeo_skopeo() {
local options_with_args="
--policy
--registries.d
"
local boolean_options="
--insecure-policy
--debug
--version -v
--help -h
"
commands=$( ${COMP_WORDS[@]:0:$COMP_CWORD} --generate-bash-completion )
case "$prev" in
--policy|--registries.d )
return
;;
esac
case "$cur" in
-*)
COMPREPLY=( $( compgen -W "$boolean_options $options_with_args" -- "$cur" ) )
;;
*)
COMPREPLY=( $( compgen -W "${commands[*]} help" -- "$cur" ) )
;;
esac
}
_cli_bash_autocomplete() {
COMPREPLY=()
local cur prev words cword
_get_comp_words_by_ref -n : cur prev words cword
local command=${PROG} cpos=0
local counter=1
while [ $counter -lt $cword ]; do
case "!${words[$counter]}" in
*)
command=$(echo "${words[$counter]}" | sed 's/-/_/g')
cpos=$counter
(( cpos++ ))
break
;;
esac
(( counter++ ))
done
local completions_func=_skopeo_${command}
declare -F $completions_func >/dev/null && $completions_func
return 0
}
complete -F _cli_bash_autocomplete $PROG
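Installation of the completion file is not wired up by this change; a typical manual install, assuming the conventional bash-completion directory, would be:
```sh
# Assumed location; distributions may differ.
$ sudo install -m 644 completions/bash/skopeo /etc/bash_completion.d/skopeo
$ . /etc/bash_completion.d/skopeo
```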

View File

@@ -3,5 +3,12 @@
{
"type": "insecureAcceptAnything"
}
]
}
],
"transports":
{
"docker-daemon":
{
"": [{"type":"insecureAcceptAnything"}]
}
}
}
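A quick way to exercise the new `docker-daemon` scope is to pass this policy explicitly; the policy path and daemon-side image name are illustrative:
```sh
$ skopeo --policy ./policy.json copy \
    docker-daemon:busybox:latest dir:existingemptydirectory
```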

26
default.yaml Normal file
View File

@@ -0,0 +1,26 @@
# This is a default registries.d configuration file. You may
# add to this file or create additional files in registries.d/.
#
# sigstore: indicates a location that is read and write
# sigstore-staging: indicates a location that is only for write
#
# sigstore and sigstore-staging take a value of the following form:
#   {schema}://location
#
# For reading signatures, schema may be http, https, or file.
# For writing signatures, schema may only be file.
# This is the default signature write location for docker registries.
default-docker:
# sigstore: file:///var/lib/atomic/sigstore
sigstore-staging: file:///var/lib/atomic/sigstore
# The 'docker' indicator here is the start of the configuration
# for docker registries.
#
# docker:
#
# privateregistry.com:
# sigstore: http://privateregistry.com/sigstore/
# sigstore-staging: /mnt/nfs/privateregistry/sigstore
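Assuming this file sits in a directory passed via the global `--registries.d` flag, signing a copy would then stage signatures under the configured sigstore; the directory path and key fingerprint are placeholders:
```sh
$ skopeo --registries.d ./my-registries.d copy --sign-by="$FINGERPRINT" \
    docker://localhost:5000/busybox:latest \
    docker://localhost:5000/busybox:signed
```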

View File

@@ -2,52 +2,49 @@
% Jhon Honce
% August 2016
# NAME
skopeo -- Various operations with container images images and container image registries
skopeo -- Various operations with container images and container image registries
# SYNOPSIS
**skopeo** [_global options_] _command_ [_command options_]
# DESCRIPTION
`skopeo` is a command line utility providing various operations with container images and container image registries. For example, it is able to inspect a repository on a Docker registry and fetch image. It fetches the repository's manifest and it is able to show you a `docker inspect`-like json output about a whole repository or a tag. This tool, in contrast to `docker inspect`, helps you gather useful information about a repository or a tag without requiring you to run `docker pull` - e.g. - which tags are available for the given repository? which labels the image has?
`skopeo` is a command line utility providing various operations with container images and container image registries. For example, it is able to inspect a repository on a container registry and fetch an image. It fetches the repository's manifest and is able to show you a `docker inspect`-like json output about a whole repository or a tag. This tool, in contrast to `docker inspect`, helps you gather useful information about a repository or a tag without requiring you to run `docker pull` - e.g. - which tags are available for the given repository? which labels does the image have?
It also allows you to copy container images between various registries, possibly converting them as necessary, and to sign and verify images.
## IMAGE NAMES
Most commands refer to container images, using a _transport_`:`_details_ format. The following formats are supported:
**atomic:**_namespace_**/**_stream_**:**_tag_
An image in the current project of the current default Atomic
Registry. The current project and Atomic Registry instance are by
default read from `$HOME/.kube/config`, which is set e.g. using
`(oc login)`.
**atomic:**_hostname_**/**_namespace_**/**_stream_**:**_tag_
An image served by an OpenShift(Atomic) Registry server. The current OpenShift project and OpenShift Registry instance are by default read from `$HOME/.kube/config`, which is set e.g. using `(oc login)`.
**containers-storage://**_docker-reference_
An image located in a local containers/storage image store. Location and image store specified in /etc/containers/storage.conf
**dir:**_path_
An existing local directory _path_ storing the manifest, layer
tarballs and signatures as individual files. This is a
non-standardized format, primarily useful for debugging or
noninvasive container inspection.
An existing local directory _path_ storing the manifest, layer tarballs and signatures as individual files. This is a non-standardized format, primarily useful for debugging or noninvasive container inspection.
**docker://**_docker-reference_
An image in a registry implementing the "Docker Registry HTTP API V2".
By default, uses the authorization state in `$HOME/.docker/config.json`,
which is set e.g. using `(docker login)`.
An image in a registry implementing the "Docker Registry HTTP API V2". By default, uses the authorization state in `$HOME/.docker/config.json`, which is set e.g. using `(docker login)`.
**docker-archive:**_path_[**:**_docker-reference_]
An image stored in a `docker save`-formatted file. _docker-reference_ is only used when creating such a file, and it must not contain a digest.
**docker-daemon:**_docker-reference_
An image _docker-reference_ stored in the docker daemon's internal storage. _docker-reference_ must contain either a tag or a digest. Alternatively, when reading images, the format can also be docker-daemon:algo:digest (an image ID).
**oci:**_path_**:**_tag_
An image _tag_ in a directory compliant with "Open Container Image
Layout Specification" at _path_.
An image _tag_ in a directory compliant with "Open Container Image Layout Specification" at _path_.
**ostree:**_image_[**@**_/absolute/repo/path_]
An image in a local OSTree repository. _/absolute/repo/path_ defaults to _/ostree/repo_.
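By way of example, the same image expressed through several of the transports above; the names reuse the README examples rather than anything mandated here:
```sh
$ skopeo copy docker://busybox:latest dir:existingemptydirectory
$ skopeo copy docker://busybox:latest oci:busybox_ocilayout:latest
$ skopeo copy docker://busybox:latest docker-archive:busybox.tar:busybox:latest
```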
# OPTIONS
**--debug** enable debug output
**--username** _username_ for accessing the registry
**--password** _password_ for accessing the registry
**--cert-path** _path_ Use certificates at _path_ (cert.pem, key.pem) to connect to the registry
**--policy** _path-to-policy_ Path to a policy.json file to use for verifying signatures and deciding whether an image is trusted, overriding the default trust policy file.
**--registries.d** _dir_ use registry configuration files in _dir_ (e.g. for docker signature storage), overriding the default path.
**--insecure-policy** Adopt an insecure, permissive policy that allows anything. This obviates the need for a policy file.
**--tls-verify** _bool-value_ Require HTTPS and verify certificates when talking to docker registries (defaults to true)
**--registries.d** _dir_ use registry configuration files in _dir_ (e.g. for container signature storage), overriding the default path.
**--help**|**-h** Show help
@@ -70,17 +67,37 @@ Uses the system's trust policy to validate images, rejects images not trusted by
**--sign-by=**_key-id_ add a signature using that key ID for an image name corresponding to _destination-image_
**--src-creds** _username[:password]_ for accessing the source registry
**--dest-creds** _username[:password]_ for accessing the destination registry
**--src-cert-dir** _path_ Use certificates at _path_ (*.crt, *.cert, *.key) to connect to the source registry
**--src-tls-verify** _bool-value_ Require HTTPS and verify certificates when talking to the container source registry (defaults to true)
**--dest-cert-dir** _path_ Use certificates at _path_ (*.crt, *.cert, *.key) to connect to the destination registry
**--dest-ostree-tmp-dir** _path_ Directory to use for OSTree temporary files.
**--dest-tls-verify** _bool-value_ Require HTTPS and verify certificates when talking to the container destination registry (defaults to true)
Existing signatures, if any, are preserved as well.
## skopeo delete
**skopeo delete** _image-name_
Mark _image-name_ for deletion. To release the allocated disk space, you need to execute the docker registry garabage collector. E.g.,
Mark _image-name_ for deletion. To release the allocated disk space, you need to execute the container registry garbage collector. E.g.,
```sh
$ docker exec -it registry bin/registry garbage-collect /etc/docker/registry/config.yml
```
**--creds** _username[:password]_ for accessing the registry
**--cert-dir** _path_ Use certificates at _path_ (*.crt, *.cert, *.key) to connect to the registry
**--tls-verify** _bool-value_ Require HTTPS and verify certificates when talking to container registries (defaults to true)
Additionally, the registry must allow deletions by setting `REGISTRY_STORAGE_DELETE_ENABLED=true` for the registry daemon.
## skopeo inspect
@@ -92,12 +109,11 @@ Return low-level information about _image-name_ in a registry
_image-name_ name of image to retrieve information about
## skopeo layers
**skopeo layers** _image-name_
**--creds** _username[:password]_ for accessing the registry
Get image layers of _image-name_
**--cert-dir** _path_ Use certificates at _path_ (*.crt, *.cert, *.key) to connect to the registry
_image-name_ name of the image to retrieve layers
**--tls-verify** _bool-value_ Require HTTPS and verify certificates when talking to container registries (defaults to true)
## skopeo manifest-digest
**skopeo manifest-digest** _manifest-file_

View File

@@ -1,115 +0,0 @@
#!/usr/bin/env bash
PROJECT=github.com/projectatomic/skopeo
# Downloads dependencies into vendor/ directory
mkdir -p vendor
original_GOPATH=$GOPATH
export GOPATH="${PWD}/vendor:$GOPATH"
find="/usr/bin/find"
clone() {
local delete_vendor=true
if [ "x$1" = x--keep-vendor ]; then
delete_vendor=false
shift
fi
local vcs="$1"
local pkg="$2"
local rev="$3"
local url="$4"
: ${url:=https://$pkg}
local target="vendor/src/$pkg"
echo -n "$pkg @ $rev: "
if [ -d "$target" ]; then
echo -n 'rm old, '
rm -rf "$target"
fi
echo -n 'clone, '
case "$vcs" in
git)
git clone --quiet --no-checkout "$url" "$target"
( cd "$target" && git checkout --quiet "$rev" && git reset --quiet --hard "$rev" -- )
;;
hg)
hg clone --quiet --updaterev "$rev" "$url" "$target"
;;
esac
echo -n 'rm VCS, '
( cd "$target" && rm -rf .{git,hg} )
if $delete_vendor; then
echo -n 'rm vendor, '
( cd "$target" && rm -rf vendor Godeps/_workspace )
fi
echo done
}
clean() {
# If $GOPATH starts with ./vendor, (go list) shows the short-form import paths for packages inside ./vendor.
# So, reset GOPATH to the external value (without ./vendor), so that the grep -v works.
local packages=($(GOPATH=$original_GOPATH go list -e ./... | grep -v "^${PROJECT}/vendor"))
local platforms=( linux/amd64 linux/386 )
local buildTags=( )
echo
echo -n 'collecting import graph, '
local IFS=$'\n'
local imports=( $(
for platform in "${platforms[@]}"; do
export GOOS="${platform%/*}";
export GOARCH="${platform##*/}";
go list -e -tags "$buildTags" -f '{{join .Deps "\n"}}' "${packages[@]}"
go list -e -tags "$buildTags" -f '{{join .TestImports "\n"}}' "${packages[@]}"
done | grep -vE "^${PROJECT}" | sort -u
) )
# .TestImports does not include indirect dependencies, so do one more iteration.
imports+=( $(
go list -e -f '{{join .Deps "\n"}}' "${imports[@]}" | grep -vE "^${PROJECT}" | sort -u
) )
imports=( $(go list -e -f '{{if not .Standard}}{{.ImportPath}}{{end}}' "${imports[@]}") )
unset IFS
echo -n 'pruning unused packages, '
findArgs=(
# This directory contains only .c and .h files which are necessary
# -path vendor/src/github.com/mattn/go-sqlite3/code
)
for import in "${imports[@]}"; do
[ "${#findArgs[@]}" -eq 0 ] || findArgs+=( -or )
findArgs+=( -path "vendor/src/$import" )
done
local IFS=$'\n'
local prune=( $($find vendor -depth -type d -not '(' "${findArgs[@]}" ')') )
unset IFS
for dir in "${prune[@]}"; do
$find "$dir" -maxdepth 1 -not -type d -not -name 'LICENSE*' -not -name 'COPYING*' -exec rm -v -f '{}' ';'
rmdir "$dir" 2>/dev/null || true
done
echo -n 'pruning unused files, '
$find vendor -type f -name '*_test.go' -exec rm -v '{}' ';'
echo done
}
# Fix up hard-coded imports that refer to Godeps paths so they'll work with our vendoring
fix_rewritten_imports () {
local pkg="$1"
local remove="${pkg}/Godeps/_workspace/src/"
local target="vendor/src/$pkg"
echo "$pkg: fixing rewritten imports"
$find "$target" -name \*.go -exec sed -i -e "s|\"${remove}|\"|g" {} \;
}

7
hack/btrfs_tag.sh Executable file
View File

@@ -0,0 +1,7 @@
#!/bin/bash
cc -E - > /dev/null 2> /dev/null << EOF
#include <btrfs/version.h>
EOF
if test $? -ne 0 ; then
echo btrfs_noversion
fi

14
hack/libdm_tag.sh Executable file
View File

@@ -0,0 +1,14 @@
#!/bin/bash
tmpdir="$PWD/tmp.$RANDOM"
mkdir -p "$tmpdir"
trap 'rm -fr "$tmpdir"' EXIT
cc -c -o "$tmpdir"/libdm_tag.o -x c - > /dev/null 2> /dev/null << EOF
#include <libdevmapper.h>
int main() {
struct dm_task *task;
return 0;
}
EOF
if test $? -ne 0 ; then
echo libdm_no_deferred_remove
fi

View File

@@ -72,10 +72,10 @@ TESTFLAGS+=" -test.timeout=10m"
go_test_dir() {
dir=$1
(
echo '+ go test' $TESTFLAGS "${SKOPEO_PKG}${dir#.}"
echo '+ go test' $TESTFLAGS ${BUILDTAGS:+-tags "$BUILDTAGS"} "${SKOPEO_PKG}${dir#.}"
cd "$dir"
export DEST="$ABS_DEST" # we're in a subshell, so this is safe -- our integration-cli tests need DEST, and "cd" screws it up
go test $TESTFLAGS
go test $TESTFLAGS ${BUILDTAGS:+-tags "$BUILDTAGS"}
)
}

View File

@@ -1,14 +0,0 @@
#! /bin/bash
: ${PROG:=$(basename ${BASH_SOURCE})}
_cli_bash_autocomplete() {
local cur opts base
COMPREPLY=()
cur="${COMP_WORDS[COMP_CWORD]}"
opts=$( ${COMP_WORDS[@]:0:$COMP_CWORD} --generate-bash-completion )
COMPREPLY=( $(compgen -W "${opts}" -- ${cur}) )
return 0
}
complete -F _cli_bash_autocomplete $PROG

View File

@@ -8,7 +8,7 @@ bundle_test_integration() {
# subshell so that we can export PATH without breaking other things
(
make binary-local
make binary-local ${BUILDTAGS:+BUILDTAGS="$BUILDTAGS"}
make install
export GO15VENDOREXPERIMENT=1
bundle_test_integration

48
hack/vendor.sh Executable file → Normal file
View File

@@ -1,39 +1,15 @@
#!/usr/bin/env bash
#!/bin/bash
# This file is just wrapper around vndr (github.com/LK4D4/vndr) tool.
# For updating dependencies you should change `vendor.conf` file in root of the
# project. Please refer to https://github.com/LK4D4/vndr/blob/master/README.md for
# vndr usage.
set -e
cd "$(dirname "$BASH_SOURCE")/.."
rm -rf vendor/
source 'hack/.vendor-helpers.sh'
if ! hash vndr; then
echo "Please install vndr with \"go get github.com/LK4D4/vndr\" and put it in your \$GOPATH"
exit 1
fi
clone git github.com/urfave/cli v1.17.0
clone git github.com/containers/image master
clone git gopkg.in/cheggaaa/pb.v1 ad4efe000aa550bb54918c06ebbadc0ff17687b9 https://github.com/cheggaaa/pb
clone git github.com/Sirupsen/logrus v0.10.0
clone git github.com/go-check/check v1
clone git github.com/stretchr/testify v1.1.3
clone git github.com/davecgh/go-spew master
clone git github.com/pmezard/go-difflib master
# docker deps from https://github.com/docker/docker/blob/v1.11.2/hack/vendor.sh
clone git github.com/docker/docker v1.11.2
clone git github.com/docker/engine-api v0.3.3
clone git github.com/docker/go-connections v0.2.0
clone git github.com/vbatts/tar-split v0.9.11
clone git github.com/gorilla/context 14f550f51a
clone git github.com/docker/go-units 651fc226e7441360384da338d0fd37f2440ffbe3
clone git golang.org/x/net master https://github.com/golang/net.git
# end docker deps
clone git github.com/docker/distribution master
clone git github.com/docker/libtrust master
clone git github.com/opencontainers/runc master
clone git github.com/opencontainers/image-spec master
clone git github.com/mtrmac/gpgme master
# openshift/origin' k8s dependencies as of OpenShift v1.1.5
clone git github.com/golang/glog 44145f04b68cf362d9c4df2182967c2275eaefed
clone git k8s.io/kubernetes 4a3f9c5b19c7ff804cbc1bf37a15c044ca5d2353 https://github.com/openshift/kubernetes
clone git github.com/ghodss/yaml 73d445a93680fa1a78ae23a5839bad48f32ba1ee
clone git gopkg.in/yaml.v2 d466437aa4adc35830964cffc5b5f262c63ddcb4
clone git github.com/imdario/mergo 6633656539c1639d9d78127b7d47c622b5d7b6dc
clean
mv vendor/src/* vendor/
vndr "$@"

View File

@@ -12,9 +12,6 @@ import (
const (
privateRegistryURL0 = "127.0.0.1:5000"
privateRegistryURL1 = "127.0.0.1:5001"
privateRegistryURL2 = "127.0.0.1:5002"
privateRegistryURL3 = "127.0.0.1:5003"
privateRegistryURL4 = "127.0.0.1:5004"
)
func Test(t *testing.T) {
@@ -26,15 +23,13 @@ func init() {
}
type SkopeoSuite struct {
regV1 *testRegistryV1
regV2 *testRegistryV2
regV2Shema1 *testRegistryV2
regV1WithAuth *testRegistryV1 // does v1 support auth?
regV2WithAuth *testRegistryV2
}
func (s *SkopeoSuite) SetUpSuite(c *check.C) {
_, err := exec.LookPath(skopeoBinary)
c.Assert(err, check.IsNil)
}
func (s *SkopeoSuite) TearDownSuite(c *check.C) {
@@ -42,24 +37,14 @@ func (s *SkopeoSuite) TearDownSuite(c *check.C) {
}
func (s *SkopeoSuite) SetUpTest(c *check.C) {
_, err := exec.LookPath(skopeoBinary)
c.Assert(err, check.IsNil)
s.regV1 = setupRegistryV1At(c, privateRegistryURL0, false) // TODO:(runcom)
s.regV2 = setupRegistryV2At(c, privateRegistryURL1, false, false)
s.regV2Shema1 = setupRegistryV2At(c, privateRegistryURL2, false, true)
s.regV1WithAuth = setupRegistryV1At(c, privateRegistryURL3, true) // not used
s.regV2WithAuth = setupRegistryV2At(c, privateRegistryURL4, true, false)
s.regV2 = setupRegistryV2At(c, privateRegistryURL0, false, false)
s.regV2WithAuth = setupRegistryV2At(c, privateRegistryURL1, true, false)
}
func (s *SkopeoSuite) TearDownTest(c *check.C) {
// not checking V1 registries now...
if s.regV2 != nil {
s.regV2.Close()
}
if s.regV2Shema1 != nil {
s.regV2Shema1.Close()
}
if s.regV2WithAuth != nil {
//cmd := exec.Command("docker", "logout", s.regV2WithAuth)
//c.Assert(cmd.Run(), check.IsNil)
@@ -75,22 +60,21 @@ func (s *SkopeoSuite) TestVersion(c *check.C) {
assertSkopeoSucceeds(c, wanted, "--version")
}
const (
errFetchManifestRegexp = ".*error fetching manifest: status code: %s.*"
)
func (s *SkopeoSuite) TestCanAuthToPrivateRegistryV2WithoutDockerCfg(c *check.C) {
// TODO(runcom)
c.Skip("we need to restore --username --password flags!")
wanted := fmt.Sprintf(errFetchManifestRegexp, "401")
assertSkopeoFails(c, wanted, "--docker-cfg=''", "--username="+s.regV2WithAuth.username, "--password="+s.regV2WithAuth.password, "inspect", fmt.Sprintf("docker://%s/busybox:latest", s.regV2WithAuth.url))
wanted := ".*manifest unknown: manifest unknown.*"
assertSkopeoFails(c, wanted, "--tls-verify=false", "inspect", "--creds="+s.regV2WithAuth.username+":"+s.regV2WithAuth.password, fmt.Sprintf("docker://%s/busybox:latest", s.regV2WithAuth.url))
}
func (s *SkopeoSuite) TestNeedAuthToPrivateRegistryV2WithoutDockerCfg(c *check.C) {
// TODO(runcom): mock the empty docker-cfg by removing it in the test itself (?)
c.Skip("mock empty docker config")
wanted := fmt.Sprintf(errFetchManifestRegexp, "401")
assertSkopeoFails(c, wanted, "--docker-cfg=''", "inspect", fmt.Sprintf("docker://%s/busybox:latest", s.regV2WithAuth.url))
wanted := ".*unauthorized: authentication required.*"
assertSkopeoFails(c, wanted, "--tls-verify=false", "inspect", fmt.Sprintf("docker://%s/busybox:latest", s.regV2WithAuth.url))
}
func (s *SkopeoSuite) TestCertDirInsteadOfCertPath(c *check.C) {
wanted := ".*flag provided but not defined: -cert-path.*"
assertSkopeoFails(c, wanted, "--tls-verify=false", "inspect", fmt.Sprintf("docker://%s/busybox:latest", s.regV2WithAuth.url), "--cert-path=/")
wanted = ".*unauthorized: authentication required.*"
assertSkopeoFails(c, wanted, "--tls-verify=false", "inspect", fmt.Sprintf("docker://%s/busybox:latest", s.regV2WithAuth.url), "--cert-dir=/etc/docker/certs.d/")
}
// TODO(runcom): as soon as we can push to registries ensure you can inspect here
@@ -98,8 +82,8 @@ func (s *SkopeoSuite) TestNeedAuthToPrivateRegistryV2WithoutDockerCfg(c *check.C
func (s *SkopeoSuite) TestNoNeedAuthToPrivateRegistryV2ImageNotFound(c *check.C) {
out, err := exec.Command(skopeoBinary, "--tls-verify=false", "inspect", fmt.Sprintf("docker://%s/busybox:latest", s.regV2.url)).CombinedOutput()
c.Assert(err, check.NotNil, check.Commentf(string(out)))
wanted := fmt.Sprintf(errFetchManifestRegexp, "404")
wanted := ".*manifest unknown.*"
c.Assert(string(out), check.Matches, "(?s)"+wanted) // (?s) : '.' will also match newlines
wanted = fmt.Sprintf(errFetchManifestRegexp, "401")
wanted = ".*unauthorized: authentication required.*"
c.Assert(string(out), check.Not(check.Matches), "(?s)"+wanted) // (?s) : '.' will also match newlines
}

View File

@@ -1,26 +1,35 @@
package main
import (
"encoding/json"
"fmt"
"io/ioutil"
"net/http"
"net/http/httptest"
"os"
"path/filepath"
"strings"
"github.com/containers/image/manifest"
"github.com/containers/image/signature"
"github.com/go-check/check"
"github.com/opencontainers/go-digest"
)
func init() {
check.Suite(&CopySuite{})
}
const v2DockerRegistryURL = "localhost:5555"
const (
v2DockerRegistryURL = "localhost:5555" // Update also policy.json
v2s1DockerRegistryURL = "localhost:5556"
)
type CopySuite struct {
cluster *openshiftCluster
registry *testRegistryV2
gpgHome string
cluster *openshiftCluster
registry *testRegistryV2
s1Registry *testRegistryV2
gpgHome string
}
func (s *CopySuite) SetUpSuite(c *check.C) {
@@ -30,7 +39,7 @@ func (s *CopySuite) SetUpSuite(c *check.C) {
s.cluster = startOpenshiftCluster(c) // FIXME: Set up TLS for the docker registry port instead of using "--tls-verify=false" all over the place.
for _, stream := range []string{"unsigned", "personal", "official", "naming", "cosigned", "compression"} {
for _, stream := range []string{"unsigned", "personal", "official", "naming", "cosigned", "compression", "schema1", "schema2"} {
isJSON := fmt.Sprintf(`{
"kind": "ImageStream",
"apiVersion": "v1",
@@ -42,7 +51,9 @@ func (s *CopySuite) SetUpSuite(c *check.C) {
runCommandWithInput(c, isJSON, "oc", "create", "-f", "-")
}
s.registry = setupRegistryV2At(c, v2DockerRegistryURL, false, false) // FIXME: Set up TLS for the docker registry port instead of using "--tls-verify=false" all over the place.
// FIXME: Set up TLS for the docker registry port instead of using "--tls-verify=false" all over the place.
s.registry = setupRegistryV2At(c, v2DockerRegistryURL, false, false)
s.s1Registry = setupRegistryV2At(c, v2s1DockerRegistryURL, false, true)
gpgHome, err := ioutil.TempDir("", "skopeo-gpg")
c.Assert(err, check.IsNil)
@@ -68,33 +79,24 @@ func (s *CopySuite) TearDownSuite(c *check.C) {
if s.registry != nil {
s.registry.Close()
}
if s.s1Registry != nil {
s.s1Registry.Close()
}
if s.cluster != nil {
s.cluster.tearDown()
s.cluster.tearDown(c)
}
}
// preparePolicyFixture applies edits to fixtures/policy.json and returns a path to the temporary file.
// Callers should defer os.Remove(the_returned_path)
func preparePolicyFixture(c *check.C, edits map[string]string) string {
commands := []string{}
for template, value := range edits {
commands = append(commands, fmt.Sprintf("s,%s,%s,g", template, value))
}
json := combinedOutputOfCommand(c, "sed", strings.Join(commands, "; "), "fixtures/policy.json")
file, err := ioutil.TempFile("", "policy.json")
c.Assert(err, check.IsNil)
path := file.Name()
_, err = file.Write([]byte(json))
c.Assert(err, check.IsNil)
err = file.Close()
c.Assert(err, check.IsNil)
return path
func (s *CopySuite) TestCopyFailsWithManifestList(c *check.C) {
assertSkopeoFails(c, ".*can not copy docker://estesp/busybox:latest: manifest contains multiple images.*", "copy", "docker://estesp/busybox:latest", "dir:somedir")
}
// The most basic (skopeo copy) use:
func (s *CopySuite) TestCopySimple(c *check.C) {
func (s *CopySuite) TestCopyFailsWhenImageOSDoesntMatchRuntimeOS(c *check.C) {
c.Skip("can't run this on Travis")
assertSkopeoFails(c, `.*image operating system "windows" cannot be used on "linux".*`, "copy", "docker://microsoft/windowsservercore", "containers-storage:test")
}
func (s *CopySuite) TestCopySimpleAtomicRegistry(c *check.C) {
dir1, err := ioutil.TempDir("", "copy-1")
c.Assert(err, check.IsNil)
defer os.RemoveAll(dir1)
@@ -104,11 +106,32 @@ func (s *CopySuite) TestCopySimple(c *check.C) {
// FIXME: It would be nice to use one of the local Docker registries instead of needing an Internet connection.
// "pull": docker: → dir:
assertSkopeoSucceeds(c, "", "copy", "docker://estesp/busybox:latest", "dir:"+dir1)
assertSkopeoSucceeds(c, "", "copy", "docker://estesp/busybox:amd64", "dir:"+dir1)
// "push": dir: → atomic:
assertSkopeoSucceeds(c, "", "--tls-verify=false", "--debug", "copy", "dir:"+dir1, "atomic:localhost:5000/myns/unsigned:unsigned")
// The result of pushing and pulling is an unmodified image.
// The result of pushing and pulling is an equivalent image, except for schema1 embedded names.
assertSkopeoSucceeds(c, "", "--tls-verify=false", "copy", "atomic:localhost:5000/myns/unsigned:unsigned", "dir:"+dir2)
assertSchema1DirImagesAreEqualExceptNames(c, dir1, "estesp/busybox:amd64", dir2, "myns/unsigned:unsigned")
}
// The most basic (skopeo copy) use:
func (s *CopySuite) TestCopySimple(c *check.C) {
const ourRegistry = "docker://" + v2DockerRegistryURL + "/"
dir1, err := ioutil.TempDir("", "copy-1")
c.Assert(err, check.IsNil)
defer os.RemoveAll(dir1)
dir2, err := ioutil.TempDir("", "copy-2")
c.Assert(err, check.IsNil)
defer os.RemoveAll(dir2)
// FIXME: It would be nice to use one of the local Docker registries instead of needing an Internet connection.
// "pull": docker: → dir:
assertSkopeoSucceeds(c, "", "copy", "docker://busybox", "dir:"+dir1)
// "push": dir: → docker(v2s2):
assertSkopeoSucceeds(c, "", "--tls-verify=false", "--debug", "copy", "dir:"+dir1, ourRegistry+"busybox:unsigned")
// The result of pushing and pulling is an unmodified image.
assertSkopeoSucceeds(c, "", "--tls-verify=false", "copy", ourRegistry+"busybox:unsigned", "dir:"+dir2)
out := combinedOutputOfCommand(c, "diff", "-urN", dir1, dir2)
c.Assert(out, check.Equals, "")
@@ -120,8 +143,52 @@ func (s *CopySuite) TestCopySimple(c *check.C) {
assertSkopeoSucceeds(c, "", "copy", "docker://busybox:latest", "oci:"+ociDest)
_, err = os.Stat(ociDest)
c.Assert(err, check.IsNil)
}
// FIXME: Also check pushing to docker://
// Check whether dir: images in dir1 and dir2 are equal, ignoring schema1 signatures.
func assertDirImagesAreEqual(c *check.C, dir1, dir2 string) {
// The manifests may have different JWS signatures; so, compare the manifests by digests, which
// strips the signatures.
digests := []digest.Digest{}
for _, dir := range []string{dir1, dir2} {
manifestPath := filepath.Join(dir, "manifest.json")
m, err := ioutil.ReadFile(manifestPath)
c.Assert(err, check.IsNil)
digest, err := manifest.Digest(m)
c.Assert(err, check.IsNil)
digests = append(digests, digest)
}
c.Assert(digests[0], check.Equals, digests[1])
// Then compare the rest file by file.
out := combinedOutputOfCommand(c, "diff", "-urN", "-x", "manifest.json", dir1, dir2)
c.Assert(out, check.Equals, "")
}
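The digest comparison above relies on a property of the vendored containers/image manifest package: for signed schema1 manifests, manifest.Digest hashes the signature-stripped payload. A minimal standalone sketch of the same idea, with hypothetical file paths:

```go
package main

import (
	"fmt"
	"io/ioutil"

	"github.com/containers/image/manifest"
)

func main() {
	// Hypothetical paths: two dir: copies of one image whose manifests
	// may differ only in their schema1 JWS signatures.
	m1, err := ioutil.ReadFile("copy-1/manifest.json")
	if err != nil {
		panic(err)
	}
	m2, err := ioutil.ReadFile("copy-2/manifest.json")
	if err != nil {
		panic(err)
	}
	// Digest strips schema1 signatures before hashing, so copies that
	// differ only in signatures produce the same digest.
	d1, err := manifest.Digest(m1)
	if err != nil {
		panic(err)
	}
	d2, err := manifest.Digest(m2)
	if err != nil {
		panic(err)
	}
	fmt.Println("equal ignoring signatures:", d1 == d2)
}
```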
// Check whether schema1 dir: images in dir1 and dir2 are equal, ignoring schema1 signatures and the embedded name/tag values, which are verified against the expected ref1/ref2 values.
func assertSchema1DirImagesAreEqualExceptNames(c *check.C, dir1, ref1, dir2, ref2 string) {
// The manifests may have different JWS signatures and names; so, unmarshal and delete these elements.
manifests := []map[string]interface{}{}
for dir, ref := range map[string]string{dir1: ref1, dir2: ref2} {
manifestPath := filepath.Join(dir, "manifest.json")
m, err := ioutil.ReadFile(manifestPath)
c.Assert(err, check.IsNil)
data := map[string]interface{}{}
err = json.Unmarshal(m, &data)
c.Assert(err, check.IsNil)
c.Assert(data["schemaVersion"], check.Equals, float64(1))
colon := strings.LastIndex(ref, ":")
c.Assert(colon, check.Not(check.Equals), -1)
c.Assert(data["name"], check.Equals, ref[:colon])
c.Assert(data["tag"], check.Equals, ref[colon+1:])
for _, key := range []string{"signatures", "name", "tag"} {
delete(data, key)
}
manifests = append(manifests, data)
}
c.Assert(manifests[0], check.DeepEquals, manifests[1])
// Then compare the rest file by file.
out := combinedOutputOfCommand(c, "diff", "-urN", "-x", "manifest.json", dir1, dir2)
c.Assert(out, check.Equals, "")
}
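For context, a schema1 manifest embeds the repository name and tag in the JSON itself, next to the JWS signatures, which is why those three keys are first checked and then dropped before the deep comparison. A minimal sketch using a made-up, heavily trimmed manifest:

```go
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// Made-up schema1 fragment: "name" and "tag" change whenever the
	// image is pushed to a different repo:tag, even if the layers and
	// history stay identical.
	raw := []byte(`{"schemaVersion": 1, "name": "myns/unsigned", "tag": "unsigned", "fsLayers": []}`)
	data := map[string]interface{}{}
	if err := json.Unmarshal(raw, &data); err != nil {
		panic(err)
	}
	for _, key := range []string{"signatures", "name", "tag"} {
		delete(data, key)
	}
	fmt.Println(data) // e.g. map[fsLayers:[] schemaVersion:1]
}
```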
// Streaming (skopeo copy)
@@ -139,34 +206,72 @@ func (s *CopySuite) TestCopyStreaming(c *check.C) {
// Compare (copies of) the original and the copy:
assertSkopeoSucceeds(c, "", "copy", "docker://estesp/busybox:amd64", "dir:"+dir1)
assertSkopeoSucceeds(c, "", "--tls-verify=false", "copy", "atomic:localhost:5000/myns/unsigned:streaming", "dir:"+dir2)
// The manifests will have different JWS signatures; so, compare the manifests by digests, which
// strips the signatures, and remove them, comparing the rest file by file.
digests := []string{}
for _, dir := range []string{dir1, dir2} {
manifestPath := filepath.Join(dir, "manifest.json")
m, err := ioutil.ReadFile(manifestPath)
c.Assert(err, check.IsNil)
digest, err := manifest.Digest(m)
c.Assert(err, check.IsNil)
digests = append(digests, digest)
err = os.Remove(manifestPath)
c.Assert(err, check.IsNil)
c.Logf("Manifest file %s (digest %s) removed", manifestPath, digest)
}
c.Assert(digests[0], check.Equals, digests[1])
out := combinedOutputOfCommand(c, "diff", "-urN", dir1, dir2)
c.Assert(out, check.Equals, "")
assertSchema1DirImagesAreEqualExceptNames(c, dir1, "estesp/busybox:amd64", dir2, "myns/unsigned:streaming")
// FIXME: Also check pushing to docker://
}
// OCI round-trip testing. It's very important to make sure that OCI <-> Docker
// conversion works (while skopeo handles many things, one of the most obvious
// benefits of a tool like skopeo is that you can use OCI tooling to create an
// image and then as the final step convert the image to a non-standard format
// like Docker). But this only works if we _test_ it.
func (s *CopySuite) TestCopyOCIRoundTrip(c *check.C) {
const ourRegistry = "docker://" + v2DockerRegistryURL + "/"
oci1, err := ioutil.TempDir("", "oci-1")
c.Assert(err, check.IsNil)
defer os.RemoveAll(oci1)
oci2, err := ioutil.TempDir("", "oci-2")
c.Assert(err, check.IsNil)
defer os.RemoveAll(oci2)
// Docker -> OCI
assertSkopeoSucceeds(c, "", "--tls-verify=false", "--debug", "copy", "docker://busybox", "oci:"+oci1+":latest")
// OCI -> Docker
assertSkopeoSucceeds(c, "", "--tls-verify=false", "--debug", "copy", "oci:"+oci1+":latest", ourRegistry+"original/busybox:oci_copy")
// Docker -> OCI
assertSkopeoSucceeds(c, "", "--tls-verify=false", "--debug", "copy", ourRegistry+"original/busybox:oci_copy", "oci:"+oci2+":latest")
// OCI -> Docker
assertSkopeoSucceeds(c, "", "--tls-verify=false", "--debug", "copy", "oci:"+oci2+":latest", ourRegistry+"original/busybox:oci_copy2")
// TODO: Add some more tags to output to, and check those work properly.
// First, make sure the OCI blobs are the same. This should _always_ be true.
out := combinedOutputOfCommand(c, "diff", "-urN", oci1+"/blobs", oci2+"/blobs")
c.Assert(out, check.Equals, "")
// For some silly reason we pass a logger to the OCI library here...
//logger := log.New(os.Stderr, "", 0)
// TODO: Verify using the upstream OCI image validator.
//err = image.ValidateLayout(oci1, nil, logger)
//c.Assert(err, check.IsNil)
//err = image.ValidateLayout(oci2, nil, logger)
//c.Assert(err, check.IsNil)
// Now verify that everything is identical. Currently this is true, but
// because we recompute the manifests on-the-fly this doesn't necessarily
// always have to be true (but if this breaks in the future __PLEASE__ make
// sure that the breakage actually makes sense before removing this check).
out = combinedOutputOfCommand(c, "diff", "-urN", oci1, oci2)
c.Assert(out, check.Equals, "")
}
// --sign-by and --policy copy, primarily using atomic:
func (s *CopySuite) TestCopySignatures(c *check.C) {
mech, _, err := signature.NewEphemeralGPGSigningMechanism([]byte{})
c.Assert(err, check.IsNil)
defer mech.Close()
if err := mech.SupportsSigning(); err != nil { // FIXME? Test that verification and policy enforcement works, using signatures from fixtures
c.Skip(fmt.Sprintf("Signing not supported: %v", err))
}
dir, err := ioutil.TempDir("", "signatures-dest")
c.Assert(err, check.IsNil)
defer os.RemoveAll(dir)
dirDest := "dir:" + dir
policy := preparePolicyFixture(c, map[string]string{"@keydir@": s.gpgHome})
policy := fileFromFixture(c, "fixtures/policy.json", map[string]string{"@keydir@": s.gpgHome})
defer os.Remove(policy)
// type: reject
@@ -178,38 +283,45 @@ func (s *CopySuite) TestCopySignatures(c *check.C) {
// type: signedBy
// Sign the images
assertSkopeoSucceeds(c, "", "--tls-verify=false", "copy", "--sign-by", "personal@example.com", "docker://busybox:1.23", "atomic:localhost:5000/myns/personal:personal")
assertSkopeoSucceeds(c, "", "--tls-verify=false", "copy", "--sign-by", "official@example.com", "docker://busybox:1.23.2", "atomic:localhost:5000/myns/official:official")
assertSkopeoSucceeds(c, "", "--tls-verify=false", "copy", "--sign-by", "personal@example.com", "docker://busybox:1.26", "atomic:localhost:5006/myns/personal:personal")
assertSkopeoSucceeds(c, "", "--tls-verify=false", "copy", "--sign-by", "official@example.com", "docker://busybox:1.26.1", "atomic:localhost:5006/myns/official:official")
// Verify that we can pull them
assertSkopeoSucceeds(c, "", "--tls-verify=false", "--policy", policy, "copy", "atomic:localhost:5000/myns/personal:personal", dirDest)
assertSkopeoSucceeds(c, "", "--tls-verify=false", "--policy", policy, "copy", "atomic:localhost:5000/myns/official:official", dirDest)
assertSkopeoSucceeds(c, "", "--tls-verify=false", "--policy", policy, "copy", "atomic:localhost:5006/myns/personal:personal", dirDest)
assertSkopeoSucceeds(c, "", "--tls-verify=false", "--policy", policy, "copy", "atomic:localhost:5006/myns/official:official", dirDest)
// Verify that mis-signed images are rejected
assertSkopeoSucceeds(c, "", "--tls-verify=false", "copy", "atomic:localhost:5000/myns/personal:personal", "atomic:localhost:5000/myns/official:attack")
assertSkopeoSucceeds(c, "", "--tls-verify=false", "copy", "atomic:localhost:5000/myns/official:official", "atomic:localhost:5000/myns/personal:attack")
assertSkopeoSucceeds(c, "", "--tls-verify=false", "copy", "atomic:localhost:5006/myns/personal:personal", "atomic:localhost:5006/myns/official:attack")
assertSkopeoSucceeds(c, "", "--tls-verify=false", "copy", "atomic:localhost:5006/myns/official:official", "atomic:localhost:5006/myns/personal:attack")
assertSkopeoFails(c, ".*Source image rejected: Invalid GPG signature.*",
"--tls-verify=false", "--policy", policy, "copy", "atomic:localhost:5000/myns/personal:attack", dirDest)
"--tls-verify=false", "--policy", policy, "copy", "atomic:localhost:5006/myns/personal:attack", dirDest)
assertSkopeoFails(c, ".*Source image rejected: Invalid GPG signature.*",
"--tls-verify=false", "--policy", policy, "copy", "atomic:localhost:5000/myns/official:attack", dirDest)
"--tls-verify=false", "--policy", policy, "copy", "atomic:localhost:5006/myns/official:attack", dirDest)
// Verify that signed identity is verified.
assertSkopeoSucceeds(c, "", "--tls-verify=false", "copy", "atomic:localhost:5000/myns/official:official", "atomic:localhost:5000/myns/naming:test1")
assertSkopeoFails(c, ".*Source image rejected: Signature for identity localhost:5000/myns/official:official is not accepted.*",
"--tls-verify=false", "--policy", policy, "copy", "atomic:localhost:5000/myns/naming:test1", dirDest)
assertSkopeoSucceeds(c, "", "--tls-verify=false", "copy", "atomic:localhost:5006/myns/official:official", "atomic:localhost:5006/myns/naming:test1")
assertSkopeoFails(c, ".*Source image rejected: Signature for identity localhost:5006/myns/official:official is not accepted.*",
"--tls-verify=false", "--policy", policy, "copy", "atomic:localhost:5006/myns/naming:test1", dirDest)
// signedIdentity works
assertSkopeoSucceeds(c, "", "--tls-verify=false", "copy", "atomic:localhost:5000/myns/official:official", "atomic:localhost:5000/myns/naming:naming")
assertSkopeoSucceeds(c, "", "--tls-verify=false", "--policy", policy, "copy", "atomic:localhost:5000/myns/naming:naming", dirDest)
assertSkopeoSucceeds(c, "", "--tls-verify=false", "copy", "atomic:localhost:5006/myns/official:official", "atomic:localhost:5006/myns/naming:naming")
assertSkopeoSucceeds(c, "", "--tls-verify=false", "--policy", policy, "copy", "atomic:localhost:5006/myns/naming:naming", dirDest)
// Verify that cosigning requirements are enforced
assertSkopeoSucceeds(c, "", "--tls-verify=false", "copy", "atomic:localhost:5000/myns/official:official", "atomic:localhost:5000/myns/cosigned:cosigned")
assertSkopeoSucceeds(c, "", "--tls-verify=false", "copy", "atomic:localhost:5006/myns/official:official", "atomic:localhost:5006/myns/cosigned:cosigned")
assertSkopeoFails(c, ".*Source image rejected: Invalid GPG signature.*",
"--tls-verify=false", "--policy", policy, "copy", "atomic:localhost:5000/myns/cosigned:cosigned", dirDest)
"--tls-verify=false", "--policy", policy, "copy", "atomic:localhost:5006/myns/cosigned:cosigned", dirDest)
assertSkopeoSucceeds(c, "", "--tls-verify=false", "copy", "--sign-by", "personal@example.com", "atomic:localhost:5000/myns/official:official", "atomic:localhost:5000/myns/cosigned:cosigned")
assertSkopeoSucceeds(c, "", "--tls-verify=false", "--policy", policy, "copy", "atomic:localhost:5000/myns/cosigned:cosigned", dirDest)
assertSkopeoSucceeds(c, "", "--tls-verify=false", "copy", "--sign-by", "personal@example.com", "atomic:localhost:5006/myns/official:official", "atomic:localhost:5006/myns/cosigned:cosigned")
assertSkopeoSucceeds(c, "", "--tls-verify=false", "--policy", policy, "copy", "atomic:localhost:5006/myns/cosigned:cosigned", dirDest)
}
// --policy copy for dir: sources
func (s *CopySuite) TestCopyDirSignatures(c *check.C) {
mech, _, err := signature.NewEphemeralGPGSigningMechanism([]byte{})
c.Assert(err, check.IsNil)
defer mech.Close()
if err := mech.SupportsSigning(); err != nil { // FIXME? Test that verification and policy enforcement works, using signatures from fixtures
c.Skip(fmt.Sprintf("Signing not supported: %v", err))
}
topDir, err := ioutil.TempDir("", "dir-signatures-top")
c.Assert(err, check.IsNil)
defer os.RemoveAll(topDir)
@@ -222,7 +334,7 @@ func (s *CopySuite) TestCopyDirSignatures(c *check.C) {
// Note the "/@dirpath@": The value starts with a slash so that it is not rejected in other tests which do not replace it,
// but we must ensure that the result is a canonical path, not something starting with a "//".
policy := preparePolicyFixture(c, map[string]string{"@keydir@": s.gpgHome, "/@dirpath@": topDir + "/restricted"})
policy := fileFromFixture(c, "fixtures/policy.json", map[string]string{"@keydir@": s.gpgHome, "/@dirpath@": topDir + "/restricted"})
defer os.Remove(policy)
// Get some images.
@@ -260,10 +372,10 @@ func (s *CopySuite) TestCopyCompression(c *check.C) {
defer os.RemoveAll(topDir)
for i, t := range []struct{ fixture, remote string }{
//{"uncompressed-image-s1", "docker://" + v2DockerRegistryURL + "/compression/compression:s1"}, // FIXME: depends on push to tag working
//{"uncompressed-image-s2", "docker://" + v2DockerRegistryURL + "/compression/compression:s2"}, // FIXME: depends on push to tag working
{"uncompressed-image-s1", "docker://" + v2DockerRegistryURL + "/compression/compression:s1"},
{"uncompressed-image-s2", "docker://" + v2DockerRegistryURL + "/compression/compression:s2"},
{"uncompressed-image-s1", "atomic:localhost:5000/myns/compression:s1"},
//{"uncompressed-image-s2", "atomic:localhost:5000/myns/compression:s2"}, // FIXME: The unresolved "MANIFEST_UNKNOWN"/"unexpected end of JSON input" failure
{"uncompressed-image-s2", "atomic:localhost:5000/myns/compression:s2"},
} {
dir := filepath.Join(topDir, fmt.Sprintf("case%d", i))
err := os.MkdirAll(dir, 0755)
@@ -291,3 +403,221 @@ func (s *CopySuite) TestCopyCompression(c *check.C) {
}
}
}
func findRegularFiles(c *check.C, root string) []string {
result := []string{}
err := filepath.Walk(root, filepath.WalkFunc(func(path string, info os.FileInfo, err error) error {
if err != nil {
return err
}
if info.Mode().IsRegular() {
result = append(result, path)
}
return nil
}))
c.Assert(err, check.IsNil)
return result
}
// --sign-by and policy use for docker: with sigstore
func (s *CopySuite) TestCopyDockerSigstore(c *check.C) {
mech, _, err := signature.NewEphemeralGPGSigningMechanism([]byte{})
c.Assert(err, check.IsNil)
defer mech.Close()
if err := mech.SupportsSigning(); err != nil { // FIXME? Test that verification and policy enforcement works, using signatures from fixtures
c.Skip(fmt.Sprintf("Signing not supported: %v", err))
}
const ourRegistry = "docker://" + v2DockerRegistryURL + "/"
tmpDir, err := ioutil.TempDir("", "signatures-sigstore")
c.Assert(err, check.IsNil)
defer os.RemoveAll(tmpDir)
copyDest := filepath.Join(tmpDir, "dest")
err = os.Mkdir(copyDest, 0755)
c.Assert(err, check.IsNil)
dirDest := "dir:" + copyDest
plainSigstore := filepath.Join(tmpDir, "sigstore")
splitSigstoreStaging := filepath.Join(tmpDir, "sigstore-staging")
splitSigstoreReadServerHandler := http.NotFoundHandler()
splitSigstoreReadServer := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
splitSigstoreReadServerHandler.ServeHTTP(w, r)
}))
defer splitSigstoreReadServer.Close()
policy := fileFromFixture(c, "fixtures/policy.json", map[string]string{"@keydir@": s.gpgHome})
defer os.Remove(policy)
registriesDir := filepath.Join(tmpDir, "registries.d")
err = os.Mkdir(registriesDir, 0755)
c.Assert(err, check.IsNil)
registriesFile := fileFromFixture(c, "fixtures/registries.yaml",
map[string]string{"@sigstore@": plainSigstore, "@split-staging@": splitSigstoreStaging, "@split-read@": splitSigstoreReadServer.URL})
err = os.Symlink(registriesFile, filepath.Join(registriesDir, "registries.yaml"))
c.Assert(err, check.IsNil)
// Get an image to work with. Also verifies that we can use Docker repositories with no sigstore configured.
assertSkopeoSucceeds(c, "", "--tls-verify=false", "--registries.d", registriesDir, "copy", "docker://busybox", ourRegistry+"original/busybox")
// Pulling an unsigned image fails.
assertSkopeoFails(c, ".*Source image rejected: A signature was required, but no signature exists.*",
"--tls-verify=false", "--policy", policy, "--registries.d", registriesDir, "copy", ourRegistry+"original/busybox", dirDest)
// Signing with sigstore defined succeeds,
assertSkopeoSucceeds(c, "", "--tls-verify=false", "--registries.d", registriesDir, "copy", "--sign-by", "personal@example.com", ourRegistry+"original/busybox", ourRegistry+"signed/busybox")
// a signature file has been created,
foundFiles := findRegularFiles(c, plainSigstore)
c.Assert(foundFiles, check.HasLen, 1)
// and pulling a signed image succeeds.
assertSkopeoSucceeds(c, "", "--tls-verify=false", "--policy", policy, "--registries.d", registriesDir, "copy", ourRegistry+"signed/busybox", dirDest)
// Deleting the image succeeds,
assertSkopeoSucceeds(c, "", "--tls-verify=false", "--registries.d", registriesDir, "delete", ourRegistry+"signed/busybox")
// and the signature file has been deleted (but we leave the directories around).
foundFiles = findRegularFiles(c, plainSigstore)
c.Assert(foundFiles, check.HasLen, 0)
// Signing with a read/write sigstore split succeeds,
assertSkopeoSucceeds(c, "", "--tls-verify=false", "--registries.d", registriesDir, "copy", "--sign-by", "personal@example.com", ourRegistry+"original/busybox", ourRegistry+"public/busybox")
// and a signature file has been created.
foundFiles = findRegularFiles(c, splitSigstoreStaging)
c.Assert(foundFiles, check.HasLen, 1)
// Pulling the image fails because the read sigstore URL has not been populated:
assertSkopeoFails(c, ".*Source image rejected: A signature was required, but no signature exists.*",
"--tls-verify=false", "--policy", policy, "--registries.d", registriesDir, "copy", ourRegistry+"public/busybox", dirDest)
// Pulling the image succeeds after the read sigstore URL is available:
splitSigstoreReadServerHandler = http.FileServer(http.Dir(splitSigstoreStaging))
assertSkopeoSucceeds(c, "", "--tls-verify=false", "--policy", policy, "--registries.d", registriesDir, "copy", ourRegistry+"public/busybox", dirDest)
}
// atomic: and docker: X-Registry-Supports-Signatures works and interoperates
func (s *CopySuite) TestCopyAtomicExtension(c *check.C) {
mech, _, err := signature.NewEphemeralGPGSigningMechanism([]byte{})
c.Assert(err, check.IsNil)
defer mech.Close()
if err := mech.SupportsSigning(); err != nil { // FIXME? Test that the reading/writing works using signatures from fixtures
c.Skip(fmt.Sprintf("Signing not supported: %v", err))
}
topDir, err := ioutil.TempDir("", "atomic-extension")
c.Assert(err, check.IsNil)
defer os.RemoveAll(topDir)
for _, subdir := range []string{"dirAA", "dirAD", "dirDA", "dirDD", "registries.d"} {
err := os.MkdirAll(filepath.Join(topDir, subdir), 0755)
c.Assert(err, check.IsNil)
}
registriesDir := filepath.Join(topDir, "registries.d")
dirDest := "dir:" + topDir
policy := fileFromFixture(c, "fixtures/policy.json", map[string]string{"@keydir@": s.gpgHome})
defer os.Remove(policy)
// Get an image to work with to an atomic: destination. Also verifies that we can use Docker repositories without X-Registry-Supports-Signatures
assertSkopeoSucceeds(c, "", "--tls-verify=false", "--registries.d", registriesDir, "copy", "docker://busybox", "atomic:localhost:5000/myns/extension:unsigned")
// Pulling an unsigned image using atomic: fails.
assertSkopeoFails(c, ".*Source image rejected: A signature was required, but no signature exists.*",
"--tls-verify=false", "--policy", policy,
"copy", "atomic:localhost:5000/myns/extension:unsigned", dirDest+"/dirAA")
// The same when pulling using docker:
assertSkopeoFails(c, ".*Source image rejected: A signature was required, but no signature exists.*",
"--tls-verify=false", "--policy", policy, "--registries.d", registriesDir,
"copy", "docker://localhost:5000/myns/extension:unsigned", dirDest+"/dirAD")
// Sign the image using atomic:
assertSkopeoSucceeds(c, "", "--tls-verify=false",
"copy", "--sign-by", "personal@example.com", "atomic:localhost:5000/myns/extension:unsigned", "atomic:localhost:5000/myns/extension:atomic")
// Pulling the image using atomic: now succeeds.
assertSkopeoSucceeds(c, "", "--tls-verify=false", "--policy", policy,
"copy", "atomic:localhost:5000/myns/extension:atomic", dirDest+"/dirAA")
// The same when pulling using docker:
assertSkopeoSucceeds(c, "", "--tls-verify=false", "--policy", policy, "--registries.d", registriesDir,
"copy", "docker://localhost:5000/myns/extension:atomic", dirDest+"/dirAD")
// Both access methods result in the same data.
assertDirImagesAreEqual(c, filepath.Join(topDir, "dirAA"), filepath.Join(topDir, "dirAD"))
// Get another image (different so that they don't share signatures, and sign it using docker://)
assertSkopeoSucceeds(c, "", "--tls-verify=false", "--registries.d", registriesDir,
"copy", "--sign-by", "personal@example.com", "docker://estesp/busybox:ppc64le", "atomic:localhost:5000/myns/extension:extension")
c.Logf("%s", combinedOutputOfCommand(c, "oc", "get", "istag", "extension:extension", "-o", "json"))
// Pulling the image using atomic: succeeds.
assertSkopeoSucceeds(c, "", "--debug", "--tls-verify=false", "--policy", policy,
"copy", "atomic:localhost:5000/myns/extension:extension", dirDest+"/dirDA")
// The same when pulling using docker:
assertSkopeoSucceeds(c, "", "--debug", "--tls-verify=false", "--policy", policy, "--registries.d", registriesDir,
"copy", "docker://localhost:5000/myns/extension:extension", dirDest+"/dirDD")
// Both access methods result in the same data.
assertDirImagesAreEqual(c, filepath.Join(topDir, "dirDA"), filepath.Join(topDir, "dirDD"))
}
func (s *SkopeoSuite) TestCopySrcWithAuth(c *check.C) {
assertSkopeoSucceeds(c, "", "--tls-verify=false", "copy", "--dest-creds=testuser:testpassword", "docker://busybox", fmt.Sprintf("docker://%s/busybox:latest", s.regV2WithAuth.url))
dir1, err := ioutil.TempDir("", "copy-1")
c.Assert(err, check.IsNil)
defer os.RemoveAll(dir1)
assertSkopeoSucceeds(c, "", "--tls-verify=false", "copy", "--src-creds=testuser:testpassword", fmt.Sprintf("docker://%s/busybox:latest", s.regV2WithAuth.url), "dir:"+dir1)
}
func (s *SkopeoSuite) TestCopyDestWithAuth(c *check.C) {
assertSkopeoSucceeds(c, "", "--tls-verify=false", "copy", "--dest-creds=testuser:testpassword", "docker://busybox", fmt.Sprintf("docker://%s/busybox:latest", s.regV2WithAuth.url))
}
func (s *SkopeoSuite) TestCopySrcAndDestWithAuth(c *check.C) {
assertSkopeoSucceeds(c, "", "--tls-verify=false", "copy", "--dest-creds=testuser:testpassword", "docker://busybox", fmt.Sprintf("docker://%s/busybox:latest", s.regV2WithAuth.url))
assertSkopeoSucceeds(c, "", "--tls-verify=false", "copy", "--src-creds=testuser:testpassword", "--dest-creds=testuser:testpassword", fmt.Sprintf("docker://%s/busybox:latest", s.regV2WithAuth.url), fmt.Sprintf("docker://%s/test:auth", s.regV2WithAuth.url))
}
func (s *CopySuite) TestCopyNoPanicOnHTTPResponseWOTLSVerifyFalse(c *check.C) {
const ourRegistry = "docker://" + v2DockerRegistryURL + "/"
// dir:test isn't created beforehand because we already know this copy
// will fail when evaluating the src
assertSkopeoFails(c, ".*server gave HTTP response to HTTPS client.*",
"copy", ourRegistry+"foobar", "dir:test")
}
func (s *CopySuite) TestCopySchemaConversion(c *check.C) {
// Test conversion / schema autodetection both for the OpenShift embedded registry…
s.testCopySchemaConversionRegistries(c, "docker://localhost:5005/myns/schema1", "docker://localhost:5006/myns/schema2")
// … and for various docker/distribution registry versions.
s.testCopySchemaConversionRegistries(c, "docker://"+v2s1DockerRegistryURL+"/schema1", "docker://"+v2DockerRegistryURL+"/schema2")
}
func (s *CopySuite) testCopySchemaConversionRegistries(c *check.C, schema1Registry, schema2Registry string) {
topDir, err := ioutil.TempDir("", "schema-conversion")
c.Assert(err, check.IsNil)
defer os.RemoveAll(topDir)
for _, subdir := range []string{"input1", "input2", "dest2"} {
err := os.MkdirAll(filepath.Join(topDir, subdir), 0755)
c.Assert(err, check.IsNil)
}
input1Dir := filepath.Join(topDir, "input1")
input2Dir := filepath.Join(topDir, "input2")
destDir := filepath.Join(topDir, "dest2")
// Ensure we are working with a schema2 image.
// dir: accepts any manifest format, i.e. this makes …/input2 a schema2 source which cannot be asked to produce schema1 like ordinary docker: registries can.
assertSkopeoSucceeds(c, "", "copy", "docker://busybox", "dir:"+input2Dir)
verifyManifestMIMEType(c, input2Dir, manifest.DockerV2Schema2MediaType)
// 2→2 (the "f2t2" in tag means "from 2 to 2")
assertSkopeoSucceeds(c, "", "copy", "--dest-tls-verify=false", "dir:"+input2Dir, schema2Registry+":f2t2")
assertSkopeoSucceeds(c, "", "copy", "--src-tls-verify=false", schema2Registry+":f2t2", "dir:"+destDir)
verifyManifestMIMEType(c, destDir, manifest.DockerV2Schema2MediaType)
// 2→1; we will use the result as a schema1 image for further tests.
assertSkopeoSucceeds(c, "", "copy", "--dest-tls-verify=false", "dir:"+input2Dir, schema1Registry+":f2t1")
assertSkopeoSucceeds(c, "", "copy", "--src-tls-verify=false", schema1Registry+":f2t1", "dir:"+input1Dir)
verifyManifestMIMEType(c, input1Dir, manifest.DockerV2Schema1SignedMediaType)
// 1→1
assertSkopeoSucceeds(c, "", "copy", "--dest-tls-verify=false", "dir:"+input1Dir, schema1Registry+":f1t1")
assertSkopeoSucceeds(c, "", "copy", "--src-tls-verify=false", schema1Registry+":f1t1", "dir:"+destDir)
verifyManifestMIMEType(c, destDir, manifest.DockerV2Schema1SignedMediaType)
// 1→2: image stays unmodified schema1
assertSkopeoSucceeds(c, "", "copy", "--dest-tls-verify=false", "dir:"+input1Dir, schema2Registry+":f1t2")
assertSkopeoSucceeds(c, "", "copy", "--src-tls-verify=false", schema2Registry+":f1t2", "dir:"+destDir)
verifyManifestMIMEType(c, destDir, manifest.DockerV2Schema1SignedMediaType)
}
// Verify manifest in a dir: image at dir is expectedMIMEType.
func verifyManifestMIMEType(c *check.C, dir string, expectedMIMEType string) {
manifestBlob, err := ioutil.ReadFile(filepath.Join(dir, "manifest.json"))
c.Assert(err, check.IsNil)
mimeType := manifest.GuessMIMEType(manifestBlob)
c.Assert(mimeType, check.Equals, expectedMIMEType)
}
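GuessMIMEType operates on the serialized bytes alone, so it can be exercised without any registry; a minimal sketch with a made-up manifest fragment (assuming, as elsewhere in this file, the vendored github.com/containers/image/manifest package):

```go
package main

import (
	"fmt"

	"github.com/containers/image/manifest"
)

func main() {
	// Made-up schema2 fragment: the explicit mediaType field is enough
	// for GuessMIMEType to classify it.
	blob := []byte(`{"schemaVersion": 2, "mediaType": "application/vnd.docker.distribution.manifest.v2+json"}`)
	fmt.Println(manifest.GuessMIMEType(blob))
}
```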

View File

@@ -6,6 +6,20 @@
],
"transports": {
"docker": {
"localhost:5555": [
{
"type": "signedBy",
"keyType": "GPGKeys",
"keyPath": "@keydir@/personal-pubkey.gpg"
}
],
"localhost:5000/myns/extension": [
{
"type": "signedBy",
"keyType": "GPGKeys",
"keyPath": "@keydir@/personal-pubkey.gpg"
}
],
"docker.io/openshift": [
{
"type": "insecureAcceptAnything"
@@ -31,46 +45,46 @@
]
},
"atomic": {
"localhost:5000/myns/personal": [
"localhost:5006/myns/personal": [
{
"type": "signedBy",
"keyType": "GPGKeys",
"keyPath": "@keydir@/personal-pubkey.gpg"
}
],
"localhost:5000/myns/official": [
"localhost:5006/myns/official": [
{
"type": "signedBy",
"keyType": "GPGKeys",
"keyPath": "@keydir@/official-pubkey.gpg"
}
],
"localhost:5000/myns/naming:test1": [
"localhost:5006/myns/naming:test1": [
{
"type": "signedBy",
"keyType": "GPGKeys",
"keyPath": "@keydir@/official-pubkey.gpg"
}
],
"localhost:5000/myns/naming:naming": [
"localhost:5006/myns/naming:naming": [
{
"type": "signedBy",
"keyType": "GPGKeys",
"keyPath": "@keydir@/official-pubkey.gpg",
"signedIdentity": {
"type": "exactRepository",
"dockerRepository": "localhost:5000/myns/official"
"dockerRepository": "localhost:5006/myns/official"
}
}
],
"localhost:5000/myns/cosigned:cosigned": [
"localhost:5006/myns/cosigned:cosigned": [
{
"type": "signedBy",
"keyType": "GPGKeys",
"keyPath": "@keydir@/official-pubkey.gpg",
"signedIdentity": {
"type": "exactRepository",
"dockerRepository": "localhost:5000/myns/official"
"dockerRepository": "localhost:5006/myns/official"
}
},
{
@@ -78,6 +92,13 @@
"keyType": "GPGKeys",
"keyPath": "@keydir@/personal-pubkey.gpg"
}
],
"localhost:5000/myns/extension": [
{
"type": "signedBy",
"keyType": "GPGKeys",
"keyPath": "@keydir@/personal-pubkey.gpg"
}
]
}
}

View File

@@ -0,0 +1,6 @@
docker:
localhost:5555:
sigstore: file://@sigstore@
localhost:5555/public:
sigstore-staging: file://@split-staging@
sigstore: @split-read@

View File

@@ -14,49 +14,63 @@ import (
"github.com/go-check/check"
)
var adminKUBECONFIG = map[string]string{
"KUBECONFIG": "openshift.local.config/master/admin.kubeconfig",
}
// openshiftCluster is an OpenShift API master and integrated registry
// running on localhost.
type openshiftCluster struct {
c *check.C
workingDir string
master *exec.Cmd
registry *exec.Cmd
processes []*exec.Cmd // Processes to terminate on teardown; append to the end, terminate from end to the start.
}
// startOpenshiftCluster creates a new openshiftCluster.
// WARNING: This affects state in the user's home directory! Only run
// in an isolated test environment.
func startOpenshiftCluster(c *check.C) *openshiftCluster {
cluster := &openshiftCluster{c: c}
cluster := &openshiftCluster{}
dir, err := ioutil.TempDir("", "openshift-cluster")
cluster.c.Assert(err, check.IsNil)
c.Assert(err, check.IsNil)
cluster.workingDir = dir
cluster.startMaster()
cluster.startRegistry()
cluster.ocLoginToProject()
cluster.dockerLogin()
cluster.relaxImageSignerPermissions()
cluster.startMaster(c)
cluster.prepareRegistryConfig(c)
cluster.startRegistry(c)
cluster.ocLoginToProject(c)
cluster.dockerLogin(c)
cluster.relaxImageSignerPermissions(c)
return cluster
}
// clusterCmd creates an exec.Cmd running in cluster.workingDir, with the current process environment modified by env
func (cluster *openshiftCluster) clusterCmd(env map[string]string, name string, args ...string) *exec.Cmd {
cmd := exec.Command(name, args...)
cmd.Dir = cluster.workingDir
cmd.Env = os.Environ()
for key, value := range env {
cmd.Env = modifyEnviron(cmd.Env, key, value)
}
return cmd
}
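For instance, prepareRegistryConfig below uses clusterCmd to run oc with only KUBECONFIG overridden, inheriting the rest of the test process environment:

```go
// From prepareRegistryConfig below: adminKUBECONFIG overrides a single
// variable; everything else comes from os.Environ().
cmd := cluster.clusterCmd(adminKUBECONFIG, "oc", "create", "-f", "-")
runExecCmdWithInput(c, cmd, saJSON)
```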
// startMaster starts the OpenShift master (etcd+API server) and waits for it to be ready, or terminates on failure.
func (c *openshiftCluster) startMaster() {
c.master = exec.Command("openshift", "start", "master")
c.master.Dir = c.workingDir
stdout, err := c.master.StdoutPipe()
func (cluster *openshiftCluster) startMaster(c *check.C) {
cmd := cluster.clusterCmd(nil, "openshift", "start", "master")
cluster.processes = append(cluster.processes, cmd)
stdout, err := cmd.StdoutPipe()
// Send both to the same pipe. This might cause the two streams to be mixed up,
// but logging actually goes only to stderr - this primarily ensures we log any
// unexpected output to stdout.
c.master.Stderr = c.master.Stdout
err = c.master.Start()
c.c.Assert(err, check.IsNil)
cmd.Stderr = cmd.Stdout
err = cmd.Start()
c.Assert(err, check.IsNil)
portOpen, terminatePortCheck := newPortChecker(c.c, 8443)
portOpen, terminatePortCheck := newPortChecker(c, 8443)
defer func() {
c.c.Logf("Terminating port check")
c.Logf("Terminating port check")
terminatePortCheck <- true
}()
@@ -64,12 +78,12 @@ func (c *openshiftCluster) startMaster() {
logCheckFound := make(chan bool)
go func() {
defer func() {
c.c.Logf("Log checker exiting")
c.Logf("Log checker exiting")
}()
scanner := bufio.NewScanner(stdout)
for scanner.Scan() {
line := scanner.Text()
c.c.Logf("Log line: %s", line)
c.Logf("Log line: %s", line)
if strings.Contains(line, "Started Origin Controllers") {
logCheckFound <- true
return
@@ -78,7 +92,7 @@ func (c *openshiftCluster) startMaster() {
// Note: we can block before we get here.
select {
case <-terminateLogCheck:
c.c.Logf("terminated")
c.Logf("terminated")
return
default:
// Do not block here and read the next line.
@@ -87,107 +101,152 @@ func (c *openshiftCluster) startMaster() {
logCheckFound <- false
}()
defer func() {
c.c.Logf("Terminating log check")
c.Logf("Terminating log check")
terminateLogCheck <- true
}()
gotPortCheck := false
gotLogCheck := false
for !gotPortCheck || !gotLogCheck {
c.c.Logf("Waiting for master")
c.Logf("Waiting for master")
select {
case <-portOpen:
c.c.Logf("port check done")
c.Logf("port check done")
gotPortCheck = true
case found := <-logCheckFound:
c.c.Logf("log check done, found: %t", found)
c.Logf("log check done, found: %t", found)
if !found {
c.c.Fatal("log check done, success message not found")
c.Fatal("log check done, success message not found")
}
gotLogCheck = true
}
}
c.c.Logf("OK, master started!")
c.Logf("OK, master started!")
}
// startRegistry starts the OpenShift registry and waits for it to be ready, or terminates on failure.
func (c *openshiftCluster) startRegistry() {
//KUBECONFIG=openshift.local.config/master/openshift-registry.kubeconfig DOCKER_REGISTRY_URL=127.0.0.1:5000
c.registry = exec.Command("dockerregistry", "/atomic-registry-config.yml")
c.registry.Dir = c.workingDir
c.registry.Env = os.Environ()
c.registry.Env = modifyEnviron(c.registry.Env, "KUBECONFIG", "openshift.local.config/master/openshift-registry.kubeconfig")
c.registry.Env = modifyEnviron(c.registry.Env, "DOCKER_REGISTRY_URL", "127.0.0.1:5000")
consumeAndLogOutputs(c.c, "registry", c.registry)
err := c.registry.Start()
c.c.Assert(err, check.IsNil)
// prepareRegistryConfig creates a registry service account and a related k8s client configuration in ${cluster.workingDir}/openshift.local.registry.
func (cluster *openshiftCluster) prepareRegistryConfig(c *check.C) {
// This partially mimics the objects created by (oadm registry), except that we run the
// server directly as an ordinary process instead of a pod with an implicitly attached service account.
saJSON := `{
"apiVersion": "v1",
"kind": "ServiceAccount",
"metadata": {
"name": "registry"
}
}`
cmd := cluster.clusterCmd(adminKUBECONFIG, "oc", "create", "-f", "-")
runExecCmdWithInput(c, cmd, saJSON)
portOpen, terminatePortCheck := newPortChecker(c.c, 5000)
cmd = cluster.clusterCmd(adminKUBECONFIG, "oadm", "policy", "add-cluster-role-to-user", "system:registry", "-z", "registry")
out, err := cmd.CombinedOutput()
c.Assert(err, check.IsNil, check.Commentf("%s", string(out)))
c.Assert(string(out), check.Equals, "cluster role \"system:registry\" added: \"registry\"\n")
cmd = cluster.clusterCmd(adminKUBECONFIG, "oadm", "create-api-client-config", "--client-dir=openshift.local.registry", "--basename=openshift-registry", "--user=system:serviceaccount:default:registry")
out, err = cmd.CombinedOutput()
c.Assert(err, check.IsNil, check.Commentf("%s", string(out)))
c.Assert(string(out), check.Equals, "")
}
// startRegistryProcess starts an OpenShift registry with configPath, listening on port; it waits for the registry to be ready and returns the process object, or terminates c on failure.
func (cluster *openshiftCluster) startRegistryProcess(c *check.C, port int, configPath string) *exec.Cmd {
cmd := cluster.clusterCmd(map[string]string{
"KUBECONFIG": "openshift.local.registry/openshift-registry.kubeconfig",
"DOCKER_REGISTRY_URL": fmt.Sprintf("127.0.0.1:%d", port),
}, "dockerregistry", configPath)
consumeAndLogOutputs(c, fmt.Sprintf("registry-%d", port), cmd)
err := cmd.Start()
c.Assert(err, check.IsNil)
portOpen, terminatePortCheck := newPortChecker(c, port)
defer func() {
terminatePortCheck <- true
}()
c.c.Logf("Waiting for registry to start")
c.Logf("Waiting for registry to start")
<-portOpen
c.c.Logf("OK, Registry port open")
c.Logf("OK, Registry port open")
return cmd
}
// startRegistry starts the OpenShift registry and waits for it to be ready, or terminates on failure.
func (cluster *openshiftCluster) startRegistry(c *check.C) {
// Our “primary” registry
cluster.processes = append(cluster.processes, cluster.startRegistryProcess(c, 5000, "/atomic-registry-config.yml"))
// A registry configured with acceptschema2:false
schema1Config := fileFromFixture(c, "/atomic-registry-config.yml", map[string]string{
"addr: :5000": "addr: :5005",
"rootdirectory: /registry": "rootdirectory: /registry-schema1",
// The default configuration currently already contains acceptschema2: false
})
// Make sure the configuration contains "acceptschema2: false", because eventually it will be enabled upstream and this function will need to be updated.
configContents, err := ioutil.ReadFile(schema1Config)
c.Assert(err, check.IsNil)
c.Assert(string(configContents), check.Matches, "(?s).*acceptschema2: false.*")
cluster.processes = append(cluster.processes, cluster.startRegistryProcess(c, 5005, schema1Config))
// A registry configured with acceptschema2:true
schema2Config := fileFromFixture(c, "/atomic-registry-config.yml", map[string]string{
"addr: :5000": "addr: :5006",
"rootdirectory: /registry": "rootdirectory: /registry-schema2",
"acceptschema2: false": "acceptschema2: true",
})
cluster.processes = append(cluster.processes, cluster.startRegistryProcess(c, 5006, schema2Config))
}
// ocLoginToProject runs (oc login) and (oc new-project) on the cluster, or terminates on failure.
func (c *openshiftCluster) ocLoginToProject() {
c.c.Logf("oc login")
cmd := exec.Command("oc", "login", "--certificate-authority=openshift.local.config/master/ca.crt", "-u", "myuser", "-p", "mypw", "https://localhost:8443")
cmd.Dir = c.workingDir
func (cluster *openshiftCluster) ocLoginToProject(c *check.C) {
c.Logf("oc login")
cmd := cluster.clusterCmd(nil, "oc", "login", "--certificate-authority=openshift.local.config/master/ca.crt", "-u", "myuser", "-p", "mypw", "https://localhost:8443")
out, err := cmd.CombinedOutput()
c.c.Assert(err, check.IsNil, check.Commentf("%s", out))
c.c.Assert(string(out), check.Matches, "(?s).*Login successful.*") // (?s) : '.' will also match newlines
c.Assert(err, check.IsNil, check.Commentf("%s", out))
c.Assert(string(out), check.Matches, "(?s).*Login successful.*") // (?s) : '.' will also match newlines
outString := combinedOutputOfCommand(c.c, "oc", "new-project", "myns")
c.c.Assert(outString, check.Matches, `(?s).*Now using project "myns".*`) // (?s) : '.' will also match newlines
outString := combinedOutputOfCommand(c, "oc", "new-project", "myns")
c.Assert(outString, check.Matches, `(?s).*Now using project "myns".*`) // (?s) : '.' will also match newlines
}
// dockerLogin simulates (docker login) to the cluster, or terminates on failure.
// We do not run (docker login) directly, because that requires a running daemon and a docker package.
func (c *openshiftCluster) dockerLogin() {
func (cluster *openshiftCluster) dockerLogin(c *check.C) {
dockerDir := filepath.Join(homedir.Get(), ".docker")
err := os.Mkdir(dockerDir, 0700)
c.c.Assert(err, check.IsNil)
c.Assert(err, check.IsNil)
out := combinedOutputOfCommand(c.c, "oc", "config", "view", "-o", "json", "-o", "jsonpath={.users[*].user.token}")
c.c.Logf("oc config value: %s", out)
configJSON := fmt.Sprintf(`{
"auths": {
"localhost:5000": {
out := combinedOutputOfCommand(c, "oc", "config", "view", "-o", "json", "-o", "jsonpath={.users[*].user.token}")
c.Logf("oc config value: %s", out)
authValue := base64.StdEncoding.EncodeToString([]byte("unused:" + out))
auths := []string{}
for _, port := range []int{5000, 5005, 5006} {
auths = append(auths, fmt.Sprintf(`"localhost:%d": {
"auth": "%s",
"email": "unused"
}
}
}`, base64.StdEncoding.EncodeToString([]byte("unused:"+out)))
}`, port, authValue))
}
configJSON := `{"auths": {` + strings.Join(auths, ",") + `}}`
err = ioutil.WriteFile(filepath.Join(dockerDir, "config.json"), []byte(configJSON), 0600)
c.c.Assert(err, check.IsNil)
c.Assert(err, check.IsNil)
}
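Illustratively, the generated ~/.docker/config.json then carries one auth entry per registry port, roughly of this shape (the token, and therefore the base64 value, is made up here):

```go
// "dW51c2VkOjx0b2tlbj4=" is base64("unused:<token>") with a placeholder token.
const exampleDockerConfig = `{"auths": {
	"localhost:5000": {"auth": "dW51c2VkOjx0b2tlbj4=", "email": "unused"},
	"localhost:5005": {"auth": "dW51c2VkOjx0b2tlbj4=", "email": "unused"},
	"localhost:5006": {"auth": "dW51c2VkOjx0b2tlbj4=", "email": "unused"}
}}`
```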
// relaxImageSignerPermissions opens up the system:image-signer permissions so that
// anyone can work with signatures
// FIXME: This also allows anyone to DoS anyone else; this design is really not all
// that workable, but it is the best we can do for now.
func (c *openshiftCluster) relaxImageSignerPermissions() {
cmd := exec.Command("oadm", "policy", "add-cluster-role-to-group", "system:image-signer", "system:authenticated")
cmd.Dir = c.workingDir
cmd.Env = os.Environ()
cmd.Env = modifyEnviron(cmd.Env, "KUBECONFIG", "openshift.local.config/master/admin.kubeconfig")
func (cluster *openshiftCluster) relaxImageSignerPermissions(c *check.C) {
cmd := cluster.clusterCmd(adminKUBECONFIG, "oadm", "policy", "add-cluster-role-to-group", "system:image-signer", "system:authenticated")
out, err := cmd.CombinedOutput()
c.c.Assert(err, check.IsNil, check.Commentf("%s", string(out)))
c.c.Assert(string(out), check.Equals, "")
c.Assert(err, check.IsNil, check.Commentf("%s", string(out)))
c.Assert(string(out), check.Equals, "cluster role \"system:image-signer\" added: \"system:authenticated\"\n")
}
// tearDown stops the cluster services and deletes only some (!) of the state.
func (c *openshiftCluster) tearDown() {
if c.registry != nil && c.registry.Process != nil {
c.registry.Process.Kill()
func (cluster *openshiftCluster) tearDown(c *check.C) {
for i := len(cluster.processes) - 1; i >= 0; i-- {
cluster.processes[i].Process.Kill()
}
if c.master != nil && c.master.Process != nil {
c.master.Process.Kill()
}
if c.workingDir != "" {
os.RemoveAll(c.workingDir)
if cluster.workingDir != "" {
os.RemoveAll(cluster.workingDir)
}
}

View File

@@ -0,0 +1,40 @@
// +build openshift_shell
package main
import (
"os"
"os/exec"
"github.com/go-check/check"
)
/*
TestRunShell is not really a test; it is a convenient way to use the registry setup code
in openshift.go and CopySuite to get an interactive environment for experimentation.
To use it, run:
sudo make shell
to start a container, then within the container:
SKOPEO_CONTAINER_TESTS=1 PS1='nested> ' go test -tags openshift_shell -timeout=24h ./integration -v -check.v -check.vv -check.f='CopySuite.TestRunShell'
An example of what can be done within the container:
cd ..; make binary-local install
./skopeo --tls-verify=false copy --sign-by=personal@example.com docker://busybox:latest atomic:localhost:5000/myns/personal:personal
oc get istag personal:personal -o json
curl -L -v 'http://localhost:5000/v2/'
cat ~/.docker/config.json
curl -L -v 'http://localhost:5000/openshift/token?scope=repository:myns/personal:pull' --header 'Authorization: Basic $auth_from_docker'
curl -L -v 'http://localhost:5000/v2/myns/personal/manifests/personal' --header 'Authorization: Bearer $token_from_oauth'
curl -L -v 'http://localhost:5000/extensions/v2/myns/personal/signatures/$manifest_digest' --header 'Authorization: Bearer $token_from_oauth'
*/
func (s *CopySuite) TestRunShell(c *check.C) {
cmd := exec.Command("bash", "-i")
tty, err := os.OpenFile("/dev/tty", os.O_RDWR, 0)
c.Assert(err, check.IsNil)
cmd.Stdin = tty
cmd.Stdout = tty
cmd.Stderr = tty
err = cmd.Run()
c.Assert(err, check.IsNil)
}

View File

@@ -13,23 +13,10 @@ import (
)
const (
binaryV1 = "docker-registry"
binaryV2 = "registry-v2"
binaryV2Schema1 = "registry-v2-schema1"
)
type testRegistryV1 struct {
cmd *exec.Cmd
url string
dir string
}
func setupRegistryV1At(c *check.C, url string, auth bool) *testRegistryV1 {
return &testRegistryV1{
url: url,
}
}
type testRegistryV2 struct {
cmd *exec.Cmd
url string
@@ -44,7 +31,7 @@ func setupRegistryV2At(c *check.C, url string, auth, schema1 bool) *testRegistry
c.Assert(err, check.IsNil)
// Wait for registry to be ready to serve requests.
for i := 0; i != 5; i++ {
for i := 0; i != 50; i++ {
if err = reg.Ping(); err == nil {
break
}
@@ -67,6 +54,8 @@ loglevel: debug
storage:
filesystem:
rootdirectory: %s
delete:
enabled: true
http:
addr: %s
%s`
@@ -107,6 +96,7 @@ http:
}
cmd := exec.Command(binary, confPath)
consumeAndLogOutputs(c, fmt.Sprintf("registry-%s", url), cmd)
if err := cmd.Start(); err != nil {
os.RemoveAll(tmp)
if os.IsNotExist(err) {

View File

@@ -8,6 +8,7 @@ import (
"os/exec"
"strings"
"github.com/containers/image/signature"
"github.com/go-check/check"
)
@@ -35,7 +36,7 @@ func findFingerprint(lineBytes []byte) (string, error) {
return "", errors.New("No fingerprint found")
}
func (s *SigningSuite) SetUpTest(c *check.C) {
func (s *SigningSuite) SetUpSuite(c *check.C) {
_, err := exec.LookPath(skopeoBinary)
c.Assert(err, check.IsNil)
@@ -51,7 +52,7 @@ func (s *SigningSuite) SetUpTest(c *check.C) {
c.Assert(err, check.IsNil)
}
func (s *SigningSuite) TearDownTest(c *check.C) {
func (s *SigningSuite) TearDownSuite(c *check.C) {
if s.gpgHome != "" {
err := os.RemoveAll(s.gpgHome)
c.Assert(err, check.IsNil)
@@ -62,6 +63,13 @@ func (s *SigningSuite) TearDownTest(c *check.C) {
}
func (s *SigningSuite) TestSignVerifySmoke(c *check.C) {
mech, _, err := signature.NewEphemeralGPGSigningMechanism([]byte{})
c.Assert(err, check.IsNil)
defer mech.Close()
if err := mech.SupportsSigning(); err != nil { // FIXME? Test that verification and policy enforcement works, using signatures from fixtures
c.Skip(fmt.Sprintf("Signing not supported: %v", err))
}
manifestPath := "fixtures/image.manifest.json"
dockerReference := "testing/smoketest"

View File

@@ -1,7 +1,9 @@
package main
import (
"bytes"
"io"
"io/ioutil"
"net"
"os/exec"
"strings"
@@ -74,9 +76,15 @@ func assertSkopeoFails(c *check.C, regexp string, args ...string) {
// runCommandWithInput runs a command as if created via exec.Command(), sending it the input on stdin,
// and verifies that the exit status is 0, or terminates c on failure.
func runCommandWithInput(c *check.C, input string, name string, args ...string) {
c.Logf("Running %s %s", name, strings.Join(args, " "))
cmd := exec.Command(name, args...)
consumeAndLogOutputs(c, name+" "+strings.Join(args, " "), cmd)
runExecCmdWithInput(c, cmd, input)
}
// runExecCmdWithInput runs an exec.Cmd, sending it the input to stdin,
// and verifies that the exit status is 0, or terminates c on failure.
func runExecCmdWithInput(c *check.C, cmd *exec.Cmd, input string) {
c.Logf("Running %s %s", cmd.Path, strings.Join(cmd.Args, " "))
consumeAndLogOutputs(c, cmd.Path+" "+strings.Join(cmd.Args, " "), cmd)
stdin, err := cmd.StdinPipe()
c.Assert(err, check.IsNil)
err = cmd.Start()
@@ -144,3 +152,25 @@ func modifyEnviron(env []string, name, value string) []string {
}
return append(res, prefix+value)
}
// fileFromFixture applies edits to inputPath and returns a path to the temporary file.
// Callers should defer os.Remove(the_returned_path)
func fileFromFixture(c *check.C, inputPath string, edits map[string]string) string {
contents, err := ioutil.ReadFile(inputPath)
c.Assert(err, check.IsNil)
for template, value := range edits {
updated := bytes.Replace(contents, []byte(template), []byte(value), -1)
c.Assert(bytes.Equal(updated, contents), check.Equals, false, check.Commentf("Replacing %s in %#v failed", template, string(contents))) // Verify that the template has matched something and we are not silently ignoring it.
contents = updated
}
file, err := ioutil.TempFile("", "policy.json")
c.Assert(err, check.IsNil)
path := file.Name()
_, err = file.Write(contents)
c.Assert(err, check.IsNil)
err = file.Close()
c.Assert(err, check.IsNil)
return path
}
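The bytes.Equal check is the interesting part: it turns a placeholder that no longer matches anything into a hard test failure instead of a silent no-op. A standalone sketch of the same replace-and-verify technique, with a hypothetical placeholder and value:

```go
package main

import (
	"bytes"
	"fmt"
)

func main() {
	contents := []byte(`{"keyPath": "@keydir@/personal-pubkey.gpg"}`)
	updated := bytes.Replace(contents, []byte("@keydir@"), []byte("/tmp/skopeo-gpg"), -1)
	// If nothing was replaced, the template is stale; fail loudly.
	if bytes.Equal(updated, contents) {
		panic("placeholder @keydir@ did not match anything")
	}
	fmt.Println(string(updated))
}
```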

vendor.conf
View File

@@ -0,0 +1,49 @@
github.com/urfave/cli v1.17.0
github.com/containers/image master
github.com/opencontainers/go-digest master
gopkg.in/cheggaaa/pb.v1 ad4efe000aa550bb54918c06ebbadc0ff17687b9 https://github.com/cheggaaa/pb
github.com/containers/storage master
github.com/Sirupsen/logrus v0.10.0
github.com/go-check/check v1
github.com/stretchr/testify v1.1.3
github.com/davecgh/go-spew master
github.com/pmezard/go-difflib master
github.com/pkg/errors master
golang.org/x/crypto master
# docker deps from https://github.com/docker/docker/blob/v1.11.2/hack/vendor.sh
github.com/docker/docker v1.13.0
github.com/docker/go-connections 4ccf312bf1d35e5dbda654e57a9be4c3f3cd0366
github.com/vbatts/tar-split v0.10.1
github.com/gorilla/context 14f550f51a
github.com/gorilla/mux e444e69cbd
github.com/docker/go-units 8a7beacffa3009a9ac66bad506b18ffdd110cf97
golang.org/x/net master
# end docker deps
golang.org/x/text master
github.com/docker/distribution master
github.com/docker/libtrust master
github.com/opencontainers/runc master
github.com/opencontainers/image-spec v1.0.0
# -- start OCI image validation requirements.
github.com/opencontainers/runtime-spec v1.0.0
github.com/opencontainers/image-tools v0.1.0
github.com/xeipuuv/gojsonschema master
github.com/xeipuuv/gojsonreference master
github.com/xeipuuv/gojsonpointer master
go4.org master https://github.com/camlistore/go4
github.com/ostreedev/ostree-go aeb02c6b6aa2889db3ef62f7855650755befd460
# -- end OCI image validation requirements
github.com/mtrmac/gpgme master
# openshift/origin's k8s dependencies as of OpenShift v1.1.5
github.com/golang/glog 44145f04b68cf362d9c4df2182967c2275eaefed
k8s.io/client-go master
github.com/ghodss/yaml 73d445a93680fa1a78ae23a5839bad48f32ba1ee
gopkg.in/yaml.v2 d466437aa4adc35830964cffc5b5f262c63ddcb4
github.com/imdario/mergo 6633656539c1639d9d78127b7d47c622b5d7b6dc
# containers/storage's dependencies that aren't already being pulled in
github.com/mistifyio/go-zfs 22c9b32c84eb0d0c6f4043b6e90fc94073de92fa
github.com/pborman/uuid v1.0
github.com/opencontainers/selinux master
golang.org/x/sys master
github.com/tchap/go-patricia v2.2.6
github.com/BurntSushi/toml master

View File

@@ -1,6 +1,6 @@
The MIT License
The MIT License (MIT)
Copyright (c) 2010-2014 Google, Inc. http://angularjs.org
Copyright (c) 2013 TOML authors
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal

vendor/github.com/BurntSushi/toml/README.md
View File

@@ -0,0 +1,218 @@
## TOML parser and encoder for Go with reflection
TOML stands for Tom's Obvious, Minimal Language. This Go package provides a
reflection interface similar to Go's standard library `json` and `xml`
packages. This package also supports the `encoding.TextUnmarshaler` and
`encoding.TextMarshaler` interfaces so that you can define custom data
representations. (There is an example of this below.)
Spec: https://github.com/toml-lang/toml
Compatible with TOML version
[v0.4.0](https://github.com/toml-lang/toml/blob/master/versions/en/toml-v0.4.0.md)
Documentation: https://godoc.org/github.com/BurntSushi/toml
Installation:
```bash
go get github.com/BurntSushi/toml
```
Try the toml validator:
```bash
go get github.com/BurntSushi/toml/cmd/tomlv
tomlv some-toml-file.toml
```
[![Build Status](https://travis-ci.org/BurntSushi/toml.svg?branch=master)](https://travis-ci.org/BurntSushi/toml) [![GoDoc](https://godoc.org/github.com/BurntSushi/toml?status.svg)](https://godoc.org/github.com/BurntSushi/toml)
### Testing
This package passes all tests in
[toml-test](https://github.com/BurntSushi/toml-test) for both the decoder
and the encoder.
### Examples
This package works similarly to how the Go standard library handles `XML`
and `JSON`. Namely, data is loaded into Go values via reflection.
For the simplest example, consider some TOML file as just a list of keys
and values:
```toml
Age = 25
Cats = [ "Cauchy", "Plato" ]
Pi = 3.14
Perfection = [ 6, 28, 496, 8128 ]
DOB = 1987-07-05T05:45:00Z
```
Which could be defined in Go as:
```go
type Config struct {
Age int
Cats []string
Pi float64
Perfection []int
DOB time.Time // requires `import time`
}
```
And then decoded with:
```go
var conf Config
if _, err := toml.Decode(tomlData, &conf); err != nil {
// handle error
}
```
You can also use struct tags if your struct field name doesn't map to a TOML
key value directly:
```toml
some_key_NAME = "wat"
```
```go
type TOML struct {
ObscureKey string `toml:"some_key_NAME"`
}
```
### Using the `encoding.TextUnmarshaler` interface
Here's an example that automatically parses duration strings into
`time.Duration` values:
```toml
[[song]]
name = "Thunder Road"
duration = "4m49s"
[[song]]
name = "Stairway to Heaven"
duration = "8m03s"
```
Which can be decoded with:
```go
type song struct {
Name string
Duration duration
}
type songs struct {
Song []song
}
var favorites songs
if _, err := toml.Decode(blob, &favorites); err != nil {
log.Fatal(err)
}
for _, s := range favorites.Song {
fmt.Printf("%s (%s)\n", s.Name, s.Duration)
}
```
And you'll also need a `duration` type that satisfies the
`encoding.TextUnmarshaler` interface:
```go
type duration struct {
time.Duration
}
func (d *duration) UnmarshalText(text []byte) error {
var err error
d.Duration, err = time.ParseDuration(string(text))
return err
}
```
### More complex usage
Here's an example of how to load the example from the official spec page:
```toml
# This is a TOML document. Boom.
title = "TOML Example"
[owner]
name = "Tom Preston-Werner"
organization = "GitHub"
bio = "GitHub Cofounder & CEO\nLikes tater tots and beer."
dob = 1979-05-27T07:32:00Z # First class dates? Why not?
[database]
server = "192.168.1.1"
ports = [ 8001, 8001, 8002 ]
connection_max = 5000
enabled = true
[servers]
# You can indent as you please. Tabs or spaces. TOML don't care.
[servers.alpha]
ip = "10.0.0.1"
dc = "eqdc10"
[servers.beta]
ip = "10.0.0.2"
dc = "eqdc10"
[clients]
data = [ ["gamma", "delta"], [1, 2] ] # just an update to make sure parsers support it
# Line breaks are OK when inside arrays
hosts = [
"alpha",
"omega"
]
```
And the corresponding Go types are:
```go
type tomlConfig struct {
Title string
Owner ownerInfo
DB database `toml:"database"`
Servers map[string]server
Clients clients
}
type ownerInfo struct {
Name string
Org string `toml:"organization"`
Bio string
DOB time.Time
}
type database struct {
Server string
Ports []int
ConnMax int `toml:"connection_max"`
Enabled bool
}
type server struct {
IP string
DC string
}
type clients struct {
Data [][]interface{}
Hosts []string
}
```
Note that a case insensitive match will be tried if an exact match can't be
found.
A working example of the above can be found in `_examples/example.{go,toml}`.

vendor/github.com/BurntSushi/toml/decode.go
View File

@@ -0,0 +1,509 @@
package toml
import (
"fmt"
"io"
"io/ioutil"
"math"
"reflect"
"strings"
"time"
)
func e(format string, args ...interface{}) error {
return fmt.Errorf("toml: "+format, args...)
}
// Unmarshaler is the interface implemented by objects that can unmarshal a
// TOML description of themselves.
type Unmarshaler interface {
UnmarshalTOML(interface{}) error
}
// Unmarshal decodes the contents of `p` in TOML format into a pointer `v`.
func Unmarshal(p []byte, v interface{}) error {
_, err := Decode(string(p), v)
return err
}
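A hedged usage sketch of Unmarshal (TOML input inlined for brevity; field names follow the README examples in this vendor drop):

```go
package main

import (
	"fmt"
	"log"

	"github.com/BurntSushi/toml"
)

func main() {
	var conf struct {
		Age  int
		Cats []string
	}
	// Unmarshal is a thin wrapper over Decode for []byte input.
	if err := toml.Unmarshal([]byte("Age = 25\nCats = [\"Cauchy\", \"Plato\"]"), &conf); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%+v\n", conf)
}
```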
// Primitive is a TOML value that hasn't been decoded into a Go value.
// When using the various `Decode*` functions, the type `Primitive` may
// be given to any value, and its decoding will be delayed.
//
// A `Primitive` value can be decoded using the `PrimitiveDecode` function.
//
// The underlying representation of a `Primitive` value is subject to change.
// Do not rely on it.
//
// N.B. Primitive values are still parsed, so using them will only avoid
// the overhead of reflection. They can be useful when you don't know the
// exact type of TOML data until run time.
type Primitive struct {
undecoded interface{}
context Key
}
// DEPRECATED!
//
// Use MetaData.PrimitiveDecode instead.
func PrimitiveDecode(primValue Primitive, v interface{}) error {
md := MetaData{decoded: make(map[string]bool)}
return md.unify(primValue.undecoded, rvalue(v))
}
// PrimitiveDecode is just like the other `Decode*` functions, except it
// decodes a TOML value that has already been parsed. Valid primitive values
// can *only* be obtained from values filled by the decoder functions,
// including this method. (i.e., `v` may contain more `Primitive`
// values.)
//
// Meta data for primitive values is included in the meta data returned by
// the `Decode*` functions with one exception: keys returned by the Undecoded
// method will only reflect keys that were decoded. Namely, any keys hidden
// behind a Primitive will be considered undecoded. Executing this method will
// update the undecoded keys in the meta data. (See the example.)
func (md *MetaData) PrimitiveDecode(primValue Primitive, v interface{}) error {
md.context = primValue.context
defer func() { md.context = nil }()
return md.unify(primValue.undecoded, rvalue(v))
}
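A minimal sketch of delayed decoding via Primitive, with a made-up config shape:

```go
package main

import (
	"fmt"
	"log"

	"github.com/BurntSushi/toml"
)

func main() {
	var raw struct {
		Server toml.Primitive // decoding of [server] is deferred
	}
	md, err := toml.Decode("[server]\nip = \"10.0.0.1\"", &raw)
	if err != nil {
		log.Fatal(err)
	}
	// Decode the deferred value once its concrete type is known.
	var server struct {
		IP string `toml:"ip"`
	}
	if err := md.PrimitiveDecode(raw.Server, &server); err != nil {
		log.Fatal(err)
	}
	fmt.Println(server.IP) // 10.0.0.1
}
```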
// Decode will decode the contents of `data` in TOML format into a pointer
// `v`.
//
// TOML hashes correspond to Go structs or maps. (Dealer's choice. They can be
// used interchangeably.)
//
// TOML arrays of tables correspond to either a slice of structs or a slice
// of maps.
//
// TOML datetimes correspond to Go `time.Time` values.
//
// All other TOML types (float, string, int, bool and array) correspond
// to the obvious Go types.
//
// An exception to the above rules is if a type implements the
// encoding.TextUnmarshaler interface. In this case, any primitive TOML value
// (floats, strings, integers, booleans and datetimes) will be converted to
// a byte string and given to the value's UnmarshalText method. See the
// Unmarshaler example for a demonstration with time duration strings.
//
// Key mapping
//
// TOML keys can map to either keys in a Go map or field names in a Go
// struct. The special `toml` struct tag may be used to map TOML keys to
// struct fields that don't match the key name exactly. (See the example.)
// A case-insensitive match to struct field names will be tried if an exact
// match can't be found.
//
// The mapping between TOML values and Go values is loose. That is, there
// may exist TOML values that cannot be placed into your representation, and
// there may be parts of your representation that do not correspond to
// TOML values. This loose mapping can be made stricter by using the IsDefined
// and/or Undecoded methods on the MetaData returned.
//
// This decoder will not handle cyclic types. If a cyclic type is passed,
// `Decode` will not terminate.
func Decode(data string, v interface{}) (MetaData, error) {
rv := reflect.ValueOf(v)
if rv.Kind() != reflect.Ptr {
return MetaData{}, e("Decode of non-pointer %s", reflect.TypeOf(v))
}
if rv.IsNil() {
return MetaData{}, e("Decode of nil %s", reflect.TypeOf(v))
}
p, err := parse(data)
if err != nil {
return MetaData{}, err
}
md := MetaData{
p.mapping, p.types, p.ordered,
make(map[string]bool, len(p.ordered)), nil,
}
return md, md.unify(p.mapping, indirect(rv))
}
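// For illustration, a sketch of the encoding.TextUnmarshaler path described
// above (the duration wrapper type is hypothetical):
//
//	type duration struct{ time.Duration }
//
//	func (d *duration) UnmarshalText(text []byte) error {
//		var err error
//		d.Duration, err = time.ParseDuration(string(text))
//		return err
//	}
//
// With `timeout = "5m30s"` in the input and a `Timeout duration` field in
// the target struct, the raw string "5m30s" is handed to UnmarshalText
// rather than being unified as an ordinary TOML string.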
// DecodeFile is just like Decode, except it will automatically read the
// contents of the file at `fpath` and decode it for you.
func DecodeFile(fpath string, v interface{}) (MetaData, error) {
bs, err := ioutil.ReadFile(fpath)
if err != nil {
return MetaData{}, err
}
return Decode(string(bs), v)
}
// DecodeReader is just like Decode, except it will consume all bytes
// from the reader and decode it for you.
func DecodeReader(r io.Reader, v interface{}) (MetaData, error) {
bs, err := ioutil.ReadAll(r)
if err != nil {
return MetaData{}, err
}
return Decode(string(bs), v)
}
// unify performs a sort of type unification based on the structure of `rv`,
// which is the client representation.
//
// Any type mismatch produces an error. Finding a type that we don't know
// how to handle produces an unsupported type error.
func (md *MetaData) unify(data interface{}, rv reflect.Value) error {
// Special case. Look for a `Primitive` value.
if rv.Type() == reflect.TypeOf((*Primitive)(nil)).Elem() {
// Save the undecoded data and the key context into the primitive
// value.
context := make(Key, len(md.context))
copy(context, md.context)
rv.Set(reflect.ValueOf(Primitive{
undecoded: data,
context: context,
}))
return nil
}
// Special case. Unmarshaler Interface support.
if rv.CanAddr() {
if v, ok := rv.Addr().Interface().(Unmarshaler); ok {
return v.UnmarshalTOML(data)
}
}
// Special case. Handle time.Time values specifically.
// TODO: Remove this code when we decide to drop support for Go 1.1.
// This isn't necessary in Go 1.2 because time.Time satisfies the encoding
// interfaces.
if rv.Type().AssignableTo(rvalue(time.Time{}).Type()) {
return md.unifyDatetime(data, rv)
}
// Special case. Look for a value satisfying the TextUnmarshaler interface.
if v, ok := rv.Interface().(TextUnmarshaler); ok {
return md.unifyText(data, v)
}
// BUG(burntsushi)
// The behavior here is incorrect whenever a Go type satisfies the
// encoding.TextUnmarshaler interface but also corresponds to a TOML
// hash or array. In particular, the unmarshaler should only be applied
// to primitive TOML values. But at this point, it will be applied to
// all kinds of values and produce an incorrect error whenever those values
// are hashes or arrays (including arrays of tables).
k := rv.Kind()
// laziness
if k >= reflect.Int && k <= reflect.Uint64 {
return md.unifyInt(data, rv)
}
switch k {
case reflect.Ptr:
elem := reflect.New(rv.Type().Elem())
err := md.unify(data, reflect.Indirect(elem))
if err != nil {
return err
}
rv.Set(elem)
return nil
case reflect.Struct:
return md.unifyStruct(data, rv)
case reflect.Map:
return md.unifyMap(data, rv)
case reflect.Array:
return md.unifyArray(data, rv)
case reflect.Slice:
return md.unifySlice(data, rv)
case reflect.String:
return md.unifyString(data, rv)
case reflect.Bool:
return md.unifyBool(data, rv)
case reflect.Interface:
// we only support empty interfaces.
if rv.NumMethod() > 0 {
return e("unsupported type %s", rv.Type())
}
return md.unifyAnything(data, rv)
case reflect.Float32:
fallthrough
case reflect.Float64:
return md.unifyFloat64(data, rv)
}
return e("unsupported type %s", rv.Kind())
}
func (md *MetaData) unifyStruct(mapping interface{}, rv reflect.Value) error {
tmap, ok := mapping.(map[string]interface{})
if !ok {
if mapping == nil {
return nil
}
return e("type mismatch for %s: expected table but found %T",
rv.Type().String(), mapping)
}
for key, datum := range tmap {
var f *field
fields := cachedTypeFields(rv.Type())
for i := range fields {
ff := &fields[i]
if ff.name == key {
f = ff
break
}
if f == nil && strings.EqualFold(ff.name, key) {
f = ff
}
}
if f != nil {
subv := rv
for _, i := range f.index {
subv = indirect(subv.Field(i))
}
if isUnifiable(subv) {
md.decoded[md.context.add(key).String()] = true
md.context = append(md.context, key)
if err := md.unify(datum, subv); err != nil {
return err
}
md.context = md.context[0 : len(md.context)-1]
} else if f.name != "" {
// Bad user! No soup for you!
return e("cannot write unexported field %s.%s",
rv.Type().String(), f.name)
}
}
}
return nil
}
func (md *MetaData) unifyMap(mapping interface{}, rv reflect.Value) error {
tmap, ok := mapping.(map[string]interface{})
if !ok {
if tmap == nil {
return nil
}
return badtype("map", mapping)
}
if rv.IsNil() {
rv.Set(reflect.MakeMap(rv.Type()))
}
for k, v := range tmap {
md.decoded[md.context.add(k).String()] = true
md.context = append(md.context, k)
rvkey := indirect(reflect.New(rv.Type().Key()))
rvval := reflect.Indirect(reflect.New(rv.Type().Elem()))
if err := md.unify(v, rvval); err != nil {
return err
}
md.context = md.context[0 : len(md.context)-1]
rvkey.SetString(k)
rv.SetMapIndex(rvkey, rvval)
}
return nil
}
func (md *MetaData) unifyArray(data interface{}, rv reflect.Value) error {
datav := reflect.ValueOf(data)
if datav.Kind() != reflect.Slice {
if !datav.IsValid() {
return nil
}
return badtype("slice", data)
}
sliceLen := datav.Len()
if sliceLen != rv.Len() {
return e("expected array length %d; got TOML array of length %d",
rv.Len(), sliceLen)
}
return md.unifySliceArray(datav, rv)
}
func (md *MetaData) unifySlice(data interface{}, rv reflect.Value) error {
datav := reflect.ValueOf(data)
if datav.Kind() != reflect.Slice {
if !datav.IsValid() {
return nil
}
return badtype("slice", data)
}
n := datav.Len()
if rv.IsNil() || rv.Cap() < n {
rv.Set(reflect.MakeSlice(rv.Type(), n, n))
}
rv.SetLen(n)
return md.unifySliceArray(datav, rv)
}
func (md *MetaData) unifySliceArray(data, rv reflect.Value) error {
sliceLen := data.Len()
for i := 0; i < sliceLen; i++ {
v := data.Index(i).Interface()
sliceval := indirect(rv.Index(i))
if err := md.unify(v, sliceval); err != nil {
return err
}
}
return nil
}
func (md *MetaData) unifyDatetime(data interface{}, rv reflect.Value) error {
if _, ok := data.(time.Time); ok {
rv.Set(reflect.ValueOf(data))
return nil
}
return badtype("time.Time", data)
}
func (md *MetaData) unifyString(data interface{}, rv reflect.Value) error {
if s, ok := data.(string); ok {
rv.SetString(s)
return nil
}
return badtype("string", data)
}
func (md *MetaData) unifyFloat64(data interface{}, rv reflect.Value) error {
if num, ok := data.(float64); ok {
switch rv.Kind() {
case reflect.Float32:
fallthrough
case reflect.Float64:
rv.SetFloat(num)
default:
panic("bug")
}
return nil
}
return badtype("float", data)
}
func (md *MetaData) unifyInt(data interface{}, rv reflect.Value) error {
if num, ok := data.(int64); ok {
if rv.Kind() >= reflect.Int && rv.Kind() <= reflect.Int64 {
switch rv.Kind() {
case reflect.Int, reflect.Int64:
// No bounds checking necessary.
case reflect.Int8:
if num < math.MinInt8 || num > math.MaxInt8 {
return e("value %d is out of range for int8", num)
}
case reflect.Int16:
if num < math.MinInt16 || num > math.MaxInt16 {
return e("value %d is out of range for int16", num)
}
case reflect.Int32:
if num < math.MinInt32 || num > math.MaxInt32 {
return e("value %d is out of range for int32", num)
}
}
rv.SetInt(num)
} else if rv.Kind() >= reflect.Uint && rv.Kind() <= reflect.Uint64 {
unum := uint64(num)
switch rv.Kind() {
case reflect.Uint, reflect.Uint64:
// No bounds checking necessary.
case reflect.Uint8:
if num < 0 || unum > math.MaxUint8 {
return e("value %d is out of range for uint8", num)
}
case reflect.Uint16:
if num < 0 || unum > math.MaxUint16 {
return e("value %d is out of range for uint16", num)
}
case reflect.Uint32:
if num < 0 || unum > math.MaxUint32 {
return e("value %d is out of range for uint32", num)
}
}
rv.SetUint(unum)
} else {
panic("unreachable")
}
return nil
}
return badtype("integer", data)
}
func (md *MetaData) unifyBool(data interface{}, rv reflect.Value) error {
if b, ok := data.(bool); ok {
rv.SetBool(b)
return nil
}
return badtype("boolean", data)
}
func (md *MetaData) unifyAnything(data interface{}, rv reflect.Value) error {
rv.Set(reflect.ValueOf(data))
return nil
}
func (md *MetaData) unifyText(data interface{}, v TextUnmarshaler) error {
var s string
switch sdata := data.(type) {
case TextMarshaler:
text, err := sdata.MarshalText()
if err != nil {
return err
}
s = string(text)
case fmt.Stringer:
s = sdata.String()
case string:
s = sdata
case bool:
s = fmt.Sprintf("%v", sdata)
case int64:
s = fmt.Sprintf("%d", sdata)
case float64:
s = fmt.Sprintf("%f", sdata)
default:
return badtype("primitive (string-like)", data)
}
if err := v.UnmarshalText([]byte(s)); err != nil {
return err
}
return nil
}
// rvalue returns a reflect.Value of `v`. All pointers are resolved.
func rvalue(v interface{}) reflect.Value {
return indirect(reflect.ValueOf(v))
}
// indirect returns the value pointed to by a pointer.
// Pointers are followed until the value is not a pointer.
// New values are allocated for each nil pointer.
//
// An exception to this rule is if the value satisfies an interface of
// interest to us (like encoding.TextUnmarshaler).
func indirect(v reflect.Value) reflect.Value {
if v.Kind() != reflect.Ptr {
if v.CanSet() {
pv := v.Addr()
if _, ok := pv.Interface().(TextUnmarshaler); ok {
return pv
}
}
return v
}
if v.IsNil() {
v.Set(reflect.New(v.Type().Elem()))
}
return indirect(reflect.Indirect(v))
}
func isUnifiable(rv reflect.Value) bool {
if rv.CanSet() {
return true
}
if _, ok := rv.Interface().(TextUnmarshaler); ok {
return true
}
return false
}
func badtype(expected string, data interface{}) error {
return e("cannot load TOML value of type %T into a Go %s", data, expected)
}

121
vendor/github.com/BurntSushi/toml/decode_meta.go generated vendored Normal file
View File

@@ -0,0 +1,121 @@
package toml
import "strings"
// MetaData allows access to meta information about TOML data that may not
// be inferrable via reflection. In particular, whether a key has been defined
// and the TOML type of a key.
type MetaData struct {
mapping map[string]interface{}
types map[string]tomlType
keys []Key
decoded map[string]bool
context Key // Used only during decoding.
}
// IsDefined returns true if the key given exists in the TOML data. The key
// should be specified hierarchically. e.g.,
//
// // access the TOML key 'a.b.c'
// IsDefined("a", "b", "c")
//
// IsDefined will return false if an empty key is given. Keys are case sensitive.
func (md *MetaData) IsDefined(key ...string) bool {
if len(key) == 0 {
return false
}
var hash map[string]interface{}
var ok bool
var hashOrVal interface{} = md.mapping
for _, k := range key {
if hash, ok = hashOrVal.(map[string]interface{}); !ok {
return false
}
if hashOrVal, ok = hash[k]; !ok {
return false
}
}
return true
}
// Type returns a string representation of the type of the key specified.
//
// Type will return the empty string if given an empty key or a key that
// does not exist. Keys are case sensitive.
func (md *MetaData) Type(key ...string) string {
fullkey := strings.Join(key, ".")
if typ, ok := md.types[fullkey]; ok {
return typ.typeString()
}
return ""
}
// Key is the type of any TOML key, including key groups. Use (MetaData).Keys
// to get values of this type.
type Key []string
func (k Key) String() string {
return strings.Join(k, ".")
}
func (k Key) maybeQuotedAll() string {
var ss []string
for i := range k {
ss = append(ss, k.maybeQuoted(i))
}
return strings.Join(ss, ".")
}
func (k Key) maybeQuoted(i int) string {
quote := false
for _, c := range k[i] {
if !isBareKeyChar(c) {
quote = true
break
}
}
if quote {
return "\"" + strings.Replace(k[i], "\"", "\\\"", -1) + "\""
}
return k[i]
}
func (k Key) add(piece string) Key {
newKey := make(Key, len(k)+1)
copy(newKey, k)
newKey[len(k)] = piece
return newKey
}
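// For illustration, how Key values render (the keys here are hypothetical):
//
//	Key{"servers", "alpha"}.String() // "servers.alpha"
//	Key{"a", "b.c"}.maybeQuotedAll() // `a."b.c"` (non-bare part quoted)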
// Keys returns a slice of every key in the TOML data, including key groups.
// Each key is itself a slice, where the first element is the top of the
// hierarchy and the last is the most specific.
//
// The list will have the same order as the keys appeared in the TOML data.
//
// All keys returned are non-empty.
func (md *MetaData) Keys() []Key {
return md.keys
}
// Undecoded returns all keys that have not been decoded in the order in which
// they appear in the original TOML document.
//
// This includes keys that haven't been decoded because of a Primitive value.
// Once the Primitive value is decoded, the keys will be considered decoded.
//
// Also note that decoding into an empty interface will result in no decoding,
// and so no keys will be considered decoded.
//
// In this sense, the Undecoded keys correspond to keys in the TOML document
// that do not have a concrete type in your representation.
func (md *MetaData) Undecoded() []Key {
undecoded := make([]Key, 0, len(md.keys))
for _, key := range md.keys {
if !md.decoded[key.String()] {
undecoded = append(undecoded, key)
}
}
return undecoded
}
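// For illustration, a sketch of querying MetaData after a decode (the TOML
// input and target value are hypothetical):
//
//	var cfg struct {
//		Server struct{ IP string } `toml:"server"`
//	}
//	md, _ := toml.Decode("[server]\nip = \"10.0.0.1\"\nunused = true\n", &cfg)
//	md.IsDefined("server", "ip") // true
//	md.Type("server", "ip")      // "String"
//	md.Undecoded()               // [server.unused], since cfg has no
//	                             // field matching that key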

27
vendor/github.com/BurntSushi/toml/doc.go generated vendored Normal file
View File

@@ -0,0 +1,27 @@
/*
Package toml provides facilities for decoding and encoding TOML configuration
files via reflection. There is also support for delaying decoding with
the Primitive type, and querying the set of keys in a TOML document with the
MetaData type.
The specification implemented: https://github.com/toml-lang/toml
The sub-command github.com/BurntSushi/toml/cmd/tomlv can be used to verify
whether a file is a valid TOML document. It can also be used to print the
type of each key in a TOML document.
Testing
There are two important types of tests used for this package. The first is
contained inside '*_test.go' files and uses the standard Go unit testing
framework. These tests are primarily devoted to holistically testing the
decoder and encoder.
The second type of testing is used to verify the implementation's adherence
to the TOML specification. These tests have been factored into their own
project: https://github.com/BurntSushi/toml-test
The reason the tests are in a separate project is so that they can be used by
any implementation of TOML. Namely, it is language agnostic.
*/
package toml

568
vendor/github.com/BurntSushi/toml/encode.go generated vendored Normal file
View File

@@ -0,0 +1,568 @@
package toml
import (
"bufio"
"errors"
"fmt"
"io"
"reflect"
"sort"
"strconv"
"strings"
"time"
)
type tomlEncodeError struct{ error }
var (
errArrayMixedElementTypes = errors.New(
"toml: cannot encode array with mixed element types")
errArrayNilElement = errors.New(
"toml: cannot encode array with nil element")
errNonString = errors.New(
"toml: cannot encode a map with non-string key type")
errAnonNonStruct = errors.New(
"toml: cannot encode an anonymous field that is not a struct")
errArrayNoTable = errors.New(
"toml: TOML array element cannot contain a table")
errNoKey = errors.New(
"toml: top-level values must be Go maps or structs")
errAnything = errors.New("") // used in testing
)
var quotedReplacer = strings.NewReplacer(
"\t", "\\t",
"\n", "\\n",
"\r", "\\r",
"\"", "\\\"",
"\\", "\\\\",
)
// Encoder controls the encoding of Go values to a TOML document to some
// io.Writer.
//
// The indentation level can be controlled with the Indent field.
type Encoder struct {
// A single indentation level. By default it is two spaces.
Indent string
// hasWritten is whether we have written any output to w yet.
hasWritten bool
w *bufio.Writer
}
// NewEncoder returns a TOML encoder that encodes Go values to the io.Writer
// given. By default, a single indentation level is 2 spaces.
func NewEncoder(w io.Writer) *Encoder {
return &Encoder{
w: bufio.NewWriter(w),
Indent: " ",
}
}
// Encode writes a TOML representation of the Go value to the underlying
// io.Writer. If the value given cannot be encoded to a valid TOML document,
// then an error is returned.
//
// The mapping between Go values and TOML values should be precisely the same
// as for the Decode* functions. Similarly, the TextMarshaler interface is
// supported by encoding the resulting bytes as strings. (If you want to write
// arbitrary binary data then you will need to use something like base64 since
// TOML does not have any binary types.)
//
// When encoding TOML hashes (i.e., Go maps or structs), keys without any
// sub-hashes are encoded first.
//
// If a Go map is encoded, then its keys are sorted alphabetically for
// deterministic output. More control over this behavior may be provided if
// there is demand for it.
//
// Encoding Go values without a corresponding TOML representation---like map
// types with non-string keys---will cause an error to be returned. Similarly
// for mixed arrays/slices, arrays/slices with nil elements, embedded
// non-struct types and nested slices containing maps or structs.
// (e.g., [][]map[string]string is not allowed but []map[string]string is OK
// and so is []map[string][]string.)
func (enc *Encoder) Encode(v interface{}) error {
rv := eindirect(reflect.ValueOf(v))
if err := enc.safeEncode(Key([]string{}), rv); err != nil {
return err
}
return enc.w.Flush()
}
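// For illustration, a minimal encoding sketch from a caller's point of view
// (the value being encoded is hypothetical):
//
//	cfg := struct {
//		Title string `toml:"title"`
//		Ports []int  `toml:"ports"`
//	}{"TOML Example", []int{8001, 8002}}
//	buf := new(bytes.Buffer)
//	if err := toml.NewEncoder(buf).Encode(cfg); err != nil {
//		// handle err ...
//	}
//	// buf now contains:
//	//   title = "TOML Example"
//	//   ports = [8001, 8002]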
func (enc *Encoder) safeEncode(key Key, rv reflect.Value) (err error) {
defer func() {
if r := recover(); r != nil {
if terr, ok := r.(tomlEncodeError); ok {
err = terr.error
return
}
panic(r)
}
}()
enc.encode(key, rv)
return nil
}
func (enc *Encoder) encode(key Key, rv reflect.Value) {
// Special case. Time needs to be in ISO8601 format.
// Special case. If we can marshal the type to text, then we use that.
// Basically, this prevents the encoder from handling these types as
// generic structs (or whatever the underlying type of a TextMarshaler is).
switch rv.Interface().(type) {
case time.Time, TextMarshaler:
enc.keyEqElement(key, rv)
return
}
k := rv.Kind()
switch k {
case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32,
reflect.Int64,
reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32,
reflect.Uint64,
reflect.Float32, reflect.Float64, reflect.String, reflect.Bool:
enc.keyEqElement(key, rv)
case reflect.Array, reflect.Slice:
if typeEqual(tomlArrayHash, tomlTypeOfGo(rv)) {
enc.eArrayOfTables(key, rv)
} else {
enc.keyEqElement(key, rv)
}
case reflect.Interface:
if rv.IsNil() {
return
}
enc.encode(key, rv.Elem())
case reflect.Map:
if rv.IsNil() {
return
}
enc.eTable(key, rv)
case reflect.Ptr:
if rv.IsNil() {
return
}
enc.encode(key, rv.Elem())
case reflect.Struct:
enc.eTable(key, rv)
default:
panic(e("unsupported type for key '%s': %s", key, k))
}
}
// eElement encodes any value that can be an array element (primitives and
// arrays).
func (enc *Encoder) eElement(rv reflect.Value) {
switch v := rv.Interface().(type) {
case time.Time:
// Special case time.Time as a primitive. Has to come before
// TextMarshaler below because time.Time implements
// encoding.TextMarshaler, but we need to always use UTC.
enc.wf(v.UTC().Format("2006-01-02T15:04:05Z"))
return
case TextMarshaler:
// Special case. Use text marshaler if it's available for this value.
if s, err := v.MarshalText(); err != nil {
encPanic(err)
} else {
enc.writeQuoted(string(s))
}
return
}
switch rv.Kind() {
case reflect.Bool:
enc.wf(strconv.FormatBool(rv.Bool()))
case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32,
reflect.Int64:
enc.wf(strconv.FormatInt(rv.Int(), 10))
case reflect.Uint, reflect.Uint8, reflect.Uint16,
reflect.Uint32, reflect.Uint64:
enc.wf(strconv.FormatUint(rv.Uint(), 10))
case reflect.Float32:
enc.wf(floatAddDecimal(strconv.FormatFloat(rv.Float(), 'f', -1, 32)))
case reflect.Float64:
enc.wf(floatAddDecimal(strconv.FormatFloat(rv.Float(), 'f', -1, 64)))
case reflect.Array, reflect.Slice:
enc.eArrayOrSliceElement(rv)
case reflect.Interface:
enc.eElement(rv.Elem())
case reflect.String:
enc.writeQuoted(rv.String())
default:
panic(e("unexpected primitive type: %s", rv.Kind()))
}
}
// By the TOML spec, all floats must have a decimal point with at least one
// digit on either side.
func floatAddDecimal(fstr string) string {
if !strings.Contains(fstr, ".") {
return fstr + ".0"
}
return fstr
}
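// For illustration:
//
//	floatAddDecimal("3")    // "3.0"
//	floatAddDecimal("3.14") // "3.14" (unchanged)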
func (enc *Encoder) writeQuoted(s string) {
enc.wf("\"%s\"", quotedReplacer.Replace(s))
}
func (enc *Encoder) eArrayOrSliceElement(rv reflect.Value) {
length := rv.Len()
enc.wf("[")
for i := 0; i < length; i++ {
elem := rv.Index(i)
enc.eElement(elem)
if i != length-1 {
enc.wf(", ")
}
}
enc.wf("]")
}
func (enc *Encoder) eArrayOfTables(key Key, rv reflect.Value) {
if len(key) == 0 {
encPanic(errNoKey)
}
for i := 0; i < rv.Len(); i++ {
trv := rv.Index(i)
if isNil(trv) {
continue
}
panicIfInvalidKey(key)
enc.newline()
enc.wf("%s[[%s]]", enc.indentStr(key), key.maybeQuotedAll())
enc.newline()
enc.eMapOrStruct(key, trv)
}
}
func (enc *Encoder) eTable(key Key, rv reflect.Value) {
panicIfInvalidKey(key)
if len(key) == 1 {
// Output an extra newline between top-level tables.
// (The newline isn't written if nothing else has been written though.)
enc.newline()
}
if len(key) > 0 {
enc.wf("%s[%s]", enc.indentStr(key), key.maybeQuotedAll())
enc.newline()
}
enc.eMapOrStruct(key, rv)
}
func (enc *Encoder) eMapOrStruct(key Key, rv reflect.Value) {
switch rv := eindirect(rv); rv.Kind() {
case reflect.Map:
enc.eMap(key, rv)
case reflect.Struct:
enc.eStruct(key, rv)
default:
panic("eTable: unhandled reflect.Value Kind: " + rv.Kind().String())
}
}
func (enc *Encoder) eMap(key Key, rv reflect.Value) {
rt := rv.Type()
if rt.Key().Kind() != reflect.String {
encPanic(errNonString)
}
// Sort keys so that we have deterministic output. And write keys directly
// underneath this key first, before writing sub-structs or sub-maps.
var mapKeysDirect, mapKeysSub []string
for _, mapKey := range rv.MapKeys() {
k := mapKey.String()
if typeIsHash(tomlTypeOfGo(rv.MapIndex(mapKey))) {
mapKeysSub = append(mapKeysSub, k)
} else {
mapKeysDirect = append(mapKeysDirect, k)
}
}
var writeMapKeys = func(mapKeys []string) {
sort.Strings(mapKeys)
for _, mapKey := range mapKeys {
mrv := rv.MapIndex(reflect.ValueOf(mapKey))
if isNil(mrv) {
// Don't write anything for nil fields.
continue
}
enc.encode(key.add(mapKey), mrv)
}
}
writeMapKeys(mapKeysDirect)
writeMapKeys(mapKeysSub)
}
func (enc *Encoder) eStruct(key Key, rv reflect.Value) {
// Write keys for fields directly under this key first, because if we write
// a field that creates a new table, then all keys under it will be in that
// table (not the one we're writing here).
rt := rv.Type()
var fieldsDirect, fieldsSub [][]int
var addFields func(rt reflect.Type, rv reflect.Value, start []int)
addFields = func(rt reflect.Type, rv reflect.Value, start []int) {
for i := 0; i < rt.NumField(); i++ {
f := rt.Field(i)
// skip unexported fields
if f.PkgPath != "" && !f.Anonymous {
continue
}
frv := rv.Field(i)
if f.Anonymous {
t := f.Type
switch t.Kind() {
case reflect.Struct:
// Treat anonymous struct fields with
// tag names as though they are not
// anonymous, like encoding/json does.
if getOptions(f.Tag).name == "" {
addFields(t, frv, f.Index)
continue
}
case reflect.Ptr:
if t.Elem().Kind() == reflect.Struct &&
getOptions(f.Tag).name == "" {
if !frv.IsNil() {
addFields(t.Elem(), frv.Elem(), f.Index)
}
continue
}
// Fall through to the normal field encoding logic below
// for non-struct anonymous fields.
}
}
if typeIsHash(tomlTypeOfGo(frv)) {
fieldsSub = append(fieldsSub, append(start, f.Index...))
} else {
fieldsDirect = append(fieldsDirect, append(start, f.Index...))
}
}
}
addFields(rt, rv, nil)
var writeFields = func(fields [][]int) {
for _, fieldIndex := range fields {
sft := rt.FieldByIndex(fieldIndex)
sf := rv.FieldByIndex(fieldIndex)
if isNil(sf) {
// Don't write anything for nil fields.
continue
}
opts := getOptions(sft.Tag)
if opts.skip {
continue
}
keyName := sft.Name
if opts.name != "" {
keyName = opts.name
}
if opts.omitempty && isEmpty(sf) {
continue
}
if opts.omitzero && isZero(sf) {
continue
}
enc.encode(key.add(keyName), sf)
}
}
writeFields(fieldsDirect)
writeFields(fieldsSub)
}
// tomlTypeOfGo returns the TOML type of the Go value's type. The type may be
// `nil`, which means no concrete TOML type could be found. It is used, among
// other things, to determine whether the types of array elements are mixed
// (which is forbidden).
func tomlTypeOfGo(rv reflect.Value) tomlType {
if isNil(rv) || !rv.IsValid() {
return nil
}
switch rv.Kind() {
case reflect.Bool:
return tomlBool
case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32,
reflect.Int64,
reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32,
reflect.Uint64:
return tomlInteger
case reflect.Float32, reflect.Float64:
return tomlFloat
case reflect.Array, reflect.Slice:
if typeEqual(tomlHash, tomlArrayType(rv)) {
return tomlArrayHash
}
return tomlArray
case reflect.Ptr, reflect.Interface:
return tomlTypeOfGo(rv.Elem())
case reflect.String:
return tomlString
case reflect.Map:
return tomlHash
case reflect.Struct:
switch rv.Interface().(type) {
case time.Time:
return tomlDatetime
case TextMarshaler:
return tomlString
default:
return tomlHash
}
default:
panic("unexpected reflect.Kind: " + rv.Kind().String())
}
}
// tomlArrayType returns the element type of a TOML array. The type returned
// may be nil if it cannot be determined (e.g., a nil slice or a zero-length
// slice). This function may also panic if it finds a type that cannot be
// expressed in TOML (such as nil elements, heterogeneous arrays or directly
// nested arrays of tables).
func tomlArrayType(rv reflect.Value) tomlType {
if isNil(rv) || !rv.IsValid() || rv.Len() == 0 {
return nil
}
firstType := tomlTypeOfGo(rv.Index(0))
if firstType == nil {
encPanic(errArrayNilElement)
}
rvlen := rv.Len()
for i := 1; i < rvlen; i++ {
elem := rv.Index(i)
switch elemType := tomlTypeOfGo(elem); {
case elemType == nil:
encPanic(errArrayNilElement)
case !typeEqual(firstType, elemType):
encPanic(errArrayMixedElementTypes)
}
}
// If we have a nested array, then we must make sure that the nested
// array contains ONLY primitives.
// This checks arbitrarily nested arrays.
if typeEqual(firstType, tomlArray) || typeEqual(firstType, tomlArrayHash) {
nest := tomlArrayType(eindirect(rv.Index(0)))
if typeEqual(nest, tomlHash) || typeEqual(nest, tomlArrayHash) {
encPanic(errArrayNoTable)
}
}
return firstType
}
type tagOptions struct {
skip bool // "-"
name string
omitempty bool
omitzero bool
}
func getOptions(tag reflect.StructTag) tagOptions {
t := tag.Get("toml")
if t == "-" {
return tagOptions{skip: true}
}
var opts tagOptions
parts := strings.Split(t, ",")
opts.name = parts[0]
for _, s := range parts[1:] {
switch s {
case "omitempty":
opts.omitempty = true
case "omitzero":
opts.omitzero = true
}
}
return opts
}
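// For illustration, how the recognized tag forms map to tagOptions (the
// struct here is hypothetical):
//
//	type config struct {
//		Name  string `toml:"name"`            // renamed key
//		Debug bool   `toml:"debug,omitempty"` // omitted when false
//		Count int    `toml:",omitzero"`       // field name kept, omitted when 0
//		Skip  string `toml:"-"`               // never encoded
//	}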
func isZero(rv reflect.Value) bool {
switch rv.Kind() {
case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
return rv.Int() == 0
case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64:
return rv.Uint() == 0
case reflect.Float32, reflect.Float64:
return rv.Float() == 0.0
}
return false
}
func isEmpty(rv reflect.Value) bool {
switch rv.Kind() {
case reflect.Array, reflect.Slice, reflect.Map, reflect.String:
return rv.Len() == 0
case reflect.Bool:
return !rv.Bool()
}
return false
}
func (enc *Encoder) newline() {
if enc.hasWritten {
enc.wf("\n")
}
}
func (enc *Encoder) keyEqElement(key Key, val reflect.Value) {
if len(key) == 0 {
encPanic(errNoKey)
}
panicIfInvalidKey(key)
enc.wf("%s%s = ", enc.indentStr(key), key.maybeQuoted(len(key)-1))
enc.eElement(val)
enc.newline()
}
func (enc *Encoder) wf(format string, v ...interface{}) {
if _, err := fmt.Fprintf(enc.w, format, v...); err != nil {
encPanic(err)
}
enc.hasWritten = true
}
func (enc *Encoder) indentStr(key Key) string {
return strings.Repeat(enc.Indent, len(key)-1)
}
func encPanic(err error) {
panic(tomlEncodeError{err})
}
func eindirect(v reflect.Value) reflect.Value {
switch v.Kind() {
case reflect.Ptr, reflect.Interface:
return eindirect(v.Elem())
default:
return v
}
}
func isNil(rv reflect.Value) bool {
switch rv.Kind() {
case reflect.Interface, reflect.Map, reflect.Ptr, reflect.Slice:
return rv.IsNil()
default:
return false
}
}
func panicIfInvalidKey(key Key) {
for _, k := range key {
if len(k) == 0 {
encPanic(e("Key '%s' is not a valid table name. Key names "+
"cannot be empty.", key.maybeQuotedAll()))
}
}
}
func isValidKeyName(s string) bool {
return len(s) != 0
}

19
vendor/github.com/BurntSushi/toml/encoding_types.go generated vendored Normal file
View File

@@ -0,0 +1,19 @@
// +build go1.2
package toml
// In order to support Go 1.1, we define our own TextMarshaler and
// TextUnmarshaler types. For Go 1.2+, we just alias them with the
// standard library interfaces.
import (
"encoding"
)
// TextMarshaler is a synonym for encoding.TextMarshaler. It is defined here
// so that Go 1.1 can be supported.
type TextMarshaler encoding.TextMarshaler
// TextUnmarshaler is a synonym for encoding.TextUnmarshaler. It is defined
// here so that Go 1.1 can be supported.
type TextUnmarshaler encoding.TextUnmarshaler

18
vendor/github.com/BurntSushi/toml/encoding_types_1.1.go generated vendored Normal file
View File

@@ -0,0 +1,18 @@
// +build !go1.2
package toml
// These interfaces were introduced in Go 1.2, so we add them manually when
// compiling for Go 1.1.
// TextMarshaler is a synonym for encoding.TextMarshaler. It is defined here
// so that Go 1.1 can be supported.
type TextMarshaler interface {
MarshalText() (text []byte, err error)
}
// TextUnmarshaler is a synonym for encoding.TextUnmarshaler. It is defined
// here so that Go 1.1 can be supported.
type TextUnmarshaler interface {
UnmarshalText(text []byte) error
}

953
vendor/github.com/BurntSushi/toml/lex.go generated vendored Normal file
View File

@@ -0,0 +1,953 @@
package toml
import (
"fmt"
"strings"
"unicode"
"unicode/utf8"
)
type itemType int
const (
itemError itemType = iota
itemNIL // used in the parser to indicate no type
itemEOF
itemText
itemString
itemRawString
itemMultilineString
itemRawMultilineString
itemBool
itemInteger
itemFloat
itemDatetime
itemArray // the start of an array
itemArrayEnd
itemTableStart
itemTableEnd
itemArrayTableStart
itemArrayTableEnd
itemKeyStart
itemCommentStart
itemInlineTableStart
itemInlineTableEnd
)
const (
eof = 0
comma = ','
tableStart = '['
tableEnd = ']'
arrayTableStart = '['
arrayTableEnd = ']'
tableSep = '.'
keySep = '='
arrayStart = '['
arrayEnd = ']'
commentStart = '#'
stringStart = '"'
stringEnd = '"'
rawStringStart = '\''
rawStringEnd = '\''
inlineTableStart = '{'
inlineTableEnd = '}'
)
type stateFn func(lx *lexer) stateFn
type lexer struct {
input string
start int
pos int
line int
state stateFn
items chan item
// Allow for backing up up to three runes.
// This is necessary because TOML contains 3-rune tokens (""" and ''').
prevWidths [3]int
nprev int // how many of prevWidths are in use
// If we emit an eof, we can still back up, but it is not OK to call
// next again.
atEOF bool
// A stack of state functions used to maintain context.
// The idea is to reuse parts of the state machine in various places.
// For example, values can appear at the top level or within arbitrarily
// nested arrays. The last state on the stack is used after a value has
// been lexed. Similarly for comments.
stack []stateFn
}
type item struct {
typ itemType
val string
line int
}
func (lx *lexer) nextItem() item {
for {
select {
case item := <-lx.items:
return item
default:
lx.state = lx.state(lx)
}
}
}
func lex(input string) *lexer {
lx := &lexer{
input: input,
state: lexTop,
line: 1,
items: make(chan item, 10),
stack: make([]stateFn, 0, 10),
}
return lx
}
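// For illustration, how the parser drives this state machine (a sketch
// mirroring parser.next in parse.go):
//
//	lx := lex(input)
//	for {
//		it := lx.nextItem()
//		if it.typ == itemError {
//			// report it.val as an error at it.line
//			break
//		}
//		if it.typ == itemEOF {
//			break
//		}
//		// otherwise handle it.typ / it.val
//	}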
func (lx *lexer) push(state stateFn) {
lx.stack = append(lx.stack, state)
}
func (lx *lexer) pop() stateFn {
if len(lx.stack) == 0 {
return lx.errorf("BUG in lexer: no states to pop")
}
last := lx.stack[len(lx.stack)-1]
lx.stack = lx.stack[0 : len(lx.stack)-1]
return last
}
func (lx *lexer) current() string {
return lx.input[lx.start:lx.pos]
}
func (lx *lexer) emit(typ itemType) {
lx.items <- item{typ, lx.current(), lx.line}
lx.start = lx.pos
}
func (lx *lexer) emitTrim(typ itemType) {
lx.items <- item{typ, strings.TrimSpace(lx.current()), lx.line}
lx.start = lx.pos
}
func (lx *lexer) next() (r rune) {
if lx.atEOF {
panic("next called after EOF")
}
if lx.pos >= len(lx.input) {
lx.atEOF = true
return eof
}
if lx.input[lx.pos] == '\n' {
lx.line++
}
lx.prevWidths[2] = lx.prevWidths[1]
lx.prevWidths[1] = lx.prevWidths[0]
if lx.nprev < 3 {
lx.nprev++
}
r, w := utf8.DecodeRuneInString(lx.input[lx.pos:])
lx.prevWidths[0] = w
lx.pos += w
return r
}
// ignore skips over the pending input before this point.
func (lx *lexer) ignore() {
lx.start = lx.pos
}
// backup steps back one rune. Can be called only twice between calls to next.
func (lx *lexer) backup() {
if lx.atEOF {
lx.atEOF = false
return
}
if lx.nprev < 1 {
panic("backed up too far")
}
w := lx.prevWidths[0]
lx.prevWidths[0] = lx.prevWidths[1]
lx.prevWidths[1] = lx.prevWidths[2]
lx.nprev--
lx.pos -= w
if lx.pos < len(lx.input) && lx.input[lx.pos] == '\n' {
lx.line--
}
}
// accept consumes the next rune if it's equal to `valid`.
func (lx *lexer) accept(valid rune) bool {
if lx.next() == valid {
return true
}
lx.backup()
return false
}
// peek returns but does not consume the next rune in the input.
func (lx *lexer) peek() rune {
r := lx.next()
lx.backup()
return r
}
// skip ignores all input that matches the given predicate.
func (lx *lexer) skip(pred func(rune) bool) {
for {
r := lx.next()
if pred(r) {
continue
}
lx.backup()
lx.ignore()
return
}
}
// errorf stops all lexing by emitting an error and returning `nil`.
// Note that any value that is a character is escaped if it's a special
// character (newlines, tabs, etc.).
func (lx *lexer) errorf(format string, values ...interface{}) stateFn {
lx.items <- item{
itemError,
fmt.Sprintf(format, values...),
lx.line,
}
return nil
}
// lexTop consumes elements at the top level of TOML data.
func lexTop(lx *lexer) stateFn {
r := lx.next()
if isWhitespace(r) || isNL(r) {
return lexSkip(lx, lexTop)
}
switch r {
case commentStart:
lx.push(lexTop)
return lexCommentStart
case tableStart:
return lexTableStart
case eof:
if lx.pos > lx.start {
return lx.errorf("unexpected EOF")
}
lx.emit(itemEOF)
return nil
}
// At this point, the only valid item can be a key, so we back up
// and let the key lexer do the rest.
lx.backup()
lx.push(lexTopEnd)
return lexKeyStart
}
// lexTopEnd is entered whenever a top-level item has been consumed. (A value
// or a table.) It must see only whitespace, and will turn back to lexTop
// upon a newline. If it sees EOF, it will quit the lexer successfully.
func lexTopEnd(lx *lexer) stateFn {
r := lx.next()
switch {
case r == commentStart:
// a comment will read to a newline for us.
lx.push(lexTop)
return lexCommentStart
case isWhitespace(r):
return lexTopEnd
case isNL(r):
lx.ignore()
return lexTop
case r == eof:
lx.emit(itemEOF)
return nil
}
return lx.errorf("expected a top-level item to end with a newline, "+
"comment, or EOF, but got %q instead", r)
}
// lexTable lexes the beginning of a table. Namely, it makes sure that
// it starts with a character other than '.' and ']'.
// It assumes that '[' has already been consumed.
// It also handles the case that this is an item in an array of tables.
// e.g., '[[name]]'.
func lexTableStart(lx *lexer) stateFn {
if lx.peek() == arrayTableStart {
lx.next()
lx.emit(itemArrayTableStart)
lx.push(lexArrayTableEnd)
} else {
lx.emit(itemTableStart)
lx.push(lexTableEnd)
}
return lexTableNameStart
}
func lexTableEnd(lx *lexer) stateFn {
lx.emit(itemTableEnd)
return lexTopEnd
}
func lexArrayTableEnd(lx *lexer) stateFn {
if r := lx.next(); r != arrayTableEnd {
return lx.errorf("expected end of table array name delimiter %q, "+
"but got %q instead", arrayTableEnd, r)
}
lx.emit(itemArrayTableEnd)
return lexTopEnd
}
func lexTableNameStart(lx *lexer) stateFn {
lx.skip(isWhitespace)
switch r := lx.peek(); {
case r == tableEnd || r == eof:
return lx.errorf("unexpected end of table name " +
"(table names cannot be empty)")
case r == tableSep:
return lx.errorf("unexpected table separator " +
"(table names cannot be empty)")
case r == stringStart || r == rawStringStart:
lx.ignore()
lx.push(lexTableNameEnd)
return lexValue // reuse string lexing
default:
return lexBareTableName
}
}
// lexBareTableName lexes the name of a table. It assumes that at least one
// valid character for the table has already been read.
func lexBareTableName(lx *lexer) stateFn {
r := lx.next()
if isBareKeyChar(r) {
return lexBareTableName
}
lx.backup()
lx.emit(itemText)
return lexTableNameEnd
}
// lexTableNameEnd reads the end of a piece of a table name, optionally
// consuming whitespace.
func lexTableNameEnd(lx *lexer) stateFn {
lx.skip(isWhitespace)
switch r := lx.next(); {
case isWhitespace(r):
return lexTableNameEnd
case r == tableSep:
lx.ignore()
return lexTableNameStart
case r == tableEnd:
return lx.pop()
default:
return lx.errorf("expected '.' or ']' to end table name, "+
"but got %q instead", r)
}
}
// lexKeyStart consumes a key name up until the first non-whitespace character.
// lexKeyStart will ignore whitespace.
func lexKeyStart(lx *lexer) stateFn {
r := lx.peek()
switch {
case r == keySep:
return lx.errorf("unexpected key separator %q", keySep)
case isWhitespace(r) || isNL(r):
lx.next()
return lexSkip(lx, lexKeyStart)
case r == stringStart || r == rawStringStart:
lx.ignore()
lx.emit(itemKeyStart)
lx.push(lexKeyEnd)
return lexValue // reuse string lexing
default:
lx.ignore()
lx.emit(itemKeyStart)
return lexBareKey
}
}
// lexBareKey consumes the text of a bare key. Assumes that the first character
// (which is not whitespace) has not yet been consumed.
func lexBareKey(lx *lexer) stateFn {
switch r := lx.next(); {
case isBareKeyChar(r):
return lexBareKey
case isWhitespace(r):
lx.backup()
lx.emit(itemText)
return lexKeyEnd
case r == keySep:
lx.backup()
lx.emit(itemText)
return lexKeyEnd
default:
return lx.errorf("bare keys cannot contain %q", r)
}
}
// lexKeyEnd consumes the end of a key and trims whitespace (up to the key
// separator).
func lexKeyEnd(lx *lexer) stateFn {
switch r := lx.next(); {
case r == keySep:
return lexSkip(lx, lexValue)
case isWhitespace(r):
return lexSkip(lx, lexKeyEnd)
default:
return lx.errorf("expected key separator %q, but got %q instead",
keySep, r)
}
}
// lexValue starts the consumption of a value anywhere a value is expected.
// lexValue will ignore whitespace.
// After a value is lexed, the last state on the stack is popped and returned.
func lexValue(lx *lexer) stateFn {
// We allow whitespace to precede a value, but NOT newlines.
// In array syntax, the array states are responsible for ignoring newlines.
r := lx.next()
switch {
case isWhitespace(r):
return lexSkip(lx, lexValue)
case isDigit(r):
lx.backup() // avoid an extra state and use the same as above
return lexNumberOrDateStart
}
switch r {
case arrayStart:
lx.ignore()
lx.emit(itemArray)
return lexArrayValue
case inlineTableStart:
lx.ignore()
lx.emit(itemInlineTableStart)
return lexInlineTableValue
case stringStart:
if lx.accept(stringStart) {
if lx.accept(stringStart) {
lx.ignore() // Ignore """
return lexMultilineString
}
lx.backup()
}
lx.ignore() // ignore the '"'
return lexString
case rawStringStart:
if lx.accept(rawStringStart) {
if lx.accept(rawStringStart) {
lx.ignore() // Ignore """
return lexMultilineRawString
}
lx.backup()
}
lx.ignore() // ignore the "'"
return lexRawString
case '+', '-':
return lexNumberStart
case '.': // special error case, be kind to users
return lx.errorf("floats must start with a digit, not '.'")
}
if unicode.IsLetter(r) {
// Be permissive here; lexBool will give a nice error if the
// user wrote something like
// x = foo
// (i.e. not 'true' or 'false' but is something else word-like.)
lx.backup()
return lexBool
}
return lx.errorf("expected value but found %q instead", r)
}
// lexArrayValue consumes one value in an array. It assumes that '[' or ','
// have already been consumed. All whitespace and newlines are ignored.
func lexArrayValue(lx *lexer) stateFn {
r := lx.next()
switch {
case isWhitespace(r) || isNL(r):
return lexSkip(lx, lexArrayValue)
case r == commentStart:
lx.push(lexArrayValue)
return lexCommentStart
case r == comma:
return lx.errorf("unexpected comma")
case r == arrayEnd:
// NOTE(caleb): The spec isn't clear about whether you can have
// a trailing comma or not, so we'll allow it.
return lexArrayEnd
}
lx.backup()
lx.push(lexArrayValueEnd)
return lexValue
}
// lexArrayValueEnd consumes everything between the end of an array value and
// the next value (or the end of the array): it ignores whitespace and newlines
// and expects either a ',' or a ']'.
func lexArrayValueEnd(lx *lexer) stateFn {
r := lx.next()
switch {
case isWhitespace(r) || isNL(r):
return lexSkip(lx, lexArrayValueEnd)
case r == commentStart:
lx.push(lexArrayValueEnd)
return lexCommentStart
case r == comma:
lx.ignore()
return lexArrayValue // move on to the next value
case r == arrayEnd:
return lexArrayEnd
}
return lx.errorf(
"expected a comma or array terminator %q, but got %q instead",
arrayEnd, r,
)
}
// lexArrayEnd finishes the lexing of an array.
// It assumes that a ']' has just been consumed.
func lexArrayEnd(lx *lexer) stateFn {
lx.ignore()
lx.emit(itemArrayEnd)
return lx.pop()
}
// lexInlineTableValue consumes one key/value pair in an inline table.
// It assumes that '{' or ',' have already been consumed. Whitespace is ignored.
func lexInlineTableValue(lx *lexer) stateFn {
r := lx.next()
switch {
case isWhitespace(r):
return lexSkip(lx, lexInlineTableValue)
case isNL(r):
return lx.errorf("newlines not allowed within inline tables")
case r == commentStart:
lx.push(lexInlineTableValue)
return lexCommentStart
case r == comma:
return lx.errorf("unexpected comma")
case r == inlineTableEnd:
return lexInlineTableEnd
}
lx.backup()
lx.push(lexInlineTableValueEnd)
return lexKeyStart
}
// lexInlineTableValueEnd consumes everything between the end of an inline table
// key/value pair and the next pair (or the end of the table):
// it ignores whitespace and expects either a ',' or a '}'.
func lexInlineTableValueEnd(lx *lexer) stateFn {
r := lx.next()
switch {
case isWhitespace(r):
return lexSkip(lx, lexInlineTableValueEnd)
case isNL(r):
return lx.errorf("newlines not allowed within inline tables")
case r == commentStart:
lx.push(lexInlineTableValueEnd)
return lexCommentStart
case r == comma:
lx.ignore()
return lexInlineTableValue
case r == inlineTableEnd:
return lexInlineTableEnd
}
return lx.errorf("expected a comma or an inline table terminator %q, "+
"but got %q instead", inlineTableEnd, r)
}
// lexInlineTableEnd finishes the lexing of an inline table.
// It assumes that a '}' has just been consumed.
func lexInlineTableEnd(lx *lexer) stateFn {
lx.ignore()
lx.emit(itemInlineTableEnd)
return lx.pop()
}
// lexString consumes the inner contents of a string. It assumes that the
// beginning '"' has already been consumed and ignored.
func lexString(lx *lexer) stateFn {
r := lx.next()
switch {
case r == eof:
return lx.errorf("unexpected EOF")
case isNL(r):
return lx.errorf("strings cannot contain newlines")
case r == '\\':
lx.push(lexString)
return lexStringEscape
case r == stringEnd:
lx.backup()
lx.emit(itemString)
lx.next()
lx.ignore()
return lx.pop()
}
return lexString
}
// lexMultilineString consumes the inner contents of a string. It assumes that
// the beginning '"""' has already been consumed and ignored.
func lexMultilineString(lx *lexer) stateFn {
switch lx.next() {
case eof:
return lx.errorf("unexpected EOF")
case '\\':
return lexMultilineStringEscape
case stringEnd:
if lx.accept(stringEnd) {
if lx.accept(stringEnd) {
lx.backup()
lx.backup()
lx.backup()
lx.emit(itemMultilineString)
lx.next()
lx.next()
lx.next()
lx.ignore()
return lx.pop()
}
lx.backup()
}
}
return lexMultilineString
}
// lexRawString consumes a raw string. Nothing can be escaped in such a string.
// It assumes that the beginning "'" has already been consumed and ignored.
func lexRawString(lx *lexer) stateFn {
r := lx.next()
switch {
case r == eof:
return lx.errorf("unexpected EOF")
case isNL(r):
return lx.errorf("strings cannot contain newlines")
case r == rawStringEnd:
lx.backup()
lx.emit(itemRawString)
lx.next()
lx.ignore()
return lx.pop()
}
return lexRawString
}
// lexMultilineRawString consumes a raw string. Nothing can be escaped in such
// a string. It assumes that the beginning "'''" has already been consumed and
// ignored.
func lexMultilineRawString(lx *lexer) stateFn {
switch lx.next() {
case eof:
return lx.errorf("unexpected EOF")
case rawStringEnd:
if lx.accept(rawStringEnd) {
if lx.accept(rawStringEnd) {
lx.backup()
lx.backup()
lx.backup()
lx.emit(itemRawMultilineString)
lx.next()
lx.next()
lx.next()
lx.ignore()
return lx.pop()
}
lx.backup()
}
}
return lexMultilineRawString
}
// lexMultilineStringEscape consumes an escaped character. It assumes that the
// preceding '\\' has already been consumed.
func lexMultilineStringEscape(lx *lexer) stateFn {
// Handle the special case first:
if isNL(lx.next()) {
return lexMultilineString
}
lx.backup()
lx.push(lexMultilineString)
return lexStringEscape(lx)
}
func lexStringEscape(lx *lexer) stateFn {
r := lx.next()
switch r {
case 'b':
fallthrough
case 't':
fallthrough
case 'n':
fallthrough
case 'f':
fallthrough
case 'r':
fallthrough
case '"':
fallthrough
case '\\':
return lx.pop()
case 'u':
return lexShortUnicodeEscape
case 'U':
return lexLongUnicodeEscape
}
return lx.errorf("invalid escape character %q; only the following "+
"escape characters are allowed: "+
`\b, \t, \n, \f, \r, \", \\, \uXXXX, and \UXXXXXXXX`, r)
}
func lexShortUnicodeEscape(lx *lexer) stateFn {
var r rune
for i := 0; i < 4; i++ {
r = lx.next()
if !isHexadecimal(r) {
return lx.errorf(`expected four hexadecimal digits after '\u', `+
"but got %q instead", lx.current())
}
}
return lx.pop()
}
func lexLongUnicodeEscape(lx *lexer) stateFn {
var r rune
for i := 0; i < 8; i++ {
r = lx.next()
if !isHexadecimal(r) {
return lx.errorf(`expected eight hexadecimal digits after '\U', `+
"but got %q instead", lx.current())
}
}
return lx.pop()
}
// lexNumberOrDateStart consumes either an integer, a float, or datetime.
func lexNumberOrDateStart(lx *lexer) stateFn {
r := lx.next()
if isDigit(r) {
return lexNumberOrDate
}
switch r {
case '_':
return lexNumber
case 'e', 'E':
return lexFloat
case '.':
return lx.errorf("floats must start with a digit, not '.'")
}
return lx.errorf("expected a digit but got %q", r)
}
// lexNumberOrDate consumes either an integer, float or datetime.
func lexNumberOrDate(lx *lexer) stateFn {
r := lx.next()
if isDigit(r) {
return lexNumberOrDate
}
switch r {
case '-':
return lexDatetime
case '_':
return lexNumber
case '.', 'e', 'E':
return lexFloat
}
lx.backup()
lx.emit(itemInteger)
return lx.pop()
}
// lexDatetime consumes a Datetime, to a first approximation.
// The parser validates that it matches one of the accepted formats.
func lexDatetime(lx *lexer) stateFn {
r := lx.next()
if isDigit(r) {
return lexDatetime
}
switch r {
case '-', 'T', ':', '.', 'Z', '+':
return lexDatetime
}
lx.backup()
lx.emit(itemDatetime)
return lx.pop()
}
// lexNumberStart consumes either an integer or a float. It assumes that a sign
// has already been read, but that *no* digits have been consumed.
// lexNumberStart will move to the appropriate integer or float states.
func lexNumberStart(lx *lexer) stateFn {
// We MUST see a digit. Even floats have to start with a digit.
r := lx.next()
if !isDigit(r) {
if r == '.' {
return lx.errorf("floats must start with a digit, not '.'")
}
return lx.errorf("expected a digit but got %q", r)
}
return lexNumber
}
// lexNumber consumes an integer or a float after seeing the first digit.
func lexNumber(lx *lexer) stateFn {
r := lx.next()
if isDigit(r) {
return lexNumber
}
switch r {
case '_':
return lexNumber
case '.', 'e', 'E':
return lexFloat
}
lx.backup()
lx.emit(itemInteger)
return lx.pop()
}
// lexFloat consumes the elements of a float. It allows any sequence of
// float-like characters, so floats emitted by the lexer are only a first
// approximation and must be validated by the parser.
func lexFloat(lx *lexer) stateFn {
r := lx.next()
if isDigit(r) {
return lexFloat
}
switch r {
case '_', '.', '-', '+', 'e', 'E':
return lexFloat
}
lx.backup()
lx.emit(itemFloat)
return lx.pop()
}
// lexBool consumes a bool string: 'true' or 'false'.
func lexBool(lx *lexer) stateFn {
var rs []rune
for {
r := lx.next()
if !unicode.IsLetter(r) {
lx.backup()
break
}
rs = append(rs, r)
}
s := string(rs)
switch s {
case "true", "false":
lx.emit(itemBool)
return lx.pop()
}
return lx.errorf("expected value but found %q instead", s)
}
// lexCommentStart begins the lexing of a comment. It will emit
// itemCommentStart and consume no characters, passing control to lexComment.
func lexCommentStart(lx *lexer) stateFn {
lx.ignore()
lx.emit(itemCommentStart)
return lexComment
}
// lexComment lexes an entire comment. It assumes that '#' has been consumed.
// It will consume *up to* the first newline character, and pass control
// back to the last state on the stack.
func lexComment(lx *lexer) stateFn {
r := lx.peek()
if isNL(r) || r == eof {
lx.emit(itemText)
return lx.pop()
}
lx.next()
return lexComment
}
// lexSkip ignores all slurped input and moves on to the next state.
func lexSkip(lx *lexer, nextState stateFn) stateFn {
return func(lx *lexer) stateFn {
lx.ignore()
return nextState
}
}
// isWhitespace returns true if `r` is a whitespace character according
// to the spec.
func isWhitespace(r rune) bool {
return r == '\t' || r == ' '
}
func isNL(r rune) bool {
return r == '\n' || r == '\r'
}
func isDigit(r rune) bool {
return r >= '0' && r <= '9'
}
func isHexadecimal(r rune) bool {
return (r >= '0' && r <= '9') ||
(r >= 'a' && r <= 'f') ||
(r >= 'A' && r <= 'F')
}
func isBareKeyChar(r rune) bool {
return (r >= 'A' && r <= 'Z') ||
(r >= 'a' && r <= 'z') ||
(r >= '0' && r <= '9') ||
r == '_' ||
r == '-'
}
func (itype itemType) String() string {
switch itype {
case itemError:
return "Error"
case itemNIL:
return "NIL"
case itemEOF:
return "EOF"
case itemText:
return "Text"
case itemString, itemRawString, itemMultilineString, itemRawMultilineString:
return "String"
case itemBool:
return "Bool"
case itemInteger:
return "Integer"
case itemFloat:
return "Float"
case itemDatetime:
return "DateTime"
case itemTableStart:
return "TableStart"
case itemTableEnd:
return "TableEnd"
case itemKeyStart:
return "KeyStart"
case itemArray:
return "Array"
case itemArrayEnd:
return "ArrayEnd"
case itemCommentStart:
return "CommentStart"
}
panic(fmt.Sprintf("BUG: Unknown type '%d'.", int(itype)))
}
func (item item) String() string {
return fmt.Sprintf("(%s, %s)", item.typ.String(), item.val)
}

592
vendor/github.com/BurntSushi/toml/parse.go generated vendored Normal file
View File

@@ -0,0 +1,592 @@
package toml
import (
"fmt"
"strconv"
"strings"
"time"
"unicode"
"unicode/utf8"
)
type parser struct {
mapping map[string]interface{}
types map[string]tomlType
lx *lexer
// A list of keys in the order that they appear in the TOML data.
ordered []Key
// the full key for the current hash in scope
context Key
// the base key name for everything except hashes
currentKey string
// rough approximation of line number
approxLine int
// A map of 'key.group.names' to whether they were created implicitly.
implicits map[string]bool
}
type parseError string
func (pe parseError) Error() string {
return string(pe)
}
func parse(data string) (p *parser, err error) {
defer func() {
if r := recover(); r != nil {
var ok bool
if err, ok = r.(parseError); ok {
return
}
panic(r)
}
}()
p = &parser{
mapping: make(map[string]interface{}),
types: make(map[string]tomlType),
lx: lex(data),
ordered: make([]Key, 0),
implicits: make(map[string]bool),
}
for {
item := p.next()
if item.typ == itemEOF {
break
}
p.topLevel(item)
}
return p, nil
}
func (p *parser) panicf(format string, v ...interface{}) {
msg := fmt.Sprintf("Near line %d (last key parsed '%s'): %s",
p.approxLine, p.current(), fmt.Sprintf(format, v...))
panic(parseError(msg))
}
func (p *parser) next() item {
it := p.lx.nextItem()
if it.typ == itemError {
p.panicf("%s", it.val)
}
return it
}
func (p *parser) bug(format string, v ...interface{}) {
panic(fmt.Sprintf("BUG: "+format+"\n\n", v...))
}
func (p *parser) expect(typ itemType) item {
it := p.next()
p.assertEqual(typ, it.typ)
return it
}
func (p *parser) assertEqual(expected, got itemType) {
if expected != got {
p.bug("Expected '%s' but got '%s'.", expected, got)
}
}
func (p *parser) topLevel(item item) {
switch item.typ {
case itemCommentStart:
p.approxLine = item.line
p.expect(itemText)
case itemTableStart:
kg := p.next()
p.approxLine = kg.line
var key Key
for ; kg.typ != itemTableEnd && kg.typ != itemEOF; kg = p.next() {
key = append(key, p.keyString(kg))
}
p.assertEqual(itemTableEnd, kg.typ)
p.establishContext(key, false)
p.setType("", tomlHash)
p.ordered = append(p.ordered, key)
case itemArrayTableStart:
kg := p.next()
p.approxLine = kg.line
var key Key
for ; kg.typ != itemArrayTableEnd && kg.typ != itemEOF; kg = p.next() {
key = append(key, p.keyString(kg))
}
p.assertEqual(itemArrayTableEnd, kg.typ)
p.establishContext(key, true)
p.setType("", tomlArrayHash)
p.ordered = append(p.ordered, key)
case itemKeyStart:
kname := p.next()
p.approxLine = kname.line
p.currentKey = p.keyString(kname)
val, typ := p.value(p.next())
p.setValue(p.currentKey, val)
p.setType(p.currentKey, typ)
p.ordered = append(p.ordered, p.context.add(p.currentKey))
p.currentKey = ""
default:
p.bug("Unexpected type at top level: %s", item.typ)
}
}
// Gets a string for a key (or part of a key in a table name).
func (p *parser) keyString(it item) string {
switch it.typ {
case itemText:
return it.val
case itemString, itemMultilineString,
itemRawString, itemRawMultilineString:
s, _ := p.value(it)
return s.(string)
default:
p.bug("Unexpected key type: %s", it.typ)
panic("unreachable")
}
}
// value translates an expected value from the lexer into a Go value wrapped
// as an empty interface.
func (p *parser) value(it item) (interface{}, tomlType) {
switch it.typ {
case itemString:
return p.replaceEscapes(it.val), p.typeOfPrimitive(it)
case itemMultilineString:
trimmed := stripFirstNewline(stripEscapedWhitespace(it.val))
return p.replaceEscapes(trimmed), p.typeOfPrimitive(it)
case itemRawString:
return it.val, p.typeOfPrimitive(it)
case itemRawMultilineString:
return stripFirstNewline(it.val), p.typeOfPrimitive(it)
case itemBool:
switch it.val {
case "true":
return true, p.typeOfPrimitive(it)
case "false":
return false, p.typeOfPrimitive(it)
}
p.bug("Expected boolean value, but got '%s'.", it.val)
case itemInteger:
if !numUnderscoresOK(it.val) {
p.panicf("Invalid integer %q: underscores must be surrounded by digits",
it.val)
}
val := strings.Replace(it.val, "_", "", -1)
num, err := strconv.ParseInt(val, 10, 64)
if err != nil {
// Distinguish integer values. Normally, it'd be a bug if the lexer
// provides an invalid integer, but it's possible that the number is
// out of range of valid values (which the lexer cannot determine).
// So mark the former as a bug but the latter as a legitimate user
// error.
if e, ok := err.(*strconv.NumError); ok &&
e.Err == strconv.ErrRange {
p.panicf("Integer '%s' is out of the range of 64-bit "+
"signed integers.", it.val)
} else {
p.bug("Expected integer value, but got '%s'.", it.val)
}
}
return num, p.typeOfPrimitive(it)
case itemFloat:
parts := strings.FieldsFunc(it.val, func(r rune) bool {
switch r {
case '.', 'e', 'E':
return true
}
return false
})
for _, part := range parts {
if !numUnderscoresOK(part) {
p.panicf("Invalid float %q: underscores must be "+
"surrounded by digits", it.val)
}
}
if !numPeriodsOK(it.val) {
// As a special case, numbers like '123.' or '1.e2',
// which are valid as far as Go/strconv are concerned,
// must be rejected because TOML says that a fractional
// part consists of '.' followed by 1+ digits.
p.panicf("Invalid float %q: '.' must be followed "+
"by one or more digits", it.val)
}
val := strings.Replace(it.val, "_", "", -1)
num, err := strconv.ParseFloat(val, 64)
if err != nil {
if e, ok := err.(*strconv.NumError); ok &&
e.Err == strconv.ErrRange {
p.panicf("Float '%s' is out of the range of 64-bit "+
"IEEE-754 floating-point numbers.", it.val)
} else {
p.panicf("Invalid float value: %q", it.val)
}
}
return num, p.typeOfPrimitive(it)
case itemDatetime:
var t time.Time
var ok bool
var err error
for _, format := range []string{
"2006-01-02T15:04:05Z07:00",
"2006-01-02T15:04:05",
"2006-01-02",
} {
t, err = time.ParseInLocation(format, it.val, time.Local)
if err == nil {
ok = true
break
}
}
if !ok {
p.panicf("Invalid TOML Datetime: %q.", it.val)
}
return t, p.typeOfPrimitive(it)
case itemArray:
array := make([]interface{}, 0)
types := make([]tomlType, 0)
for it = p.next(); it.typ != itemArrayEnd; it = p.next() {
if it.typ == itemCommentStart {
p.expect(itemText)
continue
}
val, typ := p.value(it)
array = append(array, val)
types = append(types, typ)
}
return array, p.typeOfArray(types)
case itemInlineTableStart:
var (
hash = make(map[string]interface{})
outerContext = p.context
outerKey = p.currentKey
)
p.context = append(p.context, p.currentKey)
p.currentKey = ""
for it := p.next(); it.typ != itemInlineTableEnd; it = p.next() {
if it.typ == itemCommentStart {
p.expect(itemText)
continue
}
if it.typ != itemKeyStart {
p.bug("Expected key start but instead found %q, around line %d",
it.val, p.approxLine)
}
// retrieve key
k := p.next()
p.approxLine = k.line
kname := p.keyString(k)
// retrieve value
p.currentKey = kname
val, typ := p.value(p.next())
// make sure we keep metadata up to date
p.setType(kname, typ)
p.ordered = append(p.ordered, p.context.add(p.currentKey))
hash[kname] = val
}
p.context = outerContext
p.currentKey = outerKey
return hash, tomlHash
}
p.bug("Unexpected value type: %s", it.typ)
panic("unreachable")
}
// numUnderscoresOK checks whether each underscore in s is surrounded by
// characters that are not underscores.
func numUnderscoresOK(s string) bool {
accept := false
for _, r := range s {
if r == '_' {
if !accept {
return false
}
accept = false
continue
}
accept = true
}
return accept
}
// numPeriodsOK checks whether every period in s is followed by a digit.
func numPeriodsOK(s string) bool {
period := false
for _, r := range s {
if period && !isDigit(r) {
return false
}
period = r == '.'
}
return !period
}
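A short sketch of the numeral rules these two helpers enforce, as observed through `toml.Decode` (inputs are illustrative):

```go
package main

import (
	"fmt"

	"github.com/BurntSushi/toml"
)

func main() {
	var v struct{ N float64 }

	_, err := toml.Decode(`n = 1__000`, &v)
	fmt.Println(err != nil) // true: underscores must be surrounded by digits

	_, err = toml.Decode(`n = 1.`, &v)
	fmt.Println(err != nil) // true: '.' must be followed by one or more digits

	_, err = toml.Decode(`n = 1_000.5`, &v)
	fmt.Println(err) // <nil>: both checks pass
}
```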
// establishContext sets the current context of the parser,
// where the context is either a hash or an array of hashes. Which one is
// set depends on the value of the `array` parameter.
//
// Establishing the context also makes sure that the key isn't a duplicate, and
// will create implicit hashes automatically.
func (p *parser) establishContext(key Key, array bool) {
var ok bool
// Always start at the top level and drill down for our context.
hashContext := p.mapping
keyContext := make(Key, 0)
// We only need implicit hashes for key[0 : len(key)-1] (all but the last element).
for _, k := range key[0 : len(key)-1] {
_, ok = hashContext[k]
keyContext = append(keyContext, k)
// No key? Make an implicit hash and move on.
if !ok {
p.addImplicit(keyContext)
hashContext[k] = make(map[string]interface{})
}
// If the hash context is actually an array of tables, then set
// the hash context to the last element in that array.
//
// Otherwise, it better be a table, since this MUST be a key group (by
// virtue of it not being the last element in a key).
switch t := hashContext[k].(type) {
case []map[string]interface{}:
hashContext = t[len(t)-1]
case map[string]interface{}:
hashContext = t
default:
p.panicf("Key '%s' was already created as a hash.", keyContext)
}
}
p.context = keyContext
if array {
// If this is the first element for this array, then allocate a new
// list of tables for it.
k := key[len(key)-1]
if _, ok := hashContext[k]; !ok {
hashContext[k] = make([]map[string]interface{}, 0, 5)
}
// Add a new table. But make sure the key hasn't already been used
// for something else.
if hash, ok := hashContext[k].([]map[string]interface{}); ok {
hashContext[k] = append(hash, make(map[string]interface{}))
} else {
p.panicf("Key '%s' was already created and cannot be used as "+
"an array.", keyContext)
}
} else {
p.setValue(key[len(key)-1], make(map[string]interface{}))
}
p.context = append(p.context, key[len(key)-1])
}
// setValue sets the given key to the given value in the current context.
// It will make sure that the key hasn't already been defined, accounting
// for implicit key groups.
func (p *parser) setValue(key string, value interface{}) {
var tmpHash interface{}
var ok bool
hash := p.mapping
keyContext := make(Key, 0)
for _, k := range p.context {
keyContext = append(keyContext, k)
if tmpHash, ok = hash[k]; !ok {
p.bug("Context for key '%s' has not been established.", keyContext)
}
switch t := tmpHash.(type) {
case []map[string]interface{}:
// The context is an array of tables. Pick the most recent table
// defined as the current hash.
hash = t[len(t)-1]
case map[string]interface{}:
hash = t
default:
p.bug("Expected hash to have type 'map[string]interface{}', but "+
"it has '%T' instead.", tmpHash)
}
}
keyContext = append(keyContext, key)
if _, ok := hash[key]; ok {
// Typically, if the given key has already been set, then we have
// to raise an error since duplicate keys are disallowed. However,
// it's possible that a key was previously defined implicitly. In this
// case, it is allowed to be redefined concretely. (See the
// `tests/valid/implicit-and-explicit-after.toml` test in `toml-test`.)
//
// But we have to make sure to stop marking it as implicit. (So that
// another redefinition provokes an error.)
//
// Note that since it has already been defined (as a hash), we don't
// want to overwrite it. So our business is done.
if p.isImplicit(keyContext) {
p.removeImplicit(keyContext)
return
}
// Otherwise, we have a concrete key trying to override a previous
// key, which is *always* wrong.
p.panicf("Key '%s' has already been defined.", keyContext)
}
hash[key] = value
}
// setType sets the type of a particular value at a given key.
// It should be called immediately AFTER setValue.
//
// Note that if `key` is empty, then the type given will be applied to the
// current context (which is either a table or an array of tables).
func (p *parser) setType(key string, typ tomlType) {
keyContext := make(Key, 0, len(p.context)+1)
for _, k := range p.context {
keyContext = append(keyContext, k)
}
if len(key) > 0 { // allow type setting for hashes
keyContext = append(keyContext, key)
}
p.types[keyContext.String()] = typ
}
// addImplicit sets the given Key as having been created implicitly.
func (p *parser) addImplicit(key Key) {
p.implicits[key.String()] = true
}
// removeImplicit stops tagging the given key as having been implicitly
// created.
func (p *parser) removeImplicit(key Key) {
p.implicits[key.String()] = false
}
// isImplicit returns true if the key group pointed to by the key was created
// implicitly.
func (p *parser) isImplicit(key Key) bool {
return p.implicits[key.String()]
}
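The implicit-table bookkeeping above is what allows an explicit `[a]` to follow an `[a.b]` that created `a` implicitly, while still rejecting a second explicit definition; a sketch via `toml.Decode`:

```go
package main

import (
	"fmt"

	"github.com/BurntSushi/toml"
)

func main() {
	var v map[string]interface{}

	ok := `
[a.b]
c = 1

[a]
d = 2
`
	_, err := toml.Decode(ok, &v)
	fmt.Println(err) // <nil>: "a" was only implicit until now

	dup := ok + "\n[a]\ne = 3\n"
	_, err = toml.Decode(dup, &v)
	fmt.Println(err != nil) // true: "a" is now explicit; redefining it is an error
}
```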
// current returns the full key name of the current context.
func (p *parser) current() string {
if len(p.currentKey) == 0 {
return p.context.String()
}
if len(p.context) == 0 {
return p.currentKey
}
return fmt.Sprintf("%s.%s", p.context, p.currentKey)
}
func stripFirstNewline(s string) string {
if len(s) == 0 || s[0] != '\n' {
return s
}
return s[1:]
}
func stripEscapedWhitespace(s string) string {
esc := strings.Split(s, "\\\n")
if len(esc) > 1 {
for i := 1; i < len(esc); i++ {
esc[i] = strings.TrimLeftFunc(esc[i], unicode.IsSpace)
}
}
return strings.Join(esc, "")
}
func (p *parser) replaceEscapes(str string) string {
var replaced []rune
s := []byte(str)
r := 0
for r < len(s) {
if s[r] != '\\' {
c, size := utf8.DecodeRune(s[r:])
r += size
replaced = append(replaced, c)
continue
}
r += 1
if r >= len(s) {
p.bug("Escape sequence at end of string.")
return ""
}
switch s[r] {
default:
p.bug("Expected valid escape code after \\, but got %q.", s[r])
return ""
case 'b':
replaced = append(replaced, rune(0x0008))
r += 1
case 't':
replaced = append(replaced, rune(0x0009))
r += 1
case 'n':
replaced = append(replaced, rune(0x000A))
r += 1
case 'f':
replaced = append(replaced, rune(0x000C))
r += 1
case 'r':
replaced = append(replaced, rune(0x000D))
r += 1
case '"':
replaced = append(replaced, rune(0x0022))
r += 1
case '\\':
replaced = append(replaced, rune(0x005C))
r += 1
case 'u':
// At this point, we know we have a Unicode escape of the form
// `uXXXX` at [r, r+5). (Because the lexer guarantees this
// for us.)
escaped := p.asciiEscapeToUnicode(s[r+1 : r+5])
replaced = append(replaced, escaped)
r += 5
case 'U':
// At this point, we know we have a Unicode escape of the form
// `UXXXXXXXX` at [r, r+9). (Because the lexer guarantees this
// for us.)
escaped := p.asciiEscapeToUnicode(s[r+1 : r+9])
replaced = append(replaced, escaped)
r += 9
}
}
return string(replaced)
}
func (p *parser) asciiEscapeToUnicode(bs []byte) rune {
s := string(bs)
hex, err := strconv.ParseUint(strings.ToLower(s), 16, 32)
if err != nil {
p.bug("Could not parse '%s' as a hexadecimal number, but the "+
"lexer claims it's OK: %s", s, err)
}
if !utf8.ValidRune(rune(hex)) {
p.panicf("Escaped character '\\u%s' is not valid UTF-8.", s)
}
return rune(hex)
}
func isStringType(ty itemType) bool {
return ty == itemString || ty == itemMultilineString ||
ty == itemRawString || ty == itemRawMultilineString
}
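A sketch of the escape handling above (`replaceEscapes` plus `asciiEscapeToUnicode`) as seen through `toml.Decode`; the literal is illustrative:

```go
package main

import (
	"fmt"

	"github.com/BurntSushi/toml"
)

func main() {
	var v struct{ S string }
	// \u00E9 decodes to 'é', \t to a tab.
	if _, err := toml.Decode(`s = "caf\u00E9\tbar"`, &v); err != nil {
		panic(err)
	}
	fmt.Printf("%q\n", v.S) // "café\tbar"
}
```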

vendor/github.com/BurntSushi/toml/type_check.go generated vendored Normal file

@@ -0,0 +1,91 @@
package toml
// tomlType represents any Go type that corresponds to a TOML type.
// While the first draft of the TOML spec has a simplistic type system that
// probably doesn't need this level of sophistication, we seem to be moving
// toward adding real composite types.
type tomlType interface {
typeString() string
}
// typeEqual accepts any two types and returns true if they are equal.
func typeEqual(t1, t2 tomlType) bool {
if t1 == nil || t2 == nil {
return false
}
return t1.typeString() == t2.typeString()
}
func typeIsHash(t tomlType) bool {
return typeEqual(t, tomlHash) || typeEqual(t, tomlArrayHash)
}
type tomlBaseType string
func (btype tomlBaseType) typeString() string {
return string(btype)
}
func (btype tomlBaseType) String() string {
return btype.typeString()
}
var (
tomlInteger tomlBaseType = "Integer"
tomlFloat tomlBaseType = "Float"
tomlDatetime tomlBaseType = "Datetime"
tomlString tomlBaseType = "String"
tomlBool tomlBaseType = "Bool"
tomlArray tomlBaseType = "Array"
tomlHash tomlBaseType = "Hash"
tomlArrayHash tomlBaseType = "ArrayHash"
)
// typeOfPrimitive returns a tomlType of any primitive value in TOML.
// Primitive values are: Integer, Float, Datetime, String and Bool.
//
// Passing a lexer item other than the following will cause a BUG message
// to occur: itemString, itemBool, itemInteger, itemFloat, itemDatetime.
func (p *parser) typeOfPrimitive(lexItem item) tomlType {
switch lexItem.typ {
case itemInteger:
return tomlInteger
case itemFloat:
return tomlFloat
case itemDatetime:
return tomlDatetime
case itemString:
return tomlString
case itemMultilineString:
return tomlString
case itemRawString:
return tomlString
case itemRawMultilineString:
return tomlString
case itemBool:
return tomlBool
}
p.bug("Cannot infer primitive type of lex item '%s'.", lexItem)
panic("unreachable")
}
// typeOfArray returns a tomlType for an array given a list of types of its
// values.
//
// In the current spec, if an array is homogeneous, then its type is always
// "Array". If the array is not homogeneous, an error is generated.
func (p *parser) typeOfArray(types []tomlType) tomlType {
// Empty arrays are cool.
if len(types) == 0 {
return tomlArray
}
theType := types[0]
for _, t := range types[1:] {
if !typeEqual(theType, t) {
p.panicf("Array contains values of type '%s' and '%s', but "+
"arrays must be homogeneous.", theType, t)
}
}
return tomlArray
}
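A sketch of `typeOfArray`'s homogeneity rule through `toml.Decode` (this is TOML v0.4-era behavior; mixed-type arrays are an error):

```go
package main

import (
	"fmt"

	"github.com/BurntSushi/toml"
)

func main() {
	var v map[string]interface{}

	_, err := toml.Decode(`a = [1, 2, 3]`, &v)
	fmt.Println(err) // <nil>: homogeneous Integer array

	_, err = toml.Decode(`a = [1, "two"]`, &v)
	fmt.Println(err != nil) // true: arrays must be homogeneous
}
```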

vendor/github.com/BurntSushi/toml/type_fields.go generated vendored Normal file

@@ -0,0 +1,242 @@
package toml
// Struct field handling is adapted from code in encoding/json:
//
// Copyright 2010 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the Go distribution.
import (
"reflect"
"sort"
"sync"
)
// A field represents a single field found in a struct.
type field struct {
name string // the name of the field (`toml` tag included)
tag bool // whether field has a `toml` tag
index []int // represents the depth of an anonymous field
typ reflect.Type // the type of the field
}
// byName sorts field by name, breaking ties with depth,
// then breaking ties with "name came from toml tag", then
// breaking ties with index sequence.
type byName []field
func (x byName) Len() int { return len(x) }
func (x byName) Swap(i, j int) { x[i], x[j] = x[j], x[i] }
func (x byName) Less(i, j int) bool {
if x[i].name != x[j].name {
return x[i].name < x[j].name
}
if len(x[i].index) != len(x[j].index) {
return len(x[i].index) < len(x[j].index)
}
if x[i].tag != x[j].tag {
return x[i].tag
}
return byIndex(x).Less(i, j)
}
// byIndex sorts field by index sequence.
type byIndex []field
func (x byIndex) Len() int { return len(x) }
func (x byIndex) Swap(i, j int) { x[i], x[j] = x[j], x[i] }
func (x byIndex) Less(i, j int) bool {
for k, xik := range x[i].index {
if k >= len(x[j].index) {
return false
}
if xik != x[j].index[k] {
return xik < x[j].index[k]
}
}
return len(x[i].index) < len(x[j].index)
}
// typeFields returns a list of fields that TOML should recognize for the given
// type. The algorithm is breadth-first search over the set of structs to
// include - the top struct and then any reachable anonymous structs.
func typeFields(t reflect.Type) []field {
// Anonymous fields to explore at the current level and the next.
current := []field{}
next := []field{{typ: t}}
// Count of queued names for current level and the next.
count := map[reflect.Type]int{}
nextCount := map[reflect.Type]int{}
// Types already visited at an earlier level.
visited := map[reflect.Type]bool{}
// Fields found.
var fields []field
for len(next) > 0 {
current, next = next, current[:0]
count, nextCount = nextCount, map[reflect.Type]int{}
for _, f := range current {
if visited[f.typ] {
continue
}
visited[f.typ] = true
// Scan f.typ for fields to include.
for i := 0; i < f.typ.NumField(); i++ {
sf := f.typ.Field(i)
if sf.PkgPath != "" && !sf.Anonymous { // unexported
continue
}
opts := getOptions(sf.Tag)
if opts.skip {
continue
}
index := make([]int, len(f.index)+1)
copy(index, f.index)
index[len(f.index)] = i
ft := sf.Type
if ft.Name() == "" && ft.Kind() == reflect.Ptr {
// Follow pointer.
ft = ft.Elem()
}
// Record found field and index sequence.
if opts.name != "" || !sf.Anonymous || ft.Kind() != reflect.Struct {
tagged := opts.name != ""
name := opts.name
if name == "" {
name = sf.Name
}
fields = append(fields, field{name, tagged, index, ft})
if count[f.typ] > 1 {
// If there were multiple instances, add a second,
// so that the annihilation code will see a duplicate.
// It only cares about the distinction between 1 or 2,
// so don't bother generating any more copies.
fields = append(fields, fields[len(fields)-1])
}
continue
}
// Record new anonymous struct to explore in next round.
nextCount[ft]++
if nextCount[ft] == 1 {
f := field{name: ft.Name(), index: index, typ: ft}
next = append(next, f)
}
}
}
}
sort.Sort(byName(fields))
// Delete all fields that are hidden by the Go rules for embedded fields,
// except that fields with TOML tags are promoted.
// The fields are sorted in primary order of name, secondary order
// of field index length. Loop over names; for each name, delete
// hidden fields by choosing the one dominant field that survives.
out := fields[:0]
for advance, i := 0, 0; i < len(fields); i += advance {
// One iteration per name.
// Find the sequence of fields with the name of this first field.
fi := fields[i]
name := fi.name
for advance = 1; i+advance < len(fields); advance++ {
fj := fields[i+advance]
if fj.name != name {
break
}
}
if advance == 1 { // Only one field with this name
out = append(out, fi)
continue
}
dominant, ok := dominantField(fields[i : i+advance])
if ok {
out = append(out, dominant)
}
}
fields = out
sort.Sort(byIndex(fields))
return fields
}
// dominantField looks through the fields, all of which are known to
// have the same name, to find the single field that dominates the
// others using Go's embedding rules, modified by the presence of
// TOML tags. If there are multiple top-level fields, the boolean
// will be false: This condition is an error in Go and we skip all
// the fields.
func dominantField(fields []field) (field, bool) {
// The fields are sorted in increasing index-length order. The winner
// must therefore be one with the shortest index length. Drop all
// longer entries, which is easy: just truncate the slice.
length := len(fields[0].index)
tagged := -1 // Index of first tagged field.
for i, f := range fields {
if len(f.index) > length {
fields = fields[:i]
break
}
if f.tag {
if tagged >= 0 {
// Multiple tagged fields at the same level: conflict.
// Return no field.
return field{}, false
}
tagged = i
}
}
if tagged >= 0 {
return fields[tagged], true
}
// All remaining fields have the same length. If there's more than one,
// we have a conflict (two fields named "X" at the same level) and we
// return no field.
if len(fields) > 1 {
return field{}, false
}
return fields[0], true
}
var fieldCache struct {
sync.RWMutex
m map[reflect.Type][]field
}
// cachedTypeFields is like typeFields but uses a cache to avoid repeated work.
func cachedTypeFields(t reflect.Type) []field {
fieldCache.RLock()
f := fieldCache.m[t]
fieldCache.RUnlock()
if f != nil {
return f
}
// Compute fields without lock.
// Might duplicate effort but won't hold other computations back.
f = typeFields(t)
if f == nil {
f = []field{}
}
fieldCache.Lock()
if fieldCache.m == nil {
fieldCache.m = map[reflect.Type][]field{}
}
fieldCache.m[t] = f
fieldCache.Unlock()
return f
}
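To illustrate `typeFields` and `dominantField`, a sketch of an embedded-field collision and how it resolves (struct shapes are illustrative):

```go
package main

import (
	"fmt"

	"github.com/BurntSushi/toml"
)

type Inner struct {
	Name string
	Kind string
}

type Outer struct {
	Inner
	Name string `toml:"name"` // depth 0 beats Inner.Name at depth 1
}

func main() {
	var o Outer
	blob := "name = \"outer\"\nkind = \"embedded\"\n"
	if _, err := toml.Decode(blob, &o); err != nil {
		panic(err)
	}
	fmt.Println(o.Name, o.Kind) // outer embedded: Kind is promoted from Inner
}
```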


@@ -1 +0,0 @@
logrus


@@ -1,9 +0,0 @@
language: go
go:
- 1.3
- 1.4
- 1.5
- tip
install:
- go get -t ./...
script: GOMAXPROCS=4 GORACE="halt_on_error=1" go test -race -v ./...


@@ -1,66 +0,0 @@
# 0.10.0
* feature: Add a test hook (#180)
* feature: `ParseLevel` is now case-insensitive (#326)
* feature: `FieldLogger` interface that generalizes `Logger` and `Entry` (#308)
* performance: avoid re-allocations on `WithFields` (#335)
# 0.9.0
* logrus/text_formatter: don't emit empty msg
* logrus/hooks/airbrake: move out of main repository
* logrus/hooks/sentry: move out of main repository
* logrus/hooks/papertrail: move out of main repository
* logrus/hooks/bugsnag: move out of main repository
* logrus/core: run tests with `-race`
* logrus/core: detect TTY based on `stderr`
* logrus/core: support `WithError` on logger
* logrus/core: Solaris support
# 0.8.7
* logrus/core: fix possible race (#216)
* logrus/doc: small typo fixes and doc improvements
# 0.8.6
* hooks/raven: allow passing an initialized client
# 0.8.5
* logrus/core: revert #208
# 0.8.4
* formatter/text: fix data race (#218)
# 0.8.3
* logrus/core: fix entry log level (#208)
* logrus/core: improve performance of text formatter by 40%
* logrus/core: expose `LevelHooks` type
* logrus/core: add support for DragonflyBSD and NetBSD
* formatter/text: print structs more verbosely
# 0.8.2
* logrus: fix more Fatal family functions
# 0.8.1
* logrus: fix not exiting on `Fatalf` and `Fatalln`
# 0.8.0
* logrus: defaults to stderr instead of stdout
* hooks/sentry: add special field for `*http.Request`
* formatter/text: ignore Windows for colors
# 0.7.3
* formatter/\*: allow configuration of timestamp layout
# 0.7.2
* formatter/text: Add configuration option for time format (#158)

vendor/github.com/containers/image/README.md generated vendored Normal file

@@ -0,0 +1,80 @@
[![GoDoc](https://godoc.org/github.com/containers/image?status.svg)](https://godoc.org/github.com/containers/image) [![Build Status](https://travis-ci.org/containers/image.svg?branch=master)](https://travis-ci.org/containers/image)
=
`image` is a set of Go libraries aimed at working in various ways with
container images and container image registries.
The containers/image library allows applications to pull and push images from
container image registries, like the upstream docker registry. It also
implements "simple image signing".
The containers/image library also allows you to inspect a repository on a
container registry without pulling down the image: it fetches the repository's
manifest and can show you `docker inspect`-like JSON output about a whole
repository or a tag. In contrast to `docker inspect`, it gathers this
information without requiring you to run `docker pull`.
The containers/image library also allows you to translate from one image format
to another, for example docker container images to OCI images. It also allows
you to copy container images between various registries, possibly converting
them as necessary, and to sign and verify images.
## Command-line usage
The containers/image project is only a library with no user interface;
you can either incorporate it into your Go programs, or use the `skopeo` tool:
The [skopeo](https://github.com/projectatomic/skopeo) tool uses the
containers/image library and takes advantage of many of its features,
e.g. `skopeo copy` exposes the `containers/image/copy.Image` functionality.
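As a rough orientation, a minimal, untested sketch of calling `copy.Image` directly, matching the signature in the vendored `copy/copy.go` below; the policy and image references are placeholders:

```go
package main

import (
	"os"

	"github.com/containers/image/copy"
	"github.com/containers/image/signature"
	"github.com/containers/image/transports/alltransports"
)

func main() {
	// Accept any image; real deployments should use a stricter policy.
	policy := &signature.Policy{
		Default: signature.PolicyRequirements{signature.NewPRInsecureAcceptAnything()},
	}
	policyCtx, err := signature.NewPolicyContext(policy)
	if err != nil {
		panic(err)
	}
	defer policyCtx.Destroy()

	srcRef, err := alltransports.ParseImageName("docker://docker.io/library/busybox:latest")
	if err != nil {
		panic(err)
	}
	destRef, err := alltransports.ParseImageName("dir:/tmp/busybox") // placeholder destination
	if err != nil {
		panic(err)
	}

	// copy.Image(policyContext, destRef, srcRef, options), per the vendored signature.
	if err := copy.Image(policyCtx, destRef, srcRef, &copy.Options{ReportWriter: os.Stdout}); err != nil {
		panic(err)
	}
}
```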
## Dependencies
This library does not ship a committed version of its dependencies in a `vendor`
subdirectory. This is so you can make well-informed decisions about which
libraries you should use with this package in your own projects, and because
types defined in the `vendor` directory would be impossible to use from your projects.
The dependency versions this project is tested against are listed
[in vendor.conf](https://github.com/containers/image/blob/master/vendor.conf).
## Building
If you want to see what the library can do, or an example of how it is called,
consider starting with the [skopeo](https://github.com/projectatomic/skopeo) tool
instead.
To integrate this library into your project, put it into `$GOPATH` or use
your preferred vendoring tool to include a copy in your project.
Ensure that the dependencies documented [in vendor.conf](https://github.com/containers/image/blob/master/vendor.conf)
are also available
(using those exact versions or different versions of your choosing).
This library, by default, also depends on the GPGME and libostree C libraries. Either install them:
```sh
Fedora$ dnf install gpgme-devel libassuan-devel libostree-devel
macOS$ brew install gpgme
```
or use the build tags described below to avoid the dependencies (e.g. using `go build -tags …`)
### Supported build tags
- `containers_image_openpgp`: Use a Golang-only OpenPGP implementation for signature verification instead of the default cgo/gpgme-based implementation;
the primary downside is that creating new signatures with the Golang-only implementation is not supported.
- `containers_image_ostree_stub`: Instead of importing `ostree:` transport in `github.com/containers/image/transports/alltransports`, use a stub which reports that the transport is not supported. This allows building the library without requiring the `libostree` development libraries.
(Note that explicitly importing `github.com/containers/image/ostree` will still depend on the `libostree` library, this build tag only affects generic users of …`/alltransports`.)
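For example, a consumer of the library might build with both tags to avoid the C dependencies (target path is illustrative):

```sh
go build -tags "containers_image_openpgp containers_image_ostree_stub" ./...
```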
## Contributing
When developing this library, please use `make` (or `make … BUILDTAGS=…`) to take advantage of the tests and validation.
## License
ASL 2.0
## Contact
- Mailing list: [containers-dev](https://groups.google.com/forum/?hl=en#!forum/containers-dev)
- IRC: #[container-projects](irc://irc.freenode.net:6667/#container-projects) on freenode.net


@@ -3,60 +3,63 @@ package copy
import (
"bytes"
"compress/gzip"
"crypto/sha256"
"crypto/subtle"
"encoding/hex"
"errors"
"fmt"
"hash"
"io"
"io/ioutil"
"reflect"
"runtime"
"strings"
"time"
pb "gopkg.in/cheggaaa/pb.v1"
"github.com/Sirupsen/logrus"
"github.com/containers/image/image"
"github.com/containers/image/pkg/compression"
"github.com/containers/image/signature"
"github.com/containers/image/transports"
"github.com/containers/image/types"
"github.com/opencontainers/go-digest"
"github.com/pkg/errors"
)
// supportedDigests lists the supported blob digest types.
var supportedDigests = map[string]func() hash.Hash{
"sha256": sha256.New,
}
type digestingReader struct {
source io.Reader
digest hash.Hash
expectedDigest []byte
digester digest.Digester
expectedDigest digest.Digest
validationFailed bool
}
// imageCopier allows us to keep track of diffID values for blobs, and other
// data, that we're copying between images, and cache other information that
// might allow us to take some shortcuts
type imageCopier struct {
copiedBlobs map[digest.Digest]digest.Digest
cachedDiffIDs map[digest.Digest]digest.Digest
manifestUpdates *types.ManifestUpdateOptions
dest types.ImageDestination
src types.Image
rawSource types.ImageSource
diffIDsAreNeeded bool
canModifyManifest bool
reportWriter io.Writer
progressInterval time.Duration
progress chan types.ProgressProperties
}
// newDigestingReader returns an io.Reader implementation with contents of source, which will eventually return a non-EOF error
// and set validationFailed to true if the source stream does not match expectedDigestString.
func newDigestingReader(source io.Reader, expectedDigestString string) (*digestingReader, error) {
fields := strings.SplitN(expectedDigestString, ":", 2)
if len(fields) != 2 {
return nil, fmt.Errorf("Invalid digest specification %s", expectedDigestString)
// and set validationFailed to true if the source stream does not match expectedDigest.
func newDigestingReader(source io.Reader, expectedDigest digest.Digest) (*digestingReader, error) {
if err := expectedDigest.Validate(); err != nil {
return nil, errors.Errorf("Invalid digest specification %s", expectedDigest)
}
fn, ok := supportedDigests[fields[0]]
if !ok {
return nil, fmt.Errorf("Invalid digest specification %s: unknown digest type %s", expectedDigestString, fields[0])
}
digest := fn()
expectedDigest, err := hex.DecodeString(fields[1])
if err != nil {
return nil, fmt.Errorf("Invalid digest value %s: %v", expectedDigestString, err)
}
if len(expectedDigest) != digest.Size() {
return nil, fmt.Errorf("Invalid digest specification %s: length %d does not match %d", expectedDigestString, len(expectedDigest), digest.Size())
digestAlgorithm := expectedDigest.Algorithm()
if !digestAlgorithm.Available() {
return nil, errors.Errorf("Invalid digest specification %s: unsupported digest algorithm %s", expectedDigest, digestAlgorithm)
}
return &digestingReader{
source: source,
digest: digest,
digester: digestAlgorithm.Digester(),
expectedDigest: expectedDigest,
validationFailed: false,
}, nil
@@ -65,18 +68,18 @@ func newDigestingReader(source io.Reader, expectedDigestString string) (*digesti
func (d *digestingReader) Read(p []byte) (int, error) {
n, err := d.source.Read(p)
if n > 0 {
if n2, err := d.digest.Write(p[:n]); n2 != n || err != nil {
if n2, err := d.digester.Hash().Write(p[:n]); n2 != n || err != nil {
// Coverage: This should not happen, the hash.Hash interface requires
// d.digest.Write to never return an error, and the io.Writer interface
// requires n2 == len(input) if no error is returned.
return 0, fmt.Errorf("Error updating digest during verification: %d vs. %d, %v", n2, n, err)
return 0, errors.Wrapf(err, "Error updating digest during verification: %d vs. %d", n2, n)
}
}
if err == io.EOF {
actualDigest := d.digest.Sum(nil)
if subtle.ConstantTimeCompare(actualDigest, d.expectedDigest) != 1 {
actualDigest := d.digester.Digest()
if actualDigest != d.expectedDigest {
d.validationFailed = true
return 0, fmt.Errorf("Digest did not match, expected %s, got %s", hex.EncodeToString(d.expectedDigest), hex.EncodeToString(actualDigest))
return 0, errors.Errorf("Digest did not match, expected %s, got %s", d.expectedDigest, actualDigest)
}
}
return n, err
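The rewritten `digestingReader` delegates hashing to go-digest's `Digester`; for reference, the same verify-while-reading pattern is available from go-digest's public `Verifier` API (a sketch; the payload is illustrative):

```go
package main

import (
	"fmt"
	"io"
	"strings"

	"github.com/opencontainers/go-digest"
)

func main() {
	payload := "hello, layer"
	expected := digest.FromString(payload) // a "sha256:…" digest

	// Verifier is an io.Writer that checks bytes against the expected digest.
	verifier := expected.Verifier()
	if _, err := io.Copy(verifier, strings.NewReader(payload)); err != nil {
		panic(err)
	}
	fmt.Println(verifier.Verified()) // true
}
```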
@@ -87,145 +90,271 @@ type Options struct {
RemoveSignatures bool // Remove any pre-existing signatures. SignBy will still add a new signature.
SignBy string // If non-empty, asks for a signature to be added during the copy, and specifies a key ID, as accepted by signature.NewGPGSigningMechanism().SignDockerManifest(),
ReportWriter io.Writer
SourceCtx *types.SystemContext
DestinationCtx *types.SystemContext
ProgressInterval time.Duration // time to wait between reports to signal the progress channel
Progress chan types.ProgressProperties // Reported to when ProgressInterval has arrived for a single artifact+offset.
}
// Image copies image from srcRef to destRef, using policyContext to validate source image admissibility.
func Image(ctx *types.SystemContext, policyContext *signature.PolicyContext, destRef, srcRef types.ImageReference, options *Options) error {
reportWriter := options.ReportWriter
if reportWriter == nil {
reportWriter = ioutil.Discard
// Image copies image from srcRef to destRef, using policyContext to validate
// source image admissibility.
func Image(policyContext *signature.PolicyContext, destRef, srcRef types.ImageReference, options *Options) (retErr error) {
// NOTE this function uses an output parameter for the error return value.
// Setting this and returning is the ideal way to return an error.
//
// the defers in this routine will wrap the error return with its own errors
// which can be valuable context in the middle of a multi-streamed copy.
if options == nil {
options = &Options{}
}
reportWriter := ioutil.Discard
if options.ReportWriter != nil {
reportWriter = options.ReportWriter
}
writeReport := func(f string, a ...interface{}) {
fmt.Fprintf(reportWriter, f, a...)
}
dest, err := destRef.NewImageDestination(ctx)
if err != nil {
return fmt.Errorf("Error initializing destination %s: %v", transports.ImageName(destRef), err)
}
defer dest.Close()
rawSource, err := srcRef.NewImageSource(ctx, dest.SupportedManifestMIMETypes())
dest, err := destRef.NewImageDestination(options.DestinationCtx)
if err != nil {
return fmt.Errorf("Error initializing source %s: %v", transports.ImageName(srcRef), err)
return errors.Wrapf(err, "Error initializing destination %s", transports.ImageName(destRef))
}
src := image.FromSource(rawSource)
defer src.Close()
defer func() {
if err := dest.Close(); err != nil {
retErr = errors.Wrapf(retErr, " (dest: %v)", err)
}
}()
destSupportedManifestMIMETypes := dest.SupportedManifestMIMETypes()
rawSource, err := srcRef.NewImageSource(options.SourceCtx, destSupportedManifestMIMETypes)
if err != nil {
return errors.Wrapf(err, "Error initializing source %s", transports.ImageName(srcRef))
}
unparsedImage := image.UnparsedFromSource(rawSource)
defer func() {
if unparsedImage != nil {
if err := unparsedImage.Close(); err != nil {
retErr = errors.Wrapf(retErr, " (unparsed: %v)", err)
}
}
}()
// Please keep this policy check BEFORE reading any other information about the image.
if allowed, err := policyContext.IsRunningImageAllowed(src); !allowed || err != nil { // Be paranoid and fail if either return value indicates so.
return fmt.Errorf("Source image rejected: %v", err)
if allowed, err := policyContext.IsRunningImageAllowed(unparsedImage); !allowed || err != nil { // Be paranoid and fail if either return value indicates so.
return errors.Wrap(err, "Source image rejected")
}
src, err := image.FromUnparsedImage(unparsedImage)
if err != nil {
return errors.Wrapf(err, "Error initializing image from source %s", transports.ImageName(srcRef))
}
unparsedImage = nil
defer func() {
if err := src.Close(); err != nil {
retErr = errors.Wrapf(retErr, " (source: %v)", err)
}
}()
if err := checkImageDestinationForCurrentRuntimeOS(src, dest); err != nil {
return err
}
writeReport("Getting image source manifest\n")
manifest, _, err := src.Manifest()
if err != nil {
return fmt.Errorf("Error reading manifest: %v", err)
if src.IsMultiImage() {
return errors.Errorf("can not copy %s: manifest contains multiple images", transports.ImageName(srcRef))
}
var sigs [][]byte
if options != nil && options.RemoveSignatures {
if options.RemoveSignatures {
sigs = [][]byte{}
} else {
writeReport("Getting image source signatures\n")
s, err := src.Signatures()
if err != nil {
return fmt.Errorf("Error reading signatures: %v", err)
return errors.Wrap(err, "Error reading signatures")
}
sigs = s
}
if len(sigs) != 0 {
writeReport("Checking if image destination supports signatures\n")
if err := dest.SupportsSignatures(); err != nil {
return fmt.Errorf("Can not copy signatures: %v", err)
return errors.Wrap(err, "Can not copy signatures")
}
}
canModifyManifest := len(sigs) == 0
writeReport("Getting image source configuration\n")
srcConfigInfo, err := src.ConfigInfo()
if err != nil {
return fmt.Errorf("Error parsing manifest: %v", err)
canModifyManifest := len(sigs) == 0
manifestUpdates := types.ManifestUpdateOptions{}
manifestUpdates.InformationOnly.Destination = dest
if err := updateEmbeddedDockerReference(&manifestUpdates, dest, src, canModifyManifest); err != nil {
return err
}
if srcConfigInfo.Digest != "" {
writeReport("Uploading blob %s\n", srcConfigInfo.Digest)
destConfigInfo, err := copyBlob(dest, rawSource, srcConfigInfo, false, options.ReportWriter)
if err != nil {
// We compute preferredManifestMIMEType only to show it in error messages.
// Without having to add this context in an error message, we would be happy enough to know only that no conversion is needed.
preferredManifestMIMEType, otherManifestMIMETypeCandidates, err := determineManifestConversion(&manifestUpdates, src, destSupportedManifestMIMETypes, canModifyManifest)
if err != nil {
return err
}
// If src.UpdatedImageNeedsLayerDiffIDs(manifestUpdates) will be true, it needs to be true by the time we get here.
ic := imageCopier{
copiedBlobs: make(map[digest.Digest]digest.Digest),
cachedDiffIDs: make(map[digest.Digest]digest.Digest),
manifestUpdates: &manifestUpdates,
dest: dest,
src: src,
rawSource: rawSource,
diffIDsAreNeeded: src.UpdatedImageNeedsLayerDiffIDs(manifestUpdates),
canModifyManifest: canModifyManifest,
reportWriter: reportWriter,
progressInterval: options.ProgressInterval,
progress: options.Progress,
}
if err := ic.copyLayers(); err != nil {
return err
}
// With docker/distribution registries we do not know whether the registry accepts schema2 or schema1 only;
// and at least with the OpenShift registry "acceptschema2" option, there is no way to detect the support
// without actually trying to upload something and getting a types.ManifestTypeRejectedError.
// So, try the preferred manifest MIME type. If the process succeeds, fine…
manifest, err := ic.copyUpdatedConfigAndManifest()
if err != nil {
logrus.Debugf("Writing manifest using preferred type %s failed: %v", preferredManifestMIMEType, err)
// … if it fails, _and_ the failure is because the manifest is rejected, we may have other options.
if _, isManifestRejected := errors.Cause(err).(types.ManifestTypeRejectedError); !isManifestRejected || len(otherManifestMIMETypeCandidates) == 0 {
// We don't have other options.
// In principle the code below would handle this as well, but the resulting error message is fairly ugly.
// Don't bother the user with MIME types if we have no choice.
return err
}
if destConfigInfo.Digest != srcConfigInfo.Digest {
return fmt.Errorf("Internal error: copying uncompressed config blob %s changed digest to %s", srcConfigInfo.Digest, destConfigInfo.Digest)
}
}
srcLayerInfos, err := src.LayerInfos()
if err != nil {
return fmt.Errorf("Error parsing manifest: %v", err)
}
destLayerInfos := []types.BlobInfo{}
copiedLayers := map[string]types.BlobInfo{}
for _, srcLayer := range srcLayerInfos {
destLayer, ok := copiedLayers[srcLayer.Digest]
if !ok {
writeReport("Uploading blob %s\n", srcLayer.Digest)
destLayer, err = copyBlob(dest, rawSource, srcLayer, canModifyManifest, options.ReportWriter)
if err != nil {
return err
}
copiedLayers[srcLayer.Digest] = destLayer
}
destLayerInfos = append(destLayerInfos, destLayer)
}
manifestUpdates := types.ManifestUpdateOptions{}
if layerDigestsDiffer(srcLayerInfos, destLayerInfos) {
manifestUpdates.LayerInfos = destLayerInfos
}
if !reflect.DeepEqual(manifestUpdates, types.ManifestUpdateOptions{}) {
// If the original MIME type is acceptable, determineManifestConversion always uses it as preferredManifestMIMEType.
// So if we are here, we will definitely be trying to convert the manifest.
// With !canModifyManifest, that would just be a string of repeated failures for the same reason,
// so let's bail out early and with a better error message.
if !canModifyManifest {
return fmt.Errorf("Internal error: copy needs an updated manifest but that was known to be forbidden")
return errors.Wrap(err, "Writing manifest failed (and converting it is not possible)")
}
manifest, err = src.UpdatedManifest(manifestUpdates)
if err != nil {
return fmt.Errorf("Error creating an updated manifest: %v", err)
// errs is a list of errors when trying various manifest types. Also serves as an "upload succeeded" flag when set to nil.
errs := []string{fmt.Sprintf("%s(%v)", preferredManifestMIMEType, err)}
for _, manifestMIMEType := range otherManifestMIMETypeCandidates {
logrus.Debugf("Trying to use manifest type %s…", manifestMIMEType)
manifestUpdates.ManifestMIMEType = manifestMIMEType
attemptedManifest, err := ic.copyUpdatedConfigAndManifest()
if err != nil {
logrus.Debugf("Upload of manifest type %s failed: %v", manifestMIMEType, err)
errs = append(errs, fmt.Sprintf("%s(%v)", manifestMIMEType, err))
continue
}
// We have successfully uploaded a manifest.
manifest = attemptedManifest
errs = nil // Mark this as a success so that we don't abort below.
break
}
if errs != nil {
return fmt.Errorf("Uploading manifest failed, attempted the following formats: %s", strings.Join(errs, ", "))
}
}
if options != nil && options.SignBy != "" {
mech, err := signature.NewGPGSigningMechanism()
if options.SignBy != "" {
newSig, err := createSignature(dest, manifest, options.SignBy, reportWriter)
if err != nil {
return fmt.Errorf("Error initializing GPG: %v", err)
}
dockerReference := dest.Reference().DockerReference()
if dockerReference == nil {
return fmt.Errorf("Cannot determine canonical Docker reference for destination %s", transports.ImageName(dest.Reference()))
}
writeReport("Signing manifest\n")
newSig, err := signature.SignDockerManifest(manifest, dockerReference.String(), mech, options.SignBy)
if err != nil {
return fmt.Errorf("Error creating signature: %v", err)
return err
}
sigs = append(sigs, newSig)
}
writeReport("Uploading manifest to image destination\n")
if err := dest.PutManifest(manifest); err != nil {
return fmt.Errorf("Error writing manifest: %v", err)
}
writeReport("Storing signatures\n")
if err := dest.PutSignatures(sigs); err != nil {
return fmt.Errorf("Error writing signatures: %v", err)
return errors.Wrap(err, "Error writing signatures")
}
if err := dest.Commit(); err != nil {
return fmt.Errorf("Error committing the finished image: %v", err)
return errors.Wrap(err, "Error committing the finished image")
}
return nil
}
func checkImageDestinationForCurrentRuntimeOS(src types.Image, dest types.ImageDestination) error {
if dest.MustMatchRuntimeOS() {
c, err := src.OCIConfig()
if err != nil {
return errors.Wrapf(err, "Error parsing image configuration")
}
osErr := fmt.Errorf("image operating system %q cannot be used on %q", c.OS, runtime.GOOS)
if runtime.GOOS == "windows" && c.OS == "linux" {
return osErr
} else if runtime.GOOS != "windows" && c.OS == "windows" {
return osErr
}
}
return nil
}
// updateEmbeddedDockerReference handles the Docker reference embedded in Docker schema1 manifests.
func updateEmbeddedDockerReference(manifestUpdates *types.ManifestUpdateOptions, dest types.ImageDestination, src types.Image, canModifyManifest bool) error {
destRef := dest.Reference().DockerReference()
if destRef == nil {
return nil // Destination does not care about Docker references
}
if !src.EmbeddedDockerReferenceConflicts(destRef) {
return nil // No reference embedded in the manifest, or it matches destRef already.
}
if !canModifyManifest {
return errors.Errorf("Copying a schema1 image with an embedded Docker reference to %s (Docker reference %s) would invalidate existing signatures. Explicitly enable signature removal to proceed anyway",
transports.ImageName(dest.Reference()), destRef.String())
}
manifestUpdates.EmbeddedDockerReference = destRef
return nil
}
// copyLayers copies layers from src/rawSource to dest, using and updating ic.manifestUpdates if necessary and ic.canModifyManifest.
func (ic *imageCopier) copyLayers() error {
srcInfos := ic.src.LayerInfos()
destInfos := []types.BlobInfo{}
diffIDs := []digest.Digest{}
for _, srcLayer := range srcInfos {
var (
destInfo types.BlobInfo
diffID digest.Digest
err error
)
if ic.dest.AcceptsForeignLayerURLs() && len(srcLayer.URLs) != 0 {
// DiffIDs are, currently, needed only when converting from schema1.
// In which case src.LayerInfos will not have URLs because schema1
// does not support them.
if ic.diffIDsAreNeeded {
return errors.New("getting DiffID for foreign layers is unimplemented")
}
destInfo = srcLayer
fmt.Fprintf(ic.reportWriter, "Skipping foreign layer %q copy to %s\n", destInfo.Digest, ic.dest.Reference().Transport().Name())
} else {
destInfo, diffID, err = ic.copyLayer(srcLayer)
if err != nil {
return err
}
}
destInfos = append(destInfos, destInfo)
diffIDs = append(diffIDs, diffID)
}
ic.manifestUpdates.InformationOnly.LayerInfos = destInfos
if ic.diffIDsAreNeeded {
ic.manifestUpdates.InformationOnly.LayerDiffIDs = diffIDs
}
if layerDigestsDiffer(srcInfos, destInfos) {
ic.manifestUpdates.LayerInfos = destInfos
}
return nil
}
// layerDigestsDiffer returns true iff the digests in a and b differ (ignoring sizes and possible other fields)
func layerDigestsDiffer(a, b []types.BlobInfo) bool {
if len(a) != len(b) {
@@ -239,15 +368,193 @@ func layerDigestsDiffer(a, b []types.BlobInfo) bool {
return false
}
// copyBlob copies a blob with srcInfo (with known Digest and possibly known Size) in src to dest, perhaps compressing it if canCompress,
// and returns a complete blobInfo of the copied blob.
func copyBlob(dest types.ImageDestination, src types.ImageSource, srcInfo types.BlobInfo, canCompress bool, reportWriter io.Writer) (types.BlobInfo, error) {
srcStream, srcBlobSize, err := src.GetBlob(srcInfo.Digest) // We currently completely ignore srcInfo.Size throughout.
// copyUpdatedConfigAndManifest updates the image per ic.manifestUpdates, if necessary,
// stores the resulting config and manifest to the destination, and returns the stored manifest.
func (ic *imageCopier) copyUpdatedConfigAndManifest() ([]byte, error) {
pendingImage := ic.src
if !reflect.DeepEqual(*ic.manifestUpdates, types.ManifestUpdateOptions{InformationOnly: ic.manifestUpdates.InformationOnly}) {
if !ic.canModifyManifest {
return nil, errors.Errorf("Internal error: copy needs an updated manifest but that was known to be forbidden")
}
if !ic.diffIDsAreNeeded && ic.src.UpdatedImageNeedsLayerDiffIDs(*ic.manifestUpdates) {
// We have set ic.diffIDsAreNeeded based on the preferred MIME type returned by determineManifestConversion.
// So, this can only happen if we are trying to upload using one of the other MIME type candidates.
// Because UpdatedImageNeedsLayerDiffIDs is true only when converting from s1 to s2, this case should only arise
// when ic.dest.SupportedManifestMIMETypes() includes both s1 and s2, the upload using s1 failed, and we are now trying s2.
// Supposedly s2-only registries do not exist or are extremely rare, so failing with this error message is good enough for now.
// If handling such registries turns out to be necessary, we could compute ic.diffIDsAreNeeded based on the full list of manifest MIME type candidates.
return nil, errors.Errorf("Can not convert image to %s, preparing DiffIDs for this case is not supported", ic.manifestUpdates.ManifestMIMEType)
}
pi, err := ic.src.UpdatedImage(*ic.manifestUpdates)
if err != nil {
return nil, errors.Wrap(err, "Error creating an updated image manifest")
}
pendingImage = pi
}
manifest, _, err := pendingImage.Manifest()
if err != nil {
return types.BlobInfo{}, fmt.Errorf("Error reading blob %s: %v", srcInfo.Digest, err)
return nil, errors.Wrap(err, "Error reading manifest")
}
if err := ic.copyConfig(pendingImage); err != nil {
return nil, err
}
fmt.Fprintf(ic.reportWriter, "Writing manifest to image destination\n")
if err := ic.dest.PutManifest(manifest); err != nil {
return nil, errors.Wrap(err, "Error writing manifest")
}
return manifest, nil
}
// copyConfig copies config.json, if any, from src to dest.
func (ic *imageCopier) copyConfig(src types.Image) error {
srcInfo := src.ConfigInfo()
if srcInfo.Digest != "" {
fmt.Fprintf(ic.reportWriter, "Copying config %s\n", srcInfo.Digest)
configBlob, err := src.ConfigBlob()
if err != nil {
return errors.Wrapf(err, "Error reading config blob %s", srcInfo.Digest)
}
destInfo, err := ic.copyBlobFromStream(bytes.NewReader(configBlob), srcInfo, nil, false)
if err != nil {
return err
}
if destInfo.Digest != srcInfo.Digest {
return errors.Errorf("Internal error: copying uncompressed config blob %s changed digest to %s", srcInfo.Digest, destInfo.Digest)
}
}
return nil
}
// diffIDResult contains both a digest value and an error from diffIDComputationGoroutine.
// We could also send the error through the pipeReader, but this more cleanly separates the copying of the layer and the DiffID computation.
type diffIDResult struct {
digest digest.Digest
err error
}
// copyLayer copies a layer with srcInfo (with known Digest and possibly known Size) in src to dest, perhaps compressing it if canCompress,
// and returns a complete blobInfo of the copied layer, and a value for LayerDiffIDs if diffIDIsNeeded
func (ic *imageCopier) copyLayer(srcInfo types.BlobInfo) (types.BlobInfo, digest.Digest, error) {
// Check if we already have a blob with this digest
haveBlob, extantBlobSize, err := ic.dest.HasBlob(srcInfo)
if err != nil {
return types.BlobInfo{}, "", errors.Wrapf(err, "Error checking for blob %s at destination", srcInfo.Digest)
}
// If we already have a cached diffID for this blob, we don't need to compute it
diffIDIsNeeded := ic.diffIDsAreNeeded && (ic.cachedDiffIDs[srcInfo.Digest] == "")
// If we already have the blob, and we don't need to recompute the diffID, then we might be able to avoid reading it again
if haveBlob && !diffIDIsNeeded {
// Check the blob sizes match, if we were given a size this time
if srcInfo.Size != -1 && srcInfo.Size != extantBlobSize {
return types.BlobInfo{}, "", errors.Errorf("Error: blob %s is already present, but with size %d instead of %d", srcInfo.Digest, extantBlobSize, srcInfo.Size)
}
srcInfo.Size = extantBlobSize
// Tell the image destination that this blob's delta is being applied again. For some image destinations, this can be faster than using GetBlob/PutBlob
blobinfo, err := ic.dest.ReapplyBlob(srcInfo)
if err != nil {
return types.BlobInfo{}, "", errors.Wrapf(err, "Error reapplying blob %s at destination", srcInfo.Digest)
}
fmt.Fprintf(ic.reportWriter, "Skipping fetch of repeat blob %s\n", srcInfo.Digest)
return blobinfo, ic.cachedDiffIDs[srcInfo.Digest], err
}
// Fallback: copy the layer, computing the diffID if we need to do so
fmt.Fprintf(ic.reportWriter, "Copying blob %s\n", srcInfo.Digest)
srcStream, srcBlobSize, err := ic.rawSource.GetBlob(srcInfo)
if err != nil {
return types.BlobInfo{}, "", errors.Wrapf(err, "Error reading blob %s", srcInfo.Digest)
}
defer srcStream.Close()
blobInfo, diffIDChan, err := ic.copyLayerFromStream(srcStream, types.BlobInfo{Digest: srcInfo.Digest, Size: srcBlobSize},
diffIDIsNeeded)
if err != nil {
return types.BlobInfo{}, "", err
}
var diffIDResult diffIDResult // = {digest:""}
if diffIDIsNeeded {
diffIDResult = <-diffIDChan
if diffIDResult.err != nil {
return types.BlobInfo{}, "", errors.Wrap(diffIDResult.err, "Error computing layer DiffID")
}
logrus.Debugf("Computed DiffID %s for layer %s", diffIDResult.digest, srcInfo.Digest)
ic.cachedDiffIDs[srcInfo.Digest] = diffIDResult.digest
}
return blobInfo, diffIDResult.digest, nil
}
// copyLayerFromStream is an implementation detail of copyLayer; mostly providing a separate “defer” scope.
// it copies a blob with srcInfo (with known Digest and possibly known Size) from srcStream to dest,
// perhaps compressing the stream if canCompress,
// and returns a complete blobInfo of the copied blob and perhaps a <-chan diffIDResult if diffIDIsNeeded, to be read by the caller.
func (ic *imageCopier) copyLayerFromStream(srcStream io.Reader, srcInfo types.BlobInfo,
diffIDIsNeeded bool) (types.BlobInfo, <-chan diffIDResult, error) {
var getDiffIDRecorder func(compression.DecompressorFunc) io.Writer // = nil
var diffIDChan chan diffIDResult
err := errors.New("Internal error: unexpected panic in copyLayer") // For pipeWriter.CloseWithError below
if diffIDIsNeeded {
diffIDChan = make(chan diffIDResult, 1) // Buffered, so that sending a value after this or our caller has failed and exited does not block.
pipeReader, pipeWriter := io.Pipe()
defer func() { // Note that this is not the same as {defer pipeWriter.CloseWithError(err)}; we need err to be evaluated lazily.
pipeWriter.CloseWithError(err) // CloseWithError(nil) is equivalent to Close()
}()
getDiffIDRecorder = func(decompressor compression.DecompressorFunc) io.Writer {
// If this fails, e.g. because we have exited and due to pipeWriter.CloseWithError() above further
// reading from the pipe has failed, we don't really care.
// We only read from diffIDChan if the rest of the flow has succeeded, and when we do read from it,
// the return value includes an error indication, which we do check.
//
// If this never gets called, pipeReader will not be used anywhere, but pipeWriter will only be
// closed above, so we are happy enough with both pipeReader and pipeWriter to just get collected by GC.
go diffIDComputationGoroutine(diffIDChan, pipeReader, decompressor) // Closes pipeReader
return pipeWriter
}
}
blobInfo, err := ic.copyBlobFromStream(srcStream, srcInfo, getDiffIDRecorder, ic.canModifyManifest) // Sets err to nil on success
return blobInfo, diffIDChan, err
// We need the defer … pipeWriter.CloseWithError() to happen HERE so that the caller can block on reading from diffIDChan
}
// diffIDComputationGoroutine reads all input from layerStream, uncompresses using decompressor if necessary, and sends its digest, and status, if any, to dest.
func diffIDComputationGoroutine(dest chan<- diffIDResult, layerStream io.ReadCloser, decompressor compression.DecompressorFunc) {
result := diffIDResult{
digest: "",
err: errors.New("Internal error: unexpected panic in diffIDComputationGoroutine"),
}
defer func() { dest <- result }()
defer layerStream.Close() // We do not care to bother the other end of the pipe with other failures; we send them to dest instead.
result.digest, result.err = computeDiffID(layerStream, decompressor)
}
// computeDiffID reads all input from layerStream, uncompresses it using decompressor if necessary, and returns its digest.
func computeDiffID(stream io.Reader, decompressor compression.DecompressorFunc) (digest.Digest, error) {
if decompressor != nil {
s, err := decompressor(stream)
if err != nil {
return "", err
}
stream = s
}
return digest.Canonical.FromReader(stream)
}
// copyBlobFromStream copies a blob with srcInfo (with known Digest and possibly known Size) from srcStream to dest,
// perhaps sending a copy to an io.Writer if getOriginalLayerCopyWriter != nil,
// perhaps compressing it if canCompress,
// and returns a complete blobInfo of the copied blob.
func (ic *imageCopier) copyBlobFromStream(srcStream io.Reader, srcInfo types.BlobInfo,
getOriginalLayerCopyWriter func(decompressor compression.DecompressorFunc) io.Writer,
canCompress bool) (types.BlobInfo, error) {
// The copying happens through a pipeline of connected io.Readers.
// === Input: srcStream
// === Process input through digestingReader to validate against the expected digest.
// Be paranoid; in case PutBlob somehow managed to ignore an error from digestingReader,
// use a separate validation failure indicator.
// Note that we don't use a stronger "validationSucceeded" indicator, because
@@ -255,29 +562,40 @@ func copyBlob(dest types.ImageDestination, src types.ImageSource, srcInfo types.
// read stream to the end, and validation does not happen.
digestingReader, err := newDigestingReader(srcStream, srcInfo.Digest)
if err != nil {
return types.BlobInfo{}, fmt.Errorf("Error preparing to verify blob %s: %v", srcInfo.Digest, err)
return types.BlobInfo{}, errors.Wrapf(err, "Error preparing to verify blob %s", srcInfo.Digest)
}
var destStream io.Reader = digestingReader
isCompressed, destStream, err := isStreamCompressed(destStream) // We could skip this in some cases, but let's keep the code path uniform
if err != nil {
return types.BlobInfo{}, fmt.Errorf("Error reading blob %s: %v", srcInfo.Digest, err)
}
// === Detect compression of the input stream.
// This requires us to “peek ahead” into the stream to read the initial part, which requires us to chain through another io.Reader returned by DetectCompression.
decompressor, destStream, err := compression.DetectCompression(destStream) // We could skip this in some cases, but let's keep the code path uniform
if err != nil {
return types.BlobInfo{}, errors.Wrapf(err, "Error reading blob %s", srcInfo.Digest)
}
isCompressed := decompressor != nil
// === Report progress using a pb.Reader.
bar := pb.New(int(srcInfo.Size)).SetUnits(pb.U_BYTES)
bar.Output = reportWriter
bar.Output = ic.reportWriter
bar.SetMaxWidth(80)
bar.ShowTimeLeft = false
bar.ShowPercent = false
bar.Start()
destStream = bar.NewProxyReader(destStream)
defer fmt.Fprint(ic.reportWriter, "\n")
defer fmt.Fprint(reportWriter, "\n")
// === Send a copy of the original, uncompressed, stream, to a separate path if necessary.
var originalLayerReader io.Reader // DO NOT USE this other than to drain the input if no other consumer in the pipeline has done so.
if getOriginalLayerCopyWriter != nil {
destStream = io.TeeReader(destStream, getOriginalLayerCopyWriter(decompressor))
originalLayerReader = destStream
}
// === Compress the layer if it is uncompressed and compression is desired
var inputInfo types.BlobInfo
if !canCompress || isCompressed || !dest.ShouldCompressLayers() {
if !canCompress || isCompressed || !ic.dest.ShouldCompressLayers() {
logrus.Debugf("Using original blob without modification")
inputInfo.Digest = srcInfo.Digest
inputInfo.Size = srcBlobSize
inputInfo = srcInfo
} else {
logrus.Debugf("Compressing blob on the fly")
pipeReader, pipeWriter := io.Pipe()
@@ -292,51 +610,42 @@ func copyBlob(dest types.ImageDestination, src types.ImageSource, srcInfo types.
inputInfo.Size = -1
}
-uploadedInfo, err := dest.PutBlob(destStream, inputInfo)
-if err != nil {
-return types.BlobInfo{}, fmt.Errorf("Error writing blob: %v", err)
-}
-if digestingReader.validationFailed { // Coverage: This should never happen.
-return types.BlobInfo{}, fmt.Errorf("Internal error uploading blob %s, digest verification failed but was ignored", srcInfo.Digest)
-}
-if inputInfo.Digest != "" && uploadedInfo.Digest != inputInfo.Digest {
-return types.BlobInfo{}, fmt.Errorf("Internal error uploading blob %s, blob with digest %s uploaded with digest %s", srcInfo.Digest, inputInfo.Digest, uploadedInfo.Digest)
-}
-return uploadedInfo, nil
-}
-// compressionPrefixes is an internal implementation detail of isStreamCompressed
-var compressionPrefixes = map[string][]byte{
-"gzip": {0x1F, 0x8B, 0x08}, // gzip (RFC 1952)
-"bzip2": {0x42, 0x5A, 0x68}, // bzip2 (decompress.c:BZ2_decompress)
-"xz": {0xFD, 0x37, 0x7A, 0x58, 0x5A, 0x00}, // xz (/usr/share/doc/xz/xz-file-format.txt)
-}
-// isStreamCompressed returns true if input is recognized as a compressed format.
-// Because it consumes the start of input, other consumers must use the returned io.Reader instead to also read from the beginning.
-func isStreamCompressed(input io.Reader) (bool, io.Reader, error) {
-buffer := [8]byte{}
-n, err := io.ReadAtLeast(input, buffer[:], len(buffer))
-if err != nil && err != io.EOF && err != io.ErrUnexpectedEOF {
-// This is a “real” error. We could just ignore it this time, process the data we have, and hope that the source will report the same error again.
-// Instead, fail immediately with the original error cause instead of a possibly secondary/misleading error returned later.
-return false, nil, err
-}
-isCompressed := false
-for algo, prefix := range compressionPrefixes {
-if bytes.HasPrefix(buffer[:n], prefix) {
-logrus.Debugf("Detected compression format %s", algo)
-isCompressed = true
-break
-}
-}
+// === Report progress using the ic.progress channel, if required.
+if ic.progress != nil && ic.progressInterval > 0 {
+destStream = &progressReader{
+source: destStream,
+channel: ic.progress,
+interval: ic.progressInterval,
+artifact: srcInfo,
+lastTime: time.Now(),
+}
+}
-if !isCompressed {
-logrus.Debugf("No compression detected")
-}
+// === Finally, send the layer stream to dest.
+uploadedInfo, err := ic.dest.PutBlob(destStream, inputInfo)
+if err != nil {
+return types.BlobInfo{}, errors.Wrap(err, "Error writing blob")
+}
-return isCompressed, io.MultiReader(bytes.NewReader(buffer[:n]), input), nil
-}
+// This is fairly horrible: the writer from getOriginalLayerCopyWriter wants to consume
+// all of the input (to compute DiffIDs), even if dest.PutBlob does not need it.
+// So, read everything from originalLayerReader, which will cause the rest to be
+// sent there if we are not already at EOF.
+if getOriginalLayerCopyWriter != nil {
+logrus.Debugf("Consuming rest of the original blob to satisfy getOriginalLayerCopyWriter")
+_, err := io.Copy(ioutil.Discard, originalLayerReader)
+if err != nil {
+return types.BlobInfo{}, errors.Wrapf(err, "Error reading input blob %s", srcInfo.Digest)
+}
+}
+if digestingReader.validationFailed { // Coverage: This should never happen.
+return types.BlobInfo{}, errors.Errorf("Internal error writing blob %s, digest verification failed but was ignored", srcInfo.Digest)
+}
+if inputInfo.Digest != "" && uploadedInfo.Digest != inputInfo.Digest {
+return types.BlobInfo{}, errors.Errorf("Internal error writing blob %s, blob with digest %s saved with digest %s", srcInfo.Digest, inputInfo.Digest, uploadedInfo.Digest)
+}
+return uploadedInfo, nil
}
// compressGoroutine reads all input from src and writes its compressed equivalent to dest.
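The peek-ahead detection that compression.DetectCompression performs can be sketched standalone: read a fixed-size prefix, compare it against known magic bytes, and hand back an io.MultiReader so the consumed bytes remain readable by the next consumer. A minimal illustration of the technique (the prefixes match the table removed above; this is not the library's actual implementation):

package main

import (
	"bytes"
	"fmt"
	"io"
	"io/ioutil"
	"strings"
)

// Magic-byte prefixes, as in the table removed above.
var prefixes = map[string][]byte{
	"gzip":  {0x1F, 0x8B, 0x08},
	"bzip2": {0x42, 0x5A, 0x68},
	"xz":    {0xFD, 0x37, 0x7A, 0x58, 0x5A, 0x00},
}

// detect reports the recognized format ("" if none) and returns a reader
// that replays the consumed prefix followed by the rest of the input.
func detect(input io.Reader) (string, io.Reader, error) {
	buf := make([]byte, 8)
	n, err := io.ReadAtLeast(input, buf, len(buf))
	if err != nil && err != io.EOF && err != io.ErrUnexpectedEOF {
		return "", nil, err // a real read error; fail immediately
	}
	for algo, p := range prefixes {
		if bytes.HasPrefix(buf[:n], p) {
			return algo, io.MultiReader(bytes.NewReader(buf[:n]), input), nil
		}
	}
	return "", io.MultiReader(bytes.NewReader(buf[:n]), input), nil
}

func main() {
	input := strings.NewReader("\x1f\x8b\x08rest of a gzip stream")
	format, r, err := detect(input)
	if err != nil {
		panic(err)
	}
	data, _ := ioutil.ReadAll(r) // the sniffed bytes are still readable
	fmt.Println(format, len(data))
}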

vendor/github.com/containers/image/copy/manifest.go generated vendored Normal file

@@ -0,0 +1,102 @@
package copy
import (
"strings"
"github.com/Sirupsen/logrus"
"github.com/containers/image/manifest"
"github.com/containers/image/types"
"github.com/pkg/errors"
)
// preferredManifestMIMETypes lists manifest MIME types in order of our preference, if we can't use the original manifest and need to convert.
// Prefer v2s2 to v2s1 because v2s2 does not need to be changed when uploading to a different location.
// Include v2s1 signed but not v2s1 unsigned, because docker/distribution requires a signature even if the unsigned MIME type is used.
var preferredManifestMIMETypes = []string{manifest.DockerV2Schema2MediaType, manifest.DockerV2Schema1SignedMediaType}
// orderedSet is a list of strings (MIME types in our case), with each string appearing at most once.
type orderedSet struct {
list []string
included map[string]struct{}
}
// newOrderedSet creates a correctly initialized orderedSet.
// [Sometimes it would be really nice if Golang had constructors…]
func newOrderedSet() *orderedSet {
return &orderedSet{
list: []string{},
included: map[string]struct{}{},
}
}
// append adds s to the end of os, only if it is not included already.
func (os *orderedSet) append(s string) {
if _, ok := os.included[s]; !ok {
os.list = append(os.list, s)
os.included[s] = struct{}{}
}
}
// determineManifestConversion updates manifestUpdates to convert manifest to a supported MIME type, if necessary and canModifyManifest.
// Note that the conversion will only happen later, through src.UpdatedImage
// Returns the preferred manifest MIME type (whether we are converting to it or using it unmodified),
// and a list of other possible alternatives, in order.
func determineManifestConversion(manifestUpdates *types.ManifestUpdateOptions, src types.Image, destSupportedManifestMIMETypes []string, canModifyManifest bool) (string, []string, error) {
_, srcType, err := src.Manifest()
if err != nil { // This should have been cached?!
return "", nil, errors.Wrap(err, "Error reading manifest")
}
if len(destSupportedManifestMIMETypes) == 0 {
return srcType, []string{}, nil // Anything goes; just use the original as is, do not try any conversions.
}
supportedByDest := map[string]struct{}{}
for _, t := range destSupportedManifestMIMETypes {
supportedByDest[t] = struct{}{}
}
// destSupportedManifestMIMETypes is a static guess; a particular registry may still only support a subset of the types.
// So, build a list of types to try in order of decreasing preference.
// FIXME? This treats manifest.DockerV2Schema1SignedMediaType and manifest.DockerV2Schema1MediaType as distinct,
// although we are not really making any conversion, and it is very unlikely that a destination would support one but not the other.
// In practice, schema1 is probably the lowest common denominator, so we would expect to try the first one of the MIME types
// and never attempt the other one.
prioritizedTypes := newOrderedSet()
// First of all, prefer to keep the original manifest unmodified.
if _, ok := supportedByDest[srcType]; ok {
prioritizedTypes.append(srcType)
}
if !canModifyManifest {
// We could also drop the !canModifyManifest parameter and have the caller
// make the choice; it is already doing that to an extent, to improve error
// messages. But it is nice to hide the “if !canModifyManifest, do no conversion”
// special case in here; the caller can then worry (or not) only about a good UI.
logrus.Debugf("We can't modify the manifest, hoping for the best...")
return srcType, []string{}, nil // Take our chances - FIXME? Or should we fail without trying?
}
// Then use our list of preferred types.
for _, t := range preferredManifestMIMETypes {
if _, ok := supportedByDest[t]; ok {
prioritizedTypes.append(t)
}
}
// Finally, try anything else the destination supports.
for _, t := range destSupportedManifestMIMETypes {
prioritizedTypes.append(t)
}
logrus.Debugf("Manifest has MIME type %s, ordered candidate list [%s]", srcType, strings.Join(prioritizedTypes.list, ", "))
if len(prioritizedTypes.list) == 0 { // Coverage: destSupportedManifestMIMETypes is not empty (or we would have exited in the “Anything goes” case above), so this should never happen.
return "", nil, errors.New("Internal error: no candidate MIME types")
}
preferredType := prioritizedTypes.list[0]
if preferredType != srcType {
manifestUpdates.ManifestMIMEType = preferredType
} else {
logrus.Debugf("... will first try using the original manifest unmodified")
}
return preferredType, prioritizedTypes.list[1:], nil
}
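To make the candidate ordering concrete, here is a small self-contained sketch of how the prioritized list comes out for a hypothetical destination. The MIME type strings are written out literally instead of being taken from the manifest package, and the inputs are invented for the example:

package main

import "fmt"

// A minimal copy of the orderedSet shown above.
type orderedSet struct {
	list     []string
	included map[string]struct{}
}

func newOrderedSet() *orderedSet {
	return &orderedSet{included: map[string]struct{}{}}
}

func (os *orderedSet) append(s string) {
	if _, ok := os.included[s]; !ok {
		os.list = append(os.list, s)
		os.included[s] = struct{}{}
	}
}

func main() {
	// Invented inputs: the source manifest type and the destination's supported types.
	const v2s2 = "application/vnd.docker.distribution.manifest.v2+json"
	const v2s1 = "application/vnd.docker.distribution.manifest.v1+prettyjws"
	srcType := v2s1
	destSupported := []string{v2s2, v2s1}
	preferred := []string{v2s2, v2s1} // mirrors preferredManifestMIMETypes above

	supported := map[string]struct{}{}
	for _, t := range destSupported {
		supported[t] = struct{}{}
	}

	prioritized := newOrderedSet()
	if _, ok := supported[srcType]; ok { // keep the original manifest type first
		prioritized.append(srcType)
	}
	for _, t := range preferred { // then the global preference order
		if _, ok := supported[t]; ok {
			prioritized.append(t)
		}
	}
	for _, t := range destSupported { // finally anything else the destination takes
		prioritized.append(t)
	}
	fmt.Println(prioritized.list) // the v2s1 original wins, v2s2 is the fallback
}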


@@ -0,0 +1,28 @@
package copy
import (
"io"
"time"
"github.com/containers/image/types"
)
// progressReader is a reader that reports its progress on an interval.
type progressReader struct {
source io.Reader
channel chan types.ProgressProperties
interval time.Duration
artifact types.BlobInfo
lastTime time.Time
offset uint64
}
func (r *progressReader) Read(p []byte) (int, error) {
n, err := r.source.Read(p)
r.offset += uint64(n)
if time.Since(r.lastTime) > r.interval {
r.channel <- types.ProgressProperties{Artifact: r.artifact, Offset: r.offset}
r.lastTime = time.Now()
}
return n, err
}
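A sketch of how a caller might drive such a reader. It uses a local copy of the type, with a plain struct standing in for types.ProgressProperties so the snippet is standalone; note the channel must be serviced (here by a goroutine), because the send inside Read blocks otherwise:

package main

import (
	"fmt"
	"io"
	"io/ioutil"
	"strings"
	"time"
)

type progressEvent struct { // stands in for types.ProgressProperties
	offset uint64
}

type progressReader struct {
	source   io.Reader
	channel  chan progressEvent
	interval time.Duration
	lastTime time.Time
	offset   uint64
}

func (r *progressReader) Read(p []byte) (int, error) {
	n, err := r.source.Read(p)
	r.offset += uint64(n)
	if time.Since(r.lastTime) > r.interval {
		r.channel <- progressEvent{offset: r.offset}
		r.lastTime = time.Now()
	}
	return n, err
}

func main() {
	events := make(chan progressEvent)
	go func() { // consume events so the reader never blocks on send
		for e := range events {
			fmt.Printf("copied %d bytes so far\n", e.offset)
		}
	}()
	r := &progressReader{
		source:   strings.NewReader(strings.Repeat("x", 1<<20)),
		channel:  events,
		interval: 0, // report on every Read, for the demo
		lastTime: time.Now(),
	}
	n, _ := io.Copy(ioutil.Discard, r)
	close(events)
	fmt.Println("total:", n)
}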

vendor/github.com/containers/image/copy/sign.go generated vendored Normal file

@@ -0,0 +1,35 @@
package copy
import (
"fmt"
"io"
"github.com/containers/image/signature"
"github.com/containers/image/transports"
"github.com/containers/image/types"
"github.com/pkg/errors"
)
// createSignature creates a new signature of manifest at (identified by) dest using keyIdentity.
func createSignature(dest types.ImageDestination, manifest []byte, keyIdentity string, reportWriter io.Writer) ([]byte, error) {
mech, err := signature.NewGPGSigningMechanism()
if err != nil {
return nil, errors.Wrap(err, "Error initializing GPG")
}
defer mech.Close()
if err := mech.SupportsSigning(); err != nil {
return nil, errors.Wrap(err, "Signing not supported")
}
dockerReference := dest.Reference().DockerReference()
if dockerReference == nil {
return nil, errors.Errorf("Cannot determine canonical Docker reference for destination %s", transports.ImageName(dest.Reference()))
}
fmt.Fprintf(reportWriter, "Signing manifest\n")
newSig, err := signature.SignDockerManifest(manifest, dockerReference.String(), mech, keyIdentity)
if err != nil {
return nil, errors.Wrap(err, "Error creating signature")
}
return newSig, nil
}


@@ -1,14 +1,13 @@
package directory
import (
"crypto/sha256"
"encoding/hex"
"fmt"
"io"
"io/ioutil"
"os"
"github.com/containers/image/types"
"github.com/opencontainers/go-digest"
"github.com/pkg/errors"
)
type dirImageDestination struct {
@@ -27,7 +26,8 @@ func (d *dirImageDestination) Reference() types.ImageReference {
}
// Close removes resources associated with an initialized ImageDestination, if any.
func (d *dirImageDestination) Close() {
func (d *dirImageDestination) Close() error {
return nil
}
func (d *dirImageDestination) SupportedManifestMIMETypes() []string {
@@ -45,6 +45,17 @@ func (d *dirImageDestination) ShouldCompressLayers() bool {
return false
}
+// AcceptsForeignLayerURLs returns false iff foreign layers in the manifest should actually be
+// uploaded to the image destination; true otherwise.
+func (d *dirImageDestination) AcceptsForeignLayerURLs() bool {
+return false
+}
+// MustMatchRuntimeOS returns true iff the destination can store only images targeted for the current runtime OS. False otherwise.
+func (d *dirImageDestination) MustMatchRuntimeOS() bool {
+return false
+}
// PutBlob writes contents of stream and returns data representing the result (with all data filled in).
// inputInfo.Digest can be optionally provided if known; it is not mandatory for the implementation to verify it.
// inputInfo.Size is the expected length of stream, if known.
@@ -64,16 +75,16 @@ func (d *dirImageDestination) PutBlob(stream io.Reader, inputInfo types.BlobInfo
}
}()
-h := sha256.New()
-tee := io.TeeReader(stream, h)
+digester := digest.Canonical.Digester()
+tee := io.TeeReader(stream, digester.Hash())
size, err := io.Copy(blobFile, tee)
if err != nil {
return types.BlobInfo{}, err
}
-computedDigest := hex.EncodeToString(h.Sum(nil))
+computedDigest := digester.Digest()
if inputInfo.Size != -1 && size != inputInfo.Size {
return types.BlobInfo{}, fmt.Errorf("Size mismatch when copying %s, expected %d, got %d", computedDigest, inputInfo.Size, size)
return types.BlobInfo{}, errors.Errorf("Size mismatch when copying %s, expected %d, got %d", computedDigest, inputInfo.Size, size)
}
if err := blobFile.Sync(); err != nil {
return types.BlobInfo{}, err
@@ -86,9 +97,36 @@ func (d *dirImageDestination) PutBlob(stream io.Reader, inputInfo types.BlobInfo
return types.BlobInfo{}, err
}
succeeded = true
return types.BlobInfo{Digest: "sha256:" + computedDigest, Size: size}, nil
return types.BlobInfo{Digest: computedDigest, Size: size}, nil
}
+// HasBlob returns true iff the image destination already contains a blob with the matching digest which can be reapplied using ReapplyBlob.
+// Unlike PutBlob, the digest can not be empty. If HasBlob returns true, the size of the blob must also be returned.
+// If the destination does not contain the blob, or it is unknown, HasBlob ordinarily returns (false, -1, nil);
+// it returns a non-nil error only on an unexpected failure.
+func (d *dirImageDestination) HasBlob(info types.BlobInfo) (bool, int64, error) {
+if info.Digest == "" {
+return false, -1, errors.Errorf(`Can not check for a blob with unknown digest`)
+}
+blobPath := d.ref.layerPath(info.Digest)
+finfo, err := os.Stat(blobPath)
+if err != nil && os.IsNotExist(err) {
+return false, -1, nil
+}
+if err != nil {
+return false, -1, err
+}
+return true, finfo.Size(), nil
+}
+func (d *dirImageDestination) ReapplyBlob(info types.BlobInfo) (types.BlobInfo, error) {
+return info, nil
+}
// PutManifest writes manifest to the destination.
// FIXME? This should also receive a MIME type if known, to differentiate between schema versions.
// If the destination is in principle available but refuses this manifest type (e.g. it does not recognize the schema),
// even though it may accept a different manifest type, the returned error must be a ManifestTypeRejectedError.
func (d *dirImageDestination) PutManifest(manifest []byte) error {
return ioutil.WriteFile(d.ref.manifestPath(), manifest, 0644)
}
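The digester/TeeReader pattern in PutBlob above is easy to isolate: everything read through the tee is hashed as a side effect, and the resulting digest already carries its algorithm prefix. A standalone sketch using the same go-digest API:

package main

import (
	"bytes"
	"fmt"
	"io"
	"io/ioutil"

	"github.com/opencontainers/go-digest"
)

func main() {
	stream := bytes.NewReader([]byte("layer contents"))

	digester := digest.Canonical.Digester() // sha256 by default
	tee := io.TeeReader(stream, digester.Hash())

	// Stand-in for writing to blobFile; everything read is also hashed.
	size, err := io.Copy(ioutil.Discard, tee)
	if err != nil {
		panic(err)
	}
	// The result already carries the algorithm prefix ("sha256:…"),
	// which is why the "sha256:" + hex concatenation above became unnecessary.
	fmt.Println(digester.Digest(), size)
}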


@@ -5,7 +5,10 @@ import (
"io/ioutil"
"os"
"github.com/containers/image/manifest"
"github.com/containers/image/types"
"github.com/opencontainers/go-digest"
"github.com/pkg/errors"
)
type dirImageSource struct {
@@ -25,21 +28,27 @@ func (s *dirImageSource) Reference() types.ImageReference {
}
// Close removes resources associated with an initialized ImageSource, if any.
-func (s *dirImageSource) Close() {
+func (s *dirImageSource) Close() error {
+return nil
}
-// it's up to the caller to determine the MIME type of the returned manifest's bytes
+// GetManifest returns the image's manifest along with its MIME type (which may be empty when it can't be determined but the manifest is available).
+// It may use a remote (= slow) service.
func (s *dirImageSource) GetManifest() ([]byte, string, error) {
m, err := ioutil.ReadFile(s.ref.manifestPath())
if err != nil {
return nil, "", err
}
return m, "", err
return m, manifest.GuessMIMEType(m), err
}
+func (s *dirImageSource) GetTargetManifest(digest digest.Digest) ([]byte, string, error) {
+return nil, "", errors.Errorf(`Getting target manifest not supported by "dir:"`)
+}
// GetBlob returns a stream for the specified blob, and the blob's size (or -1 if unknown).
-func (s *dirImageSource) GetBlob(digest string) (io.ReadCloser, int64, error) {
-r, err := os.Open(s.ref.layerPath(digest))
+func (s *dirImageSource) GetBlob(info types.BlobInfo) (io.ReadCloser, int64, error) {
+r, err := os.Open(s.ref.layerPath(info.Digest))
if err != nil {
return nil, 0, err
}


@@ -1,17 +1,24 @@
package directory
import (
"errors"
"fmt"
"path/filepath"
"strings"
"github.com/pkg/errors"
"github.com/containers/image/directory/explicitfilepath"
"github.com/containers/image/docker/reference"
"github.com/containers/image/image"
"github.com/containers/image/transports"
"github.com/containers/image/types"
"github.com/docker/docker/reference"
"github.com/opencontainers/go-digest"
)
func init() {
transports.Register(Transport)
}
// Transport is an ImageTransport for directory paths.
var Transport = dirTransport{}
@@ -32,7 +39,7 @@ func (t dirTransport) ParseReference(reference string) (types.ImageReference, er
// scope passed to this function will not be "", that value is always allowed.
func (t dirTransport) ValidatePolicyConfigurationScope(scope string) error {
if !strings.HasPrefix(scope, "/") {
return fmt.Errorf("Invalid scope %s: Must be an absolute path", scope)
return errors.Errorf("Invalid scope %s: Must be an absolute path", scope)
}
// Refuse also "/", otherwise "/" and "" would have the same semantics,
// and "" could be unexpectedly shadowed by the "/" entry.
@@ -41,7 +48,7 @@ func (t dirTransport) ValidatePolicyConfigurationScope(scope string) error {
}
cleaned := filepath.Clean(scope)
if cleaned != scope {
-return fmt.Errorf(`Invalid scope %s: Uses non-canonical format, perhaps try %s`, scope, cleaned)
+return errors.Errorf(`Invalid scope %s: Uses non-canonical format, perhaps try %s`, scope, cleaned)
}
return nil
}
@@ -127,11 +134,13 @@ func (ref dirReference) PolicyConfigurationNamespaces() []string {
return res
}
-// NewImage returns a types.Image for this reference.
+// NewImage returns a types.Image for this reference, possibly specialized for this ImageTransport.
// The caller must call .Close() on the returned Image.
// NOTE: If any kind of signature verification should happen, build an UnparsedImage from the value returned by NewImageSource,
// verify that UnparsedImage, and convert it into a real Image via image.FromUnparsedImage.
func (ref dirReference) NewImage(ctx *types.SystemContext) (types.Image, error) {
src := newImageSource(ref)
-return image.FromSource(src), nil
+return image.FromSource(src)
}
// NewImageSource returns a types.ImageSource for this reference,
@@ -150,7 +159,7 @@ func (ref dirReference) NewImageDestination(ctx *types.SystemContext) (types.Ima
// DeleteImage deletes the named image from the registry, if supported.
func (ref dirReference) DeleteImage(ctx *types.SystemContext) error {
return fmt.Errorf("Deleting images not implemented for dir: images")
return errors.Errorf("Deleting images not implemented for dir: images")
}
// manifestPath returns a path for the manifest within a directory using our conventions.
@@ -159,9 +168,9 @@ func (ref dirReference) manifestPath() string {
}
// layerPath returns a path for a layer tarball within a directory using our conventions.
-func (ref dirReference) layerPath(digest string) string {
+func (ref dirReference) layerPath(digest digest.Digest) string {
// FIXME: Should we keep the digest identification?
-return filepath.Join(ref.path, strings.TrimPrefix(digest, "sha256:")+".tar")
+return filepath.Join(ref.path, digest.Hex()+".tar")
}
// signaturePath returns a path for a signature within a directory using our conventions.
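The layerPath change above leans on the typed accessors of digest.Digest; a quick sketch of the difference, assuming a typical sha256 digest string:

package main

import (
	"fmt"
	"path/filepath"

	"github.com/opencontainers/go-digest"
)

func main() {
	d, err := digest.Parse("sha256:e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855")
	if err != nil {
		panic(err)
	}
	// Typed accessors replace the old strings.TrimPrefix(digest, "sha256:") handling.
	fmt.Println(d.Algorithm())                              // sha256
	fmt.Println(filepath.Join("/some/dir", d.Hex()+".tar")) // /some/dir/e3b0….tar
}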


@@ -1,9 +1,10 @@
package explicitfilepath
import (
"fmt"
"os"
"path/filepath"
"github.com/pkg/errors"
)
// ResolvePathToFullyExplicit returns the input path converted to an absolute, no-symlinks, cleaned up path.
@@ -25,14 +26,14 @@ func ResolvePathToFullyExplicit(path string) (string, error) {
// This can still happen if there is a filesystem race condition, causing the Lstat() above to fail but the later resolution to succeed.
// We do not care to promise anything if such filesystem race conditions can happen, but we definitely don't want to return "."/".." components
// in the resulting path, and especially not at the end.
return "", fmt.Errorf("Unexpectedly missing special filename component in %s", path)
return "", errors.Errorf("Unexpectedly missing special filename component in %s", path)
}
resolvedPath := filepath.Join(resolvedParent, file)
// As a sanity check, ensure that there are no "." or ".." components.
cleanedResolvedPath := filepath.Clean(resolvedPath)
if cleanedResolvedPath != resolvedPath {
// Coverage: This should never happen.
return "", fmt.Errorf("Internal inconsistency: Path %s resolved to %s still cleaned up to %s", path, resolvedPath, cleanedResolvedPath)
return "", errors.Errorf("Internal inconsistency: Path %s resolved to %s still cleaned up to %s", path, resolvedPath, cleanedResolvedPath)
}
return resolvedPath, nil
default: // err != nil, unrecognized
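For paths that already exist, the goal of ResolvePathToFullyExplicit (absolute, symlink-free, cleaned) can be approximated with the standard library alone; this sketch does not reproduce the parent-directory handling above that allows the final path component to be missing:

package main

import (
	"fmt"
	"path/filepath"
)

func main() {
	p := "./some/dir/../link" // hypothetical input

	abs, err := filepath.Abs(p)
	if err != nil {
		panic(err)
	}
	// EvalSymlinks requires the path to exist; the vendored helper instead
	// resolves the parent directory and re-joins the last component, so the
	// final element may be missing.
	resolved, err := filepath.EvalSymlinks(abs)
	if err != nil {
		fmt.Println("cannot resolve:", err)
		return
	}
	fmt.Println(filepath.Clean(resolved))
}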


@@ -0,0 +1,57 @@
package archive
import (
"io"
"os"
"github.com/containers/image/docker/tarfile"
"github.com/containers/image/types"
"github.com/pkg/errors"
)
type archiveImageDestination struct {
*tarfile.Destination // Implements most of types.ImageDestination
ref archiveReference
writer io.Closer
}
func newImageDestination(ctx *types.SystemContext, ref archiveReference) (types.ImageDestination, error) {
if ref.destinationRef == nil {
return nil, errors.Errorf("docker-archive: destination reference not supplied (must be of form <path>:<reference:tag>)")
}
fh, err := os.OpenFile(ref.path, os.O_WRONLY|os.O_EXCL|os.O_CREATE, 0644)
if err != nil {
// FIXME: It should be possible to modify archives, but the only really
// sane way of doing it is to create a copy of the image, modify
// it and then do a rename(2).
if os.IsExist(err) {
err = errors.New("docker-archive doesn't support modifying existing images")
}
return nil, err
}
return &archiveImageDestination{
Destination: tarfile.NewDestination(fh, ref.destinationRef),
ref: ref,
writer: fh,
}, nil
}
// Reference returns the reference used to set up this destination. Note that this should directly correspond to the user's intent,
// e.g. it should use the public hostname instead of the result of resolving CNAMEs or following redirects.
func (d *archiveImageDestination) Reference() types.ImageReference {
return d.ref
}
// Close removes resources associated with an initialized ImageDestination, if any.
func (d *archiveImageDestination) Close() error {
return d.writer.Close()
}
// Commit marks the process of storing the image as successful and asks for the image to be persisted.
// WARNING: This does not have any transactional semantics:
// - Uploaded data MAY be visible to others before Commit() is called
// - Uploaded data MAY be removed or MAY remain around if Close() is called without Commit() (i.e. rollback is allowed but not guaranteed)
func (d *archiveImageDestination) Commit() error {
return d.Destination.Commit()
}


@@ -0,0 +1,36 @@
package archive
import (
"github.com/Sirupsen/logrus"
"github.com/containers/image/docker/tarfile"
"github.com/containers/image/types"
)
type archiveImageSource struct {
*tarfile.Source // Implements most of types.ImageSource
ref archiveReference
}
// newImageSource returns a types.ImageSource for the specified image reference.
// The caller must call .Close() on the returned ImageSource.
func newImageSource(ctx *types.SystemContext, ref archiveReference) types.ImageSource {
if ref.destinationRef != nil {
logrus.Warnf("docker-archive: references are not supported for sources (ignoring)")
}
src := tarfile.NewSource(ref.path)
return &archiveImageSource{
Source: src,
ref: ref,
}
}
// Reference returns the reference used to set up this source, _as specified by the user_
// (not as the image itself, or its underlying storage, claims). This can be used e.g. to determine which public keys are trusted for this image.
func (s *archiveImageSource) Reference() types.ImageReference {
return s.ref
}
// Close removes resources associated with an initialized ImageSource, if any.
func (s *archiveImageSource) Close() error {
return nil
}


@@ -0,0 +1,155 @@
package archive
import (
"fmt"
"strings"
"github.com/containers/image/docker/reference"
ctrImage "github.com/containers/image/image"
"github.com/containers/image/transports"
"github.com/containers/image/types"
"github.com/pkg/errors"
)
func init() {
transports.Register(Transport)
}
// Transport is an ImageTransport for local Docker archives.
var Transport = archiveTransport{}
type archiveTransport struct{}
func (t archiveTransport) Name() string {
return "docker-archive"
}
// ParseReference converts a string, which should not start with the ImageTransport.Name prefix, into an ImageReference.
func (t archiveTransport) ParseReference(reference string) (types.ImageReference, error) {
return ParseReference(reference)
}
// ValidatePolicyConfigurationScope checks that scope is a valid name for a signature.PolicyTransportScopes keys
// (i.e. a valid PolicyConfigurationIdentity() or PolicyConfigurationNamespaces() return value).
// It is acceptable to allow an invalid value which will never be matched, it can "only" cause user confusion.
// scope passed to this function will not be "", that value is always allowed.
func (t archiveTransport) ValidatePolicyConfigurationScope(scope string) error {
// See the explanation in archiveReference.PolicyConfigurationIdentity.
return errors.New(`docker-archive: does not support any scopes except the default "" one`)
}
// archiveReference is an ImageReference for Docker images.
type archiveReference struct {
destinationRef reference.NamedTagged // only used for destinations
path string
}
// ParseReference converts a string, which should not start with the ImageTransport.Name prefix, into a Docker ImageReference.
func ParseReference(refString string) (types.ImageReference, error) {
if refString == "" {
return nil, errors.Errorf("docker-archive reference %s isn't of the form <path>[:<reference>]", refString)
}
parts := strings.SplitN(refString, ":", 2)
path := parts[0]
var destinationRef reference.NamedTagged
// A :tag was specified, which is only necessary for destinations.
if len(parts) == 2 {
ref, err := reference.ParseNormalizedNamed(parts[1])
if err != nil {
return nil, errors.Wrapf(err, "docker-archive parsing reference")
}
ref = reference.TagNameOnly(ref)
if _, isDigest := ref.(reference.Canonical); isDigest {
return nil, errors.Errorf("docker-archive doesn't support digest references: %s", refString)
}
refTagged, isTagged := ref.(reference.NamedTagged)
if !isTagged {
// Really shouldn't be hit...
return nil, errors.Errorf("internal error: reference is not tagged even after reference.TagNameOnly: %s", refString)
}
destinationRef = refTagged
}
return archiveReference{
destinationRef: destinationRef,
path: path,
}, nil
}
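A usage sketch for the two reference shapes this transport accepts, assuming the vendored import path. The tag part goes through reference.ParseNormalizedNamed and reference.TagNameOnly, so short names are normalized and a missing tag defaults to :latest:

package main

import (
	"fmt"

	"github.com/containers/image/docker/archive"
)

func main() {
	// Source: a bare path is enough.
	src, err := archive.ParseReference("/tmp/busybox.tar")
	if err != nil {
		panic(err)
	}
	fmt.Println(src.StringWithinTransport()) // /tmp/busybox.tar

	// Destination: the :name:tag part is required for writing.
	dst, err := archive.ParseReference("/tmp/busybox.tar:busybox:1.26")
	if err != nil {
		panic(err)
	}
	fmt.Println(dst.StringWithinTransport()) // /tmp/busybox.tar:docker.io/library/busybox:1.26
}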
func (ref archiveReference) Transport() types.ImageTransport {
return Transport
}
// StringWithinTransport returns a string representation of the reference, which MUST be such that
// reference.Transport().ParseReference(reference.StringWithinTransport()) returns an equivalent reference.
// NOTE: The returned string is not promised to be equal to the original input to ParseReference;
// e.g. default attribute values omitted by the user may be filled in in the return value, or vice versa.
// WARNING: Do not use the return value in the UI to describe an image, it does not contain the Transport().Name() prefix.
func (ref archiveReference) StringWithinTransport() string {
if ref.destinationRef == nil {
return ref.path
}
return fmt.Sprintf("%s:%s", ref.path, ref.destinationRef.String())
}
// DockerReference returns a Docker reference associated with this reference
// (fully explicit, i.e. !reference.IsNameOnly, but reflecting user intent,
// not e.g. after redirect or alias processing), or nil if unknown/not applicable.
func (ref archiveReference) DockerReference() reference.Named {
return ref.destinationRef
}
// PolicyConfigurationIdentity returns a string representation of the reference, suitable for policy lookup.
// This MUST reflect user intent, not e.g. after processing of third-party redirects or aliases;
// The value SHOULD be fully explicit about its semantics, with no hidden defaults, AND canonical
// (i.e. various references with exactly the same semantics should return the same configuration identity)
// It is fine for the return value to be equal to StringWithinTransport(), and it is desirable but
// not required/guaranteed that it will be a valid input to Transport().ParseReference().
// Returns "" if configuration identities for these references are not supported.
func (ref archiveReference) PolicyConfigurationIdentity() string {
// Punt, the justification is similar to dockerReference.PolicyConfigurationIdentity.
return ""
}
// PolicyConfigurationNamespaces returns a list of other policy configuration namespaces to search
// for if explicit configuration for PolicyConfigurationIdentity() is not set. The list will be processed
// in order, terminating on first match, and an implicit "" is always checked at the end.
// It is STRONGLY recommended for the first element, if any, to be a prefix of PolicyConfigurationIdentity(),
// and each following element to be a prefix of the element preceding it.
func (ref archiveReference) PolicyConfigurationNamespaces() []string {
// TODO
return []string{}
}
// NewImage returns a types.Image for this reference, possibly specialized for this ImageTransport.
// The caller must call .Close() on the returned Image.
// NOTE: If any kind of signature verification should happen, build an UnparsedImage from the value returned by NewImageSource,
// verify that UnparsedImage, and convert it into a real Image via image.FromUnparsedImage.
func (ref archiveReference) NewImage(ctx *types.SystemContext) (types.Image, error) {
src := newImageSource(ctx, ref)
return ctrImage.FromSource(src)
}
// NewImageSource returns a types.ImageSource for this reference,
// asking the backend to use a manifest from requestedManifestMIMETypes if possible.
// nil requestedManifestMIMETypes means manifest.DefaultRequestedManifestMIMETypes.
// The caller must call .Close() on the returned ImageSource.
func (ref archiveReference) NewImageSource(ctx *types.SystemContext, requestedManifestMIMETypes []string) (types.ImageSource, error) {
return newImageSource(ctx, ref), nil
}
// NewImageDestination returns a types.ImageDestination for this reference.
// The caller must call .Close() on the returned ImageDestination.
func (ref archiveReference) NewImageDestination(ctx *types.SystemContext) (types.ImageDestination, error) {
return newImageDestination(ctx, ref)
}
// DeleteImage deletes the named image from the registry, if supported.
func (ref archiveReference) DeleteImage(ctx *types.SystemContext) error {
// Not really supported, for safety reasons.
return errors.New("Deleting images not implemented for docker-archive: images")
}


@@ -0,0 +1,128 @@
package daemon
import (
"io"
"github.com/Sirupsen/logrus"
"github.com/containers/image/docker/reference"
"github.com/containers/image/docker/tarfile"
"github.com/containers/image/types"
"github.com/docker/docker/client"
"github.com/pkg/errors"
"golang.org/x/net/context"
)
type daemonImageDestination struct {
ref daemonReference
*tarfile.Destination // Implements most of types.ImageDestination
// For talking to imageLoadGoroutine
goroutineCancel context.CancelFunc
statusChannel <-chan error
writer *io.PipeWriter
// Other state
committed bool // writer has been closed
}
// newImageDestination returns a types.ImageDestination for the specified image reference.
func newImageDestination(systemCtx *types.SystemContext, ref daemonReference) (types.ImageDestination, error) {
if ref.ref == nil {
return nil, errors.Errorf("Invalid destination docker-daemon:%s: a destination must be a name:tag", ref.StringWithinTransport())
}
namedTaggedRef, ok := ref.ref.(reference.NamedTagged)
if !ok {
return nil, errors.Errorf("Invalid destination docker-daemon:%s: a destination must be a name:tag", ref.StringWithinTransport())
}
c, err := client.NewClient(client.DefaultDockerHost, "1.22", nil, nil) // FIXME: overridable host
if err != nil {
return nil, errors.Wrap(err, "Error initializing docker engine client")
}
reader, writer := io.Pipe()
// Commit() may never be called, so we may never read from this channel; so, make this buffered to allow imageLoadGoroutine to write status and terminate even if we never read it.
statusChannel := make(chan error, 1)
ctx, goroutineCancel := context.WithCancel(context.Background())
go imageLoadGoroutine(ctx, c, reader, statusChannel)
return &daemonImageDestination{
ref: ref,
Destination: tarfile.NewDestination(writer, namedTaggedRef),
goroutineCancel: goroutineCancel,
statusChannel: statusChannel,
writer: writer,
committed: false,
}, nil
}
// imageLoadGoroutine accepts a tar stream on reader, sends it to c, and reports error or success by writing to statusChannel
func imageLoadGoroutine(ctx context.Context, c *client.Client, reader *io.PipeReader, statusChannel chan<- error) {
err := errors.New("Internal error: unexpected panic in imageLoadGoroutine")
defer func() {
logrus.Debugf("docker-daemon: sending done, status %v", err)
statusChannel <- err
}()
defer func() {
if err == nil {
reader.Close()
} else {
reader.CloseWithError(err)
}
}()
resp, err := c.ImageLoad(ctx, reader, true)
if err != nil {
err = errors.Wrap(err, "Error saving image to docker engine")
return
}
defer resp.Body.Close()
}
// MustMatchRuntimeOS returns true iff the destination can store only images targeted for the current runtime OS. False otherwise.
func (d *daemonImageDestination) MustMatchRuntimeOS() bool {
return true
}
// Close removes resources associated with an initialized ImageDestination, if any.
func (d *daemonImageDestination) Close() error {
if !d.committed {
logrus.Debugf("docker-daemon: Closing tar stream to abort loading")
// In principle, goroutineCancel() should abort the HTTP request and stop the process from continuing.
// In practice, though, various HTTP implementations used by client.Client.ImageLoad() (including
// https://github.com/golang/net/blob/master/context/ctxhttp/ctxhttp_pre17.go and the
// net/http version with native Context support in Go 1.7) do not always actually immediately cancel
// the operation: they may process the HTTP request, or a part of it, to completion in a goroutine, and
// return early if the context is canceled without terminating the goroutine at all.
// So we need this CloseWithError to terminate sending the HTTP request Body
// immediately, and hopefully, through terminating the sending which uses "Transfer-Encoding: chunked" without sending
// the terminating zero-length chunk, prevent the docker daemon from processing the tar stream at all.
// Whether that works or not, closing the PipeWriter seems desirable in any case.
d.writer.CloseWithError(errors.New("Aborting upload, daemonImageDestination closed without a previous .Commit()"))
}
d.goroutineCancel()
return nil
}
func (d *daemonImageDestination) Reference() types.ImageReference {
return d.ref
}
// Commit marks the process of storing the image as successful and asks for the image to be persisted.
// WARNING: This does not have any transactional semantics:
// - Uploaded data MAY be visible to others before Commit() is called
// - Uploaded data MAY be removed or MAY remain around if Close() is called without Commit() (i.e. rollback is allowed but not guaranteed)
func (d *daemonImageDestination) Commit() error {
logrus.Debugf("docker-daemon: Closing tar stream")
if err := d.Destination.Commit(); err != nil {
return err
}
if err := d.writer.Close(); err != nil {
return err
}
d.committed = true // We may still fail, but we are done sending to imageLoadGoroutine.
logrus.Debugf("docker-daemon: Waiting for status")
err := <-d.statusChannel
return err
}
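The Commit/Close handshake above is an instance of a general pattern: stream through an io.Pipe into a consumer goroutine, and collect the consumer's verdict over a buffered status channel so the goroutine can always terminate. The pattern isolated, with a trivial consumer standing in for client.ImageLoad:

package main

import (
	"fmt"
	"io"
	"io/ioutil"
)

func main() {
	reader, writer := io.Pipe()

	// Buffered, so the goroutine can report and exit even if nobody reads the status.
	status := make(chan error, 1)

	go func() {
		// Trivial consumer standing in for client.ImageLoad.
		_, err := io.Copy(ioutil.Discard, reader)
		if err != nil {
			reader.CloseWithError(err)
		} else {
			reader.Close()
		}
		status <- err
	}()

	// "Commit": finish writing, close the pipe, then wait for the consumer's verdict.
	if _, err := writer.Write([]byte("tar stream bytes")); err != nil {
		panic(err)
	}
	if err := writer.Close(); err != nil {
		panic(err)
	}
	fmt.Println("consumer status:", <-status)
}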


@@ -0,0 +1,85 @@
package daemon
import (
"io"
"io/ioutil"
"os"
"github.com/containers/image/docker/tarfile"
"github.com/containers/image/types"
"github.com/docker/docker/client"
"github.com/pkg/errors"
"golang.org/x/net/context"
)
const temporaryDirectoryForBigFiles = "/var/tmp" // Do not use the system default of os.TempDir(), usually /tmp, because with systemd it could be a tmpfs.
type daemonImageSource struct {
ref daemonReference
*tarfile.Source // Implements most of types.ImageSource
tarCopyPath string
}
type layerInfo struct {
path string
size int64
}
// newImageSource returns a types.ImageSource for the specified image reference.
// The caller must call .Close() on the returned ImageSource.
//
// It would be great if we were able to stream the input tar as it is being
// sent; but Docker sends the top-level manifest, which determines which paths
// to look for, at the end, so we will need to seek back and re-read several times.
// (We could, perhaps, expect an exact sequence, assume that the first plaintext file
// is the config, and that the following len(RootFS) files are the layers, but that feels
// way too brittle.)
func newImageSource(ctx *types.SystemContext, ref daemonReference) (types.ImageSource, error) {
c, err := client.NewClient(client.DefaultDockerHost, "1.22", nil, nil) // FIXME: overridable host
if err != nil {
return nil, errors.Wrap(err, "Error initializing docker engine client")
}
// Per NewReference(), ref.StringWithinTransport() is either an image ID (config digest), or a !reference.NameOnly() reference.
// Either way ImageSave should create a tarball with exactly one image.
inputStream, err := c.ImageSave(context.TODO(), []string{ref.StringWithinTransport()})
if err != nil {
return nil, errors.Wrap(err, "Error loading image from docker engine")
}
defer inputStream.Close()
// FIXME: use SystemContext here.
tarCopyFile, err := ioutil.TempFile(temporaryDirectoryForBigFiles, "docker-daemon-tar")
if err != nil {
return nil, err
}
defer tarCopyFile.Close()
succeeded := false
defer func() {
if !succeeded {
os.Remove(tarCopyFile.Name())
}
}()
if _, err := io.Copy(tarCopyFile, inputStream); err != nil {
return nil, err
}
succeeded = true
return &daemonImageSource{
ref: ref,
Source: tarfile.NewSource(tarCopyFile.Name()),
tarCopyPath: tarCopyFile.Name(),
}, nil
}
// Reference returns the reference used to set up this source, _as specified by the user_
// (not as the image itself, or its underlying storage, claims). This can be used e.g. to determine which public keys are trusted for this image.
func (s *daemonImageSource) Reference() types.ImageReference {
return s.ref
}
// Close removes resources associated with an initialized ImageSource, if any.
func (s *daemonImageSource) Close() error {
return os.Remove(s.tarCopyPath)
}


@@ -0,0 +1,184 @@
package daemon
import (
"github.com/pkg/errors"
"github.com/containers/image/docker/reference"
"github.com/containers/image/image"
"github.com/containers/image/transports"
"github.com/containers/image/types"
"github.com/opencontainers/go-digest"
)
func init() {
transports.Register(Transport)
}
// Transport is an ImageTransport for images managed by a local Docker daemon.
var Transport = daemonTransport{}
type daemonTransport struct{}
// Name returns the name of the transport, which must be unique among other transports.
func (t daemonTransport) Name() string {
return "docker-daemon"
}
// ParseReference converts a string, which should not start with the ImageTransport.Name prefix, into an ImageReference.
func (t daemonTransport) ParseReference(reference string) (types.ImageReference, error) {
return ParseReference(reference)
}
// ValidatePolicyConfigurationScope checks that scope is a valid name for a signature.PolicyTransportScopes keys
// (i.e. a valid PolicyConfigurationIdentity() or PolicyConfigurationNamespaces() return value).
// It is acceptable to allow an invalid value which will never be matched, it can "only" cause user confusion.
// scope passed to this function will not be "", that value is always allowed.
func (t daemonTransport) ValidatePolicyConfigurationScope(scope string) error {
// See the explanation in daemonReference.PolicyConfigurationIdentity.
return errors.New(`docker-daemon: does not support any scopes except the default "" one`)
}
// daemonReference is an ImageReference for images managed by a local Docker daemon
// Exactly one of id and ref can be set.
// For daemonImageSource, both id and ref are acceptable, ref must not be a NameOnly (interpreted as all tags in that repository by the daemon)
// For daemonImageDestination, it must be a ref, which is NamedTagged.
// (We could, in principle, also allow storing images without tagging them, and the user would have to refer to them using the docker image ID = config digest.
// Using the config digest requires the caller to parse the manifest themselves, which is very cumbersome; so, for now, we don't bother.)
type daemonReference struct {
id digest.Digest
ref reference.Named // !reference.IsNameOnly
}
// ParseReference converts a string, which should not start with the ImageTransport.Name prefix, into an ImageReference.
func ParseReference(refString string) (types.ImageReference, error) {
// This is intended to be compatible with reference.ParseAnyReference, but more strict about refusing some of the ambiguous cases.
// In particular, this rejects unprefixed digest values (64 hex chars), and sha256 digest prefixes (sha256:fewer-than-64-hex-chars).
// digest:hexstring is structurally the same as a reponame:tag (meaning docker.io/library/reponame:tag).
// reference.ParseAnyReference interprets such strings as digests.
if dgst, err := digest.Parse(refString); err == nil {
// The daemon explicitly refuses to tag images with a reponame equal to digest.Canonical - but _only_ this digest name.
// Other digest references are ambiguous, so refuse them.
if dgst.Algorithm() != digest.Canonical {
return nil, errors.Errorf("Invalid docker-daemon: reference %s: only digest algorithm %s accepted", refString, digest.Canonical)
}
return NewReference(dgst, nil)
}
ref, err := reference.ParseNormalizedNamed(refString) // This also rejects unprefixed digest values
if err != nil {
return nil, err
}
if reference.FamiliarName(ref) == digest.Canonical.String() {
return nil, errors.Errorf("Invalid docker-daemon: reference %s: The %s repository name is reserved for (non-shortened) digest references", refString, digest.Canonical)
}
return NewReference("", ref)
}
// NewReference returns a docker-daemon reference for either the supplied image ID (config digest) or the supplied reference (which must satisfy !reference.IsNameOnly)
func NewReference(id digest.Digest, ref reference.Named) (types.ImageReference, error) {
if id != "" && ref != nil {
return nil, errors.New("docker-daemon: reference must not have an image ID and a reference string specified at the same time")
}
if ref != nil {
if reference.IsNameOnly(ref) {
return nil, errors.Errorf("docker-daemon: reference %s has neither a tag nor a digest", reference.FamiliarString(ref))
}
// A github.com/docker/distribution/reference value can have a tag and a digest at the same time!
// Most versions of docker/reference do not handle that (ignoring the tag), so reject such input.
// This MAY be accepted in the future.
_, isTagged := ref.(reference.NamedTagged)
_, isDigested := ref.(reference.Canonical)
if isTagged && isDigested {
return nil, errors.Errorf("docker-daemon: references with both a tag and digest are currently not supported")
}
}
return daemonReference{
id: id,
ref: ref,
}, nil
}
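A usage sketch of the disambiguation rules above, assuming the vendored import path: a full sha256 image ID parses as an ID, name:tag parses as a named reference, and ambiguous shapes are rejected:

package main

import (
	"fmt"

	"github.com/containers/image/docker/daemon"
)

func main() {
	// A full sha256 image ID parses as an ID reference.
	byID, err := daemon.ParseReference("sha256:e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855")
	if err != nil {
		panic(err)
	}
	fmt.Println(byID.StringWithinTransport())

	// name:tag parses as a normalized named reference.
	byName, err := daemon.ParseReference("busybox:latest")
	if err != nil {
		panic(err)
	}
	fmt.Println(byName.StringWithinTransport()) // busybox:latest (the familiar form)

	// Ambiguous shapes are rejected, e.g. a truncated digest.
	if _, err := daemon.ParseReference("sha256:abcd"); err != nil {
		fmt.Println("rejected:", err)
	}
}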
func (ref daemonReference) Transport() types.ImageTransport {
return Transport
}
// StringWithinTransport returns a string representation of the reference, which MUST be such that
// reference.Transport().ParseReference(reference.StringWithinTransport()) returns an equivalent reference.
// NOTE: The returned string is not promised to be equal to the original input to ParseReference;
// e.g. default attribute values omitted by the user may be filled in in the return value, or vice versa.
// WARNING: Do not use the return value in the UI to describe an image, it does not contain the Transport().Name() prefix;
// instead, see transports.ImageName().
func (ref daemonReference) StringWithinTransport() string {
switch {
case ref.id != "":
return ref.id.String()
case ref.ref != nil:
return reference.FamiliarString(ref.ref)
default: // Coverage: Should never happen, NewReference above should refuse such values.
panic("Internal inconsistency: daemonReference has empty id and nil ref")
}
}
// DockerReference returns a Docker reference associated with this reference
// (fully explicit, i.e. !reference.IsNameOnly, but reflecting user intent,
// not e.g. after redirect or alias processing), or nil if unknown/not applicable.
func (ref daemonReference) DockerReference() reference.Named {
return ref.ref // May be nil
}
// PolicyConfigurationIdentity returns a string representation of the reference, suitable for policy lookup.
// This MUST reflect user intent, not e.g. after processing of third-party redirects or aliases;
// The value SHOULD be fully explicit about its semantics, with no hidden defaults, AND canonical
// (i.e. various references with exactly the same semantics should return the same configuration identity)
// It is fine for the return value to be equal to StringWithinTransport(), and it is desirable but
// not required/guaranteed that it will be a valid input to Transport().ParseReference().
// Returns "" if configuration identities for these references are not supported.
func (ref daemonReference) PolicyConfigurationIdentity() string {
// We must allow referring to images in the daemon by image ID, otherwise untagged images would not be accessible.
// But the existence of image IDs means that we can't truly namespace the input; the untagged images would have to fall into the default policy,
// which can be unexpected. So, punt.
return "" // This still allows using the default "" scope to define a policy for this transport.
}
// PolicyConfigurationNamespaces returns a list of other policy configuration namespaces to search
// for if explicit configuration for PolicyConfigurationIdentity() is not set. The list will be processed
// in order, terminating on first match, and an implicit "" is always checked at the end.
// It is STRONGLY recommended for the first element, if any, to be a prefix of PolicyConfigurationIdentity(),
// and each following element to be a prefix of the element preceding it.
func (ref daemonReference) PolicyConfigurationNamespaces() []string {
// See the explanation in daemonReference.PolicyConfigurationIdentity.
return []string{}
}
// NewImage returns a types.Image for this reference.
// The caller must call .Close() on the returned Image.
func (ref daemonReference) NewImage(ctx *types.SystemContext) (types.Image, error) {
src, err := newImageSource(ctx, ref)
if err != nil {
return nil, err
}
return image.FromSource(src)
}
// NewImageSource returns a types.ImageSource for this reference,
// asking the backend to use a manifest from requestedManifestMIMETypes if possible.
// nil requestedManifestMIMETypes means manifest.DefaultRequestedManifestMIMETypes.
// The caller must call .Close() on the returned ImageSource.
func (ref daemonReference) NewImageSource(ctx *types.SystemContext, requestedManifestMIMETypes []string) (types.ImageSource, error) {
return newImageSource(ctx, ref)
}
// NewImageDestination returns a types.ImageDestination for this reference.
// The caller must call .Close() on the returned ImageDestination.
func (ref daemonReference) NewImageDestination(ctx *types.SystemContext) (types.ImageDestination, error) {
return newImageDestination(ctx, ref)
}
// DeleteImage deletes the named image from the registry, if supported.
func (ref daemonReference) DeleteImage(ctx *types.SystemContext) error {
// Should this just untag the image? Should this stop running containers?
// The semantics is not quite as clear as for remote repositories.
// The user can run (docker rmi) directly anyway, so, for now(?), punt instead of trying to guess what the user meant.
return errors.Errorf("Deleting images not implemented for docker-daemon: images")
}


@@ -7,14 +7,22 @@ import (
"fmt"
"io"
"io/ioutil"
"net"
"net/http"
"os"
"path/filepath"
"strings"
"time"
"github.com/Sirupsen/logrus"
"github.com/containers/image/docker/reference"
"github.com/containers/image/types"
"github.com/docker/docker/pkg/homedir"
"github.com/containers/storage/pkg/homedir"
"github.com/docker/distribution/registry/client"
"github.com/docker/go-connections/sockets"
"github.com/docker/go-connections/tlsconfig"
"github.com/opencontainers/go-digest"
"github.com/pkg/errors"
)
const (
@@ -26,56 +34,204 @@ const (
dockerCfgFileName = "config.json"
dockerCfgObsolete = ".dockercfg"
-baseURL = "%s://%s/v2/"
-tagsURL = "%s/tags/list"
-manifestURL = "%s/manifests/%s"
-blobsURL = "%s/blobs/%s"
-blobUploadURL = "%s/blobs/uploads/"
+systemPerHostCertDirPath = "/etc/docker/certs.d"
+resolvedPingV2URL = "%s://%s/v2/"
+resolvedPingV1URL = "%s://%s/v1/_ping"
+tagsPath = "/v2/%s/tags/list"
+manifestPath = "/v2/%s/manifests/%s"
+blobsPath = "/v2/%s/blobs/%s"
+blobUploadPath = "/v2/%s/blobs/uploads/"
+extensionsSignaturePath = "/extensions/v2/%s/signatures/%s"
+minimumTokenLifetimeSeconds = 60
+extensionSignatureSchemaVersion = 2 // extensionSignature.Version
+extensionSignatureTypeAtomic = "atomic" // extensionSignature.Type
)
// ErrV1NotSupported is returned when we're trying to talk to a
// docker V1 registry.
var ErrV1NotSupported = errors.New("can't talk to a V1 docker registry")
// extensionSignature and extensionSignatureList come from github.com/openshift/origin/pkg/dockerregistry/server/signaturedispatcher.go:
// signature represents a Docker image signature.
type extensionSignature struct {
Version int `json:"schemaVersion"` // Version specifies the schema version
Name string `json:"name"` // Name must be in "sha256:<digest>@signatureName" format
Type string `json:"type"` // Type is optional, of not set it will be defaulted to "AtomicImageV1"
Content []byte `json:"content"` // Content contains the signature
}
// signatureList represents list of Docker image signatures.
type extensionSignatureList struct {
Signatures []extensionSignature `json:"signatures"`
}
type bearerToken struct {
Token string `json:"token"`
ExpiresIn int `json:"expires_in"`
IssuedAt time.Time `json:"issued_at"`
}
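The token caching implemented later in this file (setupRequestAuth) reduces to simple expiry arithmetic over these fields. A tiny standalone sketch with a synthetic token:

package main

import (
	"fmt"
	"time"
)

type bearerToken struct {
	Token     string
	ExpiresIn int
	IssuedAt  time.Time
}

func main() {
	// Synthetic token: issued now, valid for 300 seconds.
	tok := bearerToken{Token: "xyz", ExpiresIn: 300, IssuedAt: time.Now()}

	// Mirrors c.tokenExpiration = token.IssuedAt.Add(time.Duration(token.ExpiresIn) * time.Second).
	expiration := tok.IssuedAt.Add(time.Duration(tok.ExpiresIn) * time.Second)

	// Reuse the cached token only while it is still valid.
	if time.Now().After(expiration) {
		fmt.Println("token expired, fetch a new one")
	} else {
		fmt.Println("token valid until", expiration.Format(time.RFC3339))
	}
}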
// dockerClient is configuration for dealing with a single Docker registry.
type dockerClient struct {
-ctx *types.SystemContext
-registry string
-username string
-password string
-wwwAuthenticate string // Cache of a value set by ping() if scheme is not empty
-scheme string // Cache of a value returned by a successful ping() if not empty
-client *http.Client
-signatureBase signatureStorageBase
+// The following members are set by newDockerClient and do not change afterwards.
+ctx *types.SystemContext
+registry string
+username string
+password string
+client *http.Client
+signatureBase signatureStorageBase
+scope authScope
+// The following members are detected registry properties:
+// They are set after a successful detectProperties(), and never change afterwards.
+scheme string // Empty value also used to indicate detectProperties() has not yet succeeded.
+challenges []challenge
+supportsSignatures bool
+// The following members are private state for setupRequestAuth, both are valid if token != nil.
+token *bearerToken
+tokenExpiration time.Time
}
type authScope struct {
remoteName string
actions string
}
// this is cloned from docker/go-connections because upstream docker has changed
// it, and `make deps` here fails otherwise.
// We'll drop this once we upgrade to docker 1.13.x deps.
func serverDefault() *tls.Config {
return &tls.Config{
// Avoid fallback to SSL protocols < TLS1.0
MinVersion: tls.VersionTLS10,
PreferServerCipherSuites: true,
CipherSuites: tlsconfig.DefaultServerAcceptedCiphers,
}
}
func newTransport() *http.Transport {
direct := &net.Dialer{
Timeout: 30 * time.Second,
KeepAlive: 30 * time.Second,
DualStack: true,
}
tr := &http.Transport{
Proxy: http.ProxyFromEnvironment,
Dial: direct.Dial,
TLSHandshakeTimeout: 10 * time.Second,
// TODO(dmcgowan): Call close idle connections when complete and use keep alive
DisableKeepAlives: true,
}
proxyDialer, err := sockets.DialerFromEnvironment(direct)
if err == nil {
tr.Dial = proxyDialer.Dial
}
return tr
}
// dockerCertDir returns a path to a directory to be consumed by setupCertificates() depending on ctx and hostPort.
func dockerCertDir(ctx *types.SystemContext, hostPort string) string {
if ctx != nil && ctx.DockerCertPath != "" {
return ctx.DockerCertPath
}
var hostCertDir string
if ctx != nil && ctx.DockerPerHostCertDirPath != "" {
hostCertDir = ctx.DockerPerHostCertDirPath
} else if ctx != nil && ctx.RootForImplicitAbsolutePaths != "" {
hostCertDir = filepath.Join(ctx.RootForImplicitAbsolutePaths, systemPerHostCertDirPath)
} else {
hostCertDir = systemPerHostCertDirPath
}
return filepath.Join(hostCertDir, hostPort)
}
func setupCertificates(dir string, tlsc *tls.Config) error {
logrus.Debugf("Looking for TLS certificates and private keys in %s", dir)
fs, err := ioutil.ReadDir(dir)
if err != nil {
if os.IsNotExist(err) {
return nil
}
return err
}
for _, f := range fs {
fullPath := filepath.Join(dir, f.Name())
if strings.HasSuffix(f.Name(), ".crt") {
systemPool, err := tlsconfig.SystemCertPool()
if err != nil {
return errors.Wrap(err, "unable to get system cert pool")
}
tlsc.RootCAs = systemPool
logrus.Debugf(" crt: %s", fullPath)
data, err := ioutil.ReadFile(fullPath)
if err != nil {
return err
}
tlsc.RootCAs.AppendCertsFromPEM(data)
}
if strings.HasSuffix(f.Name(), ".cert") {
certName := f.Name()
keyName := certName[:len(certName)-5] + ".key"
logrus.Debugf(" cert: %s", fullPath)
if !hasFile(fs, keyName) {
return errors.Errorf("missing key %s for client certificate %s. Note that CA certificates should use the extension .crt", keyName, certName)
}
cert, err := tls.LoadX509KeyPair(filepath.Join(dir, certName), filepath.Join(dir, keyName))
if err != nil {
return err
}
tlsc.Certificates = append(tlsc.Certificates, cert)
}
if strings.HasSuffix(f.Name(), ".key") {
keyName := f.Name()
certName := keyName[:len(keyName)-4] + ".cert"
logrus.Debugf(" key: %s", fullPath)
if !hasFile(fs, certName) {
return errors.Errorf("missing client certificate %s for key %s", certName, keyName)
}
}
}
return nil
}
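The pairing rules above follow docker's certs.d layout: *.crt files are CA roots, while *.cert and *.key files form client certificate pairs. A standalone sketch of consuming one such directory with the standard library; the directory and file names are hypothetical:

package main

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"io/ioutil"
)

func main() {
	dir := "/etc/docker/certs.d/myregistry.example.com:5000" // hypothetical

	tlsc := &tls.Config{}

	// *.crt files are CA roots.
	pool := x509.NewCertPool()
	if ca, err := ioutil.ReadFile(dir + "/ca.crt"); err == nil {
		pool.AppendCertsFromPEM(ca)
		tlsc.RootCAs = pool
	}

	// client.cert must be paired with client.key, as enforced above.
	cert, err := tls.LoadX509KeyPair(dir+"/client.cert", dir+"/client.key")
	if err != nil {
		fmt.Println("no client certificate:", err)
		return
	}
	tlsc.Certificates = append(tlsc.Certificates, cert)
	fmt.Println("loaded client certificate for", dir)
}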
func hasFile(files []os.FileInfo, name string) bool {
for _, f := range files {
if f.Name() == name {
return true
}
}
return false
}
// newDockerClient returns a new dockerClient instance for refHostname (a host as specified in the Docker image reference, not canonicalized to dockerRegistry)
// “write” specifies whether the client will be used for "write" access (in particular passed to lookaside.go:toplevelFromSection)
-func newDockerClient(ctx *types.SystemContext, ref dockerReference, write bool) (*dockerClient, error) {
-registry := ref.ref.Hostname()
+func newDockerClient(ctx *types.SystemContext, ref dockerReference, write bool, actions string) (*dockerClient, error) {
+registry := reference.Domain(ref.ref)
if registry == dockerHostname {
registry = dockerRegistry
}
-username, password, err := getAuth(ref.ref.Hostname())
+username, password, err := getAuth(ctx, reference.Domain(ref.ref))
if err != nil {
return nil, err
}
-var tr *http.Transport
-if ctx != nil && (ctx.DockerCertPath != "" || ctx.DockerInsecureSkipTLSVerify) {
-tlsc := &tls.Config{}
-if ctx.DockerCertPath != "" {
-cert, err := tls.LoadX509KeyPair(filepath.Join(ctx.DockerCertPath, "cert.pem"), filepath.Join(ctx.DockerCertPath, "key.pem"))
-if err != nil {
-return nil, fmt.Errorf("Error loading x509 key pair: %s", err)
-}
-tlsc.Certificates = append(tlsc.Certificates, cert)
-}
-tlsc.InsecureSkipVerify = ctx.DockerInsecureSkipTLSVerify
-tr = &http.Transport{
-TLSClientConfig: tlsc,
-}
-}
+tr := newTransport()
+tr.TLSClientConfig = serverDefault()
+// It is undefined whether the host[:port] string for dockerHostname should be dockerHostname or dockerRegistry,
+// because docker/docker does not read the certs.d subdirectory at all in that case. We use the user-visible
+// dockerHostname here, because it is more symmetrical to read the configuration in that case as well, and because
+// generally the UI hides the existence of the different dockerRegistry. But note that this behavior is
+// undocumented and may change if docker/docker changes.
+certDir := dockerCertDir(ctx, reference.Domain(ref.ref))
+if err := setupCertificates(certDir, tr.TLSClientConfig); err != nil {
+return nil, err
+}
-client := &http.Client{}
-if tr != nil {
-client.Transport = tr
-}
+if ctx != nil && ctx.DockerInsecureSkipTLSVerify {
+tr.TLSClientConfig.InsecureSkipVerify = true
+}
+client := &http.Client{Transport: tr}
sigBase, err := configuredSignatureStorageBase(ctx, ref, write)
if err != nil {
@@ -89,29 +245,29 @@ func newDockerClient(ctx *types.SystemContext, ref dockerReference, write bool)
password: password,
client: client,
signatureBase: sigBase,
+scope: authScope{
+actions: actions,
+remoteName: reference.Path(ref.ref),
+},
}, nil
}
// makeRequest creates and executes a http.Request with the specified parameters, adding authentication and TLS options for the Docker client.
-// url is NOT an absolute URL, but a path relative to the /v2/ top-level API path. The host name and schema is taken from the client or autodetected.
-func (c *dockerClient) makeRequest(method, url string, headers map[string][]string, stream io.Reader) (*http.Response, error) {
-if c.scheme == "" {
-pr, err := c.ping()
-if err != nil {
-return nil, err
-}
-c.wwwAuthenticate = pr.WWWAuthenticate
-c.scheme = pr.scheme
-}
-url = fmt.Sprintf(baseURL, c.scheme, c.registry) + url
-return c.makeRequestToResolvedURL(method, url, headers, stream, -1)
+// The host name and schema is taken from the client or autodetected, and the path is relative to it, i.e. the path usually starts with /v2/.
+func (c *dockerClient) makeRequest(method, path string, headers map[string][]string, stream io.Reader) (*http.Response, error) {
+if err := c.detectProperties(); err != nil {
+return nil, err
+}
+url := fmt.Sprintf("%s://%s%s", c.scheme, c.registry, path)
+return c.makeRequestToResolvedURL(method, url, headers, stream, -1, true)
}
// makeRequestToResolvedURL creates and executes a http.Request with the specified parameters, adding authentication and TLS options for the Docker client.
// streamLen, if not -1, specifies the length of the data expected on stream.
// makeRequest should generally be preferred.
-func (c *dockerClient) makeRequestToResolvedURL(method, url string, headers map[string][]string, stream io.Reader, streamLen int64) (*http.Response, error) {
+// TODO(runcom): too many arguments here, use a struct
+func (c *dockerClient) makeRequestToResolvedURL(method, url string, headers map[string][]string, stream io.Reader, streamLen int64, sendAuth bool) (*http.Response, error) {
req, err := http.NewRequest(method, url, stream)
if err != nil {
return nil, err
@@ -125,7 +281,10 @@ func (c *dockerClient) makeRequestToResolvedURL(method, url string, headers map[
req.Header.Add(n, hh)
}
}
if c.wwwAuthenticate != "" {
if c.ctx != nil && c.ctx.DockerRegistryUserAgent != "" {
req.Header.Add("User-Agent", c.ctx.DockerRegistryUserAgent)
}
if sendAuth {
if err := c.setupRequestAuth(req); err != nil {
return nil, err
}
@@ -138,63 +297,53 @@ func (c *dockerClient) makeRequestToResolvedURL(method, url string, headers map[
return res, nil
}
// we're using the challenges from the /v2/ ping response and not the ones from the destination
// URL in this request because:
//
// 1) docker does that as well
// 2) gcr.io is sending 401 without a WWW-Authenticate header in the real request
//
// debugging: https://github.com/containers/image/pull/211#issuecomment-273426236 and follows up
func (c *dockerClient) setupRequestAuth(req *http.Request) error {
tokens := strings.SplitN(strings.TrimSpace(c.wwwAuthenticate), " ", 2)
if len(tokens) != 2 {
return fmt.Errorf("expected 2 tokens in WWW-Authenticate: %d, %s", len(tokens), c.wwwAuthenticate)
}
switch tokens[0] {
case "Basic":
req.SetBasicAuth(c.username, c.password)
if len(c.challenges) == 0 {
return nil
case "Bearer":
// FIXME? This gets a new token for every API request;
// we may be easily able to reuse a previous token, e.g.
// for OpenShift the token only identifies the user and does not vary
// across operations. Should we just try the request first, and
// only get a new token on failure?
// OTOH what to do with the single-use body stream in that case?
// Try performing the request, expecting it to fail.
testReq := *req
// Do not use the body stream, or we couldn't reuse it for the "real" call later.
testReq.Body = nil
testReq.ContentLength = 0
res, err := c.client.Do(&testReq)
if err != nil {
return err
}
chs := parseAuthHeader(res.Header)
if res.StatusCode != http.StatusUnauthorized || chs == nil || len(chs) == 0 {
// no need for bearer? wtf?
}
schemeNames := make([]string, 0, len(c.challenges))
for _, challenge := range c.challenges {
schemeNames = append(schemeNames, challenge.Scheme)
switch challenge.Scheme {
case "basic":
req.SetBasicAuth(c.username, c.password)
return nil
case "bearer":
if c.token == nil || time.Now().After(c.tokenExpiration) {
realm, ok := challenge.Parameters["realm"]
if !ok {
return errors.Errorf("missing realm in bearer auth challenge")
}
service, _ := challenge.Parameters["service"] // Will be "" if not present
scope := fmt.Sprintf("repository:%s:%s", c.scope.remoteName, c.scope.actions)
token, err := c.getBearerToken(realm, service, scope)
if err != nil {
return err
}
c.token = token
c.tokenExpiration = token.IssuedAt.Add(time.Duration(token.ExpiresIn) * time.Second)
}
req.Header.Set("Authorization", fmt.Sprintf("Bearer %s", c.token.Token))
return nil
default:
logrus.Debugf("no handler for %s authentication", challenge.Scheme)
}
// Arbitrarily use the first challenge; there is no reason to expect more than one.
challenge := chs[0]
if challenge.Scheme != "bearer" { // Another artifact of trying to handle WWW-Authenticate before it actually happens.
return fmt.Errorf("Unimplemented: WWW-Authenticate Bearer replaced by %#v", challenge.Scheme)
}
realm, ok := challenge.Parameters["realm"]
if !ok {
return fmt.Errorf("missing realm in bearer auth challenge")
}
service, _ := challenge.Parameters["service"] // Will be "" if not present
scope, _ := challenge.Parameters["scope"] // Will be "" if not present
token, err := c.getBearerToken(realm, service, scope)
if err != nil {
return err
}
req.Header.Set("Authorization", fmt.Sprintf("Bearer %s", token))
return nil
}
return fmt.Errorf("no handler for %s authentication", tokens[0])
// support docker bearer with authconfig's Auth string? see docker2aci
logrus.Infof("None of the challenges sent by server (%s) are supported, trying an unauthenticated request anyway", strings.Join(schemeNames, ", "))
return nil
}
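A concrete walk-through of the challenge flow implemented above, with illustrative values (the header shown is what Docker Hub returns on the /v2/ ping; the scope string follows the fmt.Sprintf in the bearer branch):
// Ping response header:
//   WWW-Authenticate: Bearer realm="https://auth.docker.io/token",service="registry.docker.io"
// For c.scope = {remoteName: "library/ubuntu", actions: "pull"} the client then issues:
//   GET https://auth.docker.io/token?service=registry.docker.io&scope=repository:library/ubuntu:pull
// and caches the returned token until tokenExpiration = IssuedAt + ExpiresIn.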
func (c *dockerClient) getBearerToken(realm, service, scope string) (string, error) {
func (c *dockerClient) getBearerToken(realm, service, scope string) (*bearerToken, error) {
authReq, err := http.NewRequest("GET", realm, nil)
if err != nil {
return "", err
return nil, err
}
getParams := authReq.URL.Query()
if service != "" {
@@ -207,145 +356,182 @@ func (c *dockerClient) getBearerToken(realm, service, scope string) (string, err
if c.username != "" && c.password != "" {
authReq.SetBasicAuth(c.username, c.password)
}
// insecure for now to contact the external token service
tr := &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}}
tr := newTransport()
// TODO(runcom): insecure for now to contact the external token service
tr.TLSClientConfig = &tls.Config{InsecureSkipVerify: true}
client := &http.Client{Transport: tr}
res, err := client.Do(authReq)
if err != nil {
return "", err
return nil, err
}
defer res.Body.Close()
switch res.StatusCode {
case http.StatusUnauthorized:
return "", fmt.Errorf("unable to retrieve auth token: 401 unauthorized")
return nil, errors.Errorf("unable to retrieve auth token: 401 unauthorized")
case http.StatusOK:
break
default:
return "", fmt.Errorf("unexpected http code: %d, URL: %s", res.StatusCode, authReq.URL)
return nil, errors.Errorf("unexpected http code: %d, URL: %s", res.StatusCode, authReq.URL)
}
tokenBlob, err := ioutil.ReadAll(res.Body)
if err != nil {
return "", err
return nil, err
}
tokenStruct := struct {
Token string `json:"token"`
}{}
if err := json.Unmarshal(tokenBlob, &tokenStruct); err != nil {
return "", err
var token bearerToken
if err := json.Unmarshal(tokenBlob, &token); err != nil {
return nil, err
}
// TODO(runcom): reuse tokens?
//hostAuthTokens, ok = rb.hostsV2AuthTokens[req.URL.Host]
//if !ok {
//hostAuthTokens = make(map[string]string)
//rb.hostsV2AuthTokens[req.URL.Host] = hostAuthTokens
//}
//hostAuthTokens[repo] = tokenStruct.Token
return tokenStruct.Token, nil
if token.ExpiresIn < minimumTokenLifetimeSeconds {
token.ExpiresIn = minimumTokenLifetimeSeconds
logrus.Debugf("Increasing token expiration to: %d seconds", token.ExpiresIn)
}
if token.IssuedAt.IsZero() {
token.IssuedAt = time.Now().UTC()
}
return &token, nil
}
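The bearerToken type unmarshalled above is not shown in this diff; a minimal sketch of the fields the code relies on (Token, ExpiresIn, IssuedAt), matching a typical token-service response:
// Illustrative token service response:
//   {"token": "eyJh...", "expires_in": 300, "issued_at": "2017-06-28T15:04:05Z"}
type bearerToken struct {
	Token     string    `json:"token"`
	ExpiresIn int       `json:"expires_in"`
	IssuedAt  time.Time `json:"issued_at"`
}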
func getAuth(hostname string) (string, string, error) {
// TODO(runcom): get this from *cli.Context somehow
//if username != "" && password != "" {
//return username, password, nil
//}
if hostname == dockerHostname {
hostname = dockerAuthRegistry
func getAuth(ctx *types.SystemContext, registry string) (string, string, error) {
if ctx != nil && ctx.DockerAuthConfig != nil {
return ctx.DockerAuthConfig.Username, ctx.DockerAuthConfig.Password, nil
}
var dockerAuth dockerConfigFile
dockerCfgPath := filepath.Join(getDefaultConfigDir(".docker"), dockerCfgFileName)
if _, err := os.Stat(dockerCfgPath); err == nil {
j, err := ioutil.ReadFile(dockerCfgPath)
if err != nil {
return "", "", err
}
var dockerAuth dockerConfigFile
if err := json.Unmarshal(j, &dockerAuth); err != nil {
return "", "", err
}
// try the normal case
if c, ok := dockerAuth.AuthConfigs[hostname]; ok {
return decodeDockerAuth(c.Auth)
}
} else if os.IsNotExist(err) {
// try old config path
oldDockerCfgPath := filepath.Join(getDefaultConfigDir(dockerCfgObsolete))
if _, err := os.Stat(oldDockerCfgPath); err != nil {
return "", "", nil //missing file is not an error
if os.IsNotExist(err) {
return "", "", nil
}
return "", "", errors.Wrap(err, oldDockerCfgPath)
}
j, err := ioutil.ReadFile(oldDockerCfgPath)
if err != nil {
return "", "", err
}
var dockerAuthOld map[string]dockerAuthConfigObsolete
if err := json.Unmarshal(j, &dockerAuthOld); err != nil {
if err := json.Unmarshal(j, &dockerAuth.AuthConfigs); err != nil {
return "", "", err
}
if c, ok := dockerAuthOld[hostname]; ok {
return decodeDockerAuth(c.Auth)
}
} else {
// if file is there but we can't stat it for any reason other
// than it doesn't exist then stop
return "", "", fmt.Errorf("%s - %v", dockerCfgPath, err)
} else if err != nil {
return "", "", errors.Wrap(err, dockerCfgPath)
}
// I'm feeling lucky
if c, exists := dockerAuth.AuthConfigs[registry]; exists {
return decodeDockerAuth(c.Auth)
}
// bad luck; let's normalize the entries first
registry = normalizeRegistry(registry)
normalizedAuths := map[string]dockerAuthConfig{}
for k, v := range dockerAuth.AuthConfigs {
normalizedAuths[normalizeRegistry(k)] = v
}
if c, exists := normalizedAuths[registry]; exists {
return decodeDockerAuth(c.Auth)
}
return "", "", nil
}
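For reference, a minimal ~/.docker/config.json that the lookup above would accept (registry name and credentials are illustrative):
{
	"auths": {
		"registry.example.com": {
			"auth": "dXNlcjpwYXNzd29yZA=="
		}
	}
}
The auth value is the base64-encoded "user:password" pair; decodeDockerAuth, further down, reverses it.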
type apiErr struct {
Code string
Message string
Detail interface{}
}
// detectProperties detects various properties of the registry.
// See the dockerClient documentation for members which are affected by this.
func (c *dockerClient) detectProperties() error {
if c.scheme != "" {
return nil
}
type pingResponse struct {
WWWAuthenticate string
APIVersion string
scheme string
errors []apiErr
}
func (c *dockerClient) ping() (*pingResponse, error) {
ping := func(scheme string) (*pingResponse, error) {
url := fmt.Sprintf(baseURL, scheme, c.registry)
resp, err := c.client.Get(url)
ping := func(scheme string) error {
url := fmt.Sprintf(resolvedPingV2URL, scheme, c.registry)
resp, err := c.makeRequestToResolvedURL("GET", url, nil, nil, -1, true)
logrus.Debugf("Ping %s err %#v", url, err)
if err != nil {
return nil, err
return err
}
defer resp.Body.Close()
logrus.Debugf("Ping %s status %d", scheme+"://"+c.registry+"/v2/", resp.StatusCode)
logrus.Debugf("Ping %s status %d", url, resp.StatusCode)
if resp.StatusCode != http.StatusOK && resp.StatusCode != http.StatusUnauthorized {
return nil, fmt.Errorf("error pinging repository, response code %d", resp.StatusCode)
return errors.Errorf("error pinging repository, response code %d", resp.StatusCode)
}
pr := &pingResponse{}
pr.WWWAuthenticate = resp.Header.Get("WWW-Authenticate")
pr.APIVersion = resp.Header.Get("Docker-Distribution-Api-Version")
pr.scheme = scheme
if resp.StatusCode == http.StatusUnauthorized {
type APIErrors struct {
Errors []apiErr
}
errs := &APIErrors{}
if err := json.NewDecoder(resp.Body).Decode(errs); err != nil {
return nil, err
}
pr.errors = errs.Errors
c.challenges = parseAuthHeader(resp.Header)
c.scheme = scheme
c.supportsSignatures = resp.Header.Get("X-Registry-Supports-Signatures") == "1"
return nil
}
err := ping("https")
if err != nil && c.ctx != nil && c.ctx.DockerInsecureSkipTLSVerify {
err = ping("http")
}
if err != nil {
err = errors.Wrap(err, "pinging docker registry returned")
if c.ctx != nil && c.ctx.DockerDisableV1Ping {
return err
}
// best effort to understand if we're talking to a V1 registry
pingV1 := func(scheme string) bool {
url := fmt.Sprintf(resolvedPingV1URL, scheme, c.registry)
resp, err := c.makeRequestToResolvedURL("GET", url, nil, nil, -1, true)
logrus.Debugf("Ping %s err %#v", url, err)
if err != nil {
return false
}
defer resp.Body.Close()
logrus.Debugf("Ping %s status %d", url, resp.StatusCode)
if resp.StatusCode != http.StatusOK && resp.StatusCode != http.StatusUnauthorized {
return false
}
return true
}
isV1 := pingV1("https")
if !isV1 && c.ctx != nil && c.ctx.DockerInsecureSkipTLSVerify {
isV1 = pingV1("http")
}
if isV1 {
err = ErrV1NotSupported
}
return pr, nil
}
pr, err := ping("https")
if err != nil && c.ctx.DockerInsecureSkipTLSVerify {
pr, err = ping("http")
return err
}
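In short, the probe order implemented by detectProperties is (a summary of the code above, not additional behavior):
// 1. GET /v2/ over https; on 200 or 401, record the scheme, the parsed
//    WWW-Authenticate challenges, and the X-Registry-Supports-Signatures flag.
// 2. If that fails and DockerInsecureSkipTLSVerify is set, retry over http.
// 3. If both fail and DockerDisableV1Ping is unset, probe the V1 ping endpoint
//    the same way, purely to report ErrV1NotSupported for V1-only registries.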
// getExtensionsSignatures returns signatures from the X-Registry-Supports-Signatures API extension,
// using the original data structures.
func (c *dockerClient) getExtensionsSignatures(ref dockerReference, manifestDigest digest.Digest) (*extensionSignatureList, error) {
path := fmt.Sprintf(extensionsSignaturePath, reference.Path(ref.ref), manifestDigest)
res, err := c.makeRequest("GET", path, nil, nil)
if err != nil {
return nil, err
}
return pr, err
defer res.Body.Close()
if res.StatusCode != http.StatusOK {
return nil, client.HandleErrorResponse(res)
}
body, err := ioutil.ReadAll(res.Body)
if err != nil {
return nil, err
}
var parsedBody extensionSignatureList
if err := json.Unmarshal(body, &parsedBody); err != nil {
return nil, errors.Wrapf(err, "Error decoding signature list")
}
return &parsedBody, nil
}
func getDefaultConfigDir(confPath string) string {
return filepath.Join(homedir.Get(), confPath)
}
type dockerAuthConfigObsolete struct {
Auth string `json:"auth"`
}
type dockerAuthConfig struct {
Auth string `json:"auth,omitempty"`
}
@@ -368,3 +554,28 @@ func decodeDockerAuth(s string) (string, string, error) {
password := strings.Trim(parts[1], "\x00")
return user, password, nil
}
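The \x00 trimming above guards against config files whose decoded credentials carry NUL padding; a hedged round-trip example (values illustrative):
user, pass, err := decodeDockerAuth("dXNlcjpwYXNzd29yZA==") // base64 of "user:password"
// user == "user", pass == "password", err == nil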
// convertToHostname converts a registry url which has http|https prepended
// to just an hostname.
// Copied from github.com/docker/docker/registry/auth.go
func convertToHostname(url string) string {
stripped := url
if strings.HasPrefix(url, "http://") {
stripped = strings.TrimPrefix(url, "http://")
} else if strings.HasPrefix(url, "https://") {
stripped = strings.TrimPrefix(url, "https://")
}
nameParts := strings.SplitN(stripped, "/", 2)
return nameParts[0]
}
func normalizeRegistry(registry string) string {
normalized := convertToHostname(registry)
switch normalized {
case "registry-1.docker.io", "docker.io":
return "index.docker.io"
}
return normalized
}
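Worked examples of the normalization chain (inputs illustrative):
// normalizeRegistry("https://registry-1.docker.io/v1/") == "index.docker.io"
// normalizeRegistry("docker.io")                        == "index.docker.io"
// normalizeRegistry("quay.io/myrepo")                   == "quay.io"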


@@ -5,8 +5,10 @@ import (
"fmt"
"net/http"
"github.com/containers/image/docker/reference"
"github.com/containers/image/image"
"github.com/containers/image/types"
"github.com/pkg/errors"
)
// Image is a Docker-specific implementation of types.Image with a few extra methods
@@ -24,25 +26,29 @@ func newImage(ctx *types.SystemContext, ref dockerReference) (types.Image, error
if err != nil {
return nil, err
}
return &Image{Image: image.FromSource(s), src: s}, nil
img, err := image.FromSource(s)
if err != nil {
return nil, err
}
return &Image{Image: img, src: s}, nil
}
// SourceRefFullName returns a fully expanded name for the repository this image is in.
func (i *Image) SourceRefFullName() string {
return i.src.ref.ref.FullName()
return i.src.ref.ref.Name()
}
// GetRepositoryTags list all tags available in the repository. Note that this has no connection with the tag(s) used for this specific image, if any.
func (i *Image) GetRepositoryTags() ([]string, error) {
url := fmt.Sprintf(tagsURL, i.src.ref.ref.RemoteName())
res, err := i.src.c.makeRequest("GET", url, nil, nil)
path := fmt.Sprintf(tagsPath, reference.Path(i.src.ref.ref))
res, err := i.src.c.makeRequest("GET", path, nil, nil)
if err != nil {
return nil, err
}
defer res.Body.Close()
if res.StatusCode != http.StatusOK {
// print url also
return nil, fmt.Errorf("Invalid status code returned when fetching tags list %d", res.StatusCode)
return nil, errors.Errorf("Invalid status code returned when fetching tags list %d", res.StatusCode)
}
type tagsRes struct {
Tags []string


@@ -2,8 +2,8 @@ package docker
import (
"bytes"
"crypto/sha256"
"encoding/hex"
"crypto/rand"
"encoding/json"
"fmt"
"io"
"io/ioutil"
@@ -11,23 +11,43 @@ import (
"net/url"
"os"
"path/filepath"
"strconv"
"github.com/Sirupsen/logrus"
"github.com/containers/image/docker/reference"
"github.com/containers/image/manifest"
"github.com/containers/image/types"
"github.com/docker/distribution/registry/api/errcode"
"github.com/docker/distribution/registry/api/v2"
"github.com/docker/distribution/registry/client"
"github.com/opencontainers/go-digest"
"github.com/pkg/errors"
)
var manifestMIMETypes = []string{
// TODO(runcom): we'll add OCI as part of another PR here
manifest.DockerV2Schema2MediaType,
manifest.DockerV2Schema1SignedMediaType,
manifest.DockerV2Schema1MediaType,
}
func supportedManifestMIMETypesMap() map[string]bool {
m := make(map[string]bool, len(manifestMIMETypes))
for _, mt := range manifestMIMETypes {
m[mt] = true
}
return m
}
type dockerImageDestination struct {
ref dockerReference
c *dockerClient
// State
manifestDigest string // or "" if not yet known.
manifestDigest digest.Digest // or "" if not yet known.
}
// newImageDestination creates a new ImageDestination for the specified image reference.
func newImageDestination(ctx *types.SystemContext, ref dockerReference) (types.ImageDestination, error) {
c, err := newDockerClient(ctx, ref, true)
c, err := newDockerClient(ctx, ref, true, "pull,push")
if err != nil {
return nil, err
}
@@ -44,22 +64,28 @@ func (d *dockerImageDestination) Reference() types.ImageReference {
}
// Close removes resources associated with an initialized ImageDestination, if any.
func (d *dockerImageDestination) Close() {
func (d *dockerImageDestination) Close() error {
return nil
}
func (d *dockerImageDestination) SupportedManifestMIMETypes() []string {
return []string{
// TODO(runcom): we'll add OCI as part of another PR here
manifest.DockerV2Schema2MediaType,
manifest.DockerV2Schema1SignedMediaType,
manifest.DockerV2Schema1MediaType,
}
return manifestMIMETypes
}
// SupportsSignatures returns an error (to be displayed to the user) if the destination certainly can't store signatures.
// Note: It is still possible for PutSignatures to fail if SupportsSignatures returns nil.
func (d *dockerImageDestination) SupportsSignatures() error {
return fmt.Errorf("Pushing signatures to a Docker Registry is not supported")
if err := d.c.detectProperties(); err != nil {
return err
}
switch {
case d.c.signatureBase != nil:
return nil
case d.c.supportsSignatures:
return nil
default:
return errors.Errorf("X-Registry-Supports-Signatures extension not supported, and lookaside is not configured")
}
}
// ShouldCompressLayers returns true iff it is desirable to compress layer blobs written to this destination.
@@ -67,6 +93,17 @@ func (d *dockerImageDestination) ShouldCompressLayers() bool {
return true
}
// AcceptsForeignLayerURLs returns false iff foreign layers in manifest should be actually
// uploaded to the image destination, true otherwise.
func (d *dockerImageDestination) AcceptsForeignLayerURLs() bool {
return true
}
// MustMatchRuntimeOS returns true iff the destination can store only images targeted for the current runtime OS. False otherwise.
func (d *dockerImageDestination) MustMatchRuntimeOS() bool {
return false
}
// sizeCounter is an io.Writer which only counts the total size of its input.
type sizeCounter struct{ size int64 }
@@ -82,88 +119,108 @@ func (c *sizeCounter) Write(p []byte) (n int, err error) {
// to any other readers for download using the supplied digest.
// If stream.Read() at any time, ESPECIALLY at end of input, returns an error, PutBlob MUST 1) fail, and 2) delete any data stored so far.
func (d *dockerImageDestination) PutBlob(stream io.Reader, inputInfo types.BlobInfo) (types.BlobInfo, error) {
if inputInfo.Digest != "" {
checkURL := fmt.Sprintf(blobsURL, d.ref.ref.RemoteName(), inputInfo.Digest)
logrus.Debugf("Checking %s", checkURL)
res, err := d.c.makeRequest("HEAD", checkURL, nil, nil)
if inputInfo.Digest.String() != "" {
haveBlob, size, err := d.HasBlob(inputInfo)
if err != nil {
return types.BlobInfo{}, err
}
defer res.Body.Close()
switch res.StatusCode {
case http.StatusOK:
logrus.Debugf("... already exists, not uploading")
blobLength, err := strconv.ParseInt(res.Header.Get("Content-Length"), 10, 64)
if err != nil {
return types.BlobInfo{}, err
}
return types.BlobInfo{Digest: inputInfo.Digest, Size: blobLength}, nil
case http.StatusUnauthorized:
logrus.Debugf("... not authorized")
return types.BlobInfo{}, fmt.Errorf("not authorized to read from destination repository %s", d.ref.ref.RemoteName())
case http.StatusNotFound:
// noop
default:
return types.BlobInfo{}, fmt.Errorf("failed to read from destination repository %s: %v", d.ref.ref.RemoteName(), http.StatusText(res.StatusCode))
if haveBlob {
return types.BlobInfo{Digest: inputInfo.Digest, Size: size}, nil
}
logrus.Debugf("... failed, status %d", res.StatusCode)
}
// FIXME? Chunked upload, progress reporting, etc.
uploadURL := fmt.Sprintf(blobUploadURL, d.ref.ref.RemoteName())
logrus.Debugf("Uploading %s", uploadURL)
res, err := d.c.makeRequest("POST", uploadURL, nil, nil)
uploadPath := fmt.Sprintf(blobUploadPath, reference.Path(d.ref.ref))
logrus.Debugf("Uploading %s", uploadPath)
res, err := d.c.makeRequest("POST", uploadPath, nil, nil)
if err != nil {
return types.BlobInfo{}, err
}
defer res.Body.Close()
if res.StatusCode != http.StatusAccepted {
logrus.Debugf("Error initiating layer upload, response %#v", *res)
return types.BlobInfo{}, fmt.Errorf("Error initiating layer upload to %s, status %d", uploadURL, res.StatusCode)
return types.BlobInfo{}, errors.Errorf("Error initiating layer upload to %s, status %d", uploadPath, res.StatusCode)
}
uploadLocation, err := res.Location()
if err != nil {
return types.BlobInfo{}, fmt.Errorf("Error determining upload URL: %s", err.Error())
return types.BlobInfo{}, errors.Wrap(err, "Error determining upload URL")
}
h := sha256.New()
digester := digest.Canonical.Digester()
sizeCounter := &sizeCounter{}
tee := io.TeeReader(stream, io.MultiWriter(h, sizeCounter))
res, err = d.c.makeRequestToResolvedURL("PATCH", uploadLocation.String(), map[string][]string{"Content-Type": {"application/octet-stream"}}, tee, inputInfo.Size)
tee := io.TeeReader(stream, io.MultiWriter(digester.Hash(), sizeCounter))
res, err = d.c.makeRequestToResolvedURL("PATCH", uploadLocation.String(), map[string][]string{"Content-Type": {"application/octet-stream"}}, tee, inputInfo.Size, true)
if err != nil {
logrus.Debugf("Error uploading layer chunked, response %#v", *res)
logrus.Debugf("Error uploading layer chunked, response %#v", res)
return types.BlobInfo{}, err
}
defer res.Body.Close()
hash := h.Sum(nil)
computedDigest := "sha256:" + hex.EncodeToString(hash[:])
computedDigest := digester.Digest()
uploadLocation, err = res.Location()
if err != nil {
return types.BlobInfo{}, fmt.Errorf("Error determining upload URL: %s", err.Error())
return types.BlobInfo{}, errors.Wrap(err, "Error determining upload URL")
}
// FIXME: DELETE uploadLocation on failure
locationQuery := uploadLocation.Query()
// TODO: check inputInfo.Digest == computedDigest https://github.com/containers/image/pull/70#discussion_r77646717
locationQuery.Set("digest", computedDigest)
locationQuery.Set("digest", computedDigest.String())
uploadLocation.RawQuery = locationQuery.Encode()
res, err = d.c.makeRequestToResolvedURL("PUT", uploadLocation.String(), map[string][]string{"Content-Type": {"application/octet-stream"}}, nil, -1)
res, err = d.c.makeRequestToResolvedURL("PUT", uploadLocation.String(), map[string][]string{"Content-Type": {"application/octet-stream"}}, nil, -1, true)
if err != nil {
return types.BlobInfo{}, err
}
defer res.Body.Close()
if res.StatusCode != http.StatusCreated {
logrus.Debugf("Error uploading layer, response %#v", *res)
return types.BlobInfo{}, fmt.Errorf("Error uploading layer to %s, status %d", uploadLocation, res.StatusCode)
return types.BlobInfo{}, errors.Errorf("Error uploading layer to %s, status %d", uploadLocation, res.StatusCode)
}
logrus.Debugf("Upload of layer %s complete", computedDigest)
return types.BlobInfo{Digest: computedDigest, Size: sizeCounter.size}, nil
}
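In registry-API terms the upload above follows the standard three-step flow; this is a summary of the requests the code issues, with an illustrative repository name:
// POST  /v2/library/ubuntu/blobs/uploads/        -> 202 Accepted + Location
// PATCH <Location>   (body: the layer bytes)     -> 202 Accepted + new Location
// PUT   <Location>?digest=sha256:<hex>           -> 201 Created
// The digest is computed while streaming, via digest.Canonical.Digester() and io.TeeReader.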
// HasBlob returns true iff the image destination already contains a blob with the matching digest which can be reapplied using ReapplyBlob.
// Unlike PutBlob, the digest can not be empty. If HasBlob returns true, the size of the blob must also be returned.
// If the destination does not contain the blob, or it is unknown, HasBlob ordinarily returns (false, -1, nil);
// it returns a non-nil error only on an unexpected failure.
func (d *dockerImageDestination) HasBlob(info types.BlobInfo) (bool, int64, error) {
if info.Digest == "" {
return false, -1, errors.Errorf("Cannot check for a blob with unknown digest")
}
checkPath := fmt.Sprintf(blobsPath, reference.Path(d.ref.ref), info.Digest.String())
logrus.Debugf("Checking %s", checkPath)
res, err := d.c.makeRequest("HEAD", checkPath, nil, nil)
if err != nil {
return false, -1, err
}
defer res.Body.Close()
switch res.StatusCode {
case http.StatusOK:
logrus.Debugf("... already exists")
return true, getBlobSize(res), nil
case http.StatusUnauthorized:
logrus.Debugf("... not authorized")
return false, -1, errors.Errorf("not authorized to read from destination repository %s", reference.Path(d.ref.ref))
case http.StatusNotFound:
logrus.Debugf("... not present")
return false, -1, nil
default:
return false, -1, errors.Errorf("failed to read from destination repository %s: %v", reference.Path(d.ref.ref), http.StatusText(res.StatusCode))
}
}
func (d *dockerImageDestination) ReapplyBlob(info types.BlobInfo) (types.BlobInfo, error) {
return info, nil
}
// PutManifest writes manifest to the destination.
// FIXME? This should also receive a MIME type if known, to differentiate between schema versions.
// If the destination is in principle available, refuses this manifest type (e.g. it does not recognize the schema),
// but may accept a different manifest type, the returned error must be an ManifestTypeRejectedError.
func (d *dockerImageDestination) PutManifest(m []byte) error {
digest, err := manifest.Digest(m)
if err != nil {
@@ -171,34 +228,69 @@ func (d *dockerImageDestination) PutManifest(m []byte) error {
}
d.manifestDigest = digest
reference, err := d.ref.tagOrDigest()
refTail, err := d.ref.tagOrDigest()
if err != nil {
return err
}
url := fmt.Sprintf(manifestURL, d.ref.ref.RemoteName(), reference)
path := fmt.Sprintf(manifestPath, reference.Path(d.ref.ref), refTail)
headers := map[string][]string{}
mimeType := manifest.GuessMIMEType(m)
if mimeType != "" {
headers["Content-Type"] = []string{mimeType}
}
res, err := d.c.makeRequest("PUT", url, headers, bytes.NewReader(m))
res, err := d.c.makeRequest("PUT", path, headers, bytes.NewReader(m))
if err != nil {
return err
}
defer res.Body.Close()
if res.StatusCode != http.StatusCreated {
body, err := ioutil.ReadAll(res.Body)
if err == nil {
logrus.Debugf("Error body %s", string(body))
err = errors.Wrapf(client.HandleErrorResponse(res), "Error uploading manifest to %s", path)
if isManifestInvalidError(errors.Cause(err)) {
err = types.ManifestTypeRejectedError{Err: err}
}
logrus.Debugf("Error uploading manifest, status %d, %#v", res.StatusCode, res)
return fmt.Errorf("Error uploading manifest to %s, status %d", url, res.StatusCode)
return err
}
return nil
}
// isManifestInvalidError returns true iff err from client.HandleErrorResponse is a “manifest invalid” error.
func isManifestInvalidError(err error) bool {
errors, ok := err.(errcode.Errors)
if !ok || len(errors) == 0 {
return false
}
ec, ok := errors[0].(errcode.ErrorCoder)
if !ok {
return false
}
// ErrorCodeManifestInvalid is returned by OpenShift with acceptschema2=false.
// ErrorCodeTagInvalid is returned by docker/distribution (at least as of commit ec87e9b6971d831f0eff752ddb54fb64693e51cd)
// when uploading to a tag (because it cant find a matching tag inside the manifest)
return ec.ErrorCode() == v2.ErrorCodeManifestInvalid || ec.ErrorCode() == v2.ErrorCodeTagInvalid
}
func (d *dockerImageDestination) PutSignatures(signatures [][]byte) error {
// Do not fail if we dont really need to support signatures.
if len(signatures) == 0 {
return nil
}
if err := d.c.detectProperties(); err != nil {
return err
}
switch {
case d.c.signatureBase != nil:
return d.putSignaturesToLookaside(signatures)
case d.c.supportsSignatures:
return d.putSignaturesToAPIExtension(signatures)
default:
return errors.Errorf("X-Registry-Supports-Signatures extension not supported, and lookaside is not configured")
}
}
// putSignaturesToLookaside implements PutSignatures() from the lookaside location configured in s.c.signatureBase,
// which is not nil.
func (d *dockerImageDestination) putSignaturesToLookaside(signatures [][]byte) error {
// FIXME? This overwrites files one at a time, definitely not atomic.
// A failure when updating signatures with a reordered copy could lose some of them.
@@ -206,19 +298,17 @@ func (d *dockerImageDestination) PutSignatures(signatures [][]byte) error {
if len(signatures) == 0 {
return nil
}
if d.c.signatureBase == nil {
return fmt.Errorf("Pushing signatures to a Docker Registry is not supported, and there is no applicable signature storage configured")
}
// FIXME: This assumption that signatures are stored after the manifest rather breaks the model.
if d.manifestDigest == "" {
return fmt.Errorf("Unknown manifest digest, can't add signatures")
if d.manifestDigest.String() == "" {
// This shouldnt happen, ImageDestination users are required to call PutManifest before PutSignatures
return errors.Errorf("Unknown manifest digest, can't add signatures")
}
// NOTE: Keep this in sync with docs/signature-protocols.md!
for i, signature := range signatures {
url := signatureStorageURL(d.c.signatureBase, d.manifestDigest, i)
if url == nil {
return fmt.Errorf("Internal error: signatureStorageURL with non-nil base returned nil")
return errors.Errorf("Internal error: signatureStorageURL with non-nil base returned nil")
}
err := d.putOneSignature(url, signature)
if err != nil {
@@ -233,7 +323,7 @@ func (d *dockerImageDestination) PutSignatures(signatures [][]byte) error {
for i := len(signatures); ; i++ {
url := signatureStorageURL(d.c.signatureBase, d.manifestDigest, i)
if url == nil {
return fmt.Errorf("Internal error: signatureStorageURL with non-nil base returned nil")
return errors.Errorf("Internal error: signatureStorageURL with non-nil base returned nil")
}
missing, err := d.c.deleteOneSignature(url)
if err != nil {
@@ -248,6 +338,7 @@ func (d *dockerImageDestination) PutSignatures(signatures [][]byte) error {
}
// putOneSignature stores one signature to url.
// NOTE: Keep this in sync with docs/signature-protocols.md!
func (d *dockerImageDestination) putOneSignature(url *url.URL, signature []byte) error {
switch url.Scheme {
case "file":
@@ -263,14 +354,15 @@ func (d *dockerImageDestination) putOneSignature(url *url.URL, signature []byte)
return nil
case "http", "https":
return fmt.Errorf("Writing directly to a %s sigstore %s is not supported. Configure a sigstore-staging: location.", url.Scheme, url.String())
return errors.Errorf("Writing directly to a %s sigstore %s is not supported. Configure a sigstore-staging: location", url.Scheme, url.String())
default:
return fmt.Errorf("Unsupported scheme when writing signature to %s", url.String())
return errors.Errorf("Unsupported scheme when writing signature to %s", url.String())
}
}
// deleteOneSignature deletes a signature from url, if it exists.
// If it successfully determines that the signature does not exist, returns (true, nil)
// NOTE: Keep this in sync with docs/signature-protocols.md!
func (c *dockerClient) deleteOneSignature(url *url.URL) (missing bool, err error) {
switch url.Scheme {
case "file":
@@ -282,12 +374,88 @@ func (c *dockerClient) deleteOneSignature(url *url.URL) (missing bool, err error
return false, err
case "http", "https":
return false, fmt.Errorf("Writing directly to a %s sigstore %s is not supported. Configure a sigstore-staging: location.", url.Scheme, url.String())
return false, errors.Errorf("Writing directly to a %s sigstore %s is not supported. Configure a sigstore-staging: location", url.Scheme, url.String())
default:
return false, fmt.Errorf("Unsupported scheme when deleting signature from %s", url.String())
return false, errors.Errorf("Unsupported scheme when deleting signature from %s", url.String())
}
}
// putSignaturesToAPIExtension implements PutSignatures() using the X-Registry-Supports-Signatures API extension.
func (d *dockerImageDestination) putSignaturesToAPIExtension(signatures [][]byte) error {
// Skip dealing with the manifest digest, or reading the old state, if not necessary.
if len(signatures) == 0 {
return nil
}
if d.manifestDigest.String() == "" {
// This shouldnt happen, ImageDestination users are required to call PutManifest before PutSignatures
return errors.Errorf("Unknown manifest digest, can't add signatures")
}
// Because image signatures are a shared resource in Atomic Registry, the default upload
// always adds signatures. Eventually we should also allow removing signatures,
// but the X-Registry-Supports-Signatures API extension does not support that yet.
existingSignatures, err := d.c.getExtensionsSignatures(d.ref, d.manifestDigest)
if err != nil {
return err
}
existingSigNames := map[string]struct{}{}
for _, sig := range existingSignatures.Signatures {
existingSigNames[sig.Name] = struct{}{}
}
sigExists:
for _, newSig := range signatures {
for _, existingSig := range existingSignatures.Signatures {
if existingSig.Version == extensionSignatureSchemaVersion && existingSig.Type == extensionSignatureTypeAtomic && bytes.Equal(existingSig.Content, newSig) {
continue sigExists
}
}
// The API expects us to invent a new unique name. This is racy, but hopefully good enough.
var signatureName string
for {
randBytes := make([]byte, 16)
n, err := rand.Read(randBytes)
if err != nil || n != 16 {
return errors.Wrapf(err, "Error generating random signature len %d", n)
}
signatureName = fmt.Sprintf("%s@%032x", d.manifestDigest.String(), randBytes)
if _, ok := existingSigNames[signatureName]; !ok {
break
}
}
sig := extensionSignature{
Version: extensionSignatureSchemaVersion,
Name: signatureName,
Type: extensionSignatureTypeAtomic,
Content: newSig,
}
body, err := json.Marshal(sig)
if err != nil {
return err
}
path := fmt.Sprintf(extensionsSignaturePath, reference.Path(d.ref.ref), d.manifestDigest.String())
res, err := d.c.makeRequest("PUT", path, nil, bytes.NewReader(body))
if err != nil {
return err
}
defer res.Body.Close()
if res.StatusCode != http.StatusCreated {
body, err := ioutil.ReadAll(res.Body)
if err == nil {
logrus.Debugf("Error body %s", string(body))
}
logrus.Debugf("Error uploading signature, status %d, %#v", res.StatusCode, res)
return errors.Errorf("Error uploading signature to %s, status %d", path, res.StatusCode)
}
}
return nil
}
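Each signature uploaded through the extension is JSON of roughly the following shape; the version and type constants are defined elsewhere in the package (presumably 2 and "atomic"), and all values here are illustrative:
// PUT /extensions/v2/<repository>/signatures/sha256:<manifest hex>
// {"version": 2, "name": "sha256:<manifest hex>@<32 random hex chars>",
//  "type": "atomic", "content": "<base64-encoded signature blob>"}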
// Commit marks the process of storing the image as successful and asks for the image to be persisted.
// WARNING: This does not have any transactional semantics:
// - Uploaded data MAY be visible to others before Commit() is called


@@ -11,19 +11,14 @@ import (
"strconv"
"github.com/Sirupsen/logrus"
"github.com/containers/image/docker/reference"
"github.com/containers/image/manifest"
"github.com/containers/image/types"
"github.com/docker/distribution/registry/client"
"github.com/opencontainers/go-digest"
"github.com/pkg/errors"
)
type errFetchManifest struct {
statusCode int
body []byte
}
func (e errFetchManifest) Error() string {
return fmt.Sprintf("error fetching manifest: status code: %d, body: %s", e.statusCode, string(e.body))
}
type dockerImageSource struct {
ref dockerReference
requestedManifestMIMETypes []string
@@ -38,13 +33,24 @@ type dockerImageSource struct {
// nil requestedManifestMIMETypes means manifest.DefaultRequestedManifestMIMETypes.
// The caller must call .Close() on the returned ImageSource.
func newImageSource(ctx *types.SystemContext, ref dockerReference, requestedManifestMIMETypes []string) (*dockerImageSource, error) {
c, err := newDockerClient(ctx, ref, false)
c, err := newDockerClient(ctx, ref, false, "pull")
if err != nil {
return nil, err
}
if requestedManifestMIMETypes == nil {
requestedManifestMIMETypes = manifest.DefaultRequestedManifestMIMETypes
}
supportedMIMEs := supportedManifestMIMETypesMap()
acceptableRequestedMIMEs := false
for _, mtrequested := range requestedManifestMIMETypes {
if supportedMIMEs[mtrequested] {
acceptableRequestedMIMEs = true
break
}
}
if !acceptableRequestedMIMEs {
requestedManifestMIMETypes = manifest.DefaultRequestedManifestMIMETypes
}
return &dockerImageSource{
ref: ref,
requestedManifestMIMETypes: requestedManifestMIMETypes,
@@ -59,7 +65,8 @@ func (s *dockerImageSource) Reference() types.ImageReference {
}
// Close removes resources associated with an initialized ImageSource, if any.
func (s *dockerImageSource) Close() {
func (s *dockerImageSource) Close() error {
return nil
}
// simplifyContentType drops parameters from an HTTP media type (see https://tools.ietf.org/html/rfc7231#section-3.1.1.1)
@@ -75,6 +82,8 @@ func simplifyContentType(contentType string) string {
return mimeType
}
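Only the tail of simplifyContentType is visible in this hunk; its effect is to strip media-type parameters, presumably via the standard library, e.g.:
mt, _, err := mime.ParseMediaType("application/vnd.docker.distribution.manifest.v2+json; charset=utf-8")
// assuming err == nil: mt == "application/vnd.docker.distribution.manifest.v2+json"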
// GetManifest returns the image's manifest along with its MIME type (which may be empty when it can't be determined but the manifest is available).
// It may use a remote (= slow) service.
func (s *dockerImageSource) GetManifest() ([]byte, string, error) {
err := s.ensureManifestIsLoaded()
if err != nil {
@@ -83,6 +92,31 @@ func (s *dockerImageSource) GetManifest() ([]byte, string, error) {
return s.cachedManifest, s.cachedManifestMIMEType, nil
}
func (s *dockerImageSource) fetchManifest(tagOrDigest string) ([]byte, string, error) {
path := fmt.Sprintf(manifestPath, reference.Path(s.ref.ref), tagOrDigest)
headers := make(map[string][]string)
headers["Accept"] = s.requestedManifestMIMETypes
res, err := s.c.makeRequest("GET", path, headers, nil)
if err != nil {
return nil, "", err
}
defer res.Body.Close()
if res.StatusCode != http.StatusOK {
return nil, "", client.HandleErrorResponse(res)
}
manblob, err := ioutil.ReadAll(res.Body)
if err != nil {
return nil, "", err
}
return manblob, simplifyContentType(res.Header.Get("Content-Type")), nil
}
// GetTargetManifest returns an image's manifest given a digest.
// This is mainly used to retrieve a single image's manifest out of a manifest list.
func (s *dockerImageSource) GetTargetManifest(digest digest.Digest) ([]byte, string, error) {
return s.fetchManifest(digest.String())
}
// ensureManifestIsLoaded sets s.cachedManifest and s.cachedManifestMIMEType
//
// ImageSource implementations are not required or expected to do any caching,
@@ -99,53 +133,82 @@ func (s *dockerImageSource) ensureManifestIsLoaded() error {
if err != nil {
return err
}
url := fmt.Sprintf(manifestURL, s.ref.ref.RemoteName(), reference)
// TODO(runcom) set manifest version header! schema1 for now - then schema2 etc etc and v1
// TODO(runcom) NO, switch on the resulter manifest like Docker is doing
headers := make(map[string][]string)
headers["Accept"] = s.requestedManifestMIMETypes
res, err := s.c.makeRequest("GET", url, headers, nil)
manblob, mt, err := s.fetchManifest(reference)
if err != nil {
return err
}
defer res.Body.Close()
manblob, err := ioutil.ReadAll(res.Body)
if err != nil {
return err
}
if res.StatusCode != http.StatusOK {
return errFetchManifest{res.StatusCode, manblob}
}
// We might validate manblob against the Docker-Content-Digest header here to protect against transport errors.
s.cachedManifest = manblob
s.cachedManifestMIMEType = simplifyContentType(res.Header.Get("Content-Type"))
s.cachedManifestMIMEType = mt
return nil
}
func (s *dockerImageSource) getExternalBlob(urls []string) (io.ReadCloser, int64, error) {
var (
resp *http.Response
err error
)
for _, url := range urls {
resp, err = s.c.makeRequestToResolvedURL("GET", url, nil, nil, -1, false)
if err == nil {
if resp.StatusCode != http.StatusOK {
err = errors.Errorf("error fetching external blob from %q: %d", url, resp.StatusCode)
logrus.Debug(err)
continue
}
break // success; stop trying further URLs and use this response
}
}
if resp.Body != nil && err == nil {
return resp.Body, getBlobSize(resp), nil
}
return nil, 0, err
}
func getBlobSize(resp *http.Response) int64 {
size, err := strconv.ParseInt(resp.Header.Get("Content-Length"), 10, 64)
if err != nil {
size = -1
}
return size
}
// GetBlob returns a stream for the specified blob, and the blob's size (or -1 if unknown).
func (s *dockerImageSource) GetBlob(digest string) (io.ReadCloser, int64, error) {
url := fmt.Sprintf(blobsURL, s.ref.ref.RemoteName(), digest)
logrus.Debugf("Downloading %s", url)
res, err := s.c.makeRequest("GET", url, nil, nil)
func (s *dockerImageSource) GetBlob(info types.BlobInfo) (io.ReadCloser, int64, error) {
if len(info.URLs) != 0 {
return s.getExternalBlob(info.URLs)
}
path := fmt.Sprintf(blobsPath, reference.Path(s.ref.ref), info.Digest.String())
logrus.Debugf("Downloading %s", path)
res, err := s.c.makeRequest("GET", path, nil, nil)
if err != nil {
return nil, 0, err
}
if res.StatusCode != http.StatusOK {
// print url also
return nil, 0, fmt.Errorf("Invalid status code returned when fetching blob %d", res.StatusCode)
return nil, 0, errors.Errorf("Invalid status code returned when fetching blob %d", res.StatusCode)
}
size, err := strconv.ParseInt(res.Header.Get("Content-Length"), 10, 64)
if err != nil {
size = -1
}
return res.Body, size, nil
return res.Body, getBlobSize(res), nil
}
func (s *dockerImageSource) GetSignatures() ([][]byte, error) {
if s.c.signatureBase == nil { // Skip dealing with the manifest digest if not necessary.
if err := s.c.detectProperties(); err != nil {
return nil, err
}
switch {
case s.c.signatureBase != nil:
return s.getSignaturesFromLookaside()
case s.c.supportsSignatures:
return s.getSignaturesFromAPIExtension()
default:
return [][]byte{}, nil
}
}
// getSignaturesFromLookaside implements GetSignatures() from the lookaside location configured in s.c.signatureBase,
// which is not nil.
func (s *dockerImageSource) getSignaturesFromLookaside() ([][]byte, error) {
if err := s.ensureManifestIsLoaded(); err != nil {
return nil, err
}
@@ -154,11 +217,12 @@ func (s *dockerImageSource) GetSignatures() ([][]byte, error) {
return nil, err
}
// NOTE: Keep this in sync with docs/signature-protocols.md!
signatures := [][]byte{}
for i := 0; ; i++ {
url := signatureStorageURL(s.c.signatureBase, manifestDigest, i)
if url == nil {
return nil, fmt.Errorf("Internal error: signatureStorageURL with non-nil base returned nil")
return nil, errors.Errorf("Internal error: signatureStorageURL with non-nil base returned nil")
}
signature, missing, err := s.getOneSignature(url)
if err != nil {
@@ -174,6 +238,7 @@ func (s *dockerImageSource) GetSignatures() ([][]byte, error) {
// getOneSignature downloads one signature from url.
// If it successfully determines that the signature does not exist, returns with missing set to true and error set to nil.
// NOTE: Keep this in sync with docs/signature-protocols.md!
func (s *dockerImageSource) getOneSignature(url *url.URL) (signature []byte, missing bool, err error) {
switch url.Scheme {
case "file":
@@ -197,7 +262,7 @@ func (s *dockerImageSource) getOneSignature(url *url.URL) (signature []byte, mis
if res.StatusCode == http.StatusNotFound {
return nil, true, nil
} else if res.StatusCode != http.StatusOK {
return nil, false, fmt.Errorf("Error reading signature from %s: status %d", url.String(), res.StatusCode)
return nil, false, errors.Errorf("Error reading signature from %s: status %d", url.String(), res.StatusCode)
}
sig, err := ioutil.ReadAll(res.Body)
if err != nil {
@@ -206,13 +271,37 @@ func (s *dockerImageSource) getOneSignature(url *url.URL) (signature []byte, mis
return sig, false, nil
default:
return nil, false, fmt.Errorf("Unsupported scheme when reading signature from %s", url.String())
return nil, false, errors.Errorf("Unsupported scheme when reading signature from %s", url.String())
}
}
// getSignaturesFromAPIExtension implements GetSignatures() using the X-Registry-Supports-Signatures API extension.
func (s *dockerImageSource) getSignaturesFromAPIExtension() ([][]byte, error) {
if err := s.ensureManifestIsLoaded(); err != nil {
return nil, err
}
manifestDigest, err := manifest.Digest(s.cachedManifest)
if err != nil {
return nil, err
}
parsedBody, err := s.c.getExtensionsSignatures(s.ref, manifestDigest)
if err != nil {
return nil, err
}
var sigs [][]byte
for _, sig := range parsedBody.Signatures {
if sig.Version == extensionSignatureSchemaVersion && sig.Type == extensionSignatureTypeAtomic {
sigs = append(sigs, sig.Content)
}
}
return sigs, nil
}
// deleteImage deletes the named image from the registry, if supported.
func deleteImage(ctx *types.SystemContext, ref dockerReference) error {
c, err := newDockerClient(ctx, ref, true)
c, err := newDockerClient(ctx, ref, true, "push")
if err != nil {
return err
}
@@ -222,12 +311,12 @@ func deleteImage(ctx *types.SystemContext, ref dockerReference) error {
headers := make(map[string][]string)
headers["Accept"] = []string{manifest.DockerV2Schema2MediaType}
reference, err := ref.tagOrDigest()
refTail, err := ref.tagOrDigest()
if err != nil {
return err
}
getURL := fmt.Sprintf(manifestURL, ref.ref.RemoteName(), reference)
get, err := c.makeRequest("GET", getURL, headers, nil)
getPath := fmt.Sprintf(manifestPath, reference.Path(ref.ref), refTail)
get, err := c.makeRequest("GET", getPath, headers, nil)
if err != nil {
return err
}
@@ -239,17 +328,17 @@ func deleteImage(ctx *types.SystemContext, ref dockerReference) error {
switch get.StatusCode {
case http.StatusOK:
case http.StatusNotFound:
return fmt.Errorf("Unable to delete %v. Image may not exist or is not stored with a v2 Schema in a v2 registry.", ref.ref)
return errors.Errorf("Unable to delete %v. Image may not exist or is not stored with a v2 Schema in a v2 registry", ref.ref)
default:
return fmt.Errorf("Failed to delete %v: %s (%v)", ref.ref, manifestBody, get.Status)
return errors.Errorf("Failed to delete %v: %s (%v)", ref.ref, manifestBody, get.Status)
}
digest := get.Header.Get("Docker-Content-Digest")
deleteURL := fmt.Sprintf(manifestURL, ref.ref.RemoteName(), digest)
deletePath := fmt.Sprintf(manifestPath, reference.Path(ref.ref), digest)
// When retrieving the digest from a registry >= 2.3 use the following header:
// "Accept": "application/vnd.docker.distribution.manifest.v2+json"
delete, err := c.makeRequest("DELETE", deleteURL, headers, nil)
delete, err := c.makeRequest("DELETE", deletePath, headers, nil)
if err != nil {
return err
}
@@ -260,7 +349,7 @@ func deleteImage(ctx *types.SystemContext, ref dockerReference) error {
return err
}
if delete.StatusCode != http.StatusAccepted {
return fmt.Errorf("Failed to delete %v: %s (%v)", deleteURL, string(body), delete.Status)
return errors.Errorf("Failed to delete %v: %s (%v)", deletePath, string(body), delete.Status)
}
if c.signatureBase != nil {
@@ -272,7 +361,7 @@ func deleteImage(ctx *types.SystemContext, ref dockerReference) error {
for i := 0; ; i++ {
url := signatureStorageURL(c.signatureBase, manifestDigest, i)
if url == nil {
return fmt.Errorf("Internal error: signatureStorageURL with non-nil base returned nil")
return errors.Errorf("Internal error: signatureStorageURL with non-nil base returned nil")
}
missing, err := c.deleteOneSignature(url)
if err != nil {


@@ -5,10 +5,16 @@ import (
"strings"
"github.com/containers/image/docker/policyconfiguration"
"github.com/containers/image/docker/reference"
"github.com/containers/image/transports"
"github.com/containers/image/types"
"github.com/docker/docker/reference"
"github.com/pkg/errors"
)
func init() {
transports.Register(Transport)
}
// Transport is an ImageTransport for Docker registry-hosted images.
var Transport = dockerTransport{}
@@ -42,29 +48,30 @@ type dockerReference struct {
// ParseReference converts a string, which should not start with the ImageTransport.Name prefix, into a Docker ImageReference.
func ParseReference(refString string) (types.ImageReference, error) {
if !strings.HasPrefix(refString, "//") {
return nil, fmt.Errorf("docker: image reference %s does not start with //", refString)
return nil, errors.Errorf("docker: image reference %s does not start with //", refString)
}
ref, err := reference.ParseNamed(strings.TrimPrefix(refString, "//"))
ref, err := reference.ParseNormalizedNamed(strings.TrimPrefix(refString, "//"))
if err != nil {
return nil, err
}
ref = reference.WithDefaultTag(ref)
ref = reference.TagNameOnly(ref)
return NewReference(ref)
}
// NewReference returns a Docker reference for a named reference. The reference must satisfy !reference.IsNameOnly().
func NewReference(ref reference.Named) (types.ImageReference, error) {
if reference.IsNameOnly(ref) {
return nil, fmt.Errorf("Docker reference %s has neither a tag nor a digest", ref.String())
return nil, errors.Errorf("Docker reference %s has neither a tag nor a digest", reference.FamiliarString(ref))
}
// A github.com/distribution/reference value can have a tag and a digest at the same time!
// docker/reference does not handle that, so fail.
// The docker/distribution API does not really support that (we cant ask for an image with a specific
// tag and digest), so fail. This MAY be accepted in the future.
// (Even if it were supported, the semantics of policy namespaces are unclear - should we drop
// the tag or the digest first?)
_, isTagged := ref.(reference.NamedTagged)
_, isDigested := ref.(reference.Canonical)
if isTagged && isDigested {
return nil, fmt.Errorf("Docker references with both a tag and digest are currently not supported")
return nil, errors.Errorf("Docker references with both a tag and digest are currently not supported")
}
return dockerReference{
ref: ref,
@@ -81,7 +88,7 @@ func (ref dockerReference) Transport() types.ImageTransport {
// e.g. default attribute values omitted by the user may be filled in in the return value, or vice versa.
// WARNING: Do not use the return value in the UI to describe an image, it does not contain the Transport().Name() prefix.
func (ref dockerReference) StringWithinTransport() string {
return "//" + ref.ref.String()
return "//" + reference.FamiliarString(ref.ref)
}
// DockerReference returns a Docker reference associated with this reference
@@ -115,8 +122,10 @@ func (ref dockerReference) PolicyConfigurationNamespaces() []string {
return policyconfiguration.DockerReferenceNamespaces(ref.ref)
}
// NewImage returns a types.Image for this reference.
// NewImage returns a types.Image for this reference, possibly specialized for this ImageTransport.
// The caller must call .Close() on the returned Image.
// NOTE: If any kind of signature verification should happen, build an UnparsedImage from the value returned by NewImageSource,
// verify that UnparsedImage, and convert it into a real Image via image.FromUnparsedImage.
func (ref dockerReference) NewImage(ctx *types.SystemContext) (types.Image, error) {
return newImage(ctx, ref)
}
@@ -149,5 +158,5 @@ func (ref dockerReference) tagOrDigest() (string, error) {
return ref.Tag(), nil
}
// This should not happen, NewReference above refuses reference.IsNameOnly values.
return "", fmt.Errorf("Internal inconsistency: Reference %s unexpectedly has neither a digest nor a tag", ref.ref.String())
return "", errors.Errorf("Internal inconsistency: Reference %s unexpectedly has neither a digest nor a tag", reference.FamiliarString(ref.ref))
}


@@ -9,10 +9,12 @@ import (
"path/filepath"
"strings"
"github.com/ghodss/yaml"
"github.com/Sirupsen/logrus"
"github.com/containers/image/docker/reference"
"github.com/containers/image/types"
"github.com/ghodss/yaml"
"github.com/opencontainers/go-digest"
"github.com/pkg/errors"
)
// systemRegistriesDirPath is the path to registries.d, used for locating lookaside Docker signature storage.
@@ -59,12 +61,13 @@ func configuredSignatureStorageBase(ctx *types.SystemContext, ref dockerReferenc
url, err := url.Parse(topLevel)
if err != nil {
return nil, fmt.Errorf("Invalid signature storage URL %s: %v", topLevel, err)
return nil, errors.Wrapf(err, "Invalid signature storage URL %s", topLevel)
}
// NOTE: Keep this in sync with docs/signature-protocols.md!
// FIXME? Restrict to explicitly supported schemes?
repo := ref.ref.FullName() // Note that this is without a tag or digest.
if path.Clean(repo) != repo { // Coverage: This should not be reachable because /./ and /../ components are not valid in docker references
return nil, fmt.Errorf("Unexpected path elements in Docker reference %s for signature storage", ref.ref.String())
repo := reference.Path(ref.ref) // Note that this is without a tag or digest.
if path.Clean(repo) != repo { // Coverage: This should not be reachable because /./ and /../ components are not valid in docker references
return nil, errors.Errorf("Unexpected path elements in Docker reference %s for signature storage", ref.ref.String())
}
url.Path = url.Path + "/" + repo
return url, nil
@@ -113,12 +116,12 @@ func loadAndMergeConfig(dirPath string) (*registryConfiguration, error) {
var config registryConfiguration
err = yaml.Unmarshal(configBytes, &config)
if err != nil {
return nil, fmt.Errorf("Error parsing %s: %v", configPath, err)
return nil, errors.Wrapf(err, "Error parsing %s", configPath)
}
if config.DefaultDocker != nil {
if mergedConfig.DefaultDocker != nil {
return nil, fmt.Errorf(`Error parsing signature storage configuration: "default-docker" defined both in "%s" and "%s"`,
return nil, errors.Errorf(`Error parsing signature storage configuration: "default-docker" defined both in "%s" and "%s"`,
dockerDefaultMergedFrom, configPath)
}
mergedConfig.DefaultDocker = config.DefaultDocker
@@ -127,7 +130,7 @@ func loadAndMergeConfig(dirPath string) (*registryConfiguration, error) {
for nsName, nsConfig := range config.Docker { // includes config.Docker == nil
if _, ok := mergedConfig.Docker[nsName]; ok {
return nil, fmt.Errorf(`Error parsing signature storage configuration: "docker" namespace "%s" defined both in "%s" and "%s"`,
return nil, errors.Errorf(`Error parsing signature storage configuration: "docker" namespace "%s" defined both in "%s" and "%s"`,
nsName, nsMergedFrom[nsName], configPath)
}
mergedConfig.Docker[nsName] = nsConfig
@@ -188,11 +191,12 @@ func (ns registryNamespace) signatureTopLevel(write bool) string {
// signatureStorageURL returns a URL usable for accessing a signature index in base with a known manifestDigest, or nil if not applicable.
// Returns nil iff base == nil.
func signatureStorageURL(base signatureStorageBase, manifestDigest string, index int) *url.URL {
// NOTE: Keep this in sync with docs/signature-protocols.md!
func signatureStorageURL(base signatureStorageBase, manifestDigest digest.Digest, index int) *url.URL {
if base == nil {
return nil
}
url := *base
url.Path = fmt.Sprintf("%s@%s/signature-%d", url.Path, manifestDigest, index+1)
url.Path = fmt.Sprintf("%s@%s=%s/signature-%d", url.Path, manifestDigest.Algorithm(), manifestDigest.Hex(), index+1)
return &url
}
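With the new layout, a base of https://sigs.example.com/sigstore/library/busybox (illustrative) and a sha256 manifest digest yield 1-based URLs such as:
// https://sigs.example.com/sigstore/library/busybox@sha256=<hex>/signature-1
// https://sigs.example.com/sigstore/library/busybox@sha256=<hex>/signature-2
// i.e. "@<algorithm>=<hex>" replaces the previous "@<digest>" path element.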


@@ -1,25 +1,24 @@
package policyconfiguration
import (
"errors"
"fmt"
"strings"
"github.com/docker/docker/reference"
"github.com/containers/image/docker/reference"
"github.com/pkg/errors"
)
// DockerReferenceIdentity returns a string representation of the reference, suitable for policy lookup,
// as a backend for ImageReference.PolicyConfigurationIdentity.
// The reference must satisfy !reference.IsNameOnly().
func DockerReferenceIdentity(ref reference.Named) (string, error) {
res := ref.FullName()
res := ref.Name()
tagged, isTagged := ref.(reference.NamedTagged)
digested, isDigested := ref.(reference.Canonical)
switch {
case isTagged && isDigested: // This should not happen, docker/reference.ParseNamed drops the tag.
return "", fmt.Errorf("Unexpected Docker reference %s with both a name and a digest", ref.String())
case isTagged && isDigested: // Note that this CAN actually happen.
return "", errors.Errorf("Unexpected Docker reference %s with both a name and a digest", reference.FamiliarString(ref))
case !isTagged && !isDigested: // This should not happen, the caller is expected to ensure !reference.IsNameOnly()
return "", fmt.Errorf("Internal inconsistency: Docker reference %s with neither a tag nor a digest", ref.String())
return "", errors.Errorf("Internal inconsistency: Docker reference %s with neither a tag nor a digest", reference.FamiliarString(ref))
case isTagged:
res = res + ":" + tagged.Tag()
case isDigested:
@@ -43,7 +42,7 @@ func DockerReferenceNamespaces(ref reference.Named) []string {
// ref.FullName() == ref.Hostname() + "/" + ref.RemoteName(), so the last
// iteration matches the host name (for any namespace).
res := []string{}
name := ref.FullName()
name := ref.Name()
for {
res = append(res, name)


@@ -0,0 +1,2 @@
This is a copy of github.com/docker/distribution/reference as of commit fb0bebc4b64e3881cc52a2478d749845ed76d2a8,
except that ParseAnyReferenceWithSet has been removed to drop the dependency on github.com/docker/distribution/digestset.


@@ -0,0 +1,42 @@
package reference
import "path"
// IsNameOnly returns true if reference only contains a repo name.
func IsNameOnly(ref Named) bool {
if _, ok := ref.(NamedTagged); ok {
return false
}
if _, ok := ref.(Canonical); ok {
return false
}
return true
}
// FamiliarName returns the familiar name string
// for the given named, familiarizing if needed.
func FamiliarName(ref Named) string {
if nn, ok := ref.(normalizedNamed); ok {
return nn.Familiar().Name()
}
return ref.Name()
}
// FamiliarString returns the familiar string representation
// for the given reference, familiarizing if needed.
func FamiliarString(ref Reference) string {
if nn, ok := ref.(normalizedNamed); ok {
return nn.Familiar().String()
}
return ref.String()
}
// FamiliarMatch reports whether ref matches the specified pattern.
// See https://godoc.org/path#Match for supported patterns.
func FamiliarMatch(pattern string, ref Reference) (bool, error) {
matched, err := path.Match(pattern, FamiliarString(ref))
if namedRef, isNamed := ref.(Named); isNamed && !matched {
matched, _ = path.Match(pattern, FamiliarName(namedRef))
}
return matched, err
}

View File

@@ -0,0 +1,152 @@
package reference
import (
"errors"
"fmt"
"strings"
"github.com/opencontainers/go-digest"
)
var (
legacyDefaultDomain = "index.docker.io"
defaultDomain = "docker.io"
officialRepoName = "library"
defaultTag = "latest"
)
// normalizedNamed represents a name which has been
// normalized and has a familiar form. A familiar name
// is what is used in Docker UI. An example normalized
// name is "docker.io/library/ubuntu" and corresponding
// familiar name of "ubuntu".
type normalizedNamed interface {
Named
Familiar() Named
}
// ParseNormalizedNamed parses a string into a named reference
// transforming a familiar name from Docker UI to a fully
// qualified reference. If the value may be an identifier
// use ParseAnyReference.
func ParseNormalizedNamed(s string) (Named, error) {
if ok := anchoredIdentifierRegexp.MatchString(s); ok {
return nil, fmt.Errorf("invalid repository name (%s), cannot specify 64-byte hexadecimal strings", s)
}
domain, remainder := splitDockerDomain(s)
var remoteName string
if tagSep := strings.IndexRune(remainder, ':'); tagSep > -1 {
remoteName = remainder[:tagSep]
} else {
remoteName = remainder
}
if strings.ToLower(remoteName) != remoteName {
return nil, errors.New("invalid reference format: repository name must be lowercase")
}
ref, err := Parse(domain + "/" + remainder)
if err != nil {
return nil, err
}
named, isNamed := ref.(Named)
if !isNamed {
return nil, fmt.Errorf("reference %s has no name", ref.String())
}
return named, nil
}
// splitDockerDomain splits a repository name to domain and remotename string.
// If no valid domain is found, the default domain is used. Repository name
// needs to be already validated before.
func splitDockerDomain(name string) (domain, remainder string) {
i := strings.IndexRune(name, '/')
if i == -1 || (!strings.ContainsAny(name[:i], ".:") && name[:i] != "localhost") {
domain, remainder = defaultDomain, name
} else {
domain, remainder = name[:i], name[i+1:]
}
if domain == legacyDefaultDomain {
domain = defaultDomain
}
if domain == defaultDomain && !strings.ContainsRune(remainder, '/') {
remainder = officialRepoName + "/" + remainder
}
return
}
// familiarizeName returns a shortened version of the name familiar
// to the Docker UI. Familiar names have the default domain
// "docker.io" and "library/" repository prefix removed.
// For example, "docker.io/library/redis" will have the familiar
// name "redis" and "docker.io/dmcgowan/myapp" will be "dmcgowan/myapp".
// Returns a familiarized named-only reference.
func familiarizeName(named namedRepository) repository {
repo := repository{
domain: named.Domain(),
path: named.Path(),
}
if repo.domain == defaultDomain {
repo.domain = ""
// Handle official repositories which have the pattern "library/<official repo name>"
if split := strings.Split(repo.path, "/"); len(split) == 2 && split[0] == officialRepoName {
repo.path = split[1]
}
}
return repo
}
func (r reference) Familiar() Named {
return reference{
namedRepository: familiarizeName(r.namedRepository),
tag: r.tag,
digest: r.digest,
}
}
func (r repository) Familiar() Named {
return familiarizeName(r)
}
func (t taggedReference) Familiar() Named {
return taggedReference{
namedRepository: familiarizeName(t.namedRepository),
tag: t.tag,
}
}
func (c canonicalReference) Familiar() Named {
return canonicalReference{
namedRepository: familiarizeName(c.namedRepository),
digest: c.digest,
}
}
// TagNameOnly adds the default tag "latest" to a reference if it only has
// a repo name.
func TagNameOnly(ref Named) Named {
if IsNameOnly(ref) {
namedTagged, err := WithTag(ref, defaultTag)
if err != nil {
// Default tag must be valid, to create a NamedTagged
// type with non-validated input the WithTag function
// should be used instead
panic(err)
}
return namedTagged
}
return ref
}
// ParseAnyReference parses a reference string as a possible identifier,
// full digest, or familiar name.
func ParseAnyReference(ref string) (Reference, error) {
if ok := anchoredIdentifierRegexp.MatchString(ref); ok {
return digestReference("sha256:" + ref), nil
}
if dgst, err := digest.Parse(ref); err == nil {
return digestReference(dgst), nil
}
return ParseNormalizedNamed(ref)
}
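A short usage sketch of the normalization helpers above, assuming the vendored import path github.com/containers/image/docker/reference:

package main

import (
	"fmt"

	"github.com/containers/image/docker/reference"
)

func main() {
	named, err := reference.ParseNormalizedNamed("ubuntu")
	if err != nil {
		panic(err)
	}
	fmt.Println(named.String())                  // docker.io/library/ubuntu
	fmt.Println(reference.FamiliarString(named)) // ubuntu

	// TagNameOnly adds the default "latest" tag to a name-only reference.
	fmt.Println(reference.TagNameOnly(named).String()) // docker.io/library/ubuntu:latest
}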

View File

@@ -0,0 +1,433 @@
// Package reference provides a general type to represent any way of referencing images within the registry.
// Its main purpose is to abstract tags and digests (content-addressable hash).
//
// Grammar
//
// reference := name [ ":" tag ] [ "@" digest ]
// name := [domain '/'] path-component ['/' path-component]*
// domain := domain-component ['.' domain-component]* [':' port-number]
// domain-component := /([a-zA-Z0-9]|[a-zA-Z0-9][a-zA-Z0-9-]*[a-zA-Z0-9])/
// port-number := /[0-9]+/
// path-component := alpha-numeric [separator alpha-numeric]*
// alpha-numeric := /[a-z0-9]+/
// separator := /[_.]|__|[-]*/
//
// tag := /[\w][\w.-]{0,127}/
//
// digest := digest-algorithm ":" digest-hex
// digest-algorithm := digest-algorithm-component [ digest-algorithm-separator digest-algorithm-component ]
// digest-algorithm-separator := /[+.-_]/
// digest-algorithm-component := /[A-Za-z][A-Za-z0-9]*/
// digest-hex := /[0-9a-fA-F]{32,}/ ; At least 128 bit digest value
//
// identifier := /[a-f0-9]{64}/
// short-identifier := /[a-f0-9]{6,64}/
package reference
import (
"errors"
"fmt"
"strings"
"github.com/opencontainers/go-digest"
)
const (
// NameTotalLengthMax is the maximum total number of characters in a repository name.
NameTotalLengthMax = 255
)
var (
// ErrReferenceInvalidFormat represents an error while trying to parse a string as a reference.
ErrReferenceInvalidFormat = errors.New("invalid reference format")
// ErrTagInvalidFormat represents an error while trying to parse a string as a tag.
ErrTagInvalidFormat = errors.New("invalid tag format")
// ErrDigestInvalidFormat represents an error while trying to parse a string as a tag.
ErrDigestInvalidFormat = errors.New("invalid digest format")
// ErrNameContainsUppercase is returned for invalid repository names that contain uppercase characters.
ErrNameContainsUppercase = errors.New("repository name must be lowercase")
// ErrNameEmpty is returned for empty, invalid repository names.
ErrNameEmpty = errors.New("repository name must have at least one component")
// ErrNameTooLong is returned when a repository name is longer than NameTotalLengthMax.
ErrNameTooLong = fmt.Errorf("repository name must not be more than %v characters", NameTotalLengthMax)
// ErrNameNotCanonical is returned when a name is not canonical.
ErrNameNotCanonical = errors.New("repository name must be canonical")
)
// Reference is an opaque object reference identifier that may include
// modifiers such as a hostname, name, tag, and digest.
type Reference interface {
// String returns the full reference
String() string
}
// Field provides a wrapper type for resolving correct reference types when
// working with encoding.
type Field struct {
reference Reference
}
// AsField wraps a reference in a Field for encoding.
func AsField(reference Reference) Field {
return Field{reference}
}
// Reference unwraps the reference type from the field to
// return the Reference object. This object should be
// of the appropriate type to further check for different
// reference types.
func (f Field) Reference() Reference {
return f.reference
}
// MarshalText serializes the field to byte text which
// is the string of the reference.
func (f Field) MarshalText() (p []byte, err error) {
return []byte(f.reference.String()), nil
}
// UnmarshalText parses text bytes by invoking the
// reference parser to ensure the appropriately
// typed reference object is wrapped by field.
func (f *Field) UnmarshalText(p []byte) error {
r, err := Parse(string(p))
if err != nil {
return err
}
f.reference = r
return nil
}
// Named is an object with a full name
type Named interface {
Reference
Name() string
}
// Tagged is an object which has a tag
type Tagged interface {
Reference
Tag() string
}
// NamedTagged is an object including a name and tag.
type NamedTagged interface {
Named
Tag() string
}
// Digested is an object which has a digest
// in which it can be referenced by
type Digested interface {
Reference
Digest() digest.Digest
}
// Canonical reference is an object with a fully unique
// name including a name with domain and digest
type Canonical interface {
Named
Digest() digest.Digest
}
// namedRepository is a reference to a repository with a name.
// A namedRepository has both domain and path components.
type namedRepository interface {
Named
Domain() string
Path() string
}
// Domain returns the domain part of the Named reference
func Domain(named Named) string {
if r, ok := named.(namedRepository); ok {
return r.Domain()
}
domain, _ := splitDomain(named.Name())
return domain
}
// Path returns the name without the domain part of the Named reference
func Path(named Named) (name string) {
if r, ok := named.(namedRepository); ok {
return r.Path()
}
_, path := splitDomain(named.Name())
return path
}
func splitDomain(name string) (string, string) {
match := anchoredNameRegexp.FindStringSubmatch(name)
if len(match) != 3 {
return "", name
}
return match[1], match[2]
}
// SplitHostname splits a named reference into a
// hostname and name string. If no valid hostname is
// found, the hostname is empty and the full value
// is returned as name
// DEPRECATED: Use Domain or Path
func SplitHostname(named Named) (string, string) {
if r, ok := named.(namedRepository); ok {
return r.Domain(), r.Path()
}
return splitDomain(named.Name())
}
// Parse parses s and returns a syntactically valid Reference.
// If an error was encountered it is returned, along with a nil Reference.
// NOTE: Parse will not handle short digests.
func Parse(s string) (Reference, error) {
matches := ReferenceRegexp.FindStringSubmatch(s)
if matches == nil {
if s == "" {
return nil, ErrNameEmpty
}
if ReferenceRegexp.FindStringSubmatch(strings.ToLower(s)) != nil {
return nil, ErrNameContainsUppercase
}
return nil, ErrReferenceInvalidFormat
}
if len(matches[1]) > NameTotalLengthMax {
return nil, ErrNameTooLong
}
var repo repository
nameMatch := anchoredNameRegexp.FindStringSubmatch(matches[1])
if nameMatch != nil && len(nameMatch) == 3 {
repo.domain = nameMatch[1]
repo.path = nameMatch[2]
} else {
repo.domain = ""
repo.path = matches[1]
}
ref := reference{
namedRepository: repo,
tag: matches[2],
}
if matches[3] != "" {
var err error
ref.digest, err = digest.Parse(matches[3])
if err != nil {
return nil, err
}
}
r := getBestReferenceType(ref)
if r == nil {
return nil, ErrNameEmpty
}
return r, nil
}
// ParseNamed parses s and returns a syntactically valid reference implementing
// the Named interface. The reference must have a name and be in the canonical
// form, otherwise an error is returned.
// If an error was encountered it is returned, along with a nil Reference.
// NOTE: ParseNamed will not handle short digests.
func ParseNamed(s string) (Named, error) {
named, err := ParseNormalizedNamed(s)
if err != nil {
return nil, err
}
if named.String() != s {
return nil, ErrNameNotCanonical
}
return named, nil
}
// WithName returns a named object representing the given string. If the input
// is invalid ErrReferenceInvalidFormat will be returned.
func WithName(name string) (Named, error) {
if len(name) > NameTotalLengthMax {
return nil, ErrNameTooLong
}
match := anchoredNameRegexp.FindStringSubmatch(name)
if match == nil || len(match) != 3 {
return nil, ErrReferenceInvalidFormat
}
return repository{
domain: match[1],
path: match[2],
}, nil
}
// WithTag combines the name from "name" and the tag from "tag" to form a
// reference incorporating both the name and the tag.
func WithTag(name Named, tag string) (NamedTagged, error) {
if !anchoredTagRegexp.MatchString(tag) {
return nil, ErrTagInvalidFormat
}
var repo repository
if r, ok := name.(namedRepository); ok {
repo.domain = r.Domain()
repo.path = r.Path()
} else {
repo.path = name.Name()
}
if canonical, ok := name.(Canonical); ok {
return reference{
namedRepository: repo,
tag: tag,
digest: canonical.Digest(),
}, nil
}
return taggedReference{
namedRepository: repo,
tag: tag,
}, nil
}
// WithDigest combines the name from "name" and the digest from "digest" to form
// a reference incorporating both the name and the digest.
func WithDigest(name Named, digest digest.Digest) (Canonical, error) {
if !anchoredDigestRegexp.MatchString(digest.String()) {
return nil, ErrDigestInvalidFormat
}
var repo repository
if r, ok := name.(namedRepository); ok {
repo.domain = r.Domain()
repo.path = r.Path()
} else {
repo.path = name.Name()
}
if tagged, ok := name.(Tagged); ok {
return reference{
namedRepository: repo,
tag: tagged.Tag(),
digest: digest,
}, nil
}
return canonicalReference{
namedRepository: repo,
digest: digest,
}, nil
}
// TrimNamed removes any tag or digest from the named reference.
func TrimNamed(ref Named) Named {
domain, path := SplitHostname(ref)
return repository{
domain: domain,
path: path,
}
}
func getBestReferenceType(ref reference) Reference {
if ref.Name() == "" {
// Allow digest-only references
if ref.digest != "" {
return digestReference(ref.digest)
}
return nil
}
if ref.tag == "" {
if ref.digest != "" {
return canonicalReference{
namedRepository: ref.namedRepository,
digest: ref.digest,
}
}
return ref.namedRepository
}
if ref.digest == "" {
return taggedReference{
namedRepository: ref.namedRepository,
tag: ref.tag,
}
}
return ref
}
type reference struct {
namedRepository
tag string
digest digest.Digest
}
func (r reference) String() string {
return r.Name() + ":" + r.tag + "@" + r.digest.String()
}
func (r reference) Tag() string {
return r.tag
}
func (r reference) Digest() digest.Digest {
return r.digest
}
type repository struct {
domain string
path string
}
func (r repository) String() string {
return r.Name()
}
func (r repository) Name() string {
if r.domain == "" {
return r.path
}
return r.domain + "/" + r.path
}
func (r repository) Domain() string {
return r.domain
}
func (r repository) Path() string {
return r.path
}
type digestReference digest.Digest
func (d digestReference) String() string {
return digest.Digest(d).String()
}
func (d digestReference) Digest() digest.Digest {
return digest.Digest(d)
}
type taggedReference struct {
namedRepository
tag string
}
func (t taggedReference) String() string {
return t.Name() + ":" + t.tag
}
func (t taggedReference) Tag() string {
return t.tag
}
type canonicalReference struct {
namedRepository
digest digest.Digest
}
func (c canonicalReference) String() string {
return c.Name() + "@" + c.digest.String()
}
func (c canonicalReference) Digest() digest.Digest {
return c.digest
}
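A small sketch of how the types above compose, using Parse, TrimNamed, and WithDigest (the reference string is made up, and the digest is computed from a throwaway string purely for illustration):

package main

import (
	"fmt"

	"github.com/containers/image/docker/reference"
	"github.com/opencontainers/go-digest"
)

func main() {
	ref, err := reference.Parse("example.com/app/web:v1")
	if err != nil {
		panic(err)
	}
	named := ref.(reference.Named) // a tag is present, so this is a NamedTagged
	fmt.Println(reference.Domain(named), reference.Path(named)) // example.com app/web

	// Drop the tag and attach a digest to obtain a Canonical reference.
	canonical, err := reference.WithDigest(reference.TrimNamed(named), digest.FromString("hello"))
	if err != nil {
		panic(err)
	}
	fmt.Println(canonical.String()) // example.com/app/web@sha256:2cf24d...
}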

View File

@@ -0,0 +1,143 @@
package reference
import "regexp"
var (
// alphaNumericRegexp defines the alpha numeric atom, typically a
// component of names. This only allows lower case characters and digits.
alphaNumericRegexp = match(`[a-z0-9]+`)
// separatorRegexp defines the separators allowed to be embedded in name
// components. This allows one period, one or two underscores, and multiple
// dashes.
separatorRegexp = match(`(?:[._]|__|[-]*)`)
// nameComponentRegexp restricts registry path component names to start
// with at least one letter or number, with following parts able to be
// separated by one period, one or two underscores, and multiple dashes.
nameComponentRegexp = expression(
alphaNumericRegexp,
optional(repeated(separatorRegexp, alphaNumericRegexp)))
// domainComponentRegexp restricts the registry domain component of a
// repository name to start with a component as defined by domainRegexp
// and followed by an optional port.
domainComponentRegexp = match(`(?:[a-zA-Z0-9]|[a-zA-Z0-9][a-zA-Z0-9-]*[a-zA-Z0-9])`)
// domainRegexp defines the structure of potential domain components
// that may be part of image names. This is purposely a subset of what is
// allowed by DNS to ensure backwards compatibility with Docker image
// names.
domainRegexp = expression(
domainComponentRegexp,
optional(repeated(literal(`.`), domainComponentRegexp)),
optional(literal(`:`), match(`[0-9]+`)))
// TagRegexp matches valid tag names. From docker/docker:graph/tags.go.
TagRegexp = match(`[\w][\w.-]{0,127}`)
// anchoredTagRegexp matches valid tag names, anchored at the start and
// end of the matched string.
anchoredTagRegexp = anchored(TagRegexp)
// DigestRegexp matches valid digests.
DigestRegexp = match(`[A-Za-z][A-Za-z0-9]*(?:[-_+.][A-Za-z][A-Za-z0-9]*)*[:][[:xdigit:]]{32,}`)
// anchoredDigestRegexp matches valid digests, anchored at the start and
// end of the matched string.
anchoredDigestRegexp = anchored(DigestRegexp)
// NameRegexp is the format for the name component of references. The
// regexp has capturing groups for the domain and name part omitting
// the separating forward slash from either.
NameRegexp = expression(
optional(domainRegexp, literal(`/`)),
nameComponentRegexp,
optional(repeated(literal(`/`), nameComponentRegexp)))
// anchoredNameRegexp is used to parse a name value, capturing the
// domain and trailing components.
anchoredNameRegexp = anchored(
optional(capture(domainRegexp), literal(`/`)),
capture(nameComponentRegexp,
optional(repeated(literal(`/`), nameComponentRegexp))))
// ReferenceRegexp is the full supported format of a reference. The regexp
// is anchored and has capturing groups for name, tag, and digest
// components.
ReferenceRegexp = anchored(capture(NameRegexp),
optional(literal(":"), capture(TagRegexp)),
optional(literal("@"), capture(DigestRegexp)))
// IdentifierRegexp is the format for string identifier used as a
// content addressable identifier using sha256. These identifiers
// are like digests without the algorithm, since sha256 is used.
IdentifierRegexp = match(`([a-f0-9]{64})`)
// ShortIdentifierRegexp is the format used to represent a prefix
// of an identifier. A prefix may be used to match a sha256 identifier
// within a list of trusted identifiers.
ShortIdentifierRegexp = match(`([a-f0-9]{6,64})`)
// anchoredIdentifierRegexp is used to check or match an
// identifier value, anchored at start and end of string.
anchoredIdentifierRegexp = anchored(IdentifierRegexp)
// anchoredShortIdentifierRegexp is used to check if a value
// is a possible identifier prefix, anchored at start and end
// of string.
anchoredShortIdentifierRegexp = anchored(ShortIdentifierRegexp)
)
// match compiles the string to a regular expression.
var match = regexp.MustCompile
// literal compiles s into a literal regular expression, escaping any regexp
// reserved characters.
func literal(s string) *regexp.Regexp {
re := match(regexp.QuoteMeta(s))
if _, complete := re.LiteralPrefix(); !complete {
panic("must be a literal")
}
return re
}
// expression defines a full expression, where each regular expression must
// follow the previous.
func expression(res ...*regexp.Regexp) *regexp.Regexp {
var s string
for _, re := range res {
s += re.String()
}
return match(s)
}
// optional wraps the expression in a non-capturing group and makes the
// production optional.
func optional(res ...*regexp.Regexp) *regexp.Regexp {
return match(group(expression(res...)).String() + `?`)
}
// repeated wraps the regexp in a non-capturing group to get one or more
// matches.
func repeated(res ...*regexp.Regexp) *regexp.Regexp {
return match(group(expression(res...)).String() + `+`)
}
// group wraps the regexp in a non-capturing group.
func group(res ...*regexp.Regexp) *regexp.Regexp {
return match(`(?:` + expression(res...).String() + `)`)
}
// capture wraps the expression in a capturing group.
func capture(res ...*regexp.Regexp) *regexp.Regexp {
return match(`(` + expression(res...).String() + `)`)
}
// anchored anchors the regular expression by adding start and end delimiters.
func anchored(res ...*regexp.Regexp) *regexp.Regexp {
return match(`^` + expression(res...).String() + `$`)
}
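To illustrate the capture groups: ReferenceRegexp yields the name, tag, and digest components in submatches 1-3. A minimal check (the input reference is invented):

package main

import (
	"fmt"
	"strings"

	"github.com/containers/image/docker/reference"
)

func main() {
	s := "localhost:5000/app:v1@sha256:" + strings.Repeat("a", 64)
	m := reference.ReferenceRegexp.FindStringSubmatch(s)
	if m == nil {
		panic("unexpected: reference did not match")
	}
	fmt.Println(m[1]) // localhost:5000/app
	fmt.Println(m[2]) // v1
	fmt.Println(m[3]) // sha256:aaaa... (the full digest)
}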

View File

@@ -0,0 +1,258 @@
package tarfile
import (
"archive/tar"
"bytes"
"encoding/json"
"fmt"
"io"
"io/ioutil"
"os"
"time"
"github.com/Sirupsen/logrus"
"github.com/containers/image/docker/reference"
"github.com/containers/image/manifest"
"github.com/containers/image/types"
"github.com/opencontainers/go-digest"
"github.com/pkg/errors"
)
const temporaryDirectoryForBigFiles = "/var/tmp" // Do not use the system default of os.TempDir(), usually /tmp, because with systemd it could be a tmpfs.
// Destination is a partial implementation of types.ImageDestination for writing to an io.Writer.
type Destination struct {
writer io.Writer
tar *tar.Writer
repoTag string
// Other state.
blobs map[digest.Digest]types.BlobInfo // list of already-sent blobs
}
// NewDestination returns a tarfile.Destination for the specified io.Writer.
func NewDestination(dest io.Writer, ref reference.NamedTagged) *Destination {
// For github.com/docker/docker consumers, this works just as well as
// refString := ref.String()
// because when reading the RepoTags strings, github.com/docker/docker/reference
// normalizes both of them to the same value.
//
// Doing it this way to include the normalized-out `docker.io[/library]` does make
// a difference for github.com/projectatomic/docker consumers, with the
// “Add --add-registry and --block-registry options to docker daemon” patch.
// These consumers treat reference strings which include a hostname and reference
// strings without a hostname differently.
//
// Using the host name here is more explicit about the intent, and it has the same
// effect as (docker pull) in projectatomic/docker, which tags the result using
// a hostname-qualified reference.
// See https://github.com/containers/image/issues/72 for a more detailed
// analysis and explanation.
refString := fmt.Sprintf("%s:%s", ref.Name(), ref.Tag())
return &Destination{
writer: dest,
tar: tar.NewWriter(dest),
repoTag: refString,
blobs: make(map[digest.Digest]types.BlobInfo),
}
}
// SupportedManifestMIMETypes tells which manifest mime types the destination supports
// If an empty slice or nil is returned, then any MIME type can be tried for upload
func (d *Destination) SupportedManifestMIMETypes() []string {
return []string{
manifest.DockerV2Schema2MediaType, // We rely on the types.Image.UpdatedImage schema conversion capabilities.
}
}
// SupportsSignatures returns an error (to be displayed to the user) if the destination certainly can't store signatures.
// Note: It is still possible for PutSignatures to fail if SupportsSignatures returns nil.
func (d *Destination) SupportsSignatures() error {
return errors.Errorf("Storing signatures for docker tar files is not supported")
}
// ShouldCompressLayers returns true iff it is desirable to compress layer blobs written to this destination.
func (d *Destination) ShouldCompressLayers() bool {
return false
}
// AcceptsForeignLayerURLs returns false iff foreign layers in the manifest should actually be
// uploaded to the image destination, true otherwise.
func (d *Destination) AcceptsForeignLayerURLs() bool {
return false
}
// MustMatchRuntimeOS returns true iff the destination can store only images targeted for the current runtime OS. False otherwise.
func (d *Destination) MustMatchRuntimeOS() bool {
return false
}
// PutBlob writes contents of stream and returns data representing the result (with all data filled in).
// inputInfo.Digest can be optionally provided if known; it is not mandatory for the implementation to verify it.
// inputInfo.Size is the expected length of stream, if known.
// WARNING: The contents of stream are being verified on the fly. Until stream.Read() returns io.EOF, the contents of the data SHOULD NOT be available
// to any other readers for download using the supplied digest.
// If stream.Read() at any time, ESPECIALLY at end of input, returns an error, PutBlob MUST 1) fail, and 2) delete any data stored so far.
func (d *Destination) PutBlob(stream io.Reader, inputInfo types.BlobInfo) (types.BlobInfo, error) {
if inputInfo.Digest.String() == "" {
return types.BlobInfo{}, errors.Errorf("Can not stream a blob with unknown digest to docker tarfile")
}
ok, size, err := d.HasBlob(inputInfo)
if err != nil {
return types.BlobInfo{}, err
}
if ok {
return types.BlobInfo{Digest: inputInfo.Digest, Size: size}, nil
}
if inputInfo.Size == -1 { // Ouch, we need to stream the blob into a temporary file just to determine the size.
logrus.Debugf("docker tarfile: input with unknown size, streaming to disk first ...")
streamCopy, err := ioutil.TempFile(temporaryDirectoryForBigFiles, "docker-tarfile-blob")
if err != nil {
return types.BlobInfo{}, err
}
defer os.Remove(streamCopy.Name())
defer streamCopy.Close()
size, err := io.Copy(streamCopy, stream)
if err != nil {
return types.BlobInfo{}, err
}
_, err = streamCopy.Seek(0, os.SEEK_SET)
if err != nil {
return types.BlobInfo{}, err
}
inputInfo.Size = size // inputInfo is a struct, so we are only modifying our copy.
stream = streamCopy
logrus.Debugf("... streaming done")
}
digester := digest.Canonical.Digester()
tee := io.TeeReader(stream, digester.Hash())
if err := d.sendFile(inputInfo.Digest.String(), inputInfo.Size, tee); err != nil {
return types.BlobInfo{}, err
}
d.blobs[inputInfo.Digest] = types.BlobInfo{Digest: digester.Digest(), Size: inputInfo.Size}
return types.BlobInfo{Digest: digester.Digest(), Size: inputInfo.Size}, nil
}
// HasBlob returns true iff the image destination already contains a blob with
// the matching digest which can be reapplied using ReapplyBlob. Unlike
// PutBlob, the digest can not be empty. If HasBlob returns true, the size of
// the blob must also be returned. If the destination does not contain the
// blob, or it is unknown, HasBlob ordinarily returns (false, -1, nil); it
// returns a non-nil error only on an unexpected failure.
func (d *Destination) HasBlob(info types.BlobInfo) (bool, int64, error) {
if info.Digest == "" {
return false, -1, errors.Errorf("Can not check for a blob with unknown digest")
}
if blob, ok := d.blobs[info.Digest]; ok {
return true, blob.Size, nil
}
return false, -1, nil
}
// ReapplyBlob informs the image destination that a blob for which HasBlob
// previously returned true would have been passed to PutBlob if it had
// returned false. Like HasBlob and unlike PutBlob, the digest can not be
// empty. If the blob is a filesystem layer, this signifies that the changes
// it describes need to be applied again when composing a filesystem tree.
func (d *Destination) ReapplyBlob(info types.BlobInfo) (types.BlobInfo, error) {
return info, nil
}
// PutManifest writes manifest to the destination.
// FIXME? This should also receive a MIME type if known, to differentiate between schema versions.
// If the destination is in principle available, refuses this manifest type (e.g. it does not recognize the schema),
// but may accept a different manifest type, the returned error must be an ManifestTypeRejectedError.
func (d *Destination) PutManifest(m []byte) error {
// We do not bother with types.ManifestTypeRejectedError; our .SupportedManifestMIMETypes() above is already providing only one alternative,
// so the caller trying a different manifest kind would be pointless.
var man schema2Manifest
if err := json.Unmarshal(m, &man); err != nil {
return errors.Wrap(err, "Error parsing manifest")
}
if man.SchemaVersion != 2 || man.MediaType != manifest.DockerV2Schema2MediaType {
return errors.Errorf("Unsupported manifest type, need a Docker schema 2 manifest")
}
layerPaths := []string{}
for _, l := range man.Layers {
layerPaths = append(layerPaths, l.Digest.String())
}
items := []ManifestItem{{
Config: man.Config.Digest.String(),
RepoTags: []string{d.repoTag},
Layers: layerPaths,
Parent: "",
LayerSources: nil,
}}
itemsBytes, err := json.Marshal(&items)
if err != nil {
return err
}
// FIXME? Do we also need to support the legacy format?
return d.sendFile(manifestFileName, int64(len(itemsBytes)), bytes.NewReader(itemsBytes))
}
type tarFI struct {
path string
size int64
}
func (t *tarFI) Name() string {
return t.path
}
func (t *tarFI) Size() int64 {
return t.size
}
func (t *tarFI) Mode() os.FileMode {
return 0444
}
func (t *tarFI) ModTime() time.Time {
return time.Unix(0, 0)
}
func (t *tarFI) IsDir() bool {
return false
}
func (t *tarFI) Sys() interface{} {
return nil
}
// sendFile sends a file into the tar stream.
func (d *Destination) sendFile(path string, expectedSize int64, stream io.Reader) error {
hdr, err := tar.FileInfoHeader(&tarFI{path: path, size: expectedSize}, "")
if err != nil {
return err
}
logrus.Debugf("Sending as tar file %s", path)
if err := d.tar.WriteHeader(hdr); err != nil {
return err
}
size, err := io.Copy(d.tar, stream)
if err != nil {
return err
}
if size != expectedSize {
return errors.Errorf("Size mismatch when copying %s, expected %d, got %d", path, expectedSize, size)
}
return nil
}
// PutSignatures adds the given signatures to the docker tarfile (currently not
// supported). MUST be called after PutManifest (signatures reference manifest
// contents)
func (d *Destination) PutSignatures(signatures [][]byte) error {
if len(signatures) != 0 {
return errors.Errorf("Storing signatures for docker tar files is not supported")
}
return nil
}
// Commit finishes writing data to the underlying io.Writer.
// It is the caller's responsibility to close it, if necessary.
func (d *Destination) Commit() error {
return d.tar.Close()
}
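PutBlob's digest handling above relies on a tee: the blob bytes flow toward the tar writer while a digester observes the same stream, so the digest is known once the copy finishes. A standalone sketch of that pattern (the blob contents are a stand-in):

package main

import (
	"fmt"
	"io"
	"io/ioutil"
	"strings"

	"github.com/opencontainers/go-digest"
)

func main() {
	blob := strings.NewReader("layer contents") // stand-in for the incoming blob stream
	digester := digest.Canonical.Digester()
	tee := io.TeeReader(blob, digester.Hash())

	// The real code copies into the tar stream via sendFile; Discard suffices here.
	n, err := io.Copy(ioutil.Discard, tee)
	if err != nil {
		panic(err)
	}
	fmt.Println(n, digester.Digest()) // size and digest are both known after the copy
}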

View File

@@ -0,0 +1,3 @@
// Package tarfile is an internal implementation detail of some transports.
// Do not use outside of the github.com/containers/image repo!
package tarfile

View File

@@ -0,0 +1,359 @@
package tarfile
import (
"archive/tar"
"bytes"
"encoding/json"
"io"
"io/ioutil"
"os"
"path"
"github.com/containers/image/manifest"
"github.com/containers/image/pkg/compression"
"github.com/containers/image/types"
"github.com/opencontainers/go-digest"
"github.com/pkg/errors"
)
// Source is a partial implementation of types.ImageSource for reading from tarPath.
type Source struct {
tarPath string
// The following data is only available after ensureCachedDataIsPresent() succeeds
tarManifest *ManifestItem // nil if not available yet.
configBytes []byte
configDigest digest.Digest
orderedDiffIDList []diffID
knownLayers map[diffID]*layerInfo
// Other state
generatedManifest []byte // Private cache for GetManifest(), nil if not set yet.
}
type layerInfo struct {
path string
size int64
}
// NewSource returns a tarfile.Source for the specified path.
func NewSource(path string) *Source {
// TODO: We could add support for multiple images in a single archive, so
// that people could use docker-archive:opensuse.tar:opensuse:leap as
// the source of an image.
return &Source{
tarPath: path,
}
}
// tarReadCloser is a way to close the backing file of a tar.Reader when the user no longer needs the tar component.
type tarReadCloser struct {
*tar.Reader
backingFile *os.File
}
func (t *tarReadCloser) Close() error {
return t.backingFile.Close()
}
// openTarComponent returns a ReadCloser for the specific file within the archive.
// This is a linear scan; we assume that the tar file will have a fairly small number of files (~layers),
// and that filesystem caching will make the repeated seeking over the (uncompressed) tarPath cheap enough.
// The caller should call .Close() on the returned stream.
func (s *Source) openTarComponent(componentPath string) (io.ReadCloser, error) {
f, err := os.Open(s.tarPath)
if err != nil {
return nil, err
}
succeeded := false
defer func() {
if !succeeded {
f.Close()
}
}()
tarReader, header, err := findTarComponent(f, componentPath)
if err != nil {
return nil, err
}
if header == nil {
return nil, os.ErrNotExist
}
if header.FileInfo().Mode()&os.ModeType == os.ModeSymlink { // FIXME: untested
// We follow only one symlink; so no loops are possible.
if _, err := f.Seek(0, os.SEEK_SET); err != nil {
return nil, err
}
// The new path could easily point "outside" the archive, but we only compare it to existing tar headers without extracting the archive,
// so we don't care.
tarReader, header, err = findTarComponent(f, path.Join(path.Dir(componentPath), header.Linkname))
if err != nil {
return nil, err
}
if header == nil {
return nil, os.ErrNotExist
}
}
if !header.FileInfo().Mode().IsRegular() {
return nil, errors.Errorf("Error reading tar archive component %s: not a regular file", header.Name)
}
succeeded = true
return &tarReadCloser{Reader: tarReader, backingFile: f}, nil
}
// findTarComponent returns a header and a reader matching path within inputFile,
// or (nil, nil, nil) if not found.
func findTarComponent(inputFile io.Reader, path string) (*tar.Reader, *tar.Header, error) {
t := tar.NewReader(inputFile)
for {
h, err := t.Next()
if err == io.EOF {
break
}
if err != nil {
return nil, nil, err
}
if h.Name == path {
return t, h, nil
}
}
return nil, nil, nil
}
// readTarComponent returns full contents of componentPath.
func (s *Source) readTarComponent(path string) ([]byte, error) {
file, err := s.openTarComponent(path)
if err != nil {
return nil, errors.Wrapf(err, "Error loading tar component %s", path)
}
defer file.Close()
bytes, err := ioutil.ReadAll(file)
if err != nil {
return nil, err
}
return bytes, nil
}
// ensureCachedDataIsPresent loads data necessary for any of the public accessors.
func (s *Source) ensureCachedDataIsPresent() error {
if s.tarManifest != nil {
return nil
}
// Read and parse manifest.json
tarManifest, err := s.loadTarManifest()
if err != nil {
return err
}
// Check to make sure length is 1
if len(tarManifest) != 1 {
return errors.Errorf("Unexpected tar manifest.json: expected 1 item, got %d", len(tarManifest))
}
// Read and parse config.
configBytes, err := s.readTarComponent(tarManifest[0].Config)
if err != nil {
return err
}
var parsedConfig image // Most fields omitted; we only care about layer DiffIDs.
if err := json.Unmarshal(configBytes, &parsedConfig); err != nil {
return errors.Wrapf(err, "Error decoding tar config %s", tarManifest[0].Config)
}
knownLayers, err := s.prepareLayerData(&tarManifest[0], &parsedConfig)
if err != nil {
return err
}
// Success; commit.
s.tarManifest = &tarManifest[0]
s.configBytes = configBytes
s.configDigest = digest.FromBytes(configBytes)
s.orderedDiffIDList = parsedConfig.RootFS.DiffIDs
s.knownLayers = knownLayers
return nil
}
// loadTarManifest loads and decodes the manifest.json.
func (s *Source) loadTarManifest() ([]ManifestItem, error) {
// FIXME? Do we need to deal with the legacy format?
bytes, err := s.readTarComponent(manifestFileName)
if err != nil {
return nil, err
}
var items []ManifestItem
if err := json.Unmarshal(bytes, &items); err != nil {
return nil, errors.Wrap(err, "Error decoding tar manifest.json")
}
return items, nil
}
// LoadTarManifest loads and decodes the manifest.json
func (s *Source) LoadTarManifest() ([]ManifestItem, error) {
return s.loadTarManifest()
}
func (s *Source) prepareLayerData(tarManifest *ManifestItem, parsedConfig *image) (map[diffID]*layerInfo, error) {
// Collect layer data available in manifest and config.
if len(tarManifest.Layers) != len(parsedConfig.RootFS.DiffIDs) {
return nil, errors.Errorf("Inconsistent layer count: %d in manifest, %d in config", len(tarManifest.Layers), len(parsedConfig.RootFS.DiffIDs))
}
knownLayers := map[diffID]*layerInfo{}
unknownLayerSizes := map[string]*layerInfo{} // Points into knownLayers, a "to do list" of items with unknown sizes.
for i, diffID := range parsedConfig.RootFS.DiffIDs {
if _, ok := knownLayers[diffID]; ok {
// Apparently it really can happen that a single image contains the same layer diff more than once.
// In that case, the diffID validation ensures that both layers truly are the same, and it should not matter
// which of the tarManifest.Layers paths is used; (docker save) actually makes the duplicates symlinks to the original.
continue
}
layerPath := tarManifest.Layers[i]
if _, ok := unknownLayerSizes[layerPath]; ok {
return nil, errors.Errorf("Layer tarfile %s used for two different DiffID values", layerPath)
}
li := &layerInfo{ // A new element in each iteration
path: layerPath,
size: -1,
}
knownLayers[diffID] = li
unknownLayerSizes[layerPath] = li
}
// Scan the tar file to collect layer sizes.
file, err := os.Open(s.tarPath)
if err != nil {
return nil, err
}
defer file.Close()
t := tar.NewReader(file)
for {
h, err := t.Next()
if err == io.EOF {
break
}
if err != nil {
return nil, err
}
if li, ok := unknownLayerSizes[h.Name]; ok {
li.size = h.Size
delete(unknownLayerSizes, h.Name)
}
}
if len(unknownLayerSizes) != 0 {
return nil, errors.Errorf("Some layer tarfiles are missing in the tarball") // This could do with a better error reporting, if this ever happened in practice.
}
return knownLayers, nil
}
// GetManifest returns the image's manifest along with its MIME type (which may be empty when it can't be determined but the manifest is available).
// It may use a remote (= slow) service.
func (s *Source) GetManifest() ([]byte, string, error) {
if s.generatedManifest == nil {
if err := s.ensureCachedDataIsPresent(); err != nil {
return nil, "", err
}
m := schema2Manifest{
SchemaVersion: 2,
MediaType: manifest.DockerV2Schema2MediaType,
Config: distributionDescriptor{
MediaType: manifest.DockerV2Schema2ConfigMediaType,
Size: int64(len(s.configBytes)),
Digest: s.configDigest,
},
Layers: []distributionDescriptor{},
}
for _, diffID := range s.orderedDiffIDList {
li, ok := s.knownLayers[diffID]
if !ok {
return nil, "", errors.Errorf("Internal inconsistency: Information about layer %s missing", diffID)
}
m.Layers = append(m.Layers, distributionDescriptor{
Digest: digest.Digest(diffID), // diffID is a digest of the uncompressed tarball
MediaType: manifest.DockerV2Schema2LayerMediaType,
Size: li.size,
})
}
manifestBytes, err := json.Marshal(&m)
if err != nil {
return nil, "", err
}
s.generatedManifest = manifestBytes
}
return s.generatedManifest, manifest.DockerV2Schema2MediaType, nil
}
// GetTargetManifest returns an image's manifest given a digest. This is mainly used to retrieve a single image's manifest
// out of a manifest list.
func (s *Source) GetTargetManifest(digest digest.Digest) ([]byte, string, error) {
// How did we even get here? GetManifest() above has returned a manifest.DockerV2Schema2MediaType.
return nil, "", errors.Errorf(`Manifest lists are not supported by "docker-daemon:"`)
}
type readCloseWrapper struct {
io.Reader
closeFunc func() error
}
func (r readCloseWrapper) Close() error {
if r.closeFunc != nil {
return r.closeFunc()
}
return nil
}
// GetBlob returns a stream for the specified blob, and the blobs size (or -1 if unknown).
func (s *Source) GetBlob(info types.BlobInfo) (io.ReadCloser, int64, error) {
if err := s.ensureCachedDataIsPresent(); err != nil {
return nil, 0, err
}
if info.Digest == s.configDigest { // FIXME? Implement a more general algorithm matching instead of assuming sha256.
return ioutil.NopCloser(bytes.NewReader(s.configBytes)), int64(len(s.configBytes)), nil
}
if li, ok := s.knownLayers[diffID(info.Digest)]; ok { // diffID is a digest of the uncompressed tarball,
stream, err := s.openTarComponent(li.path)
if err != nil {
return nil, 0, err
}
// In order to handle the fact that digests != diffIDs (and thus that a
// caller which is trying to verify the blob will run into problems),
// we need to decompress blobs. This is a bit ugly, but it's a
// consequence of making everything addressable by their DiffID rather
// than by their digest...
//
// In particular, because the v2s2 manifest being generated uses
// DiffIDs, any caller of GetBlob is going to be asking for DiffIDs of
// layers not their _actual_ digest. The result is that copy/... will
// be verifying a "digest" which is not the actual layer's digest (but
// is instead the DiffID).
decompressFunc, reader, err := compression.DetectCompression(stream)
if err != nil {
return nil, 0, errors.Wrapf(err, "Detecting compression in blob %s", info.Digest)
}
if decompressFunc != nil {
reader, err = decompressFunc(reader)
if err != nil {
return nil, 0, errors.Wrapf(err, "Decompressing blob %s stream", info.Digest)
}
}
newStream := readCloseWrapper{
Reader: reader,
closeFunc: stream.Close,
}
return newStream, li.size, nil
}
return nil, 0, errors.Errorf("Unknown blob %s", info.Digest)
}
// GetSignatures returns the image's signatures. It may use a remote (= slow) service.
func (s *Source) GetSignatures() ([][]byte, error) {
return [][]byte{}, nil
}
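The linear scan in findTarComponent boils down to walking tar headers until the requested member shows up. A standalone sketch of that loop (the archive path is hypothetical):

package main

import (
	"archive/tar"
	"fmt"
	"io"
	"os"
)

// findMember walks the archive headers and returns the header of the
// requested member, or nil if the member is not present.
func findMember(archivePath, member string) (*tar.Header, error) {
	f, err := os.Open(archivePath)
	if err != nil {
		return nil, err
	}
	defer f.Close()
	t := tar.NewReader(f)
	for {
		h, err := t.Next()
		if err == io.EOF {
			return nil, nil
		}
		if err != nil {
			return nil, err
		}
		if h.Name == member {
			return h, nil
		}
	}
}

func main() {
	h, err := findMember("image.tar", "manifest.json") // hypothetical archive
	if err != nil {
		panic(err)
	}
	if h != nil {
		fmt.Println(h.Name, h.Size)
	}
}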

View File

@@ -0,0 +1,54 @@
package tarfile
import "github.com/opencontainers/go-digest"
// Various data structures.
// Based on github.com/docker/docker/image/tarexport/tarexport.go
const (
manifestFileName = "manifest.json"
// legacyLayerFileName = "layer.tar"
// legacyConfigFileName = "json"
// legacyVersionFileName = "VERSION"
// legacyRepositoriesFileName = "repositories"
)
// ManifestItem is an element of the array stored in the top-level manifest.json file.
type ManifestItem struct {
Config string
RepoTags []string
Layers []string
Parent imageID `json:",omitempty"`
LayerSources map[diffID]distributionDescriptor `json:",omitempty"`
}
type imageID string
type diffID digest.Digest
// Based on github.com/docker/distribution/blobs.go
type distributionDescriptor struct {
MediaType string `json:"mediaType,omitempty"`
Size int64 `json:"size,omitempty"`
Digest digest.Digest `json:"digest,omitempty"`
URLs []string `json:"urls,omitempty"`
}
// Based on github.com/docker/distribution/manifest/schema2/manifest.go
// FIXME: We are repeating this all over the place; make a public copy?
type schema2Manifest struct {
SchemaVersion int `json:"schemaVersion"`
MediaType string `json:"mediaType,omitempty"`
Config distributionDescriptor `json:"config"`
Layers []distributionDescriptor `json:"layers"`
}
// Based on github.com/docker/docker/image/image.go
// MOST CONTENT OMITTED AS UNNECESSARY
type image struct {
RootFS *rootFS `json:"rootfs,omitempty"`
}
type rootFS struct {
Type string `json:"type"`
DiffIDs []diffID `json:"diff_ids,omitempty"`
}
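For orientation, the manifest.json this package reads and writes is an array of such items (a single element in this Destination's case). A sketch of the shape, with invented values and a trimmed stand-in type:

package main

import (
	"encoding/json"
	"fmt"
)

// manifestItem is a trimmed stand-in for ManifestItem above; the values
// below are invented purely to show the JSON layout.
type manifestItem struct {
	Config   string
	RepoTags []string
	Layers   []string
}

func main() {
	items := []manifestItem{{
		Config:   "sha256:0123abcd", // hypothetical config blob name
		RepoTags: []string{"docker.io/library/busybox:latest"},
		Layers:   []string{"sha256:4567ef89"}, // hypothetical layer path
	}}
	out, err := json.MarshalIndent(items, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}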

View File

@@ -0,0 +1,63 @@
package image
import (
"encoding/json"
"runtime"
"github.com/containers/image/manifest"
"github.com/containers/image/types"
"github.com/opencontainers/go-digest"
"github.com/pkg/errors"
)
type platformSpec struct {
Architecture string `json:"architecture"`
OS string `json:"os"`
OSVersion string `json:"os.version,omitempty"`
OSFeatures []string `json:"os.features,omitempty"`
Variant string `json:"variant,omitempty"`
Features []string `json:"features,omitempty"` // removed in OCI
}
// A manifestDescriptor references a platform-specific manifest.
type manifestDescriptor struct {
descriptor
Platform platformSpec `json:"platform"`
}
type manifestList struct {
SchemaVersion int `json:"schemaVersion"`
MediaType string `json:"mediaType"`
Manifests []manifestDescriptor `json:"manifests"`
}
func manifestSchema2FromManifestList(src types.ImageSource, manblob []byte) (genericManifest, error) {
list := manifestList{}
if err := json.Unmarshal(manblob, &list); err != nil {
return nil, err
}
var targetManifestDigest digest.Digest
for _, d := range list.Manifests {
if d.Platform.Architecture == runtime.GOARCH && d.Platform.OS == runtime.GOOS {
targetManifestDigest = d.Digest
break
}
}
if targetManifestDigest == "" {
return nil, errors.New("no supported platform found in manifest list")
}
manblob, mt, err := src.GetTargetManifest(targetManifestDigest)
if err != nil {
return nil, err
}
matches, err := manifest.MatchesDigest(manblob, targetManifestDigest)
if err != nil {
return nil, errors.Wrap(err, "Error computing manifest digest")
}
if !matches {
return nil, errors.Errorf("Manifest image does not match selected manifest digest %s", targetManifestDigest)
}
return manifestInstanceFromBlob(src, manblob, mt)
}
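The selection loop above simply takes the first manifest-list entry whose platform matches the current runtime. Sketched standalone with invented entries:

package main

import (
	"fmt"
	"runtime"
)

type entry struct {
	digest, arch, os string
}

// pick returns the digest of the first entry matching the running platform.
func pick(entries []entry) (string, bool) {
	for _, e := range entries {
		if e.arch == runtime.GOARCH && e.os == runtime.GOOS {
			return e.digest, true
		}
	}
	return "", false
}

func main() {
	entries := []entry{
		{"sha256:1111", "arm64", "linux"},
		{"sha256:2222", "amd64", "linux"},
	} // invented manifest-list entries
	fmt.Println(pick(entries))
}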

View File

@@ -2,12 +2,16 @@ package image
import (
"encoding/json"
"errors"
"fmt"
"regexp"
"strings"
"time"
"github.com/containers/image/docker/reference"
"github.com/containers/image/manifest"
"github.com/containers/image/types"
"github.com/opencontainers/go-digest"
imgspecv1 "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/pkg/errors"
)
var (
@@ -15,18 +19,33 @@ var (
)
type fsLayersSchema1 struct {
BlobSum string `json:"blobSum"`
BlobSum digest.Digest `json:"blobSum"`
}
type historySchema1 struct {
V1Compatibility string `json:"v1Compatibility"`
}
// v1Compatibility is the JSON structure stored as a string in historySchema1.V1Compatibility. It is similar to v1Image but not the same; in particular, note the ThrowAway field.
type v1Compatibility struct {
ID string `json:"id"`
Parent string `json:"parent,omitempty"`
Comment string `json:"comment,omitempty"`
Created time.Time `json:"created"`
ContainerConfig struct {
Cmd []string
} `json:"container_config,omitempty"`
Author string `json:"author,omitempty"`
ThrowAway bool `json:"throwaway,omitempty"`
}
type manifestSchema1 struct {
Name string `json:"name"`
Tag string `json:"tag"`
Architecture string `json:"architecture"`
FSLayers []fsLayersSchema1 `json:"fsLayers"`
History []struct {
V1Compatibility string `json:"v1Compatibility"`
} `json:"history"`
SchemaVersion int `json:"schemaVersion"`
Name string `json:"name"`
Tag string `json:"tag"`
Architecture string `json:"architecture"`
FSLayers []fsLayersSchema1 `json:"fsLayers"`
History []historySchema1 `json:"history"`
SchemaVersion int `json:"schemaVersion"`
}
func manifestSchema1FromManifest(manifest []byte) (genericManifest, error) {
@@ -34,23 +53,80 @@ func manifestSchema1FromManifest(manifest []byte) (genericManifest, error) {
if err := json.Unmarshal(manifest, mschema1); err != nil {
return nil, err
}
if mschema1.SchemaVersion != 1 {
return nil, errors.Errorf("unsupported schema version %d", mschema1.SchemaVersion)
}
if len(mschema1.FSLayers) != len(mschema1.History) {
return nil, errors.New("length of history not equal to number of layers")
}
if len(mschema1.FSLayers) == 0 {
return nil, errors.New("no FSLayers in manifest")
}
if err := fixManifestLayers(mschema1); err != nil {
return nil, err
}
// TODO(runcom): verify manifest schema 1, 2 etc
//if len(m.FSLayers) != len(m.History) {
//return nil, fmt.Errorf("length of history not equal to number of layers for %q", ref.String())
//}
//if len(m.FSLayers) == 0 {
//return nil, fmt.Errorf("no FSLayers in manifest for %q", ref.String())
//}
return mschema1, nil
}
// manifestSchema1FromComponents builds a new manifestSchema1 from the supplied data.
func manifestSchema1FromComponents(ref reference.Named, fsLayers []fsLayersSchema1, history []historySchema1, architecture string) genericManifest {
var name, tag string
if ref != nil { // Well, what to do if it _is_ nil? Most consumers actually don't use these fields nowadays, so we might as well try not supplying them.
name = reference.Path(ref)
if tagged, ok := ref.(reference.NamedTagged); ok {
tag = tagged.Tag()
}
}
return &manifestSchema1{
Name: name,
Tag: tag,
Architecture: architecture,
FSLayers: fsLayers,
History: history,
SchemaVersion: 1,
}
}
func (m *manifestSchema1) serialize() ([]byte, error) {
// docker/distribution requires a signature even if the incoming data uses the nominally unsigned DockerV2Schema1MediaType.
unsigned, err := json.Marshal(*m)
if err != nil {
return nil, err
}
return manifest.AddDummyV2S1Signature(unsigned)
}
func (m *manifestSchema1) manifestMIMEType() string {
return manifest.DockerV2Schema1SignedMediaType
}
// ConfigInfo returns a complete BlobInfo for the separate config object, or a BlobInfo{Digest:""} if there isn't a separate object.
// Note that the config object may not exist in the underlying storage in the return value of UpdatedImage! Use ConfigBlob() below.
func (m *manifestSchema1) ConfigInfo() types.BlobInfo {
return types.BlobInfo{}
}
// ConfigBlob returns the blob described by ConfigInfo, iff ConfigInfo().Digest != ""; nil otherwise.
// The result is cached; it is OK to call this however often you need.
func (m *manifestSchema1) ConfigBlob() ([]byte, error) {
return nil, nil
}
// OCIConfig returns the image configuration as per OCI v1 image-spec. Information about
// layers in the resulting configuration isn't guaranteed to be returned due to how
// old image manifests work (docker v2s1 especially).
func (m *manifestSchema1) OCIConfig() (*imgspecv1.Image, error) {
v2s2, err := m.convertToManifestSchema2(nil, nil)
if err != nil {
return nil, err
}
return v2s2.OCIConfig()
}
// LayerInfos returns a list of BlobInfos of layers referenced by this image, in order (the root layer first, and then successive layered layers).
// The Digest field is guaranteed to be provided; Size may be -1.
// WARNING: The list may contain duplicates, and they are semantically relevant.
func (m *manifestSchema1) LayerInfos() []types.BlobInfo {
layers := make([]types.BlobInfo, len(m.FSLayers))
for i, layer := range m.FSLayers { // NOTE: This includes empty layers (where m.History.V1Compatibility->ThrowAway)
@@ -59,17 +135,30 @@ func (m *manifestSchema1) LayerInfos() []types.BlobInfo {
return layers
}
func (m *manifestSchema1) Config() ([]byte, error) {
return []byte(m.History[0].V1Compatibility), nil
// EmbeddedDockerReferenceConflicts reports whether a Docker reference embedded in the manifest, if any, conflicts with destination ref.
// It returns false if the manifest does not embed a Docker reference.
// (This embedding unfortunately happens for Docker schema1, please do not add support for this in any new formats.)
func (m *manifestSchema1) EmbeddedDockerReferenceConflicts(ref reference.Named) bool {
// This is a bit convoluted: We can't just have a "get embedded docker reference" method
// and have the “does it conflict” logic in the generic copy code, because the manifest does not actually
// embed a full docker/distribution reference, but only the repo name and tag (without the host name).
// So we would have to provide a “return repo without host name, and tag” getter for the generic code,
// which would be very awkward. Instead, we do the matching here in schema1-specific code, and all the
// generic copy code needs to know about is reference.Named and that a manifest may need updating
// for some destinations.
name := reference.Path(ref)
var tag string
if tagged, isTagged := ref.(reference.NamedTagged); isTagged {
tag = tagged.Tag()
} else {
tag = ""
}
return m.Name != name || m.Tag != tag
}
func (m *manifestSchema1) ImageInspectInfo() (*types.ImageInspectInfo, error) {
func (m *manifestSchema1) imageInspectInfo() (*types.ImageInspectInfo, error) {
v1 := &v1Image{}
config, err := m.Config()
if err != nil {
return nil, err
}
if err := json.Unmarshal(config, v1); err != nil {
if err := json.Unmarshal([]byte(m.History[0].V1Compatibility), v1); err != nil {
return nil, err
}
return &types.ImageInspectInfo{
@@ -82,12 +171,21 @@ func (m *manifestSchema1) ImageInspectInfo() (*types.ImageInspectInfo, error) {
}, nil
}
func (m *manifestSchema1) UpdatedManifest(options types.ManifestUpdateOptions) ([]byte, error) {
// UpdatedImageNeedsLayerDiffIDs returns true iff UpdatedImage(options) needs InformationOnly.LayerDiffIDs.
// This is a horribly specific interface, but computing InformationOnly.LayerDiffIDs can be very expensive to compute
// (most importantly it forces us to download the full layers even if they are already present at the destination).
func (m *manifestSchema1) UpdatedImageNeedsLayerDiffIDs(options types.ManifestUpdateOptions) bool {
return options.ManifestMIMEType == manifest.DockerV2Schema2MediaType
}
// UpdatedImage returns a types.Image modified according to options.
// This does not change the state of the original Image object.
func (m *manifestSchema1) UpdatedImage(options types.ManifestUpdateOptions) (types.Image, error) {
copy := *m
if options.LayerInfos != nil {
// Our LayerInfos includes empty layers (where m.History.V1Compatibility->ThrowAway), so expect them to be included here as well.
if len(copy.FSLayers) != len(options.LayerInfos) {
return nil, fmt.Errorf("Error preparing updated manifest: layer count changed from %d to %d", len(copy.FSLayers), len(options.LayerInfos))
return nil, errors.Errorf("Error preparing updated manifest: layer count changed from %d to %d", len(copy.FSLayers), len(options.LayerInfos))
}
for i, info := range options.LayerInfos {
// (docker push) sets up m.History.V1Compatibility->{Id,Parent} based on values of info.Digest,
@@ -96,12 +194,27 @@ func (m *manifestSchema1) UpdatedManifest(options types.ManifestUpdateOptions) (
copy.FSLayers[(len(options.LayerInfos)-1)-i].BlobSum = info.Digest
}
}
// docker/distribution requires a signature even if the incoming data uses the nominally unsigned DockerV2Schema1MediaType.
unsigned, err := json.Marshal(copy)
if err != nil {
return nil, err
if options.EmbeddedDockerReference != nil {
copy.Name = reference.Path(options.EmbeddedDockerReference)
if tagged, isTagged := options.EmbeddedDockerReference.(reference.NamedTagged); isTagged {
copy.Tag = tagged.Tag()
} else {
copy.Tag = ""
}
}
return manifest.AddDummyV2S1Signature(unsigned)
switch options.ManifestMIMEType {
case "": // No conversion, OK
case manifest.DockerV2Schema1MediaType, manifest.DockerV2Schema1SignedMediaType:
// We have 2 MIME types for schema 1, which are basically equivalent (even the un-"Signed" MIME type will be rejected if there isn't a signature); so,
// handle conversions between them by doing nothing.
case manifest.DockerV2Schema2MediaType:
return copy.convertToManifestSchema2(options.InformationOnly.LayerInfos, options.InformationOnly.LayerDiffIDs)
default:
return nil, errors.Errorf("Conversion of image manifest from %s to %s is not implemented", manifest.DockerV2Schema1SignedMediaType, options.ManifestMIMEType)
}
return memoryImageFromManifest(&copy), nil
}
// fixManifestLayers, after validating the supplied manifest
@@ -130,7 +243,7 @@ func fixManifestLayers(manifest *manifestSchema1) error {
}
}
if imgs[len(imgs)-1].Parent != "" {
return errors.New("Invalid parent ID in the base layer of the image.")
return errors.New("Invalid parent ID in the base layer of the image")
}
// check general duplicates to error instead of a deadlock
idmap := make(map[string]struct{})
@@ -138,7 +251,7 @@ func fixManifestLayers(manifest *manifestSchema1) error {
for _, img := range imgs {
// skip IDs that appear after each other, we handle those later
if _, exists := idmap[img.ID]; img.ID != lastID && exists {
return fmt.Errorf("ID %+v appears multiple times in manifest", img.ID)
return errors.Errorf("ID %+v appears multiple times in manifest", img.ID)
}
lastID = img.ID
idmap[lastID] = struct{}{}
@@ -149,7 +262,7 @@ func fixManifestLayers(manifest *manifestSchema1) error {
manifest.FSLayers = append(manifest.FSLayers[:i], manifest.FSLayers[i+1:]...)
manifest.History = append(manifest.History[:i], manifest.History[i+1:]...)
} else if imgs[i].Parent != imgs[i+1].ID {
return fmt.Errorf("Invalid parent ID. Expected %v, got %v.", imgs[i+1].ID, imgs[i].Parent)
return errors.Errorf("Invalid parent ID. Expected %v, got %v", imgs[i+1].ID, imgs[i].Parent)
}
}
return nil
@@ -157,7 +270,106 @@ func fixManifestLayers(manifest *manifestSchema1) error {
func validateV1ID(id string) error {
if ok := validHex.MatchString(id); !ok {
return fmt.Errorf("image ID %q is invalid", id)
return errors.Errorf("image ID %q is invalid", id)
}
return nil
}
// Based on github.com/docker/docker/distribution/pull_v2.go
func (m *manifestSchema1) convertToManifestSchema2(uploadedLayerInfos []types.BlobInfo, layerDiffIDs []digest.Digest) (types.Image, error) {
if len(m.History) == 0 {
// What would this even mean?! Anyhow, the rest of the code depends on fsLayers[0] and history[0] existing.
return nil, errors.Errorf("Cannot convert an image with 0 history entries to %s", manifest.DockerV2Schema2MediaType)
}
if len(m.History) != len(m.FSLayers) {
return nil, errors.Errorf("Inconsistent schema 1 manifest: %d history entries, %d fsLayers entries", len(m.History), len(m.FSLayers))
}
if uploadedLayerInfos != nil && len(uploadedLayerInfos) != len(m.FSLayers) {
return nil, errors.Errorf("Internal error: uploaded %d blobs, but schema1 manifest has %d fsLayers", len(uploadedLayerInfos), len(m.FSLayers))
}
if layerDiffIDs != nil && len(layerDiffIDs) != len(m.FSLayers) {
return nil, errors.Errorf("Internal error: collected %d DiffID values, but schema1 manifest has %d fsLayers", len(layerDiffIDs), len(m.FSLayers))
}
rootFS := rootFS{
Type: "layers",
DiffIDs: []digest.Digest{},
BaseLayer: "",
}
var layers []descriptor
history := make([]imageHistory, len(m.History))
for v1Index := len(m.History) - 1; v1Index >= 0; v1Index-- {
v2Index := (len(m.History) - 1) - v1Index
var v1compat v1Compatibility
if err := json.Unmarshal([]byte(m.History[v1Index].V1Compatibility), &v1compat); err != nil {
return nil, errors.Wrapf(err, "Error decoding history entry %d", v1Index)
}
history[v2Index] = imageHistory{
Created: v1compat.Created,
Author: v1compat.Author,
CreatedBy: strings.Join(v1compat.ContainerConfig.Cmd, " "),
Comment: v1compat.Comment,
EmptyLayer: v1compat.ThrowAway,
}
if !v1compat.ThrowAway {
var size int64
if uploadedLayerInfos != nil {
size = uploadedLayerInfos[v2Index].Size
}
var d digest.Digest
if layerDiffIDs != nil {
d = layerDiffIDs[v2Index]
}
layers = append(layers, descriptor{
MediaType: "application/vnd.docker.image.rootfs.diff.tar.gzip",
Size: size,
Digest: m.FSLayers[v1Index].BlobSum,
})
rootFS.DiffIDs = append(rootFS.DiffIDs, d)
}
}
configJSON, err := configJSONFromV1Config([]byte(m.History[0].V1Compatibility), rootFS, history)
if err != nil {
return nil, err
}
configDescriptor := descriptor{
MediaType: "application/vnd.docker.container.image.v1+json",
Size: int64(len(configJSON)),
Digest: digest.FromBytes(configJSON),
}
m2 := manifestSchema2FromComponents(configDescriptor, nil, configJSON, layers)
return memoryImageFromManifest(m2), nil
}
func configJSONFromV1Config(v1ConfigJSON []byte, rootFS rootFS, history []imageHistory) ([]byte, error) {
// github.com/docker/docker/image/v1/imagev1.go:MakeConfigFromV1Config unmarshals and re-marshals the input if docker_version is < 1.8.3 to remove blank fields;
// we don't do that here. FIXME? Should we? AFAICT it would only affect the digest value of the schema2 manifest, and we don't particularly need that to be
// a consistently reproducible value.
// Preserve everything we don't specifically know about.
// (This must be a *json.RawMessage, even though *[]byte is fairly redundant, because only *RawMessage implements json.Marshaler.)
rawContents := map[string]*json.RawMessage{}
if err := json.Unmarshal(v1ConfigJSON, &rawContents); err != nil { // We have already unmarshaled it before, using a more detailed schema?!
return nil, err
}
delete(rawContents, "id")
delete(rawContents, "parent")
delete(rawContents, "Size")
delete(rawContents, "parent_id")
delete(rawContents, "layer_id")
delete(rawContents, "throwaway")
updates := map[string]interface{}{"rootfs": rootFS, "history": history}
for field, value := range updates {
encoded, err := json.Marshal(value)
if err != nil {
return nil, err
}
rawContents[field] = (*json.RawMessage)(&encoded)
}
return json.Marshal(rawContents)
}
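The map[string]*json.RawMessage trick above is the entire mechanism for preserving unknown config fields across the conversion. A minimal, self-contained sketch of the same round-trip, using only the standard library (the field names are fabricated for illustration):

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	input := []byte(`{"id":"abc","author":"jane","custom_field":42}`)

	// Unmarshal into raw messages so unrecognized fields survive untouched.
	raw := map[string]*json.RawMessage{}
	if err := json.Unmarshal(input, &raw); err != nil {
		panic(err)
	}
	delete(raw, "id") // drop a field, as the conversion drops v1-only fields

	// Add or overwrite a field without disturbing the rest.
	encoded, err := json.Marshal("some-parent-id")
	if err != nil {
		panic(err)
	}
	raw["parent"] = (*json.RawMessage)(&encoded)

	out, _ := json.Marshal(raw)
	fmt.Println(string(out)) // custom_field is preserved verbatim
}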

View File

@@ -1,25 +1,48 @@
package image
import (
"bytes"
"crypto/sha256"
"encoding/hex"
"encoding/json"
"fmt"
"io/ioutil"
"strings"
"github.com/Sirupsen/logrus"
"github.com/containers/image/docker/reference"
"github.com/containers/image/manifest"
"github.com/containers/image/types"
"github.com/opencontainers/go-digest"
imgspecv1 "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/pkg/errors"
)
// gzippedEmptyLayer is a gzip-compressed version of an empty tar file (1024 NULL bytes)
// This comes from github.com/docker/distribution/manifest/schema1/config_builder.go; there is
// a non-zero embedded timestamp; we could zero that, but that would just waste storage space
// in registries, so let's use the same values.
var gzippedEmptyLayer = []byte{
31, 139, 8, 0, 0, 9, 110, 136, 0, 255, 98, 24, 5, 163, 96, 20, 140, 88,
0, 8, 0, 0, 255, 255, 46, 175, 181, 239, 0, 4, 0, 0,
}
// gzippedEmptyLayerDigest is a digest of gzippedEmptyLayer
const gzippedEmptyLayerDigest = digest.Digest("sha256:a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4")
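As a sanity check, both claims in the comment can be reproduced offline; a sketch using only the standard library (the byte literal repeats the variable above so the snippet stands alone):

package main

import (
	"bytes"
	"compress/gzip"
	"crypto/sha256"
	"fmt"
	"io/ioutil"
)

func main() {
	gzippedEmptyLayer := []byte{
		31, 139, 8, 0, 0, 9, 110, 136, 0, 255, 98, 24, 5, 163, 96, 20, 140, 88,
		0, 8, 0, 0, 255, 255, 46, 175, 181, 239, 0, 4, 0, 0,
	}

	zr, err := gzip.NewReader(bytes.NewReader(gzippedEmptyLayer))
	if err != nil {
		panic(err)
	}
	payload, err := ioutil.ReadAll(zr)
	if err != nil {
		panic(err)
	}
	fmt.Println(len(payload)) // expected: 1024 (an empty tar is two zeroed 512-byte blocks)

	// gzippedEmptyLayerDigest is the digest of the compressed bytes themselves.
	fmt.Printf("sha256:%x\n", sha256.Sum256(gzippedEmptyLayer))
}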
type descriptor struct {
MediaType string `json:"mediaType"`
Size int64 `json:"size"`
Digest string `json:"digest"`
MediaType string `json:"mediaType"`
Size int64 `json:"size"`
Digest digest.Digest `json:"digest"`
URLs []string `json:"urls,omitempty"`
}
type manifestSchema2 struct {
src types.ImageSource
SchemaVersion int `json:"schemaVersion"`
MediaType string `json:"mediaType"`
ConfigDescriptor descriptor `json:"config"`
LayersDescriptors []descriptor `json:"layers"`
src types.ImageSource // May be nil if configBlob is not nil
configBlob []byte // If set, corresponds to contents of ConfigDescriptor.
SchemaVersion int `json:"schemaVersion"`
MediaType string `json:"mediaType"`
ConfigDescriptor descriptor `json:"config"`
LayersDescriptors []descriptor `json:"layers"`
}
func manifestSchema2FromManifest(src types.ImageSource, manifest []byte) (genericManifest, error) {
@@ -30,30 +53,103 @@ func manifestSchema2FromManifest(src types.ImageSource, manifest []byte) (generi
return &v2s2, nil
}
// manifestSchema2FromComponents builds a new manifestSchema2 from the supplied data:
func manifestSchema2FromComponents(config descriptor, src types.ImageSource, configBlob []byte, layers []descriptor) genericManifest {
return &manifestSchema2{
src: src,
configBlob: configBlob,
SchemaVersion: 2,
MediaType: manifest.DockerV2Schema2MediaType,
ConfigDescriptor: config,
LayersDescriptors: layers,
}
}
func (m *manifestSchema2) serialize() ([]byte, error) {
return json.Marshal(*m)
}
func (m *manifestSchema2) manifestMIMEType() string {
return m.MediaType
}
// ConfigInfo returns a complete BlobInfo for the separate config object, or a BlobInfo{Digest:""} if there isn't a separate object.
// Note that the config object may not exist in the underlying storage in the return value of UpdatedImage! Use ConfigBlob() below.
func (m *manifestSchema2) ConfigInfo() types.BlobInfo {
return types.BlobInfo{Digest: m.ConfigDescriptor.Digest, Size: m.ConfigDescriptor.Size}
}
// OCIConfig returns the image configuration as per OCI v1 image-spec. Information about
// layers in the resulting configuration isn't guaranteed to be returned due to how
// old image manifests work (docker v2s1 especially).
func (m *manifestSchema2) OCIConfig() (*imgspecv1.Image, error) {
configBlob, err := m.ConfigBlob()
if err != nil {
return nil, err
}
// docker v2s2 and OCI v1 are mostly compatible but v2s2 contains more fields
// than OCI v1. This unmarshal makes sure we drop docker v2s2
// fields that aren't needed in OCI v1.
configOCI := &imgspecv1.Image{}
if err := json.Unmarshal(configBlob, configOCI); err != nil {
return nil, err
}
return configOCI, nil
}
// ConfigBlob returns the blob described by ConfigInfo, iff ConfigInfo().Digest != ""; nil otherwise.
// The result is cached; it is OK to call this however often you need.
func (m *manifestSchema2) ConfigBlob() ([]byte, error) {
if m.configBlob == nil {
if m.src == nil {
return nil, errors.Errorf("Internal error: neither src nor configBlob set in manifestSchema2")
}
stream, _, err := m.src.GetBlob(types.BlobInfo{
Digest: m.ConfigDescriptor.Digest,
Size: m.ConfigDescriptor.Size,
URLs: m.ConfigDescriptor.URLs,
})
if err != nil {
return nil, err
}
defer stream.Close()
blob, err := ioutil.ReadAll(stream)
if err != nil {
return nil, err
}
computedDigest := digest.FromBytes(blob)
if computedDigest != m.ConfigDescriptor.Digest {
return nil, errors.Errorf("Download config.json digest %s does not match expected %s", computedDigest, m.ConfigDescriptor.Digest)
}
m.configBlob = blob
}
return m.configBlob, nil
}
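The lazy fetch-verify-cache pattern above generalizes to any blob download. A sketch of just the verification step (verifiedBlob is a hypothetical helper, not part of the vendored code; assumes the imports already used in this file plus io):

// verifiedBlob reads all of r and returns the bytes only if their digest matches expected.
func verifiedBlob(r io.Reader, expected digest.Digest) ([]byte, error) {
	blob, err := ioutil.ReadAll(r)
	if err != nil {
		return nil, err
	}
	if computed := digest.FromBytes(blob); computed != expected {
		return nil, errors.Errorf("blob digest %s does not match expected %s", computed, expected)
	}
	return blob, nil
}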
// LayerInfos returns a list of BlobInfos of layers referenced by this image, in order (the root layer first, and then successive layered layers).
// The Digest field is guaranteed to be provided; Size may be -1.
// WARNING: The list may contain duplicates, and they are semantically relevant.
func (m *manifestSchema2) LayerInfos() []types.BlobInfo {
blobs := []types.BlobInfo{}
for _, layer := range m.LayersDescriptors {
blobs = append(blobs, types.BlobInfo{Digest: layer.Digest, Size: layer.Size})
blobs = append(blobs, types.BlobInfo{
Digest: layer.Digest,
Size: layer.Size,
URLs: layer.URLs,
})
}
return blobs
}
func (m *manifestSchema2) Config() ([]byte, error) {
rawConfig, _, err := m.src.GetBlob(m.ConfigDescriptor.Digest)
if err != nil {
return nil, err
}
config, err := ioutil.ReadAll(rawConfig)
rawConfig.Close()
return config, err
// EmbeddedDockerReferenceConflicts returns whether a Docker reference embedded in the manifest, if any, conflicts with destination ref.
// It returns false if the manifest does not embed a Docker reference.
// (This embedding unfortunately happens for Docker schema1, please do not add support for this in any new formats.)
func (m *manifestSchema2) EmbeddedDockerReferenceConflicts(ref reference.Named) bool {
return false
}
func (m *manifestSchema2) ImageInspectInfo() (*types.ImageInspectInfo, error) {
config, err := m.Config()
func (m *manifestSchema2) imageInspectInfo() (*types.ImageInspectInfo, error) {
config, err := m.ConfigBlob()
if err != nil {
return nil, err
}
@@ -70,16 +166,199 @@ func (m *manifestSchema2) ImageInspectInfo() (*types.ImageInspectInfo, error) {
}, nil
}
func (m *manifestSchema2) UpdatedManifest(options types.ManifestUpdateOptions) ([]byte, error) {
copy := *m
// UpdatedImageNeedsLayerDiffIDs returns true iff UpdatedImage(options) needs InformationOnly.LayerDiffIDs.
// This is a horribly specific interface, but computing InformationOnly.LayerDiffIDs can be very expensive
// (most importantly it forces us to download the full layers even if they are already present at the destination).
func (m *manifestSchema2) UpdatedImageNeedsLayerDiffIDs(options types.ManifestUpdateOptions) bool {
return false
}
// UpdatedImage returns a types.Image modified according to options.
// This does not change the state of the original Image object.
func (m *manifestSchema2) UpdatedImage(options types.ManifestUpdateOptions) (types.Image, error) {
copy := *m // NOTE: This is not a deep copy, it still shares slices etc.
if options.LayerInfos != nil {
if len(copy.LayersDescriptors) != len(options.LayerInfos) {
return nil, fmt.Errorf("Error preparing updated manifest: layer count changed from %d to %d", len(copy.LayersDescriptors), len(options.LayerInfos))
return nil, errors.Errorf("Error preparing updated manifest: layer count changed from %d to %d", len(copy.LayersDescriptors), len(options.LayerInfos))
}
copy.LayersDescriptors = make([]descriptor, len(options.LayerInfos))
for i, info := range options.LayerInfos {
copy.LayersDescriptors[i].MediaType = m.LayersDescriptors[i].MediaType
copy.LayersDescriptors[i].Digest = info.Digest
copy.LayersDescriptors[i].Size = info.Size
copy.LayersDescriptors[i].URLs = info.URLs
}
}
return json.Marshal(copy)
// Ignore options.EmbeddedDockerReference: it may be set when converting from schema1 to schema2, but we really don't care.
switch options.ManifestMIMEType {
case "": // No conversion, OK
case manifest.DockerV2Schema1SignedMediaType, manifest.DockerV2Schema1MediaType:
return copy.convertToManifestSchema1(options.InformationOnly.Destination)
case imgspecv1.MediaTypeImageManifest:
return copy.convertToManifestOCI1()
default:
return nil, errors.Errorf("Conversion of image manifest from %s to %s is not implemented", manifest.DockerV2Schema2MediaType, options.ManifestMIMEType)
}
return memoryImageFromManifest(&copy), nil
}
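Conversion is thus driven entirely by options.ManifestMIMEType. A hedged usage sketch (toOCI is a hypothetical helper; img is any types.Image backed by this package):

// toOCI returns an in-memory view of img converted to the OCI manifest format.
// Only the manifest changes; the config and layer blobs are shared with the original.
func toOCI(img types.Image) (types.Image, error) {
	return img.UpdatedImage(types.ManifestUpdateOptions{
		ManifestMIMEType: imgspecv1.MediaTypeImageManifest,
	})
}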
func (m *manifestSchema2) convertToManifestOCI1() (types.Image, error) {
configOCI, err := m.OCIConfig()
if err != nil {
return nil, err
}
configOCIBytes, err := json.Marshal(configOCI)
if err != nil {
return nil, err
}
config := descriptorOCI1{
descriptor: descriptor{
MediaType: imgspecv1.MediaTypeImageConfig,
Size: int64(len(configOCIBytes)),
Digest: digest.FromBytes(configOCIBytes),
},
}
layers := make([]descriptorOCI1, len(m.LayersDescriptors))
for idx := range layers {
layers[idx] = descriptorOCI1{descriptor: m.LayersDescriptors[idx]}
if m.LayersDescriptors[idx].MediaType == manifest.DockerV2Schema2ForeignLayerMediaType {
layers[idx].MediaType = imgspecv1.MediaTypeImageLayerNonDistributable
} else {
// we assume layers are gzip'ed because docker v2s2 only deals with
// gzip'ed layers. However, OCI has non-gzip'ed layers as well.
layers[idx].MediaType = imgspecv1.MediaTypeImageLayerGzip
}
}
m1 := manifestOCI1FromComponents(config, m.src, configOCIBytes, layers)
return memoryImageFromManifest(m1), nil
}
// Based on docker/distribution/manifest/schema1/config_builder.go
func (m *manifestSchema2) convertToManifestSchema1(dest types.ImageDestination) (types.Image, error) {
configBytes, err := m.ConfigBlob()
if err != nil {
return nil, err
}
imageConfig := &image{}
if err := json.Unmarshal(configBytes, imageConfig); err != nil {
return nil, err
}
// Build fsLayers and History, discarding all configs. We will patch the top-level config in later.
fsLayers := make([]fsLayersSchema1, len(imageConfig.History))
history := make([]historySchema1, len(imageConfig.History))
nonemptyLayerIndex := 0
var parentV1ID string // Set in the loop
v1ID := ""
haveGzippedEmptyLayer := false
if len(imageConfig.History) == 0 {
// What would this even mean?! Anyhow, the rest of the code depends on fsLayers[0] and history[0] existing.
return nil, errors.Errorf("Cannot convert an image with 0 history entries to %s", manifest.DockerV2Schema1SignedMediaType)
}
for v2Index, historyEntry := range imageConfig.History {
parentV1ID = v1ID
v1Index := len(imageConfig.History) - 1 - v2Index
var blobDigest digest.Digest
if historyEntry.EmptyLayer {
if !haveGzippedEmptyLayer {
logrus.Debugf("Uploading empty layer during conversion to schema 1")
info, err := dest.PutBlob(bytes.NewReader(gzippedEmptyLayer), types.BlobInfo{Digest: gzippedEmptyLayerDigest, Size: int64(len(gzippedEmptyLayer))})
if err != nil {
return nil, errors.Wrap(err, "Error uploading empty layer")
}
if info.Digest != gzippedEmptyLayerDigest {
return nil, errors.Errorf("Internal error: Uploaded empty layer has digest %#v instead of %s", info.Digest, gzippedEmptyLayerDigest)
}
haveGzippedEmptyLayer = true
}
blobDigest = gzippedEmptyLayerDigest
} else {
if nonemptyLayerIndex >= len(m.LayersDescriptors) {
return nil, errors.Errorf("Invalid image configuration, needs more than the %d distributed layers", len(m.LayersDescriptors))
}
blobDigest = m.LayersDescriptors[nonemptyLayerIndex].Digest
nonemptyLayerIndex++
}
// AFAICT pull ignores these ID values, at least nowadays, so we could use anything unique, including a simple counter. Use what Docker uses for cargo-cult consistency.
v, err := v1IDFromBlobDigestAndComponents(blobDigest, parentV1ID)
if err != nil {
return nil, err
}
v1ID = v
fakeImage := v1Compatibility{
ID: v1ID,
Parent: parentV1ID,
Comment: historyEntry.Comment,
Created: historyEntry.Created,
Author: historyEntry.Author,
ThrowAway: historyEntry.EmptyLayer,
}
fakeImage.ContainerConfig.Cmd = []string{historyEntry.CreatedBy}
v1CompatibilityBytes, err := json.Marshal(&fakeImage)
if err != nil {
return nil, errors.Errorf("Internal error: Error creating v1compatibility for %#v", fakeImage)
}
fsLayers[v1Index] = fsLayersSchema1{BlobSum: blobDigest}
history[v1Index] = historySchema1{V1Compatibility: string(v1CompatibilityBytes)}
// Note that parentV1ID of the top layer is preserved when exiting this loop
}
// Now patch in real configuration for the top layer (v1Index == 0)
v1ID, err = v1IDFromBlobDigestAndComponents(fsLayers[0].BlobSum, parentV1ID, string(configBytes)) // See above WRT v1ID value generation and cargo-cult consistency.
if err != nil {
return nil, err
}
v1Config, err := v1ConfigFromConfigJSON(configBytes, v1ID, parentV1ID, imageConfig.History[len(imageConfig.History)-1].EmptyLayer)
if err != nil {
return nil, err
}
history[0].V1Compatibility = string(v1Config)
m1 := manifestSchema1FromComponents(dest.Reference().DockerReference(), fsLayers, history, imageConfig.Architecture)
return memoryImageFromManifest(m1), nil
}
func v1IDFromBlobDigestAndComponents(blobDigest digest.Digest, others ...string) (string, error) {
if err := blobDigest.Validate(); err != nil {
return "", err
}
parts := append([]string{blobDigest.Hex()}, others...)
v1IDHash := sha256.Sum256([]byte(strings.Join(parts, " ")))
return hex.EncodeToString(v1IDHash[:]), nil
}
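So a synthetic v1 ID is just the hex-encoded SHA-256 of the blob digest's hex, the parent ID, and optionally the config JSON, joined by single spaces. A standalone sketch with fabricated inputs:

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"strings"
)

func main() {
	// Fabricated inputs: a layer blob digest hex and a parent v1 ID.
	parts := []string{
		"0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef", // blobDigest.Hex()
		"fedcba9876543210fedcba9876543210fedcba9876543210fedcba9876543210", // parentV1ID
	}
	sum := sha256.Sum256([]byte(strings.Join(parts, " ")))
	fmt.Println(hex.EncodeToString(sum[:])) // the derived v1 ID
}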
func v1ConfigFromConfigJSON(configJSON []byte, v1ID, parentV1ID string, throwaway bool) ([]byte, error) {
// Preserve everything we don't specifically know about.
// (This must be a *json.RawMessage, even though *[]byte is fairly redundant, because only *RawMessage implements json.Marshaler.)
rawContents := map[string]*json.RawMessage{}
if err := json.Unmarshal(configJSON, &rawContents); err != nil { // We have already unmarshaled it before, using a more detailed schema?!
return nil, err
}
delete(rawContents, "rootfs")
delete(rawContents, "history")
updates := map[string]interface{}{"id": v1ID}
if parentV1ID != "" {
updates["parent"] = parentV1ID
}
if throwaway {
updates["throwaway"] = throwaway
}
for field, value := range updates {
encoded, err := json.Marshal(value)
if err != nil {
return nil, err
}
rawContents[field] = (*json.RawMessage)(&encoded)
}
return json.Marshal(rawContents)
}

View File

@@ -1,186 +0,0 @@
// Package image consolidates knowledge about various container image formats
// (as opposed to image storage mechanisms, which are handled by types.ImageSource)
// and exposes all of them using a unified interface.
package image
import (
"errors"
"fmt"
"time"
"github.com/containers/image/manifest"
"github.com/containers/image/types"
)
// genericImage is a general set of utilities for working with container images,
// whatever is their underlying location (i.e. dockerImageSource-independent).
// Note the existence of skopeo/docker.Image: some instances of a `types.Image`
// may not be a `genericImage` directly. However, most users of `types.Image`
// do not care, and those who care about `skopeo/docker.Image` know they do.
type genericImage struct {
src types.ImageSource
// private cache for Manifest(); nil if not yet known.
cachedManifest []byte
// private cache for the manifest media type w/o having to guess it
// this may be the empty string in case the MIME Type wasn't guessed correctly
// this field is valid only if cachedManifest is not nil
cachedManifestMIMEType string
// private cache for Signatures(); nil if not yet known.
cachedSignatures [][]byte
}
// FromSource returns a types.Image implementation for source.
// The caller must call .Close() on the returned Image.
//
// FromSource “takes ownership” of the input ImageSource and will call src.Close()
// when the image is closed. (This does not prevent callers from using both the
// Image and ImageSource objects simultaneously, but it means that they only need to
// keep a reference to the Image.)
func FromSource(src types.ImageSource) types.Image {
return &genericImage{src: src}
}
// Reference returns the reference used to set up this source, _as specified by the user_
// (not as the image itself, or its underlying storage, claims). This can be used e.g. to determine which public keys are trusted for this image.
func (i *genericImage) Reference() types.ImageReference {
return i.src.Reference()
}
// Close removes resources associated with an initialized Image, if any.
func (i *genericImage) Close() {
i.src.Close()
}
// Manifest is like ImageSource.GetManifest, but the result is cached; it is OK to call this however often you need.
// NOTE: It is essential for signature verification that Manifest returns the manifest from which ConfigInfo and LayerInfos is computed.
func (i *genericImage) Manifest() ([]byte, string, error) {
if i.cachedManifest == nil {
m, mt, err := i.src.GetManifest()
if err != nil {
return nil, "", err
}
i.cachedManifest = m
if mt == "" || mt == "text/plain" {
// Crane registries can return "text/plain".
// This makes no real sense, but it happens
// because requests for manifests are
// redirected to a content distribution
// network which is configured that way.
mt = manifest.GuessMIMEType(i.cachedManifest)
}
i.cachedManifestMIMEType = mt
}
return i.cachedManifest, i.cachedManifestMIMEType, nil
}
// Signatures is like ImageSource.GetSignatures, but the result is cached; it is OK to call this however often you need.
func (i *genericImage) Signatures() ([][]byte, error) {
if i.cachedSignatures == nil {
sigs, err := i.src.GetSignatures()
if err != nil {
return nil, err
}
i.cachedSignatures = sigs
}
return i.cachedSignatures, nil
}
type config struct {
Labels map[string]string
}
type v1Image struct {
// Config is the configuration of the container received from the client
Config *config `json:"config,omitempty"`
// DockerVersion specifies the version of Docker that built the image
DockerVersion string `json:"docker_version,omitempty"`
// Created timestamp when image was created
Created time.Time `json:"created"`
// Architecture is the hardware that the image is built and runs on
Architecture string `json:"architecture,omitempty"`
// OS is the operating system used to build and run the image
OS string `json:"os,omitempty"`
}
// will support v1 one day...
type genericManifest interface {
Config() ([]byte, error)
ConfigInfo() types.BlobInfo
LayerInfos() []types.BlobInfo
ImageInspectInfo() (*types.ImageInspectInfo, error) // The caller will need to fill in Layers
UpdatedManifest(types.ManifestUpdateOptions) ([]byte, error)
}
// getParsedManifest parses the manifest into a data structure, cleans it up, and returns it.
// NOTE: The manifest may have been modified in the process; DO NOT reserialize and store the return value
// if you want to preserve the original manifest; use the blob returned by Manifest() directly.
// NOTE: It is essential for signature verification that the object is computed from the same manifest which is returned by Manifest().
func (i *genericImage) getParsedManifest() (genericManifest, error) {
manblob, mt, err := i.Manifest()
if err != nil {
return nil, err
}
switch mt {
// "application/json" is a valid v2s1 value per https://github.com/docker/distribution/blob/master/docs/spec/manifest-v2-1.md .
// This works for now, when nothing else seems to return "application/json"; if that were not true, the mapping/detection might
// need to happen within the ImageSource.
case manifest.DockerV2Schema1MediaType, manifest.DockerV2Schema1SignedMediaType, "application/json":
return manifestSchema1FromManifest(manblob)
case manifest.DockerV2Schema2MediaType:
return manifestSchema2FromManifest(i.src, manblob)
case "":
return nil, errors.New("could not guess manifest media type")
default:
return nil, fmt.Errorf("unsupported manifest media type %s", mt)
}
}
func (i *genericImage) Inspect() (*types.ImageInspectInfo, error) {
// TODO(runcom): unused version param for now, default to docker v2-1
m, err := i.getParsedManifest()
if err != nil {
return nil, err
}
info, err := m.ImageInspectInfo()
if err != nil {
return nil, err
}
layers := m.LayerInfos()
info.Layers = make([]string, len(layers))
for i, layer := range layers {
info.Layers[i] = layer.Digest
}
return info, nil
}
// ConfigInfo returns a complete BlobInfo for the separate config object, or a BlobInfo{Digest:""} if there isn't a separate object.
// NOTE: It is essential for signature verification that ConfigInfo is computed from the same manifest which is returned by Manifest().
func (i *genericImage) ConfigInfo() (types.BlobInfo, error) {
m, err := i.getParsedManifest()
if err != nil {
return types.BlobInfo{}, err
}
return m.ConfigInfo(), nil
}
// LayerInfos returns a list of BlobInfos of layers referenced by this image, in order (the root layer first, and then successive layered layers).
// The Digest field is guaranteed to be provided; Size may be -1.
// NOTE: It is essential for signature verification that LayerInfos is computed from the same manifest which is returned by Manifest().
// WARNING: The list may contain duplicates, and they are semantically relevant.
func (i *genericImage) LayerInfos() ([]types.BlobInfo, error) {
m, err := i.getParsedManifest()
if err != nil {
return nil, err
}
return m.LayerInfos(), nil
}
// UpdatedManifest returns the image's manifest modified according to updateOptions.
// This does not change the state of the Image object.
func (i *genericImage) UpdatedManifest(options types.ManifestUpdateOptions) ([]byte, error) {
m, err := i.getParsedManifest()
if err != nil {
return nil, err
}
return m.UpdatedManifest(options)
}

vendor/github.com/containers/image/image/manifest.go generated vendored Normal file
View File

@@ -0,0 +1,129 @@
package image
import (
"time"
"github.com/containers/image/docker/reference"
"github.com/containers/image/manifest"
"github.com/containers/image/pkg/strslice"
"github.com/containers/image/types"
"github.com/opencontainers/go-digest"
imgspecv1 "github.com/opencontainers/image-spec/specs-go/v1"
)
type config struct {
Cmd strslice.StrSlice
Labels map[string]string
}
type v1Image struct {
ID string `json:"id,omitempty"`
Parent string `json:"parent,omitempty"`
Comment string `json:"comment,omitempty"`
Created time.Time `json:"created"`
ContainerConfig *config `json:"container_config,omitempty"`
DockerVersion string `json:"docker_version,omitempty"`
Author string `json:"author,omitempty"`
// Config is the configuration of the container received from the client
Config *config `json:"config,omitempty"`
// Architecture is the hardware that the image is built and runs on
Architecture string `json:"architecture,omitempty"`
// OS is the operating system used to build and run the image
OS string `json:"os,omitempty"`
}
type image struct {
v1Image
History []imageHistory `json:"history,omitempty"`
RootFS *rootFS `json:"rootfs,omitempty"`
}
type imageHistory struct {
Created time.Time `json:"created"`
Author string `json:"author,omitempty"`
CreatedBy string `json:"created_by,omitempty"`
Comment string `json:"comment,omitempty"`
EmptyLayer bool `json:"empty_layer,omitempty"`
}
type rootFS struct {
Type string `json:"type"`
DiffIDs []digest.Digest `json:"diff_ids,omitempty"`
BaseLayer string `json:"base_layer,omitempty"`
}
// genericManifest is an interface for parsing, modifying image manifests and related data.
// Note that the public methods are intended to be a subset of types.Image
// so that embedding a genericManifest into structs works.
// will support v1 one day...
type genericManifest interface {
serialize() ([]byte, error)
manifestMIMEType() string
// ConfigInfo returns a complete BlobInfo for the separate config object, or a BlobInfo{Digest:""} if there isn't a separate object.
// Note that the config object may not exist in the underlying storage in the return value of UpdatedImage! Use ConfigBlob() below.
ConfigInfo() types.BlobInfo
// ConfigBlob returns the blob described by ConfigInfo, iff ConfigInfo().Digest != ""; nil otherwise.
// The result is cached; it is OK to call this however often you need.
ConfigBlob() ([]byte, error)
// OCIConfig returns the image configuration as per OCI v1 image-spec. Information about
// layers in the resulting configuration isn't guaranteed to be returned due to how
// old image manifests work (docker v2s1 especially).
OCIConfig() (*imgspecv1.Image, error)
// LayerInfos returns a list of BlobInfos of layers referenced by this image, in order (the root layer first, and then successive layered layers).
// The Digest field is guaranteed to be provided; Size may be -1.
// WARNING: The list may contain duplicates, and they are semantically relevant.
LayerInfos() []types.BlobInfo
// EmbeddedDockerReferenceConflicts returns whether a Docker reference embedded in the manifest, if any, conflicts with destination ref.
// It returns false if the manifest does not embed a Docker reference.
// (This embedding unfortunately happens for Docker schema1, please do not add support for this in any new formats.)
EmbeddedDockerReferenceConflicts(ref reference.Named) bool
imageInspectInfo() (*types.ImageInspectInfo, error) // To be called by inspectManifest
// UpdatedImageNeedsLayerDiffIDs returns true iff UpdatedImage(options) needs InformationOnly.LayerDiffIDs.
// This is a horribly specific interface, but computing InformationOnly.LayerDiffIDs can be very expensive
// (most importantly it forces us to download the full layers even if they are already present at the destination).
UpdatedImageNeedsLayerDiffIDs(options types.ManifestUpdateOptions) bool
// UpdatedImage returns a types.Image modified according to options.
// This does not change the state of the original Image object.
UpdatedImage(options types.ManifestUpdateOptions) (types.Image, error)
}
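Because every manifest implementation in this package must satisfy this interface, a compile-time assertion is a cheap guard against a missed method. These lines are a sketch, not part of the vendored file:

// Compile-time checks that the concrete types implement genericManifest.
var (
	_ genericManifest = (*manifestSchema1)(nil)
	_ genericManifest = (*manifestSchema2)(nil)
	_ genericManifest = (*manifestOCI1)(nil)
)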
func manifestInstanceFromBlob(src types.ImageSource, manblob []byte, mt string) (genericManifest, error) {
switch mt {
// "application/json" is a valid v2s1 value per https://github.com/docker/distribution/blob/master/docs/spec/manifest-v2-1.md .
// This works for now, when nothing else seems to return "application/json"; if that were not true, the mapping/detection might
// need to happen within the ImageSource.
case manifest.DockerV2Schema1MediaType, manifest.DockerV2Schema1SignedMediaType, "application/json":
return manifestSchema1FromManifest(manblob)
case imgspecv1.MediaTypeImageManifest:
return manifestOCI1FromManifest(src, manblob)
case manifest.DockerV2Schema2MediaType:
return manifestSchema2FromManifest(src, manblob)
case manifest.DockerV2ListMediaType:
return manifestSchema2FromManifestList(src, manblob)
default:
// If it's not a recognized manifest media type, or we have failed determining the type, we'll try one last time
// to deserialize using v2s1 as per https://github.com/docker/distribution/blob/master/manifests.go#L108
// and https://github.com/docker/distribution/blob/master/manifest/schema1/manifest.go#L50
//
// Crane registries can also return "text/plain", or pretty much anything else depending on a file extension “recognized” in the tag.
// This makes no real sense, but it happens
// because requests for manifests are
// redirected to a content distribution
// network which is configured that way. See https://bugzilla.redhat.com/show_bug.cgi?id=1389442
return manifestSchema1FromManifest(manblob)
}
}
// inspectManifest is an implementation of types.Image.Inspect
func inspectManifest(m genericManifest) (*types.ImageInspectInfo, error) {
info, err := m.imageInspectInfo()
if err != nil {
return nil, err
}
layers := m.LayerInfos()
info.Layers = make([]string, len(layers))
for i, layer := range layers {
info.Layers[i] = layer.Digest.String()
}
return info, nil
}
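inspectManifest backs the Inspect method of both sourcedImage and memoryImage. A usage sketch (printInspect is hypothetical; assumes fmt is imported):

// printInspect prints the fields that `skopeo inspect` surfaces.
func printInspect(img types.Image) error {
	info, err := img.Inspect()
	if err != nil {
		return err
	}
	fmt.Println(info.Architecture, info.Os, info.Created)
	for _, layer := range info.Layers {
		fmt.Println("layer:", layer)
	}
	return nil
}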

vendor/github.com/containers/image/image/memory.go generated vendored Normal file
View File

@@ -0,0 +1,71 @@
package image
import (
"github.com/pkg/errors"
"github.com/containers/image/types"
)
// memoryImage is a mostly-implementation of types.Image assembled from data
// created in memory, used primarily as a return value of types.Image.UpdatedImage
// as a way to carry various structured information in a type-safe and easy-to-use way.
// Note that this _only_ carries the immediate metadata; it is _not_ a stand-alone
// collection of all related information, e.g. there is no way to get layer blobs
// from a memoryImage.
type memoryImage struct {
genericManifest
serializedManifest []byte // A private cache for Manifest()
}
func memoryImageFromManifest(m genericManifest) types.Image {
return &memoryImage{
genericManifest: m,
serializedManifest: nil,
}
}
// Reference returns the reference used to set up this source, _as specified by the user_
// (not as the image itself, or its underlying storage, claims). This can be used e.g. to determine which public keys are trusted for this image.
func (i *memoryImage) Reference() types.ImageReference {
// It would really be inappropriate to return the ImageReference of the image this was based on.
return nil
}
// Close removes resources associated with an initialized memoryImage, if any.
func (i *memoryImage) Close() error {
return nil
}
// Size returns the size of the image as stored, if known, or -1 if not.
func (i *memoryImage) Size() (int64, error) {
return -1, nil
}
// Manifest is like ImageSource.GetManifest, but the result is cached; it is OK to call this however often you need.
func (i *memoryImage) Manifest() ([]byte, string, error) {
if i.serializedManifest == nil {
m, err := i.genericManifest.serialize()
if err != nil {
return nil, "", err
}
i.serializedManifest = m
}
return i.serializedManifest, i.genericManifest.manifestMIMEType(), nil
}
// Signatures is like ImageSource.GetSignatures, but the result is cached; it is OK to call this however often you need.
func (i *memoryImage) Signatures() ([][]byte, error) {
// Modifying an image invalidates signatures; a caller asking the updated image for signatures
// is probably confused.
return nil, errors.New("Internal error: Image.Signatures() is not supported for images modified in memory")
}
// Inspect returns various information for (skopeo inspect) parsed from the manifest and configuration.
func (i *memoryImage) Inspect() (*types.ImageInspectInfo, error) {
return inspectManifest(i.genericManifest)
}
// IsMultiImage returns true if the image's manifest is a list of images, false otherwise.
func (i *memoryImage) IsMultiImage() bool {
return false
}

vendor/github.com/containers/image/image/oci.go generated vendored Normal file
View File

@@ -0,0 +1,196 @@
package image
import (
"encoding/json"
"io/ioutil"
"github.com/containers/image/docker/reference"
"github.com/containers/image/manifest"
"github.com/containers/image/types"
"github.com/opencontainers/go-digest"
imgspecv1 "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/pkg/errors"
)
type descriptorOCI1 struct {
descriptor
Annotations map[string]string `json:"annotations,omitempty"`
}
type manifestOCI1 struct {
src types.ImageSource // May be nil if configBlob is not nil
configBlob []byte // If set, corresponds to contents of ConfigDescriptor.
SchemaVersion int `json:"schemaVersion"`
ConfigDescriptor descriptorOCI1 `json:"config"`
LayersDescriptors []descriptorOCI1 `json:"layers"`
Annotations map[string]string `json:"annotations,omitempty"`
}
func manifestOCI1FromManifest(src types.ImageSource, manifest []byte) (genericManifest, error) {
oci := manifestOCI1{src: src}
if err := json.Unmarshal(manifest, &oci); err != nil {
return nil, err
}
return &oci, nil
}
// manifestOCI1FromComponents builds a new manifestOCI1 from the supplied data:
func manifestOCI1FromComponents(config descriptorOCI1, src types.ImageSource, configBlob []byte, layers []descriptorOCI1) genericManifest {
return &manifestOCI1{
src: src,
configBlob: configBlob,
SchemaVersion: 2,
ConfigDescriptor: config,
LayersDescriptors: layers,
}
}
func (m *manifestOCI1) serialize() ([]byte, error) {
return json.Marshal(*m)
}
func (m *manifestOCI1) manifestMIMEType() string {
return imgspecv1.MediaTypeImageManifest
}
// ConfigInfo returns a complete BlobInfo for the separate config object, or a BlobInfo{Digest:""} if there isn't a separate object.
// Note that the config object may not exist in the underlying storage in the return value of UpdatedImage! Use ConfigBlob() below.
func (m *manifestOCI1) ConfigInfo() types.BlobInfo {
return types.BlobInfo{Digest: m.ConfigDescriptor.Digest, Size: m.ConfigDescriptor.Size}
}
// ConfigBlob returns the blob described by ConfigInfo, iff ConfigInfo().Digest != ""; nil otherwise.
// The result is cached; it is OK to call this however often you need.
func (m *manifestOCI1) ConfigBlob() ([]byte, error) {
if m.configBlob == nil {
if m.src == nil {
return nil, errors.Errorf("Internal error: neither src nor configBlob set in manifestOCI1")
}
stream, _, err := m.src.GetBlob(types.BlobInfo{
Digest: m.ConfigDescriptor.Digest,
Size: m.ConfigDescriptor.Size,
URLs: m.ConfigDescriptor.URLs,
})
if err != nil {
return nil, err
}
defer stream.Close()
blob, err := ioutil.ReadAll(stream)
if err != nil {
return nil, err
}
computedDigest := digest.FromBytes(blob)
if computedDigest != m.ConfigDescriptor.Digest {
return nil, errors.Errorf("Download config.json digest %s does not match expected %s", computedDigest, m.ConfigDescriptor.Digest)
}
m.configBlob = blob
}
return m.configBlob, nil
}
// OCIConfig returns the image configuration as per OCI v1 image-spec. Information about
// layers in the resulting configuration isn't guaranteed to be returned due to how
// old image manifests work (docker v2s1 especially).
func (m *manifestOCI1) OCIConfig() (*imgspecv1.Image, error) {
cb, err := m.ConfigBlob()
if err != nil {
return nil, err
}
configOCI := &imgspecv1.Image{}
if err := json.Unmarshal(cb, configOCI); err != nil {
return nil, err
}
return configOCI, nil
}
// LayerInfos returns a list of BlobInfos of layers referenced by this image, in order (the root layer first, and then successive layered layers).
// The Digest field is guaranteed to be provided; Size may be -1.
// WARNING: The list may contain duplicates, and they are semantically relevant.
func (m *manifestOCI1) LayerInfos() []types.BlobInfo {
blobs := []types.BlobInfo{}
for _, layer := range m.LayersDescriptors {
blobs = append(blobs, types.BlobInfo{Digest: layer.Digest, Size: layer.Size})
}
return blobs
}
// EmbeddedDockerReferenceConflicts returns whether a Docker reference embedded in the manifest, if any, conflicts with destination ref.
// It returns false if the manifest does not embed a Docker reference.
// (This embedding unfortunately happens for Docker schema1, please do not add support for this in any new formats.)
func (m *manifestOCI1) EmbeddedDockerReferenceConflicts(ref reference.Named) bool {
return false
}
func (m *manifestOCI1) imageInspectInfo() (*types.ImageInspectInfo, error) {
config, err := m.ConfigBlob()
if err != nil {
return nil, err
}
v1 := &v1Image{}
if err := json.Unmarshal(config, v1); err != nil {
return nil, err
}
return &types.ImageInspectInfo{
DockerVersion: v1.DockerVersion,
Created: v1.Created,
Labels: v1.Config.Labels,
Architecture: v1.Architecture,
Os: v1.OS,
}, nil
}
// UpdatedImageNeedsLayerDiffIDs returns true iff UpdatedImage(options) needs InformationOnly.LayerDiffIDs.
// This is a horribly specific interface, but computing InformationOnly.LayerDiffIDs can be very expensive
// (most importantly it forces us to download the full layers even if they are already present at the destination).
func (m *manifestOCI1) UpdatedImageNeedsLayerDiffIDs(options types.ManifestUpdateOptions) bool {
return false
}
// UpdatedImage returns a types.Image modified according to options.
// This does not change the state of the original Image object.
func (m *manifestOCI1) UpdatedImage(options types.ManifestUpdateOptions) (types.Image, error) {
copy := *m // NOTE: This is not a deep copy, it still shares slices etc.
if options.LayerInfos != nil {
if len(copy.LayersDescriptors) != len(options.LayerInfos) {
return nil, errors.Errorf("Error preparing updated manifest: layer count changed from %d to %d", len(copy.LayersDescriptors), len(options.LayerInfos))
}
copy.LayersDescriptors = make([]descriptorOCI1, len(options.LayerInfos))
for i, info := range options.LayerInfos {
copy.LayersDescriptors[i].MediaType = m.LayersDescriptors[i].MediaType
copy.LayersDescriptors[i].Digest = info.Digest
copy.LayersDescriptors[i].Size = info.Size
}
}
// Ignore options.EmbeddedDockerReference: it may be set when converting from schema1, but we really don't care.
switch options.ManifestMIMEType {
case "": // No conversion, OK
case manifest.DockerV2Schema2MediaType:
return copy.convertToManifestSchema2()
default:
return nil, errors.Errorf("Conversion of image manifest from %s to %s is not implemented", imgspecv1.MediaTypeImageManifest, options.ManifestMIMEType)
}
return memoryImageFromManifest(&copy), nil
}
func (m *manifestOCI1) convertToManifestSchema2() (types.Image, error) {
// Create a copy of the descriptor.
config := m.ConfigDescriptor.descriptor
// The only difference between OCI and DockerSchema2 is the mediatypes. The
// media type of the manifest is handled by manifestSchema2FromComponents.
config.MediaType = manifest.DockerV2Schema2ConfigMediaType
layers := make([]descriptor, len(m.LayersDescriptors))
for idx := range layers {
layers[idx] = m.LayersDescriptors[idx].descriptor
layers[idx].MediaType = manifest.DockerV2Schema2LayerMediaType
}
// Rather than copying the ConfigBlob now, we just pass m.src to the
// translated manifest, since the only difference is the mediatype of
// descriptors, there is no change to any blob stored in m.src.
m1 := manifestSchema2FromComponents(config, m.src, nil, layers)
return memoryImageFromManifest(m1), nil
}
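Going in this direction the rewrite is purely mechanical. For reference, a sketch of the media-type correspondences the conversions rely on (illustrative only, not part of the vendored code; note that convertToManifestSchema2 above maps every layer, foreign or not, to the plain schema2 layer type):

// Illustrative OCI -> docker v2s2 media-type mapping.
var ociToSchema2MediaTypes = map[string]string{
	imgspecv1.MediaTypeImageConfig:    manifest.DockerV2Schema2ConfigMediaType,
	imgspecv1.MediaTypeImageLayerGzip: manifest.DockerV2Schema2LayerMediaType,
	// The schema2 -> OCI direction additionally maps
	// DockerV2Schema2ForeignLayerMediaType to MediaTypeImageLayerNonDistributable.
}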

vendor/github.com/containers/image/image/sourced.go generated vendored Normal file
View File

@@ -0,0 +1,90 @@
// Package image consolidates knowledge about various container image formats
// (as opposed to image storage mechanisms, which are handled by types.ImageSource)
// and exposes all of them using a unified interface.
package image
import (
"github.com/containers/image/manifest"
"github.com/containers/image/types"
)
// FromSource returns a types.Image implementation for source.
// The caller must call .Close() on the returned Image.
//
// FromSource “takes ownership” of the input ImageSource and will call src.Close()
// when the image is closed. (This does not prevent callers from using both the
// Image and ImageSource objects simultaneously, but it means that they only need to
// keep a reference to the Image.)
//
// NOTE: If any kind of signature verification should happen, build an UnparsedImage from the value returned by NewImageSource,
// verify that UnparsedImage, and convert it into a real Image via image.FromUnparsedImage instead of calling this function.
func FromSource(src types.ImageSource) (types.Image, error) {
return FromUnparsedImage(UnparsedFromSource(src))
}
// sourcedImage is a general set of utilities for working with container images,
// whatever is their underlying location (i.e. dockerImageSource-independent).
// Note the existence of skopeo/docker.Image: some instances of a `types.Image`
// may not be a `sourcedImage` directly. However, most users of `types.Image`
// do not care, and those who care about `skopeo/docker.Image` know they do.
type sourcedImage struct {
*UnparsedImage
manifestBlob []byte
manifestMIMEType string
// genericManifest contains data corresponding to manifestBlob.
// NOTE: The manifest may have been modified in the process; DO NOT reserialize and store genericManifest
// if you want to preserve the original manifest; use manifestBlob directly.
genericManifest
}
// FromUnparsedImage returns a types.Image implementation for unparsed.
// The caller must call .Close() on the returned Image.
//
// FromUnparsedImage “takes ownership” of the input UnparsedImage and will call unparsed.Close()
// when the image is closed. (This does not prevent callers from using both the
// UnparsedImage and ImageSource objects simultaneously, but it means that they only need to
// keep a reference to the Image.)
func FromUnparsedImage(unparsed *UnparsedImage) (types.Image, error) {
// Note that the input parameter above is specifically *image.UnparsedImage, not types.UnparsedImage:
// we want to be able to use unparsed.src. We could make that an explicit interface, but, well,
// this is the only UnparsedImage implementation around, anyway.
// Also, we do not explicitly implement types.Image.Close; we let the implementation fall through to
// unparsed.Close.
// NOTE: It is essential for signature verification that all parsing done in this object happens on the same manifest which is returned by unparsed.Manifest().
manifestBlob, manifestMIMEType, err := unparsed.Manifest()
if err != nil {
return nil, err
}
parsedManifest, err := manifestInstanceFromBlob(unparsed.src, manifestBlob, manifestMIMEType)
if err != nil {
return nil, err
}
return &sourcedImage{
UnparsedImage: unparsed,
manifestBlob: manifestBlob,
manifestMIMEType: manifestMIMEType,
genericManifest: parsedManifest,
}, nil
}
// Size returns the size of the image as stored, if it's known, or -1 if it isn't.
func (i *sourcedImage) Size() (int64, error) {
return -1, nil
}
// Manifest overrides the UnparsedImage.Manifest to always use the fields which we have already fetched.
func (i *sourcedImage) Manifest() ([]byte, string, error) {
return i.manifestBlob, i.manifestMIMEType, nil
}
func (i *sourcedImage) Inspect() (*types.ImageInspectInfo, error) {
return inspectManifest(i.genericManifest)
}
func (i *sourcedImage) IsMultiImage() bool {
return i.manifestMIMEType == manifest.DockerV2ListMediaType
}
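End to end, the intended sequence is UnparsedFromSource, then FromUnparsedImage, or the FromSource shortcut above. A hedged sketch of a caller outside this package (inspectFromSource is hypothetical; assumes the containers/image image and types packages are imported):

// inspectFromSource parses an already-open ImageSource and inspects it.
// FromSource takes ownership of src, so closing img also closes src.
func inspectFromSource(src types.ImageSource) (*types.ImageInspectInfo, error) {
	img, err := image.FromSource(src)
	if err != nil {
		return nil, err
	}
	defer img.Close()
	return img.Inspect()
}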

vendor/github.com/containers/image/image/unparsed.go generated vendored Normal file
View File

@@ -0,0 +1,83 @@
package image
import (
"github.com/containers/image/docker/reference"
"github.com/containers/image/manifest"
"github.com/containers/image/types"
"github.com/opencontainers/go-digest"
"github.com/pkg/errors"
)
// UnparsedImage implements types.UnparsedImage .
type UnparsedImage struct {
src types.ImageSource
cachedManifest []byte // A private cache for Manifest(); nil if not yet known.
// A private cache for Manifest(), may be the empty string if guessing failed.
// Valid iff cachedManifest is not nil.
cachedManifestMIMEType string
cachedSignatures [][]byte // A private cache for Signatures(); nil if not yet known.
}
// UnparsedFromSource returns a types.UnparsedImage implementation for source.
// The caller must call .Close() on the returned UnparsedImage.
//
// UnparsedFromSource “takes ownership” of the input ImageSource and will call src.Close()
// when the image is closed. (This does not prevent callers from using both the
// UnparsedImage and ImageSource objects simultaneously, but it means that they only need to
// keep a reference to the UnparsedImage.)
func UnparsedFromSource(src types.ImageSource) *UnparsedImage {
return &UnparsedImage{src: src}
}
// Reference returns the reference used to set up this source, _as specified by the user_
// (not as the image itself, or its underlying storage, claims). This can be used e.g. to determine which public keys are trusted for this image.
func (i *UnparsedImage) Reference() types.ImageReference {
return i.src.Reference()
}
// Close removes resources associated with an initialized UnparsedImage, if any.
func (i *UnparsedImage) Close() error {
return i.src.Close()
}
// Manifest is like ImageSource.GetManifest, but the result is cached; it is OK to call this however often you need.
func (i *UnparsedImage) Manifest() ([]byte, string, error) {
if i.cachedManifest == nil {
m, mt, err := i.src.GetManifest()
if err != nil {
return nil, "", err
}
// ImageSource.GetManifest does not do digest verification, but we do;
// this immediately protects also any user of types.Image.
ref := i.Reference().DockerReference()
if ref != nil {
if canonical, ok := ref.(reference.Canonical); ok {
digest := digest.Digest(canonical.Digest())
matches, err := manifest.MatchesDigest(m, digest)
if err != nil {
return nil, "", errors.Wrap(err, "Error computing manifest digest")
}
if !matches {
return nil, "", errors.Errorf("Manifest does not match provided manifest digest %s", digest)
}
}
}
i.cachedManifest = m
i.cachedManifestMIMEType = mt
}
return i.cachedManifest, i.cachedManifestMIMEType, nil
}
// Signatures is like ImageSource.GetSignatures, but the result is cached; it is OK to call this however often you need.
func (i *UnparsedImage) Signatures() ([][]byte, error) {
if i.cachedSignatures == nil {
sigs, err := i.src.GetSignatures()
if err != nil {
return nil, err
}
i.cachedSignatures = sigs
}
return i.cachedSignatures, nil
}
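The digest check in Manifest above is what makes digest-pinned pulls self-verifying. A sketch of the same check in isolation (verifyPinned is hypothetical; manifestBlob and the pinned digest are assumed inputs):

// verifyPinned confirms a manifest blob matches a digest-pinned reference.
func verifyPinned(manifestBlob []byte, pinned digest.Digest) error {
	matches, err := manifest.MatchesDigest(manifestBlob, pinned)
	if err != nil {
		return errors.Wrap(err, "Error computing manifest digest")
	}
	if !matches {
		return errors.Errorf("Manifest does not match provided manifest digest %s", pinned)
	}
	return nil
}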

View File

@@ -1,11 +1,10 @@
package manifest
import (
"crypto/sha256"
"encoding/hex"
"encoding/json"
"github.com/docker/libtrust"
"github.com/opencontainers/go-digest"
imgspecv1 "github.com/opencontainers/image-spec/specs-go/v1"
)
@@ -19,8 +18,14 @@ const (
DockerV2Schema1SignedMediaType = "application/vnd.docker.distribution.manifest.v1+prettyjws"
// DockerV2Schema2MediaType MIME type represents Docker manifest schema 2
DockerV2Schema2MediaType = "application/vnd.docker.distribution.manifest.v2+json"
// DockerV2Schema2ConfigMediaType is the MIME type used for schema 2 config blobs.
DockerV2Schema2ConfigMediaType = "application/vnd.docker.container.image.v1+json"
// DockerV2Schema2LayerMediaType is the MIME type used for schema 2 layers.
DockerV2Schema2LayerMediaType = "application/vnd.docker.image.rootfs.diff.tar.gzip"
// DockerV2ListMediaType MIME type represents Docker manifest schema 2 list
DockerV2ListMediaType = "application/vnd.docker.distribution.manifest.list.v2+json"
// DockerV2Schema2ForeignLayerMediaType is the MIME type used for schema 2 foreign layers.
DockerV2Schema2ForeignLayerMediaType = "application/vnd.docker.image.rootfs.foreign.diff.tar.gzip"
)
// DefaultRequestedManifestMIMETypes is a list of MIME types a types.ImageSource
@@ -30,6 +35,7 @@ var DefaultRequestedManifestMIMETypes = []string{
DockerV2Schema2MediaType,
DockerV2Schema1SignedMediaType,
DockerV2Schema1MediaType,
DockerV2ListMediaType,
}
// GuessMIMEType guesses MIME type of a manifest and returns it _if it is recognized_, or "" if unknown or unrecognized.
@@ -48,7 +54,7 @@ func GuessMIMEType(manifest []byte) string {
}
switch meta.MediaType {
case DockerV2Schema2MediaType, DockerV2ListMediaType, imgspecv1.MediaTypeImageManifest, imgspecv1.MediaTypeImageManifestList: // A recognized type.
case DockerV2Schema2MediaType, DockerV2ListMediaType: // A recognized type.
return meta.MediaType
}
// this is the only way the function can return DockerV2Schema1MediaType, and recognizing that is essential for stripping the JWS signatures = computing the correct manifest digest.
@@ -58,14 +64,38 @@ func GuessMIMEType(manifest []byte) string {
return DockerV2Schema1SignedMediaType
}
return DockerV2Schema1MediaType
case 2: // Really should not happen, meta.MediaType should have been set. But given the data, this is our best guess.
case 2:
// Best effort to understand if this is an OCI image, since mediaType
// isn't in the OCI manifest anymore.
// For docker v2s2, meta.MediaType should have been set. But given the data, this is our best guess.
ociMan := struct {
Config struct {
MediaType string `json:"mediaType"`
} `json:"config"`
Layers []imgspecv1.Descriptor `json:"layers"`
}{}
if err := json.Unmarshal(manifest, &ociMan); err != nil {
return ""
}
if ociMan.Config.MediaType == imgspecv1.MediaTypeImageConfig && len(ociMan.Layers) != 0 {
return imgspecv1.MediaTypeImageManifest
}
ociIndex := struct {
Manifests []imgspecv1.Descriptor `json:"manifests"`
}{}
if err := json.Unmarshal(manifest, &ociIndex); err != nil {
return ""
}
if len(ociIndex.Manifests) != 0 && ociIndex.Manifests[0].MediaType == imgspecv1.MediaTypeImageManifest {
return imgspecv1.MediaTypeImageIndex
}
return DockerV2Schema2MediaType
}
return ""
}
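For intuition, a sketch of what the heuristic returns for two fabricated inputs, trimmed to the fields the function actually inspects (guessExamples is hypothetical; assumes fmt is imported):

func guessExamples() {
	v2s2 := []byte(`{"schemaVersion": 2, "mediaType": "application/vnd.docker.distribution.manifest.v2+json"}`)
	oci := []byte(`{"schemaVersion": 2, "config": {"mediaType": "application/vnd.oci.image.config.v1+json"}, "layers": [{"mediaType": "application/vnd.oci.image.layer.v1.tar+gzip"}]}`)

	fmt.Println(GuessMIMEType(v2s2)) // recognized directly via the top-level mediaType
	fmt.Println(GuessMIMEType(oci))  // detected via config.mediaType plus non-empty layers
}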
// Digest returns the digest of a docker manifest, with any necessary implied transformations like stripping v1s1 signatures.
func Digest(manifest []byte) (string, error) {
func Digest(manifest []byte) (digest.Digest, error) {
if GuessMIMEType(manifest) == DockerV2Schema1SignedMediaType {
sig, err := libtrust.ParsePrettySignature(manifest, "signatures")
if err != nil {
@@ -79,15 +109,14 @@ func Digest(manifest []byte) (string, error) {
}
}
hash := sha256.Sum256(manifest)
return "sha256:" + hex.EncodeToString(hash[:]), nil
return digest.FromBytes(manifest), nil
}
// MatchesDigest returns true iff the manifest matches expectedDigest.
// Error may be set if this returns false.
// Note that this is not doing ConstantTimeCompare; by the time we get here, the cryptographic signature must already have been verified,
// or we are not using a cryptographic channel and the attacker can modify the digest along with the manifest blob.
func MatchesDigest(manifest []byte, expectedDigest string) (bool, error) {
func MatchesDigest(manifest []byte, expectedDigest digest.Digest) (bool, error) {
// This should eventually support various digest types.
actualDigest, err := Digest(manifest)
if err != nil {

View File

@@ -1,29 +1,35 @@
package layout
import (
"crypto/sha256"
"encoding/hex"
"encoding/json"
"errors"
"fmt"
"io"
"io/ioutil"
"os"
"path/filepath"
"runtime"
"github.com/pkg/errors"
"github.com/containers/image/manifest"
"github.com/containers/image/types"
"github.com/opencontainers/go-digest"
imgspec "github.com/opencontainers/image-spec/specs-go"
imgspecv1 "github.com/opencontainers/image-spec/specs-go/v1"
)
type ociImageDestination struct {
ref ociReference
ref ociReference
index imgspecv1.Index
}
// newImageDestination returns an ImageDestination for writing to an existing directory.
func newImageDestination(ref ociReference) types.ImageDestination {
return &ociImageDestination{ref: ref}
index := imgspecv1.Index{
Versioned: imgspec.Versioned{
SchemaVersion: 2,
},
}
return &ociImageDestination{ref: ref, index: index}
}
// Reference returns the reference used to set up this destination. Note that this should directly correspond to user's intent,
@@ -33,24 +39,35 @@ func (d *ociImageDestination) Reference() types.ImageReference {
}
// Close removes resources associated with an initialized ImageDestination, if any.
func (d *ociImageDestination) Close() {
func (d *ociImageDestination) Close() error {
return nil
}
func (d *ociImageDestination) SupportedManifestMIMETypes() []string {
return []string{
imgspecv1.MediaTypeImageManifest,
manifest.DockerV2Schema2MediaType,
}
}
// SupportsSignatures returns an error (to be displayed to the user) if the destination certainly can't store signatures.
// Note: It is still possible for PutSignatures to fail if SupportsSignatures returns nil.
func (d *ociImageDestination) SupportsSignatures() error {
return fmt.Errorf("Pushing signatures for OCI images is not supported")
return errors.Errorf("Pushing signatures for OCI images is not supported")
}
// ShouldCompressLayers returns true iff it is desirable to compress layer blobs written to this destination.
func (d *ociImageDestination) ShouldCompressLayers() bool {
return true
}
// AcceptsForeignLayerURLs returns false iff foreign layers in manifest should be actually
// uploaded to the image destination, true otherwise.
func (d *ociImageDestination) AcceptsForeignLayerURLs() bool {
return false
}
// MustMatchRuntimeOS returns true iff the destination can store only images targeted for the current runtime OS. False otherwise.
func (d *ociImageDestination) MustMatchRuntimeOS() bool {
return false
}
@@ -76,16 +93,16 @@ func (d *ociImageDestination) PutBlob(stream io.Reader, inputInfo types.BlobInfo
}
}()
h := sha256.New()
tee := io.TeeReader(stream, h)
digester := digest.Canonical.Digester()
tee := io.TeeReader(stream, digester.Hash())
size, err := io.Copy(blobFile, tee)
if err != nil {
return types.BlobInfo{}, err
}
computedDigest := "sha256:" + hex.EncodeToString(h.Sum(nil))
computedDigest := digester.Digest()
if inputInfo.Size != -1 && size != inputInfo.Size {
return types.BlobInfo{}, fmt.Errorf("Size mismatch when copying %s, expected %d, got %d", computedDigest, inputInfo.Size, size)
return types.BlobInfo{}, errors.Errorf("Size mismatch when copying %s, expected %d, got %d", computedDigest, inputInfo.Size, size)
}
if err := blobFile.Sync(); err != nil {
return types.BlobInfo{}, err
@@ -108,77 +125,68 @@ func (d *ociImageDestination) PutBlob(stream io.Reader, inputInfo types.BlobInfo
return types.BlobInfo{Digest: computedDigest, Size: size}, nil
}
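The digester/TeeReader pattern introduced here computes the digest while the bytes are being copied, avoiding a second pass over the blob. A standalone sketch:

package main

import (
	"bytes"
	"fmt"
	"io"
	"io/ioutil"

	"github.com/opencontainers/go-digest"
)

func main() {
	src := bytes.NewReader([]byte("layer bytes"))

	// Everything copied to the destination also flows into the digester's hash.
	digester := digest.Canonical.Digester()
	tee := io.TeeReader(src, digester.Hash())

	n, err := io.Copy(ioutil.Discard, tee)
	if err != nil {
		panic(err)
	}
	fmt.Println(n, digester.Digest()) // size and sha256 digest of the copied stream
}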
func createManifest(m []byte) ([]byte, string, error) {
om := imgspecv1.Manifest{}
mt := manifest.GuessMIMEType(m)
switch mt {
case manifest.DockerV2Schema1MediaType, manifest.DockerV2Schema1SignedMediaType:
// There is a simple reason for not implementing this yet.
// The OCI image-spec assures backward compatibility with docker v2s2, but not with v2s1;
// generating a v2s2 manifest is a migration docker performs when upgrading to 1.10.3,
// and I don't think we should bother with that now (I don't want to have migration code here in skopeo)
return nil, "", errors.New("can't create an OCI manifest from Docker V2 schema 1 manifest")
case manifest.DockerV2Schema2MediaType:
if err := json.Unmarshal(m, &om); err != nil {
return nil, "", err
}
om.MediaType = imgspecv1.MediaTypeImageManifest
for i := range om.Layers {
om.Layers[i].MediaType = imgspecv1.MediaTypeImageLayer
}
om.Config.MediaType = imgspecv1.MediaTypeImageConfig
b, err := json.Marshal(om)
if err != nil {
return nil, "", err
}
return b, om.MediaType, nil
case manifest.DockerV2ListMediaType:
return nil, "", errors.New("can't create an OCI manifest from Docker V2 schema 2 manifest list")
case imgspecv1.MediaTypeImageManifestList:
return nil, "", errors.New("can't create an OCI manifest from OCI manifest list")
case imgspecv1.MediaTypeImageManifest:
return m, mt, nil
// HasBlob returns true iff the image destination already contains a blob with the matching digest which can be reapplied using ReapplyBlob.
// Unlike PutBlob, the digest can not be empty. If HasBlob returns true, the size of the blob must also be returned.
// If the destination does not contain the blob, or it is unknown, HasBlob ordinarily returns (false, -1, nil);
// it returns a non-nil error only on an unexpected failure.
func (d *ociImageDestination) HasBlob(info types.BlobInfo) (bool, int64, error) {
if info.Digest == "" {
return false, -1, errors.Errorf("Cannot check for a blob with unknown digest")
}
return nil, "", fmt.Errorf("unrecognized manifest media type %q", mt)
blobPath, err := d.ref.blobPath(info.Digest)
if err != nil {
return false, -1, err
}
finfo, err := os.Stat(blobPath)
if err != nil && os.IsNotExist(err) {
return false, -1, nil
}
if err != nil {
return false, -1, err
}
return true, finfo.Size(), nil
}
func (d *ociImageDestination) ReapplyBlob(info types.BlobInfo) (types.BlobInfo, error) {
return info, nil
}
// PutManifest writes manifest to the destination.
// FIXME? This should also receive a MIME type if known, to differentiate between schema versions.
// If the destination is in principle available but refuses this manifest type (e.g. it does not recognize the schema)
// while it may accept a different manifest type, the returned error must be a ManifestTypeRejectedError.
func (d *ociImageDestination) PutManifest(m []byte) error {
// TODO(mitr, runcom): this breaks signatures entirely since at this point we're creating a new manifest
// and signatures don't apply anymore. Will fix.
ociMan, mt, err := createManifest(m)
digest, err := manifest.Digest(m)
if err != nil {
return err
}
digest, err := manifest.Digest(ociMan)
if err != nil {
return err
}
desc := imgspec.Descriptor{}
desc := imgspecv1.Descriptor{}
desc.Digest = digest
// TODO(runcom): be aware and add support for OCI manifest list
desc.MediaType = mt
desc.Size = int64(len(ociMan))
data, err := json.Marshal(desc)
if err != nil {
return err
}
desc.MediaType = imgspecv1.MediaTypeImageManifest
desc.Size = int64(len(m))
blobPath, err := d.ref.blobPath(digest)
if err != nil {
return err
}
if err := ioutil.WriteFile(blobPath, ociMan, 0644); err != nil {
if err := ensureParentDirectoryExists(blobPath); err != nil {
return err
}
// TODO(runcom): ugly here?
if err := ioutil.WriteFile(d.ref.ociLayoutPath(), []byte(`{"imageLayoutVersion": "1.0.0"}`), 0644); err != nil {
if err := ioutil.WriteFile(blobPath, m, 0644); err != nil {
return err
}
descriptorPath := d.ref.descriptorPath(d.ref.tag)
if err := ensureParentDirectoryExists(descriptorPath); err != nil {
return err
annotations := make(map[string]string)
annotations["org.opencontainers.image.ref.name"] = d.ref.tag
desc.Annotations = annotations
desc.Platform = &imgspecv1.Platform{
Architecture: runtime.GOARCH,
OS: runtime.GOOS,
}
return ioutil.WriteFile(descriptorPath, data, 0644)
d.index.Manifests = append(d.index.Manifests, desc)
return nil
}
func ensureDirectoryExists(path string) error {
@@ -197,7 +205,7 @@ func ensureParentDirectoryExists(path string) error {
func (d *ociImageDestination) PutSignatures(signatures [][]byte) error {
if len(signatures) != 0 {
return fmt.Errorf("Pushing signatures for OCI images is not supported")
return errors.Errorf("Pushing signatures for OCI images is not supported")
}
return nil
}
@@ -207,5 +215,12 @@ func (d *ociImageDestination) PutSignatures(signatures [][]byte) error {
// - Uploaded data MAY be visible to others before Commit() is called
// - Uploaded data MAY be removed or MAY remain around if Close() is called without Commit() (i.e. rollback is allowed but not guaranteed)
func (d *ociImageDestination) Commit() error {
return nil
if err := ioutil.WriteFile(d.ref.ociLayoutPath(), []byte(`{"imageLayoutVersion": "1.0.0"}`), 0644); err != nil {
return err
}
indexJSON, err := json.Marshal(d.index)
if err != nil {
return err
}
return ioutil.WriteFile(d.ref.indexPath(), indexJSON, 0644)
}

View File

@@ -0,0 +1,90 @@
package layout
import (
"io"
"io/ioutil"
"os"
"github.com/containers/image/types"
"github.com/opencontainers/go-digest"
imgspecv1 "github.com/opencontainers/image-spec/specs-go/v1"
)
type ociImageSource struct {
ref ociReference
descriptor imgspecv1.Descriptor
}
// newImageSource returns an ImageSource for reading from an existing directory.
func newImageSource(ref ociReference) (types.ImageSource, error) {
descriptor, err := ref.getManifestDescriptor()
if err != nil {
return nil, err
}
return &ociImageSource{ref: ref, descriptor: descriptor}, nil
}
// Reference returns the reference used to set up this source.
func (s *ociImageSource) Reference() types.ImageReference {
return s.ref
}
// Close removes resources associated with an initialized ImageSource, if any.
func (s *ociImageSource) Close() error {
return nil
}
// GetManifest returns the image's manifest along with its MIME type (which may be empty when it can't be determined but the manifest is available).
// It may use a remote (= slow) service.
func (s *ociImageSource) GetManifest() ([]byte, string, error) {
manifestPath, err := s.ref.blobPath(digest.Digest(s.descriptor.Digest))
if err != nil {
return nil, "", err
}
m, err := ioutil.ReadFile(manifestPath)
if err != nil {
return nil, "", err
}
return m, s.descriptor.MediaType, nil
}
func (s *ociImageSource) GetTargetManifest(digest digest.Digest) ([]byte, string, error) {
manifestPath, err := s.ref.blobPath(digest)
if err != nil {
return nil, "", err
}
m, err := ioutil.ReadFile(manifestPath)
if err != nil {
return nil, "", err
}
// XXX: GetTargetManifest means that we don't have the context of what
// mediaType the manifest has. In OCI this means that we don't know
// what reference it came from, so we just *assume* that it's
// MediaTypeImageManifest.
return m, imgspecv1.MediaTypeImageManifest, nil
}
// GetBlob returns a stream for the specified blob, and the blob's size.
func (s *ociImageSource) GetBlob(info types.BlobInfo) (io.ReadCloser, int64, error) {
path, err := s.ref.blobPath(info.Digest)
if err != nil {
return nil, 0, err
}
r, err := os.Open(path)
if err != nil {
return nil, 0, err
}
fi, err := r.Stat()
if err != nil {
return nil, 0, err
}
return r, fi.Size(), nil
}
func (s *ociImageSource) GetSignatures() ([][]byte, error) {
return [][]byte{}, nil
}

View File

@@ -1,17 +1,27 @@
package layout
import (
"errors"
"encoding/json"
"fmt"
"os"
"path/filepath"
"regexp"
"strings"
"github.com/containers/image/directory/explicitfilepath"
"github.com/containers/image/docker/reference"
"github.com/containers/image/image"
"github.com/containers/image/transports"
"github.com/containers/image/types"
"github.com/docker/docker/reference"
"github.com/opencontainers/go-digest"
imgspecv1 "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/pkg/errors"
)
func init() {
transports.Register(Transport)
}
// Transport is an ImageTransport for OCI directories.
var Transport = ociTransport{}
@@ -41,16 +51,16 @@ func (t ociTransport) ValidatePolicyConfigurationScope(scope string) error {
dir = scope[:sep]
tag := scope[sep+1:]
if !refRegexp.MatchString(tag) {
return fmt.Errorf("Invalid tag %s", tag)
return errors.Errorf("Invalid tag %s", tag)
}
}
if strings.Contains(dir, ":") {
return fmt.Errorf("Invalid OCI reference %s: path contains a colon", scope)
return errors.Errorf("Invalid OCI reference %s: path contains a colon", scope)
}
if !strings.HasPrefix(dir, "/") {
return fmt.Errorf("Invalid scope %s: must be an absolute path", scope)
return errors.Errorf("Invalid scope %s: must be an absolute path", scope)
}
// Refuse also "/", otherwise "/" and "" would have the same semantics,
// and "" could be unexpectedly shadowed by the "/" entry.
@@ -60,7 +70,7 @@ func (t ociTransport) ValidatePolicyConfigurationScope(scope string) error {
}
cleaned := filepath.Clean(dir)
if cleaned != dir {
return fmt.Errorf(`Invalid scope %s: Uses non-canonical path format, perhaps try with path %s`, scope, cleaned)
return errors.Errorf(`Invalid scope %s: Uses non-canonical path format, perhaps try with path %s`, scope, cleaned)
}
return nil
}
@@ -104,10 +114,10 @@ func NewReference(dir, tag string) (types.ImageReference, error) {
// This is necessary to prevent directory paths returned by PolicyConfigurationNamespaces
// from being ambiguous with values of PolicyConfigurationIdentity.
if strings.Contains(resolved, ":") {
return nil, fmt.Errorf("Invalid OCI reference %s:%s: path %s contains a colon", dir, tag, resolved)
return nil, errors.Errorf("Invalid OCI reference %s:%s: path %s contains a colon", dir, tag, resolved)
}
if !refRegexp.MatchString(tag) {
return nil, fmt.Errorf("Invalid tag %s", tag)
return nil, errors.Errorf("Invalid tag %s", tag)
}
return ociReference{dir: dir, resolvedDir: resolved, tag: tag}, nil
}
@@ -164,10 +174,46 @@ func (ref ociReference) PolicyConfigurationNamespaces() []string {
return res
}
// NewImage returns a types.Image for this reference.
// NewImage returns a types.Image for this reference, possibly specialized for this ImageTransport.
// The caller must call .Close() on the returned Image.
// NOTE: If any kind of signature verification should happen, build an UnparsedImage from the value returned by NewImageSource,
// verify that UnparsedImage, and convert it into a real Image via image.FromUnparsedImage.
func (ref ociReference) NewImage(ctx *types.SystemContext) (types.Image, error) {
return nil, errors.New("Full Image support not implemented for oci: image names")
src, err := newImageSource(ref)
if err != nil {
return nil, err
}
return image.FromSource(src)
}
func (ref ociReference) getManifestDescriptor() (imgspecv1.Descriptor, error) {
indexJSON, err := os.Open(ref.indexPath())
if err != nil {
return imgspecv1.Descriptor{}, err
}
defer indexJSON.Close()
index := imgspecv1.Index{}
if err := json.NewDecoder(indexJSON).Decode(&index); err != nil {
return imgspecv1.Descriptor{}, err
}
var d *imgspecv1.Descriptor
for _, md := range index.Manifests {
if md.MediaType != imgspecv1.MediaTypeImageManifest {
continue
}
refName, ok := md.Annotations["org.opencontainers.image.ref.name"]
if !ok {
continue
}
if refName == ref.tag {
d = &md
break
}
}
if d == nil {
return imgspecv1.Descriptor{}, fmt.Errorf("no descriptor found for reference %q", ref.tag)
}
return *d, nil
}
// NewImageSource returns a types.ImageSource for this reference,
@@ -175,7 +221,7 @@ func (ref ociReference) NewImage(ctx *types.SystemContext) (types.Image, error)
// nil requestedManifestMIMETypes means manifest.DefaultRequestedManifestMIMETypes.
// The caller must call .Close() on the returned ImageSource.
func (ref ociReference) NewImageSource(ctx *types.SystemContext, requestedManifestMIMETypes []string) (types.ImageSource, error) {
return nil, errors.New("Reading images not implemented for oci: image names")
return newImageSource(ref)
}
// NewImageDestination returns a types.ImageDestination for this reference.
@@ -186,24 +232,23 @@ func (ref ociReference) NewImageDestination(ctx *types.SystemContext) (types.Ima
// DeleteImage deletes the named image from the registry, if supported.
func (ref ociReference) DeleteImage(ctx *types.SystemContext) error {
return fmt.Errorf("Deleting images not implemented for oci: images")
return errors.Errorf("Deleting images not implemented for oci: images")
}
// ociLayoutPathPath returns a path for the oci-layout within a directory using OCI conventions.
// ociLayoutPath returns a path for the oci-layout within a directory using OCI conventions.
func (ref ociReference) ociLayoutPath() string {
return filepath.Join(ref.dir, "oci-layout")
}
// blobPath returns a path for a blob within a directory using OCI image-layout conventions.
func (ref ociReference) blobPath(digest string) (string, error) {
pts := strings.SplitN(digest, ":", 2)
if len(pts) != 2 {
return "", fmt.Errorf("unexpected digest reference %s", digest)
}
return filepath.Join(ref.dir, "blobs", pts[0], pts[1]), nil
// indexPath returns a path for the index.json within a directory using OCI conventions.
func (ref ociReference) indexPath() string {
return filepath.Join(ref.dir, "index.json")
}
// descriptorPath returns a path for the manifest within a directory using OCI conventions.
func (ref ociReference) descriptorPath(digest string) string {
return filepath.Join(ref.dir, "refs", digest)
// blobPath returns a path for a blob within a directory using OCI image-layout conventions.
func (ref ociReference) blobPath(digest digest.Digest) (string, error) {
if err := digest.Validate(); err != nil {
return "", errors.Wrapf(err, "unexpected digest reference %s", digest)
}
return filepath.Join(ref.dir, "blobs", digest.Algorithm().String(), digest.Hex()), nil
}
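A standalone sketch of the digest-to-path mapping the new blobPath performs with the typed digest.Digest instead of manual string splitting (the helper name and directory are illustrative):

package main

import (
	"fmt"
	"path/filepath"

	"github.com/opencontainers/go-digest"
)

func blobPath(dir string, d digest.Digest) (string, error) {
	// Validate rejects malformed digest strings instead of silently splitting them.
	if err := d.Validate(); err != nil {
		return "", err
	}
	return filepath.Join(dir, "blobs", d.Algorithm().String(), d.Hex()), nil
}

func main() {
	p, err := blobPath("/tmp/oci", "sha256:e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855")
	if err != nil {
		panic(err)
	}
	fmt.Println(p) // /tmp/oci/blobs/sha256/e3b0c442...
}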

View File

@@ -4,7 +4,6 @@ import (
"crypto/tls"
"crypto/x509"
"encoding/json"
"errors"
"fmt"
"io/ioutil"
"net"
@@ -14,13 +13,14 @@ import (
"path"
"path/filepath"
"reflect"
"strings"
"time"
"github.com/ghodss/yaml"
"github.com/imdario/mergo"
utilerrors "k8s.io/kubernetes/pkg/util/errors"
"k8s.io/kubernetes/pkg/util/homedir"
utilnet "k8s.io/kubernetes/pkg/util/net"
"github.com/pkg/errors"
"golang.org/x/net/http2"
"k8s.io/client-go/util/homedir"
)
// restTLSClientConfig is a modified copy of k8s.io/kubernetes/pkg/client/restclient.TLSClientConfig.
@@ -348,20 +348,20 @@ func validateClusterInfo(clusterName string, clusterInfo clientcmdCluster) []err
if len(clusterInfo.Server) == 0 {
if len(clusterName) == 0 {
validationErrors = append(validationErrors, fmt.Errorf("default cluster has no server defined"))
validationErrors = append(validationErrors, errors.Errorf("default cluster has no server defined"))
} else {
validationErrors = append(validationErrors, fmt.Errorf("no server found for cluster %q", clusterName))
validationErrors = append(validationErrors, errors.Errorf("no server found for cluster %q", clusterName))
}
}
// Make sure CA data and CA file aren't both specified
if len(clusterInfo.CertificateAuthority) != 0 && len(clusterInfo.CertificateAuthorityData) != 0 {
validationErrors = append(validationErrors, fmt.Errorf("certificate-authority-data and certificate-authority are both specified for %v. certificate-authority-data will override", clusterName))
validationErrors = append(validationErrors, errors.Errorf("certificate-authority-data and certificate-authority are both specified for %v. certificate-authority-data will override", clusterName))
}
if len(clusterInfo.CertificateAuthority) != 0 {
clientCertCA, err := os.Open(clusterInfo.CertificateAuthority)
defer clientCertCA.Close()
if err != nil {
validationErrors = append(validationErrors, fmt.Errorf("unable to read certificate-authority %v for %v due to %v", clusterInfo.CertificateAuthority, clusterName, err))
validationErrors = append(validationErrors, errors.Errorf("unable to read certificate-authority %v for %v due to %v", clusterInfo.CertificateAuthority, clusterName, err))
}
}
@@ -385,36 +385,36 @@ func validateAuthInfo(authInfoName string, authInfo clientcmdAuthInfo) []error {
if len(authInfo.ClientCertificate) != 0 || len(authInfo.ClientCertificateData) != 0 {
// Make sure cert data and file aren't both specified
if len(authInfo.ClientCertificate) != 0 && len(authInfo.ClientCertificateData) != 0 {
validationErrors = append(validationErrors, fmt.Errorf("client-cert-data and client-cert are both specified for %v. client-cert-data will override", authInfoName))
validationErrors = append(validationErrors, errors.Errorf("client-cert-data and client-cert are both specified for %v. client-cert-data will override", authInfoName))
}
// Make sure key data and file aren't both specified
if len(authInfo.ClientKey) != 0 && len(authInfo.ClientKeyData) != 0 {
validationErrors = append(validationErrors, fmt.Errorf("client-key-data and client-key are both specified for %v; client-key-data will override", authInfoName))
validationErrors = append(validationErrors, errors.Errorf("client-key-data and client-key are both specified for %v; client-key-data will override", authInfoName))
}
// Make sure a key is specified
if len(authInfo.ClientKey) == 0 && len(authInfo.ClientKeyData) == 0 {
validationErrors = append(validationErrors, fmt.Errorf("client-key-data or client-key must be specified for %v to use the clientCert authentication method", authInfoName))
validationErrors = append(validationErrors, errors.Errorf("client-key-data or client-key must be specified for %v to use the clientCert authentication method", authInfoName))
}
if len(authInfo.ClientCertificate) != 0 {
clientCertFile, err := os.Open(authInfo.ClientCertificate)
defer clientCertFile.Close()
if err != nil {
validationErrors = append(validationErrors, fmt.Errorf("unable to read client-cert %v for %v due to %v", authInfo.ClientCertificate, authInfoName, err))
validationErrors = append(validationErrors, errors.Errorf("unable to read client-cert %v for %v due to %v", authInfo.ClientCertificate, authInfoName, err))
}
}
if len(authInfo.ClientKey) != 0 {
clientKeyFile, err := os.Open(authInfo.ClientKey)
defer clientKeyFile.Close()
if err != nil {
validationErrors = append(validationErrors, fmt.Errorf("unable to read client-key %v for %v due to %v", authInfo.ClientKey, authInfoName, err))
validationErrors = append(validationErrors, errors.Errorf("unable to read client-key %v for %v due to %v", authInfo.ClientKey, authInfoName, err))
}
}
}
// authPath also provides information for the client to identify the server, so allow multiple auth methods in that case
if (len(methods) > 1) && (!usingAuthPath) {
validationErrors = append(validationErrors, fmt.Errorf("more than one authentication method found for %v; found %v, only one is allowed", authInfoName, methods))
validationErrors = append(validationErrors, errors.Errorf("more than one authentication method found for %v; found %v, only one is allowed", authInfoName, methods))
}
return validationErrors
@@ -450,6 +450,55 @@ func (config *directClientConfig) getCluster() clientcmdCluster {
return mergedClusterInfo
}
// aggregateErr is a modified copy of k8s.io/apimachinery/pkg/util/errors.aggregate.
// This helper implements the error and Errors interfaces. Keeping it private
// prevents people from making an aggregate of 0 errors, which is not
// an error, but does satisfy the error interface.
type aggregateErr []error
// newAggregate is a modified copy of k8s.io/apimachinery/pkg/util/errors.NewAggregate.
// NewAggregate converts a slice of errors into an Aggregate interface, which
// is itself an implementation of the error interface. If the slice is empty,
// this returns nil.
// It checks whether any element of the input error list is nil, to avoid
// a nil pointer panic when Error() is called.
func newAggregate(errlist []error) error {
if len(errlist) == 0 {
return nil
}
// Filter out any nil entries from the input error list
var errs []error
for _, e := range errlist {
if e != nil {
errs = append(errs, e)
}
}
if len(errs) == 0 {
return nil
}
return aggregateErr(errs)
}
// Error is a modified copy of k8s.io/apimachinery/pkg/util/errors.aggregate.Error.
// Error is part of the error interface.
func (agg aggregateErr) Error() string {
if len(agg) == 0 {
// This should never happen, really.
return ""
}
if len(agg) == 1 {
return agg[0].Error()
}
result := fmt.Sprintf("[%s", agg[0].Error())
for i := 1; i < len(agg); i++ {
result += fmt.Sprintf(", %s", agg[i].Error())
}
result += "]"
return result
}
// REMOVED: aggregateErr.Errors
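A quick demo of the aggregate semantics, with aggregateErr/newAggregate condensed from the hunk above into a runnable form: nil entries are dropped, a single error passes through unchanged, and several errors render as a bracketed list:

package main

import (
	"errors"
	"fmt"
)

// aggregateErr and newAggregate condensed from the vendored code above.
type aggregateErr []error

func newAggregate(errlist []error) error {
	var errs []error
	for _, e := range errlist {
		if e != nil {
			errs = append(errs, e)
		}
	}
	if len(errs) == 0 {
		return nil
	}
	return aggregateErr(errs)
}

func (agg aggregateErr) Error() string {
	if len(agg) == 1 {
		return agg[0].Error()
	}
	result := fmt.Sprintf("[%s", agg[0].Error())
	for i := 1; i < len(agg); i++ {
		result += fmt.Sprintf(", %s", agg[i].Error())
	}
	return result + "]"
}

func main() {
	fmt.Println(newAggregate(nil))                                            // <nil>
	fmt.Println(newAggregate([]error{errors.New("a")}))                       // a
	fmt.Println(newAggregate([]error{errors.New("a"), nil, errors.New("b")})) // [a, b]
}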
// errConfigurationInvalid is a modified copy of k8s.io/kubernetes/pkg/client/unversioned/clientcmd.errConfigurationInvalid.
// errConfigurationInvalid is a set of errors indicating the configuration is invalid.
type errConfigurationInvalid []error
@@ -470,7 +519,7 @@ func newErrConfigurationInvalid(errs []error) error {
// Error implements the error interface
func (e errConfigurationInvalid) Error() string {
return fmt.Sprintf("invalid configuration: %v", utilerrors.NewAggregate(e).Error())
return fmt.Sprintf("invalid configuration: %v", newAggregate(e).Error())
}
// clientConfigLoadingRules is a modified copy of k8s.io/kubernetes/pkg/client/unversioned/clientcmd.ClientConfigLoadingRules
@@ -518,7 +567,7 @@ func (rules *clientConfigLoadingRules) Load() (*clientcmdConfig, error) {
continue
}
if err != nil {
errlist = append(errlist, fmt.Errorf("Error loading config file \"%s\": %v", filename, err))
errlist = append(errlist, errors.Wrapf(err, "Error loading config file \"%s\"", filename))
continue
}
@@ -550,7 +599,7 @@ func (rules *clientConfigLoadingRules) Load() (*clientcmdConfig, error) {
errlist = append(errlist, err)
}
return config, utilerrors.NewAggregate(errlist)
return config, newAggregate(errlist)
}
// loadFromFile is a modified copy of k8s.io/kubernetes/pkg/client/unversioned/clientcmd.LoadFromFile
@@ -623,7 +672,7 @@ func resolveLocalPaths(config *clientcmdConfig) error {
}
base, err := filepath.Abs(filepath.Dir(cluster.LocationOfOrigin))
if err != nil {
return fmt.Errorf("Could not determine the absolute path of config file %s: %v", cluster.LocationOfOrigin, err)
return errors.Wrapf(err, "Could not determine the absolute path of config file %s", cluster.LocationOfOrigin)
}
if err := resolvePaths(getClusterFileReferences(cluster), base); err != nil {
@@ -636,7 +685,7 @@ func resolveLocalPaths(config *clientcmdConfig) error {
}
base, err := filepath.Abs(filepath.Dir(authInfo.LocationOfOrigin))
if err != nil {
return fmt.Errorf("Could not determine the absolute path of config file %s: %v", authInfo.LocationOfOrigin, err)
return errors.Wrapf(err, "Could not determine the absolute path of config file %s", authInfo.LocationOfOrigin)
}
if err := resolvePaths(getAuthInfoFileReferences(authInfo), base); err != nil {
@@ -706,7 +755,7 @@ func restClientFor(config *restConfig) (*url.URL, *http.Client, error) {
// Kubernetes API.
func defaultServerURL(host string, defaultTLS bool) (*url.URL, error) {
if host == "" {
return nil, fmt.Errorf("host must be a URL or a host:port pair")
return nil, errors.Errorf("host must be a URL or a host:port pair")
}
base := host
hostURL, err := url.Parse(base)
@@ -723,7 +772,7 @@ func defaultServerURL(host string, defaultTLS bool) (*url.URL, error) {
return nil, err
}
if hostURL.Path != "" && hostURL.Path != "/" {
return nil, fmt.Errorf("host must be a URL or a host:port pair: %q", base)
return nil, errors.Errorf("host must be a URL or a host:port pair: %q", base)
}
}
@@ -793,12 +842,58 @@ func transportNew(config *restConfig) (http.RoundTripper, error) {
// REMOVED: HTTPWrappersForConfig(config, rt) in favor of the caller setting HTTP headers itself based on restConfig. Only this inlined check remains.
if len(config.Username) != 0 && len(config.BearerToken) != 0 {
return nil, fmt.Errorf("username/password or bearer token may be set, but not both")
return nil, errors.Errorf("username/password or bearer token may be set, but not both")
}
return rt, nil
}
// newProxierWithNoProxyCIDR is a modified copy of k8s.io/apimachinery/pkg/util/net.NewProxierWithNoProxyCIDR.
// NewProxierWithNoProxyCIDR constructs a Proxier function that respects CIDRs in NO_PROXY and delegates if
// no matching CIDRs are found
func newProxierWithNoProxyCIDR(delegate func(req *http.Request) (*url.URL, error)) func(req *http.Request) (*url.URL, error) {
// we wrap the default method, so we only need to perform our check if the NO_PROXY envvar has a CIDR in it
noProxyEnv := os.Getenv("NO_PROXY")
noProxyRules := strings.Split(noProxyEnv, ",")
cidrs := []*net.IPNet{}
for _, noProxyRule := range noProxyRules {
_, cidr, _ := net.ParseCIDR(noProxyRule)
if cidr != nil {
cidrs = append(cidrs, cidr)
}
}
if len(cidrs) == 0 {
return delegate
}
return func(req *http.Request) (*url.URL, error) {
host := req.URL.Host
// for some urls, the Host is already the host, not the host:port
if net.ParseIP(host) == nil {
var err error
host, _, err = net.SplitHostPort(req.URL.Host)
if err != nil {
return delegate(req)
}
}
ip := net.ParseIP(host)
if ip == nil {
return delegate(req)
}
for _, cidr := range cidrs {
if cidr.Contains(ip) {
return nil, nil
}
}
return delegate(req)
}
}
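A sketch of the new NO_PROXY CIDR behavior, assuming newProxierWithNoProxyCIDR from the hunk above is in scope and using hypothetical addresses: a request to an IP inside a NO_PROXY CIDR bypasses the proxy (the proxier returns nil), anything else delegates to http.ProxyFromEnvironment:

package main

import (
	"fmt"
	"net/http"
	"os"
)

func main() {
	// newProxierWithNoProxyCIDR is assumed in scope (from the hunk above).
	os.Setenv("HTTP_PROXY", "http://proxy.example.com:3128") // hypothetical proxy
	os.Setenv("NO_PROXY", "10.0.0.0/8")
	proxier := newProxierWithNoProxyCIDR(http.ProxyFromEnvironment)

	req, _ := http.NewRequest("GET", "http://10.1.2.3:8080/healthz", nil)
	u, _ := proxier(req)
	fmt.Println(u) // <nil>: 10.1.2.3 falls inside 10.0.0.0/8, so no proxy is used

	req, _ = http.NewRequest("GET", "http://registry.example.com/v2/", nil)
	u, _ = proxier(req)
	fmt.Println(u) // http://proxy.example.com:3128, via the delegate
}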
// tlsCacheGet is a modified copy of k8s.io/kubernetes/pkg/client/transport.tlsTransportCache.get.
func tlsCacheGet(config *restConfig) (http.RoundTripper, error) {
// REMOVED: any actual caching
@@ -813,15 +908,23 @@ func tlsCacheGet(config *restConfig) (http.RoundTripper, error) {
return http.DefaultTransport, nil
}
return utilnet.SetTransportDefaults(&http.Transport{ // FIXME??
Proxy: http.ProxyFromEnvironment,
// REMOVED: Call to k8s.io/apimachinery/pkg/util/net.SetTransportDefaults; instead of the generic machinery and conditionals, hard-coded the result here.
t := &http.Transport{
// http.ProxyFromEnvironment doesn't respect CIDRs and that makes it impossible to exclude things like pod and service IPs from proxy settings
// ProxierWithNoProxyCIDR allows CIDR rules in NO_PROXY
Proxy: newProxierWithNoProxyCIDR(http.ProxyFromEnvironment),
TLSHandshakeTimeout: 10 * time.Second,
TLSClientConfig: tlsConfig,
Dial: (&net.Dialer{
Timeout: 30 * time.Second,
KeepAlive: 30 * time.Second,
}).Dial,
}), nil
}
// Allow clients to disable http2 if needed.
if s := os.Getenv("DISABLE_HTTP2"); len(s) == 0 {
_ = http2.ConfigureTransport(t)
}
return t, nil
}
// tlsConfigFor is a modified copy of k8s.io/kubernetes/pkg/client/transport.TLSConfigFor.
@@ -832,7 +935,7 @@ func tlsConfigFor(c *restConfig) (*tls.Config, error) {
return nil, nil
}
if c.HasCA() && c.Insecure {
return nil, fmt.Errorf("specifying a root certificates file with the insecure flag is not allowed")
return nil, errors.Errorf("specifying a root certificates file with the insecure flag is not allowed")
}
if err := loadTLSFiles(c); err != nil {
return nil, err

View File

@@ -4,7 +4,6 @@ import (
"bytes"
"crypto/rand"
"encoding/json"
"errors"
"fmt"
"io"
"io/ioutil"
@@ -14,9 +13,12 @@ import (
"github.com/Sirupsen/logrus"
"github.com/containers/image/docker"
"github.com/containers/image/docker/reference"
"github.com/containers/image/manifest"
"github.com/containers/image/types"
"github.com/containers/image/version"
"github.com/opencontainers/go-digest"
"github.com/pkg/errors"
)
// openshiftClient is configuration for dealing with a single image stream, for reading or writing.
@@ -123,7 +125,7 @@ func (c *openshiftClient) doRequest(method, path string, requestBody []byte) ([]
if statusValid {
return nil, errors.New(status.Message)
}
return nil, fmt.Errorf("HTTP error: status code: %d, body: %s", res.StatusCode, string(body))
return nil, errors.Errorf("HTTP error: status code: %d, body: %s", res.StatusCode, string(body))
}
return body, nil
@@ -150,9 +152,9 @@ func (c *openshiftClient) getImage(imageStreamImageName string) (*image, error)
func (c *openshiftClient) convertDockerImageReference(ref string) (string, error) {
parts := strings.SplitN(ref, "/", 2)
if len(parts) != 2 {
return "", fmt.Errorf("Invalid format of docker reference %s: missing '/'", ref)
return "", errors.Errorf("Invalid format of docker reference %s: missing '/'", ref)
}
return c.ref.dockerReference.Hostname() + "/" + parts[1], nil
return reference.Domain(c.ref.dockerReference) + "/" + parts[1], nil
}
type openshiftImageSource struct {
@@ -189,13 +191,26 @@ func (s *openshiftImageSource) Reference() types.ImageReference {
}
// Close removes resources associated with an initialized ImageSource, if any.
func (s *openshiftImageSource) Close() {
func (s *openshiftImageSource) Close() error {
if s.docker != nil {
s.docker.Close()
err := s.docker.Close()
s.docker = nil
return err
}
return nil
}
func (s *openshiftImageSource) GetTargetManifest(digest digest.Digest) ([]byte, string, error) {
if err := s.ensureImageIsResolved(); err != nil {
return nil, "", err
}
return s.docker.GetTargetManifest(digest)
}
// GetManifest returns the image's manifest along with its MIME type (which may be empty when it can't be determined but the manifest is available).
// It may use a remote (= slow) service.
func (s *openshiftImageSource) GetManifest() ([]byte, string, error) {
if err := s.ensureImageIsResolved(); err != nil {
return nil, "", err
@@ -204,11 +219,11 @@ func (s *openshiftImageSource) GetManifest() ([]byte, string, error) {
}
// GetBlob returns a stream for the specified blob, and the blob's size (or -1 if unknown).
func (s *openshiftImageSource) GetBlob(digest string) (io.ReadCloser, int64, error) {
func (s *openshiftImageSource) GetBlob(info types.BlobInfo) (io.ReadCloser, int64, error) {
if err := s.ensureImageIsResolved(); err != nil {
return nil, 0, err
}
return s.docker.GetBlob(digest)
return s.docker.GetBlob(info)
}
func (s *openshiftImageSource) GetSignatures() ([][]byte, error) {
@@ -257,7 +272,7 @@ func (s *openshiftImageSource) ensureImageIsResolved() error {
}
}
if te == nil {
return fmt.Errorf("No matching tag found")
return errors.Errorf("No matching tag found")
}
logrus.Debugf("tag event %#v", te)
dockerRefString, err := s.client.convertDockerImageReference(te.DockerImageReference)
@@ -295,7 +310,7 @@ func newImageDestination(ctx *types.SystemContext, ref openshiftReference) (type
// FIXME: Should this always use a digest, not a tag? Uploading to Docker by tag requires the tag _inside_ the manifest to match,
// i.e. a single signed image cannot be available under multiple tags. But with types.ImageDestination, we don't know
// the manifest digest at this point.
dockerRefString := fmt.Sprintf("//%s/%s/%s:%s", client.ref.dockerReference.Hostname(), client.ref.namespace, client.ref.stream, client.ref.dockerReference.Tag())
dockerRefString := fmt.Sprintf("//%s/%s/%s:%s", reference.Domain(client.ref.dockerReference), client.ref.namespace, client.ref.stream, client.ref.dockerReference.Tag())
dockerRef, err := docker.ParseReference(dockerRefString)
if err != nil {
return nil, err
@@ -318,15 +333,12 @@ func (d *openshiftImageDestination) Reference() types.ImageReference {
}
// Close removes resources associated with an initialized ImageDestination, if any.
func (d *openshiftImageDestination) Close() {
d.docker.Close()
func (d *openshiftImageDestination) Close() error {
return d.docker.Close()
}
func (d *openshiftImageDestination) SupportedManifestMIMETypes() []string {
return []string{
manifest.DockerV2Schema1SignedMediaType,
manifest.DockerV2Schema1MediaType,
}
return d.docker.SupportedManifestMIMETypes()
}
// SupportsSignatures returns an error (to be displayed to the user) if the destination certainly can't store signatures.
@@ -340,6 +352,17 @@ func (d *openshiftImageDestination) ShouldCompressLayers() bool {
return true
}
// AcceptsForeignLayerURLs returns false iff foreign layers in the manifest should actually be
// uploaded to the image destination, true otherwise.
func (d *openshiftImageDestination) AcceptsForeignLayerURLs() bool {
return true
}
// MustMatchRuntimeOS returns true iff the destination can store only images targeted for the current runtime OS. False otherwise.
func (d *openshiftImageDestination) MustMatchRuntimeOS() bool {
return false
}
// PutBlob writes contents of stream and returns data representing the result (with all data filled in).
// inputInfo.Digest can be optionally provided if known; it is not mandatory for the implementation to verify it.
// inputInfo.Size is the expected length of stream, if known.
@@ -350,19 +373,35 @@ func (d *openshiftImageDestination) PutBlob(stream io.Reader, inputInfo types.Bl
return d.docker.PutBlob(stream, inputInfo)
}
// HasBlob returns true iff the image destination already contains a blob with the matching digest which can be reapplied using ReapplyBlob.
// Unlike PutBlob, the digest can not be empty. If HasBlob returns true, the size of the blob must also be returned.
// If the destination does not contain the blob, or it is unknown, HasBlob ordinarily returns (false, -1, nil);
// it returns a non-nil error only on an unexpected failure.
func (d *openshiftImageDestination) HasBlob(info types.BlobInfo) (bool, int64, error) {
return d.docker.HasBlob(info)
}
func (d *openshiftImageDestination) ReapplyBlob(info types.BlobInfo) (types.BlobInfo, error) {
return d.docker.ReapplyBlob(info)
}
// PutManifest writes manifest to the destination.
// FIXME? This should also receive a MIME type if known, to differentiate between schema versions.
// If the destination is in principle available but refuses this manifest type (e.g. it does not recognize the schema)
// while it may accept a different manifest type, the returned error must be a ManifestTypeRejectedError.
func (d *openshiftImageDestination) PutManifest(m []byte) error {
manifestDigest, err := manifest.Digest(m)
if err != nil {
return err
}
d.imageStreamImageName = manifestDigest
d.imageStreamImageName = manifestDigest.String()
return d.docker.PutManifest(m)
}
func (d *openshiftImageDestination) PutSignatures(signatures [][]byte) error {
if d.imageStreamImageName == "" {
return fmt.Errorf("Internal error: Unknown manifest digest, can't add signatures")
return errors.Errorf("Internal error: Unknown manifest digest, can't add signatures")
}
// Because image signatures are a shared resource in Atomic Registry, the default upload
// always adds signatures. Eventually we should also allow removing signatures.
@@ -394,7 +433,7 @@ sigExists:
randBytes := make([]byte, 16)
n, err := rand.Read(randBytes)
if err != nil || n != 16 {
return fmt.Errorf("Error generating random signature ID: %v, len %d", err, n)
return errors.Wrapf(err, "Error generating random signature len %d", n)
}
signatureName = fmt.Sprintf("%s@%032x", d.imageStreamImageName, randBytes)
if _, ok := existingSigNames[signatureName]; !ok {

View File

@@ -6,11 +6,17 @@ import (
"strings"
"github.com/containers/image/docker/policyconfiguration"
"github.com/containers/image/docker/reference"
genericImage "github.com/containers/image/image"
"github.com/containers/image/transports"
"github.com/containers/image/types"
"github.com/docker/docker/reference"
"github.com/pkg/errors"
)
func init() {
transports.Register(Transport)
}
// Transport is an ImageTransport for OpenShift registry-hosted images.
var Transport = openshiftTransport{}
@@ -36,7 +42,7 @@ var scopeRegexp = regexp.MustCompile("^[^/]*(/[^:/]*(/[^:/]*(:[^:/]*)?)?)?$")
// scope passed to this function will not be "", that value is always allowed.
func (t openshiftTransport) ValidatePolicyConfigurationScope(scope string) error {
if scopeRegexp.FindStringIndex(scope) == nil {
return fmt.Errorf("Invalid scope name %s", scope)
return errors.Errorf("Invalid scope name %s", scope)
}
return nil
}
@@ -50,22 +56,23 @@ type openshiftReference struct {
// ParseReference converts a string, which should not start with the ImageTransport.Name prefix, into an OpenShift ImageReference.
func ParseReference(ref string) (types.ImageReference, error) {
r, err := reference.ParseNamed(ref)
r, err := reference.ParseNormalizedNamed(ref)
if err != nil {
return nil, fmt.Errorf("failed to parse image reference %q, %v", ref, err)
return nil, errors.Wrapf(err, "failed to parse image reference %q", ref)
}
tagged, ok := r.(reference.NamedTagged)
if !ok {
return nil, fmt.Errorf("invalid image reference %s, %#v", ref, r)
return nil, errors.Errorf("invalid image reference %s, expected format: 'hostname/namespace/stream:tag'", ref)
}
return NewReference(tagged)
}
// NewReference returns an OpenShift reference for a reference.NamedTagged
func NewReference(dockerRef reference.NamedTagged) (types.ImageReference, error) {
r := strings.SplitN(dockerRef.RemoteName(), "/", 3)
r := strings.SplitN(reference.Path(dockerRef), "/", 3)
if len(r) != 2 {
return nil, fmt.Errorf("invalid image reference %s", dockerRef.String())
return nil, errors.Errorf("invalid image reference: %s, expected format: 'hostname/namespace/stream:tag'",
reference.FamiliarString(dockerRef))
}
return openshiftReference{
namespace: r[0],
@@ -84,7 +91,7 @@ func (ref openshiftReference) Transport() types.ImageTransport {
// e.g. default attribute values omitted by the user may be filled in in the return value, or vice versa.
// WARNING: Do not use the return value in the UI to describe an image, it does not contain the Transport().Name() prefix.
func (ref openshiftReference) StringWithinTransport() string {
return ref.dockerReference.String()
return reference.FamiliarString(ref.dockerReference)
}
// DockerReference returns a Docker reference associated with this reference
@@ -118,14 +125,16 @@ func (ref openshiftReference) PolicyConfigurationNamespaces() []string {
return policyconfiguration.DockerReferenceNamespaces(ref.dockerReference)
}
// NewImage returns a types.Image for this reference.
// NewImage returns a types.Image for this reference, possibly specialized for this ImageTransport.
// The caller must call .Close() on the returned Image.
// NOTE: If any kind of signature verification should happen, build an UnparsedImage from the value returned by NewImageSource,
// verify that UnparsedImage, and convert it into a real Image via image.FromUnparsedImage.
func (ref openshiftReference) NewImage(ctx *types.SystemContext) (types.Image, error) {
src, err := newImageSource(ctx, ref, nil)
if err != nil {
return nil, err
}
return genericImage.FromSource(src), nil
return genericImage.FromSource(src)
}
// NewImageSource returns a types.ImageSource for this reference,
@@ -144,5 +153,5 @@ func (ref openshiftReference) NewImageDestination(ctx *types.SystemContext) (typ
// DeleteImage deletes the named image from the registry, if supported.
func (ref openshiftReference) DeleteImage(ctx *types.SystemContext) error {
return fmt.Errorf("Deleting images not implemented for atomic: images")
return errors.Errorf("Deleting images not implemented for atomic: images")
}

View File

@@ -0,0 +1,323 @@
package ostree
import (
"bytes"
"encoding/json"
"fmt"
"io"
"io/ioutil"
"os"
"os/exec"
"path/filepath"
"strconv"
"strings"
"time"
"github.com/containers/image/manifest"
"github.com/containers/image/types"
"github.com/containers/storage/pkg/archive"
"github.com/opencontainers/go-digest"
"github.com/pkg/errors"
"github.com/ostreedev/ostree-go/pkg/otbuiltin"
)
type blobToImport struct {
Size int64
Digest digest.Digest
BlobPath string
}
type descriptor struct {
Size int64 `json:"size"`
Digest digest.Digest `json:"digest"`
}
type manifestSchema struct {
ConfigDescriptor descriptor `json:"config"`
LayersDescriptors []descriptor `json:"layers"`
}
type ostreeImageDestination struct {
ref ostreeReference
manifest string
schema manifestSchema
tmpDirPath string
blobs map[string]*blobToImport
}
// newImageDestination returns an ImageDestination for writing to an existing ostree.
func newImageDestination(ref ostreeReference, tmpDirPath string) (types.ImageDestination, error) {
tmpDirPath = filepath.Join(tmpDirPath, ref.branchName)
if err := ensureDirectoryExists(tmpDirPath); err != nil {
return nil, err
}
return &ostreeImageDestination{ref, "", manifestSchema{}, tmpDirPath, map[string]*blobToImport{}}, nil
}
// Reference returns the reference used to set up this destination. Note that this should directly correspond to user's intent,
// e.g. it should use the public hostname instead of the result of resolving CNAMEs or following redirects.
func (d *ostreeImageDestination) Reference() types.ImageReference {
return d.ref
}
// Close removes resources associated with an initialized ImageDestination, if any.
func (d *ostreeImageDestination) Close() error {
return os.RemoveAll(d.tmpDirPath)
}
func (d *ostreeImageDestination) SupportedManifestMIMETypes() []string {
return []string{
manifest.DockerV2Schema2MediaType,
}
}
// SupportsSignatures returns an error (to be displayed to the user) if the destination certainly can't store signatures.
// Note: It is still possible for PutSignatures to fail if SupportsSignatures returns nil.
func (d *ostreeImageDestination) SupportsSignatures() error {
return nil
}
// ShouldCompressLayers returns true iff it is desirable to compress layer blobs written to this destination.
func (d *ostreeImageDestination) ShouldCompressLayers() bool {
return false
}
// AcceptsForeignLayerURLs returns false iff foreign layers in the manifest should actually be
// uploaded to the image destination, true otherwise.
func (d *ostreeImageDestination) AcceptsForeignLayerURLs() bool {
return false
}
// MustMatchRuntimeOS returns true iff the destination can store only images targeted for the current runtime OS. False otherwise.
func (d *ostreeImageDestination) MustMatchRuntimeOS() bool {
return true
}
func (d *ostreeImageDestination) PutBlob(stream io.Reader, inputInfo types.BlobInfo) (types.BlobInfo, error) {
tmpDir, err := ioutil.TempDir(d.tmpDirPath, "blob")
if err != nil {
return types.BlobInfo{}, err
}
blobPath := filepath.Join(tmpDir, "content")
blobFile, err := os.Create(blobPath)
if err != nil {
return types.BlobInfo{}, err
}
defer blobFile.Close()
digester := digest.Canonical.Digester()
tee := io.TeeReader(stream, digester.Hash())
size, err := io.Copy(blobFile, tee)
if err != nil {
return types.BlobInfo{}, err
}
computedDigest := digester.Digest()
if inputInfo.Size != -1 && size != inputInfo.Size {
return types.BlobInfo{}, errors.Errorf("Size mismatch when copying %s, expected %d, got %d", computedDigest, inputInfo.Size, size)
}
if err := blobFile.Sync(); err != nil {
return types.BlobInfo{}, err
}
hash := computedDigest.Hex()
d.blobs[hash] = &blobToImport{Size: size, Digest: computedDigest, BlobPath: blobPath}
return types.BlobInfo{Digest: computedDigest, Size: size}, nil
}
func fixFiles(dir string, usermode bool) error {
entries, err := ioutil.ReadDir(dir)
if err != nil {
return err
}
for _, info := range entries {
fullpath := filepath.Join(dir, info.Name())
if info.Mode()&(os.ModeNamedPipe|os.ModeSocket|os.ModeDevice) != 0 {
if err := os.Remove(fullpath); err != nil {
return err
}
continue
}
if info.IsDir() {
if usermode {
if err := os.Chmod(fullpath, info.Mode()|0700); err != nil {
return err
}
}
err = fixFiles(fullpath, usermode)
if err != nil {
return err
}
} else if usermode && (info.Mode().IsRegular() || (info.Mode()&os.ModeSymlink) != 0) {
if err := os.Chmod(fullpath, info.Mode()|0600); err != nil {
return err
}
}
}
return nil
}
func (d *ostreeImageDestination) ostreeCommit(repo *otbuiltin.Repo, branch string, root string, metadata []string) error {
opts := otbuiltin.NewCommitOptions()
opts.AddMetadataString = metadata
opts.Timestamp = time.Now()
// OCI layers have no parent OSTree commit
opts.Parent = "0000000000000000000000000000000000000000000000000000000000000000"
_, err := repo.Commit(root, branch, opts)
return err
}
func (d *ostreeImageDestination) importBlob(repo *otbuiltin.Repo, blob *blobToImport) error {
ostreeBranch := fmt.Sprintf("ociimage/%s", blob.Digest.Hex())
destinationPath := filepath.Join(d.tmpDirPath, blob.Digest.Hex(), "root")
if err := ensureDirectoryExists(destinationPath); err != nil {
return err
}
defer func() {
os.Remove(blob.BlobPath)
os.RemoveAll(destinationPath)
}()
if os.Getuid() == 0 {
if err := archive.UntarPath(blob.BlobPath, destinationPath); err != nil {
return err
}
if err := fixFiles(destinationPath, false); err != nil {
return err
}
} else {
os.MkdirAll(destinationPath, 0755)
if err := exec.Command("tar", "-C", destinationPath, "--no-same-owner", "--no-same-permissions", "--delay-directory-restore", "-xf", blob.BlobPath).Run(); err != nil {
return err
}
if err := fixFiles(destinationPath, true); err != nil {
return err
}
}
return d.ostreeCommit(repo, ostreeBranch, destinationPath, []string{fmt.Sprintf("docker.size=%d", blob.Size)})
}
func (d *ostreeImageDestination) importConfig(blob *blobToImport) error {
ostreeBranch := fmt.Sprintf("ociimage/%s", blob.Digest.Hex())
return exec.Command("ostree", "commit",
"--repo", d.ref.repo,
fmt.Sprintf("--add-metadata-string=docker.size=%d", blob.Size),
"--branch", ostreeBranch, filepath.Dir(blob.BlobPath)).Run()
}
func (d *ostreeImageDestination) HasBlob(info types.BlobInfo) (bool, int64, error) {
branch := fmt.Sprintf("ociimage/%s", info.Digest.Hex())
output, err := exec.Command("ostree", "show", "--repo", d.ref.repo, "--print-metadata-key=docker.size", branch).CombinedOutput()
if err != nil {
if bytes.Index(output, []byte("not found")) >= 0 || bytes.Index(output, []byte("No such")) >= 0 {
return false, -1, nil
}
return false, -1, err
}
size, err := strconv.ParseInt(strings.Trim(string(output), "'\n"), 10, 64)
if err != nil {
return false, -1, err
}
return true, size, nil
}
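For clarity, a standalone sketch of the size parsing above, assuming `ostree show --print-metadata-key=docker.size` prints the value quoted plus a trailing newline (which is what the Trim set implies):

package main

import (
	"fmt"
	"strconv"
	"strings"
)

func main() {
	output := "'52417024'\n" // assumed shape of the ostree CLI output
	// Trim the surrounding quotes and newline before parsing the size.
	size, err := strconv.ParseInt(strings.Trim(output, "'\n"), 10, 64)
	if err != nil {
		panic(err)
	}
	fmt.Println(size) // 52417024
}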
func (d *ostreeImageDestination) ReapplyBlob(info types.BlobInfo) (types.BlobInfo, error) {
return info, nil
}
// PutManifest writes manifest to the destination.
// FIXME? This should also receive a MIME type if known, to differentiate between schema versions.
// If the destination is in principle available but refuses this manifest type (e.g. it does not recognize the schema)
// while it may accept a different manifest type, the returned error must be a ManifestTypeRejectedError.
func (d *ostreeImageDestination) PutManifest(manifest []byte) error {
d.manifest = string(manifest)
if err := json.Unmarshal(manifest, &d.schema); err != nil {
return err
}
manifestPath := filepath.Join(d.tmpDirPath, d.ref.manifestPath())
if err := ensureParentDirectoryExists(manifestPath); err != nil {
return err
}
return ioutil.WriteFile(manifestPath, manifest, 0644)
}
func (d *ostreeImageDestination) PutSignatures(signatures [][]byte) error {
path := filepath.Join(d.tmpDirPath, d.ref.signaturePath(0))
if err := ensureParentDirectoryExists(path); err != nil {
return err
}
for i, sig := range signatures {
signaturePath := filepath.Join(d.tmpDirPath, d.ref.signaturePath(i))
if err := ioutil.WriteFile(signaturePath, sig, 0644); err != nil {
return err
}
}
return nil
}
func (d *ostreeImageDestination) Commit() error {
repo, err := otbuiltin.OpenRepo(d.ref.repo)
if err != nil {
return err
}
_, err = repo.PrepareTransaction()
if err != nil {
return err
}
for _, layer := range d.schema.LayersDescriptors {
hash := layer.Digest.Hex()
blob := d.blobs[hash]
// if the blob is not present in d.blobs then it is already stored in OSTree,
// and we don't need to import it.
if blob == nil {
continue
}
err := d.importBlob(repo, blob)
if err != nil {
return err
}
}
hash := d.schema.ConfigDescriptor.Digest.Hex()
blob := d.blobs[hash]
if blob != nil {
err := d.importConfig(blob)
if err != nil {
return err
}
}
manifestPath := filepath.Join(d.tmpDirPath, "manifest")
metadata := []string{fmt.Sprintf("docker.manifest=%s", string(d.manifest))}
err = d.ostreeCommit(repo, fmt.Sprintf("ociimage/%s", d.ref.branchName), manifestPath, metadata)
if err != nil {
return err
}
_, err = repo.CommitTransaction()
return err
}
func ensureDirectoryExists(path string) error {
if _, err := os.Stat(path); err != nil && os.IsNotExist(err) {
if err := os.MkdirAll(path, 0755); err != nil {
return err
}
}
return nil
}
func ensureParentDirectoryExists(path string) error {
return ensureDirectoryExists(filepath.Dir(path))
}

View File

@@ -0,0 +1,235 @@
package ostree
import (
"bytes"
"fmt"
"os"
"path/filepath"
"regexp"
"strings"
"github.com/pkg/errors"
"github.com/containers/image/directory/explicitfilepath"
"github.com/containers/image/docker/reference"
"github.com/containers/image/transports"
"github.com/containers/image/types"
)
const defaultOSTreeRepo = "/ostree/repo"
// Transport is an ImageTransport for ostree paths.
var Transport = ostreeTransport{}
type ostreeTransport struct{}
func (t ostreeTransport) Name() string {
return "ostree"
}
func init() {
transports.Register(Transport)
}
// ValidatePolicyConfigurationScope checks that scope is a valid name for a signature.PolicyTransportScopes keys
// (i.e. a valid PolicyConfigurationIdentity() or PolicyConfigurationNamespaces() return value).
// It is acceptable to allow an invalid value which will never be matched, it can "only" cause user confusion.
// scope passed to this function will not be "", that value is always allowed.
func (t ostreeTransport) ValidatePolicyConfigurationScope(scope string) error {
sep := strings.Index(scope, ":")
if sep < 0 {
return errors.Errorf("Invalid ostree: scope %s: Must include a repo", scope)
}
repo := scope[:sep]
if !strings.HasPrefix(repo, "/") {
return errors.Errorf("Invalid ostree: scope %s: repository must be an absolute path", scope)
}
cleaned := filepath.Clean(repo)
if cleaned != repo {
return errors.Errorf(`Invalid ostree: scope %s: Uses non-canonical path format, perhaps try with path %s`, scope, cleaned)
}
// FIXME? In the namespaces within a repo,
// we could be verifying the various character set and length restrictions
// from docker/distribution/reference.regexp.go, but other than that there
// are few semantically invalid strings.
return nil
}
// ostreeReference is an ImageReference for ostree paths.
type ostreeReference struct {
image string
branchName string
repo string
}
func (t ostreeTransport) ParseReference(ref string) (types.ImageReference, error) {
var repo = ""
var image = ""
s := strings.SplitN(ref, "@/", 2)
if len(s) == 1 {
image, repo = s[0], defaultOSTreeRepo
} else {
image, repo = s[0], "/"+s[1]
}
return NewReference(image, repo)
}
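A condensed sketch of the reference syntax ParseReference accepts, mirroring the split logic above: "image[:tag][@/absolute/repo/path]", with the repo part defaulting to defaultOSTreeRepo when the "@/" separator is absent:

package main

import (
	"fmt"
	"strings"
)

const defaultOSTreeRepo = "/ostree/repo"

func splitRef(ref string) (image, repo string) {
	s := strings.SplitN(ref, "@/", 2)
	if len(s) == 1 {
		return s[0], defaultOSTreeRepo
	}
	return s[0], "/" + s[1]
}

func main() {
	fmt.Println(splitRef("busybox:latest"))
	// busybox:latest /ostree/repo
	fmt.Println(splitRef("busybox:latest@/var/lib/containers/ostree"))
	// busybox:latest /var/lib/containers/ostree
}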
// NewReference returns an OSTree reference for a specified repo and image.
func NewReference(image string, repo string) (types.ImageReference, error) {
// image is not _really_ in a containers/image/docker/reference format;
// as far as the libOSTree ociimage/* namespace is concerned, it is more or
// less an arbitrary string with an implied tag.
// We use the reference.* parsers basically for the default tag name in
// reference.TagNameOnly, and incidentally for some character set and length
// restrictions.
var ostreeImage reference.Named
s := strings.SplitN(image, ":", 2)
named, err := reference.WithName(s[0])
if err != nil {
return nil, err
}
if len(s) == 1 {
ostreeImage = reference.TagNameOnly(named)
} else {
ostreeImage, err = reference.WithTag(named, s[1])
if err != nil {
return nil, err
}
}
resolved, err := explicitfilepath.ResolvePathToFullyExplicit(repo)
if err != nil {
// With os.IsNotExist(err), the parent directory of repo does not exist either;
// that should ordinarily not happen, but it would be a bit weird to reject
// references which do not specify a repo just because the implicit defaultOSTreeRepo
// does not exist.
if os.IsNotExist(err) && repo == defaultOSTreeRepo {
resolved = repo
} else {
return nil, err
}
}
// This is necessary to prevent directory paths returned by PolicyConfigurationNamespaces
// from being ambiguous with values of PolicyConfigurationIdentity.
if strings.Contains(resolved, ":") {
return nil, errors.Errorf("Invalid OSTreeCI reference %s@%s: path %s contains a colon", image, repo, resolved)
}
return ostreeReference{
image: ostreeImage.String(),
branchName: encodeOStreeRef(ostreeImage.String()),
repo: resolved,
}, nil
}
func (ref ostreeReference) Transport() types.ImageTransport {
return Transport
}
// StringWithinTransport returns a string representation of the reference, which MUST be such that
// reference.Transport().ParseReference(reference.StringWithinTransport()) returns an equivalent reference.
// NOTE: The returned string is not promised to be equal to the original input to ParseReference;
// e.g. default attribute values omitted by the user may be filled in in the return value, or vice versa.
// WARNING: Do not use the return value in the UI to describe an image, it does not contain the Transport().Name() prefix.
func (ref ostreeReference) StringWithinTransport() string {
return fmt.Sprintf("%s@%s", ref.image, ref.repo)
}
// DockerReference returns a Docker reference associated with this reference
// (fully explicit, i.e. !reference.IsNameOnly, but reflecting user intent,
// not e.g. after redirect or alias processing), or nil if unknown/not applicable.
func (ref ostreeReference) DockerReference() reference.Named {
return nil
}
func (ref ostreeReference) PolicyConfigurationIdentity() string {
return fmt.Sprintf("%s:%s", ref.repo, ref.image)
}
// PolicyConfigurationNamespaces returns a list of other policy configuration namespaces to search
// for if explicit configuration for PolicyConfigurationIdentity() is not set. The list will be processed
// in order, terminating on first match, and an implicit "" is always checked at the end.
// It is STRONGLY recommended for the first element, if any, to be a prefix of PolicyConfigurationIdentity(),
// and each following element to be a prefix of the element preceding it.
func (ref ostreeReference) PolicyConfigurationNamespaces() []string {
s := strings.SplitN(ref.image, ":", 2)
if len(s) != 2 { // Coverage: Should never happen, NewReference above ensures ref.image has a :tag.
panic(fmt.Sprintf("Internal inconsistency: ref.image value %q does not have a :tag", ref.image))
}
name := s[0]
res := []string{}
for {
res = append(res, fmt.Sprintf("%s:%s", ref.repo, name))
lastSlash := strings.LastIndex(name, "/")
if lastSlash == -1 {
break
}
name = name[:lastSlash]
}
return res
}
// NewImage returns a types.Image for this reference, possibly specialized for this ImageTransport.
// The caller must call .Close() on the returned Image.
// NOTE: If any kind of signature verification should happen, build an UnparsedImage from the value returned by NewImageSource,
// verify that UnparsedImage, and convert it into a real Image via image.FromUnparsedImage.
func (ref ostreeReference) NewImage(ctx *types.SystemContext) (types.Image, error) {
return nil, errors.New("Reading ostree: images is currently not supported")
}
// NewImageSource returns a types.ImageSource for this reference,
// asking the backend to use a manifest from requestedManifestMIMETypes if possible.
// nil requestedManifestMIMETypes means manifest.DefaultRequestedManifestMIMETypes.
// The caller must call .Close() on the returned ImageSource.
func (ref ostreeReference) NewImageSource(ctx *types.SystemContext, requestedManifestMIMETypes []string) (types.ImageSource, error) {
return nil, errors.New("Reading ostree: images is currently not supported")
}
// NewImageDestination returns a types.ImageDestination for this reference.
// The caller must call .Close() on the returned ImageDestination.
func (ref ostreeReference) NewImageDestination(ctx *types.SystemContext) (types.ImageDestination, error) {
var tmpDir string
if ctx == nil || ctx.OSTreeTmpDirPath == "" {
tmpDir = os.TempDir()
} else {
tmpDir = ctx.OSTreeTmpDirPath
}
return newImageDestination(ref, tmpDir)
}
// DeleteImage deletes the named image from the registry, if supported.
func (ref ostreeReference) DeleteImage(ctx *types.SystemContext) error {
return errors.Errorf("Deleting images not implemented for ostree: images")
}
var ostreeRefRegexp = regexp.MustCompile(`^[A-Za-z0-9.-]$`)
func encodeOStreeRef(in string) string {
var buffer bytes.Buffer
for i := range in {
sub := in[i : i+1]
if ostreeRefRegexp.MatchString(sub) {
buffer.WriteString(sub)
} else {
buffer.WriteString(fmt.Sprintf("_%02X", sub[0]))
}
}
return buffer.String()
}
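Assuming encodeOStreeRef from this file is in scope, a short sketch of the escaping rule: every byte outside [A-Za-z0-9.-] becomes _XX with its hex value, keeping the result a valid OSTree branch name and the encoding reversible:

package main

import "fmt"

func main() {
	// encodeOStreeRef is assumed in scope (defined above).
	fmt.Println(encodeOStreeRef("busybox:latest"))
	// busybox_3Alatest (':' is 0x3A)
	fmt.Println(encodeOStreeRef("docker.io/library/busybox:latest"))
	// docker.io_2Flibrary_2Fbusybox_3Alatest ('/' is 0x2F)
}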
// manifestPath returns a path for the manifest within a ostree using our conventions.
func (ref ostreeReference) manifestPath() string {
return filepath.Join("manifest", "manifest.json")
}
// signaturePath returns a path for a signature within a ostree using our conventions.
func (ref ostreeReference) signaturePath(index int) string {
return filepath.Join("manifest", fmt.Sprintf("signature-%d", index+1))
}

View File

@@ -0,0 +1,67 @@
package compression
import (
"bytes"
"compress/bzip2"
"compress/gzip"
"io"
"github.com/pkg/errors"
"github.com/Sirupsen/logrus"
)
// DecompressorFunc returns the decompressed stream, given a compressed stream.
type DecompressorFunc func(io.Reader) (io.Reader, error)
// GzipDecompressor is a DecompressorFunc for the gzip compression algorithm.
func GzipDecompressor(r io.Reader) (io.Reader, error) {
return gzip.NewReader(r)
}
// Bzip2Decompressor is a DecompressorFunc for the bzip2 compression algorithm.
func Bzip2Decompressor(r io.Reader) (io.Reader, error) {
return bzip2.NewReader(r), nil
}
// XzDecompressor is a DecompressorFunc for the xz compression algorithm.
func XzDecompressor(r io.Reader) (io.Reader, error) {
return nil, errors.New("Decompressing xz streams is not supported")
}
// compressionAlgos is an internal implementation detail of DetectCompression
var compressionAlgos = map[string]struct {
prefix []byte
decompressor DecompressorFunc
}{
"gzip": {[]byte{0x1F, 0x8B, 0x08}, GzipDecompressor}, // gzip (RFC 1952)
"bzip2": {[]byte{0x42, 0x5A, 0x68}, Bzip2Decompressor}, // bzip2 (decompress.c:BZ2_decompress)
"xz": {[]byte{0xFD, 0x37, 0x7A, 0x58, 0x5A, 0x00}, XzDecompressor}, // xz (/usr/share/doc/xz/xz-file-format.txt)
}
// DetectCompression returns a DecompressorFunc if the input is recognized as a compressed format, nil otherwise.
// Because it consumes the start of input, other consumers must use the returned io.Reader instead to also read from the beginning.
func DetectCompression(input io.Reader) (DecompressorFunc, io.Reader, error) {
buffer := [8]byte{}
n, err := io.ReadAtLeast(input, buffer[:], len(buffer))
if err != nil && err != io.EOF && err != io.ErrUnexpectedEOF {
// This is a “real” error. We could just ignore it this time, process the data we have, and hope that the source will report the same error again.
// Instead, fail immediately with the original error cause instead of a possibly secondary/misleading error returned later.
return nil, nil, err
}
var decompressor DecompressorFunc
for name, algo := range compressionAlgos {
if bytes.HasPrefix(buffer[:n], algo.prefix) {
logrus.Debugf("Detected compression format %s", name)
decompressor = algo.decompressor
break
}
}
if decompressor == nil {
logrus.Debugf("No compression detected")
}
return decompressor, io.MultiReader(bytes.NewReader(buffer[:n]), input), nil
}

Some files were not shown because too many files have changed in this diff