Compare commits

...

276 Commits

Author SHA1 Message Date
Antonio Murdaca
1715c90841 version: bump to v0.1.32
Signed-off-by: Antonio Murdaca <runcom@linux.com>
2018-11-07 18:06:24 +01:00
Miloslav Trmač
187299a20b Merge pull request #566 from isimluk/fix-podman-build
Enable build in podman
2018-11-03 02:05:12 +01:00
Šimon Lukašík
89d8bddd9b Actually, enable build in podman 2018-11-03 00:10:44 +01:00
Daniel J Walsh
ba649c56bf Merge pull request #565 from armstrongli/master
add timeout support for image copy
2018-11-02 11:32:36 -04:00
jianqli
3456577268 add command timeout support for skopeo
* add global command-timeout option for skopeo
2018-11-01 23:02:33 +08:00
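[Editor's note: a usage sketch of the new global option described above — the flag name comes from this commit; the Go-style duration syntax (e.g. 30s, 2m) is an assumption.]

    # abort the whole copy if it takes longer than two minutes (duration syntax assumed)
    skopeo --command-timeout 2m copy \
        docker://docker.io/library/alpine:latest dir:/tmp/alpine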
Miloslav Trmač
b52e700666 Merge pull request #562 from Bhudjo/patch-1
Fix typo
2018-10-17 01:42:29 +02:00
Alessandro Buggin
ee32f1f7aa Fix typo 2018-10-16 15:47:30 +01:00
Miloslav Trmač
5af0da9de6 Merge pull request #558 from nalind/copy-digest
Update containers/image to 5e5b67d6b1cf43cc349128ec3ed7d5283a6cc0d1
2018-10-16 00:22:09 +02:00
Nalin Dahyabhai
879a6d793f Log the version of the go toolchain
Before we use "go get" in CI, run "go version" so that we can be sure of
which version of the toolchain we're using.

Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>
2018-10-15 15:45:45 -04:00
Nalin Dahyabhai
2734f93e30 Update for github.com/containers/image API change
github.com/containers/image/copy.Image() now returns the copied
manifest, so we at least need to ignore it.

Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>
2018-10-15 15:45:45 -04:00
Nalin Dahyabhai
2b97124e4a bump(github.com/containers/image)
Bump github.com/containers/image to version
5e5b67d6b1cf43cc349128ec3ed7d5283a6cc0d1, which modifies copy.Image() to
add the new image's manifest to the values that it returns.

Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>
2018-10-15 15:45:43 -04:00
Antonio Murdaca
7815a5801e Merge pull request #561 from runcom/fix-golint
fix golint
2018-10-13 16:44:19 +02:00
Antonio Murdaca
501e1be3cf fix golint
Signed-off-by: Antonio Murdaca <runcom@linux.com>
2018-10-13 15:41:55 +02:00
Miloslav Trmač
fc386a6dca Merge pull request #535 from rhatdan/podman
Add support for using podman in testing skopeo
2018-10-01 17:37:34 +02:00
Daniel J Walsh
2a134a0ddd Add support for using podman in testing skopeo
Also rename commands from Docker to container.

Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
2018-09-30 07:54:23 +02:00
Miloslav Trmač
17250d7e8d Merge pull request #554 from rhatdan/vendor
Update vendor for skopeo release
2018-09-27 21:09:28 +02:00
Daniel J Walsh
65d28709c3 Update vendor for skopeo release
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
2018-09-21 08:49:55 -04:00
Miloslav Trmač
d6c6c78d1b Merge pull request #553 from mtrmac/revendor
Run (make vendor)
2018-09-17 16:49:44 +02:00
Miloslav Trmač
67ffa00b1d Run (make vendor)
Temporarily vendor opencontainers/image-spec from a fork
to fix "id" value duplication, which is detected and
refused by gojsonschema now
( https://github.com/opencontainers/image-spec/pull/750 ).

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2018-09-17 16:16:19 +02:00
Antonio Murdaca
a581847345 Merge pull request #552 from mtrmac/disable-cgo
Fix (make DISABLE_CGO=1)
2018-09-17 14:28:44 +02:00
Miloslav Trmač
bcd26a4ae4 Fix (make DISABLE_CGO=1)
... which has, apparently, never worked, because the golang image
has neither the GOPATH nor the working directory the Makefile expects.

Rather than move all this configuration into the Makefile to be able
to work with the golang images, just always use the skopeobuildimage
path, and only override the tags, to minimize divergence.

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2018-09-17 13:55:46 +02:00
Miloslav Trmač
e38c345f23 Merge pull request #551 from rst0git/patch-1
readme: Add Ubuntu dependencies
2018-09-17 13:11:13 +02:00
Radostin Stoyanov
0421fb04c2 readme: Add Ubuntu dependencies
Signed-off-by: Radostin Stoyanov <rstoyanov1@gmail.com>
2018-09-16 12:01:32 +01:00
Antonio Murdaca
82186b916f Merge pull request #543 from giuseppe/remove-glog
vendor.conf: remove unused github.com/golang/glog
2018-08-28 12:00:07 +02:00
Giuseppe Scrivano
15eed5beda vendor.conf: remove unused github.com/golang/glog
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
2018-08-28 11:37:47 +02:00
Miloslav Trmač
81837bd55b Merge pull request #499 from mtrmac/no-docker.Image
Stop using docker.Image
2018-08-27 16:00:12 +02:00
Miloslav Trmač
3dec6a1cdf Stop using docker.Image
Instead, use DockerReference() to obtain the repository name (which
also makes it work for other transports that support Docker references),
and a check for docker.Transport + docker.GetRepositoryTags.

This will allow dropping docker.Image from containers/image, and maybe
even all of ImageReference.NewImage (forcing callers to think about
manifest lists, among other things).
2018-08-27 14:00:44 +02:00
Miloslav Trmač
fe14427129 Merge pull request #542 from mtrmac/projectatomic-.validate
Update one more reference to projectatomic/skopeo
2018-08-27 13:52:20 +02:00
Miloslav Trmač
be27588418 Update one more reference to projectatomic/skopeo
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2018-08-27 13:10:02 +02:00
Miloslav Trmač
fb84437cd1 Merge pull request #541 from marcov/testflags
Allow passing TESTFLAGS env to hack/make.sh from make
2018-08-27 11:15:36 +02:00
Marco Vedovati
d9b495ca38 Allow passing TESTFLAGS env to hack/make.sh from make
Minor change to allow passing the env TESTFLAGS to make. That's pretty
convenient to filter what tests to run.
E.g. run integration tests containing the substring `Copy`:
make test-integration TESTFLAGS="-check.f Copy"

Signed-off-by: Marco Vedovati <mvedovati@suse.com>
2018-08-27 10:51:40 +02:00
Miloslav Trmač
6b93d4794f Merge pull request #537 from giuseppe/fix-makefile-vendor
Makefile: make vendor a .PHONY target
2018-08-16 15:58:26 +02:00
Giuseppe Scrivano
5d3849a510 Makefile: make vendor a .PHONY target
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
2018-08-16 15:06:16 +02:00
Daniel J Walsh
fef142f811 Merge pull request #536 from SUSE/update-github-references
Complete transition from the `projectatomic` project to the `containers` one
2018-08-15 16:23:43 -04:00
Flavio Castelli
2684e51aa5 Complete transition from the projectatomic project to the containers one
Replace the occurrences of `github.com/projectatomic` with
`github.com/containers` to ensure that clean clones of the project
build, that the Travis badges on the README work as expected, and
that other minor things keep working.

Signed-off-by: Flavio Castelli <fcastelli@suse.com>
2018-08-14 10:10:06 +02:00
Antonio Murdaca
e814f9605a Merge pull request #529 from runcom/new-0131
release v0.1.31
2018-07-29 19:17:19 +02:00
Antonio Murdaca
5d136a46ed version: bump to v0.1.32-dev
Signed-off-by: Antonio Murdaca <runcom@linux.com>
2018-07-29 19:03:02 +02:00
Antonio Murdaca
b0b750dfa1 version: bump to v0.1.31
Signed-off-by: Antonio Murdaca <runcom@linux.com>
2018-07-29 19:02:33 +02:00
Miloslav Trmač
e3034e1d91 Merge pull request #525 from mtrmac/docker-archive-auto-compression
Vendor after merging mtrmac/image:docker-archive-auto-compression
2018-07-18 01:19:09 +02:00
Miloslav Trmač
1a259b76da Vendor after merging mtrmac/image:docker-archive-auto-compression
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2018-07-18 01:02:26 +02:00
Antonio Murdaca
ae64ff7084 Merge pull request #522 from AkihiroSuda/minimal
Makefile: add binary-minimal and binary-static-minimal
2018-07-09 15:11:02 +02:00
Akihiro Suda
d67d3a4620 Makefile: add binary-minimal and binary-static-minimal
These targets produce a pure-Go binary, without the following features:
* ostree
* devicemapper
* btrfs
* gpgme

Signed-off-by: Akihiro Suda <suda.akihiro@lab.ntt.co.jp>
2018-07-09 18:35:58 +09:00
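[Editor's note: for illustration, invoking the new targets — names taken from this commit — would look like the following sketch.]

    # dynamic pure-Go build, without ostree/devicemapper/btrfs/gpgme
    make binary-minimal
    # fully static pure-Go build
    make binary-static-minimal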
Daniel J Walsh
196bc48723 Merge pull request #520 from mtrmac/copy-error-handling
Ensure that we still return 1, and print something to stderr, on (skopeo copy) failure
2018-07-02 07:57:19 -04:00
Miloslav Trmač
1c6c7bc481 Ensure that we still return 1, and print something to stderr, on (skopeo copy) failure
https://github.com/projectatomic/skopeo/pull/519 made (skopeo copy)
succeed and print nothing to stderr; that could lead to hard-to-diagnose
failures in rare corner cases, e.g. shell scripts which do
(skopeo copy $src $dst) (as opposed to the correct
(skopeo copy "$src" "$dst") ) if $src and $dst are empty due to
a previous failure.
2018-06-30 13:05:23 +02:00
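[Editor's note: a minimal sketch of the failure mode described above; the variables are hypothetical, the point being that unquoted empty variables vanish from the argument list entirely.]

    src=""; dst=""                 # e.g. left empty by an earlier failure
    skopeo copy $src $dst          # word-splits away: runs `skopeo copy` with no args
    skopeo copy "$src" "$dst"      # passes two empty arguments and fails loudly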
Daniel J Walsh
6e23a32282 Merge pull request #519 from vbatts/copy_help_output
cmd/skopeo: show full help on 'copy' with no args
2018-06-29 11:17:03 -04:00
Vincent Batts
f398c9c035 cmd/skopeo: show full help on 'copy' with no args
Signed-off-by: Vincent Batts <vbatts@hashbangbash.com>
2018-06-29 10:13:13 -04:00
Miloslav Trmač
0144aa8dc5 Merge pull request #514 from giuseppe/vndr-c-image
vendor: update containers/image
2018-05-30 20:50:35 +02:00
Giuseppe Scrivano
0df5dcf09c vendor: update containers/image
Needed to pick up this change:

ostree: use the same thread for ostree operations

Since https://github.com/ostreedev/ostree/pull/1555, locking is
enabled by default in OSTree.  Unfortunately it uses thread-private
data and it breaks the Golang bindings.  Force the same thread for the
write operations to the OSTree repository.

Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
2018-05-30 19:00:34 +02:00
Miloslav Trmač
f9baaa6b87 Merge pull request #512 from mtrmac/docker_vendor_update
Update docker/docker dependencies.
2018-05-26 05:56:42 +02:00
Max Goltzsche
67ff78925b Update docker/docker dependencies.
Required to update those dependencies in containers/image.
See https://github.com/containers/image/pull/446.

Updated by mitr@redhat.com to vendor from containers/image master again,
which brought in a few more dependency updates.

Signed-off-by: Max Goltzsche <max.goltzsche@gmail.com>
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2018-05-26 05:41:06 +02:00
Miloslav Trmač
5c611083f2 Merge pull request #510 from rhatdan/vendor
Vendor in latest go-selinux and containers/storage
2018-05-22 17:47:23 +02:00
Daniel J Walsh
976d57ea45 Vendor in latest go-selinux and containers/storage
skopeo is failing to build now on 32 bit systems.  go-selinux update
should fix this.  Also container/storage has had some cleanup fixes
to devicemapper support.

Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
2018-05-22 11:09:34 -04:00
Antonio Murdaca
63569fcd63 Merge pull request #509 from runcom/bump-0.1.30
Bump 0.1.30
2018-05-20 11:11:19 +02:00
Antonio Murdaca
98b3a13b46 version: bump v0.1.31-dev
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2018-05-20 10:56:32 +02:00
Antonio Murdaca
ca3bff6a7c version: bump v0.1.30
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2018-05-20 10:56:16 +02:00
Daniel J Walsh
563a4ac523 Merge pull request #507 from mtrmac/c-image-docs
Include vendor/github.com/containers/image/docs
2018-05-19 04:02:44 -04:00
Miloslav Trmač
14ea9f8bfd Run (make vendor) for the first time.
This primarily adds vendor/github.com/containers/image/docs/ ,
but also updates other dependencies that are not pinned to a specific
commit.
2018-05-19 04:24:17 +02:00
Miloslav Trmač
05e38e127e Add a (make vendor) target, primarily to preserve c/image documentation
The goal is to include the c/image documentation in a skopeo release,
so that RPMs and other distribution mechanisms can ship the c/image
documentation without having to create a separate package for c/image
(which would not otherwise be needed because it is vendored in users).

So, unify the updates of the "vendor" subdirectory as (make vendor),
and document it in README.md.  Also drop hack/vendor.sh, we neither
use nor document it, so updating it as well seems pointless.
2018-05-19 04:21:15 +02:00
Antonio Murdaca
1ef80d8082 Merge pull request #506 from rhatdan/storage
Vendor in latest containers-storage to add devmapper support
2018-05-18 23:33:15 +02:00
Daniel J Walsh
597b6bd204 Vendor in latest containers-storage to add devmapper support
containers/storage and storage.conf now support flags to allow users
to setup containers/storage to run on devicemapper.

Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
2018-05-18 12:04:00 -04:00
Miloslav Trmač
7e9a664764 Merge pull request #505 from umohnani8/transport
[DO NOT MERGE] Pick up changes to transports in containers/image
2018-05-15 21:40:03 +02:00
umohnani8
79449a358d Pick up changes to transports in containers/image
docker-archive and oci-archive now allow the image reference
for the destination to be empty.
Update tests for this new change.

Signed-off-by: umohnani8 <umohnani@redhat.com>
2018-05-15 15:16:36 -04:00
Daniel J Walsh
2d04db9ac8 Merge pull request #504 from mtrmac/README-installation
Improve installation documentation a bit
2018-05-14 16:47:36 -04:00
Miloslav Trmač
3e7a28481c Improve installation documentation a bit
- _Start_ with installing distribution packages, instead of
  mentioning it after the user has already built everything from source.
- Note that both the binary and documentation needs to be built
  for (make install) to work.
2018-05-12 04:09:00 +02:00
Miloslav Trmač
79225f2e65 Merge pull request #501 from vrothberg/multitags
skopeo-copy: docker-archive: multitag support
2018-05-11 22:07:55 +02:00
Valentin Rothberg
e1c1bbf26d skopeo-copy: docker-archive: multitag support
Add multitag support when generating docker-archive tarballs via the
newly added '--additional-tag' option, which can be specified multiple
times to add more than one tag.  All specified tags will be added to the
RepoTags field in the docker-archive's manifest.json file.

This change requires vendoring the latest containers/image with
commit a1a9391830fd08637edbe45133fd0a8a2682ae75.

Signed-off-by: Valentin Rothberg <vrothberg@suse.com>
2018-05-11 07:43:23 +02:00
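[Editor's note: a usage sketch of the option above; the image and tag names are illustrative.]

    skopeo copy --additional-tag busybox:mirrored --additional-tag busybox:backup \
        docker://docker.io/library/busybox:latest docker-archive:busybox.tar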
Miloslav Trmač
c4808f002e Merge pull request #503 from mtrmac/vet-package
Run (go vet) on all subpackages instead only of changed files
2018-05-10 21:35:34 +02:00
Miloslav Trmač
42203b366d Run (go vet) on all subpackages instead only of changed files
Apparently, it was never documented to use (go vet $somefile.go)
(but (go tool vet $somefile.go) was).

go 1.10 seems to do more checks within packages, and $somefile.go
is interpreted as a package with only that file (even if other files
from that package are in the same directory), leading to spurious
"undefined: $symbol" errors.

So, just run (go vet) on ./... (explicitly excluding skopeo/vendor for the
benefit of Go 1.8). We only have three subpackages, so the savings, if any,
from running (go vet) only on the modified subpackages would be small.

More importantly, on a toolchain update, ./... allows us to see the newly
detected issues all at once, instead of randomly waiting for a commit that
changes one of the affected files for the failure to show up.
2018-05-10 21:12:51 +02:00
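[Editor's note: the resulting invocation is roughly the following sketch; the exact wording in the hack/ scripts may differ.]

    # vet every subpackage, but keep Go 1.8 from descending into vendor/
    go vet $(go list ./... | grep -v '/vendor/')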
Miloslav Trmač
1f11b8b350 Merge pull request #500 from mtrmac/go-1.10-check
Fix test suite failures with go >= 1.10
2018-05-07 19:17:06 +02:00
Miloslav Trmač
ea23621c70 Fix test suite failures with go >= 1.10
The hack/common.sh script contains
    local go_version
    go_version=($(go version))
    if [[ "${go_version[2]}" < "go1.5" ]]; then
      # fail
    fi

which does a lexicographic string comparison, and fails with 1.10.
Just drop it, the fedora:latest image is not likely to revert to 1.5.
2018-05-07 18:32:28 +02:00
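[Editor's note: the bug is easy to reproduce in a shell, since [[ ... < ... ]] compares strings lexicographically, not as version numbers.]

    $ [[ "go1.10" < "go1.5" ]] && echo '1.10 sorts before 1.5 lexicographically'
    1.10 sorts before 1.5 lexicographically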
Miloslav Trmač
ab2bc6e8d1 Merge pull request #494 from umohnani8/oci
Vendor in changes made to containers/image
2018-04-12 12:24:00 +02:00
umohnani8
c520041b83 Vendor in changes made to containers/image
containers/image returns a more detailed error message for oci and
oci-archive transports when the syntax given by the user is incorrect

Signed-off-by: Urvashi Mohnani <umohnani@redhat.com>
2018-04-11 15:58:41 -04:00
Miloslav Trmač
e626fca6a7 Merge pull request #493 from mtrmac/context-everywhere
Update for API changes in containers/image#431
2018-04-10 19:30:07 +02:00
Miloslav Trmač
92b6262224 Update for adding context.Context to containers/image API
In addition to the minimum necessary to update the API, also rename some
parameters/variables for consistency:

c	*cli.Context
ctx	context.Context
sys	*types.SystemContext

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2018-04-10 19:08:49 +02:00
Miloslav Trmač
e8dea9e770 Vendor after merging https://github.com/novas0x2a/image:context-everywhere 2018-04-10 19:08:37 +02:00
Miloslav Trmač
28080c8d5f Merge pull request #487 from jlsalmon/configure-daemon-host
Support host configuration for docker-daemon sources and destinations
2018-04-07 06:28:05 +02:00
Justin Lewis Salmon
0cea6dde02 Support host configuration for docker-daemon sources and destinations
This PR adds CLI support for overriding the default docker daemon host when using the
`docker-daemon` transport.

Fixes #244

Signed-off-by: Justin Lewis Salmon <justin.lewis.salmon@gmail.com>
2018-04-07 14:14:32 +10:00
Miloslav Trmač
22482e099a Merge pull request #490 from mtrmac/storage-api-revendor
Vendor after merging containers/image#436
2018-04-05 21:53:18 +02:00
Miloslav Trmač
7aba888e99 Vendor after merging containers/image#436
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2018-04-05 21:33:04 +02:00
Miloslav Trmač
c61482d2cf Merge pull request #486 from runcom/bump-0.1.29
Bump 0.1.29
2018-03-29 15:40:29 +02:00
Antonio Murdaca
db941ebd8f version: bump v0.1.30-dev
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2018-03-29 15:03:29 +02:00
Antonio Murdaca
7add6fc80b version: bump v0.1.29
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2018-03-29 15:03:14 +02:00
Miloslav Trmač
eb9d74090e Merge pull request #485 from nlewo/pr/docker-archive-legacy
Add Docker legacy archive support
2018-03-28 22:38:49 +02:00
Antoine Eiche
61351d44d7 Vendor after merging https://github.com/containers/image/pull/370
Signed-off-by: Antoine Eiche <lewo@abesis.fr>
2018-03-28 18:46:26 +02:00
Miloslav Trmač
aa73bd9d0d Update for changed PutBlob API
Signed-off-by: Antoine Eiche <lewo@abesis.fr>
2018-03-28 18:46:14 +02:00
Miloslav Trmač
b08350db15 Merge pull request #477 from mtrmac/305-cleanup
Vendor mtrmac/image:305-cleanup
2018-03-15 16:17:46 +01:00
Miloslav Trmač
f63f78225d Update for types.Image.Inspect output change
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2018-03-15 15:26:00 +01:00
Miloslav Trmač
60aa4aa82d Vendor after merging mtrmac/image:305-cleanup
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2018-03-15 15:25:31 +01:00
Miloslav Trmač
37264e21fb Merge pull request #483 from lsm5/contrib-storage
add storage.conf and manpage in contrib/
2018-03-12 19:07:12 +01:00
Lokesh Mandvekar
fe2591054c add storage.conf and manpage in contrib/
These files are used by deb and rpm packages, so I'd rather have them
upstream than maintain in 2 separate places.

Signed-off-by: Lokesh Mandvekar <lsm5@fedoraproject.org>
2018-03-12 13:28:43 -04:00
Miloslav Trmač
fd0c3d7f08 Merge pull request #482 from umohnani8/gzip
Vendor in latest containers/image
2018-03-09 04:08:37 +01:00
umohnani8
b325cc22b8 Vendor in latest containers/image
Adds support to handle compressed docker-archive files

Signed-off-by: umohnani8 <umohnani@redhat.com>
2018-03-08 15:42:28 -05:00
Miloslav Trmač
5f754820da Merge pull request #479 from umohnani8/dir
Fix skopeo tests with changes to dir transport
2018-02-22 17:08:40 +01:00
umohnani8
43acc747d5 Fix skopeo tests with changes to dir transport
The dir transport has been changed to save blobs without the .tar extension.
This fixes the skopeo tests that were failing due to that change.

Signed-off-by: umohnani8 <umohnani@redhat.com>
2018-02-22 10:50:22 -05:00
Daniel J Walsh
b3dec98757 Merge pull request #476 from jonboulle/fixbuild
Dockerfile: bump to ubuntu 17.10
2018-02-12 14:36:15 -05:00
Jonathan Boulle
b1795a08fb Dockerfile: bump to ubuntu 17.10
17.04 is EOLed and no longer works.

Signed-off-by: Jonathan Boulle <jonathanboulle@gmail.com>
2018-02-12 19:58:11 +01:00
Antonio Murdaca
1307cac0c2 Merge pull request #468 from mtrmac/oci-schema-rebase
Re-vendor, notably opencontainers/image-spec to fix tests
2018-02-09 20:16:42 +01:00
Miloslav Trmač
dc1567c8bc Re-vendor, and use mtrmac/image-spec:id-based-loader to fix tests
Anyone running (vndr) currently ends up with failing tests in OCI schema
validation because gojsonschema has fixed its "$ref" interpretation, exposing
inconsistent URI usage inside image-spec/schema.

So, this runs (vndr), and uses mtrmac/image-spec:id-based-loader
( https://github.com/opencontainers/image-spec/pull/739 ) to make the tests pass
again.  As soon as that PR is merged we should revert to using the upstream
image-spec repo again.
2018-02-09 18:34:31 +01:00
Antonio Murdaca
22c524b0e0 Merge pull request #474 from runcom/bump-0.1.28
Bump 0.1.28
2018-01-31 16:23:15 +01:00
Antonio Murdaca
9a225c3968 version: bump to v0.1.29-dev
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2018-01-31 16:01:51 +01:00
Antonio Murdaca
0270e5694c version: bump to v0.1.28
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2018-01-31 16:01:27 +01:00
Miloslav Trmač
4ff902dab9 Merge pull request #470 from giuseppe/revendor-containers-image-2
vendor: bump containers/image and containers/storage
2018-01-22 16:38:19 +01:00
Giuseppe Scrivano
64b3bd28e3 vendor: bump containers/image and containers/storage
Update containers/image and containers/storage to the current master
revisions.

Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
2018-01-17 15:47:07 +01:00
Miloslav Trmač
d8e506c648 Merge pull request #372 from nalind/storage-update
Bump containers/storage and containers/image
2018-01-04 16:39:23 +01:00
Nalin Dahyabhai
aa6c809e5a Bump containers/storage and containers/image
Update containers/image and containers/storage to the current master
revisions.

Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>
2017-12-15 13:36:23 -05:00
Miloslav Trmač
1c27d6918f Merge pull request #466 from nalind/update-storage
Bump containers/storage and containers/image
2017-12-14 12:21:08 +01:00
Nalin Dahyabhai
9f2491694d Bump containers/storage and containers/image
Re-vendor containers/storage to current revision
0d32dfce498e06c132c60dac945081bf44c22464, and containers/image to
current revision c8bcd6aa11c62637c5a7da1420f43dd6a15f0e8d.

Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>
2017-12-13 11:03:37 -05:00
Miloslav Trmač
14245f2e24 Merge pull request #461 from jonjohnsonjr/patch-1
Update README.md
2017-11-30 16:36:54 +01:00
jonjohnsonjr
8a1d480274 Update README.md
Fix OCI image spec link.
2017-11-29 14:08:38 -08:00
Miloslav Trmač
78b29a5c2f Merge pull request #460 from giuseppe/revendor-containers-image
vendor: revendor containers/image
2017-11-25 13:30:45 +01:00
Giuseppe Scrivano
20d31daec0 vendor: revendor containers/image
Include last changes in the ostree driver.

Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
2017-11-24 22:23:47 +01:00
Antonio Murdaca
5a8f212630 Merge pull request #458 from runcom/bump-v0.1.27
Bump v0.1.27
2017-11-22 02:27:18 +01:00
Antonio Murdaca
34e77f9897 version: bump to v0.1.28-dev
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2017-11-22 01:53:39 +01:00
Antonio Murdaca
93876acc5e version: bump to v0.1.27
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2017-11-22 01:53:19 +01:00
Daniel J Walsh
031283efb1 Merge pull request #456 from rhatdan/master
Cleanup skopeo man page and README.md
2017-11-21 13:07:04 -05:00
Daniel J Walsh
23c54feddd Cleanup skopeo man page and README.md
Fix spelling mistakes
Fix reference to garbage collection on the container registry server.

Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
2017-11-21 11:38:48 -05:00
Miloslav Trmač
04e04edbfe Merge pull request #453 from umohnani8/creds
Use credentials from authfile for skopeo commands
2017-11-21 17:22:44 +01:00
Urvashi Mohnani
cbedcd967e Use credentials from authfile for skopeo commands
skopeo copy, delete, and inspect can now use credentials stored in the auth file
by the kpod login command
e.g. kpod login docker.io -> skopeo copy dir:mydir docker://username/image

Signed-off-by: Urvashi Mohnani <umohnani@redhat.com>
2017-11-21 10:33:59 -05:00
Miloslav Trmač
fa08bd7e91 Merge pull request #454 from nalind/update-storage
Update to a newer containers/storage master
2017-11-21 03:43:46 +01:00
Nalin Dahyabhai
874d119dd9 Update to a newer containers/storage master
Bump containers/storage to master=138cddaf9d6b3910b18de44a017417f60bff4e66

Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>
2017-11-16 18:38:09 -05:00
Miloslav Trmač
eb43d93b57 Merge pull request #420 from mtrmac/manifest-lists
Support copying a single image from manifest lists
2017-11-16 17:05:08 +01:00
Miloslav Trmač
c1a0084bb3 Replace TestCopyFailWithManifestList by a test which expects success 2017-11-16 16:28:03 +01:00
Miloslav Trmač
e8fb01e1ed Add global --override-arch and --override-os options
This e.g. allows accessing Linux images on macOS.
2017-11-16 16:28:03 +01:00
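[Editor's note: for example, inspecting a Linux image from macOS — a sketch using the new global flags named in this commit.]

    skopeo --override-os linux --override-arch amd64 \
        inspect docker://docker.io/library/alpine:latest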
Miloslav Trmač
0543f551c7 Update for changed types.Image/types.ImageCloser 2017-11-16 16:28:03 +01:00
Miloslav Trmač
27f320b27f Vendor after merging mtrmac/image:manifest-lists 2017-11-16 16:27:52 +01:00
Antonio Murdaca
c0dffd9b3e Merge pull request #452 from runcom/bump-v0.1.26
[DO NOT MERGE] Bump v0.1.26
2017-11-15 15:46:41 +01:00
Antonio Murdaca
66a97d038e version: bump to v0.1.27-dev
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2017-11-15 14:49:00 +01:00
Antonio Murdaca
2e8377a708 version: bump to v0.1.26
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2017-11-15 14:48:39 +01:00
Daniel J Walsh
a76cfb7dc7 Merge pull request #450 from umohnani8/dir_transport
Add manifest type conversion to skopeo copy
2017-11-15 08:32:05 -05:00
Urvashi Mohnani
409dce8a89 Add manifest type conversion to skopeo with dir transport
Users can select from three manifest types: oci, v2s1, or v2s2;
skopeo copy defaults to the oci manifest if the --format flag is not set.
Also adds an option to compress blobs when saving to a directory via the dir transport,
e.g. skopeo copy --format v2s1 --compress-blobs docker-archive:alp.tar dir:my-directory

Signed-off-by: Urvashi Mohnani <umohnani@redhat.com>
2017-11-14 14:45:32 -05:00
Urvashi Mohnani
5b14746045 Vendor in changes from containers/image
Adds manifest type conversion to dir transport

Signed-off-by: Urvashi Mohnani <umohnani@redhat.com>
2017-11-14 14:45:32 -05:00
Antonio Murdaca
a3d2e8323a Merge pull request #409 from runcom/add-logo
[do not merge yet] add logo :)
2017-11-14 20:10:20 +01:00
Antonio Murdaca
2be4deb980 README.md: add logo
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2017-11-14 20:09:19 +01:00
Antonio Murdaca
5f71547262 Merge pull request #451 from runcom/cve-error-log
Fix CVE in tar-split
2017-11-08 16:26:21 +01:00
Antonio Murdaca
6c791a0559 bump back to v0.1.26-dev
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2017-11-08 15:59:21 +01:00
Antonio Murdaca
7fd6f66b7f bump to v0.1.25
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2017-11-08 15:59:01 +01:00
Antonio Murdaca
a1b48be22e Fix CVE in tar-split
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2017-11-08 15:58:43 +01:00
Antonio Murdaca
bb5584bc4c Merge pull request #442 from mtrmac/fix-vendor
Revert mis-merged reverts of vendor.conf
2017-11-07 20:40:03 +01:00
Miloslav Trmač
3e57660394 Revert mis-merged reverts of vendor.conf
PR #440 reverted the vendor.conf edits of #426.  This passed CI
because the corresponding vendor/* subpackages were not modified.

Restore the vendor.conf changes, and re-run full (vndr) to ensure
the two are consistent again.

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2017-11-07 19:34:26 +01:00
Miloslav Trmač
803619cabf Merge pull request #447 from marcosps/issue_446
README.md: Fix example of skopeo copy command
2017-11-07 17:53:25 +01:00
Marcos Paulo de Souza
8c1a69d1f6 README.md: Fix example of skopeo copy command
In README.md, there is an example of skopeo copy command to download an
image in OCI format, but the current code returns an error:

skopeo copy docker://busybox:latest oci:busybox_ocilayout
FATA[0000] Error initializing destination oci:tmp:: cannot save image with empty image.ref.name

If we add a tag after the oci directory, the problem is gone:
skopeo copy docker://busybox:latest oci:busybox_ocilayout:latest

Fixes: #446

Signed-off-by: Marcos Paulo de Souza <marcos.souza.org@gmail.com>
2017-11-06 22:54:11 -02:00
Miloslav Trmač
24423ce4a7 Merge pull request #443 from nstack/shared-blobs
copy: add shared blob directory support for OCI sources/destinations
2017-11-06 18:46:34 +01:00
Jonathan Boulle
76b071cf74 cmd/copy: add {src,dst}-shared-blob-dir flags
Only works for OCI layout sources and destinations.
2017-11-06 17:00:39 +01:00
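[Editor's note: a usage sketch; the paths are illustrative, and the destination flag is assumed to be spelled --dest-shared-blob-dir per the {src,dst} pair named in the title.]

    # deduplicate blobs across OCI layouts by storing them in one shared directory
    skopeo copy --dest-shared-blob-dir /var/lib/oci-blobs \
        docker://docker.io/library/alpine:latest oci:/srv/layouts/alpine:latest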
Jonathan Boulle
407a7d9e70 vendor: bump containers/image to master
To pick up containers/image#369

Signed-off-by: Jonathan Boulle <jonathanboulle@gmail.com>
2017-11-06 17:00:39 +01:00
Antonio Murdaca
0d0055df05 Merge pull request #444 from mtrmac/contributing-commits
Modify CONTRIBUTING.md to prefer smaller commits over squashing them
2017-11-06 16:31:37 +01:00
Miloslav Trmač
63b3be2f13 Modify CONTRIBUTING.md to prefer smaller commits over squashing them
See the updated text for the rationale.

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2017-11-06 16:21:50 +01:00
Antonio Murdaca
62c68998d7 Merge pull request #440 from hferentschik/container-image-issue-327
[DO NOT MERGE] - Aligning Docker version between containers/image and skopeo
2017-11-06 11:58:04 +01:00
Hardy Ferentschik
4125d741cf Aligning Docker version between containers/image and skopeo
Signed-off-by: Hardy Ferentschik <hardy@hibernate.org>
2017-11-06 11:45:03 +01:00
Antonio Murdaca
bd07ffb9f4 Merge pull request #426 from mtrmac/single-logrus
Update image-tools, and remove the duplicate Sirupsen/logrus vendor
2017-10-31 16:23:48 +01:00
Miloslav Trmač
700199c944 Update image-tools, and remove the duplicate Sirupsen/logrus vendor 2017-10-30 17:24:44 +01:00
Antonio Murdaca
40a5f48632 Merge pull request #441 from mtrmac/smaller-containers
Create smaller testing containers
2017-10-28 09:05:30 +02:00
Miloslav Trmač
83ca466071 Remove the openshift/origin checkout from /tmp after building it 2017-10-28 02:19:29 +02:00
Miloslav Trmač
a7e8a9b4d4 Run (dnf clean all) after finishing the installation
... to drop all caches which will never be needed again.
2017-10-28 02:17:15 +02:00
Miloslav Trmač
3f10c1726d Do not use a separate yum command to install OpenShift dependencies
We don't really need to pay the depsolving overhead twice.
2017-10-28 02:16:25 +02:00
Miloslav Trmač
832eaa1f67 Use ordinary shell variables instead of ENV for REGISTRY_COMMIT*
They are only used in the immediately following shell snippet,
no need to pollute the container environment with them, nor to add
two extra layers.
2017-10-28 02:14:44 +02:00
Antonio Murdaca
e2b2d25f24 Merge pull request #439 from TomSweeneyRedHat/dev/tsweeney/dockfix/4
A few wording touchups to CONTRIBUTING.md
2017-10-25 17:35:04 +02:00
TomSweeneyRedHat
3fa370fa2e A few wording touchups to CONTRIBUTING.md
Signed-off-by: TomSweeneyRedHat <tsweeney@redhat.com>
2017-10-25 10:30:26 -04:00
Antonio Murdaca
88cff614ed Merge pull request #408 from cyphar/use-buildmode-pie
makefile: use -buildmode=pie
2017-10-22 09:05:53 +02:00
Aleksa Sarai
b23cac9c05 makefile: use -buildmode=pie
The security benefits of PIC binaries are quite well known (since they
work with ASLR), and there is effectively no downside. In addition,
we've been seeing some weird linker errors on ppc64le that are resolved
by using -buildmode=pie.

Signed-off-by: Aleksa Sarai <asarai@suse.de>
2017-10-22 10:13:20 +11:00
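[Editor's note: in effect the Makefile's build line gains the flag, roughly as in this sketch; real build tags and ldflags are omitted.]

    go build -buildmode=pie -o skopeo ./cmd/skopeo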
Antonio Murdaca
c928962ea8 Merge pull request #438 from runcom/bump-v0.1.24
Bump v0.1.24
2017-10-21 19:44:26 +02:00
Antonio Murdaca
ee011b1bf9 bump to v0.1.25-dev
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2017-10-21 19:27:29 +02:00
Antonio Murdaca
dd2c3e3a8e bump to v0.1.24
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2017-10-21 19:27:07 +02:00
Antonio Murdaca
f74e3fbb0f Merge pull request #437 from runcom/fix-config-nil
fix inspect with nil image config
2017-10-21 18:18:51 +02:00
Antonio Murdaca
e3f7733de1 fix inspect with nil image config
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2017-10-21 18:05:19 +02:00
Antonio Murdaca
5436111796 Merge pull request #435 from mtrmac/brew-failure
Work around broken brew in Travis
2017-10-20 15:24:26 +02:00
Miloslav Trmač
b8a502ae87 Work around broken brew in Travis
Per https://github.com/Homebrew/brew/issues/3299 , (brew update)
is needed to avoid a
> ==> Downloading https://homebrew.bintray.com/bottles-portable/portable-ruby-2.3.3.leopard_64.bottle.1.tar.gz
> ######################################################################## 100.0%
> ==> Pouring portable-ruby-2.3.3.leopard_64.bottle.1.tar.gz
...
> /usr/local/Homebrew/Library/Homebrew/brew.rb:12:in `<main>': Homebrew must be run under Ruby 2.3! You're running 2.0.0. (RuntimeError)

Ideally Travis should bake the (brew update) into its images
(https://github.com/travis-ci/travis-ci/issues/8552 ), but that’s only going
to happen around November 2017 per https://blog.travis-ci.com/2017-10-16-a-new-default-os-x-image-is-coming .

Until then, we have to do that ourselves.
2017-10-19 18:04:23 +02:00
Jhon Honce
28d4e08a4b Merge pull request #429 from giuseppe/vendor-containers-image
containers/image: vendor
2017-10-06 10:24:34 -07:00
Giuseppe Scrivano
ef464797c1 containers/image: vendor
Vendor in latest containers/image

Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
2017-10-06 18:16:39 +02:00
Antonio Murdaca
2966f794fc Merge pull request #406 from mtrmac/macOS
Simplify macOS builds
2017-10-03 13:01:44 +02:00
Miloslav Trmač
37e34aaff2 Automatically use the right CFLAGS and LDFLAGS for gpgme
On macOS, (brew install gpgme) installs it within /usr/local, but
/usr/local/include is not in the default search path.

Rather than hard-code this directory, use gpgme-config. Sadly that
must be done at the top-level user instead of locally in the gpgme
subpackage, because cgo supports only pkg-config, not general shell
scripts, and gpgme does not install a pkg-config file.

If gpgme is not installed or gpgme-config can’t be found for other reasons,
the error is silently ignored (and the user will probably find out because
the cgo compilation will fail); this is so that users can use the
containers_image_openpgp build tag without seeing ugly errors
(and without the Makefile having to detect that build tag in even more
shell scripts).
2017-10-02 21:58:23 +02:00
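[Editor's note: conceptually the Makefile does something like the following sketch; error output is discarded so a missing gpgme-config stays silent, as the commit describes.]

    export CGO_CFLAGS="$(gpgme-config --cflags 2>/dev/null)"
    export CGO_LDFLAGS="$(gpgme-config --libs 2>/dev/null)"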
Miloslav Trmač
aa6df53779 Only use the cgo workaround if using gpgme
Otherwise we would try to link with gpgme only for that unnecessary
workaround.
2017-10-02 20:49:49 +02:00
Miloslav Trmač
e3170801c5 Merge pull request #427 from rhatdan/vendor
Vendor in latest containers storage
2017-10-02 17:46:40 +02:00
Daniel J Walsh
4e9ef94365 Vendor in latest containers storage
We want to get support into skopeo for handling
override_kernel_checks so that we can use overlay
backend on RHEL.

Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
2017-09-30 10:40:45 +00:00
Miloslav Trmač
9f54acd6bd Merge pull request #424 from lsm5/Makefile-go-var
use Makefile var for go compiler
2017-09-26 17:54:37 +02:00
Lokesh Mandvekar
e735faac75 use Makefile var for go compiler
This will allow compilation with a custom go binary, for example
/usr/lib/go-1.8/bin/go instead of /usr/bin/go on Ubuntu 16.04, which
still ships Go 1.6.

Signed-off-by: Lokesh Mandvekar <lsm5@fedoraproject.org>
2017-09-25 17:23:00 -04:00
Miloslav Trmač
3c67427272 Merge pull request #422 from umohnani8/auth
fixing error checking due to update in make lint
2017-09-21 16:11:37 +02:00
umohnani8
a1865e9d8b fixing error checking due to update in make lint
make lint complains about cases where the returned error is checked
for err != nil and then returned anyway.

Signed-off-by: umohnani8 <umohnani@redhat.com>
2017-09-20 14:45:45 -04:00
Miloslav Trmač
875dd2e7a9 Merge pull request #417 from mtrmac/manifest-list-hofix
Vendor in mtrmac/image:manifest-list-hotfix
2017-09-14 17:41:55 +02:00
Miloslav Trmač
75dc703d6a Mark TestCopyFailsWithManifestList with ExpectFailure
This is one of the trade-offs we made.
2017-09-13 19:54:51 +02:00
Miloslav Trmač
fd6324f800 Vendor after merging mtrmac/image:manifest-list-hotfix 2017-09-13 18:47:43 +02:00
Miloslav Trmač
a41cd0a0ab Merge pull request #413 from TomSweeneyRedHat/dev/tsweeney/docfix/3
Spruce up the README.md a bit
2017-09-11 19:55:04 +02:00
TomSweeneyRedHat
b548b5f96f Spruce up the README.md a bit
Signed-off-by: TomSweeneyRedHat <tsweeney@redhat.com>
2017-09-11 12:54:04 -04:00
Miloslav Trmač
2bfbb4cbf2 Merge pull request #412 from thomasmckay/patch-1
update installing build dependencies
2017-09-11 18:17:04 +02:00
thomasmckay
b2297592f3 removed builddep for adding ostree-devel 2017-09-11 08:51:08 -04:00
thomasmckay
fe6073e87e update installing build dependencies 2017-09-08 09:37:32 -04:00
Antonio Murdaca
09557f308c Merge pull request #411 from TomSweeneyRedHat/dev/tsweeney/docfix2
Touch up a few typos
2017-09-08 01:07:48 +02:00
TomSweeneyRedHat
10c2053967 Touch up a few typos
Signed-off-by: TomSweeneyRedHat <tsweeney@redhat.com>
2017-09-07 14:19:08 -04:00
Miloslav Trmač
55e7b079f1 Merge pull request #403 from owtaylor/requested-manifest-mime-types
Update for removal of requestedMIMETypes from ImageReference.NewImageSource()
2017-09-07 18:04:32 +02:00
Owen W. Taylor
035fc3a817 Update for removal of requestedMIMETypes from containers/image/types.ImageReference.NewImageSource()
Signed-off-by: Owen W. Taylor <otaylor@fishsoup.net>
2017-09-07 10:28:37 -04:00
Miloslav Trmač
5faf7d8001 Merge pull request #407 from dlorenc/helper
Add docker-credential-helpers dependency.
2017-09-04 19:54:15 +02:00
dlorenc
2ada6b20a2 Add docker-credential-helpers dependency. 2017-09-04 10:00:08 -07:00
Miloslav Trmač
16181f1cfb Merge pull request #404 from mtrmac/shallow-clone
Use (git clone --depth=1) to speed up testing
2017-09-02 00:32:57 +02:00
Miloslav Trmač
8c07dec7a9 Use (git clone --depth=1) to speed up testing
This reduces the time used to clone openshift/origin on Travis from
> real	2m34.227s
> user	4m18.844s
> sys	0m8.144s
to
> real	0m8.816s
> user	0m2.640s
> sys	0m0.856s
, and the download size from 782.78 MiB to 70.05 MiB.

We can't trivially do this for docker/distribution because it is using
(git checkout $commit) on the cloned repo; we could do a clone+fetch+fetch
with --depth=1, but the full clone takes less than two seconds, so let's
keep that one simple.
2017-09-01 21:05:13 +02:00
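[Editor's note: the two cases side by side, as a sketch; $REGISTRY_COMMIT is a placeholder taken from the Dockerfile below.]

    # a branch or tag is enough for openshift/origin, so a shallow clone works:
    git clone --depth=1 -b v1.5.0-alpha.3 git://github.com/openshift/origin origin
    # docker/distribution checks out a specific commit, which needs history:
    git clone https://github.com/docker/distribution.git distribution
    (cd distribution && git checkout -q "$REGISTRY_COMMIT")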
Miloslav Trmač
a9e8c588e9 Merge pull request #399 from umohnani8/oci_name
Modify skopeo tests
2017-08-31 19:03:37 +02:00
umohnani8
bf6812ea86 [DO NOT MERGE] Modify skopeo tests
The oci name changes in containers/image caused the skopeo tests to fail

Signed-off-by: umohnani8 <umohnani@redhat.com>
2017-08-31 12:02:57 -04:00
Antonio Murdaca
5f6b0a00e2 Merge pull request #397 from cyphar/renable-verification-oci
integration: re-enable image-tools upstream validation
2017-08-17 04:58:20 +02:00
Aleksa Sarai
d55a17ee43 integration: re-enable image-tools upstream validation
This effectively reverts f4a44f00b8 ("integration: disable check with
image-tools for image-spec RC5"), which disabled the compliance
validation due to upstream bugs. Since those bugs have been fixed,
re-enable the tests (to make the smoke tests far more effective).

Fixes: f4a44f00b8 ("integration: disable check with image-tools for image-spec RC5")
Signed-off-by: Aleksa Sarai <asarai@suse.de>
2017-08-15 02:08:34 +10:00
Aleksa Sarai
96ce8b63bc vendor: revendor github.com/opencontainers/image-tools@da84dc9dddc823a32f543e60323f841d12429c51
This requires re-vendoring a bunch of other things (as well as the old
Sirupsen/logrus path), the relevant commits being:

* github.com/xeipuuv/gojsonschema@0c8571ac0ce161a5feb57375a9cdf148c98c0f70
* github.com/xeipuuv/gojsonpointer@6fe8760cad3569743d51ddbb243b26f8456742dc
* github.com/xeipuuv/gojsonreference@e02fc20de94c78484cd5ffb007f8af96be030a45
* go4.org@034d17a462f7b2dcd1a4a73553ec5357ff6e6c6e

Signed-off-by: Aleksa Sarai <asarai@suse.de>
2017-08-15 02:08:12 +10:00
Antonio Murdaca
1d1cc1ff5b Merge pull request #388 from mrunalp/update_deps
[DO NOT MERGE] Update dependencies to change to logrus 1.0.0
2017-08-04 20:09:16 +02:00
Mrunal Patel
6f3ed0ecd9 Update dependencies to change to logrus 1.0.0
Signed-off-by: Mrunal Patel <mrunalp@gmail.com>
2017-08-04 10:22:59 -07:00
Antonio Murdaca
7e90bc082a Merge pull request #392 from mtrmac/test-test-failures
Fix travis failures on macOS
2017-08-04 11:58:37 +02:00
Miloslav Trmač
d11d173db1 Install golint on macOS
On Linux this is done in the Dockerfile, but macOS tests do not use containers.
2017-08-04 00:13:55 +02:00
Miloslav Trmač
72e662bcb7 Merge pull request #387 from errm/errm/osx-install
Fix `make install` on OSX
2017-07-27 18:00:26 +02:00
Ed Robinson
a934622220 Install go-md2man on travis osx 2017-07-27 09:47:34 +01:00
Ed Robinson
c448bc0a29 Check make install on osx travis build 2017-07-26 20:14:49 +01:00
Ed Robinson
b0b85dc32f Fix make install on OSX
Fixes #383

Signed-off-by: Ed Robinson <ed.robinson@reevoo.com>
2017-07-26 20:11:03 +01:00
Miloslav Trmač
75811bd4b1 Merge pull request #382 from errm/errm/automate-osx-build
Adds automated build for OSX
2017-07-26 17:08:02 +02:00
Ed Robinson
bb84c696e2 Stub out libostree support when building on OSX
Signed-off-by: Ed Robinson <ed.robinson@reevoo.com>
2017-07-26 14:50:11 +01:00
Ed Robinson
4d5e442c25 Adds automated build for OSX re #380
Signed-off-by: Ed Robinson <ed.robinson@reevoo.com>
2017-07-25 15:54:46 +01:00
Antonio Murdaca
91606d49f2 Merge pull request #379 from runcom/bump-oci-v1
Bump v0.1.23 for OCIv1 support and bug fixes
2017-07-20 19:53:18 +02:00
Antonio Murdaca
85bbb497d3 bump back to v0.1.24-dev
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2017-07-20 19:36:46 +02:00
Antonio Murdaca
1bbd87f435 bump to v0.1.23
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2017-07-20 19:36:46 +02:00
Antonio Murdaca
bf149c6426 Merge pull request #378 from mtrmac/runtime-spec-v1.0.0
Update to image-spec v1.0.0 and revendor
2017-07-20 19:33:24 +02:00
Miloslav Trmač
ca03debe59 Update to image-spec v1.0.0 and revendor 2017-07-20 18:04:00 +02:00
Antonio Murdaca
2d168e3723 Merge pull request #377 from mtrmac/image-spec-1.0.0
Update to image-spec v1.0.0 and revendor
2017-07-20 14:36:33 +02:00
Miloslav Trmač
2c1ede8449 Update to image-spec v1.0.0 and revendor 2017-07-19 23:50:50 +02:00
Miloslav Trmač
b2a06ed720 Merge pull request #376 from runcom/fix-375
vendor c/image: fix auth handlers
2017-07-18 18:58:36 +02:00
Antonio Murdaca
2874584be4 vendor c/image: fix auth handlers
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2017-07-18 17:14:35 +02:00
Antonio Murdaca
91e801b451 Merge pull request #371 from nalind/vendor-storage
Bump and pin containers/storage and containers/image
2017-06-28 17:30:14 +02:00
Nalin Dahyabhai
b0648d79d4 Bump containers/storage and containers/image
Update containers/storage and containers/image to the
current-as-of-this-writing versions,
105f7c77aef0c797429e41552743bf5b03b63263 and
23bddaa64cc6bf3f3077cda0dbf1cdd7007434df respectively.

Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>
2017-06-28 11:05:26 -04:00
Miloslav Trmač
437d608772 Merge pull request #370 from 0x0916/2017-06-22/dockerfile
Dockerfile.build: using ubuntu 17.04
2017-06-22 13:03:44 +02:00
0x0916
d57934d529 Dockerfile.build: using ubuntu 17.04
Ubuntu 16.04 does not package `libostree-dev`; also, we should install
the `libglib2.0-dev` package when building skopeo with `make binary`.

Signed-off-by: 0x0916 <w@laoqinren.net>
2017-06-22 17:17:24 +08:00
Antonio Murdaca
4470b88c50 Merge pull request #368 from runcom/cut-v0.1.22
Cut v0.1.22
2017-06-21 10:39:49 +02:00
Antonio Murdaca
03595a83d0 bump back to v0.1.23
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2017-06-21 10:22:58 +02:00
Antonio Murdaca
5d24b67f5e bump to v0.1.22
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2017-06-21 10:22:43 +02:00
Antonio Murdaca
3b9ee4f322 Merge pull request #366 from rhatdan/transports
Give more useful help when explaining usage
2017-06-20 17:02:22 +02:00
Daniel J Walsh
0ca26cce94 Give more useful help when explaining usage
Also specify container-storage as a valid transport

Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
2017-06-20 14:28:02 +00:00
Antonio Murdaca
29528d00ec Merge pull request #367 from runcom/vendor-c/image-list-names
vendor c/image for ListNames in transports pkg
2017-06-17 00:26:40 +02:00
Antonio Murdaca
af34f50b8c bump ostree-go
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2017-06-17 00:08:54 +02:00
Antonio Murdaca
e7b32b1e6a vendor c/image for ListNames in transports pkg
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2017-06-16 23:39:06 +02:00
Antonio Murdaca
4049bf2801 Merge pull request #365 from rhatdan/docker
Remove docker references wherever possible
2017-06-16 19:15:33 +02:00
Daniel J Walsh
ad33537769 Remove docker references wherever possible
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
2017-06-16 10:51:42 -04:00
Antonio Murdaca
d5e34c1b5e Merge pull request #362 from rhatdan/master
Vendor in ostree fixes
2017-06-16 11:58:52 +02:00
Dan Walsh
5e586f3781 Vendor in ostree fixes
This will fix the compiler issues.

Signed-off-by: Dan Walsh <dwalsh@redhat.com>
2017-06-16 05:42:47 -04:00
Antonio Murdaca
455177e749 Merge pull request #358 from runcom/bump-release-0.1.21
Bump release 0.1.21
2017-06-15 17:16:46 +02:00
Antonio Murdaca
b85b7319aa bump back to v0.1.22
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2017-06-15 16:58:43 +02:00
Antonio Murdaca
0b73154601 bump to v0.1.21
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2017-06-15 16:58:24 +02:00
Miloslav Trmač
da48399f89 Merge pull request #357 from runcom/up-cstorage
*: update c/storage
2017-06-15 15:12:33 +02:00
Antonio Murdaca
08504d913c *: update c/storage
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2017-06-15 14:29:56 +02:00
Miloslav Trmač
fe67466701 Merge pull request #351 from giuseppe/add-ostree-dep
vendor.conf: add ostree-go
2017-06-14 17:18:51 +02:00
Giuseppe Scrivano
47e5d0cd9e vendor.conf: add ostree-go
it is used by containers/image for pulling images to the OSTree storage.

Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
2017-06-14 15:35:34 +02:00
Antonio Murdaca
f0d830a8ca Merge pull request #355 from runcom/fail-on-diff-os
fail early when image os doesn't match host os
2017-06-06 13:55:26 +02:00
Antonio Murdaca
6d3f523c57 fail early when image os doesn't match host os
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2017-06-06 13:40:16 +02:00
Miloslav Trmač
87a36f9d5b Merge pull request #343 from mtrmac/readme-updates
README.md updates
2017-06-05 17:50:15 +02:00
Miloslav Trmač
9d12d72fb7 Improve documentation on what to do with containers/image failures in test-skopeo 2017-06-05 15:16:43 +02:00
Miloslav Trmač
fc88117065 Drop finished work from TODO
- We now have the docker-archive: transport
- Integration tests with built registries also exist
2017-06-05 15:16:43 +02:00
Miloslav Trmač
0debda0bbb Start all command examples with $
That has mostly been the case, with one outlier.  Fix it.
2017-06-05 15:16:43 +02:00
Miloslav Trmač
49f10736d1 Restructure the “Building without a container” version
Consolidate the Fedora and macOS instructions to prevent duplication,
and to suggest using $GOPATH for both.

Start with installing dependencies.
2017-06-05 15:15:04 +02:00
Miloslav Trmač
3d0d2ea6bb Document that there is a choice in using containers, and what the trade-off is 2017-06-05 15:15:02 +02:00
Miloslav Trmač
a2499d3451 Create “Building {without a container,in a container}” subsections
To make it clearer that the two are alternatives.

Document that a docker command is needed for the in-container build.

Also move the “checkout in $GOPATH” warning into the “without a
container” section, where it belongs.
2017-06-05 15:13:25 +02:00
Miloslav Trmač
7db0aab330 Move documentation build instructions to the end, with a separate header
We want to start with the Go 1.5 dependency and build/checkout
instructions.

Also create a separate subsection, to match the future “Building
in/without a container” subsections
2017-06-05 14:42:19 +02:00
Miloslav Trmač
150eb5bf18 Add Fedora instructions for installing go-md2man
It would be nice to have macOS instructions as well.
2017-06-05 14:40:58 +02:00
Miloslav Trmač
bcc0de69d4 Merge pull request #353 from surajssd/add-fedora-dependency
docs(README): add build dependencies for fedora
2017-06-05 14:39:58 +02:00
Suraj Deshmukh
cab89b9b9c docs(README): add build dependencies for fedora
Two more packages, btrfs-progs-devel and device-mapper-devel, are
needed to build skopeo locally on Fedora, so add them to the README.

Signed-off-by: Suraj Deshmukh <surajssd009005@gmail.com>
2017-06-04 16:03:23 +05:30
Miloslav Trmač
5b95a21401 Merge pull request #350 from mhrivnak/patch-1
Fixes a typo in the Name field
2017-05-31 20:49:28 +02:00
Michael Hrivnak
78c83dbcff Fixes a typo in the Name field 2017-05-31 14:22:22 -04:00
Miloslav Trmač
98ced5196c Merge pull request #347 from mtrmac/docker-certs.d
Support /etc/docker/certs.d
2017-05-30 19:40:36 +02:00
Miloslav Trmač
63272a10d7 Vendor after merging mtrmac/image:docker-certs.d 2017-05-30 18:26:43 +02:00
Antonio Murdaca
07c798ff82 Merge pull request #346 from jingqiuELE/master
Always combine RUN apt-get update with apt-get install in the same RUN statement.
2017-05-25 14:54:31 +02:00
Jing Qiu
750d72873d Always combine RUN apt-get update with apt-get install in the same RUN
statement.

From the [docs](https://docs.docker.com/engine/userguide/eng-image/dockerfile_best-practices/#build-cache) in March 2017:

Always combine RUN apt-get update with apt-get install in the same  RUN statement, for example

RUN apt-get update && apt-get install -y package-bar

Using apt-get update alone in a RUN statement causes caching issues and subsequent apt-get install instructions fail.

Signed-off-by: Jing Qiu <aqiu0720@gmail.com>
2017-05-25 11:38:35 +08:00
Antonio Murdaca
81dddac7d6 Merge pull request #345 from runcom/image-spec-rc6
update image-spec to v1.0.0-rc6
2017-05-24 16:55:18 +02:00
Antonio Murdaca
405b912f7e update image-spec to v1.0.0-rc6
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2017-05-24 16:35:57 +02:00
Miloslav Trmač
e1884aa8a8 Merge pull request #344 from mtrmac/storage-rebase
Pull in rhatdan/image:master
2017-05-17 22:20:50 +02:00
Miloslav Trmač
ffb01385dd Vendor after merging https://github.com/containers/image/pull/275 2017-05-17 17:12:23 +02:00
Miloslav Trmač
c5adc4b580 Merge pull request #331 from mtrmac/storage-rebase
Re-vendor, primarily for https://github.com/containers/storage/pull/11
2017-05-12 18:50:40 +02:00
Miloslav Trmač
69b9106646 Re-vendor, primarily for https://github.com/containers/storage/pull/11
containers/storage got new dependencies, so we will need to re-vendor
eventually anyway, and having this separate from other major work is
cleaner.

But the primary goal of this commit is to see whether it makes skopeo
buildable on OS X.
2017-05-11 13:07:14 +02:00
Antonio Murdaca
565688a963 Merge pull request #341 from projectatomic/jzb-patch-1
Update README.md
2017-05-10 21:21:00 +02:00
Joe Brockmeier
5205f3646d Update README.md
Just clarifying that Skopeo is available in later versions of Fedora as well.
2017-05-10 13:53:36 -04:00
Miloslav Trmač
3c57a0f084 Merge pull request #340 from mtrmac/setup-test
Simplify infrastructure setup, and make registry timeouts less strict
2017-05-10 15:05:52 +02:00
Miloslav Trmač
ed2088a4e5 Increase the time we wait for a registry from 0.5 to 5 seconds
We are not testing registry start-up performance, and killing the test
suite just because Travis is a bit busy doesn’t help; we’re much better
off with a test run which gives the registry a bit more time.
2017-05-10 14:46:28 +02:00
Miloslav Trmač
8b36001c0e Do not build docker/registry and remove remaining helpers 2017-05-10 14:46:28 +02:00
Miloslav Trmač
cd300805d1 Remove registry instances which we don’t use at all 2017-05-10 14:46:12 +02:00
Miloslav Trmač
8d3d0404fe Clean up SigningSuite test initialization
Move "skip if signing is not available" into the test, there may be
tests which only need verification.

Move GNUPGHOME creation from SetUpTest to SetUpSuite, sharing a single
key is fine.  We don’t change the GNUPGHOME contents at test runtime.
2017-05-10 14:45:10 +02:00
Miloslav Trmač
9985f12cd4 Move skopeoBinary check from *Suite.SetUpTest to SetUpSuite
The results are not going to vary across individual tests, so let’s only
check once.
2017-05-10 14:44:20 +02:00
Antonio Murdaca
5a42657cdb Merge pull request #339 from runcom/new-release-v0.1.20
New release v0.1.20
2017-05-10 13:01:27 +02:00
Antonio Murdaca
43d6128036 bump back to v0.1.21-dev
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2017-05-10 12:37:37 +02:00
1432 changed files with 407901 additions and 19867 deletions

.travis.yml

@@ -1,10 +1,24 @@
-sudo: required
-services:
-- docker
+matrix:
+  include:
+    - os: linux
+      sudo: required
+      services:
+        - docker
+    - os: osx
 notifications:
   email: false
 install:
+  # NOTE: The (brew update) should not be necessary, and slows things down;
+  # we include it as a workaround for https://github.com/Homebrew/brew/issues/3299
+  # ideally Travis should bake the (brew update) into its images
+  # (https://github.com/travis-ci/travis-ci/issues/8552 ), but that's only going
+  # to happen around November 2017 per https://blog.travis-ci.com/2017-10-16-a-new-default-os-x-image-is-coming .
+  # Remove the (brew update) at that time.
+  - if [[ "$TRAVIS_OS_NAME" == "osx" ]]; then brew update && brew install gpgme ; fi
 script:
-- make check
+  - if [[ "$TRAVIS_OS_NAME" == "osx" ]]; then hack/travis_osx.sh ; fi
+  - if [[ "$TRAVIS_OS_NAME" == "linux" ]]; then make check ; fi

CONTRIBUTING.md

@@ -8,12 +8,14 @@ that we follow.
 * [Reporting Issues](#reporting-issues)
 * [Submitting Pull Requests](#submitting-pull-requests)
 * [Communications](#communications)
+<!--
 * [Becoming a Maintainer](#becoming-a-maintainer)
+-->
 
 ## Reporting Issues
 
 Before reporting an issue, check our backlog of
-[open issues](https://github.com/projectatomic/skopeo/issues)
+[open issues](https://github.com/containers/skopeo/issues)
 to see if someone else has already reported it. If so, feel free to add
 your scenario, or additional information, to the discussion. Or simply
 "subscribe" to it to be notified when it is updated.
@@ -37,9 +39,9 @@ It's ok to just open up a PR with the fix, but make sure you include the same
 information you would have included in an issue - like how to reproduce it.
 
 PRs for new features should include some background on what use cases the
-new code is trying to address. And, when possible and it makes, try to break-up
+new code is trying to address. When possible and when it makes sense, try to break-up
 larger PRs into smaller ones - it's easier to review smaller
-code changes. But, only if those smaller ones make sense as stand-alone PRs.
+code changes. But only if those smaller ones make sense as stand-alone PRs.
@@ -47,9 +49,9 @@ Regardless of the type of PR, all PRs should include:
 * documentation changes
 
 Squash your commits into logical pieces of work that might want to be reviewed
-separate from the rest of the PRs. But, squashing down to just one commit is ok
-too since in the end the entire PR will be reviewed anyway. When in doubt,
-squash.
+separate from the rest of the PRs. Ideally, each commit should implement a single
+idea, and the PR branch should pass the tests at every commit. GitHub makes it easy
+to review the cumulative effect of many commits; so, when in doubt, use smaller commits.
 
 PRs that fix issues should include a reference like `Closes #XXXX` in the
 commit message so that github will automatically close the referenced issue
@@ -115,14 +117,14 @@ commit automatically with `git commit -s`.
 ## Communications
 
-For general questions, or dicsussions, please use the
+For general questions, or discussions, please use the
 IRC group on `irc.freenode.net` called `container-projects`
 that has been setup.
 
 For discussions around issues/bugs and features, you can use the github
-[issues](https://github.com/projectatomic/skopeo/issues)
+[issues](https://github.com/containers/skopeo/issues)
 and
-[PRs](https://github.com/projectatomic/skopeo/pulls)
+[PRs](https://github.com/containers/skopeo/pulls)
 tracking system.
 
 <!--

Dockerfile

@@ -6,23 +6,19 @@ RUN dnf -y update && dnf install -y make git golang golang-github-cpuguy83-go-md2man \
     device-mapper-devel \
     # gpgme bindings deps
     libassuan-devel gpgme-devel \
     ostree-devel \
     gnupg \
-    # registry v1 deps
-    xz-devel \
-    python-devel \
-    python-pip \
-    swig \
-    redhat-rpm-config \
-    openssl-devel \
-    patch
+    # OpenShift deps
+    which tar wget hostname util-linux bsdtar socat ethtool device-mapper iptables tree findutils nmap-ncat e2fsprogs xfsprogs lsof docker iproute \
+    && dnf clean all
 
-# Install three versions of the registry. The first is an older version that
+# Install two versions of the registry. The first is an older version that
 # only supports schema1 manifests. The second is a newer version that supports
 # both. This allows integration-cli tests to cover push/pull with both schema1
-# and schema2 manifests. Install registry v1 also.
-ENV REGISTRY_COMMIT_SCHEMA1 ec87e9b6971d831f0eff752ddb54fb64693e51cd
-ENV REGISTRY_COMMIT 47a064d4195a9b56133891bbb13620c3ac83a827
+# and schema2 manifests.
 RUN set -x \
+    && REGISTRY_COMMIT_SCHEMA1=ec87e9b6971d831f0eff752ddb54fb64693e51cd \
+    && REGISTRY_COMMIT=47a064d4195a9b56133891bbb13620c3ac83a827 \
     && export GOPATH="$(mktemp -d)" \
     && git clone https://github.com/docker/distribution.git "$GOPATH/src/github.com/docker/distribution" \
     && (cd "$GOPATH/src/github.com/docker/distribution" && git checkout -q "$REGISTRY_COMMIT") \
@@ -31,31 +27,24 @@ RUN set -x \
&& (cd "$GOPATH/src/github.com/docker/distribution" && git checkout -q "$REGISTRY_COMMIT_SCHEMA1") \
&& GOPATH="$GOPATH/src/github.com/docker/distribution/Godeps/_workspace:$GOPATH" \
go build -o /usr/local/bin/registry-v2-schema1 github.com/docker/distribution/cmd/registry \
&& rm -rf "$GOPATH" \
&& export DRV1="$(mktemp -d)" \
&& git clone https://github.com/docker/docker-registry.git "$DRV1" \
# no need for setuptools since we have a version conflict with fedora
&& sed -i.bak s/setuptools==5.8//g "$DRV1/requirements/main.txt" \
&& sed -i.bak s/setuptools==5.8//g "$DRV1/depends/docker-registry-core/requirements/main.txt" \
&& pip install "$DRV1/depends/docker-registry-core" \
&& pip install file://"$DRV1#egg=docker-registry[bugsnag,newrelic,cors]" \
&& patch $(python -c 'import boto; import os; print os.path.dirname(boto.__file__)')/connection.py \
< "$DRV1/contrib/boto_header_patch.diff" \
&& dnf -y update && dnf install -y m2crypto
&& rm -rf "$GOPATH"
RUN set -x \
&& yum install -y which git tar wget hostname util-linux bsdtar socat ethtool device-mapper iptables tree findutils nmap-ncat e2fsprogs xfsprogs lsof docker iproute \
&& export GOPATH=$(mktemp -d) \
&& git clone -b v1.5.0-alpha.3 git://github.com/openshift/origin "$GOPATH/src/github.com/openshift/origin" \
&& git clone --depth 1 -b v1.5.0-alpha.3 git://github.com/openshift/origin "$GOPATH/src/github.com/openshift/origin" \
# The sed edits out a "go < 1.5" check which works incorrectly with go ≥ 1.10. \
&& sed -i -e 's/\[\[ "\${go_version\[2]}" < "go1.5" ]]/false/' "$GOPATH/src/github.com/openshift/origin/hack/common.sh" \
&& (cd "$GOPATH/src/github.com/openshift/origin" && make clean build && make all WHAT=cmd/dockerregistry) \
&& cp -a "$GOPATH/src/github.com/openshift/origin/_output/local/bin/linux"/*/* /usr/local/bin \
&& cp "$GOPATH/src/github.com/openshift/origin/images/dockerregistry/config.yml" /atomic-registry-config.yml \
&& rm -rf "$GOPATH" \
&& mkdir /registry
ENV GOPATH /usr/share/gocode:/go
ENV PATH $GOPATH/bin:/usr/share/gocode/bin:$PATH
RUN go get github.com/golang/lint/golint
WORKDIR /go/src/github.com/projectatomic/skopeo
COPY . /go/src/github.com/projectatomic/skopeo
RUN go version
RUN go get golang.org/x/lint/golint
WORKDIR /go/src/github.com/containers/skopeo
COPY . /go/src/github.com/containers/skopeo
#ENTRYPOINT ["hack/dind"]

View File

@@ -1,5 +1,14 @@
FROM ubuntu:16.04
RUN apt-get update
RUN apt-get install -y golang btrfs-tools git-core libdevmapper-dev libgpgme11-dev go-md2man
FROM ubuntu:17.10
RUN apt-get update && apt-get install -y \
golang \
btrfs-tools \
git-core \
libdevmapper-dev \
libgpgme11-dev \
go-md2man \
libglib2.0-dev \
libostree-dev
ENV GOPATH=/
WORKDIR /src/github.com/projectatomic/skopeo
WORKDIR /src/github.com/containers/skopeo

View File

@@ -1,8 +1,21 @@
.PHONY: all binary build-container build-local clean install install-binary install-completions shell test-integration
.PHONY: all binary build-container docs build-local clean install install-binary install-completions shell test-integration vendor
export GO15VENDOREXPERIMENT=1
ifeq ($(shell uname),Darwin)
PREFIX ?= ${DESTDIR}/usr/local
DARWIN_BUILD_TAG=containers_image_ostree_stub
# On macOS, (brew install gpgme) installs it within /usr/local, but /usr/local/include is not in the default search path.
# Rather than hard-code this directory, use gpgme-config. Sadly that must be done at the top-level user
# instead of locally in the gpgme subpackage, because cgo supports only pkg-config, not general shell scripts,
# and gpgme does not install a pkg-config file.
# If gpgme is not installed or gpgme-config can't be found for other reasons, the error is silently ignored
# (and the user will probably find out because the cgo compilation will fail).
GPGME_ENV := CGO_CFLAGS="$(shell gpgme-config --cflags 2>/dev/null)" CGO_LDFLAGS="$(shell gpgme-config --libs 2>/dev/null)"
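# For illustration only (the paths are an assumption and vary by Homebrew prefix),
# the generated environment amounts to roughly:
#   CGO_CFLAGS="-I/usr/local/include" CGO_LDFLAGS="-L/usr/local/lib -lgpgme -lassuan -lgpg-error"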
else
PREFIX ?= ${DESTDIR}/usr
endif
INSTALLDIR=${PREFIX}/bin
MANINSTALLDIR=${PREFIX}/share/man
CONTAINERSSYSCONFIGDIR=${DESTDIR}/etc/containers
@@ -10,23 +23,29 @@ REGISTRIESDDIR=${CONTAINERSSYSCONFIGDIR}/registries.d
SIGSTOREDIR=${DESTDIR}/var/lib/atomic/sigstore
BASHINSTALLDIR=${PREFIX}/share/bash-completion/completions
GO_MD2MAN ?= go-md2man
GO ?= go
CONTAINER_RUNTIME := $(shell command -v podman 2> /dev/null || echo docker)
ifeq ($(DEBUG), 1)
override GOGCFLAGS += -N -l
endif
ifeq ($(shell go env GOOS), linux)
GO_DYN_FLAGS="-buildmode=pie"
endif
GIT_BRANCH := $(shell git rev-parse --abbrev-ref HEAD 2>/dev/null)
DOCKER_IMAGE := skopeo-dev$(if $(GIT_BRANCH),:$(GIT_BRANCH))
IMAGE := skopeo-dev$(if $(GIT_BRANCH),:$(GIT_BRANCH))
# set env like gobuildtag?
DOCKER_FLAGS := docker run --rm -i #$(DOCKER_ENVS)
CONTAINER_CMD := ${CONTAINER_RUNTIME} run --rm -i -e TESTFLAGS="$(TESTFLAGS)" #$(CONTAINER_ENVS)
# if this session isn't interactive, then we don't want to allocate a
# TTY, which would fail, but if it is interactive, we do want to attach
# so that the user can send e.g. ^C through.
INTERACTIVE := $(shell [ -t 0 ] && echo 1 || echo 0)
ifeq ($(INTERACTIVE), 1)
DOCKER_FLAGS += -t
CONTAINER_CMD += -t
endif
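# For example, (make shell < /dev/null) runs with INTERACTIVE=0, so no TTY is
# requested and the target still works in non-interactive CI pipelines.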
DOCKER_RUN_DOCKER := $(DOCKER_FLAGS) "$(DOCKER_IMAGE)"
CONTAINER_RUN := $(CONTAINER_CMD) "$(IMAGE)"
GIT_COMMIT := $(shell git rev-parse HEAD 2> /dev/null || true)
@@ -34,41 +53,44 @@ MANPAGES_MD = $(wildcard docs/*.md)
BTRFS_BUILD_TAG = $(shell hack/btrfs_tag.sh)
LIBDM_BUILD_TAG = $(shell hack/libdm_tag.sh)
LOCAL_BUILD_TAGS = $(BTRFS_BUILD_TAG) $(LIBDM_BUILD_TAG)
LOCAL_BUILD_TAGS = $(BTRFS_BUILD_TAG) $(LIBDM_BUILD_TAG) $(DARWIN_BUILD_TAG)
BUILDTAGS += $(LOCAL_BUILD_TAGS)
ifeq ($(DISABLE_CGO), 1)
override BUILDTAGS = containers_image_ostree_stub exclude_graphdriver_devicemapper exclude_graphdriver_btrfs containers_image_openpgp
endif
# make all DEBUG=1
# Note: Uses the -N -l go compiler options to disable compiler optimizations
# and inlining. Using these build options allows you to subsequently
# use source debugging tools like delve.
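# A sketch of a typical debug cycle (the delve invocation is an assumption,
# not part of this Makefile):
#   make binary-local DEBUG=1
#   dlv exec ./skopeo -- inspect docker://docker.io/library/alpine:latest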
all: binary docs
# Build a docker image (skopeobuild) that has everything we need to build.
# Build a container image (skopeobuild) that has everything we need to build.
# Then do the build and the output (skopeo) should appear in current dir
binary: cmd/skopeo
docker build ${DOCKER_BUILD_ARGS} -f Dockerfile.build -t skopeobuildimage .
docker run --rm --security-opt label:disable -v $$(pwd):/src/github.com/projectatomic/skopeo \
${CONTAINER_RUNTIME} build ${BUILD_ARGS} -f Dockerfile.build -t skopeobuildimage .
${CONTAINER_RUNTIME} run --rm --security-opt label:disable -v $$(pwd):/src/github.com/containers/skopeo \
skopeobuildimage make binary-local $(if $(DEBUG),DEBUG=$(DEBUG)) BUILDTAGS='$(BUILDTAGS)'
binary-static: cmd/skopeo
docker build ${DOCKER_BUILD_ARGS} -f Dockerfile.build -t skopeobuildimage .
docker run --rm --security-opt label:disable -v $$(pwd):/src/github.com/projectatomic/skopeo \
${CONTAINER_RUNTIME} build ${BUILD_ARGS} -f Dockerfile.build -t skopeobuildimage .
${CONTAINER_RUNTIME} run --rm --security-opt label:disable -v $$(pwd):/src/github.com/containers/skopeo \
skopeobuildimage make binary-local-static $(if $(DEBUG),DEBUG=$(DEBUG)) BUILDTAGS='$(BUILDTAGS)'
# Build w/o using Docker containers
# Build w/o using containers
binary-local:
go build -ldflags "-X main.gitCommit=${GIT_COMMIT}" -gcflags "$(GOGCFLAGS)" -tags "$(BUILDTAGS)" -o skopeo ./cmd/skopeo
$(GPGME_ENV) $(GO) build ${GO_DYN_FLAGS} -ldflags "-X main.gitCommit=${GIT_COMMIT}" -gcflags "$(GOGCFLAGS)" -tags "$(BUILDTAGS)" -o skopeo ./cmd/skopeo
binary-local-static:
go build -ldflags "-extldflags \"-static\" -X main.gitCommit=${GIT_COMMIT}" -gcflags "$(GOGCFLAGS)" -tags "$(BUILDTAGS)" -o skopeo ./cmd/skopeo
$(GPGME_ENV) $(GO) build -ldflags "-extldflags \"-static\" -X main.gitCommit=${GIT_COMMIT}" -gcflags "$(GOGCFLAGS)" -tags "$(BUILDTAGS)" -o skopeo ./cmd/skopeo
build-container:
docker build ${DOCKER_BUILD_ARGS} -t "$(DOCKER_IMAGE)" .
${CONTAINER_RUNTIME} build ${BUILD_ARGS} -t "$(IMAGE)" .
docs/%.1: docs/%.1.md
$(GO_MD2MAN) -in $< -out $@.tmp && touch $@.tmp && mv $@.tmp $@
.PHONY: docs
docs: $(MANPAGES_MD:%.md=%)
clean:
@@ -76,33 +98,38 @@ clean:
install: install-binary install-docs install-completions
install -d -m 755 ${SIGSTOREDIR}
install -D -m 644 default-policy.json ${CONTAINERSSYSCONFIGDIR}/policy.json
install -D -m 644 default.yaml ${REGISTRIESDDIR}/default.yaml
install -d -m 755 ${CONTAINERSSYSCONFIGDIR}
install -m 644 default-policy.json ${CONTAINERSSYSCONFIGDIR}/policy.json
install -d -m 755 ${REGISTRIESDDIR}
install -m 644 default.yaml ${REGISTRIESDDIR}/default.yaml
install-binary: ./skopeo
install -D -m 755 skopeo ${INSTALLDIR}/skopeo
install -d -m 755 ${INSTALLDIR}
install -m 755 skopeo ${INSTALLDIR}/skopeo
install-docs: docs/skopeo.1
install -D -m 644 docs/skopeo.1 ${MANINSTALLDIR}/man1/skopeo.1
install -d -m 755 ${MANINSTALLDIR}/man1
install -m 644 docs/skopeo.1 ${MANINSTALLDIR}/man1/skopeo.1
install-completions:
install -m 644 -D completions/bash/skopeo ${BASHINSTALLDIR}/skopeo
install -m 755 -d ${BASHINSTALLDIR}
install -m 644 completions/bash/skopeo ${BASHINSTALLDIR}/skopeo
shell: build-container
$(DOCKER_RUN_DOCKER) bash
$(CONTAINER_RUN) bash
check: validate test-unit test-integration
# The tests can run out of entropy and block in containers, so replace /dev/random.
test-integration: build-container
$(DOCKER_RUN_DOCKER) bash -c 'rm -f /dev/random; ln -sf /dev/urandom /dev/random; SKOPEO_CONTAINER_TESTS=1 BUILDTAGS="$(BUILDTAGS)" hack/make.sh test-integration'
$(CONTAINER_RUN) bash -c 'rm -f /dev/random; ln -sf /dev/urandom /dev/random; SKOPEO_CONTAINER_TESTS=1 BUILDTAGS="$(BUILDTAGS)" hack/make.sh test-integration'
test-unit: build-container
# Just call (make test unit-local) here instead of worrying about environment differences, e.g. GO15VENDOREXPERIMENT.
$(DOCKER_RUN_DOCKER) make test-unit-local BUILDTAGS='$(BUILDTAGS)'
$(CONTAINER_RUN) make test-unit-local BUILDTAGS='$(BUILDTAGS)'
validate: build-container
$(DOCKER_RUN_DOCKER) hack/make.sh validate-git-marks validate-gofmt validate-lint validate-vet
$(CONTAINER_RUN) hack/make.sh validate-git-marks validate-gofmt validate-lint validate-vet
# This target is only intended for development, e.g. executing it from an IDE. Use (make test) for CI or pre-release testing.
test-all-local: validate-local test-unit-local
@@ -111,4 +138,7 @@ validate-local:
hack/make.sh validate-git-marks validate-gofmt validate-lint validate-vet
test-unit-local:
go test -tags "$(BUILDTAGS)" $$(go list -tags "$(BUILDTAGS)" -e ./... | grep -v '^github\.com/projectatomic/skopeo/\(integration\|vendor/.*\)$$')
$(GPGME_ENV) $(GO) test -tags "$(BUILDTAGS)" $$($(GO) list -tags "$(BUILDTAGS)" -e ./... | grep -v '^github\.com/containers/skopeo/\(integration\|vendor/.*\)$$')
vendor: vendor.conf
vndr -whitelist '^github.com/containers/image/docs/.*'

README.md
View File

@@ -1,16 +1,53 @@
skopeo [![Build Status](https://travis-ci.org/projectatomic/skopeo.svg?branch=master)](https://travis-ci.org/projectatomic/skopeo)
skopeo [![Build Status](https://travis-ci.org/containers/skopeo.svg?branch=master)](https://travis-ci.org/containers/skopeo)
=
_Please be aware `skopeo` is still work in progress and it currently supports only registry API V2_
<img src="https://cdn.rawgit.com/containers/skopeo/master/docs/skopeo.svg" width="250">
`skopeo` is a command line utility for various operations on container images and image repositories.
----
`skopeo` is a command line utility that performs various operations on container images and image repositories.
`skopeo` can work with [OCI images](https://github.com/opencontainers/image-spec) as well as the original Docker v2 images.
Skopeo works with API V2 registries such as Docker registries, the Atomic registry, private registries, local directories, and local OCI-layout directories. Skopeo does not require a daemon to be running to perform these operations, which include:
* Copying an image from and to various storage mechanisms.
For example you can copy images from one registry to another, without requiring privilege.
* Inspecting a remote image showing its properties including its layers, without requiring you to pull the image to the host.
* Deleting an image from an image repository.
* When required by the repository, skopeo can pass the appropriate credentials and certificates for authentication.
Skopeo operates on the following image and repository types:
* containers-storage:docker-reference
An image located in a local containers/storage image store. Location and image store specified in /etc/containers/storage.conf
* dir:path
An existing local directory path storing the manifest, layer tarballs and signatures as individual files. This is a non-standardized format, primarily useful for debugging or noninvasive container inspection.
* docker://docker-reference
An image in a registry implementing the "Docker Registry HTTP API V2". By default, uses the authorization state in $HOME/.docker/config.json, which is set e.g. using (docker login).
* docker-archive:path[:docker-reference]
An image stored in a `docker save` formatted file. docker-reference is only used when creating such a file, and it must not contain a digest.
* docker-daemon:docker-reference
An image docker-reference stored in the docker daemon internal storage. docker-reference must contain either a tag or a digest. Alternatively, when reading images, the format can also be docker-daemon:algo:digest (an image ID).
* oci:path:tag
An image tag in a directory compliant with "Open Container Image Layout Specification" at path.
* ostree:image[@/absolute/repo/path]
An image in a local OSTree repository. /absolute/repo/path defaults to /ostree/repo.
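For instance (the image and paths below are only illustrative), the same image can be addressed through several of these transports:
```sh
$ skopeo copy docker://docker.io/library/busybox:latest dir:/tmp/busybox
$ skopeo copy dir:/tmp/busybox oci:/tmp/busybox-oci:latest
$ skopeo inspect oci:/tmp/busybox-oci:latest
```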
Inspecting a repository
-
`skopeo` is able to _inspect_ a repository on a Docker registry and fetch images layers.
By _inspect_ I mean it fetches the repository's manifest and it is able to show you a `docker inspect`-like
The _inspect_ command fetches the repository's manifest and can show you a `docker inspect`-like
json output about a whole repository or a tag. This tool, in contrast to `docker inspect`, helps you gather useful information about
a repository or a tag before pulling it (using disk space) - e.g. - which tags are available for the given repository? which labels the image has?
a repository or a tag before pulling it (using disk space). The inspect command can show you which tags are available for the given
repository, the labels the image has, the creation date and operating system of the image and more.
Examples:
```sh
@@ -47,14 +84,25 @@ $ skopeo inspect docker://docker.io/fedora:rawhide | jq '.Digest'
Copying images
-
`skopeo` can copy container images between various storage mechanisms,
e.g. Docker registries (including the Docker Hub), the Atomic Registry,
local directories, and local OCI-layout directories:
`skopeo` can copy container images between various storage mechanisms, including:
* Docker distribution based registries
- The Docker Hub, OpenShift, GCR, Artifactory, Quay ...
* Container Storage backends
- Docker daemon storage
- github.com/containers/storage (Backend for CRI-O, Buildah and friends)
* Local directories
* Local OCI-layout directories
```sh
$ skopeo copy docker://busybox:1-glibc atomic:myns/unsigned:streaming
$ skopeo copy docker://busybox:latest dir:existingemptydirectory
$ skopeo copy docker://busybox:latest oci:busybox_ocilayout
$ skopeo copy docker://busybox:latest oci:busybox_ocilayout:latest
```
Deleting images
@@ -101,45 +149,77 @@ $ skopeo copy --src-creds=testuser:testpassword docker://myregistrydomain.com:50
If your cli config is found but it doesn't contain the necessary credentials for the queried registry
you'll get an error. You can fix this by either logging in (via `docker login`) or providing `--creds` or `--src-creds|--dest-creds`.
Building
Obtaining skopeo
-
`skopeo` may already be packaged in your distribution, for example on Fedora 23 and later you can install it using
```sh
$ sudo dnf install skopeo
```
Otherwise, read on for building and installing it from source:
To build the `skopeo` binary you need at least Go 1.5 because it uses the latest `GO15VENDOREXPERIMENT` flag.
There are two ways to build skopeo: in a container, or locally without a container. Choose the one which better matches your needs and environment.
### Building without a container
Building without a container requires a bit more manual work and setup in your environment, but it is more flexible:
- It should work in more environments (e.g. for native macOS builds)
- It does not require root privileges (after dependencies are installed)
- It is faster, therefore more convenient for developing `skopeo`.
Install the necessary dependencies:
```sh
Fedora$ sudo dnf install gpgme-devel libassuan-devel btrfs-progs-devel device-mapper-devel ostree-devel
Ubuntu$ sudo apt install libgpgme-dev libassuan-dev btrfs-progs libdevmapper-dev libostree-dev
macOS$ brew install gpgme
```
Make sure to clone this repository in your `GOPATH` - otherwise compilation fails.
```sh
$ git clone https://github.com/containers/skopeo $GOPATH/src/github.com/containers/skopeo
$ cd $GOPATH/src/github.com/containers/skopeo && make binary-local
```
### Building in a container
Building in a container is simpler, but more restrictive:
- It requires the `docker` command and the ability to run Linux containers
- The created executable is a Linux executable, and depends on dynamic libraries which may be available only in a container of a similar Linux distribution.
```sh
$ make binary # Or (make all) to also build documentation, see below.
```
To build a pure-Go static binary (disables ostree, devicemapper, btrfs, and gpgme):
```sh
$ make binary-static DISABLE_CGO=1
```
### Building documentation
To build the manual you will need go-md2man.
```sh
$ sudo apt-get install go-md2man
Debian$ sudo apt-get install go-md2man
Fedora$ sudo dnf install go-md2man
```
To build the `skopeo` binary you need at least Go 1.5 because it uses the latest `GO15VENDOREXPERIMENT` flag. Also, make sure to clone the repository in your `GOPATH` - otherwise compilation fails.
Then
```sh
$ git clone https://github.com/projectatomic/skopeo $GOPATH/src/github.com/projectatomic/skopeo
$ cd $GOPATH/src/github.com/projectatomic/skopeo && make all
$ make docs
```
To build localy on OSX:
```sh
$ brew install gpgme
$ make binary-local
```
You may need to install additional development packages: `gpgme-devel` and `libassuan-devel`
```sh
$ dnf install gpgme-devel libassuan-devel
```
Installing
-
If you built from source:
### Installation
Finally, after the binary and documentation is built:
```sh
$ sudo make install
```
`skopeo` is also available from Fedora 23:
```sh
sudo dnf install skopeo
```
TODO
-
- list all images on registry?
- registry v2 search?
- support output to docker load tar(s)
- show repo tags via flag or when reference isn't tagged or digested
- add tests (integration with deployed registries in container - Docker-like)
- support rkt/appc image spec
NOT TODO
@@ -151,22 +231,32 @@ CONTRIBUTING
### Dependencies management
`skopeo` uses [`vndr`](https://github.com/LK4D4/vndr) for dependencies management.
Make sure [`vndr`](https://github.com/LK4D4/vndr) is installed.
In order to add a new dependency to this project:
- add a new line to `vendor.conf` according to `vndr` rules (e.g. `github.com/pkg/errors master`)
- run `vndr github.com/pkg/errors`
- run `make vendor`
In order to update an existing dependency:
- update the relevant dependency line in `vendor.conf`
- run `vndr github.com/pkg/errors`
- run `make vendor`
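A minimal sketch of that update flow (the dependency shown is just an example):
```sh
$ $EDITOR vendor.conf        # e.g. point the github.com/pkg/errors line at the desired revision
$ make vendor
$ git add -A vendor.conf vendor && git commit -s
```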
In order to test out new PRs from [containers/image](https://github.com/containers/image) to not break `skopeo`:
When new PRs for [containers/image](https://github.com/containers/image) break `skopeo` (i.e. `containers/image` tests fail in `make test-skopeo`):
- create out a new branch in your `skopeo` checkout and switch to it
- update `vendor.conf`. Find out the `containers/image` dependency; update it to vendor from your own branch and your own repository fork (e.g. `github.com/containers/image my-branch https://github.com/runcom/image`)
- run `vndr github.com/containers/image`
- run `make vendor`
- make any other necessary changes in the skopeo repo (e.g. add other dependencies now required by `containers/image`, or update skopeo for changed `containers/image` API)
- optionally add new integration tests to the skopeo repo
- submit the resulting branch as a skopeo PR, marked “DO NOT MERGE”
- iterate until tests pass and the PR is reviewed
- then the original `containers/image` PR can be merged, disregarding its `make test-skopeo` failure
- as soon as possible after that, in the skopeo PR, restore the `containers/image` line in `vendor.conf` to use `containers/image:master`
- run `make vendor`
- update the skopeo PR with the result, drop the “DO NOT MERGE” marking
- after tests complete successfully again, merge the skopeo PR
License
-

View File

@@ -1,3 +1,5 @@
// +build !containers_image_openpgp
package main
/*

View File

@@ -4,10 +4,15 @@ import (
"errors"
"fmt"
"os"
"strings"
"github.com/containers/image/copy"
"github.com/containers/image/docker/reference"
"github.com/containers/image/manifest"
"github.com/containers/image/transports"
"github.com/containers/image/transports/alltransports"
"github.com/containers/image/types"
imgspecv1 "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/urfave/cli"
)
@@ -26,49 +31,100 @@ func contextsFromGlobalOptions(c *cli.Context) (*types.SystemContext, *types.Sys
return sourceCtx, destinationCtx, nil
}
func copyHandler(context *cli.Context) error {
if len(context.Args()) != 2 {
return errors.New("Usage: copy source destination")
func copyHandler(c *cli.Context) error {
if len(c.Args()) != 2 {
cli.ShowCommandHelp(c, "copy")
return errors.New("Exactly two arguments expected")
}
policyContext, err := getPolicyContext(context)
policyContext, err := getPolicyContext(c)
if err != nil {
return fmt.Errorf("Error loading trust policy: %v", err)
}
defer policyContext.Destroy()
srcRef, err := alltransports.ParseImageName(context.Args()[0])
srcRef, err := alltransports.ParseImageName(c.Args()[0])
if err != nil {
return fmt.Errorf("Invalid source name %s: %v", context.Args()[0], err)
return fmt.Errorf("Invalid source name %s: %v", c.Args()[0], err)
}
destRef, err := alltransports.ParseImageName(context.Args()[1])
destRef, err := alltransports.ParseImageName(c.Args()[1])
if err != nil {
return fmt.Errorf("Invalid destination name %s: %v", context.Args()[1], err)
return fmt.Errorf("Invalid destination name %s: %v", c.Args()[1], err)
}
signBy := context.String("sign-by")
removeSignatures := context.Bool("remove-signatures")
signBy := c.String("sign-by")
removeSignatures := c.Bool("remove-signatures")
sourceCtx, destinationCtx, err := contextsFromGlobalOptions(context)
sourceCtx, destinationCtx, err := contextsFromGlobalOptions(c)
if err != nil {
return err
}
return copy.Image(policyContext, destRef, srcRef, &copy.Options{
RemoveSignatures: removeSignatures,
SignBy: signBy,
ReportWriter: os.Stdout,
SourceCtx: sourceCtx,
DestinationCtx: destinationCtx,
var manifestType string
if c.IsSet("format") {
switch c.String("format") {
case "oci":
manifestType = imgspecv1.MediaTypeImageManifest
case "v2s1":
manifestType = manifest.DockerV2Schema1SignedMediaType
case "v2s2":
manifestType = manifest.DockerV2Schema2MediaType
default:
return fmt.Errorf("unknown format %q. Choose on of the supported formats: 'oci', 'v2s1', or 'v2s2'", c.String("format"))
}
}
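// Tags given via --additional-tag are recorded in the destination context; they are
// honored only by destinations that support multiple tags (the docker-archive transport).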
if c.IsSet("additional-tag") {
for _, image := range c.StringSlice("additional-tag") {
ref, err := reference.ParseNormalizedNamed(image)
if err != nil {
return fmt.Errorf("error parsing additional-tag '%s': %v", image, err)
}
namedTagged, isNamedTagged := ref.(reference.NamedTagged)
if !isNamedTagged {
return fmt.Errorf("additional-tag '%s' must be a tagged reference", image)
}
destinationCtx.DockerArchiveAdditionalTags = append(destinationCtx.DockerArchiveAdditionalTags, namedTagged)
}
}
ctx, cancel := commandTimeoutContextFromGlobalOptions(c)
defer cancel()
_, err = copy.Image(ctx, policyContext, destRef, srcRef, &copy.Options{
RemoveSignatures: removeSignatures,
SignBy: signBy,
ReportWriter: os.Stdout,
SourceCtx: sourceCtx,
DestinationCtx: destinationCtx,
ForceManifestMIMEType: manifestType,
})
return err
}
var copyCmd = cli.Command{
Name: "copy",
Usage: "Copy an image from one location to another",
Name: "copy",
Usage: "Copy an IMAGE-NAME from one location to another",
Description: fmt.Sprintf(`
Container "IMAGE-NAME" uses a "transport":"details" format.
Supported transports:
%s
See skopeo(1) section "IMAGE NAMES" for the expected format
`, strings.Join(transports.ListNames(), ", ")),
ArgsUsage: "SOURCE-IMAGE DESTINATION-IMAGE",
Action: copyHandler,
// FIXME: Do we need to namespace the GPG aspect?
Flags: []cli.Flag{
cli.StringSliceFlag{
Name: "additional-tag",
Usage: "additional tags (supports docker-archive)",
},
cli.StringFlag{
Name: "authfile",
Usage: "path of the authentication file. Default is ${XDG_RUNTIME_DIR}/containers/auth.json",
},
cli.BoolFlag{
Name: "remove-signatures",
Usage: "Do not copy signatures from SOURCE-IMAGE",
@@ -90,25 +146,53 @@ var copyCmd = cli.Command{
cli.StringFlag{
Name: "src-cert-dir",
Value: "",
Usage: "use certificates at `PATH` (*.crt, *.cert, *.key) to connect to the source registry",
Usage: "use certificates at `PATH` (*.crt, *.cert, *.key) to connect to the source registry or daemon",
},
cli.BoolTFlag{
Name: "src-tls-verify",
Usage: "require HTTPS and verify certificates when talking to the docker source registry (defaults to true)",
Usage: "require HTTPS and verify certificates when talking to the container source registry or daemon (defaults to true)",
},
cli.StringFlag{
Name: "dest-cert-dir",
Value: "",
Usage: "use certificates at `PATH` (*.crt, *.cert, *.key) to connect to the destination registry",
Usage: "use certificates at `PATH` (*.crt, *.cert, *.key) to connect to the destination registry or daemon",
},
cli.BoolTFlag{
Name: "dest-tls-verify",
Usage: "require HTTPS and verify certificates when talking to the docker destination registry (defaults to true)",
Usage: "require HTTPS and verify certificates when talking to the container destination registry or daemon (defaults to true)",
},
cli.StringFlag{
Name: "dest-ostree-tmp-dir",
Value: "",
Usage: "`DIRECTORY` to use for OSTree temporary files",
},
cli.StringFlag{
Name: "src-shared-blob-dir",
Value: "",
Usage: "`DIRECTORY` to use to fetch retrieved blobs (OCI layout sources only)",
},
cli.StringFlag{
Name: "dest-shared-blob-dir",
Value: "",
Usage: "`DIRECTORY` to use to store retrieved blobs (OCI layout destinations only)",
},
cli.StringFlag{
Name: "format, f",
Usage: "`MANIFEST TYPE` (oci, v2s1, or v2s2) to use when saving image to directory using the 'dir:' transport (default is manifest type of source)",
},
cli.BoolFlag{
Name: "dest-compress",
Usage: "Compress tarball image layers when saving to directory using the 'dir' transport. (default is same compression type as source)",
},
cli.StringFlag{
Name: "src-daemon-host",
Value: "",
Usage: "use docker daemon host at `HOST` (docker-daemon sources only)",
},
cli.StringFlag{
Name: "dest-daemon-host",
Value: "",
Usage: "use docker daemon host at `HOST` (docker-daemon destinations only)",
},
},
}

View File

@@ -3,37 +3,51 @@ package main
import (
"errors"
"fmt"
"strings"
"github.com/containers/image/transports"
"github.com/containers/image/transports/alltransports"
"github.com/urfave/cli"
)
func deleteHandler(context *cli.Context) error {
if len(context.Args()) != 1 {
func deleteHandler(c *cli.Context) error {
if len(c.Args()) != 1 {
return errors.New("Usage: delete imageReference")
}
ref, err := alltransports.ParseImageName(context.Args()[0])
ref, err := alltransports.ParseImageName(c.Args()[0])
if err != nil {
return fmt.Errorf("Invalid source name %s: %v", context.Args()[0], err)
return fmt.Errorf("Invalid source name %s: %v", c.Args()[0], err)
}
ctx, err := contextFromGlobalOptions(context, "")
sys, err := contextFromGlobalOptions(c, "")
if err != nil {
return err
}
if err := ref.DeleteImage(ctx); err != nil {
return err
}
return nil
ctx, cancel := commandTimeoutContextFromGlobalOptions(c)
defer cancel()
return ref.DeleteImage(ctx, sys)
}
var deleteCmd = cli.Command{
Name: "delete",
Usage: "Delete image IMAGE-NAME",
Name: "delete",
Usage: "Delete image IMAGE-NAME",
Description: fmt.Sprintf(`
Delete an "IMAGE_NAME" from a transport
Supported transports:
%s
See skopeo(1) section "IMAGE NAMES" for the expected format
`, strings.Join(transports.ListNames(), ", ")),
ArgsUsage: "IMAGE-NAME",
Action: deleteHandler,
Flags: []cli.Flag{
cli.StringFlag{
Name: "authfile",
Usage: "path of the authentication file. Default is ${XDG_RUNTIME_DIR}/containers/auth.json",
},
cli.StringFlag{
Name: "creds",
Value: "",
@@ -46,7 +60,7 @@ var deleteCmd = cli.Command{
},
cli.BoolTFlag{
Name: "tls-verify",
Usage: "require HTTPS and verify certificates when talking to docker registries (defaults to true)",
Usage: "require HTTPS and verify certificates when talking to container registries (defaults to true)",
},
},
}

View File

@@ -6,11 +6,12 @@ import (
"strings"
"time"
"github.com/Sirupsen/logrus"
"github.com/containers/image/docker"
"github.com/containers/image/manifest"
"github.com/containers/image/transports"
"github.com/opencontainers/go-digest"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
"github.com/urfave/cli"
)
@@ -20,7 +21,7 @@ type inspectOutput struct {
Tag string `json:",omitempty"`
Digest digest.Digest
RepoTags []string
Created time.Time
Created *time.Time
DockerVersion string
Labels map[string]string
Architecture string
@@ -29,10 +30,22 @@ type inspectOutput struct {
}
var inspectCmd = cli.Command{
Name: "inspect",
Usage: "Inspect image IMAGE-NAME",
Name: "inspect",
Usage: "Inspect image IMAGE-NAME",
Description: fmt.Sprintf(`
Return low-level information about "IMAGE-NAME" in a registry/transport
Supported transports:
%s
See skopeo(1) section "IMAGE NAMES" for the expected format
`, strings.Join(transports.ListNames(), ", ")),
ArgsUsage: "IMAGE-NAME",
Flags: []cli.Flag{
cli.StringFlag{
Name: "authfile",
Usage: "path of the authentication file. Default is ${XDG_RUNTIME_DIR}/containers/auth.json",
},
cli.StringFlag{
Name: "cert-dir",
Value: "",
@@ -40,7 +53,7 @@ var inspectCmd = cli.Command{
},
cli.BoolTFlag{
Name: "tls-verify",
Usage: "require HTTPS and verify certificates when talking to docker registries (defaults to true)",
Usage: "require HTTPS and verify certificates when talking to container registries (defaults to true)",
},
cli.BoolFlag{
Name: "raw",
@@ -53,7 +66,10 @@ var inspectCmd = cli.Command{
},
},
Action: func(c *cli.Context) (retErr error) {
img, err := parseImage(c)
ctx, cancel := commandTimeoutContextFromGlobalOptions(c)
defer cancel()
img, err := parseImage(ctx, c)
if err != nil {
return err
}
@@ -64,7 +80,7 @@ var inspectCmd = cli.Command{
}
}()
rawManifest, _, err := img.Manifest()
rawManifest, _, err := img.Manifest(ctx)
if err != nil {
return err
}
@@ -75,15 +91,15 @@ var inspectCmd = cli.Command{
}
return nil
}
imgInspect, err := img.Inspect()
imgInspect, err := img.Inspect(ctx)
if err != nil {
return err
}
outputData := inspectOutput{
Name: "", // Possibly overridden for a docker.Image.
Name: "", // Set below if DockerReference() is known
Tag: imgInspect.Tag,
// Digest is set below.
RepoTags: []string{}, // Possibly overriden for a docker.Image.
RepoTags: []string{}, // Possibly overridden for docker.Transport.
Created: imgInspect.Created,
DockerVersion: imgInspect.DockerVersion,
Labels: imgInspect.Labels,
@@ -95,9 +111,15 @@ var inspectCmd = cli.Command{
if err != nil {
return fmt.Errorf("Error computing manifest digest: %v", err)
}
if dockerImg, ok := img.(*docker.Image); ok {
outputData.Name = dockerImg.SourceRefFullName()
outputData.RepoTags, err = dockerImg.GetRepositoryTags()
if dockerRef := img.Reference().DockerReference(); dockerRef != nil {
outputData.Name = dockerRef.Name()
}
if img.Reference().Transport() == docker.Transport {
sys, err := contextFromGlobalOptions(c, "")
if err != nil {
return err
}
outputData.RepoTags, err = docker.GetRepositoryTags(ctx, sys, img.Reference())
if err != nil {
// some registries may decide to block the "list all tags" endpoint
// gracefully allow the inspect to continue in this case. Currently

View File

@@ -8,7 +8,6 @@ import (
"github.com/containers/image/directory"
"github.com/containers/image/image"
"github.com/containers/image/manifest"
"github.com/containers/image/types"
"github.com/opencontainers/go-digest"
"github.com/pkg/errors"
@@ -25,16 +24,19 @@ var layersCmd = cli.Command{
if c.NArg() == 0 {
return errors.New("Usage: layers imageReference [layer...]")
}
rawSource, err := parseImageSource(c, c.Args()[0], []string{
// TODO: skopeo layers only supports these now
// eventually we'll remove this command altogether...
manifest.DockerV2Schema1SignedMediaType,
manifest.DockerV2Schema1MediaType,
})
ctx, cancel := commandTimeoutContextFromGlobalOptions(c)
defer cancel()
sys, err := contextFromGlobalOptions(c, "")
if err != nil {
return err
}
src, err := image.FromSource(rawSource)
rawSource, err := parseImageSource(ctx, c, c.Args()[0])
if err != nil {
return err
}
src, err := image.FromSource(ctx, sys, rawSource)
if err != nil {
if closeErr := rawSource.Close(); closeErr != nil {
return errors.Wrapf(err, " (close error: %v)", closeErr)
@@ -48,7 +50,11 @@ var layersCmd = cli.Command{
}
}()
var blobDigests []digest.Digest
type blobDigest struct {
digest digest.Digest
isConfig bool
}
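// blobDigest records whether a blob is the image's config blob, since
// ImageDestination.PutBlob needs that hint when writing the blob.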
var blobDigests []blobDigest
for _, dString := range c.Args().Tail() {
if !strings.HasPrefix(dString, "sha256:") {
dString = "sha256:" + dString
@@ -57,7 +63,7 @@ var layersCmd = cli.Command{
if err != nil {
return err
}
blobDigests = append(blobDigests, d)
blobDigests = append(blobDigests, blobDigest{digest: d, isConfig: false})
}
if len(blobDigests) == 0 {
@@ -65,13 +71,13 @@ var layersCmd = cli.Command{
seenLayers := map[digest.Digest]struct{}{}
for _, info := range layers {
if _, ok := seenLayers[info.Digest]; !ok {
blobDigests = append(blobDigests, info.Digest)
blobDigests = append(blobDigests, blobDigest{digest: info.Digest, isConfig: false})
seenLayers[info.Digest] = struct{}{}
}
}
configInfo := src.ConfigInfo()
if configInfo.Digest != "" {
blobDigests = append(blobDigests, configInfo.Digest)
blobDigests = append(blobDigests, blobDigest{digest: configInfo.Digest, isConfig: true})
}
}
@@ -83,7 +89,7 @@ var layersCmd = cli.Command{
if err != nil {
return err
}
dest, err := tmpDirRef.NewImageDestination(nil)
dest, err := tmpDirRef.NewImageDestination(ctx, nil)
if err != nil {
return err
}
@@ -94,12 +100,12 @@ var layersCmd = cli.Command{
}
}()
for _, digest := range blobDigests {
r, blobSize, err := rawSource.GetBlob(types.BlobInfo{Digest: digest, Size: -1})
for _, bd := range blobDigests {
r, blobSize, err := rawSource.GetBlob(ctx, types.BlobInfo{Digest: bd.digest, Size: -1})
if err != nil {
return err
}
if _, err := dest.PutBlob(r, types.BlobInfo{Digest: digest, Size: blobSize}); err != nil {
if _, err := dest.PutBlob(ctx, r, types.BlobInfo{Digest: bd.digest, Size: blobSize}, bd.isConfig); err != nil {
if closeErr := r.Close(); closeErr != nil {
return errors.Wrapf(err, " (close error: %v)", closeErr)
}
@@ -107,18 +113,14 @@ var layersCmd = cli.Command{
}
}
manifest, _, err := src.Manifest()
manifest, _, err := src.Manifest(ctx)
if err != nil {
return err
}
if err := dest.PutManifest(manifest); err != nil {
if err := dest.PutManifest(ctx, manifest); err != nil {
return err
}
if err := dest.Commit(); err != nil {
return err
}
return nil
return dest.Commit(ctx)
},
}

View File

@@ -4,10 +4,10 @@ import (
"fmt"
"os"
"github.com/Sirupsen/logrus"
"github.com/containers/image/signature"
"github.com/containers/skopeo/version"
"github.com/containers/storage/pkg/reexec"
"github.com/projectatomic/skopeo/version"
"github.com/sirupsen/logrus"
"github.com/urfave/cli"
)
@@ -33,7 +33,7 @@ func createApp() *cli.App {
},
cli.BoolTFlag{
Name: "tls-verify",
Usage: "require HTTPS and verify certificates when talking to docker registries (defaults to true)",
Usage: "require HTTPS and verify certificates when talking to container registries (defaults to true)",
Hidden: true,
},
cli.StringFlag{
@@ -48,7 +48,21 @@ func createApp() *cli.App {
cli.StringFlag{
Name: "registries.d",
Value: "",
Usage: "use registry configuration files in `DIR` (e.g. for docker signature storage)",
Usage: "use registry configuration files in `DIR` (e.g. for container signature storage)",
},
cli.StringFlag{
Name: "override-arch",
Value: "",
Usage: "use `ARCH` instead of the architecture of the machine for choosing images",
},
cli.StringFlag{
Name: "override-os",
Value: "",
Usage: "use `OS` instead of the running OS for choosing images",
},
cli.DurationFlag{
Name: "command-timeout",
Usage: "timeout for the command execution",
},
}
app.Before = func(c *cli.Context) error {

View File

@@ -10,14 +10,14 @@ import (
"github.com/urfave/cli"
)
func standaloneSign(context *cli.Context) error {
outputFile := context.String("output")
if len(context.Args()) != 3 || outputFile == "" {
func standaloneSign(c *cli.Context) error {
outputFile := c.String("output")
if len(c.Args()) != 3 || outputFile == "" {
return errors.New("Usage: skopeo standalone-sign manifest docker-reference key-fingerprint -o signature")
}
manifestPath := context.Args()[0]
dockerReference := context.Args()[1]
fingerprint := context.Args()[2]
manifestPath := c.Args()[0]
dockerReference := c.Args()[1]
fingerprint := c.Args()[2]
manifest, err := ioutil.ReadFile(manifestPath)
if err != nil {
@@ -53,14 +53,14 @@ var standaloneSignCmd = cli.Command{
},
}
func standaloneVerify(context *cli.Context) error {
if len(context.Args()) != 4 {
func standaloneVerify(c *cli.Context) error {
if len(c.Args()) != 4 {
return errors.New("Usage: skopeo standalone-verify manifest docker-reference key-fingerprint signature")
}
manifestPath := context.Args()[0]
expectedDockerReference := context.Args()[1]
expectedFingerprint := context.Args()[2]
signaturePath := context.Args()[3]
manifestPath := c.Args()[0]
expectedDockerReference := c.Args()[1]
expectedFingerprint := c.Args()[2]
signaturePath := c.Args()[3]
unverifiedManifest, err := ioutil.ReadFile(manifestPath)
if err != nil {
@@ -81,7 +81,7 @@ func standaloneVerify(context *cli.Context) error {
return fmt.Errorf("Error verifying signature: %v", err)
}
fmt.Fprintf(context.App.Writer, "Signature verified, digest %s\n", sig.DockerManifestDigest)
fmt.Fprintf(c.App.Writer, "Signature verified, digest %s\n", sig.DockerManifestDigest)
return nil
}
@@ -92,11 +92,11 @@ var standaloneVerifyCmd = cli.Command{
Action: standaloneVerify,
}
func untrustedSignatureDump(context *cli.Context) error {
if len(context.Args()) != 1 {
func untrustedSignatureDump(c *cli.Context) error {
if len(c.Args()) != 1 {
return errors.New("Usage: skopeo untrusted-signature-dump-without-verification signature")
}
untrustedSignaturePath := context.Args()[0]
untrustedSignaturePath := c.Args()[0]
untrustedSignature, err := ioutil.ReadFile(untrustedSignaturePath)
if err != nil {
@@ -111,7 +111,7 @@ func untrustedSignatureDump(context *cli.Context) error {
if err != nil {
return err
}
fmt.Fprintln(context.App.Writer, string(untrustedOut))
fmt.Fprintln(c.App.Writer, string(untrustedOut))
return nil
}

View File

@@ -1,6 +1,7 @@
package main
import (
"context"
"errors"
"strings"
@@ -11,12 +12,20 @@ import (
func contextFromGlobalOptions(c *cli.Context, flagPrefix string) (*types.SystemContext, error) {
ctx := &types.SystemContext{
RegistriesDirPath: c.GlobalString("registries.d"),
DockerCertPath: c.String(flagPrefix + "cert-dir"),
RegistriesDirPath: c.GlobalString("registries.d"),
ArchitectureChoice: c.GlobalString("override-arch"),
OSChoice: c.GlobalString("override-os"),
DockerCertPath: c.String(flagPrefix + "cert-dir"),
// DEPRECATED: keep this here for backward compatibility, but override
// them if per subcommand flags are provided (see below).
DockerInsecureSkipTLSVerify: !c.GlobalBoolT("tls-verify"),
OSTreeTmpDirPath: c.String(flagPrefix + "ostree-tmp-dir"),
DockerInsecureSkipTLSVerify: !c.GlobalBoolT("tls-verify"),
OSTreeTmpDirPath: c.String(flagPrefix + "ostree-tmp-dir"),
OCISharedBlobDirPath: c.String(flagPrefix + "shared-blob-dir"),
DirForceCompress: c.Bool(flagPrefix + "compress"),
AuthFilePath: c.String("authfile"),
DockerDaemonHost: c.String(flagPrefix + "daemon-host"),
DockerDaemonCertPath: c.String(flagPrefix + "cert-dir"),
DockerDaemonInsecureSkipTLSVerify: !c.BoolT(flagPrefix + "tls-verify"),
}
if c.IsSet(flagPrefix + "tls-verify") {
ctx.DockerInsecureSkipTLSVerify = !c.BoolT(flagPrefix + "tls-verify")
@@ -31,6 +40,15 @@ func contextFromGlobalOptions(c *cli.Context, flagPrefix string) (*types.SystemC
return ctx, nil
}
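// commandTimeoutContextFromGlobalOptions returns a context honoring the global
// --command-timeout flag: if the flag is set to a positive duration, the context
// is cancelled after that long; otherwise a plain background context is returned.
// The returned cancel func must always be called to release resources.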
func commandTimeoutContextFromGlobalOptions(c *cli.Context) (context.Context, context.CancelFunc) {
ctx := context.Background()
var cancel context.CancelFunc = func() {}
if c.GlobalDuration("command-timeout") > 0 {
ctx, cancel = context.WithTimeout(ctx, c.GlobalDuration("command-timeout"))
}
return ctx, cancel
}
func parseCreds(creds string) (string, string, error) {
if creds == "" {
return "", "", errors.New("credentials can't be empty")
@@ -57,31 +75,30 @@ func getDockerAuth(creds string) (*types.DockerAuthConfig, error) {
}
// parseImage converts image URL-like string to an initialized handler for that image.
// The caller must call .Close() on the returned Image.
func parseImage(c *cli.Context) (types.Image, error) {
// The caller must call .Close() on the returned ImageCloser.
func parseImage(ctx context.Context, c *cli.Context) (types.ImageCloser, error) {
imgName := c.Args().First()
ref, err := alltransports.ParseImageName(imgName)
if err != nil {
return nil, err
}
ctx, err := contextFromGlobalOptions(c, "")
sys, err := contextFromGlobalOptions(c, "")
if err != nil {
return nil, err
}
return ref.NewImage(ctx)
return ref.NewImage(ctx, sys)
}
// parseImageSource converts image URL-like string to an ImageSource.
// requestedManifestMIMETypes is as in types.ImageReference.NewImageSource.
// The caller must call .Close() on the returned ImageSource.
func parseImageSource(c *cli.Context, name string, requestedManifestMIMETypes []string) (types.ImageSource, error) {
func parseImageSource(ctx context.Context, c *cli.Context, name string) (types.ImageSource, error) {
ref, err := alltransports.ParseImageName(name)
if err != nil {
return nil, err
}
ctx, err := contextFromGlobalOptions(c, "")
sys, err := contextFromGlobalOptions(c, "")
if err != nil {
return nil, err
}
return ref.NewImageSource(ctx, requestedManifestMIMETypes)
return ref.NewImageSource(ctx, sys)
}

View File

@@ -20,30 +20,38 @@ _complete_() {
}
_skopeo_copy() {
local options_with_args="
--sign-by
--src-creds --screds
--src-cert-dir
--src-tls-verify
--dest-creds --dcreds
--dest-cert-dir
--dest-ostree-tmp-dir
--dest-tls-verify
"
local boolean_options="
--remove-signatures
"
_complete_ "$options_with_args" "$boolean_options"
local options_with_args="
--authfile
--format -f
--sign-by
--src-creds --screds
--src-cert-dir
--src-tls-verify
--dest-creds --dcreds
--dest-cert-dir
--dest-ostree-tmp-dir
--dest-tls-verify
--src-daemon-host
--dest-daemon-host
"
local boolean_options="
--dest-compress
--remove-signatures
"
_complete_ "$options_with_args" "$boolean_options"
}
_skopeo_inspect() {
local options_with_args="
--creds
--cert-dir
--authfile
--creds
--cert-dir
"
local boolean_options="
--raw
--tls-verify
--raw
--tls-verify
"
_complete_ "$options_with_args" "$boolean_options"
}
@@ -75,11 +83,12 @@ _skopeo_manifest_digest() {
_skopeo_delete() {
local options_with_args="
--creds
--cert-dir
--authfile
--creds
--cert-dir
"
local boolean_options="
--tls-verify
--tls-verify
"
_complete_ "$options_with_args" "$boolean_options"
}
@@ -99,6 +108,9 @@ _skopeo_skopeo() {
local options_with_args="
--policy
--registries.d
--override-arch
--override-os
--command-timeout
"
local boolean_options="
--insecure-policy

View File

@@ -0,0 +1,60 @@
% storage.conf(5) Container Storage Configuration File
% Dan Walsh
% May 2017
# NAME
storage.conf - Syntax of Container Storage configuration file
# DESCRIPTION
The STORAGE configuration file specifies all of the available container storage options
for tools using shared container storage.
# FORMAT
The [TOML format][toml] is used as the encoding of the configuration file.
Every option and subtable listed here is nested under a global "storage" table.
No bare options are used. The format of TOML can be simplified to:
[table]
option = value
[table.subtable1]
option = value
[table.subtable2]
option = value
## STORAGE TABLE
The `storage` table supports the following options:
**graphroot**=""
container storage graph dir (default: "/var/lib/containers/storage")
Default directory to store all writable content created by container storage programs.
**runroot**=""
container storage run dir (default: "/var/run/containers/storage")
Default directory to store all temporary writable content created by container storage programs.
**driver**=""
container storage driver (default is "overlay")
Default Copy On Write (COW) container storage driver.
### STORAGE OPTIONS TABLE
The `storage.options` table supports the following options:
**additionalimagestores**=[]
Paths to additional container image stores. Usually these are read-only and stored on remote network shares.
**size**=""
Maximum size of a container image. Default is 10GB. This flag can be used to set quota
on the size of container images.
**override_kernel_check**=""
Tell storage drivers to ignore kernel version checks. Some storage drivers assume that if a kernel is too
old, the driver is not supported. But for kernels that have had the drivers backported, this flag
allows users to override the checks.
# HISTORY
May 2017, Originally compiled by Dan Walsh <dwalsh@redhat.com>
Format copied from crio.conf man page created by Aleksa Sarai <asarai@suse.de>

contrib/storage.conf Normal file
View File

@@ -0,0 +1,28 @@
# storage.conf is the configuration file for all tools
# that share the containers/storage libraries
# See man 5 containers-storage.conf for more information
# The "container storage" table contains all of the server options.
[storage]
# Default Storage Driver
driver = "overlay"
# Temporary storage location
runroot = "/var/run/containers/storage"
# Primary read-write location of container storage
graphroot = "/var/lib/containers/storage"
[storage.options]
# AdditionalImageStores is used to pass paths to additional read-only image stores
# Must be comma separated list.
additionalimagestores = [
]
# Size is used to set a maximum size of the container image. Only supported by
# certain container storage drivers (currently overlay, zfs, vfs, btrfs)
size = ""
# OverrideKernelCheck tells the driver to ignore kernel checks based on kernel version
override_kernel_check = "true"

View File

@@ -2,36 +2,48 @@
% Jhon Honce
% August 2016
# NAME
skopeo -- Various operations with container images images and container image registries
skopeo -- Command line utility used to interact with local and remote container images and container image registries
# SYNOPSIS
**skopeo** [_global options_] _command_ [_command options_]
# DESCRIPTION
`skopeo` is a command line utility providing various operations with container images and container image registries. For example, it is able to inspect a repository on a Docker registry and fetch image. It fetches the repository's manifest and it is able to show you a `docker inspect`-like json output about a whole repository or a tag. This tool, in contrast to `docker inspect`, helps you gather useful information about a repository or a tag without requiring you to run `docker pull` - e.g. - which tags are available for the given repository? which labels the image has?
`skopeo` is a command line utility providing various operations with container images and container image registries.
`skopeo` can copy container images between various containers image stores, converting them as necessary. For example you can use `skopeo` to copy container images from one container registry to another.
`skopeo` can convert a Docker schema 2 or schema 1 container image to an OCI image.
`skopeo` can inspect a repository on a container registry without needlessly pulling the image. Pulling an image from a repository, especially a remote repository, is an expensive network and storage operation. Skopeo fetches the repository's manifest and displays a `docker inspect`-like json output about the repository or a tag. `skopeo`, in contrast to `docker inspect`, helps you gather useful information about a repository or a tag without requiring you to run `docker pull` - e.g. - Which tags are available for the given repository? Which labels does the image have?
`skopeo` can sign and verify container images.
`skopeo` can delete container images from a remote container registry.
Note: `skopeo` does not require any container runtime to be running for most of its functionality. It also does not require root, unless you are copying images into a container runtime storage backend, like the docker daemon or github.com/containers/storage.
It also allows you to copy container images between various registries, possibly converting them as necessary, and to sign and verify images.
## IMAGE NAMES
Most commands refer to container images, using a _transport_`:`_details_ format. The following formats are supported:
**atomic:**_namespace_**/**_stream_**:**_tag_
An image in the current project of the current default Atomic
Registry. The current project and Atomic Registry instance are by
default read from `$HOME/.kube/config`, which is set e.g. using
`(oc login)`.
**containers-storage:**_docker-reference_
An image located in a local containers/storage image store. Location and image store specified in /etc/containers/storage.conf
**dir:**_path_
An existing local directory _path_ storing the manifest, layer
tarballs and signatures as individual files. This is a
non-standardized format, primarily useful for debugging or
noninvasive container inspection.
An existing local directory _path_ storing the manifest, layer tarballs and signatures as individual files. This is a non-standardized format, primarily useful for debugging or noninvasive container inspection.
**docker://**_docker-reference_
An image in a registry implementing the "Docker Registry HTTP API V2".
By default, uses the authorization state in `$HOME/.docker/config.json`,
which is set e.g. using `(docker login)`.
An image in a registry implementing the "Docker Registry HTTP API V2". By default, uses the authorization state in either `$XDG_RUNTIME_DIR/containers/auth.json`, which is set using `(kpod login)`. If the authorization state is not found there, `$HOME/.docker/config.json` is checked, which is set using `(docker login)`.
**docker-archive:**_path_[**:**_docker-reference_]
An image stored in a `docker save` formatted file. _docker-reference_ is only used when creating such a file, and it must not contain a digest.
**docker-daemon:**_docker-reference_
An image _docker-reference_ stored in the docker daemon internal storage. _docker-reference_ must contain either a tag or a digest. Alternatively, when reading images, the format can be docker-daemon:algo:digest (an image ID).
**oci:**_path_**:**_tag_
An image _tag_ in a directory compliant with "Open Container Image
Layout Specification" at _path_.
An image _tag_ in a directory compliant with "Open Container Image Layout Specification" at _path_.
**ostree:**_image_[**@**_/absolute/repo/path_]
An image in a local OSTree repository. _/absolute/repo/path_ defaults to _/ostree/repo_.
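For example (the archive path is illustrative), an image can be round-tripped between the docker daemon and a `docker save`-style archive:
```sh
$ skopeo copy docker-daemon:alpine:latest docker-archive:/tmp/alpine.tar:alpine:latest
$ skopeo inspect docker-archive:/tmp/alpine.tar
```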
# OPTIONS
@@ -41,7 +53,13 @@ Most commands refer to container images, using a _transport_`:`_details_ format.
**--insecure-policy** Adopt an insecure, permissive policy that allows anything. This obviates the need for a policy file.
**--registries.d** _dir_ use registry configuration files in _dir_ (e.g. for docker signature storage), overriding the default path.
**--registries.d** _dir_ use registry configuration files in _dir_ (e.g. for container signature storage), overriding the default path.
**--override-arch** _arch_ Use _arch_ instead of the architecture of the machine for choosing images.
**--override-os** _OS_ Use _OS_ instead of the running OS for choosing images.
**--command-timeout** _duration_ Timeout for the command execution.
**--help**|**-h** Show help
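For example, to give up on a slow copy after two minutes (the registry name is illustrative):
```sh
$ skopeo --command-timeout 2m copy docker://registry.example.com/app:latest dir:/tmp/app
```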
@@ -60,40 +78,65 @@ Uses the system's trust policy to validate images, rejects images not trusted by
_destination-image_ use the "image name" format described above
**--authfile** _path_
Path of the authentication file. Default is ${XDG_RUNTIME\_DIR}/containers/auth.json, which is set using `kpod login`.
If the authorization state is not found there, $HOME/.docker/config.json is checked, which is set using `docker login`.
**--format, -f** _manifest-type_ Manifest type (oci, v2s1, or v2s2) to use when saving image to directory using the 'dir:' transport (default is manifest type of source)
**--remove-signatures** do not copy signatures, if any, from _source-image_. Necessary when copying a signed image to a destination which does not support signatures.
**--sign-by=**_key-id_ add a signature using that key ID for an image name corresponding to _destination-image_
**--src-creds** _username[:password]_ for accessing the source registry
**--dest-compress** _bool-value_ Compress tarball image layers when saving to directory using the 'dir' transport. (default is same compression type as source)
**--dest-creds** _username[:password]_ for accessing the destination registry
**--src-cert-dir** _path_ Use certificates at _path_ (*.crt, *.cert, *.key) to connect to the source registry
**--src-cert-dir** _path_ Use certificates at _path_ (*.crt, *.cert, *.key) to connect to the source registry or daemon
**--src-tls-verify** _bool-value_ Require HTTPS and verify certificates when talking to docker source registry (defaults to true)
**--src-tls-verify** _bool-value_ Require HTTPS and verify certificates when talking to container source registry or daemon (defaults to true)
**--dest-cert-dir** _path_ Use certificates at _path_ (*.crt, *.cert, *.key) to connect to the destination registry
**--dest-cert-dir** _path_ Use certificates at _path_ (*.crt, *.cert, *.key) to connect to the destination registry or daemon
**--dest-ostree-tmp-dir** _path_ Directory to use for OSTree temporary files.
**--dest-tls-verify** _bool-value_ Require HTTPS and verify certificates when talking to docker destination registry (defaults to true)
**--dest-tls-verify** _bool-value_ Require HTTPS and verify certificates when talking to container destination registry or daemon (defaults to true)
**--src-daemon-host** _host_ Copy from docker daemon at _host_. If _host_ starts with `tcp://`, HTTPS is enabled by default. To use plain HTTP, use the form `http://` (default is `unix:///var/run/docker.sock`).
**--dest-daemon-host** _host_ Copy to docker daemon at _host_. If _host_ starts with `tcp://`, HTTPS is enabled by default. To use plain HTTP, use the form `http://` (default is `unix:///var/run/docker.sock`).
Existing signatures, if any, are preserved as well.
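For example (the host address and image are illustrative), forcing an OCI manifest on a `dir:` copy, and copying into a remote docker daemon over plain HTTP:
```sh
$ skopeo copy --format oci docker://docker.io/library/alpine:latest dir:/tmp/alpine-oci
$ skopeo copy --dest-daemon-host http://10.0.0.5:2375 docker://docker.io/library/alpine:latest docker-daemon:alpine:latest
```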
## skopeo delete
**skopeo delete** _image-name_
Mark _image-name_ for deletion. To release the allocated disk space, you need to execute the docker registry garabage collector. E.g.,
Mark _image-name_ for deletion. To release the allocated disk space, you must login to the container registry server and execute the container registry garbage collector. E.g.,
```sh
$ docker exec -it registry bin/registry garbage-collect /etc/docker/registry/config.yml
```
```sh
/usr/bin/registry garbage-collect /etc/docker-distribution/registry/config.yml
```
Note: sometimes the config.yml is stored in /etc/docker/registry/config.yml
If you are running the container registry inside of a container you would execute something like:
```sh
$ docker exec -it registry /usr/bin/registry garbage-collect /etc/docker-distribution/registry/config.yml
```
**--authfile** _path_
Path of the authentication file. Default is ${XDG_RUNTIME\_DIR}/containers/auth.json, which is set using `kpod login`.
If the authorization state is not found there, $HOME/.docker/config.json is checked, which is set using `docker login`.
**--creds** _username[:password]_ for accessing the registry
**--cert-dir** _path_ Use certificates at _path_ (*.crt, *.cert, *.key) to connect to the registry
**--tls-verify** _bool-value_ Require HTTPS and verify certificates when talking to docker registries (defaults to true)
**--tls-verify** _bool-value_ Require HTTPS and verify certificates when talking to container registries (defaults to true)
Additionally, the registry must allow deletions by setting `REGISTRY_STORAGE_DELETE_ENABLED=true` for the registry daemon.
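A minimal end-to-end sketch, assuming the stock registry:2 image and a throwaway tag:

```sh
# Run a registry that accepts DELETE requests.
docker run -d -p 5000:5000 --name registry \
    -e REGISTRY_STORAGE_DELETE_ENABLED=true registry:2

# Mark a tag for deletion, then reclaim space as shown above.
skopeo delete --tls-verify=false docker://localhost:5000/myimage:latest
```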
@@ -106,11 +149,16 @@ Return low-level information about _image-name_ in a registry
_image-name_ name of image to retrieve information about
**--authfile** _path_
Path of the authentication file. Default is ${XDG_RUNTIME\_DIR}/containers/auth.json, which is set using `kpod login`.
If the authorization state is not found there, $HOME/.docker/config.json is checked, which is set using `docker login`.
**--creds** _username[:password]_ for accessing the registry
**--cert-dir** _path_ Use certificates at _path_ (*.crt, *.cert, *.key) to connect to the registry
**--tls-verify** _bool-value_ Require HTTPS and verify certificates when talking to docker registries (defaults to true)
**--tls-verify** _bool-value_ Require HTTPS and verify certificates when talking to container registries (defaults to true)
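For example, against a local test registry (credentials and image name are placeholders):

```sh
skopeo inspect --tls-verify=false --creds testuser:testpassword \
    docker://localhost:5000/myimage:latest
```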
## skopeo manifest-digest
**skopeo manifest-digest** _manifest-file_
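Computes and prints the manifest digest of _manifest-file_; this is the digest that **skopeo standalone-verify** below expects. E.g. (the file name is illustrative):

```sh
$ skopeo manifest-digest busybox-manifest.json
```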
@@ -237,6 +285,9 @@ $ skopeo standalone-verify busybox-manifest.json registry.example.com/example/bu
Signature verified, digest sha256:20bf21ed457b390829cdbeec8795a7bea1626991fda603e0d01b4e7f60427e55
```
# SEE ALSO
kpod-login(1), docker-login(1)
# AUTHORS
Antonio Murdaca <runcom@redhat.com>, Miloslav Trmac <mitr@redhat.com>, Jhon Honce <jhonce@redhat.com>

docs/skopeo.svg Normal file

@@ -0,0 +1,546 @@
[SVG source omitted: 546 lines of Inkscape-generated markup for the skopeo logo (gradient definitions and telescope artwork); no documentation content.]



@@ -6,7 +6,7 @@ set -e
#
# Requirements:
# - The current directory should be a checkout of the skopeo source code
# (https://github.com/projectatomic/skopeo). Whatever version is checked out
# (https://github.com/containers/skopeo). Whatever version is checked out
# will be built.
# - The script is intended to be run inside the docker container specified
# in the Dockerfile at the root of the source. In other words:
@@ -19,7 +19,7 @@ set -e
set -o pipefail
export SKOPEO_PKG='github.com/projectatomic/skopeo'
export SKOPEO_PKG='github.com/containers/skopeo'
export SCRIPTDIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
export MAKEDIR="$SCRIPTDIR/make"


@@ -4,7 +4,7 @@ if [ -z "$VALIDATE_UPSTREAM" ]; then
# this is kind of an expensive check, so let's not do this twice if we
# are running more than one validate bundlescript
VALIDATE_REPO='https://github.com/projectatomic/skopeo.git'
VALIDATE_REPO='https://github.com/containers/skopeo.git'
VALIDATE_BRANCH='master'
if [ "$TRAVIS" = 'true' -a "$TRAVIS_PULL_REQUEST" != 'false' ]; then


@@ -1,28 +1,13 @@
#!/bin/bash
source "$(dirname "$BASH_SOURCE")/.validate"
errors=$(go vet $(go list -e ./... | grep -v "$SKOPEO_PKG"/vendor))
IFS=$'\n'
files=( $(validate_diff --diff-filter=ACMR --name-only -- '*.go' | grep -v '^vendor/' || true) )
unset IFS
errors=()
for f in "${files[@]}"; do
failedVet=$(go vet "$f")
if [ "$failedVet" ]; then
errors+=( "$failedVet" )
fi
done
if [ ${#errors[@]} -eq 0 ]; then
if [ -z "$errors" ]; then
echo 'Congratulations! All Go source files have been vetted.'
else
{
echo "Errors from go vet:"
for err in "${errors[@]}"; do
echo " - $err"
done
echo "$errors"
echo
echo 'Please fix the above errors. You can test via "go vet" and commit the result.'
echo

hack/travis_osx.sh Executable file

@@ -0,0 +1,17 @@
#!/usr/bin/env bash
set -e
export GOPATH=$(pwd)/_gopath
export PATH=$GOPATH/bin:$PATH
_containers="${GOPATH}/src/github.com/containers"
mkdir -vp ${_containers}
ln -vsf $(pwd) ${_containers}/skopeo
go version
go get -u github.com/cpuguy83/go-md2man golang.org/x/lint/golint
cd ${_containers}/skopeo
make validate-local test-unit-local binary-local
sudo make install
skopeo -v


@@ -1,15 +0,0 @@
#!/bin/bash
# This file is just wrapper around vndr (github.com/LK4D4/vndr) tool.
# For updating dependencies you should change `vendor.conf` file in root of the
# project. Please refer to https://github.com/LK4D4/vndr/blob/master/README.md for
# vndr usage.
set -e
if ! hash vndr; then
echo "Please install vndr with \"go get github.com/LK4D4/vndr\" and put it in your \$GOPATH"
exit 1
fi
vndr "$@"
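With this wrapper removed, re-vendoring presumably goes through the Makefile instead; a hedged sketch of the replacement workflow:

```sh
# Edit vendor.conf at the top of the tree, then regenerate vendor/.
make vendor
```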


@@ -5,16 +5,13 @@ import (
"os/exec"
"testing"
"github.com/containers/skopeo/version"
"github.com/go-check/check"
"github.com/projectatomic/skopeo/version"
)
const (
privateRegistryURL0 = "127.0.0.1:5000"
privateRegistryURL1 = "127.0.0.1:5001"
privateRegistryURL2 = "127.0.0.1:5002"
privateRegistryURL3 = "127.0.0.1:5003"
privateRegistryURL4 = "127.0.0.1:5004"
)
func Test(t *testing.T) {
@@ -26,15 +23,13 @@ func init() {
}
type SkopeoSuite struct {
regV1 *testRegistryV1
regV2 *testRegistryV2
regV2Shema1 *testRegistryV2
regV1WithAuth *testRegistryV1 // does v1 support auth?
regV2WithAuth *testRegistryV2
}
func (s *SkopeoSuite) SetUpSuite(c *check.C) {
_, err := exec.LookPath(skopeoBinary)
c.Assert(err, check.IsNil)
}
func (s *SkopeoSuite) TearDownSuite(c *check.C) {
@@ -42,24 +37,14 @@ func (s *SkopeoSuite) TearDownSuite(c *check.C) {
}
func (s *SkopeoSuite) SetUpTest(c *check.C) {
_, err := exec.LookPath(skopeoBinary)
c.Assert(err, check.IsNil)
s.regV1 = setupRegistryV1At(c, privateRegistryURL0, false) // TODO:(runcom)
s.regV2 = setupRegistryV2At(c, privateRegistryURL1, false, false)
s.regV2Shema1 = setupRegistryV2At(c, privateRegistryURL2, false, true)
s.regV1WithAuth = setupRegistryV1At(c, privateRegistryURL3, true) // not used
s.regV2WithAuth = setupRegistryV2At(c, privateRegistryURL4, true, false)
s.regV2 = setupRegistryV2At(c, privateRegistryURL0, false, false)
s.regV2WithAuth = setupRegistryV2At(c, privateRegistryURL1, true, false)
}
func (s *SkopeoSuite) TearDownTest(c *check.C) {
// not checking V1 registries now...
if s.regV2 != nil {
s.regV2.Close()
}
if s.regV2Shema1 != nil {
s.regV2Shema1.Close()
}
if s.regV2WithAuth != nil {
//cmd := exec.Command("docker", "logout", s.regV2WithAuth)
//c.Assert(cmd.Run(), check.IsNil)


@@ -4,6 +4,7 @@ import (
"encoding/json"
"fmt"
"io/ioutil"
"log"
"net/http"
"net/http/httptest"
"os"
@@ -14,6 +15,8 @@ import (
"github.com/containers/image/signature"
"github.com/go-check/check"
"github.com/opencontainers/go-digest"
imgspecv1 "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/opencontainers/image-tools/image"
)
func init() {
@@ -87,8 +90,16 @@ func (s *CopySuite) TearDownSuite(c *check.C) {
}
}
func (s *CopySuite) TestCopyFailsWithManifestList(c *check.C) {
assertSkopeoFails(c, ".*can not copy docker://estesp/busybox:latest: manifest contains multiple images.*", "copy", "docker://estesp/busybox:latest", "dir:somedir")
func (s *CopySuite) TestCopyWithManifestList(c *check.C) {
dir, err := ioutil.TempDir("", "copy-manifest-list")
c.Assert(err, check.IsNil)
defer os.RemoveAll(dir)
assertSkopeoSucceeds(c, "", "copy", "docker://estesp/busybox:latest", "dir:"+dir)
}
func (s *CopySuite) TestCopyFailsWhenImageOSDoesntMatchRuntimeOS(c *check.C) {
c.Skip("can't run this on Travis")
assertSkopeoFails(c, `.*image operating system "windows" cannot be used on "linux".*`, "copy", "docker://microsoft/windowsservercore", "containers-storage:test")
}
func (s *CopySuite) TestCopySimpleAtomicRegistry(c *check.C) {
@@ -130,14 +141,23 @@ func (s *CopySuite) TestCopySimple(c *check.C) {
out := combinedOutputOfCommand(c, "diff", "-urN", dir1, dir2)
c.Assert(out, check.Equals, "")
// docker v2s2 -> OCI image layout
// docker v2s2 -> OCI image layout with image name
// ociDest will be created by oci: if it doesn't exist
// so don't create it here to exercise auto-creation
ociDest := "busybox-latest"
ociDest := "busybox-latest-image"
ociImgName := "busybox"
defer os.RemoveAll(ociDest)
assertSkopeoSucceeds(c, "", "copy", "docker://busybox:latest", "oci:"+ociDest+":"+ociImgName)
_, err = os.Stat(ociDest)
c.Assert(err, check.IsNil)
// docker v2s2 -> OCI image layout without image name
ociDest = "busybox-latest-noimage"
defer os.RemoveAll(ociDest)
assertSkopeoSucceeds(c, "", "copy", "docker://busybox:latest", "oci:"+ociDest)
_, err = os.Stat(ociDest)
c.Assert(err, check.IsNil)
}
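The two new cases in this test correspond to invocations along these lines (the layout directories are the test's own names):

```sh
# OCI layout with an explicit image name inside the layout...
skopeo copy docker://busybox:latest oci:busybox-latest-image:busybox
# ...and one relying on auto-creation, with no image name given.
skopeo copy docker://busybox:latest oci:busybox-latest-noimage
```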
// Check whether dir: images in dir1 and dir2 are equal, ignoring schema1 signatures.
@@ -236,13 +256,15 @@ func (s *CopySuite) TestCopyOCIRoundTrip(c *check.C) {
c.Assert(out, check.Equals, "")
// For some silly reason we pass a logger to the OCI library here...
//logger := log.New(os.Stderr, "", 0)
logger := log.New(os.Stderr, "", 0)
// TODO: Verify using the upstream OCI image validator.
//err = image.ValidateLayout(oci1, nil, logger)
//c.Assert(err, check.IsNil)
//err = image.ValidateLayout(oci2, nil, logger)
//c.Assert(err, check.IsNil)
// Verify using the upstream OCI image validator, this should catch most
// non-compliance errors. DO NOT REMOVE THIS TEST UNLESS IT'S ABSOLUTELY
// NECESSARY.
err = image.ValidateLayout(oci1, nil, logger)
c.Assert(err, check.IsNil)
err = image.ValidateLayout(oci2, nil, logger)
c.Assert(err, check.IsNil)
// Now verify that everything is identical. Currently this is true, but
// because we recompute the manifests on-the-fly this doesn't necessarily
@@ -360,7 +382,7 @@ func (s *CopySuite) TestCopyDirSignatures(c *check.C) {
// Compression during copy
func (s *CopySuite) TestCopyCompression(c *check.C) {
const uncompresssedLayerFile = "160d823fdc48e62f97ba62df31e55424f8f5eb6b679c865eec6e59adfe304710.tar"
const uncompresssedLayerFile = "160d823fdc48e62f97ba62df31e55424f8f5eb6b679c865eec6e59adfe304710"
topDir, err := ioutil.TempDir("", "compression-top")
c.Assert(err, check.IsNil)
@@ -392,9 +414,7 @@ func (s *CopySuite) TestCopyCompression(c *check.C) {
fis, err := dirf.Readdir(-1)
c.Assert(err, check.IsNil)
for _, fi := range fis {
if strings.HasSuffix(fi.Name(), ".tar") {
c.Assert(fi.Size() < 2048, check.Equals, true)
}
c.Assert(fi.Size() < 2048, check.Equals, true)
}
}
}
@@ -575,6 +595,32 @@ func (s *CopySuite) TestCopySchemaConversion(c *check.C) {
s.testCopySchemaConversionRegistries(c, "docker://"+v2s1DockerRegistryURL+"/schema1", "docker://"+v2DockerRegistryURL+"/schema2")
}
func (s *CopySuite) TestCopyManifestConversion(c *check.C) {
topDir, err := ioutil.TempDir("", "manifest-conversion")
c.Assert(err, check.IsNil)
defer os.RemoveAll(topDir)
srcDir := filepath.Join(topDir, "source")
destDir1 := filepath.Join(topDir, "dest1")
destDir2 := filepath.Join(topDir, "dest2")
// oci to v2s1 and vice-versa not supported yet
// get v2s2 manifest type
assertSkopeoSucceeds(c, "", "copy", "docker://busybox", "dir:"+srcDir)
verifyManifestMIMEType(c, srcDir, manifest.DockerV2Schema2MediaType)
// convert from v2s2 to oci
assertSkopeoSucceeds(c, "", "copy", "--format=oci", "dir:"+srcDir, "dir:"+destDir1)
verifyManifestMIMEType(c, destDir1, imgspecv1.MediaTypeImageManifest)
// convert from oci to v2s2
assertSkopeoSucceeds(c, "", "copy", "--format=v2s2", "dir:"+destDir1, "dir:"+destDir2)
verifyManifestMIMEType(c, destDir2, manifest.DockerV2Schema2MediaType)
// convert from v2s2 to v2s1
assertSkopeoSucceeds(c, "", "copy", "--format=v2s1", "dir:"+srcDir, "dir:"+destDir1)
verifyManifestMIMEType(c, destDir1, manifest.DockerV2Schema1SignedMediaType)
// convert from v2s1 to v2s2
assertSkopeoSucceeds(c, "", "copy", "--format=v2s2", "dir:"+destDir1, "dir:"+destDir2)
verifyManifestMIMEType(c, destDir2, manifest.DockerV2Schema2MediaType)
}
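The conversions this test walks through can be sketched directly on the command line (directories are placeholders):

```sh
skopeo copy docker://busybox dir:/tmp/src             # v2s2, as served
skopeo copy --format=oci  dir:/tmp/src dir:/tmp/oci   # v2s2 -> OCI
skopeo copy --format=v2s2 dir:/tmp/oci dir:/tmp/back  # OCI  -> v2s2
skopeo copy --format=v2s1 dir:/tmp/src dir:/tmp/v2s1  # v2s2 -> v2s1
```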
func (s *CopySuite) testCopySchemaConversionRegistries(c *check.C, schema1Registry, schema2Registry string) {
topDir, err := ioutil.TempDir("", "schema-conversion")
c.Assert(err, check.IsNil)


@@ -13,23 +13,10 @@ import (
)
const (
binaryV1 = "docker-registry"
binaryV2 = "registry-v2"
binaryV2Schema1 = "registry-v2-schema1"
)
type testRegistryV1 struct {
cmd *exec.Cmd
url string
dir string
}
func setupRegistryV1At(c *check.C, url string, auth bool) *testRegistryV1 {
return &testRegistryV1{
url: url,
}
}
type testRegistryV2 struct {
cmd *exec.Cmd
url string
@@ -44,7 +31,7 @@ func setupRegistryV2At(c *check.C, url string, auth, schema1 bool) *testRegistry
c.Assert(err, check.IsNil)
// Wait for registry to be ready to serve requests.
for i := 0; i != 5; i++ {
for i := 0; i != 50; i++ {
if err = reg.Ping(); err == nil {
break
}


@@ -36,15 +36,8 @@ func findFingerprint(lineBytes []byte) (string, error) {
return "", errors.New("No fingerprint found")
}
func (s *SigningSuite) SetUpTest(c *check.C) {
mech, _, err := signature.NewEphemeralGPGSigningMechanism([]byte{})
c.Assert(err, check.IsNil)
defer mech.Close()
if err := mech.SupportsSigning(); err != nil { // FIXME? Test that verification and policy enforcement works, using signatures from fixtures
c.Skip(fmt.Sprintf("Signing not supported: %v", err))
}
_, err = exec.LookPath(skopeoBinary)
func (s *SigningSuite) SetUpSuite(c *check.C) {
_, err := exec.LookPath(skopeoBinary)
c.Assert(err, check.IsNil)
s.gpgHome, err = ioutil.TempDir("", "skopeo-gpg")
@@ -59,7 +52,7 @@ func (s *SigningSuite) SetUpTest(c *check.C) {
c.Assert(err, check.IsNil)
}
func (s *SigningSuite) TearDownTest(c *check.C) {
func (s *SigningSuite) TearDownSuite(c *check.C) {
if s.gpgHome != "" {
err := os.RemoveAll(s.gpgHome)
c.Assert(err, check.IsNil)
@@ -70,6 +63,13 @@ func (s *SigningSuite) TearDownTest(c *check.C) {
}
func (s *SigningSuite) TestSignVerifySmoke(c *check.C) {
mech, _, err := signature.NewEphemeralGPGSigningMechanism([]byte{})
c.Assert(err, check.IsNil)
defer mech.Close()
if err := mech.SupportsSigning(); err != nil { // FIXME? Test that verification and policy enforcement works, using signatures from fixtures
c.Skip(fmt.Sprintf("Signing not supported: %v", err))
}
manifestPath := "fixtures/image.manifest.json"
dockerReference := "testing/smoketest"


@@ -1,40 +1,56 @@
github.com/urfave/cli v1.17.0
github.com/kr/pretty v0.1.0
github.com/kr/text v0.1.0
github.com/containers/image master
github.com/opencontainers/go-digest master
gopkg.in/cheggaaa/pb.v1 ad4efe000aa550bb54918c06ebbadc0ff17687b9 https://github.com/cheggaaa/pb
github.com/containers/storage master
github.com/Sirupsen/logrus v0.10.0
github.com/sirupsen/logrus v1.0.0
github.com/go-check/check v1
github.com/stretchr/testify v1.1.3
github.com/davecgh/go-spew master
github.com/pmezard/go-difflib master
github.com/pkg/errors master
golang.org/x/crypto master
github.com/ulikunitz/xz v0.5.4
# docker deps from https://github.com/docker/docker/blob/v1.11.2/hack/vendor.sh
github.com/docker/docker v1.13.0
github.com/docker/go-connections 4ccf312bf1d35e5dbda654e57a9be4c3f3cd0366
github.com/vbatts/tar-split v0.10.1
github.com/docker/docker da99009bbb1165d1ac5688b5c81d2f589d418341
github.com/docker/go-connections 7beb39f0b969b075d1325fecb092faf27fd357b6
github.com/containerd/continuity d8fb8589b0e8e85b8c8bbaa8840226d0dfeb7371
github.com/vbatts/tar-split v0.10.2
github.com/gorilla/context 14f550f51a
github.com/gorilla/mux e444e69cbd
github.com/docker/go-units 8a7beacffa3009a9ac66bad506b18ffdd110cf97
golang.org/x/net master
github.com/gogo/protobuf fcdc5011193ff531a548e9b0301828d5a5b97fd8
# end docker deps
golang.org/x/text master
github.com/docker/distribution master
# docker/distributions dependencies
github.com/docker/go-metrics 399ea8c73916000c64c2c76e8da00ca82f8387ab
github.com/prometheus/client_golang c332b6f63c0658a65eca15c0e5247ded801cf564
github.com/prometheus/client_model 99fa1f4be8e564e8a6b613da7fa6f46c9edafc6c
github.com/prometheus/common 89604d197083d4781071d3c65855d24ecfb0a563
github.com/prometheus/procfs cb4147076ac75738c9a7d279075a253c0cc5acbd
github.com/beorn7/perks 4c0e84591b9aa9e6dcfdf3e020114cd81f89d5f9
github.com/matttproud/golang_protobuf_extensions c12348ce28de40eed0136aa2b644d0ee0650e56c
github.com/golang/protobuf 8d92cf5fc15a4382f8964b08e1f42a75c0591aa3
# end of docker/distribution dependencies
github.com/docker/libtrust master
github.com/docker/docker-credential-helpers d68f9aeca33f5fd3f08eeae5e9d175edf4e731d1
github.com/opencontainers/runc master
github.com/opencontainers/image-spec v1.0.0-rc5
github.com/opencontainers/image-spec unique-ids https://github.com/mtrmac/image-spec
# -- start OCI image validation requirements.
github.com/opencontainers/runtime-spec v1.0.0-rc4
github.com/opencontainers/image-tools v0.1.0
github.com/opencontainers/runtime-spec v1.0.0
github.com/opencontainers/image-tools 6d941547fa1df31900990b3fb47ec2468c9c6469
github.com/xeipuuv/gojsonschema master
github.com/xeipuuv/gojsonreference master
github.com/xeipuuv/gojsonpointer master
go4.org master https://github.com/camlistore/go4
github.com/ostreedev/ostree-go aeb02c6b6aa2889db3ef62f7855650755befd460
# -- end OCI image validation requirements
github.com/mtrmac/gpgme master
# openshift/origin' k8s dependencies as of OpenShift v1.1.5
github.com/golang/glog 44145f04b68cf362d9c4df2182967c2275eaefed
k8s.io/client-go master
github.com/ghodss/yaml 73d445a93680fa1a78ae23a5839bad48f32ba1ee
gopkg.in/yaml.v2 d466437aa4adc35830964cffc5b5f262c63ddcb4
@@ -43,3 +59,8 @@ github.com/imdario/mergo 6633656539c1639d9d78127b7d47c622b5d7b6dc
github.com/mistifyio/go-zfs 22c9b32c84eb0d0c6f4043b6e90fc94073de92fa
github.com/pborman/uuid v1.0
github.com/opencontainers/selinux master
golang.org/x/sys master
github.com/tchap/go-patricia v2.2.6
github.com/BurntSushi/toml master
github.com/pquerna/ffjson d49c2bc1aa135aad0c6f4fc2056623ec78f5d5ac
github.com/syndtr/gocapability master

vendor/github.com/BurntSushi/toml/COPYING generated vendored Normal file

@@ -0,0 +1,21 @@
The MIT License (MIT)
Copyright (c) 2013 TOML authors
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.

vendor/github.com/BurntSushi/toml/README.md generated vendored Normal file

@@ -0,0 +1,218 @@
## TOML parser and encoder for Go with reflection
TOML stands for Tom's Obvious, Minimal Language. This Go package provides a
reflection interface similar to Go's standard library `json` and `xml`
packages. This package also supports the `encoding.TextUnmarshaler` and
`encoding.TextMarshaler` interfaces so that you can define custom data
representations. (There is an example of this below.)
Spec: https://github.com/toml-lang/toml
Compatible with TOML version
[v0.4.0](https://github.com/toml-lang/toml/blob/master/versions/en/toml-v0.4.0.md)
Documentation: https://godoc.org/github.com/BurntSushi/toml
Installation:
```bash
go get github.com/BurntSushi/toml
```
Try the toml validator:
```bash
go get github.com/BurntSushi/toml/cmd/tomlv
tomlv some-toml-file.toml
```
[![Build Status](https://travis-ci.org/BurntSushi/toml.svg?branch=master)](https://travis-ci.org/BurntSushi/toml) [![GoDoc](https://godoc.org/github.com/BurntSushi/toml?status.svg)](https://godoc.org/github.com/BurntSushi/toml)
### Testing
This package passes all tests in
[toml-test](https://github.com/BurntSushi/toml-test) for both the decoder
and the encoder.
### Examples
This package works similarly to how the Go standard library handles `XML`
and `JSON`. Namely, data is loaded into Go values via reflection.
For the simplest example, consider some TOML file as just a list of keys
and values:
```toml
Age = 25
Cats = [ "Cauchy", "Plato" ]
Pi = 3.14
Perfection = [ 6, 28, 496, 8128 ]
DOB = 1987-07-05T05:45:00Z
```
Which could be defined in Go as:
```go
type Config struct {
Age int
Cats []string
Pi float64
Perfection []int
DOB time.Time // requires `import time`
}
```
And then decoded with:
```go
var conf Config
if _, err := toml.Decode(tomlData, &conf); err != nil {
// handle error
}
```
You can also use struct tags if your struct field name doesn't map to a TOML
key value directly:
```toml
some_key_NAME = "wat"
```
```go
type TOML struct {
ObscureKey string `toml:"some_key_NAME"`
}
```
### Using the `encoding.TextUnmarshaler` interface
Here's an example that automatically parses duration strings into
`time.Duration` values:
```toml
[[song]]
name = "Thunder Road"
duration = "4m49s"
[[song]]
name = "Stairway to Heaven"
duration = "8m03s"
```
Which can be decoded with:
```go
type song struct {
Name string
Duration duration
}
type songs struct {
Song []song
}
var favorites songs
if _, err := toml.Decode(blob, &favorites); err != nil {
log.Fatal(err)
}
for _, s := range favorites.Song {
fmt.Printf("%s (%s)\n", s.Name, s.Duration)
}
```
And you'll also need a `duration` type that satisfies the
`encoding.TextUnmarshaler` interface:
```go
type duration struct {
time.Duration
}
func (d *duration) UnmarshalText(text []byte) error {
var err error
d.Duration, err = time.ParseDuration(string(text))
return err
}
```
### More complex usage
Here's an example of how to load the example from the official spec page:
```toml
# This is a TOML document. Boom.
title = "TOML Example"
[owner]
name = "Tom Preston-Werner"
organization = "GitHub"
bio = "GitHub Cofounder & CEO\nLikes tater tots and beer."
dob = 1979-05-27T07:32:00Z # First class dates? Why not?
[database]
server = "192.168.1.1"
ports = [ 8001, 8001, 8002 ]
connection_max = 5000
enabled = true
[servers]
# You can indent as you please. Tabs or spaces. TOML don't care.
[servers.alpha]
ip = "10.0.0.1"
dc = "eqdc10"
[servers.beta]
ip = "10.0.0.2"
dc = "eqdc10"
[clients]
data = [ ["gamma", "delta"], [1, 2] ] # just an update to make sure parsers support it
# Line breaks are OK when inside arrays
hosts = [
"alpha",
"omega"
]
```
And the corresponding Go types are:
```go
type tomlConfig struct {
Title string
Owner ownerInfo
DB database `toml:"database"`
Servers map[string]server
Clients clients
}
type ownerInfo struct {
Name string
Org string `toml:"organization"`
Bio string
DOB time.Time
}
type database struct {
Server string
Ports []int
ConnMax int `toml:"connection_max"`
Enabled bool
}
type server struct {
IP string
DC string
}
type clients struct {
Data [][]interface{}
Hosts []string
}
```
Note that a case insensitive match will be tried if an exact match can't be
found.
A working example of the above can be found in `_examples/example.{go,toml}`.

vendor/github.com/BurntSushi/toml/decode.go generated vendored Normal file

@@ -0,0 +1,509 @@
package toml
import (
"fmt"
"io"
"io/ioutil"
"math"
"reflect"
"strings"
"time"
)
func e(format string, args ...interface{}) error {
return fmt.Errorf("toml: "+format, args...)
}
// Unmarshaler is the interface implemented by objects that can unmarshal a
// TOML description of themselves.
type Unmarshaler interface {
UnmarshalTOML(interface{}) error
}
// Unmarshal decodes the contents of `p` in TOML format into a pointer `v`.
func Unmarshal(p []byte, v interface{}) error {
_, err := Decode(string(p), v)
return err
}
// Primitive is a TOML value that hasn't been decoded into a Go value.
// When using the various `Decode*` functions, the type `Primitive` may
// be given to any value, and its decoding will be delayed.
//
// A `Primitive` value can be decoded using the `PrimitiveDecode` function.
//
// The underlying representation of a `Primitive` value is subject to change.
// Do not rely on it.
//
// N.B. Primitive values are still parsed, so using them will only avoid
// the overhead of reflection. They can be useful when you don't know the
// exact type of TOML data until run time.
type Primitive struct {
undecoded interface{}
context Key
}
// DEPRECATED!
//
// Use MetaData.PrimitiveDecode instead.
func PrimitiveDecode(primValue Primitive, v interface{}) error {
md := MetaData{decoded: make(map[string]bool)}
return md.unify(primValue.undecoded, rvalue(v))
}
// PrimitiveDecode is just like the other `Decode*` functions, except it
// decodes a TOML value that has already been parsed. Valid primitive values
// can *only* be obtained from values filled by the decoder functions,
// including this method. (i.e., `v` may contain more `Primitive`
// values.)
//
// Meta data for primitive values is included in the meta data returned by
// the `Decode*` functions with one exception: keys returned by the Undecoded
// method will only reflect keys that were decoded. Namely, any keys hidden
// behind a Primitive will be considered undecoded. Executing this method will
// update the undecoded keys in the meta data. (See the example.)
func (md *MetaData) PrimitiveDecode(primValue Primitive, v interface{}) error {
md.context = primValue.context
defer func() { md.context = nil }()
return md.unify(primValue.undecoded, rvalue(v))
}
// Decode will decode the contents of `data` in TOML format into a pointer
// `v`.
//
// TOML hashes correspond to Go structs or maps. (Dealer's choice. They can be
// used interchangeably.)
//
// TOML arrays of tables correspond to either a slice of structs or a slice
// of maps.
//
// TOML datetimes correspond to Go `time.Time` values.
//
// All other TOML types (float, string, int, bool and array) correspond
// to the obvious Go types.
//
// An exception to the above rules is if a type implements the
// encoding.TextUnmarshaler interface. In this case, any primitive TOML value
// (floats, strings, integers, booleans and datetimes) will be converted to
// a byte string and given to the value's UnmarshalText method. See the
// Unmarshaler example for a demonstration with time duration strings.
//
// Key mapping
//
// TOML keys can map to either keys in a Go map or field names in a Go
// struct. The special `toml` struct tag may be used to map TOML keys to
// struct fields that don't match the key name exactly. (See the example.)
// A case insensitive match to struct names will be tried if an exact match
// can't be found.
//
// The mapping between TOML values and Go values is loose. That is, there
// may exist TOML values that cannot be placed into your representation, and
// there may be parts of your representation that do not correspond to
// TOML values. This loose mapping can be made stricter by using the IsDefined
// and/or Undecoded methods on the MetaData returned.
//
// This decoder will not handle cyclic types. If a cyclic type is passed,
// `Decode` will not terminate.
func Decode(data string, v interface{}) (MetaData, error) {
rv := reflect.ValueOf(v)
if rv.Kind() != reflect.Ptr {
return MetaData{}, e("Decode of non-pointer %s", reflect.TypeOf(v))
}
if rv.IsNil() {
return MetaData{}, e("Decode of nil %s", reflect.TypeOf(v))
}
p, err := parse(data)
if err != nil {
return MetaData{}, err
}
md := MetaData{
p.mapping, p.types, p.ordered,
make(map[string]bool, len(p.ordered)), nil,
}
return md, md.unify(p.mapping, indirect(rv))
}
// DecodeFile is just like Decode, except it will automatically read the
// contents of the file at `fpath` and decode it for you.
func DecodeFile(fpath string, v interface{}) (MetaData, error) {
bs, err := ioutil.ReadFile(fpath)
if err != nil {
return MetaData{}, err
}
return Decode(string(bs), v)
}
// DecodeReader is just like Decode, except it will consume all bytes
// from the reader and decode it for you.
func DecodeReader(r io.Reader, v interface{}) (MetaData, error) {
bs, err := ioutil.ReadAll(r)
if err != nil {
return MetaData{}, err
}
return Decode(string(bs), v)
}
// unify performs a sort of type unification based on the structure of `rv`,
// which is the client representation.
//
// Any type mismatch produces an error. Finding a type that we don't know
// how to handle produces an unsupported type error.
func (md *MetaData) unify(data interface{}, rv reflect.Value) error {
// Special case. Look for a `Primitive` value.
if rv.Type() == reflect.TypeOf((*Primitive)(nil)).Elem() {
// Save the undecoded data and the key context into the primitive
// value.
context := make(Key, len(md.context))
copy(context, md.context)
rv.Set(reflect.ValueOf(Primitive{
undecoded: data,
context: context,
}))
return nil
}
// Special case. Unmarshaler Interface support.
if rv.CanAddr() {
if v, ok := rv.Addr().Interface().(Unmarshaler); ok {
return v.UnmarshalTOML(data)
}
}
// Special case. Handle time.Time values specifically.
// TODO: Remove this code when we decide to drop support for Go 1.1.
// This isn't necessary in Go 1.2 because time.Time satisfies the encoding
// interfaces.
if rv.Type().AssignableTo(rvalue(time.Time{}).Type()) {
return md.unifyDatetime(data, rv)
}
// Special case. Look for a value satisfying the TextUnmarshaler interface.
if v, ok := rv.Interface().(TextUnmarshaler); ok {
return md.unifyText(data, v)
}
// BUG(burntsushi)
// The behavior here is incorrect whenever a Go type satisfies the
// encoding.TextUnmarshaler interface but also corresponds to a TOML
// hash or array. In particular, the unmarshaler should only be applied
// to primitive TOML values. But at this point, it will be applied to
// all kinds of values and produce an incorrect error whenever those values
// are hashes or arrays (including arrays of tables).
k := rv.Kind()
// laziness
if k >= reflect.Int && k <= reflect.Uint64 {
return md.unifyInt(data, rv)
}
switch k {
case reflect.Ptr:
elem := reflect.New(rv.Type().Elem())
err := md.unify(data, reflect.Indirect(elem))
if err != nil {
return err
}
rv.Set(elem)
return nil
case reflect.Struct:
return md.unifyStruct(data, rv)
case reflect.Map:
return md.unifyMap(data, rv)
case reflect.Array:
return md.unifyArray(data, rv)
case reflect.Slice:
return md.unifySlice(data, rv)
case reflect.String:
return md.unifyString(data, rv)
case reflect.Bool:
return md.unifyBool(data, rv)
case reflect.Interface:
// we only support empty interfaces.
if rv.NumMethod() > 0 {
return e("unsupported type %s", rv.Type())
}
return md.unifyAnything(data, rv)
case reflect.Float32:
fallthrough
case reflect.Float64:
return md.unifyFloat64(data, rv)
}
return e("unsupported type %s", rv.Kind())
}
func (md *MetaData) unifyStruct(mapping interface{}, rv reflect.Value) error {
tmap, ok := mapping.(map[string]interface{})
if !ok {
if mapping == nil {
return nil
}
return e("type mismatch for %s: expected table but found %T",
rv.Type().String(), mapping)
}
for key, datum := range tmap {
var f *field
fields := cachedTypeFields(rv.Type())
for i := range fields {
ff := &fields[i]
if ff.name == key {
f = ff
break
}
if f == nil && strings.EqualFold(ff.name, key) {
f = ff
}
}
if f != nil {
subv := rv
for _, i := range f.index {
subv = indirect(subv.Field(i))
}
if isUnifiable(subv) {
md.decoded[md.context.add(key).String()] = true
md.context = append(md.context, key)
if err := md.unify(datum, subv); err != nil {
return err
}
md.context = md.context[0 : len(md.context)-1]
} else if f.name != "" {
// Bad user! No soup for you!
return e("cannot write unexported field %s.%s",
rv.Type().String(), f.name)
}
}
}
return nil
}
func (md *MetaData) unifyMap(mapping interface{}, rv reflect.Value) error {
tmap, ok := mapping.(map[string]interface{})
if !ok {
if tmap == nil {
return nil
}
return badtype("map", mapping)
}
if rv.IsNil() {
rv.Set(reflect.MakeMap(rv.Type()))
}
for k, v := range tmap {
md.decoded[md.context.add(k).String()] = true
md.context = append(md.context, k)
rvkey := indirect(reflect.New(rv.Type().Key()))
rvval := reflect.Indirect(reflect.New(rv.Type().Elem()))
if err := md.unify(v, rvval); err != nil {
return err
}
md.context = md.context[0 : len(md.context)-1]
rvkey.SetString(k)
rv.SetMapIndex(rvkey, rvval)
}
return nil
}
func (md *MetaData) unifyArray(data interface{}, rv reflect.Value) error {
datav := reflect.ValueOf(data)
if datav.Kind() != reflect.Slice {
if !datav.IsValid() {
return nil
}
return badtype("slice", data)
}
sliceLen := datav.Len()
if sliceLen != rv.Len() {
return e("expected array length %d; got TOML array of length %d",
rv.Len(), sliceLen)
}
return md.unifySliceArray(datav, rv)
}
func (md *MetaData) unifySlice(data interface{}, rv reflect.Value) error {
datav := reflect.ValueOf(data)
if datav.Kind() != reflect.Slice {
if !datav.IsValid() {
return nil
}
return badtype("slice", data)
}
n := datav.Len()
if rv.IsNil() || rv.Cap() < n {
rv.Set(reflect.MakeSlice(rv.Type(), n, n))
}
rv.SetLen(n)
return md.unifySliceArray(datav, rv)
}
func (md *MetaData) unifySliceArray(data, rv reflect.Value) error {
sliceLen := data.Len()
for i := 0; i < sliceLen; i++ {
v := data.Index(i).Interface()
sliceval := indirect(rv.Index(i))
if err := md.unify(v, sliceval); err != nil {
return err
}
}
return nil
}
func (md *MetaData) unifyDatetime(data interface{}, rv reflect.Value) error {
if _, ok := data.(time.Time); ok {
rv.Set(reflect.ValueOf(data))
return nil
}
return badtype("time.Time", data)
}
func (md *MetaData) unifyString(data interface{}, rv reflect.Value) error {
if s, ok := data.(string); ok {
rv.SetString(s)
return nil
}
return badtype("string", data)
}
func (md *MetaData) unifyFloat64(data interface{}, rv reflect.Value) error {
if num, ok := data.(float64); ok {
switch rv.Kind() {
case reflect.Float32:
fallthrough
case reflect.Float64:
rv.SetFloat(num)
default:
panic("bug")
}
return nil
}
return badtype("float", data)
}
func (md *MetaData) unifyInt(data interface{}, rv reflect.Value) error {
if num, ok := data.(int64); ok {
if rv.Kind() >= reflect.Int && rv.Kind() <= reflect.Int64 {
switch rv.Kind() {
case reflect.Int, reflect.Int64:
// No bounds checking necessary.
case reflect.Int8:
if num < math.MinInt8 || num > math.MaxInt8 {
return e("value %d is out of range for int8", num)
}
case reflect.Int16:
if num < math.MinInt16 || num > math.MaxInt16 {
return e("value %d is out of range for int16", num)
}
case reflect.Int32:
if num < math.MinInt32 || num > math.MaxInt32 {
return e("value %d is out of range for int32", num)
}
}
rv.SetInt(num)
} else if rv.Kind() >= reflect.Uint && rv.Kind() <= reflect.Uint64 {
unum := uint64(num)
switch rv.Kind() {
case reflect.Uint, reflect.Uint64:
// No bounds checking necessary.
case reflect.Uint8:
if num < 0 || unum > math.MaxUint8 {
return e("value %d is out of range for uint8", num)
}
case reflect.Uint16:
if num < 0 || unum > math.MaxUint16 {
return e("value %d is out of range for uint16", num)
}
case reflect.Uint32:
if num < 0 || unum > math.MaxUint32 {
return e("value %d is out of range for uint32", num)
}
}
rv.SetUint(unum)
} else {
panic("unreachable")
}
return nil
}
return badtype("integer", data)
}
func (md *MetaData) unifyBool(data interface{}, rv reflect.Value) error {
if b, ok := data.(bool); ok {
rv.SetBool(b)
return nil
}
return badtype("boolean", data)
}
func (md *MetaData) unifyAnything(data interface{}, rv reflect.Value) error {
rv.Set(reflect.ValueOf(data))
return nil
}
func (md *MetaData) unifyText(data interface{}, v TextUnmarshaler) error {
var s string
switch sdata := data.(type) {
case TextMarshaler:
text, err := sdata.MarshalText()
if err != nil {
return err
}
s = string(text)
case fmt.Stringer:
s = sdata.String()
case string:
s = sdata
case bool:
s = fmt.Sprintf("%v", sdata)
case int64:
s = fmt.Sprintf("%d", sdata)
case float64:
s = fmt.Sprintf("%f", sdata)
default:
return badtype("primitive (string-like)", data)
}
if err := v.UnmarshalText([]byte(s)); err != nil {
return err
}
return nil
}
// rvalue returns a reflect.Value of `v`. All pointers are resolved.
func rvalue(v interface{}) reflect.Value {
return indirect(reflect.ValueOf(v))
}
// indirect returns the value pointed to by a pointer.
// Pointers are followed until the value is not a pointer.
// New values are allocated for each nil pointer.
//
// An exception to this rule is if the value satisfies an interface of
// interest to us (like encoding.TextUnmarshaler).
func indirect(v reflect.Value) reflect.Value {
if v.Kind() != reflect.Ptr {
if v.CanSet() {
pv := v.Addr()
if _, ok := pv.Interface().(TextUnmarshaler); ok {
return pv
}
}
return v
}
if v.IsNil() {
v.Set(reflect.New(v.Type().Elem()))
}
return indirect(reflect.Indirect(v))
}
func isUnifiable(rv reflect.Value) bool {
if rv.CanSet() {
return true
}
if _, ok := rv.Interface().(TextUnmarshaler); ok {
return true
}
return false
}
func badtype(expected string, data interface{}) error {
return e("cannot load TOML value of type %T into a Go %s", data, expected)
}
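For reference, a minimal sketch (hypothetical type, not part of the vendored file) of a custom type that decodes through the unifyText path above by implementing encoding.TextUnmarshaler:

package main

import "fmt"

// LogLevel decodes from TOML strings such as "debug" or "info"; the decoder
// routes it through unifyText because *LogLevel implements TextUnmarshaler.
type LogLevel int

func (l *LogLevel) UnmarshalText(text []byte) error {
	switch string(text) {
	case "debug":
		*l = 0
	case "info":
		*l = 1
	default:
		return fmt.Errorf("unknown log level %q", text)
	}
	return nil
}

func main() {
	var lvl LogLevel
	if err := lvl.UnmarshalText([]byte("info")); err != nil {
		panic(err)
	}
	fmt.Println(lvl) // 1
}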

121
vendor/github.com/BurntSushi/toml/decode_meta.go generated vendored Normal file

@@ -0,0 +1,121 @@
package toml
import "strings"
// MetaData allows access to meta information about TOML data that may not
// be inferable via reflection. In particular, it records whether a key has
// been defined and the TOML type of a key.
type MetaData struct {
mapping map[string]interface{}
types map[string]tomlType
keys []Key
decoded map[string]bool
context Key // Used only during decoding.
}
// IsDefined returns true if the key given exists in the TOML data. The key
// should be specified hierarchically. e.g.,
//
// // access the TOML key 'a.b.c'
// IsDefined("a", "b", "c")
//
// IsDefined will return false if an empty key is given. Keys are case sensitive.
func (md *MetaData) IsDefined(key ...string) bool {
if len(key) == 0 {
return false
}
var hash map[string]interface{}
var ok bool
var hashOrVal interface{} = md.mapping
for _, k := range key {
if hash, ok = hashOrVal.(map[string]interface{}); !ok {
return false
}
if hashOrVal, ok = hash[k]; !ok {
return false
}
}
return true
}
// Type returns a string representation of the type of the key specified.
//
// Type will return the empty string if given an empty key or a key that
// does not exist. Keys are case sensitive.
func (md *MetaData) Type(key ...string) string {
fullkey := strings.Join(key, ".")
if typ, ok := md.types[fullkey]; ok {
return typ.typeString()
}
return ""
}
// Key is the type of any TOML key, including key groups. Use (MetaData).Keys
// to get values of this type.
type Key []string
func (k Key) String() string {
return strings.Join(k, ".")
}
func (k Key) maybeQuotedAll() string {
var ss []string
for i := range k {
ss = append(ss, k.maybeQuoted(i))
}
return strings.Join(ss, ".")
}
func (k Key) maybeQuoted(i int) string {
quote := false
for _, c := range k[i] {
if !isBareKeyChar(c) {
quote = true
break
}
}
if quote {
return "\"" + strings.Replace(k[i], "\"", "\\\"", -1) + "\""
}
return k[i]
}
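// Example (illustrative, not in the original file): for Key{"a", "b.c"},
// maybeQuotedAll returns `a."b.c"`, since "b.c" contains a character that
// is not allowed in a bare key and must therefore be quoted.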
func (k Key) add(piece string) Key {
newKey := make(Key, len(k)+1)
copy(newKey, k)
newKey[len(k)] = piece
return newKey
}
// Keys returns a slice of every key in the TOML data, including key groups.
// Each key is itself a slice, where the first element is the top of the
// hierarchy and the last is the most specific.
//
// The list will have the same order as the keys appeared in the TOML data.
//
// All keys returned are non-empty.
func (md *MetaData) Keys() []Key {
return md.keys
}
// Undecoded returns all keys that have not been decoded in the order in which
// they appear in the original TOML document.
//
// This includes keys that haven't been decoded because of a Primitive value.
// Once the Primitive value is decoded, the keys will be considered decoded.
//
// Also note that decoding into an empty interface will result in no decoding,
// and so no keys will be considered decoded.
//
// In this sense, the Undecoded keys correspond to keys in the TOML document
// that do not have a concrete type in your representation.
func (md *MetaData) Undecoded() []Key {
undecoded := make([]Key, 0, len(md.keys))
for _, key := range md.keys {
if !md.decoded[key.String()] {
undecoded = append(undecoded, key)
}
}
return undecoded
}
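To illustrate how these MetaData queries fit together, a minimal usage sketch (the Config struct and TOML literal are hypothetical, not part of the vendored file):

package main

import (
	"fmt"

	"github.com/BurntSushi/toml"
)

type Config struct {
	Name string `toml:"name"`
	Port int    `toml:"port"`
}

func main() {
	doc := `
name = "skopeo"
port = 5000
extra = true
`
	var cfg Config
	md, err := toml.Decode(doc, &cfg)
	if err != nil {
		panic(err)
	}
	fmt.Println(md.IsDefined("name")) // true
	fmt.Println(md.Type("port"))      // Integer
	fmt.Println(md.Undecoded())       // [extra]: no matching struct field
}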

27
vendor/github.com/BurntSushi/toml/doc.go generated vendored Normal file

@@ -0,0 +1,27 @@
/*
Package toml provides facilities for decoding and encoding TOML configuration
files via reflection. There is also support for delaying decoding with
the Primitive type, and querying the set of keys in a TOML document with the
MetaData type.
The specification implemented: https://github.com/toml-lang/toml
The sub-command github.com/BurntSushi/toml/cmd/tomlv can be used to verify
whether a file is a valid TOML document. It can also be used to print the
type of each key in a TOML document.
Testing
There are two important types of tests used for this package. The first is
contained inside '*_test.go' files and uses the standard Go unit testing
framework. These tests are primarily devoted to holistically testing the
decoder and encoder.
The second type of testing is used to verify the implementation's adherence
to the TOML specification. These tests have been factored into their own
project: https://github.com/BurntSushi/toml-test
The reason the tests are in a separate project is so that they can be used by
any implementation of TOML. Namely, it is language agnostic.
*/
package toml
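As a companion to the doc comment above, a minimal sketch of delayed decoding with the Primitive type (the ServerConfig struct and input are hypothetical, not part of the vendored file):

package main

import (
	"fmt"

	"github.com/BurntSushi/toml"
)

type ServerConfig struct {
	Host string `toml:"host"`
}

func main() {
	doc := `
[server]
host = "localhost"
`
	// Defer decoding by targeting Primitive values first.
	var raw map[string]toml.Primitive
	md, err := toml.Decode(doc, &raw)
	if err != nil {
		panic(err)
	}
	// Decode the "server" table only when it is actually needed.
	var server ServerConfig
	if err := md.PrimitiveDecode(raw["server"], &server); err != nil {
		panic(err)
	}
	fmt.Println(server.Host) // localhost
}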

568
vendor/github.com/BurntSushi/toml/encode.go generated vendored Normal file

@@ -0,0 +1,568 @@
package toml
import (
"bufio"
"errors"
"fmt"
"io"
"reflect"
"sort"
"strconv"
"strings"
"time"
)
type tomlEncodeError struct{ error }
var (
errArrayMixedElementTypes = errors.New(
"toml: cannot encode array with mixed element types")
errArrayNilElement = errors.New(
"toml: cannot encode array with nil element")
errNonString = errors.New(
"toml: cannot encode a map with non-string key type")
errAnonNonStruct = errors.New(
"toml: cannot encode an anonymous field that is not a struct")
errArrayNoTable = errors.New(
"toml: TOML array element cannot contain a table")
errNoKey = errors.New(
"toml: top-level values must be Go maps or structs")
errAnything = errors.New("") // used in testing
)
var quotedReplacer = strings.NewReplacer(
"\t", "\\t",
"\n", "\\n",
"\r", "\\r",
"\"", "\\\"",
"\\", "\\\\",
)
// Encoder encodes Go values as a TOML document to some io.Writer.
//
// The indentation level can be controlled with the Indent field.
type Encoder struct {
// A single indentation level. By default it is two spaces.
Indent string
// hasWritten is whether we have written any output to w yet.
hasWritten bool
w *bufio.Writer
}
// NewEncoder returns a TOML encoder that encodes Go values to the io.Writer
// given. By default, a single indentation level is 2 spaces.
func NewEncoder(w io.Writer) *Encoder {
return &Encoder{
w: bufio.NewWriter(w),
Indent: " ",
}
}
// Encode writes a TOML representation of the Go value to the underlying
// io.Writer. If the value given cannot be encoded to a valid TOML document,
// then an error is returned.
//
// The mapping between Go values and TOML values should be precisely the same
// as for the Decode* functions. Similarly, the TextMarshaler interface is
// supported by encoding the resulting bytes as strings. (If you want to write
// arbitrary binary data then you will need to use something like base64 since
// TOML does not have any binary types.)
//
// When encoding TOML hashes (i.e., Go maps or structs), keys without any
// sub-hashes are encoded first.
//
// If a Go map is encoded, then its keys are sorted alphabetically for
// deterministic output. More control over this behavior may be provided if
// there is demand for it.
//
// Encoding Go values without a corresponding TOML representation---like map
// types with non-string keys---will cause an error to be returned. Similarly
// for mixed arrays/slices, arrays/slices with nil elements, embedded
// non-struct types and nested slices containing maps or structs.
// (e.g., [][]map[string]string is not allowed but []map[string]string is OK
// and so is []map[string][]string.)
func (enc *Encoder) Encode(v interface{}) error {
rv := eindirect(reflect.ValueOf(v))
if err := enc.safeEncode(Key([]string{}), rv); err != nil {
return err
}
return enc.w.Flush()
}
func (enc *Encoder) safeEncode(key Key, rv reflect.Value) (err error) {
defer func() {
if r := recover(); r != nil {
if terr, ok := r.(tomlEncodeError); ok {
err = terr.error
return
}
panic(r)
}
}()
enc.encode(key, rv)
return nil
}
func (enc *Encoder) encode(key Key, rv reflect.Value) {
// Special case. Time needs to be in ISO8601 format.
// Special case. If we can marshal the type to text, then we use that.
// Basically, this prevents the encoder from handling these types as
// generic structs (or whatever the underlying type of a TextMarshaler is).
switch rv.Interface().(type) {
case time.Time, TextMarshaler:
enc.keyEqElement(key, rv)
return
}
k := rv.Kind()
switch k {
case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32,
reflect.Int64,
reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32,
reflect.Uint64,
reflect.Float32, reflect.Float64, reflect.String, reflect.Bool:
enc.keyEqElement(key, rv)
case reflect.Array, reflect.Slice:
if typeEqual(tomlArrayHash, tomlTypeOfGo(rv)) {
enc.eArrayOfTables(key, rv)
} else {
enc.keyEqElement(key, rv)
}
case reflect.Interface:
if rv.IsNil() {
return
}
enc.encode(key, rv.Elem())
case reflect.Map:
if rv.IsNil() {
return
}
enc.eTable(key, rv)
case reflect.Ptr:
if rv.IsNil() {
return
}
enc.encode(key, rv.Elem())
case reflect.Struct:
enc.eTable(key, rv)
default:
panic(e("unsupported type for key '%s': %s", key, k))
}
}
// eElement encodes any value that can be an array element (primitives and
// arrays).
func (enc *Encoder) eElement(rv reflect.Value) {
switch v := rv.Interface().(type) {
case time.Time:
// Special case time.Time as a primitive. Has to come before
// TextMarshaler below because time.Time implements
// encoding.TextMarshaler, but we need to always use UTC.
enc.wf(v.UTC().Format("2006-01-02T15:04:05Z"))
return
case TextMarshaler:
// Special case. Use text marshaler if it's available for this value.
if s, err := v.MarshalText(); err != nil {
encPanic(err)
} else {
enc.writeQuoted(string(s))
}
return
}
switch rv.Kind() {
case reflect.Bool:
enc.wf(strconv.FormatBool(rv.Bool()))
case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32,
reflect.Int64:
enc.wf(strconv.FormatInt(rv.Int(), 10))
case reflect.Uint, reflect.Uint8, reflect.Uint16,
reflect.Uint32, reflect.Uint64:
enc.wf(strconv.FormatUint(rv.Uint(), 10))
case reflect.Float32:
enc.wf(floatAddDecimal(strconv.FormatFloat(rv.Float(), 'f', -1, 32)))
case reflect.Float64:
enc.wf(floatAddDecimal(strconv.FormatFloat(rv.Float(), 'f', -1, 64)))
case reflect.Array, reflect.Slice:
enc.eArrayOrSliceElement(rv)
case reflect.Interface:
enc.eElement(rv.Elem())
case reflect.String:
enc.writeQuoted(rv.String())
default:
panic(e("unexpected primitive type: %s", rv.Kind()))
}
}
// By the TOML spec, all floats must have a decimal point with at least one
// digit on either side.
func floatAddDecimal(fstr string) string {
if !strings.Contains(fstr, ".") {
return fstr + ".0"
}
return fstr
}
func (enc *Encoder) writeQuoted(s string) {
enc.wf("\"%s\"", quotedReplacer.Replace(s))
}
func (enc *Encoder) eArrayOrSliceElement(rv reflect.Value) {
length := rv.Len()
enc.wf("[")
for i := 0; i < length; i++ {
elem := rv.Index(i)
enc.eElement(elem)
if i != length-1 {
enc.wf(", ")
}
}
enc.wf("]")
}
func (enc *Encoder) eArrayOfTables(key Key, rv reflect.Value) {
if len(key) == 0 {
encPanic(errNoKey)
}
for i := 0; i < rv.Len(); i++ {
trv := rv.Index(i)
if isNil(trv) {
continue
}
panicIfInvalidKey(key)
enc.newline()
enc.wf("%s[[%s]]", enc.indentStr(key), key.maybeQuotedAll())
enc.newline()
enc.eMapOrStruct(key, trv)
}
}
func (enc *Encoder) eTable(key Key, rv reflect.Value) {
panicIfInvalidKey(key)
if len(key) == 1 {
// Output an extra newline between top-level tables.
// (The newline isn't written if nothing else has been written though.)
enc.newline()
}
if len(key) > 0 {
enc.wf("%s[%s]", enc.indentStr(key), key.maybeQuotedAll())
enc.newline()
}
enc.eMapOrStruct(key, rv)
}
func (enc *Encoder) eMapOrStruct(key Key, rv reflect.Value) {
switch rv := eindirect(rv); rv.Kind() {
case reflect.Map:
enc.eMap(key, rv)
case reflect.Struct:
enc.eStruct(key, rv)
default:
panic("eTable: unhandled reflect.Value Kind: " + rv.Kind().String())
}
}
func (enc *Encoder) eMap(key Key, rv reflect.Value) {
rt := rv.Type()
if rt.Key().Kind() != reflect.String {
encPanic(errNonString)
}
// Sort keys so that we have deterministic output. And write keys directly
// underneath this key first, before writing sub-structs or sub-maps.
var mapKeysDirect, mapKeysSub []string
for _, mapKey := range rv.MapKeys() {
k := mapKey.String()
if typeIsHash(tomlTypeOfGo(rv.MapIndex(mapKey))) {
mapKeysSub = append(mapKeysSub, k)
} else {
mapKeysDirect = append(mapKeysDirect, k)
}
}
var writeMapKeys = func(mapKeys []string) {
sort.Strings(mapKeys)
for _, mapKey := range mapKeys {
mrv := rv.MapIndex(reflect.ValueOf(mapKey))
if isNil(mrv) {
// Don't write anything for nil fields.
continue
}
enc.encode(key.add(mapKey), mrv)
}
}
writeMapKeys(mapKeysDirect)
writeMapKeys(mapKeysSub)
}
func (enc *Encoder) eStruct(key Key, rv reflect.Value) {
// Write keys for fields directly under this key first, because if we write
// a field that creates a new table, then all keys under it will be in that
// table (not the one we're writing here).
rt := rv.Type()
var fieldsDirect, fieldsSub [][]int
var addFields func(rt reflect.Type, rv reflect.Value, start []int)
addFields = func(rt reflect.Type, rv reflect.Value, start []int) {
for i := 0; i < rt.NumField(); i++ {
f := rt.Field(i)
// skip unexported fields
if f.PkgPath != "" && !f.Anonymous {
continue
}
frv := rv.Field(i)
if f.Anonymous {
t := f.Type
switch t.Kind() {
case reflect.Struct:
// Treat anonymous struct fields with
// tag names as though they are not
// anonymous, like encoding/json does.
if getOptions(f.Tag).name == "" {
addFields(t, frv, f.Index)
continue
}
case reflect.Ptr:
if t.Elem().Kind() == reflect.Struct &&
getOptions(f.Tag).name == "" {
if !frv.IsNil() {
addFields(t.Elem(), frv.Elem(), f.Index)
}
continue
}
// Fall through to the normal field encoding logic below
// for non-struct anonymous fields.
}
}
if typeIsHash(tomlTypeOfGo(frv)) {
fieldsSub = append(fieldsSub, append(start, f.Index...))
} else {
fieldsDirect = append(fieldsDirect, append(start, f.Index...))
}
}
}
addFields(rt, rv, nil)
var writeFields = func(fields [][]int) {
for _, fieldIndex := range fields {
sft := rt.FieldByIndex(fieldIndex)
sf := rv.FieldByIndex(fieldIndex)
if isNil(sf) {
// Don't write anything for nil fields.
continue
}
opts := getOptions(sft.Tag)
if opts.skip {
continue
}
keyName := sft.Name
if opts.name != "" {
keyName = opts.name
}
if opts.omitempty && isEmpty(sf) {
continue
}
if opts.omitzero && isZero(sf) {
continue
}
enc.encode(key.add(keyName), sf)
}
}
writeFields(fieldsDirect)
writeFields(fieldsSub)
}
// tomlTypeOfGo returns the TOML type of a Go value's type. It is used to
// determine whether the types of array elements are mixed (which is
// forbidden). The returned type may be `nil`, which means no concrete TOML
// type could be found; a nil Go value is illegal as an array element, and
// callers treat a nil result accordingly.
func tomlTypeOfGo(rv reflect.Value) tomlType {
if isNil(rv) || !rv.IsValid() {
return nil
}
switch rv.Kind() {
case reflect.Bool:
return tomlBool
case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32,
reflect.Int64,
reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32,
reflect.Uint64:
return tomlInteger
case reflect.Float32, reflect.Float64:
return tomlFloat
case reflect.Array, reflect.Slice:
if typeEqual(tomlHash, tomlArrayType(rv)) {
return tomlArrayHash
}
return tomlArray
case reflect.Ptr, reflect.Interface:
return tomlTypeOfGo(rv.Elem())
case reflect.String:
return tomlString
case reflect.Map:
return tomlHash
case reflect.Struct:
switch rv.Interface().(type) {
case time.Time:
return tomlDatetime
case TextMarshaler:
return tomlString
default:
return tomlHash
}
default:
panic("unexpected reflect.Kind: " + rv.Kind().String())
}
}
// tomlArrayType returns the element type of a TOML array. The type returned
// may be nil if it cannot be determined (e.g., a nil slice or a zero-length
// slice). This function may also panic if it finds a type that cannot be
// expressed in TOML (such as nil elements, heterogeneous arrays or directly
// nested arrays of tables).
func tomlArrayType(rv reflect.Value) tomlType {
if isNil(rv) || !rv.IsValid() || rv.Len() == 0 {
return nil
}
firstType := tomlTypeOfGo(rv.Index(0))
if firstType == nil {
encPanic(errArrayNilElement)
}
rvlen := rv.Len()
for i := 1; i < rvlen; i++ {
elem := rv.Index(i)
switch elemType := tomlTypeOfGo(elem); {
case elemType == nil:
encPanic(errArrayNilElement)
case !typeEqual(firstType, elemType):
encPanic(errArrayMixedElementTypes)
}
}
// If we have a nested array, then we must make sure that the nested
// array contains ONLY primitives.
// This checks arbitrarily nested arrays.
if typeEqual(firstType, tomlArray) || typeEqual(firstType, tomlArrayHash) {
nest := tomlArrayType(eindirect(rv.Index(0)))
if typeEqual(nest, tomlHash) || typeEqual(nest, tomlArrayHash) {
encPanic(errArrayNoTable)
}
}
return firstType
}
type tagOptions struct {
skip bool // "-"
name string
omitempty bool
omitzero bool
}
func getOptions(tag reflect.StructTag) tagOptions {
t := tag.Get("toml")
if t == "-" {
return tagOptions{skip: true}
}
var opts tagOptions
parts := strings.Split(t, ",")
opts.name = parts[0]
for _, s := range parts[1:] {
switch s {
case "omitempty":
opts.omitempty = true
case "omitzero":
opts.omitzero = true
}
}
return opts
}
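// Illustrative only (not part of the original file): the tag forms that
// getOptions recognizes.
//
//	type Example struct {
//		A string  `toml:"renamed"`     // key name override
//		B int     `toml:"b,omitempty"` // omitted when empty
//		C float64 `toml:",omitzero"`   // omitted when zero, name kept
//		D string  `toml:"-"`           // never encoded
//	}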
func isZero(rv reflect.Value) bool {
switch rv.Kind() {
case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
return rv.Int() == 0
case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64:
return rv.Uint() == 0
case reflect.Float32, reflect.Float64:
return rv.Float() == 0.0
}
return false
}
func isEmpty(rv reflect.Value) bool {
switch rv.Kind() {
case reflect.Array, reflect.Slice, reflect.Map, reflect.String:
return rv.Len() == 0
case reflect.Bool:
return !rv.Bool()
}
return false
}
func (enc *Encoder) newline() {
if enc.hasWritten {
enc.wf("\n")
}
}
func (enc *Encoder) keyEqElement(key Key, val reflect.Value) {
if len(key) == 0 {
encPanic(errNoKey)
}
panicIfInvalidKey(key)
enc.wf("%s%s = ", enc.indentStr(key), key.maybeQuoted(len(key)-1))
enc.eElement(val)
enc.newline()
}
func (enc *Encoder) wf(format string, v ...interface{}) {
if _, err := fmt.Fprintf(enc.w, format, v...); err != nil {
encPanic(err)
}
enc.hasWritten = true
}
func (enc *Encoder) indentStr(key Key) string {
return strings.Repeat(enc.Indent, len(key)-1)
}
func encPanic(err error) {
panic(tomlEncodeError{err})
}
func eindirect(v reflect.Value) reflect.Value {
switch v.Kind() {
case reflect.Ptr, reflect.Interface:
return eindirect(v.Elem())
default:
return v
}
}
func isNil(rv reflect.Value) bool {
switch rv.Kind() {
case reflect.Interface, reflect.Map, reflect.Ptr, reflect.Slice:
return rv.IsNil()
default:
return false
}
}
func panicIfInvalidKey(key Key) {
for _, k := range key {
if len(k) == 0 {
encPanic(e("Key '%s' is not a valid table name. Key names "+
"cannot be empty.", key.maybeQuotedAll()))
}
}
}
func isValidKeyName(s string) bool {
return len(s) != 0
}
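A minimal usage sketch for the encoder above (hypothetical struct, not part of the vendored file):

package main

import (
	"bytes"
	"fmt"

	"github.com/BurntSushi/toml"
)

type Page struct {
	Title string   `toml:"title"`
	Tags  []string `toml:"tags,omitempty"`
}

func main() {
	var buf bytes.Buffer
	enc := toml.NewEncoder(&buf)
	enc.Indent = "    " // override the default two-space indent
	if err := enc.Encode(Page{Title: "demo"}); err != nil {
		panic(err)
	}
	fmt.Print(buf.String()) // title = "demo" (empty tags omitted)
}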

19
vendor/github.com/BurntSushi/toml/encoding_types.go generated vendored Normal file

@@ -0,0 +1,19 @@
// +build go1.2
package toml
// In order to support Go 1.1, we define our own TextMarshaler and
// TextUnmarshaler types. For Go 1.2+, we just alias them with the
// standard library interfaces.
import (
"encoding"
)
// TextMarshaler is a synonym for encoding.TextMarshaler. It is defined here
// so that Go 1.1 can be supported.
type TextMarshaler encoding.TextMarshaler
// TextUnmarshaler is a synonym for encoding.TextUnmarshaler. It is defined
// here so that Go 1.1 can be supported.
type TextUnmarshaler encoding.TextUnmarshaler

18
vendor/github.com/BurntSushi/toml/encoding_types_1.1.go generated vendored Normal file

@@ -0,0 +1,18 @@
// +build !go1.2
package toml
// These interfaces were introduced in Go 1.2, so we add them manually when
// compiling for Go 1.1.
// TextMarshaler is a synonym for encoding.TextMarshaler. It is defined here
// so that Go 1.1 can be supported.
type TextMarshaler interface {
MarshalText() (text []byte, err error)
}
// TextUnmarshaler is a synonym for encoding.TextUnmarshaler. It is defined
// here so that Go 1.1 can be supported.
type TextUnmarshaler interface {
UnmarshalText(text []byte) error
}

953
vendor/github.com/BurntSushi/toml/lex.go generated vendored Normal file

@@ -0,0 +1,953 @@
package toml
import (
"fmt"
"strings"
"unicode"
"unicode/utf8"
)
type itemType int
const (
itemError itemType = iota
itemNIL // used in the parser to indicate no type
itemEOF
itemText
itemString
itemRawString
itemMultilineString
itemRawMultilineString
itemBool
itemInteger
itemFloat
itemDatetime
itemArray // the start of an array
itemArrayEnd
itemTableStart
itemTableEnd
itemArrayTableStart
itemArrayTableEnd
itemKeyStart
itemCommentStart
itemInlineTableStart
itemInlineTableEnd
)
const (
eof = 0
comma = ','
tableStart = '['
tableEnd = ']'
arrayTableStart = '['
arrayTableEnd = ']'
tableSep = '.'
keySep = '='
arrayStart = '['
arrayEnd = ']'
commentStart = '#'
stringStart = '"'
stringEnd = '"'
rawStringStart = '\''
rawStringEnd = '\''
inlineTableStart = '{'
inlineTableEnd = '}'
)
type stateFn func(lx *lexer) stateFn
type lexer struct {
input string
start int
pos int
line int
state stateFn
items chan item
// Allow for backing up up to three runes.
// This is necessary because TOML contains 3-rune tokens (""" and ''').
prevWidths [3]int
nprev int // how many of prevWidths are in use
// If we emit an eof, we can still back up, but it is not OK to call
// next again.
atEOF bool
// A stack of state functions used to maintain context.
// The idea is to reuse parts of the state machine in various places.
// For example, values can appear at the top level or within arbitrarily
// nested arrays. The last state on the stack is used after a value has
// been lexed. Similarly for comments.
stack []stateFn
}
type item struct {
typ itemType
val string
line int
}
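// nextItem pumps the state machine: each call to lx.state(lx) runs one
// state function and installs the next, and finished tokens are delivered
// on the buffered items channel as they are emitted.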
func (lx *lexer) nextItem() item {
for {
select {
case item := <-lx.items:
return item
default:
lx.state = lx.state(lx)
}
}
}
func lex(input string) *lexer {
lx := &lexer{
input: input,
state: lexTop,
line: 1,
items: make(chan item, 10),
stack: make([]stateFn, 0, 10),
}
return lx
}
func (lx *lexer) push(state stateFn) {
lx.stack = append(lx.stack, state)
}
func (lx *lexer) pop() stateFn {
if len(lx.stack) == 0 {
return lx.errorf("BUG in lexer: no states to pop")
}
last := lx.stack[len(lx.stack)-1]
lx.stack = lx.stack[0 : len(lx.stack)-1]
return last
}
func (lx *lexer) current() string {
return lx.input[lx.start:lx.pos]
}
func (lx *lexer) emit(typ itemType) {
lx.items <- item{typ, lx.current(), lx.line}
lx.start = lx.pos
}
func (lx *lexer) emitTrim(typ itemType) {
lx.items <- item{typ, strings.TrimSpace(lx.current()), lx.line}
lx.start = lx.pos
}
func (lx *lexer) next() (r rune) {
if lx.atEOF {
panic("next called after EOF")
}
if lx.pos >= len(lx.input) {
lx.atEOF = true
return eof
}
if lx.input[lx.pos] == '\n' {
lx.line++
}
lx.prevWidths[2] = lx.prevWidths[1]
lx.prevWidths[1] = lx.prevWidths[0]
if lx.nprev < 3 {
lx.nprev++
}
r, w := utf8.DecodeRuneInString(lx.input[lx.pos:])
lx.prevWidths[0] = w
lx.pos += w
return r
}
// ignore skips over the pending input before this point.
func (lx *lexer) ignore() {
lx.start = lx.pos
}
// backup steps back one rune. Can be called only twice between calls to next.
func (lx *lexer) backup() {
if lx.atEOF {
lx.atEOF = false
return
}
if lx.nprev < 1 {
panic("backed up too far")
}
w := lx.prevWidths[0]
lx.prevWidths[0] = lx.prevWidths[1]
lx.prevWidths[1] = lx.prevWidths[2]
lx.nprev--
lx.pos -= w
if lx.pos < len(lx.input) && lx.input[lx.pos] == '\n' {
lx.line--
}
}
// accept consumes the next rune if it's equal to `valid`.
func (lx *lexer) accept(valid rune) bool {
if lx.next() == valid {
return true
}
lx.backup()
return false
}
// peek returns but does not consume the next rune in the input.
func (lx *lexer) peek() rune {
r := lx.next()
lx.backup()
return r
}
// skip ignores all input that matches the given predicate.
func (lx *lexer) skip(pred func(rune) bool) {
for {
r := lx.next()
if pred(r) {
continue
}
lx.backup()
lx.ignore()
return
}
}
// errorf stops all lexing by emitting an error and returning `nil`.
// Note that any character value in the message is escaped if it is a
// special character (newlines, tabs, etc.).
func (lx *lexer) errorf(format string, values ...interface{}) stateFn {
lx.items <- item{
itemError,
fmt.Sprintf(format, values...),
lx.line,
}
return nil
}
// lexTop consumes elements at the top level of TOML data.
func lexTop(lx *lexer) stateFn {
r := lx.next()
if isWhitespace(r) || isNL(r) {
return lexSkip(lx, lexTop)
}
switch r {
case commentStart:
lx.push(lexTop)
return lexCommentStart
case tableStart:
return lexTableStart
case eof:
if lx.pos > lx.start {
return lx.errorf("unexpected EOF")
}
lx.emit(itemEOF)
return nil
}
// At this point, the only valid item can be a key, so we back up
// and let the key lexer do the rest.
lx.backup()
lx.push(lexTopEnd)
return lexKeyStart
}
// lexTopEnd is entered whenever a top-level item has been consumed. (A value
// or a table.) It must see only whitespace, and will turn back to lexTop
// upon a newline. If it sees EOF, it will quit the lexer successfully.
func lexTopEnd(lx *lexer) stateFn {
r := lx.next()
switch {
case r == commentStart:
// a comment will read to a newline for us.
lx.push(lexTop)
return lexCommentStart
case isWhitespace(r):
return lexTopEnd
case isNL(r):
lx.ignore()
return lexTop
case r == eof:
lx.emit(itemEOF)
return nil
}
return lx.errorf("expected a top-level item to end with a newline, "+
"comment, or EOF, but got %q instead", r)
}
// lexTableStart lexes the beginning of a table. Namely, it makes sure that
// it starts with a character other than '.' and ']'.
// It assumes that '[' has already been consumed.
// It also handles the case that this is an item in an array of tables.
// e.g., '[[name]]'.
func lexTableStart(lx *lexer) stateFn {
if lx.peek() == arrayTableStart {
lx.next()
lx.emit(itemArrayTableStart)
lx.push(lexArrayTableEnd)
} else {
lx.emit(itemTableStart)
lx.push(lexTableEnd)
}
return lexTableNameStart
}
func lexTableEnd(lx *lexer) stateFn {
lx.emit(itemTableEnd)
return lexTopEnd
}
func lexArrayTableEnd(lx *lexer) stateFn {
if r := lx.next(); r != arrayTableEnd {
return lx.errorf("expected end of table array name delimiter %q, "+
"but got %q instead", arrayTableEnd, r)
}
lx.emit(itemArrayTableEnd)
return lexTopEnd
}
func lexTableNameStart(lx *lexer) stateFn {
lx.skip(isWhitespace)
switch r := lx.peek(); {
case r == tableEnd || r == eof:
return lx.errorf("unexpected end of table name " +
"(table names cannot be empty)")
case r == tableSep:
return lx.errorf("unexpected table separator " +
"(table names cannot be empty)")
case r == stringStart || r == rawStringStart:
lx.ignore()
lx.push(lexTableNameEnd)
return lexValue // reuse string lexing
default:
return lexBareTableName
}
}
// lexBareTableName lexes the name of a table. It assumes that at least one
// valid character for the table has already been read.
func lexBareTableName(lx *lexer) stateFn {
r := lx.next()
if isBareKeyChar(r) {
return lexBareTableName
}
lx.backup()
lx.emit(itemText)
return lexTableNameEnd
}
// lexTableNameEnd reads the end of a piece of a table name, optionally
// consuming whitespace.
func lexTableNameEnd(lx *lexer) stateFn {
lx.skip(isWhitespace)
switch r := lx.next(); {
case isWhitespace(r):
return lexTableNameEnd
case r == tableSep:
lx.ignore()
return lexTableNameStart
case r == tableEnd:
return lx.pop()
default:
return lx.errorf("expected '.' or ']' to end table name, "+
"but got %q instead", r)
}
}
// lexKeyStart consumes a key name up until the first non-whitespace character.
// lexKeyStart will ignore whitespace.
func lexKeyStart(lx *lexer) stateFn {
r := lx.peek()
switch {
case r == keySep:
return lx.errorf("unexpected key separator %q", keySep)
case isWhitespace(r) || isNL(r):
lx.next()
return lexSkip(lx, lexKeyStart)
case r == stringStart || r == rawStringStart:
lx.ignore()
lx.emit(itemKeyStart)
lx.push(lexKeyEnd)
return lexValue // reuse string lexing
default:
lx.ignore()
lx.emit(itemKeyStart)
return lexBareKey
}
}
// lexBareKey consumes the text of a bare key. Assumes that the first character
// (which is not whitespace) has not yet been consumed.
func lexBareKey(lx *lexer) stateFn {
switch r := lx.next(); {
case isBareKeyChar(r):
return lexBareKey
case isWhitespace(r):
lx.backup()
lx.emit(itemText)
return lexKeyEnd
case r == keySep:
lx.backup()
lx.emit(itemText)
return lexKeyEnd
default:
return lx.errorf("bare keys cannot contain %q", r)
}
}
// lexKeyEnd consumes the end of a key and trims whitespace (up to the key
// separator).
func lexKeyEnd(lx *lexer) stateFn {
switch r := lx.next(); {
case r == keySep:
return lexSkip(lx, lexValue)
case isWhitespace(r):
return lexSkip(lx, lexKeyEnd)
default:
return lx.errorf("expected key separator %q, but got %q instead",
keySep, r)
}
}
// lexValue starts the consumption of a value anywhere a value is expected.
// lexValue will ignore whitespace.
// After a value is lexed, the last state on the stack is popped and returned.
func lexValue(lx *lexer) stateFn {
// We allow whitespace to precede a value, but NOT newlines.
// In array syntax, the array states are responsible for ignoring newlines.
r := lx.next()
switch {
case isWhitespace(r):
return lexSkip(lx, lexValue)
case isDigit(r):
lx.backup() // avoid an extra state and use the same as above
return lexNumberOrDateStart
}
switch r {
case arrayStart:
lx.ignore()
lx.emit(itemArray)
return lexArrayValue
case inlineTableStart:
lx.ignore()
lx.emit(itemInlineTableStart)
return lexInlineTableValue
case stringStart:
if lx.accept(stringStart) {
if lx.accept(stringStart) {
lx.ignore() // Ignore """
return lexMultilineString
}
lx.backup()
}
lx.ignore() // ignore the '"'
return lexString
case rawStringStart:
if lx.accept(rawStringStart) {
if lx.accept(rawStringStart) {
lx.ignore() // Ignore '''
return lexMultilineRawString
}
lx.backup()
}
lx.ignore() // ignore the "'"
return lexRawString
case '+', '-':
return lexNumberStart
case '.': // special error case, be kind to users
return lx.errorf("floats must start with a digit, not '.'")
}
if unicode.IsLetter(r) {
// Be permissive here; lexBool will give a nice error if the
// user wrote something like
// x = foo
// (i.e. not 'true' or 'false' but is something else word-like.)
lx.backup()
return lexBool
}
return lx.errorf("expected value but found %q instead", r)
}
// lexArrayValue consumes one value in an array. It assumes that '[' or ','
// have already been consumed. All whitespace and newlines are ignored.
func lexArrayValue(lx *lexer) stateFn {
r := lx.next()
switch {
case isWhitespace(r) || isNL(r):
return lexSkip(lx, lexArrayValue)
case r == commentStart:
lx.push(lexArrayValue)
return lexCommentStart
case r == comma:
return lx.errorf("unexpected comma")
case r == arrayEnd:
// NOTE(caleb): The spec isn't clear about whether you can have
// a trailing comma or not, so we'll allow it.
return lexArrayEnd
}
lx.backup()
lx.push(lexArrayValueEnd)
return lexValue
}
// lexArrayValueEnd consumes everything between the end of an array value and
// the next value (or the end of the array): it ignores whitespace and newlines
// and expects either a ',' or a ']'.
func lexArrayValueEnd(lx *lexer) stateFn {
r := lx.next()
switch {
case isWhitespace(r) || isNL(r):
return lexSkip(lx, lexArrayValueEnd)
case r == commentStart:
lx.push(lexArrayValueEnd)
return lexCommentStart
case r == comma:
lx.ignore()
return lexArrayValue // move on to the next value
case r == arrayEnd:
return lexArrayEnd
}
return lx.errorf(
"expected a comma or array terminator %q, but got %q instead",
arrayEnd, r,
)
}
// lexArrayEnd finishes the lexing of an array.
// It assumes that a ']' has just been consumed.
func lexArrayEnd(lx *lexer) stateFn {
lx.ignore()
lx.emit(itemArrayEnd)
return lx.pop()
}
// lexInlineTableValue consumes one key/value pair in an inline table.
// It assumes that '{' or ',' have already been consumed. Whitespace is ignored.
func lexInlineTableValue(lx *lexer) stateFn {
r := lx.next()
switch {
case isWhitespace(r):
return lexSkip(lx, lexInlineTableValue)
case isNL(r):
return lx.errorf("newlines not allowed within inline tables")
case r == commentStart:
lx.push(lexInlineTableValue)
return lexCommentStart
case r == comma:
return lx.errorf("unexpected comma")
case r == inlineTableEnd:
return lexInlineTableEnd
}
lx.backup()
lx.push(lexInlineTableValueEnd)
return lexKeyStart
}
// lexInlineTableValueEnd consumes everything between the end of an inline table
// key/value pair and the next pair (or the end of the table):
// it ignores whitespace and expects either a ',' or a '}'.
func lexInlineTableValueEnd(lx *lexer) stateFn {
r := lx.next()
switch {
case isWhitespace(r):
return lexSkip(lx, lexInlineTableValueEnd)
case isNL(r):
return lx.errorf("newlines not allowed within inline tables")
case r == commentStart:
lx.push(lexInlineTableValueEnd)
return lexCommentStart
case r == comma:
lx.ignore()
return lexInlineTableValue
case r == inlineTableEnd:
return lexInlineTableEnd
}
return lx.errorf("expected a comma or an inline table terminator %q, "+
"but got %q instead", inlineTableEnd, r)
}
// lexInlineTableEnd finishes the lexing of an inline table.
// It assumes that a '}' has just been consumed.
func lexInlineTableEnd(lx *lexer) stateFn {
lx.ignore()
lx.emit(itemInlineTableEnd)
return lx.pop()
}
// lexString consumes the inner contents of a string. It assumes that the
// beginning '"' has already been consumed and ignored.
func lexString(lx *lexer) stateFn {
r := lx.next()
switch {
case r == eof:
return lx.errorf("unexpected EOF")
case isNL(r):
return lx.errorf("strings cannot contain newlines")
case r == '\\':
lx.push(lexString)
return lexStringEscape
case r == stringEnd:
lx.backup()
lx.emit(itemString)
lx.next()
lx.ignore()
return lx.pop()
}
return lexString
}
// lexMultilineString consumes the inner contents of a string. It assumes that
// the beginning '"""' has already been consumed and ignored.
func lexMultilineString(lx *lexer) stateFn {
switch lx.next() {
case eof:
return lx.errorf("unexpected EOF")
case '\\':
return lexMultilineStringEscape
case stringEnd:
if lx.accept(stringEnd) {
if lx.accept(stringEnd) {
lx.backup()
lx.backup()
lx.backup()
lx.emit(itemMultilineString)
lx.next()
lx.next()
lx.next()
lx.ignore()
return lx.pop()
}
lx.backup()
}
}
return lexMultilineString
}
// lexRawString consumes a raw string. Nothing can be escaped in such a string.
// It assumes that the beginning "'" has already been consumed and ignored.
func lexRawString(lx *lexer) stateFn {
r := lx.next()
switch {
case r == eof:
return lx.errorf("unexpected EOF")
case isNL(r):
return lx.errorf("strings cannot contain newlines")
case r == rawStringEnd:
lx.backup()
lx.emit(itemRawString)
lx.next()
lx.ignore()
return lx.pop()
}
return lexRawString
}
// lexMultilineRawString consumes a raw string. Nothing can be escaped in such
// a string. It assumes that the beginning "'''" has already been consumed and
// ignored.
func lexMultilineRawString(lx *lexer) stateFn {
switch lx.next() {
case eof:
return lx.errorf("unexpected EOF")
case rawStringEnd:
if lx.accept(rawStringEnd) {
if lx.accept(rawStringEnd) {
lx.backup()
lx.backup()
lx.backup()
lx.emit(itemRawMultilineString)
lx.next()
lx.next()
lx.next()
lx.ignore()
return lx.pop()
}
lx.backup()
}
}
return lexMultilineRawString
}
// lexMultilineStringEscape consumes an escaped character. It assumes that the
// preceding '\\' has already been consumed.
func lexMultilineStringEscape(lx *lexer) stateFn {
// Handle the special case first:
if isNL(lx.next()) {
return lexMultilineString
}
lx.backup()
lx.push(lexMultilineString)
return lexStringEscape(lx)
}
func lexStringEscape(lx *lexer) stateFn {
r := lx.next()
switch r {
case 'b':
fallthrough
case 't':
fallthrough
case 'n':
fallthrough
case 'f':
fallthrough
case 'r':
fallthrough
case '"':
fallthrough
case '\\':
return lx.pop()
case 'u':
return lexShortUnicodeEscape
case 'U':
return lexLongUnicodeEscape
}
return lx.errorf("invalid escape character %q; only the following "+
"escape characters are allowed: "+
`\b, \t, \n, \f, \r, \", \\, \uXXXX, and \UXXXXXXXX`, r)
}
func lexShortUnicodeEscape(lx *lexer) stateFn {
var r rune
for i := 0; i < 4; i++ {
r = lx.next()
if !isHexadecimal(r) {
return lx.errorf(`expected four hexadecimal digits after '\u', `+
"but got %q instead", lx.current())
}
}
return lx.pop()
}
func lexLongUnicodeEscape(lx *lexer) stateFn {
var r rune
for i := 0; i < 8; i++ {
r = lx.next()
if !isHexadecimal(r) {
return lx.errorf(`expected eight hexadecimal digits after '\U', `+
"but got %q instead", lx.current())
}
}
return lx.pop()
}
// lexNumberOrDateStart consumes either an integer, a float, or datetime.
func lexNumberOrDateStart(lx *lexer) stateFn {
r := lx.next()
if isDigit(r) {
return lexNumberOrDate
}
switch r {
case '_':
return lexNumber
case 'e', 'E':
return lexFloat
case '.':
return lx.errorf("floats must start with a digit, not '.'")
}
return lx.errorf("expected a digit but got %q", r)
}
// lexNumberOrDate consumes either an integer, float or datetime.
func lexNumberOrDate(lx *lexer) stateFn {
r := lx.next()
if isDigit(r) {
return lexNumberOrDate
}
switch r {
case '-':
return lexDatetime
case '_':
return lexNumber
case '.', 'e', 'E':
return lexFloat
}
lx.backup()
lx.emit(itemInteger)
return lx.pop()
}
// lexDatetime consumes a Datetime, to a first approximation.
// The parser validates that it matches one of the accepted formats.
func lexDatetime(lx *lexer) stateFn {
r := lx.next()
if isDigit(r) {
return lexDatetime
}
switch r {
case '-', 'T', ':', '.', 'Z', '+':
return lexDatetime
}
lx.backup()
lx.emit(itemDatetime)
return lx.pop()
}
// lexNumberStart consumes either an integer or a float. It assumes that a sign
// has already been read, but that *no* digits have been consumed.
// lexNumberStart will move to the appropriate integer or float states.
func lexNumberStart(lx *lexer) stateFn {
// We MUST see a digit. Even floats have to start with a digit.
r := lx.next()
if !isDigit(r) {
if r == '.' {
return lx.errorf("floats must start with a digit, not '.'")
}
return lx.errorf("expected a digit but got %q", r)
}
return lexNumber
}
// lexNumber consumes an integer or a float after seeing the first digit.
func lexNumber(lx *lexer) stateFn {
r := lx.next()
if isDigit(r) {
return lexNumber
}
switch r {
case '_':
return lexNumber
case '.', 'e', 'E':
return lexFloat
}
lx.backup()
lx.emit(itemInteger)
return lx.pop()
}
// lexFloat consumes the elements of a float. It allows any sequence of
// float-like characters, so floats emitted by the lexer are only a first
// approximation and must be validated by the parser.
func lexFloat(lx *lexer) stateFn {
r := lx.next()
if isDigit(r) {
return lexFloat
}
switch r {
case '_', '.', '-', '+', 'e', 'E':
return lexFloat
}
lx.backup()
lx.emit(itemFloat)
return lx.pop()
}
// lexBool consumes a bool string: 'true' or 'false'.
func lexBool(lx *lexer) stateFn {
var rs []rune
for {
r := lx.next()
if !unicode.IsLetter(r) {
lx.backup()
break
}
rs = append(rs, r)
}
s := string(rs)
switch s {
case "true", "false":
lx.emit(itemBool)
return lx.pop()
}
return lx.errorf("expected value but found %q instead", s)
}
// lexCommentStart begins the lexing of a comment. It will emit
// itemCommentStart and consume no characters, passing control to lexComment.
func lexCommentStart(lx *lexer) stateFn {
lx.ignore()
lx.emit(itemCommentStart)
return lexComment
}
// lexComment lexes an entire comment. It assumes that '#' has been consumed.
// It will consume *up to* the first newline character, and pass control
// back to the last state on the stack.
func lexComment(lx *lexer) stateFn {
r := lx.peek()
if isNL(r) || r == eof {
lx.emit(itemText)
return lx.pop()
}
lx.next()
return lexComment
}
// lexSkip ignores all slurped input and moves on to the next state.
func lexSkip(lx *lexer, nextState stateFn) stateFn {
return func(lx *lexer) stateFn {
lx.ignore()
return nextState
}
}
// isWhitespace returns true if `r` is a whitespace character according
// to the spec.
func isWhitespace(r rune) bool {
return r == '\t' || r == ' '
}
func isNL(r rune) bool {
return r == '\n' || r == '\r'
}
func isDigit(r rune) bool {
return r >= '0' && r <= '9'
}
func isHexadecimal(r rune) bool {
return (r >= '0' && r <= '9') ||
(r >= 'a' && r <= 'f') ||
(r >= 'A' && r <= 'F')
}
func isBareKeyChar(r rune) bool {
return (r >= 'A' && r <= 'Z') ||
(r >= 'a' && r <= 'z') ||
(r >= '0' && r <= '9') ||
r == '_' ||
r == '-'
}
func (itype itemType) String() string {
switch itype {
case itemError:
return "Error"
case itemNIL:
return "NIL"
case itemEOF:
return "EOF"
case itemText:
return "Text"
case itemString, itemRawString, itemMultilineString, itemRawMultilineString:
return "String"
case itemBool:
return "Bool"
case itemInteger:
return "Integer"
case itemFloat:
return "Float"
case itemDatetime:
return "DateTime"
case itemTableStart:
return "TableStart"
case itemTableEnd:
return "TableEnd"
case itemKeyStart:
return "KeyStart"
case itemArray:
return "Array"
case itemArrayEnd:
return "ArrayEnd"
case itemCommentStart:
return "CommentStart"
}
panic(fmt.Sprintf("BUG: Unknown type '%d'.", int(itype)))
}
func (item item) String() string {
return fmt.Sprintf("(%s, %s)", item.typ.String(), item.val)
}

592
vendor/github.com/BurntSushi/toml/parse.go generated vendored Normal file

@@ -0,0 +1,592 @@
package toml
import (
"fmt"
"strconv"
"strings"
"time"
"unicode"
"unicode/utf8"
)
type parser struct {
mapping map[string]interface{}
types map[string]tomlType
lx *lexer
// A list of keys in the order that they appear in the TOML data.
ordered []Key
// the full key for the current hash in scope
context Key
// the base key name for everything except hashes
currentKey string
// rough approximation of line number
approxLine int
// A map of 'key.group.names' to whether they were created implicitly.
implicits map[string]bool
}
type parseError string
func (pe parseError) Error() string {
return string(pe)
}
func parse(data string) (p *parser, err error) {
defer func() {
if r := recover(); r != nil {
var ok bool
if err, ok = r.(parseError); ok {
return
}
panic(r)
}
}()
p = &parser{
mapping: make(map[string]interface{}),
types: make(map[string]tomlType),
lx: lex(data),
ordered: make([]Key, 0),
implicits: make(map[string]bool),
}
for {
item := p.next()
if item.typ == itemEOF {
break
}
p.topLevel(item)
}
return p, nil
}
func (p *parser) panicf(format string, v ...interface{}) {
msg := fmt.Sprintf("Near line %d (last key parsed '%s'): %s",
p.approxLine, p.current(), fmt.Sprintf(format, v...))
panic(parseError(msg))
}
func (p *parser) next() item {
it := p.lx.nextItem()
if it.typ == itemError {
p.panicf("%s", it.val)
}
return it
}
func (p *parser) bug(format string, v ...interface{}) {
panic(fmt.Sprintf("BUG: "+format+"\n\n", v...))
}
func (p *parser) expect(typ itemType) item {
it := p.next()
p.assertEqual(typ, it.typ)
return it
}
func (p *parser) assertEqual(expected, got itemType) {
if expected != got {
p.bug("Expected '%s' but got '%s'.", expected, got)
}
}
func (p *parser) topLevel(item item) {
switch item.typ {
case itemCommentStart:
p.approxLine = item.line
p.expect(itemText)
case itemTableStart:
kg := p.next()
p.approxLine = kg.line
var key Key
for ; kg.typ != itemTableEnd && kg.typ != itemEOF; kg = p.next() {
key = append(key, p.keyString(kg))
}
p.assertEqual(itemTableEnd, kg.typ)
p.establishContext(key, false)
p.setType("", tomlHash)
p.ordered = append(p.ordered, key)
case itemArrayTableStart:
kg := p.next()
p.approxLine = kg.line
var key Key
for ; kg.typ != itemArrayTableEnd && kg.typ != itemEOF; kg = p.next() {
key = append(key, p.keyString(kg))
}
p.assertEqual(itemArrayTableEnd, kg.typ)
p.establishContext(key, true)
p.setType("", tomlArrayHash)
p.ordered = append(p.ordered, key)
case itemKeyStart:
kname := p.next()
p.approxLine = kname.line
p.currentKey = p.keyString(kname)
val, typ := p.value(p.next())
p.setValue(p.currentKey, val)
p.setType(p.currentKey, typ)
p.ordered = append(p.ordered, p.context.add(p.currentKey))
p.currentKey = ""
default:
p.bug("Unexpected type at top level: %s", item.typ)
}
}
// Gets a string for a key (or part of a key in a table name).
func (p *parser) keyString(it item) string {
switch it.typ {
case itemText:
return it.val
case itemString, itemMultilineString,
itemRawString, itemRawMultilineString:
s, _ := p.value(it)
return s.(string)
default:
p.bug("Unexpected key type: %s", it.typ)
panic("unreachable")
}
}
// value translates an expected value from the lexer into a Go value wrapped
// as an empty interface.
func (p *parser) value(it item) (interface{}, tomlType) {
switch it.typ {
case itemString:
return p.replaceEscapes(it.val), p.typeOfPrimitive(it)
case itemMultilineString:
trimmed := stripFirstNewline(stripEscapedWhitespace(it.val))
return p.replaceEscapes(trimmed), p.typeOfPrimitive(it)
case itemRawString:
return it.val, p.typeOfPrimitive(it)
case itemRawMultilineString:
return stripFirstNewline(it.val), p.typeOfPrimitive(it)
case itemBool:
switch it.val {
case "true":
return true, p.typeOfPrimitive(it)
case "false":
return false, p.typeOfPrimitive(it)
}
p.bug("Expected boolean value, but got '%s'.", it.val)
case itemInteger:
if !numUnderscoresOK(it.val) {
p.panicf("Invalid integer %q: underscores must be surrounded by digits",
it.val)
}
val := strings.Replace(it.val, "_", "", -1)
num, err := strconv.ParseInt(val, 10, 64)
if err != nil {
// Distinguish integer values. Normally, it'd be a bug if the lexer
// provides an invalid integer, but it's possible that the number is
// out of range of valid values (which the lexer cannot determine).
// So mark the former as a bug but the latter as a legitimate user
// error.
if e, ok := err.(*strconv.NumError); ok &&
e.Err == strconv.ErrRange {
p.panicf("Integer '%s' is out of the range of 64-bit "+
"signed integers.", it.val)
} else {
p.bug("Expected integer value, but got '%s'.", it.val)
}
}
return num, p.typeOfPrimitive(it)
case itemFloat:
parts := strings.FieldsFunc(it.val, func(r rune) bool {
switch r {
case '.', 'e', 'E':
return true
}
return false
})
for _, part := range parts {
if !numUnderscoresOK(part) {
p.panicf("Invalid float %q: underscores must be "+
"surrounded by digits", it.val)
}
}
if !numPeriodsOK(it.val) {
// As a special case, numbers like '123.' or '1.e2',
// which are valid as far as Go/strconv are concerned,
// must be rejected because TOML says that a fractional
// part consists of '.' followed by 1+ digits.
p.panicf("Invalid float %q: '.' must be followed "+
"by one or more digits", it.val)
}
val := strings.Replace(it.val, "_", "", -1)
num, err := strconv.ParseFloat(val, 64)
if err != nil {
if e, ok := err.(*strconv.NumError); ok &&
e.Err == strconv.ErrRange {
p.panicf("Float '%s' is out of the range of 64-bit "+
"IEEE-754 floating-point numbers.", it.val)
} else {
p.panicf("Invalid float value: %q", it.val)
}
}
return num, p.typeOfPrimitive(it)
case itemDatetime:
var t time.Time
var ok bool
var err error
for _, format := range []string{
"2006-01-02T15:04:05Z07:00",
"2006-01-02T15:04:05",
"2006-01-02",
} {
t, err = time.ParseInLocation(format, it.val, time.Local)
if err == nil {
ok = true
break
}
}
if !ok {
p.panicf("Invalid TOML Datetime: %q.", it.val)
}
return t, p.typeOfPrimitive(it)
case itemArray:
array := make([]interface{}, 0)
types := make([]tomlType, 0)
for it = p.next(); it.typ != itemArrayEnd; it = p.next() {
if it.typ == itemCommentStart {
p.expect(itemText)
continue
}
val, typ := p.value(it)
array = append(array, val)
types = append(types, typ)
}
return array, p.typeOfArray(types)
case itemInlineTableStart:
var (
hash = make(map[string]interface{})
outerContext = p.context
outerKey = p.currentKey
)
p.context = append(p.context, p.currentKey)
p.currentKey = ""
for it := p.next(); it.typ != itemInlineTableEnd; it = p.next() {
if it.typ == itemCommentStart {
// Check for comments before asserting a key start; p.bug panics,
// so doing the assertion first would make this branch unreachable.
p.expect(itemText)
continue
}
if it.typ != itemKeyStart {
p.bug("Expected key start but instead found %q, around line %d",
it.val, p.approxLine)
}
// retrieve key
k := p.next()
p.approxLine = k.line
kname := p.keyString(k)
// retrieve value
p.currentKey = kname
val, typ := p.value(p.next())
// make sure we keep metadata up to date
p.setType(kname, typ)
p.ordered = append(p.ordered, p.context.add(p.currentKey))
hash[kname] = val
}
p.context = outerContext
p.currentKey = outerKey
return hash, tomlHash
}
p.bug("Unexpected value type: %s", it.typ)
panic("unreachable")
}
// numUnderscoresOK checks whether each underscore in s is surrounded by
// characters that are not underscores.
func numUnderscoresOK(s string) bool {
accept := false
for _, r := range s {
if r == '_' {
if !accept {
return false
}
accept = false
continue
}
accept = true
}
return accept
}
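// For example (illustrative): numUnderscoresOK accepts "1_000" and "1_2_3"
// but rejects "_1", "1_", and "1__0".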
// numPeriodsOK checks whether every period in s is followed by a digit.
func numPeriodsOK(s string) bool {
period := false
for _, r := range s {
if period && !isDigit(r) {
return false
}
period = r == '.'
}
return !period
}
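// For example (illustrative): numPeriodsOK accepts "3.14" but rejects
// "123." and "1.e2".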
// establishContext sets the current context of the parser,
// where the context is either a hash or an array of hashes. Which one is
// set depends on the value of the `array` parameter.
//
// Establishing the context also makes sure that the key isn't a duplicate, and
// will create implicit hashes automatically.
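// For example (illustrative): seeing [a.b.c] first creates implicit hashes
// for "a" and "a.b", then establishes the explicit table "a.b.c".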
func (p *parser) establishContext(key Key, array bool) {
var ok bool
// Always start at the top level and drill down for our context.
hashContext := p.mapping
keyContext := make(Key, 0)
// We only need implicit hashes for key[0:-1]
for _, k := range key[0 : len(key)-1] {
_, ok = hashContext[k]
keyContext = append(keyContext, k)
// No key? Make an implicit hash and move on.
if !ok {
p.addImplicit(keyContext)
hashContext[k] = make(map[string]interface{})
}
// If the hash context is actually an array of tables, then set
// the hash context to the last element in that array.
//
// Otherwise, it better be a table, since this MUST be a key group (by
// virtue of it not being the last element in a key).
switch t := hashContext[k].(type) {
case []map[string]interface{}:
hashContext = t[len(t)-1]
case map[string]interface{}:
hashContext = t
default:
p.panicf("Key '%s' was already created as a hash.", keyContext)
}
}
p.context = keyContext
if array {
// If this is the first element for this array, then allocate a new
// list of tables for it.
k := key[len(key)-1]
if _, ok := hashContext[k]; !ok {
hashContext[k] = make([]map[string]interface{}, 0, 5)
}
// Add a new table. But make sure the key hasn't already been used
// for something else.
if hash, ok := hashContext[k].([]map[string]interface{}); ok {
hashContext[k] = append(hash, make(map[string]interface{}))
} else {
p.panicf("Key '%s' was already created and cannot be used as "+
"an array.", keyContext)
}
} else {
p.setValue(key[len(key)-1], make(map[string]interface{}))
}
p.context = append(p.context, key[len(key)-1])
}
// setValue sets the given key to the given value in the current context.
// It will make sure that the key hasn't already been defined, accounting
// for implicit key groups.
func (p *parser) setValue(key string, value interface{}) {
var tmpHash interface{}
var ok bool
hash := p.mapping
keyContext := make(Key, 0)
for _, k := range p.context {
keyContext = append(keyContext, k)
if tmpHash, ok = hash[k]; !ok {
p.bug("Context for key '%s' has not been established.", keyContext)
}
switch t := tmpHash.(type) {
case []map[string]interface{}:
// The context is an array of tables. Pick the most recent table
// defined as the current hash.
hash = t[len(t)-1]
case map[string]interface{}:
hash = t
default:
p.bug("Expected hash to have type 'map[string]interface{}', but "+
"it has '%T' instead.", tmpHash)
}
}
keyContext = append(keyContext, key)
if _, ok := hash[key]; ok {
// Typically, if the given key has already been set, then we have
// to raise an error since duplicate keys are disallowed. However,
// it's possible that a key was previously defined implicitly. In this
// case, it is allowed to be redefined concretely. (See the
// `tests/valid/implicit-and-explicit-after.toml` test in `toml-test`.)
//
// But we have to make sure to stop marking it as implicit. (So that
// another redefinition provokes an error.)
//
// Note that since it has already been defined (as a hash), we don't
// want to overwrite it. So our business is done.
if p.isImplicit(keyContext) {
p.removeImplicit(keyContext)
return
}
// Otherwise, we have a concrete key trying to override a previous
// key, which is *always* wrong.
p.panicf("Key '%s' has already been defined.", keyContext)
}
hash[key] = value
}
// setType sets the type of a particular value at a given key.
// It should be called immediately AFTER setValue.
//
// Note that if `key` is empty, then the type given will be applied to the
// current context (which is either a table or an array of tables).
func (p *parser) setType(key string, typ tomlType) {
keyContext := make(Key, 0, len(p.context)+1)
for _, k := range p.context {
keyContext = append(keyContext, k)
}
if len(key) > 0 { // allow type setting for hashes
keyContext = append(keyContext, key)
}
p.types[keyContext.String()] = typ
}
// addImplicit sets the given Key as having been created implicitly.
func (p *parser) addImplicit(key Key) {
p.implicits[key.String()] = true
}
// removeImplicit stops tagging the given key as having been implicitly
// created.
func (p *parser) removeImplicit(key Key) {
p.implicits[key.String()] = false
}
// isImplicit returns true if the key group pointed to by the key was created
// implicitly.
func (p *parser) isImplicit(key Key) bool {
return p.implicits[key.String()]
}
// current returns the full key name of the current context.
func (p *parser) current() string {
if len(p.currentKey) == 0 {
return p.context.String()
}
if len(p.context) == 0 {
return p.currentKey
}
return fmt.Sprintf("%s.%s", p.context, p.currentKey)
}
func stripFirstNewline(s string) string {
if len(s) == 0 || s[0] != '\n' {
return s
}
return s[1:]
}
func stripEscapedWhitespace(s string) string {
esc := strings.Split(s, "\\\n")
if len(esc) > 1 {
for i := 1; i < len(esc); i++ {
esc[i] = strings.TrimLeftFunc(esc[i], unicode.IsSpace)
}
}
return strings.Join(esc, "")
}
func (p *parser) replaceEscapes(str string) string {
var replaced []rune
s := []byte(str)
r := 0
for r < len(s) {
if s[r] != '\\' {
c, size := utf8.DecodeRune(s[r:])
r += size
replaced = append(replaced, c)
continue
}
r += 1
if r >= len(s) {
p.bug("Escape sequence at end of string.")
return ""
}
switch s[r] {
default:
p.bug("Expected valid escape code after \\, but got %q.", s[r])
return ""
case 'b':
replaced = append(replaced, rune(0x0008))
r += 1
case 't':
replaced = append(replaced, rune(0x0009))
r += 1
case 'n':
replaced = append(replaced, rune(0x000A))
r += 1
case 'f':
replaced = append(replaced, rune(0x000C))
r += 1
case 'r':
replaced = append(replaced, rune(0x000D))
r += 1
case '"':
replaced = append(replaced, rune(0x0022))
r += 1
case '\\':
replaced = append(replaced, rune(0x005C))
r += 1
case 'u':
// At this point, we know we have a Unicode escape of the form
// `uXXXX` at [r, r+5). (Because the lexer guarantees this
// for us.)
escaped := p.asciiEscapeToUnicode(s[r+1 : r+5])
replaced = append(replaced, escaped)
r += 5
case 'U':
// At this point, we know we have a Unicode escape of the form
// `UXXXXXXXX` at [r, r+9). (Because the lexer guarantees this
// for us.)
escaped := p.asciiEscapeToUnicode(s[r+1 : r+9])
replaced = append(replaced, escaped)
r += 9
}
}
return string(replaced)
}
func (p *parser) asciiEscapeToUnicode(bs []byte) rune {
s := string(bs)
hex, err := strconv.ParseUint(strings.ToLower(s), 16, 32)
if err != nil {
p.bug("Could not parse '%s' as a hexadecimal number, but the "+
"lexer claims it's OK: %s", s, err)
}
if !utf8.ValidRune(rune(hex)) {
p.panicf("Escaped character '\\u%s' is not valid UTF-8.", s)
}
return rune(hex)
}
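
For illustration, the hex-to-rune conversion above can be reproduced standalone; this sketch assumes nothing beyond the standard library (the helper name is hypothetical):

```go
package main

import (
	"fmt"
	"strconv"
	"unicode/utf8"
)

// decodeUnicodeEscape (hypothetical) mirrors asciiEscapeToUnicode: parse the
// hex digits of a \uXXXX escape and reject values that are not valid runes.
func decodeUnicodeEscape(hexDigits string) (rune, error) {
	n, err := strconv.ParseUint(hexDigits, 16, 32)
	if err != nil {
		return 0, err
	}
	if !utf8.ValidRune(rune(n)) {
		return 0, fmt.Errorf("escaped character '\\u%s' is not a valid rune", hexDigits)
	}
	return rune(n), nil
}

func main() {
	r, err := decodeUnicodeEscape("263A")
	fmt.Println(string(r), err) // ☺ <nil>
}
```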
func isStringType(ty itemType) bool {
return ty == itemString || ty == itemMultilineString ||
ty == itemRawString || ty == itemRawMultilineString
}

vendor/github.com/BurntSushi/toml/type_check.go generated vendored Normal file

@@ -0,0 +1,91 @@
package toml
// tomlType represents any Go type that corresponds to a TOML type.
// While the first draft of the TOML spec has a simplistic type system that
// probably doesn't need this level of sophistication, we seem to be moving
// toward adding real composite types.
type tomlType interface {
typeString() string
}
// typeEqual accepts any two types and returns true if they are equal.
func typeEqual(t1, t2 tomlType) bool {
if t1 == nil || t2 == nil {
return false
}
return t1.typeString() == t2.typeString()
}
func typeIsHash(t tomlType) bool {
return typeEqual(t, tomlHash) || typeEqual(t, tomlArrayHash)
}
type tomlBaseType string
func (btype tomlBaseType) typeString() string {
return string(btype)
}
func (btype tomlBaseType) String() string {
return btype.typeString()
}
var (
tomlInteger tomlBaseType = "Integer"
tomlFloat tomlBaseType = "Float"
tomlDatetime tomlBaseType = "Datetime"
tomlString tomlBaseType = "String"
tomlBool tomlBaseType = "Bool"
tomlArray tomlBaseType = "Array"
tomlHash tomlBaseType = "Hash"
tomlArrayHash tomlBaseType = "ArrayHash"
)
// typeOfPrimitive returns a tomlType of any primitive value in TOML.
// Primitive values are: Integer, Float, Datetime, String and Bool.
//
// Passing a lexer item other than the following will cause a BUG message
// to occur: itemString, itemMultilineString, itemRawString,
// itemRawMultilineString, itemBool, itemInteger, itemFloat, itemDatetime.
func (p *parser) typeOfPrimitive(lexItem item) tomlType {
switch lexItem.typ {
case itemInteger:
return tomlInteger
case itemFloat:
return tomlFloat
case itemDatetime:
return tomlDatetime
case itemString:
return tomlString
case itemMultilineString:
return tomlString
case itemRawString:
return tomlString
case itemRawMultilineString:
return tomlString
case itemBool:
return tomlBool
}
p.bug("Cannot infer primitive type of lex item '%s'.", lexItem)
panic("unreachable")
}
// typeOfArray returns a tomlType for an array given a list of types of its
// values.
//
// In the current spec, if an array is homogeneous, then its type is always
// "Array". If the array is not homogeneous, an error is generated.
func (p *parser) typeOfArray(types []tomlType) tomlType {
// Empty arrays are cool.
if len(types) == 0 {
return tomlArray
}
theType := types[0]
for _, t := range types[1:] {
if !typeEqual(theType, t) {
p.panicf("Array contains values of type '%s' and '%s', but "+
"arrays must be homogeneous.", theType, t)
}
}
return tomlArray
}
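
At the API surface this check shows up as a decode error for mixed arrays. A small hedged sketch using `toml.Decode` (the keys are illustrative):

```go
package main

import (
	"fmt"

	"github.com/BurntSushi/toml"
)

func main() {
	var v map[string]interface{}

	// Homogeneous arrays decode cleanly.
	if _, err := toml.Decode(`good = [1, 2, 3]`, &v); err != nil {
		fmt.Println("unexpected error:", err)
	}

	// Mixed element types are rejected by typeOfArray above.
	if _, err := toml.Decode(`bad = [1, "two"]`, &v); err != nil {
		fmt.Println("mixed array rejected:", err)
	}
}
```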

vendor/github.com/BurntSushi/toml/type_fields.go generated vendored Normal file

@@ -0,0 +1,242 @@
package toml
// Struct field handling is adapted from code in encoding/json:
//
// Copyright 2010 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the Go distribution.
import (
"reflect"
"sort"
"sync"
)
// A field represents a single field found in a struct.
type field struct {
name string // the name of the field (`toml` tag included)
tag bool // whether field has a `toml` tag
index []int // represents the depth of an anonymous field
typ reflect.Type // the type of the field
}
// byName sorts fields by name, breaking ties with depth,
// then breaking ties with "name came from toml tag", then
// breaking ties with index sequence.
type byName []field
func (x byName) Len() int { return len(x) }
func (x byName) Swap(i, j int) { x[i], x[j] = x[j], x[i] }
func (x byName) Less(i, j int) bool {
if x[i].name != x[j].name {
return x[i].name < x[j].name
}
if len(x[i].index) != len(x[j].index) {
return len(x[i].index) < len(x[j].index)
}
if x[i].tag != x[j].tag {
return x[i].tag
}
return byIndex(x).Less(i, j)
}
// byIndex sorts fields by index sequence.
type byIndex []field
func (x byIndex) Len() int { return len(x) }
func (x byIndex) Swap(i, j int) { x[i], x[j] = x[j], x[i] }
func (x byIndex) Less(i, j int) bool {
for k, xik := range x[i].index {
if k >= len(x[j].index) {
return false
}
if xik != x[j].index[k] {
return xik < x[j].index[k]
}
}
return len(x[i].index) < len(x[j].index)
}
// typeFields returns a list of fields that TOML should recognize for the given
// type. The algorithm is breadth-first search over the set of structs to
// include - the top struct and then any reachable anonymous structs.
func typeFields(t reflect.Type) []field {
// Anonymous fields to explore at the current level and the next.
current := []field{}
next := []field{{typ: t}}
// Count of queued names for current level and the next.
count := map[reflect.Type]int{}
nextCount := map[reflect.Type]int{}
// Types already visited at an earlier level.
visited := map[reflect.Type]bool{}
// Fields found.
var fields []field
for len(next) > 0 {
current, next = next, current[:0]
count, nextCount = nextCount, map[reflect.Type]int{}
for _, f := range current {
if visited[f.typ] {
continue
}
visited[f.typ] = true
// Scan f.typ for fields to include.
for i := 0; i < f.typ.NumField(); i++ {
sf := f.typ.Field(i)
if sf.PkgPath != "" && !sf.Anonymous { // unexported
continue
}
opts := getOptions(sf.Tag)
if opts.skip {
continue
}
index := make([]int, len(f.index)+1)
copy(index, f.index)
index[len(f.index)] = i
ft := sf.Type
if ft.Name() == "" && ft.Kind() == reflect.Ptr {
// Follow pointer.
ft = ft.Elem()
}
// Record found field and index sequence.
if opts.name != "" || !sf.Anonymous || ft.Kind() != reflect.Struct {
tagged := opts.name != ""
name := opts.name
if name == "" {
name = sf.Name
}
fields = append(fields, field{name, tagged, index, ft})
if count[f.typ] > 1 {
// If there were multiple instances, add a second,
// so that the annihilation code will see a duplicate.
// It only cares about the distinction between 1 or 2,
// so don't bother generating any more copies.
fields = append(fields, fields[len(fields)-1])
}
continue
}
// Record new anonymous struct to explore in next round.
nextCount[ft]++
if nextCount[ft] == 1 {
f := field{name: ft.Name(), index: index, typ: ft}
next = append(next, f)
}
}
}
}
sort.Sort(byName(fields))
// Delete all fields that are hidden by the Go rules for embedded fields,
// except that fields with TOML tags are promoted.
// The fields are sorted in primary order of name, secondary order
// of field index length. Loop over names; for each name, delete
// hidden fields by choosing the one dominant field that survives.
out := fields[:0]
for advance, i := 0, 0; i < len(fields); i += advance {
// One iteration per name.
// Find the sequence of fields with the name of this first field.
fi := fields[i]
name := fi.name
for advance = 1; i+advance < len(fields); advance++ {
fj := fields[i+advance]
if fj.name != name {
break
}
}
if advance == 1 { // Only one field with this name
out = append(out, fi)
continue
}
dominant, ok := dominantField(fields[i : i+advance])
if ok {
out = append(out, dominant)
}
}
fields = out
sort.Sort(byIndex(fields))
return fields
}
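
The practical effect of the breadth-first search and the dominance rules is that a shallower (or tagged) field shadows a deeper embedded one. A hedged sketch, with hypothetical struct names:

```go
package main

import (
	"fmt"

	"github.com/BurntSushi/toml"
)

type Base struct {
	Name string // depth 1: shadowed by the field below
}

type Config struct {
	Base
	Name string `toml:"name"` // depth 0 and tagged: this field dominates
}

func main() {
	var c Config
	if _, err := toml.Decode(`name = "top-level"`, &c); err != nil {
		panic(err)
	}
	// Only the dominant field is populated; the embedded one stays empty.
	fmt.Printf("Config.Name=%q Base.Name=%q\n", c.Name, c.Base.Name)
}
```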
// dominantField looks through the fields, all of which are known to
// have the same name, to find the single field that dominates the
// others using Go's embedding rules, modified by the presence of
// TOML tags. If there are multiple top-level fields, the boolean
// will be false: This condition is an error in Go and we skip all
// the fields.
func dominantField(fields []field) (field, bool) {
// The fields are sorted in increasing index-length order. The winner
// must therefore be one with the shortest index length. Drop all
// longer entries, which is easy: just truncate the slice.
length := len(fields[0].index)
tagged := -1 // Index of first tagged field.
for i, f := range fields {
if len(f.index) > length {
fields = fields[:i]
break
}
if f.tag {
if tagged >= 0 {
// Multiple tagged fields at the same level: conflict.
// Return no field.
return field{}, false
}
tagged = i
}
}
if tagged >= 0 {
return fields[tagged], true
}
// All remaining fields have the same length. If there's more than one,
// we have a conflict (two fields named "X" at the same level) and we
// return no field.
if len(fields) > 1 {
return field{}, false
}
return fields[0], true
}
var fieldCache struct {
sync.RWMutex
m map[reflect.Type][]field
}
// cachedTypeFields is like typeFields but uses a cache to avoid repeated work.
func cachedTypeFields(t reflect.Type) []field {
fieldCache.RLock()
f := fieldCache.m[t]
fieldCache.RUnlock()
if f != nil {
return f
}
// Compute fields without lock.
// Might duplicate effort but won't hold other computations back.
f = typeFields(t)
if f == nil {
f = []field{}
}
fieldCache.Lock()
if fieldCache.m == nil {
fieldCache.m = map[reflect.Type][]field{}
}
fieldCache.m[t] = f
fieldCache.Unlock()
return f
}
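
The same double-checked RWMutex pattern, a read-lock lookup, an unlocked compute, then a write-lock publish, works for any memoized computation. A generic standalone sketch (names are hypothetical):

```go
package main

import (
	"fmt"
	"sync"
)

// memo caches an expensive computation (hypothetical example type).
type memo struct {
	mu sync.RWMutex
	m  map[string]int
}

func (c *memo) get(key string, compute func(string) int) int {
	// Fast path: cheap read lock.
	c.mu.RLock()
	v, ok := c.m[key]
	c.mu.RUnlock()
	if ok {
		return v
	}
	// Compute without any lock held; two goroutines may duplicate this
	// work, but neither blocks other readers while doing it.
	v = compute(key)
	c.mu.Lock()
	if c.m == nil {
		c.m = map[string]int{}
	}
	c.m[key] = v
	c.mu.Unlock()
	return v
}

func main() {
	var c memo
	fmt.Println(c.get("hello", func(s string) int { return len(s) }))
}
```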


@@ -1,41 +0,0 @@
package logrus
import (
"encoding/json"
"fmt"
)
type JSONFormatter struct {
// TimestampFormat sets the format used for marshaling timestamps.
TimestampFormat string
}
func (f *JSONFormatter) Format(entry *Entry) ([]byte, error) {
data := make(Fields, len(entry.Data)+3)
for k, v := range entry.Data {
switch v := v.(type) {
case error:
// Otherwise errors are ignored by `encoding/json`
// https://github.com/Sirupsen/logrus/issues/137
data[k] = v.Error()
default:
data[k] = v
}
}
prefixFieldClashes(data)
timestampFormat := f.TimestampFormat
if timestampFormat == "" {
timestampFormat = DefaultTimestampFormat
}
data["time"] = entry.Time.Format(timestampFormat)
data["msg"] = entry.Message
data["level"] = entry.Level.String()
serialized, err := json.Marshal(data)
if err != nil {
return nil, fmt.Errorf("Failed to marshal fields to JSON, %v", err)
}
return append(serialized, '\n'), nil
}


@@ -1,212 +0,0 @@
package logrus
import (
"io"
"os"
"sync"
)
type Logger struct {
// The logs are `io.Copy`'d to this in a mutex. It's common to set this to a
// file, or leave it at the default, which is `os.Stderr`. You can also set this to
// something more adventurous, such as logging to Kafka.
Out io.Writer
// Hooks for the logger instance. These allow firing events based on logging
// levels and log entries. For example, to send errors to an error tracking
// service, log to StatsD or dump the core on fatal errors.
Hooks LevelHooks
// All log entries pass through the formatter before logged to Out. The
// included formatters are `TextFormatter` and `JSONFormatter` for which
// TextFormatter is the default. In development (when a TTY is attached) it
// logs with colors, but not when logging to a file. You can easily implement your
// own that implements the `Formatter` interface, see the `README` or included
// formatters for examples.
Formatter Formatter
// The logging level the logger should log at. This is typically (and defaults
// to) `logrus.Info`, which allows Info(), Warn(), Error() and Fatal() to be
// logged. `logrus.Debug` is useful in development and debugging.
Level Level
// Used to sync writing to the log.
mu sync.Mutex
}
// New creates a new logger. Configuration should be set by changing `Formatter`,
// `Out` and `Hooks` directly on the default logger instance. You can also just
// instantiate your own:
//
// var log = &Logger{
// Out: os.Stderr,
// Formatter: new(JSONFormatter),
// Hooks: make(LevelHooks),
// Level: logrus.DebugLevel,
// }
//
// It's recommended to make this a global instance called `log`.
func New() *Logger {
return &Logger{
Out: os.Stderr,
Formatter: new(TextFormatter),
Hooks: make(LevelHooks),
Level: InfoLevel,
}
}
// WithField adds a field to the log entry. Note that it doesn't log until you
// call Debug, Print, Info, Warn, Fatal or Panic; it only creates a log entry.
// If you want multiple fields, use `WithFields`.
func (logger *Logger) WithField(key string, value interface{}) *Entry {
return NewEntry(logger).WithField(key, value)
}
// WithFields adds a map of fields to the log entry. All it does is call
// `WithField` for each `Field`.
func (logger *Logger) WithFields(fields Fields) *Entry {
return NewEntry(logger).WithFields(fields)
}
// WithError adds an error as a single field to the log entry. All it does is
// call `WithError` for the given `error`.
func (logger *Logger) WithError(err error) *Entry {
return NewEntry(logger).WithError(err)
}
func (logger *Logger) Debugf(format string, args ...interface{}) {
if logger.Level >= DebugLevel {
NewEntry(logger).Debugf(format, args...)
}
}
func (logger *Logger) Infof(format string, args ...interface{}) {
if logger.Level >= InfoLevel {
NewEntry(logger).Infof(format, args...)
}
}
func (logger *Logger) Printf(format string, args ...interface{}) {
NewEntry(logger).Printf(format, args...)
}
func (logger *Logger) Warnf(format string, args ...interface{}) {
if logger.Level >= WarnLevel {
NewEntry(logger).Warnf(format, args...)
}
}
func (logger *Logger) Warningf(format string, args ...interface{}) {
if logger.Level >= WarnLevel {
NewEntry(logger).Warnf(format, args...)
}
}
func (logger *Logger) Errorf(format string, args ...interface{}) {
if logger.Level >= ErrorLevel {
NewEntry(logger).Errorf(format, args...)
}
}
func (logger *Logger) Fatalf(format string, args ...interface{}) {
if logger.Level >= FatalLevel {
NewEntry(logger).Fatalf(format, args...)
}
os.Exit(1)
}
func (logger *Logger) Panicf(format string, args ...interface{}) {
if logger.Level >= PanicLevel {
NewEntry(logger).Panicf(format, args...)
}
}
func (logger *Logger) Debug(args ...interface{}) {
if logger.Level >= DebugLevel {
NewEntry(logger).Debug(args...)
}
}
func (logger *Logger) Info(args ...interface{}) {
if logger.Level >= InfoLevel {
NewEntry(logger).Info(args...)
}
}
func (logger *Logger) Print(args ...interface{}) {
NewEntry(logger).Info(args...)
}
func (logger *Logger) Warn(args ...interface{}) {
if logger.Level >= WarnLevel {
NewEntry(logger).Warn(args...)
}
}
func (logger *Logger) Warning(args ...interface{}) {
if logger.Level >= WarnLevel {
NewEntry(logger).Warn(args...)
}
}
func (logger *Logger) Error(args ...interface{}) {
if logger.Level >= ErrorLevel {
NewEntry(logger).Error(args...)
}
}
func (logger *Logger) Fatal(args ...interface{}) {
if logger.Level >= FatalLevel {
NewEntry(logger).Fatal(args...)
}
os.Exit(1)
}
func (logger *Logger) Panic(args ...interface{}) {
if logger.Level >= PanicLevel {
NewEntry(logger).Panic(args...)
}
}
func (logger *Logger) Debugln(args ...interface{}) {
if logger.Level >= DebugLevel {
NewEntry(logger).Debugln(args...)
}
}
func (logger *Logger) Infoln(args ...interface{}) {
if logger.Level >= InfoLevel {
NewEntry(logger).Infoln(args...)
}
}
func (logger *Logger) Println(args ...interface{}) {
NewEntry(logger).Println(args...)
}
func (logger *Logger) Warnln(args ...interface{}) {
if logger.Level >= WarnLevel {
NewEntry(logger).Warnln(args...)
}
}
func (logger *Logger) Warningln(args ...interface{}) {
if logger.Level >= WarnLevel {
NewEntry(logger).Warnln(args...)
}
}
func (logger *Logger) Errorln(args ...interface{}) {
if logger.Level >= ErrorLevel {
NewEntry(logger).Errorln(args...)
}
}
func (logger *Logger) Fatalln(args ...interface{}) {
if logger.Level >= FatalLevel {
NewEntry(logger).Fatalln(args...)
}
os.Exit(1)
}
func (logger *Logger) Panicln(args ...interface{}) {
if logger.Level >= PanicLevel {
NewEntry(logger).Panicln(args...)
}
}
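
Every leveled method above is gated on `logger.Level` before an Entry is allocated. A minimal usage sketch, assuming the `github.com/sirupsen/logrus` import path used elsewhere in this tree:

```go
package main

import "github.com/sirupsen/logrus"

func main() {
	logger := logrus.New()
	logger.Level = logrus.WarnLevel

	logger.Info("suppressed: Info is below the Warn threshold")
	logger.Warn("printed: Warn meets the threshold")
}
```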


@@ -1,15 +0,0 @@
// +build solaris
package logrus
import (
"os"
"golang.org/x/sys/unix"
)
// IsTerminal returns true if the given file descriptor is a terminal.
func IsTerminal() bool {
_, err := unix.IoctlGetTermios(int(os.Stdout.Fd()), unix.TCGETA)
return err == nil
}


@@ -1,27 +0,0 @@
// Based on ssh/terminal:
// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// +build windows
package logrus
import (
"syscall"
"unsafe"
)
var kernel32 = syscall.NewLazyDLL("kernel32.dll")
var (
procGetConsoleMode = kernel32.NewProc("GetConsoleMode")
)
// IsTerminal returns true if stderr's file descriptor is a terminal.
func IsTerminal() bool {
fd := syscall.Stderr
var st uint32
r, _, e := syscall.Syscall(procGetConsoleMode.Addr(), 2, uintptr(fd), uintptr(unsafe.Pointer(&st)), 0)
return r != 0 && e == 0
}


@@ -1,31 +0,0 @@
package logrus
import (
"bufio"
"io"
"runtime"
)
func (logger *Logger) Writer() *io.PipeWriter {
reader, writer := io.Pipe()
go logger.writerScanner(reader)
runtime.SetFinalizer(writer, writerFinalizer)
return writer
}
func (logger *Logger) writerScanner(reader *io.PipeReader) {
scanner := bufio.NewScanner(reader)
for scanner.Scan() {
logger.Print(scanner.Text())
}
if err := scanner.Err(); err != nil {
logger.Errorf("Error while reading from Writer: %s", err)
}
reader.Close()
}
func writerFinalizer(writer *io.PipeWriter) {
writer.Close()
}
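
A common use of `Writer()` is funneling the standard library's logger through logrus, one `Print` entry per scanned line. A hedged sketch under the same import-path assumption:

```go
package main

import (
	stdlog "log"
	"time"

	"github.com/sirupsen/logrus"
)

func main() {
	logger := logrus.New()
	w := logger.Writer() // *io.PipeWriter; each scanned line becomes a Print entry
	defer w.Close()

	stdlog.SetOutput(w)
	stdlog.Println("this line is re-emitted by logrus")

	// Crude: give the pipe-reading goroutine a moment to drain before exit.
	time.Sleep(100 * time.Millisecond)
}
```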

vendor/github.com/beorn7/perks/LICENSE generated vendored Normal file

@@ -0,0 +1,20 @@
Copyright (C) 2013 Blake Mizerany
Permission is hereby granted, free of charge, to any person obtaining
a copy of this software and associated documentation files (the
"Software"), to deal in the Software without restriction, including
without limitation the rights to use, copy, modify, merge, publish,
distribute, sublicense, and/or sell copies of the Software, and to
permit persons to whom the Software is furnished to do so, subject to
the following conditions:
The above copyright notice and this permission notice shall be
included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

vendor/github.com/beorn7/perks/README.md generated vendored Normal file

@@ -0,0 +1,31 @@
# Perks for Go (golang.org)
Perks contains the Go package quantile that computes approximate quantiles over
an unbounded data stream within low memory and CPU bounds.
For more information and examples, see:
http://godoc.org/github.com/bmizerany/perks
A very special thank you and shout out to Graham Cormode (Rutgers University),
Flip Korn (AT&T Labs Research), S. Muthukrishnan (Rutgers University), and
Divesh Srivastava (AT&T Labs Research) for their research and publication of
[Effective Computation of Biased Quantiles over Data Streams](http://www.cs.rutgers.edu/~muthu/bquant.pdf)
Thank you, also:
* Armon Dadgar (@armon)
* Andrew Gerrand (@nf)
* Brad Fitzpatrick (@bradfitz)
* Keith Rarick (@kr)
FAQ:
Q: Why not move the quantile package into the project root?
A: I want to add more packages to perks later.
Copyright (C) 2013 Blake Mizerany
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
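
A minimal usage sketch of the targeted-quantile API described above (the error bounds chosen here are illustrative):

```go
package main

import (
	"fmt"

	"github.com/beorn7/perks/quantile"
)

func main() {
	// Track the 50th, 90th and 99th percentiles with per-quantile
	// absolute error bounds, supplied a priori.
	q := quantile.NewTargeted(map[float64]float64{
		0.50: 0.005,
		0.90: 0.001,
		0.99: 0.0001,
	})
	for i := 1; i <= 10000; i++ {
		q.Insert(float64(i))
	}
	fmt.Println("count:", q.Count())
	fmt.Println("p50:", q.Query(0.50))
	fmt.Println("p99:", q.Query(0.99))
}
```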

vendor/github.com/beorn7/perks/quantile/stream.go generated vendored Normal file

@@ -0,0 +1,292 @@
// Package quantile computes approximate quantiles over an unbounded data
// stream within low memory and CPU bounds.
//
// A small amount of accuracy is traded to achieve the above properties.
//
// Multiple streams can be merged before calling Query to generate a single set
// of results. This is meaningful when the streams represent the same type of
// data. See Merge and Samples.
//
// For more detailed information about the algorithm used, see:
//
// Effective Computation of Biased Quantiles over Data Streams
//
// http://www.cs.rutgers.edu/~muthu/bquant.pdf
package quantile
import (
"math"
"sort"
)
// Sample holds an observed value and meta information for compression. JSON
// tags have been added for convenience.
type Sample struct {
Value float64 `json:",string"`
Width float64 `json:",string"`
Delta float64 `json:",string"`
}
// Samples represents a slice of samples. It implements sort.Interface.
type Samples []Sample
func (a Samples) Len() int { return len(a) }
func (a Samples) Less(i, j int) bool { return a[i].Value < a[j].Value }
func (a Samples) Swap(i, j int) { a[i], a[j] = a[j], a[i] }
type invariant func(s *stream, r float64) float64
// NewLowBiased returns an initialized Stream for low-biased quantiles
// (e.g. 0.01, 0.1, 0.5) where the needed quantiles are not known a priori, but
// error guarantees can still be given even for the lower ranks of the data
// distribution.
//
// The provided epsilon is a relative error, i.e. the true quantile of a value
// returned by a query is guaranteed to be within (1±Epsilon)*Quantile.
//
// See http://www.cs.rutgers.edu/~muthu/bquant.pdf for time, space, and error
// properties.
func NewLowBiased(epsilon float64) *Stream {
ƒ := func(s *stream, r float64) float64 {
return 2 * epsilon * r
}
return newStream(ƒ)
}
// NewHighBiased returns an initialized Stream for high-biased quantiles
// (e.g. 0.99, 0.9, 0.5) where the needed quantiles are not known a priori, but
// error guarantees can still be given even for the higher ranks of the data
// distribution.
//
// The provided epsilon is a relative error, i.e. the true quantile of a value
// returned by a query is guaranteed to be within 1-(1±Epsilon)*(1-Quantile).
//
// See http://www.cs.rutgers.edu/~muthu/bquant.pdf for time, space, and error
// properties.
func NewHighBiased(epsilon float64) *Stream {
ƒ := func(s *stream, r float64) float64 {
return 2 * epsilon * (s.n - r)
}
return newStream(ƒ)
}
// NewTargeted returns an initialized Stream concerned with a particular set of
// quantile values that are supplied a priori. Knowing these a priori reduces
// space and computation time. The targets map maps the desired quantiles to
// their absolute errors, i.e. the true quantile of a value returned by a query
// is guaranteed to be within (Quantile±Epsilon).
//
// See http://www.cs.rutgers.edu/~muthu/bquant.pdf for time, space, and error properties.
func NewTargeted(targets map[float64]float64) *Stream {
ƒ := func(s *stream, r float64) float64 {
var m = math.MaxFloat64
var f float64
for quantile, epsilon := range targets {
if quantile*s.n <= r {
f = (2 * epsilon * r) / quantile
} else {
f = (2 * epsilon * (s.n - r)) / (1 - quantile)
}
if f < m {
m = f
}
}
return m
}
return newStream(ƒ)
}
// Stream computes quantiles for a stream of float64s. It is not thread-safe by
// design. Take care when using across multiple goroutines.
type Stream struct {
*stream
b Samples
sorted bool
}
func newStream(ƒ invariant) *Stream {
x := &stream{ƒ: ƒ}
return &Stream{x, make(Samples, 0, 500), true}
}
// Insert inserts v into the stream.
func (s *Stream) Insert(v float64) {
s.insert(Sample{Value: v, Width: 1})
}
func (s *Stream) insert(sample Sample) {
s.b = append(s.b, sample)
s.sorted = false
if len(s.b) == cap(s.b) {
s.flush()
}
}
// Query returns the computed qth percentile value. If s was created with
// NewTargeted, and q is not in the set of quantiles provided a priori, Query
// will return an unspecified result.
func (s *Stream) Query(q float64) float64 {
if !s.flushed() {
// Fast path when there hasn't been enough data for a flush;
// this also yields better accuracy for small sets of data.
l := len(s.b)
if l == 0 {
return 0
}
i := int(math.Ceil(float64(l) * q))
if i > 0 {
i -= 1
}
s.maybeSort()
return s.b[i].Value
}
s.flush()
return s.stream.query(q)
}
// Merge merges samples into the underlying stream's samples. This is handy when
// merging multiple streams from separate threads, database shards, etc.
//
// ATTENTION: This method is broken and does not yield correct results. The
// underlying algorithm is not capable of merging streams correctly.
func (s *Stream) Merge(samples Samples) {
sort.Sort(samples)
s.stream.merge(samples)
}
// Reset reinitializes and clears the list, reusing the samples buffer memory.
func (s *Stream) Reset() {
s.stream.reset()
s.b = s.b[:0]
}
// Samples returns stream samples held by s.
func (s *Stream) Samples() Samples {
if !s.flushed() {
return s.b
}
s.flush()
return s.stream.samples()
}
// Count returns the total number of samples observed in the stream
// since initialization.
func (s *Stream) Count() int {
return len(s.b) + s.stream.count()
}
func (s *Stream) flush() {
s.maybeSort()
s.stream.merge(s.b)
s.b = s.b[:0]
}
func (s *Stream) maybeSort() {
if !s.sorted {
s.sorted = true
sort.Sort(s.b)
}
}
func (s *Stream) flushed() bool {
return len(s.stream.l) > 0
}
type stream struct {
n float64
l []Sample
ƒ invariant
}
func (s *stream) reset() {
s.l = s.l[:0]
s.n = 0
}
func (s *stream) insert(v float64) {
s.merge(Samples{{v, 1, 0}})
}
func (s *stream) merge(samples Samples) {
// TODO(beorn7): This tries to merge not only individual samples, but
// whole summaries. The paper doesn't mention merging summaries at
// all. Unittests show that the merging is inaccurate. Find out how to
// do merges properly.
var r float64
i := 0
for _, sample := range samples {
for ; i < len(s.l); i++ {
c := s.l[i]
if c.Value > sample.Value {
// Insert at position i.
s.l = append(s.l, Sample{})
copy(s.l[i+1:], s.l[i:])
s.l[i] = Sample{
sample.Value,
sample.Width,
math.Max(sample.Delta, math.Floor(s.ƒ(s, r))-1),
// TODO(beorn7): How to calculate delta correctly?
}
i++
goto inserted
}
r += c.Width
}
s.l = append(s.l, Sample{sample.Value, sample.Width, 0})
i++
inserted:
s.n += sample.Width
r += sample.Width
}
s.compress()
}
func (s *stream) count() int {
return int(s.n)
}
func (s *stream) query(q float64) float64 {
t := math.Ceil(q * s.n)
t += math.Ceil(s.ƒ(s, t) / 2)
p := s.l[0]
var r float64
for _, c := range s.l[1:] {
r += p.Width
if r+c.Width+c.Delta > t {
return p.Value
}
p = c
}
return p.Value
}
func (s *stream) compress() {
if len(s.l) < 2 {
return
}
x := s.l[len(s.l)-1]
xi := len(s.l) - 1
r := s.n - 1 - x.Width
for i := len(s.l) - 2; i >= 0; i-- {
c := s.l[i]
if c.Width+x.Width+x.Delta <= s.ƒ(s, r) {
x.Width += c.Width
s.l[xi] = x
// Remove element at i.
copy(s.l[i:], s.l[i+1:])
s.l = s.l[:len(s.l)-1]
xi -= 1
} else {
x = c
xi = i
}
r -= c.Width
}
}
func (s *stream) samples() Samples {
samples := make(Samples, len(s.l))
copy(samples, s.l)
return samples
}

vendor/github.com/containerd/continuity/LICENSE generated vendored Normal file

@@ -0,0 +1,202 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "{}"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright {yyyy} {name of copyright owner}
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

vendor/github.com/containerd/continuity/README.md generated vendored Normal file

@@ -0,0 +1,74 @@
# continuity
[![GoDoc](https://godoc.org/github.com/containerd/continuity?status.svg)](https://godoc.org/github.com/containerd/continuity)
[![Build Status](https://travis-ci.org/containerd/continuity.svg?branch=master)](https://travis-ci.org/containerd/continuity)
A transport-agnostic, filesystem metadata manifest system
This project is a staging area for experiments in providing transport-agnostic
metadata storage.
Please see https://github.com/opencontainers/specs/issues/11 for more details.
## Manifest Format
A continuity manifest encodes filesystem metadata in Protocol Buffers.
Please refer to [proto/manifest.proto](proto/manifest.proto).
## Usage
Build:
```console
$ make
```
Create a manifest (of this repo itself):
```console
$ ./bin/continuity build . > /tmp/a.pb
```
Dump a manifest:
```console
$ ./bin/continuity ls /tmp/a.pb
...
-rw-rw-r-- 270 B /.gitignore
-rw-rw-r-- 88 B /.mailmap
-rw-rw-r-- 187 B /.travis.yml
-rw-rw-r-- 359 B /AUTHORS
-rw-rw-r-- 11 kB /LICENSE
-rw-rw-r-- 1.5 kB /Makefile
...
-rw-rw-r-- 986 B /testutil_test.go
drwxrwxr-x 0 B /version
-rw-rw-r-- 478 B /version/version.go
```
Verify a manifest:
```console
$ ./bin/continuity verify . /tmp/a.pb
```
Break the directory and restore using the manifest:
```console
$ chmod 777 Makefile
$ ./bin/continuity verify . /tmp/a.pb
2017/06/23 08:00:34 error verifying manifest: resource "/Makefile" has incorrect mode: -rwxrwxrwx != -rw-rw-r--
$ ./bin/continuity apply . /tmp/a.pb
$ stat -c %a Makefile
664
$ ./bin/continuity verify . /tmp/a.pb
```
## Contribution Guide
### Building Proto Package
If you change the proto file, you will need to rebuild the generated Go code with `go generate`.
```console
$ go generate ./proto
```


@@ -0,0 +1,85 @@
package pathdriver
import (
"path/filepath"
)
// PathDriver provides all of the path manipulation functions in a common
// interface. The context should call these and never use the `filepath`
// package or any other package to manipulate paths.
type PathDriver interface {
Join(paths ...string) string
IsAbs(path string) bool
Rel(base, target string) (string, error)
Base(path string) string
Dir(path string) string
Clean(path string) string
Split(path string) (dir, file string)
Separator() byte
Abs(path string) (string, error)
Walk(string, filepath.WalkFunc) error
FromSlash(path string) string
ToSlash(path string) string
Match(pattern, name string) (matched bool, err error)
}
// pathDriver is a simple default implementation that calls the filepath package.
type pathDriver struct{}
// LocalPathDriver is the exported pathDriver struct for convenience.
var LocalPathDriver PathDriver = &pathDriver{}
func (*pathDriver) Join(paths ...string) string {
return filepath.Join(paths...)
}
func (*pathDriver) IsAbs(path string) bool {
return filepath.IsAbs(path)
}
func (*pathDriver) Rel(base, target string) (string, error) {
return filepath.Rel(base, target)
}
func (*pathDriver) Base(path string) string {
return filepath.Base(path)
}
func (*pathDriver) Dir(path string) string {
return filepath.Dir(path)
}
func (*pathDriver) Clean(path string) string {
return filepath.Clean(path)
}
func (*pathDriver) Split(path string) (dir, file string) {
return filepath.Split(path)
}
func (*pathDriver) Separator() byte {
return filepath.Separator
}
func (*pathDriver) Abs(path string) (string, error) {
return filepath.Abs(path)
}
// Note that filepath.Walk calls os.Stat, so if the context wants to
// call Driver.Stat() for Walk, it needs to create a new struct that
// overrides this method.
func (*pathDriver) Walk(root string, walkFn filepath.WalkFunc) error {
return filepath.Walk(root, walkFn)
}
func (*pathDriver) FromSlash(path string) string {
return filepath.FromSlash(path)
}
func (*pathDriver) ToSlash(path string) string {
return filepath.ToSlash(path)
}
func (*pathDriver) Match(pattern, name string) (bool, error) {
return filepath.Match(pattern, name)
}
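
Callers are expected to take a `PathDriver` rather than calling `filepath` directly; a minimal sketch using the exported `LocalPathDriver`:

```go
package main

import (
	"fmt"

	"github.com/containerd/continuity/pathdriver"
)

func main() {
	// Code written against PathDriver never imports filepath directly,
	// so the path semantics can be swapped per context.
	var d pathdriver.PathDriver = pathdriver.LocalPathDriver
	fmt.Println(d.Join("a", "b", "c")) // "a/b/c" on Unix
	fmt.Println(d.IsAbs("/etc"))       // true on Unix
}
```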

vendor/github.com/containerd/continuity/vendor.conf generated vendored Normal file

@@ -0,0 +1,13 @@
bazil.org/fuse 371fbbdaa8987b715bdd21d6adc4c9b20155f748
github.com/dustin/go-humanize bb3d318650d48840a39aa21a027c6630e198e626
github.com/golang/protobuf 1e59b77b52bf8e4b449a57e6f79f21226d571845
github.com/inconshreveable/mousetrap 76626ae9c91c4f2a10f34cad8ce83ea42c93bb75
github.com/opencontainers/go-digest 279bed98673dd5bef374d3b6e4b09e2af76183bf
github.com/pkg/errors f15c970de5b76fac0b59abb32d62c17cc7bed265
github.com/sirupsen/logrus 89742aefa4b206dcf400792f3bd35b542998eb3b
github.com/spf13/cobra 2da4a54c5ceefcee7ca5dd0eea1e18a3b6366489
github.com/spf13/pflag 4c012f6dcd9546820e378d0bdda4d8fc772cdfea
golang.org/x/crypto 9f005a07e0d31d45e6656d241bb5c0f2efd4bc94
golang.org/x/net a337091b0525af65de94df2eb7e98bd9962dcbe2
golang.org/x/sync 450f422ab23cf9881c94e2db30cac0eb1b7cf80c
golang.org/x/sys 665f6529cca930e27b831a0d1dafffbe1c172924


@@ -20,31 +20,62 @@ to another, for example docker container images to OCI images. It also allows
you to copy container images between various registries, possibly converting
them as necessary, and to sign and verify images.
The [skopeo](https://github.com/projectatomic/skopeo) tool uses the
containers/image library and takes advantage of its many features.
## Command-line usage
The containers/image project is only a library with no user interface;
you can either incorporate it into your Go programs, or use the `skopeo` tool:
The [skopeo](https://github.com/containers/skopeo) tool uses the
containers/image library and takes advantage of many of its features,
e.g. `skopeo copy` exposes the `containers/image/copy.Image` functionality.
## Dependencies
Dependencies that this library prefers will not be found in the `vendor`
directory. This is so you can make well-informed decisions about which
libraries you should use with this package in your own projects.
This library does not ship a committed version of its dependencies in a `vendor`
subdirectory. This is so you can make well-informed decisions about which
libraries you should use with this package in your own projects, and because
types defined in the `vendor` directory would be impossible to use from your projects.
What this project tests against dependencies-wise is located
[here](https://github.com/containers/image/blob/master/vendor.conf).
[in vendor.conf](https://github.com/containers/image/blob/master/vendor.conf).
## Building
For ordinary use, `go build ./...` is sufficient.
If you want to see what the library can do, or an example of how it is called,
consider starting with the [skopeo](https://github.com/containers/skopeo) tool
instead.
When developing this library, please use `make` to take advantage of the tests and validation.
To integrate this library into your project, put it into `$GOPATH` or use
your preferred vendoring tool to include a copy in your project.
Ensure that the dependencies documented [in vendor.conf](https://github.com/containers/image/blob/master/vendor.conf)
are also available
(using those exact versions or different versions of your choosing).
Optionally, you can use the `containers_image_openpgp` build tag (using `go build -tags …`, or `make … BUILDTAGS=…`).
This will use a Golang-only OpenPGP implementation for signature verification instead of the default cgo/gpgme-based implementation;
This library, by default, also depends on the GpgME and libostree C libraries. Either install them:
```sh
Fedora$ dnf install gpgme-devel libassuan-devel ostree-devel
macOS$ brew install gpgme
```
or use the build tags described below to avoid the dependencies (e.g. using `go build -tags …`)
### Supported build tags
- `containers_image_openpgp`: Use a Golang-only OpenPGP implementation for signature verification instead of the default cgo/gpgme-based implementation;
the primary downside is that creating new signatures with the Golang-only implementation is not supported.
- `containers_image_ostree_stub`: Instead of importing `ostree:` transport in `github.com/containers/image/transports/alltransports`, use a stub which reports that the transport is not supported. This allows building the library without requiring the `libostree` development libraries. The `github.com/containers/image/ostree` package is completely disabled
and impossible to import when this build tag is in use.
## [Contributing](CONTRIBUTING.md)
Information about contributing to this project.
When developing this library, please use `make` (or `make … BUILDTAGS=…`) to take advantage of the tests and validation.
## License
ASL 2.0
Apache License 2.0
SPDX-License-Identifier: Apache-2.0
## Contact


@@ -3,16 +3,15 @@ package copy
import (
"bytes"
"compress/gzip"
"context"
"fmt"
"io"
"io/ioutil"
"reflect"
"runtime"
"strings"
"time"
pb "gopkg.in/cheggaaa/pb.v1"
"github.com/Sirupsen/logrus"
"github.com/containers/image/image"
"github.com/containers/image/pkg/compression"
"github.com/containers/image/signature"
@@ -20,6 +19,8 @@ import (
"github.com/containers/image/types"
"github.com/opencontainers/go-digest"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
pb "gopkg.in/cheggaaa/pb.v1"
)
type digestingReader struct {
@@ -29,23 +30,6 @@ type digestingReader struct {
validationFailed bool
}
// imageCopier allows us to keep track of diffID values for blobs, and other
// data, that we're copying between images, and cache other information that
// might allow us to take some shortcuts
type imageCopier struct {
copiedBlobs map[digest.Digest]digest.Digest
cachedDiffIDs map[digest.Digest]digest.Digest
manifestUpdates *types.ManifestUpdateOptions
dest types.ImageDestination
src types.Image
rawSource types.ImageSource
diffIDsAreNeeded bool
canModifyManifest bool
reportWriter io.Writer
progressInterval time.Duration
progress chan types.ProgressProperties
}
// newDigestingReader returns an io.Reader implementation with contents of source, which will eventually return a non-EOF error
// and set validationFailed to true if the source stream does not match expectedDigest.
func newDigestingReader(source io.Reader, expectedDigest digest.Digest) (*digestingReader, error) {
@@ -84,6 +68,26 @@ func (d *digestingReader) Read(p []byte) (int, error) {
return n, err
}
// copier allows us to keep track of diffID values for blobs, and other
// data shared across one or more images in a possible manifest list.
type copier struct {
cachedDiffIDs map[digest.Digest]digest.Digest
dest types.ImageDestination
rawSource types.ImageSource
reportWriter io.Writer
progressInterval time.Duration
progress chan types.ProgressProperties
}
// imageCopier tracks state specific to a single image (possibly an item of a manifest list)
type imageCopier struct {
c *copier
manifestUpdates *types.ManifestUpdateOptions
src types.Image
diffIDsAreNeeded bool
canModifyManifest bool
}
// Options allows supplying non-default configuration modifying the behavior of CopyImage.
type Options struct {
RemoveSignatures bool // Remove any pre-existing signatures. SignBy will still add a new signature.
@@ -93,11 +97,14 @@ type Options struct {
DestinationCtx *types.SystemContext
ProgressInterval time.Duration // time to wait between reports to signal the progress channel
Progress chan types.ProgressProperties // Reported to when ProgressInterval has arrived for a single artifact+offset.
// manifest MIME type of the image set by the user. "" is the default and means use autodetection to determine the manifest MIME type
ForceManifestMIMEType string
}
// Image copies image from srcRef to destRef, using policyContext to validate
// source image admissibility.
func Image(policyContext *signature.PolicyContext, destRef, srcRef types.ImageReference, options *Options) (retErr error) {
// source image admissibility. It returns the manifest which was written to
// the new copy of the image.
func Image(ctx context.Context, policyContext *signature.PolicyContext, destRef, srcRef types.ImageReference, options *Options) (manifest []byte, retErr error) {
// NOTE this function uses an output parameter for the error return value.
// Setting this and returning is the ideal way to return an error.
//
@@ -113,13 +120,9 @@ func Image(policyContext *signature.PolicyContext, destRef, srcRef types.ImageRe
reportWriter = options.ReportWriter
}
writeReport := func(f string, a ...interface{}) {
fmt.Fprintf(reportWriter, f, a...)
}
dest, err := destRef.NewImageDestination(options.DestinationCtx)
dest, err := destRef.NewImageDestination(ctx, options.DestinationCtx)
if err != nil {
return errors.Wrapf(err, "Error initializing destination %s", transports.ImageName(destRef))
return nil, errors.Wrapf(err, "Error initializing destination %s", transports.ImageName(destRef))
}
defer func() {
if err := dest.Close(); err != nil {
@@ -127,97 +130,136 @@ func Image(policyContext *signature.PolicyContext, destRef, srcRef types.ImageRe
}
}()
destSupportedManifestMIMETypes := dest.SupportedManifestMIMETypes()
rawSource, err := srcRef.NewImageSource(options.SourceCtx, destSupportedManifestMIMETypes)
rawSource, err := srcRef.NewImageSource(ctx, options.SourceCtx)
if err != nil {
return errors.Wrapf(err, "Error initializing source %s", transports.ImageName(srcRef))
return nil, errors.Wrapf(err, "Error initializing source %s", transports.ImageName(srcRef))
}
unparsedImage := image.UnparsedFromSource(rawSource)
defer func() {
if unparsedImage != nil {
if err := unparsedImage.Close(); err != nil {
retErr = errors.Wrapf(retErr, " (unparsed: %v)", err)
}
if err := rawSource.Close(); err != nil {
retErr = errors.Wrapf(retErr, " (src: %v)", err)
}
}()
c := &copier{
cachedDiffIDs: make(map[digest.Digest]digest.Digest),
dest: dest,
rawSource: rawSource,
reportWriter: reportWriter,
progressInterval: options.ProgressInterval,
progress: options.Progress,
}
unparsedToplevel := image.UnparsedInstance(rawSource, nil)
multiImage, err := isMultiImage(ctx, unparsedToplevel)
if err != nil {
return nil, errors.Wrapf(err, "Error determining manifest MIME type for %s", transports.ImageName(srcRef))
}
if !multiImage {
// The simple case: Just copy a single image.
if manifest, err = c.copyOneImage(ctx, policyContext, options, unparsedToplevel); err != nil {
return nil, err
}
} else {
// This is a manifest list. Choose a single image and copy it.
// FIXME: Copy to destinations which support manifest lists, one image at a time.
instanceDigest, err := image.ChooseManifestInstanceFromManifestList(ctx, options.SourceCtx, unparsedToplevel)
if err != nil {
return nil, errors.Wrapf(err, "Error choosing an image from manifest list %s", transports.ImageName(srcRef))
}
logrus.Debugf("Source is a manifest list; copying (only) instance %s", instanceDigest)
unparsedInstance := image.UnparsedInstance(rawSource, &instanceDigest)
if manifest, err = c.copyOneImage(ctx, policyContext, options, unparsedInstance); err != nil {
return nil, err
}
}
if err := c.dest.Commit(ctx); err != nil {
return nil, errors.Wrap(err, "Error committing the finished image")
}
return manifest, nil
}
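
From a caller's perspective, the new signature means passing a context and receiving the written manifest bytes. A hedged sketch of the call shape (the policy and reference strings are illustrative, not a recommended configuration):

```go
package main

import (
	"context"
	"fmt"

	"github.com/containers/image/copy"
	"github.com/containers/image/signature"
	"github.com/containers/image/transports/alltransports"
)

func main() {
	// Accept-anything policy: for illustration only.
	policy := &signature.Policy{
		Default: signature.PolicyRequirements{signature.NewPRInsecureAcceptAnything()},
	}
	policyContext, err := signature.NewPolicyContext(policy)
	if err != nil {
		panic(err)
	}
	defer policyContext.Destroy()

	srcRef, err := alltransports.ParseImageName("docker://docker.io/library/busybox:latest")
	if err != nil {
		panic(err)
	}
	destRef, err := alltransports.ParseImageName("dir:/tmp/busybox")
	if err != nil {
		panic(err)
	}

	// copy.Image now takes a context and returns the manifest it wrote.
	manifestBytes, err := copy.Image(context.Background(), policyContext, destRef, srcRef, &copy.Options{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("copied; manifest is %d bytes\n", len(manifestBytes))
}
```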
// copyOneImage copies a single (non-manifest-list) image unparsedImage, using policyContext to validate
// source image admissibility.
func (c *copier) copyOneImage(ctx context.Context, policyContext *signature.PolicyContext, options *Options, unparsedImage *image.UnparsedImage) (manifest []byte, retErr error) {
// The caller is handling manifest lists; this could happen only if a manifest list contains a manifest list.
// Make sure we fail cleanly in such cases.
multiImage, err := isMultiImage(ctx, unparsedImage)
if err != nil {
// FIXME FIXME: How to name a reference for the sub-image?
return nil, errors.Wrapf(err, "Error determining manifest MIME type for %s", transports.ImageName(unparsedImage.Reference()))
}
if multiImage {
return nil, fmt.Errorf("Unexpectedly received a manifest list instead of a manifest for a single image")
}
// Please keep this policy check BEFORE reading any other information about the image.
if allowed, err := policyContext.IsRunningImageAllowed(unparsedImage); !allowed || err != nil { // Be paranoid and fail if either return value indicates so.
return errors.Wrap(err, "Source image rejected")
// (the multiImage check above only matches the MIME type, which we have received anyway.
// Actual parsing of anything should be deferred.)
if allowed, err := policyContext.IsRunningImageAllowed(ctx, unparsedImage); !allowed || err != nil { // Be paranoid and fail if either return value indicates so.
return nil, errors.Wrap(err, "Source image rejected")
}
src, err := image.FromUnparsedImage(unparsedImage)
src, err := image.FromUnparsedImage(ctx, options.SourceCtx, unparsedImage)
if err != nil {
return errors.Wrapf(err, "Error initializing image from source %s", transports.ImageName(srcRef))
return nil, errors.Wrapf(err, "Error initializing image from source %s", transports.ImageName(c.rawSource.Reference()))
}
unparsedImage = nil
defer func() {
if err := src.Close(); err != nil {
retErr = errors.Wrapf(retErr, " (source: %v)", err)
}
}()
if src.IsMultiImage() {
return errors.Errorf("can not copy %s: manifest contains multiple images", transports.ImageName(srcRef))
if err := checkImageDestinationForCurrentRuntimeOS(ctx, options.DestinationCtx, src, c.dest); err != nil {
return nil, err
}
var sigs [][]byte
if options.RemoveSignatures {
sigs = [][]byte{}
} else {
writeReport("Getting image source signatures\n")
s, err := src.Signatures()
c.Printf("Getting image source signatures\n")
s, err := src.Signatures(ctx)
if err != nil {
return errors.Wrap(err, "Error reading signatures")
return nil, errors.Wrap(err, "Error reading signatures")
}
sigs = s
}
if len(sigs) != 0 {
writeReport("Checking if image destination supports signatures\n")
if err := dest.SupportsSignatures(); err != nil {
return errors.Wrap(err, "Can not copy signatures")
c.Printf("Checking if image destination supports signatures\n")
if err := c.dest.SupportsSignatures(ctx); err != nil {
return nil, errors.Wrap(err, "Can not copy signatures")
}
}
canModifyManifest := len(sigs) == 0
manifestUpdates := types.ManifestUpdateOptions{}
manifestUpdates.InformationOnly.Destination = dest
ic := imageCopier{
c: c,
manifestUpdates: &types.ManifestUpdateOptions{InformationOnly: types.ManifestUpdateInformation{Destination: c.dest}},
src: src,
// diffIDsAreNeeded is computed later
canModifyManifest: len(sigs) == 0,
}
if err := updateEmbeddedDockerReference(&manifestUpdates, dest, src, canModifyManifest); err != nil {
return err
if err := ic.updateEmbeddedDockerReference(); err != nil {
return nil, err
}
// We compute preferredManifestMIMEType only to show it in error messages.
// Without having to add this context in an error message, we would be happy enough to know only that no conversion is needed.
preferredManifestMIMEType, otherManifestMIMETypeCandidates, err := determineManifestConversion(&manifestUpdates, src, destSupportedManifestMIMETypes, canModifyManifest)
preferredManifestMIMEType, otherManifestMIMETypeCandidates, err := ic.determineManifestConversion(ctx, c.dest.SupportedManifestMIMETypes(), options.ForceManifestMIMEType)
if err != nil {
return err
return nil, err
}
// If src.UpdatedImageNeedsLayerDiffIDs(manifestUpdates) will be true, it needs to be true by the time we get here.
ic := imageCopier{
copiedBlobs: make(map[digest.Digest]digest.Digest),
cachedDiffIDs: make(map[digest.Digest]digest.Digest),
manifestUpdates: &manifestUpdates,
dest: dest,
src: src,
rawSource: rawSource,
diffIDsAreNeeded: src.UpdatedImageNeedsLayerDiffIDs(manifestUpdates),
canModifyManifest: canModifyManifest,
reportWriter: reportWriter,
progressInterval: options.ProgressInterval,
progress: options.Progress,
}
// If src.UpdatedImageNeedsLayerDiffIDs(ic.manifestUpdates) will be true, it needs to be true by the time we get here.
ic.diffIDsAreNeeded = src.UpdatedImageNeedsLayerDiffIDs(*ic.manifestUpdates)
if err := ic.copyLayers(); err != nil {
return err
if err := ic.copyLayers(ctx); err != nil {
return nil, err
}
// With docker/distribution registries we do not know whether the registry accepts schema2 or schema1 only;
// and at least with the OpenShift registry "acceptschema2" option, there is no way to detect the support
// without actually trying to upload something and getting a types.ManifestTypeRejectedError.
// So, try the preferred manifest MIME type. If the process succeeds, fine…
manifest, err := ic.copyUpdatedConfigAndManifest()
manifest, err = ic.copyUpdatedConfigAndManifest(ctx)
if err != nil {
logrus.Debugf("Writing manifest using preferred type %s failed: %v", preferredManifestMIMEType, err)
// … if it fails, _and_ the failure is because the manifest is rejected, we may have other options.
@@ -225,22 +267,22 @@ func Image(policyContext *signature.PolicyContext, destRef, srcRef types.ImageRe
// We don't have other options.
// In principle the code below would handle this as well, but the resulting error message is fairly ugly.
// Don't bother the user with MIME types if we have no choice.
return err
return nil, err
}
// If the original MIME type is acceptable, determineManifestConversion always uses it as preferredManifestMIMEType.
// So if we are here, we will definitely be trying to convert the manifest.
// With !canModifyManifest, that would just be a string of repeated failures for the same reason,
// With !ic.canModifyManifest, that would just be a string of repeated failures for the same reason,
// so let's bail out early and with a better error message.
if !canModifyManifest {
return errors.Wrap(err, "Writing manifest failed (and converting it is not possible)")
if !ic.canModifyManifest {
return nil, errors.Wrap(err, "Writing manifest failed (and converting it is not possible)")
}
// errs is a list of errors when trying various manifest types. Also serves as an "upload succeeded" flag when set to nil.
errs := []string{fmt.Sprintf("%s(%v)", preferredManifestMIMEType, err)}
for _, manifestMIMEType := range otherManifestMIMETypeCandidates {
logrus.Debugf("Trying to use manifest type %s…", manifestMIMEType)
manifestUpdates.ManifestMIMEType = manifestMIMEType
attemptedManifest, err := ic.copyUpdatedConfigAndManifest()
ic.manifestUpdates.ManifestMIMEType = manifestMIMEType
attemptedManifest, err := ic.copyUpdatedConfigAndManifest(ctx)
if err != nil {
logrus.Debugf("Upload of manifest type %s failed: %v", manifestMIMEType, err)
errs = append(errs, fmt.Sprintf("%s(%v)", manifestMIMEType, err))
@@ -253,60 +295,100 @@ func Image(policyContext *signature.PolicyContext, destRef, srcRef types.ImageRe
break
}
if errs != nil {
return fmt.Errorf("Uploading manifest failed, attempted the following formats: %s", strings.Join(errs, ", "))
return nil, fmt.Errorf("Uploading manifest failed, attempted the following formats: %s", strings.Join(errs, ", "))
}
}
if options.SignBy != "" {
newSig, err := createSignature(dest, manifest, options.SignBy, reportWriter)
newSig, err := c.createSignature(manifest, options.SignBy)
if err != nil {
return err
return nil, err
}
sigs = append(sigs, newSig)
}
writeReport("Storing signatures\n")
if err := dest.PutSignatures(sigs); err != nil {
return errors.Wrap(err, "Error writing signatures")
c.Printf("Storing signatures\n")
if err := c.dest.PutSignatures(ctx, sigs); err != nil {
return nil, errors.Wrap(err, "Error writing signatures")
}
if err := dest.Commit(); err != nil {
return errors.Wrap(err, "Error committing the finished image")
}
return manifest, nil
}
// Printf writes a formatted string to c.reportWriter.
// Note that the method name Printf is not entirely arbitrary: (go tool vet)
// has a built-in list of functions/methods (whatever object they are for)
// which have their format strings checked; for other names we would have
// to pass a parameter to every (go tool vet) invocation.
func (c *copier) Printf(format string, a ...interface{}) {
fmt.Fprintf(c.reportWriter, format, a...)
}
func checkImageDestinationForCurrentRuntimeOS(ctx context.Context, sys *types.SystemContext, src types.Image, dest types.ImageDestination) error {
if dest.MustMatchRuntimeOS() {
wantedOS := runtime.GOOS
if sys != nil && sys.OSChoice != "" {
wantedOS = sys.OSChoice
}
c, err := src.OCIConfig(ctx)
if err != nil {
return errors.Wrapf(err, "Error parsing image configuration")
}
osErr := fmt.Errorf("image operating system %q cannot be used on %q", c.OS, wantedOS)
if wantedOS == "windows" && c.OS == "linux" {
return osErr
} else if wantedOS != "windows" && c.OS == "windows" {
return osErr
}
}
return nil
}
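checkImageDestinationForCurrentRuntimeOS rejects only the two windows/linux mismatches, in either direction; any other OS combination passes. The same rule as a standalone sketch (hypothetical helper name, not part of containers/image):

package main

import (
	"fmt"
	"runtime"
)

// osCompatible mirrors the rule above: a "windows" destination rejects
// "linux" images, and a non-"windows" destination rejects "windows" images.
func osCompatible(imageOS, wantedOS string) error {
	osErr := fmt.Errorf("image operating system %q cannot be used on %q", imageOS, wantedOS)
	if wantedOS == "windows" && imageOS == "linux" {
		return osErr
	} else if wantedOS != "windows" && imageOS == "windows" {
		return osErr
	}
	return nil
}

func main() {
	fmt.Println(osCompatible("linux", runtime.GOOS)) // nil on Linux hosts
	fmt.Println(osCompatible("windows", "linux"))    // error
}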
// updateEmbeddedDockerReference handles the Docker reference embedded in Docker schema1 manifests.
func updateEmbeddedDockerReference(manifestUpdates *types.ManifestUpdateOptions, dest types.ImageDestination, src types.Image, canModifyManifest bool) error {
destRef := dest.Reference().DockerReference()
func (ic *imageCopier) updateEmbeddedDockerReference() error {
if ic.c.dest.IgnoresEmbeddedDockerReference() {
return nil // Destination would prefer us not to update the embedded reference.
}
destRef := ic.c.dest.Reference().DockerReference()
if destRef == nil {
return nil // Destination does not care about Docker references
}
if !src.EmbeddedDockerReferenceConflicts(destRef) {
if !ic.src.EmbeddedDockerReferenceConflicts(destRef) {
return nil // No reference embedded in the manifest, or it matches destRef already.
}
if !canModifyManifest {
if !ic.canModifyManifest {
return errors.Errorf("Copying a schema1 image with an embedded Docker reference to %s (Docker reference %s) would invalidate existing signatures. Explicitly enable signature removal to proceed anyway",
transports.ImageName(dest.Reference()), destRef.String())
transports.ImageName(ic.c.dest.Reference()), destRef.String())
}
manifestUpdates.EmbeddedDockerReference = destRef
ic.manifestUpdates.EmbeddedDockerReference = destRef
return nil
}
// copyLayers copies layers from src/rawSource to dest, using and updating ic.manifestUpdates if necessary and ic.canModifyManifest.
func (ic *imageCopier) copyLayers() error {
// copyLayers copies layers from ic.src/ic.c.rawSource to dest, using and updating ic.manifestUpdates if necessary and ic.canModifyManifest.
func (ic *imageCopier) copyLayers(ctx context.Context) error {
srcInfos := ic.src.LayerInfos()
destInfos := []types.BlobInfo{}
diffIDs := []digest.Digest{}
updatedSrcInfos, err := ic.src.LayerInfosForCopy(ctx)
if err != nil {
return err
}
srcInfosUpdated := false
if updatedSrcInfos != nil && !reflect.DeepEqual(srcInfos, updatedSrcInfos) {
if !ic.canModifyManifest {
return errors.Errorf("Internal error: copyLayers() needs to use an updated manifest but that was known to be forbidden")
}
srcInfos = updatedSrcInfos
srcInfosUpdated = true
}
for _, srcLayer := range srcInfos {
var (
destInfo types.BlobInfo
diffID digest.Digest
err error
)
if ic.dest.AcceptsForeignLayerURLs() && len(srcLayer.URLs) != 0 {
if ic.c.dest.AcceptsForeignLayerURLs() && len(srcLayer.URLs) != 0 {
// DiffIDs are, currently, needed only when converting from schema1.
// In which case src.LayerInfos will not have URLs because schema1
// does not support them.
@@ -314,9 +396,9 @@ func (ic *imageCopier) copyLayers() error {
return errors.New("getting DiffID for foreign layers is unimplemented")
}
destInfo = srcLayer
fmt.Fprintf(ic.reportWriter, "Skipping foreign layer %q copy to %s\n", destInfo.Digest, ic.dest.Reference().Transport().Name())
ic.c.Printf("Skipping foreign layer %q copy to %s\n", destInfo.Digest, ic.c.dest.Reference().Transport().Name())
} else {
destInfo, diffID, err = ic.copyLayer(srcLayer)
destInfo, diffID, err = ic.copyLayer(ctx, srcLayer)
if err != nil {
return err
}
@@ -328,7 +410,7 @@ func (ic *imageCopier) copyLayers() error {
if ic.diffIDsAreNeeded {
ic.manifestUpdates.InformationOnly.LayerDiffIDs = diffIDs
}
if layerDigestsDiffer(srcInfos, destInfos) {
if srcInfosUpdated || layerDigestsDiffer(srcInfos, destInfos) {
ic.manifestUpdates.LayerInfos = destInfos
}
return nil
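LayerInfos in the manifest is now rewritten when either the source substituted layers (srcInfosUpdated) or any copied digest changed. The digest comparison itself is straightforward; a sketch over plain strings:

package main

import "fmt"

// layerDigestsDiffer, sketched over plain strings: true if the two
// lists differ in length or in any position.
func layerDigestsDiffer(a, b []string) bool {
	if len(a) != len(b) {
		return true
	}
	for i := range a {
		if a[i] != b[i] {
			return true
		}
	}
	return false
}

func main() {
	src := []string{"sha256:aaa", "sha256:bbb"}
	dst := []string{"sha256:aaa", "sha256:ccc"}
	fmt.Println(layerDigestsDiffer(src, dst)) // true → LayerInfos must be updated
}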
@@ -349,7 +431,7 @@ func layerDigestsDiffer(a, b []types.BlobInfo) bool {
// copyUpdatedConfigAndManifest updates the image per ic.manifestUpdates, if necessary,
// stores the resulting config and manifest to the destination, and returns the stored manifest.
func (ic *imageCopier) copyUpdatedConfigAndManifest() ([]byte, error) {
func (ic *imageCopier) copyUpdatedConfigAndManifest(ctx context.Context) ([]byte, error) {
pendingImage := ic.src
if !reflect.DeepEqual(*ic.manifestUpdates, types.ManifestUpdateOptions{InformationOnly: ic.manifestUpdates.InformationOnly}) {
if !ic.canModifyManifest {
@@ -359,43 +441,43 @@ func (ic *imageCopier) copyUpdatedConfigAndManifest() ([]byte, error) {
// We have set ic.diffIDsAreNeeded based on the preferred MIME type returned by determineManifestConversion.
// So, this can only happen if we are trying to upload using one of the other MIME type candidates.
// Because UpdatedImageNeedsLayerDiffIDs is true only when converting from s1 to s2, this case should only arise
// when ic.dest.SupportedManifestMIMETypes() includes both s1 and s2, the upload using s1 failed, and we are now trying s2.
// when ic.c.dest.SupportedManifestMIMETypes() includes both s1 and s2, the upload using s1 failed, and we are now trying s2.
// Supposedly s2-only registries do not exist or are extremely rare, so failing with this error message is good enough for now.
// If handling such registries turns out to be necessary, we could compute ic.diffIDsAreNeeded based on the full list of manifest MIME type candidates.
return nil, errors.Errorf("Can not convert image to %s, preparing DiffIDs for this case is not supported", ic.manifestUpdates.ManifestMIMEType)
}
pi, err := ic.src.UpdatedImage(*ic.manifestUpdates)
pi, err := ic.src.UpdatedImage(ctx, *ic.manifestUpdates)
if err != nil {
return nil, errors.Wrap(err, "Error creating an updated image manifest")
}
pendingImage = pi
}
manifest, _, err := pendingImage.Manifest()
manifest, _, err := pendingImage.Manifest(ctx)
if err != nil {
return nil, errors.Wrap(err, "Error reading manifest")
}
if err := ic.copyConfig(pendingImage); err != nil {
if err := ic.c.copyConfig(ctx, pendingImage); err != nil {
return nil, err
}
fmt.Fprintf(ic.reportWriter, "Writing manifest to image destination\n")
if err := ic.dest.PutManifest(manifest); err != nil {
ic.c.Printf("Writing manifest to image destination\n")
if err := ic.c.dest.PutManifest(ctx, manifest); err != nil {
return nil, errors.Wrap(err, "Error writing manifest")
}
return manifest, nil
}
// copyConfig copies config.json, if any, from src to dest.
func (ic *imageCopier) copyConfig(src types.Image) error {
func (c *copier) copyConfig(ctx context.Context, src types.Image) error {
srcInfo := src.ConfigInfo()
if srcInfo.Digest != "" {
fmt.Fprintf(ic.reportWriter, "Copying config %s\n", srcInfo.Digest)
configBlob, err := src.ConfigBlob()
c.Printf("Copying config %s\n", srcInfo.Digest)
configBlob, err := src.ConfigBlob(ctx)
if err != nil {
return errors.Wrapf(err, "Error reading config blob %s", srcInfo.Digest)
}
destInfo, err := ic.copyBlobFromStream(bytes.NewReader(configBlob), srcInfo, nil, false)
destInfo, err := c.copyBlobFromStream(ctx, bytes.NewReader(configBlob), srcInfo, nil, false, true)
if err != nil {
return err
}
@@ -415,14 +497,14 @@ type diffIDResult struct {
// copyLayer copies a layer with srcInfo (with known Digest and possibly known Size) in src to dest, perhaps compressing it if canCompress,
// and returns a complete blobInfo of the copied layer, and a value for LayerDiffIDs if diffIDIsNeeded
func (ic *imageCopier) copyLayer(srcInfo types.BlobInfo) (types.BlobInfo, digest.Digest, error) {
func (ic *imageCopier) copyLayer(ctx context.Context, srcInfo types.BlobInfo) (types.BlobInfo, digest.Digest, error) {
// Check if we already have a blob with this digest
haveBlob, extantBlobSize, err := ic.dest.HasBlob(srcInfo)
haveBlob, extantBlobSize, err := ic.c.dest.HasBlob(ctx, srcInfo)
if err != nil {
return types.BlobInfo{}, "", errors.Wrapf(err, "Error checking for blob %s at destination", srcInfo.Digest)
}
// If we already have a cached diffID for this blob, we don't need to compute it
diffIDIsNeeded := ic.diffIDsAreNeeded && (ic.cachedDiffIDs[srcInfo.Digest] == "")
diffIDIsNeeded := ic.diffIDsAreNeeded && (ic.c.cachedDiffIDs[srcInfo.Digest] == "")
// If we already have the blob, and we don't need to recompute the diffID, then we might be able to avoid reading it again
if haveBlob && !diffIDIsNeeded {
// Check the blob sizes match, if we were given a size this time
@@ -431,44 +513,49 @@ func (ic *imageCopier) copyLayer(srcInfo types.BlobInfo) (types.BlobInfo, digest
}
srcInfo.Size = extantBlobSize
// Tell the image destination that this blob's delta is being applied again. For some image destinations, this can be faster than using GetBlob/PutBlob
blobinfo, err := ic.dest.ReapplyBlob(srcInfo)
blobinfo, err := ic.c.dest.ReapplyBlob(ctx, srcInfo)
if err != nil {
return types.BlobInfo{}, "", errors.Wrapf(err, "Error reapplying blob %s at destination", srcInfo.Digest)
}
fmt.Fprintf(ic.reportWriter, "Skipping fetch of repeat blob %s\n", srcInfo.Digest)
return blobinfo, ic.cachedDiffIDs[srcInfo.Digest], err
ic.c.Printf("Skipping fetch of repeat blob %s\n", srcInfo.Digest)
return blobinfo, ic.c.cachedDiffIDs[srcInfo.Digest], err
}
// Fallback: copy the layer, computing the diffID if we need to do so
fmt.Fprintf(ic.reportWriter, "Copying blob %s\n", srcInfo.Digest)
srcStream, srcBlobSize, err := ic.rawSource.GetBlob(srcInfo)
ic.c.Printf("Copying blob %s\n", srcInfo.Digest)
srcStream, srcBlobSize, err := ic.c.rawSource.GetBlob(ctx, srcInfo)
if err != nil {
return types.BlobInfo{}, "", errors.Wrapf(err, "Error reading blob %s", srcInfo.Digest)
}
defer srcStream.Close()
blobInfo, diffIDChan, err := ic.copyLayerFromStream(srcStream, types.BlobInfo{Digest: srcInfo.Digest, Size: srcBlobSize},
blobInfo, diffIDChan, err := ic.copyLayerFromStream(ctx, srcStream, types.BlobInfo{Digest: srcInfo.Digest, Size: srcBlobSize},
diffIDIsNeeded)
if err != nil {
return types.BlobInfo{}, "", err
}
var diffIDResult diffIDResult // = {digest:""}
if diffIDIsNeeded {
diffIDResult = <-diffIDChan
if diffIDResult.err != nil {
return types.BlobInfo{}, "", errors.Wrap(diffIDResult.err, "Error computing layer DiffID")
select {
case <-ctx.Done():
return types.BlobInfo{}, "", ctx.Err()
case diffIDResult := <-diffIDChan:
if diffIDResult.err != nil {
return types.BlobInfo{}, "", errors.Wrap(diffIDResult.err, "Error computing layer DiffID")
}
logrus.Debugf("Computed DiffID %s for layer %s", diffIDResult.digest, srcInfo.Digest)
ic.c.cachedDiffIDs[srcInfo.Digest] = diffIDResult.digest
return blobInfo, diffIDResult.digest, nil
}
logrus.Debugf("Computed DiffID %s for layer %s", diffIDResult.digest, srcInfo.Digest)
ic.cachedDiffIDs[srcInfo.Digest] = diffIDResult.digest
} else {
return blobInfo, ic.c.cachedDiffIDs[srcInfo.Digest], nil
}
return blobInfo, diffIDResult.digest, nil
}
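Replacing the bare diffIDChan receive with a select means a cancelled context no longer leaves the caller blocked behind a stalled DiffID computation. The pattern in isolation (standalone sketch, hypothetical names):

package main

import (
	"context"
	"fmt"
	"time"
)

type diffIDResult struct {
	digest string
	err    error
}

// waitForDiffID blocks on the worker's result channel but gives up as
// soon as the context is cancelled.
func waitForDiffID(ctx context.Context, ch <-chan diffIDResult) (string, error) {
	select {
	case <-ctx.Done():
		return "", ctx.Err()
	case r := <-ch:
		return r.digest, r.err
	}
}

func main() {
	ch := make(chan diffIDResult, 1)
	go func() {
		time.Sleep(10 * time.Millisecond) // simulate the DiffID computation
		ch <- diffIDResult{digest: "sha256:…"}
	}()
	ctx, cancel := context.WithTimeout(context.Background(), time.Second)
	defer cancel()
	fmt.Println(waitForDiffID(ctx, ch))
}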
// copyLayerFromStream is an implementation detail of copyLayer; mostly providing a separate “defer” scope.
// it copies a blob with srcInfo (with known Digest and possibly known Size) from srcStream to dest,
// perhaps compressing the stream if canCompress,
// and returns a complete blobInfo of the copied blob and perhaps a <-chan diffIDResult if diffIDIsNeeded, to be read by the caller.
func (ic *imageCopier) copyLayerFromStream(srcStream io.Reader, srcInfo types.BlobInfo,
func (ic *imageCopier) copyLayerFromStream(ctx context.Context, srcStream io.Reader, srcInfo types.BlobInfo,
diffIDIsNeeded bool) (types.BlobInfo, <-chan diffIDResult, error) {
var getDiffIDRecorder func(compression.DecompressorFunc) io.Writer // = nil
var diffIDChan chan diffIDResult
@@ -493,7 +580,7 @@ func (ic *imageCopier) copyLayerFromStream(srcStream io.Reader, srcInfo types.Bl
return pipeWriter
}
}
blobInfo, err := ic.copyBlobFromStream(srcStream, srcInfo, getDiffIDRecorder, ic.canModifyManifest) // Sets err to nil on success
blobInfo, err := ic.c.copyBlobFromStream(ctx, srcStream, srcInfo, getDiffIDRecorder, ic.canModifyManifest, false) // Sets err to nil on success
return blobInfo, diffIDChan, err
// We need the defer … pipeWriter.CloseWithError() to happen HERE so that the caller can block on reading from diffIDChan
}
@@ -517,6 +604,7 @@ func computeDiffID(stream io.Reader, decompressor compression.DecompressorFunc)
if err != nil {
return "", err
}
defer s.Close()
stream = s
}
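The added defer s.Close() ensures the decompressor opened inside computeDiffID is released; the DiffID itself is the digest of the uncompressed stream. A sketch under the assumption of gzip input, using the opencontainers/go-digest API:

package main

import (
	"bytes"
	"compress/gzip"
	"fmt"
	"io"

	"github.com/opencontainers/go-digest"
)

// diffID sketches computeDiffID: compressed input is decompressed first,
// because a DiffID is defined over the uncompressed layer content.
// (The real code consults a detected decompressor, not always gzip.)
func diffID(stream io.Reader, compressed bool) (digest.Digest, error) {
	if compressed {
		s, err := gzip.NewReader(stream)
		if err != nil {
			return "", err
		}
		defer s.Close()
		stream = s
	}
	return digest.Canonical.FromReader(stream)
}

func main() {
	var buf bytes.Buffer
	gz := gzip.NewWriter(&buf)
	gz.Write([]byte("layer data"))
	gz.Close()
	fmt.Println(diffID(&buf, true))
}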
@@ -527,9 +615,9 @@ func computeDiffID(stream io.Reader, decompressor compression.DecompressorFunc)
// perhaps sending a copy to an io.Writer if getOriginalLayerCopyWriter != nil,
// perhaps compressing it if canCompress,
// and returns a complete blobInfo of the copied blob.
func (ic *imageCopier) copyBlobFromStream(srcStream io.Reader, srcInfo types.BlobInfo,
func (c *copier) copyBlobFromStream(ctx context.Context, srcStream io.Reader, srcInfo types.BlobInfo,
getOriginalLayerCopyWriter func(decompressor compression.DecompressorFunc) io.Writer,
canCompress bool) (types.BlobInfo, error) {
canModifyBlob bool, isConfig bool) (types.BlobInfo, error) {
// The copying happens through a pipeline of connected io.Readers.
// === Input: srcStream
@@ -555,13 +643,13 @@ func (ic *imageCopier) copyBlobFromStream(srcStream io.Reader, srcInfo types.Blo
// === Report progress using a pb.Reader.
bar := pb.New(int(srcInfo.Size)).SetUnits(pb.U_BYTES)
bar.Output = ic.reportWriter
bar.Output = c.reportWriter
bar.SetMaxWidth(80)
bar.ShowTimeLeft = false
bar.ShowPercent = false
bar.Start()
destStream = bar.NewProxyReader(destStream)
defer fmt.Fprint(ic.reportWriter, "\n")
defer bar.Finish()
// === Send a copy of the original, uncompressed, stream, to a separate path if necessary.
var originalLayerReader io.Reader // DO NOT USE this other than to drain the input if no other consumer in the pipeline has done so.
@@ -570,12 +658,9 @@ func (ic *imageCopier) copyBlobFromStream(srcStream io.Reader, srcInfo types.Blo
originalLayerReader = destStream
}
// === Compress the layer if it is uncompressed and compression is desired
// === Deal with layer compression/decompression if necessary
var inputInfo types.BlobInfo
if !canCompress || isCompressed || !ic.dest.ShouldCompressLayers() {
logrus.Debugf("Using original blob without modification")
inputInfo = srcInfo
} else {
if canModifyBlob && c.dest.DesiredLayerCompression() == types.Compress && !isCompressed {
logrus.Debugf("Compressing blob on the fly")
pipeReader, pipeWriter := io.Pipe()
defer pipeReader.Close()
@@ -587,21 +672,34 @@ func (ic *imageCopier) copyBlobFromStream(srcStream io.Reader, srcInfo types.Blo
destStream = pipeReader
inputInfo.Digest = ""
inputInfo.Size = -1
} else if canModifyBlob && c.dest.DesiredLayerCompression() == types.Decompress && isCompressed {
logrus.Debugf("Blob will be decompressed")
s, err := decompressor(destStream)
if err != nil {
return types.BlobInfo{}, err
}
defer s.Close()
destStream = s
inputInfo.Digest = ""
inputInfo.Size = -1
} else {
logrus.Debugf("Using original blob without modification")
inputInfo = srcInfo
}
// === Report progress using the ic.progress channel, if required.
if ic.progress != nil && ic.progressInterval > 0 {
// === Report progress using the c.progress channel, if required.
if c.progress != nil && c.progressInterval > 0 {
destStream = &progressReader{
source: destStream,
channel: ic.progress,
interval: ic.progressInterval,
channel: c.progress,
interval: c.progressInterval,
artifact: srcInfo,
lastTime: time.Now(),
}
}
// === Finally, send the layer stream to dest.
uploadedInfo, err := ic.dest.PutBlob(destStream, inputInfo)
uploadedInfo, err := c.dest.PutBlob(ctx, destStream, inputInfo, isConfig)
if err != nil {
return types.BlobInfo{}, errors.Wrap(err, "Error writing blob")
}
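The compression branch above streams through io.Pipe: a goroutine writes gzip output into the write half while PutBlob consumes the read half, and Digest/Size are reset to unknown because the bytes change in flight. A runnable sketch of that pattern (standalone, gzip assumed):

package main

import (
	"compress/gzip"
	"fmt"
	"io"
	"io/ioutil"
	"strings"
)

// compressOnTheFly returns a reader producing gzip-compressed data; a
// goroutine pumps src through the write half of an io.Pipe. Errors are
// propagated to the reader via CloseWithError.
func compressOnTheFly(src io.Reader) io.ReadCloser {
	pr, pw := io.Pipe()
	go func() {
		gz := gzip.NewWriter(pw)
		_, err := io.Copy(gz, src)
		if err == nil {
			err = gz.Close()
		}
		pw.CloseWithError(err) // nil err closes the pipe normally
	}()
	return pr
}

func main() {
	r := compressOnTheFly(strings.NewReader("uncompressed layer data"))
	defer r.Close()
	data, err := ioutil.ReadAll(r)
	fmt.Println(len(data), err)
}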


@@ -1,12 +1,13 @@
package copy
import (
"context"
"strings"
"github.com/Sirupsen/logrus"
"github.com/containers/image/manifest"
"github.com/containers/image/types"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
)
// preferredManifestMIMETypes lists manifest MIME types in order of our preference, if we can't use the original manifest and need to convert.
@@ -37,15 +38,24 @@ func (os *orderedSet) append(s string) {
}
}
// determineManifestConversion updates manifestUpdates to convert manifest to a supported MIME type, if necessary and canModifyManifest.
// Note that the conversion will only happen later, through src.UpdatedImage
// determineManifestConversion updates ic.manifestUpdates to convert manifest to a supported MIME type, if necessary and ic.canModifyManifest.
// Note that the conversion will only happen later, through ic.src.UpdatedImage
// Returns the preferred manifest MIME type (whether we are converting to it or using it unmodified),
// and a list of other possible alternatives, in order.
func determineManifestConversion(manifestUpdates *types.ManifestUpdateOptions, src types.Image, destSupportedManifestMIMETypes []string, canModifyManifest bool) (string, []string, error) {
_, srcType, err := src.Manifest()
func (ic *imageCopier) determineManifestConversion(ctx context.Context, destSupportedManifestMIMETypes []string, forceManifestMIMEType string) (string, []string, error) {
_, srcType, err := ic.src.Manifest(ctx)
if err != nil { // This should have been cached?!
return "", nil, errors.Wrap(err, "Error reading manifest")
}
normalizedSrcType := manifest.NormalizedMIMEType(srcType)
if srcType != normalizedSrcType {
logrus.Debugf("Source manifest MIME type %s, treating it as %s", srcType, normalizedSrcType)
srcType = normalizedSrcType
}
if forceManifestMIMEType != "" {
destSupportedManifestMIMETypes = []string{forceManifestMIMEType}
}
if len(destSupportedManifestMIMETypes) == 0 {
return srcType, []string{}, nil // Anything goes; just use the original as is, do not try any conversions.
@@ -67,10 +77,10 @@ func determineManifestConversion(manifestUpdates *types.ManifestUpdateOptions, s
if _, ok := supportedByDest[srcType]; ok {
prioritizedTypes.append(srcType)
}
if !canModifyManifest {
// We could also drop the !canModifyManifest parameter and have the caller
if !ic.canModifyManifest {
// We could also drop the !ic.canModifyManifest check and have the caller
// make the choice; it is already doing that to an extent, to improve error
// messages. But it is nice to hide the “if !canModifyManifest, do no conversion”
// messages. But it is nice to hide the “if !ic.canModifyManifest, do no conversion”
// special case in here; the caller can then worry (or not) only about a good UI.
logrus.Debugf("We can't modify the manifest, hoping for the best...")
return srcType, []string{}, nil // Take our chances - FIXME? Or should we fail without trying?
@@ -94,9 +104,18 @@ func determineManifestConversion(manifestUpdates *types.ManifestUpdateOptions, s
}
preferredType := prioritizedTypes.list[0]
if preferredType != srcType {
manifestUpdates.ManifestMIMEType = preferredType
ic.manifestUpdates.ManifestMIMEType = preferredType
} else {
logrus.Debugf("... will first try using the original manifest unmodified")
}
return preferredType, prioritizedTypes.list[1:], nil
}
// isMultiImage returns true if img is a list of images
func isMultiImage(ctx context.Context, img types.UnparsedImage) (bool, error) {
_, mt, err := img.Manifest(ctx)
if err != nil {
return false, err
}
return manifest.MIMETypeIsMultiImage(mt), nil
}
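prioritizedTypes relies on an order-preserving set: candidates are appended in preference order and duplicates are dropped, so the first element becomes preferredManifestMIMEType. A sketch of that shape (field types are an assumption):

package main

import "fmt"

// orderedSet sketch: first-seen order, duplicates dropped.
type orderedSet struct {
	list     []string
	included map[string]bool
}

func (os *orderedSet) append(s string) {
	if !os.included[s] {
		os.list = append(os.list, s)
		os.included[s] = true
	}
}

func main() {
	set := &orderedSet{included: map[string]bool{}}
	for _, t := range []string{"srcType", "preferredType", "srcType", "otherType"} {
		set.append(t)
	}
	fmt.Println(set.list) // [srcType preferredType otherType]
}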


@@ -1,17 +1,13 @@
package copy
import (
"fmt"
"io"
"github.com/containers/image/signature"
"github.com/containers/image/transports"
"github.com/containers/image/types"
"github.com/pkg/errors"
)
// createSignature creates a new signature of manifest at (identified by) dest using keyIdentity.
func createSignature(dest types.ImageDestination, manifest []byte, keyIdentity string, reportWriter io.Writer) ([]byte, error) {
// createSignature creates a new signature of manifest using keyIdentity.
func (c *copier) createSignature(manifest []byte, keyIdentity string) ([]byte, error) {
mech, err := signature.NewGPGSigningMechanism()
if err != nil {
return nil, errors.Wrap(err, "Error initializing GPG")
@@ -21,12 +17,12 @@ func createSignature(dest types.ImageDestination, manifest []byte, keyIdentity s
return nil, errors.Wrap(err, "Signing not supported")
}
dockerReference := dest.Reference().DockerReference()
dockerReference := c.dest.Reference().DockerReference()
if dockerReference == nil {
return nil, errors.Errorf("Cannot determine canonical Docker reference for destination %s", transports.ImageName(dest.Reference()))
return nil, errors.Errorf("Cannot determine canonical Docker reference for destination %s", transports.ImageName(c.dest.Reference()))
}
fmt.Fprintf(reportWriter, "Signing manifest\n")
c.Printf("Signing manifest\n")
newSig, err := signature.SignDockerManifest(manifest, dockerReference.String(), mech, keyIdentity)
if err != nil {
return nil, errors.Wrap(err, "Error creating signature")


@@ -1,22 +1,81 @@
package directory
import (
"context"
"io"
"io/ioutil"
"os"
"path/filepath"
"github.com/containers/image/types"
"github.com/opencontainers/go-digest"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
)
const version = "Directory Transport Version: 1.1\n"
// ErrNotContainerImageDir indicates that the directory doesn't match the expected contents of a directory created
// using the 'dir' transport
var ErrNotContainerImageDir = errors.New("not a containers image directory, don't want to overwrite important data")
type dirImageDestination struct {
ref dirReference
ref dirReference
compress bool
}
// newImageDestination returns an ImageDestination for writing to an existing directory.
func newImageDestination(ref dirReference) types.ImageDestination {
return &dirImageDestination{ref}
// newImageDestination returns an ImageDestination for writing to a directory.
func newImageDestination(ref dirReference, compress bool) (types.ImageDestination, error) {
d := &dirImageDestination{ref: ref, compress: compress}
// If the directory exists, check whether it is empty.
// If it is not empty, check whether its contents match those of a container image directory, and overwrite them.
// If the contents don't match, return an error.
dirExists, err := pathExists(d.ref.resolvedPath)
if err != nil {
return nil, errors.Wrapf(err, "error checking for path %q", d.ref.resolvedPath)
}
if dirExists {
isEmpty, err := isDirEmpty(d.ref.resolvedPath)
if err != nil {
return nil, err
}
if !isEmpty {
versionExists, err := pathExists(d.ref.versionPath())
if err != nil {
return nil, errors.Wrapf(err, "error checking if path exists %q", d.ref.versionPath())
}
if versionExists {
contents, err := ioutil.ReadFile(d.ref.versionPath())
if err != nil {
return nil, err
}
// check whether the contents of the version file are what we expect
if string(contents) != version {
return nil, ErrNotContainerImageDir
}
} else {
return nil, ErrNotContainerImageDir
}
// delete directory contents so that only one image is in the directory at a time
if err = removeDirContents(d.ref.resolvedPath); err != nil {
return nil, errors.Wrapf(err, "error erasing contents in %q", d.ref.resolvedPath)
}
logrus.Debugf("overwriting existing container image directory %q", d.ref.resolvedPath)
}
} else {
// create directory if it doesn't exist
if err := os.MkdirAll(d.ref.resolvedPath, 0755); err != nil {
return nil, errors.Wrapf(err, "unable to create directory %q", d.ref.resolvedPath)
}
}
// create version file
err = ioutil.WriteFile(d.ref.versionPath(), []byte(version), 0644)
if err != nil {
return nil, errors.Wrapf(err, "error creating version file %q", d.ref.versionPath())
}
return d, nil
}
// Reference returns the reference used to set up this destination. Note that this should directly correspond to user's intent,
@@ -36,13 +95,15 @@ func (d *dirImageDestination) SupportedManifestMIMETypes() []string {
// SupportsSignatures returns an error (to be displayed to the user) if the destination certainly can't store signatures.
// Note: It is still possible for PutSignatures to fail if SupportsSignatures returns nil.
func (d *dirImageDestination) SupportsSignatures() error {
func (d *dirImageDestination) SupportsSignatures(ctx context.Context) error {
return nil
}
// ShouldCompressLayers returns true iff it is desirable to compress layer blobs written to this destination.
func (d *dirImageDestination) ShouldCompressLayers() bool {
return false
func (d *dirImageDestination) DesiredLayerCompression() types.LayerCompression {
if d.compress {
return types.Compress
}
return types.PreserveOriginal
}
// AcceptsForeignLayerURLs returns false iff foreign layers in manifest should be actually
@@ -51,13 +112,25 @@ func (d *dirImageDestination) AcceptsForeignLayerURLs() bool {
return false
}
// MustMatchRuntimeOS returns true iff the destination can store only images targeted for the current runtime OS. False otherwise.
func (d *dirImageDestination) MustMatchRuntimeOS() bool {
return false
}
// IgnoresEmbeddedDockerReference returns true iff the destination does not care about Image.EmbeddedDockerReferenceConflicts(),
// and would prefer to receive an unmodified manifest instead of one modified for the destination.
// Does not make a difference if Reference().DockerReference() is nil.
func (d *dirImageDestination) IgnoresEmbeddedDockerReference() bool {
return false // N/A, DockerReference() returns nil.
}
// PutBlob writes contents of stream and returns data representing the result (with all data filled in).
// inputInfo.Digest can be optionally provided if known; it is not mandatory for the implementation to verify it.
// inputInfo.Size is the expected length of stream, if known.
// WARNING: The contents of stream are being verified on the fly. Until stream.Read() returns io.EOF, the contents of the data SHOULD NOT be available
// to any other readers for download using the supplied digest.
// If stream.Read() at any time, ESPECIALLY at end of input, returns an error, PutBlob MUST 1) fail, and 2) delete any data stored so far.
func (d *dirImageDestination) PutBlob(stream io.Reader, inputInfo types.BlobInfo) (types.BlobInfo, error) {
func (d *dirImageDestination) PutBlob(ctx context.Context, stream io.Reader, inputInfo types.BlobInfo, isConfig bool) (types.BlobInfo, error) {
blobFile, err := ioutil.TempFile(d.ref.path, "dir-put-blob")
if err != nil {
return types.BlobInfo{}, err
@@ -73,6 +146,7 @@ func (d *dirImageDestination) PutBlob(stream io.Reader, inputInfo types.BlobInfo
digester := digest.Canonical.Digester()
tee := io.TeeReader(stream, digester.Hash())
// TODO: This can take quite some time, and should ideally be cancellable using ctx.Done().
size, err := io.Copy(blobFile, tee)
if err != nil {
return types.BlobInfo{}, err
@@ -99,7 +173,7 @@ func (d *dirImageDestination) PutBlob(stream io.Reader, inputInfo types.BlobInfo
// Unlike PutBlob, the digest can not be empty. If HasBlob returns true, the size of the blob must also be returned.
// If the destination does not contain the blob, or it is unknown, HasBlob ordinarily returns (false, -1, nil);
// it returns a non-nil error only on an unexpected failure.
func (d *dirImageDestination) HasBlob(info types.BlobInfo) (bool, int64, error) {
func (d *dirImageDestination) HasBlob(ctx context.Context, info types.BlobInfo) (bool, int64, error) {
if info.Digest == "" {
return false, -1, errors.Errorf(`Can not check for a blob with unknown digest`)
}
@@ -114,7 +188,7 @@ func (d *dirImageDestination) HasBlob(info types.BlobInfo) (bool, int64, error)
return true, finfo.Size(), nil
}
func (d *dirImageDestination) ReapplyBlob(info types.BlobInfo) (types.BlobInfo, error) {
func (d *dirImageDestination) ReapplyBlob(ctx context.Context, info types.BlobInfo) (types.BlobInfo, error) {
return info, nil
}
@@ -122,11 +196,11 @@ func (d *dirImageDestination) ReapplyBlob(info types.BlobInfo) (types.BlobInfo,
// FIXME? This should also receive a MIME type if known, to differentiate between schema versions.
// If the destination is in principle available, refuses this manifest type (e.g. it does not recognize the schema),
// but may accept a different manifest type, the returned error must be an ManifestTypeRejectedError.
func (d *dirImageDestination) PutManifest(manifest []byte) error {
func (d *dirImageDestination) PutManifest(ctx context.Context, manifest []byte) error {
return ioutil.WriteFile(d.ref.manifestPath(), manifest, 0644)
}
func (d *dirImageDestination) PutSignatures(signatures [][]byte) error {
func (d *dirImageDestination) PutSignatures(ctx context.Context, signatures [][]byte) error {
for i, sig := range signatures {
if err := ioutil.WriteFile(d.ref.signaturePath(i), sig, 0644); err != nil {
return err
@@ -139,6 +213,42 @@ func (d *dirImageDestination) PutSignatures(signatures [][]byte) error {
// WARNING: This does not have any transactional semantics:
// - Uploaded data MAY be visible to others before Commit() is called
// - Uploaded data MAY be removed or MAY remain around if Close() is called without Commit() (i.e. rollback is allowed but not guaranteed)
func (d *dirImageDestination) Commit() error {
func (d *dirImageDestination) Commit(ctx context.Context) error {
return nil
}
// returns true if path exists
func pathExists(path string) (bool, error) {
_, err := os.Stat(path)
if err == nil {
return true, nil
}
if err != nil && os.IsNotExist(err) {
return false, nil
}
return false, err
}
// returns true if directory is empty
func isDirEmpty(path string) (bool, error) {
files, err := ioutil.ReadDir(path)
if err != nil {
return false, err
}
return len(files) == 0, nil
}
// deletes the contents of a directory
func removeDirContents(path string) error {
files, err := ioutil.ReadDir(path)
if err != nil {
return err
}
for _, file := range files {
if err := os.RemoveAll(filepath.Join(path, file.Name())); err != nil {
return err
}
}
return nil
}
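newImageDestination now refuses to clobber arbitrary non-empty directories: reuse requires a matching version file, after which the old contents are erased. The guard in isolation (hypothetical canReuseDir helper, not part of containers/image):

package main

import (
	"fmt"
	"io/ioutil"
	"os"
	"path/filepath"
)

const version = "Directory Transport Version: 1.1\n"

// canReuseDir applies the same guard: an existing directory may be
// reused if it is empty, or if it carries a matching version file.
func canReuseDir(dir string) (bool, error) {
	entries, err := ioutil.ReadDir(dir)
	if err != nil {
		return false, err
	}
	if len(entries) == 0 {
		return true, nil
	}
	contents, err := ioutil.ReadFile(filepath.Join(dir, "version"))
	if os.IsNotExist(err) {
		return false, nil
	}
	if err != nil {
		return false, err
	}
	return string(contents) == version, nil
}

func main() {
	dir, _ := ioutil.TempDir("", "dir-transport")
	defer os.RemoveAll(dir)
	fmt.Println(canReuseDir(dir)) // true <nil>: empty directory
}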


@@ -1,6 +1,7 @@
package directory
import (
"context"
"io"
"io/ioutil"
"os"
@@ -34,7 +35,12 @@ func (s *dirImageSource) Close() error {
// GetManifest returns the image's manifest along with its MIME type (which may be empty when it can't be determined but the manifest is available).
// It may use a remote (= slow) service.
func (s *dirImageSource) GetManifest() ([]byte, string, error) {
// If instanceDigest is not nil, it contains a digest of the specific manifest instance to retrieve (when the primary manifest is a manifest list);
// this never happens if the primary manifest is not a manifest list (e.g. if the source never returns manifest lists).
func (s *dirImageSource) GetManifest(ctx context.Context, instanceDigest *digest.Digest) ([]byte, string, error) {
if instanceDigest != nil {
return nil, "", errors.Errorf(`Getting target manifest not supported by "dir:"`)
}
m, err := ioutil.ReadFile(s.ref.manifestPath())
if err != nil {
return nil, "", err
@@ -42,24 +48,27 @@ func (s *dirImageSource) GetManifest() ([]byte, string, error) {
return m, manifest.GuessMIMEType(m), err
}
func (s *dirImageSource) GetTargetManifest(digest digest.Digest) ([]byte, string, error) {
return nil, "", errors.Errorf(`Getting target manifest not supported by "dir:"`)
}
// GetBlob returns a stream for the specified blob, and the blob's size (or -1 if unknown).
func (s *dirImageSource) GetBlob(info types.BlobInfo) (io.ReadCloser, int64, error) {
func (s *dirImageSource) GetBlob(ctx context.Context, info types.BlobInfo) (io.ReadCloser, int64, error) {
r, err := os.Open(s.ref.layerPath(info.Digest))
if err != nil {
return nil, 0, nil
return nil, -1, err
}
fi, err := r.Stat()
if err != nil {
return nil, 0, nil
return nil, -1, err
}
return r, fi.Size(), nil
}
func (s *dirImageSource) GetSignatures() ([][]byte, error) {
// GetSignatures returns the image's signatures. It may use a remote (= slow) service.
// If instanceDigest is not nil, it contains a digest of the specific manifest instance to retrieve signatures for
// (when the primary manifest is a manifest list); this never happens if the primary manifest is not a manifest list
// (e.g. if the source never returns manifest lists).
func (s *dirImageSource) GetSignatures(ctx context.Context, instanceDigest *digest.Digest) ([][]byte, error) {
if instanceDigest != nil {
return nil, errors.Errorf(`Manifest lists are not supported by "dir:"`)
}
signatures := [][]byte{}
for i := 0; ; i++ {
signature, err := ioutil.ReadFile(s.ref.signaturePath(i))
@@ -73,3 +82,8 @@ func (s *dirImageSource) GetSignatures() ([][]byte, error) {
}
return signatures, nil
}
// LayerInfosForCopy() returns updated layer info that should be used when copying, in preference to values in the manifest, if specified.
func (s *dirImageSource) LayerInfosForCopy(ctx context.Context) ([]types.BlobInfo, error) {
return nil, nil
}
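GetSignatures reads signature-1, signature-2, … until the first missing file, matching signaturePath's 1-based naming. The loop as a standalone sketch (hypothetical helper name):

package main

import (
	"fmt"
	"io/ioutil"
	"os"
	"path/filepath"
)

// readNumberedSignatures mirrors the GetSignatures loop: read numbered
// files until the next one does not exist.
func readNumberedSignatures(dir string) ([][]byte, error) {
	sigs := [][]byte{}
	for i := 1; ; i++ {
		data, err := ioutil.ReadFile(filepath.Join(dir, fmt.Sprintf("signature-%d", i)))
		if os.IsNotExist(err) {
			return sigs, nil
		}
		if err != nil {
			return nil, err
		}
		sigs = append(sigs, data)
	}
}

func main() {
	dir, _ := ioutil.TempDir("", "sigs")
	defer os.RemoveAll(dir)
	ioutil.WriteFile(filepath.Join(dir, "signature-1"), []byte("sig"), 0644)
	sigs, _ := readNumberedSignatures(dir)
	fmt.Println(len(sigs)) // 1
}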


@@ -1,18 +1,18 @@
package directory
import (
"context"
"fmt"
"path/filepath"
"strings"
"github.com/pkg/errors"
"github.com/containers/image/directory/explicitfilepath"
"github.com/containers/image/docker/reference"
"github.com/containers/image/image"
"github.com/containers/image/transports"
"github.com/containers/image/types"
"github.com/opencontainers/go-digest"
"github.com/pkg/errors"
)
func init() {
@@ -134,31 +134,34 @@ func (ref dirReference) PolicyConfigurationNamespaces() []string {
return res
}
// NewImage returns a types.Image for this reference, possibly specialized for this ImageTransport.
// The caller must call .Close() on the returned Image.
// NewImage returns a types.ImageCloser for this reference, possibly specialized for this ImageTransport.
// The caller must call .Close() on the returned ImageCloser.
// NOTE: If any kind of signature verification should happen, build an UnparsedImage from the value returned by NewImageSource,
// verify that UnparsedImage, and convert it into a real Image via image.FromUnparsedImage.
func (ref dirReference) NewImage(ctx *types.SystemContext) (types.Image, error) {
// WARNING: This may not do the right thing for a manifest list, see image.FromSource for details.
func (ref dirReference) NewImage(ctx context.Context, sys *types.SystemContext) (types.ImageCloser, error) {
src := newImageSource(ref)
return image.FromSource(src)
return image.FromSource(ctx, sys, src)
}
// NewImageSource returns a types.ImageSource for this reference,
// asking the backend to use a manifest from requestedManifestMIMETypes if possible.
// nil requestedManifestMIMETypes means manifest.DefaultRequestedManifestMIMETypes.
// NewImageSource returns a types.ImageSource for this reference.
// The caller must call .Close() on the returned ImageSource.
func (ref dirReference) NewImageSource(ctx *types.SystemContext, requestedManifestMIMETypes []string) (types.ImageSource, error) {
func (ref dirReference) NewImageSource(ctx context.Context, sys *types.SystemContext) (types.ImageSource, error) {
return newImageSource(ref), nil
}
// NewImageDestination returns a types.ImageDestination for this reference.
// The caller must call .Close() on the returned ImageDestination.
func (ref dirReference) NewImageDestination(ctx *types.SystemContext) (types.ImageDestination, error) {
return newImageDestination(ref), nil
func (ref dirReference) NewImageDestination(ctx context.Context, sys *types.SystemContext) (types.ImageDestination, error) {
compress := false
if sys != nil {
compress = sys.DirForceCompress
}
return newImageDestination(ref, compress)
}
// DeleteImage deletes the named image from the registry, if supported.
func (ref dirReference) DeleteImage(ctx *types.SystemContext) error {
func (ref dirReference) DeleteImage(ctx context.Context, sys *types.SystemContext) error {
return errors.Errorf("Deleting images not implemented for dir: images")
}
@@ -170,10 +173,15 @@ func (ref dirReference) manifestPath() string {
// layerPath returns a path for a layer tarball within a directory using our conventions.
func (ref dirReference) layerPath(digest digest.Digest) string {
// FIXME: Should we keep the digest identification?
return filepath.Join(ref.path, digest.Hex()+".tar")
return filepath.Join(ref.path, digest.Hex())
}
// signaturePath returns a path for a signature within a directory using our conventions.
func (ref dirReference) signaturePath(index int) string {
return filepath.Join(ref.path, fmt.Sprintf("signature-%d", index+1))
}
// versionPath returns a path for the version file within a directory using our conventions.
func (ref dirReference) versionPath() string {
return filepath.Join(ref.path, "version")
}


@@ -1,6 +1,7 @@
package archive
import (
"context"
"io"
"os"
@@ -15,28 +16,42 @@ type archiveImageDestination struct {
writer io.Closer
}
func newImageDestination(ctx *types.SystemContext, ref archiveReference) (types.ImageDestination, error) {
if ref.destinationRef == nil {
return nil, errors.Errorf("docker-archive: destination reference not supplied (must be of form <path>:<reference:tag>)")
}
fh, err := os.OpenFile(ref.path, os.O_WRONLY|os.O_EXCL|os.O_CREATE, 0644)
func newImageDestination(sys *types.SystemContext, ref archiveReference) (types.ImageDestination, error) {
// ref.path can be either a pipe or a regular file
// in the case of a pipe, we require that we can open it for write
// in the case of a regular file, we don't want to overwrite any pre-existing file
// so we check for Size() == 0 below (This is racy, but using O_EXCL would also be racy,
// only in a different way. Either way, it's up to the user to not have two writers to the same path.)
fh, err := os.OpenFile(ref.path, os.O_WRONLY|os.O_CREATE, 0644)
if err != nil {
// FIXME: It should be possible to modify archives, but the only really
// sane way of doing it is to create a copy of the image, modify
// it and then do a rename(2).
if os.IsExist(err) {
err = errors.New("docker-archive doesn't support modifying existing images")
}
return nil, err
return nil, errors.Wrapf(err, "error opening file %q", ref.path)
}
fhStat, err := fh.Stat()
if err != nil {
return nil, errors.Wrapf(err, "error statting file %q", ref.path)
}
if fhStat.Mode().IsRegular() && fhStat.Size() != 0 {
return nil, errors.New("docker-archive doesn't support modifying existing images")
}
tarDest := tarfile.NewDestination(fh, ref.destinationRef)
if sys != nil && sys.DockerArchiveAdditionalTags != nil {
tarDest.AddRepoTags(sys.DockerArchiveAdditionalTags)
}
return &archiveImageDestination{
Destination: tarfile.NewDestination(fh, ref.destinationRef),
Destination: tarDest,
ref: ref,
writer: fh,
}, nil
}
// DesiredLayerCompression indicates if layers must be compressed, decompressed or preserved
func (d *archiveImageDestination) DesiredLayerCompression() types.LayerCompression {
return types.Decompress
}
// Reference returns the reference used to set up this destination. Note that this should directly correspond to user's intent,
// e.g. it should use the public hostname instead of the result of resolving CNAMEs or following redirects.
func (d *archiveImageDestination) Reference() types.ImageReference {
@@ -52,6 +67,6 @@ func (d *archiveImageDestination) Close() error {
// WARNING: This does not have any transactional semantics:
// - Uploaded data MAY be visible to others before Commit() is called
// - Uploaded data MAY be removed or MAY remain around if Close() is called without Commit() (i.e. rollback is allowed but not guaranteed)
func (d *archiveImageDestination) Commit() error {
return d.Destination.Commit()
func (d *archiveImageDestination) Commit(ctx context.Context) error {
return d.Destination.Commit(ctx)
}
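Dropping O_EXCL lets ref.path be a FIFO, while the Stat-based size check still refuses to modify an existing non-empty regular file; as the comment notes, both approaches are racy. A sketch of the open-and-check sequence (hypothetical helper name):

package main

import (
	"errors"
	"fmt"
	"io/ioutil"
	"os"
)

// openArchivePath opens without O_EXCL (so pipes work), then refuses to
// reuse a non-empty regular file.
func openArchivePath(path string) (*os.File, error) {
	fh, err := os.OpenFile(path, os.O_WRONLY|os.O_CREATE, 0644)
	if err != nil {
		return nil, err
	}
	st, err := fh.Stat()
	if err != nil {
		fh.Close()
		return nil, err
	}
	if st.Mode().IsRegular() && st.Size() != 0 {
		fh.Close()
		return nil, errors.New("docker-archive doesn't support modifying existing images")
	}
	return fh, nil
}

func main() {
	f, _ := ioutil.TempFile("", "archive")
	f.Close()
	defer os.Remove(f.Name())
	fh, err := openArchivePath(f.Name()) // empty file: succeeds
	fmt.Println(fh != nil, err)
	if fh != nil {
		fh.Close()
	}
}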


@@ -1,9 +1,10 @@
package archive
import (
"github.com/Sirupsen/logrus"
"context"
"github.com/containers/image/docker/tarfile"
"github.com/containers/image/types"
"github.com/sirupsen/logrus"
)
type archiveImageSource struct {
@@ -13,15 +14,18 @@ type archiveImageSource struct {
// newImageSource returns a types.ImageSource for the specified image reference.
// The caller must call .Close() on the returned ImageSource.
func newImageSource(ctx *types.SystemContext, ref archiveReference) types.ImageSource {
func newImageSource(ctx context.Context, ref archiveReference) (types.ImageSource, error) {
if ref.destinationRef != nil {
logrus.Warnf("docker-archive: references are not supported for sources (ignoring)")
}
src := tarfile.NewSource(ref.path)
src, err := tarfile.NewSourceFromFile(ref.path)
if err != nil {
return nil, err
}
return &archiveImageSource{
Source: src,
ref: ref,
}
}, nil
}
// Reference returns the reference used to set up this source, _as specified by the user_
@@ -30,7 +34,7 @@ func (s *archiveImageSource) Reference() types.ImageReference {
return s.ref
}
// Close removes resources associated with an initialized ImageSource, if any.
func (s *archiveImageSource) Close() error {
return nil
// LayerInfosForCopy() returns updated layer info that should be used when reading, in preference to values in the manifest, if specified.
func (s *archiveImageSource) LayerInfosForCopy(ctx context.Context) ([]types.BlobInfo, error) {
return nil, nil
}


@@ -1,6 +1,7 @@
package archive
import (
"context"
"fmt"
"strings"
@@ -40,7 +41,9 @@ func (t archiveTransport) ValidatePolicyConfigurationScope(scope string) error {
// archiveReference is an ImageReference for Docker images.
type archiveReference struct {
destinationRef reference.NamedTagged // only used for destinations
// only used for destinations
// archiveReference.destinationRef is optional and can be nil for destinations as well.
destinationRef reference.NamedTagged
path string
}
@@ -125,31 +128,33 @@ func (ref archiveReference) PolicyConfigurationNamespaces() []string {
return []string{}
}
// NewImage returns a types.Image for this reference, possibly specialized for this ImageTransport.
// The caller must call .Close() on the returned Image.
// NewImage returns a types.ImageCloser for this reference, possibly specialized for this ImageTransport.
// The caller must call .Close() on the returned ImageCloser.
// NOTE: If any kind of signature verification should happen, build an UnparsedImage from the value returned by NewImageSource,
// verify that UnparsedImage, and convert it into a real Image via image.FromUnparsedImage.
func (ref archiveReference) NewImage(ctx *types.SystemContext) (types.Image, error) {
src := newImageSource(ctx, ref)
return ctrImage.FromSource(src)
// WARNING: This may not do the right thing for a manifest list, see image.FromSource for details.
func (ref archiveReference) NewImage(ctx context.Context, sys *types.SystemContext) (types.ImageCloser, error) {
src, err := newImageSource(ctx, ref)
if err != nil {
return nil, err
}
return ctrImage.FromSource(ctx, sys, src)
}
// NewImageSource returns a types.ImageSource for this reference,
// asking the backend to use a manifest from requestedManifestMIMETypes if possible.
// nil requestedManifestMIMETypes means manifest.DefaultRequestedManifestMIMETypes.
// NewImageSource returns a types.ImageSource for this reference.
// The caller must call .Close() on the returned ImageSource.
func (ref archiveReference) NewImageSource(ctx *types.SystemContext, requestedManifestMIMETypes []string) (types.ImageSource, error) {
return newImageSource(ctx, ref), nil
func (ref archiveReference) NewImageSource(ctx context.Context, sys *types.SystemContext) (types.ImageSource, error) {
return newImageSource(ctx, ref)
}
// NewImageDestination returns a types.ImageDestination for this reference.
// The caller must call .Close() on the returned ImageDestination.
func (ref archiveReference) NewImageDestination(ctx *types.SystemContext) (types.ImageDestination, error) {
return newImageDestination(ctx, ref)
func (ref archiveReference) NewImageDestination(ctx context.Context, sys *types.SystemContext) (types.ImageDestination, error) {
return newImageDestination(sys, ref)
}
// DeleteImage deletes the named image from the registry, if supported.
func (ref archiveReference) DeleteImage(ctx *types.SystemContext) error {
func (ref archiveReference) DeleteImage(ctx context.Context, sys *types.SystemContext) error {
// Not really supported, for safety reasons.
return errors.New("Deleting images not implemented for docker-archive: images")
}


@@ -0,0 +1,85 @@
package daemon
import (
"net/http"
"path/filepath"
"github.com/containers/image/types"
dockerclient "github.com/docker/docker/client"
"github.com/docker/go-connections/tlsconfig"
)
const (
// The default API version to be used in case none is explicitly specified
defaultAPIVersion = "1.22"
)
// NewDockerClient initializes a new API client based on the passed SystemContext.
func newDockerClient(sys *types.SystemContext) (*dockerclient.Client, error) {
host := dockerclient.DefaultDockerHost
if sys != nil && sys.DockerDaemonHost != "" {
host = sys.DockerDaemonHost
}
// Sadly, unix:// sockets don't work transparently with dockerclient.NewClient.
// They work fine with a nil httpClient; with a non-nil httpClient, the transport's
// TLSClientConfig must be nil (or the client will try using HTTPS over the PF_UNIX socket
// regardless of the values in the *tls.Config), and we would have to call sockets.ConfigureTransport.
//
// We don't really want to configure anything for unix:// sockets, so just pass a nil *http.Client.
//
// Similarly, if we want to communicate over plain HTTP on a TCP socket, we also need to set
// TLSClientConfig to nil. This can be achieved by using the form `http://`
url, err := dockerclient.ParseHostURL(host)
if err != nil {
return nil, err
}
var httpClient *http.Client
if url.Scheme != "unix" {
if url.Scheme == "http" {
httpClient = httpConfig()
} else {
hc, err := tlsConfig(sys)
if err != nil {
return nil, err
}
httpClient = hc
}
}
return dockerclient.NewClient(host, defaultAPIVersion, httpClient, nil)
}
func tlsConfig(sys *types.SystemContext) (*http.Client, error) {
options := tlsconfig.Options{}
if sys != nil && sys.DockerDaemonInsecureSkipTLSVerify {
options.InsecureSkipVerify = true
}
if sys != nil && sys.DockerDaemonCertPath != "" {
options.CAFile = filepath.Join(sys.DockerDaemonCertPath, "ca.pem")
options.CertFile = filepath.Join(sys.DockerDaemonCertPath, "cert.pem")
options.KeyFile = filepath.Join(sys.DockerDaemonCertPath, "key.pem")
}
tlsc, err := tlsconfig.Client(options)
if err != nil {
return nil, err
}
return &http.Client{
Transport: &http.Transport{
TLSClientConfig: tlsc,
},
CheckRedirect: dockerclient.CheckRedirect,
}, nil
}
func httpConfig() *http.Client {
return &http.Client{
Transport: &http.Transport{
TLSClientConfig: nil,
},
CheckRedirect: dockerclient.CheckRedirect,
}
}
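newDockerClient picks the HTTP client by scheme: nil for unix:// (so the docker client configures the socket itself), a TLS-free client for http://, and a tlsconfig-built client otherwise. A rough sketch of that dispatch, using net/url instead of dockerclient.ParseHostURL:

package main

import (
	"fmt"
	"net/url"
)

// clientKind describes which *http.Client the scheme would select.
func clientKind(host string) (string, error) {
	u, err := url.Parse(host)
	if err != nil {
		return "", err
	}
	switch u.Scheme {
	case "unix":
		return "nil http.Client", nil
	case "http":
		return "plain-HTTP client (TLSClientConfig nil)", nil
	default:
		return "TLS client from tlsconfig.Options", nil
	}
}

func main() {
	for _, h := range []string{"unix:///var/run/docker.sock", "http://10.0.0.5:2375", "tcp://10.0.0.5:2376"} {
		kind, _ := clientKind(h)
		fmt.Printf("%s -> %s\n", h, kind)
	}
}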


@@ -1,19 +1,20 @@
package daemon
import (
"context"
"io"
"github.com/Sirupsen/logrus"
"github.com/containers/image/docker/reference"
"github.com/containers/image/docker/tarfile"
"github.com/containers/image/types"
"github.com/docker/docker/client"
"github.com/pkg/errors"
"golang.org/x/net/context"
"github.com/sirupsen/logrus"
)
type daemonImageDestination struct {
ref daemonReference
mustMatchRuntimeOS bool
*tarfile.Destination // Implements most of types.ImageDestination
// For talking to imageLoadGoroutine
goroutineCancel context.CancelFunc
@@ -24,7 +25,7 @@ type daemonImageDestination struct {
}
// newImageDestination returns a types.ImageDestination for the specified image reference.
func newImageDestination(systemCtx *types.SystemContext, ref daemonReference) (types.ImageDestination, error) {
func newImageDestination(ctx context.Context, sys *types.SystemContext, ref daemonReference) (types.ImageDestination, error) {
if ref.ref == nil {
return nil, errors.Errorf("Invalid destination docker-daemon:%s: a destination must be a name:tag", ref.StringWithinTransport())
}
@@ -33,7 +34,12 @@ func newImageDestination(systemCtx *types.SystemContext, ref daemonReference) (t
return nil, errors.Errorf("Invalid destination docker-daemon:%s: a destination must be a name:tag", ref.StringWithinTransport())
}
c, err := client.NewClient(client.DefaultDockerHost, "1.22", nil, nil) // FIXME: overridable host
var mustMatchRuntimeOS = true
if sys != nil && sys.DockerDaemonHost != client.DefaultDockerHost {
mustMatchRuntimeOS = false
}
c, err := newDockerClient(sys)
if err != nil {
return nil, errors.Wrap(err, "Error initializing docker engine client")
}
@@ -42,16 +48,17 @@ func newImageDestination(systemCtx *types.SystemContext, ref daemonReference) (t
// Commit() may never be called, so we may never read from this channel; so, make this buffered to allow imageLoadGoroutine to write status and terminate even if we never read it.
statusChannel := make(chan error, 1)
ctx, goroutineCancel := context.WithCancel(context.Background())
go imageLoadGoroutine(ctx, c, reader, statusChannel)
goroutineContext, goroutineCancel := context.WithCancel(ctx)
go imageLoadGoroutine(goroutineContext, c, reader, statusChannel)
return &daemonImageDestination{
ref: ref,
Destination: tarfile.NewDestination(writer, namedTaggedRef),
goroutineCancel: goroutineCancel,
statusChannel: statusChannel,
writer: writer,
committed: false,
ref: ref,
mustMatchRuntimeOS: mustMatchRuntimeOS,
Destination: tarfile.NewDestination(writer, namedTaggedRef),
goroutineCancel: goroutineCancel,
statusChannel: statusChannel,
writer: writer,
committed: false,
}, nil
}
@@ -78,6 +85,16 @@ func imageLoadGoroutine(ctx context.Context, c *client.Client, reader *io.PipeRe
defer resp.Body.Close()
}
// DesiredLayerCompression indicates if layers must be compressed, decompressed or preserved
func (d *daemonImageDestination) DesiredLayerCompression() types.LayerCompression {
return types.PreserveOriginal
}
// MustMatchRuntimeOS returns true iff the destination can store only images targeted for the current runtime OS. False otherwise.
func (d *daemonImageDestination) MustMatchRuntimeOS() bool {
return d.mustMatchRuntimeOS
}
// Close removes resources associated with an initialized ImageDestination, if any.
func (d *daemonImageDestination) Close() error {
if !d.committed {
@@ -107,9 +124,9 @@ func (d *daemonImageDestination) Reference() types.ImageReference {
// WARNING: This does not have any transactional semantics:
// - Uploaded data MAY be visible to others before Commit() is called
// - Uploaded data MAY be removed or MAY remain around if Close() is called without Commit() (i.e. rollback is allowed but not guaranteed)
func (d *daemonImageDestination) Commit() error {
func (d *daemonImageDestination) Commit(ctx context.Context) error {
logrus.Debugf("docker-daemon: Closing tar stream")
if err := d.Destination.Commit(); err != nil {
if err := d.Destination.Commit(ctx); err != nil {
return err
}
if err := d.writer.Close(); err != nil {
@@ -118,6 +135,10 @@ func (d *daemonImageDestination) Commit() error {
d.committed = true // We may still fail, but we are done sending to imageLoadGoroutine.
logrus.Debugf("docker-daemon: Waiting for status")
err := <-d.statusChannel
return err
select {
case <-ctx.Done():
return ctx.Err()
case err := <-d.statusChannel:
return err
}
}
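statusChannel is buffered with capacity 1 precisely so imageLoadGoroutine can deposit its result and exit even if Commit is never called; the new select additionally lets a cancelled context abandon the wait. The buffering trick in isolation:

package main

import "fmt"

// startWorker writes into a channel with capacity 1, so the send never
// blocks: the goroutine can terminate even if nobody ever reads the status.
func startWorker() <-chan error {
	status := make(chan error, 1)
	go func() {
		status <- nil // deposit the result and exit immediately
	}()
	return status
}

func main() {
	ch := startWorker()
	fmt.Println(<-ch) // <nil>
}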


@@ -1,23 +1,16 @@
package daemon
import (
"io"
"io/ioutil"
"os"
"context"
"github.com/containers/image/docker/tarfile"
"github.com/containers/image/types"
"github.com/docker/docker/client"
"github.com/pkg/errors"
"golang.org/x/net/context"
)
const temporaryDirectoryForBigFiles = "/var/tmp" // Do not use the system default of os.TempDir(), usually /tmp, because with systemd it could be a tmpfs.
type daemonImageSource struct {
ref daemonReference
*tarfile.Source // Implements most of types.ImageSource
tarCopyPath string
}
type layerInfo struct {
@@ -34,42 +27,26 @@ type layerInfo struct {
// (We could, perhaps, expect an exact sequence, assume that the first plaintext file
// is the config, and that the following len(RootFS) files are the layers, but that feels
// way too brittle.)
func newImageSource(ctx *types.SystemContext, ref daemonReference) (types.ImageSource, error) {
c, err := client.NewClient(client.DefaultDockerHost, "1.22", nil, nil) // FIXME: overridable host
func newImageSource(ctx context.Context, sys *types.SystemContext, ref daemonReference) (types.ImageSource, error) {
c, err := newDockerClient(sys)
if err != nil {
return nil, errors.Wrap(err, "Error initializing docker engine client")
}
// Per NewReference(), ref.StringWithinTransport() is either an image ID (config digest), or a !reference.NameOnly() reference.
// Either way ImageSave should create a tarball with exactly one image.
inputStream, err := c.ImageSave(context.TODO(), []string{ref.StringWithinTransport()})
inputStream, err := c.ImageSave(ctx, []string{ref.StringWithinTransport()})
if err != nil {
return nil, errors.Wrap(err, "Error loading image from docker engine")
}
defer inputStream.Close()
// FIXME: use SystemContext here.
tarCopyFile, err := ioutil.TempFile(temporaryDirectoryForBigFiles, "docker-daemon-tar")
src, err := tarfile.NewSourceFromStream(inputStream)
if err != nil {
return nil, err
}
defer tarCopyFile.Close()
succeeded := false
defer func() {
if !succeeded {
os.Remove(tarCopyFile.Name())
}
}()
if _, err := io.Copy(tarCopyFile, inputStream); err != nil {
return nil, err
}
succeeded = true
return &daemonImageSource{
ref: ref,
Source: tarfile.NewSource(tarCopyFile.Name()),
tarCopyPath: tarCopyFile.Name(),
ref: ref,
Source: src,
}, nil
}
@@ -79,7 +56,7 @@ func (s *daemonImageSource) Reference() types.ImageReference {
return s.ref
}
// Close removes resources associated with an initialized ImageSource, if any.
func (s *daemonImageSource) Close() error {
return os.Remove(s.tarCopyPath)
// LayerInfosForCopy() returns updated layer info that should be used when reading, in preference to values in the manifest, if specified.
func (s *daemonImageSource) LayerInfosForCopy(ctx context.Context) ([]types.BlobInfo, error) {
return nil, nil
}
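
A hedged usage sketch of the reshaped daemon source API: creation now takes a context.Context plus a *types.SystemContext, and the temporary-file bookkeeping has moved into tarfile.NewSourceFromStream, which is why Close() no longer removes a tar copy. The reference string is illustrative.

package main

import (
	"context"
	"fmt"

	"github.com/containers/image/transports/alltransports"
	"github.com/containers/image/types"
)

func main() {
	ref, err := alltransports.ParseImageName("docker-daemon:busybox:latest")
	if err != nil {
		panic(err)
	}
	src, err := ref.NewImageSource(context.Background(), &types.SystemContext{})
	if err != nil {
		panic(err)
	}
	defer src.Close() // cleanup is owned by tarfile.Source now, not by this caller
	fmt.Println("opened image source for", ref.StringWithinTransport())
}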


@@ -1,13 +1,16 @@
package daemon
import (
"github.com/pkg/errors"
"context"
"fmt"
"github.com/containers/image/docker/policyconfiguration"
"github.com/containers/image/docker/reference"
"github.com/containers/image/image"
"github.com/containers/image/transports"
"github.com/containers/image/types"
"github.com/opencontainers/go-digest"
"github.com/pkg/errors"
)
func init() {
@@ -34,8 +37,15 @@ func (t daemonTransport) ParseReference(reference string) (types.ImageReference,
// It is acceptable to allow an invalid value which will never be matched, it can "only" cause user confusion.
// scope passed to this function will not be "", that value is always allowed.
func (t daemonTransport) ValidatePolicyConfigurationScope(scope string) error {
// See the explanation in daemonReference.PolicyConfigurationIdentity.
return errors.New(`docker-daemon: does not support any scopes except the default "" one`)
// ID values cannot be effectively namespaced, and are clearly invalid host:port values.
if _, err := digest.Parse(scope); err == nil {
return errors.Errorf(`docker-daemon: can not use algo:digest value %s as a namespace`, scope)
}
// FIXME? We could be verifying the various character set and length restrictions
// from docker/distribution/reference.regexp.go, but other than that there
// are few semantically invalid strings.
return nil
}
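
A small sketch (assumed behavior, re-implementing the rule above for illustration only) of what the relaxed validation accepts and rejects: anything that parses as a digest is an image ID and is refused as a namespace, while name- or host:port-shaped scopes now pass.

package main

import (
	"fmt"

	"github.com/opencontainers/go-digest"
)

// validateScope mirrors the new ValidatePolicyConfigurationScope rule.
func validateScope(scope string) error {
	if _, err := digest.Parse(scope); err == nil {
		return fmt.Errorf("docker-daemon: can not use algo:digest value %s as a namespace", scope)
	}
	return nil
}

func main() {
	id := "sha256:e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"
	fmt.Println(validateScope(id) != nil)                   // true: image IDs are rejected
	fmt.Println(validateScope("localhost:5000"))            // <nil>: host:port scopes are allowed
	fmt.Println(validateScope("docker.io/library/busybox")) // <nil>: name scopes are allowed
}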
// daemonReference is an ImageReference for images managed by a local Docker daemon
@@ -87,6 +97,8 @@ func NewReference(id digest.Digest, ref reference.Named) (types.ImageReference,
// A github.com/distribution/reference value can have a tag and a digest at the same time!
// Most versions of docker/reference do not handle that (ignoring the tag), so reject such input.
// This MAY be accepted in the future.
// (Even if it were supported, the semantics of policy namespaces are unclear - should we drop
// the tag or the digest first?)
_, isTagged := ref.(reference.NamedTagged)
_, isDigested := ref.(reference.Canonical)
if isTagged && isDigested {
@@ -136,9 +148,28 @@ func (ref daemonReference) DockerReference() reference.Named {
// Returns "" if configuration identities for these references are not supported.
func (ref daemonReference) PolicyConfigurationIdentity() string {
// We must allow referring to images in the daemon by image ID, otherwise untagged images would not be accessible.
// But the existence of image IDs means that we can't truly namespace the input well; the untagged images would have to fall into the default policy,
// which can be unexpected. So, punt.
return "" // This still allows using the default "" scope to define a policy for this transport.
// But the existence of image IDs means that we can't truly namespace the input well:
// a single image can be namespaced either using the name or the ID, depending on how it is named.
//
// That's fairly unexpected, but we have to cope somehow.
//
// So, use the ordinary docker/policyconfiguration namespacing for named images;
// image IDs all fall into the root namespace.
// Users can set up the root namespace to be either untrusted or rejected,
// and set up specific trust for named namespaces. This allows verifying image
// identity when a name is known, while unnamed images remain untrusted or rejected.
switch {
case ref.id != "":
return "" // This still allows using the default "" scope to define a global policy for ID-identified images.
case ref.ref != nil:
res, err := policyconfiguration.DockerReferenceIdentity(ref.ref)
if res == "" || err != nil { // Coverage: Should never happen, NewReference above should refuse values which could cause a failure.
panic(fmt.Sprintf("Internal inconsistency: policyconfiguration.DockerReferenceIdentity returned %#v, %v", res, err))
}
return res
default: // Coverage: Should never happen, NewReference above should refuse such values.
panic("Internal inconsistency: daemonReference has empty id and nil ref")
}
}
// PolicyConfigurationNamespaces returns a list of other policy configuration namespaces to search
@@ -148,35 +179,43 @@ func (ref daemonReference) PolicyConfigurationIdentity() string {
// and each following element to be a prefix of the element preceding it.
func (ref daemonReference) PolicyConfigurationNamespaces() []string {
// See the explanation in daemonReference.PolicyConfigurationIdentity.
return []string{}
switch {
case ref.id != "":
return []string{}
case ref.ref != nil:
return policyconfiguration.DockerReferenceNamespaces(ref.ref)
default: // Coverage: Should never happen, NewReference above should refuse such values.
panic("Internal inconsistency: daemonReference has empty id and nil ref")
}
}
// NewImage returns a types.Image for this reference.
// The caller must call .Close() on the returned Image.
func (ref daemonReference) NewImage(ctx *types.SystemContext) (types.Image, error) {
src, err := newImageSource(ctx, ref)
// NewImage returns a types.ImageCloser for this reference, possibly specialized for this ImageTransport.
// The caller must call .Close() on the returned ImageCloser.
// NOTE: If any kind of signature verification should happen, build an UnparsedImage from the value returned by NewImageSource,
// verify that UnparsedImage, and convert it into a real Image via image.FromUnparsedImage.
// WARNING: This may not do the right thing for a manifest list, see image.FromSource for details.
func (ref daemonReference) NewImage(ctx context.Context, sys *types.SystemContext) (types.ImageCloser, error) {
src, err := newImageSource(ctx, sys, ref)
if err != nil {
return nil, err
}
return image.FromSource(src)
return image.FromSource(ctx, sys, src)
}
// NewImageSource returns a types.ImageSource for this reference,
// asking the backend to use a manifest from requestedManifestMIMETypes if possible.
// nil requestedManifestMIMETypes means manifest.DefaultRequestedManifestMIMETypes.
// NewImageSource returns a types.ImageSource for this reference.
// The caller must call .Close() on the returned ImageSource.
func (ref daemonReference) NewImageSource(ctx *types.SystemContext, requestedManifestMIMETypes []string) (types.ImageSource, error) {
return newImageSource(ctx, ref)
func (ref daemonReference) NewImageSource(ctx context.Context, sys *types.SystemContext) (types.ImageSource, error) {
return newImageSource(ctx, sys, ref)
}
// NewImageDestination returns a types.ImageDestination for this reference.
// The caller must call .Close() on the returned ImageDestination.
func (ref daemonReference) NewImageDestination(ctx *types.SystemContext) (types.ImageDestination, error) {
return newImageDestination(ctx, ref)
func (ref daemonReference) NewImageDestination(ctx context.Context, sys *types.SystemContext) (types.ImageDestination, error) {
return newImageDestination(ctx, sys, ref)
}
// DeleteImage deletes the named image from the registry, if supported.
func (ref daemonReference) DeleteImage(ctx *types.SystemContext) error {
func (ref daemonReference) DeleteImage(ctx context.Context, sys *types.SystemContext) error {
// Should this just untag the image? Should this stop running containers?
// The semantics are not quite as clear as for remote repositories.
// The user can run (docker rmi) directly anyway, so, for now(?), punt instead of trying to guess what the user meant.
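
To make the named/ID split above concrete, a hedged sketch of how policy identities now differ between the two kinds of daemon references (the expected outputs are approximate):

package main

import (
	"fmt"

	"github.com/containers/image/transports/alltransports"
)

func main() {
	// Named images now get the ordinary docker-style identity and namespaces...
	named, err := alltransports.ParseImageName("docker-daemon:docker.io/library/busybox:latest")
	if err != nil {
		panic(err)
	}
	fmt.Println(named.PolicyConfigurationIdentity())   // e.g. "docker.io/library/busybox:latest"
	fmt.Println(named.PolicyConfigurationNamespaces()) // e.g. [docker.io/library/busybox docker.io/library docker.io]

	// ...while ID-only references still fall into the default "" scope.
	id, err := alltransports.ParseImageName("docker-daemon:sha256:e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855")
	if err != nil {
		panic(err)
	}
	fmt.Println(id.PolicyConfigurationIdentity() == "") // true
}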


@@ -1,38 +1,35 @@
package docker
import (
"context"
"crypto/tls"
"encoding/base64"
"encoding/json"
"fmt"
"io"
"io/ioutil"
"net"
"net/http"
"net/url"
"os"
"path/filepath"
"strconv"
"strings"
"time"
"github.com/Sirupsen/logrus"
"github.com/containers/image/docker/reference"
"github.com/containers/image/pkg/docker/config"
"github.com/containers/image/pkg/tlsclientconfig"
"github.com/containers/image/types"
"github.com/containers/storage/pkg/homedir"
"github.com/docker/distribution/registry/client"
"github.com/docker/go-connections/sockets"
"github.com/docker/go-connections/tlsconfig"
"github.com/opencontainers/go-digest"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
)
const (
dockerHostname = "docker.io"
dockerRegistry = "registry-1.docker.io"
dockerAuthRegistry = "https://index.docker.io/v1/"
dockerCfg = ".docker"
dockerCfgFileName = "config.json"
dockerCfgObsolete = ".dockercfg"
dockerHostname = "docker.io"
dockerV1Hostname = "index.docker.io"
dockerRegistry = "registry-1.docker.io"
resolvedPingV2URL = "%s://%s/v2/"
resolvedPingV1URL = "%s://%s/v1/_ping"
@@ -48,9 +45,14 @@ const (
extensionSignatureTypeAtomic = "atomic" // extensionSignature.Type
)
// ErrV1NotSupported is returned when we're trying to talk to a
// docker V1 registry.
var ErrV1NotSupported = errors.New("can't talk to a V1 docker registry")
var (
// ErrV1NotSupported is returned when we're trying to talk to a
// docker V1 registry.
ErrV1NotSupported = errors.New("can't talk to a V1 docker registry")
// ErrUnauthorizedForCredentials is returned when the status code returned is 401
// ErrUnauthorizedForCredentials is returned when the registry responds with a 401 status code
systemPerHostCertDirPaths = [2]string{"/etc/containers/certs.d", "/etc/docker/certs.d"}
)
// extensionSignature and extensionSignatureList come from github.com/openshift/origin/pkg/dockerregistry/server/signaturedispatcher.go:
// signature represents a Docker image signature.
@@ -67,15 +69,16 @@ type extensionSignatureList struct {
}
type bearerToken struct {
Token string `json:"token"`
ExpiresIn int `json:"expires_in"`
IssuedAt time.Time `json:"issued_at"`
Token string `json:"token"`
AccessToken string `json:"access_token"`
ExpiresIn int `json:"expires_in"`
IssuedAt time.Time `json:"issued_at"`
}
// dockerClient is configuration for dealing with a single Docker registry.
type dockerClient struct {
// The following members are set by newDockerClient and do not change afterwards.
ctx *types.SystemContext
sys *types.SystemContext
registry string
username string
password string
@@ -97,6 +100,37 @@ type authScope struct {
actions string
}
// sendAuth determines whether we need authentication for the v2 or v1 endpoint.
type sendAuth int
const (
// v2 endpoint with authentication.
v2Auth sendAuth = iota
// v1 endpoint with authentication.
// TODO: Get v1Auth working
// v1Auth
// no authentication, works for both v1 and v2.
noAuth
)
func newBearerTokenFromJSONBlob(blob []byte) (*bearerToken, error) {
token := new(bearerToken)
if err := json.Unmarshal(blob, &token); err != nil {
return nil, err
}
if token.Token == "" {
token.Token = token.AccessToken
}
if token.ExpiresIn < minimumTokenLifetimeSeconds {
token.ExpiresIn = minimumTokenLifetimeSeconds
logrus.Debugf("Increasing token expiration to: %d seconds", token.ExpiresIn)
}
if token.IssuedAt.IsZero() {
token.IssuedAt = time.Now().UTC()
}
return token, nil
}
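
A hedged in-package test sketch of what the new helper normalizes: some token servers return the token under the OAuth2-style "access_token" key instead of "token", very short lifetimes are clamped up to minimumTokenLifetimeSeconds, and a missing issued_at is defaulted to the current time.

package docker

import "testing"

func TestNewBearerTokenFromJSONBlob(t *testing.T) {
	tok, err := newBearerTokenFromJSONBlob([]byte(`{"access_token":"abc","expires_in":1}`))
	if err != nil {
		t.Fatal(err)
	}
	if tok.Token != "abc" { // "access_token" is copied into Token when "token" is absent
		t.Errorf("expected access_token fallback, got %q", tok.Token)
	}
	if tok.ExpiresIn < minimumTokenLifetimeSeconds { // 1s is below the floor, so it is raised
		t.Errorf("lifetime not clamped: %d", tok.ExpiresIn)
	}
	if tok.IssuedAt.IsZero() { // zero issued_at is replaced with time.Now().UTC()
		t.Error("IssuedAt not defaulted")
	}
}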
// this is cloned from docker/go-connections because upstream docker has changed
// it and make deps here fails otherwise.
// We'll drop this once we upgrade to docker 1.13.x deps.
@@ -109,150 +143,251 @@ func serverDefault() *tls.Config {
}
}
func newTransport() *http.Transport {
direct := &net.Dialer{
Timeout: 30 * time.Second,
KeepAlive: 30 * time.Second,
DualStack: true,
// dockerCertDir returns a path to a directory to be consumed by tlsclientconfig.SetupCertificates() depending on sys and hostPort.
func dockerCertDir(sys *types.SystemContext, hostPort string) (string, error) {
if sys != nil && sys.DockerCertPath != "" {
return sys.DockerCertPath, nil
}
tr := &http.Transport{
Proxy: http.ProxyFromEnvironment,
Dial: direct.Dial,
TLSHandshakeTimeout: 10 * time.Second,
// TODO(dmcgowan): Call close idle connections when complete and use keep alive
DisableKeepAlives: true,
if sys != nil && sys.DockerPerHostCertDirPath != "" {
return filepath.Join(sys.DockerPerHostCertDirPath, hostPort), nil
}
proxyDialer, err := sockets.DialerFromEnvironment(direct)
if err == nil {
tr.Dial = proxyDialer.Dial
var (
hostCertDir string
fullCertDirPath string
)
for _, systemPerHostCertDirPath := range systemPerHostCertDirPaths {
if sys != nil && sys.RootForImplicitAbsolutePaths != "" {
hostCertDir = filepath.Join(sys.RootForImplicitAbsolutePaths, systemPerHostCertDirPath)
} else {
hostCertDir = systemPerHostCertDirPath
}
fullCertDirPath = filepath.Join(hostCertDir, hostPort)
_, err := os.Stat(fullCertDirPath)
if err == nil {
break
}
if os.IsNotExist(err) {
continue
}
if os.IsPermission(err) {
logrus.Debugf("error accessing certs directory due to permissions: %v", err)
continue
}
if err != nil {
return "", err
}
}
return tr
return fullCertDirPath, nil
}
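
For reference, a sketch of the on-disk layout this resolution walks (the docker-compatible certs.d convention; directory and file names below are illustrative). The first host directory that exists under /etc/containers/certs.d, then /etc/docker/certs.d, wins:

/etc/containers/certs.d/
    registry.example.com:5000/
        ca.crt        # extra CA certificate (*.crt files are appended to the root pool)
        client.cert   # client certificate presented to the registry
        client.key    # private key matching client.cert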
func setupCertificates(dir string, tlsc *tls.Config) error {
if dir == "" {
return nil
}
fs, err := ioutil.ReadDir(dir)
if err != nil && !os.IsNotExist(err) {
return err
}
for _, f := range fs {
fullPath := filepath.Join(dir, f.Name())
if strings.HasSuffix(f.Name(), ".crt") {
systemPool, err := tlsconfig.SystemCertPool()
if err != nil {
return errors.Wrap(err, "unable to get system cert pool")
}
tlsc.RootCAs = systemPool
logrus.Debugf("crt: %s", fullPath)
data, err := ioutil.ReadFile(fullPath)
if err != nil {
return err
}
tlsc.RootCAs.AppendCertsFromPEM(data)
}
if strings.HasSuffix(f.Name(), ".cert") {
certName := f.Name()
keyName := certName[:len(certName)-5] + ".key"
logrus.Debugf("cert: %s", fullPath)
if !hasFile(fs, keyName) {
return errors.Errorf("missing key %s for client certificate %s. Note that CA certificates should use the extension .crt", keyName, certName)
}
cert, err := tls.LoadX509KeyPair(filepath.Join(dir, certName), filepath.Join(dir, keyName))
if err != nil {
return err
}
tlsc.Certificates = append(tlsc.Certificates, cert)
}
if strings.HasSuffix(f.Name(), ".key") {
keyName := f.Name()
certName := keyName[:len(keyName)-4] + ".cert"
logrus.Debugf("key: %s", fullPath)
if !hasFile(fs, certName) {
return errors.Errorf("missing client certificate %s for key %s", certName, keyName)
}
}
}
return nil
}
func hasFile(files []os.FileInfo, name string) bool {
for _, f := range files {
if f.Name() == name {
return true
}
}
return false
}
// newDockerClient returns a new dockerClient instance for refHostname (a host as specified in the Docker image reference, not canonicalized to dockerRegistry)
// newDockerClientFromRef returns a new dockerClient instance for refHostname (a host as specified in the Docker image reference, not canonicalized to dockerRegistry)
// “write” specifies whether the client will be used for "write" access (in particular passed to lookaside.go:toplevelFromSection)
func newDockerClient(ctx *types.SystemContext, ref dockerReference, write bool, actions string) (*dockerClient, error) {
func newDockerClientFromRef(sys *types.SystemContext, ref dockerReference, write bool, actions string) (*dockerClient, error) {
registry := reference.Domain(ref.ref)
username, password, err := config.GetAuthentication(sys, reference.Domain(ref.ref))
if err != nil {
return nil, errors.Wrapf(err, "error getting username and password")
}
sigBase, err := configuredSignatureStorageBase(sys, ref, write)
if err != nil {
return nil, err
}
remoteName := reference.Path(ref.ref)
return newDockerClientWithDetails(sys, registry, username, password, actions, sigBase, remoteName)
}
// newDockerClientWithDetails returns a new dockerClient instance for the given parameters
func newDockerClientWithDetails(sys *types.SystemContext, registry, username, password, actions string, sigBase signatureStorageBase, remoteName string) (*dockerClient, error) {
hostName := registry
if registry == dockerHostname {
registry = dockerRegistry
}
username, password, err := getAuth(ctx, reference.Domain(ref.ref))
tr := tlsclientconfig.NewTransport()
tr.TLSClientConfig = serverDefault()
// It is undefined whether the host[:port] string for dockerHostname should be dockerHostname or dockerRegistry,
// because docker/docker does not read the certs.d subdirectory at all in that case. We use the user-visible
// dockerHostname here, because it is more symmetrical to read the configuration in that case as well, and because
// generally the UI hides the existence of the different dockerRegistry. But note that this behavior is
// undocumented and may change if docker/docker changes.
certDir, err := dockerCertDir(sys, hostName)
if err != nil {
return nil, err
}
tr := newTransport()
if ctx != nil && (ctx.DockerCertPath != "" || ctx.DockerInsecureSkipTLSVerify) {
tlsc := &tls.Config{}
if err := setupCertificates(ctx.DockerCertPath, tlsc); err != nil {
return nil, err
}
tlsc.InsecureSkipVerify = ctx.DockerInsecureSkipTLSVerify
tr.TLSClientConfig = tlsc
}
if tr.TLSClientConfig == nil {
tr.TLSClientConfig = serverDefault()
}
client := &http.Client{Transport: tr}
sigBase, err := configuredSignatureStorageBase(ctx, ref, write)
if err != nil {
if err := tlsclientconfig.SetupCertificates(certDir, tr.TLSClientConfig); err != nil {
return nil, err
}
if sys != nil && sys.DockerInsecureSkipTLSVerify {
tr.TLSClientConfig.InsecureSkipVerify = true
}
return &dockerClient{
ctx: ctx,
sys: sys,
registry: registry,
username: username,
password: password,
client: client,
client: &http.Client{Transport: tr},
signatureBase: sigBase,
scope: authScope{
actions: actions,
remoteName: reference.Path(ref.ref),
remoteName: remoteName,
},
}, nil
}
// CheckAuth validates the credentials by attempting to log into the registry
// returns an error if an error occurred while making the HTTP request, or if the status code received was 401
func CheckAuth(ctx context.Context, sys *types.SystemContext, username, password, registry string) error {
newLoginClient, err := newDockerClientWithDetails(sys, registry, username, password, "", nil, "")
if err != nil {
return errors.Wrapf(err, "error creating new docker client")
}
resp, err := newLoginClient.makeRequest(ctx, "GET", "/v2/", nil, nil, v2Auth)
if err != nil {
return err
}
defer resp.Body.Close()
switch resp.StatusCode {
case http.StatusOK:
return nil
case http.StatusUnauthorized:
return ErrUnauthorizedForCredentials
default:
return errors.Errorf("error occured with status code %d (%s)", resp.StatusCode, http.StatusText(resp.StatusCode))
}
}
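
A hedged usage sketch of the newly exported CheckAuth helper (registry and credentials are placeholders; a nil SystemContext means defaults). It issues a GET /v2/ and maps a 401 to ErrUnauthorizedForCredentials:

package main

import (
	"context"
	"fmt"

	"github.com/containers/image/docker"
)

func main() {
	err := docker.CheckAuth(context.Background(), nil, "myuser", "mypassword", "registry.example.com")
	switch err {
	case nil:
		fmt.Println("credentials accepted")
	case docker.ErrUnauthorizedForCredentials:
		fmt.Println("bad username/password")
	default:
		fmt.Println("request failed:", err)
	}
}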
// SearchResult holds the information of each matching image
// It matches the output returned by the v1 endpoint
type SearchResult struct {
Name string `json:"name"`
Description string `json:"description"`
// StarCount states the number of stars the image has
StarCount int `json:"star_count"`
IsTrusted bool `json:"is_trusted"`
// IsAutomated states whether the image is an automated build
IsAutomated bool `json:"is_automated"`
// IsOfficial states whether the image is an official build
IsOfficial bool `json:"is_official"`
}
// SearchRegistry queries a registry for images that contain "image" in their name
// The limit is the max number of results desired
// Note: The limit value doesn't work with all registries;
// for example, registry.access.redhat.com returns all results, ignoring the limit value.
func SearchRegistry(ctx context.Context, sys *types.SystemContext, registry, image string, limit int) ([]SearchResult, error) {
type V2Results struct {
// Repositories holds the results returned by the /v2/_catalog endpoint
Repositories []string `json:"repositories"`
}
type V1Results struct {
// Results holds the results returned by the /v1/search endpoint
Results []SearchResult `json:"results"`
}
v2Res := &V2Results{}
v1Res := &V1Results{}
// Get credentials from authfile for the underlying hostname
username, password, err := config.GetAuthentication(sys, registry)
if err != nil {
return nil, errors.Wrapf(err, "error getting username and password")
}
// The /v2/_catalog endpoint has been disabled for docker.io, so a call made to it would fail.
// We therefore use the v1 hostname for docker.io, both for simplicity of implementation and because it returns search results.
if registry == dockerHostname {
registry = dockerV1Hostname
}
client, err := newDockerClientWithDetails(sys, registry, username, password, "", nil, "")
if err != nil {
return nil, errors.Wrapf(err, "error creating new docker client")
}
// Only try the v1 search endpoint if the search query is not empty. If it is
// empty, skip to the v2 endpoint.
if image != "" {
// set up the query values for the v1 endpoint
u := url.URL{
Path: "/v1/search",
}
q := u.Query()
q.Set("q", image)
q.Set("n", strconv.Itoa(limit))
u.RawQuery = q.Encode()
logrus.Debugf("trying to talk to v1 search endpoint\n")
resp, err := client.makeRequest(ctx, "GET", u.String(), nil, nil, noAuth)
if err != nil {
logrus.Debugf("error getting search results from v1 endpoint %q: %v", registry, err)
} else {
defer resp.Body.Close()
if resp.StatusCode != http.StatusOK {
logrus.Debugf("error getting search results from v1 endpoint %q, status code %d (%s)", registry, resp.StatusCode, http.StatusText(resp.StatusCode))
} else {
if err := json.NewDecoder(resp.Body).Decode(v1Res); err != nil {
return nil, err
}
return v1Res.Results, nil
}
}
}
logrus.Debugf("trying to talk to v2 search endpoint\n")
resp, err := client.makeRequest(ctx, "GET", "/v2/_catalog", nil, nil, v2Auth)
if err != nil {
logrus.Debugf("error getting search results from v2 endpoint %q: %v", registry, err)
} else {
defer resp.Body.Close()
if resp.StatusCode != http.StatusOK {
logrus.Errorf("error getting search results from v2 endpoint %q, status code %d (%s)", registry, resp.StatusCode, http.StatusText(resp.StatusCode))
} else {
if err := json.NewDecoder(resp.Body).Decode(v2Res); err != nil {
return nil, err
}
searchRes := []SearchResult{}
for _, repo := range v2Res.Repositories {
if strings.Contains(repo, image) {
res := SearchResult{
Name: repo,
}
searchRes = append(searchRes, res)
}
}
return searchRes, nil
}
}
return nil, errors.Wrapf(err, "couldn't search registry %q", registry)
}
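
And a matching usage sketch for SearchRegistry (the registry name is a placeholder); as the code above shows, the v1 search endpoint is only consulted when the query string is non-empty, with /v2/_catalog filtering as the fallback:

package main

import (
	"context"
	"fmt"

	"github.com/containers/image/docker"
)

func main() {
	results, err := docker.SearchRegistry(context.Background(), nil, "registry.example.com", "busybox", 10)
	if err != nil {
		panic(err)
	}
	for _, r := range results {
		fmt.Println(r.Name, "-", r.Description)
	}
}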
// makeRequest creates and executes a http.Request with the specified parameters, adding authentication and TLS options for the Docker client.
// The host name and scheme are taken from the client or autodetected, and the path is relative to it, i.e. the path usually starts with /v2/.
func (c *dockerClient) makeRequest(method, path string, headers map[string][]string, stream io.Reader) (*http.Response, error) {
if err := c.detectProperties(); err != nil {
func (c *dockerClient) makeRequest(ctx context.Context, method, path string, headers map[string][]string, stream io.Reader, auth sendAuth) (*http.Response, error) {
if err := c.detectProperties(ctx); err != nil {
return nil, err
}
url := fmt.Sprintf("%s://%s%s", c.scheme, c.registry, path)
return c.makeRequestToResolvedURL(method, url, headers, stream, -1, true)
return c.makeRequestToResolvedURL(ctx, method, url, headers, stream, -1, auth)
}
// makeRequestToResolvedURL creates and executes a http.Request with the specified parameters, adding authentication and TLS options for the Docker client.
// streamLen, if not -1, specifies the length of the data expected on stream.
// makeRequest should generally be preferred.
// TODO(runcom): too many arguments here, use a struct
func (c *dockerClient) makeRequestToResolvedURL(method, url string, headers map[string][]string, stream io.Reader, streamLen int64, sendAuth bool) (*http.Response, error) {
func (c *dockerClient) makeRequestToResolvedURL(ctx context.Context, method, url string, headers map[string][]string, stream io.Reader, streamLen int64, auth sendAuth) (*http.Response, error) {
req, err := http.NewRequest(method, url, stream)
if err != nil {
return nil, err
}
req = req.WithContext(ctx)
if streamLen != -1 { // Do not blindly overwrite if streamLen == -1, http.NewRequest above can figure out the length of bytes.Reader and similar objects without us having to compute it.
req.ContentLength = streamLen
}
@@ -262,10 +397,10 @@ func (c *dockerClient) makeRequestToResolvedURL(method, url string, headers map[
req.Header.Add(n, hh)
}
}
if c.ctx != nil && c.ctx.DockerRegistryUserAgent != "" {
req.Header.Add("User-Agent", c.ctx.DockerRegistryUserAgent)
if c.sys != nil && c.sys.DockerRegistryUserAgent != "" {
req.Header.Add("User-Agent", c.sys.DockerRegistryUserAgent)
}
if sendAuth {
if auth == v2Auth {
if err := c.setupRequestAuth(req); err != nil {
return nil, err
}
@@ -289,39 +424,51 @@ func (c *dockerClient) setupRequestAuth(req *http.Request) error {
if len(c.challenges) == 0 {
return nil
}
// assume just one...
challenge := c.challenges[0]
switch challenge.Scheme {
case "basic":
req.SetBasicAuth(c.username, c.password)
return nil
case "bearer":
if c.token == nil || time.Now().After(c.tokenExpiration) {
realm, ok := challenge.Parameters["realm"]
if !ok {
return errors.Errorf("missing realm in bearer auth challenge")
schemeNames := make([]string, 0, len(c.challenges))
for _, challenge := range c.challenges {
schemeNames = append(schemeNames, challenge.Scheme)
switch challenge.Scheme {
case "basic":
req.SetBasicAuth(c.username, c.password)
return nil
case "bearer":
if c.token == nil || time.Now().After(c.tokenExpiration) {
realm, ok := challenge.Parameters["realm"]
if !ok {
return errors.Errorf("missing realm in bearer auth challenge")
}
service, _ := challenge.Parameters["service"] // Will be "" if not present
var scope string
if c.scope.remoteName != "" && c.scope.actions != "" {
scope = fmt.Sprintf("repository:%s:%s", c.scope.remoteName, c.scope.actions)
}
token, err := c.getBearerToken(req.Context(), realm, service, scope)
if err != nil {
return err
}
c.token = token
c.tokenExpiration = token.IssuedAt.Add(time.Duration(token.ExpiresIn) * time.Second)
}
service, _ := challenge.Parameters["service"] // Will be "" if not present
scope := fmt.Sprintf("repository:%s:%s", c.scope.remoteName, c.scope.actions)
token, err := c.getBearerToken(realm, service, scope)
if err != nil {
return err
}
c.token = token
c.tokenExpiration = token.IssuedAt.Add(time.Duration(token.ExpiresIn) * time.Second)
req.Header.Set("Authorization", fmt.Sprintf("Bearer %s", c.token.Token))
return nil
default:
logrus.Debugf("no handler for %s authentication", challenge.Scheme)
}
req.Header.Set("Authorization", fmt.Sprintf("Bearer %s", c.token.Token))
return nil
}
return errors.Errorf("no handler for %s authentication", challenge.Scheme)
logrus.Infof("None of the challenges sent by server (%s) are supported, trying an unauthenticated request anyway", strings.Join(schemeNames, ", "))
return nil
}
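
For context, a worked example (all values illustrative) of the bearer flow this loop implements. A registry typically answers an unauthenticated request with:

    401 Unauthorized
    WWW-Authenticate: Bearer realm="https://auth.example.com/token",service="registry.example.com"

from which getBearerToken (below) builds a token request out of the challenge parameters plus the client's own scope (and an account parameter when a username is configured):

    GET https://auth.example.com/token?service=registry.example.com&scope=repository:library/busybox:pull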
func (c *dockerClient) getBearerToken(realm, service, scope string) (*bearerToken, error) {
func (c *dockerClient) getBearerToken(ctx context.Context, realm, service, scope string) (*bearerToken, error) {
authReq, err := http.NewRequest("GET", realm, nil)
if err != nil {
return nil, err
}
authReq = authReq.WithContext(ctx)
getParams := authReq.URL.Query()
if c.username != "" {
getParams.Add("account", c.username)
}
if service != "" {
getParams.Add("service", service)
}
@@ -332,7 +479,8 @@ func (c *dockerClient) getBearerToken(realm, service, scope string) (*bearerToke
if c.username != "" && c.password != "" {
authReq.SetBasicAuth(c.username, c.password)
}
tr := newTransport()
logrus.Debugf("%s %s", authReq.Method, authReq.URL.String())
tr := tlsclientconfig.NewTransport()
// TODO(runcom): insecure for now to contact the external token service
tr.TLSClientConfig = &tls.Config{InsecureSkipVerify: true}
client := &http.Client{Transport: tr}
@@ -343,102 +491,38 @@ func (c *dockerClient) getBearerToken(realm, service, scope string) (*bearerToke
defer res.Body.Close()
switch res.StatusCode {
case http.StatusUnauthorized:
return nil, errors.Errorf("unable to retrieve auth token: 401 unauthorized")
return nil, ErrUnauthorizedForCredentials
case http.StatusOK:
break
default:
return nil, errors.Errorf("unexpected http code: %d, URL: %s", res.StatusCode, authReq.URL)
return nil, errors.Errorf("unexpected http code: %d (%s), URL: %s", res.StatusCode, http.StatusText(res.StatusCode), authReq.URL)
}
tokenBlob, err := ioutil.ReadAll(res.Body)
if err != nil {
return nil, err
}
var token bearerToken
if err := json.Unmarshal(tokenBlob, &token); err != nil {
return nil, err
}
if token.ExpiresIn < minimumTokenLifetimeSeconds {
token.ExpiresIn = minimumTokenLifetimeSeconds
logrus.Debugf("Increasing token expiration to: %d seconds", token.ExpiresIn)
}
if token.IssuedAt.IsZero() {
token.IssuedAt = time.Now().UTC()
}
return &token, nil
}
func getAuth(ctx *types.SystemContext, registry string) (string, string, error) {
if ctx != nil && ctx.DockerAuthConfig != nil {
return ctx.DockerAuthConfig.Username, ctx.DockerAuthConfig.Password, nil
}
var dockerAuth dockerConfigFile
dockerCfgPath := filepath.Join(getDefaultConfigDir(".docker"), dockerCfgFileName)
if _, err := os.Stat(dockerCfgPath); err == nil {
j, err := ioutil.ReadFile(dockerCfgPath)
if err != nil {
return "", "", err
}
if err := json.Unmarshal(j, &dockerAuth); err != nil {
return "", "", err
}
} else if os.IsNotExist(err) {
// try old config path
oldDockerCfgPath := filepath.Join(getDefaultConfigDir(dockerCfgObsolete))
if _, err := os.Stat(oldDockerCfgPath); err != nil {
if os.IsNotExist(err) {
return "", "", nil
}
return "", "", errors.Wrap(err, oldDockerCfgPath)
}
j, err := ioutil.ReadFile(oldDockerCfgPath)
if err != nil {
return "", "", err
}
if err := json.Unmarshal(j, &dockerAuth.AuthConfigs); err != nil {
return "", "", err
}
} else if err != nil {
return "", "", errors.Wrap(err, dockerCfgPath)
}
// I'm feeling lucky
if c, exists := dockerAuth.AuthConfigs[registry]; exists {
return decodeDockerAuth(c.Auth)
}
// bad luck; let's normalize the entries first
registry = normalizeRegistry(registry)
normalizedAuths := map[string]dockerAuthConfig{}
for k, v := range dockerAuth.AuthConfigs {
normalizedAuths[normalizeRegistry(k)] = v
}
if c, exists := normalizedAuths[registry]; exists {
return decodeDockerAuth(c.Auth)
}
return "", "", nil
return newBearerTokenFromJSONBlob(tokenBlob)
}
// detectProperties detects various properties of the registry.
// See the dockerClient documentation for members which are affected by this.
func (c *dockerClient) detectProperties() error {
func (c *dockerClient) detectProperties(ctx context.Context) error {
if c.scheme != "" {
return nil
}
ping := func(scheme string) error {
url := fmt.Sprintf(resolvedPingV2URL, scheme, c.registry)
resp, err := c.makeRequestToResolvedURL("GET", url, nil, nil, -1, true)
logrus.Debugf("Ping %s err %#v", url, err)
resp, err := c.makeRequestToResolvedURL(ctx, "GET", url, nil, nil, -1, noAuth)
if err != nil {
logrus.Debugf("Ping %s err %s (%#v)", url, err.Error(), err)
return err
}
defer resp.Body.Close()
logrus.Debugf("Ping %s status %d", url, resp.StatusCode)
if resp.StatusCode != http.StatusOK && resp.StatusCode != http.StatusUnauthorized {
return errors.Errorf("error pinging repository, response code %d", resp.StatusCode)
return errors.Errorf("error pinging registry %s, response code %d (%s)", c.registry, resp.StatusCode, http.StatusText(resp.StatusCode))
}
c.challenges = parseAuthHeader(resp.Header)
c.scheme = scheme
@@ -446,20 +530,20 @@ func (c *dockerClient) detectProperties() error {
return nil
}
err := ping("https")
if err != nil && c.ctx != nil && c.ctx.DockerInsecureSkipTLSVerify {
if err != nil && c.sys != nil && c.sys.DockerInsecureSkipTLSVerify {
err = ping("http")
}
if err != nil {
err = errors.Wrap(err, "pinging docker registry returned")
if c.ctx != nil && c.ctx.DockerDisableV1Ping {
if c.sys != nil && c.sys.DockerDisableV1Ping {
return err
}
// best effort to understand if we're talking to a V1 registry
pingV1 := func(scheme string) bool {
url := fmt.Sprintf(resolvedPingV1URL, scheme, c.registry)
resp, err := c.makeRequestToResolvedURL("GET", url, nil, nil, -1, true)
logrus.Debugf("Ping %s err %#v", url, err)
resp, err := c.makeRequestToResolvedURL(ctx, "GET", url, nil, nil, -1, noAuth)
if err != nil {
logrus.Debugf("Ping %s err %s (%#v)", url, err.Error(), err)
return false
}
defer resp.Body.Close()
@@ -470,7 +554,7 @@ func (c *dockerClient) detectProperties() error {
return true
}
isV1 := pingV1("https")
if !isV1 && c.ctx != nil && c.ctx.DockerInsecureSkipTLSVerify {
if !isV1 && c.sys != nil && c.sys.DockerInsecureSkipTLSVerify {
isV1 = pingV1("http")
}
if isV1 {
@@ -482,15 +566,15 @@ func (c *dockerClient) detectProperties() error {
// getExtensionsSignatures returns signatures from the X-Registry-Supports-Signatures API extension,
// using the original data structures.
func (c *dockerClient) getExtensionsSignatures(ref dockerReference, manifestDigest digest.Digest) (*extensionSignatureList, error) {
func (c *dockerClient) getExtensionsSignatures(ctx context.Context, ref dockerReference, manifestDigest digest.Digest) (*extensionSignatureList, error) {
path := fmt.Sprintf(extensionsSignaturePath, reference.Path(ref.ref), manifestDigest)
res, err := c.makeRequest("GET", path, nil, nil)
res, err := c.makeRequest(ctx, "GET", path, nil, nil, v2Auth)
if err != nil {
return nil, err
}
defer res.Body.Close()
if res.StatusCode != http.StatusOK {
return nil, client.HandleErrorResponse(res)
return nil, errors.Wrapf(client.HandleErrorResponse(res), "Error downloading signatures for %s in %s", manifestDigest, ref.ref.Name())
}
body, err := ioutil.ReadAll(res.Body)
if err != nil {
@@ -503,55 +587,3 @@ func (c *dockerClient) getExtensionsSignatures(ref dockerReference, manifestDige
}
return &parsedBody, nil
}
func getDefaultConfigDir(confPath string) string {
return filepath.Join(homedir.Get(), confPath)
}
type dockerAuthConfig struct {
Auth string `json:"auth,omitempty"`
}
type dockerConfigFile struct {
AuthConfigs map[string]dockerAuthConfig `json:"auths"`
}
func decodeDockerAuth(s string) (string, string, error) {
decoded, err := base64.StdEncoding.DecodeString(s)
if err != nil {
return "", "", err
}
parts := strings.SplitN(string(decoded), ":", 2)
if len(parts) != 2 {
// if it's invalid just skip, as docker does
return "", "", nil
}
user := parts[0]
password := strings.Trim(parts[1], "\x00")
return user, password, nil
}
// convertToHostname converts a registry URL which has http|https prepended
// to just a hostname.
// Copied from github.com/docker/docker/registry/auth.go
func convertToHostname(url string) string {
stripped := url
if strings.HasPrefix(url, "http://") {
stripped = strings.TrimPrefix(url, "http://")
} else if strings.HasPrefix(url, "https://") {
stripped = strings.TrimPrefix(url, "https://")
}
nameParts := strings.SplitN(stripped, "/", 2)
return nameParts[0]
}
func normalizeRegistry(registry string) string {
normalized := convertToHostname(registry)
switch normalized {
case "registry-1.docker.io", "docker.io":
return "index.docker.io"
}
return normalized
}


@@ -1,9 +1,12 @@
package docker
import (
"context"
"encoding/json"
"fmt"
"net/http"
"net/url"
"strings"
"github.com/containers/image/docker/reference"
"github.com/containers/image/image"
@@ -11,26 +14,26 @@ import (
"github.com/pkg/errors"
)
// Image is a Docker-specific implementation of types.Image with a few extra methods
// Image is a Docker-specific implementation of types.ImageCloser with a few extra methods
// which are specific to Docker.
type Image struct {
types.Image
types.ImageCloser
src *dockerImageSource
}
// newImage returns a new Image interface type after setting up
// a client to the registry hosting the given image.
// The caller must call .Close() on the returned Image.
func newImage(ctx *types.SystemContext, ref dockerReference) (types.Image, error) {
s, err := newImageSource(ctx, ref, nil)
func newImage(ctx context.Context, sys *types.SystemContext, ref dockerReference) (types.ImageCloser, error) {
s, err := newImageSource(sys, ref)
if err != nil {
return nil, err
}
img, err := image.FromSource(s)
img, err := image.FromSource(ctx, sys, s)
if err != nil {
return nil, err
}
return &Image{Image: img, src: s}, nil
return &Image{ImageCloser: img, src: s}, nil
}
// SourceRefFullName returns a fully expanded name for the repository this image is in.
@@ -38,24 +41,67 @@ func (i *Image) SourceRefFullName() string {
return i.src.ref.ref.Name()
}
// GetRepositoryTags lists all tags available in the repository. Note that this has no connection with the tag(s) used for this specific image, if any.
func (i *Image) GetRepositoryTags() ([]string, error) {
path := fmt.Sprintf(tagsPath, reference.Path(i.src.ref.ref))
res, err := i.src.c.makeRequest("GET", path, nil, nil)
if err != nil {
return nil, err
}
defer res.Body.Close()
if res.StatusCode != http.StatusOK {
// print url also
return nil, errors.Errorf("Invalid status code returned when fetching tags list %d", res.StatusCode)
}
type tagsRes struct {
Tags []string
}
tags := &tagsRes{}
if err := json.NewDecoder(res.Body).Decode(tags); err != nil {
return nil, err
}
return tags.Tags, nil
// GetRepositoryTags lists all tags available in the repository. The tag
// provided inside the ImageReference will be ignored. (This is a
// backward-compatible shim method which calls the module-level
// GetRepositoryTags)
func (i *Image) GetRepositoryTags(ctx context.Context) ([]string, error) {
return GetRepositoryTags(ctx, i.src.c.sys, i.src.ref)
}
// GetRepositoryTags lists all tags available in the repository. The tag
// provided inside the ImageReference will be ignored.
func GetRepositoryTags(ctx context.Context, sys *types.SystemContext, ref types.ImageReference) ([]string, error) {
dr, ok := ref.(dockerReference)
if !ok {
return nil, errors.Errorf("ref must be a dockerReference")
}
path := fmt.Sprintf(tagsPath, reference.Path(dr.ref))
client, err := newDockerClientFromRef(sys, dr, false, "pull")
if err != nil {
return nil, errors.Wrap(err, "failed to create client")
}
tags := make([]string, 0)
for {
res, err := client.makeRequest(ctx, "GET", path, nil, nil, v2Auth)
if err != nil {
return nil, err
}
defer res.Body.Close()
if res.StatusCode != http.StatusOK {
// print url also
return nil, errors.Errorf("Invalid status code returned when fetching tags list %d (%s)", res.StatusCode, http.StatusText(res.StatusCode))
}
var tagsHolder struct {
Tags []string
}
if err = json.NewDecoder(res.Body).Decode(&tagsHolder); err != nil {
return nil, err
}
tags = append(tags, tagsHolder.Tags...)
link := res.Header.Get("Link")
if link == "" {
break
}
linkURLStr := strings.Trim(strings.Split(link, ";")[0], "<>")
linkURL, err := url.Parse(linkURLStr)
if err != nil {
return tags, err
}
// can be relative or absolute, but we only want the path (and I
// guess we're in trouble if it forwards to a new place...)
path = linkURL.Path
if linkURL.RawQuery != "" {
path += "?"
path += linkURL.RawQuery
}
}
return tags, nil
}
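
A hedged usage sketch of the module-level entry point (placeholder registry); the ":latest" tag in the reference is ignored, and pagination follows the registry's RFC 5988 Link headers as above:

package main

import (
	"context"
	"fmt"

	"github.com/containers/image/docker"
)

func main() {
	ref, err := docker.ParseReference("//registry.example.com/library/busybox:latest")
	if err != nil {
		panic(err)
	}
	tags, err := docker.GetRepositoryTags(context.Background(), nil, ref)
	if err != nil {
		panic(err)
	}
	for _, t := range tags {
		fmt.Println(t)
	}
}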


@@ -2,6 +2,7 @@ package docker
import (
"bytes"
"context"
"crypto/rand"
"encoding/json"
"fmt"
@@ -12,7 +13,6 @@ import (
"os"
"path/filepath"
"github.com/Sirupsen/logrus"
"github.com/containers/image/docker/reference"
"github.com/containers/image/manifest"
"github.com/containers/image/types"
@@ -20,24 +20,11 @@ import (
"github.com/docker/distribution/registry/api/v2"
"github.com/docker/distribution/registry/client"
"github.com/opencontainers/go-digest"
imgspecv1 "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
)
var manifestMIMETypes = []string{
// TODO(runcom): we'll add OCI as part of another PR here
manifest.DockerV2Schema2MediaType,
manifest.DockerV2Schema1SignedMediaType,
manifest.DockerV2Schema1MediaType,
}
func supportedManifestMIMETypesMap() map[string]bool {
m := make(map[string]bool, len(manifestMIMETypes))
for _, mt := range manifestMIMETypes {
m[mt] = true
}
return m
}
type dockerImageDestination struct {
ref dockerReference
c *dockerClient
@@ -46,8 +33,8 @@ type dockerImageDestination struct {
}
// newImageDestination creates a new ImageDestination for the specified image reference.
func newImageDestination(ctx *types.SystemContext, ref dockerReference) (types.ImageDestination, error) {
c, err := newDockerClient(ctx, ref, true, "pull,push")
func newImageDestination(sys *types.SystemContext, ref dockerReference) (types.ImageDestination, error) {
c, err := newDockerClientFromRef(sys, ref, true, "pull,push")
if err != nil {
return nil, err
}
@@ -69,13 +56,18 @@ func (d *dockerImageDestination) Close() error {
}
func (d *dockerImageDestination) SupportedManifestMIMETypes() []string {
return manifestMIMETypes
return []string{
imgspecv1.MediaTypeImageManifest,
manifest.DockerV2Schema2MediaType,
manifest.DockerV2Schema1SignedMediaType,
manifest.DockerV2Schema1MediaType,
}
}
// SupportsSignatures returns an error (to be displayed to the user) if the destination certainly can't store signatures.
// Note: It is still possible for PutSignatures to fail if SupportsSignatures returns nil.
func (d *dockerImageDestination) SupportsSignatures() error {
if err := d.c.detectProperties(); err != nil {
func (d *dockerImageDestination) SupportsSignatures(ctx context.Context) error {
if err := d.c.detectProperties(ctx); err != nil {
return err
}
switch {
@@ -88,9 +80,8 @@ func (d *dockerImageDestination) SupportsSignatures() error {
}
}
// ShouldCompressLayers returns true iff it is desirable to compress layer blobs written to this destination.
func (d *dockerImageDestination) ShouldCompressLayers() bool {
return true
func (d *dockerImageDestination) DesiredLayerCompression() types.LayerCompression {
return types.Compress
}
// AcceptsForeignLayerURLs returns false iff foreign layers in manifest should be actually
@@ -99,6 +90,18 @@ func (d *dockerImageDestination) AcceptsForeignLayerURLs() bool {
return true
}
// MustMatchRuntimeOS returns true iff the destination can store only images targeted for the current runtime OS. False otherwise.
func (d *dockerImageDestination) MustMatchRuntimeOS() bool {
return false
}
// IgnoresEmbeddedDockerReference returns true iff the destination does not care about Image.EmbeddedDockerReferenceConflicts(),
// and would prefer to receive an unmodified manifest instead of one modified for the destination.
// Does not make a difference if Reference().DockerReference() is nil.
func (d *dockerImageDestination) IgnoresEmbeddedDockerReference() bool {
return false // We do want the manifest updated; older registry versions refuse manifests if the embedded reference does not match.
}
// sizeCounter is an io.Writer which only counts the total size of its input.
type sizeCounter struct{ size int64 }
@@ -113,9 +116,9 @@ func (c *sizeCounter) Write(p []byte) (n int, err error) {
// WARNING: The contents of stream are being verified on the fly. Until stream.Read() returns io.EOF, the contents of the data SHOULD NOT be available
// to any other readers for download using the supplied digest.
// If stream.Read() at any time, ESPECIALLY at end of input, returns an error, PutBlob MUST 1) fail, and 2) delete any data stored so far.
func (d *dockerImageDestination) PutBlob(stream io.Reader, inputInfo types.BlobInfo) (types.BlobInfo, error) {
func (d *dockerImageDestination) PutBlob(ctx context.Context, stream io.Reader, inputInfo types.BlobInfo, isConfig bool) (types.BlobInfo, error) {
if inputInfo.Digest.String() != "" {
haveBlob, size, err := d.HasBlob(inputInfo)
haveBlob, size, err := d.HasBlob(ctx, inputInfo)
if err != nil {
return types.BlobInfo{}, err
}
@@ -127,14 +130,14 @@ func (d *dockerImageDestination) PutBlob(stream io.Reader, inputInfo types.BlobI
// FIXME? Chunked upload, progress reporting, etc.
uploadPath := fmt.Sprintf(blobUploadPath, reference.Path(d.ref.ref))
logrus.Debugf("Uploading %s", uploadPath)
res, err := d.c.makeRequest("POST", uploadPath, nil, nil)
res, err := d.c.makeRequest(ctx, "POST", uploadPath, nil, nil, v2Auth)
if err != nil {
return types.BlobInfo{}, err
}
defer res.Body.Close()
if res.StatusCode != http.StatusAccepted {
logrus.Debugf("Error initiating layer upload, response %#v", *res)
return types.BlobInfo{}, errors.Errorf("Error initiating layer upload to %s, status %d", uploadPath, res.StatusCode)
return types.BlobInfo{}, errors.Wrapf(client.HandleErrorResponse(res), "Error initiating layer upload to %s in %s", uploadPath, d.c.registry)
}
uploadLocation, err := res.Location()
if err != nil {
@@ -144,7 +147,7 @@ func (d *dockerImageDestination) PutBlob(stream io.Reader, inputInfo types.BlobI
digester := digest.Canonical.Digester()
sizeCounter := &sizeCounter{}
tee := io.TeeReader(stream, io.MultiWriter(digester.Hash(), sizeCounter))
res, err = d.c.makeRequestToResolvedURL("PATCH", uploadLocation.String(), map[string][]string{"Content-Type": {"application/octet-stream"}}, tee, inputInfo.Size, true)
res, err = d.c.makeRequestToResolvedURL(ctx, "PATCH", uploadLocation.String(), map[string][]string{"Content-Type": {"application/octet-stream"}}, tee, inputInfo.Size, v2Auth)
if err != nil {
logrus.Debugf("Error uploading layer chunked, response %#v", res)
return types.BlobInfo{}, err
@@ -163,14 +166,14 @@ func (d *dockerImageDestination) PutBlob(stream io.Reader, inputInfo types.BlobI
// TODO: check inputInfo.Digest == computedDigest https://github.com/containers/image/pull/70#discussion_r77646717
locationQuery.Set("digest", computedDigest.String())
uploadLocation.RawQuery = locationQuery.Encode()
res, err = d.c.makeRequestToResolvedURL("PUT", uploadLocation.String(), map[string][]string{"Content-Type": {"application/octet-stream"}}, nil, -1, true)
res, err = d.c.makeRequestToResolvedURL(ctx, "PUT", uploadLocation.String(), map[string][]string{"Content-Type": {"application/octet-stream"}}, nil, -1, v2Auth)
if err != nil {
return types.BlobInfo{}, err
}
defer res.Body.Close()
if res.StatusCode != http.StatusCreated {
logrus.Debugf("Error uploading layer, response %#v", *res)
return types.BlobInfo{}, errors.Errorf("Error uploading layer to %s, status %d", uploadLocation, res.StatusCode)
return types.BlobInfo{}, errors.Wrapf(client.HandleErrorResponse(res), "Error uploading layer to %s", uploadLocation)
}
logrus.Debugf("Upload of layer %s complete", computedDigest)
@@ -181,14 +184,14 @@ func (d *dockerImageDestination) PutBlob(stream io.Reader, inputInfo types.BlobI
// Unlike PutBlob, the digest cannot be empty. If HasBlob returns true, the size of the blob must also be returned.
// If the destination does not contain the blob, or it is unknown, HasBlob ordinarily returns (false, -1, nil);
// it returns a non-nil error only on an unexpected failure.
func (d *dockerImageDestination) HasBlob(info types.BlobInfo) (bool, int64, error) {
func (d *dockerImageDestination) HasBlob(ctx context.Context, info types.BlobInfo) (bool, int64, error) {
if info.Digest == "" {
return false, -1, errors.Errorf("can not check for a blob with unknown digest")
}
checkPath := fmt.Sprintf(blobsPath, reference.Path(d.ref.ref), info.Digest.String())
logrus.Debugf("Checking %s", checkPath)
res, err := d.c.makeRequest("HEAD", checkPath, nil, nil)
res, err := d.c.makeRequest(ctx, "HEAD", checkPath, nil, nil, v2Auth)
if err != nil {
return false, -1, err
}
@@ -199,16 +202,16 @@ func (d *dockerImageDestination) HasBlob(info types.BlobInfo) (bool, int64, erro
return true, getBlobSize(res), nil
case http.StatusUnauthorized:
logrus.Debugf("... not authorized")
return false, -1, errors.Errorf("not authorized to read from destination repository %s", reference.Path(d.ref.ref))
return false, -1, errors.Wrapf(client.HandleErrorResponse(res), "Error checking whether a blob %s exists in %s", info.Digest, d.ref.ref.Name())
case http.StatusNotFound:
logrus.Debugf("... not present")
return false, -1, nil
default:
return false, -1, errors.Errorf("failed to read from destination repository %s: %v", reference.Path(d.ref.ref), http.StatusText(res.StatusCode))
return false, -1, errors.Errorf("failed to read from destination repository %s: %d (%s)", reference.Path(d.ref.ref), res.StatusCode, http.StatusText(res.StatusCode))
}
}
func (d *dockerImageDestination) ReapplyBlob(info types.BlobInfo) (types.BlobInfo, error) {
func (d *dockerImageDestination) ReapplyBlob(ctx context.Context, info types.BlobInfo) (types.BlobInfo, error) {
return info, nil
}
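
Stepping back, a hedged trace (paths abbreviated, responses illustrative) of the monolithic upload that HasBlob and PutBlob implement together for a single layer:

    HEAD  /v2/<name>/blobs/<digest>        -> 200 (already present: PutBlob skips the upload) or 404
    POST  /v2/<name>/blobs/uploads/        -> 202 Accepted, Location: <upload URL>
    PATCH <upload URL>                     -> layer bytes streamed; digest and size computed on the fly
    PUT   <upload URL>?digest=<computed>   -> 201 Created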
@@ -216,7 +219,7 @@ func (d *dockerImageDestination) ReapplyBlob(info types.BlobInfo) (types.BlobInf
// FIXME? This should also receive a MIME type if known, to differentiate between schema versions.
// If the destination is in principle available, refuses this manifest type (e.g. it does not recognize the schema),
// but may accept a different manifest type, the returned error must be an ManifestTypeRejectedError.
func (d *dockerImageDestination) PutManifest(m []byte) error {
func (d *dockerImageDestination) PutManifest(ctx context.Context, m []byte) error {
digest, err := manifest.Digest(m)
if err != nil {
return err
@@ -234,13 +237,13 @@ func (d *dockerImageDestination) PutManifest(m []byte) error {
if mimeType != "" {
headers["Content-Type"] = []string{mimeType}
}
res, err := d.c.makeRequest("PUT", path, headers, bytes.NewReader(m))
res, err := d.c.makeRequest(ctx, "PUT", path, headers, bytes.NewReader(m), v2Auth)
if err != nil {
return err
}
defer res.Body.Close()
if res.StatusCode != http.StatusCreated {
err = errors.Wrapf(client.HandleErrorResponse(res), "Error uploading manifest to %s", path)
if !successStatus(res.StatusCode) {
err = errors.Wrapf(client.HandleErrorResponse(res), "Error uploading manifest %s to %s", refTail, d.ref.ref.Name())
if isManifestInvalidError(errors.Cause(err)) {
err = types.ManifestTypeRejectedError{Err: err}
}
@@ -249,6 +252,12 @@ func (d *dockerImageDestination) PutManifest(m []byte) error {
return nil
}
// successStatus returns true if the argument is a successful HTTP response
// code (in the range 200 - 399 inclusive).
func successStatus(status int) bool {
return status >= 200 && status <= 399
}
// isManifestInvalidError returns true iff err from client.HandleErrorResponse is a “manifest invalid” error.
func isManifestInvalidError(err error) bool {
errors, ok := err.(errcode.Errors)
@@ -265,19 +274,19 @@ func isManifestInvalidError(err error) bool {
return ec.ErrorCode() == v2.ErrorCodeManifestInvalid || ec.ErrorCode() == v2.ErrorCodeTagInvalid
}
func (d *dockerImageDestination) PutSignatures(signatures [][]byte) error {
func (d *dockerImageDestination) PutSignatures(ctx context.Context, signatures [][]byte) error {
// Do not fail if we don't really need to support signatures.
if len(signatures) == 0 {
return nil
}
if err := d.c.detectProperties(); err != nil {
if err := d.c.detectProperties(ctx); err != nil {
return err
}
switch {
case d.c.signatureBase != nil:
return d.putSignaturesToLookaside(signatures)
case d.c.supportsSignatures:
return d.putSignaturesToAPIExtension(signatures)
return d.putSignaturesToAPIExtension(ctx, signatures)
default:
return errors.Errorf("X-Registry-Supports-Signatures extension not supported, and lookaside is not configured")
}
@@ -376,7 +385,7 @@ func (c *dockerClient) deleteOneSignature(url *url.URL) (missing bool, err error
}
// putSignaturesToAPIExtension implements PutSignatures() using the X-Registry-Supports-Signatures API extension.
func (d *dockerImageDestination) putSignaturesToAPIExtension(signatures [][]byte) error {
func (d *dockerImageDestination) putSignaturesToAPIExtension(ctx context.Context, signatures [][]byte) error {
// Skip dealing with the manifest digest, or reading the old state, if not necessary.
if len(signatures) == 0 {
return nil
@@ -391,7 +400,7 @@ func (d *dockerImageDestination) putSignaturesToAPIExtension(signatures [][]byte
// always adds signatures. Eventually we should also allow removing signatures,
// but the X-Registry-Supports-Signatures API extension does not support that yet.
existingSignatures, err := d.c.getExtensionsSignatures(d.ref, d.manifestDigest)
existingSignatures, err := d.c.getExtensionsSignatures(ctx, d.ref, d.manifestDigest)
if err != nil {
return err
}
@@ -433,7 +442,7 @@ sigExists:
}
path := fmt.Sprintf(extensionsSignaturePath, reference.Path(d.ref.ref), d.manifestDigest.String())
res, err := d.c.makeRequest("PUT", path, nil, bytes.NewReader(body))
res, err := d.c.makeRequest(ctx, "PUT", path, nil, bytes.NewReader(body), v2Auth)
if err != nil {
return err
}
@@ -444,7 +453,7 @@ sigExists:
logrus.Debugf("Error body %s", string(body))
}
logrus.Debugf("Error uploading signature, status %d, %#v", res.StatusCode, res)
return errors.Errorf("Error uploading signature to %s, status %d", path, res.StatusCode)
return errors.Wrapf(client.HandleErrorResponse(res), "Error uploading signature to %s in %s", path, d.c.registry)
}
}
@@ -455,6 +464,6 @@ sigExists:
// WARNING: This does not have any transactional semantics:
// - Uploaded data MAY be visible to others before Commit() is called
// - Uploaded data MAY be removed or MAY remain around if Close() is called without Commit() (i.e. rollback is allowed but not guaranteed)
func (d *dockerImageDestination) Commit() error {
func (d *dockerImageDestination) Commit(ctx context.Context) error {
return nil
}


@@ -1,6 +1,7 @@
package docker
import (
"context"
"fmt"
"io"
"io/ioutil"
@@ -10,51 +11,33 @@ import (
"os"
"strconv"
"github.com/Sirupsen/logrus"
"github.com/containers/image/docker/reference"
"github.com/containers/image/manifest"
"github.com/containers/image/types"
"github.com/docker/distribution/registry/client"
"github.com/opencontainers/go-digest"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
)
type dockerImageSource struct {
ref dockerReference
requestedManifestMIMETypes []string
c *dockerClient
ref dockerReference
c *dockerClient
// State
cachedManifest []byte // nil if not loaded yet
cachedManifestMIMEType string // Only valid if cachedManifest != nil
}
// newImageSource creates a new ImageSource for the specified image reference,
// asking the backend to use a manifest from requestedManifestMIMETypes if possible.
// nil requestedManifestMIMETypes means manifest.DefaultRequestedManifestMIMETypes.
// newImageSource creates a new ImageSource for the specified image reference.
// The caller must call .Close() on the returned ImageSource.
func newImageSource(ctx *types.SystemContext, ref dockerReference, requestedManifestMIMETypes []string) (*dockerImageSource, error) {
c, err := newDockerClient(ctx, ref, false, "pull")
func newImageSource(sys *types.SystemContext, ref dockerReference) (*dockerImageSource, error) {
c, err := newDockerClientFromRef(sys, ref, false, "pull")
if err != nil {
return nil, err
}
if requestedManifestMIMETypes == nil {
requestedManifestMIMETypes = manifest.DefaultRequestedManifestMIMETypes
}
supportedMIMEs := supportedManifestMIMETypesMap()
acceptableRequestedMIMEs := false
for _, mtrequested := range requestedManifestMIMETypes {
if supportedMIMEs[mtrequested] {
acceptableRequestedMIMEs = true
break
}
}
if !acceptableRequestedMIMEs {
requestedManifestMIMETypes = manifest.DefaultRequestedManifestMIMETypes
}
return &dockerImageSource{
ref: ref,
requestedManifestMIMETypes: requestedManifestMIMETypes,
c: c,
c: c,
}, nil
}
@@ -69,6 +52,11 @@ func (s *dockerImageSource) Close() error {
return nil
}
// LayerInfosForCopy() returns updated layer info that should be used when reading, in preference to values in the manifest, if specified.
func (s *dockerImageSource) LayerInfosForCopy(ctx context.Context) ([]types.BlobInfo, error) {
return nil, nil
}
// simplifyContentType drops parameters from a HTTP media type (see https://tools.ietf.org/html/rfc7231#section-3.1.1.1)
// Alternatively, an empty string is returned unchanged, and invalid values are "simplified" to an empty string.
func simplifyContentType(contentType string) string {
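
Worked examples of the simplification described above (the function body is elided in this hunk; outputs follow the doc comment):

    simplifyContentType("application/json; charset=utf-8") == "application/json"
    simplifyContentType("")                                == ""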
@@ -84,25 +72,30 @@ func simplifyContentType(contentType string) string {
// GetManifest returns the image's manifest along with its MIME type (which may be empty when it can't be determined but the manifest is available).
// It may use a remote (= slow) service.
func (s *dockerImageSource) GetManifest() ([]byte, string, error) {
err := s.ensureManifestIsLoaded()
// If instanceDigest is not nil, it contains a digest of the specific manifest instance to retrieve (when the primary manifest is a manifest list);
// this never happens if the primary manifest is not a manifest list (e.g. if the source never returns manifest lists).
func (s *dockerImageSource) GetManifest(ctx context.Context, instanceDigest *digest.Digest) ([]byte, string, error) {
if instanceDigest != nil {
return s.fetchManifest(ctx, instanceDigest.String())
}
err := s.ensureManifestIsLoaded(ctx)
if err != nil {
return nil, "", err
}
return s.cachedManifest, s.cachedManifestMIMEType, nil
}
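For context, a hypothetical caller of the new two-argument GetManifest could resolve a manifest-list entry as follows; list parsing is elided, and `d` is a placeholder digest that is not part of this diff:

```go
blob, mt, err := src.GetManifest(ctx, nil) // primary manifest; may be a manifest list
if err != nil {
	return err
}
if mt == manifest.DockerV2ListMediaType {
	var d digest.Digest
	// ... parse blob as a list and set d to the chosen instance's digest ...
	blob, mt, err = src.GetManifest(ctx, &d)
	if err != nil {
		return err
	}
}
```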
func (s *dockerImageSource) fetchManifest(tagOrDigest string) ([]byte, string, error) {
func (s *dockerImageSource) fetchManifest(ctx context.Context, tagOrDigest string) ([]byte, string, error) {
path := fmt.Sprintf(manifestPath, reference.Path(s.ref.ref), tagOrDigest)
headers := make(map[string][]string)
headers["Accept"] = s.requestedManifestMIMETypes
res, err := s.c.makeRequest("GET", path, headers, nil)
headers["Accept"] = manifest.DefaultRequestedManifestMIMETypes
res, err := s.c.makeRequest(ctx, "GET", path, headers, nil, v2Auth)
if err != nil {
return nil, "", err
}
defer res.Body.Close()
if res.StatusCode != http.StatusOK {
return nil, "", client.HandleErrorResponse(res)
return nil, "", errors.Wrapf(client.HandleErrorResponse(res), "Error reading manifest %s in %s", tagOrDigest, s.ref.ref.Name())
}
manblob, err := ioutil.ReadAll(res.Body)
if err != nil {
@@ -111,20 +104,14 @@ func (s *dockerImageSource) fetchManifest(tagOrDigest string) ([]byte, string, e
return manblob, simplifyContentType(res.Header.Get("Content-Type")), nil
}
// GetTargetManifest returns an image's manifest given a digest.
// This is mainly used to retrieve a single image's manifest out of a manifest list.
func (s *dockerImageSource) GetTargetManifest(digest digest.Digest) ([]byte, string, error) {
return s.fetchManifest(digest.String())
}
// ensureManifestIsLoaded sets s.cachedManifest and s.cachedManifestMIMEType
//
// ImageSource implementations are not required or expected to do any caching,
// but because our signatures are “attached” to the manifest digest,
// we need to ensure that the digest of the manifest returned by GetManifest
// and used by GetSignatures are consistent, otherwise we would get spurious
// we need to ensure that the digest of the manifest returned by GetManifest(ctx, nil)
// and used by GetSignatures(ctx, nil) are consistent, otherwise we would get spurious
// signature verification failures when pulling while a tag is being updated.
func (s *dockerImageSource) ensureManifestIsLoaded() error {
func (s *dockerImageSource) ensureManifestIsLoaded(ctx context.Context) error {
if s.cachedManifest != nil {
return nil
}
@@ -134,7 +121,7 @@ func (s *dockerImageSource) ensureManifestIsLoaded() error {
return err
}
manblob, mt, err := s.fetchManifest(reference)
manblob, mt, err := s.fetchManifest(ctx, reference)
if err != nil {
return err
}
@@ -144,19 +131,20 @@ func (s *dockerImageSource) ensureManifestIsLoaded() error {
return nil
}
func (s *dockerImageSource) getExternalBlob(urls []string) (io.ReadCloser, int64, error) {
func (s *dockerImageSource) getExternalBlob(ctx context.Context, urls []string) (io.ReadCloser, int64, error) {
var (
resp *http.Response
err error
)
for _, url := range urls {
resp, err = s.c.makeRequestToResolvedURL("GET", url, nil, nil, -1, false)
resp, err = s.c.makeRequestToResolvedURL(ctx, "GET", url, nil, nil, -1, noAuth)
if err == nil {
if resp.StatusCode != http.StatusOK {
err = errors.Errorf("error fetching external blob from %q: %d", url, resp.StatusCode)
err = errors.Errorf("error fetching external blob from %q: %d (%s)", url, resp.StatusCode, http.StatusText(resp.StatusCode))
logrus.Debug(err)
continue
}
break
}
}
if resp.Body != nil && err == nil {
@@ -174,45 +162,64 @@ func getBlobSize(resp *http.Response) int64 {
}
// GetBlob returns a stream for the specified blob, and the blobs size (or -1 if unknown).
func (s *dockerImageSource) GetBlob(info types.BlobInfo) (io.ReadCloser, int64, error) {
func (s *dockerImageSource) GetBlob(ctx context.Context, info types.BlobInfo) (io.ReadCloser, int64, error) {
if len(info.URLs) != 0 {
return s.getExternalBlob(info.URLs)
return s.getExternalBlob(ctx, info.URLs)
}
path := fmt.Sprintf(blobsPath, reference.Path(s.ref.ref), info.Digest.String())
logrus.Debugf("Downloading %s", path)
res, err := s.c.makeRequest("GET", path, nil, nil)
res, err := s.c.makeRequest(ctx, "GET", path, nil, nil, v2Auth)
if err != nil {
return nil, 0, err
}
if res.StatusCode != http.StatusOK {
// print url also
return nil, 0, errors.Errorf("Invalid status code returned when fetching blob %d", res.StatusCode)
return nil, 0, errors.Errorf("Invalid status code returned when fetching blob %d (%s)", res.StatusCode, http.StatusText(res.StatusCode))
}
return res.Body, getBlobSize(res), nil
}
func (s *dockerImageSource) GetSignatures() ([][]byte, error) {
if err := s.c.detectProperties(); err != nil {
// GetSignatures returns the image's signatures. It may use a remote (= slow) service.
// If instanceDigest is not nil, it contains a digest of the specific manifest instance to retrieve signatures for
// (when the primary manifest is a manifest list); this never happens if the primary manifest is not a manifest list
// (e.g. if the source never returns manifest lists).
func (s *dockerImageSource) GetSignatures(ctx context.Context, instanceDigest *digest.Digest) ([][]byte, error) {
if err := s.c.detectProperties(ctx); err != nil {
return nil, err
}
switch {
case s.c.signatureBase != nil:
return s.getSignaturesFromLookaside()
return s.getSignaturesFromLookaside(ctx, instanceDigest)
case s.c.supportsSignatures:
return s.getSignaturesFromAPIExtension()
return s.getSignaturesFromAPIExtension(ctx, instanceDigest)
default:
return [][]byte{}, nil
}
}
// manifestDigest returns a digest of the manifest, from instanceDigest if non-nil; or from the supplied reference,
// or finally, from a fetched manifest.
func (s *dockerImageSource) manifestDigest(ctx context.Context, instanceDigest *digest.Digest) (digest.Digest, error) {
if instanceDigest != nil {
return *instanceDigest, nil
}
if digested, ok := s.ref.ref.(reference.Digested); ok {
d := digested.Digest()
if d.Algorithm() == digest.Canonical {
return d, nil
}
}
if err := s.ensureManifestIsLoaded(ctx); err != nil {
return "", err
}
return manifest.Digest(s.cachedManifest)
}
// getSignaturesFromLookaside implements GetSignatures() from the lookaside location configured in s.c.signatureBase,
// which is not nil.
func (s *dockerImageSource) getSignaturesFromLookaside() ([][]byte, error) {
if err := s.ensureManifestIsLoaded(); err != nil {
return nil, err
}
manifestDigest, err := manifest.Digest(s.cachedManifest)
func (s *dockerImageSource) getSignaturesFromLookaside(ctx context.Context, instanceDigest *digest.Digest) ([][]byte, error) {
manifestDigest, err := s.manifestDigest(ctx, instanceDigest)
if err != nil {
return nil, err
}
@@ -224,7 +231,7 @@ func (s *dockerImageSource) getSignaturesFromLookaside() ([][]byte, error) {
if url == nil {
return nil, errors.Errorf("Internal error: signatureStorageURL with non-nil base returned nil")
}
signature, missing, err := s.getOneSignature(url)
signature, missing, err := s.getOneSignature(ctx, url)
if err != nil {
return nil, err
}
@@ -239,7 +246,7 @@ func (s *dockerImageSource) getSignaturesFromLookaside() ([][]byte, error) {
// getOneSignature downloads one signature from url.
// If it successfully determines that the signature does not exist, returns with missing set to true and error set to nil.
// NOTE: Keep this in sync with docs/signature-protocols.md!
func (s *dockerImageSource) getOneSignature(url *url.URL) (signature []byte, missing bool, err error) {
func (s *dockerImageSource) getOneSignature(ctx context.Context, url *url.URL) (signature []byte, missing bool, err error) {
switch url.Scheme {
case "file":
logrus.Debugf("Reading %s", url.Path)
@@ -254,7 +261,12 @@ func (s *dockerImageSource) getOneSignature(url *url.URL) (signature []byte, mis
case "http", "https":
logrus.Debugf("GET %s", url)
res, err := s.c.client.Get(url.String())
req, err := http.NewRequest("GET", url.String(), nil)
if err != nil {
return nil, false, err
}
req = req.WithContext(ctx)
res, err := s.c.client.Do(req)
if err != nil {
return nil, false, err
}
@@ -262,7 +274,7 @@ func (s *dockerImageSource) getOneSignature(url *url.URL) (signature []byte, mis
if res.StatusCode == http.StatusNotFound {
return nil, true, nil
} else if res.StatusCode != http.StatusOK {
return nil, false, errors.Errorf("Error reading signature from %s: status %d", url.String(), res.StatusCode)
return nil, false, errors.Errorf("Error reading signature from %s: status %d (%s)", url.String(), res.StatusCode, http.StatusText(res.StatusCode))
}
sig, err := ioutil.ReadAll(res.Body)
if err != nil {
@@ -276,16 +288,13 @@ func (s *dockerImageSource) getOneSignature(url *url.URL) (signature []byte, mis
}
// getSignaturesFromAPIExtension implements GetSignatures() using the X-Registry-Supports-Signatures API extension.
func (s *dockerImageSource) getSignaturesFromAPIExtension() ([][]byte, error) {
if err := s.ensureManifestIsLoaded(); err != nil {
return nil, err
}
manifestDigest, err := manifest.Digest(s.cachedManifest)
func (s *dockerImageSource) getSignaturesFromAPIExtension(ctx context.Context, instanceDigest *digest.Digest) ([][]byte, error) {
manifestDigest, err := s.manifestDigest(ctx, instanceDigest)
if err != nil {
return nil, err
}
parsedBody, err := s.c.getExtensionsSignatures(s.ref, manifestDigest)
parsedBody, err := s.c.getExtensionsSignatures(ctx, s.ref, manifestDigest)
if err != nil {
return nil, err
}
@@ -300,8 +309,15 @@ func (s *dockerImageSource) getSignaturesFromAPIExtension() ([][]byte, error) {
}
// deleteImage deletes the named image from the registry, if supported.
func deleteImage(ctx *types.SystemContext, ref dockerReference) error {
c, err := newDockerClient(ctx, ref, true, "push")
func deleteImage(ctx context.Context, sys *types.SystemContext, ref dockerReference) error {
// docker/distribution does not document what action should be used for deleting images.
//
// Current docker/distribution requires "pull" for reading the manifest and "delete" for deleting it.
// quay.io requires "push" (an explicit "pull" is unnecessary), and does not grant any token (it fails parsing the request) if "delete" is included.
// OpenShift ignores the action string (both the password and the token is an OpenShift API token identifying a user).
//
// We have to hard-code a single string, luckily both docker/distribution and quay.io support "*" to mean "everything".
c, err := newDockerClientFromRef(sys, ref, true, "*")
if err != nil {
return err
}
@@ -316,7 +332,7 @@ func deleteImage(ctx *types.SystemContext, ref dockerReference) error {
return err
}
getPath := fmt.Sprintf(manifestPath, reference.Path(ref.ref), refTail)
get, err := c.makeRequest("GET", getPath, headers, nil)
get, err := c.makeRequest(ctx, "GET", getPath, headers, nil, v2Auth)
if err != nil {
return err
}
@@ -338,7 +354,7 @@ func deleteImage(ctx *types.SystemContext, ref dockerReference) error {
// When retrieving the digest from a registry >= 2.3 use the following header:
// "Accept": "application/vnd.docker.distribution.manifest.v2+json"
delete, err := c.makeRequest("DELETE", deletePath, headers, nil)
delete, err := c.makeRequest(ctx, "DELETE", deletePath, headers, nil, v2Auth)
if err != nil {
return err
}

View File

@@ -1,6 +1,7 @@
package docker
import (
"context"
"fmt"
"strings"
@@ -122,31 +123,30 @@ func (ref dockerReference) PolicyConfigurationNamespaces() []string {
return policyconfiguration.DockerReferenceNamespaces(ref.ref)
}
// NewImage returns a types.Image for this reference, possibly specialized for this ImageTransport.
// The caller must call .Close() on the returned Image.
// NewImage returns a types.ImageCloser for this reference, possibly specialized for this ImageTransport.
// The caller must call .Close() on the returned ImageCloser.
// NOTE: If any kind of signature verification should happen, build an UnparsedImage from the value returned by NewImageSource,
// verify that UnparsedImage, and convert it into a real Image via image.FromUnparsedImage.
func (ref dockerReference) NewImage(ctx *types.SystemContext) (types.Image, error) {
return newImage(ctx, ref)
// WARNING: This may not do the right thing for a manifest list, see image.FromSource for details.
func (ref dockerReference) NewImage(ctx context.Context, sys *types.SystemContext) (types.ImageCloser, error) {
return newImage(ctx, sys, ref)
}
// NewImageSource returns a types.ImageSource for this reference,
// asking the backend to use a manifest from requestedManifestMIMETypes if possible.
// nil requestedManifestMIMETypes means manifest.DefaultRequestedManifestMIMETypes.
// NewImageSource returns a types.ImageSource for this reference.
// The caller must call .Close() on the returned ImageSource.
func (ref dockerReference) NewImageSource(ctx *types.SystemContext, requestedManifestMIMETypes []string) (types.ImageSource, error) {
return newImageSource(ctx, ref, requestedManifestMIMETypes)
func (ref dockerReference) NewImageSource(ctx context.Context, sys *types.SystemContext) (types.ImageSource, error) {
return newImageSource(sys, ref)
}
// NewImageDestination returns a types.ImageDestination for this reference.
// The caller must call .Close() on the returned ImageDestination.
func (ref dockerReference) NewImageDestination(ctx *types.SystemContext) (types.ImageDestination, error) {
return newImageDestination(ctx, ref)
func (ref dockerReference) NewImageDestination(ctx context.Context, sys *types.SystemContext) (types.ImageDestination, error) {
return newImageDestination(sys, ref)
}
// DeleteImage deletes the named image from the registry, if supported.
func (ref dockerReference) DeleteImage(ctx *types.SystemContext) error {
return deleteImage(ctx, ref)
func (ref dockerReference) DeleteImage(ctx context.Context, sys *types.SystemContext) error {
return deleteImage(ctx, sys, ref)
}
// tagOrDigest returns a tag or digest from the reference.

View File

@@ -9,12 +9,12 @@ import (
"path/filepath"
"strings"
"github.com/Sirupsen/logrus"
"github.com/containers/image/docker/reference"
"github.com/containers/image/types"
"github.com/ghodss/yaml"
"github.com/opencontainers/go-digest"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
)
// systemRegistriesDirPath is the path to registries.d, used for locating lookaside Docker signature storage.
@@ -45,9 +45,9 @@ type registryNamespace struct {
type signatureStorageBase *url.URL // The only documented value is nil, meaning storage is not supported.
// configuredSignatureStorageBase reads configuration to find an appropriate signature storage URL for ref, for write access if “write”.
func configuredSignatureStorageBase(ctx *types.SystemContext, ref dockerReference, write bool) (signatureStorageBase, error) {
func configuredSignatureStorageBase(sys *types.SystemContext, ref dockerReference, write bool) (signatureStorageBase, error) {
// FIXME? Loading and parsing the config could be cached across calls.
dirPath := registriesDirPath(ctx)
dirPath := registriesDirPath(sys)
logrus.Debugf(`Using registries.d directory %s for sigstore configuration`, dirPath)
config, err := loadAndMergeConfig(dirPath)
if err != nil {
@@ -74,13 +74,13 @@ func configuredSignatureStorageBase(ctx *types.SystemContext, ref dockerReferenc
}
// registriesDirPath returns a path to registries.d
func registriesDirPath(ctx *types.SystemContext) string {
if ctx != nil {
if ctx.RegistriesDirPath != "" {
return ctx.RegistriesDirPath
func registriesDirPath(sys *types.SystemContext) string {
if sys != nil {
if sys.RegistriesDirPath != "" {
return sys.RegistriesDirPath
}
if ctx.RootForImplicitAbsolutePaths != "" {
return filepath.Join(ctx.RootForImplicitAbsolutePaths, systemRegistriesDirPath)
if sys.RootForImplicitAbsolutePaths != "" {
return filepath.Join(sys.RootForImplicitAbsolutePaths, systemRegistriesDirPath)
}
}
return systemRegistriesDirPath
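For reference, the configuration merged here comes from YAML files under registries.d; a hypothetical /etc/containers/registries.d/default.yaml with a default lookaside and a per-registry override might look like:

```yaml
default-docker:
  sigstore: https://sigstore.example.com          # lookaside base URL for reads
docker:
  registry.example.com:
    sigstore: https://sigstore.example.com/r
    sigstore-staging: file:///var/lib/atomic/sigstore   # preferred for writes
```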

View File

@@ -3,57 +3,51 @@ package tarfile
import (
"archive/tar"
"bytes"
"context"
"encoding/json"
"fmt"
"io"
"io/ioutil"
"os"
"path/filepath"
"time"
"github.com/Sirupsen/logrus"
"github.com/containers/image/docker/reference"
"github.com/containers/image/internal/tmpdir"
"github.com/containers/image/manifest"
"github.com/containers/image/types"
"github.com/opencontainers/go-digest"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
)
const temporaryDirectoryForBigFiles = "/var/tmp" // Do not use the system default of os.TempDir(), usually /tmp, because with systemd it could be a tmpfs.
// Destination is a partial implementation of types.ImageDestination for writing to an io.Writer.
type Destination struct {
writer io.Writer
tar *tar.Writer
repoTag string
writer io.Writer
tar *tar.Writer
repoTags []reference.NamedTagged
// Other state.
blobs map[digest.Digest]types.BlobInfo // list of already-sent blobs
blobs map[digest.Digest]types.BlobInfo // list of already-sent blobs
config []byte
}
// NewDestination returns a tarfile.Destination for the specified io.Writer.
func NewDestination(dest io.Writer, ref reference.NamedTagged) *Destination {
// For github.com/docker/docker consumers, this works just as well as
// refString := ref.String()
// because when reading the RepoTags strings, github.com/docker/docker/reference
// normalizes both of them to the same value.
//
// Doing it this way to include the normalized-out `docker.io[/library]` does make
// a difference for github.com/projectatomic/docker consumers, with the
// “Add --add-registry and --block-registry options to docker daemon” patch.
// These consumers treat reference strings which include a hostname and reference
// strings without a hostname differently.
//
// Using the host name here is more explicit about the intent, and it has the same
// effect as (docker pull) in projectatomic/docker, which tags the result using
// a hostname-qualified reference.
// See https://github.com/containers/image/issues/72 for a more detailed
// analysis and explanation.
refString := fmt.Sprintf("%s:%s", ref.Name(), ref.Tag())
return &Destination{
writer: dest,
tar: tar.NewWriter(dest),
repoTag: refString,
blobs: make(map[digest.Digest]types.BlobInfo),
repoTags := []reference.NamedTagged{}
if ref != nil {
repoTags = append(repoTags, ref)
}
return &Destination{
writer: dest,
tar: tar.NewWriter(dest),
repoTags: repoTags,
blobs: make(map[digest.Digest]types.BlobInfo),
}
}
// AddRepoTags adds the specified tags to the destination's repoTags.
func (d *Destination) AddRepoTags(tags []reference.NamedTagged) {
d.repoTags = append(d.repoTags, tags...)
}
// SupportedManifestMIMETypes tells which manifest mime types the destination supports
@@ -66,50 +60,50 @@ func (d *Destination) SupportedManifestMIMETypes() []string {
// SupportsSignatures returns an error (to be displayed to the user) if the destination certainly can't store signatures.
// Note: It is still possible for PutSignatures to fail if SupportsSignatures returns nil.
func (d *Destination) SupportsSignatures() error {
func (d *Destination) SupportsSignatures(ctx context.Context) error {
return errors.Errorf("Storing signatures for docker tar files is not supported")
}
// ShouldCompressLayers returns true iff it is desirable to compress layer blobs written to this destination.
func (d *Destination) ShouldCompressLayers() bool {
return false
}
// AcceptsForeignLayerURLs returns false iff foreign layers in manifest should be actually
// uploaded to the image destination, true otherwise.
func (d *Destination) AcceptsForeignLayerURLs() bool {
return false
}
// MustMatchRuntimeOS returns true iff the destination can store only images targeted for the current runtime OS. False otherwise.
func (d *Destination) MustMatchRuntimeOS() bool {
return false
}
// IgnoresEmbeddedDockerReference returns true iff the destination does not care about Image.EmbeddedDockerReferenceConflicts(),
// and would prefer to receive an unmodified manifest instead of one modified for the destination.
// Does not make a difference if Reference().DockerReference() is nil.
func (d *Destination) IgnoresEmbeddedDockerReference() bool {
return false // N/A, we only accept schema2 images where EmbeddedDockerReferenceConflicts() is always false.
}
// PutBlob writes contents of stream and returns data representing the result (with all data filled in).
// inputInfo.Digest can be optionally provided if known; it is not mandatory for the implementation to verify it.
// inputInfo.Size is the expected length of stream, if known.
// WARNING: The contents of stream are being verified on the fly. Until stream.Read() returns io.EOF, the contents of the data SHOULD NOT be available
// to any other readers for download using the supplied digest.
// If stream.Read() at any time, ESPECIALLY at end of input, returns an error, PutBlob MUST 1) fail, and 2) delete any data stored so far.
func (d *Destination) PutBlob(stream io.Reader, inputInfo types.BlobInfo) (types.BlobInfo, error) {
if inputInfo.Digest.String() == "" {
return types.BlobInfo{}, errors.Errorf("Can not stream a blob with unknown digest to docker tarfile")
}
ok, size, err := d.HasBlob(inputInfo)
if err != nil {
return types.BlobInfo{}, err
}
if ok {
return types.BlobInfo{Digest: inputInfo.Digest, Size: size}, nil
}
if inputInfo.Size == -1 { // Ouch, we need to stream the blob into a temporary file just to determine the size.
func (d *Destination) PutBlob(ctx context.Context, stream io.Reader, inputInfo types.BlobInfo, isConfig bool) (types.BlobInfo, error) {
// Ouch, we need to stream the blob into a temporary file just to determine the size.
// When the layer is decompressed, we also have to generate the digest on the uncompressed data.
if inputInfo.Size == -1 || inputInfo.Digest.String() == "" {
logrus.Debugf("docker tarfile: input with unknown size, streaming to disk first ...")
streamCopy, err := ioutil.TempFile(temporaryDirectoryForBigFiles, "docker-tarfile-blob")
streamCopy, err := ioutil.TempFile(tmpdir.TemporaryDirectoryForBigFiles(), "docker-tarfile-blob")
if err != nil {
return types.BlobInfo{}, err
}
defer os.Remove(streamCopy.Name())
defer streamCopy.Close()
size, err := io.Copy(streamCopy, stream)
digester := digest.Canonical.Digester()
tee := io.TeeReader(stream, digester.Hash())
// TODO: This can take quite some time, and should ideally be cancellable using ctx.Done().
size, err := io.Copy(streamCopy, tee)
if err != nil {
return types.BlobInfo{}, err
}
@@ -118,17 +112,43 @@ func (d *Destination) PutBlob(stream io.Reader, inputInfo types.BlobInfo) (types
return types.BlobInfo{}, err
}
inputInfo.Size = size // inputInfo is a struct, so we are only modifying our copy.
if inputInfo.Digest == "" {
inputInfo.Digest = digester.Digest()
}
stream = streamCopy
logrus.Debugf("... streaming done")
}
digester := digest.Canonical.Digester()
tee := io.TeeReader(stream, digester.Hash())
if err := d.sendFile(inputInfo.Digest.String(), inputInfo.Size, tee); err != nil {
// Maybe the blob has already been sent
ok, size, err := d.HasBlob(ctx, inputInfo)
if err != nil {
return types.BlobInfo{}, err
}
d.blobs[inputInfo.Digest] = types.BlobInfo{Digest: digester.Digest(), Size: inputInfo.Size}
return types.BlobInfo{Digest: digester.Digest(), Size: inputInfo.Size}, nil
if ok {
return types.BlobInfo{Digest: inputInfo.Digest, Size: size}, nil
}
if isConfig {
buf, err := ioutil.ReadAll(stream)
if err != nil {
return types.BlobInfo{}, errors.Wrap(err, "Error reading Config file stream")
}
d.config = buf
if err := d.sendFile(inputInfo.Digest.Hex()+".json", inputInfo.Size, bytes.NewReader(buf)); err != nil {
return types.BlobInfo{}, errors.Wrap(err, "Error writing Config file")
}
} else {
// Note that this can't be e.g. filepath.Join(l.Digest.Hex(), legacyLayerFileName); due to the way
// writeLegacyLayerMetadata constructs layer IDs differently from inputInfo.Digest values (as described
// inside it), most of the layers would end up in subdirectories alone without any metadata; (docker load)
// tries to load every subdirectory as an image and fails if the config is missing. So, keep the layers
// in the root of the tarball.
if err := d.sendFile(inputInfo.Digest.Hex()+".tar", inputInfo.Size, stream); err != nil {
return types.BlobInfo{}, err
}
}
d.blobs[inputInfo.Digest] = types.BlobInfo{Digest: inputInfo.Digest, Size: inputInfo.Size}
return types.BlobInfo{Digest: inputInfo.Digest, Size: inputInfo.Size}, nil
}
// HasBlob returns true iff the image destination already contains a blob with
@@ -137,7 +157,7 @@ func (d *Destination) PutBlob(stream io.Reader, inputInfo types.BlobInfo) (types
// the blob must also be returned. If the destination does not contain the
// blob, or it is unknown, HasBlob ordinarily returns (false, -1, nil); it
// returns a non-nil error only on an unexpected failure.
func (d *Destination) HasBlob(info types.BlobInfo) (bool, int64, error) {
func (d *Destination) HasBlob(ctx context.Context, info types.BlobInfo) (bool, int64, error) {
if info.Digest == "" {
return false, -1, errors.Errorf("Can not check for a blob with unknown digest")
}
@@ -152,18 +172,38 @@ func (d *Destination) HasBlob(info types.BlobInfo) (bool, int64, error) {
// returned false. Like HasBlob and unlike PutBlob, the digest can not be
// empty. If the blob is a filesystem layer, this signifies that the changes
// it describes need to be applied again when composing a filesystem tree.
func (d *Destination) ReapplyBlob(info types.BlobInfo) (types.BlobInfo, error) {
func (d *Destination) ReapplyBlob(ctx context.Context, info types.BlobInfo) (types.BlobInfo, error) {
return info, nil
}
func (d *Destination) createRepositoriesFile(rootLayerID string) error {
repositories := map[string]map[string]string{}
for _, repoTag := range d.repoTags {
if val, ok := repositories[repoTag.Name()]; ok {
val[repoTag.Tag()] = rootLayerID
} else {
repositories[repoTag.Name()] = map[string]string{repoTag.Tag(): rootLayerID}
}
}
b, err := json.Marshal(repositories)
if err != nil {
return errors.Wrap(err, "Error marshaling repositories")
}
if err := d.sendBytes(legacyRepositoriesFileName, b); err != nil {
return errors.Wrap(err, "Error writing config json file")
}
return nil
}
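For illustration, with hypothetical tags docker.io/library/busybox:latest and docker.io/library/busybox:1.30 and a top layer ID of aaaa… (abbreviated), the resulting legacy repositories file would contain:

```json
{
  "docker.io/library/busybox": {
    "latest": "aaaa...",
    "1.30": "aaaa..."
  }
}
```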
// PutManifest writes manifest to the destination.
// FIXME? This should also receive a MIME type if known, to differentiate between schema versions.
// If the destination is in principle available, refuses this manifest type (e.g. it does not recognize the schema),
// but may accept a different manifest type, the returned error must be a ManifestTypeRejectedError.
func (d *Destination) PutManifest(m []byte) error {
func (d *Destination) PutManifest(ctx context.Context, m []byte) error {
// We do not bother with types.ManifestTypeRejectedError; our .SupportedManifestMIMETypes() above is already providing only one alternative,
// so the caller trying a different manifest kind would be pointless.
var man schema2Manifest
var man manifest.Schema2
if err := json.Unmarshal(m, &man); err != nil {
return errors.Wrap(err, "Error parsing manifest")
}
@@ -171,14 +211,42 @@ func (d *Destination) PutManifest(m []byte) error {
return errors.Errorf("Unsupported manifest type, need a Docker schema 2 manifest")
}
layerPaths := []string{}
for _, l := range man.Layers {
layerPaths = append(layerPaths, l.Digest.String())
layerPaths, lastLayerID, err := d.writeLegacyLayerMetadata(man.LayersDescriptors)
if err != nil {
return err
}
items := []manifestItem{{
Config: man.Config.Digest.String(),
RepoTags: []string{d.repoTag},
if len(man.LayersDescriptors) > 0 {
if err := d.createRepositoriesFile(lastLayerID); err != nil {
return err
}
}
repoTags := []string{}
for _, tag := range d.repoTags {
// For github.com/docker/docker consumers, this works just as well as
// refString := ref.String()
// because when reading the RepoTags strings, github.com/docker/docker/reference
// normalizes both of them to the same value.
//
// Doing it this way to include the normalized-out `docker.io[/library]` does make
// a difference for github.com/projectatomic/docker consumers, with the
// “Add --add-registry and --block-registry options to docker daemon” patch.
// These consumers treat reference strings which include a hostname and reference
// strings without a hostname differently.
//
// Using the host name here is more explicit about the intent, and it has the same
// effect as (docker pull) in projectatomic/docker, which tags the result using
// a hostname-qualified reference.
// See https://github.com/containers/image/issues/72 for a more detailed
// analysis and explanation.
refString := fmt.Sprintf("%s:%s", tag.Name(), tag.Tag())
repoTags = append(repoTags, refString)
}
items := []ManifestItem{{
Config: man.ConfigDescriptor.Digest.Hex() + ".json",
RepoTags: repoTags,
Layers: layerPaths,
Parent: "",
LayerSources: nil,
@@ -189,12 +257,81 @@ func (d *Destination) PutManifest(m []byte) error {
}
// FIXME? Do we also need to support the legacy format?
return d.sendFile(manifestFileName, int64(len(itemsBytes)), bytes.NewReader(itemsBytes))
return d.sendBytes(manifestFileName, itemsBytes)
}
// writeLegacyLayerMetadata writes legacy VERSION and configuration files for all layers
func (d *Destination) writeLegacyLayerMetadata(layerDescriptors []manifest.Schema2Descriptor) (layerPaths []string, lastLayerID string, err error) {
var chainID digest.Digest
lastLayerID = ""
for i, l := range layerDescriptors {
// This chainID value matches the computation in docker/docker/layer.CreateChainID …
if chainID == "" {
chainID = l.Digest
} else {
chainID = digest.Canonical.FromString(chainID.String() + " " + l.Digest.String())
}
// … but note that this image ID does not match docker/docker/image/v1.CreateID. At least recent
// versions allocate new IDs on load, as long as the IDs we use are unique / cannot loop.
//
// Overall, the goal of computing a digest dependent on the full history is to avoid reusing an image ID
// (and possibly creating a loop in the "parent" links) if a layer with the same DiffID appears two or more
// times in layersDescriptors. The ChainID values are sufficient for this, the v1.CreateID computation
// which also mixes in the full image configuration seems unnecessary, at least as long as we are storing
// only a single image per tarball, i.e. all DiffID prefixes are unique (can't differ only with
// configuration).
layerID := chainID.Hex()
physicalLayerPath := l.Digest.Hex() + ".tar"
// The layer itself has already been stored into physicalLayerPath by PutBlob.
// So, use that path for layerPaths used in the non-legacy manifest
layerPaths = append(layerPaths, physicalLayerPath)
// ... and create a symlink for the legacy format;
if err := d.sendSymlink(filepath.Join(layerID, legacyLayerFileName), filepath.Join("..", physicalLayerPath)); err != nil {
return nil, "", errors.Wrap(err, "Error creating layer symbolic link")
}
b := []byte("1.0")
if err := d.sendBytes(filepath.Join(layerID, legacyVersionFileName), b); err != nil {
return nil, "", errors.Wrap(err, "Error writing VERSION file")
}
// The legacy format requires a config file per layer
layerConfig := make(map[string]interface{})
layerConfig["id"] = layerID
// The root layer doesn't have any parent
if lastLayerID != "" {
layerConfig["parent"] = lastLayerID
}
// The top layer's configuration file is generated from a subset of the image configuration
if i == len(layerDescriptors)-1 {
var config map[string]*json.RawMessage
err := json.Unmarshal(d.config, &config)
if err != nil {
return nil, "", errors.Wrap(err, "Error unmarshaling config")
}
for _, attr := range [7]string{"architecture", "config", "container", "container_config", "created", "docker_version", "os"} {
layerConfig[attr] = config[attr]
}
}
b, err := json.Marshal(layerConfig)
if err != nil {
return nil, "", errors.Wrap(err, "Error marshaling layer config")
}
if err := d.sendBytes(filepath.Join(layerID, legacyConfigFileName), b); err != nil {
return nil, "", errors.Wrap(err, "Error writing config json file")
}
lastLayerID = layerID
}
return layerPaths, lastLayerID, nil
}
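As a standalone illustration of the chain-ID recurrence in the loop above (matching docker/docker/layer.CreateChainID for the canonical algorithm), here is a hypothetical helper mapping a DiffID sequence to its chain IDs:

```go
func chainIDs(diffIDs []digest.Digest) []digest.Digest {
	out := make([]digest.Digest, 0, len(diffIDs))
	var chainID digest.Digest
	for _, d := range diffIDs {
		if chainID == "" {
			chainID = d // the first chain ID is the first DiffID itself
		} else {
			chainID = digest.Canonical.FromString(chainID.String() + " " + d.String())
		}
		out = append(out, chainID)
	}
	return out
}
```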
type tarFI struct {
path string
size int64
path string
size int64
isSymlink bool
}
func (t *tarFI) Name() string {
@@ -204,6 +341,9 @@ func (t *tarFI) Size() int64 {
return t.size
}
func (t *tarFI) Mode() os.FileMode {
if t.isSymlink {
return os.ModeSymlink
}
return 0444
}
func (t *tarFI) ModTime() time.Time {
@@ -216,6 +356,21 @@ func (t *tarFI) Sys() interface{} {
return nil
}
// sendSymlink sends a symlink into the tar stream.
func (d *Destination) sendSymlink(path string, target string) error {
hdr, err := tar.FileInfoHeader(&tarFI{path: path, size: 0, isSymlink: true}, target)
if err != nil {
return err
}
logrus.Debugf("Sending as tar link %s -> %s", path, target)
return d.tar.WriteHeader(hdr)
}
// sendBytes sends a path into the tar stream.
func (d *Destination) sendBytes(path string, b []byte) error {
return d.sendFile(path, int64(len(b)), bytes.NewReader(b))
}
// sendFile sends a file into the tar stream.
func (d *Destination) sendFile(path string, expectedSize int64, stream io.Reader) error {
hdr, err := tar.FileInfoHeader(&tarFI{path: path, size: expectedSize}, "")
@@ -226,6 +381,7 @@ func (d *Destination) sendFile(path string, expectedSize int64, stream io.Reader
if err := d.tar.WriteHeader(hdr); err != nil {
return err
}
// TODO: This can take quite some time, and should ideally be cancellable using a context.Context.
size, err := io.Copy(d.tar, stream)
if err != nil {
return err
@@ -239,7 +395,7 @@ func (d *Destination) sendFile(path string, expectedSize int64, stream io.Reader
// PutSignatures adds the given signatures to the docker tarfile (currently not
// supported). MUST be called after PutManifest (signatures reference manifest
// contents)
func (d *Destination) PutSignatures(signatures [][]byte) error {
func (d *Destination) PutSignatures(ctx context.Context, signatures [][]byte) error {
if len(signatures) != 0 {
return errors.Errorf("Storing signatures for docker tar files is not supported")
}
@@ -248,6 +404,6 @@ func (d *Destination) PutSignatures(signatures [][]byte) error {
// Commit finishes writing data to the underlying io.Writer.
// It is the caller's responsibility to close it, if necessary.
func (d *Destination) Commit() error {
func (d *Destination) Commit(ctx context.Context) error {
return d.tar.Close()
}

View File

@@ -3,12 +3,14 @@ package tarfile
import (
"archive/tar"
"bytes"
"context"
"encoding/json"
"io"
"io/ioutil"
"os"
"path"
"github.com/containers/image/internal/tmpdir"
"github.com/containers/image/manifest"
"github.com/containers/image/pkg/compression"
"github.com/containers/image/types"
@@ -18,13 +20,14 @@ import (
// Source is a partial implementation of types.ImageSource for reading from tarPath.
type Source struct {
tarPath string
tarPath string
removeTarPathOnClose bool // Remove temp file on close if true
// The following data is only available after ensureCachedDataIsPresent() succeeds
tarManifest *manifestItem // nil if not available yet.
tarManifest *ManifestItem // nil if not available yet.
configBytes []byte
configDigest digest.Digest
orderedDiffIDList []diffID
knownLayers map[diffID]*layerInfo
orderedDiffIDList []digest.Digest
knownLayers map[digest.Digest]*layerInfo
// Other state
generatedManifest []byte // Private cache for GetManifest(), nil if not set yet.
}
@@ -34,14 +37,75 @@ type layerInfo struct {
size int64
}
// NewSource returns a tarfile.Source for the specified path.
func NewSource(path string) *Source {
// TODO: We could add support for multiple images in a single archive, so
// that people could use docker-archive:opensuse.tar:opensuse:leap as
// the source of an image.
return &Source{
tarPath: path,
// TODO: We could add support for multiple images in a single archive, so
// that people could use docker-archive:opensuse.tar:opensuse:leap as
// the source of an image.
// This applies to both the NewSourceFromFile and NewSourceFromStream functions.
// NewSourceFromFile returns a tarfile.Source for the specified path.
func NewSourceFromFile(path string) (*Source, error) {
file, err := os.Open(path)
if err != nil {
return nil, errors.Wrapf(err, "error opening file %q", path)
}
defer file.Close()
// If the file is not compressed, we can just return the file itself
// as a source. Otherwise we pass the stream to NewSourceFromStream.
stream, isCompressed, err := compression.AutoDecompress(file)
if err != nil {
return nil, errors.Wrapf(err, "Error detecting compression for file %q", path)
}
defer stream.Close()
if !isCompressed {
return &Source{
tarPath: path,
}, nil
}
return NewSourceFromStream(stream)
}
// NewSourceFromStream returns a tarfile.Source for the specified inputStream,
// which can be either compressed or uncompressed. The caller can close the
// inputStream immediately after NewSourceFromStream returns.
func NewSourceFromStream(inputStream io.Reader) (*Source, error) {
// FIXME: use SystemContext here.
// Save inputStream to a temporary file
tarCopyFile, err := ioutil.TempFile(tmpdir.TemporaryDirectoryForBigFiles(), "docker-tar")
if err != nil {
return nil, errors.Wrap(err, "error creating temporary file")
}
defer tarCopyFile.Close()
succeeded := false
defer func() {
if !succeeded {
os.Remove(tarCopyFile.Name())
}
}()
// In order to be compatible with docker-load, we need to support
// auto-decompression (it's also a nice quality-of-life thing to avoid
// giving users really confusing "invalid tar header" errors).
uncompressedStream, _, err := compression.AutoDecompress(inputStream)
if err != nil {
return nil, errors.Wrap(err, "Error auto-decompressing input")
}
defer uncompressedStream.Close()
// Copy the plain archive to the temporary file.
//
// TODO: This can take quite some time, and should ideally be cancellable
// using a context.Context.
if _, err := io.Copy(tarCopyFile, uncompressedStream); err != nil {
return nil, errors.Wrapf(err, "error copying contents to temporary file %q", tarCopyFile.Name())
}
succeeded = true
return &Source{
tarPath: tarCopyFile.Name(),
removeTarPathOnClose: true,
}, nil
}
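A hypothetical caller of the new constructors (the path is a placeholder); compressed archives are transparently staged through a temporary file, while uncompressed ones are read in place:

```go
src, err := tarfile.NewSourceFromFile("/tmp/busybox.tar.gz")
if err != nil {
	return err
}
defer src.Close()
items, err := src.LoadTarManifest()
if err != nil {
	return err
}
fmt.Printf("archive holds %d manifest item(s)\n", len(items))
```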
// tarReadCloser is a way to close the backing file of a tar.Reader when the user no longer needs the tar component.
@@ -145,23 +209,28 @@ func (s *Source) ensureCachedDataIsPresent() error {
return err
}
// Check to make sure length is 1
if len(tarManifest) != 1 {
return errors.Errorf("Unexpected tar manifest.json: expected 1 item, got %d", len(tarManifest))
}
// Read and parse config.
configBytes, err := s.readTarComponent(tarManifest.Config)
configBytes, err := s.readTarComponent(tarManifest[0].Config)
if err != nil {
return err
}
var parsedConfig image // Most fields omitted, we only care about layer DiffIDs.
var parsedConfig manifest.Schema2Image // There's a lot of info there, but we only really care about layer DiffIDs.
if err := json.Unmarshal(configBytes, &parsedConfig); err != nil {
return errors.Wrapf(err, "Error decoding tar config %s", tarManifest.Config)
return errors.Wrapf(err, "Error decoding tar config %s", tarManifest[0].Config)
}
knownLayers, err := s.prepareLayerData(tarManifest, &parsedConfig)
knownLayers, err := s.prepareLayerData(&tarManifest[0], &parsedConfig)
if err != nil {
return err
}
// Success; commit.
s.tarManifest = tarManifest
s.tarManifest = &tarManifest[0]
s.configBytes = configBytes
s.configDigest = digest.FromBytes(configBytes)
s.orderedDiffIDList = parsedConfig.RootFS.DiffIDs
@@ -170,28 +239,38 @@ func (s *Source) ensureCachedDataIsPresent() error {
}
// loadTarManifest loads and decodes the manifest.json.
func (s *Source) loadTarManifest() (*manifestItem, error) {
func (s *Source) loadTarManifest() ([]ManifestItem, error) {
// FIXME? Do we need to deal with the legacy format?
bytes, err := s.readTarComponent(manifestFileName)
if err != nil {
return nil, err
}
var items []manifestItem
var items []ManifestItem
if err := json.Unmarshal(bytes, &items); err != nil {
return nil, errors.Wrap(err, "Error decoding tar manifest.json")
}
if len(items) != 1 {
return nil, errors.Errorf("Unexpected tar manifest.json: expected 1 item, got %d", len(items))
}
return &items[0], nil
return items, nil
}
func (s *Source) prepareLayerData(tarManifest *manifestItem, parsedConfig *image) (map[diffID]*layerInfo, error) {
// Close removes resources associated with an initialized Source, if any.
func (s *Source) Close() error {
if s.removeTarPathOnClose {
return os.Remove(s.tarPath)
}
return nil
}
// LoadTarManifest loads and decodes the manifest.json
func (s *Source) LoadTarManifest() ([]ManifestItem, error) {
return s.loadTarManifest()
}
func (s *Source) prepareLayerData(tarManifest *ManifestItem, parsedConfig *manifest.Schema2Image) (map[digest.Digest]*layerInfo, error) {
// Collect layer data available in manifest and config.
if len(tarManifest.Layers) != len(parsedConfig.RootFS.DiffIDs) {
return nil, errors.Errorf("Inconsistent layer count: %d in manifest, %d in config", len(tarManifest.Layers), len(parsedConfig.RootFS.DiffIDs))
}
knownLayers := map[diffID]*layerInfo{}
knownLayers := map[digest.Digest]*layerInfo{}
unknownLayerSizes := map[string]*layerInfo{} // Points into knownLayers, a "to do list" of items with unknown sizes.
for i, diffID := range parsedConfig.RootFS.DiffIDs {
if _, ok := knownLayers[diffID]; ok {
@@ -228,7 +307,25 @@ func (s *Source) prepareLayerData(tarManifest *manifestItem, parsedConfig *image
return nil, err
}
if li, ok := unknownLayerSizes[h.Name]; ok {
li.size = h.Size
// Since GetBlob will decompress layers that are compressed we need
// to do the decompression here as well, otherwise we will
// incorrectly report the size. Pretty critical, since tools like
// umoci always compress layer blobs. We only pay the cost of the slower
// full read when the blob is actually compressed.
uncompressedStream, isCompressed, err := compression.AutoDecompress(t)
if err != nil {
return nil, errors.Wrapf(err, "Error auto-decompressing %s to determine its size", h.Name)
}
defer uncompressedStream.Close()
uncompressedSize := h.Size
if isCompressed {
uncompressedSize, err = io.Copy(ioutil.Discard, uncompressedStream)
if err != nil {
return nil, errors.Wrapf(err, "Error reading %s to find its size", h.Name)
}
}
li.size = uncompressedSize
delete(unknownLayerSizes, h.Name)
}
}
@@ -241,28 +338,34 @@ func (s *Source) prepareLayerData(tarManifest *manifestItem, parsedConfig *image
// GetManifest returns the image's manifest along with its MIME type (which may be empty when it can't be determined but the manifest is available).
// It may use a remote (= slow) service.
func (s *Source) GetManifest() ([]byte, string, error) {
// If instanceDigest is not nil, it contains a digest of the specific manifest instance to retrieve (when the primary manifest is a manifest list);
// this never happens if the primary manifest is not a manifest list (e.g. if the source never returns manifest lists).
func (s *Source) GetManifest(ctx context.Context, instanceDigest *digest.Digest) ([]byte, string, error) {
if instanceDigest != nil {
// How did we even get here? GetManifest(ctx, nil) has returned a manifest.DockerV2Schema2MediaType.
return nil, "", errors.Errorf(`Manifest lists are not supported by "docker-daemon:"`)
}
if s.generatedManifest == nil {
if err := s.ensureCachedDataIsPresent(); err != nil {
return nil, "", err
}
m := schema2Manifest{
m := manifest.Schema2{
SchemaVersion: 2,
MediaType: manifest.DockerV2Schema2MediaType,
Config: distributionDescriptor{
ConfigDescriptor: manifest.Schema2Descriptor{
MediaType: manifest.DockerV2Schema2ConfigMediaType,
Size: int64(len(s.configBytes)),
Digest: s.configDigest,
},
Layers: []distributionDescriptor{},
LayersDescriptors: []manifest.Schema2Descriptor{},
}
for _, diffID := range s.orderedDiffIDList {
li, ok := s.knownLayers[diffID]
if !ok {
return nil, "", errors.Errorf("Internal inconsistency: Information about layer %s missing", diffID)
}
m.Layers = append(m.Layers, distributionDescriptor{
Digest: digest.Digest(diffID), // diffID is a digest of the uncompressed tarball
m.LayersDescriptors = append(m.LayersDescriptors, manifest.Schema2Descriptor{
Digest: diffID, // diffID is a digest of the uncompressed tarball
MediaType: manifest.DockerV2Schema2LayerMediaType,
Size: li.size,
})
@@ -276,27 +379,26 @@ func (s *Source) GetManifest() ([]byte, string, error) {
return s.generatedManifest, manifest.DockerV2Schema2MediaType, nil
}
// GetTargetManifest returns an image's manifest given a digest. This is mainly used to retrieve a single image's manifest
// out of a manifest list.
func (s *Source) GetTargetManifest(digest digest.Digest) ([]byte, string, error) {
// How did we even get here? GetManifest() above has returned a manifest.DockerV2Schema2MediaType.
return nil, "", errors.Errorf(`Manifest lists are not supported by "docker-daemon:"`)
}
type readCloseWrapper struct {
// uncompressedReadCloser is an io.ReadCloser that closes both the uncompressed stream and the underlying input.
type uncompressedReadCloser struct {
io.Reader
closeFunc func() error
underlyingCloser func() error
uncompressedCloser func() error
}
func (r readCloseWrapper) Close() error {
if r.closeFunc != nil {
return r.closeFunc()
func (r uncompressedReadCloser) Close() error {
var res error
if err := r.uncompressedCloser(); err != nil {
res = err
}
return nil
if err := r.underlyingCloser(); err != nil && res == nil {
res = err
}
return res
}
// GetBlob returns a stream for the specified blob, and the blobs size (or -1 if unknown).
func (s *Source) GetBlob(info types.BlobInfo) (io.ReadCloser, int64, error) {
func (s *Source) GetBlob(ctx context.Context, info types.BlobInfo) (io.ReadCloser, int64, error) {
if err := s.ensureCachedDataIsPresent(); err != nil {
return nil, 0, err
}
@@ -305,11 +407,17 @@ func (s *Source) GetBlob(info types.BlobInfo) (io.ReadCloser, int64, error) {
return ioutil.NopCloser(bytes.NewReader(s.configBytes)), int64(len(s.configBytes)), nil
}
if li, ok := s.knownLayers[diffID(info.Digest)]; ok { // diffID is a digest of the uncompressed tarball,
stream, err := s.openTarComponent(li.path)
if li, ok := s.knownLayers[info.Digest]; ok { // diffID is a digest of the uncompressed tarball,
underlyingStream, err := s.openTarComponent(li.path)
if err != nil {
return nil, 0, err
}
closeUnderlyingStream := true
defer func() {
if closeUnderlyingStream {
underlyingStream.Close()
}
}()
// In order to handle the fact that digests != diffIDs (and thus that a
// caller which is trying to verify the blob will run into problems),
@@ -323,22 +431,17 @@ func (s *Source) GetBlob(info types.BlobInfo) (io.ReadCloser, int64, error) {
// be verifying a "digest" which is not the actual layer's digest (but
// is instead the DiffID).
decompressFunc, reader, err := compression.DetectCompression(stream)
uncompressedStream, _, err := compression.AutoDecompress(underlyingStream)
if err != nil {
return nil, 0, errors.Wrapf(err, "Detecting compression in blob %s", info.Digest)
return nil, 0, errors.Wrapf(err, "Error auto-decompressing blob %s", info.Digest)
}
if decompressFunc != nil {
reader, err = decompressFunc(reader)
if err != nil {
return nil, 0, errors.Wrapf(err, "Decompressing blob %s stream", info.Digest)
}
}
newStream := readCloseWrapper{
Reader: reader,
closeFunc: stream.Close,
newStream := uncompressedReadCloser{
Reader: uncompressedStream,
underlyingCloser: underlyingStream.Close,
uncompressedCloser: uncompressedStream.Close,
}
closeUnderlyingStream = false
return newStream, li.size, nil
}
@@ -347,6 +450,13 @@ func (s *Source) GetBlob(info types.BlobInfo) (io.ReadCloser, int64, error) {
}
// GetSignatures returns the image's signatures. It may use a remote (= slow) service.
func (s *Source) GetSignatures() ([][]byte, error) {
// If instanceDigest is not nil, it contains a digest of the specific manifest instance to retrieve signatures for
// (when the primary manifest is a manifest list); this never happens if the primary manifest is not a manifest list
// (e.g. if the source never returns manifest lists).
func (s *Source) GetSignatures(ctx context.Context, instanceDigest *digest.Digest) ([][]byte, error) {
if instanceDigest != nil {
// How did we even get here? GetManifest(ctx, nil) has returned a manifest.DockerV2Schema2MediaType.
return nil, errors.Errorf(`Manifest lists are not supported by "docker-daemon:"`)
}
return [][]byte{}, nil
}

View File

@@ -1,53 +1,28 @@
package tarfile
import "github.com/opencontainers/go-digest"
import (
"github.com/containers/image/manifest"
"github.com/opencontainers/go-digest"
)
// Various data structures.
// Based on github.com/docker/docker/image/tarexport/tarexport.go
const (
manifestFileName = "manifest.json"
// legacyLayerFileName = "layer.tar"
// legacyConfigFileName = "json"
// legacyVersionFileName = "VERSION"
// legacyRepositoriesFileName = "repositories"
manifestFileName = "manifest.json"
legacyLayerFileName = "layer.tar"
legacyConfigFileName = "json"
legacyVersionFileName = "VERSION"
legacyRepositoriesFileName = "repositories"
)
type manifestItem struct {
// ManifestItem is an element of the array stored in the top-level manifest.json file.
type ManifestItem struct {
Config string
RepoTags []string
Layers []string
Parent imageID `json:",omitempty"`
LayerSources map[diffID]distributionDescriptor `json:",omitempty"`
Parent imageID `json:",omitempty"`
LayerSources map[digest.Digest]manifest.Schema2Descriptor `json:",omitempty"`
}
type imageID string
type diffID digest.Digest
// Based on github.com/docker/distribution/blobs.go
type distributionDescriptor struct {
MediaType string `json:"mediaType,omitempty"`
Size int64 `json:"size,omitempty"`
Digest digest.Digest `json:"digest,omitempty"`
URLs []string `json:"urls,omitempty"`
}
// Based on github.com/docker/distribution/manifest/schema2/manifest.go
// FIXME: We are repeating this all over the place; make a public copy?
type schema2Manifest struct {
SchemaVersion int `json:"schemaVersion"`
MediaType string `json:"mediaType,omitempty"`
Config distributionDescriptor `json:"config"`
Layers []distributionDescriptor `json:"layers"`
}
// Based on github.com/docker/docker/image/image.go
// MOST CONTENT OMITTED AS UNNECESSARY
type image struct {
RootFS *rootFS `json:"rootfs,omitempty"`
}
type rootFS struct {
Type string `json:"type"`
DiffIDs []diffID `json:"diff_ids,omitempty"`
}

View File

@@ -0,0 +1,66 @@
{
"title": "JSON embedded in an atomic container signature",
"description": "This schema is a supplement to atomic-signature.md in this directory.\n\nConsumers of the JSON MUST use the processing rules documented in atomic-signature.md, especially the requirements for the 'critical' subjobject.\n\nWhenever this schema and atomic-signature.md, or the github.com/containers/image/signature implementation, differ,\nit is the atomic-signature.md document, or the github.com/containers/image/signature implementation, which governs.\n\nUsers are STRONGLY RECOMMENDED to use the github.com/containeres/image/signature implementation instead of writing\ntheir own, ESPECIALLY when consuming signatures, so that the policy.json format can be shared by all image consumers.\n",
"type": "object",
"required": [
"critical",
"optional"
],
"additionalProperties": false,
"properties": {
"critical": {
"type": "object",
"required": [
"type",
"image",
"identity"
],
"additionalProperties": false,
"properties": {
"type": {
"type": "string",
"enum": [
"atomic container signature"
]
},
"image": {
"type": "object",
"required": [
"docker-manifest-digest"
],
"additionalProperties": false,
"properties": {
"docker-manifest-digest": {
"type": "string"
}
}
},
"identity": {
"type": "object",
"required": [
"docker-reference"
],
"additionalProperties": false,
"properties": {
"docker-reference": {
"type": "string"
}
}
}
}
},
"optional": {
"type": "object",
"description": "All members are optional, but if they are included, they must be valid.",
"additionalProperties": true,
"properties": {
"creator": {
"type": "string"
},
"timestamp": {
"type": "integer"
}
}
}
}
}

View File

@@ -0,0 +1,241 @@
% atomic-signature(5) Atomic signature format
% Miloslav Trmač
% March 2017
# Atomic signature format
This document describes the format of “atomic” container signatures,
as implemented by the `github.com/containers/image/signature` package.
Most users should be able to consume these signatures by using the `github.com/containers/image/signature` package
(preferably through the higher-level `signature.PolicyContext` interface)
without having to care about the details of the format described below.
This documentation exists primarily for maintainers of the package
and to allow independent reimplementations.
## High-level overview
The signature provides an end-to-end authenticated claim that a container image
has been approved by a specific party (e.g. the creator of the image as their work,
an automated build system as a result of an automated build,
a company IT department approving the image for production) under a specified _identity_
(e.g. an OS base image / specific application, with a specific version).
An atomic container signature consists of a cryptographic signature which identifies
and authenticates who signed the image, and carries as a signed payload a JSON document.
The JSON document identifies the image being signed, claims a specific identity of the
image and if applicable, contains other information about the image.
The signatures do not modify the container image (the layers, configuration, manifest, …);
e.g. their presence does not change the manifest digest used to identify the image in
docker/distribution servers; rather, the signatures are associated with an immutable image.
An image can have any number of signatures, so signature distribution systems SHOULD support
associating more than one signature with an image.
## The cryptographic signature
As distributed, the atomic container signature is a blob which contains a cryptographic signature
in an industry-standard format, carrying a signed JSON payload (i.e. the blob contains both the
JSON document and a signature of the JSON document; it is not a “detached signature” with
independent blobs containing the JSON document and a cryptographic signature).
Currently the only defined cryptographic signature format is an OpenPGP signature (RFC 4880),
but others may be added in the future. (The blob does not contain metadata identifying the
cryptographic signature format. It is expected that most formats are sufficiently self-describing
that this is not necessary and the configured expected public key provides another indication
of the expected cryptographic signature format. Such metadata may be added in the future for
newly added cryptographic signature formats, if necessary.)
Consumers of atomic container signatures SHOULD verify the cryptographic signature
against one or more trusted public keys
(e.g. defined in a [policy.json signature verification policy file](policy.json.md))
before parsing or processing the JSON payload in _any_ way,
in particular they SHOULD stop processing the container signature
if the cryptographic signature verification fails, without even starting to process the JSON payload.
(Consumers MAY extract identification of the signing key and other metadata from the cryptographic signature,
and the JSON payload, without verifying the signature, if the purpose is to allow managing the signature blobs,
e.g. to list the authors and image identities of signatures associated with a single container image;
if so, they SHOULD design the output of such processing to minimize the risk of users considering the output trusted
or in any way usable for making policy decisions about the image.)
### OpenPGP signature verification
When verifying a cryptographic signature in the OpenPGP format,
the consumer MUST verify at least the following aspects of the signature
(like the `github.com/containers/image/signature` package does):
- The blob MUST be a “Signed Message” as defined in RFC 4880 section 11.3.
(e.g. it MUST NOT be an unsigned “Literal Message”, or any other non-signature format).
- The signature MUST have been made by an expected key trusted for the purpose (and the specific container image).
- The signature MUST be correctly formed and pass the cryptographic validation.
- The signature MUST correctly authenticate the included JSON payload
(in particular, the parsing of the JSON payload MUST NOT start before the complete payload has been cryptographically authenticated).
- The signature MUST NOT be expired.
The consumer SHOULD have tests for its verification code which verify that signatures failing any of the above are rejected.
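As a non-normative sketch of the checks above in Go, assuming golang.org/x/crypto/openpgp (real consumers should use the `github.com/containers/image/signature` package instead); note that the payload must be read to EOF before SignatureError is meaningful, and that this sketch omits the expiration check required above:

```go
import (
	"bytes"
	"errors"
	"io/ioutil"

	"golang.org/x/crypto/openpgp"
)

func verifySignedMessage(blob []byte, keyring openpgp.EntityList) ([]byte, error) {
	md, err := openpgp.ReadMessage(bytes.NewReader(blob), keyring, nil, nil)
	if err != nil {
		return nil, err // not a Signed Message at all
	}
	// Authenticate the complete payload before anyone parses it.
	content, err := ioutil.ReadAll(md.UnverifiedBody)
	if err != nil {
		return nil, err
	}
	if md.SignatureError != nil {
		return nil, md.SignatureError // malformed, forged, or made by an unknown key
	}
	if !md.IsSigned || md.SignedBy == nil {
		return nil, errors.New("blob is not signed by a trusted key")
	}
	return content, nil
}
```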
## JSON processing and forward compatibility
The payload of the cryptographic signature is a JSON document (RFC 7159).
Consumers SHOULD parse it very strictly,
refusing any signature which violates the expected format (e.g. missing members, incorrect member types)
or can be interpreted ambiguously (e.g. a duplicated member in a JSON object).
Any violations of the JSON format or of other requirements in this document MAY be accepted if the JSON document can be recognized
to have been created by a known-incorrect implementation (see [`optional.creator`](#optionalcreator) below)
and if the semantics of the invalid document, as created by such an implementation, is clear.
The top-level value of the JSON document MUST be a JSON object with exactly two members, `critical` and `optional`,
each a JSON object.
The `critical` object MUST contain a `type` member identifying the document as an atomic container signature
(as defined [below](#criticaltype))
and signature consumers MUST reject signatures which do not have this member or in which this member does not have the expected value.
To ensure forward compatibility (allowing older signature consumers to correctly
accept or reject signatures created at a later date, with possible extensions to this format),
consumers MUST reject the signature if the `critical` object, or _any_ of its subobjects,
contain _any_ member or data value which is unrecognized, unsupported, invalid, or in any other way unexpected.
At a minimum, this includes unrecognized members in a JSON object, or incorrect types of expected members.
For the same reason, consumers SHOULD accept any members with unrecognized names in the `optional` object,
and MAY accept signatures where the object member is recognized but unsupported, or the value of the member is unsupported.
Consumers still SHOULD reject signatures where a member of an `optional` object is supported but the value is recognized as invalid.
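For illustration, a strict decoder in Go might look like the sketch below (not the library's actual implementation). Note that `encoding/json` does not detect duplicated object members, and this sketch does not descend into the `critical` subobjects, so a fully conforming consumer needs additional checks:
```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
)

type criticalSection struct {
	Type     string          `json:"type"`
	Image    json.RawMessage `json:"image"`
	Identity json.RawMessage `json:"identity"`
}

type signaturePayload struct {
	Critical criticalSection        `json:"critical"`
	Optional map[string]interface{} `json:"optional"`
}

// decodeStrict rejects unrecognized members in the top-level object and in
// critical, while (per the spec) tolerating unknown members in optional.
func decodeStrict(payload []byte) (*signaturePayload, error) {
	dec := json.NewDecoder(bytes.NewReader(payload))
	dec.DisallowUnknownFields() // unrecognized struct members are an error
	var s signaturePayload
	if err := dec.Decode(&s); err != nil {
		return nil, err
	}
	if s.Critical.Type != "atomic container signature" {
		return nil, fmt.Errorf("unexpected critical.type %q", s.Critical.Type)
	}
	return &s, nil
}

func main() {
	payload := []byte(`{"critical": {"type": "atomic container signature",
		"image": {}, "identity": {}}, "optional": {"note": "tolerated"}}`)
	if _, err := decodeStrict(payload); err != nil {
		fmt.Println("rejected:", err)
		return
	}
	fmt.Println("accepted")
}
```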
## JSON data format
An example of the full format follows, with detailed description below.
To reiterate, consumers of the signature SHOULD perform successful cryptographic verification,
and MUST reject unexpected data in the `critical` object, or in the top-level object, as described above.
```json
{
    "critical": {
        "type": "atomic container signature",
        "image": {
            "docker-manifest-digest": "sha256:817a12c32a39bbe394944ba49de563e085f1d3c5266eb8e9723256bc4448680e"
        },
        "identity": {
            "docker-reference": "docker.io/library/busybox:latest"
        }
    },
    "optional": {
        "creator": "some software package v1.0.1-35",
        "timestamp": 1483228800
    }
}
```
### `critical`
This MUST be a JSON object which contains data critical to correctly evaluating the validity of a signature.
Consumers MUST reject any signature where the `critical` object contains any unrecognized, unsupported, invalid or in any other way unexpected member or data.
### `critical.type`
This MUST be a JSON string with the value exactly equal to `atomic container signature` (three words, including the spaces).
Signature consumers MUST reject signatures which do not have this member, or in which this member does not have exactly the expected value.
(The consumers MAY support signatures with a different value of the `type` member, if any is defined in the future;
if so, the rest of the JSON document is interpreted according to rules defining that value of `critical.type`,
not by this document.)
### `critical.image`
This MUST be a JSON object which identifies the container image this signature applies to.
Consumers MUST reject any signature where the `critical.image` object contains any unrecognized, unsupported, invalid or in any other way unexpected member or data.
(Currently only the `docker-manifest-digest` way of identifying a container image is defined;
alternatives to this may be defined in the future,
but existing consumers are required to reject signatures which use formats they do not support.)
### `critical.image.docker-manifest-digest`
This MUST be a JSON string, in the `github.com/opencontainers/go-digest.Digest` string format.
The value of this member MUST match the manifest of the signed container image, as implemented in the docker/distribution manifest addressing system.
The consumer of the signature SHOULD verify the manifest digest against a fully verified signature before processing the contents of the image manifest in any other way
(e.g. parsing the manifest further or downloading layers of the image).
Implementation notes:
* A single container image manifest may have several valid manifest digest values, using different algorithms.
* For “signed” [docker/distribution schema 1](https://github.com/docker/distribution/blob/master/docs/spec/manifest-v2-1.md) manifests,
the manifest digest applies to the payload of the JSON web signature, not to the raw manifest blob.
### `critical.identity`
This MUST be a JSON object which identifies the claimed identity of the image (usually the purpose of the image, or the application, along with version information),
as asserted by the author of the signature.
Consumers MUST reject any signature where the `critical.identity` object contains any unrecognized, unsupported, invalid or in any other way unexpected member or data.
(Currently only the `docker-reference` way of claiming an image identity/purpose is defined;
alternatives to this may be defined in the future,
but existing consumers are required to reject signatures which use formats they do not support.)
### `critical.identity.docker-reference`
This MUST be a JSON string, in the `github.com/docker/distribution/reference` string format,
and using the same normalization semantics (where e.g. `busybox:latest` is equivalent to `docker.io/library/busybox:latest`).
If the normalization semantics allows multiple string representations of the claimed identity with equivalent meaning,
the `critical.identity.docker-reference` member SHOULD use the fully explicit form (including the full host name and namespaces).
The value of this member MUST match the image identity/purpose expected by the consumer of the image signature and the image
(again, accounting for the `docker/distribution/reference` normalization semantics).
In the most common case, this means that the `critical.identity.docker-reference` value must be equal to the docker/distribution reference used to refer to or download the image.
However, depending on the specific application, users or system administrators may accept less specific matches
(e.g. ignoring the tag value in the signature when pulling the `:latest` tag or when referencing an image by digest),
or they may require `critical.identity.docker-reference` values with a completely different namespace to the reference used to refer to/download the image
(e.g. requiring a `critical.identity.docker-reference` value which identifies the image as coming from a supplier when fetching it from a company-internal mirror of approved images).
The software performing this verification SHOULD allow the users to define such a policy using the [policy.json signature verification policy file format](policy.json.md).
The `critical.identity.docker-reference` value SHOULD contain either a tag or digest;
in most cases, it SHOULD use a tag rather than a digest. (See also the default [`matchRepoDigestOrExact` matching semantics in `policy.json`](policy.json.md#signedby).)
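For example, the normalization can be observed directly with the vendored reference package; a small sketch (the input is arbitrary):
```go
package main

import (
	"fmt"

	"github.com/containers/image/docker/reference"
)

func main() {
	// The short form expands to the fully explicit form that SHOULD be used
	// in critical.identity.docker-reference.
	named, err := reference.ParseNormalizedNamed("busybox:latest")
	if err != nil {
		panic(err)
	}
	fmt.Println(named.String()) // docker.io/library/busybox:latest
}
```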
### `optional`
This MUST be a JSON object.
Consumers SHOULD accept any members with unrecognized names in the `optional` object,
and MAY accept a signature where the object member is recognized but unsupported, or the value of the member is valid but unsupported.
Consumers still SHOULD reject any signature where a member of an `optional` object is supported but the value is recognized as invalid.
### `optional.creator`
If present, this MUST be a JSON string, identifying the name and version of the software which has created the signature.
The contents of this string are not defined in detail; however each implementation creating atomic container signatures:
- SHOULD define the contents to unambiguously define the software in practice (e.g. it SHOULD contain the name of the software, not only the version number)
- SHOULD use a build and versioning process which ensures that the contents of this string (e.g. an included version number)
changes whenever the format or semantics of the generated signature changes in any way;
it SHOULD NOT be possible for two implementations which use a different format or semantics to have the same `optional.creator` value
- SHOULD use a format which is reasonably easy to parse in software (perhaps using a regexp),
and which makes it easy enough to recognize a range of versions of a specific implementation
(e.g. the version of the implementation SHOULD NOT be only a git hash, because git hashes don't have an easily defined ordering;
the string should contain a version number, or at least a date of the commit).
Consumers of atomic container signatures MAY recognize specific values or sets of values of `optional.creator`
(perhaps augmented with `optional.timestamp`),
and MAY change their processing of the signature based on these values
(usually to accommodate violations of this specification in past versions of the signing software which cannot be fixed retroactively),
as long as the semantics of the invalid document, as created by such an implementation, is clear.
If consumers of signatures do change their behavior based on the `optional.creator` value,
they SHOULD take care that the way they process the signatures is not inconsistent with
strictly validating signature consumers.
(I.e. it is acceptable for a consumer to accept a signature based on a specific `optional.creator` value
if other implementations would completely reject the signature,
but it would be very undesirable for the two kinds of implementations to accept the signature in different
and inconsistent situations.)
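As a hypothetical illustration of such special-casing, a consumer might recognize known-broken versions of a signer with a sketch like the following; the creator string, the version cutoff, and the bug itself are invented for the example:
```go
package main

import (
	"fmt"
	"regexp"
	"strconv"
)

// creatorRE matches the hypothetical creator string "some software package vX.Y.Z".
var creatorRE = regexp.MustCompile(`^some software package v(\d+)\.(\d+)\.`)

// hasKnownCreatorBug reports whether the signature was created by versions
// older than the (invented) 1.2 release, which we assume emitted one
// known-invalid field that a consumer may decide to tolerate.
func hasKnownCreatorBug(creator string) bool {
	m := creatorRE.FindStringSubmatch(creator)
	if m == nil {
		return false
	}
	major, _ := strconv.Atoi(m[1])
	minor, _ := strconv.Atoi(m[2])
	return major < 1 || (major == 1 && minor < 2)
}

func main() {
	fmt.Println(hasKnownCreatorBug("some software package v1.0.1-35")) // true
	fmt.Println(hasKnownCreatorBug("some software package v1.2.0"))    // false
}
```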
### `optional.timestamp`
If present, this MUST be a JSON number, which is representable as a 64-bit integer, and identifies the time when the signature was created
as the number of seconds since the UNIX epoch (Jan 1 1970 00:00 UTC).

View File

@@ -0,0 +1,283 @@
% CONTAINERS-POLICY.JSON(5) policy.json Man Page
% Miloslav Trmač
% September 2016
# NAME
containers-policy.json - syntax for the signature verification policy file
## DESCRIPTION
Signature verification policy files are used to specify policy, e.g. trusted keys,
applicable when deciding whether to accept an image, or individual signatures of that image, as valid.
The default policy is stored (unless overridden at compile-time) at `/etc/containers/policy.json`;
applications performing verification may allow using a different policy instead.
## FORMAT
The signature verification policy file, usually called `policy.json`,
uses a JSON format. Unlike some other JSON files, its parsing is fairly strict:
unrecognized, duplicated or otherwise invalid fields cause the entire file,
and usually the entire operation, to be rejected.
The purpose of the policy file is to define a set of *policy requirements* for a container image,
usually depending on its location (where it is being pulled from) or otherwise defined identity.
Policy requirements can be defined for:
- An individual *scope* in a *transport*.
The *transport* values are the same as the transport prefixes when pushing/pulling images (e.g. `docker:`, `atomic:`),
and *scope* values are defined by each transport; see below for more details.
Usually, a scope can be defined to match a single image, and various prefixes of
such a most specific scope define namespaces of matching images.
- A default policy for a single transport, expressed using an empty string as a scope
- A global default policy.
If multiple policy requirements match a given image, only the requirements from the most specific match apply;
the more general policy requirements definitions are ignored.
This is expressed in JSON using the top-level syntax
```js
{
    "default": [/* policy requirements: global default */],
    "transports": {
        transport_name: {
            "": [/* policy requirements: default for transport $transport_name */],
            scope_1: [/* policy requirements: default for $scope_1 in $transport_name */],
            scope_2: [/*…*/]
            /*…*/
        },
        transport_name_2: {/*…*/}
        /*…*/
    }
}
```
The global `default` set of policy requirements is mandatory; all of the other fields
(`transports` itself, any specific transport, the transport-specific default, etc.) are optional.
<!-- NOTE: Keep this in sync with transports/transports.go! -->
## Supported transports and their scopes
### `atomic:`
The `atomic:` transport refers to images in an Atomic Registry.
Supported scopes use the form _hostname_[`:`_port_][`/`_namespace_[`/`_imagestream_ [`:`_tag_]]],
i.e. either specifying a complete name of a tagged image, or a prefix denoting
a host/namespace/image stream.
*Note:* The _hostname_ and _port_ refer to the Docker registry host and port (the one used
e.g. for `docker pull`), _not_ to the OpenShift API host and port.
### `dir:`
The `dir:` transport refers to images stored in local directories.
Supported scopes are paths of directories (either containing a single image or
subdirectories possibly containing images).
*Note:* The paths must be absolute and contain no symlinks. Paths violating these requirements may be silently ignored.
The top-level scope `"/"` is forbidden; use the transport default scope `""`,
for consistency with other transports.
### `docker:`
The `docker:` transport refers to images in a registry implementing the "Docker Registry HTTP API V2".
Scopes matching individual images are named Docker references *in the fully expanded form*, either
using a tag or digest. For example, `docker.io/library/busybox:latest` (*not* `busybox:latest`).
More general scopes are prefixes of individual-image scopes, and specify a repository (by omitting the tag or digest),
a repository namespace, or a registry host (by only specifying the host name).
### `oci:`
The `oci:` transport refers to images in directories compliant with "Open Container Image Layout Specification".
Supported scopes use the form _directory_`:`_tag_, or _directory_ (referring to
a directory containing one or more tags), or any of the parent directories.
*Note:* See `dir:` above for semantics and restrictions on the directory paths; they apply to `oci:` equivalently.
### `tarball:`
The `tarball:` transport refers to tarred up container root filesystems.
Scopes are ignored.
## Policy Requirements
Using the mechanisms above, a set of policy requirements is looked up. The policy requirements
are represented as a JSON array of individual requirement objects. For an image to be accepted,
*all* of the requirements must be satisfied simultaneously.
The policy requirements can also be used to decide whether an individual signature is accepted (= is signed by a recognized key of a known author);
in that case some requirements may apply only to some signatures, but each signature must be accepted by *at least one* requirement object.
The following requirement objects are supported:
### `insecureAcceptAnything`
A simple requirement with the following syntax
```json
{"type":"insecureAcceptAnything"}
```
This requirement accepts any image (but note that other requirements in the array still apply).
When deciding whether to accept an individual signature, this requirement has no effect; in particular, it does *not* cause the signature to be accepted.
This is useful primarily for policy scopes where no signature verification is required;
because the array of policy requirements must not be empty, this requirement is used
to represent the lack of requirements explicitly.
### `reject`
A simple requirement with the following syntax:
```json
{"type":"reject"}
```
This requirement rejects every image, and every signature.
### `signedBy`
This requirement requires an image to be signed with an expected identity, or accepts a signature if it is using an expected identity and key.
```js
{
    "type":    "signedBy",
    "keyType": "GPGKeys", /* The only currently supported value */
    "keyPath": "/path/to/local/keyring/file",
    "keyData": "base64-encoded-keyring-data",
    "signedIdentity": identity_requirement
}
```
<!-- Later: other keyType values -->
Exactly one of `keyPath` and `keyData` must be present, containing a GPG keyring of one or more public keys. Only signatures made by these keys are accepted.
The `signedIdentity` field, a JSON object, specifies what image identity the signature claims about the image.
One of the following alternatives is supported:
- The identity in the signature must exactly match the image identity. Note that with this, referencing an image by digest (with a signature claiming a _repository_`:`_tag_ identity) will fail.
```json
{"type":"matchExact"}
```
- If the image identity carries a tag, the identity in the signature must exactly match;
if the image identity uses a digest reference, the identity in the signature must be in the same repository as the image identity (using any tag).
(Note that with images identified using digest references, the digest from the reference is validated even before signature verification starts.)
```json
{"type":"matchRepoDigestOrExact"}
```
- The identity in the signature must be in the same repository as the image identity. This is useful e.g. to pull an image using the `:latest` tag when the image is signed with a tag specifying an exact image version.
```json
{"type":"matchRepository"}
```
- The identity in the signature must exactly match a specified identity.
This is useful e.g. when locally mirroring images signed using their public identity.
```js
{
    "type": "exactReference",
    "dockerReference": docker_reference_value
}
```
- The identity in the signature must be in the same repository as a specified identity.
This combines the properties of `matchRepository` and `exactReference`.
```js
{
    "type": "exactRepository",
    "dockerRepository": docker_repository_value
}
```
If the `signedIdentity` field is missing, it is treated as `matchRepoDigestOrExact`.
*Note*: `matchExact`, `matchRepoDigestOrExact` and `matchRepository` can only be used if a Docker-like image identity is
provided by the transport. In particular, the `dir:` and `oci:` transports can only be
used with `exactReference` or `exactRepository`.
<!-- ### `signedBaseLayer` -->
## Examples
It is *strongly* recommended to set the `default` policy to `reject`, and then
selectively allow individual transports and scopes as desired.
### A reasonably locked-down system
(Note that the `/*`…`*/` comments are not valid in JSON, and must not be used in real policies.)
```js
{
    "default": [{"type": "reject"}], /* Reject anything not explicitly allowed */
    "transports": {
        "docker": {
            /* Allow installing images from a specific repository namespace, without cryptographic verification.
               This namespace includes images like openshift/hello-openshift and openshift/origin. */
            "docker.io/openshift": [{"type": "insecureAcceptAnything"}],
            /* Similarly, allow installing the “official” busybox images. Note how the fully expanded
               form, with the explicit /library/, must be used. */
            "docker.io/library/busybox": [{"type": "insecureAcceptAnything"}]
            /* Other docker: images use the global default policy and are rejected */
        },
        "dir": {
            "": [{"type": "insecureAcceptAnything"}] /* Allow any images originating in local directories */
        },
        "atomic": {
            /* The common case: using a known key for a repository or set of repositories */
            "hostname:5000/myns/official": [
                {
                    "type": "signedBy",
                    "keyType": "GPGKeys",
                    "keyPath": "/path/to/official-pubkey.gpg"
                }
            ],
            /* A more complex example, for a repository which contains a mirror of a third-party product,
               which must be signed-off by local IT */
            "hostname:5000/vendor/product": [
                { /* Require the image to be signed by the original vendor, using the vendor's repository location. */
                    "type": "signedBy",
                    "keyType": "GPGKeys",
                    "keyPath": "/path/to/vendor-pubkey.gpg",
                    "signedIdentity": {
                        "type": "exactRepository",
                        "dockerRepository": "vendor-hostname/product/repository"
                    }
                },
                { /* Require the image to _also_ be signed by a local reviewer. */
                    "type": "signedBy",
                    "keyType": "GPGKeys",
                    "keyPath": "/path/to/reviewer-pubkey.gpg"
                }
            ]
        }
    }
}
```
### Completely disable security, allow all images, do not trust any signatures
```json
{
    "default": [{"type": "insecureAcceptAnything"}]
}
```
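Applications typically load and enforce this file through the `github.com/containers/image/signature` package; a minimal loading sketch, assuming the default path:
```go
package main

import (
	"log"

	"github.com/containers/image/signature"
)

func main() {
	policy, err := signature.NewPolicyFromFile("/etc/containers/policy.json")
	if err != nil {
		log.Fatal(err) // invalid or unrecognized fields reject the whole file
	}
	pc, err := signature.NewPolicyContext(policy)
	if err != nil {
		log.Fatal(err)
	}
	// The PolicyContext is then used to evaluate images, e.g. via its
	// IsRunningImageAllowed method, or indirectly through copy.Image.
	defer pc.Destroy()
	log.Println("policy loaded")
}
```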
## SEE ALSO
atomic(1)
## HISTORY
August 2018, Rename to containers-policy.json(5) by Valentin Rothberg <vrothberg@suse.com>
September 2016, Originally compiled by Miloslav Trmač <mitr@redhat.com>

View File

@@ -0,0 +1,56 @@
% CONTAINERS-REGISTRIES.CONF(5) System-wide registry configuration file
% Brent Baude
% Aug 2017
# NAME
containers-registries.conf - Syntax of System Registry Configuration File
# DESCRIPTION
The CONTAINERS-REGISTRIES configuration file is a system-wide configuration
file for container image registries. The file format is TOML. The valid
categories are: 'registries.search', 'registries.insecure', and
'registries.block'.
By default, the configuration file is located at `/etc/containers/registries.conf`.
# FORMAT
The TOML format is used to build a simple list format for registries under three
categories: `registries.search`, `registries.insecure`, and `registries.block`.
You can list multiple registries using a comma separated list.
Search registries are used when the caller of a container runtime does not fully specify the
container image that they want to execute. The names of these registries are prepended to the
specified container image name until the named image is found at one of the registries.
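As an illustration only (not the code of any particular runtime), the resolution might look like:
```go
package main

import (
	"fmt"
	"strings"
)

// resolveShortName is an illustrative sketch of expanding an unqualified
// image name using the registries.search list. The exists callback stands in
// for "the named image is found at a registry".
func resolveShortName(search []string, name string, exists func(string) bool) (string, bool) {
	first := strings.SplitN(name, "/", 2)[0]
	if strings.ContainsAny(first, ".:") || first == "localhost" {
		return name, true // already qualified with a registry host
	}
	for _, reg := range search {
		if candidate := reg + "/" + name; exists(candidate) {
			return candidate, true
		}
	}
	return "", false // not found in any search registry
}

func main() {
	search := []string{"registry1.com", "registry2.com"}
	exists := func(s string) bool { return strings.HasPrefix(s, "registry2.com/") }
	fmt.Println(resolveShortName(search, "myapp:latest", exists))
}
```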
Insecure Registries. By default, container runtimes use TLS when retrieving images
from a registry. If the registry is not set up with TLS, then the container runtime
will fail to pull images from the registry. If you add the registry to the list of
insecure registries, then the container runtime will attempt to use standard web protocols to
pull the image. It also allows you to pull from a registry with self-signed certificates.
Note that insecure registries can be specified for any registry, not just the registries listed
under search.
Block Registries. The registries in this category are not pulled from when
retrieving images.
# EXAMPLE
The following example configuration defines two searchable registries, one
insecure registry, and two blocked registries.
```
[registries.search]
registries = ['registry1.com', 'registry2.com']

[registries.insecure]
registries = ['registry3.com']

[registries.block]
registries = ['registry.untrusted.com', 'registry.unsafe.com']
```
# HISTORY
Aug 2018, Renamed to containers-registries.conf(5) by Valentin Rothberg <vrothberg@suse.com>
Jun 2018, Updated by Tom Sweeney <tsweeney@redhat.com>
Aug 2017, Originally compiled by Brent Baude <bbaude@redhat.com>

View File

@@ -0,0 +1,128 @@
% CONTAINERS-REGISTRIES.D(5) Registries.d Man Page
% Miloslav Trmač
% August 2016
# NAME
containers-registries.d - Directory for various registries configurations
# DESCRIPTION
The registries configuration directory contains configuration for various registries
(servers storing remote container images), and for content stored in them,
so that the configuration does not have to be provided in command-line options over and over for every command,
and so that it can be shared by all users of containers/image.
By default (unless overridden at compile-time), the registries configuration directory is `/etc/containers/registries.d`;
applications may allow using a different directory instead.
## Directory Structure
The directory may contain any number of files with the extension `.yaml`,
each using the YAML format. Other than the mandatory extension, names of the files
don't matter.
The contents of these files are merged together; to have a well-defined and easy to understand
behavior, there can be only one configuration section describing a single namespace within a registry
(in particular there can be at most one `default-docker` section across all files,
and there can be at most one instance of any key under the `docker` section;
these sections are documented later).
Thus, it is forbidden to have two conflicting configurations for a single registry or scope,
and it is also forbidden to split a configuration for a single registry or scope across
more than one file (even if they are not semantically in conflict).
## Registries, Scopes and Search Order
Each YAML file must contain a “YAML mapping” (key-value pairs). Two top-level keys are defined:
- `default-docker` is the _configuration section_ (as documented below)
for registries implementing "Docker Registry HTTP API V2".
This key is optional.
- `docker` is a mapping, using individual registries implementing "Docker Registry HTTP API V2",
or namespaces and individual images within these registries, as keys;
the value assigned to any such key is a _configuration section_.
This key is optional.
Scopes matching individual images are named Docker references *in the fully expanded form*, either
using a tag or digest. For example, `docker.io/library/busybox:latest` (*not* `busybox:latest`).
More general scopes are prefixes of individual-image scopes, and specify a repository (by omitting the tag or digest),
a repository namespace, or a registry host (and a port if it differs from the default).
Note that if a registry is accessed using a hostname+port configuration, the port-less hostname
is _not_ used as parent scope.
When searching for a configuration to apply for an individual container image, only
the configuration for the most-precisely matching scope is used; configuration using
more general scopes is ignored. For example, if _any_ configuration exists for
`docker.io/library/busybox`, the configuration for `docker.io` is ignored
(even if some element of the configuration is defined for `docker.io` and not for `docker.io/library/busybox`).
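The following sketch illustrates this most-specific-match lookup; it is illustrative only, not the library's implementation:
```go
package main

import (
	"fmt"
	"strings"
)

// mostSpecificScope is an illustrative sketch of the lookup described above:
// try the fully expanded reference first, then the repository, then
// successively shorter parent namespaces, ending with the registry host
// (keeping any port; the port-less hostname is deliberately never tried).
func mostSpecificScope(configured map[string]bool, ref string) (string, bool) {
	candidates := []string{ref}
	repo := ref
	if i := strings.IndexRune(repo, '@'); i >= 0 { // strip a digest
		repo = repo[:i]
	}
	if i := strings.LastIndex(repo, "/"); i >= 0 { // strip a tag: ':' after the last '/'
		if j := strings.IndexRune(repo[i:], ':'); j >= 0 {
			repo = repo[:i+j]
		}
	}
	for s := repo; ; {
		if s != ref { // avoid listing the full reference twice
			candidates = append(candidates, s)
		}
		i := strings.LastIndex(s, "/")
		if i < 0 {
			break // registry host reached
		}
		s = s[:i]
	}
	for _, c := range candidates {
		if configured[c] {
			return c, true
		}
	}
	return "", false // no match: fall back to default-docker, if any
}

func main() {
	cfg := map[string]bool{
		"docker.io/library/busybox": true,
		"docker.io":                 true,
	}
	fmt.Println(mostSpecificScope(cfg, "docker.io/library/busybox:latest"))
}
```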
## Individual Configuration Sections
A single configuration section is selected for a container image using the process
described above. The configuration section is a YAML mapping, with the following keys:
- `sigstore-staging` defines a URL of the signature storage, used for editing it (adding or deleting signatures).
This key is optional; if it is missing, `sigstore` below is used.
- `sigstore` defines a URL of the signature storage.
This URL is used for reading existing signatures,
and if `sigstore-staging` does not exist, also for adding or removing them.
This key is optional; if it is missing, no signature storage is defined (no signatures
are downloaded along with images, and adding new signatures is possible only if `sigstore-staging` is defined).
## Examples
### Using Containers from Various Origins
The following demonstrates how to consume and run images from various registries and namespaces:
```yaml
docker:
    registry.database-supplier.com:
        sigstore: https://sigstore.database-supplier.com
    distribution.great-middleware.org:
        sigstore: https://security-team.great-middleware.org/sigstore
    docker.io/web-framework:
        sigstore: https://sigstore.web-framework.io:8080
```
### Developing and Signing Containers, Staging Signatures
For developers in `example.com`:
- Consume most container images using the public servers also used by clients.
- Use a separate signature storage for container images in a namespace corresponding to the developers' department, with a staging storage used before publishing signatures.
- Craft an individual exception for a single branch a specific developer is working on locally.
```yaml
docker:
    registry.example.com:
        sigstore: https://registry-sigstore.example.com
    registry.example.com/mydepartment:
        sigstore: https://sigstore.mydepartment.example.com
        sigstore-staging: file:///mnt/mydepartment/sigstore-staging
    registry.example.com/mydepartment/myproject:mybranch:
        sigstore: http://localhost:4242/sigstore
        sigstore-staging: file:///home/useraccount/webroot/sigstore
```
### A Global Default
If a company publishes its products using a different domain, and a different registry hostname for each of them, it is still possible to use a single signature storage server
without listing each domain individually. This is expected to happen rarely, usually only for staging new signatures.
```yaml
default-docker:
    sigstore-staging: file:///mnt/company/common-sigstore-staging
```
# AUTHORS
Miloslav Trmač <mitr@redhat.com>

View File

@@ -0,0 +1,136 @@
# Signature access protocols
The `github.com/containers/image` library supports signatures implemented as blobs “attached to” an image.
Some image transports (local storage formats and remote protocols) implement these signatures natively
or trivially; for others, the protocol extensions described below are necessary.
## docker/distribution registries—separate storage
### Usage
Any existing docker/distribution registry, whether or not it natively supports signatures,
can be augmented with separate signature storage by configuring a signature storage URL in [`registries.d`](registries.d.md).
`registries.d` can be configured to use one storage URL for a whole docker/distribution server,
or also separate URLs for smaller namespaces or individual repositories within the server
(which e.g. allows image authors to manage their own signature storage while publishing
the images on the public `docker.io` server).
The signature storage URL defines a root of a path hierarchy.
It can be either a `file:///…` URL, pointing to a local directory structure,
or a `http`/`https` URL, pointing to a remote server.
`file:///` signature storage can be both read and written, `http`/`https` only supports reading.
The same path hierarchy is used in both cases, so the HTTP/HTTPS server can be
a simple static web server serving a directory structure created by writing to a `file:///` signature storage.
(This of course does not prevent other server implementations,
e.g. an HTTP server reading signatures from a database.)
The usual workflow for producing and distributing images using the separate storage mechanism
is to configure the repository in `registries.d` with a `sigstore-staging` URL pointing to a private
`file:///` staging area, and a `sigstore` URL pointing to a public web server.
To publish an image, the image author would sign the image as necessary (e.g. using `skopeo copy`),
and then copy the created directory structure from the `file:///` staging area
to a subdirectory of a webroot of the public web server so that they are accessible using the public `sigstore` URL.
The author would also instruct consumers of the image to, or provide a `registries.d` configuration file to,
set up a `sigstore` URL pointing to the public web server.
### Path structure
Given a _base_ signature storage URL configured in `registries.d` as mentioned above,
and a container image stored in a docker/distribution registry using the _fully-expanded_ name
_hostname_`/`_namespaces_`/`_name_{`@`_digest_,`:`_tag_} (e.g. for `docker.io/library/busybox:latest`,
_namespaces_ is `library`, even if the user refers to the image using the shorter syntax as `busybox:latest`),
signatures are accessed using URLs of the form
> _base_`/`_namespaces_`/`_name_`@`_digest-algo_`=`_digest-value_`/signature-`_index_
where _digest-algo_`:`_digest-value_ is a manifest digest usable for referencing the relevant image manifest
(i.e. even if the user referenced the image using a tag,
the signature storage is always disambiguated using digest references).
Note that in the URLs used for signatures,
_digest-algo_ and _digest-value_ are separated using the `=` character,
not `:` as when accessing the manifest using the docker/distribution API.
Within the URL, _index_ is a decimal integer (in the canonical form), starting with 1.
Signatures are stored at URLs with successive _index_ values; to read all of them, start with _index_=1,
and continue reading signatures and increasing _index_ as long as signatures with these _index_ values exist.
Similarly, to add one more signature to an image, find the first _index_ which does not exist, and
then store the new signature using that _index_ value.
There is no way to list existing signatures other than iterating through the successive _index_ values,
and no way to download all of the signatures at once.
### Examples
For a docker/distribution image available as `busybox@sha256:817a12c32a39bbe394944ba49de563e085f1d3c5266eb8e9723256bc4448680e`
(or as `busybox:latest` if the `latest` tag points to a manifest with the same digest),
and with a `registries.d` configuration specifying a `sigstore` URL `https://example.com/sigstore` for the same image,
the following URLs would be accessed to download all signatures:
> - `https://example.com/sigstore/library/busybox@sha256=817a12c32a39bbe394944ba49de563e085f1d3c5266eb8e9723256bc4448680e/signature-1`
> - `https://example.com/sigstore/library/busybox@sha256=817a12c32a39bbe394944ba49de563e085f1d3c5266eb8e9723256bc4448680e/signature-2`
> - …
For a docker/distribution image available as `example.com/ns1/ns2/ns3/repo@somedigest:digestvalue` and the same
`sigstore` URL, the signatures would be available at
> `https://example.com/sigstore/ns1/ns2/ns3/repo@somedigest=digestvalue/signature-1`
and so on.
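A sketch of the read side of this protocol, reusing the hypothetical values from the examples above (illustrative only, not the library code):
```go
package main

import (
	"fmt"
	"io/ioutil"
	"log"
	"net/http"
	"strings"
)

// fetchAllSignatures requests signature-1, signature-2, … until an index
// does not exist, as described in the path-structure section above.
func fetchAllSignatures(base, repo, digest string) ([][]byte, error) {
	var sigs [][]byte
	for index := 1; ; index++ {
		// In signature URLs the digest uses '=' instead of ':'.
		url := fmt.Sprintf("%s/%s@%s/signature-%d",
			base, repo, strings.Replace(digest, ":", "=", 1), index)
		resp, err := http.Get(url)
		if err != nil {
			return nil, err
		}
		if resp.StatusCode == http.StatusNotFound {
			resp.Body.Close()
			return sigs, nil // first missing index: no more signatures
		}
		if resp.StatusCode != http.StatusOK {
			resp.Body.Close()
			return nil, fmt.Errorf("unexpected status %s for %s", resp.Status, url)
		}
		body, err := ioutil.ReadAll(resp.Body)
		resp.Body.Close()
		if err != nil {
			return nil, err
		}
		sigs = append(sigs, body)
	}
}

func main() {
	sigs, err := fetchAllSignatures("https://example.com/sigstore", "library/busybox",
		"sha256:817a12c32a39bbe394944ba49de563e085f1d3c5266eb8e9723256bc4448680e")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(len(sigs), "signatures downloaded")
}
```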
## (OpenShift) docker/distribution API extension
As of https://github.com/openshift/origin/pull/12504/ , the OpenShift-embedded registry also provides
an extension of the docker/distribution API which allows simpler access to the signatures,
using only the docker/distribution API endpoint.
This API is not inherently OpenShift-specific (e.g. the client does not need to know the OpenShift API endpoint,
and credentials sufficient to access the docker/distribution API server are sufficient to access signatures as well),
and it is the preferred way to implement signature storage in registries.
See https://github.com/openshift/openshift-docs/pull/3556 for the upstream documentation of the API.
To read the signature, any user with access to an image can use the `/extensions/v2/…/signatures/…`
path to read an array of signatures. Use only the signature objects
which have `version` equal to `2`, `type` equal to `atomic`, and read the signature from `content`;
ignore the other fields of the signature object.
To add a single signature, `PUT` a new object with `version` set to `2`, `type` set to `atomic`,
and `content` set to the signature. Also set `name` to a unique name with the form
_digest_`@`_per-image-name_, where _digest_ is an image manifest digest (also used in the URL),
and _per-image-name_ is any unique identifier.
To add more than one signature, add them one at a time. This API does not allow deleting signatures.
Note that because signatures are stored within the cluster-wide image objects,
i.e. different namespaces can not associate different sets of signatures to the same image,
updating signatures requires a cluster-wide access to the `imagesignatures` resource
(by default available to the `system:image-signer` role).
## OpenShift-embedded registries
The OpenShift-embedded registry implements the ordinary docker/distribution API,
and it also exposes images through the OpenShift REST API (available through the “API master” servers).
Note: OpenShift versions 1.5 and later support the above-described [docker/distribution API extension](#openshift-dockerdistribution-api-extension),
which is easier to set up and should usually be preferred.
Continue reading for details on using older versions of OpenShift.
As of https://github.com/openshift/origin/pull/9181,
signatures are exposed through the OpenShift API
(i.e. to access the complete image, it is necessary to use both APIs,
in particular to know the URLs for both the docker/distribution and the OpenShift API master endpoints).
To read the signature, any user with access to an image can use the `imagestreamimages` namespaced
resource to read an `Image` object and its `Signatures` array. Use only the `ImageSignature` objects
which have `Type` equal to `atomic`, and read the signature from `Content`; ignore the other fields of
the `ImageSignature` object.
To add or remove signatures, use the cluster-wide (non-namespaced) `imagesignatures` resource,
with `Type` set to `atomic` and `Content` set to the signature. Signature names must have the form
_digest_`@`_per-image-name_, where _digest_ is an image manifest digest (OpenShift “image name”),
and _per-image-name_ is any unique identifier.
Note that because signatures are stored within the cluster-wide image objects,
i.e. different namespaces can not associate different sets of signatures to the same image,
updating signatures requires a cluster-wide access to the `imagesignatures` resource
(by default available to the `system:image-signer` role),
and deleting signatures is strongly discouraged
(it deletes the signature from all namespaces which contain the same image).

View File

@@ -1,7 +1,9 @@
package image
import (
"context"
"encoding/json"
"fmt"
"runtime"
"github.com/containers/image/manifest"
@@ -16,12 +18,12 @@ type platformSpec struct {
OSVersion string `json:"os.version,omitempty"`
OSFeatures []string `json:"os.features,omitempty"`
Variant string `json:"variant,omitempty"`
Features []string `json:"features,omitempty"`
Features []string `json:"features,omitempty"` // removed in OCI
}
// A manifestDescriptor references a platform-specific manifest.
type manifestDescriptor struct {
descriptor
manifest.Schema2Descriptor
Platform platformSpec `json:"platform"`
}
@@ -31,22 +33,36 @@ type manifestList struct {
Manifests []manifestDescriptor `json:"manifests"`
}
func manifestSchema2FromManifestList(src types.ImageSource, manblob []byte) (genericManifest, error) {
list := manifestList{}
if err := json.Unmarshal(manblob, &list); err != nil {
return nil, err
// chooseDigestFromManifestList parses blob as a schema2 manifest list,
// and returns the digest of the image appropriate for the current environment.
func chooseDigestFromManifestList(sys *types.SystemContext, blob []byte) (digest.Digest, error) {
wantedArch := runtime.GOARCH
if sys != nil && sys.ArchitectureChoice != "" {
wantedArch = sys.ArchitectureChoice
}
wantedOS := runtime.GOOS
if sys != nil && sys.OSChoice != "" {
wantedOS = sys.OSChoice
}
list := manifestList{}
if err := json.Unmarshal(blob, &list); err != nil {
return "", err
}
var targetManifestDigest digest.Digest
for _, d := range list.Manifests {
if d.Platform.Architecture == runtime.GOARCH && d.Platform.OS == runtime.GOOS {
targetManifestDigest = d.Digest
break
if d.Platform.Architecture == wantedArch && d.Platform.OS == wantedOS {
return d.Digest, nil
}
}
if targetManifestDigest == "" {
return nil, errors.New("no supported platform found in manifest list")
return "", fmt.Errorf("no image found in manifest list for architecture %s, OS %s", wantedArch, wantedOS)
}
func manifestSchema2FromManifestList(ctx context.Context, sys *types.SystemContext, src types.ImageSource, manblob []byte) (genericManifest, error) {
targetManifestDigest, err := chooseDigestFromManifestList(sys, manblob)
if err != nil {
return nil, err
}
manblob, mt, err := src.GetTargetManifest(targetManifestDigest)
manblob, mt, err := src.GetManifest(ctx, &targetManifestDigest)
if err != nil {
return nil, err
}
@@ -59,5 +75,20 @@ func manifestSchema2FromManifestList(src types.ImageSource, manblob []byte) (gen
return nil, errors.Errorf("Manifest image does not match selected manifest digest %s", targetManifestDigest)
}
return manifestInstanceFromBlob(src, manblob, mt)
return manifestInstanceFromBlob(ctx, sys, src, manblob, mt)
}
// ChooseManifestInstanceFromManifestList returns a digest of a manifest appropriate
// for the current system from the manifest available from src.
func ChooseManifestInstanceFromManifestList(ctx context.Context, sys *types.SystemContext, src types.UnparsedImage) (digest.Digest, error) {
// For now this only handles manifest.DockerV2ListMediaType; we can generalize it later,
// probably along with manifest list editing.
blob, mt, err := src.Manifest(ctx)
if err != nil {
return "", err
}
if mt != manifest.DockerV2ListMediaType {
return "", fmt.Errorf("Internal error: Trying to select an image from a non-manifest-list manifest type %s", mt)
}
return chooseDigestFromManifestList(sys, blob)
}

View File

@@ -1,10 +1,7 @@
package image
import (
"encoding/json"
"regexp"
"strings"
"time"
"context"
"github.com/containers/image/docker/reference"
"github.com/containers/image/manifest"
@@ -14,87 +11,29 @@ import (
"github.com/pkg/errors"
)
var (
validHex = regexp.MustCompile(`^([a-f0-9]{64})$`)
)
type fsLayersSchema1 struct {
BlobSum digest.Digest `json:"blobSum"`
}
type historySchema1 struct {
V1Compatibility string `json:"v1Compatibility"`
}
// historySchema1 is a string containing this. It is similar to v1Image but not the same, in particular note the ThrowAway field.
type v1Compatibility struct {
ID string `json:"id"`
Parent string `json:"parent,omitempty"`
Comment string `json:"comment,omitempty"`
Created time.Time `json:"created"`
ContainerConfig struct {
Cmd []string
} `json:"container_config,omitempty"`
Author string `json:"author,omitempty"`
ThrowAway bool `json:"throwaway,omitempty"`
}
type manifestSchema1 struct {
Name string `json:"name"`
Tag string `json:"tag"`
Architecture string `json:"architecture"`
FSLayers []fsLayersSchema1 `json:"fsLayers"`
History []historySchema1 `json:"history"`
SchemaVersion int `json:"schemaVersion"`
m *manifest.Schema1
}
func manifestSchema1FromManifest(manifest []byte) (genericManifest, error) {
mschema1 := &manifestSchema1{}
if err := json.Unmarshal(manifest, mschema1); err != nil {
return nil, err
}
if mschema1.SchemaVersion != 1 {
return nil, errors.Errorf("unsupported schema version %d", mschema1.SchemaVersion)
}
if len(mschema1.FSLayers) != len(mschema1.History) {
return nil, errors.New("length of history not equal to number of layers")
}
if len(mschema1.FSLayers) == 0 {
return nil, errors.New("no FSLayers in manifest")
}
if err := fixManifestLayers(mschema1); err != nil {
return nil, err
}
return mschema1, nil
}
// manifestSchema1FromComponents builds a new manifestSchema1 from the supplied data.
func manifestSchema1FromComponents(ref reference.Named, fsLayers []fsLayersSchema1, history []historySchema1, architecture string) genericManifest {
var name, tag string
if ref != nil { // Well, what to do if it _is_ nil? Most consumers actually don't use these fields nowadays, so we might as well try not supplying them.
name = reference.Path(ref)
if tagged, ok := ref.(reference.NamedTagged); ok {
tag = tagged.Tag()
}
}
return &manifestSchema1{
Name: name,
Tag: tag,
Architecture: architecture,
FSLayers: fsLayers,
History: history,
SchemaVersion: 1,
}
}
func (m *manifestSchema1) serialize() ([]byte, error) {
// docker/distribution requires a signature even if the incoming data uses the nominally unsigned DockerV2Schema1MediaType.
unsigned, err := json.Marshal(*m)
func manifestSchema1FromManifest(manifestBlob []byte) (genericManifest, error) {
m, err := manifest.Schema1FromManifest(manifestBlob)
if err != nil {
return nil, err
}
return manifest.AddDummyV2S1Signature(unsigned)
return &manifestSchema1{m: m}, nil
}
// manifestSchema1FromComponents builds a new manifestSchema1 from the supplied data.
func manifestSchema1FromComponents(ref reference.Named, fsLayers []manifest.Schema1FSLayers, history []manifest.Schema1History, architecture string) (genericManifest, error) {
m, err := manifest.Schema1FromComponents(ref, fsLayers, history, architecture)
if err != nil {
return nil, err
}
return &manifestSchema1{m: m}, nil
}
func (m *manifestSchema1) serialize() ([]byte, error) {
return m.m.Serialize()
}
func (m *manifestSchema1) manifestMIMEType() string {
@@ -104,35 +43,31 @@ func (m *manifestSchema1) manifestMIMEType() string {
// ConfigInfo returns a complete BlobInfo for the separate config object, or a BlobInfo{Digest:""} if there isn't a separate object.
// Note that the config object may not exist in the underlying storage in the return value of UpdatedImage! Use ConfigBlob() below.
func (m *manifestSchema1) ConfigInfo() types.BlobInfo {
return types.BlobInfo{}
return m.m.ConfigInfo()
}
// ConfigBlob returns the blob described by ConfigInfo, iff ConfigInfo().Digest != ""; nil otherwise.
// The result is cached; it is OK to call this however often you need.
func (m *manifestSchema1) ConfigBlob() ([]byte, error) {
func (m *manifestSchema1) ConfigBlob(context.Context) ([]byte, error) {
return nil, nil
}
// OCIConfig returns the image configuration as per OCI v1 image-spec. Information about
// layers in the resulting configuration isn't guaranteed to be returned to due how
// old image manifests work (docker v2s1 especially).
func (m *manifestSchema1) OCIConfig() (*imgspecv1.Image, error) {
func (m *manifestSchema1) OCIConfig(ctx context.Context) (*imgspecv1.Image, error) {
v2s2, err := m.convertToManifestSchema2(nil, nil)
if err != nil {
return nil, err
}
return v2s2.OCIConfig()
return v2s2.OCIConfig(ctx)
}
// LayerInfos returns a list of BlobInfos of layers referenced by this image, in order (the root layer first, and then successive layered layers).
// The Digest field is guaranteed to be provided; Size may be -1.
// WARNING: The list may contain duplicates, and they are semantically relevant.
func (m *manifestSchema1) LayerInfos() []types.BlobInfo {
layers := make([]types.BlobInfo, len(m.FSLayers))
for i, layer := range m.FSLayers { // NOTE: This includes empty layers (where m.History.V1Compatibility->ThrowAway)
layers[(len(m.FSLayers)-1)-i] = types.BlobInfo{Digest: layer.BlobSum, Size: -1}
}
return layers
return manifestLayerInfosToBlobInfos(m.m.LayerInfos())
}
// EmbeddedDockerReferenceConflicts whether a Docker reference embedded in the manifest, if any, conflicts with destination ref.
@@ -153,53 +88,36 @@ func (m *manifestSchema1) EmbeddedDockerReferenceConflicts(ref reference.Named)
} else {
tag = ""
}
return m.Name != name || m.Tag != tag
return m.m.Name != name || m.m.Tag != tag
}
func (m *manifestSchema1) imageInspectInfo() (*types.ImageInspectInfo, error) {
v1 := &v1Image{}
if err := json.Unmarshal([]byte(m.History[0].V1Compatibility), v1); err != nil {
return nil, err
}
return &types.ImageInspectInfo{
Tag: m.Tag,
DockerVersion: v1.DockerVersion,
Created: v1.Created,
Labels: v1.Config.Labels,
Architecture: v1.Architecture,
Os: v1.OS,
}, nil
// Inspect returns various information for (skopeo inspect) parsed from the manifest and configuration.
func (m *manifestSchema1) Inspect(context.Context) (*types.ImageInspectInfo, error) {
return m.m.Inspect(nil)
}
// UpdatedImageNeedsLayerDiffIDs returns true iff UpdatedImage(options) needs InformationOnly.LayerDiffIDs.
// This is a horribly specific interface, but computing InformationOnly.LayerDiffIDs can be very expensive to compute
// (most importantly it forces us to download the full layers even if they are already present at the destination).
func (m *manifestSchema1) UpdatedImageNeedsLayerDiffIDs(options types.ManifestUpdateOptions) bool {
return options.ManifestMIMEType == manifest.DockerV2Schema2MediaType
return (options.ManifestMIMEType == manifest.DockerV2Schema2MediaType || options.ManifestMIMEType == imgspecv1.MediaTypeImageManifest)
}
// UpdatedImage returns a types.Image modified according to options.
// This does not change the state of the original Image object.
func (m *manifestSchema1) UpdatedImage(options types.ManifestUpdateOptions) (types.Image, error) {
copy := *m
func (m *manifestSchema1) UpdatedImage(ctx context.Context, options types.ManifestUpdateOptions) (types.Image, error) {
copy := manifestSchema1{m: manifest.Schema1Clone(m.m)}
if options.LayerInfos != nil {
// Our LayerInfos includes empty layers (where m.History.V1Compatibility->ThrowAway), so expect them to be included here as well.
if len(copy.FSLayers) != len(options.LayerInfos) {
return nil, errors.Errorf("Error preparing updated manifest: layer count changed from %d to %d", len(copy.FSLayers), len(options.LayerInfos))
}
for i, info := range options.LayerInfos {
// (docker push) sets up m.History.V1Compatibility->{Id,Parent} based on values of info.Digest,
// but (docker pull) ignores them in favor of computing DiffIDs from uncompressed data, except verifying the child->parent links and uniqueness.
// So, we don't bother recomputing the IDs in m.History.V1Compatibility.
copy.FSLayers[(len(options.LayerInfos)-1)-i].BlobSum = info.Digest
if err := copy.m.UpdateLayerInfos(options.LayerInfos); err != nil {
return nil, err
}
}
if options.EmbeddedDockerReference != nil {
copy.Name = reference.Path(options.EmbeddedDockerReference)
copy.m.Name = reference.Path(options.EmbeddedDockerReference)
if tagged, isTagged := options.EmbeddedDockerReference.(reference.NamedTagged); isTagged {
copy.Tag = tagged.Tag()
copy.m.Tag = tagged.Tag()
} else {
copy.Tag = ""
copy.m.Tag = ""
}
}
@@ -209,7 +127,21 @@ func (m *manifestSchema1) UpdatedImage(options types.ManifestUpdateOptions) (typ
// We have 2 MIME types for schema 1, which are basically equivalent (even the un-"Signed" MIME type will be rejected if there isn't a signature; so,
// handle conversions between them by doing nothing.
case manifest.DockerV2Schema2MediaType:
return copy.convertToManifestSchema2(options.InformationOnly.LayerInfos, options.InformationOnly.LayerDiffIDs)
m2, err := copy.convertToManifestSchema2(options.InformationOnly.LayerInfos, options.InformationOnly.LayerDiffIDs)
if err != nil {
return nil, err
}
return memoryImageFromManifest(m2), nil
case imgspecv1.MediaTypeImageManifest:
// We can't directly convert to OCI, but we can transitively convert via a Docker V2.2 Distribution manifest
m2, err := copy.convertToManifestSchema2(options.InformationOnly.LayerInfos, options.InformationOnly.LayerDiffIDs)
if err != nil {
return nil, err
}
return m2.UpdatedImage(ctx, types.ManifestUpdateOptions{
ManifestMIMEType: imgspecv1.MediaTypeImageManifest,
InformationOnly: options.InformationOnly,
})
default:
return nil, errors.Errorf("Conversion of image manifest from %s to %s is not implemented", manifest.DockerV2Schema1SignedMediaType, options.ManifestMIMEType)
}
@@ -217,103 +149,29 @@ func (m *manifestSchema1) UpdatedImage(options types.ManifestUpdateOptions) (typ
return memoryImageFromManifest(&copy), nil
}
// fixManifestLayers, after validating the supplied manifest
// (to use correctly-formatted IDs, and to not have non-consecutive ID collisions in manifest.History),
// modifies manifest to only have one entry for each layer ID in manifest.History (deleting the older duplicates,
// both from manifest.History and manifest.FSLayers).
// Note that even after this succeeds, manifest.FSLayers may contain duplicate entries
// (for Dockerfile operations which change the configuration but not the filesystem).
func fixManifestLayers(manifest *manifestSchema1) error {
type imageV1 struct {
ID string
Parent string
}
// Per the specification, we can assume that len(manifest.FSLayers) == len(manifest.History)
imgs := make([]*imageV1, len(manifest.FSLayers))
for i := range manifest.FSLayers {
img := &imageV1{}
if err := json.Unmarshal([]byte(manifest.History[i].V1Compatibility), img); err != nil {
return err
}
imgs[i] = img
if err := validateV1ID(img.ID); err != nil {
return err
}
}
if imgs[len(imgs)-1].Parent != "" {
return errors.New("Invalid parent ID in the base layer of the image")
}
// check general duplicates to error instead of a deadlock
idmap := make(map[string]struct{})
var lastID string
for _, img := range imgs {
// skip IDs that appear after each other, we handle those later
if _, exists := idmap[img.ID]; img.ID != lastID && exists {
return errors.Errorf("ID %+v appears multiple times in manifest", img.ID)
}
lastID = img.ID
idmap[lastID] = struct{}{}
}
// backwards loop so that we keep the remaining indexes after removing items
for i := len(imgs) - 2; i >= 0; i-- {
if imgs[i].ID == imgs[i+1].ID { // repeated ID. remove and continue
manifest.FSLayers = append(manifest.FSLayers[:i], manifest.FSLayers[i+1:]...)
manifest.History = append(manifest.History[:i], manifest.History[i+1:]...)
} else if imgs[i].Parent != imgs[i+1].ID {
return errors.Errorf("Invalid parent ID. Expected %v, got %v", imgs[i+1].ID, imgs[i].Parent)
}
}
return nil
}
func validateV1ID(id string) error {
if ok := validHex.MatchString(id); !ok {
return errors.Errorf("image ID %q is invalid", id)
}
return nil
}
// Based on github.com/docker/docker/distribution/pull_v2.go
func (m *manifestSchema1) convertToManifestSchema2(uploadedLayerInfos []types.BlobInfo, layerDiffIDs []digest.Digest) (types.Image, error) {
if len(m.History) == 0 {
// What would this even mean?! Anyhow, the rest of the code depends on fsLayers[0] and history[0] existing.
func (m *manifestSchema1) convertToManifestSchema2(uploadedLayerInfos []types.BlobInfo, layerDiffIDs []digest.Digest) (genericManifest, error) {
if len(m.m.ExtractedV1Compatibility) == 0 {
// What would this even mean?! Anyhow, the rest of the code depends on FSLayers[0] and ExtractedV1Compatibility[0] existing.
return nil, errors.Errorf("Cannot convert an image with 0 history entries to %s", manifest.DockerV2Schema2MediaType)
}
if len(m.History) != len(m.FSLayers) {
return nil, errors.Errorf("Inconsistent schema 1 manifest: %d history entries, %d fsLayers entries", len(m.History), len(m.FSLayers))
if len(m.m.ExtractedV1Compatibility) != len(m.m.FSLayers) {
return nil, errors.Errorf("Inconsistent schema 1 manifest: %d history entries, %d fsLayers entries", len(m.m.ExtractedV1Compatibility), len(m.m.FSLayers))
}
if uploadedLayerInfos != nil && len(uploadedLayerInfos) != len(m.FSLayers) {
return nil, errors.Errorf("Internal error: uploaded %d blobs, but schema1 manifest has %d fsLayers", len(uploadedLayerInfos), len(m.FSLayers))
if uploadedLayerInfos != nil && len(uploadedLayerInfos) != len(m.m.FSLayers) {
return nil, errors.Errorf("Internal error: uploaded %d blobs, but schema1 manifest has %d fsLayers", len(uploadedLayerInfos), len(m.m.FSLayers))
}
if layerDiffIDs != nil && len(layerDiffIDs) != len(m.FSLayers) {
return nil, errors.Errorf("Internal error: collected %d DiffID values, but schema1 manifest has %d fsLayers", len(layerDiffIDs), len(m.FSLayers))
if layerDiffIDs != nil && len(layerDiffIDs) != len(m.m.FSLayers) {
return nil, errors.Errorf("Internal error: collected %d DiffID values, but schema1 manifest has %d fsLayers", len(layerDiffIDs), len(m.m.FSLayers))
}
rootFS := rootFS{
Type: "layers",
DiffIDs: []digest.Digest{},
BaseLayer: "",
}
var layers []descriptor
history := make([]imageHistory, len(m.History))
for v1Index := len(m.History) - 1; v1Index >= 0; v1Index-- {
v2Index := (len(m.History) - 1) - v1Index
// Build a list of the diffIDs for the non-empty layers.
diffIDs := []digest.Digest{}
var layers []manifest.Schema2Descriptor
for v1Index := len(m.m.ExtractedV1Compatibility) - 1; v1Index >= 0; v1Index-- {
v2Index := (len(m.m.ExtractedV1Compatibility) - 1) - v1Index
var v1compat v1Compatibility
if err := json.Unmarshal([]byte(m.History[v1Index].V1Compatibility), &v1compat); err != nil {
return nil, errors.Wrapf(err, "Error decoding history entry %d", v1Index)
}
history[v2Index] = imageHistory{
Created: v1compat.Created,
Author: v1compat.Author,
CreatedBy: strings.Join(v1compat.ContainerConfig.Cmd, " "),
Comment: v1compat.Comment,
EmptyLayer: v1compat.ThrowAway,
}
if !v1compat.ThrowAway {
if !m.m.ExtractedV1Compatibility[v1Index].ThrowAway {
var size int64
if uploadedLayerInfos != nil {
size = uploadedLayerInfos[v2Index].Size
@@ -322,54 +180,23 @@ func (m *manifestSchema1) convertToManifestSchema2(uploadedLayerInfos []types.Bl
if layerDiffIDs != nil {
d = layerDiffIDs[v2Index]
}
layers = append(layers, descriptor{
layers = append(layers, manifest.Schema2Descriptor{
MediaType: "application/vnd.docker.image.rootfs.diff.tar.gzip",
Size: size,
Digest: m.FSLayers[v1Index].BlobSum,
Digest: m.m.FSLayers[v1Index].BlobSum,
})
rootFS.DiffIDs = append(rootFS.DiffIDs, d)
diffIDs = append(diffIDs, d)
}
}
configJSON, err := configJSONFromV1Config([]byte(m.History[0].V1Compatibility), rootFS, history)
configJSON, err := m.m.ToSchema2Config(diffIDs)
if err != nil {
return nil, err
}
configDescriptor := descriptor{
configDescriptor := manifest.Schema2Descriptor{
MediaType: "application/vnd.docker.container.image.v1+json",
Size: int64(len(configJSON)),
Digest: digest.FromBytes(configJSON),
}
m2 := manifestSchema2FromComponents(configDescriptor, nil, configJSON, layers)
return memoryImageFromManifest(m2), nil
}
func configJSONFromV1Config(v1ConfigJSON []byte, rootFS rootFS, history []imageHistory) ([]byte, error) {
// github.com/docker/docker/image/v1/imagev1.go:MakeConfigFromV1Config unmarshals and re-marshals the input if docker_version is < 1.8.3 to remove blank fields;
// we don't do that here. FIXME? Should we? AFAICT it would only affect the digest value of the schema2 manifest, and we don't particularly need that to be
// a consistently reproducible value.
// Preserve everything we don't specifically know about.
// (This must be a *json.RawMessage, even though *[]byte is fairly redundant, because only *RawMessage implements json.Marshaler.)
rawContents := map[string]*json.RawMessage{}
if err := json.Unmarshal(v1ConfigJSON, &rawContents); err != nil { // We have already unmarshaled it before, using a more detailed schema?!
return nil, err
}
delete(rawContents, "id")
delete(rawContents, "parent")
delete(rawContents, "Size")
delete(rawContents, "parent_id")
delete(rawContents, "layer_id")
delete(rawContents, "throwaway")
updates := map[string]interface{}{"rootfs": rootFS, "history": history}
for field, value := range updates {
encoded, err := json.Marshal(value)
if err != nil {
return nil, err
}
rawContents[field] = (*json.RawMessage)(&encoded)
}
return json.Marshal(rawContents)
return manifestSchema2FromComponents(configDescriptor, nil, configJSON, layers), nil
}
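
The removed configJSONFromV1Config helper (and its replacement, manifest.Schema1.ToSchema2Config, shown further below) both rely on the same JSON trick: decode into map[string]*json.RawMessage so unknown fields survive the round trip, delete the v1-only keys, then splice in the new rootfs and history values. A minimal self-contained sketch of that pattern (the field names and input here are illustrative, not the library's exact code):

package main

import (
	"encoding/json"
	"fmt"
)

// rewriteConfig is a sketch of the field-preserving rewrite: unknown fields
// survive because they stay as raw bytes; only the listed v1-only keys are dropped.
func rewriteConfig(v1ConfigJSON []byte, updates map[string]interface{}) ([]byte, error) {
	raw := map[string]*json.RawMessage{}
	if err := json.Unmarshal(v1ConfigJSON, &raw); err != nil {
		return nil, err
	}
	for _, k := range []string{"id", "parent", "Size", "parent_id", "layer_id", "throwaway"} {
		delete(raw, k)
	}
	for field, value := range updates {
		encoded, err := json.Marshal(value)
		if err != nil {
			return nil, err
		}
		raw[field] = (*json.RawMessage)(&encoded) // only *RawMessage implements json.Marshaler
	}
	return json.Marshal(raw)
}

func main() {
	in := []byte(`{"id":"deadbeef","architecture":"amd64","custom_field":true}`)
	out, err := rewriteConfig(in, map[string]interface{}{"rootfs": map[string]string{"type": "layers"}})
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out)) // "custom_field" survives; "id" is gone; "rootfs" is added
}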


@@ -2,88 +2,79 @@ package image
import (
"bytes"
"context"
"crypto/sha256"
"encoding/hex"
"encoding/json"
"io/ioutil"
"strings"
"github.com/Sirupsen/logrus"
"github.com/containers/image/docker/reference"
"github.com/containers/image/manifest"
"github.com/containers/image/types"
"github.com/opencontainers/go-digest"
imgspecv1 "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
)
// gzippedEmptyLayer is a gzip-compressed version of an empty tar file (1024 NULL bytes)
// GzippedEmptyLayer is a gzip-compressed version of an empty tar file (1024 NULL bytes)
// This comes from github.com/docker/distribution/manifest/schema1/config_builder.go; there is
// a non-zero embedded timestamp; we could zero that, but that would just waste storage space
// in registries, so let's use the same values.
var gzippedEmptyLayer = []byte{
var GzippedEmptyLayer = []byte{
31, 139, 8, 0, 0, 9, 110, 136, 0, 255, 98, 24, 5, 163, 96, 20, 140, 88,
0, 8, 0, 0, 255, 255, 46, 175, 181, 239, 0, 4, 0, 0,
}
// gzippedEmptyLayerDigest is a digest of gzippedEmptyLayer
const gzippedEmptyLayerDigest = digest.Digest("sha256:a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4")
type descriptor struct {
MediaType string `json:"mediaType"`
Size int64 `json:"size"`
Digest digest.Digest `json:"digest"`
URLs []string `json:"urls,omitempty"`
}
// GzippedEmptyLayerDigest is a digest of GzippedEmptyLayer
const GzippedEmptyLayerDigest = digest.Digest("sha256:a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4")
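
Since both values are now exported, callers outside the package can rely on the pair being consistent; a quick sanity-check sketch (assuming the post-rename image.GzippedEmptyLayer/GzippedEmptyLayerDigest exports shown above):

package main

import (
	"fmt"

	"github.com/containers/image/image"
	"github.com/opencontainers/go-digest"
)

func main() {
	// The exported digest constant should be exactly the digest of the exported blob.
	fmt.Println(digest.FromBytes(image.GzippedEmptyLayer) == image.GzippedEmptyLayerDigest) // true
}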
type manifestSchema2 struct {
src types.ImageSource // May be nil if configBlob is not nil
configBlob []byte // If set, corresponds to contents of ConfigDescriptor.
SchemaVersion int `json:"schemaVersion"`
MediaType string `json:"mediaType"`
ConfigDescriptor descriptor `json:"config"`
LayersDescriptors []descriptor `json:"layers"`
src types.ImageSource // May be nil if configBlob is not nil
configBlob []byte // If set, corresponds to contents of ConfigDescriptor.
m *manifest.Schema2
}
func manifestSchema2FromManifest(src types.ImageSource, manifest []byte) (genericManifest, error) {
v2s2 := manifestSchema2{src: src}
if err := json.Unmarshal(manifest, &v2s2); err != nil {
func manifestSchema2FromManifest(src types.ImageSource, manifestBlob []byte) (genericManifest, error) {
m, err := manifest.Schema2FromManifest(manifestBlob)
if err != nil {
return nil, err
}
return &v2s2, nil
return &manifestSchema2{
src: src,
m: m,
}, nil
}
// manifestSchema2FromComponents builds a new manifestSchema2 from the supplied data:
func manifestSchema2FromComponents(config descriptor, src types.ImageSource, configBlob []byte, layers []descriptor) genericManifest {
func manifestSchema2FromComponents(config manifest.Schema2Descriptor, src types.ImageSource, configBlob []byte, layers []manifest.Schema2Descriptor) genericManifest {
return &manifestSchema2{
src: src,
configBlob: configBlob,
SchemaVersion: 2,
MediaType: manifest.DockerV2Schema2MediaType,
ConfigDescriptor: config,
LayersDescriptors: layers,
src: src,
configBlob: configBlob,
m: manifest.Schema2FromComponents(config, layers),
}
}
func (m *manifestSchema2) serialize() ([]byte, error) {
return json.Marshal(*m)
return m.m.Serialize()
}
func (m *manifestSchema2) manifestMIMEType() string {
return m.MediaType
return m.m.MediaType
}
// ConfigInfo returns a complete BlobInfo for the separate config object, or a BlobInfo{Digest:""} if there isn't a separate object.
// Note that the config object may not exist in the underlying storage in the return value of UpdatedImage! Use ConfigBlob() below.
func (m *manifestSchema2) ConfigInfo() types.BlobInfo {
return types.BlobInfo{Digest: m.ConfigDescriptor.Digest, Size: m.ConfigDescriptor.Size}
return m.m.ConfigInfo()
}
// OCIConfig returns the image configuration as per OCI v1 image-spec. Information about
// layers in the resulting configuration isn't guaranteed to be returned due to how
// old image manifests work (docker v2s1 especially).
func (m *manifestSchema2) OCIConfig() (*imgspecv1.Image, error) {
configBlob, err := m.ConfigBlob()
func (m *manifestSchema2) OCIConfig(ctx context.Context) (*imgspecv1.Image, error) {
configBlob, err := m.ConfigBlob(ctx)
if err != nil {
return nil, err
}
@@ -99,16 +90,12 @@ func (m *manifestSchema2) OCIConfig() (*imgspecv1.Image, error) {
// ConfigBlob returns the blob described by ConfigInfo, iff ConfigInfo().Digest != ""; nil otherwise.
// The result is cached; it is OK to call this however often you need.
func (m *manifestSchema2) ConfigBlob() ([]byte, error) {
func (m *manifestSchema2) ConfigBlob(ctx context.Context) ([]byte, error) {
if m.configBlob == nil {
if m.src == nil {
return nil, errors.Errorf("Internal error: neither src nor configBlob set in manifestSchema2")
}
stream, _, err := m.src.GetBlob(types.BlobInfo{
Digest: m.ConfigDescriptor.Digest,
Size: m.ConfigDescriptor.Size,
URLs: m.ConfigDescriptor.URLs,
})
stream, _, err := m.src.GetBlob(ctx, manifest.BlobInfoFromSchema2Descriptor(m.m.ConfigDescriptor))
if err != nil {
return nil, err
}
@@ -118,8 +105,8 @@ func (m *manifestSchema2) ConfigBlob() ([]byte, error) {
return nil, err
}
computedDigest := digest.FromBytes(blob)
if computedDigest != m.ConfigDescriptor.Digest {
return nil, errors.Errorf("Download config.json digest %s does not match expected %s", computedDigest, m.ConfigDescriptor.Digest)
if computedDigest != m.m.ConfigDescriptor.Digest {
return nil, errors.Errorf("Download config.json digest %s does not match expected %s", computedDigest, m.m.ConfigDescriptor.Digest)
}
m.configBlob = blob
}
@@ -130,15 +117,7 @@ func (m *manifestSchema2) ConfigBlob() ([]byte, error) {
// The Digest field is guaranteed to be provided; Size may be -1.
// WARNING: The list may contain duplicates, and they are semantically relevant.
func (m *manifestSchema2) LayerInfos() []types.BlobInfo {
blobs := []types.BlobInfo{}
for _, layer := range m.LayersDescriptors {
blobs = append(blobs, types.BlobInfo{
Digest: layer.Digest,
Size: layer.Size,
URLs: layer.URLs,
})
}
return blobs
return manifestLayerInfosToBlobInfos(m.m.LayerInfos())
}
// EmbeddedDockerReferenceConflicts reports whether a Docker reference embedded in the manifest, if any, conflicts with destination ref.
@@ -148,22 +127,20 @@ func (m *manifestSchema2) EmbeddedDockerReferenceConflicts(ref reference.Named)
return false
}
func (m *manifestSchema2) imageInspectInfo() (*types.ImageInspectInfo, error) {
config, err := m.ConfigBlob()
if err != nil {
return nil, err
// Inspect returns various information for (skopeo inspect) parsed from the manifest and configuration.
func (m *manifestSchema2) Inspect(ctx context.Context) (*types.ImageInspectInfo, error) {
getter := func(info types.BlobInfo) ([]byte, error) {
if info.Digest != m.ConfigInfo().Digest {
// Shouldn't ever happen
return nil, errors.New("asked for a different config blob")
}
config, err := m.ConfigBlob(ctx)
if err != nil {
return nil, err
}
return config, nil
}
v1 := &v1Image{}
if err := json.Unmarshal(config, v1); err != nil {
return nil, err
}
return &types.ImageInspectInfo{
DockerVersion: v1.DockerVersion,
Created: v1.Created,
Labels: v1.Config.Labels,
Architecture: v1.Architecture,
Os: v1.OS,
}, nil
return m.m.Inspect(getter)
}
// UpdatedImageNeedsLayerDiffIDs returns true iff UpdatedImage(options) needs InformationOnly.LayerDiffIDs.
@@ -175,17 +152,15 @@ func (m *manifestSchema2) UpdatedImageNeedsLayerDiffIDs(options types.ManifestUp
// UpdatedImage returns a types.Image modified according to options.
// This does not change the state of the original Image object.
func (m *manifestSchema2) UpdatedImage(options types.ManifestUpdateOptions) (types.Image, error) {
copy := *m // NOTE: This is not a deep copy, it still shares slices etc.
func (m *manifestSchema2) UpdatedImage(ctx context.Context, options types.ManifestUpdateOptions) (types.Image, error) {
copy := manifestSchema2{ // NOTE: This is not a deep copy, it still shares slices etc.
src: m.src,
configBlob: m.configBlob,
m: manifest.Schema2Clone(m.m),
}
if options.LayerInfos != nil {
if len(copy.LayersDescriptors) != len(options.LayerInfos) {
return nil, errors.Errorf("Error preparing updated manifest: layer count changed from %d to %d", len(copy.LayersDescriptors), len(options.LayerInfos))
}
copy.LayersDescriptors = make([]descriptor, len(options.LayerInfos))
for i, info := range options.LayerInfos {
copy.LayersDescriptors[i].Digest = info.Digest
copy.LayersDescriptors[i].Size = info.Size
copy.LayersDescriptors[i].URLs = info.URLs
if err := copy.m.UpdateLayerInfos(options.LayerInfos); err != nil {
return nil, err
}
}
// Ignore options.EmbeddedDockerReference: it may be set when converting from schema1 to schema2, but we really don't care.
@@ -193,9 +168,9 @@ func (m *manifestSchema2) UpdatedImage(options types.ManifestUpdateOptions) (typ
switch options.ManifestMIMEType {
case "": // No conversion, OK
case manifest.DockerV2Schema1SignedMediaType, manifest.DockerV2Schema1MediaType:
return copy.convertToManifestSchema1(options.InformationOnly.Destination)
return copy.convertToManifestSchema1(ctx, options.InformationOnly.Destination)
case imgspecv1.MediaTypeImageManifest:
return copy.convertToManifestOCI1()
return copy.convertToManifestOCI1(ctx)
default:
return nil, errors.Errorf("Conversion of image manifest from %s to %s is not implemented", manifest.DockerV2Schema2MediaType, options.ManifestMIMEType)
}
@@ -203,8 +178,17 @@ func (m *manifestSchema2) UpdatedImage(options types.ManifestUpdateOptions) (typ
return memoryImageFromManifest(&copy), nil
}
func (m *manifestSchema2) convertToManifestOCI1() (types.Image, error) {
configOCI, err := m.OCIConfig()
func oci1DescriptorFromSchema2Descriptor(d manifest.Schema2Descriptor) imgspecv1.Descriptor {
return imgspecv1.Descriptor{
MediaType: d.MediaType,
Size: d.Size,
Digest: d.Digest,
URLs: d.URLs,
}
}
func (m *manifestSchema2) convertToManifestOCI1(ctx context.Context) (types.Image, error) {
configOCI, err := m.OCIConfig(ctx)
if err != nil {
return nil, err
}
@@ -213,16 +197,16 @@ func (m *manifestSchema2) convertToManifestOCI1() (types.Image, error) {
return nil, err
}
config := descriptor{
config := imgspecv1.Descriptor{
MediaType: imgspecv1.MediaTypeImageConfig,
Size: int64(len(configOCIBytes)),
Digest: digest.FromBytes(configOCIBytes),
}
layers := make([]descriptor, len(m.LayersDescriptors))
layers := make([]imgspecv1.Descriptor, len(m.m.LayersDescriptors))
for idx := range layers {
layers[idx] = m.LayersDescriptors[idx]
if m.LayersDescriptors[idx].MediaType == manifest.DockerV2Schema2ForeignLayerMediaType {
layers[idx] = oci1DescriptorFromSchema2Descriptor(m.m.LayersDescriptors[idx])
if m.m.LayersDescriptors[idx].MediaType == manifest.DockerV2Schema2ForeignLayerMediaType {
layers[idx].MediaType = imgspecv1.MediaTypeImageLayerNonDistributable
} else {
// we assume layers are gzip'ed because docker v2s2 only deals with
@@ -236,19 +220,19 @@ func (m *manifestSchema2) convertToManifestOCI1() (types.Image, error) {
}
// Based on docker/distribution/manifest/schema1/config_builder.go
func (m *manifestSchema2) convertToManifestSchema1(dest types.ImageDestination) (types.Image, error) {
configBytes, err := m.ConfigBlob()
func (m *manifestSchema2) convertToManifestSchema1(ctx context.Context, dest types.ImageDestination) (types.Image, error) {
configBytes, err := m.ConfigBlob(ctx)
if err != nil {
return nil, err
}
imageConfig := &image{}
imageConfig := &manifest.Schema2Image{}
if err := json.Unmarshal(configBytes, imageConfig); err != nil {
return nil, err
}
// Build fsLayers and History, discarding all configs. We will patch the top-level config in later.
fsLayers := make([]fsLayersSchema1, len(imageConfig.History))
history := make([]historySchema1, len(imageConfig.History))
fsLayers := make([]manifest.Schema1FSLayers, len(imageConfig.History))
history := make([]manifest.Schema1History, len(imageConfig.History))
nonemptyLayerIndex := 0
var parentV1ID string // Set in the loop
v1ID := ""
@@ -265,21 +249,21 @@ func (m *manifestSchema2) convertToManifestSchema1(dest types.ImageDestination)
if historyEntry.EmptyLayer {
if !haveGzippedEmptyLayer {
logrus.Debugf("Uploading empty layer during conversion to schema 1")
info, err := dest.PutBlob(bytes.NewReader(gzippedEmptyLayer), types.BlobInfo{Digest: gzippedEmptyLayerDigest, Size: int64(len(gzippedEmptyLayer))})
info, err := dest.PutBlob(ctx, bytes.NewReader(GzippedEmptyLayer), types.BlobInfo{Digest: GzippedEmptyLayerDigest, Size: int64(len(GzippedEmptyLayer))}, false)
if err != nil {
return nil, errors.Wrap(err, "Error uploading empty layer")
}
if info.Digest != gzippedEmptyLayerDigest {
return nil, errors.Errorf("Internal error: Uploaded empty layer has digest %#v instead of %s", info.Digest, gzippedEmptyLayerDigest)
if info.Digest != GzippedEmptyLayerDigest {
return nil, errors.Errorf("Internal error: Uploaded empty layer has digest %#v instead of %s", info.Digest, GzippedEmptyLayerDigest)
}
haveGzippedEmptyLayer = true
}
blobDigest = gzippedEmptyLayerDigest
blobDigest = GzippedEmptyLayerDigest
} else {
if nonemptyLayerIndex >= len(m.LayersDescriptors) {
return nil, errors.Errorf("Invalid image configuration, needs more than the %d distributed layers", len(m.LayersDescriptors))
if nonemptyLayerIndex >= len(m.m.LayersDescriptors) {
return nil, errors.Errorf("Invalid image configuration, needs more than the %d distributed layers", len(m.m.LayersDescriptors))
}
blobDigest = m.LayersDescriptors[nonemptyLayerIndex].Digest
blobDigest = m.m.LayersDescriptors[nonemptyLayerIndex].Digest
nonemptyLayerIndex++
}
@@ -290,7 +274,7 @@ func (m *manifestSchema2) convertToManifestSchema1(dest types.ImageDestination)
}
v1ID = v
fakeImage := v1Compatibility{
fakeImage := manifest.Schema1V1Compatibility{
ID: v1ID,
Parent: parentV1ID,
Comment: historyEntry.Comment,
@@ -304,8 +288,8 @@ func (m *manifestSchema2) convertToManifestSchema1(dest types.ImageDestination)
return nil, errors.Errorf("Internal error: Error creating v1compatibility for %#v", fakeImage)
}
fsLayers[v1Index] = fsLayersSchema1{BlobSum: blobDigest}
history[v1Index] = historySchema1{V1Compatibility: string(v1CompatibilityBytes)}
fsLayers[v1Index] = manifest.Schema1FSLayers{BlobSum: blobDigest}
history[v1Index] = manifest.Schema1History{V1Compatibility: string(v1CompatibilityBytes)}
// Note that parentV1ID of the top layer is preserved when exiting this loop
}
@@ -320,7 +304,10 @@ func (m *manifestSchema2) convertToManifestSchema1(dest types.ImageDestination)
}
history[0].V1Compatibility = string(v1Config)
m1 := manifestSchema1FromComponents(dest.Reference().DockerReference(), fsLayers, history, imageConfig.Architecture)
m1, err := manifestSchema1FromComponents(dest.Reference().DockerReference(), fsLayers, history, imageConfig.Architecture)
if err != nil {
return nil, err // This should never happen, we should have created all the components correctly.
}
return memoryImageFromManifest(m1), nil
}
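
The empty-layer upload above follows the new four-argument PutBlob signature. A standalone sketch of the same step (assuming the exported GzippedEmptyLayer values and the PutBlob(ctx, stream, inputInfo, isConfig) signature used in this diff):

package sketch

import (
	"bytes"
	"context"
	"fmt"

	"github.com/containers/image/image"
	"github.com/containers/image/types"
)

// ensureGzippedEmptyLayer is a sketch: upload the shared empty layer once so
// schema1 history entries marked EmptyLayer have a blob to point at.
func ensureGzippedEmptyLayer(ctx context.Context, dest types.ImageDestination) error {
	info, err := dest.PutBlob(ctx, bytes.NewReader(image.GzippedEmptyLayer),
		types.BlobInfo{Digest: image.GzippedEmptyLayerDigest, Size: int64(len(image.GzippedEmptyLayer))},
		false) // false: this is a layer, not a config blob
	if err != nil {
		return err
	}
	if info.Digest != image.GzippedEmptyLayerDigest {
		return fmt.Errorf("unexpected digest %s for uploaded empty layer", info.Digest)
	}
	return nil
}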


@@ -1,57 +1,15 @@
package image
import (
"time"
"context"
"fmt"
"github.com/containers/image/docker/reference"
"github.com/containers/image/manifest"
"github.com/containers/image/pkg/strslice"
"github.com/containers/image/types"
"github.com/opencontainers/go-digest"
imgspecv1 "github.com/opencontainers/image-spec/specs-go/v1"
)
type config struct {
Cmd strslice.StrSlice
Labels map[string]string
}
type v1Image struct {
ID string `json:"id,omitempty"`
Parent string `json:"parent,omitempty"`
Comment string `json:"comment,omitempty"`
Created time.Time `json:"created"`
ContainerConfig *config `json:"container_config,omitempty"`
DockerVersion string `json:"docker_version,omitempty"`
Author string `json:"author,omitempty"`
// Config is the configuration of the container received from the client
Config *config `json:"config,omitempty"`
// Architecture is the hardware that the image is built and runs on
Architecture string `json:"architecture,omitempty"`
// OS is the operating system used to build and run the image
OS string `json:"os,omitempty"`
}
type image struct {
v1Image
History []imageHistory `json:"history,omitempty"`
RootFS *rootFS `json:"rootfs,omitempty"`
}
type imageHistory struct {
Created time.Time `json:"created"`
Author string `json:"author,omitempty"`
CreatedBy string `json:"created_by,omitempty"`
Comment string `json:"comment,omitempty"`
EmptyLayer bool `json:"empty_layer,omitempty"`
}
type rootFS struct {
Type string `json:"type"`
DiffIDs []digest.Digest `json:"diff_ids,omitempty"`
BaseLayer string `json:"base_layer,omitempty"`
}
// genericManifest is an interface for parsing, modifying image manifests and related data.
// Note that the public methods are intended to be a subset of types.Image
// so that embedding a genericManifest into structs works.
@@ -64,11 +22,11 @@ type genericManifest interface {
ConfigInfo() types.BlobInfo
// ConfigBlob returns the blob described by ConfigInfo, iff ConfigInfo().Digest != ""; nil otherwise.
// The result is cached; it is OK to call this however often you need.
ConfigBlob() ([]byte, error)
ConfigBlob(context.Context) ([]byte, error)
// OCIConfig returns the image configuration as per OCI v1 image-spec. Information about
// layers in the resulting configuration isn't guaranteed to be returned due to how
// old image manifests work (docker v2s1 especially).
OCIConfig() (*imgspecv1.Image, error)
OCIConfig(context.Context) (*imgspecv1.Image, error)
// LayerInfos returns a list of BlobInfos of layers referenced by this image, in order (the root layer first, and then successive layered layers).
// The Digest field is guaranteed to be provided; Size may be -1.
// WARNING: The list may contain duplicates, and they are semantically relevant.
@@ -77,53 +35,39 @@ type genericManifest interface {
// It returns false if the manifest does not embed a Docker reference.
// (This embedding unfortunately happens for Docker schema1, please do not add support for this in any new formats.)
EmbeddedDockerReferenceConflicts(ref reference.Named) bool
imageInspectInfo() (*types.ImageInspectInfo, error) // To be called by inspectManifest
// Inspect returns various information for (skopeo inspect) parsed from the manifest and configuration.
Inspect(context.Context) (*types.ImageInspectInfo, error)
// UpdatedImageNeedsLayerDiffIDs returns true iff UpdatedImage(options) needs InformationOnly.LayerDiffIDs.
// This is a horribly specific interface, but computing InformationOnly.LayerDiffIDs can be very expensive to compute
// (most importantly it forces us to download the full layers even if they are already present at the destination).
UpdatedImageNeedsLayerDiffIDs(options types.ManifestUpdateOptions) bool
// UpdatedImage returns a types.Image modified according to options.
// This does not change the state of the original Image object.
UpdatedImage(options types.ManifestUpdateOptions) (types.Image, error)
UpdatedImage(ctx context.Context, options types.ManifestUpdateOptions) (types.Image, error)
}
func manifestInstanceFromBlob(src types.ImageSource, manblob []byte, mt string) (genericManifest, error) {
switch mt {
// "application/json" is a valid v2s1 value per https://github.com/docker/distribution/blob/master/docs/spec/manifest-v2-1.md .
// This works for now, when nothing else seems to return "application/json"; if that were not true, the mapping/detection might
// need to happen within the ImageSource.
case manifest.DockerV2Schema1MediaType, manifest.DockerV2Schema1SignedMediaType, "application/json":
// manifestInstanceFromBlob returns a genericManifest implementation for (manblob, mt) in src.
// If manblob is a manifest list, it implicitly chooses an appropriate image from the list.
func manifestInstanceFromBlob(ctx context.Context, sys *types.SystemContext, src types.ImageSource, manblob []byte, mt string) (genericManifest, error) {
switch manifest.NormalizedMIMEType(mt) {
case manifest.DockerV2Schema1MediaType, manifest.DockerV2Schema1SignedMediaType:
return manifestSchema1FromManifest(manblob)
case imgspecv1.MediaTypeImageManifest:
return manifestOCI1FromManifest(src, manblob)
case manifest.DockerV2Schema2MediaType:
return manifestSchema2FromManifest(src, manblob)
case manifest.DockerV2ListMediaType:
return manifestSchema2FromManifestList(src, manblob)
default:
// If it's not a recognized manifest media type, or we have failed determining the type, we'll try one last time
// to deserialize using v2s1 as per https://github.com/docker/distribution/blob/master/manifests.go#L108
// and https://github.com/docker/distribution/blob/master/manifest/schema1/manifest.go#L50
//
// Crane registries can also return "text/plain", or pretty much anything else depending on a file extension “recognized” in the tag.
// This makes no real sense, but it happens
// because requests for manifests are
// redirected to a content distribution
// network which is configured that way. See https://bugzilla.redhat.com/show_bug.cgi?id=1389442
return manifestSchema1FromManifest(manblob)
return manifestSchema2FromManifestList(ctx, sys, src, manblob)
default: // Note that this may not be reachable, manifest.NormalizedMIMEType has a default for unknown values.
return nil, fmt.Errorf("Unimplemented manifest MIME type %s", mt)
}
}
// inspectManifest is an implementation of types.Image.Inspect
func inspectManifest(m genericManifest) (*types.ImageInspectInfo, error) {
info, err := m.imageInspectInfo()
if err != nil {
return nil, err
}
layers := m.LayerInfos()
info.Layers = make([]string, len(layers))
// manifestLayerInfosToBlobInfos extracts a []types.BlobInfo from a []manifest.LayerInfo.
func manifestLayerInfosToBlobInfos(layers []manifest.LayerInfo) []types.BlobInfo {
blobs := make([]types.BlobInfo, len(layers))
for i, layer := range layers {
info.Layers[i] = layer.Digest.String()
blobs[i] = layer.BlobInfo
}
return info, nil
return blobs
}
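
manifestInstanceFromBlob now leans on manifest.NormalizedMIMEType to fold historical and bogus MIME types (the "application/json" and "text/plain" cases discussed in the removed comments) into a known value, which is why the default branch may be unreachable. A small sketch of that behavior (assuming NormalizedMIMEType acts as the comments above describe):

package main

import (
	"fmt"

	"github.com/containers/image/manifest"
)

func main() {
	// Unrecognized or missing MIME types normalize to the v2s1 fallback,
	// matching the old "try v2s1 as a last resort" behavior.
	for _, mt := range []string{"", "application/json", "text/plain", manifest.DockerV2Schema2MediaType} {
		fmt.Printf("%q -> %q\n", mt, manifest.NormalizedMIMEType(mt))
	}
}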


@@ -1,6 +1,8 @@
package image
import (
"context"
"github.com/pkg/errors"
"github.com/containers/image/types"
@@ -31,18 +33,13 @@ func (i *memoryImage) Reference() types.ImageReference {
return nil
}
// Close removes resources associated with an initialized UnparsedImage, if any.
func (i *memoryImage) Close() error {
return nil
}
// Size returns the size of the image as stored, if known, or -1 if not.
func (i *memoryImage) Size() (int64, error) {
return -1, nil
}
// Manifest is like ImageSource.GetManifest, but the result is cached; it is OK to call this however often you need.
func (i *memoryImage) Manifest() ([]byte, string, error) {
func (i *memoryImage) Manifest(ctx context.Context) ([]byte, string, error) {
if i.serializedManifest == nil {
m, err := i.genericManifest.serialize()
if err != nil {
@@ -54,18 +51,15 @@ func (i *memoryImage) Manifest() ([]byte, string, error) {
}
// Signatures is like ImageSource.GetSignatures, but the result is cached; it is OK to call this however often you need.
func (i *memoryImage) Signatures() ([][]byte, error) {
func (i *memoryImage) Signatures(ctx context.Context) ([][]byte, error) {
// Modifying an image invalidates signatures; a caller asking the updated image for signatures
// is probably confused.
return nil, errors.New("Internal error: Image.Signatures() is not supported for images modified in memory")
}
// Inspect returns various information for (skopeo inspect) parsed from the manifest and configuration.
func (i *memoryImage) Inspect() (*types.ImageInspectInfo, error) {
return inspectManifest(i.genericManifest)
}
// IsMultiImage returns true if the image's manifest is a list of images, false otherwise.
func (i *memoryImage) IsMultiImage() bool {
return false
// LayerInfosForCopy returns an updated set of layer blob information which may not match the manifest.
// The Digest field is guaranteed to be provided; Size may be -1.
// WARNING: The list may contain duplicates, and they are semantically relevant.
func (i *memoryImage) LayerInfosForCopy(ctx context.Context) ([]types.BlobInfo, error) {
return nil, nil
}


@@ -1,6 +1,7 @@
package image
import (
"context"
"encoding/json"
"io/ioutil"
@@ -13,34 +14,33 @@ import (
)
type manifestOCI1 struct {
src types.ImageSource // May be nil if configBlob is not nil
configBlob []byte // If set, corresponds to contents of ConfigDescriptor.
SchemaVersion int `json:"schemaVersion"`
ConfigDescriptor descriptor `json:"config"`
LayersDescriptors []descriptor `json:"layers"`
src types.ImageSource // May be nil if configBlob is not nil
configBlob []byte // If set, corresponds to contents of m.Config.
m *manifest.OCI1
}
func manifestOCI1FromManifest(src types.ImageSource, manifest []byte) (genericManifest, error) {
oci := manifestOCI1{src: src}
if err := json.Unmarshal(manifest, &oci); err != nil {
func manifestOCI1FromManifest(src types.ImageSource, manifestBlob []byte) (genericManifest, error) {
m, err := manifest.OCI1FromManifest(manifestBlob)
if err != nil {
return nil, err
}
return &oci, nil
return &manifestOCI1{
src: src,
m: m,
}, nil
}
// manifestOCI1FromComponents builds a new manifestOCI1 from the supplied data:
func manifestOCI1FromComponents(config descriptor, src types.ImageSource, configBlob []byte, layers []descriptor) genericManifest {
func manifestOCI1FromComponents(config imgspecv1.Descriptor, src types.ImageSource, configBlob []byte, layers []imgspecv1.Descriptor) genericManifest {
return &manifestOCI1{
src: src,
configBlob: configBlob,
SchemaVersion: 2,
ConfigDescriptor: config,
LayersDescriptors: layers,
src: src,
configBlob: configBlob,
m: manifest.OCI1FromComponents(config, layers),
}
}
func (m *manifestOCI1) serialize() ([]byte, error) {
return json.Marshal(*m)
return m.m.Serialize()
}
func (m *manifestOCI1) manifestMIMEType() string {
@@ -50,21 +50,17 @@ func (m *manifestOCI1) manifestMIMEType() string {
// ConfigInfo returns a complete BlobInfo for the separate config object, or a BlobInfo{Digest:""} if there isn't a separate object.
// Note that the config object may not exist in the underlying storage in the return value of UpdatedImage! Use ConfigBlob() below.
func (m *manifestOCI1) ConfigInfo() types.BlobInfo {
return types.BlobInfo{Digest: m.ConfigDescriptor.Digest, Size: m.ConfigDescriptor.Size}
return m.m.ConfigInfo()
}
// ConfigBlob returns the blob described by ConfigInfo, iff ConfigInfo().Digest != ""; nil otherwise.
// The result is cached; it is OK to call this however often you need.
func (m *manifestOCI1) ConfigBlob() ([]byte, error) {
func (m *manifestOCI1) ConfigBlob(ctx context.Context) ([]byte, error) {
if m.configBlob == nil {
if m.src == nil {
return nil, errors.Errorf("Internal error: neither src nor configBlob set in manifestOCI1")
}
stream, _, err := m.src.GetBlob(types.BlobInfo{
Digest: m.ConfigDescriptor.Digest,
Size: m.ConfigDescriptor.Size,
URLs: m.ConfigDescriptor.URLs,
})
stream, _, err := m.src.GetBlob(ctx, manifest.BlobInfoFromOCI1Descriptor(m.m.Config))
if err != nil {
return nil, err
}
@@ -74,8 +70,8 @@ func (m *manifestOCI1) ConfigBlob() ([]byte, error) {
return nil, err
}
computedDigest := digest.FromBytes(blob)
if computedDigest != m.ConfigDescriptor.Digest {
return nil, errors.Errorf("Download config.json digest %s does not match expected %s", computedDigest, m.ConfigDescriptor.Digest)
if computedDigest != m.m.Config.Digest {
return nil, errors.Errorf("Download config.json digest %s does not match expected %s", computedDigest, m.m.Config.Digest)
}
m.configBlob = blob
}
@@ -85,8 +81,8 @@ func (m *manifestOCI1) ConfigBlob() ([]byte, error) {
// OCIConfig returns the image configuration as per OCI v1 image-spec. Information about
// layers in the resulting configuration isn't guaranteed to be returned due to how
// old image manifests work (docker v2s1 especially).
func (m *manifestOCI1) OCIConfig() (*imgspecv1.Image, error) {
cb, err := m.ConfigBlob()
func (m *manifestOCI1) OCIConfig(ctx context.Context) (*imgspecv1.Image, error) {
cb, err := m.ConfigBlob(ctx)
if err != nil {
return nil, err
}
@@ -101,11 +97,7 @@ func (m *manifestOCI1) OCIConfig() (*imgspecv1.Image, error) {
// The Digest field is guaranteed to be provided; Size may be -1.
// WARNING: The list may contain duplicates, and they are semantically relevant.
func (m *manifestOCI1) LayerInfos() []types.BlobInfo {
blobs := []types.BlobInfo{}
for _, layer := range m.LayersDescriptors {
blobs = append(blobs, types.BlobInfo{Digest: layer.Digest, Size: layer.Size})
}
return blobs
return manifestLayerInfosToBlobInfos(m.m.LayerInfos())
}
// EmbeddedDockerReferenceConflicts whether a Docker reference embedded in the manifest, if any, conflicts with destination ref.
@@ -115,22 +107,20 @@ func (m *manifestOCI1) EmbeddedDockerReferenceConflicts(ref reference.Named) boo
return false
}
func (m *manifestOCI1) imageInspectInfo() (*types.ImageInspectInfo, error) {
config, err := m.ConfigBlob()
if err != nil {
return nil, err
// Inspect returns various information for (skopeo inspect) parsed from the manifest and configuration.
func (m *manifestOCI1) Inspect(ctx context.Context) (*types.ImageInspectInfo, error) {
getter := func(info types.BlobInfo) ([]byte, error) {
if info.Digest != m.ConfigInfo().Digest {
// Shouldn't ever happen
return nil, errors.New("asked for a different config blob")
}
config, err := m.ConfigBlob(ctx)
if err != nil {
return nil, err
}
return config, nil
}
v1 := &v1Image{}
if err := json.Unmarshal(config, v1); err != nil {
return nil, err
}
return &types.ImageInspectInfo{
DockerVersion: v1.DockerVersion,
Created: v1.Created,
Labels: v1.Config.Labels,
Architecture: v1.Architecture,
Os: v1.OS,
}, nil
return m.m.Inspect(getter)
}
// UpdatedImageNeedsLayerDiffIDs returns true iff UpdatedImage(options) needs InformationOnly.LayerDiffIDs.
@@ -142,22 +132,31 @@ func (m *manifestOCI1) UpdatedImageNeedsLayerDiffIDs(options types.ManifestUpdat
// UpdatedImage returns a types.Image modified according to options.
// This does not change the state of the original Image object.
func (m *manifestOCI1) UpdatedImage(options types.ManifestUpdateOptions) (types.Image, error) {
copy := *m // NOTE: This is not a deep copy, it still shares slices etc.
func (m *manifestOCI1) UpdatedImage(ctx context.Context, options types.ManifestUpdateOptions) (types.Image, error) {
copy := manifestOCI1{ // NOTE: This is not a deep copy, it still shares slices etc.
src: m.src,
configBlob: m.configBlob,
m: manifest.OCI1Clone(m.m),
}
if options.LayerInfos != nil {
if len(copy.LayersDescriptors) != len(options.LayerInfos) {
return nil, errors.Errorf("Error preparing updated manifest: layer count changed from %d to %d", len(copy.LayersDescriptors), len(options.LayerInfos))
}
copy.LayersDescriptors = make([]descriptor, len(options.LayerInfos))
for i, info := range options.LayerInfos {
copy.LayersDescriptors[i].Digest = info.Digest
copy.LayersDescriptors[i].Size = info.Size
if err := copy.m.UpdateLayerInfos(options.LayerInfos); err != nil {
return nil, err
}
}
// Ignore options.EmbeddedDockerReference: it may be set when converting from schema1, but we really don't care.
switch options.ManifestMIMEType {
case "": // No conversion, OK
case manifest.DockerV2Schema1MediaType, manifest.DockerV2Schema1SignedMediaType:
// We can't directly convert to V1, but we can transitively convert via a V2 image
m2, err := copy.convertToManifestSchema2()
if err != nil {
return nil, err
}
return m2.UpdatedImage(ctx, types.ManifestUpdateOptions{
ManifestMIMEType: options.ManifestMIMEType,
InformationOnly: options.InformationOnly,
})
case manifest.DockerV2Schema2MediaType:
return copy.convertToManifestSchema2()
default:
@@ -167,17 +166,26 @@ func (m *manifestOCI1) UpdatedImage(options types.ManifestUpdateOptions) (types.
return memoryImageFromManifest(&copy), nil
}
func schema2DescriptorFromOCI1Descriptor(d imgspecv1.Descriptor) manifest.Schema2Descriptor {
return manifest.Schema2Descriptor{
MediaType: d.MediaType,
Size: d.Size,
Digest: d.Digest,
URLs: d.URLs,
}
}
func (m *manifestOCI1) convertToManifestSchema2() (types.Image, error) {
// Create a copy of the descriptor.
config := m.ConfigDescriptor
config := schema2DescriptorFromOCI1Descriptor(m.m.Config)
// The only difference between OCI and DockerSchema2 is the mediatypes. The
// media type of the manifest is handled by manifestSchema2FromComponents.
config.MediaType = manifest.DockerV2Schema2ConfigMediaType
layers := make([]descriptor, len(m.LayersDescriptors))
layers := make([]manifest.Schema2Descriptor, len(m.m.Layers))
for idx := range layers {
layers[idx] = m.LayersDescriptors[idx]
layers[idx] = schema2DescriptorFromOCI1Descriptor(m.m.Layers[idx])
layers[idx].MediaType = manifest.DockerV2Schema2LayerMediaType
}


@@ -4,12 +4,23 @@
package image
import (
"github.com/containers/image/manifest"
"context"
"github.com/containers/image/types"
)
// FromSource returns a types.Image implementation for source.
// The caller must call .Close() on the returned Image.
// imageCloser implements types.ImageCloser, perhaps allowing simple users
// to use a single object without having to keep a reference to a types.ImageSource
// only to call types.ImageSource.Close().
type imageCloser struct {
types.Image
src types.ImageSource
}
// FromSource returns a types.ImageCloser implementation for the default instance of source.
// If source is a manifest list, .Manifest() still returns the manifest list,
// but other methods transparently return data from an appropriate image instance.
//
// The caller must call .Close() on the returned ImageCloser.
//
// FromSource “takes ownership” of the input ImageSource and will call src.Close()
// when the image is closed. (This does not prevent callers from using both the
@@ -18,8 +29,19 @@ import (
//
// NOTE: If any kind of signature verification should happen, build an UnparsedImage from the value returned by NewImageSource,
// verify that UnparsedImage, and convert it into a real Image via image.FromUnparsedImage instead of calling this function.
func FromSource(src types.ImageSource) (types.Image, error) {
return FromUnparsedImage(UnparsedFromSource(src))
func FromSource(ctx context.Context, sys *types.SystemContext, src types.ImageSource) (types.ImageCloser, error) {
img, err := FromUnparsedImage(ctx, sys, UnparsedInstance(src, nil))
if err != nil {
return nil, err
}
return &imageCloser{
Image: img,
src: src,
}, nil
}
func (ic *imageCloser) Close() error {
return ic.src.Close()
}
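
A sketch of how a caller might use the new signature (ref is any types.ImageReference; this assumes the context-taking NewImageSource and Inspect signatures from this vendoring, and error handling is abbreviated):

package sketch

import (
	"context"

	"github.com/containers/image/image"
	"github.com/containers/image/types"
)

// inspectDefaultInstance is a sketch: open a source, wrap it in an ImageCloser,
// and let img.Close() release the underlying ImageSource for us.
func inspectDefaultInstance(ctx context.Context, sys *types.SystemContext, ref types.ImageReference) (*types.ImageInspectInfo, error) {
	src, err := ref.NewImageSource(ctx, sys)
	if err != nil {
		return nil, err
	}
	img, err := image.FromSource(ctx, sys, src)
	if err != nil {
		src.Close() // FromSource only takes ownership of src on success
		return nil, err
	}
	defer img.Close() // also closes src
	return img.Inspect(ctx)
}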
// sourcedImage is a general set of utilities for working with container images,
@@ -38,27 +60,22 @@ type sourcedImage struct {
}
// FromUnparsedImage returns a types.Image implementation for unparsed.
// The caller must call .Close() on the returned Image.
// If unparsed represents a manifest list, .Manifest() still returns the manifest list,
// but other methods transparently return data from an appropriate single image.
//
// FromUnparsedImage “takes ownership” of the input UnparsedImage and will call unparsed.Close()
// when the image is closed. (This does not prevent callers from using both the
// UnparsedImage and ImageSource objects simultaneously, but it means that they only need to
// keep a reference to the Image.)
func FromUnparsedImage(unparsed *UnparsedImage) (types.Image, error) {
// The Image must not be used after the underlying ImageSource is Close()d.
func FromUnparsedImage(ctx context.Context, sys *types.SystemContext, unparsed *UnparsedImage) (types.Image, error) {
// Note that the input parameter above is specifically *image.UnparsedImage, not types.UnparsedImage:
// we want to be able to use unparsed.src. We could make that an explicit interface, but, well,
// this is the only UnparsedImage implementation around, anyway.
// Also, we do not explicitly implement types.Image.Close; we let the implementation fall through to
// unparsed.Close.
// NOTE: It is essential for signature verification that all parsing done in this object happens on the same manifest which is returned by unparsed.Manifest().
manifestBlob, manifestMIMEType, err := unparsed.Manifest()
manifestBlob, manifestMIMEType, err := unparsed.Manifest(ctx)
if err != nil {
return nil, err
}
parsedManifest, err := manifestInstanceFromBlob(unparsed.src, manifestBlob, manifestMIMEType)
parsedManifest, err := manifestInstanceFromBlob(ctx, sys, unparsed.src, manifestBlob, manifestMIMEType)
if err != nil {
return nil, err
}
@@ -77,14 +94,10 @@ func (i *sourcedImage) Size() (int64, error) {
}
// Manifest overrides the UnparsedImage.Manifest to always use the fields which we have already fetched.
func (i *sourcedImage) Manifest() ([]byte, string, error) {
func (i *sourcedImage) Manifest(ctx context.Context) ([]byte, string, error) {
return i.manifestBlob, i.manifestMIMEType, nil
}
func (i *sourcedImage) Inspect() (*types.ImageInspectInfo, error) {
return inspectManifest(i.genericManifest)
}
func (i *sourcedImage) IsMultiImage() bool {
return i.manifestMIMEType == manifest.DockerV2ListMediaType
func (i *sourcedImage) LayerInfosForCopy(ctx context.Context) ([]types.BlobInfo, error) {
return i.UnparsedImage.src.LayerInfosForCopy(ctx)
}


@@ -1,6 +1,8 @@
package image
import (
"context"
"github.com/containers/image/docker/reference"
"github.com/containers/image/manifest"
"github.com/containers/image/types"
@@ -9,8 +11,10 @@ import (
)
// UnparsedImage implements types.UnparsedImage .
// An UnparsedImage is a pair of (ImageSource, instance digest); it can represent either a manifest list or a single image instance.
type UnparsedImage struct {
src types.ImageSource
instanceDigest *digest.Digest
cachedManifest []byte // A private cache for Manifest(); nil if not yet known.
// A private cache for Manifest(); may be the empty string if guessing failed.
// Valid iff cachedManifest is not nil.
@@ -18,49 +22,41 @@ type UnparsedImage struct {
cachedSignatures [][]byte // A private cache for Signatures(); nil if not yet known.
}
// UnparsedFromSource returns a types.UnparsedImage implementation for source.
// The caller must call .Close() on the returned UnparsedImage.
// UnparsedInstance returns a types.UnparsedImage implementation for (source, instanceDigest).
// If instanceDigest is not nil, it contains a digest of the specific manifest instance to retrieve (when the primary manifest is a manifest list).
//
// UnparsedFromSource “takes ownership” of the input ImageSource and will call src.Close()
// when the image is closed. (This does not prevent callers from using both the
// UnparsedImage and ImageSource objects simultaneously, but it means that they only need to
// keep a reference to the UnparsedImage.)
func UnparsedFromSource(src types.ImageSource) *UnparsedImage {
return &UnparsedImage{src: src}
// The UnparsedImage must not be used after the underlying ImageSource is Close()d.
func UnparsedInstance(src types.ImageSource, instanceDigest *digest.Digest) *UnparsedImage {
return &UnparsedImage{
src: src,
instanceDigest: instanceDigest,
}
}
// Reference returns the reference used to set up this source, _as specified by the user_
// (not as the image itself, or its underlying storage, claims). This can be used e.g. to determine which public keys are trusted for this image.
func (i *UnparsedImage) Reference() types.ImageReference {
// Note that this does not depend on instanceDigest; e.g. all instances within a manifest list need to be signed with the manifest list identity.
return i.src.Reference()
}
// Close removes resources associated with an initialized UnparsedImage, if any.
func (i *UnparsedImage) Close() error {
return i.src.Close()
}
// Manifest is like ImageSource.GetManifest, but the result is cached; it is OK to call this however often you need.
func (i *UnparsedImage) Manifest() ([]byte, string, error) {
func (i *UnparsedImage) Manifest(ctx context.Context) ([]byte, string, error) {
if i.cachedManifest == nil {
m, mt, err := i.src.GetManifest()
m, mt, err := i.src.GetManifest(ctx, i.instanceDigest)
if err != nil {
return nil, "", err
}
// ImageSource.GetManifest does not do digest verification, but we do;
// this immediately protects also any user of types.Image.
ref := i.Reference().DockerReference()
if ref != nil {
if canonical, ok := ref.(reference.Canonical); ok {
digest := digest.Digest(canonical.Digest())
matches, err := manifest.MatchesDigest(m, digest)
if err != nil {
return nil, "", errors.Wrap(err, "Error computing manifest digest")
}
if !matches {
return nil, "", errors.Errorf("Manifest does not match provided manifest digest %s", digest)
}
if digest, haveDigest := i.expectedManifestDigest(); haveDigest {
matches, err := manifest.MatchesDigest(m, digest)
if err != nil {
return nil, "", errors.Wrap(err, "Error computing manifest digest")
}
if !matches {
return nil, "", errors.Errorf("Manifest does not match provided manifest digest %s", digest)
}
}
@@ -70,10 +66,26 @@ func (i *UnparsedImage) Manifest() ([]byte, string, error) {
return i.cachedManifest, i.cachedManifestMIMEType, nil
}
// expectedManifestDigest returns the expected value of the manifest digest, and an indicator of whether it is known.
// The bool return value seems redundant with digest != ""; it is used explicitly
// to refuse (unexpected) situations when the digest exists but is "".
func (i *UnparsedImage) expectedManifestDigest() (digest.Digest, bool) {
if i.instanceDigest != nil {
return *i.instanceDigest, true
}
ref := i.Reference().DockerReference()
if ref != nil {
if canonical, ok := ref.(reference.Canonical); ok {
return canonical.Digest(), true
}
}
return "", false
}
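
Pinning an instance digest this way means Manifest() rejects any bytes the transport returns that do not hash to the requested digest. A sketch (assuming the UnparsedInstance and Manifest signatures shown in this diff):

package sketch

import (
	"context"

	"github.com/containers/image/image"
	"github.com/containers/image/types"
	"github.com/opencontainers/go-digest"
	"github.com/pkg/errors"
)

// fetchVerifiedInstance is a sketch: retrieve one member of a manifest list by
// digest; UnparsedImage.Manifest verifies the returned bytes against it.
func fetchVerifiedInstance(ctx context.Context, src types.ImageSource, instance digest.Digest) ([]byte, error) {
	unparsed := image.UnparsedInstance(src, &instance)
	blob, _, err := unparsed.Manifest(ctx)
	if err != nil {
		return nil, errors.Wrapf(err, "fetching instance %s", instance)
	}
	return blob, nil
}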
// Signatures is like ImageSource.GetSignatures, but the result is cached; it is OK to call this however often you need.
func (i *UnparsedImage) Signatures() ([][]byte, error) {
func (i *UnparsedImage) Signatures(ctx context.Context) ([][]byte, error) {
if i.cachedSignatures == nil {
sigs, err := i.src.GetSignatures()
sigs, err := i.src.GetSignatures(ctx, i.instanceDigest)
if err != nil {
return nil, err
}


@@ -0,0 +1,29 @@
package tmpdir
import (
"os"
"runtime"
)
// unixTempDirForBigFiles is the directory path to store big files on non-Windows systems.
// You can override this at build time with
// -ldflags '-X github.com/containers/image/internal/tmpdir.unixTempDirForBigFiles=$your_path'
var unixTempDirForBigFiles = builtinUnixTempDirForBigFiles
// builtinUnixTempDirForBigFiles is the directory path to store big files.
// Do not use the system default of os.TempDir(), usually /tmp, because with systemd it could be a tmpfs.
// DO NOT change this, instead see unixTempDirForBigFiles above.
const builtinUnixTempDirForBigFiles = "/var/tmp"
// TemporaryDirectoryForBigFiles returns a directory for temporary (big) files.
// On non-Windows systems it avoids the use of os.TempDir(), because the default temporary directory usually falls under /tmp,
// which on systemd-based systems could be the unsuitable tmpfs filesystem.
func TemporaryDirectoryForBigFiles() string {
var temporaryDirectoryForBigFiles string
if runtime.GOOS == "windows" {
temporaryDirectoryForBigFiles = os.TempDir()
} else {
temporaryDirectoryForBigFiles = unixTempDirForBigFiles
}
return temporaryDirectoryForBigFiles
}
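
Because the package lives under internal/, only containers/image itself can import it; a sketch of the intended call pattern from elsewhere in the library:

package sketch // in practice, a package inside containers/image, since internal/tmpdir is not importable from outside

import (
	"io/ioutil"
	"os"

	"github.com/containers/image/internal/tmpdir"
)

// scratchFile is a sketch: big temporary blobs land in /var/tmp by default
// (or the -ldflags override documented above) instead of a possibly tmpfs-backed /tmp.
func scratchFile(data []byte) (string, error) {
	f, err := ioutil.TempFile(tmpdir.TemporaryDirectoryForBigFiles(), "blob-")
	if err != nil {
		return "", err
	}
	defer f.Close()
	if _, err := f.Write(data); err != nil {
		os.Remove(f.Name())
		return "", err
	}
	return f.Name(), nil
}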


@@ -0,0 +1,315 @@
package manifest
import (
"encoding/json"
"regexp"
"strings"
"time"
"github.com/containers/image/docker/reference"
"github.com/containers/image/types"
"github.com/docker/docker/api/types/versions"
"github.com/opencontainers/go-digest"
"github.com/pkg/errors"
)
// Schema1FSLayers is an entry of the "fsLayers" array in docker/distribution schema 1.
type Schema1FSLayers struct {
BlobSum digest.Digest `json:"blobSum"`
}
// Schema1History is an entry of the "history" array in docker/distribution schema 1.
type Schema1History struct {
V1Compatibility string `json:"v1Compatibility"`
}
// Schema1 is a manifest in docker/distribution schema 1.
type Schema1 struct {
Name string `json:"name"`
Tag string `json:"tag"`
Architecture string `json:"architecture"`
FSLayers []Schema1FSLayers `json:"fsLayers"`
History []Schema1History `json:"history"` // Keep this in sync with ExtractedV1Compatibility!
ExtractedV1Compatibility []Schema1V1Compatibility `json:"-"` // Keep this in sync with History! Does not contain the full config (Schema2V1Image)
SchemaVersion int `json:"schemaVersion"`
}
type schema1V1CompatibilityContainerConfig struct {
Cmd []string
}
// Schema1V1Compatibility is a v1Compatibility in docker/distribution schema 1.
type Schema1V1Compatibility struct {
ID string `json:"id"`
Parent string `json:"parent,omitempty"`
Comment string `json:"comment,omitempty"`
Created time.Time `json:"created"`
ContainerConfig schema1V1CompatibilityContainerConfig `json:"container_config,omitempty"`
Author string `json:"author,omitempty"`
ThrowAway bool `json:"throwaway,omitempty"`
}
// Schema1FromManifest creates a Schema1 manifest instance from a manifest blob.
// (NOTE: The instance is not necessarily a literal representation of the original blob,
// layers with duplicate IDs are eliminated.)
func Schema1FromManifest(manifest []byte) (*Schema1, error) {
s1 := Schema1{}
if err := json.Unmarshal(manifest, &s1); err != nil {
return nil, err
}
if s1.SchemaVersion != 1 {
return nil, errors.Errorf("unsupported schema version %d", s1.SchemaVersion)
}
if err := s1.initialize(); err != nil {
return nil, err
}
if err := s1.fixManifestLayers(); err != nil {
return nil, err
}
return &s1, nil
}
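
A sketch of parsing a v2s1 manifest with the new exported API (the input file name is hypothetical); as LayerInfos() below shows, layers come back root-first, with duplicate IDs already folded by fixManifestLayers:

package main

import (
	"fmt"
	"io/ioutil"

	"github.com/containers/image/manifest"
)

func main() {
	blob, err := ioutil.ReadFile("manifest.json") // hypothetical local copy of a v2s1 manifest
	if err != nil {
		panic(err)
	}
	m, err := manifest.Schema1FromManifest(blob)
	if err != nil {
		panic(err) // e.g. schemaVersion != 1, or a history/fsLayers length mismatch
	}
	for _, l := range m.LayerInfos() {
		fmt.Println(l.Digest, "empty:", l.EmptyLayer)
	}
}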
// Schema1FromComponents creates a Schema1 manifest instance from the supplied data.
func Schema1FromComponents(ref reference.Named, fsLayers []Schema1FSLayers, history []Schema1History, architecture string) (*Schema1, error) {
var name, tag string
if ref != nil { // Well, what to do if it _is_ nil? Most consumers actually don't use these fields nowadays, so we might as well try not supplying them.
name = reference.Path(ref)
if tagged, ok := ref.(reference.NamedTagged); ok {
tag = tagged.Tag()
}
}
s1 := Schema1{
Name: name,
Tag: tag,
Architecture: architecture,
FSLayers: fsLayers,
History: history,
SchemaVersion: 1,
}
if err := s1.initialize(); err != nil {
return nil, err
}
return &s1, nil
}
// Schema1Clone creates a copy of the supplied Schema1 manifest.
func Schema1Clone(src *Schema1) *Schema1 {
copy := *src
return &copy
}
// initialize initializes ExtractedV1Compatibility and verifies invariants, so that the rest of this code can assume a minimally healthy manifest.
func (m *Schema1) initialize() error {
if len(m.FSLayers) != len(m.History) {
return errors.New("length of history not equal to number of layers")
}
if len(m.FSLayers) == 0 {
return errors.New("no FSLayers in manifest")
}
m.ExtractedV1Compatibility = make([]Schema1V1Compatibility, len(m.History))
for i, h := range m.History {
if err := json.Unmarshal([]byte(h.V1Compatibility), &m.ExtractedV1Compatibility[i]); err != nil {
return errors.Wrapf(err, "Error parsing v2s1 history entry %d", i)
}
}
return nil
}
// ConfigInfo returns a complete BlobInfo for the separate config object, or a BlobInfo{Digest:""} if there isn't a separate object.
func (m *Schema1) ConfigInfo() types.BlobInfo {
return types.BlobInfo{}
}
// LayerInfos returns a list of LayerInfos of layers referenced by this image, in order (the root layer first, and then successive layered layers).
// The Digest field is guaranteed to be provided; Size may be -1.
// WARNING: The list may contain duplicates, and they are semantically relevant.
func (m *Schema1) LayerInfos() []LayerInfo {
layers := make([]LayerInfo, len(m.FSLayers))
for i, layer := range m.FSLayers { // NOTE: This includes empty layers (where m.History.V1Compatibility->ThrowAway)
layers[(len(m.FSLayers)-1)-i] = LayerInfo{
BlobInfo: types.BlobInfo{Digest: layer.BlobSum, Size: -1},
EmptyLayer: m.ExtractedV1Compatibility[i].ThrowAway,
}
}
return layers
}
// UpdateLayerInfos replaces the original layers with the specified BlobInfos (size+digest+urls), in order (the root layer first, and then successive layered layers)
func (m *Schema1) UpdateLayerInfos(layerInfos []types.BlobInfo) error {
// Our LayerInfos includes empty layers (where m.ExtractedV1Compatibility[].ThrowAway), so expect them to be included here as well.
if len(m.FSLayers) != len(layerInfos) {
return errors.Errorf("Error preparing updated manifest: layer count changed from %d to %d", len(m.FSLayers), len(layerInfos))
}
m.FSLayers = make([]Schema1FSLayers, len(layerInfos))
for i, info := range layerInfos {
// (docker push) sets up m.ExtractedV1Compatibility[].{Id,Parent} based on values of info.Digest,
// but (docker pull) ignores them in favor of computing DiffIDs from uncompressed data, except verifying the child->parent links and uniqueness.
// So, we don't bother recomputing the IDs in m.History.V1Compatibility.
m.FSLayers[(len(layerInfos)-1)-i].BlobSum = info.Digest
}
return nil
}
// Serialize returns the manifest in a blob format.
// NOTE: Serialize() does not in general reproduce the original blob if this object was loaded from one, even if no modifications were made!
func (m *Schema1) Serialize() ([]byte, error) {
// docker/distribution requires a signature even if the incoming data uses the nominally unsigned DockerV2Schema1MediaType.
unsigned, err := json.Marshal(*m)
if err != nil {
return nil, err
}
return AddDummyV2S1Signature(unsigned)
}
// fixManifestLayers, after validating the supplied manifest
// (to use correctly-formatted IDs, and to not have non-consecutive ID collisions in m.History),
// modifies manifest to only have one entry for each layer ID in m.History (deleting the older duplicates,
// both from m.History and m.FSLayers).
// Note that even after this succeeds, m.FSLayers may contain duplicate entries
// (for Dockerfile operations which change the configuration but not the filesystem).
func (m *Schema1) fixManifestLayers() error {
// m.initialize() has verified that len(m.FSLayers) == len(m.History)
for _, compat := range m.ExtractedV1Compatibility {
if err := validateV1ID(compat.ID); err != nil {
return err
}
}
if m.ExtractedV1Compatibility[len(m.ExtractedV1Compatibility)-1].Parent != "" {
return errors.New("Invalid parent ID in the base layer of the image")
}
// check general duplicates to error instead of a deadlock
idmap := make(map[string]struct{})
var lastID string
for _, img := range m.ExtractedV1Compatibility {
// skip IDs that appear after each other, we handle those later
if _, exists := idmap[img.ID]; img.ID != lastID && exists {
return errors.Errorf("ID %+v appears multiple times in manifest", img.ID)
}
lastID = img.ID
idmap[lastID] = struct{}{}
}
// backwards loop so that we keep the remaining indexes after removing items
for i := len(m.ExtractedV1Compatibility) - 2; i >= 0; i-- {
if m.ExtractedV1Compatibility[i].ID == m.ExtractedV1Compatibility[i+1].ID { // repeated ID. remove and continue
m.FSLayers = append(m.FSLayers[:i], m.FSLayers[i+1:]...)
m.History = append(m.History[:i], m.History[i+1:]...)
m.ExtractedV1Compatibility = append(m.ExtractedV1Compatibility[:i], m.ExtractedV1Compatibility[i+1:]...)
} else if m.ExtractedV1Compatibility[i].Parent != m.ExtractedV1Compatibility[i+1].ID {
return errors.Errorf("Invalid parent ID. Expected %v, got %v", m.ExtractedV1Compatibility[i+1].ID, m.ExtractedV1Compatibility[i].Parent)
}
}
return nil
}
var validHex = regexp.MustCompile(`^([a-f0-9]{64})$`)
func validateV1ID(id string) error {
if ok := validHex.MatchString(id); !ok {
return errors.Errorf("image ID %q is invalid", id)
}
return nil
}
// Inspect returns various information for (skopeo inspect) parsed from the manifest and configuration.
func (m *Schema1) Inspect(_ func(types.BlobInfo) ([]byte, error)) (*types.ImageInspectInfo, error) {
s1 := &Schema2V1Image{}
if err := json.Unmarshal([]byte(m.History[0].V1Compatibility), s1); err != nil {
return nil, err
}
i := &types.ImageInspectInfo{
Tag: m.Tag,
Created: &s1.Created,
DockerVersion: s1.DockerVersion,
Architecture: s1.Architecture,
Os: s1.OS,
Layers: layerInfosToStrings(m.LayerInfos()),
}
if s1.Config != nil {
i.Labels = s1.Config.Labels
}
return i, nil
}
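
Since schema1 embeds the full config in History[0].V1Compatibility, the config-blob getter argument is unused and may be nil; a sketch:

package sketch

import (
	"fmt"

	"github.com/containers/image/manifest"
)

// printInspect is a sketch: unlike schema2/OCI, no config blob fetch is needed here.
func printInspect(m *manifest.Schema1) error {
	info, err := m.Inspect(nil) // the getter is ignored for schema1
	if err != nil {
		return err
	}
	fmt.Printf("tag=%s arch=%s os=%s layers=%d\n", info.Tag, info.Architecture, info.Os, len(info.Layers))
	return nil
}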
// ToSchema2Config builds a schema2-style configuration blob using the supplied diffIDs.
func (m *Schema1) ToSchema2Config(diffIDs []digest.Digest) ([]byte, error) {
// Convert the schema 1 compat info into a schema 2 config, constructing some of the fields
// that aren't directly comparable using info from the manifest.
if len(m.History) == 0 {
return nil, errors.New("image has no layers")
}
s1 := Schema2V1Image{}
config := []byte(m.History[0].V1Compatibility)
err := json.Unmarshal(config, &s1)
if err != nil {
return nil, errors.Wrapf(err, "error decoding configuration")
}
// Images created with versions prior to 1.8.3 require us to re-encode the encoded object,
// adding some fields that aren't "omitempty".
if s1.DockerVersion != "" && versions.LessThan(s1.DockerVersion, "1.8.3") {
config, err = json.Marshal(&s1)
if err != nil {
return nil, errors.Wrapf(err, "error re-encoding compat image config %#v", s1)
}
}
// Build the history.
convertedHistory := []Schema2History{}
for _, compat := range m.ExtractedV1Compatibility {
hitem := Schema2History{
Created: compat.Created,
CreatedBy: strings.Join(compat.ContainerConfig.Cmd, " "),
Author: compat.Author,
Comment: compat.Comment,
EmptyLayer: compat.ThrowAway,
}
convertedHistory = append([]Schema2History{hitem}, convertedHistory...)
}
// Build the rootfs information. We need the decompressed sums that we've been
// calculating to fill in the DiffIDs. It's expected (but not enforced by us)
// that the number of diffIDs corresponds to the number of non-EmptyLayer
// entries in the history.
rootFS := &Schema2RootFS{
Type: "layers",
DiffIDs: diffIDs,
}
// And now for some raw manipulation.
raw := make(map[string]*json.RawMessage)
err = json.Unmarshal(config, &raw)
if err != nil {
return nil, errors.Wrapf(err, "error re-decoding compat image config %#v: %v", s1)
}
// Drop some fields.
delete(raw, "id")
delete(raw, "parent")
delete(raw, "parent_id")
delete(raw, "layer_id")
delete(raw, "throwaway")
delete(raw, "Size")
// Add the history and rootfs information.
rootfs, err := json.Marshal(rootFS)
if err != nil {
return nil, errors.Errorf("error encoding rootfs information %#v: %v", rootFS, err)
}
rawRootfs := json.RawMessage(rootfs)
raw["rootfs"] = &rawRootfs
history, err := json.Marshal(convertedHistory)
if err != nil {
return nil, errors.Errorf("error encoding history information %#v: %v", convertedHistory, err)
}
rawHistory := json.RawMessage(history)
raw["history"] = &rawHistory
// Encode the result.
config, err = json.Marshal(raw)
if err != nil {
return nil, errors.Errorf("error re-encoding compat image config %#v: %v", s1, err)
}
return config, nil
}
// ImageID computes an ID which can uniquely identify this image by its contents.
func (m *Schema1) ImageID(diffIDs []digest.Digest) (string, error) {
image, err := m.ToSchema2Config(diffIDs)
if err != nil {
return "", err
}
return digest.FromBytes(image).Hex(), nil
}


@@ -0,0 +1,254 @@
package manifest
import (
"encoding/json"
"time"
"github.com/containers/image/pkg/strslice"
"github.com/containers/image/types"
"github.com/opencontainers/go-digest"
"github.com/pkg/errors"
)
// Schema2Descriptor is a “descriptor” in docker/distribution schema 2.
type Schema2Descriptor struct {
MediaType string `json:"mediaType"`
Size int64 `json:"size"`
Digest digest.Digest `json:"digest"`
URLs []string `json:"urls,omitempty"`
}
// BlobInfoFromSchema2Descriptor returns a types.BlobInfo based on the input schema 2 descriptor.
func BlobInfoFromSchema2Descriptor(desc Schema2Descriptor) types.BlobInfo {
return types.BlobInfo{
Digest: desc.Digest,
Size: desc.Size,
URLs: desc.URLs,
MediaType: desc.MediaType,
}
}
// Schema2 is a manifest in docker/distribution schema 2.
type Schema2 struct {
SchemaVersion int `json:"schemaVersion"`
MediaType string `json:"mediaType"`
ConfigDescriptor Schema2Descriptor `json:"config"`
LayersDescriptors []Schema2Descriptor `json:"layers"`
}
// Schema2Port is a Port, a string containing port number and protocol in the
// format "80/tcp", from docker/go-connections/nat.
type Schema2Port string
// Schema2PortSet is a PortSet, a collection of structs indexed by Port, from
// docker/go-connections/nat.
type Schema2PortSet map[Schema2Port]struct{}
// Schema2HealthConfig is a HealthConfig, which holds configuration settings
// for the HEALTHCHECK feature, from docker/docker/api/types/container.
type Schema2HealthConfig struct {
// Test is the test to perform to check that the container is healthy.
// An empty slice means to inherit the default.
// The options are:
// {} : inherit healthcheck
// {"NONE"} : disable healthcheck
// {"CMD", args...} : exec arguments directly
// {"CMD-SHELL", command} : run command with system's default shell
Test []string `json:",omitempty"`
// Zero means to inherit. Durations are expressed as integer nanoseconds.
StartPeriod time.Duration `json:",omitempty"` // StartPeriod is the time to wait after starting before running the first check.
Interval time.Duration `json:",omitempty"` // Interval is the time to wait between checks.
Timeout time.Duration `json:",omitempty"` // Timeout is the time to wait before considering the check to have hung.
// Retries is the number of consecutive failures needed to consider a container as unhealthy.
// Zero means inherit.
Retries int `json:",omitempty"`
}
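A hedged illustration of the Test conventions documented above; the probe command and timings are invented:

package manifestutil // hypothetical helper package

import (
	"time"

	"github.com/containers/image/manifest"
)

// ExampleHealthcheck builds a CMD-SHELL healthcheck. Durations are plain
// time.Duration values, i.e. integer nanoseconds on the wire.
func ExampleHealthcheck() *manifest.Schema2HealthConfig {
	return &manifest.Schema2HealthConfig{
		Test:     []string{"CMD-SHELL", "curl -f http://localhost/ || exit 1"},
		Interval: 30 * time.Second,
		Timeout:  5 * time.Second,
		Retries:  3, // three consecutive failures mark the container unhealthy
	}
}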
// Schema2Config is a Config in docker/docker/api/types/container.
type Schema2Config struct {
Hostname string // Hostname
Domainname string // Domainname
User string // User that will run the command(s) inside the container, also support user:group
AttachStdin bool // Attach the standard input, makes possible user interaction
AttachStdout bool // Attach the standard output
AttachStderr bool // Attach the standard error
ExposedPorts Schema2PortSet `json:",omitempty"` // List of exposed ports
Tty bool // Attach standard streams to a tty, including stdin if it is not closed.
OpenStdin bool // Open stdin
StdinOnce bool // If true, close stdin after the first attached client disconnects.
Env []string // List of environment variables to set in the container
Cmd strslice.StrSlice // Command to run when starting the container
Healthcheck *Schema2HealthConfig `json:",omitempty"` // Healthcheck describes how to check the container is healthy
ArgsEscaped bool `json:",omitempty"` // True if command is already escaped (Windows specific)
Image string // Name of the image as it was passed by the operator (e.g. could be symbolic)
Volumes map[string]struct{} // List of volumes (mounts) used for the container
WorkingDir string // Current directory (PWD) in which the command will be launched
Entrypoint strslice.StrSlice // Entrypoint to run when starting the container
NetworkDisabled bool `json:",omitempty"` // Is network disabled
MacAddress string `json:",omitempty"` // Mac Address of the container
OnBuild []string // ONBUILD metadata that was defined in the image Dockerfile
Labels map[string]string // List of labels set to this container
StopSignal string `json:",omitempty"` // Signal to stop a container
StopTimeout *int `json:",omitempty"` // Timeout (in seconds) to stop a container
Shell strslice.StrSlice `json:",omitempty"` // Shell for shell-form of RUN, CMD, ENTRYPOINT
}
// Schema2V1Image is a V1Image in docker/docker/image.
type Schema2V1Image struct {
// ID is a unique 64 character identifier of the image
ID string `json:"id,omitempty"`
// Parent is the ID of the parent image
Parent string `json:"parent,omitempty"`
// Comment is the commit message that was set when committing the image
Comment string `json:"comment,omitempty"`
// Created is the timestamp at which the image was created
Created time.Time `json:"created"`
// Container is the id of the container used to commit
Container string `json:"container,omitempty"`
// ContainerConfig is the configuration of the container that is committed into the image
ContainerConfig Schema2Config `json:"container_config,omitempty"`
// DockerVersion specifies the version of Docker that was used to build the image
DockerVersion string `json:"docker_version,omitempty"`
// Author is the name of the author that was specified when committing the image
Author string `json:"author,omitempty"`
// Config is the configuration of the container received from the client
Config *Schema2Config `json:"config,omitempty"`
// Architecture is the hardware architecture that the image is built for and runs on
Architecture string `json:"architecture,omitempty"`
// OS is the operating system used to build and run the image
OS string `json:"os,omitempty"`
// Size is the total size of the image including all layers it is composed of
Size int64 `json:",omitempty"`
}
// Schema2RootFS is a description of how to build up an image's root filesystem, from docker/docker/image.
type Schema2RootFS struct {
Type string `json:"type"`
DiffIDs []digest.Digest `json:"diff_ids,omitempty"`
}
// Schema2History stores build commands that were used to create an image, from docker/docker/image.
type Schema2History struct {
// Created is the timestamp at which the image was created
Created time.Time `json:"created"`
// Author is the name of the author that was specified when committing the image
Author string `json:"author,omitempty"`
// CreatedBy keeps the Dockerfile command used while building the image
CreatedBy string `json:"created_by,omitempty"`
// Comment is the commit message that was set when committing the image
Comment string `json:"comment,omitempty"`
// EmptyLayer is set to true if this history item did not generate a
// layer. Otherwise, the history item is associated with the next
// layer in the RootFS section.
EmptyLayer bool `json:"empty_layer,omitempty"`
}
// Schema2Image is an Image in docker/docker/image.
type Schema2Image struct {
Schema2V1Image
Parent digest.Digest `json:"parent,omitempty"`
RootFS *Schema2RootFS `json:"rootfs,omitempty"`
History []Schema2History `json:"history,omitempty"`
OSVersion string `json:"os.version,omitempty"`
OSFeatures []string `json:"os.features,omitempty"`
}
// Schema2FromManifest creates a Schema2 manifest instance from a manifest blob.
func Schema2FromManifest(manifest []byte) (*Schema2, error) {
s2 := Schema2{}
if err := json.Unmarshal(manifest, &s2); err != nil {
return nil, err
}
return &s2, nil
}
// Schema2FromComponents creates a Schema2 manifest instance from the supplied data.
func Schema2FromComponents(config Schema2Descriptor, layers []Schema2Descriptor) *Schema2 {
return &Schema2{
SchemaVersion: 2,
MediaType: DockerV2Schema2MediaType,
ConfigDescriptor: config,
LayersDescriptors: layers,
}
}
// Schema2Clone creates a copy of the supplied Schema2 manifest.
func Schema2Clone(src *Schema2) *Schema2 {
copy := *src
return &copy
}
// ConfigInfo returns a complete BlobInfo for the separate config object, or a BlobInfo{Digest:""} if there isn't a separate object.
func (m *Schema2) ConfigInfo() types.BlobInfo {
return BlobInfoFromSchema2Descriptor(m.ConfigDescriptor)
}
// LayerInfos returns a list of LayerInfos of layers referenced by this image, in order (the root layer first, and then successive layered layers).
// The Digest field is guaranteed to be provided; Size may be -1.
// WARNING: The list may contain duplicates, and they are semantically relevant.
func (m *Schema2) LayerInfos() []LayerInfo {
blobs := []LayerInfo{}
for _, layer := range m.LayersDescriptors {
blobs = append(blobs, LayerInfo{
BlobInfo: BlobInfoFromSchema2Descriptor(layer),
EmptyLayer: false,
})
}
return blobs
}
// UpdateLayerInfos replaces the original layers with the specified BlobInfos (size+digest+urls), in order (the root layer first, and then successive layered layers)
func (m *Schema2) UpdateLayerInfos(layerInfos []types.BlobInfo) error {
if len(m.LayersDescriptors) != len(layerInfos) {
return errors.Errorf("Error preparing updated manifest: layer count changed from %d to %d", len(m.LayersDescriptors), len(layerInfos))
}
original := m.LayersDescriptors
m.LayersDescriptors = make([]Schema2Descriptor, len(layerInfos))
for i, info := range layerInfos {
m.LayersDescriptors[i].MediaType = original[i].MediaType
m.LayersDescriptors[i].Digest = info.Digest
m.LayersDescriptors[i].Size = info.Size
m.LayersDescriptors[i].URLs = info.URLs
}
return nil
}
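A sketch of the intended call pattern (hypothetical caller): after re-uploading layers, e.g. with different compression, swap in the new digests and sizes, then re-encode; Serialize is defined just below.

package manifestutil // hypothetical helper package

import (
	"github.com/containers/image/manifest"
	"github.com/containers/image/types"
)

// ReplaceLayers swaps in updated layer blob info (same count and order as
// the original layers) and returns the re-encoded manifest.
func ReplaceLayers(m *manifest.Schema2, updated []types.BlobInfo) ([]byte, error) {
	if err := m.UpdateLayerInfos(updated); err != nil {
		return nil, err // most likely a layer count mismatch
	}
	return m.Serialize()
}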
// Serialize returns the manifest in a blob format.
// NOTE: Serialize() does not in general reproduce the original blob if this object was loaded from one, even if no modifications were made!
func (m *Schema2) Serialize() ([]byte, error) {
return json.Marshal(*m)
}
// Inspect returns various information for (skopeo inspect) parsed from the manifest and configuration.
func (m *Schema2) Inspect(configGetter func(types.BlobInfo) ([]byte, error)) (*types.ImageInspectInfo, error) {
config, err := configGetter(m.ConfigInfo())
if err != nil {
return nil, err
}
s2 := &Schema2Image{}
if err := json.Unmarshal(config, s2); err != nil {
return nil, err
}
i := &types.ImageInspectInfo{
Tag: "",
Created: &s2.Created,
DockerVersion: s2.DockerVersion,
Architecture: s2.Architecture,
Os: s2.OS,
Layers: layerInfosToStrings(m.LayerInfos()),
}
if s2.Config != nil {
i.Labels = s2.Config.Labels
}
return i, nil
}
// ImageID computes an ID which can uniquely identify this image by its contents.
func (m *Schema2) ImageID([]digest.Digest) (string, error) {
if err := m.ConfigDescriptor.Digest.Validate(); err != nil {
return "", err
}
return m.ConfigDescriptor.Digest.Hex(), nil
}
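Tying the pieces above together, a sketch with caller-supplied descriptors: build a schema 2 manifest from components, serialize it, and read off its image ID, which for schema 2 is simply the config digest.

package manifestutil // hypothetical helper package

import (
	"github.com/containers/image/manifest"
)

// BuildSchema2 assembles a manifest from a config descriptor and layer
// descriptors, returning the encoded blob and the image ID.
func BuildSchema2(config manifest.Schema2Descriptor, layers []manifest.Schema2Descriptor) ([]byte, string, error) {
	m := manifest.Schema2FromComponents(config, layers)
	blob, err := m.Serialize()
	if err != nil {
		return nil, "", err
	}
	id, err := m.ImageID(nil) // diffIDs are not needed for schema 2
	if err != nil {
		return nil, "", err
	}
	return blob, id, nil
}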

View File

@@ -2,7 +2,9 @@ package manifest
import (
"encoding/json"
"fmt"
"github.com/containers/image/types"
"github.com/docker/libtrust"
"github.com/opencontainers/go-digest"
imgspecv1 "github.com/opencontainers/image-spec/specs-go/v1"
@@ -38,6 +40,45 @@ var DefaultRequestedManifestMIMETypes = []string{
DockerV2ListMediaType,
}
// Manifest is an interface for parsing and modifying image manifests in isolation.
// Callers can either use this abstract interface without understanding the details of the formats,
// or instantiate a specific implementation (e.g. manifest.OCI1) and access the public members
// directly.
//
// See types.Image for functionality not limited to manifests, including format conversions and config parsing.
// This interface is similar to, but not strictly equivalent to, the equivalent methods in types.Image.
type Manifest interface {
// ConfigInfo returns a complete BlobInfo for the separate config object, or a BlobInfo{Digest:""} if there isn't a separate object.
ConfigInfo() types.BlobInfo
// LayerInfos returns a list of LayerInfos of layers referenced by this image, in order (the root layer first, and then successive layered layers).
// The Digest field is guaranteed to be provided; Size may be -1.
// WARNING: The list may contain duplicates, and they are semantically relevant.
LayerInfos() []LayerInfo
// UpdateLayerInfos replaces the original layers with the specified BlobInfos (size+digest+urls), in order (the root layer first, and then successive layered layers)
UpdateLayerInfos(layerInfos []types.BlobInfo) error
// ImageID computes an ID which can uniquely identify this image by its contents, irrespective
// of which (of possibly more than one simultaneously valid) reference was used to locate the
// image, and unchanged by whether or how the layers are compressed. The result takes the form
// of the hexadecimal portion of a digest.Digest.
ImageID(diffIDs []digest.Digest) (string, error)
// Inspect returns various information for (skopeo inspect) parsed from the manifest,
// incorporating information from a configuration blob returned by configGetter, if
// the underlying image format is expected to include a configuration blob.
Inspect(configGetter func(types.BlobInfo) ([]byte, error)) (*types.ImageInspectInfo, error)
// Serialize returns the manifest in a blob format.
// NOTE: Serialize() does not in general reproduce the original blob if this object was loaded from one, even if no modifications were made!
Serialize() ([]byte, error)
}
// LayerInfo is an extended version of types.BlobInfo for low-level users of Manifest.LayerInfos.
type LayerInfo struct {
types.BlobInfo
EmptyLayer bool // The layer is an “empty”/“throwaway” one, and may or may not be physically represented in various transport / storage systems. false if the manifest type does not have the concept.
}
// GuessMIMEType guesses MIME type of a manifest and returns it _if it is recognized_, or "" if unknown or unrecognized.
// FIXME? We should, in general, prefer out-of-band MIME type instead of blindly parsing the manifest,
// but we may not have such metadata available (e.g. when the manifest is a local file).
@@ -142,3 +183,62 @@ func AddDummyV2S1Signature(manifest []byte) ([]byte, error) {
}
return js.PrettySignature("signatures")
}
// MIMETypeIsMultiImage returns true if mimeType is a list of images
func MIMETypeIsMultiImage(mimeType string) bool {
return mimeType == DockerV2ListMediaType
}
// NormalizedMIMEType returns the effective MIME type of a manifest MIME type returned by a server,
// centralizing various workarounds.
func NormalizedMIMEType(input string) string {
switch input {
// "application/json" is a valid v2s1 value per https://github.com/docker/distribution/blob/master/docs/spec/manifest-v2-1.md .
// This works for now, when nothing else seems to return "application/json"; if that were not true, the mapping/detection might
// need to happen within the ImageSource.
case "application/json":
return DockerV2Schema1SignedMediaType
case DockerV2Schema1MediaType, DockerV2Schema1SignedMediaType,
imgspecv1.MediaTypeImageManifest,
DockerV2Schema2MediaType,
DockerV2ListMediaType:
return input
default:
// If it's not a recognized manifest media type, or we have failed determining the type, we'll try one last time
// to deserialize using v2s1 as per https://github.com/docker/distribution/blob/master/manifests.go#L108
// and https://github.com/docker/distribution/blob/master/manifest/schema1/manifest.go#L50
//
// Crane registries can also return "text/plain", or pretty much anything else depending on a file extension “recognized” in the tag.
// This makes no real sense, but it happens because requests for manifests are redirected
// to a content distribution network which is configured that way. See https://bugzilla.redhat.com/show_bug.cgi?id=1389442
return DockerV2Schema1SignedMediaType
}
}
// FromBlob returns a Manifest instance for the specified manifest blob and the corresponding MIME type
func FromBlob(manblob []byte, mt string) (Manifest, error) {
switch NormalizedMIMEType(mt) {
case DockerV2Schema1MediaType, DockerV2Schema1SignedMediaType:
return Schema1FromManifest(manblob)
case imgspecv1.MediaTypeImageManifest:
return OCI1FromManifest(manblob)
case DockerV2Schema2MediaType:
return Schema2FromManifest(manblob)
case DockerV2ListMediaType:
return nil, fmt.Errorf("Treating manifest lists as individual manifests is not implemented")
default: // Note that this may not be reachable, NormalizedMIMEType has a default for unknown values.
return nil, fmt.Errorf("Unimplemented manifest MIME type %s", mt)
}
}
// layerInfosToStrings converts a list of layer infos, presumably obtained from a Manifest.LayerInfos()
// method call, into a format suitable for inclusion in a types.ImageInspectInfo structure.
func layerInfosToStrings(infos []LayerInfo) []string {
layers := make([]string, len(infos))
for i, info := range infos {
layers[i] = info.Digest.String()
}
return layers
}
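A sketch of format-agnostic use of the Manifest interface via FromBlob (hypothetical helper): the caller needs no knowledge of whether the blob is schema 1, schema 2, or OCI.

package manifestutil // hypothetical helper package

import (
	"github.com/containers/image/manifest"
)

// LayerDigests parses any supported single-image manifest and lists its
// layer digests; NormalizedMIMEType smooths over odd server-reported types.
func LayerDigests(blob []byte, mimeType string) ([]string, error) {
	m, err := manifest.FromBlob(blob, mimeType)
	if err != nil {
		return nil, err // e.g. manifest lists are rejected here
	}
	digests := []string{}
	for _, layer := range m.LayerInfos() {
		digests = append(digests, layer.Digest.String())
	}
	return digests, nil
}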

vendor/github.com/containers/image/manifest/oci.go generated vendored Normal file
View File

@@ -0,0 +1,129 @@
package manifest
import (
"encoding/json"
"github.com/containers/image/types"
"github.com/opencontainers/go-digest"
"github.com/opencontainers/image-spec/specs-go"
imgspecv1 "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/pkg/errors"
)
// BlobInfoFromOCI1Descriptor returns a types.BlobInfo based on the input OCI1 descriptor.
func BlobInfoFromOCI1Descriptor(desc imgspecv1.Descriptor) types.BlobInfo {
return types.BlobInfo{
Digest: desc.Digest,
Size: desc.Size,
URLs: desc.URLs,
Annotations: desc.Annotations,
MediaType: desc.MediaType,
}
}
// OCI1 is a manifest.Manifest implementation for OCI images.
// The underlying data from imgspecv1.Manifest is also available.
type OCI1 struct {
imgspecv1.Manifest
}
// OCI1FromManifest creates an OCI1 manifest instance from a manifest blob.
func OCI1FromManifest(manifest []byte) (*OCI1, error) {
oci1 := OCI1{}
if err := json.Unmarshal(manifest, &oci1); err != nil {
return nil, err
}
return &oci1, nil
}
// OCI1FromComponents creates an OCI1 manifest instance from the supplied data.
func OCI1FromComponents(config imgspecv1.Descriptor, layers []imgspecv1.Descriptor) *OCI1 {
return &OCI1{
imgspecv1.Manifest{
Versioned: specs.Versioned{SchemaVersion: 2},
Config: config,
Layers: layers,
},
}
}
// OCI1Clone creates a copy of the supplied OCI1 manifest.
func OCI1Clone(src *OCI1) *OCI1 {
return &OCI1{
Manifest: src.Manifest,
}
}
// ConfigInfo returns a complete BlobInfo for the separate config object, or a BlobInfo{Digest:""} if there isn't a separate object.
func (m *OCI1) ConfigInfo() types.BlobInfo {
return BlobInfoFromOCI1Descriptor(m.Config)
}
// LayerInfos returns a list of LayerInfos of layers referenced by this image, in order (the root layer first, and then successive layered layers).
// The Digest field is guaranteed to be provided; Size may be -1.
// WARNING: The list may contain duplicates, and they are semantically relevant.
func (m *OCI1) LayerInfos() []LayerInfo {
blobs := []LayerInfo{}
for _, layer := range m.Layers {
blobs = append(blobs, LayerInfo{
BlobInfo: BlobInfoFromOCI1Descriptor(layer),
EmptyLayer: false,
})
}
return blobs
}
// UpdateLayerInfos replaces the original layers with the specified BlobInfos (size+digest+urls), in order (the root layer first, and then successive layered layers)
func (m *OCI1) UpdateLayerInfos(layerInfos []types.BlobInfo) error {
if len(m.Layers) != len(layerInfos) {
return errors.Errorf("Error preparing updated manifest: layer count changed from %d to %d", len(m.Layers), len(layerInfos))
}
original := m.Layers
m.Layers = make([]imgspecv1.Descriptor, len(layerInfos))
for i, info := range layerInfos {
m.Layers[i].MediaType = original[i].MediaType
m.Layers[i].Digest = info.Digest
m.Layers[i].Size = info.Size
m.Layers[i].Annotations = info.Annotations
m.Layers[i].URLs = info.URLs
}
return nil
}
// Serialize returns the manifest in a blob format.
// NOTE: Serialize() does not in general reproduce the original blob if this object was loaded from one, even if no modifications were made!
func (m *OCI1) Serialize() ([]byte, error) {
return json.Marshal(*m)
}
// Inspect returns various information for (skopeo inspect) parsed from the manifest and configuration.
func (m *OCI1) Inspect(configGetter func(types.BlobInfo) ([]byte, error)) (*types.ImageInspectInfo, error) {
config, err := configGetter(m.ConfigInfo())
if err != nil {
return nil, err
}
v1 := &imgspecv1.Image{}
if err := json.Unmarshal(config, v1); err != nil {
return nil, err
}
d1 := &Schema2V1Image{}
// Best-effort: configs produced by docker also carry fields such as
// docker_version; if they are absent, the fields simply stay empty.
_ = json.Unmarshal(config, d1)
i := &types.ImageInspectInfo{
Tag: "",
Created: v1.Created,
DockerVersion: d1.DockerVersion,
Labels: v1.Config.Labels,
Architecture: v1.Architecture,
Os: v1.OS,
Layers: layerInfosToStrings(m.LayerInfos()),
}
return i, nil
}
// ImageID computes an ID which can uniquely identify this image by its contents.
func (m *OCI1) ImageID([]digest.Digest) (string, error) {
if err := m.Config.Digest.Validate(); err != nil {
return "", err
}
return m.Config.Digest.Hex(), nil
}
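The OCI analogue of the schema 2 construction above, as a sketch using the media-type constant from the image-spec package; the helper name and inputs are invented.

package manifestutil // hypothetical helper package

import (
	"github.com/containers/image/manifest"
	digest "github.com/opencontainers/go-digest"
	imgspecv1 "github.com/opencontainers/image-spec/specs-go/v1"
)

// BuildOCI1 assembles an OCI manifest; the image ID is the config digest's hex.
func BuildOCI1(configDigest digest.Digest, configSize int64, layers []imgspecv1.Descriptor) ([]byte, string, error) {
	m := manifest.OCI1FromComponents(imgspecv1.Descriptor{
		MediaType: imgspecv1.MediaTypeImageConfig,
		Digest:    configDigest,
		Size:      configSize,
	}, layers)
	blob, err := m.Serialize()
	if err != nil {
		return nil, "", err
	}
	id, err := m.ImageID(nil)
	if err != nil {
		return nil, "", err
	}
	return blob, id, nil
}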

View File

@@ -0,0 +1,139 @@
package archive
import (
"context"
"io"
"os"
"github.com/containers/image/types"
"github.com/containers/storage/pkg/archive"
"github.com/pkg/errors"
)
type ociArchiveImageDestination struct {
ref ociArchiveReference
unpackedDest types.ImageDestination
tempDirRef tempDirOCIRef
}
// newImageDestination returns an ImageDestination for writing an oci archive;
// the image is staged in a temporary OCI-layout directory and tarred up on Commit.
func newImageDestination(ctx context.Context, sys *types.SystemContext, ref ociArchiveReference) (types.ImageDestination, error) {
tempDirRef, err := createOCIRef(ref.image)
if err != nil {
return nil, errors.Wrapf(err, "error creating oci reference")
}
unpackedDest, err := tempDirRef.ociRefExtracted.NewImageDestination(ctx, sys)
if err != nil {
if err := tempDirRef.deleteTempDir(); err != nil {
return nil, errors.Wrapf(err, "error deleting temp directory", tempDirRef.tempDirectory)
}
return nil, err
}
return &ociArchiveImageDestination{ref: ref,
unpackedDest: unpackedDest,
tempDirRef: tempDirRef}, nil
}
// Reference returns the reference used to set up this destination.
func (d *ociArchiveImageDestination) Reference() types.ImageReference {
return d.ref
}
// Close removes resources associated with an initialized ImageDestination, if any
// Close deletes the temp directory of the oci-archive image
func (d *ociArchiveImageDestination) Close() error {
defer d.tempDirRef.deleteTempDir()
return d.unpackedDest.Close()
}
func (d *ociArchiveImageDestination) SupportedManifestMIMETypes() []string {
return d.unpackedDest.SupportedManifestMIMETypes()
}
// SupportsSignatures returns an error (to be displayed to the user) if the destination certainly can't store signatures
func (d *ociArchiveImageDestination) SupportsSignatures(ctx context.Context) error {
return d.unpackedDest.SupportsSignatures(ctx)
}
func (d *ociArchiveImageDestination) DesiredLayerCompression() types.LayerCompression {
return d.unpackedDest.DesiredLayerCompression()
}
// AcceptsForeignLayerURLs returns false iff foreign layers in the manifest should actually
// be uploaded to the image destination, true otherwise.
func (d *ociArchiveImageDestination) AcceptsForeignLayerURLs() bool {
return d.unpackedDest.AcceptsForeignLayerURLs()
}
// MustMatchRuntimeOS returns true iff the destination can store only images targeted for the current runtime OS. False otherwise
func (d *ociArchiveImageDestination) MustMatchRuntimeOS() bool {
return d.unpackedDest.MustMatchRuntimeOS()
}
// IgnoresEmbeddedDockerReference returns true iff the destination does not care about Image.EmbeddedDockerReferenceConflicts(),
// and would prefer to receive an unmodified manifest instead of one modified for the destination.
// Does not make a difference if Reference().DockerReference() is nil.
func (d *ociArchiveImageDestination) IgnoresEmbeddedDockerReference() bool {
return d.unpackedDest.IgnoresEmbeddedDockerReference()
}
// PutBlob writes contents of stream and returns data representing the result (with all data filled in).
// inputInfo.Digest can be optionally provided if known; it is not mandatory for the implementation to verify it.
// inputInfo.Size is the expected length of stream, if known.
func (d *ociArchiveImageDestination) PutBlob(ctx context.Context, stream io.Reader, inputInfo types.BlobInfo, isConfig bool) (types.BlobInfo, error) {
return d.unpackedDest.PutBlob(ctx, stream, inputInfo, isConfig)
}
// HasBlob returns true iff the image destination already contains a blob with the matching digest which can be reapplied using ReapplyBlob
func (d *ociArchiveImageDestination) HasBlob(ctx context.Context, info types.BlobInfo) (bool, int64, error) {
return d.unpackedDest.HasBlob(ctx, info)
}
func (d *ociArchiveImageDestination) ReapplyBlob(ctx context.Context, info types.BlobInfo) (types.BlobInfo, error) {
return d.unpackedDest.ReapplyBlob(ctx, info)
}
// PutManifest writes manifest to the destination
func (d *ociArchiveImageDestination) PutManifest(ctx context.Context, m []byte) error {
return d.unpackedDest.PutManifest(ctx, m)
}
func (d *ociArchiveImageDestination) PutSignatures(ctx context.Context, signatures [][]byte) error {
return d.unpackedDest.PutSignatures(ctx, signatures)
}
// Commit marks the process of storing the image as successful and asks for the image to be persisted.
// After the directory is fully written, it is tarred up into the archive file and the directory is deleted.
func (d *ociArchiveImageDestination) Commit(ctx context.Context) error {
if err := d.unpackedDest.Commit(ctx); err != nil {
return errors.Wrapf(err, "error storing image %q", d.ref.image)
}
// path of directory to tar up
src := d.tempDirRef.tempDirectory
// path to save tarred up file
dst := d.ref.resolvedFile
return tarDirectory(src, dst)
}
// tarDirectory tars up the directory at src and saves the result to dst
func tarDirectory(src, dst string) error {
// input is a stream of bytes from the archive of the directory at path
input, err := archive.Tar(src, archive.Uncompressed)
if err != nil {
return errors.Wrapf(err, "error retrieving stream of bytes from %q", src)
}
// creates the tar file
outFile, err := os.Create(dst)
if err != nil {
return errors.Wrapf(err, "error creating tar file %q", dst)
}
defer outFile.Close()
// copies the contents of the directory to the tar file
// TODO: This can take quite some time, and should ideally be cancellable using a context.Context.
_, err = io.Copy(outFile, input)
return err
}
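For symmetry, the source side (next file) must untar the archive before reading; a minimal sketch of that inverse step, assuming containers/storage's archive.Untar and invented paths:

package archiveutil // hypothetical helper package

import (
	"os"

	"github.com/containers/storage/pkg/archive"
	"github.com/pkg/errors"
)

// untarToDirectory unpacks the tar file at src into the directory dst.
func untarToDirectory(src, dst string) error {
	f, err := os.Open(src)
	if err != nil {
		return errors.Wrapf(err, "error opening tar file %q", src)
	}
	defer f.Close()
	// nil options: extract with defaults; dst is created if it does not exist.
	return archive.Untar(f, dst, nil)
}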

View File

@@ -0,0 +1,95 @@
package archive
import (
"context"
"io"
ocilayout "github.com/containers/image/oci/layout"
"github.com/containers/image/types"
digest "github.com/opencontainers/go-digest"
imgspecv1 "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/pkg/errors"
)
type ociArchiveImageSource struct {
ref ociArchiveReference
unpackedSrc types.ImageSource
tempDirRef tempDirOCIRef
}
// newImageSource untars the oci-archive file into a temporary directory
// and returns an ImageSource for reading from that unpacked layout.
func newImageSource(ctx context.Context, sys *types.SystemContext, ref ociArchiveReference) (types.ImageSource, error) {
tempDirRef, err := createUntarTempDir(ref)
if err != nil {
return nil, errors.Wrap(err, "error creating temp directory")
}
unpackedSrc, err := tempDirRef.ociRefExtracted.NewImageSource(ctx, sys)
if err != nil {
if err := tempDirRef.deleteTempDir(); err != nil {
return nil, errors.Wrapf(err, "error deleting temp directory", tempDirRef.tempDirectory)
}
return nil, err
}
return &ociArchiveImageSource{ref: ref,
unpackedSrc: unpackedSrc,
tempDirRef: tempDirRef}, nil
}
// LoadManifestDescriptor loads the image's manifest descriptor from the archive's index.
func LoadManifestDescriptor(imgRef types.ImageReference) (imgspecv1.Descriptor, error) {
ociArchRef, ok := imgRef.(ociArchiveReference)
if !ok {
return imgspecv1.Descriptor{}, errors.Errorf("error typecasting, need type ociArchiveReference")
}
tempDirRef, err := createUntarTempDir(ociArchRef)
if err != nil {
return imgspecv1.Descriptor{}, errors.Wrap(err, "error creating temp directory")
}
defer tempDirRef.deleteTempDir()
descriptor, err := ocilayout.LoadManifestDescriptor(tempDirRef.ociRefExtracted)
if err != nil {
return imgspecv1.Descriptor{}, errors.Wrap(err, "error loading index")
}
return descriptor, nil
}
// Reference returns the reference used to set up this source.
func (s *ociArchiveImageSource) Reference() types.ImageReference {
return s.ref
}
// Close removes resources associated with an initialized ImageSource, if any.
// Close deletes the temporary directory at dst
func (s *ociArchiveImageSource) Close() error {
defer s.tempDirRef.deleteTempDir()
return s.unpackedSrc.Close()
}
// GetManifest returns the image's manifest along with its MIME type (which may be empty when it can't be determined but the manifest is available).
// It may use a remote (= slow) service.
// If instanceDigest is not nil, it contains a digest of the specific manifest instance to retrieve (when the primary manifest is a manifest list);
// this never happens if the primary manifest is not a manifest list (e.g. if the source never returns manifest lists).
func (s *ociArchiveImageSource) GetManifest(ctx context.Context, instanceDigest *digest.Digest) ([]byte, string, error) {
return s.unpackedSrc.GetManifest(ctx, instanceDigest)
}
// GetBlob returns a stream for the specified blob, and the blob's size.
func (s *ociArchiveImageSource) GetBlob(ctx context.Context, info types.BlobInfo) (io.ReadCloser, int64, error) {
return s.unpackedSrc.GetBlob(ctx, info)
}
// GetSignatures returns the image's signatures. It may use a remote (= slow) service.
// If instanceDigest is not nil, it contains a digest of the specific manifest instance to retrieve signatures for
// (when the primary manifest is a manifest list); this never happens if the primary manifest is not a manifest list
// (e.g. if the source never returns manifest lists).
func (s *ociArchiveImageSource) GetSignatures(ctx context.Context, instanceDigest *digest.Digest) ([][]byte, error) {
return s.unpackedSrc.GetSignatures(ctx, instanceDigest)
}
// LayerInfosForCopy() returns updated layer info that should be used when reading, in preference to values in the manifest, if specified.
func (s *ociArchiveImageSource) LayerInfosForCopy(ctx context.Context) ([]types.BlobInfo, error) {
return nil, nil
}

Some files were not shown because too many files have changed in this diff.