Compare commits

...

186 Commits

Author SHA1 Message Date
Antonio Murdaca
9e971b4937 Merge pull request #85 from runcom/bump-v0.1.13
bump to v0.1.13
2016-05-31 16:43:55 +02:00
Antonio Murdaca
bd018696bd bump to v0.1.13
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2016-05-31 16:35:08 +02:00
Antonio Murdaca
ad7eb5d221 Merge pull request #84 from mtrmac/gpgme-32bit
Rerun hack/vendor.sh to fix build on 32-bit systems
2016-05-31 16:28:26 +02:00
Miloslav Trmač
80ccbaa021 Rerun hack/vendor.sh to fix build on 32-bit systems
i.e. to pick up https://github.com/proglottis/gpgme/pull/10

Fixes #80.
2016-05-31 16:12:44 +02:00
Antonio Murdaca
c24b42177e Merge pull request #83 from projectatomic/remove-from-api
Remove ManifestMIMETypes
2016-05-31 11:28:51 +02:00
Antonio Murdaca
6fc6d809e0 Remove ManifestMIMETypes
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2016-05-31 11:19:09 +02:00
Miloslav Trmač
0d95328125 Merge pull request #79 from mtrmac/image-from-imagesource
Make types.Image Docker-independent, add docker.GenericImageFromSource
2016-05-30 17:35:34 +02:00
Miloslav Trmač
41dbbc9b50 Support dir:… as an image specification in (skopeo {inspect,layers})
This is not expected to be that useful in production; for now it serves
as a demonstration of the concept, and it allows (skopeo inspect) to be
clumsily used as a parser of stand-alone manifests (by creating a dir:
structure with that manifest).

(skopeo layers) support follows naturally, but is even less useful.
2016-05-28 02:11:32 +02:00
Miloslav Trmač
323b56a049 Make types.Image Docker-independent
The remaining uses of the dependencies, in (skopeo inspect), now check
whether their types.Image is a docker.Image and call the docker.Image
functions directly.

This does not change behavior for Docker images.

For non-Docker images (which can't happen yet), the Name field is
removed; RepoTags remain and are reported as empty, because using
json:",omitempty" would also omit an empty list for Docker images.
2016-05-28 02:11:32 +02:00
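
[The json:",omitempty" trade-off above is easy to demonstrate; a small self-contained Go sketch, with hypothetical struct names — only the tag difference matters:]

    package main

    import (
    	"encoding/json"
    	"fmt"
    )

    // Hypothetical types illustrating the trade-off described above.
    type withOmitEmpty struct {
    	RepoTags []string `json:"RepoTags,omitempty"`
    }

    type withoutOmitEmpty struct {
    	RepoTags []string `json:"RepoTags"`
    }

    func main() {
    	a, _ := json.Marshal(withOmitEmpty{RepoTags: []string{}})
    	b, _ := json.Marshal(withoutOmitEmpty{RepoTags: []string{}})
    	fmt.Println(string(a)) // {} (the empty list disappears entirely)
    	fmt.Println(string(b)) // {"RepoTags":[]} (kept, as wanted for Docker images)
    }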
Miloslav Trmač
ea643e8658 Use types.ImageSource instead of *dockerImageSource in genericImage
This finally makes genericImage Docker-independent.

(dockerImage is still the only implementation of types.Image.)
2016-05-28 02:11:32 +02:00
Miloslav Trmač
cada464c90 Split dockerImage to genericImage and docker.Image
The code not dependent on specifics of DockerImageSource now lives in
docker.genericImage; the rest directly in docker.Image.

docker.Image remains the only implementation of types.Image at this
point, but that will change.
2016-05-28 02:09:22 +02:00
Miloslav Trmač
0da4307aea Split SourceRefFullName from types.Image.Inspect
This is the only Docker-specific aspect of types.Image.Inspect.

This does not change behavior; plausibly we might want to replace the
Name value in (skopeo inspect) by something else which is not dependent
on Docker, but that can be a separate work later.

Adds a FIXME? in docker_image.go for consistency with
dockerImage.GetRepositoryTags, both will be removed later in the
patchset.
2016-05-28 02:08:12 +02:00
Miloslav Trmač
143e3602ae Split Docker-independent parts of dockerImage into docker/image.go
This does not change the code at all, only moving things around now.
2016-05-28 02:04:48 +02:00
Miloslav Trmač
0314fdb49e Remove temporary variables in (skopeo inspect)
We abort on failure to get the data anyway, so there is no need to use
temporaries to avoid modifying outputData on failure.

This is not a simplification yet, but handling optional (e.g.
Docker-specific) data this way will be simpler, and handling
non-optional data the same way will be more consistent.
2016-05-28 02:04:48 +02:00
Antonio Murdaca
847b5bff85 Merge pull request #78 from runcom/bump-0113dev
bump to v0.1.13-dev
2016-05-27 12:42:29 +02:00
Antonio Murdaca
864568bbd9 bump to v0.1.13-dev
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2016-05-27 12:26:54 +02:00
Antonio Murdaca
015f1c8c9a Merge pull request #77 from runcom/bump-0112
bump to v0.1.12
2016-05-27 12:25:54 +02:00
Antonio Murdaca
46bb9a0698 bump to v0.1.12
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2016-05-27 12:16:25 +02:00
Antonio Murdaca
62af96c5c9 Merge pull request #74 from mtrmac/key-import
Add SigningMechanism.ImportKeysFromBytes
2016-05-25 16:35:42 +02:00
Miloslav Trmač
aee0abb5d2 Add SigningMechanism.ImportKeysFromBytes
This will be needed for verification against specified public keys.

Also rerun hack/vendor.sh to pick up import support from
github.com/mtrmac/gpgme .
2016-05-25 16:04:20 +02:00
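
[For orientation, the shape of the new method is roughly as follows — a sketch, not the verbatim signature-package interface:]

    package signature

    // SigningMechanism sketch; only ImportKeysFromBytes is the new part,
    // and its exact signature here is an assumption.
    type SigningMechanism interface {
    	// ImportKeysFromBytes imports public keys from the given blob
    	// (e.g. an exported GPG key) and returns their identities.
    	ImportKeysFromBytes(blob []byte) ([]string, error)
    	// ... existing signing/verification methods elided ...
    }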
Miloslav Trmač
721a628f4a Merge pull request #76 from mtrmac/policy-config
Update a comment for prInsecureAcceptAnything
2016-05-25 16:03:30 +02:00
Miloslav Trmač
10280f2e0d Update a comment for prInsecureAcceptAnything 2016-05-25 15:53:12 +02:00
Miloslav Trmač
9ccfc6a423 Merge pull request #55 from mtrmac/policy-config
Add policy configuration data structures, construction and parsing
2016-05-25 15:46:53 +02:00
Miloslav Trmač
d9b1c229e5 Add policy configuration data structures, construction and parsing 2016-05-24 20:24:15 +02:00
Miloslav Trmač
7a8602c54c Add paranoidUnmarshalJSONObject() helper
This allows unmarshaling JSON data and refusing any ambiguous input, to
make sure users don't make mistakes when writing policy.

This might be a bit easier with reflection, but we will need the
non-reflection variant (for unmarshaling a map type) anyway, and quite a
few users that do ultimately unmarshal into a struct need to override
the type of one or more fields, so reflection would force them to define
temporary fields, which is not necessarily any better.
2016-05-24 18:16:33 +02:00
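
[The general technique is token-level decoding; a minimal self-contained sketch of the idea — not the actual skopeo helper — could look like this:]

    package main

    import (
    	"bytes"
    	"encoding/json"
    	"fmt"
    )

    // strictUnmarshalObject walks a JSON object token by token and rejects
    // duplicated or unknown keys instead of silently ignoring them the way
    // a plain json.Unmarshal would.
    func strictUnmarshalObject(data []byte, fields map[string]interface{}) error {
    	dec := json.NewDecoder(bytes.NewReader(data))
    	tok, err := dec.Token()
    	if err != nil || tok != json.Delim('{') {
    		return fmt.Errorf("expected a JSON object")
    	}
    	seen := map[string]bool{}
    	for dec.More() {
    		keyTok, err := dec.Token()
    		if err != nil {
    			return err
    		}
    		key := keyTok.(string) // object keys are always strings
    		if seen[key] {
    			return fmt.Errorf("duplicated key %q", key)
    		}
    		seen[key] = true
    		dest, ok := fields[key]
    		if !ok {
    			return fmt.Errorf("unknown key %q", key)
    		}
    		if err := dec.Decode(dest); err != nil {
    			return err
    		}
    	}
    	// consume the closing '}'
    	if _, err := dec.Token(); err != nil {
    		return err
    	}
    	return nil
    }

    func main() {
    	var typ string
    	err := strictUnmarshalObject([]byte(`{"type":"a","type":"b"}`),
    		map[string]interface{}{"type": &typ})
    	fmt.Println(err) // duplicated key "type"
    }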
Antonio Murdaca
dbb47e6bb6 Merge pull request #72 from runcom/godoc
Godoc
2016-05-24 17:37:53 +02:00
Antonio Murdaca
67119f4875 add doc.go stub
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2016-05-24 17:29:12 +02:00
Miloslav Trmač
3d1201007e Merge pull request #73 from runcom/mimetypes-choose
add the possibility to choose image's MIME type
2016-05-24 17:10:07 +02:00
Antonio Murdaca
15f478e26b add the possibility to choose image's MIME type
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2016-05-24 16:52:39 +02:00
Miloslav Trmač
0abbb9a2ce Merge pull request #69 from runcom/re-mimetypes
add mimetypes
2016-05-23 21:00:28 +02:00
Antonio Murdaca
7d12b66fb8 add mimetypes
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2016-05-23 20:40:52 +02:00
Miloslav Trmač
814a2a6f94 Merge pull request #70 from projectatomic/cleanups
Cleanups
2016-05-23 19:29:06 +02:00
Antonio Murdaca
4036b3543e cleanup API
moving stuff around (godoc.org review)

Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2016-05-23 17:20:55 +02:00
Miloslav Trmač
1cf55db9be Merge pull request #61 from runcom/fix-godoc
signature: remove pkg fixtures
2016-05-23 16:40:12 +02:00
Antonio Murdaca
7c5db83261 Remove signature/fixtures subpackage
This will make the output of godoc cleaner; we can't filter out the
subpackage otherwise.

Also copy the needed fixture into the integration subpackage, instead of
referring to it using ../signature/fixtures (and we can't import
signature/fixtures_info-test.go now).

Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2016-05-21 21:21:58 +02:00
Antonio Murdaca
7522e6c99c Merge pull request #68 from mtrmac/cleanups
Random cleanups
2016-05-21 10:51:42 +02:00
Miloslav Trmač
09f33a7c2c Remove a redundant check
reference.WithDefaultTag is already calling reference.IsNameOnly, so we
don't need to guard it on the outside.
2016-05-21 04:48:33 +02:00
Miloslav Trmač
521e3ce0eb Remove duplicated test 2016-05-21 04:46:57 +02:00
Miloslav Trmač
532fae24ac Merge pull request #65 from runcom/fix-headers
provide a way to pass multi values-single key headers
2016-05-19 18:43:25 +02:00
Antonio Murdaca
c661fad3eb provide a way to pass multi values-single key headers
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2016-05-18 17:02:53 +02:00
Miloslav Trmač
09f82a5ad2 Merge pull request #60 from projectatomic/more-cleanups
move dockerutils under docker
2016-05-17 17:52:10 +02:00
Antonio Murdaca
e775248b96 move dockerutils under docker/utils
also remove fixtures pkg as it would clutter godoc (there's no need
to have .go files with fixtures)

Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2016-05-17 17:35:32 +02:00
Antonio Murdaca
df618e5f7a Merge pull request #58 from mtrmac/api-general
General image API cleanups
2016-05-17 15:58:08 +02:00
Miloslav Trmač
6a357b6fcc Add comments on the use of the API and the general direction. 2016-05-17 15:50:17 +02:00
Miloslav Trmač
7598aab521 Clean up Image.Inspect
This does not change behavior.

Rename types.DockerImageManifest to types.ImageInspectInfo.

This naming more accurately reflects what the function does and how it is
expected to be used.

(The only outstanding non-inspection piece is the Name field, which is
kind of a subset of GetIntendedDockerReference() right now. Not sure
whether that is intentional.)

Also fold makeImageManifest into its only user.
2016-05-16 20:50:46 +02:00
Miloslav Trmač
9766f72760 Move listing of repository tags from Image.Manifest to a separate Image.GetRepositoryTags
This does not change behavior.

Splits listing of repository tags, which is not a property of an image,
from the image.Manifest gathering of information about an image.
2016-05-16 20:50:46 +02:00
Miloslav Trmač
60e8d63712 Compute the digest in (skopeo inspect) instead of trusting the registry
Compute the digest ourselves; the registry is in general untrusted and
computing it ourselves is easy enough.

Then stop passing the unverifiedCanonicalDigest value around, simplifying
ImageSource.GetManifest and related code.  In particular, remove
retrieveRawManifest and have internal users just call Manifest() now that
we don't need the digest.
2016-05-16 20:50:45 +02:00
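
[Computing the digest locally is a one-liner; a minimal sketch, assuming the canonical manifest bytes are the payload as served (after any v2s1 signature stripping described elsewhere in this history):]

    package main

    import (
    	"crypto/sha256"
    	"fmt"
    )

    // manifestDigest returns the canonical digest of the given manifest
    // bytes, instead of trusting the registry's Docker-Content-Digest header.
    func manifestDigest(manifest []byte) string {
    	return fmt.Sprintf("sha256:%x", sha256.Sum256(manifest))
    }

    func main() {
    	fmt.Println(manifestDigest([]byte(`{"schemaVersion":2}`)))
    }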
Miloslav Trmač
e3d257e7b5 Decouple (skopeo inspect) output formatting from types.Image.Manifest
Does not change behavior.

This will allow us to move collecting some of the data to the (skopeo
inspect) code and to have a more focused types.Image API, where
types.Image.Manifest() does not return a grab bag of manifest-unrelated
data, eventually.

For now it actually makes the coupling more explicit by having
types.Image.Manifest() return a types.DockerImageManifest instead of the
too-generic types.ImageManifest.  We will need to think about which
parts of DockerImageManifest are truly generic, later.
2016-05-16 20:50:45 +02:00
Antonio Murdaca
c75f0f6780 Merge pull request #59 from mtrmac/api-for-signatures
API for signatures - drop Get prefixes on quick getters
2016-05-16 20:43:48 +02:00
Miloslav Trmač
c38ed76969 Rename GetIntendedDockerReference to IntendedDockerReference 2016-05-16 20:33:13 +02:00
Miloslav Trmač
119609b871 Drop the Get prefix from types.Image.GetManifest and GetSignatures
Keeps the Get prefix on the equivalent methods on types.ImageSource, to
hint that they may be slow.
2016-05-16 20:29:52 +02:00
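
[In sketch form — not the verbatim skopeo interfaces — the naming convention reads:]

    package types

    // ImageSource keeps the Get prefix as a hint that calls may be slow
    // (network round-trips); method sets here are illustrative assumptions.
    type ImageSource interface {
    	GetManifest() ([]byte, error)
    	GetSignatures() ([][]byte, error)
    }

    // Image drops the prefix: these are guaranteed-cached quick getters.
    type Image interface {
    	Manifest() ([]byte, error)
    	Signatures() ([][]byte, error)
    }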
Miloslav Trmač
dc7a05ebf9 Rename types.Image.Manifest to types.Image.Inspect
Does not change behavior.

This better expresses the purpose of this method (it is working with
more, currently much more, than the manifest), and frees up the Manifest
method name for a simple getter of the raw blob.
2016-05-16 20:24:39 +02:00
Antonio Murdaca
d4eb69e1ab Merge pull request #57 from mtrmac/api-for-signatures
API for signatures
2016-05-16 20:10:03 +02:00
Miloslav Trmač
e4913bd0b0 Add GetIntendedDockerReference to types.Image and types.ImageSource
This will be necessary for signature verification and related policy
evaluation in the future.
2016-05-16 19:25:11 +02:00
Miloslav Trmač
feb9de4845 Add GetManifest and GetSignatures to types.Image
No change in behavior.

These functions are guaranteed-cached versions of the same methods in
types.ImageSource.  Both will be needed for signature policy evaluation,
and the symmetry with ImageSource is nice.

Also replaces the equivalent RawManifest method, preferring to keep
the same naming convention as types.ImageSource.
2016-05-16 19:25:11 +02:00
Antonio Murdaca
a39474c817 Merge pull request #56 from mtrmac/sub-pkgs
Move directory, docker and openshift from cmd/skopeo to their own subpackages
2016-05-16 18:44:18 +02:00
Miloslav Trmač
f526328b30 Move directory, docker and openshift from cmd/skopeo to their own subpackages
Does not change behavior.  This is a straightforward move and update of
package references, except for:

- Adding a duplicate definition of manifestSchema1 to
  cmd/skopeo/copy.go.  This will need to be cleaned up later, for now
  preferring to make no design changes in this commit.
- Renaming parseDockerImage to NewDockerImage, to both make it public
  and consistent with common golang conventions.
2016-05-16 18:32:32 +02:00
Antonio Murdaca
b48c78b154 Merge pull request #54 from mtrmac/cleanups
signature cleanups
2016-05-16 16:01:12 +02:00
Miloslav Trmač
c8d8608b57 Move the x() helper from signature_test.go to json_test.go
It will be used in other tests as well.
2016-05-16 14:57:05 +02:00
Miloslav Trmač
35ba0edf0d Remove an unused savedEnvironment type 2016-05-16 14:57:05 +02:00
Miloslav Trmač
345d0c3e2b Reset a json.Unmarshal destination right before the call
… similar to how we do it in other places.
2016-05-16 14:57:05 +02:00
Miloslav Trmač
2ddaa122ab s/tryUnmarshalModified/tryUnmarshalModifiedSignature/g
We will be adding similar test helpers for other types as well, so avoid
the naming conflict.
2016-05-16 14:57:05 +02:00
Miloslav Trmač
7f7c71836c Move strict JSON parsing utilities into a separate file.
No semantic change, only a reorganization: The utilities now return
jsonFormatError instead of InvalidSignatureError, but their only
caller maps it back.
2016-05-16 14:57:05 +02:00
Miloslav Trmač
b5e8413d22 Add compile-time checks that privateSignature implements json.Marshaler and json.Unmarshaler 2016-05-16 14:57:05 +02:00
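
[The standard Go idiom for such compile-time checks, sketched with a stubbed-out privateSignature — the real type's fields and receiver choices may differ:]

    package signature

    import "encoding/json"

    type privateSignature struct{ /* fields elided */ }

    func (s privateSignature) MarshalJSON() ([]byte, error)  { return []byte("{}"), nil }
    func (s *privateSignature) UnmarshalJSON(b []byte) error { return nil }

    // These declarations cost nothing at runtime but fail the build if the
    // type stops satisfying either interface.
    var _ json.Marshaler = privateSignature{}
    var _ json.Unmarshaler = (*privateSignature)(nil)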
Miloslav Trmač
14686616c1 Add an API stability warning to mechanism.go 2016-05-16 14:57:05 +02:00
Antonio Murdaca
c89bc5cc4a Merge pull request #53 from mtrmac/copy-no-digest
Don't write the manifest digest on stdout in (skopeo copy)
2016-05-14 10:01:00 +02:00
Miloslav Trmač
6db8872406 Don't write the manifest digest on stdout in (skopeo copy)
The dir: source type does not return the value, the value is
untrusted/not validated, and it is not at all clear why we should print
it in the first place.
2016-05-11 17:36:02 +02:00
Miloslav Trmač
a4fba7b0a0 Merge pull request #51 from projectatomic/refactor-for-lib
*: move pkg main into cmd/skopeo/
2016-05-10 12:29:11 +02:00
Antonio Murdaca
3dc3957607 *: move pkg main into cmd/skopeo/
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2016-05-10 11:04:03 +02:00
Antonio Murdaca
0bd8bea9ec Merge pull request #50 from jwhonce/wip/manpage
Correct man page formatting
2016-05-09 23:20:40 +02:00
Jhon Honce
780bd132ac Correct man page formatting
Signed-off-by: Jhon Honce <jhonce@redhat.com>
2016-05-09 13:48:31 -07:00
Miloslav Trmač
dbdb03eddb Merge pull request #49 from projectatomic/split-docker
*: split docker.go for future pkg creation
2016-05-09 21:30:38 +02:00
Antonio Murdaca
f43a92a78f *: split docker.go for future pkg creation
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2016-05-09 21:13:17 +02:00
Antonio Murdaca
9229d72a37 Merge pull request #48 from mtrmac/gpgme-update
Rerun hack/vendor.sh to pick up gpgme changes
2016-05-07 11:35:49 +02:00
Miloslav Trmač
5a2b4005bb Rerun hack/vendor.sh to pick up gpgme changes
In particular, https://github.com/proglottis/gpgme/pull/8 .
2016-05-07 02:33:51 +02:00
Antonio Murdaca
9d24de4c57 Merge pull request #45 from mtrmac/gpgme-update
Rerun hack/vendor.sh to pick up gpgme changes
2016-05-06 22:58:53 +02:00
Miloslav Trmač
fe37c71a4f Rerun hack/vendor.sh to pick up gpgme changes
See https://github.com/proglottis/gpgme/pull/7 for the full discussion.

Resolves #42 .
2016-05-06 22:44:30 +02:00
Antonio Murdaca
28973c0a2d Merge pull request #43 from mtrmac/openshift-copypasta
Add Atomic Registry support for push and pull, and a new “copy” command
2016-05-05 15:39:28 +02:00
Miloslav Trmač
026acb2a57 Add a --sign-by flag to the (skopeo copy) command.
This expects a GPG key fingerprint as a value of the argument (though
other key identification methods, like mitr@volny.cz, happen to work).

Do we need to namespace this (gpg:…)?

Note that this is unusable at the moment because only the dir: backend
implements storing signatures, and this backend cannot determine
the canonical Docker reference to use as a signed image identity.
2016-05-04 17:32:51 +02:00
Miloslav Trmač
da24e319af Add CanonicalDockerReference to ImageDestination
This is necessary to resolve the canonical form of a reference for
signing it.
2016-05-04 17:32:51 +02:00
Miloslav Trmač
2e48975b8b Add a "copy" command for copying images
This copies an image from ImageSource to ImageDestination, e.g.

skopeo copy atomic:mitr/busybox:latest dir:t-down # pull
skopeo copy dir:t-up atomic:mitr/busybox:latest # push
2016-05-04 17:32:51 +02:00
Miloslav Trmač
56f9c987a2 Add utilities for parsing Docker URIs into ImageSource and ImageDestination objects
This finally uses all of the ImageSource and ImageDestination
implementations, though these utilities are in turn not used yet.

Adds unresolved FIXME (FIXME!!) notes for the tlsVerify default value;
for now, the code follows the existing parseImage semantics.

Also note the naming inconsistency: dir:…, atomic:…, but
docker://… .  I think the non-// names are cleaner, but if we are
committed to docker://…, just being consistent might be better.
2016-05-04 17:32:51 +02:00
Miloslav Trmač
36d4353229 Add OpenShift implementations of ImageSource and ImageDestination
Note that this assumes that both (docker login) and (oc login) have
happened, that the credentials can be read from the usual config files,
and that the default OpenShift instance should be used.

This includes copy&pasted/modified/simplified code from OpenShift
and Kubernetes, primarily for config file parsing and setting up
TLS and HTTP authentication.

This is much smaller than linking to the upstream OpenShift client
libraries, which via various abstractions and registration drag in much
(dozens of megabytes) more code.

The primary loss from this simplification is automatic conversions
between various versions of the API objects, both for the REST API and
for local configuration storage.

This does not contain downloading/uploading signatures, which depends on
server-side support.
2016-05-04 17:32:51 +02:00
Miloslav Trmač
935eee7592 Add an ImageDestination implementation for the Docker Registry
Note that this does not allow uploading under new tags; Docker Registry
requires the tag to be present within the manifest, i.e. we might need
to modify the (possibly signed) manifest.

For now, uploading manifests only identified by a digest is sufficient
for the Atomic Registry; tagging happens in OpenShift imagestreams.
2016-05-04 17:32:51 +02:00
Miloslav Trmač
0587501ff0 Split dockerClient from dockerImageSource
The dockerClient encapsulates makeRequest and authentication setup, and
will be shared between the pull and push code.

This is only a restructuring, does not change behavior.

The dockerImage->dockerImageSource->dockerClient inclusion chain is
somewhat ugly; hopefully we will eventually move the remaining
dockerImage functionality either to dockerutils or to the top level, and
then eliminate it.
2016-05-04 17:32:51 +02:00
Miloslav Trmač
2790d9a1c3 Make dockerutils.GuessManifestMIMEType public
The Docker Registry manifest upload should supply a Content-Type, and
guessing from the contents is the easiest we can do right now.

Also eliminate dockerutils.manifestMIMEType; it makes using the returned
value too difficult to be worth the extra safety.
2016-05-04 17:32:51 +02:00
Antonio Murdaca
696eb74918 Merge pull request #30 from mtrmac/cleanups
Cleanups
2016-05-04 17:29:14 +02:00
Miloslav Trmač
654050b7e8 Fix handling of !tlsVerify when certPath is not set 2016-05-04 17:19:59 +02:00
Miloslav Trmač
7bee2da169 Use the provided method in dockerImageSource.makeRequest instead of hard-coding GET 2016-05-04 17:19:59 +02:00
Miloslav Trmač
60fbdd3988 Do not assume GetManifest is called before other dockerImageSource methods
Call dockerImageSource.ping() in .makeRequest() if needed, instead of
expecting a caller to do it (which only happened in GetManifest).

This required splitting the URLs into the baseURL (dependent on .ping()
result) and the suffix (independent of it), which was a simplification
anyway.

Also rename WWWAuthenticate to wwwAuthenticate, it is a private cache
field.
2016-05-04 17:19:59 +02:00
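
[A sketch of the ping-on-demand pattern this describes — names, fields and signatures are assumptions, not the actual skopeo code:]

    package docker

    import "net/http"

    type dockerImageSource struct {
    	baseURL string // filled in by ping(); scheme depends on the probe
    	client  *http.Client
    }

    func (s *dockerImageSource) ping() error {
    	// Probe the registry (e.g. GET /v2/) and cache the usable base URL.
    	s.baseURL = "https://registry.example.com" // placeholder result
    	return nil
    }

    // makeRequest no longer assumes GetManifest ran first: it pings lazily,
    // then appends the ping-independent URL suffix to the cached base URL.
    func (s *dockerImageSource) makeRequest(method, urlSuffix string) (*http.Response, error) {
    	if s.baseURL == "" {
    		if err := s.ping(); err != nil {
    			return nil, err
    		}
    	}
    	req, err := http.NewRequest(method, s.baseURL+urlSuffix, nil)
    	if err != nil {
    		return nil, err
    	}
    	return s.client.Do(req)
    }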
Miloslav Trmač
14bc664b48 Remove a redundant dockerImageSource.makeRequest parameter
It is always computed in the same, or equivalent, way.

Also remove pingResponse.needsAuth, only used in the above.
2016-05-04 17:19:59 +02:00
Antonio Murdaca
749a8e3b82 Merge pull request #38 from jwhonce/wip/manpage
Add disclaimer to man page for sign-* commands
2016-05-04 17:10:06 +02:00
Jhon Honce
0470e7fb0f Add disclaimer to man page for sign-* commands
Signed-off-by: Jhon Honce <jhonce@redhat.com>
2016-05-04 07:36:38 -07:00
Antonio Murdaca
266f0b8487 Merge pull request #29 from mtrmac/source-dest
Add ImageSource and ImageDestination abstractions
2016-05-04 13:03:23 +02:00
Miloslav Trmač
fd41449410 Use dirImageDestination for writing to local files in docker.go
This will hopefully allow better reuse of the "copy images" code from
docker.go in the future.

No behavior change, the dirImageDestination code was based on the code
this commit is replacing.
2016-05-02 19:43:16 +02:00
Miloslav Trmač
af126bc68c Add an ImageSource and ImageDestination implementation for local directories
This is consistent with the (skopeo layers) storage layout; otherwise it
is expected to be used primarily as a debugging aid when working on
more complex image transfers (e.g. directly from OpenShift to a running
Docker daemon), allowing them to be split into two simpler problems
between one complex storage mechanism and a simple directory.

Not used yet, users will be added in future commits.
2016-05-02 19:43:16 +02:00
Miloslav Trmač
e169c311d3 Add an ImageSource implementation to docker.go
The ImageSource type does not provide all of the functionality of
docker.go, but we will be able to reuse the ImageSource parts in an
OpenShift client.

This is only a restructuring, does not change behavior.
2016-05-02 19:43:16 +02:00
Miloslav Trmač
a4aedae063 Add types.ImageSource and types.ImageDestination
Right now, only a declaration.

This will allow writing generalized push/pull between various storage
mechanisms, and reuse of the Docker Registry client code for the Docker
Registry embedded in OpenShift.
2016-05-02 19:43:16 +02:00
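
[Roughly, the pair of abstractions looks like this — the method sets are assumptions, since the commit itself only adds declarations:]

    package types

    import "io"

    // ImageSource is anything an image can be pulled from.
    type ImageSource interface {
    	GetManifest() ([]byte, error)
    	GetBlob(digest string) (io.ReadCloser, int64, error)
    	GetSignatures() ([][]byte, error)
    }

    // ImageDestination is anything an image can be pushed to.
    type ImageDestination interface {
    	PutManifest(manifest []byte) error
    	PutBlob(digest string, stream io.Reader) error
    	PutSignatures(signatures [][]byte) error
    }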
Antonio Murdaca
aff6aa7c2c Merge pull request #41 from projectatomic/fix-url
docker.go: do not concatenate url in ping
2016-04-29 16:51:38 +02:00
Antonio Murdaca
6d74750bba docker.go: do not concatenate url in ping
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2016-04-29 15:23:06 +02:00
Antonio Murdaca
2b3a4cfdfe Merge pull request #39 from mtrmac/update-gpgme
Update mtrmac/gpgme vendor to fix build on CentOS 7
2016-04-26 18:30:20 +02:00
Miloslav Trmač
e76eecd533 Update mtrmac/gpgme vendor to fix build on CentOS 7 2016-04-26 17:57:40 +02:00
Antonio Murdaca
dfc6352108 Merge pull request #37 from mtrmac/v2s1-manifest-followup
v2s1 manifest followup
2016-04-25 18:08:54 +02:00
Miloslav Trmač
23899acadd Create a new subpackage "dockerutils", starting with manifest computation
Move the manifest computation (with v2s1 signature stripping) out of
skopeo/signature into a separate package; it is necessary in the
OpenShift client as well, unrelated to signatures.

Other Docker-specific utilities, like getting a list of layer blobsums
from a manifest, may be also moved here in the future.
2016-04-25 17:27:51 +02:00
Miloslav Trmač
7a7dd84818 Fix fixture file name
It is “manifest version 2, schema 1”, not v1.
2016-04-25 17:27:51 +02:00
Antonio Murdaca
8374928f74 Merge pull request #35 from jwhonce/wip/manpage
Update man page
2016-04-22 09:13:31 +02:00
Jhon Honce
b52d3c85c6 * Update Authors 2016-04-21 13:44:12 -07:00
Jhon Honce
eab73f3d51 Update man page
Resolves https://github.com/projectatomic/skopeo/issues/12

* Convert man page from markdown to nroff
* Fill out man page
* Remove TODO's from go code regarding man page
* Additional information on building instructions
* Update Makefile

Signed-off-by: Jhon Honce <jhonce@redhat.com>
2016-04-21 09:46:02 -07:00
Antonio Murdaca
918a4d9110 Merge pull request #36 from projectatomic/fix-creds
fix invalid credentials error
2016-04-21 15:33:14 +02:00
Antonio Murdaca
c7be79e190 fix invalid credentials error
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2016-04-21 14:32:49 +02:00
Antonio Murdaca
28f2fedab9 Merge pull request #34 from mtrmac/v1s1-manifest-digest
Strip signatures from v1s1 manifests before computing the digest
2016-04-19 18:03:57 +02:00
Miloslav Trmač
4e19770a1b Strip signatures from v1s1 manifests before computing the digest 2016-04-19 17:37:04 +02:00
Antonio Murdaca
68a614d463 Merge pull request #33 from mtrmac/cgo-pthread-ordering-workaround
Add a workaround for a glibc bug when -lpthread precedes -lgpgme
2016-04-13 22:40:58 +02:00
Miloslav Trmač
e782275c2e Add a workaround for a glibc bug when -lpthread precedes -lgpgme 2016-04-13 21:42:19 +02:00
Antonio Murdaca
9d10b0b4ea Merge pull request #32 from mtrmac/integration-diagnostics
Test command output before error status in signing integration tests
2016-04-12 10:11:39 +02:00
Miloslav Trmač
dd7c2d44fa Log command output on failures in signing integration tests 2016-04-11 17:41:57 +02:00
Antonio Murdaca
c4e48c8f85 Merge pull request #31 from mtrmac/vendor-fixes
Vendor fixes
2016-04-07 09:00:01 +02:00
Miloslav Trmač
f7b81b5627 Fix dependency computation
Set GOPATH to start with ./vendor so that we use the dependencies in our
vendored versions instead of dependencies in whatever other version is
elsewhere in GOPATH.

And then undo it when trying to list the non-vendor subpackages in the
current directory.
2016-04-05 17:46:58 +02:00
Miloslav Trmač
96b96735ed Allow keeping vendor subdirectories in vendored packages
github.com/coreos/etcd as of v2.2.5 uses a Godeps subdirectory, and
imports packages by including the Godeps path fragments directly in the
package name; so we can't just remove the subdirectory and vendor the
included package directly.  So, add a flag to clone() to suppress
removing the vendor subdirectories.
2016-04-05 17:46:53 +02:00
Antonio Murdaca
d7ae061a83 Merge pull request #28 from runcom/fix-gitcommit
remove cmd/ subdir
2016-03-25 12:21:26 +01:00
Antonio Murdaca
1423aab202 remove cmd/ subdir
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2016-03-25 12:14:43 +01:00
Antonio Murdaca
9e982f0b1d Merge pull request #25 from mtrmac/signing
Signing
2016-03-24 17:40:30 +01:00
Miloslav Trmač
03f6cb89e6 Add standalone-sign and standalone-verify commands 2016-03-24 17:06:30 +01:00
Miloslav Trmač
69d5a131c9 Add signing and verification to the signature package 2016-03-24 11:32:23 +01:00
Miloslav Trmač
9595b3336f Signature JSON encoding/decoding
Adds stretchr/testify dependency.
2016-03-24 11:32:23 +01:00
Antonio Murdaca
10a41bd0fc Merge pull request #24 from mtrmac/all-dependencies
Do not clean test-only dependencies from vendor packages
2016-03-24 00:00:46 +01:00
Miloslav Trmač
e6841d0a27 Do not clean test-only dependencies from vendor packages
Instead of only checking dependencies of the "main" packages, include
also test dependencies of all subpackages of the project, and their
transitive dependencies.
2016-03-23 17:17:21 +01:00
Antonio Murdaca
79f09478b4 Merge pull request #22 from runcom/tls
support --cert-path and --tls-verify
2016-03-23 15:40:32 +01:00
Antonio Murdaca
1ce21cd233 support cert-path and tls-verify flags
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2016-03-23 15:35:07 +01:00
Antonio Murdaca
70a6c7b21d urls const(s)
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2016-03-23 12:40:32 +01:00
Antonio Murdaca
3a09e2bf8e clean vendors and bootstrap tls verify
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2016-03-23 10:27:59 +01:00
Antonio Murdaca
d204183544 Merge pull request #21 from runcom/fix-makefile
fix makefile
2016-03-22 18:21:01 +01:00
Antonio Murdaca
e78053938b fix makefile
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2016-03-22 18:10:59 +01:00
Antonio Murdaca
37ebb81936 Merge pull request #15 from runcom/enhanc
drop docker/ code
2016-03-22 16:34:37 +01:00
Antonio Murdaca
c02155340e include needed deps
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2016-03-22 16:29:16 +01:00
Antonio Murdaca
fed651449e remove vendors ftw
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2016-03-22 16:02:55 +01:00
Antonio Murdaca
aa6d271975 refactor image manifest
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2016-03-22 15:50:41 +01:00
Antonio Murdaca
50a2ed1124 fix CI and remove docker/ dir
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2016-03-22 15:50:41 +01:00
Antonio Murdaca
103420769f drop docker/ code
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2016-03-22 15:50:41 +01:00
Antonio Murdaca
7be01242a8 remove duplicate code
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2016-03-22 15:50:40 +01:00
Antonio Murdaca
60c5561c84 Merge pull request #20 from mtrmac/fix-vendor
Fix hack/.vendor-helper.sh for main package move.
2016-03-22 14:41:39 +01:00
Miloslav Trmač
f7ebc0a595 Fix hack/.vendor-helper.sh for main package move.
Otherwise the "clean" step of hack/vendor.sh would drop most .go files
from vendor/ as unused.

Also commits refreshed versions of a few of the vendored packages.
2016-03-22 14:33:15 +01:00
Antonio Murdaca
ffcb8f862f Merge pull request #19 from mtrmac/unit-tests
Add infrastructure for running unit tests
2016-03-22 14:22:36 +01:00
Miloslav Trmač
b815271f16 Add collective test targets:
- (make check): GNU coding standards-compliant primary entry point,
  running all available tests in the best environment (i.e. Docker
  container).
- (make test-all-local): Local entry point, running only tests
  which do not require a special environment; intended for IDE
  integration and quick turnaround cycles.

Also modifies the Travis configuration to run (make check), to prevent
duplication.
2016-03-22 14:12:56 +01:00
Miloslav Trmač
a4fd447146 Run unit tests in Travis 2016-03-22 14:12:56 +01:00
Miloslav Trmač
ab36f62d59 Add 'test-unit' and 'test-unit-local' Makefile targets
Running unit tests without the integration tests is non-trivial, so add
a Makefile target to help with this.
2016-03-22 14:12:56 +01:00
Antonio Murdaca
fde4c74547 Merge pull request #17 from mtrmac/validate-uncommitted
Use contents of local checkout instead of last commit for validation
2016-03-21 15:41:39 +01:00
Miloslav Trmač
d08a3812d2 Use contents of local checkout instead of last commit for validation
Validating only committed files is not useful in the natural
  $test_everything_passes; commit; push
workflow; the failures will not be caught locally, only by Travis later
(and only if PRs are used instead of direct commits to master).

So, use the working directory state instead of last commit for
validations; and remove misleading comments in checks which already use
the working directory state.
2016-03-21 14:55:32 +01:00
Antonio Murdaca
0d94172288 fix gitignore
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2016-03-19 09:53:56 +01:00
Antonio Murdaca
a73078ea75 refactor structure and print git commit in version
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2016-03-19 09:32:59 +01:00
Antonio Murdaca
8cf22b9ca2 fix make install target
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2016-03-18 16:46:25 +01:00
Antonio Murdaca
5f4a5653ac add todo
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2016-03-17 12:02:03 +01:00
Antonio Murdaca
648f2f8bc5 output raw manifest for v2 registries
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2016-03-17 11:55:49 +01:00
Antonio Murdaca
6a00ce47d2 import cifetch code and add layers command
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2016-03-17 11:48:04 +01:00
Antonio Murdaca
d8fbf24c25 types: more on interfaces
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2016-03-16 17:54:24 +01:00
Antonio Murdaca
b97d7fc68f thoughts on interfaces
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2016-03-16 13:24:42 +01:00
Antonio Murdaca
5d740b4611 fix to interfaces
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2016-03-16 13:23:08 +01:00
Antonio Murdaca
0440473c63 add comment to interfaces
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2016-03-16 11:52:24 +01:00
Antonio Murdaca
73509ef227 fix
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2016-03-16 11:51:20 +01:00
Antonio Murdaca
41329ca504 attempt abstract interface
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2016-03-16 11:46:39 +01:00
Antonio Murdaca
b46d977403 add todos
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2016-03-16 10:43:46 +01:00
Antonio Murdaca
0109708048 fix missing comma in tests
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2016-03-16 10:26:54 +01:00
Antonio Murdaca
adbf487541 fix readme, tests and vendors
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2016-03-16 10:21:36 +01:00
Antonio Murdaca
3fd3adc58e support multi commands
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2016-03-16 10:06:31 +01:00
Antonio Murdaca
d0fd876d7e update codegangsta/cli + fix Travis + todos
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2016-03-09 08:16:45 +01:00
Antonio Murdaca
c9d544c8fb fix golint RUN
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2016-03-09 08:08:00 +01:00
Antonio Murdaca
0715f36de8 add golint
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2016-03-09 07:50:04 +01:00
Antonio Murdaca
beef95f21a update readme
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2016-03-08 16:31:47 +01:00
Antonio Murdaca
ae27cb93db change license to GPLv2
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2016-03-08 16:25:51 +01:00
Antonio Murdaca
213b0505dc bump to v0.1.12-dev
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2016-03-08 09:49:51 +01:00
Antonio Murdaca
8094910c9a adapt code for projectatomic github
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2016-03-08 09:32:20 +01:00
Antonio Murdaca
82b121caf1 update docker code and adapt code
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2016-03-04 16:50:37 +01:00
Antonio Murdaca
89631ab4f1 fix building readme
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2016-03-04 13:08:51 +01:00
Antonio Murdaca
4d7ac1999e add another todo about v2 only
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2016-03-04 13:04:49 +01:00
Antonio Murdaca
370f1bc685 reg v1 setup wip
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2016-03-04 13:02:52 +01:00
Antonio Murdaca
51c104bde0 registry binary v1 working
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2016-03-04 12:57:28 +01:00
Antonio Murdaca
286ab4e144 build registry v1 for testing
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2016-03-04 11:48:16 +01:00
Antonio Murdaca
34fed9cabc update todo
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2016-03-04 10:54:15 +01:00
Antonio Murdaca
01b3c23ffa add todo
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2016-03-04 09:06:19 +01:00
Antonio Murdaca
7fc29e2323 fix readme
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2016-03-04 08:55:45 +01:00
Antonio Murdaca
b03817412b add dnf for fedora installation
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2016-03-04 08:55:03 +01:00
Antonio Murdaca
03b19aa069 fix readme
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2016-03-04 08:53:10 +01:00
Antonio Murdaca
69b84ce650 fix building section in readme
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2016-03-03 08:10:26 +01:00
Antonio Murdaca
044e9b5390 bump v0.1.10-dev
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2016-02-29 12:20:40 +01:00
Antonio Murdaca
30db2ad7fc bump v0.1.9
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2016-02-29 12:20:12 +01:00
Antonio Murdaca
b8d3588b54 bump v0.1.9-dev
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2016-02-29 12:18:52 +01:00
488 changed files with 62185 additions and 39914 deletions

.gitignore

@@ -1,2 +1,3 @@
-skopeo
-skopeo.1
+/skopeo
+/skopeo.1
+/layers-*

.travis.yml

@@ -7,5 +7,4 @@ notifications:
   email: false
 script:
-- make validate
-- make test-integration
+- make check

Dockerfile

@@ -1,14 +1,24 @@
 FROM fedora
-RUN dnf -y update && dnf install -y make git golang golang-github-cpuguy83-go-md2man golang-github-Sirupsen-logrus-devel golang-github-codegangsta-cli-devel golang-golangorg-net-devel
+RUN dnf -y update && dnf install -y make git golang golang-github-cpuguy83-go-md2man \
+    # gpgme bindings deps
+    libassuan-devel gpgme-devel \
+    gnupg \
+    # registry v1 deps
+    xz-devel \
+    python-devel \
+    python-pip \
+    swig \
+    redhat-rpm-config \
+    openssl-devel \
+    patch
-# Install two versions of the registry. The first is an older version that
+# Install three versions of the registry. The first is an older version that
 # only supports schema1 manifests. The second is a newer version that supports
 # both. This allows integration-cli tests to cover push/pull with both schema1
-# and schema2 manifests.
+# and schema2 manifests. Install registry v1 also.
 ENV REGISTRY_COMMIT_SCHEMA1 ec87e9b6971d831f0eff752ddb54fb64693e51cd
 ENV REGISTRY_COMMIT 47a064d4195a9b56133891bbb13620c3ac83a827
+#ENV REGISTRY_COMMIT_v1 TODO(runcom)
 RUN set -x \
     && export GOPATH="$(mktemp -d)" \
     && git clone https://github.com/docker/distribution.git "$GOPATH/src/github.com/docker/distribution" \
@@ -18,14 +28,22 @@ RUN set -x \
     && (cd "$GOPATH/src/github.com/docker/distribution" && git checkout -q "$REGISTRY_COMMIT_SCHEMA1") \
     && GOPATH="$GOPATH/src/github.com/docker/distribution/Godeps/_workspace:$GOPATH" \
        go build -o /usr/local/bin/registry-v2-schema1 github.com/docker/distribution/cmd/registry \
-    && rm -rf "$GOPATH"
+    && rm -rf "$GOPATH" \
+    && export DRV1="$(mktemp -d)" \
+    && git clone https://github.com/docker/docker-registry.git "$DRV1" \
+    # no need for setuptools since we have a version conflict with fedora
+    && sed -i.bak s/setuptools==5.8//g "$DRV1/requirements/main.txt" \
+    && sed -i.bak s/setuptools==5.8//g "$DRV1/depends/docker-registry-core/requirements/main.txt" \
+    && pip install "$DRV1/depends/docker-registry-core" \
+    && pip install file://"$DRV1#egg=docker-registry[bugsnag,newrelic,cors]" \
+    && patch $(python -c 'import boto; import os; print os.path.dirname(boto.__file__)')/connection.py \
+       < "$DRV1/contrib/boto_header_patch.diff" \
+    && dnf -y update && dnf install -y m2crypto
 ENV GOPATH /usr/share/gocode:/go
-WORKDIR /go/src/github.com/runcom/skopeo
 ENV PATH $GOPATH/bin:/usr/share/gocode/bin:$PATH
+RUN go get github.com/golang/lint/golint
+WORKDIR /go/src/github.com/projectatomic/skopeo
+COPY . /go/src/github.com/projectatomic/skopeo
+#ENTRYPOINT ["hack/dind"]
-COPY . /go/src/github.com/runcom/skopeo
-# remove distro-supplied dependencies, so we build against them and the rest from vendor/
-RUN rm -rf /go/src/github.com/runcom/skopeo/vendor/golang.org && rm -rf /go/src/github.com/runcom/skopeo/vendor/github.com/Sirupsen && rm -rf /go/src/github.com/runcom/skopeo/vendor/github.com/codegangsta

LICENSE

@@ -1,191 +1,339 @@
-[removed: the full text of the GNU GENERAL PUBLIC LICENSE, Version 2, June 1991]
+[added: the full text of the Apache License, Version 2.0, January 2004, http://www.apache.org/licenses/, followed by "Copyright 2016 Antonio Murdaca <runcom@redhat.com>" and the standard Apache 2.0 application notice]
implemented by public license practices. Many people have made
generous contributions to the wide range of software distributed
through that system in reliance on consistent application of that
system; it is up to the author/donor to decide if he or she is willing
to distribute software through any other system and a licensee cannot
impose that choice.
This section is intended to make thoroughly clear what is believed to
be a consequence of the rest of this License.
8. If the distribution and/or use of the Program is restricted in
certain countries either by patents or by copyrighted interfaces, the
original copyright holder who places the Program under this License
may add an explicit geographical distribution limitation excluding
those countries, so that distribution is permitted only in or among
countries not thus excluded. In such case, this License incorporates
the limitation as if written in the body of this License.
9. The Free Software Foundation may publish revised and/or new versions
of the General Public License from time to time. Such new versions will
be similar in spirit to the present version, but may differ in detail to
address new problems or concerns.
Each version is given a distinguishing version number. If the Program
specifies a version number of this License which applies to it and "any
later version", you have the option of following the terms and conditions
either of that version or of any later version published by the Free
Software Foundation. If the Program does not specify a version number of
this License, you may choose any version ever published by the Free Software
Foundation.
10. If you wish to incorporate parts of the Program into other free
programs whose distribution conditions are different, write to the author
to ask for permission. For software which is copyrighted by the Free
Software Foundation, write to the Free Software Foundation; we sometimes
make exceptions for this. Our decision will be guided by the two goals
of preserving the free status of all derivatives of our free software and
of promoting the sharing and reuse of software generally.
NO WARRANTY
11. BECAUSE THE PROGRAM IS LICENSED FREE OF CHARGE, THERE IS NO WARRANTY
FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN
OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES
PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED
OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS
TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE
PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING,
REPAIR OR CORRECTION.
12. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY AND/OR
REDISTRIBUTE THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES,
INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING
OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED
TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY
YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER
PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE
POSSIBILITY OF SUCH DAMAGES.
END OF TERMS AND CONDITIONS
How to Apply These Terms to Your New Programs
If you develop a new program, and you want it to be of the greatest
possible use to the public, the best way to achieve this is to make it
free software which everyone can redistribute and change under these terms.
To do so, attach the following notices to the program. It is safest
to attach them to the start of each source file to most effectively
convey the exclusion of warranty; and each file should have at least
the "copyright" line and a pointer to where the full notice is found.
<one line to give the program's name and a brief idea of what it does.>
Copyright (C) <year> <name of author>
This program is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; either version 2 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License along
with this program; if not, write to the Free Software Foundation, Inc.,
51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
Also add information on how to contact you by electronic and paper mail.
If the program is interactive, make it output a short notice like this
when it starts in an interactive mode:
Gnomovision version 69, Copyright (C) year name of author
Gnomovision comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
This is free software, and you are welcome to redistribute it
under certain conditions; type `show c' for details.
The hypothetical commands `show w' and `show c' should show the appropriate
parts of the General Public License. Of course, the commands you use may
be called something other than `show w' and `show c'; they could even be
mouse-clicks or menu items--whatever suits your program.
You should also get your employer (if you work as a programmer) or your
school, if any, to sign a "copyright disclaimer" for the program, if
necessary. Here is a sample; alter the names:
Yoyodyne, Inc., hereby disclaims all copyright interest in the program
`Gnomovision' (which makes passes at compilers) written by James Hacker.
<signature of Ty Coon>, 1 April 1989
Ty Coon, President of Vice
This General Public License does not permit incorporating your program into
proprietary programs. If your program is a subroutine library, you may
consider it more useful to permit linking proprietary applications with the
library. If this is what you want to do, use the GNU Lesser General
Public License instead of this License.

View File

@@ -1,10 +1,12 @@
.PHONY: all binary build clean install install-binary man shell test-integration
.PHONY: all binary build clean install install-binary shell test-integration
export GO15VENDOREXPERIMENT=1
PREFIX ?= ${DESTDIR}/usr
INSTALLDIR=${PREFIX}/bin
MANINSTALLDIR=${PREFIX}/share/man
# TODO(runcom)
#BASHINSTALLDIR=${PREFIX}/share/bash-completion/completions
GIT_BRANCH := $(shell git rev-parse --abbrev-ref HEAD 2>/dev/null)
DOCKER_IMAGE := skopeo-dev$(if $(GIT_BRANCH),:$(GIT_BRANCH))
@@ -19,33 +21,50 @@ ifeq ($(INTERACTIVE), 1)
endif
DOCKER_RUN_DOCKER := $(DOCKER_FLAGS) "$(DOCKER_IMAGE)"
all: man binary
GIT_COMMIT := $(shell git rev-parse HEAD 2> /dev/null || true)
all: binary
binary:
go build -o ${DEST}skopeo .
go build -ldflags "-X main.gitCommit=${GIT_COMMIT}" -o ${DEST}skopeo ./cmd/skopeo
build-container:
docker build ${DOCKER_BUILD_ARGS} -t "$(DOCKER_IMAGE)" .
clean:
rm -f skopeo
rm -f skopeo.1
install: install-binary
install -m 644 skopeo.1 ${MANINSTALLDIR}/man1/
install -m 644 man1/skopeo.1 ${MANINSTALLDIR}/man1/
# TODO(runcom)
#install -m 644 completion/bash/skopeo ${BASHINSTALLDIR}/
install-binary:
install -d -m 0755 ${INSTALLDIR}
install -m 755 skopeo ${INSTALLDIR}
man:
go-md2man -in man/skopeo.1.md -out skopeo.1
shell: build-container
$(DOCKER_RUN_DOCKER) bash
check: validate test-unit test-integration
# The tests can run out of entropy and block in containers, so replace /dev/random.
test-integration: build-container
$(DOCKER_RUN_DOCKER) hack/make.sh test-integration
$(DOCKER_RUN_DOCKER) bash -c 'rm -f /dev/random; ln -sf /dev/urandom /dev/random; hack/make.sh test-integration'
test-unit: build-container
# Just call (make test-unit-local) here instead of worrying about environment differences, e.g. GO15VENDOREXPERIMENT.
$(DOCKER_RUN_DOCKER) make test-unit-local
validate: build-container
$(DOCKER_RUN_DOCKER) hack/make.sh validate-git-marks validate-gofmt validate-lint validate-vet
# This target is only intended for development, e.g. executing it from an IDE. Use (make test) for CI or pre-release testing.
test-all-local: validate-local test-unit-local
validate-local:
hack/make.sh validate-git-marks validate-gofmt validate-lint validate-vet
test-unit-local:
go test $$(go list -e ./... | grep -v '^github\.com/projectatomic/skopeo/\(integration\|vendor/.*\)$$')

View File

@@ -1,17 +1,17 @@
skopeo [![Build Status](https://travis-ci.org/runcom/skopeo.svg?branch=master)](https://travis-ci.org/runcom/skopeo)
skopeo [![Build Status](https://travis-ci.org/projectatomic/skopeo.svg?branch=master)](https://travis-ci.org/projectatomic/skopeo)
=
_Please be aware `skopeo` is still work in progress_
`skopeo` is a command line utility which is able to _inspect_ a repository on a Docker registry.
`skopeo` is a command line utility which is able to _inspect_ a repository on a Docker registry and fetch image layers.
By _inspect_ I mean it fetches the repository's manifest and it is able to show you a `docker inspect`-like
json output about a whole repository or a tag. This tool, in constrast to `docker inspect`, helps you gather useful information about
json output about a whole repository or a tag. This tool, in contrast to `docker inspect`, helps you gather useful information about
a repository or a tag before pulling it (using disk space) - e.g. - which tags are available for the given repository? which labels the image has?
Examples:
```sh
# show repository's labels of rhel7:latest
$ skopeo registry.access.redhat.com/rhel7 | jq '.Config.Labels'
$ skopeo inspect docker://registry.access.redhat.com/rhel7 | jq '.Config.Labels'
{
"Architecture": "x86_64",
"Authoritative_Registry": "registry.access.redhat.com",
@@ -24,7 +24,7 @@ $ skopeo registry.access.redhat.com/rhel7 | jq '.Config.Labels'
}
# show repository's tags
$ skopeo docker.io/fedora | jq '.RepoTags'
$ skopeo inspect docker://docker.io/fedora | jq '.RepoTags'
[
"20",
"21",
@@ -36,8 +36,12 @@ $ skopeo docker.io/fedora | jq '.RepoTags'
]
# show image's digest
$ skopeo docker.io/fedora:rawhide | jq '.Digest'
$ skopeo inspect docker://docker.io/fedora:rawhide | jq '.Digest'
"sha256:905b4846938c8aef94f52f3e41a11398ae5b40f5855fb0e40ed9c157e721d7f8"
# show image's label "Name"
$ skopeo inspect docker://registry.access.redhat.com/rhel7 | jq '.Config.Labels.Name'
"rhel7/rhel"
```
Private registries with authentication
@@ -61,15 +65,15 @@ $ cat /home/runcom/.docker/config.json
}
# we can see I'm already authenticated via docker login so everything will be fine
$ skopeo myregistrydomain.com:5000/busybox
$ skopeo inspect docker://myregistrydomain.com:5000/busybox
{"Tag":"latest","Digest":"sha256:473bb2189d7b913ed7187a33d11e743fdc2f88931122a44d91a301b64419f092","RepoTags":["latest"],"Comment":"","Created":"2016-01-15T18:06:41.282540103Z","ContainerConfig":{"Hostname":"aded96b43f48","Domainname":"","User":"","AttachStdin":false,"AttachStdout":false,"AttachStderr":false,"Tty":false,"OpenStdin":false,"StdinOnce":false,"Env":null,"Cmd":["/bin/sh","-c","#(nop) CMD [\"sh\"]"],"Image":"9e77fef7a1c9f989988c06620dabc4020c607885b959a2cbd7c2283c91da3e33","Volumes":null,"WorkingDir":"","Entrypoint":null,"OnBuild":null,"Labels":null},"DockerVersion":"1.8.3","Author":"","Config":{"Hostname":"aded96b43f48","Domainname":"","User":"","AttachStdin":false,"AttachStdout":false,"AttachStderr":false,"Tty":false,"OpenStdin":false,"StdinOnce":false,"Env":null,"Cmd":["sh"],"Image":"9e77fef7a1c9f989988c06620dabc4020c607885b959a2cbd7c2283c91da3e33","Volumes":null,"WorkingDir":"","Entrypoint":null,"OnBuild":null,"Labels":null},"Architecture":"amd64","Os":"linux"}
# let's try now to fake a non existent Docker's config file
$ skopeo --docker-cfg="" myregistrydomain.com:5000/busybox
$ skopeo --docker-cfg="" inspect docker://myregistrydomain.com:5000/busybox
FATA[0000] Get https://myregistrydomain.com:5000/v2/busybox/manifests/latest: no basic auth credentials
# passing --username and --password - we can see that everything goes fine
$ skopeo --docker-cfg="" --username=testuser --password=testpassword myregistrydomain.com:5000/busybox
$ skopeo --docker-cfg="" --username=testuser --password=testpassword inspect docker://myregistrydomain.com:5000/busybox
{"Tag":"latest","Digest":"sha256:473bb2189d7b913ed7187a33d11e743fdc2f88931122a44d91a301b64419f092","RepoTags":["latest"],"Comment":"","Created":"2016-01-15T18:06:41.282540103Z","ContainerConfig":{"Hostname":"aded96b43f48","Domainname":"","User":"","AttachStdin":false,"AttachStdout":false,"AttachStderr":false,"Tty":false,"OpenStdin":false,"StdinOnce":false,"Env":null,"Cmd":["/bin/sh","-c","#(nop) CMD [\"sh\"]"],"Image":"9e77fef7a1c9f989988c06620dabc4020c607885b959a2cbd7c2283c91da3e33","Volumes":null,"WorkingDir":"","Entrypoint":null,"OnBuild":null,"Labels":null},"DockerVersion":"1.8.3","Author":"","Config":{"Hostname":"aded96b43f48","Domainname":"","User":"","AttachStdin":false,"AttachStdout":false,"AttachStderr":false,"Tty":false,"OpenStdin":false,"StdinOnce":false,"Env":null,"Cmd":["sh"],"Image":"9e77fef7a1c9f989988c06620dabc4020c607885b959a2cbd7c2283c91da3e33","Volumes":null,"WorkingDir":"","Entrypoint":null,"OnBuild":null,"Labels":null},"Architecture":"amd64","Os":"linux"}
```
If your cli config is found but it doesn't contain the necessary credentials for the queried registry
@@ -79,10 +83,18 @@ Building
-
To build `skopeo` you need at least Go 1.5 because it uses the latest `GO15VENDOREXPERIMENT` flag. Also, make sure to clone the repository in your `GOPATH` - otherwise compilation fails.
```sh
$ cd $GOPATH/src/github.com # make sure you have github.com folder otherwise just create it
$ git clone https://github.com/runcom/skopeo
$ cd runcom && make binary
$ cd $GOPATH/src
$ mkdir -p github.com/projectatomic
$ cd projectatomic
$ git clone https://github.com/projectatomic/skopeo
$ cd skopeo && make binary
```
You may need to install additional development packages: gpgme-devel and libassuan-devel
```sh
# dnf install gpgme-devel libassuan-devel
```
Man:
-
To build the man page you need [`go-md2man`](https://github.com/cpuguy83/go-md2man) available on your system, then:
@@ -91,9 +103,14 @@ $ make man
```
Installing
-
If you built from source:
```sh
$ sudo make install
```
`skopeo` is also available from Fedora 23:
```sh
sudo dnf install skopeo
```
Tests
-
_You need Docker installed on your system in order to run the test suite_
@@ -102,6 +119,10 @@ $ make test-integration
```
TODO
-
- update README with `layers` command
- list all images on registry?
- registry v2 search?
- download layers in parallel and support docker load tar(s)
- show repo tags via flag or when reference isn't tagged or digested
- add tests (integration with deployed registries in container - Docker-like)
- support rkt/appc image spec
@@ -112,4 +133,4 @@ NOT TODO
License
-
ASL 2.0
GPLv2

View File

@@ -0,0 +1,32 @@
package main
/*
This is a pretty horrible workaround. Due to a glibc bug
https://bugzilla.redhat.com/show_bug.cgi?id=1326903 , we must ensure we link
with -lgpgme before -lpthread. Such arguments come from various packages
using cgo, and the ordering of these arguments is, with current (go tool link),
dependent on the order in which the cgo-using packages are found in a
breadth-first search following dependencies, starting from “main”.
Thus, if
import "net"
is processed before
import "…/skopeo/signature"
it will, in the next level of the BFS, pull in "runtime/cgo" (a dependency of
"net") before "mtrmac/gpgme" (a dependency of "…/skopeo/signature"), causing
-lpthread (used by "runtime/cgo") to be used before -lgpgme.
This might be possible to work around by careful import ordering, or by removing
a direct dependency on "net", but that would be very fragile.
So, until the above bug is fixed, add -lgpgme directly in the "main" package
to ensure the needed build order.
Unfortunately, this workaround needs to be applied at the top level of any user
of "…/skopeo/signature"; it cannot be added to "…/skopeo/signature" itself,
by that time this package is first processed by the linker, a -lpthread may
already be queued and it would be too late.
*/
// #cgo LDFLAGS: -lgpgme
import "C"

120
cmd/skopeo/copy.go Normal file
View File

@@ -0,0 +1,120 @@
package main
import (
"encoding/json"
"github.com/Sirupsen/logrus"
"github.com/codegangsta/cli"
"github.com/projectatomic/skopeo/docker/utils"
"github.com/projectatomic/skopeo/signature"
)
// FIXME: Also handle schema2, and put this elsewhere:
// docker.go contains similar code, more sophisticated
// (at the very least the deduplication should be reused from there).
func manifestLayers(manifest []byte) ([]string, error) {
man := manifestSchema1{}
if err := json.Unmarshal(manifest, &man); err != nil {
return nil, err
}
layers := []string{}
for _, layer := range man.FSLayers {
layers = append(layers, layer.BlobSum)
}
return layers, nil
}
// FIXME: this is a copy from docker_image.go and does not belong here.
type manifestSchema1 struct {
Name string
Tag string
FSLayers []struct {
BlobSum string `json:"blobSum"`
} `json:"fsLayers"`
History []struct {
V1Compatibility string `json:"v1Compatibility"`
} `json:"history"`
// TODO(runcom) verify the downloaded manifest
//Signature []byte `json:"signature"`
}
func copyHandler(context *cli.Context) {
if len(context.Args()) != 2 {
logrus.Fatal("Usage: copy source destination")
}
src, err := parseImageSource(context, context.Args()[0])
if err != nil {
logrus.Fatalf("Error initializing %s: %s", context.Args()[0], err.Error())
}
dest, err := parseImageDestination(context, context.Args()[1])
if err != nil {
logrus.Fatalf("Error initializing %s: %s", context.Args()[1], err.Error())
}
signBy := context.String("sign-by")
manifest, _, err := src.GetManifest([]string{utils.DockerV2Schema1MIMEType})
if err != nil {
logrus.Fatalf("Error reading manifest: %s", err.Error())
}
layers, err := manifestLayers(manifest)
if err != nil {
logrus.Fatalf("Error parsing manifest: %s", err.Error())
}
for _, layer := range layers {
stream, err := src.GetLayer(layer)
if err != nil {
logrus.Fatalf("Error reading layer %s: %s", layer, err.Error())
}
defer stream.Close()
if err := dest.PutLayer(layer, stream); err != nil {
logrus.Fatalf("Error writing layer: %s", err.Error())
}
}
sigs, err := src.GetSignatures()
if err != nil {
logrus.Fatalf("Error reading signatures: %s", err.Error())
}
if signBy != "" {
mech, err := signature.NewGPGSigningMechanism()
if err != nil {
logrus.Fatalf("Error initializing GPG: %s", err.Error())
}
dockerReference, err := dest.CanonicalDockerReference()
if err != nil {
logrus.Fatalf("Error determining canonical Docker reference: %s", err.Error())
}
newSig, err := signature.SignDockerManifest(manifest, dockerReference, mech, signBy)
if err != nil {
logrus.Fatalf("Error creating signature: %s", err.Error())
}
sigs = append(sigs, newSig)
}
if err := dest.PutSignatures(sigs); err != nil {
logrus.Fatalf("Error writing signatures: %s", err.Error())
}
// FIXME: We need to call PutManifest after PutLayer and PutSignatures. This seems ugly; move to a "set properties" + "commit" model?
if err := dest.PutManifest(manifest); err != nil {
logrus.Fatalf("Error writing manifest: %s", err.Error())
}
}
var copyCmd = cli.Command{
Name: "copy",
Action: copyHandler,
// FIXME: Do we need to namespace the GPG aspect?
Flags: []cli.Flag{
cli.StringFlag{
Name: "sign-by",
Usage: "sign the image using a GPG key with the specified fingerprint",
},
},
}
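Taken together, copyHandler reduces to a small source-to-destination pipeline. Below is a condensed sketch of the same flow using only the constructors and methods shown in this patchset (signing elided; the image name and output path are illustrative, and error handling is collapsed into a helper):

```go
package main

import (
	"encoding/json"
	"log"

	"github.com/projectatomic/skopeo/directory"
	"github.com/projectatomic/skopeo/docker"
	"github.com/projectatomic/skopeo/docker/utils"
)

func check(err error) {
	if err != nil {
		log.Fatal(err)
	}
}

func main() {
	// Source: a registry image; destination: a local dir: layout.
	src, err := docker.NewDockerImageSource("docker.io/busybox:latest", "", true)
	check(err)
	dest := directory.NewDirImageDestination("/tmp/busybox")

	manifest, _, err := src.GetManifest([]string{utils.DockerV2Schema1MIMEType})
	check(err)

	// Same schema1 subset the copy command unmarshals to list layer blobs.
	var m struct {
		FSLayers []struct {
			BlobSum string `json:"blobSum"`
		} `json:"fsLayers"`
	}
	check(json.Unmarshal(manifest, &m))

	for _, l := range m.FSLayers {
		stream, err := src.GetLayer(l.BlobSum)
		check(err)
		err = dest.PutLayer(l.BlobSum, stream)
		stream.Close()
		check(err)
	}

	// As the FIXME above notes, the manifest goes last.
	check(dest.PutManifest(manifest))
}
```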

86
cmd/skopeo/inspect.go Normal file
View File

@@ -0,0 +1,86 @@
package main
import (
"encoding/json"
"fmt"
"time"
"github.com/Sirupsen/logrus"
"github.com/codegangsta/cli"
"github.com/projectatomic/skopeo/docker"
"github.com/projectatomic/skopeo/docker/utils"
)
// inspectOutput is the output format of (skopeo inspect), primarily so that we can format it with a simple json.MarshalIndent.
type inspectOutput struct {
Name string `json:",omitempty"`
Tag string
Digest string
RepoTags []string
Created time.Time
DockerVersion string
Labels map[string]string
Architecture string
Os string
Layers []string
}
var inspectCmd = cli.Command{
Name: "inspect",
Usage: "inspect images on a registry",
Flags: []cli.Flag{
cli.BoolFlag{
Name: "raw",
Usage: "output raw manifest",
},
},
Action: func(c *cli.Context) {
img, err := parseImage(c)
if err != nil {
logrus.Fatal(err)
}
rawManifest, err := img.Manifest()
if err != nil {
logrus.Fatal(err)
}
if c.Bool("raw") {
fmt.Println(string(rawManifest))
return
}
imgInspect, err := img.Inspect()
if err != nil {
logrus.Fatal(err)
}
outputData := inspectOutput{
Name: "", // Possibly overridden for a docker.Image.
Tag: imgInspect.Tag,
// Digest is set below.
RepoTags: []string{}, // Possibly overridden for a docker.Image.
Created: imgInspect.Created,
DockerVersion: imgInspect.DockerVersion,
Labels: imgInspect.Labels,
Architecture: imgInspect.Architecture,
Os: imgInspect.Os,
Layers: imgInspect.Layers,
}
outputData.Digest, err = utils.ManifestDigest(rawManifest)
if err != nil {
logrus.Fatalf("Error computing manifest digest: %s", err.Error())
}
if dockerImg, ok := img.(*docker.Image); ok {
outputData.Name, err = dockerImg.SourceRefFullName()
if err != nil {
logrus.Fatalf("Error getting expanded repository name: %s", err.Error())
}
outputData.RepoTags, err = dockerImg.GetRepositoryTags()
if err != nil {
logrus.Fatalf("Error determining repository tags: %s", err.Error())
}
}
out, err := json.MarshalIndent(outputData, "", " ")
if err != nil {
logrus.Fatal(err)
}
fmt.Println(string(out))
},
}

21
cmd/skopeo/layers.go Normal file
View File

@@ -0,0 +1,21 @@
package main
import (
"github.com/Sirupsen/logrus"
"github.com/codegangsta/cli"
)
// TODO(runcom): document args and usage
var layersCmd = cli.Command{
Name: "layers",
Usage: "get image layers",
Action: func(c *cli.Context) {
img, err := parseImage(c)
if err != nil {
logrus.Fatal(err)
}
if err := img.Layers(c.Args().Tail()...); err != nil {
logrus.Fatal(err)
}
},
}

72
cmd/skopeo/main.go Normal file
View File

@@ -0,0 +1,72 @@
package main
import (
"fmt"
"os"
"github.com/Sirupsen/logrus"
"github.com/codegangsta/cli"
"github.com/projectatomic/skopeo/version"
)
// gitCommit will be the hash that the binary was built from
// and will be populated by the Makefile
var gitCommit = ""
const (
usage = `interact with registries`
)
func main() {
app := cli.NewApp()
app.Name = "skopeo"
if gitCommit != "" {
app.Version = fmt.Sprintf("%s commit: %s", version.Version, gitCommit)
} else {
app.Version = version.Version
}
app.Usage = usage
// TODO(runcom)
//app.EnableBashCompletion = true
app.Flags = []cli.Flag{
cli.BoolFlag{
Name: "debug",
Usage: "enable debug output",
},
cli.StringFlag{
Name: "username",
Value: "",
Usage: "registry username",
},
cli.StringFlag{
Name: "password",
Value: "",
Usage: "registry password",
},
cli.StringFlag{
Name: "cert-path",
Value: "",
Usage: "Certificates path to connect to the given registry (cert.pem, key.pem)",
},
cli.BoolFlag{
Name: "tls-verify",
Usage: "Whether to verify certificates or not",
},
}
app.Before = func(c *cli.Context) error {
if c.GlobalBool("debug") {
logrus.SetLevel(logrus.DebugLevel)
}
return nil
}
app.Commands = []cli.Command{
copyCmd,
inspectCmd,
layersCmd,
standaloneSignCmd,
standaloneVerifyCmd,
}
if err := app.Run(os.Args); err != nil {
logrus.Fatal(err)
}
}

86
cmd/skopeo/signing.go Normal file
View File

@@ -0,0 +1,86 @@
package main
import (
"fmt"
"io/ioutil"
"github.com/Sirupsen/logrus"
"github.com/codegangsta/cli"
"github.com/projectatomic/skopeo/signature"
)
func standaloneSign(context *cli.Context) {
outputFile := context.String("output")
if len(context.Args()) != 3 || outputFile == "" {
logrus.Fatal("Usage: skopeo standalone-sign manifest docker-reference key-fingerprint -o signature")
}
manifestPath := context.Args()[0]
dockerReference := context.Args()[1]
fingerprint := context.Args()[2]
manifest, err := ioutil.ReadFile(manifestPath)
if err != nil {
logrus.Fatalf("Error reading %s: %s", manifestPath, err.Error())
}
mech, err := signature.NewGPGSigningMechanism()
if err != nil {
logrus.Fatalf("Error initializing GPG: %s", err.Error())
}
signature, err := signature.SignDockerManifest(manifest, dockerReference, mech, fingerprint)
if err != nil {
logrus.Fatalf("Error creating signature: %s", err.Error())
}
if err := ioutil.WriteFile(outputFile, signature, 0644); err != nil {
logrus.Fatalf("Error writing signature to %s: %s", outputFile, err.Error())
}
}
var standaloneSignCmd = cli.Command{
Name: "standalone-sign",
Usage: "Create a signature using local files",
Action: standaloneSign,
Flags: []cli.Flag{
cli.StringFlag{
Name: "output, o",
Usage: "output signature file name",
},
},
}
func standaloneVerify(context *cli.Context) {
if len(context.Args()) != 4 {
logrus.Fatal("Usage: skopeo standalone-verify manifest docker-reference key-fingerprint signature")
}
manifestPath := context.Args()[0]
expectedDockerReference := context.Args()[1]
expectedFingerprint := context.Args()[2]
signaturePath := context.Args()[3]
unverifiedManifest, err := ioutil.ReadFile(manifestPath)
if err != nil {
logrus.Fatalf("Error reading manifest from %s: %s", signaturePath, err.Error())
}
unverifiedSignature, err := ioutil.ReadFile(signaturePath)
if err != nil {
logrus.Fatalf("Error reading signature from %s: %s", signaturePath, err.Error())
}
mech, err := signature.NewGPGSigningMechanism()
if err != nil {
logrus.Fatalf("Error initializing GPG: %s", err.Error())
}
sig, err := signature.VerifyDockerManifestSignature(unverifiedSignature, unverifiedManifest, expectedDockerReference, mech, expectedFingerprint)
if err != nil {
logrus.Fatalf("Error verifying signature: %s", err.Error())
}
fmt.Printf("Signature verified, digest %s\n", sig.DockerManifestDigest)
}
var standaloneVerifyCmd = cli.Command{
Name: "standalone-verify",
Usage: "Verify a signature using local files",
Action: standaloneVerify,
}

74
cmd/skopeo/utils.go Normal file
View File

@@ -0,0 +1,74 @@
package main
import (
"fmt"
"strings"
"github.com/codegangsta/cli"
"github.com/projectatomic/skopeo/directory"
"github.com/projectatomic/skopeo/docker"
"github.com/projectatomic/skopeo/openshift"
"github.com/projectatomic/skopeo/types"
)
const (
// atomicPrefix is the URL-like schema prefix used for Atomic registry image references.
atomicPrefix = "atomic:"
// dockerPrefix is the URL-like schema prefix used for Docker image references.
dockerPrefix = "docker://"
// directoryPrefix is the URL-like schema prefix used for local directories (for debugging)
directoryPrefix = "dir:"
)
// parseImage converts an image URL-like string to an initialized handler for that image.
func parseImage(c *cli.Context) (types.Image, error) {
var (
imgName = c.Args().First()
certPath = c.GlobalString("cert-path")
tlsVerify = c.GlobalBool("tls-verify")
)
switch {
case strings.HasPrefix(imgName, dockerPrefix):
return docker.NewDockerImage(strings.TrimPrefix(imgName, dockerPrefix), certPath, tlsVerify)
//case strings.HasPrefix(img, appcPrefix):
//
case strings.HasPrefix(imgName, directoryPrefix):
src := directory.NewDirImageSource(strings.TrimPrefix(imgName, directoryPrefix))
return docker.GenericImageFromSource(src), nil
}
return nil, fmt.Errorf("no valid prefix provided")
}
// parseImageSource converts an image URL-like string to an ImageSource.
func parseImageSource(c *cli.Context, name string) (types.ImageSource, error) {
var (
certPath = c.GlobalString("cert-path")
tlsVerify = c.GlobalBool("tls-verify") // FIXME!! defaults to false?
)
switch {
case strings.HasPrefix(name, dockerPrefix):
return docker.NewDockerImageSource(strings.TrimPrefix(name, dockerPrefix), certPath, tlsVerify)
case strings.HasPrefix(name, atomicPrefix):
return openshift.NewOpenshiftImageSource(strings.TrimPrefix(name, atomicPrefix), certPath, tlsVerify)
case strings.HasPrefix(name, directoryPrefix):
return directory.NewDirImageSource(strings.TrimPrefix(name, directoryPrefix)), nil
}
return nil, fmt.Errorf("Unrecognized image reference %s", name)
}
// parseImageDestination converts an image URL-like string to an ImageDestination.
func parseImageDestination(c *cli.Context, name string) (types.ImageDestination, error) {
var (
certPath = c.GlobalString("cert-path")
tlsVerify = c.GlobalBool("tls-verify") // FIXME!! defaults to false?
)
switch {
case strings.HasPrefix(name, dockerPrefix):
return docker.NewDockerImageDestination(strings.TrimPrefix(name, dockerPrefix), certPath, tlsVerify)
case strings.HasPrefix(name, atomicPrefix):
return openshift.NewOpenshiftImageDestination(strings.TrimPrefix(name, atomicPrefix), certPath, tlsVerify)
case strings.HasPrefix(name, directoryPrefix):
return directory.NewDirImageDestination(strings.TrimPrefix(name, directoryPrefix)), nil
}
return nil, fmt.Errorf("Unrecognized image reference %s", name)
}

113
directory/directory.go Normal file
View File

@@ -0,0 +1,113 @@
package directory
import (
"fmt"
"io"
"io/ioutil"
"os"
"path/filepath"
"strings"
"github.com/projectatomic/skopeo/types"
)
// manifestPath returns a path for the manifest within a directory using our conventions.
func manifestPath(dir string) string {
return filepath.Join(dir, "manifest.json")
}
// layerPath returns a path for a layer tarball within a directory using our conventions.
func layerPath(dir string, digest string) string {
// FIXME: Should we keep the digest identification?
return filepath.Join(dir, strings.TrimPrefix(digest, "sha256:")+".tar")
}
// signaturePath returns a path for a signature within a directory using our conventions.
func signaturePath(dir string, index int) string {
return filepath.Join(dir, fmt.Sprintf("signature-%d", index+1))
}
type dirImageDestination struct {
dir string
}
// NewDirImageDestination returns an ImageDestination for writing to an existing directory.
func NewDirImageDestination(dir string) types.ImageDestination {
return &dirImageDestination{dir}
}
func (d *dirImageDestination) CanonicalDockerReference() (string, error) {
return "", fmt.Errorf("Can not determine canonical Docker reference for a local directory")
}
func (d *dirImageDestination) PutManifest(manifest []byte) error {
return ioutil.WriteFile(manifestPath(d.dir), manifest, 0644)
}
func (d *dirImageDestination) PutLayer(digest string, stream io.Reader) error {
layerFile, err := os.Create(layerPath(d.dir, digest))
if err != nil {
return err
}
defer layerFile.Close()
if _, err := io.Copy(layerFile, stream); err != nil {
return err
}
if err := layerFile.Sync(); err != nil {
return err
}
return nil
}
func (d *dirImageDestination) PutSignatures(signatures [][]byte) error {
for i, sig := range signatures {
if err := ioutil.WriteFile(signaturePath(d.dir, i), sig, 0644); err != nil {
return err
}
}
return nil
}
type dirImageSource struct {
dir string
}
// NewDirImageSource returns an ImageSource reading from an existing directory.
func NewDirImageSource(dir string) types.ImageSource {
return &dirImageSource{dir}
}
// IntendedDockerReference returns the full, unambiguous, Docker reference for this image, _as specified by the user_
// (not as the image itself, or its underlying storage, claims). This can be used e.g. to determine which public keys are trusted for this image.
// May be "" if unknown.
func (s *dirImageSource) IntendedDockerReference() string {
return ""
}
// it's up to the caller to determine the MIME type of the returned manifest's bytes
func (s *dirImageSource) GetManifest(_ []string) ([]byte, string, error) {
m, err := ioutil.ReadFile(manifestPath(s.dir))
if err != nil {
return nil, "", err
}
return m, "", err
}
func (s *dirImageSource) GetLayer(digest string) (io.ReadCloser, error) {
return os.Open(layerPath(s.dir, digest))
}
func (s *dirImageSource) GetSignatures() ([][]byte, error) {
signatures := [][]byte{}
for i := 0; ; i++ {
signature, err := ioutil.ReadFile(signaturePath(s.dir, i))
if err != nil {
if os.IsNotExist(err) {
break
}
return nil, err
}
signatures = append(signatures, signature)
}
return signatures, nil
}
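To show how these pieces compose, here is a minimal sketch that reads an image back out of a directory written with the conventions above, via the generic image wrapper added in docker/image.go. The path is illustrative; errors are fatal for brevity.

```go
package main

import (
	"fmt"
	"log"

	"github.com/projectatomic/skopeo/directory"
	"github.com/projectatomic/skopeo/docker"
)

func main() {
	// Wrap a dir: layout (manifest.json, <digest>.tar, signature-N)
	// produced by dirImageDestination above.
	src := directory.NewDirImageSource("/tmp/busybox")

	sigs, err := src.GetSignatures()
	if err != nil {
		log.Fatal(err)
	}

	img := docker.GenericImageFromSource(src)
	manifest, err := img.Manifest()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("manifest: %d bytes, signatures: %d\n", len(manifest), len(sigs))
}
```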

24
doc.go Normal file
View File

@@ -0,0 +1,24 @@
// Package skopeo provides libraries and commands to interact with containers images.
//
// package main
//
// import (
// "fmt"
//
// "github.com/projectatomic/skopeo/docker"
// )
//
// func main() {
// img, err := docker.NewDockerImage("fedora", "", false)
// if err != nil {
// panic(err)
// }
// b, err := img.Manifest()
// if err != nil {
// panic(err)
// }
// fmt.Printf("%s", string(b))
// }
//
// TODO(runcom)
package skopeo

362
docker/docker_client.go Normal file
View File

@@ -0,0 +1,362 @@
package docker
import (
"crypto/tls"
"encoding/base64"
"encoding/json"
"fmt"
"io"
"io/ioutil"
"net/http"
"os"
"path/filepath"
"strings"
"github.com/Sirupsen/logrus"
"github.com/docker/docker/pkg/homedir"
)
const (
dockerHostname = "docker.io"
dockerRegistry = "registry-1.docker.io"
dockerAuthRegistry = "https://index.docker.io/v1/"
dockerCfg = ".docker"
dockerCfgFileName = "config.json"
dockerCfgObsolete = ".dockercfg"
baseURL = "%s://%s/v2/"
tagsURL = "%s/tags/list"
manifestURL = "%s/manifests/%s"
blobsURL = "%s/blobs/%s"
blobUploadURL = "%s/blobs/uploads/?digest=%s"
)
// dockerClient is configuration for dealing with a single Docker registry.
type dockerClient struct {
registry string
username string
password string
wwwAuthenticate string // Cache of a value set by ping() if scheme is not empty
scheme string // Cache of a value returned by a successful ping() if not empty
transport *http.Transport
}
// newDockerClient returns a new dockerClient instance for refHostname (a host as specified in the Docker image reference, not canonicalized to dockerRegistry)
func newDockerClient(refHostname, certPath string, tlsVerify bool) (*dockerClient, error) {
var registry string
if refHostname == dockerHostname {
registry = dockerRegistry
} else {
registry = refHostname
}
username, password, err := getAuth(refHostname)
if err != nil {
return nil, err
}
var tr *http.Transport
if certPath != "" || !tlsVerify {
tlsc := &tls.Config{}
if certPath != "" {
cert, err := tls.LoadX509KeyPair(filepath.Join(certPath, "cert.pem"), filepath.Join(certPath, "key.pem"))
if err != nil {
return nil, fmt.Errorf("Error loading x509 key pair: %s", err)
}
tlsc.Certificates = append(tlsc.Certificates, cert)
}
tlsc.InsecureSkipVerify = !tlsVerify
tr = &http.Transport{
TLSClientConfig: tlsc,
}
}
return &dockerClient{
registry: registry,
username: username,
password: password,
transport: tr,
}, nil
}
func (c *dockerClient) makeRequest(method, url string, headers map[string][]string, stream io.Reader) (*http.Response, error) {
if c.scheme == "" {
pr, err := c.ping()
if err != nil {
return nil, err
}
c.wwwAuthenticate = pr.WWWAuthenticate
c.scheme = pr.scheme
}
url = fmt.Sprintf(baseURL, c.scheme, c.registry) + url
req, err := http.NewRequest(method, url, stream)
if err != nil {
return nil, err
}
req.Header.Set("Docker-Distribution-API-Version", "registry/2.0")
for n, h := range headers {
for _, hh := range h {
req.Header.Add(n, hh)
}
}
if c.wwwAuthenticate != "" {
if err := c.setupRequestAuth(req); err != nil {
return nil, err
}
}
client := &http.Client{}
if c.transport != nil {
client.Transport = c.transport
}
logrus.Debugf("%s %s", method, url)
res, err := client.Do(req)
if err != nil {
return nil, err
}
return res, nil
}
func (c *dockerClient) setupRequestAuth(req *http.Request) error {
tokens := strings.SplitN(strings.TrimSpace(c.wwwAuthenticate), " ", 2)
if len(tokens) != 2 {
return fmt.Errorf("expected 2 tokens in WWW-Authenticate: %d, %s", len(tokens), c.wwwAuthenticate)
}
switch tokens[0] {
case "Basic":
req.SetBasicAuth(c.username, c.password)
return nil
case "Bearer":
client := &http.Client{}
if c.transport != nil {
client.Transport = c.transport
}
res, err := client.Do(req)
if err != nil {
return err
}
hdr := res.Header.Get("WWW-Authenticate")
if hdr == "" || res.StatusCode != http.StatusUnauthorized {
// no need for bearer? wtf?
return nil
}
tokens = strings.Split(hdr, " ")
tokens = strings.Split(tokens[1], ",")
var realm, service, scope string
for _, token := range tokens {
if strings.HasPrefix(token, "realm") {
realm = strings.Trim(token[len("realm="):], "\"")
}
if strings.HasPrefix(token, "service") {
service = strings.Trim(token[len("service="):], "\"")
}
if strings.HasPrefix(token, "scope") {
scope = strings.Trim(token[len("scope="):], "\"")
}
}
if realm == "" {
return fmt.Errorf("missing realm in bearer auth challenge")
}
if service == "" {
return fmt.Errorf("missing service in bearer auth challenge")
}
// The scope can be empty if we're not getting a token for a specific repo
//if scope == "" && repo != "" {
if scope == "" {
return fmt.Errorf("missing scope in bearer auth challenge")
}
token, err := c.getBearerToken(realm, service, scope)
if err != nil {
return err
}
req.Header.Set("Authorization", fmt.Sprintf("Bearer %s", token))
return nil
}
return fmt.Errorf("no handler for %s authentication", tokens[0])
// support docker bearer with authconfig's Auth string? see docker2aci
}
func (c *dockerClient) getBearerToken(realm, service, scope string) (string, error) {
authReq, err := http.NewRequest("GET", realm, nil)
if err != nil {
return "", err
}
getParams := authReq.URL.Query()
getParams.Add("service", service)
if scope != "" {
getParams.Add("scope", scope)
}
authReq.URL.RawQuery = getParams.Encode()
if c.username != "" && c.password != "" {
authReq.SetBasicAuth(c.username, c.password)
}
// insecure for now to contact the external token service
tr := &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}}
client := &http.Client{Transport: tr}
res, err := client.Do(authReq)
if err != nil {
return "", err
}
defer res.Body.Close()
switch res.StatusCode {
case http.StatusUnauthorized:
return "", fmt.Errorf("unable to retrieve auth token: 401 unauthorized")
case http.StatusOK:
break
default:
return "", fmt.Errorf("unexpected http code: %d, URL: %s", res.StatusCode, authReq.URL)
}
tokenBlob, err := ioutil.ReadAll(res.Body)
if err != nil {
return "", err
}
tokenStruct := struct {
Token string `json:"token"`
}{}
if err := json.Unmarshal(tokenBlob, &tokenStruct); err != nil {
return "", err
}
// TODO(runcom): reuse tokens?
//hostAuthTokens, ok = rb.hostsV2AuthTokens[req.URL.Host]
//if !ok {
//hostAuthTokens = make(map[string]string)
//rb.hostsV2AuthTokens[req.URL.Host] = hostAuthTokens
//}
//hostAuthTokens[repo] = tokenStruct.Token
return tokenStruct.Token, nil
}
func getAuth(hostname string) (string, string, error) {
// TODO(runcom): get this from *cli.Context somehow
//if username != "" && password != "" {
//return username, password, nil
//}
if hostname == dockerHostname {
hostname = dockerAuthRegistry
}
dockerCfgPath := filepath.Join(getDefaultConfigDir(".docker"), dockerCfgFileName)
if _, err := os.Stat(dockerCfgPath); err == nil {
j, err := ioutil.ReadFile(dockerCfgPath)
if err != nil {
return "", "", err
}
var dockerAuth dockerConfigFile
if err := json.Unmarshal(j, &dockerAuth); err != nil {
return "", "", err
}
// try the normal case
if c, ok := dockerAuth.AuthConfigs[hostname]; ok {
return decodeDockerAuth(c.Auth)
}
} else if os.IsNotExist(err) {
oldDockerCfgPath := filepath.Join(getDefaultConfigDir(dockerCfgObsolete))
if _, err := os.Stat(oldDockerCfgPath); err != nil {
return "", "", nil //missing file is not an error
}
j, err := ioutil.ReadFile(oldDockerCfgPath)
if err != nil {
return "", "", err
}
var dockerAuthOld map[string]dockerAuthConfigObsolete
if err := json.Unmarshal(j, &dockerAuthOld); err != nil {
return "", "", err
}
if c, ok := dockerAuthOld[hostname]; ok {
return decodeDockerAuth(c.Auth)
}
} else {
// if file is there but we can't stat it for any reason other
// than it doesn't exist then stop
return "", "", fmt.Errorf("%s - %v", dockerCfgPath, err)
}
return "", "", nil
}
type apiErr struct {
Code string
Message string
Detail interface{}
}
type pingResponse struct {
WWWAuthenticate string
APIVersion string
scheme string
errors []apiErr
}
func (c *dockerClient) ping() (*pingResponse, error) {
client := &http.Client{}
if c.transport != nil {
client.Transport = c.transport
}
ping := func(scheme string) (*pingResponse, error) {
url := fmt.Sprintf(baseURL, scheme, c.registry)
resp, err := client.Get(url)
logrus.Debugf("Ping %s err %#v", url, err)
if err != nil {
return nil, err
}
defer resp.Body.Close()
logrus.Debugf("Ping %s status %d", scheme+"://"+c.registry+"/v2/", resp.StatusCode)
if resp.StatusCode != http.StatusOK && resp.StatusCode != http.StatusUnauthorized {
return nil, fmt.Errorf("error pinging repository, response code %d", resp.StatusCode)
}
pr := &pingResponse{}
pr.WWWAuthenticate = resp.Header.Get("WWW-Authenticate")
pr.APIVersion = resp.Header.Get("Docker-Distribution-Api-Version")
pr.scheme = scheme
if resp.StatusCode == http.StatusUnauthorized {
type APIErrors struct {
Errors []apiErr
}
errs := &APIErrors{}
if err := json.NewDecoder(resp.Body).Decode(errs); err != nil {
return nil, err
}
pr.errors = errs.Errors
}
return pr, nil
}
scheme := "https"
pr, err := ping(scheme)
if err != nil {
scheme = "http"
pr, err = ping(scheme)
if err == nil {
return pr, nil
}
}
return pr, err
}
func getDefaultConfigDir(confPath string) string {
return filepath.Join(homedir.Get(), confPath)
}
type dockerAuthConfigObsolete struct {
Auth string `json:"auth"`
}
type dockerAuthConfig struct {
Auth string `json:"auth,omitempty"`
}
type dockerConfigFile struct {
AuthConfigs map[string]dockerAuthConfig `json:"auths"`
}
func decodeDockerAuth(s string) (string, string, error) {
decoded, err := base64.StdEncoding.DecodeString(s)
if err != nil {
return "", "", err
}
parts := strings.SplitN(string(decoded), ":", 2)
if len(parts) != 2 {
// if it's invalid just skip, as docker does
return "", "", nil
}
user := parts[0]
password := strings.Trim(parts[1], "\x00")
return user, password, nil
}
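As a sanity check on the format getAuth and decodeDockerAuth expect: the "auth" value in a config.json entry is just base64("username:password"). A standalone sketch (the credentials are made up):

```go
package main

import (
	"encoding/base64"
	"fmt"
	"strings"
)

func main() {
	// Encode the way `docker login` stores credentials...
	enc := base64.StdEncoding.EncodeToString([]byte("testuser:testpassword"))
	fmt.Println(enc) // dGVzdHVzZXI6dGVzdHBhc3N3b3Jk

	// ...and split the way decodeDockerAuth above does.
	dec, err := base64.StdEncoding.DecodeString(enc)
	if err != nil {
		panic(err)
	}
	parts := strings.SplitN(string(dec), ":", 2)
	fmt.Println(parts[0], parts[1]) // testuser testpassword
}
```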

69
docker/docker_image.go Normal file
View File

@@ -0,0 +1,69 @@
package docker
import (
"encoding/json"
"fmt"
"net/http"
"github.com/projectatomic/skopeo/types"
)
// Image is a Docker-specific implementation of types.Image with a few extra methods
// which are specific to Docker.
type Image struct {
genericImage
}
// NewDockerImage returns a new Image interface type after setting up
// a client to the registry hosting the given image.
func NewDockerImage(img, certPath string, tlsVerify bool) (types.Image, error) {
s, err := newDockerImageSource(img, certPath, tlsVerify)
if err != nil {
return nil, err
}
return &Image{genericImage{src: s}}, nil
}
// By construction, a docker.Image.genericImage.src must be a dockerImageSource.
// dockerSource returns it.
func (i *Image) dockerSource() (*dockerImageSource, error) {
if src, ok := i.genericImage.src.(*dockerImageSource); ok {
return src, nil
}
return nil, fmt.Errorf("Unexpected internal inconsistency, docker.Image not based on dockerImageSource")
}
// SourceRefFullName returns a fully expanded name for the repository this image is in.
func (i *Image) SourceRefFullName() (string, error) {
src, err := i.dockerSource()
if err != nil {
return "", err
}
return src.ref.FullName(), nil
}
// GetRepositoryTags lists all tags available in the repository. Note that this has no connection with the tag(s) used for this specific image, if any.
func (i *Image) GetRepositoryTags() ([]string, error) {
src, err := i.dockerSource()
if err != nil {
return nil, err
}
url := fmt.Sprintf(tagsURL, src.ref.RemoteName())
res, err := src.c.makeRequest("GET", url, nil, nil)
if err != nil {
return nil, err
}
defer res.Body.Close()
if res.StatusCode != http.StatusOK {
// print url also
return nil, fmt.Errorf("Invalid status code returned when fetching tags list %d", res.StatusCode)
}
type tagsRes struct {
Tags []string
}
tags := &tagsRes{}
if err := json.NewDecoder(res.Body).Decode(tags); err != nil {
return nil, err
}
return tags.Tags, nil
}

110
docker/docker_image_dest.go Normal file
View File

@@ -0,0 +1,110 @@
package docker
import (
"bytes"
"fmt"
"io"
"io/ioutil"
"net/http"
"github.com/Sirupsen/logrus"
"github.com/projectatomic/skopeo/docker/utils"
"github.com/projectatomic/skopeo/reference"
"github.com/projectatomic/skopeo/types"
)
type dockerImageDestination struct {
ref reference.Named
tag string
c *dockerClient
}
// NewDockerImageDestination creates a new ImageDestination for the specified image and connection specification.
func NewDockerImageDestination(img, certPath string, tlsVerify bool) (types.ImageDestination, error) {
ref, tag, err := parseDockerImageName(img)
if err != nil {
return nil, err
}
c, err := newDockerClient(ref.Hostname(), certPath, tlsVerify)
if err != nil {
return nil, err
}
return &dockerImageDestination{
ref: ref,
tag: tag,
c: c,
}, nil
}
func (d *dockerImageDestination) CanonicalDockerReference() (string, error) {
return fmt.Sprintf("%s:%s", d.ref.Name(), d.tag), nil
}
func (d *dockerImageDestination) PutManifest(manifest []byte) error {
// FIXME: This only allows upload by digest, not creating a tag. See the
// corresponding comment in NewOpenshiftImageDestination.
digest, err := utils.ManifestDigest(manifest)
if err != nil {
return err
}
url := fmt.Sprintf(manifestURL, d.ref.RemoteName(), digest)
headers := map[string][]string{}
mimeType := utils.GuessManifestMIMEType(manifest)
if mimeType != "" {
headers["Content-Type"] = []string{mimeType}
}
res, err := d.c.makeRequest("PUT", url, headers, bytes.NewReader(manifest))
if err != nil {
return err
}
defer res.Body.Close()
if res.StatusCode != http.StatusCreated {
body, err := ioutil.ReadAll(res.Body)
if err == nil {
logrus.Debugf("Error body %s", string(body))
}
logrus.Debugf("Error uploading manifest, status %d, %#v", res.StatusCode, res)
return fmt.Errorf("Error uploading manifest to %s, status %d", url, res.StatusCode)
}
return nil
}
func (d *dockerImageDestination) PutLayer(digest string, stream io.Reader) error {
checkURL := fmt.Sprintf(blobsURL, d.ref.RemoteName(), digest)
logrus.Debugf("Checking %s", checkURL)
res, err := d.c.makeRequest("HEAD", checkURL, nil, nil)
if err != nil {
return err
}
defer res.Body.Close()
if res.StatusCode == http.StatusOK && res.Header.Get("Docker-Content-Digest") == digest {
logrus.Debugf("... already exists, not uploading")
return nil
}
logrus.Debugf("... failed, status %d", res.StatusCode)
// FIXME? Chunked upload, progress reporting, etc.
uploadURL := fmt.Sprintf(blobUploadURL, d.ref.RemoteName(), digest)
logrus.Debugf("Uploading %s", uploadURL)
// FIXME: Set Content-Length?
res, err = d.c.makeRequest("POST", uploadURL, map[string][]string{"Content-Type": {"application/octet-stream"}}, stream)
if err != nil {
return err
}
defer res.Body.Close()
if res.StatusCode != http.StatusCreated {
logrus.Debugf("Error uploading, status %d", res.StatusCode)
return fmt.Errorf("Error uploading to %s, status %d", uploadURL, res.StatusCode)
}
return nil
}
func (d *dockerImageDestination) PutSignatures(signatures [][]byte) error {
if len(signatures) != 0 {
return fmt.Errorf("Pushing signatures to a Docker Registry is not supported")
}
return nil
}

View File

@@ -0,0 +1,96 @@
package docker
import (
"fmt"
"io"
"io/ioutil"
"net/http"
"github.com/Sirupsen/logrus"
"github.com/projectatomic/skopeo/reference"
"github.com/projectatomic/skopeo/types"
)
type errFetchManifest struct {
statusCode int
body []byte
}
func (e errFetchManifest) Error() string {
return fmt.Sprintf("error fetching manifest: status code: %d, body: %s", e.statusCode, string(e.body))
}
type dockerImageSource struct {
ref reference.Named
tag string
c *dockerClient
}
// newDockerImageSource is the same as NewDockerImageSource, only it returns the more specific *dockerImageSource type.
func newDockerImageSource(img, certPath string, tlsVerify bool) (*dockerImageSource, error) {
ref, tag, err := parseDockerImageName(img)
if err != nil {
return nil, err
}
c, err := newDockerClient(ref.Hostname(), certPath, tlsVerify)
if err != nil {
return nil, err
}
return &dockerImageSource{
ref: ref,
tag: tag,
c: c,
}, nil
}
// NewDockerImageSource creates a new ImageSource for the specified image and connection specification.
func NewDockerImageSource(img, certPath string, tlsVerify bool) (types.ImageSource, error) {
return newDockerImageSource(img, certPath, tlsVerify)
}
// IntendedDockerReference returns the full, unambiguous, Docker reference for this image, _as specified by the user_
// (not as the image itself, or its underlying storage, claims). This can be used e.g. to determine which public keys are trusted for this image.
// May be "" if unknown.
func (s *dockerImageSource) IntendedDockerReference() string {
return fmt.Sprintf("%s:%s", s.ref.Name(), s.tag)
}
func (s *dockerImageSource) GetManifest(mimetypes []string) ([]byte, string, error) {
url := fmt.Sprintf(manifestURL, s.ref.RemoteName(), s.tag)
// TODO(runcom) set manifest version header! schema1 for now - then schema2 etc etc and v1
// TODO(runcom) NO, switch on the resulting manifest like Docker is doing
headers := make(map[string][]string)
headers["Accept"] = mimetypes
res, err := s.c.makeRequest("GET", url, headers, nil)
if err != nil {
return nil, "", err
}
defer res.Body.Close()
manblob, err := ioutil.ReadAll(res.Body)
if err != nil {
return nil, "", err
}
if res.StatusCode != http.StatusOK {
return nil, "", errFetchManifest{res.StatusCode, manblob}
}
// We might validate manblob against the Docker-Content-Digest header here to protect against transport errors.
return manblob, res.Header.Get("Content-Type"), nil
}
func (s *dockerImageSource) GetLayer(digest string) (io.ReadCloser, error) {
url := fmt.Sprintf(blobsURL, s.ref.RemoteName(), digest)
logrus.Infof("Downloading %s", url)
res, err := s.c.makeRequest("GET", url, nil, nil)
if err != nil {
return nil, err
}
if res.StatusCode != http.StatusOK {
// print url also
return nil, fmt.Errorf("Invalid status code returned when fetching blob %d", res.StatusCode)
}
return res.Body, nil
}
func (s *dockerImageSource) GetSignatures() ([][]byte, error) {
return [][]byte{}, nil
}

20
docker/docker_utils.go Normal file
View File

@@ -0,0 +1,20 @@
package docker
import "github.com/projectatomic/skopeo/reference"
// parseDockerImageName converts a string into a reference and tag value.
func parseDockerImageName(img string) (reference.Named, string, error) {
ref, err := reference.ParseNamed(img)
if err != nil {
return nil, "", err
}
ref = reference.WithDefaultTag(ref)
var tag string
switch x := ref.(type) {
case reference.Canonical:
tag = x.Digest().String()
case reference.NamedTagged:
tag = x.Tag()
}
return ref, tag, nil
}
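A short sketch of what this resolves for tagged versus digested references, assuming the vendored reference package follows docker/distribution semantics (the image names are illustrative):

```go
package main

import (
	"fmt"

	"github.com/projectatomic/skopeo/reference"
)

func describe(name string) {
	ref, err := reference.ParseNamed(name)
	if err != nil {
		panic(err)
	}
	ref = reference.WithDefaultTag(ref) // bare names get the default tag
	switch x := ref.(type) {
	case reference.Canonical:
		fmt.Println(name, "-> digest", x.Digest())
	case reference.NamedTagged:
		fmt.Println(name, "-> tag", x.Tag())
	}
}

func main() {
	describe("docker.io/fedora")         // default tag (presumably "latest")
	describe("docker.io/fedora:rawhide") // explicit tag
}
```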

261
docker/image.go Normal file
View File

@@ -0,0 +1,261 @@
package docker
import (
"encoding/json"
"errors"
"fmt"
"io/ioutil"
"regexp"
"strings"
"time"
"github.com/projectatomic/skopeo/directory"
"github.com/projectatomic/skopeo/docker/utils"
"github.com/projectatomic/skopeo/types"
)
var (
validHex = regexp.MustCompile(`^([a-f0-9]{64})$`)
)
// genericImage is a general set of utilities for working with container images,
// whatever their underlying location (i.e. dockerImageSource-independent).
type genericImage struct {
src types.ImageSource
cachedManifest []byte // Private cache for Manifest(); nil if not yet known.
cachedSignatures [][]byte // Private cache for Signatures(); nil if not yet known.
}
// GenericImageFromSource returns a types.Image implementation for source.
// NOTE: This is currently an internal testing helper, do not rely on this as
// a stable API. There might be an ImageFromSource eventually, but it would not be
// in the skopeo/docker package.
func GenericImageFromSource(src types.ImageSource) types.Image {
return &genericImage{src: src}
}
// IntendedDockerReference returns the full, unambiguous, Docker reference for this image, _as specified by the user_
// (not as the image itself, or its underlying storage, claims). This can be used e.g. to determine which public keys are trusted for this image.
// May be "" if unknown.
func (i *genericImage) IntendedDockerReference() string {
return i.src.IntendedDockerReference()
}
// Manifest is like ImageSource.GetManifest, but the result is cached; it is OK to call this however often you need.
func (i *genericImage) Manifest() ([]byte, error) {
if i.cachedManifest == nil {
m, _, err := i.src.GetManifest([]string{utils.DockerV2Schema1MIMEType})
if err != nil {
return nil, err
}
i.cachedManifest = m
}
return i.cachedManifest, nil
}
// Signatures is like ImageSource.GetSignatures, but the result is cached; it is OK to call this however often you need.
func (i *genericImage) Signatures() ([][]byte, error) {
if i.cachedSignatures == nil {
sigs, err := i.src.GetSignatures()
if err != nil {
return nil, err
}
i.cachedSignatures = sigs
}
return i.cachedSignatures, nil
}
func (i *genericImage) Inspect() (*types.ImageInspectInfo, error) {
// TODO(runcom): unused version param for now, default to docker v2-1
m, err := i.getSchema1Manifest()
if err != nil {
return nil, err
}
ms1, ok := m.(*manifestSchema1)
if !ok {
return nil, fmt.Errorf("error retrivieng manifest schema1")
}
v1 := &v1Image{}
if err := json.Unmarshal([]byte(ms1.History[0].V1Compatibility), v1); err != nil {
return nil, err
}
return &types.ImageInspectInfo{
Tag: ms1.Tag,
DockerVersion: v1.DockerVersion,
Created: v1.Created,
Labels: v1.Config.Labels,
Architecture: v1.Architecture,
Os: v1.OS,
Layers: ms1.GetLayers(),
}, nil
}
type config struct {
Labels map[string]string
}
type v1Image struct {
// Config is the configuration of the container received from the client
Config *config `json:"config,omitempty"`
// DockerVersion specifies the version of Docker that built the image
DockerVersion string `json:"docker_version,omitempty"`
// Created is the timestamp at which the image was created
Created time.Time `json:"created"`
// Architecture is the hardware that the image was built on and runs on
Architecture string `json:"architecture,omitempty"`
// OS is the operating system used to build and run the image
OS string `json:"os,omitempty"`
}
// TODO(runcom)
func (i *genericImage) DockerTar() ([]byte, error) {
return nil, nil
}
// will support v1 one day...
type manifest interface {
String() string
GetLayers() []string
}
type manifestSchema1 struct {
Name string
Tag string
FSLayers []struct {
BlobSum string `json:"blobSum"`
} `json:"fsLayers"`
History []struct {
V1Compatibility string `json:"v1Compatibility"`
} `json:"history"`
// TODO(runcom) verify the downloaded manifest
//Signature []byte `json:"signature"`
}
func (m *manifestSchema1) GetLayers() []string {
layers := make([]string, len(m.FSLayers))
for i, layer := range m.FSLayers {
layers[i] = layer.BlobSum
}
return layers
}
func (m *manifestSchema1) String() string {
return fmt.Sprintf("%s-%s", sanitize(m.Name), sanitize(m.Tag))
}
func sanitize(s string) string {
return strings.Replace(s, "/", "-", -1)
}
func (i *genericImage) getSchema1Manifest() (manifest, error) {
manblob, err := i.Manifest()
if err != nil {
return nil, err
}
mschema1 := &manifestSchema1{}
if err := json.Unmarshal(manblob, mschema1); err != nil {
return nil, err
}
if err := fixManifestLayers(mschema1); err != nil {
return nil, err
}
// TODO(runcom): verify manifest schema 1, 2 etc
//if len(m.FSLayers) != len(m.History) {
//return nil, fmt.Errorf("length of history not equal to number of layers for %q", ref.String())
//}
//if len(m.FSLayers) == 0 {
//return nil, fmt.Errorf("no FSLayers in manifest for %q", ref.String())
//}
return mschema1, nil
}
func (i *genericImage) Layers(layers ...string) error {
m, err := i.getSchema1Manifest()
if err != nil {
return err
}
tmpDir, err := ioutil.TempDir(".", "layers-"+m.String()+"-")
if err != nil {
return err
}
dest := directory.NewDirImageDestination(tmpDir)
data, err := json.Marshal(m)
if err != nil {
return err
}
if err := dest.PutManifest(data); err != nil {
return err
}
if len(layers) == 0 {
layers = m.GetLayers()
}
for _, l := range layers {
if !strings.HasPrefix(l, "sha256:") {
l = "sha256:" + l
}
if err := i.getLayer(dest, l); err != nil {
return err
}
}
return nil
}
func (i *genericImage) getLayer(dest types.ImageDestination, digest string) error {
stream, err := i.src.GetLayer(digest)
if err != nil {
return err
}
defer stream.Close()
return dest.PutLayer(digest, stream)
}
func fixManifestLayers(manifest *manifestSchema1) error {
type imageV1 struct {
ID string
Parent string
}
imgs := make([]*imageV1, len(manifest.FSLayers))
for i := range manifest.FSLayers {
img := &imageV1{}
if err := json.Unmarshal([]byte(manifest.History[i].V1Compatibility), img); err != nil {
return err
}
imgs[i] = img
if err := validateV1ID(img.ID); err != nil {
return err
}
}
if imgs[len(imgs)-1].Parent != "" {
return errors.New("Invalid parent ID in the base layer of the image.")
}
// check general duplicates to error instead of a deadlock
idmap := make(map[string]struct{})
var lastID string
for _, img := range imgs {
// skip IDs that appear after each other, we handle those later
if _, exists := idmap[img.ID]; img.ID != lastID && exists {
return fmt.Errorf("ID %+v appears multiple times in manifest", img.ID)
}
lastID = img.ID
idmap[lastID] = struct{}{}
}
// backwards loop so that we keep the remaining indexes after removing items
for i := len(imgs) - 2; i >= 0; i-- {
if imgs[i].ID == imgs[i+1].ID { // repeated ID. remove and continue
manifest.FSLayers = append(manifest.FSLayers[:i], manifest.FSLayers[i+1:]...)
manifest.History = append(manifest.History[:i], manifest.History[i+1:]...)
} else if imgs[i].Parent != imgs[i+1].ID {
return fmt.Errorf("Invalid parent ID. Expected %v, got %v.", imgs[i+1].ID, imgs[i].Parent)
}
}
return nil
}
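// Worked example (editor's note, not in the commit): given history IDs
// [C, B, B, A] with parents [B, A, A, ""], the backwards walk drops the
// duplicated B, leaving FSLayers/History as [C, B, A]; if C's parent were
// anything but B, the walk would instead fail with "Invalid parent ID".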
func validateV1ID(id string) error {
if ok := validHex.MatchString(id); !ok {
return fmt.Errorf("image ID %q is invalid", id)
}
return nil
}
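
Since GenericImageFromSource is explicitly flagged as a testing helper, here is a hedged sketch of plugging a custom source into it. The memorySource method set is inferred from the calls genericImage makes above (GetManifest returning bytes, a MIME type, and an error; GetSignatures; GetLayer; IntendedDockerReference); it is not a published interface, so treat this as an illustration only:

package main

import (
	"fmt"
	"io"
	"io/ioutil"
	"strings"

	"github.com/projectatomic/skopeo/docker"
)

// memorySource serves a single fixed schema 1 manifest; everything else is stubbed.
type memorySource struct{ manifest []byte }

func (s *memorySource) IntendedDockerReference() string { return "example.com/demo:latest" }

// GetManifest mirrors the (bytes, MIME type, error) shape genericImage.Manifest expects.
func (s *memorySource) GetManifest(mimeTypes []string) ([]byte, string, error) {
	return s.manifest, "application/vnd.docker.distribution.manifest.v1+json", nil
}

func (s *memorySource) GetSignatures() ([][]byte, error) { return [][]byte{}, nil }

func (s *memorySource) GetLayer(digest string) (io.ReadCloser, error) {
	return ioutil.NopCloser(strings.NewReader("")), nil // no layer data in this sketch
}

func main() {
	src := &memorySource{manifest: []byte(`{"schemaVersion":1,"name":"demo","tag":"latest","fsLayers":[],"history":[]}`)}
	img := docker.GenericImageFromSource(src)
	m, err := img.Manifest() // fetched once, then served from cachedManifest
	fmt.Println(len(m), err)
}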


@@ -1,292 +0,0 @@
package docker
import (
"encoding/json"
"fmt"
"strings"
"time"
"github.com/Sirupsen/logrus"
"github.com/codegangsta/cli"
"github.com/docker/distribution/digest"
distreference "github.com/docker/distribution/reference"
"github.com/docker/docker/api"
"github.com/docker/docker/cliconfig"
"github.com/docker/docker/image"
"github.com/docker/docker/opts"
versionPkg "github.com/docker/docker/pkg/version"
"github.com/docker/docker/reference"
"github.com/docker/docker/registry"
engineTypes "github.com/docker/engine-api/types"
registryTypes "github.com/docker/engine-api/types/registry"
"github.com/runcom/skopeo/types"
"golang.org/x/net/context"
)
// fallbackError wraps an error that can possibly allow fallback to a different
// endpoint.
type fallbackError struct {
// err is the error being wrapped.
err error
// confirmedV2 is set to true if it was confirmed that the registry
// supports the v2 protocol. This is used to limit fallbacks to the v1
// protocol.
confirmedV2 bool
}
// Error renders the fallbackError as a string.
func (f fallbackError) Error() string {
return f.err.Error()
}
type manifestFetcher interface {
Fetch(ctx context.Context, ref reference.Named) (*types.ImageInspect, error)
}
func validateName(name string) error {
distref, err := distreference.ParseNamed(name)
if err != nil {
return err
}
hostname, _ := distreference.SplitHostname(distref)
if hostname == "" {
return fmt.Errorf("Please use a fully qualified repository name")
}
return nil
}
func GetData(c *cli.Context, name string) (*types.ImageInspect, error) {
if err := validateName(name); err != nil {
return nil, err
}
ref, err := reference.ParseNamed(name)
if err != nil {
return nil, err
}
repoInfo, err := registry.ParseRepositoryInfo(ref)
if err != nil {
return nil, err
}
authConfig, err := getAuthConfig(c, repoInfo.Index)
if err != nil {
return nil, err
}
if err := validateRepoName(repoInfo.Name()); err != nil {
return nil, err
}
options := &registry.Options{}
options.Mirrors = opts.NewListOpts(nil)
options.InsecureRegistries = opts.NewListOpts(nil)
options.InsecureRegistries.Set("0.0.0.0/0")
registryService := registry.NewService(options)
// TODO(runcom): hacky, provide a way of passing tls cert (flag?) to be used to lookup
for _, ic := range registryService.Config.IndexConfigs {
ic.Secure = false
}
endpoints, err := registryService.LookupPullEndpoints(repoInfo)
if err != nil {
return nil, err
}
logrus.Debugf("endpoints: %v", endpoints)
var (
ctx = context.Background()
lastErr error
discardNoSupportErrors bool
imgInspect *types.ImageInspect
confirmedV2 bool
)
for _, endpoint := range endpoints {
// make sure I can reach the registry, same as docker pull does
v1endpoint, err := endpoint.ToV1Endpoint(nil)
if err != nil {
return nil, err
}
if _, err := v1endpoint.Ping(); err != nil {
if strings.Contains(err.Error(), "timeout") {
return nil, err
}
continue
}
if confirmedV2 && endpoint.Version == registry.APIVersion1 {
logrus.Debugf("Skipping v1 endpoint %s because v2 registry was detected", endpoint.URL)
continue
}
logrus.Debugf("Trying to fetch image manifest of %s repository from %s %s", repoInfo.Name(), endpoint.URL, endpoint.Version)
//fetcher, err := newManifestFetcher(endpoint, repoInfo, config)
fetcher, err := newManifestFetcher(endpoint, repoInfo, authConfig, registryService)
if err != nil {
lastErr = err
continue
}
if imgInspect, err = fetcher.Fetch(ctx, ref); err != nil {
// Was this fetch cancelled? If so, don't try to fall back.
fallback := false
select {
case <-ctx.Done():
default:
if fallbackErr, ok := err.(fallbackError); ok {
fallback = true
confirmedV2 = confirmedV2 || fallbackErr.confirmedV2
err = fallbackErr.err
}
}
if fallback {
if _, ok := err.(registry.ErrNoSupport); !ok {
// Because we found an error that's not ErrNoSupport, discard all subsequent ErrNoSupport errors.
discardNoSupportErrors = true
// save the current error
lastErr = err
} else if !discardNoSupportErrors {
// Save the ErrNoSupport error, because it's either the first error or all encountered errors
// were also ErrNoSupport errors.
lastErr = err
}
continue
}
logrus.Debugf("Not continuing with error: %v", err)
return nil, err
}
return imgInspect, nil
}
if lastErr == nil {
lastErr = fmt.Errorf("no endpoints found for %s", ref.String())
}
return nil, lastErr
}
func newManifestFetcher(endpoint registry.APIEndpoint, repoInfo *registry.RepositoryInfo, authConfig engineTypes.AuthConfig, registryService *registry.Service) (manifestFetcher, error) {
switch endpoint.Version {
case registry.APIVersion2:
return &v2ManifestFetcher{
endpoint: endpoint,
authConfig: authConfig,
service: registryService,
repoInfo: repoInfo,
}, nil
case registry.APIVersion1:
return &v1ManifestFetcher{
endpoint: endpoint,
authConfig: authConfig,
service: registryService,
repoInfo: repoInfo,
}, nil
}
return nil, fmt.Errorf("unknown version %d for registry %s", endpoint.Version, endpoint.URL)
}
func getAuthConfig(c *cli.Context, index *registryTypes.IndexInfo) (engineTypes.AuthConfig, error) {
var (
username = c.GlobalString("username")
password = c.GlobalString("password")
cfg = c.GlobalString("docker-cfg")
defAuthConfig = engineTypes.AuthConfig{
Username: c.GlobalString("username"),
Password: c.GlobalString("password"),
Email: "stub@example.com",
}
)
//
// FINAL TODO(runcom): avoid returning empty config! just fallthrough and return
// the first useful authconfig
//
// TODO(runcom): ??? atomic needs this
// TODO(runcom): implement this to opt-in for docker-cfg, no need to make this
// work by default with docker's conf
//useDockerConf := c.GlobalString("use-docker-cfg")
if username != "" && password != "" {
return defAuthConfig, nil
}
confFile, err := cliconfig.Load(cfg)
if err != nil {
return engineTypes.AuthConfig{}, err
}
authConfig := registry.ResolveAuthConfig(confFile.AuthConfigs, index)
logrus.Debugf("authConfig for %s: %v", index.Name, authConfig)
return authConfig, nil
}
func validateRepoName(name string) error {
if name == "" {
return fmt.Errorf("Repository name can't be empty")
}
if name == api.NoBaseImageSpecifier {
return fmt.Errorf("'%s' is a reserved name", api.NoBaseImageSpecifier)
}
return nil
}
func makeImageInspect(img *image.Image, tag string, dgst digest.Digest, tagList []string) *types.ImageInspect {
var digest string
if err := dgst.Validate(); err == nil {
digest = dgst.String()
}
return &types.ImageInspect{
Tag: tag,
Digest: digest,
RepoTags: tagList,
Comment: img.Comment,
Created: img.Created.Format(time.RFC3339Nano),
ContainerConfig: &img.ContainerConfig,
DockerVersion: img.DockerVersion,
Author: img.Author,
Config: img.Config,
Architecture: img.Architecture,
Os: img.OS,
}
}
func makeRawConfigFromV1Config(imageJSON []byte, rootfs *image.RootFS, history []image.History) (map[string]*json.RawMessage, error) {
var dver struct {
DockerVersion string `json:"docker_version"`
}
if err := json.Unmarshal(imageJSON, &dver); err != nil {
return nil, err
}
useFallback := versionPkg.Version(dver.DockerVersion).LessThan("1.8.3")
if useFallback {
var v1Image image.V1Image
err := json.Unmarshal(imageJSON, &v1Image)
if err != nil {
return nil, err
}
imageJSON, err = json.Marshal(v1Image)
if err != nil {
return nil, err
}
}
var c map[string]*json.RawMessage
if err := json.Unmarshal(imageJSON, &c); err != nil {
return nil, err
}
c["rootfs"] = rawJSON(rootfs)
c["history"] = rawJSON(history)
return c, nil
}
func rawJSON(value interface{}) *json.RawMessage {
jsonval, err := json.Marshal(value)
if err != nil {
return nil
}
return (*json.RawMessage)(&jsonval)
}


@@ -1,168 +0,0 @@
package docker
import (
"encoding/json"
"errors"
"fmt"
"strings"
"github.com/Sirupsen/logrus"
"github.com/docker/distribution"
"github.com/docker/distribution/registry/client/transport"
"github.com/docker/docker/image"
"github.com/docker/docker/image/v1"
"github.com/docker/docker/reference"
"github.com/docker/docker/registry"
engineTypes "github.com/docker/engine-api/types"
"github.com/runcom/skopeo/types"
"golang.org/x/net/context"
)
type v1ManifestFetcher struct {
endpoint registry.APIEndpoint
repoInfo *registry.RepositoryInfo
repo distribution.Repository
confirmedV2 bool
// wrap in a config?
authConfig engineTypes.AuthConfig
service *registry.Service
session *registry.Session
}
func (mf *v1ManifestFetcher) Fetch(ctx context.Context, ref reference.Named) (*types.ImageInspect, error) {
var (
imgInspect *types.ImageInspect
)
if _, isCanonical := ref.(reference.Canonical); isCanonical {
// Allowing fallback, because HTTPS v1 is before HTTP v2
return nil, fallbackError{err: registry.ErrNoSupport{errors.New("Cannot pull by digest with v1 registry")}}
}
tlsConfig, err := mf.service.TLSConfig(mf.repoInfo.Index.Name)
if err != nil {
return nil, err
}
// Adds Docker-specific headers as well as user-specified headers (metaHeaders)
tr := transport.NewTransport(
registry.NewTransport(tlsConfig),
//registry.DockerHeaders(mf.config.MetaHeaders)...,
registry.DockerHeaders(nil)...,
)
client := registry.HTTPClient(tr)
//v1Endpoint, err := mf.endpoint.ToV1Endpoint(mf.config.MetaHeaders)
v1Endpoint, err := mf.endpoint.ToV1Endpoint(nil)
if err != nil {
logrus.Debugf("Could not get v1 endpoint: %v", err)
return nil, fallbackError{err: err}
}
mf.session, err = registry.NewSession(client, &mf.authConfig, v1Endpoint)
if err != nil {
logrus.Debugf("Fallback from error: %s", err)
return nil, fallbackError{err: err}
}
imgInspect, err = mf.fetchWithSession(ctx, ref)
if err != nil {
return nil, err
}
return imgInspect, nil
}
func (mf *v1ManifestFetcher) fetchWithSession(ctx context.Context, ref reference.Named) (*types.ImageInspect, error) {
repoData, err := mf.session.GetRepositoryData(mf.repoInfo)
if err != nil {
if strings.Contains(err.Error(), "HTTP code: 404") {
return nil, fmt.Errorf("Error: image %s not found", mf.repoInfo.RemoteName())
}
// Unexpected HTTP error
return nil, err
}
var tagsList map[string]string
tagsList, err = mf.session.GetRemoteTags(repoData.Endpoints, mf.repoInfo)
if err != nil {
logrus.Errorf("unable to get remote tags: %s", err)
return nil, err
}
logrus.Debugf("Retrieving the tag list")
tagged, isTagged := ref.(reference.NamedTagged)
var tagID, tag string
if isTagged {
tag = tagged.Tag()
tagsList[tagged.Tag()] = tagID
} else {
ref, err = reference.WithTag(ref, reference.DefaultTag)
if err != nil {
return nil, err
}
tagged, _ := ref.(reference.NamedTagged)
tag = tagged.Tag()
tagsList[tagged.Tag()] = tagID
}
tagID, err = mf.session.GetRemoteTag(repoData.Endpoints, mf.repoInfo, tag)
if err == registry.ErrRepoNotFound {
return nil, fmt.Errorf("Tag %s not found in repository %s", tag, mf.repoInfo.FullName())
}
if err != nil {
logrus.Errorf("unable to get remote tags: %s", err)
return nil, err
}
tagList := []string{}
for tag := range tagsList {
tagList = append(tagList, tag)
}
img := repoData.ImgList[tagID]
var pulledImg *image.Image
for _, ep := range mf.repoInfo.Index.Mirrors {
if pulledImg, err = mf.pullImageJSON(img.ID, ep, repoData.Tokens); err != nil {
// Don't report errors when pulling from mirrors.
logrus.Debugf("Error pulling image json of %s:%s, mirror: %s, %s", mf.repoInfo.FullName(), img.Tag, ep, err)
continue
}
break
}
if pulledImg == nil {
for _, ep := range repoData.Endpoints {
if pulledImg, err = mf.pullImageJSON(img.ID, ep, repoData.Tokens); err != nil {
// It's not ideal that only the last error is returned, it would be better to concatenate the errors.
logrus.Infof("Error pulling image json of %s:%s, endpoint: %s, %v", mf.repoInfo.FullName(), img.Tag, ep, err)
continue
}
break
}
}
if err != nil {
return nil, fmt.Errorf("Error pulling image (%s) from %s, %v", img.Tag, mf.repoInfo.FullName(), err)
}
if pulledImg == nil {
return nil, fmt.Errorf("No such image %s:%s", mf.repoInfo.FullName(), tag)
}
return makeImageInspect(pulledImg, tag, "", tagList), nil
}
func (mf *v1ManifestFetcher) pullImageJSON(imgID, endpoint string, token []string) (*image.Image, error) {
imgJSON, _, err := mf.session.GetRemoteImageJSON(imgID, endpoint)
if err != nil {
return nil, err
}
h, err := v1.HistoryFromConfig(imgJSON, false)
if err != nil {
return nil, err
}
configRaw, err := makeRawConfigFromV1Config(imgJSON, image.NewRootFS(), []image.History{h})
if err != nil {
return nil, err
}
config, err := json.Marshal(configRaw)
if err != nil {
return nil, err
}
img, err := image.NewFromJSON(config)
if err != nil {
return nil, err
}
return img, nil
}


@@ -1,474 +0,0 @@
package docker
import (
"encoding/json"
"errors"
"fmt"
"runtime"
"github.com/Sirupsen/logrus"
"github.com/docker/distribution"
"github.com/docker/distribution/digest"
"github.com/docker/distribution/manifest/manifestlist"
"github.com/docker/distribution/manifest/schema1"
"github.com/docker/distribution/manifest/schema2"
"github.com/docker/distribution/registry/api/errcode"
"github.com/docker/distribution/registry/client"
dockerdistribution "github.com/docker/docker/distribution"
"github.com/docker/docker/image"
"github.com/docker/docker/image/v1"
"github.com/docker/docker/reference"
"github.com/docker/docker/registry"
engineTypes "github.com/docker/engine-api/types"
"github.com/runcom/skopeo/types"
"golang.org/x/net/context"
)
type v2ManifestFetcher struct {
endpoint registry.APIEndpoint
repoInfo *registry.RepositoryInfo
repo distribution.Repository
confirmedV2 bool
// wrap in a config?
authConfig engineTypes.AuthConfig
service *registry.Service
}
func (mf *v2ManifestFetcher) Fetch(ctx context.Context, ref reference.Named) (*types.ImageInspect, error) {
var (
imgInspect *types.ImageInspect
err error
)
//mf.repo, mf.confirmedV2, err = distribution.NewV2Repository(ctx, mf.repoInfo, mf.endpoint, mf.config.MetaHeaders, mf.config.AuthConfig, "pull")
mf.repo, mf.confirmedV2, err = dockerdistribution.NewV2Repository(ctx, mf.repoInfo, mf.endpoint, nil, &mf.authConfig, "pull")
if err != nil {
logrus.Debugf("Error getting v2 registry: %v", err)
return nil, fallbackError{err: err, confirmedV2: mf.confirmedV2}
}
imgInspect, err = mf.fetchWithRepository(ctx, ref)
if err != nil {
if _, ok := err.(fallbackError); ok {
return nil, err
}
if registry.ContinueOnError(err) {
err = fallbackError{err: err, confirmedV2: mf.confirmedV2}
}
}
return imgInspect, err
}
func (mf *v2ManifestFetcher) fetchWithRepository(ctx context.Context, ref reference.Named) (*types.ImageInspect, error) {
var (
manifest distribution.Manifest
tagOrDigest string // Used for logging/progress only
tagList = []string{}
)
manSvc, err := mf.repo.Manifests(ctx)
if err != nil {
return nil, err
}
if _, isTagged := ref.(reference.NamedTagged); !isTagged {
ref, err = reference.WithTag(ref, reference.DefaultTag)
if err != nil {
return nil, err
}
}
if tagged, isTagged := ref.(reference.NamedTagged); isTagged {
// NOTE: not using TagService.Get, since it uses HEAD requests
// against the manifests endpoint, which are not supported by
// all registry versions.
manifest, err = manSvc.Get(ctx, "", client.WithTag(tagged.Tag()))
if err != nil {
return nil, allowV1Fallback(err)
}
tagOrDigest = tagged.Tag()
} else if digested, isDigested := ref.(reference.Canonical); isDigested {
manifest, err = manSvc.Get(ctx, digested.Digest())
if err != nil {
return nil, err
}
tagOrDigest = digested.Digest().String()
} else {
return nil, fmt.Errorf("internal error: reference has neither a tag nor a digest: %s", ref.String())
}
if manifest == nil {
return nil, fmt.Errorf("image manifest does not exist for tag or digest %q", tagOrDigest)
}
// If manSvc.Get succeeded, we can be confident that the registry on
// the other side speaks the v2 protocol.
mf.confirmedV2 = true
tagList, err = mf.repo.Tags(ctx).All(ctx)
if err != nil {
// If this repository doesn't exist on V2, we should
// permit a fallback to V1.
return nil, allowV1Fallback(err)
}
var (
image *image.Image
manifestDigest digest.Digest
)
switch v := manifest.(type) {
case *schema1.SignedManifest:
image, manifestDigest, err = mf.pullSchema1(ctx, ref, v)
if err != nil {
return nil, err
}
case *schema2.DeserializedManifest:
image, manifestDigest, err = mf.pullSchema2(ctx, ref, v)
if err != nil {
return nil, err
}
case *manifestlist.DeserializedManifestList:
image, manifestDigest, err = mf.pullManifestList(ctx, ref, v)
if err != nil {
return nil, err
}
default:
return nil, errors.New("unsupported manifest format")
}
// TODO(runcom)
//var showTags bool
//if reference.IsNameOnly(ref) {
//showTags = true
//logrus.Debug("Using default tag: latest")
//ref = reference.WithDefaultTag(ref)
//}
//_ = showTags
return makeImageInspect(image, tagOrDigest, manifestDigest, tagList), nil
}
func (mf *v2ManifestFetcher) pullSchema1(ctx context.Context, ref reference.Named, unverifiedManifest *schema1.SignedManifest) (img *image.Image, manifestDigest digest.Digest, err error) {
var verifiedManifest *schema1.Manifest
verifiedManifest, err = verifySchema1Manifest(unverifiedManifest, ref)
if err != nil {
return nil, "", err
}
// remove duplicate layers and check parent chain validity
err = fixManifestLayers(verifiedManifest)
if err != nil {
return nil, "", err
}
// Image history converted to the new format
var history []image.History
// Note that the order of this loop is in the direction of bottom-most
// to top-most, so that the downloads slice gets ordered correctly.
for i := len(verifiedManifest.FSLayers) - 1; i >= 0; i-- {
var throwAway struct {
ThrowAway bool `json:"throwaway,omitempty"`
}
if err := json.Unmarshal([]byte(verifiedManifest.History[i].V1Compatibility), &throwAway); err != nil {
return nil, "", err
}
h, err := v1.HistoryFromConfig([]byte(verifiedManifest.History[i].V1Compatibility), throwAway.ThrowAway)
if err != nil {
return nil, "", err
}
history = append(history, h)
}
rootFS := image.NewRootFS()
configRaw, err := makeRawConfigFromV1Config([]byte(verifiedManifest.History[0].V1Compatibility), rootFS, history)
config, err := json.Marshal(configRaw)
if err != nil {
return nil, "", err
}
img, err = image.NewFromJSON(config)
if err != nil {
return nil, "", err
}
manifestDigest = digest.FromBytes(unverifiedManifest.Canonical)
return img, manifestDigest, nil
}
func verifySchema1Manifest(signedManifest *schema1.SignedManifest, ref reference.Named) (m *schema1.Manifest, err error) {
// If pull by digest, then verify the manifest digest. NOTE: It is
// important to do this first, before any other content validation. If the
// digest cannot be verified, don't even bother with those other things.
if digested, isCanonical := ref.(reference.Canonical); isCanonical {
verifier, err := digest.NewDigestVerifier(digested.Digest())
if err != nil {
return nil, err
}
if _, err := verifier.Write(signedManifest.Canonical); err != nil {
return nil, err
}
if !verifier.Verified() {
err := fmt.Errorf("image verification failed for digest %s", digested.Digest())
logrus.Error(err)
return nil, err
}
}
m = &signedManifest.Manifest
if m.SchemaVersion != 1 {
return nil, fmt.Errorf("unsupported schema version %d for %q", m.SchemaVersion, ref.String())
}
if len(m.FSLayers) != len(m.History) {
return nil, fmt.Errorf("length of history not equal to number of layers for %q", ref.String())
}
if len(m.FSLayers) == 0 {
return nil, fmt.Errorf("no FSLayers in manifest for %q", ref.String())
}
return m, nil
}
func fixManifestLayers(m *schema1.Manifest) error {
imgs := make([]*image.V1Image, len(m.FSLayers))
for i := range m.FSLayers {
img := &image.V1Image{}
if err := json.Unmarshal([]byte(m.History[i].V1Compatibility), img); err != nil {
return err
}
imgs[i] = img
if err := v1.ValidateID(img.ID); err != nil {
return err
}
}
if imgs[len(imgs)-1].Parent != "" && runtime.GOOS != "windows" {
// Windows base layer can point to a base layer parent that is not in manifest.
return errors.New("Invalid parent ID in the base layer of the image.")
}
// check general duplicates to error instead of a deadlock
idmap := make(map[string]struct{})
var lastID string
for _, img := range imgs {
// skip IDs that appear after each other, we handle those later
if _, exists := idmap[img.ID]; img.ID != lastID && exists {
return fmt.Errorf("ID %+v appears multiple times in manifest", img.ID)
}
lastID = img.ID
idmap[lastID] = struct{}{}
}
// backwards loop so that we keep the remaining indexes after removing items
for i := len(imgs) - 2; i >= 0; i-- {
if imgs[i].ID == imgs[i+1].ID { // repeated ID. remove and continue
m.FSLayers = append(m.FSLayers[:i], m.FSLayers[i+1:]...)
m.History = append(m.History[:i], m.History[i+1:]...)
} else if imgs[i].Parent != imgs[i+1].ID {
return fmt.Errorf("Invalid parent ID. Expected %v, got %v.", imgs[i+1].ID, imgs[i].Parent)
}
}
return nil
}
func (mf *v2ManifestFetcher) pullSchema2(ctx context.Context, ref reference.Named, mfst *schema2.DeserializedManifest) (img *image.Image, manifestDigest digest.Digest, err error) {
manifestDigest, err = schema2ManifestDigest(ref, mfst)
if err != nil {
return nil, "", err
}
target := mfst.Target()
configChan := make(chan []byte, 1)
errChan := make(chan error, 1)
var cancel func()
ctx, cancel = context.WithCancel(ctx)
// Pull the image config
go func() {
configJSON, err := mf.pullSchema2ImageConfig(ctx, target.Digest)
if err != nil {
errChan <- err
cancel()
return
}
configChan <- configJSON
}()
var (
configJSON []byte // raw serialized image config
unmarshalledConfig image.Image // deserialized image config
)
if runtime.GOOS == "windows" {
configJSON, unmarshalledConfig, err = receiveConfig(configChan, errChan)
if err != nil {
return nil, "", err
}
if unmarshalledConfig.RootFS == nil {
return nil, "", errors.New("image config has no rootfs section")
}
}
if configJSON == nil {
configJSON, unmarshalledConfig, err = receiveConfig(configChan, errChan)
if err != nil {
return nil, "", err
}
}
img, err = image.NewFromJSON(configJSON)
if err != nil {
return nil, "", err
}
return img, manifestDigest, nil
}
func (mf *v2ManifestFetcher) pullSchema2ImageConfig(ctx context.Context, dgst digest.Digest) (configJSON []byte, err error) {
blobs := mf.repo.Blobs(ctx)
configJSON, err = blobs.Get(ctx, dgst)
if err != nil {
return nil, err
}
// Verify image config digest
verifier, err := digest.NewDigestVerifier(dgst)
if err != nil {
return nil, err
}
if _, err := verifier.Write(configJSON); err != nil {
return nil, err
}
if !verifier.Verified() {
err := fmt.Errorf("image config verification failed for digest %s", dgst)
logrus.Error(err)
return nil, err
}
return configJSON, nil
}
func receiveConfig(configChan <-chan []byte, errChan <-chan error) ([]byte, image.Image, error) {
select {
case configJSON := <-configChan:
var unmarshalledConfig image.Image
if err := json.Unmarshal(configJSON, &unmarshalledConfig); err != nil {
return nil, image.Image{}, err
}
return configJSON, unmarshalledConfig, nil
case err := <-errChan:
return nil, image.Image{}, err
// Don't need a case for ctx.Done in the select because cancellation
// will trigger an error in p.pullSchema2ImageConfig.
}
}
// allowV1Fallback checks if the error is a possible reason to fallback to v1
// (even if confirmedV2 has been set already), and if so, wraps the error in
// a fallbackError with confirmedV2 set to false. Otherwise, it returns the
// error unmodified.
func allowV1Fallback(err error) error {
switch v := err.(type) {
case errcode.Errors:
if len(v) != 0 {
if v0, ok := v[0].(errcode.Error); ok && registry.ShouldV2Fallback(v0) {
return fallbackError{err: err, confirmedV2: false}
}
}
case errcode.Error:
if registry.ShouldV2Fallback(v) {
return fallbackError{err: err, confirmedV2: false}
}
}
return err
}
// schema2ManifestDigest computes the manifest digest, and, if pulling by
// digest, ensures that it matches the requested digest.
func schema2ManifestDigest(ref reference.Named, mfst distribution.Manifest) (digest.Digest, error) {
_, canonical, err := mfst.Payload()
if err != nil {
return "", err
}
// If pull by digest, then verify the manifest digest.
if digested, isDigested := ref.(reference.Canonical); isDigested {
verifier, err := digest.NewDigestVerifier(digested.Digest())
if err != nil {
return "", err
}
if _, err := verifier.Write(canonical); err != nil {
return "", err
}
if !verifier.Verified() {
err := fmt.Errorf("manifest verification failed for digest %s", digested.Digest())
logrus.Error(err)
return "", err
}
return digested.Digest(), nil
}
return digest.FromBytes(canonical), nil
}
// pullManifestList handles "manifest lists" which point to various
// platform-specific manifests.
func (mf *v2ManifestFetcher) pullManifestList(ctx context.Context, ref reference.Named, mfstList *manifestlist.DeserializedManifestList) (img *image.Image, manifestListDigest digest.Digest, err error) {
manifestListDigest, err = schema2ManifestDigest(ref, mfstList)
if err != nil {
return nil, "", err
}
var manifestDigest digest.Digest
for _, manifestDescriptor := range mfstList.Manifests {
// TODO(aaronl): The manifest list spec supports optional
// "features" and "variant" fields. These are not yet used.
// Once they are, their values should be interpreted here.
if manifestDescriptor.Platform.Architecture == runtime.GOARCH && manifestDescriptor.Platform.OS == runtime.GOOS {
manifestDigest = manifestDescriptor.Digest
break
}
}
if manifestDigest == "" {
return nil, "", errors.New("no supported platform found in manifest list")
}
manSvc, err := mf.repo.Manifests(ctx)
if err != nil {
return nil, "", err
}
manifest, err := manSvc.Get(ctx, manifestDigest)
if err != nil {
return nil, "", err
}
manifestRef, err := reference.WithDigest(ref, manifestDigest)
if err != nil {
return nil, "", err
}
switch v := manifest.(type) {
case *schema1.SignedManifest:
img, _, err = mf.pullSchema1(ctx, manifestRef, v)
if err != nil {
return nil, "", err
}
case *schema2.DeserializedManifest:
img, _, err = mf.pullSchema2(ctx, manifestRef, v)
if err != nil {
return nil, "", err
}
default:
return nil, "", errors.New("unsupported manifest format")
}
return img, manifestListDigest, err
}
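
The selection rule in the deleted pullManifestList above is simple: walk the manifest list and take the first descriptor whose platform matches the local runtime. A self-contained restatement (editor's sketch; the descriptor type is trimmed to the two fields the loop reads, and the digests are borrowed from the v2list fixture below):

package main

import (
	"fmt"
	"runtime"
)

type platform struct{ Architecture, OS string }

type descriptor struct {
	Digest   string
	Platform platform
}

// selectManifest returns the digest of the first platform-matching entry.
func selectManifest(manifests []descriptor) (string, bool) {
	for _, d := range manifests {
		if d.Platform.Architecture == runtime.GOARCH && d.Platform.OS == runtime.GOOS {
			return d.Digest, true
		}
	}
	return "", false
}

func main() {
	list := []descriptor{
		{Digest: "sha256:7820f9a86d4ad15a2c4f0c0e5479298df2aa7c2f6871288e2ef8546f3e7b6783", Platform: platform{"ppc64le", "linux"}},
		{Digest: "sha256:ae1b0e06e8ade3a11267564a26e750585ba2259c0ecab59ab165ad1af41d1bdd", Platform: platform{"amd64", "linux"}},
	}
	fmt.Println(selectManifest(list)) // on linux/amd64: the second digest, true
}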

Binary file not shown.


@@ -0,0 +1,5 @@
{
"schemaVersion": 99999,
"name": "mitr/noversion-nonsense",
"tag": "latest"
}


@@ -0,0 +1,56 @@
{
"schemaVersion": 2,
"mediaType": "application/vnd.docker.distribution.manifest.list.v2+json",
"manifests": [
{
"mediaType": "application/vnd.docker.distribution.manifest.v1+json",
"size": 2094,
"digest": "sha256:7820f9a86d4ad15a2c4f0c0e5479298df2aa7c2f6871288e2ef8546f3e7b6783",
"platform": {
"architecture": "ppc64le",
"os": "linux"
}
},
{
"mediaType": "application/vnd.docker.distribution.manifest.v1+json",
"size": 1922,
"digest": "sha256:ae1b0e06e8ade3a11267564a26e750585ba2259c0ecab59ab165ad1af41d1bdd",
"platform": {
"architecture": "amd64",
"os": "linux",
"features": [
"sse"
]
}
},
{
"mediaType": "application/vnd.docker.distribution.manifest.v1+json",
"size": 2084,
"digest": "sha256:e4c0df75810b953d6717b8f8f28298d73870e8aa2a0d5e77b8391f16fdfbbbe2",
"platform": {
"architecture": "s390x",
"os": "linux"
}
},
{
"mediaType": "application/vnd.docker.distribution.manifest.v1+json",
"size": 2084,
"digest": "sha256:07ebe243465ef4a667b78154ae6c3ea46fdb1582936aac3ac899ea311a701b40",
"platform": {
"architecture": "arm",
"os": "linux",
"variant": "armv7"
}
},
{
"mediaType": "application/vnd.docker.distribution.manifest.v1+json",
"size": 2090,
"digest": "sha256:fb2fc0707b86dafa9959fe3d29e66af8787aee4d9a23581714be65db4265ad8a",
"platform": {
"architecture": "arm64",
"os": "linux",
"variant": "armv8"
}
}
]
}


@@ -0,0 +1,11 @@
{
"schemaVersion": 1,
"name": "mitr/buxybox",
"tag": "latest",
"architecture": "amd64",
"fsLayers": [
],
"history": [
],
"signatures": 1
}


@@ -0,0 +1,44 @@
{
"schemaVersion": 1,
"name": "mitr/buxybox",
"tag": "latest",
"architecture": "amd64",
"fsLayers": [
{
"blobSum": "sha256:5f70bf18a086007016e948b04aed3b82103a36bea41755b6cddfaf10ace3c6ef"
},
{
"blobSum": "sha256:5f70bf18a086007016e948b04aed3b82103a36bea41755b6cddfaf10ace3c6ef"
},
{
"blobSum": "sha256:5f70bf18a086007016e948b04aed3b82103a36bea41755b6cddfaf10ace3c6ef"
}
],
"history": [
{
"v1Compatibility": "{\"id\":\"f1b5eb0a1215f663765d509b6cdf3841bc2bcff0922346abb943d1342d469a97\",\"parent\":\"594075be8d003f784074cc639d970d1fa091a8197850baaae5052c01564ac535\",\"created\":\"2016-03-03T11:29:44.222098366Z\",\"container\":\"c0924f5b281a1992127d0afc065e59548ded8880b08aea4debd56d4497acb17a\",\"container_config\":{\"Hostname\":\"56f0fe1dfc95\",\"Domainname\":\"\",\"User\":\"\",\"AttachStdin\":false,\"AttachStdout\":false,\"AttachStderr\":false,\"ExposedPorts\":null,\"PublishService\":\"\",\"Tty\":false,\"OpenStdin\":false,\"StdinOnce\":false,\"Env\":null,\"Cmd\":[\"/bin/sh\",\"-c\",\"#(nop) LABEL Checksum=4fef81d30f31f9213c642881357e6662846a0f884c2366c13ebad807b4031368 ./tests/test-images/Dockerfile.2\"],\"Image\":\"594075be8d003f784074cc639d970d1fa091a8197850baaae5052c01564ac535\",\"Volumes\":null,\"VolumeDriver\":\"\",\"WorkingDir\":\"\",\"Entrypoint\":null,\"NetworkDisabled\":false,\"MacAddress\":\"\",\"OnBuild\":null,\"Labels\":{\"Checksum\":\"4fef81d30f31f9213c642881357e6662846a0f884c2366c13ebad807b4031368 ./tests/test-images/Dockerfile.2\",\"Name\":\"atomic-test-2\"}},\"docker_version\":\"1.8.2-fc22\",\"author\":\"\\\"William Temple \\u003cwtemple at redhat dot com\\u003e\\\"\",\"config\":{\"Hostname\":\"56f0fe1dfc95\",\"Domainname\":\"\",\"User\":\"\",\"AttachStdin\":false,\"AttachStdout\":false,\"AttachStderr\":false,\"ExposedPorts\":null,\"PublishService\":\"\",\"Tty\":false,\"OpenStdin\":false,\"StdinOnce\":false,\"Env\":null,\"Cmd\":null,\"Image\":\"594075be8d003f784074cc639d970d1fa091a8197850baaae5052c01564ac535\",\"Volumes\":null,\"VolumeDriver\":\"\",\"WorkingDir\":\"\",\"Entrypoint\":null,\"NetworkDisabled\":false,\"MacAddress\":\"\",\"OnBuild\":null,\"Labels\":{\"Checksum\":\"4fef81d30f31f9213c642881357e6662846a0f884c2366c13ebad807b4031368 ./tests/test-images/Dockerfile.2\",\"Name\":\"atomic-test-2\"}},\"architecture\":\"amd64\",\"os\":\"linux\",\"Size\":0}\n"
},
{
"v1Compatibility": "{\"id\":\"594075be8d003f784074cc639d970d1fa091a8197850baaae5052c01564ac535\",\"parent\":\"03dfa1cd1abe452bc2b69b8eb2362fa6beebc20893e65437906318954f6276d4\",\"created\":\"2016-03-03T11:29:38.563048924Z\",\"container\":\"fd4cf54dcd239fbae9bdade9db48e41880b436d27cb5313f60952a46ab04deff\",\"container_config\":{\"Hostname\":\"56f0fe1dfc95\",\"Domainname\":\"\",\"User\":\"\",\"AttachStdin\":false,\"AttachStdout\":false,\"AttachStderr\":false,\"ExposedPorts\":null,\"PublishService\":\"\",\"Tty\":false,\"OpenStdin\":false,\"StdinOnce\":false,\"Env\":null,\"Cmd\":[\"/bin/sh\",\"-c\",\"#(nop) LABEL Name=atomic-test-2\"],\"Image\":\"03dfa1cd1abe452bc2b69b8eb2362fa6beebc20893e65437906318954f6276d4\",\"Volumes\":null,\"VolumeDriver\":\"\",\"WorkingDir\":\"\",\"Entrypoint\":null,\"NetworkDisabled\":false,\"MacAddress\":\"\",\"OnBuild\":null,\"Labels\":{\"Name\":\"atomic-test-2\"}},\"docker_version\":\"1.8.2-fc22\",\"author\":\"\\\"William Temple \\u003cwtemple at redhat dot com\\u003e\\\"\",\"config\":{\"Hostname\":\"56f0fe1dfc95\",\"Domainname\":\"\",\"User\":\"\",\"AttachStdin\":false,\"AttachStdout\":false,\"AttachStderr\":false,\"ExposedPorts\":null,\"PublishService\":\"\",\"Tty\":false,\"OpenStdin\":false,\"StdinOnce\":false,\"Env\":null,\"Cmd\":null,\"Image\":\"03dfa1cd1abe452bc2b69b8eb2362fa6beebc20893e65437906318954f6276d4\",\"Volumes\":null,\"VolumeDriver\":\"\",\"WorkingDir\":\"\",\"Entrypoint\":null,\"NetworkDisabled\":false,\"MacAddress\":\"\",\"OnBuild\":null,\"Labels\":{\"Name\":\"atomic-test-2\"}},\"architecture\":\"amd64\",\"os\":\"linux\",\"Size\":0}\n"
},
{
"v1Compatibility": "{\"id\":\"03dfa1cd1abe452bc2b69b8eb2362fa6beebc20893e65437906318954f6276d4\",\"created\":\"2016-03-03T11:29:32.948089874Z\",\"container\":\"56f0fe1dfc95755dd6cda10f7215c9937a8d9c6348d079c581a261fd4c2f3a5f\",\"container_config\":{\"Hostname\":\"56f0fe1dfc95\",\"Domainname\":\"\",\"User\":\"\",\"AttachStdin\":false,\"AttachStdout\":false,\"AttachStderr\":false,\"ExposedPorts\":null,\"PublishService\":\"\",\"Tty\":false,\"OpenStdin\":false,\"StdinOnce\":false,\"Env\":null,\"Cmd\":[\"/bin/sh\",\"-c\",\"#(nop) MAINTAINER \\\"William Temple \\u003cwtemple at redhat dot com\\u003e\\\"\"],\"Image\":\"\",\"Volumes\":null,\"VolumeDriver\":\"\",\"WorkingDir\":\"\",\"Entrypoint\":null,\"NetworkDisabled\":false,\"MacAddress\":\"\",\"OnBuild\":null,\"Labels\":null},\"docker_version\":\"1.8.2-fc22\",\"author\":\"\\\"William Temple \\u003cwtemple at redhat dot com\\u003e\\\"\",\"config\":{\"Hostname\":\"56f0fe1dfc95\",\"Domainname\":\"\",\"User\":\"\",\"AttachStdin\":false,\"AttachStdout\":false,\"AttachStderr\":false,\"ExposedPorts\":null,\"PublishService\":\"\",\"Tty\":false,\"OpenStdin\":false,\"StdinOnce\":false,\"Env\":null,\"Cmd\":null,\"Image\":\"\",\"Volumes\":null,\"VolumeDriver\":\"\",\"WorkingDir\":\"\",\"Entrypoint\":null,\"NetworkDisabled\":false,\"MacAddress\":\"\",\"OnBuild\":null,\"Labels\":null},\"architecture\":\"amd64\",\"os\":\"linux\",\"Size\":0}\n"
}
],
"signatures": [
{
"header": {
"jwk": {
"crv": "P-256",
"kid": "OZ45:U3IG:TDOI:PMBD:NGP2:LDIW:II2U:PSBI:MMCZ:YZUP:TUUO:XPZT",
"kty": "EC",
"x": "ReC5c0J9tgXSdUL4_xzEt5RsD8kFt2wWSgJcpAcOQx8",
"y": "3sBGEqQ3ZMeqPKwQBAadN2toOUEASha18xa0WwsDF-M"
},
"alg": "ES256"
},
"signature": "dV1paJ3Ck1Ph4FcEhg_frjqxdlGdI6-ywRamk6CvMOcaOEUdCWCpCPQeBQpD2N6tGjkoG1BbstkFNflllfenCw",
"protected": "eyJmb3JtYXRMZW5ndGgiOjU0NzgsImZvcm1hdFRhaWwiOiJDbjAiLCJ0aW1lIjoiMjAxNi0wNC0xOFQyMDo1NDo0MloifQ"
}
]
}
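
An aside on the fixture above (editor's note): the JWS "protected" value is unpadded base64url-encoded JSON recording where the signed payload ends, which is exactly what sig.Payload() in docker/utils/manifest.go below relies on when stripping signatures. A quick standalone decode:

package main

import (
	"encoding/base64"
	"fmt"
)

func main() {
	protected := "eyJmb3JtYXRMZW5ndGgiOjU0NzgsImZvcm1hdFRhaWwiOiJDbjAiLCJ0aW1lIjoiMjAxNi0wNC0xOFQyMDo1NDo0MloifQ"
	b, err := base64.RawURLEncoding.DecodeString(protected)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(b)) // {"formatLength":5478,"formatTail":"Cn0","time":"2016-04-18T20:54:42Z"}
}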


@@ -0,0 +1,26 @@
{
"schemaVersion": 2,
"mediaType": "application/vnd.docker.distribution.manifest.v2+json",
"config": {
"mediaType": "application/vnd.docker.container.image.v1+json",
"size": 7023,
"digest": "sha256:b5b2b2c507a0944348e0303114d8d93aaaa081732b86451d9bce1f432a537bc7"
},
"layers": [
{
"mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
"size": 32654,
"digest": "sha256:e692418e4cbaf90ca69d05a66403747baa33ee08806650b51fab815ad7fc331f"
},
{
"mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
"size": 16724,
"digest": "sha256:3c3a4604a545cdc127456d94e421cd355bca5b528f4a9c1905b15da2eb4a4c6b"
},
{
"mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
"size": 73109,
"digest": "sha256:ec4b8955958665577945c89419d1af06b5f7636b4ac3da7f12184802ad867736"
}
]
}


@@ -0,0 +1,10 @@
{
"schemaVersion": 2,
"config": {
"mediaType": "application/vnd.docker.container.image.v1+json",
"size": 7023,
"digest": "sha256:b5b2b2c507a0944348e0303114d8d93aaaa081732b86451d9bce1f432a537bc7"
},
"layers": [
]
}


@@ -0,0 +1,8 @@
package utils
const (
// TestV2S2ManifestDigest is the Docker manifest digest of "v2s2.manifest.json"
TestV2S2ManifestDigest = "sha256:20bf21ed457b390829cdbeec8795a7bea1626991fda603e0d01b4e7f60427e55"
// TestV2S1ManifestDigest is the Docker manifest digest of "v2s1.manifest.json"
TestV2S1ManifestDigest = "sha256:077594da70fc17ec2c93cfa4e6ed1fcc26992851fb2c71861338aaf4aa9e41b1"
)

docker/utils/manifest.go (new file, 66 lines)

@@ -0,0 +1,66 @@
package utils
import (
"crypto/sha256"
"encoding/hex"
"encoding/json"
"github.com/docker/libtrust"
)
// FIXME: Should we just use docker/distribution and docker/docker implementations directly?
const (
// DockerV2Schema1MIMEType is the MIME type of a Docker manifest schema 1
DockerV2Schema1MIMEType = "application/vnd.docker.distribution.manifest.v1+json"
// DockerV2Schema2MIMEType is the MIME type of a Docker manifest schema 2
DockerV2Schema2MIMEType = "application/vnd.docker.distribution.manifest.v2+json"
// DockerV2ListMIMEType is the MIME type of a Docker manifest schema 2 list
DockerV2ListMIMEType = "application/vnd.docker.distribution.manifest.list.v2+json"
)
// GuessManifestMIMEType guesses MIME type of a manifest and returns it _if it is recognized_, or "" if unknown or unrecognized.
// FIXME? We should, in general, prefer out-of-band MIME type instead of blindly parsing the manifest,
// but we may not have such metadata available (e.g. when the manifest is a local file).
func GuessManifestMIMEType(manifest []byte) string {
// A subset of manifest fields; the rest is silently ignored by json.Unmarshal.
// Also docker/distribution/manifest.Versioned.
meta := struct {
MediaType string `json:"mediaType"`
SchemaVersion int `json:"schemaVersion"`
}{}
if err := json.Unmarshal(manifest, &meta); err != nil {
return ""
}
switch meta.MediaType {
case DockerV2Schema2MIMEType, DockerV2ListMIMEType: // A recognized type.
return meta.MediaType
}
switch meta.SchemaVersion {
case 1:
return DockerV2Schema1MIMEType
case 2: // Really should not happen; meta.MediaType should have been set. But given the data, this is our best guess.
return DockerV2Schema2MIMEType
}
return ""
}
// ManifestDigest returns the digest of a Docker manifest, with any necessary implied transformations like stripping v1s1 signatures.
func ManifestDigest(manifest []byte) (string, error) {
if GuessManifestMIMEType(manifest) == DockerV2Schema1MIMEType {
sig, err := libtrust.ParsePrettySignature(manifest, "signatures")
if err != nil {
return "", err
}
manifest, err = sig.Payload()
if err != nil {
// Coverage: This should never happen, libtrust's Payload() can fail only if joseBase64UrlDecode() fails, on a string
// that libtrust itself has joseBase64UrlEncode()d
return "", err
}
}
hash := sha256.Sum256(manifest)
return "sha256:" + hex.EncodeToString(hash[:]), nil
}
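
The test that follows asserts a fixed digest for an empty manifest; that constant is nothing more exotic than SHA-256 over zero bytes, with the "sha256:" prefix ManifestDigest adds. A standalone check:

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

func main() {
	sum := sha256.Sum256(nil) // same as hashing an empty manifest
	fmt.Println("sha256:" + hex.EncodeToString(sum[:]))
	// Output: sha256:e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
}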


@@ -0,0 +1,58 @@
package utils
import (
"io/ioutil"
"path/filepath"
"testing"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
func TestGuessManifestMIMEType(t *testing.T) {
cases := []struct {
path string
mimeType string
}{
{"v2s2.manifest.json", DockerV2Schema2MIMEType},
{"v2list.manifest.json", DockerV2ListMIMEType},
{"v2s1.manifest.json", DockerV2Schema1MIMEType},
{"v2s1-invalid-signatures.manifest.json", DockerV2Schema1MIMEType},
{"v2s2nomime.manifest.json", DockerV2Schema2MIMEType}, // It is unclear whether this one is legal, but we should guess v2s2 if anything at all.
{"unknown-version.manifest.json", ""},
{"non-json.manifest.json", ""}, // Not a manifest (nor JSON) at all
}
for _, c := range cases {
manifest, err := ioutil.ReadFile(filepath.Join("fixtures", c.path))
require.NoError(t, err)
mimeType := GuessManifestMIMEType(manifest)
assert.Equal(t, c.mimeType, mimeType)
}
}
func TestManifestDigest(t *testing.T) {
cases := []struct {
path string
digest string
}{
{"v2s2.manifest.json", TestV2S2ManifestDigest},
{"v2s1.manifest.json", TestV2S1ManifestDigest},
}
for _, c := range cases {
manifest, err := ioutil.ReadFile(filepath.Join("fixtures", c.path))
require.NoError(t, err)
digest, err := ManifestDigest(manifest)
require.NoError(t, err)
assert.Equal(t, c.digest, digest)
}
manifest, err := ioutil.ReadFile("fixtures/v2s1-invalid-signatures.manifest.json")
require.NoError(t, err)
digest, err := ManifestDigest(manifest)
assert.Error(t, err)
digest, err = ManifestDigest([]byte{})
require.NoError(t, err)
assert.Equal(t, "sha256:e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855", digest)
}


@@ -1,15 +1,22 @@
#!/usr/bin/env bash
PROJECT=github.com/runcom/skopeo
PROJECT=github.com/projectatomic/skopeo
# Downloads dependencies into vendor/ directory
mkdir -p vendor
export GOPATH="$GOPATH:${PWD}/vendor"
original_GOPATH=$GOPATH
export GOPATH="${PWD}/vendor:$GOPATH"
find="/usr/bin/find"
clone() {
local delete_vendor=true
if [ "x$1" = x--keep-vendor ]; then
delete_vendor=false
shift
fi
local vcs="$1"
local pkg="$2"
local rev="$3"
@@ -39,17 +46,18 @@ clone() {
echo -n 'rm VCS, '
( cd "$target" && rm -rf .{git,hg} )
echo -n 'rm vendor, '
( cd "$target" && rm -rf vendor Godeps/_workspace )
if $delete_vendor; then
echo -n 'rm vendor, '
( cd "$target" && rm -rf vendor Godeps/_workspace )
fi
echo done
}
clean() {
local packages=(
"${PROJECT}" # package main
"${PROJECT}/integration" # package main
)
# If $GOPATH starts with ./vendor, (go list) shows the short-form import paths for packages inside ./vendor.
# So, reset GOPATH to the external value (without ./vendor), so that the grep -v works.
local packages=($(GOPATH=$original_GOPATH go list -e ./... | grep -v "^${PROJECT}/vendor"))
local platforms=( linux/amd64 linux/386 )
local buildTags=( )
@@ -66,6 +74,10 @@ clean() {
go list -e -tags "$buildTags" -f '{{join .TestImports "\n"}}' "${packages[@]}"
done | grep -vE "^${PROJECT}" | sort -u
) )
# .TestImports does not include indirect dependencies, so do one more iteration.
imports+=( $(
go list -e -f '{{join .Deps "\n"}}' "${imports[@]}" | grep -vE "^${PROJECT}" | sort -u
) )
imports=( $(go list -e -f '{{if not .Standard}}{{.ImportPath}}{{end}}' "${imports[@]}") )
unset IFS


@@ -6,7 +6,7 @@ set -e
#
# Requirements:
# - The current directory should be a checkout of the skopeo source code
# (https://github.com/runcom/skopeo). Whatever version is checked out
# (https://github.com/projectatomic/skopeo). Whatever version is checked out
# will be built.
# - The script is intended to be run inside the docker container specified
# in the Dockerfile at the root of the source. In other words:
@@ -19,7 +19,7 @@ set -e
set -o pipefail
export SKOPEO_PKG='github.com/runcom/skopeo'
export SKOPEO_PKG='github.com/projectatomic/skopeo'
export SCRIPTDIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
export MAKEDIR="$SCRIPTDIR/make"


@@ -4,7 +4,7 @@ if [ -z "$VALIDATE_UPSTREAM" ]; then
# this is kind of an expensive check, so let's not do this twice if we
# are running more than one validate bundlescript
VALIDATE_REPO='https://github.com/runcom/skopeo.git'
VALIDATE_REPO='https://github.com/projectatomic/skopeo.git'
VALIDATE_BRANCH='master'
if [ "$TRAVIS" = 'true' -a "$TRAVIS_PULL_REQUEST" != 'false' ]; then
@@ -21,9 +21,7 @@ if [ -z "$VALIDATE_UPSTREAM" ]; then
VALIDATE_COMMIT_DIFF="$VALIDATE_UPSTREAM...$VALIDATE_HEAD"
validate_diff() {
if [ "$VALIDATE_UPSTREAM" != "$VALIDATE_HEAD" ]; then
git diff "$VALIDATE_COMMIT_DIFF" "$@"
fi
git diff "$VALIDATE_UPSTREAM" "$@"
}
validate_log() {
if [ "$VALIDATE_UPSTREAM" != "$VALIDATE_HEAD" ]; then


@@ -8,8 +8,7 @@ unset IFS
badFiles=()
for f in "${files[@]}"; do
# we use "git show" here to validate that what's committed is formatted
if [ "$(git show "$VALIDATE_HEAD:$f" | gofmt -s -l)" ]; then
if [ "$(gofmt -s -l < $f)" ]; then
badFiles+=( "$f" )
fi
done


@@ -11,7 +11,6 @@ unset IFS
errors=()
for f in "${files[@]}"; do
# we use "git show" here to validate that what's committed passes go lint
failedLint=$(golint "$f")
if [ "$failedLint" ]; then
errors+=( "$failedLint" )


@@ -8,7 +8,6 @@ unset IFS
errors=()
for f in "${files[@]}"; do
# we use "git show" here to validate that what's committed passes go vet
failedVet=$(go vet "$f")
if [ "$failedVet" ]; then
errors+=( "$failedVet" )


@@ -6,21 +6,22 @@ rm -rf vendor/
source 'hack/.vendor-helpers.sh'
clone git github.com/codegangsta/cli v1.2.0
clone git github.com/Sirupsen/logrus v0.8.7
clone git github.com/vbatts/tar-split master
clone git github.com/gorilla/mux master
clone git github.com/gorilla/context master
clone git golang.org/x/net master https://github.com/golang/net.git
clone git github.com/Sirupsen/logrus v0.10.0
clone git github.com/go-check/check v1
clone git github.com/docker/docker v1.10.2
clone git github.com/docker/engine-api v0.2.3
clone git github.com/docker/distribution 0f2d99b13ae0cfbcf118eff103e6e680b726b47e
clone git github.com/docker/go-connections master
clone git github.com/docker/go-units master
clone git github.com/stretchr/testify v1.1.3
clone git github.com/davecgh/go-spew master
clone git github.com/pmezard/go-difflib master
clone git github.com/docker/docker 0f5c9d301b9b1cca66b3ea0f9dec3b5317d3686d
clone git github.com/docker/distribution master
clone git github.com/docker/libtrust master
clone git github.com/opencontainers/runc master
clone git github.com/mtrmac/gpgme master
# openshift/origin's k8s dependencies as of OpenShift v1.1.5
clone git github.com/golang/glog 44145f04b68cf362d9c4df2182967c2275eaefed
clone git k8s.io/kubernetes 4a3f9c5b19c7ff804cbc1bf37a15c044ca5d2353 https://github.com/openshift/kubernetes
clone git github.com/ghodss/yaml 73d445a93680fa1a78ae23a5839bad48f32ba1ee
clone git gopkg.in/yaml.v2 d466437aa4adc35830964cffc5b5f262c63ddcb4
clone git github.com/imdario/mergo 6633656539c1639d9d78127b7d47c622b5d7b6dc
clean


@@ -1,55 +0,0 @@
package main
import (
"fmt"
"strings"
"github.com/codegangsta/cli"
"github.com/runcom/skopeo/docker"
"github.com/runcom/skopeo/types"
)
type imgKind int
const (
imgTypeDocker = "docker://"
imgTypeAppc = "appc://"
kindUnknown = iota
kindDocker
kindAppc
)
func getImgType(img string) imgKind {
if strings.HasPrefix(img, imgTypeDocker) {
return kindDocker
}
if strings.HasPrefix(img, imgTypeAppc) {
return kindAppc
}
// TODO(runcom): v2 will support this
//return kindUnknown
return kindDocker
}
func inspect(c *cli.Context) (*types.ImageInspect, error) {
var (
imgInspect *types.ImageInspect
err error
name = c.Args().First()
kind = getImgType(name)
)
switch kind {
case kindDocker:
imgInspect, err = docker.GetData(c, strings.Replace(name, imgTypeDocker, "", -1))
if err != nil {
return nil, err
}
case kindAppc:
return nil, fmt.Errorf("not implemented yet")
default:
return nil, fmt.Errorf("%s image is invalid, please use either 'docker://' or 'appc://'", name)
}
return imgInspect, nil
}


@@ -31,7 +31,7 @@ type SkopeoSuite struct {
regV1 *testRegistryV1
regV2 *testRegistryV2
regV2Shema1 *testRegistryV2
regV1WithAuth *testRegistryV1
regV1WithAuth *testRegistryV1 // does v1 support auth?
regV2WithAuth *testRegistryV2
}
@@ -47,7 +47,7 @@ func (s *SkopeoSuite) SetUpTest(c *check.C) {
_, err := exec.LookPath(skopeoBinary)
c.Assert(err, check.IsNil)
s.regV1 = setupRegistryV1At(c, privateRegistryURL0, false) // not used
s.regV1 = setupRegistryV1At(c, privateRegistryURL0, false) // TODO:(runcom)
s.regV2 = setupRegistryV2At(c, privateRegistryURL1, false, false)
s.regV2Shema1 = setupRegistryV2At(c, privateRegistryURL2, false, true)
s.regV1WithAuth = setupRegistryV1At(c, privateRegistryURL3, true) // not used
@@ -81,19 +81,27 @@ func (s *SkopeoSuite) TestVersion(c *check.C) {
}
}
var (
errFetchManifest = "error fetching manifest: status code: %s"
)
func (s *SkopeoSuite) TestCanAuthToPrivateRegistryV2WithoutDockerCfg(c *check.C) {
out, err := exec.Command(skopeoBinary, "--docker-cfg=''", "--username="+s.regV2WithAuth.username, "--password="+s.regV2WithAuth.password, fmt.Sprintf("%s/busybox:latest", s.regV2WithAuth.url)).CombinedOutput()
// TODO(runcom)
c.Skip("we need to restore --username --password flags!")
out, err := exec.Command(skopeoBinary, "--docker-cfg=''", "--username="+s.regV2WithAuth.username, "--password="+s.regV2WithAuth.password, "inspect", fmt.Sprintf("docker://%s/busybox:latest", s.regV2WithAuth.url)).CombinedOutput()
c.Assert(err, check.NotNil, check.Commentf(string(out)))
wanted := "Error: image busybox not found"
wanted := fmt.Sprintf(errFetchManifest, "401")
if !strings.Contains(string(out), wanted) {
c.Fatalf("wanted %s, got %s", wanted, string(out))
}
}
func (s *SkopeoSuite) TestNeedAuthToPrivateRegistryV2WithoutDockerCfg(c *check.C) {
out, err := exec.Command(skopeoBinary, "--docker-cfg=''", fmt.Sprintf("%s/busybox:latest", s.regV2WithAuth.url)).CombinedOutput()
// TODO(runcom): mock the empty docker-cfg by removing it in the test itself (?)
c.Skip("mock empty docker config")
out, err := exec.Command(skopeoBinary, "--docker-cfg=''", "inspect", fmt.Sprintf("docker://%s/busybox:latest", s.regV2WithAuth.url)).CombinedOutput()
c.Assert(err, check.NotNil, check.Commentf(string(out)))
wanted := "no basic auth credentials"
wanted := fmt.Sprintf(errFetchManifest, "401")
if !strings.Contains(string(out), wanted) {
c.Fatalf("wanted %s, got %s", wanted, string(out))
}
@@ -102,13 +110,13 @@ func (s *SkopeoSuite) TestNeedAuthToPrivateRegistryV2WithoutDockerCfg(c *check.C
// TODO(runcom): as soon as we can push to registries ensure you can inspect here
// not just get image not found :)
func (s *SkopeoSuite) TestNoNeedAuthToPrivateRegistryV2ImageNotFound(c *check.C) {
out, err := exec.Command(skopeoBinary, fmt.Sprintf("%s/busybox:latest", s.regV2.url)).CombinedOutput()
out, err := exec.Command(skopeoBinary, "inspect", fmt.Sprintf("docker://%s/busybox:latest", s.regV2.url)).CombinedOutput()
c.Assert(err, check.NotNil, check.Commentf(string(out)))
wanted := "Error: image busybox not found"
wanted := fmt.Sprintf(errFetchManifest, "404")
if !strings.Contains(string(out), wanted) {
c.Fatalf("wanted %s, got %s", wanted, string(out))
}
wanted = "no basic auth credentials"
wanted = fmt.Sprintf(errFetchManifest, "401")
if strings.Contains(string(out), wanted) {
c.Fatalf("not wanted %s, got %s", wanted, string(out))
}


@@ -0,0 +1,26 @@
{
"schemaVersion": 2,
"mediaType": "application/vnd.docker.distribution.manifest.v2+json",
"config": {
"mediaType": "application/vnd.docker.container.image.v1+json",
"size": 7023,
"digest": "sha256:b5b2b2c507a0944348e0303114d8d93aaaa081732b86451d9bce1f432a537bc7"
},
"layers": [
{
"mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
"size": 32654,
"digest": "sha256:e692418e4cbaf90ca69d05a66403747baa33ee08806650b51fab815ad7fc331f"
},
{
"mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
"size": 16724,
"digest": "sha256:3c3a4604a545cdc127456d94e421cd355bca5b528f4a9c1905b15da2eb4a4c6b"
},
{
"mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
"size": 73109,
"digest": "sha256:ec4b8955958665577945c89419d1af06b5f7636b4ac3da7f12184802ad867736"
}
]
}


@@ -0,0 +1,6 @@
package main
const (
// TestImageManifestDigest is the Docker manifest digest of "fixtures/image.manifest.json"
TestImageManifestDigest = "sha256:20bf21ed457b390829cdbeec8795a7bea1626991fda603e0d01b4e7f60427e55"
)


@@ -13,15 +13,21 @@ import (
)
const (
binaryV1 = "docker-registry"
binaryV2 = "registry-v2"
binaryV2Schema1 = "registry-v2-schema1"
)
type testRegistryV1 struct {
cmd *exec.Cmd
url string
dir string
}
func setupRegistryV1At(c *check.C, url string, auth bool) *testRegistryV1 {
return &testRegistryV1{}
return &testRegistryV1{
url: url,
}
}
type testRegistryV2 struct {

integration/signing_test.go (new file, 114 lines)

@@ -0,0 +1,114 @@
package main
import (
"errors"
"io"
"io/ioutil"
"os"
"os/exec"
"strings"
"github.com/go-check/check"
)
const (
gpgBinary = "gpg"
)
func init() {
check.Suite(&SigningSuite{})
}
type SigningSuite struct {
gpgHome string
fingerprint string
}
func findFingerprint(lineBytes []byte) (string, error) {
lines := string(lineBytes)
for _, line := range strings.Split(lines, "\n") {
fields := strings.Split(line, ":")
if len(fields) >= 10 && fields[0] == "fpr" {
return fields[9], nil
}
}
return "", errors.New("No fingerprint found")
}
// ConsumeAndLogOutput takes (f, err) from an exec.*Pipe(), and logs all output read from f to c.
func ConsumeAndLogOutput(c *check.C, id string, f io.ReadCloser, err error) {
c.Assert(err, check.IsNil)
go func() {
defer func() {
f.Close()
c.Logf("Output %s: Closed", id)
}()
buf := make([]byte, 1024) // a zero-length buffer would make Read return 0 immediately and never consume the pipe
for {
c.Logf("Output %s: waiting", id)
n, err := f.Read(buf)
c.Logf("Output %s: got %d,%#v: %#v", id, n, err, buf[:n])
if n <= 0 {
break
}
}
}()
}
func (s *SigningSuite) SetUpTest(c *check.C) {
_, err := exec.LookPath(skopeoBinary)
c.Assert(err, check.IsNil)
s.gpgHome, err = ioutil.TempDir("", "skopeo-gpg")
c.Assert(err, check.IsNil)
os.Setenv("GNUPGHOME", s.gpgHome)
cmd := exec.Command(gpgBinary, "--homedir", s.gpgHome, "--batch", "--gen-key")
stdin, err := cmd.StdinPipe()
c.Assert(err, check.IsNil)
stdout, err := cmd.StdoutPipe()
ConsumeAndLogOutput(c, "gen-key stdout", stdout, err)
stderr, err := cmd.StderrPipe()
ConsumeAndLogOutput(c, "gen-key stderr", stderr, err)
err = cmd.Start()
c.Assert(err, check.IsNil)
_, err = stdin.Write([]byte("Key-Type: RSA\nName-Real: Testing user\n%commit\n"))
c.Assert(err, check.IsNil)
err = stdin.Close()
c.Assert(err, check.IsNil)
err = cmd.Wait()
c.Assert(err, check.IsNil)
lines, err := exec.Command(gpgBinary, "--homedir", s.gpgHome, "--with-colons", "--no-permission-warning", "--fingerprint").Output()
c.Assert(err, check.IsNil)
s.fingerprint, err = findFingerprint(lines)
c.Assert(err, check.IsNil)
}
func (s *SigningSuite) TearDownTest(c *check.C) {
if s.gpgHome != "" {
err := os.RemoveAll(s.gpgHome)
c.Assert(err, check.IsNil)
}
s.gpgHome = ""
os.Unsetenv("GNUPGHOME")
}
func (s *SigningSuite) TestSignVerifySmoke(c *check.C) {
manifestPath := "fixtures/image.manifest.json"
dockerReference := "testing/smoketest"
sigOutput, err := ioutil.TempFile("", "sig")
c.Assert(err, check.IsNil)
defer os.Remove(sigOutput.Name())
out, err := exec.Command(skopeoBinary, "standalone-sign", "-o", sigOutput.Name(),
manifestPath, dockerReference, s.fingerprint).CombinedOutput()
c.Assert(err, check.IsNil, check.Commentf("%s", out))
c.Assert(string(out), check.Equals, "")
out, err = exec.Command(skopeoBinary, "standalone-verify", manifestPath,
dockerReference, s.fingerprint, sigOutput.Name()).CombinedOutput()
c.Assert(err, check.IsNil, check.Commentf("%s", out))
c.Assert(string(out), check.Equals, "Signature verified, digest "+TestImageManifestDigest+"\n")
}
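Outside the test harness, the same smoke flow looks like this on the command line (the fingerprint is a placeholder for a key in the current GNUPGHOME; the final line is the verify output asserted above):

$ skopeo standalone-sign -o image.signature fixtures/image.manifest.json testing/smoketest $FINGERPRINT
$ skopeo standalone-verify fixtures/image.manifest.json testing/smoketest $FINGERPRINT image.signature
Signature verified, digest sha256:20bf21ed457b390829cdbeec8795a7bea1626991fda603e0d01b4e7f60427e55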

66
main.go

@@ -1,66 +0,0 @@
package main
import (
"encoding/json"
"fmt"
"os"
"github.com/Sirupsen/logrus"
"github.com/codegangsta/cli"
"github.com/docker/docker/cliconfig"
)
const (
version = "0.1.5-dev"
usage = "inspect images on a registry"
)
var inspectCmd = func(c *cli.Context) {
imgInspect, err := inspect(c)
if err != nil {
logrus.Fatal(err)
}
out, err := json.Marshal(imgInspect)
if err != nil {
logrus.Fatal(err)
}
fmt.Println(string(out))
}
func main() {
app := cli.NewApp()
app.Name = "skopeo"
app.Version = version
app.Usage = usage
app.Flags = []cli.Flag{
cli.BoolFlag{
Name: "debug",
Usage: "enable debug output",
},
cli.StringFlag{
Name: "username",
Value: "",
Usage: "registry username",
},
cli.StringFlag{
Name: "password",
Value: "",
Usage: "registry password",
},
cli.StringFlag{
Name: "docker-cfg",
Value: cliconfig.ConfigDir(),
Usage: "Docker's cli config for auth",
},
}
app.Before = func(c *cli.Context) error {
if c.GlobalBool("debug") {
logrus.SetLevel(logrus.DebugLevel)
}
return nil
}
app.Action = inspectCmd
if err := app.Run(os.Args); err != nil {
logrus.Fatal(err)
}
}


@@ -1,14 +0,0 @@
% SKOPEO(1)
% Antonio Murdaca
% JANUARY 2016
# NAME
skopeo - Inspect Docker images and repositories on registries
# SYNOPSIS
# DESCRIPTION
# ARGUMENTS
# AUTHORS
Antonio Murdaca <runcom@redhat.com>

118
man1/skopeo.1 Normal file

@@ -0,0 +1,118 @@
.\" To review this file formatted
.\" groff -man -Tascii skopeo.1
.\"
.de FN
\fI\|\\$1\|\fP
..
.TH "skopeo" "1" "2016-04-21" "Linux" "Linux Programmer's Manual"
.SH NAME
skopeo \(em Inspect Docker images and repositories on registries
.SH SYNOPSIS
\fBskopeo copy\fR [\fB--sign-by=\fRkey-ID] source-location destination-location
.PP
\fBskopeo inspect\fR image-name [\fB--raw\fR]
.PP
\fBskopeo layers\fR image-name
.PP
\fBskopeo standalone-sign\fR manifest docker-reference key-fingerprint \%\fB--output\fR|\fB-o\fR signature
.PP
\fBskopeo standalone-verify\fR manifest docker-reference key-fingerprint \%signature
.PP
\fBskopeo help\fR [command]
.SH DESCRIPTION
\fBskopeo\fR is a command line utility which is able to inspect a repository on a Docker registry and fetch image
layers. It fetches the repository's manifest and is able to show you a \fBdocker inspect\fR-like JSON output about a
whole repository or a tag. In contrast to \fBdocker inspect\fR, this tool helps you gather useful information about a
repository or a tag without requiring you to run \fBdocker pull\fR - e.g. which tags are available for the given
repository? Which labels does the image have?
.SH OPTIONS
.B "--debug"
enable debug output
.PP
.B "--username"
Username to use to authenticate to the given registry
.PP
.B --password
Password to use to authenticate to the given registry
.PP
.B "--cert-path"
Path to certificates to use to authenticate to the given registry (cert.pem, key.pem)
.PP
.B "--tls-verify"
Whether to verify certificates or not
.PP
.B "--help, -h"
Show help
.PP
.B "--version, -v"
print the version number
.SH COMMANDS
.TP
.B copy
Copy an image (manifest, filesystem layers, signatures) from one location to another.
.sp
.B source-location
and
.B destination-location
can be \fBdocker://\fRdocker-reference, \fBdir:\fRlocal-path, or \fBatomic:\fRimagestream-name\fB:\fRtag .
.sp
\fB\-\-sign\-by=\fRkey-id
Add a signature by the specified key ID for image name corresponding to \fBdestination-location\fR.
Existing signatures, if any, are preserved as well.
.TP
.B inspect
Return low-level information on images in a registry
.sp
.B image-name
name of image to retrieve information about
.br
.B "--raw"
output raw manifest, default is to format in JSON
.TP
.B layers
Get image layers
.sp
.B image-name
name of the image whose layers to retrieve
.TP
.B standalone-sign
Create a signature using local files.
This is primarily a debugging tool and should not be part of your normal operational workflow.
.sp
.B manifest
path to file containing manifest of image
.br
.B docker-reference
docker reference of blob to be signed
.br
.B key-fingerprint
key identity to use for signing
.br
.B ""--output, -o"
write signature to given file
.TP
.B standalone-verify
Verify a signature using local files; the digest will be printed on success.
This is primarily a debugging tool and should not be part of your normal operational workflow.
.sp
.B manifest
Path to file containing manifest of image
.br
.B docker-reference
docker reference of signed blob
.br
.B key-fingerprint
key identity to use for verification
.br
.B signature
Path to file containing signature
.TP
.B help
show help for \fBskopeo\fR
.SH AUTHORS
Antonio Murdaca <runcom@redhat.com>
.br
Miloslav Trmac <mitr@redhat.com>
.br
Jhon Honce <jhonce@redhat.com>
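A concrete invocation of the copy command described above, with hypothetical names (a Docker Hub source, an OpenShift image stream destination, and signing enabled):

$ skopeo copy --sign-by=$KEY_FINGERPRINT docker://busybox:latest atomic:myproject/busybox:latest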

File diff suppressed because it is too large

398
openshift/openshift.go Normal file

@@ -0,0 +1,398 @@
package openshift
import (
"bytes"
"encoding/json"
"errors"
"fmt"
"io"
"io/ioutil"
"net/http"
"net/url"
"regexp"
"strings"
"github.com/Sirupsen/logrus"
"github.com/projectatomic/skopeo/docker"
"github.com/projectatomic/skopeo/docker/utils"
"github.com/projectatomic/skopeo/types"
"github.com/projectatomic/skopeo/version"
)
// openshiftClient is configuration for dealing with a single image stream, for reading or writing.
type openshiftClient struct {
// Values from Kubernetes configuration
baseURL *url.URL
httpClient *http.Client
bearerToken string // "" if not used
username string // "" if not used
password string // if username != ""
// Values specific to this image
namespace string
stream string
tag string
}
// FIXME: Is imageName like this a good way to refer to OpenShift images?
var imageNameRegexp = regexp.MustCompile("^([^:/]*)/([^:/]*):([^:/]*)$")
// newOpenshiftClient creates a new openshiftClient for the specified image.
func newOpenshiftClient(imageName string) (*openshiftClient, error) {
// Overall, this is modelled on openshift/origin/pkg/cmd/util/clientcmd.New().ClientConfig() and openshift/origin/pkg/client.
cmdConfig := defaultClientConfig()
logrus.Debugf("cmdConfig: %#v", cmdConfig)
restConfig, err := cmdConfig.ClientConfig()
if err != nil {
return nil, err
}
// REMOVED: SetOpenShiftDefaults (values are not overridable in config files, so hard-coded these defaults.)
logrus.Debugf("restConfig: %#v", restConfig)
baseURL, httpClient, err := restClientFor(restConfig)
if err != nil {
return nil, err
}
logrus.Debugf("URL: %#v", *baseURL)
m := imageNameRegexp.FindStringSubmatch(imageName)
if m == nil || len(m) != 4 {
return nil, fmt.Errorf("Invalid image reference %s, %#v", imageName, m)
}
return &openshiftClient{
baseURL: baseURL,
httpClient: httpClient,
bearerToken: restConfig.BearerToken,
username: restConfig.Username,
password: restConfig.Password,
namespace: m[1],
stream: m[2],
tag: m[3],
}, nil
}
// doRequest performs a correctly authenticated request to a specified path, and returns response body or an error object.
func (c *openshiftClient) doRequest(method, path string, requestBody []byte) ([]byte, error) {
url := *c.baseURL
url.Path = path
var requestBodyReader io.Reader
if requestBody != nil {
logrus.Debugf("Will send body: %s", requestBody)
requestBodyReader = bytes.NewReader(requestBody)
}
req, err := http.NewRequest(method, url.String(), requestBodyReader)
if err != nil {
return nil, err
}
if len(c.bearerToken) != 0 {
req.Header.Set("Authorization", "Bearer "+c.bearerToken)
} else if len(c.username) != 0 {
req.SetBasicAuth(c.username, c.password)
}
req.Header.Set("Accept", "application/json, */*")
req.Header.Set("User-Agent", fmt.Sprintf("skopeo/%s", version.Version))
if requestBody != nil {
req.Header.Set("Content-Type", "application/json")
}
logrus.Debugf("%s %s", method, url)
res, err := c.httpClient.Do(req)
if err != nil {
return nil, err
}
defer res.Body.Close()
body, err := ioutil.ReadAll(res.Body)
if err != nil {
return nil, err
}
logrus.Debugf("Got body: %s", body)
// FIXME: Just throwing this useful information away only to try to guess later...
logrus.Debugf("Got content-type: %s", res.Header.Get("Content-Type"))
var status status
statusValid := false
if err := json.Unmarshal(body, &status); err == nil && len(status.Status) > 0 {
statusValid = true
}
switch {
case res.StatusCode == http.StatusSwitchingProtocols: // FIXME?! No idea why this weird case exists in k8s.io/kubernetes/pkg/client/restclient.
if statusValid && status.Status != "Success" {
return nil, errors.New(status.Message)
}
case res.StatusCode >= http.StatusOK && res.StatusCode <= http.StatusPartialContent:
// OK.
default:
if statusValid {
return nil, errors.New(status.Message)
}
return nil, fmt.Errorf("HTTP error: status code: %d, body: %s", res.StatusCode, string(body))
}
return body, nil
}
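For illustration, a hypothetical failure body matching the status struct declared at the end of this file; when such an object parses and its status is not "Success", doRequest surfaces the message field instead of the raw HTTP body:

{"status": "Failure", "message": "imagestreams \"nosuchstream\" not found", "code": 404}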
// canonicalDockerReference returns a canonical reference we use for signing OpenShift images.
// FIXME: This is, strictly speaking, a namespace conflict with images placed in a Docker registry running on the same host.
// Do we need to do something else, perhaps disambiguate (port number?) or namespace Docker and OpenShift separately?
func (c *openshiftClient) canonicalDockerReference() string {
return fmt.Sprintf("%s/%s/%s:%s", c.baseURL.Host, c.namespace, c.stream, c.tag)
}
// convertDockerImageReference takes an image API DockerImageReference value and returns a reference we can actually use;
// currently OpenShift stores the cluster-internal service IPs here, which are unusable from the outside.
func (c *openshiftClient) convertDockerImageReference(ref string) (string, error) {
parts := strings.SplitN(ref, "/", 2)
if len(parts) != 2 {
return "", fmt.Errorf("Invalid format of docker reference %s: missing '/'", ref)
}
// Sanity check that the reference is at least plausibly similar, i.e. uses the hard-coded port we expect.
if !strings.HasSuffix(parts[0], ":5000") {
return "", fmt.Errorf("Invalid format of docker reference %s: expecting port 5000", ref)
}
return c.dockerRegistryHostPart() + "/" + parts[1], nil
}
// dockerRegistryHostPart returns the host:port of the embedded Docker Registry API endpoint
// FIXME: There seems to be no way to discover the correct host:port using the API, so hard-code our knowledge
// about how the OpenShift Atomic Registry is configured, per examples/atomic-registry/run.sh:
// -p OPENSHIFT_OAUTH_PROVIDER_URL=https://${INSTALL_HOST}:8443,COCKPIT_KUBE_URL=https://${INSTALL_HOST},REGISTRY_HOST=${INSTALL_HOST}:5000
func (c *openshiftClient) dockerRegistryHostPart() string {
return strings.SplitN(c.baseURL.Host, ":", 2)[0] + ":5000"
}
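Worked example with a hypothetical cluster: for a baseURL of https://master.example.com:8443, dockerRegistryHostPart() yields master.example.com:5000, so convertDockerImageReference rewrites a cluster-internal reference as

172.30.1.1:5000/myproject/mystream -> master.example.com:5000/myproject/mystream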
type openshiftImageSource struct {
client *openshiftClient
// Values specific to this image
certPath string // Only for parseDockerImageSource
tlsVerify bool // Only for parseDockerImageSource
// State
docker types.ImageSource // The Docker Registry endpoint, or nil if not resolved yet
imageStreamImageName string // Resolved image identifier, or "" if not known yet
}
// NewOpenshiftImageSource creates a new ImageSource for the specified image and connection specification.
func NewOpenshiftImageSource(imageName, certPath string, tlsVerify bool) (types.ImageSource, error) {
client, err := newOpenshiftClient(imageName)
if err != nil {
return nil, err
}
return &openshiftImageSource{
client: client,
certPath: certPath,
tlsVerify: tlsVerify,
}, nil
}
// IntendedDockerReference returns the full, unambiguous, Docker reference for this image, _as specified by the user_
// (not as the image itself, or its underlying storage, claims). This can be used e.g. to determine which public keys are trusted for this image.
// May be "" if unknown.
func (s *openshiftImageSource) IntendedDockerReference() string {
return s.client.canonicalDockerReference()
}
func (s *openshiftImageSource) GetManifest(mimetypes []string) ([]byte, string, error) {
if err := s.ensureImageIsResolved(); err != nil {
return nil, "", err
}
return s.docker.GetManifest(mimetypes)
}
func (s *openshiftImageSource) GetLayer(digest string) (io.ReadCloser, error) {
if err := s.ensureImageIsResolved(); err != nil {
return nil, err
}
return s.docker.GetLayer(digest)
}
func (s *openshiftImageSource) GetSignatures() ([][]byte, error) {
return nil, nil
}
// ensureImageIsResolved sets up s.docker and s.imageStreamImageName
func (s *openshiftImageSource) ensureImageIsResolved() error {
if s.docker != nil {
return nil
}
// FIXME: validate components per validation.IsValidPathSegmentName?
path := fmt.Sprintf("/oapi/v1/namespaces/%s/imagestreams/%s", s.client.namespace, s.client.stream)
body, err := s.client.doRequest("GET", path, nil)
if err != nil {
return err
}
// Note: This does absolutely no kind/version checking or conversions.
var is imageStream
if err := json.Unmarshal(body, &is); err != nil {
return err
}
var te *tagEvent
for _, tag := range is.Status.Tags {
if tag.Tag != s.client.tag {
continue
}
if len(tag.Items) > 0 {
te = &tag.Items[0]
break
}
}
if te == nil {
return fmt.Errorf("No matching tag found")
}
logrus.Debugf("tag event %#v", te)
dockerRef, err := s.client.convertDockerImageReference(te.DockerImageReference)
if err != nil {
return err
}
logrus.Debugf("Resolved reference %#v", dockerRef)
d, err := docker.NewDockerImageSource(dockerRef, s.certPath, s.tlsVerify)
if err != nil {
return err
}
s.docker = d
s.imageStreamImageName = te.Image
return nil
}
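The image stream GET response consumed here looks roughly like this (hypothetical values; the shape follows the imageStream structs at the end of this file, and only the first item of the matching tag is used):

{
  "status": {
    "dockerImageRepository": "172.30.1.1:5000/myproject/mystream",
    "tags": [
      {
        "tag": "latest",
        "items": [
          {"dockerImageReference": "172.30.1.1:5000/myproject/mystream@sha256:...", "image": "sha256:..."}
        ]
      }
    ]
  }
}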
type openshiftImageDestination struct {
client *openshiftClient
docker types.ImageDestination // The Docker Registry endpoint
}
// NewOpenshiftImageDestination creates a new ImageDestination for the specified image and connection specification.
func NewOpenshiftImageDestination(imageName, certPath string, tlsVerify bool) (types.ImageDestination, error) {
client, err := newOpenshiftClient(imageName)
if err != nil {
return nil, err
}
// FIXME: Should this always use a digest, not a tag? Uploading to Docker by tag requires the tag _inside_ the manifest to match,
// i.e. a single signed image cannot be available under multiple tags. But with types.ImageDestination, we don't know
// the manifest digest at this point.
dockerRef := fmt.Sprintf("%s/%s/%s:%s", client.dockerRegistryHostPart(), client.namespace, client.stream, client.tag)
docker, err := docker.NewDockerImageDestination(dockerRef, certPath, tlsVerify)
if err != nil {
return nil, err
}
return &openshiftImageDestination{
client: client,
docker: docker,
}, nil
}
func (d *openshiftImageDestination) CanonicalDockerReference() (string, error) {
return d.client.canonicalDockerReference(), nil
}
func (d *openshiftImageDestination) PutManifest(manifest []byte) error {
// Note: This does absolutely no kind/version checking or conversions.
manifestDigest, err := utils.ManifestDigest(manifest)
if err != nil {
return err
}
// FIXME: We can't do what repositorymiddleware.go does because we don't know the internal address. Does any of this matter?
dockerImageReference := fmt.Sprintf("%s/%s/%s@%s", d.client.dockerRegistryHostPart(), d.client.namespace, d.client.stream, manifestDigest)
ism := imageStreamMapping{
typeMeta: typeMeta{
Kind: "ImageStreamMapping",
APIVersion: "v1",
},
objectMeta: objectMeta{
Namespace: d.client.namespace,
Name: d.client.stream,
},
Image: image{
objectMeta: objectMeta{
Name: manifestDigest,
},
DockerImageReference: dockerImageReference,
DockerImageManifest: string(manifest),
},
Tag: d.client.tag,
}
body, err := json.Marshal(ism)
if err != nil {
return err
}
// FIXME: validate components per validation.IsValidPathSegmentName?
path := fmt.Sprintf("/oapi/v1/namespaces/%s/imagestreammappings", d.client.namespace)
body, err = d.client.doRequest("POST", path, body)
if err != nil {
return err
}
return d.docker.PutManifest(manifest)
}
func (d *openshiftImageDestination) PutLayer(digest string, stream io.Reader) error {
return d.docker.PutLayer(digest, stream)
}
func (d *openshiftImageDestination) PutSignatures(signatures [][]byte) error {
if len(signatures) != 0 {
return fmt.Errorf("Pushing signatures to an Atomic Registry is not supported")
}
return nil
}
// These structs are subsets of github.com/openshift/origin/pkg/image/api/v1 and its dependencies.
type imageStream struct {
Status imageStreamStatus `json:"status,omitempty"`
}
type imageStreamStatus struct {
DockerImageRepository string `json:"dockerImageRepository"`
Tags []namedTagEventList `json:"tags,omitempty"`
}
type namedTagEventList struct {
Tag string `json:"tag"`
Items []tagEvent `json:"items"`
}
type tagEvent struct {
DockerImageReference string `json:"dockerImageReference"`
Image string `json:"image"`
}
type imageStreamImage struct {
Image image `json:"image"`
}
type image struct {
objectMeta `json:"metadata,omitempty"`
DockerImageReference string `json:"dockerImageReference,omitempty"`
// DockerImageMetadata runtime.RawExtension `json:"dockerImageMetadata,omitempty"`
DockerImageMetadataVersion string `json:"dockerImageMetadataVersion,omitempty"`
DockerImageManifest string `json:"dockerImageManifest,omitempty"`
// DockerImageLayers []ImageLayer `json:"dockerImageLayers"`
}
type imageStreamMapping struct {
typeMeta `json:",inline"`
objectMeta `json:"metadata,omitempty"`
Image image `json:"image"`
Tag string `json:"tag"`
}
type typeMeta struct {
Kind string `json:"kind,omitempty"`
APIVersion string `json:"apiVersion,omitempty"`
}
type objectMeta struct {
Name string `json:"name,omitempty"`
GenerateName string `json:"generateName,omitempty"`
Namespace string `json:"namespace,omitempty"`
SelfLink string `json:"selfLink,omitempty"`
ResourceVersion string `json:"resourceVersion,omitempty"`
Generation int64 `json:"generation,omitempty"`
DeletionGracePeriodSeconds *int64 `json:"deletionGracePeriodSeconds,omitempty"`
Labels map[string]string `json:"labels,omitempty"`
Annotations map[string]string `json:"annotations,omitempty"`
}
// A subset of k8s.io/kubernetes/pkg/api/unversioned/Status
type status struct {
Status string `json:"status,omitempty"`
Message string `json:"message,omitempty"`
// Reason StatusReason `json:"reason,omitempty"`
// Details *StatusDetails `json:"details,omitempty"`
Code int32 `json:"code,omitempty"`
}
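Putting these structs together, the ImageStreamMapping that PutManifest POSTs serializes roughly as follows (namespace, stream, tag, and host are hypothetical; the digest reuses the test fixture value as an example):

{
  "kind": "ImageStreamMapping",
  "apiVersion": "v1",
  "metadata": {"namespace": "myproject", "name": "mystream"},
  "image": {
    "metadata": {"name": "sha256:20bf21ed457b390829cdbeec8795a7bea1626991fda603e0d01b4e7f60427e55"},
    "dockerImageReference": "master.example.com:5000/myproject/mystream@sha256:20bf21ed457b390829cdbeec8795a7bea1626991fda603e0d01b4e7f60427e55",
    "dockerImageManifest": "<the raw manifest JSON, as a string>"
  },
  "tag": "latest"
}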


@@ -1,13 +1,13 @@
package reference
package reference // COPY WITH EDITS FROM DOCKER/DOCKER
import (
"errors"
"fmt"
"regexp"
"strings"
"github.com/docker/distribution/digest"
distreference "github.com/docker/distribution/reference"
"github.com/docker/docker/image/v1"
)
const (
@@ -190,9 +190,21 @@ func normalize(name string) (string, error) {
return name, nil
}
// EDIT FROM DOCKER/DOCKER TO NOT IMPORT IMAGE.V1
func validateName(name string) error {
if err := v1.ValidateID(name); err == nil {
if err := ValidateIDV1(name); err == nil {
return fmt.Errorf("Invalid repository name (%s), cannot specify 64-byte hexadecimal strings", name)
}
return nil
}
var validHex = regexp.MustCompile(`^([a-f0-9]{64})$`)
// ValidateIDV1 checks whether an ID string is a valid image ID.
func ValidateIDV1(id string) error {
if ok := validHex.MatchString(id); !ok {
return fmt.Errorf("image ID %q is invalid", id)
}
return nil
}
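A quick illustration (not from the diff) of what the copied validator accepts and rejects:

ValidateIDV1("0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef") // nil: exactly 64 hex characters
ValidateIDV1("busybox:latest") // error: image ID "busybox:latest" is invalid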

43
signature/docker.go Normal file

@@ -0,0 +1,43 @@
// Note: Consider the API unstable until the code supports at least three different image formats or transports.
package signature
import (
"fmt"
"github.com/projectatomic/skopeo/docker/utils"
)
// SignDockerManifest returns a signature for manifest as the specified dockerReference,
// using mech and keyIdentity.
func SignDockerManifest(manifest []byte, dockerReference string, mech SigningMechanism, keyIdentity string) ([]byte, error) {
manifestDigest, err := utils.ManifestDigest(manifest)
if err != nil {
return nil, err
}
sig := privateSignature{
Signature{
DockerManifestDigest: manifestDigest,
DockerReference: dockerReference,
},
}
return sig.sign(mech, keyIdentity)
}
// VerifyDockerManifestSignature checks that unverifiedSignature uses expectedKeyIdentity to sign unverifiedManifest as expectedDockerReference,
// using mech.
func VerifyDockerManifestSignature(unverifiedSignature, unverifiedManifest []byte,
expectedDockerReference string, mech SigningMechanism, expectedKeyIdentity string) (*Signature, error) {
expectedManifestDigest, err := utils.ManifestDigest(unverifiedManifest)
if err != nil {
return nil, err
}
sig, err := verifyAndExtractSignature(mech, unverifiedSignature, expectedKeyIdentity, expectedDockerReference)
if err != nil {
return nil, err
}
if sig.DockerManifestDigest != expectedManifestDigest {
return nil, InvalidSignatureError{msg: fmt.Sprintf("Docker manifest digest %s does not match %s", sig.DockerManifestDigest, expectedManifestDigest)}
}
return sig, nil
}
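A minimal end-to-end caller sketch for this API (my illustration, not code from this changeset; it assumes a GPG home containing the named key, and the file path, reference, and fingerprint are placeholders):

package main

import (
	"fmt"
	"io/ioutil"
	"log"

	"github.com/projectatomic/skopeo/signature"
)

func main() {
	mech, err := signature.NewGPGSigningMechanism()
	if err != nil {
		log.Fatal(err)
	}
	manifest, err := ioutil.ReadFile("image.manifest.json") // hypothetical path
	if err != nil {
		log.Fatal(err)
	}
	sig, err := signature.SignDockerManifest(manifest, "testing/manifest", mech, "KEY_FINGERPRINT")
	if err != nil {
		log.Fatal(err)
	}
	verified, err := signature.VerifyDockerManifestSignature(sig, manifest, "testing/manifest", mech, "KEY_FINGERPRINT")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(verified.DockerReference, verified.DockerManifestDigest)
}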

79
signature/docker_test.go Normal file

@@ -0,0 +1,79 @@
package signature
import (
"io/ioutil"
"testing"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
func TestSignDockerManifest(t *testing.T) {
mech, err := newGPGSigningMechanismInDirectory(testGPGHomeDirectory)
require.NoError(t, err)
manifest, err := ioutil.ReadFile("fixtures/image.manifest.json")
require.NoError(t, err)
// Successful signing
signature, err := SignDockerManifest(manifest, TestImageSignatureReference, mech, TestKeyFingerprint)
require.NoError(t, err)
verified, err := VerifyDockerManifestSignature(signature, manifest, TestImageSignatureReference, mech, TestKeyFingerprint)
assert.NoError(t, err)
assert.Equal(t, TestImageSignatureReference, verified.DockerReference)
assert.Equal(t, TestImageManifestDigest, verified.DockerManifestDigest)
// Error computing Docker manifest
invalidManifest, err := ioutil.ReadFile("fixtures/v2s1-invalid-signatures.manifest.json")
require.NoError(t, err)
_, err = SignDockerManifest(invalidManifest, TestImageSignatureReference, mech, TestKeyFingerprint)
assert.Error(t, err)
// Error creating blob to sign
_, err = SignDockerManifest(manifest, "", mech, TestKeyFingerprint)
assert.Error(t, err)
// Error signing
_, err = SignDockerManifest(manifest, TestImageSignatureReference, mech, "this fingerprint doesn't exist")
assert.Error(t, err)
}
func TestVerifyDockerManifestSignature(t *testing.T) {
mech, err := newGPGSigningMechanismInDirectory(testGPGHomeDirectory)
require.NoError(t, err)
manifest, err := ioutil.ReadFile("fixtures/image.manifest.json")
require.NoError(t, err)
signature, err := ioutil.ReadFile("fixtures/image.signature")
require.NoError(t, err)
// Successful verification
sig, err := VerifyDockerManifestSignature(signature, manifest, TestImageSignatureReference, mech, TestKeyFingerprint)
require.NoError(t, err)
assert.Equal(t, TestImageSignatureReference, sig.DockerReference)
assert.Equal(t, TestImageManifestDigest, sig.DockerManifestDigest)
// For extra paranoia, test that we return nil data on error.
// Error computing Docker manifest
invalidManifest, err := ioutil.ReadFile("fixtures/v2s1-invalid-signatures.manifest.json")
require.NoError(t, err)
sig, err = VerifyDockerManifestSignature(signature, invalidManifest, TestImageSignatureReference, mech, TestKeyFingerprint)
assert.Error(t, err)
assert.Nil(t, sig)
// Error verifying signature
corruptSignature, err := ioutil.ReadFile("fixtures/corrupt.signature")
sig, err = VerifyDockerManifestSignature(corruptSignature, manifest, TestImageSignatureReference, mech, TestKeyFingerprint)
assert.Error(t, err)
assert.Nil(t, sig)
// Key fingerprint mismatch
sig, err = VerifyDockerManifestSignature(signature, manifest, TestImageSignatureReference, mech, "unexpected fingerprint")
assert.Error(t, err)
assert.Nil(t, sig)
// Docker manifest digest mismatch
sig, err = VerifyDockerManifestSignature(signature, []byte("unexpected manifest"), TestImageSignatureReference, mech, TestKeyFingerprint)
assert.Error(t, err)
assert.Nil(t, sig)
}

4
signature/fixtures/.gitignore vendored Normal file

@@ -0,0 +1,4 @@
/*.gpg~
/.gpg-v21-migrated
/private-keys-v1.d
/random_seed

Binary file not shown.

Binary file not shown.

Binary file not shown.


@@ -0,0 +1,26 @@
{
"schemaVersion": 2,
"mediaType": "application/vnd.docker.distribution.manifest.v2+json",
"config": {
"mediaType": "application/vnd.docker.container.image.v1+json",
"size": 7023,
"digest": "sha256:b5b2b2c507a0944348e0303114d8d93aaaa081732b86451d9bce1f432a537bc7"
},
"layers": [
{
"mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
"size": 32654,
"digest": "sha256:e692418e4cbaf90ca69d05a66403747baa33ee08806650b51fab815ad7fc331f"
},
{
"mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
"size": 16724,
"digest": "sha256:3c3a4604a545cdc127456d94e421cd355bca5b528f4a9c1905b15da2eb4a4c6b"
},
{
"mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
"size": 73109,
"digest": "sha256:ec4b8955958665577945c89419d1af06b5f7636b4ac3da7f12184802ad867736"
}
]
}

Binary file not shown.

Binary file not shown.


@@ -0,0 +1,84 @@
{
"default": [
{
"type": "reject"
}
],
"specific": {
"example.com/playground": [
{
"type": "insecureAcceptAnything"
}
],
"example.com/production": [
{
"type": "signedBy",
"keyType": "GPGKeys",
"keyPath": "/keys/employee-gpg-keyring"
}
],
"example.com/hardened": [
{
"type": "signedBy",
"keyType": "GPGKeys",
"keyPath": "/keys/employee-gpg-keyring",
"signedIdentity": {
"type": "matchRepository"
}
},
{
"type": "signedBy",
"keyType": "signedByGPGKeys",
"keyPath": "/keys/public-key-signing-gpg-keyring",
"signedIdentity": {
"type": "matchExact"
}
},
{
"type": "signedBaseLayer",
"baseLayerIdentity": {
"type": "exactRepository",
"dockerRepository": "registry.access.redhat.com/rhel7/rhel"
}
}
],
"example.com/hardened-x509": [
{
"type": "signedBy",
"keyType": "X509Certificates",
"keyPath": "/keys/employee-cert-file",
"signedIdentity": {
"type": "matchRepository"
}
},
{
"type": "signedBy",
"keyType": "signedByX509CAs",
"keyPath": "/keys/public-key-signing-ca-file"
}
],
"registry.access.redhat.com": [
{
"type": "signedBy",
"keyType": "signedByGPGKeys",
"keyPath": "/keys/RH-key-signing-key-gpg-keyring"
}
],
"bogus/key-data-example": [
{
"type": "signedBy",
"keyType": "signedByGPGKeys",
"keyData": "bm9uc2Vuc2U="
}
],
"bogus/signed-identity-example": [
{
"type": "signedBaseLayer",
"baseLayerIdentity": {
"type": "exactReference",
"dockerReference": "registry.access.redhat.com/rhel7/rhel:latest"
}
}
]
}
}


@@ -0,0 +1,19 @@
-----BEGIN PGP PUBLIC KEY BLOCK-----
Version: GnuPG v1
mI0EVurzqQEEAL3qkFq4K2URtSWVDYnQUNA9HdM9sqS2eAWfqUFMrkD5f+oN+LBL
tPyaE5GNLA0vXY7nHAM2TeM8ijZ/eMP17Raj64JL8GhCymL3wn2jNvb9XaF0R0s6
H0IaRPPu45A3SnxLwm4Orc/9Z7/UxtYjKSg9xOaTiVPzJgaf5Vm4J4ApABEBAAG0
EnNrb3BlbyB0ZXN0aW5nIGtleYi4BBMBAgAiBQJW6vOpAhsDBgsJCAcDAgYVCAIJ
CgsEFgIDAQIeAQIXgAAKCRDbcvIYi7RsyBbOBACgJFiKDlQ1UyvsNmGqJ7D0OpbS
1OppJlradKgZXyfahFswhFI+7ZREvELLHbinq3dBy5cLXRWzQKdJZNHknSN5Tjf2
0ipVBQuqpcBo+dnKiG4zH6fhTri7yeTZksIDfsqlI6FXDOdKLUSnahagEBn4yU+x
jHPvZk5SuuZv56A45biNBFbq86kBBADIC/9CsAlOmRALuYUmkhcqEjuFwn3wKz2d
IBjzgvro7zcVNNCgxQfMEjcUsvEh5cx13G3QQHcwOKy3M6Bv6VMhfZjd+1P1el4P
0fJS8GFmhWRBknMN8jFsgyohQeouQ798RFFv94KszfStNnr/ae8oao5URmoUXSCa
/MdUxn0YKwARAQABiJ8EGAECAAkFAlbq86kCGwwACgkQ23LyGIu0bMjUywQAq0dn
lUpDNSoLTcpNWuVvHQ7c/qmnE4TyiSLiRiAywdEWA6gMiyhUUucuGsEhMFP1WX1k
UNwArZ6UG7BDOUsvngP7jKGNqyUOQrq1s/r8D+0MrJGOWErGLlfttO2WeoijECkI
5qm8cXzAra3Xf/Z3VjxYTKSnNu37LtZkakdTdYE=
=tJAt
-----END PGP PUBLIC KEY BLOCK-----

Binary file not shown.

Binary file not shown.

Binary file not shown.

Binary file not shown.

Binary file not shown.


@@ -0,0 +1,11 @@
{
"schemaVersion": 1,
"name": "mitr/buxybox",
"tag": "latest",
"architecture": "amd64",
"fsLayers": [
],
"history": [
],
"signatures": 1
}


@@ -0,0 +1,10 @@
package signature
const (
// TestImageManifestDigest is the Docker manifest digest of "image.manifest.json"
TestImageManifestDigest = "sha256:20bf21ed457b390829cdbeec8795a7bea1626991fda603e0d01b4e7f60427e55"
// TestImageSignatureReference is the Docker image reference signed in "image.signature"
TestImageSignatureReference = "testing/manifest"
// TestKeyFingerprint is the fingerprint of the private key in this directory.
TestKeyFingerprint = "1D8230F6CDB6A06716E414C1DB72F2188BB46CC8"
)

107
signature/json.go Normal file

@@ -0,0 +1,107 @@
package signature
import (
"bytes"
"encoding/json"
"fmt"
"io"
)
// jsonFormatError is returned when JSON does not match expected format.
type jsonFormatError string
func (err jsonFormatError) Error() string {
return string(err)
}
// validateExactMapKeys returns an error if the keys of m are not exactly expectedKeys, which must be pairwise distinct
func validateExactMapKeys(m map[string]interface{}, expectedKeys ...string) error {
if len(m) != len(expectedKeys) {
return jsonFormatError("Unexpected keys in a JSON object")
}
for _, k := range expectedKeys {
if _, ok := m[k]; !ok {
return jsonFormatError(fmt.Sprintf("Key %s missing in a JSON object", k))
}
}
// Assuming expectedKeys are pairwise distinct, we know m contains len(expectedKeys) different values in expectedKeys.
return nil
}
// mapField returns a member fieldName of m, if it is a JSON map, or an error.
func mapField(m map[string]interface{}, fieldName string) (map[string]interface{}, error) {
untyped, ok := m[fieldName]
if !ok {
return nil, jsonFormatError(fmt.Sprintf("Field %s missing", fieldName))
}
v, ok := untyped.(map[string]interface{})
if !ok {
return nil, jsonFormatError(fmt.Sprintf("Field %s is not a JSON object", fieldName))
}
return v, nil
}
// stringField returns a member fieldName of m, if it is a string, or an error.
func stringField(m map[string]interface{}, fieldName string) (string, error) {
untyped, ok := m[fieldName]
if !ok {
return "", jsonFormatError(fmt.Sprintf("Field %s missing", fieldName))
}
v, ok := untyped.(string)
if !ok {
return "", jsonFormatError(fmt.Sprintf("Field %s is not a JSON object", fieldName))
}
return v, nil
}
// paranoidUnmarshalJSONObject unmarshals data as a JSON object, failing on the slightest unexpected aspect
// (including duplicated keys, unrecognized keys, and non-matching types). Uses fieldResolver to
// determine the destination for a field value, which should return a pointer to the destination if valid, or nil if the key is rejected.
//
// The fieldResolver approach is useful for decoding the Policy.Specific map; using it for structs is a bit lazy,
// we could use reflection to automate this. Later?
func paranoidUnmarshalJSONObject(data []byte, fieldResolver func(string) interface{}) error {
seenKeys := map[string]struct{}{}
dec := json.NewDecoder(bytes.NewReader(data))
t, err := dec.Token()
if err != nil {
return jsonFormatError(err.Error())
}
if t != json.Delim('{') {
return jsonFormatError(fmt.Sprintf("JSON object expected, got \"%s\"", t))
}
for {
t, err := dec.Token()
if err != nil {
return jsonFormatError(err.Error())
}
if t == json.Delim('}') {
break
}
key, ok := t.(string)
if !ok {
// Coverage: This should never happen, dec.Token() rejects non-string-literals in this state.
return jsonFormatError(fmt.Sprintf("Key string literal expected, got \"%s\"", t))
}
if _, ok := seenKeys[key]; ok {
return jsonFormatError(fmt.Sprintf("Duplicate key \"%s\"", key))
}
seenKeys[key] = struct{}{}
valuePtr := fieldResolver(key)
if valuePtr == nil {
return jsonFormatError(fmt.Sprintf("Unknown key \"%s\"", key))
}
// This works like json.Unmarshal, in particular it allows us to implement UnmarshalJSON to implement strict parsing of the field value.
if err := dec.Decode(valuePtr); err != nil {
return jsonFormatError(err.Error())
}
}
if _, err := dec.Token(); err != io.EOF {
return jsonFormatError("Unexpected data after JSON object")
}
return nil
}
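A minimal caller sketch (the tests in json_test.go below exercise this exhaustively): strictly decode an object that may only contain a "name" key; unknown keys, duplicates, and trailing data all produce an error:

var name string
err := paranoidUnmarshalJSONObject(data, func(key string) interface{} {
	if key == "name" {
		return &name // decode the value into name
	}
	return nil // any other key is rejected
})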

149
signature/json_test.go Normal file

@@ -0,0 +1,149 @@
package signature
import (
"encoding/json"
"testing"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
type mSI map[string]interface{} // To minimize typing the long name
// A short-hand way to get a JSON object field value or panic. No error handling done, we know
// what we are working with, a panic in a test is good enough, and fitting test cases on a single line
// is a priority.
func x(m mSI, fields ...string) mSI {
for _, field := range fields {
// Not .(mSI) because type assertion of an unnamed type to a named type always fails (the types
// are not "identical"), but the assignment is fine because they are "assignable".
m = m[field].(map[string]interface{})
}
return m
}
func TestValidateExactMapKeys(t *testing.T) {
// Empty map and keys
err := validateExactMapKeys(mSI{})
assert.NoError(t, err)
// Success
err = validateExactMapKeys(mSI{"a": nil, "b": 1}, "b", "a")
assert.NoError(t, err)
// Extra map keys
err = validateExactMapKeys(mSI{"a": nil, "b": 1}, "a")
assert.Error(t, err)
// Extra expected keys
err = validateExactMapKeys(mSI{"a": 1}, "b", "a")
assert.Error(t, err)
// Unexpected key values
err = validateExactMapKeys(mSI{"a": 1}, "b")
assert.Error(t, err)
}
func TestMapField(t *testing.T) {
// Field not found
_, err := mapField(mSI{"a": mSI{}}, "b")
assert.Error(t, err)
// Field has a wrong type
_, err = mapField(mSI{"a": 1}, "a")
assert.Error(t, err)
// Success
// FIXME? We can't use mSI as the type of child, that type apparently can't be converted to the raw map type.
child := map[string]interface{}{"b": mSI{}}
m, err := mapField(mSI{"a": child, "b": nil}, "a")
require.NoError(t, err)
assert.Equal(t, child, m)
}
func TestStringField(t *testing.T) {
// Field not found
_, err := stringField(mSI{"a": "x"}, "b")
assert.Error(t, err)
// Field has a wrong type
_, err = stringField(mSI{"a": 1}, "a")
assert.Error(t, err)
// Success
s, err := stringField(mSI{"a": "x", "b": nil}, "a")
require.NoError(t, err)
assert.Equal(t, "x", s)
}
// implementsUnmarshalJSON is a minimalistic type used to detect that
// paranoidUnmarshalJSONObject uses the json.Unmarshaler interface of resolved
// pointers.
type implementsUnmarshalJSON bool
// Compile-time check that implementsUnmarshalJSON implements json.Unmarshaler.
var _ json.Unmarshaler = (*implementsUnmarshalJSON)(nil)
func (dest *implementsUnmarshalJSON) UnmarshalJSON(data []byte) error {
_ = data // We don't care, not really.
*dest = true // Mark handler as called
return nil
}
func TestParanoidUnmarshalJSONObject(t *testing.T) {
type testStruct struct {
A string
B int
}
ts := testStruct{}
var unmarshalJSONCalled implementsUnmarshalJSON
tsResolver := func(key string) interface{} {
switch key {
case "a":
return &ts.A
case "b":
return &ts.B
case "implementsUnmarshalJSON":
return &unmarshalJSONCalled
default:
return nil
}
}
// Empty object
ts = testStruct{}
err := paranoidUnmarshalJSONObject([]byte(`{}`), tsResolver)
require.NoError(t, err)
assert.Equal(t, testStruct{}, ts)
// Success
ts = testStruct{}
err = paranoidUnmarshalJSONObject([]byte(`{"a":"x", "b":2}`), tsResolver)
require.NoError(t, err)
assert.Equal(t, testStruct{A: "x", B: 2}, ts)
// json.Unmarshaler is used for decoding values
ts = testStruct{}
unmarshalJSONCalled = implementsUnmarshalJSON(false)
err = paranoidUnmarshalJSONObject([]byte(`{"implementsUnmarshalJSON":true}`), tsResolver)
require.NoError(t, err)
assert.Equal(t, unmarshalJSONCalled, implementsUnmarshalJSON(true))
// Various kinds of invalid input
for _, input := range []string{
``, // Empty input
`&`, // Entirely invalid JSON
`1`, // Not an object
`{&}`, // Invalid key JSON
`{1:1}`, // Key not a string
`{"b":1, "b":1}`, // Duplicate key
`{"thisdoesnotexist":1}`, // Key rejected by resolver
`{"a":&}`, // Invalid value JSON
`{"a":1}`, // Type mismatch
`{"a":"value"}{}`, // Extra data after object
} {
ts = testStruct{}
err := paranoidUnmarshalJSONObject([]byte(input), tsResolver)
assert.Error(t, err)
}
}

121
signature/mechanism.go Normal file

@@ -0,0 +1,121 @@
// Note: Consider the API unstable until the code supports at least three different image formats or transports.
package signature
import (
"bytes"
"fmt"
"github.com/mtrmac/gpgme"
)
// SigningMechanism abstracts a way to sign binary blobs and verify their signatures.
// FIXME: Eventually expand on keyIdentity (namespace them between mechanisms to
// eliminate ambiguities, support CA signatures and perhaps other key properties)
type SigningMechanism interface {
// ImportKeysFromBytes imports public keys from the supplied blob and returns their identities.
// The blob is assumed to have an appropriate format (the caller is expected to know which one).
// NOTE: This may modify long-term state (e.g. key storage in a directory underlying the mechanism).
ImportKeysFromBytes(blob []byte) ([]string, error)
// Sign creates a (non-detached) signature of input using keyIdentity
Sign(input []byte, keyIdentity string) ([]byte, error)
// Verify parses unverifiedSignature and returns the content and the signer's identity
Verify(unverifiedSignature []byte) (contents []byte, keyIdentity string, err error)
}
// A GPG/OpenPGP signing mechanism.
type gpgSigningMechanism struct {
ctx *gpgme.Context
}
// NewGPGSigningMechanism returns a new GPG/OpenPGP signing mechanism.
func NewGPGSigningMechanism() (SigningMechanism, error) {
return newGPGSigningMechanismInDirectory("")
}
// newGPGSigningMechanismInDirectory returns a new GPG/OpenPGP signing mechanism, using optionalDir if not empty.
func newGPGSigningMechanismInDirectory(optionalDir string) (SigningMechanism, error) {
ctx, err := gpgme.New()
if err != nil {
return nil, err
}
if err = ctx.SetProtocol(gpgme.ProtocolOpenPGP); err != nil {
return nil, err
}
if optionalDir != "" {
err := ctx.SetEngineInfo(gpgme.ProtocolOpenPGP, "", optionalDir)
if err != nil {
return nil, err
}
}
ctx.SetArmor(false)
ctx.SetTextMode(false)
return gpgSigningMechanism{ctx: ctx}, nil
}
// ImportKeysFromBytes implements SigningMechanism.ImportKeysFromBytes
func (m gpgSigningMechanism) ImportKeysFromBytes(blob []byte) ([]string, error) {
inputData, err := gpgme.NewDataBytes(blob)
if err != nil {
return nil, err
}
res, err := m.ctx.Import(inputData)
if err != nil {
return nil, err
}
keyIdentities := []string{}
for _, i := range res.Imports {
if i.Result == nil {
keyIdentities = append(keyIdentities, i.Fingerprint)
}
}
return keyIdentities, nil
}
// Sign implements SigningMechanism.Sign
func (m gpgSigningMechanism) Sign(input []byte, keyIdentity string) ([]byte, error) {
key, err := m.ctx.GetKey(keyIdentity, true)
if err != nil {
return nil, err
}
inputData, err := gpgme.NewDataBytes(input)
if err != nil {
return nil, err
}
var sigBuffer bytes.Buffer
sigData, err := gpgme.NewDataWriter(&sigBuffer)
if err != nil {
return nil, err
}
if err = m.ctx.Sign([]*gpgme.Key{key}, inputData, sigData, gpgme.SigModeNormal); err != nil {
return nil, err
}
return sigBuffer.Bytes(), nil
}
// Verify implements SigningMechanism.Verify
func (m gpgSigningMechanism) Verify(unverifiedSignature []byte) (contents []byte, keyIdentity string, err error) {
signedBuffer := bytes.Buffer{}
signedData, err := gpgme.NewDataWriter(&signedBuffer)
if err != nil {
return nil, "", err
}
unverifiedSignatureData, err := gpgme.NewDataBytes(unverifiedSignature)
if err != nil {
return nil, "", err
}
_, sigs, err := m.ctx.Verify(unverifiedSignatureData, nil, signedData)
if err != nil {
return nil, "", err
}
if len(sigs) != 1 {
return nil, "", InvalidSignatureError{msg: fmt.Sprintf("Unexpected GPG signature count %d", len(sigs))}
}
sig := sigs[0]
// This is sig.Summary == gpgme.SigSumValid except for key trust, which we handle ourselves
if sig.Status != nil || sig.Validity == gpgme.ValidityNever || sig.ValidityReason != nil || sig.WrongKeyUsage {
// FIXME: Better error reporting eventually
return nil, "", InvalidSignatureError{msg: fmt.Sprintf("Invalid GPG signature: %#v", sig)}
}
return signedBuffer.Bytes(), sig.Fingerprint, nil
}

142
signature/mechanism_test.go Normal file

@@ -0,0 +1,142 @@
package signature
import (
"bytes"
"io/ioutil"
"os"
"testing"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
const (
testGPGHomeDirectory = "./fixtures"
)
func TestNewGPGSigningMechanism(t *testing.T) {
// A dumb test just for code coverage. We test more with newGPGSigningMechanismInDirectory().
_, err := NewGPGSigningMechanism()
assert.NoError(t, err)
}
func TestNewGPGSigningMechanismInDirectory(t *testing.T) {
// A dumb test just for code coverage.
_, err := newGPGSigningMechanismInDirectory(testGPGHomeDirectory)
assert.NoError(t, err)
// The various GPG failure cases are not obviously easy to reach.
}
func TestGPGSigningMechanismImportKeysFromBytes(t *testing.T) {
testDir, err := ioutil.TempDir("", "gpg-import-keys")
require.NoError(t, err)
defer os.RemoveAll(testDir)
mech, err := newGPGSigningMechanismInDirectory(testDir)
require.NoError(t, err)
// Try validating a signature when the key is unknown.
signature, err := ioutil.ReadFile("./fixtures/invalid-blob.signature")
require.NoError(t, err)
content, signingFingerprint, err := mech.Verify(signature)
require.Error(t, err)
// Successful import
keyBlob, err := ioutil.ReadFile("./fixtures/public-key.gpg")
require.NoError(t, err)
keyIdentities, err := mech.ImportKeysFromBytes(keyBlob)
require.NoError(t, err)
assert.Equal(t, []string{TestKeyFingerprint}, keyIdentities)
// After import, the signature should validate.
content, signingFingerprint, err = mech.Verify(signature)
require.NoError(t, err)
assert.Equal(t, []byte("This is not JSON\n"), content)
assert.Equal(t, TestKeyFingerprint, signingFingerprint)
// Two keys: just concatenate the valid input twice.
keyIdentities, err = mech.ImportKeysFromBytes(bytes.Join([][]byte{keyBlob, keyBlob}, nil))
require.NoError(t, err)
assert.Equal(t, []string{TestKeyFingerprint, TestKeyFingerprint}, keyIdentities)
// Invalid input: This is accepted anyway by GPG, just returns no keys.
keyIdentities, err = mech.ImportKeysFromBytes([]byte("This is invalid"))
require.NoError(t, err)
assert.Equal(t, []string{}, keyIdentities)
// The various GPG/GPGME failure cases are not obviously easy to reach.
}
func TestGPGSigningMechanismSign(t *testing.T) {
mech, err := newGPGSigningMechanismInDirectory(testGPGHomeDirectory)
require.NoError(t, err)
// Successful signing
content := []byte("content")
signature, err := mech.Sign(content, TestKeyFingerprint)
require.NoError(t, err)
signedContent, signingFingerprint, err := mech.Verify(signature)
require.NoError(t, err)
assert.EqualValues(t, content, signedContent)
assert.Equal(t, TestKeyFingerprint, signingFingerprint)
// Error signing
_, err = mech.Sign(content, "this fingerprint doesn't exist")
assert.Error(t, err)
// The various GPG/GPGME failure cases are not obviously easy to reach.
}
func assertSigningError(t *testing.T, content []byte, fingerprint string, err error) {
assert.Error(t, err)
assert.Nil(t, content)
assert.Empty(t, fingerprint)
}
func TestGPGSigningMechanismVerify(t *testing.T) {
mech, err := newGPGSigningMechanismInDirectory(testGPGHomeDirectory)
require.NoError(t, err)
// Successful verification
signature, err := ioutil.ReadFile("./fixtures/invalid-blob.signature")
require.NoError(t, err)
content, signingFingerprint, err := mech.Verify(signature)
require.NoError(t, err)
assert.Equal(t, []byte("This is not JSON\n"), content)
assert.Equal(t, TestKeyFingerprint, signingFingerprint)
// For extra paranoia, test that we return nil data on error.
// Completely invalid signature.
content, signingFingerprint, err = mech.Verify([]byte{})
assertSigningError(t, content, signingFingerprint, err)
content, signingFingerprint, err = mech.Verify([]byte("invalid signature"))
assertSigningError(t, content, signingFingerprint, err)
// Literal packet, not a signature
signature, err = ioutil.ReadFile("./fixtures/unsigned-literal.signature")
require.NoError(t, err)
content, signingFingerprint, err = mech.Verify(signature)
assertSigningError(t, content, signingFingerprint, err)
// Encrypted data, not a signature.
signature, err = ioutil.ReadFile("./fixtures/unsigned-encrypted.signature")
require.NoError(t, err)
content, signingFingerprint, err = mech.Verify(signature)
assertSigningError(t, content, signingFingerprint, err)
// FIXME? Is there a way to create a multi-signature so that gpgme_op_verify returns multiple signatures?
// Expired signature
signature, err = ioutil.ReadFile("./fixtures/expired.signature")
require.NoError(t, err)
content, signingFingerprint, err = mech.Verify(signature)
assertSigningError(t, content, signingFingerprint, err)
// Corrupt signature
signature, err = ioutil.ReadFile("./fixtures/corrupt.signature")
require.NoError(t, err)
content, signingFingerprint, err = mech.Verify(signature)
assertSigningError(t, content, signingFingerprint, err)
// The various GPG/GPGME failure cases are not obviously easy to reach.
}

612
signature/policy_config.go Normal file

@@ -0,0 +1,612 @@
// policy_config.go handles creation of policy objects, either by parsing JSON
// or by programs building them programmatically.
// The New* constructors are intended to be a stable API. FIXME: after an independent review.
// Do not invoke the internals of the JSON marshaling/unmarshaling directly.
// We can't just blindly call json.Unmarshal because that would silently ignore
// typos, and that would just not do for security policy.
// FIXME? This is by no means a user-friendly parser: no location information in error messages, no other context.
// But at least it is not worse than blind json.Unmarshal()…
package signature
import (
"encoding/json"
"fmt"
"io/ioutil"
"github.com/projectatomic/skopeo/reference"
)
// InvalidPolicyFormatError is returned when parsing an invalid policy configuration.
type InvalidPolicyFormatError string
func (err InvalidPolicyFormatError) Error() string {
return string(err)
}
// FIXME: NewDefaultPolicy, from default file (or environment if trusted?)
// NewPolicyFromFile returns a policy configured in the specified file.
func NewPolicyFromFile(fileName string) (*Policy, error) {
contents, err := ioutil.ReadFile(fileName)
if err != nil {
return nil, err
}
return NewPolicyFromBytes(contents)
}
// NewPolicyFromBytes returns a policy parsed from the specified blob.
// Use this function instead of calling json.Unmarshal directly.
func NewPolicyFromBytes(data []byte) (*Policy, error) {
p := Policy{}
if err := json.Unmarshal(data, &p); err != nil {
return nil, InvalidPolicyFormatError(err.Error())
}
return &p, nil
}
// Compile-time check that Policy implements json.Unmarshaler.
var _ json.Unmarshaler = (*Policy)(nil)
// UnmarshalJSON implements the json.Unmarshaler interface.
func (p *Policy) UnmarshalJSON(data []byte) error {
*p = Policy{}
specific := policySpecificMap{}
if err := paranoidUnmarshalJSONObject(data, func(key string) interface{} {
switch key {
case "default":
return &p.Default
case "specific":
return &specific
default:
return nil
}
}); err != nil {
return err
}
if p.Default == nil {
return InvalidPolicyFormatError("Default policy is missing")
}
p.Specific = map[string]PolicyRequirements(specific)
return nil
}
// policySpecificMap is a specialization of this map type for the strict JSON parsing semantics appropriate for the Policy.Specific member.
type policySpecificMap map[string]PolicyRequirements
// Compile-time check that policySpecificMap implements json.Unmarshaler.
var _ json.Unmarshaler = (*policySpecificMap)(nil)
// UnmarshalJSON implements the json.Unmarshaler interface.
func (m *policySpecificMap) UnmarshalJSON(data []byte) error {
// We can't unmarshal directly into map values because it is not possible to take an address of a map value.
// So, use a temporary map of pointers-to-slices and convert.
tmpMap := map[string]*PolicyRequirements{}
if err := paranoidUnmarshalJSONObject(data, func(key string) interface{} {
// Check that the scope format is at least plausible.
if _, err := reference.ParseNamed(key); err != nil {
return nil // FIXME? This returns an "Unknown key" error instead of saying that the format is invalid.
}
// paranoidUnmarshalJSONObject detects key duplication for us, check just to be safe.
if _, ok := tmpMap[key]; ok {
return nil
}
ptr := &PolicyRequirements{} // This allocates a new instance on each call.
tmpMap[key] = ptr
return ptr
}); err != nil {
return err
}
for key, ptr := range tmpMap {
(*m)[key] = *ptr
}
return nil
}
// Compile-time check that PolicyRequirements implements json.Unmarshaler.
var _ json.Unmarshaler = (*PolicyRequirements)(nil)
// UnmarshalJSON implements the json.Unmarshaler interface.
func (m *PolicyRequirements) UnmarshalJSON(data []byte) error {
reqJSONs := []json.RawMessage{}
if err := json.Unmarshal(data, &reqJSONs); err != nil {
return err
}
if len(reqJSONs) == 0 {
return InvalidPolicyFormatError("List of verification policy requirements must not be empty")
}
res := make([]PolicyRequirement, len(reqJSONs))
for i, reqJSON := range reqJSONs {
req, err := newPolicyRequirementFromJSON(reqJSON)
if err != nil {
return err
}
res[i] = req
}
*m = res
return nil
}
// newPolicyRequirementFromJSON parses JSON data into a PolicyRequirement implementation.
func newPolicyRequirementFromJSON(data []byte) (PolicyRequirement, error) {
var typeField prCommon
if err := json.Unmarshal(data, &typeField); err != nil {
return nil, err
}
var res PolicyRequirement
switch typeField.Type {
case prTypeInsecureAcceptAnything:
res = &prInsecureAcceptAnything{}
case prTypeReject:
res = &prReject{}
case prTypeSignedBy:
res = &prSignedBy{}
case prTypeSignedBaseLayer:
res = &prSignedBaseLayer{}
default:
return nil, InvalidPolicyFormatError(fmt.Sprintf("Unknown policy requirement type \"%s\"", typeField.Type))
}
if err := json.Unmarshal(data, &res); err != nil {
return nil, err
}
return res, nil
}
// newPRInsecureAcceptAnything is NewPRInsecureAcceptAnything, except it returns the private type.
func newPRInsecureAcceptAnything() *prInsecureAcceptAnything {
return &prInsecureAcceptAnything{prCommon{Type: prTypeInsecureAcceptAnything}}
}
// NewPRInsecureAcceptAnything returns a new "insecureAcceptAnything" PolicyRequirement.
func NewPRInsecureAcceptAnything() PolicyRequirement {
return newPRInsecureAcceptAnything()
}
// Compile-time check that prInsecureAcceptAnything implements json.Unmarshaler.
var _ json.Unmarshaler = (*prInsecureAcceptAnything)(nil)
// UnmarshalJSON implements the json.Unmarshaler interface.
func (pr *prInsecureAcceptAnything) UnmarshalJSON(data []byte) error {
*pr = prInsecureAcceptAnything{}
var tmp prInsecureAcceptAnything
if err := paranoidUnmarshalJSONObject(data, func(key string) interface{} {
switch key {
case "type":
return &tmp.Type
default:
return nil
}
}); err != nil {
return err
}
if tmp.Type != prTypeInsecureAcceptAnything {
return InvalidPolicyFormatError(fmt.Sprintf("Unexpected policy requirement type \"%s\"", tmp.Type))
}
*pr = *newPRInsecureAcceptAnything()
return nil
}
// newPRReject is NewPRReject, except it returns the private type.
func newPRReject() *prReject {
return &prReject{prCommon{Type: prTypeReject}}
}
// NewPRReject returns a new "reject" PolicyRequirement.
func NewPRReject() PolicyRequirement {
return newPRReject()
}
// Compile-time check that prReject implements json.Unmarshaler.
var _ json.Unmarshaler = (*prReject)(nil)
// UnmarshalJSON implements the json.Unmarshaler interface.
func (pr *prReject) UnmarshalJSON(data []byte) error {
*pr = prReject{}
var tmp prReject
if err := paranoidUnmarshalJSONObject(data, func(key string) interface{} {
switch key {
case "type":
return &tmp.Type
default:
return nil
}
}); err != nil {
return err
}
if tmp.Type != prTypeReject {
return InvalidPolicyFormatError(fmt.Sprintf("Unexpected policy requirement type \"%s\"", tmp.Type))
}
*pr = *newPRReject()
return nil
}
// newPRSignedBy returns a new prSignedBy if parameters are valid.
func newPRSignedBy(keyType sbKeyType, keyPath string, keyData []byte, signedIdentity PolicyReferenceMatch) (*prSignedBy, error) {
if !keyType.IsValid() {
return nil, InvalidPolicyFormatError(fmt.Sprintf("invalid keyType \"%s\"", keyType))
}
if len(keyPath) > 0 && len(keyData) > 0 {
return nil, InvalidPolicyFormatError("keyType and keyData cannot be used simultaneously")
}
if signedIdentity == nil {
return nil, InvalidPolicyFormatError("signedIdentity not specified")
}
return &prSignedBy{
prCommon: prCommon{Type: prTypeSignedBy},
KeyType: keyType,
KeyPath: keyPath,
KeyData: keyData,
SignedIdentity: signedIdentity,
}, nil
}
// newPRSignedByKeyPath is NewPRSignedByKeyPath, except it returns the private type.
func newPRSignedByKeyPath(keyType sbKeyType, keyPath string, signedIdentity PolicyReferenceMatch) (*prSignedBy, error) {
return newPRSignedBy(keyType, keyPath, nil, signedIdentity)
}
// NewPRSignedByKeyPath returns a new "signedBy" PolicyRequirement using a KeyPath
func NewPRSignedByKeyPath(keyType sbKeyType, keyPath string, signedIdentity PolicyReferenceMatch) (PolicyRequirement, error) {
return newPRSignedByKeyPath(keyType, keyPath, signedIdentity)
}
// newPRSignedByKeyData is NewPRSignedByKeyData, except it returns the private type.
func newPRSignedByKeyData(keyType sbKeyType, keyData []byte, signedIdentity PolicyReferenceMatch) (*prSignedBy, error) {
return newPRSignedBy(keyType, "", keyData, signedIdentity)
}
// NewPRSignedByKeyData returns a new "signedBy" PolicyRequirement using a KeyData
func NewPRSignedByKeyData(keyType sbKeyType, keyData []byte, signedIdentity PolicyReferenceMatch) (PolicyRequirement, error) {
return newPRSignedByKeyData(keyType, keyData, signedIdentity)
}
// Compile-time check that prSignedBy implements json.Unmarshaler.
var _ json.Unmarshaler = (*prSignedBy)(nil)
// UnmarshalJSON implements the json.Unmarshaler interface.
func (pr *prSignedBy) UnmarshalJSON(data []byte) error {
*pr = prSignedBy{}
var tmp prSignedBy
var gotKeyPath, gotKeyData = false, false
var signedIdentity json.RawMessage
if err := paranoidUnmarshalJSONObject(data, func(key string) interface{} {
switch key {
case "type":
return &tmp.Type
case "keyType":
return &tmp.KeyType
case "keyPath":
gotKeyPath = true
return &tmp.KeyPath
case "keyData":
gotKeyData = true
return &tmp.KeyData
case "signedIdentity":
return &signedIdentity
default:
return nil
}
}); err != nil {
return err
}
if tmp.Type != prTypeSignedBy {
return InvalidPolicyFormatError(fmt.Sprintf("Unexpected policy requirement type \"%s\"", tmp.Type))
}
if signedIdentity == nil {
tmp.SignedIdentity = NewPRMMatchExact()
} else {
si, err := newPolicyReferenceMatchFromJSON(signedIdentity)
if err != nil {
return err
}
tmp.SignedIdentity = si
}
var res *prSignedBy
var err error
switch {
case gotKeyPath && gotKeyData:
return InvalidPolicyFormatError("keyPath and keyData cannot be used simultaneously")
case gotKeyPath && !gotKeyData:
res, err = newPRSignedByKeyPath(tmp.KeyType, tmp.KeyPath, tmp.SignedIdentity)
case !gotKeyPath && gotKeyData:
res, err = newPRSignedByKeyData(tmp.KeyType, tmp.KeyData, tmp.SignedIdentity)
case !gotKeyPath && !gotKeyData:
return InvalidPolicyFormatError("At least one of keyPath and keyData mus be specified")
default: // Coverage: This should never happen
return fmt.Errorf("Impossible keyPath/keyData presence combination!?")
}
if err != nil {
return err
}
*pr = *res
return nil
}
// IsValid returns true iff kt is a recognized value
func (kt sbKeyType) IsValid() bool {
switch kt {
case SBKeyTypeGPGKeys, SBKeyTypeSignedByGPGKeys,
SBKeyTypeX509Certificates, SBKeyTypeSignedByX509CAs:
return true
default:
return false
}
}
// Compile-time check that sbKeyType implements json.Unmarshaler.
var _ json.Unmarshaler = (*sbKeyType)(nil)
// UnmarshalJSON implements the json.Unmarshaler interface.
func (kt *sbKeyType) UnmarshalJSON(data []byte) error {
*kt = sbKeyType("")
var s string
if err := json.Unmarshal(data, &s); err != nil {
return err
}
if !sbKeyType(s).IsValid() {
return InvalidPolicyFormatError(fmt.Sprintf("Unrecognized keyType value \"%s\"", s))
}
*kt = sbKeyType(s)
return nil
}
// newPRSignedBaseLayer is NewPRSignedBaseLayer, except it returns the private type.
func newPRSignedBaseLayer(baseLayerIdentity PolicyReferenceMatch) (*prSignedBaseLayer, error) {
if baseLayerIdentity == nil {
return nil, InvalidPolicyFormatError("baseLayerIdenitty not specified")
}
return &prSignedBaseLayer{
prCommon: prCommon{Type: prTypeSignedBaseLayer},
BaseLayerIdentity: baseLayerIdentity,
}, nil
}
// NewPRSignedBaseLayer returns a new "signedBaseLayer" PolicyRequirement.
func NewPRSignedBaseLayer(baseLayerIdentity PolicyReferenceMatch) (PolicyRequirement, error) {
return newPRSignedBaseLayer(baseLayerIdentity)
}
// Compile-time check that prSignedBaseLayer implements json.Unmarshaler.
var _ json.Unmarshaler = (*prSignedBaseLayer)(nil)
// UnmarshalJSON implements the json.Unmarshaler interface.
func (pr *prSignedBaseLayer) UnmarshalJSON(data []byte) error {
*pr = prSignedBaseLayer{}
var tmp prSignedBaseLayer
var baseLayerIdentity json.RawMessage
if err := paranoidUnmarshalJSONObject(data, func(key string) interface{} {
switch key {
case "type":
return &tmp.Type
case "baseLayerIdentity":
return &baseLayerIdentity
default:
return nil
}
}); err != nil {
return err
}
if tmp.Type != prTypeSignedBaseLayer {
return InvalidPolicyFormatError(fmt.Sprintf("Unexpected policy requirement type \"%s\"", tmp.Type))
}
if baseLayerIdentity == nil {
return InvalidPolicyFormatError(fmt.Sprintf("baseLayerIdentity not specified"))
}
bli, err := newPolicyReferenceMatchFromJSON(baseLayerIdentity)
if err != nil {
return err
}
res, err := newPRSignedBaseLayer(bli)
if err != nil {
// Coverage: This should never happen, newPolicyReferenceMatchFromJSON has ensured bli is valid.
return err
}
*pr = *res
return nil
}
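A matching sketch of the JSON form this unmarshaler expects; the base repository name is a placeholder:

```go
// Illustrative only: a "signedBaseLayer" requirement whose identity is an
// exactRepository match. The repository name is a placeholder.
var sketchSignedBaseLayerJSON = []byte(`{
	"type": "signedBaseLayer",
	"baseLayerIdentity": {"type": "exactRepository", "dockerRepository": "registry.example.com/base"}
}`)
```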
// newPolicyReferenceMatchFromJSON parses JSON data into a PolicyReferenceMatch implementation.
func newPolicyReferenceMatchFromJSON(data []byte) (PolicyReferenceMatch, error) {
var typeField prmCommon
if err := json.Unmarshal(data, &typeField); err != nil {
return nil, err
}
var res PolicyReferenceMatch
switch typeField.Type {
case prmTypeMatchExact:
res = &prmMatchExact{}
case prmTypeMatchRepository:
res = &prmMatchRepository{}
case prmTypeExactReference:
res = &prmExactReference{}
case prmTypeExactRepository:
res = &prmExactRepository{}
default:
return nil, InvalidPolicyFormatError(fmt.Sprintf("Unknown policy reference match type \"%s\"", typeField.Type))
}
if err := json.Unmarshal(data, &res); err != nil {
return nil, err
}
return res, nil
}
// newPRMMatchExact is NewPRMMatchExact, except it returns the private type.
func newPRMMatchExact() *prmMatchExact {
return &prmMatchExact{prmCommon{Type: prmTypeMatchExact}}
}
// NewPRMMatchExact returns a new "matchExact" PolicyReferenceMatch.
func NewPRMMatchExact() PolicyReferenceMatch {
return newPRMMatchExact()
}
// Compile-time check that prmMatchExact implements json.Unmarshaler.
var _ json.Unmarshaler = (*prmMatchExact)(nil)
// UnmarshalJSON implements the json.Unmarshaler interface.
func (prm *prmMatchExact) UnmarshalJSON(data []byte) error {
*prm = prmMatchExact{}
var tmp prmMatchExact
if err := paranoidUnmarshalJSONObject(data, func(key string) interface{} {
switch key {
case "type":
return &tmp.Type
default:
return nil
}
}); err != nil {
return err
}
if tmp.Type != prmTypeMatchExact {
return InvalidPolicyFormatError(fmt.Sprintf("Unexpected policy requirement type \"%s\"", tmp.Type))
}
*prm = *newPRMMatchExact()
return nil
}
// newPRMMatchRepository is NewPRMMatchRepository, except it returns the private type.
func newPRMMatchRepository() *prmMatchRepository {
return &prmMatchRepository{prmCommon{Type: prmTypeMatchRepository}}
}
// NewPRMMatchRepository returns a new "matchRepository" PolicyReferenceMatch.
func NewPRMMatchRepository() PolicyReferenceMatch {
return newPRMMatchRepository()
}
// Compile-time check that prmMatchRepository implements json.Unmarshaler.
var _ json.Unmarshaler = (*prmMatchRepository)(nil)
// UnmarshalJSON implements the json.Unmarshaler interface.
func (prm *prmMatchRepository) UnmarshalJSON(data []byte) error {
*prm = prmMatchRepository{}
var tmp prmMatchRepository
if err := paranoidUnmarshalJSONObject(data, func(key string) interface{} {
switch key {
case "type":
return &tmp.Type
default:
return nil
}
}); err != nil {
return err
}
if tmp.Type != prmTypeMatchRepository {
return InvalidPolicyFormatError(fmt.Sprintf("Unexpected policy requirement type \"%s\"", tmp.Type))
}
*prm = *newPRMMatchRepository()
return nil
}
// newPRMExactReference is NewPRMExactReference, except it returns the private type.
func newPRMExactReference(dockerReference string) (*prmExactReference, error) {
ref, err := reference.ParseNamed(dockerReference)
if err != nil {
return nil, InvalidPolicyFormatError(fmt.Sprintf("Invalid format of dockerReference %s: %s", dockerReference, err.Error()))
}
if reference.IsNameOnly(ref) {
return nil, InvalidPolicyFormatError(fmt.Sprintf("dockerReference %s contains neither a tag nor digest", dockerReference))
}
return &prmExactReference{
prmCommon: prmCommon{Type: prmTypeExactReference},
DockerReference: dockerReference,
}, nil
}
// NewPRMExactReference returns a new "exactReference" PolicyReferenceMatch.
func NewPRMExactReference(dockerReference string) (PolicyReferenceMatch, error) {
return newPRMExactReference(dockerReference)
}
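Because of the IsNameOnly check above, a bare repository name is rejected while a tagged reference passes; a quick sketch (image names are arbitrary, and exact acceptance depends on the vendored reference package's grammar):

```go
// Illustrative only: exactReference requires a tag or digest.
func sketchExactReference() {
	if _, err := NewPRMExactReference("busybox"); err == nil {
		panic("expected rejection: neither tag nor digest")
	}
	if _, err := NewPRMExactReference("busybox:latest"); err != nil {
		panic(err) // a tagged reference should be accepted
	}
}
```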
// Compile-time check that prmExactReference implements json.Unmarshaler.
var _ json.Unmarshaler = (*prmExactReference)(nil)
// UnmarshalJSON implements the json.Unmarshaler interface.
func (prm *prmExactReference) UnmarshalJSON(data []byte) error {
*prm = prmExactReference{}
var tmp prmExactReference
if err := paranoidUnmarshalJSONObject(data, func(key string) interface{} {
switch key {
case "type":
return &tmp.Type
case "dockerReference":
return &tmp.DockerReference
default:
return nil
}
}); err != nil {
return err
}
if tmp.Type != prmTypeExactReference {
return InvalidPolicyFormatError(fmt.Sprintf("Unexpected policy requirement type \"%s\"", tmp.Type))
}
res, err := newPRMExactReference(tmp.DockerReference)
if err != nil {
return err
}
*prm = *res
return nil
}
// newPRMExactRepository is NewPRMExactRepository, except it returns the private type.
func newPRMExactRepository(dockerRepository string) (*prmExactRepository, error) {
if _, err := reference.ParseNamed(dockerRepository); err != nil {
return nil, InvalidPolicyFormatError(fmt.Sprintf("Invalid format of dockerRepository %s: %s", dockerRepository, err.Error()))
}
return &prmExactRepository{
prmCommon: prmCommon{Type: prmTypeExactRepository},
DockerRepository: dockerRepository,
}, nil
}
// NewPRMExactRepository returns a new "exactRepository" PolicyReferenceMatch.
func NewPRMExactRepository(dockerRepository string) (PolicyReferenceMatch, error) {
return newPRMExactRepository(dockerRepository)
}
// Compile-time check that prmExactRepository implements json.Unmarshaler.
var _ json.Unmarshaler = (*prmExactRepository)(nil)
// UnmarshalJSON implements the json.Unmarshaler interface.
func (prm *prmExactRepository) UnmarshalJSON(data []byte) error {
*prm = prmExactRepository{}
var tmp prmExactRepository
if err := paranoidUnmarshalJSONObject(data, func(key string) interface{} {
switch key {
case "type":
return &tmp.Type
case "dockerRepository":
return &tmp.DockerRepository
default:
return nil
}
}); err != nil {
return err
}
if tmp.Type != prmTypeExactRepository {
return InvalidPolicyFormatError(fmt.Sprintf("Unexpected policy requirement type \"%s\"", tmp.Type))
}
res, err := newPRMExactRepository(tmp.DockerRepository)
if err != nil {
return err
}
*prm = *res
return nil
}

File diff suppressed because it is too large

signature/policy_types.go Normal file

@@ -0,0 +1,139 @@
// Note: Consider the API unstable until the code supports at least three different image formats or transports.
// This defines types used to represent a signature verification policy in memory.
// Do not use the private types directly; either parse a configuration file, or construct a Policy from PolicyRequirements
// built using the constructor functions provided in policy_config.go.
package signature
// Policy defines requirements for considering a signature valid.
type Policy struct {
// Default applies to any image which does not have a matching policy in Specific.
Default PolicyRequirements `json:"default"`
// Specific applies to images matching scope, the map key.
// Scope is registry/server, namespace in a registry, single repository.
// FIXME: Scope syntax - should it be namespaced docker:something ? Or, in the worst case, a composite object (we couldn't use a JSON map)
// Most specific scope wins, duplication is prohibited (hard failure).
// Defaults to an empty map if not specified.
Specific map[string]PolicyRequirements `json:"specific"`
}
// PolicyRequirements is a set of requirements applying to a set of images; each of them must be satisfied (though perhaps each by a different signature).
// Must not be empty; frequently it will contain only a single element.
type PolicyRequirements []PolicyRequirement
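As the package comment above says, callers should build a Policy from the public constructors rather than the private types; a minimal sketch, assuming the caller already holds trusted public key bytes (the scope string is a placeholder):

```go
// Illustrative only: a Policy whose default demands an exact-identity GPG
// signature, with a per-repository scope that relaxes the identity match.
// keyData is the caller's trusted public key; the scope is a placeholder.
func sketchPolicy(keyData []byte) (*Policy, error) {
	byExact, err := NewPRSignedByKeyData(SBKeyTypeGPGKeys, keyData, NewPRMMatchExact())
	if err != nil {
		return nil, err
	}
	byRepo, err := NewPRSignedByKeyData(SBKeyTypeGPGKeys, keyData, NewPRMMatchRepository())
	if err != nil {
		return nil, err
	}
	return &Policy{
		Default: PolicyRequirements{byExact},
		Specific: map[string]PolicyRequirements{
			"registry.example.com/myapp": {byRepo},
		},
	}, nil
}
```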
// PolicyRequirement is a rule which must be satisfied by at least one of the signatures of an image.
// The type is public, but its definition is private.
type PolicyRequirement interface{} // Will be expanded and moved elsewhere later.
// prCommon is the common type field in a JSON encoding of PolicyRequirement.
type prCommon struct {
Type prTypeIdentifier `json:"type"`
}
// prTypeIdentifier is a string designating a kind of PolicyRequirement.
type prTypeIdentifier string
const (
prTypeInsecureAcceptAnything prTypeIdentifier = "insecureAcceptAnything"
prTypeReject prTypeIdentifier = "reject"
prTypeSignedBy prTypeIdentifier = "signedBy"
prTypeSignedBaseLayer prTypeIdentifier = "signedBaseLayer"
)
// prInsecureAcceptAnything is a PolicyRequirement with type = prTypeInsecureAcceptAnything:
// every image is allowed to run.
// Note that because PolicyRequirements are implicitly ANDed, this is necessary only if it is the only rule (to make the list non-empty and the policy explicit).
// NOTE: This allows the image to run; it DOES NOT consider the signature verified (per IsSignatureAuthorAccepted).
// FIXME? Better name?
type prInsecureAcceptAnything struct {
prCommon
}
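Since requirements are ANDed, this type is only useful as a standalone rule; the smallest explicit allow-everything policy is therefore a one-element default, sketched below:

```go
// Illustrative only: the minimal explicit "allow everything" policy JSON.
var sketchPermissivePolicyJSON = []byte(`{
	"default": [{"type": "insecureAcceptAnything"}]
}`)
```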
// prReject is a PolicyRequirement with type = prTypeReject: every image is rejected.
type prReject struct {
prCommon
}
// prSignedBy is a PolicyRequirement with type = prTypeSignedBy: the image is signed by trusted keys for a specified identity
type prSignedBy struct {
prCommon
// KeyType specifies what kind of key reference KeyPath/KeyData is.
// Acceptable values are “GPGKeys” | “signedByGPGKeys” | “X509Certificates” | “signedByX509CAs”
// FIXME: eventually also support GPGTOFU, X.509TOFU, with KeyPath only
KeyType sbKeyType `json:"keyType"`
// KeyPath is a pathname to a local file containing the trusted key(s). Exactly one of KeyPath and KeyData must be specified.
KeyPath string `json:"keyPath,omitempty"`
// KeyData contains the trusted key(s), base64-encoded. Exactly one of KeyPath and KeyData must be specified.
KeyData []byte `json:"keyData,omitempty"`
// SignedIdentity specifies what image identity the signature must be claiming about the image.
// Defaults to "match-exact" if not specified.
SignedIdentity PolicyReferenceMatch `json:"signedIdentity"`
}
// sbKeyType are the allowed values for prSignedBy.KeyType
type sbKeyType string
const (
// SBKeyTypeGPGKeys refers to keys contained in a GPG keyring
SBKeyTypeGPGKeys sbKeyType = "GPGKeys"
// SBKeyTypeSignedByGPGKeys refers to keys signed by keys in a GPG keyring
SBKeyTypeSignedByGPGKeys sbKeyType = "signedByGPGKeys"
// SBKeyTypeX509Certificates refers to keys in a set of X.509 certificates
// FIXME: PEM, DER?
SBKeyTypeX509Certificates sbKeyType = "X509Certificates"
// SBKeyTypeSignedByX509CAs refers to keys signed by one of the X.509 CAs
// FIXME: PEM, DER?
SBKeyTypeSignedByX509CAs sbKeyType = "signedByX509CAs"
)
// prSignedBaseLayer is a PolicyRequirement with type = prTypeSignedBaseLayer: the image has a specified, correctly signed, base image.
type prSignedBaseLayer struct {
prCommon
// BaseLayerIdentity specifies the base image to look for. "matchExact" is rejected, "matchRepository" is unlikely to be useful.
BaseLayerIdentity PolicyReferenceMatch `json:"baseLayerIdentity"`
}
// PolicyReferenceMatch specifies a set of image identities accepted in PolicyRequirement.
// The type is public, but its implementation is private.
type PolicyReferenceMatch interface{} // Will be expanded and moved elsewhere later.
// prmCommon is the common type field in a JSON encoding of PolicyReferenceMatch.
type prmCommon struct {
Type prmTypeIdentifier `json:"type"`
}
// prmTypeIdentifier is a string designating a kind of PolicyReferenceMatch.
type prmTypeIdentifier string
const (
prmTypeMatchExact prmTypeIdentifier = "matchExact"
prmTypeMatchRepository prmTypeIdentifier = "matchRepository"
prmTypeExactReference prmTypeIdentifier = "exactReference"
prmTypeExactRepository prmTypeIdentifier = "exactRepository"
)
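For reference, a sketch of the JSON shape of each match type as accepted by the unmarshalers in policy_config.go; the reference and repository values are placeholders:

```go
// Illustrative only: one JSON object per PolicyReferenceMatch type.
var sketchPRMJSON = [][]byte{
	[]byte(`{"type": "matchExact"}`),
	[]byte(`{"type": "matchRepository"}`),
	[]byte(`{"type": "exactReference", "dockerReference": "registry.example.com/app:v1"}`),
	[]byte(`{"type": "exactRepository", "dockerRepository": "registry.example.com/app"}`),
}
```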
// prmMatchExact is a PolicyReferenceMatch with type = prmTypeMatchExact: the two references must match exactly.
type prmMatchExact struct {
prmCommon
}
// prmMatchRepository is a PolicyReferenceMatch with type = prmTypeMatchRepository: the two references must use the same repository, may differ in the tag.
type prmMatchRepository struct {
prmCommon
}
// prmExactReference is a PolicyReferenceMatch with type = prmTypeExactReference: matches a specified reference exactly.
type prmExactReference struct {
prmCommon
DockerReference string `json:"dockerReference"`
}
// prmExactRepository is a PolicyReferenceMatch with type = prmTypeExactRepository: matches a specified repository, with any tag.
type prmExactRepository struct {
prmCommon
DockerRepository string `json:"dockerRepository"`
}

signature/signature.go Normal file

@@ -0,0 +1,181 @@
// Note: Consider the API unstable until the code supports at least three different image formats or transports.
package signature
import (
"encoding/json"
"errors"
"fmt"
"time"
"github.com/projectatomic/skopeo/version"
)
const (
signatureType = "atomic container signature"
signatureCreatorID = "atomic " + version.Version
)
// InvalidSignatureError is returned when parsing an invalid signature.
type InvalidSignatureError struct {
msg string
}
func (err InvalidSignatureError) Error() string {
return err.msg
}
// Signature is the parsed content of a signature.
type Signature struct {
DockerManifestDigest string // FIXME: more precise type?
DockerReference string // FIXME: more precise type?
}
// Wrap signature to add some methods which we don't want to make public.
type privateSignature struct {
Signature
}
// Compile-time check that privateSignature implements json.Marshaler
var _ json.Marshaler = (*privateSignature)(nil)
// MarshalJSON implements the json.Marshaler interface.
func (s privateSignature) MarshalJSON() ([]byte, error) {
return s.marshalJSONWithVariables(time.Now().UTC().Unix(), signatureCreatorID)
}
// Implementation of MarshalJSON, with caller-chosen values of the variable items to help testing.
func (s privateSignature) marshalJSONWithVariables(timestamp int64, creatorID string) ([]byte, error) {
if s.DockerManifestDigest == "" || s.DockerReference == "" {
return nil, errors.New("Unexpected empty signature content")
}
critical := map[string]interface{}{
"type": signatureType,
"image": map[string]string{"docker-manifest-digest": s.DockerManifestDigest},
"identity": map[string]string{"docker-reference": s.DockerReference},
}
optional := map[string]interface{}{
"creator": creatorID,
"timestamp": timestamp,
}
signature := map[string]interface{}{
"critical": critical,
"optional": optional,
}
return json.Marshal(signature)
}
// Compile-time check that privateSignature implements json.Unmarshaler
var _ json.Unmarshaler = (*privateSignature)(nil)
// UnmarshalJSON implements the json.Unmarshaler interface
func (s *privateSignature) UnmarshalJSON(data []byte) error {
err := s.strictUnmarshalJSON(data)
if err != nil {
if _, ok := err.(jsonFormatError); ok {
err = InvalidSignatureError{msg: err.Error()}
}
}
return err
}
// strictUnmarshalJSON is UnmarshalJSON, except that it may return the internal jsonFormatError error type.
// Splitting it into a separate function allows us to do the jsonFormatError → InvalidSignatureError conversion in a single place, the caller.
func (s *privateSignature) strictUnmarshalJSON(data []byte) error {
var untyped interface{}
if err := json.Unmarshal(data, &untyped); err != nil {
return err
}
o, ok := untyped.(map[string]interface{})
if !ok {
return InvalidSignatureError{msg: "Invalid signature format"}
}
if err := validateExactMapKeys(o, "critical", "optional"); err != nil {
return err
}
c, err := mapField(o, "critical")
if err != nil {
return err
}
if err := validateExactMapKeys(c, "type", "image", "identity"); err != nil {
return err
}
optional, err := mapField(o, "optional")
if err != nil {
return err
}
_ = optional // We don't use anything from here for now.
t, err := stringField(c, "type")
if err != nil {
return err
}
if t != signatureType {
return InvalidSignatureError{msg: fmt.Sprintf("Unrecognized signature type %s", t)}
}
image, err := mapField(c, "image")
if err != nil {
return err
}
if err := validateExactMapKeys(image, "docker-manifest-digest"); err != nil {
return err
}
digest, err := stringField(image, "docker-manifest-digest")
if err != nil {
return err
}
s.DockerManifestDigest = digest
identity, err := mapField(c, "identity")
if err != nil {
return err
}
if err := validateExactMapKeys(identity, "docker-reference"); err != nil {
return err
}
reference, err := stringField(identity, "docker-reference")
if err != nil {
return err
}
s.DockerReference = reference
return nil
}
// sign formats the signature and returns a blob signed using mech and keyIdentity.
func (s privateSignature) sign(mech SigningMechanism, keyIdentity string) ([]byte, error) {
json, err := json.Marshal(s)
if err != nil {
return nil, err
}
return mech.Sign(json, keyIdentity)
}
// verifyAndExtractSignature verifies that signature has been signed by expectedKeyIdentity
// using mech for expectedDockerReference, and returns it (without matching its contents to an image).
func verifyAndExtractSignature(mech SigningMechanism, unverifiedSignature []byte,
expectedKeyIdentity, expectedDockerReference string) (*Signature, error) {
signed, keyIdentity, err := mech.Verify(unverifiedSignature)
if err != nil {
return nil, err
}
if keyIdentity != expectedKeyIdentity {
return nil, InvalidSignatureError{msg: fmt.Sprintf("Signature by %s does not match expected fingerprint %s", keyIdentity, expectedKeyIdentity)}
}
var unmatchedSignature privateSignature
if err := json.Unmarshal(signed, &unmatchedSignature); err != nil {
return nil, InvalidSignatureError{msg: err.Error()}
}
if unmatchedSignature.DockerReference != expectedDockerReference {
return nil, InvalidSignatureError{msg: fmt.Sprintf("Docker reference %s does not match %s",
unmatchedSignature.DockerReference, expectedDockerReference)}
}
signature := unmatchedSignature.Signature // Policy OK.
return &signature, nil
}

signature/signature_test.go Normal file

@@ -0,0 +1,206 @@
package signature
import (
"encoding/json"
"io/ioutil"
"testing"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
func TestInvalidSignatureError(t *testing.T) {
// A stupid test just to keep code coverage
s := "test"
err := InvalidSignatureError{msg: s}
assert.Equal(t, s, err.Error())
}
func TestMarshalJSON(t *testing.T) {
// Empty string values
s := privateSignature{Signature{DockerManifestDigest: "", DockerReference: "_"}}
_, err := s.MarshalJSON()
assert.Error(t, err)
s = privateSignature{Signature{DockerManifestDigest: "_", DockerReference: ""}}
_, err = s.MarshalJSON()
assert.Error(t, err)
// Success
s = privateSignature{Signature{DockerManifestDigest: "digest!@#", DockerReference: "reference#@!"}}
marshaled, err := s.marshalJSONWithVariables(0, "CREATOR")
require.NoError(t, err)
assert.Equal(t, []byte("{\"critical\":{\"identity\":{\"docker-reference\":\"reference#@!\"},\"image\":{\"docker-manifest-digest\":\"digest!@#\"},\"type\":\"atomic container signature\"},\"optional\":{\"creator\":\"CREATOR\",\"timestamp\":0}}"),
marshaled)
// We can't test MarshalJSON directly because the timestamp will keep changing, so just test that
// it doesn't fail. And call it through the JSON package for good measure.
_, err = json.Marshal(s)
assert.NoError(t, err)
}
// Return the result of modifying validJSON with fn and unmarshaling it into *sig
func tryUnmarshalModifiedSignature(t *testing.T, sig *privateSignature, validJSON []byte, modifyFn func(mSI)) error {
var tmp mSI
err := json.Unmarshal(validJSON, &tmp)
require.NoError(t, err)
modifyFn(tmp)
testJSON, err := json.Marshal(tmp)
require.NoError(t, err)
*sig = privateSignature{}
return json.Unmarshal(testJSON, sig)
}
func TestUnmarshalJSON(t *testing.T) {
var s privateSignature
// Invalid input. Note that json.Unmarshal is guaranteed to validate input before calling our
// UnmarshalJSON implementation; so test that first, then test our error handling for completeness.
err := json.Unmarshal([]byte("&"), &s)
assert.Error(t, err)
err = s.UnmarshalJSON([]byte("&"))
assert.Error(t, err)
// Not an object
err = json.Unmarshal([]byte("1"), &s)
assert.Error(t, err)
// Start with a valid JSON.
validSig := privateSignature{
Signature{
DockerManifestDigest: "digest!@#",
DockerReference: "reference#@!",
},
}
validJSON, err := validSig.MarshalJSON()
require.NoError(t, err)
// Success
s = privateSignature{}
err = json.Unmarshal(validJSON, &s)
require.NoError(t, err)
assert.Equal(t, validSig, s)
// Various ways to corrupt the JSON
breakFns := []func(mSI){
// A top-level field is missing
func(v mSI) { delete(v, "critical") },
func(v mSI) { delete(v, "optional") },
// Extra top-level sub-object
func(v mSI) { v["unexpected"] = 1 },
// "critical" not an object
func(v mSI) { v["critical"] = 1 },
// "optional" not an object
func(v mSI) { v["optional"] = 1 },
// A field of "critical" is missing
func(v mSI) { delete(x(v, "critical"), "type") },
func(v mSI) { delete(x(v, "critical"), "image") },
func(v mSI) { delete(x(v, "critical"), "identity") },
// Extra field of "critical"
func(v mSI) { x(v, "critical")["unexpected"] = 1 },
// Invalid "type"
func(v mSI) { x(v, "critical")["type"] = 1 },
func(v mSI) { x(v, "critical")["type"] = "unexpected" },
// Invalid "image" object
func(v mSI) { x(v, "critical")["image"] = 1 },
func(v mSI) { delete(x(v, "critical", "image"), "docker-manifest-digest") },
func(v mSI) { x(v, "critical", "image")["unexpected"] = 1 },
// Invalid "docker-manifest-digest"
func(v mSI) { x(v, "critical", "image")["docker-manifest-digest"] = 1 },
// Invalid "identity" object
func(v mSI) { x(v, "critical")["identity"] = 1 },
func(v mSI) { delete(x(v, "critical", "identity"), "docker-reference") },
func(v mSI) { x(v, "critical", "identity")["unexpected"] = 1 },
// Invalid "docker-reference"
func(v mSI) { x(v, "critical", "identity")["docker-reference"] = 1 },
}
for _, fn := range breakFns {
err = tryUnmarshalModifiedSignature(t, &s, validJSON, fn)
assert.Error(t, err)
}
// Modifications to "optional" are allowed and ignored
allowedModificationFns := []func(mSI){
// Add an optional field
func(v mSI) { x(v, "optional")["unexpected"] = 1 },
// Delete an optional field
func(v mSI) { delete(x(v, "optional"), "creator") },
}
for _, fn := range allowedModificationFns {
err = tryUnmarshalModifiedSignature(t, &s, validJSON, fn)
require.NoError(t, err)
assert.Equal(t, validSig, s)
}
}
func TestSign(t *testing.T) {
mech, err := newGPGSigningMechanismInDirectory(testGPGHomeDirectory)
require.NoError(t, err)
sig := privateSignature{
Signature{
DockerManifestDigest: "digest!@#",
DockerReference: "reference#@!",
},
}
// Successful signing
signature, err := sig.sign(mech, TestKeyFingerprint)
require.NoError(t, err)
verified, err := verifyAndExtractSignature(mech, signature, TestKeyFingerprint, sig.DockerReference)
require.NoError(t, err)
assert.Equal(t, sig.Signature, *verified)
// Error creating blob to sign
_, err = privateSignature{}.sign(mech, TestKeyFingerprint)
assert.Error(t, err)
// Error signing
_, err = sig.sign(mech, "this fingerprint doesn't exist")
assert.Error(t, err)
}
func TestVerifyAndExtractSignature(t *testing.T) {
mech, err := newGPGSigningMechanismInDirectory(testGPGHomeDirectory)
require.NoError(t, err)
signature, err := ioutil.ReadFile("./fixtures/image.signature")
require.NoError(t, err)
// Successful verification
sig, err := verifyAndExtractSignature(mech, signature, TestKeyFingerprint, TestImageSignatureReference)
require.NoError(t, err)
assert.Equal(t, TestImageSignatureReference, sig.DockerReference)
assert.Equal(t, TestImageManifestDigest, sig.DockerManifestDigest)
// For extra paranoia, test that we return a nil signature object on error.
// Completely invalid signature.
sig, err = verifyAndExtractSignature(mech, []byte{}, TestKeyFingerprint, TestImageSignatureReference)
assert.Error(t, err)
assert.Nil(t, sig)
sig, err = verifyAndExtractSignature(mech, []byte("invalid signature"), TestKeyFingerprint, TestImageSignatureReference)
assert.Error(t, err)
assert.Nil(t, sig)
// Valid signature of non-JSON
invalidBlobSignature, err := ioutil.ReadFile("./fixtures/invalid-blob.signature")
require.NoError(t, err)
sig, err = verifyAndExtractSignature(mech, invalidBlobSignature, TestKeyFingerprint, TestImageSignatureReference)
assert.Error(t, err)
assert.Nil(t, sig)
// Valid signature with a wrong key
sig, err = verifyAndExtractSignature(mech, signature, "unexpected fingerprint", TestImageSignatureReference)
assert.Error(t, err)
assert.Nil(t, sig)
// Valid signature with a wrong image reference
sig, err = verifyAndExtractSignature(mech, signature, TestKeyFingerprint, "unexpected docker reference")
assert.Error(t, err)
assert.Nil(t, sig)
}


@@ -1,19 +1,76 @@
package types
import (
containerTypes "github.com/docker/engine-api/types/container"
"io"
"time"
)
type ImageInspect struct {
Tag string
Digest string
RepoTags []string
Comment string
Created string
ContainerConfig *containerTypes.Config
DockerVersion string
Author string
Config *containerTypes.Config
Architecture string
Os string
// Registry is a service providing repositories.
type Registry interface {
Repositories() []Repository
Repository(ref string) Repository
Lookup(term string) []Image // docker registry v1 only AFAICT, v2 can be built by hacking with Images()
}
// Repository is a set of images.
type Repository interface {
Images() []Image
Image(ref string) Image // ref == image name w/o registry part
}
// ImageSource is a service, possibly remote (= slow), to download components of a single image.
// This is primarily useful for copying images around; for examining their properties, Image (below)
// is usually more useful.
type ImageSource interface {
// IntendedDockerReference returns the full, unambiguous, Docker reference for this image, _as specified by the user_
// (not as the image itself, or its underlying storage, claims). This can be used e.g. to determine which public keys are trusted for this image.
// May be "" if unknown.
IntendedDockerReference() string
// GetManifest returns the image's manifest along with its MIME type; the empty string is returned if the MIME type is unknown. The slice parameter lists the manifest MIME types the caller is prepared to accept.
// It may use a remote (= slow) service.
GetManifest([]string) ([]byte, string, error)
// Note: Calling GetLayer() may have ordering dependencies WRT other methods of this type. FIXME: How does this work with (docker save) on stdin?
GetLayer(digest string) (io.ReadCloser, error)
// GetSignatures returns the image's signatures. It may use a remote (= slow) service.
GetSignatures() ([][]byte, error)
}
// ImageDestination is a service, possibly remote (= slow), to store components of a single image.
type ImageDestination interface {
// CanonicalDockerReference returns the full, unambiguous, Docker reference for this image (even if the user referred to the image using some shorthand notation).
CanonicalDockerReference() (string, error)
// FIXME? This should also receive a MIME type if known, to differentiate between schema versions.
PutManifest([]byte) error
// Note: Calling PutLayer() and other methods may have ordering dependencies WRT other methods of this type. FIXME: Figure out and document.
PutLayer(digest string, stream io.Reader) error
PutSignatures(signatures [][]byte) error
}
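A minimal sketch of the copy flow these two interfaces are shaped for; layerDigests is assumed to have been extracted from the manifest by the caller, since this sketch does not parse it:

```go
// Illustrative only: copying an image from an ImageSource to an
// ImageDestination. Error handling is abbreviated; nil is passed to
// GetManifest to express no MIME-type preference.
func sketchCopy(src ImageSource, dest ImageDestination, layerDigests []string) error {
	manifest, _, err := src.GetManifest(nil)
	if err != nil {
		return err
	}
	for _, digest := range layerDigests {
		layer, err := src.GetLayer(digest)
		if err != nil {
			return err
		}
		err = dest.PutLayer(digest, layer)
		layer.Close()
		if err != nil {
			return err
		}
	}
	sigs, err := src.GetSignatures()
	if err != nil {
		return err
	}
	if err := dest.PutSignatures(sigs); err != nil {
		return err
	}
	return dest.PutManifest(manifest)
}
```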
// Image is the primary API for inspecting properties of images.
type Image interface {
// ref to repository?
// IntendedDockerReference returns the full, unambiguous, Docker reference for this image, _as specified by the user_
// (not as the image itself, or its underlying storage, claims). This can be used e.g. to determine which public keys are trusted for this image.
// May be "" if unknown.
IntendedDockerReference() string
// Manifest is like ImageSource.GetManifest, but the result is cached; it is OK to call this however often you need.
// FIXME? This should also return a MIME type if known, to differentiate between schema versions.
Manifest() ([]byte, error)
// Signatures is like ImageSource.GetSignatures, but the result is cached; it is OK to call this however often you need.
Signatures() ([][]byte, error)
Layers(layers ...string) error // configure download directory? Call it DownloadLayers?
// Inspect returns various information for (skopeo inspect) parsed from the manifest and configuration.
Inspect() (*ImageInspectInfo, error)
DockerTar() ([]byte, error) // ??? also, configure output directory
}
// ImageInspectInfo is a set of metadata describing Docker images, primarily their manifest and configuration.
type ImageInspectInfo struct {
Tag string
Created time.Time
DockerVersion string
Labels map[string]string
Architecture string
Os string
Layers []string
}


@@ -1,8 +1,9 @@
language: go
go:
- 1.2
- 1.3
- 1.4
- 1.5
- tip
install:
- go get -t ./...
script: GOMAXPROCS=4 GORACE="halt_on_error=1" go test -race -v ./...


@@ -1,3 +1,22 @@
# 0.10.0
* feature: Add a test hook (#180)
* feature: `ParseLevel` is now case-insensitive (#326)
* feature: `FieldLogger` interface that generalizes `Logger` and `Entry` (#308)
* performance: avoid re-allocations on `WithFields` (#335)
# 0.9.0
* logrus/text_formatter: don't emit empty msg
* logrus/hooks/airbrake: move out of main repository
* logrus/hooks/sentry: move out of main repository
* logrus/hooks/papertrail: move out of main repository
* logrus/hooks/bugsnag: move out of main repository
* logrus/core: run tests with `-race`
* logrus/core: detect TTY based on `stderr`
* logrus/core: support `WithError` on logger
* logrus/core: Solaris support
# 0.8.7
* logrus/core: fix possible race (#216)


@@ -1,4 +1,4 @@
# Logrus <img src="http://i.imgur.com/hTeVwmJ.png" width="40" height="40" alt=":walrus:" class="emoji" title=":walrus:"/>&nbsp;[![Build Status](https://travis-ci.org/Sirupsen/logrus.svg?branch=master)](https://travis-ci.org/Sirupsen/logrus)&nbsp;[![godoc reference](https://godoc.org/github.com/Sirupsen/logrus?status.png)][godoc]
# Logrus <img src="http://i.imgur.com/hTeVwmJ.png" width="40" height="40" alt=":walrus:" class="emoji" title=":walrus:"/>&nbsp;[![Build Status](https://travis-ci.org/Sirupsen/logrus.svg?branch=master)](https://travis-ci.org/Sirupsen/logrus)&nbsp;[![GoDoc](https://godoc.org/github.com/Sirupsen/logrus?status.svg)](https://godoc.org/github.com/Sirupsen/logrus)
Logrus is a structured logger for Go (golang), completely API compatible with
the standard library logger. [Godoc][godoc]. **Please note the Logrus API is not
@@ -12,7 +12,7 @@ plain text):
![Colored](http://i.imgur.com/PY7qMwd.png)
With `log.Formatter = new(logrus.JSONFormatter)`, for easy parsing by logstash
With `log.SetFormatter(&log.JSONFormatter{})`, for easy parsing by logstash
or Splunk:
```json
@@ -32,7 +32,7 @@ ocean","size":10,"time":"2014-03-10 19:57:38.562264131 -0400 EDT"}
"time":"2014-03-10 19:57:38.562543128 -0400 EDT"}
```
With the default `log.Formatter = new(&log.TextFormatter{})` when a TTY is not
With the default `log.SetFormatter(&log.TextFormatter{})` when a TTY is not
attached, the output is compatible with the
[logfmt](http://godoc.org/github.com/kr/logfmt) format:
@@ -75,17 +75,12 @@ package main
import (
"os"
log "github.com/Sirupsen/logrus"
"github.com/Sirupsen/logrus/hooks/airbrake"
)
func init() {
// Log as JSON instead of the default ASCII formatter.
log.SetFormatter(&log.JSONFormatter{})
// Use the Airbrake hook to report errors that have Error severity or above to
// an exception tracker. You can create custom hooks, see the Hooks section.
log.AddHook(airbrake.NewHook("https://example.com", "xyz", "development"))
// Output to stderr instead of stdout, could also be a file.
log.SetOutput(os.Stderr)
@@ -182,13 +177,16 @@ Logrus comes with [built-in hooks](hooks/). Add those, or your custom hook, in
```go
import (
log "github.com/Sirupsen/logrus"
"github.com/Sirupsen/logrus/hooks/airbrake"
"gopkg.in/gemnasium/logrus-airbrake-hook.v2" // the package is named "aibrake"
logrus_syslog "github.com/Sirupsen/logrus/hooks/syslog"
"log/syslog"
)
func init() {
log.AddHook(airbrake.NewHook("https://example.com", "xyz", "development"))
// Use the Airbrake hook to report errors that have Error severity or above to
// an exception tracker. You can create custom hooks, see the Hooks section.
log.AddHook(airbrake.NewHook(123, "xyz", "production"))
hook, err := logrus_syslog.NewSyslogHook("udp", "localhost:514", syslog.LOG_INFO, "")
if err != nil {
@@ -198,20 +196,21 @@ func init() {
}
}
```
Note: The Syslog hook also supports connecting to local syslog (e.g. "/dev/log", "/var/run/syslog", or "/var/run/log"). For details, please check the [syslog hook README](hooks/syslog/README.md).
| Hook | Description |
| ----- | ----------- |
| [Airbrake](https://github.com/Sirupsen/logrus/blob/master/hooks/airbrake/airbrake.go) | Send errors to an exception tracking service compatible with the Airbrake API. Uses [`airbrake-go`](https://github.com/tobi/airbrake-go) behind the scenes. |
| [Papertrail](https://github.com/Sirupsen/logrus/blob/master/hooks/papertrail/papertrail.go) | Send errors to the Papertrail hosted logging service via UDP. |
| [Airbrake](https://github.com/gemnasium/logrus-airbrake-hook) | Send errors to the Airbrake API V3. Uses the official [`gobrake`](https://github.com/airbrake/gobrake) behind the scenes. |
| [Airbrake "legacy"](https://github.com/gemnasium/logrus-airbrake-legacy-hook) | Send errors to an exception tracking service compatible with the Airbrake API V2. Uses [`airbrake-go`](https://github.com/tobi/airbrake-go) behind the scenes. |
| [Papertrail](https://github.com/polds/logrus-papertrail-hook) | Send errors to the [Papertrail](https://papertrailapp.com) hosted logging service via UDP. |
| [Syslog](https://github.com/Sirupsen/logrus/blob/master/hooks/syslog/syslog.go) | Send errors to remote syslog server. Uses standard library `log/syslog` behind the scenes. |
| [BugSnag](https://github.com/Sirupsen/logrus/blob/master/hooks/bugsnag/bugsnag.go) | Send errors to the Bugsnag exception tracking service. |
| [Sentry](https://github.com/Sirupsen/logrus/blob/master/hooks/sentry/sentry.go) | Send errors to the Sentry error logging and aggregation service. |
| [Bugsnag](https://github.com/Shopify/logrus-bugsnag/blob/master/bugsnag.go) | Send errors to the Bugsnag exception tracking service. |
| [Sentry](https://github.com/evalphobia/logrus_sentry) | Send errors to the Sentry error logging and aggregation service. |
| [Hiprus](https://github.com/nubo/hiprus) | Send errors to a channel in hipchat. |
| [Logrusly](https://github.com/sebest/logrusly) | Send logs to [Loggly](https://www.loggly.com/) |
| [Slackrus](https://github.com/johntdyer/slackrus) | Hook for Slack chat. |
| [Journalhook](https://github.com/wercker/journalhook) | Hook for logging to `systemd-journald` |
| [Graylog](https://github.com/gemnasium/logrus-hooks/tree/master/graylog) | Hook for logging to [Graylog](http://graylog2.org/) |
| [Graylog](https://github.com/gemnasium/logrus-graylog-hook) | Hook for logging to [Graylog](http://graylog2.org/) |
| [Raygun](https://github.com/squirkle/logrus-raygun-hook) | Hook for logging to [Raygun.io](http://raygun.io/) |
| [LFShook](https://github.com/rifflock/lfshook) | Hook for logging to the local filesystem |
| [Honeybadger](https://github.com/agonzalezro/logrus_honeybadger) | Hook for sending exceptions to Honeybadger |
@@ -219,6 +218,15 @@ func init() {
| [Rollrus](https://github.com/heroku/rollrus) | Hook for sending errors to rollbar |
| [Fluentd](https://github.com/evalphobia/logrus_fluent) | Hook for logging to fluentd |
| [Mongodb](https://github.com/weekface/mgorus) | Hook for logging to mongodb |
| [InfluxDB](https://github.com/Abramovic/logrus_influxdb) | Hook for logging to influxdb |
| [Octokit](https://github.com/dorajistyle/logrus-octokit-hook) | Hook for logging to github via octokit |
| [DeferPanic](https://github.com/deferpanic/dp-logrus) | Hook for logging to DeferPanic |
| [Redis-Hook](https://github.com/rogierlommers/logrus-redis-hook) | Hook for logging to a ELK stack (through Redis) |
| [Amqp-Hook](https://github.com/vladoatanasov/logrus_amqp) | Hook for logging to Amqp broker (Like RabbitMQ) |
| [KafkaLogrus](https://github.com/goibibo/KafkaLogrus) | Hook for logging to kafka |
| [Typetalk](https://github.com/dragon3/logrus-typetalk-hook) | Hook for logging to [Typetalk](https://www.typetalk.in/) |
| [ElasticSearch](https://github.com/sohlich/elogrus) | Hook for logging to ElasticSearch|
#### Level logging
@@ -296,15 +304,16 @@ The built-in logging formatters are:
field to `true`. To force no colored output even if there is a TTY set the
`DisableColors` field to `true`
* `logrus.JSONFormatter`. Logs fields as JSON.
* `logrus_logstash.LogstashFormatter`. Logs fields as Logstash Events (http://logstash.net).
* `logrus/formatters/logstash.LogstashFormatter`. Logs fields as [Logstash](http://logstash.net) Events.
```go
logrus.SetFormatter(&logrus_logstash.LogstashFormatter{Type: application_name"})
logrus.SetFormatter(&logstash.LogstashFormatter{Type: "application_name"})
```
Third party logging formatters:
* [`zalgo`](https://github.com/aybabtme/logzalgo): invoking the P͉̫o̳̼̊w̖͈̰͎e̬͔̭͂r͚̼̹̲ ̫͓͉̳͈ō̠͕͖̚f̝͍̠ ͕̲̞͖͑Z̖̫̤̫ͪa͉̬͈̗l͖͎g̳̥o̰̥̅!̣͔̲̻͊̄ ̙̘̦̹̦.
* [`prefixed`](https://github.com/x-cray/logrus-prefixed-formatter). Displays log entry source along with alternative layout.
* [`zalgo`](https://github.com/aybabtme/logzalgo). Invoking the P͉̫o̳̼̊w̖͈̰͎e̬͔̭͂r͚̼̹̲ ̫͓͉̳͈ō̠͕͖̚f̝͍̠ ͕̲̞͖͑Z̖̫̤̫ͪa͉̬͈̗l͖͎g̳̥o̰̥̅!̣͔̲̻͊̄ ̙̘̦̹̦.
You can define your formatter by implementing the `Formatter` interface,
requiring a `Format` method. `Format` takes an `*Entry`. `entry.Data` is a
@@ -353,5 +362,27 @@ Log rotation is not provided with Logrus. Log rotation should be done by an
external program (like `logrotate(8)`) that can compress and delete old log
entries. It should not be a feature of the application-level logger.
#### Tools
[godoc]: https://godoc.org/github.com/Sirupsen/logrus
| Tool | Description |
| ---- | ----------- |
|[Logrus Mate](https://github.com/gogap/logrus_mate)|Logrus Mate is a tool for Logrus to manage loggers; you can initialize a logger's level, hooks, and formatter from a config file, so the logger is generated with a different config for each environment.|
#### Testing
Logrus has a built in facility for asserting the presence of log messages. This is implemented through the `test` hook and provides:
* decorators for existing logger (`test.NewLocal` and `test.NewGlobal`) which basically just add the `test` hook
* a test logger (`test.NewNullLogger`) that just records log messages (and does not output any):
```go
logger, hook := NewNullLogger()
logger.Error("Hello error")
assert.Equal(1, len(hook.Entries))
assert.Equal(logrus.ErrorLevel, hook.LastEntry().Level)
assert.Equal("Hello error", hook.LastEntry().Message)
hook.Reset()
assert.Nil(hook.LastEntry())
```


@@ -68,7 +68,7 @@ func (entry *Entry) WithField(key string, value interface{}) *Entry {
// Add a map of fields to the Entry.
func (entry *Entry) WithFields(fields Fields) *Entry {
data := Fields{}
data := make(Fields, len(entry.Data)+len(fields))
for k, v := range entry.Data {
data[k] = v
}


@@ -64,6 +64,12 @@ func (logger *Logger) WithFields(fields Fields) *Entry {
return NewEntry(logger).WithFields(fields)
}
// Add an error as single field to the log entry. All it does is call
// `WithError` for the given `error`.
func (logger *Logger) WithError(err error) *Entry {
return NewEntry(logger).WithError(err)
}
func (logger *Logger) Debugf(format string, args ...interface{}) {
if logger.Level >= DebugLevel {
NewEntry(logger).Debugf(format, args...)


@@ -3,6 +3,7 @@ package logrus
import (
"fmt"
"log"
"strings"
)
// Fields type, used to pass to `WithFields`.
@@ -33,7 +34,7 @@ func (level Level) String() string {
// ParseLevel takes a string level and returns the Logrus log level constant.
func ParseLevel(lvl string) (Level, error) {
switch lvl {
switch strings.ToLower(lvl) {
case "panic":
return PanicLevel, nil
case "fatal":
@@ -52,6 +53,16 @@ func ParseLevel(lvl string) (Level, error) {
return l, fmt.Errorf("not a valid logrus Level: %q", lvl)
}
// A constant exposing all logging levels
var AllLevels = []Level{
PanicLevel,
FatalLevel,
ErrorLevel,
WarnLevel,
InfoLevel,
DebugLevel,
}
// These are the different logging levels. You can set the logging level to log
// on your instance of logger, obtained with `logrus.New()`.
const (
@@ -96,3 +107,37 @@ type StdLogger interface {
Panicf(string, ...interface{})
Panicln(...interface{})
}
// The FieldLogger interface generalizes the Entry and Logger types
type FieldLogger interface {
WithField(key string, value interface{}) *Entry
WithFields(fields Fields) *Entry
WithError(err error) *Entry
Debugf(format string, args ...interface{})
Infof(format string, args ...interface{})
Printf(format string, args ...interface{})
Warnf(format string, args ...interface{})
Warningf(format string, args ...interface{})
Errorf(format string, args ...interface{})
Fatalf(format string, args ...interface{})
Panicf(format string, args ...interface{})
Debug(args ...interface{})
Info(args ...interface{})
Print(args ...interface{})
Warn(args ...interface{})
Warning(args ...interface{})
Error(args ...interface{})
Fatal(args ...interface{})
Panic(args ...interface{})
Debugln(args ...interface{})
Infoln(args ...interface{})
Println(args ...interface{})
Warnln(args ...interface{})
Warningln(args ...interface{})
Errorln(args ...interface{})
Fatalln(args ...interface{})
Panicln(args ...interface{})
}


@@ -12,9 +12,9 @@ import (
"unsafe"
)
// IsTerminal returns true if the given file descriptor is a terminal.
// IsTerminal returns true if stderr's file descriptor is a terminal.
func IsTerminal() bool {
fd := syscall.Stdout
fd := syscall.Stderr
var termios Termios
_, _, err := syscall.Syscall6(syscall.SYS_IOCTL, uintptr(fd), ioctlReadTermios, uintptr(unsafe.Pointer(&termios)), 0, 0, 0)
return err == 0

vendor/github.com/Sirupsen/logrus/terminal_solaris.go generated vendored Normal file

@@ -0,0 +1,15 @@
// +build solaris
package logrus
import (
"os"
"golang.org/x/sys/unix"
)
// IsTerminal returns true if stdout's file descriptor is a terminal.
func IsTerminal() bool {
_, err := unix.IoctlGetTermios(int(os.Stdout.Fd()), unix.TCGETA)
return err == nil
}


@@ -18,9 +18,9 @@ var (
procGetConsoleMode = kernel32.NewProc("GetConsoleMode")
)
// IsTerminal returns true if the given file descriptor is a terminal.
// IsTerminal returns true if stderr's file descriptor is a terminal.
func IsTerminal() bool {
fd := syscall.Stdout
fd := syscall.Stderr
var st uint32
r, _, e := syscall.Syscall(procGetConsoleMode.Addr(), 2, uintptr(fd), uintptr(unsafe.Pointer(&st)), 0)
return r != 0 && e == 0


@@ -84,7 +84,9 @@ func (f *TextFormatter) Format(entry *Entry) ([]byte, error) {
f.appendKeyValue(b, "time", entry.Time.Format(timestampFormat))
}
f.appendKeyValue(b, "level", entry.Level.String())
f.appendKeyValue(b, "msg", entry.Message)
if entry.Message != "" {
f.appendKeyValue(b, "msg", entry.Message)
}
for _, key := range keys {
f.appendKeyValue(b, key, entry.Data[key])
}

vendor/github.com/davecgh/go-spew/LICENSE generated vendored Normal file

@@ -0,0 +1,13 @@
Copyright (c) 2012-2013 Dave Collins <dave@davec.name>
Permission to use, copy, modify, and distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.

vendor/github.com/davecgh/go-spew/spew/bypass.go generated vendored Normal file

@@ -0,0 +1,151 @@
// Copyright (c) 2015 Dave Collins <dave@davec.name>
//
// Permission to use, copy, modify, and distribute this software for any
// purpose with or without fee is hereby granted, provided that the above
// copyright notice and this permission notice appear in all copies.
//
// THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
// WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
// MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
// ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
// WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
// ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
// OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
// NOTE: Due to the following build constraints, this file will only be compiled
// when the code is not running on Google App Engine and "-tags disableunsafe"
// is not added to the go build command line.
// +build !appengine,!disableunsafe
package spew
import (
"reflect"
"unsafe"
)
const (
// UnsafeDisabled is a build-time constant which specifies whether or
// not access to the unsafe package is available.
UnsafeDisabled = false
// ptrSize is the size of a pointer on the current arch.
ptrSize = unsafe.Sizeof((*byte)(nil))
)
var (
// offsetPtr, offsetScalar, and offsetFlag are the offsets for the
// internal reflect.Value fields. These values are valid before golang
// commit ecccf07e7f9d which changed the format. They are also valid
// after commit 82f48826c6c7 which changed the format again to mirror
// the original format. Code in the init function updates these offsets
// as necessary.
offsetPtr = uintptr(ptrSize)
offsetScalar = uintptr(0)
offsetFlag = uintptr(ptrSize * 2)
// flagKindWidth and flagKindShift indicate various bits that the
// reflect package uses internally to track kind information.
//
// flagRO indicates whether or not the value field of a reflect.Value is
// read-only.
//
// flagIndir indicates whether the value field of a reflect.Value is
// the actual data or a pointer to the data.
//
// These values are valid before golang commit 90a7c3c86944 which
// changed their positions. Code in the init function updates these
// flags as necessary.
flagKindWidth = uintptr(5)
flagKindShift = uintptr(flagKindWidth - 1)
flagRO = uintptr(1 << 0)
flagIndir = uintptr(1 << 1)
)
func init() {
// Older versions of reflect.Value stored small integers directly in the
// ptr field (which is named val in the older versions). Versions
// between commits ecccf07e7f9d and 82f48826c6c7 added a new field named
// scalar for this purpose which unfortunately came before the flag
// field, so the offset of the flag field is different for those
// versions.
//
// This code constructs a new reflect.Value from a known small integer
// and checks if the size of the reflect.Value struct indicates it has
// the scalar field. When it does, the offsets are updated accordingly.
vv := reflect.ValueOf(0xf00)
if unsafe.Sizeof(vv) == (ptrSize * 4) {
offsetScalar = ptrSize * 2
offsetFlag = ptrSize * 3
}
// Commit 90a7c3c86944 changed the flag positions such that the low
// order bits are the kind. This code extracts the kind from the flags
// field and ensures it's the correct type. When it's not, the flag
// order has been changed to the newer format, so the flags are updated
// accordingly.
upf := unsafe.Pointer(uintptr(unsafe.Pointer(&vv)) + offsetFlag)
upfv := *(*uintptr)(upf)
flagKindMask := uintptr((1<<flagKindWidth - 1) << flagKindShift)
if (upfv&flagKindMask)>>flagKindShift != uintptr(reflect.Int) {
flagKindShift = 0
flagRO = 1 << 5
flagIndir = 1 << 6
// Commit adf9b30e5594 modified the flags to separate the
// flagRO flag into two bits which specifies whether or not the
// field is embedded. This causes flagIndir to move over a bit
// and means that flagRO is the combination of either of the
// original flagRO bit and the new bit.
//
// This code detects the change by extracting what used to be
// the indirect bit to ensure it's set. When it's not, the flag
// order has been changed to the newer format, so the flags are
// updated accordingly.
if upfv&flagIndir == 0 {
flagRO = 3 << 5
flagIndir = 1 << 7
}
}
}
// unsafeReflectValue converts the passed reflect.Value into one that bypasses
// the typical safety restrictions preventing access to unaddressable and
// unexported data. It works by digging the raw pointer to the underlying
// value out of the protected value and generating a new unprotected (unsafe)
// reflect.Value to it.
//
// This allows us to check for implementations of the Stringer and error
// interfaces to be used for pretty printing ordinarily unaddressable and
// inaccessible values such as unexported struct fields.
func unsafeReflectValue(v reflect.Value) (rv reflect.Value) {
indirects := 1
vt := v.Type()
upv := unsafe.Pointer(uintptr(unsafe.Pointer(&v)) + offsetPtr)
rvf := *(*uintptr)(unsafe.Pointer(uintptr(unsafe.Pointer(&v)) + offsetFlag))
if rvf&flagIndir != 0 {
vt = reflect.PtrTo(v.Type())
indirects++
} else if offsetScalar != 0 {
// The value is in the scalar field when it's not one of the
// reference types.
switch vt.Kind() {
case reflect.Uintptr:
case reflect.Chan:
case reflect.Func:
case reflect.Map:
case reflect.Ptr:
case reflect.UnsafePointer:
default:
upv = unsafe.Pointer(uintptr(unsafe.Pointer(&v)) +
offsetScalar)
}
}
pv := reflect.NewAt(vt, upv)
rv = pv
for i := 0; i < indirects; i++ {
rv = rv.Elem()
}
return rv
}

vendor/github.com/davecgh/go-spew/spew/bypasssafe.go generated vendored Normal file

@@ -0,0 +1,37 @@
// Copyright (c) 2015 Dave Collins <dave@davec.name>
//
// Permission to use, copy, modify, and distribute this software for any
// purpose with or without fee is hereby granted, provided that the above
// copyright notice and this permission notice appear in all copies.
//
// THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
// WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
// MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
// ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
// WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
// ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
// OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
// NOTE: Due to the following build constraints, this file will only be compiled
// when either the code is running on Google App Engine or "-tags disableunsafe"
// is added to the go build command line.
// +build appengine disableunsafe
package spew
import "reflect"
const (
// UnsafeDisabled is a build-time constant which specifies whether or
// not access to the unsafe package is available.
UnsafeDisabled = true
)
// unsafeReflectValue typically converts the passed reflect.Value into one
// that bypasses the typical safety restrictions preventing access to
// unaddressable and unexported data. However, doing this relies on access to
// the unsafe package. This is a stub version which simply returns the passed
// reflect.Value when the unsafe package is not available.
func unsafeReflectValue(v reflect.Value) reflect.Value {
return v
}

vendor/github.com/davecgh/go-spew/spew/common.go generated vendored Normal file

@@ -0,0 +1,341 @@
/*
* Copyright (c) 2013 Dave Collins <dave@davec.name>
*
* Permission to use, copy, modify, and distribute this software for any
* purpose with or without fee is hereby granted, provided that the above
* copyright notice and this permission notice appear in all copies.
*
* THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
* WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
* MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
* ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
* WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
* ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
* OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
package spew
import (
"bytes"
"fmt"
"io"
"reflect"
"sort"
"strconv"
)
// Some constants in the form of bytes to avoid string overhead. This mirrors
// the technique used in the fmt package.
var (
panicBytes = []byte("(PANIC=")
plusBytes = []byte("+")
iBytes = []byte("i")
trueBytes = []byte("true")
falseBytes = []byte("false")
interfaceBytes = []byte("(interface {})")
commaNewlineBytes = []byte(",\n")
newlineBytes = []byte("\n")
openBraceBytes = []byte("{")
openBraceNewlineBytes = []byte("{\n")
closeBraceBytes = []byte("}")
asteriskBytes = []byte("*")
colonBytes = []byte(":")
colonSpaceBytes = []byte(": ")
openParenBytes = []byte("(")
closeParenBytes = []byte(")")
spaceBytes = []byte(" ")
pointerChainBytes = []byte("->")
nilAngleBytes = []byte("<nil>")
maxNewlineBytes = []byte("<max depth reached>\n")
maxShortBytes = []byte("<max>")
circularBytes = []byte("<already shown>")
circularShortBytes = []byte("<shown>")
invalidAngleBytes = []byte("<invalid>")
openBracketBytes = []byte("[")
closeBracketBytes = []byte("]")
percentBytes = []byte("%")
precisionBytes = []byte(".")
openAngleBytes = []byte("<")
closeAngleBytes = []byte(">")
openMapBytes = []byte("map[")
closeMapBytes = []byte("]")
lenEqualsBytes = []byte("len=")
capEqualsBytes = []byte("cap=")
)
// hexDigits is used to map a decimal value to a hex digit.
var hexDigits = "0123456789abcdef"
// catchPanic handles any panics that might occur during the handleMethods
// calls.
func catchPanic(w io.Writer, v reflect.Value) {
if err := recover(); err != nil {
w.Write(panicBytes)
fmt.Fprintf(w, "%v", err)
w.Write(closeParenBytes)
}
}
// handleMethods attempts to call the Error and String methods on the underlying
// type the passed reflect.Value represents and outputs the result to Writer w.
//
// It handles panics in any called methods by catching and displaying the error
// as the formatted value.
func handleMethods(cs *ConfigState, w io.Writer, v reflect.Value) (handled bool) {
// We need an interface to check if the type implements the error or
// Stringer interface. However, the reflect package won't give us an
// interface on certain things like unexported struct fields in order
// to enforce visibility rules. We use unsafe, when it's available,
// to bypass these restrictions since this package does not mutate the
// values.
if !v.CanInterface() {
if UnsafeDisabled {
return false
}
v = unsafeReflectValue(v)
}
// Choose whether or not to do error and Stringer interface lookups against
// the base type or a pointer to the base type depending on settings.
// Technically calling one of these methods with a pointer receiver can
// mutate the value; however, types which choose to satisfy an error or
// Stringer interface with a pointer receiver should not be mutating their
// state inside these interface methods.
if !cs.DisablePointerMethods && !UnsafeDisabled && !v.CanAddr() {
v = unsafeReflectValue(v)
}
if v.CanAddr() {
v = v.Addr()
}
// Is it an error or Stringer?
switch iface := v.Interface().(type) {
case error:
defer catchPanic(w, v)
if cs.ContinueOnMethod {
w.Write(openParenBytes)
w.Write([]byte(iface.Error()))
w.Write(closeParenBytes)
w.Write(spaceBytes)
return false
}
w.Write([]byte(iface.Error()))
return true
case fmt.Stringer:
defer catchPanic(w, v)
if cs.ContinueOnMethod {
w.Write(openParenBytes)
w.Write([]byte(iface.String()))
w.Write(closeParenBytes)
w.Write(spaceBytes)
return false
}
w.Write([]byte(iface.String()))
return true
}
return false
}
// printBool outputs a boolean value as true or false to Writer w.
func printBool(w io.Writer, val bool) {
if val {
w.Write(trueBytes)
} else {
w.Write(falseBytes)
}
}
// printInt outputs a signed integer value to Writer w.
func printInt(w io.Writer, val int64, base int) {
w.Write([]byte(strconv.FormatInt(val, base)))
}
// printUint outputs an unsigned integer value to Writer w.
func printUint(w io.Writer, val uint64, base int) {
w.Write([]byte(strconv.FormatUint(val, base)))
}
// printFloat outputs a floating point value using the specified precision,
// which is expected to be 32 or 64bit, to Writer w.
func printFloat(w io.Writer, val float64, precision int) {
w.Write([]byte(strconv.FormatFloat(val, 'g', -1, precision)))
}
// printComplex outputs a complex value using the specified float precision
// for the real and imaginary parts to Writer w.
func printComplex(w io.Writer, c complex128, floatPrecision int) {
r := real(c)
w.Write(openParenBytes)
w.Write([]byte(strconv.FormatFloat(r, 'g', -1, floatPrecision)))
i := imag(c)
if i >= 0 {
w.Write(plusBytes)
}
w.Write([]byte(strconv.FormatFloat(i, 'g', -1, floatPrecision)))
w.Write(iBytes)
w.Write(closeParenBytes)
}
// printHexPtr outputs a uintptr formatted as hexadecimal with a leading '0x'
// prefix to Writer w.
func printHexPtr(w io.Writer, p uintptr) {
// Null pointer.
num := uint64(p)
if num == 0 {
w.Write(nilAngleBytes)
return
}
// Max uint64 is 16 bytes in hex + 2 bytes for '0x' prefix
buf := make([]byte, 18)
// It's simpler to construct the hex string right to left.
base := uint64(16)
i := len(buf) - 1
for num >= base {
buf[i] = hexDigits[num%base]
num /= base
i--
}
buf[i] = hexDigits[num]
// Add '0x' prefix.
i--
buf[i] = 'x'
i--
buf[i] = '0'
// Strip unused leading bytes.
buf = buf[i:]
w.Write(buf)
}
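Tracing the right-to-left construction with concrete input (an editor's sketch; runs inside this package):

var b bytes.Buffer
printHexPtr(&b, uintptr(500)) // 500 = 0x1f4: digits fill buf from the right,
                              // then "0x" is prepended, writing "0x1f4"
printHexPtr(&b, uintptr(0))   // zero short-circuits to the <nil> marker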
// valuesSorter implements sort.Interface to allow a slice of reflect.Value
// elements to be sorted.
type valuesSorter struct {
values []reflect.Value
strings []string // either nil or same len as values
cs *ConfigState
}
// newValuesSorter initializes a valuesSorter instance, which holds a set of
// surrogate keys on which the data should be sorted. It uses flags in
// ConfigState to decide if and how to populate those surrogate keys.
func newValuesSorter(values []reflect.Value, cs *ConfigState) sort.Interface {
vs := &valuesSorter{values: values, cs: cs}
if canSortSimply(vs.values[0].Kind()) {
return vs
}
if !cs.DisableMethods {
vs.strings = make([]string, len(values))
for i := range vs.values {
b := bytes.Buffer{}
if !handleMethods(cs, &b, vs.values[i]) {
vs.strings = nil
break
}
vs.strings[i] = b.String()
}
}
if vs.strings == nil && cs.SpewKeys {
vs.strings = make([]string, len(values))
for i := range vs.values {
vs.strings[i] = Sprintf("%#v", vs.values[i].Interface())
}
}
return vs
}
// canSortSimply tests whether a reflect.Kind is a primitive that can be sorted
// directly, or whether it should be considered for sorting by surrogate keys
// (if the ConfigState allows it).
func canSortSimply(kind reflect.Kind) bool {
// This switch parallels valueSortLess, except for the default case.
switch kind {
case reflect.Bool:
return true
case reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64, reflect.Int:
return true
case reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64, reflect.Uint:
return true
case reflect.Float32, reflect.Float64:
return true
case reflect.String:
return true
case reflect.Uintptr:
return true
case reflect.Array:
return true
}
return false
}
// Len returns the number of values in the slice. It is part of the
// sort.Interface implementation.
func (s *valuesSorter) Len() int {
return len(s.values)
}
// Swap swaps the values at the passed indices. It is part of the
// sort.Interface implementation.
func (s *valuesSorter) Swap(i, j int) {
s.values[i], s.values[j] = s.values[j], s.values[i]
if s.strings != nil {
s.strings[i], s.strings[j] = s.strings[j], s.strings[i]
}
}
// valueSortLess returns whether the first value should sort before the second
// value. It is used by valuesSorter.Less as part of the sort.Interface
// implementation.
func valueSortLess(a, b reflect.Value) bool {
switch a.Kind() {
case reflect.Bool:
return !a.Bool() && b.Bool()
case reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64, reflect.Int:
return a.Int() < b.Int()
case reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64, reflect.Uint:
return a.Uint() < b.Uint()
case reflect.Float32, reflect.Float64:
return a.Float() < b.Float()
case reflect.String:
return a.String() < b.String()
case reflect.Uintptr:
return a.Uint() < b.Uint()
case reflect.Array:
// Compare the contents of both arrays.
l := a.Len()
for i := 0; i < l; i++ {
av := a.Index(i)
bv := b.Index(i)
if av.Interface() == bv.Interface() {
continue
}
return valueSortLess(av, bv)
}
}
return a.String() < b.String()
}
// Less returns whether the value at index i should sort before the
// value at index j. It is part of the sort.Interface implementation.
func (s *valuesSorter) Less(i, j int) bool {
if s.strings == nil {
return valueSortLess(s.values[i], s.values[j])
}
return s.strings[i] < s.strings[j]
}
// sortValues is a sort function that handles both native types and any type that
// can be converted to error or Stringer. Other inputs are sorted according to
// their Value.String() value to ensure display stability.
func sortValues(values []reflect.Value, cs *ConfigState) {
if len(values) == 0 {
return
}
sort.Sort(newValuesSorter(values, cs))
}
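A short sketch of the typical call site (from inside this package): sorting map keys so a dump is deterministic.

m := map[string]int{"b": 2, "a": 1, "c": 3}
keys := reflect.ValueOf(m).MapKeys()
sortValues(keys, &Config) // string keys sort natively: "a", "b", "c"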

297
vendor/github.com/davecgh/go-spew/spew/config.go generated vendored Normal file

@@ -0,0 +1,297 @@
/*
* Copyright (c) 2013 Dave Collins <dave@davec.name>
*
* Permission to use, copy, modify, and distribute this software for any
* purpose with or without fee is hereby granted, provided that the above
* copyright notice and this permission notice appear in all copies.
*
* THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
* WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
* MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
* ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
* WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
* ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
* OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
package spew
import (
"bytes"
"fmt"
"io"
"os"
)
// ConfigState houses the configuration options used by spew to format and
// display values. There is a global instance, Config, that is used to control
// all top-level Formatter and Dump functionality. Each ConfigState instance
// provides methods equivalent to the top-level functions.
//
// The zero value for ConfigState provides no indentation. You would typically
// want to set it to a space or a tab.
//
// Alternatively, you can use NewDefaultConfig to get a ConfigState instance
// with default settings. See the documentation of NewDefaultConfig for default
// values.
type ConfigState struct {
// Indent specifies the string to use for each indentation level. The
// global config instance that all top-level functions use sets this to a
// single space by default. If you would like more indentation, you might
// set this to a tab with "\t" or perhaps two spaces with " ".
Indent string
// MaxDepth controls the maximum number of levels to descend into nested
// data structures. The default, 0, means there is no limit.
//
// NOTE: Circular data structures are properly detected, so it is not
// necessary to set this value unless you specifically want to limit deeply
// nested data structures.
MaxDepth int
// DisableMethods specifies whether or not error and Stringer interfaces are
// invoked for types that implement them.
DisableMethods bool
// DisablePointerMethods specifies whether or not to check for and invoke
// error and Stringer interfaces on types which only accept a pointer
// receiver when the current type is not a pointer.
//
// NOTE: This might be an unsafe action since calling one of these methods
// with a pointer receiver could technically mutate the value; however,
// in practice, types which choose to satisfy an error or Stringer
// interface with a pointer receiver should not be mutating their state
// inside these interface methods. As a result, this option relies on
// access to the unsafe package, so it will not have any effect when
// running in environments without access to the unsafe package such as
// Google App Engine or with the "disableunsafe" build tag specified.
DisablePointerMethods bool
// ContinueOnMethod specifies whether or not recursion should continue once
// a custom error or Stringer interface is invoked. The default, false,
// means it will print the results of invoking the custom error or Stringer
// interface and return immediately instead of continuing to recurse into
// the internals of the data type.
//
// NOTE: This flag does not have any effect if method invocation is disabled
// via the DisableMethods or DisablePointerMethods options.
ContinueOnMethod bool
// SortKeys specifies that map keys should be sorted before being printed. Use
// this to have a more deterministic, diffable output. Note that only
// native types (bool, int, uint, floats, uintptr and string) and types
// that support the error or Stringer interfaces (if methods are
// enabled) are supported, with other types sorted according to the
// reflect.Value.String() output which guarantees display stability.
SortKeys bool
// SpewKeys specifies that, as a last resort attempt, map keys should
// be spewed to strings and sorted by those strings. This is only
// considered if SortKeys is true.
SpewKeys bool
}
// Config is the active configuration of the top-level functions.
// The configuration can be changed by modifying the contents of spew.Config.
var Config = ConfigState{Indent: " "}
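For callers, adjusting this global once before dumping affects every top-level function. A hedged sketch (the field values are arbitrary; assumes the usual import of github.com/davecgh/go-spew/spew):

spew.Config.Indent = "\t"   // tabs instead of the default single space
spew.Config.MaxDepth = 3    // stop descending after three levels
spew.Config.SortKeys = true // deterministic, diff-friendly map output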
// Errorf is a wrapper for fmt.Errorf that treats each argument as if it were
// passed with a Formatter interface returned by c.NewFormatter. It returns
// the formatted string as a value that satisfies error. See NewFormatter
// for formatting details.
//
// This function is shorthand for the following syntax:
//
// fmt.Errorf(format, c.NewFormatter(a), c.NewFormatter(b))
func (c *ConfigState) Errorf(format string, a ...interface{}) (err error) {
return fmt.Errorf(format, c.convertArgs(a)...)
}
// Fprint is a wrapper for fmt.Fprint that treats each argument as if it were
// passed with a Formatter interface returned by c.NewFormatter. It returns
// the number of bytes written and any write error encountered. See
// NewFormatter for formatting details.
//
// This function is shorthand for the following syntax:
//
// fmt.Fprint(w, c.NewFormatter(a), c.NewFormatter(b))
func (c *ConfigState) Fprint(w io.Writer, a ...interface{}) (n int, err error) {
return fmt.Fprint(w, c.convertArgs(a)...)
}
// Fprintf is a wrapper for fmt.Fprintf that treats each argument as if it were
// passed with a Formatter interface returned by c.NewFormatter. It returns
// the number of bytes written and any write error encountered. See
// NewFormatter for formatting details.
//
// This function is shorthand for the following syntax:
//
// fmt.Fprintf(w, format, c.NewFormatter(a), c.NewFormatter(b))
func (c *ConfigState) Fprintf(w io.Writer, format string, a ...interface{}) (n int, err error) {
return fmt.Fprintf(w, format, c.convertArgs(a)...)
}
// Fprintln is a wrapper for fmt.Fprintln that treats each argument as if it were
// passed with a Formatter interface returned by c.NewFormatter. See
// NewFormatter for formatting details.
//
// This function is shorthand for the following syntax:
//
// fmt.Fprintln(w, c.NewFormatter(a), c.NewFormatter(b))
func (c *ConfigState) Fprintln(w io.Writer, a ...interface{}) (n int, err error) {
return fmt.Fprintln(w, c.convertArgs(a)...)
}
// Print is a wrapper for fmt.Print that treats each argument as if it were
// passed with a Formatter interface returned by c.NewFormatter. It returns
// the number of bytes written and any write error encountered. See
// NewFormatter for formatting details.
//
// This function is shorthand for the following syntax:
//
// fmt.Print(c.NewFormatter(a), c.NewFormatter(b))
func (c *ConfigState) Print(a ...interface{}) (n int, err error) {
return fmt.Print(c.convertArgs(a)...)
}
// Printf is a wrapper for fmt.Printf that treats each argument as if it were
// passed with a Formatter interface returned by c.NewFormatter. It returns
// the number of bytes written and any write error encountered. See
// NewFormatter for formatting details.
//
// This function is shorthand for the following syntax:
//
// fmt.Printf(format, c.NewFormatter(a), c.NewFormatter(b))
func (c *ConfigState) Printf(format string, a ...interface{}) (n int, err error) {
return fmt.Printf(format, c.convertArgs(a)...)
}
// Println is a wrapper for fmt.Println that treats each argument as if it were
// passed with a Formatter interface returned by c.NewFormatter. It returns
// the number of bytes written and any write error encountered. See
// NewFormatter for formatting details.
//
// This function is shorthand for the following syntax:
//
// fmt.Println(c.NewFormatter(a), c.NewFormatter(b))
func (c *ConfigState) Println(a ...interface{}) (n int, err error) {
return fmt.Println(c.convertArgs(a)...)
}
// Sprint is a wrapper for fmt.Sprint that treats each argument as if it were
// passed with a Formatter interface returned by c.NewFormatter. It returns
// the resulting string. See NewFormatter for formatting details.
//
// This function is shorthand for the following syntax:
//
// fmt.Sprint(c.NewFormatter(a), c.NewFormatter(b))
func (c *ConfigState) Sprint(a ...interface{}) string {
return fmt.Sprint(c.convertArgs(a)...)
}
// Sprintf is a wrapper for fmt.Sprintf that treats each argument as if it were
// passed with a Formatter interface returned by c.NewFormatter. It returns
// the resulting string. See NewFormatter for formatting details.
//
// This function is shorthand for the following syntax:
//
// fmt.Sprintf(format, c.NewFormatter(a), c.NewFormatter(b))
func (c *ConfigState) Sprintf(format string, a ...interface{}) string {
return fmt.Sprintf(format, c.convertArgs(a)...)
}
// Sprintln is a wrapper for fmt.Sprintln that treats each argument as if it
// were passed with a Formatter interface returned by c.NewFormatter. It
// returns the resulting string. See NewFormatter for formatting details.
//
// This function is shorthand for the following syntax:
//
// fmt.Sprintln(c.NewFormatter(a), c.NewFormatter(b))
func (c *ConfigState) Sprintln(a ...interface{}) string {
return fmt.Sprintln(c.convertArgs(a)...)
}
/*
NewFormatter returns a custom formatter that satisfies the fmt.Formatter
interface. As a result, it integrates cleanly with standard fmt package
printing functions. The formatter is useful for inline printing of smaller data
types similar to the standard %v format specifier.
The custom formatter only responds to the %v (most compact), %+v (adds pointer
addresses), %#v (adds types), and %#+v (adds types and pointer addresses) verb
combinations. Any other verbs such as %x and %q will be sent to the
standard fmt package for formatting. In addition, the custom formatter ignores
the width and precision arguments (however they will still work on the format
specifiers not handled by the custom formatter).
Typically this function shouldn't be called directly. It is much easier to make
use of the custom formatter by calling one of the convenience functions such as
c.Printf, c.Println, or c.Fprintf.
*/
func (c *ConfigState) NewFormatter(v interface{}) fmt.Formatter {
return newFormatter(c, v)
}
// Fdump formats and displays the passed arguments to io.Writer w. It formats
// exactly the same as Dump.
func (c *ConfigState) Fdump(w io.Writer, a ...interface{}) {
fdump(c, w, a...)
}
/*
Dump displays the passed parameters to standard out with newlines, customizable
indentation, and additional debug information such as complete types and all
pointer addresses used to indirect to the final value. It provides the
following features over the built-in printing facilities provided by the fmt
package:
* Pointers are dereferenced and followed
* Circular data structures are detected and handled properly
* Custom Stringer/error interfaces are optionally invoked, including
on unexported types
* Custom types which only implement the Stringer/error interfaces via
a pointer receiver are optionally invoked when passing non-pointer
variables
* Byte arrays and slices are dumped like the hexdump -C command which
includes offsets, byte values in hex, and ASCII output
The configuration options are controlled by modifying the public members
of c. See ConfigState for options documentation.
See Fdump if you would prefer dumping to an arbitrary io.Writer or Sdump to
get the formatted result as a string.
*/
func (c *ConfigState) Dump(a ...interface{}) {
fdump(c, os.Stdout, a...)
}
// Sdump returns a string with the passed arguments formatted exactly the same
// as Dump.
func (c *ConfigState) Sdump(a ...interface{}) string {
var buf bytes.Buffer
fdump(c, &buf, a...)
return buf.String()
}
// convertArgs accepts a slice of arguments and returns a slice of the same
// length with each argument converted to a spew Formatter interface using
// the ConfigState associated with s.
func (c *ConfigState) convertArgs(args []interface{}) (formatters []interface{}) {
formatters = make([]interface{}, len(args))
for index, arg := range args {
formatters[index] = newFormatter(c, arg)
}
return formatters
}
// NewDefaultConfig returns a ConfigState with the following default settings.
//
// Indent: " "
// MaxDepth: 0
// DisableMethods: false
// DisablePointerMethods: false
// ContinueOnMethod: false
// SortKeys: false
func NewDefaultConfig() *ConfigState {
return &ConfigState{Indent: " "}
}
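A ConfigState instance keeps such settings local instead of mutating the shared spew.Config. A sketch (myValue is hypothetical):

cs := spew.NewDefaultConfig()
cs.SortKeys = true       // only this instance sorts map keys
str := cs.Sdump(myValue) // formatted with the local settings only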

202
vendor/github.com/davecgh/go-spew/spew/doc.go generated vendored Normal file

@@ -0,0 +1,202 @@
/*
* Copyright (c) 2013 Dave Collins <dave@davec.name>
*
* Permission to use, copy, modify, and distribute this software for any
* purpose with or without fee is hereby granted, provided that the above
* copyright notice and this permission notice appear in all copies.
*
* THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
* WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
* MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
* ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
* WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
* ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
* OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
/*
Package spew implements a deep pretty printer for Go data structures to aid in
debugging.
A quick overview of the additional features spew provides over the built-in
printing facilities for Go data types is as follows:
* Pointers are dereferenced and followed
* Circular data structures are detected and handled properly
* Custom Stringer/error interfaces are optionally invoked, including
on unexported types
* Custom types which only implement the Stringer/error interfaces via
a pointer receiver are optionally invoked when passing non-pointer
variables
* Byte arrays and slices are dumped like the hexdump -C command which
includes offsets, byte values in hex, and ASCII output (only when using
Dump style)
There are two different approaches spew allows for dumping Go data structures:
* Dump style which prints with newlines, customizable indentation,
and additional debug information such as types and all pointer addresses
used to indirect to the final value
* A custom Formatter interface that integrates cleanly with the standard fmt
package and replaces %v, %+v, %#v, and %#+v to provide inline printing
similar to the default %v while providing the additional functionality
outlined above and passing unsupported format verbs such as %x and %q
along to fmt
Quick Start
This section demonstrates how to quickly get started with spew. See the
sections below for further details on formatting and configuration options.
To dump a variable with full newlines, indentation, type, and pointer
information use Dump, Fdump, or Sdump:
spew.Dump(myVar1, myVar2, ...)
spew.Fdump(someWriter, myVar1, myVar2, ...)
str := spew.Sdump(myVar1, myVar2, ...)
Alternatively, if you would prefer to use format strings with a compacted inline
printing style, use the convenience wrappers Printf, Fprintf, etc with
%v (most compact), %+v (adds pointer addresses), %#v (adds types), or
%#+v (adds types and pointer addresses):
spew.Printf("myVar1: %v -- myVar2: %+v", myVar1, myVar2)
spew.Printf("myVar3: %#v -- myVar4: %#+v", myVar3, myVar4)
spew.Fprintf(someWriter, "myVar1: %v -- myVar2: %+v", myVar1, myVar2)
spew.Fprintf(someWriter, "myVar3: %#v -- myVar4: %#+v", myVar3, myVar4)
Configuration Options
Configuration of spew is handled by fields in the ConfigState type. For
convenience, all of the top-level functions use a global state available
via the spew.Config global.
It is also possible to create a ConfigState instance that provides methods
equivalent to the top-level functions. This allows concurrent configuration
options. See the ConfigState documentation for more details.
The following configuration options are available:
* Indent
String to use for each indentation level for Dump functions.
It is a single space by default. A popular alternative is "\t".
* MaxDepth
Maximum number of levels to descend into nested data structures.
There is no limit by default.
* DisableMethods
Disables invocation of error and Stringer interface methods.
Method invocation is enabled by default.
* DisablePointerMethods
Disables invocation of error and Stringer interface methods on types
which only accept pointer receivers from non-pointer variables.
Pointer method invocation is enabled by default.
* ContinueOnMethod
Enables recursion into types after invoking error and Stringer interface
methods. Recursion after method invocation is disabled by default.
* SortKeys
Specifies that map keys should be sorted before being printed. Use
this to have a more deterministic, diffable output. Note that
only native types (bool, int, uint, floats, uintptr and string)
and types which implement error or Stringer interfaces are
supported with other types sorted according to the
reflect.Value.String() output which guarantees display
stability. Natural map order is used by default.
* SpewKeys
Specifies that, as a last resort attempt, map keys should be
spewed to strings and sorted by those strings. This is only
considered if SortKeys is true.
Dump Usage
Simply call spew.Dump with a list of variables you want to dump:
spew.Dump(myVar1, myVar2, ...)
You may also call spew.Fdump if you would prefer to output to an arbitrary
io.Writer. For example, to dump to standard error:
spew.Fdump(os.Stderr, myVar1, myVar2, ...)
A third option is to call spew.Sdump to get the formatted output as a string:
str := spew.Sdump(myVar1, myVar2, ...)
Sample Dump Output
See the Dump example for details on the setup of the types and variables being
shown here.
(main.Foo) {
unexportedField: (*main.Bar)(0xf84002e210)({
flag: (main.Flag) flagTwo,
data: (uintptr) <nil>
}),
ExportedField: (map[interface {}]interface {}) (len=1) {
(string) (len=3) "one": (bool) true
}
}
Byte (and uint8) arrays and slices are displayed uniquely like the hexdump -C
command as shown.
([]uint8) (len=32 cap=32) {
00000000 11 12 13 14 15 16 17 18 19 1a 1b 1c 1d 1e 1f 20 |............... |
00000010 21 22 23 24 25 26 27 28 29 2a 2b 2c 2d 2e 2f 30 |!"#$%&'()*+,-./0|
00000020 31 32 |12|
}
Custom Formatter
Spew provides a custom formatter that implements the fmt.Formatter interface
so that it integrates cleanly with standard fmt package printing functions. The
formatter is useful for inline printing of smaller data types similar to the
standard %v format specifier.
The custom formatter only responds to the %v (most compact), %+v (adds pointer
addresses), %#v (adds types), or %#+v (adds types and pointer addresses) verb
combinations. Any other verbs such as %x and %q will be sent to the
standard fmt package for formatting. In addition, the custom formatter ignores
the width and precision arguments (however they will still work on the format
specifiers not handled by the custom formatter).
Custom Formatter Usage
The simplest way to make use of the spew custom formatter is to call one of the
convenience functions such as spew.Printf, spew.Println, or spew.Fprintf. The
functions have syntax you are most likely already familiar with:
spew.Printf("myVar1: %v -- myVar2: %+v", myVar1, myVar2)
spew.Printf("myVar3: %#v -- myVar4: %#+v", myVar3, myVar4)
spew.Println(myVar, myVar2)
spew.Fprintf(os.Stderr, "myVar1: %v -- myVar2: %+v", myVar1, myVar2)
spew.Fprintf(os.Stderr, "myVar3: %#v -- myVar4: %#+v", myVar3, myVar4)
See the Index for the full list of convenience functions.
Sample Formatter Output
Double pointer to a uint8:
%v: <**>5
%+v: <**>(0xf8400420d0->0xf8400420c8)5
%#v: (**uint8)5
%#+v: (**uint8)(0xf8400420d0->0xf8400420c8)5
Pointer to circular struct with a uint8 field and a pointer to itself:
%v: <*>{1 <*><shown>}
%+v: <*>(0xf84003e260){ui8:1 c:<*>(0xf84003e260)<shown>}
%#v: (*main.circular){ui8:(uint8)1 c:(*main.circular)<shown>}
%#+v: (*main.circular)(0xf84003e260){ui8:(uint8)1 c:(*main.circular)(0xf84003e260)<shown>}
See the Printf example for details on the setup of variables being shown
here.
Errors
Since it is possible for custom Stringer/error interfaces to panic, spew
detects them and handles them internally by printing the panic information
inline with the output. Since spew is intended to provide deep pretty printing
capabilities on structures, it intentionally does not return any errors.
*/
package spew

509
vendor/github.com/davecgh/go-spew/spew/dump.go generated vendored Normal file

@@ -0,0 +1,509 @@
/*
* Copyright (c) 2013 Dave Collins <dave@davec.name>
*
* Permission to use, copy, modify, and distribute this software for any
* purpose with or without fee is hereby granted, provided that the above
* copyright notice and this permission notice appear in all copies.
*
* THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
* WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
* MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
* ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
* WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
* ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
* OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
package spew
import (
"bytes"
"encoding/hex"
"fmt"
"io"
"os"
"reflect"
"regexp"
"strconv"
"strings"
)
var (
// uint8Type is a reflect.Type representing a uint8. It is used to
// convert cgo types to uint8 slices for hexdumping.
uint8Type = reflect.TypeOf(uint8(0))
// cCharRE is a regular expression that matches a cgo char.
// It is used to detect character arrays to hexdump them.
cCharRE = regexp.MustCompile("^.*\\._Ctype_char$")
// cUnsignedCharRE is a regular expression that matches a cgo unsigned
// char. It is used to detect unsigned character arrays to hexdump
// them.
cUnsignedCharRE = regexp.MustCompile("^.*\\._Ctype_unsignedchar$")
// cUint8tCharRE is a regular expression that matches a cgo uint8_t.
// It is used to detect uint8_t arrays to hexdump them.
cUint8tCharRE = regexp.MustCompile("^.*\\._Ctype_uint8_t$")
)
// dumpState contains information about the state of a dump operation.
type dumpState struct {
w io.Writer
depth int
pointers map[uintptr]int
ignoreNextType bool
ignoreNextIndent bool
cs *ConfigState
}
// indent performs indentation according to the depth level and cs.Indent
// option.
func (d *dumpState) indent() {
if d.ignoreNextIndent {
d.ignoreNextIndent = false
return
}
d.w.Write(bytes.Repeat([]byte(d.cs.Indent), d.depth))
}
// unpackValue returns values inside of non-nil interfaces when possible.
// This is useful for data types like structs, arrays, slices, and maps which
// can contain varying types packed inside an interface.
func (d *dumpState) unpackValue(v reflect.Value) reflect.Value {
if v.Kind() == reflect.Interface && !v.IsNil() {
v = v.Elem()
}
return v
}
// dumpPtr handles formatting of pointers by indirecting them as necessary.
func (d *dumpState) dumpPtr(v reflect.Value) {
// Remove pointers at or below the current depth from the map used to detect
// circular refs.
for k, depth := range d.pointers {
if depth >= d.depth {
delete(d.pointers, k)
}
}
// Keep list of all dereferenced pointers to show later.
pointerChain := make([]uintptr, 0)
// Figure out how many levels of indirection there are by dereferencing
// pointers and unpacking interfaces down the chain while detecting circular
// references.
nilFound := false
cycleFound := false
indirects := 0
ve := v
for ve.Kind() == reflect.Ptr {
if ve.IsNil() {
nilFound = true
break
}
indirects++
addr := ve.Pointer()
pointerChain = append(pointerChain, addr)
if pd, ok := d.pointers[addr]; ok && pd < d.depth {
cycleFound = true
indirects--
break
}
d.pointers[addr] = d.depth
ve = ve.Elem()
if ve.Kind() == reflect.Interface {
if ve.IsNil() {
nilFound = true
break
}
ve = ve.Elem()
}
}
// Display type information.
d.w.Write(openParenBytes)
d.w.Write(bytes.Repeat(asteriskBytes, indirects))
d.w.Write([]byte(ve.Type().String()))
d.w.Write(closeParenBytes)
// Display pointer information.
if len(pointerChain) > 0 {
d.w.Write(openParenBytes)
for i, addr := range pointerChain {
if i > 0 {
d.w.Write(pointerChainBytes)
}
printHexPtr(d.w, addr)
}
d.w.Write(closeParenBytes)
}
// Display dereferenced value.
d.w.Write(openParenBytes)
switch {
case nilFound:
d.w.Write(nilAngleBytes)
case cycleFound:
d.w.Write(circularBytes)
default:
d.ignoreNextType = true
d.dump(ve)
}
d.w.Write(closeParenBytes)
}
// dumpSlice handles formatting of arrays and slices. Byte (uint8 under
// reflection) arrays and slices are dumped in hexdump -C fashion.
func (d *dumpState) dumpSlice(v reflect.Value) {
// Determine whether this type should be hex dumped or not. Also,
// for types which should be hexdumped, try to use the underlying data
// first, then fall back to trying to convert them to a uint8 slice.
var buf []uint8
doConvert := false
doHexDump := false
numEntries := v.Len()
if numEntries > 0 {
vt := v.Index(0).Type()
vts := vt.String()
switch {
// C types that need to be converted.
case cCharRE.MatchString(vts):
fallthrough
case cUnsignedCharRE.MatchString(vts):
fallthrough
case cUint8tCharRE.MatchString(vts):
doConvert = true
// Try to use existing uint8 slices and fall back to converting
// and copying if that fails.
case vt.Kind() == reflect.Uint8:
// We need an addressable interface to convert the type
// to a byte slice. However, the reflect package won't
// give us an interface on certain things like
// unexported struct fields in order to enforce
// visibility rules. We use unsafe, when available, to
// bypass these restrictions since this package does not
// mutate the values.
vs := v
if !vs.CanInterface() || !vs.CanAddr() {
vs = unsafeReflectValue(vs)
}
if !UnsafeDisabled {
vs = vs.Slice(0, numEntries)
// Use the existing uint8 slice if it can be
// type asserted.
iface := vs.Interface()
if slice, ok := iface.([]uint8); ok {
buf = slice
doHexDump = true
break
}
}
// The underlying data needs to be converted if it can't
// be type asserted to a uint8 slice.
doConvert = true
}
// Copy and convert the underlying type if needed.
if doConvert && vt.ConvertibleTo(uint8Type) {
// Convert and copy each element into a uint8 byte
// slice.
buf = make([]uint8, numEntries)
for i := 0; i < numEntries; i++ {
vv := v.Index(i)
buf[i] = uint8(vv.Convert(uint8Type).Uint())
}
doHexDump = true
}
}
// Hexdump the entire slice as needed.
if doHexDump {
indent := strings.Repeat(d.cs.Indent, d.depth)
str := indent + hex.Dump(buf)
str = strings.Replace(str, "\n", "\n"+indent, -1)
str = strings.TrimRight(str, d.cs.Indent)
d.w.Write([]byte(str))
return
}
// Recursively call dump for each item.
for i := 0; i < numEntries; i++ {
d.dump(d.unpackValue(v.Index(i)))
if i < (numEntries - 1) {
d.w.Write(commaNewlineBytes)
} else {
d.w.Write(newlineBytes)
}
}
}
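The ConvertibleTo fallback means even a named byte type, whose slice cannot be type-asserted to []uint8, still gets the hexdump treatment. A sketch (myByte is a hypothetical type):

type myByte byte

data := []myByte{0x41, 0x42, 0x43}
spew.Dump(data) // elements convert to uint8 one by one, so the slice is
                // hexdumped rather than printed element by element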
// dump is the main workhorse for dumping a value. It uses the passed reflect
// value to figure out what kind of object we are dealing with and formats it
// appropriately. It is a recursive function, however circular data structures
// are detected and handled properly.
func (d *dumpState) dump(v reflect.Value) {
// Handle invalid reflect values immediately.
kind := v.Kind()
if kind == reflect.Invalid {
d.w.Write(invalidAngleBytes)
return
}
// Handle pointers specially.
if kind == reflect.Ptr {
d.indent()
d.dumpPtr(v)
return
}
// Print type information unless already handled elsewhere.
if !d.ignoreNextType {
d.indent()
d.w.Write(openParenBytes)
d.w.Write([]byte(v.Type().String()))
d.w.Write(closeParenBytes)
d.w.Write(spaceBytes)
}
d.ignoreNextType = false
// Display length and capacity if the built-in len and cap functions
// work with the value's kind and the len/cap itself is non-zero.
valueLen, valueCap := 0, 0
switch v.Kind() {
case reflect.Array, reflect.Slice, reflect.Chan:
valueLen, valueCap = v.Len(), v.Cap()
case reflect.Map, reflect.String:
valueLen = v.Len()
}
if valueLen != 0 || valueCap != 0 {
d.w.Write(openParenBytes)
if valueLen != 0 {
d.w.Write(lenEqualsBytes)
printInt(d.w, int64(valueLen), 10)
}
if valueCap != 0 {
if valueLen != 0 {
d.w.Write(spaceBytes)
}
d.w.Write(capEqualsBytes)
printInt(d.w, int64(valueCap), 10)
}
d.w.Write(closeParenBytes)
d.w.Write(spaceBytes)
}
// Call Stringer/error interfaces if they exist and the handle methods flag
// is enabled.
if !d.cs.DisableMethods {
if (kind != reflect.Invalid) && (kind != reflect.Interface) {
if handled := handleMethods(d.cs, d.w, v); handled {
return
}
}
}
switch kind {
case reflect.Invalid:
// Do nothing. We should never get here since invalid has already
// been handled above.
case reflect.Bool:
printBool(d.w, v.Bool())
case reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64, reflect.Int:
printInt(d.w, v.Int(), 10)
case reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64, reflect.Uint:
printUint(d.w, v.Uint(), 10)
case reflect.Float32:
printFloat(d.w, v.Float(), 32)
case reflect.Float64:
printFloat(d.w, v.Float(), 64)
case reflect.Complex64:
printComplex(d.w, v.Complex(), 32)
case reflect.Complex128:
printComplex(d.w, v.Complex(), 64)
case reflect.Slice:
if v.IsNil() {
d.w.Write(nilAngleBytes)
break
}
fallthrough
case reflect.Array:
d.w.Write(openBraceNewlineBytes)
d.depth++
if (d.cs.MaxDepth != 0) && (d.depth > d.cs.MaxDepth) {
d.indent()
d.w.Write(maxNewlineBytes)
} else {
d.dumpSlice(v)
}
d.depth--
d.indent()
d.w.Write(closeBraceBytes)
case reflect.String:
d.w.Write([]byte(strconv.Quote(v.String())))
case reflect.Interface:
// The only time we should get here is for nil interfaces due to
// unpackValue calls.
if v.IsNil() {
d.w.Write(nilAngleBytes)
}
case reflect.Ptr:
// Do nothing. We should never get here since pointers have already
// been handled above.
case reflect.Map:
// nil maps should be indicated as different from empty maps
if v.IsNil() {
d.w.Write(nilAngleBytes)
break
}
d.w.Write(openBraceNewlineBytes)
d.depth++
if (d.cs.MaxDepth != 0) && (d.depth > d.cs.MaxDepth) {
d.indent()
d.w.Write(maxNewlineBytes)
} else {
numEntries := v.Len()
keys := v.MapKeys()
if d.cs.SortKeys {
sortValues(keys, d.cs)
}
for i, key := range keys {
d.dump(d.unpackValue(key))
d.w.Write(colonSpaceBytes)
d.ignoreNextIndent = true
d.dump(d.unpackValue(v.MapIndex(key)))
if i < (numEntries - 1) {
d.w.Write(commaNewlineBytes)
} else {
d.w.Write(newlineBytes)
}
}
}
d.depth--
d.indent()
d.w.Write(closeBraceBytes)
case reflect.Struct:
d.w.Write(openBraceNewlineBytes)
d.depth++
if (d.cs.MaxDepth != 0) && (d.depth > d.cs.MaxDepth) {
d.indent()
d.w.Write(maxNewlineBytes)
} else {
vt := v.Type()
numFields := v.NumField()
for i := 0; i < numFields; i++ {
d.indent()
vtf := vt.Field(i)
d.w.Write([]byte(vtf.Name))
d.w.Write(colonSpaceBytes)
d.ignoreNextIndent = true
d.dump(d.unpackValue(v.Field(i)))
if i < (numFields - 1) {
d.w.Write(commaNewlineBytes)
} else {
d.w.Write(newlineBytes)
}
}
}
d.depth--
d.indent()
d.w.Write(closeBraceBytes)
case reflect.Uintptr:
printHexPtr(d.w, uintptr(v.Uint()))
case reflect.UnsafePointer, reflect.Chan, reflect.Func:
printHexPtr(d.w, v.Pointer())
// There were not any other types at the time this code was written, but
// fall back to letting the default fmt package handle it in case any new
// types are added.
default:
if v.CanInterface() {
fmt.Fprintf(d.w, "%v", v.Interface())
} else {
fmt.Fprintf(d.w, "%v", v.String())
}
}
}
// fdump is a helper function to consolidate the logic from the various public
// methods which take varying writers and config states.
func fdump(cs *ConfigState, w io.Writer, a ...interface{}) {
for _, arg := range a {
if arg == nil {
w.Write(interfaceBytes)
w.Write(spaceBytes)
w.Write(nilAngleBytes)
w.Write(newlineBytes)
continue
}
d := dumpState{w: w, cs: cs}
d.pointers = make(map[uintptr]int)
d.dump(reflect.ValueOf(arg))
d.w.Write(newlineBytes)
}
}
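Note the nil short-circuit: a nil argument never reaches reflect.ValueOf, which would otherwise yield an invalid Value. A sketch of the resulting output (the marker text comes from byte-slice variables declared elsewhere in the package, so the exact spelling is assumed):

spew.Dump(nil)
// prints a line like:
//   (interface {}) <nil>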
// Fdump formats and displays the passed arguments to io.Writer w. It formats
// exactly the same as Dump.
func Fdump(w io.Writer, a ...interface{}) {
fdump(&Config, w, a...)
}
// Sdump returns a string with the passed arguments formatted exactly the same
// as Dump.
func Sdump(a ...interface{}) string {
var buf bytes.Buffer
fdump(&Config, &buf, a...)
return buf.String()
}
/*
Dump displays the passed parameters to standard out with newlines, customizable
indentation, and additional debug information such as complete types and all
pointer addresses used to indirect to the final value. It provides the
following features over the built-in printing facilities provided by the fmt
package:
* Pointers are dereferenced and followed
* Circular data structures are detected and handled properly
* Custom Stringer/error interfaces are optionally invoked, including
on unexported types
* Custom types which only implement the Stringer/error interfaces via
a pointer receiver are optionally invoked when passing non-pointer
variables
* Byte arrays and slices are dumped like the hexdump -C command which
includes offsets, byte values in hex, and ASCII output
The configuration options are controlled by an exported package global,
spew.Config. See ConfigState for options documentation.
See Fdump if you would prefer dumping to an arbitrary io.Writer or Sdump to
get the formatted result as a string.
*/
func Dump(a ...interface{}) {
fdump(&Config, os.Stdout, a...)
}

419
vendor/github.com/davecgh/go-spew/spew/format.go generated vendored Normal file

@@ -0,0 +1,419 @@
/*
* Copyright (c) 2013 Dave Collins <dave@davec.name>
*
* Permission to use, copy, modify, and distribute this software for any
* purpose with or without fee is hereby granted, provided that the above
* copyright notice and this permission notice appear in all copies.
*
* THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
* WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
* MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
* ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
* WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
* ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
* OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
package spew
import (
"bytes"
"fmt"
"reflect"
"strconv"
"strings"
)
// supportedFlags is a list of all the character flags supported by fmt package.
const supportedFlags = "0-+# "
// formatState implements the fmt.Formatter interface and contains information
// about the state of a formatting operation. The NewFormatter function can
// be used to get a new Formatter which can be used directly as arguments
// in standard fmt package printing calls.
type formatState struct {
value interface{}
fs fmt.State
depth int
pointers map[uintptr]int
ignoreNextType bool
cs *ConfigState
}
// buildDefaultFormat recreates the original format string without precision
// and width information to pass in to fmt.Sprintf in the case of an
// unrecognized type. Unless new types are added to the language, this
// function won't ever be called.
func (f *formatState) buildDefaultFormat() (format string) {
buf := bytes.NewBuffer(percentBytes)
for _, flag := range supportedFlags {
if f.fs.Flag(int(flag)) {
buf.WriteRune(flag)
}
}
buf.WriteRune('v')
format = buf.String()
return format
}
// constructOrigFormat recreates the original format string including precision
// and width information to pass along to the standard fmt package. This allows
// automatic deferral of all format strings this package doesn't support.
func (f *formatState) constructOrigFormat(verb rune) (format string) {
buf := bytes.NewBuffer(percentBytes)
for _, flag := range supportedFlags {
if f.fs.Flag(int(flag)) {
buf.WriteRune(flag)
}
}
if width, ok := f.fs.Width(); ok {
buf.WriteString(strconv.Itoa(width))
}
if precision, ok := f.fs.Precision(); ok {
buf.Write(precisionBytes)
buf.WriteString(strconv.Itoa(precision))
}
buf.WriteRune(verb)
format = buf.String()
return format
}
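For example, a directive spew does not handle, such as %8.2f, reaches Format with verb 'f'; constructOrigFormat rebuilds "%8.2f" and the standard fmt package does the actual work. A sketch (assumes the usual spew import):

f := spew.NewFormatter(3.14159)
fmt.Printf("[%8.2f]\n", f) // deferred to fmt as "%8.2f" → "[    3.14]"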
// unpackValue returns values inside of non-nil interfaces when possible and
// ensures that types for values which have been unpacked from an interface
// are displayed when the show types flag is also set.
// This is useful for data types like structs, arrays, slices, and maps which
// can contain varying types packed inside an interface.
func (f *formatState) unpackValue(v reflect.Value) reflect.Value {
if v.Kind() == reflect.Interface {
f.ignoreNextType = false
if !v.IsNil() {
v = v.Elem()
}
}
return v
}
// formatPtr handles formatting of pointers by indirecting them as necessary.
func (f *formatState) formatPtr(v reflect.Value) {
// Display nil if top level pointer is nil.
showTypes := f.fs.Flag('#')
if v.IsNil() && (!showTypes || f.ignoreNextType) {
f.fs.Write(nilAngleBytes)
return
}
// Remove pointers at or below the current depth from the map used to detect
// circular refs.
for k, depth := range f.pointers {
if depth >= f.depth {
delete(f.pointers, k)
}
}
// Keep list of all dereferenced pointers to possibly show later.
pointerChain := make([]uintptr, 0)
// Figure out how many levels of indirection there are by dereferencing
// pointers and unpacking interfaces down the chain while detecting circular
// references.
nilFound := false
cycleFound := false
indirects := 0
ve := v
for ve.Kind() == reflect.Ptr {
if ve.IsNil() {
nilFound = true
break
}
indirects++
addr := ve.Pointer()
pointerChain = append(pointerChain, addr)
if pd, ok := f.pointers[addr]; ok && pd < f.depth {
cycleFound = true
indirects--
break
}
f.pointers[addr] = f.depth
ve = ve.Elem()
if ve.Kind() == reflect.Interface {
if ve.IsNil() {
nilFound = true
break
}
ve = ve.Elem()
}
}
// Display type or indirection level depending on flags.
if showTypes && !f.ignoreNextType {
f.fs.Write(openParenBytes)
f.fs.Write(bytes.Repeat(asteriskBytes, indirects))
f.fs.Write([]byte(ve.Type().String()))
f.fs.Write(closeParenBytes)
} else {
if nilFound || cycleFound {
indirects += strings.Count(ve.Type().String(), "*")
}
f.fs.Write(openAngleBytes)
f.fs.Write([]byte(strings.Repeat("*", indirects)))
f.fs.Write(closeAngleBytes)
}
// Display pointer information depending on flags.
if f.fs.Flag('+') && (len(pointerChain) > 0) {
f.fs.Write(openParenBytes)
for i, addr := range pointerChain {
if i > 0 {
f.fs.Write(pointerChainBytes)
}
printHexPtr(f.fs, addr)
}
f.fs.Write(closeParenBytes)
}
// Display dereferenced value.
switch {
case nilFound:
f.fs.Write(nilAngleBytes)
case cycleFound:
f.fs.Write(circularShortBytes)
default:
f.ignoreNextType = true
f.format(ve)
}
}
// format is the main workhorse for providing the Formatter interface. It
// uses the passed reflect value to figure out what kind of object we are
// dealing with and formats it appropriately. It is a recursive function,
// however circular data structures are detected and handled properly.
func (f *formatState) format(v reflect.Value) {
// Handle invalid reflect values immediately.
kind := v.Kind()
if kind == reflect.Invalid {
f.fs.Write(invalidAngleBytes)
return
}
// Handle pointers specially.
if kind == reflect.Ptr {
f.formatPtr(v)
return
}
// Print type information unless already handled elsewhere.
if !f.ignoreNextType && f.fs.Flag('#') {
f.fs.Write(openParenBytes)
f.fs.Write([]byte(v.Type().String()))
f.fs.Write(closeParenBytes)
}
f.ignoreNextType = false
// Call Stringer/error interfaces if they exist and the handle methods
// flag is enabled.
if !f.cs.DisableMethods {
if (kind != reflect.Invalid) && (kind != reflect.Interface) {
if handled := handleMethods(f.cs, f.fs, v); handled {
return
}
}
}
switch kind {
case reflect.Invalid:
// Do nothing. We should never get here since invalid has already
// been handled above.
case reflect.Bool:
printBool(f.fs, v.Bool())
case reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64, reflect.Int:
printInt(f.fs, v.Int(), 10)
case reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64, reflect.Uint:
printUint(f.fs, v.Uint(), 10)
case reflect.Float32:
printFloat(f.fs, v.Float(), 32)
case reflect.Float64:
printFloat(f.fs, v.Float(), 64)
case reflect.Complex64:
printComplex(f.fs, v.Complex(), 32)
case reflect.Complex128:
printComplex(f.fs, v.Complex(), 64)
case reflect.Slice:
if v.IsNil() {
f.fs.Write(nilAngleBytes)
break
}
fallthrough
case reflect.Array:
f.fs.Write(openBracketBytes)
f.depth++
if (f.cs.MaxDepth != 0) && (f.depth > f.cs.MaxDepth) {
f.fs.Write(maxShortBytes)
} else {
numEntries := v.Len()
for i := 0; i < numEntries; i++ {
if i > 0 {
f.fs.Write(spaceBytes)
}
f.ignoreNextType = true
f.format(f.unpackValue(v.Index(i)))
}
}
f.depth--
f.fs.Write(closeBracketBytes)
case reflect.String:
f.fs.Write([]byte(v.String()))
case reflect.Interface:
// The only time we should get here is for nil interfaces due to
// unpackValue calls.
if v.IsNil() {
f.fs.Write(nilAngleBytes)
}
case reflect.Ptr:
// Do nothing. We should never get here since pointers have already
// been handled above.
case reflect.Map:
// nil maps should be indicated as different from empty maps
if v.IsNil() {
f.fs.Write(nilAngleBytes)
break
}
f.fs.Write(openMapBytes)
f.depth++
if (f.cs.MaxDepth != 0) && (f.depth > f.cs.MaxDepth) {
f.fs.Write(maxShortBytes)
} else {
keys := v.MapKeys()
if f.cs.SortKeys {
sortValues(keys, f.cs)
}
for i, key := range keys {
if i > 0 {
f.fs.Write(spaceBytes)
}
f.ignoreNextType = true
f.format(f.unpackValue(key))
f.fs.Write(colonBytes)
f.ignoreNextType = true
f.format(f.unpackValue(v.MapIndex(key)))
}
}
f.depth--
f.fs.Write(closeMapBytes)
case reflect.Struct:
numFields := v.NumField()
f.fs.Write(openBraceBytes)
f.depth++
if (f.cs.MaxDepth != 0) && (f.depth > f.cs.MaxDepth) {
f.fs.Write(maxShortBytes)
} else {
vt := v.Type()
for i := 0; i < numFields; i++ {
if i > 0 {
f.fs.Write(spaceBytes)
}
vtf := vt.Field(i)
if f.fs.Flag('+') || f.fs.Flag('#') {
f.fs.Write([]byte(vtf.Name))
f.fs.Write(colonBytes)
}
f.format(f.unpackValue(v.Field(i)))
}
}
f.depth--
f.fs.Write(closeBraceBytes)
case reflect.Uintptr:
printHexPtr(f.fs, uintptr(v.Uint()))
case reflect.UnsafePointer, reflect.Chan, reflect.Func:
printHexPtr(f.fs, v.Pointer())
// There were not any other types at the time this code was written, but
// fall back to letting the default fmt package handle it if any get added.
default:
format := f.buildDefaultFormat()
if v.CanInterface() {
fmt.Fprintf(f.fs, format, v.Interface())
} else {
fmt.Fprintf(f.fs, format, v.String())
}
}
}
// Format satisfies the fmt.Formatter interface. See NewFormatter for usage
// details.
func (f *formatState) Format(fs fmt.State, verb rune) {
f.fs = fs
// Use standard formatting for verbs that are not v.
if verb != 'v' {
format := f.constructOrigFormat(verb)
fmt.Fprintf(fs, format, f.value)
return
}
if f.value == nil {
if fs.Flag('#') {
fs.Write(interfaceBytes)
}
fs.Write(nilAngleBytes)
return
}
f.format(reflect.ValueOf(f.value))
}
// newFormatter is a helper function to consolidate the logic from the various
// public methods which take varying config states.
func newFormatter(cs *ConfigState, v interface{}) fmt.Formatter {
fs := &formatState{value: v, cs: cs}
fs.pointers = make(map[uintptr]int)
return fs
}
/*
NewFormatter returns a custom formatter that satisfies the fmt.Formatter
interface. As a result, it integrates cleanly with standard fmt package
printing functions. The formatter is useful for inline printing of smaller data
types similar to the standard %v format specifier.
The custom formatter only responds to the %v (most compact), %+v (adds pointer
addresses), %#v (adds types), or %#+v (adds types and pointer addresses) verb
combinations. Any other verbs such as %x and %q will be sent to the
standard fmt package for formatting. In addition, the custom formatter ignores
the width and precision arguments (however they will still work on the format
specifiers not handled by the custom formatter).
Typically this function shouldn't be called directly. It is much easier to make
use of the custom formatter by calling one of the convenience functions such as
Printf, Println, or Fprintf.
*/
func NewFormatter(v interface{}) fmt.Formatter {
return newFormatter(&Config, v)
}
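Matching the sample formatter output in doc.go, a double pointer prints compactly under each supported verb. A sketch (assumes the usual spew import):

u := uint8(5)
pu := &u
fmt.Printf("%v\n", spew.NewFormatter(&pu))  // <**>5
fmt.Printf("%#v\n", spew.NewFormatter(&pu)) // (**uint8)5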

148
vendor/github.com/davecgh/go-spew/spew/spew.go generated vendored Normal file

@@ -0,0 +1,148 @@
/*
* Copyright (c) 2013 Dave Collins <dave@davec.name>
*
* Permission to use, copy, modify, and distribute this software for any
* purpose with or without fee is hereby granted, provided that the above
* copyright notice and this permission notice appear in all copies.
*
* THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
* WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
* MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
* ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
* WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
* ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
* OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
package spew
import (
"fmt"
"io"
)
// Errorf is a wrapper for fmt.Errorf that treats each argument as if it were
// passed with a default Formatter interface returned by NewFormatter. It
// returns the formatted string as a value that satisfies error. See
// NewFormatter for formatting details.
//
// This function is shorthand for the following syntax:
//
// fmt.Errorf(format, spew.NewFormatter(a), spew.NewFormatter(b))
func Errorf(format string, a ...interface{}) (err error) {
return fmt.Errorf(format, convertArgs(a)...)
}
// Fprint is a wrapper for fmt.Fprint that treats each argument as if it were
// passed with a default Formatter interface returned by NewFormatter. It
// returns the number of bytes written and any write error encountered. See
// NewFormatter for formatting details.
//
// This function is shorthand for the following syntax:
//
// fmt.Fprint(w, spew.NewFormatter(a), spew.NewFormatter(b))
func Fprint(w io.Writer, a ...interface{}) (n int, err error) {
return fmt.Fprint(w, convertArgs(a)...)
}
// Fprintf is a wrapper for fmt.Fprintf that treats each argument as if it were
// passed with a default Formatter interface returned by NewFormatter. It
// returns the number of bytes written and any write error encountered. See
// NewFormatter for formatting details.
//
// This function is shorthand for the following syntax:
//
// fmt.Fprintf(w, format, spew.NewFormatter(a), spew.NewFormatter(b))
func Fprintf(w io.Writer, format string, a ...interface{}) (n int, err error) {
return fmt.Fprintf(w, format, convertArgs(a)...)
}
// Fprintln is a wrapper for fmt.Fprintln that treats each argument as if it were
// passed with a default Formatter interface returned by NewFormatter. See
// NewFormatter for formatting details.
//
// This function is shorthand for the following syntax:
//
// fmt.Fprintln(w, spew.NewFormatter(a), spew.NewFormatter(b))
func Fprintln(w io.Writer, a ...interface{}) (n int, err error) {
return fmt.Fprintln(w, convertArgs(a)...)
}
// Print is a wrapper for fmt.Print that treats each argument as if it were
// passed with a default Formatter interface returned by NewFormatter. It
// returns the number of bytes written and any write error encountered. See
// NewFormatter for formatting details.
//
// This function is shorthand for the following syntax:
//
// fmt.Print(spew.NewFormatter(a), spew.NewFormatter(b))
func Print(a ...interface{}) (n int, err error) {
return fmt.Print(convertArgs(a)...)
}
// Printf is a wrapper for fmt.Printf that treats each argument as if it were
// passed with a default Formatter interface returned by NewFormatter. It
// returns the number of bytes written and any write error encountered. See
// NewFormatter for formatting details.
//
// This function is shorthand for the following syntax:
//
// fmt.Printf(format, spew.NewFormatter(a), spew.NewFormatter(b))
func Printf(format string, a ...interface{}) (n int, err error) {
return fmt.Printf(format, convertArgs(a)...)
}
// Println is a wrapper for fmt.Println that treats each argument as if it were
// passed with a default Formatter interface returned by NewFormatter. It
// returns the number of bytes written and any write error encountered. See
// NewFormatter for formatting details.
//
// This function is shorthand for the following syntax:
//
// fmt.Println(spew.NewFormatter(a), spew.NewFormatter(b))
func Println(a ...interface{}) (n int, err error) {
return fmt.Println(convertArgs(a)...)
}
// Sprint is a wrapper for fmt.Sprint that treats each argument as if it were
// passed with a default Formatter interface returned by NewFormatter. It
// returns the resulting string. See NewFormatter for formatting details.
//
// This function is shorthand for the following syntax:
//
// fmt.Sprint(spew.NewFormatter(a), spew.NewFormatter(b))
func Sprint(a ...interface{}) string {
return fmt.Sprint(convertArgs(a)...)
}
// Sprintf is a wrapper for fmt.Sprintf that treats each argument as if it were
// passed with a default Formatter interface returned by NewFormatter. It
// returns the resulting string. See NewFormatter for formatting details.
//
// This function is shorthand for the following syntax:
//
// fmt.Sprintf(format, spew.NewFormatter(a), spew.NewFormatter(b))
func Sprintf(format string, a ...interface{}) string {
return fmt.Sprintf(format, convertArgs(a)...)
}
// Sprintln is a wrapper for fmt.Sprintln that treats each argument as if it
// were passed with a default Formatter interface returned by NewFormatter. It
// returns the resulting string. See NewFormatter for formatting details.
//
// This function is shorthand for the following syntax:
//
// fmt.Sprintln(spew.NewFormatter(a), spew.NewFormatter(b))
func Sprintln(a ...interface{}) string {
return fmt.Sprintln(convertArgs(a)...)
}
// convertArgs accepts a slice of arguments and returns a slice of the same
// length with each argument converted to a default spew Formatter interface.
func convertArgs(args []interface{}) (formatters []interface{}) {
formatters = make([]interface{}, len(args))
for index, arg := range args {
formatters[index] = NewFormatter(arg)
}
return formatters
}
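The wrappers above are therefore strict shorthand; the two calls below are equivalent (a sketch; myVar is hypothetical):

spew.Printf("%+v\n", myVar)
fmt.Printf("%+v\n", spew.NewFormatter(myVar)) // identical output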

Some files were not shown because too many files have changed in this diff.