Do repository synchronization tests using k8s.gcr.io/pause and
k8s.gcr.io/coredns/coredns instead of docker.io/alpine.
The k8s.gcr.io/pause repository includes multiple tags: at least one
is a single image ("1.0"), at least one is a manifest
list ("3.2", "3.3"), and there is a "latest" tag.
The k8s.gcr.io/coredns/coredns repository includes multiple tags, at
least one of which is a single image ("v1.6.6"), and at least one of which
is a manifest list ("v1.8.0").
Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>
Switch most of our tests that exercise reading, copying from, and
inspecting tags that point to manifest lists from using
docker.io/estesp/busybox to using
registry.fedoraproject.org/fedora-minimal, which doesn't limit how often
we can pull the images.
Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>
We are preparing for the RHEL 8.4 release and want to make
sure all container tools have the same containers subpackages.
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
Pass down the creds from the yaml file only if the values are not empty.
This makes it possible to use credentials from other authfiles instead.
Signed-off-by: Qi Wang <qiwan@redhat.com>
Homebrew:
> Warning: You are using macOS 10.13.
> We (and Apple) do not provide support for this old version.
> You will encounter build failures with some formulae.
So, update to the 10.14 major version, fully-updated.
Also remove the Xcode update attempt: it was added before
Homebrew started warning about the Xcode version, but updating only
after running Homebrew does not help, and anyway Homebrew does not
complain anymore after the update.
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
Also correct the information about the env variables required
for the multi-arch build.
Signed-off-by: Yulia Gaponenko <yulia.gaponenko1@de.ibm.com>
Service accounts (a.k.a. robots) in `quay.io` are forcibly namespaced
to the user or organization under which they are created. Therefore,
it is impossible to use a common login/password to push images for
both `skopeo` and `containers` namespaces. Worse, because the
authentication is recorded against `quay.io`, multiple login sessions
are required.
Fix this by adding a function definition which verifies non-empty
username/password arguments, before logging in. Call this function
as needed from relevant targets, prior to pushing images.
Signed-off-by: Chris Evich <cevich@redhat.com>
This replicates the --all flag from copy in sync, so that sync performs the same
behavior. Namely, the default is CopySystemImage unless --all is passed,
which changes the behavior to CopyAllImages. While it is probably
desirable for --all to be the default, as there is no option to override
one's architecture with the sync command, --all can potentially break
existing sync incantations depending on registry support. Hence
CopySystemImage remains the default.
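A usage sketch, assuming the present-day sync CLI shape (--src/--dest transport flags); registry names are illustrative:
> # default: copy only the instance matching the local OS/architecture
> skopeo sync --src docker --dest docker registry.example.com/busybox registry.local.lan/mirror
> # with --all: copy every instance of any manifest list encountered
> skopeo sync --src docker --dest docker --all registry.example.com/busybox registry.local.lan/mirror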
Signed-off-by: Andrew DeMaria <ademaria@cloudflare.com>
Travis is used, as it has native hardware to run the build for many
architectures (amd64, s390x, ppc64le). Docker is used as build and
manifest tool. `quay.io/skopeo/upstream:master`, `quay.io/skopeo/stable:v1.2.0`
and `quay.io/containers/skopeo:v1.2.0` are specified as target multi-arch
upstream image.
Travis config file has 3 stages:
- local-build to do the local test for linux/amd64 and osx, as it was in
the initial code
- image-build-push to build and push images for specific architectures
(amd64, s390x, ppc64le)
- manifest-multiarch-push to create and push manifest for multi-arch
image - `quay.io/skopeo/upstream:master`, `quay.io/skopeo/stable:v1.2.0`
and `quay.io/containers/skopeo:v1.2.0`
The last stage and the image push step are not run for pull requests.
2 env variables specified in Travis settings are expected - QUAY_USERNAME and
QUAY_PASSWORD to push the images to quay.io.
As a result multi-arch images for 3 architectures are published.
A README about the build setup is prepared.
Signed-off-by: Yulia Gaponenko <yulia.gaponenko1@de.ibm.com>
For yet-unknown reasons, Travis throws permission errors when trying to
recursively list the contents of a temp directory. It passes locally,
so disable the logs to unblock CI.
Related-issue: https://github.com/containers/skopeo/issues/1093
Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
We now use the osusergo build tag so that we do not use the glibc
functions which appear in the warnings, but Go's os/user package instead.
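For reference, a hedged one-liner showing the build tag in use (the ./cmd/skopeo package path is an assumption):
> # build with the pure-Go os/user lookup code instead of the glibc one
> go build -tags osusergo -o bin/skopeo ./cmd/skopeo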
Signed-off-by: Sascha Grunert <sgrunert@suse.com>
By default we should build bin/skopeo locally
and build docs locally.
Show output when doing make docs.
Add description in `make help` to explain default
behaviour.
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
Go 1.15 deprecates checking CN; this broke gating tests:
Get "https://localhost:5000/v2/": x509: certificate relies on legacy Common Name field, use SANs or temporarily enable Common Name matching with GODEBUG=x509ignoreCN=0
Easy two-line solution in the 'openssl' invocation. Huge
thanks to Nalin for tracking down and fixing while I was
still getting started:
https://github.com/containers/buildah/pull/2595
Copied from 0f2892a5b021de3b1cf273f5679fda8298b57c02 in buildah
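For context, a sketch of the kind of openssl change involved — adding a subjectAltName so Go 1.15 no longer depends on the legacy CN; the exact arguments here are illustrative, not necessarily the ones used in the test suite:
> openssl req -newkey rsa:4096 -nodes -sha256 \
>     -keyout certs/domain.key -x509 -days 2 -out certs/domain.crt \
>     -subj "/CN=localhost" \
>     -addext "subjectAltName = DNS:localhost"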
Signed-off-by: Ed Santiago <santiago@redhat.com>
Add a make target that cross-compiles for a handful of the possible
targets that `go tool dist list` can tell us about.
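A rough sketch of the idea; the ./cmd/skopeo path and the no-cgo build-tag set are assumptions, and the real target derives its platform list from `go tool dist list`:
> mkdir -p bin
> for platform in linux/amd64 linux/s390x linux/ppc64le windows/amd64; do
>     os=${platform%/*} arch=${platform#*/}
>     # cross-builds disable cgo, so the cgo-only features need to be tagged out
>     CGO_ENABLED=0 GOOS=$os GOARCH=$arch \
>         go build -tags "containers_image_openpgp exclude_graphdriver_btrfs exclude_graphdriver_devicemapper" \
>         -o "bin/skopeo.$os.$arch" ./cmd/skopeo
> done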
Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>
... which no longer works after #932.
This does not add documentation for the current static build approach,
nor does it add any other place where DISABLE_CGO is documented;
both are not tested by CI, and discouraged due to bad integration
with the rest of the system.
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
because that's what users are looking for, instead of using
a containers-storage: source, which might not even work all that
well with all the automatic defaults Podman sets up.
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
Enables retrying skopeo inspect. Add `--retry-times` to set the number of times to retry. Use exponential backoff, with 1s as the default initial retry delay.
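For example (image name illustrative, reusing one referenced elsewhere in these notes):
> # retry up to 3 times, starting at a 1s delay and backing off exponentially
> skopeo inspect --retry-times 3 docker://registry.fedoraproject.org/fedora-minimal:latest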
Signed-off-by: Qi Wang <qiwan@redhat.com>
... which are currently failing with
> Error: The `brew link` step did not complete successfully
> The formula built, but is not symlinked into /usr/local
> Could not symlink Frameworks/Python.framework/Headers
> Target /usr/local/Frameworks/Python.framework/Headers
> is a symlink belonging to python@2. You can unlink it:
> brew unlink python@2
because the Travis-installed machine apparently has quite a few
Homebrew formulae installed, with an old version of Homebrew,
including a now-removed python@2, and that prevents updates of
python@3.
Remove the obsolete motivation for running (brew update), and replace it
with a similarly-good motivation that the Travis images are just too old
to be relevant to users.
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
registry:2 no longer contains htpasswd.
Also don't use log_and_run ... >> $file
because that will cause the command to be logged to $file.
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
Add the `-y` option to yum clean all
Only delete below /var/cache/dnf so that I can use the
-v /var/cache/dnf:/var/cache/dnf:O option when building
to speed up builds.
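A sketch of the intended usage (image tag illustrative); the `:O` suffix mounts the host cache as an overlay, so the build can reuse it without modifying it:
> podman build -v /var/cache/dnf:/var/cache/dnf:O -t skopeo-build .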
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
Say that the regex is the cause, include it in the error message,
and don't continue as if the compilation succeeded.
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
... instead of manually parsing strings.
Should not change behavior, except maybe error messages if the
registry returns invalid tags.
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
This removes another string formatting use, and removes the
last recently introduced docker.Reference->reference.Named
redundancy.
Should not change behavior, apart from error messages.
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
Right now that only complicates code by going through a
types.ImageReference->reference.Named->types.ImageReference sequence,
but that will change as we modify the callers as well.
Should not change behavior.
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
It is redundant, only used to form a tagged reference,
which can be done more safely using reference.WithTag.
Also move the *types.SystemContext parameter to the front,
as is usual.
Should not change behavior, apart from a few error messages.
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
- Improve the language
- Be consistent with the previous example about a trailing slash
- Don't unnecessarily quote :, it is not a shell metacharacter.
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
Fields that magically change their behavior depending on the type of the value
are too much hassle for no benefit.
For now, this just copies&pastes the full loop in imagesToCopyFromRegistry
to create another loop handling the new ImagesByTagRegex field. Simplifications
to reduce duplication will follow.
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
This PR adds the Dockerfiles necessary to create the upstream
and testing variants of the Skopeo container images that will
reside in quay.io/skopeo/upstream and quay.io/skopeo/testing
repositories. The only difference in the Dockerfile between
the stable and testing image is that the option `--enablerepo updates-testing`
was added. The testing variant is relatively the same, but
I had to clone and install Skopeo in the container.
I've also added a README.md which explains all of the varieties
of images and includes some sample usage.
Signed-off-by: TomSweeneyRedHat <tsweeney@redhat.com>
Adds the Dockerfile for building the Skopeo container
image on quay.io. Once merged, this image will be
built automatically upon any merge into the master
branch. The images will live at:
quay.io/containers/skopeo:latest
quay.io/skopeo/stable:latest
I've built an image using this Dockerfile and have pushed
it to both repositories if you want to play with that.
Once merged, I'll create similar Dockerfiles for
quay.io/skopeo/testing and quay.io/skopeo/upstream.
Signed-off-by: TomSweeneyRedHat <tsweeney@redhat.com>
Only bumps the version number after the recent vendoring
from master, but Dependabot seems to be confused by that;
so, update to the final release to hopefully un-confuse it.
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
This is an unreleased version of c/image, but it is important to
have the test added in the next commit enforcing this as soon as
possible.
> go get github.com/containers/image/v5@HEAD
> make vendor
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
Using cobra in skopeo can help share code with podman/buildah (code for skopeo login/logout CLI).
(libpod issue #839)
Signed-off-by: Qi Wang <qiwan@redhat.com>
'podman info' changed format, again, without preserving backward
compatibility. Basically, some keys that used to be lower-case
are now upper-case-first-letter.
These tests need to work with new podman on rawhide, and
old podman on f31/f32 and possibly RHEL. We must therefore
add a revolting workaround for the change.
Signed-off-by: Ed Santiago <santiago@redhat.com>
This is conceptually consistent: First change the set of
dependencies, then update the vendored copy.
(Due to (go mod verify) afterwards, and CI running this again,
this should not make a difference in practice, so this is just
a clean-up.)
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
(export a=b command args) does not run (command args) with a=b;
it sets $a to b, and marks the variables $a, $command and $args as exported,
i.e. (command args) is not run.
So, before https://github.com/containers/skopeo/pull/888 we were not actually
running (go mod tidy), and now we are not running (go mod vendor).
Just use $(GO), which already sets GO111MODULE=on, without the extra export.
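To illustrate the difference, using GO111MODULE and go mod tidy from the text above:
> # does NOT run "go mod tidy"; it only marks GO111MODULE (and variables
> # named "go", "mod" and "tidy") as exported:
> export GO111MODULE=on go mod tidy
> # runs the command with the variable set for that invocation:
> GO111MODULE=on go mod tidy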
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
PR 834 broke Fedora gating tests, because "--runtime runc"
doesn't work so well on Rawhide. Let's try to be smarter
about when we add that override.
Signed-off-by: Ed Santiago <santiago@redhat.com>
We currently need it to drag in recent versions of other dependencies,
per https://github.com/containers/skopeo/issues/796 .
I'll work to update the relevant dependencies in c/image, but that will
only propagate to skopeo in the next c/image release; in the meantime,
this at least undoes the downgrades.
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
Replace shortnames with FQINs; this should allow tests to
run regardless of the state of registries.conf.
And, fix one broken new test that invoked 'jq' (without dot).
This usage works in Fedora, but not in RHEL.
Signed-off-by: Ed Santiago <santiago@redhat.com>
crun had a regression running on cgroupsv1 in containers. It has been
fixed upstream but did not yet bubble up into the packages. Force using
runc to unblock Skopeo's CI.
Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
Fix cli to use REGISTRY_AUTH_FILE if set and to display the
default location to use for authfiles in the `skopeo copy --help`
Modify tests to verify the different settings.
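For example (path and image name illustrative):
> # honored when --authfile is not given explicitly
> REGISTRY_AUTH_FILE=/tmp/auth.json skopeo copy docker://busybox:latest dir:busybox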
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
* Bump github.com/containers/image/v5 from 5.2.0 to 5.2.1
* Bump gopkg.in/yaml.v2 from 2.2.7 to 2.2.8
* Bump github.com/containers/common from 0.0.7 to 0.1.4
* Remove the reference to openshift/api
* vendor github.com/containers/image/v5@v5.2.0
* Manually update buildah to v1.13.1
* add specific authfile options to copy (and sync) command.
* Bump github.com/containers/buildah from 1.11.6 to 1.12.0
* Add context to --encryption-key / --decryption-key processing failures
* Bump github.com/containers/storage from 1.15.2 to 1.15.3
* Bump github.com/containers/buildah from 1.11.5 to 1.11.6
* remove direct reference on c/image/storage
* Makefile: set GOBIN
* Bump gopkg.in/yaml.v2 from 2.2.2 to 2.2.7
* Bump github.com/containers/storage from 1.15.1 to 1.15.2
* Introduce the sync command
* openshift cluster: remove .docker directory on teardown
* Bump github.com/containers/storage from 1.14.0 to 1.15.1
* document installation via apk on alpine
* Fix typos in doc for image encryption
* Image encryption/decryption support in skopeo
* make vendor-in-container
* Bump github.com/containers/buildah from 1.11.4 to 1.11.5
* Travis: use go v1.13
* Use a Windows Nano Server image instead of Server Core for multi-arch testing
* Increase test timeout to 15 minutes
* Run the test-system container without --net=host
* Mount /run/systemd/journal/socket into test-system containers
* Don't unnecessarily filter out vendor from (go list ./...) output
* Use -mod=vendor in (go {list,test,vet})
* Bump github.com/containers/buildah from 1.8.4 to 1.11.4
* Bump github.com/urfave/cli from 1.20.0 to 1.22.1
* skopeo: drop support for ostree
* Don't critically fail on a 403 when listing tags
* Revert "Temporarily work around auth.json location confusion"
* Remove references to atomic
* Remove references to storage.conf
* Dockerfile: use golang-github-cpuguy83-go-md2man
* bump version to v0.1.41-dev
* systemtest: inspect container image different from current platform arch
Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
The referenced tag has been removed, which breaks dependabot (#791).
This is another attempt to fix it, by removing an explicit reference
(which was added when updating Buildah, because the version seemed newer than
Buildah's v0.0.0 with a newer commit).
The referenced package is never even physically vendored in here, so remove the
reference:
> go mod edit -droprequire=github.com/openshift/api
> make vendor
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
With additional prefixed flags for authfiles, it is possible to override the shared authfile flag to use different authfiles for src and dest registries. This is an important feature if the two registries have the same domain (but different paths) and require separate credentials.
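A hedged example, assuming the prefixed flags are spelled --src-authfile and --dest-authfile (paths and repositories are illustrative):
> skopeo copy \
>     --src-authfile ~/.config/team-a-auth.json \
>     --dest-authfile ~/.config/team-b-auth.json \
>     docker://registry.example.com/team-a/app:latest \
>     docker://registry.example.com/team-b/app:latest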
Closes #773.
Signed-off-by: Daniel Strobusch <1847260+dastrobu@users.noreply.github.com>
Remove a direct reference on c/image/v5/storage which breaks the build
when using the `containers_image_storage_stub`. The reference is only
used to get the storage transport string, which is now hard-coded; this
is fine as the transport will not change for backwards compat.
Fixes: #771
Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
The skopeo sync command can sync images between a SOURCE and a
destination.
The purpose of this command is to assist with the mirroring of
container images from different docker registries to a single
docker registry.
Right now the following source/destination locations are implemented:
* docker -> docker
* docker -> dir
* dir -> docker
The dir location is supported to handle the use case
of air-gapped environments.
In this context users can perform an initial sync on a trusted machine
connected to the internet; that would be a `docker` -> `dir` sync.
The target directory can be copied to a removable drive that can then be
plugged into a node of the air-gapped environment. From there a
`dir` -> `docker` sync will import all the images into the registry serving
the air-gapped environment.
Notes when specifying the `--scoped` option:
The image namespace is changed during the `docker` to `docker` or `dir` copy.
The FQDN of the registry hosting the image will be added as new root namespace
of the image. For example, the image `registry.example.com/busybox:latest`
will be copied to
`registry.local.lan/registry.example.com/busybox:latest`.
The image namespace is not changed when doing a
`dir:` -> `docker` sync operation.
The alteration of the image namespace is used to nicely scope images
coming from different registries (the Docker Hub, quay.io, gcr,
other registries). That allows all of them to be hosted on the same
registry without clashes, while making their origin explicit.
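A sketch of the air-gapped workflow described above, using today's sync syntax (which may differ in detail from this initial version); paths and registry names are illustrative:
> # on the connected machine: registry -> removable drive
> skopeo sync --src docker --dest dir --scoped registry.example.com/busybox /media/usb
> # inside the air-gapped environment: removable drive -> local registry
> skopeo sync --src dir --dest docker /media/usb registry.local.lan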
Signed-off-by: Flavio Castelli <fcastelli@suse.com>
Co-authored-by: Marco Vedovati <mvedovati@suse.com>
Remove the $HOME/.docker directory when tearing down a cluster,
so that subsequent cluster creations can be carried out successfully.
Signed-off-by: Marco Vedovati <mvedovati@suse.com>
Add a vendor-in-container make target to allow for executing make vendor
in a golang:1.13 container. The CI is currently enforcing golang 1.13
which has a different vendoring behavior than previous versions which
can lead to failing tests as some files might be added or deleted. The
new make target will help users who are not using 1.13 to vendor their
changes.
Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
This image is about 100 MB instead of about 2 GB for the Server Core,
decreasing disk requirements and hopefully significantly speeding up
integration tests.
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
Experimentally, this seems to help with localhost access inside that
container (but I have no idea what's the reason for that).
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
The nested podman tries to write to it. This primarily only
removes noise from logs, it does not seem to significantly change
behavior.
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
This allows using the vendored dependencies instead of
searching for them in $GOPATH and elsewhere.
This does not necessarily matter for skopeo itself, but
the test-skopeo Makefile target in containers/image uses
(go mod edit -replace) to replace the vendored c/image with
a locally-edited copy; skopeo's (make check) then runs tests in
a container which does not have access to this locally-edited
copy, and since Go 1.13 this causes (go {list,test,vet})
to fail if -mod=vendor is not used.
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
Move signature yaml file to point at /var/lib/containers/sigstore.
Change skopeo-copy.1 to use containers-storage and docker transports
rather than atomic.
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
These are getting out of date and should be left in containers/storage.
If packagers need it then they should get it from that repo.
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
When --raw is provided, inspect can show the raw manifest list without
requiring any particular platform to be present. This test case is
used to make sure the inspect command with the --raw option works well when the
container image's architecture differs from the current platform's.
Signed-off-by: Alex Jia <chuanchang.jia@gmail.com>
Add a --all/-a flag to instruct us to attempt to copy all of the
instances in the source image, if the source image specified to "skopeo
copy" is actually a list of images. Previously, we'd just try to locate
one for our preferred OS/arch combination.
Add a couple of tests to verify that we can copy an image into and then
back out of containers-storage. The contents of an image that has been
copied out of containers-storage need a bit of tweaking to compensate
for containers-storage's habit of returning uncompressed versions of the
layer blobs that were originally written to it, in order to be
comparable to the image as it was when it was pulled from a registry.
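For example (destination repository illustrative):
> # copy every instance in the manifest list, not just the local OS/arch match
> skopeo copy --all docker://registry.fedoraproject.org/fedora-minimal:latest \
>     docker://registry.example.com/mirror/fedora-minimal:latest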
Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>
- zstd test - give unique name.
a36d81c copy/pasted an existing test but didn't give
the new test a new name, leading to bats warning:
duplicate test name(s) in [...]/020-copy.bats
- start_registry() - use bash builtins, not curl, to test
if registry port is open.
curl on Fedora now barfs with "Received HTTP/0.9 when not
allowed" when the registry is run with SSL, because the
response is not valid HTTP. One workaround would be 'curl
--http0.9' but (surprise) that option doesn't exist on rhel8;
and even with that option we would need --output /dev/null
to silence a different curl warning. Curl is overkill
for this purpose anyway, all we really need is netcat
or some simple binary is-port-listening-or-not test.
Fortunately, bash provides a /dev/tcp/<host>/<port>
emulator that does the right thing and works on Fedora
as well as RHEL8 (see the sketch after this list).
- new log_and_run() helper
This is the noisiest yet least critical part of this PR.
I'm sorry. It's motivated by my frustration in trying
to reproduce the curl problem above: getting just the
right incantation of openssl + podman-run cost me time.
With this enhancement, important commands are logged
as part of the output of failing tests, making it
easy[*] for maintenance programmers to figure out a
recipe for reproducing the failure.
[*] "easy" as long as the test-writing developer
uses log_and_run() wisely.
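A minimal sketch of the bash-builtin port check mentioned above (the helper name, host and port are illustrative):
> wait_for_port() {
>     local host=$1 port=$2
>     # try for up to 10 seconds; a subshell failing to open /dev/tcp/<host>/<port>
>     # just returns non-zero instead of killing the test
>     for _ in $(seq 1 10); do
>         (exec 3<>"/dev/tcp/$host/$port") 2>/dev/null && return 0
>         sleep 1
>     done
>     return 1
> }
> wait_for_port 127.0.0.1 5000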
Signed-off-by: Ed Santiago <santiago@redhat.com>
Add a systemtest copying an image from docker to storage and then to an
oci-archive. There are other ways to trigger the same code paths, but
this one has caught a regression in c/image in libpod.
Fixes: #734
Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
The "inspect: env" test started failing since the environment in the
`fedora:latest` image has changed. Hence, only check for `PATH` in
the image's environment, which is a de facto standard.
Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
Skip the NewImage() step if we're just inspecting the raw manifest, so
that if the tag or digest being inspected resolves to a manifest list,
the local arch/OS combination doesn't need to be found in it to avoid an
error.
Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>
This mainly pulls in the latest support for zstd-compressed layers and
eases testing of containers/image.
Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
PR #700 replaced the ostree build tag with containers_image_ostree.
However, specifying the ostree build tag is still needed by the
containers/storage vfs driver.
Signed-off-by: Marco Vedovati <mvedovati@suse.com>
Starting from 9b902d0, the ostree transport is disabled by default,
and ostree is enabled with the tag containers_image_ostree.
Signed-off-by: Marco Vedovati <mvedovati@suse.com>
* vendor github.com/containers/image@v3.0.0
* enforce blocking of registries
* Fix lowest possible go version to be 1.9
* man pages: add --dest-oci-accept-uncompressed-layers
* bash completion: add --dest-oci-accept-uncompressed-layers
* README.md: skopeo on openSUSE
* copy: add a CLI flag for OCIAcceptUncompressedLayers
* migrate to go modules
* README: Clarify use of `libbtrfs-dev` on Ubuntu
Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
Go 1.13.x isn't sensitive to the `GO111MODULE` environment variable,
causing `make binary-local` not to use the vendored sources in
`./vendor`. Force builds of module-supporting go versions to use the
vendored sources by setting `-mod=vendor`.
Verified in a `fedora:rawhide` container.
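A minimal sketch of the forced vendor mode (main-package path assumed):
> # with module-aware Go, force use of ./vendor instead of downloading modules
> go build -mod=vendor -o bin/skopeo ./cmd/skopeo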
Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
Vendor in the latest c/image to enforce blocking of registries when
creating a c/image/docker.dockerClient. Add integration tests to
avoid regressions.
Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
* progress bar: use spinners for unknown blob sizes
* improve README.md and the review of the changes
* use 'containers_image_ostree' as build tag
* ostree: default is no OStree support
* Add "Env" to ImageInspectInfo
* config.go: improve debug message
* config.go: log where credentials come from
Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
containers/storage needs math/bits which has been added in go 1.9, so
this is now the lowest possible go version to build skopeo. We can also
remove the GO15VENDOREXPERIMENT variable since this has been enabled by
default in go 1.6 and removed in go 1.7.
Signed-off-by: Sascha Grunert <sgrunert@suse.com>
Turn off go modules to avoid having build environments accidentally
try to pull the dependencies instead of using the ./vendor directory.
Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
Adds simple documentation on how to install skopeo and its build dependencies
on an openSUSE distribution.
Signed-off-by: José Guilherme Vanz <jguilhermevanz@suse.com>
Don't get tricked by the v1.5.2-0.20190620105408-93b1deece293 reference
in the go.mod file. The upper commit is *after* v2.0.0 and go simply
has a bug in dealing with git tags.
Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
There are cases where we want to pass this flag to the actual copy engine,
so let's add a CLI flag for it.
Signed-off-by: Tycho Andersen <tycho@tycho.ws>
Using `go get` with go modules has side-effects that we can avoid by
installing golint from the Fedora repositories.
Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
This adds the mirror-by-digest-only option to mirrors, and moves the search
order to an independent list.
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
This does not happen in this repo's tests, but containers/image's
(make test-skopeo) fails in the containers_image_openpgp configuration with
> not ok 10 signing
> ...
> # time="2019-06-11T20:59:32Z" level=fatal msg="Signing not supported: signing is not supported in github.com/containers/image built with the containers_image_openpgp build tag"
To reproduce/test this:
> make test-system BUILDTAGS='ostree containers_image_openpgp'
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
The usual 'podman run -d' race condition: we've been forking
off the container but not actually making sure it's up; this
leads to flakes in which we try (and fail) to access it.
Solution: use curl to check the port; we will expect a zero
exit status once we can connect. Time out at ten seconds.
Resolves: #675
Signed-off-by: Ed Santiago <santiago@redhat.com>
Since GPG 2.1, GPG asks for a passphrase by default; opt out when
generating test keys to avoid
> gpg: agent_genkey failed: No pinentry
> gpg: key generation failed: No pinentry
which happens otherwise (and we can't use an interactive pinentry
in a batch process anyway).
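For reference, a sketch of unattended key generation without a passphrase; the key parameters are illustrative, and %no-protection is the GnuPG >= 2.1 batch directive that opts out of the pinentry prompt:
> gpg --batch --gen-key <<'EOF'
> %no-protection
> Key-Type: RSA
> Key-Length: 2048
> Name-Real: skopeo test key
> Name-Email: skopeo-test@example.com
> %commit
> EOF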
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
Skopeo CI tests run under podman; hence the registries
run in the tests will be podman-in-podman. This requires
complex muckery to make work:
- install bats, jq, and podman in the test image
- add new test-system Make target. It runs podman
with /var/lib/containers bind-mounted to a tmpdir
and with other necessary options; and invokes a
test script that hack-edits /etc/containers/storage.conf
before running podman for the first time.
- add --cgroup-manager=cgroupfs option to podman
invocations in BATS: without this, podman-in-podman
fails with:
systemd cgroup flag passed, but systemd support for managing cgroups is not available
Also: gpg --pinentry-mode option is not available on all
our test platforms. Check for it before using.
Signed-off-by: Ed Santiago <santiago@redhat.com>
- Got TLS registry working, and test enabled. The trick was to
copy the .crt file to a separate directory *without* the .key
- auth test - set up a private XDG_RUNTIME_DIR, in case tests
are being run by a real user.
- signing test - remove FIXME comments; questions answered.
- helpers.bash - document start_registries(); save a .crt file,
not .cert; and remove unused stop_registries() - it's too hard
to do right, and very easy for individual tests to 'podman rm -f'
- run-tests - remove SKOPEO_BINARY definition, it's inconsistent
with the one in helpers.bash
Signed-off-by: Ed Santiago <santiago@redhat.com>
We need to verify that the user entered a valid transport before attempting
to see if the transport exists, otherwise skopeo segfaults.
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
This change fixes skopeo usage in restricted environments such as
bubblewrap where it doesn't need extra capabilities or user namespace
to perform its action.
Closes #649
Signed-off-by: Tristan Cacqueray <tdecacqu@redhat.com>
Add a --config option to "skopeo inspect" to dump an image's
configuration blob in the OCI format, or the original format
if --config and --raw are specified.
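For example (image name illustrative):
> # configuration blob converted to the OCI format
> skopeo inspect --config docker://busybox:latest
> # configuration blob in its original format
> skopeo inspect --config --raw docker://busybox:latest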
Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>
Follows PR #433. Closes #421.
Currently skopeo inspect allows one to:
Use the default credentials in $HOME/.docker.config
Explicitly define credentials via the --creds flag
This implements a --no-creds flag which will query docker registries anonymously.
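For example (image name illustrative):
> # query the registry anonymously, ignoring any stored credentials
> skopeo inspect --no-creds docker://busybox:latest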
Signed-off-by: Qi Wang <qiwan@redhat.com>
overlay: propagate errors from mountProgram
utils: root in a userns uses global conf file
Fix handling of additional stores
Correctly check permissions on rootless directory
Fix possible integer overflow on 32bit builds
Evaluate device path for lvm
lockfile test: make concurrent RW test deterministic
lockfile test: make concurrent read tests deterministic
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
This commit contains the necessary split-up between buildah/pkg and
buildah/util to avoid dependency breaks.
Signed-off-by: Sascha Grunert <sgrunert@suse.com>
This commit simply bumps containers/storage to the latest version to
unblock the containers/image integration test runs.
Signed-off-by: Sascha Grunert <sgrunert@suse.com>
Currently we are only installing the skopeo.1 man page. This
change will generate and install all man pages.
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
Since both tabs and spaces were used for indentation, and some tabs were
expected to be 4 spaces wide and others 8 spaces wide,
use only spaces for indentation.
Signed-off-by: ERAMOTO Masaya <eramoto.masaya@jp.fujitsu.com>
After a global option was specified, the following string for global
options, commands, and command options was not completed.
Signed-off-by: ERAMOTO Masaya <eramoto.masaya@jp.fujitsu.com>
Add checks to Travis to make sure that the vendor.conf is in sync with
the code and the dependencies in ./vendor. Do this by first running
`make vendor` followed by running `./hack/tree_status.sh` to check if
any file in the tree has been changed.
Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
Most of the dependencies have been copied from libpod's vendor.conf
where such a cleanup has been executed recently.
Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
This script is meant to be used in CI after a `make vendor` run. Its
sole purpose is to execute a `git status --porcelain` and fail with the
list of files reported by it.
Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
containers/image moved to a new progress-bar library to fix various
issues related to overlapping bars and redundant entries.
Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
Create a different man page for each of the subcommands.
Also replace some crufty references to kpod with podman
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
Since the options-string variable used as a pattern in the case statement has
not been delimited and it does not match the value of the prev variable,
bash completion tries to complete any option even when a specified
option requires an argument.
This fix stops completing options when an option requires an argument.
Signed-off-by: ERAMOTO Masaya <eramoto.masaya@jp.fujitsu.com>
Move documentation about dependencies management from README.md to
CONTRIBUTING.md.
Closes #583
Signed-off-by: Silvano Cirujano Cuesta <silvano.cirujano-cuesta@siemens.com>
When copying images and the output is not a tty (e.g., when piping to a
file) print single lines instead of using progress bars. This avoids
long and hard to parse output.
Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
Vendor the latest containers/image 50e5e55e46a391df8fce1291b2337f1af879b822
to enable parallel copying of layers.
Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
Some tests I've done to try out the difference in performance:
I am using a directory repository so as not to depend on the network.
User time (seconds): 39.40
System time (seconds): 6.83
Percent of CPU this job got: 121%
Elapsed (wall clock) time (h:mm:ss or m:ss): 0:38.07
User time (seconds): 8.32
System time (seconds): 1.62
Percent of CPU this job got: 128%
Elapsed (wall clock) time (h:mm:ss or m:ss): 0:07.72
User time (seconds): 42.68
System time (seconds): 6.64
Percent of CPU this job got: 162%
Elapsed (wall clock) time (h:mm:ss or m:ss): 0:30.44
User time (seconds): 8.94
System time (seconds): 1.51
Percent of CPU this job got: 178%
Elapsed (wall clock) time (h:mm:ss or m:ss): 0:05.85
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
That in turn makes sure that the cli.String() etc. flag access functions
are not used, and all flag handling is done using the *Options structures
and the Destination: members of cli.Flag.
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
We no longer need it for handling flags.
Also, require the caller to explicitly pass an image name to parseImage
instead of, horribly nontransparently, using the first CLI option.
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
It was not really any clearer when broken out. We already have
a pair of trivial src/dest API calls before this, so adding
a similar src/dest call for SystemContext follows the pattern.
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
We no longer need the *cli.Context parameter, and at that point
it looks much cleaner to make this a method (already individually;
it will be even cleaner after a similar imageDestOptions conversion).
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
contextFromImageOptions is finally not using any string-based lookup
in cli.Context, so we don't need to record this value any more.
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
This introduces YET ANOTHER *Options structure, only to share this
option between copy source and destination. (We do need to do this,
because the libraries, rightly, refuse to work with source and
destination each declaring its own version of the --authfile flag.)
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
This is an extension of imageOptions that carries destination-specific
flags.
This will allow us to handle --dest-* flags without also exposing
pointless --src-* flags.
(This is, also, where the type-safety somewhat breaks down;
after all the work to make the data flow and availability explicit,
everything ends up in a types.SystemContext, and it's easy enough
to use a destination-specific one for sources. OTOH, this is
not making the situation worse in any sense.)
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
This is one of the ugliest parts; we need an extra parameter to support
the irregular screds/dcreds aliases.
This was previously unsupported by (skopeo layers).
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
We don't want to worry about mismatch of the flagPrefix value
between imageFlags() and contextFromImageOptions(). For now,
record it in imageOptions; eventually we will stop using it in
contextFromImageOptions and remove it again.
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
This is similar to the previous *Options structures, but this one
will support differing sets of options, in particular for the
copy source/destination.
The way the return values of imageFlags() are integrated into
creation of a cli.Command forces fakeContext() in tests to do
very ugly filtering to have a working *imageOptions available
without having a copyCmd() cooperate to give it to us. Rather
than extend copyCmd(), we do the filtering, because the reliance
on copyCmd() will go away after all flags are migrated, and so
will the filtering and fakeContext() complexity.
Finally, rename contextFromGlobalOptions to not lie about only
caring about global options.
This only introduces the infrastructure, all flags continue
to be handled in the old way.
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
contextFromGlobalOptions now uses globalOptions instead
of cli.Context.Global* . That required passing globalOptions
through a few more functions.
Now, "all" that is left are all the non-global options
handled in contextFromGlobalOptions.
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
Replace commandTimeoutContextFromGlobalOptions with
globalOptions.commandTimeoutContext. This requires passing
globalOptions to more per-command *Options state.
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
This works just like the command-specific options. Handles only
the single flag for now, others will be added as the infrastructure
is built.
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
This works just like the command-specific options. Also
moves the "Before:" handler into a separate method.
Does not change behavior.
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
Use Destination: &opts.flag in the flag definition
instead of c.String("flag-name") and the like in the handler and
matching only by strings.
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
This is a big diff, but it really only replaces a few global variables
with functions returning a structure.
The ultimate goal of this patch set is to replace option handling using
> cli.StringFlag{Name:"foo", ...}
> ...
> func somethingHandler(c *cli.Context) error {
> c.String("foo")
> }
where the declaration and usage are connected only using a string constant,
and it's difficult to notice that one or the other is missing or that the
types don't match, by
> type somethingOptions struct {
> foo string
> }
> ...
> cli.StringFlag{Name:"foo", Destination:&foo}
> ...
> func (opts *somethingOptions) run(c *cli.Context) error {
> opts.foo
> }
As a first step, this commit ONLY introduces the *Options structures,
but for now empty; nothing changes in the existing implementations.
So, we go from
> func somethingHandler(c *cli.Context) error {...}
>
> var somethingCmd = cli.Command {
> ...
> Action: somethingHandler
> }
to
> type somethingOptions struct{
> } // empty for now
>
> func somethingCmd() cli.Command {
> opts := somethingOptions{}
> return cli.Command {
> ... // unchanged
> Action: opts.run
> }
> }
>
> func (opts *somethingOptions) run(c *cli.Context) error {...} // unchanged
Using the struct type has also made it possible to place the definition of
cli.Command in front of the actual command handler, so do that for better
readability.
In a few cases this also broke out an in-line lambda in the Action: field
into a separate opts.run method. Again, nothing else has changed.
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
It's probably not strictly necessary, but let's work with the current
implementation before worrying about possible idiosyncrasies.
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
Before we use "go get" in CI, run "go version" so that we can be sure of
which version of the toolchain we're using.
Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>
github.com/containers/image/copy.Image() now returns the copied
manifest, so we at least need to ignore it.
Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>
Bump github.com/containers/image to version
5e5b67d6b1cf43cc349128ec3ed7d5283a6cc0d1, which modifies copy.Image() to
add the new image's manifest to the values that it returns.
Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>
... which has, apparently, never worked, because the golang image
has neither the GOPATH nor the working directory the Makefile expects.
Rather than move all this configuration into the Makefile to be able
to work with the golang images, just always use the skopeobuildimage
path, and only override the tags, to minimize divergence.
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
Instead, use DockerReference() to obtain the repository name (which
also makes it work for other transports that support Docker references),
and a check for docker.Transport + docker.GetRepositoryTags.
This will allow dropping docker.Image from containers/image, and maybe
even all of ImageReference.NewImage (forcing callers to think about
manifest lists, among other things).
Minor change to allow passing the env TESTFLAGS to make. That's pretty
convenient to filter what tests to run.
E.g. run integration tests containing the substring `Copy`:
make test-integration TESTFLAGS="-check.f Copy"
Signed-off-by: Marco Vedovati <mvedovati@suse.com>
Replace the occurrences of `github.com/projectatomic` with
`github.com/containers` to ensure clean clones of the project are
building, travis badges on the README work as expected and other minor
things.
Signed-off-by: Flavio Castelli <fcastelli@suse.com>
These targets produce a pure-Go binary, without the following features:
* ostree
* devicemapper
* btrfs
* gpgme
Signed-off-by: Akihiro Suda <suda.akihiro@lab.ntt.co.jp>
https://github.com/projectatomic/skopeo/pull/519 made (skopeo copy)
succeed and print nothing to stderr; that could lead to hard-to-diagnose
failures in rare corner cases, e.g. shell scripts which do
(skopeo copy $src $dst) (as opposed to the correct
(skopeo copy "$src" "$dst") ) if $src and $dst are empty due to
a previous failure.
Needed to pick up this change:
ostree: use the same thread for ostree operations
Since https://github.com/ostreedev/ostree/pull/1555, locking is
enabled by default in OSTree. Unfortunately it uses thread-private
data and it breaks the Golang bindings. Force the same thread for the
write operations to the OSTree repository.
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
skopeo is failing to build now on 32 bit systems. go-selinux update
should fix this. Also containers/storage has had some cleanup fixes
to devicemapper support.
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
The goal is to include the c/image documentation in a skopeo release,
so that RPMs and other distribution mechanisms can ship the c/image
documentation without having to create a separate package for c/image
(which would not otherwise be needed because it is vendored in users).
So, unify the updates of the "vendor" subdirectory as (make vendor),
and document it in README.md. Also drop hack/vendor.sh, we neither
use nor document it, so updating it as well seems pointless.
containers/storage and storage.conf now support flags to allow users
to setup containers/storage to run on devicemapper.
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
docker-archive and oci-archive now allow the image reference
for the destination to be empty.
Update tests for this new change.
Signed-off-by: umohnani8 <umohnani@redhat.com>
- _Start_ with installing distribution packages, instead of
mentioning it after the user has already built everything from source.
- Note that both the binary and documentation need to be built
for (make install) to work.
Add multi-tag support when generating docker-archive tarballs via the
newly added '--additional-tag' option, which can be specified multiple
times to add more than one tag. All specified tags will be added to the
RepoTags field in the docker-archive's manifest.json file.
This change requires vendoring the latest containers/image with
commit a1a9391830fd08637edbe45133fd0a8a2682ae75.
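For example (file name and tags illustrative); the resulting manifest.json lists both tags in RepoTags:
> skopeo copy --additional-tag busybox:mirrored \
>     docker://busybox:latest docker-archive:busybox.tar:busybox:latest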
Signed-off-by: Valentin Rothberg <vrothberg@suse.com>
Apparently, it was never documented to use (go vet $somefile.go)
(but (go tool vet $somefile.go) was).
go 1.10 seems to do more checks within packages, and $somefile.go
is interpreted as a package with only that file (even if other files
from that package are in the same directory), leading to spurious
"undefined: $symbol" errors.
So, just run (go vet) on ./... (explicitly excluding skopeo/vendor for the
benefit of Go 1.8). We only have three subpackages, so the savings, if any,
from running (go vet) only on the modified subpackages would be small.
More importantly, on a toolchain update, ./... allows us to see the newly
detected issues all at once, instead of randomly waiting for a commit that
changes one of the affected files for the failure to show up.
The hack/common.sh script contains
local go_version
go_version=($(go version))
if [[ "${go_version[2]}" < "go1.5" ]]; then
# fail
fi
which does a lexicographic string comparison, and fails with 1.10.
Just drop it, the fedora:latest image is not likely to revert to 1.5.
containers/image returns a more detailed error message for oci and
oci-archive transports when the syntax given by the user is incorrect
Signed-off-by: Urvashi Mohnani <umohnani@redhat.com>
In addition to the minimum necessary to update the API, also rename some
parameters/variables for consistency:
c *cli.Context
ctx context.Context
sys *types.SystemContext
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
This PR adds CLI support for overriding the default docker daemon host when using the
`docker-daemon` transport.
Fixes #244
Signed-off-by: Justin Lewis Salmon <justin.lewis.salmon@gmail.com>
These files are used by deb and rpm packages, so I'd rather have them
upstream than maintain in 2 separate places.
Signed-off-by: Lokesh Mandvekar <lsm5@fedoraproject.org>
The dir transport has been changed to save the blobs without the .tar extension
Fixes the skopeo tests failing due to this change
Signed-off-by: umohnani8 <umohnani@redhat.com>
Anyone running (vndr) currently ends up with failing tests in OCI schema
validation because gojsonschema has fixed its "$ref" interpretation, exposing
inconsistent URI usage inside image-spec/schema.
So, this runs (vndr), and uses mtrmac/image-spec:id-based-loader
( https://github.com/opencontainers/image-spec/pull/739 ) to make the tests pass
again. As soon as that PR is merged we should revert to using the upstream
image-spec repo again.
Re-vendor containers/storage to current revision
0d32dfce498e06c132c60dac945081bf44c22464, and containers/image to
current revision c8bcd6aa11c62637c5a7da1420f43dd6a15f0e8d.
Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>
skopeo copy, delete, and inspect can now use credentials stored in the auth file
by the kpod login command
e.g. kpod login docker.io -> skopeo copy dir:mydir docker://username/image
Signed-off-by: Urvashi Mohnani <umohnani@redhat.com>
User can select from 3 manifest types: oci, v2s1, or v2s2
skopeo copy defaults to oci manifest if the --format flag is not set
Adds an option to compress blobs when saving to the directory using the dir transport,
e.g. skopeo copy --format v2s1 --compress-blobs docker-archive:alp.tar dir:my-directory
Signed-off-by: Urvashi Mohnani <umohnani@redhat.com>
PR #440 reverted the vendor.conf edits of #426. This passed CI
because the corresponding vendor/* subpackages were not modified.
Restore the vendor.conf changes, and re-run full (vndr) to ensure
the two are consistent again.
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
In README.md, there is an example of skopeo copy command to download an
image in OCI format, but the current code returns an error:
skopeo copy docker://busybox:latest oci:busybox_ocilayout
FATA[0000] Error initializing destination oci:tmp:: cannot save image with empty image.ref.name
If we add a tag after the oci directory, the problem is gone:
skopeo copy docker://busybox:latest oci:busybox_ocilayout:latest
Fixes: #446
Signed-off-by: Marcos Paulo de Souza <marcos.souza.org@gmail.com>
The security benefits of PIC binaries are quite well known (since they
work with ASLR), and there is effectively no downside. In addition,
we've been seeing some weird linker errors on ppc64le that are resolved
by using -buildmode=pie.
Signed-off-by: Aleksa Sarai <asarai@suse.de>
On macOS, (brew install gpgme) installs it within /usr/local, but
/usr/local/include is not in the default search path.
Rather than hard-code this directory, use gpgme-config. Sadly that
must be done at the top-level user instead of locally in the gpgme
subpackage, because cgo supports only pkg-config, not general shell
scripts, and gpgme does not install a pkg-config file.
If gpgme is not installed or gpgme-config can’t be found for other reasons,
the error is silently ignored (and the user will probably find out because
the cgo compilation will fail); this is so that users can use the
containers_image_openpgp build tag without seeing ugly errors
(and without the Makefile having to detect that build tag in even more
shell scripts).
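A sketch of the resulting idea, assuming the flags are wired in via the standard cgo environment variables (the actual Makefile plumbing may differ):
> # errors ignored on purpose: without gpgme the cgo build fails anyway
> export CGO_CFLAGS="$(gpgme-config --cflags 2>/dev/null)"
> export CGO_LDFLAGS="$(gpgme-config --libs 2>/dev/null)"
> go build -o bin/skopeo ./cmd/skopeo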
We want to get support into skopeo for handling
override_kernel_checks so that we can use overlay
backend on RHEL.
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
This will allow compilation with a custom go binary,
for example /usr/lib/go-1.8/bin/go instead of /usr/bin/go on Ubuntu
16.04 which is still version 1.6
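For example, assuming the Makefile variable is called GO:
> make binary-local GO=/usr/lib/go-1.8/bin/go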
Signed-off-by: Lokesh Mandvekar <lsm5@fedoraproject.org>
make lint is complaining about cases where the returned error is checked
for err != nil, and then returned anyway.
Signed-off-by: umohnani8 <umohnani@redhat.com>
This reduces the time used to clone openshift/origin on Travis from
> real 2m34.227s
> user 4m18.844s
> sys 0m8.144s
to
> real 0m8.816s
> user 0m2.640s
> sys 0m0.856s
, and the download size from 782.78 MiB to 70.05 MiB.
We can't trivially do this for docker/distribution because it is using
(git checkout $commit) on the cloned repo; we could do a clone+fetch+fetch
with --depth=1, but the full clone takes less than two seconds, so let's
keep that one simple.
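For reference, the shallow-clone one-liner being described:
> # only the tip of the default branch is downloaded
> git clone --depth=1 https://github.com/openshift/origin.git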
This effectively reverts f4a44f00b8 ("integration: disable check with
image-tools for image-spec RC5"), which disabled the compliance
validation due to upstream bugs. Since those bugs have been fixed,
re-enable the tests (to make the smoke tests far more effective).
Fixes: f4a44f00b8 ("integration: disable check with image-tools for image-spec RC5")
Signed-off-by: Aleksa Sarai <asarai@suse.de>
This requires re-vendoring a bunch of other things (as well as the old
Sirupsen/logrus path), the relevant commits being:
* github.com/xeipuuv/gojsonschema@0c8571ac0ce161a5feb57375a9cdf148c98c0f70
* github.com/xeipuuv/gojsonpointer@6fe8760cad3569743d51ddbb243b26f8456742dc
* github.com/xeipuuv/gojsonreference@e02fc20de94c78484cd5ffb007f8af96be030a45
* go4.org@034d17a462f7b2dcd1a4a73553ec5357ff6e6c6e
Signed-off-by: Aleksa Sarai <asarai@suse.de>
Update containers/storage and containers/image to the
current-as-of-this-writing versions,
105f7c77aef0c797429e41552743bf5b03b63263 and
23bddaa64cc6bf3f3077cda0dbf1cdd7007434df respectively.
Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>
Ubuntu 16.04 does not have the `libostree-dev` package. Also, we should
install the `libglib2.0-dev` package when building skopeo with the `make binary` command.
Signed-off-by: 0x0916 <w@laoqinren.net>
To make it clearer that the two are alternatives.
Document that a docker command is needed for the in-container build.
Also move the “checkout in $GOPATH” warning into the “without a
container” section, where it belongs.
We want to start with the Go 1.5 dependency and build/checkout
instructions.
Also create a separate subsection, to match the future “Building
in/without a container” subsections
Two more packages are needed to locally build skopeo
on Fedora, viz. btrfs-progs-devel and device-mapper-devel,
so add them to the README.
Signed-off-by: Suraj Deshmukh <surajssd009005@gmail.com>
Combine RUN apt-get update with apt-get install in the same RUN statement.
From the [docs](https://docs.docker.com/engine/userguide/eng-image/dockerfile_best-practices/#build-cache) in March 2017:
Always combine RUN apt-get update with apt-get install in the same RUN statement, for example
RUN apt-get update && apt-get install -y package-bar
Using apt-get update alone in a RUN statement causes caching issues and subsequent apt-get install instructions fail.
Signed-off-by: Jing Qiu <aqiu0720@gmail.com>
containers/storage got new dependencies, so we will need to re-vendor
eventually anyway, and having this separate from other major work is
cleaner.
But the primary goal of this commit is to see whether it makes skopeo
buildable on OS X.
We are not testing registry start-up performance, and killing the test
suite just because Travis is a bit busy doesn’t help; we’re much better
off with a test run which gives the registry a bit more time.
Move "skip if signing is not available" into the test, there may be
tests which only need verification.
Move GNUPGHOME creation from SetUpTest to SetUpSuite, sharing a single
key is fine. We don’t change the GNUPGHOME contents at test runtime.
Now that we can update the embedded name:tag, the test no longer fails
on a schema1→schema1 copy with the old schema1 server which verifies the
name:tag value.
Before the update, we have loosened the equality check to ignore the
name/tag; now that we are generating them correctly, test for the
expected values.
TestCopySignatures, among other things, tests handling of a correctly
signed image to a different name without breaking the signature, which
will be impossible with schema1 after we start updating the names
embedded in the schema1 manifest. So, use the schema2 server binary,
and docker://busybox image versions which use schema2.
The new version of containers/image will update the name and tag fields
when pushing to schema1; so accept that before we update, so that tests
keep working.
For now, just ignore the name/tag fields, so that both the current and
updated versions of containers/image are acceptable; we will tighten
that after the update.
Use (diff -x manifest.json) instead of removing the manifest.json files.
Also rename the helper from destructiveCheckDirImageAreEqual to
assertDirImagesAreEqual.
In addition to the default registry in the OpenShift cluster, start two
more (one known to support s1 only, one known to support s1+s2), and
also a docker/distribution s1-only registry.
Then test that copying images around works as expected.
NOTE: The docker/distribution s1-only tests currently fail and are
disabled. See the added comment for details.
We don’t really need to differentiate between the master/registry, we
just want to terminate them, maybe in the right order. So, collect them
in an array instead of using separate members.
This will make it easier to have more registry instances in the near
future.
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
The *check.C object can not be reused across tests, so storing it in
openshiftCluster is incorrect (and leads to weird behavior like
assertion failures being silently ignored). So far this hasn't really
been an issue because we have been using the *check.C only in SetUpSuite
and TearDownSuite, and the changes to this have turned out to be
unnecessary after all, but this is still the right thing to do.
This is more or less
> s/c\./cluster\./g; s/cluster\.c/c/g
(paying more attention to the syntax) and corresponding modifications
to the method declarations.
Does not change behavior, apart from using the correct *check.C in
CopySuite.TearDownSuite.
This makes the fixture editing more robust against typos or unexpected
changes (if the “fixture” comes from third parties, like the OpenShift
registry configuration file).
This separates creation of the account and configuration, which can be
shared across service instances, from actually starting the registry; we
will soon start several of them.
Only splits a function, does not change behavior.
This change includes the docker-archive: transport, allowing for
entirely local manipulation of Docker images.
Signed-off-by: Aleksa Sarai <asarai@suse.de>
vndr has never supported non-root imports but it used to not produce
errors. Newer versions of vndr will not clone anything if the
vendor.conf doesn't "look right".
Signed-off-by: Aleksa Sarai <asarai@suse.de>
Some registries may choose to block the "list all tags" endpoint for
performance or other reasons. In this case we should still allow an
inspect which will not include the "tag list" in the output.
Signed-off-by: Phil Estes <estesp@gmail.com>
… testing signature reading and writing using the
X-Registry-Supports-Signatures extension, and its
interoperability/equivalence with the atomic: native OpenShift API.
Primarily vendor after merging mtrmac/image:openpgp.
Then update for the SigningMechanism API change.
Also skip signing tests if the GPG mechanism does not support signing.
Also abort some of the tests early instead of trying to use invalid (or
nil) values.
The current master of image-tools does not build with Go 1.6, so keep
using an older release.
Also requires adding a few more dependencies of our updated
dependencies.
We are maintaining code to set up and run registries, including the
fairly complex setup for Atomic Registry, in the integration tests.
This is all useful for experimentation in shell, and the easiest way to
do that is to add a “test” which, after all the set up is done, simply
starts a shell.
This is gated by a build tag, so it does not affect normal test runs.
A possible alternative would be to convert all of the setup code not to
depend on check.C and testing.T, but that would be fairly cumbersome due
to how prevalent c.Logf and c.Assert are throughout the setup code.
Especially the natural replacement of c.Assert with a panic() would be
pretty ugly, and adding real error handling to all of that would make
the code noticeably longer. The build tag and copy&pasting a command
works just as well, at least for now.
(It is not conveniently possible to create a new “main program” which
manually creates a check.C and testing.T just for the purpose of running
the setup code either; check.C can be created given a testing.T, but
testing.T is only created by testing.MainStart, which does not allow us
to submit a non-test method; and testing.MainStart is excluded from the
Go compatibility promise.)
This patch adds a new flag --insecure-policy.
Closes #181; we can now directly use the tool with the
above-mentioned flag without using a policy file.
Signed-off-by: Kushal Das <mail@kushaldas.in>
This is primarily to get the signature access docker/distribution API
extension.
To make it work, two updates to the test harness are necessary:
- Change the expected output of (oadm policy add-cluster-role-to-group)
- Don't expect (openshift start master) to create .kubeconfig files
for the registry service.
As of https://github.com/openshift/origin/pull/10830 ,
openshift.local.config/master/openshift-registry.kubeconfig is no longer
autogenerated. Instead, do what (oadm registry) does, creating a
service account and a cluster policy role binding. Then manually create
the necessary certificates and a .kubeconfig instead of using the
service account in a pod.
The integrated registry used to return the original signature unmodified
in 1.3.0-alpha.3; in 1.5.0-alpha-3 it regenerates a new one, so allow that
when comparing the original and copied image.
This includes fixes to docker-daemon's GetBlob, which will now
decompress blobs (making c/i/copy act sanely when trying to copy from a
docker-daemon to uncompressed destinations, as well as making
verification actually work properly).
Signed-off-by: Aleksa Sarai <asarai@suse.de>
To make sure that we don't create OCI images that are just
consistently invalid, add additional checks to ensure that both of the
generated OCI images in the round-trip test are valid according to the
upstream validator.
This commit vendors the following packages (deep breath):
* oci/image-tools@7575a09363, which requires
* oci/image-spec@v1.0.0-rc4 [revendor, but is technically an update
because I couldn't figure out what version was vendored last time]
* oci/runtime-spec@v1.0.0-rc4
* xeipuuv/gojsonschema@6b67b3fab7
* xeipuuv/gojsonreference@e02fc20de9
* xeipuuv/gojsonpointer@e0fe6f6830
* camlistore/go4@7ce08ca145
Signed-off-by: Aleksa Sarai <asarai@suse.de>
This test is just a general smoke test to make sure there are no errors
with skopeo, but also verifying that after passing through several
translation steps an OCI image will remain in fully working order.
Signed-off-by: Aleksa Sarai <asarai@suse.de>
This is a bit better than raw (gpg -d $signature), and it allows testing
of the signature.GetSignatureInformationWithoutVerification function;
but, still, keeping it hidden because relying on this in common
workflows is probably a bad idea and we don’t _need_ to expose it right
now.
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
vndr is almost exactly the same as our good old hack/vendor.sh, except
it's cleaner and it allows re-vendoring just one dependency if needed
(which we do a lot for containers/image).
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
Vendor containers/storage, and its dependencies github.com/pborman/uuid
and github.com/mistifyio/go-zfs, which we didn't already use.
Update the build Dockerfile to install their dependencies.
Add scriptlets that try to detect whether or not we need to use the
"libdm_no_deferred_remove" and/or "btrfs_noversion" build tags.
Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>
When we start up, initialize handlers so that we can import blobs
correctly when using the storage library.
Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>
Run the "go" command with the $(BUILDTAGS) makefile variable passed in
as build tags. We don't currently set it, but we'll need to eventually,
and adding it now does no harm.
Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>
Both (make binary) and (make binary-static) compile the code and create
a skopeo binary, so (make all) should only depend on one of them.
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
This is added pretty much only for integration tests right now;
though, it might be useful also for non-root operation.
Also makes a tiny cleanup of contextFromGlobalOptions, removing a
variable.
The policy file actually indicates the signatures that the
user trusts. This patch changes the documentation and error messages
to indicate this trust.
Finally, load and enforce the policy.
NOTE that this breaks a simple ./skopeo from a built directory if you
don't have /etc/atomic/policy.json installed for other reasons;
use (./skopeo --policy default-policy.json) instead.
(skopeo copy) will soon ALWAYS require a present policy file. So,
install one by (make install), and ensure that integration tests do so
as well.
Also simplifies the usage of install(1) a bit.
This ordinarily uses the compiled-in default, but allows per-command
override. No users yet.
Note that this adds a URL to policy documentation within
containers/image, and that URL does not exist at the moment.
A plain sha256sum and the like is insufficient because we need to strip
signatures from v2s1 manifests; so, add a subcommand.
This can be used together with (skopeo inspect --raw) when downloading a
manifest from a source that we don't trust not to modify it under us: we download the
manifest once using (skopeo inspect --raw), compute a digest using
(skopeo manifest-digest), and then do all future operations using a
digest reference.
* Use “override GOGCFLAGS+=” so that (make GOGCFLAGS=… DEBUG=1)
does not ignore the appending to GOGCFLAGS
* Move quoting of -gcflags from the variable to its use,
so that (make GOGCFLAGS=… DEBUG=1) is correctly quoted
* Now that GOGCFLAGS and DEBUG are both handled correctly when
completely empty, simplify by dropping the DEBUG!=1 branch.
* Beautify the command line by not using DEBUG= if DEBUG is unset.
This ensures that we are not installing e.g. an obsolete version of the
man page after the Markdown version is updated.
Note that this greatly benefits from the "skopeo" target being
non-phony, otherwise (make install) would rebuild the binary.
- Use ArgsUsage to document the non-option arguments
- Refer to ArgsUsage placeholders in Usage
- Use named placeholders in flag documentation
Fixes #137, more or less.
Among other minor changes:
- Do not duplicate synopses of the subcommands; use a generic synopsis
at the top, and detailed subcommand synopses only when documenting the
subcommands.
- Use the conventions documented in man-pages(7), in particular using
italic for replaceable values.
- Add a section documenting the transport:details reference format,
and list the supported transports.
- Relax the warning about standalone-sign.
Note that this requires ImageDestination.PutBlob to fail and delete
any unfinished data if stream.Read() fails.
We do not have to trust PutBlob to correctly handle a validation error,
so we don't; but we can't do the storage cleanup for PutBlob.
- Use transports.ParseImageReference instead of dealing with individual
transports
- CanonicalDockerReference replaced by Reference.DockerReference, can't
fail but can be unsupported
- directory.NewImageDestination replaced by
directory.NewReference.NewImageDestination
This fixes --version integration test on CentOS, as noticed by
https://github.com/projectatomic/skopeo/pull/91 . The underlying cause
is:
- Makefile builds with -ldflags "-X var=value", while go 1.4.2 only
supports "-X var value". This causes CentOS builds to be built
without the specific commit information
- The --version integration test assumes that commit information will
always follow the version number.
Changing either one of these would fix the build, changing the
integration test has the advantage that we don't have to use the
obsolete -X syntax and suffer warnings on newer Go versions.
I don’t know how to checkout a specific untagged commit (
9ff4bf43548c758b6767b639b335681285fece48 ) from the original repo, so
I have forked the project and fetched that commit from a cached Docker
image.
We should instead update the containers/image client for the new API ASAP,
and then the github.com/mtrmac/origin repo should be removed.
So that people don't need to install all dependencies just to build.
Make it so that "make binary" does nothing if nothing changed.
Remove ${DEST}
Signed-off-by: Doug Davis <dug@us.ibm.com>
To do so, have (skopeo copy) work with a types.Image, and replace uses
of types.ImageSource with types.Image where possible to allow the
caching in types.Image to work.
This is a slight behavior change:
- The manifest is now processed through fixManifestLayers
- Duplicate layers (created e.g. when a non-filesystem-altering command is used
in a Dockerfile) are only copied once.
Per the discussion in https://github.com/projectatomic/skopeo/pull/73 ,
types.Image.Manifest should not need to expose MIME types:
ImageSource.GetManifest allows supplying MIME types; the intent is
for clients who want to parse the manifests to use an ImageSource.
Clients who want to use skopeo’s parsing should use types.Image, and
then they don’t need to care about MIME types. In fact, types.Image
needs to decide among the various manifest alternatives which one to
parse (and which one to match against the provided or signed manifest
digest). So, Image.Manifest will not be all that useful for parsing the
contents, it is basically useful only for verifying against a digest.
Also split creation of cli.App from main(), and add a test helper
function.
This does not change behavior at the moment, but will allow writing
tests of the command handlers.
This builds from the image-signatures-rest branch for
https://github.com/openshift/origin/pull/9181 .
Testing push, pull, streaming.
Does not test working with the other Docker registries built in
Dockerfile; I will leave that to the author of that code :)
Note that this relies on an internet connection for pulling from the
Docker Hub (which is incidentally tested by that); pushing to a Docker
Registry, whether local or the Hub, is not tested by this.
The tests only run in a container because the (oc login) / (docker
login)-like code modifies files in a home directory; the new
SKOPEO_CONTAINER_TESTS environment variable should protect against
accidental non-container runs.
- consumeAndLogOutputs
- assertSkopeoSucceeds
- assertSkopeoFails
- runCommandWithInput
All of these allow running commands as one-liners with no call-site
error handling, making tests much more readable.
Also modifies TestNoNeedAuthToPrivateRegistryV2ImageNotFound to use
check.Matches instead of manual strings.Contains conditions, which is
shorter and more consistent with the assertSkopeo... calls.
Primarily, make it actually work; reading into a non-zero-capacity but
zero-length slice would just return 0, the goroutine would terminate,
and even the producer of the output could fail with EPIPE/SIGPIPE.
Also make the logged output readable, converting it into a string
instead of a series of hexadecimal byte values.
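The pitfall is a property of the io.Reader contract rather than of this
helper: Read fills at most len(p) bytes, so a buffer created with a
capacity but zero length never receives any data. A minimal standalone
sketch (not the helper itself):
```go
package main

import (
  "fmt"
  "strings"
)

func main() {
  r := strings.NewReader("command output")

  wrong := make([]byte, 0, 1024) // len 0, cap 1024
  n, _ := r.Read(wrong)          // Read fills at most len(p) == 0 bytes
  fmt.Println(n)                 // 0

  right := make([]byte, 1024) // len 1024
  n, _ = r.Read(right)
  fmt.Println(n, string(right[:n])) // 14 command output
}
```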
This will be used also by non-signing tests.
No code changes besides removing the initial capital letter in the
function name; this is a separate commit only to make reviewing of
future changes to this function easier.
/skopeo.1 was a generated file before #35; now this path is not used
(replaced by man1/skopeo.1); if the generated file is left around, it is
obsolete (and confusingly empty). Remove it from .gitignore to nudge
developers like me to clean up.
This does not really go into why duplicate layers can happen or why it
is worth supporting that; the code originates from
504e67b867 ,
which does not explain either.
This will allow us to cleanly move genericImage into a separate package.
This costs an extra pointer, but also allows us to rely on the type
system and drop handling "certainly impossible" errors, worth it just
for this simplification anyway.
Add support to mark images for deletion from a repository
Requires:
* V2 API and schema
* registry configured to allow deletes
* run registry garbage collection to free up disk space
Signed-off-by: Jhon Honce <jhonce@redhat.com>
Implement a client to the chunked API, instead of the nonexistent
one-shot API (per
2a4deee441
).
Adds a FIXME to DELETE the pending upload on failure; the uploads are
supposed to time out so this is not immediately critical.
Fixes #64.
PolicyContext is intended to be the primary API for skopeo/signature:
supply a policy and an image, and ask specific, well-defined
(preferably yes/no) questions.
Using the canonical minimized format of Docker references introduces too
many ambiguities.
This also removes some validation of the scope string, but all that was
really doing was rejecting completely invalid input like uppercase.
Sadly, it is not quite obvious that we can detect and reject mistakes like
using "busybox" as a scope instead of the correct
"docker.io/library/busybox". Perhaps require at least one dot or port
number in the host name?
To support verification of signatures when more than one key, or more
than one identity, is accepted, have verifyAndExtractSignature accept
callbacks (in a struct so that they are explicitly named).
verifyAndExtractSignature now also validates the manifest digest. It is
intended to become THE SINGLE PLACE where untrusted signature blobs
have their signatures verified, are validated against other expectations,
and are parsed and converted into internal data structures available to other
code.
Also:
- Modifies VerifyDockerManifestSignature to use utils.ManifestMatchesDigest.
- Adds a test for Docker reference mismatch in VerifyDockerManifestSignature.
(The key was one-time-generated in a temporary directory,
and is, intentionally, not available.)
This is not conceptually related to the rest of the PR, just adding a
missing case to the test, except that the added fixture will be reused
in a prSignedBy test.
As opposed to callers just calling utils.ManifestDigest(), this is
a forward-compatible interface, allowing other digest algorithms to
be added in the future.
Right now, we only support SHA-256, so the underlying implementation
does not change anything.
This is not expected to be that useful in production; for now it serves
as a demonstration of the concept, and it allows (skopeo inspect) to be
clumsily used as a parser of stand-alone manifests (by creating a dir:
structure with that manifest).
(skopeo layers) support follows naturally, but is even less useful.
The remaining uses of the dependencies, in (skopeo inspect), now check
whether their types.Image is a docker.Image and call the docker.Image
functions directly.
This does not change behavior for Docker images.
For non-Docker images (which can't happen yet), the Name field is
removed; RepoTags remain and are reported as empty, because using
json:",omitempty" would also omit an empty list for Docker images.
The code not dependent on specifics of DockerImageSource now lives in
docker.genericImage; the rest directly in docker.Image.
docker.Image remains the only implementation of types.Image at this
point, but that will change.
This is the only Docker-specific aspect of types.Image.Inspect.
This does not change behavior; plausibly we might want to replace the
Name value in (skopeo inspect) by something else which is not dependent
on Docker, but that can be a separate work later.
Adds a FIXME? in docker_image.go for consistency with
dockerImage.GetRepositoryTags, both will be removed later in the
patchset.
We abort on failure to get the data anyway, so there is no need to use
temporaries to avoid modifying outputData on failure.
This is not a simplification yet, but handling optional (e.g.
Docker-specific) data this way will be simpler, and handling
non-optional data the same way will be more consistent.
This allows unmarshaling JSON data and refusing any ambiguous input, to
make sure users don't make mistakes when writing policy.
This might be a bit easier with reflection, but we will need the
non-reflection variant (for unmarshaling a map type) anyway, and quite a
few users which do ultimately unmarshal into a struct need to override
the type of one or more fields, so reflection would force them to define
temporary fields - not necessarily much better.
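As a rough illustration of the technique (hypothetical helper name, not
the actual containers/image code; unlike the real implementation, this
sketch does not reject duplicate keys):
```go
package main

import (
  "encoding/json"
  "fmt"
)

// strictUnmarshal decodes a JSON object into per-field destinations and
// rejects any field name it does not recognize.
// (Illustrative helper, not the actual containers/image code.)
func strictUnmarshal(data []byte, fields map[string]interface{}) error {
  var raw map[string]json.RawMessage
  if err := json.Unmarshal(data, &raw); err != nil {
    return err
  }
  for name, value := range raw {
    dest, ok := fields[name]
    if !ok {
      return fmt.Errorf("unexpected field %q", name)
    }
    if err := json.Unmarshal(value, dest); err != nil {
      return err
    }
  }
  return nil
}

func main() {
  var prType, keyPath string
  err := strictUnmarshal(
    []byte(`{"type":"signedBy","keyPath":"/keys/example.gpg"}`),
    map[string]interface{}{"type": &prType, "keyPath": &keyPath},
  )
  fmt.Println(prType, keyPath, err) // signedBy /keys/example.gpg <nil>
}
```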
This will make the output of godoc cleaner, we can't filter out the
subpackage otherwise.
Also copy the needed fixture into the integration subpackage, instead of
referring to it using ../signature/fixtures (and we can't import
signature/fixtures_info-test.go now).
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
also remove the fixtures pkg as it would clutter godoc (there's no need
to have .go files with fixtures)
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
This does not change behavior.
Rename types.DockerImageManifest to types.ImageInspectInfo.
This naming more accurately reflects what the function does and how it is
expected to be used.
(The only outstanding non-inspection piece is the Name field, which is
kind of a subset of GetIntendedDockerReference() right now. Not sure
whether that is intentional.)
Also fold makeImageManifest into its only user.
This does not change behavior.
Splits listing of repository tags, which is not a property of an image,
from the image.Manifest gathering of information about an image.
Compute the digest ourselves; the registry is in general untrusted and
computing it ourselves is easy enough.
Then stop passing the unverifiedCanonicalDigest value around, simplifying
ImageSource.GetManifest and related code. In particular, remove
retrieveRawManifest and have internal users just call Manifest() now that
we don't need the digest.
Does not change behavior.
This will allow us to move collecting some of the data to the (skopeo
inspect) code and to have a more focused types.Image API, where
types.Image.Manifest() does not return a grab bag of manifest-unrelated
data, eventually.
For now, it actually makes the coupling more explicit by having
types.Image.Manifest() return a types.DockerImageManifest instead of the
too generic types.ImageManifest. We will need to think about which
parts of DockerImageManifest are truly generic, later.
Does not change behavior.
This better expresses the purpose of this method (it is working with
more, currently much more, than the manifest), and frees up the Manifest
method name for a simple getter of the raw blob.
No change in behavior.
These functions are guaranteed-cached versions of the same method in
types.ImageSource. Both will be needed for signature policy evaluation,
and the symmetry with ImageSource is nice.
Also replaces the equivalent RawManifest method, preferring to keep
the same naming convention as types.ImageSource.
Does not change behavior. This is a straightforward move and update of
package references, except for:
- Adding a duplicate definition of manifestSchema1 to
cmd/skopeo/copy.go. This will need to be cleaned up later, for now
preferring to make no design changes in this commit.
- Renaming parseDockerImage to NewDockerImage, to both make it public
and consistent with common golang conventions.
No semantic change, only a reorganization: The utilities now return
jsonFormatError instead of InvalidSignatureError, but their only
caller maps it back.
The dir: source type does not return the value, the value is
untrusted/not validated, and it is not at all clear why we should print
it in the first place.
This expects a GPG key fingerprint as a value of the argument (though
other key identification methods, like mitr@volny.cz, happen to work).
Do we need to namespace this (gpg:…)?
Note that this is unusable at the moment because only the dir: backend
implements storing signatures, and this backend cannot determine
the canonical Docker reference to use as a signed image identity.
This copies an image from ImageSource to ImageDestination, e.g.
skopeo copy atomic:mitr/busybox:latest dir:t-down # pull
skopeo copy dir:t-up atomic:mitr/busybox:latest # push
This finally uses all of the ImageSource and ImageDestination
implementations, though these utilities are in turn not used yet.
Adds unresolved FIXME (FIXME!!) notes for the tlsVerify default value;
for now, the code follows the existing parseImage semantics.
Also note the naming inconsistency: dir:…, atomic:…, but
docker://… . I think the non-// names are cleaner, but if we are
committed to docker://…, just being consistent might be better.
Note that this assumes that both (docker login) and (oc login) have
happened, the credentials can be read from the usual config files,
and that the default OpenShift instance should be used.
This includes copy&pasted/modified/simplified code from OpenShift
and Kubernetes, primarily for config file parsing and setting up
TLS and HTTP authentication.
This is much smaller than linking to the upstream OpenShift client
libraries, which via various abstractions and registration drag in much
(dozens of megabytes) more code.
The primary loss from this simplification is automatic conversions
between various versions of the API objects, both for the REST API and
for local configuration storage.
This does not contain downloading/uploading signatures, which depends on
server-side support.
Note that this does not allow uploading under new tags; Docker Registry
requires the tag to be present within the manifest, i.e. we might need
to modify the (possibly signed) manifest.
For now, uploading manifests only identified by a digest is sufficient
for the Atomic Registry; tagging happens in OpenShift imagestreams.
The dockerClient encapsulates makeRequest and authentication setup, and
will be shared between the pull and push code.
This is only a restructuring, does not change behavior.
The dockerImage->dockerImageSource->dockerClient inclusion chain is
somewhat ugly, hopefully eventually we will move the remaining
dockerImage functionality either to dockerutils or to the top level, and
then eliminate it.
The Docker Registry manifest upload should supply a Content-Type, and
guessing from the contents is the easiest we can do right now.
Also eliminate dockerutils.manifestMIMEType; it makes using the returned
value too difficult to be worth the extra safety.
Call dockerImageSource.ping() in .makeRequest() if needed, instead of
expecting a caller to do it (which only happened in GetManifest).
This required splitting the URLs into the baseURL (dependent on .ping()
result) and the suffix (independent of it), which was a simplification
anyway.
Also rename WWWAuthenticate to wwwAuthenticate, it is a private cache
field.
This will hopefully allow better reuse of the "copy images" code from
docker.go in the future.
No behavior change, the dirImageDestination code was based on the code
this commit is replacing.
This is consistent with the (skopeo layers) storage layout; otherwise it
is expected to be used primarily as a debugging aid when working on
more complex image transfers (e.g. directly from OpenShift to a running
Docker daemon), allowing them to be split into two simpler problems
between one complex storage mechanism and a simple directory.
Not used yet, users will be added in future commits.
The ImageSource type does not provide all of the functionality of
docker.go, but we will be able to reuse the ImageSource parts in an
OpenShift client.
This is only a restructuring, does not change behavior.
Right now, only a declaration.
This will allow writing generalized push/pull between various storage
mechanisms, and reuse of the Docker Registry client code for the Docker
Registry embedded in OpenShift.
Move the manifest computation (with v2s1 signature stripping) out of
skopeo/signature into a separate package; it is necessary in the
OpenShift client as well, unrelated to signatures.
Other Docker-specific utilities, like getting a list of layer blobsums
from a manifest, may be also moved here in the future.
Resolves https://github.com/projectatomic/skopeo/issues/12
* Convert man page from markdown to nroff
* Fill out man page
* Remove TODO's from go code regarding man page
* Additional information on building instructions
* Update Makefile
Signed-off-by: Jhon Honce <jhonce@redhat.com>
Set GOPATH to start with ./vendor so that we use the dependencies in our
vendored versions instead of dependencies in whatever other version is
elsewhere in GOPATH.
And then undo it when trying to list the non-vendor subpackages in the
current directory.
github.com/coreos/etcd as of v2.2.5 uses a Godeps subdirectory, and
imports packages by including the Godeps path fragments directly in the
package name; so we can't just remove the subdirectory and vendor the
included package directly. So, add a flag to clone() to suppress
removing the vendor subdirectories.
Instead of only checking dependencies of the "main" packages, include
also test dependencies of all subpackages of the project, and their
transitive dependencies.
Otherwise the "clean" step of hack/vendor.sh would drop most .go files
from vendor/ as unused.
Also commits refreshed versions of a few of the vendored packages.
- (make check): GNU coding standards-compliant primary entry point,
running all available tests in the best environment (i.e. Docker
container).
- (make test-all-local): Local entry point, running only tests
which do not require a special environment; intended for IDE
integration and quick turnaround cycles.
Also modifies the Travis configuration to run (make check), to prevent
duplication.
Validating only committed files is not useful in the natural
$test_everything_passes; commit; push
workflow; the failures will not be caught locally, only by Travis later
(and only if PRs are used instead of direct commits to master).
So, use the working directory state instead of last commit for
validations; and remove misleading comments in checks which already use
the working directory state.
# The multi-arch manifest will support the architectures from this list. It should be the same set of architectures as in the image-build-push step in this Travis config.
Before opening a new issue, check the existing open issues to see if someone else has already reported it. If so, feel free to add
your scenario, or additional information, to the discussion. Or simply
"subscribe" to it to be notified when it is updated.
If you find a new issue with the project we'd love to hear about it! The most
important aspect of a bug report is that it includes enough information for
us to reproduce it. So, please include as much detail as possible and try
to remove the extra stuff that doesn't really relate to the issue itself.
The easier it is for us to reproduce it, the faster it'll be fixed!
Please don't include any private/sensitive information in your issue!
## Submitting Pull Requests
No Pull Request (PR) is too small! Typos, additional comments in the code,
new testcases, bug fixes, new features, more documentation, ... it's all
welcome!
While bug fixes can first be identified via an "issue", that is not required.
It's ok to just open up a PR with the fix, but make sure you include the same
information you would have included in an issue - like how to reproduce it.
PRs for new features should include some background on what use cases the
new code is trying to address. When possible and when it makes sense, try to break up
larger PRs into smaller ones - it's easier to review smaller
code changes. But only if those smaller ones make sense as stand-alone PRs.
Regardless of the type of PR, all PRs should include:
* well documented code changes
* additional testcases. Ideally, they should fail w/o your code change applied
* documentation changes
Squash your commits into logical pieces of work that might want to be reviewed
separately from the rest of the PR. Ideally, each commit should implement a single
idea, and the PR branch should pass the tests at every commit. GitHub makes it easy
to review the cumulative effect of many commits; so, when in doubt, use smaller commits.
PRs that fix issues should include a reference like `Closes #XXXX` in the
commit message so that github will automatically close the referenced issue
when the PR is merged.
<!--
All PRs require at least two LGTMs (Looks Good To Me) from maintainers.
-->
### Sign your PRs
The sign-off is a line at the end of the explanation for the patch. Your
signature certifies that you wrote the patch or otherwise have the right to pass
it on as an open-source patch. The rules are simple: if you can certify
the below (from [developercertificate.org](http://developercertificate.org/)):
```
Developer Certificate of Origin
Version 1.1
Copyright (C) 2004, 2006 The Linux Foundation and its contributors.
660 York Street, Suite 102,
San Francisco, CA 94110 USA
Everyone is permitted to copy and distribute verbatim copies of this
license document, but changing it is not allowed.
Developer's Certificate of Origin 1.1
By making a contribution to this project, I certify that:
(a) The contribution was created in whole or in part by me and I
have the right to submit it under the open source license
indicated in the file; or
(b) The contribution is based upon previous work that, to the best
of my knowledge, is covered under an appropriate open source
license and I have the right under that license to submit that
work with modifications, whether created in whole or in part
by me, under the same open source license (unless I am
permitted to submit under a different license), as indicated
in the file; or
(c) The contribution was provided directly to me by some other
person who certified (a), (b) or (c) and I have not modified
it.
(d) I understand and agree that this project and the contribution
are public and that a record of the contribution (including all
personal information I submit with it, including my sign-off) is
maintained indefinitely and may be redistributed consistent with
this project or the open source license(s) involved.
```
Then you just add a line to every git commit message:
Signed-off-by: Joe Smith <joe.smith@email.com>
Use your real name (sorry, no pseudonyms or anonymous contributions.)
If you set your `user.name` and `user.email` git configs, you can sign your
commit automatically with `git commit -s`.
### Dependencies management
Make sure [`vndr`](https://github.com/LK4D4/vndr) is installed.
In order to add a new dependency to this project:
- add a new line to `vendor.conf` according to `vndr` rules (e.g. `github.com/pkg/errors master`)
- run `make vendor`
In order to update an existing dependency:
- update the relevant dependency line in `vendor.conf`
- run `make vendor`
When new PRs for [containers/image](https://github.com/containers/image) break `skopeo` (i.e. `containers/image` tests fail in `make test-skopeo`):
- create a new branch in your `skopeo` checkout and switch to it
- update `vendor.conf`. Find out the `containers/image` dependency; update it to vendor from your own branch and your own repository fork (e.g. `github.com/containers/image my-branch https://github.com/runcom/image`)
- run `make vendor`
- make any other necessary changes in the skopeo repo (e.g. add other dependencies now required by `containers/image`, or update skopeo for changed `containers/image` API)
- optionally add new integration tests to the skopeo repo
- submit the resulting branch as a skopeo PR, marked “DO NOT MERGE”
- iterate until tests pass and the PR is reviewed
- then the original `containers/image` PR can be merged, disregarding its `make test-skopeo` failure
- as soon as possible after that, in the skopeo PR, restore the `containers/image` line in `vendor.conf` to use `containers/image:master`
- run `make vendor`
- update the skopeo PR with the result, drop the “DO NOT MERGE” marking
- after tests complete successfully again, merge the skopeo PR
## Communications
For general questions, or discussions, please use the
IRC group on `irc.freenode.net` called `container-projects`
that has been set up.
For discussions around issues/bugs and features, you can use the GitHub issue tracker.
log.Error("Unable to connect to local syslog daemon")
}else{
log.AddHook(hook)
}
}
```
Note: the Syslog hook also supports connecting to local syslog (e.g. "/dev/log", "/var/run/syslog" or "/var/run/log"). For details, please check the [syslog hook README](hooks/syslog/README.md).
| Hook | Description |
| ----- | ----------- |
| [Airbrake](https://github.com/gemnasium/logrus-airbrake-hook) | Send errors to the Airbrake API V3. Uses the official [`gobrake`](https://github.com/airbrake/gobrake) behind the scenes. |
| [Airbrake "legacy"](https://github.com/gemnasium/logrus-airbrake-legacy-hook) | Send errors to an exception tracking service compatible with the Airbrake API V2. Uses [`airbrake-go`](https://github.com/tobi/airbrake-go) behind the scenes. |
| [Papertrail](https://github.com/polds/logrus-papertrail-hook) | Send errors to the [Papertrail](https://papertrailapp.com) hosted logging service via UDP. |
| [Syslog](https://github.com/Sirupsen/logrus/blob/master/hooks/syslog/syslog.go) | Send errors to remote syslog server. Uses standard library `log/syslog` behind the scenes. |
| [Bugsnag](https://github.com/Shopify/logrus-bugsnag/blob/master/bugsnag.go) | Send errors to the Bugsnag exception tracking service. |
| [Sentry](https://github.com/evalphobia/logrus_sentry) | Send errors to the Sentry error logging and aggregation service. |
| [Hiprus](https://github.com/nubo/hiprus) | Send errors to a channel in hipchat. |
| [Logrusly](https://github.com/sebest/logrusly) | Send logs to [Loggly](https://www.loggly.com/) |
| [Slackrus](https://github.com/johntdyer/slackrus) | Hook for Slack chat. |
| [Journalhook](https://github.com/wercker/journalhook) | Hook for logging to `systemd-journald` |
| [Graylog](https://github.com/gemnasium/logrus-graylog-hook) | Hook for logging to [Graylog](http://graylog2.org/) |
| [Raygun](https://github.com/squirkle/logrus-raygun-hook) | Hook for logging to [Raygun.io](http://raygun.io/) |
| [LFShook](https://github.com/rifflock/lfshook) | Hook for logging to the local filesystem |
| [Honeybadger](https://github.com/agonzalezro/logrus_honeybadger) | Hook for sending exceptions to Honeybadger |
| [Mail](https://github.com/zbindenren/logrus_mail) | Hook for sending exceptions via mail |
| [Rollrus](https://github.com/heroku/rollrus) | Hook for sending errors to rollbar |
| [Fluentd](https://github.com/evalphobia/logrus_fluent) | Hook for logging to fluentd |
| [Mongodb](https://github.com/weekface/mgorus) | Hook for logging to mongodb |
| [InfluxDB](https://github.com/Abramovic/logrus_influxdb) | Hook for logging to influxdb |
| [Octokit](https://github.com/dorajistyle/logrus-octokit-hook) | Hook for logging to github via octokit |
| [DeferPanic](https://github.com/deferpanic/dp-logrus) | Hook for logging to DeferPanic |
#### Level logging
Logrus has six logging levels: Debug, Info, Warning, Error, Fatal and Panic.
```go
log.Debug("Useful debugging information.")
log.Info("Something noteworthy happened!")
log.Warn("You should probably take a look at this.")
log.Error("Something failed but I'm not quitting.")
// Calls os.Exit(1) after logging
log.Fatal("Bye.")
// Calls panic() after logging
log.Panic("I'm bailing.")
```
You can set the logging level on a `Logger`, then it will only log entries with
that severity or anything above it:
```go
// Will log anything that is info or above (warn, error, fatal, panic). Default.
log.SetLevel(log.InfoLevel)
```
It may be useful to set `log.Level = logrus.DebugLevel` in a debug or verbose
environment if your application has that.
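For example, a minimal sketch that turns on debug logging when a (purely illustrative) `APP_DEBUG` environment variable is set:
```go
import (
  "os"

  log "github.com/Sirupsen/logrus"
)

func init() {
  // APP_DEBUG is an illustrative name; use whatever your application exposes.
  if os.Getenv("APP_DEBUG") != "" {
    log.SetLevel(log.DebugLevel)
  }
}
```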
#### Entries
Besides the fields added with `WithField` or `WithFields` some fields are
automatically added to all logging events:
1. `time`. The timestamp when the entry was created.
2. `msg`. The logging message passed to `{Info,Warn,Error,Fatal,Panic}` after
   the `AddFields` call. E.g. `Failed to send event.`
3. `level`. The logging level. E.g. `info`.
#### Environments
Logrus has no notion of environment.
If you wish for hooks and formatters to only be used in specific environments,
you should handle that yourself. For example, if your application has a global
variable `Environment`, which is a string representation of the environment you
could do:
```go
import (
  log "github.com/Sirupsen/logrus"
)

func init() {
  // do something here to set environment depending on an environment variable
  // or command-line flag
  if Environment == "production" {
    log.SetFormatter(&log.JSONFormatter{})
  } else {
    // The TextFormatter is default, you don't actually have to do this.
    log.SetFormatter(&log.TextFormatter{})
  }
}
```
This configuration is how `logrus` was intended to be used, but JSON in
production is mostly only useful if you do log aggregation with tools like
Splunk or Logstash.
#### Formatters
The built-in logging formatters are:
* `logrus.TextFormatter`. Logs the event in colors if stdout is a tty, otherwise
  without colors.
  * *Note:* to force colored output when there is no TTY, set the `ForceColors`
    field to `true`. To force no colored output even if there is a TTY set the
    `DisableColors` field to `true`
* `logrus.JSONFormatter`. Logs fields as JSON.
* `logrus/formatters/logstash.LogstashFormatter`. Logs fields as [Logstash](http://logstash.net) Events.
You can define your own formatter by implementing the `Formatter` interface,
which requires a `Format` method taking a `*log.Entry`; for example:
```go
type MyJSONFormatter struct {
}

func (f *MyJSONFormatter) Format(entry *log.Entry) ([]byte, error) {
  // Note this doesn't include Time, Level and Message which are available on
  // the Entry. Consult `godoc` on information about those fields or read the
  // source of the official loggers.
  serialized, err := json.Marshal(entry.Data)
  if err != nil {
    return nil, fmt.Errorf("Failed to marshal fields to JSON, %v", err)
  }
  return append(serialized, '\n'), nil
}
```
#### Logger as an `io.Writer`
Logrus can be transformed into an `io.Writer`. That writer is the end of an `io.Pipe` and it is your responsibility to close it.
```go
w := logger.Writer()
defer w.Close()
srv := http.Server{
// create a stdlib log.Logger that writes to
// logrus.Logger.
ErrorLog: log.New(w, "", 0),
}
```
Each line written to that writer will be printed the usual way, using formatters
and hooks. The level for those entries is `info`.
#### Rotation
Log rotation is not provided with Logrus. Log rotation should be done by an
external program (like `logrotate(8)`) that can compress and delete old log
entries. It should not be a feature of the application-level logger.
#### Tools
| Tool | Description |
| ---- | ----------- |
|[Logrus Mate](https://github.com/gogap/logrus_mate)|Logrus Mate is a tool for Logrus to manage loggers; you can initialize a logger's level, hook and formatter from a config file, and the logger will be generated with a different config for each environment.|
If you want to connect to local syslog (e.g. "/dev/log", "/var/run/syslog" or "/var/run/log"), just assign an empty string to the first two parameters of `NewSyslogHook`. It should look like the following.
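Presumably, mirroring the UDP example earlier in this README (the `logrus_syslog` import alias refers to the syslog hook package linked above):
```go
import (
  log "github.com/Sirupsen/logrus"
  logrus_syslog "github.com/Sirupsen/logrus/hooks/syslog"
  "log/syslog"
)

func init() {
  // Empty network and address select the local syslog socket.
  hook, err := logrus_syslog.NewSyslogHook("", "", syslog.LOG_INFO, "")
  if err == nil {
    log.AddHook(hook)
  }
}
```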
`cli.go` is a simple, fast, and fun package for building command line apps in Go. The goal is to enable developers to write fast and distributable command line applications in an expressive way.
## Overview
Command line apps are usually so tiny that there is absolutely no reason why your code should *not* be self-documenting. Things like generating help text and parsing command flags/options should not hinder productivity when writing a command line app.
**This is where `cli.go` comes into play.** `cli.go` makes command line programming fun, organized, and expressive!
## Installation
Make sure you have a working Go environment (go 1.1+ is *required*). [See the install instructions](http://golang.org/doc/install.html).
To install `cli.go`, simply run:
```
$ go get github.com/codegangsta/cli
```
Make sure your `PATH` includes the `$GOPATH/bin` directory so your commands can be easily used:
```
export PATH=$PATH:$GOPATH/bin
```
## Getting Started
One of the philosophies behind `cli.go` is that an API should be playful and full of discovery. So a `cli.go` app can be as little as one line of code in `main()`.
``` go
package main
import (
"os"
"github.com/codegangsta/cli"
)
func main() {
cli.NewApp().Run(os.Args)
}
```
This app will run and show help text, but is not very useful. Let's give an action to execute and some help documentation:
``` go
package main
import (
"os"
"github.com/codegangsta/cli"
)
func main() {
app := cli.NewApp()
app.Name = "boom"
app.Usage = "make an explosive entrance"
app.Action = func(c *cli.Context) {
println("boom! I say!")
}
app.Run(os.Args)
}
```
Running this already gives you a ton of functionality, plus support for things like subcommands and flags, which are covered below.
## Example
Being a programmer can be a lonely job. Thankfully by the power of automation that is not the case! Let's create a greeter app to fend off our demons of loneliness!
Start by creating a directory named `greet`, and within it, add a file, `greet.go` with the following code in it:
``` go
package main
import (
"os"
"github.com/codegangsta/cli"
)
func main() {
app := cli.NewApp()
app.Name = "greet"
app.Usage = "fight the loneliness!"
app.Action = func(c *cli.Context) {
println("Hello friend!")
}
app.Run(os.Args)
}
```
Install our command to the `$GOPATH/bin` directory:
```
   help, h	Shows a list of commands or help for one command
GLOBAL OPTIONS
--version Shows version information
```
### Arguments
You can lookup arguments by calling the `Args` function on `cli.Context`.
``` go
...
app.Action = func(c *cli.Context) {
println("Hello", c.Args()[0])
}
...
```
### Flags
Setting and querying flags is simple.
``` go
...
app.Flags = []cli.Flag {
cli.StringFlag{
Name: "lang",
Value: "english",
Usage: "language for the greeting",
},
}
app.Action = func(c *cli.Context) {
name := "someone"
if len(c.Args()) > 0 {
name = c.Args()[0]
}
if c.String("lang") == "spanish" {
println("Hola", name)
} else {
println("Hello", name)
}
}
...
```
You can also set a destination variable for a flag, to which the content will be scanned.
``` go
...
var language string
app.Flags = []cli.Flag {
cli.StringFlag{
Name: "lang",
Value: "english",
Usage: "language for the greeting",
Destination: &language,
},
}
app.Action = func(c *cli.Context) {
name := "someone"
if len(c.Args()) > 0 {
name = c.Args()[0]
}
if language == "spanish" {
println("Hola", name)
} else {
println("Hello", name)
}
}
...
```
See full list of flags at http://godoc.org/github.com/codegangsta/cli
#### Alternate Names
You can set alternate (or short) names for flags by providing a comma-delimited list for the `Name`. e.g.
``` go
app.Flags = []cli.Flag {
cli.StringFlag{
Name: "lang, l",
Value: "english",
Usage: "language for the greeting",
},
}
```
That flag can then be set with `--lang spanish` or `-l spanish`. Note that giving two different forms of the same flag in the same command invocation is an error.
#### Values from the Environment
You can also have the default value set from the environment via `EnvVar`. e.g.
``` go
app.Flags = []cli.Flag {
cli.StringFlag{
Name: "lang, l",
Value: "english",
Usage: "language for the greeting",
EnvVar: "APP_LANG",
},
}
```
The `EnvVar` may also be given as a comma-delimited "cascade", where the first environment variable that resolves is used as the default.
``` go
app.Flags = []cli.Flag {
cli.StringFlag{
Name: "lang, l",
Value: "english",
Usage: "language for the greeting",
EnvVar: "LEGACY_COMPAT_LANG,APP_LANG,LANG",
},
}
```
### Subcommands
Subcommands can be defined for a more git-like command line app.
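As a rough sketch (command names are purely illustrative), subcommands are declared on `app.Commands`:
``` go
package main

import (
  "os"

  "github.com/codegangsta/cli"
)

func main() {
  app := cli.NewApp()
  app.Name = "todo"
  app.Usage = "manage a task list"
  app.Commands = []cli.Command{
    {
      Name:  "add",
      Usage: "add a task to the list",
      Action: func(c *cli.Context) {
        println("added task:", c.Args().First())
      },
    },
    {
      Name:  "complete",
      Usage: "complete a task on the list",
      Action: func(c *cli.Context) {
        println("completed task:", c.Args().First())
      },
    },
  }
  app.Run(os.Args)
}
```
Running `todo add groceries` would then invoke the `add` command's Action with `groceries` as the first argument.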
Alternatively, you can just document that users should source the generic
`autocomplete/bash_autocomplete` in their bash configuration with `$PROG` set
to the name of their program (as above).
## Contribution Guidelines
Feel free to put up a pull request to fix a bug or maybe add a feature. I will give it a code review and make sure that it does not break backwards compatibility. If I or any other collaborators agree that it is in line with the vision of the project, we will work with you to get the code into a mergeable state and merge it into the master branch.
If you have contributed something significant to the project, I will most likely add you as a collaborator. As a collaborator you are given the ability to merge others' pull requests. It is very important that new code does not break existing code, so be careful about what code you do choose to merge. If you have any questions feel free to link @codegangsta to the issue in question and we can review it together.
If you feel like you have contributed to the project but have not yet been added as a collaborator, I probably forgot to add you. Hit @codegangsta up over email and we will get it figured out.
- your account on the [Docker Hub](https://hub.docker.com/)
- any other [Docker Hub](https://hub.docker.com/) issue
Then please do not report your issue here - you should instead report it to [https://support.docker.com](https://support.docker.com)
### If you...
- need help setting up your registry
- can't figure out something
- are not sure what's going on or what your problem is
Then please do not open an issue here yet - you should first try one of the following support forums:
- irc: #docker-distribution on freenode
- mailing-list: <distribution@dockerproject.org> or https://groups.google.com/a/dockerproject.org/forum/#!forum/distribution
## Reporting an issue properly
By following these simple rules you will get better and faster feedback on your issue.
- search the bugtracker for an already reported issue
### If you found an issue that describes your problem:
- please read other user comments first, and confirm this is the same issue: a given error condition might be indicative of different problems - you may also find a workaround in the comments
- please refrain from adding "same thing here" or "+1" comments
- you don't need to comment on an issue to get notified of updates: just hit the "subscribe" button
- comment if you have some new, technical and relevant information to add to the case
- __DO NOT__ comment on closed issues or merged PRs. If you think you have a related problem, open up a new issue and reference the PR or issue.
### If you have not found an existing issue that describes your problem:
1. create a new issue, with a succinct title that describes your issue:
- bad title: "It doesn't work with my docker"
- good title: "Private registry push fail: 400 error with E_INVALID_DIGEST"
3. copy the command line you used to launch your Registry
4. restart your docker daemon in debug mode (add `-D` to the daemon launch arguments)
5. reproduce your problem and get your docker daemon logs showing the error
6. if relevant, copy your registry logs that show the error
7. provide any relevant detail about your specific Registry configuration (e.g., storage backend used)
8. indicate if you are using an enterprise proxy, Nginx, or anything else between you and your Registry
## Contributing a patch for a known bug, or a small correction
You should follow the basic GitHub workflow:
1. fork
2. commit a change
3. make sure the tests pass
4. PR
Additionally, you must [sign your commits](https://github.com/docker/docker/blob/master/CONTRIBUTING.md#sign-your-work). It's very simple:
- configure your name with git: `git config user.name "Real Name" && git config user.email mail@example.com`
- sign your commits using `-s`: `git commit -s -m "My commit"`
Some simple rules to ensure quick merge:
- clearly point to the issue(s) you want to fix in your PR comment (e.g., `closes #12345`)
- prefer multiple (smaller) PRs addressing individual issues over a big one trying to address multiple issues at once
- if you need to amend your PR following comments, please squash instead of adding more commits
## Contributing new features
You are heavily encouraged to first discuss what you want to do. You can do so on the irc channel, or by opening an issue that clearly describes the use case you want to fulfill, or the problem you are trying to solve.
If this is a major new feature, you should then submit a proposal that describes your technical solution and reasoning.
If you did discuss it first, this will likely be greenlighted very fast. It's advisable to address all feedback on this proposal before starting actual work.
Then you should submit your implementation, clearly linking to the issue (and possible proposal).
Your PR will be reviewed by the community, then ultimately by the project maintainers, before being merged.
It's mandatory to:
- interact respectfully with other community members and maintainers - more generally, you are expected to abide by the [Docker community rules](https://github.com/docker/docker/blob/master/CONTRIBUTING.md#docker-community-guidelines)
- address maintainers' comments and modify your submission accordingly
- write tests for any new code
Complying with these simple rules will greatly accelerate the review process, and will ensure you have a pleasant experience in contributing code to the Registry.
Have a look at a great, successful contribution: the [Ceph driver PR](https://github.com/docker/distribution/pull/443)
## Coding Style
Unless explicitly stated, we follow all coding guidelines from the Go
community. While some of these standards may seem arbitrary, they somehow seem
to result in a solid, consistent codebase.
It is possible that the code base does not currently comply with these
guidelines. We are not looking for a massive PR that fixes this, since that
goes against the spirit of the guidelines. All new contributions should make a
best effort to clean up and make the code base better than they left it.
Obviously, apply your best judgement. Remember, the goal here is to make the
code base easier for humans to navigate and understand. Always keep that in
mind when nudging others to comply.
The rules:
1. All code should be formatted with `gofmt -s`.
2. All code should pass the default levels of
[`golint`](https://github.com/golang/lint).
3. All code should follow the guidelines covered in [Effective
Go](http://golang.org/doc/effective_go.html) and [Go Code Review
Comments](https://github.com/golang/go/wiki/CodeReviewComments).
| **registry** | An implementation of the [Docker Registry HTTP API V2](docs/spec/api.md) for use with docker 1.6+. |
| **libraries** | A rich set of libraries for interacting with distribution components. Please see [godoc](https://godoc.org/github.com/docker/distribution) for details. **Note**: These libraries are **unstable**. |
| **specifications** | _Distribution_ related specifications are available in [docs/spec](docs/spec) |
| **documentation** | Docker's full documentation set is available at [docs.docker.com](https://docs.docker.com). This repository [contains the subset](docs/index.md) related just to the registry. |
### How does this integrate with Docker engine?
This project should provide an implementation to a V2 API for use in the [Docker
core project](https://github.com/docker/docker). The API should be embeddable
and simplify the process of securely pulling and pushing content from `docker`
daemons.
### What are the long term goals of the Distribution project?
The _Distribution_ project has the further long term goal of providing a
secure tool chain for distributing content. The specifications, APIs and tools
should be as useful with Docker as they are without.
Our goal is to design a professional grade and extensible content distribution
system that allows users to:
* Enjoy an efficient, secured and reliable way to store, manage, package and
exchange content
* Hack/roll their own on top of healthy open-source components
* Implement their own home-made solution through good specs and a solid
extension mechanism.
## More about Registry 2.0
The new registry implementation provides the following benefits:
- faster push and pull
- new, more efficient implementation
- simplified deployment
- pluggable storage backend
- webhook notifications
For information on upcoming functionality, please see [ROADMAP.md](ROADMAP.md).
### Who needs to deploy a registry?
By default, Docker users pull images from Docker's public registry instance.
[Installing Docker](https://docs.docker.com/engine/installation/) gives users this
ability. Users can also push images to a repository on Docker's public registry,
if they have a [Docker Hub](https://hub.docker.com/) account.
For some users and even companies, this default behavior is sufficient. For
others, it is not.
For example, users with their own software products may want to maintain a
registry for private, company images. Also, you may wish to deploy your own
image repository for images used in testing or continuous integration. For these
use cases and others, [deploying your own registry instance](docs/deploying.md)
may be the better choice.
### Migration to Registry 2.0
For those who have previously deployed their own registry based on the Registry
1.0 implementation and wish to deploy a Registry 2.0 while retaining images,
data migration is required. A tool to assist with migration efforts has been
created. For more information see
[docker/migrator](https://github.com/docker/migrator).
## Contribute
Please see [CONTRIBUTING.md](CONTRIBUTING.md) for details on how to contribute
issues, fixes, and patches to this project. If you are contributing code, see
the instructions for [building a development environment](docs/building.md).
## Support
If any issues are encountered while using the _Distribution_ project, several
avenues are available for support.
// WithTrace allocates a traced timing span in a new context. This allows a
// caller to track the time between calling WithTrace and the returned done
// function. When the done function is called, a log message is emitted with a
// "trace.duration" field, corresponding to the elapased time and a
// "trace.func" field, corresponding to the function that called WithTrace.
//
// The logging keys "trace.id" and "trace.parent.id" are provided to implement
// dapper-like tracing. This function should be complemented with a WithSpan
// method that could be used for tracing distributed RPC calls.
//
// The main benefit of this function is to post-process log messages or
// intercept them in a hook to provide timing data. Trace ids and parent ids
// can also be linked to provide call tracing, if so required.
//
// Here is an example of the usage:
//
// func timedOperation(ctx Context) {
// ctx, done := WithTrace(ctx)
// defer done("this will be the log message")
// // ... function body ...
// }
//
// If the function ran for roughly 1s, such a usage would emit a log message
// as follows:
//
// INFO[0001] this will be the log message trace.duration=1.004575763s trace.func=github.com/docker/distribution/context.traceOperation trace.id=<id> ...
//
// Notice that the function name is automatically resolved, along with the
// package and a trace id is emitted that can be linked with parent ids.