Compare commits

...

261 Commits

Author SHA1 Message Date
Miloslav Trmač
e079f9d61b Bump to version v0.1.37
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2019-06-14 17:56:35 +02:00
Valentin Rothberg
ceabc0a404 Merge pull request #679 from mtrmac/rebases
Update buildah to 1.8.4, c/storage to 1.12.10
2019-06-14 17:51:42 +02:00
Miloslav Trmač
523b8b44a2 Update buildah to 1.8.4, c/storage to 1.12.10
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2019-06-14 17:24:35 +02:00
Miloslav Trmač
d2d1796eb5 Merge pull request #666 from mtrmac/registries.conf-mirrors
Rebase containers/image to v2.0.0
2019-06-14 01:07:43 +02:00
Miloslav Trmač
c67e5f7425 Rebase containers/image to v2.0.0
This adds the mirror-by-digest-only option to mirrors, and moves the search
order to an independent list.

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2019-06-14 00:19:39 +02:00
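For illustration, a registries.conf fragment in the new format might look like the sketch below; the hosts are placeholders, and the exact key names should be checked against containers-registries.conf(5):

    # the search order now lives in an independent list
    unqualified-search-registries = ["docker.io"]

    [[registry]]
    prefix = "registry.example.com"
    location = "registry.example.com"
    # consult the mirror only for digest-pinned pulls
    mirror-by-digest-only = true

    [[registry.mirror]]
    location = "mirror.example.com"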
Miloslav Trmač
1b8686d044 Merge pull request #673 from mtrmac/systemtest-openpgp
Skip systemtest/050-signing.bats if skopeo can't create signatures
2019-06-13 19:07:12 +02:00
Miloslav Trmač
a4de1428f9 Skip systemtest/050-signing.bats if skopeo can't create signatures
This does not happen in this repo's tests, but containers/image's
(make test-skopeo) fails in the containers_image_openpgp configuration with

> not ok 10 signing
> ...
> # time="2019-06-11T20:59:32Z" level=fatal msg="Signing not supported: signing is not supported in github.com/containers/image built with the containers_image_openpgp build tag"

To reproduce/test this:
> make test-system BUILDTAGS='ostree containers_image_openpgp'

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2019-06-13 18:53:53 +02:00
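A hypothetical sketch of such a skip in BATS; the probe command, reference, and fingerprint variable are assumptions, not the test's literal code:

    @test "signing" {
        run skopeo standalone-sign /dev/null busybox "$GPG_FINGERPRINT" -o /dev/null
        if [[ "$output" =~ "signing is not supported" ]]; then
            skip "this skopeo build cannot create signatures"
        fi
        # ... the actual signing checks follow
    }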
Miloslav Trmač
524f6c0682 Merge pull request #677 from edsantiago/wait_for_registry
start_registry: wait for registry to be ready
2019-06-13 18:48:59 +02:00
Ed Santiago
fa18fce7e8 start_registry: wait for registry to be ready
The usual 'podman run -d' race condition: we've been forking
off the container but not actually making sure it's up; this
leads to flakes in which we try (and fail) to access it.

Solution: use curl to check the port; we will expect a zero
exit status once we can connect. Time out at ten seconds.

Resolves: #675

Signed-off-by: Ed Santiago <santiago@redhat.com>
2019-06-13 09:27:58 -06:00
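A minimal sketch of such a wait loop; the port and the one-second poll interval are illustrative, not the test's actual values:

    timeout=10
    while ! curl -s -o /dev/null http://127.0.0.1:5000/; do
        sleep 1
        timeout=$((timeout - 1))
        if [[ $timeout -le 0 ]]; then
            echo "timed out waiting for registry" >&2
            exit 1
        fi
    done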
Miloslav Trmač
96be1bb155 Merge pull request #668 from mtrmac/fedora-30-gpg2
Explicitly disable encrypting the test GPG key
2019-06-11 16:50:10 +02:00
Miloslav Trmač
23c6b42b26 Explicitly disable encrypting test GPG keys
Since GPG 2.1, GPG asks for a passphrase by default; opt out when
generating test keys to avoid
> gpg: agent_genkey failed: No pinentry
> gpg: key generation failed: No pinentry
which happens otherwise (and we can't use an interactive pinentry
in a batch process anyway).

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2019-06-10 22:03:23 +02:00
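In batch mode the opt-out is the %no-protection directive; a sketch with assumed key parameters:

    # write an unattended key-generation parameter file, then generate
    printf '%s\n' '%no-protection' 'Key-Type: RSA' 'Key-Length: 2048' \
        'Name-Real: Skopeo Test' 'Name-Email: skopeo-test@example.com' \
        '%commit' > key-params
    gpg --batch --gen-key key-params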
Miloslav Trmač
6307635b5f Merge pull request #659 from edsantiago/systemtests
systemtest - new set of BATS tests for RHEL8 gating
2019-06-04 18:55:07 +02:00
Ed Santiago
47e7cda4e9 System tests - get working under podman-in-podman
Skopeo CI tests run under podman; hence the registries
run in the tests will be podman-in-podman. This requires
complex muckery to make work:

 - install bats, jq, and podman in the test image
 - add new test-system Make target. It runs podman
   with /var/lib/containers bind-mounted to a tmpdir
   and with other necessary options; and invokes a
   test script that hack-edits /etc/containers/storage.conf
   before running podman for the first time.
 - add --cgroup-manager=cgroupfs option to podman
   invocations in BATS: without this, podman-in-podman
   fails with:
       systemd cgroup flag passed, but systemd support for managing cgroups is not available

Also: gpg --pinentry-mode option is not available on all
our test platforms. Check for it before using.

Signed-off-by: Ed Santiago <santiago@redhat.com>
2019-05-28 10:53:12 -06:00
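A rough sketch of the two layers described above; the image name and paths are hypothetical, only the bind mount and the cgroupfs flag come from the commit message:

    # outer: run the test image with container storage on a fresh tmpdir
    podman run --rm --privileged \
        -v "$(mktemp -d)":/var/lib/containers \
        skopeo-test-image bats /usr/src/skopeo/systemtest

    # inner (inside BATS): force cgroupfs so podman-in-podman works
    podman --cgroup-manager=cgroupfs run -d registry:2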
Ed Santiago
5dd3b2bffd fixup! Incorporate review feedback from mtrmac
- Got TLS registry working, and test enabled. The trick was to
  copy the .crt file to a separate directory *without* the .key

- auth test - set up a private XDG_RUNTIME_DIR, in case tests
  are being run by a real user.

- signing test - remove FIXME comments; questions answered.

- helpers.bash - document start_registries(); save a .crt file,
  not .cert; and remove unused stop_registries() - it's too hard
  to do right, and very easy for individual tests to 'podman rm -f'

- run-tests - remove SKOPEO_BINARY definition, it's inconsistent
  with the one in helpers.bash

Signed-off-by: Ed Santiago <santiago@redhat.com>
2019-05-28 10:10:50 -06:00
Ed Santiago
12f0e24519 systemtest - new set of BATS tests for RHEL8 gating
Signed-off-by: Ed Santiago <santiago@redhat.com>
2019-05-28 10:10:50 -06:00
Miloslav Trmač
b137741385 Merge pull request #664 from mtrmac/ubuntu-build
Fix build on Ubuntu
2019-05-27 17:19:33 +02:00
Miloslav Trmač
233804fedc Fix build on Ubuntu
btrfs/ioctl.h is in libbtrfs-dev (now?); btrfs-tools does not pull it in.

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2019-05-27 17:01:44 +02:00
Daniel J Walsh
0c90e57eaf Merge pull request #657 from TristanCacqueray/master
Add integration test for invalid reference
2019-05-20 08:14:14 -04:00
Tristan Cacqueray
8fb4ab3d92 Add integration test for invalid reference
This change adds a couple of tests to guard against a recurrence of the regression
introduced by https://github.com/containers/skopeo/pull/653

Signed-off-by: Tristan Cacqueray <tdecacqu@redhat.com>
2019-05-19 03:02:19 +00:00
Miloslav Trmač
8c9e250801 Merge pull request #656 from rhatdan/unshare
Skopeo crashes on any invalid transport
2019-05-18 21:07:10 +02:00
Daniel J Walsh
04aee56a36 Skopeo crashes on any invalid transport
We need to verify that the user entered a valid transport before attempting
to see if the transport exists, otherwise skopeo segfaults.

Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
2019-05-18 12:30:15 -04:00
Daniel J Walsh
4f1fabc2a4 Merge pull request #654 from rhatdan/master
Update release to v0.1.36
2019-05-18 07:37:37 -04:00
Daniel J Walsh
43bc356337 Move to version v0.1.37-dev
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
2019-05-18 06:39:12 -04:00
Daniel J Walsh
41991bab70 Bump to version v0.1.36
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
2019-05-18 06:37:01 -04:00
Miloslav Trmač
2b5086167f Merge pull request #653 from TristanCacqueray/master
rootless: don't create a namespace unless for containers-storage
2019-05-18 05:49:21 +02:00
Tristan Cacqueray
b46d16f48c rootless: don't create a namespace unless for containers-storage
This change fixes skopeo usage in restricted environment such as
bubblewrap where it doesn't need extra capabilities or user namespace
to perform its action.

Close #649
Signed-off-by: Tristan Cacqueray <tdecacqu@redhat.com>
2019-05-18 02:53:20 +00:00
Tristan Cacqueray
9fef0eb3f3 vendor: update containers/image
Depends-On: https://github.com/containers/image/pull/631
Signed-off-by: Tristan Cacqueray <tdecacqu@redhat.com>
2019-05-18 02:53:20 +00:00
Miloslav Trmač
30b0a1741e Merge pull request #650 from csomh/fix-man-page-typo
Fix typo on the main man page
2019-05-15 18:51:37 +02:00
Hunor Csomortáni
945b9dc08f Fix typo on the main man page
Signed-off-by: Hunor Csomortáni <csomh@redhat.com>
2019-05-15 17:20:26 +02:00
Miloslav Trmač
904b064da4 Merge pull request #647 from nalind/config
inspect: add a --config flag
2019-05-09 15:37:12 +02:00
Nalin Dahyabhai
7ae62af073 inspect: add a --config flag
Add a --config option to "skopeo inspect" to dump an image's
configuration blob in the OCI format, or the original format
if --config and --raw are specified.

Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>
2019-05-08 11:07:52 -04:00
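Usage then looks like this; the image name is arbitrary:

    skopeo inspect --config docker://fedora          # config blob, converted to OCI format
    skopeo inspect --config --raw docker://fedora    # config blob in its original format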
Miloslav Trmač
7525a79c93 Merge pull request #646 from QiWang19/creds
Add --no-creds flag to skopeo inspect
2019-05-07 20:53:02 +02:00
juanluisvaladas
07287b5783 Add --no-creds flag to skopeo inspect
Follow PR #433
Close #421

Currently skopeo inspect allows you to:
- use the default credentials in $HOME/.docker/config.json
- explicitly define credentials via the --creds flag

This implements a --no-creds flag which queries docker registries anonymously.

Signed-off-by: Qi Wang <qiwan@redhat.com>
2019-05-03 13:30:33 -04:00
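For example, to query a registry anonymously; the image name is arbitrary:

    skopeo inspect --no-creds docker://docker.io/library/alpine:latest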
Daniel J Walsh
0a2a62ac20 Merge pull request #618 from SUSE/registry-mirror
Add skopeo registry mirror integration tests
2019-04-25 06:26:08 -04:00
Daniel J Walsh
5581c62a3a Merge pull request #632 from hakandilek/master
build image updated to ubuntu:18.10
2019-04-25 06:24:47 -04:00
Sascha Grunert
6b5bdb7563 Add skopeo registry mirror integration tests
- Update toml to latest release
- Update containers/image
- Add integration tests
- Add hidden `--registry-conf` flag used by the integration tests

Signed-off-by: Sascha Grunert <sgrunert@suse.com>
2019-04-25 11:35:12 +02:00
Valentin Rothberg
2bdffc89c2 Merge pull request #640 from rhatdan/vendor
Vendor update container/storage
2019-04-25 11:17:15 +02:00
Daniel J Walsh
65e6449c95 Vendor update container/storage
overlay: propagate errors from mountProgram
utils: root in a userns uses global conf file
Fix handling of additional stores
Correctly check permissions on rootless directory
Fix possible integer overflow on 32bit builds
Evaluate device path for lvm
lockfile test: make concurrent RW test deterministic
lockfile test: make concurrent read tests deterministic

Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
2019-04-24 20:32:46 -04:00
Valentin Rothberg
2829f7da9e Merge pull request #638 from giuseppe/skip-namespace-if-not-needed
rootless: do not create a user namespace if not needed
2019-04-24 14:27:48 +02:00
Giuseppe Scrivano
ece44c2842 rootless: do not create a user namespace if not needed
do not create a user namespace if we already have the capabilities we
need for pulling and storing an image.

Closes: https://github.com/containers/skopeo/issues/637

Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
2019-04-24 13:48:31 +02:00
Miloslav Trmač
0fa335c149 Merge pull request #635 from SUSE/buildah-update
Vendor the latest buildah master
2019-04-24 07:16:29 +02:00
Sascha Grunert
5c0ad57c2c Vendor the latest buildah master
This commit contains the necessary split-up between buildah/pkg and
buildah/util to avoid dependency breaks.

Signed-off-by: Sascha Grunert <sgrunert@suse.com>
2019-04-23 15:07:37 +02:00
Hakan Dilek
b2934e7cf6 build image updated to ubuntu:18.10
fixes #621

Signed-off-by: Hakan Dilek <hakandilek@gmail.com>
2019-04-17 22:04:12 +02:00
Daniel J Walsh
2af7114ea1 Merge pull request #629 from chuanchang/add_help_to_makefile
added help to Makefile
2019-04-17 12:04:02 -04:00
Alex Jia
0e1cc9203e Merge branch 'master' into add_help_to_makefile 2019-04-17 09:49:44 +08:00
Miloslav Trmač
e255ccc145 Merge pull request #630 from lsm5/go-envvar
use GO envvar throughout in Makefile
2019-04-16 19:50:51 +02:00
Alex Jia
9447a55b61 added help to Makefile
Signed-off-by: Alex Jia <chuanchang.jia@gmail.com>
2019-04-16 09:29:10 +08:00
Lokesh Mandvekar
9fdceeb2b2 use GO envvar throughout in Makefile
Signed-off-by: Lokesh Mandvekar <lsm5@fedoraproject.org>
2019-04-16 00:04:57 +00:00
Daniel J Walsh
18ee5f8119 Merge pull request #628 from vrothberg/update-bolt
Switch to github.com/etcd-io/bbolt
2019-04-12 12:36:01 -04:00
Valentin Rothberg
ab6a17059c Switch to github.com/etcd-io/bbolt
github.com/boltdb/bolt is no longer maintained.

Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
2019-04-12 17:27:09 +02:00
Miloslav Trmač
81c5e94850 Merge pull request #624 from giuseppe/skopeo-rootless
skopeo: add rootless support
2019-04-11 21:05:59 +02:00
Daniel J Walsh
99dc83062a Merge pull request #627 from SUSE/storage-v1.12.2
Update containers/storage to v1.12.2
2019-04-11 07:44:29 -04:00
Sascha Grunert
4d8ea6729f Update containers/storage to v1.12.2
This commit simply bumps containers/storage to the latest version to
unblock the containers/image integration test runs.

Signed-off-by: Sascha Grunert <sgrunert@suse.com>
2019-04-11 10:52:28 +02:00
Giuseppe Scrivano
ac85091ecd skopeo: create a userns when running rootless
Closes: https://github.com/containers/skopeo/issues/623

Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
2019-04-10 16:51:46 +02:00
Giuseppe Scrivano
ffa640c2b0 vendor: add and update containers/{buildah,image}
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
2019-04-10 09:33:13 +02:00
Miloslav Trmač
c73bcba7e6 Merge pull request #626 from grdryn/fix-links
Update broken links in info docs
2019-04-08 14:56:45 +02:00
Gerard Ryan
329e1cf61c Update broken links in info docs 2019-04-07 14:37:14 +01:00
Valentin Rothberg
854f766dc7 Merge pull request #622 from rhatdan/man
Make sure we install man pages
2019-03-27 13:22:03 +01:00
Daniel J Walsh
5c73fdbfdc Make sure we install man pages
Currently we are only installing the skopeo.1 man page.  This
change will generate and install all man pages.

Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
2019-03-27 05:52:48 -04:00
Valentin Rothberg
097549748a Merge pull request #620 from rhatdan/vendor
Vendor in latest containers/storage and containers/image
2019-03-25 10:49:46 +01:00
Daniel J Walsh
032309941b Vendor in latest containers/storage and containers/image
Update containers/storage and containers/image to define location of local storage.

Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
2019-03-24 13:32:35 -04:00
Valentin Rothberg
d93a581fb8 Merge pull request #615 from vrothberg/fix-613
vendor: don't remove containers/image/registries.conf
2019-03-13 17:23:05 +01:00
Valentin Rothberg
52075ab386 Merge branch 'master' into fix-613 2019-03-13 14:20:44 +01:00
Miloslav Trmač
d65ae4b1d7 Merge pull request #616 from vrothberg/vendor-image
vendor containers/image
2019-03-13 14:19:05 +01:00
Valentin Rothberg
c32d27f59e Merge branch 'master' into fix-613 2019-03-13 13:51:16 +01:00
Valentin Rothberg
883d65a54a vendor containers/image
The progress bars now show messages on completion of the copy
operations.

Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
2019-03-13 08:39:40 +01:00
Miloslav Trmač
94728fb73f Merge pull request #614 from vrothberg/vendor-storage-image
WIP - Vendor storage image
2019-03-12 17:04:11 +01:00
Valentin Rothberg
520f0e5ddb WIP - update storage & image
TEST PR for: https://github.com/containers/image/pull/603

Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
2019-03-12 14:38:48 +01:00
Valentin Rothberg
fa39b49a5c vendor: don't remove containers/image/registries.conf
Instruct vndr to not remove image/registries.conf to ease packaging on
Ubuntu.

Fixes: #613
Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
2019-03-11 17:37:14 +01:00
Valentin Rothberg
0490018903 Merge pull request #611 from eramoto/completions-global-option
completions: Fix completions with a global option and indentation
2019-03-06 11:04:11 +01:00
ERAMOTO Masaya
b434c8f424 completions: Use only spaces in indent
Since the indentation mixed tabs and spaces, with some tabs expected to be
4 spaces wide and others 8 spaces wide, switch to using only spaces for
indentation.

Signed-off-by: ERAMOTO Masaya <eramoto.masaya@jp.fujitsu.com>
2019-03-06 11:45:41 +09:00
ERAMOTO Masaya
79de2d9f09 completions: Fix completions with a global option
After a global option was specified, subsequent global options, commands,
and command options were not completed.

Signed-off-by: ERAMOTO Masaya <eramoto.masaya@jp.fujitsu.com>
2019-03-06 11:45:13 +09:00
Valentin Rothberg
2031e17b3c Merge pull request #609 from rhatdan/release
Release 0.1.35
2019-03-05 14:43:59 +01:00
Daniel J Walsh
5a050c1383 version: bump to v0.1.36-dev
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
2019-03-04 16:18:07 -05:00
Daniel J Walsh
404c5bd341 version: bump to v0.1.35
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
2019-03-04 16:18:07 -05:00
Valentin Rothberg
2134209960 Merge pull request #608 from rhatdan/vendor
Vendor in latest containers/storage and image
2019-03-01 15:05:56 +01:00
Daniel J Walsh
1e8c029562 Vendor in latest containers/storage and image
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
2019-03-01 07:16:57 -05:00
Miloslav Trmač
932b037d66 Merge pull request #606 from vrothberg/vendor-vendor-vendor
Vendor updates
2019-02-23 03:37:46 +01:00
Valentin Rothberg
26a48586a0 Travis: add vendor checks
Add checks to Travis to make sure that vendor.conf is in sync with
the code and the dependencies in ./vendor.  Do this by first running
`make vendor` followed by running `./hack/tree_status.sh` to check if
any file in the tree has been changed.

Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
2019-02-22 12:21:36 +01:00
Valentin Rothberg
683f4263ef vendor.conf: remove unused dependencies
Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
2019-02-21 14:07:33 +01:00
Valentin Rothberg
ebfa1e936b vendor.conf: pin branches to releases or commits
Most of the dependencies have been copied from libpod's vendor.conf
where such a cleanup has been executed recently.

Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
2019-02-21 14:03:14 +01:00
Valentin Rothberg
509782e78b add hack/tree_status.sh
This script is meant to be used in CI after a `make vendor` run.  Its
sole purpose is to execute `git status --porcelain` and fail, printing
the list of files it reports.

Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
2019-02-21 13:50:00 +01:00
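A sketch of what such a script boils down to; the real hack/tree_status.sh may differ in wording:

    #!/usr/bin/env bash
    STATUS=$(git status --porcelain)
    if [[ -z "$STATUS" ]]; then
        echo "tree is clean"
    else
        echo "tree is dirty, please run 'make vendor' and commit the result:"
        echo "$STATUS"
        exit 1
    fi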
Valentin Rothberg
776b408f76 make vendor: always fetch the latest vndr
Make sure to always use the latest version of vndr by fetching it prior
to execution.

Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
2019-02-21 13:48:22 +01:00
Miloslav Trmač
fee5981ebf Merge pull request #604 from eramoto/transports-completions
completions: Introduce transports completions
2019-02-16 20:10:27 +01:00
Valentin Rothberg
d9e9604979 Merge pull request #602 from vrothberg/mpb-progress-bars
update containers/image
2019-02-16 10:37:27 +01:00
Valentin Rothberg
3606380bdb vendor latest containers/image
containers/image moved to a new progress-bar library to fix various
issues related to overlapping bars and redundant entries.

Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
2019-02-16 10:08:35 +01:00
ERAMOTO Masaya
640b967463 completions: Introduce transports completions
Introduces bash completions for the transports supported by the copy,
delete, and inspect commands.

Signed-off-by: ERAMOTO Masaya <eramoto.masaya@jp.fujitsu.com>
2019-02-15 14:27:55 +09:00
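A simplified sketch of the idea; the actual completion script is more involved, and the transport list per command may differ:

    _skopeo_complete_transports() {
        local cur=${COMP_WORDS[COMP_CWORD]}
        local transports="containers-storage: dir: docker:// docker-archive: docker-daemon: oci: ostree:"
        COMPREPLY=($(compgen -W "$transports" -- "$cur"))
    }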
Valentin Rothberg
b8b9913695 Merge pull request #603 from eramoto/modify-gitignore
Modify .gitignore for generated man pages
2019-02-13 10:16:44 +01:00
ERAMOTO Masaya
9e2720dfcc Modify .gitignore for generated man pages
Modify .gitignore to match any generated man page, since the skopeo man
page was split up in #598.

Signed-off-by: ERAMOTO Masaya <eramoto.masaya@jp.fujitsu.com>
2019-02-13 10:03:26 +09:00
Miloslav Trmač
b329dd0d4e Merge pull request #600 from nalind/storage-multiple-manifests
Vendor latest github.com/containers/storage
2019-02-08 01:02:50 +01:00
Nalin Dahyabhai
1b10352591 Vendor latest github.com/containers/storage
Update github.com/containers/storage to master(06b6c2e4cf254f5922a79da058c94ac2a65bb92f).

Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>
2019-02-07 17:20:45 -05:00
Daniel J Walsh
bba2874451 Merge pull request #598 from rhatdan/man
split up skopeo man pages
2019-02-01 15:15:46 -05:00
Daniel J Walsh
0322441640 Merge branch 'master' into man 2019-02-01 13:28:45 -05:00
Daniel J Walsh
8868d2ebe4 Merge pull request #596 from eramoto/fix-bash-completions
completions: Fix bash completions when an option requires an argument
2019-02-01 13:28:14 -05:00
Daniel J Walsh
f19acc1c90 split up skopeo man pages
Create a different man page for each of the subcommands.
Also replace some crufty references to kpod with podman

Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
2019-02-01 11:21:51 -05:00
Daniel J Walsh
47f24b4097 Merge branch 'master' into fix-bash-completions 2019-02-01 09:58:46 -05:00
Daniel J Walsh
c2597aab22 Merge pull request #599 from rhatdan/quiet
Add --quiet option to skopeo copy
2019-02-01 09:55:57 -05:00
Daniel J Walsh
47065938da Add --quiet option to skopeo copy
People are using skopeo copy in batch commands and do not need
all of the logging.

Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
2019-02-01 01:38:39 +00:00
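For example, in a batch script; the image and destination are arbitrary:

    skopeo copy --quiet docker://alpine:latest dir:/tmp/alpine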
ERAMOTO Masaya
790620024e completions: Fix bash completions when an option requires an argument
Since the options string used as the pattern in the case statement was
not delimited, it never matched the value of the prev variable, so bash
completion kept offering options even when the preceding option requires
an argument.
This fix stops completing options when an option requires an argument.

Signed-off-by: ERAMOTO Masaya <eramoto.masaya@jp.fujitsu.com>
2019-01-23 19:14:26 +09:00
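The underlying idiom, sketched with a hypothetical option list: pad both the list and the pattern with spaces so the previous word is matched as a whole, delimited option:

    options_with_args="--creds --cert-dir --format"   # hypothetical list
    case " $options_with_args " in
    *" $prev "*)
        # the previous word is an option that expects an argument,
        # so offer no option completions for the current word
        return 0
        ;;
    esac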
Daniel J Walsh
42b01df89e Merge pull request #586 from Silvanoc/update-contributing
docs: consolidate CONTRIBUTING
2019-01-17 10:15:18 -05:00
Silvano Cirujano Cuesta
aafae2bc50 docs: consolidate CONTRIBUTING
Move documentation about dependencies management from README.md to
CONTRIBUTING.md.

Closes #583

Signed-off-by: Silvano Cirujano Cuesta <silvano.cirujano-cuesta@siemens.com>
2019-01-17 16:04:13 +01:00
Daniel J Walsh
e5b9ea5ee6 Merge pull request #593 from vrothberg/progress-bar-tty-check
vendor latest c/image
2019-01-17 06:45:48 -05:00
Valentin Rothberg
1c2ff140cb vendor latest c/image
When copying images and the output is not a tty (e.g., when piping to a
file) print single lines instead of using progress bars. This avoids
long and hard to parse output.

Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
2019-01-16 17:59:52 +01:00
Valentin Rothberg
f7c608e65e Merge pull request #592 from eramoto/build-in-container
Makefile: Build docs in a container
2019-01-15 14:12:49 +01:00
ERAMOTO Masaya
ec810c91fe Makefile: Build docs in a container
Enables building docs in a container even when go-md2man is not installed
locally.

Signed-off-by: ERAMOTO Masaya <eramoto.masaya@jp.fujitsu.com>
2019-01-15 18:57:30 +09:00
Daniel J Walsh
17bea86e89 Merge pull request #581 from afbjorklund/build-tag
Allow building without btrfs and ostree
2019-01-04 09:13:25 -05:00
Anders F Björklund
3e0026d907 Allow building without btrfs and ostree
Copy the build tag scripts from Buildah

Signed-off-by: Anders F Björklund <anders.f.bjorklund@gmail.com>
2019-01-03 20:32:53 +01:00
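Assuming the tag names match the ones Buildah's scripts emit (an assumption, not verified here), such a build would look like:

    make binary-local BUILDTAGS="exclude_graphdriver_btrfs containers_image_ostree_stub"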
Antonio Murdaca
3e98377bf2 Merge pull request #579 from runcom/v0134
release v0.1.34
2018-12-21 16:10:05 +01:00
Antonio Murdaca
0658bc80f3 version: bump to v0.1.35-dev
Signed-off-by: Antonio Murdaca <runcom@linux.com>
2018-12-21 15:52:49 +01:00
Antonio Murdaca
e96a9b0e1b version: bump to v0.1.34
Signed-off-by: Antonio Murdaca <runcom@linux.com>
2018-12-21 15:52:36 +01:00
Antonio Murdaca
08c30b8f06 bump(github.com/containers/image)
Signed-off-by: Antonio Murdaca <runcom@linux.com>
2018-12-21 15:52:01 +01:00
Antonio Murdaca
05212df1c5 Merge pull request #577 from runcom/0133
bump to v0.1.33
2018-12-19 12:11:04 +01:00
Antonio Murdaca
7ec68dd463 version: bump to v0.1.34-dev
Signed-off-by: Antonio Murdaca <runcom@linux.com>
2018-12-19 11:14:07 +01:00
Antonio Murdaca
6eb5131b85 version: bump to v0.1.33
Signed-off-by: Antonio Murdaca <runcom@linux.com>
2018-12-19 11:13:51 +01:00
Antonio Murdaca
736cd7641d Merge pull request #573 from vrothberg/parapull
vendor containers/image for parallel copying of layers
2018-12-19 09:30:06 +01:00
Valentin Rothberg
78bd5dd3df vendor containers/image for parallel copying of layers
Vendor the latest containers/image 50e5e55e46a391df8fce1291b2337f1af879b822
to enable parallel copying of layers.

Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
2018-12-19 09:06:56 +01:00
Antonio Murdaca
ecd675e0a6 Merge pull request #572 from giuseppe/use-optimized-gzip
vendor: use faster version instead of compress/gzip
2018-12-18 17:24:57 +01:00
Giuseppe Scrivano
5675895460 vendor: update containers/storage and containers/image
some tests I've done to try out the difference in performance:

I am using a directory repository so as not to depend on the network.

User time (seconds): 39.40
System time (seconds): 6.83
Percent of CPU this job got: 121%
Elapsed (wall clock) time (h:mm:ss or m:ss): 0:38.07
User time (seconds): 8.32
System time (seconds): 1.62
Percent of CPU this job got: 128%
Elapsed (wall clock) time (h:mm:ss or m:ss): 0:07.72

User time (seconds): 42.68
System time (seconds): 6.64
Percent of CPU this job got: 162%
Elapsed (wall clock) time (h:mm:ss or m:ss): 0:30.44
User time (seconds): 8.94
System time (seconds): 1.51
Percent of CPU this job got: 178%
Elapsed (wall clock) time (h:mm:ss or m:ss): 0:05.85

Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
2018-12-18 10:45:39 +01:00
Giuseppe Scrivano
0f8f870bd3 vendor: update ostree-go
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
2018-12-13 16:35:35 +01:00
Daniel J Walsh
a51e38e60d Merge pull request #523 from mtrmac/cli-parsing
RFC: Reliable CLI parsing
2018-12-07 09:24:31 -05:00
Miloslav Trmač
8fe1595f92 Do not interpret % metacharacters in (skopeo inspect) output
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2018-12-07 00:29:54 +01:00
Miloslav Trmač
2497f500d5 Add commandAction to make *cli.Context unavailable in command handlers
That in turn makes sure that the cli.String() etc. flag access functions
are not used, and all flag handling is done using the *Options structures
and the Destination: members of cli.Flag.

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2018-12-07 00:29:54 +01:00
Miloslav Trmač
afa92d58f6 Drop the *cli.Context argument from parseImage and parseImageSource
We no longer need it for handling flags.

Also, require the caller to explicitly pass an image name to parseImage
instead of, horribly nontransparently, using the first CLI option.

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2018-12-07 00:29:38 +01:00
Miloslav Trmač
958cafb2c0 Inline contextsFromCopyOptions
It was not really any clearer when broken out. We already have
a pair of trivial src/dest API calls before this, so adding
a similar src/dest call for SystemContext follows the pattern.

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2018-12-07 00:29:18 +01:00
Miloslav Trmač
1d1bf0d393 Replace contextFromImageDestOptions by imageDestOptions.newSystemContext
This is analogous to the imageOptions.newSystemContext conversion.

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2018-12-07 00:29:18 +01:00
Miloslav Trmač
3094320203 Replace contextFromImageOptions by imageOptions.newSystemContext
We no longer need the *cli.Context parameter, and at that point
it looks much cleaner to make this a method (already individually;
it will be even cleaner after a similar imageDestOptions conversion).

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2018-12-07 00:29:18 +01:00
Miloslav Trmač
39de98777d Remove no longer needed flagsPrefix from imageOptions
contextFromImageOptions is finally not using any string-based lookup
in cli.Context, so we don't need to record this value any more.

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2018-12-07 00:29:18 +01:00
Miloslav Trmač
8084f6f4e2 No longer define all "skopeo copy" flags in utils_test.go
All the contextFromImage{,Dest}Options flags are now defined in
imageFlags/imageDestFlags.

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2018-12-07 00:29:18 +01:00
Miloslav Trmač
6ef45e5cf1 Migrate --authfile to sharedImageOptions
This introduces YET ANOTHER *Options structure, only to share this
option between copy source and destination.  (We do need to do this,
because the libraries, rightly, refuse to work with source and
destination each declaring its own version of the --authfile flag.)

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2018-12-07 00:29:18 +01:00
Miloslav Trmač
444b90a9cf Migrate --dest-compress to imageDestOptions
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2018-12-07 00:29:18 +01:00
Miloslav Trmač
72a3dc17ee Migrate --dest-ostree-tmp-dir to imageDestOptions
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2018-12-07 00:29:18 +01:00
Miloslav Trmač
88c748f47a Introduce imageDestOptions
This is an extension of imageOptions that carries destination-specific
flags.

This will allow us to handle --dest-* flags without also exposing
pointless --src-* flags.

(This is, also, where the type-safety somewhat breaks down;
after all the work to make the data flow and availability explicit,
everything ends up in an types.SystemContext, and it's easy enough
to use a destination-specific one for sources.  OTOH, this is
not making the situation worse in any sense.)

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2018-12-07 00:29:18 +01:00
Miloslav Trmač
7e8c89d619 Migrate --*daemon-host to imageOptions
This was previously only supported in (skopeo copy).

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2018-12-07 00:29:18 +01:00
Miloslav Trmač
694f915003 Migrate --*shared-blob-dir to imageOptions.
This was previously only supported in (skopeo copy).

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2018-12-07 00:29:18 +01:00
Miloslav Trmač
a77b409619 Migrate --*tls-verify to imageOptions
This was previously unsupported by (skopeo layers)

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2018-12-07 00:29:18 +01:00
Miloslav Trmač
1faff791ce Migrate --*cert-dir to imageOptions
This was previously unsupported by (skopeo layers).

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2018-12-07 00:29:18 +01:00
Miloslav Trmač
8b8afe0fda Migrate --*creds to imageOptions
This is one of the ugliest parts; we need an extra parameter to support
the irregular screds/dcreds aliases.

This was previously unsupported by (skopeo layers).

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2018-12-07 00:29:18 +01:00
Miloslav Trmač
09a120a59b Temporarily add flagPrefix to imageOptions
We don't want to worry about mismatch of the flagPrefix value
between imageFlags() and contextFromImageOptions().  For now,
record it in imageOptions; eventually we will stop using it in
contextFromImageOptions and remove it again.

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2018-12-07 00:29:18 +01:00
Miloslav Trmač
c769c7789e Introduce imageOptions
This is similar to the previous *Options structures, but this one
will support differing sets of options, in particular for the
copy source/destination.

The way the return values of imageFlags() are integrated into
creation of a cli.Command forces fakeContext() in tests to do
very ugly filtering to have a working *imageOptions available
without having a copyCmd() cooperate to give it to us.  Rather
than extend copyCmd(), we do the filtering, because the reliance
on copyCmd() will go away after all flags are migrated, and so
will the filtering and fakeContext() complexity.

Finally, rename contextFromGlobalOptions to not lie about only
caring about global options.

This only introduces the infrastructure, all flags continue
to be handled in the old way.

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2018-12-07 00:29:18 +01:00
Miloslav Trmač
3ea3965e5e Use globalOptions for setting up types.SystemContext
contextFromGlobalOptions now uses globalOptions instead
of cli.Context.Global* .  That required passing globalOptions
through a few more functions.

Now, "all" that is left are all the non-global options
handled in contextFromGlobalOptions.

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2018-12-07 00:28:57 +01:00
Miloslav Trmač
ee8391db34 Use globalOptions for the global timeout option
Replace commandTimeoutContextFromGlobalOptions with
globalOptions.commandTimeoutContext.  This requires passing
globalOptions to more per-command *Options state.

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2018-12-07 00:28:29 +01:00
Miloslav Trmač
e1cc97d9d7 Use globalOptions for policy configuration
This requires us to propagate globalOptions to the per-command
*Options state.

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2018-12-07 00:28:29 +01:00
Miloslav Trmač
f30756a9bb Use globalOptions for the debug flag
This works just like the command-specific options.  Handles only
the single flag for now, others will be added as the infrastructure
is built.

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2018-12-07 00:28:29 +01:00
Miloslav Trmač
33b474b224 Create a globalOptions structure
This works just like the command-specific options.  Also
moves the "Before:" handler into a separate method.

Does not change behavior.

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2018-12-07 00:28:29 +01:00
Miloslav Trmač
485a7aa330 Use the *Options structures for command-specific options
Use Destination: &opts.flag in the flag definition
instead of c.String("flag-name") and the like in the handler,
matching only by strings.

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2018-12-07 00:28:29 +01:00
Miloslav Trmač
59117e6e3d Fix a typo
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2018-12-07 00:28:29 +01:00
Miloslav Trmač
8ee3ead743 Create an "options" structure for each command
This is a big diff, but it really only replaces a few global variables
with functions returning a structure.

The ultimate goal of this patch set is to replace option handling using

> cli.StringFlag{Name:"foo", ...}
> ...
> func somethingHandler(c *cli.Context) error {
>     c.String("foo")
> }

where the declaration and usage are connected only using a string constant,
and it's difficult to notice that one or the other is missing or that the
types don't match, by

> struct somethingOptions {
>    foo string
> }
> ...
> cli.StringFlag{Name:"foo", Destination:&foo}
> ...
> func (opts *somethingOptions) run(c *cli.Context) error {
>     opts.foo
> }

As a first step, this commit ONLY introduces the *Options structures,
but for now empty; nothing changes in the existing implementations.

So, we go from

> func somethingHandler(c *cli.Context) error {...}
>
> var somethingCmd = cli.Command {
>     ...
>     Action: somethingHandler
> }

to

> type somethingOptions struct{
> } // empty for now
>
> func somethingCmd() cli.Command {
>     opts := somethingOptions{}
>     return cli.Command {
>         ... // unchanged
>         Action: opts.run
>     }
> }
>
> func (opts *somethingOptions) run(c *cli.Context) error {...} // unchanged

Using the struct type has also made it possible to place the definition of
cli.Command in front of the actual command handler, so do that for better
readability.

In a few cases this also broke out an in-line lambda in the Action: field
into a separate opts.run method.  Again, nothing else has changed.

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2018-12-07 00:28:29 +01:00
Miloslav Trmač
bc39e4f9b6 Implement an optionalString, to be used as a cli.GenericFlag
This mirrors the behavior of cli.StringFlag, but records an explicit
"present" indication.

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2018-12-07 00:23:49 +01:00
Miloslav Trmač
3017d87ade Implement an optionalBool, to be used as a cli.GenericFlag
This mirrors the behavior of cli.BoolFlag, but records an explicit
"present" indication.

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2018-12-07 00:23:49 +01:00
Miloslav Trmač
d8f1d4572b Update github.com/urfave/cli
It's probably not strictly necessary, but let's work with the current
implementation before worrying about possible idiosyncrasies.

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2018-12-07 00:23:49 +01:00
Miloslav Trmač
41d8dd8b80 Merge pull request #570 from mtrmac/blob-info-caching
Vendor c/image after merging blob-info-caching
2018-12-07 00:22:16 +01:00
Miloslav Trmač
bcf3dbbb93 Vendor after merging c/image#536
... which adds blob info caching

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2018-12-06 23:26:31 +01:00
Daniel J Walsh
bfc0c5e531 Merge pull request #555 from mtrmac/revendor-image-spec
Re-vendor image-spec from upstream again
2018-12-06 14:40:04 -05:00
Miloslav Trmač
013ebac8d8 Re-vendor image-spec from upstream again
... after https://github.com/opencontainers/image-spec/pull/750 was merged.

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2018-11-29 14:29:23 +01:00
Miloslav Trmač
fbc2e4f70f Merge pull request #521 from mtrmac/regsv2-docker
Vendor in vrothberg/image:regsv2-docker
2018-11-29 14:00:43 +01:00
Miloslav Trmač
72468d6817 Vendor c/image after merging vrothberg/image:regsv2-docker
Also update the user and tests for the API change.
2018-11-29 13:28:04 +01:00
Miloslav Trmač
5dec940523 Add tests for contextFromGlobalOptions
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2018-11-19 22:10:05 +01:00
Miloslav Trmač
761a6811c1 Merge pull request #569 from mtrmac/podman-security-opt
Use --security-opt label=disable instead of label:disable
2018-11-08 22:52:46 +01:00
Miloslav Trmač
b3a023f9dd Use --security-opt label=disable instead of label:disable
podman only accepts the = syntax.

Fixes #567.

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2018-11-08 03:29:52 +01:00
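That is, the form podman accepts:

    podman run --rm --security-opt label=disable alpine true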
Antonio Murdaca
5aa217fe0d Merge pull request #568 from runcom/bump-0.1.32
Bump 0.1.32
2018-11-07 22:22:50 +01:00
Antonio Murdaca
737438d026 version: bump to v0.1.33-dev
Signed-off-by: Antonio Murdaca <runcom@linux.com>
2018-11-07 18:06:43 +01:00
Antonio Murdaca
1715c90841 version: bump to v0.1.32
Signed-off-by: Antonio Murdaca <runcom@linux.com>
2018-11-07 18:06:24 +01:00
Miloslav Trmač
187299a20b Merge pull request #566 from isimluk/fix-podman-build
Enable build in podman
2018-11-03 02:05:12 +01:00
Šimon Lukašík
89d8bddd9b Actually, enable build in podman 2018-11-03 00:10:44 +01:00
Daniel J Walsh
ba649c56bf Merge pull request #565 from armstrongli/master
add timeout support for image copy
2018-11-02 11:32:36 -04:00
jianqli
3456577268 add command timeout support for skopeo
* add global command-timeout option for skopeo
2018-11-01 23:02:33 +08:00
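Presumably used along these lines; the duration and image names are illustrative:

    skopeo --command-timeout 120s copy docker://alpine:latest dir:/tmp/alpine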
Miloslav Trmač
b52e700666 Merge pull request #562 from Bhudjo/patch-1
Fix typo
2018-10-17 01:42:29 +02:00
Alessandro Buggin
ee32f1f7aa Fix typo 2018-10-16 15:47:30 +01:00
Miloslav Trmač
5af0da9de6 Merge pull request #558 from nalind/copy-digest
Update containers/image to 5e5b67d6b1cf43cc349128ec3ed7d5283a6cc0d1
2018-10-16 00:22:09 +02:00
Nalin Dahyabhai
879a6d793f Log the version of the go toolchain
Before we use "go get" in CI, run "go version" so that we can be sure of
which version of the toolchain we're using.

Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>
2018-10-15 15:45:45 -04:00
Nalin Dahyabhai
2734f93e30 Update for github.com/containers/image API change
github.com/containers/image/copy.Image() now returns the copied
manifest, so we at least need to ignore it.

Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>
2018-10-15 15:45:45 -04:00
Nalin Dahyabhai
2b97124e4a bump(github.com/containers/image)
Bump github.com/containers/image to version
5e5b67d6b1cf43cc349128ec3ed7d5283a6cc0d1, which modifies copy.Image() to
add the new image's manifest to the values that it returns.

Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>
2018-10-15 15:45:43 -04:00
Antonio Murdaca
7815a5801e Merge pull request #561 from runcom/fix-golint
fix golint
2018-10-13 16:44:19 +02:00
Antonio Murdaca
501e1be3cf fix golint
Signed-off-by: Antonio Murdaca <runcom@linux.com>
2018-10-13 15:41:55 +02:00
Miloslav Trmač
fc386a6dca Merge pull request #535 from rhatdan/podman
Add support for using podman in testing skopeo
2018-10-01 17:37:34 +02:00
Daniel J Walsh
2a134a0ddd Add support for using podman in testing skopeo
Also rename commands from Docker to container.

Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
2018-09-30 07:54:23 +02:00
Miloslav Trmač
17250d7e8d Merge pull request #554 from rhatdan/vendor
Update vendor for skopeo release
2018-09-27 21:09:28 +02:00
Daniel J Walsh
65d28709c3 Update vendor for skopeo release
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
2018-09-21 08:49:55 -04:00
Miloslav Trmač
d6c6c78d1b Merge pull request #553 from mtrmac/revendor
Run (make vendor)
2018-09-17 16:49:44 +02:00
Miloslav Trmač
67ffa00b1d Run (make vendor)
Temporarily vendor opencontainers/image-spec from a fork
to fix "id" value duplication, which is detected and
refused by gojsonschema now
( https://github.com/opencontainers/image-spec/pull/750 ).

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2018-09-17 16:16:19 +02:00
Antonio Murdaca
a581847345 Merge pull request #552 from mtrmac/disable-cgo
Fix (make DISABLE_CGO=1)
2018-09-17 14:28:44 +02:00
Miloslav Trmač
bcd26a4ae4 Fix (make DISABLE_CGO=1)
... which has, apparently, never worked, because the golang image
has neither the GOPATH nor the working directory the Makefile expects.

Rather than move all this configuration into the Makefile to be able
to work with the golang images, just always use the skopeobuildimage
path, and only override the tags, to minimize divergence.

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2018-09-17 13:55:46 +02:00
Miloslav Trmač
e38c345f23 Merge pull request #551 from rst0git/patch-1
readme: Add Ubuntu dependencies
2018-09-17 13:11:13 +02:00
Radostin Stoyanov
0421fb04c2 readme: Add Ubuntu dependencies
Signed-off-by: Radostin Stoyanov <rstoyanov1@gmail.com>
2018-09-16 12:01:32 +01:00
Antonio Murdaca
82186b916f Merge pull request #543 from giuseppe/remove-glog
vendor.conf: remove unused github.com/golang/glog
2018-08-28 12:00:07 +02:00
Giuseppe Scrivano
15eed5beda vendor.conf: remove unused github.com/golang/glog
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
2018-08-28 11:37:47 +02:00
Miloslav Trmač
81837bd55b Merge pull request #499 from mtrmac/no-docker.Image
Stop using docker.Image
2018-08-27 16:00:12 +02:00
Miloslav Trmač
3dec6a1cdf Stop using docker.Image
Instead, use DockerReference() to obtain the repository name (which
also makes it work for other transports that support Docker references),
and a check for docker.Transport + docker.GetRepositoryTags.

This will allow dropping docker.Image from containers/image, and maybe
even all of ImageReference.NewImage (forcing callers to think about
manifest lists, among other things).
2018-08-27 14:00:44 +02:00
Miloslav Trmač
fe14427129 Merge pull request #542 from mtrmac/projectatomic-.validate
Update one more reference to projectatomic/skopeo
2018-08-27 13:52:20 +02:00
Miloslav Trmač
be27588418 Update one more reference to projectatomic/skopeo
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2018-08-27 13:10:02 +02:00
Miloslav Trmač
fb84437cd1 Merge pull request #541 from marcov/testflags
Allow passing TESTFLAGS env to hack/make.sh from make
2018-08-27 11:15:36 +02:00
Marco Vedovati
d9b495ca38 Allow passing TESTFLAGS env to hack/make.sh from make
Minor change to allow passing the env TESTFLAGS to make. That's pretty
convenient to filter what tests to run.
E.g. run integration tests containing the substring `Copy`:
make test-integration TESTFLAGS="-check.f Copy"

Signed-off-by: Marco Vedovati <mvedovati@suse.com>
2018-08-27 10:51:40 +02:00
Miloslav Trmač
6b93d4794f Merge pull request #537 from giuseppe/fix-makefile-vendor
Makefile: make vendor a .PHONY target
2018-08-16 15:58:26 +02:00
Giuseppe Scrivano
5d3849a510 Makefile: make vendor a .PHONY target
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
2018-08-16 15:06:16 +02:00
Daniel J Walsh
fef142f811 Merge pull request #536 from SUSE/update-github-references
Complete transition from the `projectatomic` project to the `containers` one
2018-08-15 16:23:43 -04:00
Flavio Castelli
2684e51aa5 Complete transition from the projectatomic project to the containers one
Replace the occurrences of `github.com/projectatomic` with
`github.com/containers` to ensure clean clones of the project are
building, travis badges on the README work as expected and other minor
things.

Signed-off-by: Flavio Castelli <fcastelli@suse.com>
2018-08-14 10:10:06 +02:00
Antonio Murdaca
e814f9605a Merge pull request #529 from runcom/new-0131
release v0.1.31
2018-07-29 19:17:19 +02:00
Antonio Murdaca
5d136a46ed version: bump to v0.1.32-dev
Signed-off-by: Antonio Murdaca <runcom@linux.com>
2018-07-29 19:03:02 +02:00
Antonio Murdaca
b0b750dfa1 version: bump to v0.1.31
Signed-off-by: Antonio Murdaca <runcom@linux.com>
2018-07-29 19:02:33 +02:00
Miloslav Trmač
e3034e1d91 Merge pull request #525 from mtrmac/docker-archive-auto-compression
Vendor after merging mtrmac/image:docker-archive-auto-compression
2018-07-18 01:19:09 +02:00
Miloslav Trmač
1a259b76da Vendor after merging mtrmac/image:docker-archive-auto-compression
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2018-07-18 01:02:26 +02:00
Antonio Murdaca
ae64ff7084 Merge pull request #522 from AkihiroSuda/minimal
Makefile: add binary-minimal and binary-static-minimal
2018-07-09 15:11:02 +02:00
Akihiro Suda
d67d3a4620 Makefile: add binary-minimal and binary-static-minimal
These targets produce a pure-Go binary, without the following features:
* ostree
* devicemapper
* btrfs
* gpgme

Signed-off-by: Akihiro Suda <suda.akihiro@lab.ntt.co.jp>
2018-07-09 18:35:58 +09:00
Daniel J Walsh
196bc48723 Merge pull request #520 from mtrmac/copy-error-handling
Ensure that we still return 1, and print something to stderr, on (skopeo copy) failure
2018-07-02 07:57:19 -04:00
Miloslav Trmač
1c6c7bc481 Ensure that we still return 1, and print something to stderr, on (skopeo copy) failure
https://github.com/projectatomic/skopeo/pull/519 made (skopeo copy)
succeed and print nothing to stderr; that could lead to hard-to-diagnose
failures in rare corner cases, e.g. shell scripts which do
(skopeo copy $src $dst) (as opposed to the correct
(skopeo copy "$src" "$dst") ) if $src and $dst are empty due to
a previous failure.
2018-06-30 13:05:23 +02:00
Daniel J Walsh
6e23a32282 Merge pull request #519 from vbatts/copy_help_output
cmd/skopeo: show full help on 'copy' with no args
2018-06-29 11:17:03 -04:00
Vincent Batts
f398c9c035 cmd/skopeo: show full help on 'copy' with no args
Signed-off-by: Vincent Batts <vbatts@hashbangbash.com>
2018-06-29 10:13:13 -04:00
Miloslav Trmač
0144aa8dc5 Merge pull request #514 from giuseppe/vndr-c-image
vendor: update containers/image
2018-05-30 20:50:35 +02:00
Giuseppe Scrivano
0df5dcf09c vendor: update containers/image
Needed to pick up this change:

ostree: use the same thread for ostree operations

Since https://github.com/ostreedev/ostree/pull/1555, locking is
enabled by default in OSTree.  Unfortunately it uses thread-private
data and it breaks the Golang bindings.  Force the same thread for the
write operations to the OSTree repository.

Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
2018-05-30 19:00:34 +02:00
Miloslav Trmač
f9baaa6b87 Merge pull request #512 from mtrmac/docker_vendor_update
Update docker/docker dependencies.
2018-05-26 05:56:42 +02:00
Max Goltzsche
67ff78925b Update docker/docker dependencies.
Required to update those dependencies in containers/image.
See https://github.com/containers/image/pull/446.

Updated by mitr@redhat.com to vendor from containers/image master again,
which brought in a few more dependency updates.

Signed-off-by: Max Goltzsche <max.goltzsche@gmail.com>
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2018-05-26 05:41:06 +02:00
Miloslav Trmač
5c611083f2 Merge pull request #510 from rhatdan/vendor
Vendor in latest go-selinux and containers/storage
2018-05-22 17:47:23 +02:00
Daniel J Walsh
976d57ea45 Vendor in latest go-selinux and containers/storage
skopeo is failing to build now on 32 bit systems.  go-selinux update
should fix this.  Also container/storage has had some cleanup fixes
to devicemapper support.

Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
2018-05-22 11:09:34 -04:00
Antonio Murdaca
63569fcd63 Merge pull request #509 from runcom/bump-0.1.30
Bump 0.1.30
2018-05-20 11:11:19 +02:00
Antonio Murdaca
98b3a13b46 version: bump v0.1.31-dev
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2018-05-20 10:56:32 +02:00
Antonio Murdaca
ca3bff6a7c version: bump v0.1.30
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2018-05-20 10:56:16 +02:00
Daniel J Walsh
563a4ac523 Merge pull request #507 from mtrmac/c-image-docs
Include vendor/github.com/containers/image/docs
2018-05-19 04:02:44 -04:00
Miloslav Trmač
14ea9f8bfd Run (make vendor) for the first time.
This primarily adds vendor/github.com/containers/image/docs/,
but also updates other dependencies that are not pinned to a specific
commit.
2018-05-19 04:24:17 +02:00
Miloslav Trmač
05e38e127e Add a (make vendor) target, primarily to preserve c/image documentation
The goal is to include the c/image documentation in a skopeo release,
so that RPMs and other distribution mechanisms can ship the c/image
documentation without having to create a separate package for c/image
(which would not otherwise be needed because it is vendored into its users).

So, unify the updates of the "vendor" subdirectory as (make vendor),
and document it in README.md.  Also drop hack/vendor.sh, we neither
use nor document it, so updating it as well seems pointless.
2018-05-19 04:21:15 +02:00
Antonio Murdaca
1ef80d8082 Merge pull request #506 from rhatdan/storage
Vendor in latest containers-storage to add devmapper support
2018-05-18 23:33:15 +02:00
Daniel J Walsh
597b6bd204 Vendor in latest containers-storage to add devmapper support
containers/storage and storage.conf now support flags to allow users
to setup containers/storage to run on devicemapper.

Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
2018-05-18 12:04:00 -04:00
Miloslav Trmač
7e9a664764 Merge pull request #505 from umohnani8/transport
[DO NOT MERGE] Pick up changes to transports in containers/image
2018-05-15 21:40:03 +02:00
umohnani8
79449a358d Pick up changes to transports in containers/image
docker-archive and oci-archive now allow the image reference
for the destination to be empty.
Update tests for this new change.

Signed-off-by: umohnani8 <umohnani@redhat.com>
2018-05-15 15:16:36 -04:00
Daniel J Walsh
2d04db9ac8 Merge pull request #504 from mtrmac/README-installation
Improve installation documentation a bit
2018-05-14 16:47:36 -04:00
Miloslav Trmač
3e7a28481c Improve installation documentation a bit
- _Start_ with installing distribution packages, instead of
  mentioning it after the user has already built everything from source.
- Note that both the binary and documentation needs to be built
  for (make install) to work.
2018-05-12 04:09:00 +02:00
Miloslav Trmač
79225f2e65 Merge pull request #501 from vrothberg/multitags
skopeo-copy: docker-archive: multitag support
2018-05-11 22:07:55 +02:00
Valentin Rothberg
e1c1bbf26d skopeo-copy: docker-archive: multitag support
Add multitag support when generating docker-archive tarballs via the
newly added '--additional-tag' option, which can be specified multiple
times to add more than one tag.  All specified tags will be added to the
RepoTags field in the docker-archive's manifest.json file.

This change requires vendoring the latest containers/image with
commit a1a9391830fd08637edbe45133fd0a8a2682ae75.

Signed-off-by: Valentin Rothberg <vrothberg@suse.com>
2018-05-11 07:43:23 +02:00
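For example; all names are illustrative:

    skopeo copy --additional-tag busybox:mirrored --additional-tag busybox:backup \
        docker://busybox:latest docker-archive:/tmp/busybox.tar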
Miloslav Trmač
c4808f002e Merge pull request #503 from mtrmac/vet-package
Run (go vet) on all subpackages instead only of changed files
2018-05-10 21:35:34 +02:00
Miloslav Trmač
42203b366d Run (go vet) on all subpackages instead only of changed files
Apparently, it was never documented to use (go vet $somefile.go)
(but (go tool vet $somefile.go) was).

go 1.10 seems to do more checks within packages, and $somefile.go
is interpreted as a package with only that file (even if other files
from that package are in the same directory), leading to spurious
"undefined: $symbol" errors.

So, just run (go vet) on ./... (explicitly excluding skopeo/vendor for the
benefit of Go 1.8). We only have three subpackages, so the savings, if any,
from running (go vet) only on the modified subpackages would be small.

More importantly, on a toolchain update, ./... allows us to see the newly
detected issues all at once, instead of randomly waiting for a commit that
changes one of the affected files for the failure to show up.
2018-05-10 21:12:51 +02:00
Miloslav Trmač
1f11b8b350 Merge pull request #500 from mtrmac/go-1.10-check
Fix test suite failures with go >= 1.10
2018-05-07 19:17:06 +02:00
Miloslav Trmač
ea23621c70 Fix test suite failures with go >= 1.10
The hack/common.sh script contains
    local go_version
    go_version=($(go version))
    if [[ "${go_version[2]}" < "go1.5" ]]; then
      # fail
    fi

which does a lexicographic string comparison, and fails with 1.10.
Just drop it, the fedora:latest image is not likely to revert to 1.5.
2018-05-07 18:32:28 +02:00
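The failure is easy to demonstrate: lexicographically "go1.10" sorts before "go1.5", while a version-aware sort (GNU sort -V) gets it right:

    [[ "go1.10" < "go1.5" ]] && echo "lexicographically, go1.10 < go1.5"   # prints
    printf 'go1.5\ngo1.10\n' | sort -V | head -n1                          # prints go1.5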
Miloslav Trmač
ab2bc6e8d1 Merge pull request #494 from umohnani8/oci
Vendor in changes made to containers/image
2018-04-12 12:24:00 +02:00
umohnani8
c520041b83 Vendor in changes made to containers/image
containers/image returns a more detailed error message for oci and
oci-archive transports when the syntax given by the user is incorrect

Signed-off-by: Urvashi Mohnani <umohnani@redhat.com>
2018-04-11 15:58:41 -04:00
Miloslav Trmač
e626fca6a7 Merge pull request #493 from mtrmac/context-everywhere
Update for API changes in containers/image#431
2018-04-10 19:30:07 +02:00
Miloslav Trmač
92b6262224 Update for adding context.Context to containers/image API
In addition to the minimum necessary to update the API, also rename some
parameters/variables for consistency:

c	*cli.Context
ctx	context.Context
sys	*types.SystemContext

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2018-04-10 19:08:49 +02:00
Miloslav Trmač
e8dea9e770 Vendor after merging https://github.com/novas0x2a/image:context-everywhere 2018-04-10 19:08:37 +02:00
Miloslav Trmač
28080c8d5f Merge pull request #487 from jlsalmon/configure-daemon-host
Support host configuration for docker-daemon sources and destinations
2018-04-07 06:28:05 +02:00
Justin Lewis Salmon
0cea6dde02 Support host configuration for docker-daemon sources and destinations
This PR adds CLI support for overriding the default docker daemon host when using the
`docker-daemon` transport.

Fixes #244

Signed-off-by: Justin Lewis Salmon <justin.lewis.salmon@gmail.com>
2018-04-07 14:14:32 +10:00
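Presumably exercised along these lines; the flag spelling follows the --*daemon-host naming that appears earlier in this log, and the host URL is a placeholder:

    skopeo copy --dest-daemon-host http://remote.example.com:2375 \
        docker://alpine:latest docker-daemon:alpine:latest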
Miloslav Trmač
22482e099a Merge pull request #490 from mtrmac/storage-api-revendor
Vendor after merging containers/image#436
2018-04-05 21:53:18 +02:00
Miloslav Trmač
7aba888e99 Vendor after merging containers/image#436
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2018-04-05 21:33:04 +02:00
Miloslav Trmač
c61482d2cf Merge pull request #486 from runcom/bump-0.1.29
Bump 0.1.29
2018-03-29 15:40:29 +02:00
Antonio Murdaca
db941ebd8f version: bump v0.1.30-dev
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2018-03-29 15:03:29 +02:00
Antonio Murdaca
7add6fc80b version: bump v0.1.29
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2018-03-29 15:03:14 +02:00
Miloslav Trmač
eb9d74090e Merge pull request #485 from nlewo/pr/docker-archive-legacy
Add Docker legacy archive support
2018-03-28 22:38:49 +02:00
Antoine Eiche
61351d44d7 Vendor after merging https://github.com/containers/image/pull/370
Signed-off-by: Antoine Eiche <lewo@abesis.fr>
2018-03-28 18:46:26 +02:00
Miloslav Trmač
aa73bd9d0d Update for changed PutBlob API
Signed-off-by: Antoine Eiche <lewo@abesis.fr>
2018-03-28 18:46:14 +02:00
Miloslav Trmač
b08350db15 Merge pull request #477 from mtrmac/305-cleanup
Vendor mtrmac/image:305-cleanup
2018-03-15 16:17:46 +01:00
Miloslav Trmač
f63f78225d Update for types.Image.Inspect output change
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2018-03-15 15:26:00 +01:00
Miloslav Trmač
60aa4aa82d Vendor after merging mtrmac/image:305-cleanup
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2018-03-15 15:25:31 +01:00
Miloslav Trmač
37264e21fb Merge pull request #483 from lsm5/contrib-storage
add storage.conf and manpage in contrib/
2018-03-12 19:07:12 +01:00
Lokesh Mandvekar
fe2591054c add storage.conf and manpage in contrib/
These files are used by deb and rpm packages, so I'd rather have them
upstream than maintain in 2 separate places.

Signed-off-by: Lokesh Mandvekar <lsm5@fedoraproject.org>
2018-03-12 13:28:43 -04:00
Miloslav Trmač
fd0c3d7f08 Merge pull request #482 from umohnani8/gzip
Vendor in latest containers/image
2018-03-09 04:08:37 +01:00
umohnani8
b325cc22b8 Vendor in latest containers/image
Adds support to handle compressed docker-archive files

Signed-off-by: umohnani8 <umohnani@redhat.com>
2018-03-08 15:42:28 -05:00
Miloslav Trmač
5f754820da Merge pull request #479 from umohnani8/dir
Fix skopeo tests with changes to dir transport
2018-02-22 17:08:40 +01:00
umohnani8
43acc747d5 Fix skopeo tests with changes to dir transport
The dir transport has been changed to save the blobs without the .tar extension.
This fixes the skopeo tests failing due to that change.

Signed-off-by: umohnani8 <umohnani@redhat.com>
2018-02-22 10:50:22 -05:00
Daniel J Walsh
b3dec98757 Merge pull request #476 from jonboulle/fixbuild
Dockerfile: bump to ubuntu 17.10
2018-02-12 14:36:15 -05:00
Jonathan Boulle
b1795a08fb Dockerfile: bump to ubuntu 17.10
17.04 is EOLed and no longer works.

Signed-off-by: Jonathan Boulle <jonathanboulle@gmail.com>
2018-02-12 19:58:11 +01:00
Antonio Murdaca
1307cac0c2 Merge pull request #468 from mtrmac/oci-schema-rebase
Re-vendor, notably opencontainers/image-spec to fix tests
2018-02-09 20:16:42 +01:00
Miloslav Trmač
dc1567c8bc Re-vendor, and use mtrmac/image-spec:id-based-loader to fix tests
Anyone running (vndr) currently ends up with failing tests in OCI schema
validation because gojsonschema has fixed its "$ref" interpretation, exposing
inconsistent URI usage inside image-spec/schema.

So, this runs (vndr), and uses mtrmac/image-spec:id-based-loader
( https://github.com/opencontainers/image-spec/pull/739 ) to make the tests pass
again.  As soon as that PR is merged we should revert to using the upstream
image-spec repo again.
2018-02-09 18:34:31 +01:00
Antonio Murdaca
22c524b0e0 Merge pull request #474 from runcom/bump-0.1.28
Bump 0.1.28
2018-01-31 16:23:15 +01:00
Antonio Murdaca
9a225c3968 version: bump to v0.1.29-dev
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2018-01-31 16:01:51 +01:00
1029 changed files with 69992 additions and 16983 deletions

.gitignore

@@ -1,3 +1,3 @@
/docs/skopeo.1
*.1
/layers-*
/skopeo


@@ -1,3 +1,4 @@
language: go
matrix:
include:
@@ -21,4 +22,4 @@ install:
script:
- if [[ "$TRAVIS_OS_NAME" == "osx" ]]; then hack/travis_osx.sh ; fi
- if [[ "$TRAVIS_OS_NAME" == "linux" ]]; then make check ; fi
- if [[ "$TRAVIS_OS_NAME" == "linux" ]]; then make vendor && ./hack/tree_status.sh && make check ; fi


@@ -15,7 +15,7 @@ that we follow.
## Reporting Issues
Before reporting an issue, check our backlog of
[open issues](https://github.com/projectatomic/skopeo/issues)
[open issues](https://github.com/containers/skopeo/issues)
to see if someone else has already reported it. If so, feel free to add
your scenario, or additional information, to the discussion. Or simply
"subscribe" to it to be notified when it is updated.
@@ -115,6 +115,35 @@ Use your real name (sorry, no pseudonyms or anonymous contributions.)
If you set your `user.name` and `user.email` git configs, you can sign your
commit automatically with `git commit -s`.
### Dependencies management
Make sure [`vndr`](https://github.com/LK4D4/vndr) is installed.
In order to add a new dependency to this project:
- add a new line to `vendor.conf` according to `vndr` rules (e.g. `github.com/pkg/errors master`)
- run `make vendor`
In order to update an existing dependency:
- update the relevant dependency line in `vendor.conf`
- run `make vendor`
When new PRs for [containers/image](https://github.com/containers/image) break `skopeo` (i.e. `containers/image` tests fail in `make test-skopeo`):
- create a new branch in your `skopeo` checkout and switch to it
- update `vendor.conf`. Find the `containers/image` dependency; update it to vendor from your own branch and your own repository fork (e.g. `github.com/containers/image my-branch https://github.com/runcom/image`)
- run `make vendor`
- make any other necessary changes in the skopeo repo (e.g. add other dependencies now required by `containers/image`, or update skopeo for changed `containers/image` API)
- optionally add new integration tests to the skopeo repo
- submit the resulting branch as a skopeo PR, marked “DO NOT MERGE”
- iterate until tests pass and the PR is reviewed
- then the original `containers/image` PR can be merged, disregarding its `make test-skopeo` failure
- as soon as possible after that, in the skopeo PR, restore the `containers/image` line in `vendor.conf` to use `containers/image:master`
- run `make vendor`
- update the skopeo PR with the result, drop the “DO NOT MERGE” marking
- after tests complete successfully again, merge the skopeo PR
## Communications
For general questions, or discussions, please use the
@@ -122,9 +151,9 @@ IRC group on `irc.freenode.net` called `container-projects`
that has been setup.
For discussions around issues/bugs and features, you can use the github
[issues](https://github.com/projectatomic/skopeo/issues)
[issues](https://github.com/containers/skopeo/issues)
and
[PRs](https://github.com/projectatomic/skopeo/pulls)
[PRs](https://github.com/containers/skopeo/pulls)
tracking system.
<!--


@@ -10,6 +10,7 @@ RUN dnf -y update && dnf install -y make git golang golang-github-cpuguy83-go-md
gnupg \
# OpenShift deps
which tar wget hostname util-linux bsdtar socat ethtool device-mapper iptables tree findutils nmap-ncat e2fsprogs xfsprogs lsof docker iproute \
bats jq podman \
&& dnf clean all
# Install two versions of the registry. The first is an older version that
@@ -32,6 +33,8 @@ RUN set -x \
RUN set -x \
&& export GOPATH=$(mktemp -d) \
&& git clone --depth 1 -b v1.5.0-alpha.3 git://github.com/openshift/origin "$GOPATH/src/github.com/openshift/origin" \
# The sed edits out a "go < 1.5" check which works incorrectly with go ≥ 1.10. \
&& sed -i -e 's/\[\[ "\${go_version\[2]}" < "go1.5" ]]/false/' "$GOPATH/src/github.com/openshift/origin/hack/common.sh" \
&& (cd "$GOPATH/src/github.com/openshift/origin" && make clean build && make all WHAT=cmd/dockerregistry) \
&& cp -a "$GOPATH/src/github.com/openshift/origin/_output/local/bin/linux"/*/* /usr/local/bin \
&& cp "$GOPATH/src/github.com/openshift/origin/images/dockerregistry/config.yml" /atomic-registry-config.yml \
@@ -40,8 +43,9 @@ RUN set -x \
ENV GOPATH /usr/share/gocode:/go
ENV PATH $GOPATH/bin:/usr/share/gocode/bin:$PATH
RUN go get github.com/golang/lint/golint
WORKDIR /go/src/github.com/projectatomic/skopeo
COPY . /go/src/github.com/projectatomic/skopeo
RUN go version
RUN go get golang.org/x/lint/golint
WORKDIR /go/src/github.com/containers/skopeo
COPY . /go/src/github.com/containers/skopeo
#ENTRYPOINT ["hack/dind"]


@@ -1,8 +1,8 @@
FROM ubuntu:17.04
FROM ubuntu:18.10
RUN apt-get update && apt-get install -y \
golang \
btrfs-tools \
libbtrfs-dev \
git-core \
libdevmapper-dev \
libgpgme11-dev \
@@ -11,4 +11,4 @@ RUN apt-get update && apt-get install -y \
libostree-dev
ENV GOPATH=/
WORKDIR /src/github.com/projectatomic/skopeo
WORKDIR /src/github.com/containers/skopeo

Makefile

@@ -1,4 +1,4 @@
.PHONY: all binary build-container build-local clean install install-binary install-completions shell test-integration
.PHONY: all binary build-container docs docs-in-container build-local clean install install-binary install-completions shell test-integration .install.vndr vendor
export GO15VENDOREXPERIMENT=1
@@ -22,58 +22,78 @@ CONTAINERSSYSCONFIGDIR=${DESTDIR}/etc/containers
REGISTRIESDDIR=${CONTAINERSSYSCONFIGDIR}/registries.d
SIGSTOREDIR=${DESTDIR}/var/lib/atomic/sigstore
BASHINSTALLDIR=${PREFIX}/share/bash-completion/completions
GO_MD2MAN ?= go-md2man
GO ?= go
CONTAINER_RUNTIME := $(shell command -v podman 2> /dev/null || echo docker)
GOMD2MAN ?= $(shell command -v go-md2man || echo '$(GOBIN)/go-md2man')
ifeq ($(DEBUG), 1)
override GOGCFLAGS += -N -l
endif
ifeq ($(shell go env GOOS), linux)
ifeq ($(shell $(GO) env GOOS), linux)
GO_DYN_FLAGS="-buildmode=pie"
endif
GIT_BRANCH := $(shell git rev-parse --abbrev-ref HEAD 2>/dev/null)
DOCKER_IMAGE := skopeo-dev$(if $(GIT_BRANCH),:$(GIT_BRANCH))
IMAGE := skopeo-dev$(if $(GIT_BRANCH),:$(GIT_BRANCH))
# set env like gobuildtag?
DOCKER_FLAGS := docker run --rm -i #$(DOCKER_ENVS)
CONTAINER_CMD := ${CONTAINER_RUNTIME} run --rm -i -e TESTFLAGS="$(TESTFLAGS)" #$(CONTAINER_ENVS)
# if this session isn't interactive, then we don't want to allocate a
# TTY, which would fail, but if it is interactive, we do want to attach
# so that the user can send e.g. ^C through.
INTERACTIVE := $(shell [ -t 0 ] && echo 1 || echo 0)
ifeq ($(INTERACTIVE), 1)
DOCKER_FLAGS += -t
CONTAINER_CMD += -t
endif
DOCKER_RUN_DOCKER := $(DOCKER_FLAGS) "$(DOCKER_IMAGE)"
CONTAINER_RUN := $(CONTAINER_CMD) "$(IMAGE)"
GIT_COMMIT := $(shell git rev-parse HEAD 2> /dev/null || true)
MANPAGES_MD = $(wildcard docs/*.md)
MANPAGES ?= $(MANPAGES_MD:%.md=%)
BTRFS_BUILD_TAG = $(shell hack/btrfs_tag.sh)
BTRFS_BUILD_TAG = $(shell hack/btrfs_tag.sh) $(shell hack/btrfs_installed_tag.sh)
LIBDM_BUILD_TAG = $(shell hack/libdm_tag.sh)
LOCAL_BUILD_TAGS = $(BTRFS_BUILD_TAG) $(LIBDM_BUILD_TAG) $(DARWIN_BUILD_TAG)
OSTREE_BUILD_TAG = $(shell hack/ostree_tag.sh)
LOCAL_BUILD_TAGS = $(BTRFS_BUILD_TAG) $(LIBDM_BUILD_TAG) $(OSTREE_BUILD_TAG) $(DARWIN_BUILD_TAG)
BUILDTAGS += $(LOCAL_BUILD_TAGS)
ifeq ($(DISABLE_CGO), 1)
override BUILDTAGS = containers_image_ostree_stub exclude_graphdriver_devicemapper exclude_graphdriver_btrfs containers_image_openpgp
endif
# make all DEBUG=1
# Note: Uses the -N -l go compiler options to disable compiler optimizations
# and inlining. Using these build options allows you to subsequently
# use source debugging tools like delve.
all: binary docs
all: binary docs-in-container
# Build a docker image (skopeobuild) that has everything we need to build.
help:
@echo "Usage: make <target>"
@echo
@echo " * 'install' - Install binaries and documents to system locations"
@echo " * 'binary' - Build skopeo with a container"
@echo " * 'binary-local' - Build skopeo locally"
@echo " * 'test-unit' - Execute unit tests"
@echo " * 'test-integration' - Execute integration tests"
@echo " * 'validate' - Verify whether there is no conflict and all Go source files have been formatted, linted and vetted"
@echo " * 'check' - Including above validate, test-integration and test-unit"
@echo " * 'shell' - Run the built image and attach to a shell"
@echo " * 'clean' - Clean artifacts"
# Build a container image (skopeobuild) that has everything we need to build.
# Then do the build and the output (skopeo) should appear in current dir
binary: cmd/skopeo
docker build ${DOCKER_BUILD_ARGS} -f Dockerfile.build -t skopeobuildimage .
docker run --rm --security-opt label:disable -v $$(pwd):/src/github.com/projectatomic/skopeo \
${CONTAINER_RUNTIME} build ${BUILD_ARGS} -f Dockerfile.build -t skopeobuildimage .
${CONTAINER_RUNTIME} run --rm --security-opt label=disable -v $$(pwd):/src/github.com/containers/skopeo \
skopeobuildimage make binary-local $(if $(DEBUG),DEBUG=$(DEBUG)) BUILDTAGS='$(BUILDTAGS)'
binary-static: cmd/skopeo
docker build ${DOCKER_BUILD_ARGS} -f Dockerfile.build -t skopeobuildimage .
docker run --rm --security-opt label:disable -v $$(pwd):/src/github.com/projectatomic/skopeo \
${CONTAINER_RUNTIME} build ${BUILD_ARGS} -f Dockerfile.build -t skopeobuildimage .
${CONTAINER_RUNTIME} run --rm --security-opt label=disable -v $$(pwd):/src/github.com/containers/skopeo \
skopeobuildimage make binary-local-static $(if $(DEBUG),DEBUG=$(DEBUG)) BUILDTAGS='$(BUILDTAGS)'
# Build w/o using Docker containers
# Build w/o using containers
binary-local:
$(GPGME_ENV) $(GO) build ${GO_DYN_FLAGS} -ldflags "-X main.gitCommit=${GIT_COMMIT}" -gcflags "$(GOGCFLAGS)" -tags "$(BUILDTAGS)" -o skopeo ./cmd/skopeo
@@ -81,13 +101,17 @@ binary-local-static:
$(GPGME_ENV) $(GO) build -ldflags "-extldflags \"-static\" -X main.gitCommit=${GIT_COMMIT}" -gcflags "$(GOGCFLAGS)" -tags "$(BUILDTAGS)" -o skopeo ./cmd/skopeo
build-container:
docker build ${DOCKER_BUILD_ARGS} -t "$(DOCKER_IMAGE)" .
${CONTAINER_RUNTIME} build ${BUILD_ARGS} -t "$(IMAGE)" .
docs/%.1: docs/%.1.md
$(GO_MD2MAN) -in $< -out $@.tmp && touch $@.tmp && mv $@.tmp $@
$(MANPAGES): %: %.md
@sed -e 's/\((skopeo.*\.md)\)//' -e 's/\[\(skopeo.*\)\]/\1/' $< | $(GOMD2MAN) -in /dev/stdin -out $@
.PHONY: docs
docs: $(MANPAGES_MD:%.md=%)
docs: $(MANPAGES)
docs-in-container:
${CONTAINER_RUNTIME} build ${BUILD_ARGS} -f Dockerfile.build -t skopeobuildimage .
${CONTAINER_RUNTIME} run --rm --security-opt label=disable -v $$(pwd):/src/github.com/containers/skopeo \
skopeobuildimage make docs $(if $(DEBUG),DEBUG=$(DEBUG)) BUILDTAGS='$(BUILDTAGS)'
clean:
rm -f skopeo docs/*.1
@@ -103,29 +127,40 @@ install-binary: ./skopeo
install -d -m 755 ${INSTALLDIR}
install -m 755 skopeo ${INSTALLDIR}/skopeo
install-docs: docs/skopeo.1
install-docs: docs
install -d -m 755 ${MANINSTALLDIR}/man1
install -m 644 docs/skopeo.1 ${MANINSTALLDIR}/man1/skopeo.1
install -m 644 docs/*.1 ${MANINSTALLDIR}/man1/
install-completions:
install -m 755 -d ${BASHINSTALLDIR}
install -m 644 completions/bash/skopeo ${BASHINSTALLDIR}/skopeo
shell: build-container
$(DOCKER_RUN_DOCKER) bash
$(CONTAINER_RUN) bash
check: validate test-unit test-integration
check: validate test-unit test-integration test-system
# The tests can run out of entropy and block in containers, so replace /dev/random.
test-integration: build-container
$(DOCKER_RUN_DOCKER) bash -c 'rm -f /dev/random; ln -sf /dev/urandom /dev/random; SKOPEO_CONTAINER_TESTS=1 BUILDTAGS="$(BUILDTAGS)" hack/make.sh test-integration'
$(CONTAINER_RUN) bash -c 'rm -f /dev/random; ln -sf /dev/urandom /dev/random; SKOPEO_CONTAINER_TESTS=1 BUILDTAGS="$(BUILDTAGS)" hack/make.sh test-integration'
# complicated set of options needed to run podman-in-podman
test-system: build-container
DTEMP=$(shell mktemp -d --tmpdir=/var/tmp podman-tmp.XXXXXX); \
$(CONTAINER_CMD) --privileged --net=host \
-v $$DTEMP:/var/lib/containers:Z \
"$(IMAGE)" \
bash -c 'BUILDTAGS="$(BUILDTAGS)" hack/make.sh test-system'; \
rc=$$?; \
$(RM) -rf $$DTEMP; \
exit $$rc
test-unit: build-container
# Just call (make test unit-local) here instead of worrying about environment differences, e.g. GO15VENDOREXPERIMENT.
$(DOCKER_RUN_DOCKER) make test-unit-local BUILDTAGS='$(BUILDTAGS)'
$(CONTAINER_RUN) make test-unit-local BUILDTAGS='$(BUILDTAGS)'
validate: build-container
$(DOCKER_RUN_DOCKER) hack/make.sh validate-git-marks validate-gofmt validate-lint validate-vet
$(CONTAINER_RUN) hack/make.sh validate-git-marks validate-gofmt validate-lint validate-vet
# This target is only intended for development, e.g. executing it from an IDE. Use (make test) for CI or pre-release testing.
test-all-local: validate-local test-unit-local
@@ -134,4 +169,12 @@ validate-local:
hack/make.sh validate-git-marks validate-gofmt validate-lint validate-vet
test-unit-local:
$(GPGME_ENV) $(GO) test -tags "$(BUILDTAGS)" $$($(GO) list -tags "$(BUILDTAGS)" -e ./... | grep -v '^github\.com/projectatomic/skopeo/\(integration\|vendor/.*\)$$')
$(GPGME_ENV) $(GO) test -tags "$(BUILDTAGS)" $$($(GO) list -tags "$(BUILDTAGS)" -e ./... | grep -v '^github\.com/containers/skopeo/\(integration\|vendor/.*\)$$')
.install.vndr:
$(GO) get -u github.com/LK4D4/vndr
vendor: vendor.conf .install.vndr
$(GOPATH)/bin/vndr \
-whitelist '^github.com/containers/image/docs/.*' \
-whitelist '^github.com/containers/image/registries.conf'


@@ -1,7 +1,7 @@
skopeo [![Build Status](https://travis-ci.org/projectatomic/skopeo.svg?branch=master)](https://travis-ci.org/projectatomic/skopeo)
skopeo [![Build Status](https://travis-ci.org/containers/skopeo.svg?branch=master)](https://travis-ci.org/containers/skopeo)
=
<img src="https://cdn.rawgit.com/projectatomic/skopeo/master/docs/skopeo.svg" width="250">
<img src="https://cdn.rawgit.com/containers/skopeo/master/docs/skopeo.svg" width="250">
----
@@ -12,7 +12,7 @@ skopeo [![Build Status](https://travis-ci.org/projectatomic/skopeo.svg?branch=ma
Skopeo works with API V2 registries such as Docker registries, the Atomic registry, private registries, local directories and local OCI-layout directories. Skopeo does not require a daemon to be running to perform these operations which consist of:
* Copying an image from and to various storage mechanisms.
For example you can copy images from one registry to another, without requiring priviledge.
For example you can copy images from one registry to another, without requiring privilege.
* Inspecting a remote image showing its properties including its layers, without requiring you to pull the image to the host.
* Deleting an image from an image repository.
* When required by the repository, skopeo can pass the appropriate credentials and certificates for authentication.
@@ -149,8 +149,16 @@ $ skopeo copy --src-creds=testuser:testpassword docker://myregistrydomain.com:50
If your cli config is found but it doesn't contain the necessary credentials for the queried registry
you'll get an error. You can fix this by either logging in (via `docker login`) or providing `--creds` or `--src-creds|--dest-creds`.
Building
Obtaining skopeo
-
`skopeo` may already be packaged in your distribution, for example on Fedora 23 and later you can install it using
```sh
$ sudo dnf install skopeo
```
Otherwise, read on for building and installing it from source:
To build the `skopeo` binary you need at least Go 1.5 because it uses the latest `GO15VENDOREXPERIMENT` flag.
There are two ways to build skopeo: in a container, or locally without a container. Choose the one which better matches your needs and environment.
@@ -164,14 +172,15 @@ Building without a container requires a bit more manual work and setup in your e
Install the necessary dependencies:
```sh
Fedora$ sudo dnf install gpgme-devel libassuan-devel btrfs-progs-devel device-mapper-devel ostree-devel
Ubuntu$ sudo apt install libgpgme-dev libassuan-dev libbtrfs-dev libdevmapper-dev libostree-dev
macOS$ brew install gpgme
```
Make sure to clone this repository in your `GOPATH` - otherwise compilation fails.
```sh
$ git clone https://github.com/projectatomic/skopeo $GOPATH/src/github.com/projectatomic/skopeo
$ cd $GOPATH/src/github.com/projectatomic/skopeo && make binary-local
$ git clone https://github.com/containers/skopeo $GOPATH/src/github.com/containers/skopeo
$ cd $GOPATH/src/github.com/containers/skopeo && make binary-local
```
### Building in a container
@@ -183,6 +192,12 @@ Building in a container is simpler, but more restrictive:
$ make binary # Or (make all) to also build documentation, see below.
```
To build a pure-Go static binary (disables ostree, devicemapper, btrfs, and gpgme):
```sh
$ make binary-static DISABLE_CGO=1
```
### Building documentation
To build the manual you will need go-md2man.
```sh
@@ -194,16 +209,12 @@ Then
$ make docs
```
Installing
-
If you built from source:
### Installation
Finally, after the binary and documentation is built:
```sh
$ sudo make install
```
`skopeo` is also available from Fedora 23 (and later):
```sh
$ sudo dnf install skopeo
```
TODO
-
- list all images on registry?
@@ -218,34 +229,7 @@ NOT TODO
CONTRIBUTING
-
### Dependencies management
`skopeo` uses [`vndr`](https://github.com/LK4D4/vndr) for dependencies management.
In order to add a new dependency to this project:
- add a new line to `vendor.conf` according to `vndr` rules (e.g. `github.com/pkg/errors master`)
- run `vndr github.com/pkg/errors`
In order to update an existing dependency:
- update the relevant dependency line in `vendor.conf`
- run `vndr github.com/pkg/errors`
When new PRs for [containers/image](https://github.com/containers/image) break `skopeo` (i.e. `containers/image` tests fail in `make test-skopeo`):
- create out a new branch in your `skopeo` checkout and switch to it
- update `vendor.conf`. Find out the `containers/image` dependency; update it to vendor from your own branch and your own repository fork (e.g. `github.com/containers/image my-branch https://github.com/runcom/image`)
- run `vndr github.com/containers/image`
- make any other necessary changes in the skopeo repo (e.g. add other dependencies now requied by `containers/image`, or update skopeo for changed `containers/image` API)
- optionally add new integration tests to the skopeo repo
- submit the resulting branch as a skopeo PR, marked “DO NOT MERGE”
- iterate until tests pass and the PR is reviewed
- then the original `containers/image` PR can be merged, disregarding its `make test-skopeo` failure
- as soon as possible after that, in the skopeo PR, restore the `containers/image` line in `vendor.conf` to use `containers/image:master`
- run `vndr github.com/containers/image`
- update the skopeo PR with the result, drop the “DO NOT MERGE” marking
- after tests complete succcesfully again, merge the skopeo PR
Please read the [contribution guide](CONTRIBUTING.md) if you want to collaborate in the project.
License
-


@@ -3,88 +3,43 @@ package main
import (
"errors"
"fmt"
"os"
"io"
"strings"
"github.com/containers/image/copy"
"github.com/containers/image/docker/reference"
"github.com/containers/image/manifest"
"github.com/containers/image/transports"
"github.com/containers/image/transports/alltransports"
"github.com/containers/image/types"
imgspecv1 "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/urfave/cli"
)
// contextsFromGlobalOptions returns source and destionation types.SystemContext depending on c.
func contextsFromGlobalOptions(c *cli.Context) (*types.SystemContext, *types.SystemContext, error) {
sourceCtx, err := contextFromGlobalOptions(c, "src-")
if err != nil {
return nil, nil, err
}
type copyOptions struct {
global *globalOptions
srcImage *imageOptions
destImage *imageDestOptions
additionalTags cli.StringSlice // For docker-archive: destinations, in addition to the name:tag specified as destination, also add these
removeSignatures bool // Do not copy signatures from the source image
signByFingerprint string // Sign the image using a GPG key with the specified fingerprint
format optionalString // Force conversion of the image to a specified format
quiet bool // Suppress output information when copying images
destinationCtx, err := contextFromGlobalOptions(c, "dest-")
if err != nil {
return nil, nil, err
}
return sourceCtx, destinationCtx, nil
}
func copyHandler(context *cli.Context) error {
if len(context.Args()) != 2 {
return errors.New("Usage: copy source destination")
func copyCmd(global *globalOptions) cli.Command {
sharedFlags, sharedOpts := sharedImageFlags()
srcFlags, srcOpts := imageFlags(global, sharedOpts, "src-", "screds")
destFlags, destOpts := imageDestFlags(global, sharedOpts, "dest-", "dcreds")
opts := copyOptions{global: global,
srcImage: srcOpts,
destImage: destOpts,
}
policyContext, err := getPolicyContext(context)
if err != nil {
return fmt.Errorf("Error loading trust policy: %v", err)
}
defer policyContext.Destroy()
srcRef, err := alltransports.ParseImageName(context.Args()[0])
if err != nil {
return fmt.Errorf("Invalid source name %s: %v", context.Args()[0], err)
}
destRef, err := alltransports.ParseImageName(context.Args()[1])
if err != nil {
return fmt.Errorf("Invalid destination name %s: %v", context.Args()[1], err)
}
signBy := context.String("sign-by")
removeSignatures := context.Bool("remove-signatures")
sourceCtx, destinationCtx, err := contextsFromGlobalOptions(context)
if err != nil {
return err
}
var manifestType string
if context.IsSet("format") {
switch context.String("format") {
case "oci":
manifestType = imgspecv1.MediaTypeImageManifest
case "v2s1":
manifestType = manifest.DockerV2Schema1SignedMediaType
case "v2s2":
manifestType = manifest.DockerV2Schema2MediaType
default:
return fmt.Errorf("unknown format %q. Choose on of the supported formats: 'oci', 'v2s1', or 'v2s2'", context.String("format"))
}
}
return copy.Image(policyContext, destRef, srcRef, &copy.Options{
RemoveSignatures: removeSignatures,
SignBy: signBy,
ReportWriter: os.Stdout,
SourceCtx: sourceCtx,
DestinationCtx: destinationCtx,
ForceManifestMIMEType: manifestType,
})
}
var copyCmd = cli.Command{
Name: "copy",
Usage: "Copy an IMAGE-NAME from one location to another",
Description: fmt.Sprintf(`
return cli.Command{
Name: "copy",
Usage: "Copy an IMAGE-NAME from one location to another",
Description: fmt.Sprintf(`
Container "IMAGE-NAME" uses a "transport":"details" format.
@@ -93,72 +48,112 @@ var copyCmd = cli.Command{
See skopeo(1) section "IMAGE NAMES" for the expected format
`, strings.Join(transports.ListNames(), ", ")),
ArgsUsage: "SOURCE-IMAGE DESTINATION-IMAGE",
Action: copyHandler,
// FIXME: Do we need to namespace the GPG aspect?
Flags: []cli.Flag{
cli.StringFlag{
Name: "authfile",
Usage: "path of the authentication file. Default is ${XDG_RUNTIME_DIR}/containers/auth.json",
},
cli.BoolFlag{
Name: "remove-signatures",
Usage: "Do not copy signatures from SOURCE-IMAGE",
},
cli.StringFlag{
Name: "sign-by",
Usage: "Sign the image using a GPG key with the specified `FINGERPRINT`",
},
cli.StringFlag{
Name: "src-creds, screds",
Value: "",
Usage: "Use `USERNAME[:PASSWORD]` for accessing the source registry",
},
cli.StringFlag{
Name: "dest-creds, dcreds",
Value: "",
Usage: "Use `USERNAME[:PASSWORD]` for accessing the destination registry",
},
cli.StringFlag{
Name: "src-cert-dir",
Value: "",
Usage: "use certificates at `PATH` (*.crt, *.cert, *.key) to connect to the source registry",
},
cli.BoolTFlag{
Name: "src-tls-verify",
Usage: "require HTTPS and verify certificates when talking to the container source registry (defaults to true)",
},
cli.StringFlag{
Name: "dest-cert-dir",
Value: "",
Usage: "use certificates at `PATH` (*.crt, *.cert, *.key) to connect to the destination registry",
},
cli.BoolTFlag{
Name: "dest-tls-verify",
Usage: "require HTTPS and verify certificates when talking to the container destination registry (defaults to true)",
},
cli.StringFlag{
Name: "dest-ostree-tmp-dir",
Value: "",
Usage: "`DIRECTORY` to use for OSTree temporary files",
},
cli.StringFlag{
Name: "src-shared-blob-dir",
Value: "",
Usage: "`DIRECTORY` to use to fetch retrieved blobs (OCI layout sources only)",
},
cli.StringFlag{
Name: "dest-shared-blob-dir",
Value: "",
Usage: "`DIRECTORY` to use to store retrieved blobs (OCI layout destinations only)",
},
cli.StringFlag{
Name: "format, f",
Usage: "`MANIFEST TYPE` (oci, v2s1, or v2s2) to use when saving image to directory using the 'dir:' transport (default is manifest type of source)",
},
cli.BoolFlag{
Name: "dest-compress",
Usage: "Compress tarball image layers when saving to directory using the 'dir' transport. (default is same compression type as source)",
},
},
ArgsUsage: "SOURCE-IMAGE DESTINATION-IMAGE",
Action: commandAction(opts.run),
// FIXME: Do we need to namespace the GPG aspect?
Flags: append(append(append([]cli.Flag{
cli.StringSliceFlag{
Name: "additional-tag",
Usage: "additional tags (supports docker-archive)",
Value: &opts.additionalTags, // Surprisingly StringSliceFlag does not support Destination:, but modifies Value: in place.
},
cli.BoolFlag{
Name: "quiet, q",
Usage: "Suppress output information when copying images",
Destination: &opts.quiet,
},
cli.BoolFlag{
Name: "remove-signatures",
Usage: "Do not copy signatures from SOURCE-IMAGE",
Destination: &opts.removeSignatures,
},
cli.StringFlag{
Name: "sign-by",
Usage: "Sign the image using a GPG key with the specified `FINGERPRINT`",
Destination: &opts.signByFingerprint,
},
cli.GenericFlag{
Name: "format, f",
Usage: "`MANIFEST TYPE` (oci, v2s1, or v2s2) to use when saving image to directory using the 'dir:' transport (default is manifest type of source)",
Value: newOptionalStringValue(&opts.format),
},
}, sharedFlags...), srcFlags...), destFlags...),
}
}
func (opts *copyOptions) run(args []string, stdout io.Writer) error {
if len(args) != 2 {
return errorShouldDisplayUsage{errors.New("Exactly two arguments expected")}
}
imageNames := args
if err := reexecIfNecessaryForImages(imageNames...); err != nil {
return err
}
policyContext, err := opts.global.getPolicyContext()
if err != nil {
return fmt.Errorf("Error loading trust policy: %v", err)
}
defer policyContext.Destroy()
srcRef, err := alltransports.ParseImageName(imageNames[0])
if err != nil {
return fmt.Errorf("Invalid source name %s: %v", imageNames[0], err)
}
destRef, err := alltransports.ParseImageName(imageNames[1])
if err != nil {
return fmt.Errorf("Invalid destination name %s: %v", imageNames[1], err)
}
sourceCtx, err := opts.srcImage.newSystemContext()
if err != nil {
return err
}
destinationCtx, err := opts.destImage.newSystemContext()
if err != nil {
return err
}
var manifestType string
if opts.format.present {
switch opts.format.value {
case "oci":
manifestType = imgspecv1.MediaTypeImageManifest
case "v2s1":
manifestType = manifest.DockerV2Schema1SignedMediaType
case "v2s2":
manifestType = manifest.DockerV2Schema2MediaType
default:
return fmt.Errorf("unknown format %q. Choose one of the supported formats: 'oci', 'v2s1', or 'v2s2'", opts.format.value)
}
}
for _, image := range opts.additionalTags {
ref, err := reference.ParseNormalizedNamed(image)
if err != nil {
return fmt.Errorf("error parsing additional-tag '%s': %v", image, err)
}
namedTagged, isNamedTagged := ref.(reference.NamedTagged)
if !isNamedTagged {
return fmt.Errorf("additional-tag '%s' must be a tagged reference", image)
}
destinationCtx.DockerArchiveAdditionalTags = append(destinationCtx.DockerArchiveAdditionalTags, namedTagged)
}
ctx, cancel := opts.global.commandTimeoutContext()
defer cancel()
if opts.quiet {
stdout = nil
}
_, err = copy.Image(ctx, policyContext, destRef, srcRef, &copy.Options{
RemoveSignatures: opts.removeSignatures,
SignBy: opts.signByFingerprint,
ReportWriter: stdout,
SourceCtx: sourceCtx,
DestinationCtx: destinationCtx,
ForceManifestMIMEType: manifestType,
})
return err
}
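
The additional-tag validation loop above is self-contained enough to lift out. Below is a minimal sketch of that step, assuming the vendored `containers/image` reference package; `validateAdditionalTag` is a hypothetical helper name, not part of the PR:

```go
package main

import (
	"fmt"
	"os"

	"github.com/containers/image/docker/reference"
)

// validateAdditionalTag repeats the check run() applies to each --additional-tag
// value: the input must normalize to a name:tag reference, not a bare name.
func validateAdditionalTag(image string) (reference.NamedTagged, error) {
	ref, err := reference.ParseNormalizedNamed(image)
	if err != nil {
		return nil, fmt.Errorf("error parsing additional-tag '%s': %v", image, err)
	}
	namedTagged, ok := ref.(reference.NamedTagged)
	if !ok {
		return nil, fmt.Errorf("additional-tag '%s' must be a tagged reference", image)
	}
	return namedTagged, nil
}

func main() {
	for _, tag := range []string{"busybox:stable", "busybox"} {
		if _, err := validateAdditionalTag(tag); err != nil {
			fmt.Fprintln(os.Stderr, err) // "busybox" is rejected: no explicit tag
			continue
		}
		fmt.Println(tag, "accepted")
	}
}
```

Note that `ParseNormalizedNamed` does not add a default `:latest` tag, so a bare `busybox` fails the `NamedTagged` assertion, which is exactly why the error message demands a tagged reference.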


@@ -3,6 +3,7 @@ package main
import (
"errors"
"fmt"
"io"
"strings"
"github.com/containers/image/transports"
@@ -10,27 +11,22 @@ import (
"github.com/urfave/cli"
)
func deleteHandler(context *cli.Context) error {
if len(context.Args()) != 1 {
return errors.New("Usage: delete imageReference")
}
ref, err := alltransports.ParseImageName(context.Args()[0])
if err != nil {
return fmt.Errorf("Invalid source name %s: %v", context.Args()[0], err)
}
ctx, err := contextFromGlobalOptions(context, "")
if err != nil {
return err
}
return ref.DeleteImage(ctx)
type deleteOptions struct {
global *globalOptions
image *imageOptions
}
var deleteCmd = cli.Command{
Name: "delete",
Usage: "Delete image IMAGE-NAME",
Description: fmt.Sprintf(`
func deleteCmd(global *globalOptions) cli.Command {
sharedFlags, sharedOpts := sharedImageFlags()
imageFlags, imageOpts := imageFlags(global, sharedOpts, "", "")
opts := deleteOptions{
global: global,
image: imageOpts,
}
return cli.Command{
Name: "delete",
Usage: "Delete image IMAGE-NAME",
Description: fmt.Sprintf(`
Delete an "IMAGE_NAME" from a transport
Supported transports:
@@ -38,26 +34,33 @@ var deleteCmd = cli.Command{
See skopeo(1) section "IMAGE NAMES" for the expected format
`, strings.Join(transports.ListNames(), ", ")),
ArgsUsage: "IMAGE-NAME",
Action: deleteHandler,
Flags: []cli.Flag{
cli.StringFlag{
Name: "authfile",
Usage: "path of the authentication file. Default is ${XDG_RUNTIME_DIR}/containers/auth.json",
},
cli.StringFlag{
Name: "creds",
Value: "",
Usage: "Use `USERNAME[:PASSWORD]` for accessing the registry",
},
cli.StringFlag{
Name: "cert-dir",
Value: "",
Usage: "use certificates at `PATH` (*.crt, *.cert, *.key) to connect to the registry",
},
cli.BoolTFlag{
Name: "tls-verify",
Usage: "require HTTPS and verify certificates when talking to container registries (defaults to true)",
},
},
ArgsUsage: "IMAGE-NAME",
Action: commandAction(opts.run),
Flags: append(sharedFlags, imageFlags...),
}
}
func (opts *deleteOptions) run(args []string, stdout io.Writer) error {
if len(args) != 1 {
return errors.New("Usage: delete imageReference")
}
imageName := args[0]
if err := reexecIfNecessaryForImages(imageName); err != nil {
return err
}
ref, err := alltransports.ParseImageName(imageName)
if err != nil {
return fmt.Errorf("Invalid source name %s: %v", imageName, err)
}
sys, err := opts.image.newSystemContext()
if err != nil {
return err
}
ctx, cancel := opts.global.commandTimeoutContext()
defer cancel()
return ref.DeleteImage(ctx, sys)
}

cmd/skopeo/flag.go

@@ -0,0 +1,75 @@
package main
import (
"strconv"
"github.com/urfave/cli"
)
// optionalBool is a boolean with a separate presence flag.
type optionalBool struct {
present bool
value bool
}
// optionalBoolValue is a cli.Generic == flag.Value implementation equivalent to
// the one underlying flag.Bool, except that it records whether the flag has been set.
// This is distinct from optionalBool to (pretend to) force callers to use
// newOptionalBoolValue.
type optionalBoolValue optionalBool
func newOptionalBoolValue(p *optionalBool) cli.Generic {
p.present = false
return (*optionalBoolValue)(p)
}
func (ob *optionalBoolValue) Set(s string) error {
v, err := strconv.ParseBool(s)
if err != nil {
return err
}
ob.value = v
ob.present = true
return nil
}
func (ob *optionalBoolValue) String() string {
if !ob.present {
return "" // This is, sadly, not round-trip safe: --flag is interpreted as --flag=true
}
return strconv.FormatBool(ob.value)
}
func (ob *optionalBoolValue) IsBoolFlag() bool {
return true
}
// optionalString is a string with a separate presence flag.
type optionalString struct {
present bool
value string
}
// optionalStringValue is a cli.Generic == flag.Value implementation equivalent to
// the one underlying flag.String, except that it records whether the flag has been set.
// This is distinct from optionalString to (pretend to) force callers to use
// newOptionalStringValue.
type optionalStringValue optionalString
func newOptionalStringValue(p *optionalString) cli.Generic {
p.present = false
return (*optionalStringValue)(p)
}
func (ob *optionalStringValue) Set(s string) error {
ob.value = s
ob.present = true
return nil
}
func (ob *optionalStringValue) String() string {
if !ob.present {
return "" // This is, sadly, not round-trip safe: --flag= is interpreted as {present:true, value:""}
}
return ob.value
}
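
For illustration, here is a minimal sketch of how a command can consume the three states of `optionalBool` (absent, explicitly true, explicitly false). It assumes `optionalBool` and `newOptionalBoolValue` from flag.go above are available in the package; the `verify` flag and the app itself are hypothetical:

```go
package main

import (
	"fmt"
	"os"

	"github.com/urfave/cli"
)

func main() {
	var verify optionalBool
	app := cli.NewApp()
	app.Name = "demo"
	app.Flags = []cli.Flag{
		cli.GenericFlag{
			Name:  "verify",
			Usage: "require verification (defaults to true when unset)",
			Value: newOptionalBoolValue(&verify),
		},
	}
	app.Action = func(*cli.Context) error {
		effective := true // the default applies only when the flag is absent
		if verify.present {
			effective = verify.value
		}
		fmt.Println("verify:", effective)
		return nil
	}
	if err := app.Run(os.Args); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```

This three-state behavior is what `cli.BoolTFlag` could not express: with a plain flag there is no way to tell "user passed --verify=true" apart from "user passed nothing".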

cmd/skopeo/flag_test.go

@@ -0,0 +1,239 @@
package main
import (
"testing"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"github.com/urfave/cli"
)
func TestOptionalBoolSet(t *testing.T) {
for _, c := range []struct {
input string
accepted bool
value bool
}{
// Valid inputs documented for strconv.ParseBool == flag.BoolVar
{"1", true, true},
{"t", true, true},
{"T", true, true},
{"TRUE", true, true},
{"true", true, true},
{"True", true, true},
{"0", true, false},
{"f", true, false},
{"F", true, false},
{"FALSE", true, false},
{"false", true, false},
{"False", true, false},
// A few invalid inputs
{"", false, false},
{"yes", false, false},
{"no", false, false},
{"2", false, false},
} {
var ob optionalBool
v := newOptionalBoolValue(&ob)
require.False(t, ob.present)
err := v.Set(c.input)
if c.accepted {
assert.NoError(t, err, c.input)
assert.Equal(t, c.value, ob.value)
} else {
assert.Error(t, err, c.input)
assert.False(t, ob.present) // Just to be extra paranoid.
}
}
// Nothing actually explicitly says that .Set() is never called when the flag is not present on the command line;
// so, check that it is not being called, at least in the straightforward case (it's not possible to test that it
// is not called in any possible situation).
var globalOB, commandOB optionalBool
actionRun := false
app := cli.NewApp()
app.EnableBashCompletion = true
app.Flags = []cli.Flag{
cli.GenericFlag{
Name: "global-OB",
Value: newOptionalBoolValue(&globalOB),
},
}
app.Commands = []cli.Command{{
Name: "cmd",
Flags: []cli.Flag{
cli.GenericFlag{
Name: "command-OB",
Value: newOptionalBoolValue(&commandOB),
},
},
Action: func(*cli.Context) error {
assert.False(t, globalOB.present)
assert.False(t, commandOB.present)
actionRun = true
return nil
},
}}
err := app.Run([]string{"app", "cmd"})
require.NoError(t, err)
assert.True(t, actionRun)
}
func TestOptionalBoolString(t *testing.T) {
for _, c := range []struct {
input optionalBool
expected string
}{
{optionalBool{present: true, value: true}, "true"},
{optionalBool{present: true, value: false}, "false"},
{optionalBool{present: false, value: true}, ""},
{optionalBool{present: false, value: false}, ""},
} {
var ob optionalBool
v := newOptionalBoolValue(&ob)
ob = c.input
res := v.String()
assert.Equal(t, c.expected, res)
}
}
func TestOptionalBoolIsBoolFlag(t *testing.T) {
// IsBoolFlag means that the argument value must either be part of the same argument, with =;
// if there is no =, the value is set to true.
// This differs from other flags, where the argument is required and may be either separated with = or supplied in the next argument.
for _, c := range []struct {
input []string
expectedOB optionalBool
expectedArgs []string
}{
{[]string{"1", "2"}, optionalBool{present: false}, []string{"1", "2"}}, // Flag not present
{[]string{"--OB=true", "1", "2"}, optionalBool{present: true, value: true}, []string{"1", "2"}}, // --OB=true
{[]string{"--OB=false", "1", "2"}, optionalBool{present: true, value: false}, []string{"1", "2"}}, // --OB=false
{[]string{"--OB", "true", "1", "2"}, optionalBool{present: true, value: true}, []string{"true", "1", "2"}}, // --OB true
{[]string{"--OB", "false", "1", "2"}, optionalBool{present: true, value: true}, []string{"false", "1", "2"}}, // --OB false
} {
var ob optionalBool
actionRun := false
app := cli.NewApp()
app.Commands = []cli.Command{{
Name: "cmd",
Flags: []cli.Flag{
cli.GenericFlag{
Name: "OB",
Value: newOptionalBoolValue(&ob),
},
},
Action: func(ctx *cli.Context) error {
assert.Equal(t, c.expectedOB, ob)
assert.Equal(t, c.expectedArgs, ([]string)(ctx.Args()))
actionRun = true
return nil
},
}}
err := app.Run(append([]string{"app", "cmd"}, c.input...))
require.NoError(t, err)
assert.True(t, actionRun)
}
}
func TestOptionalStringSet(t *testing.T) {
// Really just a smoke test, but differentiating between not present and empty.
for _, c := range []string{"", "hello"} {
var os optionalString
v := newOptionalStringValue(&os)
require.False(t, os.present)
err := v.Set(c)
assert.NoError(t, err, c)
assert.Equal(t, c, os.value)
}
// Nothing actually explicitly says that .Set() is never called when the flag is not present on the command line;
// so, check that it is not being called, at least in the straightforward case (it's not possible to test that it
// is not called in any possible situation).
var globalOS, commandOS optionalString
actionRun := false
app := cli.NewApp()
app.EnableBashCompletion = true
app.Flags = []cli.Flag{
cli.GenericFlag{
Name: "global-OS",
Value: newOptionalStringValue(&globalOS),
},
}
app.Commands = []cli.Command{{
Name: "cmd",
Flags: []cli.Flag{
cli.GenericFlag{
Name: "command-OS",
Value: newOptionalStringValue(&commandOS),
},
},
Action: func(*cli.Context) error {
assert.False(t, globalOS.present)
assert.False(t, commandOS.present)
actionRun = true
return nil
},
}}
err := app.Run([]string{"app", "cmd"})
require.NoError(t, err)
assert.True(t, actionRun)
}
func TestOptionalStringString(t *testing.T) {
for _, c := range []struct {
input optionalString
expected string
}{
{optionalString{present: true, value: "hello"}, "hello"},
{optionalString{present: true, value: ""}, ""},
{optionalString{present: false, value: "hello"}, ""},
{optionalString{present: false, value: ""}, ""},
} {
var os optionalString
v := newOptionalStringValue(&os)
os = c.input
res := v.String()
assert.Equal(t, c.expected, res)
}
}
func TestOptionalStringIsBoolFlag(t *testing.T) {
// NOTE: optionalStringValue does not implement IsBoolFlag!
// IsBoolFlag means that the argument value must either be part of the same argument, with =;
// if there is no =, the value is set to true.
// This differs from other flags, where the argument is required and may be either separated with = or supplied in the next argument.
for _, c := range []struct {
input []string
expectedOS optionalString
expectedArgs []string
}{
{[]string{"1", "2"}, optionalString{present: false}, []string{"1", "2"}}, // Flag not present
{[]string{"--OS=hello", "1", "2"}, optionalString{present: true, value: "hello"}, []string{"1", "2"}}, // --OS=true
{[]string{"--OS=", "1", "2"}, optionalString{present: true, value: ""}, []string{"1", "2"}}, // --OS=false
{[]string{"--OS", "hello", "1", "2"}, optionalString{present: true, value: "hello"}, []string{"1", "2"}}, // --OS true
{[]string{"--OS", "", "1", "2"}, optionalString{present: true, value: ""}, []string{"1", "2"}}, // --OS false
} {
var os optionalString
actionRun := false
app := cli.NewApp()
app.Commands = []cli.Command{{
Name: "cmd",
Flags: []cli.Flag{
cli.GenericFlag{
Name: "OS",
Value: newOptionalStringValue(&os),
},
},
Action: func(ctx *cli.Context) error {
assert.Equal(t, c.expectedOS, os)
assert.Equal(t, c.expectedArgs, ([]string)(ctx.Args()))
actionRun = true
return nil
},
}}
err := app.Run(append([]string{"app", "cmd"}, c.input...))
require.NoError(t, err)
assert.True(t, actionRun)
}
}


@@ -3,6 +3,7 @@ package main
import (
"encoding/json"
"fmt"
"io"
"strings"
"time"
@@ -21,7 +22,7 @@ type inspectOutput struct {
Tag string `json:",omitempty"`
Digest digest.Digest
RepoTags []string
Created time.Time
Created *time.Time
DockerVersion string
Labels map[string]string
Architecture string
@@ -29,10 +30,24 @@ type inspectOutput struct {
Layers []string
}
var inspectCmd = cli.Command{
Name: "inspect",
Usage: "Inspect image IMAGE-NAME",
Description: fmt.Sprintf(`
type inspectOptions struct {
global *globalOptions
image *imageOptions
raw bool // Output the raw manifest instead of parsing information about the image
config bool // Output the raw config blob instead of parsing information about the image
}
func inspectCmd(global *globalOptions) cli.Command {
sharedFlags, sharedOpts := sharedImageFlags()
imageFlags, imageOpts := imageFlags(global, sharedOpts, "", "")
opts := inspectOptions{
global: global,
image: imageOpts,
}
return cli.Command{
Name: "inspect",
Usage: "Inspect image IMAGE-NAME",
Description: fmt.Sprintf(`
Return low-level information about "IMAGE-NAME" in a registry/transport
Supported transports:
@@ -40,92 +55,121 @@ var inspectCmd = cli.Command{
See skopeo(1) section "IMAGE NAMES" for the expected format
`, strings.Join(transports.ListNames(), ", ")),
ArgsUsage: "IMAGE-NAME",
Flags: []cli.Flag{
cli.StringFlag{
Name: "authfile",
Usage: "path of the authentication file. Default is ${XDG_RUNTIME_DIR}/containers/auth.json",
},
cli.StringFlag{
Name: "cert-dir",
Value: "",
Usage: "use certificates at `PATH` (*.crt, *.cert, *.key) to connect to the registry",
},
cli.BoolTFlag{
Name: "tls-verify",
Usage: "require HTTPS and verify certificates when talking to container registries (defaults to true)",
},
cli.BoolFlag{
Name: "raw",
Usage: "output raw manifest",
},
cli.StringFlag{
Name: "creds",
Value: "",
Usage: "Use `USERNAME[:PASSWORD]` for accessing the registry",
},
},
Action: func(c *cli.Context) (retErr error) {
img, err := parseImage(c)
if err != nil {
return err
}
defer func() {
if err := img.Close(); err != nil {
retErr = errors.Wrapf(retErr, fmt.Sprintf("(could not close image: %v) ", err))
}
}()
rawManifest, _, err := img.Manifest()
if err != nil {
return err
}
if c.Bool("raw") {
_, err := c.App.Writer.Write(rawManifest)
if err != nil {
return fmt.Errorf("Error writing manifest to standard output: %v", err)
}
return nil
}
imgInspect, err := img.Inspect()
if err != nil {
return err
}
outputData := inspectOutput{
Name: "", // Possibly overridden for a docker.Image.
Tag: imgInspect.Tag,
// Digest is set below.
RepoTags: []string{}, // Possibly overriden for a docker.Image.
Created: imgInspect.Created,
DockerVersion: imgInspect.DockerVersion,
Labels: imgInspect.Labels,
Architecture: imgInspect.Architecture,
Os: imgInspect.Os,
Layers: imgInspect.Layers,
}
outputData.Digest, err = manifest.Digest(rawManifest)
if err != nil {
return fmt.Errorf("Error computing manifest digest: %v", err)
}
if dockerImg, ok := img.(*docker.Image); ok {
outputData.Name = dockerImg.SourceRefFullName()
outputData.RepoTags, err = dockerImg.GetRepositoryTags()
if err != nil {
// some registries may decide to block the "list all tags" endpoint
// gracefully allow the inspect to continue in this case. Currently
// the IBM Bluemix container registry has this restriction.
if !strings.Contains(err.Error(), "401") {
return fmt.Errorf("Error determining repository tags: %v", err)
}
logrus.Warnf("Registry disallows tag list retrieval; skipping")
}
}
out, err := json.MarshalIndent(outputData, "", " ")
if err != nil {
return err
}
fmt.Fprintln(c.App.Writer, string(out))
return nil
},
ArgsUsage: "IMAGE-NAME",
Flags: append(append([]cli.Flag{
cli.BoolFlag{
Name: "raw",
Usage: "output raw manifest or configuration",
Destination: &opts.raw,
},
cli.BoolFlag{
Name: "config",
Usage: "output configuration",
Destination: &opts.config,
},
}, sharedFlags...), imageFlags...),
Action: commandAction(opts.run),
}
}
func (opts *inspectOptions) run(args []string, stdout io.Writer) (retErr error) {
ctx, cancel := opts.global.commandTimeoutContext()
defer cancel()
if len(args) != 1 {
return errors.New("Exactly one argument expected")
}
imageName := args[0]
if err := reexecIfNecessaryForImages(imageName); err != nil {
return err
}
img, err := parseImage(ctx, opts.image, imageName)
if err != nil {
return err
}
defer func() {
if err := img.Close(); err != nil {
retErr = errors.Wrapf(retErr, fmt.Sprintf("(could not close image: %v) ", err))
}
}()
rawManifest, _, err := img.Manifest(ctx)
if err != nil {
return err
}
if opts.config && opts.raw {
configBlob, err := img.ConfigBlob(ctx)
if err != nil {
return fmt.Errorf("Error reading configuration blob: %v", err)
}
_, err = stdout.Write(configBlob)
if err != nil {
return fmt.Errorf("Error writing configuration blob to standard output: %v", err)
}
return nil
} else if opts.raw {
_, err := stdout.Write(rawManifest)
if err != nil {
return fmt.Errorf("Error writing manifest to standard output: %v", err)
}
return nil
} else if opts.config {
config, err := img.OCIConfig(ctx)
if err != nil {
return fmt.Errorf("Error reading OCI-formatted configuration data: %v", err)
}
err = json.NewEncoder(stdout).Encode(config)
if err != nil {
return fmt.Errorf("Error writing OCI-formatted configuration data to standard output: %v", err)
}
return nil
}
imgInspect, err := img.Inspect(ctx)
if err != nil {
return err
}
outputData := inspectOutput{
Name: "", // Set below if DockerReference() is known
Tag: imgInspect.Tag,
// Digest is set below.
RepoTags: []string{}, // Possibly overridden for docker.Transport.
Created: imgInspect.Created,
DockerVersion: imgInspect.DockerVersion,
Labels: imgInspect.Labels,
Architecture: imgInspect.Architecture,
Os: imgInspect.Os,
Layers: imgInspect.Layers,
}
outputData.Digest, err = manifest.Digest(rawManifest)
if err != nil {
return fmt.Errorf("Error computing manifest digest: %v", err)
}
if dockerRef := img.Reference().DockerReference(); dockerRef != nil {
outputData.Name = dockerRef.Name()
}
if img.Reference().Transport() == docker.Transport {
sys, err := opts.image.newSystemContext()
if err != nil {
return err
}
outputData.RepoTags, err = docker.GetRepositoryTags(ctx, sys, img.Reference())
if err != nil {
// some registries may decide to block the "list all tags" endpoint
// gracefully allow the inspect to continue in this case. Currently
// the IBM Bluemix container registry has this restriction.
if !strings.Contains(err.Error(), "401") {
return fmt.Errorf("Error determining repository tags: %v", err)
}
logrus.Warnf("Registry disallows tag list retrieval; skipping")
}
}
out, err := json.MarshalIndent(outputData, "", " ")
if err != nil {
return err
}
fmt.Fprintf(stdout, "%s\n", string(out))
return nil
}
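
A consumer-side sketch: the JSON emitted by run() above can be decoded with a struct mirroring `inspectOutput`, declaring only the fields the caller needs. The image reference, struct name, and field selection here are illustrative only:

```go
package main

import (
	"encoding/json"
	"fmt"
	"os"
	"os/exec"
	"time"
)

// inspectResult mirrors a subset of the inspectOutput struct above.
type inspectResult struct {
	Digest       string
	RepoTags     []string
	Created      *time.Time
	Architecture string
	Layers       []string
}

func main() {
	// Hypothetical invocation; any transport:reference accepted by skopeo works.
	raw, err := exec.Command("skopeo", "inspect", "docker://docker.io/library/busybox:latest").Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	var res inspectResult
	if err := json.Unmarshal(raw, &res); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Printf("%s: %d layers, arch %s\n", res.Digest, len(res.Layers), res.Architecture)
}
```

The switch from `Created time.Time` to `Created *time.Time` above matters here: a manifest without a creation time now decodes to `nil` instead of a zero timestamp.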


@@ -2,117 +2,149 @@ package main
import (
"fmt"
"io"
"io/ioutil"
"os"
"strings"
"github.com/containers/image/directory"
"github.com/containers/image/image"
"github.com/containers/image/pkg/blobinfocache"
"github.com/containers/image/types"
"github.com/opencontainers/go-digest"
"github.com/pkg/errors"
"github.com/urfave/cli"
)
var layersCmd = cli.Command{
Name: "layers",
Usage: "Get layers of IMAGE-NAME",
ArgsUsage: "IMAGE-NAME [LAYER...]",
Hidden: true,
Action: func(c *cli.Context) (retErr error) {
fmt.Fprintln(os.Stderr, `DEPRECATED: skopeo layers is deprecated in favor of skopeo copy`)
if c.NArg() == 0 {
return errors.New("Usage: layers imageReference [layer...]")
type layersOptions struct {
global *globalOptions
image *imageOptions
}
func layersCmd(global *globalOptions) cli.Command {
sharedFlags, sharedOpts := sharedImageFlags()
imageFlags, imageOpts := imageFlags(global, sharedOpts, "", "")
opts := layersOptions{
global: global,
image: imageOpts,
}
return cli.Command{
Name: "layers",
Usage: "Get layers of IMAGE-NAME",
ArgsUsage: "IMAGE-NAME [LAYER...]",
Hidden: true,
Action: commandAction(opts.run),
Flags: append(sharedFlags, imageFlags...),
}
}
func (opts *layersOptions) run(args []string, stdout io.Writer) (retErr error) {
fmt.Fprintln(os.Stderr, `DEPRECATED: skopeo layers is deprecated in favor of skopeo copy`)
if len(args) == 0 {
return errors.New("Usage: layers imageReference [layer...]")
}
imageName := args[0]
if err := reexecIfNecessaryForImages(imageName); err != nil {
return err
}
ctx, cancel := opts.global.commandTimeoutContext()
defer cancel()
sys, err := opts.image.newSystemContext()
if err != nil {
return err
}
cache := blobinfocache.DefaultCache(sys)
rawSource, err := parseImageSource(ctx, opts.image, imageName)
if err != nil {
return err
}
src, err := image.FromSource(ctx, sys, rawSource)
if err != nil {
if closeErr := rawSource.Close(); closeErr != nil {
return errors.Wrapf(err, " (close error: %v)", closeErr)
}
ctx, err := contextFromGlobalOptions(c, "")
return err
}
defer func() {
if err := src.Close(); err != nil {
retErr = errors.Wrapf(retErr, " (close error: %v)", err)
}
}()
type blobDigest struct {
digest digest.Digest
isConfig bool
}
var blobDigests []blobDigest
for _, dString := range args[1:] {
if !strings.HasPrefix(dString, "sha256:") {
dString = "sha256:" + dString
}
d, err := digest.Parse(dString)
if err != nil {
return err
}
rawSource, err := parseImageSource(c, c.Args()[0])
blobDigests = append(blobDigests, blobDigest{digest: d, isConfig: false})
}
if len(blobDigests) == 0 {
layers := src.LayerInfos()
seenLayers := map[digest.Digest]struct{}{}
for _, info := range layers {
if _, ok := seenLayers[info.Digest]; !ok {
blobDigests = append(blobDigests, blobDigest{digest: info.Digest, isConfig: false})
seenLayers[info.Digest] = struct{}{}
}
}
configInfo := src.ConfigInfo()
if configInfo.Digest != "" {
blobDigests = append(blobDigests, blobDigest{digest: configInfo.Digest, isConfig: true})
}
}
tmpDir, err := ioutil.TempDir(".", "layers-")
if err != nil {
return err
}
tmpDirRef, err := directory.NewReference(tmpDir)
if err != nil {
return err
}
dest, err := tmpDirRef.NewImageDestination(ctx, nil)
if err != nil {
return err
}
defer func() {
if err := dest.Close(); err != nil {
retErr = errors.Wrapf(retErr, " (close error: %v)", err)
}
}()
for _, bd := range blobDigests {
r, blobSize, err := rawSource.GetBlob(ctx, types.BlobInfo{Digest: bd.digest, Size: -1}, cache)
if err != nil {
return err
}
src, err := image.FromSource(ctx, rawSource)
if err != nil {
if closeErr := rawSource.Close(); closeErr != nil {
if _, err := dest.PutBlob(ctx, r, types.BlobInfo{Digest: bd.digest, Size: blobSize}, cache, bd.isConfig); err != nil {
if closeErr := r.Close(); closeErr != nil {
return errors.Wrapf(err, " (close error: %v)", closeErr)
}
return err
}
defer func() {
if err := src.Close(); err != nil {
retErr = errors.Wrapf(retErr, " (close error: %v)", err)
}
}()
}
var blobDigests []digest.Digest
for _, dString := range c.Args().Tail() {
if !strings.HasPrefix(dString, "sha256:") {
dString = "sha256:" + dString
}
d, err := digest.Parse(dString)
if err != nil {
return err
}
blobDigests = append(blobDigests, d)
}
manifest, _, err := src.Manifest(ctx)
if err != nil {
return err
}
if err := dest.PutManifest(ctx, manifest); err != nil {
return err
}
if len(blobDigests) == 0 {
layers := src.LayerInfos()
seenLayers := map[digest.Digest]struct{}{}
for _, info := range layers {
if _, ok := seenLayers[info.Digest]; !ok {
blobDigests = append(blobDigests, info.Digest)
seenLayers[info.Digest] = struct{}{}
}
}
configInfo := src.ConfigInfo()
if configInfo.Digest != "" {
blobDigests = append(blobDigests, configInfo.Digest)
}
}
tmpDir, err := ioutil.TempDir(".", "layers-")
if err != nil {
return err
}
tmpDirRef, err := directory.NewReference(tmpDir)
if err != nil {
return err
}
dest, err := tmpDirRef.NewImageDestination(nil)
if err != nil {
return err
}
defer func() {
if err := dest.Close(); err != nil {
retErr = errors.Wrapf(retErr, " (close error: %v)", err)
}
}()
for _, digest := range blobDigests {
r, blobSize, err := rawSource.GetBlob(types.BlobInfo{Digest: digest, Size: -1})
if err != nil {
return err
}
if _, err := dest.PutBlob(r, types.BlobInfo{Digest: digest, Size: blobSize}); err != nil {
if closeErr := r.Close(); closeErr != nil {
return errors.Wrapf(err, " (close error: %v)", closeErr)
}
return err
}
}
manifest, _, err := src.Manifest()
if err != nil {
return err
}
if err := dest.PutManifest(manifest); err != nil {
return err
}
return dest.Commit()
},
return dest.Commit(ctx)
}
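
The GetBlob/PutBlob pairing in run() generalizes to any source/destination pair. A sketch under the `containers/image` API revision vendored here; `copyBlob` is a hypothetical helper, and the close-error bookkeeping of the original is simplified:

```go
package main

import (
	"context"

	"github.com/containers/image/types"
)

// copyBlob streams one blob from a source into a destination, letting the
// blob info cache record where the blob now lives.
func copyBlob(ctx context.Context, src types.ImageSource, dest types.ImageDestination,
	info types.BlobInfo, cache types.BlobInfoCache, isConfig bool) error {
	r, size, err := src.GetBlob(ctx, info, cache)
	if err != nil {
		return err
	}
	defer r.Close()
	_, err = dest.PutBlob(ctx, r, types.BlobInfo{Digest: info.Digest, Size: size}, cache, isConfig)
	return err
}
```

The `isConfig` parameter is the reason run() above now tracks a `blobDigest` struct rather than bare digests: config blobs and layer blobs must be distinguished when writing to the destination.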


@@ -1,12 +1,14 @@
package main
import (
"context"
"fmt"
"os"
"time"
"github.com/containers/image/signature"
"github.com/containers/skopeo/version"
"github.com/containers/storage/pkg/reexec"
"github.com/projectatomic/skopeo/version"
"github.com/sirupsen/logrus"
"github.com/urfave/cli"
)
@@ -15,8 +17,22 @@ import (
// and will be populated by the Makefile
var gitCommit = ""
// createApp returns a cli.App to be run or tested.
func createApp() *cli.App {
type globalOptions struct {
debug bool // Enable debug output
tlsVerify optionalBool // Require HTTPS and verify certificates (for docker: and docker-daemon:)
policyPath string // Path to a signature verification policy file
insecurePolicy bool // Use an "allow everything" signature verification policy
registriesDirPath string // Path to a "registries.d" registry configuration directory
overrideArch string // Architecture to use for choosing images, instead of the runtime one
overrideOS string // OS to use for choosing images, instead of the runtime one
commandTimeout time.Duration // Timeout for the command execution
registriesConfPath string // Path to the "registries.conf" file
}
// createApp returns a cli.App, and the underlying globalOptions object, to be run or tested.
func createApp() (*cli.App, *globalOptions) {
opts := globalOptions{}
app := cli.NewApp()
app.EnableBashCompletion = true
app.Name = "skopeo"
@@ -28,85 +44,112 @@ func createApp() *cli.App {
app.Usage = "Various operations with container images and container image registries"
app.Flags = []cli.Flag{
cli.BoolFlag{
Name: "debug",
Usage: "enable debug output",
Name: "debug",
Usage: "enable debug output",
Destination: &opts.debug,
},
cli.BoolTFlag{
cli.GenericFlag{
Name: "tls-verify",
Usage: "require HTTPS and verify certificates when talking to container registries (defaults to true)",
Hidden: true,
Value: newOptionalBoolValue(&opts.tlsVerify),
},
cli.StringFlag{
Name: "policy",
Value: "",
Usage: "Path to a trust policy file",
Name: "policy",
Usage: "Path to a trust policy file",
Destination: &opts.policyPath,
},
cli.BoolFlag{
Name: "insecure-policy",
Usage: "run the tool without any policy check",
Name: "insecure-policy",
Usage: "run the tool without any policy check",
Destination: &opts.insecurePolicy,
},
cli.StringFlag{
Name: "registries.d",
Value: "",
Usage: "use registry configuration files in `DIR` (e.g. for container signature storage)",
Name: "registries.d",
Usage: "use registry configuration files in `DIR` (e.g. for container signature storage)",
Destination: &opts.registriesDirPath,
},
cli.StringFlag{
Name: "override-arch",
Value: "",
Usage: "use `ARCH` instead of the architecture of the machine for choosing images",
Name: "override-arch",
Usage: "use `ARCH` instead of the architecture of the machine for choosing images",
Destination: &opts.overrideArch,
},
cli.StringFlag{
Name: "override-os",
Value: "",
Usage: "use `OS` instead of the running OS for choosing images",
Name: "override-os",
Usage: "use `OS` instead of the running OS for choosing images",
Destination: &opts.overrideOS,
},
cli.DurationFlag{
Name: "command-timeout",
Usage: "timeout for the command execution",
Destination: &opts.commandTimeout,
},
cli.StringFlag{
Name: "registries-conf",
Usage: "path to the registries.conf file",
Destination: &opts.registriesConfPath,
Hidden: true,
},
}
app.Before = func(c *cli.Context) error {
if c.GlobalBool("debug") {
logrus.SetLevel(logrus.DebugLevel)
}
if c.GlobalIsSet("tls-verify") {
logrus.Warn("'--tls-verify' is deprecated, please set this on the specific subcommand")
}
return nil
}
app.Before = opts.before
app.Commands = []cli.Command{
copyCmd,
inspectCmd,
layersCmd,
deleteCmd,
manifestDigestCmd,
standaloneSignCmd,
standaloneVerifyCmd,
untrustedSignatureDumpCmd,
copyCmd(&opts),
inspectCmd(&opts),
layersCmd(&opts),
deleteCmd(&opts),
manifestDigestCmd(),
standaloneSignCmd(),
standaloneVerifyCmd(),
untrustedSignatureDumpCmd(),
}
return app
return app, &opts
}
// before is run by the cli package for any command, before running the command-specific handler.
func (opts *globalOptions) before(ctx *cli.Context) error {
if opts.debug {
logrus.SetLevel(logrus.DebugLevel)
}
if opts.tlsVerify.present {
logrus.Warn("'--tls-verify' is deprecated, please set this on the specific subcommand")
}
return nil
}
func main() {
if reexec.Init() {
return
}
app := createApp()
app, _ := createApp()
if err := app.Run(os.Args); err != nil {
logrus.Fatal(err)
}
}
// getPolicyContext handles the global "policy" flag.
func getPolicyContext(c *cli.Context) (*signature.PolicyContext, error) {
policyPath := c.GlobalString("policy")
var policy *signature.Policy // This could be cached across calls, if we had an application context.
// getPolicyContext returns a *signature.PolicyContext based on opts.
func (opts *globalOptions) getPolicyContext() (*signature.PolicyContext, error) {
var policy *signature.Policy // This could be cached across calls in opts.
var err error
if c.GlobalBool("insecure-policy") {
if opts.insecurePolicy {
policy = &signature.Policy{Default: []signature.PolicyRequirement{signature.NewPRInsecureAcceptAnything()}}
} else if policyPath == "" {
} else if opts.policyPath == "" {
policy, err = signature.DefaultPolicy(nil)
} else {
policy, err = signature.NewPolicyFromFile(policyPath)
policy, err = signature.NewPolicyFromFile(opts.policyPath)
}
if err != nil {
return nil, err
}
return signature.NewPolicyContext(policy)
}
// commandTimeoutContext returns a context.Context and a cancellation callback based on opts.
// The caller should usually "defer cancel()" immediately after calling this.
func (opts *globalOptions) commandTimeoutContext() (context.Context, context.CancelFunc) {
ctx := context.Background()
var cancel context.CancelFunc = func() {}
if opts.commandTimeout > 0 {
ctx, cancel = context.WithTimeout(ctx, opts.commandTimeout)
}
return ctx, cancel
}
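The zero-timeout branch above is what makes an unconditional `defer cancel()` safe for callers. A minimal, self-contained sketch of the same pattern (`timeoutContext` is a stand-in name used only for this sketch, not part of skopeo):

```go
package main

import (
	"context"
	"fmt"
	"time"
)

// timeoutContext mirrors commandTimeoutContext: a zero timeout means "no
// deadline", and the caller always receives a cancel func it can defer.
func timeoutContext(timeout time.Duration) (context.Context, context.CancelFunc) {
	ctx := context.Background()
	var cancel context.CancelFunc = func() {}
	if timeout > 0 {
		ctx, cancel = context.WithTimeout(ctx, timeout)
	}
	return ctx, cancel
}

func main() {
	ctx, cancel := timeoutContext(2 * time.Second)
	defer cancel() // safe even when no timeout was requested
	deadline, ok := ctx.Deadline()
	fmt.Println(deadline, ok) // with a zero timeout, ok would be false
}
```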


@@ -5,7 +5,7 @@ import "bytes"
// runSkopeo creates an app object and runs it with args, with an implied first "skopeo".
// Returns output intended for stdout and the returned error, if any.
func runSkopeo(args ...string) (string, error) {
app := createApp()
app, _ := createApp()
stdout := bytes.Buffer{}
app.Writer = &stdout
args = append([]string{"skopeo"}, args...)


@@ -3,17 +3,31 @@ package main
import (
"errors"
"fmt"
"io"
"io/ioutil"
"github.com/containers/image/manifest"
"github.com/urfave/cli"
)
func manifestDigest(context *cli.Context) error {
if len(context.Args()) != 1 {
type manifestDigestOptions struct {
}
func manifestDigestCmd() cli.Command {
opts := manifestDigestOptions{}
return cli.Command{
Name: "manifest-digest",
Usage: "Compute a manifest digest of a file",
ArgsUsage: "MANIFEST",
Action: commandAction(opts.run),
}
}
func (opts *manifestDigestOptions) run(args []string, stdout io.Writer) error {
if len(args) != 1 {
return errors.New("Usage: skopeo manifest-digest manifest")
}
manifestPath := context.Args()[0]
manifestPath := args[0]
man, err := ioutil.ReadFile(manifestPath)
if err != nil {
@@ -23,13 +37,6 @@ func manifestDigest(context *cli.Context) error {
if err != nil {
return fmt.Errorf("Error computing digest: %v", err)
}
fmt.Fprintf(context.App.Writer, "%s\n", digest)
fmt.Fprintf(stdout, "%s\n", digest)
return nil
}
var manifestDigestCmd = cli.Command{
Name: "manifest-digest",
Usage: "Compute a manifest digest of a file",
ArgsUsage: "MANIFEST",
Action: manifestDigest,
}
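The digest computation above delegates to containers/image. A self-contained sketch of the same call (it assumes a `manifest.json` in the current directory and the `github.com/containers/image` dependency being available):

```go
package main

import (
	"fmt"
	"io/ioutil"

	"github.com/containers/image/manifest"
)

func main() {
	blob, err := ioutil.ReadFile("manifest.json")
	if err != nil {
		panic(err)
	}
	// manifest.Digest computes the digest used to identify this manifest.
	digest, err := manifest.Digest(blob)
	if err != nil {
		panic(err)
	}
	fmt.Println(digest)
}
```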


@@ -4,20 +4,41 @@ import (
"encoding/json"
"errors"
"fmt"
"io"
"io/ioutil"
"github.com/containers/image/signature"
"github.com/urfave/cli"
)
func standaloneSign(context *cli.Context) error {
outputFile := context.String("output")
if len(context.Args()) != 3 || outputFile == "" {
type standaloneSignOptions struct {
output string // Output file path
}
func standaloneSignCmd() cli.Command {
opts := standaloneSignOptions{}
return cli.Command{
Name: "standalone-sign",
Usage: "Create a signature using local files",
ArgsUsage: "MANIFEST DOCKER-REFERENCE KEY-FINGERPRINT",
Action: commandAction(opts.run),
Flags: []cli.Flag{
cli.StringFlag{
Name: "output, o",
Usage: "output the signature to `SIGNATURE`",
Destination: &opts.output,
},
},
}
}
func (opts *standaloneSignOptions) run(args []string, stdout io.Writer) error {
if len(args) != 3 || opts.output == "" {
return errors.New("Usage: skopeo standalone-sign manifest docker-reference key-fingerprint -o signature")
}
manifestPath := context.Args()[0]
dockerReference := context.Args()[1]
fingerprint := context.Args()[2]
manifestPath := args[0]
dockerReference := args[1]
fingerprint := args[2]
manifest, err := ioutil.ReadFile(manifestPath)
if err != nil {
@@ -34,33 +55,33 @@ func standaloneSign(context *cli.Context) error {
return fmt.Errorf("Error creating signature: %v", err)
}
if err := ioutil.WriteFile(outputFile, signature, 0644); err != nil {
return fmt.Errorf("Error writing signature to %s: %v", outputFile, err)
if err := ioutil.WriteFile(opts.output, signature, 0644); err != nil {
return fmt.Errorf("Error writing signature to %s: %v", opts.output, err)
}
return nil
}
var standaloneSignCmd = cli.Command{
Name: "standalone-sign",
Usage: "Create a signature using local files",
ArgsUsage: "MANIFEST DOCKER-REFERENCE KEY-FINGERPRINT",
Action: standaloneSign,
Flags: []cli.Flag{
cli.StringFlag{
Name: "output, o",
Usage: "output the signature to `SIGNATURE`",
},
},
type standaloneVerifyOptions struct {
}
func standaloneVerify(context *cli.Context) error {
if len(context.Args()) != 4 {
func standaloneVerifyCmd() cli.Command {
opts := standaloneVerifyOptions{}
return cli.Command{
Name: "standalone-verify",
Usage: "Verify a signature using local files",
ArgsUsage: "MANIFEST DOCKER-REFERENCE KEY-FINGERPRINT SIGNATURE",
Action: commandAction(opts.run),
}
}
func (opts *standaloneVerifyOptions) run(args []string, stdout io.Writer) error {
if len(args) != 4 {
return errors.New("Usage: skopeo standalone-verify manifest docker-reference key-fingerprint signature")
}
manifestPath := context.Args()[0]
expectedDockerReference := context.Args()[1]
expectedFingerprint := context.Args()[2]
signaturePath := context.Args()[3]
manifestPath := args[0]
expectedDockerReference := args[1]
expectedFingerprint := args[2]
signaturePath := args[3]
unverifiedManifest, err := ioutil.ReadFile(manifestPath)
if err != nil {
@@ -81,22 +102,35 @@ func standaloneVerify(context *cli.Context) error {
return fmt.Errorf("Error verifying signature: %v", err)
}
fmt.Fprintf(context.App.Writer, "Signature verified, digest %s\n", sig.DockerManifestDigest)
fmt.Fprintf(stdout, "Signature verified, digest %s\n", sig.DockerManifestDigest)
return nil
}
var standaloneVerifyCmd = cli.Command{
Name: "standalone-verify",
Usage: "Verify a signature using local files",
ArgsUsage: "MANIFEST DOCKER-REFERENCE KEY-FINGERPRINT SIGNATURE",
Action: standaloneVerify,
// WARNING: Do not use the contents of this for ANY security decisions,
// and be VERY CAREFUL about showing this information to humans in any way which suggests that these values “are probably” reliable.
// There is NO REASON to expect the values to be correct, or not intentionally misleading
// (including things like “✅ Verified by $authority”)
//
// The subcommand is undocumented, and it may be renamed or entirely disappear in the future.
type untrustedSignatureDumpOptions struct {
}
func untrustedSignatureDump(context *cli.Context) error {
if len(context.Args()) != 1 {
func untrustedSignatureDumpCmd() cli.Command {
opts := untrustedSignatureDumpOptions{}
return cli.Command{
Name: "untrusted-signature-dump-without-verification",
Usage: "Dump contents of a signature WITHOUT VERIFYING IT",
ArgsUsage: "SIGNATURE",
Hidden: true,
Action: commandAction(opts.run),
}
}
func (opts *untrustedSignatureDumpOptions) run(args []string, stdout io.Writer) error {
if len(args) != 1 {
return errors.New("Usage: skopeo untrusted-signature-dump-without-verification signature")
}
untrustedSignaturePath := context.Args()[0]
untrustedSignaturePath := args[0]
untrustedSignature, err := ioutil.ReadFile(untrustedSignaturePath)
if err != nil {
@@ -111,20 +145,6 @@ func untrustedSignatureDump(context *cli.Context) error {
if err != nil {
return err
}
fmt.Fprintln(context.App.Writer, string(untrustedOut))
fmt.Fprintln(stdout, string(untrustedOut))
return nil
}
// WARNING: Do not use the contents of this for ANY security decisions,
// and be VERY CAREFUL about showing this information to humans in any way which suggests that these values “are probably” reliable.
// There is NO REASON to expect the values to be correct, or not intentionally misleading
// (including things like “✅ Verified by $authority”)
//
// The subcommand is undocumented, and it may be renamed or entirely disappear in the future.
var untrustedSignatureDumpCmd = cli.Command{
Name: "untrusted-signature-dump-without-verification",
Usage: "Dump contents of a signature WITHOUT VERIFYING IT",
ArgsUsage: "SIGNATURE",
Hidden: true,
Action: untrustedSignatureDump,
}

cmd/skopeo/unshare.go Normal file

@@ -0,0 +1,11 @@
// +build !linux
package main
func maybeReexec() error {
return nil
}
func reexecIfNecessaryForImages(inputImageNames ...string) error {
return nil
}


@@ -0,0 +1,47 @@
package main
import (
"github.com/containers/buildah/pkg/unshare"
"github.com/containers/image/storage"
"github.com/containers/image/transports/alltransports"
"github.com/pkg/errors"
"github.com/syndtr/gocapability/capability"
)
var neededCapabilities = []capability.Cap{
capability.CAP_CHOWN,
capability.CAP_DAC_OVERRIDE,
capability.CAP_FOWNER,
capability.CAP_FSETID,
capability.CAP_MKNOD,
capability.CAP_SETFCAP,
}
func maybeReexec() error {
// With Skopeo we need only the subset of the root capabilities necessary
// for pulling an image to the storage. Do not attempt to create a namespace
// if we already have the capabilities we need.
capabilities, err := capability.NewPid(0)
if err != nil {
return errors.Wrapf(err, "error reading the current capabilities sets")
}
for _, cap := range neededCapabilities {
if !capabilities.Get(capability.EFFECTIVE, cap) {
// We lack a capability we need, so create a user namespace
unshare.MaybeReexecUsingUserNamespace(true)
return nil
}
}
return nil
}
func reexecIfNecessaryForImages(imageNames ...string) error {
// Check whether containers-storage is used before unsharing
for _, imageName := range imageNames {
transport := alltransports.TransportFromImageName(imageName)
if transport != nil && transport.Name() == storage.Transport.Name() {
return maybeReexec()
}
}
return nil
}
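The effective-capability probe above can be exercised in isolation. A small, Linux-only sketch using the same gocapability calls (the two capabilities here are just an example subset of neededCapabilities):

```go
package main

import (
	"fmt"

	"github.com/syndtr/gocapability/capability"
)

func main() {
	needed := []capability.Cap{capability.CAP_CHOWN, capability.CAP_DAC_OVERRIDE}
	caps, err := capability.NewPid(0) // 0 selects the current process
	if err != nil {
		panic(err)
	}
	for _, c := range needed {
		if !caps.Get(capability.EFFECTIVE, c) {
			fmt.Println("missing:", c) // capability.Cap implements fmt.Stringer
		}
	}
}
```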


@@ -1,7 +1,9 @@
package main
import (
"context"
"errors"
"io"
"strings"
"github.com/containers/image/transports/alltransports"
@@ -9,33 +11,184 @@ import (
"github.com/urfave/cli"
)
func contextFromGlobalOptions(c *cli.Context, flagPrefix string) (*types.SystemContext, error) {
// errorShouldDisplayUsage is a subtype of error used by command handlers to indicate that cli.ShowSubcommandHelp should be called.
type errorShouldDisplayUsage struct {
error
}
// commandAction intermediates between the cli.ActionFunc interface and the real handler,
// primarily to ensure that cli.Context is not available to the handler, which in turn
// makes sure that the cli.String() etc. flag access functions are not used,
// and everything is done using the *Options structures and the Destination: members of cli.Flag.
// handler may return errorShouldDisplayUsage to cause cli.ShowSubcommandHelp to be called.
func commandAction(handler func(args []string, stdout io.Writer) error) cli.ActionFunc {
return func(c *cli.Context) error {
err := handler(([]string)(c.Args()), c.App.Writer)
if _, ok := err.(errorShouldDisplayUsage); ok {
cli.ShowSubcommandHelp(c)
}
return err
}
}
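A toy program wired through the same adapter shape (the `greet` command and the `adapt` helper are hypothetical names for this sketch; skopeo's real subcommands follow the same pattern via commandAction):

```go
package main

import (
	"fmt"
	"io"
	"os"

	"github.com/urfave/cli"
)

// greet never sees a *cli.Context: only its arguments and a writer.
func greet(args []string, stdout io.Writer) error {
	fmt.Fprintf(stdout, "hello %v\n", args)
	return nil
}

func adapt(handler func(args []string, stdout io.Writer) error) cli.ActionFunc {
	return func(c *cli.Context) error {
		return handler(([]string)(c.Args()), c.App.Writer)
	}
}

func main() {
	app := cli.NewApp()
	app.Commands = []cli.Command{{Name: "greet", Action: adapt(greet)}}
	if err := app.Run(os.Args); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```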
// sharedImageOptions collects CLI flags which are image-related, but do not change across images.
// This really should be a part of globalOptions, but that would break existing users of (skopeo copy --authfile=).
type sharedImageOptions struct {
authFilePath string // Path to a */containers/auth.json
}
// sharedImageFlags prepares a collection of CLI flags writing into sharedImageOptions, and the managed sharedImageOptions structure.
func sharedImageFlags() ([]cli.Flag, *sharedImageOptions) {
opts := sharedImageOptions{}
return []cli.Flag{
cli.StringFlag{
Name: "authfile",
Usage: "path of the authentication file. Default is ${XDG_RUNTIME_DIR}/containers/auth.json",
Destination: &opts.authFilePath,
},
}, &opts
}
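The `Destination:` binding is what lets handlers ignore cli.Context entirely: parsed values land directly in the options struct. A self-contained sketch of the idiom (`--authfile` reused as the example flag):

```go
package main

import (
	"fmt"
	"os"

	"github.com/urfave/cli"
)

type options struct {
	authFilePath string
}

func optionFlags() ([]cli.Flag, *options) {
	opts := options{}
	return []cli.Flag{
		cli.StringFlag{
			Name:        "authfile",
			Usage:       "path of the authentication file",
			Destination: &opts.authFilePath, // the parsed value lands here
		},
	}, &opts
}

func main() {
	flags, opts := optionFlags()
	app := cli.NewApp()
	app.Flags = flags
	app.Action = func(c *cli.Context) error {
		// No c.String("authfile") anywhere: the struct already holds the value.
		fmt.Println("authfile:", opts.authFilePath)
		return nil
	}
	if err := app.Run(os.Args); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```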
// imageOptions collects CLI flags which are the same across subcommands, but may be different for each image
// (e.g. may differ between the source and destination of a copy)
type imageOptions struct {
global *globalOptions // May be shared across several imageOptions instances.
shared *sharedImageOptions // May be shared across several imageOptions instances.
credsOption optionalString // username[:password] for accessing a registry
dockerCertPath string // A directory using Docker-like *.{crt,cert,key} files for connecting to a registry or a daemon
tlsVerify optionalBool // Require HTTPS and verify certificates (for docker: and docker-daemon:)
sharedBlobDir string // A directory to use for OCI blobs, shared across repositories
dockerDaemonHost string // docker-daemon: host to connect to
noCreds bool // Access the registry anonymously
}
// imageFlags prepares a collection of CLI flags writing into imageOptions, and the managed imageOptions structure.
func imageFlags(global *globalOptions, shared *sharedImageOptions, flagPrefix, credsOptionAlias string) ([]cli.Flag, *imageOptions) {
opts := imageOptions{
global: global,
shared: shared,
}
// This is horribly ugly, but we need to support the old option forms of (skopeo copy) for compatibility.
// Don't add any more cases like this.
credsOptionExtra := ""
if credsOptionAlias != "" {
credsOptionExtra += "," + credsOptionAlias
}
return []cli.Flag{
cli.GenericFlag{
Name: flagPrefix + "creds" + credsOptionExtra,
Usage: "Use `USERNAME[:PASSWORD]` for accessing the registry",
Value: newOptionalStringValue(&opts.credsOption),
},
cli.StringFlag{
Name: flagPrefix + "cert-dir",
Usage: "use certificates at `PATH` (*.crt, *.cert, *.key) to connect to the registry or daemon",
Destination: &opts.dockerCertPath,
},
cli.GenericFlag{
Name: flagPrefix + "tls-verify",
Usage: "require HTTPS and verify certificates when talking to the container registry or daemon (defaults to true)",
Value: newOptionalBoolValue(&opts.tlsVerify),
},
cli.StringFlag{
Name: flagPrefix + "shared-blob-dir",
Usage: "`DIRECTORY` to use to share blobs across OCI repositories",
Destination: &opts.sharedBlobDir,
},
cli.StringFlag{
Name: flagPrefix + "daemon-host",
Usage: "use docker daemon host at `HOST` (docker-daemon: only)",
Destination: &opts.dockerDaemonHost,
},
cli.BoolFlag{
Name: flagPrefix + "no-creds",
Usage: "Access the registry anonymously",
Destination: &opts.noCreds,
},
}, &opts
}
// newSystemContext returns a *types.SystemContext corresponding to opts.
// It is guaranteed to return a fresh instance, so it is safe to make additional updates to it.
func (opts *imageOptions) newSystemContext() (*types.SystemContext, error) {
ctx := &types.SystemContext{
RegistriesDirPath: c.GlobalString("registries.d"),
ArchitectureChoice: c.GlobalString("override-arch"),
OSChoice: c.GlobalString("override-os"),
DockerCertPath: c.String(flagPrefix + "cert-dir"),
// DEPRECATED: keep this here for backward compatibility, but override
// them if per subcommand flags are provided (see below).
DockerInsecureSkipTLSVerify: !c.GlobalBoolT("tls-verify"),
OSTreeTmpDirPath: c.String(flagPrefix + "ostree-tmp-dir"),
OCISharedBlobDirPath: c.String(flagPrefix + "shared-blob-dir"),
DirForceCompress: c.Bool(flagPrefix + "compress"),
AuthFilePath: c.String("authfile"),
RegistriesDirPath: opts.global.registriesDirPath,
ArchitectureChoice: opts.global.overrideArch,
OSChoice: opts.global.overrideOS,
DockerCertPath: opts.dockerCertPath,
OCISharedBlobDirPath: opts.sharedBlobDir,
AuthFilePath: opts.shared.authFilePath,
DockerDaemonHost: opts.dockerDaemonHost,
DockerDaemonCertPath: opts.dockerCertPath,
SystemRegistriesConfPath: opts.global.registriesConfPath,
}
if c.IsSet(flagPrefix + "tls-verify") {
ctx.DockerInsecureSkipTLSVerify = !c.BoolT(flagPrefix + "tls-verify")
if opts.tlsVerify.present {
ctx.DockerDaemonInsecureSkipTLSVerify = !opts.tlsVerify.value
}
if c.IsSet(flagPrefix + "creds") {
// DEPRECATED: We support this for backward compatibility, but override it if a per-image flag is provided.
if opts.global.tlsVerify.present {
ctx.DockerInsecureSkipTLSVerify = types.NewOptionalBool(!opts.global.tlsVerify.value)
}
if opts.tlsVerify.present {
ctx.DockerInsecureSkipTLSVerify = types.NewOptionalBool(!opts.tlsVerify.value)
}
if opts.credsOption.present && opts.noCreds {
return nil, errors.New("creds and no-creds cannot be specified at the same time")
}
if opts.credsOption.present {
var err error
ctx.DockerAuthConfig, err = getDockerAuth(c.String(flagPrefix + "creds"))
ctx.DockerAuthConfig, err = getDockerAuth(opts.credsOption.value)
if err != nil {
return nil, err
}
}
if opts.noCreds {
ctx.DockerAuthConfig = &types.DockerAuthConfig{}
}
return ctx, nil
}
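The two tlsVerify checks above encode a precedence rule: the deprecated global flag applies first, a per-image flag then overrides it, and an absent flag leaves the default untouched. A tiny self-contained model of that rule (`optionalBool` is re-declared here just for the sketch):

```go
package main

import "fmt"

// optionalBool distinguishes "unset" from an explicit false, as in the CLI code.
type optionalBool struct {
	present bool
	value   bool
}

// skipTLSVerify applies the global setting first and lets the per-image one win.
func skipTLSVerify(global, perImage optionalBool) string {
	result := "default"
	if global.present {
		result = fmt.Sprint(!global.value)
	}
	if perImage.present {
		result = fmt.Sprint(!perImage.value)
	}
	return result
}

func main() {
	fmt.Println(skipTLSVerify(optionalBool{}, optionalBool{}))                      // default
	fmt.Println(skipTLSVerify(optionalBool{true, false}, optionalBool{}))           // true
	fmt.Println(skipTLSVerify(optionalBool{true, false}, optionalBool{true, true})) // false: per-image wins
}
```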
// imageDestOptions is a superset of imageOptions specialized for image destinations.
type imageDestOptions struct {
*imageOptions
osTreeTmpDir string // A directory to use for OSTree temporary files
dirForceCompression bool // Compress layers when saving to the dir: transport
}
// imageDestFlags prepares a collection of CLI flags writing into imageDestOptions, and the managed imageDestOptions structure.
func imageDestFlags(global *globalOptions, shared *sharedImageOptions, flagPrefix, credsOptionAlias string) ([]cli.Flag, *imageDestOptions) {
genericFlags, genericOptions := imageFlags(global, shared, flagPrefix, credsOptionAlias)
opts := imageDestOptions{imageOptions: genericOptions}
return append(genericFlags, []cli.Flag{
cli.StringFlag{
Name: flagPrefix + "ostree-tmp-dir",
Usage: "`DIRECTORY` to use for OSTree temporary files",
Destination: &opts.osTreeTmpDir,
},
cli.BoolFlag{
Name: flagPrefix + "compress",
Usage: "Compress tarball image layers when saving to directory using the 'dir' transport. (default is same compression type as source)",
Destination: &opts.dirForceCompression,
},
}...), &opts
}
// newSystemContext returns a *types.SystemContext corresponding to opts.
// It is guaranteed to return a fresh instance, so it is safe to make additional updates to it.
func (opts *imageDestOptions) newSystemContext() (*types.SystemContext, error) {
ctx, err := opts.imageOptions.newSystemContext()
if err != nil {
return nil, err
}
ctx.OSTreeTmpDirPath = opts.osTreeTmpDir
ctx.DirForceCompress = opts.dirForceCompression
return ctx, err
}
func parseCreds(creds string) (string, string, error) {
if creds == "" {
return "", "", errors.New("credentials can't be empty")
@@ -63,29 +216,28 @@ func getDockerAuth(creds string) (*types.DockerAuthConfig, error) {
// parseImage converts image URL-like string to an initialized handler for that image.
// The caller must call .Close() on the returned ImageCloser.
func parseImage(c *cli.Context) (types.ImageCloser, error) {
imgName := c.Args().First()
ref, err := alltransports.ParseImageName(imgName)
if err != nil {
return nil, err
}
ctx, err := contextFromGlobalOptions(c, "")
if err != nil {
return nil, err
}
return ref.NewImage(ctx)
}
// parseImageSource converts image URL-like string to an ImageSource.
// The caller must call .Close() on the returned ImageSource.
func parseImageSource(c *cli.Context, name string) (types.ImageSource, error) {
func parseImage(ctx context.Context, opts *imageOptions, name string) (types.ImageCloser, error) {
ref, err := alltransports.ParseImageName(name)
if err != nil {
return nil, err
}
ctx, err := contextFromGlobalOptions(c, "")
sys, err := opts.newSystemContext()
if err != nil {
return nil, err
}
return ref.NewImageSource(ctx)
return ref.NewImage(ctx, sys)
}
// parseImageSource converts image URL-like string to an ImageSource.
// The caller must call .Close() on the returned ImageSource.
func parseImageSource(ctx context.Context, opts *imageOptions, name string) (types.ImageSource, error) {
ref, err := alltransports.ParseImageName(name)
if err != nil {
return nil, err
}
sys, err := opts.newSystemContext()
if err != nil {
return nil, err
}
return ref.NewImageSource(ctx, sys)
}
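Callers of these helpers follow a parse, open, defer-Close shape. A self-contained sketch using the same containers/image calls (the busybox reference is only an example, and running it needs network access):

```go
package main

import (
	"context"
	"fmt"

	"github.com/containers/image/transports/alltransports"
	"github.com/containers/image/types"
)

func main() {
	ref, err := alltransports.ParseImageName("docker://busybox:latest")
	if err != nil {
		panic(err)
	}
	src, err := ref.NewImageSource(context.Background(), &types.SystemContext{})
	if err != nil {
		panic(err)
	}
	defer src.Close() // the caller owns the ImageSource
	fmt.Println("opened", ref.StringWithinTransport())
}
```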

cmd/skopeo/utils_test.go Normal file

@@ -0,0 +1,184 @@
package main
import (
"flag"
"testing"
"github.com/containers/image/types"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
// fakeGlobalOptions creates globalOptions and sets it according to flags.
// NOTE: This is QUITE FAKE; none of the urfave/cli normalization and the like happens.
func fakeGlobalOptions(t *testing.T, flags []string) *globalOptions {
app, opts := createApp()
flagSet := flag.NewFlagSet(app.Name, flag.ContinueOnError)
for _, f := range app.Flags {
f.Apply(flagSet)
}
err := flagSet.Parse(flags)
require.NoError(t, err)
return opts
}
// fakeImageOptions creates imageOptions and sets it according to globalFlags/cmdFlags.
// NOTE: This is QUITE FAKE; none of the urfave/cli normalization and the like happens.
func fakeImageOptions(t *testing.T, flagPrefix string, globalFlags []string, cmdFlags []string) *imageOptions {
globalOpts := fakeGlobalOptions(t, globalFlags)
sharedFlags, sharedOpts := sharedImageFlags()
imageFlags, imageOpts := imageFlags(globalOpts, sharedOpts, flagPrefix, "")
flagSet := flag.NewFlagSet("fakeImageOptions", flag.ContinueOnError)
for _, f := range append(sharedFlags, imageFlags...) {
f.Apply(flagSet)
}
err := flagSet.Parse(cmdFlags)
require.NoError(t, err)
return imageOpts
}
func TestImageOptionsNewSystemContext(t *testing.T) {
// Default state
opts := fakeImageOptions(t, "dest-", []string{}, []string{})
res, err := opts.newSystemContext()
require.NoError(t, err)
assert.Equal(t, &types.SystemContext{}, res)
// Set everything to non-default values.
opts = fakeImageOptions(t, "dest-", []string{
"--registries.d", "/srv/registries.d",
"--override-arch", "overridden-arch",
"--override-os", "overridden-os",
}, []string{
"--authfile", "/srv/authfile",
"--dest-cert-dir", "/srv/cert-dir",
"--dest-shared-blob-dir", "/srv/shared-blob-dir",
"--dest-daemon-host", "daemon-host.example.com",
"--dest-tls-verify=false",
"--dest-creds", "creds-user:creds-password",
})
res, err = opts.newSystemContext()
require.NoError(t, err)
assert.Equal(t, &types.SystemContext{
RegistriesDirPath: "/srv/registries.d",
AuthFilePath: "/srv/authfile",
ArchitectureChoice: "overridden-arch",
OSChoice: "overridden-os",
OCISharedBlobDirPath: "/srv/shared-blob-dir",
DockerCertPath: "/srv/cert-dir",
DockerInsecureSkipTLSVerify: types.OptionalBoolTrue,
DockerAuthConfig: &types.DockerAuthConfig{Username: "creds-user", Password: "creds-password"},
DockerDaemonCertPath: "/srv/cert-dir",
DockerDaemonHost: "daemon-host.example.com",
DockerDaemonInsecureSkipTLSVerify: true,
}, res)
// Global/per-command tlsVerify behavior
for _, c := range []struct {
global, cmd string
expectedDocker types.OptionalBool
expectedDockerDaemon bool
}{
{"", "", types.OptionalBoolUndefined, false},
{"", "false", types.OptionalBoolTrue, true},
{"", "true", types.OptionalBoolFalse, false},
{"false", "", types.OptionalBoolTrue, false},
{"false", "false", types.OptionalBoolTrue, true},
{"false", "true", types.OptionalBoolFalse, false},
{"true", "", types.OptionalBoolFalse, false},
{"true", "false", types.OptionalBoolTrue, true},
{"true", "true", types.OptionalBoolFalse, false},
} {
globalFlags := []string{}
if c.global != "" {
globalFlags = append(globalFlags, "--tls-verify="+c.global)
}
cmdFlags := []string{}
if c.cmd != "" {
cmdFlags = append(cmdFlags, "--dest-tls-verify="+c.cmd)
}
opts := fakeImageOptions(t, "dest-", globalFlags, cmdFlags)
res, err = opts.newSystemContext()
require.NoError(t, err)
assert.Equal(t, c.expectedDocker, res.DockerInsecureSkipTLSVerify, "%#v", c)
assert.Equal(t, c.expectedDockerDaemon, res.DockerDaemonInsecureSkipTLSVerify, "%#v", c)
}
// Invalid option values
opts = fakeImageOptions(t, "dest-", []string{}, []string{"--dest-creds", ""})
_, err = opts.newSystemContext()
assert.Error(t, err)
}
// fakeImageDestOptions creates imageDestOptions and sets it according to globalFlags/cmdFlags.
// NOTE: This is QUITE FAKE; none of the urfave/cli normalization and the like happens.
func fakeImageDestOptions(t *testing.T, flagPrefix string, globalFlags []string, cmdFlags []string) *imageDestOptions {
globalOpts := fakeGlobalOptions(t, globalFlags)
sharedFlags, sharedOpts := sharedImageFlags()
imageFlags, imageOpts := imageDestFlags(globalOpts, sharedOpts, flagPrefix, "")
flagSet := flag.NewFlagSet("fakeImageDestOptions", flag.ContinueOnError)
for _, f := range append(sharedFlags, imageFlags...) {
f.Apply(flagSet)
}
err := flagSet.Parse(cmdFlags)
require.NoError(t, err)
return imageOpts
}
func TestImageDestOptionsNewSystemContext(t *testing.T) {
// Default state
opts := fakeImageDestOptions(t, "dest-", []string{}, []string{})
res, err := opts.newSystemContext()
require.NoError(t, err)
assert.Equal(t, &types.SystemContext{}, res)
// Explicitly set everything to default, except for when the default is “not present”
opts = fakeImageDestOptions(t, "dest-", []string{}, []string{
"--dest-compress=false",
})
res, err = opts.newSystemContext()
require.NoError(t, err)
assert.Equal(t, &types.SystemContext{}, res)
// Set everything to non-default values.
opts = fakeImageDestOptions(t, "dest-", []string{
"--registries.d", "/srv/registries.d",
"--override-arch", "overridden-arch",
"--override-os", "overridden-os",
}, []string{
"--authfile", "/srv/authfile",
"--dest-cert-dir", "/srv/cert-dir",
"--dest-ostree-tmp-dir", "/srv/ostree-tmp-dir",
"--dest-shared-blob-dir", "/srv/shared-blob-dir",
"--dest-compress=true",
"--dest-daemon-host", "daemon-host.example.com",
"--dest-tls-verify=false",
"--dest-creds", "creds-user:creds-password",
})
res, err = opts.newSystemContext()
require.NoError(t, err)
assert.Equal(t, &types.SystemContext{
RegistriesDirPath: "/srv/registries.d",
AuthFilePath: "/srv/authfile",
ArchitectureChoice: "overridden-arch",
OSChoice: "overridden-os",
OCISharedBlobDirPath: "/srv/shared-blob-dir",
DockerCertPath: "/srv/cert-dir",
DockerInsecureSkipTLSVerify: types.OptionalBoolTrue,
DockerAuthConfig: &types.DockerAuthConfig{Username: "creds-user", Password: "creds-password"},
OSTreeTmpDirPath: "/srv/ostree-tmp-dir",
DockerDaemonCertPath: "/srv/cert-dir",
DockerDaemonHost: "daemon-host.example.com",
DockerDaemonInsecureSkipTLSVerify: true,
DirForceCompress: true,
}, res)
// Invalid option values in imageOptions
opts = fakeImageDestOptions(t, "dest-", []string{}, []string{"--dest-creds", ""})
_, err = opts.newSystemContext()
assert.Error(t, err)
}


@@ -5,20 +5,37 @@
_complete_() {
local options_with_args=$1
local boolean_options="$2 -h --help"
local transports=$3
case "$prev" in
$options_with_args)
return
;;
esac
local option_with_args
for option_with_args in $options_with_args $transports
do
if [ "$option_with_args" == "$prev" -o "$option_with_args" == "$cur" ]
then
return
fi
done
case "$cur" in
-*)
COMPREPLY=( $( compgen -W "$boolean_options $options_with_args" -- "$cur" ) )
;;
-*)
COMPREPLY=( $( compgen -W "$boolean_options $options_with_args" -- "$cur" ) )
;;
*)
if [ -n "$transports" ]
then
compopt -o nospace
COMPREPLY=( $( compgen -W "$transports" -- "$cur" ) )
fi
;;
esac
}
_skopeo_supported_transports() {
local subcommand=$1
${PROG} $subcommand --help | grep "Supported transports" -A 1 | tail -n 1 | sed -e 's/,/:/g' -e 's/$/:/'
}
_skopeo_copy() {
local options_with_args="
--authfile
@@ -31,14 +48,22 @@ _skopeo_copy() {
--dest-cert-dir
--dest-ostree-tmp-dir
--dest-tls-verify
--src-daemon-host
--dest-daemon-host
"
local boolean_options="
--dest-compress
--remove-signatures
--src-no-creds
--dest-no-creds
"
_complete_ "$options_with_args" "$boolean_options"
local transports="
$(_skopeo_supported_transports $(echo $FUNCNAME | sed 's/_skopeo_//'))
"
_complete_ "$options_with_args" "$boolean_options" "$transports"
}
_skopeo_inspect() {
@@ -48,15 +73,22 @@ _skopeo_inspect() {
--cert-dir
"
local boolean_options="
--config
--raw
--tls-verify
--no-creds
"
_complete_ "$options_with_args" "$boolean_options"
local transports="
$(_skopeo_supported_transports $(echo $FUNCNAME | sed 's/_skopeo_//'))
"
_complete_ "$options_with_args" "$boolean_options" "$transports"
}
_skopeo_standalone_sign() {
local options_with_args="
-o --output
-o --output
"
local boolean_options="
"
@@ -87,49 +119,56 @@ _skopeo_delete() {
"
local boolean_options="
--tls-verify
--no-creds
"
_complete_ "$options_with_args" "$boolean_options"
local transports="
$(_skopeo_supported_transports $(echo $FUNCNAME | sed 's/_skopeo_//'))
"
_complete_ "$options_with_args" "$boolean_options" "$transports"
}
_skopeo_layers() {
local options_with_args="
--creds
--cert-dir
--creds
--cert-dir
"
local boolean_options="
--tls-verify
--tls-verify
"
_complete_ "$options_with_args" "$boolean_options"
}
_skopeo_skopeo() {
local options_with_args="
--policy
--registries.d
--policy
--registries.d
--override-arch
--override-os
--command-timeout
"
local boolean_options="
--insecure-policy
--debug
--version -v
--help -h
--insecure-policy
--debug
--version -v
--help -h
"
commands=$( ${COMP_WORDS[@]:0:$COMP_CWORD} --generate-bash-completion )
case "$prev" in
$main_options_with_args_glob )
return
;;
$main_options_with_args_glob )
return
;;
esac
case "$cur" in
-*)
COMPREPLY=( $( compgen -W "$boolean_options $options_with_args" -- "$cur" ) )
;;
*)
COMPREPLY=( $( compgen -W "${commands[*]} help" -- "$cur" ) )
;;
-*)
COMPREPLY=( $( compgen -W "$boolean_options $options_with_args" -- "$cur" ) )
;;
*)
COMPREPLY=( $( compgen -W "${commands[*]} help" -- "$cur" ) )
;;
esac
}
@@ -147,15 +186,17 @@ _cli_bash_autocomplete() {
local counter=1
counter=1
while [ $counter -lt $cword ]; do
case "!${words[$counter]}" in
*)
command=$(echo "${words[$counter]}" | sed 's/-/_/g')
cpos=$counter
(( cpos++ ))
break
;;
esac
(( counter++ ))
case "${words[$counter]}" in
-*)
;;
*)
command=$(echo "${words[$counter]}" | sed 's/-/_/g')
cpos=$counter
(( cpos++ ))
break
;;
esac
(( counter++ ))
done
local completions_func=_skopeo_${command}


@@ -0,0 +1,60 @@
% storage.conf(5) Container Storage Configuration File
% Dan Walsh
% May 2017
# NAME
storage.conf - Syntax of Container Storage configuration file
# DESCRIPTION
The STORAGE configuration file specifies all of the available container storage options
for tools using shared container storage.
# FORMAT
The [TOML format][toml] is used as the encoding of the configuration file.
Every option and subtable listed here is nested under a global "storage" table.
No bare options are used. The format of TOML can be simplified to:
[table]
option = value
[table.subtable1]
option = value
[table.subtable2]
option = value
## STORAGE TABLE
The `storage` table supports the following options:
**graphroot**=""
container storage graph dir (default: "/var/lib/containers/storage")
Default directory to store all writable content created by container storage programs.
**runroot**=""
container storage run dir (default: "/var/run/containers/storage")
Default directory to store all temporary writable content created by container storage programs.
**driver**=""
container storage driver (default is "overlay")
Default Copy On Write (COW) container storage driver.
### STORAGE OPTIONS TABLE
The `storage.options` table supports the following options:
**additionalimagestores**=[]
Paths to additional container image stores. Usually these are read-only and stored on remote network shares.
**size**=""
Maximum size of a container image. Default is 10GB. This flag can be used to set a quota
on the size of container images.
**override_kernel_check**=""
Tell storage drivers to ignore kernel version checks. Some storage drivers assume that if a kernel is too
old, the driver is not supported. But for kernels that have had the drivers backported, this flag
allows users to override the checks.
# HISTORY
May 2017, Originally compiled by Dan Walsh <dwalsh@redhat.com>
Format copied from crio.conf man page created by Aleksa Sarai <asarai@suse.de>

contrib/storage.conf Normal file

@@ -0,0 +1,28 @@
# storage.conf is the configuration file for all tools
# that share the containers/storage libraries
# See man 5 containers-storage.conf for more information
# The "container storage" table contains all of the server options.
[storage]
# Default Storage Driver
driver = "overlay"
# Temporary storage location
runroot = "/var/run/containers/storage"
# Primary read-write location of container storage
graphroot = "/var/lib/containers/storage"
[storage.options]
# AdditionalImageStores is used to pass paths to additional read-only image stores
# Must be a comma-separated list.
additionalimagestores = [
]
# Size is used to set a maximum size of the container image. Only supported by
# certain container storage drivers (currently overlay, zfs, vfs, btrfs)
size = ""
# OverrideKernelCheck tells the driver to ignore kernel checks based on kernel version
override_kernel_check = "true"

docs/skopeo-copy.1.md Normal file

@@ -0,0 +1,83 @@
% skopeo-copy(1)
## NAME
skopeo\-copy - Copy an image (manifest, filesystem layers, signatures) from one location to another.
## SYNOPSIS
**skopeo copy** [**--sign-by=**_key-ID_] _source-image destination-image_
## DESCRIPTION
Copy an image (manifest, filesystem layers, signatures) from one location to another.
Uses the system's trust policy to validate images, rejects images not trusted by the policy.
_source-image_ uses the "image name" format described above
_destination-image_ uses the "image name" format described above
## OPTIONS
**--authfile** _path_
Path of the authentication file. Default is ${XDG_RUNTIME\_DIR}/containers/auth.json, which is set using `podman login`.
If the authorization state is not found there, $HOME/.docker/config.json is checked, which is set using `docker login`.
**--format, -f** _manifest-type_ Manifest type (oci, v2s1, or v2s2) to use when saving image to directory using the 'dir:' transport (default is manifest type of source)
**--quiet, -q** suppress output information when copying images
**--remove-signatures** do not copy signatures, if any, from _source-image_. Necessary when copying a signed image to a destination which does not support signatures.
**--sign-by=**_key-id_ add a signature using that key ID for an image name corresponding to _destination-image_
**--src-creds** _username[:password]_ for accessing the source registry
**--dest-compress** _bool-value_ Compress tarball image layers when saving to directory using the 'dir' transport. (default is same compression type as source)
**--dest-creds** _username[:password]_ for accessing the destination registry
**--src-cert-dir** _path_ Use certificates at _path_ (*.crt, *.cert, *.key) to connect to the source registry or daemon
**--src-no-creds** _bool-value_ Access the registry anonymously.
**--src-tls-verify** _bool-value_ Require HTTPS and verify certificates when talking to container source registry or daemon (defaults to true)
**--dest-cert-dir** _path_ Use certificates at _path_ (*.crt, *.cert, *.key) to connect to the destination registry or daemon
**--dest-no-creds** _bool-value_ Access the registry anonymously.
**--dest-ostree-tmp-dir** _path_ Directory to use for OSTree temporary files.
**--dest-tls-verify** _bool-value_ Require HTTPS and verify certificates when talking to container destination registry or daemon (defaults to true)
**--src-daemon-host** _host_ Copy from docker daemon at _host_. If _host_ starts with `tcp://`, HTTPS is enabled by default. To use plain HTTP, use the form `http://` (default is `unix:///var/run/docker.sock`).
**--dest-daemon-host** _host_ Copy to docker daemon at _host_. If _host_ starts with `tcp://`, HTTPS is enabled by default. To use plain HTTP, use the form `http://` (default is `unix:///var/run/docker.sock`).
Existing signatures, if any, are preserved as well.
## EXAMPLES
To copy the layers of the docker.io busybox image to a local directory:
```sh
$ mkdir -p /var/lib/images/busybox
$ skopeo copy docker://busybox:latest dir:/var/lib/images/busybox
$ ls /var/lib/images/busybox/*
/tmp/busybox/2b8fd9751c4c0f5dd266fcae00707e67a2545ef34f9a29354585f93dac906749.tar
/tmp/busybox/manifest.json
/tmp/busybox/8ddc19f16526912237dd8af81971d5e4dd0587907234be2b83e249518d5b673f.tar
```
To copy and sign an image:
```sh
$ skopeo copy --sign-by dev@example.com atomic:example/busybox:streaming atomic:example/busybox:gold
```
## SEE ALSO
skopeo(1), podman-login(1), docker-login(1)
## AUTHORS
Antonio Murdaca <runcom@redhat.com>, Miloslav Trmac <mitr@redhat.com>, Jhon Honce <jhonce@redhat.com>

docs/skopeo-delete.1.md Normal file

@@ -0,0 +1,52 @@
% skopeo-delete(1)
## NAME
skopeo\-delete - Mark _image-name_ for deletion.
## SYNOPSIS
**skopeo delete** _image-name_
Mark _image-name_ for deletion. To release the allocated disk space, you must log in to the container registry server and execute the container registry garbage collector. E.g.,
```
/usr/bin/registry garbage-collect /etc/docker-distribution/registry/config.yml
Note: sometimes the config.yml is stored in /etc/docker/registry/config.yml
If you are running the container registry inside of a container you would execute something like:
$ docker exec -it registry /usr/bin/registry garbage-collect /etc/docker-distribution/registry/config.yml
```
**--authfile** _path_
Path of the authentication file. Default is ${XDG_RUNTIME\_DIR}/containers/auth.json, which is set using `podman login`.
If the authorization state is not found there, $HOME/.docker/config.json is checked, which is set using `docker login`.
**--creds** _username[:password]_ for accessing the registry
**--cert-dir** _path_ Use certificates at _path_ (*.crt, *.cert, *.key) to connect to the registry
**--tls-verify** _bool-value_ Require HTTPS and verify certificates when talking to container registries (defaults to true)
**--no-creds** _bool-value_ Access the registry anonymously.
Additionally, the registry must allow deletions by setting `REGISTRY_STORAGE_DELETE_ENABLED=true` for the registry daemon.
## EXAMPLES
Mark image example/pause for deletion from the registry.example.com registry:
```sh
$ skopeo delete --force docker://registry.example.com/example/pause:latest
```
See above for additional details on using the command **delete**.
## SEE ALSO
skopeo(1), podman-login(1), docker-login(1)
## AUTHORS
Antonio Murdaca <runcom@redhat.com>, Miloslav Trmac <mitr@redhat.com>, Jhon Honce <jhonce@redhat.com>

docs/skopeo-inspect.1.md Normal file

@@ -0,0 +1,71 @@
% skopeo-inspect(1)
## NAME
skopeo\-inspect - Return low-level information about _image-name_ in a registry
## SYNOPSIS
**skopeo inspect** [**--raw**] [**--config**] _image-name_
Return low-level information about _image-name_ in a registry
**--raw** output raw manifest, default is to format in JSON
_image-name_ name of image to retrieve information about
**--config** output configuration in OCI format, default is to format in JSON
_image-name_ name of image to retrieve configuration for
**--config** **--raw** output configuration in raw format
_image-name_ name of image to retrieve configuration for
**--authfile** _path_
Path of the authentication file. Default is ${XDG\_RUNTIME\_DIR}/containers/auth.json, which is set using `podman login`.
If the authorization state is not found there, $HOME/.docker/config.json is checked, which is set using `docker login`.
**--creds** _username[:password]_ for accessing the registry
**--cert-dir** _path_ Use certificates at _path_ (\*.crt, \*.cert, \*.key) to connect to the registry
**--tls-verify** _bool-value_ Require HTTPS and verify certificates when talking to container registries (defaults to true)
**--no-creds** _bool-value_ Access the registry anonymously.
## EXAMPLES
To review information for the image fedora from the docker.io registry:
```sh
$ skopeo inspect docker://docker.io/fedora
{
"Name": "docker.io/library/fedora",
"Digest": "sha256:a97914edb6ba15deb5c5acf87bd6bd5b6b0408c96f48a5cbd450b5b04509bb7d",
"RepoTags": [
"20",
"21",
"22",
"23",
"24",
"heisenbug",
"latest",
"rawhide"
],
"Created": "2016-06-20T19:33:43.220526898Z",
"DockerVersion": "1.10.3",
"Labels": {},
"Architecture": "amd64",
"Os": "linux",
"Layers": [
"sha256:7c91a140e7a1025c3bc3aace4c80c0d9933ac4ee24b8630a6b0b5d8b9ce6b9d4"
]
}
```
# SEE ALSO
skopeo(1), podman-login(1), docker-login(1)
## AUTHORS
Antonio Murdaca <runcom@redhat.com>, Miloslav Trmac <mitr@redhat.com>, Jhon Honce <jhonce@redhat.com>


@@ -0,0 +1,26 @@
% skopeo-manifest-digest(1)
## NAME
skopeo\-manifest\-digest - Compute a manifest digest of _manifest-file_ and write it to standard output.
## SYNOPSIS
**skopeo manifest-digest** _manifest-file_
## DESCRIPTION
Compute a manifest digest of _manifest-file_ and write it to standard output.
## EXAMPLES
```sh
$ skopeo manifest-digest manifest.json
sha256:a59906e33509d14c036c8678d687bd4eec81ed7c4b8ce907b888c607f6a1e0e6
```
## SEE ALSO
skopeo(1)
## AUTHORS
Antonio Murdaca <runcom@redhat.com>, Miloslav Trmac <mitr@redhat.com>, Jhon Honce <jhonce@redhat.com>


@@ -0,0 +1,34 @@
% skopeo-standalone-sign(1)
## NAME
skopeo\-standalone\-sign - Sign an image using local files
## SYNOPSIS
**skopeo standalone-sign** _manifest docker-reference key-fingerprint_ **--output**|**-o** _signature_
## DESCRIPTION
This is primarily a debugging tool, or useful for special cases,
and usually should not be a part of your normal operational workflow; use `skopeo copy --sign-by` instead to publish and sign an image in one step.
_manifest_ Path to a file containing the image manifest
_docker-reference_ A docker reference to identify the image with
_key-fingerprint_ Key identity to use for signing
**--output**|**-o** output file
## EXAMPLES
```sh
$ skopeo standalone-sign busybox-manifest.json registry.example.com/example/busybox 1D8230F6CDB6A06716E414C1DB72F2188BB46CC8 --output busybox.signature
$
```
## SEE ALSO
skopeo(1), skopeo-copy(1)
## AUTHORS
Antonio Murdaca <runcom@redhat.com>, Miloslav Trmac <mitr@redhat.com>, Jhon Honce <jhonce@redhat.com>


@@ -0,0 +1,36 @@
% skopeo-standalone-verify(1)
## NAME
skopeo\-standalone\-verify - Verify an image signature
## SYNOPSIS
**skopeo standalone-verify** _manifest docker-reference key-fingerprint signature_
## DESCRIPTION
Verify a signature using local files; the digest is printed on success.
_manifest_ Path to a file containing the image manifest
_docker-reference_ A docker reference expected to identify the image in the signature
_key-fingerprint_ Expected identity of the signing key
_signature_ Path to signature file
**Note:** If you do use this, make sure that the image cannot be changed at the source location between the times of its verification and use.
## EXAMPLES
```sh
$ skopeo standalone-verify busybox-manifest.json registry.example.com/example/busybox 1D8230F6CDB6A06716E414C1DB72F2188BB46CC8 busybox.signature
Signature verified, digest sha256:20bf21ed457b390829cdbeec8795a7bea1626991fda603e0d01b4e7f60427e55
```
## SEE ALSO
skopeo(1)
## AUTHORS
Antonio Murdaca <runcom@redhat.com>, Miloslav Trmac <mitr@redhat.com>, Jhon Honce <jhonce@redhat.com>


@@ -1,11 +1,13 @@
% SKOPEO(1) Skopeo Man Pages
% Jhon Honce
% August 2016
# NAME
## NAME
skopeo -- Command line utility used to interact with local and remote container images and container image registries
# SYNOPSIS
## SYNOPSIS
**skopeo** [_global options_] _command_ [_command options_]
# DESCRIPTION
## DESCRIPTION
`skopeo` is a command line utility providing various operations with container images and container image registries.
`skopeo` can copy container images between various containers image stores, converting them as necessary. For example you can use `skopeo` to copy container images from one container registry to another.
@@ -31,7 +33,7 @@ Most commands refer to container images, using a _transport_`:`_details_ format.
An existing local directory _path_ storing the manifest, layer tarballs and signatures as individual files. This is a non-standardized format, primarily useful for debugging or noninvasive container inspection.
**docker://**_docker-reference_
An image in a registry implementing the "Docker Registry HTTP API V2". By default, uses the authorization state in either `$XDG_RUNTIME_DIR/containers/auth.json`, which is set using `(kpod login)`. If the authorization state is not found there, `$HOME/.docker/config.json` is checked, which is set using `(docker login)`.
An image in a registry implementing the "Docker Registry HTTP API V2". By default, uses the authorization state in either `$XDG_RUNTIME_DIR/containers/auth.json`, which is set using `(podman login)`. If the authorization state is not found there, `$HOME/.docker/config.json` is checked, which is set using `(docker login)`.
**docker-archive:**_path_[**:**_docker-reference_]
An image is stored in the `docker save` formatted file. _docker-reference_ is only used when creating such a file, and it must not contain a digest.
@@ -45,7 +47,7 @@ Most commands refer to container images, using a _transport_`:`_details_ format.
**ostree:**_image_[**@**_/absolute/repo/path_]
An image in local OSTree repository. _/absolute/repo/path_ defaults to _/ostree/repo_.
# OPTIONS
## OPTIONS
**--debug** enable debug output
@@ -59,230 +61,35 @@ Most commands refer to container images, using a _transport_`:`_details_ format.
**--override-os** _OS_ Use _OS_ instead of the running OS for choosing images.
**--command-timeout** _duration_ Timeout for the command execution.
**--help**|**-h** Show help
**--version**|**-v** print the version number
# COMMANDS
## COMMANDS
## skopeo copy
**skopeo copy** [**--sign-by=**_key-ID_] _source-image destination-image_
| Command | Description |
| ----------------------------------------- | ------------------------------------------------------------------------------ |
| [skopeo-copy(1)](skopeo-copy.1.md) | Copy an image (manifest, filesystem layers, signatures) from one location to another. |
| [skopeo-delete(1)](skopeo-delete.1.md) | Mark image-name for deletion. |
| [skopeo-inspect(1)](skopeo-inspect.1.md) | Return low-level information about image-name in a registry. |
| [skopeo-manifest-digest(1)](skopeo-manifest-digest.1.md) | Compute a manifest digest of manifest-file and write it to standard output.|
| [skopeo-standalone-sign(1)](skopeo-standalone-sign.1.md) | Sign an image. |
| [skopeo-standalone-verify(1)](skopeo-standalone-verify.1.md)| Verify an image. |
Copy an image (manifest, filesystem layers, signatures) from one location to another.
Uses the system's trust policy to validate images, rejects images not trusted by the policy.
_source-image_ uses the "image name" format described above
_destination-image_ uses the "image name" format described above
**--authfile** _path_
Path of the authentication file. Default is ${XDG_RUNTIME\_DIR}/containers/auth.json, which is set using `kpod login`.
If the authorization state is not found there, $HOME/.docker/config.json is checked, which is set using `docker login`.
**--format, -f** _manifest-type_ Manifest type (oci, v2s1, or v2s2) to use when saving image to directory using the 'dir:' transport (default is manifest type of source)
**--remove-signatures** do not copy signatures, if any, from _source-image_. Necessary when copying a signed image to a destination which does not support signatures.
**--sign-by=**_key-id_ add a signature using that key ID for an image name corresponding to _destination-image_
**--src-creds** _username[:password]_ for accessing the source registry
**--dest-compress** _bool-value_ Compress tarball image layers when saving to directory using the 'dir' transport. (default is same compression type as source)
**--dest-creds** _username[:password]_ for accessing the destination registry
**--src-cert-dir** _path_ Use certificates at _path_ (*.crt, *.cert, *.key) to connect to the source registry
**--src-tls-verify** _bool-value_ Require HTTPS and verify certificates when talking to container source registry (defaults to true)
**--dest-cert-dir** _path_ Use certificates at _path_ (*.crt, *.cert, *.key) to connect to the destination registry
**--dest-ostree-tmp-dir** _path_ Directory to use for OSTree temporary files.
**--dest-tls-verify** _bool-value_ Require HTTPS and verify certificates when talking to container destination registry (defaults to true)
Existing signatures, if any, are preserved as well.
## skopeo delete
**skopeo delete** _image-name_
Mark _image-name_ for deletion. To release the allocated disk space, you must log in to the container registry server and execute the container registry garbage collector. E.g.,
```
/usr/bin/registry garbage-collect /etc/docker-distribution/registry/config.yml
Note: sometimes the config.yml is stored in /etc/docker/registry/config.yml
If you are running the container registry inside of a container you would execute something like:
$ docker exec -it registry /usr/bin/registry garbage-collect /etc/docker-distribution/registry/config.yml
```
**--authfile** _path_
Path of the authentication file. Default is ${XDG_RUNTIME\_DIR}/containers/auth.json, which is set using `kpod login`.
If the authorization state is not found there, $HOME/.docker/config.json is checked, which is set using `docker login`.
**--creds** _username[:password]_ for accessing the registry
**--cert-dir** _path_ Use certificates at _path_ (*.crt, *.cert, *.key) to connect to the registry
**--tls-verify** _bool-value_ Require HTTPS and verify certificates when talking to container registries (defaults to true)
Additionally, the registry must allow deletions by setting `REGISTRY_STORAGE_DELETE_ENABLED=true` for the registry daemon.
## skopeo inspect
**skopeo inspect** [**--raw**] _image-name_
Return low-level information about _image-name_ in a registry
**--raw** output raw manifest, default is to format in JSON
_image-name_ name of image to retrieve information about
**--authfile** _path_
Path of the authentication file. Default is ${XDG_RUNTIME\_DIR}/containers/auth.json, which is set using `kpod login`.
If the authorization state is not found there, $HOME/.docker/config.json is checked, which is set using `docker login`.
**--creds** _username[:password]_ for accessing the registry
**--cert-dir** _path_ Use certificates at _path_ (*.crt, *.cert, *.key) to connect to the registry
**--tls-verify** _bool-value_ Require HTTPS and verify certificates when talking to container registries (defaults to true)
## skopeo manifest-digest
**skopeo manifest-digest** _manifest-file_
Compute a manifest digest of _manifest-file_ and write it to standard output.
## skopeo standalone-sign
**skopeo standalone-sign** _manifest docker-reference key-fingerprint_ **--output**|**-o** _signature_
This is primarily a debugging tool, or useful for special cases,
and usually should not be a part of your normal operational workflow; use `skopeo copy --sign-by` instead to publish and sign an image in one step.
_manifest_ Path to a file containing the image manifest
_docker-reference_ A docker reference to identify the image with
_key-fingerprint_ Key identity to use for signing
**--output**|**-o** output file
## skopeo standalone-verify
**skopeo standalone-verify** _manifest docker-reference key-fingerprint signature_
Verify a signature using local files; the digest is printed on success.
_manifest_ Path to a file containing the image manifest
_docker-reference_ A docker reference expected to identify the image in the signature
_key-fingerprint_ Expected identity of the signing key
_signature_ Path to signature file
**Note:** If you do use this, make sure that the image cannot be changed at the source location between the times of its verification and use.
## skopeo help
show help for `skopeo`
# FILES
## FILES
**/etc/containers/policy.json**
Default trust policy file, if **--policy** is not specified.
The policy format is documented in https://github.com/containers/image/blob/master/docs/policy.json.md .
The policy format is documented in https://github.com/containers/image/blob/master/docs/containers-policy.json.5.md .
**/etc/containers/registries.d**
Default directory containing registry configuration, if **--registries.d** is not specified.
The contents of this directory are documented in https://github.com/containers/image/blob/master/docs/registries.d.md .
The contents of this directory are documented in https://github.com/containers/image/blob/master/docs/containers-registries.d.5.md .
# EXAMPLES
## SEE ALSO
podman-login(1), docker-login(1)
## skopeo copy
To copy the layers of the docker.io busybox image to a local directory:
```sh
$ mkdir -p /var/lib/images/busybox
$ skopeo copy docker://busybox:latest dir:/var/lib/images/busybox
$ ls /var/lib/images/busybox/*
/tmp/busybox/2b8fd9751c4c0f5dd266fcae00707e67a2545ef34f9a29354585f93dac906749.tar
/tmp/busybox/manifest.json
/tmp/busybox/8ddc19f16526912237dd8af81971d5e4dd0587907234be2b83e249518d5b673f.tar
```
To copy and sign an image:
```sh
$ skopeo copy --sign-by dev@example.com atomic:example/busybox:streaming atomic:example/busybox:gold
```
## skopeo delete
Mark image example/pause for deletion from the registry.example.com registry:
```sh
$ skopeo delete --force docker://registry.example.com/example/pause:latest
```
See above for additional details on using the command **delete**.
## skopeo inspect
To review information for the image fedora from the docker.io registry:
```sh
$ skopeo inspect docker://docker.io/fedora
{
"Name": "docker.io/library/fedora",
"Digest": "sha256:a97914edb6ba15deb5c5acf87bd6bd5b6b0408c96f48a5cbd450b5b04509bb7d",
"RepoTags": [
"20",
"21",
"22",
"23",
"24",
"heisenbug",
"latest",
"rawhide"
],
"Created": "2016-06-20T19:33:43.220526898Z",
"DockerVersion": "1.10.3",
"Labels": {},
"Architecture": "amd64",
"Os": "linux",
"Layers": [
"sha256:7c91a140e7a1025c3bc3aace4c80c0d9933ac4ee24b8630a6b0b5d8b9ce6b9d4"
]
}
```
## skopeo layers
Another method to retrieve the layers for the busybox image from the docker.io registry:
```sh
$ skopeo layers docker://busybox
$ ls layers-500650331/
8ddc19f16526912237dd8af81971d5e4dd0587907234be2b83e249518d5b673f.tar
manifest.json
a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4.tar
```
## skopeo manifest-digest
```sh
$ skopeo manifest-digest manifest.json
sha256:a59906e33509d14c036c8678d687bd4eec81ed7c4b8ce907b888c607f6a1e0e6
```
## skopeo standalone-sign
```sh
$ skopeo standalone-sign busybox-manifest.json registry.example.com/example/busybox 1D8230F6CDB6A06716E414C1DB72F2188BB46CC8 --output busybox.signature
$
```
See `skopeo copy` above for the preferred method of signing images.
## skopeo standalone-verify
```sh
$ skopeo standalone-verify busybox-manifest.json registry.example.com/example/busybox 1D8230F6CDB6A06716E414C1DB72F2188BB46CC8 busybox.signature
Signature verified, digest sha256:20bf21ed457b390829cdbeec8795a7bea1626991fda603e0d01b4e7f60427e55
```
# SEE ALSO
kpod-login(1), docker-login(1)
# AUTHORS
## AUTHORS
Antonio Murdaca <runcom@redhat.com>, Miloslav Trmac <mitr@redhat.com>, Jhon Honce <jhonce@redhat.com>

hack/btrfs_installed_tag.sh Executable file

@@ -0,0 +1,7 @@
#!/bin/bash
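# Probe for the btrfs headers with the C preprocessor; when they are absent,
# emit a build tag that excludes the btrfs graph driver.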
cc -E - > /dev/null 2> /dev/null << EOF
#include <btrfs/ioctl.h>
EOF
if test $? -ne 0 ; then
echo exclude_graphdriver_btrfs
fi


@@ -6,7 +6,7 @@ set -e
#
# Requirements:
# - The current directory should be a checkout of the skopeo source code
# (https://github.com/projectatomic/skopeo). Whatever version is checked out
# (https://github.com/containers/skopeo). Whatever version is checked out
# will be built.
# - The script is intended to be run inside the docker container specified
# in the Dockerfile at the root of the source. In other words:
@@ -19,7 +19,7 @@ set -e
set -o pipefail
export SKOPEO_PKG='github.com/projectatomic/skopeo'
export SKOPEO_PKG='github.com/containers/skopeo'
export SCRIPTDIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
export MAKEDIR="$SCRIPTDIR/make"


@@ -4,7 +4,7 @@ if [ -z "$VALIDATE_UPSTREAM" ]; then
# this is kind of an expensive check, so let's not do this twice if we
# are running more than one validate bundlescript
VALIDATE_REPO='https://github.com/projectatomic/skopeo.git'
VALIDATE_REPO='https://github.com/containers/skopeo.git'
VALIDATE_BRANCH='master'
if [ "$TRAVIS" = 'true' -a "$TRAVIS_PULL_REQUEST" != 'false' ]; then

hack/make/test-system Executable file

@@ -0,0 +1,18 @@
#!/bin/bash
set -e
# Before running podman for the first time, make sure
# to set storage to vfs (not overlay): podman-in-podman
# doesn't work with overlay. Also disable mountopt,
# which causes errors with vfs.
sed -i \
-e 's/^driver\s*=.*/driver = "vfs"/' \
-e 's/^mountopt/#mountopt/' \
/etc/containers/storage.conf
# Build skopeo, install into /usr/bin
make binary-local ${BUILDTAGS:+BUILDTAGS="$BUILDTAGS"}
make install
# Run tests
SKOPEO_BINARY=/usr/bin/skopeo bats --tap systemtest


@@ -1,28 +1,13 @@
#!/bin/bash
source "$(dirname "$BASH_SOURCE")/.validate"
errors=$(go vet $(go list -e ./... | grep -v "$SKOPEO_PKG"/vendor))
IFS=$'\n'
files=( $(validate_diff --diff-filter=ACMR --name-only -- '*.go' | grep -v '^vendor/' || true) )
unset IFS
errors=()
for f in "${files[@]}"; do
failedVet=$(go vet "$f")
if [ "$failedVet" ]; then
errors+=( "$failedVet" )
fi
done
if [ ${#errors[@]} -eq 0 ]; then
if [ -z "$errors" ]; then
echo 'Congratulations! All Go source files have been vetted.'
else
{
echo "Errors from go vet:"
for err in "${errors[@]}"; do
echo " - $err"
done
echo "$errors"
echo
echo 'Please fix the above errors. You can test via "go vet" and commit the result.'
echo

hack/ostree_tag.sh Executable file

@@ -0,0 +1,6 @@
#!/bin/bash
if pkg-config ostree-1 2> /dev/null ; then
echo ostree
else
echo containers_image_ostree_stub
fi


@@ -4,13 +4,14 @@ set -e
export GOPATH=$(pwd)/_gopath
export PATH=$GOPATH/bin:$PATH
_projectatomic="${GOPATH}/src/github.com/projectatomic"
mkdir -vp ${_projectatomic}
ln -vsf $(pwd) ${_projectatomic}/skopeo
_containers="${GOPATH}/src/github.com/containers"
mkdir -vp ${_containers}
ln -vsf $(pwd) ${_containers}/skopeo
go get -u github.com/cpuguy83/go-md2man github.com/golang/lint/golint
go version
go get -u github.com/cpuguy83/go-md2man golang.org/x/lint/golint
cd ${_projectatomic}/skopeo
cd ${_containers}/skopeo
make validate-local test-unit-local binary-local
sudo make install
skopeo -v

hack/tree_status.sh Executable file

@@ -0,0 +1,13 @@
#!/bin/bash
set -e
STATUS=$(git status --porcelain)
if [[ -z $STATUS ]]
then
echo "tree is clean"
else
echo "tree is dirty, please commit all changes and sync the vendor.conf"
echo ""
echo "$STATUS"
exit 1
fi


@@ -1,15 +0,0 @@
#!/bin/bash
# This file is just wrapper around vndr (github.com/LK4D4/vndr) tool.
# For updating dependencies you should change `vendor.conf` file in root of the
# project. Please refer to https://github.com/LK4D4/vndr/blob/master/README.md for
# vndr usage.
set -e
if ! hash vndr; then
echo "Please install vndr with \"go get github.com/LK4D4/vndr\" and put it in your \$GOPATH"
exit 1
fi
vndr "$@"


@@ -5,8 +5,8 @@ import (
"os/exec"
"testing"
"github.com/containers/skopeo/version"
"github.com/go-check/check"
"github.com/projectatomic/skopeo/version"
)
const (
@@ -87,3 +87,7 @@ func (s *SkopeoSuite) TestNoNeedAuthToPrivateRegistryV2ImageNotFound(c *check.C)
wanted = ".*unauthorized: authentication required.*"
c.Assert(string(out), check.Not(check.Matches), "(?s)"+wanted) // (?s) : '.' will also match newlines
}
func (s *SkopeoSuite) TestInspectFailsWhenReferenceIsInvalid(c *check.C) {
assertSkopeoFails(c, `.*Invalid image name.*`, "inspect", "unknown")
}


@@ -14,7 +14,7 @@ import (
"github.com/containers/image/manifest"
"github.com/containers/image/signature"
"github.com/go-check/check"
"github.com/opencontainers/go-digest"
digest "github.com/opencontainers/go-digest"
imgspecv1 "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/opencontainers/image-tools/image"
)
@@ -64,7 +64,7 @@ func (s *CopySuite) SetUpSuite(c *check.C) {
os.Setenv("GNUPGHOME", s.gpgHome)
for _, key := range []string{"personal", "official"} {
batchInput := fmt.Sprintf("Key-Type: RSA\nName-Real: Test key - %s\nName-email: %s@example.com\n%%commit\n",
batchInput := fmt.Sprintf("Key-Type: RSA\nName-Real: Test key - %s\nName-email: %s@example.com\n%%no-protection\n%%commit\n",
key, key)
runCommandWithInput(c, batchInput, gpgBinary, "--batch", "--gen-key")
@@ -154,7 +154,10 @@ func (s *CopySuite) TestCopySimple(c *check.C) {
// docker v2s2 -> OCI image layout without image name
ociDest = "busybox-latest-noimage"
defer os.RemoveAll(ociDest)
assertSkopeoFails(c, ".*Error initializing destination oci:busybox-latest-noimage:: cannot save image with empty image.ref.name.*", "copy", "docker://busybox:latest", "oci:"+ociDest)
assertSkopeoSucceeds(c, "", "copy", "docker://busybox:latest", "oci:"+ociDest)
_, err = os.Stat(ociDest)
c.Assert(err, check.IsNil)
}
// Check whether dir: images in dir1 and dir2 are equal, ignoring schema1 signatures.
@@ -379,7 +382,7 @@ func (s *CopySuite) TestCopyDirSignatures(c *check.C) {
// Compression during copy
func (s *CopySuite) TestCopyCompression(c *check.C) {
const uncompresssedLayerFile = "160d823fdc48e62f97ba62df31e55424f8f5eb6b679c865eec6e59adfe304710.tar"
const uncompresssedLayerFile = "160d823fdc48e62f97ba62df31e55424f8f5eb6b679c865eec6e59adfe304710"
topDir, err := ioutil.TempDir("", "compression-top")
c.Assert(err, check.IsNil)
@@ -411,9 +414,7 @@ func (s *CopySuite) TestCopyCompression(c *check.C) {
fis, err := dirf.Readdir(-1)
c.Assert(err, check.IsNil)
for _, fi := range fis {
if strings.HasSuffix(fi.Name(), ".tar") {
c.Assert(fi.Size() < 2048, check.Equals, true)
}
c.Assert(fi.Size() < 2048, check.Equals, true)
}
}
}
@@ -661,3 +662,41 @@ func verifyManifestMIMEType(c *check.C, dir string, expectedMIMEType string) {
mimeType := manifest.GuessMIMEType(manifestBlob)
c.Assert(mimeType, check.Equals, expectedMIMEType)
}
const regConfFixture = "./fixtures/registries.conf"
func (s *SkopeoSuite) TestSuccessCopySrcWithMirror(c *check.C) {
dir, err := ioutil.TempDir("", "copy-mirror")
c.Assert(err, check.IsNil)
assertSkopeoSucceeds(c, "", "--registries-conf="+regConfFixture, "copy",
"docker://mirror.invalid/busybox", "dir:"+dir)
}
func (s *SkopeoSuite) TestFailureCopySrcWithMirrorsUnavailable(c *check.C) {
dir, err := ioutil.TempDir("", "copy-mirror")
c.Assert(err, check.IsNil)
assertSkopeoFails(c, ".*no such host.*", "--registries-conf="+regConfFixture, "copy",
"docker://invalid.invalid/busybox", "dir:"+dir)
}
func (s *SkopeoSuite) TestSuccessCopySrcWithMirrorAndPrefix(c *check.C) {
dir, err := ioutil.TempDir("", "copy-mirror")
c.Assert(err, check.IsNil)
assertSkopeoSucceeds(c, "", "--registries-conf="+regConfFixture, "copy",
"docker://gcr.invalid/foo/bar/busybox", "dir:"+dir)
}
func (s *SkopeoSuite) TestFailureCopySrcWithMirrorAndPrefixUnavailable(c *check.C) {
dir, err := ioutil.TempDir("", "copy-mirror")
c.Assert(err, check.IsNil)
assertSkopeoFails(c, ".*no such host.*", "--registries-conf="+regConfFixture, "copy",
"docker://gcr.invalid/wrong/prefix/busybox", "dir:"+dir)
}
func (s *CopySuite) TestCopyFailsWhenReferenceIsInvalid(c *check.C) {
assertSkopeoFails(c, `.*Invalid image name.*`, "copy", "unknown:transport", "unknown:test")
}


@@ -0,0 +1,28 @@
[[registry]]
location = "mirror.invalid"
mirror = [
{ location = "mirror-0.invalid" },
{ location = "mirror-1.invalid" },
{ location = "gcr.io/google-containers" },
]
# This entry is currently unused and exists only to ensure
# that the mirror.invalid/busybox is not rewritten twice.
[[registry]]
location = "gcr.io"
prefix = "gcr.io/google-containers"
[[registry]]
location = "invalid.invalid"
mirror = [
{ location = "invalid-mirror-0.invalid" },
{ location = "invalid-mirror-1.invalid" },
]
[[registry]]
location = "gcr.invalid"
prefix = "gcr.invalid/foo/bar"
mirror = [
{ location = "wrong-mirror-0.invalid" },
{ location = "gcr.io/google-containers" },
]


@@ -44,7 +44,7 @@ func (s *SigningSuite) SetUpSuite(c *check.C) {
c.Assert(err, check.IsNil)
os.Setenv("GNUPGHOME", s.gpgHome)
runCommandWithInput(c, "Key-Type: RSA\nName-Real: Testing user\n%commit\n", gpgBinary, "--homedir", s.gpgHome, "--batch", "--gen-key")
runCommandWithInput(c, "Key-Type: RSA\nName-Real: Testing user\n%no-protection\n%commit\n", gpgBinary, "--homedir", s.gpgHome, "--batch", "--gen-key")
lines, err := exec.Command(gpgBinary, "--homedir", s.gpgHome, "--with-colons", "--no-permission-warning", "--fingerprint").Output()
c.Assert(err, check.IsNil)

systemtest/001-basic.bats Normal file

@@ -0,0 +1,19 @@
#!/usr/bin/env bats
#
# Simplest set of skopeo tests. If any of these fail, we have serious problems.
#
load helpers
# Override standard setup! We don't yet trust anything
function setup() {
:
}
@test "skopeo version emits reasonable output" {
run_skopeo --version
expect_output --substring "skopeo version [0-9.]+"
}
# vim: filetype=sh


@@ -0,0 +1,67 @@
#!/usr/bin/env bats
#
# Simplest test for skopeo inspect
#
load helpers
@test "inspect: basic" {
workdir=$TESTDIR/inspect
remote_image=docker://quay.io/libpod/alpine_labels:latest
# Inspect remote source, then pull it. There's a small race condition
# in which the remote image can get updated between the inspect and
# the copy; let's just not worry about it.
run_skopeo inspect $remote_image
inspect_remote=$output
# Now pull it into a directory
run_skopeo copy $remote_image dir:$workdir
expect_output --substring "Getting image source signatures"
expect_output --substring "Writing manifest to image destination"
# Unpacked contents must include a manifest and version
[ -e $workdir/manifest.json ]
[ -e $workdir/version ]
# Now run inspect locally
run_skopeo inspect dir:$workdir
inspect_local=$output
# Each SHA-named file must be listed in the output of 'inspect'
for sha in $(find $workdir -type f | xargs -l1 basename | egrep '^[0-9a-f]{64}$'); do
expect_output --from="$inspect_local" --substring "sha256:$sha" \
"Locally-extracted SHA file is present in 'inspect'"
done
# Simple sanity check on 'inspect' output.
# For each of the given keys (LHS of the table below):
# 1) Get local and remote values
# 2) Sanity-check local value using simple expression
# 3) Confirm that local and remote values match.
#
# The reason for (2) is to make sure that we don't compare bad results
#
# The reason for a hardcoded list, instead of 'jq keys', is that RepoTags
# is always empty locally, but a list remotely.
while read key expect; do
local=$(echo "$inspect_local" | jq -r ".$key")
remote=$(echo "$inspect_remote" | jq -r ".$key")
expect_output --from="$local" --substring "$expect" \
"local $key is sane"
expect_output --from="$remote" "$local" \
"local $key matches remote"
done <<END_EXPECT
Architecture amd64
Created [0-9-]+T[0-9:]+\.[0-9]+Z
Digest sha256:[0-9a-f]{64}
DockerVersion [0-9]+\.[0-9][0-9.-]+
Labels \\\{.*PODMAN.*podman.*\\\}
Layers \\\[.*sha256:.*\\\]
Os linux
END_EXPECT
}
# vim: filetype=sh

systemtest/020-copy.bats Normal file

@@ -0,0 +1,79 @@
#!/usr/bin/env bats
#
# Copy tests
#
load helpers
function setup() {
standard_setup
start_registry reg
}
# From remote, to dir1, to local, to dir2;
# compare dir1 and dir2, expect no changes
@test "copy: dir, round trip" {
local remote_image=docker://busybox:latest
local localimg=docker://localhost:5000/busybox:unsigned
local dir1=$TESTDIR/dir1
local dir2=$TESTDIR/dir2
run_skopeo copy $remote_image dir:$dir1
run_skopeo copy --dest-tls-verify=false dir:$dir1 $localimg
run_skopeo copy --src-tls-verify=false $localimg dir:$dir2
# Both extracted copies must be identical
diff -urN $dir1 $dir2
}
# Same as above, but using 'oci:' instead of 'dir:' and with a :latest tag
@test "copy: oci, round trip" {
local remote_image=docker://busybox:latest
local localimg=docker://localhost:5000/busybox:unsigned
local dir1=$TESTDIR/oci1
local dir2=$TESTDIR/oci2
run_skopeo copy $remote_image oci:$dir1:latest
run_skopeo copy --dest-tls-verify=false oci:$dir1:latest $localimg
run_skopeo copy --src-tls-verify=false $localimg oci:$dir2:latest
# Both extracted copies must be identical
diff -urN $dir1 $dir2
}
# Same image, extracted once with :tag and once without
@test "copy: oci w/ and w/o tags" {
local remote_image=docker://busybox:latest
local dir1=$TESTDIR/dir1
local dir2=$TESTDIR/dir2
run_skopeo copy $remote_image oci:$dir1
run_skopeo copy $remote_image oci:$dir2:withtag
# Both extracted copies must be identical, except for index.json
diff -urN --exclude=index.json $dir1 $dir2
# ...which should differ only in the tag. (But that's too hard to check)
grep '"org.opencontainers.image.ref.name":"withtag"' $dir2/index.json
}
# This one seems unlikely to get fixed
@test "copy: bug 651" {
skip "Enable this once skopeo issue #651 has been fixed"
run_skopeo copy --dest-tls-verify=false \
docker://quay.io/libpod/alpine_labels:latest \
docker://localhost:5000/foo
}
teardown() {
podman rm -f reg
standard_teardown
}
# vim: filetype=sh


@@ -0,0 +1,32 @@
#!/usr/bin/env bats
#
# Confirm that skopeo will push to and pull from a local
# registry with locally-created TLS certificates.
#
load helpers
function setup() {
standard_setup
start_registry --with-cert reg
}
@test "local registry, with cert" {
# Push to local registry...
run_skopeo copy --dest-cert-dir=$TESTDIR/client-auth \
docker://busybox:latest \
docker://localhost:5000/busybox:unsigned
# ...and pull it back out
run_skopeo copy --src-cert-dir=$TESTDIR/client-auth \
docker://localhost:5000/busybox:unsigned \
dir:$TESTDIR/extracted
}
teardown() {
podman rm -f reg
standard_teardown
}
# vim: filetype=sh


@@ -0,0 +1,78 @@
#!/usr/bin/env bats
#
# Tests with a local registry with auth
#
load helpers
function setup() {
standard_setup
# Remove old/stale cred file
_cred_dir=$TESTDIR/credentials
export XDG_RUNTIME_DIR=$_cred_dir
mkdir -p $_cred_dir/containers
rm -f $_cred_dir/containers/auth.json
# Start authenticated registry with random password
testuser=testuser
testpassword=$(random_string 15)
start_registry --testuser=testuser --testpassword=$testpassword reg
}
@test "auth: credentials on command line" {
# No creds
run_skopeo 1 inspect --tls-verify=false docker://localhost:5000/nonesuch
expect_output --substring "unauthorized: authentication required"
# Wrong user
run_skopeo 1 inspect --tls-verify=false --creds=baduser:badpassword \
docker://localhost:5000/nonesuch
expect_output --substring "unauthorized: authentication required"
# Wrong password
run_skopeo 1 inspect --tls-verify=false --creds=$testuser:badpassword \
docker://localhost:5000/nonesuch
expect_output --substring "unauthorized: authentication required"
# Correct creds, but no such image
run_skopeo 1 inspect --tls-verify=false --creds=$testuser:$testpassword \
docker://localhost:5000/nonesuch
expect_output --substring "manifest unknown: manifest unknown"
# These should pass
run_skopeo copy --dest-tls-verify=false --dcreds=$testuser:$testpassword \
docker://busybox:latest docker://localhost:5000/busybox:mine
run_skopeo inspect --tls-verify=false --creds=$testuser:$testpassword \
docker://localhost:5000/busybox:mine
expect_output --substring "localhost:5000/busybox"
}
@test "auth: credentials via podman login" {
# Logged in: skopeo should work
podman login --tls-verify=false -u $testuser -p $testpassword localhost:5000
run_skopeo copy --dest-tls-verify=false \
docker://busybox:latest docker://localhost:5000/busybox:mine
run_skopeo inspect --tls-verify=false docker://localhost:5000/busybox:mine
expect_output --substring "localhost:5000/busybox"
# Logged out: should fail
podman logout localhost:5000
run_skopeo 1 inspect --tls-verify=false docker://localhost:5000/busybox:mine
expect_output --substring "unauthorized: authentication required"
}
teardown() {
podman rm -f reg
if [[ -n $_cred_dir ]]; then
rm -rf $_cred_dir
fi
standard_teardown
}
# vim: filetype=sh

systemtest/050-signing.bats Normal file

@@ -0,0 +1,151 @@
#!/usr/bin/env bats
#
# Tests with gpg signing
#
load helpers
function setup() {
standard_setup
# Create dummy gpg keys
export GNUPGHOME=$TESTDIR/skopeo-gpg
mkdir --mode=0700 $GNUPGHOME
# gpg on f30 needs this, otherwise:
# gpg: agent_genkey failed: Inappropriate ioctl for device
# ...but gpg on f29 (and, probably, Ubuntu) doesn't grok this
GPGOPTS='--pinentry-mode loopback'
if gpg --pinentry-mode asdf 2>&1 | grep -qi 'Invalid option'; then
GPGOPTS=
fi
for k in alice bob;do
gpg --batch $GPGOPTS --gen-key --passphrase '' <<END_GPG
Key-Type: RSA
Name-Real: Test key - $k
Name-email: $k@test.redhat.com
%commit
END_GPG
gpg --armor --export $k@test.redhat.com >$GNUPGHOME/pubkey-$k.gpg
done
# Registries. The important part here seems to be sigstore,
# because (I guess?) the registry itself has no mechanism
# for storing or validating signatures.
REGISTRIES_D=$TESTDIR/registries.d
mkdir $REGISTRIES_D $TESTDIR/sigstore
cat >$REGISTRIES_D/registries.yaml <<EOF
docker:
localhost:5000:
sigstore: file://$TESTDIR/sigstore
EOF
# Policy file. Basically, require /myns/alice and /myns/bob
# to be signed; allow /open; and reject anything else.
POLICY_JSON=$TESTDIR/policy.json
cat >$POLICY_JSON <<END_POLICY_JSON
{
"default": [
{
"type": "reject"
}
],
"transports": {
"docker": {
"localhost:5000/myns/alice": [
{
"type": "signedBy",
"keyType": "GPGKeys",
"keyPath": "$GNUPGHOME/pubkey-alice.gpg"
}
],
"localhost:5000/myns/bob": [
{
"type": "signedBy",
"keyType": "GPGKeys",
"keyPath": "$GNUPGHOME/pubkey-bob.gpg"
}
],
"localhost:5000/open": [
{
"type": "insecureAcceptAnything"
}
]
}
}
}
END_POLICY_JSON
start_registry reg
}
@test "signing" {
run_skopeo '?' standalone-sign /dev/null busybox alice@test.redhat.com -o /dev/null
if [[ "$output" =~ 'signing is not supported' ]]; then
skip "skopeo built without support for creating signatures"
return 1
fi
if [ "$status" -ne 0 ]; then
die "exit code is $status; expected $expected_rc"
fi
# Cache local copy
run_skopeo copy docker://busybox:latest dir:$TESTDIR/busybox
# Push a bunch of images. Do so *without* --policy flag; this lets us
# sign or not, creating images that will or won't conform to policy.
while read path sig comments; do
local sign_opt=
if [[ $sig != '-' ]]; then
sign_opt="--sign-by=${sig}@test.redhat.com"
fi
run_skopeo --registries.d $REGISTRIES_D \
copy --dest-tls-verify=false \
$sign_opt \
dir:$TESTDIR/busybox \
docker://localhost:5000$path
done <<END_PUSH
/myns/alice:signed alice # Properly-signed image
/myns/alice:unsigned - # Unsigned image to path that requires signature
/myns/bob:signedbyalice alice # Bad signature: image under /bob
/myns/carol:latest - # No signature
/open/forall:latest - # No signature, but none needed
END_PUSH
# Done pushing. Now try to fetch. From here on we use the --policy option.
# The table below lists the paths to fetch, and the expected errors (or
# none, if we expect them to pass).
while read path expected_error; do
expected_rc=
if [[ -n $expected_error ]]; then
expected_rc=1
fi
rm -rf $TESTDIR/d
run_skopeo $expected_rc \
--registries.d $REGISTRIES_D \
--policy $POLICY_JSON \
copy --src-tls-verify=false \
docker://localhost:5000$path \
dir:$TESTDIR/d
if [[ -n $expected_error ]]; then
expect_output --substring "Source image rejected: $expected_error"
fi
done <<END_TESTS
/myns/alice:signed
/myns/bob:signedbyalice Invalid GPG signature
/myns/alice:unsigned Signature for identity localhost:5000/myns/alice:signed is not accepted
/myns/carol:latest Running image docker://localhost:5000/myns/carol:latest is rejected by policy.
/open/forall:latest
END_TESTS
}
teardown() {
podman rm -f reg
standard_teardown
}
# vim: filetype=sh

systemtest/helpers.bash Normal file

@@ -0,0 +1,344 @@
#!/bin/bash
SKOPEO_BINARY=${SKOPEO_BINARY:-$(dirname ${BASH_SOURCE})/../skopeo}
# Default timeout for a skopeo command.
SKOPEO_TIMEOUT=${SKOPEO_TIMEOUT:-300}
###############################################################################
# BEGIN setup/teardown
# Provide common setup and teardown functions, but do not name them such!
# That way individual tests can override with their own setup/teardown,
# while retaining the ability to include these if they so desire.
function standard_setup() {
# Argh. Although BATS provides $BATS_TMPDIR, it's just /tmp!
# That's bloody worthless. Let's make our own, in which subtests
# can write whatever they like and trust that it'll be deleted
# on cleanup.
TESTDIR=$(mktemp -d --tmpdir=${BATS_TMPDIR:-/tmp} skopeo_bats.XXXXXX)
}
function standard_teardown() {
if [[ -n $TESTDIR ]]; then
rm -rf $TESTDIR
fi
}
# Individual .bats files may override or extend these
function setup() {
standard_setup
}
function teardown() {
standard_teardown
}
# END setup/teardown
###############################################################################
# BEGIN standard helpers for running skopeo and testing results
#################
# run_skopeo # Invoke skopeo, with timeout, using BATS 'run'
#################
#
# This is the preferred mechanism for invoking skopeo:
#
# * we use 'timeout' to abort (with a diagnostic) if something
# takes too long; this is preferable to a CI hang.
# * we log the command run and its output. This doesn't normally
# appear in BATS output, but it will if there's an error.
# * we check exit status. Since the normal desired code is 0,
# that's the default; but the first argument can override:
#
# run_skopeo 125 nonexistent-subcommand
# run_skopeo '?' some-other-command # let our caller check status
#
# Since we use the BATS 'run' mechanism, $output and $status will be
# defined for our caller.
#
function run_skopeo() {
# Number as first argument = expected exit code; default 0
expected_rc=0
case "$1" in
[0-9]) expected_rc=$1; shift;;
[1-9][0-9]) expected_rc=$1; shift;;
[12][0-9][0-9]) expected_rc=$1; shift;;
'?') expected_rc= ; shift;; # ignore exit code
esac
# Remember command args, for possible use in later diagnostic messages
MOST_RECENT_SKOPEO_COMMAND="skopeo $*"
# stdout is only emitted upon error; this echo is to help a debugger
echo "\$ $SKOPEO_BINARY $*"
run timeout --foreground --kill=10 $SKOPEO_TIMEOUT ${SKOPEO_BINARY} "$@"
# without "quotes", multiple lines are glommed together into one
if [ -n "$output" ]; then
echo "$output"
fi
if [ "$status" -ne 0 ]; then
echo -n "[ rc=$status ";
if [ -n "$expected_rc" ]; then
if [ "$status" -eq "$expected_rc" ]; then
echo -n "(expected) ";
else
echo -n "(** EXPECTED $expected_rc **) ";
fi
fi
echo "]"
fi
if [ "$status" -eq 124 -o "$status" -eq 137 ]; then
# FIXME: 'timeout -v' requires coreutils-8.29; travis seems to have
# an older version. If/when travis updates, please add -v
# to the 'timeout' command above, and un-comment this out:
# if expr "$output" : ".*timeout: sending" >/dev/null; then
echo "*** TIMED OUT ***"
false
fi
if [ -n "$expected_rc" ]; then
if [ "$status" -ne "$expected_rc" ]; then
die "exit code is $status; expected $expected_rc"
fi
fi
}
#########
# die # Abort with helpful message
#########
function die() {
echo "#/vvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvv" >&2
echo "#| FAIL: $*" >&2
echo "#\\^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^" >&2
false
}
###################
# expect_output # Compare actual vs expected string; fail if mismatch
###################
#
# Compares $output against the given string argument. Optional second
# argument is descriptive text to show as the error message (default:
# the command most recently run by 'run_skopeo'). This text can be
# useful to isolate a failure when there are multiple identical
# run_skopeo invocations, and the difference is solely in the
# config or setup; see, e.g., run.bats:run-cmd().
#
# By default we run an exact string comparison; use --substring to
# look for the given string anywhere in $output.
#
# By default we look in "$output", which is set in run_skopeo().
# To override, use --from="some-other-string" (e.g. "${lines[0]}")
#
# Examples:
#
# expect_output "this is exactly what we expect"
# expect_output "foo=bar" "description of this particular test"
# expect_output --from="${lines[0]}" "expected first line"
#
function expect_output() {
# By default we examine $output, the result of run_skopeo
local actual="$output"
local check_substring=
# option processing: recognize --from="...", --substring
local opt
for opt; do
local value=$(expr "$opt" : '[^=]*=\(.*\)')
case "$opt" in
--from=*) actual="$value"; shift;;
--substring) check_substring=1; shift;;
--) shift; break;;
-*) die "Invalid option '$opt'" ;;
*) break;;
esac
done
local expect="$1"
local testname="${2:-${MOST_RECENT_SKOPEO_COMMAND:-[no test name given]}}"
if [ -z "$expect" ]; then
if [ -z "$actual" ]; then
return
fi
expect='[no output]'
elif [ "$actual" = "$expect" ]; then
return
elif [ -n "$check_substring" ]; then
if [[ "$actual" =~ $expect ]]; then
return
fi
fi
# This is a multi-line message, which may in turn contain multi-line
# output, so let's format it ourself, readably
local -a actual_split
readarray -t actual_split <<<"$actual"
printf "#/vvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvv\n" >&2
printf "#| FAIL: $testname\n" >&2
printf "#| expected: '%s'\n" "$expect" >&2
printf "#| actual: '%s'\n" "${actual_split[0]}" >&2
local line
for line in "${actual_split[@]:1}"; do
printf "#| > '%s'\n" "$line" >&2
done
printf "#\\^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n" >&2
false
}
#######################
# expect_line_count # Check the expected number of output lines
#######################
#
# ...from the most recent run_skopeo command
#
function expect_line_count() {
local expect="$1"
local testname="${2:-${MOST_RECENT_SKOPEO_COMMAND:-[no test name given]}}"
local actual="${#lines[@]}"
if [ "$actual" -eq "$expect" ]; then
return
fi
printf "#/vvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvv\n" >&2
printf "#| FAIL: $testname\n" >&2
printf "#| Expected %d lines of output, got %d\n" $expect $actual >&2
printf "#| Output was:\n" >&2
local line
for line in "${lines[@]}"; do
printf "#| >%s\n" "$line" >&2
done
printf "#\\^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n" >&2
false
}
# END standard helpers for running skopeo and testing results
###############################################################################
# BEGIN helpers for starting/stopping registries
####################
# start_registry # Run a local registry container
####################
#
# Usage: start_registry [OPTIONS] NAME
#
# OPTIONS
# --port=NNNN Port to listen on (default: 5000)
# --testuser=XXX Require authentication; this is the username
# --testpassword=XXX ...and the password (these two go together)
# --with-cert Create a cert for running with TLS (not working)
#
# NAME is the container name to assign.
#
start_registry() {
local port=5000
local testuser=
local testpassword=
local create_cert=
# option processing: recognize options for running the registry
# in different modes.
local opt
for opt; do
local value=$(expr "$opt" : '[^=]*=\(.*\)')
case "$opt" in
--port=*) port="$value"; shift;;
--testuser=*) testuser="$value"; shift;;
--testpassword=*) testpassword="$value"; shift;;
--with-cert) create_cert=1; shift;;
-*) die "Invalid option '$opt'" ;;
*) break;;
esac
done
local name=${1?start_registry() invoked without a NAME}
# Temp directory must be defined and must exist
[[ -n $TESTDIR && -d $TESTDIR ]]
AUTHDIR=$TESTDIR/auth
mkdir -p $AUTHDIR
local -a reg_args=(-v $AUTHDIR:/auth:Z -p $port:5000)
# cgroup option necessary under podman-in-podman (CI tests),
# and doesn't seem to do any harm otherwise.
PODMAN="podman --cgroup-manager=cgroupfs"
# Called with --testuser? Create an htpasswd file
if [[ -n $testuser ]]; then
if [[ -z $testpassword ]]; then
die "start_registry() invoked with testuser but no testpassword"
fi
if ! egrep -q "^$testuser:" $AUTHDIR/htpasswd; then
$PODMAN run --rm --entrypoint htpasswd registry:2 \
-Bbn $testuser $testpassword >> $AUTHDIR/htpasswd
fi
reg_args+=(
-e REGISTRY_AUTH=htpasswd
-e REGISTRY_AUTH_HTPASSWD_PATH=/auth/htpasswd
-e REGISTRY_AUTH_HTPASSWD_REALM="Registry Realm"
)
fi
# Called with --with-cert? Create certificates.
if [[ -n $create_cert ]]; then
CERT=$AUTHDIR/domain.crt
if [ ! -e $CERT ]; then
openssl req -newkey rsa:4096 -nodes -sha256 \
-keyout $AUTHDIR/domain.key -x509 -days 2 \
-out $CERT \
-subj "/C=US/ST=Foo/L=Bar/O=Red Hat, Inc./CN=localhost"
fi
reg_args+=(
-e REGISTRY_HTTP_TLS_CERTIFICATE=/auth/domain.crt
-e REGISTRY_HTTP_TLS_KEY=/auth/domain.key
)
# Copy .crt file to a directory *without* the .key one, so we can
# test the client. (If client sees a matching .key file, it fails)
# Thanks to Miloslav Trmac for this hint.
mkdir -p $TESTDIR/client-auth
cp $CERT $TESTDIR/client-auth/
fi
$PODMAN run -d --name $name "${reg_args[@]}" registry:2
# Wait for registry to actually come up
timeout=10
while [[ $timeout -ge 1 ]]; do
if curl localhost:$port/; then
return
fi
timeout=$(expr $timeout - 1)
sleep 1
done
die "Timed out waiting for registry container to respond on :$port"
}
# END helpers for starting/stopping registries
###############################################################################
# BEGIN miscellaneous tools
###################
# random_string # Returns a pseudorandom human-readable string
###################
#
# Numeric argument, if present, is desired length of string
#
function random_string() {
local length=${1:-10}
head /dev/urandom | tr -dc a-zA-Z0-9 | head -c$length
}
# END miscellaneous tools
###############################################################################

systemtest/run-tests Executable file

@@ -0,0 +1,16 @@
#!/bin/bash
#
# run-tests - simple wrapper allowing shortcuts on invocation
#
TEST_DIR=$(dirname $0)
TESTS=$TEST_DIR
for i; do
case "$i" in
*.bats) TESTS=$i ;;
*) TESTS=$(echo $TEST_DIR/*$i*.bats) ;;
esac
done
bats $TESTS


@@ -1,52 +1,67 @@
github.com/urfave/cli v1.17.0
github.com/containers/image master
github.com/opencontainers/go-digest master
gopkg.in/cheggaaa/pb.v1 ad4efe000aa550bb54918c06ebbadc0ff17687b9 https://github.com/cheggaaa/pb
github.com/containers/storage master
github.com/urfave/cli v1.20.0
github.com/kr/pretty v0.1.0
github.com/kr/text v0.1.0
github.com/containers/image v2.0.0
github.com/containers/buildah v1.8.4
github.com/vbauerster/mpb v3.3.4
github.com/mattn/go-isatty v0.0.4
github.com/VividCortex/ewma v1.1.1
golang.org/x/sync 42b317875d0fa942474b76e1b46a6060d720ae6e
github.com/opencontainers/go-digest c9281466c8b2f606084ac71339773efd177436e7
github.com/containers/storage v1.12.10
github.com/sirupsen/logrus v1.0.0
github.com/go-check/check v1
github.com/stretchr/testify v1.1.3
github.com/davecgh/go-spew master
github.com/pmezard/go-difflib master
github.com/pkg/errors master
golang.org/x/crypto master
github.com/davecgh/go-spew v1.1.1
github.com/pmezard/go-difflib 5d4384ee4fb2527b0a1256a821ebfc92f91efefc
github.com/pkg/errors v0.8.1
golang.org/x/crypto a4c6cb3142f211c99e4bf4cd769535b29a9b616f
github.com/ulikunitz/xz v0.5.4
github.com/etcd-io/bbolt v1.3.2
# docker deps from https://github.com/docker/docker/blob/v1.11.2/hack/vendor.sh
github.com/docker/docker 30eb4d8cdc422b023d5f11f29a82ecb73554183b
github.com/docker/go-connections 3ede32e2033de7505e6500d6c868c2b9ed9f169d
github.com/docker/docker da99009bbb1165d1ac5688b5c81d2f589d418341
github.com/docker/go-connections 7beb39f0b969b075d1325fecb092faf27fd357b6
github.com/containerd/continuity d8fb8589b0e8e85b8c8bbaa8840226d0dfeb7371
github.com/vbatts/tar-split v0.10.2
github.com/gorilla/context 14f550f51a
github.com/gorilla/mux e444e69cbd
github.com/docker/go-units 8a7beacffa3009a9ac66bad506b18ffdd110cf97
golang.org/x/net master
golang.org/x/net 45ffb0cd1ba084b73e26dee67e667e1be5acce83
github.com/gogo/protobuf fcdc5011193ff531a548e9b0301828d5a5b97fd8
# end docker deps
golang.org/x/text master
github.com/docker/distribution master
github.com/docker/libtrust master
golang.org/x/text e6919f6577db79269a6443b9dc46d18f2238fb5d
github.com/docker/distribution 5f6282db7d65e6d72ad7c2cc66310724a57be716
# docker/distributions dependencies
# end of docker/distribution dependencies
github.com/docker/libtrust aabc10ec26b754e797f9028f4589c5b7bd90dc20
github.com/docker/docker-credential-helpers d68f9aeca33f5fd3f08eeae5e9d175edf4e731d1
github.com/opencontainers/runc master
github.com/opencontainers/image-spec v1.0.0
github.com/opencontainers/runc v1.0.0-rc6
github.com/opencontainers/image-spec 7b1e489870acb042978a3935d2fb76f8a79aff81
# -- start OCI image validation requirements.
github.com/opencontainers/runtime-spec v1.0.0
github.com/opencontainers/image-tools 6d941547fa1df31900990b3fb47ec2468c9c6469
github.com/xeipuuv/gojsonschema master
github.com/xeipuuv/gojsonreference master
github.com/xeipuuv/gojsonpointer master
go4.org master https://github.com/camlistore/go4
github.com/ostreedev/ostree-go aeb02c6b6aa2889db3ef62f7855650755befd460
github.com/xeipuuv/gojsonpointer 4e3ac2762d5f479393488629ee9370b50873b3a6
github.com/xeipuuv/gojsonreference bd5ef7bd5415a7ac448318e64f11a24cd21e594b
github.com/xeipuuv/gojsonschema v1.1.0
go4.org ce4c26f7be8eb27dc77f996b08d286dd80bc4a01 https://github.com/camlistore/go4
github.com/ostreedev/ostree-go 56f3a639dbc0f2f5051c6d52dade28a882ba78ce
# -- end OCI image validation requirements
github.com/mtrmac/gpgme master
github.com/mtrmac/gpgme b2432428689ca58c2b8e8dea9449d3295cf96fc9
# openshift/origin' k8s dependencies as of OpenShift v1.1.5
github.com/golang/glog 44145f04b68cf362d9c4df2182967c2275eaefed
k8s.io/client-go master
k8s.io/client-go kubernetes-1.10.13-beta.0
github.com/ghodss/yaml 73d445a93680fa1a78ae23a5839bad48f32ba1ee
gopkg.in/yaml.v2 d466437aa4adc35830964cffc5b5f262c63ddcb4
github.com/imdario/mergo 6633656539c1639d9d78127b7d47c622b5d7b6dc
# containers/storage's dependencies that aren't already being pulled in
github.com/mistifyio/go-zfs 22c9b32c84eb0d0c6f4043b6e90fc94073de92fa
github.com/pborman/uuid v1.0
github.com/opencontainers/selinux master
golang.org/x/sys master
github.com/opencontainers/selinux v1.1
golang.org/x/sys 43e60d72a8e2bd92ee98319ba9a384a0e9837c08
github.com/tchap/go-patricia v2.2.6
github.com/BurntSushi/toml master
github.com/BurntSushi/toml v0.3.1
github.com/pquerna/ffjson d49c2bc1aa135aad0c6f4fc2056623ec78f5d5ac
github.com/syndtr/gocapability d98352740cb2c55f81556b63d4a1ec64c5a319c2
github.com/klauspost/pgzip v1.2.1
github.com/klauspost/compress v1.4.1
github.com/klauspost/cpuid v1.2.0

vendor/github.com/VividCortex/ewma/LICENSE generated vendored Normal file

@@ -0,0 +1,21 @@
The MIT License
Copyright (c) 2013 VividCortex
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.

vendor/github.com/VividCortex/ewma/README.md generated vendored Normal file

@@ -0,0 +1,140 @@
# EWMA [![GoDoc](https://godoc.org/github.com/VividCortex/ewma?status.svg)](https://godoc.org/github.com/VividCortex/ewma) ![Build Status](https://circleci.com/gh/VividCortex/moving_average.png?circle-token=1459fa37f9ca0e50cef05d1963146d96d47ea523)
This repo provides Exponentially Weighted Moving Average algorithms, or EWMAs for short, [based on our
Quantifying Abnormal Behavior talk](https://vividcortex.com/blog/2013/07/23/a-fast-go-library-for-exponential-moving-averages/).
### Exponentially Weighted Moving Average
An exponentially weighted moving average is a way to continuously compute a type of
average for a series of numbers, as the numbers arrive. After a value in the series is
added to the average, its weight in the average decreases exponentially over time. This
biases the average towards more recent data. EWMAs are useful for several reasons, chiefly
their inexpensive computational and memory cost, as well as the fact that they represent
the recent central tendency of the series of values.
The EWMA algorithm requires a decay factor, alpha. The larger the alpha, the more the average
is biased towards recent history. The alpha must be between 0 and 1, and is typically
a fairly small number, such as 0.04. We will discuss the choice of alpha later.
The algorithm works thus, in pseudocode:
1. Multiply the next number in the series by alpha.
2. Multiply the current value of the average by 1 minus alpha.
3. Add the result of steps 1 and 2, and store it as the new current value of the average.
4. Repeat for each number in the series.
There are special-case behaviors for how to initialize the current value, and these vary
between implementations. One approach is to start with the first value in the series;
another is to average the first 10 or so values in the series using an arithmetic average,
and then begin the incremental updating of the average. Each method has pros and cons.
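To make the pseudocode concrete, here is a direct Go transcription (an illustrative snippet, not part of this package's API); it uses alpha = 0.5 and seeds the average with the first value, the first initialization approach mentioned above:
```go
package main

import "fmt"

func main() {
	const alpha = 0.5
	series := []float64{300, 310, 295, 305, 290}

	avg := series[0] // initialize with the first value in the series
	for _, v := range series[1:] {
		// Steps 1-3: blend the new value into the running average.
		avg = v*alpha + avg*(1-alpha)
	}
	fmt.Printf("EWMA: %.2f\n", avg) // EWMA: 296.25
}
```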
It may help to look at it pictorially. Suppose the series has five numbers, and we choose
alpha to be 0.50 for simplicity. Here's the series, with numbers in the neighborhood of 300.
![Data Series](https://user-images.githubusercontent.com/279875/28242350-463289a2-6977-11e7-88ca-fd778ccef1f0.png)
Now let's take the moving average of those numbers. First we set the average to the value
of the first number.
![EWMA Step 1](https://user-images.githubusercontent.com/279875/28242353-464c96bc-6977-11e7-9981-dc4e0789c7ba.png)
Next we multiply the next number by alpha, multiply the current value by 1-alpha, and add
them to generate a new value.
![EWMA Step 2](https://user-images.githubusercontent.com/279875/28242351-464abefa-6977-11e7-95d0-43900f29bef2.png)
This continues until we are done.
![EWMA Step N](https://user-images.githubusercontent.com/279875/28242352-464c58f0-6977-11e7-8cd0-e01e4efaac7f.png)
Notice how each of the values in the series decays by half each time a new value
is added, and the top of the bars in the lower portion of the image represents the
size of the moving average. It is a smoothed, or low-pass, average of the original
series.
For further reading, see [Exponentially weighted moving average](http://en.wikipedia.org/wiki/Moving_average#Exponential_moving_average) on wikipedia.
### Choosing Alpha
Consider a fixed-size sliding-window moving average (not an exponentially weighted moving average)
that averages over the previous N samples. What is the average age of each sample? It is N/2.
Now suppose that you wish to construct a EWMA whose samples have the same average age. The formula
to compute the alpha required for this is: alpha = 2/(N+1). Proof is in the book
"Production and Operations Analysis" by Steven Nahmias.
So, for example, if you have a time-series with samples once per second, and you want to get the
moving average over the previous minute, you should use an alpha of .032786885. This, by the way,
is the constant alpha used for this repository's SimpleEWMA.
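As a quick sanity check of that arithmetic (again just an illustrative snippet):
```go
package main

import "fmt"

func main() {
	n := 60.0            // one sample per second over the previous minute
	alpha := 2 / (n + 1) // the formula above
	fmt.Printf("alpha = %.9f\n", alpha) // 0.032786885
}
```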
### Implementations
This repository contains two implementations of the EWMA algorithm, with different properties.
The implementations all conform to the MovingAverage interface, and the constructor returns
that type.
Current implementations assume an implicit time interval of 1.0 between every sample added.
That is, the passage of time is treated as though it's the same as the arrival of samples.
If you need time-based decay when samples are not arriving precisely at set intervals, then
this package will not support your needs at present.
#### SimpleEWMA
A SimpleEWMA is designed for low CPU and memory consumption. It **will** have different behavior than the VariableEWMA
for multiple reasons. It has no warm-up period and it uses a constant
decay. These properties let it use less memory. It will also behave
differently when it's equal to zero, which is assumed to mean
uninitialized, so if a value is likely to actually become zero over time,
then any non-zero value will cause a sharp jump instead of a small change.
#### VariableEWMA
Unlike SimpleEWMA, this supports a custom age which must be stored, and thus uses more memory.
It also has a "warmup" time when you start adding values to it. It will report a value of 0.0
until you have added the required number of samples to it. It uses some memory to store the
number of samples added to it. As a result it uses a little over twice the memory of SimpleEWMA.
## Usage
### API Documentation
View the GoDoc generated documentation [here](http://godoc.org/github.com/VividCortex/ewma).
```go
package main
import "github.com/VividCortex/ewma"
func main() {
samples := [100]float64{
4599, 5711, 4746, 4621, 5037, 4218, 4925, 4281, 5207, 5203, 5594, 5149,
}
e := ewma.NewMovingAverage() //=> Returns a SimpleEWMA if called without params
a := ewma.NewMovingAverage(5) //=> returns a VariableEWMA with a decay of 2 / (5 + 1)
for _, f := range samples {
e.Add(f)
a.Add(f)
}
e.Value() //=> 13.577404704631077
a.Value() //=> 1.5806140565521463e-12
}
```
## Contributing
We only accept pull requests for minor fixes or improvements. This includes:
* Small bug fixes
* Typos
* Documentation or comments
Please open issues to discuss new features. Pull requests for new features will be rejected,
so we recommend forking the repository and making changes in your fork for your use case.
## License
This repository is Copyright (c) 2013 VividCortex, Inc. All rights reserved.
It is licensed under the MIT license. Please see the LICENSE file for applicable license terms.

vendor/github.com/VividCortex/ewma/ewma.go generated vendored Normal file

@@ -0,0 +1,126 @@
// Package ewma implements exponentially weighted moving averages.
package ewma
// Copyright (c) 2013 VividCortex, Inc. All rights reserved.
// Please see the LICENSE file for applicable license terms.
const (
// By default, we average over a one-minute period, which means the average
// age of the metrics in the period is 30 seconds.
AVG_METRIC_AGE float64 = 30.0
// The formula for computing the decay factor from the average age comes
// from "Production and Operations Analysis" by Steven Nahmias.
DECAY float64 = 2 / (float64(AVG_METRIC_AGE) + 1)
// For best results, the moving average should not be initialized to the
// samples it sees immediately. The book "Production and Operations
// Analysis" by Steven Nahmias suggests initializing the moving average to
// the mean of the first 10 samples. Until the VariableEwma has seen this
// many samples, it is not "ready" to be queried for the value of the
// moving average. This adds some memory cost.
WARMUP_SAMPLES uint8 = 10
)
// MovingAverage is the interface that computes a moving average over a time-
// series stream of numbers. The average may be over a window or exponentially
// decaying.
type MovingAverage interface {
Add(float64)
Value() float64
Set(float64)
}
// NewMovingAverage constructs a MovingAverage that computes an average with the
// desired characteristics in the moving window or exponential decay. If no
// age is given, it constructs a default exponentially weighted implementation
// that consumes minimal memory. The age is related to the decay factor alpha
// by the formula given for the DECAY constant. It signifies the average age
// of the samples as time goes to infinity.
func NewMovingAverage(age ...float64) MovingAverage {
if len(age) == 0 || age[0] == AVG_METRIC_AGE {
return new(SimpleEWMA)
}
return &VariableEWMA{
decay: 2 / (age[0] + 1),
}
}
// A SimpleEWMA represents the exponentially weighted moving average of a
// series of numbers. It WILL have different behavior than the VariableEWMA
// for multiple reasons. It has no warm-up period and it uses a constant
// decay. These properties let it use less memory. It will also behave
// differently when it's equal to zero, which is assumed to mean
// uninitialized, so if a value is likely to actually become zero over time,
// then any non-zero value will cause a sharp jump instead of a small change.
// However, note that this takes a long time, and the value may just
// decays to a stable value that's close to zero, but which won't be mistaken
// for uninitialized. See http://play.golang.org/p/litxBDr_RC for example.
type SimpleEWMA struct {
// The current value of the average. After adding with Add(), this is
// updated to reflect the average of all values seen thus far.
value float64
}
// Add adds a value to the series and updates the moving average.
func (e *SimpleEWMA) Add(value float64) {
if e.value == 0 { // this is a proxy for "uninitialized"
e.value = value
} else {
e.value = (value * DECAY) + (e.value * (1 - DECAY))
}
}
// Value returns the current value of the moving average.
func (e *SimpleEWMA) Value() float64 {
return e.value
}
// Set sets the EWMA's value.
func (e *SimpleEWMA) Set(value float64) {
e.value = value
}
// VariableEWMA represents the exponentially weighted moving average of a series of
// numbers. Unlike SimpleEWMA, it supports a custom age, and thus uses more memory.
type VariableEWMA struct {
// The multiplier factor by which the previous samples decay.
decay float64
// The current value of the average.
value float64
// The number of samples added to this instance.
count uint8
}
// Add adds a value to the series and updates the moving average.
func (e *VariableEWMA) Add(value float64) {
switch {
case e.count < WARMUP_SAMPLES:
e.count++
e.value += value
case e.count == WARMUP_SAMPLES:
e.count++
e.value = e.value / float64(WARMUP_SAMPLES)
e.value = (value * e.decay) + (e.value * (1 - e.decay))
default:
e.value = (value * e.decay) + (e.value * (1 - e.decay))
}
}
// Value returns the current value of the average, or 0.0 if the series hasn't
// warmed up yet.
func (e *VariableEWMA) Value() float64 {
if e.count <= WARMUP_SAMPLES {
return 0.0
}
return e.value
}
// Set sets the EWMA's value.
func (e *VariableEWMA) Set(value float64) {
e.value = value
if e.count <= WARMUP_SAMPLES {
e.count = WARMUP_SAMPLES + 1
}
}

vendor/github.com/containerd/continuity/LICENSE generated vendored Normal file

@@ -0,0 +1,202 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "{}"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright {yyyy} {name of copyright owner}
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

vendor/github.com/containerd/continuity/README.md generated vendored Normal file

@@ -0,0 +1,74 @@
# continuity
[![GoDoc](https://godoc.org/github.com/containerd/continuity?status.svg)](https://godoc.org/github.com/containerd/continuity)
[![Build Status](https://travis-ci.org/containerd/continuity.svg?branch=master)](https://travis-ci.org/containerd/continuity)
A transport-agnostic, filesystem metadata manifest system
This project is a staging area for experiments in providing transport agnostic
metadata storage.
Please see https://github.com/opencontainers/specs/issues/11 for more details.
## Manifest Format
A continuity manifest encodes filesystem metadata in Protocol Buffers.
Please refer to [proto/manifest.proto](proto/manifest.proto).
## Usage
Build:
```console
$ make
```
Create a manifest (of this repo itself):
```console
$ ./bin/continuity build . > /tmp/a.pb
```
Dump a manifest:
```console
$ ./bin/continuity ls /tmp/a.pb
...
-rw-rw-r-- 270 B /.gitignore
-rw-rw-r-- 88 B /.mailmap
-rw-rw-r-- 187 B /.travis.yml
-rw-rw-r-- 359 B /AUTHORS
-rw-rw-r-- 11 kB /LICENSE
-rw-rw-r-- 1.5 kB /Makefile
...
-rw-rw-r-- 986 B /testutil_test.go
drwxrwxr-x 0 B /version
-rw-rw-r-- 478 B /version/version.go
```
Verify a manifest:
```console
$ ./bin/continuity verify . /tmp/a.pb
```
Break the directory and restore using the manifest:
```console
$ chmod 777 Makefile
$ ./bin/continuity verify . /tmp/a.pb
2017/06/23 08:00:34 error verifying manifest: resource "/Makefile" has incorrect mode: -rwxrwxrwx != -rw-rw-r--
$ ./bin/continuity apply . /tmp/a.pb
$ stat -c %a Makefile
664
$ ./bin/continuity verify . /tmp/a.pb
```
## Contribution Guide
### Building Proto Package
If you change the proto file you will need to rebuild the generated Go with `go generate`.
```console
$ go generate ./proto
```

View File

@@ -0,0 +1,85 @@
package pathdriver
import (
"path/filepath"
)
// PathDriver provides all of the path manipulation functions in a common
// interface. The context should call these and never use the `filepath`
// package or any other package to manipulate paths.
type PathDriver interface {
Join(paths ...string) string
IsAbs(path string) bool
Rel(base, target string) (string, error)
Base(path string) string
Dir(path string) string
Clean(path string) string
Split(path string) (dir, file string)
Separator() byte
Abs(path string) (string, error)
Walk(string, filepath.WalkFunc) error
FromSlash(path string) string
ToSlash(path string) string
Match(pattern, name string) (matched bool, err error)
}
// pathDriver is a simple default implementation that calls the filepath package.
type pathDriver struct{}
// LocalPathDriver is the exported pathDriver struct for convenience.
var LocalPathDriver PathDriver = &pathDriver{}
func (*pathDriver) Join(paths ...string) string {
return filepath.Join(paths...)
}
func (*pathDriver) IsAbs(path string) bool {
return filepath.IsAbs(path)
}
func (*pathDriver) Rel(base, target string) (string, error) {
return filepath.Rel(base, target)
}
func (*pathDriver) Base(path string) string {
return filepath.Base(path)
}
func (*pathDriver) Dir(path string) string {
return filepath.Dir(path)
}
func (*pathDriver) Clean(path string) string {
return filepath.Clean(path)
}
func (*pathDriver) Split(path string) (dir, file string) {
return filepath.Split(path)
}
func (*pathDriver) Separator() byte {
return filepath.Separator
}
func (*pathDriver) Abs(path string) (string, error) {
return filepath.Abs(path)
}
// Note that filepath.Walk calls os.Stat, so if the context wants to
// call Driver.Stat() for Walk, it needs to create a new struct that
// overrides this method.
func (*pathDriver) Walk(root string, walkFn filepath.WalkFunc) error {
return filepath.Walk(root, walkFn)
}
func (*pathDriver) FromSlash(path string) string {
return filepath.FromSlash(path)
}
func (*pathDriver) ToSlash(path string) string {
return filepath.ToSlash(path)
}
func (*pathDriver) Match(pattern, name string) (bool, error) {
return filepath.Match(pattern, name)
}
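A short usage sketch for the interface above, assuming the vendored import path `github.com/containerd/continuity/pathdriver`; the `resolve` helper is hypothetical, but it shows why callers go through a `PathDriver` rather than calling `filepath` directly (a non-local driver can be swapped in without touching call sites):
```go
package main

import (
	"fmt"

	"github.com/containerd/continuity/pathdriver"
)

// resolve joins path components through a PathDriver and returns a
// cleaned absolute path, never touching the filepath package itself.
func resolve(d pathdriver.PathDriver, base string, parts ...string) (string, error) {
	p := d.Join(append([]string{base}, parts...)...)
	if !d.IsAbs(p) {
		return d.Abs(p)
	}
	return d.Clean(p), nil
}

func main() {
	p, err := resolve(pathdriver.LocalPathDriver, "/var/lib", "containers", "storage")
	fmt.Println(p, err) // /var/lib/containers/storage <nil>
}
```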

13
vendor/github.com/containerd/continuity/vendor.conf generated vendored Normal file
View File

@@ -0,0 +1,13 @@
bazil.org/fuse 371fbbdaa8987b715bdd21d6adc4c9b20155f748
github.com/dustin/go-humanize bb3d318650d48840a39aa21a027c6630e198e626
github.com/golang/protobuf 1e59b77b52bf8e4b449a57e6f79f21226d571845
github.com/inconshreveable/mousetrap 76626ae9c91c4f2a10f34cad8ce83ea42c93bb75
github.com/opencontainers/go-digest 279bed98673dd5bef374d3b6e4b09e2af76183bf
github.com/pkg/errors f15c970de5b76fac0b59abb32d62c17cc7bed265
github.com/sirupsen/logrus 89742aefa4b206dcf400792f3bd35b542998eb3b
github.com/spf13/cobra 2da4a54c5ceefcee7ca5dd0eea1e18a3b6366489
github.com/spf13/pflag 4c012f6dcd9546820e378d0bdda4d8fc772cdfea
golang.org/x/crypto 9f005a07e0d31d45e6656d241bb5c0f2efd4bc94
golang.org/x/net a337091b0525af65de94df2eb7e98bd9962dcbe2
golang.org/x/sync 450f422ab23cf9881c94e2db30cac0eb1b7cf80c
golang.org/x/sys 665f6529cca930e27b831a0d1dafffbe1c172924

201
vendor/github.com/containers/buildah/LICENSE generated vendored Normal file
View File

@@ -0,0 +1,201 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "{}"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright {yyyy} {name of copyright owner}
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

129
vendor/github.com/containers/buildah/README.md generated vendored Normal file
View File

@@ -0,0 +1,129 @@
![buildah logo](https://cdn.rawgit.com/containers/buildah/master/logos/buildah-logo_large.png)
# [Buildah](https://www.youtube.com/embed/YVk5NgSiUw8) - a tool that facilitates building [Open Container Initiative (OCI)](https://www.opencontainers.org/) container images
[![Go Report Card](https://goreportcard.com/badge/github.com/containers/buildah)](https://goreportcard.com/report/github.com/containers/buildah)
[![Travis](https://travis-ci.org/containers/buildah.svg?branch=master)](https://travis-ci.org/containers/buildah)
The Buildah package provides a command line tool that can be used to
* create a working container, either from scratch or using an image as a starting point
* create an image, either from a working container or via the instructions in a Dockerfile
* images can be built in either the OCI image format or the traditional upstream docker image format
* mount a working container's root filesystem for manipulation
* unmount a working container's root filesystem
* use the updated contents of a container's root filesystem as a filesystem layer to create a new image
* delete a working container or an image
* rename a local container
## Buildah Information for Developers
For blogs, release announcements and more, please check out the [buildah.io](https://buildah.io) website!
**[Buildah Demos](demos)**
**[Changelog](CHANGELOG.md)**
**[Contributing](CONTRIBUTING.md)**
**[Development Plan](developmentplan.md)**
**[Installation notes](install.md)**
**[Troubleshooting Guide](troubleshooting.md)**
**[Tutorials](docs/tutorials)**
## Buildah and Podman relationship
Buildah and Podman are two complementary open-source projects that are
available on most Linux platforms and both projects reside at
[GitHub.com](https://github.com) with Buildah
[here](https://github.com/containers/buildah) and Podman
[here](https://github.com/containers/libpod). Both Buildah and Podman are
command line tools that work on Open Container Initiative (OCI) images and
containers. The two projects differ in their specialization.
Buildah specializes in building OCI images. Buildah's commands replicate all
of the commands that are found in a Dockerfile. This allows building images
with and without Dockerfiles while not requiring any root privileges.
Buildah's ultimate goal is to provide a lower-level coreutils interface to
build images. The flexibility of building images without Dockerfiles allows
for the integration of other scripting languages into the build process.
Buildah follows a simple fork-exec model and does not run as a daemon
but it is based on a comprehensive API in golang, which can be vendored
into other tools.
Podman specializes in all of the commands and functions that help you to maintain and modify
OCI images, such as pulling and tagging. It also allows you to create, run, and maintain those containers
created from those images.
A major difference between Podman and Buildah is their concept of a container. Podman
allows users to create "traditional containers" that are intended to be long-lived,
while Buildah containers are really just created to allow content
to be added back to the container image. An easy way to think of it is that the
`buildah run` command emulates the RUN command in a Dockerfile, while the `podman run`
command emulates the `docker run` command in functionality. Because of this and their underlying
storage differences, you cannot see Podman containers from within Buildah or vice versa.
In short, Buildah is an efficient way to create OCI images while Podman allows
you to manage and maintain those images and containers in a production environment using
familiar container CLI commands. For more details, see the
[Container Tools Guide](https://github.com/containers/buildah/tree/master/docs/containertools).
## Example
From [`./examples/lighttpd.sh`](examples/lighttpd.sh):
```bash
$ cat > lighttpd.sh <<"EOF"
#!/bin/bash -x
ctr1=$(buildah from "${1:-fedora}")
## Get all updates and install our minimal httpd server
buildah run "$ctr1" -- dnf update -y
buildah run "$ctr1" -- dnf install -y lighttpd
## Include some buildtime annotations
buildah config --annotation "com.example.build.host=$(uname -n)" "$ctr1"
## Run our server and expose the port
buildah config --cmd "/usr/sbin/lighttpd -D -f /etc/lighttpd/lighttpd.conf" "$ctr1"
buildah config --port 80 "$ctr1"
## Commit this container to an image name
buildah commit "$ctr1" "${2:-$USER/lighttpd}"
EOF
$ chmod +x lighttpd.sh
$ sudo ./lighttpd.sh
```
## Commands
| Command | Description |
| ---------------------------------------------------- | ---------------------------------------------------------------------------------------------------- |
| [buildah-add(1)](/docs/buildah-add.md) | Add the contents of a file, URL, or a directory to the container. |
| [buildah-bud(1)](/docs/buildah-bud.md) | Build an image using instructions from Dockerfiles. |
| [buildah-commit(1)](/docs/buildah-commit.md) | Create an image from a working container. |
| [buildah-config(1)](/docs/buildah-config.md) | Update image configuration settings. |
| [buildah-containers(1)](/docs/buildah-containers.md) | List the working containers and their base images. |
| [buildah-copy(1)](/docs/buildah-copy.md) | Copies the contents of a file, URL, or directory into a container's working directory. |
| [buildah-from(1)](/docs/buildah-from.md) | Creates a new working container, either from scratch or using a specified image as a starting point. |
| [buildah-images(1)](/docs/buildah-images.md) | List images in local storage. |
| [buildah-info(1)](/docs/buildah-info.md) | Display Buildah system information. |
| [buildah-inspect(1)](/docs/buildah-inspect.md) | Inspects the configuration of a container or image. |
| [buildah-mount(1)](/docs/buildah-mount.md) | Mount the working container's root filesystem. |
| [buildah-pull(1)](/docs/buildah-pull.md) | Pull an image from the specified location. |
| [buildah-push(1)](/docs/buildah-push.md) | Push an image from local storage to elsewhere. |
| [buildah-rename(1)](/docs/buildah-rename.md) | Rename a local container. |
| [buildah-rm(1)](/docs/buildah-rm.md) | Removes one or more working containers. |
| [buildah-rmi(1)](/docs/buildah-rmi.md) | Removes one or more images. |
| [buildah-run(1)](/docs/buildah-run.md) | Run a command inside of the container. |
| [buildah-tag(1)](/docs/buildah-tag.md) | Add an additional name to a local image. |
| [buildah-umount(1)](/docs/buildah-umount.md) | Unmount a working container's root file system. |
| [buildah-unshare(1)](/docs/buildah-unshare.md) | Launch a command in a user namespace with modified ID mappings. |
| [buildah-version(1)](/docs/buildah-version.md) | Display the Buildah version information. |
**Future goals include:**
* more CI tests
* additional CLI commands (?)

View File

@@ -0,0 +1,287 @@
#define _GNU_SOURCE
#include <sys/types.h>
#include <sys/ioctl.h>
#include <sys/stat.h>
#include <sys/syscall.h>
#include <sys/mman.h>
#include <fcntl.h>
#include <grp.h>
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <termios.h>
#include <errno.h>
#include <unistd.h>
/* Open Source projects like conda-forge want to package podman and are based
off of centos:6. conda-forge has minimal libc requirements and lacks
the memfd.h header, so we use mman.h instead.
*/
#ifndef MFD_ALLOW_SEALING
#define MFD_ALLOW_SEALING 2U
#endif
#ifndef MFD_CLOEXEC
#define MFD_CLOEXEC 1U
#endif
#ifndef F_LINUX_SPECIFIC_BASE
#define F_LINUX_SPECIFIC_BASE 1024
#endif
#ifndef F_ADD_SEALS
#define F_ADD_SEALS (F_LINUX_SPECIFIC_BASE + 9)
#define F_GET_SEALS (F_LINUX_SPECIFIC_BASE + 10)
#endif
#ifndef F_SEAL_SEAL
#define F_SEAL_SEAL 0x0001LU
#endif
#ifndef F_SEAL_SHRINK
#define F_SEAL_SHRINK 0x0002LU
#endif
#ifndef F_SEAL_GROW
#define F_SEAL_GROW 0x0004LU
#endif
#ifndef F_SEAL_WRITE
#define F_SEAL_WRITE 0x0008LU
#endif
#define BUFSTEP 1024
static const char *_max_user_namespaces = "/proc/sys/user/max_user_namespaces";
static const char *_unprivileged_user_namespaces = "/proc/sys/kernel/unprivileged_userns_clone";
static int _containers_unshare_parse_envint(const char *envname) {
char *p, *q;
long l;
p = getenv(envname);
if (p == NULL) {
return -1;
}
q = NULL;
l = strtol(p, &q, 10);
if ((q == NULL) || (*q != '\0')) {
fprintf(stderr, "Error parsing \"%s\"=\"%s\"!\n", envname, p);
_exit(1);
}
unsetenv(envname);
return l;
}
static void _check_proc_sys_file(const char *path)
{
FILE *fp;
char buf[32];
size_t n_read;
long r;
fp = fopen(path, "r");
if (fp == NULL) {
if (errno != ENOENT)
fprintf(stderr, "Error reading %s: %m\n", _max_user_namespaces);
} else {
memset(buf, 0, sizeof(buf));
n_read = fread(buf, 1, sizeof(buf) - 1, fp);
if (n_read > 0) {
r = atoi(buf);
if (r == 0) {
fprintf(stderr, "User namespaces are not enabled in %s.\n", path);
}
} else {
fprintf(stderr, "Error reading %s: no contents, should contain a number greater than 0.\n", path);
}
fclose(fp);
}
}
static char **parse_proc_stringlist(const char *list) {
int fd, n, i, n_strings;
char *buf, *new_buf, **ret;
size_t size, new_size, used;
fd = open(list, O_RDONLY);
if (fd == -1) {
return NULL;
}
buf = NULL;
size = 0;
used = 0;
for (;;) {
new_size = used + BUFSTEP;
new_buf = realloc(buf, new_size);
if (new_buf == NULL) {
free(buf);
fprintf(stderr, "realloc(%ld): out of memory\n", (long)(size + BUFSTEP));
return NULL;
}
buf = new_buf;
size = new_size;
memset(buf + used, '\0', size - used);
n = read(fd, buf + used, size - used - 1);
if (n < 0) {
fprintf(stderr, "read(): %m\n");
return NULL;
}
if (n == 0) {
break;
}
used += n;
}
close(fd);
n_strings = 0;
for (n = 0; n < used; n++) {
if ((n == 0) || (buf[n-1] == '\0')) {
n_strings++;
}
}
ret = calloc(n_strings + 1, sizeof(char *));
if (ret == NULL) {
fprintf(stderr, "calloc(): out of memory\n");
return NULL;
}
i = 0;
for (n = 0; n < used; n++) {
if ((n == 0) || (buf[n-1] == '\0')) {
ret[i++] = &buf[n];
}
}
ret[i] = NULL;
return ret;
}
static int containers_reexec(void) {
char **argv, *exename;
int fd, mmfd, n_read, n_written;
struct stat st;
char buf[2048];
argv = parse_proc_stringlist("/proc/self/cmdline");
if (argv == NULL) {
return -1;
}
fd = open("/proc/self/exe", O_RDONLY | O_CLOEXEC);
if (fd == -1) {
fprintf(stderr, "open(\"/proc/self/exe\"): %m\n");
return -1;
}
if (fstat(fd, &st) == -1) {
fprintf(stderr, "fstat(\"/proc/self/exe\"): %m\n");
return -1;
}
exename = basename(argv[0]);
mmfd = syscall(SYS_memfd_create, exename, (long) MFD_ALLOW_SEALING | MFD_CLOEXEC);
if (mmfd == -1) {
fprintf(stderr, "memfd_create(): %m\n");
return -1;
}
for (;;) {
n_read = read(fd, buf, sizeof(buf));
if (n_read < 0) {
fprintf(stderr, "read(\"/proc/self/exe\"): %m\n");
return -1;
}
if (n_read == 0) {
break;
}
n_written = write(mmfd, buf, n_read);
if (n_written < 0) {
fprintf(stderr, "write(anonfd): %m\n");
return -1;
}
if (n_written != n_read) {
fprintf(stderr, "write(anonfd): short write (%d != %d)\n", n_written, n_read);
return -1;
}
}
close(fd);
if (fcntl(mmfd, F_ADD_SEALS, F_SEAL_SHRINK | F_SEAL_GROW | F_SEAL_WRITE | F_SEAL_SEAL) == -1) {
close(mmfd);
fprintf(stderr, "Error sealing memfd copy: %m\n");
return -1;
}
if (fexecve(mmfd, argv, environ) == -1) {
close(mmfd);
fprintf(stderr, "Error during reexec(...): %m\n");
return -1;
}
return 0;
}
void _containers_unshare(void)
{
int flags, pidfd, continuefd, n, pgrp, sid, ctty;
char buf[2048];
flags = _containers_unshare_parse_envint("_Containers-unshare");
if (flags == -1) {
return;
}
if ((flags & CLONE_NEWUSER) != 0) {
if (unshare(CLONE_NEWUSER) == -1) {
fprintf(stderr, "Error during unshare(CLONE_NEWUSER): %m\n");
_check_proc_sys_file (_max_user_namespaces);
_check_proc_sys_file (_unprivileged_user_namespaces);
_exit(1);
}
}
pidfd = _containers_unshare_parse_envint("_Containers-pid-pipe");
if (pidfd != -1) {
snprintf(buf, sizeof(buf), "%llu", (unsigned long long) getpid());
size_t size = write(pidfd, buf, strlen(buf));
if (size != strlen(buf)) {
fprintf(stderr, "Error writing PID to pipe on fd %d: %m\n", pidfd);
_exit(1);
}
close(pidfd);
}
continuefd = _containers_unshare_parse_envint("_Containers-continue-pipe");
if (continuefd != -1) {
n = read(continuefd, buf, sizeof(buf));
if (n > 0) {
fprintf(stderr, "Error: %.*s\n", n, buf);
_exit(1);
}
close(continuefd);
}
sid = _containers_unshare_parse_envint("_Containers-setsid");
if (sid == 1) {
if (setsid() == -1) {
fprintf(stderr, "Error during setsid: %m\n");
_exit(1);
}
}
pgrp = _containers_unshare_parse_envint("_Containers-setpgrp");
if (pgrp == 1) {
if (setpgrp() == -1) {
fprintf(stderr, "Error during setpgrp: %m\n");
_exit(1);
}
}
ctty = _containers_unshare_parse_envint("_Containers-ctty");
if (ctty != -1) {
if (ioctl(ctty, TIOCSCTTY, 0) == -1) {
fprintf(stderr, "Error while setting controlling terminal to %d: %m\n", ctty);
_exit(1);
}
}
if ((flags & CLONE_NEWUSER) != 0) {
if (setresgid(0, 0, 0) != 0) {
fprintf(stderr, "Error during setresgid(0): %m\n");
_exit(1);
}
if (setresuid(0, 0, 0) != 0) {
fprintf(stderr, "Error during setresuid(0): %m\n");
_exit(1);
}
}
if ((flags & ~CLONE_NEWUSER) != 0) {
if (unshare(flags & ~CLONE_NEWUSER) == -1) {
fprintf(stderr, "Error during unshare(...): %m\n");
_exit(1);
}
}
if (containers_reexec() != 0) {
_exit(1);
}
return;
}

View File

@@ -0,0 +1,572 @@
// +build linux
package unshare
import (
"bufio"
"bytes"
"fmt"
"io"
"os"
"os/exec"
"os/user"
"runtime"
"strconv"
"strings"
"sync"
"syscall"
"github.com/containers/storage/pkg/idtools"
"github.com/containers/storage/pkg/reexec"
"github.com/opencontainers/runtime-spec/specs-go"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
"github.com/syndtr/gocapability/capability"
)
// Cmd wraps an exec.Cmd created by the reexec package in unshare(), and
// handles setting ID maps and other related settings by triggering
// initialization code in the child.
type Cmd struct {
*exec.Cmd
UnshareFlags int
UseNewuidmap bool
UidMappings []specs.LinuxIDMapping
UseNewgidmap bool
GidMappings []specs.LinuxIDMapping
GidMappingsEnableSetgroups bool
Setsid bool
Setpgrp bool
Ctty *os.File
OOMScoreAdj *int
Hook func(pid int) error
}
// Command creates a new Cmd which can be customized.
func Command(args ...string) *Cmd {
cmd := reexec.Command(args...)
return &Cmd{
Cmd: cmd,
}
}
func (c *Cmd) Start() error {
runtime.LockOSThread()
defer runtime.UnlockOSThread()
// Set an environment variable to tell the child to synchronize its startup.
if c.Env == nil {
c.Env = os.Environ()
}
c.Env = append(c.Env, fmt.Sprintf("_Containers-unshare=%d", c.UnshareFlags))
// Set the environment variables that the libpod "rootless" package expects to find.
if os.Geteuid() != 0 {
c.Env = append(c.Env, "_CONTAINERS_USERNS_CONFIGURED=done")
c.Env = append(c.Env, fmt.Sprintf("_CONTAINERS_ROOTLESS_UID=%d", os.Geteuid()))
c.Env = append(c.Env, fmt.Sprintf("_CONTAINERS_ROOTLESS_GID=%d", os.Getegid()))
}
// Create the pipe for reading the child's PID.
pidRead, pidWrite, err := os.Pipe()
if err != nil {
return errors.Wrapf(err, "error creating pid pipe")
}
c.Env = append(c.Env, fmt.Sprintf("_Containers-pid-pipe=%d", len(c.ExtraFiles)+3))
c.ExtraFiles = append(c.ExtraFiles, pidWrite)
// Create the pipe for letting the child know to proceed.
continueRead, continueWrite, err := os.Pipe()
if err != nil {
pidRead.Close()
pidWrite.Close()
return errors.Wrapf(err, "error creating continue pipe")
}
c.Env = append(c.Env, fmt.Sprintf("_Containers-continue-pipe=%d", len(c.ExtraFiles)+3))
c.ExtraFiles = append(c.ExtraFiles, continueRead)
// Pass along other instructions.
if c.Setsid {
c.Env = append(c.Env, "_Containers-setsid=1")
}
if c.Setpgrp {
c.Env = append(c.Env, "_Containers-setpgrp=1")
}
if c.Ctty != nil {
c.Env = append(c.Env, fmt.Sprintf("_Containers-ctty=%d", len(c.ExtraFiles)+3))
c.ExtraFiles = append(c.ExtraFiles, c.Ctty)
}
// Make sure we clean up our pipes.
defer func() {
if pidRead != nil {
pidRead.Close()
}
if pidWrite != nil {
pidWrite.Close()
}
if continueRead != nil {
continueRead.Close()
}
if continueWrite != nil {
continueWrite.Close()
}
}()
// Start the new process.
err = c.Cmd.Start()
if err != nil {
return err
}
// Close the ends of the pipes that the parent doesn't need.
continueRead.Close()
continueRead = nil
pidWrite.Close()
pidWrite = nil
// Read the child's PID from the pipe.
pidString := ""
b := new(bytes.Buffer)
io.Copy(b, pidRead)
pidString = b.String()
pid, err := strconv.Atoi(pidString)
if err != nil {
fmt.Fprintf(continueWrite, "error parsing PID %q: %v", pidString, err)
return errors.Wrapf(err, "error parsing PID %q", pidString)
}
pidString = fmt.Sprintf("%d", pid)
// If we created a new user namespace, set any specified mappings.
if c.UnshareFlags&syscall.CLONE_NEWUSER != 0 {
// Always set "setgroups".
setgroups, err := os.OpenFile(fmt.Sprintf("/proc/%s/setgroups", pidString), os.O_TRUNC|os.O_WRONLY, 0)
if err != nil {
fmt.Fprintf(continueWrite, "error opening setgroups: %v", err)
return errors.Wrapf(err, "error opening /proc/%s/setgroups", pidString)
}
defer setgroups.Close()
if c.GidMappingsEnableSetgroups {
if _, err := fmt.Fprintf(setgroups, "allow"); err != nil {
fmt.Fprintf(continueWrite, "error writing \"allow\" to setgroups: %v", err)
return errors.Wrapf(err, "error opening \"allow\" to /proc/%s/setgroups", pidString)
}
} else {
if _, err := fmt.Fprintf(setgroups, "deny"); err != nil {
fmt.Fprintf(continueWrite, "error writing \"deny\" to setgroups: %v", err)
return errors.Wrapf(err, "error writing \"deny\" to /proc/%s/setgroups", pidString)
}
}
if len(c.UidMappings) == 0 || len(c.GidMappings) == 0 {
uidmap, gidmap, err := GetHostIDMappings("")
if err != nil {
fmt.Fprintf(continueWrite, "error reading ID mappings in parent: %v", err)
return errors.Wrapf(err, "error reading ID mappings in parent")
}
if len(c.UidMappings) == 0 {
c.UidMappings = uidmap
for i := range c.UidMappings {
c.UidMappings[i].HostID = c.UidMappings[i].ContainerID
}
}
if len(c.GidMappings) == 0 {
c.GidMappings = gidmap
for i := range c.GidMappings {
c.GidMappings[i].HostID = c.GidMappings[i].ContainerID
}
}
}
if len(c.GidMappings) > 0 {
// Build the GID map, since writing to the proc file has to be done all at once.
g := new(bytes.Buffer)
for _, m := range c.GidMappings {
fmt.Fprintf(g, "%d %d %d\n", m.ContainerID, m.HostID, m.Size)
}
gidmapSet := false
// Set the GID map.
if c.UseNewgidmap {
cmd := exec.Command("newgidmap", append([]string{pidString}, strings.Fields(strings.Replace(g.String(), "\n", " ", -1))...)...)
g.Reset()
cmd.Stdout = g
cmd.Stderr = g
err := cmd.Run()
if err == nil {
gidmapSet = true
} else {
logrus.Warnf("error running newgidmap: %v: %s", err, g.String())
logrus.Warnf("falling back to single mapping")
g.Reset()
g.Write([]byte(fmt.Sprintf("0 %d 1\n", os.Getegid())))
}
}
if !gidmapSet {
if c.UseNewgidmap {
setgroups, err := os.OpenFile(fmt.Sprintf("/proc/%s/setgroups", pidString), os.O_TRUNC|os.O_WRONLY, 0)
if err != nil {
fmt.Fprintf(continueWrite, "error opening /proc/%s/setgroups: %v", pidString, err)
return errors.Wrapf(err, "error opening /proc/%s/setgroups", pidString)
}
defer setgroups.Close()
if _, err := fmt.Fprintf(setgroups, "deny"); err != nil {
fmt.Fprintf(continueWrite, "error writing 'deny' to /proc/%s/setgroups: %v", pidString, err)
return errors.Wrapf(err, "error writing 'deny' to /proc/%s/setgroups", pidString)
}
}
gidmap, err := os.OpenFile(fmt.Sprintf("/proc/%s/gid_map", pidString), os.O_TRUNC|os.O_WRONLY, 0)
if err != nil {
fmt.Fprintf(continueWrite, "error opening /proc/%s/gid_map: %v", pidString, err)
return errors.Wrapf(err, "error opening /proc/%s/gid_map", pidString)
}
defer gidmap.Close()
if _, err := fmt.Fprintf(gidmap, "%s", g.String()); err != nil {
fmt.Fprintf(continueWrite, "error writing %q to /proc/%s/gid_map: %v", g.String(), pidString, err)
return errors.Wrapf(err, "error writing %q to /proc/%s/gid_map", g.String(), pidString)
}
}
}
if len(c.UidMappings) > 0 {
// Build the UID map, since writing to the proc file has to be done all at once.
u := new(bytes.Buffer)
for _, m := range c.UidMappings {
fmt.Fprintf(u, "%d %d %d\n", m.ContainerID, m.HostID, m.Size)
}
uidmapSet := false
// Set the UID map.
if c.UseNewuidmap {
cmd := exec.Command("newuidmap", append([]string{pidString}, strings.Fields(strings.Replace(u.String(), "\n", " ", -1))...)...)
u.Reset()
cmd.Stdout = u
cmd.Stderr = u
err := cmd.Run()
if err == nil {
uidmapSet = true
} else {
logrus.Warnf("error running newuidmap: %v: %s", err, u.String())
logrus.Warnf("falling back to single mapping")
u.Reset()
u.Write([]byte(fmt.Sprintf("0 %d 1\n", os.Geteuid())))
}
}
if !uidmapSet {
uidmap, err := os.OpenFile(fmt.Sprintf("/proc/%s/uid_map", pidString), os.O_TRUNC|os.O_WRONLY, 0)
if err != nil {
fmt.Fprintf(continueWrite, "error opening /proc/%s/uid_map: %v", pidString, err)
return errors.Wrapf(err, "error opening /proc/%s/uid_map", pidString)
}
defer uidmap.Close()
if _, err := fmt.Fprintf(uidmap, "%s", u.String()); err != nil {
fmt.Fprintf(continueWrite, "error writing %q to /proc/%s/uid_map: %v", u.String(), pidString, err)
return errors.Wrapf(err, "error writing %q to /proc/%s/uid_map", u.String(), pidString)
}
}
}
}
if c.OOMScoreAdj != nil {
oomScoreAdj, err := os.OpenFile(fmt.Sprintf("/proc/%s/oom_score_adj", pidString), os.O_TRUNC|os.O_WRONLY, 0)
if err != nil {
fmt.Fprintf(continueWrite, "error opening oom_score_adj: %v", err)
return errors.Wrapf(err, "error opening /proc/%s/oom_score_adj", pidString)
}
defer oomScoreAdj.Close()
if _, err := fmt.Fprintf(oomScoreAdj, "%d\n", *c.OOMScoreAdj); err != nil {
fmt.Fprintf(continueWrite, "error writing \"%d\" to oom_score_adj: %v", *c.OOMScoreAdj, err)
return errors.Wrapf(err, "error writing \"%d\" to /proc/%s/oom_score_adj", *c.OOMScoreAdj, pidString)
}
}
// Run any additional setup that we want to do before the child starts running proper.
if c.Hook != nil {
if err = c.Hook(pid); err != nil {
fmt.Fprintf(continueWrite, "hook error: %v", err)
return err
}
}
return nil
}
func (c *Cmd) Run() error {
if err := c.Start(); err != nil {
return err
}
return c.Wait()
}
func (c *Cmd) CombinedOutput() ([]byte, error) {
return nil, errors.New("unshare: CombinedOutput() not implemented")
}
func (c *Cmd) Output() ([]byte, error) {
return nil, errors.New("unshare: Output() not implemented")
}
var (
isRootlessOnce sync.Once
isRootless bool
)
const (
// UsernsEnvName is the environment variable that, when set, indicates that we are running in rootless mode.
UsernsEnvName = "_CONTAINERS_USERNS_CONFIGURED"
)
// IsRootless tells us if we are running in rootless mode
func IsRootless() bool {
isRootlessOnce.Do(func() {
isRootless = os.Geteuid() != 0 || os.Getenv(UsernsEnvName) != ""
})
return isRootless
}
// GetRootlessUID returns the UID of the user in the parent userNS
func GetRootlessUID() int {
uidEnv := os.Getenv("_CONTAINERS_ROOTLESS_UID")
if uidEnv != "" {
u, _ := strconv.Atoi(uidEnv)
return u
}
return os.Getuid()
}
// RootlessEnv returns the environment settings for the rootless containers
func RootlessEnv() []string {
return append(os.Environ(), UsernsEnvName+"=done")
}
type Runnable interface {
Run() error
}
func bailOnError(err error, format string, a ...interface{}) {
if err != nil {
if format != "" {
logrus.Errorf("%s: %v", fmt.Sprintf(format, a...), err)
} else {
logrus.Errorf("%v", err)
}
os.Exit(1)
}
}
// MaybeReexecUsingUserNamespace re-execs the process in a new user namespace.
func MaybeReexecUsingUserNamespace(evenForRoot bool) {
// If we've already been through this once, no need to try again.
if os.Geteuid() == 0 && IsRootless() {
return
}
var uidNum, gidNum uint64
// Figure out who we are.
me, err := user.Current()
if !os.IsNotExist(err) {
bailOnError(err, "error determining current user")
uidNum, err = strconv.ParseUint(me.Uid, 10, 32)
bailOnError(err, "error parsing current UID %s", me.Uid)
gidNum, err = strconv.ParseUint(me.Gid, 10, 32)
bailOnError(err, "error parsing current GID %s", me.Gid)
}
runtime.LockOSThread()
defer runtime.UnlockOSThread()
// ID mappings to use to reexec ourselves.
var uidmap, gidmap []specs.LinuxIDMapping
if uidNum != 0 || evenForRoot {
// Read the set of ID mappings that we're allowed to use. Each
// range in /etc/subuid and /etc/subgid file is a starting host
// ID and a range size.
uidmap, gidmap, err = GetSubIDMappings(me.Username, me.Username)
if err != nil {
logrus.Warnf("error reading allowed ID mappings: %v", err)
}
if len(uidmap) == 0 {
logrus.Warnf("Found no UID ranges set aside for user %q in /etc/subuid.", me.Username)
}
if len(gidmap) == 0 {
logrus.Warnf("Found no GID ranges set aside for user %q in /etc/subgid.", me.Username)
}
// Map our UID and GID, then the subuid and subgid ranges,
// consecutively, starting at 0, to get the mappings to use for
// a copy of ourselves.
uidmap = append([]specs.LinuxIDMapping{{HostID: uint32(uidNum), ContainerID: 0, Size: 1}}, uidmap...)
gidmap = append([]specs.LinuxIDMapping{{HostID: uint32(gidNum), ContainerID: 0, Size: 1}}, gidmap...)
var rangeStart uint32
for i := range uidmap {
uidmap[i].ContainerID = rangeStart
rangeStart += uidmap[i].Size
}
rangeStart = 0
for i := range gidmap {
gidmap[i].ContainerID = rangeStart
rangeStart += gidmap[i].Size
}
} else {
// If we have CAP_SYS_ADMIN, then we don't need to create a new namespace in order to be able
// to use unshare(), so don't bother creating a new user namespace at this point.
capabilities, err := capability.NewPid(0)
bailOnError(err, "error reading the current capabilities sets")
if capabilities.Get(capability.EFFECTIVE, capability.CAP_SYS_ADMIN) {
return
}
// Read the set of ID mappings that we're currently using.
uidmap, gidmap, err = GetHostIDMappings("")
bailOnError(err, "error reading current ID mappings")
// Just reuse them.
for i := range uidmap {
uidmap[i].HostID = uidmap[i].ContainerID
}
for i := range gidmap {
gidmap[i].HostID = gidmap[i].ContainerID
}
}
// Unlike most uses of reexec or unshare, we're using a name that
// _won't_ be recognized as a registered reexec handler, since we
// _want_ to fall through reexec.Init() to the normal main().
cmd := Command(append([]string{fmt.Sprintf("%s-in-a-user-namespace", os.Args[0])}, os.Args[1:]...)...)
// If, somehow, we don't become UID 0 in our child, indicate that the child shouldn't try again.
err = os.Setenv(UsernsEnvName, "1")
bailOnError(err, "error setting %s=1 in environment", UsernsEnvName)
// Set the default isolation type to use the "rootless" method.
if _, present := os.LookupEnv("BUILDAH_ISOLATION"); !present {
if err = os.Setenv("BUILDAH_ISOLATION", "rootless"); err != nil {
if err := os.Setenv("BUILDAH_ISOLATION", "rootless"); err != nil {
logrus.Errorf("error setting BUILDAH_ISOLATION=rootless in environment: %v", err)
os.Exit(1)
}
}
}
// Reuse our stdio.
cmd.Stdin = os.Stdin
cmd.Stdout = os.Stdout
cmd.Stderr = os.Stderr
// Set up a new user namespace with the ID mapping.
cmd.UnshareFlags = syscall.CLONE_NEWUSER | syscall.CLONE_NEWNS
cmd.UseNewuidmap = uidNum != 0
cmd.UidMappings = uidmap
cmd.UseNewgidmap = uidNum != 0
cmd.GidMappings = gidmap
cmd.GidMappingsEnableSetgroups = true
// Finish up.
logrus.Debugf("running %+v with environment %+v, UID map %+v, and GID map %+v", cmd.Cmd.Args, os.Environ(), cmd.UidMappings, cmd.GidMappings)
ExecRunnable(cmd)
}
// ExecRunnable runs the specified unshare command, captures its exit status,
// and exits with the same status.
func ExecRunnable(cmd Runnable) {
if err := cmd.Run(); err != nil {
if exitError, ok := errors.Cause(err).(*exec.ExitError); ok {
if exitError.ProcessState.Exited() {
if waitStatus, ok := exitError.ProcessState.Sys().(syscall.WaitStatus); ok {
if waitStatus.Exited() {
logrus.Errorf("%v", exitError)
os.Exit(waitStatus.ExitStatus())
}
if waitStatus.Signaled() {
logrus.Errorf("%v", exitError)
os.Exit(int(waitStatus.Signal()) + 128)
}
}
}
}
logrus.Errorf("%v", err)
logrus.Errorf("(unable to determine exit status)")
os.Exit(1)
}
os.Exit(0)
}
// getHostIDMappings reads mappings from the named node under /proc.
func getHostIDMappings(path string) ([]specs.LinuxIDMapping, error) {
var mappings []specs.LinuxIDMapping
f, err := os.Open(path)
if err != nil {
return nil, errors.Wrapf(err, "error reading ID mappings from %q", path)
}
defer f.Close()
scanner := bufio.NewScanner(f)
for scanner.Scan() {
line := scanner.Text()
fields := strings.Fields(line)
if len(fields) != 3 {
return nil, errors.Errorf("line %q from %q has %d fields, not 3", line, path, len(fields))
}
cid, err := strconv.ParseUint(fields[0], 10, 32)
if err != nil {
return nil, errors.Wrapf(err, "error parsing container ID value %q from line %q in %q", fields[0], line, path)
}
hid, err := strconv.ParseUint(fields[1], 10, 32)
if err != nil {
return nil, errors.Wrapf(err, "error parsing host ID value %q from line %q in %q", fields[1], line, path)
}
size, err := strconv.ParseUint(fields[2], 10, 32)
if err != nil {
return nil, errors.Wrapf(err, "error parsing size value %q from line %q in %q", fields[2], line, path)
}
mappings = append(mappings, specs.LinuxIDMapping{ContainerID: uint32(cid), HostID: uint32(hid), Size: uint32(size)})
}
return mappings, nil
}
// GetHostIDMappings reads mappings for the specified process (or the current
// process if pid is "self" or an empty string) from the kernel.
func GetHostIDMappings(pid string) ([]specs.LinuxIDMapping, []specs.LinuxIDMapping, error) {
if pid == "" {
pid = "self"
}
uidmap, err := getHostIDMappings(fmt.Sprintf("/proc/%s/uid_map", pid))
if err != nil {
return nil, nil, err
}
gidmap, err := getHostIDMappings(fmt.Sprintf("/proc/%s/gid_map", pid))
if err != nil {
return nil, nil, err
}
return uidmap, gidmap, nil
}
// GetSubIDMappings reads mappings from /etc/subuid and /etc/subgid.
func GetSubIDMappings(user, group string) ([]specs.LinuxIDMapping, []specs.LinuxIDMapping, error) {
mappings, err := idtools.NewIDMappings(user, group)
if err != nil {
return nil, nil, errors.Wrapf(err, "error reading subuid mappings for user %q and subgid mappings for group %q", user, group)
}
var uidmap, gidmap []specs.LinuxIDMapping
for _, m := range mappings.UIDs() {
uidmap = append(uidmap, specs.LinuxIDMapping{
ContainerID: uint32(m.ContainerID),
HostID: uint32(m.HostID),
Size: uint32(m.Size),
})
}
for _, m := range mappings.GIDs() {
gidmap = append(gidmap, specs.LinuxIDMapping{
ContainerID: uint32(m.ContainerID),
HostID: uint32(m.HostID),
Size: uint32(m.Size),
})
}
return uidmap, gidmap, nil
}
// ParseIDMappings parses mapping triples.
func ParseIDMappings(uidmap, gidmap []string) ([]idtools.IDMap, []idtools.IDMap, error) {
uid, err := idtools.ParseIDMap(uidmap, "userns-uid-map")
if err != nil {
return nil, nil, err
}
gid, err := idtools.ParseIDMap(gidmap, "userns-gid-map")
if err != nil {
return nil, nil, err
}
return uid, gid, nil
}
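Putting the package together: a minimal sketch of the intended call pattern, using only the exported functions shown above (the vendored import path is assumed):
```go
package main

import (
	"fmt"
	"os"

	"github.com/containers/buildah/unshare" // assumed vendored import path
)

func main() {
	// If we are not already root (or already re-executed), re-exec this
	// program inside a new user namespace with the caller's subuid/subgid
	// ranges mapped in; the child resumes here with IsRootless() == true.
	unshare.MaybeReexecUsingUserNamespace(false)

	fmt.Printf("euid=%d rootless=%v uid-in-parent=%d\n",
		os.Geteuid(), unshare.IsRootless(), unshare.GetRootlessUID())
}
```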

View File

@@ -0,0 +1,10 @@
// +build linux,cgo,!gccgo
package unshare
// #cgo CFLAGS: -Wall
// extern void _containers_unshare(void);
// void __attribute__((constructor)) init(void) {
// _containers_unshare();
// }
import "C"

View File

@@ -0,0 +1,25 @@
// +build linux,cgo,gccgo
package unshare
// #cgo CFLAGS: -Wall -Wextra
// extern void _containers_unshare(void);
// void __attribute__((constructor)) init(void) {
// _containers_unshare();
// }
import "C"
// This next bit is straight out of libcontainer.
// AlwaysFalse is here to stay false
// (and be exported so the compiler doesn't optimize out its reference)
var AlwaysFalse bool
func init() {
if AlwaysFalse {
// by referencing this C init() in a noop test, it will ensure the compiler
// links in the C function.
// https://gcc.gnu.org/bugzilla/show_bug.cgi?id=65134
C.init()
}
}

View File

@@ -0,0 +1,45 @@
// +build !linux
package unshare
import (
"os"
"github.com/containers/storage/pkg/idtools"
"github.com/opencontainers/runtime-spec/specs-go"
)
const (
// UsernsEnvName is the environment variable that, when set, indicates that we are running in rootless mode.
UsernsEnvName = "_CONTAINERS_USERNS_CONFIGURED"
)
// IsRootless tells us if we are running in rootless mode
func IsRootless() bool {
return false
}
// GetRootlessUID returns the UID of the user in the parent userNS
func GetRootlessUID() int {
return os.Getuid()
}
// RootlessEnv returns the environment settings for the rootless containers
func RootlessEnv() []string {
return append(os.Environ(), UsernsEnvName+"=")
}
// MaybeReexecUsingUserNamespace re-execs the process in a new user namespace; it is a no-op on non-Linux platforms.
func MaybeReexecUsingUserNamespace(evenForRoot bool) {
}
// GetHostIDMappings reads mappings for the specified process (or the current
// process if pid is "self" or an empty string) from the kernel.
func GetHostIDMappings(pid string) ([]specs.LinuxIDMapping, []specs.LinuxIDMapping, error) {
return nil, nil, nil
}
// ParseIDMappings parses mapping triples.
func ParseIDMappings(uidmap, gidmap []string) ([]idtools.IDMap, []idtools.IDMap, error) {
return nil, nil, nil
}

69
vendor/github.com/containers/buildah/vendor.conf generated vendored Normal file
View File

@@ -0,0 +1,69 @@
github.com/Azure/go-ansiterm d6e3b3328b783f23731bc4d058875b0371ff8109
github.com/blang/semver v3.5.0
github.com/BurntSushi/toml v0.2.0
github.com/containerd/continuity 004b46473808b3e7a4a3049c20e4376c91eb966d
github.com/containernetworking/cni v0.7.0-rc2
github.com/containers/image v2.0.0
github.com/cyphar/filepath-securejoin v0.2.1
github.com/vbauerster/mpb v3.3.4
github.com/mattn/go-isatty v0.0.4
github.com/VividCortex/ewma v1.1.1
github.com/containers/storage v1.12.10
github.com/docker/distribution 5f6282db7d65e6d72ad7c2cc66310724a57be716
github.com/docker/docker 54dddadc7d5d89fe0be88f76979f6f6ab0dede83
github.com/docker/docker-credential-helpers v0.6.1
github.com/docker/go-connections v0.4.0
github.com/docker/go-units v0.3.2
github.com/docker/libtrust aabc10ec26b754e797f9028f4589c5b7bd90dc20
github.com/docker/libnetwork 1a06131fb8a047d919f7deaf02a4c414d7884b83
github.com/fsouza/go-dockerclient v1.3.0
github.com/ghodss/yaml v1.0.0
github.com/gogo/protobuf v1.2.0
github.com/gorilla/context v1.1.1
github.com/gorilla/mux v1.6.2
github.com/hashicorp/errwrap v1.0.0
github.com/hashicorp/go-multierror v1.0.0
github.com/imdario/mergo v0.3.6
github.com/mattn/go-shellwords v1.0.3
github.com/Microsoft/go-winio v0.4.11
github.com/Microsoft/hcsshim v0.8.3
github.com/mistifyio/go-zfs v2.1.1
github.com/moby/moby f8806b18b4b92c5e1980f6e11c917fad201cd73c
github.com/mtrmac/gpgme b2432428689ca58c2b8e8dea9449d3295cf96fc9
# TODO: Gotty has not been updated since 2012. Can we find a replacement?
github.com/Nvveen/Gotty cd527374f1e5bff4938207604a14f2e38a9cf512
github.com/opencontainers/go-digest c9281466c8b2f606084ac71339773efd177436e7
github.com/opencontainers/image-spec v1.0.0
github.com/opencontainers/runc v1.0.0-rc6
github.com/opencontainers/runtime-spec v1.0.0
github.com/opencontainers/runtime-tools v0.8.0
github.com/opencontainers/selinux v1.1
github.com/openshift/imagebuilder v1.1.0
github.com/ostreedev/ostree-go 9ab99253d365aac3a330d1f7281cf29f3d22820b
github.com/pkg/errors v0.8.1
github.com/pquerna/ffjson d49c2bc1aa135aad0c6f4fc2056623ec78f5d5ac
github.com/seccomp/libseccomp-golang v0.9.0
github.com/seccomp/containers-golang v0.1
github.com/sirupsen/logrus v1.0.0
github.com/syndtr/gocapability d98352740cb2c55f81556b63d4a1ec64c5a319c2
github.com/tchap/go-patricia v2.2.6
github.com/ulikunitz/xz v0.5.5
github.com/vbatts/tar-split v0.10.2
github.com/xeipuuv/gojsonpointer 4e3ac2762d5f479393488629ee9370b50873b3a6
github.com/xeipuuv/gojsonreference bd5ef7bd5415a7ac448318e64f11a24cd21e594b
github.com/xeipuuv/gojsonschema v1.1.0
golang.org/x/crypto ff983b9c42bc9fbf91556e191cc8efb585c16908 https://github.com/golang/crypto
golang.org/x/net 45ffb0cd1ba084b73e26dee67e667e1be5acce83 https://github.com/golang/net
golang.org/x/sync 37e7f081c4d4c64e13b10787722085407fe5d15f https://github.com/golang/sync
golang.org/x/sys 7fbe1cd0fcc20051e1fcb87fbabec4a1bacaaeba https://github.com/golang/sys
golang.org/x/text e6919f6577db79269a6443b9dc46d18f2238fb5d https://github.com/golang/text
gopkg.in/yaml.v2 v2.2.2
k8s.io/client-go kubernetes-1.10.13-beta.0 https://github.com/kubernetes/client-go
github.com/klauspost/pgzip v1.2.1
github.com/klauspost/compress v1.4.1
github.com/klauspost/cpuid v1.2.0
github.com/onsi/gomega v1.4.3
github.com/spf13/cobra v0.0.3
github.com/spf13/pflag v1.0.3
github.com/ishidawataru/sctp 07191f837fedd2f13d1ec7b5f885f0f3ec54b1cb
github.com/etcd-io/bbolt v1.3.2

View File

@@ -25,7 +25,7 @@ them as necessary, and to sign and verify images.
The containers/image project is only a library with no user interface;
you can either incorporate it into your Go programs, or use the `skopeo` tool:
The [skopeo](https://github.com/projectatomic/skopeo) tool uses the
The [skopeo](https://github.com/containers/skopeo) tool uses the
containers/image library and takes advantage of many of its features,
e.g. `skopeo copy` exposes the `containers/image/copy.Image` functionality.
@@ -42,7 +42,7 @@ What this project tests against dependencies-wise is located
## Building
If you want to see what the library can do, or an example of how it is called,
consider starting with the [skopeo](https://github.com/projectatomic/skopeo) tool
consider starting with the [skopeo](https://github.com/containers/skopeo) tool
instead.
To integrate this library into your project, put it into `$GOPATH` or use
@@ -53,7 +53,7 @@ are also available
This library, by default, also depends on the GpgME and libostree C libraries. Either install them:
```sh
Fedora$ dnf install gpgme-devel libassuan-devel libostree-devel
Fedora$ dnf install gpgme-devel libassuan-devel ostree-devel
macOS$ brew install gpgme
```
or use the build tags described below to avoid the dependencies (e.g. using `go build -tags …`)
@@ -65,13 +65,17 @@ the primary downside is that creating new signatures with the Golang-only implem
- `containers_image_ostree_stub`: Instead of importing `ostree:` transport in `github.com/containers/image/transports/alltransports`, use a stub which reports that the transport is not supported. This allows building the library without requiring the `libostree` development libraries. The `github.com/containers/image/ostree` package is completely disabled
and impossible to import when this build tag is in use.
## Contributing
## [Contributing](CONTRIBUTING.md)
Information about contributing to this project.
When developing this library, please use `make` (or `make … BUILDTAGS=…`) to take advantage of the tests and validation.
## License
ASL 2.0
Apache License 2.0
SPDX-License-Identifier: Apache-2.0
## Contact

View File

@@ -2,36 +2,50 @@ package copy
import (
"bytes"
"compress/gzip"
"context"
"fmt"
"io"
"io/ioutil"
"os"
"reflect"
"runtime"
"strings"
"sync"
"time"
"github.com/containers/image/docker/reference"
"github.com/containers/image/image"
"github.com/containers/image/manifest"
"github.com/containers/image/pkg/blobinfocache"
"github.com/containers/image/pkg/compression"
"github.com/containers/image/signature"
"github.com/containers/image/transports"
"github.com/containers/image/types"
"github.com/opencontainers/go-digest"
"github.com/klauspost/pgzip"
digest "github.com/opencontainers/go-digest"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
pb "gopkg.in/cheggaaa/pb.v1"
"github.com/vbauerster/mpb"
"github.com/vbauerster/mpb/decor"
"golang.org/x/crypto/ssh/terminal"
"golang.org/x/sync/semaphore"
)
type digestingReader struct {
source io.Reader
digester digest.Digester
expectedDigest digest.Digest
validationFailed bool
source io.Reader
digester digest.Digester
expectedDigest digest.Digest
validationFailed bool
validationSucceeded bool
}
// maxParallelDownloads is used to limit the maximum number of parallel
// downloads. Let's follow Firefox by limiting it to 6.
var maxParallelDownloads = 6
// newDigestingReader returns an io.Reader implementation with contents of source, which will eventually return a non-EOF error
// and set validationFailed to true if the source stream does not match expectedDigest.
// or set validationSucceeded/validationFailed to true if the source stream does/does not match expectedDigest.
// (neither is set if EOF is never reached).
func newDigestingReader(source io.Reader, expectedDigest digest.Digest) (*digestingReader, error) {
if err := expectedDigest.Validate(); err != nil {
return nil, errors.Errorf("Invalid digest specification %s", expectedDigest)
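The digestingReader above hashes the stream as it is consumed and only decides success or failure once EOF is reached. A standalone sketch of the same pattern using go-digest (`verifyingRead` is a simplified stand-in, not the library type):
```go
package main

import (
	"bytes"
	"fmt"
	"io"
	"io/ioutil"

	digest "github.com/opencontainers/go-digest"
)

// verifyingRead drains src while hashing it, then compares the result
// against the expected digest, as digestingReader does incrementally.
func verifyingRead(src io.Reader, expected digest.Digest) ([]byte, error) {
	if err := expected.Validate(); err != nil {
		return nil, err
	}
	digester := expected.Algorithm().Digester()
	data, err := ioutil.ReadAll(io.TeeReader(src, digester.Hash()))
	if err != nil {
		return nil, err
	}
	if actual := digester.Digest(); actual != expected {
		return nil, fmt.Errorf("digest did not match, expected %s, got %s", expected, actual)
	}
	return data, nil
}

func main() {
	blob := []byte("layer contents")
	d := digest.FromBytes(blob)
	if _, err := verifyingRead(bytes.NewReader(blob), d); err != nil {
		panic(err)
	}
	fmt.Println("validated", d)
}
```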
@@ -64,6 +78,7 @@ func (d *digestingReader) Read(p []byte) (int, error) {
d.validationFailed = true
return 0, errors.Errorf("Digest did not match, expected %s, got %s", d.expectedDigest, actualDigest)
}
d.validationSucceeded = true
}
return n, err
}
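The new imports also bring in `golang.org/x/sync/semaphore`, which is how the `maxParallelDownloads = 6` cap above is enforced. A minimal sketch of that throttling pattern (the sleeping goroutine stands in for a single blob copy):
```go
package main

import (
	"context"
	"fmt"
	"time"

	"golang.org/x/sync/semaphore"
)

const maxParallelDownloads = 6 // same cap as in the diff above

func main() {
	ctx := context.Background()
	sem := semaphore.NewWeighted(maxParallelDownloads)

	for i := 0; i < 20; i++ {
		// Blocks whenever six copies are already in flight.
		if err := sem.Acquire(ctx, 1); err != nil {
			panic(err)
		}
		go func(layer int) {
			defer sem.Release(1)
			time.Sleep(10 * time.Millisecond) // stand-in for one blob copy
			fmt.Println("copied layer", layer)
		}(i)
	}
	// Acquiring the full weight waits for every in-flight copy to finish.
	if err := sem.Acquire(ctx, maxParallelDownloads); err != nil {
		panic(err)
	}
}
```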
@@ -71,22 +86,24 @@ func (d *digestingReader) Read(p []byte) (int, error) {
// copier allows us to keep track of diffID values for blobs, and other
// data shared across one or more images in a possible manifest list.
type copier struct {
copiedBlobs map[digest.Digest]digest.Digest
cachedDiffIDs map[digest.Digest]digest.Digest
dest types.ImageDestination
rawSource types.ImageSource
reportWriter io.Writer
progressOutput io.Writer
progressInterval time.Duration
progress chan types.ProgressProperties
blobInfoCache types.BlobInfoCache
copyInParallel bool
}
// imageCopier tracks state specific to a single image (possibly an item of a manifest list)
type imageCopier struct {
c *copier
manifestUpdates *types.ManifestUpdateOptions
src types.Image
diffIDsAreNeeded bool
canModifyManifest bool
c *copier
manifestUpdates *types.ManifestUpdateOptions
src types.Image
diffIDsAreNeeded bool
canModifyManifest bool
canSubstituteBlobs bool
}
// Options allows supplying non-default configuration modifying the behavior of CopyImage.
@@ -103,8 +120,9 @@ type Options struct {
}
// Image copies image from srcRef to destRef, using policyContext to validate
// source image admissibility.
func Image(policyContext *signature.PolicyContext, destRef, srcRef types.ImageReference, options *Options) (retErr error) {
// source image admissibility. It returns the manifest which was written to
// the new copy of the image.
func Image(ctx context.Context, policyContext *signature.PolicyContext, destRef, srcRef types.ImageReference, options *Options) (manifest []byte, retErr error) {
// NOTE this function uses an output parameter for the error return value.
// Setting this and returning is the ideal way to return an error.
//
@@ -120,9 +138,9 @@ func Image(policyContext *signature.PolicyContext, destRef, srcRef types.ImageRe
reportWriter = options.ReportWriter
}
dest, err := destRef.NewImageDestination(options.DestinationCtx)
dest, err := destRef.NewImageDestination(ctx, options.DestinationCtx)
if err != nil {
return errors.Wrapf(err, "Error initializing destination %s", transports.ImageName(destRef))
return nil, errors.Wrapf(err, "Error initializing destination %s", transports.ImageName(destRef))
}
defer func() {
if err := dest.Close(); err != nil {
@@ -130,9 +148,9 @@ func Image(policyContext *signature.PolicyContext, destRef, srcRef types.ImageRe
}
}()
rawSource, err := srcRef.NewImageSource(options.SourceCtx)
rawSource, err := srcRef.NewImageSource(ctx, options.SourceCtx)
if err != nil {
return errors.Wrapf(err, "Error initializing source %s", transports.ImageName(srcRef))
return nil, errors.Wrapf(err, "Error initializing source %s", transports.ImageName(srcRef))
}
defer func() {
if err := rawSource.Close(); err != nil {
@@ -140,76 +158,108 @@ func Image(policyContext *signature.PolicyContext, destRef, srcRef types.ImageRe
}
}()
// If reportWriter is not a TTY (e.g., when piping to a file), do not
// print the progress bars to avoid long and hard to parse output.
// createProgressBar() will print a single line instead.
progressOutput := reportWriter
if !isTTY(reportWriter) {
progressOutput = ioutil.Discard
}
copyInParallel := dest.HasThreadSafePutBlob() && rawSource.HasThreadSafeGetBlob()
c := &copier{
copiedBlobs: make(map[digest.Digest]digest.Digest),
cachedDiffIDs: make(map[digest.Digest]digest.Digest),
dest: dest,
rawSource: rawSource,
reportWriter: reportWriter,
progressOutput: progressOutput,
progressInterval: options.ProgressInterval,
progress: options.Progress,
copyInParallel: copyInParallel,
// FIXME? The cache is used for sources and destinations equally, but we only have a SourceCtx and DestinationCtx.
// For now, use DestinationCtx (because blob reuse changes the behavior of the destination side more); eventually
// we might want to add a separate CommonCtx — or would that be too confusing?
blobInfoCache: blobinfocache.DefaultCache(options.DestinationCtx),
}
unparsedToplevel := image.UnparsedInstance(rawSource, nil)
multiImage, err := isMultiImage(unparsedToplevel)
multiImage, err := isMultiImage(ctx, unparsedToplevel)
if err != nil {
return errors.Wrapf(err, "Error determining manifest MIME type for %s", transports.ImageName(srcRef))
return nil, errors.Wrapf(err, "Error determining manifest MIME type for %s", transports.ImageName(srcRef))
}
if !multiImage {
// The simple case: Just copy a single image.
if err := c.copyOneImage(policyContext, options, unparsedToplevel); err != nil {
return err
if manifest, err = c.copyOneImage(ctx, policyContext, options, unparsedToplevel); err != nil {
return nil, err
}
} else {
// This is a manifest list. Choose a single image and copy it.
// FIXME: Copy to destinations which support manifest lists, one image at a time.
instanceDigest, err := image.ChooseManifestInstanceFromManifestList(options.SourceCtx, unparsedToplevel)
instanceDigest, err := image.ChooseManifestInstanceFromManifestList(ctx, options.SourceCtx, unparsedToplevel)
if err != nil {
return errors.Wrapf(err, "Error choosing an image from manifest list %s", transports.ImageName(srcRef))
return nil, errors.Wrapf(err, "Error choosing an image from manifest list %s", transports.ImageName(srcRef))
}
logrus.Debugf("Source is a manifest list; copying (only) instance %s", instanceDigest)
unparsedInstance := image.UnparsedInstance(rawSource, &instanceDigest)
if err := c.copyOneImage(policyContext, options, unparsedInstance); err != nil {
return err
if manifest, err = c.copyOneImage(ctx, policyContext, options, unparsedInstance); err != nil {
return nil, err
}
}
if err := c.dest.Commit(); err != nil {
return errors.Wrap(err, "Error committing the finished image")
if err := c.dest.Commit(ctx); err != nil {
return nil, errors.Wrap(err, "Error committing the finished image")
}
return nil
return manifest, nil
}
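Since Image now takes a context.Context and returns the written manifest, caller code changes shape as well. A sketch of the new call site, assuming the usual containers/image entry points (alltransports.ParseImageName, signature.NewPolicyContext); the busybox reference, the /tmp path, and the insecureAcceptAnything policy are placeholders chosen only to keep the example self-contained.

```go
package main

import (
	"context"
	"fmt"

	"github.com/containers/image/copy"
	"github.com/containers/image/signature"
	"github.com/containers/image/transports/alltransports"
)

func main() {
	ctx := context.Background()
	srcRef, err := alltransports.ParseImageName("docker://docker.io/library/busybox:latest")
	if err != nil {
		panic(err)
	}
	destRef, err := alltransports.ParseImageName("dir:/tmp/busybox")
	if err != nil {
		panic(err)
	}
	// Accept anything: the simplest policy that lets a copy proceed.
	policy := &signature.Policy{Default: []signature.PolicyRequirement{signature.NewPRInsecureAcceptAnything()}}
	policyCtx, err := signature.NewPolicyContext(policy)
	if err != nil {
		panic(err)
	}
	defer policyCtx.Destroy()

	// The second return value is new in this rebase: the manifest that was written.
	manifestBytes, err := copy.Image(ctx, policyCtx, destRef, srcRef, &copy.Options{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("copied; wrote a %d-byte manifest\n", len(manifestBytes))
}
```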
// Image copies a single (non-manifest-list) image unparsedImage, using policyContext to validate
// source image admissibility.
func (c *copier) copyOneImage(policyContext *signature.PolicyContext, options *Options, unparsedImage *image.UnparsedImage) (retErr error) {
func (c *copier) copyOneImage(ctx context.Context, policyContext *signature.PolicyContext, options *Options, unparsedImage *image.UnparsedImage) (manifestBytes []byte, retErr error) {
// The caller is handling manifest lists; this could happen only if a manifest list contains a manifest list.
// Make sure we fail cleanly in such cases.
multiImage, err := isMultiImage(unparsedImage)
multiImage, err := isMultiImage(ctx, unparsedImage)
if err != nil {
// FIXME FIXME: How to name a reference for the sub-image?
return errors.Wrapf(err, "Error determining manifest MIME type for %s", transports.ImageName(unparsedImage.Reference()))
return nil, errors.Wrapf(err, "Error determining manifest MIME type for %s", transports.ImageName(unparsedImage.Reference()))
}
if multiImage {
return fmt.Errorf("Unexpectedly received a manifest list instead of a manifest for a single image")
return nil, fmt.Errorf("Unexpectedly received a manifest list instead of a manifest for a single image")
}
// Please keep this policy check BEFORE reading any other information about the image.
// (the multiImage check above only matches the MIME type, which we have received anyway.
// Actual parsing of anything should be deferred.)
if allowed, err := policyContext.IsRunningImageAllowed(unparsedImage); !allowed || err != nil { // Be paranoid and fail if either return value indicates so.
return errors.Wrap(err, "Source image rejected")
if allowed, err := policyContext.IsRunningImageAllowed(ctx, unparsedImage); !allowed || err != nil { // Be paranoid and fail if either return value indicates so.
return nil, errors.Wrap(err, "Source image rejected")
}
src, err := image.FromUnparsedImage(options.SourceCtx, unparsedImage)
src, err := image.FromUnparsedImage(ctx, options.SourceCtx, unparsedImage)
if err != nil {
return errors.Wrapf(err, "Error initializing image from source %s", transports.ImageName(c.rawSource.Reference()))
return nil, errors.Wrapf(err, "Error initializing image from source %s", transports.ImageName(c.rawSource.Reference()))
}
if err := checkImageDestinationForCurrentRuntimeOS(options.DestinationCtx, src, c.dest); err != nil {
return err
// If the destination is a digested reference, make a note of that, determine what digest value we're
// expecting, and check that the source manifest matches it.
destIsDigestedReference := false
if named := c.dest.Reference().DockerReference(); named != nil {
if digested, ok := named.(reference.Digested); ok {
destIsDigestedReference = true
sourceManifest, _, err := src.Manifest(ctx)
if err != nil {
return nil, errors.Wrapf(err, "Error reading manifest from source image")
}
matches, err := manifest.MatchesDigest(sourceManifest, digested.Digest())
if err != nil {
return nil, errors.Wrapf(err, "Error computing digest of source image's manifest")
}
if !matches {
return nil, errors.New("Digest of source image's manifest would not match destination reference")
}
}
}
if err := checkImageDestinationForCurrentRuntimeOS(ctx, options.DestinationCtx, src, c.dest); err != nil {
return nil, err
}
var sigs [][]byte
@@ -217,16 +267,16 @@ func (c *copier) copyOneImage(policyContext *signature.PolicyContext, options *O
sigs = [][]byte{}
} else {
c.Printf("Getting image source signatures\n")
s, err := src.Signatures(context.TODO())
s, err := src.Signatures(ctx)
if err != nil {
return errors.Wrap(err, "Error reading signatures")
return nil, errors.Wrap(err, "Error reading signatures")
}
sigs = s
}
if len(sigs) != 0 {
c.Printf("Checking if image destination supports signatures\n")
if err := c.dest.SupportsSignatures(); err != nil {
return errors.Wrap(err, "Can not copy signatures")
if err := c.dest.SupportsSignatures(ctx); err != nil {
return nil, errors.Wrap(err, "Can not copy signatures")
}
}
@@ -235,32 +285,39 @@ func (c *copier) copyOneImage(policyContext *signature.PolicyContext, options *O
manifestUpdates: &types.ManifestUpdateOptions{InformationOnly: types.ManifestUpdateInformation{Destination: c.dest}},
src: src,
// diffIDsAreNeeded is computed later
canModifyManifest: len(sigs) == 0,
canModifyManifest: len(sigs) == 0 && !destIsDigestedReference,
}
// Ensure _this_ copy sees exactly the intended data when either processing a signed image or signing it.
// This may be too conservative, but for now, better safe than sorry, _especially_ on the SignBy path:
// The signature makes the content non-repudiable, so it very much matters that the signature is made over exactly what the user intended.
// We do intend the RecordDigestUncompressedPair calls to only work with reliable data, but at least there's a risk
// that the compressed version coming from a third party may be designed to attack some other decompressor implementation,
// and we would reuse and sign it.
ic.canSubstituteBlobs = ic.canModifyManifest && options.SignBy == ""
if err := ic.updateEmbeddedDockerReference(); err != nil {
return err
return nil, err
}
// We compute preferredManifestMIMEType only to show it in error messages.
// Without having to add this context in an error message, we would be happy enough to know only that no conversion is needed.
preferredManifestMIMEType, otherManifestMIMETypeCandidates, err := ic.determineManifestConversion(c.dest.SupportedManifestMIMETypes(), options.ForceManifestMIMEType)
preferredManifestMIMEType, otherManifestMIMETypeCandidates, err := ic.determineManifestConversion(ctx, c.dest.SupportedManifestMIMETypes(), options.ForceManifestMIMEType)
if err != nil {
return err
return nil, err
}
// If src.UpdatedImageNeedsLayerDiffIDs(ic.manifestUpdates) will be true, it needs to be true by the time we get here.
ic.diffIDsAreNeeded = src.UpdatedImageNeedsLayerDiffIDs(*ic.manifestUpdates)
if err := ic.copyLayers(); err != nil {
return err
if err := ic.copyLayers(ctx); err != nil {
return nil, err
}
// With docker/distribution registries we do not know whether the registry accepts schema2 or schema1 only;
// and at least with the OpenShift registry "acceptschema2" option, there is no way to detect the support
// without actually trying to upload something and getting a types.ManifestTypeRejectedError.
// So, try the preferred manifest MIME type. If the process succeeds, fine…
manifest, err := ic.copyUpdatedConfigAndManifest()
manifestBytes, err = ic.copyUpdatedConfigAndManifest(ctx)
if err != nil {
logrus.Debugf("Writing manifest using preferred type %s failed: %v", preferredManifestMIMEType, err)
// … if it fails, _and_ the failure is because the manifest is rejected, we may have other options.
@@ -268,14 +325,14 @@ func (c *copier) copyOneImage(policyContext *signature.PolicyContext, options *O
// We don't have other options.
// In principle the code below would handle this as well, but the resulting error message is fairly ugly.
// Don't bother the user with MIME types if we have no choice.
return err
return nil, err
}
// If the original MIME type is acceptable, determineManifestConversion always uses it as preferredManifestMIMEType.
// So if we are here, we will definitely be trying to convert the manifest.
// With !ic.canModifyManifest, that would just be a string of repeated failures for the same reason,
// so lets bail out early and with a better error message.
if !ic.canModifyManifest {
return errors.Wrap(err, "Writing manifest failed (and converting it is not possible)")
return nil, errors.Wrap(err, "Writing manifest failed (and converting it is not possible)")
}
// errs is a list of errors when trying various manifest types. Also serves as an "upload succeeded" flag when set to nil.
@@ -283,7 +340,7 @@ func (c *copier) copyOneImage(policyContext *signature.PolicyContext, options *O
for _, manifestMIMEType := range otherManifestMIMETypeCandidates {
logrus.Debugf("Trying to use manifest type %s…", manifestMIMEType)
ic.manifestUpdates.ManifestMIMEType = manifestMIMEType
attemptedManifest, err := ic.copyUpdatedConfigAndManifest()
attemptedManifest, err := ic.copyUpdatedConfigAndManifest(ctx)
if err != nil {
logrus.Debugf("Upload of manifest type %s failed: %v", manifestMIMEType, err)
errs = append(errs, fmt.Sprintf("%s(%v)", manifestMIMEType, err))
@@ -291,29 +348,29 @@ func (c *copier) copyOneImage(policyContext *signature.PolicyContext, options *O
}
// We have successfully uploaded a manifest.
manifest = attemptedManifest
manifestBytes = attemptedManifest
errs = nil // Mark this as a success so that we don't abort below.
break
}
if errs != nil {
return fmt.Errorf("Uploading manifest failed, attempted the following formats: %s", strings.Join(errs, ", "))
return nil, fmt.Errorf("Uploading manifest failed, attempted the following formats: %s", strings.Join(errs, ", "))
}
}
if options.SignBy != "" {
newSig, err := c.createSignature(manifest, options.SignBy)
newSig, err := c.createSignature(manifestBytes, options.SignBy)
if err != nil {
return err
return nil, err
}
sigs = append(sigs, newSig)
}
c.Printf("Storing signatures\n")
if err := c.dest.PutSignatures(sigs); err != nil {
return errors.Wrap(err, "Error writing signatures")
if err := c.dest.PutSignatures(ctx, sigs); err != nil {
return nil, errors.Wrap(err, "Error writing signatures")
}
return nil
return manifestBytes, nil
}
// Printf writes a formatted string to c.reportWriter.
@@ -325,13 +382,13 @@ func (c *copier) Printf(format string, a ...interface{}) {
fmt.Fprintf(c.reportWriter, format, a...)
}
func checkImageDestinationForCurrentRuntimeOS(ctx *types.SystemContext, src types.Image, dest types.ImageDestination) error {
func checkImageDestinationForCurrentRuntimeOS(ctx context.Context, sys *types.SystemContext, src types.Image, dest types.ImageDestination) error {
if dest.MustMatchRuntimeOS() {
wantedOS := runtime.GOOS
if ctx != nil && ctx.OSChoice != "" {
wantedOS = ctx.OSChoice
if sys != nil && sys.OSChoice != "" {
wantedOS = sys.OSChoice
}
c, err := src.OCIConfig()
c, err := src.OCIConfig(ctx)
if err != nil {
return errors.Wrapf(err, "Error parsing image configuration")
}
@@ -347,6 +404,9 @@ func checkImageDestinationForCurrentRuntimeOS(ctx *types.SystemContext, src type
// updateEmbeddedDockerReference handles the Docker reference embedded in Docker schema1 manifests.
func (ic *imageCopier) updateEmbeddedDockerReference() error {
if ic.c.dest.IgnoresEmbeddedDockerReference() {
return nil // Destination would prefer us not to update the embedded reference.
}
destRef := ic.c.dest.Reference().DockerReference()
if destRef == nil {
return nil // Destination does not care about Docker references
@@ -363,12 +423,22 @@ func (ic *imageCopier) updateEmbeddedDockerReference() error {
return nil
}
// isTTY returns true if the io.Writer is a file and a tty.
func isTTY(w io.Writer) bool {
if f, ok := w.(*os.File); ok {
return terminal.IsTerminal(int(f.Fd()))
}
return false
}
// copyLayers copies layers from ic.src/ic.c.rawSource to dest, using and updating ic.manifestUpdates if necessary and ic.canModifyManifest.
func (ic *imageCopier) copyLayers() error {
func (ic *imageCopier) copyLayers(ctx context.Context) error {
srcInfos := ic.src.LayerInfos()
destInfos := []types.BlobInfo{}
diffIDs := []digest.Digest{}
updatedSrcInfos := ic.src.LayerInfosForCopy()
numLayers := len(srcInfos)
updatedSrcInfos, err := ic.src.LayerInfosForCopy(ctx)
if err != nil {
return err
}
srcInfosUpdated := false
if updatedSrcInfos != nil && !reflect.DeepEqual(srcInfos, updatedSrcInfos) {
if !ic.canModifyManifest {
@@ -377,30 +447,70 @@ func (ic *imageCopier) copyLayers() error {
srcInfos = updatedSrcInfos
srcInfosUpdated = true
}
for _, srcLayer := range srcInfos {
var (
destInfo types.BlobInfo
diffID digest.Digest
err error
)
type copyLayerData struct {
destInfo types.BlobInfo
diffID digest.Digest
err error
}
// copyGroup is used to determine if all layers are copied
copyGroup := sync.WaitGroup{}
copyGroup.Add(numLayers)
// copySemaphore is used to limit the number of parallel downloads to
// avoid malicious images causing troubles and to be nice to servers.
var copySemaphore *semaphore.Weighted
if ic.c.copyInParallel {
copySemaphore = semaphore.NewWeighted(int64(maxParallelDownloads))
} else {
copySemaphore = semaphore.NewWeighted(int64(1))
}
data := make([]copyLayerData, numLayers)
copyLayerHelper := func(index int, srcLayer types.BlobInfo, pool *mpb.Progress) {
defer copySemaphore.Release(1)
defer copyGroup.Done()
cld := copyLayerData{}
if ic.c.dest.AcceptsForeignLayerURLs() && len(srcLayer.URLs) != 0 {
// DiffIDs are, currently, needed only when converting from schema1.
// In which case src.LayerInfos will not have URLs because schema1
// does not support them.
if ic.diffIDsAreNeeded {
return errors.New("getting DiffID for foreign layers is unimplemented")
cld.err = errors.New("getting DiffID for foreign layers is unimplemented")
} else {
cld.destInfo = srcLayer
logrus.Debugf("Skipping foreign layer %q copy to %s", cld.destInfo.Digest, ic.c.dest.Reference().Transport().Name())
}
destInfo = srcLayer
ic.c.Printf("Skipping foreign layer %q copy to %s\n", destInfo.Digest, ic.c.dest.Reference().Transport().Name())
} else {
destInfo, diffID, err = ic.copyLayer(srcLayer)
if err != nil {
return err
}
cld.destInfo, cld.diffID, cld.err = ic.copyLayer(ctx, srcLayer, pool)
}
destInfos = append(destInfos, destInfo)
diffIDs = append(diffIDs, diffID)
data[index] = cld
}
func() { // A scope for defer
progressPool, progressCleanup := ic.c.newProgressPool(ctx)
defer progressCleanup()
for i, srcLayer := range srcInfos {
copySemaphore.Acquire(ctx, 1)
go copyLayerHelper(i, srcLayer, progressPool)
}
// Wait for all layers to be copied
copyGroup.Wait()
}()
destInfos := make([]types.BlobInfo, numLayers)
diffIDs := make([]digest.Digest, numLayers)
for i, cld := range data {
if cld.err != nil {
return cld.err
}
destInfos[i] = cld.destInfo
diffIDs[i] = cld.diffID
}
ic.manifestUpdates.InformationOnly.LayerInfos = destInfos
if ic.diffIDsAreNeeded {
ic.manifestUpdates.InformationOnly.LayerDiffIDs = diffIDs
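The rewritten copyLayers pairs a sync.WaitGroup (completion tracking) with a golang.org/x/sync/semaphore.Weighted capped at maxParallelDownloads, or at 1 when either transport is not thread-safe. A self-contained sketch of that bounded-concurrency pattern, with a sleep standing in for the per-layer copy:

```go
package main

import (
	"context"
	"fmt"
	"sync"
	"time"

	"golang.org/x/sync/semaphore"
)

func main() {
	const maxParallel = 6 // mirrors maxParallelDownloads above
	ctx := context.Background()
	sem := semaphore.NewWeighted(maxParallel)
	var wg sync.WaitGroup

	results := make([]string, 10)
	for i := range results {
		wg.Add(1)
		if err := sem.Acquire(ctx, 1); err != nil { // blocks while maxParallel workers run
			wg.Done()
			break // Acquire only fails if ctx is cancelled
		}
		go func(i int) {
			defer sem.Release(1)
			defer wg.Done()
			time.Sleep(10 * time.Millisecond) // stand-in for copying one layer
			results[i] = fmt.Sprintf("layer %d", i)
		}(i)
	}
	wg.Wait() // all layers copied (or ctx cancelled)
	fmt.Println(results)
}
```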
@@ -426,7 +536,7 @@ func layerDigestsDiffer(a, b []types.BlobInfo) bool {
// copyUpdatedConfigAndManifest updates the image per ic.manifestUpdates, if necessary,
// stores the resulting config and manifest to the destination, and returns the stored manifest.
func (ic *imageCopier) copyUpdatedConfigAndManifest() ([]byte, error) {
func (ic *imageCopier) copyUpdatedConfigAndManifest(ctx context.Context) ([]byte, error) {
pendingImage := ic.src
if !reflect.DeepEqual(*ic.manifestUpdates, types.ManifestUpdateOptions{InformationOnly: ic.manifestUpdates.InformationOnly}) {
if !ic.canModifyManifest {
@@ -441,40 +551,89 @@ func (ic *imageCopier) copyUpdatedConfigAndManifest() ([]byte, error) {
// If handling such registries turns out to be necessary, we could compute ic.diffIDsAreNeeded based on the full list of manifest MIME type candidates.
return nil, errors.Errorf("Can not convert image to %s, preparing DiffIDs for this case is not supported", ic.manifestUpdates.ManifestMIMEType)
}
pi, err := ic.src.UpdatedImage(*ic.manifestUpdates)
pi, err := ic.src.UpdatedImage(ctx, *ic.manifestUpdates)
if err != nil {
return nil, errors.Wrap(err, "Error creating an updated image manifest")
}
pendingImage = pi
}
manifest, _, err := pendingImage.Manifest()
manifest, _, err := pendingImage.Manifest(ctx)
if err != nil {
return nil, errors.Wrap(err, "Error reading manifest")
}
if err := ic.c.copyConfig(pendingImage); err != nil {
if err := ic.c.copyConfig(ctx, pendingImage); err != nil {
return nil, err
}
ic.c.Printf("Writing manifest to image destination\n")
if err := ic.c.dest.PutManifest(manifest); err != nil {
if err := ic.c.dest.PutManifest(ctx, manifest); err != nil {
return nil, errors.Wrap(err, "Error writing manifest")
}
return manifest, nil
}
// newProgressPool creates a *mpb.Progress and a cleanup function.
// The caller must eventually call the returned cleanup function after the pool will no longer be updated.
func (c *copier) newProgressPool(ctx context.Context) (*mpb.Progress, func()) {
ctx, cancel := context.WithCancel(ctx)
pool := mpb.New(mpb.WithWidth(40), mpb.WithOutput(c.progressOutput), mpb.WithContext(ctx))
return pool, func() {
cancel()
pool.Wait()
}
}
// createProgressBar creates a mpb.Bar in pool. Note that if the copier's reportWriter
// is ioutil.Discard, the progress bar's output will be discarded
func (c *copier) createProgressBar(pool *mpb.Progress, info types.BlobInfo, kind string, onComplete string) *mpb.Bar {
// shortDigestLen is the length of the digest used for blobs.
const shortDigestLen = 12
prefix := fmt.Sprintf("Copying %s %s", kind, info.Digest.Encoded())
// Truncate the prefix (chopping off some part of the digest) to make all progress bars aligned in a column.
maxPrefixLen := len("Copying blob ") + shortDigestLen
if len(prefix) > maxPrefixLen {
prefix = prefix[:maxPrefixLen]
}
bar := pool.AddBar(info.Size,
mpb.BarClearOnComplete(),
mpb.PrependDecorators(
decor.Name(prefix),
),
mpb.AppendDecorators(
decor.OnComplete(decor.CountersKibiByte("%.1f / %.1f"), " "+onComplete),
),
)
if c.progressOutput == ioutil.Discard {
c.Printf("Copying %s %s\n", kind, info.Digest)
}
return bar
}
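These two helpers replace the old pb-based reporting with mpb pools and bars. A minimal sketch of the mpb API as used here (a pool with width/output options, a bar with prepend/append decorators, and ProxyReader to tick the bar as bytes flow through); the 1 MiB string is a stand-in for a blob:

```go
package main

import (
	"io"
	"io/ioutil"
	"os"
	"strings"

	"github.com/vbauerster/mpb"
	"github.com/vbauerster/mpb/decor"
)

func main() {
	const size = 1 << 20
	src := strings.NewReader(strings.Repeat("x", size)) // stand-in blob
	pool := mpb.New(mpb.WithWidth(40), mpb.WithOutput(os.Stderr))
	bar := pool.AddBar(size,
		mpb.PrependDecorators(decor.Name("Copying blob deadbeef1234")),
		mpb.AppendDecorators(decor.CountersKibiByte("%.1f / %.1f")),
	)
	// The proxy reader advances the bar as the copy drains it.
	if _, err := io.Copy(ioutil.Discard, bar.ProxyReader(src)); err != nil {
		panic(err)
	}
	pool.Wait() // flush rendering before exiting
}
```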
// copyConfig copies config.json, if any, from src to dest.
func (c *copier) copyConfig(src types.Image) error {
func (c *copier) copyConfig(ctx context.Context, src types.Image) error {
srcInfo := src.ConfigInfo()
if srcInfo.Digest != "" {
c.Printf("Copying config %s\n", srcInfo.Digest)
configBlob, err := src.ConfigBlob()
configBlob, err := src.ConfigBlob(ctx)
if err != nil {
return errors.Wrapf(err, "Error reading config blob %s", srcInfo.Digest)
}
destInfo, err := c.copyBlobFromStream(bytes.NewReader(configBlob), srcInfo, nil, false)
destInfo, err := func() (types.BlobInfo, error) { // A scope for defer
progressPool, progressCleanup := c.newProgressPool(ctx)
defer progressCleanup()
bar := c.createProgressBar(progressPool, srcInfo, "config", "done")
destInfo, err := c.copyBlobFromStream(ctx, bytes.NewReader(configBlob), srcInfo, nil, false, true, bar)
if err != nil {
return types.BlobInfo{}, err
}
bar.SetTotal(int64(len(configBlob)), true)
return destInfo, nil
}()
if err != nil {
return err
return nil
}
if destInfo.Digest != srcInfo.Digest {
return errors.Errorf("Internal error: copying uncompressed config blob %s changed digest to %s", srcInfo.Digest, destInfo.Digest)
@@ -492,61 +651,65 @@ type diffIDResult struct {
// copyLayer copies a layer with srcInfo (with known Digest and possibly known Size) in src to dest, perhaps compressing it if canCompress,
// and returns a complete blobInfo of the copied layer, and a value for LayerDiffIDs if diffIDIsNeeded
func (ic *imageCopier) copyLayer(srcInfo types.BlobInfo) (types.BlobInfo, digest.Digest, error) {
// Check if we already have a blob with this digest
haveBlob, extantBlobSize, err := ic.c.dest.HasBlob(srcInfo)
if err != nil {
return types.BlobInfo{}, "", errors.Wrapf(err, "Error checking for blob %s at destination", srcInfo.Digest)
}
// If we already have a cached diffID for this blob, we don't need to compute it
diffIDIsNeeded := ic.diffIDsAreNeeded && (ic.c.cachedDiffIDs[srcInfo.Digest] == "")
// If we already have the blob, and we don't need to recompute the diffID, then we might be able to avoid reading it again
if haveBlob && !diffIDIsNeeded {
// Check the blob sizes match, if we were given a size this time
if srcInfo.Size != -1 && srcInfo.Size != extantBlobSize {
return types.BlobInfo{}, "", errors.Errorf("Error: blob %s is already present, but with size %d instead of %d", srcInfo.Digest, extantBlobSize, srcInfo.Size)
}
srcInfo.Size = extantBlobSize
// Tell the image destination that this blob's delta is being applied again. For some image destinations, this can be faster than using GetBlob/PutBlob
blobinfo, err := ic.c.dest.ReapplyBlob(srcInfo)
func (ic *imageCopier) copyLayer(ctx context.Context, srcInfo types.BlobInfo, pool *mpb.Progress) (types.BlobInfo, digest.Digest, error) {
cachedDiffID := ic.c.blobInfoCache.UncompressedDigest(srcInfo.Digest) // May be ""
diffIDIsNeeded := ic.diffIDsAreNeeded && cachedDiffID == ""
// If we already have the blob, and we don't need to compute the diffID, then we don't need to read it from the source.
if !diffIDIsNeeded {
reused, blobInfo, err := ic.c.dest.TryReusingBlob(ctx, srcInfo, ic.c.blobInfoCache, ic.canSubstituteBlobs)
if err != nil {
return types.BlobInfo{}, "", errors.Wrapf(err, "Error reapplying blob %s at destination", srcInfo.Digest)
return types.BlobInfo{}, "", errors.Wrapf(err, "Error trying to reuse blob %s at destination", srcInfo.Digest)
}
if reused {
logrus.Debugf("Skipping blob %s (already present):", srcInfo.Digest)
bar := ic.c.createProgressBar(pool, srcInfo, "blob", "skipped: already exists")
bar.SetTotal(0, true)
return blobInfo, cachedDiffID, nil
}
ic.c.Printf("Skipping fetch of repeat blob %s\n", srcInfo.Digest)
return blobinfo, ic.c.cachedDiffIDs[srcInfo.Digest], err
}
// Fallback: copy the layer, computing the diffID if we need to do so
ic.c.Printf("Copying blob %s\n", srcInfo.Digest)
srcStream, srcBlobSize, err := ic.c.rawSource.GetBlob(srcInfo)
srcStream, srcBlobSize, err := ic.c.rawSource.GetBlob(ctx, srcInfo, ic.c.blobInfoCache)
if err != nil {
return types.BlobInfo{}, "", errors.Wrapf(err, "Error reading blob %s", srcInfo.Digest)
}
defer srcStream.Close()
blobInfo, diffIDChan, err := ic.copyLayerFromStream(srcStream, types.BlobInfo{Digest: srcInfo.Digest, Size: srcBlobSize},
diffIDIsNeeded)
bar := ic.c.createProgressBar(pool, srcInfo, "blob", "done")
blobInfo, diffIDChan, err := ic.copyLayerFromStream(ctx, srcStream, types.BlobInfo{Digest: srcInfo.Digest, Size: srcBlobSize}, diffIDIsNeeded, bar)
if err != nil {
return types.BlobInfo{}, "", err
}
var diffIDResult diffIDResult // = {digest:""}
diffID := cachedDiffID
if diffIDIsNeeded {
diffIDResult = <-diffIDChan
if diffIDResult.err != nil {
return types.BlobInfo{}, "", errors.Wrap(diffIDResult.err, "Error computing layer DiffID")
select {
case <-ctx.Done():
return types.BlobInfo{}, "", ctx.Err()
case diffIDResult := <-diffIDChan:
if diffIDResult.err != nil {
return types.BlobInfo{}, "", errors.Wrap(diffIDResult.err, "Error computing layer DiffID")
}
logrus.Debugf("Computed DiffID %s for layer %s", diffIDResult.digest, srcInfo.Digest)
// This is safe because we have just computed diffIDResult.Digest ourselves, and in the process
// we have read all of the input blob, so srcInfo.Digest must have been validated by digestingReader.
ic.c.blobInfoCache.RecordDigestUncompressedPair(srcInfo.Digest, diffIDResult.digest)
diffID = diffIDResult.digest
}
logrus.Debugf("Computed DiffID %s for layer %s", diffIDResult.digest, srcInfo.Digest)
ic.c.cachedDiffIDs[srcInfo.Digest] = diffIDResult.digest
}
return blobInfo, diffIDResult.digest, nil
bar.SetTotal(srcInfo.Size, true)
return blobInfo, diffID, nil
}
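Note the shift from a bare channel receive to a cancellation-aware select when waiting for the DiffID result. The idiom in isolation, with await and the worker both illustrative:

```go
package main

import (
	"context"
	"fmt"
	"time"
)

// await gives up on the worker's result as soon as the context is done.
func await(ctx context.Context, results <-chan string) (string, error) {
	select {
	case <-ctx.Done():
		return "", ctx.Err()
	case r := <-results:
		return r, nil
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 50*time.Millisecond)
	defer cancel()

	ch := make(chan string, 1) // buffered, so the worker never blocks forever
	go func() {
		time.Sleep(10 * time.Millisecond) // stand-in for computing a DiffID
		ch <- "diffID computed"
	}()

	r, err := await(ctx, ch)
	fmt.Println(r, err)
}
```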
// copyLayerFromStream is an implementation detail of copyLayer; mostly providing a separate “defer” scope.
// it copies a blob with srcInfo (with known Digest and possibly known Size) from srcStream to dest,
// perhaps compressing the stream if canCompress,
// and returns a complete blobInfo of the copied blob and perhaps a <-chan diffIDResult if diffIDIsNeeded, to be read by the caller.
func (ic *imageCopier) copyLayerFromStream(srcStream io.Reader, srcInfo types.BlobInfo,
diffIDIsNeeded bool) (types.BlobInfo, <-chan diffIDResult, error) {
func (ic *imageCopier) copyLayerFromStream(ctx context.Context, srcStream io.Reader, srcInfo types.BlobInfo,
diffIDIsNeeded bool, bar *mpb.Bar) (types.BlobInfo, <-chan diffIDResult, error) {
var getDiffIDRecorder func(compression.DecompressorFunc) io.Writer // = nil
var diffIDChan chan diffIDResult
@@ -570,7 +733,7 @@ func (ic *imageCopier) copyLayerFromStream(srcStream io.Reader, srcInfo types.Bl
return pipeWriter
}
}
blobInfo, err := ic.c.copyBlobFromStream(srcStream, srcInfo, getDiffIDRecorder, ic.canModifyManifest) // Sets err to nil on success
blobInfo, err := ic.c.copyBlobFromStream(ctx, srcStream, srcInfo, getDiffIDRecorder, ic.canModifyManifest, false, bar) // Sets err to nil on success
return blobInfo, diffIDChan, err
// We need the defer … pipeWriter.CloseWithError() to happen HERE so that the caller can block on reading from diffIDChan
}
@@ -594,6 +757,7 @@ func computeDiffID(stream io.Reader, decompressor compression.DecompressorFunc)
if err != nil {
return "", err
}
defer s.Close()
stream = s
}
@@ -604,16 +768,16 @@ func computeDiffID(stream io.Reader, decompressor compression.DecompressorFunc)
// perhaps sending a copy to an io.Writer if getOriginalLayerCopyWriter != nil,
// perhaps compressing it if canCompress,
// and returns a complete blobInfo of the copied blob.
func (c *copier) copyBlobFromStream(srcStream io.Reader, srcInfo types.BlobInfo,
func (c *copier) copyBlobFromStream(ctx context.Context, srcStream io.Reader, srcInfo types.BlobInfo,
getOriginalLayerCopyWriter func(decompressor compression.DecompressorFunc) io.Writer,
canCompress bool) (types.BlobInfo, error) {
canModifyBlob bool, isConfig bool, bar *mpb.Bar) (types.BlobInfo, error) {
// The copying happens through a pipeline of connected io.Readers.
// === Input: srcStream
// === Process input through digestingReader to validate against the expected digest.
// Be paranoid; in case PutBlob somehow managed to ignore an error from digestingReader,
// use a separate validation failure indicator.
// Note that we don't use a stronger "validationSucceeded" indicator, because
// Note that for this check we don't use the stronger "validationSucceeded" indicator, because
// dest.PutBlob may detect that the layer already exists, in which case we don't
// read stream to the end, and validation does not happen.
digestingReader, err := newDigestingReader(srcStream, srcInfo.Digest)
@@ -629,16 +793,7 @@ func (c *copier) copyBlobFromStream(srcStream io.Reader, srcInfo types.BlobInfo,
return types.BlobInfo{}, errors.Wrapf(err, "Error reading blob %s", srcInfo.Digest)
}
isCompressed := decompressor != nil
// === Report progress using a pb.Reader.
bar := pb.New(int(srcInfo.Size)).SetUnits(pb.U_BYTES)
bar.Output = c.reportWriter
bar.SetMaxWidth(80)
bar.ShowTimeLeft = false
bar.ShowPercent = false
bar.Start()
destStream = bar.NewProxyReader(destStream)
defer bar.Finish()
destStream = bar.ProxyReader(destStream)
// === Send a copy of the original, uncompressed, stream, to a separate path if necessary.
var originalLayerReader io.Reader // DO NOT USE this other than to drain the input if no other consumer in the pipeline has done so.
@@ -647,13 +802,12 @@ func (c *copier) copyBlobFromStream(srcStream io.Reader, srcInfo types.BlobInfo,
originalLayerReader = destStream
}
// === Compress the layer if it is uncompressed and compression is desired
// === Deal with layer compression/decompression if necessary
var inputInfo types.BlobInfo
if !canCompress || isCompressed || !c.dest.ShouldCompressLayers() {
logrus.Debugf("Using original blob without modification")
inputInfo = srcInfo
} else {
var compressionOperation types.LayerCompression
if canModifyBlob && c.dest.DesiredLayerCompression() == types.Compress && !isCompressed {
logrus.Debugf("Compressing blob on the fly")
compressionOperation = types.Compress
pipeReader, pipeWriter := io.Pipe()
defer pipeReader.Close()
@@ -664,6 +818,21 @@ func (c *copier) copyBlobFromStream(srcStream io.Reader, srcInfo types.BlobInfo,
destStream = pipeReader
inputInfo.Digest = ""
inputInfo.Size = -1
} else if canModifyBlob && c.dest.DesiredLayerCompression() == types.Decompress && isCompressed {
logrus.Debugf("Blob will be decompressed")
compressionOperation = types.Decompress
s, err := decompressor(destStream)
if err != nil {
return types.BlobInfo{}, err
}
defer s.Close()
destStream = s
inputInfo.Digest = ""
inputInfo.Size = -1
} else {
logrus.Debugf("Using original blob without modification")
compressionOperation = types.PreserveOriginal
inputInfo = srcInfo
}
// === Report progress using the c.progress channel, if required.
@@ -678,7 +847,7 @@ func (c *copier) copyBlobFromStream(srcStream io.Reader, srcInfo types.BlobInfo,
}
// === Finally, send the layer stream to dest.
uploadedInfo, err := c.dest.PutBlob(destStream, inputInfo)
uploadedInfo, err := c.dest.PutBlob(ctx, destStream, inputInfo, c.blobInfoCache, isConfig)
if err != nil {
return types.BlobInfo{}, errors.Wrap(err, "Error writing blob")
}
@@ -701,6 +870,22 @@ func (c *copier) copyBlobFromStream(srcStream io.Reader, srcInfo types.BlobInfo,
if inputInfo.Digest != "" && uploadedInfo.Digest != inputInfo.Digest {
return types.BlobInfo{}, errors.Errorf("Internal error writing blob %s, blob with digest %s saved with digest %s", srcInfo.Digest, inputInfo.Digest, uploadedInfo.Digest)
}
if digestingReader.validationSucceeded {
// If compressionOperation != types.PreserveOriginal, we now have two reliable digest values:
// srcinfo.Digest describes the pre-compressionOperation input, verified by digestingReader
// uploadedInfo.Digest describes the post-compressionOperation output, computed by PutBlob
// (because inputInfo.Digest == "", this must have been computed afresh).
switch compressionOperation {
case types.PreserveOriginal:
break // Do nothing, we have only one digest and we might not have even verified it.
case types.Compress:
c.blobInfoCache.RecordDigestUncompressedPair(uploadedInfo.Digest, srcInfo.Digest)
case types.Decompress:
c.blobInfoCache.RecordDigestUncompressedPair(srcInfo.Digest, uploadedInfo.Digest)
default:
return types.BlobInfo{}, errors.Errorf("Internal error: Unexpected compressionOperation value %#v", compressionOperation)
}
}
return uploadedInfo, nil
}
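This new block is what makes later copies cheap: by recording which compressed digest corresponds to which uncompressed digest, the UncompressedDigest() lookup in copyLayer above can succeed without re-reading any blob. A toy in-memory stand-in for just those two cache methods, not the real blobinfocache implementation:

```go
package main

import (
	"fmt"

	"github.com/opencontainers/go-digest"
)

// pairCache is an illustrative stand-in for the two BlobInfoCache methods used above.
type pairCache struct {
	uncompressed map[digest.Digest]digest.Digest
}

func (c *pairCache) RecordDigestUncompressedPair(anyDigest, uncompressed digest.Digest) {
	c.uncompressed[anyDigest] = uncompressed
}

func (c *pairCache) UncompressedDigest(anyDigest digest.Digest) digest.Digest {
	return c.uncompressed[anyDigest] // "" when unknown, as the callers above expect
}

func main() {
	cache := &pairCache{uncompressed: map[digest.Digest]digest.Digest{}}
	compressed := digest.FromString("gzipped layer bytes") // placeholder digests
	diffID := digest.FromString("uncompressed layer bytes")
	cache.RecordDigestUncompressedPair(compressed, diffID)
	fmt.Println(cache.UncompressedDigest(compressed) == diffID) // true
}
```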
@@ -711,7 +896,7 @@ func compressGoroutine(dest *io.PipeWriter, src io.Reader) {
dest.CloseWithError(err) // CloseWithError(nil) is equivalent to Close()
}()
zipper := gzip.NewWriter(dest)
zipper := pgzip.NewWriter(dest)
defer zipper.Close()
_, err = io.Copy(zipper, src) // Sets err to nil, i.e. causes dest.Close()
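The gzip→pgzip swap works because github.com/klauspost/pgzip is a drop-in, API-compatible replacement that compresses blocks in parallel; only the constructor call changes. A minimal sketch:

```go
package main

import (
	"io"
	"os"
	"strings"

	"github.com/klauspost/pgzip"
)

func main() {
	// Same call shape as compress/gzip, but blocks are compressed in parallel.
	zipper := pgzip.NewWriter(os.Stdout)
	defer zipper.Close()
	if _, err := io.Copy(zipper, strings.NewReader("layer bytes")); err != nil {
		panic(err)
	}
}
```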


@@ -1,6 +1,7 @@
package copy
import (
"context"
"strings"
"github.com/containers/image/manifest"
@@ -41,11 +42,16 @@ func (os *orderedSet) append(s string) {
// Note that the conversion will only happen later, through ic.src.UpdatedImage
// Returns the preferred manifest MIME type (whether we are converting to it or using it unmodified),
// and a list of other possible alternatives, in order.
func (ic *imageCopier) determineManifestConversion(destSupportedManifestMIMETypes []string, forceManifestMIMEType string) (string, []string, error) {
_, srcType, err := ic.src.Manifest()
func (ic *imageCopier) determineManifestConversion(ctx context.Context, destSupportedManifestMIMETypes []string, forceManifestMIMEType string) (string, []string, error) {
_, srcType, err := ic.src.Manifest(ctx)
if err != nil { // This should have been cached?!
return "", nil, errors.Wrap(err, "Error reading manifest")
}
normalizedSrcType := manifest.NormalizedMIMEType(srcType)
if srcType != normalizedSrcType {
logrus.Debugf("Source manifest MIME type %s, treating it as %s", srcType, normalizedSrcType)
srcType = normalizedSrcType
}
if forceManifestMIMEType != "" {
destSupportedManifestMIMETypes = []string{forceManifestMIMEType}
@@ -106,8 +112,8 @@ func (ic *imageCopier) determineManifestConversion(destSupportedManifestMIMEType
}
// isMultiImage returns true if img is a list of images
func isMultiImage(img types.UnparsedImage) (bool, error) {
_, mt, err := img.Manifest()
func isMultiImage(ctx context.Context, img types.UnparsedImage) (bool, error) {
_, mt, err := img.Manifest(ctx)
if err != nil {
return false, err
}
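isMultiImage decides purely from the manifest MIME type; no layer or config data is parsed. A sketch of the same check against a raw manifest, assuming the exported manifest.GuessMIMEType and manifest.MIMETypeIsMultiImage helpers:

```go
package main

import (
	"fmt"

	"github.com/containers/image/manifest"
)

func main() {
	raw := []byte(`{"schemaVersion": 2, "mediaType": "application/vnd.docker.distribution.manifest.list.v2+json"}`)
	mt := manifest.GuessMIMEType(raw)
	fmt.Println(mt, manifest.MIMETypeIsMultiImage(mt)) // a manifest list → true
}
```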


@@ -1,6 +1,7 @@
package directory
import (
"context"
"io"
"io/ioutil"
"os"
@@ -12,7 +13,7 @@ import (
"github.com/sirupsen/logrus"
)
const version = "Directory Transport Version: 1.0\n"
const version = "Directory Transport Version: 1.1\n"
// ErrNotContainerImageDir indicates that the directory doesn't match the expected contents of a directory created
// using the 'dir' transport
@@ -94,13 +95,15 @@ func (d *dirImageDestination) SupportedManifestMIMETypes() []string {
// SupportsSignatures returns an error (to be displayed to the user) if the destination certainly can't store signatures.
// Note: It is still possible for PutSignatures to fail if SupportsSignatures returns nil.
func (d *dirImageDestination) SupportsSignatures() error {
func (d *dirImageDestination) SupportsSignatures(ctx context.Context) error {
return nil
}
// ShouldCompressLayers returns true iff it is desirable to compress layer blobs written to this destination.
func (d *dirImageDestination) ShouldCompressLayers() bool {
return d.compress
func (d *dirImageDestination) DesiredLayerCompression() types.LayerCompression {
if d.compress {
return types.Compress
}
return types.PreserveOriginal
}
// AcceptsForeignLayerURLs returns false iff foreign layers in manifest should be actually
@@ -114,13 +117,26 @@ func (d *dirImageDestination) MustMatchRuntimeOS() bool {
return false
}
// IgnoresEmbeddedDockerReference returns true iff the destination does not care about Image.EmbeddedDockerReferenceConflicts(),
// and would prefer to receive an unmodified manifest instead of one modified for the destination.
// Does not make a difference if Reference().DockerReference() is nil.
func (d *dirImageDestination) IgnoresEmbeddedDockerReference() bool {
return false // N/A, DockerReference() returns nil.
}
// HasThreadSafePutBlob indicates whether PutBlob can be executed concurrently.
func (d *dirImageDestination) HasThreadSafePutBlob() bool {
return false
}
// PutBlob writes contents of stream and returns data representing the result (with all data filled in).
// inputInfo.Digest can be optionally provided if known; it is not mandatory for the implementation to verify it.
// inputInfo.Size is the expected length of stream, if known.
// May update cache.
// WARNING: The contents of stream are being verified on the fly. Until stream.Read() returns io.EOF, the contents of the data SHOULD NOT be available
// to any other readers for download using the supplied digest.
// If stream.Read() at any time, ESPECIALLY at end of input, returns an error, PutBlob MUST 1) fail, and 2) delete any data stored so far.
func (d *dirImageDestination) PutBlob(stream io.Reader, inputInfo types.BlobInfo) (types.BlobInfo, error) {
func (d *dirImageDestination) PutBlob(ctx context.Context, stream io.Reader, inputInfo types.BlobInfo, cache types.BlobInfoCache, isConfig bool) (types.BlobInfo, error) {
blobFile, err := ioutil.TempFile(d.ref.path, "dir-put-blob")
if err != nil {
return types.BlobInfo{}, err
@@ -136,6 +152,7 @@ func (d *dirImageDestination) PutBlob(stream io.Reader, inputInfo types.BlobInfo
digester := digest.Canonical.Digester()
tee := io.TeeReader(stream, digester.Hash())
// TODO: This can take quite some time, and should ideally be cancellable using ctx.Done().
size, err := io.Copy(blobFile, tee)
if err != nil {
return types.BlobInfo{}, err
@@ -158,38 +175,38 @@ func (d *dirImageDestination) PutBlob(stream io.Reader, inputInfo types.BlobInfo
return types.BlobInfo{Digest: computedDigest, Size: size}, nil
}
// HasBlob returns true iff the image destination already contains a blob with the matching digest which can be reapplied using ReapplyBlob.
// Unlike PutBlob, the digest can not be empty. If HasBlob returns true, the size of the blob must also be returned.
// If the destination does not contain the blob, or it is unknown, HasBlob ordinarily returns (false, -1, nil);
// it returns a non-nil error only on an unexpected failure.
func (d *dirImageDestination) HasBlob(info types.BlobInfo) (bool, int64, error) {
// TryReusingBlob checks whether the transport already contains, or can efficiently reuse, a blob, and if so, applies it to the current destination
// (e.g. if the blob is a filesystem layer, this signifies that the changes it describes need to be applied again when composing a filesystem tree).
// info.Digest must not be empty.
// If canSubstitute, TryReusingBlob can use an equivalent of the desired blob; in that case the returned info may not match the input.
// If the blob has been successfully reused, returns (true, info, nil); info must contain at least a digest and size.
// If the transport can not reuse the requested blob, TryReusingBlob returns (false, {}, nil); it returns a non-nil error only on an unexpected failure.
// May use and/or update cache.
func (d *dirImageDestination) TryReusingBlob(ctx context.Context, info types.BlobInfo, cache types.BlobInfoCache, canSubstitute bool) (bool, types.BlobInfo, error) {
if info.Digest == "" {
return false, -1, errors.Errorf(`Can not check for a blob with unknown digest`)
return false, types.BlobInfo{}, errors.Errorf(`Can not check for a blob with unknown digest`)
}
blobPath := d.ref.layerPath(info.Digest)
finfo, err := os.Stat(blobPath)
if err != nil && os.IsNotExist(err) {
return false, -1, nil
return false, types.BlobInfo{}, nil
}
if err != nil {
return false, -1, err
return false, types.BlobInfo{}, err
}
return true, finfo.Size(), nil
}
return true, types.BlobInfo{Digest: info.Digest, Size: finfo.Size()}, nil
func (d *dirImageDestination) ReapplyBlob(info types.BlobInfo) (types.BlobInfo, error) {
return info, nil
}
// PutManifest writes manifest to the destination.
// FIXME? This should also receive a MIME type if known, to differentiate between schema versions.
// If the destination is in principle available, refuses this manifest type (e.g. it does not recognize the schema),
// but may accept a different manifest type, the returned error must be a ManifestTypeRejectedError.
func (d *dirImageDestination) PutManifest(manifest []byte) error {
func (d *dirImageDestination) PutManifest(ctx context.Context, manifest []byte) error {
return ioutil.WriteFile(d.ref.manifestPath(), manifest, 0644)
}
func (d *dirImageDestination) PutSignatures(signatures [][]byte) error {
func (d *dirImageDestination) PutSignatures(ctx context.Context, signatures [][]byte) error {
for i, sig := range signatures {
if err := ioutil.WriteFile(d.ref.signaturePath(i), sig, 0644); err != nil {
return err
@@ -202,7 +219,7 @@ func (d *dirImageDestination) PutSignatures(signatures [][]byte) error {
// WARNING: This does not have any transactional semantics:
// - Uploaded data MAY be visible to others before Commit() is called
// - Uploaded data MAY be removed or MAY remain around if Close() is called without Commit() (i.e. rollback is allowed but not guaranteed)
func (d *dirImageDestination) Commit() error {
func (d *dirImageDestination) Commit(ctx context.Context) error {
return nil
}


@@ -37,7 +37,7 @@ func (s *dirImageSource) Close() error {
// It may use a remote (= slow) service.
// If instanceDigest is not nil, it contains a digest of the specific manifest instance to retrieve (when the primary manifest is a manifest list);
// this never happens if the primary manifest is not a manifest list (e.g. if the source never returns manifest lists).
func (s *dirImageSource) GetManifest(instanceDigest *digest.Digest) ([]byte, string, error) {
func (s *dirImageSource) GetManifest(ctx context.Context, instanceDigest *digest.Digest) ([]byte, string, error) {
if instanceDigest != nil {
return nil, "", errors.Errorf(`Getting target manifest not supported by "dir:"`)
}
@@ -48,15 +48,22 @@ func (s *dirImageSource) GetManifest(instanceDigest *digest.Digest) ([]byte, str
return m, manifest.GuessMIMEType(m), err
}
// HasThreadSafeGetBlob indicates whether GetBlob can be executed concurrently.
func (s *dirImageSource) HasThreadSafeGetBlob() bool {
return false
}
// GetBlob returns a stream for the specified blob, and the blobs size (or -1 if unknown).
func (s *dirImageSource) GetBlob(info types.BlobInfo) (io.ReadCloser, int64, error) {
// The Digest field in BlobInfo is guaranteed to be provided, Size may be -1 and MediaType may be optionally provided.
// May update BlobInfoCache, preferably after it knows for certain that a blob truly exists at a specific location.
func (s *dirImageSource) GetBlob(ctx context.Context, info types.BlobInfo, cache types.BlobInfoCache) (io.ReadCloser, int64, error) {
r, err := os.Open(s.ref.layerPath(info.Digest))
if err != nil {
return nil, 0, nil
return nil, -1, err
}
fi, err := r.Stat()
if err != nil {
return nil, 0, nil
return nil, -1, err
}
return r, fi.Size(), nil
}
@@ -84,6 +91,6 @@ func (s *dirImageSource) GetSignatures(ctx context.Context, instanceDigest *dige
}
// LayerInfosForCopy() returns updated layer info that should be used when copying, in preference to values in the manifest, if specified.
func (s *dirImageSource) LayerInfosForCopy() []types.BlobInfo {
return nil
func (s *dirImageSource) LayerInfosForCopy(ctx context.Context) ([]types.BlobInfo, error) {
return nil, nil
}


@@ -1,18 +1,18 @@
package directory
import (
"context"
"fmt"
"path/filepath"
"strings"
"github.com/pkg/errors"
"github.com/containers/image/directory/explicitfilepath"
"github.com/containers/image/docker/reference"
"github.com/containers/image/image"
"github.com/containers/image/transports"
"github.com/containers/image/types"
"github.com/opencontainers/go-digest"
"github.com/pkg/errors"
)
func init() {
@@ -139,29 +139,29 @@ func (ref dirReference) PolicyConfigurationNamespaces() []string {
// NOTE: If any kind of signature verification should happen, build an UnparsedImage from the value returned by NewImageSource,
// verify that UnparsedImage, and convert it into a real Image via image.FromUnparsedImage.
// WARNING: This may not do the right thing for a manifest list, see image.FromSource for details.
func (ref dirReference) NewImage(ctx *types.SystemContext) (types.ImageCloser, error) {
func (ref dirReference) NewImage(ctx context.Context, sys *types.SystemContext) (types.ImageCloser, error) {
src := newImageSource(ref)
return image.FromSource(ctx, src)
return image.FromSource(ctx, sys, src)
}
// NewImageSource returns a types.ImageSource for this reference.
// The caller must call .Close() on the returned ImageSource.
func (ref dirReference) NewImageSource(ctx *types.SystemContext) (types.ImageSource, error) {
func (ref dirReference) NewImageSource(ctx context.Context, sys *types.SystemContext) (types.ImageSource, error) {
return newImageSource(ref), nil
}
// NewImageDestination returns a types.ImageDestination for this reference.
// The caller must call .Close() on the returned ImageDestination.
func (ref dirReference) NewImageDestination(ctx *types.SystemContext) (types.ImageDestination, error) {
func (ref dirReference) NewImageDestination(ctx context.Context, sys *types.SystemContext) (types.ImageDestination, error) {
compress := false
if ctx != nil {
compress = ctx.DirForceCompress
if sys != nil {
compress = sys.DirForceCompress
}
return newImageDestination(ref, compress)
}
// DeleteImage deletes the named image from the registry, if supported.
func (ref dirReference) DeleteImage(ctx *types.SystemContext) error {
func (ref dirReference) DeleteImage(ctx context.Context, sys *types.SystemContext) error {
return errors.Errorf("Deleting images not implemented for dir: images")
}
@@ -173,7 +173,7 @@ func (ref dirReference) manifestPath() string {
// layerPath returns a path for a layer tarball within a directory using our conventions.
func (ref dirReference) layerPath(digest digest.Digest) string {
// FIXME: Should we keep the digest identification?
return filepath.Join(ref.path, digest.Hex()+".tar")
return filepath.Join(ref.path, digest.Hex())
}
// signaturePath returns a path for a signature within a directory using our conventions.


@@ -1,6 +1,7 @@
package archive
import (
"context"
"io"
"os"
@@ -15,11 +16,7 @@ type archiveImageDestination struct {
writer io.Closer
}
func newImageDestination(ctx *types.SystemContext, ref archiveReference) (types.ImageDestination, error) {
if ref.destinationRef == nil {
return nil, errors.Errorf("docker-archive: destination reference not supplied (must be of form <path>:<reference:tag>)")
}
func newImageDestination(sys *types.SystemContext, ref archiveReference) (types.ImageDestination, error) {
// ref.path can be either a pipe or a regular file
// in the case of a pipe, we require that we can open it for write
// in the case of a regular file, we don't want to overwrite any pre-existing file
@@ -39,13 +36,22 @@ func newImageDestination(ctx *types.SystemContext, ref archiveReference) (types.
return nil, errors.New("docker-archive doesn't support modifying existing images")
}
tarDest := tarfile.NewDestination(fh, ref.destinationRef)
if sys != nil && sys.DockerArchiveAdditionalTags != nil {
tarDest.AddRepoTags(sys.DockerArchiveAdditionalTags)
}
return &archiveImageDestination{
Destination: tarfile.NewDestination(fh, ref.destinationRef),
Destination: tarDest,
ref: ref,
writer: fh,
}, nil
}
// DesiredLayerCompression indicates if layers must be compressed, decompressed or preserved
func (d *archiveImageDestination) DesiredLayerCompression() types.LayerCompression {
return types.Decompress
}
// Reference returns the reference used to set up this destination. Note that this should directly correspond to user's intent,
// e.g. it should use the public hostname instead of the result of resolving CNAMEs or following redirects.
func (d *archiveImageDestination) Reference() types.ImageReference {
@@ -61,6 +67,6 @@ func (d *archiveImageDestination) Close() error {
// WARNING: This does not have any transactional semantics:
// - Uploaded data MAY be visible to others before Commit() is called
// - Uploaded data MAY be removed or MAY remain around if Close() is called without Commit() (i.e. rollback is allowed but not guaranteed)
func (d *archiveImageDestination) Commit() error {
return d.Destination.Commit()
func (d *archiveImageDestination) Commit(ctx context.Context) error {
return d.Destination.Commit(ctx)
}


@@ -1,6 +1,7 @@
package archive
import (
"context"
"github.com/containers/image/docker/tarfile"
"github.com/containers/image/types"
"github.com/sirupsen/logrus"
@@ -13,15 +14,18 @@ type archiveImageSource struct {
// newImageSource returns a types.ImageSource for the specified image reference.
// The caller must call .Close() on the returned ImageSource.
func newImageSource(ctx *types.SystemContext, ref archiveReference) types.ImageSource {
func newImageSource(ctx context.Context, ref archiveReference) (types.ImageSource, error) {
if ref.destinationRef != nil {
logrus.Warnf("docker-archive: references are not supported for sources (ignoring)")
}
src := tarfile.NewSource(ref.path)
src, err := tarfile.NewSourceFromFile(ref.path)
if err != nil {
return nil, err
}
return &archiveImageSource{
Source: src,
ref: ref,
}
}, nil
}
// Reference returns the reference used to set up this source, _as specified by the user_
@@ -30,12 +34,7 @@ func (s *archiveImageSource) Reference() types.ImageReference {
return s.ref
}
// Close removes resources associated with an initialized ImageSource, if any.
func (s *archiveImageSource) Close() error {
return nil
}
// LayerInfosForCopy() returns updated layer info that should be used when reading, in preference to values in the manifest, if specified.
func (s *archiveImageSource) LayerInfosForCopy() []types.BlobInfo {
return nil
func (s *archiveImageSource) LayerInfosForCopy(ctx context.Context) ([]types.BlobInfo, error) {
return nil, nil
}


@@ -1,6 +1,7 @@
package archive
import (
"context"
"fmt"
"strings"
@@ -40,7 +41,9 @@ func (t archiveTransport) ValidatePolicyConfigurationScope(scope string) error {
// archiveReference is an ImageReference for Docker images.
type archiveReference struct {
destinationRef reference.NamedTagged // only used for destinations
// only used for destinations
// archiveReference.destinationRef is optional and can be nil for destinations as well.
destinationRef reference.NamedTagged
path string
}
@@ -130,25 +133,28 @@ func (ref archiveReference) PolicyConfigurationNamespaces() []string {
// NOTE: If any kind of signature verification should happen, build an UnparsedImage from the value returned by NewImageSource,
// verify that UnparsedImage, and convert it into a real Image via image.FromUnparsedImage.
// WARNING: This may not do the right thing for a manifest list, see image.FromSource for details.
func (ref archiveReference) NewImage(ctx *types.SystemContext) (types.ImageCloser, error) {
src := newImageSource(ctx, ref)
return ctrImage.FromSource(ctx, src)
func (ref archiveReference) NewImage(ctx context.Context, sys *types.SystemContext) (types.ImageCloser, error) {
src, err := newImageSource(ctx, ref)
if err != nil {
return nil, err
}
return ctrImage.FromSource(ctx, sys, src)
}
// NewImageSource returns a types.ImageSource for this reference.
// The caller must call .Close() on the returned ImageSource.
func (ref archiveReference) NewImageSource(ctx *types.SystemContext) (types.ImageSource, error) {
return newImageSource(ctx, ref), nil
func (ref archiveReference) NewImageSource(ctx context.Context, sys *types.SystemContext) (types.ImageSource, error) {
return newImageSource(ctx, ref)
}
// NewImageDestination returns a types.ImageDestination for this reference.
// The caller must call .Close() on the returned ImageDestination.
func (ref archiveReference) NewImageDestination(ctx *types.SystemContext) (types.ImageDestination, error) {
return newImageDestination(ctx, ref)
func (ref archiveReference) NewImageDestination(ctx context.Context, sys *types.SystemContext) (types.ImageDestination, error) {
return newImageDestination(sys, ref)
}
// DeleteImage deletes the named image from the registry, if supported.
func (ref archiveReference) DeleteImage(ctx *types.SystemContext) error {
func (ref archiveReference) DeleteImage(ctx context.Context, sys *types.SystemContext) error {
// Not really supported, for safety reasons.
return errors.New("Deleting images not implemented for docker-archive: images")
}

vendor/github.com/containers/image/docker/cache.go (generated, vendored, new file, 23 lines)

@@ -0,0 +1,23 @@
package docker
import (
"github.com/containers/image/docker/reference"
"github.com/containers/image/types"
)
// bicTransportScope returns a BICTransportScope appropriate for ref.
func bicTransportScope(ref dockerReference) types.BICTransportScope {
// Blobs can be reused across the whole registry.
return types.BICTransportScope{Opaque: reference.Domain(ref.ref)}
}
// newBICLocationReference returns a BICLocationReference appropriate for ref.
func newBICLocationReference(ref dockerReference) types.BICLocationReference {
// Blobs are scoped to repositories (the tag/digest are not necessary to reuse a blob).
return types.BICLocationReference{Opaque: ref.ref.Name()}
}
// parseBICLocationReference returns a repository for encoded lr.
func parseBICLocationReference(lr types.BICLocationReference) (reference.Named, error) {
return reference.ParseNormalizedNamed(lr.Opaque)
}
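Both cache keys above are derived from the same Docker reference: the registry domain scopes blob reuse across the whole registry, and the repository name pins a concrete location. A quick demonstration of those two derivations, using the vendored reference package:

```go
package main

import (
	"fmt"

	"github.com/containers/image/docker/reference"
)

func main() {
	ref, err := reference.ParseNormalizedNamed("busybox")
	if err != nil {
		panic(err)
	}
	fmt.Println(reference.Domain(ref)) // "docker.io"                 → BICTransportScope.Opaque
	fmt.Println(ref.Name())            // "docker.io/library/busybox" → BICLocationReference.Opaque
}
```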


@@ -15,10 +15,10 @@ const (
)
// NewDockerClient initializes a new API client based on the passed SystemContext.
func newDockerClient(ctx *types.SystemContext) (*dockerclient.Client, error) {
func newDockerClient(sys *types.SystemContext) (*dockerclient.Client, error) {
host := dockerclient.DefaultDockerHost
if ctx != nil && ctx.DockerDaemonHost != "" {
host = ctx.DockerDaemonHost
if sys != nil && sys.DockerDaemonHost != "" {
host = sys.DockerDaemonHost
}
// Sadly, unix:// sockets don't work transparently with dockerclient.NewClient.
@@ -27,32 +27,39 @@ func newDockerClient(ctx *types.SystemContext) (*dockerclient.Client, error) {
// regardless of the values in the *tls.Config), and we would have to call sockets.ConfigureTransport.
//
// We don't really want to configure anything for unix:// sockets, so just pass a nil *http.Client.
proto, _, _, err := dockerclient.ParseHost(host)
//
// Similarly, if we want to communicate over plain HTTP on a TCP socket, we also need to set
// TLSClientConfig to nil. This can be achieved by using the form `http://`
url, err := dockerclient.ParseHostURL(host)
if err != nil {
return nil, err
}
var httpClient *http.Client
if proto != "unix" {
hc, err := tlsConfig(ctx)
if err != nil {
return nil, err
if url.Scheme != "unix" {
if url.Scheme == "http" {
httpClient = httpConfig()
} else {
hc, err := tlsConfig(sys)
if err != nil {
return nil, err
}
httpClient = hc
}
httpClient = hc
}
return dockerclient.NewClient(host, defaultAPIVersion, httpClient, nil)
}
func tlsConfig(ctx *types.SystemContext) (*http.Client, error) {
func tlsConfig(sys *types.SystemContext) (*http.Client, error) {
options := tlsconfig.Options{}
if ctx != nil && ctx.DockerDaemonInsecureSkipTLSVerify {
if sys != nil && sys.DockerDaemonInsecureSkipTLSVerify {
options.InsecureSkipVerify = true
}
if ctx != nil && ctx.DockerDaemonCertPath != "" {
options.CAFile = filepath.Join(ctx.DockerDaemonCertPath, "ca.pem")
options.CertFile = filepath.Join(ctx.DockerDaemonCertPath, "cert.pem")
options.KeyFile = filepath.Join(ctx.DockerDaemonCertPath, "key.pem")
if sys != nil && sys.DockerDaemonCertPath != "" {
options.CAFile = filepath.Join(sys.DockerDaemonCertPath, "ca.pem")
options.CertFile = filepath.Join(sys.DockerDaemonCertPath, "cert.pem")
options.KeyFile = filepath.Join(sys.DockerDaemonCertPath, "key.pem")
}
tlsc, err := tlsconfig.Client(options)
@@ -67,3 +74,12 @@ func tlsConfig(ctx *types.SystemContext) (*http.Client, error) {
CheckRedirect: dockerclient.CheckRedirect,
}, nil
}
func httpConfig() *http.Client {
return &http.Client{
Transport: &http.Transport{
TLSClientConfig: nil,
},
CheckRedirect: dockerclient.CheckRedirect,
}
}
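So a docker-daemon host beginning with http:// now yields a client whose transport has TLS explicitly disabled, while unix:// sockets still get no *http.Client at all. A minimal sketch of the branching, using the same ParseHostURL call as above (the endpoint is a hypothetical plain-HTTP daemon):

package main

import (
	"fmt"
	"net/http"

	dockerclient "github.com/docker/docker/client"
)

func main() {
	host := "http://127.0.0.1:2375" // hypothetical plain-HTTP daemon endpoint
	url, err := dockerclient.ParseHostURL(host)
	if err != nil {
		panic(err)
	}
	var httpClient *http.Client
	if url.Scheme != "unix" && url.Scheme == "http" {
		// Same shape as httpConfig above: a transport with no TLS configuration.
		httpClient = &http.Client{Transport: &http.Transport{TLSClientConfig: nil}}
	}
	fmt.Println(url.Scheme, httpClient != nil) // http true
}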

vendor/github.com/containers/image/docker/daemon/daemon_dest.go

@@ -1,6 +1,7 @@
package daemon
import (
"context"
"io"
"github.com/containers/image/docker/reference"
@@ -9,7 +10,6 @@ import (
"github.com/docker/docker/client"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
"golang.org/x/net/context"
)
type daemonImageDestination struct {
@@ -25,7 +25,7 @@ type daemonImageDestination struct {
}
// newImageDestination returns a types.ImageDestination for the specified image reference.
func newImageDestination(ctx *types.SystemContext, ref daemonReference) (types.ImageDestination, error) {
func newImageDestination(ctx context.Context, sys *types.SystemContext, ref daemonReference) (types.ImageDestination, error) {
if ref.ref == nil {
return nil, errors.Errorf("Invalid destination docker-daemon:%s: a destination must be a name:tag", ref.StringWithinTransport())
}
@@ -35,11 +35,11 @@ func newImageDestination(ctx *types.SystemContext, ref daemonReference) (types.I
}
var mustMatchRuntimeOS = true
if ctx != nil && ctx.DockerDaemonHost != client.DefaultDockerHost {
if sys != nil && sys.DockerDaemonHost != client.DefaultDockerHost {
mustMatchRuntimeOS = false
}
c, err := newDockerClient(ctx)
c, err := newDockerClient(sys)
if err != nil {
return nil, errors.Wrap(err, "Error initializing docker engine client")
}
@@ -48,7 +48,7 @@ func newImageDestination(ctx *types.SystemContext, ref daemonReference) (types.I
// Commit() may never be called, so we may never read from this channel; so, make this buffered to allow imageLoadGoroutine to write status and terminate even if we never read it.
statusChannel := make(chan error, 1)
goroutineContext, goroutineCancel := context.WithCancel(context.Background())
goroutineContext, goroutineCancel := context.WithCancel(ctx)
go imageLoadGoroutine(goroutineContext, c, reader, statusChannel)
return &daemonImageDestination{
@@ -85,6 +85,11 @@ func imageLoadGoroutine(ctx context.Context, c *client.Client, reader *io.PipeRe
defer resp.Body.Close()
}
// DesiredLayerCompression indicates if layers must be compressed, decompressed or preserved
func (d *daemonImageDestination) DesiredLayerCompression() types.LayerCompression {
return types.PreserveOriginal
}
// MustMatchRuntimeOS returns true iff the destination can store only images targeted for the current runtime OS. False otherwise.
func (d *daemonImageDestination) MustMatchRuntimeOS() bool {
return d.mustMatchRuntimeOS
@@ -119,9 +124,9 @@ func (d *daemonImageDestination) Reference() types.ImageReference {
// WARNING: This does not have any transactional semantics:
// - Uploaded data MAY be visible to others before Commit() is called
// - Uploaded data MAY be removed or MAY remain around if Close() is called without Commit() (i.e. rollback is allowed but not guaranteed)
func (d *daemonImageDestination) Commit() error {
func (d *daemonImageDestination) Commit(ctx context.Context) error {
logrus.Debugf("docker-daemon: Closing tar stream")
if err := d.Destination.Commit(); err != nil {
if err := d.Destination.Commit(ctx); err != nil {
return err
}
if err := d.writer.Close(); err != nil {
@@ -130,6 +135,10 @@ func (d *daemonImageDestination) Commit() error {
d.committed = true // We may still fail, but we are done sending to imageLoadGoroutine.
logrus.Debugf("docker-daemon: Waiting for status")
err := <-d.statusChannel
return err
select {
case <-ctx.Done():
return ctx.Err()
case err := <-d.statusChannel:
return err
}
}
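Commit now races the load goroutine's status against context cancellation instead of blocking unconditionally on the channel. The pattern in isolation, as a self-contained sketch:

package main

import (
	"context"
	"errors"
	"fmt"
	"time"
)

// waitForStatus mirrors the select in Commit above: it returns whichever
// arrives first, the goroutine's status or the context's cancellation error.
func waitForStatus(ctx context.Context, status <-chan error) error {
	select {
	case <-ctx.Done():
		return ctx.Err()
	case err := <-status:
		return err
	}
}

func main() {
	status := make(chan error, 1) // buffered, so the sender never blocks
	go func() {
		time.Sleep(10 * time.Millisecond)
		status <- errors.New("image load failed") // hypothetical outcome
	}()
	ctx, cancel := context.WithTimeout(context.Background(), time.Second)
	defer cancel()
	fmt.Println(waitForStatus(ctx, status))
}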

vendor/github.com/containers/image/docker/daemon/daemon_src.go

@@ -1,21 +1,16 @@
package daemon
import (
"io"
"io/ioutil"
"os"
"context"
"github.com/containers/image/docker/tarfile"
"github.com/containers/image/internal/tmpdir"
"github.com/containers/image/types"
"github.com/pkg/errors"
"golang.org/x/net/context"
)
type daemonImageSource struct {
ref daemonReference
*tarfile.Source // Implements most of types.ImageSource
tarCopyPath string
}
type layerInfo struct {
@@ -32,42 +27,26 @@ type layerInfo struct {
// (We could, perhaps, expect an exact sequence, assume that the first plaintext file
// is the config, and that the following len(RootFS) files are the layers, but that feels
// way too brittle.)
func newImageSource(ctx *types.SystemContext, ref daemonReference) (types.ImageSource, error) {
c, err := newDockerClient(ctx)
func newImageSource(ctx context.Context, sys *types.SystemContext, ref daemonReference) (types.ImageSource, error) {
c, err := newDockerClient(sys)
if err != nil {
return nil, errors.Wrap(err, "Error initializing docker engine client")
}
// Per NewReference(), ref.StringWithinTransport() is either an image ID (config digest), or a !reference.NameOnly() reference.
// Either way ImageSave should create a tarball with exactly one image.
inputStream, err := c.ImageSave(context.TODO(), []string{ref.StringWithinTransport()})
inputStream, err := c.ImageSave(ctx, []string{ref.StringWithinTransport()})
if err != nil {
return nil, errors.Wrap(err, "Error loading image from docker engine")
}
defer inputStream.Close()
// FIXME: use SystemContext here.
tarCopyFile, err := ioutil.TempFile(tmpdir.TemporaryDirectoryForBigFiles(), "docker-daemon-tar")
src, err := tarfile.NewSourceFromStream(inputStream)
if err != nil {
return nil, err
}
defer tarCopyFile.Close()
succeeded := false
defer func() {
if !succeeded {
os.Remove(tarCopyFile.Name())
}
}()
if _, err := io.Copy(tarCopyFile, inputStream); err != nil {
return nil, err
}
succeeded = true
return &daemonImageSource{
ref: ref,
Source: tarfile.NewSource(tarCopyFile.Name()),
tarCopyPath: tarCopyFile.Name(),
ref: ref,
Source: src,
}, nil
}
@@ -77,12 +56,7 @@ func (s *daemonImageSource) Reference() types.ImageReference {
return s.ref
}
// Close removes resources associated with an initialized ImageSource, if any.
func (s *daemonImageSource) Close() error {
return os.Remove(s.tarCopyPath)
}
// LayerInfosForCopy() returns updated layer info that should be used when reading, in preference to values in the manifest, if specified.
func (s *daemonImageSource) LayerInfosForCopy() []types.BlobInfo {
return nil
func (s *daemonImageSource) LayerInfosForCopy(ctx context.Context) ([]types.BlobInfo, error) {
return nil, nil
}
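With the temporary-file plumbing gone, reading an image out of the daemon reduces to ImageSave plus tarfile.NewSourceFromStream. A hedged usage sketch built from the same APIs shown above (the API version string and image name are assumptions):

package main

import (
	"context"

	"github.com/containers/image/docker/tarfile"
	dockerclient "github.com/docker/docker/client"
)

func main() {
	ctx := context.Background()
	// "1.22" stands in for the package's defaultAPIVersion; a nil HTTP client, as for unix sockets.
	c, err := dockerclient.NewClient(dockerclient.DefaultDockerHost, "1.22", nil, nil)
	if err != nil {
		panic(err)
	}
	rc, err := c.ImageSave(ctx, []string{"busybox:latest"}) // example reference
	if err != nil {
		panic(err)
	}
	defer rc.Close()
	src, err := tarfile.NewSourceFromStream(rc) // buffers the stream internally
	if err != nil {
		panic(err)
	}
	_ = src // used as the types.ImageSource backend, as daemonImageSource does
}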

vendor/github.com/containers/image/docker/daemon/daemon_transport.go

@@ -1,13 +1,16 @@
package daemon
import (
"github.com/pkg/errors"
"context"
"fmt"
"github.com/containers/image/docker/policyconfiguration"
"github.com/containers/image/docker/reference"
"github.com/containers/image/image"
"github.com/containers/image/transports"
"github.com/containers/image/types"
"github.com/opencontainers/go-digest"
"github.com/pkg/errors"
)
func init() {
@@ -34,8 +37,15 @@ func (t daemonTransport) ParseReference(reference string) (types.ImageReference,
// It is acceptable to allow an invalid value which will never be matched, it can "only" cause user confusion.
// scope passed to this function will not be "", that value is always allowed.
func (t daemonTransport) ValidatePolicyConfigurationScope(scope string) error {
// See the explanation in daemonReference.PolicyConfigurationIdentity.
return errors.New(`docker-daemon: does not support any scopes except the default "" one`)
// ID values cannot be effectively namespaced, and are clearly invalid host:port values.
if _, err := digest.Parse(scope); err == nil {
return errors.Errorf(`docker-daemon: can not use algo:digest value %s as a namespace`, scope)
}
// FIXME? We could be verifying the various character set and length restrictions
// from docker/distribution/reference.regexp.go, but other than that there
// are few semantically invalid strings.
return nil
}
// daemonReference is an ImageReference for images managed by a local Docker daemon
@@ -87,6 +97,8 @@ func NewReference(id digest.Digest, ref reference.Named) (types.ImageReference,
// A github.com/distribution/reference value can have a tag and a digest at the same time!
// Most versions of docker/reference do not handle that (ignoring the tag), so reject such input.
// This MAY be accepted in the future.
// (Even if it were supported, the semantics of policy namespaces are unclear - should we drop
// the tag or the digest first?)
_, isTagged := ref.(reference.NamedTagged)
_, isDigested := ref.(reference.Canonical)
if isTagged && isDigested {
@@ -136,9 +148,28 @@ func (ref daemonReference) DockerReference() reference.Named {
// Returns "" if configuration identities for these references are not supported.
func (ref daemonReference) PolicyConfigurationIdentity() string {
// We must allow referring to images in the daemon by image ID, otherwise untagged images would not be accessible.
// But the existence of image IDs means that we cant truly well namespace the input; the untagged images would have to fall into the default policy,
// which can be unexpected. So, punt.
return "" // This still allows using the default "" scope to define a policy for this transport.
// But the existence of image IDs means that we can't truly namespace the input:
// a single image can be namespaced either using the name or the ID, depending on how it is named.
//
// That's fairly unexpected, but we have to cope somehow.
//
// So, use the ordinary docker/policyconfiguration namespacing for named images;
// image IDs all fall into the root namespace.
// Users can set up the root namespace to be either untrusted or rejected,
// and to set up specific trust for named namespaces. This allows verifying image
// identity when a name is known, and unnamed images would be untrusted or rejected.
switch {
case ref.id != "":
return "" // This still allows using the default "" scope to define a global policy for ID-identified images.
case ref.ref != nil:
res, err := policyconfiguration.DockerReferenceIdentity(ref.ref)
if res == "" || err != nil { // Coverage: Should never happen, NewReference above should refuse values which could cause a failure.
panic(fmt.Sprintf("Internal inconsistency: policyconfiguration.DockerReferenceIdentity returned %#v, %v", res, err))
}
return res
default: // Coverage: Should never happen, NewReference above should refuse such values.
panic("Internal inconsistency: daemonReference has empty id and nil ref")
}
}
// PolicyConfigurationNamespaces returns a list of other policy configuration namespaces to search
@@ -148,7 +179,14 @@ func (ref daemonReference) PolicyConfigurationIdentity() string {
// and each following element to be a prefix of the element preceding it.
func (ref daemonReference) PolicyConfigurationNamespaces() []string {
// See the explanation in daemonReference.PolicyConfigurationIdentity.
return []string{}
switch {
case ref.id != "":
return []string{}
case ref.ref != nil:
return policyconfiguration.DockerReferenceNamespaces(ref.ref)
default: // Coverage: Should never happen, NewReference above should refuse such values.
panic("Internal inconsistency: daemonReference has empty id and nil ref")
}
}
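To see what the named-image path produces, here is a small sketch of the two policyconfiguration helpers used above (the reference string is an example; expected output is noted in comments):

package main

import (
	"fmt"

	"github.com/containers/image/docker/policyconfiguration"
	"github.com/containers/image/docker/reference"
)

func main() {
	ref, err := reference.ParseNormalizedNamed("docker.io/library/busybox:latest")
	if err != nil {
		panic(err)
	}
	id, err := policyconfiguration.DockerReferenceIdentity(ref)
	if err != nil {
		panic(err)
	}
	fmt.Println(id) // docker.io/library/busybox:latest
	for _, ns := range policyconfiguration.DockerReferenceNamespaces(ref) {
		fmt.Println(ns) // docker.io/library/busybox, then docker.io/library, then docker.io
	}
}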
// NewImage returns a types.ImageCloser for this reference, possibly specialized for this ImageTransport.
@@ -156,28 +194,28 @@ func (ref daemonReference) PolicyConfigurationNamespaces() []string {
// NOTE: If any kind of signature verification should happen, build an UnparsedImage from the value returned by NewImageSource,
// verify that UnparsedImage, and convert it into a real Image via image.FromUnparsedImage.
// WARNING: This may not do the right thing for a manifest list, see image.FromSource for details.
func (ref daemonReference) NewImage(ctx *types.SystemContext) (types.ImageCloser, error) {
src, err := newImageSource(ctx, ref)
func (ref daemonReference) NewImage(ctx context.Context, sys *types.SystemContext) (types.ImageCloser, error) {
src, err := newImageSource(ctx, sys, ref)
if err != nil {
return nil, err
}
return image.FromSource(ctx, src)
return image.FromSource(ctx, sys, src)
}
// NewImageSource returns a types.ImageSource for this reference.
// The caller must call .Close() on the returned ImageSource.
func (ref daemonReference) NewImageSource(ctx *types.SystemContext) (types.ImageSource, error) {
return newImageSource(ctx, ref)
func (ref daemonReference) NewImageSource(ctx context.Context, sys *types.SystemContext) (types.ImageSource, error) {
return newImageSource(ctx, sys, ref)
}
// NewImageDestination returns a types.ImageDestination for this reference.
// The caller must call .Close() on the returned ImageDestination.
func (ref daemonReference) NewImageDestination(ctx *types.SystemContext) (types.ImageDestination, error) {
return newImageDestination(ctx, ref)
func (ref daemonReference) NewImageDestination(ctx context.Context, sys *types.SystemContext) (types.ImageDestination, error) {
return newImageDestination(ctx, sys, ref)
}
// DeleteImage deletes the named image from the registry, if supported.
func (ref daemonReference) DeleteImage(ctx *types.SystemContext) error {
func (ref daemonReference) DeleteImage(ctx context.Context, sys *types.SystemContext) error {
// Should this just untag the image? Should this stop running containers?
// The semantics is not quite as clear as for remote repositories.
// The user can run (docker rmi) directly anyway, so, for now(?), punt instead of trying to guess what the user meant.

vendor/github.com/containers/image/docker/docker_client.go

@@ -8,26 +8,30 @@ import (
"io"
"io/ioutil"
"net/http"
"net/url"
"os"
"path/filepath"
"strconv"
"strings"
"sync"
"time"
"github.com/containers/image/docker/reference"
"github.com/containers/image/pkg/docker/config"
"github.com/containers/image/pkg/sysregistriesv2"
"github.com/containers/image/pkg/tlsclientconfig"
"github.com/containers/image/types"
"github.com/docker/distribution/registry/client"
"github.com/docker/go-connections/tlsconfig"
"github.com/opencontainers/go-digest"
digest "github.com/opencontainers/go-digest"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
)
const (
dockerHostname = "docker.io"
dockerRegistry = "registry-1.docker.io"
systemPerHostCertDirPath = "/etc/docker/certs.d"
dockerHostname = "docker.io"
dockerV1Hostname = "index.docker.io"
dockerRegistry = "registry-1.docker.io"
resolvedPingV2URL = "%s://%s/v2/"
resolvedPingV1URL = "%s://%s/v1/_ping"
@@ -49,6 +53,7 @@ var (
ErrV1NotSupported = errors.New("can't talk to a V1 docker registry")
// ErrUnauthorizedForCredentials is returned when the status code returned is 401
ErrUnauthorizedForCredentials = errors.New("unable to retrieve auth token: invalid username/password")
systemPerHostCertDirPaths = [2]string{"/etc/containers/certs.d", "/etc/docker/certs.d"}
)
// extensionSignature and extensionSignatureList come from github.com/openshift/origin/pkg/dockerregistry/server/signaturedispatcher.go:
@@ -66,29 +71,40 @@ type extensionSignatureList struct {
}
type bearerToken struct {
Token string `json:"token"`
ExpiresIn int `json:"expires_in"`
IssuedAt time.Time `json:"issued_at"`
Token string `json:"token"`
AccessToken string `json:"access_token"`
ExpiresIn int `json:"expires_in"`
IssuedAt time.Time `json:"issued_at"`
expirationTime time.Time
}
// dockerClient is configuration for dealing with a single Docker registry.
type dockerClient struct {
// The following members are set by newDockerClient and do not change afterwards.
ctx *types.SystemContext
registry string
sys *types.SystemContext
registry string
// tlsClientConfig is setup by newDockerClient and will be used and updated
// by detectProperties(). Callers can edit tlsClientConfig.InsecureSkipVerify in the meantime.
tlsClientConfig *tls.Config
// The following members are not set by newDockerClient and must be set by callers if needed.
username string
password string
client *http.Client
signatureBase signatureStorageBase
scope authScope
// The following members are detected registry properties:
// They are set after a successful detectProperties(), and never change afterwards.
scheme string // Empty value also used to indicate detectProperties() has not yet succeeded.
client *http.Client
scheme string
challenges []challenge
supportsSignatures bool
// The following members are private state for setupRequestAuth, both are valid if token != nil.
token *bearerToken
tokenExpiration time.Time
// Private state for setupRequestAuth (key: string, value: bearerToken)
tokenCache sync.Map
// Private state for detectProperties:
detectPropertiesOnce sync.Once // detectPropertiesOnce is used to execute detectProperties() at most once.
detectPropertiesError error // detectPropertiesError caches the initial error.
}
type authScope struct {
@@ -96,6 +112,38 @@ type authScope struct {
actions string
}
// sendAuth determines whether we need authentication for v2 or v1 endpoint.
type sendAuth int
const (
// v2 endpoint with authentication.
v2Auth sendAuth = iota
// v1 endpoint with authentication.
// TODO: Get v1Auth working
// v1Auth
// no authentication, works for both v1 and v2.
noAuth
)
func newBearerTokenFromJSONBlob(blob []byte) (*bearerToken, error) {
token := new(bearerToken)
if err := json.Unmarshal(blob, &token); err != nil {
return nil, err
}
if token.Token == "" {
token.Token = token.AccessToken
}
if token.ExpiresIn < minimumTokenLifetimeSeconds {
token.ExpiresIn = minimumTokenLifetimeSeconds
logrus.Debugf("Increasing token expiration to: %d seconds", token.ExpiresIn)
}
if token.IssuedAt.IsZero() {
token.IssuedAt = time.Now().UTC()
}
token.expirationTime = token.IssuedAt.Add(time.Duration(token.ExpiresIn) * time.Second)
return token, nil
}
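For illustration, a token server that replies with only the OAuth-style access_token and a short expires_in still yields a usable bearerToken: Token is copied from AccessToken, the lifetime is raised to the minimum, and issued_at defaults to now. A standalone sketch of that normalization (the constant's value of 60 is an assumption):

package main

import (
	"encoding/json"
	"fmt"
	"time"
)

const minimumTokenLifetimeSeconds = 60 // assumed value of the package constant

type bearerToken struct {
	Token       string    `json:"token"`
	AccessToken string    `json:"access_token"`
	ExpiresIn   int       `json:"expires_in"`
	IssuedAt    time.Time `json:"issued_at"`
}

func main() {
	blob := []byte(`{"access_token": "xyz", "expires_in": 5}`) // hypothetical reply
	var t bearerToken
	if err := json.Unmarshal(blob, &t); err != nil {
		panic(err)
	}
	if t.Token == "" {
		t.Token = t.AccessToken // fall back to the OAuth-style field
	}
	if t.ExpiresIn < minimumTokenLifetimeSeconds {
		t.ExpiresIn = minimumTokenLifetimeSeconds
	}
	if t.IssuedAt.IsZero() {
		t.IssuedAt = time.Now().UTC()
	}
	fmt.Println(t.Token, t.ExpiresIn) // xyz 60
}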
// this is cloned from docker/go-connections because upstream docker has changed
// it and make deps here fails otherwise.
// We'll drop this once we upgrade to docker 1.13.x deps.
@@ -109,84 +157,125 @@ func serverDefault() *tls.Config {
}
// dockerCertDir returns a path to a directory to be consumed by tlsclientconfig.SetupCertificates() depending on ctx and hostPort.
func dockerCertDir(ctx *types.SystemContext, hostPort string) string {
if ctx != nil && ctx.DockerCertPath != "" {
return ctx.DockerCertPath
func dockerCertDir(sys *types.SystemContext, hostPort string) (string, error) {
if sys != nil && sys.DockerCertPath != "" {
return sys.DockerCertPath, nil
}
var hostCertDir string
if ctx != nil && ctx.DockerPerHostCertDirPath != "" {
hostCertDir = ctx.DockerPerHostCertDirPath
} else if ctx != nil && ctx.RootForImplicitAbsolutePaths != "" {
hostCertDir = filepath.Join(ctx.RootForImplicitAbsolutePaths, systemPerHostCertDirPath)
} else {
hostCertDir = systemPerHostCertDirPath
if sys != nil && sys.DockerPerHostCertDirPath != "" {
return filepath.Join(sys.DockerPerHostCertDirPath, hostPort), nil
}
return filepath.Join(hostCertDir, hostPort)
var (
hostCertDir string
fullCertDirPath string
)
for _, systemPerHostCertDirPath := range systemPerHostCertDirPaths {
if sys != nil && sys.RootForImplicitAbsolutePaths != "" {
hostCertDir = filepath.Join(sys.RootForImplicitAbsolutePaths, systemPerHostCertDirPath)
} else {
hostCertDir = systemPerHostCertDirPath
}
fullCertDirPath = filepath.Join(hostCertDir, hostPort)
_, err := os.Stat(fullCertDirPath)
if err == nil {
break
}
if os.IsNotExist(err) {
continue
}
if os.IsPermission(err) {
logrus.Debugf("error accessing certs directory due to permissions: %v", err)
continue
}
if err != nil {
return "", err
}
}
return fullCertDirPath, nil
}
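The effect is that /etc/containers/certs.d/<host:port> is consulted first and /etc/docker/certs.d/<host:port> second, with the first existing directory winning. The probe in isolation, as a minimal sketch (the host:port is an example):

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	hostPort := "registry.example.com:5000" // example registry host:port
	var dir string
	for _, base := range []string{"/etc/containers/certs.d", "/etc/docker/certs.d"} {
		dir = filepath.Join(base, hostPort)
		if _, err := os.Stat(dir); err == nil {
			break // first existing per-host directory wins
		}
		// on not-exist (or a permission error, which the real code logs) try the next base
	}
	fmt.Println(dir) // as in dockerCertDir, the last candidate is returned if none exists
}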
// newDockerClientFromRef returns a new dockerClient instance for refHostname (a host as specified in the Docker image reference, not canonicalized to dockerRegistry)
// “write” specifies whether the client will be used for "write" access (in particular passed to lookaside.go:toplevelFromSection)
func newDockerClientFromRef(ctx *types.SystemContext, ref dockerReference, write bool, actions string) (*dockerClient, error) {
func newDockerClientFromRef(sys *types.SystemContext, ref dockerReference, write bool, actions string) (*dockerClient, error) {
registry := reference.Domain(ref.ref)
username, password, err := config.GetAuthentication(ctx, reference.Domain(ref.ref))
username, password, err := config.GetAuthentication(sys, registry)
if err != nil {
return nil, errors.Wrapf(err, "error getting username and password")
}
sigBase, err := configuredSignatureStorageBase(ctx, ref, write)
sigBase, err := configuredSignatureStorageBase(sys, ref, write)
if err != nil {
return nil, err
}
remoteName := reference.Path(ref.ref)
return newDockerClientWithDetails(ctx, registry, username, password, actions, sigBase, remoteName)
client, err := newDockerClient(sys, registry, ref.ref.Name())
if err != nil {
return nil, err
}
client.username = username
client.password = password
client.signatureBase = sigBase
client.scope.actions = actions
client.scope.remoteName = reference.Path(ref.ref)
return client, nil
}
// newDockerClientWithDetails returns a new dockerClient instance for the given parameters
func newDockerClientWithDetails(ctx *types.SystemContext, registry, username, password, actions string, sigBase signatureStorageBase, remoteName string) (*dockerClient, error) {
// newDockerClient returns a new dockerClient instance for the given registry
// and reference. The reference is used to query the registry configuration
// and can be either a registry (e.g., "registry.com[:5000]") or a repository
// (e.g., "registry.com[:5000][/some/namespace]/repo").
// Please note that newDockerClient does not set all members of dockerClient
// (e.g., username and password); those must be set by callers if necessary.
func newDockerClient(sys *types.SystemContext, registry, reference string) (*dockerClient, error) {
hostName := registry
if registry == dockerHostname {
registry = dockerRegistry
}
tr := tlsclientconfig.NewTransport()
tr.TLSClientConfig = serverDefault()
tlsClientConfig := serverDefault()
// It is undefined whether the host[:port] string for dockerHostname should be dockerHostname or dockerRegistry,
// because docker/docker does not read the certs.d subdirectory at all in that case. We use the user-visible
// dockerHostname here, because it is more symmetrical to read the configuration in that case as well, and because
// generally the UI hides the existence of the different dockerRegistry. But note that this behavior is
// undocumented and may change if docker/docker changes.
certDir := dockerCertDir(ctx, hostName)
if err := tlsclientconfig.SetupCertificates(certDir, tr.TLSClientConfig); err != nil {
certDir, err := dockerCertDir(sys, hostName)
if err != nil {
return nil, err
}
if err := tlsclientconfig.SetupCertificates(certDir, tlsClientConfig); err != nil {
return nil, err
}
if ctx != nil && ctx.DockerInsecureSkipTLSVerify {
tr.TLSClientConfig.InsecureSkipVerify = true
// Check if TLS verification shall be skipped (default=false) which can
// be specified in the sysregistriesv2 configuration.
skipVerify := false
reg, err := sysregistriesv2.FindRegistry(sys, reference)
if err != nil {
return nil, errors.Wrapf(err, "error loading registries")
}
if reg != nil {
skipVerify = reg.Insecure
}
tlsClientConfig.InsecureSkipVerify = skipVerify
return &dockerClient{
ctx: ctx,
registry: registry,
username: username,
password: password,
client: &http.Client{Transport: tr},
signatureBase: sigBase,
scope: authScope{
actions: actions,
remoteName: remoteName,
},
sys: sys,
registry: registry,
tlsClientConfig: tlsClientConfig,
}, nil
}
// CheckAuth validates the credentials by attempting to log into the registry
// returns an error if an error occcured while making the http request or the status code received was 401
func CheckAuth(ctx context.Context, sCtx *types.SystemContext, username, password, registry string) error {
newLoginClient, err := newDockerClientWithDetails(sCtx, registry, username, password, "", nil, "")
// returns an error if an error occurred while making the http request or the status code received was 401
func CheckAuth(ctx context.Context, sys *types.SystemContext, username, password, registry string) error {
client, err := newDockerClient(sys, registry, registry)
if err != nil {
return errors.Wrapf(err, "error creating new docker client")
}
client.username = username
client.password = password
resp, err := newLoginClient.makeRequest(ctx, "GET", "/v2/", nil, nil)
resp, err := client.makeRequest(ctx, "GET", "/v2/", nil, nil, v2Auth, nil)
if err != nil {
return err
}
@@ -198,26 +287,135 @@ func CheckAuth(ctx context.Context, sCtx *types.SystemContext, username, passwor
case http.StatusUnauthorized:
return ErrUnauthorizedForCredentials
default:
return errors.Errorf("error occured with status code %q", resp.StatusCode)
return errors.Errorf("error occured with status code %d (%s)", resp.StatusCode, http.StatusText(resp.StatusCode))
}
}
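A hedged usage sketch of CheckAuth (registry and credentials are placeholders; passing a nil SystemContext accepts the defaults):

package main

import (
	"context"
	"fmt"

	"github.com/containers/image/docker"
)

func main() {
	err := docker.CheckAuth(context.Background(), nil, "myuser", "mypassword", "registry.example.com")
	switch {
	case err == nil:
		fmt.Println("login OK")
	case err == docker.ErrUnauthorizedForCredentials:
		fmt.Println("bad credentials")
	default:
		fmt.Println("error:", err)
	}
}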
// SearchResult holds the information of each matching image
// It matches the output returned by the v1 endpoint
type SearchResult struct {
Name string `json:"name"`
Description string `json:"description"`
// StarCount states the number of stars the image has
StarCount int `json:"star_count"`
IsTrusted bool `json:"is_trusted"`
// IsAutomated states whether the image is an automated build
IsAutomated bool `json:"is_automated"`
// IsOfficial states whether the image is an official build
IsOfficial bool `json:"is_official"`
}
// SearchRegistry queries a registry for images that contain "image" in their name
// The limit is the max number of results desired
// Note: The limit value doesn't work with all registries
// for example registry.access.redhat.com returns all the results without limiting it to the limit value
func SearchRegistry(ctx context.Context, sys *types.SystemContext, registry, image string, limit int) ([]SearchResult, error) {
type V2Results struct {
// Repositories holds the results returned by the /v2/_catalog endpoint
Repositories []string `json:"repositories"`
}
type V1Results struct {
// Results holds the results returned by the /v1/search endpoint
Results []SearchResult `json:"results"`
}
v2Res := &V2Results{}
v1Res := &V1Results{}
// Get credentials from authfile for the underlying hostname
username, password, err := config.GetAuthentication(sys, registry)
if err != nil {
return nil, errors.Wrapf(err, "error getting username and password")
}
// The /v2/_catalog endpoint has been disabled for docker.io therefore
// the call made to that endpoint will fail. So using the v1 hostname
// for docker.io for simplicity of implementation and the fact that it
// returns search results.
hostname := registry
if registry == dockerHostname {
hostname = dockerV1Hostname
}
client, err := newDockerClient(sys, hostname, registry)
if err != nil {
return nil, errors.Wrapf(err, "error creating new docker client")
}
client.username = username
client.password = password
// Only try the v1 search endpoint if the search query is not empty. If it is
// empty skip to the v2 endpoint.
if image != "" {
// set up the query values for the v1 endpoint
u := url.URL{
Path: "/v1/search",
}
q := u.Query()
q.Set("q", image)
q.Set("n", strconv.Itoa(limit))
u.RawQuery = q.Encode()
logrus.Debugf("trying to talk to v1 search endpoint")
resp, err := client.makeRequest(ctx, "GET", u.String(), nil, nil, noAuth, nil)
if err != nil {
logrus.Debugf("error getting search results from v1 endpoint %q: %v", registry, err)
} else {
defer resp.Body.Close()
if resp.StatusCode != http.StatusOK {
logrus.Debugf("error getting search results from v1 endpoint %q, status code %d (%s)", registry, resp.StatusCode, http.StatusText(resp.StatusCode))
} else {
if err := json.NewDecoder(resp.Body).Decode(v1Res); err != nil {
return nil, err
}
return v1Res.Results, nil
}
}
}
logrus.Debugf("trying to talk to v2 search endpoint")
resp, err := client.makeRequest(ctx, "GET", "/v2/_catalog", nil, nil, v2Auth, nil)
if err != nil {
logrus.Debugf("error getting search results from v2 endpoint %q: %v", registry, err)
} else {
defer resp.Body.Close()
if resp.StatusCode != http.StatusOK {
logrus.Errorf("error getting search results from v2 endpoint %q, status code %d (%s)", registry, resp.StatusCode, http.StatusText(resp.StatusCode))
} else {
if err := json.NewDecoder(resp.Body).Decode(v2Res); err != nil {
return nil, err
}
searchRes := []SearchResult{}
for _, repo := range v2Res.Repositories {
if strings.Contains(repo, image) {
res := SearchResult{
Name: repo,
}
searchRes = append(searchRes, res)
}
}
return searchRes, nil
}
}
return nil, errors.Wrapf(err, "couldn't search registry %q", registry)
}
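And a matching sketch for SearchRegistry (registry, query, and limit are examples; as the comment above notes, not every registry honors the limit):

package main

import (
	"context"
	"fmt"

	"github.com/containers/image/docker"
)

func main() {
	results, err := docker.SearchRegistry(context.Background(), nil, "docker.io", "busybox", 10)
	if err != nil {
		panic(err)
	}
	for _, r := range results {
		fmt.Println(r.Name, r.StarCount, r.IsOfficial)
	}
}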
// makeRequest creates and executes a http.Request with the specified parameters, adding authentication and TLS options for the Docker client.
// The host name and scheme are taken from the client or autodetected, and the path is relative to it, i.e. the path usually starts with /v2/.
func (c *dockerClient) makeRequest(ctx context.Context, method, path string, headers map[string][]string, stream io.Reader) (*http.Response, error) {
func (c *dockerClient) makeRequest(ctx context.Context, method, path string, headers map[string][]string, stream io.Reader, auth sendAuth, extraScope *authScope) (*http.Response, error) {
if err := c.detectProperties(ctx); err != nil {
return nil, err
}
url := fmt.Sprintf("%s://%s%s", c.scheme, c.registry, path)
return c.makeRequestToResolvedURL(ctx, method, url, headers, stream, -1, true)
return c.makeRequestToResolvedURL(ctx, method, url, headers, stream, -1, auth, extraScope)
}
// makeRequestToResolvedURL creates and executes a http.Request with the specified parameters, adding authentication and TLS options for the Docker client.
// streamLen, if not -1, specifies the length of the data expected on stream.
// makeRequest should generally be preferred.
// TODO(runcom): too many arguments here, use a struct
func (c *dockerClient) makeRequestToResolvedURL(ctx context.Context, method, url string, headers map[string][]string, stream io.Reader, streamLen int64, sendAuth bool) (*http.Response, error) {
func (c *dockerClient) makeRequestToResolvedURL(ctx context.Context, method, url string, headers map[string][]string, stream io.Reader, streamLen int64, auth sendAuth, extraScope *authScope) (*http.Response, error) {
req, err := http.NewRequest(method, url, stream)
if err != nil {
return nil, err
@@ -232,11 +430,11 @@ func (c *dockerClient) makeRequestToResolvedURL(ctx context.Context, method, url
req.Header.Add(n, hh)
}
}
if c.ctx != nil && c.ctx.DockerRegistryUserAgent != "" {
req.Header.Add("User-Agent", c.ctx.DockerRegistryUserAgent)
if c.sys != nil && c.sys.DockerRegistryUserAgent != "" {
req.Header.Add("User-Agent", c.sys.DockerRegistryUserAgent)
}
if sendAuth {
if err := c.setupRequestAuth(req); err != nil {
if auth == v2Auth {
if err := c.setupRequestAuth(req, extraScope); err != nil {
return nil, err
}
}
@@ -255,7 +453,7 @@ func (c *dockerClient) makeRequestToResolvedURL(ctx context.Context, method, url
// 2) gcr.io is sending 401 without a WWW-Authenticate header in the real request
//
// debugging: https://github.com/containers/image/pull/211#issuecomment-273426236 and follows up
func (c *dockerClient) setupRequestAuth(req *http.Request) error {
func (c *dockerClient) setupRequestAuth(req *http.Request, extraScope *authScope) error {
if len(c.challenges) == 0 {
return nil
}
@@ -267,24 +465,27 @@ func (c *dockerClient) setupRequestAuth(req *http.Request) error {
req.SetBasicAuth(c.username, c.password)
return nil
case "bearer":
if c.token == nil || time.Now().After(c.tokenExpiration) {
realm, ok := challenge.Parameters["realm"]
if !ok {
return errors.Errorf("missing realm in bearer auth challenge")
}
service, _ := challenge.Parameters["service"] // Will be "" if not present
var scope string
if c.scope.remoteName != "" && c.scope.actions != "" {
scope = fmt.Sprintf("repository:%s:%s", c.scope.remoteName, c.scope.actions)
}
token, err := c.getBearerToken(req.Context(), realm, service, scope)
cacheKey := ""
scopes := []authScope{c.scope}
if extraScope != nil {
// Using ':' as a separator here is unambiguous because getBearerToken below uses the same separator when formatting a remote request (and because repository names can't contain colons).
cacheKey = fmt.Sprintf("%s:%s", extraScope.remoteName, extraScope.actions)
scopes = append(scopes, *extraScope)
}
var token bearerToken
t, inCache := c.tokenCache.Load(cacheKey)
if inCache {
token = t.(bearerToken)
}
if !inCache || time.Now().After(token.expirationTime) {
t, err := c.getBearerToken(req.Context(), challenge, scopes)
if err != nil {
return err
}
c.token = token
c.tokenExpiration = token.IssuedAt.Add(time.Duration(token.ExpiresIn) * time.Second)
token = *t
c.tokenCache.Store(cacheKey, token)
}
req.Header.Set("Authorization", fmt.Sprintf("Bearer %s", c.token.Token))
req.Header.Set("Authorization", fmt.Sprintf("Bearer %s", token.Token))
return nil
default:
logrus.Debugf("no handler for %s authentication", challenge.Scheme)
@@ -294,23 +495,34 @@ func (c *dockerClient) setupRequestAuth(req *http.Request) error {
return nil
}
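Replacing the single token/tokenExpiration pair with a sync.Map keyed by the extra scope lets concurrent requests against different repositories (e.g. during blob mounting) hold independent tokens. The caching pattern in isolation, with a hypothetical fetch function standing in for getBearerToken:

package main

import (
	"fmt"
	"sync"
	"time"
)

type token struct {
	value      string
	expiration time.Time
}

var cache sync.Map // key: scope string, value: token

// getToken mirrors setupRequestAuth above: reuse a cached, unexpired token
// for the key, otherwise fetch and store a fresh one.
func getToken(key string, fetch func() token) token {
	if v, ok := cache.Load(key); ok {
		t := v.(token)
		if time.Now().Before(t.expiration) {
			return t
		}
	}
	t := fetch()
	cache.Store(key, t)
	return t
}

func main() {
	fetch := func() token { // hypothetical stand-in for getBearerToken
		return token{value: "abc", expiration: time.Now().Add(time.Minute)}
	}
	fmt.Println(getToken("", fetch).value)                   // primary scope only
	fmt.Println(getToken("library/other:pull", fetch).value) // with an extra scope
}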
func (c *dockerClient) getBearerToken(ctx context.Context, realm, service, scope string) (*bearerToken, error) {
func (c *dockerClient) getBearerToken(ctx context.Context, challenge challenge, scopes []authScope) (*bearerToken, error) {
realm, ok := challenge.Parameters["realm"]
if !ok {
return nil, errors.Errorf("missing realm in bearer auth challenge")
}
authReq, err := http.NewRequest("GET", realm, nil)
if err != nil {
return nil, err
}
authReq = authReq.WithContext(ctx)
getParams := authReq.URL.Query()
if service != "" {
if c.username != "" {
getParams.Add("account", c.username)
}
if service, ok := challenge.Parameters["service"]; ok && service != "" {
getParams.Add("service", service)
}
if scope != "" {
getParams.Add("scope", scope)
for _, scope := range scopes {
if scope.remoteName != "" && scope.actions != "" {
getParams.Add("scope", fmt.Sprintf("repository:%s:%s", scope.remoteName, scope.actions))
}
}
authReq.URL.RawQuery = getParams.Encode()
if c.username != "" && c.password != "" {
authReq.SetBasicAuth(c.username, c.password)
}
logrus.Debugf("%s %s", authReq.Method, authReq.URL.String())
tr := tlsclientconfig.NewTransport()
// TODO(runcom): insecure for now to contact the external token service
tr.TLSClientConfig = &tls.Config{InsecureSkipVerify: true}
@@ -326,44 +538,39 @@ func (c *dockerClient) getBearerToken(ctx context.Context, realm, service, scope
case http.StatusOK:
break
default:
return nil, errors.Errorf("unexpected http code: %d, URL: %s", res.StatusCode, authReq.URL)
return nil, errors.Errorf("unexpected http code: %d (%s), URL: %s", res.StatusCode, http.StatusText(res.StatusCode), authReq.URL)
}
tokenBlob, err := ioutil.ReadAll(res.Body)
if err != nil {
return nil, err
}
var token bearerToken
if err := json.Unmarshal(tokenBlob, &token); err != nil {
return nil, err
}
if token.ExpiresIn < minimumTokenLifetimeSeconds {
token.ExpiresIn = minimumTokenLifetimeSeconds
logrus.Debugf("Increasing token expiration to: %d seconds", token.ExpiresIn)
}
if token.IssuedAt.IsZero() {
token.IssuedAt = time.Now().UTC()
}
return &token, nil
return newBearerTokenFromJSONBlob(tokenBlob)
}
// detectProperties detects various properties of the registry.
// See the dockerClient documentation for members which are affected by this.
func (c *dockerClient) detectProperties(ctx context.Context) error {
if c.scheme != "" {
return nil
// detectPropertiesHelper performs the work of detectProperties which executes
// it at most once.
func (c *dockerClient) detectPropertiesHelper(ctx context.Context) error {
// We overwrite the TLS clients `InsecureSkipVerify` only if explicitly
// specified by the system context
if c.sys != nil && c.sys.DockerInsecureSkipTLSVerify != types.OptionalBoolUndefined {
c.tlsClientConfig.InsecureSkipVerify = c.sys.DockerInsecureSkipTLSVerify == types.OptionalBoolTrue
}
tr := tlsclientconfig.NewTransport()
tr.TLSClientConfig = c.tlsClientConfig
c.client = &http.Client{Transport: tr}
ping := func(scheme string) error {
url := fmt.Sprintf(resolvedPingV2URL, scheme, c.registry)
resp, err := c.makeRequestToResolvedURL(ctx, "GET", url, nil, nil, -1, true)
logrus.Debugf("Ping %s err %#v", url, err)
resp, err := c.makeRequestToResolvedURL(ctx, "GET", url, nil, nil, -1, noAuth, nil)
if err != nil {
logrus.Debugf("Ping %s err %s (%#v)", url, err.Error(), err)
return err
}
defer resp.Body.Close()
logrus.Debugf("Ping %s status %d", url, resp.StatusCode)
if resp.StatusCode != http.StatusOK && resp.StatusCode != http.StatusUnauthorized {
return errors.Errorf("error pinging repository, response code %d", resp.StatusCode)
return errors.Errorf("error pinging registry %s, response code %d (%s)", c.registry, resp.StatusCode, http.StatusText(resp.StatusCode))
}
c.challenges = parseAuthHeader(resp.Header)
c.scheme = scheme
@@ -371,20 +578,20 @@ func (c *dockerClient) detectProperties(ctx context.Context) error {
return nil
}
err := ping("https")
if err != nil && c.ctx != nil && c.ctx.DockerInsecureSkipTLSVerify {
if err != nil && c.tlsClientConfig.InsecureSkipVerify {
err = ping("http")
}
if err != nil {
err = errors.Wrap(err, "pinging docker registry returned")
if c.ctx != nil && c.ctx.DockerDisableV1Ping {
if c.sys != nil && c.sys.DockerDisableV1Ping {
return err
}
// best effort to understand if we're talking to a V1 registry
pingV1 := func(scheme string) bool {
url := fmt.Sprintf(resolvedPingV1URL, scheme, c.registry)
resp, err := c.makeRequestToResolvedURL(ctx, "GET", url, nil, nil, -1, true)
logrus.Debugf("Ping %s err %#v", url, err)
resp, err := c.makeRequestToResolvedURL(ctx, "GET", url, nil, nil, -1, noAuth, nil)
if err != nil {
logrus.Debugf("Ping %s err %s (%#v)", url, err.Error(), err)
return false
}
defer resp.Body.Close()
@@ -395,7 +602,7 @@ func (c *dockerClient) detectProperties(ctx context.Context) error {
return true
}
isV1 := pingV1("https")
if !isV1 && c.ctx != nil && c.ctx.DockerInsecureSkipTLSVerify {
if !isV1 && c.tlsClientConfig.InsecureSkipVerify {
isV1 = pingV1("http")
}
if isV1 {
@@ -405,17 +612,24 @@ func (c *dockerClient) detectProperties(ctx context.Context) error {
return err
}
// detectProperties detects various properties of the registry.
// See the dockerClient documentation for members which are affected by this.
func (c *dockerClient) detectProperties(ctx context.Context) error {
c.detectPropertiesOnce.Do(func() { c.detectPropertiesError = c.detectPropertiesHelper(ctx) })
return c.detectPropertiesError
}
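Wrapping the probe in sync.Once and caching its error makes detectProperties idempotent and safe to call from concurrent requests. The idiom in isolation, as a minimal sketch with a hypothetical probe:

package main

import (
	"errors"
	"fmt"
	"sync"
)

type client struct {
	detectOnce  sync.Once
	detectError error
}

func (c *client) detect() error {
	c.detectOnce.Do(func() {
		// the expensive probe runs at most once; its result is remembered
		c.detectError = errors.New("ping failed") // hypothetical outcome
	})
	return c.detectError
}

func main() {
	c := &client{}
	fmt.Println(c.detect()) // runs the probe
	fmt.Println(c.detect()) // returns the cached error without probing again
}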
// getExtensionsSignatures returns signatures from the X-Registry-Supports-Signatures API extension,
// using the original data structures.
func (c *dockerClient) getExtensionsSignatures(ctx context.Context, ref dockerReference, manifestDigest digest.Digest) (*extensionSignatureList, error) {
path := fmt.Sprintf(extensionsSignaturePath, reference.Path(ref.ref), manifestDigest)
res, err := c.makeRequest(ctx, "GET", path, nil, nil)
res, err := c.makeRequest(ctx, "GET", path, nil, nil, v2Auth, nil)
if err != nil {
return nil, err
}
defer res.Body.Close()
if res.StatusCode != http.StatusOK {
return nil, client.HandleErrorResponse(res)
return nil, errors.Wrapf(client.HandleErrorResponse(res), "Error downloading signatures for %s in %s", manifestDigest, ref.ref.Name())
}
body, err := ioutil.ReadAll(res.Body)
if err != nil {

vendor/github.com/containers/image/docker/docker_image.go

@@ -5,6 +5,8 @@ import (
"encoding/json"
"fmt"
"net/http"
"net/url"
"strings"
"github.com/containers/image/docker/reference"
"github.com/containers/image/image"
@@ -22,12 +24,12 @@ type Image struct {
// newImage returns a new Image interface type after setting up
// a client to the registry hosting the given image.
// The caller must call .Close() on the returned Image.
func newImage(ctx *types.SystemContext, ref dockerReference) (types.ImageCloser, error) {
s, err := newImageSource(ctx, ref)
func newImage(ctx context.Context, sys *types.SystemContext, ref dockerReference) (types.ImageCloser, error) {
s, err := newImageSource(ctx, sys, ref)
if err != nil {
return nil, err
}
img, err := image.FromSource(ctx, s)
img, err := image.FromSource(ctx, sys, s)
if err != nil {
return nil, err
}
@@ -39,25 +41,67 @@ func (i *Image) SourceRefFullName() string {
return i.src.ref.ref.Name()
}
// GetRepositoryTags list all tags available in the repository. Note that this has no connection with the tag(s) used for this specific image, if any.
func (i *Image) GetRepositoryTags() ([]string, error) {
path := fmt.Sprintf(tagsPath, reference.Path(i.src.ref.ref))
// FIXME: Pass the context.Context
res, err := i.src.c.makeRequest(context.TODO(), "GET", path, nil, nil)
if err != nil {
return nil, err
}
defer res.Body.Close()
if res.StatusCode != http.StatusOK {
// print url also
return nil, errors.Errorf("Invalid status code returned when fetching tags list %d", res.StatusCode)
}
type tagsRes struct {
Tags []string
}
tags := &tagsRes{}
if err := json.NewDecoder(res.Body).Decode(tags); err != nil {
return nil, err
}
return tags.Tags, nil
// GetRepositoryTags lists all tags available in the repository. The tag
// provided inside the ImageReference will be ignored. (This is a
// backward-compatible shim method which calls the module-level
// GetRepositoryTags)
func (i *Image) GetRepositoryTags(ctx context.Context) ([]string, error) {
return GetRepositoryTags(ctx, i.src.c.sys, i.src.ref)
}
// GetRepositoryTags lists all tags available in the repository. The tag
// provided inside the ImageReference will be ignored.
func GetRepositoryTags(ctx context.Context, sys *types.SystemContext, ref types.ImageReference) ([]string, error) {
dr, ok := ref.(dockerReference)
if !ok {
return nil, errors.Errorf("ref must be a dockerReference")
}
path := fmt.Sprintf(tagsPath, reference.Path(dr.ref))
client, err := newDockerClientFromRef(sys, dr, false, "pull")
if err != nil {
return nil, errors.Wrap(err, "failed to create client")
}
tags := make([]string, 0)
for {
res, err := client.makeRequest(ctx, "GET", path, nil, nil, v2Auth, nil)
if err != nil {
return nil, err
}
defer res.Body.Close()
if res.StatusCode != http.StatusOK {
// print url also
return nil, errors.Errorf("Invalid status code returned when fetching tags list %d (%s)", res.StatusCode, http.StatusText(res.StatusCode))
}
var tagsHolder struct {
Tags []string
}
if err = json.NewDecoder(res.Body).Decode(&tagsHolder); err != nil {
return nil, err
}
tags = append(tags, tagsHolder.Tags...)
link := res.Header.Get("Link")
if link == "" {
break
}
linkURLStr := strings.Trim(strings.Split(link, ";")[0], "<>")
linkURL, err := url.Parse(linkURLStr)
if err != nil {
return tags, err
}
// can be relative or absolute, but we only want the path (and I
// guess we're in trouble if it forwards to a new place...)
path = linkURL.Path
if linkURL.RawQuery != "" {
path += "?"
path += linkURL.RawQuery
}
}
return tags, nil
}
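A registry that paginates tags answers with a header like Link: </v2/library/busybox/tags/list?last=foo&n=50>; rel="next", and the loop above keeps requesting the extracted path until the header disappears. Just the extraction step, as a sketch (the header value is an example):

package main

import (
	"fmt"
	"net/url"
	"strings"
)

func main() {
	link := `</v2/library/busybox/tags/list?last=foo&n=50>; rel="next"` // example Link header
	linkURLStr := strings.Trim(strings.Split(link, ";")[0], "<>")
	linkURL, err := url.Parse(linkURLStr)
	if err != nil {
		panic(err)
	}
	path := linkURL.Path // relative or absolute; only path and query are reused
	if linkURL.RawQuery != "" {
		path += "?" + linkURL.RawQuery
	}
	fmt.Println(path) // /v2/library/busybox/tags/list?last=foo&n=50
}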

vendor/github.com/containers/image/docker/docker_image_dest.go

@@ -12,9 +12,11 @@ import (
"net/url"
"os"
"path/filepath"
"strings"
"github.com/containers/image/docker/reference"
"github.com/containers/image/manifest"
"github.com/containers/image/pkg/blobinfocache/none"
"github.com/containers/image/types"
"github.com/docker/distribution/registry/api/errcode"
"github.com/docker/distribution/registry/api/v2"
@@ -33,8 +35,8 @@ type dockerImageDestination struct {
}
// newImageDestination creates a new ImageDestination for the specified image reference.
func newImageDestination(ctx *types.SystemContext, ref dockerReference) (types.ImageDestination, error) {
c, err := newDockerClientFromRef(ctx, ref, true, "pull,push")
func newImageDestination(sys *types.SystemContext, ref dockerReference) (types.ImageDestination, error) {
c, err := newDockerClientFromRef(sys, ref, true, "pull,push")
if err != nil {
return nil, err
}
@@ -66,8 +68,8 @@ func (d *dockerImageDestination) SupportedManifestMIMETypes() []string {
// SupportsSignatures returns an error (to be displayed to the user) if the destination certainly can't store signatures.
// Note: It is still possible for PutSignatures to fail if SupportsSignatures returns nil.
func (d *dockerImageDestination) SupportsSignatures() error {
if err := d.c.detectProperties(context.TODO()); err != nil {
func (d *dockerImageDestination) SupportsSignatures(ctx context.Context) error {
if err := d.c.detectProperties(ctx); err != nil {
return err
}
switch {
@@ -80,9 +82,8 @@ func (d *dockerImageDestination) SupportsSignatures() error {
}
}
// ShouldCompressLayers returns true iff it is desirable to compress layer blobs written to this destination.
func (d *dockerImageDestination) ShouldCompressLayers() bool {
return true
func (d *dockerImageDestination) DesiredLayerCompression() types.LayerCompression {
return types.Compress
}
// AcceptsForeignLayerURLs returns false iff foreign layers in manifest should be actually
@@ -96,6 +97,13 @@ func (d *dockerImageDestination) MustMatchRuntimeOS() bool {
return false
}
// IgnoresEmbeddedDockerReference returns true iff the destination does not care about Image.EmbeddedDockerReferenceConflicts(),
// and would prefer to receive an unmodified manifest instead of one modified for the destination.
// Does not make a difference if Reference().DockerReference() is nil.
func (d *dockerImageDestination) IgnoresEmbeddedDockerReference() bool {
return false // We do want the manifest updated; older registry versions refuse manifests if the embedded reference does not match.
}
// sizeCounter is an io.Writer which only counts the total size of its input.
type sizeCounter struct{ size int64 }
@@ -104,34 +112,43 @@ func (c *sizeCounter) Write(p []byte) (n int, err error) {
return len(p), nil
}
// HasThreadSafePutBlob indicates whether PutBlob can be executed concurrently.
func (d *dockerImageDestination) HasThreadSafePutBlob() bool {
return true
}
// PutBlob writes contents of stream and returns data representing the result (with all data filled in).
// inputInfo.Digest can be optionally provided if known; it is not mandatory for the implementation to verify it.
// inputInfo.Size is the expected length of stream, if known.
// May update cache.
// WARNING: The contents of stream are being verified on the fly. Until stream.Read() returns io.EOF, the contents of the data SHOULD NOT be available
// to any other readers for download using the supplied digest.
// If stream.Read() at any time, ESPECIALLY at end of input, returns an error, PutBlob MUST 1) fail, and 2) delete any data stored so far.
func (d *dockerImageDestination) PutBlob(stream io.Reader, inputInfo types.BlobInfo) (types.BlobInfo, error) {
func (d *dockerImageDestination) PutBlob(ctx context.Context, stream io.Reader, inputInfo types.BlobInfo, cache types.BlobInfoCache, isConfig bool) (types.BlobInfo, error) {
if inputInfo.Digest.String() != "" {
haveBlob, size, err := d.HasBlob(inputInfo)
// This should not really be necessary, at least the copy code calls TryReusingBlob automatically.
// Still, we need to check, if only because the "initiate upload" endpoint does not have a documented "blob already exists" return value.
// But we do that with NoCache, so that it _only_ checks the primary destination, instead of trying all mount candidates _again_.
haveBlob, reusedInfo, err := d.TryReusingBlob(ctx, inputInfo, none.NoCache, false)
if err != nil {
return types.BlobInfo{}, err
}
if haveBlob {
return types.BlobInfo{Digest: inputInfo.Digest, Size: size}, nil
return reusedInfo, nil
}
}
// FIXME? Chunked upload, progress reporting, etc.
uploadPath := fmt.Sprintf(blobUploadPath, reference.Path(d.ref.ref))
logrus.Debugf("Uploading %s", uploadPath)
res, err := d.c.makeRequest(context.TODO(), "POST", uploadPath, nil, nil)
res, err := d.c.makeRequest(ctx, "POST", uploadPath, nil, nil, v2Auth, nil)
if err != nil {
return types.BlobInfo{}, err
}
defer res.Body.Close()
if res.StatusCode != http.StatusAccepted {
logrus.Debugf("Error initiating layer upload, response %#v", *res)
return types.BlobInfo{}, errors.Wrapf(client.HandleErrorResponse(res), "Error initiating layer upload to %s", uploadPath)
return types.BlobInfo{}, errors.Wrapf(client.HandleErrorResponse(res), "Error initiating layer upload to %s in %s", uploadPath, d.c.registry)
}
uploadLocation, err := res.Location()
if err != nil {
@@ -141,7 +158,7 @@ func (d *dockerImageDestination) PutBlob(stream io.Reader, inputInfo types.BlobI
digester := digest.Canonical.Digester()
sizeCounter := &sizeCounter{}
tee := io.TeeReader(stream, io.MultiWriter(digester.Hash(), sizeCounter))
res, err = d.c.makeRequestToResolvedURL(context.TODO(), "PATCH", uploadLocation.String(), map[string][]string{"Content-Type": {"application/octet-stream"}}, tee, inputInfo.Size, true)
res, err = d.c.makeRequestToResolvedURL(ctx, "PATCH", uploadLocation.String(), map[string][]string{"Content-Type": {"application/octet-stream"}}, tee, inputInfo.Size, v2Auth, nil)
if err != nil {
logrus.Debugf("Error uploading layer chunked, response %#v", res)
return types.BlobInfo{}, err
@@ -154,13 +171,13 @@ func (d *dockerImageDestination) PutBlob(stream io.Reader, inputInfo types.BlobI
return types.BlobInfo{}, errors.Wrap(err, "Error determining upload URL")
}
// FIXME: DELETE uploadLocation on failure
// FIXME: DELETE uploadLocation on failure (does not really work in docker/distribution servers, which incorrectly require the "delete" action in the token's scope)
locationQuery := uploadLocation.Query()
// TODO: check inputInfo.Digest == computedDigest https://github.com/containers/image/pull/70#discussion_r77646717
locationQuery.Set("digest", computedDigest.String())
uploadLocation.RawQuery = locationQuery.Encode()
res, err = d.c.makeRequestToResolvedURL(context.TODO(), "PUT", uploadLocation.String(), map[string][]string{"Content-Type": {"application/octet-stream"}}, nil, -1, true)
res, err = d.c.makeRequestToResolvedURL(ctx, "PUT", uploadLocation.String(), map[string][]string{"Content-Type": {"application/octet-stream"}}, nil, -1, v2Auth, nil)
if err != nil {
return types.BlobInfo{}, err
}
@@ -171,21 +188,17 @@ func (d *dockerImageDestination) PutBlob(stream io.Reader, inputInfo types.BlobI
}
logrus.Debugf("Upload of layer %s complete", computedDigest)
cache.RecordKnownLocation(d.ref.Transport(), bicTransportScope(d.ref), computedDigest, newBICLocationReference(d.ref))
return types.BlobInfo{Digest: computedDigest, Size: sizeCounter.size}, nil
}
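The digest and size of the uploaded blob are computed on the fly: the PATCH body is a TeeReader that feeds a digester and a byte counter while streaming. The counting side in isolation, as a minimal sketch using the go-digest package imported above:

package main

import (
	"fmt"
	"io"
	"io/ioutil"
	"strings"

	digest "github.com/opencontainers/go-digest"
)

type sizeCounter struct{ size int64 }

func (c *sizeCounter) Write(p []byte) (int, error) {
	c.size += int64(len(p))
	return len(p), nil
}

func main() {
	stream := strings.NewReader("example blob contents") // stand-in for the layer stream
	digester := digest.Canonical.Digester()
	counter := &sizeCounter{}
	tee := io.TeeReader(stream, io.MultiWriter(digester.Hash(), counter))
	// The real code sends tee as the PATCH request body; draining it has the same effect here.
	if _, err := io.Copy(ioutil.Discard, tee); err != nil {
		panic(err)
	}
	fmt.Println(digester.Digest(), counter.size)
}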
// HasBlob returns true iff the image destination already contains a blob with the matching digest which can be reapplied using ReapplyBlob.
// Unlike PutBlob, the digest can not be empty. If HasBlob returns true, the size of the blob must also be returned.
// If the destination does not contain the blob, or it is unknown, HasBlob ordinarily returns (false, -1, nil);
// blobExists returns true iff repo contains a blob with digest, and if so, also its size.
// If the destination does not contain the blob, or it is unknown, blobExists ordinarily returns (false, -1, nil);
// it returns a non-nil error only on an unexpected failure.
func (d *dockerImageDestination) HasBlob(info types.BlobInfo) (bool, int64, error) {
if info.Digest == "" {
return false, -1, errors.Errorf(`"Can not check for a blob with unknown digest`)
}
checkPath := fmt.Sprintf(blobsPath, reference.Path(d.ref.ref), info.Digest.String())
func (d *dockerImageDestination) blobExists(ctx context.Context, repo reference.Named, digest digest.Digest, extraScope *authScope) (bool, int64, error) {
checkPath := fmt.Sprintf(blobsPath, reference.Path(repo), digest.String())
logrus.Debugf("Checking %s", checkPath)
res, err := d.c.makeRequest(context.TODO(), "HEAD", checkPath, nil, nil)
res, err := d.c.makeRequest(ctx, "HEAD", checkPath, nil, nil, v2Auth, extraScope)
if err != nil {
return false, -1, err
}
@@ -196,24 +209,144 @@ func (d *dockerImageDestination) HasBlob(info types.BlobInfo) (bool, int64, erro
return true, getBlobSize(res), nil
case http.StatusUnauthorized:
logrus.Debugf("... not authorized")
return false, -1, errors.Errorf("not authorized to read from destination repository %s", reference.Path(d.ref.ref))
return false, -1, errors.Wrapf(client.HandleErrorResponse(res), "Error checking whether a blob %s exists in %s", digest, repo.Name())
case http.StatusNotFound:
logrus.Debugf("... not present")
return false, -1, nil
default:
return false, -1, errors.Errorf("failed to read from destination repository %s: %v", reference.Path(d.ref.ref), http.StatusText(res.StatusCode))
return false, -1, errors.Errorf("failed to read from destination repository %s: %d (%s)", reference.Path(d.ref.ref), res.StatusCode, http.StatusText(res.StatusCode))
}
}
func (d *dockerImageDestination) ReapplyBlob(info types.BlobInfo) (types.BlobInfo, error) {
return info, nil
// mountBlob tries to mount blob srcDigest from srcRepo to the current destination.
func (d *dockerImageDestination) mountBlob(ctx context.Context, srcRepo reference.Named, srcDigest digest.Digest, extraScope *authScope) error {
u := url.URL{
Path: fmt.Sprintf(blobUploadPath, reference.Path(d.ref.ref)),
RawQuery: url.Values{
"mount": {srcDigest.String()},
"from": {reference.Path(srcRepo)},
}.Encode(),
}
mountPath := u.String()
logrus.Debugf("Trying to mount %s", mountPath)
res, err := d.c.makeRequest(ctx, "POST", mountPath, nil, nil, v2Auth, extraScope)
if err != nil {
return err
}
defer res.Body.Close()
switch res.StatusCode {
case http.StatusCreated:
logrus.Debugf("... mount OK")
return nil
case http.StatusAccepted:
// Oops, the mount was ignored - either the registry does not support that yet, or the blob does not exist; the registry has started an ordinary upload process.
// Abort, and let the ultimate caller do an upload when it's ready, instead.
// NOTE: This does not really work in docker/distribution servers, which incorrectly require the "delete" action in the token's scope, and is thus entirely untested.
uploadLocation, err := res.Location()
if err != nil {
return errors.Wrap(err, "Error determining upload URL after a mount attempt")
}
logrus.Debugf("... started an upload instead of mounting, trying to cancel at %s", uploadLocation.String())
res2, err := d.c.makeRequestToResolvedURL(ctx, "DELETE", uploadLocation.String(), nil, nil, -1, v2Auth, extraScope)
if err != nil {
logrus.Debugf("Error trying to cancel an inadvertent upload: %s", err)
} else {
defer res2.Body.Close()
if res2.StatusCode != http.StatusNoContent {
logrus.Debugf("Error trying to cancel an inadvertent upload, status %s", http.StatusText(res.StatusCode))
}
}
// Anyway, if canceling the upload fails, ignore it and return the more important error:
return fmt.Errorf("Mounting %s from %s to %s started an upload instead", srcDigest, srcRepo.Name(), d.ref.ref.Name())
default:
logrus.Debugf("Error mounting, response %#v", *res)
return errors.Wrapf(client.HandleErrorResponse(res), "Error mounting %s from %s to %s", srcDigest, srcRepo.Name(), d.ref.ref.Name())
}
}
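Cross-repository mounting is a single POST to the destination's upload endpoint with mount and from query parameters; 201 Created means the registry linked the blob, while 202 Accepted means it fell back to opening an ordinary upload, which mountBlob then tries to cancel. Constructing that request URL in isolation (repository names and digest are examples):

package main

import (
	"fmt"
	"net/url"
)

func main() {
	u := url.URL{
		Path: "/v2/library/destrepo/blobs/uploads/", // blobUploadPath for the destination
		RawQuery: url.Values{
			"mount": {"sha256:deadbeef"}, // example digest; a real one is 64 hex characters
			"from":  {"library/srcrepo"},
		}.Encode(),
	}
	fmt.Println(u.String()) // /v2/library/destrepo/blobs/uploads/?from=library%2Fsrcrepo&mount=sha256%3Adeadbeef
}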
// TryReusingBlob checks whether the transport already contains, or can efficiently reuse, a blob, and if so, applies it to the current destination
// (e.g. if the blob is a filesystem layer, this signifies that the changes it describes need to be applied again when composing a filesystem tree).
// info.Digest must not be empty.
// If canSubstitute, TryReusingBlob can use an equivalent of the desired blob; in that case the returned info may not match the input.
// If the blob has been successfully reused, returns (true, info, nil); info must contain at least a digest and size.
// If the transport can not reuse the requested blob, TryReusingBlob returns (false, {}, nil); it returns a non-nil error only on an unexpected failure.
// May use and/or update cache.
func (d *dockerImageDestination) TryReusingBlob(ctx context.Context, info types.BlobInfo, cache types.BlobInfoCache, canSubstitute bool) (bool, types.BlobInfo, error) {
if info.Digest == "" {
return false, types.BlobInfo{}, errors.Errorf("Can not check for a blob with unknown digest")
}
// First, check whether the blob happens to already exist at the destination.
exists, size, err := d.blobExists(ctx, d.ref.ref, info.Digest, nil)
if err != nil {
return false, types.BlobInfo{}, err
}
if exists {
cache.RecordKnownLocation(d.ref.Transport(), bicTransportScope(d.ref), info.Digest, newBICLocationReference(d.ref))
return true, types.BlobInfo{Digest: info.Digest, Size: size}, nil
}
// Then try reusing blobs from other locations.
for _, candidate := range cache.CandidateLocations(d.ref.Transport(), bicTransportScope(d.ref), info.Digest, canSubstitute) {
candidateRepo, err := parseBICLocationReference(candidate.Location)
if err != nil {
logrus.Debugf("Error parsing BlobInfoCache location reference: %s", err)
continue
}
logrus.Debugf("Trying to reuse cached location %s in %s", candidate.Digest.String(), candidateRepo.Name())
// Sanity checks:
if reference.Domain(candidateRepo) != reference.Domain(d.ref.ref) {
logrus.Debugf("... Internal error: domain %s does not match destination %s", reference.Domain(candidateRepo), reference.Domain(d.ref.ref))
continue
}
if candidateRepo.Name() == d.ref.ref.Name() && candidate.Digest == info.Digest {
logrus.Debug("... Already tried the primary destination")
continue
}
// Whatever happens here, don't abort the entire operation. It's likely we just don't have permissions, and if it is a critical network error, we will find out soon enough anyway.
// Checking candidateRepo, and mounting from it, requires an
// expanded token scope.
extraScope := &authScope{
remoteName: reference.Path(candidateRepo),
actions: "pull",
}
// This existence check is not, strictly speaking, necessary: We only _really_ need it to get the blob size, and we could record that in the cache instead.
// But a "failed" d.mountBlob currently leaves around an unterminated server-side upload, which we would try to cancel.
// So, without this existence check, it would be 1 request on success, 2 requests on failure; with it, it is 2 requests on success, 1 request on failure.
// On success we avoid the actual costly upload; so, in a sense, the success case is "free", but failures are always costly.
// Even worse, docker/distribution does not actually reasonably implement canceling uploads
// (it would require a "delete" action in the token, and Quay does not give that to anyone, so we can't ask);
// so, be a nice client and don't create unnecessary upload sessions on the server.
exists, size, err := d.blobExists(ctx, candidateRepo, candidate.Digest, extraScope)
if err != nil {
logrus.Debugf("... Failed: %v", err)
continue
}
if !exists {
// FIXME? Should we drop the blob from cache here (and elsewhere?)?
continue // logrus.Debug() already happened in blobExists
}
if candidateRepo.Name() != d.ref.ref.Name() {
if err := d.mountBlob(ctx, candidateRepo, candidate.Digest, extraScope); err != nil {
logrus.Debugf("... Mount failed: %v", err)
continue
}
}
cache.RecordKnownLocation(d.ref.Transport(), bicTransportScope(d.ref), candidate.Digest, newBICLocationReference(d.ref))
return true, types.BlobInfo{Digest: candidate.Digest, Size: size}, nil
}
return false, types.BlobInfo{}, nil
}
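A hypothetical caller sketch (uploadLayer is an assumption, not part of this package) showing how a copy pipeline would consult TryReusingBlob with canSubstitute=true before paying for a full upload:

package copyutil

import (
	"context"

	"github.com/containers/image/types"
	"github.com/sirupsen/logrus"
)

func uploadLayer(ctx context.Context, dest types.ImageDestination, layer types.BlobInfo, cache types.BlobInfoCache) error {
	reused, info, err := dest.TryReusingBlob(ctx, layer, cache, true)
	if err != nil {
		return err // an unexpected failure, not merely "blob absent"
	}
	if reused {
		// The returned info may name a substituted, equivalent blob.
		logrus.Debugf("Reused blob %s (%d bytes), skipping upload", info.Digest, info.Size)
		return nil
	}
	// Otherwise, stream the layer via dest.PutBlob(...).
	return nil
}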
// PutManifest writes manifest to the destination.
// FIXME? This should also receive a MIME type if known, to differentiate between schema versions.
// If the destination is in principle available, refuses this manifest type (e.g. it does not recognize the schema),
// but may accept a different manifest type, the returned error must be a ManifestTypeRejectedError.
func (d *dockerImageDestination) PutManifest(m []byte) error {
func (d *dockerImageDestination) PutManifest(ctx context.Context, m []byte) error {
digest, err := manifest.Digest(m)
if err != nil {
return err
@@ -231,13 +364,13 @@ func (d *dockerImageDestination) PutManifest(m []byte) error {
if mimeType != "" {
headers["Content-Type"] = []string{mimeType}
}
res, err := d.c.makeRequest(context.TODO(), "PUT", path, headers, bytes.NewReader(m))
res, err := d.c.makeRequest(ctx, "PUT", path, headers, bytes.NewReader(m), v2Auth, nil)
if err != nil {
return err
}
defer res.Body.Close()
if !successStatus(res.StatusCode) {
err = errors.Wrapf(client.HandleErrorResponse(res), "Error uploading manifest to %s", path)
err = errors.Wrapf(client.HandleErrorResponse(res), "Error uploading manifest %s to %s", refTail, d.ref.ref.Name())
if isManifestInvalidError(errors.Cause(err)) {
err = types.ManifestTypeRejectedError{Err: err}
}
@@ -258,29 +391,44 @@ func isManifestInvalidError(err error) bool {
if !ok || len(errors) == 0 {
return false
}
ec, ok := errors[0].(errcode.ErrorCoder)
err = errors[0]
ec, ok := err.(errcode.ErrorCoder)
if !ok {
return false
}
switch ec.ErrorCode() {
// ErrorCodeManifestInvalid is returned by OpenShift with acceptschema2=false.
case v2.ErrorCodeManifestInvalid:
return true
// ErrorCodeTagInvalid is returned by docker/distribution (at least as of commit ec87e9b6971d831f0eff752ddb54fb64693e51cd)
// when uploading to a tag (because it can't find a matching tag inside the manifest)
return ec.ErrorCode() == v2.ErrorCodeManifestInvalid || ec.ErrorCode() == v2.ErrorCodeTagInvalid
case v2.ErrorCodeTagInvalid:
return true
// ErrorCodeUnsupported with 'Invalid JSON syntax' is returned by AWS ECR when
// uploading an OCI manifest that is (correctly, according to the spec) missing
// a top-level media type. See libpod issue #1719
// FIXME: remove this case when ECR behavior is fixed
case errcode.ErrorCodeUnsupported:
return strings.Contains(err.Error(), "Invalid JSON syntax")
default:
return false
}
}
func (d *dockerImageDestination) PutSignatures(signatures [][]byte) error {
func (d *dockerImageDestination) PutSignatures(ctx context.Context, signatures [][]byte) error {
// Do not fail if we don't really need to support signatures.
if len(signatures) == 0 {
return nil
}
if err := d.c.detectProperties(context.TODO()); err != nil {
if err := d.c.detectProperties(ctx); err != nil {
return err
}
switch {
case d.c.signatureBase != nil:
return d.putSignaturesToLookaside(signatures)
case d.c.supportsSignatures:
return d.putSignaturesToAPIExtension(signatures)
return d.putSignaturesToAPIExtension(ctx, signatures)
default:
return errors.Errorf("X-Registry-Supports-Signatures extension not supported, and lookaside is not configured")
}
@@ -379,7 +527,7 @@ func (c *dockerClient) deleteOneSignature(url *url.URL) (missing bool, err error
}
// putSignaturesToAPIExtension implements PutSignatures() using the X-Registry-Supports-Signatures API extension.
func (d *dockerImageDestination) putSignaturesToAPIExtension(signatures [][]byte) error {
func (d *dockerImageDestination) putSignaturesToAPIExtension(ctx context.Context, signatures [][]byte) error {
// Skip dealing with the manifest digest, or reading the old state, if not necessary.
if len(signatures) == 0 {
return nil
@@ -394,7 +542,7 @@ func (d *dockerImageDestination) putSignaturesToAPIExtension(signatures [][]byte
// always adds signatures. Eventually we should also allow removing signatures,
// but the X-Registry-Supports-Signatures API extension does not support that yet.
existingSignatures, err := d.c.getExtensionsSignatures(context.TODO(), d.ref, d.manifestDigest)
existingSignatures, err := d.c.getExtensionsSignatures(ctx, d.ref, d.manifestDigest)
if err != nil {
return err
}
@@ -436,7 +584,7 @@ sigExists:
}
path := fmt.Sprintf(extensionsSignaturePath, reference.Path(d.ref.ref), d.manifestDigest.String())
res, err := d.c.makeRequest(context.TODO(), "PUT", path, nil, bytes.NewReader(body))
res, err := d.c.makeRequest(ctx, "PUT", path, nil, bytes.NewReader(body), v2Auth, nil)
if err != nil {
return err
}
@@ -447,7 +595,7 @@ sigExists:
logrus.Debugf("Error body %s", string(body))
}
logrus.Debugf("Error uploading signature, status %d, %#v", res.StatusCode, res)
return errors.Wrapf(client.HandleErrorResponse(res), "Error uploading signature to %s", path)
return errors.Wrapf(client.HandleErrorResponse(res), "Error uploading signature to %s in %s", path, d.c.registry)
}
}
@@ -458,6 +606,6 @@ sigExists:
// WARNING: This does not have any transactional semantics:
// - Uploaded data MAY be visible to others before Commit() is called
// - Uploaded data MAY be removed or MAY remain around if Close() is called without Commit() (i.e. rollback is allowed but not guaranteed)
func (d *dockerImageDestination) Commit() error {
func (d *dockerImageDestination) Commit(ctx context.Context) error {
return nil
}

View File

@@ -13,9 +13,10 @@ import (
"github.com/containers/image/docker/reference"
"github.com/containers/image/manifest"
"github.com/containers/image/pkg/sysregistriesv2"
"github.com/containers/image/types"
"github.com/docker/distribution/registry/client"
"github.com/opencontainers/go-digest"
digest "github.com/opencontainers/go-digest"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
)
@@ -30,15 +31,65 @@ type dockerImageSource struct {
// newImageSource creates a new ImageSource for the specified image reference.
// The caller must call .Close() on the returned ImageSource.
func newImageSource(ctx *types.SystemContext, ref dockerReference) (*dockerImageSource, error) {
c, err := newDockerClientFromRef(ctx, ref, false, "pull")
func newImageSource(ctx context.Context, sys *types.SystemContext, ref dockerReference) (*dockerImageSource, error) {
registry, err := sysregistriesv2.FindRegistry(sys, ref.ref.Name())
if err != nil {
return nil, errors.Wrapf(err, "error loading registries configuration")
}
if registry == nil {
// No configuration was found for the provided reference, so use the
// equivalent of a default configuration.
registry = &sysregistriesv2.Registry{
Endpoint: sysregistriesv2.Endpoint{
Location: ref.ref.String(),
},
Prefix: ref.ref.String(),
}
}
primaryDomain := reference.Domain(ref.ref)
// Check all endpoints for the manifest availability. If we find one that does
// contain the image, it will be used for all future pull actions. Always try the
// non-mirror original location last; this both transparently handles the case
// of no mirrors configured, and ensures we return the error encountered when
// accessing the upstream location if all endpoints fail.
manifestLoadErr := errors.New("Internal error: newImageSource returned without trying any endpoint")
pullSources, err := registry.PullSourcesFromReference(ref.ref)
if err != nil {
return nil, err
}
return &dockerImageSource{
ref: ref,
c: c,
}, nil
for _, pullSource := range pullSources {
logrus.Debugf("Trying to pull %q", pullSource.Reference)
dockerRef, err := newReference(pullSource.Reference)
if err != nil {
return nil, err
}
endpointSys := sys
// sys.DockerAuthConfig does not explicitly specify a registry; we must not blindly send the credentials intended for the primary endpoint to mirrors.
if endpointSys != nil && endpointSys.DockerAuthConfig != nil && reference.Domain(dockerRef.ref) != primaryDomain {
copy := *endpointSys
copy.DockerAuthConfig = nil
endpointSys = &copy
}
client, err := newDockerClientFromRef(endpointSys, dockerRef, false, "pull")
if err != nil {
return nil, err
}
client.tlsClientConfig.InsecureSkipVerify = pullSource.Endpoint.Insecure
testImageSource := &dockerImageSource{
ref: dockerRef,
c: client,
}
manifestLoadErr = testImageSource.ensureManifestIsLoaded(ctx)
if manifestLoadErr == nil {
return testImageSource, nil
}
}
return nil, manifestLoadErr
}
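The endpoint-fallback logic above, distilled into a generic sketch (firstWorkingEndpoint and probe are illustrative assumptions): try each pull source in order, return on the first success, and otherwise surface the error from the last endpoint tried, which, because the original location is ordered last, is the upstream error.

package mirrors

import (
	"context"

	"github.com/pkg/errors"
	"github.com/sirupsen/logrus"
)

func firstWorkingEndpoint(ctx context.Context, endpoints []string, probe func(context.Context, string) error) (string, error) {
	err := errors.New("internal error: no endpoints tried")
	for _, e := range endpoints {
		if err = probe(ctx, e); err == nil {
			return e, nil
		}
		logrus.Debugf("endpoint %q failed: %v", e, err)
	}
	return "", err // the error from the last (upstream) endpoint
}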
// Reference returns the reference used to set up this source, _as specified by the user_
@@ -53,8 +104,8 @@ func (s *dockerImageSource) Close() error {
}
// LayerInfosForCopy() returns updated layer info that should be used when reading, in preference to values in the manifest, if specified.
func (s *dockerImageSource) LayerInfosForCopy() []types.BlobInfo {
return nil
func (s *dockerImageSource) LayerInfosForCopy(ctx context.Context) ([]types.BlobInfo, error) {
return nil, nil
}
// simplifyContentType drops parameters from a HTTP media type (see https://tools.ietf.org/html/rfc7231#section-3.1.1.1)
@@ -74,11 +125,11 @@ func simplifyContentType(contentType string) string {
// It may use a remote (= slow) service.
// If instanceDigest is not nil, it contains a digest of the specific manifest instance to retrieve (when the primary manifest is a manifest list);
// this never happens if the primary manifest is not a manifest list (e.g. if the source never returns manifest lists).
func (s *dockerImageSource) GetManifest(instanceDigest *digest.Digest) ([]byte, string, error) {
func (s *dockerImageSource) GetManifest(ctx context.Context, instanceDigest *digest.Digest) ([]byte, string, error) {
if instanceDigest != nil {
return s.fetchManifest(context.TODO(), instanceDigest.String())
return s.fetchManifest(ctx, instanceDigest.String())
}
err := s.ensureManifestIsLoaded(context.TODO())
err := s.ensureManifestIsLoaded(ctx)
if err != nil {
return nil, "", err
}
@@ -89,13 +140,13 @@ func (s *dockerImageSource) fetchManifest(ctx context.Context, tagOrDigest strin
path := fmt.Sprintf(manifestPath, reference.Path(s.ref.ref), tagOrDigest)
headers := make(map[string][]string)
headers["Accept"] = manifest.DefaultRequestedManifestMIMETypes
res, err := s.c.makeRequest(ctx, "GET", path, headers, nil)
res, err := s.c.makeRequest(ctx, "GET", path, headers, nil, v2Auth, nil)
if err != nil {
return nil, "", err
}
defer res.Body.Close()
if res.StatusCode != http.StatusOK {
return nil, "", client.HandleErrorResponse(res)
return nil, "", errors.Wrapf(client.HandleErrorResponse(res), "Error reading manifest %s in %s", tagOrDigest, s.ref.ref.Name())
}
manblob, err := ioutil.ReadAll(res.Body)
if err != nil {
@@ -108,7 +159,7 @@ func (s *dockerImageSource) fetchManifest(ctx context.Context, tagOrDigest strin
//
// ImageSource implementations are not required or expected to do any caching,
// but because our signatures are “attached” to the manifest digest,
// we need to ensure that the digest of the manifest returned by GetManifest(nil)
// we need to ensure that the digest of the manifest returned by GetManifest(ctx, nil)
// and used by GetSignatures(ctx, nil) are consistent, otherwise we would get spurious
// signature verification failures when pulling while a tag is being updated.
func (s *dockerImageSource) ensureManifestIsLoaded(ctx context.Context) error {
@@ -131,26 +182,26 @@ func (s *dockerImageSource) ensureManifestIsLoaded(ctx context.Context) error {
return nil
}
func (s *dockerImageSource) getExternalBlob(urls []string) (io.ReadCloser, int64, error) {
func (s *dockerImageSource) getExternalBlob(ctx context.Context, urls []string) (io.ReadCloser, int64, error) {
var (
resp *http.Response
err error
)
for _, url := range urls {
resp, err = s.c.makeRequestToResolvedURL(context.TODO(), "GET", url, nil, nil, -1, false)
resp, err = s.c.makeRequestToResolvedURL(ctx, "GET", url, nil, nil, -1, noAuth, nil)
if err == nil {
if resp.StatusCode != http.StatusOK {
err = errors.Errorf("error fetching external blob from %q: %d", url, resp.StatusCode)
err = errors.Errorf("error fetching external blob from %q: %d (%s)", url, resp.StatusCode, http.StatusText(resp.StatusCode))
logrus.Debug(err)
continue
}
break
}
}
if resp.Body != nil && err == nil {
return resp.Body, getBlobSize(resp), nil
if err != nil {
return nil, 0, err
}
return nil, 0, err
return resp.Body, getBlobSize(resp), nil
}
func getBlobSize(resp *http.Response) int64 {
@@ -161,22 +212,30 @@ func getBlobSize(resp *http.Response) int64 {
return size
}
// HasThreadSafeGetBlob indicates whether GetBlob can be executed concurrently.
func (s *dockerImageSource) HasThreadSafeGetBlob() bool {
return true
}
// GetBlob returns a stream for the specified blob, and the blob's size (or -1 if unknown).
func (s *dockerImageSource) GetBlob(info types.BlobInfo) (io.ReadCloser, int64, error) {
// The Digest field in BlobInfo is guaranteed to be provided, Size may be -1 and MediaType may be optionally provided.
// May update BlobInfoCache, preferably after it knows for certain that a blob truly exists at a specific location.
func (s *dockerImageSource) GetBlob(ctx context.Context, info types.BlobInfo, cache types.BlobInfoCache) (io.ReadCloser, int64, error) {
if len(info.URLs) != 0 {
return s.getExternalBlob(info.URLs)
return s.getExternalBlob(ctx, info.URLs)
}
path := fmt.Sprintf(blobsPath, reference.Path(s.ref.ref), info.Digest.String())
logrus.Debugf("Downloading %s", path)
res, err := s.c.makeRequest(context.TODO(), "GET", path, nil, nil)
res, err := s.c.makeRequest(ctx, "GET", path, nil, nil, v2Auth, nil)
if err != nil {
return nil, 0, err
}
if res.StatusCode != http.StatusOK {
// print url also
return nil, 0, errors.Errorf("Invalid status code returned when fetching blob %d", res.StatusCode)
return nil, 0, errors.Errorf("Invalid status code returned when fetching blob %d (%s)", res.StatusCode, http.StatusText(res.StatusCode))
}
cache.RecordKnownLocation(s.ref.Transport(), bicTransportScope(s.ref), info.Digest, newBICLocationReference(s.ref))
return res.Body, getBlobSize(res), nil
}
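A hypothetical consumer sketch (readAndVerify is illustrative): since blobs are addressed by digest, a careful reader verifies the stream against info.Digest while reading, using go-digest's Verifier.

package blobread

import (
	"context"
	"io"
	"io/ioutil"

	"github.com/containers/image/types"
	"github.com/pkg/errors"
)

func readAndVerify(ctx context.Context, src types.ImageSource, info types.BlobInfo, cache types.BlobInfoCache) ([]byte, error) {
	rc, _, err := src.GetBlob(ctx, info, cache)
	if err != nil {
		return nil, err
	}
	defer rc.Close()
	verifier := info.Digest.Verifier()
	data, err := ioutil.ReadAll(io.TeeReader(rc, verifier))
	if err != nil {
		return nil, err
	}
	if !verifier.Verified() {
		return nil, errors.Errorf("digest mismatch for blob %s", info.Digest)
	}
	return data, nil
}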
@@ -274,7 +333,7 @@ func (s *dockerImageSource) getOneSignature(ctx context.Context, url *url.URL) (
if res.StatusCode == http.StatusNotFound {
return nil, true, nil
} else if res.StatusCode != http.StatusOK {
return nil, false, errors.Errorf("Error reading signature from %s: status %d", url.String(), res.StatusCode)
return nil, false, errors.Errorf("Error reading signature from %s: status %d (%s)", url.String(), res.StatusCode, http.StatusText(res.StatusCode))
}
sig, err := ioutil.ReadAll(res.Body)
if err != nil {
@@ -309,8 +368,15 @@ func (s *dockerImageSource) getSignaturesFromAPIExtension(ctx context.Context, i
}
// deleteImage deletes the named image from the registry, if supported.
func deleteImage(ctx *types.SystemContext, ref dockerReference) error {
c, err := newDockerClientFromRef(ctx, ref, true, "push")
func deleteImage(ctx context.Context, sys *types.SystemContext, ref dockerReference) error {
// docker/distribution does not document what action should be used for deleting images.
//
// Current docker/distribution requires "pull" for reading the manifest and "delete" for deleting it.
// quay.io requires "push" (an explicit "pull" is unnecessary), does not grant any token (fails parsing the request) if "delete" is included.
// OpenShift ignores the action string (both the password and the token is an OpenShift API token identifying a user).
//
// We have to hard-code a single string, luckily both docker/distribution and quay.io support "*" to mean "everything".
c, err := newDockerClientFromRef(sys, ref, true, "*")
if err != nil {
return err
}
@@ -325,7 +391,7 @@ func deleteImage(ctx *types.SystemContext, ref dockerReference) error {
return err
}
getPath := fmt.Sprintf(manifestPath, reference.Path(ref.ref), refTail)
get, err := c.makeRequest(context.TODO(), "GET", getPath, headers, nil)
get, err := c.makeRequest(ctx, "GET", getPath, headers, nil, v2Auth, nil)
if err != nil {
return err
}
@@ -347,7 +413,7 @@ func deleteImage(ctx *types.SystemContext, ref dockerReference) error {
// When retrieving the digest from a registry >= 2.3 use the following header:
// "Accept": "application/vnd.docker.distribution.manifest.v2+json"
delete, err := c.makeRequest(context.TODO(), "DELETE", deletePath, headers, nil)
delete, err := c.makeRequest(ctx, "DELETE", deletePath, headers, nil, v2Auth, nil)
if err != nil {
return err
}
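The two-step protocol above, as a standalone sketch (deleteByTag and its parameters are assumptions): resolve the tag to a digest via a GET with the schema2 Accept header, then DELETE the manifest by that digest; distribution only supports deleting by digest and returns 202 Accepted on success.

package regdelete

import (
	"fmt"
	"net/http"
)

func deleteByTag(client *http.Client, registry, repo, tag string) error {
	req, err := http.NewRequest("GET", fmt.Sprintf("https://%s/v2/%s/manifests/%s", registry, repo, tag), nil)
	if err != nil {
		return err
	}
	req.Header.Set("Accept", "application/vnd.docker.distribution.manifest.v2+json")
	res, err := client.Do(req)
	if err != nil {
		return err
	}
	defer res.Body.Close()
	dgst := res.Header.Get("Docker-Content-Digest")
	if dgst == "" {
		return fmt.Errorf("registry did not return a manifest digest")
	}
	delReq, err := http.NewRequest("DELETE", fmt.Sprintf("https://%s/v2/%s/manifests/%s", registry, repo, dgst), nil)
	if err != nil {
		return err
	}
	res2, err := client.Do(delReq)
	if err != nil {
		return err
	}
	defer res2.Body.Close()
	if res2.StatusCode != http.StatusAccepted {
		return fmt.Errorf("unexpected status %d deleting %s", res2.StatusCode, dgst)
	}
	return nil
}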

View File

@@ -1,6 +1,7 @@
package docker
import (
"context"
"fmt"
"strings"
@@ -60,8 +61,13 @@ func ParseReference(refString string) (types.ImageReference, error) {
// NewReference returns a Docker reference for a named reference. The reference must satisfy !reference.IsNameOnly().
func NewReference(ref reference.Named) (types.ImageReference, error) {
return newReference(ref)
}
// newReference returns a dockerReference for a named reference.
func newReference(ref reference.Named) (dockerReference, error) {
if reference.IsNameOnly(ref) {
return nil, errors.Errorf("Docker reference %s has neither a tag nor a digest", reference.FamiliarString(ref))
return dockerReference{}, errors.Errorf("Docker reference %s has neither a tag nor a digest", reference.FamiliarString(ref))
}
// A github.com/distribution/reference value can have a tag and a digest at the same time!
// The docker/distribution API does not really support that (we can't ask for an image with a specific
@@ -71,8 +77,9 @@ func NewReference(ref reference.Named) (types.ImageReference, error) {
_, isTagged := ref.(reference.NamedTagged)
_, isDigested := ref.(reference.Canonical)
if isTagged && isDigested {
return nil, errors.Errorf("Docker references with both a tag and digest are currently not supported")
return dockerReference{}, errors.Errorf("Docker references with both a tag and digest are currently not supported")
}
return dockerReference{
ref: ref,
}, nil
@@ -127,25 +134,25 @@ func (ref dockerReference) PolicyConfigurationNamespaces() []string {
// NOTE: If any kind of signature verification should happen, build an UnparsedImage from the value returned by NewImageSource,
// verify that UnparsedImage, and convert it into a real Image via image.FromUnparsedImage.
// WARNING: This may not do the right thing for a manifest list, see image.FromSource for details.
func (ref dockerReference) NewImage(ctx *types.SystemContext) (types.ImageCloser, error) {
return newImage(ctx, ref)
func (ref dockerReference) NewImage(ctx context.Context, sys *types.SystemContext) (types.ImageCloser, error) {
return newImage(ctx, sys, ref)
}
// NewImageSource returns a types.ImageSource for this reference.
// The caller must call .Close() on the returned ImageSource.
func (ref dockerReference) NewImageSource(ctx *types.SystemContext) (types.ImageSource, error) {
return newImageSource(ctx, ref)
func (ref dockerReference) NewImageSource(ctx context.Context, sys *types.SystemContext) (types.ImageSource, error) {
return newImageSource(ctx, sys, ref)
}
// NewImageDestination returns a types.ImageDestination for this reference.
// The caller must call .Close() on the returned ImageDestination.
func (ref dockerReference) NewImageDestination(ctx *types.SystemContext) (types.ImageDestination, error) {
return newImageDestination(ctx, ref)
func (ref dockerReference) NewImageDestination(ctx context.Context, sys *types.SystemContext) (types.ImageDestination, error) {
return newImageDestination(sys, ref)
}
// DeleteImage deletes the named image from the registry, if supported.
func (ref dockerReference) DeleteImage(ctx *types.SystemContext) error {
return deleteImage(ctx, ref)
func (ref dockerReference) DeleteImage(ctx context.Context, sys *types.SystemContext) error {
return deleteImage(ctx, sys, ref)
}
// tagOrDigest returns a tag or digest from the reference.

View File

@@ -45,9 +45,9 @@ type registryNamespace struct {
type signatureStorageBase *url.URL // The only documented value is nil, meaning storage is not supported.
// configuredSignatureStorageBase reads configuration to find an appropriate signature storage URL for ref, for write access if “write”.
func configuredSignatureStorageBase(ctx *types.SystemContext, ref dockerReference, write bool) (signatureStorageBase, error) {
func configuredSignatureStorageBase(sys *types.SystemContext, ref dockerReference, write bool) (signatureStorageBase, error) {
// FIXME? Loading and parsing the config could be cached across calls.
dirPath := registriesDirPath(ctx)
dirPath := registriesDirPath(sys)
logrus.Debugf(`Using registries.d directory %s for sigstore configuration`, dirPath)
config, err := loadAndMergeConfig(dirPath)
if err != nil {
@@ -74,13 +74,13 @@ func configuredSignatureStorageBase(ctx *types.SystemContext, ref dockerReferenc
}
// registriesDirPath returns a path to registries.d
func registriesDirPath(ctx *types.SystemContext) string {
if ctx != nil {
if ctx.RegistriesDirPath != "" {
return ctx.RegistriesDirPath
func registriesDirPath(sys *types.SystemContext) string {
if sys != nil {
if sys.RegistriesDirPath != "" {
return sys.RegistriesDirPath
}
if ctx.RootForImplicitAbsolutePaths != "" {
return filepath.Join(ctx.RootForImplicitAbsolutePaths, systemRegistriesDirPath)
if sys.RootForImplicitAbsolutePaths != "" {
return filepath.Join(sys.RootForImplicitAbsolutePaths, systemRegistriesDirPath)
}
}
return systemRegistriesDirPath

View File

@@ -1,2 +1,2 @@
This is a copy of github.com/docker/distribution/reference as of commit fb0bebc4b64e3881cc52a2478d749845ed76d2a8,
This is a copy of github.com/docker/distribution/reference as of commit 3226863cbcba6dbc2f6c83a37b28126c934af3f8,
except that ParseAnyReferenceWithSet has been removed to drop the dependency on github.com/docker/distribution/digestset.

View File

@@ -55,6 +55,35 @@ func ParseNormalizedNamed(s string) (Named, error) {
return named, nil
}
// ParseDockerRef normalizes the image reference following the docker convention. This is added
// mainly for backward compatibility.
// The returned reference can only be either tagged or digested. For a reference that contains both a tag
// and a digest, the function returns the digested reference; e.g. docker.io/library/busybox:latest@
// sha256:7cc4b5aefd1d0cadf8d97d4350462ba51c694ebca145b08d7d41b41acc8db5aa will be returned as
// docker.io/library/busybox@sha256:7cc4b5aefd1d0cadf8d97d4350462ba51c694ebca145b08d7d41b41acc8db5aa.
func ParseDockerRef(ref string) (Named, error) {
named, err := ParseNormalizedNamed(ref)
if err != nil {
return nil, err
}
if _, ok := named.(NamedTagged); ok {
if canonical, ok := named.(Canonical); ok {
// The reference is both tagged and digested, only
// return digested.
newNamed, err := WithName(canonical.Name())
if err != nil {
return nil, err
}
newCanonical, err := WithDigest(newNamed, canonical.Digest())
if err != nil {
return nil, err
}
return newCanonical, nil
}
}
return TagNameOnly(named), nil
}
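A runnable illustration of ParseDockerRef (the input and output values come from the doc comment above):

package main

import (
	"fmt"

	"github.com/containers/image/docker/reference"
)

func main() {
	// A reference with both a tag and a digest: only the digest is kept.
	ref, err := reference.ParseDockerRef("docker.io/library/busybox:latest@sha256:7cc4b5aefd1d0cadf8d97d4350462ba51c694ebca145b08d7d41b41acc8db5aa")
	if err != nil {
		panic(err)
	}
	fmt.Println(ref.String())
	// docker.io/library/busybox@sha256:7cc4b5aefd1d0cadf8d97d4350462ba51c694ebca145b08d7d41b41acc8db5aa
}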
// splitDockerDomain splits a repository name to domain and remotename string.
// If no valid domain is found, the default domain is used. Repository name
// needs to be already validated before.

View File

@@ -15,7 +15,7 @@
// tag := /[\w][\w.-]{0,127}/
//
// digest := digest-algorithm ":" digest-hex
// digest-algorithm := digest-algorithm-component [ digest-algorithm-separator digest-algorithm-component ]
// digest-algorithm := digest-algorithm-component [ digest-algorithm-separator digest-algorithm-component ]*
// digest-algorithm-separator := /[+.-_]/
// digest-algorithm-component := /[A-Za-z][A-Za-z0-9]*/
// digest-hex := /[0-9a-fA-F]{32,}/ ; At least 128 bit digest value
@@ -205,7 +205,7 @@ func Parse(s string) (Reference, error) {
var repo repository
nameMatch := anchoredNameRegexp.FindStringSubmatch(matches[1])
if nameMatch != nil && len(nameMatch) == 3 {
if len(nameMatch) == 3 {
repo.domain = nameMatch[1]
repo.path = nameMatch[2]
} else {

View File

@@ -20,15 +20,15 @@ var (
optional(repeated(separatorRegexp, alphaNumericRegexp)))
// domainComponentRegexp restricts the registry domain component of a
// repository name to start with a component as defined by domainRegexp
// repository name to start with a component as defined by DomainRegexp
// and followed by an optional port.
domainComponentRegexp = match(`(?:[a-zA-Z0-9]|[a-zA-Z0-9][a-zA-Z0-9-]*[a-zA-Z0-9])`)
// domainRegexp defines the structure of potential domain components
// DomainRegexp defines the structure of potential domain components
// that may be part of image names. This is purposely a subset of what is
// allowed by DNS to ensure backwards compatibility with Docker image
// names.
domainRegexp = expression(
DomainRegexp = expression(
domainComponentRegexp,
optional(repeated(literal(`.`), domainComponentRegexp)),
optional(literal(`:`), match(`[0-9]+`)))
@@ -51,14 +51,14 @@ var (
// regexp has capturing groups for the domain and name part omitting
// the separating forward slash from either.
NameRegexp = expression(
optional(domainRegexp, literal(`/`)),
optional(DomainRegexp, literal(`/`)),
nameComponentRegexp,
optional(repeated(literal(`/`), nameComponentRegexp)))
// anchoredNameRegexp is used to parse a name value, capturing the
// domain and trailing components.
anchoredNameRegexp = anchored(
optional(capture(domainRegexp), literal(`/`)),
optional(capture(DomainRegexp), literal(`/`)),
capture(nameComponentRegexp,
optional(repeated(literal(`/`), nameComponentRegexp))))

View File

@@ -3,11 +3,13 @@ package tarfile
import (
"archive/tar"
"bytes"
"context"
"encoding/json"
"fmt"
"io"
"io/ioutil"
"os"
"path/filepath"
"time"
"github.com/containers/image/docker/reference"
@@ -21,38 +23,31 @@ import (
// Destination is a partial implementation of types.ImageDestination for writing to an io.Writer.
type Destination struct {
writer io.Writer
tar *tar.Writer
repoTag string
writer io.Writer
tar *tar.Writer
repoTags []reference.NamedTagged
// Other state.
blobs map[digest.Digest]types.BlobInfo // list of already-sent blobs
blobs map[digest.Digest]types.BlobInfo // list of already-sent blobs
config []byte
}
// NewDestination returns a tarfile.Destination for the specified io.Writer.
func NewDestination(dest io.Writer, ref reference.NamedTagged) *Destination {
// For github.com/docker/docker consumers, this works just as well as
// refString := ref.String()
// because when reading the RepoTags strings, github.com/docker/docker/reference
// normalizes both of them to the same value.
//
// Doing it this way to include the normalized-out `docker.io[/library]` does make
// a difference for github.com/projectatomic/docker consumers, with the
// “Add --add-registry and --block-registry options to docker daemon” patch.
// These consumers treat reference strings which include a hostname and reference
// strings without a hostname differently.
//
// Using the host name here is more explicit about the intent, and it has the same
// effect as (docker pull) in projectatomic/docker, which tags the result using
// a hostname-qualified reference.
// See https://github.com/containers/image/issues/72 for a more detailed
// analysis and explanation.
refString := fmt.Sprintf("%s:%s", ref.Name(), ref.Tag())
return &Destination{
writer: dest,
tar: tar.NewWriter(dest),
repoTag: refString,
blobs: make(map[digest.Digest]types.BlobInfo),
repoTags := []reference.NamedTagged{}
if ref != nil {
repoTags = append(repoTags, ref)
}
return &Destination{
writer: dest,
tar: tar.NewWriter(dest),
repoTags: repoTags,
blobs: make(map[digest.Digest]types.BlobInfo),
}
}
// AddRepoTags adds the specified tags to the destination's repoTags.
func (d *Destination) AddRepoTags(tags []reference.NamedTagged) {
d.repoTags = append(d.repoTags, tags...)
}
// SupportedManifestMIMETypes tells which manifest mime types the destination supports
@@ -65,15 +60,10 @@ func (d *Destination) SupportedManifestMIMETypes() []string {
// SupportsSignatures returns an error (to be displayed to the user) if the destination certainly can't store signatures.
// Note: It is still possible for PutSignatures to fail if SupportsSignatures returns nil.
func (d *Destination) SupportsSignatures() error {
func (d *Destination) SupportsSignatures(ctx context.Context) error {
return errors.Errorf("Storing signatures for docker tar files is not supported")
}
// ShouldCompressLayers returns true iff it is desirable to compress layer blobs written to this destination.
func (d *Destination) ShouldCompressLayers() bool {
return false
}
// AcceptsForeignLayerURLs returns false iff foreign layers in manifest should be actually
// uploaded to the image destination, true otherwise.
func (d *Destination) AcceptsForeignLayerURLs() bool {
@@ -85,26 +75,29 @@ func (d *Destination) MustMatchRuntimeOS() bool {
return false
}
// IgnoresEmbeddedDockerReference returns true iff the destination does not care about Image.EmbeddedDockerReferenceConflicts(),
// and would prefer to receive an unmodified manifest instead of one modified for the destination.
// Does not make a difference if Reference().DockerReference() is nil.
func (d *Destination) IgnoresEmbeddedDockerReference() bool {
return false // N/A, we only accept schema2 images where EmbeddedDockerReferenceConflicts() is always false.
}
// HasThreadSafePutBlob indicates whether PutBlob can be executed concurrently.
func (d *Destination) HasThreadSafePutBlob() bool {
return false
}
// PutBlob writes contents of stream and returns data representing the result (with all data filled in).
// inputInfo.Digest can be optionally provided if known; it is not mandatory for the implementation to verify it.
// inputInfo.Size is the expected length of stream, if known.
// May update cache.
// WARNING: The contents of stream are being verified on the fly. Until stream.Read() returns io.EOF, the contents of the data SHOULD NOT be available
// to any other readers for download using the supplied digest.
// If stream.Read() at any time, ESPECIALLY at end of input, returns an error, PutBlob MUST 1) fail, and 2) delete any data stored so far.
func (d *Destination) PutBlob(stream io.Reader, inputInfo types.BlobInfo) (types.BlobInfo, error) {
if inputInfo.Digest.String() == "" {
return types.BlobInfo{}, errors.Errorf("Can not stream a blob with unknown digest to docker tarfile")
}
ok, size, err := d.HasBlob(inputInfo)
if err != nil {
return types.BlobInfo{}, err
}
if ok {
return types.BlobInfo{Digest: inputInfo.Digest, Size: size}, nil
}
if inputInfo.Size == -1 { // Ouch, we need to stream the blob into a temporary file just to determine the size.
func (d *Destination) PutBlob(ctx context.Context, stream io.Reader, inputInfo types.BlobInfo, cache types.BlobInfoCache, isConfig bool) (types.BlobInfo, error) {
// Ouch, we need to stream the blob into a temporary file just to determine the size.
// When the layer is decompressed, we also have to generate the digest on the uncompressed data.
if inputInfo.Size == -1 || inputInfo.Digest.String() == "" {
logrus.Debugf("docker tarfile: input with unknown size, streaming to disk first ...")
streamCopy, err := ioutil.TempFile(tmpdir.TemporaryDirectoryForBigFiles(), "docker-tarfile-blob")
if err != nil {
@@ -113,7 +106,10 @@ func (d *Destination) PutBlob(stream io.Reader, inputInfo types.BlobInfo) (types
defer os.Remove(streamCopy.Name())
defer streamCopy.Close()
size, err := io.Copy(streamCopy, stream)
digester := digest.Canonical.Digester()
tee := io.TeeReader(stream, digester.Hash())
// TODO: This can take quite some time, and should ideally be cancellable using ctx.Done().
size, err := io.Copy(streamCopy, tee)
if err != nil {
return types.BlobInfo{}, err
}
@@ -122,49 +118,87 @@ func (d *Destination) PutBlob(stream io.Reader, inputInfo types.BlobInfo) (types
return types.BlobInfo{}, err
}
inputInfo.Size = size // inputInfo is a struct, so we are only modifying our copy.
if inputInfo.Digest == "" {
inputInfo.Digest = digester.Digest()
}
stream = streamCopy
logrus.Debugf("... streaming done")
}
digester := digest.Canonical.Digester()
tee := io.TeeReader(stream, digester.Hash())
if err := d.sendFile(inputInfo.Digest.String(), inputInfo.Size, tee); err != nil {
// Maybe the blob has been already sent
ok, reusedInfo, err := d.TryReusingBlob(ctx, inputInfo, cache, false)
if err != nil {
return types.BlobInfo{}, err
}
d.blobs[inputInfo.Digest] = types.BlobInfo{Digest: digester.Digest(), Size: inputInfo.Size}
return types.BlobInfo{Digest: digester.Digest(), Size: inputInfo.Size}, nil
if ok {
return reusedInfo, nil
}
if isConfig {
buf, err := ioutil.ReadAll(stream)
if err != nil {
return types.BlobInfo{}, errors.Wrap(err, "Error reading Config file stream")
}
d.config = buf
if err := d.sendFile(inputInfo.Digest.Hex()+".json", inputInfo.Size, bytes.NewReader(buf)); err != nil {
return types.BlobInfo{}, errors.Wrap(err, "Error writing Config file")
}
} else {
// Note that this can't be e.g. filepath.Join(l.Digest.Hex(), legacyLayerFileName); due to the way
// writeLegacyLayerMetadata constructs layer IDs differently from inputInfo.Digest values (as described
// inside it), most of the layers would end up in subdirectories alone without any metadata; (docker load)
// tries to load every subdirectory as an image and fails if the config is missing. So, keep the layers
// in the root of the tarball.
if err := d.sendFile(inputInfo.Digest.Hex()+".tar", inputInfo.Size, stream); err != nil {
return types.BlobInfo{}, err
}
}
d.blobs[inputInfo.Digest] = types.BlobInfo{Digest: inputInfo.Digest, Size: inputInfo.Size}
return types.BlobInfo{Digest: inputInfo.Digest, Size: inputInfo.Size}, nil
}
// HasBlob returns true iff the image destination already contains a blob with
// the matching digest which can be reapplied using ReapplyBlob. Unlike
// PutBlob, the digest can not be empty. If HasBlob returns true, the size of
// the blob must also be returned. If the destination does not contain the
// blob, or it is unknown, HasBlob ordinarily returns (false, -1, nil); it
// returns a non-nil error only on an unexpected failure.
func (d *Destination) HasBlob(info types.BlobInfo) (bool, int64, error) {
// TryReusingBlob checks whether the transport already contains, or can efficiently reuse, a blob, and if so, applies it to the current destination
// (e.g. if the blob is a filesystem layer, this signifies that the changes it describes need to be applied again when composing a filesystem tree).
// info.Digest must not be empty.
// If canSubstitute, TryReusingBlob can use an equivalent of the desired blob; in that case the returned info may not match the input.
// If the blob has been successfully reused, returns (true, info, nil); info must contain at least a digest and size.
// If the transport can not reuse the requested blob, TryReusingBlob returns (false, {}, nil); it returns a non-nil error only on an unexpected failure.
// May use and/or update cache.
func (d *Destination) TryReusingBlob(ctx context.Context, info types.BlobInfo, cache types.BlobInfoCache, canSubstitute bool) (bool, types.BlobInfo, error) {
if info.Digest == "" {
return false, -1, errors.Errorf("Can not check for a blob with unknown digest")
return false, types.BlobInfo{}, errors.Errorf("Can not check for a blob with unknown digest")
}
if blob, ok := d.blobs[info.Digest]; ok {
return true, blob.Size, nil
return true, types.BlobInfo{Digest: info.Digest, Size: blob.Size}, nil
}
return false, -1, nil
return false, types.BlobInfo{}, nil
}
// ReapplyBlob informs the image destination that a blob for which HasBlob
// previously returned true would have been passed to PutBlob if it had
// returned false. Like HasBlob and unlike PutBlob, the digest can not be
// empty. If the blob is a filesystem layer, this signifies that the changes
// it describes need to be applied again when composing a filesystem tree.
func (d *Destination) ReapplyBlob(info types.BlobInfo) (types.BlobInfo, error) {
return info, nil
func (d *Destination) createRepositoriesFile(rootLayerID string) error {
repositories := map[string]map[string]string{}
for _, repoTag := range d.repoTags {
if val, ok := repositories[repoTag.Name()]; ok {
val[repoTag.Tag()] = rootLayerID
} else {
repositories[repoTag.Name()] = map[string]string{repoTag.Tag(): rootLayerID}
}
}
b, err := json.Marshal(repositories)
if err != nil {
return errors.Wrap(err, "Error marshaling repositories")
}
if err := d.sendBytes(legacyRepositoriesFileName, b); err != nil {
return errors.Wrap(err, "Error writing config json file")
}
return nil
}
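For reference, a small demo (hypothetical tag and layer ID) of the legacy repositories payload that createRepositoriesFile marshals:

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// One tag, mapped to the image's root layer ID.
	repositories := map[string]map[string]string{
		"docker.io/library/foo": {"latest": "abc123"},
	}
	b, _ := json.Marshal(repositories)
	fmt.Println(string(b)) // {"docker.io/library/foo":{"latest":"abc123"}}
}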
// PutManifest writes manifest to the destination.
// FIXME? This should also receive a MIME type if known, to differentiate between schema versions.
// If the destination is in principle available, refuses this manifest type (e.g. it does not recognize the schema),
// but may accept a different manifest type, the returned error must be a ManifestTypeRejectedError.
func (d *Destination) PutManifest(m []byte) error {
func (d *Destination) PutManifest(ctx context.Context, m []byte) error {
// We do not bother with types.ManifestTypeRejectedError; our .SupportedManifestMIMETypes() above is already providing only one alternative,
// so the caller trying a different manifest kind would be pointless.
var man manifest.Schema2
@@ -175,14 +209,42 @@ func (d *Destination) PutManifest(m []byte) error {
return errors.Errorf("Unsupported manifest type, need a Docker schema 2 manifest")
}
layerPaths := []string{}
for _, l := range man.LayersDescriptors {
layerPaths = append(layerPaths, l.Digest.String())
layerPaths, lastLayerID, err := d.writeLegacyLayerMetadata(man.LayersDescriptors)
if err != nil {
return err
}
if len(man.LayersDescriptors) > 0 {
if err := d.createRepositoriesFile(lastLayerID); err != nil {
return err
}
}
repoTags := []string{}
for _, tag := range d.repoTags {
// For github.com/docker/docker consumers, this works just as well as
// refString := ref.String()
// because when reading the RepoTags strings, github.com/docker/docker/reference
// normalizes both of them to the same value.
//
// Doing it this way to include the normalized-out `docker.io[/library]` does make
// a difference for github.com/projectatomic/docker consumers, with the
// “Add --add-registry and --block-registry options to docker daemon” patch.
// These consumers treat reference strings which include a hostname and reference
// strings without a hostname differently.
//
// Using the host name here is more explicit about the intent, and it has the same
// effect as (docker pull) in projectatomic/docker, which tags the result using
// a hostname-qualified reference.
// See https://github.com/containers/image/issues/72 for a more detailed
// analysis and explanation.
refString := fmt.Sprintf("%s:%s", tag.Name(), tag.Tag())
repoTags = append(repoTags, refString)
}
items := []ManifestItem{{
Config: man.ConfigDescriptor.Digest.String(),
RepoTags: []string{d.repoTag},
Config: man.ConfigDescriptor.Digest.Hex() + ".json",
RepoTags: repoTags,
Layers: layerPaths,
Parent: "",
LayerSources: nil,
@@ -193,12 +255,81 @@ func (d *Destination) PutManifest(m []byte) error {
}
// FIXME? Do we also need to support the legacy format?
return d.sendFile(manifestFileName, int64(len(itemsBytes)), bytes.NewReader(itemsBytes))
return d.sendBytes(manifestFileName, itemsBytes)
}
// writeLegacyLayerMetadata writes legacy VERSION and configuration files for all layers
func (d *Destination) writeLegacyLayerMetadata(layerDescriptors []manifest.Schema2Descriptor) (layerPaths []string, lastLayerID string, err error) {
var chainID digest.Digest
lastLayerID = ""
for i, l := range layerDescriptors {
// This chainID value matches the computation in docker/docker/layer.CreateChainID …
if chainID == "" {
chainID = l.Digest
} else {
chainID = digest.Canonical.FromString(chainID.String() + " " + l.Digest.String())
}
// … but note that this image ID does not match docker/docker/image/v1.CreateID. At least recent
// versions allocate new IDs on load, as long as the IDs we use are unique / cannot loop.
//
// Overall, the goal of computing a digest dependent on the full history is to avoid reusing an image ID
// (and possibly creating a loop in the "parent" links) if a layer with the same DiffID appears two or more
// times in layersDescriptors. The ChainID values are sufficient for this, the v1.CreateID computation
// which also mixes in the full image configuration seems unnecessary, at least as long as we are storing
// only a single image per tarball, i.e. all DiffID prefixes are unique (can't differ only with
// configuration).
layerID := chainID.Hex()
physicalLayerPath := l.Digest.Hex() + ".tar"
// The layer itself has been stored into physicalLayerPath in PutManifest.
// So, use that path for layerPaths used in the non-legacy manifest
layerPaths = append(layerPaths, physicalLayerPath)
// ... and create a symlink for the legacy format;
if err := d.sendSymlink(filepath.Join(layerID, legacyLayerFileName), filepath.Join("..", physicalLayerPath)); err != nil {
return nil, "", errors.Wrap(err, "Error creating layer symbolic link")
}
b := []byte("1.0")
if err := d.sendBytes(filepath.Join(layerID, legacyVersionFileName), b); err != nil {
return nil, "", errors.Wrap(err, "Error writing VERSION file")
}
// The legacy format requires a config file per layer
layerConfig := make(map[string]interface{})
layerConfig["id"] = layerID
// The root layer doesn't have any parent
if lastLayerID != "" {
layerConfig["parent"] = lastLayerID
}
// The root layer configuration file is generated by using subpart of the image configuration
if i == len(layerDescriptors)-1 {
var config map[string]*json.RawMessage
err := json.Unmarshal(d.config, &config)
if err != nil {
return nil, "", errors.Wrap(err, "Error unmarshaling config")
}
for _, attr := range [7]string{"architecture", "config", "container", "container_config", "created", "docker_version", "os"} {
layerConfig[attr] = config[attr]
}
}
b, err := json.Marshal(layerConfig)
if err != nil {
return nil, "", errors.Wrap(err, "Error marshaling layer config")
}
if err := d.sendBytes(filepath.Join(layerID, legacyConfigFileName), b); err != nil {
return nil, "", errors.Wrap(err, "Error writing config json file")
}
lastLayerID = layerID
}
return layerPaths, lastLayerID, nil
}
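A minimal sketch of the chain-ID fold used above (chainIDs is an illustrative helper; the per-layer computation matches docker/docker/layer.CreateChainID, as the comment notes): each ID depends on the whole history, so a DiffID that appears twice still yields distinct legacy layer IDs.

package main

import (
	"fmt"

	digest "github.com/opencontainers/go-digest"
)

func chainIDs(diffIDs []digest.Digest) []digest.Digest {
	var ids []digest.Digest
	var chainID digest.Digest
	for _, d := range diffIDs {
		if chainID == "" {
			chainID = d
		} else {
			chainID = digest.Canonical.FromString(chainID.String() + " " + d.String())
		}
		ids = append(ids, chainID)
	}
	return ids
}

func main() {
	a := digest.FromString("layer-a") // placeholder DiffID
	fmt.Println(chainIDs([]digest.Digest{a, a})) // two distinct chain IDs despite a repeated DiffID
}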
type tarFI struct {
path string
size int64
path string
size int64
isSymlink bool
}
func (t *tarFI) Name() string {
@@ -208,6 +339,9 @@ func (t *tarFI) Size() int64 {
return t.size
}
func (t *tarFI) Mode() os.FileMode {
if t.isSymlink {
return os.ModeSymlink
}
return 0444
}
func (t *tarFI) ModTime() time.Time {
@@ -220,6 +354,21 @@ func (t *tarFI) Sys() interface{} {
return nil
}
// sendSymlink sends a symlink into the tar stream.
func (d *Destination) sendSymlink(path string, target string) error {
hdr, err := tar.FileInfoHeader(&tarFI{path: path, size: 0, isSymlink: true}, target)
if err != nil {
return err
}
logrus.Debugf("Sending as tar link %s -> %s", path, target)
return d.tar.WriteHeader(hdr)
}
// sendBytes sends a path into the tar stream.
func (d *Destination) sendBytes(path string, b []byte) error {
return d.sendFile(path, int64(len(b)), bytes.NewReader(b))
}
// sendFile sends a file into the tar stream.
func (d *Destination) sendFile(path string, expectedSize int64, stream io.Reader) error {
hdr, err := tar.FileInfoHeader(&tarFI{path: path, size: expectedSize}, "")
@@ -230,6 +379,7 @@ func (d *Destination) sendFile(path string, expectedSize int64, stream io.Reader
if err := d.tar.WriteHeader(hdr); err != nil {
return err
}
// TODO: This can take quite some time, and should ideally be cancellable using a context.Context.
size, err := io.Copy(d.tar, stream)
if err != nil {
return err
@@ -243,7 +393,7 @@ func (d *Destination) sendFile(path string, expectedSize int64, stream io.Reader
// PutSignatures adds the given signatures to the docker tarfile (currently not
// supported). MUST be called after PutManifest (signatures reference manifest
// contents)
func (d *Destination) PutSignatures(signatures [][]byte) error {
func (d *Destination) PutSignatures(ctx context.Context, signatures [][]byte) error {
if len(signatures) != 0 {
return errors.Errorf("Storing signatures for docker tar files is not supported")
}
@@ -252,6 +402,6 @@ func (d *Destination) PutSignatures(signatures [][]byte) error {
// Commit finishes writing data to the underlying io.Writer.
// It is the caller's responsibility to close it, if necessary.
func (d *Destination) Commit() error {
func (d *Destination) Commit(ctx context.Context) error {
return d.tar.Close()
}

View File

@@ -9,7 +9,9 @@ import (
"io/ioutil"
"os"
"path"
"sync"
"github.com/containers/image/internal/tmpdir"
"github.com/containers/image/manifest"
"github.com/containers/image/pkg/compression"
"github.com/containers/image/types"
@@ -19,8 +21,11 @@ import (
// Source is a partial implementation of types.ImageSource for reading from tarPath.
type Source struct {
tarPath string
tarPath string
removeTarPathOnClose bool // Remove temp file on close if true
cacheDataLock sync.Once // Atomic way to ensure that ensureCachedDataIsPresent is only invoked once
// The following data is only available after ensureCachedDataIsPresent() succeeds
cacheDataResult error // The return value of ensureCachedDataIsPresent, since it should be as safe to cache as the side effects
tarManifest *ManifestItem // nil if not available yet.
configBytes []byte
configDigest digest.Digest
@@ -35,14 +40,75 @@ type layerInfo struct {
size int64
}
// NewSource returns a tarfile.Source for the specified path.
func NewSource(path string) *Source {
// TODO: We could add support for multiple images in a single archive, so
// that people could use docker-archive:opensuse.tar:opensuse:leap as
// the source of an image.
return &Source{
tarPath: path,
// TODO: We could add support for multiple images in a single archive, so
// that people could use docker-archive:opensuse.tar:opensuse:leap as
// the source of an image.
// To do for both the NewSourceFromFile and NewSourceFromStream functions
// NewSourceFromFile returns a tarfile.Source for the specified path.
func NewSourceFromFile(path string) (*Source, error) {
file, err := os.Open(path)
if err != nil {
return nil, errors.Wrapf(err, "error opening file %q", path)
}
defer file.Close()
// If the file is already not compressed we can just return the file itself
// as a source. Otherwise we pass the stream to NewSourceFromStream.
stream, isCompressed, err := compression.AutoDecompress(file)
if err != nil {
return nil, errors.Wrapf(err, "Error detecting compression for file %q", path)
}
defer stream.Close()
if !isCompressed {
return &Source{
tarPath: path,
}, nil
}
return NewSourceFromStream(stream)
}
// NewSourceFromStream returns a tarfile.Source for the specified inputStream,
// which can be either compressed or uncompressed. The caller can close the
// inputStream immediately after NewSourceFromStream returns.
func NewSourceFromStream(inputStream io.Reader) (*Source, error) {
// FIXME: use SystemContext here.
// Save inputStream to a temporary file
tarCopyFile, err := ioutil.TempFile(tmpdir.TemporaryDirectoryForBigFiles(), "docker-tar")
if err != nil {
return nil, errors.Wrap(err, "error creating temporary file")
}
defer tarCopyFile.Close()
succeeded := false
defer func() {
if !succeeded {
os.Remove(tarCopyFile.Name())
}
}()
// In order to be compatible with docker-load, we need to support
// auto-decompression (it's also a nice quality-of-life thing to avoid
// giving users really confusing "invalid tar header" errors).
uncompressedStream, _, err := compression.AutoDecompress(inputStream)
if err != nil {
return nil, errors.Wrap(err, "Error auto-decompressing input")
}
defer uncompressedStream.Close()
// Copy the plain archive to the temporary file.
//
// TODO: This can take quite some time, and should ideally be cancellable
// using a context.Context.
if _, err := io.Copy(tarCopyFile, uncompressedStream); err != nil {
return nil, errors.Wrapf(err, "error copying contents to temporary file %q", tarCopyFile.Name())
}
succeeded = true
return &Source{
tarPath: tarCopyFile.Name(),
removeTarPathOnClose: true,
}, nil
}
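A hypothetical caller (the path is illustrative): gzip input is detected and staged to a temporary file, which Close() then removes.

package main

import (
	"log"

	"github.com/containers/image/docker/tarfile"
)

func main() {
	src, err := tarfile.NewSourceFromFile("/tmp/opensuse.tar.gz")
	if err != nil {
		log.Fatal(err)
	}
	defer src.Close() // removes the temporary decompressed copy, if one was made
	items, err := src.LoadTarManifest()
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("archive contains %d manifest item(s)", len(items))
}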
// tarReadCloser is a way to close the backing file of a tar.Reader when the user no longer needs the tar component.
@@ -136,43 +202,46 @@ func (s *Source) readTarComponent(path string) ([]byte, error) {
// ensureCachedDataIsPresent loads data necessary for any of the public accessors.
func (s *Source) ensureCachedDataIsPresent() error {
if s.tarManifest != nil {
return nil
}
s.cacheDataLock.Do(func() {
// Read and parse manifest.json
tarManifest, err := s.loadTarManifest()
if err != nil {
s.cacheDataResult = err
return
}
// Read and parse manifest.json
tarManifest, err := s.loadTarManifest()
if err != nil {
return err
}
// Check to make sure length is 1
if len(tarManifest) != 1 {
s.cacheDataResult = errors.Errorf("Unexpected tar manifest.json: expected 1 item, got %d", len(tarManifest))
return
}
// Check to make sure length is 1
if len(tarManifest) != 1 {
return errors.Errorf("Unexpected tar manifest.json: expected 1 item, got %d", len(tarManifest))
}
// Read and parse config.
configBytes, err := s.readTarComponent(tarManifest[0].Config)
if err != nil {
s.cacheDataResult = err
return
}
var parsedConfig manifest.Schema2Image // There's a lot of info there, but we only really care about layer DiffIDs.
if err := json.Unmarshal(configBytes, &parsedConfig); err != nil {
s.cacheDataResult = errors.Wrapf(err, "Error decoding tar config %s", tarManifest[0].Config)
return
}
// Read and parse config.
configBytes, err := s.readTarComponent(tarManifest[0].Config)
if err != nil {
return err
}
var parsedConfig manifest.Schema2Image // There's a lot of info there, but we only really care about layer DiffIDs.
if err := json.Unmarshal(configBytes, &parsedConfig); err != nil {
return errors.Wrapf(err, "Error decoding tar config %s", tarManifest[0].Config)
}
knownLayers, err := s.prepareLayerData(&tarManifest[0], &parsedConfig)
if err != nil {
s.cacheDataResult = err
return
}
knownLayers, err := s.prepareLayerData(&tarManifest[0], &parsedConfig)
if err != nil {
return err
}
// Success; commit.
s.tarManifest = &tarManifest[0]
s.configBytes = configBytes
s.configDigest = digest.FromBytes(configBytes)
s.orderedDiffIDList = parsedConfig.RootFS.DiffIDs
s.knownLayers = knownLayers
return nil
// Success; commit.
s.tarManifest = &tarManifest[0]
s.configBytes = configBytes
s.configDigest = digest.FromBytes(configBytes)
s.orderedDiffIDList = parsedConfig.RootFS.DiffIDs
s.knownLayers = knownLayers
})
return s.cacheDataResult
}
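The caching pattern above, distilled (lazy and get are illustrative names): sync.Once guarantees the expensive load runs at most once even under concurrent callers, and caching the error alongside the data means every caller observes the same outcome.

package lazyload

import "sync"

type lazy struct {
	once sync.Once
	data []byte
	err  error
}

// get runs load exactly once; later calls return the cached result or error.
func (l *lazy) get(load func() ([]byte, error)) ([]byte, error) {
	l.once.Do(func() { l.data, l.err = load() })
	return l.data, l.err
}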
// loadTarManifest loads and decodes the manifest.json.
@@ -189,6 +258,14 @@ func (s *Source) loadTarManifest() ([]ManifestItem, error) {
return items, nil
}
// Close removes resources associated with an initialized Source, if any.
func (s *Source) Close() error {
if s.removeTarPathOnClose {
return os.Remove(s.tarPath)
}
return nil
}
// LoadTarManifest loads and decodes the manifest.json
func (s *Source) LoadTarManifest() ([]ManifestItem, error) {
return s.loadTarManifest()
@@ -236,7 +313,25 @@ func (s *Source) prepareLayerData(tarManifest *ManifestItem, parsedConfig *manif
return nil, err
}
if li, ok := unknownLayerSizes[h.Name]; ok {
li.size = h.Size
// Since GetBlob will decompress layers that are compressed we need
// to do the decompression here as well, otherwise we will
// incorrectly report the size. Pretty critical, since tools like
// umoci always compress layer blobs. Obviously we only bother with
// the slower method of checking if it's compressed.
uncompressedStream, isCompressed, err := compression.AutoDecompress(t)
if err != nil {
return nil, errors.Wrapf(err, "Error auto-decompressing %s to determine its size", h.Name)
}
defer uncompressedStream.Close()
uncompressedSize := h.Size
if isCompressed {
uncompressedSize, err = io.Copy(ioutil.Discard, uncompressedStream)
if err != nil {
return nil, errors.Wrapf(err, "Error reading %s to find its size", h.Name)
}
}
li.size = uncompressedSize
delete(unknownLayerSizes, h.Name)
}
}
@@ -251,9 +346,9 @@ func (s *Source) prepareLayerData(tarManifest *ManifestItem, parsedConfig *manif
// It may use a remote (= slow) service.
// If instanceDigest is not nil, it contains a digest of the specific manifest instance to retrieve (when the primary manifest is a manifest list);
// this never happens if the primary manifest is not a manifest list (e.g. if the source never returns manifest lists).
func (s *Source) GetManifest(instanceDigest *digest.Digest) ([]byte, string, error) {
func (s *Source) GetManifest(ctx context.Context, instanceDigest *digest.Digest) ([]byte, string, error) {
if instanceDigest != nil {
// How did we even get here? GetManifest(nil) has returned a manifest.DockerV2Schema2MediaType.
// How did we even get here? GetManifest(ctx, nil) has returned a manifest.DockerV2Schema2MediaType.
return nil, "", errors.Errorf(`Manifest lists are not supported by "docker-daemon:"`)
}
if s.generatedManifest == nil {
@@ -290,20 +385,33 @@ func (s *Source) GetManifest(instanceDigest *digest.Digest) ([]byte, string, err
return s.generatedManifest, manifest.DockerV2Schema2MediaType, nil
}
type readCloseWrapper struct {
// uncompressedReadCloser is an io.ReadCloser that closes both the uncompressed stream and the underlying input.
type uncompressedReadCloser struct {
io.Reader
closeFunc func() error
underlyingCloser func() error
uncompressedCloser func() error
}
func (r readCloseWrapper) Close() error {
if r.closeFunc != nil {
return r.closeFunc()
func (r uncompressedReadCloser) Close() error {
var res error
if err := r.uncompressedCloser(); err != nil {
res = err
}
return nil
if err := r.underlyingCloser(); err != nil && res == nil {
res = err
}
return res
}
// HasThreadSafeGetBlob indicates whether GetBlob can be executed concurrently.
func (s *Source) HasThreadSafeGetBlob() bool {
return true
}
// GetBlob returns a stream for the specified blob, and the blob's size (or -1 if unknown).
func (s *Source) GetBlob(info types.BlobInfo) (io.ReadCloser, int64, error) {
// The Digest field in BlobInfo is guaranteed to be provided, Size may be -1 and MediaType may be optionally provided.
// May update BlobInfoCache, preferably after it knows for certain that a blob truly exists at a specific location.
func (s *Source) GetBlob(ctx context.Context, info types.BlobInfo, cache types.BlobInfoCache) (io.ReadCloser, int64, error) {
if err := s.ensureCachedDataIsPresent(); err != nil {
return nil, 0, err
}
@@ -313,10 +421,16 @@ func (s *Source) GetBlob(info types.BlobInfo) (io.ReadCloser, int64, error) {
}
if li, ok := s.knownLayers[info.Digest]; ok { // diffID is a digest of the uncompressed tarball,
stream, err := s.openTarComponent(li.path)
underlyingStream, err := s.openTarComponent(li.path)
if err != nil {
return nil, 0, err
}
closeUnderlyingStream := true
defer func() {
if closeUnderlyingStream {
underlyingStream.Close()
}
}()
// In order to handle the fact that digests != diffIDs (and thus that a
// caller which is trying to verify the blob will run into problems),
@@ -330,22 +444,17 @@ func (s *Source) GetBlob(info types.BlobInfo) (io.ReadCloser, int64, error) {
// be verifying a "digest" which is not the actual layer's digest (but
// is instead the DiffID).
decompressFunc, reader, err := compression.DetectCompression(stream)
uncompressedStream, _, err := compression.AutoDecompress(underlyingStream)
if err != nil {
return nil, 0, errors.Wrapf(err, "Detecting compression in blob %s", info.Digest)
return nil, 0, errors.Wrapf(err, "Error auto-decompressing blob %s", info.Digest)
}
newStream := uncompressedReadCloser{
Reader: uncompressedStream,
underlyingCloser: underlyingStream.Close,
uncompressedCloser: uncompressedStream.Close,
}
closeUnderlyingStream = false
return newStream, li.size, nil
}
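The decompression step matters because the manifest this source generates refers to layers by DiffID, the digest of the uncompressed tar, which in general differs from the digest of the stored (possibly compressed) blob. A small self-contained sketch of the distinction (the sample bytes and gzip are illustrative assumptions; go-digest is the digest library used throughout containers/image):
```
package main

import (
	"bytes"
	"compress/gzip"
	"fmt"

	digest "github.com/opencontainers/go-digest"
)

func main() {
	uncompressed := []byte("pretend this is a layer tarball")

	var compressed bytes.Buffer
	gz := gzip.NewWriter(&compressed)
	if _, err := gz.Write(uncompressed); err != nil {
		panic(err)
	}
	gz.Close()

	diffID := digest.FromBytes(uncompressed)           // what the generated manifest references
	blobDigest := digest.FromBytes(compressed.Bytes()) // what the stored blob hashes to

	// A caller verifying against the DiffID must read the decompressed
	// stream, which is what the AutoDecompress call above arranges.
	fmt.Println("DiffID:     ", diffID)
	fmt.Println("blob digest:", blobDigest)
}
```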
@@ -359,7 +468,7 @@ func (s *Source) GetBlob(info types.BlobInfo) (io.ReadCloser, int64, error) {
// (e.g. if the source never returns manifest lists).
func (s *Source) GetSignatures(ctx context.Context, instanceDigest *digest.Digest) ([][]byte, error) {
if instanceDigest != nil {
// How did we even get here? GetManifest(ctx, nil) has returned a manifest.DockerV2Schema2MediaType.
return nil, errors.Errorf(`Manifest lists are not supported by "docker-daemon:"`)
}
return [][]byte{}, nil


@@ -9,11 +9,11 @@ import (
// Based on github.com/docker/docker/image/tarexport/tarexport.go
const (
manifestFileName = "manifest.json"
legacyLayerFileName = "layer.tar"
legacyConfigFileName = "json"
legacyVersionFileName = "VERSION"
legacyRepositoriesFileName = "repositories"
)
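For orientation, these names refer to the files inside a legacy `docker save` tarball; the layout looks roughly like this (the layer directory placeholder below is illustrative):
```
archive.tar
├── manifest.json    <- manifestFileName
├── repositories     <- legacyRepositoriesFileName
└── <layer-id>/      <- one directory per layer, named by layer ID
    ├── VERSION      <- legacyVersionFileName
    ├── json         <- legacyConfigFileName
    └── layer.tar    <- legacyLayerFileName
```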
// ManifestItem is an element of the array stored in the top-level manifest.json file.


@@ -0,0 +1,66 @@
{
"title": "JSON embedded in an atomic container signature",
"description": "This schema is a supplement to atomic-signature.md in this directory.\n\nConsumers of the JSON MUST use the processing rules documented in atomic-signature.md, especially the requirements for the 'critical' subjobject.\n\nWhenever this schema and atomic-signature.md, or the github.com/containers/image/signature implementation, differ,\nit is the atomic-signature.md document, or the github.com/containers/image/signature implementation, which governs.\n\nUsers are STRONGLY RECOMMENDED to use the github.com/containeres/image/signature implementation instead of writing\ntheir own, ESPECIALLY when consuming signatures, so that the policy.json format can be shared by all image consumers.\n",
"type": "object",
"required": [
"critical",
"optional"
],
"additionalProperties": false,
"properties": {
"critical": {
"type": "object",
"required": [
"type",
"image",
"identity"
],
"additionalProperties": false,
"properties": {
"type": {
"type": "string",
"enum": [
"atomic container signature"
]
},
"image": {
"type": "object",
"required": [
"docker-manifest-digest"
],
"additionalProperties": false,
"properties": {
"docker-manifest-digest": {
"type": "string"
}
}
},
"identity": {
"type": "object",
"required": [
"docker-reference"
],
"additionalProperties": false,
"properties": {
"docker-reference": {
"type": "string"
}
}
}
}
},
"optional": {
"type": "object",
"description": "All members are optional, but if they are included, they must be valid.",
"additionalProperties": true,
"properties": {
"creator": {
"type": "string"
},
"timestamp": {
"type": "integer"
}
}
}
}
}
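For reference, a minimal document that should validate against this schema could look like the following (the digest, reference, creator, and timestamp values are made up for illustration):
```
{
  "critical": {
    "type": "atomic container signature",
    "image": {
      "docker-manifest-digest": "sha256:817a12c32a39bbe394944ba49de563e085f1d3c5266eb8e9723256bc4448680e"
    },
    "identity": {
      "docker-reference": "docker.io/library/busybox:latest"
    }
  },
  "optional": {
    "creator": "atomic 0.1.13-dev",
    "timestamp": 1458239713
  }
}
```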


@@ -0,0 +1,28 @@
% containers-certs.d(5)
# NAME
containers-certs.d - Directory for storing custom container-registry TLS configurations
# DESCRIPTION
A custom TLS configuration for a container registry can be configured by creating a directory under `/etc/containers/certs.d`.
The name of the directory must correspond to the `host:port` of the registry (e.g., `my-registry.com:5000`).
## Directory Structure
A certs directory can contain one or more files with the following extensions:
* `*.crt` files with this extension will be interpreted as CA certificates
* `*.cert` files with this extension will be interpreted as client certificates
* `*.key` files with this extension will be interpreted as client keys
Note that a client certificate and its key are paired by file name (e.g., `client.cert` and `client.key`).
An example setup for a registry running at `my-registry.com:5000` may look as follows:
```
/etc/containers/certs.d/ <- Certificate directory
└── my-registry.com:5000 <- Hostname:port
├── client.cert <- Client certificate
├── client.key <- Client key
└── ca.crt <- Certificate authority that signed the registry certificate
```
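To put this layout in place, it is enough to create the directory and copy the files; the `skopeo` invocation below is one illustrative consumer (its `--cert-dir` option points at the per-registry directory; the image name is made up):
```
$ sudo mkdir -p /etc/containers/certs.d/my-registry.com:5000
$ sudo cp client.cert client.key ca.crt /etc/containers/certs.d/my-registry.com:5000/
$ skopeo inspect --cert-dir=/etc/containers/certs.d/my-registry.com:5000 \
      docker://my-registry.com:5000/my-image:latest
```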
# HISTORY
Feb 2019, Originally compiled by Valentin Rothberg <rothberg@redhat.com>
