Compare commits

...

323 Commits

Author SHA1 Message Date
Miloslav Trmač
875bb42594 Release v1.6.2
- Bump github.com/prometheus/client_golang to v1.11.1
- Update the command to install golint

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2022-03-31 20:25:45 +02:00
Miloslav Trmač
1186cc6bce Merge pull request #1612 from mtrmac/prometheus-bump
[release-1.6] Bump github.com/prometheus/client_golang to v1.11.1
2022-03-31 20:24:24 +02:00
Miloslav Trmač
311f61f1aa Update the command to install golint
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2022-03-31 19:59:11 +02:00
Miloslav Trmač
796c9cc041 Bump github.com/prometheus/client_golang to v1.11.1
Related to CVE-2022-21698 ; note that the vulnerable
code is not actually reachable in Skopeo.

> go get github.com/prometheus/client_golang@v1.11.1
> make vendor

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2022-03-31 19:05:17 +02:00
Daniel J Walsh
49084d2cd8 Bump to v1.6.1
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
2022-02-16 12:02:57 -05:00
Daniel J Walsh
8b904e908e Merge pull request #1568 from mtrmac/resolved-workaround
Resolved workaround
2022-02-15 14:18:05 -05:00
Miloslav Trmač
23183072fb Work around systemd-resolved's handling of .invalid domains
... per https://github.com/containers/skopeo/pull/1558 .

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2022-02-15 16:54:51 +01:00
Miloslav Trmač
3be97ce281 Beautify a few calls
Use the sort-of-convention of keeping the output-matching regex,
and the command, on separate lines.

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2022-02-15 16:54:51 +01:00
Miloslav Trmač
b46506c077 Merge pull request #1572 from mtrmac/inspect-expect-config
Don't expect the config blob to be listed in (skopeo inspect)
2022-02-15 16:54:22 +01:00
Miloslav Trmač
49d9fa9faf Only look for the layer digests in the Layers field.
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2022-02-15 16:08:35 +01:00
Miloslav Trmač
77363128e1 Don't expect the config blob to be listed in (skopeo inspect)
... because it currently isn't.

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2022-02-15 16:08:35 +01:00
Daniel J Walsh
59a452276b Merge pull request #1558 from cevich/new_python_images
Cirrus: Use updated VM images
2022-02-10 14:23:18 -05:00
Chris Evich
0f363498c2 Cirrus: Use updated VM images
Mainly this is to confirm some changes needed for the podman-py CI
setup don't disrupt operations here. Ref:

https://github.com/containers/automation_images/pull/111

Note: Glibc resolver configuration has changed from previous images.  An
additional setup command was added to remove systemd-resolved from the
chain.

Signed-off-by: Chris Evich <cevich@redhat.com>
2022-02-10 13:38:12 -05:00
Daniel J Walsh
a2dccca2e6 Merge pull request #1565 from TomSweeneyRedHat/dev/tsweeney/commonup
Bump c/common to v0.47.4
2022-02-10 09:37:04 -05:00
tomsweeneyredhat
27b77f2bde Bump c/common to v0.47.4
As the title says

Signed-off-by: tomsweeneyredhat <tsweeney@redhat.com>
2022-02-09 19:23:20 -05:00
Miloslav Trmač
6eda759dd2 Merge pull request #1564 from edsantiago/skip_sif_on_rhel
tests: skip sif test on RHEL
2022-02-07 22:25:57 +01:00
Ed Santiago
de71408294 tests: skip sif test on RHEL
(or, more precisely, if the fakeroot binary is not in $PATH).

Solves RHEL gating-test failure.

Signed-off-by: Ed Santiago <santiago@redhat.com>
2022-02-07 13:04:15 -07:00
Daniel J Walsh
13cd098079 Merge pull request #1561 from mtrmac/release
Release v1.6.0
2022-02-02 17:10:11 -05:00
Miloslav Trmač
697ef59525 Bump to v1.6.1-dev
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2022-02-02 22:43:04 +01:00
Miloslav Trmač
e4b79d7741 Release v1.6.0
Highlights:
- A new sif: transport
- New options --multi-arch, --preserve-digests, --sign-passphrase-file

- Use a dynamic temp dir for test
- Add an option to allow copying image indexes alone
- proxy: Add a GetFullConfig method
- proxy: Also bump compatible semver
- Add option to preserve digests on copy
- Run codespell on code
- prompt-less signing via passphrase file
- add a SIF systemtest
- Merge pull request #1550 from vrothberg/sif-test
- Improve the documentation of the argument to (skopeo inspect)
- Document where various fields of (skopeo inspect) come from
- Improve the documentation of boolean flags

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2022-02-02 22:39:41 +01:00
Daniel J Walsh
bf24ce9ff2 Merge pull request #1560 from rhatdan/VENDOR
Bump version of containers/image and containers/common
2022-02-02 14:40:51 -05:00
Daniel J Walsh
162bbab3a6 Bump version of containers/image and containers/common
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
2022-02-02 14:40:05 -05:00
Valentin Rothberg
cf19643e76 Merge pull request #1555 from mtrmac/inspect-docs
Improve documentation of skopeo inspect
2022-02-01 09:56:19 +01:00
Valentin Rothberg
afc18ceed3 Merge pull request #1557 from mtrmac/compress-docs
Improve the documentation of boolean flags
2022-02-01 09:55:29 +01:00
Miloslav Trmač
004519f143 Improve the documentation of boolean flags
The Go behavior of boolean flags is as follows:

Accepted values are --flag, which is the same as --flag=true, and --flag=false,
which is the default (except for OptionalBoolFlag).
--flag {false,true} is parsed as --flag=true with a non-option {false,true} argument.

So, for almost all flags, document them just as --flag, not
mentioning the [={false,true}] part, because users can just
omit =true, or the whole flag instead of =false.

OTOH, for tls-verify, document only the tls-verify={true,false}
variant, because the primary use is tls-verify=false, and because
tls-verify is not "the default", but equivalent to an explicit
tls-verify=true (overriding registries.conf).

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2022-02-01 02:16:45 +01:00
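For illustration only (not part of the commit), the parsing behavior described above means the following invocations are equivalent; the flag names are real skopeo flags, while the image references are placeholders:

```
# --flag is the same as --flag=true; omitting the flag keeps the default (false)
skopeo copy --remove-signatures      docker://example.com/app:1.0 dir:/tmp/app
skopeo copy --remove-signatures=true docker://example.com/app:1.0 dir:/tmp/app

# --tls-verify is documented only in its explicit form
skopeo inspect --tls-verify=false docker://localhost:5000/app:1.0
```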
Miloslav Trmač
9db60ec007 Document where various fields of (skopeo inspect) come from
... and suggest how to deal with other-architecture images,
a fairly frequent point of confusion.

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2022-02-01 02:16:35 +01:00
Miloslav Trmač
cb74933b41 Improve the documentation of the argument to (skopeo inspect)
Don't repeat ourselves, and actually point to some documentation.

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2022-02-01 02:16:35 +01:00
Miloslav Trmač
8fb455174d Merge pull request #1556 from rhatdan/VENDOR
Update vendor of containers/storage and containers/common
2022-02-01 01:32:11 +01:00
Daniel J Walsh
7f4db3db9d Update vendor of containers/storage and containers/common
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
2022-01-31 17:27:05 -05:00
Daniel J Walsh
96cdfac7d9 Merge pull request #1550 from vrothberg/sif-test
add a SIF systemtest
2022-01-27 08:46:27 -05:00
Valentin Rothberg
a4476c358c add a SIF systemtest
To make sure that the basic functionality is exercised in Skopeo and
c/image CI.

Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
2022-01-27 10:02:02 +01:00
Daniel J Walsh
1391aae0a5 Merge pull request #1551 from rhatdan/VENDOR
Update vendor of containers/common
2022-01-26 12:50:22 -05:00
Daniel J Walsh
042f481629 Update vendor of containers/common
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
2022-01-26 12:49:18 -05:00
Daniel J Walsh
3518c50688 Merge pull request #1547 from containers/dependabot/go_modules/github.com/containers/storage-1.38.1
Bump github.com/containers/storage from 1.38.0 to 1.38.1
2022-01-26 11:46:13 -05:00
Chris Evich
327f87d79b Merge pull request #1549 from cevich/fix_yaml
Github workflow: Fix yaml syntax
2022-01-26 11:26:04 -05:00
Chris Evich
bd8ed664d5 Github workflow: Fix yaml syntax
Same problem as addressed in
https://github.com/containers/podman/pull/13005, which I neglected to include
in https://github.com/containers/skopeo/pull/1546 for whatever reason.
This commit makes it right.

Signed-off-by: Chris Evich <cevich@redhat.com>
2022-01-26 10:17:52 -05:00
dependabot[bot]
b51707d50d Bump github.com/containers/storage from 1.38.0 to 1.38.1
Bumps [github.com/containers/storage](https://github.com/containers/storage) from 1.38.0 to 1.38.1.
- [Release notes](https://github.com/containers/storage/releases)
- [Changelog](https://github.com/containers/storage/blob/main/docs/containers-storage-changes.md)
- [Commits](https://github.com/containers/storage/compare/v1.38.0...v1.38.1)

---
updated-dependencies:
- dependency-name: github.com/containers/storage
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2022-01-26 12:42:07 +00:00
Miloslav Trmač
2c84bc232c Merge pull request #1540 from vrothberg/passphrase
prompt-less signing via passphrase file
2022-01-26 13:41:08 +01:00
Valentin Rothberg
bb49923af4 prompt-less signing via passphrase file
To support signing images without prompting the user, add CLI flags for
providing a passphrase file.

Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
2022-01-26 08:30:49 +01:00
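A hedged usage sketch of the new flag (key ID, paths, and image references are placeholders):

```
# sign during copy without an interactive passphrase prompt
skopeo copy --sign-by security@example.com \
    --sign-passphrase-file /run/secrets/signing-passphrase \
    docker://example.com/app:1.0 docker://registry.example.com/app:1.0
```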
Daniel J Walsh
639aabbaf3 Merge pull request #1546 from cevich/notify_on_error
[CI:DOCS] Github-workflow: Report both failures and errors
2022-01-25 19:50:18 -05:00
Chris Evich
cd58349b25 Github-workflow: Report both failures and errors
Port of changes from https://github.com/containers/podman/pull/12997 and
https://github.com/containers/podman/pull/13005 to the workflow in this
repository.

***Note***: It is impractical to automatically verify these changes until
they're merged into `main`, though the similar changes made in the
podman repo have been manually confirmed to function as intended.

Signed-off-by: Chris Evich <cevich@redhat.com>
2022-01-25 14:56:15 -05:00
Daniel J Walsh
4b79ed7d7d Merge pull request #1543 from rhatdan/codespell
Run codespell on code
2022-01-21 15:29:35 -05:00
Daniel J Walsh
2858904e4b Run codespell on code
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
2022-01-21 07:49:49 -05:00
Miloslav Trmač
15296d9876 Merge pull request #1542 from rhatdan/VENDOR
Update the vendor of containers/common
2022-01-20 20:11:19 +01:00
Daniel J Walsh
923c58a8ee Update the vendor of containers/common
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
2022-01-20 13:30:07 -05:00
Miloslav Trmač
43726bbc27 Merge pull request #1541 from containers/dependabot/go_modules/github.com/containers/storage-1.38.0
Bump github.com/containers/storage from 1.37.0 to 1.38.0
2022-01-20 13:27:03 +01:00
dependabot[bot]
1bf18b7ef8 Bump github.com/containers/storage from 1.37.0 to 1.38.0
Bumps [github.com/containers/storage](https://github.com/containers/storage) from 1.37.0 to 1.38.0.
- [Release notes](https://github.com/containers/storage/releases)
- [Changelog](https://github.com/containers/storage/blob/main/docs/containers-storage-changes.md)
- [Commits](https://github.com/containers/storage/compare/v1.37.0...v1.38.0)

---
updated-dependencies:
- dependency-name: github.com/containers/storage
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2022-01-20 09:10:51 +00:00
Daniel J Walsh
df4d82b960 Merge pull request #1536 from mtrmac/dep-updates
Update github.com/containerd/containerd to 1.5.9
2022-01-07 10:53:39 -05:00
Miloslav Trmač
d32c56b47f Update github.com/containerd/containerd to 1.5.9
> go get github.com/containerd/containerd@latest
> make vendor

... because 1.5.9 contains a vulnerability fix, and we
want to silence scanners.

NOTE: Skopeo DOES NOT use the vulnerable code that
was fixed in containerd 1.5.9, so it is NOT vulnerable to
GHSA-mvff-h3cj-wj9c .

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2022-01-06 22:11:46 +01:00
Miloslav Trmač
6007e792e4 Fix the pseudo-version of github.com/opencontainers/image-spec
> go get github.com/opencontainers/image-spec@a5463b7f9c8451553af3adcba2cab538469df00c
> make vendor

Primarily we want to use a 1.0.3-0... version rather than 1.0.2-0..., so that
dependencies on 1.0.2 don't cause Skopeo to use 1.0.2 instead of
the later main-branch code.

Go has some logic to prevent using pseudo-versions that don't follow
a released version (which is the case here, where 1.0.2 is on a branch,
and we want to use a main-branch commit instead); luckily some later
PRs on the main branch include the full contents of the 1.0.2 branch.
So, update a bit further along the main branch.

This particular commit corresponds to the choice in
https://github.com/containers/image/pull/1433 .

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2022-01-06 22:11:14 +01:00
Daniel J Walsh
77f881e61c Merge pull request #1532 from mtrmac/bump-runc
Update github.com/opencontainers/runc to v1.0.3
2022-01-06 07:53:40 -05:00
Miloslav Trmač
5aa06a51f4 Update github.com/opencontainers/runc to v1.0.3
... to silence warnings about CVE-2021-43784
/ GHSA-v95c-p5hm-xq8f .

NOTE: The vulnerable code was not used in this package,
so Skopeo has not been vulnerable to this issue.

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2022-01-03 12:51:52 +01:00
Miloslav Trmač
e422e44fca Merge pull request #1527 from containers/dependabot/go_modules/github.com/spf13/cobra-1.3.0
Bump github.com/spf13/cobra from 1.2.1 to 1.3.0
2021-12-15 22:11:56 +01:00
dependabot[bot]
f6a84289eb Bump github.com/spf13/cobra from 1.2.1 to 1.3.0
Bumps [github.com/spf13/cobra](https://github.com/spf13/cobra) from 1.2.1 to 1.3.0.
- [Release notes](https://github.com/spf13/cobra/releases)
- [Changelog](https://github.com/spf13/cobra/blob/master/CHANGELOG.md)
- [Commits](https://github.com/spf13/cobra/compare/v1.2.1...v1.3.0)

---
updated-dependencies:
- dependency-name: github.com/spf13/cobra
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2021-12-15 09:33:22 +00:00
Daniel J Walsh
2689eb367f Merge pull request #1526 from containers/dependabot/go_modules/github.com/docker/docker-20.10.12incompatible
Bump github.com/docker/docker from 20.10.11+incompatible to 20.10.12+incompatible
2021-12-14 15:07:33 -05:00
dependabot[bot]
c5b45c6c49 Bump github.com/docker/docker
Bumps [github.com/docker/docker](https://github.com/docker/docker) from 20.10.11+incompatible to 20.10.12+incompatible.
- [Release notes](https://github.com/docker/docker/releases)
- [Changelog](https://github.com/moby/moby/blob/master/CHANGELOG.md)
- [Commits](https://github.com/docker/docker/compare/v20.10.11...v20.10.12)

---
updated-dependencies:
- dependency-name: github.com/docker/docker
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2021-12-14 09:33:41 +00:00
Miloslav Trmač
037f518146 Merge pull request #1520 from Jamstah/preserve-digests
Add option to preserve digests on copy and sync
2021-12-10 17:06:53 +01:00
James Hewitt
c582c4844f Add option to preserve digests on copy
When enabled, if digests can't be preserved an error will be raised.

Signed-off-by: James Hewitt <james.hewitt@uk.ibm.com>
2021-12-07 13:16:10 +00:00
James Hewitt
2046bfdaaa Add option to preserve digests on copy
When enabled, if digests can't be preserved an error will be raised.

Signed-off-by: James Hewitt <james.hewitt@uk.ibm.com>
2021-12-07 13:16:10 +00:00
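A hedged usage sketch of the new flag (image references are placeholders):

```
# fail the copy rather than allow any digest to change (e.g. through re-compression)
skopeo copy --all --preserve-digests \
    docker://example.com/app:1.0 docker://mirror.example.com/app:1.0
```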
Daniel J Walsh
25868f17c0 Merge pull request #1523 from cgwalters/proxy-config-2
proxy: Add a GetFullConfig method
2021-12-07 06:33:55 -05:00
Colin Walters
e7dc5e79f2 proxy: Also bump compatible semver
To denote we have new API.
2021-12-06 20:59:17 -05:00
Colin Walters
3606b2d1de proxy: Add a GetFullConfig method
Sadly...I swear I had tested this at one point, but it was
*definitely* not the intention that we just return the container
runtime configuration.

I need a method to return the full image configuration.  At some point
I must have accidentally added a redundant `.Config`.

This whole new method `GetFullConfig` is like `GetConfig` but
returns the whole image configuration.  A specific motivation
here is that it's only in the image configuration that we can
stick arbitrary metadata (labels) that will survive a round trip through
docker schema v2.
2021-12-06 17:15:46 -05:00
Daniel J Walsh
f03d0401c1 Merge pull request #1521 from mtrmac/image-spec
Update opencontainers/image-spec
2021-12-02 13:51:26 -05:00
Miloslav Trmač
5c82c7728f Update github.com/containerd/containerd to v1.5.8
just to keep various dependency checkers happy.

> go get github.com/containerd/containerd@v1.5.8

NOTE: This is NOT a fix for CVE-2021-41190 / GHSA-77vh-xpmg-72qh ,
that was fixed in Skopeo 1.5.2.

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2021-12-02 19:03:33 +01:00
Miloslav Trmač
37d801c90b Update opencontainers/image-spec
... to a version past 1.0.2, just to keep various
dependency checkers happy.

> go get github.com/opencontainers/image-spec@v1.0.2-0.20211123152302-43a7dee1ec31

The commit is intended to match https://github.com/containers/image/pull/1419
to minimize churn.

NOTE: This is NOT a fix for CVE-2021-41190 / GHSA-77vh-xpmg-72qh ,
that was fixed in Skopeo 1.5.2.

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2021-12-02 18:56:36 +01:00
Miloslav Trmač
c3f65951bc Merge pull request #1511 from Jamstah/copy-sparse-manifest
Add an option to allow copying image indexes alone
2021-12-02 14:38:27 +01:00
James Hewitt
d94015466f Add an option to allow copying image indexes alone
The new --multi-arch option allows the user to select between copying the
image associated with the system platform, all images in the index, or
just the index itself without attempting to copy the images.

Signed-off-by: James Hewitt <james.hewitt@uk.ibm.com>
2021-11-27 15:38:42 +00:00
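A hedged usage sketch, assuming the --multi-arch values are named system, all, and index-only (image references are placeholders):

```
# copy every image referenced by the index
skopeo copy --multi-arch all docker://example.com/app:1.0 docker://mirror.example.com/app:1.0

# copy only the index itself, without the per-architecture images
skopeo copy --multi-arch index-only docker://example.com/app:1.0 dir:/tmp/app-index
```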
Miloslav Trmač
1d24e657fa Merge pull request #1518 from Jamstah/int-test-ignore
Stop test producing output in source directory
2021-11-27 16:13:31 +01:00
James Hewitt
4dcd28df92 Use a dynamic temp dir for test
This test incorrectly assumed that nothing would be written to disk,
but it was putting files into the source directory.

Signed-off-by: James Hewitt <james.hewitt@uk.ibm.com>
2021-11-27 13:44:28 +00:00
Miloslav Trmač
789ee8bea9 Bump to 1.5.3-dev 2021-11-26 11:49:38 -05:00
Miloslav Trmač
8a88191c84 Release 1.5.2
Includes a fix for CVE-2021-41190 / GHSA-77vh-xpmg-72qh .

- use fedora:latest in contrib/skopeoimage/*/Dockerfile
- Fix test bug that prevented useful diagnostics on registry fail
- proxy: Add an API to fetch the config upconverted to OCI
- proxy: Add support for manifest lists
- proxy: Uncapitalize all errors
- Cirrus: Bump Fedora to release 35 & Ubuntu to 21.10
- Update to c/image v5.17.0

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2021-11-26 11:49:38 -05:00
Miloslav Trmač
69728fdf93 Update to c/image v5.17.0
Includes a fix for CVE-2021-41190 / GHSA-77vh-xpmg-72qh .

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2021-11-22 14:19:37 -05:00
Daniel J Walsh
904c745bb0 Merge pull request #1499 from cevich/update_to_f35
Cirrus: Bump Fedora to release 35 & Ubuntu to 21.10
2021-11-19 11:57:13 -05:00
Chris Evich
47066f2d77 Cirrus: Bump Fedora to release 35 & Ubuntu to 21.10
The Fedora 35 cloud images have switched to UEFI boot with a GPT
partition. Formerly, all Fedora images included support for runtime
re-partitioning. However, the requirement to test alternate storage
has since been dropped/removed.  Rather than maintain a disused
feature and its supporting scripts, these Fedora VM images have reverted
to the default: automatically resize to 100% on boot.

Signed-off-by: Chris Evich <cevich@redhat.com>
2021-11-19 10:11:16 -05:00
Daniel J Walsh
fab344c335 Merge pull request #1509 from containers/dependabot/go_modules/github.com/docker/docker-20.10.11incompatible
Bump github.com/docker/docker from 20.10.10+incompatible to 20.10.11+incompatible
2021-11-18 09:18:17 -05:00
dependabot[bot]
adfa1d4e49 Bump github.com/docker/docker
Bumps [github.com/docker/docker](https://github.com/docker/docker) from 20.10.10+incompatible to 20.10.11+incompatible.
- [Release notes](https://github.com/docker/docker/releases)
- [Changelog](https://github.com/moby/moby/blob/master/CHANGELOG.md)
- [Commits](https://github.com/docker/docker/compare/v20.10.10...v20.10.11)

---
updated-dependencies:
- dependency-name: github.com/docker/docker
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2021-11-18 09:33:27 +00:00
Valentin Rothberg
002978258c Merge pull request #1495 from cgwalters/proxy-config
proxy: Add `GetConfig`, add manifest list support, add an integration test
2021-11-16 17:00:59 +01:00
Colin Walters
05a2ed4921 proxy: Uncapitalize all errors
By Go convention.

Signed-off-by: Colin Walters <walters@verbum.org>
2021-11-15 21:02:14 -05:00
Colin Walters
e9535f868b tests: Add new "procutils" that exposes PDEATHSIG
To fix compilation on macOS.

I think we actually want to use this pervasively in our tests
on Linux; it doesn't really matter when run inside a transient
container, but `PDEATHSIG` is useful for persistent containers (e.g.
toolbox) and when running outside of a pid namespace, e.g. on a host
system shell directly or in systemd.

Signed-off-by: Colin Walters <walters@verbum.org>
2021-11-15 21:02:14 -05:00
Colin Walters
fa86297c36 proxy_test: Test GetConfig
Now that we have a test suite, let's use it more.

Signed-off-by: Colin Walters <walters@verbum.org>
2021-11-15 21:02:14 -05:00
Colin Walters
2bb6f27d13 proxy_test: Add helper to read all from a reply
Prep for testing `GetConfig`.

Signed-off-by: Colin Walters <walters@verbum.org>
2021-11-15 21:02:14 -05:00
Colin Walters
f90725d80c proxy_test: Add a helper method to call without fd
To verify in one place.

Signed-off-by: Colin Walters <walters@verbum.org>
2021-11-15 21:02:14 -05:00
Colin Walters
644074cbb4 proxy: Add support for manifest lists
We need to support manifest lists. I'm not sure how I missed this
originally.  At least now we have integration tests that cover this.

The issue here is fairly subtle - the way c/image works right now,
`image.FromUnparsedImage` does pick a matching image from a list
by default.  But it also overrides `GetManifest()` to return the
original manifest list, which defeats our goal here.

Handle this by adding explicit manifest list support code.  We'll
want this anyways for future support for `GetRawManifest` or so
which exposes OCI manifest lists to the client.

Signed-off-by: Colin Walters <walters@verbum.org>
2021-11-15 21:02:14 -05:00
Colin Walters
83416068d3 tests/integration/proxy_test: New test that exercises proxy.go
I debated adding "reverse dependency testing" using
https://crates.io/crates/containers-image-proxy
but I think it's easier to reuse the test infrastructure here.

This also starts fleshing out a Go client for the proxy (not
that this is going to be something most Go projects would want
versus vendoring c/image...but hey, maybe it'll be useful).

What I hit in trying to use the main test images is that the proxy
currently fails on manifest lists, so I'll need to fix that.

Signed-off-by: Colin Walters <walters@verbum.org>
2021-11-15 21:02:14 -05:00
Colin Walters
a3adf36db6 proxy: Use float → int helper for pipeid
Just noticed while scrolling past the code.

Signed-off-by: Colin Walters <walters@verbum.org>
2021-11-15 21:02:14 -05:00
Colin Walters
6510f1011b proxy: Add a helper to return a byte array
Since this is shared between the manifest and config paths.

Signed-off-by: Colin Walters <walters@verbum.org>
2021-11-15 21:02:14 -05:00
Colin Walters
e7b7be5734 proxy: Add an API to fetch the config upconverted to OCI
While the caller could fetch this today as a blob, it'd be in
either docker or oci schema.  In keeping with the model of having
this proxy only expose OCI, add an API which uses the c/image logic
to do the conversion.

This is necessary for callers to get the diffIDs, and in general
to implement something like an external `skopeo copy`.

Signed-off-by: Colin Walters <walters@verbum.org>
2021-11-15 21:02:14 -05:00
Miloslav Trmač
1e01e38459 Merge pull request #1502 from edsantiago/duh
Fix bug that prevented useful diagnostics on registry fail
2021-11-11 15:12:08 +01:00
Ed Santiago
942cd6ec58 Fix bug that prevented useful diagnostics on registry fail
Sigh. 'expr 1 - 1' yields 0 (correctly) but also exits 1. This
is even documented in the man page, but I didn't know it. And
thus, on the final iteration, when timeout reached 0, BATS
errored out on the expr instead of continuing to the 'podman logs'
or the 'die' message.

Solution is super trivial: use $(( ... )) instead of expr.

Signed-off-by: Ed Santiago <santiago@redhat.com>
2021-11-10 19:56:33 -07:00
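A minimal bash illustration of the pitfall (not taken from the test suite):

```
timeout=1

# broken: expr prints 0 (the correct result) but exits with status 1, which BATS treats as a failure
timeout=$(expr $timeout - 1)

# fix: arithmetic expansion yields the same value with exit status 0
timeout=$(( timeout - 1 ))
```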
Tom Sweeney
a902709e14 Merge pull request #1496 from lsm5/skopeoimage
use fedora:latest in contrib/skopeoimage/*/Dockerfile
2021-11-08 17:05:21 -05:00
Lokesh Mandvekar
41de7f2f66 use fedora:latest in contrib/skopeoimage/*/Dockerfile
Fixes: #1492

Signed-off-by: Lokesh Mandvekar <lsm5@fedoraproject.org>
2021-11-08 14:44:42 -05:00
Lokesh Mandvekar
c264cec359 Move to v1.5.2-dev
Signed-off-by: Lokesh Mandvekar <lsm5@fedoraproject.org>
2021-11-04 10:15:42 -04:00
Lokesh Mandvekar
2b357d8276 Bump to v1.5.1
Signed-off-by: Lokesh Mandvekar <lsm5@fedoraproject.org>
2021-11-04 10:15:42 -04:00
Colin Walters
4acc9f0d2c main: Error out if an unrecognized subcommand is provided
Surprisingly, the spf13/cobra CLI parsing logic, when presented
with an unknown subcommand outputs usage to stdout
and *exits successfully*.

This is bad for both users and scripts.  Cargo cult some code
I found in podman to handle this.

Motivated by https://github.com/containers/containers-image-proxy-rs/pull/1

Signed-off-by: Colin Walters <walters@verbum.org>
2021-11-03 15:14:49 -04:00
Daniel J Walsh
c2732cb15d Merge pull request #1480 from jaikiran/785
skopeo inspect command - introduce a way to skip querying all available tags
2021-10-26 14:57:51 -04:00
Valentin Rothberg
49f709576a Merge pull request #1487 from vrothberg/vendor-common
move optional-flag code to c/common/pkg/flag
2021-10-26 16:15:36 +02:00
Valentin Rothberg
7885162a35 move optional-flag code to c/common/pkg/flag
As the title says: it allows for code share with other tools such as
Podman and Buildah.

Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
2021-10-26 15:18:30 +02:00
Miloslav Trmač
01e58f8e25 Merge pull request #1484 from lyft/precompute-digests
Add --dest-precompute-digests option for docker
2021-10-22 16:54:56 +02:00
Paul Fisher
36d860ebce Add --dest-precompute-digests option for docker
This ensures that layers which already exist on the destination
registry are not uploaded, in exchange for streaming layers to temporary
files when digests are unknown (e.g. when compressing "on the fly").

Signed-off-by: Paul Fisher <pfisher@lyft.com>
2021-10-21 17:29:03 -07:00
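A hedged usage sketch of the new flag (image references are placeholders):

```
# precompute layer digests so layers already present on the destination are not re-uploaded
skopeo copy --dest-precompute-digests \
    docker://example.com/app:1.0 docker://registry.example.com/app:1.0
```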
Paul Fisher
c8777f3bf7 bump containers/image to 2541165
Signed-off-by: Paul Fisher <pfisher@lyft.com>
2021-10-21 17:29:03 -07:00
Miloslav Trmač
8f64c0412f Merge pull request #1483 from jpetazzo/static-build-instructions
Add instructions to generate static binaries
2021-10-20 16:07:29 +02:00
Jerome Petazzoni
985d4c09ae Add instructions to generate static binaries
Following the discussion in #1478, we don't want to provide
(and maintain) static binaries, but giving instructions to
produce such builds (with appropriate warnings around these
instructions) was considered acceptable, so - here we go!
2021-10-19 23:10:48 +02:00
Miloslav Trmač
8182255d22 Merge pull request #1476 from cgwalters/proxy
Add new `experimental-image-proxy` hidden command
2021-10-14 20:48:58 +02:00
Colin Walters
11b5989872 Add new experimental-image-proxy hidden command
This imports the code from https://github.com/cgwalters/container-image-proxy

First, assume one is operating on a codebase that isn't Go, but wants
to interact with container images - we can't just include the Go containers/image
library.

The primary intended use case of this is for things like
[ostree-containers](https://github.com/ostreedev/ostree-rs-ext/issues/18)
where we're using container images to encapsulate host operating system
updates, but we don't want to involve the [containers/image](github.com/containers/image/)
storage layer.

Vendoring the containers/image stack in another project is a large lift; the stripped
binary for this proxy standalone weighs in at 16M (I'm sure the lack
of LTO and the overall simplicity of the Go compiler is a large factor).
Anyways, I'd like to avoid shipping another copy.

This command is marked as experimental, and hidden.  The goal is
just to use it from the ostree stack for now, ideally shipping at least
in CentOS 9 Stream relatively soon.   We can (and IMO should)
change and improve it later.

A lot more discussion in https://github.com/cgwalters/container-image-proxy/issues/1
2021-10-14 14:16:32 -04:00
Jaikiran Pai
2144a37c21 issue#785 inspect command - introduce a way to skip querying available tags for an image 2021-10-12 20:24:39 +05:30
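The commit message does not name the flag; assuming it is --no-tags, a hedged usage sketch (the image reference is a placeholder):

```
# skip the extra registry query that lists all available tags
skopeo inspect --no-tags docker://example.com/app:1.0
```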
Valentin Rothberg
9c9a9f3a1f Merge pull request #1481 from mtrmac/container-install
Document container images as an alternative to installing packages
2021-10-12 10:13:11 +02:00
Miloslav Trmač
60c98cacde Document container images as an alternative to installing packages
Also fix the location of the introductory text about building from source,
and fix the document title.

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2021-10-11 20:26:39 +02:00
Miloslav Trmač
116e75fbfd Merge pull request #1470 from jaikiran/527
Introduce --username and --password to pass credentials
2021-10-07 18:34:51 +02:00
Jaikiran Pai
89ecd5a4c0 Introduce --username and --password to pass credentials 2021-10-07 20:31:59 +05:30
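A hedged usage sketch of the new flags (credentials and image reference are placeholders):

```
skopeo inspect --username myuser --password mypassword \
    docker://registry.example.com/private/app:1.0
```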
Daniel J Walsh
fc81803bfa Merge pull request #1475 from rhatdan/main
Bump to v1.5.0
2021-10-06 16:34:36 -04:00
Daniel J Walsh
119eeb83a7 Move to v1.5.1-dev
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
2021-10-06 16:32:33 -04:00
Daniel J Walsh
209a993159 Bump to v1.5.0
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
2021-10-06 16:31:52 -04:00
Daniel J Walsh
5e7d11cbf3 Merge pull request #1474 from containers/dependabot/go_modules/github.com/containers/image/v5-5.16.1
Bump github.com/containers/image/v5 from 5.16.0 to 5.16.1
2021-10-06 12:17:20 -04:00
Lokesh Mandvekar
fc86da2023 Merge branch 'main' into dependabot/go_modules/github.com/containers/image/v5-5.16.1 2021-10-06 15:36:01 +00:00
Miloslav Trmač
0f370eed02 Merge pull request #1471 from containers/dependabot/go_modules/github.com/docker/docker-20.10.9incompatible
Bump github.com/docker/docker from 20.10.8+incompatible to 20.10.9+incompatible
2021-10-06 17:29:21 +02:00
dependabot[bot]
3e4d4a480f Bump github.com/containers/image/v5 from 5.16.0 to 5.16.1
Bumps [github.com/containers/image/v5](https://github.com/containers/image) from 5.16.0 to 5.16.1.
- [Release notes](https://github.com/containers/image/releases)
- [Commits](https://github.com/containers/image/compare/v5.16.0...v5.16.1)

---
updated-dependencies:
- dependency-name: github.com/containers/image/v5
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2021-10-06 08:29:03 +00:00
dependabot[bot]
3a97a0c032 Bump github.com/docker/docker
Bumps [github.com/docker/docker](https://github.com/docker/docker) from 20.10.8+incompatible to 20.10.9+incompatible.
- [Release notes](https://github.com/docker/docker/releases)
- [Changelog](https://github.com/moby/moby/blob/master/CHANGELOG.md)
- [Commits](https://github.com/docker/docker/compare/v20.10.8...v20.10.9)

---
updated-dependencies:
- dependency-name: github.com/docker/docker
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2021-10-05 20:59:22 +00:00
Miloslav Trmač
ff88d3fcc2 Remove leftover Nix packaging files
... after https://github.com/containers/skopeo/pull/1463 dropped
it from the Makefile.

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2021-10-05 20:11:54 +00:00
Daniel J Walsh
64be259655 Merge pull request #1472 from mtrmac/containerd
Update github.com/containerd/containerd to v1.5.7
2021-10-05 15:10:17 -04:00
Miloslav Trmač
e19b57c3b9 Update github.com/containerd/containerd to v1.5.7
... to include a fix for
https://github.com/advisories/GHSA-c2h3-6mxw-7mvq .

(Note that Skopeo doesn't depend on the vulnerable code,
so this is primarily to avoid dependency checker warnings.)

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2021-10-05 18:45:24 +02:00
Miloslav Trmač
2d5a00e833 Merge pull request #1468 from jaikiran/1466
Introduce a --ignore option to allow "sync" command to continue syncing even after a particular image sync fails
2021-10-05 15:19:12 +02:00
Jaikiran Pai
b950f83c60 issue#1466 - Introduce a --keep-going option to allow "sync" command to continue syncing even after a particular image sync fails 2021-10-05 07:18:38 +05:30
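A hedged usage sketch of the new flag (registry and paths are placeholders):

```
# continue mirroring the remaining images even if one of them fails
skopeo sync --keep-going --src docker --dest dir \
    registry.example.com/myorg /tmp/mirror
```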
Daniel J Walsh
a95b0cc6fa Merge pull request #1467 from containers/dependabot/go_modules/github.com/containers/storage-1.37.0
Bump github.com/containers/storage from 1.36.0 to 1.37.0
2021-10-01 10:38:40 -04:00
dependabot[bot]
12d0103730 Bump github.com/containers/storage from 1.36.0 to 1.37.0
Bumps [github.com/containers/storage](https://github.com/containers/storage) from 1.36.0 to 1.37.0.
- [Release notes](https://github.com/containers/storage/releases)
- [Changelog](https://github.com/containers/storage/blob/main/docs/containers-storage-changes.md)
- [Commits](https://github.com/containers/storage/compare/v1.36.0...v1.37.0)

---
updated-dependencies:
- dependency-name: github.com/containers/storage
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2021-10-01 08:33:20 +00:00
Valentin Rothberg
53cf287e37 Merge pull request #1464 from lsm5/update-installation-steps
Update installation doc with latest steps
2021-10-01 08:42:21 +02:00
Lokesh Mandvekar
e0c53dfd9b Update installation doc with latest steps
- Remove Kubic repo suggestions where skopeo exists by default
- Include documentation about lack of Windows package
(RE: https://github.com/containers/skopeo/issues/715)

Signed-off-by: Lokesh Mandvekar <lsm5@fedoraproject.org>
2021-09-30 13:35:32 -04:00
Valentin Rothberg
86fa758ad8 Merge pull request #1463 from lsm5/drop-nix
drop nix support
2021-09-28 15:23:03 +02:00
Lokesh Mandvekar
aba57a8814 Makefile: drop nix support
nix build is no longer being maintained.

Signed-off-by: Lokesh Mandvekar <lsm5@fedoraproject.org>
2021-09-27 14:55:44 -04:00
Daniel J Walsh
4d3588e46a Merge pull request #1462 from containers/dependabot/go_modules/github.com/containers/common-0.46.0
Bump github.com/containers/common from 0.45.0 to 0.46.0
2021-09-27 13:14:33 -04:00
dependabot[bot]
93c42bcd74 Bump github.com/containers/common from 0.45.0 to 0.46.0
Bumps [github.com/containers/common](https://github.com/containers/common) from 0.45.0 to 0.46.0.
- [Release notes](https://github.com/containers/common/releases)
- [Commits](https://github.com/containers/common/compare/v0.45.0...v0.46.0)

---
updated-dependencies:
- dependency-name: github.com/containers/common
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2021-09-27 08:31:48 +00:00
Daniel J Walsh
2c2e5b773f Merge pull request #1431 from rhatdan/tls-verify
Remove the extra (defaults to true) help msg
2021-09-25 05:25:26 -04:00
Miloslav Trmač
25d3e7b46d Merge pull request #1457 from containers/dependabot/go_modules/github.com/containers/common-0.45.0
Bump github.com/containers/common from 0.44.1 to 0.45.0
2021-09-22 18:32:37 +02:00
dependabot[bot]
c0f07d3dfd Bump github.com/containers/common from 0.44.1 to 0.45.0
Bumps [github.com/containers/common](https://github.com/containers/common) from 0.44.1 to 0.45.0.
- [Release notes](https://github.com/containers/common/releases)
- [Commits](https://github.com/containers/common/compare/v0.44.1...v0.45.0)

---
updated-dependencies:
- dependency-name: github.com/containers/common
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2021-09-22 08:33:52 +00:00
Daniel J Walsh
c5a5199f57 Merge pull request #1456 from containers/dependabot/go_modules/github.com/containers/common-0.44.1
Bump github.com/containers/common from 0.44.0 to 0.44.1
2021-09-21 05:32:10 -04:00
dependabot[bot]
0ce7081e6d Bump github.com/containers/common from 0.44.0 to 0.44.1
Bumps [github.com/containers/common](https://github.com/containers/common) from 0.44.0 to 0.44.1.
- [Release notes](https://github.com/containers/common/releases)
- [Commits](https://github.com/containers/common/compare/v0.44.0...v0.44.1)

---
updated-dependencies:
- dependency-name: github.com/containers/common
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2021-09-21 08:28:21 +00:00
Miloslav Trmač
db1e814e86 Merge pull request #1455 from mtrmac/mpb
Update to github.com/vbauerster/mpb v7.1.5
2021-09-20 16:14:32 +02:00
Miloslav Trmač
52dafe8f8d Update to github.com/vbauerster/mpb v7.1.5
... to fix https://github.com/vbauerster/mpb/issues/100 .

> go get github.com/vbauerster/mpb/v7@latest
> make vendor

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2021-09-20 15:43:07 +02:00
Daniel J Walsh
31b8981b04 Merge pull request #1444 from cevich/update_images
Update VM Images + Drop prior-ubuntu references
2021-09-16 04:06:29 -04:00
Daniel J Walsh
d8ba8b90fe Merge pull request #1443 from jaikiran/1411
Introduce DISABLE_DOCS to skip doc generation while building from source
2021-09-16 04:05:37 -04:00
Jaikiran Pai
ee8b8e77fc Explain the usage of DISABLE_DOCS in the installation doc 2021-09-15 17:21:31 +05:30
Chris Evich
1d204fb10f Update VM Images + Drop prior-ubuntu references
These images contain a workaround for:
     https://github.com/containers/podman/issues/11123

Prior-Ubuntu support is being dropped everywhere.

Ref: https://github.com/containers/podman/issues/11070
     https://github.com/containers/automation_images/pull/88

Signed-off-by: Chris Evich <cevich@redhat.com>
2021-09-14 11:32:34 -04:00
Jaikiran Pai
6131077770 issue#1411 Introduce DISABLE_DOCS to skip doc generation while building from source 2021-09-14 20:23:11 +05:30
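A hedged sketch of how the variable might be passed to make (the exact invocation is an assumption):

```
# build from source without generating the man pages
make DISABLE_DOCS=1
```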
Daniel J Walsh
177443f47d Merge pull request #1441 from containers/dependabot/go_modules/github.com/containers/common-0.44.0
Bump github.com/containers/common from 0.43.2 to 0.44.0
2021-09-14 06:19:20 -04:00
dependabot[bot]
ed96bf04a1 Bump github.com/containers/common from 0.43.2 to 0.44.0
Bumps [github.com/containers/common](https://github.com/containers/common) from 0.43.2 to 0.44.0.
- [Release notes](https://github.com/containers/common/releases)
- [Commits](https://github.com/containers/common/compare/v0.43.2...v0.44.0)

---
updated-dependencies:
- dependency-name: github.com/containers/common
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2021-09-14 08:34:07 +00:00
Daniel J Walsh
30f208ea59 Merge pull request #1439 from containers/dependabot/go_modules/github.com/containers/storage-1.36.0
Bump github.com/containers/storage from 1.35.0 to 1.36.0
2021-09-13 14:04:29 -04:00
dependabot[bot]
a837fbe28b Bump github.com/containers/storage from 1.35.0 to 1.36.0
Bumps [github.com/containers/storage](https://github.com/containers/storage) from 1.35.0 to 1.36.0.
- [Release notes](https://github.com/containers/storage/releases)
- [Changelog](https://github.com/containers/storage/blob/main/docs/containers-storage-changes.md)
- [Commits](https://github.com/containers/storage/compare/v1.35.0...v1.36.0)

---
updated-dependencies:
- dependency-name: github.com/containers/storage
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2021-09-13 08:31:20 +00:00
Daniel J Walsh
9edeb69f6a Remove the extra (defaults to true) help msg
By default skopeo checks to see if the user actually uses one of the
--*tls-verify flags. Their initial value is ignored.  Setting the
initial value to false causes Cobra to not display the default value on
the screen when the user runs a `skopeo --help` command.

If the user does not specify a --*tls-verify option, it falls back to
using the value specified in the registries.conf file.

Fixes: https://github.com/containers/skopeo/issues/1383

Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
2021-08-27 06:16:50 -04:00
Daniel J Walsh
47b808275d Merge pull request #1430 from containers/dependabot/go_modules/github.com/containers/image/v5-5.16.0
Bump github.com/containers/image/v5 from 5.15.2 to 5.16.0
2021-08-26 07:06:21 -04:00
dependabot[bot]
a2d083ca84 Bump github.com/containers/image/v5 from 5.15.2 to 5.16.0
Bumps [github.com/containers/image/v5](https://github.com/containers/image) from 5.15.2 to 5.16.0.
- [Release notes](https://github.com/containers/image/releases)
- [Commits](https://github.com/containers/image/compare/v5.15.2...v5.16.0)

---
updated-dependencies:
- dependency-name: github.com/containers/image/v5
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2021-08-26 08:30:41 +00:00
Miloslav Trmač
4fda005a3e Merge pull request #1427 from mtrmac/go1.17
Run (gofmt -s -w)
2021-08-23 20:50:51 +02:00
Miloslav Trmač
0e87d4d1ca Run (gofmt -s -w)
Go 1.17 introduces a much more reasonable build constraint format, and gofmt now fails without using it.

Sadly we still need the old format as well, to support <1.17 builds.

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2021-08-23 18:04:45 +02:00
Miloslav Trmač
5739b90946 Merge pull request #1428 from mtrmac/deps
Update non-module dependencies
2021-08-23 18:04:19 +02:00
Miloslav Trmač
c399909f04 Update non-module dependencies
Dependabot was apparently not picking these up (and
several haven't had a release for a long time anyway).

Also move from github.com/go-check/check to its newly
declared (and go.mod-enforced) name gopkg.in/check.v1.

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2021-08-23 15:52:48 +02:00
Daniel J Walsh
5da1b0f304 Merge pull request #1422 from containers/dependabot/go_modules/github.com/containers/image/v5-5.15.2
Bump github.com/containers/image/v5 from 5.15.1 to 5.15.2
2021-08-19 06:11:23 -04:00
dependabot[bot]
102e2143ac Bump github.com/containers/image/v5 from 5.15.1 to 5.15.2
Bumps [github.com/containers/image/v5](https://github.com/containers/image) from 5.15.1 to 5.15.2.
- [Release notes](https://github.com/containers/image/releases)
- [Commits](https://github.com/containers/image/compare/v5.15.1...v5.15.2)

---
updated-dependencies:
- dependency-name: github.com/containers/image/v5
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2021-08-19 08:32:27 +00:00
Miloslav Trmač
291bbdf66c Merge pull request #1420 from rhatdan/codespell
[CI:DOCS] Add OWNERS file
2021-08-18 23:17:03 +02:00
Miloslav Trmač
6bdadc8058 Merge pull request #1421 from containers/dependabot/go_modules/github.com/containers/common-0.43.2
Bump github.com/containers/common from 0.43.1 to 0.43.2
2021-08-18 18:49:38 +02:00
dependabot[bot]
7d5ef9d9e7 Bump github.com/containers/common from 0.43.1 to 0.43.2
Bumps [github.com/containers/common](https://github.com/containers/common) from 0.43.1 to 0.43.2.
- [Release notes](https://github.com/containers/common/releases)
- [Commits](https://github.com/containers/common/compare/v0.43.1...v0.43.2)

---
updated-dependencies:
- dependency-name: github.com/containers/common
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2021-08-18 08:30:12 +00:00
Daniel J Walsh
70eaf171ea Add OWNERS file
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
2021-08-17 16:50:30 -04:00
Miloslav Trmač
8da1c849a8 Merge pull request #1419 from containers/dependabot/go_modules/github.com/containers/image/v5-5.15.1
Bump github.com/containers/image/v5 from 5.15.0 to 5.15.1
2021-08-17 19:05:11 +02:00
dependabot[bot]
6196947297 Bump github.com/containers/image/v5 from 5.15.0 to 5.15.1
Bumps [github.com/containers/image/v5](https://github.com/containers/image) from 5.15.0 to 5.15.1.
- [Release notes](https://github.com/containers/image/releases)
- [Commits](https://github.com/containers/image/compare/v5.15.0...v5.15.1)

---
updated-dependencies:
- dependency-name: github.com/containers/image/v5
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2021-08-17 10:51:44 +00:00
Daniel J Walsh
ecd3809bf5 Merge pull request #1418 from containers/dependabot/go_modules/github.com/containers/storage-1.34.1
Bump github.com/containers/storage from 1.34.0 to 1.34.1
2021-08-17 06:41:40 -04:00
dependabot[bot]
ec1ac5d0c8 Bump github.com/containers/storage from 1.34.0 to 1.34.1
Bumps [github.com/containers/storage](https://github.com/containers/storage) from 1.34.0 to 1.34.1.
- [Release notes](https://github.com/containers/storage/releases)
- [Changelog](https://github.com/containers/storage/blob/main/docs/containers-storage-changes.md)
- [Commits](https://github.com/containers/storage/compare/v1.34.0...v1.34.1)

---
updated-dependencies:
- dependency-name: github.com/containers/storage
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2021-08-17 08:25:16 +00:00
Miloslav Trmač
a15fcbe63c Merge pull request #1417 from containers/dependabot/go_modules/github.com/containers/common-0.43.1
Bump github.com/containers/common from 0.43.0 to 0.43.1
2021-08-14 15:00:41 +02:00
dependabot[bot]
082db20fc0 Bump github.com/containers/common from 0.43.0 to 0.43.1
Bumps [github.com/containers/common](https://github.com/containers/common) from 0.43.0 to 0.43.1.
- [Release notes](https://github.com/containers/common/releases)
- [Commits](https://github.com/containers/common/compare/v0.43.0...v0.43.1)

---
updated-dependencies:
- dependency-name: github.com/containers/common
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2021-08-13 08:22:25 +00:00
Miloslav Trmač
85ce748e8e Merge pull request #1414 from rhatdan/codespell
Add codespell fixes
2021-08-12 15:26:46 +02:00
Daniel J Walsh
8dce403b95 Add codespell fixes
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
2021-08-11 16:47:48 -04:00
Miloslav Trmač
ab36f7f092 Merge pull request #1413 from edsantiago/flake_debug
systemtests: if registry times out, show container logs
2021-08-11 18:46:53 +02:00
Ed Santiago
f6ae786508 systemtests: if registry times out, show container logs
the 'signing' test is flaking; symptom is that we can never
connect to the port on the registry:

   https://api.cirrus-ci.com/v1/task/6208385738604544/logs/system.log

By all indications, the registry is up, i.e., the 'podman rm -f reg'
in teardown() succeeds, as shown by the 53c (CID) in the log. (It
bothers me that the FAIL message from die() does not appear in the
log, and I can't figure out why).

To try to diagnose this, run 'podman logs' on the registry upon
failure.

Signed-off-by: Ed Santiago <santiago@redhat.com>
2021-08-11 10:10:23 -06:00
Miloslav Trmač
4069abba0e Merge pull request #1412 from containers/dependabot/go_modules/github.com/containers/common-0.43.0
Bump github.com/containers/common from 0.42.1 to 0.43.0
2021-08-11 16:13:59 +02:00
dependabot[bot]
9acb8b6a15 Bump github.com/containers/common from 0.42.1 to 0.43.0
Bumps [github.com/containers/common](https://github.com/containers/common) from 0.42.1 to 0.43.0.
- [Release notes](https://github.com/containers/common/releases)
- [Commits](https://github.com/containers/common/compare/v0.42.1...v0.43.0)

---
updated-dependencies:
- dependency-name: github.com/containers/common
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2021-08-11 08:23:50 +00:00
Daniel J Walsh
0ae0e8d23f Merge pull request #1410 from containers/dependabot/go_modules/github.com/containers/storage-1.34.0
Bump github.com/containers/storage from 1.33.2 to 1.34.0
2021-08-10 14:05:12 -04:00
dependabot[bot]
a23b9f532d Bump github.com/containers/storage from 1.33.2 to 1.34.0
Bumps [github.com/containers/storage](https://github.com/containers/storage) from 1.33.2 to 1.34.0.
- [Release notes](https://github.com/containers/storage/releases)
- [Changelog](https://github.com/containers/storage/blob/main/docs/containers-storage-changes.md)
- [Commits](https://github.com/containers/storage/compare/v1.33.2...v1.34.0)

---
updated-dependencies:
- dependency-name: github.com/containers/storage
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
2021-08-10 13:16:43 -04:00
Daniel J Walsh
252af41dba Merge pull request #1408 from containers/dependabot/go_modules/github.com/containers/storage-1.33.2
Bump github.com/containers/storage from 1.33.1 to 1.33.2
2021-08-06 11:50:47 -04:00
dependabot[bot]
be821b4f59 Bump github.com/containers/storage from 1.33.1 to 1.33.2
Bumps [github.com/containers/storage](https://github.com/containers/storage) from 1.33.1 to 1.33.2.
- [Release notes](https://github.com/containers/storage/releases)
- [Changelog](https://github.com/containers/storage/blob/main/docs/containers-storage-changes.md)
- [Commits](https://github.com/containers/storage/compare/v1.33.1...v1.33.2)

---
updated-dependencies:
- dependency-name: github.com/containers/storage
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
2021-08-06 11:15:28 -04:00
Miloslav Trmač
678682f128 Merge pull request #1334 from cevich/drop_podmanmake
Cirrus: Run checks directly on the host
2021-08-04 22:31:21 +02:00
Chris Evich
da294ebce1 Merge pull request #1405 from cevich/cron_fail_mail
[CI:DOCS] Github: Add workflow to monitor Cirrus-Cron builds
2021-08-04 15:55:08 -04:00
Chris Evich
ab87b15fea Cirrus: Run checks directly on the host
In order to meet achievable deadlines when converting from Travis to Cirrus
CI, one significant artifact was carried forward (instead of being fixed):

Depending on a `--privileged` container to execute all/most automated
checks/tests.

Prior attempts to remove this aspect resulted in several test failures.
Fixing the problems was viewed as more time-consuming than simply
preserving this runtime environment.

Time has passed, and the code has since moved on.  This commit removes
the legacy need to execute CI operations in a `--privileged`
container, instead running them directly on the host.  At the same time,
the necessary test binaries are obtained from the same container used
for development/local testing purposes.  This ensures the two
experiences are virtually always identical.

Signed-off-by: Chris Evich <cevich@redhat.com>
2021-08-04 15:37:57 -04:00
Chris Evich
1aa98baba4 Github: Add workflow to monitor Cirrus-Cron builds
The Cirrus-CI configuration for this repository is set up to execute test
builds on certain important release branches.  There is no built-in way
to monitor these for success or failure.  This commit adds a
Github-Actions Workflow to e-mail the podman-monitor list if any fail.
Otherwise it will take no action if everything is successful.

Note: This duplicates 99.999% of the same YAML used for the Buildah
repository.  The only changes were for the settings URL and
mentioning "skopeo" in a comment.  A similar workflow is also in use
on the Podman repository.

Signed-off-by: Chris Evich <cevich@redhat.com>
2021-08-04 10:09:45 -04:00
Daniel J Walsh
3e127edb9c Merge pull request #1404 from containers/dependabot/go_modules/github.com/docker/docker-20.10.8incompatible
Bump github.com/docker/docker from 20.10.7+incompatible to 20.10.8+incompatible
2021-08-04 05:50:46 -04:00
dependabot[bot]
fbf9699867 Bump github.com/docker/docker
Bumps [github.com/docker/docker](https://github.com/docker/docker) from 20.10.7+incompatible to 20.10.8+incompatible.
- [Release notes](https://github.com/docker/docker/releases)
- [Changelog](https://github.com/moby/moby/blob/master/CHANGELOG.md)
- [Commits](https://github.com/docker/docker/compare/v20.10.7...v20.10.8)

---
updated-dependencies:
- dependency-name: github.com/docker/docker
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2021-08-04 08:24:01 +00:00
Valentin Rothberg
a0084eda60 Merge pull request #1402 from containers/dependabot/go_modules/github.com/containers/common-0.42.1
Bump github.com/containers/common from 0.42.0 to 0.42.1
2021-08-03 11:12:24 +02:00
dependabot[bot]
a3bb1cc5b8 Bump github.com/containers/common from 0.42.0 to 0.42.1
Bumps [github.com/containers/common](https://github.com/containers/common) from 0.42.0 to 0.42.1.
- [Release notes](https://github.com/containers/common/releases)
- [Commits](https://github.com/containers/common/compare/v0.42.0...v0.42.1)

---
updated-dependencies:
- dependency-name: github.com/containers/common
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2021-08-03 08:31:38 +00:00
Daniel J Walsh
8060e41dce Merge pull request #1400 from mtrmac/v1.4.0
v1.4.0
2021-08-02 11:52:08 -04:00
Miloslav Trmač
0667a1e037 Bump to 1.4.1-dev
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2021-08-02 17:37:31 +02:00
Miloslav Trmač
a44da449d3 Release 1.4.0
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2021-08-02 17:37:11 +02:00
Daniel J Walsh
788b2e2dd3 Merge pull request #1396 from saschagrunert/accept-repositories
Accept repositories on login/logout
2021-08-02 10:26:46 -04:00
Daniel J Walsh
2135466ba3 Merge pull request #1399 from vrothberg/update-in-container
vendor-in-container: update to golang:1.16
2021-08-02 10:26:20 -04:00
Valentin Rothberg
3d9340c836 vendor-in-container: update to golang:1.16
Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
2021-08-02 15:30:06 +02:00
Sascha Grunert
961d5da7ce Accept repositories on login/logout
We now synchronize the behavior with Podman and accept repositories
during login and logout by default.

Signed-off-by: Sascha Grunert <sgrunert@redhat.com>
2021-08-02 13:41:45 +02:00
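A hedged usage sketch showing repository-scoped credentials (registry and user are placeholders):

```
# credentials can now be scoped to a repository, not only to a whole registry
skopeo login --username myuser registry.example.com/myorg/app
skopeo logout registry.example.com/myorg/app
```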
Miloslav Trmač
920f0b2414 Merge pull request #1395 from vrothberg/vendor-dance
update c/common, c/image, c/storage
2021-08-02 13:21:14 +02:00
Valentin Rothberg
fb03e033cc update c/common, c/image, c/storage
Pin them to the specific versions that Podman v3.3 targets for RHEL.

Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
2021-08-02 09:32:09 +02:00
Miloslav Trmač
caf1469b1d Merge pull request #1394 from n0vad3v/patch-1
Update on Building on Ubuntu
2021-07-31 14:28:06 +02:00
Nova Kwok
d70ea89050 Update on Building on Ubuntu
Building on Ubuntu requires pkg-config, otherwise the build fails:
```
skopeo (main) # make bin/skopeo
CGO_CFLAGS="" CGO_LDFLAGS="-L/usr/lib/x86_64-linux-gnu -lgpgme -lassuan -lgpg-error" GO111MODULE=on go build -mod=vendor "-buildmode=pie" -ldflags '-X main.gitCommit=a8f0c902060288bd02db15bf6671c8cd0149704e ' -gcflags "" -tags "  " -o bin/skopeo ./cmd/skopeo
# pkg-config --cflags  -- devmapper
pkg-config: exec: "pkg-config": executable file not found in $PATH
make: *** [Makefile:134: bin/skopeo] Error 2
```
2021-07-31 14:02:48 +08:00
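On Ubuntu the missing tool can be installed from the distribution package of the same name, e.g.:

```
sudo apt-get install -y pkg-config
```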
Valentin Rothberg
a8f0c90206 Merge pull request #1390 from mtrmac/start-timeouts
Add timeouts when waiting on OpenShift or the registry to start
2021-07-30 13:44:29 +02:00
Miloslav Trmač
ce6035b738 Add timeouts when waiting on OpenShift or the registry to start
... so that we terminate with the full context, pointing at
the relevant code, instead of relying
on the overall test suite timeout.

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2021-07-30 12:36:54 +02:00
Miloslav Trmač
b6b7bd9250 Merge pull request #1365 from moio/allow-decompression
Allow decompressing while copying
2021-07-29 16:27:19 +02:00
Daniel J Walsh
c27d9063e5 Merge pull request #1391 from mtrmac/lint
Fix two instances of unused err found by go-staticcheck
2021-07-29 07:09:17 -04:00
Silvio Moioli
3a8d3cb566 Add docs and bash completions
Signed-off-by: Silvio Moioli <moio@suse.com>
2021-07-29 11:37:01 +02:00
Silvio Moioli
aeb61f656c Add support for decompressing while copying to dir://
Signed-off-by: Silvio Moioli <moio@suse.com>
2021-07-29 11:35:02 +02:00
Silvio Moioli
76eb9bc9e9 Update to enabled containers/image version
Signed-off-by: Silvio Moioli <moio@suse.com>
2021-07-29 11:34:13 +02:00
Miloslav Trmač
a1f9318e7b Fix two instances of unused err found by go-staticcheck
*cough* Go is an easy-to-use, hard-to-misuse language *cough*

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2021-07-28 21:46:07 +02:00
Miloslav Trmač
64dc748e5e Merge pull request #1389 from containers/dependabot/go_modules/github.com/containers/storage-1.33.0
Bump github.com/containers/storage from 1.32.6 to 1.33.0
2021-07-28 14:54:35 +02:00
dependabot[bot]
d82c662101 Bump github.com/containers/storage from 1.32.6 to 1.33.0
Bumps [github.com/containers/storage](https://github.com/containers/storage) from 1.32.6 to 1.33.0.
- [Release notes](https://github.com/containers/storage/releases)
- [Changelog](https://github.com/containers/storage/blob/main/docs/containers-storage-changes.md)
- [Commits](https://github.com/containers/storage/compare/v1.32.6...v1.33.0)

---
updated-dependencies:
- dependency-name: github.com/containers/storage
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2021-07-28 12:33:17 +00:00
Daniel J Walsh
24a75c9608 Merge pull request #1387 from cevich/daily_version_update
[CI:DOCS] Multi-arch image build: Daily version-tag push
2021-07-28 05:54:51 -04:00
Chris Evich
f0c49b5ccc Multi-arch image build: Daily version-tag push
This mirrors changes from
https://github.com/containers/buildah/pull/3381

Signed-off-by: Chris Evich <cevich@redhat.com>
2021-07-27 14:12:48 -04:00
Miloslav Trmač
bef3b0c997 Merge pull request #1386 from moio/patch-2
CONTRIBUTING: small fixes to commands
2021-07-27 14:41:16 +02:00
Silvio Moioli
5e5506646d CONTRIBUTING: small fixes to commands 2021-07-27 14:12:10 +02:00
Valentin Rothberg
76bfc7f07f Merge pull request #1369 from mtrmac/tls-verify
Fix --tls-verify
2021-07-26 14:56:58 +02:00
Miloslav Trmač
726d982ceb Fix --tls-verify
Differentiate, again, between (skopeo --tls-verify subcommand)
and (skopeo subcommand --tls-verify), by
- using a "local" Cobra flag for the (skopeo --tls-verify ...) variant
- adding separate --tls-verify flags to subcommands that only accept
  them as legacy, available through deprecatedTLSVerifyFlags
  (unlike the non-legacy path of dockerImageFlags());
- using TraverseChildren: true; this causes the global and
  per-subcommand flags to be treated separately by Cobra,
  i.e. they no longer happen to share the "Hidden" flag
  and Cobra actually sets the right flag variable now.

So, we can now warn on (skopeo --tls-verify command) again,
and --help lists the flag correctly (it is hidden at the
global level, and in subcommands like copy that deprecated it,
but visible in subcommands like inspect where it's not deprecated).

NOTE: This removes --tls-verify from (skopeo manifest-digest) and
the three signing commands; it never made sense there. This change
could, in principle, break some users.

Also update man pages to match.

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2021-07-23 17:50:25 +02:00
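The user-visible effect of the fix above, sketched with an illustrative image reference: the per-subcommand flag is the supported spelling, while the global placement is still accepted but now warns.
```
# Supported: the flag belongs to the subcommand
skopeo inspect --tls-verify=false docker://localhost:5000/myimage:latest

# Deprecated: global placement now produces a warning instead of being silently mis-handled
skopeo --tls-verify=false inspect docker://localhost:5000/myimage:latest
```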
Miloslav Trmač
bb447f2f1e Test both imageOptions and imageDestOptions in TestTLSVerifyFlags
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2021-07-23 17:13:07 +02:00
Miloslav Trmač
2a98df6b12 Split testing of --tls-verify into separate TestTLSVerifyFlags
It will get bigger, and we will also want to test imageDestOptions
for extra confidence.

Only moves the code, should not change (test) behavior.

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2021-07-23 17:13:05 +02:00
Miloslav Trmač
a6cf2f4293 Add the --tls-verify option to (skopeo logout)
The current implementation can actually contact the registry (if
logout fails with "not logged in" but there are .docker/config.json
credentials present), so provide a non-deprecated way to disable TLS.

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2021-07-23 15:47:02 +02:00
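And for the logout case described above, a one-line sketch (the registry address is illustrative):
```
# Log out of a registry reachable only without verified TLS
skopeo logout --tls-verify=false localhost:5000
```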
Miloslav Trmač
bd309aed2a Merge pull request #1382 from cevich/fix_docker_limits
Fix using images from rate-limited docker hub
2021-07-22 23:01:20 +02:00
Chris Evich
285a5cb6a0 Fix using images from rate-limited docker hub
The necessary images have been manually copied over to quay.  Code was
updated with centralized constants for the utilized images.  Tests then
all reference the constants (in case the image locations need to change
again).

Signed-off-by: Chris Evich <cevich@redhat.com>
2021-07-22 16:28:20 -04:00
Daniel J Walsh
3c2d98875d Merge pull request #1374 from cevich/man_page_checker_3
Implement manpage checker in CI
2021-07-22 14:16:43 -04:00
Chris Evich
02bacf571d Use Fedora container for doccheck
Signed-off-by: Chris Evich <cevich@redhat.com>
2021-07-22 10:10:12 -04:00
Ed Santiago
ae0595c56a Man page validation: part 2 of 2
This is the script that runs 'skopeo COMMAND --help' and
cross-checks that all the option flags are documented
in man pages, and vice-versa (all options listed in man
pages appear in COMMAND's --help message).

Copied from podman, with changes for skopeo-land (removing
the rst checks, and conforming to skopeo conventions).

Signed-off-by: Ed Santiago <santiago@redhat.com>
2021-07-21 14:51:47 -04:00
Miloslav Trmač
b0ebbdd501 Merge pull request #1381 from mtrmac/signatures_docs
docs: Adding info re container signatures
2021-07-21 12:01:59 +02:00
Dave Hay
ec73ff3d91 docs: Adding info re container signatures
Updating docs to reflect container signatures

Fixes #1337

Signed-off-by: Dave Hay <david_hay@uk.ibm.com>
2021-07-21 11:39:48 +02:00
Chris Evich
ce2f64c946 Merge pull request #1379 from cevich/generic_steps
[CI:DOCS] Multi-arch image workflow: Make steps generic
2021-07-19 15:09:31 -04:00
Chris Evich
e460b9aa8c [CI:DOCS] Multi-arch image workflow: Make steps generic
This duplicates the change from
https://github.com/containers/buildah/pull/3385

Since this workflow is duplicated across three repositories, maintaining
changes becomes onerous if the item contents vary between
implementations in any way. Improve this situation by encoding the
repository-specific details into env. vars., then referencing those vars
throughout. This way, a meaningful diff can be used to compare
the contents across repositories.

Also included are abstractions for the specific command used to obtain
the project version, and needed details for filtering the output. Both
of these vary across the Buildah, Skopeo, and Podman repos.

NOTE: This change requires the names of two github action secrets
to be updated: SKOPEO_QUAY_USERNAME -> REPONAME_QUAY_USERNAME
(and *PASSWORD).

Signed-off-by: Chris Evich <cevich@redhat.com>
2021-07-16 16:53:43 -04:00
Daniel J Walsh
643920b373 Merge pull request #1376 from alvistack/master-linux-amd64
Update nix pin with `make nixpkgs`
2021-07-14 20:12:30 -04:00
Chris Evich
598f9e7ce3 Merge pull request #1375 from cevich/freshen_vm_images
Cirrus: Freshen CI images
2021-07-14 16:47:06 -04:00
Wong Hoi Sing Edison
ee05486383 Update nix pin with make nixpkgs
Signed-off-by: Wong Hoi Sing Edison <hswong3i@pantarei-design.com>
2021-07-14 21:20:06 +08:00
Chris Evich
2476e99cb1 Cirrus: Freshen CI images
Signed-off-by: Chris Evich <cevich@redhat.com>
2021-07-12 15:25:56 -04:00
Miloslav Trmač
074cfda358 Merge pull request #1373 from containers/dependabot/go_modules/github.com/containers/common-0.41.0
Bump github.com/containers/common from 0.40.1 to 0.41.0
2021-07-12 18:31:05 +02:00
Daniel J Walsh
cec7aa68f7 Merge branch 'main' into dependabot/go_modules/github.com/containers/common-0.41.0 2021-07-12 10:48:23 -04:00
Daniel J Walsh
dc1cf646e0 Merge pull request #1372 from containers/dependabot/go_modules/github.com/containers/storage-1.32.6
Bump github.com/containers/storage from 1.32.5 to 1.32.6
2021-07-12 10:48:04 -04:00
dependabot[bot]
76103a6c2d Bump github.com/containers/common from 0.40.1 to 0.41.0
Bumps [github.com/containers/common](https://github.com/containers/common) from 0.40.1 to 0.41.0.
- [Release notes](https://github.com/containers/common/releases)
- [Commits](https://github.com/containers/common/compare/v0.40.1...v0.41.0)

---
updated-dependencies:
- dependency-name: github.com/containers/common
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2021-07-12 08:26:07 +00:00
dependabot[bot]
990908bf80 Bump github.com/containers/storage from 1.32.5 to 1.32.6
Bumps [github.com/containers/storage](https://github.com/containers/storage) from 1.32.5 to 1.32.6.
- [Release notes](https://github.com/containers/storage/releases)
- [Changelog](https://github.com/containers/storage/blob/main/docs/containers-storage-changes.md)
- [Commits](https://github.com/containers/storage/compare/v1.32.5...v1.32.6)

---
updated-dependencies:
- dependency-name: github.com/containers/storage
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2021-07-12 08:25:52 +00:00
Valentin Rothberg
a6e745dad5 Merge pull request #1371 from mtrmac/staticcheck
Fix staticcheck-reported problems
2021-07-10 15:33:12 +02:00
Miloslav Trmač
ede29c9168 Remove an unnecessary break
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2021-07-10 11:33:45 +02:00
Miloslav Trmač
75f0183edc Remove an unnecessary Sprintf
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2021-07-10 11:33:36 +02:00
Miloslav Trmač
7ace4265fb Fix TestDockerRepositoryReferenceParser
Don't ignore errors, and don't even skip test cases on errors.

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2021-07-10 11:33:22 +02:00
Miloslav Trmač
3d4fb09f2c Remove unused code
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2021-07-10 11:32:42 +02:00
Valentin Rothberg
92ad5eddcc Merge pull request #1370 from mtrmac/stateless-disable-completion
Set cobra.Command.CompletionOption already in createApp
2021-07-10 10:48:18 +02:00
Miloslav Trmač
4efeb71e28 Set cobra.Command.CompletionOption already in createApp
... because our unit tests use createApp, so the current
main()-only edit is not visible to unit tests.

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2021-07-09 23:03:05 +02:00
Miloslav Trmač
392c6fce02 Merge pull request #1368 from lsm5/bump-version
Bump main version to v1.4.0-dev
2021-07-09 16:53:18 +02:00
Lokesh Mandvekar
a0ce542193 Bump version to v1.4.0-dev
Signed-off-by: Lokesh Mandvekar <lsm5@fedoraproject.org>
2021-07-09 09:54:53 -04:00
Miloslav Trmač
0035a9aecb Merge pull request #1367 from mtrmac/revert-tmp-workaround
Revert "integration tests: disable `ls` for logs"
2021-07-09 13:52:48 +02:00
Miloslav Trmač
f80bf8a39f Revert "integration tests: disable ls for logs"
This reverts commit 65c5b0bf8d.

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2021-07-09 13:15:13 +02:00
Miloslav Trmač
0fac3f10d3 Merge pull request #1364 from moio/patch-1
CONTRIBUTING: update vendoring instructions
2021-07-08 16:01:36 +02:00
Silvio Moioli
c39b3dc266 CONTRIBUTING: update vendoring instructions
`vndr` was dropped in 2019

Signed-off-by: Silvio Moioli <moio@suse.com>
2021-07-08 13:35:24 +02:00
Miloslav Trmač
07c81c7777 Merge pull request #1363 from vrothberg/completion
disable `completion` command
2021-07-08 12:50:18 +02:00
Valentin Rothberg
8eaf0329f8 disable completion command
Disable the implicit `completion` command that cobra 1.2.x added.

Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
2021-07-08 10:23:13 +02:00
Daniel J Walsh
378e6694c7 Merge pull request #1362 from containers/dependabot/go_modules/github.com/spf13/cobra-1.2.1
Bump github.com/spf13/cobra from 1.2.0 to 1.2.1
2021-07-05 05:48:10 -04:00
dependabot[bot]
aeb75f3857 Bump github.com/spf13/cobra from 1.2.0 to 1.2.1
Bumps [github.com/spf13/cobra](https://github.com/spf13/cobra) from 1.2.0 to 1.2.1.
- [Release notes](https://github.com/spf13/cobra/releases)
- [Changelog](https://github.com/spf13/cobra/blob/master/CHANGELOG.md)
- [Commits](https://github.com/spf13/cobra/compare/v1.2.0...v1.2.1)

---
updated-dependencies:
- dependency-name: github.com/spf13/cobra
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2021-07-05 08:26:05 +00:00
Miloslav Trmač
2286a58a39 Merge pull request #1359 from containers/dependabot/go_modules/github.com/spf13/cobra-1.2.0
Bump github.com/spf13/cobra from 1.1.3 to 1.2.0
2021-07-02 14:56:04 +02:00
dependabot[bot]
83603a79d4 Bump github.com/spf13/cobra from 1.1.3 to 1.2.0
Bumps [github.com/spf13/cobra](https://github.com/spf13/cobra) from 1.1.3 to 1.2.0.
- [Release notes](https://github.com/spf13/cobra/releases)
- [Changelog](https://github.com/spf13/cobra/blob/master/CHANGELOG.md)
- [Commits](https://github.com/spf13/cobra/compare/v1.1.3...v1.2.0)

---
updated-dependencies:
- dependency-name: github.com/spf13/cobra
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2021-07-02 08:28:08 +00:00
Miloslav Trmač
37b24aedd7 Merge pull request #1356 from mtrmac/error
Update tests for removal of error and Error from error messages
2021-07-01 21:38:15 +02:00
Daniel J Walsh
6d6c8b5609 Update tests for removal of error and Error from error messages
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2021-07-01 21:02:13 +02:00
Miloslav Trmač
99621f4168 Merge pull request #1351 from mtrmac/man-page-checker
Misc man-page-checker RFCs
2021-07-01 15:49:34 +02:00
Miloslav Trmač
09282bcf88 Fix some comments in man-page-checker
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2021-06-30 20:57:08 +02:00
Miloslav Trmač
09ca3ba47f Improve the description of (skopeo list-tags)
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2021-06-30 20:56:30 +02:00
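For reference, the command whose description is improved above, with an example repository:
```
# List all tags available in a repository
skopeo list-tags docker://registry.fedoraproject.org/fedora
```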
Miloslav Trmač
22908fb3e8 Include the mandatory --output option in synopsis of (skopeo standalone-sign)
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2021-06-30 20:56:30 +02:00
Miloslav Trmač
a37251289a Support **non-replaceable strings** in synopsis
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2021-06-29 23:59:34 +02:00
Miloslav Trmač
e4d1392085 Use (make validate-local) in the validate target
... to keep the two in sync.

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2021-06-29 23:49:21 +02:00
Daniel J Walsh
71e7a5839e Merge pull request #1347 from edsantiago/man_page_checker
man page checker - part 1 of 2
2021-06-29 17:13:11 -04:00
Miloslav Trmač
316503341b Merge pull request #1341 from cevich/add_local_cross
Cirrus: Rename cross -> osx task, add cross task.
2021-06-29 22:53:44 +02:00
Ed Santiago
e716b2fa66 man page checker - part 1 of 2
Add new script, hack/man-page-checker, copied from podman. Run it
in 'make validate-local' target.

This is NOT the checker requested in #1332 (verify that flags
listed in 'skopeo foo --help' are documented in man pages and
vice-versa). This is a much simpler script that merely looks
for very basic typos or discrepancies between skopeo.1.md
and skopeo-foo.1.md.

The next part (cross-checking flags) is in progress but will
require a huge number of changes to the man pages. I'm submitting
this now because it's easy to review.

Signed-off-by: Ed Santiago <santiago@redhat.com>
2021-06-29 13:07:42 -06:00
Chris Evich
97eaace7db Cirrus: Rename cross -> osx task, add cross task.
Signed-off-by: Chris Evich <cevich@redhat.com>
2021-06-29 14:01:46 -04:00
Miloslav Trmač
846ea33b40 Merge pull request #1344 from containers/dependabot/go_modules/github.com/containers/ocicrypt-1.1.2
Bump github.com/containers/ocicrypt from 1.1.1 to 1.1.2
2021-06-29 16:55:19 +02:00
dependabot[bot]
30c0eb03f0 Bump github.com/containers/ocicrypt from 1.1.1 to 1.1.2
Bumps [github.com/containers/ocicrypt](https://github.com/containers/ocicrypt) from 1.1.1 to 1.1.2.
- [Release notes](https://github.com/containers/ocicrypt/releases)
- [Commits](https://github.com/containers/ocicrypt/compare/v1.1.1...v1.1.2)

---
updated-dependencies:
- dependency-name: github.com/containers/ocicrypt
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2021-06-29 08:31:55 +00:00
Miloslav Trmač
7cb70f4e9c Merge pull request #1340 from cevich/add_vendor_check
Cirrus: Add vendor + tree status check
2021-06-28 22:59:19 +02:00
Chris Evich
5918513ed5 Cirrus: Add vendor + tree status check
Signed-off-by: Chris Evich <cevich@redhat.com>
2021-06-28 16:18:32 -04:00
Miloslav Trmač
b768f4e3af Merge pull request #1339 from mtrmac/unit-integration
Run unit tests as well, not integration tests twice
2021-06-28 20:28:29 +02:00
Miloslav Trmač
b20c2d45f1 Run unit tests as well, not integration tests twice
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2021-06-28 19:54:04 +02:00
Daniel J Walsh
fc3678038e Merge pull request #1335 from containers/dependabot/go_modules/github.com/containers/storage-1.32.5
Bump github.com/containers/storage from 1.32.4 to 1.32.5
2021-06-28 13:49:33 -04:00
dependabot[bot]
d0f7339b77 Bump github.com/containers/storage from 1.32.4 to 1.32.5
Bumps [github.com/containers/storage](https://github.com/containers/storage) from 1.32.4 to 1.32.5.
- [Release notes](https://github.com/containers/storage/releases)
- [Changelog](https://github.com/containers/storage/blob/main/docs/containers-storage-changes.md)
- [Commits](https://github.com/containers/storage/compare/v1.32.4...v1.32.5)

---
updated-dependencies:
- dependency-name: github.com/containers/storage
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2021-06-25 08:26:29 +00:00
Chris Evich
af550fda48 Merge pull request #1331 from mtrmac/revert-destdir
Reintroduce the GNU semantics of DESTDIR
2021-06-24 13:04:06 -04:00
Miloslav Trmač
012ed6610e Reintroduce the GNU semantics of DESTDIR
This partially reverts commit a81cd74734
so that most path variables like PREFIX and BINDIR refer to paths on
the installed system, and DESTDIR is used only in (make install), following
the philosophy of the GNU coding standards for path variables. (But not precisely
the variable names, which are lowercase in the standard, nor the principle that
even SYSCONFDIR should be under $PREFIX.)

Keep the use of ?= instead of = because it somewhat better expresses the idea
that the values can be overridden.

Use ${DESTDIR}${BINDIR} instead of ${DESTDIR}/${BINDIR} etc, so that a plain
(make install) does not use paths like //usr/bin/... ; strictly speaking they
are IIRC reserved by POSIX, and more importantly it just looks untidy :)

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2021-06-24 14:14:07 +02:00
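A short illustration of the restored semantics (paths are examples): PREFIX and BINDIR describe locations on the installed system, while DESTDIR only relocates the whole tree during (make install).
```
# Staged install for packaging; the binary lands in /tmp/stage/usr/bin/skopeo
make install DESTDIR=/tmp/stage PREFIX=/usr

# Plain install onto the running system, using the default PREFIX=/usr/local
make install
```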
Daniel J Walsh
f7aab1aba5 Merge pull request #1332 from rittneje/add-retry-times-docs
Add --retry-times to markdown docs
2021-06-24 05:48:50 -04:00
Jesse Rittner
c30b904cbe Add --retry-times to markdown docs
Fixes #1322.

Signed-off-by: Jesse Rittner <rittneje@gmail.com>
2021-06-23 23:29:53 -04:00
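The newly documented flag in a one-line usage sketch (source and destination are illustrative):
```
# Retry transient registry failures up to three times
skopeo copy --retry-times 3 docker://quay.io/skopeo/stable:latest dir:/tmp/skopeo-image
```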
Daniel J Walsh
45028801eb Merge pull request #1330 from cevich/default_registry
[CI:DOCS] Workaround quay.io image build failure
2021-06-23 15:49:03 -04:00
Chris Evich
9fbb9abc6d Workaround quay.io image build failure
Normally it is best practice to use only short names in Dockerfiles.
However, this image is really only ever used for testing/CI purposes.
Side-step the Docker Hub rate limits entirely by simply hard-coding the
FQIN and hope that's also "okay" for non-CI usage.

Signed-off-by: Chris Evich <cevich@redhat.com>
2021-06-23 15:40:58 -04:00
Miloslav Trmač
69fd1d4be0 Merge pull request #1329 from mtrmac/brew-update
Start macOS tests with (brew update)
2021-06-23 21:14:54 +02:00
Miloslav Trmač
4417dc4402 Update brew to avoid 403 on accessing https://homebrew.bintray.com
Compare https://github.com/Homebrew/brew/issues/11119 .

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2021-06-23 20:49:43 +02:00
Daniel J Walsh
8f0ae5bde6 Merge pull request #1328 from cevich/master_to_main
Fix automation re: master->main rename
2021-06-23 14:46:50 -04:00
Chris Evich
93b819a766 Fix automation re: master->main rename
Bump up the global timeout due to some (possibly) temporary failures in
the 'cross' task.  Also fix build-failure in Dockerfile related to use
of pre-module golang packages.

Signed-off-by: Chris Evich <cevich@redhat.com>
2021-06-23 13:31:08 -04:00
Daniel J Walsh
ce06c87817 Merge pull request #1327 from containers/dependabot/go_modules/github.com/containers/storage-1.32.4
Bump github.com/containers/storage from 1.32.3 to 1.32.4
2021-06-23 10:33:55 -04:00
dependabot[bot]
e7c5e9f7e6 Bump github.com/containers/storage from 1.32.3 to 1.32.4
Bumps [github.com/containers/storage](https://github.com/containers/storage) from 1.32.3 to 1.32.4.
- [Release notes](https://github.com/containers/storage/releases)
- [Changelog](https://github.com/containers/storage/blob/main/docs/containers-storage-changes.md)
- [Commits](https://github.com/containers/storage/compare/v1.32.3...v1.32.4)

---
updated-dependencies:
- dependency-name: github.com/containers/storage
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2021-06-23 08:27:06 +00:00
Miloslav Trmač
8a1214a07b Merge pull request #1324 from containers/dependabot/go_modules/github.com/containers/common-0.40.1
Bump github.com/containers/common from 0.40.0 to 0.40.1
2021-06-21 21:41:14 +02:00
dependabot[bot]
1eac38e3ce Bump github.com/containers/common from 0.40.0 to 0.40.1
Bumps [github.com/containers/common](https://github.com/containers/common) from 0.40.0 to 0.40.1.
- [Release notes](https://github.com/containers/common/releases)
- [Commits](https://github.com/containers/common/compare/v0.40.0...v0.40.1)

---
updated-dependencies:
- dependency-name: github.com/containers/common
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2021-06-21 18:25:00 +00:00
Miloslav Trmač
5000f745b0 Merge pull request #1325 from containers/dependabot/go_modules/github.com/containers/storage-1.32.3
Bump github.com/containers/storage from 1.32.2 to 1.32.3
2021-06-21 19:58:14 +02:00
dependabot[bot]
b1e78efaa2 Bump github.com/containers/storage from 1.32.2 to 1.32.3
Bumps [github.com/containers/storage](https://github.com/containers/storage) from 1.32.2 to 1.32.3.
- [Release notes](https://github.com/containers/storage/releases)
- [Changelog](https://github.com/containers/storage/blob/main/docs/containers-storage-changes.md)
- [Commits](https://github.com/containers/storage/compare/v1.32.2...v1.32.3)

---
updated-dependencies:
- dependency-name: github.com/containers/storage
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2021-06-21 08:26:57 +00:00
Daniel J Walsh
ccdaf6e0f2 Merge pull request #1321 from containers/dependabot/go_modules/github.com/containers/image/v5-5.13.2
Bump github.com/containers/image/v5 from 5.13.1 to 5.13.2
2021-06-18 04:41:40 -04:00
Valentin Rothberg
d25476e4f7 Merge pull request #1319 from mtrmac/format-docs
Fix documentation of the --format option of skopeo copy and skopeo sync
2021-06-18 10:41:17 +02:00
dependabot[bot]
298f7476d0 Bump github.com/containers/image/v5 from 5.13.1 to 5.13.2
Bumps [github.com/containers/image/v5](https://github.com/containers/image) from 5.13.1 to 5.13.2.
- [Release notes](https://github.com/containers/image/releases)
- [Commits](https://github.com/containers/image/compare/v5.13.1...v5.13.2)

---
updated-dependencies:
- dependency-name: github.com/containers/image/v5
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2021-06-18 08:25:40 +00:00
Daniel J Walsh
2fee990acc Merge pull request #1318 from containers/dependabot/go_modules/github.com/containers/common-0.40.0
Bump github.com/containers/common from 0.39.0 to 0.40.0
2021-06-17 16:51:51 -04:00
Chris Evich
6ba1affd23 Merge pull request #1317 from cevich/update_vm_images
Cirrus: New VM Images w/ podman 3.2.1
2021-06-17 16:00:11 -04:00
Miloslav Trmač
5778d9bd67 Fix documentation of the --format option of skopeo copy and skopeo sync
It affects all transports, and without --format, we try several manifest formats.

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2021-06-17 19:29:17 +02:00
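A small example for the option whose documentation is fixed above (references are illustrative); as the commit notes, without --format skopeo tries several manifest formats.
```
# Force the destination manifest to the OCI format, regardless of transport
skopeo copy --format oci docker://quay.io/skopeo/stable:latest dir:/tmp/skopeo-oci
```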
dependabot[bot]
df17004709 Bump github.com/containers/common from 0.39.0 to 0.40.0
Bumps [github.com/containers/common](https://github.com/containers/common) from 0.39.0 to 0.40.0.
- [Release notes](https://github.com/containers/common/releases)
- [Commits](https://github.com/containers/common/compare/v0.39.0...v0.40.0)

---
updated-dependencies:
- dependency-name: github.com/containers/common
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2021-06-17 08:23:51 +00:00
Chris Evich
ad4ec8b496 Cirrus: New VM Images w/ podman 3.2.1
Signed-off-by: Chris Evich <cevich@redhat.com>
2021-06-16 17:07:58 -04:00
Daniel J Walsh
5f8ec87c54 Merge pull request #1316 from containers/dependabot/go_modules/github.com/containers/image/v5-5.13.1
Bump github.com/containers/image/v5 from 5.12.0 to 5.13.1
2021-06-16 17:06:17 -04:00
dependabot[bot]
abdc4a7e42 Bump github.com/containers/image/v5 from 5.12.0 to 5.13.1
Bumps [github.com/containers/image/v5](https://github.com/containers/image) from 5.12.0 to 5.13.1.
- [Release notes](https://github.com/containers/image/releases)
- [Commits](https://github.com/containers/image/compare/v5.12.0...v5.13.1)

---
updated-dependencies:
- dependency-name: github.com/containers/image/v5
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2021-06-16 15:34:27 +00:00
Daniel J Walsh
513a524d7d Merge pull request #1308 from cevich/fix_skopeo_skopeo
Fix multi-arch build version check
2021-06-15 16:32:19 -04:00
Daniel J Walsh
d4a500069e Merge pull request #1310 from alvistack/master-linux-amd64
Update nix pin with `make nixpkgs`
2021-06-15 16:31:42 -04:00
Wong Hoi Sing Edison
bcc18ebfb7 Update nix pin with make nixpkgs
Fixes #1305

Signed-off-by: Wong Hoi Sing Edison <hswong3i@pantarei-design.com>
2021-06-12 07:20:51 +08:00
Chris Evich
9b9ef675c1 Fix multi-arch build version check
This was copy-pasted from buildah and podman; unfortunately, the
Dockerfile entrypoint is different for skopeo. Fix it.

Signed-off-by: Chris Evich <cevich@redhat.com>
2021-06-11 15:00:49 -04:00
Daniel J Walsh
dde3e759f6 Merge pull request #1312 from cevich/fix_links
[CI:DOCS] Fix docs links due to branch rename
2021-06-11 14:40:44 -04:00
Daniel J Walsh
622faa0b8a Merge pull request #1311 from containers/dependabot/go_modules/github.com/containers/storage-1.32.2
Bump github.com/containers/storage from 1.32.1 to 1.32.2
2021-06-11 14:40:22 -04:00
Chris Evich
9a5f009ea2 [CI:DOCS] Fix docs links due to branch rename
Ref: https://github.com/containers/common/issues/549

Signed-off-by: Chris Evich <cevich@redhat.com>
2021-06-10 11:35:11 -04:00
dependabot[bot]
865407cad0 Bump github.com/containers/storage from 1.32.1 to 1.32.2
Bumps [github.com/containers/storage](https://github.com/containers/storage) from 1.32.1 to 1.32.2.
- [Release notes](https://github.com/containers/storage/releases)
- [Changelog](https://github.com/containers/storage/blob/master/docs/containers-storage-changes.md)
- [Commits](https://github.com/containers/storage/compare/v1.32.1...v1.32.2)

---
updated-dependencies:
- dependency-name: github.com/containers/storage
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2021-06-10 08:21:38 +00:00
Valentin Rothberg
ec13aa6d87 Merge pull request #1305 from alvistack/master-linux-amd64
Update nix pin with `make nixpkgs`
2021-06-03 15:57:26 +02:00
Daniel J Walsh
780de354d4 Merge pull request #1304 from containers/dependabot/go_modules/github.com/docker/docker-20.10.7incompatible
Bump github.com/docker/docker from 20.10.6+incompatible to 20.10.7+incompatible
2021-06-03 09:40:29 -04:00
Wong Hoi Sing Edison
10c4c877ba Update nix pin with make nixpkgs
- Fix `make nixpkgs` so it pins to branch `nixos-21.05`
  - Lint the code with `nixpkgs-fmt`
  - Sync the code between x86_64 and aarch64

Signed-off-by: Wong Hoi Sing Edison <hswong3i@pantarei-design.com>
2021-06-03 16:33:04 +08:00
dependabot[bot]
e32f3f1792 Bump github.com/docker/docker
Bumps [github.com/docker/docker](https://github.com/docker/docker) from 20.10.6+incompatible to 20.10.7+incompatible.
- [Release notes](https://github.com/docker/docker/releases)
- [Changelog](https://github.com/moby/moby/blob/master/CHANGELOG.md)
- [Commits](https://github.com/docker/docker/compare/v20.10.6...v20.10.7)

---
updated-dependencies:
- dependency-name: github.com/docker/docker
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2021-06-03 08:17:08 +00:00
Daniel J Walsh
a07f1e0f89 Merge pull request #1299 from containers/dependabot/go_modules/github.com/containers/storage-1.32.1
Bump github.com/containers/storage from 1.32.0 to 1.32.1
2021-06-02 09:00:00 -04:00
Daniel J Walsh
a2c8022a21 Merge pull request #1303 from cevich/fix_github_workflows
Fix GitHub workflows + Add [CI:DOCS] mode
2021-06-02 08:56:54 -04:00
Daniel J Walsh
b9661b2a05 Merge pull request #1302 from bentito/patch-1
install.md Building Docs needs MacOS section
2021-06-01 20:34:24 -04:00
Chris Evich
761100143a Fix wrong directory name
GitHub checks the `workflows` sub-directory for GitHub Actions workflow
files. Since it was called `workflow`, nothing was running. Fix this
by renaming the directory.

Signed-off-by: Chris Evich <cevich@redhat.com>
2021-06-01 10:15:08 -04:00
Chris Evich
a0b6ea288d Support [CI:DOCS] mode
Signed-off-by: Chris Evich <cevich@redhat.com>
2021-06-01 10:15:08 -04:00
Brett Tofel
e5cb7ce196 install.md Building Docs needs MacOS section
The Building Documentation section of `install.md` also needs a MacOS section, because go-md2man is not present there by default either.
2021-06-01 10:01:00 -04:00
dependabot[bot]
c806083830 Bump github.com/containers/storage from 1.32.0 to 1.32.1
Bumps [github.com/containers/storage](https://github.com/containers/storage) from 1.32.0 to 1.32.1.
- [Release notes](https://github.com/containers/storage/releases)
- [Changelog](https://github.com/containers/storage/blob/master/docs/containers-storage-changes.md)
- [Commits](https://github.com/containers/storage/compare/v1.32.0...v1.32.1)

Signed-off-by: dependabot[bot] <support@github.com>
2021-05-28 08:26:20 +00:00
Valentin Rothberg
714ffe1b60 Merge pull request #1297 from containers/dependabot/go_modules/github.com/containers/common-0.39.0
Bump github.com/containers/common from 0.38.4 to 0.39.0
2021-05-26 12:41:27 +02:00
dependabot[bot]
cac3f2b140 Bump github.com/containers/common from 0.38.4 to 0.39.0
Bumps [github.com/containers/common](https://github.com/containers/common) from 0.38.4 to 0.39.0.
- [Release notes](https://github.com/containers/common/releases)
- [Commits](https://github.com/containers/common/compare/v0.38.4...v0.39.0)

Signed-off-by: dependabot[bot] <support@github.com>
2021-05-26 08:25:26 +00:00
Daniel J Walsh
8efffce8be Merge pull request #1294 from containers/dependabot/go_modules/github.com/containers/storage-1.31.2
Bump github.com/containers/storage from 1.31.1 to 1.31.2
2021-05-21 15:45:48 -04:00
Daniel J Walsh
efc789be55 Merge pull request #1288 from cevich/multi_arch_images
Multi-arch github-action workflow unification
2021-05-21 15:45:14 -04:00
Chris Evich
6452a9b6f6 Multi-arch github-action workflow unification
This is a port from the podman and buildah repository workflows.
Its purpose is to build and push multi-arch images containing the
latest upstream, testing, and stable versions of skopeo.  It fully
replaces the last remaining use of Travis in this repo, for
substantially the same purpose.

In a future commit, I intend to de-duplicate this workflow from
podman and buildah, such that all three share a common set of details.
Until then, any changes will need to be manually duplicated across
all three repos.

Signed-off-by: Chris Evich <cevich@redhat.com>
2021-05-21 14:33:55 -04:00
dependabot[bot]
184f0eee58 Bump github.com/containers/storage from 1.31.1 to 1.31.2
Bumps [github.com/containers/storage](https://github.com/containers/storage) from 1.31.1 to 1.31.2.
- [Release notes](https://github.com/containers/storage/releases)
- [Changelog](https://github.com/containers/storage/blob/master/docs/containers-storage-changes.md)
- [Commits](https://github.com/containers/storage/compare/v1.31.1...v1.31.2)

Signed-off-by: dependabot[bot] <support@github.com>
2021-05-21 08:16:23 +00:00
Daniel J Walsh
5af5f8a0e7 Merge pull request #1293 from rhatdan/master
Bump to v1.3.0
2021-05-19 17:11:31 -04:00
Daniel J Walsh
65ed9920da Move to v1.3.1-dev 2021-05-19 17:09:51 -04:00
1002 changed files with 50634 additions and 36014 deletions


@@ -6,7 +6,7 @@ env:
#### Global variables used for all tasks
####
# Name of the ultimate destination branch for this CI run, PR or post-merge.
DEST_BRANCH: "master"
DEST_BRANCH: "main"
# Overrides default location (/tmp/cirrus) for repo clone
GOPATH: &gopath "/var/tmp/go"
GOBIN: "${GOPATH}/bin"
@@ -23,30 +23,27 @@ env:
####
#### Cache-image names to test with (double-quotes around names are critical)
####
FEDORA_NAME: "fedora-34"
PRIOR_FEDORA_NAME: "fedora-33"
UBUNTU_NAME: "ubuntu-2104"
PRIOR_UBUNTU_NAME: "ubuntu-2010"
FEDORA_NAME: "fedora-35"
PRIOR_FEDORA_NAME: "fedora-34"
UBUNTU_NAME: "ubuntu-2110"
# Google-cloud VM Images
IMAGE_SUFFIX: "c6032583541653504"
IMAGE_SUFFIX: "c4764556961513472"
FEDORA_CACHE_IMAGE_NAME: "fedora-${IMAGE_SUFFIX}"
PRIOR_FEDORA_CACHE_IMAGE_NAME: "prior-fedora-${IMAGE_SUFFIX}"
UBUNTU_CACHE_IMAGE_NAME: "ubuntu-${IMAGE_SUFFIX}"
PRIOR_UBUNTU_CACHE_IMAGE_NAME: "prior-ubuntu-${IMAGE_SUFFIX}"
# Container FQIN's
FEDORA_CONTAINER_FQIN: "quay.io/libpod/fedora_podman:${IMAGE_SUFFIX}"
PRIOR_FEDORA_CONTAINER_FQIN: "quay.io/libpod/prior-fedora_podman:${IMAGE_SUFFIX}"
UBUNTU_CONTAINER_FQIN: "quay.io/libpod/ubuntu_podman:${IMAGE_SUFFIX}"
PRIOR_UBUNTU_CONTAINER_FQIN: "quay.io/libpod/prior-ubuntu_podman:${IMAGE_SUFFIX}"
# Equivilent to image produced by 'make build-container'
SKOPEO_CI_CONTAINER_FQIN: "quay.io/skopeo/ci:${DEST_BRANCH}"
# Built along with the standard PR-based workflow in c/automation_images
SKOPEO_CIDEV_CONTAINER_FQIN: "quay.io/libpod/skopeo_cidev:${IMAGE_SUFFIX}"
# Default timeout for each task
timeout_in: 30m
timeout_in: 45m
gcp_credentials: ENCRYPTED[52d9e807b531b37ab14e958cb5a72499460663f04c8d73e22ad608c027a31118420f1c80f0be0882fbdf96f49d8f9ac0]
@@ -57,20 +54,43 @@ validate_task:
# under Cirrus-CI, due to challenges obtaining the starting commit ID.
# Only do validation for PRs.
only_if: $CIRRUS_PR != ''
container: &build_container
image: "${SKOPEO_CI_CONTAINER_FQIN}"
container:
image: '${SKOPEO_CIDEV_CONTAINER_FQIN}'
cpu: 4
memory: 8
script: make validate-local
script: |
make validate-local
make vendor && hack/tree_status.sh
doccheck_task:
only_if: $CIRRUS_PR != ''
depends_on:
- validate
container:
image: "${FEDORA_CONTAINER_FQIN}"
cpu: 4
memory: 8
env:
BUILDTAGS: &withopengpg 'btrfs_noversion libdm_no_deferred_remove containers_image_openpgp'
script: |
# TODO: Can't use 'runner.sh setup' inside container. However,
# removing the pre-installed package is the only necessary step
# at the time of this comment.
dnf erase -y skopeo # Guarantee non-interference
"${SKOPEO_PATH}/${SCRIPT_BASE}/runner.sh" build
"${SKOPEO_PATH}/${SCRIPT_BASE}/runner.sh" doccheck
cross_task:
osx_task:
only_if: &not_docs $CIRRUS_CHANGE_TITLE !=~ '.*CI:DOCS.*'
depends_on:
- validate
macos_instance:
image: catalina-xcode
setup_script: |
export PATH=$GOPATH/bin:$PATH
brew update
brew install gpgme go go-md2man
go get -u golang.org/x/lint/golint
go install golang.org/x/lint/golint@latest
test_script: |
export PATH=$GOPATH/bin:$PATH
go version
@@ -80,6 +100,28 @@ cross_task:
/usr/local/bin/skopeo -v
cross_task:
alias: cross
only_if: *not_docs
depends_on:
- validate
gce_instance:
image_project: libpod-218412
zone: "us-central1-f"
cpu: 2
memory: "4Gb"
# Required to be 200gig, do not modify - has i/o performance impact
# according to gcloud CLI tool warning messages.
disk: 200
image_name: ${FEDORA_CACHE_IMAGE_NAME}
env:
BUILDTAGS: *withopengpg
setup_script: >-
"${GOSRC}/${SCRIPT_BASE}/runner.sh" setup
cross_script: >-
"${GOSRC}/${SCRIPT_BASE}/runner.sh" cross
#####
##### NOTE: This task is subtantially duplicated in the containers/image
##### repository's `.cirrus.yml`. Changes made here should be fully merged
@@ -87,6 +129,7 @@ cross_task:
#####
test_skopeo_task:
alias: test_skopeo
only_if: *not_docs
depends_on:
- validate
gce_instance:
@@ -99,20 +142,18 @@ test_skopeo_task:
disk: 200
image_name: ${FEDORA_CACHE_IMAGE_NAME}
matrix:
- name: "Skopeo Test"
- name: "Skopeo Test" # N/B: Name ref. by hack/get_fqin.sh
env:
BUILDTAGS: 'btrfs_noversion libdm_no_deferred_remove'
- name: "Skopeo Test w/ opengpg"
env:
BUILDTAGS: 'btrfs_noversion libdm_no_deferred_remove containers_image_openpgp'
BUILDTAGS: *withopengpg
setup_script: >-
"${GOSRC}/${SCRIPT_BASE}/runner.sh" setup
vendor_script: >-
"${SKOPEO_PATH}/${SCRIPT_BASE}/runner.sh" vendor
build_script: >-
"${SKOPEO_PATH}/${SCRIPT_BASE}/runner.sh" build
validate_script: >-
"${SKOPEO_PATH}/${SCRIPT_BASE}/runner.sh" validate
unit_script: >-
"${SKOPEO_PATH}/${SCRIPT_BASE}/runner.sh" unit
integration_script: >-
@@ -137,7 +178,6 @@ meta_task:
${FEDORA_CACHE_IMAGE_NAME}
${PRIOR_FEDORA_CACHE_IMAGE_NAME}
${UBUNTU_CACHE_IMAGE_NAME}
${PRIOR_UBUNTU_CACHE_IMAGE_NAME}
BUILDID: "${CIRRUS_BUILD_ID}"
REPOREF: "${CIRRUS_REPO_NAME}"
GCPJSON: ENCRYPTED[6867b5a83e960e7c159a98fe6c8360064567a071c6f4b5e7d532283ecd870aa65c94ccd74bdaa9bf7aadac9d42e20a67]
@@ -156,6 +196,8 @@ success_task:
# N/B: ALL tasks must be listed here, minus their '_task' suffix.
depends_on:
- validate
- doccheck
- osx
- cross
- test_skopeo
- meta

.github/workflows/check_cirrus_cron.yml

@@ -0,0 +1,102 @@
---
# See also:
# https://github.com/containers/podman/blob/main/.github/workflows/check_cirrus_cron.yml
# Format Ref: https://docs.github.com/en/free-pro-team@latest/actions/reference/workflow-syntax-for-github-actions
# Required to un-FUBAR default ${{github.workflow}} value
name: check_cirrus_cron
on:
# Note: This only applies to the default branch.
schedule:
# N/B: This should correspond to a period slightly after
# the last job finishes running. See job defs. at:
# https://cirrus-ci.com/settings/repository/6706677464432640
- cron: '59 23 * * 1-5'
# Debug: Allow triggering job manually in github-actions WebUI
workflow_dispatch: {}
env:
# Debug-mode can reveal secrets, only enable by a secret value.
# Ref: https://help.github.com/en/actions/configuring-and-managing-workflows/managing-a-workflow-run#enabling-step-debug-logging
ACTIONS_STEP_DEBUG: '${{ secrets.ACTIONS_STEP_DEBUG }}'
# CSV listing of e-mail addresses for delivery failure or error notices
RCPTCSV: rh.container.bot@gmail.com,podman-monitor@lists.podman.io
# Filename for table of cron-name to build-id data
# (must be in $GITHUB_WORKSPACE/artifacts/)
NAME_ID_FILEPATH: './artifacts/name_id.txt'
jobs:
cron_failures:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
with:
persist-credentials: false
# Avoid duplicating cron_failures.sh in skopeo repo.
- uses: actions/checkout@v2
with:
repository: containers/podman
path: '_podman'
persist-credentials: false
- name: Get failed cron names and Build IDs
id: cron
run: './_podman/.github/actions/${{ github.workflow }}/${{ github.job }}.sh'
- if: steps.cron.outputs.failures > 0
shell: bash
# Must be inline, since context expressions are used.
# Ref: https://docs.github.com/en/free-pro-team@latest/actions/reference/context-and-expression-syntax-for-github-actions
run: |
set -eo pipefail
(
echo "Detected one or more Cirrus-CI cron-triggered jobs have failed recently:"
echo ""
while read -r NAME BID; do
echo "Cron build '$NAME' Failed: https://cirrus-ci.com/build/$BID"
done < "$NAME_ID_FILEPATH"
echo ""
echo "# Source: ${{ github.workflow }} workflow on ${{ github.repository }}."
# Separate content from sendgrid.com automatic footer.
echo ""
echo ""
) > ./artifacts/email_body.txt
- if: steps.cron.outputs.failures > 0
name: Send failure notification e-mail
# Ref: https://github.com/dawidd6/action-send-mail
uses: dawidd6/action-send-mail@v2.2.2
with:
server_address: ${{secrets.ACTION_MAIL_SERVER}}
server_port: 465
username: ${{secrets.ACTION_MAIL_USERNAME}}
password: ${{secrets.ACTION_MAIL_PASSWORD}}
subject: Cirrus-CI cron build failures on ${{github.repository}}
to: ${{env.RCPTCSV}}
from: ${{secrets.ACTION_MAIL_SENDER}}
body: file://./artifacts/email_body.txt
- if: always()
uses: actions/upload-artifact@v2
with:
name: ${{ github.job }}_artifacts
path: artifacts/*
- if: failure()
name: Send error notification e-mail
uses: dawidd6/action-send-mail@v2.2.2
with:
server_address: ${{secrets.ACTION_MAIL_SERVER}}
server_port: 465
username: ${{secrets.ACTION_MAIL_USERNAME}}
password: ${{secrets.ACTION_MAIL_PASSWORD}}
subject: Github workflow error on ${{github.repository}}
to: ${{env.RCPTCSV}}
from: ${{secrets.ACTION_MAIL_SENDER}}
body: "Job failed: https://github.com/${{github.repository}}/actions/runs/${{github.run_id}}"

.github/workflows/multi-arch-build.yaml

@@ -0,0 +1,209 @@
---
# Please see contrib/<reponame>image/README.md for details on the intentions
# of this workflow.
#
# BIG FAT WARNING: This workflow is duplicated across containers/skopeo,
# containers/buildah, and containers/podman. ANY AND
# ALL CHANGES MADE HERE MUST BE MANUALLY DUPLICATED
# TO THE OTHER REPOS.
name: build multi-arch images
on:
# Upstream tends to be very active, with many merges per day.
# Only run this daily via cron schedule, or manually, not by branch push.
schedule:
- cron: '0 8 * * *'
# allows to run this workflow manually from the Actions tab
workflow_dispatch:
jobs:
multi:
name: multi-arch image build
env:
REPONAME: skopeo # No easy way to parse this out of $GITHUB_REPOSITORY
# Server/namespace value used to format FQIN
REPONAME_QUAY_REGISTRY: quay.io/skopeo
CONTAINERS_QUAY_REGISTRY: quay.io/containers
# list of architectures for build
PLATFORMS: linux/amd64,linux/s390x,linux/ppc64le,linux/arm64
# Command to execute in container to obtain project version number
VERSION_CMD: "--version" # skopeo is the entrypoint
# build several images (upstream, testing, stable) in parallel
strategy:
# By default, failure of one matrix item cancels all others
fail-fast: false
matrix:
# Builds are located under contrib/<reponame>image/<source> directory
source:
- upstream
- testing
- stable
runs-on: ubuntu-latest
# internal registry caches build for inspection before push
services:
registry:
image: quay.io/libpod/registry:2
ports:
- 5000:5000
steps:
- name: Checkout
uses: actions/checkout@v2
- name: Set up QEMU
uses: docker/setup-qemu-action@v1
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v1
with:
driver-opts: network=host
install: true
- name: Build and locally push image
uses: docker/build-push-action@v2
with:
context: contrib/${{ env.REPONAME }}image/${{ matrix.source }}
file: ./contrib/${{ env.REPONAME }}image/${{ matrix.source }}/Dockerfile
platforms: ${{ env.PLATFORMS }}
push: true
tags: localhost:5000/${{ env.REPONAME }}/${{ matrix.source }}
# Simple verification that stable images work, and
# also grab version number use in forming the FQIN.
- name: amd64 container sniff test
if: matrix.source == 'stable'
id: sniff_test
run: |
podman pull --tls-verify=false \
localhost:5000/$REPONAME/${{ matrix.source }}
VERSION_OUTPUT=$(podman run \
localhost:5000/$REPONAME/${{ matrix.source }} \
$VERSION_CMD)
echo "$VERSION_OUTPUT"
VERSION=$(awk -r -e "/^${REPONAME} version /"'{print $3}' <<<"$VERSION_OUTPUT")
test -n "$VERSION"
echo "::set-output name=version::$VERSION"
- name: Generate image FQIN(s) to push
id: reponame_reg
run: |
if [[ "${{ matrix.source }}" == 'stable' ]]; then
# The command version in image just built
VERSION='v${{ steps.sniff_test.outputs.version }}'
# workaround vim syntax-highlight bug: '
# Push both new|updated version-tag and latest-tag FQINs
FQIN="$REPONAME_QUAY_REGISTRY/stable:$VERSION,$REPONAME_QUAY_REGISTRY/stable:latest"
elif [[ "${{ matrix.source }}" == 'testing' ]]; then
# Assume some contents changed, always push latest testing.
FQIN="$REPONAME_QUAY_REGISTRY/testing:latest"
elif [[ "${{ matrix.source }}" == 'upstream' ]]; then
# Assume some contents changed, always push latest upstream.
FQIN="$REPONAME_QUAY_REGISTRY/upstream:latest"
else
echo "::error::Unknown matrix item '${{ matrix.source }}'"
exit 1
fi
echo "::warning::Pushing $FQIN"
echo "::set-output name=fqin::${FQIN}"
echo '::set-output name=push::true'
# This is substantially similar to the above logic,
# but only handles $CONTAINERS_QUAY_REGISTRY for
# the stable "latest" and named-version tagged images.
- name: Generate containers reg. image FQIN(s)
if: matrix.source == 'stable'
id: containers_reg
run: |
VERSION='v${{ steps.sniff_test.outputs.version }}'
# workaround vim syntax-highlight bug: '
# Push both new|updated version-tag and latest-tag FQINs
FQIN="$CONTAINERS_QUAY_REGISTRY/$REPONAME:$VERSION,$CONTAINERS_QUAY_REGISTRY/$REPONAME:latest"
echo "::warning::Pushing $FQIN"
echo "::set-output name=fqin::${FQIN}"
echo '::set-output name=push::true'
- name: Define LABELS multi-line env. var. value
run: |
# This is a really hacky/strange workflow idiom, required
# for setting multi-line $LABELS value for consumption in
# a future step. There is literally no cleaner way to do this :<
# https://docs.github.com/en/actions/reference/workflow-commands-for-github-actions#multiline-strings
function set_labels() {
echo 'LABELS<<DELIMITER' >> "$GITHUB_ENV"
for line; do
echo "$line" | tee -a "$GITHUB_ENV"
done
echo "DELIMITER" >> "$GITHUB_ENV"
}
declare -a lines
lines=(\
"org.opencontainers.image.source=https://github.com/${GITHUB_REPOSITORY}.git"
"org.opencontainers.image.revision=${GITHUB_SHA}"
"org.opencontainers.image.created=$(date -u --iso-8601=seconds)"
)
# Only the 'stable' matrix source obtains $VERSION
if [[ "${{ matrix.source }}" == "stable" ]]; then
lines+=(\
"org.opencontainers.image.version=${{ steps.sniff_test.outputs.version }}"
)
fi
set_labels "${lines[@]}"
# Separate steps to login and push for $REPONAME_QUAY_REGISTRY and
# $CONTAINERS_QUAY_REGISTRY are required, because 2 sets of credentials
# are used and namespaced within the registry. At the same time, reuse
# of non-shell steps is not supported by Github Actions nor are YAML
# anchors/aliases, nor composite actions.
# Push to $REPONAME_QUAY_REGISTRY for stable, testing. and upstream
- name: Login to ${{ env.REPONAME_QUAY_REGISTRY }}
uses: docker/login-action@v1
if: steps.reponame_reg.outputs.push == 'true'
with:
registry: ${{ env.REPONAME_QUAY_REGISTRY }}
# N/B: Secrets are not passed to workflows that are triggered
# by a pull request from a fork
username: ${{ secrets.REPONAME_QUAY_USERNAME }}
password: ${{ secrets.REPONAME_QUAY_PASSWORD }}
- name: Push images to ${{ steps.reponame_reg.outputs.fqin }}
uses: docker/build-push-action@v2
if: steps.reponame_reg.outputs.push == 'true'
with:
cache-from: type=registry,ref=localhost:5000/${{ env.REPONAME }}/${{ matrix.source }}
cache-to: type=inline
context: contrib/${{ env.REPONAME }}image/${{ matrix.source }}
file: ./contrib/${{ env.REPONAME }}image/${{ matrix.source }}/Dockerfile
platforms: ${{ env.PLATFORMS }}
push: true
tags: ${{ steps.reponame_reg.outputs.fqin }}
labels: |
${{ env.LABELS }}
# Push to $CONTAINERS_QUAY_REGISTRY only stable
- name: Login to ${{ env.CONTAINERS_QUAY_REGISTRY }}
if: steps.containers_reg.outputs.push == 'true'
uses: docker/login-action@v1
with:
registry: ${{ env.CONTAINERS_QUAY_REGISTRY}}
username: ${{ secrets.CONTAINERS_QUAY_USERNAME }}
password: ${{ secrets.CONTAINERS_QUAY_PASSWORD }}
- name: Push images to ${{ steps.containers_reg.outputs.fqin }}
if: steps.containers_reg.outputs.push == 'true'
uses: docker/build-push-action@v2
with:
cache-from: type=registry,ref=localhost:5000/${{ env.REPONAME }}/${{ matrix.source }}
cache-to: type=inline
context: contrib/${{ env.REPONAME }}image/${{ matrix.source }}
file: ./contrib/${{ env.REPONAME }}image/${{ matrix.source }}/Dockerfile
platforms: ${{ env.PLATFORMS }}
push: true
tags: ${{ steps.containers_reg.outputs.fqin }}
labels: |
${{ env.LABELS }}


@@ -1,75 +0,0 @@
language: go
go:
- 1.15.x
notifications:
email: false
env:
global:
# Multiarch manifest will support architectures from this list. It should be the same architectures, as ones in image-build-push step in this Travis config
- MULTIARCH_MANIFEST_ARCHITECTURES="amd64 s390x ppc64le"
# env variables for stable image build
- STABLE_IMAGE=quay.io/skopeo/stable:v1.2.0
- EXTRA_STABLE_IMAGE=quay.io/containers/skopeo:v1.2.0
# env variable for upstream image build
- UPSTREAM_IMAGE=quay.io/skopeo/upstream:master
# Just declaration of the image-build-push step with script actions to execute
x_base_steps:
- &image-build-push
services:
- docker
os: linux
dist: focal
script:
# skopeo upstream image build
- make -f release/Makefile build-image/upstream
# Push image in case if build is started via cron job or code is pushed to master
- if [ "$TRAVIS_EVENT_TYPE" == "push" ] || [ "$TRAVIS_EVENT_TYPE" == "cron" ]; then
make -f release/Makefile push-image/upstream ;
fi
# skopeo stable image build
- make -f release/Makefile build-image/stable
# Push image in case if build is started via cron job or code is pushed to master
- if [ "$TRAVIS_EVENT_TYPE" == "push" ] || [ "$TRAVIS_EVENT_TYPE" == "cron" ]; then
make -f release/Makefile push-image/stable ;
fi
# Just declaration of stage order to run
stages:
# Build and push image for 1 architecture
- name: image-build-push
if: branch = master
# Create and push image manifest to have multiarch image on top of architecture specific images
- name: manifest-multiarch-push
if: (type IN (push, cron)) AND branch = master
# Actual execution of steps
jobs:
include:
# Run 3 image-build-push tasks in parallel for linux/amd64, linux/s390x and linux/ppc64le platforms (for upstream and stable)
- stage: image-build-push
<<: *image-build-push
name: images for amd64
arch: amd64
- stage: image-build-push
<<: *image-build-push
name: images for s390x
arch: s390x
- stage: image-build-push
<<: *image-build-push
name: images for ppc64le
arch: ppc64le
# Run final task to generate multi-arch image manifests (for upstream and stable)
- stage: manifest-multiarch-push
os: linux
dist: focal
script:
- make -f release/Makefile push-manifest-multiarch/upstream
- make -f release/Makefile push-manifest-multiarch/stable


@@ -1,3 +1,3 @@
## The skopeo Project Community Code of Conduct
The skopeo project follows the [Containers Community Code of Conduct](https://github.com/containers/common/blob/master/CODE-OF-CONDUCT.md).
The skopeo project follows the [Containers Community Code of Conduct](https://github.com/containers/common/blob/main/CODE-OF-CONDUCT.md).


@@ -117,29 +117,31 @@ commit automatically with `git commit -s`.
### Dependencies management
Make sure [`vndr`](https://github.com/LK4D4/vndr) is installed.
Dependencies are managed via [standard go modules](https://golang.org/ref/mod).
In order to add a new dependency to this project:
- add a new line to `vendor.conf` according to `vndr` rules (e.g. `github.com/pkg/errors master`)
- use `go get -d path/to/dep@version` to add a new line to `go.mod`
- run `make vendor`
In order to update an existing dependency:
- update the relevant dependency line in `vendor.conf`
- use `go get -d -u path/to/dep@version` to update the relevant dependency line in `go.mod`
- run `make vendor`
When new PRs for [containers/image](https://github.com/containers/image) break `skopeo` (i.e. `containers/image` tests fail in `make test-skopeo`):
- create out a new branch in your `skopeo` checkout and switch to it
- update `vendor.conf`. Find out the `containers/image` dependency; update it to vendor from your own branch and your own repository fork (e.g. `github.com/containers/image my-branch https://github.com/runcom/image`)
- find out the version of `containers/image` you want to use and note its commit ID. You might also want to use a fork of `containers/image`, in that case note its repo
- use `go get -d github.com/$REPO/image/v5@$COMMIT_ID` to download the right version. The command will fetch the dependency and then fail because of a conflict in `go.mod`, this is expected. Note the pseudo-version (eg. `v5.13.1-0.20210707123201-50afbf0a326`)
- use `go mod edit -replace=github.com/containers/image/v5=github.com/$REPO/image/v5@$PSEUDO_VERSION` to add a replacement line to `go.mod` (e.g. `replace github.com/containers/image/v5 => github.com/moio/image/v5 v5.13.1-0.20210707123201-50afbf0a3262`)
- run `make vendor`
- make any other necessary changes in the skopeo repo (e.g. add other dependencies now required by `containers/image`, or update skopeo for changed `containers/image` API)
- optionally add new integration tests to the skopeo repo
- submit the resulting branch as a skopeo PR, marked “DO NOT MERGE”
- iterate until tests pass and the PR is reviewed
- then the original `containers/image` PR can be merged, disregarding its `make test-skopeo` failure
- as soon as possible after that, in the skopeo PR, restore the `containers/image` line in `vendor.conf` to use `containers/image:master`
- as soon as possible after that, in the skopeo PR, use `go mod edit -dropreplace=github.com/containers/image` to remove the `replace` line in `go.mod`
- run `make vendor`
- update the skopeo PR with the result, drop the “DO NOT MERGE” marking
- after tests complete successfully again, merge the skopeo PR


@@ -1,54 +0,0 @@
FROM fedora
RUN dnf -y update && dnf install -y make git golang golang-github-cpuguy83-md2man \
# storage deps
btrfs-progs-devel \
device-mapper-devel \
# gpgme bindings deps
libassuan-devel gpgme-devel \
gnupg \
# htpasswd for system tests
httpd-tools \
# OpenShift deps
which tar wget hostname util-linux bsdtar socat ethtool device-mapper iptables tree findutils nmap-ncat e2fsprogs xfsprogs lsof docker iproute \
bats jq podman runc \
golint \
openssl \
&& dnf clean all
# Install two versions of the registry. The first is an older version that
# only supports schema1 manifests. The second is a newer version that supports
# both. This allows integration-cli tests to cover push/pull with both schema1
# and schema2 manifests.
RUN set -x \
&& REGISTRY_COMMIT_SCHEMA1=ec87e9b6971d831f0eff752ddb54fb64693e51cd \
&& REGISTRY_COMMIT=47a064d4195a9b56133891bbb13620c3ac83a827 \
&& export GOPATH="$(mktemp -d)" \
&& git clone https://github.com/docker/distribution.git "$GOPATH/src/github.com/docker/distribution" \
&& (cd "$GOPATH/src/github.com/docker/distribution" && git checkout -q "$REGISTRY_COMMIT") \
&& GOPATH="$GOPATH/src/github.com/docker/distribution/Godeps/_workspace:$GOPATH" \
go build -o /usr/local/bin/registry-v2 github.com/docker/distribution/cmd/registry \
&& (cd "$GOPATH/src/github.com/docker/distribution" && git checkout -q "$REGISTRY_COMMIT_SCHEMA1") \
&& GOPATH="$GOPATH/src/github.com/docker/distribution/Godeps/_workspace:$GOPATH" \
go build -o /usr/local/bin/registry-v2-schema1 github.com/docker/distribution/cmd/registry \
&& rm -rf "$GOPATH"
RUN set -x \
&& export GOPATH=$(mktemp -d) \
&& git clone --depth 1 -b v1.5.0-alpha.3 git://github.com/openshift/origin "$GOPATH/src/github.com/openshift/origin" \
# The sed edits out a "go < 1.5" check which works incorrectly with go ≥ 1.10. \
&& sed -i -e 's/\[\[ "\${go_version\[2]}" < "go1.5" ]]/false/' "$GOPATH/src/github.com/openshift/origin/hack/common.sh" \
&& (cd "$GOPATH/src/github.com/openshift/origin" && make clean build && make all WHAT=cmd/dockerregistry) \
&& cp -a "$GOPATH/src/github.com/openshift/origin/_output/local/bin/linux"/*/* /usr/local/bin \
&& cp "$GOPATH/src/github.com/openshift/origin/images/dockerregistry/config.yml" /atomic-registry-config.yml \
&& rm -rf "$GOPATH" \
&& mkdir /registry
ENV GOPATH /usr/share/gocode:/go
ENV PATH $GOPATH/bin:/usr/share/gocode/bin:$PATH
ENV container_magic 85531765-346b-4316-bdb8-358e4cca9e5d
RUN go version
WORKDIR /go/src/github.com/containers/skopeo
COPY . /go/src/github.com/containers/skopeo
#ENTRYPOINT ["hack/dind"]


@@ -1,12 +0,0 @@
FROM registry.fedoraproject.org/fedora:33
RUN dnf update -y && \
dnf install -y \
btrfs-progs-devel \
device-mapper-devel \
golang \
gpgme-devel \
make
ENV GOPATH=/
WORKDIR /src/github.com/containers/skopeo

Makefile

@@ -1,8 +1,8 @@
.PHONY: all binary build-container docs docs-in-container build-local clean install install-binary install-completions shell test-integration .install.vndr vendor vendor-in-container
.PHONY: all binary docs docs-in-container build-local clean install install-binary install-completions shell test-integration .install.vndr vendor vendor-in-container
export GOPROXY=https://proxy.golang.org
# On some plaforms (eg. macOS, FreeBSD) gpgme is installed in /usr/local/ but /usr/local/include/ is
# On some platforms (eg. macOS, FreeBSD) gpgme is installed in /usr/local/ but /usr/local/include/ is
# not in the default search path. Rather than hard-code this directory, use gpgme-config.
# Sadly that must be done at the top-level user instead of locally in the gpgme subpackage, because cgo
# supports only pkg-config, not general shell scripts, and gpgme does not install a pkg-config file.
@@ -10,12 +10,12 @@ export GOPROXY=https://proxy.golang.org
# (and the user will probably find out because the cgo compilation will fail).
GPGME_ENV := CGO_CFLAGS="$(shell gpgme-config --cflags 2>/dev/null)" CGO_LDFLAGS="$(shell gpgme-config --libs 2>/dev/null)"
# Normally empty, DESTDIR can be used to relocate the entire install-tree
# The following variables very roughly follow https://www.gnu.org/prep/standards/standards.html#Makefile-Conventions .
DESTDIR ?=
CONTAINERSCONFDIR ?= ${DESTDIR}/etc/containers
PREFIX ?= /usr/local
CONTAINERSCONFDIR ?= /etc/containers
REGISTRIESDDIR ?= ${CONTAINERSCONFDIR}/registries.d
SIGSTOREDIR ?= ${DESTDIR}/var/lib/containers/sigstore
PREFIX ?= ${DESTDIR}/usr/local
SIGSTOREDIR ?= /var/lib/containers/sigstore
BINDIR ?= ${PREFIX}/bin
MANDIR ?= ${PREFIX}/share/man
BASHCOMPLETIONSDIR ?= ${PREFIX}/share/bash-completion/completions
@@ -29,12 +29,10 @@ ifeq ($(GOBIN),)
GOBIN := $(GOPATH)/bin
endif
# Required for integration-tests to detect they are running inside a specific
# container image. Env. var defined in image, make does not automatically
# pass to children unless explicitly exported
export container_magic
CONTAINER_RUNTIME := $(shell command -v podman 2> /dev/null || echo docker)
GOMD2MAN ?= $(shell command -v go-md2man || echo '$(GOBIN)/go-md2man')
# Multiple scripts are sensitive to this value, make sure it's exported/available
# N/B: Need to use 'command -v' here for compatibility with MacOS.
export CONTAINER_RUNTIME ?= $(if $(shell command -v podman),podman,docker)
GOMD2MAN ?= $(if $(shell command -v go-md2man),go-md2man,$(GOBIN)/go-md2man)
# Go module support: set `-mod=vendor` to use the vendored sources.
# See also hack/make.sh.
@@ -54,9 +52,32 @@ ifeq ($(GOOS), linux)
endif
GIT_BRANCH := $(shell git rev-parse --abbrev-ref HEAD 2>/dev/null)
IMAGE := skopeo-dev$(if $(GIT_BRANCH),:$(GIT_BRANCH))
# set env like gobuildtag?
CONTAINER_CMD := ${CONTAINER_RUNTIME} run --rm -i -e TESTFLAGS="$(TESTFLAGS)" #$(CONTAINER_ENVS)
# If $TESTFLAGS is set, it is passed as extra arguments to 'go test'.
# You can increase test output verbosity with the option '-test.vv'.
# You can select certain tests to run, with `-test.run <regex>` for example:
#
# make test-unit TESTFLAGS='-test.run ^TestManifestDigest$'
#
# For integration test, we use [gocheck](https://labix.org/gocheck).
# You can increase test output verbosity with the option '-check.vv'.
# You can limit test selection with `-check.f <regex>`, for example:
#
# make test-integration TESTFLAGS='-check.f CopySuite.TestCopy.*'
export TESTFLAGS ?= -v -check.v -test.timeout=15m
# This is assumed to be set non-empty when operating inside a CI/automation environment
CI ?=
# This env. var. is interpreted by some tests as a permission to
# modify local configuration files and services.
export SKOPEO_CONTAINER_TESTS ?= $(if $(CI),1,0)
# This is a compromise, we either use a container for this or require
# the local user to have a compatible python3 development environment.
# Define it as a "resolve on use" variable to avoid calling out when possible
SKOPEO_CIDEV_CONTAINER_FQIN ?= $(shell hack/get_fqin.sh)
CONTAINER_CMD ?= ${CONTAINER_RUNTIME} run --rm -i -e TESTFLAGS="$(TESTFLAGS)" -e CI=$(CI) -e SKOPEO_CONTAINER_TESTS=1
# if this session isn't interactive, then we don't want to allocate a
# TTY, which would fail, but if it is interactive, we do want to attach
# so that the user can send e.g. ^C through.
@@ -64,7 +85,8 @@ INTERACTIVE := $(shell [ -t 0 ] && echo 1 || echo 0)
ifeq ($(INTERACTIVE), 1)
CONTAINER_CMD += -t
endif
CONTAINER_RUN := $(CONTAINER_CMD) "$(IMAGE)"
CONTAINER_GOSRC = /src/github.com/containers/skopeo
CONTAINER_RUN ?= $(CONTAINER_CMD) --security-opt label=disable -v $(CURDIR):$(CONTAINER_GOSRC) -w $(CONTAINER_GOSRC) $(SKOPEO_CIDEV_CONTAINER_FQIN)
GIT_COMMIT := $(shell git rev-parse HEAD 2> /dev/null || true)
@@ -76,7 +98,8 @@ MANPAGES ?= $(MANPAGES_MD:%.md=%)
BTRFS_BUILD_TAG = $(shell hack/btrfs_tag.sh) $(shell hack/btrfs_installed_tag.sh)
LIBDM_BUILD_TAG = $(shell hack/libdm_tag.sh)
LOCAL_BUILD_TAGS = $(BTRFS_BUILD_TAG) $(LIBDM_BUILD_TAG)
LIBSUBID_BUILD_TAG = $(shell hack/libsubid_tag.sh)
LOCAL_BUILD_TAGS = $(BTRFS_BUILD_TAG) $(LIBDM_BUILD_TAG) $(LIBSUBID_BUILD_TAG)
BUILDTAGS += $(LOCAL_BUILD_TAGS)
ifeq ($(DISABLE_CGO), 1)
@@ -89,6 +112,9 @@ endif
# use source debugging tools like delve.
all: bin/skopeo docs
codespell:
codespell -S Makefile,build,buildah,buildah.spec,imgtype,copy,AUTHORS,bin,vendor,.git,go.sum,CHANGELOG.md,changelog.txt,seccomp.json,.cirrus.yml,"*.xz,*.gz,*.tar,*.tgz,*ico,*.png,*.1,*.5,*.orig,*.rej" -L fpr,uint,iff,od,ERRO -w
help:
@echo "Usage: make <target>"
@echo
@@ -96,7 +122,6 @@ help:
@echo
@echo " * 'install' - Install binaries and documents to system locations"
@echo " * 'binary' - Build skopeo with a container"
@echo " * 'static' - Build statically linked binary"
@echo " * 'bin/skopeo' - Build skopeo locally"
@echo " * 'test-unit' - Execute unit tests"
@echo " * 'test-integration' - Execute integration tests"
@@ -105,28 +130,9 @@ help:
@echo " * 'shell' - Run the built image and attach to a shell"
@echo " * 'clean' - Clean artifacts"
# Build a container image (skopeobuild) that has everything we need to build.
# Then do the build and the output (skopeo) should appear in current dir
# Do the build and the output (skopeo) should appear in current dir
binary: cmd/skopeo
${CONTAINER_RUNTIME} build ${BUILD_ARGS} -f Dockerfile.build -t skopeobuildimage .
${CONTAINER_RUNTIME} run --rm --security-opt label=disable -v $$(pwd):/src/github.com/containers/skopeo \
skopeobuildimage make bin/skopeo $(if $(DEBUG),DEBUG=$(DEBUG)) BUILDTAGS='$(BUILDTAGS)'
# Update nix/nixpkgs.json its latest stable commit
.PHONY: nixpkgs
nixpkgs:
@nix run \
-f channel:nixos-20.09 nix-prefetch-git \
-c nix-prefetch-git \
--no-deepClone \
https://github.com/nixos/nixpkgs refs/heads/nixos-20.09 > nix/nixpkgs.json
# Build statically linked binary
.PHONY: static
static:
@nix build -f nix/
mkdir -p ./bin
cp -rfp ./result/bin/* ./bin/
$(CONTAINER_RUN) make bin/skopeo $(if $(DEBUG),DEBUG=$(DEBUG)) BUILDTAGS='$(BUILDTAGS)'
# Build w/o using containers
.PHONY: bin/skopeo
@@ -136,83 +142,91 @@ bin/skopeo.%:
GOOS=$(word 2,$(subst ., ,$@)) GOARCH=$(word 3,$(subst ., ,$@)) $(GO) build $(MOD_VENDOR) ${SKOPEO_LDFLAGS} -tags "containers_image_openpgp $(BUILDTAGS)" -o $@ ./cmd/skopeo
local-cross: bin/skopeo.darwin.amd64 bin/skopeo.linux.arm bin/skopeo.linux.arm64 bin/skopeo.windows.386.exe bin/skopeo.windows.amd64.exe
build-container:
${CONTAINER_RUNTIME} build ${BUILD_ARGS} -t "$(IMAGE)" .
$(MANPAGES): %: %.md
ifneq ($(DISABLE_DOCS), 1)
sed -e 's/\((skopeo.*\.md)\)//' -e 's/\[\(skopeo.*\)\]/\1/' $< | $(GOMD2MAN) -in /dev/stdin -out $@
endif
docs: $(MANPAGES)
docs-in-container:
${CONTAINER_RUNTIME} build ${BUILD_ARGS} -f Dockerfile.build -t skopeobuildimage .
${CONTAINER_RUNTIME} run --rm --security-opt label=disable -v $$(pwd):/src/github.com/containers/skopeo \
skopeobuildimage make docs $(if $(DEBUG),DEBUG=$(DEBUG)) BUILDTAGS='$(BUILDTAGS)'
${CONTAINER_RUN} $(MAKE) docs $(if $(DEBUG),DEBUG=$(DEBUG))
clean:
rm -rf bin docs/*.1
install: install-binary install-docs install-completions
install -d -m 755 ${SIGSTOREDIR}
install -d -m 755 ${CONTAINERSCONFDIR}
install -m 644 default-policy.json ${CONTAINERSCONFDIR}/policy.json
install -d -m 755 ${REGISTRIESDDIR}
install -m 644 default.yaml ${REGISTRIESDDIR}/default.yaml
install -d -m 755 ${DESTDIR}${SIGSTOREDIR}
install -d -m 755 ${DESTDIR}${CONTAINERSCONFDIR}
install -m 644 default-policy.json ${DESTDIR}${CONTAINERSCONFDIR}/policy.json
install -d -m 755 ${DESTDIR}${REGISTRIESDDIR}
install -m 644 default.yaml ${DESTDIR}${REGISTRIESDDIR}/default.yaml
install-binary: bin/skopeo
install -d -m 755 ${BINDIR}
install -m 755 bin/skopeo ${BINDIR}/skopeo
install -d -m 755 ${DESTDIR}${BINDIR}
install -m 755 bin/skopeo ${DESTDIR}${BINDIR}/skopeo
install-docs: docs
install -d -m 755 ${MANDIR}/man1
install -m 644 docs/*.1 ${MANDIR}/man1
ifneq ($(DISABLE_DOCS), 1)
install -d -m 755 ${DESTDIR}${MANDIR}/man1
install -m 644 docs/*.1 ${DESTDIR}${MANDIR}/man1
endif
install-completions:
install -m 755 -d ${BASHCOMPLETIONSDIR}
install -m 644 completions/bash/skopeo ${BASHCOMPLETIONSDIR}/skopeo
install -m 755 -d ${DESTDIR}${BASHCOMPLETIONSDIR}
install -m 644 completions/bash/skopeo ${DESTDIR}${BASHCOMPLETIONSDIR}/skopeo
shell: build-container
shell:
$(CONTAINER_RUN) bash
check: validate test-unit test-integration test-system
# The tests can run out of entropy and block in containers, so replace /dev/random.
test-integration: build-container
$(CONTAINER_RUN) bash -c 'rm -f /dev/random; ln -sf /dev/urandom /dev/random; SKOPEO_CONTAINER_TESTS=1 BUILDTAGS="$(BUILDTAGS)" $(MAKE) test-integration-local'
test-integration:
$(CONTAINER_RUN) $(MAKE) test-integration-local
# Intended for CI, shortcut 'build-container' since already running inside container.
test-integration-local:
# Intended for CI, assumed to be running in quay.io/libpod/skopeo_cidev container.
test-integration-local: bin/skopeo
hack/make.sh test-integration
# complicated set of options needed to run podman-in-podman
test-system: build-container
# TODO: The $(RM) command will likely fail w/o `podman unshare`
test-system:
DTEMP=$(shell mktemp -d --tmpdir=/var/tmp podman-tmp.XXXXXX); \
$(CONTAINER_CMD) --privileged \
-v $$DTEMP:/var/lib/containers:Z -v /run/systemd/journal/socket:/run/systemd/journal/socket \
"$(IMAGE)" \
bash -c 'BUILDTAGS="$(BUILDTAGS)" $(MAKE) test-system-local'; \
-v $(CURDIR):$(CONTAINER_GOSRC) -w $(CONTAINER_GOSRC) \
-v $$DTEMP:/var/lib/containers:Z -v /run/systemd/journal/socket:/run/systemd/journal/socket \
"$(SKOPEO_CIDEV_CONTAINER_FQIN)" \
$(MAKE) test-system-local; \
rc=$$?; \
$(RM) -rf $$DTEMP; \
-$(RM) -rf $$DTEMP; \
exit $$rc
# Intended for CI, shortcut 'build-container' since already running inside container.
test-system-local:
# Intended for CI, assumed to already be running in quay.io/libpod/skopeo_cidev container.
test-system-local: bin/skopeo
hack/make.sh test-system
test-unit: build-container
test-unit:
# Just call (make test unit-local) here instead of worrying about environment differences
$(CONTAINER_RUN) make test-unit-local BUILDTAGS='$(BUILDTAGS)'
$(CONTAINER_RUN) $(MAKE) test-unit-local
validate: build-container
$(CONTAINER_RUN) hack/make.sh validate-git-marks validate-gofmt validate-lint validate-vet
validate:
$(CONTAINER_RUN) $(MAKE) validate-local
# This target is only intended for development, e.g. executing it from an IDE. Use (make test) for CI or pre-release testing.
test-all-local: validate-local test-unit-local
test-all-local: validate-local validate-docs test-unit-local
.PHONY: validate-local
validate-local:
hack/make.sh validate-git-marks validate-gofmt validate-lint validate-vet
BUILDTAGS="${BUILDTAGS}" hack/make.sh validate-git-marks validate-gofmt validate-lint validate-vet
test-unit-local:
# This invokes bin/skopeo, hence cannot be run as part of validate-local
.PHONY: validate-docs
validate-docs:
hack/man-page-checker
hack/xref-helpmsgs-manpages
test-unit-local: bin/skopeo
$(GPGME_ENV) $(GO) test $(MOD_VENDOR) -tags "$(BUILDTAGS)" $$($(GO) list $(MOD_VENDOR) -tags "$(BUILDTAGS)" -e ./... | grep -v '^github\.com/containers/skopeo/\(integration\|vendor/.*\)$$')
vendor:
@@ -221,4 +235,4 @@ vendor:
$(GO) mod verify
vendor-in-container:
podman run --privileged --rm --env HOME=/root -v `pwd`:/src -w /src docker.io/library/golang:1.13 make vendor
podman run --privileged --rm --env HOME=/root -v $(CURDIR):/src -w /src quay.io/libpod/golang:1.16 $(MAKE) vendor
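For orientation, a few invocations the reworked Makefile is meant to support (a sketch; it assumes podman or docker is available for the containerized targets, and the TESTFLAGS example is taken from the Makefile comments above):

    # local build, no container involved
    make bin/skopeo
    # staged install using the GNU-style DESTDIR/PREFIX variables
    make install-binary DESTDIR=$(pwd)/_install PREFIX=/usr
    # unit tests run inside the CI dev container image; narrow the selection via TESTFLAGS
    make test-unit TESTFLAGS='-test.run ^TestManifestDigest$'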

OWNERS

@@ -0,0 +1,17 @@
approvers:
- mtrmac
- lsm5
- TomSweeneyRedHat
- rhatdan
- vrothberg
reviewers:
- ashley-cui
- giuseppe
- containers/image-maintainers
- lsm5
- mtrmac
- QiWang19
- rhatdan
- runcom
- TomSweeneyRedHat
- vrothberg

@@ -1,3 +1,3 @@
## Security and Disclosure Information Policy for the skopeo Project
The skopeo Project follows the [Security and Disclosure Information Policy](https://github.com/containers/common/blob/master/SECURITY.md) for the Containers Projects.
The skopeo Project follows the [Security and Disclosure Information Policy](https://github.com/containers/common/blob/main/SECURITY.md) for the Containers Projects.

@@ -1,3 +1,4 @@
//go:build !containers_image_openpgp
// +build !containers_image_openpgp
package main

@@ -7,10 +7,12 @@ import (
"io/ioutil"
"strings"
commonFlag "github.com/containers/common/pkg/flag"
"github.com/containers/common/pkg/retry"
"github.com/containers/image/v5/copy"
"github.com/containers/image/v5/docker/reference"
"github.com/containers/image/v5/manifest"
"github.com/containers/image/v5/pkg/cli"
"github.com/containers/image/v5/transports"
"github.com/containers/image/v5/transports/alltransports"
encconfig "github.com/containers/ocicrypt/config"
@@ -19,31 +21,37 @@ import (
)
type copyOptions struct {
global *globalOptions
srcImage *imageOptions
destImage *imageDestOptions
retryOpts *retry.RetryOptions
additionalTags []string // For docker-archive: destinations, in addition to the name:tag specified as destination, also add these
removeSignatures bool // Do not copy signatures from the source image
signByFingerprint string // Sign the image using a GPG key with the specified fingerprint
digestFile string // Write digest to this file
format optionalString // Force conversion of the image to a specified format
quiet bool // Suppress output information when copying images
all bool // Copy all of the images if the source is a list
encryptLayer []int // The list of layers to encrypt
encryptionKeys []string // Keys needed to encrypt the image
decryptionKeys []string // Keys needed to decrypt the image
global *globalOptions
deprecatedTLSVerify *deprecatedTLSVerifyOption
srcImage *imageOptions
destImage *imageDestOptions
retryOpts *retry.RetryOptions
additionalTags []string // For docker-archive: destinations, in addition to the name:tag specified as destination, also add these
removeSignatures bool // Do not copy signatures from the source image
signByFingerprint string // Sign the image using a GPG key with the specified fingerprint
signPassphraseFile string // Path pointing to a passphrase file when signing
digestFile string // Write digest to this file
format commonFlag.OptionalString // Force conversion of the image to a specified format
quiet bool // Suppress output information when copying images
all bool // Copy all of the images if the source is a list
multiArch commonFlag.OptionalString // How to handle multi architecture images
preserveDigests bool // Preserve digests during copy
encryptLayer []int // The list of layers to encrypt
encryptionKeys []string // Keys needed to encrypt the image
decryptionKeys []string // Keys needed to decrypt the image
}
func copyCmd(global *globalOptions) *cobra.Command {
sharedFlags, sharedOpts := sharedImageFlags()
srcFlags, srcOpts := imageFlags(global, sharedOpts, "src-", "screds")
destFlags, destOpts := imageDestFlags(global, sharedOpts, "dest-", "dcreds")
deprecatedTLSVerifyFlags, deprecatedTLSVerifyOpt := deprecatedTLSVerifyFlags()
srcFlags, srcOpts := imageFlags(global, sharedOpts, deprecatedTLSVerifyOpt, "src-", "screds")
destFlags, destOpts := imageDestFlags(global, sharedOpts, deprecatedTLSVerifyOpt, "dest-", "dcreds")
retryFlags, retryOpts := retryFlags()
opts := copyOptions{global: global,
srcImage: srcOpts,
destImage: destOpts,
retryOpts: retryOpts,
deprecatedTLSVerify: deprecatedTLSVerifyOpt,
srcImage: srcOpts,
destImage: destOpts,
retryOpts: retryOpts,
}
cmd := &cobra.Command{
Use: "copy [command options] SOURCE-IMAGE DESTINATION-IMAGE",
@@ -61,26 +69,52 @@ See skopeo(1) section "IMAGE NAMES" for the expected format
adjustUsage(cmd)
flags := cmd.Flags()
flags.AddFlagSet(&sharedFlags)
flags.AddFlagSet(&deprecatedTLSVerifyFlags)
flags.AddFlagSet(&srcFlags)
flags.AddFlagSet(&destFlags)
flags.AddFlagSet(&retryFlags)
flags.StringSliceVar(&opts.additionalTags, "additional-tag", []string{}, "additional tags (supports docker-archive)")
flags.BoolVarP(&opts.quiet, "quiet", "q", false, "Suppress output information when copying images")
flags.BoolVarP(&opts.all, "all", "a", false, "Copy all images if SOURCE-IMAGE is a list")
flags.Var(commonFlag.NewOptionalStringValue(&opts.multiArch), "multi-arch", `How to handle multi-architecture images (system, all, or index-only)`)
flags.BoolVar(&opts.preserveDigests, "preserve-digests", false, "Preserve digests of images and lists")
flags.BoolVar(&opts.removeSignatures, "remove-signatures", false, "Do not copy signatures from SOURCE-IMAGE")
flags.StringVar(&opts.signByFingerprint, "sign-by", "", "Sign the image using a GPG key with the specified `FINGERPRINT`")
flags.StringVar(&opts.signPassphraseFile, "sign-passphrase-file", "", "File that contains a passphrase for the --sign-by key")
flags.StringVar(&opts.digestFile, "digestfile", "", "Write the digest of the pushed image to the specified file")
flags.VarP(newOptionalStringValue(&opts.format), "format", "f", `MANIFEST TYPE (oci, v2s1, or v2s2) to use when saving image to directory using the 'dir:' transport (default is manifest type of source)`)
flags.VarP(commonFlag.NewOptionalStringValue(&opts.format), "format", "f", `MANIFEST TYPE (oci, v2s1, or v2s2) to use in the destination (default is manifest type of source, with fallbacks)`)
flags.StringSliceVar(&opts.encryptionKeys, "encryption-key", []string{}, "*Experimental* key with the encryption protocol to use needed to encrypt the image (e.g. jwe:/path/to/key.pem)")
flags.IntSliceVar(&opts.encryptLayer, "encrypt-layer", []int{}, "*Experimental* the 0-indexed layer indices, with support for negative indexing (e.g. 0 is the first layer, -1 is the last layer)")
flags.StringSliceVar(&opts.decryptionKeys, "decryption-key", []string{}, "*Experimental* key needed to decrypt the image")
return cmd
}
// parseMultiArch parses the list processing selection
// It returns the copy.ImageListSelection to use with image.Copy option
func parseMultiArch(multiArch string) (copy.ImageListSelection, error) {
switch multiArch {
case "system":
return copy.CopySystemImage, nil
case "all":
return copy.CopyAllImages, nil
// There is no CopyNoImages value in copy.ImageListSelection, but because we
// don't provide an option to select a set of images to copy, we can use
// CopySpecificImages.
case "index-only":
return copy.CopySpecificImages, nil
// We don't expose CopySpecificImages other than index-only above, because
// we currently don't provide an option to choose the images to copy. That
// could be added in the future.
default:
return copy.CopySystemImage, fmt.Errorf("unknown multi-arch option %q. Choose one of the supported options: 'system', 'all', or 'index-only'", multiArch)
}
}
func (opts *copyOptions) run(args []string, stdout io.Writer) error {
if len(args) != 2 {
return errorShouldDisplayUsage{errors.New("Exactly two arguments expected")}
}
opts.deprecatedTLSVerify.warnIfUsed([]string{"--src-tls-verify", "--dest-tls-verify"})
imageNames := args
if err := reexecIfNecessaryForImages(imageNames...); err != nil {
@@ -112,8 +146,8 @@ func (opts *copyOptions) run(args []string, stdout io.Writer) error {
}
var manifestType string
if opts.format.present {
manifestType, err = parseManifestFormat(opts.format.value)
if opts.format.Present() {
manifestType, err = parseManifestFormat(opts.format.Value())
if err != nil {
return err
}
@@ -137,7 +171,17 @@ func (opts *copyOptions) run(args []string, stdout io.Writer) error {
if opts.quiet {
stdout = nil
}
imageListSelection := copy.CopySystemImage
if opts.multiArch.Present() && opts.all {
return fmt.Errorf("Cannot use --all and --multi-arch flags together")
}
if opts.multiArch.Present() {
imageListSelection, err = parseMultiArch(opts.multiArch.Value())
if err != nil {
return err
}
}
if opts.all {
imageListSelection = copy.CopyAllImages
}
@@ -178,15 +222,22 @@ func (opts *copyOptions) run(args []string, stdout io.Writer) error {
decConfig = cc.DecryptConfig
}
passphrase, err := cli.ReadPassphraseFile(opts.signPassphraseFile)
if err != nil {
return err
}
return retry.RetryIfNecessary(ctx, func() error {
manifestBytes, err := copy.Image(ctx, policyContext, destRef, srcRef, &copy.Options{
RemoveSignatures: opts.removeSignatures,
SignBy: opts.signByFingerprint,
SignPassphrase: passphrase,
ReportWriter: stdout,
SourceCtx: sourceCtx,
DestinationCtx: destinationCtx,
ForceManifestMIMEType: manifestType,
ImageListSelection: imageListSelection,
PreserveDigests: opts.preserveDigests,
OciDecryptConfig: decConfig,
OciEncryptLayers: encLayers,
OciEncryptConfig: encConfig,

View File

@@ -20,7 +20,7 @@ type deleteOptions struct {
func deleteCmd(global *globalOptions) *cobra.Command {
sharedFlags, sharedOpts := sharedImageFlags()
imageFlags, imageOpts := imageFlags(global, sharedOpts, "", "")
imageFlags, imageOpts := imageFlags(global, sharedOpts, nil, "", "")
retryFlags, retryOpts := retryFlags()
opts := deleteOptions{
global: global,

@@ -1,222 +0,0 @@
package main
import (
"testing"
"github.com/spf13/cobra"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
func TestOptionalBoolSet(t *testing.T) {
for _, c := range []struct {
input string
accepted bool
value bool
}{
// Valid inputs documented for strconv.ParseBool == flag.BoolVar
{"1", true, true},
{"t", true, true},
{"T", true, true},
{"TRUE", true, true},
{"true", true, true},
{"True", true, true},
{"0", true, false},
{"f", true, false},
{"F", true, false},
{"FALSE", true, false},
{"false", true, false},
{"False", true, false},
// A few invalid inputs
{"", false, false},
{"yes", false, false},
{"no", false, false},
{"2", false, false},
} {
var ob optionalBool
v := internalNewOptionalBoolValue(&ob)
require.False(t, ob.present)
err := v.Set(c.input)
if c.accepted {
assert.NoError(t, err, c.input)
assert.Equal(t, c.value, ob.value)
} else {
assert.Error(t, err, c.input)
assert.False(t, ob.present) // Just to be extra paranoid.
}
}
// Nothing actually explicitly says that .Set() is never called when the flag is not present on the command line;
// so, check that it is not being called, at least in the straightforward case (it's not possible to test that it
// is not called in any possible situation).
var globalOB, commandOB optionalBool
actionRun := false
app := &cobra.Command{
Use: "app",
}
optionalBoolFlag(app.PersistentFlags(), &globalOB, "global-OB", "")
cmd := &cobra.Command{
Use: "cmd",
RunE: func(cmd *cobra.Command, args []string) error {
assert.False(t, globalOB.present)
assert.False(t, commandOB.present)
actionRun = true
return nil
},
}
optionalBoolFlag(cmd.Flags(), &commandOB, "command-OB", "")
app.AddCommand(cmd)
app.SetArgs([]string{"cmd"})
err := app.Execute()
require.NoError(t, err)
assert.True(t, actionRun)
}
func TestOptionalBoolString(t *testing.T) {
for _, c := range []struct {
input optionalBool
expected string
}{
{optionalBool{present: true, value: true}, "true"},
{optionalBool{present: true, value: false}, "false"},
{optionalBool{present: false, value: true}, ""},
{optionalBool{present: false, value: false}, ""},
} {
var ob optionalBool
v := internalNewOptionalBoolValue(&ob)
ob = c.input
res := v.String()
assert.Equal(t, c.expected, res)
}
}
func TestOptionalBoolIsBoolFlag(t *testing.T) {
// IsBoolFlag means that the argument value must either be part of the same argument, with =;
// if there is no =, the value is set to true.
// This differs form other flags, where the argument is required and may be either separated with = or supplied in the next argument.
for _, c := range []struct {
input []string
expectedOB optionalBool
expectedArgs []string
}{
{[]string{"1", "2"}, optionalBool{present: false}, []string{"1", "2"}}, // Flag not present
{[]string{"--OB=true", "1", "2"}, optionalBool{present: true, value: true}, []string{"1", "2"}}, // --OB=true
{[]string{"--OB=false", "1", "2"}, optionalBool{present: true, value: false}, []string{"1", "2"}}, // --OB=false
{[]string{"--OB", "true", "1", "2"}, optionalBool{present: true, value: true}, []string{"true", "1", "2"}}, // --OB true
{[]string{"--OB", "false", "1", "2"}, optionalBool{present: true, value: true}, []string{"false", "1", "2"}}, // --OB false
} {
var ob optionalBool
actionRun := false
app := &cobra.Command{Use: "app"}
cmd := &cobra.Command{
Use: "cmd",
RunE: func(cmd *cobra.Command, args []string) error {
assert.Equal(t, c.expectedOB, ob)
assert.Equal(t, c.expectedArgs, args)
actionRun = true
return nil
},
}
optionalBoolFlag(cmd.Flags(), &ob, "OB", "")
app.AddCommand(cmd)
app.SetArgs(append([]string{"cmd"}, c.input...))
err := app.Execute()
require.NoError(t, err)
assert.True(t, actionRun)
}
}
func TestOptionalStringSet(t *testing.T) {
// Really just a smoke test, but differentiating between not present and empty.
for _, c := range []string{"", "hello"} {
var os optionalString
v := newOptionalStringValue(&os)
require.False(t, os.present)
err := v.Set(c)
assert.NoError(t, err, c)
assert.Equal(t, c, os.value)
}
// Nothing actually explicitly says that .Set() is never called when the flag is not present on the command line;
// so, check that it is not being called, at least in the straightforward case (it's not possible to test that it
// is not called in any possible situation).
var globalOS, commandOS optionalString
actionRun := false
app := &cobra.Command{
Use: "app",
}
app.PersistentFlags().Var(newOptionalStringValue(&globalOS), "global-OS", "")
cmd := &cobra.Command{
Use: "cmd",
RunE: func(cmd *cobra.Command, args []string) error {
assert.False(t, globalOS.present)
assert.False(t, commandOS.present)
actionRun = true
return nil
},
}
cmd.Flags().Var(newOptionalStringValue(&commandOS), "command-OS", "")
app.AddCommand(cmd)
app.SetArgs([]string{"cmd"})
err := app.Execute()
require.NoError(t, err)
assert.True(t, actionRun)
}
func TestOptionalStringString(t *testing.T) {
for _, c := range []struct {
input optionalString
expected string
}{
{optionalString{present: true, value: "hello"}, "hello"},
{optionalString{present: true, value: ""}, ""},
{optionalString{present: false, value: "hello"}, ""},
{optionalString{present: false, value: ""}, ""},
} {
var os optionalString
v := newOptionalStringValue(&os)
os = c.input
res := v.String()
assert.Equal(t, c.expected, res)
}
}
func TestOptionalStringIsBoolFlag(t *testing.T) {
// NOTE: optionalStringValue does not implement IsBoolFlag!
// IsBoolFlag means that the argument value must either be part of the same argument, with =;
// if there is no =, the value is set to true.
// This differs form other flags, where the argument is required and may be either separated with = or supplied in the next argument.
for _, c := range []struct {
input []string
expectedOS optionalString
expectedArgs []string
}{
{[]string{"1", "2"}, optionalString{present: false}, []string{"1", "2"}}, // Flag not present
{[]string{"--OS=hello", "1", "2"}, optionalString{present: true, value: "hello"}, []string{"1", "2"}}, // --OS=true
{[]string{"--OS=", "1", "2"}, optionalString{present: true, value: ""}, []string{"1", "2"}}, // --OS=false
{[]string{"--OS", "hello", "1", "2"}, optionalString{present: true, value: "hello"}, []string{"1", "2"}}, // --OS true
{[]string{"--OS", "", "1", "2"}, optionalString{present: true, value: ""}, []string{"1", "2"}}, // --OS false
} {
var os optionalString
actionRun := false
app := &cobra.Command{
Use: "app",
}
cmd := &cobra.Command{
Use: "cmd",
RunE: func(cmd *cobra.Command, args []string) error {
assert.Equal(t, c.expectedOS, os)
assert.Equal(t, c.expectedArgs, args)
actionRun = true
return nil
},
}
cmd.Flags().Var(newOptionalStringValue(&os), "OS", "")
app.AddCommand(cmd)
app.SetArgs(append([]string{"cmd"}, c.input...))
err := app.Execute()
require.NoError(t, err)
assert.True(t, actionRun)
}
}

@@ -24,17 +24,18 @@ import (
)
type inspectOptions struct {
global *globalOptions
image *imageOptions
retryOpts *retry.RetryOptions
format string
raw bool // Output the raw manifest instead of parsing information about the image
config bool // Output the raw config blob instead of parsing information about the image
global *globalOptions
image *imageOptions
retryOpts *retry.RetryOptions
format string
raw bool // Output the raw manifest instead of parsing information about the image
config bool // Output the raw config blob instead of parsing information about the image
doNotListTags bool // Do not list all tags available in the same repository
}
func inspectCmd(global *globalOptions) *cobra.Command {
sharedFlags, sharedOpts := sharedImageFlags()
imageFlags, imageOpts := imageFlags(global, sharedOpts, "", "")
imageFlags, imageOpts := imageFlags(global, sharedOpts, nil, "", "")
retryFlags, retryOpts := retryFlags()
opts := inspectOptions{
global: global,
@@ -60,6 +61,7 @@ See skopeo(1) section "IMAGE NAMES" for the expected format
flags.BoolVar(&opts.raw, "raw", false, "output raw manifest or configuration")
flags.BoolVar(&opts.config, "config", false, "output configuration")
flags.StringVarP(&opts.format, "format", "f", "", "Format the output to a Go template")
flags.BoolVarP(&opts.doNotListTags, "no-tags", "n", false, "Do not list the available tags from the repository in the output")
flags.AddFlagSet(&sharedFlags)
flags.AddFlagSet(&imageFlags)
flags.AddFlagSet(&retryFlags)
@@ -192,7 +194,7 @@ func (opts *inspectOptions) run(args []string, stdout io.Writer) (retErr error)
if dockerRef := img.Reference().DockerReference(); dockerRef != nil {
outputData.Name = dockerRef.Name()
}
if img.Reference().Transport() == docker.Transport {
if !opts.doNotListTags && img.Reference().Transport() == docker.Transport {
sys, err := opts.image.newSystemContext()
if err != nil {
return err
@@ -222,13 +224,6 @@ func (opts *inspectOptions) run(args []string, stdout io.Writer) (retErr error)
return printTmpl(row, data)
}
func inspectNormalize(row string) string {
r := strings.NewReplacer(
".ImageID", ".Image",
)
return r.Replace(row)
}
func printTmpl(row string, data []interface{}) error {
t, err := template.New("skopeo inspect").Parse(row)
if err != nil {
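The --no-tags flag added above lets callers skip the (potentially slow) tag listing when inspecting a registry image (a sketch; the image reference is a placeholder):

    # full output, including the RepoTags list
    skopeo inspect docker://quay.io/someorg/example:latest
    # same, but without querying the repository for its tags
    skopeo inspect --no-tags docker://quay.io/someorg/example:latest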

@@ -25,7 +25,7 @@ type layersOptions struct {
func layersCmd(global *globalOptions) *cobra.Command {
sharedFlags, sharedOpts := sharedImageFlags()
imageFlags, imageOpts := imageFlags(global, sharedOpts, "", "")
imageFlags, imageOpts := imageFlags(global, sharedOpts, nil, "", "")
retryFlags, retryOpts := retryFlags()
opts := layersOptions{
global: global,

@@ -30,7 +30,7 @@ type tagsOptions struct {
func tagsCmd(global *globalOptions) *cobra.Command {
sharedFlags, sharedOpts := sharedImageFlags()
imageFlags, imageOpts := dockerImageFlags(global, sharedOpts, "", "")
imageFlags, imageOpts := dockerImageFlags(global, sharedOpts, nil, "", "")
retryFlags, retryOpts := retryFlags()
opts := tagsOptions{

@@ -5,6 +5,7 @@ import (
"github.com/containers/image/v5/transports/alltransports"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
// Tests the kinds of inputs allowed and expected to the command
@@ -17,10 +18,10 @@ func TestDockerRepositoryReferenceParser(t *testing.T) {
} {
ref, err := parseDockerRepositoryReference(test[0])
require.NoError(t, err)
expected, err := alltransports.ParseImageName(test[0])
if assert.NoError(t, err, "Could not parse, got error on %v", test[0]) {
assert.Equal(t, expected.DockerReference().Name(), ref.DockerReference().Name(), "Mismatched parse result for input %v", test[0])
}
require.NoError(t, err)
assert.Equal(t, expected.DockerReference().Name(), ref.DockerReference().Name(), "Mismatched parse result for input %v", test[0])
}
for _, test := range [][]string{

@@ -5,6 +5,7 @@ import (
"os"
"github.com/containers/common/pkg/auth"
commonFlag "github.com/containers/common/pkg/flag"
"github.com/containers/image/v5/types"
"github.com/spf13/cobra"
)
@@ -12,8 +13,7 @@ import (
type loginOptions struct {
global *globalOptions
loginOpts auth.LoginOptions
getLogin optionalBool
tlsVerify optionalBool
tlsVerify commonFlag.OptionalBool
}
func loginCmd(global *globalOptions) *cobra.Command {
@@ -21,7 +21,7 @@ func loginCmd(global *globalOptions) *cobra.Command {
global: global,
}
cmd := &cobra.Command{
Use: "login",
Use: "login [command options] REGISTRY",
Short: "Login to a container registry",
Long: "Login to a container registry on a specified server.",
RunE: commandAction(opts.run),
@@ -29,7 +29,7 @@ func loginCmd(global *globalOptions) *cobra.Command {
}
adjustUsage(cmd)
flags := cmd.Flags()
optionalBoolFlag(flags, &opts.tlsVerify, "tls-verify", "require HTTPS and verify certificates when accessing the registry")
commonFlag.OptionalBoolFlag(flags, &opts.tlsVerify, "tls-verify", "require HTTPS and verify certificates when accessing the registry")
flags.AddFlagSet(auth.GetLoginFlags(&opts.loginOpts))
return cmd
}
@@ -39,9 +39,10 @@ func (opts *loginOptions) run(args []string, stdout io.Writer) error {
defer cancel()
opts.loginOpts.Stdout = stdout
opts.loginOpts.Stdin = os.Stdin
opts.loginOpts.AcceptRepositories = true
sys := opts.global.newSystemContext()
if opts.tlsVerify.present {
sys.DockerInsecureSkipTLSVerify = types.NewOptionalBool(!opts.tlsVerify.value)
if opts.tlsVerify.Present() {
sys.DockerInsecureSkipTLSVerify = types.NewOptionalBool(!opts.tlsVerify.Value())
}
return auth.Login(ctx, sys, &opts.loginOpts, args)
}

@@ -4,12 +4,15 @@ import (
"io"
"github.com/containers/common/pkg/auth"
commonFlag "github.com/containers/common/pkg/flag"
"github.com/containers/image/v5/types"
"github.com/spf13/cobra"
)
type logoutOptions struct {
global *globalOptions
logoutOpts auth.LogoutOptions
tlsVerify commonFlag.OptionalBool
}
func logoutCmd(global *globalOptions) *cobra.Command {
@@ -17,19 +20,25 @@ func logoutCmd(global *globalOptions) *cobra.Command {
global: global,
}
cmd := &cobra.Command{
Use: "logout",
Use: "logout [command options] REGISTRY",
Short: "Logout of a container registry",
Long: "Logout of a container registry on a specified server.",
RunE: commandAction(opts.run),
Example: `skopeo logout quay.io`,
}
adjustUsage(cmd)
cmd.Flags().AddFlagSet(auth.GetLogoutFlags(&opts.logoutOpts))
flags := cmd.Flags()
commonFlag.OptionalBoolFlag(flags, &opts.tlsVerify, "tls-verify", "require HTTPS and verify certificates when accessing the registry")
flags.AddFlagSet(auth.GetLogoutFlags(&opts.logoutOpts))
return cmd
}
func (opts *logoutOptions) run(args []string, stdout io.Writer) error {
opts.logoutOpts.Stdout = stdout
opts.logoutOpts.AcceptRepositories = true
sys := opts.global.newSystemContext()
if opts.tlsVerify.Present() {
sys.DockerInsecureSkipTLSVerify = types.NewOptionalBool(!opts.tlsVerify.Value())
}
return auth.Logout(sys, &opts.logoutOpts, args)
}
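A rough illustration of what the login/logout changes above enable (a sketch; the registry and namespace names are placeholders):

    # log in to a whole registry, as before
    skopeo login registry.example.com
    # with AcceptRepositories set, a namespace or repository scope is accepted too
    skopeo login quay.io/someorg
    # logout now honours --tls-verify as well
    skopeo logout --tls-verify=false registry.example.com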

@@ -3,8 +3,10 @@ package main
import (
"context"
"fmt"
"strings"
"time"
commonFlag "github.com/containers/common/pkg/flag"
"github.com/containers/image/v5/signature"
"github.com/containers/image/v5/types"
"github.com/containers/skopeo/version"
@@ -20,17 +22,32 @@ var gitCommit = ""
var defaultUserAgent = "skopeo/" + version.Version
type globalOptions struct {
debug bool // Enable debug output
tlsVerify optionalBool // Require HTTPS and verify certificates (for docker: and docker-daemon:)
policyPath string // Path to a signature verification policy file
insecurePolicy bool // Use an "allow everything" signature verification policy
registriesDirPath string // Path to a "registries.d" registry configuration directory
overrideArch string // Architecture to use for choosing images, instead of the runtime one
overrideOS string // OS to use for choosing images, instead of the runtime one
overrideVariant string // Architecture variant to use for choosing images, instead of the runtime one
commandTimeout time.Duration // Timeout for the command execution
registriesConfPath string // Path to the "registries.conf" file
tmpDir string // Path to use for big temporary files
debug bool // Enable debug output
tlsVerify commonFlag.OptionalBool // Require HTTPS and verify certificates (for docker: and docker-daemon:)
policyPath string // Path to a signature verification policy file
insecurePolicy bool // Use an "allow everything" signature verification policy
registriesDirPath string // Path to a "registries.d" registry configuration directory
overrideArch string // Architecture to use for choosing images, instead of the runtime one
overrideOS string // OS to use for choosing images, instead of the runtime one
overrideVariant string // Architecture variant to use for choosing images, instead of the runtime one
commandTimeout time.Duration // Timeout for the command execution
registriesConfPath string // Path to the "registries.conf" file
tmpDir string // Path to use for big temporary files
}
// requireSubcommand returns an error if no sub command is provided
// This was copied from podman: `github.com/containers/podman/cmd/podman/validate/args.go`
// Some small style changes to match skopeo were applied, but try to apply any
// bugfixes there first.
func requireSubcommand(cmd *cobra.Command, args []string) error {
if len(args) > 0 {
suggestions := cmd.SuggestionsFor(args[0])
if len(suggestions) == 0 {
return fmt.Errorf("Unrecognized command `%[1]s %[2]s`\nTry '%[1]s --help' for more information", cmd.CommandPath(), args[0])
}
return fmt.Errorf("Unrecognized command `%[1]s %[2]s`\n\nDid you mean this?\n\t%[3]s\n\nTry '%[1]s --help' for more information", cmd.CommandPath(), args[0], strings.Join(suggestions, "\n\t"))
}
return fmt.Errorf("Missing command '%[1]s COMMAND'\nTry '%[1]s --help' for more information", cmd.CommandPath())
}
// createApp returns a cobra.Command, and the underlying globalOptions object, to be run or tested.
@@ -40,11 +57,23 @@ func createApp() (*cobra.Command, *globalOptions) {
rootCommand := &cobra.Command{
Use: "skopeo",
Long: "Various operations with container images and container image registries",
RunE: requireSubcommand,
PersistentPreRunE: func(cmd *cobra.Command, args []string) error {
return opts.before(cmd)
},
SilenceUsage: true,
SilenceErrors: true,
// Currently, skopeo uses manually written completions. Cobra allows
// for auto-generating completions for various shells. Podman is
// already making use of that. If Skopeo decides to follow, please
// remove the line below (and hide the `completion` command).
CompletionOptions: cobra.CompletionOptions{DisableDefaultCmd: true},
// This is documented to parse "local" (non-PersistentFlags) flags of parent commands before
// running subcommands and handling their options. We don't really run into such cases,
// because all of our flags on rootCommand are in PersistentFlags, except for the deprecated --tls-verify;
// in that case we need TraverseChildren so that we can distinguish between
// (skopeo --tls-verify inspect) (causes a warning) and (skopeo inspect --tls-verify) (no warning).
TraverseChildren: true,
}
if gitCommit != "" {
rootCommand.Version = fmt.Sprintf("%s commit: %s", version.Version, gitCommit)
@@ -55,8 +84,6 @@ func createApp() (*cobra.Command, *globalOptions) {
var dummyVersion bool
rootCommand.Flags().BoolVarP(&dummyVersion, "version", "v", false, "Version for Skopeo")
rootCommand.PersistentFlags().BoolVar(&opts.debug, "debug", false, "enable debug output")
flag := optionalBoolFlag(rootCommand.PersistentFlags(), &opts.tlsVerify, "tls-verify", "Require HTTPS and verify certificates when accessing the registry")
flag.Hidden = true
rootCommand.PersistentFlags().StringVar(&opts.policyPath, "policy", "", "Path to a trust policy file")
rootCommand.PersistentFlags().BoolVar(&opts.insecurePolicy, "insecure-policy", false, "run the tool without any policy check")
rootCommand.PersistentFlags().StringVar(&opts.registriesDirPath, "registries.d", "", "use registry configuration files in `DIR` (e.g. for container signature storage)")
@@ -69,6 +96,8 @@ func createApp() (*cobra.Command, *globalOptions) {
logrus.Fatal("unable to mark registries-conf flag as hidden")
}
rootCommand.PersistentFlags().StringVar(&opts.tmpDir, "tmpdir", "", "directory used to store temporary files")
flag := commonFlag.OptionalBoolFlag(rootCommand.Flags(), &opts.tlsVerify, "tls-verify", "Require HTTPS and verify certificates when accessing the registry")
flag.Hidden = true
rootCommand.AddCommand(
copyCmd(&opts),
deleteCmd(&opts),
@@ -77,6 +106,7 @@ func createApp() (*cobra.Command, *globalOptions) {
loginCmd(&opts),
logoutCmd(&opts),
manifestDigestCmd(),
proxyCmd(&opts),
syncCmd(&opts),
standaloneSignCmd(),
standaloneVerifyCmd(),
@@ -91,7 +121,7 @@ func (opts *globalOptions) before(cmd *cobra.Command) error {
if opts.debug {
logrus.SetLevel(logrus.DebugLevel)
}
if opts.tlsVerify.present {
if opts.tlsVerify.Present() {
logrus.Warn("'--tls-verify' is deprecated, please set this on the specific subcommand")
}
return nil
@@ -148,8 +178,8 @@ func (opts *globalOptions) newSystemContext() *types.SystemContext {
DockerRegistryUserAgent: defaultUserAgent,
}
// DEPRECATED: We support this for backward compatibility, but override it if a per-image flag is provided.
if opts.tlsVerify.present {
ctx.DockerInsecureSkipTLSVerify = types.NewOptionalBool(!opts.tlsVerify.value)
if opts.tlsVerify.Present() {
ctx.DockerInsecureSkipTLSVerify = types.NewOptionalBool(!opts.tlsVerify.Value())
}
return ctx
}
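As the comments and warning above describe, the global --tls-verify flag is kept only for backward compatibility; setting it on the subcommand is the preferred spelling (a sketch; the image reference is a placeholder):

    # deprecated: parsed thanks to TraverseChildren, but prints a warning
    skopeo --tls-verify=false inspect docker://registry.example.com/org/image:tag
    # preferred: set it on the subcommand (or --src-tls-verify / --dest-tls-verify for copy)
    skopeo inspect --tls-verify=false docker://registry.example.com/org/image:tag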

@@ -16,7 +16,7 @@ type manifestDigestOptions struct {
func manifestDigestCmd() *cobra.Command {
var opts manifestDigestOptions
cmd := &cobra.Command{
Use: "manifest-digest MANIFEST",
Use: "manifest-digest MANIFEST-FILE",
Short: "Compute a manifest digest of a file",
RunE: commandAction(opts.run),
Example: "skopeo manifest-digest manifest.json",

cmd/skopeo/proxy.go

@@ -0,0 +1,734 @@
//go:build !windows
// +build !windows
package main
/*
This code is currently only intended to be used by ostree
to fetch content via containers. The API is subject
to change. A goal however is to stabilize the API
eventually as a full out-of-process interface to the
core containers/image library functionality.
To use this command, in a parent process create a
`socketpair()` of type `SOCK_SEQPACKET`. Fork
off this command, and pass one half of the socket
pair to the child. Providing it on stdin (fd 0)
is the expected default.
The protocol is JSON for the control layer,
and a read side of a `pipe()` passed for large data.
Base JSON protocol:
request: { method: "MethodName": args: [arguments] }
reply: { success: bool, value: JSVAL, pipeid: number, error: string }
For any non-metadata i.e. payload data from `GetManifest`
and `GetBlob` the server will pass back the read half of a `pipe(2)` via FD passing,
along with a `pipeid` integer.
The expected flow looks like this:
- Initialize
And validate the returned protocol version versus
what your client supports.
- OpenImage docker://quay.io/someorg/example:latest
(returns an imageid)
- GetManifest imageid (and associated <pipeid>)
(Streaming read data from pipe)
- FinishPipe <pipeid>
- GetBlob imageid sha256:...
(Streaming read data from pipe)
- FinishPipe <pipeid>
- GetBlob imageid sha256:...
(Streaming read data from pipe)
- FinishPipe <pipeid>
- CloseImage imageid
You may interleave invocations of these methods, e.g. one
can also invoke `OpenImage` multiple times, as well as
starting multiple GetBlob requests before calling `FinishPipe`
on them. The server will stream data into the pipefd
until `FinishPipe` is invoked.
Note that the pipe will not be closed by the server until
the client has invoked `FinishPipe`. This is to ensure
that the client checks for errors. For example, `GetBlob`
performs digest (e.g. sha256) verification and this must
be checked after all data has been written.
*/
import (
"bytes"
"context"
"encoding/json"
"errors"
"fmt"
"io"
"net"
"os"
"sync"
"syscall"
"github.com/containers/image/v5/image"
"github.com/containers/image/v5/manifest"
"github.com/containers/image/v5/pkg/blobinfocache"
"github.com/containers/image/v5/transports/alltransports"
"github.com/containers/image/v5/types"
"github.com/opencontainers/go-digest"
imgspecv1 "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/spf13/cobra"
)
// protocolVersion is semantic version of the protocol used by this proxy.
// The first version of the protocol has major version 0.2 to signify a
// departure from the original code which used HTTP.
//
// 0.2.1: Initial version
// 0.2.2: Added support for fetching image configuration as OCI
// 0.2.3: Added GetFullConfig
const protocolVersion = "0.2.3"
// maxMsgSize is the current limit on a packet size.
// Note that all non-metadata (i.e. payload data) is sent over a pipe.
const maxMsgSize = 32 * 1024
// maxJSONFloat is ECMA Number.MAX_SAFE_INTEGER
// https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Number/MAX_SAFE_INTEGER
// We hard error if the input JSON numbers we expect to be
// integers are above this.
const maxJSONFloat = float64(1<<53 - 1)
// request is the JSON serialization of a function call
type request struct {
// Method is the name of the function
Method string `json:"method"`
// Args is the arguments (parsed inside the function)
Args []interface{} `json:"args"`
}
// reply is serialized to JSON as the return value from a function call.
type reply struct {
// Success is true if and only if the call succeeded.
Success bool `json:"success"`
// Value is an arbitrary value (or values, as array/map) returned from the call.
Value interface{} `json:"value"`
// PipeID is an index into open pipes, and should be passed to FinishPipe
PipeID uint32 `json:"pipeid"`
// Error should be non-empty if Success == false
Error string `json:"error"`
}
// replyBuf is our internal deserialization of reply plus optional fd
type replyBuf struct {
// value will be converted to a reply Value
value interface{}
// fd is the read half of a pipe, passed back to the client
fd *os.File
// pipeid will be provided to the client as PipeID, an index into our open pipes
pipeid uint32
}
// activePipe is an open pipe to the client.
// It contains an error value
type activePipe struct {
// w is the write half of the pipe
w *os.File
// wg is completed when our worker goroutine is done
wg sync.WaitGroup
// err may be set in our worker goroutine
err error
}
// openImage is an opened image reference
type openImage struct {
// id is an opaque integer handle
id uint32
src types.ImageSource
cachedimg types.Image
}
// proxyHandler is the state associated with our socket.
type proxyHandler struct {
// lock protects everything else in this structure.
lock sync.Mutex
// opts is CLI options
opts *proxyOptions
sysctx *types.SystemContext
cache types.BlobInfoCache
// imageSerial is a counter for open images
imageSerial uint32
// images holds our opened images
images map[uint32]*openImage
// activePipes maps from "pipeid" to a pipe + goroutine pair
activePipes map[uint32]*activePipe
}
// Initialize performs one-time initialization, and returns the protocol version
func (h *proxyHandler) Initialize(args []interface{}) (replyBuf, error) {
h.lock.Lock()
defer h.lock.Unlock()
var ret replyBuf
if len(args) != 0 {
return ret, fmt.Errorf("invalid request, expecting zero arguments")
}
if h.sysctx != nil {
return ret, fmt.Errorf("already initialized")
}
sysctx, err := h.opts.imageOpts.newSystemContext()
if err != nil {
return ret, err
}
h.sysctx = sysctx
h.cache = blobinfocache.DefaultCache(sysctx)
r := replyBuf{
value: protocolVersion,
}
return r, nil
}
// OpenImage accepts a string image reference i.e. TRANSPORT:REF - like `skopeo copy`.
// The return value is an opaque integer handle.
func (h *proxyHandler) OpenImage(args []interface{}) (replyBuf, error) {
h.lock.Lock()
defer h.lock.Unlock()
var ret replyBuf
if h.sysctx == nil {
return ret, fmt.Errorf("client error: must invoke Initialize")
}
if len(args) != 1 {
return ret, fmt.Errorf("invalid request, expecting one argument")
}
imageref, ok := args[0].(string)
if !ok {
return ret, fmt.Errorf("expecting string imageref, not %T", args[0])
}
imgRef, err := alltransports.ParseImageName(imageref)
if err != nil {
return ret, err
}
imgsrc, err := imgRef.NewImageSource(context.Background(), h.sysctx)
if err != nil {
return ret, err
}
h.imageSerial++
openimg := &openImage{
id: h.imageSerial,
src: imgsrc,
}
h.images[openimg.id] = openimg
ret.value = openimg.id
return ret, nil
}
func (h *proxyHandler) CloseImage(args []interface{}) (replyBuf, error) {
h.lock.Lock()
defer h.lock.Unlock()
var ret replyBuf
if h.sysctx == nil {
return ret, fmt.Errorf("client error: must invoke Initialize")
}
if len(args) != 1 {
return ret, fmt.Errorf("invalid request, expecting one argument")
}
imgref, err := h.parseImageFromID(args[0])
if err != nil {
return ret, err
}
imgref.src.Close()
delete(h.images, imgref.id)
return ret, nil
}
func parseImageID(v interface{}) (uint32, error) {
imgidf, ok := v.(float64)
if !ok {
return 0, fmt.Errorf("expecting integer imageid, not %T", v)
}
return uint32(imgidf), nil
}
// parseUint64 validates that a number fits inside a JavaScript safe integer
func parseUint64(v interface{}) (uint64, error) {
f, ok := v.(float64)
if !ok {
return 0, fmt.Errorf("expecting numeric, not %T", v)
}
if f > maxJSONFloat {
return 0, fmt.Errorf("out of range integer for numeric %f", f)
}
return uint64(f), nil
}
func (h *proxyHandler) parseImageFromID(v interface{}) (*openImage, error) {
imgid, err := parseImageID(v)
if err != nil {
return nil, err
}
imgref, ok := h.images[imgid]
if !ok {
return nil, fmt.Errorf("no image %v", imgid)
}
return imgref, nil
}
func (h *proxyHandler) allocPipe() (*os.File, *activePipe, error) {
piper, pipew, err := os.Pipe()
if err != nil {
return nil, nil, err
}
f := activePipe{
w: pipew,
}
h.activePipes[uint32(pipew.Fd())] = &f
f.wg.Add(1)
return piper, &f, nil
}
// returnBytes generates a return pipe() from a byte array
// In the future it might be nicer to return this via memfd_create()
func (h *proxyHandler) returnBytes(retval interface{}, buf []byte) (replyBuf, error) {
var ret replyBuf
piper, f, err := h.allocPipe()
if err != nil {
return ret, err
}
go func() {
// Signal completion when we return
defer f.wg.Done()
_, err = io.Copy(f.w, bytes.NewReader(buf))
if err != nil {
f.err = err
}
}()
ret.value = retval
ret.fd = piper
ret.pipeid = uint32(f.w.Fd())
return ret, nil
}
// cacheTargetManifest is invoked when GetManifest or GetConfig is invoked
// the first time for a given image. If the requested image is a manifest
// list, this function resolves it to the image matching the calling process'
// operating system and architecture.
//
// TODO: Add GetRawManifest or so that exposes manifest lists
func (h *proxyHandler) cacheTargetManifest(img *openImage) error {
ctx := context.Background()
if img.cachedimg != nil {
return nil
}
unparsedToplevel := image.UnparsedInstance(img.src, nil)
mfest, manifestType, err := unparsedToplevel.Manifest(ctx)
if err != nil {
return err
}
var target *image.UnparsedImage
if manifest.MIMETypeIsMultiImage(manifestType) {
manifestList, err := manifest.ListFromBlob(mfest, manifestType)
if err != nil {
return err
}
instanceDigest, err := manifestList.ChooseInstance(h.sysctx)
if err != nil {
return err
}
target = image.UnparsedInstance(img.src, &instanceDigest)
} else {
target = unparsedToplevel
}
cachedimg, err := image.FromUnparsedImage(ctx, h.sysctx, target)
if err != nil {
return err
}
img.cachedimg = cachedimg
return nil
}
// GetManifest returns a copy of the manifest, converted to OCI format, along with the original digest.
// Manifest lists are resolved to the current operating system and architecture.
func (h *proxyHandler) GetManifest(args []interface{}) (replyBuf, error) {
h.lock.Lock()
defer h.lock.Unlock()
var ret replyBuf
if h.sysctx == nil {
return ret, fmt.Errorf("client error: must invoke Initialize")
}
if len(args) != 1 {
return ret, fmt.Errorf("invalid request, expecting one argument")
}
imgref, err := h.parseImageFromID(args[0])
if err != nil {
return ret, err
}
err = h.cacheTargetManifest(imgref)
if err != nil {
return ret, err
}
img := imgref.cachedimg
ctx := context.Background()
rawManifest, manifestType, err := img.Manifest(ctx)
if err != nil {
return ret, err
}
// We only support OCI and docker2schema2. We know docker2schema2 can be easily+cheaply
// converted into OCI, so consumers only need to see OCI.
switch manifestType {
case imgspecv1.MediaTypeImageManifest, manifest.DockerV2Schema2MediaType:
break
// Explicitly reject e.g. docker schema 1 type with a "legacy" note
case manifest.DockerV2Schema1MediaType, manifest.DockerV2Schema1SignedMediaType:
return ret, fmt.Errorf("unsupported legacy manifest MIME type: %s", manifestType)
default:
return ret, fmt.Errorf("unsupported manifest MIME type: %s", manifestType)
}
// We always return the original digest, as that's what clients need to do pull-by-digest
// and in general identify the image.
digest, err := manifest.Digest(rawManifest)
if err != nil {
return ret, err
}
var serialized []byte
// But, we convert to OCI format on the wire if it's not already. The idea here is that by reusing the containers/image
// stack, clients to this proxy can pretend the world is OCI only, and not need to care about e.g.
// docker schema and MIME types.
if manifestType != imgspecv1.MediaTypeImageManifest {
manifestUpdates := types.ManifestUpdateOptions{ManifestMIMEType: imgspecv1.MediaTypeImageManifest}
ociImage, err := img.UpdatedImage(ctx, manifestUpdates)
if err != nil {
return ret, err
}
ociSerialized, _, err := ociImage.Manifest(ctx)
if err != nil {
return ret, err
}
serialized = ociSerialized
} else {
serialized = rawManifest
}
return h.returnBytes(digest, serialized)
}
// GetFullConfig returns a copy of the image configuration, converted to OCI format.
// https://github.com/opencontainers/image-spec/blob/main/config.md
func (h *proxyHandler) GetFullConfig(args []interface{}) (replyBuf, error) {
h.lock.Lock()
defer h.lock.Unlock()
var ret replyBuf
if h.sysctx == nil {
return ret, fmt.Errorf("client error: must invoke Initialize")
}
if len(args) != 1 {
return ret, fmt.Errorf("invalid request, expecting: [imgid]")
}
imgref, err := h.parseImageFromID(args[0])
if err != nil {
return ret, err
}
err = h.cacheTargetManifest(imgref)
if err != nil {
return ret, err
}
img := imgref.cachedimg
ctx := context.TODO()
config, err := img.OCIConfig(ctx)
if err != nil {
return ret, err
}
serialized, err := json.Marshal(&config)
if err != nil {
return ret, err
}
return h.returnBytes(nil, serialized)
}
// GetConfig returns a copy of the container runtime configuration, converted to OCI format.
// Note that due to a historical mistake, this returns not the full image configuration,
// but just the container runtime configuration. You should use GetFullConfig instead.
func (h *proxyHandler) GetConfig(args []interface{}) (replyBuf, error) {
h.lock.Lock()
defer h.lock.Unlock()
var ret replyBuf
if h.sysctx == nil {
return ret, fmt.Errorf("client error: must invoke Initialize")
}
if len(args) != 1 {
return ret, fmt.Errorf("invalid request, expecting: [imgid]")
}
imgref, err := h.parseImageFromID(args[0])
if err != nil {
return ret, err
}
err = h.cacheTargetManifest(imgref)
if err != nil {
return ret, err
}
img := imgref.cachedimg
ctx := context.TODO()
config, err := img.OCIConfig(ctx)
if err != nil {
return ret, err
}
serialized, err := json.Marshal(&config.Config)
if err != nil {
return ret, err
}
return h.returnBytes(nil, serialized)
}
// GetBlob fetches a blob, performing digest verification.
func (h *proxyHandler) GetBlob(args []interface{}) (replyBuf, error) {
h.lock.Lock()
defer h.lock.Unlock()
var ret replyBuf
if h.sysctx == nil {
return ret, fmt.Errorf("client error: must invoke Initialize")
}
if len(args) != 3 {
return ret, fmt.Errorf("found %d args, expecting (imgid, digest, size)", len(args))
}
imgref, err := h.parseImageFromID(args[0])
if err != nil {
return ret, err
}
digestStr, ok := args[1].(string)
if !ok {
return ret, fmt.Errorf("expecting string blobid")
}
size, err := parseUint64(args[2])
if err != nil {
return ret, err
}
ctx := context.TODO()
d, err := digest.Parse(digestStr)
if err != nil {
return ret, err
}
blobr, blobSize, err := imgref.src.GetBlob(ctx, types.BlobInfo{Digest: d, Size: int64(size)}, h.cache)
if err != nil {
return ret, err
}
piper, f, err := h.allocPipe()
if err != nil {
return ret, err
}
go func() {
// Signal completion when we return
defer f.wg.Done()
verifier := d.Verifier()
tr := io.TeeReader(blobr, verifier)
n, err := io.Copy(f.w, tr)
if err != nil {
f.err = err
return
}
if n != int64(size) {
f.err = fmt.Errorf("expected %d bytes in blob, got %d", size, n)
}
if !verifier.Verified() {
f.err = fmt.Errorf("corrupted blob, expecting %s", d.String())
}
}()
ret.value = blobSize
ret.fd = piper
ret.pipeid = uint32(f.w.Fd())
return ret, nil
}
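On the client side, the reply's value is the blob size, the ancillary data carries the pipe file descriptor, and FinishPipe must be called to learn whether the server-side size and digest checks passed. A hedged sketch under that reading, where callProxy is a hypothetical helper performing one request/reply round trip:
package proxyclient

import (
	"fmt"
	"io"
	"os"
)

// proxyReply is a simplified view of a decoded reply: the numeric value, the
// pipe id, and the data pipe received over the socket (if any).
type proxyReply struct {
	Value  interface{}
	PipeID uint32
	FD     *os.File
}

// fetchBlob drains the data pipe and then calls FinishPipe, which is where any
// server-side size or digest mismatch surfaces.
func fetchBlob(callProxy func(method string, args ...interface{}) (proxyReply, error),
	imgid uint32, dgst string, size int64, dest io.Writer) error {
	r, err := callProxy("GetBlob", imgid, dgst, size)
	if err != nil {
		return err
	}
	defer r.FD.Close()
	if _, err := io.Copy(dest, r.FD); err != nil {
		return err
	}
	if _, err := callProxy("FinishPipe", r.PipeID); err != nil {
		return fmt.Errorf("blob %s failed verification: %w", dgst, err)
	}
	return nil
}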
// FinishPipe waits for the worker goroutine to finish, and closes the write side of the pipe.
func (h *proxyHandler) FinishPipe(args []interface{}) (replyBuf, error) {
h.lock.Lock()
defer h.lock.Unlock()
var ret replyBuf
pipeidv, err := parseUint64(args[0])
if err != nil {
return ret, err
}
pipeid := uint32(pipeidv)
f, ok := h.activePipes[pipeid]
if !ok {
return ret, fmt.Errorf("finishpipe: no active pipe %d", pipeid)
}
// Wait for the goroutine to complete
f.wg.Wait()
// And only now do we close the write half; this forces the client to call this API
f.w.Close()
// Propagate any errors from the goroutine worker
err = f.err
delete(h.activePipes, pipeid)
return ret, err
}
// send writes a reply buffer to the socket
func (buf replyBuf) send(conn *net.UnixConn, err error) error {
replyToSerialize := reply{
Success: err == nil,
Value: buf.value,
PipeID: buf.pipeid,
}
if err != nil {
replyToSerialize.Error = err.Error()
}
serializedReply, err := json.Marshal(&replyToSerialize)
if err != nil {
return err
}
// We took ownership of the FD - close it when we're done.
defer func() {
if buf.fd != nil {
buf.fd.Close()
}
}()
// Copy the FD number to the socket ancillary buffer
fds := make([]int, 0)
if buf.fd != nil {
fds = append(fds, int(buf.fd.Fd()))
}
oob := syscall.UnixRights(fds...)
n, oobn, err := conn.WriteMsgUnix(serializedReply, oob, nil)
if err != nil {
return err
}
// Validate that we sent the full packet
if n != len(serializedReply) || oobn != len(oob) {
return io.ErrShortWrite
}
return nil
}
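The receiving side of this framing can be sketched as below; it assumes the reply struct defined earlier in this file uses lowercase JSON tags (success, value, pipeid, error), and that at most one descriptor is passed per reply, matching what send() does above:
package proxyclient

import (
	"encoding/json"
	"fmt"
	"net"
	"os"
	"syscall"
)

type clientReply struct {
	Success bool        `json:"success"`
	Value   interface{} `json:"value"`
	PipeID  uint32      `json:"pipeid"`
	Error   string      `json:"error"`
}

// readReply reads one reply datagram plus its optional SCM_RIGHTS descriptor.
func readReply(conn *net.UnixConn) (clientReply, *os.File, error) {
	var r clientReply
	buf := make([]byte, 32*1024)
	oob := make([]byte, syscall.CmsgSpace(4)) // room for a single fd
	n, oobn, _, _, err := conn.ReadMsgUnix(buf, oob)
	if err != nil {
		return r, nil, err
	}
	if err := json.Unmarshal(buf[:n], &r); err != nil {
		return r, nil, err
	}
	var fd *os.File
	if oobn > 0 {
		scms, err := syscall.ParseSocketControlMessage(oob[:oobn])
		if err != nil {
			return r, nil, err
		}
		fds, err := syscall.ParseUnixRights(&scms[0])
		if err != nil {
			return r, nil, err
		}
		fd = os.NewFile(uintptr(fds[0]), "proxy-data")
	}
	if !r.Success {
		return r, fd, fmt.Errorf("proxy error: %s", r.Error)
	}
	return r, fd, nil
}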
type proxyOptions struct {
global *globalOptions
imageOpts *imageOptions
sockFd int
}
func proxyCmd(global *globalOptions) *cobra.Command {
sharedFlags, sharedOpts := sharedImageFlags()
imageFlags, imageOpts := imageFlags(global, sharedOpts, nil, "", "")
opts := proxyOptions{global: global, imageOpts: imageOpts}
cmd := &cobra.Command{
Use: "experimental-image-proxy [command options] IMAGE",
Short: "Interactive proxy for fetching container images (EXPERIMENTAL)",
Long: `Run skopeo as a proxy, serving requests over a local socket pair to fetch manifests and blobs.`,
RunE: commandAction(opts.run),
Args: cobra.ExactArgs(0),
// Not stabilized yet
Hidden: true,
Example: `skopeo experimental-image-proxy --sockfd 3`,
}
adjustUsage(cmd)
flags := cmd.Flags()
flags.AddFlagSet(&sharedFlags)
flags.AddFlagSet(&imageFlags)
flags.IntVar(&opts.sockFd, "sockfd", 0, "Serve on opened socket pair (default 0/stdin)")
return cmd
}
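A hedged sketch of how a caller might start the proxy: the SOCK_SEQPACKET socket type is an assumption about what existing clients use; the essential points are the socketpair and that --sockfd names the descriptor number inside the child (ExtraFiles[0] becomes fd 3):
package proxyclient

import (
	"net"
	"os"
	"os/exec"
	"syscall"
)

// startProxy launches skopeo with one end of a socketpair as fd 3 and returns
// the parent's end as a net.UnixConn for the JSON/fd-passing protocol.
func startProxy() (*net.UnixConn, *exec.Cmd, error) {
	fds, err := syscall.Socketpair(syscall.AF_UNIX, syscall.SOCK_SEQPACKET, 0)
	if err != nil {
		return nil, nil, err
	}
	theirs := os.NewFile(uintptr(fds[0]), "proxy-child")
	ours := os.NewFile(uintptr(fds[1]), "proxy-parent")
	defer theirs.Close() // the child keeps its own copy after Start()

	cmd := exec.Command("skopeo", "experimental-image-proxy", "--sockfd", "3")
	cmd.Stderr = os.Stderr
	cmd.ExtraFiles = []*os.File{theirs} // ExtraFiles[0] becomes fd 3 in the child
	if err := cmd.Start(); err != nil {
		ours.Close()
		return nil, nil, err
	}

	fconn, err := net.FileConn(ours)
	ours.Close() // FileConn duplicates the descriptor
	if err != nil {
		return nil, nil, err
	}
	return fconn.(*net.UnixConn), cmd, nil
}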
// processRequest dispatches a remote request.
// replyBuf is the result of the invocation.
// terminate should be true if processing of requests should halt.
func (h *proxyHandler) processRequest(req request) (rb replyBuf, terminate bool, err error) {
// Dispatch on the method
switch req.Method {
case "Initialize":
rb, err = h.Initialize(req.Args)
case "OpenImage":
rb, err = h.OpenImage(req.Args)
case "CloseImage":
rb, err = h.CloseImage(req.Args)
case "GetManifest":
rb, err = h.GetManifest(req.Args)
case "GetConfig":
rb, err = h.GetConfig(req.Args)
case "GetFullConfig":
rb, err = h.GetFullConfig(req.Args)
case "GetBlob":
rb, err = h.GetBlob(req.Args)
case "FinishPipe":
rb, err = h.FinishPipe(req.Args)
case "Shutdown":
terminate = true
default:
err = fmt.Errorf("unknown method: %s", req.Method)
}
return
}
// Implementation of skopeo experimental-image-proxy
func (opts *proxyOptions) run(args []string, stdout io.Writer) error {
handler := &proxyHandler{
opts: opts,
images: make(map[uint32]*openImage),
activePipes: make(map[uint32]*activePipe),
}
// Convert the socket FD passed by client into a net.FileConn
fd := os.NewFile(uintptr(opts.sockFd), "sock")
fconn, err := net.FileConn(fd)
if err != nil {
return err
}
conn := fconn.(*net.UnixConn)
// Allocate a buffer to copy the packet into
buf := make([]byte, maxMsgSize)
for {
n, _, err := conn.ReadFrom(buf)
if err != nil {
if errors.Is(err, io.EOF) {
return nil
}
return fmt.Errorf("reading socket: %v", err)
}
// Parse the request JSON
readbuf := buf[0:n]
var req request
if err := json.Unmarshal(readbuf, &req); err != nil {
rb := replyBuf{}
rb.send(conn, fmt.Errorf("invalid request: %v", err))
// Don't dispatch the zero-value request; wait for the next message instead.
continue
}
rb, terminate, err := handler.processRequest(req)
if terminate {
return nil
}
rb.send(conn, err)
}
}
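The client write path matching this read loop is one JSON datagram per call; the method/args keys are assumed to match the JSON tags of the request struct defined earlier in this file. A typical session is Initialize, OpenImage, GetManifest or GetBlob plus FinishPipe, CloseImage, then Shutdown. A minimal sketch:
package proxyclient

import (
	"encoding/json"
	"net"
)

// writeRequest sends one call as a single datagram; no out-of-band data is
// ever sent in this direction, and message boundaries keep the server's
// ReadFrom loop parsing one request at a time.
func writeRequest(conn *net.UnixConn, method string, args ...interface{}) error {
	req := map[string]interface{}{
		"method": method,
		"args":   args,
	}
	body, err := json.Marshal(req)
	if err != nil {
		return err
	}
	_, _, err = conn.WriteMsgUnix(body, nil, nil)
	return err
}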


@@ -0,0 +1,30 @@
//go:build windows
// +build windows
package main
import (
"fmt"
"io"
"github.com/spf13/cobra"
)
type proxyOptions struct {
global *globalOptions
}
func proxyCmd(global *globalOptions) *cobra.Command {
opts := proxyOptions{global: global}
cmd := &cobra.Command{
RunE: commandAction(opts.run),
Args: cobra.ExactArgs(0),
// Not stabilized yet
Hidden: true,
}
return cmd
}
func (opts *proxyOptions) run(args []string, stdout io.Writer) error {
return fmt.Errorf("This command is not supported on Windows")
}


@@ -7,24 +7,27 @@ import (
"io"
"io/ioutil"
"github.com/containers/image/v5/pkg/cli"
"github.com/containers/image/v5/signature"
"github.com/spf13/cobra"
)
type standaloneSignOptions struct {
output string // Output file path
output string // Output file path
passphraseFile string // Path pointing to a passphrase file when signing
}
func standaloneSignCmd() *cobra.Command {
opts := standaloneSignOptions{}
cmd := &cobra.Command{
Use: "standalone-sign [command options] MANIFEST DOCKER-REFERENCE KEY-FINGERPRINT",
Use: "standalone-sign [command options] MANIFEST DOCKER-REFERENCE KEY-FINGERPRINT --output|-o SIGNATURE",
Short: "Create a signature using local files",
RunE: commandAction(opts.run),
}
adjustUsage(cmd)
flags := cmd.Flags()
flags.StringVarP(&opts.output, "output", "o", "", "output the signature to `SIGNATURE`")
flags.StringVarP(&opts.passphraseFile, "passphrase-file", "", "", "file that contains a passphrase for the --sign-by key")
return cmd
}
@@ -46,7 +49,13 @@ func (opts *standaloneSignOptions) run(args []string, stdout io.Writer) error {
return fmt.Errorf("Error initializing GPG: %v", err)
}
defer mech.Close()
signature, err := signature.SignDockerManifest(manifest, dockerReference, mech, fingerprint)
passphrase, err := cli.ReadPassphraseFile(opts.passphraseFile)
if err != nil {
return err
}
signature, err := signature.SignDockerManifestWithOptions(manifest, dockerReference, mech, fingerprint, &signature.SignOptions{Passphrase: passphrase})
if err != nil {
return fmt.Errorf("Error creating signature: %v", err)
}
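Outside the CLI, the same passphrase plumbing can be sketched with the containers/image API used in the hunk above; the paths and fingerprint below are placeholders:
package example

import (
	"fmt"
	"io/ioutil"

	"github.com/containers/image/v5/pkg/cli"
	"github.com/containers/image/v5/signature"
)

// signManifest signs a locally stored manifest with an optional passphrase file.
func signManifest(manifestPath, dockerReference, fingerprint, passphraseFile string) ([]byte, error) {
	manifest, err := ioutil.ReadFile(manifestPath)
	if err != nil {
		return nil, err
	}
	mech, err := signature.NewGPGSigningMechanism()
	if err != nil {
		return nil, err
	}
	defer mech.Close()
	passphrase, err := cli.ReadPassphraseFile(passphraseFile) // an empty path yields an empty passphrase
	if err != nil {
		return nil, err
	}
	sig, err := signature.SignDockerManifestWithOptions(manifest, dockerReference, mech, fingerprint,
		&signature.SignOptions{Passphrase: passphrase})
	if err != nil {
		return nil, fmt.Errorf("creating signature: %w", err)
	}
	return sig, nil
}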


@@ -11,11 +11,13 @@ import (
"regexp"
"strings"
commonFlag "github.com/containers/common/pkg/flag"
"github.com/containers/common/pkg/retry"
"github.com/containers/image/v5/copy"
"github.com/containers/image/v5/directory"
"github.com/containers/image/v5/docker"
"github.com/containers/image/v5/docker/reference"
"github.com/containers/image/v5/pkg/cli"
"github.com/containers/image/v5/transports"
"github.com/containers/image/v5/types"
"github.com/opencontainers/go-digest"
@@ -27,17 +29,21 @@ import (
// syncOptions contains information retrieved from the skopeo sync command line.
type syncOptions struct {
global *globalOptions // Global (not command dependent) skopeo options
srcImage *imageOptions // Source image options
destImage *imageDestOptions // Destination image options
retryOpts *retry.RetryOptions
removeSignatures bool // Do not copy signatures from the source image
signByFingerprint string // Sign the image using a GPG key with the specified fingerprint
format optionalString // Force conversion of the image to a specified format
source string // Source repository name
destination string // Destination registry name
scoped bool // When true, namespace copied images at destination using the source repository name
all bool // Copy all of the images if an image in the source is a list
global *globalOptions // Global (not command dependent) skopeo options
deprecatedTLSVerify *deprecatedTLSVerifyOption
srcImage *imageOptions // Source image options
destImage *imageDestOptions // Destination image options
retryOpts *retry.RetryOptions
removeSignatures bool // Do not copy signatures from the source image
signByFingerprint string // Sign the image using a GPG key with the specified fingerprint
signPassphraseFile string // Path pointing to a passphrase file when signing
format commonFlag.OptionalString // Force conversion of the image to a specified format
source string // Source repository name
destination string // Destination registry name
scoped bool // When true, namespace copied images at destination using the source repository name
all bool // Copy all of the images if an image in the source is a list
preserveDigests bool // Preserve digests during sync
keepGoing bool // Do not abort the sync if copying one of the images fails; report the failures at the end
}
// repoDescriptor contains information of a single repository used as a sync source.
@@ -68,27 +74,29 @@ type sourceConfig map[string]registrySyncConfig
func syncCmd(global *globalOptions) *cobra.Command {
sharedFlags, sharedOpts := sharedImageFlags()
srcFlags, srcOpts := dockerImageFlags(global, sharedOpts, "src-", "screds")
destFlags, destOpts := dockerImageFlags(global, sharedOpts, "dest-", "dcreds")
deprecatedTLSVerifyFlags, deprecatedTLSVerifyOpt := deprecatedTLSVerifyFlags()
srcFlags, srcOpts := dockerImageFlags(global, sharedOpts, deprecatedTLSVerifyOpt, "src-", "screds")
destFlags, destOpts := dockerImageFlags(global, sharedOpts, deprecatedTLSVerifyOpt, "dest-", "dcreds")
retryFlags, retryOpts := retryFlags()
opts := syncOptions{
global: global,
srcImage: srcOpts,
destImage: &imageDestOptions{imageOptions: destOpts},
retryOpts: retryOpts,
global: global,
deprecatedTLSVerify: deprecatedTLSVerifyOpt,
srcImage: srcOpts,
destImage: &imageDestOptions{imageOptions: destOpts},
retryOpts: retryOpts,
}
cmd := &cobra.Command{
Use: "sync [command options] --src SOURCE-LOCATION --dest DESTINATION-LOCATION SOURCE DESTINATION",
Use: "sync [command options] --src TRANSPORT --dest TRANSPORT SOURCE DESTINATION",
Short: "Synchronize one or more images from one location to another",
Long: fmt.Sprint(`Copy all the images from a SOURCE to a DESTINATION.
Long: `Copy all the images from a SOURCE to a DESTINATION.
Allowed SOURCE transports (specified with --src): docker, dir, yaml.
Allowed DESTINATION transports (specified with --dest): docker, dir.
See skopeo-sync(1) for details.
`),
`,
RunE: commandAction(opts.run),
Example: `skopeo sync --src docker --dest dir --scoped registry.example.com/busybox /media/usb`,
}
@@ -96,12 +104,16 @@ See skopeo-sync(1) for details.
flags := cmd.Flags()
flags.BoolVar(&opts.removeSignatures, "remove-signatures", false, "Do not copy signatures from SOURCE images")
flags.StringVar(&opts.signByFingerprint, "sign-by", "", "Sign the image using a GPG key with the specified `FINGERPRINT`")
flags.VarP(newOptionalStringValue(&opts.format), "format", "f", `MANIFEST TYPE (oci, v2s1, or v2s2) to use when syncing image(s) to a destination (default is manifest type of source)`)
flags.StringVar(&opts.signPassphraseFile, "sign-passphrase-file", "", "File that contains a passphrase for the --sign-by key")
flags.VarP(commonFlag.NewOptionalStringValue(&opts.format), "format", "f", `MANIFEST TYPE (oci, v2s1, or v2s2) to use when syncing image(s) to a destination (default is manifest type of source, with fallbacks)`)
flags.StringVarP(&opts.source, "src", "s", "", "SOURCE transport type")
flags.StringVarP(&opts.destination, "dest", "d", "", "DESTINATION transport type")
flags.BoolVar(&opts.scoped, "scoped", false, "Images at DESTINATION are prefixed using the full source image path as scope")
flags.BoolVarP(&opts.all, "all", "a", false, "Copy all images if SOURCE-IMAGE is a list")
flags.BoolVar(&opts.preserveDigests, "preserve-digests", false, "Preserve digests of images and lists")
flags.BoolVarP(&opts.keepGoing, "keep-going", "", false, "Do not abort the sync if any image copy fails")
flags.AddFlagSet(&sharedFlags)
flags.AddFlagSet(&deprecatedTLSVerifyFlags)
flags.AddFlagSet(&srcFlags)
flags.AddFlagSet(&destFlags)
flags.AddFlagSet(&retryFlags)
@@ -207,7 +219,6 @@ func getImageTags(ctx context.Context, sysCtx *types.SystemContext, repoRef refe
// Some registries may decide to block the "list all tags" endpoint.
// Gracefully allow the sync to continue in this case.
logrus.Warnf("Registry disallows tag list retrieval: %s", err)
break
default:
return tags, errors.Wrapf(err, "Error determining repository tags for image %s", name)
}
@@ -493,6 +504,7 @@ func (opts *syncOptions) run(args []string, stdout io.Writer) error {
if len(args) != 2 {
return errorShouldDisplayUsage{errors.New("Exactly two arguments expected")}
}
opts.deprecatedTLSVerify.warnIfUsed([]string{"--src-tls-verify", "--dest-tls-verify"})
policyContext, err := opts.global.getPolicyContext()
if err != nil {
@@ -539,8 +551,8 @@ func (opts *syncOptions) run(args []string, stdout io.Writer) error {
}
var manifestType string
if opts.format.present {
manifestType, err = parseManifestFormat(opts.format.value)
if opts.format.Present() {
manifestType, err = parseManifestFormat(opts.format.Value())
if err != nil {
return err
}
@@ -564,17 +576,23 @@ func (opts *syncOptions) run(args []string, stdout io.Writer) error {
return err
}
imagesNumber := 0
passphrase, err := cli.ReadPassphraseFile(opts.signPassphraseFile)
if err != nil {
return err
}
options := copy.Options{
RemoveSignatures: opts.removeSignatures,
SignBy: opts.signByFingerprint,
SignPassphrase: passphrase,
ReportWriter: os.Stdout,
DestinationCtx: destinationCtx,
ImageListSelection: imageListSelection,
PreserveDigests: opts.preserveDigests,
OptimizeDestinationImageAlreadyExists: true,
ForceManifestMIMEType: manifestType,
}
errorsPresent := false
imagesNumber := 0
for _, srcRepo := range srcRepoList {
options.SourceCtx = srcRepo.Context
for counter, ref := range srcRepo.ImageRefs {
@@ -610,12 +628,22 @@ func (opts *syncOptions) run(args []string, stdout io.Writer) error {
_, err = copy.Image(ctx, policyContext, destRef, ref, &options)
return err
}, opts.retryOpts); err != nil {
return errors.Wrapf(err, "Error copying ref %q", transports.ImageName(ref))
if !opts.keepGoing {
return errors.Wrapf(err, "Error copying ref %q", transports.ImageName(ref))
}
// log the error, keep a note that there was a failure and move on to the next
// image ref
errorsPresent = true
logrus.WithError(err).Errorf("Error copying ref %q", transports.ImageName(ref))
continue
}
imagesNumber++
}
}
logrus.Infof("Synced %d images from %d sources", imagesNumber, len(srcRepoList))
return nil
if !errorsPresent {
return nil
}
return errors.New("Sync failed due to previous reported error(s) for one or more images")
}
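A hedged sketch of how the new sync options map onto containers/image's copy API when called directly; policyContext, srcRef and destRef are assumed to be prepared elsewhere, and the fingerprint is a placeholder:
package example

import (
	"context"
	"os"

	"github.com/containers/image/v5/copy"
	"github.com/containers/image/v5/signature"
	"github.com/containers/image/v5/types"
)

// syncOne copies a single reference with signing and digest preservation enabled.
func syncOne(ctx context.Context, policyContext *signature.PolicyContext,
	destRef, srcRef types.ImageReference, passphrase string) error {
	_, err := copy.Image(ctx, policyContext, destRef, srcRef, &copy.Options{
		SignBy:             "0x1234567890ABCDEF", // placeholder fingerprint
		SignPassphrase:     passphrase,           // as read from --sign-passphrase-file
		PreserveDigests:    true,                 // fail rather than alter a digest
		ImageListSelection: copy.CopySystemImage, // copy.CopyAllImages for --all
		ReportWriter:       os.Stdout,
	})
	return err
}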


@@ -1,11 +1,8 @@
//go:build !linux
// +build !linux
package main
func maybeReexec() error {
return nil
}
func reexecIfNecessaryForImages(inputImageNames ...string) error {
return nil
}


@@ -7,6 +7,7 @@ import (
"os"
"strings"
commonFlag "github.com/containers/common/pkg/flag"
"github.com/containers/common/pkg/retry"
"github.com/containers/image/v5/manifest"
"github.com/containers/image/v5/pkg/compression"
@@ -14,6 +15,7 @@ import (
"github.com/containers/image/v5/types"
imgspecv1 "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
"github.com/spf13/cobra"
"github.com/spf13/pflag"
)
@@ -38,6 +40,35 @@ func commandAction(handler func(args []string, stdout io.Writer) error) func(cmd
}
}
// deprecatedTLSVerifyOption represents a deprecated --tls-verify option,
// which was accepted for all subcommands, for a time.
// Every user should call deprecatedTLSVerifyOption.warnIfUsed() as part of handling the CLI,
// whether or not the value actually ends up being used.
// DO NOT ADD ANY NEW USES OF THIS; just call dockerImageFlags with an appropriate, possibly empty, flagPrefix.
type deprecatedTLSVerifyOption struct {
tlsVerify commonFlag.OptionalBool // FIXME FIXME: Warn if this is used, or even if it is ignored.
}
// warnIfUsed warns if tlsVerify was set by the user, and suggests alternatives (which should
// start with "--").
// Every user should call this as part of handling the CLI, whether or not the value actually
// ends up being used.
func (opts *deprecatedTLSVerifyOption) warnIfUsed(alternatives []string) {
if opts.tlsVerify.Present() {
logrus.Warnf("'--tls-verify' is deprecated, instead use: %s", strings.Join(alternatives, ", "))
}
}
// deprecatedTLSVerifyFlags prepares the CLI flag writing into deprecatedTLSVerifyOption, and the managed deprecatedTLSVerifyOption structure.
// DO NOT ADD ANY NEW USES OF THIS; just call dockerImageFlags with an appropriate, possibly empty, flagPrefix.
func deprecatedTLSVerifyFlags() (pflag.FlagSet, *deprecatedTLSVerifyOption) {
opts := deprecatedTLSVerifyOption{}
fs := pflag.FlagSet{}
flag := commonFlag.OptionalBoolFlag(&fs, &opts.tlsVerify, "tls-verify", "require HTTPS and verify certificates when accessing the container registry")
flag.Hidden = true
return fs, &opts
}
// sharedImageOptions collects CLI flags which are image-related, but do not change across images.
// This really should be a part of globalOptions, but that would break existing users of (skopeo copy --authfile=).
type sharedImageOptions struct {
@@ -56,14 +87,17 @@ func sharedImageFlags() (pflag.FlagSet, *sharedImageOptions) {
// the same across subcommands, but may be different for each image
// (e.g. may differ between the source and destination of a copy)
type dockerImageOptions struct {
global *globalOptions // May be shared across several imageOptions instances.
shared *sharedImageOptions // May be shared across several imageOptions instances.
authFilePath optionalString // Path to a */containers/auth.json (prefixed version to override shared image option).
credsOption optionalString // username[:password] for accessing a registry
registryToken optionalString // token to be used directly as a Bearer token when accessing the registry
dockerCertPath string // A directory using Docker-like *.{crt,cert,key} files for connecting to a registry or a daemon
tlsVerify optionalBool // Require HTTPS and verify certificates (for docker: and docker-daemon:)
noCreds bool // Access the registry anonymously
global *globalOptions // May be shared across several imageOptions instances.
shared *sharedImageOptions // May be shared across several imageOptions instances.
deprecatedTLSVerify *deprecatedTLSVerifyOption // May be shared across several imageOptions instances, or nil.
authFilePath commonFlag.OptionalString // Path to a */containers/auth.json (prefixed version to override shared image option).
credsOption commonFlag.OptionalString // username[:password] for accessing a registry
userName commonFlag.OptionalString // username for accessing a registry
password commonFlag.OptionalString // password for accessing a registry
registryToken commonFlag.OptionalString // token to be used directly as a Bearer token when accessing the registry
dockerCertPath string // A directory using Docker-like *.{crt,cert,key} files for connecting to a registry or a daemon
tlsVerify commonFlag.OptionalBool // Require HTTPS and verify certificates (for docker: and docker-daemon:)
noCreds bool // Access the registry anonymously
}
// imageOptions collects CLI flags which are the same across subcommands, but may be different for each image
@@ -76,36 +110,39 @@ type imageOptions struct {
// dockerImageFlags prepares a collection of docker-transport specific CLI flags
// writing into imageOptions, and the managed imageOptions structure.
func dockerImageFlags(global *globalOptions, shared *sharedImageOptions, flagPrefix, credsOptionAlias string) (pflag.FlagSet, *imageOptions) {
func dockerImageFlags(global *globalOptions, shared *sharedImageOptions, deprecatedTLSVerify *deprecatedTLSVerifyOption, flagPrefix, credsOptionAlias string) (pflag.FlagSet, *imageOptions) {
flags := imageOptions{
dockerImageOptions: dockerImageOptions{
global: global,
shared: shared,
global: global,
shared: shared,
deprecatedTLSVerify: deprecatedTLSVerify,
},
}
fs := pflag.FlagSet{}
if flagPrefix != "" {
// the non-prefixed flag is handled by a shared flag.
fs.Var(newOptionalStringValue(&flags.authFilePath), flagPrefix+"authfile", "path of the authentication file. Default is ${XDG_RUNTIME_DIR}/containers/auth.json")
fs.Var(commonFlag.NewOptionalStringValue(&flags.authFilePath), flagPrefix+"authfile", "path of the authentication file. Default is ${XDG_RUNTIME_DIR}/containers/auth.json")
}
fs.Var(newOptionalStringValue(&flags.credsOption), flagPrefix+"creds", "Use `USERNAME[:PASSWORD]` for accessing the registry")
fs.Var(commonFlag.NewOptionalStringValue(&flags.credsOption), flagPrefix+"creds", "Use `USERNAME[:PASSWORD]` for accessing the registry")
fs.Var(commonFlag.NewOptionalStringValue(&flags.userName), flagPrefix+"username", "Username for accessing the registry")
fs.Var(commonFlag.NewOptionalStringValue(&flags.password), flagPrefix+"password", "Password for accessing the registry")
if credsOptionAlias != "" {
// This is horribly ugly, but we need to support the old option forms of (skopeo copy) for compatibility.
// Don't add any more cases like this.
f := fs.VarPF(newOptionalStringValue(&flags.credsOption), credsOptionAlias, "", "Use `USERNAME[:PASSWORD]` for accessing the registry")
f := fs.VarPF(commonFlag.NewOptionalStringValue(&flags.credsOption), credsOptionAlias, "", "Use `USERNAME[:PASSWORD]` for accessing the registry")
f.Hidden = true
}
fs.Var(newOptionalStringValue(&flags.registryToken), flagPrefix+"registry-token", "Provide a Bearer token for accessing the registry")
fs.Var(commonFlag.NewOptionalStringValue(&flags.registryToken), flagPrefix+"registry-token", "Provide a Bearer token for accessing the registry")
fs.StringVar(&flags.dockerCertPath, flagPrefix+"cert-dir", "", "use certificates at `PATH` (*.crt, *.cert, *.key) to connect to the registry or daemon")
optionalBoolFlag(&fs, &flags.tlsVerify, flagPrefix+"tls-verify", "require HTTPS and verify certificates when talking to the container registry or daemon (defaults to true)")
commonFlag.OptionalBoolFlag(&fs, &flags.tlsVerify, flagPrefix+"tls-verify", "require HTTPS and verify certificates when talking to the container registry or daemon")
fs.BoolVar(&flags.noCreds, flagPrefix+"no-creds", false, "Access the registry anonymously")
return fs, &flags
}
// imageFlags prepares a collection of CLI flags writing into imageOptions, and the managed imageOptions structure.
func imageFlags(global *globalOptions, shared *sharedImageOptions, flagPrefix, credsOptionAlias string) (pflag.FlagSet, *imageOptions) {
dockerFlags, opts := dockerImageFlags(global, shared, flagPrefix, credsOptionAlias)
func imageFlags(global *globalOptions, shared *sharedImageOptions, deprecatedTLSVerify *deprecatedTLSVerifyOption, flagPrefix, credsOptionAlias string) (pflag.FlagSet, *imageOptions) {
dockerFlags, opts := dockerImageFlags(global, shared, deprecatedTLSVerify, flagPrefix, credsOptionAlias)
fs := pflag.FlagSet{}
fs.StringVar(&opts.sharedBlobDir, flagPrefix+"shared-blob-dir", "", "`DIRECTORY` to use to share blobs across OCI repositories")
@@ -114,10 +151,6 @@ func imageFlags(global *globalOptions, shared *sharedImageOptions, flagPrefix, c
return fs, opts
}
type retryOptions struct {
maxRetry int // The number of times to possibly retry
}
func retryFlags() (pflag.FlagSet, *retry.RetryOptions) {
opts := retry.RetryOptions{}
fs := pflag.FlagSet{}
@@ -136,27 +169,49 @@ func (opts *imageOptions) newSystemContext() (*types.SystemContext, error) {
ctx.AuthFilePath = opts.shared.authFilePath
ctx.DockerDaemonHost = opts.dockerDaemonHost
ctx.DockerDaemonCertPath = opts.dockerCertPath
if opts.dockerImageOptions.authFilePath.present {
ctx.AuthFilePath = opts.dockerImageOptions.authFilePath.value
if opts.dockerImageOptions.authFilePath.Present() {
ctx.AuthFilePath = opts.dockerImageOptions.authFilePath.Value()
}
if opts.tlsVerify.present {
ctx.DockerDaemonInsecureSkipTLSVerify = !opts.tlsVerify.value
if opts.deprecatedTLSVerify != nil && opts.deprecatedTLSVerify.tlsVerify.Present() {
// If both this deprecated option and a non-deprecated option is present, we use the latter value.
ctx.DockerInsecureSkipTLSVerify = types.NewOptionalBool(!opts.deprecatedTLSVerify.tlsVerify.Value())
}
if opts.tlsVerify.present {
ctx.DockerInsecureSkipTLSVerify = types.NewOptionalBool(!opts.tlsVerify.value)
if opts.tlsVerify.Present() {
ctx.DockerDaemonInsecureSkipTLSVerify = !opts.tlsVerify.Value()
}
if opts.credsOption.present && opts.noCreds {
if opts.tlsVerify.Present() {
ctx.DockerInsecureSkipTLSVerify = types.NewOptionalBool(!opts.tlsVerify.Value())
}
if opts.credsOption.Present() && opts.noCreds {
return nil, errors.New("creds and no-creds cannot be specified at the same time")
}
if opts.credsOption.present {
if opts.userName.Present() && opts.noCreds {
return nil, errors.New("username and no-creds cannot be specified at the same time")
}
if opts.credsOption.Present() && opts.userName.Present() {
return nil, errors.New("creds and username cannot be specified at the same time")
}
// if any of username or password is present, then both are expected to be present
if opts.userName.Present() != opts.password.Present() {
if opts.userName.Present() {
return nil, errors.New("password must be specified when username is specified")
}
return nil, errors.New("username must be specified when password is specified")
}
if opts.credsOption.Present() {
var err error
ctx.DockerAuthConfig, err = getDockerAuth(opts.credsOption.value)
ctx.DockerAuthConfig, err = getDockerAuth(opts.credsOption.Value())
if err != nil {
return nil, err
}
} else if opts.userName.Present() {
ctx.DockerAuthConfig = &types.DockerAuthConfig{
Username: opts.userName.Value(),
Password: opts.password.Value(),
}
}
if opts.registryToken.present {
ctx.DockerBearerRegistryToken = opts.registryToken.value
if opts.registryToken.Present() {
ctx.DockerBearerRegistryToken = opts.registryToken.Value()
}
if opts.noCreds {
ctx.DockerAuthConfig = &types.DockerAuthConfig{}
@@ -168,22 +223,26 @@ func (opts *imageOptions) newSystemContext() (*types.SystemContext, error) {
// imageDestOptions is a superset of imageOptions specialized for image destinations.
type imageDestOptions struct {
*imageOptions
dirForceCompression bool // Compress layers when saving to the dir: transport
ociAcceptUncompressedLayers bool // Whether to accept uncompressed layers in the oci: transport
compressionFormat string // Format to use for the compression
compressionLevel optionalInt // Level to use for the compression
dirForceCompression bool // Compress layers when saving to the dir: transport
dirForceDecompression bool // Decompress layers when saving to the dir: transport
ociAcceptUncompressedLayers bool // Whether to accept uncompressed layers in the oci: transport
compressionFormat string // Format to use for the compression
compressionLevel commonFlag.OptionalInt // Level to use for the compression
precomputeDigests bool // Precompute digests to dedup layers when saving to the docker: transport
}
// imageDestFlags prepares a collection of CLI flags writing into imageDestOptions, and the managed imageDestOptions structure.
func imageDestFlags(global *globalOptions, shared *sharedImageOptions, flagPrefix, credsOptionAlias string) (pflag.FlagSet, *imageDestOptions) {
genericFlags, genericOptions := imageFlags(global, shared, flagPrefix, credsOptionAlias)
func imageDestFlags(global *globalOptions, shared *sharedImageOptions, deprecatedTLSVerify *deprecatedTLSVerifyOption, flagPrefix, credsOptionAlias string) (pflag.FlagSet, *imageDestOptions) {
genericFlags, genericOptions := imageFlags(global, shared, deprecatedTLSVerify, flagPrefix, credsOptionAlias)
opts := imageDestOptions{imageOptions: genericOptions}
fs := pflag.FlagSet{}
fs.AddFlagSet(&genericFlags)
fs.BoolVar(&opts.dirForceCompression, flagPrefix+"compress", false, "Compress tarball image layers when saving to directory using the 'dir' transport. (default is same compression type as source)")
fs.BoolVar(&opts.dirForceDecompression, flagPrefix+"decompress", false, "Decompress tarball image layers when saving to directory using the 'dir' transport. (default is same compression type as source)")
fs.BoolVar(&opts.ociAcceptUncompressedLayers, flagPrefix+"oci-accept-uncompressed-layers", false, "Allow uncompressed image layers when saving to an OCI image using the 'oci' transport. (default is to compress things that aren't compressed)")
fs.StringVar(&opts.compressionFormat, flagPrefix+"compress-format", "", "`FORMAT` to use for the compression")
fs.Var(newOptionalIntValue(&opts.compressionLevel), flagPrefix+"compress-level", "`LEVEL` to use for the compression")
fs.Var(commonFlag.NewOptionalIntValue(&opts.compressionLevel), flagPrefix+"compress-level", "`LEVEL` to use for the compression")
fs.BoolVar(&opts.precomputeDigests, flagPrefix+"precompute-digests", false, "Precompute digests to prevent uploading layers already on the registry using the 'docker' transport.")
return fs, &opts
}
@@ -196,6 +255,7 @@ func (opts *imageDestOptions) newSystemContext() (*types.SystemContext, error) {
}
ctx.DirForceCompress = opts.dirForceCompression
ctx.DirForceDecompress = opts.dirForceDecompression
ctx.OCIAcceptUncompressedLayers = opts.ociAcceptUncompressedLayers
if opts.compressionFormat != "" {
cf, err := compression.AlgorithmByName(opts.compressionFormat)
@@ -204,9 +264,11 @@ func (opts *imageDestOptions) newSystemContext() (*types.SystemContext, error) {
}
ctx.CompressionFormat = &cf
}
if opts.compressionLevel.present {
ctx.CompressionLevel = &opts.compressionLevel.value
if opts.compressionLevel.Present() {
value := opts.compressionLevel.Value()
ctx.CompressionLevel = &value
}
ctx.DockerRegistryPushPrecomputeDigests = opts.precomputeDigests
return ctx, err
}
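The tri-state OptionalBool semantics are what let the deprecated global --tls-verify act only as a fallback here, since an unset flag is distinguishable from an explicit true or false. A minimal sketch under that assumption, not the repository's code:
package example

import (
	commonFlag "github.com/containers/common/pkg/flag"
	"github.com/containers/image/v5/types"
	"github.com/spf13/pflag"
)

// tlsVerifySketch computes the value destined for SystemContext.DockerInsecureSkipTLSVerify.
func tlsVerifySketch(args []string) (types.OptionalBool, error) {
	var deprecated, perCommand commonFlag.OptionalBool
	fs := pflag.FlagSet{}
	commonFlag.OptionalBoolFlag(&fs, &deprecated, "tls-verify", "deprecated global form")
	commonFlag.OptionalBoolFlag(&fs, &perCommand, "dest-tls-verify", "per-command form")
	if err := fs.Parse(args); err != nil {
		return types.OptionalBoolUndefined, err
	}
	result := types.OptionalBoolUndefined
	if deprecated.Present() { // fallback, applied first
		result = types.NewOptionalBool(!deprecated.Value())
	}
	if perCommand.Present() { // an explicit per-command value wins
		result = types.NewOptionalBool(!perCommand.Value())
	}
	return result, nil
}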


@@ -8,6 +8,7 @@ import (
"github.com/containers/image/v5/types"
imgspecv1 "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/spf13/cobra"
"github.com/spf13/pflag"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
@@ -17,17 +18,26 @@ func fakeGlobalOptions(t *testing.T, flags []string) (*globalOptions, *cobra.Com
app, opts := createApp()
cmd := &cobra.Command{}
app.AddCommand(cmd)
err := cmd.ParseFlags(flags)
err := app.ParseFlags(flags)
require.NoError(t, err)
return opts, cmd
}
// fakeImageOptions creates imageOptions and sets it according to globalFlags/cmdFlags.
func fakeImageOptions(t *testing.T, flagPrefix string, globalFlags []string, cmdFlags []string) *imageOptions {
func fakeImageOptions(t *testing.T, flagPrefix string, useDeprecatedTLSVerify bool,
globalFlags []string, cmdFlags []string) *imageOptions {
globalOpts, cmd := fakeGlobalOptions(t, globalFlags)
sharedFlags, sharedOpts := sharedImageFlags()
imageFlags, imageOpts := imageFlags(globalOpts, sharedOpts, flagPrefix, "")
var deprecatedTLSVerifyFlag pflag.FlagSet
var deprecatedTLSVerifyOpt *deprecatedTLSVerifyOption
if useDeprecatedTLSVerify {
deprecatedTLSVerifyFlag, deprecatedTLSVerifyOpt = deprecatedTLSVerifyFlags()
}
imageFlags, imageOpts := imageFlags(globalOpts, sharedOpts, deprecatedTLSVerifyOpt, flagPrefix, "")
cmd.Flags().AddFlagSet(&sharedFlags)
if useDeprecatedTLSVerify {
cmd.Flags().AddFlagSet(&deprecatedTLSVerifyFlag)
}
cmd.Flags().AddFlagSet(&imageFlags)
err := cmd.ParseFlags(cmdFlags)
require.NoError(t, err)
@@ -36,7 +46,7 @@ func fakeImageOptions(t *testing.T, flagPrefix string, globalFlags []string, cmd
func TestImageOptionsNewSystemContext(t *testing.T) {
// Default state
opts := fakeImageOptions(t, "dest-", []string{}, []string{})
opts := fakeImageOptions(t, "dest-", true, []string{}, []string{})
res, err := opts.newSystemContext()
require.NoError(t, err)
assert.Equal(t, &types.SystemContext{
@@ -44,7 +54,7 @@ func TestImageOptionsNewSystemContext(t *testing.T) {
}, res)
// Set everything to non-default values.
opts = fakeImageOptions(t, "dest-", []string{
opts = fakeImageOptions(t, "dest-", true, []string{
"--registries.d", "/srv/registries.d",
"--override-arch", "overridden-arch",
"--override-os", "overridden-os",
@@ -80,49 +90,29 @@ func TestImageOptionsNewSystemContext(t *testing.T) {
BigFilesTemporaryDir: "/srv",
}, res)
// Global/per-command tlsVerify behavior
for _, c := range []struct {
global, cmd string
expectedDocker types.OptionalBool
expectedDockerDaemon bool
}{
{"", "", types.OptionalBoolUndefined, false},
{"", "false", types.OptionalBoolTrue, true},
{"", "true", types.OptionalBoolFalse, false},
{"false", "", types.OptionalBoolTrue, false},
{"false", "false", types.OptionalBoolTrue, true},
{"false", "true", types.OptionalBoolFalse, false},
{"true", "", types.OptionalBoolFalse, false},
{"true", "false", types.OptionalBoolTrue, true},
{"true", "true", types.OptionalBoolFalse, false},
} {
globalFlags := []string{}
if c.global != "" {
globalFlags = append(globalFlags, "--tls-verify="+c.global)
}
cmdFlags := []string{}
if c.cmd != "" {
cmdFlags = append(cmdFlags, "--dest-tls-verify="+c.cmd)
}
opts := fakeImageOptions(t, "dest-", globalFlags, cmdFlags)
res, err = opts.newSystemContext()
require.NoError(t, err)
assert.Equal(t, c.expectedDocker, res.DockerInsecureSkipTLSVerify, "%#v", c)
assert.Equal(t, c.expectedDockerDaemon, res.DockerDaemonInsecureSkipTLSVerify, "%#v", c)
}
// Global/per-command tlsVerify behavior is tested in TestTLSVerifyFlags.
// Invalid option values
opts = fakeImageOptions(t, "dest-", []string{}, []string{"--dest-creds", ""})
opts = fakeImageOptions(t, "dest-", true, []string{}, []string{"--dest-creds", ""})
_, err = opts.newSystemContext()
assert.Error(t, err)
}
// fakeImageDestOptions creates imageDestOptions and sets it according to globalFlags/cmdFlags.
func fakeImageDestOptions(t *testing.T, flagPrefix string, globalFlags []string, cmdFlags []string) *imageDestOptions {
func fakeImageDestOptions(t *testing.T, flagPrefix string, useDeprecatedTLSVerify bool,
globalFlags []string, cmdFlags []string) *imageDestOptions {
globalOpts, cmd := fakeGlobalOptions(t, globalFlags)
sharedFlags, sharedOpts := sharedImageFlags()
imageFlags, imageOpts := imageDestFlags(globalOpts, sharedOpts, flagPrefix, "")
var deprecatedTLSVerifyFlag pflag.FlagSet
var deprecatedTLSVerifyOpt *deprecatedTLSVerifyOption
if useDeprecatedTLSVerify {
deprecatedTLSVerifyFlag, deprecatedTLSVerifyOpt = deprecatedTLSVerifyFlags()
}
imageFlags, imageOpts := imageDestFlags(globalOpts, sharedOpts, deprecatedTLSVerifyOpt, flagPrefix, "")
cmd.Flags().AddFlagSet(&sharedFlags)
if useDeprecatedTLSVerify {
cmd.Flags().AddFlagSet(&deprecatedTLSVerifyFlag)
}
cmd.Flags().AddFlagSet(&imageFlags)
err := cmd.ParseFlags(cmdFlags)
require.NoError(t, err)
@@ -131,7 +121,7 @@ func fakeImageDestOptions(t *testing.T, flagPrefix string, globalFlags []string,
func TestImageDestOptionsNewSystemContext(t *testing.T) {
// Default state
opts := fakeImageDestOptions(t, "dest-", []string{}, []string{})
opts := fakeImageDestOptions(t, "dest-", true, []string{}, []string{})
res, err := opts.newSystemContext()
require.NoError(t, err)
assert.Equal(t, &types.SystemContext{
@@ -151,7 +141,7 @@ func TestImageDestOptionsNewSystemContext(t *testing.T) {
os.Setenv("REGISTRY_AUTH_FILE", authFile)
// Explicitly set everything to default, except for when the default is “not present”
opts = fakeImageDestOptions(t, "dest-", []string{}, []string{
opts = fakeImageDestOptions(t, "dest-", true, []string{}, []string{
"--dest-compress=false",
})
res, err = opts.newSystemContext()
@@ -162,7 +152,7 @@ func TestImageDestOptionsNewSystemContext(t *testing.T) {
}, res)
// Set everything to non-default values.
opts = fakeImageDestOptions(t, "dest-", []string{
opts = fakeImageDestOptions(t, "dest-", true, []string{
"--registries.d", "/srv/registries.d",
"--override-arch", "overridden-arch",
"--override-os", "overridden-os",
@@ -177,34 +167,176 @@ func TestImageDestOptionsNewSystemContext(t *testing.T) {
"--dest-tls-verify=false",
"--dest-creds", "creds-user:creds-password",
"--dest-registry-token", "faketoken",
"--dest-precompute-digests=true",
})
res, err = opts.newSystemContext()
require.NoError(t, err)
assert.Equal(t, &types.SystemContext{
RegistriesDirPath: "/srv/registries.d",
AuthFilePath: "/srv/authfile",
ArchitectureChoice: "overridden-arch",
OSChoice: "overridden-os",
VariantChoice: "overridden-variant",
OCISharedBlobDirPath: "/srv/shared-blob-dir",
DockerCertPath: "/srv/cert-dir",
DockerInsecureSkipTLSVerify: types.OptionalBoolTrue,
DockerAuthConfig: &types.DockerAuthConfig{Username: "creds-user", Password: "creds-password"},
DockerBearerRegistryToken: "faketoken",
DockerDaemonCertPath: "/srv/cert-dir",
DockerDaemonHost: "daemon-host.example.com",
DockerDaemonInsecureSkipTLSVerify: true,
DockerRegistryUserAgent: defaultUserAgent,
DirForceCompress: true,
BigFilesTemporaryDir: "/srv",
RegistriesDirPath: "/srv/registries.d",
AuthFilePath: "/srv/authfile",
ArchitectureChoice: "overridden-arch",
OSChoice: "overridden-os",
VariantChoice: "overridden-variant",
OCISharedBlobDirPath: "/srv/shared-blob-dir",
DockerCertPath: "/srv/cert-dir",
DockerInsecureSkipTLSVerify: types.OptionalBoolTrue,
DockerAuthConfig: &types.DockerAuthConfig{Username: "creds-user", Password: "creds-password"},
DockerBearerRegistryToken: "faketoken",
DockerDaemonCertPath: "/srv/cert-dir",
DockerDaemonHost: "daemon-host.example.com",
DockerDaemonInsecureSkipTLSVerify: true,
DockerRegistryUserAgent: defaultUserAgent,
DirForceCompress: true,
BigFilesTemporaryDir: "/srv",
DockerRegistryPushPrecomputeDigests: true,
}, res)
// Global/per-command tlsVerify behavior is tested in TestTLSVerifyFlags.
// Invalid option values in imageOptions
opts = fakeImageDestOptions(t, "dest-", []string{}, []string{"--dest-creds", ""})
opts = fakeImageDestOptions(t, "dest-", true, []string{}, []string{"--dest-creds", ""})
_, err = opts.newSystemContext()
assert.Error(t, err)
}
// TestImageOptionsUsernamePassword verifies that using the username and password
// options works as expected
func TestImageOptionsUsernamePassword(t *testing.T) {
for _, command := range []struct {
commandArgs []string
expectedAuthConfig *types.DockerAuthConfig // data to expect, or nil if an error is expected
}{
// Set only username/password (without --creds), expected to pass
{
commandArgs: []string{"--dest-username", "foo", "--dest-password", "bar"},
expectedAuthConfig: &types.DockerAuthConfig{Username: "foo", Password: "bar"},
},
// no username but set password, expect error
{
commandArgs: []string{"--dest-password", "foo"},
expectedAuthConfig: nil,
},
// set username but no password. expected to fail (we currently don't allow a user without password)
{
commandArgs: []string{"--dest-username", "bar"},
expectedAuthConfig: nil,
},
// set username with --creds, expected to fail
{
commandArgs: []string{"--dest-username", "bar", "--dest-creds", "hello:world", "--dest-password", "foo"},
expectedAuthConfig: nil,
},
// set username with --no-creds, expected to fail
{
commandArgs: []string{"--dest-username", "bar", "--dest-no-creds", "--dest-password", "foo"},
expectedAuthConfig: nil,
},
} {
opts := fakeImageDestOptions(t, "dest-", true, []string{}, command.commandArgs)
// parse the command options
res, err := opts.newSystemContext()
if command.expectedAuthConfig == nil {
assert.Error(t, err)
} else {
require.NoError(t, err)
assert.Equal(t, &types.SystemContext{
DockerRegistryUserAgent: defaultUserAgent,
DockerAuthConfig: command.expectedAuthConfig,
}, res)
}
}
}
func TestTLSVerifyFlags(t *testing.T) {
type systemContextOpts interface { // Either *imageOptions or *imageDestOptions
newSystemContext() (*types.SystemContext, error)
}
for _, creator := range []struct {
name string
newOpts func(useDeprecatedTLSVerify bool, globalFlags, cmdFlags []string) systemContextOpts
}{
{
"imageFlags",
func(useDeprecatedTLSVerify bool, globalFlags, cmdFlags []string) systemContextOpts {
return fakeImageOptions(t, "dest-", useDeprecatedTLSVerify, globalFlags, cmdFlags)
},
},
{
"imageDestFlags",
func(useDeprecatedTLSVerify bool, globalFlags, cmdFlags []string) systemContextOpts {
return fakeImageDestOptions(t, "dest-", useDeprecatedTLSVerify, globalFlags, cmdFlags)
},
},
} {
t.Run(creator.name, func(t *testing.T) {
for _, c := range []struct {
global, deprecatedCmd, cmd string
expectedDocker types.OptionalBool
expectedDockerDaemon bool
}{
{"", "", "", types.OptionalBoolUndefined, false},
{"", "", "false", types.OptionalBoolTrue, true},
{"", "", "true", types.OptionalBoolFalse, false},
{"", "false", "", types.OptionalBoolTrue, false},
{"", "false", "false", types.OptionalBoolTrue, true},
{"", "false", "true", types.OptionalBoolFalse, false},
{"", "true", "", types.OptionalBoolFalse, false},
{"", "true", "false", types.OptionalBoolTrue, true},
{"", "true", "true", types.OptionalBoolFalse, false},
{"false", "", "", types.OptionalBoolTrue, false},
{"false", "", "false", types.OptionalBoolTrue, true},
{"false", "", "true", types.OptionalBoolFalse, false},
{"false", "false", "", types.OptionalBoolTrue, false},
{"false", "false", "false", types.OptionalBoolTrue, true},
{"false", "false", "true", types.OptionalBoolFalse, false},
{"false", "true", "", types.OptionalBoolFalse, false},
{"false", "true", "false", types.OptionalBoolTrue, true},
{"false", "true", "true", types.OptionalBoolFalse, false},
{"true", "", "", types.OptionalBoolFalse, false},
{"true", "", "false", types.OptionalBoolTrue, true},
{"true", "", "true", types.OptionalBoolFalse, false},
{"true", "false", "", types.OptionalBoolTrue, false},
{"true", "false", "false", types.OptionalBoolTrue, true},
{"true", "false", "true", types.OptionalBoolFalse, false},
{"true", "true", "", types.OptionalBoolFalse, false},
{"true", "true", "false", types.OptionalBoolTrue, true},
{"true", "true", "true", types.OptionalBoolFalse, false},
} {
globalFlags := []string{}
if c.global != "" {
globalFlags = append(globalFlags, "--tls-verify="+c.global)
}
cmdFlags := []string{}
if c.deprecatedCmd != "" {
cmdFlags = append(cmdFlags, "--tls-verify="+c.deprecatedCmd)
}
if c.cmd != "" {
cmdFlags = append(cmdFlags, "--dest-tls-verify="+c.cmd)
}
opts := creator.newOpts(true, globalFlags, cmdFlags)
res, err := opts.newSystemContext()
require.NoError(t, err)
assert.Equal(t, c.expectedDocker, res.DockerInsecureSkipTLSVerify, "%#v", c)
assert.Equal(t, c.expectedDockerDaemon, res.DockerDaemonInsecureSkipTLSVerify, "%#v", c)
if c.deprecatedCmd == "" { // Test also the behavior when deprecatedTLSFlag is not recognized
// Use globalFlags from the previous test
cmdFlags := []string{}
if c.cmd != "" {
cmdFlags = append(cmdFlags, "--dest-tls-verify="+c.cmd)
}
opts := creator.newOpts(false, globalFlags, cmdFlags)
res, err = opts.newSystemContext()
require.NoError(t, err)
assert.Equal(t, c.expectedDocker, res.DockerInsecureSkipTLSVerify, "%#v", c)
assert.Equal(t, c.expectedDockerDaemon, res.DockerDaemonInsecureSkipTLSVerify, "%#v", c)
}
}
})
}
}
func TestParseManifestFormat(t *testing.T) {
for _, testCase := range []struct {
formatParam string
@@ -271,7 +403,7 @@ func TestImageOptionsAuthfileOverride(t *testing.T) {
}, "/srv/dest-authfile",
},
} {
opts := fakeImageOptions(t, testCase.flagPrefix, []string{}, testCase.cmdFlags)
opts := fakeImageOptions(t, testCase.flagPrefix, false, []string{}, testCase.cmdFlags)
res, err := opts.newSystemContext()
require.NoError(t, err)


@@ -40,7 +40,9 @@ _skopeo_copy() {
--src-authfile
--dest-authfile
--format -f
--multi-arch
--sign-by
--sign-passphrase-file
--src-creds --screds
--src-cert-dir
--src-tls-verify
@@ -51,15 +53,22 @@ _skopeo_copy() {
--dest-daemon-host
--src-registry-token
--dest-registry-token
--src-username
--src-password
--dest-username
--dest-password
"
local boolean_options="
--all
--dest-compress
--dest-decompress
--remove-signatures
--src-no-creds
--dest-no-creds
--dest-oci-accept-uncompressed-layers
--dest-precompute-digests
--preserve-digests
"
local transports
@@ -81,11 +90,16 @@ _skopeo_sync() {
--format
--retry-times
--sign-by
--sign-passphrase-file
--src
--src-authfile
--src-cert-dir
--src-creds
--src-registry-token
--src-username
--src-password
--dest-username
--dest-password
"
local boolean_options="
@@ -96,6 +110,8 @@ _skopeo_sync() {
--scoped
--src-no-creds
--src-tls-verify
--keep-going
--preserve-digests
"
local transports
@@ -114,12 +130,15 @@ _skopeo_inspect() {
--format
--retry-times
--registry-token
--username
--password
"
local boolean_options="
--config
--raw
--tls-verify
--no-creds
--no-tags -n
"
local transports
@@ -133,6 +152,7 @@ _skopeo_inspect() {
_skopeo_standalone_sign() {
local options_with_args="
-o --output
--passphrase-file
"
local boolean_options="
"
@@ -161,6 +181,8 @@ _skopeo_delete() {
--creds
--cert-dir
--registry-token
--username
--password
"
local boolean_options="
--tls-verify
@@ -181,6 +203,8 @@ _skopeo_layers() {
--creds
--cert-dir
--registry-token
--username
--password
"
local boolean_options="
--tls-verify
@@ -195,6 +219,8 @@ _skopeo_list_repository_tags() {
--creds
--cert-dir
--registry-token
--username
--password
"
local boolean_options="


@@ -6,6 +6,19 @@
set -e
# BEGIN Global export of all variables
set -a
# Due to differences across platforms and runtime execution environments,
# handling of the (otherwise) default shell setup is non-uniform. Rather
# than attempt to work around differences, simply force-load/set required
# items every time this library is utilized.
USER="$(whoami)"
HOME="$(getent passwd $USER | cut -d : -f 6)"
# Some platforms set and make this read-only
[[ -n "$UID" ]] || \
UID=$(getent passwd $USER | cut -d : -f 3)
if [[ -r "/etc/automation_environment" ]]; then
source /etc/automation_environment
source $AUTOMATION_LIB_PATH/common_lib.sh
@@ -23,57 +36,104 @@ OS_RELEASE_VER="$(source /etc/os-release; echo $VERSION_ID | tr -d '.')"
# Combined to ease some usage
OS_REL_VER="${OS_RELEASE_ID}-${OS_RELEASE_VER}"
export "PATH=$PATH:$GOPATH/bin"
# This is the magic interpreted by the tests to allow modifying local config/services.
SKOPEO_CONTAINER_TESTS=1
PATH=$PATH:$GOPATH/bin
# END Global export of all variables
set +a
podmanmake() {
req_env_vars GOPATH SKOPEO_PATH SKOPEO_CI_CONTAINER_FQIN
warn "Accumulated technical-debt requires execution inside a --privileged container. This is very likely hiding bugs!"
showrun podman run -it --rm --privileged \
-e GOPATH=$GOPATH \
-v $GOPATH:$GOPATH:Z \
-w $SKOPEO_PATH \
$SKOPEO_CI_CONTAINER_FQIN \
make "$@"
}
_run_setup() {
if [[ "$OS_RELEASE_ID" == "fedora" ]]; then
# This is required as part of the standard Fedora VM setup
growpart /dev/sda 1
resize2fs /dev/sda1
# VMs come with the distro-provided skopeo pre-installed
dnf erase -y skopeo
else
local mnt
local errmsg
req_env_vars SKOPEO_CIDEV_CONTAINER_FQIN
if [[ "$OS_RELEASE_ID" != "fedora" ]]; then
die "Unknown/unsupported distro. $OS_REL_VER"
fi
if [[ -r "/.ci_setup_complete" ]]; then
warn "Thwarted an attempt to execute setup more than once."
return
fi
# VMs come with the distro-provided skopeo package pre-installed
dnf erase -y skopeo
# Required for testing the SIF transport
dnf install -y fakeroot squashfs-tools
msg "Removing systemd-resolved from nsswitch.conf"
# /etc/resolv.conf is already set to bypass systemd-resolved
sed -i -r -e 's/^(hosts.+)resolve.+dns/\1dns/' /etc/nsswitch.conf
# A slew of compiled binaries are pre-built and distributed
# within the CI/Dev container image, but we want to run
# things directly on the host VM. Fortunately they're all
# located in the container under /usr/local/bin
msg "Accessing contents of $SKOPEO_CIDEV_CONTAINER_FQIN"
podman pull --quiet $SKOPEO_CIDEV_CONTAINER_FQIN
mnt=$(podman mount $(podman create $SKOPEO_CIDEV_CONTAINER_FQIN))
# The container and VM images are built in tandem by the same repo
# automation, but the sources are in different directories. It's
# possible for a mismatch to happen, but should (hopefully) be unlikely.
# Double-check to make sure.
if ! fgrep -qx "ID=$OS_RELEASE_ID" $mnt/etc/os-release || \
! fgrep -qx "VERSION_ID=$OS_RELEASE_VER" $mnt/etc/os-release; then
die "Somehow $SKOPEO_CIDEV_CONTAINER_FQIN is not based on $OS_REL_VER."
fi
msg "Copying test binaries from $SKOPEO_CIDEV_CONTAINER_FQIN /usr/local/bin/"
cp -a "$mnt/usr/local/bin/"* "/usr/local/bin/"
msg "Configuring the openshift registry"
# TODO: Put directory & yaml into more sensible place + update integration tests
mkdir -vp /registry
cp -a "$mnt/atomic-registry-config.yml" /
msg "Cleaning up"
podman umount --latest
podman rm --latest
# Ensure setup can only run once
touch "/.ci_setup_complete"
}
_run_vendor() {
podmanmake vendor BUILDTAGS="$BUILDTAGS"
make vendor BUILDTAGS="$BUILDTAGS"
}
_run_build() {
podmanmake bin/skopeo BUILDTAGS="$BUILDTAGS"
make bin/skopeo BUILDTAGS="$BUILDTAGS"
make install PREFIX=/usr/local
}
_run_validate() {
podmanmake validate-local BUILDTAGS="$BUILDTAGS"
_run_cross() {
make local-cross BUILDTAGS="$BUILDTAGS"
}
_run_doccheck() {
make validate-docs BUILDTAGS="$BUILDTAGS"
}
_run_unit() {
podmanmake test-integration-local BUILDTAGS="$BUILDTAGS"
make test-unit-local BUILDTAGS="$BUILDTAGS"
}
_run_integration() {
podmanmake test-integration-local BUILDTAGS="$BUILDTAGS"
# Ensure we start with a clean-slate
podman system reset --force
make test-integration-local BUILDTAGS="$BUILDTAGS"
}
_run_system() {
# Ensure we start with a clean-slate
podman system reset --force
# Executes with containers required for testing.
showrun make test-system-local BUILDTAGS="$BUILDTAGS"
make test-system-local BUILDTAGS="$BUILDTAGS"
}
req_env_vars SKOPEO_PATH BUILDTAGS


@@ -6,22 +6,35 @@
## Overview
This directory contains the Dockerfiles necessary to create the three skopeoimage container
images that are housed on quay.io under the skopeo account. All three repositories where
the images live are public and can be pulled without credentials. These container images
are secured and the resulting containers can run safely. The container images are built
using the latest Fedora and then Skopeo is installed into them:
This directory contains the Dockerfiles necessary to create the skopeoimage container
images that are housed on quay.io under the skopeo account. All repositories where
the images live are public and can be pulled without credentials. These container images are secured and the
resulting containers can run safely with privileges within the container.
* quay.io/skopeo/stable - This image is built using the latest stable version of Skopeo in a Fedora based container. Built with skopeoimage/stable/Dockerfile.
* quay.io/skopeo/upstream - This image is built using the latest code found in this GitHub repository. When someone creates a commit and pushes it, the image is created. Due to that the image changes frequently and is not guaranteed to be stable. Built with skopeoimage/upstream/Dockerfile.
* quay.io/skopeo/testing - This image is built using the latest version of Skopeo that is or was in updates testing for Fedora. At times this may be the same as the stable image. This container image will primarily be used by the development teams for verification testing when a new package is created. Built with skopeoimage/testing/Dockerfile.
The container images are built using the latest Fedora and then Skopeo is installed into them.
The PATH in the container images is set to the default PATH provided by Fedora. Also, the
ENTRYPOINT and the WORKDIR variables are not set within these container images, as such they
default to `/`.
## Multiarch images
The container images are:
Multiarch images are available for Skopeo upstream and stable versions. Supported architectures are `amd64`, `s390x`, `ppc64le`.
Available images are `quay.io/skopeo/upstream:master`, `quay.io/skopeo/stable:v1.2.0`, `quay.io/containers/skopeo:v1.2.0`.
* `quay.io/containers/skopeo:<version>` and `quay.io/skopeo/stable:<version>` -
These images are built when a new Skopeo version becomes available in
Fedora. These images are intended to be unchanging and stable, they will
never be updated by automation once they've been pushed. For build details,
please [see the configuration file](stable/Dockerfile).
* `quay.io/containers/skopeo:latest` and `quay.io/skopeo/stable:latest` -
Built daily using the same Dockerfile as above. The skopeo version
will remain the "latest" available in Fedora, however the image
contents may vary compared to the version-tagged images.
* `quay.io/skopeo/testing:latest` - This image is built daily, using the
latest version of Skopeo that was in the Fedora `updates-testing` repository.
The image is built with [the testing Dockerfile](testing/Dockerfile).
* `quay.io/skopeo/upstream:latest` - This image is built daily using the latest
code found in this GitHub repository. Due to the image changing frequently,
it's not guaranteed to be stable or even executable. The image is built with
[the upstream Dockerfile](upstream/Dockerfile).
Images can be used the same way as in a single architecture case, no extra setup is required. For samples see next chapter.
## Sample Usage


@@ -6,7 +6,7 @@
# This image can be used to create a secured container
# that runs safely with privileges within the container.
#
FROM registry.fedoraproject.org/fedora:33
FROM registry.fedoraproject.org/fedora:latest
# Don't include container-selinux and remove
# directories used by yum that are just taking


@@ -7,7 +7,7 @@
# This image can be used to create a secured container
# that runs safely with privileges within the container.
#
FROM registry.fedoraproject.org/fedora:33
FROM registry.fedoraproject.org/fedora:latest
# Don't include container-selinux and remove
# directories used by yum that are just taking


@@ -6,7 +6,7 @@
# This image can be used to create a secured container
# that runs safely with privileges within the container.
#
FROM registry.fedoraproject.org/fedora:33
FROM registry.fedoraproject.org/fedora:latest
# Don't include container-selinux and remove
# directories used by yum that are just taking


@@ -4,7 +4,7 @@
skopeo\-copy - Copy an image (manifest, filesystem layers, signatures) from one location to another.
## SYNOPSIS
**skopeo copy** [**--sign-by=**_key-ID_] _source-image destination-image_
**skopeo copy** [*options*] _source-image_ _destination-image_
## DESCRIPTION
Copy an image (manifest, filesystem layers, signatures) from one location to another.
@@ -20,7 +20,11 @@ automatically inherit any parts of the source name.
## OPTIONS
**--all**
**--additional-tag**=_strings_
Additional tags (supports docker-archive).
**--all**, **-a**
If _source-image_ refers to a list of images, instead of copying just the image which matches the current OS and
architecture (subject to the use of the global --override-os, --override-arch and --override-variant options), attempt to copy all of
@@ -42,55 +46,162 @@ Path of the authentication file for the source registry. Uses path given by `--a
Path of the authentication file for the destination registry. Uses path given by `--authfile`, if not provided.
**--dest-shared-blob-dir** _directory_
Directory to use to share blobs across OCI repositories.
**--digestfile** _path_
After copying the image, write the digest of the resulting image to the file.
**--format, -f** _manifest-type_ Manifest type (oci, v2s1, or v2s2) to use when saving image to directory using the 'dir:' transport (default is manifest type of source)
**--preserve-digests**
**--quiet, -q** suppress output information when copying images
Preserve the digests during copying. Fail if the digest cannot be preserved.
**--remove-signatures** do not copy signatures, if any, from _source-image_. Necessary when copying a signed image to a destination which does not support signatures.
**--encrypt-layer** _ints_
**--sign-by=**_key-id_ add a signature using that key ID for an image name corresponding to _destination-image_
*Experimental* the 0-indexed layer indices, with support for negative indexing (e.g. 0 is the first layer, -1 is the last layer)
**--encryption-key** _protocol:keyfile_ specifies the encryption protocol, which can be JWE (RFC7516), PGP (RFC4880), and PKCS7 (RFC2315) and the key material required for image encryption. For instance, jwe:/path/to/key.pem or pgp:admin@example.com or pkcs7:/path/to/x509-file.
**--format**, **-f** _manifest-type_
**--decryption-key** _key[:passphrase]_ to be used for decryption of images. Key can point to keys and/or certificates. Decryption will be tried with all keys. If the key is protected by a passphrase, it is required to be passed in the argument and omitted otherwise.
MANIFEST TYPE (oci, v2s1, or v2s2) to use in the destination (default is manifest type of source, with fallbacks)
**--src-creds** _username[:password]_ for accessing the source registry.
**--help**, **-h**
**--dest-compress** _bool-value_ Compress tarball image layers when saving to directory using the 'dir' transport. (default is same compression type as source).
Print usage statement
**--dest-oci-accept-uncompressed-layers** _bool-value_ Allow uncompressed image layers when saving to an OCI image using the 'oci' transport. (default is to compress things that aren't compressed).
**--multi-arch**
**--dest-creds** _username[:password]_ for accessing the destination registry.
Control what is copied if _source-image_ refers to a multi-architecture image. Default is system.
**--src-cert-dir** _path_ Use certificates at _path_ (*.crt, *.cert, *.key) to connect to the source registry or daemon.
Options:
- system: Copy only the image that matches the system architecture
- all: Copy the full multi-architecture image
- index-only: Copy only the index
**--src-no-creds** _bool-value_ Access the registry anonymously.
The index-only option usually fails unless the referenced per-architecture images are already present in the destination, or the target registry supports sparse indexes.
**--src-tls-verify** _bool-value_ Require HTTPS and verify certificates when talking to container source registry or daemon (defaults to true).
**--quiet**, **-q**
**--dest-cert-dir** _path_ Use certificates at _path_ (*.crt, *.cert, *.key) to connect to the destination registry or daemon.
Suppress output information when copying images.
**--dest-no-creds** _bool-value_ Access the registry anonymously.
**--remove-signatures**
**--dest-tls-verify** _bool-value_ Require HTTPS and verify certificates when talking to container destination registry or daemon (defaults to true).
Do not copy signatures, if any, from _source-image_. Necessary when copying a signed image to a destination which does not support signatures.
**--src-daemon-host** _host_ Copy from docker daemon at _host_. If _host_ starts with `tcp://`, HTTPS is enabled by default. To use plain HTTP, use the form `http://` (default is `unix:///var/run/docker.sock`).
**--sign-by**=_key-id_
**--dest-daemon-host** _host_ Copy to docker daemon at _host_. If _host_ starts with `tcp://`, HTTPS is enabled by default. To use plain HTTP, use the form `http://` (default is `unix:///var/run/docker.sock`).
Add a signature using that key ID for an image name corresponding to _destination-image_
**--sign-passphrase-file**=_path_
The passphrase to use when signing with the key ID from `--sign-by`. Only the first line will be read. A passphrase stored in a file is of questionable security if other users can read this file. Do not use this option if at all avoidable.
**--src-shared-blob-dir** _directory_
Directory to use to share blobs across OCI repositories.
**--encryption-key** _protocol:keyfile_
Specifies the encryption protocol, which can be JWE (RFC7516), PGP (RFC4880), and PKCS7 (RFC2315) and the key material required for image encryption. For instance, jwe:/path/to/key.pem or pgp:admin@example.com or pkcs7:/path/to/x509-file.
**--decryption-key** _key[:passphrase]_
Key to be used for decryption of images. Key can point to keys and/or certificates. Decryption will be tried with all keys. If the key is protected by a passphrase, it is required to be passed in the argument and omitted otherwise.
**--src-creds** _username[:password]_
Credentials for accessing the source registry.
**--dest-compress**
Compress tarball image layers when saving to directory using the 'dir' transport. (default is same compression type as source).
**--dest-decompress**
Decompress tarball image layers when saving to directory using the 'dir' transport. (default is same compression type as source).
**--dest-oci-accept-uncompressed-layers**
Allow uncompressed image layers when saving to an OCI image using the 'oci' transport. (default is to compress things that aren't compressed).
**--dest-creds** _username[:password]_
Credentials for accessing the destination registry.
**--src-cert-dir** _path_
Use certificates at _path_ (*.crt, *.cert, *.key) to connect to the source registry or daemon.
**--src-no-creds**
Access the registry anonymously.
**--src-tls-verify**=_bool_
Require HTTPS and verify certificates when talking to container source registry or daemon. Default to source registry setting.
**--dest-cert-dir** _path_
Use certificates at _path_ (*.crt, *.cert, *.key) to connect to the destination registry or daemon.
**--dest-no-creds**
Access the registry anonymously.
**--dest-tls-verify**=_bool_
Require HTTPS and verify certificates when talking to container destination registry or daemon. Default to destination registry setting.
**--src-daemon-host** _host_
Copy from docker daemon at _host_. If _host_ starts with `tcp://`, HTTPS is enabled by default. To use plain HTTP, use the form `http://` (default is `unix:///var/run/docker.sock`).
**--dest-daemon-host** _host_
Copy to docker daemon at _host_. If _host_ starts with `tcp://`, HTTPS is enabled by default. To use plain HTTP, use the form `http://` (default is `unix:///var/run/docker.sock`).
Existing signatures, if any, are preserved as well.
**--dest-compress-format** _format_ Specifies the compression format to use. Supported values are: `gzip` and `zstd`.
**--dest-compress-format** _format_
**--dest-compress-level** _format_ Specifies the compression level to use. The value is specific to the compression algorithm used, e.g. for zstd the accepted values are in the range 1-20 (inclusive), while for gzip it is 1-9 (inclusive).
Specifies the compression format to use. Supported values are: `gzip` and `zstd`.
**--src-registry-token** _Bearer token_ for accessing the source registry.
**--dest-compress-level** _format_
**--dest-registry-token** _Bearer token_ for accessing the destination registry.
Specifies the compression level to use. The value is specific to the compression algorithm used, e.g. for zstd the accepted values are in the range 1-20 (inclusive), while for gzip it is 1-9 (inclusive).
**--src-registry-token** _token_
Bearer token for accessing the source registry.
**--dest-registry-token** _token_
Bearer token for accessing the destination registry.
**--dest-precompute-digests**
Precompute digests to ensure layers are not uploaded that already exist on the destination registry. Layers with initially unknown digests (ex. compressing "on the fly") will be temporarily streamed to disk.
**--retry-times**
The number of times to retry. Retry wait time will be exponentially increased based on the number of failed attempts.
**--src-username**
The username to access the source registry.
**--src-password**
The password to access the source registry.
**--dest-username**
The username to access the destination registry.
**--dest-password**
The password to access the destination registry.
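Taken together, a minimal sketch of how several of these options combine; the registry names, tags, key ID, and paths below are placeholders, not values from this repository:
```sh
# Mirror every architecture of a multi-arch image and record the manifest
# digest of the copy (registries and tags are hypothetical).
skopeo copy --multi-arch=all --digestfile /tmp/app.digest \
    docker://registry.example.com/myorg/app:v1 \
    docker://mirror.example.com/myorg/app:v1

# Sign while copying, reading the GPG passphrase from a file
# (the key ID and file paths are hypothetical).
skopeo copy --sign-by dev@example.com \
    --sign-passphrase-file /run/secrets/gpg-passphrase \
    docker://registry.example.com/myorg/app:v1 \
    docker://signed.example.com/myorg/app:v1
```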
## EXAMPLES
@@ -148,9 +259,8 @@ skopeo copy --encryption-key jwe:./public.key --encrypt-layer 1 oci:local_nginx
```
## SEE ALSO
skopeo(1), skopeo-login(1), docker-login(1), containers-auth.json(5), containers-policy.json(5), containers-transports(5)
skopeo(1), skopeo-login(1), docker-login(1), containers-auth.json(5), containers-policy.json(5), containers-transports(5), containers-signature(5)
## AUTHORS
Antonio Murdaca <runcom@redhat.com>, Miloslav Trmac <mitr@redhat.com>, Jhon Honce <jhonce@redhat.com>


@@ -4,7 +4,7 @@
skopeo\-delete - Mark the _image-name_ for later deletion by the registry's garbage collector.
## SYNOPSIS
**skopeo delete** _image-name_
**skopeo delete** [*options*] _image-name_
Mark _image-name_ for deletion. To release the allocated disk space, you must login to the container registry server and execute the container registry garbage collector. E.g.,
@@ -19,22 +19,58 @@ $ docker exec -it registry /usr/bin/registry garbage-collect /etc/docker-distrib
```
## OPTIONS
**--authfile** _path_
Path of the authentication file. Default is ${XDG_RUNTIME\_DIR}/containers/auth.json, which is set using `skopeo login`.
If the authorization state is not found there, $HOME/.docker/config.json is checked, which is set using `docker login`.
Path of the authentication file. Default is ${XDG_RUNTIME\_DIR}/containers/auth.json, which is set using `skopeo login`.
If the authorization state is not found there, $HOME/.docker/config.json is checked, which is set using `docker login`.
**--creds** _username[:password]_ for accessing the registry.
**--creds** _username[:password]_
**--cert-dir** _path_ Use certificates at _path_ (*.crt, *.cert, *.key) to connect to the registry.
Credentials for accessing the registry.
**--tls-verify** _bool-value_ Require HTTPS and verify certificates when talking to container registries (defaults to true).
**--cert-dir** _path_
**--no-creds** _bool-value_ Access the registry anonymously.
Use certificates at _path_ (*.crt, *.cert, *.key) to connect to the registry.
**--daemon-host** _host_
Use docker daemon host at _host_ (`docker-daemon:` transport only)
**--help**, **-h**
Print usage statement
**--no-creds**
Access the registry anonymously.
Additionally, the registry must allow deletions by setting `REGISTRY_STORAGE_DELETE_ENABLED=true` for the registry daemon.
**--registry-token** _Bearer token_ for accessing the registry.
**--registry-token** _token_
Bearer token for accessing the registry.
**--retry-times**
The number of times to retry. Retry wait time will be exponentially increased based on the number of failed attempts.
**--shared-blob-dir** _directory_
Directory to use to share blobs across OCI repositories.
**--tls-verify**=_bool_
Require HTTPS and verify certificates when talking to the container registry or daemon. Default to registry.conf setting.
**--username**
The username to access the registry.
**--password**
The password to access the registry.
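As a quick sketch, assuming a registry at registry.example.com that has deletions enabled and accepts the given credentials:
```sh
# The registry must run with REGISTRY_STORAGE_DELETE_ENABLED=true for this
# to succeed; repository, tag, and credentials are placeholders.
skopeo delete --creds testuser:testpassword \
    docker://registry.example.com/myorg/app:old
```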
## EXAMPLES
@@ -51,4 +87,3 @@ skopeo(1), skopeo-login(1), docker-login(1), containers-auth.json(5)
## AUTHORS
Antonio Murdaca <runcom@redhat.com>, Miloslav Trmac <mitr@redhat.com>, Jhon Honce <jhonce@redhat.com>


@@ -8,9 +8,12 @@ skopeo\-inspect - Return low-level information about _image-name_ in a registry.
## DESCRIPTION
Return low-level information about _image-name_ in a registry
Return low-level information about _image-name_ in a registry.
See [skopeo(1)](skopeo.1.md) for the format of _image-name_.
_image-name_ name of image to retrieve information about
The default output includes data from various sources: user input (**Name**), the remote repository, if any (**RepoTags**), the top-level manifest (**Digest**),
and a per-architecture/OS image matching the current run-time environment (most other values).
To see values for a different architecture/OS, use the **--override-os** / **--override-arch** options documented in [skopeo(1)](skopeo.1.md).
## OPTIONS
@@ -31,11 +34,19 @@ Output configuration in OCI format, default is to format in JSON format.
Username and password for accessing the registry.
**--daemon-host** _host_
Use docker daemon host at _host_ (`docker-daemon:` transport only)
**--format**, **-f**=*format*
Format the output using the given Go template.
The keys of the returned JSON can be used as the values for the --format flag (see examples below).
**--help**, **-h**
Print usage statement
**--no-creds**
Access the registry anonymously.
@@ -53,9 +64,25 @@ Registry token for accessing the registry.
The number of times to retry; retry wait time will be exponentially increased based on the number of failed attempts.
**--tls-verify**
**--shared-blob-dir** _directory_
Require HTTPS and verify certificates when talking to container registries (defaults to true).
Directory to use to share blobs across OCI repositories.
**--tls-verify**=_bool_
Require HTTPS and verify certificates when talking to the container registry or daemon. Default to registry.conf setting.
**--username**
The username to access the registry.
**--password**
The password to access the registry.
**--no-tags**, **-n**
Do not list the available tags from the repository in the output. When `true`, the `RepoTags` array will be empty. Defaults to `false`, which includes all available tags.
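As a quick sketch, any top-level key of the default JSON output can be extracted with `--format`; the image reference below is arbitrary:
```sh
# Print only the manifest digest of the per-architecture image matching
# the current OS and architecture.
skopeo inspect --format '{{ .Digest }}' docker://docker.io/library/alpine:latest
```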
## EXAMPLES
@@ -86,6 +113,42 @@ $ skopeo inspect docker://docker.io/fedora
}
```
To inspect python from the docker.io registry and not show the available tags:
```sh
$ skopeo inspect --no-tags docker://docker.io/library/python
{
"Name": "docker.io/library/python",
"Digest": "sha256:5ca194a80ddff913ea49c8154f38da66a41d2b73028c5cf7e46bc3c1d6fda572",
"RepoTags": [],
"Created": "2021-10-05T23:40:54.936108045Z",
"DockerVersion": "20.10.7",
"Labels": null,
"Architecture": "amd64",
"Os": "linux",
"Layers": [
"sha256:df5590a8898bedd76f02205dc8caa5cc9863267dbcd8aac038bcd212688c1cc7",
"sha256:705bb4cb554eb7751fd21a994f6f32aee582fbe5ea43037db6c43d321763992b",
"sha256:519df5fceacdeaadeec563397b1d9f4d7c29c9f6eff879739cab6f0c144f49e1",
"sha256:ccc287cbeddc96a0772397ca00ec85482a7b7f9a9fac643bfddd87b932f743db",
"sha256:e3f8e6af58ed3a502f0c3c15dce636d9d362a742eb5b67770d0cfcb72f3a9884",
"sha256:aebed27b2d86a5a3a2cbe186247911047a7e432b9d17daad8f226597c0ea4276",
"sha256:54c32182bdcc3041bf64077428467109a70115888d03f7757dcf614ff6d95ebe",
"sha256:cc8b7caedab13af07adf4836e13af2d4e9e54d794129b0fd4c83ece6b1112e86",
"sha256:462c3718af1d5cdc050cfba102d06c26f78fe3b738ce2ca2eb248034b1738945"
],
"Env": [
"PATH=/usr/local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
"LANG=C.UTF-8",
"GPG_KEY=A035C8C19219BA821ECEA86B64E628F8D684696D",
"PYTHON_VERSION=3.10.0",
"PYTHON_PIP_VERSION=21.2.4",
"PYTHON_SETUPTOOLS_VERSION=57.5.0",
"PYTHON_GET_PIP_URL=https://github.com/pypa/get-pip/raw/d781367b97acf0ece7e9e304bf281e99b618bf10/public/get-pip.py",
"PYTHON_GET_PIP_SHA256=01249aa3e58ffb3e1686b7141b4e9aac4d398ef4ac3012ed9dff8dd9f685ffe0"
]
}
```
```
$ /bin/skopeo inspect --config docker://registry.fedoraproject.org/fedora --format "{{ .Architecture }}"
amd64


@@ -1,29 +1,55 @@
% skopeo-list-tags(1)
## NAME
skopeo\-list\-tags - Return a list of tags for the transport-specific image repository.
skopeo\-list\-tags - List tags in the transport-specific image repository.
## SYNOPSIS
**skopeo list-tags** _repository-name_
**skopeo list-tags** [*options*] _repository-name_
Return a list of tags from _repository-name_ in a registry.
_repository-name_ name of repository to retrieve tag listing from
**--authfile** _path_
## OPTIONS
Path of the authentication file. Default is ${XDG\_RUNTIME\_DIR}/containers/auth.json, which is set using `skopeo login`.
**--authfile** _path_
Path of the authentication file. Default is ${XDG\_RUNTIME\_DIR}/containers/auth.json, which is set using `skopeo login`.
If the authorization state is not found there, $HOME/.docker/config.json is checked, which is set using `docker login`.
**--creds** _username[:password]_ for accessing the registry.
**--creds** _username[:password]_ for accessing the registry.
**--cert-dir** _path_ Use certificates at _path_ (\*.crt, \*.cert, \*.key) to connect to the registry.
**--cert-dir** _path_
**--tls-verify** _bool-value_ Require HTTPS and verify certificates when talking to container registries (defaults to true).
Use certificates at _path_ (\*.crt, \*.cert, \*.key) to connect to the registry.
**--no-creds** _bool-value_ Access the registry anonymously.
**--help**, **-h**
**--registry-token** _Bearer token_ for accessing the registry.
Print usage statement
**--no-creds**
Access the registry anonymously.
**--registry-token** _Bearer token_
Bearer token for accessing the registry.
**--retry-times**
The number of times to retry. Retry wait time will be exponentially increased based on the number of failed attempts.
**--tls-verify**=_bool_
Require HTTPS and verify certificates when talking to the container registry or daemon. Default to registry.conf setting.
**--username**
The username to access the registry.
**--password**
The password to access the registry.
## REPOSITORY NAMES
@@ -34,18 +60,18 @@ This commands refers to repositories using a _transport_`:`_details_ format. The
**docker://**_docker-repository-reference_
A repository in a registry implementing the "Docker Registry HTTP API V2". By default, uses the authorization state in either `$XDG_RUNTIME_DIR/containers/auth.json`, which is set using `(skopeo login)`. If the authorization state is not found there, `$HOME/.docker/config.json` is checked, which is set using `(docker login)`.
A _docker-repository-reference_ is of the form: **registryhost:port/repositoryname** which is similar to an _image-reference_ but with no tag or digest allowed as the last component (e.g. no `:latest` or `@sha256:xyz`)
Examples of valid docker-repository-references:
"docker.io/myuser/myrepo"
"docker.io/nginx"
"docker.io/library/fedora"
"localhost:5000/myrepository"
Examples of invalid references:
"docker.io/nginx:latest"
"docker.io/myuser/myimage:v1.0"
"docker.io/myuser/myimage@sha256:f48c4cc192f4c3c6a069cb5cca6d0a9e34d6076ba7c214fd0cc3ca60e0af76bb"
## EXAMPLES
@@ -101,4 +127,3 @@ skopeo(1), skopeo-login(1), docker-login(1), containers-auth.json(5)
## AUTHORS
Zach Hill <zach@anchore.com>


@@ -4,7 +4,7 @@
skopeo\-login - Login to a container registry.
## SYNOPSIS
**skopeo login** [*options*] *registry*
**skopeo login** [*options*] _registry_
## DESCRIPTION
**skopeo login** logs into a specified registry server with the correct username
@@ -43,16 +43,18 @@ Return the logged-in user for the registry. Return error if no login is found.
Use certificates at *path* (\*.crt, \*.cert, \*.key) to connect to the registry.
Default certificates directory is _/etc/containers/certs.d_.
**--tls-verify**=*true|false*
Require HTTPS and verify certificates when contacting registries (default: true). If explicitly set to true,
then TLS verification will be used. If set to false, then TLS verification will not be used. If not specified,
TLS verification will be used unless the target registry is listed as an insecure registry in registries.conf.
**--help**, **-h**
Print usage statement
**--tls-verify**=_bool_
Require HTTPS and verify certificates when talking to the container registry or daemon. Default to registry.conf setting.
**--verbose**, **-v**
Write more detailed information to stdout
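A minimal sketch, assuming a local registry on port 5000 without TLS; the username is a placeholder and the password is prompted for interactively:
```sh
# Credentials are stored in ${XDG_RUNTIME_DIR}/containers/auth.json by default.
skopeo login --username testuser --tls-verify=false localhost:5000
```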
## EXAMPLES
```


@@ -4,7 +4,7 @@
skopeo\-logout - Logout of a container registry.
## SYNOPSIS
**skopeo logout** [*options*] *registry*
**skopeo logout** [*options*] _registry_
## DESCRIPTION
**skopeo logout** logs out of a specified registry server by deleting the cached credentials
@@ -29,6 +29,10 @@ Remove the cached credentials for all registries in the auth file
Print usage statement
**--tls-verify**=_bool_
Require HTTPS and verify certificates when talking to the container registry or daemon. Default to registry.conf setting.
## EXAMPLES
```


@@ -10,6 +10,12 @@ skopeo\-manifest\-digest - Compute a manifest digest for a manifest-file and wri
Compute a manifest digest of _manifest-file_ and write it to standard output.
## OPTIONS
**--help**, **-h**
Print usage statement
## EXAMPLES
```sh
@@ -23,4 +29,3 @@ skopeo(1)
## AUTHORS
Antonio Murdaca <runcom@redhat.com>, Miloslav Trmac <mitr@redhat.com>, Jhon Honce <jhonce@redhat.com>


@@ -4,11 +4,10 @@
skopeo\-standalone-sign - Debugging tool - Publish and sign an image in one step.
## SYNOPSIS
**skopeo standalone-sign** _manifest docker-reference key-fingerprint_ **--output**|**-o** _signature_
**skopeo standalone-sign** [*options*] _manifest_ _docker-reference_ _key-fingerprint_ **--output**|**-o** _signature_
## DESCRIPTION
This is primarily a debugging tool, or useful for special cases,
and usually should not be a part of your normal operational workflow; use `skopeo copy --sign-by` instead to publish and sign an image in one step.
This is primarily a debugging tool, useful for special cases, and usually should not be a part of your normal operational workflow; use `skopeo copy --sign-by` instead to publish and sign an image in one step.
_manifest_ Path to a file containing the image manifest
@@ -16,7 +15,19 @@ and usually should not be a part of your normal operational workflow; use `skope
_key-fingerprint_ Key identity to use for signing
**--output**|**-o** output file
## OPTIONS
**--help**, **-h**
Print usage statement
**--output**, **-o** _output file_
Write signature to _output file_.
**--passphrase-file**=_path_
The passphrase to use when signing with the key ID from `--sign-by`. Only the first line will be read. A passphrase stored in a file is of questionable security if other users can read this file. Do not use this option if at all avoidable.
## EXAMPLES
@@ -25,10 +36,13 @@ $ skopeo standalone-sign busybox-manifest.json registry.example.com/example/busy
$
```
## NOTES
This command is intended for use with local signatures e.g. OpenPGP ( other signature formats may be added in the future ), as per containers-signature(5). Furthermore, this command does **not** interact with the artifacts generated by Docker Content Trust (DCT). For more information, please see [containers-signature(5)](https://github.com/containers/image/blob/main/docs/containers-signature.5.md).
## SEE ALSO
skopeo(1), skopeo-copy(1), containers-signature(5)
## AUTHORS
Antonio Murdaca <runcom@redhat.com>, Miloslav Trmac <mitr@redhat.com>, Jhon Honce <jhonce@redhat.com>


@@ -4,11 +4,13 @@
skopeo\-standalone\-verify - Verify an image signature.
## SYNOPSIS
**skopeo standalone-verify** _manifest docker-reference key-fingerprint signature_
**skopeo standalone-verify** _manifest_ _docker-reference_ _key-fingerprint_ _signature_
## DESCRIPTION
Verify a signature using local files, digest will be printed on success.
Verify a signature using local files; the digest will be printed on success. This is primarily a debugging tool, useful for special cases,
and usually should not be a part of your normal operational workflow. Additionally, consider configuring a signature verification policy file,
as per containers-policy.json(5).
_manifest_ Path to a file containing the image manifest
@@ -20,6 +22,12 @@ Verify a signature using local files, digest will be printed on success.
**Note:** If you do use this, make sure that the image can not be changed at the source location between the times of its verification and use.
## OPTIONS
**--help**, **-h**
Print usage statement
## EXAMPLES
```sh
@@ -27,10 +35,13 @@ $ skopeo standalone-verify busybox-manifest.json registry.example.com/example/bu
Signature verified, digest sha256:20bf21ed457b390829cdbeec8795a7bea1626991fda603e0d01b4e7f60427e55
```
## NOTES
This command is intended for use with local signatures e.g. OpenPGP ( other signature formats may be added in the future ), as per containers-signature(5). Furthermore, this command does **not** interact with the artifacts generated by Docker Content Trust (DCT). For more information, please see [containers-signature(5)](https://github.com/containers/image/blob/main/docs/containers-signature.5.md).
## SEE ALSO
skopeo(1), containers-signature(5)
skopeo(1), containers-signature(5), containers-policy.json(5)
## AUTHORS
Antonio Murdaca <runcom@redhat.com>, Miloslav Trmac <mitr@redhat.com>, Jhon Honce <jhonce@redhat.com>


@@ -5,7 +5,7 @@ skopeo\-sync - Synchronize images between container registries and local directo
## SYNOPSIS
**skopeo sync** --src _transport_ --dest _transport_ _source_ _destination_
**skopeo sync** [*options*] --src _transport_ --dest _transport_ _source_ _destination_
## DESCRIPTION
Synchronize images between container registries and local directories.
@@ -32,7 +32,7 @@ When the `--scoped` option is specified, images are prefixed with the source ima
name can be stored at _destination_.
## OPTIONS
**--all**
**--all**, **-a**
If one of the images in __src__ refers to a list of images, instead of copying just the image which matches the current OS and
architecture (subject to the use of the global --override-os, --override-arch and --override-variant options), attempt to copy all of
the images in the list, and the list itself.
@@ -50,17 +50,25 @@ Path of the authentication file for the source registry. Uses path given by `--a
Path of the authentication file for the destination registry. Uses path given by `--authfile`, if not provided.
**--src** _transport_ Transport for the source repository.
**--src**, **-s** _transport_ Transport for the source repository.
**--dest** _transport_ Destination transport.
**--dest**, **-d** _transport_ Destination transport.
**--format, -f** _manifest-type_ Manifest Type (oci, v2s1, or v2s2) to use when syncing image(s) to a destination (default is manifest type of source).
**--format**, **-f** _manifest-type_ Manifest Type (oci, v2s1, or v2s2) to use when syncing image(s) to a destination (default is manifest type of source, with fallbacks).
**--help**, **-h**
Print usage statement.
**--scoped** Prefix images with the source image path, so that multiple images with the same name can be stored at _destination_.
**--preserve-digests** Preserve the digests during copying. Fail if the digest cannot be preserved.
**--remove-signatures** Do not copy signatures, if any, from _source-image_. This is necessary when copying a signed image to a destination which does not support signatures.
**--sign-by=**_key-id_ Add a signature using that key ID for an image name corresponding to _destination-image_.
**--sign-by**=_key-id_ Add a signature using that key ID for an image name corresponding to _destination-image_.
**--sign-passphrase-file**=_path_ The passphrase to use when signing with the key ID from `--sign-by`. Only the first line will be read. A passphrase stored in a file is of questionable security if other users can read this file. Do not use this option if at all avoidable.
**--src-creds** _username[:password]_ for accessing the source registry.
@@ -68,20 +76,41 @@ Path of the authentication file for the destination registry. Uses path given by
**--src-cert-dir** _path_ Use certificates (*.crt, *.cert, *.key) at _path_ to connect to the source registry or daemon.
**--src-no-creds** _bool-value_ Access the registry anonymously.
**--src-no-creds** Access the registry anonymously.
**--src-tls-verify** _bool-value_ Require HTTPS and verify certificates when talking to a container source registry or daemon (defaults to true).
**--src-tls-verify**=_bool_ Require HTTPS and verify certificates when talking to a container source registry or daemon. Default to source registry entry in registry.conf setting.
**--dest-cert-dir** _path_ Use certificates (*.crt, *.cert, *.key) at _path_ to connect to the destination registry or daemon.
**--dest-no-creds** _bool-value_ Access the registry anonymously.
**--dest-no-creds** Access the registry anonymously.
**--dest-tls-verify** _bool-value_ Require HTTPS and verify certificates when talking to a container destination registry or daemon (defaults to true).
**--dest-tls-verify**=_bool_ Require HTTPS and verify certificates when talking to a container destination registry or daemon. Default to destination registry entry in registry.conf setting.
**--src-registry-token** _Bearer token_ for accessing the source registry.
**--dest-registry-token** _Bearer token_ for accessing the destination registry.
**--retry-times** the number of times to retry, retry wait time will be exponentially increased based on the number of failed attempts.
**--keep-going**
If any errors occur during copying of images, those errors are logged and the process continues syncing the rest of the images, finally failing at the end.
**--src-username**
The username to access the source registry.
**--src-password**
The password to access the source registry.
**--dest-username**
The username to access the destination registry.
**--dest-password**
The password to access the destination registry.
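Putting a few of these options together, a sketch that mirrors a repository into a local directory; the registry and paths are placeholders:
```sh
# Copy every tag of the repository into /var/lib/images, prefixing the
# destination with the source path so repositories cannot collide.
skopeo sync --src docker --dest dir --scoped \
    registry.example.com/myorg/busybox /var/lib/images
```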
## EXAMPLES
### Synchronizing to a local directory


@@ -51,42 +51,64 @@ See [containers-transports(5)](https://github.com/containers/image/blob/master/d
## OPTIONS
**--command-timeout** _duration_ Timeout for the command execution.
**--command-timeout** _duration_
**--debug** enable debug output
Timeout for the command execution.
**--help**|**-h** Show help
**--debug**
**--insecure-policy** Adopt an insecure, permissive policy that allows anything. This obviates the need for a policy file.
enable debug output
**--override-arch** _arch_ Use _arch_ instead of the architecture of the machine for choosing images.
**--help**, **-h**
**--override-os** _OS_ Use _OS_ instead of the running OS for choosing images.
Show help
**--override-variant** _VARIANT_ Use _VARIANT_ instead of the running architecture variant for choosing images.
**--insecure-policy**
**--policy** _path-to-policy_ Path to a policy.json file to use for verifying signatures and deciding whether an image is trusted, overriding the default trust policy file.
Adopt an insecure, permissive policy that allows anything. This obviates the need for a policy file.
**--registries.d** _dir_ use registry configuration files in _dir_ (e.g. for container signature storage), overriding the default path.
**--override-arch** _arch_
**--tmpdir** _dir_ used to store temporary files. Defaults to /var/tmp.
Use _arch_ instead of the architecture of the machine for choosing images.
**--version**|**-v** print the version number
**--override-os** _os_
Use _OS_ instead of the running OS for choosing images.
**--override-variant** _variant_
Use _variant_ instead of the running architecture variant for choosing images.
**--policy** _path-to-policy_
Path to a policy.json file to use for verifying signatures and deciding whether an image is trusted, overriding the default trust policy file.
**--registries.d** _dir_
Use registry configuration files in _dir_ (e.g. for container signature storage), overriding the default path.
**--tmpdir** _dir_
Directory used to store temporary files. Defaults to /var/tmp.
**--version**, **-v**
Print the version number
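Because these are global options, they are placed before the subcommand; for instance (the image name is arbitrary):
```sh
# Inspect the linux/arm64 variant of a multi-arch image instead of the
# variant matching the local machine.
skopeo --override-os linux --override-arch arm64 \
    inspect docker://docker.io/library/alpine:latest
```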
## COMMANDS
| Command | Description |
| ----------------------------------------- | ------------------------------------------------------------------------------ |
| [skopeo-copy(1)](skopeo-copy.1.md) | Copy an image (manifest, filesystem layers, signatures) from one location to another. |
| [skopeo-delete(1)](skopeo-delete.1.md) | Mark image-name for deletion. |
| [skopeo-inspect(1)](skopeo-inspect.1.md) | Return low-level information about image-name in a registry. |
| [skopeo-list-tags(1)](skopeo-list-tags.1.md) | List the tags for the given transport/repository. |
| [skopeo-delete(1)](skopeo-delete.1.md) | Mark the _image-name_ for later deletion by the registry's garbage collector. |
| [skopeo-inspect(1)](skopeo-inspect.1.md) | Return low-level information about _image-name_ in a registry. |
| [skopeo-list-tags(1)](skopeo-list-tags.1.md) | List tags in the transport-specific image repository. |
| [skopeo-login(1)](skopeo-login.1.md) | Login to a container registry. |
| [skopeo-logout(1)](skopeo-logout.1.md) | Logout of a container registry. |
| [skopeo-manifest-digest(1)](skopeo-manifest-digest.1.md) | Compute a manifest digest of manifest-file and write it to standard output.|
| [skopeo-standalone-sign(1)](skopeo-standalone-sign.1.md) | Sign an image. |
| [skopeo-standalone-verify(1)](skopeo-standalone-verify.1.md)| Verify an image. |
| [skopeo-sync(1)](skopeo-sync.1.md)| Copy images from one or more repositories to a user specified destination. |
| [skopeo-manifest-digest(1)](skopeo-manifest-digest.1.md) | Compute a manifest digest for a manifest-file and write it to standard output. |
| [skopeo-standalone-sign(1)](skopeo-standalone-sign.1.md) | Debugging tool - Publish and sign an image in one step. |
| [skopeo-standalone-verify(1)](skopeo-standalone-verify.1.md)| Verify an image signature. |
| [skopeo-sync(1)](skopeo-sync.1.md)| Synchronize images between container registries and local directories. |
## FILES
**/etc/containers/policy.json**

go.mod

@@ -3,23 +3,23 @@ module github.com/containers/skopeo
go 1.12
require (
github.com/containers/common v0.38.4
github.com/containers/image/v5 v5.12.0
github.com/containers/ocicrypt v1.1.1
github.com/containers/storage v1.31.1
github.com/docker/docker v20.10.6+incompatible
github.com/containers/common v0.47.4
github.com/containers/image/v5 v5.19.1
github.com/containers/ocicrypt v1.1.2
github.com/containers/storage v1.38.2
github.com/docker/docker v20.10.12+incompatible
github.com/dsnet/compress v0.0.2-0.20210315054119-f66993602bf5 // indirect
github.com/go-check/check v0.0.0-20180628173108-788fd7840127
github.com/opencontainers/go-digest v1.0.0
github.com/opencontainers/image-spec v1.0.2-0.20190823105129-775207bd45b6
github.com/opencontainers/image-tools v0.0.0-20170926011501-6d941547fa1d
github.com/opencontainers/image-spec v1.0.3-0.20211202193544-a5463b7f9c84
github.com/opencontainers/image-tools v1.0.0-rc3
github.com/pkg/errors v0.9.1
github.com/prometheus/client_golang v1.11.1 // indirect
github.com/russross/blackfriday v2.0.0+incompatible // indirect
github.com/sirupsen/logrus v1.8.1
github.com/spf13/cobra v1.1.3
github.com/spf13/cobra v1.3.0
github.com/spf13/pflag v1.0.5
github.com/stretchr/testify v1.7.0
github.com/syndtr/gocapability v0.0.0-20200815063812-42c35b437635
go4.org v0.0.0-20190218023631-ce4c26f7be8e // indirect
gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c
gopkg.in/yaml.v2 v2.4.0
)

go.sum

File diff suppressed because it is too large.

hack/get_fqin.sh (new executable file)

@@ -0,0 +1,34 @@
#!/usr/bin/env bash
# This script is intended to be called from the Makefile. Its purpose
# is to automate correspondence between the environment used for local
# development and CI.
set -e
SCRIPT_FILEPATH=$(realpath "${BASH_SOURCE[0]}")
SCRIPT_DIRPATH=$(dirname "$SCRIPT_FILEPATH")
REPO_DIRPATH=$(realpath "$SCRIPT_DIRPATH/../")
# When running under CI, we already have the necessary information,
# simply provide it to the Makefile.
if [[ -n "$SKOPEO_CIDEV_CONTAINER_FQIN" ]]; then
echo "$SKOPEO_CIDEV_CONTAINER_FQIN"
exit 0
fi
if [[ -n $(command -v podman) ]]; then CONTAINER_RUNTIME=podman; fi
CONTAINER_RUNTIME=${CONTAINER_RUNTIME:-docker}
# Borrow the get_ci_vm container image since it's small, and
# by necessity contains a script that can accurately interpret
# env. var. values from any .cirrus.yml runtime context.
$CONTAINER_RUNTIME run --rm \
--security-opt label=disable \
-v $REPO_DIRPATH:/src:ro \
--entrypoint=/usr/share/automation/bin/cirrus-ci_env.py \
quay.io/libpod/get_ci_vm:latest \
--envs="Skopeo Test" /src/.cirrus.yml | \
egrep -m1 '^SKOPEO_CIDEV_CONTAINER_FQIN' | \
awk -F "=" -e '{print $2}' | \
tr -d \'\"
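
Outside of CI the script can also be exercised by hand; a sketch, assuming podman or docker is installed locally:
```sh
# Prints the fully-qualified CI development container image name; a specific
# runtime can be forced by pre-setting CONTAINER_RUNTIME.
CONTAINER_RUNTIME=podman ./hack/get_fqin.sh
```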

hack/libsubid_tag.sh (new executable file)

@@ -0,0 +1,19 @@
#!/usr/bin/env bash
if test $(${GO:-go} env GOOS) != "linux" ; then
exit 0
fi
tmpdir="$PWD/tmp.$RANDOM"
mkdir -p "$tmpdir"
trap 'rm -fr "$tmpdir"' EXIT
cc -o "$tmpdir"/libsubid_tag -l subid -x c - > /dev/null 2> /dev/null << EOF
#include <shadow/subid.h>
int main() {
struct subid_range *ranges = NULL;
get_subuid_ranges("root", &ranges);
free(ranges);
return 0;
}
EOF
if test $? -eq 0 ; then
echo libsubid
fi
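
The Makefile uses this probe to decide whether to enable the libsubid build tag; run by hand it is silent unless the library is usable:
```sh
# Emits the build tag "libsubid" only when the shadow subid header and
# library can be compiled and linked against; otherwise prints nothing.
./hack/libsubid_tag.sh
```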


@@ -2,15 +2,14 @@
set -e
# This script builds various binary from a checkout of the skopeo
# source code.
# source code. DO NOT CALL THIS SCRIPT DIRECTLY.
#
# Requirements:
# - The current directory should be a checkout of the skopeo source code
# (https://github.com/containers/skopeo). Whatever version is checked out
# will be built.
# - The script is intended to be run inside the docker container specified
# in the Dockerfile at the root of the source. In other words:
# DO NOT CALL THIS SCRIPT DIRECTLY.
# - The script is intended to be run inside the container specified
# in the output of hack/get_fqin.sh
# - The right way to call this script is to invoke "make" from
# your checkout of the skopeo repository.
# the Makefile will do a "docker build -t skopeo ." and then
@@ -23,21 +22,19 @@ export SKOPEO_PKG='github.com/containers/skopeo'
export SCRIPTDIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
export MAKEDIR="$SCRIPTDIR/make"
# We're a nice, sexy, little shell script, and people might try to run us;
# but really, they shouldn't. We want to be in a container!
# The magic value is defined inside our Dockerfile.
if [[ "$container_magic" != "85531765-346b-4316-bdb8-358e4cca9e5d" ]]; then
{
echo "# WARNING! I don't seem to be running in a Docker container."
echo "# The result of this command might be an incorrect build, and will not be"
echo "# officially supported."
echo "#"
echo "# Try this instead: make all"
echo "#"
} >&2
else
echo "# I appear to be running inside my designated container image, good!"
export SKOPEO_CONTAINER_TESTS=1
# Set this to 1 to enable installation/modification of environment/services
export SKOPEO_CONTAINER_TESTS=${SKOPEO_CONTAINER_TESTS:-0}
if [[ "$SKOPEO_CONTAINER_TESTS" == "0" ]] && [[ "$CI" != "true" ]]; then
(
echo "***************************************************************"
echo "WARNING: Executing tests directly on the local development"
echo " host is highly discouraged. Many important items"
echo " will be skipped. For manual execution, please utilize"
echo " the Makefile targets WITHOUT the '-local' suffix."
echo "***************************************************************"
) > /dev/stderr
sleep 5s
fi
echo
@@ -56,8 +53,6 @@ DEFAULT_BUNDLES=(
test-integration
)
TESTFLAGS+=" -test.timeout=15m"
# Go module support: set `-mod=vendor` to use the vendored sources
# See also the top-level Makefile.
mod_vendor=
@@ -66,16 +61,6 @@ if go help mod >/dev/null 2>&1; then
mod_vendor='-mod=vendor'
fi
# If $TESTFLAGS is set in the environment, it is passed as extra arguments to 'go test'.
# You can use this to select certain tests to run, eg.
#
# TESTFLAGS='-test.run ^TestBuild$' ./hack/make.sh test-unit
#
# For integration-cli test, we use [gocheck](https://labix.org/gocheck), if you want
# to run certain tests on your local host, you should run with command:
#
# TESTFLAGS='-check.f DockerSuite.TestBuild*' ./hack/make.sh binary test-integration-cli
#
go_test_dir() {
dir=$1
(


@@ -5,7 +5,7 @@ if [ -z "$VALIDATE_UPSTREAM" ]; then
# are running more than one validate bundlescript
VALIDATE_REPO='https://github.com/containers/skopeo.git'
VALIDATE_BRANCH='master'
VALIDATE_BRANCH='main'
if [ "$TRAVIS" = 'true' -a "$TRAVIS_PULL_REQUEST" != 'false' ]; then
VALIDATE_REPO="https://github.com/${TRAVIS_REPO_SLUG}.git"


@@ -2,13 +2,11 @@
set -e
bundle_test_integration() {
TESTFLAGS="$TESTFLAGS -check.v"
go_test_dir ./integration
}
# subshell so that we can export PATH without breaking other things
(
make bin/skopeo ${BUILDTAGS:+BUILDTAGS="$BUILDTAGS"}
make PREFIX=/usr install
bundle_test_integration
) 2>&1


@@ -11,7 +11,6 @@ sed -i \
/etc/containers/storage.conf
# Build skopeo, install into /usr/bin
make bin/skopeo ${BUILDTAGS:+BUILDTAGS="$BUILDTAGS"}
make PREFIX=/usr install
# Run tests


@@ -1,6 +1,6 @@
#!/bin/bash
errors=$(go vet $mod_vendor $(go list $mod_vendor -e ./...))
errors=$(go vet -tags="${BUILDTAGS}" $mod_vendor $(go list $mod_vendor -e ./...))
if [ -z "$errors" ]; then
echo 'Congratulations! All Go source files have been vetted.'

hack/man-page-checker (new executable file)

@@ -0,0 +1,150 @@
#!/usr/bin/env bash
#
# man-page-checker - validate and cross-reference man page names
#
# This is the script that cross-checks BETWEEN MAN PAGES. It is not the
# script that cross-checks that each option in skopeo foo --help is listed
# in skopeo-foo.1.md and vice-versa; that one is xref-helpmsgs-manpages.
#
verbose=
for i; do
case "$i" in
-v|--verbose) verbose=verbose ;;
esac
done
die() {
echo "$(basename $0): $*" >&2
exit 1
}
cd $(dirname $0)/../docs || die "Please run me from top-level skopeo dir"
rc=0
# Pass 1: cross-check file names with NAME section
#
# for a given skopeo-foo.1.md, the NAME should be 'skopeo-foo'
for md in *.1.md;do
# Read the first line after '## NAME'
name=$(egrep -A1 '^## NAME' $md|tail -1|awk '{print $1}' | tr -d \\\\)
expect=$(basename $md .1.md)
if [ "$name" != "$expect" ]; then
echo
printf "Inconsistent program NAME in %s:\n" $md
printf " NAME= %s (expected: %s)\n" $name $expect
rc=1
fi
done
# Pass 2: compare descriptions.
#
# Make sure the descriptive text in skopeo-foo.1.md matches the one
# in the table in skopeo.1.md.
for md in $(ls -1 *-*.1.md);do
desc=$(egrep -A1 '^## NAME' $md|tail -1|sed -E -e 's/^skopeo[^[:space:]]+ - //')
# Find the descriptive text in the main skopeo man page.
parent=skopeo.1.md
parent_desc=$(grep $md $parent | awk -F'|' '{print $3}' | sed -E -e 's/^[[:space:]]+//' -e 's/[[:space:]]+$//')
if [ "$desc" != "$parent_desc" ]; then
echo
printf "Inconsistent subcommand descriptions:\n"
printf " %-32s = '%s'\n" $md "$desc"
printf " %-32s = '%s'\n" $parent "$parent_desc"
printf "Please ensure that the NAME section of $md\n"
printf "matches the subcommand description in $parent\n"
rc=1
fi
done
# Helper function: compares man page synopsis vs --help usage message
function compare_usage() {
local cmd="$1"
local from_man="$2"
# Run 'cmd --help', grab the line immediately after 'Usage:'
local help_output=$(../bin/$cmd --help)
local from_help=$(echo "$help_output" | grep -A1 '^Usage:' | tail -1)
# strip off command name from both
from_man=$(sed -E -e "s/\*\*$cmd\*\*[[:space:]]*//" <<<"$from_man")
from_help=$(sed -E -e "s/^[[:space:]]*$cmd[[:space:]]*//" <<<"$from_help")
# man page lists 'foo [*options*]', help msg shows 'foo [command options]'.
# Make sure if one has it, the other does too.
if expr "$from_man" : "\[\*options\*\]" >/dev/null; then
if expr "$from_help" : "\[command options\]" >/dev/null; then
:
else
echo "WARNING: $cmd: man page shows '[*options*]', help does not show [command options]"
rc=1
fi
elif expr "$from_help" : "\[command options\]" >/dev/null; then
echo "WARNING: $cmd: --help shows [command options], man page does not show [*options*]"
rc=1
fi
# Strip off options and flags; start comparing arguments
from_man=$(sed -E -e 's/^\[\*options\*\][[:space:]]*//' <<<"$from_man")
from_help=$(sed -E -e 's/^\[command options\][[:space:]]*//' <<<"$from_help")
# Constant strings in man page are '**foo**', in --help are 'foo'.
from_man=$(sed -E -e 's/\*\*([^*]+)\*\*/\1/g' <<<"$from_man")
# Args in man page are '_foo_', in --help are 'FOO'. Convert all to
# UPCASE simply because it stands out better to the eye.
from_man=$(sed -E -e 's/_([a-z-]+)_/\U\1/g' <<<"$from_man")
# Compare man-page and --help usage strings. Skip 'skopeo' itself,
# because the man page includes '[global options]' which we don't grok.
if [[ "$from_man" != "$from_help" && "$cmd" != "skopeo" ]]; then
printf "%-25s man='%s' help='%s'\n" "$cmd:" "$from_man" "$from_help"
rc=1
fi
}
# Pass 3: compare synopses.
#
# Make sure the SYNOPSIS line in skopeo-foo.1.md reads '**skopeo foo** ...'
for md in *.1.md;do
synopsis=$(egrep -A1 '^#* SYNOPSIS' $md|tail -1)
# Command name must be bracketed by double asterisks; options and
# arguments are bracketed by single ones.
# E.g. '**skopeo copy** [*options*] _..._'
# Get the command name, and confirm that it matches the md file name.
cmd=$(echo "$synopsis" | sed -E -e 's/^\*\*([^*]+)\*\*.*/\1/' | tr -d \*)
# Use sed, not tr, so we only replace the first dash: we want
# skopeo-list-tags -> "skopeo list-tags", not "skopeo list tags"
md_nodash=$(basename "$md" .1.md | sed -e 's/-/ /')
if [ "$cmd" != "$md_nodash" ]; then
echo
printf "Inconsistent program name in SYNOPSIS in %s:\n" $md
printf " SYNOPSIS = %s (expected: '%s')\n" "$cmd" "$md_nodash"
rc=1
fi
# The convention is to use UPPER CASE in 'skopeo foo --help',
# but *lower case bracketed by asterisks* in the man page
if expr "$synopsis" : ".*[A-Z]" >/dev/null; then
echo
printf "Inconsistent capitalization in SYNOPSIS in %s\n" $md
printf " '%s' should not contain upper-case characters\n" "$synopsis"
rc=1
fi
# (for debugging, and getting a sense of standard conventions)
#printf " %-32s ------ '%s'\n" $md "$synopsis"
# If bin/skopeo is available, run "cmd --help" and compare Usage
# messages. This is complicated, so do it in a helper function.
compare_usage "$md_nodash" "$synopsis"
done
exit $rc
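
A sketch of a typical run from the repository root, with the binary built first so the SYNOPSIS lines can be compared against the live `--help` output:
```sh
make bin/skopeo
hack/man-page-checker --verbose
```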

hack/xref-helpmsgs-manpages (new executable file)

@@ -0,0 +1,277 @@
#!/usr/bin/perl
#
# xref-helpmsgs-manpages - cross-reference --help options against man pages
#
package LibPod::CI::XrefHelpmsgsManpages;
use v5.14;
use utf8;
use strict;
use warnings;
(our $ME = $0) =~ s|.*/||;
our $VERSION = '0.1';
# For debugging, show data structures using DumpTree($var)
#use Data::TreeDumper; $Data::TreeDumper::Displayaddress = 0;
# unbuffer output
$| = 1;
###############################################################################
# BEGIN user-customizable section
# Path to skopeo executable
my $Default_Skopeo = './bin/skopeo';
my $SKOPEO = $ENV{SKOPEO} || $Default_Skopeo;
# Path to all doc files (markdown)
my $Docs_Path = 'docs';
# Global error count
my $Errs = 0;
# END user-customizable section
###############################################################################
###############################################################################
# BEGIN boilerplate args checking, usage messages
sub usage {
print <<"END_USAGE";
Usage: $ME [OPTIONS]
$ME recursively runs 'skopeo --help' against
all subcommands; and recursively reads skopeo-*.1.md files
in $Docs_Path, then cross-references that each --help
option is listed in the appropriate man page and vice-versa.
$ME invokes '\$SKOPEO' (default: $Default_Skopeo).
Exit status is zero if no inconsistencies found, one otherwise
OPTIONS:
-v, --verbose show verbose progress indicators
-n, --dry-run make no actual changes
--help display this message
--version display program name and version
END_USAGE
exit;
}
# Command-line options. Note that this operates directly on @ARGV !
our $debug = 0;
our $verbose = 0;
sub handle_opts {
use Getopt::Long;
GetOptions(
'debug!' => \$debug,
'verbose|v' => \$verbose,
help => \&usage,
version => sub { print "$ME version $VERSION\n"; exit 0 },
) or die "Try `$ME --help' for help\n";
}
# END boilerplate args checking, usage messages
###############################################################################
############################## CODE BEGINS HERE ###############################
# The term is "modulino".
__PACKAGE__->main() unless caller();
# Main code.
sub main {
# Note that we operate directly on @ARGV, not on function parameters.
# This is deliberate: it's because Getopt::Long only operates on @ARGV
# and there's no clean way to make it use @_.
handle_opts(); # will set package globals
# Fetch command-line arguments. Barf if too many.
die "$ME: Too many arguments; try $ME --help\n" if @ARGV;
my $help = skopeo_help();
my $man = skopeo_man('skopeo');
xref_by_help($help, $man);
xref_by_man($help, $man);
exit !!$Errs;
}
###############################################################################
# BEGIN cross-referencing
##################
# xref_by_help # Find keys in '--help' but not in man
##################
sub xref_by_help {
my ($help, $man, @subcommand) = @_;
for my $k (sort keys %$help) {
if (exists $man->{$k}) {
if (ref $help->{$k}) {
xref_by_help($help->{$k}, $man->{$k}, @subcommand, $k);
}
# Otherwise, non-ref is leaf node such as a --option
}
else {
my $man = $man->{_path} || 'man';
warn "$ME: skopeo @subcommand --help lists $k, but $k not in $man\n";
++$Errs;
}
}
}
#################
# xref_by_man # Find keys in man pages but not in --help
#################
#
# In an ideal world we could share the functionality in one function; but
# there are just too many special cases in man pages.
#
sub xref_by_man {
my ($help, $man, @subcommand) = @_;
# FIXME: this generates way too much output
for my $k (grep { $_ ne '_path' } sort keys %$man) {
if (exists $help->{$k}) {
if (ref $man->{$k}) {
xref_by_man($help->{$k}, $man->{$k}, @subcommand, $k);
}
}
elsif ($k ne '--help' && $k ne '-h') {
my $man = $man->{_path} || 'man';
warn "$ME: skopeo @subcommand: $k in $man, but not --help\n";
++$Errs;
}
}
}
# END cross-referencing
###############################################################################
# BEGIN data gathering
#################
# skopeo_help # Parse output of 'skopeo [subcommand] --help'
#################
sub skopeo_help {
my %help;
open my $fh, '-|', $SKOPEO, @_, '--help'
or die "$ME: Cannot fork: $!\n";
my $section = '';
while (my $line = <$fh>) {
# Cobra is blessedly consistent in its output:
# Usage: ...
# Available Commands:
# ....
# Options:
# ....
#
# Start by identifying the section we're in...
if ($line =~ /^Available\s+(Commands):/) {
$section = lc $1;
}
elsif ($line =~ /^(Flags):/) {
$section = lc $1;
}
# ...then track commands and options. For subcommands, recurse.
elsif ($section eq 'commands') {
if ($line =~ /^\s{1,4}(\S+)\s/) {
my $subcommand = $1;
print "> skopeo @_ $subcommand\n" if $debug;
$help{$subcommand} = skopeo_help(@_, $subcommand)
unless $subcommand eq 'help'; # 'help' not in man
}
}
elsif ($section eq 'flags') {
# Handle '--foo' or '-f, --foo'
if ($line =~ /^\s{1,10}(--\S+)\s/) {
print "> skopeo @_ $1\n" if $debug;
$help{$1} = 1;
}
elsif ($line =~ /^\s{1,10}(-\S),\s+(--\S+)\s/) {
print "> skopeo @_ $1, $2\n" if $debug;
$help{$1} = $help{$2} = 1;
}
}
}
close $fh
or die "$ME: Error running 'skopeo @_ --help'\n";
return \%help;
}
################
# skopeo_man # Parse contents of skopeo-*.1.md
################
sub skopeo_man {
my $command = shift;
my $manpath = "$Docs_Path/$command.1.md";
print "** $manpath \n" if $debug;
my %man = (_path => $manpath);
open my $fh, '<', $manpath
or die "$ME: Cannot read $manpath: $!\n";
my $section = '';
my @most_recent_flags;
my $previous_subcmd = '';
while (my $line = <$fh>) {
chomp $line;
next unless $line; # skip empty lines
# .md files designate sections with leading double hash
if ($line =~ /^##\s*OPTIONS/) {
$section = 'flags';
}
elsif ($line =~ /^\#\#\s+(SUB)?COMMANDS/) {
$section = 'commands';
}
elsif ($line =~ /^\#\#[^#]/) {
$section = '';
}
# This will be a table containing subcommand names, links to man pages.
elsif ($section eq 'commands') {
# In skopeo.1.md
if ($line =~ /^\|\s*\[skopeo-(\S+?)\(\d\)\]/) {
# $1 will be changed by recursion _*BEFORE*_ left-hand assignment
my $subcmd = $1;
$man{$subcmd} = skopeo_man("skopeo-$1");
}
}
# Options should always be of the form '**-f**' or '**\-\-flag**',
# possibly separated by comma-space.
elsif ($section eq 'flags') {
# If option has long and short form, long must come first.
# This is a while-loop because there may be multiple long
# option names (not in skopeo ATM, but leave the possibility open)
while ($line =~ s/^\*\*(--[a-z0-9.-]+)\*\*(=\*[a-zA-Z0-9-]+\*)?(,\s+)?//g) {
$man{$1} = 1;
}
# Short form
if ($line =~ s/^\*\*(-[a-zA-Z0-9.])\*\*(=\*[a-zA-Z0-9-]+\*)?//g) {
$man{$1} = 1;
}
}
}
close $fh;
return \%man;
}
# END data gathering
###############################################################################
1;
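
A sketch of a typical invocation from the repository root; the SKOPEO environment variable can point at an alternative binary:
```sh
# Cross-references every `skopeo <subcommand> --help` option against
# docs/skopeo-*.1.md and vice versa; exits non-zero on any mismatch.
SKOPEO=./bin/skopeo hack/xref-helpmsgs-manpages --verbose
```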


@@ -1,4 +1,4 @@
# Installing from packages
# Installing Skopeo
## Distribution Packages
`skopeo` may already be packaged in your distribution.
@@ -15,29 +15,6 @@ sudo dnf -y install skopeo
sudo dnf -y install skopeo
```
Newer Skopeo releases may be available on the repositories provided by the
Kubic project. Beware, these may not be suitable for production environments.
on CentOS 8:
```sh
sudo dnf -y module disable container-tools
sudo dnf -y install 'dnf-command(copr)'
sudo dnf -y copr enable rhcontainerbot/container-selinux
sudo curl -L -o /etc/yum.repos.d/devel:kubic:libcontainers:stable.repo https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/CentOS_8/devel:kubic:libcontainers:stable.repo
sudo dnf -y install skopeo
```
on CentOS 8 Stream:
```sh
sudo dnf -y module disable container-tools
sudo dnf -y install 'dnf-command(copr)'
sudo dnf -y copr enable rhcontainerbot/container-selinux
sudo curl -L -o /etc/yum.repos.d/devel:kubic:libcontainers:stable.repo https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/CentOS_8_Stream/devel:kubic:libcontainers:stable.repo
sudo dnf -y install skopeo
```
### RHEL/CentOS ≤ 7.x
```sh
@@ -69,12 +46,11 @@ $ nix-env -i skopeo
### Debian
The skopeo package is available in
the [Bullseye (testing) branch](https://packages.debian.org/bullseye/skopeo), which
will be the next stable release (Debian 11) as well as Debian Unstable/Sid.
The skopeo package is available on [Bullseye](https://packages.debian.org/bullseye/skopeo),
and Debian Testing and Unstable.
```bash
# Debian Testing/Bullseye or Unstable/Sid
# Debian Bullseye, Testing or Unstable/Sid
sudo apt-get update
sudo apt-get -y install skopeo
```
@@ -97,17 +73,8 @@ sudo apt-get -y update
sudo apt-get -y install skopeo
```
If you would prefer newer (though not as well-tested) packages,
the [Kubic project](https://build.opensuse.org/package/show/devel:kubic:libcontainers:stable/skopeo)
provides packages for active Ubuntu releases 20.04 and newer (it should also work with direct derivatives like Pop!\_OS).
Checkout the [Kubic project page](https://build.opensuse.org/package/show/devel:kubic:libcontainers:stable/skopeo)
for a list of supported Ubuntu version and
architecture combinations. **NOTE:** The command `sudo apt-get -y upgrade`
maybe required in some cases if Skopeo cannot be installed without it.
The build sources for the Kubic packages can be found [here](https://gitlab.com/rhcontainerbot/skopeo/-/tree/debian/debian).
CAUTION: On Ubuntu 20.10 and newer, we highly recommend you use Buildah, Podman and Skopeo ONLY from EITHER the Kubic repo
OR the official Ubuntu repos. Mixing and matching may lead to unpredictable situations including installation conflicts.
The [Kubic project](https://build.opensuse.org/package/show/devel:kubic:libcontainers:stable/skopeo)
provides packages for Ubuntu 20.04 (it should also work with direct derivatives like Pop!\_OS).
```bash
. /etc/os-release
@@ -118,6 +85,25 @@ sudo apt-get -y upgrade
sudo apt-get -y install skopeo
```
### Windows
Skopeo has not yet been packaged for Windows. There is an [open feature
request](https://github.com/containers/skopeo/issues/715) and contributions are
always welcome.
## Container Images
Skopeo container images are available at `quay.io/skopeo/stable:latest`.
For example,
```bash
podman run docker://quay.io/skopeo/stable:latest copy --help
```
[Read more](./contrib/skopeoimage/README.md).
## Building from Source
Otherwise, read on for building and installing it from source:
@@ -126,8 +112,6 @@ To build the `skopeo` binary you need at least Go 1.12.
There are two ways to build skopeo: in a container, or locally without a
container. Choose the one which better matches your needs and environment.
## Building from Source
### Building without a container
Building without a container requires a bit more manual work and setup in your
@@ -146,7 +130,7 @@ sudo dnf install gpgme-devel libassuan-devel btrfs-progs-devel device-mapper-dev
```bash
# Ubuntu (`libbtrfs-dev` requires Ubuntu 18.10 and above):
sudo apt install libgpgme-dev libassuan-dev libbtrfs-dev libdevmapper-dev
sudo apt install libgpgme-dev libassuan-dev libbtrfs-dev libdevmapper-dev pkg-config
```
```bash
@@ -168,6 +152,12 @@ cd $GOPATH/src/github.com/containers/skopeo && make bin/skopeo
By default the `make` command (make all) will build bin/skopeo and the documentation locally.
Building of documentation requires `go-md2man`. On systems that do not have this tool, the
document generation can be skipped by passing `DISABLE_DOCS=1`:
```
DISABLE_DOCS=1 make
```
### Building documentation
To build the manual you will need go-md2man.
@@ -182,6 +172,11 @@ sudo apt-get install go-md2man
sudo dnf install go-md2man
```
```
# MacOS:
brew install go-md2man
```
Then
```bash
@@ -208,3 +203,41 @@ Finally, after the binary and documentation is built:
```bash
sudo make install
```
### Building a static binary
There have been efforts in the past to produce and maintain static builds, but the maintainers prefer to run Skopeo using distro packages or within containers. This is because static builds of Skopeo tend to be unreliable and functionally restricted. Specifically:
- Some features of Skopeo depend on non-Go libraries like `libgpgme` and `libdevmapper`.
- Generating static Go binaries uses native Go libraries, which don't support e.g. `.local` or LDAP-based name resolution.
That being said, if you would like to build Skopeo statically, you might be able to do it by combining all the following steps.
- Export environment variable `CGO_ENABLED=0` (disabling CGO causes Go to prefer native libraries when possible, instead of dynamically linking against system libraries).
- Set the `BUILDTAGS=containers_image_openpgp` Make variable (this removes the dependency on `libgpgme` and its companion libraries).
- Clear the `GO_DYN_FLAGS` Make variable (which otherwise seems to force the creation of a dynamic executable).
The following command implements these steps to produce a static binary in the `bin` subdirectory of the repository:
```bash
docker run -v $PWD:/src -w /src -e CGO_ENABLED=0 golang \
make BUILDTAGS=containers_image_openpgp GO_DYN_FLAGS=
```
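If your host already has a suitable Go toolchain and the build dependencies installed, the same Make variables can be applied directly, without the build container (a sketch of an equivalent local invocation, not an officially supported build):
```bash
CGO_ENABLED=0 make BUILDTAGS=containers_image_openpgp GO_DYN_FLAGS=
```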
Keep in mind that the resulting binary is unsupported and might crash randomly. Only use if you know what you're doing!
For more information, history, and context about static builds, check the following issues:
- [#391] - Consider distributing statically built binaries as part of release
- [#669] - Static build fails with segmentation violation
- [#670] - Fixing static binary build using container
- [#755] - Remove static and in-container targets from Makefile
- [#932] - Add nix derivation for static builds
- [#1336] - Unable to run skopeo on Fedora 30 (due to dyn lib dependency)
- [#1478] - Publish binary releases to GitHub (request+discussion)
[#391]: https://github.com/containers/skopeo/issues/391
[#669]: https://github.com/containers/skopeo/issues/669
[#670]: https://github.com/containers/skopeo/issues/670
[#755]: https://github.com/containers/skopeo/issues/755
[#932]: https://github.com/containers/skopeo/issues/932
[#1336]: https://github.com/containers/skopeo/issues/1336
[#1478]: https://github.com/containers/skopeo/issues/1478


@@ -1,7 +1,7 @@
package main
import (
"github.com/go-check/check"
"gopkg.in/check.v1"
)
const blockedRegistriesConf = "./fixtures/blocked-registries.conf"


@@ -6,7 +6,7 @@ import (
"testing"
"github.com/containers/skopeo/version"
"github.com/go-check/check"
"gopkg.in/check.v1"
)
const (
@@ -101,7 +101,7 @@ func (s *SkopeoSuite) TestCopyWithLocalAuth(c *check.C) {
assertSkopeoSucceeds(c, wanted, "login", "--tls-verify=false", "--username="+s.regV2WithAuth.username, "--password="+s.regV2WithAuth.password, s.regV2WithAuth.url)
// copy to private registry using local authentication
imageName := fmt.Sprintf("docker://%s/busybox:mine", s.regV2WithAuth.url)
assertSkopeoSucceeds(c, "", "copy", "--dest-tls-verify=false", "docker://docker.io/library/busybox:latest", imageName)
assertSkopeoSucceeds(c, "", "copy", "--dest-tls-verify=false", testFQIN+":latest", imageName)
// inspect from private registry
assertSkopeoSucceeds(c, "", "inspect", "--tls-verify=false", imageName)
// logout from the registry


@@ -17,10 +17,10 @@ import (
"github.com/containers/image/v5/manifest"
"github.com/containers/image/v5/signature"
"github.com/containers/image/v5/types"
"github.com/go-check/check"
digest "github.com/opencontainers/go-digest"
imgspecv1 "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/opencontainers/image-tools/image"
"gopkg.in/check.v1"
)
func init() {
@@ -123,10 +123,10 @@ func (s *CopySuite) TestCopyAllWithManifestListRoundTrip(c *check.C) {
dir2, err := ioutil.TempDir("", "copy-all-manifest-list-dir")
c.Assert(err, check.IsNil)
defer os.RemoveAll(dir2)
assertSkopeoSucceeds(c, "", "copy", "--all", knownListImage, "oci:"+oci1)
assertSkopeoSucceeds(c, "", "copy", "--all", "oci:"+oci1, "dir:"+dir1)
assertSkopeoSucceeds(c, "", "copy", "--all", "dir:"+dir1, "oci:"+oci2)
assertSkopeoSucceeds(c, "", "copy", "--all", "oci:"+oci2, "dir:"+dir2)
assertSkopeoSucceeds(c, "", "copy", "--multi-arch=all", knownListImage, "oci:"+oci1)
assertSkopeoSucceeds(c, "", "copy", "--multi-arch=all", "oci:"+oci1, "dir:"+dir1)
assertSkopeoSucceeds(c, "", "copy", "--multi-arch=all", "dir:"+dir1, "oci:"+oci2)
assertSkopeoSucceeds(c, "", "copy", "--multi-arch=all", "oci:"+oci2, "dir:"+dir2)
assertDirImagesAreEqual(c, dir1, dir2)
out := combinedOutputOfCommand(c, "diff", "-urN", oci1, oci2)
c.Assert(out, check.Equals, "")
@@ -145,15 +145,30 @@ func (s *CopySuite) TestCopyAllWithManifestListConverge(c *check.C) {
dir2, err := ioutil.TempDir("", "copy-all-manifest-list-dir")
c.Assert(err, check.IsNil)
defer os.RemoveAll(dir2)
assertSkopeoSucceeds(c, "", "copy", "--all", knownListImage, "oci:"+oci1)
assertSkopeoSucceeds(c, "", "copy", "--all", "oci:"+oci1, "dir:"+dir1)
assertSkopeoSucceeds(c, "", "copy", "--all", "--format", "oci", knownListImage, "dir:"+dir2)
assertSkopeoSucceeds(c, "", "copy", "--all", "dir:"+dir2, "oci:"+oci2)
assertSkopeoSucceeds(c, "", "copy", "--multi-arch=all", knownListImage, "oci:"+oci1)
assertSkopeoSucceeds(c, "", "copy", "--multi-arch=all", "oci:"+oci1, "dir:"+dir1)
assertSkopeoSucceeds(c, "", "copy", "--multi-arch=all", "--format", "oci", knownListImage, "dir:"+dir2)
assertSkopeoSucceeds(c, "", "copy", "--multi-arch=all", "dir:"+dir2, "oci:"+oci2)
assertDirImagesAreEqual(c, dir1, dir2)
out := combinedOutputOfCommand(c, "diff", "-urN", oci1, oci2)
c.Assert(out, check.Equals, "")
}
func (s *CopySuite) TestCopyNoneWithManifestList(c *check.C) {
dir1, err := ioutil.TempDir("", "copy-all-manifest-list-dir")
c.Assert(err, check.IsNil)
defer os.RemoveAll(dir1)
assertSkopeoSucceeds(c, "", "copy", "--multi-arch=index-only", knownListImage, "dir:"+dir1)
manifestPath := filepath.Join(dir1, "manifest.json")
readManifest, err := ioutil.ReadFile(manifestPath)
c.Assert(err, check.IsNil)
mimeType := manifest.GuessMIMEType(readManifest)
c.Assert(mimeType, check.Equals, "application/vnd.docker.distribution.manifest.list.v2+json")
out := combinedOutputOfCommand(c, "ls", "-1", dir1)
c.Assert(out, check.Equals, "manifest.json\nversion\n")
}
func (s *CopySuite) TestCopyWithManifestListConverge(c *check.C) {
oci1, err := ioutil.TempDir("", "copy-all-manifest-list-oci")
c.Assert(err, check.IsNil)
@@ -168,9 +183,9 @@ func (s *CopySuite) TestCopyWithManifestListConverge(c *check.C) {
c.Assert(err, check.IsNil)
defer os.RemoveAll(dir2)
assertSkopeoSucceeds(c, "", "copy", knownListImage, "oci:"+oci1)
assertSkopeoSucceeds(c, "", "copy", "--all", "oci:"+oci1, "dir:"+dir1)
assertSkopeoSucceeds(c, "", "copy", "--multi-arch=all", "oci:"+oci1, "dir:"+dir1)
assertSkopeoSucceeds(c, "", "copy", "--format", "oci", knownListImage, "dir:"+dir2)
assertSkopeoSucceeds(c, "", "copy", "--all", "dir:"+dir2, "oci:"+oci2)
assertSkopeoSucceeds(c, "", "copy", "--multi-arch=all", "dir:"+dir2, "oci:"+oci2)
assertDirImagesAreEqual(c, dir1, dir2)
out := combinedOutputOfCommand(c, "diff", "-urN", oci1, oci2)
c.Assert(out, check.Equals, "")
@@ -181,7 +196,7 @@ func (s *CopySuite) TestCopyAllWithManifestListStorageFails(c *check.C) {
c.Assert(err, check.IsNil)
defer os.RemoveAll(storage)
storage = fmt.Sprintf("[vfs@%s/root+%s/runroot]", storage, storage)
assertSkopeoFails(c, `.*destination transport .* does not support copying multiple images as a group.*`, "copy", "--all", knownListImage, "containers-storage:"+storage+"test")
assertSkopeoFails(c, `.*destination transport .* does not support copying multiple images as a group.*`, "copy", "--multi-arch=all", knownListImage, "containers-storage:"+storage+"test")
}
func (s *CopySuite) TestCopyWithManifestListStorage(c *check.C) {
@@ -239,7 +254,7 @@ func (s *CopySuite) TestCopyWithManifestListDigest(c *check.C) {
c.Assert(err, check.IsNil)
digest := manifestDigest.String()
assertSkopeoSucceeds(c, "", "copy", knownListImage+"@"+digest, "dir:"+dir1)
assertSkopeoSucceeds(c, "", "copy", "--all", knownListImage+"@"+digest, "dir:"+dir2)
assertSkopeoSucceeds(c, "", "copy", "--multi-arch=all", knownListImage+"@"+digest, "dir:"+dir2)
assertSkopeoSucceeds(c, "", "copy", "dir:"+dir1, "oci:"+oci1)
assertSkopeoSucceeds(c, "", "copy", "dir:"+dir2, "oci:"+oci2)
out := combinedOutputOfCommand(c, "diff", "-urN", oci1, oci2)
@@ -318,8 +333,8 @@ func (s *CopySuite) TestCopyWithManifestListStorageDigestMultipleArchesBothUseLi
c.Assert(err, check.IsNil)
assertSkopeoSucceeds(c, "", "--override-arch=amd64", "copy", knownListImage+"@"+digest, "containers-storage:"+storage+"test@"+digest)
assertSkopeoSucceeds(c, "", "--override-arch=arm64", "copy", knownListImage+"@"+digest, "containers-storage:"+storage+"test@"+digest)
assertSkopeoFails(c, `.*error reading manifest for image instance.*does not exist.*`, "--override-arch=amd64", "inspect", "containers-storage:"+storage+"test@"+digest)
assertSkopeoFails(c, `.*error reading manifest for image instance.*does not exist.*`, "--override-arch=amd64", "inspect", "--config", "containers-storage:"+storage+"test@"+digest)
assertSkopeoFails(c, `.*reading manifest for image instance.*does not exist.*`, "--override-arch=amd64", "inspect", "containers-storage:"+storage+"test@"+digest)
assertSkopeoFails(c, `.*reading manifest for image instance.*does not exist.*`, "--override-arch=amd64", "inspect", "--config", "containers-storage:"+storage+"test@"+digest)
i2 := combinedOutputOfCommand(c, skopeoBinary, "--override-arch=arm64", "inspect", "--config", "containers-storage:"+storage+"test@"+digest)
var image2 imgspecv1.Image
err = json.Unmarshal([]byte(i2), &image2)
@@ -354,8 +369,8 @@ func (s *CopySuite) TestCopyWithManifestListStorageDigestMultipleArchesFirstUses
err = json.Unmarshal([]byte(i2), &image2)
c.Assert(err, check.IsNil)
c.Assert(image2.Architecture, check.Equals, "amd64")
assertSkopeoFails(c, `.*error reading manifest for image instance.*does not exist.*`, "--override-arch=arm64", "inspect", "containers-storage:"+storage+"test@"+digest)
assertSkopeoFails(c, `.*error reading manifest for image instance.*does not exist.*`, "--override-arch=arm64", "inspect", "--config", "containers-storage:"+storage+"test@"+digest)
assertSkopeoFails(c, `.*reading manifest for image instance.*does not exist.*`, "--override-arch=arm64", "inspect", "containers-storage:"+storage+"test@"+digest)
assertSkopeoFails(c, `.*reading manifest for image instance.*does not exist.*`, "--override-arch=arm64", "inspect", "--config", "containers-storage:"+storage+"test@"+digest)
i3 := combinedOutputOfCommand(c, skopeoBinary, "--override-arch=arm64", "inspect", "--config", "containers-storage:"+storage+"test@"+arm64Instance.String())
var image3 imgspecv1.Image
err = json.Unmarshal([]byte(i3), &image3)
@@ -385,8 +400,8 @@ func (s *CopySuite) TestCopyWithManifestListStorageDigestMultipleArchesSecondUse
err = json.Unmarshal([]byte(i1), &image1)
c.Assert(err, check.IsNil)
c.Assert(image1.Architecture, check.Equals, "amd64")
assertSkopeoFails(c, `.*error reading manifest for image instance.*does not exist.*`, "--override-arch=amd64", "inspect", "containers-storage:"+storage+"test@"+digest)
assertSkopeoFails(c, `.*error reading manifest for image instance.*does not exist.*`, "--override-arch=amd64", "inspect", "--config", "containers-storage:"+storage+"test@"+digest)
assertSkopeoFails(c, `.*reading manifest for image instance.*does not exist.*`, "--override-arch=amd64", "inspect", "containers-storage:"+storage+"test@"+digest)
assertSkopeoFails(c, `.*reading manifest for image instance.*does not exist.*`, "--override-arch=amd64", "inspect", "--config", "containers-storage:"+storage+"test@"+digest)
i2 := combinedOutputOfCommand(c, skopeoBinary, "--override-arch=arm64", "inspect", "--config", "containers-storage:"+storage+"test@"+digest)
var image2 imgspecv1.Image
err = json.Unmarshal([]byte(i2), &image2)
@@ -417,7 +432,7 @@ func (s *CopySuite) TestCopyWithManifestListStorageDigestMultipleArchesThirdUses
assertSkopeoSucceeds(c, "", "--override-arch=amd64", "copy", knownListImage+"@"+amd64Instance.String(), "containers-storage:"+storage+"test@"+amd64Instance.String())
assertSkopeoSucceeds(c, "", "--override-arch=amd64", "copy", knownListImage+"@"+digest, "containers-storage:"+storage+"test@"+digest)
assertSkopeoSucceeds(c, "", "--override-arch=arm64", "copy", knownListImage+"@"+digest, "containers-storage:"+storage+"test@"+digest)
assertSkopeoFails(c, `.*error reading manifest for image instance.*does not exist.*`, "--override-arch=amd64", "inspect", "--config", "containers-storage:"+storage+"test@"+digest)
assertSkopeoFails(c, `.*reading manifest for image instance.*does not exist.*`, "--override-arch=amd64", "inspect", "--config", "containers-storage:"+storage+"test@"+digest)
i1 := combinedOutputOfCommand(c, skopeoBinary, "--override-arch=amd64", "inspect", "--config", "containers-storage:"+storage+"test@"+amd64Instance.String())
var image1 imgspecv1.Image
err = json.Unmarshal([]byte(i1), &image1)
@@ -452,7 +467,7 @@ func (s *CopySuite) TestCopyWithManifestListStorageDigestMultipleArchesTagAndDig
c.Assert(err, check.IsNil)
assertSkopeoSucceeds(c, "", "--override-arch=amd64", "copy", knownListImage, "containers-storage:"+storage+"test:latest")
assertSkopeoSucceeds(c, "", "--override-arch=arm64", "copy", knownListImage+"@"+digest, "containers-storage:"+storage+"test@"+digest)
assertSkopeoFails(c, `.*error reading manifest for image instance.*does not exist.*`, "--override-arch=amd64", "inspect", "--config", "containers-storage:"+storage+"test@"+digest)
assertSkopeoFails(c, `.*reading manifest for image instance.*does not exist.*`, "--override-arch=amd64", "inspect", "--config", "containers-storage:"+storage+"test@"+digest)
i1 := combinedOutputOfCommand(c, skopeoBinary, "--override-arch=arm64", "inspect", "--config", "containers-storage:"+storage+"test:latest")
var image1 imgspecv1.Image
err = json.Unmarshal([]byte(i1), &image1)
@@ -506,12 +521,12 @@ func (s *CopySuite) TestCopySimpleAtomicRegistry(c *check.C) {
// FIXME: It would be nice to use one of the local Docker registries instead of needing an Internet connection.
// "pull": docker: → dir:
assertSkopeoSucceeds(c, "", "copy", "docker://estesp/busybox:amd64", "dir:"+dir1)
assertSkopeoSucceeds(c, "", "copy", testFQIN64, "dir:"+dir1)
// "push": dir: → atomic:
assertSkopeoSucceeds(c, "", "--tls-verify=false", "--debug", "copy", "dir:"+dir1, "atomic:localhost:5000/myns/unsigned:unsigned")
// The result of pushing and pulling is an equivalent image, except for schema1 embedded names.
assertSkopeoSucceeds(c, "", "--tls-verify=false", "copy", "atomic:localhost:5000/myns/unsigned:unsigned", "dir:"+dir2)
assertSchema1DirImagesAreEqualExceptNames(c, dir1, "estesp/busybox:amd64", dir2, "myns/unsigned:unsigned")
assertSchema1DirImagesAreEqualExceptNames(c, dir1, "libpod/busybox:amd64", dir2, "myns/unsigned:unsigned")
}
// The most basic (skopeo copy) use:
@@ -568,6 +583,7 @@ func (s *CopySuite) TestCopyEncryption(c *check.C) {
c.Assert(err, check.IsNil)
defer os.RemoveAll(keysDir)
undecryptedImgDir, err := ioutil.TempDir("", "copy-5")
c.Assert(err, check.IsNil)
defer os.RemoveAll(undecryptedImgDir)
multiLayerImageDir, err := ioutil.TempDir("", "copy-6")
c.Assert(err, check.IsNil)
@@ -602,7 +618,7 @@ func (s *CopySuite) TestCopyEncryption(c *check.C) {
"oci:"+encryptedImgDir+":encrypted", "oci:"+decryptedImgDir+":decrypted")
// Copy a standard busybox image locally
assertSkopeoSucceeds(c, "", "copy", "docker://busybox:1.31.1", "oci:"+originalImageDir+":latest")
assertSkopeoSucceeds(c, "", "copy", testFQIN+":1.30.1", "oci:"+originalImageDir+":latest")
// Encrypt the image
assertSkopeoSucceeds(c, "", "copy", "--encryption-key",
@@ -633,7 +649,7 @@ func (s *CopySuite) TestCopyEncryption(c *check.C) {
matchLayerBlobBinaryType(c, decryptedImgDir+"/blobs/sha256", "application/x-gzip", 1)
// Copy a standard multi layer nginx image locally
assertSkopeoSucceeds(c, "", "copy", "docker://nginx:1.17.8", "oci:"+multiLayerImageDir+":latest")
assertSkopeoSucceeds(c, "", "copy", testFQINMultiLayer, "oci:"+multiLayerImageDir+":latest")
// Partially encrypt the image
assertSkopeoSucceeds(c, "", "copy", "--encryption-key", "jwe:"+keysDir+"/public.key",
@@ -738,11 +754,11 @@ func (s *CopySuite) TestCopyStreaming(c *check.C) {
// FIXME: It would be nice to use one of the local Docker registries instead of needing an Internet connection.
// streaming: docker: → atomic:
assertSkopeoSucceeds(c, "", "--tls-verify=false", "--debug", "copy", "docker://estesp/busybox:amd64", "atomic:localhost:5000/myns/unsigned:streaming")
assertSkopeoSucceeds(c, "", "--tls-verify=false", "--debug", "copy", testFQIN64, "atomic:localhost:5000/myns/unsigned:streaming")
// Compare (copies of) the original and the copy:
assertSkopeoSucceeds(c, "", "copy", "docker://estesp/busybox:amd64", "dir:"+dir1)
assertSkopeoSucceeds(c, "", "copy", testFQIN64, "dir:"+dir1)
assertSkopeoSucceeds(c, "", "--tls-verify=false", "copy", "atomic:localhost:5000/myns/unsigned:streaming", "dir:"+dir2)
assertSchema1DirImagesAreEqualExceptNames(c, dir1, "estesp/busybox:amd64", dir2, "myns/unsigned:streaming")
assertSchema1DirImagesAreEqualExceptNames(c, dir1, "libpod/busybox:amd64", dir2, "myns/unsigned:streaming")
// FIXME: Also check pushing to docker://
}
@@ -762,7 +778,7 @@ func (s *CopySuite) TestCopyOCIRoundTrip(c *check.C) {
defer os.RemoveAll(oci2)
// Docker -> OCI
assertSkopeoSucceeds(c, "", "--tls-verify=false", "--debug", "copy", "docker://busybox", "oci:"+oci1+":latest")
assertSkopeoSucceeds(c, "", "--tls-verify=false", "--debug", "copy", testFQIN, "oci:"+oci1+":latest")
// OCI -> Docker
assertSkopeoSucceeds(c, "", "--tls-verify=false", "--debug", "copy", "oci:"+oci1+":latest", ourRegistry+"original/busybox:oci_copy")
// Docker -> OCI
@@ -813,16 +829,16 @@ func (s *CopySuite) TestCopySignatures(c *check.C) {
defer os.Remove(policy)
// type: reject
assertSkopeoFails(c, ".*Source image rejected: Running image docker://busybox:latest is rejected by policy.*",
"--policy", policy, "copy", "docker://busybox:latest", dirDest)
assertSkopeoFails(c, fmt.Sprintf(".*Source image rejected: Running image %s:latest is rejected by policy.*", testFQIN),
"--policy", policy, "copy", testFQIN+":latest", dirDest)
// type: insecureAcceptAnything
assertSkopeoSucceeds(c, "", "--policy", policy, "copy", "docker://quay.io/openshift/origin-hello-openshift", dirDest)
// type: signedBy
// Sign the images
assertSkopeoSucceeds(c, "", "--tls-verify=false", "copy", "--sign-by", "personal@example.com", "docker://busybox:1.26", "atomic:localhost:5006/myns/personal:personal")
assertSkopeoSucceeds(c, "", "--tls-verify=false", "copy", "--sign-by", "official@example.com", "docker://busybox:1.26.1", "atomic:localhost:5006/myns/official:official")
assertSkopeoSucceeds(c, "", "--tls-verify=false", "copy", "--sign-by", "personal@example.com", testFQIN+":1.26", "atomic:localhost:5006/myns/personal:personal")
assertSkopeoSucceeds(c, "", "--tls-verify=false", "copy", "--sign-by", "official@example.com", testFQIN+":1.26.1", "atomic:localhost:5006/myns/official:official")
// Verify that we can pull them
assertSkopeoSucceeds(c, "", "--tls-verify=false", "--policy", policy, "copy", "atomic:localhost:5006/myns/personal:personal", dirDest)
assertSkopeoSucceeds(c, "", "--tls-verify=false", "--policy", policy, "copy", "atomic:localhost:5006/myns/official:official", dirDest)
@@ -876,8 +892,8 @@ func (s *CopySuite) TestCopyDirSignatures(c *check.C) {
defer os.Remove(policy)
// Get some images.
assertSkopeoSucceeds(c, "", "copy", "docker://estesp/busybox:armfh", topDirDest+"/dir1")
assertSkopeoSucceeds(c, "", "copy", "docker://estesp/busybox:s390x", topDirDest+"/dir2")
assertSkopeoSucceeds(c, "", "copy", testFQIN+":armfh", topDirDest+"/dir1")
assertSkopeoSucceeds(c, "", "copy", testFQIN+":s390x", topDirDest+"/dir2")
// Sign the images. By copying from a topDirDest/dirN, also test that non-/restricted paths
// use the dir:"" default of insecureAcceptAnything.
@@ -993,7 +1009,7 @@ func (s *CopySuite) TestCopyDockerSigstore(c *check.C) {
c.Assert(err, check.IsNil)
// Get an image to work with. Also verifies that we can use Docker repositories with no sigstore configured.
assertSkopeoSucceeds(c, "", "--tls-verify=false", "--registries.d", registriesDir, "copy", "docker://busybox", ourRegistry+"original/busybox")
assertSkopeoSucceeds(c, "", "--tls-verify=false", "--registries.d", registriesDir, "copy", testFQIN, ourRegistry+"original/busybox")
// Pulling an unsigned image fails.
assertSkopeoFails(c, ".*Source image rejected: A signature was required, but no signature exists.*",
"--tls-verify=false", "--policy", policy, "--registries.d", registriesDir, "copy", ourRegistry+"original/busybox", dirDest)
@@ -1047,7 +1063,7 @@ func (s *CopySuite) TestCopyAtomicExtension(c *check.C) {
defer os.Remove(policy)
// Get an image to work with to an atomic: destination. Also verifies that we can use Docker repositories without X-Registry-Supports-Signatures
assertSkopeoSucceeds(c, "", "--tls-verify=false", "--registries.d", registriesDir, "copy", "docker://busybox", "atomic:localhost:5000/myns/extension:unsigned")
assertSkopeoSucceeds(c, "", "--tls-verify=false", "--registries.d", registriesDir, "copy", testFQIN, "atomic:localhost:5000/myns/extension:unsigned")
// Pulling an unsigned image using atomic: fails.
assertSkopeoFails(c, ".*Source image rejected: A signature was required, but no signature exists.*",
"--tls-verify=false", "--policy", policy,
@@ -1071,7 +1087,7 @@ func (s *CopySuite) TestCopyAtomicExtension(c *check.C) {
// Get another image (different so that they don't share signatures, and sign it using docker://)
assertSkopeoSucceeds(c, "", "--tls-verify=false", "--registries.d", registriesDir,
"copy", "--sign-by", "personal@example.com", "docker://estesp/busybox:ppc64le", "docker://localhost:5000/myns/extension:extension")
"copy", "--sign-by", "personal@example.com", testFQIN+":ppc64le", "docker://localhost:5000/myns/extension:extension")
c.Logf("%s", combinedOutputOfCommand(c, "oc", "get", "istag", "extension:extension", "-o", "json"))
// Pulling the image using atomic: succeeds.
assertSkopeoSucceeds(c, "", "--debug", "--tls-verify=false", "--policy", policy,
@@ -1092,12 +1108,10 @@ func copyWithSignedIdentity(c *check.C, src, dest, signedIdentity, signBy, regis
signingDir := filepath.Join(topDir, "signing-temp")
assertSkopeoSucceeds(c, "", "copy", "--src-tls-verify=false", src, "dir:"+signingDir)
// Unknown error in Travis: https://github.com/containers/skopeo/issues/1093
// c.Logf("%s", combinedOutputOfCommand(c, "ls", "-laR", signingDir))
c.Logf("%s", combinedOutputOfCommand(c, "ls", "-laR", signingDir))
assertSkopeoSucceeds(c, "^$", "standalone-sign", "-o", filepath.Join(signingDir, "signature-1"),
filepath.Join(signingDir, "manifest.json"), signedIdentity, signBy)
// Unknown error in Travis: https://github.com/containers/skopeo/issues/1093
// c.Logf("%s", combinedOutputOfCommand(c, "ls", "-laR", signingDir))
c.Logf("%s", combinedOutputOfCommand(c, "ls", "-laR", signingDir))
assertSkopeoSucceeds(c, "", "--registries.d", registriesDir, "copy", "--dest-tls-verify=false", "dir:"+signingDir, dest)
}
@@ -1127,7 +1141,7 @@ func (s *CopySuite) TestCopyVerifyingMirroredSignatures(c *check.C) {
// So, make sure to never create a signature that could be considered valid in a different part of the test (i.e. don't reuse tags).
// Get an image to work with.
assertSkopeoSucceeds(c, "", "copy", "--dest-tls-verify=false", "docker://busybox", regPrefix+"primary:unsigned")
assertSkopeoSucceeds(c, "", "copy", "--dest-tls-verify=false", testFQIN, regPrefix+"primary:unsigned")
// Verify that unsigned images are rejected
assertSkopeoFails(c, ".*Source image rejected: A signature was required, but no signature exists.*",
"--policy", policy, "--registries.d", registriesDir, "--registries-conf", "fixtures/registries.conf", "copy", "--src-tls-verify=false", regPrefix+"primary:unsigned", dirDest)
@@ -1170,11 +1184,11 @@ func (s *CopySuite) TestCopyVerifyingMirroredSignatures(c *check.C) {
assertSkopeoSucceeds(c, "", "--policy", policy, "--registries.d", registriesDir, "--registries-conf", "fixtures/registries.conf", "copy", "--src-tls-verify=false", regPrefix+"remap:remapped", dirDest)
// To be extra clear about the semantics, verify that the signedPrefix (primary) location never exists
// and only the remapped prefix (mirror) is accessed.
assertSkopeoFails(c, ".*Error initializing source docker://localhost:5006/myns/mirroring-primary:remapped:.*manifest unknown: manifest unknown.*", "--policy", policy, "--registries.d", registriesDir, "--registries-conf", "fixtures/registries.conf", "copy", "--src-tls-verify=false", regPrefix+"primary:remapped", dirDest)
assertSkopeoFails(c, ".*initializing source docker://localhost:5006/myns/mirroring-primary:remapped:.*manifest unknown: manifest unknown.*", "--policy", policy, "--registries.d", registriesDir, "--registries-conf", "fixtures/registries.conf", "copy", "--src-tls-verify=false", regPrefix+"primary:remapped", dirDest)
}
func (s *SkopeoSuite) TestCopySrcWithAuth(c *check.C) {
assertSkopeoSucceeds(c, "", "--tls-verify=false", "copy", "--dest-creds=testuser:testpassword", "docker://busybox", fmt.Sprintf("docker://%s/busybox:latest", s.regV2WithAuth.url))
assertSkopeoSucceeds(c, "", "--tls-verify=false", "copy", "--dest-creds=testuser:testpassword", testFQIN, fmt.Sprintf("docker://%s/busybox:latest", s.regV2WithAuth.url))
dir1, err := ioutil.TempDir("", "copy-1")
c.Assert(err, check.IsNil)
defer os.RemoveAll(dir1)
@@ -1182,21 +1196,23 @@ func (s *SkopeoSuite) TestCopySrcWithAuth(c *check.C) {
}
func (s *SkopeoSuite) TestCopyDestWithAuth(c *check.C) {
assertSkopeoSucceeds(c, "", "--tls-verify=false", "copy", "--dest-creds=testuser:testpassword", "docker://busybox", fmt.Sprintf("docker://%s/busybox:latest", s.regV2WithAuth.url))
assertSkopeoSucceeds(c, "", "--tls-verify=false", "copy", "--dest-creds=testuser:testpassword", testFQIN, fmt.Sprintf("docker://%s/busybox:latest", s.regV2WithAuth.url))
}
func (s *SkopeoSuite) TestCopySrcAndDestWithAuth(c *check.C) {
assertSkopeoSucceeds(c, "", "--tls-verify=false", "copy", "--dest-creds=testuser:testpassword", "docker://busybox", fmt.Sprintf("docker://%s/busybox:latest", s.regV2WithAuth.url))
assertSkopeoSucceeds(c, "", "--tls-verify=false", "copy", "--dest-creds=testuser:testpassword", testFQIN, fmt.Sprintf("docker://%s/busybox:latest", s.regV2WithAuth.url))
assertSkopeoSucceeds(c, "", "--tls-verify=false", "copy", "--src-creds=testuser:testpassword", "--dest-creds=testuser:testpassword", fmt.Sprintf("docker://%s/busybox:latest", s.regV2WithAuth.url), fmt.Sprintf("docker://%s/test:auth", s.regV2WithAuth.url))
}
func (s *CopySuite) TestCopyNoPanicOnHTTPResponseWithoutTLSVerifyFalse(c *check.C) {
topDir, err := ioutil.TempDir("", "no-panic-on-https-response-without-tls-verify-false")
c.Assert(err, check.IsNil)
defer os.RemoveAll(topDir)
const ourRegistry = "docker://" + v2DockerRegistryURL + "/"
// dir:test isn't created beforehand just because we already know this could
// just fail when evaluating the src
assertSkopeoFails(c, ".*server gave HTTP response to HTTPS client.*",
"copy", ourRegistry+"foobar", "dir:test")
"copy", ourRegistry+"foobar", "dir:"+topDir)
}
func (s *CopySuite) TestCopySchemaConversion(c *check.C) {
@@ -1216,7 +1232,7 @@ func (s *CopySuite) TestCopyManifestConversion(c *check.C) {
// oci to v2s1 and vice-versa not supported yet
// get v2s2 manifest type
assertSkopeoSucceeds(c, "", "copy", "docker://busybox", "dir:"+srcDir)
assertSkopeoSucceeds(c, "", "copy", testFQIN, "dir:"+srcDir)
verifyManifestMIMEType(c, srcDir, manifest.DockerV2Schema2MediaType)
// convert from v2s2 to oci
assertSkopeoSucceeds(c, "", "copy", "--format=oci", "dir:"+srcDir, "dir:"+destDir1)
@@ -1232,6 +1248,15 @@ func (s *CopySuite) TestCopyManifestConversion(c *check.C) {
verifyManifestMIMEType(c, destDir2, manifest.DockerV2Schema2MediaType)
}
func (s *CopySuite) TestCopyPreserveDigests(c *check.C) {
topDir, err := ioutil.TempDir("", "preserve-digests")
c.Assert(err, check.IsNil)
defer os.RemoveAll(topDir)
assertSkopeoSucceeds(c, "", "copy", knownListImage, "--multi-arch=all", "--preserve-digests", "dir:"+topDir)
assertSkopeoFails(c, ".*Instructed to preserve digests.*", "copy", knownListImage, "--multi-arch=all", "--preserve-digests", "--format=oci", "dir:"+topDir)
}
func (s *CopySuite) testCopySchemaConversionRegistries(c *check.C, schema1Registry, schema2Registry string) {
topDir, err := ioutil.TempDir("", "schema-conversion")
c.Assert(err, check.IsNil)
@@ -1246,7 +1271,7 @@ func (s *CopySuite) testCopySchemaConversionRegistries(c *check.C, schema1Regist
// Ensure we are working with a schema2 image.
// dir: accepts any manifest format, i.e. this makes …/input2 a schema2 source which cannot be asked to produce schema1 like ordinary docker: registries can.
assertSkopeoSucceeds(c, "", "copy", "docker://busybox", "dir:"+input2Dir)
assertSkopeoSucceeds(c, "", "copy", testFQIN, "dir:"+input2Dir)
verifyManifestMIMEType(c, input2Dir, manifest.DockerV2Schema2MediaType)
// 2→2 (the "f2t2" in tag means "from 2 to 2")
assertSkopeoSucceeds(c, "", "copy", "--dest-tls-verify=false", "dir:"+input2Dir, schema2Registry+":f2t2")
@@ -1280,8 +1305,10 @@ func (s *SkopeoSuite) TestFailureCopySrcWithMirrorsUnavailable(c *check.C) {
dir, err := ioutil.TempDir("", "copy-mirror")
c.Assert(err, check.IsNil)
assertSkopeoFails(c, ".*no such host.*", "--registries-conf="+regConfFixture, "copy",
"docker://invalid.invalid/busybox", "dir:"+dir)
// .invalid domains are, per RFC 6761, supposed to result in NXDOMAIN.
// With systemd-resolved (used only via NSS?), we instead seem to get “Temporary failure in name resolution”
assertSkopeoFails(c, ".*(no such host|Temporary failure in name resolution).*",
"--registries-conf="+regConfFixture, "copy", "docker://invalid.invalid/busybox", "dir:"+dir)
}
func (s *SkopeoSuite) TestSuccessCopySrcWithMirrorAndPrefix(c *check.C) {
@@ -1296,8 +1323,10 @@ func (s *SkopeoSuite) TestFailureCopySrcWithMirrorAndPrefixUnavailable(c *check.
dir, err := ioutil.TempDir("", "copy-mirror")
c.Assert(err, check.IsNil)
assertSkopeoFails(c, ".*no such host.*", "--registries-conf="+regConfFixture, "copy",
"docker://gcr.invalid/wrong/prefix/busybox", "dir:"+dir)
// .invalid domains are, per RFC 6761, supposed to result in NXDOMAIN.
// With systemd-resolved (used only via NSS?), we instead seem to get “Temporary failure in name resolution”
assertSkopeoFails(c, ".*(no such host|Temporary failure in name resolution).*",
"--registries-conf="+regConfFixture, "copy", "docker://gcr.invalid/wrong/prefix/busybox", "dir:"+dir)
}
func (s *CopySuite) TestCopyFailsWhenReferenceIsInvalid(c *check.C) {


@@ -2,6 +2,7 @@ package main
import (
"bufio"
"context"
"encoding/base64"
"fmt"
"io/ioutil"
@@ -9,9 +10,10 @@ import (
"os/exec"
"path/filepath"
"strings"
"time"
"github.com/docker/docker/pkg/homedir"
"github.com/go-check/check"
"gopkg.in/check.v1"
)
var adminKUBECONFIG = map[string]string{
@@ -62,6 +64,7 @@ func (cluster *openshiftCluster) startMaster(c *check.C) {
cmd := cluster.clusterCmd(nil, "openshift", "start", "master")
cluster.processes = append(cluster.processes, cmd)
stdout, err := cmd.StdoutPipe()
c.Assert(err, check.IsNil)
// Send both to the same pipe. This might cause the two streams to be mixed up,
// but logging actually goes only to stderr - this primarily ensures we log any
// unexpected output to stdout.
@@ -108,6 +111,8 @@ func (cluster *openshiftCluster) startMaster(c *check.C) {
gotPortCheck := false
gotLogCheck := false
ctx, cancel := context.WithTimeout(context.Background(), 10*time.Minute)
defer cancel()
for !gotPortCheck || !gotLogCheck {
c.Logf("Waiting for master")
select {
@@ -120,6 +125,8 @@ func (cluster *openshiftCluster) startMaster(c *check.C) {
c.Fatal("log check done, success message not found")
}
gotLogCheck = true
case <-ctx.Done():
c.Fatalf("Timed out waiting for master: %v", ctx.Err())
}
}
c.Logf("OK, master started!")
@@ -165,8 +172,14 @@ func (cluster *openshiftCluster) startRegistryProcess(c *check.C, port int, conf
terminatePortCheck <- true
}()
c.Logf("Waiting for registry to start")
<-portOpen
c.Logf("OK, Registry port open")
ctx, cancel := context.WithTimeout(context.Background(), 1*time.Minute)
defer cancel()
select {
case <-portOpen:
c.Logf("OK, Registry port open")
case <-ctx.Done():
c.Fatalf("Timed out waiting for registry to start: %v", ctx.Err())
}
return cmd
}


@@ -1,3 +1,4 @@
//go:build openshift_shell
// +build openshift_shell
package main
@@ -6,7 +7,7 @@ import (
"os"
"os/exec"
"github.com/go-check/check"
"gopkg.in/check.v1"
)
/*
@@ -20,7 +21,7 @@ to start a container, then within the container:
An example of what can be done within the container:
cd ..; make bin/skopeo PREFIX=/usr install
./skopeo --tls-verify=false copy --sign-by=personal@example.com docker://busybox:latest atomic:localhost:5000/myns/personal:personal
./skopeo --tls-verify=false copy --sign-by=personal@example.com docker://quay.io/libpod/busybox:latest atomic:localhost:5000/myns/personal:personal
oc get istag personal:personal -o json
curl -L -v 'http://localhost:5000/v2/'
cat ~/.docker/config.json

integration/procutils.go (new file, 12 lines)

@@ -0,0 +1,12 @@
//go:build !linux
// +build !linux
package main
import (
"os/exec"
)
// cmdLifecycleToParentIfPossible tries to exit if the parent process exits (only works on Linux)
func cmdLifecycleToParentIfPossible(c *exec.Cmd) {
}


@@ -0,0 +1,14 @@
package main
import (
"os/exec"
"syscall"
)
// cmdLifecycleToParentIfPossible is a thin wrapper around prctl(PR_SET_PDEATHSIG)
// on Linux.
func cmdLifecycleToParentIfPossible(c *exec.Cmd) {
c.SysProcAttr = &syscall.SysProcAttr{
Pdeathsig: syscall.SIGTERM,
}
}

integration/proxy_test.go (new file, 307 lines)

@@ -0,0 +1,307 @@
package main
import (
"encoding/json"
"fmt"
"io/ioutil"
"net"
"os"
"os/exec"
"strings"
"syscall"
"time"
"gopkg.in/check.v1"
"github.com/containers/image/v5/manifest"
imgspecv1 "github.com/opencontainers/image-spec/specs-go/v1"
)
// This image is known to be x86_64 only right now
const knownNotManifestListedImage_x8664 = "docker://quay.io/coreos/11bot"
const expectedProxySemverMajor = "0.2"
// request is copied from proxy.go
// We intentionally copy to ensure that we catch any unexpected "API" changes
// in the JSON.
type request struct {
// Method is the name of the function
Method string `json:"method"`
// Args is the arguments (parsed inside the function)
Args []interface{} `json:"args"`
}
// reply is copied from proxy.go
type reply struct {
// Success is true if and only if the call succeeded.
Success bool `json:"success"`
// Value is an arbitrary value (or values, as array/map) returned from the call.
Value interface{} `json:"value"`
// PipeID is an index into open pipes, and should be passed to FinishPipe
PipeID uint32 `json:"pipeid"`
// Error should be non-empty if Success == false
Error string `json:"error"`
}
// maxMsgSize is also copied from proxy.go
const maxMsgSize = 32 * 1024
type proxy struct {
c *net.UnixConn
}
type pipefd struct {
// id is the remote identifier "pipeid"
id uint
fd *os.File
}
func (self *proxy) call(method string, args []interface{}) (rval interface{}, fd *pipefd, err error) {
req := request{
Method: method,
Args: args,
}
reqbuf, err := json.Marshal(&req)
if err != nil {
return
}
n, err := self.c.Write(reqbuf)
if err != nil {
return
}
if n != len(reqbuf) {
err = fmt.Errorf("short write during call of %d bytes", n)
return
}
oob := make([]byte, syscall.CmsgSpace(1))
replybuf := make([]byte, maxMsgSize)
n, oobn, _, _, err := self.c.ReadMsgUnix(replybuf, oob)
if err != nil {
err = fmt.Errorf("reading reply: %v", err)
return
}
var reply reply
err = json.Unmarshal(replybuf[0:n], &reply)
if err != nil {
err = fmt.Errorf("Failed to parse reply: %w", err)
return
}
if !reply.Success {
err = fmt.Errorf("remote error: %s", reply.Error)
return
}
if reply.PipeID > 0 {
var scms []syscall.SocketControlMessage
scms, err = syscall.ParseSocketControlMessage(oob[:oobn])
if err != nil {
err = fmt.Errorf("failed to parse control message: %v", err)
return
}
if len(scms) != 1 {
err = fmt.Errorf("Expected 1 received fd, found %d", len(scms))
return
}
var fds []int
fds, err = syscall.ParseUnixRights(&scms[0])
if err != nil {
err = fmt.Errorf("failed to parse unix rights: %v", err)
return
}
fd = &pipefd{
fd: os.NewFile(uintptr(fds[0]), "replyfd"),
id: uint(reply.PipeID),
}
}
rval = reply.Value
return
}
func (self *proxy) callNoFd(method string, args []interface{}) (rval interface{}, err error) {
var fd *pipefd
rval, fd, err = self.call(method, args)
if err != nil {
return
}
if fd != nil {
err = fmt.Errorf("Unexpected fd from method %s", method)
return
}
return rval, nil
}
func (self *proxy) callReadAllBytes(method string, args []interface{}) (rval interface{}, buf []byte, err error) {
var fd *pipefd
rval, fd, err = self.call(method, args)
if err != nil {
return
}
if fd == nil {
err = fmt.Errorf("Expected fd from method %s", method)
return
}
fetchchan := make(chan byteFetch)
go func() {
manifestBytes, err := ioutil.ReadAll(fd.fd)
fetchchan <- byteFetch{
content: manifestBytes,
err: err,
}
}()
_, err = self.callNoFd("FinishPipe", []interface{}{fd.id})
if err != nil {
return
}
select {
case fetchRes := <-fetchchan:
err = fetchRes.err
if err != nil {
return
}
buf = fetchRes.content
case <-time.After(5 * time.Minute):
err = fmt.Errorf("timed out during proxy fetch")
}
return
}
func newProxy() (*proxy, error) {
fds, err := syscall.Socketpair(syscall.AF_LOCAL, syscall.SOCK_SEQPACKET, 0)
if err != nil {
return nil, err
}
myfd := os.NewFile(uintptr(fds[0]), "myfd")
defer myfd.Close()
theirfd := os.NewFile(uintptr(fds[1]), "theirfd")
defer theirfd.Close()
mysock, err := net.FileConn(myfd)
if err != nil {
return nil, err
}
// Note ExtraFiles starts at 3
proc := exec.Command("skopeo", "experimental-image-proxy", "--sockfd", "3")
proc.Stderr = os.Stderr
cmdLifecycleToParentIfPossible(proc)
proc.ExtraFiles = append(proc.ExtraFiles, theirfd)
if err = proc.Start(); err != nil {
return nil, err
}
p := &proxy{
c: mysock.(*net.UnixConn),
}
v, err := p.callNoFd("Initialize", nil)
if err != nil {
return nil, err
}
semver, ok := v.(string)
if !ok {
return nil, fmt.Errorf("proxy Initialize: Unexpected value %T", v)
}
if !strings.HasPrefix(semver, expectedProxySemverMajor) {
return nil, fmt.Errorf("Unexpected semver %s", semver)
}
return p, nil
}
func init() {
check.Suite(&ProxySuite{})
}
type ProxySuite struct {
}
func (s *ProxySuite) SetUpSuite(c *check.C) {
}
func (s *ProxySuite) TearDownSuite(c *check.C) {
}
type byteFetch struct {
content []byte
err error
}
func runTestGetManifestAndConfig(p *proxy, img string) error {
v, err := p.callNoFd("OpenImage", []interface{}{img})
if err != nil {
return err
}
imgidv, ok := v.(float64)
if !ok {
return fmt.Errorf("OpenImage return value is %T", v)
}
imgid := uint32(imgidv)
v, manifestBytes, err := p.callReadAllBytes("GetManifest", []interface{}{imgid})
if err != nil {
return err
}
_, err = manifest.OCI1FromManifest(manifestBytes)
if err != nil {
return err
}
v, configBytes, err := p.callReadAllBytes("GetFullConfig", []interface{}{imgid})
if err != nil {
return err
}
var config imgspecv1.Image
err = json.Unmarshal(configBytes, &config)
if err != nil {
return err
}
// Validate that the image config seems sane
if config.Architecture == "" {
return fmt.Errorf("No architecture found")
}
if len(config.Config.Cmd) == 0 && len(config.Config.Entrypoint) == 0 {
return fmt.Errorf("No CMD or ENTRYPOINT set")
}
// Also test this legacy interface
v, ctrconfigBytes, err := p.callReadAllBytes("GetConfig", []interface{}{imgid})
if err != nil {
return err
}
var ctrconfig imgspecv1.ImageConfig
err = json.Unmarshal(ctrconfigBytes, &ctrconfig)
if err != nil {
return err
}
// Validate that the config seems sane
if len(ctrconfig.Cmd) == 0 && len(ctrconfig.Entrypoint) == 0 {
return fmt.Errorf("No CMD or ENTRYPOINT set")
}
_, err = p.callNoFd("CloseImage", []interface{}{imgid})
return err
}
func (s *ProxySuite) TestProxy(c *check.C) {
p, err := newProxy()
c.Assert(err, check.IsNil)
err = runTestGetManifestAndConfig(p, knownNotManifestListedImage_x8664)
if err != nil {
err = fmt.Errorf("Testing image %s: %v", knownNotManifestListedImage_x8664, err)
}
c.Assert(err, check.IsNil)
err = runTestGetManifestAndConfig(p, knownListImage)
if err != nil {
err = fmt.Errorf("Testing image %s: %v", knownListImage, err)
}
c.Assert(err, check.IsNil)
}


@@ -9,7 +9,7 @@ import (
"path/filepath"
"time"
"github.com/go-check/check"
"gopkg.in/check.v1"
)
const (


@@ -9,7 +9,7 @@ import (
"strings"
"github.com/containers/image/v5/signature"
"github.com/go-check/check"
"gopkg.in/check.v1"
)
const (


@@ -14,8 +14,8 @@ import (
"github.com/containers/image/v5/docker/reference"
"github.com/containers/image/v5/manifest"
"github.com/containers/image/v5/types"
"github.com/go-check/check"
imgspecv1 "github.com/opencontainers/image-spec/specs-go/v1"
"gopkg.in/check.v1"
)
const (
@@ -163,6 +163,22 @@ func (s *SyncSuite) TestDocker2DirTaggedAll(c *check.C) {
c.Assert(out, check.Equals, "")
}
func (s *SyncSuite) TestPreserveDigests(c *check.C) {
tmpDir, err := ioutil.TempDir("", "skopeo-sync-test")
c.Assert(err, check.IsNil)
defer os.RemoveAll(tmpDir)
// FIXME: It would be nice to use one of the local Docker registries instead of needing an Internet connection.
image := pullableTaggedManifestList
// copy docker => dir
assertSkopeoSucceeds(c, "", "copy", "--all", "--preserve-digests", "docker://"+image, "dir:"+tmpDir)
_, err = os.Stat(path.Join(tmpDir, "manifest.json"))
c.Assert(err, check.IsNil)
assertSkopeoFails(c, ".*Instructed to preserve digests.*", "copy", "--all", "--preserve-digests", "--format=oci", "docker://"+image, "dir:"+tmpDir)
}
func (s *SyncSuite) TestScoped(c *check.C) {
// FIXME: It would be nice to use one of the local Docker registries instead of needing an Internet connection.
image := pullableTaggedImage
@@ -635,7 +651,7 @@ func (s *SyncSuite) TestFailsWithDockerSourceNotExisting(c *check.C) {
"sync", "--scoped", "--src-tls-verify=false", "--src", "docker", "--dest", "dir", repo, tmpDir)
//tagged
assertSkopeoFails(c, ".*Error reading manifest.*",
assertSkopeoFails(c, ".*reading manifest.*",
"sync", "--scoped", "--src-tls-verify=false", "--src", "docker", "--dest", "dir", repo+":thetag", tmpDir)
}


@@ -11,12 +11,16 @@ import (
"time"
"github.com/containers/image/v5/manifest"
"github.com/go-check/check"
"gopkg.in/check.v1"
)
const skopeoBinary = "skopeo"
const decompressDirsBinary = "./decompress-dirs.sh"
const testFQIN = "docker://quay.io/libpod/busybox" // tag left off on purpose, some tests need to add a special one
const testFQIN64 = "docker://quay.io/libpod/busybox:amd64"
const testFQINMultiLayer = "docker://quay.io/libpod/alpine_nginx:master" // multi-layer
// consumeAndLogOutputStream takes (f, err) from an exec.*Pipe(), and causes all output to it to be logged to c.
func consumeAndLogOutputStream(c *check.C, id string, f io.ReadCloser, err error) {
c.Assert(err, check.IsNil)


@@ -1,67 +0,0 @@
let
pkgs = (import ./nixpkgs.nix {
crossSystem = {
config = "aarch64-unknown-linux-gnu";
};
config = {
packageOverrides = pkg: {
gpgme = (static pkg.gpgme);
libassuan = (static pkg.libassuan);
libgpgerror = (static pkg.libgpgerror);
libseccomp = (static pkg.libseccomp);
glib = (static pkg.glib).overrideAttrs (x: {
outputs = [ "bin" "out" "dev" ];
mesonFlags = [
"-Ddefault_library=static"
"-Ddevbindir=${placeholder ''dev''}/bin"
"-Dgtk_doc=false"
"-Dnls=disabled"
];
postInstall = ''
moveToOutput "share/glib-2.0" "$dev"
substituteInPlace "$dev/bin/gdbus-codegen" --replace "$out" "$dev"
sed -i "$dev/bin/glib-gettextize" -e "s|^gettext_dir=.*|gettext_dir=$dev/share/glib-2.0/gettext|"
sed '1i#line 1 "${x.pname}-${x.version}/include/glib-2.0/gobject/gobjectnotifyqueue.c"' \
-i "$dev"/include/glib-2.0/gobject/gobjectnotifyqueue.c
'';
});
};
};
});
static = pkg: pkg.overrideAttrs (x: {
doCheck = false;
configureFlags = (x.configureFlags or [ ]) ++ [
"--without-shared"
"--disable-shared"
];
dontDisableStatic = true;
enableSharedExecutables = false;
enableStatic = true;
});
self = with pkgs; buildGoModule rec {
name = "skopeo";
src = ./..;
vendorSha256 = null;
doCheck = false;
enableParallelBuilding = true;
outputs = [ "out" ];
nativeBuildInputs = [ bash gitMinimal go-md2man installShellFiles makeWrapper pkg-config which ];
buildInputs = [ glibc glibc.static gpgme libassuan libgpgerror libseccomp ];
prePatch = ''
export CFLAGS='-static -pthread'
export LDFLAGS='-s -w -static-libgcc -static'
export EXTRA_LDFLAGS='-s -w -linkmode external -extldflags "-static -lm"'
export BUILDTAGS='static netgo osusergo exclude_graphdriver_btrfs exclude_graphdriver_devicemapper'
'';
buildPhase = ''
patchShebangs .
make bin/skopeo
'';
installPhase = ''
install -Dm755 bin/skopeo $out/bin/skopeo
'';
};
in
self


@@ -1,65 +0,0 @@
{ system ? builtins.currentSystem }:
let
pkgs = (import ./nixpkgs.nix {
config = {
packageOverrides = pkg: {
gpgme = (static pkg.gpgme);
libassuan = (static pkg.libassuan);
libgpgerror = (static pkg.libgpgerror);
libseccomp = (static pkg.libseccomp);
glib = (static pkg.glib).overrideAttrs (x: {
outputs = [ "bin" "out" "dev" ];
mesonFlags = [
"-Ddefault_library=static"
"-Ddevbindir=${placeholder ''dev''}/bin"
"-Dgtk_doc=false"
"-Dnls=disabled"
];
postInstall = ''
moveToOutput "share/glib-2.0" "$dev"
substituteInPlace "$dev/bin/gdbus-codegen" --replace "$out" "$dev"
sed -i "$dev/bin/glib-gettextize" -e "s|^gettext_dir=.*|gettext_dir=$dev/share/glib-2.0/gettext|"
sed '1i#line 1 "${x.pname}-${x.version}/include/glib-2.0/gobject/gobjectnotifyqueue.c"' \
-i "$dev"/include/glib-2.0/gobject/gobjectnotifyqueue.c
'';
});
};
};
});
static = pkg: pkg.overrideAttrs (x: {
doCheck = false;
configureFlags = (x.configureFlags or [ ]) ++ [
"--without-shared"
"--disable-shared"
];
dontDisableStatic = true;
enableSharedExecutables = false;
enableStatic = true;
});
self = with pkgs; buildGoModule rec {
name = "skopeo";
src = ./..;
vendorSha256 = null;
doCheck = false;
enableParallelBuilding = true;
outputs = [ "out" ];
nativeBuildInputs = [ bash gitMinimal go-md2man installShellFiles makeWrapper pkg-config which ];
buildInputs = [ glibc glibc.static gpgme libassuan libgpgerror libseccomp ];
prePatch = ''
export CFLAGS='-static -pthread'
export LDFLAGS='-s -w -static-libgcc -static'
export EXTRA_LDFLAGS='-s -w -linkmode external -extldflags "-static -lm"'
export BUILDTAGS='static netgo osusergo exclude_graphdriver_btrfs exclude_graphdriver_devicemapper'
'';
buildPhase = ''
patchShebangs .
make bin/skopeo
'';
installPhase = ''
install -Dm755 bin/skopeo $out/bin/skopeo
'';
};
in
self


@@ -1,10 +0,0 @@
{
"url": "https://github.com/nixos/nixpkgs",
"rev": "eb7e1ef185f6c990cda5f71fdc4fb02e76ab06d5",
"date": "2021-05-05T23:16:00+02:00",
"path": "/nix/store/a98lkhjlsqh32ic2kkrv5kkik6jy25wh-nixpkgs",
"sha256": "1ibz204c41g7baqga2iaj11yz9l75cfdylkiqjnk5igm81ivivxg",
"fetchSubmodules": false,
"deepClone": false,
"leaveDotGit": false
}


@@ -1,8 +0,0 @@
let
json = builtins.fromJSON (builtins.readFile ./nixpkgs.json);
nixpkgs = import (builtins.fetchTarball {
name = "nixos-unstable";
url = "${json.url}/archive/${json.rev}.tar.gz";
inherit (json) sha256;
});
in nixpkgs


@@ -1,51 +0,0 @@
## This Makefile is used to publish skopeo container images with Travis CI ##
## Environment variables, used in this Makefile are specified in .travis.yml
export DOCKER_CLI_EXPERIMENTAL=enabled
GOARCH ?= $(shell go env GOARCH)
# Dereference variable $(1), return value if non-empty, otherwise raise an error.
err_if_empty = $(if $(strip $($(1))),$(strip $($(1))),$(error Required $(1) variable is undefined or empty))
# Requires two arguments: Names of the username and the password env. vars.
define quay_login
@echo "$(call err_if_empty,$(2))" | \
docker login quay.io -u "$(call err_if_empty,$(1))" --password-stdin
endef
# Build container image of skopeo upstream based on host architecture
build-image/upstream:
docker build -t "${UPSTREAM_IMAGE}-${GOARCH}" contrib/skopeoimage/upstream
# Build container image of skopeo stable based on host architecture
build-image/stable:
docker build -t "${STABLE_IMAGE}-${GOARCH}" contrib/skopeoimage/stable
# Push container image of skopeo upstream (based on host architecture) to image repository
push-image/upstream:
$(call quay_login,SKOPEO_QUAY_USERNAME,SKOPEO_QUAY_PASSWORD)
docker push "${UPSTREAM_IMAGE}-${GOARCH}"
# Push container image of skopeo stable (based on host architecture) to image default and extra repositories
push-image/stable:
$(call quay_login,SKOPEO_QUAY_USERNAME,SKOPEO_QUAY_PASSWORD)
docker push "${STABLE_IMAGE}-${GOARCH}"
docker tag "${STABLE_IMAGE}-${GOARCH}" "${EXTRA_STABLE_IMAGE}-${GOARCH}"
$(call quay_login,CONTAINERS_QUAY_USERNAME,CONTAINERS_QUAY_PASSWORD)
docker push "${EXTRA_STABLE_IMAGE}-${GOARCH}"
# Create and push multiarch image manifest of skopeo upstream
push-manifest-multiarch/upstream:
docker manifest create "${UPSTREAM_IMAGE}" $(foreach arch,${MULTIARCH_MANIFEST_ARCHITECTURES}, ${UPSTREAM_IMAGE}-${arch})
$(call quay_login,SKOPEO_QUAY_USERNAME,SKOPEO_QUAY_PASSWORD)
docker manifest push --purge "${UPSTREAM_IMAGE}"
# Create and push multiarch image manifest of skopeo stable
push-manifest-multiarch/stable:
docker manifest create "${STABLE_IMAGE}" $(foreach arch,${MULTIARCH_MANIFEST_ARCHITECTURES}, ${STABLE_IMAGE}-${arch})
$(call quay_login,SKOPEO_QUAY_USERNAME,SKOPEO_QUAY_PASSWORD)
docker manifest push --purge "${STABLE_IMAGE}"
# Push to extra repository
docker manifest create "${EXTRA_STABLE_IMAGE}" $(foreach arch,${MULTIARCH_MANIFEST_ARCHITECTURES}, ${EXTRA_STABLE_IMAGE}-${arch})
$(call quay_login,CONTAINERS_QUAY_USERNAME,CONTAINERS_QUAY_PASSWORD)
docker manifest push --purge "${EXTRA_STABLE_IMAGE}"


@@ -1,40 +0,0 @@
# skopeo container image build with Travis
This document describes the details and requirements to build and publish skopeo container images. The images are published as several architecture specific images and multiarch images on top for upstream and stable versions.
The Travis configuration is available at `.travis.yml`.
The code to build and publish images is available at `release/Makefile` and should be used only via Travis.
Travis workflow has 3 major pieces:
- `local-build` - build and test source code locally on osx and linux/amd64 environments, 2 jobs are running in parallel
- `image-build-push` - build and push container images with several Travis jobs running in parallel to build images for several architectures (linux/amd64, linux/s390x, linux/ppc64le). Build part is done for each PR, push part is executed only in case of cron job or master branch update.
- `manifest-multiarch-push` - create and push image manifests, which consists of architecture specific images from previous step. Executed only in case of cron job or master branch update.
## Ways to have full workflow run
- [cron job](https://docs.travis-ci.com/user/cron-jobs/#adding-cron-jobs)
- Trigger build from Travis CI
- Update code in master branch
## Environment variables
Several environment variables are used to customize image names and keep private credentials to push to quay.io repositories.
**Image tags** are specified in environment variable and should be manually updated in case of new release.
- `SKOPEO_QUAY_USERNAME` and `SKOPEO_QUAY_PASSWORD` are credentials to push images to `quay.io/skopeo/stable` and `quay.io/skopeo/upstream` repos, and require the credentials to have write permissions. These variables should be specified in [Travis](https://docs.travis-ci.com/user/environment-variables/#defining-variables-in-repository-settings).
- `CONTAINERS_QUAY_USERNAME` and `CONTAINERS_QUAY_PASSWORD` are credentials to push images to `quay.io/containers/skopeo` repos, and require the credentials to have write permissions. These variables should be specified in [Travis](https://docs.travis-ci.com/user/environment-variables/#defining-variables-in-repository-settings).
Variables in .travis.yml
- `MULTIARCH_MANIFEST_ARCHITECTURES` is a list with architecture shortnames, to appear in final multiarch manifest. The values should fit to architectures used in the `image-build-push` Travis step.
- `STABLE_IMAGE`, `EXTRA_STABLE_IMAGE` are image names to publish stable Skopeo.
- `UPSTREAM_IMAGE` is an image name to publish upstream Skopeo.
### Values for environment variables
| Env variable | Value |
| -------------------------------- |----------------------------------|
| MULTIARCH_MANIFEST_ARCHITECTURES | "amd64 s390x ppc64le" |
| STABLE_IMAGE | quay.io/skopeo/stable:v1.2.0 |
| EXTRA_STABLE_IMAGE | quay.io/containers/skopeo:v1.2.0 |
| UPSTREAM_IMAGE | quay.io/skopeo/upstream:master |


@@ -27,11 +27,19 @@ load helpers
# Now run inspect locally
run_skopeo inspect dir:$workdir
inspect_local=$output
run_skopeo inspect --raw dir:$workdir
inspect_local_raw=$output
config_digest=$(jq -r '.config.digest' <<<"$inspect_local_raw")
# Each SHA-named file must be listed in the output of 'inspect'
# Each SHA-named layer file (but not the config) must be listed in the output of 'inspect'.
# As of Skopeo 1.6, (skopeo inspect)'s output lists layer digests,
# but not the digest of the config blob ($config_digest), if any.
layers=$(jq -r '.Layers' <<<"$inspect_local")
for sha in $(find $workdir -type f | xargs -l1 basename | egrep '^[0-9a-f]{64}$'); do
expect_output --from="$inspect_local" --substring "sha256:$sha" \
"Locally-extracted SHA file is present in 'inspect'"
if [ "sha256:$sha" != "$config_digest" ]; then
expect_output --from="$layers" --substring "sha256:$sha" \
"Locally-extracted SHA file is present in 'inspect'"
fi
done
# Simple sanity check on 'inspect' output.
@@ -93,7 +101,7 @@ END_EXPECT
# By default, 'inspect' tries to match our host os+arch. This should fail.
run_skopeo 1 inspect $img
expect_output --substring "Error parsing manifest for image: Error choosing image instance: no image found in manifest list for architecture $arch, variant " \
expect_output --substring "parsing manifest for image: choosing image instance: no image found in manifest list for architecture $arch, variant " \
"skopeo inspect, without --raw, fails"
# With --raw, we can inspect
@@ -108,4 +116,15 @@ END_EXPECT
"os - variant - architecture of $img"
}
@test "inspect: don't list tags" {
remote_image=docker://quay.io/fedora/fedora
# use --no-tags to not list any tags
run_skopeo inspect --no-tags $remote_image
inspect_output=$output
# extract the content of "RepoTags" property from the JSON output
repo_tags=$(jq '.RepoTags[]' <<<"$inspect_output")
# verify that the RepoTags was empty
expect_output --from="$repo_tags" "" "inspect --no-tags was expected to return empty RepoTags[]"
}
# vim: filetype=sh


@@ -125,6 +125,10 @@ function setup() {
run podman --root $TESTDIR/podmanroot images
expect_output --substring "mine"
# rootless cleanup needs to be done with unshare due to subuids
if [[ "$(id -u)" != "0" ]]; then
run podman unshare rm -rf $TESTDIR/podmanroot
fi
}
# shared blob directory
@@ -144,6 +148,16 @@ function setup() {
diff -urN $shareddir $dir2/blobs
}
@test "copy: sif image" {
type -path fakeroot || skip "'fakeroot' tool not available"
local localimg=dir:$TESTDIR/dir
run_skopeo copy sif:${TEST_SOURCE_DIR}/testdata/busybox_latest.sif $localimg
run_skopeo inspect $localimg --format "{{.Architecture}}"
expect_output "amd64"
}
teardown() {
podman rm -f reg


@@ -12,6 +12,13 @@ function setup() {
export GNUPGHOME=$TESTDIR/skopeo-gpg
mkdir --mode=0700 $GNUPGHOME
PASSPHRASE_FILE=$TESTDIR/passphrase-file
passphrase=$(random_string 20)
echo $passphrase > $PASSPHRASE_FILE
PASSPHRASE_FILE_WRONG=$TESTDIR/passphrase-file-wrong
echo $(random_string 10) > $PASSPHRASE_FILE_WRONG
# gpg on f30 needs this, otherwise:
# gpg: agent_genkey failed: Inappropriate ioctl for device
# ...but gpg on f29 (and, probably, Ubuntu) doesn't grok this
@@ -21,7 +28,7 @@ function setup() {
fi
for k in alice bob;do
gpg --batch $GPGOPTS --gen-key --passphrase '' <<END_GPG
gpg --batch $GPGOPTS --gen-key --passphrase $passphrase <<END_GPG
Key-Type: RSA
Name-Real: Test key - $k
Name-email: $k@test.redhat.com
@@ -81,8 +88,18 @@ END_POLICY_JSON
start_registry reg
}
function kill_gpg_agent {
# Kill the running gpg-agent to drop unlocked keys. This allows for testing
# handling of invalid passphrases.
run gpgconf --kill gpg-agent
if [ "$status" -ne 0 ]; then
die "could not restart gpg-agent: $output"
fi
}
@test "signing" {
run_skopeo '?' standalone-sign /dev/null busybox alice@test.redhat.com -o /dev/null
kill_gpg_agent
run_skopeo '?' standalone-sign /dev/null busybox alice@test.redhat.com -o /dev/null --passphrase-file $PASSPHRASE_FILE
if [[ "$output" =~ 'signing is not supported' ]]; then
skip "skopeo built without support for creating signatures"
return 1
@@ -100,7 +117,8 @@ END_POLICY_JSON
while read path sig comments; do
local sign_opt=
if [[ $sig != '-' ]]; then
sign_opt="--sign-by=${sig}@test.redhat.com"
kill_gpg_agent
sign_opt=" --sign-passphrase-file=$PASSPHRASE_FILE --sign-by=${sig}@test.redhat.com"
fi
run_skopeo --registries.d $REGISTRIES_D \
copy --dest-tls-verify=false \
@@ -144,7 +162,8 @@ END_TESTS
}
@test "signing: remove signature" {
run_skopeo '?' standalone-sign /dev/null busybox alice@test.redhat.com -o /dev/null
kill_gpg_agent
run_skopeo '?' standalone-sign /dev/null busybox alice@test.redhat.com -o /dev/null --passphrase-file $PASSPHRASE_FILE
if [[ "$output" =~ 'signing is not supported' ]]; then
skip "skopeo built without support for creating signatures"
return 1
@@ -157,11 +176,24 @@ END_TESTS
run_skopeo copy docker://quay.io/libpod/busybox:latest \
dir:$TESTDIR/busybox
# Push a signed image
kill_gpg_agent
run_skopeo --registries.d $REGISTRIES_D \
copy --dest-tls-verify=false \
--sign-by=alice@test.redhat.com \
--sign-passphrase-file $PASSPHRASE_FILE \
dir:$TESTDIR/busybox \
docker://localhost:5000/myns/alice:signed
# Wrong passphrase file
kill_gpg_agent
run_skopeo 1 --registries.d $REGISTRIES_D \
copy --dest-tls-verify=false \
--sign-by=alice@test.redhat.com \
--sign-passphrase-file $PASSPHRASE_FILE_WRONG \
dir:$TESTDIR/busybox \
docker://localhost:5000/myns/alice:signed
expect_output --substring "Bad passphrase"
# Fetch the image with signature
run_skopeo --registries.d $REGISTRIES_D \
--policy $POLICY_JSON \
@@ -180,7 +212,8 @@ END_TESTS
}
@test "signing: standalone" {
run_skopeo '?' standalone-sign /dev/null busybox alice@test.redhat.com -o /dev/null
kill_gpg_agent
run_skopeo '?' standalone-sign /dev/null busybox alice@test.redhat.com -o /dev/null --passphrase-file $PASSPHRASE_FILE
if [[ "$output" =~ 'signing is not supported' ]]; then
skip "skopeo built without support for creating signatures"
return 1
@@ -196,7 +229,9 @@ END_TESTS
docker://localhost:5000/busybox:latest \
dir:$TESTDIR/busybox
# Standalone sign
kill_gpg_agent
run_skopeo standalone-sign -o $TESTDIR/busybox.signature \
--passphrase-file $PASSPHRASE_FILE \
$TESTDIR/busybox/manifest.json \
localhost:5000/busybox:latest \
alice@test.redhat.com

View File

@@ -1,6 +1,10 @@
#!/bin/bash
SKOPEO_BINARY=${SKOPEO_BINARY:-$(dirname ${BASH_SOURCE})/../skopeo}
# Directory containing system test sources
TEST_SOURCE_DIR=${TEST_SOURCE_DIR:-$(dirname ${BASH_SOURCE})}
# Skopeo executable
SKOPEO_BINARY=${SKOPEO_BINARY:-${TEST_SOURCE_DIR}/../bin/skopeo}
# Default timeout for a skopeo command.
SKOPEO_TIMEOUT=${SKOPEO_TIMEOUT:-300}
@@ -356,9 +360,10 @@ start_registry() {
return
fi
timeout=$(expr $timeout - 1)
timeout=$(( timeout - 1 ))
sleep 1
done
log_and_run $PODMAN logs $name
die "Timed out waiting for registry container to respond on :$port"
}

systemtest/testdata/busybox_latest.sif vendored Executable file

Binary file not shown.

View File

@@ -1,5 +1,2 @@
TAGS
tags
.*.swp
tomlcheck/tomlcheck
toml.test
/toml-test

View File

@@ -1,15 +0,0 @@
language: go
go:
- 1.1
- 1.2
- 1.3
- 1.4
- 1.5
- 1.6
- tip
install:
- go install ./...
- go get github.com/BurntSushi/toml-test
script:
- export PATH="$PATH:$HOME/gopath/bin"
- make test

View File

@@ -1,3 +1 @@
Compatible with TOML version
[v0.4.0](https://github.com/toml-lang/toml/blob/v0.4.0/versions/en/toml-v0.4.0.md)
Compatible with TOML version [v1.0.0](https://toml.io/en/v1.0.0).

View File

@@ -1,19 +0,0 @@
install:
go install ./...
test: install
go test -v
toml-test toml-test-decoder
toml-test -encoder toml-test-encoder
fmt:
gofmt -w *.go */*.go
colcheck *.go */*.go
tags:
find ./ -name '*.go' -print0 | xargs -0 gotags > TAGS
push:
git push origin master
git push github master

View File

@@ -1,46 +1,36 @@
## TOML parser and encoder for Go with reflection
TOML stands for Tom's Obvious, Minimal Language. This Go package provides a
reflection interface similar to Go's standard library `json` and `xml`
packages. This package also supports the `encoding.TextUnmarshaler` and
`encoding.TextMarshaler` interfaces so that you can define custom data
representations. (There is an example of this below.)
packages.
Spec: https://github.com/toml-lang/toml
Compatible with TOML version [v1.0.0](https://toml.io/en/v1.0.0).
Compatible with TOML version
[v0.4.0](https://github.com/toml-lang/toml/blob/master/versions/en/toml-v0.4.0.md)
Documentation: https://godocs.io/github.com/BurntSushi/toml
Documentation: https://godoc.org/github.com/BurntSushi/toml
See the [releases page](https://github.com/BurntSushi/toml/releases) for a
changelog; this information is also in the git tag annotations (e.g. `git show
v0.4.0`).
Installation:
This library requires Go 1.13 or newer; install it with:
```bash
go get github.com/BurntSushi/toml
```
% go get github.com/BurntSushi/toml@latest
Try the toml validator:
It also comes with a TOML validator CLI tool:
```bash
go get github.com/BurntSushi/toml/cmd/tomlv
tomlv some-toml-file.toml
```
[![Build Status](https://travis-ci.org/BurntSushi/toml.svg?branch=master)](https://travis-ci.org/BurntSushi/toml) [![GoDoc](https://godoc.org/github.com/BurntSushi/toml?status.svg)](https://godoc.org/github.com/BurntSushi/toml)
% go install github.com/BurntSushi/toml/cmd/tomlv@latest
% tomlv some-toml-file.toml
### Testing
This package passes all tests in [toml-test] for both the decoder and the
encoder.
This package passes all tests in
[toml-test](https://github.com/BurntSushi/toml-test) for both the decoder
and the encoder.
[toml-test]: https://github.com/BurntSushi/toml-test
### Examples
This package works similar to how the Go standard library handles XML and JSON.
Namely, data is loaded into Go values via reflection.
This package works similarly to how the Go standard library handles `XML`
and `JSON`. Namely, data is loaded into Go values via reflection.
For the simplest example, consider some TOML file as just a list of keys
and values:
For the simplest example, consider some TOML file as just a list of keys and
values:
```toml
Age = 25
@@ -54,11 +44,11 @@ Which could be defined in Go as:
```go
type Config struct {
Age int
Cats []string
Pi float64
Perfection []int
DOB time.Time // requires `import time`
Age int
Cats []string
Pi float64
Perfection []int
DOB time.Time // requires `import time`
}
```
@@ -66,9 +56,8 @@ And then decoded with:
```go
var conf Config
if _, err := toml.Decode(tomlData, &conf); err != nil {
// handle error
}
_, err := toml.Decode(tomlData, &conf)
// handle error
```
You can also use struct tags if your struct field name doesn't map to a TOML
@@ -80,12 +69,14 @@ some_key_NAME = "wat"
```go
type TOML struct {
ObscureKey string `toml:"some_key_NAME"`
ObscureKey string `toml:"some_key_NAME"`
}
```
### Using the `encoding.TextUnmarshaler` interface
Beware that, like most other decoders, **only exported fields** are
considered when encoding and decoding; private fields are silently ignored.
### Using the `Marshaler` and `encoding.TextUnmarshaler` interfaces
Here's an example that automatically parses duration strings into
`time.Duration` values:
@@ -103,19 +94,19 @@ Which can be decoded with:
```go
type song struct {
Name string
Duration duration
Name string
Duration duration
}
type songs struct {
Song []song
Song []song
}
var favorites songs
if _, err := toml.Decode(blob, &favorites); err != nil {
log.Fatal(err)
log.Fatal(err)
}
for _, s := range favorites.Song {
fmt.Printf("%s (%s)\n", s.Name, s.Duration)
fmt.Printf("%s (%s)\n", s.Name, s.Duration)
}
```
@@ -134,8 +125,10 @@ func (d *duration) UnmarshalText(text []byte) error {
}
```
### More complex usage
To target TOML specifically you can implement the `UnmarshalTOML` interface in
a similar way.
### More complex usage
Here's an example of how to load the example from the official spec page:
```toml
@@ -180,23 +173,23 @@ And the corresponding Go types are:
```go
type tomlConfig struct {
Title string
Owner ownerInfo
DB database `toml:"database"`
Title string
Owner ownerInfo
DB database `toml:"database"`
Servers map[string]server
Clients clients
}
type ownerInfo struct {
Name string
Org string `toml:"organization"`
Bio string
DOB time.Time
Org string `toml:"organization"`
Bio string
DOB time.Time
}
type database struct {
Server string
Ports []int
Server string
Ports []int
ConnMax int `toml:"connection_max"`
Enabled bool
}
@@ -207,7 +200,7 @@ type server struct {
}
type clients struct {
Data [][]interface{}
Data [][]interface{}
Hosts []string
}
```
@@ -215,4 +208,4 @@ type clients struct {
Note that a case insensitive match will be tried if an exact match can't be
found.
A working example of the above can be found in `_examples/example.{go,toml}`.
A working example of the above can be found in `_example/example.{go,toml}`.
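
As a small supplement to the README examples above (not part of the vendored file), the sketch below decodes a document that uses the `toml` struct tag and then inspects the returned MetaData for keys the struct did not consume:

```go
package main

import (
	"fmt"
	"log"

	"github.com/BurntSushi/toml"
)

type config struct {
	ObscureKey string `toml:"some_key_NAME"`
}

func main() {
	const doc = `
some_key_NAME = "wat"
unused = 42
`
	var c config
	md, err := toml.Decode(doc, &c)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(c.ObscureKey)   // "wat"
	fmt.Println(md.Undecoded()) // keys present in the TOML but not in the struct, e.g. [unused]
}
```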

View File

@@ -1,19 +1,16 @@
package toml
import (
"encoding"
"fmt"
"io"
"io/ioutil"
"math"
"os"
"reflect"
"strings"
"time"
)
func e(format string, args ...interface{}) error {
return fmt.Errorf("toml: "+format, args...)
}
// Unmarshaler is the interface implemented by objects that can unmarshal a
// TOML description of themselves.
type Unmarshaler interface {
@@ -27,29 +24,27 @@ func Unmarshal(p []byte, v interface{}) error {
}
// Primitive is a TOML value that hasn't been decoded into a Go value.
// When using the various `Decode*` functions, the type `Primitive` may
// be given to any value, and its decoding will be delayed.
//
// A `Primitive` value can be decoded using the `PrimitiveDecode` function.
// This type can be used for any value, which will cause decoding to be delayed.
// You can use the PrimitiveDecode() function to "manually" decode these values.
//
// The underlying representation of a `Primitive` value is subject to change.
// Do not rely on it.
// NOTE: The underlying representation of a `Primitive` value is subject to
// change. Do not rely on it.
//
// N.B. Primitive values are still parsed, so using them will only avoid
// the overhead of reflection. They can be useful when you don't know the
// exact type of TOML data until run time.
// NOTE: Primitive values are still parsed, so using them will only avoid the
// overhead of reflection. They can be useful when you don't know the exact type
// of TOML data until runtime.
type Primitive struct {
undecoded interface{}
context Key
}
// DEPRECATED!
//
// Use MetaData.PrimitiveDecode instead.
func PrimitiveDecode(primValue Primitive, v interface{}) error {
md := MetaData{decoded: make(map[string]bool)}
return md.unify(primValue.undecoded, rvalue(v))
}
// The significand precision for float32 and float64 is 24 and 53 bits; this is
// the range a natural number can be stored in a float without loss of data.
const (
maxSafeFloat32Int = 16777215 // 2^24-1
maxSafeFloat64Int = 9007199254740991 // 2^53-1
)
// PrimitiveDecode is just like the other `Decode*` functions, except it
// decodes a TOML value that has already been parsed. Valid primitive values
@@ -68,79 +63,117 @@ func (md *MetaData) PrimitiveDecode(primValue Primitive, v interface{}) error {
return md.unify(primValue.undecoded, rvalue(v))
}
// Decode will decode the contents of `data` in TOML format into a pointer
// `v`.
// Decoder decodes TOML data.
//
// TOML hashes correspond to Go structs or maps. (Dealer's choice. They can be
// used interchangeably.)
// TOML tables correspond to Go structs or maps (dealer's choice, they can be
// used interchangeably).
//
// TOML arrays of tables correspond to either a slice of structs or a slice
// of maps.
// TOML table arrays correspond to either a slice of structs or a slice of maps.
//
// TOML datetimes correspond to Go `time.Time` values.
// TOML datetimes correspond to Go time.Time values. Local datetimes are parsed
// in the local timezone.
//
// All other TOML types (float, string, int, bool and array) correspond
// to the obvious Go types.
// All other TOML types (float, string, int, bool and array) correspond to the
// obvious Go types.
//
// An exception to the above rules is if a type implements the
// encoding.TextUnmarshaler interface. In this case, any primitive TOML value
// (floats, strings, integers, booleans and datetimes) will be converted to
// a byte string and given to the value's UnmarshalText method. See the
// Unmarshaler example for a demonstration with time duration strings.
// An exception to the above rules is if a type implements the TextUnmarshaler
// interface, in which case any primitive TOML value (floats, strings, integers,
// booleans, datetimes) will be converted to a []byte and given to the value's
// UnmarshalText method. See the Unmarshaler example for a demonstration with
// time duration strings.
//
// Key mapping
//
// TOML keys can map to either keys in a Go map or field names in a Go
// struct. The special `toml` struct tag may be used to map TOML keys to
// struct fields that don't match the key name exactly. (See the example.)
// A case insensitive match to struct names will be tried if an exact match
// can't be found.
// TOML keys can map to either keys in a Go map or field names in a Go struct.
// The special `toml` struct tag can be used to map TOML keys to struct fields
// that don't match the key name exactly (see the example). A case insensitive
// match to struct names will be tried if an exact match can't be found.
//
// The mapping between TOML values and Go values is loose. That is, there
// may exist TOML values that cannot be placed into your representation, and
// there may be parts of your representation that do not correspond to
// TOML values. This loose mapping can be made stricter by using the IsDefined
// and/or Undecoded methods on the MetaData returned.
// The mapping between TOML values and Go values is loose. That is, there may
// exist TOML values that cannot be placed into your representation, and there
// may be parts of your representation that do not correspond to TOML values.
// This loose mapping can be made stricter by using the IsDefined and/or
// Undecoded methods on the MetaData returned.
//
// This decoder will not handle cyclic types. If a cyclic type is passed,
// `Decode` will not terminate.
func Decode(data string, v interface{}) (MetaData, error) {
// This decoder does not handle cyclic types. Decode will not terminate if a
// cyclic type is passed.
type Decoder struct {
r io.Reader
}
// NewDecoder creates a new Decoder.
func NewDecoder(r io.Reader) *Decoder {
return &Decoder{r: r}
}
var (
unmarshalToml = reflect.TypeOf((*Unmarshaler)(nil)).Elem()
unmarshalText = reflect.TypeOf((*encoding.TextUnmarshaler)(nil)).Elem()
)
// Decode TOML data into the pointer `v`.
func (dec *Decoder) Decode(v interface{}) (MetaData, error) {
rv := reflect.ValueOf(v)
if rv.Kind() != reflect.Ptr {
return MetaData{}, e("Decode of non-pointer %s", reflect.TypeOf(v))
s := "%q"
if reflect.TypeOf(v) == nil {
s = "%v"
}
return MetaData{}, e("cannot decode to non-pointer "+s, reflect.TypeOf(v))
}
if rv.IsNil() {
return MetaData{}, e("Decode of nil %s", reflect.TypeOf(v))
return MetaData{}, e("cannot decode to nil value of %q", reflect.TypeOf(v))
}
p, err := parse(data)
// Check if this is a supported type: struct, map, interface{}, or something
// that implements UnmarshalTOML or UnmarshalText.
rv = indirect(rv)
rt := rv.Type()
if rv.Kind() != reflect.Struct && rv.Kind() != reflect.Map &&
!(rv.Kind() == reflect.Interface && rv.NumMethod() == 0) &&
!rt.Implements(unmarshalToml) && !rt.Implements(unmarshalText) {
return MetaData{}, e("cannot decode to type %s", rt)
}
// TODO: parser should read from io.Reader? Or at the very least, make it
// read from []byte rather than string
data, err := ioutil.ReadAll(dec.r)
if err != nil {
return MetaData{}, err
}
md := MetaData{
p.mapping, p.types, p.ordered,
make(map[string]bool, len(p.ordered)), nil,
p, err := parse(string(data))
if err != nil {
return MetaData{}, err
}
return md, md.unify(p.mapping, indirect(rv))
md := MetaData{
mapping: p.mapping,
types: p.types,
keys: p.ordered,
decoded: make(map[string]struct{}, len(p.ordered)),
context: nil,
}
return md, md.unify(p.mapping, rv)
}
// Decode the TOML data into the pointer v.
//
// See the documentation on Decoder for a description of the decoding process.
func Decode(data string, v interface{}) (MetaData, error) {
return NewDecoder(strings.NewReader(data)).Decode(v)
}
// DecodeFile is just like Decode, except it will automatically read the
// contents of the file at `fpath` and decode it for you.
func DecodeFile(fpath string, v interface{}) (MetaData, error) {
bs, err := ioutil.ReadFile(fpath)
// contents of the file at path and decode it for you.
func DecodeFile(path string, v interface{}) (MetaData, error) {
fp, err := os.Open(path)
if err != nil {
return MetaData{}, err
}
return Decode(string(bs), v)
}
// DecodeReader is just like Decode, except it will consume all bytes
// from the reader and decode it for you.
func DecodeReader(r io.Reader, v interface{}) (MetaData, error) {
bs, err := ioutil.ReadAll(r)
if err != nil {
return MetaData{}, err
}
return Decode(string(bs), v)
defer fp.Close()
return NewDecoder(fp).Decode(v)
}
// unify performs a sort of type unification based on the structure of `rv`,
@@ -149,8 +182,8 @@ func DecodeReader(r io.Reader, v interface{}) (MetaData, error) {
// Any type mismatch produces an error. Finding a type that we don't know
// how to handle produces an unsupported type error.
func (md *MetaData) unify(data interface{}, rv reflect.Value) error {
// Special case. Look for a `Primitive` value.
// TODO: #76 would make this superfluous after implemented.
if rv.Type() == reflect.TypeOf((*Primitive)(nil)).Elem() {
// Save the undecoded data and the key context into the primitive
// value.
@@ -170,25 +203,17 @@ func (md *MetaData) unify(data interface{}, rv reflect.Value) error {
}
}
// Special case. Handle time.Time values specifically.
// TODO: Remove this code when we decide to drop support for Go 1.1.
// This isn't necessary in Go 1.2 because time.Time satisfies the encoding
// interfaces.
if rv.Type().AssignableTo(rvalue(time.Time{}).Type()) {
return md.unifyDatetime(data, rv)
}
// Special case. Look for a value satisfying the TextUnmarshaler interface.
if v, ok := rv.Interface().(TextUnmarshaler); ok {
if v, ok := rv.Interface().(encoding.TextUnmarshaler); ok {
return md.unifyText(data, v)
}
// BUG(burntsushi)
// TODO:
// The behavior here is incorrect whenever a Go type satisfies the
// encoding.TextUnmarshaler interface but also corresponds to a TOML
// hash or array. In particular, the unmarshaler should only be applied
// to primitive TOML values. But at this point, it will be applied to
// all kinds of values and produce an incorrect error whenever those values
// are hashes or arrays (including arrays of tables).
// encoding.TextUnmarshaler interface but also corresponds to a TOML hash or
// array. In particular, the unmarshaler should only be applied to primitive
// TOML values. But at this point, it will be applied to all kinds of values
// and produce an incorrect error whenever those values are hashes or arrays
// (including arrays of tables).
k := rv.Kind()
@@ -223,9 +248,7 @@ func (md *MetaData) unify(data interface{}, rv reflect.Value) error {
return e("unsupported type %s", rv.Type())
}
return md.unifyAnything(data, rv)
case reflect.Float32:
fallthrough
case reflect.Float64:
case reflect.Float32, reflect.Float64:
return md.unifyFloat64(data, rv)
}
return e("unsupported type %s", rv.Kind())
@@ -259,17 +282,17 @@ func (md *MetaData) unifyStruct(mapping interface{}, rv reflect.Value) error {
for _, i := range f.index {
subv = indirect(subv.Field(i))
}
if isUnifiable(subv) {
md.decoded[md.context.add(key).String()] = true
md.decoded[md.context.add(key).String()] = struct{}{}
md.context = append(md.context, key)
if err := md.unify(datum, subv); err != nil {
err := md.unify(datum, subv)
if err != nil {
return err
}
md.context = md.context[0 : len(md.context)-1]
} else if f.name != "" {
// Bad user! No soup for you!
return e("cannot write unexported field %s.%s",
rv.Type().String(), f.name)
return e("cannot write unexported field %s.%s", rv.Type().String(), f.name)
}
}
}
@@ -277,27 +300,33 @@ func (md *MetaData) unifyStruct(mapping interface{}, rv reflect.Value) error {
}
func (md *MetaData) unifyMap(mapping interface{}, rv reflect.Value) error {
if k := rv.Type().Key().Kind(); k != reflect.String {
return fmt.Errorf(
"toml: cannot decode to a map with non-string key type (%s in %q)",
k, rv.Type())
}
tmap, ok := mapping.(map[string]interface{})
if !ok {
if tmap == nil {
return nil
}
return badtype("map", mapping)
return md.badtype("map", mapping)
}
if rv.IsNil() {
rv.Set(reflect.MakeMap(rv.Type()))
}
for k, v := range tmap {
md.decoded[md.context.add(k).String()] = true
md.decoded[md.context.add(k).String()] = struct{}{}
md.context = append(md.context, k)
rvkey := indirect(reflect.New(rv.Type().Key()))
rvval := reflect.Indirect(reflect.New(rv.Type().Elem()))
if err := md.unify(v, rvval); err != nil {
return err
}
md.context = md.context[0 : len(md.context)-1]
rvkey := indirect(reflect.New(rv.Type().Key()))
rvkey.SetString(k)
rv.SetMapIndex(rvkey, rvval)
}
@@ -310,12 +339,10 @@ func (md *MetaData) unifyArray(data interface{}, rv reflect.Value) error {
if !datav.IsValid() {
return nil
}
return badtype("slice", data)
return md.badtype("slice", data)
}
sliceLen := datav.Len()
if sliceLen != rv.Len() {
return e("expected array length %d; got TOML array of length %d",
rv.Len(), sliceLen)
if l := datav.Len(); l != rv.Len() {
return e("expected array length %d; got TOML array of length %d", rv.Len(), l)
}
return md.unifySliceArray(datav, rv)
}
@@ -326,7 +353,7 @@ func (md *MetaData) unifySlice(data interface{}, rv reflect.Value) error {
if !datav.IsValid() {
return nil
}
return badtype("slice", data)
return md.badtype("slice", data)
}
n := datav.Len()
if rv.IsNil() || rv.Cap() < n {
@@ -337,37 +364,31 @@ func (md *MetaData) unifySlice(data interface{}, rv reflect.Value) error {
}
func (md *MetaData) unifySliceArray(data, rv reflect.Value) error {
sliceLen := data.Len()
for i := 0; i < sliceLen; i++ {
v := data.Index(i).Interface()
sliceval := indirect(rv.Index(i))
if err := md.unify(v, sliceval); err != nil {
l := data.Len()
for i := 0; i < l; i++ {
err := md.unify(data.Index(i).Interface(), indirect(rv.Index(i)))
if err != nil {
return err
}
}
return nil
}
func (md *MetaData) unifyDatetime(data interface{}, rv reflect.Value) error {
if _, ok := data.(time.Time); ok {
rv.Set(reflect.ValueOf(data))
return nil
}
return badtype("time.Time", data)
}
func (md *MetaData) unifyString(data interface{}, rv reflect.Value) error {
if s, ok := data.(string); ok {
rv.SetString(s)
return nil
}
return badtype("string", data)
return md.badtype("string", data)
}
func (md *MetaData) unifyFloat64(data interface{}, rv reflect.Value) error {
if num, ok := data.(float64); ok {
switch rv.Kind() {
case reflect.Float32:
if num < -math.MaxFloat32 || num > math.MaxFloat32 {
return e("value %f is out of range for float32", num)
}
fallthrough
case reflect.Float64:
rv.SetFloat(num)
@@ -376,7 +397,26 @@ func (md *MetaData) unifyFloat64(data interface{}, rv reflect.Value) error {
}
return nil
}
return badtype("float", data)
if num, ok := data.(int64); ok {
switch rv.Kind() {
case reflect.Float32:
if num < -maxSafeFloat32Int || num > maxSafeFloat32Int {
return e("value %d is out of range for float32", num)
}
fallthrough
case reflect.Float64:
if num < -maxSafeFloat64Int || num > maxSafeFloat64Int {
return e("value %d is out of range for float64", num)
}
rv.SetFloat(float64(num))
default:
panic("bug")
}
return nil
}
return md.badtype("float", data)
}
func (md *MetaData) unifyInt(data interface{}, rv reflect.Value) error {
@@ -423,7 +463,7 @@ func (md *MetaData) unifyInt(data interface{}, rv reflect.Value) error {
}
return nil
}
return badtype("integer", data)
return md.badtype("integer", data)
}
func (md *MetaData) unifyBool(data interface{}, rv reflect.Value) error {
@@ -431,7 +471,7 @@ func (md *MetaData) unifyBool(data interface{}, rv reflect.Value) error {
rv.SetBool(b)
return nil
}
return badtype("boolean", data)
return md.badtype("boolean", data)
}
func (md *MetaData) unifyAnything(data interface{}, rv reflect.Value) error {
@@ -439,9 +479,15 @@ func (md *MetaData) unifyAnything(data interface{}, rv reflect.Value) error {
return nil
}
func (md *MetaData) unifyText(data interface{}, v TextUnmarshaler) error {
func (md *MetaData) unifyText(data interface{}, v encoding.TextUnmarshaler) error {
var s string
switch sdata := data.(type) {
case Marshaler:
text, err := sdata.MarshalTOML()
if err != nil {
return err
}
s = string(text)
case TextMarshaler:
text, err := sdata.MarshalText()
if err != nil {
@@ -459,7 +505,7 @@ func (md *MetaData) unifyText(data interface{}, v TextUnmarshaler) error {
case float64:
s = fmt.Sprintf("%f", sdata)
default:
return badtype("primitive (string-like)", data)
return md.badtype("primitive (string-like)", data)
}
if err := v.UnmarshalText([]byte(s)); err != nil {
return err
@@ -467,22 +513,27 @@ func (md *MetaData) unifyText(data interface{}, v TextUnmarshaler) error {
return nil
}
func (md *MetaData) badtype(dst string, data interface{}) error {
return e("incompatible types: TOML key %q has type %T; destination has type %s", md.context, data, dst)
}
// rvalue returns a reflect.Value of `v`. All pointers are resolved.
func rvalue(v interface{}) reflect.Value {
return indirect(reflect.ValueOf(v))
}
// indirect returns the value pointed to by a pointer.
// Pointers are followed until the value is not a pointer.
// New values are allocated for each nil pointer.
//
// An exception to this rule is if the value satisfies an interface of
// interest to us (like encoding.TextUnmarshaler).
// Pointers are followed until the value is not a pointer. New values are
// allocated for each nil pointer.
//
// An exception to this rule is if the value satisfies an interface of interest
// to us (like encoding.TextUnmarshaler).
func indirect(v reflect.Value) reflect.Value {
if v.Kind() != reflect.Ptr {
if v.CanSet() {
pv := v.Addr()
if _, ok := pv.Interface().(TextUnmarshaler); ok {
if _, ok := pv.Interface().(encoding.TextUnmarshaler); ok {
return pv
}
}
@@ -498,12 +549,12 @@ func isUnifiable(rv reflect.Value) bool {
if rv.CanSet() {
return true
}
if _, ok := rv.Interface().(TextUnmarshaler); ok {
if _, ok := rv.Interface().(encoding.TextUnmarshaler); ok {
return true
}
return false
}
func badtype(expected string, data interface{}) error {
return e("cannot load TOML value of type %T into a Go %s", data, expected)
func e(format string, args ...interface{}) error {
return fmt.Errorf("toml: "+format, args...)
}
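
The new Decoder-based API shown in this diff can be exercised with a short sketch like the following; the TOML content and field names are illustrative only.

```go
package main

import (
	"fmt"
	"log"
	"strings"

	"github.com/BurntSushi/toml"
)

type server struct {
	Host string
	Port int
}

func main() {
	// Decode from any io.Reader via the new Decoder type.
	var s server
	r := strings.NewReader("host = \"localhost\"\nport = 8080\n")
	if _, err := toml.NewDecoder(r).Decode(&s); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%+v\n", s)

	// DecodeFile opens and decodes a file in one call ("config.toml" is a placeholder path).
	// _, err := toml.DecodeFile("config.toml", &s)
}
```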

vendor/github.com/BurntSushi/toml/decode_go116.go generated vendored Normal file
View File

@@ -0,0 +1,19 @@
//go:build go1.16
// +build go1.16
package toml
import (
"io/fs"
)
// DecodeFS is just like Decode, except it will automatically read the contents
// of the file at `path` from a fs.FS instance.
func DecodeFS(fsys fs.FS, path string, v interface{}) (MetaData, error) {
fp, err := fsys.Open(path)
if err != nil {
return MetaData{}, err
}
defer fp.Close()
return NewDecoder(fp).Decode(v)
}
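
A minimal sketch of DecodeFS with an embedded filesystem, assuming a `config.toml` file exists next to the source at build time (the file name and the `Name` key are placeholders):

```go
package main

import (
	"embed"
	"fmt"
	"log"

	"github.com/BurntSushi/toml"
)

//go:embed config.toml
var configFS embed.FS

type config struct {
	Name string
}

func main() {
	var c config
	// DecodeFS reads "config.toml" from the embedded filesystem (requires Go 1.16+).
	if _, err := toml.DecodeFS(configFS, "config.toml", &c); err != nil {
		log.Fatal(err)
	}
	fmt.Println(c.Name)
}
```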

vendor/github.com/BurntSushi/toml/deprecated.go generated vendored Normal file
View File

@@ -0,0 +1,21 @@
package toml
import (
"encoding"
"io"
)
// Deprecated: use encoding.TextMarshaler
type TextMarshaler encoding.TextMarshaler
// Deprecated: use encoding.TextUnmarshaler
type TextUnmarshaler encoding.TextUnmarshaler
// Deprecated: use MetaData.PrimitiveDecode.
func PrimitiveDecode(primValue Primitive, v interface{}) error {
md := MetaData{decoded: make(map[string]struct{})}
return md.unify(primValue.undecoded, rvalue(v))
}
// Deprecated: use NewDecoder(reader).Decode(&value).
func DecodeReader(r io.Reader, v interface{}) (MetaData, error) { return NewDecoder(r).Decode(v) }

View File

@@ -1,27 +1,13 @@
/*
Package toml provides facilities for decoding and encoding TOML configuration
files via reflection. There is also support for delaying decoding with
the Primitive type, and querying the set of keys in a TOML document with the
MetaData type.
Package toml implements decoding and encoding of TOML files.
The specification implemented: https://github.com/toml-lang/toml
This package supports TOML v1.0.0, as listed on https://toml.io
The sub-command github.com/BurntSushi/toml/cmd/tomlv can be used to verify
whether a file is a valid TOML document. It can also be used to print the
type of each key in a TOML document.
There is also support for delaying decoding with the Primitive type, and
querying the set of keys in a TOML document with the MetaData type.
Testing
There are two important types of tests used for this package. The first is
contained inside '*_test.go' files and uses the standard Go unit testing
framework. These tests are primarily devoted to holistically testing the
decoder and encoder.
The second type of testing is used to verify the implementation's adherence
to the TOML specification. These tests have been factored into their own
project: https://github.com/BurntSushi/toml-test
The reason the tests are in a separate project is so that they can be used by
any implementation of TOML. Namely, it is language agnostic.
The github.com/BurntSushi/toml/cmd/tomlv package implements a TOML validator,
and can be used to verify if a TOML document is valid. It can also be used to
print the type of each key.
*/
package toml

View File

@@ -2,57 +2,106 @@ package toml
import (
"bufio"
"encoding"
"errors"
"fmt"
"io"
"math"
"reflect"
"sort"
"strconv"
"strings"
"time"
"github.com/BurntSushi/toml/internal"
)
type tomlEncodeError struct{ error }
var (
errArrayMixedElementTypes = errors.New(
"toml: cannot encode array with mixed element types")
errArrayNilElement = errors.New(
"toml: cannot encode array with nil element")
errNonString = errors.New(
"toml: cannot encode a map with non-string key type")
errAnonNonStruct = errors.New(
"toml: cannot encode an anonymous field that is not a struct")
errArrayNoTable = errors.New(
"toml: TOML array element cannot contain a table")
errNoKey = errors.New(
"toml: top-level values must be Go maps or structs")
errAnything = errors.New("") // used in testing
errArrayNilElement = errors.New("toml: cannot encode array with nil element")
errNonString = errors.New("toml: cannot encode a map with non-string key type")
errNoKey = errors.New("toml: top-level values must be Go maps or structs")
errAnything = errors.New("") // used in testing
)
var quotedReplacer = strings.NewReplacer(
"\t", "\\t",
"\n", "\\n",
"\r", "\\r",
var dblQuotedReplacer = strings.NewReplacer(
"\"", "\\\"",
"\\", "\\\\",
"\x00", `\u0000`,
"\x01", `\u0001`,
"\x02", `\u0002`,
"\x03", `\u0003`,
"\x04", `\u0004`,
"\x05", `\u0005`,
"\x06", `\u0006`,
"\x07", `\u0007`,
"\b", `\b`,
"\t", `\t`,
"\n", `\n`,
"\x0b", `\u000b`,
"\f", `\f`,
"\r", `\r`,
"\x0e", `\u000e`,
"\x0f", `\u000f`,
"\x10", `\u0010`,
"\x11", `\u0011`,
"\x12", `\u0012`,
"\x13", `\u0013`,
"\x14", `\u0014`,
"\x15", `\u0015`,
"\x16", `\u0016`,
"\x17", `\u0017`,
"\x18", `\u0018`,
"\x19", `\u0019`,
"\x1a", `\u001a`,
"\x1b", `\u001b`,
"\x1c", `\u001c`,
"\x1d", `\u001d`,
"\x1e", `\u001e`,
"\x1f", `\u001f`,
"\x7f", `\u007f`,
)
// Encoder controls the encoding of Go values to a TOML document to some
// io.Writer.
//
// The indentation level can be controlled with the Indent field.
type Encoder struct {
// A single indentation level. By default it is two spaces.
Indent string
// hasWritten is whether we have written any output to w yet.
hasWritten bool
w *bufio.Writer
// Marshaler is the interface implemented by types that can marshal themselves
// into valid TOML.
type Marshaler interface {
MarshalTOML() ([]byte, error)
}
// NewEncoder returns a TOML encoder that encodes Go values to the io.Writer
// given. By default, a single indentation level is 2 spaces.
// Encoder encodes a Go value to a TOML document.
//
// The mapping between Go values and TOML values should be precisely the same as
// for the Decode* functions.
//
// The toml.Marshaler and encoding.TextMarshaler interfaces are supported for
// encoding the value as custom TOML.
//
// If you want to write arbitrary binary data then you will need to use
// something like base64 since TOML does not have any binary types.
//
// When encoding TOML hashes (Go maps or structs), keys without any sub-hashes
// are encoded first.
//
// Go maps will be sorted alphabetically by key for deterministic output.
//
// Encoding Go values without a corresponding TOML representation will return an
// error. Examples of this includes maps with non-string keys, slices with nil
// elements, embedded non-struct types, and nested slices containing maps or
// structs. (e.g. [][]map[string]string is not allowed but []map[string]string
// is okay, as is []map[string][]string).
//
// NOTE: only exported keys are encoded due to the use of reflection. Unexported
// keys are silently discarded.
type Encoder struct {
// String to use for a single indentation level; default is two spaces.
Indent string
w *bufio.Writer
hasWritten bool // written any output to w yet?
}
// NewEncoder creates a new Encoder.
func NewEncoder(w io.Writer) *Encoder {
return &Encoder{
w: bufio.NewWriter(w),
@@ -60,29 +109,10 @@ func NewEncoder(w io.Writer) *Encoder {
}
}
// Encode writes a TOML representation of the Go value to the underlying
// io.Writer. If the value given cannot be encoded to a valid TOML document,
// then an error is returned.
// Encode writes a TOML representation of the Go value to the Encoder's writer.
//
// The mapping between Go values and TOML values should be precisely the same
// as for the Decode* functions. Similarly, the TextMarshaler interface is
// supported by encoding the resulting bytes as strings. (If you want to write
// arbitrary binary data then you will need to use something like base64 since
// TOML does not have any binary types.)
//
// When encoding TOML hashes (i.e., Go maps or structs), keys without any
// sub-hashes are encoded first.
//
// If a Go map is encoded, then its keys are sorted alphabetically for
// deterministic output. More control over this behavior may be provided if
// there is demand for it.
//
// Encoding Go values without a corresponding TOML representation---like map
// types with non-string keys---will cause an error to be returned. Similarly
// for mixed arrays/slices, arrays/slices with nil elements, embedded
// non-struct types and nested slices containing maps or structs.
// (e.g., [][]map[string]string is not allowed but []map[string]string is OK
// and so is []map[string][]string.)
// An error is returned if the value given cannot be encoded to a valid TOML
// document.
func (enc *Encoder) Encode(v interface{}) error {
rv := eindirect(reflect.ValueOf(v))
if err := enc.safeEncode(Key([]string{}), rv); err != nil {
@@ -106,13 +136,18 @@ func (enc *Encoder) safeEncode(key Key, rv reflect.Value) (err error) {
}
func (enc *Encoder) encode(key Key, rv reflect.Value) {
// Special case. Time needs to be in ISO8601 format.
// Special case. If we can marshal the type to text, then we used that.
// Basically, this prevents the encoder for handling these types as
// generic structs (or whatever the underlying type of a TextMarshaler is).
switch rv.Interface().(type) {
case time.Time, TextMarshaler:
enc.keyEqElement(key, rv)
// Special case: time needs to be in ISO8601 format.
//
// Special case: if we can marshal the type to text, then we use that. This
// prevents the encoder from handling these types as generic structs (or
// whatever the underlying type of a TextMarshaler is).
switch t := rv.Interface().(type) {
case time.Time, encoding.TextMarshaler, Marshaler:
enc.writeKeyValue(key, rv, false)
return
// TODO: #76 would make this superfluous after implemented.
case Primitive:
enc.encode(key, reflect.ValueOf(t.undecoded))
return
}
@@ -123,12 +158,12 @@ func (enc *Encoder) encode(key Key, rv reflect.Value) {
reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32,
reflect.Uint64,
reflect.Float32, reflect.Float64, reflect.String, reflect.Bool:
enc.keyEqElement(key, rv)
enc.writeKeyValue(key, rv, false)
case reflect.Array, reflect.Slice:
if typeEqual(tomlArrayHash, tomlTypeOfGo(rv)) {
enc.eArrayOfTables(key, rv)
} else {
enc.keyEqElement(key, rv)
enc.writeKeyValue(key, rv, false)
}
case reflect.Interface:
if rv.IsNil() {
@@ -148,55 +183,88 @@ func (enc *Encoder) encode(key Key, rv reflect.Value) {
case reflect.Struct:
enc.eTable(key, rv)
default:
panic(e("unsupported type for key '%s': %s", key, k))
encPanic(fmt.Errorf("unsupported type for key '%s': %s", key, k))
}
}
// eElement encodes any value that can be an array element (primitives and
// arrays).
// eElement encodes any value that can be an array element.
func (enc *Encoder) eElement(rv reflect.Value) {
switch v := rv.Interface().(type) {
case time.Time:
// Special case time.Time as a primitive. Has to come before
// TextMarshaler below because time.Time implements
// encoding.TextMarshaler, but we need to always use UTC.
enc.wf(v.UTC().Format("2006-01-02T15:04:05Z"))
return
case TextMarshaler:
// Special case. Use text marshaler if it's available for this value.
if s, err := v.MarshalText(); err != nil {
encPanic(err)
} else {
enc.writeQuoted(string(s))
case time.Time: // Using TextMarshaler adds extra quotes, which we don't want.
format := time.RFC3339Nano
switch v.Location() {
case internal.LocalDatetime:
format = "2006-01-02T15:04:05.999999999"
case internal.LocalDate:
format = "2006-01-02"
case internal.LocalTime:
format = "15:04:05.999999999"
}
switch v.Location() {
default:
enc.wf(v.Format(format))
case internal.LocalDatetime, internal.LocalDate, internal.LocalTime:
enc.wf(v.In(time.UTC).Format(format))
}
return
case Marshaler:
s, err := v.MarshalTOML()
if err != nil {
encPanic(err)
}
enc.writeQuoted(string(s))
return
case encoding.TextMarshaler:
s, err := v.MarshalText()
if err != nil {
encPanic(err)
}
enc.writeQuoted(string(s))
return
}
switch rv.Kind() {
case reflect.Bool:
enc.wf(strconv.FormatBool(rv.Bool()))
case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32,
reflect.Int64:
enc.wf(strconv.FormatInt(rv.Int(), 10))
case reflect.Uint, reflect.Uint8, reflect.Uint16,
reflect.Uint32, reflect.Uint64:
enc.wf(strconv.FormatUint(rv.Uint(), 10))
case reflect.Float32:
enc.wf(floatAddDecimal(strconv.FormatFloat(rv.Float(), 'f', -1, 32)))
case reflect.Float64:
enc.wf(floatAddDecimal(strconv.FormatFloat(rv.Float(), 'f', -1, 64)))
case reflect.Array, reflect.Slice:
enc.eArrayOrSliceElement(rv)
case reflect.Interface:
enc.eElement(rv.Elem())
case reflect.String:
enc.writeQuoted(rv.String())
case reflect.Bool:
enc.wf(strconv.FormatBool(rv.Bool()))
case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
enc.wf(strconv.FormatInt(rv.Int(), 10))
case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64:
enc.wf(strconv.FormatUint(rv.Uint(), 10))
case reflect.Float32:
f := rv.Float()
if math.IsNaN(f) {
enc.wf("nan")
} else if math.IsInf(f, 0) {
enc.wf("%cinf", map[bool]byte{true: '-', false: '+'}[math.Signbit(f)])
} else {
enc.wf(floatAddDecimal(strconv.FormatFloat(f, 'f', -1, 32)))
}
case reflect.Float64:
f := rv.Float()
if math.IsNaN(f) {
enc.wf("nan")
} else if math.IsInf(f, 0) {
enc.wf("%cinf", map[bool]byte{true: '-', false: '+'}[math.Signbit(f)])
} else {
enc.wf(floatAddDecimal(strconv.FormatFloat(f, 'f', -1, 64)))
}
case reflect.Array, reflect.Slice:
enc.eArrayOrSliceElement(rv)
case reflect.Struct:
enc.eStruct(nil, rv, true)
case reflect.Map:
enc.eMap(nil, rv, true)
case reflect.Interface:
enc.eElement(rv.Elem())
default:
panic(e("unexpected primitive type: %s", rv.Kind()))
encPanic(fmt.Errorf("unexpected primitive type: %T", rv.Interface()))
}
}
// By the TOML spec, all floats must have a decimal with at least one
// number on either side.
// By the TOML spec, all floats must have a decimal with at least one number on
// either side.
func floatAddDecimal(fstr string) string {
if !strings.Contains(fstr, ".") {
return fstr + ".0"
@@ -205,7 +273,7 @@ func floatAddDecimal(fstr string) string {
}
func (enc *Encoder) writeQuoted(s string) {
enc.wf("\"%s\"", quotedReplacer.Replace(s))
enc.wf("\"%s\"", dblQuotedReplacer.Replace(s))
}
func (enc *Encoder) eArrayOrSliceElement(rv reflect.Value) {
@@ -230,40 +298,39 @@ func (enc *Encoder) eArrayOfTables(key Key, rv reflect.Value) {
if isNil(trv) {
continue
}
panicIfInvalidKey(key)
enc.newline()
enc.wf("%s[[%s]]", enc.indentStr(key), key.maybeQuotedAll())
enc.wf("%s[[%s]]", enc.indentStr(key), key)
enc.newline()
enc.eMapOrStruct(key, trv)
enc.eMapOrStruct(key, trv, false)
}
}
func (enc *Encoder) eTable(key Key, rv reflect.Value) {
panicIfInvalidKey(key)
if len(key) == 1 {
// Output an extra newline between top-level tables.
// (The newline isn't written if nothing else has been written though.)
enc.newline()
}
if len(key) > 0 {
enc.wf("%s[%s]", enc.indentStr(key), key.maybeQuotedAll())
enc.wf("%s[%s]", enc.indentStr(key), key)
enc.newline()
}
enc.eMapOrStruct(key, rv)
enc.eMapOrStruct(key, rv, false)
}
func (enc *Encoder) eMapOrStruct(key Key, rv reflect.Value) {
func (enc *Encoder) eMapOrStruct(key Key, rv reflect.Value, inline bool) {
switch rv := eindirect(rv); rv.Kind() {
case reflect.Map:
enc.eMap(key, rv)
enc.eMap(key, rv, inline)
case reflect.Struct:
enc.eStruct(key, rv)
enc.eStruct(key, rv, inline)
default:
// Should never happen?
panic("eTable: unhandled reflect.Value Kind: " + rv.Kind().String())
}
}
func (enc *Encoder) eMap(key Key, rv reflect.Value) {
func (enc *Encoder) eMap(key Key, rv reflect.Value, inline bool) {
rt := rv.Type()
if rt.Key().Kind() != reflect.String {
encPanic(errNonString)
@@ -274,114 +341,159 @@ func (enc *Encoder) eMap(key Key, rv reflect.Value) {
var mapKeysDirect, mapKeysSub []string
for _, mapKey := range rv.MapKeys() {
k := mapKey.String()
if typeIsHash(tomlTypeOfGo(rv.MapIndex(mapKey))) {
if typeIsTable(tomlTypeOfGo(rv.MapIndex(mapKey))) {
mapKeysSub = append(mapKeysSub, k)
} else {
mapKeysDirect = append(mapKeysDirect, k)
}
}
var writeMapKeys = func(mapKeys []string) {
var writeMapKeys = func(mapKeys []string, trailC bool) {
sort.Strings(mapKeys)
for _, mapKey := range mapKeys {
mrv := rv.MapIndex(reflect.ValueOf(mapKey))
if isNil(mrv) {
// Don't write anything for nil fields.
for i, mapKey := range mapKeys {
val := rv.MapIndex(reflect.ValueOf(mapKey))
if isNil(val) {
continue
}
enc.encode(key.add(mapKey), mrv)
if inline {
enc.writeKeyValue(Key{mapKey}, val, true)
if trailC || i != len(mapKeys)-1 {
enc.wf(", ")
}
} else {
enc.encode(key.add(mapKey), val)
}
}
}
writeMapKeys(mapKeysDirect)
writeMapKeys(mapKeysSub)
if inline {
enc.wf("{")
}
writeMapKeys(mapKeysDirect, len(mapKeysSub) > 0)
writeMapKeys(mapKeysSub, false)
if inline {
enc.wf("}")
}
}
func (enc *Encoder) eStruct(key Key, rv reflect.Value) {
const is32Bit = (32 << (^uint(0) >> 63)) == 32
func (enc *Encoder) eStruct(key Key, rv reflect.Value, inline bool) {
// Write keys for fields directly under this key first, because if we write
// a field that creates a new table, then all keys under it will be in that
// a field that creates a new table then all keys under it will be in that
// table (not the one we're writing here).
rt := rv.Type()
var fieldsDirect, fieldsSub [][]int
var addFields func(rt reflect.Type, rv reflect.Value, start []int)
//
// Fields is a [][]int: for fieldsDirect this always has one entry (the
// struct index). For fieldsSub it contains two entries: the parent field
// index from tv, and the field indexes for the fields of the sub.
var (
rt = rv.Type()
fieldsDirect, fieldsSub [][]int
addFields func(rt reflect.Type, rv reflect.Value, start []int)
)
addFields = func(rt reflect.Type, rv reflect.Value, start []int) {
for i := 0; i < rt.NumField(); i++ {
f := rt.Field(i)
// skip unexported fields
if f.PkgPath != "" && !f.Anonymous {
if f.PkgPath != "" && !f.Anonymous { /// Skip unexported fields.
continue
}
frv := rv.Field(i)
// Treat anonymous struct fields with tag names as though they are
// not anonymous, like encoding/json does.
//
// Non-struct anonymous fields use the normal encoding logic.
if f.Anonymous {
t := f.Type
switch t.Kind() {
case reflect.Struct:
// Treat anonymous struct fields with
// tag names as though they are not
// anonymous, like encoding/json does.
if getOptions(f.Tag).name == "" {
addFields(t, frv, f.Index)
addFields(t, frv, append(start, f.Index...))
continue
}
case reflect.Ptr:
if t.Elem().Kind() == reflect.Struct &&
getOptions(f.Tag).name == "" {
if t.Elem().Kind() == reflect.Struct && getOptions(f.Tag).name == "" {
if !frv.IsNil() {
addFields(t.Elem(), frv.Elem(), f.Index)
addFields(t.Elem(), frv.Elem(), append(start, f.Index...))
}
continue
}
// Fall through to the normal field encoding logic below
// for non-struct anonymous fields.
}
}
if typeIsHash(tomlTypeOfGo(frv)) {
if typeIsTable(tomlTypeOfGo(frv)) {
fieldsSub = append(fieldsSub, append(start, f.Index...))
} else {
fieldsDirect = append(fieldsDirect, append(start, f.Index...))
// Copy so it works correct on 32bit archs; not clear why this
// is needed. See #314, and https://www.reddit.com/r/golang/comments/pnx8v4
// This also works fine on 64bit, but 32bit archs are somewhat
// rare and this is a wee bit faster.
if is32Bit {
copyStart := make([]int, len(start))
copy(copyStart, start)
fieldsDirect = append(fieldsDirect, append(copyStart, f.Index...))
} else {
fieldsDirect = append(fieldsDirect, append(start, f.Index...))
}
}
}
}
addFields(rt, rv, nil)
var writeFields = func(fields [][]int) {
writeFields := func(fields [][]int) {
for _, fieldIndex := range fields {
sft := rt.FieldByIndex(fieldIndex)
sf := rv.FieldByIndex(fieldIndex)
if isNil(sf) {
// Don't write anything for nil fields.
fieldType := rt.FieldByIndex(fieldIndex)
fieldVal := rv.FieldByIndex(fieldIndex)
if isNil(fieldVal) { /// Don't write anything for nil fields.
continue
}
opts := getOptions(sft.Tag)
opts := getOptions(fieldType.Tag)
if opts.skip {
continue
}
keyName := sft.Name
keyName := fieldType.Name
if opts.name != "" {
keyName = opts.name
}
if opts.omitempty && isEmpty(sf) {
if opts.omitempty && isEmpty(fieldVal) {
continue
}
if opts.omitzero && isZero(sf) {
if opts.omitzero && isZero(fieldVal) {
continue
}
enc.encode(key.add(keyName), sf)
if inline {
enc.writeKeyValue(Key{keyName}, fieldVal, true)
if fieldIndex[0] != len(fields)-1 {
enc.wf(", ")
}
} else {
enc.encode(key.add(keyName), fieldVal)
}
}
}
if inline {
enc.wf("{")
}
writeFields(fieldsDirect)
writeFields(fieldsSub)
if inline {
enc.wf("}")
}
}
// tomlTypeName returns the TOML type name of the Go value's type. It is
// used to determine whether the types of array elements are mixed (which is
// forbidden). If the Go value is nil, then it is illegal for it to be an array
// element, and valueIsNil is returned as true.
// Returns the TOML type of a Go value. The type may be `nil`, which means
// no concrete TOML type could be found.
// tomlTypeOfGo returns the TOML type name of the Go value's type.
//
// It is used to determine whether the types of array elements are mixed (which
// is forbidden). If the Go value is nil, then it is illegal for it to be an
// array element, and valueIsNil is returned as true.
//
// The type may be `nil`, which means no concrete TOML type could be found.
func tomlTypeOfGo(rv reflect.Value) tomlType {
if isNil(rv) || !rv.IsValid() {
return nil
@@ -408,19 +520,43 @@ func tomlTypeOfGo(rv reflect.Value) tomlType {
case reflect.Map:
return tomlHash
case reflect.Struct:
switch rv.Interface().(type) {
case time.Time:
if _, ok := rv.Interface().(time.Time); ok {
return tomlDatetime
case TextMarshaler:
return tomlString
default:
return tomlHash
}
if isMarshaler(rv) {
return tomlString
}
return tomlHash
default:
panic("unexpected reflect.Kind: " + rv.Kind().String())
if isMarshaler(rv) {
return tomlString
}
encPanic(errors.New("unsupported type: " + rv.Kind().String()))
panic("unreachable")
}
}
func isMarshaler(rv reflect.Value) bool {
switch rv.Interface().(type) {
case encoding.TextMarshaler:
return true
case Marshaler:
return true
}
// Someone used a pointer receiver: we can make it work for pointer values.
if rv.CanAddr() {
if _, ok := rv.Addr().Interface().(encoding.TextMarshaler); ok {
return true
}
if _, ok := rv.Addr().Interface().(Marshaler); ok {
return true
}
}
return false
}
// tomlArrayType returns the element type of a TOML array. The type returned
// may be nil if it cannot be determined (e.g., a nil slice or a zero length
slice). This function may also panic if it finds a type that cannot be
@@ -430,30 +566,19 @@ func tomlArrayType(rv reflect.Value) tomlType {
if isNil(rv) || !rv.IsValid() || rv.Len() == 0 {
return nil
}
/// Don't allow nil.
rvlen := rv.Len()
for i := 1; i < rvlen; i++ {
if tomlTypeOfGo(rv.Index(i)) == nil {
encPanic(errArrayNilElement)
}
}
firstType := tomlTypeOfGo(rv.Index(0))
if firstType == nil {
encPanic(errArrayNilElement)
}
rvlen := rv.Len()
for i := 1; i < rvlen; i++ {
elem := rv.Index(i)
switch elemType := tomlTypeOfGo(elem); {
case elemType == nil:
encPanic(errArrayNilElement)
case !typeEqual(firstType, elemType):
encPanic(errArrayMixedElementTypes)
}
}
// If we have a nested array, then we must make sure that the nested
// array contains ONLY primitives.
// This checks arbitrarily nested arrays.
if typeEqual(firstType, tomlArray) || typeEqual(firstType, tomlArrayHash) {
nest := tomlArrayType(eindirect(rv.Index(0)))
if typeEqual(nest, tomlHash) || typeEqual(nest, tomlArrayHash) {
encPanic(errArrayNoTable)
}
}
return firstType
}
@@ -511,18 +636,32 @@ func (enc *Encoder) newline() {
}
}
func (enc *Encoder) keyEqElement(key Key, val reflect.Value) {
// Write a key/value pair:
//
// key = <any value>
//
// This is also used for "k = v" in inline tables; so something like this will
// be written in three calls:
//
// ┌────────────────────┐
// │ ┌───┐ ┌─────┐│
// v v v v vv
// key = {k = v, k2 = v2}
//
func (enc *Encoder) writeKeyValue(key Key, val reflect.Value, inline bool) {
if len(key) == 0 {
encPanic(errNoKey)
}
panicIfInvalidKey(key)
enc.wf("%s%s = ", enc.indentStr(key), key.maybeQuoted(len(key)-1))
enc.eElement(val)
enc.newline()
if !inline {
enc.newline()
}
}
func (enc *Encoder) wf(format string, v ...interface{}) {
if _, err := fmt.Fprintf(enc.w, format, v...); err != nil {
_, err := fmt.Fprintf(enc.w, format, v...)
if err != nil {
encPanic(err)
}
enc.hasWritten = true
@@ -553,16 +692,3 @@ func isNil(rv reflect.Value) bool {
return false
}
}
func panicIfInvalidKey(key Key) {
for _, k := range key {
if len(k) == 0 {
encPanic(e("Key '%s' is not a valid table name. Key names "+
"cannot be empty.", key.maybeQuotedAll()))
}
}
}
func isValidKeyName(s string) bool {
return len(s) != 0
}
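
To illustrate the reworked encoder API, here is a small sketch using NewEncoder, the Indent field, and a struct with a `toml` tag; the values are placeholders and the exact output formatting is whatever the encoder produces.

```go
package main

import (
	"bytes"
	"fmt"
	"log"

	"github.com/BurntSushi/toml"
)

type database struct {
	Server  string
	Ports   []int
	ConnMax int `toml:"connection_max"`
	Enabled bool
}

func main() {
	var buf bytes.Buffer
	enc := toml.NewEncoder(&buf)
	enc.Indent = "    " // four-space indentation instead of the two-space default

	err := enc.Encode(database{
		Server:  "192.168.1.1",
		Ports:   []int{8001, 8002},
		ConnMax: 5000,
		Enabled: true,
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Print(buf.String())
}
```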

View File

@@ -1,19 +0,0 @@
// +build go1.2
package toml
// In order to support Go 1.1, we define our own TextMarshaler and
// TextUnmarshaler types. For Go 1.2+, we just alias them with the
// standard library interfaces.
import (
"encoding"
)
// TextMarshaler is a synonym for encoding.TextMarshaler. It is defined here
// so that Go 1.1 can be supported.
type TextMarshaler encoding.TextMarshaler
// TextUnmarshaler is a synonym for encoding.TextUnmarshaler. It is defined
// here so that Go 1.1 can be supported.
type TextUnmarshaler encoding.TextUnmarshaler

View File

@@ -1,18 +0,0 @@
// +build !go1.2
package toml
// These interfaces were introduced in Go 1.2, so we add them manually when
// compiling for Go 1.1.
// TextMarshaler is a synonym for encoding.TextMarshaler. It is defined here
// so that Go 1.1 can be supported.
type TextMarshaler interface {
MarshalText() (text []byte, err error)
}
// TextUnmarshaler is a synonym for encoding.TextUnmarshaler. It is defined
// here so that Go 1.1 can be supported.
type TextUnmarshaler interface {
UnmarshalText(text []byte) error
}

vendor/github.com/BurntSushi/toml/error.go generated vendored Normal file
View File

@@ -0,0 +1,229 @@
package toml
import (
"fmt"
"strings"
)
// ParseError is returned when there is an error parsing the TOML syntax.
//
// For example invalid syntax, duplicate keys, etc.
//
// In addition to the error message itself, you can also print detailed location
// information with context by using ErrorWithPosition():
//
// toml: error: Key 'fruit' was already created and cannot be used as an array.
//
// At line 4, column 2-7:
//
// 2 | fruit = []
// 3 |
// 4 | [[fruit]] # Not allowed
// ^^^^^
//
// Furthermore, the ErrorWithUsage() can be used to print the above with some
// more detailed usage guidance:
//
// toml: error: newlines not allowed within inline tables
//
// At line 1, column 18:
//
// 1 | x = [{ key = 42 #
// ^
//
// Error help:
//
// Inline tables must always be on a single line:
//
// table = {key = 42, second = 43}
//
// It is invalid to split them over multiple lines like so:
//
// # INVALID
// table = {
// key = 42,
// second = 43
// }
//
// Use regular tables for this:
//
// [table]
// key = 42
// second = 43
type ParseError struct {
Message string // Short technical message.
Usage string // Longer message with usage guidance; may be blank.
Position Position // Position of the error
LastKey string // Last parsed key, may be blank.
Line int // Line the error occurred. Deprecated: use Position.
err error
input string
}
// Position of an error.
type Position struct {
Line int // Line number, starting at 1.
Start int // Start of error, as byte offset starting at 0.
Len int // Length in bytes.
}
func (pe ParseError) Error() string {
msg := pe.Message
if msg == "" { // Error from errorf()
msg = pe.err.Error()
}
if pe.LastKey == "" {
return fmt.Sprintf("toml: line %d: %s", pe.Position.Line, msg)
}
return fmt.Sprintf("toml: line %d (last key %q): %s",
pe.Position.Line, pe.LastKey, msg)
}
// ErrorWithPosition() returns the error with detailed location context.
//
// See the documentation on ParseError.
func (pe ParseError) ErrorWithPosition() string {
if pe.input == "" { // Should never happen, but just in case.
return pe.Error()
}
var (
lines = strings.Split(pe.input, "\n")
col = pe.column(lines)
b = new(strings.Builder)
)
msg := pe.Message
if msg == "" {
msg = pe.err.Error()
}
// TODO: don't show control characters as literals? This may not show up
// well everywhere.
if pe.Position.Len == 1 {
fmt.Fprintf(b, "toml: error: %s\n\nAt line %d, column %d:\n\n",
msg, pe.Position.Line, col+1)
} else {
fmt.Fprintf(b, "toml: error: %s\n\nAt line %d, column %d-%d:\n\n",
msg, pe.Position.Line, col, col+pe.Position.Len)
}
if pe.Position.Line > 2 {
fmt.Fprintf(b, "% 7d | %s\n", pe.Position.Line-2, lines[pe.Position.Line-3])
}
if pe.Position.Line > 1 {
fmt.Fprintf(b, "% 7d | %s\n", pe.Position.Line-1, lines[pe.Position.Line-2])
}
fmt.Fprintf(b, "% 7d | %s\n", pe.Position.Line, lines[pe.Position.Line-1])
fmt.Fprintf(b, "% 10s%s%s\n", "", strings.Repeat(" ", col), strings.Repeat("^", pe.Position.Len))
return b.String()
}
// ErrorWithUsage() returns the error with detailed location context and usage
// guidance.
//
// See the documentation on ParseError.
func (pe ParseError) ErrorWithUsage() string {
m := pe.ErrorWithPosition()
if u, ok := pe.err.(interface{ Usage() string }); ok && u.Usage() != "" {
return m + "Error help:\n\n " +
strings.ReplaceAll(strings.TrimSpace(u.Usage()), "\n", "\n ") +
"\n"
}
return m
}
func (pe ParseError) column(lines []string) int {
var pos, col int
for i := range lines {
ll := len(lines[i]) + 1 // +1 for the removed newline
if pos+ll >= pe.Position.Start {
col = pe.Position.Start - pos
if col < 0 { // Should never happen, but just in case.
col = 0
}
break
}
pos += ll
}
return col
}
type (
errLexControl struct{ r rune }
errLexEscape struct{ r rune }
errLexUTF8 struct{ b byte }
errLexInvalidNum struct{ v string }
errLexInvalidDate struct{ v string }
errLexInlineTableNL struct{}
errLexStringNL struct{}
)
func (e errLexControl) Error() string {
return fmt.Sprintf("TOML files cannot contain control characters: '0x%02x'", e.r)
}
func (e errLexControl) Usage() string { return "" }
func (e errLexEscape) Error() string { return fmt.Sprintf(`invalid escape in string '\%c'`, e.r) }
func (e errLexEscape) Usage() string { return usageEscape }
func (e errLexUTF8) Error() string { return fmt.Sprintf("invalid UTF-8 byte: 0x%02x", e.b) }
func (e errLexUTF8) Usage() string { return "" }
func (e errLexInvalidNum) Error() string { return fmt.Sprintf("invalid number: %q", e.v) }
func (e errLexInvalidNum) Usage() string { return "" }
func (e errLexInvalidDate) Error() string { return fmt.Sprintf("invalid date: %q", e.v) }
func (e errLexInvalidDate) Usage() string { return "" }
func (e errLexInlineTableNL) Error() string { return "newlines not allowed within inline tables" }
func (e errLexInlineTableNL) Usage() string { return usageInlineNewline }
func (e errLexStringNL) Error() string { return "strings cannot contain newlines" }
func (e errLexStringNL) Usage() string { return usageStringNewline }
const usageEscape = `
A '\' inside a "-delimited string is interpreted as an escape character.
The following escape sequences are supported:
\b, \t, \n, \f, \r, \", \\, \uXXXX, and \UXXXXXXXX
To prevent a '\' from being recognized as an escape character, use either:
- a ' or '''-delimited string; escape characters aren't processed in them; or
- write two backslashes to get a single backslash: '\\'.
If you're trying to add a Windows path (e.g. "C:\Users\martin") then using '/'
instead of '\' will usually also work: "C:/Users/martin".
`
const usageInlineNewline = `
Inline tables must always be on a single line:
table = {key = 42, second = 43}
It is invalid to split them over multiple lines like so:
# INVALID
table = {
key = 42,
second = 43
}
Use regular tables for this:
[table]
key = 42
second = 43
`
const usageStringNewline = `
Strings must always be on a single line, and cannot span more than one line:

    # INVALID
    string = "Hello,
    world!"

Instead use """ or ''' to split strings over multiple lines:

    string = """Hello,
    world!"""
`

vendor/github.com/BurntSushi/toml/go.mod generated vendored Normal file

@@ -0,0 +1,3 @@
module github.com/BurntSushi/toml
go 1.16

vendor/github.com/BurntSushi/toml/internal/tz.go generated vendored Normal file

@@ -0,0 +1,36 @@
package internal
import "time"
// Timezones used for local datetime, date, and time TOML types.
//
// The exact way times and dates without a timezone should be interpreted is not
// well-defined in the TOML specification and is left to the implementation. These
// default to the current local timezone offset of the computer, but this can be
// changed by setting these variables before decoding.
//
// TODO:
// Ideally we'd like to offer people the ability to configure the used timezone
// by setting Decoder.Timezone and Encoder.Timezone; however, this is a bit
// tricky: the reason we use three different variables for this is to support
// round-tripping; without these specific TZ names we wouldn't know which
// format to use.
//
// There isn't a good way to encode this right now though, and passing this sort
// of information also ties in to various related issues such as string format
// encoding, encoding of comments, etc.
//
// So, for the time being, just put this in internal until we can write a good
// comprehensive API for doing all of this.
//
// They're exported because they're referred to from e.g. internal/tag.
//
// Note that this behaviour is valid according to the TOML spec as the exact
// behaviour is left up to implementations.
var (
	localOffset   = func() int { _, o := time.Now().Zone(); return o }()
	LocalDatetime = time.FixedZone("datetime-local", localOffset)
	LocalDate     = time.FixedZone("date-local", localOffset)
	LocalTime     = time.FixedZone("time-local", localOffset)
)
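
A hedged sketch of what these zones mean for callers: a datetime written without an offset should decode into a time.Time carrying the "datetime-local" fixed zone defined above. The struct and value are invented, and the snippet assumes imports of "fmt", "time", and the toml package.

var v struct{ Created time.Time }

// No offset in the TOML value, so it is a local datetime and is given the
// LocalDatetime fixed zone rather than UTC or a named location.
if _, err := toml.Decode("created = 2021-05-01T12:00:00", &v); err != nil {
	panic(err)
}
fmt.Println(v.Created.Location()) // datetime-local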

File diff suppressed because it is too large

@@ -1,33 +1,39 @@
package toml
import "strings"
import (
"strings"
)
// MetaData allows access to meta information about TOML data that may not
// be inferrable via reflection. In particular, whether a key has been defined
// and the TOML type of a key.
// MetaData allows access to meta information about TOML data that's not
// accessible otherwise.
//
// It allows checking if a key is defined in the TOML data, whether any keys
// were undecoded, and the TOML type of a key.
type MetaData struct {
context Key // Used only during decoding.
mapping map[string]interface{}
types map[string]tomlType
keys []Key
decoded map[string]bool
context Key // Used only during decoding.
decoded map[string]struct{}
}
// IsDefined returns true if the key given exists in the TOML data. The key
// should be specified hierarchially. e.g.,
// IsDefined reports if the key exists in the TOML data.
//
// // access the TOML key 'a.b.c'
// IsDefined("a", "b", "c")
// The key should be specified hierarchically, for example to access the TOML
// key "a.b.c" you would use IsDefined("a", "b", "c"). Keys are case sensitive.
//
// IsDefined will return false if an empty key given. Keys are case sensitive.
// Returns false for an empty key.
func (md *MetaData) IsDefined(key ...string) bool {
if len(key) == 0 {
return false
}
var hash map[string]interface{}
var ok bool
var hashOrVal interface{} = md.mapping
var (
hash map[string]interface{}
ok bool
hashOrVal interface{} = md.mapping
)
for _, k := range key {
if hash, ok = hashOrVal.(map[string]interface{}); !ok {
return false
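
To make the IsDefined contract above concrete, a minimal sketch; the sample TOML and struct are invented, and the imports are the same as in the earlier error-handling sketch.

var cfg struct {
	Server struct{ Port int }
}

md, err := toml.Decode("[server]\nport = 8080", &cfg)
if err != nil {
	panic(err)
}
fmt.Println(md.IsDefined("server", "port")) // true
fmt.Println(md.IsDefined("server", "host")) // false
fmt.Println(md.IsDefined())                 // false: empty key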
@@ -41,58 +47,20 @@ func (md *MetaData) IsDefined(key ...string) bool {
// Type returns a string representation of the type of the key specified.
//
// Type will return the empty string if given an empty key or a key that
// does not exist. Keys are case sensitive.
// Type will return the empty string if given an empty key or a key that does
// not exist. Keys are case sensitive.
func (md *MetaData) Type(key ...string) string {
fullkey := strings.Join(key, ".")
if typ, ok := md.types[fullkey]; ok {
if typ, ok := md.types[Key(key).String()]; ok {
return typ.typeString()
}
return ""
}
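
A short sketch of Type in use; the input document is invented, and the exact output assumes the type names this package reports for integers and strings.

v := map[string]interface{}{}

md, err := toml.Decode("answer = 42\nname = \"skopeo\"", &v)
if err != nil {
	panic(err)
}
fmt.Println(md.Type("answer"))  // Integer
fmt.Println(md.Type("name"))    // String
fmt.Println(md.Type("missing")) // empty string: the key does not exist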
// Key is the type of any TOML key, including key groups. Use (MetaData).Keys
// to get values of this type.
type Key []string
func (k Key) String() string {
return strings.Join(k, ".")
}
func (k Key) maybeQuotedAll() string {
var ss []string
for i := range k {
ss = append(ss, k.maybeQuoted(i))
}
return strings.Join(ss, ".")
}
func (k Key) maybeQuoted(i int) string {
quote := false
for _, c := range k[i] {
if !isBareKeyChar(c) {
quote = true
break
}
}
if quote {
return "\"" + strings.Replace(k[i], "\"", "\\\"", -1) + "\""
}
return k[i]
}
func (k Key) add(piece string) Key {
newKey := make(Key, len(k)+1)
copy(newKey, k)
newKey[len(k)] = piece
return newKey
}
// Keys returns a slice of every key in the TOML data, including key groups.
// Each key is itself a slice, where the first element is the top of the
// hierarchy and the last is the most specific.
//
// The list will have the same order as the keys appeared in the TOML data.
// Each key is itself a slice, where the first element is the top of the
// hierarchy and the last is the most specific. The list will have the same
// order as the keys appeared in the TOML data.
//
// All keys returned are non-empty.
func (md *MetaData) Keys() []Key {
@@ -113,9 +81,40 @@ func (md *MetaData) Keys() []Key {
func (md *MetaData) Undecoded() []Key {
undecoded := make([]Key, 0, len(md.keys))
for _, key := range md.keys {
if !md.decoded[key.String()] {
if _, ok := md.decoded[key.String()]; !ok {
undecoded = append(undecoded, key)
}
}
return undecoded
}
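
Undecoded is what callers typically use to warn about unrecognized configuration keys; a minimal sketch with an invented struct and input, again reusing the imports from the first sketch.

var cfg struct{ Name string }

md, err := toml.Decode("name = \"skopeo\"\ntypo_key = true", &cfg)
if err != nil {
	panic(err)
}
for _, key := range md.Undecoded() {
	// typo_key was present in the TOML but has no matching struct field.
	fmt.Printf("unknown configuration key: %s\n", key)
}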
// Key represents any TOML key, including key groups. Use (MetaData).Keys to get
// values of this type.
type Key []string
func (k Key) String() string {
ss := make([]string, len(k))
for i := range k {
ss[i] = k.maybeQuoted(i)
}
return strings.Join(ss, ".")
}
func (k Key) maybeQuoted(i int) string {
if k[i] == "" {
return `""`
}
for _, c := range k[i] {
if !isBareKeyChar(c) {
return `"` + dblQuotedReplacer.Replace(k[i]) + `"`
}
}
return k[i]
}
func (k Key) add(piece string) Key {
newKey := make(Key, len(k)+1)
copy(newKey, k)
newKey[len(k)] = piece
return newKey
}
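
Finally, a hedged sketch of the quoting behaviour of the new Key.String shown above; the document is invented, and the quoted output assumes dotted keys are rendered with double quotes as implemented here.

v := map[string]interface{}{}

md, err := toml.Decode("\"dotted.key\" = 1\nplain = 2", &v)
if err != nil {
	panic(err)
}
for _, k := range md.Keys() {
	fmt.Println(k.String())
}
// Expected output, in document order:
//   "dotted.key"
//   plain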

Some files were not shown because too many files have changed in this diff.