Compare commits


250 Commits

Author SHA1 Message Date
Miloslav Trmač
a552909737 Release 1.12.0
More template functions available in (skopeo inspect --format)
Adds new ways to supply trusted keys to (skopeo standalone-verify).

Now requires Go 1.18.

- [CI:DOCS] Fix up language in README
- Add unit tests for tlsVerifyConfig's yaml.Unmarshaler
- Cirrus: Use human-readable CI VM Images
- [CI:BUILD] copr: fix el8 build and enable debuginfo
- [CI:BUILD] enable debuginfo for el8 copr builds
- Update to use, and benefit from, Go 1.18
- [CI:DOCS] Disable dependabot
- Renovate: c/common rule moved to defaults
- [CI:BUILD] Packit: initial enablement
- Replace gopkg.in/check.v1 by github.com/stretchr/testify/suite/
- Corrected typo in skopeo-sync and updated description
- Fix tabulating output in (skopeo inspect --format)
- Use common library reporter
- Fix formatting of inspect examples
- Use io.WriteString
- Factor out the output of data in (skopeo inspect)
- Simplify inspectOptions.writeOutput a bit more
- Cirrus: Update CI VM images
- Make the installation instructions more prominent in README.md
- [CI:BUILD] Packit: trigger builds on commit to main branch
- systemtests: Fix 040-local-registry-auth about XDG_RUNTIME_DIR
- Verify signatures from a trust store
- Rename argument; only use "any" with a public key file; double-check that the fingerprint is in the public key file.
- Use the multiple-fingerprint function; allow a comma-separated fingerprint list
- Avoid use of a deprecated capability.NewPid
- Fix error handling of signature.NewEphemeralGPGSigningMechanism
- Cross-link the top-level and subcommand option lists
- Use golangci-lint instead of golint
- Add (make tools) to install (for now only) golangci-lint, use it in Cirrus

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2023-04-12 22:47:03 +02:00
Miloslav Trmač
2363dfeeab Merge pull request #1969 from containers/renovate/github.com-containers-common-0.x
Update module github.com/containers/common to v0.52.0
2023-04-11 20:02:19 +02:00
renovate[bot]
5f0314f342 Update module github.com/containers/common to v0.52.0
Signed-off-by: Renovate Bot <bot@renovateapp.com>
2023-04-11 17:35:08 +00:00
Miloslav Trmač
c1836e1971 Merge pull request #1963 from containers/renovate/github.com-containers-storage-1.x
Update module github.com/containers/storage to v1.46.1
2023-04-11 19:32:23 +02:00
renovate[bot]
66157589c5 Update module github.com/containers/storage to v1.46.1
Signed-off-by: Renovate Bot <bot@renovateapp.com>
2023-04-10 22:42:41 +00:00
Daniel J Walsh
1aa6371db4 Merge pull request #1944 from mtrmac/top-options
Cross-link the top-level and subcommand option lists
2023-04-10 15:07:07 -04:00
Daniel J Walsh
c1d54feac9 Merge pull request #1967 from mtrmac/golangci-lint
Rework hack/make*, and switch from golint to golangci-lint
2023-04-06 16:53:40 -04:00
Miloslav Trmač
7c66b7405a Add (make tools) to install (for now only) golangci-lint, use it in Cirrus
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2023-04-06 21:45:51 +02:00
Miloslav Trmač
d4bd787e5f Use golangci-lint instead of golint
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2023-04-06 21:45:51 +02:00
Miloslav Trmač
c538340e3b Finally, eliminate hack/make.sh
The only thing hack/make.sh is now really doing is the
warning + sleep without SKOPEO_CONTAINER_TESTS .

So, make that a separate script, and eliminate the
hack/make directory.

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2023-04-06 21:45:51 +02:00
Miloslav Trmač
f8f5a25fe2 Actually fail if (go vet) fails
The errors are printed on stderr, so read that.

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2023-04-06 21:45:51 +02:00
Miloslav Trmač
aebab49285 Speed up validate-git-marks by about a factor of three
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2023-04-06 21:45:51 +02:00
Miloslav Trmač
4298692dd4 Don't use hack/make.sh for validate-git-marks
hack/make.sh now does not make a difference, so simplify.

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2023-04-06 21:45:51 +02:00
Miloslav Trmač
7e35ad54c3 Test all files by validate-git-marks
This is simpler to do, cheap enough for our repo size, and it
does not require network access to see which files to check.

And it's the last user of hack/make/.validate, which I wanted to
remove in the first place.

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2023-04-06 21:45:51 +02:00
Miloslav Trmač
789257f767 Simplify the package list of (go vet)
That extra process is an extra 0.5s on macOS at least.

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2023-04-06 21:45:51 +02:00
Miloslav Trmač
bee51e5eed Don't use hack/make.sh for validate-gofmt
hack/make.sh now does not make a difference, so simplify.

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2023-04-06 21:45:51 +02:00
Miloslav Trmač
85fef03670 Run gofmt on all files, not just the changed ones
... so that if we upgrade gofmt, the updates
need to happen immediately.

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2023-04-06 21:45:51 +02:00
Miloslav Trmač
82268ea8bf Don't use hack/make.sh for validate-lint
hack/make.sh now does not make a difference, so simplify.

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2023-04-06 21:45:51 +02:00
Miloslav Trmač
694b1565d1 Lint many more files in validate-lint
- Always lint everything, not just changed files;
  that means that if we upgrade the linter, we will
  need to clean everything up, but that's a good thing
  for contributors who come after that linter upgrade.

- Don't skip linting the integration tests, there's no
  good reason to skip them.

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2023-04-06 21:45:51 +02:00
Miloslav Trmač
43090b2917 Don't use hack/make.sh for validate-vet
hack/make.sh now does not make a difference, so simplify.

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2023-04-06 21:45:51 +02:00
Miloslav Trmač
225f239a69 Remove no-longer-necessary module options
We now require Go 1.18. As of that version:
- GO111MODULE=on is implied by having a go.mod file
- -mod=vendor is implied by having a vendor directory

so just remove both options everywhere

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2023-04-06 21:45:51 +02:00
Miloslav Trmač
98b01af031 Fix Makefile dependencies
validate-docs requires bin/skopeo; test-unit-local actually doesn't.

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2023-04-06 21:45:51 +02:00
Miloslav Trmač
835d71a3a4 Remove some outright unused code from hack/make*
Should not affect observable behavior.

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2023-04-06 21:45:51 +02:00
Miloslav Trmač
30ecd8f040 Cross-link the top-level and subcommand option lists
... primarily to make the top-level options more discoverable.

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2023-04-06 21:45:36 +02:00
Miloslav Trmač
cd1c43c65d Merge pull request #1965 from mtrmac/verify-error
Fix error handling of signature.NewEphemeralGPGSigningMechanism
2023-04-06 21:44:58 +02:00
Miloslav Trmač
4be583c8a9 Fix error handling of signature.NewEphemeralGPGSigningMechanism
signature.NewEphemeralGPGSigningMechanism is called in an if branch
where the previous err := introduces a "new" err variable, which means
the failure isn't visible after the if.

So, do the dumb thing and just check on both branches explicitly.
(We still need to worry about correctly setting "mech" and
"publicKeyfingerprints" to persist after the if.)

How I hate Go sometimes. And this shows we really should update
the linter.

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2023-04-06 21:23:36 +02:00
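To make the shadowing pitfall concrete, here is a minimal Go sketch; `newMechanism` is a hypothetical stand-in for signature.NewEphemeralGPGSigningMechanism, and the surrounding logic is illustrative, not skopeo's actual code.

```go
package main

import (
	"errors"
	"fmt"
)

// newMechanism is a hypothetical stand-in for
// signature.NewEphemeralGPGSigningMechanism: it returns a mechanism,
// the public key fingerprints, and an error.
func newMechanism() (string, []string, error) {
	return "", nil, errors.New("no usable keys")
}

func main() {
	var mech string
	var publicKeyFingerprints []string
	var err error

	usePublicKeyFile := true
	if usePublicKeyFile {
		// Buggy variant: `mech, publicKeyFingerprints, err := newMechanism()`
		// would declare *new* variables scoped to this block, so a failure
		// here would be invisible after the if.
		mech, publicKeyFingerprints, err = newMechanism()
		if err != nil { // the dumb-but-correct fix: check err on this branch
			fmt.Println("initializing signing mechanism:", err)
			return
		}
	}
	fmt.Println(mech, publicKeyFingerprints)
}
```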
Miloslav Trmač
0f3e6e30e4 Merge pull request #1966 from containers/renovate/major-ci-vm-image
chore(deps): update dependency containers/automation_images to v20230405
2023-04-06 21:22:33 +02:00
renovate[bot]
e841409787 chore(deps): update dependency containers/automation_images to v20230405
Signed-off-by: Renovate Bot <bot@renovateapp.com>
2023-04-06 16:29:28 +00:00
Miloslav Trmač
9ffdceb157 Merge pull request #1968 from mtrmac/NewPid2
Avoid use of a deprecated capability.NewPid
2023-04-06 18:24:32 +02:00
Miloslav Trmač
4f5e821436 Avoid use of a deprecated capability.NewPid
This is reported by golangci-lint.

(Absolutely untested.)

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2023-04-05 22:21:26 +02:00
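For reference, a minimal sketch of the non-deprecated path, assuming the github.com/syndtr/gocapability API in which NewPid2 only allocates the object and an explicit Load() reads the state; this mirrors the change's intent and is not the actual skopeo code.

```go
package main

import (
	"fmt"

	"github.com/syndtr/gocapability/capability"
)

func main() {
	// capability.NewPid is deprecated; it allocated and loaded in one call.
	caps, err := capability.NewPid2(0) // 0 means the current process
	if err != nil {
		fmt.Println("allocating capability set:", err)
		return
	}
	if err := caps.Load(); err != nil { // explicit load step with NewPid2
		fmt.Println("loading capability state:", err)
		return
	}
	fmt.Println("CAP_SYS_ADMIN effective:",
		caps.Get(capability.EFFECTIVE, capability.CAP_SYS_ADMIN))
}
```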
Miloslav Trmač
1e70fee21d Merge pull request #1962 from containers/renovate/github.com-spf13-cobra-1.x
fix(deps): update module github.com/spf13/cobra to v1.7.0
2023-04-05 20:07:27 +02:00
renovate[bot]
ca0f8418e1 fix(deps): update module github.com/spf13/cobra to v1.7.0
Signed-off-by: Renovate Bot <bot@renovateapp.com>
2023-04-05 16:29:43 +00:00
Miloslav Trmač
3ddbfec17c Merge pull request #1903 from containers/renovate/github.com-containers-image-v5-5.x
fix(deps): update module github.com/containers/image/v5 to v5.25.0
2023-04-05 18:28:43 +02:00
renovate[bot]
b0d339f0fd fix(deps): update module github.com/containers/image/v5 to v5.25.0
Signed-off-by: Renovate Bot <bot@renovateapp.com>
2023-04-05 16:14:44 +00:00
Miloslav Trmač
c4dac7632c Merge pull request #1964 from containers/renovate/golang.org-x-term-0.x
fix(deps): update module golang.org/x/term to v0.7.0
2023-04-05 18:13:15 +02:00
renovate[bot]
03ca2871fe fix(deps): update module golang.org/x/term to v0.7.0
Signed-off-by: Renovate Bot <bot@renovateapp.com>
2023-04-05 15:45:36 +00:00
Miloslav Trmač
841eab319f Merge pull request #1950 from Jamstah/easy-verify-options
Verify signatures from a list of public keys
2023-04-05 17:40:52 +02:00
James Hewitt
4ca2058d01 Use multiple fingerprint function
Allow comma separated fingerprint list

To be squashed later

Signed-off-by: James Hewitt <james.hewitt@uk.ibm.com>
2023-04-01 12:07:35 +01:00
James Hewitt
c54f2025d8 Review comments (to be squashed later)
Signed-off-by: James Hewitt <james.hewitt@uk.ibm.com>
2023-04-01 11:51:58 +01:00
James Hewitt
9b1f1fa1a9 Rename argument. Only use any with public key file. Double check fingerprint is in public key file.
Signed-off-by: James Hewitt <james.hewitt@uk.ibm.com>
2023-04-01 11:51:57 +01:00
James Hewitt
3097b7a4e9 Verify signatures from a trust store
Add the ability to use an on-disk trust store to verify signatures. Also allow the user to trust any known fingerprint instead of having to specify one.

Signed-off-by: James Hewitt <james.hewitt@uk.ibm.com>
2023-04-01 11:51:57 +01:00
Miloslav Trmač
d08bf21367 Merge pull request #1959 from mtrmac/update-image
Update c/image from the main branch
2023-04-01 12:41:59 +02:00
Miloslav Trmač
bfe82593c8 Update c/image from the main branch
> go get github.com/containers/image/v5@main
> make vendor

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2023-04-01 12:24:04 +02:00
Miloslav Trmač
4f475bd4d2 Merge pull request #1957 from containers/renovate/github.com-containers-common-0.x
Update module github.com/containers/common to v0.51.2
2023-04-01 12:14:03 +02:00
renovate[bot]
468ac6559e Update module github.com/containers/common to v0.51.2
Signed-off-by: Renovate Bot <bot@renovateapp.com>
2023-04-01 09:30:13 +00:00
Miloslav Trmač
2409c25922 Merge pull request #1955 from containers/renovate/major-ci-vm-image
Update dependency containers/automation_images to v20230330
2023-04-01 11:27:58 +02:00
renovate[bot]
7481aae6ac Update dependency containers/automation_images to v20230330
Signed-off-by: Renovate Bot <bot@renovateapp.com>
2023-03-30 19:02:55 +00:00
Miloslav Trmač
3952227939 Merge pull request #1953 from sstosh/tests-xdg
systemtests: Fix 040-local-registry-auth about XDG_RUNTIME_DIR
2023-03-28 22:36:09 +02:00
Toshiki Sonoda
454f8559c1 systemtests: Fix 040-local-registry-auth about XDG_RUNTIME_DIR
Need to restore XDG_RUNTIME_DIR when podman removes the registry.

Signed-off-by: Toshiki Sonoda <sonoda.toshiki@fujitsu.com>
2023-03-28 15:01:41 +09:00
Miloslav Trmač
0c43e43e36 Merge pull request #1942 from lsm5/packit-build-on-main-commit
[CI:BUILD] Packit: trigger builds on commit to main branch
2023-03-24 21:03:43 +01:00
Lokesh Mandvekar
bbdcb79c03 [CI:BUILD] Packit: trigger builds on commit to main branch
This commit lets packit trigger builds on
`rhcontainerbot/podman-next` copr after a commit to the main branch
instead of the current github webhook trigger.

The builds triggered via packit also provide more information in their
`version-release`:

Current webhook triggered build:
`101:0.0.git.2460.cfd6f20f-1`.
Ref: https://copr.fedorainfracloud.org/coprs/rhcontainerbot/podman-next/package/skopeo/

Packit triggered build for another package (netavark) on podman-next:
101:1.6.0~dev-1.20230321121647013339.main.61.gd6f0352
Ref: https://copr.fedorainfracloud.org/coprs/rhcontainerbot/podman-next/package/netavark/

The packit-triggered build correctly shows the upstream branch name,
commit ID, and timestamp, as well as the upstream version.

Signed-off-by: Lokesh Mandvekar <lsm5@fedoraproject.org>
2023-03-24 11:21:54 +05:30
Miloslav Trmač
dd80c87c2c Merge pull request #1947 from containers/renovate/actions-stale-8.x
[skip-ci] Update actions/stale action to v8
2023-03-23 21:33:37 +01:00
renovate[bot]
cd4f2ee554 [skip-ci] Update actions/stale action to v8
Signed-off-by: Renovate Bot <bot@renovateapp.com>
2023-03-23 09:38:29 +00:00
Valentin Rothberg
f588f8bde7 Merge pull request #1943 from mtrmac/promote-install
Make the installation instructions more prominent in README.md
2023-03-23 09:33:11 +01:00
Miloslav Trmač
b2ede999f3 Make the installation instructions more prominent in README.md
... per https://github.com/containers/skopeo/issues/1940 .

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2023-03-22 22:29:47 +01:00
Miloslav Trmač
b43ec279d2 Merge pull request #1945 from containers/renovate/major-ci-vm-image
Update dependency containers/automation_images to v20230320
2023-03-22 22:28:17 +01:00
renovate[bot]
8ea5fd4458 Update dependency containers/automation_images to v20230320
Signed-off-by: Renovate Bot <bot@renovateapp.com>
2023-03-22 20:14:04 +00:00
Miloslav Trmač
81c63c331f Merge pull request #1946 from containers/renovate/github.com-containers-common-0.x
Update module github.com/containers/common to v0.51.1
2023-03-22 21:12:46 +01:00
renovate[bot]
aa9862a718 Update module github.com/containers/common to v0.51.1
Signed-off-by: Renovate Bot <bot@renovateapp.com>
2023-03-22 15:11:42 +00:00
Miloslav Trmač
cfd6f20fbd Merge pull request #1939 from cevich/debian_vms
Cirrus: Update CI VM images
2023-03-16 22:30:48 +01:00
Chris Evich
0ad54d6df3 Cirrus: Update CI VM images
Signed-off-by: Chris Evich <cevich@redhat.com>
2023-03-16 15:34:44 -04:00
Miloslav Trmač
c88f3eab64 Merge pull request #1938 from lsm5/update-golang-org-x-net
bump golang.org/x/net to v0.8.0
2023-03-15 19:17:08 +01:00
Lokesh Mandvekar
20447df139 bump golang.org/x/net to v0.8.0
Resolves: CVE-2022-41723
Ref: https://cve.mitre.org/cgi-bin/cvename.cgi?name=2022-41723

Signed-off-by: Lokesh Mandvekar <lsm5@fedoraproject.org>
2023-03-15 18:58:10 +05:30
Miloslav Trmač
d15ecce269 Merge pull request #1932 from containers/renovate/golang.org-x-term-0.x
Update module golang.org/x/term to v0.6.0
2023-03-06 17:59:46 +01:00
renovate[bot]
3481a5b927 Update module golang.org/x/term to v0.6.0
Signed-off-by: Renovate Bot <bot@renovateapp.com>
2023-03-05 03:40:10 +00:00
Valentin Rothberg
44eff83253 Merge pull request #1927 from mtrmac/WriteString
Use io.WriteString
2023-02-28 09:16:12 +01:00
Valentin Rothberg
a1c9655cff Merge pull request #1928 from mtrmac/inspect-duplicated
Avoid duplicated code in `skopeo inspect`
2023-02-28 09:14:18 +01:00
Miloslav Trmač
bcc0d54e54 Simplify inspectOptions.writeOutput a bit more
Don't maintain a named array variable that we only append to
exactly once.

Should not change behavior.

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2023-02-27 19:36:16 +01:00
Miloslav Trmač
c345785d28 Factor out the output of data in (skopeo inspect)
The two code paths are basically exactly identical, so
share the code.

Should not change behavior.

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2023-02-27 19:35:53 +01:00
Miloslav Trmač
2a6a944c13 Use io.WriteString
... mostly so that I get practice and remember this exists in the future.

(This saves one allocation & copy when the target implements
io.StringWriter. And that makes absolutely no relevant difference
on this path.)

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2023-02-27 18:15:47 +01:00
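For context, a tiny standalone illustration of what io.WriteString does and when the io.StringWriter fast path applies (generic example, not the skopeo call site):

```go
package main

import (
	"io"
	"os"
	"strings"
)

func main() {
	// io.WriteString calls the destination's WriteString method when it
	// implements io.StringWriter, skipping the []byte(s) conversion;
	// otherwise it falls back to a plain Write.
	if _, err := io.WriteString(os.Stdout, "inspect output goes here\n"); err != nil {
		panic(err)
	}

	var b strings.Builder // implements io.StringWriter
	if _, err := io.WriteString(&b, "buffered"); err != nil {
		panic(err)
	}
	_ = b.String()
}
```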
Miloslav Trmač
ad1b09dea4 Merge pull request #1925 from containers/renovate/github.com-stretchr-testify-1.x
Update module github.com/stretchr/testify to v1.8.2
2023-02-27 15:40:25 +01:00
renovate[bot]
9a02c1eb57 Update module github.com/stretchr/testify to v1.8.2
Signed-off-by: Renovate Bot <bot@renovateapp.com>
2023-02-27 10:52:05 +00:00
Daniel J Walsh
7060b094a7 Merge pull request #1923 from containers/renovate/github.com-containers-storage-1.x
Update module github.com/containers/storage to v1.45.4
2023-02-27 05:48:27 -05:00
renovate[bot]
f1c03ef104 Update module github.com/containers/storage to v1.45.4
Signed-off-by: Renovate Bot <bot@renovateapp.com>
2023-02-23 19:18:39 +00:00
Daniel J Walsh
f0abd60623 Merge pull request #1904 from containers/renovate/golang.org-x-exp-digest
Update golang.org/x/exp digest to 5e25df0
2023-02-23 14:15:50 -05:00
Daniel J Walsh
719ae1d890 Merge pull request #1908 from mtrmac/warnings
Fix some warnings
2023-02-23 14:15:11 -05:00
renovate[bot]
64daedca6c Update golang.org/x/exp digest to 5e25df0
Signed-off-by: Renovate Bot <bot@renovateapp.com>
2023-02-23 18:49:00 +00:00
Daniel J Walsh
508891281c Merge pull request #1822 from cblecker/report
Use common library reporter
2023-02-23 12:32:55 -05:00
Christoph Blecker
c07f20982c Fix formatting of inspect examples
Signed-off-by: Christoph Blecker <cblecker@redhat.com>
2023-02-23 09:06:16 -08:00
Christoph Blecker
313f142c87 Use common library reporter
Signed-off-by: Christoph Blecker <cblecker@redhat.com>
2023-02-23 08:59:13 -08:00
Miloslav Trmač
4beb3f0af9 Fix some warnings
golangci-lint linter: unparam

This primarily fixes TestProxy to test the correct image

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2023-02-23 16:08:45 +01:00
Valentin Rothberg
371604ba27 Merge pull request #1921 from mtrmac/inspect-tabs
Fix tabulating output in (skopeo inspect --format)
2023-02-23 08:26:40 +01:00
Miloslav Trmač
1c3d49f012 Fix tabulating output in (skopeo inspect --format)
tabwriter buffers lines that contain \t in memory, and only
writes them out on a .Flush(). So actually call that.

Without this, things like
> --format 'name\tdigest\tlabels\n{{.Name}}\t{{.Digest}}\t{{.Labels}}\n'
result in no output at all.

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2023-02-23 00:41:09 +01:00
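A self-contained sketch of the behavior described above, using generic text/tabwriter calls rather than the actual inspect code:

```go
package main

import (
	"fmt"
	"os"
	"text/tabwriter"
)

func main() {
	// Lines containing \t are buffered so columns can be aligned;
	// nothing is written until Flush is called.
	w := tabwriter.NewWriter(os.Stdout, 8, 8, 2, ' ', 0)
	fmt.Fprintf(w, "name\tdigest\n")
	fmt.Fprintf(w, "example.com/app\tsha256:1234…\n")
	if err := w.Flush(); err != nil { // without this: no output at all
		panic(err)
	}
}
```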
Miloslav Trmač
d833619740 Merge pull request #1920 from dbeezt/main
Corrected Skopeo-Sync Documentation
2023-02-22 18:11:47 +01:00
dbeezt
fb0be61374 Corrected typo in skopeo-sync and updated description
Signed-off-by: dbeezt <dbrereton1995@gmail.com>
2023-02-22 15:30:02 +00:00
Valentin Rothberg
e61893eebd Merge pull request #1913 from mtrmac/testify-suite
Replace gopkg.in/check.v1 by github.com/stretchr/testify/suite/
2023-02-21 10:24:59 +01:00
Miloslav Trmač
2ef9cf6902 Replace gopkg.in/check.v1 by github.com/stretchr/testify/suite/
gopkg.in/check.v1 hasn't had any commit since Nov 2020.
That's not an immediate issue for a test-only dependency, but
because it hides access to the standard library *testing.T,
eventually it will become limiting.

Also, using the same framework for unit and integration tests
seems practical.

This is mostly a batch copy&paste job, with a fairly high risk
of unexpected breakage.

Also, I didn't take much time at all to carefully choose between
assert.* and require.*; we can tune that as failures show up.

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2023-02-17 02:35:51 +01:00
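A minimal sketch of the testify suite pattern the tests were ported to; the suite and fixture names here are invented for illustration and are not skopeo's actual test code:

```go
package example_test

import (
	"testing"

	"github.com/stretchr/testify/suite"
)

// syncSuite is a hypothetical suite: shared fixtures live on the struct,
// and the embedded suite.Suite exposes the underlying *testing.T via s.T().
type syncSuite struct {
	suite.Suite
	registry string
}

func (s *syncSuite) SetupSuite() {
	s.registry = "localhost:5000" // e.g. start a test registry here
}

func (s *syncSuite) TestRegistryConfigured() {
	s.Require().NotEmpty(s.registry)               // require.* aborts the test on failure
	s.Assert().Equal("localhost:5000", s.registry) // assert.* records and continues
}

func TestSyncSuite(t *testing.T) {
	suite.Run(t, &syncSuite{})
}
```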
Valentin Rothberg
c0c7065737 Merge pull request #1911 from mtrmac/c-image-update
Update c/image after https://github.com/containers/image/pull/1842
2023-02-16 09:04:54 +01:00
Miloslav Trmač
0ba164f072 Update c/image after https://github.com/containers/image/pull/1842
... so that Renovate doesn't keep proposing a downgrade.

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2023-02-15 19:20:03 +01:00
Miloslav Trmač
e0893efc48 Merge pull request #1905 from lsm5/packit
[CI:BUILD] Packit: initial enablement
2023-02-14 18:33:12 +01:00
Lokesh Mandvekar
012e1144cf [CI:BUILD] Packit: initial enablement
This commit will run COPR builds on every PR against all active
releases of CentOS Stream and Fedora, thus allowing buildability checks before the
PR merges.

Builds are done on a custom COPR project:
`rhcontainerbot/packit-builds`.
Ref: https://copr.fedorainfracloud.org/coprs/rhcontainerbot/packit-builds/

The build targets are set in the copr itself, so we don't need to
explicitly mention them in `.packit.yaml`, making upstream configuration
a lot simpler.

The `spec.rpkg` file meant for rpm builds post-pr-merge at
`rhcontainerbot/podman-next` copr gets reused for packit builds, so the
packit jobs are independent of Fedora / CentOS dist-git.

NOTE: The Packit copr_build tasks help to check if every commit builds on
supported Fedora and CentOS Stream arches. They do not block the current
Cirrus-based workflow.

Signed-off-by: Lokesh Mandvekar <lsm5@fedoraproject.org>
2023-02-14 16:15:01 +05:30
Miloslav Trmač
3362a1998a Merge pull request #1906 from cevich/update_renovate
[CI:DOCS] Renovate: c/common rule moved to defaults
2023-02-13 16:50:47 +01:00
Chris Evich
5435c80867 Renovate: c/common rule moved to defaults
Ref: https://github.com/containers/automation/pull/128

Signed-off-by: Chris Evich <cevich@redhat.com>
2023-02-13 09:21:34 -05:00
Miloslav Trmač
95680f3c07 Merge pull request #1901 from mtrmac/c-image-eof
Update c/image after https://github.com/containers/image/pull/1816
2023-02-10 18:24:17 +01:00
Miloslav Trmač
643a2359e4 Update c/image after https://github.com/containers/image/pull/1816
... to work around some of the "unexpected EOF" failures.

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2023-02-09 20:36:27 +01:00
Miloslav Trmač
e9f30e5b65 Merge pull request #1900 from rhatdan/codespell
Run codespell on codebase
2023-02-09 20:35:52 +01:00
Daniel J Walsh
2c6e15b5ab Run codespell on codebase
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
2023-02-09 09:13:32 -05:00
Miloslav Trmač
e257db0aa9 Merge pull request #1898 from cevich/disable_dependabot
[CI:DOCS] Disable dependabot
2023-02-08 20:44:34 +01:00
Chris Evich
df708d1652 [CI:DOCS] Disable dependabot
Ref:
https://docs.github.com/en/code-security/dependabot/dependabot-security-updates/configuring-dependabot-security-updates

Disabling it via the WebUI isn't good enough; the configuration file
must also be absent.

Signed-off-by: Chris Evich <cevich@redhat.com>
2023-02-08 14:39:10 -05:00
Miloslav Trmač
0fdd10491e Merge pull request #1897 from containers/renovate/golang.org-x-term-0.x
Update module golang.org/x/term to v0.5.0
2023-02-07 23:30:02 +01:00
renovate[bot]
2acac8a6c2 Update module golang.org/x/term to v0.5.0
Signed-off-by: Renovate Bot <bot@renovateapp.com>
2023-02-07 21:58:55 +00:00
Miloslav Trmač
5d7edf1f7c Merge pull request #1896 from containers/renovate/golang.org-x-exp-digest
Update golang.org/x/exp digest to 46f607a
2023-02-07 03:34:17 +01:00
renovate[bot]
f9e2c67648 Update golang.org/x/exp digest to 46f607a
Signed-off-by: Renovate Bot <bot@renovateapp.com>
2023-02-06 22:14:39 +00:00
Daniel J Walsh
8f5d4fc0dc Merge pull request #1893 from mtrmac/golangci-lint
Fix some golangci-lint-reported nits
2023-02-03 18:39:38 -05:00
Miloslav Trmač
47c7902ea2 Remove unnecessary blank lines
golangci-lint linter: whitespace

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2023-02-03 17:49:01 +01:00
Miloslav Trmač
c1a57ca199 Pre-allocate an array
golangci-lint linter: prealloc

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2023-02-03 17:49:01 +01:00
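What the prealloc linter asks for, as a small generic illustration:

```go
package main

import "fmt"

func main() {
	src := []string{"amd64", "arm64", "ppc64le"}

	// Instead of `var dst []string` (which grows by repeated reallocation),
	// size the backing array up front when the final length is known.
	dst := make([]string, 0, len(src))
	for _, s := range src {
		dst = append(dst, "linux/"+s)
	}
	fmt.Println(dst)
}
```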
Miloslav Trmač
2a7b132790 Simplify a condition
golangci-lint linter: ifshort

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2023-02-03 17:49:01 +01:00
Miloslav Trmač
e7ab33e65f Rename a variable to avoid an underscore
golangci-lint linter: golint

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2023-02-03 17:48:59 +01:00
Miloslav Trmač
e90c381a02 Add missing comment punctuation
golangci-lint linter: godot

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2023-02-03 17:48:16 +01:00
Miloslav Trmač
70c06b4ac0 Fix, or remove, comments using lint syntax
golangci-lint linter: gocritic

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2023-02-03 17:48:16 +01:00
Miloslav Trmač
9137ac5697 Simplify an increment
golangci-lint linter: gocritic

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2023-02-03 17:48:16 +01:00
Miloslav Trmač
efc6e837c6 Reformat import statements
golangci-lint linter: gci

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2023-02-03 17:48:16 +01:00
Miloslav Trmač
a8b9e4e385 Use %w when wrapping errors
golangci-lint linter: errorlint

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2023-02-03 17:48:16 +01:00
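A generic example of why the errorlint fix matters: %w keeps the cause in the error chain so errors.Is / errors.As still work, while %v or %s would flatten it to text. The file path is made up.

```go
package main

import (
	"errors"
	"fmt"
	"io/fs"
	"os"
)

func loadPolicy(path string) error {
	if _, err := os.ReadFile(path); err != nil {
		// %w wraps; the caller can still inspect the underlying cause.
		return fmt.Errorf("loading policy %q: %w", path, err)
	}
	return nil
}

func main() {
	err := loadPolicy("/nonexistent/policy.json")
	fmt.Println(err)
	fmt.Println("not-exist?", errors.Is(err, fs.ErrNotExist)) // true
}
```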
Miloslav Trmač
99215e40e9 Remove a duplicate word
golangci-lint linter: dupword

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2023-02-03 17:48:16 +01:00
Valentin Rothberg
7afbf30963 Merge pull request #1894 from mtrmac/go1.18
Update to Go 1.18
2023-02-03 09:25:41 +01:00
Miloslav Trmač
afa031e846 Use net/netip.Addr instead of net.IP
This is not really shorter, but more a matter of principle...

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2023-02-02 22:27:35 +01:00
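A short generic illustration of the difference: netip.Addr is a small comparable value type, unlike the net.IP byte slice.

```go
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	addr, err := netip.ParseAddr("127.0.0.1")
	if err != nil {
		panic(err)
	}
	fmt.Println(addr.Is4(), addr.IsLoopback()) // true true

	// Addr is comparable, so it can be a map key or compared with ==,
	// which net.IP (a []byte) cannot do.
	seen := map[netip.Addr]bool{addr: true}
	fmt.Println(seen[netip.MustParseAddr("127.0.0.1")]) // true
}
```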
Miloslav Trmač
891ba3d4a6 s/interface{}/any/g
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2023-02-02 22:27:35 +01:00
Miloslav Trmač
f2b3a9c04b Use golang.org/x/exp
... instead of open-coding loops.

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2023-02-02 22:27:35 +01:00
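For example, golang.org/x/exp/slices (usable with Go 1.18 generics) replaces hand-written search loops; the slice contents here are made up and are not skopeo's actual call sites.

```go
package main

import (
	"fmt"

	"golang.org/x/exp/slices"
)

func main() {
	transports := []string{"docker", "oci", "dir"}

	// Previously an open-coded `for … { if t == want { … } }` loop:
	fmt.Println(slices.Contains(transports, "oci")) // true
	fmt.Println(slices.Index(transports, "dir"))    // 2
}
```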
Miloslav Trmač
f1a6d427b3 Use strings.Cut
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2023-02-02 21:43:35 +01:00
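A tiny illustration of the Go 1.18 strings.Cut pattern that replaces the usual Index/SplitN dance (the reference string is made up):

```go
package main

import (
	"fmt"
	"strings"
)

func main() {
	ref := "quay.io/skopeo/stable:latest"

	// Cut splits around the first separator and reports whether it was found.
	repo, tag, ok := strings.Cut(ref, ":")
	fmt.Println(repo, tag, ok) // quay.io/skopeo/stable latest true
}
```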
Miloslav Trmač
22955d0506 go mod tidy -go=1.18
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2023-02-02 21:43:12 +01:00
Miloslav Trmač
45dc2071ca Merge pull request #1890 from lsm5/fix-copr-el8-debuginfo
[CI:BUILD] enable debuginfo for el8 copr builds
2023-02-02 17:54:12 +01:00
Lokesh Mandvekar
007f01c656 [CI:BUILD] enable debuginfo for el8 copr builds
Follow up on #1816.

Signed-off-by: Lokesh Mandvekar <lsm5@fedoraproject.org>
2023-02-02 16:52:10 +05:30
Miloslav Trmač
2fc5f2ef60 Merge pull request #1816 from lsm5/fix-copr
[CI:BUILD] copr: fix el8 build
2023-02-01 19:08:47 +01:00
Lokesh Mandvekar
036bf59885 [CI:BUILD] copr: fix el8 build and enable debuginfo
Fedora 35 builds are disabled, so remove fedora 35
conditionals while we're at it.

Bump containers-common dependency to match with that in
podman.spec.rpkg.

TODO: fix debuginfo for rhel8

Signed-off-by: Lokesh Mandvekar <lsm5@fedoraproject.org>
2023-02-01 20:52:38 +05:30
Miloslav Trmač
ce16b5b007 Merge pull request #1888 from cevich/managed_ci_vm_images
Cirrus: Use human-readable CI VM Images
2023-02-01 14:42:01 +01:00
Chris Evich
f9406bb06b Cirrus: Use human-readable CI VM Images
Image content hasn't changed much; the biggest thing here is the
`$IMAGE_SUFFIX` value.  This new schema is also fully manageable
by Renovate, allowing a tag-push to c/automation_images to create image
update PRs in all repos automatically.

Signed-off-by: Chris Evich <cevich@redhat.com>
2023-01-31 17:03:35 -05:00
Miloslav Trmač
e7f4af0651 Merge pull request #1885 from rhatdan/main
Update module gopkg.in/yaml.v2 to v3
2023-01-31 17:29:05 +01:00
Daniel J Walsh
b41b85abc4 Update module gopkg.in/yaml.v2 to v3
Signed-off-by: Renovate Bot <bot@renovateapp.com>
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
2023-01-31 05:50:34 -06:00
Valentin Rothberg
5e0156e13b Merge pull request #1886 from mtrmac/yaml-unit-test
Add unit tests for tlsVerifyConfig's yaml.Unmarshaler
2023-01-30 08:23:43 +01:00
Miloslav Trmač
d2fbec3508 Add unit tests for tlsVerifyConfig's yaml.Unmarshaler
This is a security-critical piece of code; make sure we detect
if it breaks.

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2023-01-27 22:03:36 +01:00
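A hedged sketch of what such a test can look like with gopkg.in/yaml.v3; tlsConfig below is a made-up stand-in for skopeo's tlsVerifyConfig, not its real definition.

```go
package sync_test

import (
	"testing"

	"gopkg.in/yaml.v3"
)

// tlsConfig is a hypothetical stand-in for skopeo's tlsVerifyConfig:
// it records whether `tls-verify:` was set at all, and to what value.
type tlsConfig struct {
	set   bool
	value bool
}

func (c *tlsConfig) UnmarshalYAML(node *yaml.Node) error {
	var v bool
	if err := node.Decode(&v); err != nil {
		return err
	}
	c.set, c.value = true, v
	return nil
}

func TestTLSConfigUnmarshal(t *testing.T) {
	for _, tc := range []struct {
		input         string
		wantSet, want bool
	}{
		{"tls-verify: true", true, true},
		{"tls-verify: false", true, false},
		{"", false, false}, // absent key must stay unset
	} {
		var doc struct {
			TLSVerify tlsConfig `yaml:"tls-verify"`
		}
		if err := yaml.Unmarshal([]byte(tc.input), &doc); err != nil {
			t.Fatalf("%q: unexpected error: %v", tc.input, err)
		}
		if doc.TLSVerify.set != tc.wantSet || doc.TLSVerify.value != tc.want {
			t.Errorf("%q: got %+v", tc.input, doc.TLSVerify)
		}
	}
}
```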
Miloslav Trmač
601d255bb3 Merge pull request #1884 from TomSweeneyRedHat/dev/tsweeney/langread
[CI:DOCS] Fix up language in README
2023-01-27 17:04:11 +01:00
tomsweeneyredhat
9e24a19543 [CI:DOCS] Fix up language in README
Touch up a few leftover comments from #1871

[NO NEW TESTS NEEDED]

Signed-off-by: tomsweeneyredhat <tsweeney@redhat.com>
2023-01-26 18:11:09 -05:00
Daniel J Walsh
968670116c Merge pull request #1883 from rhatdan/VERSION
Bump to v1.11.0
2023-01-26 16:13:24 -05:00
Daniel J Walsh
cc958d3e5d Move to v1.11.1-dev
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
2023-01-26 15:34:30 -05:00
Daniel J Walsh
9d036f3053 Bump to v1.11.0
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
2023-01-26 15:34:30 -05:00
Daniel J Walsh
7b886d11bb Merge pull request #1871 from TomSweeneyRedHat/dev/tsweeney/fixlang
Touch up conscious language issues
2023-01-26 15:33:46 -05:00
Valentin Rothberg
17df36a3e6 Merge pull request #1879 from sstosh/fix-docs
[CI:DOCS] Format manual page documents
2023-01-26 08:05:00 +01:00
Toshiki Sonoda
83bcd13659 [CI:DOCS] Format manual page documents
- Add a prompt to the skopeo commands.

- Add a "console" identifier to fenced code
blocks that have a prompt, instead of "sh".

Signed-off-by: Toshiki Sonoda <sonoda.toshiki@fujitsu.com>
2023-01-25 17:10:11 +09:00
Miloslav Trmač
b3b2c73764 Merge pull request #1877 from containers/renovate/github.com-containers-common-0.x
Update module github.com/containers/common to v0.51.0
2023-01-24 17:57:41 +01:00
renovate[bot]
afbdaf8ecb Update module github.com/containers/common to v0.51.0
Signed-off-by: Renovate Bot <bot@renovateapp.com>
2023-01-24 17:39:17 +01:00
Miloslav Trmač
fe15a36ed9 Merge pull request #1876 from containers/renovate/github.com-containers-image-v5-5.x
Update module github.com/containers/image/v5 to v5.24.0
2023-01-23 22:56:19 +01:00
renovate[bot]
c91142485e Update module github.com/containers/image/v5 to v5.24.0
Signed-off-by: Renovate Bot <bot@renovateapp.com>
2023-01-23 21:30:51 +00:00
Valentin Rothberg
61c519dcf2 Merge pull request #1869 from mtrmac/generate-keys
Add (skopeo generate-sigstore-key)
2023-01-23 17:54:34 +01:00
Miloslav Trmač
0fad119375 Add (skopeo generate-sigstore-key)
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2023-01-23 17:39:09 +01:00
Miloslav Trmač
48b9d94c87 Update c/image after https://github.com/containers/image/pull/1810
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2023-01-23 17:39:09 +01:00
Daniel J Walsh
47919520f5 Merge pull request #1868 from mtrmac/developer-system-tests
Fix `make test-system` when run as an unprivileged user (containerized)
2023-01-23 11:13:50 -05:00
Valentin Rothberg
e0a5df297d Merge pull request #1864 from mtrmac/storage-big-hammer
Fix storage.conf overrides in test-system in CI, update c/storage
2023-01-23 10:06:00 +01:00
tomsweeneyredhat
80e3fd1095 Touch up conscious language issues
Touch up a few issues with language in the project to
make it more inclusive.

Signed-off-by: tomsweeneyredhat <tsweeney@redhat.com>
2023-01-21 17:13:25 -05:00
Miloslav Trmač
9f04dfdec9 Partially fix removal of temporary data in (make test-system)
Use (podman unshare) as already suggested; it is necessary for an unprivileged
user to remove the temporary c/storage state.  OTOH it doesn't work with Docker at all.

Don't use the - prefix; it only works at the _start_ of a rule, not in the middle of
a multi-line shell script.

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2023-01-20 20:07:45 +01:00
Miloslav Trmač
36c480f643 Don't affect $XDG_RUNTIME_DIR of Podman starting the registry
Otherwise $XDG_RUNTIME_DIR/netns gets created and mounted,
breaking (rm -rf).

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2023-01-20 20:06:08 +01:00
renovate[bot]
850bc49d27 Update module github.com/containers/storage to v1.45.3
Signed-off-by: Renovate Bot <bot@renovateapp.com>
2023-01-20 17:46:01 +01:00
Miloslav Trmač
a98c137243 Fix storage.conf setup in test-system
- Don't do it at all for the CI VM: We can use the
  VM's global Podman configuration, and use faster overlay
  instead of vfs, so let's do that.
- For the developer-run (make test-system):
  - Add graphroot and runroot paths to make the configuration minimally valid
  - Explicitly point CONTAINERS_STORAGE_CONF at the configuration
    to be certain it will get used.

Then drop the (podman pull ...) in runner.sh:_podman_reset that seemed to
previously work around the invalid /etc/containers/storage.conf .

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2023-01-20 17:43:21 +01:00
Miloslav Trmač
198155027d Fix (test-integration), in a container without CI
Fixes #1222 .

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2023-01-20 17:38:29 +01:00
Miloslav Trmač
641efe9930 Merge pull request #1862 from cevich/fix_image_testing
Cirrus: Fix c/image CI testing
2023-01-19 19:28:23 +01:00
Chris Evich
67a8bef6ea Cirrus: Fix c/image CI testing
The containers/image CI setup reuses the runner script from this repo to
execute the skopeo tests.  However, an env. var. is being taken out of
context in that environment, leading to failure.  Fix this by
hard-coding an image-name which will always be available in both
environments.

Signed-off-by: Chris Evich <cevich@redhat.com>
2023-01-19 12:34:03 -05:00
Daniel J Walsh
9d65de7d61 Merge pull request #1861 from containers/dependabot/go_modules/github.com/containers/ocicrypt-1.1.7
Bump github.com/containers/ocicrypt from 1.1.6 to 1.1.7
2023-01-19 08:05:35 -05:00
dependabot[bot]
63da8390f1 Bump github.com/containers/ocicrypt from 1.1.6 to 1.1.7
Bumps [github.com/containers/ocicrypt](https://github.com/containers/ocicrypt) from 1.1.6 to 1.1.7.
- [Release notes](https://github.com/containers/ocicrypt/releases)
- [Commits](https://github.com/containers/ocicrypt/compare/v1.1.6...v1.1.7)

---
updated-dependencies:
- dependency-name: github.com/containers/ocicrypt
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-01-19 09:03:20 +00:00
Miloslav Trmač
b51eb214c2 Merge pull request #1821 from cevich/F37_update
Cirrus: Update to F37 CI VM Images
2023-01-18 17:16:13 +01:00
Chris Evich
1fac61ef57 Cirrus: Add a common intra-test reset function
This is necessary, since running the skopeo tests modifies the host
environment.  This can result in some warning messages the first time
a container is started.  These messages can interfere with tests which
are sensitive to stdout/stderr.  Since many/most tests require a local
image registry, launch it with `/bin/true` after doing a system reset
to clear away any pesky warning messages.

Signed-off-by: Chris Evich <cevich@redhat.com>
2023-01-18 10:09:44 -05:00
Chris Evich
292962d34c Fix unnecessary use of podman in CI test
For whatever reasons, the podman configuration in CI results in the
inspect test throwing the following error:

```
not ok 4 inspect: image manifest list w/ diff platform
125
configuration is unset - using hardcoded default graph root
\"/var/lib/containers/storage\""
configuration is unset - using hardcoded default graph root
\"/var/lib/containers/storage\""
StoreOptions
```

Fix this by not using `podman`. It's unnecessary, since all the test
needs is the golang-flavor of the current system's architecture name.
That can easily be obtained by asking the go tool directly.

Signed-off-by: Chris Evich <cevich@redhat.com>
2023-01-18 10:09:44 -05:00
Chris Evich
e239f32ae0 Cirrus: Update to F37 CI VM Images
Signed-off-by: Chris Evich <cevich@redhat.com>
2023-01-18 10:09:44 -05:00
Chris Evich
ee8048583b Cirrus: Remove redundant package install attempt
These are already present in the VM images.  These instructions only
cause the DNF cache to be refreshed, wasting precious developer time.

Signed-off-by: Chris Evich <cevich@redhat.com>
2023-01-18 10:09:43 -05:00
Miloslav Trmač
1db6846c01 Merge pull request #1857 from containers/renovate/github.com-containers-storage-1.x
fix(deps): update module github.com/containers/storage to v1.45.1
2023-01-18 15:14:35 +01:00
renovate[bot]
0698e82b30 fix(deps): update module github.com/containers/storage to v1.45.1
Signed-off-by: Renovate Bot <bot@renovateapp.com>
2023-01-18 00:13:07 +00:00
Daniel J Walsh
8e09e641bf Merge pull request #1849 from mtrmac/sign-by-sigstore
Add support for Fulcio and Rekor, and --sign-by-sigstore=param-file
2023-01-16 04:22:41 -05:00
Miloslav Trmač
bb1ac89327 Add support for Fulcio and Rekor, and --sign-by-sigstore=param-file
(skopeo copy) and (skopeo sync) now support --sign-by-sigstore=param-file,
using the containers-sigstore-signing-params.yaml(5) file format.

That notably adds support for Fulcio and Rekor signing.

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2023-01-14 13:33:57 +01:00
Miloslav Trmač
03b5bdec24 Update c/image after https://github.com/containers/image/pull/1787
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2023-01-14 13:33:00 +01:00
Miloslav Trmač
28995cd5d4 Merge pull request #1853 from containers/renovate/github.com-containers-storage-1.x
fix(deps): update module github.com/containers/storage to v1.45.0
2023-01-13 12:57:40 +01:00
renovate[bot]
1133a2a395 fix(deps): update module github.com/containers/storage to v1.45.0
Signed-off-by: Renovate Bot <bot@renovateapp.com>
2023-01-13 01:15:34 +00:00
Miloslav Trmač
28175104d7 Merge pull request #1850 from cevich/simple_release_ci
Cirrus: Skip OSX CI on release-branches
2023-01-12 21:47:56 +01:00
Chris Evich
d0cf39d860 Cirrus: Skip OSX CI on release-branches
This task does not make sense to maintain long-term on release
branches.  Its intent is always/only to test the latest/greatest code
and environment.  After release, it's simply too difficult to maintain
functioning CI with a constantly changing (Cirrus-managed) OSX environment.
Ensure the task only runs for PRs targeted at the default branch, or if
the current branch is the default branch.

Signed-off-by: Chris Evich <cevich@redhat.com>
2023-01-12 15:27:15 -05:00
Daniel J Walsh
71fa1f441f Merge pull request #1848 from mtrmac/stdout
Correctly use the stdout parameter in some places
2023-01-11 18:04:18 -05:00
Miloslav Trmač
f17eafe85b Correctly use the stdout parameter in some places
Should not change behavior - it would matter for unit tests
which don't exist.

Also, promptForPassphrase must continue to hard-code "real" os.Stdin and os.Stdout.

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2023-01-11 22:09:35 +01:00
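The general pattern being enforced, as a generic sketch (the command body is invented): write to an injected io.Writer so unit tests can capture the output, instead of hard-coding os.Stdout.

```go
package main

import (
	"bytes"
	"fmt"
	"io"
	"os"
)

// inspectImage is a hypothetical command body: it writes to the stdout it
// is given, so a test can pass a bytes.Buffer and assert on the output.
func inspectImage(stdout io.Writer, name string) error {
	_, err := fmt.Fprintf(stdout, "Name: %s\n", name)
	return err
}

func main() {
	// Production: the real stdout.
	_ = inspectImage(os.Stdout, "quay.io/skopeo/stable")

	// Test-style usage: capture the output.
	var buf bytes.Buffer
	_ = inspectImage(&buf, "quay.io/skopeo/stable")
	fmt.Print(buf.String())
}
```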
Miloslav Trmač
4517ea0b7b Merge pull request #1839 from containers/renovate/golang.org-x-term-0.x
fix(deps): update module golang.org/x/term to v0.4.0
2023-01-04 20:03:54 +01:00
renovate[bot]
58bccf3882 fix(deps): update module golang.org/x/term to v0.4.0
Signed-off-by: Renovate Bot <bot@renovateapp.com>
2023-01-04 18:38:35 +00:00
Daniel J Walsh
e71305f7bb Merge pull request #1835 from containers/renovate/actions-stale-7.x
[skip-ci] Update actions/stale action to v7
2023-01-02 07:29:10 -05:00
renovate[bot]
f0c08985b3 [skip-ci] Update actions/stale action to v7
Signed-off-by: Renovate Bot <bot@renovateapp.com>
2023-01-02 11:27:05 +00:00
Valentin Rothberg
ae44ecd570 Merge pull request #1837 from cgwalters/blob-close
proxy: Fix leak of blobs from containers-storage
2023-01-02 09:35:32 +01:00
Colin Walters
92e3146aa0 proxy: Fix leak of blobs from containers-storage
Missing `.Close()` on the blob currently leaks a temporary
file.  Noticed this when doing repeated pulls.

Signed-off-by: Colin Walters <walters@verbum.org>
2022-12-30 11:35:10 -05:00
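A generic sketch of the fix's shape; fetchBlob stands in for a GetBlob-style call returning a stream backed by a temporary file, and is not the actual proxy code.

```go
package main

import (
	"fmt"
	"io"
	"os"
)

// fetchBlob is a hypothetical stand-in for an ImageSource GetBlob-style call.
func fetchBlob(digest string) (io.ReadCloser, error) {
	return os.Open("/dev/null")
}

func consumeBlob(digest string) error {
	rc, err := fetchBlob(digest)
	if err != nil {
		return err
	}
	// The missing piece in the bug: without Close, the temporary file
	// backing the stream is leaked on every pull.
	defer rc.Close()

	_, err = io.Copy(io.Discard, rc)
	return err
}

func main() {
	if err := consumeBlob("sha256:example"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```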
Colin Walters
f5aaabd5cc Merge pull request #1828 from cgwalters/update-http-vendoring
vendor: Bump golang.org/x/net to 4.0
2022-12-13 16:52:28 -05:00
Colin Walters
960713da32 vendor: Bump golang.org/x/net to 4.0
I originally thought I needed this to fix a build, but that
was apparently not the case.

Signed-off-by: Colin Walters <walters@verbum.org>
2022-12-13 16:36:57 -05:00
Miloslav Trmač
60ecf7a031 Merge pull request #1825 from cgwalters/auto-close-images
proxy: Ensure images are closed when proxy is shutting down
2022-12-13 21:50:22 +01:00
Colin Walters
b51f8ea200 proxy: Ensure images are closed when proxy is shutting down
This is a complementary fix for
https://github.com/coreos/rpm-ostree/issues/4213

Basically in the case of `oci-archive` we have a temporary
directory that needs cleanup.

Signed-off-by: Colin Walters <walters@verbum.org>
Co-authored-by: Miloslav Trmač <mitr@redhat.com>
Signed-off-by: Colin Walters <walters@verbum.org>
2022-12-13 15:30:55 -05:00
Valentin Rothberg
e024c43892 Merge pull request #1817 from mtrmac/copy-archive-example
Add an example for creating a docker-archive file
2022-12-07 09:19:47 +01:00
Miloslav Trmač
9c6cbc94c7 Add an example for creating a docker-archive file
... with the image correctly tagged.

I also snuck a warning against `docker-archive:` in there.

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2022-12-06 23:47:34 +01:00
Miloslav Trmač
fb4c49739f Merge pull request #1802 from RishabhSaini/issue/153
proxy: Add LayerInfoJSON API
2022-12-06 23:37:54 +01:00
RishabhSaini
3eb9d71d7f proxy: Add GetLayerInfo API
Extract the LayerInfos of the cached image,
used for exposing the diffIDs of blobs

Signed-off-by: RishabhSaini <rsaini@redhat.com>
2022-12-06 16:57:12 -05:00
Miloslav Trmač
6e6104ff8b Merge pull request #1818 from containers/renovate/golang.org-x-term-0.x
fix(deps): update module golang.org/x/term to v0.3.0
2022-12-06 21:01:49 +01:00
renovate[bot]
46d48295fb fix(deps): update module golang.org/x/term to v0.3.0
Signed-off-by: Renovate Bot <bot@renovateapp.com>
2022-12-06 19:47:04 +00:00
Miloslav Trmač
c093484820 Merge pull request #1820 from cevich/fix_job_sequence
[skip-ci] GHA/Cirrus-cron: Fix execution order
2022-12-06 19:57:04 +01:00
Chris Evich
3212bbed6f [skip-ci] GHA/Cirrus-cron: Fix execution order
Fairly universally, the last Cirrus-Cron job is set to fire off at
22:22 UTC.  However, the GHA workflow that re-runs failed jobs was
scheduled for 22:05, meaning it would never re-run the last cirrus-cron
job should it fail.

Re-arrange the execution order so as to give plenty of time between the
last cirrus-cron job starting, the auto-re-run attempt, and the final
failure-check e-mail.

Signed-off-by: Chris Evich <cevich@redhat.com>
2022-12-06 10:26:18 -05:00
Miloslav Trmač
b72a5c98a9 Merge pull request #1767 from Phuurl/main
Adds `--append-suffix` flag to sync command
2022-11-29 20:46:18 +01:00
Miloslav Trmač
f6d587d816 Merge pull request #1815 from kerel-fs/update_readme
README: Update example to show newly exposed LayerData
2022-11-28 20:43:49 +01:00
Fabian P. Schmidt
40ba7a27af Update skopeo-inspect man page example
Patch created by re-running the two example commands and manually
abbreviating long lists in the output.

Fixes #1766.

Signed-off-by: Fabian P. Schmidt <kerel@mailbox.org>
2022-11-28 18:12:35 +01:00
Fabian P. Schmidt
278be5a5d0 README: Update example to show newly exposed LayerData
Since d9dfc44, the 'skopeo inspect' command exposes LayerData,
which often contains the layer size. This is a very useful feature,
so we now mention it in the README.

Signed-off-by: Fabian P. Schmidt <kerel@mailbox.org>
2022-11-25 17:35:28 +01:00
Miloslav Trmač
dc3f2b6cec Merge pull request #1813 from ashley-cui/cirrusm1
[CI:BUILD] Cirrus: Migrate OSX task to M1
2022-11-23 18:30:29 +01:00
Ashley Cui
b5ac534960 [CI:BUILD] Cirrus: Migrate OSX task to M1
Migrate our OSX build to an M1 instance, since Cirrus is sunsetting Intel-based macOS instances.

Signed-off-by: Ashley Cui <acui@redhat.com>
2022-11-22 10:49:55 -05:00
Chris Evich
661c9698ee Merge pull request #1811 from cevich/update_gha
[skip-ci] GHA: Add cirrus-cron auto-rerun job
2022-11-21 10:24:24 -05:00
Phil Corbett
35532b2404 Adds sync with tag suffix example
Signed-off-by: Phil Corbett <phil@phicorb.me.uk>
2022-11-17 17:49:27 +00:00
Chris Evich
1af1d9c261 GHA: Add cirrus-cron auto-rerun job
Also update the cirrus-cron monitoring job to reuse the podman workflow
instead of buildah.

Signed-off-by: Chris Evich <cevich@redhat.com>
2022-11-15 14:40:38 -05:00
Phil Corbett
bdf1930221 Adds --append-suffix flag to sync
Signed-off-by: Phil Corbett <phil@phicorb.me.uk>
2022-11-13 13:57:38 +00:00
Miloslav Trmač
b665ac4c09 Merge pull request #1808 from cevich/revdep-test-proxy
Cirrus: Add reverse-deps. test to verify proxy with ostree-rs-ext
2022-11-10 16:01:24 +01:00
Valentin Rothberg
e62fcca5ed Merge pull request #1809 from containers/renovate/github.com-containers-storage-1.x
fix(deps): update module github.com/containers/storage to v1.44.0
2022-11-09 09:58:19 +01:00
renovate[bot]
563c91a2fd fix(deps): update module github.com/containers/storage to v1.44.0
Signed-off-by: Renovate Bot <bot@renovateapp.com>
2022-11-08 19:50:20 +00:00
Chris Evich
cf29c73079 Merge pull request #1805 from containers/renovate/actions-stale-6.x
[skip-ci] Update actions/stale action to v6
2022-11-08 14:48:13 -05:00
Chris Evich
e1fdb4da03 Cirrus: Add reverse-deps. test to verify proxy ext
This does reverse-dependency testing, verifying `proxy.go` using
the ostree-rs-ext Rust code's unit tests.

Based on #1781 by @cgwalters

Signed-off-by: Chris Evich <cevich@redhat.com>
2022-11-08 12:45:25 -05:00
renovate[bot]
d06bf27eb8 [skip-ci] Update actions/stale action to v6
Signed-off-by: Renovate Bot <bot@renovateapp.com>
2022-11-08 12:21:07 +00:00
Miloslav Trmač
7e6264136c Merge pull request #1804 from containers/renovate/golang.org-x-term-0.x
fix(deps): update module golang.org/x/term to v0.2.0
2022-11-08 13:20:25 +01:00
renovate[bot]
8410bfdd91 fix(deps): update module golang.org/x/term to v0.2.0
Signed-off-by: Renovate Bot <bot@renovateapp.com>
2022-11-07 22:25:08 +00:00
Daniel J Walsh
6136a2b9c3 Merge pull request #1803 from cevich/renovate_rebase
Renovate: Override global no-rebase option
2022-11-07 17:23:00 -05:00
Chris Evich
16d4a81b79 Renovate: Override global no-rebase option
The `behind-base-branch` setting means:

    Renovate will rebase whenever the branch falls 1 or more
    commit behind its base branch

Signed-off-by: Chris Evich <cevich@redhat.com>
2022-11-07 15:28:21 -05:00
Chris Evich
794d6b4650 Merge pull request #1788 from containers/renovate/actions-stale-digest
chore(deps): update actions/stale digest to 65b52af
2022-11-07 15:02:30 -05:00
renovate[bot]
2b55a7231a chore(deps): update actions/stale to v3
Signed-off-by: Renovate Bot <bot@renovateapp.com>
Signed-off-by: Chris Evich <cevich@redhat.com>
2022-11-07 14:56:58 -05:00
Miloslav Trmač
62e698b567 Merge pull request #1800 from containers/renovate/github.com-spf13-cobra-1.x
fix(deps): update module github.com/spf13/cobra to v1.6.1
2022-11-01 01:31:43 +01:00
renovate[bot]
f968b2a890 fix(deps): update module github.com/spf13/cobra to v1.6.1
Signed-off-by: Renovate Bot <bot@renovateapp.com>
2022-10-31 03:11:30 +00:00
Valentin Rothberg
2739a29aea Merge pull request #1797 from mtrmac/warnings
Close a HTTP response body
2022-10-27 13:18:09 +02:00
Miloslav Trmač
fe5c4091ee Close a HTTP response body
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2022-10-27 08:00:52 +02:00
Daniel J Walsh
5a8d72635c Merge pull request #1791 from containers/renovate/golang.org-x-term-0.x
fix(deps): update module golang.org/x/term to v0.1.0
2022-10-24 06:56:37 -04:00
Daniel J Walsh
88f6ff09f9 Merge pull request #1789 from containers/renovate/github.com-stretchr-testify-1.x
fix(deps): update module github.com/stretchr/testify to v1.8.1
2022-10-24 06:55:28 -04:00
renovate[bot]
d5327bced1 fix(deps): update module golang.org/x/term to v0.1.0
Signed-off-by: Renovate Bot <bot@renovateapp.com>
2022-10-24 08:51:19 +00:00
renovate[bot]
6d3d9a3bb2 fix(deps): update module github.com/stretchr/testify to v1.8.1
Signed-off-by: Renovate Bot <bot@renovateapp.com>
2022-10-24 05:27:27 +00:00
Valentin Rothberg
723351cec1 Merge pull request #1786 from mtrmac/update-image
Update to c/image main branch
2022-10-21 08:04:34 +02:00
Miloslav Trmač
5c69302d75 Update to c/image main branch
> go get github.com/containers/image/v5@main
> make vendor

... to make sure that we don't regress against Skopeo 1.9.3.

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2022-10-20 20:09:25 +02:00
Daniel J Walsh
bdbb46be5a Merge pull request #1783 from vrothberg/bump
bump to v1.11.0-dev
2022-10-19 10:27:21 -04:00
Valentin Rothberg
6d564d4de8 bump to v1.11.0-dev
Given there is a release-1.10 branch, we should bump main to the next
minor version.

Signed-off-by: Valentin Rothberg <vrothberg@redhat.com>
2022-10-19 14:45:29 +02:00
Miloslav Trmač
01201df865 Merge pull request #1780 from containers/renovate/configure
[CI:DOCS] Configure Renovate
2022-10-18 23:09:36 +02:00
renovate[bot]
4c0e565038 chore(deps): add renovate.json
Signed-off-by: Chris Evich <cevich@redhat.com>
2022-10-18 15:02:34 -04:00
Miloslav Trmač
03da797e42 Merge pull request #1776 from cgwalters/more-proxy-bits
proxy: Bump semver for OpenImageOptional
2022-10-17 16:24:12 +02:00
Colin Walters
757ec5dbf6 proxy: Bump semver for OpenImageOptional
I should have done this in the previous commit; it's how
clients can discover that we have the API.

Signed-off-by: Colin Walters <walters@verbum.org>
2022-10-13 16:09:19 -04:00
Miloslav Trmač
08c290170d Merge pull request #1757 from cgwalters/get-manifest-optional
proxy: Add `OpenImageOptional`
2022-10-13 03:28:06 +02:00
Colin Walters
08b27fc50e proxy: Add OpenImageOptional
In some code I'm writing I want to be able to cleanly test if an
image exists, as distinguished from other errors like authentication
problems, network flakes etc.

As best I can tell, the containers/image abstraction doesn't
offer a clean way to do this.

For now, I chose the route of adding the ugly string error matching
here for the two cases I care about (docker v2s2 registry and oci
directories), so my Rust code can operate in terms of clean
`Option<Image>`.

Signed-off-by: Colin Walters <walters@verbum.org>
2022-10-12 21:11:55 -04:00
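A heavily hedged sketch of the string-matching approach described above; the substrings and helper name are illustrative assumptions, not the actual list skopeo matches on.

```go
package main

import (
	"errors"
	"fmt"
	"strings"
)

// isImageNotFound illustrates classifying "image absent" separately from
// auth problems or network flakes by matching error text (assumed substrings).
func isImageNotFound(err error) bool {
	if err == nil {
		return false
	}
	msg := err.Error()
	return strings.Contains(msg, "manifest unknown") || // registry-style error
		strings.Contains(msg, "no such file or directory") // oci: layout path
}

func main() {
	err := errors.New("reading manifest latest in example.com/app: manifest unknown")
	if isImageNotFound(err) {
		fmt.Println("image absent: report an empty/optional result instead of failing")
	}
}
```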
Miloslav Trmač
7738dbb335 Merge pull request #1522 from mtrmac/collapse-errors
Update for https://github.com/containers/image/pull/1299
2022-10-12 23:27:27 +02:00
Miloslav Trmač
9b6f5b6e75 Add a workaround for public.ecr.aws not implementing tag list at all
Per https://github.com/containers/skopeo/issues/1230 , and
155d0665e8/docker/errors_test.go (L88) .

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2022-10-12 22:57:29 +02:00
Miloslav Trmač
632cebd74e Update AWS workaround to use Golang types
FIXME: This is not actually tested against a representative
error; we basically assume generic "scope is not sufficient" handling.

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2022-10-12 22:57:22 +02:00
Miloslav Trmač
ea9aa68b0f Reorganize the "list tags failed" logic in inspect.go a bit
... to allow adding more cases.

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2022-10-12 22:57:14 +02:00
Miloslav Trmač
c476d62671 Remove a (skopeo inspect) workaround for IBM Bluemix
AFAICT, “IBM Bluemix” has become “IBM Cloud”, and the “Bluemix” registry
is now (somehow related to?) icr.io; e.g.
https://cloud.ibm.com/docs/Registry?topic=Registry-registry_overview
lists bluemix.net and icr.io host names.

Randomly looking for a public image hosted on that registry, at least
> skopeo list-tags docker://icr.io/codeengine/firstjob
now succeeds.

So I’m assuming that at least the current cloud deployment now allows
listing tags, and does not need special handling. (It's unclear if
that is true for all existing deployments.)

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2022-10-12 22:57:07 +02:00
Miloslav Trmač
fce2cf9c72 Fix an error message to refer to repo, not a single image
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2022-10-12 22:56:59 +02:00
Miloslav Trmač
9724da1ff2 Remove a special case for failing to list tags in (skopeo sync)
- It's unclear why it exists in the first place
- Looking at callers of imagesToCopyFromRepo, the only caller of this:
  either the input is a single repo, in which case the failure to
  list tags clearly results in a no-op and a "No images to sync" fatal
  failure ...
- ... or the input is YAML, and in that case the caller is already
  skipping the repo on a failure.

Either way, it's unclear why we would have a special "Registry disallows
tag retrieval" error case instead of the generic text.

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2022-10-12 22:56:52 +02:00
Miloslav Trmač
955a59c864 Update tests for changed error texts
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2022-10-12 22:56:46 +02:00
Miloslav Trmač
ae50898b8a Include c/image after https://github.com/containers/image/pull/1299
> go get github.com/containers/image/v5@main
> make vendor

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2022-10-12 22:56:18 +02:00
Miloslav Trmač
f3aee25c7c Fold a long line.
Should not change (test) behavior.

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2022-10-12 22:10:01 +02:00
Miloslav Trmač
1983173b60 Remove single-use "wanted" variables
They were useful before assertSkopeoSucceeds/assertSkopeoFails,
when they were used multiple times. Now, they don't
make the code any shorter.

Should not change (test) behavior.

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2022-10-12 22:09:48 +02:00
Miloslav Trmač
3411ebd462 Merge pull request #1775 from containers/dependabot/go_modules/github.com/spf13/cobra-1.6.0
Bump github.com/spf13/cobra from 1.5.0 to 1.6.0
2022-10-12 22:05:33 +02:00
dependabot[bot]
4ccfb033fb Bump github.com/spf13/cobra from 1.5.0 to 1.6.0
Bumps [github.com/spf13/cobra](https://github.com/spf13/cobra) from 1.5.0 to 1.6.0.
- [Release notes](https://github.com/spf13/cobra/releases)
- [Commits](https://github.com/spf13/cobra/compare/v1.5.0...v1.6.0)

---
updated-dependencies:
- dependency-name: github.com/spf13/cobra
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2022-10-12 08:11:32 +00:00
Daniel J Walsh
2133fa36da Merge pull request #1772 from containers/dependabot/go_modules/github.com/containers/ocicrypt-1.1.6
Bump github.com/containers/ocicrypt from 1.1.5 to 1.1.6
2022-10-10 09:40:05 -04:00
dependabot[bot]
a495155030 Bump github.com/containers/ocicrypt from 1.1.5 to 1.1.6
Bumps [github.com/containers/ocicrypt](https://github.com/containers/ocicrypt) from 1.1.5 to 1.1.6.
- [Release notes](https://github.com/containers/ocicrypt/releases)
- [Commits](https://github.com/containers/ocicrypt/compare/v1.1.5...v1.1.6)

---
updated-dependencies:
- dependency-name: github.com/containers/ocicrypt
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2022-10-10 08:10:52 +00:00
Miloslav Trmač
032fd15c10 Merge pull request #1770 from containers/dependabot/go_modules/github.com/opencontainers/image-spec-1.1.0-rc2
Bump github.com/opencontainers/image-spec from 1.1.0-rc1 to 1.1.0-rc2
2022-10-07 19:20:18 +02:00
dependabot[bot]
e021b675e2 Bump github.com/opencontainers/image-spec from 1.1.0-rc1 to 1.1.0-rc2
Bumps [github.com/opencontainers/image-spec](https://github.com/opencontainers/image-spec) from 1.1.0-rc1 to 1.1.0-rc2.
- [Release notes](https://github.com/opencontainers/image-spec/releases)
- [Changelog](https://github.com/opencontainers/image-spec/blob/main/RELEASES.md)
- [Commits](https://github.com/opencontainers/image-spec/compare/v1.1.0-rc1...v1.1.0-rc2)

---
updated-dependencies:
- dependency-name: github.com/opencontainers/image-spec
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2022-10-04 08:26:31 +00:00
Daniel J Walsh
7ee3396575 Merge pull request #1765 from mtrmac/v1.10.0
Release v1.10.0
2022-09-30 21:44:33 -04:00
Miloslav Trmač
5eace4078f Bump to v1.10.1-dev
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2022-09-30 20:43:02 +02:00
2117 changed files with 248760 additions and 35907 deletions

.cirrus.yml

@@ -23,10 +23,10 @@ env:
 ####
 #### Cache-image names to test with (double-quotes around names are critical)
 ####
-FEDORA_NAME: "fedora-36"
+FEDORA_NAME: "fedora-37"
 # Google-cloud VM Images
-IMAGE_SUFFIX: "c5495735033528320"
+IMAGE_SUFFIX: "c20230405t152256z-f37f36d12"
 FEDORA_CACHE_IMAGE_NAME: "fedora-${IMAGE_SUFFIX}"
 # Container FQIN's
@@ -52,7 +52,9 @@ validate_task:
 image: '${SKOPEO_CIDEV_CONTAINER_FQIN}'
 cpu: 4
 memory: 8
-script: |
+setup_script: |
+make tools
+test_script: |
 make validate-local
 make vendor && hack/tree_status.sh
@@ -75,19 +77,23 @@ doccheck_task:
 "${SKOPEO_PATH}/${SCRIPT_BASE}/runner.sh" doccheck
 osx_task:
-# Run for regular PRs and those with [CI:BUILD] but not [CI:DOCS]
-only_if: &not_docs_multiarch >-
+# Don't run for docs-only or multi-arch image builds.
+# Also don't run on release-branches or their PRs,
+# since base container-image is not version-constrained.
+only_if: &not_docs_or_release_branch >-
+($CIRRUS_BASE_BRANCH == $CIRRUS_DEFAULT_BRANCH ||
+$CIRRUS_BRANCH == $CIRRUS_DEFAULT_BRANCH ) &&
 $CIRRUS_CHANGE_TITLE !=~ '.*CI:DOCS.*' &&
 $CIRRUS_CRON != 'multiarch'
 depends_on:
 - validate
 macos_instance:
-image: catalina-xcode
+image: ghcr.io/cirruslabs/macos-ventura-base:latest
 setup_script: |
 export PATH=$GOPATH/bin:$PATH
 brew update
 brew install gpgme go go-md2man
-go install golang.org/x/lint/golint@latest
+make tools
 test_script: |
 export PATH=$GOPATH/bin:$PATH
 go version
@@ -99,7 +105,9 @@ osx_task:
cross_task:
alias: cross
only_if: *not_docs_multiarch
only_if: >-
$CIRRUS_CHANGE_TITLE !=~ '.*CI:DOCS.*' &&
$CIRRUS_CRON != 'multiarch'
depends_on:
- validate
gce_instance: &standardvm
@@ -119,6 +127,42 @@ cross_task:
"${GOSRC}/${SCRIPT_BASE}/runner.sh" cross
ostree-rs-ext_task:
alias: proxy_ostree_ext
only_if: *not_docs_or_release_branch
# WARNING: This task potentially performs a container image
# build (on change) with runtime package installs. Therefore,
# its behavior can be unpredictable and potentially flake-prone.
# In case of emergency, uncomment the next statement to bypass.
#
# skip: $CI == "true"
#
depends_on:
- validate
# Ref: https://cirrus-ci.org/guide/docker-builder-vm/#dockerfile-as-a-ci-environment
container:
# The runtime image will be rebuilt on change
dockerfile: contrib/cirrus/ostree_ext.dockerfile
docker_arguments: # required build-args
BASE_FQIN: quay.io/coreos-assembler/fcos-buildroot:testing-devel
CIRRUS_IMAGE_VERSION: 1
env:
EXT_REPO_NAME: ostree-rs-ext
EXT_REPO_HOME: $CIRRUS_WORKING_DIR/../$EXT_REPO_NAME
EXT_REPO: https://github.com/ostreedev/${EXT_REPO_NAME}.git
skopeo_build_script:
- dnf builddep -y skopeo
- make
- make install
proxy_ostree_ext_build_script:
- git clone --depth 1 $EXT_REPO $EXT_REPO_HOME
- cd $EXT_REPO_HOME
- cargo test --no-run
proxy_ostree_ext_test_script:
- cd $EXT_REPO_HOME
- cargo test -- --nocapture --quiet
#####
##### NOTE: This task is substantially duplicated in the containers/image
##### repository's `.cirrus.yml`. Changes made here should be fully merged
@@ -242,6 +286,7 @@ success_task:
- doccheck
- osx
- cross
- proxy_ostree_ext
- test_skopeo
- image_build
- meta

View File

@@ -1,10 +0,0 @@
version: 2
updates:
- package-ecosystem: gomod
directory: "/"
schedule:
interval: daily
time: "10:00"
timezone: Europe/Berlin
open-pull-requests-limit: 10

.github/renovate.json5 (new file)

@@ -0,0 +1,59 @@
/*
Renovate is a service similar to GitHub Dependabot, but with
(fantastically) more configuration options. There are so many options,
in fact, that if you're new I recommend glancing over this cheat-sheet
before reading the official documentation:
https://www.augmentedmind.de/2021/07/25/renovate-bot-cheat-sheet
Configuration Update/Change Procedure:
1. Make changes
2. Manually validate changes (from repo-root):
podman run -it \
-v ./.github/renovate.json5:/usr/src/app/renovate.json5:z \
docker.io/renovate/renovate:latest \
renovate-config-validator
3. Commit.
Configuration Reference:
https://docs.renovatebot.com/configuration-options/
Monitoring Dashboard:
https://app.renovatebot.com/dashboard#github/containers
Note: The Renovate bot will create/manage its business on
branches named 'renovate/*'.  Otherwise, and by
default, the only copy of this file that matters
is the one on the `main` branch. No other branches
will be monitored or touched in any way.
*/
{
"$schema": "https://docs.renovatebot.com/renovate-schema.json",
/*************************************************
****** Global/general configuration options *****
*************************************************/
// Re-use predefined sets of configuration options to DRY
"extends": [
// https://github.com/containers/automation/blob/main/renovate/defaults.json5
"github>containers/automation//renovate/defaults.json5"
],
// Permit automatic rebasing when base-branch changes by more than
// one commit.
"rebaseWhen": "behind-base-branch",
/*************************************************
*** Repository-specific configuration options ***
*************************************************/
// Don't leave dependency-update PRs "hanging"; assign them to people.
"assignees": ["containers/image-maintainers"], // same for skopeo
/*************************************************
***** Golang-specific configuration options *****
*************************************************/
}

View File

@@ -1,17 +1,20 @@
---
# See also:
# https://github.com/containers/podman/blob/main/.github/workflows/check_cirrus_cron.yml
on:
# Note: This only applies to the default branch.
schedule:
# N/B: This should correspond to a period slightly after
# the last job finishes running. See job defs. at:
# https://cirrus-ci.com/settings/repository/6706677464432640
- cron: '59 23 * * 1-5'
- cron: '03 03 * * 1-5'
# Debug: Allow triggering job manually in github-actions WebUI
workflow_dispatch: {}
jobs:
# Ref: https://docs.github.com/en/actions/using-workflows/reusing-workflows
call_cron_failures:
uses: containers/buildah/.github/workflows/check_cirrus_cron.yml@main
uses: containers/podman/.github/workflows/check_cirrus_cron.yml@main
secrets: inherit

.github/workflows/rerun_cirrus_cron.yml (new file)

@@ -0,0 +1,19 @@
---
# See also: https://github.com/containers/podman/blob/main/.github/workflows/rerun_cirrus_cron.yml
on:
# Note: This only applies to the default branch.
schedule:
# N/B: This should correspond to a period slightly after
# the last job finishes running. See job defs. at:
# https://cirrus-ci.com/settings/repository/6706677464432640
- cron: '01 01 * * 1-5'
# Debug: Allow triggering job manually in github-actions WebUI
workflow_dispatch: {}
jobs:
# Ref: https://docs.github.com/en/actions/using-workflows/reusing-workflows
call_cron_rerun:
uses: containers/podman/.github/workflows/rerun_cirrus_cron.yml@main
secrets: inherit

View File

@@ -17,7 +17,7 @@ jobs:
pull-requests: write # for actions/stale to close stale PRs
runs-on: ubuntu-latest
steps:
- uses: actions/stale@98ed4cb500039dbcccf4bd9bedada4d0187f2757 # v3
- uses: actions/stale@v8
with:
repo-token: ${{ secrets.GITHUB_TOKEN }}
stale-issue-message: 'A friendly reminder that this issue had no activity for 30 days.'

.packit.sh (new file)

@@ -0,0 +1,26 @@
#!/usr/bin/env bash
# This script handles any custom processing of the spec file generated using the `post-upstream-clone`
# action and gets used by the fix-spec-file action in .packit.yaml.
set -eo pipefail
# Get Version from HEAD
VERSION=$(grep '^const Version' version/version.go | cut -d\" -f2 | sed -e 's/-/~/')
# Generate source tarball
git archive --prefix=skopeo-$VERSION/ -o skopeo-$VERSION.tar.gz HEAD
# RPM Spec modifications
# Update Version in spec with Version from version/version.go
sed -i "s/^Version:.*/Version: $VERSION/" skopeo.spec
# Update Release in spec with Packit's release envvar
sed -i "s/^Release:.*/Release: $PACKIT_RPMSPEC_RELEASE%{?dist}/" skopeo.spec
# Update Source tarball name in spec
sed -i "s/^Source:.*.tar.gz/Source: skopeo-$VERSION.tar.gz/" skopeo.spec
# Update setup macro to use the correct build dir
sed -i "s/^%setup.*/%autosetup -Sgit -n %{name}-$VERSION/" skopeo.spec

.packit.yaml (new file)

@@ -0,0 +1,34 @@
---
# See the documentation for more information:
# https://packit.dev/docs/configuration/
# NOTE: The Packit copr_build tasks help to check if every commit builds on
# supported Fedora and CentOS Stream arches.
# They do not block the current Cirrus-based workflow.
# Build targets can be found at:
# https://copr.fedorainfracloud.org/coprs/rhcontainerbot/packit-builds/
specfile_path: skopeo.spec
jobs:
- &copr
job: copr_build
trigger: pull_request
owner: rhcontainerbot
project: packit-builds
enable_net: true
srpm_build_deps:
- make
- rpkg
actions:
post-upstream-clone:
- "rpkg spec --outdir ./"
fix-spec-file:
- "bash .packit.sh"
- <<: *copr
# Run on commit to main branch
trigger: commit
branch: main
project: podman-next

View File

@@ -38,13 +38,6 @@ endif
export CONTAINER_RUNTIME ?= $(if $(shell command -v podman ;),podman,docker)
GOMD2MAN ?= $(if $(shell command -v go-md2man ;),go-md2man,$(GOBIN)/go-md2man)
# Go module support: set `-mod=vendor` to use the vendored sources.
# See also hack/make.sh.
ifeq ($(shell go help mod >/dev/null 2>&1 && echo true), true)
GO:=GO111MODULE=on $(GO)
MOD_VENDOR=-mod=vendor
endif
ifeq ($(DEBUG), 1)
override GOGCFLAGS += -N -l
endif
@@ -56,17 +49,11 @@ ifeq ($(GOOS), linux)
endif
# If $TESTFLAGS is set, it is passed as extra arguments to 'go test'.
# You can increase test output verbosity with the option '-test.v'.
# You can select certain tests to run, with `-test.run <regex>` for example:
# You can select certain tests to run, with `-run <regex>` for example:
#
# make test-unit TESTFLAGS='-test.run ^TestManifestDigest$'
#
# For integration test, we use [gocheck](https://labix.org/gocheck).
# You can increase test output verbosity with the option '-check.vv'.
# You can limit test selection with `-check.f <regex>`, for example:
#
# make test-integration TESTFLAGS='-check.f CopySuite.TestCopy.*'
export TESTFLAGS ?= -v -check.v -test.timeout=15m
# make test-unit TESTFLAGS='-run ^TestManifestDigest$'
# make test-integration TESTFLAGS='-run copySuite.TestCopy.*'
export TESTFLAGS ?= -timeout=15m
# This is assumed to be set non-empty when operating inside a CI/automation environment
CI ?=
@@ -139,9 +126,9 @@ binary: cmd/skopeo
# Build w/o using containers
.PHONY: bin/skopeo
bin/skopeo:
$(GO) build $(MOD_VENDOR) ${GO_DYN_FLAGS} ${SKOPEO_LDFLAGS} -gcflags "$(GOGCFLAGS)" -tags "$(BUILDTAGS)" -o $@ ./cmd/skopeo
$(GO) build ${GO_DYN_FLAGS} ${SKOPEO_LDFLAGS} -gcflags "$(GOGCFLAGS)" -tags "$(BUILDTAGS)" -o $@ ./cmd/skopeo
bin/skopeo.%:
GOOS=$(word 2,$(subst ., ,$@)) GOARCH=$(word 3,$(subst ., ,$@)) $(GO) build $(MOD_VENDOR) ${SKOPEO_LDFLAGS} -tags "containers_image_openpgp $(BUILDTAGS)" -o $@ ./cmd/skopeo
GOOS=$(word 2,$(subst ., ,$@)) GOARCH=$(word 3,$(subst ., ,$@)) $(GO) build ${SKOPEO_LDFLAGS} -tags "containers_image_openpgp $(BUILDTAGS)" -o $@ ./cmd/skopeo
local-cross: bin/skopeo.darwin.amd64 bin/skopeo.linux.arm bin/skopeo.linux.arm64 bin/skopeo.windows.386.exe bin/skopeo.windows.amd64.exe
$(MANPAGES): %: %.md
@@ -194,18 +181,27 @@ install-completions: completions
shell:
$(CONTAINER_RUN) bash
tools:
if [ ! -x "$(GOBIN)/golangci-lint" ]; then \
curl -sSfL https://raw.githubusercontent.com/golangci/golangci-lint/master/install.sh | sh -s -- -b $(GOBIN) v1.52.2 ; \
fi
check: validate test-unit test-integration test-system
test-integration:
$(CONTAINER_RUN) $(MAKE) test-integration-local
# This is intended to be equal to $(CONTAINER_RUN), but with --cap-add=cap_mknod.
# --cap-add=cap_mknod is important to allow skopeo to use containers-storage: directly as it exists in the caller's environment, without
# creating a nested user namespace (which requires /etc/subuid and /etc/subgid to be set up)
$(CONTAINER_CMD) --security-opt label=disable --cap-add=cap_mknod -v $(CURDIR):$(CONTAINER_GOSRC) -w $(CONTAINER_GOSRC) $(SKOPEO_CIDEV_CONTAINER_FQIN) \
$(MAKE) test-integration-local
# Intended for CI, assumed to be running in quay.io/libpod/skopeo_cidev container.
test-integration-local: bin/skopeo
hack/make.sh test-integration
hack/warn-destructive-tests.sh
hack/test-integration.sh
# complicated set of options needed to run podman-in-podman
# TODO: The $(RM) command will likely fail w/o `podman unshare`
test-system:
DTEMP=$(shell mktemp -d --tmpdir=/var/tmp podman-tmp.XXXXXX); \
$(CONTAINER_CMD) --privileged \
@@ -214,12 +210,13 @@ test-system:
"$(SKOPEO_CIDEV_CONTAINER_FQIN)" \
$(MAKE) test-system-local; \
rc=$$?; \
-$(RM) -rf $$DTEMP; \
$(CONTAINER_RUNTIME) unshare rm -rf $$DTEMP; # This probably doesn't work with Docker, oh well, better than nothing... \
exit $$rc
# Intended for CI, assumed to already be running in quay.io/libpod/skopeo_cidev container.
test-system-local: bin/skopeo
hack/make.sh test-system
hack/warn-destructive-tests.sh
hack/test-system.sh
test-unit:
# Just call (make test unit-local) here instead of worrying about environment differences
@@ -233,16 +230,19 @@ test-all-local: validate-local validate-docs test-unit-local
.PHONY: validate-local
validate-local:
BUILDTAGS="${BUILDTAGS}" hack/make.sh validate-git-marks validate-gofmt validate-lint validate-vet
hack/validate-git-marks.sh
hack/validate-gofmt.sh
GOBIN=$(GOBIN) hack/validate-lint.sh
BUILDTAGS="${BUILDTAGS}" hack/validate-vet.sh
# This invokes bin/skopeo, hence cannot be run as part of validate-local
.PHONY: validate-docs
validate-docs:
validate-docs: bin/skopeo
hack/man-page-checker
hack/xref-helpmsgs-manpages
test-unit-local: bin/skopeo
$(GO) test $(MOD_VENDOR) -tags "$(BUILDTAGS)" $$($(GO) list $(MOD_VENDOR) -tags "$(BUILDTAGS)" -e ./... | grep -v '^github\.com/containers/skopeo/\(integration\|vendor/.*\)$$')
test-unit-local:
$(GO) test -tags "$(BUILDTAGS)" $$($(GO) list -tags "$(BUILDTAGS)" -e ./... | grep -v '^github\.com/containers/skopeo/\(integration\|vendor/.*\)$$')
vendor:
$(GO) mod tidy
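
To illustrate the `-run <regex>` test selection described in the Makefile comments above, here is a minimal, hypothetical Go unit test (not part of this diff) whose name matches the `^TestManifestDigest$` example:

package manifest_test // hypothetical package, used only to illustrate test selection

import (
    "crypto/sha256"
    "encoding/hex"
    "testing"
)

// `make test-unit TESTFLAGS='-run ^TestManifestDigest$'` would run exactly this test,
// because `go test -run` matches the regular expression against test function names.
func TestManifestDigest(t *testing.T) {
    sum := sha256.Sum256([]byte("")) // digest of an empty manifest body
    const want = "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"
    if got := hex.EncodeToString(sum[:]); got != want {
        t.Fatalf("unexpected digest: got %s, want %s", got, want)
    }
}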

View File

@@ -1,7 +1,4 @@
skopeo [![Build Status](https://travis-ci.org/containers/skopeo.svg?branch=master)](https://travis-ci.org/containers/skopeo)
=
<img src="https://cdn.rawgit.com/containers/skopeo/master/docs/skopeo.svg" width="250">
<img src="https://cdn.rawgit.com/containers/skopeo/main/docs/skopeo.svg" width="250" alt="Skopeo">
----
@@ -42,6 +39,12 @@ Skopeo works with API V2 container image registries such as [docker.io](https://
* oci:path:tag
An image tag in a directory compliant with "Open Container Image Layout Specification" at path.
[Obtaining skopeo](./install.md)
-
For a detailed description how to install or build skopeo, see
[install.md](./install.md).
## Inspecting a repository
`skopeo` is able to _inspect_ a repository on a container registry and fetch image layers.
The _inspect_ command fetches the repository's manifest and it is able to show you a `docker inspect`-like
@@ -56,29 +59,37 @@ Examples:
$ skopeo inspect docker://registry.fedoraproject.org/fedora:latest
{
"Name": "registry.fedoraproject.org/fedora",
"Digest": "sha256:655721ff613ee766a4126cb5e0d5ae81598e1b0c3bcf7017c36c4d72cb092fe9",
"Digest": "sha256:0f65bee641e821f8118acafb44c2f8fe30c2fc6b9a2b3729c0660376391aa117",
"RepoTags": [
"24",
"25",
"26-modular",
...
"34-aarch64",
"34",
"latest",
...
],
"Created": "2020-04-29T06:48:16Z",
"Created": "2022-11-24T13:54:18Z",
"DockerVersion": "1.10.1",
"Labels": {
"license": "MIT",
"name": "fedora",
"vendor": "Fedora Project",
"version": "32"
"version": "37"
},
"Architecture": "amd64",
"Os": "linux",
"Layers": [
"sha256:3088721d7dbf674fc0be64cd3cf00c25aab921cacf35fa0e7b1578500a3e1653"
"sha256:2a0fc6bf62e155737f0ace6142ee686f3c471c1aab4241dc3128904db46288f0"
],
"LayersData": [
{
"MIMEType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
"Digest": "sha256:2a0fc6bf62e155737f0ace6142ee686f3c471c1aab4241dc3128904db46288f0",
"Size": 71355009,
"Annotations": null
}
],
"Env": [
"DISTTAG=f32container",
"FGC=f32",
"DISTTAG=f37container",
"FGC=f37",
"container=oci"
]
}
@@ -184,12 +195,6 @@ $ skopeo inspect --creds=testuser:testpassword docker://myregistrydomain.com:500
$ skopeo copy --src-creds=testuser:testpassword docker://myregistrydomain.com:5000/private oci:local_oci_image
```
[Obtaining skopeo](./install.md)
-
For a detailed description how to install or build skopeo, see
[install.md](./install.md).
Contributing
-
@@ -200,6 +205,7 @@ Please read the [contribution guide](CONTRIBUTING.md) if you want to collaborate
| -------------------------------------------------- | ---------------------------------------------------------------------------------------------|
| [skopeo-copy(1)](/docs/skopeo-copy.1.md) | Copy an image (manifest, filesystem layers, signatures) from one location to another. |
| [skopeo-delete(1)](/docs/skopeo-delete.1.md) | Mark the image-name for later deletion by the registry's garbage collector. |
| [skopeo-generate-sigstore-key(1)](/docs/skopeo-generate-sigstore-key.1.md) | Generate a sigstore public/private key pair. |
| [skopeo-inspect(1)](/docs/skopeo-inspect.1.md) | Return low-level information about image-name in a registry. |
| [skopeo-list-tags(1)](/docs/skopeo-list-tags.1.md) | Return a list of tags for the transport-specific image repository. |
| [skopeo-login(1)](/docs/skopeo-login.1.md) | Login to a container registry. |

View File

@@ -13,6 +13,8 @@ import (
"github.com/containers/image/v5/docker/reference"
"github.com/containers/image/v5/manifest"
"github.com/containers/image/v5/pkg/cli"
"github.com/containers/image/v5/pkg/cli/sigstore"
"github.com/containers/image/v5/signature/signer"
"github.com/containers/image/v5/transports"
"github.com/containers/image/v5/transports/alltransports"
encconfig "github.com/containers/ocicrypt/config"
@@ -29,6 +31,7 @@ type copyOptions struct {
additionalTags []string // For docker-archive: destinations, in addition to the name:tag specified as destination, also add these
removeSignatures bool // Do not copy signatures from the source image
signByFingerprint string // Sign the image using a GPG key with the specified fingerprint
signBySigstoreParamFile string // Sign the image using a sigstore signature per configuration in a param file
signBySigstorePrivateKey string // Sign the image using a sigstore private key
signPassphraseFile string // Path pointing to a passphrase file when signing (for either signature format, but only one of them)
signIdentity string // Identity of the signed image, must be a fully specified docker reference
@@ -83,6 +86,7 @@ See skopeo(1) section "IMAGE NAMES" for the expected format
flags.BoolVar(&opts.preserveDigests, "preserve-digests", false, "Preserve digests of images and lists")
flags.BoolVar(&opts.removeSignatures, "remove-signatures", false, "Do not copy signatures from SOURCE-IMAGE")
flags.StringVar(&opts.signByFingerprint, "sign-by", "", "Sign the image using a GPG key with the specified `FINGERPRINT`")
flags.StringVar(&opts.signBySigstoreParamFile, "sign-by-sigstore", "", "Sign the image using a sigstore parameter file at `PATH`")
flags.StringVar(&opts.signBySigstorePrivateKey, "sign-by-sigstore-private-key", "", "Sign the image using a sigstore private key at `PATH`")
flags.StringVar(&opts.signPassphraseFile, "sign-passphrase-file", "", "Read a passphrase for signing an image from `PATH`")
flags.StringVar(&opts.signIdentity, "sign-identity", "", "Identity of signed image, must be a fully specified docker reference. Defaults to the target docker reference.")
@@ -252,6 +256,22 @@ func (opts *copyOptions) run(args []string, stdout io.Writer) (retErr error) {
passphrase = p
} // opts.signByFingerprint triggers a GPG-agent passphrase prompt, possibly using a more secure channel, so we usually shouldn't prompt ourselves if no passphrase was explicitly provided.
var signers []*signer.Signer
if opts.signBySigstoreParamFile != "" {
signer, err := sigstore.NewSignerFromParameterFile(opts.signBySigstoreParamFile, &sigstore.Options{
PrivateKeyPassphrasePrompt: func(keyFile string) (string, error) {
return promptForPassphrase(keyFile, os.Stdin, os.Stdout)
},
Stdin: os.Stdin,
Stdout: stdout,
})
if err != nil {
return fmt.Errorf("Error using --sign-by-sigstore: %w", err)
}
defer signer.Close()
signers = append(signers, signer)
}
var signIdentity reference.Named = nil
if opts.signIdentity != "" {
signIdentity, err = reference.ParseNamed(opts.signIdentity)
@@ -265,6 +285,7 @@ func (opts *copyOptions) run(args []string, stdout io.Writer) (retErr error) {
return retry.IfNecessary(ctx, func() error {
manifestBytes, err := copy.Image(ctx, policyContext, destRef, srcRef, &copy.Options{
RemoveSignatures: opts.removeSignatures,
Signers: signers,
SignBy: opts.signByFingerprint,
SignPassphrase: passphrase,
SignBySigstorePrivateKeyFile: opts.signBySigstorePrivateKey,

View File

@@ -0,0 +1,90 @@
package main
import (
"errors"
"fmt"
"io"
"io/fs"
"os"
"github.com/containers/image/v5/pkg/cli"
"github.com/containers/image/v5/signature/sigstore"
"github.com/spf13/cobra"
)
type generateSigstoreKeyOptions struct {
outputPrefix string
passphraseFile string
}
func generateSigstoreKeyCmd() *cobra.Command {
var opts generateSigstoreKeyOptions
cmd := &cobra.Command{
Use: "generate-sigstore-key [command options] --output-prefix PREFIX",
Short: "Generate a sigstore public/private key pair",
RunE: commandAction(opts.run),
Example: "skopeo generate-sigstore-key --output-prefix my-key",
}
adjustUsage(cmd)
flags := cmd.Flags()
flags.StringVar(&opts.outputPrefix, "output-prefix", "", "Write the keys to `PREFIX`.pub and `PREFIX`.private")
flags.StringVar(&opts.passphraseFile, "passphrase-file", "", "Read a passphrase for the private key from `PATH`")
return cmd
}
// ensurePathDoesNotExist verifies that path does not refer to an existing file,
// and returns an error if so.
func ensurePathDoesNotExist(path string) error {
switch _, err := os.Stat(path); {
case err == nil:
return fmt.Errorf("Refusing to overwrite existing %q", path)
case errors.Is(err, fs.ErrNotExist):
return nil
default:
return fmt.Errorf("Error checking existence of %q: %w", path, err)
}
}
func (opts *generateSigstoreKeyOptions) run(args []string, stdout io.Writer) error {
if len(args) != 0 || opts.outputPrefix == "" {
return errors.New("Usage: generate-sigstore-key --output-prefix PREFIX")
}
pubKeyPath := opts.outputPrefix + ".pub"
privateKeyPath := opts.outputPrefix + ".private"
if err := ensurePathDoesNotExist(pubKeyPath); err != nil {
return err
}
if err := ensurePathDoesNotExist(privateKeyPath); err != nil {
return err
}
var passphrase string
if opts.passphraseFile != "" {
p, err := cli.ReadPassphraseFile(opts.passphraseFile)
if err != nil {
return err
}
passphrase = p
} else {
p, err := promptForPassphrase(privateKeyPath, os.Stdin, os.Stdout)
if err != nil {
return err
}
passphrase = p
}
keys, err := sigstore.GenerateKeyPair([]byte(passphrase))
if err != nil {
return fmt.Errorf("Error generating key pair: %w", err)
}
if err := os.WriteFile(privateKeyPath, keys.PrivateKey, 0600); err != nil {
return fmt.Errorf("Error writing private key to %q: %w", privateKeyPath, err)
}
if err := os.WriteFile(pubKeyPath, keys.PublicKey, 0644); err != nil {
return fmt.Errorf("Error writing private key to %q: %w", pubKeyPath, err)
}
fmt.Fprintf(stdout, "Key written to %q and %q", privateKeyPath, pubKeyPath)
return nil
}

View File

@@ -0,0 +1,79 @@
package main
import (
"os"
"path/filepath"
"testing"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
func TestGenerateSigstoreKey(t *testing.T) {
// Invalid command-line arguments
for _, args := range [][]string{
{},
{"--output-prefix", "foo", "a1"},
} {
out, err := runSkopeo(append([]string{"generate-sigstore-key"}, args...)...)
assertTestFailed(t, out, err, "Usage")
}
// One of the destination files already exists
outputSuffixes := []string{".pub", ".private"}
for _, suffix := range outputSuffixes {
dir := t.TempDir()
prefix := filepath.Join(dir, "prefix")
err := os.WriteFile(prefix+suffix, []byte{}, 0600)
require.NoError(t, err)
out, err := runSkopeo("generate-sigstore-key",
"--output-prefix", prefix, "--passphrase-file", "/dev/null",
)
assertTestFailed(t, out, err, "Refusing to overwrite")
}
// One of the destinations is inaccessible (simulate by a symlink that tries to
// traverse a non-directory)
for _, suffix := range outputSuffixes {
dir := t.TempDir()
nonDirectory := filepath.Join(dir, "nondirectory")
err := os.WriteFile(nonDirectory, []byte{}, 0600)
require.NoError(t, err)
prefix := filepath.Join(dir, "prefix")
err = os.Symlink(filepath.Join(nonDirectory, "unaccessible"), prefix+suffix)
require.NoError(t, err)
out, err := runSkopeo("generate-sigstore-key",
"--output-prefix", prefix, "--passphrase-file", "/dev/null",
)
assertTestFailed(t, out, err, prefix+suffix) // + an OS-specific error message
}
destDir := t.TempDir()
// Error reading passphrase
out, err := runSkopeo("generate-sigstore-key",
"--output-prefix", filepath.Join(destDir, "prefix"),
"--passphrase-file", filepath.Join(destDir, "this-does-not-exist"),
)
assertTestFailed(t, out, err, "this-does-not-exist")
// (The interactive passphrase prompting is not yet tested)
// Error writing outputs is untested: when unit tests run as root, we can't use permissions on a directory to cause write failures;
// with the --output-prefix mechanism, and the refusal to even start writing to pre-existing files, directories are the only mechanism
// we have to trigger a write failure.
// Success
// Just a smoke-test, usability of the keys is tested in the generate implementation.
dir := t.TempDir()
prefix := filepath.Join(dir, "prefix")
passphraseFile := filepath.Join(dir, "passphrase")
err = os.WriteFile(passphraseFile, []byte("some passphrase"), 0600)
require.NoError(t, err)
out, err = runSkopeo("generate-sigstore-key",
"--output-prefix", prefix, "--passphrase-file", passphraseFile,
)
assert.NoError(t, err)
for _, suffix := range outputSuffixes {
assert.Contains(t, out, prefix+suffix)
}
}

View File

@@ -5,10 +5,7 @@ import (
"errors"
"fmt"
"io"
"os"
"strings"
"text/tabwriter"
"text/template"
"github.com/containers/common/pkg/report"
"github.com/containers/common/pkg/retry"
@@ -18,6 +15,7 @@ import (
"github.com/containers/image/v5/transports"
"github.com/containers/image/v5/types"
"github.com/containers/skopeo/cmd/skopeo/inspect"
"github.com/docker/distribution/registry/api/errcode"
v1 "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/sirupsen/logrus"
"github.com/spf13/cobra"
@@ -53,8 +51,8 @@ See skopeo(1) section "IMAGE NAMES" for the expected format
`, strings.Join(transports.ListNames(), ", ")),
RunE: commandAction(opts.run),
Example: `skopeo inspect docker://registry.fedoraproject.org/fedora
skopeo inspect --config docker://docker.io/alpine
skopeo inspect --format "Name: {{.Name}} Digest: {{.Digest}}" docker://registry.access.redhat.com/ubi8`,
skopeo inspect --config docker://docker.io/alpine
skopeo inspect --format "Name: {{.Name}} Digest: {{.Digest}}" docker://registry.access.redhat.com/ubi8`,
ValidArgsFunction: autocompleteSupportedTransports,
}
adjustUsage(cmd)
@@ -74,7 +72,6 @@ func (opts *inspectOptions) run(args []string, stdout io.Writer) (retErr error)
rawManifest []byte
src types.ImageSource
imgInspect *types.ImageInspectInfo
data []interface{}
)
ctx, cancel := opts.global.commandTimeoutContext()
defer cancel()
@@ -151,18 +148,7 @@ func (opts *inspectOptions) run(args []string, stdout io.Writer) (retErr error)
}, opts.retryOpts); err != nil {
return fmt.Errorf("Error reading OCI-formatted configuration data: %w", err)
}
if report.IsJSON(opts.format) || opts.format == "" {
var out []byte
out, err = json.MarshalIndent(config, "", " ")
if err == nil {
fmt.Fprintf(stdout, "%s\n", string(out))
}
} else {
row := "{{range . }}" + report.NormalizeFormat(opts.format) + "{{end}}"
data = append(data, config)
err = printTmpl(row, data)
}
if err != nil {
if err := opts.writeOutput(stdout, config); err != nil {
return fmt.Errorf("Error writing OCI-formatted configuration data to standard output: %w", err)
}
return nil
@@ -203,34 +189,48 @@ func (opts *inspectOptions) run(args []string, stdout io.Writer) (retErr error)
}
outputData.RepoTags, err = docker.GetRepositoryTags(ctx, sys, img.Reference())
if err != nil {
// some registries may decide to block the "list all tags" endpoint
// gracefully allow the inspect to continue in this case. Currently
// the IBM Bluemix container registry has this restriction.
// In addition, AWS ECR rejects it with 403 (Forbidden) if the "ecr:ListImages"
// action is not allowed.
if !strings.Contains(err.Error(), "401") && !strings.Contains(err.Error(), "403") {
// Some registries may decide to block the "list all tags" endpoint;
// gracefully allow the inspect to continue in this case:
fatalFailure := true
// - AWS ECR rejects it if the "ecr:ListImages" action is not allowed.
// https://github.com/containers/skopeo/issues/726
var ec errcode.ErrorCoder
if ok := errors.As(err, &ec); ok && ec.ErrorCode() == errcode.ErrorCodeDenied {
fatalFailure = false
}
// - public.ecr.aws does not implement the endpoint at all, and fails with 404:
// https://github.com/containers/skopeo/issues/1230
// This is actually "code":"NOT_FOUND", and the parser doesn't preserve that.
// So, also check the error text.
if ok := errors.As(err, &ec); ok && ec.ErrorCode() == errcode.ErrorCodeUnknown {
var e errcode.Error
if ok := errors.As(err, &e); ok && e.Code == errcode.ErrorCodeUnknown && e.Message == "404 page not found" {
fatalFailure = false
}
}
if fatalFailure {
return fmt.Errorf("Error determining repository tags: %w", err)
}
logrus.Warnf("Registry disallows tag list retrieval; skipping")
}
}
return opts.writeOutput(stdout, outputData)
}
// writeOutput writes data depending on opts.format to stdout
func (opts *inspectOptions) writeOutput(stdout io.Writer, data any) error {
if report.IsJSON(opts.format) || opts.format == "" {
out, err := json.MarshalIndent(outputData, "", " ")
out, err := json.MarshalIndent(data, "", " ")
if err == nil {
fmt.Fprintf(stdout, "%s\n", string(out))
}
return err
}
row := "{{range . }}" + report.NormalizeFormat(opts.format) + "{{end}}"
data = append(data, outputData)
return printTmpl(row, data)
}
func printTmpl(row string, data []interface{}) error {
t, err := template.New("skopeo inspect").Parse(row)
rpt, err := report.New(stdout, "skopeo inspect").Parse(report.OriginUser, opts.format)
if err != nil {
return err
}
w := tabwriter.NewWriter(os.Stdout, 8, 2, 2, ' ', 0)
return t.Execute(w, data)
defer rpt.Flush()
return rpt.Execute([]any{data})
}
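
The tag-listing fallback above can be restated as a standalone helper. The following is a rough sketch; the helper name and the main function are hypothetical, and only the errcode types come from the imports shown in the diff:

package main

import (
    "errors"
    "fmt"

    "github.com/docker/distribution/registry/api/errcode"
)

// isNonFatalTagListError mirrors the heuristic above: a 403 "denied" response or a
// public.ecr.aws-style "404 page not found" means the registry blocks tag listing,
// which inspect treats as a warning instead of a fatal error.
func isNonFatalTagListError(err error) bool {
    var ec errcode.ErrorCoder
    if errors.As(err, &ec) && ec.ErrorCode() == errcode.ErrorCodeDenied {
        return true // e.g. AWS ECR without the ecr:ListImages permission
    }
    var e errcode.Error
    if errors.As(err, &e) && e.Code == errcode.ErrorCodeUnknown && e.Message == "404 page not found" {
        return true // e.g. public.ecr.aws, which does not implement the endpoint
    }
    return false
}

func main() {
    denied := errcode.Error{Code: errcode.ErrorCodeDenied, Message: "requested access is denied"}
    fmt.Println(isNonFatalTagListError(denied))              // true
    fmt.Println(isNonFatalTagListError(errors.New("other"))) // false
}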

View File

@@ -16,6 +16,7 @@ import (
"github.com/containers/image/v5/transports/alltransports"
"github.com/containers/image/v5/types"
"github.com/spf13/cobra"
"golang.org/x/exp/maps"
)
// tagListOutput is the output format of (skopeo list-tags), primarily so that we can format it with a simple json.MarshalIndent.
@@ -37,10 +38,7 @@ var transportHandlers = map[string]func(ctx context.Context, sys *types.SystemCo
// supportedTransports returns all the supported transports
func supportedTransports(joinStr string) string {
res := make([]string, 0, len(transportHandlers))
for handlerName := range transportHandlers {
res = append(res, handlerName)
}
res := maps.Keys(transportHandlers)
sort.Strings(res)
return strings.Join(res, joinStr)
}
@@ -84,12 +82,12 @@ func parseDockerRepositoryReference(refString string) (types.ImageReference, err
return nil, fmt.Errorf("docker: image reference %s does not start with %s://", refString, docker.Transport.Name())
}
parts := strings.SplitN(refString, ":", 2)
if len(parts) != 2 {
_, dockerImageName, hasColon := strings.Cut(refString, ":")
if !hasColon {
return nil, fmt.Errorf(`Invalid image name "%s", expected colon-separated transport:reference`, refString)
}
ref, err := reference.ParseNormalizedNamed(strings.TrimPrefix(parts[1], "//"))
ref, err := reference.ParseNormalizedNamed(strings.TrimPrefix(dockerImageName, "//"))
if err != nil {
return nil, err
}
@@ -130,7 +128,7 @@ func listDockerRepoTags(ctx context.Context, sys *types.SystemContext, opts *tag
}
// return the tagLists from a docker archive file
func listDockerArchiveTags(ctx context.Context, sys *types.SystemContext, opts *tagsOptions, userInput string) (repositoryName string, tagListing []string, err error) {
func listDockerArchiveTags(_ context.Context, sys *types.SystemContext, _ *tagsOptions, userInput string) (repositoryName string, tagListing []string, err error) {
ref, err := alltransports.ParseImageName(userInput)
if err != nil {
return
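
A minimal standalone sketch of the strings.Cut behavior that the reworked parseDockerRepositoryReference above relies on (illustrative example, not part of the diff):

package main

import (
    "fmt"
    "strings"
)

func main() {
    // The transport name is everything before the first colon; the remainder keeps
    // its leading "//", which is then trimmed before normalizing the reference.
    refString := "docker://docker.io/library/nginx"
    transport, dockerImageName, hasColon := strings.Cut(refString, ":")
    fmt.Println(transport, hasColon)                       // docker true
    fmt.Println(strings.TrimPrefix(dockerImageName, "//")) // docker.io/library/nginx
}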

View File

@@ -16,7 +16,6 @@ func TestDockerRepositoryReferenceParser(t *testing.T) {
{"docker://somehost.com"}, // Valid default expansion
{"docker://nginx"}, // Valid default expansion
} {
ref, err := parseDockerRepositoryReference(test[0])
require.NoError(t, err)
expected, err := alltransports.ParseImageName(test[0])
@@ -47,7 +46,6 @@ func TestDockerRepositoryReferenceParserDrift(t *testing.T) {
{"docker://somehost.com", "docker.io/library/somehost.com"}, // Valid default expansion
{"docker://nginx", "docker.io/library/nginx"}, // Valid default expansion
} {
ref, err := parseDockerRepositoryReference(test[0])
ref2, err2 := alltransports.ParseImageName(test[0])

View File

@@ -55,14 +55,12 @@ func createApp() (*cobra.Command, *globalOptions) {
opts := globalOptions{}
rootCommand := &cobra.Command{
Use: "skopeo",
Long: "Various operations with container images and container image registries",
RunE: requireSubcommand,
PersistentPreRunE: func(cmd *cobra.Command, args []string) error {
return opts.before(cmd)
},
SilenceUsage: true,
SilenceErrors: true,
Use: "skopeo",
Long: "Various operations with container images and container image registries",
RunE: requireSubcommand,
PersistentPreRunE: opts.before,
SilenceUsage: true,
SilenceErrors: true,
// Hide the completion command which is provided by cobra
CompletionOptions: cobra.CompletionOptions{HiddenDefaultCmd: true},
// This is documented to parse "local" (non-PersistentFlags) flags of parent commands before
@@ -98,6 +96,7 @@ func createApp() (*cobra.Command, *globalOptions) {
rootCommand.AddCommand(
copyCmd(&opts),
deleteCmd(&opts),
generateSigstoreKeyCmd(),
inspectCmd(&opts),
layersCmd(&opts),
loginCmd(&opts),
@@ -114,7 +113,7 @@ func createApp() (*cobra.Command, *globalOptions) {
}
// before is run by the cli package for any command, before running the command-specific handler.
func (opts *globalOptions) before(cmd *cobra.Command) error {
func (opts *globalOptions) before(cmd *cobra.Command, args []string) error {
if opts.debug {
logrus.SetLevel(logrus.DebugLevel)
}

View File

@@ -73,11 +73,16 @@ import (
"github.com/containers/image/v5/image"
"github.com/containers/image/v5/manifest"
ocilayout "github.com/containers/image/v5/oci/layout"
"github.com/containers/image/v5/pkg/blobinfocache"
"github.com/containers/image/v5/transports"
"github.com/containers/image/v5/transports/alltransports"
"github.com/containers/image/v5/types"
dockerdistributionerrcode "github.com/docker/distribution/registry/api/errcode"
dockerdistributionapi "github.com/docker/distribution/registry/api/v2"
"github.com/opencontainers/go-digest"
imgspecv1 "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/sirupsen/logrus"
"github.com/spf13/cobra"
)
@@ -88,7 +93,9 @@ import (
// 0.2.1: Initial version
// 0.2.2: Added support for fetching image configuration as OCI
// 0.2.3: Added GetFullConfig
const protocolVersion = "0.2.3"
// 0.2.4: Added OpenImageOptional
// 0.2.5: Added LayerInfoJSON
const protocolVersion = "0.2.5"
// maxMsgSize is the current limit on a packet size.
// Note that all non-metadata (i.e. payload data) is sent over a pipe.
@@ -100,12 +107,15 @@ const maxMsgSize = 32 * 1024
// integers are above this.
const maxJSONFloat = float64(uint64(1)<<53 - 1)
// sentinelImageID represents "image not found" on the wire
const sentinelImageID = 0
// request is the JSON serialization of a function call
type request struct {
// Method is the name of the function
Method string `json:"method"`
// Args is the arguments (parsed inside the function)
Args []interface{} `json:"args"`
Args []any `json:"args"`
}
// reply is serialized to JSON as the return value from a function call.
@@ -113,7 +123,7 @@ type reply struct {
// Success is true if and only if the call succeeded.
Success bool `json:"success"`
// Value is an arbitrary value (or values, as array/map) returned from the call.
Value interface{} `json:"value"`
Value any `json:"value"`
// PipeID is an index into open pipes, and should be passed to FinishPipe
PipeID uint32 `json:"pipeid"`
// Error should be non-empty if Success == false
@@ -123,7 +133,7 @@ type reply struct {
// replyBuf is our internal deserialization of reply plus optional fd
type replyBuf struct {
// value will be converted to a reply Value
value interface{}
value any
// fd is the read half of a pipe, passed back to the client
fd *os.File
// pipeid will be provided to the client as PipeID, an index into our open pipes
@@ -166,8 +176,16 @@ type proxyHandler struct {
activePipes map[uint32]*activePipe
}
// convertedLayerInfo is the reduced form of the OCI type BlobInfo
// Used in the return value of GetLayerInfo
type convertedLayerInfo struct {
Digest digest.Digest `json:"digest"`
Size int64 `json:"size"`
MediaType string `json:"media_type"`
}
// Initialize performs one-time initialization, and returns the protocol version
func (h *proxyHandler) Initialize(args []interface{}) (replyBuf, error) {
func (h *proxyHandler) Initialize(args []any) (replyBuf, error) {
h.lock.Lock()
defer h.lock.Unlock()
@@ -196,7 +214,30 @@ func (h *proxyHandler) Initialize(args []interface{}) (replyBuf, error) {
// OpenImage accepts a string image reference i.e. TRANSPORT:REF - like `skopeo copy`.
// The return value is an opaque integer handle.
func (h *proxyHandler) OpenImage(args []interface{}) (replyBuf, error) {
func (h *proxyHandler) OpenImage(args []any) (replyBuf, error) {
return h.openImageImpl(args, false)
}
// isDockerManifestUnknownError is a copy of code from containers/image,
// please update there first.
func isDockerManifestUnknownError(err error) bool {
var ec dockerdistributionerrcode.ErrorCoder
if !errors.As(err, &ec) {
return false
}
return ec.ErrorCode() == dockerdistributionapi.ErrorCodeManifestUnknown
}
// isNotFoundImageError heuristically attempts to determine whether an error
// is saying the remote source couldn't find the image (as opposed to an
// authentication error, an I/O error etc.)
// TODO drive this into containers/image properly
func isNotFoundImageError(err error) bool {
return isDockerManifestUnknownError(err) ||
errors.Is(err, ocilayout.ImageNotFoundError{})
}
func (h *proxyHandler) openImageImpl(args []any, allowNotFound bool) (replyBuf, error) {
h.lock.Lock()
defer h.lock.Unlock()
var ret replyBuf
@@ -218,9 +259,15 @@ func (h *proxyHandler) OpenImage(args []interface{}) (replyBuf, error) {
}
imgsrc, err := imgRef.NewImageSource(context.Background(), h.sysctx)
if err != nil {
if allowNotFound && isNotFoundImageError(err) {
ret.value = sentinelImageID
return ret, nil
}
return ret, err
}
// Note that we never return zero as an imageid; this code doesn't yet
// handle overflow though.
h.imageSerial++
openimg := &openImage{
id: h.imageSerial,
@@ -232,7 +279,14 @@ func (h *proxyHandler) OpenImage(args []interface{}) (replyBuf, error) {
return ret, nil
}
func (h *proxyHandler) CloseImage(args []interface{}) (replyBuf, error) {
// OpenImageOptional accepts a string image reference i.e. TRANSPORT:REF - like `skopeo copy`.
// The return value is an opaque integer handle. If the image does not exist, zero
// is returned.
func (h *proxyHandler) OpenImageOptional(args []any) (replyBuf, error) {
return h.openImageImpl(args, true)
}
func (h *proxyHandler) CloseImage(args []any) (replyBuf, error) {
h.lock.Lock()
defer h.lock.Unlock()
var ret replyBuf
@@ -253,7 +307,7 @@ func (h *proxyHandler) CloseImage(args []interface{}) (replyBuf, error) {
return ret, nil
}
func parseImageID(v interface{}) (uint32, error) {
func parseImageID(v any) (uint32, error) {
imgidf, ok := v.(float64)
if !ok {
return 0, fmt.Errorf("expecting integer imageid, not %T", v)
@@ -262,7 +316,7 @@ func parseImageID(v interface{}) (uint32, error) {
}
// parseUint64 validates that a number fits inside a JavaScript safe integer
func parseUint64(v interface{}) (uint64, error) {
func parseUint64(v any) (uint64, error) {
f, ok := v.(float64)
if !ok {
return 0, fmt.Errorf("expecting numeric, not %T", v)
@@ -273,11 +327,14 @@ func parseUint64(v interface{}) (uint64, error) {
return uint64(f), nil
}
func (h *proxyHandler) parseImageFromID(v interface{}) (*openImage, error) {
func (h *proxyHandler) parseImageFromID(v any) (*openImage, error) {
imgid, err := parseImageID(v)
if err != nil {
return nil, err
}
if imgid == sentinelImageID {
return nil, fmt.Errorf("Invalid imageid value of zero")
}
imgref, ok := h.images[imgid]
if !ok {
return nil, fmt.Errorf("no image %v", imgid)
@@ -300,7 +357,7 @@ func (h *proxyHandler) allocPipe() (*os.File, *activePipe, error) {
// returnBytes generates a return pipe() from a byte array
// In the future it might be nicer to return this via memfd_create()
func (h *proxyHandler) returnBytes(retval interface{}, buf []byte) (replyBuf, error) {
func (h *proxyHandler) returnBytes(retval any, buf []byte) (replyBuf, error) {
var ret replyBuf
piper, f, err := h.allocPipe()
if err != nil {
@@ -362,7 +419,7 @@ func (h *proxyHandler) cacheTargetManifest(img *openImage) error {
// GetManifest returns a copy of the manifest, converted to OCI format, along with the original digest.
// Manifest lists are resolved to the current operating system and architecture.
func (h *proxyHandler) GetManifest(args []interface{}) (replyBuf, error) {
func (h *proxyHandler) GetManifest(args []any) (replyBuf, error) {
h.lock.Lock()
defer h.lock.Unlock()
@@ -433,7 +490,7 @@ func (h *proxyHandler) GetManifest(args []interface{}) (replyBuf, error) {
// GetFullConfig returns a copy of the image configuration, converted to OCI format.
// https://github.com/opencontainers/image-spec/blob/main/config.md
func (h *proxyHandler) GetFullConfig(args []interface{}) (replyBuf, error) {
func (h *proxyHandler) GetFullConfig(args []any) (replyBuf, error) {
h.lock.Lock()
defer h.lock.Unlock()
@@ -470,7 +527,7 @@ func (h *proxyHandler) GetFullConfig(args []interface{}) (replyBuf, error) {
// GetConfig returns a copy of the container runtime configuration, converted to OCI format.
// Note that due to a historical mistake, this returns not the full image configuration,
// but just the container runtime configuration. You should use GetFullConfig instead.
func (h *proxyHandler) GetConfig(args []interface{}) (replyBuf, error) {
func (h *proxyHandler) GetConfig(args []any) (replyBuf, error) {
h.lock.Lock()
defer h.lock.Unlock()
@@ -505,7 +562,7 @@ func (h *proxyHandler) GetConfig(args []interface{}) (replyBuf, error) {
}
// GetBlob fetches a blob, performing digest verification.
func (h *proxyHandler) GetBlob(args []interface{}) (replyBuf, error) {
func (h *proxyHandler) GetBlob(args []any) (replyBuf, error) {
h.lock.Lock()
defer h.lock.Unlock()
@@ -542,10 +599,12 @@ func (h *proxyHandler) GetBlob(args []interface{}) (replyBuf, error) {
piper, f, err := h.allocPipe()
if err != nil {
blobr.Close()
return ret, err
}
go func() {
// Signal completion when we return
defer blobr.Close()
defer f.wg.Done()
verifier := d.Verifier()
tr := io.TeeReader(blobr, verifier)
@@ -568,8 +627,58 @@ func (h *proxyHandler) GetBlob(args []interface{}) (replyBuf, error) {
return ret, nil
}
// GetLayerInfo returns data about the layers of an image, useful for reading the layer contents.
//
// This needs to be called since the data returned by GetManifest() does not allow correctly
// calling GetBlob() for the containers-storage: transport (which doesn't store the original compressed
// representations referenced in the manifest).
func (h *proxyHandler) GetLayerInfo(args []any) (replyBuf, error) {
h.lock.Lock()
defer h.lock.Unlock()
var ret replyBuf
if h.sysctx == nil {
return ret, fmt.Errorf("client error: must invoke Initialize")
}
if len(args) != 1 {
return ret, fmt.Errorf("found %d args, expecting (imgid)", len(args))
}
imgref, err := h.parseImageFromID(args[0])
if err != nil {
return ret, err
}
ctx := context.TODO()
err = h.cacheTargetManifest(imgref)
if err != nil {
return ret, err
}
img := imgref.cachedimg
layerInfos, err := img.LayerInfosForCopy(ctx)
if err != nil {
return ret, err
}
if layerInfos == nil {
layerInfos = img.LayerInfos()
}
layers := make([]convertedLayerInfo, 0, len(layerInfos))
for _, layer := range layerInfos {
layers = append(layers, convertedLayerInfo{layer.Digest, layer.Size, layer.MediaType})
}
ret.value = layers
return ret, nil
}
// FinishPipe waits for the worker goroutine to finish, and closes the write side of the pipe.
func (h *proxyHandler) FinishPipe(args []interface{}) (replyBuf, error) {
func (h *proxyHandler) FinishPipe(args []any) (replyBuf, error) {
h.lock.Lock()
defer h.lock.Unlock()
@@ -596,6 +705,17 @@ func (h *proxyHandler) FinishPipe(args []interface{}) (replyBuf, error) {
return ret, err
}
// close releases all resources associated with this proxy backend
func (h *proxyHandler) close() {
for _, image := range h.images {
err := image.src.Close()
if err != nil {
// This shouldn't be fatal
logrus.Warnf("Failed to close image %s: %v", transports.ImageName(image.cachedimg.Reference()), err)
}
}
}
// send writes a reply buffer to the socket
func (buf replyBuf) send(conn *net.UnixConn, err error) error {
replyToSerialize := reply{
@@ -678,6 +798,8 @@ func (h *proxyHandler) processRequest(readBytes []byte) (rb replyBuf, terminate
rb, err = h.Initialize(req.Args)
case "OpenImage":
rb, err = h.OpenImage(req.Args)
case "OpenImageOptional":
rb, err = h.OpenImageOptional(req.Args)
case "CloseImage":
rb, err = h.CloseImage(req.Args)
case "GetManifest":
@@ -688,10 +810,14 @@ func (h *proxyHandler) processRequest(readBytes []byte) (rb replyBuf, terminate
rb, err = h.GetFullConfig(req.Args)
case "GetBlob":
rb, err = h.GetBlob(req.Args)
case "GetLayerInfo":
rb, err = h.GetLayerInfo(req.Args)
case "FinishPipe":
rb, err = h.FinishPipe(req.Args)
case "Shutdown":
terminate = true
// NOTE: If you add a method here, you should very likely be bumping the
// const protocolVersion above.
default:
err = fmt.Errorf("unknown method: %s", req.Method)
}
@@ -705,6 +831,7 @@ func (opts *proxyOptions) run(args []string, stdout io.Writer) error {
images: make(map[uint32]*openImage),
activePipes: make(map[uint32]*activePipe),
}
defer handler.close()
// Convert the socket FD passed by client into a net.FileConn
fd := os.NewFile(uintptr(opts.sockFd), "sock")
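
The proxy speaks a small JSON protocol over the socket file descriptor passed in via opts.sockFd. As a rough client-side illustration (hypothetical code, not part of skopeo; only the struct field names are taken from the diff above):

package main

import (
    "encoding/json"
    "fmt"
)

// request and reply mirror the wire structs shown in the diff; everything else in
// this file is a hypothetical sketch of the JSON framing a client would use.
type request struct {
    Method string `json:"method"`
    Args   []any  `json:"args"`
}

type reply struct {
    Success bool   `json:"success"`
    Value   any    `json:"value"`
    PipeID  uint32 `json:"pipeid"`
    Error   string `json:"error"`
}

func main() {
    // A client first calls Initialize, then OpenImage (or OpenImageOptional) with a
    // TRANSPORT:REF string; the reply's Value is an opaque integer image handle.
    req := request{Method: "OpenImageOptional", Args: []any{"docker://quay.io/example/image:latest"}}
    msg, _ := json.Marshal(req)
    fmt.Println(string(msg))

    // OpenImageOptional returns the zero sentinel instead of an error when the image
    // does not exist; every other method rejects an image ID of zero as invalid.
    var rep reply
    _ = json.Unmarshal([]byte(`{"success":true,"value":0,"pipeid":0,"error":""}`), &rep)
    fmt.Println(rep.Success, rep.Value)
}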

View File

@@ -6,6 +6,7 @@ import (
"fmt"
"io"
"os"
"strings"
"github.com/containers/image/v5/pkg/cli"
"github.com/containers/image/v5/signature"
@@ -41,12 +42,12 @@ func (opts *standaloneSignOptions) run(args []string, stdout io.Writer) error {
manifest, err := os.ReadFile(manifestPath)
if err != nil {
return fmt.Errorf("Error reading %s: %v", manifestPath, err)
return fmt.Errorf("Error reading %s: %w", manifestPath, err)
}
mech, err := signature.NewGPGSigningMechanism()
if err != nil {
return fmt.Errorf("Error initializing GPG: %v", err)
return fmt.Errorf("Error initializing GPG: %w", err)
}
defer mech.Close()
@@ -57,25 +58,31 @@ func (opts *standaloneSignOptions) run(args []string, stdout io.Writer) error {
signature, err := signature.SignDockerManifestWithOptions(manifest, dockerReference, mech, fingerprint, &signature.SignOptions{Passphrase: passphrase})
if err != nil {
return fmt.Errorf("Error creating signature: %v", err)
return fmt.Errorf("Error creating signature: %w", err)
}
if err := os.WriteFile(opts.output, signature, 0644); err != nil {
return fmt.Errorf("Error writing signature to %s: %v", opts.output, err)
return fmt.Errorf("Error writing signature to %s: %w", opts.output, err)
}
return nil
}
type standaloneVerifyOptions struct {
publicKeyFile string
}
func standaloneVerifyCmd() *cobra.Command {
opts := standaloneVerifyOptions{}
cmd := &cobra.Command{
Use: "standalone-verify MANIFEST DOCKER-REFERENCE KEY-FINGERPRINT SIGNATURE",
Use: "standalone-verify MANIFEST DOCKER-REFERENCE KEY-FINGERPRINTS SIGNATURE",
Short: "Verify a signature using local files",
RunE: commandAction(opts.run),
Long: `Verify a signature using local files
KEY-FINGERPRINTS can be a comma separated list of fingerprints, or "any" if you trust all the keys in the public key file.`,
RunE: commandAction(opts.run),
}
flags := cmd.Flags()
flags.StringVar(&opts.publicKeyFile, "public-key-file", "", `File containing public keys. If not specified, will use local GPG keys.`)
adjustUsage(cmd)
return cmd
}
@@ -86,29 +93,51 @@ func (opts *standaloneVerifyOptions) run(args []string, stdout io.Writer) error
}
manifestPath := args[0]
expectedDockerReference := args[1]
expectedFingerprint := args[2]
expectedFingerprints := strings.Split(args[2], ",")
signaturePath := args[3]
if opts.publicKeyFile == "" && len(expectedFingerprints) == 1 && expectedFingerprints[0] == "any" {
return fmt.Errorf("Cannot use any fingerprint without a public key file")
}
unverifiedManifest, err := os.ReadFile(manifestPath)
if err != nil {
return fmt.Errorf("Error reading manifest from %s: %v", manifestPath, err)
return fmt.Errorf("Error reading manifest from %s: %w", manifestPath, err)
}
unverifiedSignature, err := os.ReadFile(signaturePath)
if err != nil {
return fmt.Errorf("Error reading signature from %s: %v", signaturePath, err)
return fmt.Errorf("Error reading signature from %s: %w", signaturePath, err)
}
mech, err := signature.NewGPGSigningMechanism()
if err != nil {
return fmt.Errorf("Error initializing GPG: %v", err)
var mech signature.SigningMechanism
var publicKeyfingerprints []string
if opts.publicKeyFile != "" {
publicKeys, err := os.ReadFile(opts.publicKeyFile)
if err != nil {
return fmt.Errorf("Error reading public keys from %s: %w", opts.publicKeyFile, err)
}
mech, publicKeyfingerprints, err = signature.NewEphemeralGPGSigningMechanism(publicKeys)
if err != nil {
return fmt.Errorf("Error initializing GPG: %w", err)
}
} else {
mech, err = signature.NewGPGSigningMechanism()
if err != nil {
return fmt.Errorf("Error initializing GPG: %w", err)
}
}
defer mech.Close()
sig, err := signature.VerifyDockerManifestSignature(unverifiedSignature, unverifiedManifest, expectedDockerReference, mech, expectedFingerprint)
if err != nil {
return fmt.Errorf("Error verifying signature: %v", err)
if len(expectedFingerprints) == 1 && expectedFingerprints[0] == "any" {
expectedFingerprints = publicKeyfingerprints
}
fmt.Fprintf(stdout, "Signature verified, digest %s\n", sig.DockerManifestDigest)
sig, verificationFingerprint, err := signature.VerifyImageManifestSignatureUsingKeyIdentityList(unverifiedSignature, unverifiedManifest, expectedDockerReference, mech, expectedFingerprints)
if err != nil {
return fmt.Errorf("Error verifying signature: %w", err)
}
fmt.Fprintf(stdout, "Signature verified using fingerprint %s, digest %s\n", verificationFingerprint, sig.DockerManifestDigest)
return nil
}
@@ -141,7 +170,7 @@ func (opts *untrustedSignatureDumpOptions) run(args []string, stdout io.Writer)
untrustedSignature, err := os.ReadFile(untrustedSignaturePath)
if err != nil {
return fmt.Errorf("Error reading untrusted signature from %s: %v", untrustedSignaturePath, err)
return fmt.Errorf("Error reading untrusted signature from %s: %w", untrustedSignaturePath, err)
}
untrustedInfo, err := signature.GetUntrustedSignatureInformationWithoutVerifying(untrustedSignature)

View File

@@ -127,11 +127,36 @@ func TestStandaloneVerify(t *testing.T) {
dockerReference, fixturesTestKeyFingerprint, "fixtures/corrupt.signature")
assertTestFailed(t, out, err, "Error verifying signature")
// Error using any without a public key file
out, err = runSkopeo("standalone-verify", manifestPath,
dockerReference, "any", signaturePath)
assertTestFailed(t, out, err, "Cannot use any fingerprint without a public key file")
// Success
out, err = runSkopeo("standalone-verify", manifestPath,
dockerReference, fixturesTestKeyFingerprint, signaturePath)
assert.NoError(t, err)
assert.Equal(t, "Signature verified, digest "+fixturesTestImageManifestDigest.String()+"\n", out)
assert.Equal(t, "Signature verified using fingerprint "+fixturesTestKeyFingerprint+", digest "+fixturesTestImageManifestDigest.String()+"\n", out)
// Using multiple fingerprints
out, err = runSkopeo("standalone-verify", manifestPath,
dockerReference, "0123456789ABCDEF0123456789ABCDEF01234567,"+fixturesTestKeyFingerprint+",DEADBEEFDEADBEEFDEADBEEFDEADBEEFDEADBEEF", signaturePath)
assert.NoError(t, err)
assert.Equal(t, "Signature verified using fingerprint "+fixturesTestKeyFingerprint+", digest "+fixturesTestImageManifestDigest.String()+"\n", out)
// Using a public key file
t.Setenv("GNUPGHOME", "")
out, err = runSkopeo("standalone-verify", "--public-key-file", "fixtures/pubring.gpg", manifestPath,
dockerReference, fixturesTestKeyFingerprint, signaturePath)
assert.NoError(t, err)
assert.Equal(t, "Signature verified using fingerprint "+fixturesTestKeyFingerprint+", digest "+fixturesTestImageManifestDigest.String()+"\n", out)
// Using a public key file matching any public key
t.Setenv("GNUPGHOME", "")
out, err = runSkopeo("standalone-verify", "--public-key-file", "fixtures/pubring.gpg", manifestPath,
dockerReference, "any", signaturePath)
assert.NoError(t, err)
assert.Equal(t, "Signature verified using fingerprint "+fixturesTestKeyFingerprint+", digest "+fixturesTestImageManifestDigest.String()+"\n", out)
}
func TestUntrustedSignatureDump(t *testing.T) {

View File

@@ -19,12 +19,15 @@ import (
"github.com/containers/image/v5/docker"
"github.com/containers/image/v5/docker/reference"
"github.com/containers/image/v5/pkg/cli"
"github.com/containers/image/v5/pkg/cli/sigstore"
"github.com/containers/image/v5/signature/signer"
"github.com/containers/image/v5/transports"
"github.com/containers/image/v5/types"
"github.com/opencontainers/go-digest"
"github.com/sirupsen/logrus"
"github.com/spf13/cobra"
"gopkg.in/yaml.v2"
"golang.org/x/exp/slices"
"gopkg.in/yaml.v3"
)
// syncOptions contains information retrieved from the skopeo sync command line.
@@ -36,6 +39,7 @@ type syncOptions struct {
retryOpts *retry.Options
removeSignatures bool // Do not copy signatures from the source image
signByFingerprint string // Sign the image using a GPG key with the specified fingerprint
signBySigstoreParamFile string // Sign the image using a sigstore signature per configuration in a param file
signBySigstorePrivateKey string // Sign the image using a sigstore private key
signPassphraseFile string // Path pointing to a passphrase file when signing
format commonFlag.OptionalString // Force conversion of the image to a specified format
@@ -46,6 +50,7 @@ type syncOptions struct {
dryRun bool // Don't actually copy anything, just output what it would have done
preserveDigests bool // Preserve digests during sync
keepGoing bool // Whether or not to abort the sync if there are any errors during syncing the images
appendSuffix string // Suffix to append to destination image tag
}
// repoDescriptor contains information of a single repository used as a sync source.
@@ -106,12 +111,14 @@ See skopeo-sync(1) for details.
flags := cmd.Flags()
flags.BoolVar(&opts.removeSignatures, "remove-signatures", false, "Do not copy signatures from SOURCE images")
flags.StringVar(&opts.signByFingerprint, "sign-by", "", "Sign the image using a GPG key with the specified `FINGERPRINT`")
flags.StringVar(&opts.signBySigstoreParamFile, "sign-by-sigstore", "", "Sign the image using a sigstore parameter file at `PATH`")
flags.StringVar(&opts.signBySigstorePrivateKey, "sign-by-sigstore-private-key", "", "Sign the image using a sigstore private key at `PATH`")
flags.StringVar(&opts.signPassphraseFile, "sign-passphrase-file", "", "File that contains a passphrase for the --sign-by key")
flags.VarP(commonFlag.NewOptionalStringValue(&opts.format), "format", "f", `MANIFEST TYPE (oci, v2s1, or v2s2) to use when syncing image(s) to a destination (default is manifest type of source, with fallbacks)`)
flags.StringVarP(&opts.source, "src", "s", "", "SOURCE transport type")
flags.StringVarP(&opts.destination, "dest", "d", "", "DESTINATION transport type")
flags.BoolVar(&opts.scoped, "scoped", false, "Images at DESTINATION are prefix using the full source image path as scope")
flags.StringVar(&opts.appendSuffix, "append-suffix", "", "String to append to DESTINATION tags")
flags.BoolVarP(&opts.all, "all", "a", false, "Copy all images if SOURCE-IMAGE is a list")
flags.BoolVar(&opts.dryRun, "dry-run", false, "Run without actually copying data")
flags.BoolVar(&opts.preserveDigests, "preserve-digests", false, "Preserve digests of images and lists")
@@ -125,12 +132,12 @@ See skopeo-sync(1) for details.
}
// UnmarshalYAML is the implementation of the Unmarshaler interface method
// method for the tlsVerifyConfig type.
// for the tlsVerifyConfig type.
// It unmarshals the 'tls-verify' YAML key so that, when the key is not
// specified, tls verification is enforced.
func (tls *tlsVerifyConfig) UnmarshalYAML(unmarshal func(interface{}) error) error {
func (tls *tlsVerifyConfig) UnmarshalYAML(value *yaml.Node) error {
var verify bool
if err := unmarshal(&verify); err != nil {
if err := value.Decode(&verify); err != nil {
return err
}
@@ -216,15 +223,7 @@ func getImageTags(ctx context.Context, sysCtx *types.SystemContext, repoRef refe
}
tags, err := docker.GetRepositoryTags(ctx, sysCtx, dockerRef)
if err != nil {
var unauthorizedForCredentials docker.ErrUnauthorizedForCredentials
if errors.As(err, &unauthorizedForCredentials) {
// Some registries may decide to block the "list all tags" endpoint.
// Gracefully allow the sync to continue in this case.
logrus.Warnf("Registry disallows tag list retrieval: %s", err)
tags = nil
} else {
return nil, fmt.Errorf("Error determining repository tags for image %s: %w", name, err)
}
return nil, fmt.Errorf("Error determining repository tags for repo %s: %w", name, err)
}
return tags, nil
@@ -525,26 +524,17 @@ func (opts *syncOptions) run(args []string, stdout io.Writer) (retErr error) {
}()
// validate source and destination options
contains := func(val string, list []string) (_ bool) {
for _, l := range list {
if l == val {
return true
}
}
return
}
if len(opts.source) == 0 {
return errors.New("A source transport must be specified")
}
if !contains(opts.source, []string{docker.Transport.Name(), directory.Transport.Name(), "yaml"}) {
if !slices.Contains([]string{docker.Transport.Name(), directory.Transport.Name(), "yaml"}, opts.source) {
return fmt.Errorf("%q is not a valid source transport", opts.source)
}
if len(opts.destination) == 0 {
return errors.New("A destination transport must be specified")
}
if !contains(opts.destination, []string{docker.Transport.Name(), directory.Transport.Name()}) {
if !slices.Contains([]string{docker.Transport.Name(), directory.Transport.Name()}, opts.destination) {
return fmt.Errorf("%q is not a valid destination transport", opts.destination)
}
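As a minimal, self-contained sketch (not skopeo code; the transport names and messages are illustrative), this is the `golang.org/x/exp/slices` allow-list check that replaces the hand-rolled `contains` closure above:
```go
package main

import (
	"fmt"

	"golang.org/x/exp/slices"
)

func main() {
	// Illustrative allow-list; skopeo derives its own list from the transport packages.
	validSourceTransports := []string{"docker", "dir", "yaml"}

	for _, candidate := range []string{"docker", "oci"} {
		// slices.Contains reports whether the slice holds an element equal to candidate.
		if !slices.Contains(validSourceTransports, candidate) {
			fmt.Printf("%q is not a valid source transport\n", candidate)
			continue
		}
		fmt.Printf("%q accepted\n", candidate)
	}
}
```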
@@ -610,13 +600,31 @@ func (opts *syncOptions) run(args []string, stdout io.Writer) (retErr error) {
}
passphrase = p
}
var signers []*signer.Signer
if opts.signBySigstoreParamFile != "" {
signer, err := sigstore.NewSignerFromParameterFile(opts.signBySigstoreParamFile, &sigstore.Options{
PrivateKeyPassphrasePrompt: func(keyFile string) (string, error) {
return promptForPassphrase(keyFile, os.Stdin, os.Stdout)
},
Stdin: os.Stdin,
Stdout: stdout,
})
if err != nil {
return fmt.Errorf("Error using --sign-by-sigstore: %w", err)
}
defer signer.Close()
signers = append(signers, signer)
}
options := copy.Options{
RemoveSignatures: opts.removeSignatures,
Signers: signers,
SignBy: opts.signByFingerprint,
SignPassphrase: passphrase,
SignBySigstorePrivateKeyFile: opts.signBySigstorePrivateKey,
SignSigstorePrivateKeyPassphrase: []byte(passphrase),
ReportWriter: os.Stdout,
ReportWriter: stdout,
DestinationCtx: destinationCtx,
ImageListSelection: imageListSelection,
PreserveDigests: opts.preserveDigests,
@@ -650,7 +658,7 @@ func (opts *syncOptions) run(args []string, stdout io.Writer) (retErr error) {
destSuffix = path.Base(destSuffix)
}
destRef, err := destinationReference(path.Join(destination, destSuffix), opts.destination)
destRef, err := destinationReference(path.Join(destination, destSuffix)+opts.appendSuffix, opts.destination)
if err != nil {
return err
}

cmd/skopeo/sync_test.go Normal file

@@ -0,0 +1,46 @@
package main
import (
"testing"
"github.com/containers/image/v5/types"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"gopkg.in/yaml.v3"
)
var _ yaml.Unmarshaler = (*tlsVerifyConfig)(nil)
func TestTLSVerifyConfig(t *testing.T) {
type container struct { // An example of a larger config file
TLSVerify tlsVerifyConfig `yaml:"tls-verify"`
}
for _, c := range []struct {
input string
expected tlsVerifyConfig
}{
{
input: `tls-verify: true`,
expected: tlsVerifyConfig{skip: types.OptionalBoolFalse},
},
{
input: `tls-verify: false`,
expected: tlsVerifyConfig{skip: types.OptionalBoolTrue},
},
{
input: ``, // No value
expected: tlsVerifyConfig{skip: types.OptionalBoolUndefined},
},
} {
config := container{}
err := yaml.Unmarshal([]byte(c.input), &config)
require.NoError(t, err, c.input)
assert.Equal(t, c.expected, config.TLSVerify, c.input)
}
// Invalid input
config := container{}
err := yaml.Unmarshal([]byte(`tls-verify: "not a valid bool"`), &config)
assert.Error(t, err)
}


@@ -6,6 +6,7 @@ import (
"github.com/containers/image/v5/transports/alltransports"
"github.com/containers/storage/pkg/unshare"
"github.com/syndtr/gocapability/capability"
"golang.org/x/exp/slices"
)
var neededCapabilities = []capability.Cap{
@@ -21,29 +22,32 @@ func maybeReexec() error {
// With Skopeo we need only the subset of the root capabilities necessary
// for pulling an image to the storage. Do not attempt to create a namespace
// if we already have the capabilities we need.
capabilities, err := capability.NewPid(0)
capabilities, err := capability.NewPid2(0)
if err != nil {
return fmt.Errorf("error reading the current capabilities sets: %w", err)
}
for _, cap := range neededCapabilities {
if !capabilities.Get(capability.EFFECTIVE, cap) {
// We miss a capability we need, create a user namespaces
unshare.MaybeReexecUsingUserNamespace(true)
return nil
}
if err := capabilities.Load(); err != nil {
return fmt.Errorf("error loading the current capabilities sets: %w", err)
}
if slices.ContainsFunc(neededCapabilities, func(cap capability.Cap) bool {
return !capabilities.Get(capability.EFFECTIVE, cap)
}) {
// We miss a capability we need, create a user namespaces
unshare.MaybeReexecUsingUserNamespace(true)
return nil
}
return nil
}
func reexecIfNecessaryForImages(imageNames ...string) error {
// Check if container-storage is used before doing unshare
for _, imageName := range imageNames {
if slices.ContainsFunc(imageNames, func(imageName string) bool {
transport := alltransports.TransportFromImageName(imageName)
// Hard-code the storage name to avoid a reference on c/image/storage.
// See https://github.com/containers/skopeo/issues/771#issuecomment-563125006.
if transport != nil && transport.Name() == "containers-storage" {
return maybeReexec()
}
return transport != nil && transport.Name() == "containers-storage"
}) {
return maybeReexec()
}
return nil
}


@@ -44,7 +44,7 @@ func noteCloseFailure(err error, description string, closeErr error) error {
if err == nil {
return fmt.Errorf("%s: %w", description, closeErr)
}
// In this case we prioritize the primary error for use with %w; closeErr is usually less relevant, or might be a consequence of the primary erorr.
// In this case we prioritize the primary error for use with %w; closeErr is usually less relevant, or might be a consequence of the primary error.
return fmt.Errorf("%w (%s: %v)", err, description, closeErr)
}
@@ -315,14 +315,11 @@ func parseCreds(creds string) (string, string, error) {
if creds == "" {
return "", "", errors.New("credentials can't be empty")
}
up := strings.SplitN(creds, ":", 2)
if len(up) == 1 {
return up[0], "", nil
}
if up[0] == "" {
username, password, _ := strings.Cut(creds, ":") // Sets password to "" if there is no ":"
if username == "" {
return "", "", errors.New("username can't be empty")
}
return up[0], up[1], nil
return username, password, nil
}
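The `strings.Cut` rewrite above relies on Cut returning the whole input as the first value and an empty second value when the separator is absent. A standalone sketch (illustrative credentials, not skopeo code):
```go
package main

import (
	"fmt"
	"strings"
)

func main() {
	for _, creds := range []string{"alice:s3cret", "alice", ":s3cret"} {
		// strings.Cut splits around the first ":"; found is false when there is none,
		// in which case username == creds and password == "".
		username, password, found := strings.Cut(creds, ":")
		fmt.Printf("creds=%q username=%q password=%q hadColon=%v\n", creds, username, password, found)
	}
}
```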
func getDockerAuth(creds string) (*types.DockerAuthConfig, error) {


@@ -385,7 +385,6 @@ func TestParseManifestFormat(t *testing.T) {
// since there is a shared authfile image option and a non-shared (prefixed) one, make sure the override logic
// works correctly.
func TestImageOptionsAuthfileOverride(t *testing.T) {
for _, testCase := range []struct {
flagPrefix string
cmdFlags []string


@@ -0,0 +1,15 @@
ARG BASE_FQIN=quay.io/coreos-assembler/fcos-buildroot:testing-devel
FROM $BASE_FQIN
# See 'Danger of using COPY and ADD instructions'
# at https://cirrus-ci.org/guide/docker-builder-vm/#dockerfile-as-a-ci-environment
# Provide easy way to force-invalidate image cache by .cirrus.yml change
ARG CIRRUS_IMAGE_VERSION
ENV CIRRUS_IMAGE_VERSION=$CIRRUS_IMAGE_VERSION
ADD https://sh.rustup.rs /var/tmp/rustup_installer.sh
RUN dnf erase -y rust && \
chmod +x /var/tmp/rustup_installer.sh && \
/var/tmp/rustup_installer.sh -y --default-toolchain stable --profile minimal
ENV PATH=/root/.cargo/bin:/root/.local/bin:/root/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin


@@ -55,9 +55,6 @@ _run_setup() {
# VMs come with the distro skopeo package pre-installed
dnf erase -y skopeo
# Required for testing the SIF transport
dnf install -y fakeroot squashfs-tools
msg "Removing systemd-resolved from nsswitch.conf"
# /etc/resolv.conf is already set to bypass systemd-resolvd
sed -i -r -e 's/^(hosts.+)resolve.+dns/\1dns/' /etc/nsswitch.conf
@@ -115,18 +112,19 @@ _run_unit() {
make test-unit-local BUILDTAGS="$BUILDTAGS"
}
_run_integration() {
_podman_reset() {
# Ensure we start with a clean-slate
podman system reset --force
showrun podman system reset --force
}
_run_integration() {
_podman_reset
make test-integration-local BUILDTAGS="$BUILDTAGS"
}
_run_system() {
# Ensure we start with a clean-slate
podman system reset --force
# Executes with containers required for testing.
_podman_reset
##### Note: Test MODIFIES THE HOST SETUP #####
make test-system-local BUILDTAGS="$BUILDTAGS"
}


@@ -1,6 +1,6 @@
[comment]: <> (***ATTENTION*** ***WARNING*** ***ALERT*** ***CAUTION*** ***DANGER***)
[comment]: <> ()
[comment]: <> (ANY changes made to this file, once commited/merged must)
[comment]: <> (ANY changes made to this file, once committed/merged must)
[comment]: <> (be manually copy/pasted -in markdown- into the description)
[comment]: <> (field on Quay at the following locations:)
[comment]: <> ()
@@ -11,7 +11,7 @@
[comment]: <> ()
[comment]: <> (***ATTENTION*** ***WARNING*** ***ALERT*** ***CAUTION*** ***DANGER***)
<img src="https://cdn.rawgit.com/containers/skopeo/master/docs/skopeo.svg" width="250">
<img src="https://cdn.rawgit.com/containers/skopeo/main/docs/skopeo.svg" width="250">
----


@@ -20,6 +20,8 @@ automatically inherit any parts of the source name.
## OPTIONS
See also [skopeo(1)](skopeo.1.md) for options placed before the subcommand name.
**--additional-tag**=_strings_
Additional tags (supports docker-archive).
@@ -93,6 +95,11 @@ Do not copy signatures, if any, from _source-image_. Necessary when copying a si
Add a “simple signing” signature using that key ID for an image name corresponding to _destination-image_
**--sign-by-sigstore** _param-file_
Add a sigstore signature based on the options in the specified containers sigstore signing parameter file, _param-file_.
See containers-sigstore-signing-params.yaml(5) for details about the file format.
**--sign-by-sigstore-private-key** _path_
Add a sigstore signature using a private key at _path_ for an image name corresponding to _destination-image_
@@ -214,12 +221,12 @@ The password to access the destination registry.
## EXAMPLES
To just copy an image from one registry to another:
```sh
```console
$ skopeo copy docker://quay.io/skopeo/stable:latest docker://registry.example.com/skopeo:latest
```
To copy the layers of the docker.io busybox image to a local directory:
```sh
```console
$ mkdir -p /var/lib/images/busybox
$ skopeo copy docker://busybox:latest dir:/var/lib/images/busybox
$ ls /var/lib/images/busybox/*
@@ -228,42 +235,46 @@ $ ls /var/lib/images/busybox/*
/tmp/busybox/8ddc19f16526912237dd8af81971d5e4dd0587907234be2b83e249518d5b673f.tar
```
To copy and sign an image:
To create an archive consumable by `docker load` (but note that using a registry is almost always more efficient):
```console
$ skopeo copy docker://busybox:latest docker-archive:archive-file.tar:busybox:latest
```
```sh
# skopeo copy --sign-by dev@example.com containers-storage:example/busybox:streaming docker://example/busybox:gold
To copy and sign an image:
```console
$ skopeo copy --sign-by dev@example.com containers-storage:example/busybox:streaming docker://example/busybox:gold
```
To encrypt an image:
```sh
skopeo copy docker://docker.io/library/nginx:1.17.8 oci:local_nginx:1.17.8
```console
$ skopeo copy docker://docker.io/library/nginx:1.17.8 oci:local_nginx:1.17.8
openssl genrsa -out private.key 1024
openssl rsa -in private.key -pubout > public.key
$ openssl genrsa -out private.key 1024
$ openssl rsa -in private.key -pubout > public.key
skopeo copy --encryption-key jwe:./public.key oci:local_nginx:1.17.8 oci:try-encrypt:encrypted
$ skopeo copy --encryption-key jwe:./public.key oci:local_nginx:1.17.8 oci:try-encrypt:encrypted
```
To decrypt an image:
```sh
skopeo copy --decryption-key ./private.key oci:try-encrypt:encrypted oci:try-decrypt:decrypted
```console
$ skopeo copy --decryption-key ./private.key oci:try-encrypt:encrypted oci:try-decrypt:decrypted
```
To copy encrypted image without decryption:
```sh
skopeo copy oci:try-encrypt:encrypted oci:try-encrypt-copy:encrypted
```console
$ skopeo copy oci:try-encrypt:encrypted oci:try-encrypt-copy:encrypted
```
To decrypt an image that requires more than one key:
```sh
skopeo copy --decryption-key ./private1.key --decryption-key ./private2.key --decryption-key ./private3.key oci:try-encrypt:encrypted oci:try-decrypt:decrypted
```console
$ skopeo copy --decryption-key ./private1.key --decryption-key ./private2.key --decryption-key ./private3.key oci:try-encrypt:encrypted oci:try-decrypt:decrypted
```
Container images can also be partially encrypted by specifying the index of the layer. Layers are 0-indexed, with support for negative indexing; i.e. 0 is the first layer and -1 is the last layer.
For example, if the image `docker.io/library/nginx:1.17.8` is made up of 3 layers and we only want to encrypt the 2nd layer,
```sh
skopeo copy --encryption-key jwe:./public.key --encrypt-layer 1 oci:local_nginx:1.17.8 oci:try-encrypt:encrypted
```console
$ skopeo copy --encryption-key jwe:./public.key --encrypt-layer 1 oci:local_nginx:1.17.8 oci:try-encrypt:encrypted
```
## SEE ALSO


@@ -31,6 +31,8 @@ $ docker exec -it registry /usr/bin/registry garbage-collect /etc/docker-distrib
## OPTIONS
See also [skopeo(1)](skopeo.1.md) for options placed before the subcommand name.
**--authfile** _path_
Path of the authentication file. Default is ${XDG\_RUNTIME\_DIR}/containers/auth.json, which is set using `skopeo login`.
@@ -85,7 +87,7 @@ The password to access the registry.
## EXAMPLES
Mark image example/pause for deletion from the registry.example.com registry:
```sh
```console
$ skopeo delete docker://registry.example.com/example/pause:latest
```
See above for additional details on using the command **delete**.


@@ -0,0 +1,49 @@
% skopeo-generate-sigstore-key(1)
## NAME
skopeo\-generate-sigstore-key - Generate a sigstore public/private key pair.
## SYNOPSIS
**skopeo generate-sigstore-key** [*options*] **--output-prefix** _prefix_
## DESCRIPTION
Generates a public/private key pair suitable for creating sigstore image signatures.
The private key is encrypted with a passphrase;
if one is not provided using an option, this command prompts for it interactively.
The private key is written to _prefix_**.private** .
The public key is written to _prefix_**.pub** .
## OPTIONS
See also [skopeo(1)](skopeo.1.md) for options placed before the subcommand name.
**--help**, **-h**
Print usage statement
**--output-prefix** _prefix_
Mandatory.
Path prefix for the output keys (_prefix_**.private** and _prefix_**.pub**).
**--passphrase-file** _path_
The passphrase to use to encrypt the private key.
Only the first line will be read.
A passphrase stored in a file is of questionable security if other users can read this file.
Do not use this option if at all avoidable.
## EXAMPLES
```console
$ skopeo generate-sigstore-key --output-prefix mykey
```
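Purely as an illustration (the image and registry names are placeholders), the generated _prefix_**.private** file is the kind of key accepted by the **--sign-by-sigstore-private-key** option described in skopeo-copy(1) and skopeo-sync(1):
```console
$ skopeo copy --sign-by-sigstore-private-key mykey.private docker://example.com/app:latest docker://registry.example.com/app:signed
```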
## SEE ALSO
skopeo(1), skopeo-copy(1), containers-policy.json(5)
## AUTHORS
Miloslav Trmač <mitr@redhat.com>


@@ -17,6 +17,8 @@ To see values for a different architecture/OS, use the **--override-os** / **--o
## OPTIONS
See also [skopeo(1)](skopeo.1.md) for options placed before the subcommand name.
**--authfile** _path_
Path of the authentication file. Default is ${XDG\_RUNTIME\_DIR}/containers/auth.json, which is set using `skopeo login`.
@@ -42,6 +44,7 @@ Use docker daemon host at _host_ (`docker-daemon:` transport only)
Format the output using the given Go template.
The keys of the returned JSON can be used as the values for the --format flag (see examples below).
Supports the Go templating functions available at https://pkg.go.dev/github.com/containers/common/pkg/report#hdr-Template_Functions
**--help**, **-h**
@@ -87,74 +90,90 @@ Do not list the available tags from the repository in the output. When `true`, t
## EXAMPLES
To review information for the image fedora from the docker.io registry:
```sh
```console
$ skopeo inspect docker://docker.io/fedora
{
"Name": "docker.io/library/fedora",
"Digest": "sha256:a97914edb6ba15deb5c5acf87bd6bd5b6b0408c96f48a5cbd450b5b04509bb7d",
"Digest": "sha256:f99efcddc4dd6736d8a88cc1ab6722098ec1d77dbf7aed9a7a514fc997ca08e0",
"RepoTags": [
"20",
"21",
"22",
"23",
"24",
"heisenbug",
"latest",
"rawhide"
"20",
"21",
"..."
],
"Created": "2016-06-20T19:33:43.220526898Z",
"DockerVersion": "1.10.3",
"Labels": {},
"Created": "2022-11-16T07:26:42.618327645Z",
"DockerVersion": "20.10.12",
"Labels": {
"maintainer": "Clement Verna \u003ccverna@fedoraproject.org\u003e"
},
"Architecture": "amd64",
"Os": "linux",
"Layers": [
"sha256:7c91a140e7a1025c3bc3aace4c80c0d9933ac4ee24b8630a6b0b5d8b9ce6b9d4"
"sha256:cb8b1ed77979b894115a983f391465651aa7eb3edd036be4b508eea47271eb93"
],
"LayersData": [
{
"MIMEType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
"Digest": "sha256:cb8b1ed77979b894115a983f391465651aa7eb3edd036be4b508eea47271eb93",
"Size": 65990920,
"Annotations": null
}
],
"Env": [
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
"DISTTAG=f37container",
"FGC=f37",
"FBR=f37"
]
}
```
To inspect python from the docker.io registry and not show the available tags:
```sh
```console
$ skopeo inspect --no-tags docker://docker.io/library/python
{
"Name": "docker.io/library/python",
"Digest": "sha256:5ca194a80ddff913ea49c8154f38da66a41d2b73028c5cf7e46bc3c1d6fda572",
"Digest": "sha256:10fc14aa6ae69f69e4c953cffd9b0964843d8c163950491d2138af891377bc1d",
"RepoTags": [],
"Created": "2021-10-05T23:40:54.936108045Z",
"DockerVersion": "20.10.7",
"Created": "2022-11-16T06:55:28.566254104Z",
"DockerVersion": "20.10.12",
"Labels": null,
"Architecture": "amd64",
"Os": "linux",
"Layers": [
"sha256:df5590a8898bedd76f02205dc8caa5cc9863267dbcd8aac038bcd212688c1cc7",
"sha256:705bb4cb554eb7751fd21a994f6f32aee582fbe5ea43037db6c43d321763992b",
"sha256:519df5fceacdeaadeec563397b1d9f4d7c29c9f6eff879739cab6f0c144f49e1",
"sha256:ccc287cbeddc96a0772397ca00ec85482a7b7f9a9fac643bfddd87b932f743db",
"sha256:e3f8e6af58ed3a502f0c3c15dce636d9d362a742eb5b67770d0cfcb72f3a9884",
"sha256:aebed27b2d86a5a3a2cbe186247911047a7e432b9d17daad8f226597c0ea4276",
"sha256:54c32182bdcc3041bf64077428467109a70115888d03f7757dcf614ff6d95ebe",
"sha256:cc8b7caedab13af07adf4836e13af2d4e9e54d794129b0fd4c83ece6b1112e86",
"sha256:462c3718af1d5cdc050cfba102d06c26f78fe3b738ce2ca2eb248034b1738945"
"sha256:a8ca11554fce00d9177da2d76307bdc06df7faeb84529755c648ac4886192ed1",
"sha256:e4e46864aba2e62ba7c75965e4aa33ec856ee1b1074dda6b478101c577b63abd",
"..."
],
"LayersData": [
{
"MIMEType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
"Digest": "sha256:a8ca11554fce00d9177da2d76307bdc06df7faeb84529755c648ac4886192ed1",
"Size": 55038615,
"Annotations": null
},
{
"MIMEType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
"Digest": "sha256:e4e46864aba2e62ba7c75965e4aa33ec856ee1b1074dda6b478101c577b63abd",
"Size": 5164893,
"Annotations": null
},
"..."
],
"Env": [
"PATH=/usr/local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
"LANG=C.UTF-8",
"GPG_KEY=A035C8C19219BA821ECEA86B64E628F8D684696D",
"PYTHON_VERSION=3.10.0",
"PYTHON_PIP_VERSION=21.2.4",
"PYTHON_SETUPTOOLS_VERSION=57.5.0",
"PYTHON_GET_PIP_URL=https://github.com/pypa/get-pip/raw/d781367b97acf0ece7e9e304bf281e99b618bf10/public/get-pip.py",
"PYTHON_GET_PIP_SHA256=01249aa3e58ffb3e1686b7141b4e9aac4d398ef4ac3012ed9dff8dd9f685ffe0"
"...",
]
}
```
```
```console
$ /bin/skopeo inspect --config docker://registry.fedoraproject.org/fedora --format "{{ .Architecture }}"
amd64
```
```
```console
$ /bin/skopeo inspect --format '{{ .Env }}' docker://registry.access.redhat.com/ubi8
[PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin container=oci]
```


@@ -12,6 +12,8 @@ Return a list of tags from _source-image_ in a registry or a local docker-archiv
## OPTIONS
See also [skopeo(1)](skopeo.1.md) for options placed before the subcommand name.
**--authfile** _path_
Path of the authentication file. Default is ${XDG\_RUNTIME\_DIR}/containers/auth.json, which is set using `skopeo login`.
@@ -79,7 +81,7 @@ This commands refers to repositories using a _transport_`:`_details_ format. The
### Docker Transport
To get the list of tags in the "fedora" repository from the docker.io registry (the repository name expands to "library/fedora" per docker transport canonical form):
```sh
```console
$ skopeo list-tags docker://docker.io/fedora
{
"Repository": "docker.io/library/fedora",
@@ -110,7 +112,7 @@ $ skopeo list-tags docker://docker.io/fedora
To list the tags in a local host docker/distribution registry on port 5000, in this case for the "fedora" repository:
```sh
```console
$ skopeo list-tags docker://localhost:5000/fedora
{
"Repository": "localhost:5000/fedora",
@@ -127,7 +129,7 @@ $ skopeo list-tags docker://localhost:5000/fedora
To list the tags in a local docker-archive file:
```sh
```console
$ skopeo list-tags docker-archive:/tmp/busybox.tar.gz
{
"Tags": [
@@ -138,7 +140,7 @@ $ skopeo list-tags docker-archive:/tmp/busybox.tar.gz
Also supports more than one tag in an archive:
```sh
```console
$ skopeo list-tags docker-archive:/tmp/docker-two-images.tar.gz
{
"Tags": [
@@ -150,7 +152,7 @@ $ skopeo list-tags docker-archive:/tmp/docker-two-images.tar.gz
Will include a source-index entry for each untagged image:
```sh
```console
$ skopeo list-tags docker-archive:/tmp/four-tags-with-an-untag.tar
{
"Tags": [


@@ -15,6 +15,8 @@ flag. The default path used is **${XDG\_RUNTIME\_DIR}/containers/auth.json**.
## OPTIONS
See also [skopeo(1)](skopeo.1.md) for options placed before the subcommand name.
**--password**, **-p**=*password*
Password for registry
@@ -57,41 +59,41 @@ Write more detailed information to stdout
## EXAMPLES
```
```console
$ skopeo login docker.io
Username: testuser
Password:
Login Succeeded!
```
```
```console
$ skopeo login -u testuser -p testpassword localhost:5000
Login Succeeded!
```
```
```console
$ skopeo login --authfile authdir/myauths.json docker.io
Username: testuser
Password:
Login Succeeded!
```
```
```console
$ skopeo login --tls-verify=false -u test -p test localhost:5000
Login Succeeded!
```
```
```console
$ skopeo login --cert-dir /etc/containers/certs.d/ -u foo -p bar localhost:5000
Login Succeeded!
```
```
```console
$ skopeo login -u testuser --password-stdin < testpassword.txt docker.io
Login Succeeded!
```
```
```console
$ echo $testpassword | skopeo login -u testuser --password-stdin docker.io
Login Succeeded!
```


@@ -14,6 +14,8 @@ All the cached credentials can be removed by setting the **all** flag.
## OPTIONS
See also [skopeo(1)](skopeo.1.md) for options placed before the subcommand name.
**--authfile**=*path*
Path of the authentication file. Default is ${XDG\_RUNTIME\_DIR}/containers/auth.json
@@ -35,17 +37,17 @@ Require HTTPS and verify certificates when talking to the container registry or
## EXAMPLES
```
```console
$ skopeo logout docker.io
Remove login credentials for docker.io
```
```
```console
$ skopeo logout --authfile authdir/myauths.json docker.io
Remove login credentials for docker.io
```
```
```console
$ skopeo logout --all
Remove login credentials for all registries
```


@@ -18,7 +18,7 @@ Print usage statement
## EXAMPLES
```sh
```console
$ skopeo manifest-digest manifest.json
sha256:a59906e33509d14c036c8678d687bd4eec81ed7c4b8ce907b888c607f6a1e0e6
```


@@ -17,6 +17,8 @@ This is primarily a debugging tool, useful for special cases, and usually should
## OPTIONS
See also [skopeo(1)](skopeo.1.md) for options placed before the subcommand name.
**--help**, **-h**
Print usage statement
@@ -31,7 +33,7 @@ The passphrase to use when signing with the key ID from `--sign-by`. Only the fir
## EXAMPLES
```sh
```console
$ skopeo standalone-sign busybox-manifest.json registry.example.com/example/busybox 1D8230F6CDB6A06716E414C1DB72F2188BB46CC8 --output busybox.signature
$
```


@@ -4,7 +4,7 @@
skopeo\-standalone\-verify - Verify an image signature.
## SYNOPSIS
**skopeo standalone-verify** _manifest_ _docker-reference_ _key-fingerprint_ _signature_
**skopeo standalone-verify** _manifest_ _docker-reference_ _key-fingerprints_ _signature_
## DESCRIPTION
@@ -16,7 +16,7 @@ as per containers-policy.json(5).
_docker-reference_ A docker reference expected to identify the image in the signature
_key-fingerprint_ Expected identity of the signing key
_key-fingerprints_ Identities of trusted signing keys (comma separated), or "any" to trust any known key when using a public key file
_signature_ Path to signature file
@@ -24,13 +24,19 @@ as per containers-policy.json(5).
## OPTIONS
See also [skopeo(1)](skopeo.1.md) for options placed before the subcommand name.
**--help**, **-h**
Print usage statement
**--public-key-file** _public key file_
File containing the public keys to use when verifying signatures. If this is not specified, keys from the GPG homedir are used.
## EXAMPLES
```sh
```console
$ skopeo standalone-verify busybox-manifest.json registry.example.com/example/busybox 1D8230F6CDB6A06716E414C1DB72F2188BB46CC8 busybox.signature
Signature verified, digest sha256:20bf21ed457b390829cdbeec8795a7bea1626991fda603e0d01b4e7f60427e55
```


@@ -8,10 +8,7 @@ skopeo\-sync - Synchronize images between registry repositories and local direct
**skopeo sync** [*options*] --src _transport_ --dest _transport_ _source_ _destination_
## DESCRIPTION
Synchronize images between registry repoositories and local directories.
The synchronization is achieved by copying all the images found at _source_ to _destination_.
Useful to synchronize a local container registry mirror, and to to populate registries running inside of air-gapped environments.
Synchronize images between registry repositories and local directories. Synchronization is achieved by copying all the images found at _source_ to _destination_ - useful when synchronizing a local container registry mirror or for populating registries running inside of air-gapped environments.
Differently from other skopeo commands, skopeo sync requires both source and destination transports to be specified separately from _source_ and _destination_.
One of the problems of prefixing a destination with its transport is that the registry `docker://hostname:port` would be wrongly interpreted as an image reference at a non-fully-qualified registry, with `hostname` and `port` taken as the image name and tag.
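For instance (taken from the EXAMPLES section below), the transports are named via **--src**/**--dest**, while _source_ and _destination_ themselves carry no transport prefix:
```console
$ skopeo sync --src docker --dest dir registry.example.com/busybox /media/usb
```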
@@ -32,6 +29,9 @@ When the `--scoped` option is specified, images are prefixed with the source ima
name can be stored at _destination_.
## OPTIONS
See also [skopeo(1)](skopeo.1.md) for options placed before the subcommand name.
**--all**, **-a**
If one of the images in __src__ refers to a list of images, instead of copying just the image which matches the current OS and
architecture (subject to the use of the global --override-os, --override-arch and --override-variant options), attempt to copy all of
@@ -66,6 +66,8 @@ Print usage statement.
**--scoped** Prefix images with the source image path, so that multiple images with the same name can be stored at _destination_.
**--append-suffix** _tag-suffix_ String to append to destination tags.
**--preserve-digests** Preserve the digests during copying. Fail if the digest cannot be preserved. Consider using `--all` at the same time.
**--remove-signatures** Do not copy signatures, if any, from _source-image_. This is necessary when copying a signed image to a destination which does not support signatures.
@@ -74,6 +76,11 @@ Print usage statement.
Add a “simple signing” signature using that key ID for an image name corresponding to _destination-image_
**--sign-by-sigstore** _param-file_
Add a sigstore signature based on the options in the specified containers sigstore signing parameter file, _param-file_.
See containers-sigstore-signing-params.yaml(5) for details about the file format.
**--sign-by-sigstore-private-key** _path_
Add a sigstore signature using a private key at _path_ for an image name corresponding to _destination-image_
@@ -126,7 +133,7 @@ The password to access the destination registry.
## EXAMPLES
### Synchronizing to a local directory
```
```console
$ skopeo sync --src docker --dest dir registry.example.com/busybox /media/usb
```
Images are located at:
@@ -144,7 +151,7 @@ Images are located at:
/media/usb/busybox:1-glibc
```
Sync run
```
```console
$ skopeo sync --src dir --dest docker /media/usb/busybox:1-glibc my-registry.local.lan/test/
```
Destination registry content:
@@ -154,7 +161,7 @@ my-registry.local.lan/test/busybox 1-glibc
```
### Synchronizing to a local directory, scoped
```
```console
$ skopeo sync --src docker --dest dir --scoped registry.example.com/busybox /media/usb
```
Images are located at:
@@ -167,8 +174,8 @@ Images are located at:
```
### Synchronizing to a container registry
```
skopeo sync --src docker --dest docker registry.example.com/busybox my-registry.local.lan
```console
$ skopeo sync --src docker --dest docker registry.example.com/busybox my-registry.local.lan
```
Destination registry content:
```
@@ -177,8 +184,8 @@ registry.local.lan/busybox 1-glibc, 1-musl, 1-ubuntu, ..., latest
```
### Synchronizing to a container registry keeping the repository
```
skopeo sync --src docker --dest docker registry.example.com/repo/busybox my-registry.local.lan/repo
```console
$ skopeo sync --src docker --dest docker registry.example.com/repo/busybox my-registry.local.lan/repo
```
Destination registry content:
```
@@ -186,6 +193,16 @@ REPO TAGS
registry.local.lan/repo/busybox 1-glibc, 1-musl, 1-ubuntu, ..., latest
```
### Synchronizing to a container registry with tag suffix
```console
$ skopeo sync --src docker --dest docker --append-suffix '-mirror' registry.example.com/busybox my-registry.local.lan
```
Destination registry content:
```
REPO TAGS
registry.local.lan/busybox 1-glibc-mirror, 1-musl-mirror, 1-ubuntu-mirror, ..., latest-mirror
```
### YAML file content (used _source_ for `**--src yaml**`)
```yaml
@@ -210,8 +227,8 @@ quay.io:
- latest
```
If the yaml filename is `sync.yml`, sync run:
```
skopeo sync --src yaml --dest docker sync.yml my-registry.local.lan/repo/
```console
$ skopeo sync --src yaml --dest docker sync.yml my-registry.local.lan/repo/
```
This will copy the following images:
- Repository `registry.example.com/busybox`: all images, as no tags are specified.


@@ -47,10 +47,13 @@ Most commands refer to container images, using a _transport_`:`_details_ format.
**oci-archive:**_path_**:**_tag_
An image _tag_ in a tar archive compliant with "Open Container Image Layout Specification" at _path_.
See [containers-transports(5)](https://github.com/containers/image/blob/master/docs/containers-transports.5.md) for details.
See [containers-transports(5)](https://github.com/containers/image/blob/main/docs/containers-transports.5.md) for details.
## OPTIONS
These options should be placed before the subcommand name.
Individual subcommands have their own options.
**--command-timeout** _duration_
Timeout for the command execution.
@@ -101,6 +104,7 @@ Print the version number
| ----------------------------------------- | ------------------------------------------------------------------------------ |
| [skopeo-copy(1)](skopeo-copy.1.md) | Copy an image (manifest, filesystem layers, signatures) from one location to another. |
| [skopeo-delete(1)](skopeo-delete.1.md) | Mark the _image-name_ for later deletion by the registry's garbage collector. |
| [skopeo-generate-sigstore-key(1)](skopeo-generate-sigstore-key.1.md) | Generate a sigstore public/private key pair. |
| [skopeo-inspect(1)](skopeo-inspect.1.md) | Return low-level information about _image-name_ in a registry. |
| [skopeo-list-tags(1)](skopeo-list-tags.1.md) | List image names in a transport-specific collection of images.|
| [skopeo-login(1)](skopeo-login.1.md) | Login to a container registry. |
@@ -113,11 +117,11 @@ Print the version number
## FILES
**/etc/containers/policy.json**
Default trust policy file, if **--policy** is not specified.
The policy format is documented in [containers-policy.json(5)](https://github.com/containers/image/blob/master/docs/containers-policy.json.5.md) .
The policy format is documented in [containers-policy.json(5)](https://github.com/containers/image/blob/main/docs/containers-policy.json.5.md) .
**/etc/containers/registries.d**
Default directory containing registry configuration, if **--registries.d** is not specified.
The contents of this directory are documented in [containers-policy.json(5)](https://github.com/containers/image/blob/master/docs/containers-policy.json.5.md).
The contents of this directory are documented in [containers-policy.json(5)](https://github.com/containers/image/blob/main/docs/containers-policy.json.5.md).
## SEE ALSO
skopeo-login(1), docker-login(1), containers-auth.json(5), containers-storage.conf(5), containers-policy.json(5), containers-transports(5)

go.mod

@@ -1,100 +1,136 @@
module github.com/containers/skopeo
go 1.17
go 1.18
require (
github.com/containers/common v0.50.1
github.com/containers/image/v5 v5.23.0
github.com/containers/ocicrypt v1.1.5
github.com/containers/storage v1.43.0
github.com/containers/common v0.52.0
github.com/containers/image/v5 v5.25.0
github.com/containers/ocicrypt v1.1.7
github.com/containers/storage v1.46.1
github.com/docker/distribution v2.8.1+incompatible
github.com/opencontainers/go-digest v1.0.0
github.com/opencontainers/image-spec v1.1.0-rc1
github.com/opencontainers/image-spec v1.1.0-rc2.0.20221005185240-3a7f492d3f1b
github.com/opencontainers/image-tools v1.0.0-rc3
github.com/sirupsen/logrus v1.9.0
github.com/spf13/cobra v1.5.0
github.com/spf13/cobra v1.7.0
github.com/spf13/pflag v1.0.5
github.com/stretchr/testify v1.8.0
github.com/stretchr/testify v1.8.2
github.com/syndtr/gocapability v0.0.0-20200815063812-42c35b437635
golang.org/x/term v0.0.0-20220526004731-065cf7ba2467
gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c
gopkg.in/yaml.v2 v2.4.0
golang.org/x/exp v0.0.0-20230321023759-10a507213a29
golang.org/x/term v0.7.0
gopkg.in/yaml.v3 v3.0.1
)
require (
github.com/BurntSushi/toml v1.2.0 // indirect
github.com/Microsoft/go-winio v0.5.2 // indirect
github.com/Microsoft/hcsshim v0.9.4 // indirect
github.com/BurntSushi/toml v1.2.1 // indirect
github.com/Microsoft/go-winio v0.6.0 // indirect
github.com/Microsoft/hcsshim v0.10.0-rc.7 // indirect
github.com/VividCortex/ewma v1.2.0 // indirect
github.com/acarl005/stripansi v0.0.0-20180116102854-5a71ef0e047d // indirect
github.com/containerd/cgroups v1.0.4 // indirect
github.com/containerd/stargz-snapshotter/estargz v0.12.0 // indirect
github.com/containers/libtrust v0.0.0-20200511145503-9c3a6c22cd9a // indirect
github.com/asaskevich/govalidator v0.0.0-20230301143203-a9d515a09cc2 // indirect
github.com/containerd/cgroups v1.1.0 // indirect
github.com/containerd/stargz-snapshotter/estargz v0.14.3 // indirect
github.com/containers/libtrust v0.0.0-20230121012942-c1716e8a8d01 // indirect
github.com/coreos/go-oidc/v3 v3.5.0 // indirect
github.com/cyberphone/json-canonicalization v0.0.0-20220623050100-57a0ce2678a7 // indirect
github.com/cyphar/filepath-securejoin v0.2.3 // indirect
github.com/davecgh/go-spew v1.1.1 // indirect
github.com/docker/distribution v2.8.1+incompatible // indirect
github.com/docker/docker v20.10.18+incompatible // indirect
github.com/docker/docker v23.0.3+incompatible // indirect
github.com/docker/docker-credential-helpers v0.7.0 // indirect
github.com/docker/go-connections v0.4.0 // indirect
github.com/docker/go-units v0.5.0 // indirect
github.com/dsnet/compress v0.0.2-0.20210315054119-f66993602bf5 // indirect
github.com/ghodss/yaml v1.0.0 // indirect
github.com/go-jose/go-jose/v3 v3.0.0 // indirect
github.com/go-logr/logr v1.2.3 // indirect
github.com/go-logr/stdr v1.2.2 // indirect
github.com/go-openapi/analysis v0.21.4 // indirect
github.com/go-openapi/errors v0.20.3 // indirect
github.com/go-openapi/jsonpointer v0.19.5 // indirect
github.com/go-openapi/jsonreference v0.20.0 // indirect
github.com/go-openapi/loads v0.21.2 // indirect
github.com/go-openapi/runtime v0.25.0 // indirect
github.com/go-openapi/spec v0.20.8 // indirect
github.com/go-openapi/strfmt v0.21.7 // indirect
github.com/go-openapi/swag v0.22.3 // indirect
github.com/go-openapi/validate v0.22.1 // indirect
github.com/go-playground/locales v0.14.1 // indirect
github.com/go-playground/universal-translator v0.18.1 // indirect
github.com/go-playground/validator/v10 v10.12.0 // indirect
github.com/gogo/protobuf v1.3.2 // indirect
github.com/golang/groupcache v0.0.0-20210331224755-41bb18bfe9da // indirect
github.com/golang/protobuf v1.5.2 // indirect
github.com/google/go-containerregistry v0.11.0 // indirect
github.com/golang/protobuf v1.5.3 // indirect
github.com/google/go-containerregistry v0.13.0 // indirect
github.com/google/go-intervals v0.0.2 // indirect
github.com/google/trillian v1.5.1 // indirect
github.com/google/uuid v1.3.0 // indirect
github.com/gorilla/mux v1.8.0 // indirect
github.com/hashicorp/errwrap v1.1.0 // indirect
github.com/hashicorp/go-cleanhttp v0.5.2 // indirect
github.com/hashicorp/go-multierror v1.1.1 // indirect
github.com/honeycombio/libhoney-go v1.15.8 // indirect
github.com/imdario/mergo v0.3.13 // indirect
github.com/inconshreveable/mousetrap v1.0.0 // indirect
github.com/hashicorp/go-retryablehttp v0.7.2 // indirect
github.com/imdario/mergo v0.3.15 // indirect
github.com/inconshreveable/mousetrap v1.1.0 // indirect
github.com/josharian/intern v1.0.0 // indirect
github.com/json-iterator/go v1.1.12 // indirect
github.com/klauspost/compress v1.15.11 // indirect
github.com/klauspost/compress v1.16.4 // indirect
github.com/klauspost/pgzip v1.2.6-0.20220930104621-17e8dac29df8 // indirect
github.com/kr/pretty v0.2.1 // indirect
github.com/kr/text v0.2.0 // indirect
github.com/letsencrypt/boulder v0.0.0-20220723181115-27de4befb95e // indirect
github.com/mattn/go-runewidth v0.0.13 // indirect
github.com/leodido/go-urn v1.2.2 // indirect
github.com/letsencrypt/boulder v0.0.0-20230213213521-fdfea0d469b6 // indirect
github.com/mailru/easyjson v0.7.7 // indirect
github.com/mattn/go-runewidth v0.0.14 // indirect
github.com/mattn/go-shellwords v1.0.12 // indirect
github.com/miekg/pkcs11 v1.1.1 // indirect
github.com/mistifyio/go-zfs/v3 v3.0.0 // indirect
github.com/mitchellh/mapstructure v1.5.0 // indirect
github.com/moby/sys/mountinfo v0.6.2 // indirect
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd // indirect
github.com/modern-go/reflect2 v1.0.2 // indirect
github.com/opencontainers/runc v1.1.4 // indirect
github.com/opencontainers/runtime-spec v1.0.3-0.20210326190908-1c3f411f0417 // indirect
github.com/opencontainers/selinux v1.10.2 // indirect
github.com/oklog/ulid v1.3.1 // indirect
github.com/opencontainers/runc v1.1.5 // indirect
github.com/opencontainers/runtime-spec v1.1.0-rc.1 // indirect
github.com/opencontainers/selinux v1.11.0 // indirect
github.com/opentracing/opentracing-go v1.2.0 // indirect
github.com/ostreedev/ostree-go v0.0.0-20210805093236-719684c64e4f // indirect
github.com/pkg/errors v0.9.1 // indirect
github.com/pmezard/go-difflib v1.0.0 // indirect
github.com/proglottis/gpgme v0.1.3 // indirect
github.com/rivo/uniseg v0.2.0 // indirect
github.com/rivo/uniseg v0.4.4 // indirect
github.com/russross/blackfriday v2.0.0+incompatible // indirect
github.com/sigstore/sigstore v1.4.2 // indirect
github.com/segmentio/ksuid v1.0.4 // indirect
github.com/sigstore/fulcio v1.2.0 // indirect
github.com/sigstore/rekor v1.1.0 // indirect
github.com/sigstore/sigstore v1.6.0 // indirect
github.com/skratchdot/open-golang v0.0.0-20200116055534-eef842397966 // indirect
github.com/stefanberger/go-pkcs11uri v0.0.0-20201008174630-78d3cae3a980 // indirect
github.com/sylabs/sif/v2 v2.8.0 // indirect
github.com/tchap/go-patricia v2.3.0+incompatible // indirect
github.com/theupdateframework/go-tuf v0.5.1 // indirect
github.com/sylabs/sif/v2 v2.11.1 // indirect
github.com/tchap/go-patricia/v2 v2.3.1 // indirect
github.com/theupdateframework/go-tuf v0.5.2 // indirect
github.com/titanous/rocacheck v0.0.0-20171023193734-afe73141d399 // indirect
github.com/ulikunitz/xz v0.5.10 // indirect
github.com/vbatts/tar-split v0.11.2 // indirect
github.com/vbauerster/mpb/v7 v7.5.3 // indirect
github.com/ulikunitz/xz v0.5.11 // indirect
github.com/vbatts/tar-split v0.11.3 // indirect
github.com/vbauerster/mpb/v8 v8.3.0 // indirect
github.com/xeipuuv/gojsonpointer v0.0.0-20190905194746-02993c407bfb // indirect
github.com/xeipuuv/gojsonreference v0.0.0-20180127040603-bd5ef7bd5415 // indirect
github.com/xeipuuv/gojsonschema v1.2.0 // indirect
go.etcd.io/bbolt v1.3.6 // indirect
go.etcd.io/bbolt v1.3.7 // indirect
go.mongodb.org/mongo-driver v1.11.3 // indirect
go.mozilla.org/pkcs7 v0.0.0-20210826202110-33d05740a352 // indirect
go.opencensus.io v0.23.0 // indirect
golang.org/x/crypto v0.0.0-20220919173607-35f4265a4bc0 // indirect
golang.org/x/net v0.0.0-20220909164309-bea034e7d591 // indirect
golang.org/x/sync v0.0.0-20220722155255-886fb9371eb4 // indirect
golang.org/x/sys v0.0.0-20220919091848-fb04ddd9f9c8 // indirect
golang.org/x/text v0.3.7 // indirect
google.golang.org/genproto v0.0.0-20220720214146-176da50484ac // indirect
google.golang.org/grpc v1.48.0 // indirect
google.golang.org/protobuf v1.28.1 // indirect
go.opencensus.io v0.24.0 // indirect
go.opentelemetry.io/otel v1.13.0 // indirect
go.opentelemetry.io/otel/trace v1.13.0 // indirect
golang.org/x/crypto v0.8.0 // indirect
golang.org/x/mod v0.9.0 // indirect
golang.org/x/net v0.9.0 // indirect
golang.org/x/oauth2 v0.6.0 // indirect
golang.org/x/sync v0.1.0 // indirect
golang.org/x/sys v0.7.0 // indirect
golang.org/x/text v0.9.0 // indirect
golang.org/x/tools v0.7.0 // indirect
google.golang.org/appengine v1.6.7 // indirect
google.golang.org/genproto v0.0.0-20230306155012-7f2fa6fef1f4 // indirect
google.golang.org/grpc v1.54.0 // indirect
google.golang.org/protobuf v1.30.0 // indirect
gopkg.in/go-jose/go-jose.v2 v2.6.1 // indirect
gopkg.in/square/go-jose.v2 v2.6.0 // indirect
gopkg.in/yaml.v3 v3.0.1 // indirect
gopkg.in/yaml.v2 v2.4.0 // indirect
)

go.sum

File diff suppressed because it is too large.

@@ -1,92 +0,0 @@
#!/usr/bin/env bash
set -e
# This script builds various binary from a checkout of the skopeo
# source code. DO NOT CALL THIS SCRIPT DIRECTLY.
#
# Requirements:
# - The current directory should be a checkout of the skopeo source code
# (https://github.com/containers/skopeo). Whatever version is checked out
# will be built.
# - The script is intended to be run inside the container specified
# in the output of hack/get_fqin.sh
# - The right way to call this script is to invoke "make" from
# your checkout of the skopeo repository.
# the Makefile will do a "docker build -t skopeo ." and then
# "docker run hack/make.sh" in the resulting image.
#
set -o pipefail
export SKOPEO_PKG='github.com/containers/skopeo'
export SCRIPTDIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
export MAKEDIR="$SCRIPTDIR/make"
# Set this to 1 to enable installation/modification of environment/services
export SKOPEO_CONTAINER_TESTS=${SKOPEO_CONTAINER_TESTS:-0}
if [[ "$SKOPEO_CONTAINER_TESTS" == "0" ]] && [[ "$CI" != "true" ]]; then
(
echo "***************************************************************"
echo "WARNING: Executing tests directly on the local development"
echo " host is highly discouraged. Many important items"
echo " will be skipped. For manual execution, please utilize"
echo " the Makefile targets WITHOUT the '-local' suffix."
echo "***************************************************************"
) > /dev/stderr
sleep 5
fi
echo
# List of bundles to create when no argument is passed
# TODO(runcom): these are the one left from Docker...for now
# test-unit
# validate-dco
# cover
DEFAULT_BUNDLES=(
validate-gofmt
validate-lint
validate-vet
validate-git-marks
test-integration
)
# Go module support: set `-mod=vendor` to use the vendored sources
# See also the top-level Makefile.
mod_vendor=
if go help mod >/dev/null 2>&1; then
export GO111MODULE=on
mod_vendor='-mod=vendor'
fi
go_test_dir() {
dir=$1
(
echo '+ go test' $mod_vendor $TESTFLAGS ${BUILDTAGS:+-tags "$BUILDTAGS"} "${SKOPEO_PKG}${dir#.}"
cd "$dir"
export DEST="$ABS_DEST" # we're in a subshell, so this is safe -- our integration-cli tests need DEST, and "cd" screws it up
go test $mod_vendor $TESTFLAGS ${BUILDTAGS:+-tags "$BUILDTAGS"}
)
}
bundle() {
local bundle="$1"; shift
echo "---> Making bundle: $(basename "$bundle")"
source "$SCRIPTDIR/make/$bundle" "$@"
}
main() {
if [ $# -lt 1 ]; then
bundles=(${DEFAULT_BUNDLES[@]})
else
bundles=($@)
fi
for bundle in ${bundles[@]}; do
bundle "$bundle"
echo
done
}
main "$@"


@@ -1,31 +0,0 @@
#!/bin/bash
if [ -z "$VALIDATE_UPSTREAM" ]; then
# this is kind of an expensive check, so let's not do this twice if we
# are running more than one validate bundlescript
VALIDATE_REPO='https://github.com/containers/skopeo.git'
VALIDATE_BRANCH='main'
if [ "$TRAVIS" = 'true' -a "$TRAVIS_PULL_REQUEST" != 'false' ]; then
VALIDATE_REPO="https://github.com/${TRAVIS_REPO_SLUG}.git"
VALIDATE_BRANCH="${TRAVIS_BRANCH}"
fi
VALIDATE_HEAD="$(git rev-parse --verify HEAD)"
git fetch -q "$VALIDATE_REPO" "refs/heads/$VALIDATE_BRANCH"
VALIDATE_UPSTREAM="$(git rev-parse --verify FETCH_HEAD)"
VALIDATE_COMMIT_LOG="$VALIDATE_UPSTREAM..$VALIDATE_HEAD"
VALIDATE_COMMIT_DIFF="$VALIDATE_UPSTREAM...$VALIDATE_HEAD"
validate_diff() {
git diff "$VALIDATE_UPSTREAM" "$@"
}
validate_log() {
if [ "$VALIDATE_UPSTREAM" != "$VALIDATE_HEAD" ]; then
git log "$VALIDATE_COMMIT_LOG" "$@"
fi
}
fi


@@ -1,12 +0,0 @@
#!/bin/bash
set -e
bundle_test_integration() {
go_test_dir ./integration
}
# subshell so that we can export PATH without breaking other things
(
make PREFIX=/usr install
bundle_test_integration
) 2>&1


@@ -1,24 +0,0 @@
#!/bin/bash
set -e
# These tests can run in/outside of a container. However,
# not all storage drivers are supported in a container
# environment. Detect this and setup storage when
# running in a container.
if ((SKOPEO_CONTAINER_TESTS)) && [[ -r /etc/containers/storage.conf ]]; then
sed -i \
-e 's/^driver\s*=.*/driver = "vfs"/' \
-e 's/^mountopt/#mountopt/' \
/etc/containers/storage.conf
elif ((SKOPEO_CONTAINER_TESTS)); then
cat >> /etc/containers/storage.conf << EOF
[storage]
driver = "vfs"
EOF
fi
# Build skopeo, install into /usr/bin
make PREFIX=/usr install
# Run tests
SKOPEO_BINARY=/usr/bin/skopeo bats --tap systemtest


@@ -1,44 +0,0 @@
#!/usr/bin/env bash
source "$(dirname "$BASH_SOURCE")/.validate"
# folders=$(find * -type d | egrep -v '^Godeps|bundles|.git')
IFS=$'\n'
files=( $(validate_diff --diff-filter=ACMR --name-only -- '*' | grep -v '^vendor/' || true) )
unset IFS
badFiles=()
for f in "${files[@]}"; do
if [ $(grep -r "^<<<<<<<" $f) ]; then
badFiles+=( "$f" )
continue
fi
if [ $(grep -r "^>>>>>>>" $f) ]; then
badFiles+=( "$f" )
continue
fi
if [ $(grep -r "^=======$" $f) ]; then
badFiles+=( "$f" )
continue
fi
set -e
done
if [ ${#badFiles[@]} -eq 0 ]; then
echo 'Congratulations! There is no conflict.'
else
{
echo "There is trace of conflict(s) in the following files :"
for f in "${badFiles[@]}"; do
echo " - $f"
done
echo
echo 'Please fix the conflict(s) commit the result.'
echo
} >&2
false
fi


@@ -1,33 +0,0 @@
#!/bin/bash
source "$(dirname "$BASH_SOURCE")/.validate"
# We will eventually get to the point where packages should be the complete list
# of subpackages, vendoring excluded, as given by:
#
IFS=$'\n'
files=( $(validate_diff --diff-filter=ACMR --name-only -- '*.go' | grep -v '^vendor/\|^integration' || true) )
unset IFS
errors=()
for f in "${files[@]}"; do
failedLint=$(golint "$f")
if [ "$failedLint" ]; then
errors+=( "$failedLint" )
fi
done
if [ ${#errors[@]} -eq 0 ]; then
echo 'Congratulations! All Go source files have been linted.'
else
{
echo "Errors from golint:"
for err in "${errors[@]}"; do
echo "$err"
done
echo
echo 'Please fix the above errors. You can test via "golint" and commit the result.'
echo
} >&2
false
fi

hack/test-integration.sh Executable file

@@ -0,0 +1,8 @@
#!/bin/bash
set -e
make PREFIX=/usr install
echo "cd ./integration;" go test $TESTFLAGS ${BUILDTAGS:+-tags "$BUILDTAGS"}
cd ./integration
go test $TESTFLAGS ${BUILDTAGS:+-tags "$BUILDTAGS"}

hack/test-system.sh Executable file

@@ -0,0 +1,44 @@
#!/bin/bash
set -e
# These tests can run in/outside of a container. However,
# not all storage drivers are supported in a container
# environment. Detect this and setup storage when
# running in a container.
#
# Paradoxically (FIXME: clean this up), SKOPEO_CONTAINER_TESTS is set
# both inside a container and without a container (in a CI VM); it actually means
# "it is safe to desctructively modify the system for tests".
#
# On a CI VM, we can just use Podman as it is already configured; the changes below,
# to use VFS, are necessary only inside a container, because overlay-inside-overlay
# does not work. So, make these changes conditional on both
# SKOPEO_CONTAINER_TESTS (for acceptability to do destructive modification) and !CI
# (for necessity to adjust for in-container operation)
if ((SKOPEO_CONTAINER_TESTS)) && [[ "$CI" != true ]]; then
if [[ -r /etc/containers/storage.conf ]]; then
echo "MODIFYING existing storage.conf"
sed -i \
-e 's/^driver\s*=.*/driver = "vfs"/' \
-e 's/^mountopt/#mountopt/' \
/etc/containers/storage.conf
else
echo "CREATING NEW storage.conf"
cat >> /etc/containers/storage.conf << EOF
[storage]
driver = "vfs"
runroot = "/run/containers/storage"
graphroot = "/var/lib/containers/storage"
EOF
fi
# The logic of finding the relevant storage.conf file is convoluted
# and in effect differs between Skopeo and Podman, at least in some versions;
# explicitly point at the file we want to use to hopefully avoid that.
export CONTAINERS_STORAGE_CONF=/etc/containers/storage.conf
fi
# Build skopeo, install into /usr/bin
make PREFIX=/usr install
# Run tests
SKOPEO_BINARY=/usr/bin/skopeo bats --tap systemtest

hack/validate-git-marks.sh Executable file

@@ -0,0 +1,30 @@
#!/usr/bin/env bash
IFS=$'\n'
files=( $(git ls-tree -r HEAD --name-only | grep -v '^vendor/' || true) )
unset IFS
badFiles=()
for f in "${files[@]}"; do
if [ $(grep -r "^\(<<<<<<<\|>>>>>>>\|^=======$\)" $f) ]; then
badFiles+=( "$f" )
continue
fi
set -e
done
if [ ${#badFiles[@]} -eq 0 ]; then
echo 'Congratulations! There is no conflict.'
else
{
echo "There is trace of conflict(s) in the following files :"
for f in "${badFiles[@]}"; do
echo " - $f"
done
echo
echo 'Please fix the conflict(s) commit the result.'
echo
} >&2
exit 1
fi


@@ -1,9 +1,7 @@
#!/bin/bash
source "$(dirname "$BASH_SOURCE")/.validate"
IFS=$'\n'
files=( $(validate_diff --diff-filter=ACMR --name-only -- '*.go' | grep -v '^vendor/' || true) )
files=( $(find . -name '*.go' | grep -v '^./vendor/' | sort || true) )
unset IFS
badFiles=()
@@ -25,5 +23,5 @@ else
echo 'Please reformat the above files using "gofmt -s -w" and commit the result.'
echo
} >&2
false
exit 1
fi

hack/validate-lint.sh Executable file

@@ -0,0 +1,16 @@
#!/bin/bash
errors=$($GOBIN/golangci-lint run --build-tags "${BUILDTAGS}" 2>&1)
if [ -z "$errors" ]; then
echo 'Congratulations! All Go source files have been linted.'
else
{
echo "Errors from golangci-lint:"
echo "$errors"
echo
echo 'Please fix the above errors. You can test via "golangci-lint" and commit the result.'
echo
} >&2
exit 1
fi


@@ -1,6 +1,6 @@
#!/bin/bash
errors=$(go vet -tags="${BUILDTAGS}" $mod_vendor $(go list $mod_vendor -e ./...))
errors=$(go vet -tags="${BUILDTAGS}" ./... 2>&1)
if [ -z "$errors" ]; then
echo 'Congratulations! All Go source files have been vetted.'
@@ -12,5 +12,5 @@ else
echo 'Please fix the above errors. You can test via "go vet" and commit the result.'
echo
} >&2
false
exit 1
fi

hack/warn-destructive-tests.sh Executable file

@@ -0,0 +1,17 @@
#!/usr/bin/env bash
set -e
# Set this to 1 to enable installation/modification of environment/services
export SKOPEO_CONTAINER_TESTS=${SKOPEO_CONTAINER_TESTS:-0}
if [[ "$SKOPEO_CONTAINER_TESTS" == "0" ]] && [[ "$CI" != "true" ]]; then
(
echo "***************************************************************"
echo "WARNING: Executing tests directly on the local development"
echo " host is highly discouraged. Many important items"
echo " will be skipped. For manual execution, please utilize"
echo " the Makefile targets WITHOUT the '-local' suffix."
echo "***************************************************************"
) > /dev/stderr
sleep 5
fi


@@ -1,34 +1,34 @@
package main
import (
"gopkg.in/check.v1"
)
const blockedRegistriesConf = "./fixtures/blocked-registries.conf"
const blockedErrorRegex = `.*registry registry-blocked.com is blocked in .*`
func (s *SkopeoSuite) TestCopyBlockedSource(c *check.C) {
assertSkopeoFails(c, blockedErrorRegex,
func (s *skopeoSuite) TestCopyBlockedSource() {
t := s.T()
assertSkopeoFails(t, blockedErrorRegex,
"--registries-conf", blockedRegistriesConf, "copy",
"docker://registry-blocked.com/image:test",
"docker://registry-unblocked.com/image:test")
}
func (s *SkopeoSuite) TestCopyBlockedDestination(c *check.C) {
assertSkopeoFails(c, blockedErrorRegex,
func (s *skopeoSuite) TestCopyBlockedDestination() {
t := s.T()
assertSkopeoFails(t, blockedErrorRegex,
"--registries-conf", blockedRegistriesConf, "copy",
"docker://registry-unblocked.com/image:test",
"docker://registry-blocked.com/image:test")
}
func (s *SkopeoSuite) TestInspectBlocked(c *check.C) {
assertSkopeoFails(c, blockedErrorRegex,
func (s *skopeoSuite) TestInspectBlocked() {
t := s.T()
assertSkopeoFails(t, blockedErrorRegex,
"--registries-conf", blockedRegistriesConf, "inspect",
"docker://registry-blocked.com/image:test")
}
func (s *SkopeoSuite) TestDeleteBlocked(c *check.C) {
assertSkopeoFails(c, blockedErrorRegex,
func (s *skopeoSuite) TestDeleteBlocked() {
t := s.T()
assertSkopeoFails(t, blockedErrorRegex,
"--registries-conf", blockedRegistriesConf, "delete",
"docker://registry-blocked.com/image:test")
}


@@ -6,7 +6,9 @@ import (
"testing"
"github.com/containers/skopeo/version"
"gopkg.in/check.v1"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"github.com/stretchr/testify/suite"
)
const (
@@ -14,100 +16,104 @@ const (
privateRegistryURL1 = "127.0.0.1:5001"
)
func Test(t *testing.T) {
check.TestingT(t)
func TestSkopeo(t *testing.T) {
suite.Run(t, &skopeoSuite{})
}
func init() {
check.Suite(&SkopeoSuite{})
}
type SkopeoSuite struct {
type skopeoSuite struct {
suite.Suite
regV2 *testRegistryV2
regV2WithAuth *testRegistryV2
}
func (s *SkopeoSuite) SetUpSuite(c *check.C) {
var _ = suite.SetupAllSuite(&skopeoSuite{})
var _ = suite.TearDownAllSuite(&skopeoSuite{})
func (s *skopeoSuite) SetupSuite() {
t := s.T()
_, err := exec.LookPath(skopeoBinary)
c.Assert(err, check.IsNil)
s.regV2 = setupRegistryV2At(c, privateRegistryURL0, false, false)
s.regV2WithAuth = setupRegistryV2At(c, privateRegistryURL1, true, false)
require.NoError(t, err)
s.regV2 = setupRegistryV2At(t, privateRegistryURL0, false, false)
s.regV2WithAuth = setupRegistryV2At(t, privateRegistryURL1, true, false)
}
func (s *SkopeoSuite) TearDownSuite(c *check.C) {
func (s *skopeoSuite) TearDownSuite() {
if s.regV2 != nil {
s.regV2.tearDown(c)
s.regV2.tearDown()
}
if s.regV2WithAuth != nil {
//cmd := exec.Command("docker", "logout", s.regV2WithAuth)
//c.Assert(cmd.Run(), check.IsNil)
s.regV2WithAuth.tearDown(c)
// cmd := exec.Command("docker", "logout", s.regV2WithAuth)
// require.NoError(t, cmd.Run())
s.regV2WithAuth.tearDown()
}
}
// TODO like dockerCmd but much easier, just out,err
//func skopeoCmd()
func (s *SkopeoSuite) TestVersion(c *check.C) {
wanted := fmt.Sprintf(".*%s version %s.*", skopeoBinary, version.Version)
assertSkopeoSucceeds(c, wanted, "--version")
func (s *skopeoSuite) TestVersion() {
t := s.T()
assertSkopeoSucceeds(t, fmt.Sprintf(".*%s version %s.*", skopeoBinary, version.Version),
"--version")
}
func (s *SkopeoSuite) TestCanAuthToPrivateRegistryV2WithoutDockerCfg(c *check.C) {
wanted := ".*manifest unknown: manifest unknown.*"
assertSkopeoFails(c, wanted, "--tls-verify=false", "inspect", "--creds="+s.regV2WithAuth.username+":"+s.regV2WithAuth.password, fmt.Sprintf("docker://%s/busybox:latest", s.regV2WithAuth.url))
func (s *skopeoSuite) TestCanAuthToPrivateRegistryV2WithoutDockerCfg() {
t := s.T()
assertSkopeoFails(t, ".*manifest unknown.*",
"--tls-verify=false", "inspect", "--creds="+s.regV2WithAuth.username+":"+s.regV2WithAuth.password, fmt.Sprintf("docker://%s/busybox:latest", s.regV2WithAuth.url))
}
func (s *SkopeoSuite) TestNeedAuthToPrivateRegistryV2WithoutDockerCfg(c *check.C) {
wanted := ".*unauthorized: authentication required.*"
assertSkopeoFails(c, wanted, "--tls-verify=false", "inspect", fmt.Sprintf("docker://%s/busybox:latest", s.regV2WithAuth.url))
func (s *skopeoSuite) TestNeedAuthToPrivateRegistryV2WithoutDockerCfg() {
t := s.T()
assertSkopeoFails(t, ".*authentication required.*",
"--tls-verify=false", "inspect", fmt.Sprintf("docker://%s/busybox:latest", s.regV2WithAuth.url))
}
func (s *SkopeoSuite) TestCertDirInsteadOfCertPath(c *check.C) {
wanted := ".*unknown flag: --cert-path.*"
assertSkopeoFails(c, wanted, "--tls-verify=false", "inspect", fmt.Sprintf("docker://%s/busybox:latest", s.regV2WithAuth.url), "--cert-path=/")
wanted = ".*unauthorized: authentication required.*"
assertSkopeoFails(c, wanted, "--tls-verify=false", "inspect", fmt.Sprintf("docker://%s/busybox:latest", s.regV2WithAuth.url), "--cert-dir=/etc/docker/certs.d/")
func (s *skopeoSuite) TestCertDirInsteadOfCertPath() {
t := s.T()
assertSkopeoFails(t, ".*unknown flag: --cert-path.*",
"--tls-verify=false", "inspect", fmt.Sprintf("docker://%s/busybox:latest", s.regV2WithAuth.url), "--cert-path=/")
assertSkopeoFails(t, ".*authentication required.*",
"--tls-verify=false", "inspect", fmt.Sprintf("docker://%s/busybox:latest", s.regV2WithAuth.url), "--cert-dir=/etc/docker/certs.d/")
}
// TODO(runcom): as soon as we can push to registries ensure you can inspect here
// not just get image not found :)
func (s *SkopeoSuite) TestNoNeedAuthToPrivateRegistryV2ImageNotFound(c *check.C) {
func (s *skopeoSuite) TestNoNeedAuthToPrivateRegistryV2ImageNotFound() {
t := s.T()
out, err := exec.Command(skopeoBinary, "--tls-verify=false", "inspect", fmt.Sprintf("docker://%s/busybox:latest", s.regV2.url)).CombinedOutput()
c.Assert(err, check.NotNil, check.Commentf(string(out)))
wanted := ".*manifest unknown.*"
c.Assert(string(out), check.Matches, "(?s)"+wanted) // (?s) : '.' will also match newlines
wanted = ".*unauthorized: authentication required.*"
c.Assert(string(out), check.Not(check.Matches), "(?s)"+wanted) // (?s) : '.' will also match newlines
assert.Error(t, err, "%s", string(out))
assert.Regexp(t, "(?s).*manifest unknown.*", string(out)) // (?s) : '.' will also match newlines
assert.NotRegexp(t, "(?s).*unauthorized: authentication required.*", string(out)) // (?s) : '.' will also match newlines
}
func (s *SkopeoSuite) TestInspectFailsWhenReferenceIsInvalid(c *check.C) {
assertSkopeoFails(c, `.*Invalid image name.*`, "inspect", "unknown")
func (s *skopeoSuite) TestInspectFailsWhenReferenceIsInvalid() {
t := s.T()
assertSkopeoFails(t, `.*Invalid image name.*`, "inspect", "unknown")
}
func (s *SkopeoSuite) TestLoginLogout(c *check.C) {
wanted := "^Login Succeeded!\n$"
assertSkopeoSucceeds(c, wanted, "login", "--tls-verify=false", "--username="+s.regV2WithAuth.username, "--password="+s.regV2WithAuth.password, s.regV2WithAuth.url)
func (s *skopeoSuite) TestLoginLogout() {
t := s.T()
assertSkopeoSucceeds(t, "^Login Succeeded!\n$",
"login", "--tls-verify=false", "--username="+s.regV2WithAuth.username, "--password="+s.regV2WithAuth.password, s.regV2WithAuth.url)
// test --get-login returns username
wanted = fmt.Sprintf("^%s\n$", s.regV2WithAuth.username)
assertSkopeoSucceeds(c, wanted, "login", "--tls-verify=false", "--get-login", s.regV2WithAuth.url)
assertSkopeoSucceeds(t, fmt.Sprintf("^%s\n$", s.regV2WithAuth.username),
"login", "--tls-verify=false", "--get-login", s.regV2WithAuth.url)
// test logout
wanted = fmt.Sprintf("^Removed login credentials for %s\n$", s.regV2WithAuth.url)
assertSkopeoSucceeds(c, wanted, "logout", s.regV2WithAuth.url)
assertSkopeoSucceeds(t, fmt.Sprintf("^Removed login credentials for %s\n$", s.regV2WithAuth.url),
"logout", s.regV2WithAuth.url)
}
func (s *SkopeoSuite) TestCopyWithLocalAuth(c *check.C) {
wanted := "^Login Succeeded!\n$"
assertSkopeoSucceeds(c, wanted, "login", "--tls-verify=false", "--username="+s.regV2WithAuth.username, "--password="+s.regV2WithAuth.password, s.regV2WithAuth.url)
func (s *skopeoSuite) TestCopyWithLocalAuth() {
t := s.T()
assertSkopeoSucceeds(t, "^Login Succeeded!\n$",
"login", "--tls-verify=false", "--username="+s.regV2WithAuth.username, "--password="+s.regV2WithAuth.password, s.regV2WithAuth.url)
// copy to private registry using local authentication
imageName := fmt.Sprintf("docker://%s/busybox:mine", s.regV2WithAuth.url)
assertSkopeoSucceeds(c, "", "copy", "--dest-tls-verify=false", testFQIN+":latest", imageName)
assertSkopeoSucceeds(t, "", "copy", "--dest-tls-verify=false", testFQIN+":latest", imageName)
// inspect from private registry
assertSkopeoSucceeds(c, "", "inspect", "--tls-verify=false", imageName)
assertSkopeoSucceeds(t, "", "inspect", "--tls-verify=false", imageName)
// logout from the registry
wanted = fmt.Sprintf("^Removed login credentials for %s\n$", s.regV2WithAuth.url)
assertSkopeoSucceeds(c, wanted, "logout", s.regV2WithAuth.url)
assertSkopeoSucceeds(t, fmt.Sprintf("^Removed login credentials for %s\n$", s.regV2WithAuth.url),
"logout", s.regV2WithAuth.url)
// inspect from private registry should fail after logout
wanted = ".*unauthorized: authentication required.*"
assertSkopeoFails(c, wanted, "inspect", "--tls-verify=false", imageName)
assertSkopeoFails(t, ".*authentication required.*",
"inspect", "--tls-verify=false", imageName)
}
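
Taken together, the hunks above follow one mechanical pattern for moving from gopkg.in/check.v1 to testify: the gocheck registration (check.TestingT / check.Suite) becomes a plain Test function that calls suite.Run, SetUpSuite/TearDownSuite become SetupSuite/TearDownSuite without a *check.C argument, and each test method fetches *testing.T via s.T(). A minimal, self-contained sketch of that shape (the names here are illustrative, not from the repository):

package integration

import (
	"testing"

	"github.com/stretchr/testify/require"
	"github.com/stretchr/testify/suite"
)

// exampleSuite is an illustrative suite, not one of skopeo's.
type exampleSuite struct {
	suite.Suite
	workDir string
}

// Compile-time checks that the optional hooks are implemented, in the same
// form as the `var _ = suite.SetupAllSuite(...)` lines above.
var _ = suite.SetupAllSuite(&exampleSuite{})
var _ = suite.TearDownAllSuite(&exampleSuite{})

func TestExample(t *testing.T) { // replaces check.TestingT / check.Suite registration
	suite.Run(t, &exampleSuite{})
}

func (s *exampleSuite) SetupSuite() { // was SetUpSuite(c *check.C)
	s.workDir = s.T().TempDir() // was c.MkDir()
}

func (s *exampleSuite) TearDownSuite() {}

func (s *exampleSuite) TestSomething() { // was TestSomething(c *check.C)
	t := s.T()
	require.NotEmpty(t, s.workDir) // was c.Assert(...)
}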

File diff suppressed because it is too large

View File

@@ -9,10 +9,11 @@ import (
"os/exec"
"path/filepath"
"strings"
"testing"
"time"
"github.com/containers/storage/pkg/homedir"
"gopkg.in/check.v1"
"github.com/stretchr/testify/require"
)
var adminKUBECONFIG = map[string]string{
@@ -30,21 +31,21 @@ type openshiftCluster struct {
// startOpenshiftCluster creates a new openshiftCluster.
// WARNING: This affects state in users' home directory! Only run
// in isolated test environment.
func startOpenshiftCluster(c *check.C) *openshiftCluster {
func startOpenshiftCluster(t *testing.T) *openshiftCluster {
cluster := &openshiftCluster{}
cluster.workingDir = c.MkDir()
cluster.workingDir = t.TempDir()
cluster.startMaster(c)
cluster.prepareRegistryConfig(c)
cluster.startRegistry(c)
cluster.ocLoginToProject(c)
cluster.dockerLogin(c)
cluster.relaxImageSignerPermissions(c)
cluster.startMaster(t)
cluster.prepareRegistryConfig(t)
cluster.startRegistry(t)
cluster.ocLoginToProject(t)
cluster.dockerLogin(t)
cluster.relaxImageSignerPermissions(t)
return cluster
}
// clusterCmd creates an exec.Cmd in cluster.workingDir with current environment modified by environment
// clusterCmd creates an exec.Cmd in cluster.workingDir with current environment modified by environment.
func (cluster *openshiftCluster) clusterCmd(env map[string]string, name string, args ...string) *exec.Cmd {
cmd := exec.Command(name, args...)
cmd.Dir = cluster.workingDir
@@ -56,21 +57,20 @@ func (cluster *openshiftCluster) clusterCmd(env map[string]string, name string,
}
// startMaster starts the OpenShift master (etcd+API server) and waits for it to be ready, or terminates on failure.
func (cluster *openshiftCluster) startMaster(c *check.C) {
func (cluster *openshiftCluster) startMaster(t *testing.T) {
cmd := cluster.clusterCmd(nil, "openshift", "start", "master")
cluster.processes = append(cluster.processes, cmd)
stdout, err := cmd.StdoutPipe()
c.Assert(err, check.IsNil)
require.NoError(t, err)
// Send both to the same pipe. This might cause the two streams to be mixed up,
// but logging actually goes only to stderr - this primarily ensures we log any
// unexpected output to stdout.
cmd.Stderr = cmd.Stdout
err = cmd.Start()
c.Assert(err, check.IsNil)
require.NoError(t, err)
portOpen, terminatePortCheck := newPortChecker(c, 8443)
portOpen, terminatePortCheck := newPortChecker(t, 8443)
defer func() {
c.Logf("Terminating port check")
t.Logf("Terminating port check")
terminatePortCheck <- true
}()
@@ -78,12 +78,12 @@ func (cluster *openshiftCluster) startMaster(c *check.C) {
logCheckFound := make(chan bool)
go func() {
defer func() {
c.Logf("Log checker exiting")
t.Logf("Log checker exiting")
}()
scanner := bufio.NewScanner(stdout)
for scanner.Scan() {
line := scanner.Text()
c.Logf("Log line: %s", line)
t.Logf("Log line: %s", line)
if strings.Contains(line, "Started Origin Controllers") {
logCheckFound <- true
return
@@ -92,7 +92,7 @@ func (cluster *openshiftCluster) startMaster(c *check.C) {
// Note: we can block before we get here.
select {
case <-terminateLogCheck:
c.Logf("terminated")
t.Logf("terminated")
return
default:
// Do not block here and read the next line.
@@ -101,7 +101,7 @@ func (cluster *openshiftCluster) startMaster(c *check.C) {
logCheckFound <- false
}()
defer func() {
c.Logf("Terminating log check")
t.Logf("Terminating log check")
terminateLogCheck <- true
}()
@@ -110,26 +110,26 @@ func (cluster *openshiftCluster) startMaster(c *check.C) {
ctx, cancel := context.WithTimeout(context.Background(), 10*time.Minute)
defer cancel()
for !gotPortCheck || !gotLogCheck {
c.Logf("Waiting for master")
t.Logf("Waiting for master")
select {
case <-portOpen:
c.Logf("port check done")
t.Logf("port check done")
gotPortCheck = true
case found := <-logCheckFound:
c.Logf("log check done, found: %t", found)
t.Logf("log check done, found: %t", found)
if !found {
c.Fatal("log check done, success message not found")
t.Fatal("log check done, success message not found")
}
gotLogCheck = true
case <-ctx.Done():
c.Fatalf("Timed out waiting for master: %v", ctx.Err())
t.Fatalf("Timed out waiting for master: %v", ctx.Err())
}
}
c.Logf("OK, master started!")
t.Logf("OK, master started!")
}
// prepareRegistryConfig creates a registry service account and a related k8s client configuration in ${cluster.workingDir}/openshift.local.registry.
func (cluster *openshiftCluster) prepareRegistryConfig(c *check.C) {
func (cluster *openshiftCluster) prepareRegistryConfig(t *testing.T) {
// This partially mimics the objects created by (oadm registry), except that we run the
// server directly as an ordinary process instead of a pod with an implicitly attached service account.
saJSON := `{
@@ -140,93 +140,93 @@ func (cluster *openshiftCluster) prepareRegistryConfig(c *check.C) {
}
}`
cmd := cluster.clusterCmd(adminKUBECONFIG, "oc", "create", "-f", "-")
runExecCmdWithInput(c, cmd, saJSON)
runExecCmdWithInput(t, cmd, saJSON)
cmd = cluster.clusterCmd(adminKUBECONFIG, "oadm", "policy", "add-cluster-role-to-user", "system:registry", "-z", "registry")
out, err := cmd.CombinedOutput()
c.Assert(err, check.IsNil, check.Commentf("%s", string(out)))
c.Assert(string(out), check.Equals, "cluster role \"system:registry\" added: \"registry\"\n")
require.NoError(t, err, "%s", string(out))
require.Equal(t, "cluster role \"system:registry\" added: \"registry\"\n", string(out))
cmd = cluster.clusterCmd(adminKUBECONFIG, "oadm", "create-api-client-config", "--client-dir=openshift.local.registry", "--basename=openshift-registry", "--user=system:serviceaccount:default:registry")
out, err = cmd.CombinedOutput()
c.Assert(err, check.IsNil, check.Commentf("%s", string(out)))
c.Assert(string(out), check.Equals, "")
require.NoError(t, err, "%s", string(out))
require.Equal(t, "", string(out))
}
// startRegistry starts the OpenShift registry with configPart on port, waits for it to be ready, and returns the process object, or terminates on failure.
func (cluster *openshiftCluster) startRegistryProcess(c *check.C, port int, configPath string) *exec.Cmd {
func (cluster *openshiftCluster) startRegistryProcess(t *testing.T, port uint16, configPath string) *exec.Cmd {
cmd := cluster.clusterCmd(map[string]string{
"KUBECONFIG": "openshift.local.registry/openshift-registry.kubeconfig",
"DOCKER_REGISTRY_URL": fmt.Sprintf("127.0.0.1:%d", port),
}, "dockerregistry", configPath)
consumeAndLogOutputs(c, fmt.Sprintf("registry-%d", port), cmd)
consumeAndLogOutputs(t, fmt.Sprintf("registry-%d", port), cmd)
err := cmd.Start()
c.Assert(err, check.IsNil)
require.NoError(t, err)
portOpen, terminatePortCheck := newPortChecker(c, port)
portOpen, terminatePortCheck := newPortChecker(t, port)
defer func() {
terminatePortCheck <- true
}()
c.Logf("Waiting for registry to start")
t.Logf("Waiting for registry to start")
ctx, cancel := context.WithTimeout(context.Background(), 1*time.Minute)
defer cancel()
select {
case <-portOpen:
c.Logf("OK, Registry port open")
t.Logf("OK, Registry port open")
case <-ctx.Done():
c.Fatalf("Timed out waiting for registry to start: %v", ctx.Err())
t.Fatalf("Timed out waiting for registry to start: %v", ctx.Err())
}
return cmd
}
// startRegistry starts the OpenShift registry and waits for it to be ready, or terminates on failure.
func (cluster *openshiftCluster) startRegistry(c *check.C) {
func (cluster *openshiftCluster) startRegistry(t *testing.T) {
// Our “primary” registry
cluster.processes = append(cluster.processes, cluster.startRegistryProcess(c, 5000, "/atomic-registry-config.yml"))
cluster.processes = append(cluster.processes, cluster.startRegistryProcess(t, 5000, "/atomic-registry-config.yml"))
// A registry configured with acceptschema2:false
schema1Config := fileFromFixture(c, "/atomic-registry-config.yml", map[string]string{
schema1Config := fileFromFixture(t, "/atomic-registry-config.yml", map[string]string{
"addr: :5000": "addr: :5005",
"rootdirectory: /registry": "rootdirectory: /registry-schema1",
// The default configuration currently already contains acceptschema2: false
})
// Make sure the configuration contains "acceptschema2: false", because eventually it will be enabled upstream and this function will need to be updated.
configContents, err := os.ReadFile(schema1Config)
c.Assert(err, check.IsNil)
c.Assert(string(configContents), check.Matches, "(?s).*acceptschema2: false.*")
cluster.processes = append(cluster.processes, cluster.startRegistryProcess(c, 5005, schema1Config))
require.NoError(t, err)
require.Regexp(t, "(?s).*acceptschema2: false.*", string(configContents))
cluster.processes = append(cluster.processes, cluster.startRegistryProcess(t, 5005, schema1Config))
// A registry configured with acceptschema2:true
schema2Config := fileFromFixture(c, "/atomic-registry-config.yml", map[string]string{
schema2Config := fileFromFixture(t, "/atomic-registry-config.yml", map[string]string{
"addr: :5000": "addr: :5006",
"rootdirectory: /registry": "rootdirectory: /registry-schema2",
"acceptschema2: false": "acceptschema2: true",
})
cluster.processes = append(cluster.processes, cluster.startRegistryProcess(c, 5006, schema2Config))
cluster.processes = append(cluster.processes, cluster.startRegistryProcess(t, 5006, schema2Config))
}
// ocLogin runs (oc login) and (oc new-project) on the cluster, or terminates on failure.
func (cluster *openshiftCluster) ocLoginToProject(c *check.C) {
c.Logf("oc login")
func (cluster *openshiftCluster) ocLoginToProject(t *testing.T) {
t.Logf("oc login")
cmd := cluster.clusterCmd(nil, "oc", "login", "--certificate-authority=openshift.local.config/master/ca.crt", "-u", "myuser", "-p", "mypw", "https://localhost:8443")
out, err := cmd.CombinedOutput()
c.Assert(err, check.IsNil, check.Commentf("%s", out))
c.Assert(string(out), check.Matches, "(?s).*Login successful.*") // (?s) : '.' will also match newlines
require.NoError(t, err, "%s", out)
require.Regexp(t, "(?s).*Login successful.*", string(out)) // (?s) : '.' will also match newlines
outString := combinedOutputOfCommand(c, "oc", "new-project", "myns")
c.Assert(outString, check.Matches, `(?s).*Now using project "myns".*`) // (?s) : '.' will also match newlines
outString := combinedOutputOfCommand(t, "oc", "new-project", "myns")
require.Regexp(t, `(?s).*Now using project "myns".*`, outString) // (?s) : '.' will also match newlines
}
// dockerLogin simulates (docker login) to the cluster, or terminates on failure.
// We do not run (docker login) directly, because that requires a running daemon and a docker package.
func (cluster *openshiftCluster) dockerLogin(c *check.C) {
func (cluster *openshiftCluster) dockerLogin(t *testing.T) {
cluster.dockerDir = filepath.Join(homedir.Get(), ".docker")
err := os.Mkdir(cluster.dockerDir, 0700)
c.Assert(err, check.IsNil)
require.NoError(t, err)
out := combinedOutputOfCommand(c, "oc", "config", "view", "-o", "json", "-o", "jsonpath={.users[*].user.token}")
c.Logf("oc config value: %s", out)
out := combinedOutputOfCommand(t, "oc", "config", "view", "-o", "json", "-o", "jsonpath={.users[*].user.token}")
t.Logf("oc config value: %s", out)
authValue := base64.StdEncoding.EncodeToString([]byte("unused:" + out))
auths := []string{}
for _, port := range []int{5000, 5005, 5006} {
@@ -237,22 +237,22 @@ func (cluster *openshiftCluster) dockerLogin(c *check.C) {
}
configJSON := `{"auths": {` + strings.Join(auths, ",") + `}}`
err = os.WriteFile(filepath.Join(cluster.dockerDir, "config.json"), []byte(configJSON), 0600)
c.Assert(err, check.IsNil)
require.NoError(t, err)
}
// relaxImageSignerPermissions opens up the system:image-signer permissions so that
// anyone can work with signatures
// FIXME: This also allows anyone to DoS anyone else; this design is really not all
// that workable, but it is the best we can do for now.
func (cluster *openshiftCluster) relaxImageSignerPermissions(c *check.C) {
func (cluster *openshiftCluster) relaxImageSignerPermissions(t *testing.T) {
cmd := cluster.clusterCmd(adminKUBECONFIG, "oadm", "policy", "add-cluster-role-to-group", "system:image-signer", "system:authenticated")
out, err := cmd.CombinedOutput()
c.Assert(err, check.IsNil, check.Commentf("%s", string(out)))
c.Assert(string(out), check.Equals, "cluster role \"system:image-signer\" added: \"system:authenticated\"\n")
require.NoError(t, err, "%s", string(out))
require.Equal(t, "cluster role \"system:image-signer\" added: \"system:authenticated\"\n", string(out))
}
// tearDown stops the cluster services and deletes (only some!) of the state.
func (cluster *openshiftCluster) tearDown(c *check.C) {
func (cluster *openshiftCluster) tearDown(t *testing.T) {
for i := len(cluster.processes) - 1; i >= 0; i-- {
// It's undocumented what Kill() returns if the process has terminated,
// so we couldn't check just for that. This is running in a container anyway…
@@ -260,6 +260,6 @@ func (cluster *openshiftCluster) tearDown(c *check.C) {
}
if cluster.dockerDir != "" {
err := os.RemoveAll(cluster.dockerDir)
c.Assert(err, check.IsNil)
require.NoError(t, err)
}
}
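
Both startMaster and startRegistryProcess above wait on a channel returned by newPortChecker, another helper from the shared test utilities that is not shown in full on this page. The following is only a plausible sketch of the contract those calls rely on — a channel that fires once the port accepts TCP connections, plus a channel used to stop the background poller; the polling interval and internals are assumptions:

package integration

import (
	"net"
	"strconv"
	"testing"
	"time"
)

// newPortChecker (hypothetical sketch): poll a local TCP port in the background
// and signal on portOpen once it accepts connections. terminatePortCheck is
// buffered so the deferred terminate send in the callers above never blocks.
func newPortChecker(t *testing.T, port uint16) (portOpen <-chan bool, terminatePortCheck chan<- bool) {
	open := make(chan bool, 1)
	terminate := make(chan bool, 1)
	go func() {
		addr := net.JoinHostPort("127.0.0.1", strconv.Itoa(int(port)))
		for {
			select {
			case <-terminate:
				t.Logf("port check terminated")
				return
			default:
			}
			if conn, err := net.DialTimeout("tcp", addr, time.Second); err == nil {
				conn.Close()
				open <- true
				return
			}
			time.Sleep(100 * time.Millisecond) // assumed interval
		}
	}()
	return open, terminate
}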

View File

@@ -7,7 +7,8 @@ import (
"os"
"os/exec"
"gopkg.in/check.v1"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
/*
@@ -20,7 +21,7 @@ To use it, run:
to start a container, then within the container:
SKOPEO_CONTAINER_TESTS=1 PS1='nested> ' go test -tags openshift_shell -timeout=24h ./integration -v -check.v -check.vv -check.f='CopySuite.TestRunShell'
SKOPEO_CONTAINER_TESTS=1 PS1='nested> ' go test -tags openshift_shell -timeout=24h ./integration -v -run='copySuite.TestRunShell'
An example of what can be done within the container:
@@ -33,13 +34,14 @@ An example of what can be done within the container:
curl -L -v 'http://localhost:5000/v2/myns/personal/manifests/personal' --header 'Authorization: Bearer $token_from_oauth'
curl -L -v 'http://localhost:5000/extensions/v2/myns/personal/signatures/$manifest_digest' --header 'Authorization: Bearer $token_from_oauth'
*/
func (s *CopySuite) TestRunShell(c *check.C) {
func (s *copySuite) TestRunShell() {
t := s.T()
cmd := exec.Command("bash", "-i")
tty, err := os.OpenFile("/dev/tty", os.O_RDWR, 0)
c.Assert(err, check.IsNil)
require.NoError(t, err)
cmd.Stdin = tty
cmd.Stdout = tty
cmd.Stderr = tty
err = cmd.Run()
c.Assert(err, check.IsNil)
assert.NoError(t, err)
}

View File

@@ -7,6 +7,6 @@ import (
"os/exec"
)
// cmdLifecycleToParentIfPossible tries to exit if the parent process exits (only works on Linux)
// cmdLifecycleToParentIfPossible tries to exit if the parent process exits (only works on Linux).
func cmdLifecycleToParentIfPossible(c *exec.Cmd) {
}

View File

@@ -9,16 +9,21 @@ import (
"os/exec"
"strings"
"syscall"
"testing"
"time"
"gopkg.in/check.v1"
"github.com/containers/image/v5/manifest"
imgspecv1 "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"github.com/stretchr/testify/suite"
)
// This image is known to be x86_64 only right now
const knownNotManifestListedImage_x8664 = "docker://quay.io/coreos/11bot"
const knownNotManifestListedImageX8664 = "docker://quay.io/coreos/11bot"
// knownNotExtantImage would be very surprising if it did exist
const knownNotExtantImage = "docker://quay.io/centos/centos:opensusewindowsubuntu"
const expectedProxySemverMajor = "0.2"
@@ -29,7 +34,7 @@ type request struct {
// Method is the name of the function
Method string `json:"method"`
// Args is the arguments (parsed inside the function)
Args []interface{} `json:"args"`
Args []any `json:"args"`
}
// reply is copied from proxy.go
@@ -37,7 +42,7 @@ type reply struct {
// Success is true if and only if the call succeeded.
Success bool `json:"success"`
// Value is an arbitrary value (or values, as array/map) returned from the call.
Value interface{} `json:"value"`
Value any `json:"value"`
// PipeID is an index into open pipes, and should be passed to FinishPipe
PipeID uint32 `json:"pipeid"`
// Error should be non-empty if Success == false
@@ -57,7 +62,7 @@ type pipefd struct {
fd *os.File
}
func (p *proxy) call(method string, args []interface{}) (rval interface{}, fd *pipefd, err error) {
func (p *proxy) call(method string, args []any) (rval any, fd *pipefd, err error) {
req := request{
Method: method,
Args: args,
@@ -78,7 +83,7 @@ func (p *proxy) call(method string, args []interface{}) (rval interface{}, fd *p
replybuf := make([]byte, maxMsgSize)
n, oobn, _, _, err := p.c.ReadMsgUnix(replybuf, oob)
if err != nil {
err = fmt.Errorf("reading reply: %v", err)
err = fmt.Errorf("reading reply: %w", err)
return
}
var reply reply
@@ -96,7 +101,7 @@ func (p *proxy) call(method string, args []interface{}) (rval interface{}, fd *p
var scms []syscall.SocketControlMessage
scms, err = syscall.ParseSocketControlMessage(oob[:oobn])
if err != nil {
err = fmt.Errorf("failed to parse control message: %v", err)
err = fmt.Errorf("failed to parse control message: %w", err)
return
}
if len(scms) != 1 {
@@ -106,7 +111,7 @@ func (p *proxy) call(method string, args []interface{}) (rval interface{}, fd *p
var fds []int
fds, err = syscall.ParseUnixRights(&scms[0])
if err != nil {
err = fmt.Errorf("failed to parse unix rights: %v", err)
err = fmt.Errorf("failed to parse unix rights: %w", err)
return
}
fd = &pipefd{
@@ -119,7 +124,7 @@ func (p *proxy) call(method string, args []interface{}) (rval interface{}, fd *p
return
}
func (p *proxy) callNoFd(method string, args []interface{}) (rval interface{}, err error) {
func (p *proxy) callNoFd(method string, args []any) (rval any, err error) {
var fd *pipefd
rval, fd, err = p.call(method, args)
if err != nil {
@@ -132,7 +137,7 @@ func (p *proxy) callNoFd(method string, args []interface{}) (rval interface{}, e
return rval, nil
}
func (p *proxy) callReadAllBytes(method string, args []interface{}) (rval interface{}, buf []byte, err error) {
func (p *proxy) callReadAllBytes(method string, args []any) (rval any, buf []byte, err error) {
var fd *pipefd
rval, fd, err = p.call(method, args)
if err != nil {
@@ -150,7 +155,7 @@ func (p *proxy) callReadAllBytes(method string, args []interface{}) (rval interf
err: err,
}
}()
_, err = p.callNoFd("FinishPipe", []interface{}{fd.id})
_, err = p.callNoFd("FinishPipe", []any{fd.id})
if err != nil {
return
}
@@ -211,17 +216,12 @@ func newProxy() (*proxy, error) {
return p, nil
}
func init() {
check.Suite(&ProxySuite{})
func TestProxy(t *testing.T) {
suite.Run(t, &proxySuite{})
}
type ProxySuite struct {
}
func (s *ProxySuite) SetUpSuite(c *check.C) {
}
func (s *ProxySuite) TearDownSuite(c *check.C) {
type proxySuite struct {
suite.Suite
}
type byteFetch struct {
@@ -230,7 +230,7 @@ type byteFetch struct {
}
func runTestGetManifestAndConfig(p *proxy, img string) error {
v, err := p.callNoFd("OpenImage", []interface{}{knownNotManifestListedImage_x8664})
v, err := p.callNoFd("OpenImage", []any{img})
if err != nil {
return err
}
@@ -240,8 +240,31 @@ func runTestGetManifestAndConfig(p *proxy, img string) error {
return fmt.Errorf("OpenImage return value is %T", v)
}
imgid := uint32(imgidv)
if imgid == 0 {
return fmt.Errorf("got zero from expected image")
}
_, manifestBytes, err := p.callReadAllBytes("GetManifest", []interface{}{imgid})
// Also verify the optional path
v, err = p.callNoFd("OpenImageOptional", []any{img})
if err != nil {
return err
}
imgidv, ok = v.(float64)
if !ok {
return fmt.Errorf("OpenImageOptional return value is %T", v)
}
imgid2 := uint32(imgidv)
if imgid2 == 0 {
return fmt.Errorf("got zero from expected image")
}
_, err = p.callNoFd("CloseImage", []any{imgid2})
if err != nil {
return err
}
_, manifestBytes, err := p.callReadAllBytes("GetManifest", []any{imgid})
if err != nil {
return err
}
@@ -250,7 +273,7 @@ func runTestGetManifestAndConfig(p *proxy, img string) error {
return err
}
_, configBytes, err := p.callReadAllBytes("GetFullConfig", []interface{}{imgid})
_, configBytes, err := p.callReadAllBytes("GetFullConfig", []any{imgid})
if err != nil {
return err
}
@@ -269,7 +292,7 @@ func runTestGetManifestAndConfig(p *proxy, img string) error {
}
// Also test this legacy interface
_, ctrconfigBytes, err := p.callReadAllBytes("GetConfig", []interface{}{imgid})
_, ctrconfigBytes, err := p.callReadAllBytes("GetConfig", []any{imgid})
if err != nil {
return err
}
@@ -284,7 +307,7 @@ func runTestGetManifestAndConfig(p *proxy, img string) error {
return fmt.Errorf("No CMD or ENTRYPOINT set")
}
_, err = p.callNoFd("CloseImage", []interface{}{imgid})
_, err = p.callNoFd("CloseImage", []any{imgid})
if err != nil {
return err
}
@@ -292,19 +315,43 @@ func runTestGetManifestAndConfig(p *proxy, img string) error {
return nil
}
func (s *ProxySuite) TestProxy(c *check.C) {
p, err := newProxy()
c.Assert(err, check.IsNil)
err = runTestGetManifestAndConfig(p, knownNotManifestListedImage_x8664)
func runTestOpenImageOptionalNotFound(p *proxy, img string) error {
v, err := p.callNoFd("OpenImageOptional", []any{img})
if err != nil {
err = fmt.Errorf("Testing image %s: %v", knownNotManifestListedImage_x8664, err)
return err
}
c.Assert(err, check.IsNil)
imgidv, ok := v.(float64)
if !ok {
return fmt.Errorf("OpenImageOptional return value is %T", v)
}
imgid := uint32(imgidv)
if imgid != 0 {
return fmt.Errorf("Unexpected optional image id %v", imgid)
}
return nil
}
func (s *proxySuite) TestProxy() {
t := s.T()
p, err := newProxy()
require.NoError(t, err)
err = runTestGetManifestAndConfig(p, knownNotManifestListedImageX8664)
if err != nil {
err = fmt.Errorf("Testing image %s: %v", knownNotManifestListedImageX8664, err)
}
assert.NoError(t, err)
err = runTestGetManifestAndConfig(p, knownListImage)
if err != nil {
err = fmt.Errorf("Testing image %s: %v", knownListImage, err)
}
c.Assert(err, check.IsNil)
assert.NoError(t, err)
err = runTestOpenImageOptionalNotFound(p, knownNotExtantImage)
if err != nil {
err = fmt.Errorf("Testing optional image %s: %v", knownNotExtantImage, err)
}
assert.NoError(t, err)
}
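
The proxy.call hunks above receive an optional pipe file descriptor alongside the JSON reply by reading ancillary data (SCM_RIGHTS) from the Unix socket. A minimal, self-contained sketch of that mechanism on Linux — this is an illustration of the technique, not the proxy's exact wiring:

package main

import (
	"fmt"
	"net"
	"os"
	"syscall"
)

// recvFD reads one message from a Unix socket and extracts a single file
// descriptor passed via SCM_RIGHTS, the same mechanism proxy.call uses above.
func recvFD(conn *net.UnixConn) ([]byte, *os.File, error) {
	buf := make([]byte, 32*1024)
	oob := make([]byte, syscall.CmsgSpace(4)) // room for one 32-bit fd
	n, oobn, _, _, err := conn.ReadMsgUnix(buf, oob)
	if err != nil {
		return nil, nil, fmt.Errorf("reading reply: %w", err)
	}
	scms, err := syscall.ParseSocketControlMessage(oob[:oobn])
	if err != nil {
		return nil, nil, fmt.Errorf("failed to parse control message: %w", err)
	}
	if len(scms) != 1 {
		return buf[:n], nil, nil // no fd attached to this message
	}
	fds, err := syscall.ParseUnixRights(&scms[0])
	if err != nil {
		return nil, nil, fmt.Errorf("failed to parse unix rights: %w", err)
	}
	return buf[:n], os.NewFile(uintptr(fds[0]), "pipe"), nil
}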

View File

@@ -6,9 +6,10 @@ import (
"os"
"os/exec"
"path/filepath"
"testing"
"time"
"gopkg.in/check.v1"
"github.com/stretchr/testify/require"
)
const (
@@ -24,9 +25,9 @@ type testRegistryV2 struct {
email string
}
func setupRegistryV2At(c *check.C, url string, auth, schema1 bool) *testRegistryV2 {
reg, err := newTestRegistryV2At(c, url, auth, schema1)
c.Assert(err, check.IsNil)
func setupRegistryV2At(t *testing.T, url string, auth, schema1 bool) *testRegistryV2 {
reg, err := newTestRegistryV2At(t, url, auth, schema1)
require.NoError(t, err)
// Wait for registry to be ready to serve requests.
for i := 0; i != 50; i++ {
@@ -37,13 +38,13 @@ func setupRegistryV2At(c *check.C, url string, auth, schema1 bool) *testRegistry
}
if err != nil {
c.Fatal("Timeout waiting for test registry to become available")
t.Fatal("Timeout waiting for test registry to become available")
}
return reg
}
func newTestRegistryV2At(c *check.C, url string, auth, schema1 bool) (*testRegistryV2, error) {
tmp := c.MkDir()
func newTestRegistryV2At(t *testing.T, url string, auth, schema1 bool) (*testRegistryV2, error) {
tmp := t.TempDir()
template := `version: 0.1
loglevel: debug
storage:
@@ -94,10 +95,10 @@ compatibility:
cmd = exec.Command(binaryV2, "serve", confPath)
}
consumeAndLogOutputs(c, fmt.Sprintf("registry-%s", url), cmd)
consumeAndLogOutputs(t, fmt.Sprintf("registry-%s", url), cmd)
if err := cmd.Start(); err != nil {
if os.IsNotExist(err) {
c.Skip(err.Error())
t.Skip(err.Error())
}
return nil, err
}
@@ -110,20 +111,21 @@ compatibility:
}, nil
}
func (t *testRegistryV2) Ping() error {
func (r *testRegistryV2) Ping() error {
// We always ping through HTTP for our test registry.
resp, err := http.Get(fmt.Sprintf("http://%s/v2/", t.url))
resp, err := http.Get(fmt.Sprintf("http://%s/v2/", r.url))
if err != nil {
return err
}
defer resp.Body.Close()
if resp.StatusCode != http.StatusOK && resp.StatusCode != http.StatusUnauthorized {
return fmt.Errorf("registry ping replied with an unexpected status code %d", resp.StatusCode)
}
return nil
}
func (t *testRegistryV2) tearDown(c *check.C) {
func (r *testRegistryV2) tearDown() {
// It's undocumented what Kill() returns if the process has terminated,
// so we couldn't check just for that. This is running in a container anyway…
_ = t.cmd.Process.Kill()
_ = r.cmd.Process.Kill()
}
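
The body of the readiness loop in setupRegistryV2At falls between the hunks shown above and is not visible here. A minimal sketch of what such a poll typically looks like, assuming a short fixed sleep between Ping attempts (the actual body and interval are not shown):

package integration

import (
	"testing"
	"time"
)

// pinger matches the Ping method of testRegistryV2 shown above.
type pinger interface{ Ping() error }

// waitForRegistry (illustrative only): retry Ping a bounded number of times
// before giving up, as the surrounding hunks suggest.
func waitForRegistry(t *testing.T, reg pinger) {
	var err error
	for i := 0; i != 50; i++ {
		if err = reg.Ping(); err == nil {
			return
		}
		time.Sleep(100 * time.Millisecond) // assumed interval
	}
	t.Fatalf("Timeout waiting for test registry to become available: %v", err)
}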

View File

@@ -6,23 +6,28 @@ import (
"os"
"os/exec"
"strings"
"testing"
"github.com/containers/image/v5/signature"
"gopkg.in/check.v1"
"github.com/stretchr/testify/require"
"github.com/stretchr/testify/suite"
)
const (
gpgBinary = "gpg"
)
func init() {
check.Suite(&SigningSuite{})
func TestSigning(t *testing.T) {
suite.Run(t, &signingSuite{})
}
type SigningSuite struct {
type signingSuite struct {
suite.Suite
fingerprint string
}
var _ = suite.SetupAllSuite(&signingSuite{})
func findFingerprint(lineBytes []byte) (string, error) {
lines := string(lineBytes)
for _, line := range strings.Split(lines, "\n") {
@@ -34,43 +39,41 @@ func findFingerprint(lineBytes []byte) (string, error) {
return "", errors.New("No fingerprint found")
}
func (s *SigningSuite) SetUpSuite(c *check.C) {
func (s *signingSuite) SetupSuite() {
t := s.T()
_, err := exec.LookPath(skopeoBinary)
c.Assert(err, check.IsNil)
require.NoError(t, err)
gpgHome := c.MkDir()
os.Setenv("GNUPGHOME", gpgHome)
gpgHome := t.TempDir()
t.Setenv("GNUPGHOME", gpgHome)
runCommandWithInput(c, "Key-Type: RSA\nName-Real: Testing user\n%no-protection\n%commit\n", gpgBinary, "--homedir", gpgHome, "--batch", "--gen-key")
runCommandWithInput(t, "Key-Type: RSA\nName-Real: Testing user\n%no-protection\n%commit\n", gpgBinary, "--homedir", gpgHome, "--batch", "--gen-key")
lines, err := exec.Command(gpgBinary, "--homedir", gpgHome, "--with-colons", "--no-permission-warning", "--fingerprint").Output()
c.Assert(err, check.IsNil)
require.NoError(t, err)
s.fingerprint, err = findFingerprint(lines)
c.Assert(err, check.IsNil)
require.NoError(t, err)
}
func (s *SigningSuite) TearDownSuite(c *check.C) {
os.Unsetenv("GNUPGHOME")
}
func (s *SigningSuite) TestSignVerifySmoke(c *check.C) {
func (s *signingSuite) TestSignVerifySmoke() {
t := s.T()
mech, _, err := signature.NewEphemeralGPGSigningMechanism([]byte{})
c.Assert(err, check.IsNil)
require.NoError(t, err)
defer mech.Close()
if err := mech.SupportsSigning(); err != nil { // FIXME? Test that verification and policy enforcement works, using signatures from fixtures
c.Skip(fmt.Sprintf("Signing not supported: %v", err))
t.Skipf("Signing not supported: %v", err)
}
manifestPath := "fixtures/image.manifest.json"
dockerReference := "testing/smoketest"
sigOutput, err := os.CreateTemp("", "sig")
c.Assert(err, check.IsNil)
require.NoError(t, err)
defer os.Remove(sigOutput.Name())
assertSkopeoSucceeds(c, "^$", "standalone-sign", "-o", sigOutput.Name(),
assertSkopeoSucceeds(t, "^$", "standalone-sign", "-o", sigOutput.Name(),
manifestPath, dockerReference, s.fingerprint)
expected := fmt.Sprintf("^Signature verified, digest %s\n$", TestImageManifestDigest)
assertSkopeoSucceeds(c, expected, "standalone-verify", manifestPath,
expected := fmt.Sprintf("^Signature verified using fingerprint %s, digest %s\n$", s.fingerprint, TestImageManifestDigest)
assertSkopeoSucceeds(t, expected, "standalone-verify", manifestPath,
dockerReference, s.fingerprint, sigOutput.Name())
}
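
The findFingerprint hunk above keeps the function's signature, loop header, and final return but elides the loop body. Assuming GnuPG's --with-colons output, where fingerprint records are "fpr" lines carrying the fingerprint in the tenth colon-separated field, a plausible body looks like this (a sketch, not necessarily the repository's exact code):

package integration

import (
	"errors"
	"strings"
)

// findFingerprint scans `gpg --with-colons --fingerprint` output for the first
// fpr record and returns its fingerprint field.
func findFingerprint(lineBytes []byte) (string, error) {
	for _, line := range strings.Split(string(lineBytes), "\n") {
		fields := strings.Split(line, ":")
		if len(fields) >= 10 && fields[0] == "fpr" && fields[9] != "" {
			return fields[9], nil
		}
	}
	return "", errors.New("No fingerprint found")
}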

View File

@@ -9,13 +9,16 @@ import (
"path/filepath"
"regexp"
"strings"
"testing"
"github.com/containers/image/v5/docker"
"github.com/containers/image/v5/docker/reference"
"github.com/containers/image/v5/manifest"
"github.com/containers/image/v5/types"
imgspecv1 "github.com/opencontainers/image-spec/specs-go/v1"
"gopkg.in/check.v1"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"github.com/stretchr/testify/suite"
)
const (
@@ -33,30 +36,36 @@ const (
pullableRepoWithLatestTag = "k8s.gcr.io/pause"
)
func init() {
check.Suite(&SyncSuite{})
func TestSync(t *testing.T) {
suite.Run(t, &syncSuite{})
}
type SyncSuite struct {
type syncSuite struct {
suite.Suite
cluster *openshiftCluster
registry *testRegistryV2
}
func (s *SyncSuite) SetUpSuite(c *check.C) {
var _ = suite.SetupAllSuite(&syncSuite{})
var _ = suite.TearDownAllSuite(&syncSuite{})
func (s *syncSuite) SetupSuite() {
t := s.T()
const registryAuth = false
const registrySchema1 = false
if os.Getenv("SKOPEO_LOCAL_TESTS") == "1" {
c.Log("Running tests without a container")
t.Log("Running tests without a container")
fmt.Printf("NOTE: tests requires a V2 registry at url=%s, with auth=%t, schema1=%t \n", v2DockerRegistryURL, registryAuth, registrySchema1)
return
}
if os.Getenv("SKOPEO_CONTAINER_TESTS") != "1" {
c.Skip("Not running in a container, refusing to affect user state")
t.Skip("Not running in a container, refusing to affect user state")
}
s.cluster = startOpenshiftCluster(c) // FIXME: Set up TLS for the docker registry port instead of using "--tls-verify=false" all over the place.
s.cluster = startOpenshiftCluster(t) // FIXME: Set up TLS for the docker registry port instead of using "--tls-verify=false" all over the place.
for _, stream := range []string{"unsigned", "personal", "official", "naming", "cosigned", "compression", "schema1", "schema2"} {
isJSON := fmt.Sprintf(`{
@@ -67,41 +76,42 @@ func (s *SyncSuite) SetUpSuite(c *check.C) {
},
"spec": {}
}`, stream)
runCommandWithInput(c, isJSON, "oc", "create", "-f", "-")
runCommandWithInput(t, isJSON, "oc", "create", "-f", "-")
}
// FIXME: Set up TLS for the docker registry port instead of using "--tls-verify=false" all over the place.
s.registry = setupRegistryV2At(c, v2DockerRegistryURL, registryAuth, registrySchema1)
s.registry = setupRegistryV2At(t, v2DockerRegistryURL, registryAuth, registrySchema1)
gpgHome := c.MkDir()
os.Setenv("GNUPGHOME", gpgHome)
gpgHome := t.TempDir()
t.Setenv("GNUPGHOME", gpgHome)
for _, key := range []string{"personal", "official"} {
batchInput := fmt.Sprintf("Key-Type: RSA\nName-Real: Test key - %s\nName-email: %s@example.com\n%%no-protection\n%%commit\n",
key, key)
runCommandWithInput(c, batchInput, gpgBinary, "--batch", "--gen-key")
runCommandWithInput(t, batchInput, gpgBinary, "--batch", "--gen-key")
out := combinedOutputOfCommand(c, gpgBinary, "--armor", "--export", fmt.Sprintf("%s@example.com", key))
out := combinedOutputOfCommand(t, gpgBinary, "--armor", "--export", fmt.Sprintf("%s@example.com", key))
err := os.WriteFile(filepath.Join(gpgHome, fmt.Sprintf("%s-pubkey.gpg", key)),
[]byte(out), 0600)
c.Assert(err, check.IsNil)
require.NoError(t, err)
}
}
func (s *SyncSuite) TearDownSuite(c *check.C) {
func (s *syncSuite) TearDownSuite() {
t := s.T()
if os.Getenv("SKOPEO_LOCAL_TESTS") == "1" {
return
}
if s.registry != nil {
s.registry.tearDown(c)
s.registry.tearDown()
}
if s.cluster != nil {
s.cluster.tearDown(c)
s.cluster.tearDown(t)
}
}
func assertNumberOfManifestsInSubdirs(c *check.C, dir string, expectedCount int) {
func assertNumberOfManifestsInSubdirs(t *testing.T, dir string, expectedCount int) {
nManifests := 0
err := filepath.WalkDir(dir, func(path string, d fs.DirEntry, err error) error {
if err != nil {
@@ -113,156 +123,163 @@ func assertNumberOfManifestsInSubdirs(c *check.C, dir string, expectedCount int)
}
return nil
})
c.Assert(err, check.IsNil)
c.Assert(nManifests, check.Equals, expectedCount)
require.NoError(t, err)
assert.Equal(t, expectedCount, nManifests)
}
func (s *SyncSuite) TestDocker2DirTagged(c *check.C) {
tmpDir := c.MkDir()
func (s *syncSuite) TestDocker2DirTagged() {
t := s.T()
tmpDir := t.TempDir()
// FIXME: It would be nice to use one of the local Docker registries instead of needing an Internet connection.
image := pullableTaggedImage
imageRef, err := docker.ParseReference(fmt.Sprintf("//%s", image))
c.Assert(err, check.IsNil)
require.NoError(t, err)
imagePath := imageRef.DockerReference().String()
dir1 := path.Join(tmpDir, "dir1")
dir2 := path.Join(tmpDir, "dir2")
// sync docker => dir
assertSkopeoSucceeds(c, "", "sync", "--scoped", "--src", "docker", "--dest", "dir", image, dir1)
assertSkopeoSucceeds(t, "", "sync", "--scoped", "--src", "docker", "--dest", "dir", image, dir1)
_, err = os.Stat(path.Join(dir1, imagePath, "manifest.json"))
c.Assert(err, check.IsNil)
require.NoError(t, err)
// copy docker => dir
assertSkopeoSucceeds(c, "", "copy", "docker://"+image, "dir:"+dir2)
assertSkopeoSucceeds(t, "", "copy", "docker://"+image, "dir:"+dir2)
_, err = os.Stat(path.Join(dir2, "manifest.json"))
c.Assert(err, check.IsNil)
require.NoError(t, err)
out := combinedOutputOfCommand(c, "diff", "-urN", path.Join(dir1, imagePath), dir2)
c.Assert(out, check.Equals, "")
out := combinedOutputOfCommand(t, "diff", "-urN", path.Join(dir1, imagePath), dir2)
assert.Equal(t, "", out)
}
func (s *SyncSuite) TestDocker2DirTaggedAll(c *check.C) {
tmpDir := c.MkDir()
func (s *syncSuite) TestDocker2DirTaggedAll() {
t := s.T()
tmpDir := t.TempDir()
// FIXME: It would be nice to use one of the local Docker registries instead of needing an Internet connection.
image := pullableTaggedManifestList
imageRef, err := docker.ParseReference(fmt.Sprintf("//%s", image))
c.Assert(err, check.IsNil)
require.NoError(t, err)
imagePath := imageRef.DockerReference().String()
dir1 := path.Join(tmpDir, "dir1")
dir2 := path.Join(tmpDir, "dir2")
// sync docker => dir
assertSkopeoSucceeds(c, "", "sync", "--all", "--scoped", "--src", "docker", "--dest", "dir", image, dir1)
assertSkopeoSucceeds(t, "", "sync", "--all", "--scoped", "--src", "docker", "--dest", "dir", image, dir1)
_, err = os.Stat(path.Join(dir1, imagePath, "manifest.json"))
c.Assert(err, check.IsNil)
require.NoError(t, err)
// copy docker => dir
assertSkopeoSucceeds(c, "", "copy", "--all", "docker://"+image, "dir:"+dir2)
assertSkopeoSucceeds(t, "", "copy", "--all", "docker://"+image, "dir:"+dir2)
_, err = os.Stat(path.Join(dir2, "manifest.json"))
c.Assert(err, check.IsNil)
require.NoError(t, err)
out := combinedOutputOfCommand(c, "diff", "-urN", path.Join(dir1, imagePath), dir2)
c.Assert(out, check.Equals, "")
out := combinedOutputOfCommand(t, "diff", "-urN", path.Join(dir1, imagePath), dir2)
assert.Equal(t, "", out)
}
func (s *SyncSuite) TestPreserveDigests(c *check.C) {
tmpDir := c.MkDir()
func (s *syncSuite) TestPreserveDigests() {
t := s.T()
tmpDir := t.TempDir()
// FIXME: It would be nice to use one of the local Docker registries instead of needing an Internet connection.
image := pullableTaggedManifestList
// copy docker => dir
assertSkopeoSucceeds(c, "", "copy", "--all", "--preserve-digests", "docker://"+image, "dir:"+tmpDir)
assertSkopeoSucceeds(t, "", "copy", "--all", "--preserve-digests", "docker://"+image, "dir:"+tmpDir)
_, err := os.Stat(path.Join(tmpDir, "manifest.json"))
c.Assert(err, check.IsNil)
require.NoError(t, err)
assertSkopeoFails(c, ".*Instructed to preserve digests.*", "copy", "--all", "--preserve-digests", "--format=oci", "docker://"+image, "dir:"+tmpDir)
assertSkopeoFails(t, ".*Instructed to preserve digests.*", "copy", "--all", "--preserve-digests", "--format=oci", "docker://"+image, "dir:"+tmpDir)
}
func (s *SyncSuite) TestScoped(c *check.C) {
func (s *syncSuite) TestScoped() {
t := s.T()
// FIXME: It would be nice to use one of the local Docker registries instead of needing an Internet connection.
image := pullableTaggedImage
imageRef, err := docker.ParseReference(fmt.Sprintf("//%s", image))
c.Assert(err, check.IsNil)
require.NoError(t, err)
imagePath := imageRef.DockerReference().String()
dir1 := c.MkDir()
assertSkopeoSucceeds(c, "", "sync", "--src", "docker", "--dest", "dir", image, dir1)
dir1 := t.TempDir()
assertSkopeoSucceeds(t, "", "sync", "--src", "docker", "--dest", "dir", image, dir1)
_, err = os.Stat(path.Join(dir1, path.Base(imagePath), "manifest.json"))
c.Assert(err, check.IsNil)
require.NoError(t, err)
assertSkopeoSucceeds(c, "", "sync", "--scoped", "--src", "docker", "--dest", "dir", image, dir1)
assertSkopeoSucceeds(t, "", "sync", "--scoped", "--src", "docker", "--dest", "dir", image, dir1)
_, err = os.Stat(path.Join(dir1, imagePath, "manifest.json"))
c.Assert(err, check.IsNil)
require.NoError(t, err)
}
func (s *SyncSuite) TestDirIsNotOverwritten(c *check.C) {
func (s *syncSuite) TestDirIsNotOverwritten() {
t := s.T()
// FIXME: It would be nice to use one of the local Docker registries instead of needing an Internet connection.
image := pullableRepoWithLatestTag
imageRef, err := docker.ParseReference(fmt.Sprintf("//%s", image))
c.Assert(err, check.IsNil)
require.NoError(t, err)
imagePath := imageRef.DockerReference().String()
// make a copy of the image in the local registry
assertSkopeoSucceeds(c, "", "copy", "--dest-tls-verify=false", "docker://"+image, "docker://"+path.Join(v2DockerRegistryURL, reference.Path(imageRef.DockerReference())))
assertSkopeoSucceeds(t, "", "copy", "--dest-tls-verify=false", "docker://"+image, "docker://"+path.Join(v2DockerRegistryURL, reference.Path(imageRef.DockerReference())))
//sync upstream image to dir, not scoped
dir1 := c.MkDir()
assertSkopeoSucceeds(c, "", "sync", "--src", "docker", "--dest", "dir", image, dir1)
dir1 := t.TempDir()
assertSkopeoSucceeds(t, "", "sync", "--src", "docker", "--dest", "dir", image, dir1)
_, err = os.Stat(path.Join(dir1, path.Base(imagePath), "manifest.json"))
c.Assert(err, check.IsNil)
require.NoError(t, err)
//sync local registry image to dir, not scoped
assertSkopeoFails(c, ".*Refusing to overwrite destination directory.*", "sync", "--src-tls-verify=false", "--src", "docker", "--dest", "dir", path.Join(v2DockerRegistryURL, reference.Path(imageRef.DockerReference())), dir1)
assertSkopeoFails(t, ".*Refusing to overwrite destination directory.*", "sync", "--src-tls-verify=false", "--src", "docker", "--dest", "dir", path.Join(v2DockerRegistryURL, reference.Path(imageRef.DockerReference())), dir1)
//sync local registry image to dir, scoped
imageRef, err = docker.ParseReference(fmt.Sprintf("//%s", path.Join(v2DockerRegistryURL, reference.Path(imageRef.DockerReference()))))
c.Assert(err, check.IsNil)
require.NoError(t, err)
imagePath = imageRef.DockerReference().String()
assertSkopeoSucceeds(c, "", "sync", "--scoped", "--src-tls-verify=false", "--src", "docker", "--dest", "dir", path.Join(v2DockerRegistryURL, reference.Path(imageRef.DockerReference())), dir1)
assertSkopeoSucceeds(t, "", "sync", "--scoped", "--src-tls-verify=false", "--src", "docker", "--dest", "dir", path.Join(v2DockerRegistryURL, reference.Path(imageRef.DockerReference())), dir1)
_, err = os.Stat(path.Join(dir1, imagePath, "manifest.json"))
c.Assert(err, check.IsNil)
require.NoError(t, err)
}
func (s *SyncSuite) TestDocker2DirUntagged(c *check.C) {
tmpDir := c.MkDir()
func (s *syncSuite) TestDocker2DirUntagged() {
t := s.T()
tmpDir := t.TempDir()
// FIXME: It would be nice to use one of the local Docker registries instead of needing an Internet connection.
image := pullableRepo
imageRef, err := docker.ParseReference(fmt.Sprintf("//%s", image))
c.Assert(err, check.IsNil)
require.NoError(t, err)
imagePath := imageRef.DockerReference().String()
dir1 := path.Join(tmpDir, "dir1")
assertSkopeoSucceeds(c, "", "sync", "--scoped", "--src", "docker", "--dest", "dir", image, dir1)
assertSkopeoSucceeds(t, "", "sync", "--scoped", "--src", "docker", "--dest", "dir", image, dir1)
sysCtx := types.SystemContext{}
tags, err := docker.GetRepositoryTags(context.Background(), &sysCtx, imageRef)
c.Assert(err, check.IsNil)
c.Check(len(tags), check.Not(check.Equals), 0)
require.NoError(t, err)
assert.NotZero(t, len(tags))
nManifests, err := filepath.Glob(path.Join(dir1, path.Dir(imagePath), "*", "manifest.json"))
c.Assert(err, check.IsNil)
c.Assert(len(nManifests), check.Equals, len(tags))
require.NoError(t, err)
assert.Len(t, nManifests, len(tags))
}
func (s *SyncSuite) TestYamlUntagged(c *check.C) {
tmpDir := c.MkDir()
func (s *syncSuite) TestYamlUntagged() {
t := s.T()
tmpDir := t.TempDir()
dir1 := path.Join(tmpDir, "dir1")
image := pullableRepo
imageRef, err := docker.ParseReference(fmt.Sprintf("//%s", image))
c.Assert(err, check.IsNil)
require.NoError(t, err)
imagePath := imageRef.DockerReference().Name()
sysCtx := types.SystemContext{}
tags, err := docker.GetRepositoryTags(context.Background(), &sysCtx, imageRef)
c.Assert(err, check.IsNil)
c.Check(len(tags), check.Not(check.Equals), 0)
require.NoError(t, err)
assert.NotZero(t, len(tags))
yamlConfig := fmt.Sprintf(`
%s:
@@ -273,8 +290,8 @@ func (s *SyncSuite) TestYamlUntagged(c *check.C) {
// sync to the local registry
yamlFile := path.Join(tmpDir, "registries.yaml")
err = os.WriteFile(yamlFile, []byte(yamlConfig), 0644)
c.Assert(err, check.IsNil)
assertSkopeoSucceeds(c, "", "sync", "--scoped", "--src", "yaml", "--dest", "docker", "--dest-tls-verify=false", yamlFile, v2DockerRegistryURL)
require.NoError(t, err)
assertSkopeoSucceeds(t, "", "sync", "--scoped", "--src", "yaml", "--dest", "docker", "--dest-tls-verify=false", yamlFile, v2DockerRegistryURL)
// sync back from local registry to a folder
os.Remove(yamlFile)
yamlConfig = fmt.Sprintf(`
@@ -285,23 +302,24 @@ func (s *SyncSuite) TestYamlUntagged(c *check.C) {
`, v2DockerRegistryURL, imagePath)
err = os.WriteFile(yamlFile, []byte(yamlConfig), 0644)
c.Assert(err, check.IsNil)
assertSkopeoSucceeds(c, "", "sync", "--scoped", "--src", "yaml", "--dest", "dir", yamlFile, dir1)
require.NoError(t, err)
assertSkopeoSucceeds(t, "", "sync", "--scoped", "--src", "yaml", "--dest", "dir", yamlFile, dir1)
sysCtx = types.SystemContext{
DockerInsecureSkipTLSVerify: types.NewOptionalBool(true),
}
localImageRef, err := docker.ParseReference(fmt.Sprintf("//%s/%s", v2DockerRegistryURL, imagePath))
c.Assert(err, check.IsNil)
require.NoError(t, err)
localTags, err := docker.GetRepositoryTags(context.Background(), &sysCtx, localImageRef)
c.Assert(err, check.IsNil)
c.Check(len(localTags), check.Not(check.Equals), 0)
c.Assert(len(localTags), check.Equals, len(tags))
assertNumberOfManifestsInSubdirs(c, dir1, len(tags))
require.NoError(t, err)
assert.NotZero(t, len(localTags))
assert.Len(t, localTags, len(tags))
assertNumberOfManifestsInSubdirs(t, dir1, len(tags))
}
func (s *SyncSuite) TestYamlRegex2Dir(c *check.C) {
tmpDir := c.MkDir()
func (s *syncSuite) TestYamlRegex2Dir() {
t := s.T()
tmpDir := t.TempDir()
dir1 := path.Join(tmpDir, "dir1")
yamlConfig := `
@@ -311,17 +329,18 @@ k8s.gcr.io:
`
// the ↑ regex strings always matches only 2 images
var nTags = 2
c.Assert(nTags, check.Not(check.Equals), 0)
assert.NotZero(t, nTags)
yamlFile := path.Join(tmpDir, "registries.yaml")
err := os.WriteFile(yamlFile, []byte(yamlConfig), 0644)
c.Assert(err, check.IsNil)
assertSkopeoSucceeds(c, "", "sync", "--scoped", "--src", "yaml", "--dest", "dir", yamlFile, dir1)
assertNumberOfManifestsInSubdirs(c, dir1, nTags)
require.NoError(t, err)
assertSkopeoSucceeds(t, "", "sync", "--scoped", "--src", "yaml", "--dest", "dir", yamlFile, dir1)
assertNumberOfManifestsInSubdirs(t, dir1, nTags)
}
func (s *SyncSuite) TestYamlDigest2Dir(c *check.C) {
tmpDir := c.MkDir()
func (s *syncSuite) TestYamlDigest2Dir() {
t := s.T()
tmpDir := t.TempDir()
dir1 := path.Join(tmpDir, "dir1")
yamlConfig := `
@@ -332,13 +351,14 @@ k8s.gcr.io:
`
yamlFile := path.Join(tmpDir, "registries.yaml")
err := os.WriteFile(yamlFile, []byte(yamlConfig), 0644)
c.Assert(err, check.IsNil)
assertSkopeoSucceeds(c, "", "sync", "--scoped", "--src", "yaml", "--dest", "dir", yamlFile, dir1)
assertNumberOfManifestsInSubdirs(c, dir1, 1)
require.NoError(t, err)
assertSkopeoSucceeds(t, "", "sync", "--scoped", "--src", "yaml", "--dest", "dir", yamlFile, dir1)
assertNumberOfManifestsInSubdirs(t, dir1, 1)
}
func (s *SyncSuite) TestYaml2Dir(c *check.C) {
tmpDir := c.MkDir()
func (s *syncSuite) TestYaml2Dir() {
t := s.T()
tmpDir := t.TempDir()
dir1 := path.Join(tmpDir, "dir1")
yamlConfig := `
@@ -366,25 +386,26 @@ quay.io:
nTags++
}
}
c.Assert(nTags, check.Not(check.Equals), 0)
assert.NotZero(t, nTags)
yamlFile := path.Join(tmpDir, "registries.yaml")
err := os.WriteFile(yamlFile, []byte(yamlConfig), 0644)
c.Assert(err, check.IsNil)
assertSkopeoSucceeds(c, "", "sync", "--scoped", "--src", "yaml", "--dest", "dir", yamlFile, dir1)
assertNumberOfManifestsInSubdirs(c, dir1, nTags)
require.NoError(t, err)
assertSkopeoSucceeds(t, "", "sync", "--scoped", "--src", "yaml", "--dest", "dir", yamlFile, dir1)
assertNumberOfManifestsInSubdirs(t, dir1, nTags)
}
func (s *SyncSuite) TestYamlTLSVerify(c *check.C) {
func (s *syncSuite) TestYamlTLSVerify() {
t := s.T()
const localRegURL = "docker://" + v2DockerRegistryURL + "/"
tmpDir := c.MkDir()
tmpDir := t.TempDir()
dir1 := path.Join(tmpDir, "dir1")
image := pullableRepoWithLatestTag
tag := "latest"
// FIXME: It would be nice to use one of the local Docker registries instead of needing an Internet connection.
// copy docker => docker
assertSkopeoSucceeds(c, "", "copy", "--dest-tls-verify=false", "docker://"+image+":"+tag, localRegURL+image+":"+tag)
assertSkopeoSucceeds(t, "", "copy", "--dest-tls-verify=false", "docker://"+image+":"+tag, localRegURL+image+":"+tag)
yamlTemplate := `
%s:
@@ -396,7 +417,7 @@ func (s *SyncSuite) TestYamlTLSVerify(c *check.C) {
testCfg := []struct {
tlsVerify string
msg string
checker func(c *check.C, regexp string, args ...string)
checker func(t *testing.T, regexp string, args ...string)
}{
{
tlsVerify: "tls-verify: false",
@@ -420,17 +441,18 @@ func (s *SyncSuite) TestYamlTLSVerify(c *check.C) {
yamlConfig := fmt.Sprintf(yamlTemplate, v2DockerRegistryURL, cfg.tlsVerify, image, tag)
yamlFile := path.Join(tmpDir, "registries.yaml")
err := os.WriteFile(yamlFile, []byte(yamlConfig), 0644)
c.Assert(err, check.IsNil)
require.NoError(t, err)
cfg.checker(c, cfg.msg, "sync", "--scoped", "--src", "yaml", "--dest", "dir", yamlFile, dir1)
cfg.checker(t, cfg.msg, "sync", "--scoped", "--src", "yaml", "--dest", "dir", yamlFile, dir1)
os.Remove(yamlFile)
os.RemoveAll(dir1)
}
}
func (s *SyncSuite) TestSyncManifestOutput(c *check.C) {
tmpDir := c.MkDir()
func (s *syncSuite) TestSyncManifestOutput() {
t := s.T()
tmpDir := t.TempDir()
destDir1 := filepath.Join(tmpDir, "dest1")
destDir2 := filepath.Join(tmpDir, "dest2")
@@ -439,154 +461,162 @@ func (s *SyncSuite) TestSyncManifestOutput(c *check.C) {
//Split image:tag path from image URI for manifest comparison
imageDir := pullableTaggedImage[strings.LastIndex(pullableTaggedImage, "/")+1:]
assertSkopeoSucceeds(c, "", "sync", "--format=oci", "--all", "--src", "docker", "--dest", "dir", pullableTaggedImage, destDir1)
verifyManifestMIMEType(c, filepath.Join(destDir1, imageDir), imgspecv1.MediaTypeImageManifest)
assertSkopeoSucceeds(c, "", "sync", "--format=v2s2", "--all", "--src", "docker", "--dest", "dir", pullableTaggedImage, destDir2)
verifyManifestMIMEType(c, filepath.Join(destDir2, imageDir), manifest.DockerV2Schema2MediaType)
assertSkopeoSucceeds(c, "", "sync", "--format=v2s1", "--all", "--src", "docker", "--dest", "dir", pullableTaggedImage, destDir3)
verifyManifestMIMEType(c, filepath.Join(destDir3, imageDir), manifest.DockerV2Schema1SignedMediaType)
assertSkopeoSucceeds(t, "", "sync", "--format=oci", "--all", "--src", "docker", "--dest", "dir", pullableTaggedImage, destDir1)
verifyManifestMIMEType(t, filepath.Join(destDir1, imageDir), imgspecv1.MediaTypeImageManifest)
assertSkopeoSucceeds(t, "", "sync", "--format=v2s2", "--all", "--src", "docker", "--dest", "dir", pullableTaggedImage, destDir2)
verifyManifestMIMEType(t, filepath.Join(destDir2, imageDir), manifest.DockerV2Schema2MediaType)
assertSkopeoSucceeds(t, "", "sync", "--format=v2s1", "--all", "--src", "docker", "--dest", "dir", pullableTaggedImage, destDir3)
verifyManifestMIMEType(t, filepath.Join(destDir3, imageDir), manifest.DockerV2Schema1SignedMediaType)
}
func (s *SyncSuite) TestDocker2DockerTagged(c *check.C) {
func (s *syncSuite) TestDocker2DockerTagged() {
t := s.T()
const localRegURL = "docker://" + v2DockerRegistryURL + "/"
tmpDir := c.MkDir()
tmpDir := t.TempDir()
// FIXME: It would be nice to use one of the local Docker registries instead of needing an Internet connection.
image := pullableTaggedImage
imageRef, err := docker.ParseReference(fmt.Sprintf("//%s", image))
c.Assert(err, check.IsNil)
require.NoError(t, err)
imagePath := imageRef.DockerReference().String()
dir1 := path.Join(tmpDir, "dir1")
dir2 := path.Join(tmpDir, "dir2")
// sync docker => docker
assertSkopeoSucceeds(c, "", "sync", "--scoped", "--dest-tls-verify=false", "--src", "docker", "--dest", "docker", image, v2DockerRegistryURL)
assertSkopeoSucceeds(t, "", "sync", "--scoped", "--dest-tls-verify=false", "--src", "docker", "--dest", "docker", image, v2DockerRegistryURL)
// copy docker => dir
assertSkopeoSucceeds(c, "", "copy", "docker://"+image, "dir:"+dir1)
assertSkopeoSucceeds(t, "", "copy", "docker://"+image, "dir:"+dir1)
_, err = os.Stat(path.Join(dir1, "manifest.json"))
c.Assert(err, check.IsNil)
require.NoError(t, err)
// copy docker => dir
assertSkopeoSucceeds(c, "", "copy", "--src-tls-verify=false", localRegURL+imagePath, "dir:"+dir2)
assertSkopeoSucceeds(t, "", "copy", "--src-tls-verify=false", localRegURL+imagePath, "dir:"+dir2)
_, err = os.Stat(path.Join(dir2, "manifest.json"))
c.Assert(err, check.IsNil)
require.NoError(t, err)
out := combinedOutputOfCommand(c, "diff", "-urN", dir1, dir2)
c.Assert(out, check.Equals, "")
out := combinedOutputOfCommand(t, "diff", "-urN", dir1, dir2)
assert.Equal(t, "", out)
}
func (s *SyncSuite) TestDir2DockerTagged(c *check.C) {
func (s *syncSuite) TestDir2DockerTagged() {
t := s.T()
const localRegURL = "docker://" + v2DockerRegistryURL + "/"
tmpDir := c.MkDir()
tmpDir := t.TempDir()
// FIXME: It would be nice to use one of the local Docker registries instead of needing an Internet connection.
image := pullableRepoWithLatestTag
dir1 := path.Join(tmpDir, "dir1")
err := os.Mkdir(dir1, 0755)
c.Assert(err, check.IsNil)
require.NoError(t, err)
dir2 := path.Join(tmpDir, "dir2")
err = os.Mkdir(dir2, 0755)
c.Assert(err, check.IsNil)
require.NoError(t, err)
// create leading dirs
err = os.MkdirAll(path.Dir(path.Join(dir1, image)), 0755)
c.Assert(err, check.IsNil)
require.NoError(t, err)
// copy docker => dir
assertSkopeoSucceeds(c, "", "copy", "docker://"+image, "dir:"+path.Join(dir1, image))
assertSkopeoSucceeds(t, "", "copy", "docker://"+image, "dir:"+path.Join(dir1, image))
_, err = os.Stat(path.Join(dir1, image, "manifest.json"))
c.Assert(err, check.IsNil)
require.NoError(t, err)
// sync dir => docker
assertSkopeoSucceeds(c, "", "sync", "--scoped", "--dest-tls-verify=false", "--src", "dir", "--dest", "docker", dir1, v2DockerRegistryURL)
assertSkopeoSucceeds(t, "", "sync", "--scoped", "--dest-tls-verify=false", "--src", "dir", "--dest", "docker", dir1, v2DockerRegistryURL)
// create leading dirs
err = os.MkdirAll(path.Dir(path.Join(dir2, image)), 0755)
c.Assert(err, check.IsNil)
require.NoError(t, err)
// copy docker => dir
assertSkopeoSucceeds(c, "", "copy", "--src-tls-verify=false", localRegURL+image, "dir:"+path.Join(dir2, image))
assertSkopeoSucceeds(t, "", "copy", "--src-tls-verify=false", localRegURL+image, "dir:"+path.Join(dir2, image))
_, err = os.Stat(path.Join(dir2, image, "manifest.json"))
c.Assert(err, check.IsNil)
require.NoError(t, err)
out := combinedOutputOfCommand(c, "diff", "-urN", dir1, dir2)
c.Assert(out, check.Equals, "")
out := combinedOutputOfCommand(t, "diff", "-urN", dir1, dir2)
assert.Equal(t, "", out)
}
func (s *SyncSuite) TestFailsWithDir2Dir(c *check.C) {
tmpDir := c.MkDir()
func (s *syncSuite) TestFailsWithDir2Dir() {
t := s.T()
tmpDir := t.TempDir()
dir1 := path.Join(tmpDir, "dir1")
dir2 := path.Join(tmpDir, "dir2")
// sync dir => dir is not allowed
assertSkopeoFails(c, ".*sync from 'dir' to 'dir' not implemented.*", "sync", "--scoped", "--src", "dir", "--dest", "dir", dir1, dir2)
assertSkopeoFails(t, ".*sync from 'dir' to 'dir' not implemented.*", "sync", "--scoped", "--src", "dir", "--dest", "dir", dir1, dir2)
}
func (s *SyncSuite) TestFailsNoSourceImages(c *check.C) {
tmpDir := c.MkDir()
func (s *syncSuite) TestFailsNoSourceImages() {
t := s.T()
tmpDir := t.TempDir()
assertSkopeoFails(c, ".*No images to sync found in .*",
assertSkopeoFails(t, ".*No images to sync found in .*",
"sync", "--scoped", "--dest-tls-verify=false", "--src", "dir", "--dest", "docker", tmpDir, v2DockerRegistryURL)
assertSkopeoFails(c, ".*No images to sync found in .*",
assertSkopeoFails(t, ".*Error determining repository tags for repo docker.io/library/hopefully_no_images_will_ever_be_called_like_this: fetching tags list: requested access to the resource is denied.*",
"sync", "--scoped", "--dest-tls-verify=false", "--src", "docker", "--dest", "docker", "hopefully_no_images_will_ever_be_called_like_this", v2DockerRegistryURL)
}
func (s *SyncSuite) TestFailsWithDockerSourceNoRegistry(c *check.C) {
func (s *syncSuite) TestFailsWithDockerSourceNoRegistry() {
t := s.T()
const regURL = "google.com/namespace/imagename"
tmpDir := c.MkDir()
tmpDir := t.TempDir()
//untagged
assertSkopeoFails(c, ".*invalid status code from registry 404.*",
assertSkopeoFails(t, ".*StatusCode: 404.*",
"sync", "--scoped", "--src", "docker", "--dest", "dir", regURL, tmpDir)
//tagged
assertSkopeoFails(c, ".*invalid status code from registry 404.*",
assertSkopeoFails(t, ".*StatusCode: 404.*",
"sync", "--scoped", "--src", "docker", "--dest", "dir", regURL+":thetag", tmpDir)
}
func (s *SyncSuite) TestFailsWithDockerSourceUnauthorized(c *check.C) {
func (s *syncSuite) TestFailsWithDockerSourceUnauthorized() {
t := s.T()
const repo = "privateimagenamethatshouldnotbepublic"
tmpDir := c.MkDir()
tmpDir := t.TempDir()
//untagged
assertSkopeoFails(c, ".*Registry disallows tag list retrieval.*",
assertSkopeoFails(t, ".*requested access to the resource is denied.*",
"sync", "--scoped", "--src", "docker", "--dest", "dir", repo, tmpDir)
//tagged
assertSkopeoFails(c, ".*unauthorized: authentication required.*",
assertSkopeoFails(t, ".*requested access to the resource is denied.*",
"sync", "--scoped", "--src", "docker", "--dest", "dir", repo+":thetag", tmpDir)
}
func (s *SyncSuite) TestFailsWithDockerSourceNotExisting(c *check.C) {
func (s *syncSuite) TestFailsWithDockerSourceNotExisting() {
t := s.T()
repo := path.Join(v2DockerRegistryURL, "imagedoesnotexist")
tmpDir := c.MkDir()
tmpDir := t.TempDir()
//untagged
assertSkopeoFails(c, ".*invalid status code from registry 404.*",
assertSkopeoFails(t, ".*repository name not known to registry.*",
"sync", "--scoped", "--src-tls-verify=false", "--src", "docker", "--dest", "dir", repo, tmpDir)
//tagged
assertSkopeoFails(c, ".*reading manifest.*",
assertSkopeoFails(t, ".*reading manifest.*",
"sync", "--scoped", "--src-tls-verify=false", "--src", "docker", "--dest", "dir", repo+":thetag", tmpDir)
}
func (s *SyncSuite) TestFailsWithDirSourceNotExisting(c *check.C) {
func (s *syncSuite) TestFailsWithDirSourceNotExisting() {
t := s.T()
// Make sure the dir does not exist!
tmpDir := c.MkDir()
tmpDir := t.TempDir()
tmpDir = filepath.Join(tmpDir, "this-does-not-exist")
err := os.RemoveAll(tmpDir)
c.Assert(err, check.IsNil)
require.NoError(t, err)
_, err = os.Stat(path.Join(tmpDir))
c.Check(os.IsNotExist(err), check.Equals, true)
assert.True(t, os.IsNotExist(err))
assertSkopeoFails(c, ".*no such file or directory.*",
assertSkopeoFails(t, ".*no such file or directory.*",
"sync", "--scoped", "--dest-tls-verify=false", "--src", "dir", "--dest", "docker", tmpDir, v2DockerRegistryURL)
}
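
The suite methods above obtain their *testing.T via s.T(), which testify's suite package supplies once the suite is registered with plain `go test`. The registration code is outside this excerpt; a minimal, hypothetical sketch of how such a syncSuite is typically wired up (names and package are assumptions, not taken from the diff):

```go
package integration

import (
	"testing"

	"github.com/stretchr/testify/suite"
)

// syncSuite embeds suite.Suite, which provides s.T() to every test method.
type syncSuite struct {
	suite.Suite
}

// TestSyncSuite is the ordinary go-test entry point that runs every TestXxx
// method defined on the suite.
func TestSyncSuite(t *testing.T) {
	suite.Run(t, &syncSuite{})
}
```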

View File

@@ -4,14 +4,17 @@ import (
"bytes"
"io"
"net"
"net/netip"
"os"
"os/exec"
"path/filepath"
"strings"
"testing"
"time"
"github.com/containers/image/v5/manifest"
"gopkg.in/check.v1"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
const skopeoBinary = "skopeo"
@@ -19,21 +22,21 @@ const decompressDirsBinary = "./decompress-dirs.sh"
const testFQIN = "docker://quay.io/libpod/busybox" // tag left off on purpose, some tests need to add a special one
const testFQIN64 = "docker://quay.io/libpod/busybox:amd64"
const testFQINMultiLayer = "docker://quay.io/libpod/alpine_nginx:master" // multi-layer
const testFQINMultiLayer = "docker://quay.io/libpod/alpine_nginx:latest" // multi-layer
// consumeAndLogOutputStream takes (f, err) from an exec.*Pipe(), and causes all output to it to be logged to c.
func consumeAndLogOutputStream(c *check.C, id string, f io.ReadCloser, err error) {
c.Assert(err, check.IsNil)
// consumeAndLogOutputStream takes (f, err) from an exec.*Pipe(), and causes all output to it to be logged to t.
func consumeAndLogOutputStream(t *testing.T, id string, f io.ReadCloser, err error) {
require.NoError(t, err)
go func() {
defer func() {
f.Close()
c.Logf("Output %s: Closed", id)
t.Logf("Output %s: Closed", id)
}()
buf := make([]byte, 1024)
for {
c.Logf("Output %s: waiting", id)
t.Logf("Output %s: waiting", id)
n, err := f.Read(buf)
c.Logf("Output %s: got %d,%#v: %s", id, n, err, strings.TrimSuffix(string(buf[:n]), "\n"))
t.Logf("Output %s: got %d,%#v: %s", id, n, err, strings.TrimSuffix(string(buf[:n]), "\n"))
if n <= 0 {
break
}
@@ -41,72 +44,73 @@ func consumeAndLogOutputStream(c *check.C, id string, f io.ReadCloser, err error
}()
}
// consumeAndLogOutputs causes all output to stdout and stderr from an *exec.Cmd to be logged to c
func consumeAndLogOutputs(c *check.C, id string, cmd *exec.Cmd) {
// consumeAndLogOutputs causes all output to stdout and stderr from an *exec.Cmd to be logged to t.
func consumeAndLogOutputs(t *testing.T, id string, cmd *exec.Cmd) {
stdout, err := cmd.StdoutPipe()
consumeAndLogOutputStream(c, id+" stdout", stdout, err)
consumeAndLogOutputStream(t, id+" stdout", stdout, err)
stderr, err := cmd.StderrPipe()
consumeAndLogOutputStream(c, id+" stderr", stderr, err)
consumeAndLogOutputStream(t, id+" stderr", stderr, err)
}
// combinedOutputOfCommand runs a command as if exec.Command().CombinedOutput(), verifies that the exit status is 0, and returns the output,
// or terminates c on failure.
func combinedOutputOfCommand(c *check.C, name string, args ...string) string {
c.Logf("Running %s %s", name, strings.Join(args, " "))
func combinedOutputOfCommand(t *testing.T, name string, args ...string) string {
t.Logf("Running %s %s", name, strings.Join(args, " "))
out, err := exec.Command(name, args...).CombinedOutput()
c.Assert(err, check.IsNil, check.Commentf("%s", out))
require.NoError(t, err, "%s", out)
return string(out)
}
// assertSkopeoSucceeds runs a skopeo command as if exec.Command().CombinedOutput, verifies that the exit status is 0,
// and optionally that the output matches a multi-line regexp if it is nonempty;
// or terminates c on failure
func assertSkopeoSucceeds(c *check.C, regexp string, args ...string) {
c.Logf("Running %s %s", skopeoBinary, strings.Join(args, " "))
func assertSkopeoSucceeds(t *testing.T, regexp string, args ...string) {
t.Logf("Running %s %s", skopeoBinary, strings.Join(args, " "))
out, err := exec.Command(skopeoBinary, args...).CombinedOutput()
c.Assert(err, check.IsNil, check.Commentf("%s", out))
assert.NoError(t, err, "%s", out)
if regexp != "" {
c.Assert(string(out), check.Matches, "(?s)"+regexp) // (?s) : '.' will also match newlines
assert.Regexp(t, "(?s)"+regexp, string(out)) // (?s) : '.' will also match newlines
}
}
// assertSkopeoFails runs a skopeo command as if exec.Command().CombinedOutput, verifies that the exit status is non-zero,
// and that the output matches a multi-line regexp;
// or terminates c on failure
func assertSkopeoFails(c *check.C, regexp string, args ...string) {
c.Logf("Running %s %s", skopeoBinary, strings.Join(args, " "))
func assertSkopeoFails(t *testing.T, regexp string, args ...string) {
t.Logf("Running %s %s", skopeoBinary, strings.Join(args, " "))
out, err := exec.Command(skopeoBinary, args...).CombinedOutput()
c.Assert(err, check.NotNil, check.Commentf("%s", out))
c.Assert(string(out), check.Matches, "(?s)"+regexp) // (?s) : '.' will also match newlines
assert.Error(t, err, "%s", out)
assert.Regexp(t, "(?s)"+regexp, string(out)) // (?s) : '.' will also match newlines
}
// runCommandWithInput runs a command as if exec.Command(), sending it the input to stdin,
// and verifies that the exit status is 0, or terminates c on failure.
func runCommandWithInput(c *check.C, input string, name string, args ...string) {
func runCommandWithInput(t *testing.T, input string, name string, args ...string) {
cmd := exec.Command(name, args...)
runExecCmdWithInput(c, cmd, input)
runExecCmdWithInput(t, cmd, input)
}
// runExecCmdWithInput runs an exec.Cmd, sending it the input to stdin,
// and verifies that the exit status is 0, or terminates c on failure.
func runExecCmdWithInput(c *check.C, cmd *exec.Cmd, input string) {
c.Logf("Running %s %s", cmd.Path, strings.Join(cmd.Args, " "))
consumeAndLogOutputs(c, cmd.Path+" "+strings.Join(cmd.Args, " "), cmd)
func runExecCmdWithInput(t *testing.T, cmd *exec.Cmd, input string) {
t.Logf("Running %s %s", cmd.Path, strings.Join(cmd.Args, " "))
consumeAndLogOutputs(t, cmd.Path+" "+strings.Join(cmd.Args, " "), cmd)
stdin, err := cmd.StdinPipe()
c.Assert(err, check.IsNil)
require.NoError(t, err)
err = cmd.Start()
c.Assert(err, check.IsNil)
_, err = stdin.Write([]byte(input))
c.Assert(err, check.IsNil)
require.NoError(t, err)
_, err = io.WriteString(stdin, input)
require.NoError(t, err)
err = stdin.Close()
c.Assert(err, check.IsNil)
require.NoError(t, err)
err = cmd.Wait()
c.Assert(err, check.IsNil)
assert.NoError(t, err)
}
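
A hypothetical call site for the helpers above, assuming they live in the same test package; it feeds a fixed string to `cat` and relies on the helper to fail the test on any non-zero exit:

```go
package integration

import "testing"

func TestEchoInput(t *testing.T) {
	// runCommandWithInput is the helper defined above; "cat" simply reads the
	// provided stdin and exits 0, so the test passes if piping works.
	runCommandWithInput(t, "hello\n", "cat")
}
```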
// isPortOpen returns true iff the specified port on localhost is open.
func isPortOpen(port int) bool {
conn, err := net.DialTCP("tcp", nil, &net.TCPAddr{IP: net.IPv4(127, 0, 0, 1), Port: port})
func isPortOpen(port uint16) bool {
ap := netip.AddrPortFrom(netip.AddrFrom4([4]byte{127, 0, 0, 1}), port)
conn, err := net.DialTCP("tcp", nil, net.TCPAddrFromAddrPort(ap))
if err != nil {
return false
}
@@ -118,29 +122,29 @@ func isPortOpen(port int) bool {
// The checking can be aborted by sending a value to the terminate channel, which the caller should
// always do using
// defer func() {terminate <- true}()
func newPortChecker(c *check.C, port int) (portOpen <-chan bool, terminate chan<- bool) {
func newPortChecker(t *testing.T, port uint16) (portOpen <-chan bool, terminate chan<- bool) {
portOpenBidi := make(chan bool)
// Buffered, so that sending a terminate request after the goroutine has exited does not block.
terminateBidi := make(chan bool, 1)
go func() {
defer func() {
c.Logf("Port checker for port %d exiting", port)
t.Logf("Port checker for port %d exiting", port)
}()
for {
c.Logf("Checking for port %d...", port)
t.Logf("Checking for port %d...", port)
if isPortOpen(port) {
c.Logf("Port %d open", port)
t.Logf("Port %d open", port)
portOpenBidi <- true
return
}
c.Logf("Sleeping for port %d", port)
t.Logf("Sleeping for port %d", port)
sleepChan := time.After(100 * time.Millisecond)
select {
case <-sleepChan: // Try again
c.Logf("Sleeping for port %d done, will retry", port)
t.Logf("Sleeping for port %d done, will retry", port)
case <-terminateBidi:
c.Logf("Check for port %d terminated", port)
t.Logf("Check for port %d terminated", port)
return
}
}
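
The hunk above cuts off after the error check; for context, a self-contained sketch of the same port probe, assuming the remainder of the function simply closes the connection and reports success:

```go
package main

import (
	"fmt"
	"net"
	"net/netip"
)

// isPortOpen reports whether something is listening on 127.0.0.1:port,
// building the address with net/netip as the helper above does.
func isPortOpen(port uint16) bool {
	ap := netip.AddrPortFrom(netip.AddrFrom4([4]byte{127, 0, 0, 1}), port)
	conn, err := net.DialTCP("tcp", nil, net.TCPAddrFromAddrPort(ap))
	if err != nil {
		return false
	}
	conn.Close()
	return true
}

func main() {
	fmt.Println(isPortOpen(5000))
}
```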
@@ -162,54 +166,51 @@ func modifyEnviron(env []string, name, value string) []string {
// fileFromFixture applies edits to inputPath and returns a path to the temporary file.
// Callers should defer os.Remove(the_returned_path)
func fileFromFixture(c *check.C, inputPath string, edits map[string]string) string {
func fileFromFixture(t *testing.T, inputPath string, edits map[string]string) string {
contents, err := os.ReadFile(inputPath)
c.Assert(err, check.IsNil)
require.NoError(t, err)
for template, value := range edits {
updated := bytes.ReplaceAll(contents, []byte(template), []byte(value))
c.Assert(bytes.Equal(updated, contents), check.Equals, false, check.Commentf("Replacing %s in %#v failed", template, string(contents))) // Verify that the template has matched something and we are not silently ignoring it.
require.NotEqual(t, contents, updated, "Replacing %s in %#v failed", template, string(contents)) // Verify that the template has matched something and we are not silently ignoring it.
contents = updated
}
file, err := os.CreateTemp("", "policy.json")
c.Assert(err, check.IsNil)
require.NoError(t, err)
path := file.Name()
_, err = file.Write(contents)
c.Assert(err, check.IsNil)
require.NoError(t, err)
err = file.Close()
c.Assert(err, check.IsNil)
require.NoError(t, err)
return path
}
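
A hypothetical caller of fileFromFixture, following the comment's advice to remove the temporary file afterwards; the fixture path and the @keydir@ placeholder are illustrative, not taken from the diff:

```go
package integration

import (
	"os"
	"testing"
)

func TestPolicyFixture(t *testing.T) {
	// Substitute a per-test directory into the fixture and clean up afterwards.
	policyPath := fileFromFixture(t, "fixtures/policy.json",
		map[string]string{"@keydir@": t.TempDir()})
	defer os.Remove(policyPath)

	// policyPath can now be handed to a skopeo invocation under test.
	_ = policyPath
}
```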
// runDecompressDirs runs decompress-dirs.sh using exec.Command().CombinedOutput, verifies that the exit status is 0,
// or terminates c on failure
func runDecompressDirs(c *check.C, regexp string, args ...string) {
c.Logf("Running %s %s", decompressDirsBinary, strings.Join(args, " "))
func runDecompressDirs(t *testing.T, args ...string) {
t.Logf("Running %s %s", decompressDirsBinary, strings.Join(args, " "))
for i, dir := range args {
m, err := os.ReadFile(filepath.Join(dir, "manifest.json"))
c.Assert(err, check.IsNil)
c.Logf("manifest %d before: %s", i+1, string(m))
require.NoError(t, err)
t.Logf("manifest %d before: %s", i+1, string(m))
}
out, err := exec.Command(decompressDirsBinary, args...).CombinedOutput()
c.Assert(err, check.IsNil, check.Commentf("%s", out))
assert.NoError(t, err, "%s", out)
for i, dir := range args {
if len(out) > 0 {
c.Logf("output: %s", out)
t.Logf("output: %s", out)
}
m, err := os.ReadFile(filepath.Join(dir, "manifest.json"))
c.Assert(err, check.IsNil)
c.Logf("manifest %d after: %s", i+1, string(m))
}
if regexp != "" {
c.Assert(string(out), check.Matches, "(?s)"+regexp) // (?s) : '.' will also match newlines
require.NoError(t, err)
t.Logf("manifest %d after: %s", i+1, string(m))
}
}
// Verify manifest in a dir: image at dir is expectedMIMEType.
func verifyManifestMIMEType(c *check.C, dir string, expectedMIMEType string) {
func verifyManifestMIMEType(t *testing.T, dir string, expectedMIMEType string) {
manifestBlob, err := os.ReadFile(filepath.Join(dir, "manifest.json"))
c.Assert(err, check.IsNil)
require.NoError(t, err)
mimeType := manifest.GuessMIMEType(manifestBlob)
c.Assert(mimeType, check.Equals, expectedMIMEType)
assert.Equal(t, expectedMIMEType, mimeType)
}
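
verifyManifestMIMEType leans on GuessMIMEType from containers/image; a small standalone sketch of the same check outside the test helpers:

```go
package main

import (
	"fmt"
	"os"

	"github.com/containers/image/v5/manifest"
)

func main() {
	// The path is illustrative; a dir: image keeps its manifest.json at the root.
	blob, err := os.ReadFile("manifest.json")
	if err != nil {
		panic(err)
	}
	// GuessMIMEType inspects the serialized manifest and returns its MIME type,
	// or "" if it cannot be recognized.
	fmt.Println(manifest.GuessMIMEType(blob))
}
```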

View File

@@ -9,8 +9,14 @@
# CAUTION: This is not a replacement for RPMs provided by your distro.
# Only intended to build and test the latest unreleased changes.
%global gomodulesmode GO111MODULE=on
%global with_debug 1
%global gomodulesmode GO111MODULE=on
# RHEL 8's default %%gobuild macro doesn't account for the BUILDTAGS variable, so we
# set it separately here and do not depend on RHEL 8's go-srpm-macros package.
%if !0%{?fedora} && 0%{?rhel} <= 8
%define gobuild(o:) %{gomodulesmode} go build -buildmode pie -compiler gc -tags="rpm_crashtraceback libtrust_openssl ${BUILDTAGS:-}" -ldflags "-linkmode=external -compressdwarf=false ${LDFLAGS:-} -B 0x$(head -c20 /dev/urandom|od -An -tx1|tr -d ' \\n') -extldflags '%__global_ldflags'" -a -v -x %{?**};
%endif
%if 0%{?with_debug}
%global _find_debuginfo_dwz_opts %{nil}
@@ -19,10 +25,6 @@
%global debug_package %{nil}
%endif
%if ! 0%{?gobuild:1}
%define gobuild(o:) go build -buildmode pie -compiler gc -tags="rpm_crashtraceback ${BUILDTAGS:-}" -ldflags "${LDFLAGS:-} -B 0x$(head -c20 /dev/urandom|od -An -tx1|tr -d ' \\n') -extldflags '-Wl,-z,relro -Wl,-z,now -specs=/usr/lib/rpm/redhat/redhat-hardened-ld '" -a -v -x %{?**};
%endif
Name: {{{ git_dir_name }}}
Epoch: 101
Version: {{{ git_dir_version }}}
@@ -48,11 +50,7 @@ BuildRequires: libassuan-devel
BuildRequires: pkgconfig
BuildRequires: make
BuildRequires: ostree-devel
%if 0%{?fedora} <= 35
Requires: containers-common >= 4:1-39
%else
Requires: containers-common >= 4:1-46
%endif
Requires: containers-common >= 4:1-78
%description
Command line utility to inspect images and repositories directly on Docker
@@ -95,11 +93,7 @@ export CGO_CFLAGS+=" -m64 -mtune=generic -fcf-protection=full"
LDFLAGS=""
export BUILDTAGS="$(hack/libdm_tag.sh)"
%if 0%{?rhel}
export BUILDTAGS="$BUILDTAGS exclude_graphdriver_btrfs btrfs_noversion"
%endif
export BUILDTAGS="$(hack/libdm_tag.sh) $(hack/btrfs_installed_tag.sh) $(hack/btrfs_tag.sh)"
%gobuild -o bin/%{name} ./cmd/%{name}
%install

View File

@@ -95,10 +95,11 @@ END_EXPECT
# is created by the make-noarch-manifest script in this directory.
img=docker://quay.io/libpod/notmyarch:20210121
# Get our host arch (what we're running on). This assumes that skopeo
# arch matches podman; it also assumes running podman >= April 2020
# (prior to that, the format keys were lower-case).
arch=$(podman info --format '{{.Host.Arch}}')
# Get our host golang arch (what we're running on, according to golang).
# This assumes that skopeo arch matches host arch (which it always should).
# Buildah is used here because it depends less on the exact system config
# than podman - and all we're really after is the golang-flavored arch name.
arch=$(go env GOARCH)
# By default, 'inspect' tries to match our host os+arch. This should fail.
run_skopeo 1 inspect $img
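
The script shells out to `go env GOARCH` for the golang-flavored architecture name; from Go code the same value is available as a constant, as in this minimal sketch:

```go
package main

import (
	"fmt"
	"runtime"
)

func main() {
	// runtime.GOARCH matches what `go env GOARCH` prints for a native build
	// (e.g. amd64, arm64).
	fmt.Println(runtime.GOARCH)
}
```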

View File

@@ -8,38 +8,41 @@ load helpers
function setup() {
standard_setup
# Remove old/stale cred file
_cred_dir=$TESTDIR/credentials
export XDG_RUNTIME_DIR=$_cred_dir
mkdir -p $_cred_dir/containers
rm -f $_cred_dir/containers/auth.json
# Start authenticated registry with random password
testuser=testuser
testpassword=$(random_string 15)
start_registry --testuser=$testuser --testpassword=$testpassword --enable-delete=true reg
_cred_dir=$TESTDIR/credentials
# It is important to change XDG_RUNTIME_DIR only after we start the registry, otherwise it affects the path of $XDG_RUNTIME_DIR/netns maintained by Podman,
# making it impossible to clean up after ourselves.
export XDG_RUNTIME_DIR_OLD=$XDG_RUNTIME_DIR
export XDG_RUNTIME_DIR=$_cred_dir
mkdir -p $_cred_dir/containers
# Remove old/stale cred file
rm -f $_cred_dir/containers/auth.json
}
@test "auth: credentials on command line" {
# No creds
run_skopeo 1 inspect --tls-verify=false docker://localhost:5000/nonesuch
expect_output --substring "unauthorized: authentication required"
expect_output --substring "authentication required"
# Wrong user
run_skopeo 1 inspect --tls-verify=false --creds=baduser:badpassword \
docker://localhost:5000/nonesuch
expect_output --substring "unauthorized: authentication required"
expect_output --substring "authentication required"
# Wrong password
run_skopeo 1 inspect --tls-verify=false --creds=$testuser:badpassword \
docker://localhost:5000/nonesuch
expect_output --substring "unauthorized: authentication required"
expect_output --substring "authentication required"
# Correct creds, but no such image
run_skopeo 1 inspect --tls-verify=false --creds=$testuser:$testpassword \
docker://localhost:5000/nonesuch
expect_output --substring "manifest unknown: manifest unknown"
expect_output --substring "manifest unknown"
# These should pass
run_skopeo copy --dest-tls-verify=false --dcreds=$testuser:$testpassword \
@@ -64,7 +67,7 @@ function setup() {
podman logout localhost:5000
run_skopeo 1 inspect --tls-verify=false docker://localhost:5000/busybox:mine
expect_output --substring "unauthorized: authentication required"
expect_output --substring "authentication required"
}
@test "auth: copy with --src-creds and --dest-creds" {
@@ -94,7 +97,7 @@ function setup() {
# inspect without authfile: should fail
run_skopeo 1 inspect --tls-verify=false docker://localhost:5000/busybox:mine
expect_output --substring "unauthorized: authentication required"
expect_output --substring "authentication required"
# inspect with authfile: should work
run_skopeo inspect --tls-verify=false --authfile $TESTDIR/test.auth docker://localhost:5000/busybox:mine
@@ -109,6 +112,9 @@ function setup() {
}
teardown() {
# Need to restore XDG_RUNTIME_DIR.
XDG_RUNTIME_DIR=$XDG_RUNTIME_DIR_OLD
podman rm -f reg
if [[ -n $_cred_dir ]]; then

View File

@@ -242,7 +242,7 @@ END_TESTS
$fingerprint \
$TESTDIR/busybox.signature
# manifest digest
digest=$(echo "$output" | awk '{print $4;}')
digest=$(echo "$output" | awk '{print $NF;}')
run_skopeo manifest-digest $TESTDIR/busybox/manifest.json
expect_output $digest
}

View File

@@ -21,7 +21,9 @@ type Unmarshaler interface {
UnmarshalTOML(interface{}) error
}
// Unmarshal decodes the contents of `data` in TOML format into a pointer `v`.
// Unmarshal decodes the contents of data in TOML format into a pointer v.
//
// See [Decoder] for a description of the decoding process.
func Unmarshal(data []byte, v interface{}) error {
_, err := NewDecoder(bytes.NewReader(data)).Decode(v)
return err
@@ -29,13 +31,12 @@ func Unmarshal(data []byte, v interface{}) error {
// Decode the TOML data in to the pointer v.
//
// See the documentation on Decoder for a description of the decoding process.
// See [Decoder] for a description of the decoding process.
func Decode(data string, v interface{}) (MetaData, error) {
return NewDecoder(strings.NewReader(data)).Decode(v)
}
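
A brief usage sketch of Unmarshal with an assumed config struct, mirroring the doc comments above:

```go
package main

import (
	"fmt"

	"github.com/BurntSushi/toml"
)

type config struct {
	Title string
	Port  int
}

func main() {
	var cfg config
	// Unmarshal decodes TOML bytes into a pointer, much like encoding/json.Unmarshal;
	// keys are matched to struct fields case-insensitively.
	if err := toml.Unmarshal([]byte("title = \"demo\"\nport = 8080\n"), &cfg); err != nil {
		panic(err)
	}
	fmt.Printf("%+v\n", cfg)
}
```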
// DecodeFile is just like Decode, except it will automatically read the
// contents of the file at path and decode it for you.
// DecodeFile reads the contents of a file and decodes it with [Decode].
func DecodeFile(path string, v interface{}) (MetaData, error) {
fp, err := os.Open(path)
if err != nil {
@@ -48,7 +49,7 @@ func DecodeFile(path string, v interface{}) (MetaData, error) {
// Primitive is a TOML value that hasn't been decoded into a Go value.
//
// This type can be used for any value, which will cause decoding to be delayed.
// You can use the PrimitiveDecode() function to "manually" decode these values.
// You can use [PrimitiveDecode] to "manually" decode these values.
//
// NOTE: The underlying representation of a `Primitive` value is subject to
// change. Do not rely on it.
@@ -70,15 +71,15 @@ const (
// Decoder decodes TOML data.
//
// TOML tables correspond to Go structs or maps (dealer's choice they can be
// used interchangeably).
// TOML tables correspond to Go structs or maps; they can be used
// interchangeably, but structs offer better type safety.
//
// TOML table arrays correspond to either a slice of structs or a slice of maps.
//
// TOML datetimes correspond to Go time.Time values. Local datetimes are parsed
// in the local timezone.
// TOML datetimes correspond to [time.Time]. Local datetimes are parsed in the
// local timezone.
//
// time.Duration types are treated as nanoseconds if the TOML value is an
// [time.Duration] types are treated as nanoseconds if the TOML value is an
// integer, or they're parsed with time.ParseDuration() if they're strings.
//
// All other TOML types (float, string, int, bool and array) correspond to the
@@ -90,7 +91,7 @@ const (
// UnmarshalText method. See the Unmarshaler example for a demonstration with
// email addresses.
//
// Key mapping
// ### Key mapping
//
// TOML keys can map to either keys in a Go map or field names in a Go struct.
// The special `toml` struct tag can be used to map TOML keys to struct fields
@@ -168,17 +169,16 @@ func (dec *Decoder) Decode(v interface{}) (MetaData, error) {
return md, md.unify(p.mapping, rv)
}
// PrimitiveDecode is just like the other `Decode*` functions, except it
// decodes a TOML value that has already been parsed. Valid primitive values
// can *only* be obtained from values filled by the decoder functions,
// including this method. (i.e., `v` may contain more `Primitive`
// values.)
// PrimitiveDecode is just like the other Decode* functions, except it decodes a
// TOML value that has already been parsed. Valid primitive values can *only* be
// obtained from values filled by the decoder functions, including this method.
// (i.e., v may contain more [Primitive] values.)
//
// Meta data for primitive values is included in the meta data returned by
// the `Decode*` functions with one exception: keys returned by the Undecoded
// method will only reflect keys that were decoded. Namely, any keys hidden
// behind a Primitive will be considered undecoded. Executing this method will
// update the undecoded keys in the meta data. (See the example.)
// Meta data for primitive values is included in the meta data returned by the
// Decode* functions with one exception: keys returned by the Undecoded method
// will only reflect keys that were decoded. Namely, any keys hidden behind a
// Primitive will be considered undecoded. Executing this method will update the
// undecoded keys in the meta data. (See the example.)
func (md *MetaData) PrimitiveDecode(primValue Primitive, v interface{}) error {
md.context = primValue.context
defer func() { md.context = nil }()
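
The hunk ends here; for context, a sketch of the delayed-decoding pattern the comment describes, with assumed table and field names:

```go
package main

import (
	"fmt"

	"github.com/BurntSushi/toml"
)

type server struct {
	Host string
	Port int
}

func main() {
	var doc struct {
		Server toml.Primitive
	}
	md, err := toml.Decode("[server]\nhost = \"localhost\"\nport = 8080\n", &doc)
	if err != nil {
		panic(err)
	}
	// The [server] table stays undecoded (and is reported by md.Undecoded())
	// until it is decoded explicitly.
	var srv server
	if err := md.PrimitiveDecode(doc.Server, &srv); err != nil {
		panic(err)
	}
	fmt.Printf("%+v\n", srv)
}
```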

View File

@@ -7,8 +7,8 @@ import (
"io/fs"
)
// DecodeFS is just like Decode, except it will automatically read the contents
// of the file at `path` from a fs.FS instance.
// DecodeFS reads the contents of a file from [fs.FS] and decodes it with
// [Decode].
func DecodeFS(fsys fs.FS, path string, v interface{}) (MetaData, error) {
fp, err := fsys.Open(path)
if err != nil {

View File

@@ -1,13 +1,11 @@
/*
Package toml implements decoding and encoding of TOML files.
This package supports TOML v1.0.0, as listed on https://toml.io
There is also support for delaying decoding with the Primitive type, and
querying the set of keys in a TOML document with the MetaData type.
The github.com/BurntSushi/toml/cmd/tomlv package implements a TOML validator,
and can be used to verify if TOML document is valid. It can also be used to
print the type of each key.
*/
// Package toml implements decoding and encoding of TOML files.
//
// This package supports TOML v1.0.0, as specified at https://toml.io
//
// There is also support for delaying decoding with the Primitive type, and
// querying the set of keys in a TOML document with the MetaData type.
//
// The github.com/BurntSushi/toml/cmd/tomlv package implements a TOML validator,
// and can be used to verify if TOML document is valid. It can also be used to
// print the type of each key.
package toml

View File

@@ -79,12 +79,12 @@ type Marshaler interface {
// Encoder encodes a Go to a TOML document.
//
// The mapping between Go values and TOML values should be precisely the same as
// for the Decode* functions.
// for [Decode].
//
// time.Time is encoded as a RFC 3339 string, and time.Duration as its string
// representation.
//
// The toml.Marshaler and encoder.TextMarshaler interfaces are supported to
// The [Marshaler] and [encoding.TextMarshaler] interfaces are supported to
// encoding the value as custom TOML.
//
// If you want to write arbitrary binary data then you will need to use
@@ -130,7 +130,7 @@ func NewEncoder(w io.Writer) *Encoder {
}
}
// Encode writes a TOML representation of the Go value to the Encoder's writer.
// Encode writes a TOML representation of the Go value to the [Encoder]'s writer.
//
// An error is returned if the value given cannot be encoded to a valid TOML
// document.
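
A minimal encoding counterpart to the decoding example earlier, using the Encoder described above (struct and values are illustrative):

```go
package main

import (
	"os"

	"github.com/BurntSushi/toml"
)

type config struct {
	Title string
	Port  int
}

func main() {
	// Encode writes the TOML representation of cfg to the encoder's writer.
	cfg := config{Title: "demo", Port: 8080}
	if err := toml.NewEncoder(os.Stdout).Encode(cfg); err != nil {
		panic(err)
	}
}
```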
@@ -261,7 +261,7 @@ func (enc *Encoder) eElement(rv reflect.Value) {
enc.eElement(reflect.ValueOf(v))
return
}
encPanic(errors.New(fmt.Sprintf("Unable to convert \"%s\" to neither int64 nor float64", n)))
encPanic(fmt.Errorf("unable to convert %q to int64 or float64", n))
}
switch rv.Kind() {
@@ -504,7 +504,8 @@ func (enc *Encoder) eStruct(key Key, rv reflect.Value, inline bool) {
if opts.name != "" {
keyName = opts.name
}
if opts.omitempty && isEmpty(fieldVal) {
if opts.omitempty && enc.isEmpty(fieldVal) {
continue
}
if opts.omitzero && isZero(fieldVal) {
@@ -648,12 +649,26 @@ func isZero(rv reflect.Value) bool {
return false
}
func isEmpty(rv reflect.Value) bool {
func (enc *Encoder) isEmpty(rv reflect.Value) bool {
switch rv.Kind() {
case reflect.Array, reflect.Slice, reflect.Map, reflect.String:
return rv.Len() == 0
case reflect.Struct:
return reflect.Zero(rv.Type()).Interface() == rv.Interface()
if rv.Type().Comparable() {
return reflect.Zero(rv.Type()).Interface() == rv.Interface()
}
// Need to also check if all the fields are empty, otherwise something
// like this with uncomparable types will always return true:
//
// type a struct{ field b }
// type b struct{ s []string }
// s := a{field: b{s: []string{"AAA"}}}
for i := 0; i < rv.NumField(); i++ {
if !enc.isEmpty(rv.Field(i)) {
return false
}
}
return true
case reflect.Bool:
return !rv.Bool()
}
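
The inline comment above gives an example of a struct that cannot take the comparable fast path; a tiny standalone check of that case (types copied from the comment):

```go
package main

import (
	"fmt"
	"reflect"
)

type b struct{ s []string }
type a struct{ field b }

func main() {
	v := a{field: b{s: []string{"AAA"}}}
	// A struct containing a slice is not a comparable type, so the encoder
	// cannot use the zero-value == comparison and instead falls back to
	// checking each field for emptiness.
	fmt.Println(reflect.ValueOf(v).Type().Comparable()) // false
}
```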
@@ -668,16 +683,15 @@ func (enc *Encoder) newline() {
// Write a key/value pair:
//
// key = <any value>
// key = <any value>
//
// This is also used for "k = v" in inline tables; so something like this will
// be written in three calls:
//
// ┌────────────────────┐
// │ ┌───┐ ┌────┐│
// v v v v vv
// key = {k = v, k2 = v2}
//
// ┌───────────────────┐
// │ ┌───┐ ┌────┐│
// v v v v vv
// key = {k = 1, k2 = 2}
func (enc *Encoder) writeKeyValue(key Key, val reflect.Value, inline bool) {
if len(key) == 0 {
encPanic(errNoKey)

View File

@@ -5,57 +5,60 @@ import (
"strings"
)
// ParseError is returned when there is an error parsing the TOML syntax.
//
// For example invalid syntax, duplicate keys, etc.
// ParseError is returned when there is an error parsing the TOML syntax such as
// invalid syntax, duplicate keys, etc.
//
// In addition to the error message itself, you can also print detailed location
// information with context by using ErrorWithPosition():
// information with context by using [ErrorWithPosition]:
//
// toml: error: Key 'fruit' was already created and cannot be used as an array.
// toml: error: Key 'fruit' was already created and cannot be used as an array.
//
// At line 4, column 2-7:
// At line 4, column 2-7:
//
// 2 | fruit = []
// 3 |
// 4 | [[fruit]] # Not allowed
// ^^^^^
// 2 | fruit = []
// 3 |
// 4 | [[fruit]] # Not allowed
// ^^^^^
//
// Furthermore, the ErrorWithUsage() can be used to print the above with some
// more detailed usage guidance:
// [ErrorWithUsage] can be used to print the above with some more detailed usage
// guidance:
//
// toml: error: newlines not allowed within inline tables
// toml: error: newlines not allowed within inline tables
//
// At line 1, column 18:
// At line 1, column 18:
//
// 1 | x = [{ key = 42 #
// ^
// 1 | x = [{ key = 42 #
// ^
//
// Error help:
// Error help:
//
// Inline tables must always be on a single line:
// Inline tables must always be on a single line:
//
// table = {key = 42, second = 43}
// table = {key = 42, second = 43}
//
// It is invalid to split them over multiple lines like so:
// It is invalid to split them over multiple lines like so:
//
// # INVALID
// table = {
// key = 42,
// second = 43
// }
// # INVALID
// table = {
// key = 42,
// second = 43
// }
//
// Use regular for this:
// Use regular for this:
//
// [table]
// key = 42
// second = 43
// [table]
// key = 42
// second = 43
type ParseError struct {
Message string // Short technical message.
Usage string // Longer message with usage guidance; may be blank.
Position Position // Position of the error
LastKey string // Last parsed key, may be blank.
Line int // Line the error occurred. Deprecated: use Position.
// Line the error occurred.
//
// Deprecated: use [Position].
Line int
err error
input string
@@ -83,7 +86,7 @@ func (pe ParseError) Error() string {
// ErrorWithUsage() returns the error with detailed location context.
//
// See the documentation on ParseError.
// See the documentation on [ParseError].
func (pe ParseError) ErrorWithPosition() string {
if pe.input == "" { // Should never happen, but just in case.
return pe.Error()
@@ -124,7 +127,7 @@ func (pe ParseError) ErrorWithPosition() string {
// ErrorWithUsage() returns the error with detailed location context and usage
// guidance.
//
// See the documentation on ParseError.
// See the documentation on [ParseError].
func (pe ParseError) ErrorWithUsage() string {
m := pe.ErrorWithPosition()
if u, ok := pe.err.(interface{ Usage() string }); ok && u.Usage() != "" {

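A short sketch of surfacing the detailed location context from calling code, assuming the error came from this package's Decode; the invalid input is the one from the doc comment above:

```go
package main

import (
	"errors"
	"fmt"

	"github.com/BurntSushi/toml"
)

func main() {
	var v map[string]interface{}
	_, err := toml.Decode("x = [{ key = 42 #", &v) // invalid: unterminated inline table
	var perr toml.ParseError
	if errors.As(err, &perr) {
		// ErrorWithPosition prints the "At line …, column …" context shown above.
		fmt.Println(perr.ErrorWithPosition())
	}
}
```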
View File

@@ -771,7 +771,7 @@ func lexRawString(lx *lexer) stateFn {
}
// lexMultilineRawString consumes a raw string. Nothing can be escaped in such
// a string. It assumes that the beginning "'''" has already been consumed and
// a string. It assumes that the beginning ''' has already been consumed and
// ignored.
func lexMultilineRawString(lx *lexer) stateFn {
r := lx.next()

View File

@@ -71,7 +71,7 @@ func (md *MetaData) Keys() []Key {
// Undecoded returns all keys that have not been decoded in the order in which
// they appear in the original TOML document.
//
// This includes keys that haven't been decoded because of a Primitive value.
// This includes keys that haven't been decoded because of a [Primitive] value.
// Once the Primitive value is decoded, the keys will be considered decoded.
//
// Also note that decoding into an empty interface will result in no decoding,
@@ -89,7 +89,7 @@ func (md *MetaData) Undecoded() []Key {
return undecoded
}
// Key represents any TOML key, including key groups. Use (MetaData).Keys to get
// Key represents any TOML key, including key groups. Use [MetaData.Keys] to get
// values of this type.
type Key []string

vendor/github.com/Microsoft/go-winio/.gitattributes generated vendored Normal file
View File

@@ -0,0 +1 @@
* text=auto eol=lf

View File

@@ -1 +1,10 @@
.vscode/
*.exe
# testing
testdata
# go workspaces
go.work
go.work.sum

vendor/github.com/Microsoft/go-winio/.golangci.yml generated vendored Normal file
View File

@@ -0,0 +1,144 @@
run:
skip-dirs:
- pkg/etw/sample
linters:
enable:
# style
- containedctx # struct contains a context
- dupl # duplicate code
- errname # errors are named correctly
- goconst # strings that should be constants
- godot # comments end in a period
- misspell
- nolintlint # "//nolint" directives are properly explained
- revive # golint replacement
- stylecheck # golint replacement, less configurable than revive
- unconvert # unnecessary conversions
- wastedassign
# bugs, performance, unused, etc ...
- contextcheck # function uses a non-inherited context
- errorlint # errors not wrapped for 1.13
- exhaustive # check exhaustiveness of enum switch statements
- gofmt # files are gofmt'ed
- gosec # security
- nestif # deeply nested ifs
- nilerr # returns nil even with non-nil error
- prealloc # slices that can be pre-allocated
- structcheck # unused struct fields
- unparam # unused function params
issues:
exclude-rules:
# err is very often shadowed in nested scopes
- linters:
- govet
text: '^shadow: declaration of "err" shadows declaration'
# ignore long lines for skip autogen directives
- linters:
- revive
text: "^line-length-limit: "
source: "^//(go:generate|sys) "
# allow unjustified ignores of error checks in defer statements
- linters:
- nolintlint
text: "^directive `//nolint:errcheck` should provide explanation"
source: '^\s*defer '
# allow unjustified ignores of error lints for io.EOF
- linters:
- nolintlint
text: "^directive `//nolint:errorlint` should provide explanation"
source: '[=|!]= io.EOF'
linters-settings:
govet:
enable-all: true
disable:
# struct order is often for Win32 compat
# also, ignore pointer bytes/GC issues for now until performance becomes an issue
- fieldalignment
check-shadowing: true
nolintlint:
allow-leading-space: false
require-explanation: true
require-specific: true
revive:
# revive is more configurable than static check, so likely the preferred alternative to static-check
# (once the perf issue is solved: https://github.com/golangci/golangci-lint/issues/2997)
enable-all-rules:
true
# https://github.com/mgechev/revive/blob/master/RULES_DESCRIPTIONS.md
rules:
# rules with required arguments
- name: argument-limit
disabled: true
- name: banned-characters
disabled: true
- name: cognitive-complexity
disabled: true
- name: cyclomatic
disabled: true
- name: file-header
disabled: true
- name: function-length
disabled: true
- name: function-result-limit
disabled: true
- name: max-public-structs
disabled: true
# generally annoying rules
- name: add-constant # complains about any and all strings and integers
disabled: true
- name: confusing-naming # we frequently use "Foo()" and "foo()" together
disabled: true
- name: flag-parameter # excessive, and a common idiom we use
disabled: true
# general config
- name: line-length-limit
arguments:
- 140
- name: var-naming
arguments:
- []
- - CID
- CRI
- CTRD
- DACL
- DLL
- DOS
- ETW
- FSCTL
- GCS
- GMSA
- HCS
- HV
- IO
- LCOW
- LDAP
- LPAC
- LTSC
- MMIO
- NT
- OCI
- PMEM
- PWSH
- RX
- SACl
- SID
- SMB
- TX
- VHD
- VHDX
- VMID
- VPCI
- WCOW
- WIM
stylecheck:
checks:
- "all"
- "-ST1003" # use revive's var naming

View File

@@ -13,16 +13,60 @@ Please see the LICENSE file for licensing information.
## Contributing
This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA)
declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.microsoft.com.
This project welcomes contributions and suggestions.
Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that
you have the right to, and actually do, grant us the rights to use your contribution.
For details, visit [Microsoft CLA](https://cla.microsoft.com).
When you submit a pull request, a CLA-bot will automatically determine whether you need to provide a CLA and decorate the PR
appropriately (e.g., label, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.
When you submit a pull request, a CLA-bot will automatically determine whether you need to
provide a CLA and decorate the PR appropriately (e.g., label, comment).
Simply follow the instructions provided by the bot.
You will only need to do this once across all repos using our CLA.
We also require that contributors sign their commits using git commit -s or git commit --signoff to certify they either authored the work themselves
or otherwise have permission to use it in this project. Please see https://developercertificate.org/ for more info, as well as to make sure that you can
attest to the rules listed. Our CI uses the DCO Github app to ensure that all commits in a given PR are signed-off.
Additionally, the pull request pipeline requires the following steps to be performed before
merging.
### Code Sign-Off
We require that contributors sign their commits using [`git commit --signoff`][git-commit-s]
to certify they either authored the work themselves or otherwise have permission to use it in this project.
A range of commits can be signed off using [`git rebase --signoff`][git-rebase-s].
Please see [the developer certificate](https://developercertificate.org) for more info,
as well as to make sure that you can attest to the rules listed.
Our CI uses the DCO Github app to ensure that all commits in a given PR are signed-off.
### Linting
Code must pass a linting stage, which uses [`golangci-lint`][lint].
The linting settings are stored in [`.golangci.yaml`](./.golangci.yaml), and can be run
automatically with VSCode by adding the following to your workspace or folder settings:
```json
"go.lintTool": "golangci-lint",
"go.lintOnSave": "package",
```
Additional editor [integrations options are also available][lint-ide].
Alternatively, `golangci-lint` can be [installed locally][lint-install] and run from the repo root:
```shell
# use . or specify a path to only lint a package
# to show all lint errors, use flags "--max-issues-per-linter=0 --max-same-issues=0"
> golangci-lint run ./...
```
### Go Generate
The pipeline checks that auto-generated code, via `go generate`, is up to date.
This can be done for the entire repo:
```shell
> go generate ./...
```
## Code of Conduct
@@ -30,8 +74,16 @@ This project has adopted the [Microsoft Open Source Code of Conduct](https://ope
For more information see the [Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/) or
contact [opencode@microsoft.com](mailto:opencode@microsoft.com) with any additional questions or comments.
## Special Thanks
Thanks to natefinch for the inspiration for this library. See https://github.com/natefinch/npipe
for another named pipe implementation.
Thanks to [natefinch][natefinch] for the inspiration for this library.
See [npipe](https://github.com/natefinch/npipe) for another named pipe implementation.
[lint]: https://golangci-lint.run/
[lint-ide]: https://golangci-lint.run/usage/integrations/#editor-integration
[lint-install]: https://golangci-lint.run/usage/install/#local-installation
[git-commit-s]: https://git-scm.com/docs/git-commit#Documentation/git-commit.txt--s
[git-rebase-s]: https://git-scm.com/docs/git-rebase#Documentation/git-rebase.txt---signoff
[natefinch]: https://github.com/natefinch

vendor/github.com/Microsoft/go-winio/SECURITY.md generated vendored Normal file
View File

@@ -0,0 +1,41 @@
<!-- BEGIN MICROSOFT SECURITY.MD V0.0.7 BLOCK -->
## Security
Microsoft takes the security of our software products and services seriously, which includes all source code repositories managed through our GitHub organizations, which include [Microsoft](https://github.com/Microsoft), [Azure](https://github.com/Azure), [DotNet](https://github.com/dotnet), [AspNet](https://github.com/aspnet), [Xamarin](https://github.com/xamarin), and [our GitHub organizations](https://opensource.microsoft.com/).
If you believe you have found a security vulnerability in any Microsoft-owned repository that meets [Microsoft's definition of a security vulnerability](https://aka.ms/opensource/security/definition), please report it to us as described below.
## Reporting Security Issues
**Please do not report security vulnerabilities through public GitHub issues.**
Instead, please report them to the Microsoft Security Response Center (MSRC) at [https://msrc.microsoft.com/create-report](https://aka.ms/opensource/security/create-report).
If you prefer to submit without logging in, send email to [secure@microsoft.com](mailto:secure@microsoft.com). If possible, encrypt your message with our PGP key; please download it from the [Microsoft Security Response Center PGP Key page](https://aka.ms/opensource/security/pgpkey).
You should receive a response within 24 hours. If for some reason you do not, please follow up via email to ensure we received your original message. Additional information can be found at [microsoft.com/msrc](https://aka.ms/opensource/security/msrc).
Please include the requested information listed below (as much as you can provide) to help us better understand the nature and scope of the possible issue:
* Type of issue (e.g. buffer overflow, SQL injection, cross-site scripting, etc.)
* Full paths of source file(s) related to the manifestation of the issue
* The location of the affected source code (tag/branch/commit or direct URL)
* Any special configuration required to reproduce the issue
* Step-by-step instructions to reproduce the issue
* Proof-of-concept or exploit code (if possible)
* Impact of the issue, including how an attacker might exploit the issue
This information will help us triage your report more quickly.
If you are reporting for a bug bounty, more complete reports can contribute to a higher bounty award. Please visit our [Microsoft Bug Bounty Program](https://aka.ms/opensource/security/bounty) page for more details about our active programs.
## Preferred Languages
We prefer all communications to be in English.
## Policy
Microsoft follows the principle of [Coordinated Vulnerability Disclosure](https://aka.ms/opensource/security/cvd).
<!-- END MICROSOFT SECURITY.MD BLOCK -->

View File

@@ -1,3 +1,4 @@
//go:build windows
// +build windows
package winio
@@ -7,11 +8,12 @@ import (
"errors"
"fmt"
"io"
"io/ioutil"
"os"
"runtime"
"syscall"
"unicode/utf16"
"golang.org/x/sys/windows"
)
//sys backupRead(h syscall.Handle, b []byte, bytesRead *uint32, abort bool, processSecurity bool, context *uintptr) (err error) = BackupRead
@@ -24,7 +26,7 @@ const (
BackupAlternateData
BackupLink
BackupPropertyData
BackupObjectId
BackupObjectId //revive:disable-line:var-naming ID, not Id
BackupReparseData
BackupSparseBlock
BackupTxfsData
@@ -34,14 +36,16 @@ const (
StreamSparseAttributes = uint32(8)
)
//nolint:revive // var-naming: ALL_CAPS
const (
WRITE_DAC = 0x40000
WRITE_OWNER = 0x80000
ACCESS_SYSTEM_SECURITY = 0x1000000
WRITE_DAC = windows.WRITE_DAC
WRITE_OWNER = windows.WRITE_OWNER
ACCESS_SYSTEM_SECURITY = windows.ACCESS_SYSTEM_SECURITY
)
// BackupHeader represents a backup stream of a file.
type BackupHeader struct {
//revive:disable-next-line:var-naming ID, not Id
Id uint32 // The backup stream ID
Attributes uint32 // Stream attributes
Size int64 // The size of the stream in bytes
@@ -49,8 +53,8 @@ type BackupHeader struct {
Offset int64 // The offset of the stream in the file (for BackupSparseBlock only).
}
type win32StreamId struct {
StreamId uint32
type win32StreamID struct {
StreamID uint32
Attributes uint32
Size uint64
NameSize uint32
@@ -71,7 +75,7 @@ func NewBackupStreamReader(r io.Reader) *BackupStreamReader {
// Next returns the next backup stream and prepares for calls to Read(). It skips the remainder of the current stream if
// it was not completely read.
func (r *BackupStreamReader) Next() (*BackupHeader, error) {
if r.bytesLeft > 0 {
if r.bytesLeft > 0 { //nolint:nestif // todo: flatten this
if s, ok := r.r.(io.Seeker); ok {
// Make sure Seek on io.SeekCurrent sometimes succeeds
// before trying the actual seek.
@@ -82,16 +86,16 @@ func (r *BackupStreamReader) Next() (*BackupHeader, error) {
r.bytesLeft = 0
}
}
if _, err := io.Copy(ioutil.Discard, r); err != nil {
if _, err := io.Copy(io.Discard, r); err != nil {
return nil, err
}
}
var wsi win32StreamId
var wsi win32StreamID
if err := binary.Read(r.r, binary.LittleEndian, &wsi); err != nil {
return nil, err
}
hdr := &BackupHeader{
Id: wsi.StreamId,
Id: wsi.StreamID,
Attributes: wsi.Attributes,
Size: int64(wsi.Size),
}
@@ -102,7 +106,7 @@ func (r *BackupStreamReader) Next() (*BackupHeader, error) {
}
hdr.Name = syscall.UTF16ToString(name)
}
if wsi.StreamId == BackupSparseBlock {
if wsi.StreamID == BackupSparseBlock {
if err := binary.Read(r.r, binary.LittleEndian, &hdr.Offset); err != nil {
return nil, err
}
@@ -147,8 +151,8 @@ func (w *BackupStreamWriter) WriteHeader(hdr *BackupHeader) error {
return fmt.Errorf("missing %d bytes", w.bytesLeft)
}
name := utf16.Encode([]rune(hdr.Name))
wsi := win32StreamId{
StreamId: hdr.Id,
wsi := win32StreamID{
StreamID: hdr.Id,
Attributes: hdr.Attributes,
Size: uint64(hdr.Size),
NameSize: uint32(len(name) * 2),
@@ -203,7 +207,7 @@ func (r *BackupFileReader) Read(b []byte) (int, error) {
var bytesRead uint32
err := backupRead(syscall.Handle(r.f.Fd()), b, &bytesRead, false, r.includeSecurity, &r.ctx)
if err != nil {
return 0, &os.PathError{"BackupRead", r.f.Name(), err}
return 0, &os.PathError{Op: "BackupRead", Path: r.f.Name(), Err: err}
}
runtime.KeepAlive(r.f)
if bytesRead == 0 {
@@ -216,7 +220,7 @@ func (r *BackupFileReader) Read(b []byte) (int, error) {
// the underlying file.
func (r *BackupFileReader) Close() error {
if r.ctx != 0 {
backupRead(syscall.Handle(r.f.Fd()), nil, nil, true, false, &r.ctx)
_ = backupRead(syscall.Handle(r.f.Fd()), nil, nil, true, false, &r.ctx)
runtime.KeepAlive(r.f)
r.ctx = 0
}
@@ -242,7 +246,7 @@ func (w *BackupFileWriter) Write(b []byte) (int, error) {
var bytesWritten uint32
err := backupWrite(syscall.Handle(w.f.Fd()), b, &bytesWritten, false, w.includeSecurity, &w.ctx)
if err != nil {
return 0, &os.PathError{"BackupWrite", w.f.Name(), err}
return 0, &os.PathError{Op: "BackupWrite", Path: w.f.Name(), Err: err}
}
runtime.KeepAlive(w.f)
if int(bytesWritten) != len(b) {
@@ -255,7 +259,7 @@ func (w *BackupFileWriter) Write(b []byte) (int, error) {
// close the underlying file.
func (w *BackupFileWriter) Close() error {
if w.ctx != 0 {
backupWrite(syscall.Handle(w.f.Fd()), nil, nil, true, false, &w.ctx)
_ = backupWrite(syscall.Handle(w.f.Fd()), nil, nil, true, false, &w.ctx)
runtime.KeepAlive(w.f)
w.ctx = 0
}
@@ -271,7 +275,13 @@ func OpenForBackup(path string, access uint32, share uint32, createmode uint32)
if err != nil {
return nil, err
}
h, err := syscall.CreateFile(&winPath[0], access, share, nil, createmode, syscall.FILE_FLAG_BACKUP_SEMANTICS|syscall.FILE_FLAG_OPEN_REPARSE_POINT, 0)
h, err := syscall.CreateFile(&winPath[0],
access,
share,
nil,
createmode,
syscall.FILE_FLAG_BACKUP_SEMANTICS|syscall.FILE_FLAG_OPEN_REPARSE_POINT,
0)
if err != nil {
err = &os.PathError{Op: "open", Path: path, Err: err}
return nil, err

View File

@@ -1,4 +1,3 @@
// +build !windows
// This file only exists to allow go get on non-Windows platforms.
package backuptar

View File

@@ -1,3 +1,5 @@
//go:build windows
package backuptar
import (

View File

@@ -1,3 +1,4 @@
//go:build windows
// +build windows
package backuptar
@@ -7,7 +8,6 @@ import (
"encoding/base64"
"fmt"
"io"
"io/ioutil"
"path/filepath"
"strconv"
"strings"
@@ -18,17 +18,18 @@ import (
"golang.org/x/sys/windows"
)
//nolint:deadcode,varcheck // keep unused constants for potential future use
const (
c_ISUID = 04000 // Set uid
c_ISGID = 02000 // Set gid
c_ISVTX = 01000 // Save text (sticky bit)
c_ISDIR = 040000 // Directory
c_ISFIFO = 010000 // FIFO
c_ISREG = 0100000 // Regular file
c_ISLNK = 0120000 // Symbolic link
c_ISBLK = 060000 // Block special file
c_ISCHR = 020000 // Character special file
c_ISSOCK = 0140000 // Socket
cISUID = 0004000 // Set uid
cISGID = 0002000 // Set gid
cISVTX = 0001000 // Save text (sticky bit)
cISDIR = 0040000 // Directory
cISFIFO = 0010000 // FIFO
cISREG = 0100000 // Regular file
cISLNK = 0120000 // Symbolic link
cISBLK = 0060000 // Block special file
cISCHR = 0020000 // Character special file
cISSOCK = 0140000 // Socket
)
const (
@@ -44,7 +45,7 @@ const (
// zeroReader is an io.Reader that always returns 0s.
type zeroReader struct{}
func (zr zeroReader) Read(b []byte) (int, error) {
func (zeroReader) Read(b []byte) (int, error) {
for i := range b {
b[i] = 0
}
@@ -55,7 +56,7 @@ func copySparse(t *tar.Writer, br *winio.BackupStreamReader) error {
curOffset := int64(0)
for {
bhdr, err := br.Next()
if err == io.EOF {
if err == io.EOF { //nolint:errorlint
err = io.ErrUnexpectedEOF
}
if err != nil {
@@ -71,8 +72,8 @@ func copySparse(t *tar.Writer, br *winio.BackupStreamReader) error {
}
// archive/tar does not support writing sparse files
// so just write zeroes to catch up to the current offset.
if _, err := io.CopyN(t, zeroReader{}, bhdr.Offset-curOffset); err != nil {
return fmt.Errorf("seek to offset %d: %s", bhdr.Offset, err)
if _, err = io.CopyN(t, zeroReader{}, bhdr.Offset-curOffset); err != nil {
return fmt.Errorf("seek to offset %d: %w", bhdr.Offset, err)
}
if bhdr.Size == 0 {
// A sparse block with size = 0 is used to mark the end of the sparse blocks.
@@ -106,7 +107,7 @@ func BasicInfoHeader(name string, size int64, fileInfo *winio.FileBasicInfo) *ta
hdr.PAXRecords[hdrCreationTime] = formatPAXTime(time.Unix(0, fileInfo.CreationTime.Nanoseconds()))
if (fileInfo.FileAttributes & syscall.FILE_ATTRIBUTE_DIRECTORY) != 0 {
hdr.Mode |= c_ISDIR
hdr.Mode |= cISDIR
hdr.Size = 0
hdr.Typeflag = tar.TypeDir
}
@@ -116,32 +117,29 @@ func BasicInfoHeader(name string, size int64, fileInfo *winio.FileBasicInfo) *ta
// SecurityDescriptorFromTarHeader reads the SDDL associated with the header of the current file
// from the tar header and returns the security descriptor into a byte slice.
func SecurityDescriptorFromTarHeader(hdr *tar.Header) ([]byte, error) {
// Maintaining old SDDL-based behavior for backward
// compatibility. All new tar headers written by this library
// will have raw binary for the security descriptor.
var sd []byte
var err error
if sddl, ok := hdr.PAXRecords[hdrSecurityDescriptor]; ok {
sd, err = winio.SddlToSecurityDescriptor(sddl)
if err != nil {
return nil, err
}
}
if sdraw, ok := hdr.PAXRecords[hdrRawSecurityDescriptor]; ok {
sd, err = base64.StdEncoding.DecodeString(sdraw)
sd, err := base64.StdEncoding.DecodeString(sdraw)
if err != nil {
// Not returning sd as-is in the error-case, as base64.DecodeString
// may return partially decoded data (not nil or empty slice) in case
// of a failure: https://github.com/golang/go/blob/go1.17.7/src/encoding/base64/base64.go#L382-L387
return nil, err
}
return sd, nil
}
return sd, nil
// Maintaining old SDDL-based behavior for backward compatibility. All new
// tar headers written by this library will have raw binary for the security
// descriptor.
if sddl, ok := hdr.PAXRecords[hdrSecurityDescriptor]; ok {
return winio.SddlToSecurityDescriptor(sddl)
}
return nil, nil
}
// ExtendedAttributesFromTarHeader reads the EAs associated with the header of the
// current file from the tar header and returns it as a byte slice.
func ExtendedAttributesFromTarHeader(hdr *tar.Header) ([]byte, error) {
var eas []winio.ExtendedAttribute
var eadata []byte
var err error
var eas []winio.ExtendedAttribute //nolint:prealloc // len(eas) <= len(hdr.PAXRecords); prealloc is wasteful
for k, v := range hdr.PAXRecords {
if !strings.HasPrefix(k, hdrEaPrefix) {
continue
@@ -155,13 +153,15 @@ func ExtendedAttributesFromTarHeader(hdr *tar.Header) ([]byte, error) {
Value: data,
})
}
var eaData []byte
var err error
if len(eas) != 0 {
eadata, err = winio.EncodeExtendedAttributes(eas)
eaData, err = winio.EncodeExtendedAttributes(eas)
if err != nil {
return nil, err
}
}
return eadata, nil
return eaData, nil
}
// EncodeReparsePointFromTarHeader reads the ReparsePoint structure from the tar header
@@ -182,11 +182,9 @@ func EncodeReparsePointFromTarHeader(hdr *tar.Header) []byte {
//
// The additional Win32 metadata is:
//
// MSWINDOWS.fileattr: The Win32 file attributes, as a decimal value
//
// MSWINDOWS.rawsd: The Win32 security descriptor, in raw binary format
//
// MSWINDOWS.mountpoint: If present, this is a mount point and not a symlink, even though the type is '2' (symlink)
// - MSWINDOWS.fileattr: The Win32 file attributes, as a decimal value
// - MSWINDOWS.rawsd: The Win32 security descriptor, in raw binary format
// - MSWINDOWS.mountpoint: If present, this is a mount point and not a symlink, even though the type is '2' (symlink)
func WriteTarFileFromBackupStream(t *tar.Writer, r io.Reader, name string, size int64, fileInfo *winio.FileBasicInfo) error {
name = filepath.ToSlash(name)
hdr := BasicInfoHeader(name, size, fileInfo)
@@ -209,7 +207,7 @@ func WriteTarFileFromBackupStream(t *tar.Writer, r io.Reader, name string, size
var dataHdr *winio.BackupHeader
for dataHdr == nil {
bhdr, err := br.Next()
if err == io.EOF {
if err == io.EOF { //nolint:errorlint
break
}
if err != nil {
@@ -217,21 +215,21 @@ func WriteTarFileFromBackupStream(t *tar.Writer, r io.Reader, name string, size
}
switch bhdr.Id {
case winio.BackupData:
hdr.Mode |= c_ISREG
hdr.Mode |= cISREG
if !readTwice {
dataHdr = bhdr
}
case winio.BackupSecurity:
sd, err := ioutil.ReadAll(br)
sd, err := io.ReadAll(br)
if err != nil {
return err
}
hdr.PAXRecords[hdrRawSecurityDescriptor] = base64.StdEncoding.EncodeToString(sd)
case winio.BackupReparseData:
hdr.Mode |= c_ISLNK
hdr.Mode |= cISLNK
hdr.Typeflag = tar.TypeSymlink
reparseBuffer, err := ioutil.ReadAll(br)
reparseBuffer, _ := io.ReadAll(br)
rp, err := winio.DecodeReparsePoint(reparseBuffer)
if err != nil {
return err
@@ -242,7 +240,7 @@ func WriteTarFileFromBackupStream(t *tar.Writer, r io.Reader, name string, size
hdr.Linkname = rp.Target
case winio.BackupEaData:
eab, err := ioutil.ReadAll(br)
eab, err := io.ReadAll(br)
if err != nil {
return err
}
@@ -276,7 +274,7 @@ func WriteTarFileFromBackupStream(t *tar.Writer, r io.Reader, name string, size
}
for dataHdr == nil {
bhdr, err := br.Next()
if err == io.EOF {
if err == io.EOF { //nolint:errorlint
break
}
if err != nil {
@@ -311,7 +309,7 @@ func WriteTarFileFromBackupStream(t *tar.Writer, r io.Reader, name string, size
// range of the file containing the range contents. Finally there is a sparse block stream with
// size = 0 and offset = <file size>.
if dataHdr != nil {
if dataHdr != nil { //nolint:nestif // todo: reduce nesting complexity
// A data stream was found. Copy the data.
// We assume that we will either have a data stream size > 0 XOR have sparse block streams.
if dataHdr.Size > 0 || (dataHdr.Attributes&winio.StreamSparseAttributes) == 0 {
@@ -319,13 +317,13 @@ func WriteTarFileFromBackupStream(t *tar.Writer, r io.Reader, name string, size
return fmt.Errorf("%s: mismatch between file size %d and header size %d", name, size, dataHdr.Size)
}
if _, err = io.Copy(t, br); err != nil {
return fmt.Errorf("%s: copying contents from data stream: %s", name, err)
return fmt.Errorf("%s: copying contents from data stream: %w", name, err)
}
} else if size > 0 {
// As of a recent OS change, BackupRead now returns a data stream for empty sparse files.
// These files have no sparse block streams, so skip the copySparse call if file size = 0.
if err = copySparse(t, br); err != nil {
return fmt.Errorf("%s: copying contents from sparse block stream: %s", name, err)
return fmt.Errorf("%s: copying contents from sparse block stream: %w", name, err)
}
}
}
@@ -335,7 +333,7 @@ func WriteTarFileFromBackupStream(t *tar.Writer, r io.Reader, name string, size
// been written. In practice, this means that we don't get EA or TXF metadata.
for {
bhdr, err := br.Next()
if err == io.EOF {
if err == io.EOF { //nolint:errorlint
break
}
if err != nil {
@@ -343,35 +341,30 @@ func WriteTarFileFromBackupStream(t *tar.Writer, r io.Reader, name string, size
}
switch bhdr.Id {
case winio.BackupAlternateData:
altName := bhdr.Name
if strings.HasSuffix(altName, ":$DATA") {
altName = altName[:len(altName)-len(":$DATA")]
}
if (bhdr.Attributes & winio.StreamSparseAttributes) == 0 {
hdr = &tar.Header{
Format: hdr.Format,
Name: name + altName,
Mode: hdr.Mode,
Typeflag: tar.TypeReg,
Size: bhdr.Size,
ModTime: hdr.ModTime,
AccessTime: hdr.AccessTime,
ChangeTime: hdr.ChangeTime,
}
err = t.WriteHeader(hdr)
if err != nil {
return err
}
_, err = io.Copy(t, br)
if err != nil {
return err
}
} else {
if (bhdr.Attributes & winio.StreamSparseAttributes) != 0 {
// Unsupported for now, since the size of the alternate stream is not present
// in the backup stream until after the data has been read.
return fmt.Errorf("%s: tar of sparse alternate data streams is unsupported", name)
}
altName := strings.TrimSuffix(bhdr.Name, ":$DATA")
hdr = &tar.Header{
Format: hdr.Format,
Name: name + altName,
Mode: hdr.Mode,
Typeflag: tar.TypeReg,
Size: bhdr.Size,
ModTime: hdr.ModTime,
AccessTime: hdr.AccessTime,
ChangeTime: hdr.ChangeTime,
}
err = t.WriteHeader(hdr)
if err != nil {
return err
}
_, err = io.Copy(t, br)
if err != nil {
return err
}
case winio.BackupEaData, winio.BackupLink, winio.BackupPropertyData, winio.BackupObjectId, winio.BackupTxfsData:
// ignore these streams
default:
@@ -413,7 +406,7 @@ func FileInfoFromHeader(hdr *tar.Header) (name string, size int64, fileInfo *win
}
fileInfo.CreationTime = windows.NsecToFiletime(creationTime.UnixNano())
}
return
return name, size, fileInfo, err
}
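
As an orientation aid (this is an editor-added sketch, not part of the diff), the backuptar helpers changed above are typically used roughly like this; the file paths are assumptions for illustration, and reading security metadata may additionally require backup privileges on a real system:

```go
//go:build windows

package main

import (
	"archive/tar"
	"log"
	"os"

	"github.com/Microsoft/go-winio"
	"github.com/Microsoft/go-winio/backuptar"
)

func main() {
	src, err := os.Open(`C:\temp\example.txt`) // assumed example path
	if err != nil {
		log.Fatal(err)
	}
	defer src.Close()

	bi, err := winio.GetFileBasicInfo(src) // times and attributes for the tar header
	if err != nil {
		log.Fatal(err)
	}
	fi, err := src.Stat()
	if err != nil {
		log.Fatal(err)
	}

	out, err := os.Create(`C:\temp\example.tar`) // assumed example path
	if err != nil {
		log.Fatal(err)
	}
	defer out.Close()
	tw := tar.NewWriter(out)
	defer tw.Close()

	// BackupRead-style stream of the file, including security metadata.
	br := winio.NewBackupFileReader(src, true)
	defer br.Close()

	if err := backuptar.WriteTarFileFromBackupStream(tw, br, "example.txt", fi.Size(), bi); err != nil {
		log.Fatal(err)
	}
}
```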
// WriteBackupStreamFromTarFile writes a Win32 backup stream from the current tar file. Since this function may process multiple
@@ -474,7 +467,6 @@ func WriteBackupStreamFromTarFile(w io.Writer, t *tar.Reader, hdr *tar.Header) (
if err != nil {
return nil, err
}
}
if hdr.Typeflag == tar.TypeReg || hdr.Typeflag == tar.TypeRegA {

vendor/github.com/Microsoft/go-winio/doc.go generated vendored Normal file

@@ -0,0 +1,22 @@
// This package provides utilities for efficiently performing Win32 IO operations in Go.
// Currently, this package provides support for general IO and management of
// - named pipes
// - files
// - [Hyper-V sockets]
//
// This code is similar to Go's [net] package, and uses IO completion ports to avoid
// blocking IO on system threads, allowing Go to reuse the thread to schedule other goroutines.
//
// This limits support to Windows Vista and newer operating systems.
//
// Additionally, this package provides support for:
// - creating and managing GUIDs
// - writing to [ETW]
// - opening and managing VHDs
// - parsing [Windows Image files]
// - auto-generating Win32 API code
//
// [Hyper-V sockets]: https://docs.microsoft.com/en-us/virtualization/hyper-v-on-windows/user-guide/make-integration-service
// [ETW]: https://docs.microsoft.com/en-us/windows-hardware/drivers/devtest/event-tracing-for-windows--etw-
// [Windows Image files]: https://docs.microsoft.com/en-us/windows-hardware/manufacture/desktop/work-with-windows-images
package winio
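
As a quick, editor-added illustration of the API surface this new doc.go describes (not part of the diff; the pipe name is an assumption), a minimal named-pipe round trip with this package looks roughly like:

```go
//go:build windows

package main

import (
	"fmt"
	"log"
	"time"

	"github.com/Microsoft/go-winio"
)

func main() {
	const pipePath = `\\.\pipe\winio-demo` // assumed example pipe name

	l, err := winio.ListenPipe(pipePath, nil)
	if err != nil {
		log.Fatal(err)
	}
	defer l.Close()

	go func() {
		c, err := l.Accept()
		if err != nil {
			return
		}
		defer c.Close()
		_, _ = c.Write([]byte("hello from the server\n"))
	}()

	timeout := 2 * time.Second
	conn, err := winio.DialPipe(pipePath, &timeout)
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	buf := make([]byte, 64)
	n, _ := conn.Read(buf)
	fmt.Printf("%s", buf[:n])
}
```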


@@ -33,7 +33,7 @@ func parseEa(b []byte) (ea ExtendedAttribute, nb []byte, err error) {
err = binary.Read(bytes.NewReader(b), binary.LittleEndian, &info)
if err != nil {
err = errInvalidEaBuffer
return
return ea, nb, err
}
nameOffset := fileFullEaInformationSize
@@ -43,7 +43,7 @@ func parseEa(b []byte) (ea ExtendedAttribute, nb []byte, err error) {
nextOffset := int(info.NextEntryOffset)
if valueLen+valueOffset > len(b) || nextOffset < 0 || nextOffset > len(b) {
err = errInvalidEaBuffer
return
return ea, nb, err
}
ea.Name = string(b[nameOffset : nameOffset+nameLen])
@@ -52,7 +52,7 @@ func parseEa(b []byte) (ea ExtendedAttribute, nb []byte, err error) {
if info.NextEntryOffset != 0 {
nb = b[info.NextEntryOffset:]
}
return
return ea, nb, err
}
// DecodeExtendedAttributes decodes a list of EAs from a FILE_FULL_EA_INFORMATION
@@ -67,7 +67,7 @@ func DecodeExtendedAttributes(b []byte) (eas []ExtendedAttribute, err error) {
eas = append(eas, ea)
b = nb
}
return
return eas, err
}
func writeEa(buf *bytes.Buffer, ea *ExtendedAttribute, last bool) error {


@@ -11,6 +11,8 @@ import (
"sync/atomic"
"syscall"
"time"
"golang.org/x/sys/windows"
)
//sys cancelIoEx(file syscall.Handle, o *syscall.Overlapped) (err error) = CancelIoEx
@@ -24,6 +26,8 @@ type atomicBool int32
func (b *atomicBool) isSet() bool { return atomic.LoadInt32((*int32)(b)) != 0 }
func (b *atomicBool) setFalse() { atomic.StoreInt32((*int32)(b), 0) }
func (b *atomicBool) setTrue() { atomic.StoreInt32((*int32)(b), 1) }
//revive:disable-next-line:predeclared Keep "new" to maintain consistency with "atomic" pkg
func (b *atomicBool) swap(new bool) bool {
var newInt int32
if new {
@@ -32,11 +36,6 @@ func (b *atomicBool) swap(new bool) bool {
return atomic.SwapInt32((*int32)(b), newInt) == 1
}
const (
cFILE_SKIP_COMPLETION_PORT_ON_SUCCESS = 1
cFILE_SKIP_SET_EVENT_ON_HANDLE = 2
)
var (
ErrFileClosed = errors.New("file has already been closed")
ErrTimeout = &timeoutError{}
@@ -44,28 +43,28 @@ var (
type timeoutError struct{}
func (e *timeoutError) Error() string { return "i/o timeout" }
func (e *timeoutError) Timeout() bool { return true }
func (e *timeoutError) Temporary() bool { return true }
func (*timeoutError) Error() string { return "i/o timeout" }
func (*timeoutError) Timeout() bool { return true }
func (*timeoutError) Temporary() bool { return true }
type timeoutChan chan struct{}
var ioInitOnce sync.Once
var ioCompletionPort syscall.Handle
// ioResult contains the result of an asynchronous IO operation
// ioResult contains the result of an asynchronous IO operation.
type ioResult struct {
bytes uint32
err error
}
// ioOperation represents an outstanding asynchronous Win32 IO
// ioOperation represents an outstanding asynchronous Win32 IO.
type ioOperation struct {
o syscall.Overlapped
ch chan ioResult
}
func initIo() {
func initIO() {
h, err := createIoCompletionPort(syscall.InvalidHandle, 0, 0, 0xffffffff)
if err != nil {
panic(err)
@@ -94,15 +93,15 @@ type deadlineHandler struct {
timedout atomicBool
}
// makeWin32File makes a new win32File from an existing file handle
// makeWin32File makes a new win32File from an existing file handle.
func makeWin32File(h syscall.Handle) (*win32File, error) {
f := &win32File{handle: h}
ioInitOnce.Do(initIo)
ioInitOnce.Do(initIO)
_, err := createIoCompletionPort(h, ioCompletionPort, 0, 0xffffffff)
if err != nil {
return nil, err
}
err = setFileCompletionNotificationModes(h, cFILE_SKIP_COMPLETION_PORT_ON_SUCCESS|cFILE_SKIP_SET_EVENT_ON_HANDLE)
err = setFileCompletionNotificationModes(h, windows.FILE_SKIP_COMPLETION_PORT_ON_SUCCESS|windows.FILE_SKIP_SET_EVENT_ON_HANDLE)
if err != nil {
return nil, err
}
@@ -121,14 +120,14 @@ func MakeOpenFile(h syscall.Handle) (io.ReadWriteCloser, error) {
return f, nil
}
// closeHandle closes the resources associated with a Win32 handle
// closeHandle closes the resources associated with a Win32 handle.
func (f *win32File) closeHandle() {
f.wgLock.Lock()
// Atomically set that we are closing, releasing the resources only once.
if !f.closing.swap(true) {
f.wgLock.Unlock()
// cancel all IO and wait for it to complete
cancelIoEx(f.handle, nil)
_ = cancelIoEx(f.handle, nil)
f.wg.Wait()
// at this point, no new IO can start
syscall.Close(f.handle)
@@ -144,14 +143,14 @@ func (f *win32File) Close() error {
return nil
}
// IsClosed checks if the file has been closed
// IsClosed checks if the file has been closed.
func (f *win32File) IsClosed() bool {
return f.closing.isSet()
}
// prepareIo prepares for a new IO operation.
// prepareIO prepares for a new IO operation.
// The caller must call f.wg.Done() when the IO is finished, prior to Close() returning.
func (f *win32File) prepareIo() (*ioOperation, error) {
func (f *win32File) prepareIO() (*ioOperation, error) {
f.wgLock.RLock()
if f.closing.isSet() {
f.wgLock.RUnlock()
@@ -164,7 +163,7 @@ func (f *win32File) prepareIo() (*ioOperation, error) {
return c, nil
}
// ioCompletionProcessor processes completed async IOs forever
// ioCompletionProcessor processes completed async IOs forever.
func ioCompletionProcessor(h syscall.Handle) {
for {
var bytes uint32
@@ -178,15 +177,17 @@ func ioCompletionProcessor(h syscall.Handle) {
}
}
// asyncIo processes the return value from ReadFile or WriteFile, blocking until
// todo: helsaawy - create an asyncIO version that takes a context
// asyncIO processes the return value from ReadFile or WriteFile, blocking until
// the operation has actually completed.
func (f *win32File) asyncIo(c *ioOperation, d *deadlineHandler, bytes uint32, err error) (int, error) {
if err != syscall.ERROR_IO_PENDING {
func (f *win32File) asyncIO(c *ioOperation, d *deadlineHandler, bytes uint32, err error) (int, error) {
if err != syscall.ERROR_IO_PENDING { //nolint:errorlint // err is Errno
return int(bytes), err
}
if f.closing.isSet() {
cancelIoEx(f.handle, &c.o)
_ = cancelIoEx(f.handle, &c.o)
}
var timeout timeoutChan
@@ -200,7 +201,7 @@ func (f *win32File) asyncIo(c *ioOperation, d *deadlineHandler, bytes uint32, er
select {
case r = <-c.ch:
err = r.err
if err == syscall.ERROR_OPERATION_ABORTED {
if err == syscall.ERROR_OPERATION_ABORTED { //nolint:errorlint // err is Errno
if f.closing.isSet() {
err = ErrFileClosed
}
@@ -210,10 +211,10 @@ func (f *win32File) asyncIo(c *ioOperation, d *deadlineHandler, bytes uint32, er
err = wsaGetOverlappedResult(f.handle, &c.o, &bytes, false, &flags)
}
case <-timeout:
cancelIoEx(f.handle, &c.o)
_ = cancelIoEx(f.handle, &c.o)
r = <-c.ch
err = r.err
if err == syscall.ERROR_OPERATION_ABORTED {
if err == syscall.ERROR_OPERATION_ABORTED { //nolint:errorlint // err is Errno
err = ErrTimeout
}
}
@@ -221,13 +222,14 @@ func (f *win32File) asyncIo(c *ioOperation, d *deadlineHandler, bytes uint32, er
// runtime.KeepAlive is needed, as c is passed via native
// code to ioCompletionProcessor, c must remain alive
// until the channel read is complete.
// todo: (de)allocate *ioOperation via win32 heap functions, instead of needing to KeepAlive?
runtime.KeepAlive(c)
return int(r.bytes), err
}
// Read reads from a file handle.
func (f *win32File) Read(b []byte) (int, error) {
c, err := f.prepareIo()
c, err := f.prepareIO()
if err != nil {
return 0, err
}
@@ -239,13 +241,13 @@ func (f *win32File) Read(b []byte) (int, error) {
var bytes uint32
err = syscall.ReadFile(f.handle, b, &bytes, &c.o)
n, err := f.asyncIo(c, &f.readDeadline, bytes, err)
n, err := f.asyncIO(c, &f.readDeadline, bytes, err)
runtime.KeepAlive(b)
// Handle EOF conditions.
if err == nil && n == 0 && len(b) != 0 {
return 0, io.EOF
} else if err == syscall.ERROR_BROKEN_PIPE {
} else if err == syscall.ERROR_BROKEN_PIPE { //nolint:errorlint // err is Errno
return 0, io.EOF
} else {
return n, err
@@ -254,7 +256,7 @@ func (f *win32File) Read(b []byte) (int, error) {
// Write writes to a file handle.
func (f *win32File) Write(b []byte) (int, error) {
c, err := f.prepareIo()
c, err := f.prepareIO()
if err != nil {
return 0, err
}
@@ -266,7 +268,7 @@ func (f *win32File) Write(b []byte) (int, error) {
var bytes uint32
err = syscall.WriteFile(f.handle, b, &bytes, &c.o)
n, err := f.asyncIo(c, &f.writeDeadline, bytes, err)
n, err := f.asyncIO(c, &f.writeDeadline, bytes, err)
runtime.KeepAlive(b)
return n, err
}


@@ -1,3 +1,4 @@
//go:build windows
// +build windows
package winio
@@ -14,13 +15,18 @@ import (
type FileBasicInfo struct {
CreationTime, LastAccessTime, LastWriteTime, ChangeTime windows.Filetime
FileAttributes uint32
pad uint32 // padding
_ uint32 // padding
}
// GetFileBasicInfo retrieves times and attributes for a file.
func GetFileBasicInfo(f *os.File) (*FileBasicInfo, error) {
bi := &FileBasicInfo{}
if err := windows.GetFileInformationByHandleEx(windows.Handle(f.Fd()), windows.FileBasicInfo, (*byte)(unsafe.Pointer(bi)), uint32(unsafe.Sizeof(*bi))); err != nil {
if err := windows.GetFileInformationByHandleEx(
windows.Handle(f.Fd()),
windows.FileBasicInfo,
(*byte)(unsafe.Pointer(bi)),
uint32(unsafe.Sizeof(*bi)),
); err != nil {
return nil, &os.PathError{Op: "GetFileInformationByHandleEx", Path: f.Name(), Err: err}
}
runtime.KeepAlive(f)
@@ -29,7 +35,12 @@ func GetFileBasicInfo(f *os.File) (*FileBasicInfo, error) {
// SetFileBasicInfo sets times and attributes for a file.
func SetFileBasicInfo(f *os.File, bi *FileBasicInfo) error {
if err := windows.SetFileInformationByHandle(windows.Handle(f.Fd()), windows.FileBasicInfo, (*byte)(unsafe.Pointer(bi)), uint32(unsafe.Sizeof(*bi))); err != nil {
if err := windows.SetFileInformationByHandle(
windows.Handle(f.Fd()),
windows.FileBasicInfo,
(*byte)(unsafe.Pointer(bi)),
uint32(unsafe.Sizeof(*bi)),
); err != nil {
return &os.PathError{Op: "SetFileInformationByHandle", Path: f.Name(), Err: err}
}
runtime.KeepAlive(f)
@@ -48,7 +59,10 @@ type FileStandardInfo struct {
// GetFileStandardInfo retrieves the standard information for the file.
func GetFileStandardInfo(f *os.File) (*FileStandardInfo, error) {
si := &FileStandardInfo{}
if err := windows.GetFileInformationByHandleEx(windows.Handle(f.Fd()), windows.FileStandardInfo, (*byte)(unsafe.Pointer(si)), uint32(unsafe.Sizeof(*si))); err != nil {
if err := windows.GetFileInformationByHandleEx(windows.Handle(f.Fd()),
windows.FileStandardInfo,
(*byte)(unsafe.Pointer(si)),
uint32(unsafe.Sizeof(*si))); err != nil {
return nil, &os.PathError{Op: "GetFileInformationByHandleEx", Path: f.Name(), Err: err}
}
runtime.KeepAlive(f)
@@ -65,7 +79,12 @@ type FileIDInfo struct {
// GetFileID retrieves the unique (volume, file ID) pair for a file.
func GetFileID(f *os.File) (*FileIDInfo, error) {
fileID := &FileIDInfo{}
if err := windows.GetFileInformationByHandleEx(windows.Handle(f.Fd()), windows.FileIdInfo, (*byte)(unsafe.Pointer(fileID)), uint32(unsafe.Sizeof(*fileID))); err != nil {
if err := windows.GetFileInformationByHandleEx(
windows.Handle(f.Fd()),
windows.FileIdInfo,
(*byte)(unsafe.Pointer(fileID)),
uint32(unsafe.Sizeof(*fileID)),
); err != nil {
return nil, &os.PathError{Op: "GetFileInformationByHandleEx", Path: f.Name(), Err: err}
}
runtime.KeepAlive(f)
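
For context, the file-information helpers reformatted above are consumed roughly as in this editor-added sketch (not part of the diff; the path is an assumption):

```go
//go:build windows

package main

import (
	"fmt"
	"log"
	"os"

	"github.com/Microsoft/go-winio"
)

func main() {
	f, err := os.Open(`C:\Windows\notepad.exe`) // assumed example path
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	bi, err := winio.GetFileBasicInfo(f)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("attributes:", bi.FileAttributes)

	id, err := winio.GetFileID(f)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("volume serial: %d, file id: %v\n", id.VolumeSerialNumber, id.FileID)
}
```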


@@ -4,6 +4,8 @@
package winio
import (
"context"
"errors"
"fmt"
"io"
"net"
@@ -12,16 +14,87 @@ import (
"time"
"unsafe"
"golang.org/x/sys/windows"
"github.com/Microsoft/go-winio/internal/socket"
"github.com/Microsoft/go-winio/pkg/guid"
)
//sys bind(s syscall.Handle, name unsafe.Pointer, namelen int32) (err error) [failretval==socketError] = ws2_32.bind
const afHVSock = 34 // AF_HYPERV
const (
afHvSock = 34 // AF_HYPERV
// Well known Service and VM IDs
//https://docs.microsoft.com/en-us/virtualization/hyper-v-on-windows/user-guide/make-integration-service#vmid-wildcards
socketError = ^uintptr(0)
)
// HvsockGUIDWildcard is the wildcard VmId for accepting connections from all partitions.
func HvsockGUIDWildcard() guid.GUID { // 00000000-0000-0000-0000-000000000000
return guid.GUID{}
}
// HvsockGUIDBroadcast is the wildcard VmId for broadcasting sends to all partitions.
func HvsockGUIDBroadcast() guid.GUID { //ffffffff-ffff-ffff-ffff-ffffffffffff
return guid.GUID{
Data1: 0xffffffff,
Data2: 0xffff,
Data3: 0xffff,
Data4: [8]uint8{0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
}
}
// HvsockGUIDLoopback is the Loopback VmId for accepting connections to the same partition as the connector.
func HvsockGUIDLoopback() guid.GUID { // e0e16197-dd56-4a10-9195-5ee7a155a838
return guid.GUID{
Data1: 0xe0e16197,
Data2: 0xdd56,
Data3: 0x4a10,
Data4: [8]uint8{0x91, 0x95, 0x5e, 0xe7, 0xa1, 0x55, 0xa8, 0x38},
}
}
// HvsockGUIDSiloHost is the address of a silo's host partition:
// - The silo host of a hosted silo is the utility VM.
// - The silo host of a silo on a physical host is the physical host.
func HvsockGUIDSiloHost() guid.GUID { // 36bd0c5c-7276-4223-88ba-7d03b654c568
return guid.GUID{
Data1: 0x36bd0c5c,
Data2: 0x7276,
Data3: 0x4223,
Data4: [8]byte{0x88, 0xba, 0x7d, 0x03, 0xb6, 0x54, 0xc5, 0x68},
}
}
// HvsockGUIDChildren is the wildcard VmId for accepting connections from the connector's child partitions.
func HvsockGUIDChildren() guid.GUID { // 90db8b89-0d35-4f79-8ce9-49ea0ac8b7cd
return guid.GUID{
Data1: 0x90db8b89,
Data2: 0xd35,
Data3: 0x4f79,
Data4: [8]uint8{0x8c, 0xe9, 0x49, 0xea, 0xa, 0xc8, 0xb7, 0xcd},
}
}
// HvsockGUIDParent is the wildcard VmId for accepting connections from the connector's parent partition.
// Listening on this VmId accepts connection from:
// - Inside silos: silo host partition.
// - Inside hosted silo: host of the VM.
// - Inside VM: VM host.
// - Physical host: Not supported.
func HvsockGUIDParent() guid.GUID { // a42e7cda-d03f-480c-9cc2-a4de20abb878
return guid.GUID{
Data1: 0xa42e7cda,
Data2: 0xd03f,
Data3: 0x480c,
Data4: [8]uint8{0x9c, 0xc2, 0xa4, 0xde, 0x20, 0xab, 0xb8, 0x78},
}
}
// hvsockVsockServiceTemplate is the Service GUID used for the VSOCK protocol.
func hvsockVsockServiceTemplate() guid.GUID { // 00000000-facb-11e6-bd58-64006a7986d3
return guid.GUID{
Data2: 0xfacb,
Data3: 0x11e6,
Data4: [8]uint8{0xbd, 0x58, 0x64, 0x00, 0x6a, 0x79, 0x86, 0xd3},
}
}
// An HvsockAddr is an address for a AF_HYPERV socket.
type HvsockAddr struct {
@@ -36,8 +109,10 @@ type rawHvsockAddr struct {
ServiceID guid.GUID
}
var _ socket.RawSockaddr = &rawHvsockAddr{}
// Network returns the address's network name, "hvsock".
func (addr *HvsockAddr) Network() string {
func (*HvsockAddr) Network() string {
return "hvsock"
}
@@ -47,14 +122,14 @@ func (addr *HvsockAddr) String() string {
// VsockServiceID returns an hvsock service ID corresponding to the specified AF_VSOCK port.
func VsockServiceID(port uint32) guid.GUID {
g, _ := guid.FromString("00000000-facb-11e6-bd58-64006a7986d3")
g := hvsockVsockServiceTemplate() // make a copy
g.Data1 = port
return g
}
func (addr *HvsockAddr) raw() rawHvsockAddr {
return rawHvsockAddr{
Family: afHvSock,
Family: afHVSock,
VMID: addr.VMID,
ServiceID: addr.ServiceID,
}
@@ -65,20 +140,48 @@ func (addr *HvsockAddr) fromRaw(raw *rawHvsockAddr) {
addr.ServiceID = raw.ServiceID
}
// Sockaddr returns a pointer to and the size of this struct.
//
// Implements the [socket.RawSockaddr] interface, and allows use in
// [socket.Bind] and [socket.ConnectEx].
func (r *rawHvsockAddr) Sockaddr() (unsafe.Pointer, int32, error) {
return unsafe.Pointer(r), int32(unsafe.Sizeof(rawHvsockAddr{})), nil
}
// Sockaddr interface allows use with `sockets.Bind()` and `.ConnectEx()`.
func (r *rawHvsockAddr) FromBytes(b []byte) error {
n := int(unsafe.Sizeof(rawHvsockAddr{}))
if len(b) < n {
return fmt.Errorf("got %d, want %d: %w", len(b), n, socket.ErrBufferSize)
}
copy(unsafe.Slice((*byte)(unsafe.Pointer(r)), n), b[:n])
if r.Family != afHVSock {
return fmt.Errorf("got %d, want %d: %w", r.Family, afHVSock, socket.ErrAddrFamily)
}
return nil
}
// HvsockListener is a socket listener for the AF_HYPERV address family.
type HvsockListener struct {
sock *win32File
addr HvsockAddr
}
var _ net.Listener = &HvsockListener{}
// HvsockConn is a connected socket of the AF_HYPERV address family.
type HvsockConn struct {
sock *win32File
local, remote HvsockAddr
}
func newHvSocket() (*win32File, error) {
fd, err := syscall.Socket(afHvSock, syscall.SOCK_STREAM, 1)
var _ net.Conn = &HvsockConn{}
func newHVSocket() (*win32File, error) {
fd, err := syscall.Socket(afHVSock, syscall.SOCK_STREAM, 1)
if err != nil {
return nil, os.NewSyscallError("socket", err)
}
@@ -94,12 +197,12 @@ func newHvSocket() (*win32File, error) {
// ListenHvsock listens for connections on the specified hvsock address.
func ListenHvsock(addr *HvsockAddr) (_ *HvsockListener, err error) {
l := &HvsockListener{addr: *addr}
sock, err := newHvSocket()
sock, err := newHVSocket()
if err != nil {
return nil, l.opErr("listen", err)
}
sa := addr.raw()
err = bind(sock.handle, unsafe.Pointer(&sa), int32(unsafe.Sizeof(sa)))
err = socket.Bind(windows.Handle(sock.handle), &sa)
if err != nil {
return nil, l.opErr("listen", os.NewSyscallError("socket", err))
}
@@ -121,7 +224,7 @@ func (l *HvsockListener) Addr() net.Addr {
// Accept waits for the next connection and returns it.
func (l *HvsockListener) Accept() (_ net.Conn, err error) {
sock, err := newHvSocket()
sock, err := newHVSocket()
if err != nil {
return nil, l.opErr("accept", err)
}
@@ -130,27 +233,42 @@ func (l *HvsockListener) Accept() (_ net.Conn, err error) {
sock.Close()
}
}()
c, err := l.sock.prepareIo()
c, err := l.sock.prepareIO()
if err != nil {
return nil, l.opErr("accept", err)
}
defer l.sock.wg.Done()
// AcceptEx, per documentation, requires an extra 16 bytes per address.
//
// https://docs.microsoft.com/en-us/windows/win32/api/mswsock/nf-mswsock-acceptex
const addrlen = uint32(16 + unsafe.Sizeof(rawHvsockAddr{}))
var addrbuf [addrlen * 2]byte
var bytes uint32
err = syscall.AcceptEx(l.sock.handle, sock.handle, &addrbuf[0], 0, addrlen, addrlen, &bytes, &c.o)
_, err = l.sock.asyncIo(c, nil, bytes, err)
if err != nil {
err = syscall.AcceptEx(l.sock.handle, sock.handle, &addrbuf[0], 0 /*rxdatalen*/, addrlen, addrlen, &bytes, &c.o)
if _, err = l.sock.asyncIO(c, nil, bytes, err); err != nil {
return nil, l.opErr("accept", os.NewSyscallError("acceptex", err))
}
conn := &HvsockConn{
sock: sock,
}
// The local address returned in the AcceptEx buffer is the same as the Listener socket's
// address. However, the service GUID reported by GetSockName is different from the Listener's
// socket, and is sometimes the same as the local address of the socket that dialed the
// address, with the service GUID.Data1 incremented, but other times is different.
// todo: does the local address matter? is the listener's address or the actual address appropriate?
conn.local.fromRaw((*rawHvsockAddr)(unsafe.Pointer(&addrbuf[0])))
conn.remote.fromRaw((*rawHvsockAddr)(unsafe.Pointer(&addrbuf[addrlen])))
// initialize the accepted socket and update its properties with those of the listening socket
if err = windows.Setsockopt(windows.Handle(sock.handle),
windows.SOL_SOCKET, windows.SO_UPDATE_ACCEPT_CONTEXT,
(*byte)(unsafe.Pointer(&l.sock.handle)), int32(unsafe.Sizeof(l.sock.handle))); err != nil {
return nil, conn.opErr("accept", os.NewSyscallError("setsockopt", err))
}
sock = nil
return conn, nil
}
@@ -160,43 +278,171 @@ func (l *HvsockListener) Close() error {
return l.sock.Close()
}
/* Need to finish ConnectEx handling
func DialHvsock(ctx context.Context, addr *HvsockAddr) (*HvsockConn, error) {
sock, err := newHvSocket()
// HvsockDialer configures and dials a Hyper-V Socket (ie, [HvsockConn]).
type HvsockDialer struct {
// Deadline is the time the Dial operation must connect before erroring.
Deadline time.Time
// Retries is the number of additional connects to try if the connection times out, is refused,
// or the host is unreachable
Retries uint
// RetryWait is the time to wait after a connection error to retry
RetryWait time.Duration
rt *time.Timer // redial wait timer
}
// Dial the Hyper-V socket at addr.
//
// See [HvsockDialer.Dial] for more information.
func Dial(ctx context.Context, addr *HvsockAddr) (conn *HvsockConn, err error) {
return (&HvsockDialer{}).Dial(ctx, addr)
}
// Dial attempts to connect to the Hyper-V socket at addr, and returns a connection if successful.
// Will attempt (HvsockDialer).Retries if dialing fails, waiting (HvsockDialer).RetryWait between
// retries.
//
// Dialing can be cancelled either by providing (HvsockDialer).Deadline, or cancelling ctx.
func (d *HvsockDialer) Dial(ctx context.Context, addr *HvsockAddr) (conn *HvsockConn, err error) {
op := "dial"
// create the conn early to use opErr()
conn = &HvsockConn{
remote: *addr,
}
if !d.Deadline.IsZero() {
var cancel context.CancelFunc
ctx, cancel = context.WithDeadline(ctx, d.Deadline)
defer cancel()
}
// preemptive timeout/cancellation check
if err = ctx.Err(); err != nil {
return nil, conn.opErr(op, err)
}
sock, err := newHVSocket()
if err != nil {
return nil, err
return nil, conn.opErr(op, err)
}
defer func() {
if sock != nil {
sock.Close()
}
}()
c, err := sock.prepareIo()
sa := addr.raw()
err = socket.Bind(windows.Handle(sock.handle), &sa)
if err != nil {
return nil, err
return nil, conn.opErr(op, os.NewSyscallError("bind", err))
}
c, err := sock.prepareIO()
if err != nil {
return nil, conn.opErr(op, err)
}
defer sock.wg.Done()
var bytes uint32
err = windows.ConnectEx(windows.Handle(sock.handle), sa, nil, 0, &bytes, &c.o)
_, err = sock.asyncIo(ctx, c, nil, bytes, err)
for i := uint(0); i <= d.Retries; i++ {
err = socket.ConnectEx(
windows.Handle(sock.handle),
&sa,
nil, // sendBuf
0, // sendDataLen
&bytes,
(*windows.Overlapped)(unsafe.Pointer(&c.o)))
_, err = sock.asyncIO(c, nil, bytes, err)
if i < d.Retries && canRedial(err) {
if err = d.redialWait(ctx); err == nil {
continue
}
}
break
}
if err != nil {
return nil, err
return nil, conn.opErr(op, os.NewSyscallError("connectex", err))
}
conn := &HvsockConn{
sock: sock,
remote: *addr,
// update the connection properties, so shutdown can be used
if err = windows.Setsockopt(
windows.Handle(sock.handle),
windows.SOL_SOCKET,
windows.SO_UPDATE_CONNECT_CONTEXT,
nil, // optvalue
0, // optlen
); err != nil {
return nil, conn.opErr(op, os.NewSyscallError("setsockopt", err))
}
// get the local name
var sal rawHvsockAddr
err = socket.GetSockName(windows.Handle(sock.handle), &sal)
if err != nil {
return nil, conn.opErr(op, os.NewSyscallError("getsockname", err))
}
conn.local.fromRaw(&sal)
// one last check for timeout, since asyncIO doesn't check the context
if err = ctx.Err(); err != nil {
return nil, conn.opErr(op, err)
}
conn.sock = sock
sock = nil
return conn, nil
}
*/
// redialWait waits before attempting to redial, resetting the timer as appropriate.
func (d *HvsockDialer) redialWait(ctx context.Context) (err error) {
if d.RetryWait == 0 {
return nil
}
if d.rt == nil {
d.rt = time.NewTimer(d.RetryWait)
} else {
// should already be stopped and drained
d.rt.Reset(d.RetryWait)
}
select {
case <-ctx.Done():
case <-d.rt.C:
return nil
}
// stop and drain the timer
if !d.rt.Stop() {
<-d.rt.C
}
return ctx.Err()
}
// assumes error is a plain, unwrapped syscall.Errno provided by direct syscall.
func canRedial(err error) bool {
//nolint:errorlint // guaranteed to be an Errno
switch err {
case windows.WSAECONNREFUSED, windows.WSAENETUNREACH, windows.WSAETIMEDOUT,
windows.ERROR_CONNECTION_REFUSED, windows.ERROR_CONNECTION_UNAVAIL:
return true
default:
return false
}
}
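
To show how the new dialer, GUID helpers, and retry logic introduced above fit together, here is an editor-added sketch (not part of the diff); the AF_VSOCK port 5000 and the timeouts are assumptions:

```go
//go:build windows

package main

import (
	"context"
	"log"
	"time"

	"github.com/Microsoft/go-winio"
)

func main() {
	// VsockServiceID maps an AF_VSOCK port (5000 here is only an example) to a service GUID.
	addr := &winio.HvsockAddr{
		VMID:      winio.HvsockGUIDLoopback(),
		ServiceID: winio.VsockServiceID(5000),
	}

	d := &winio.HvsockDialer{
		Retries:   2,
		RetryWait: time.Second,
	}

	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	conn, err := d.Dial(ctx, addr)
	if err != nil {
		log.Fatalf("hvsock dial: %v", err)
	}
	defer conn.Close()
	log.Println("connected to", conn.RemoteAddr())
}
```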
func (conn *HvsockConn) opErr(op string, err error) error {
// translate from "file closed" to "socket closed"
if errors.Is(err, ErrFileClosed) {
err = socket.ErrSocketClosed
}
return &net.OpError{Op: op, Net: "hvsock", Source: &conn.local, Addr: &conn.remote, Err: err}
}
func (conn *HvsockConn) Read(b []byte) (int, error) {
c, err := conn.sock.prepareIo()
c, err := conn.sock.prepareIO()
if err != nil {
return 0, conn.opErr("read", err)
}
@@ -204,10 +450,11 @@ func (conn *HvsockConn) Read(b []byte) (int, error) {
buf := syscall.WSABuf{Buf: &b[0], Len: uint32(len(b))}
var flags, bytes uint32
err = syscall.WSARecv(conn.sock.handle, &buf, 1, &bytes, &flags, &c.o, nil)
n, err := conn.sock.asyncIo(c, &conn.sock.readDeadline, bytes, err)
n, err := conn.sock.asyncIO(c, &conn.sock.readDeadline, bytes, err)
if err != nil {
if _, ok := err.(syscall.Errno); ok {
err = os.NewSyscallError("wsarecv", err)
var eno windows.Errno
if errors.As(err, &eno) {
err = os.NewSyscallError("wsarecv", eno)
}
return 0, conn.opErr("read", err)
} else if n == 0 {
@@ -230,7 +477,7 @@ func (conn *HvsockConn) Write(b []byte) (int, error) {
}
func (conn *HvsockConn) write(b []byte) (int, error) {
c, err := conn.sock.prepareIo()
c, err := conn.sock.prepareIO()
if err != nil {
return 0, conn.opErr("write", err)
}
@@ -238,10 +485,11 @@ func (conn *HvsockConn) write(b []byte) (int, error) {
buf := syscall.WSABuf{Buf: &b[0], Len: uint32(len(b))}
var bytes uint32
err = syscall.WSASend(conn.sock.handle, &buf, 1, &bytes, 0, &c.o, nil)
n, err := conn.sock.asyncIo(c, &conn.sock.writeDeadline, bytes, err)
n, err := conn.sock.asyncIO(c, &conn.sock.writeDeadline, bytes, err)
if err != nil {
if _, ok := err.(syscall.Errno); ok {
err = os.NewSyscallError("wsasend", err)
var eno windows.Errno
if errors.As(err, &eno) {
err = os.NewSyscallError("wsasend", eno)
}
return 0, conn.opErr("write", err)
}
@@ -257,13 +505,19 @@ func (conn *HvsockConn) IsClosed() bool {
return conn.sock.IsClosed()
}
// shutdown disables sending or receiving on a socket.
func (conn *HvsockConn) shutdown(how int) error {
if conn.IsClosed() {
return ErrFileClosed
return socket.ErrSocketClosed
}
err := syscall.Shutdown(conn.sock.handle, how)
if err != nil {
// If the connection was closed, shutdowns fail with "not connected"
if errors.Is(err, windows.WSAENOTCONN) ||
errors.Is(err, windows.WSAESHUTDOWN) {
err = socket.ErrSocketClosed
}
return os.NewSyscallError("shutdown", err)
}
return nil
@@ -273,7 +527,7 @@ func (conn *HvsockConn) shutdown(how int) error {
func (conn *HvsockConn) CloseRead() error {
err := conn.shutdown(syscall.SHUT_RD)
if err != nil {
return conn.opErr("close", err)
return conn.opErr("closeread", err)
}
return nil
}
@@ -283,7 +537,7 @@ func (conn *HvsockConn) CloseRead() error {
func (conn *HvsockConn) CloseWrite() error {
err := conn.shutdown(syscall.SHUT_WR)
if err != nil {
return conn.opErr("close", err)
return conn.opErr("closewrite", err)
}
return nil
}
@@ -300,8 +554,13 @@ func (conn *HvsockConn) RemoteAddr() net.Addr {
// SetDeadline implements the net.Conn SetDeadline method.
func (conn *HvsockConn) SetDeadline(t time.Time) error {
conn.SetReadDeadline(t)
conn.SetWriteDeadline(t)
// todo: implement `SetDeadline` for `win32File`
if err := conn.SetReadDeadline(t); err != nil {
return fmt.Errorf("set read deadline: %w", err)
}
if err := conn.SetWriteDeadline(t); err != nil {
return fmt.Errorf("set write deadline: %w", err)
}
return nil
}


@@ -0,0 +1,20 @@
package socket
import (
"unsafe"
)
// RawSockaddr allows structs to be used with [Bind] and [ConnectEx]. The
// struct must meet the Win32 sockaddr requirements specified here:
// https://docs.microsoft.com/en-us/windows/win32/winsock/sockaddr-2
//
// Specifically, the struct size must be at least as large as an int16 (unsigned short)
// for the address family.
type RawSockaddr interface {
// Sockaddr returns a pointer to the RawSockaddr and its struct size, allowing
// for the RawSockaddr's data to be overwritten by syscalls (if necessary).
//
// It is the caller's responsibility to validate that the values are valid; invalid
// pointers or size can cause a panic.
Sockaddr() (unsafe.Pointer, int32, error)
}


@@ -0,0 +1,179 @@
//go:build windows
package socket
import (
"errors"
"fmt"
"net"
"sync"
"syscall"
"unsafe"
"github.com/Microsoft/go-winio/pkg/guid"
"golang.org/x/sys/windows"
)
//go:generate go run github.com/Microsoft/go-winio/tools/mkwinsyscall -output zsyscall_windows.go socket.go
//sys getsockname(s windows.Handle, name unsafe.Pointer, namelen *int32) (err error) [failretval==socketError] = ws2_32.getsockname
//sys getpeername(s windows.Handle, name unsafe.Pointer, namelen *int32) (err error) [failretval==socketError] = ws2_32.getpeername
//sys bind(s windows.Handle, name unsafe.Pointer, namelen int32) (err error) [failretval==socketError] = ws2_32.bind
const socketError = uintptr(^uint32(0))
var (
// todo(helsaawy): create custom error types to store the desired vs actual size and addr family?
ErrBufferSize = errors.New("buffer size")
ErrAddrFamily = errors.New("address family")
ErrInvalidPointer = errors.New("invalid pointer")
ErrSocketClosed = fmt.Errorf("socket closed: %w", net.ErrClosed)
)
// todo(helsaawy): replace these with generics, ie: GetSockName[S RawSockaddr](s windows.Handle) (S, error)
// GetSockName writes the local address of socket s to the [RawSockaddr] rsa.
// If rsa is not large enough, the [windows.WSAEFAULT] error is returned.
func GetSockName(s windows.Handle, rsa RawSockaddr) error {
ptr, l, err := rsa.Sockaddr()
if err != nil {
return fmt.Errorf("could not retrieve socket pointer and size: %w", err)
}
// although getsockname returns WSAEFAULT if the buffer is too small, it does not set
// &l to the correct size, so--apart from doubling the buffer repeatedly--there is no remedy
return getsockname(s, ptr, &l)
}
// GetPeerName returns the remote address the socket is connected to.
//
// See [GetSockName] for more information.
func GetPeerName(s windows.Handle, rsa RawSockaddr) error {
ptr, l, err := rsa.Sockaddr()
if err != nil {
return fmt.Errorf("could not retrieve socket pointer and size: %w", err)
}
return getpeername(s, ptr, &l)
}
func Bind(s windows.Handle, rsa RawSockaddr) (err error) {
ptr, l, err := rsa.Sockaddr()
if err != nil {
return fmt.Errorf("could not retrieve socket pointer and size: %w", err)
}
return bind(s, ptr, l)
}
// "golang.org/x/sys/windows".ConnectEx and .Bind only accept internal implementations of
// their sockaddr interface, so they cannot be used with HvsockAddr.
// Replicate functionality here from
// https://cs.opensource.google/go/x/sys/+/master:windows/syscall_windows.go
// The function pointers to `AcceptEx`, `ConnectEx` and `GetAcceptExSockaddrs` must be loaded at
// runtime via a WSAIoctl call:
// https://docs.microsoft.com/en-us/windows/win32/api/Mswsock/nc-mswsock-lpfn_connectex#remarks
type runtimeFunc struct {
id guid.GUID
once sync.Once
addr uintptr
err error
}
func (f *runtimeFunc) Load() error {
f.once.Do(func() {
var s windows.Handle
s, f.err = windows.Socket(windows.AF_INET, windows.SOCK_STREAM, windows.IPPROTO_TCP)
if f.err != nil {
return
}
defer windows.CloseHandle(s) //nolint:errcheck
var n uint32
f.err = windows.WSAIoctl(s,
windows.SIO_GET_EXTENSION_FUNCTION_POINTER,
(*byte)(unsafe.Pointer(&f.id)),
uint32(unsafe.Sizeof(f.id)),
(*byte)(unsafe.Pointer(&f.addr)),
uint32(unsafe.Sizeof(f.addr)),
&n,
nil, //overlapped
0, //completionRoutine
)
})
return f.err
}
var (
// todo: add `AcceptEx` and `GetAcceptExSockaddrs`
WSAID_CONNECTEX = guid.GUID{ //revive:disable-line:var-naming ALL_CAPS
Data1: 0x25a207b9,
Data2: 0xddf3,
Data3: 0x4660,
Data4: [8]byte{0x8e, 0xe9, 0x76, 0xe5, 0x8c, 0x74, 0x06, 0x3e},
}
connectExFunc = runtimeFunc{id: WSAID_CONNECTEX}
)
func ConnectEx(
fd windows.Handle,
rsa RawSockaddr,
sendBuf *byte,
sendDataLen uint32,
bytesSent *uint32,
overlapped *windows.Overlapped,
) error {
if err := connectExFunc.Load(); err != nil {
return fmt.Errorf("failed to load ConnectEx function pointer: %w", err)
}
ptr, n, err := rsa.Sockaddr()
if err != nil {
return err
}
return connectEx(fd, ptr, n, sendBuf, sendDataLen, bytesSent, overlapped)
}
// BOOL LpfnConnectex(
// [in] SOCKET s,
// [in] const sockaddr *name,
// [in] int namelen,
// [in, optional] PVOID lpSendBuffer,
// [in] DWORD dwSendDataLength,
// [out] LPDWORD lpdwBytesSent,
// [in] LPOVERLAPPED lpOverlapped
// )
func connectEx(
s windows.Handle,
name unsafe.Pointer,
namelen int32,
sendBuf *byte,
sendDataLen uint32,
bytesSent *uint32,
overlapped *windows.Overlapped,
) (err error) {
// todo: after upgrading to 1.18, switch from syscall.Syscall9 to syscall.SyscallN
r1, _, e1 := syscall.Syscall9(connectExFunc.addr,
7,
uintptr(s),
uintptr(name),
uintptr(namelen),
uintptr(unsafe.Pointer(sendBuf)),
uintptr(sendDataLen),
uintptr(unsafe.Pointer(bytesSent)),
uintptr(unsafe.Pointer(overlapped)),
0,
0)
if r1 == 0 {
if e1 != 0 {
err = error(e1)
} else {
err = syscall.EINVAL
}
}
return err
}


@@ -0,0 +1,72 @@
//go:build windows
// Code generated by 'go generate' using "github.com/Microsoft/go-winio/tools/mkwinsyscall"; DO NOT EDIT.
package socket
import (
"syscall"
"unsafe"
"golang.org/x/sys/windows"
)
var _ unsafe.Pointer
// Do the interface allocations only once for common
// Errno values.
const (
errnoERROR_IO_PENDING = 997
)
var (
errERROR_IO_PENDING error = syscall.Errno(errnoERROR_IO_PENDING)
errERROR_EINVAL error = syscall.EINVAL
)
// errnoErr returns common boxed Errno values, to prevent
// allocations at runtime.
func errnoErr(e syscall.Errno) error {
switch e {
case 0:
return errERROR_EINVAL
case errnoERROR_IO_PENDING:
return errERROR_IO_PENDING
}
// TODO: add more here, after collecting data on the common
// error values see on Windows. (perhaps when running
// all.bat?)
return e
}
var (
modws2_32 = windows.NewLazySystemDLL("ws2_32.dll")
procbind = modws2_32.NewProc("bind")
procgetpeername = modws2_32.NewProc("getpeername")
procgetsockname = modws2_32.NewProc("getsockname")
)
func bind(s windows.Handle, name unsafe.Pointer, namelen int32) (err error) {
r1, _, e1 := syscall.Syscall(procbind.Addr(), 3, uintptr(s), uintptr(name), uintptr(namelen))
if r1 == socketError {
err = errnoErr(e1)
}
return
}
func getpeername(s windows.Handle, name unsafe.Pointer, namelen *int32) (err error) {
r1, _, e1 := syscall.Syscall(procgetpeername.Addr(), 3, uintptr(s), uintptr(name), uintptr(unsafe.Pointer(namelen)))
if r1 == socketError {
err = errnoErr(e1)
}
return
}
func getsockname(s windows.Handle, name unsafe.Pointer, namelen *int32) (err error) {
r1, _, e1 := syscall.Syscall(procgetsockname.Addr(), 3, uintptr(s), uintptr(name), uintptr(unsafe.Pointer(namelen)))
if r1 == socketError {
err = errnoErr(e1)
}
return
}


@@ -1,3 +1,4 @@
//go:build windows
// +build windows
package winio
@@ -13,6 +14,8 @@ import (
"syscall"
"time"
"unsafe"
"golang.org/x/sys/windows"
)
//sys connectNamedPipe(pipe syscall.Handle, o *syscall.Overlapped) (err error) = ConnectNamedPipe
@@ -21,10 +24,10 @@ import (
//sys getNamedPipeInfo(pipe syscall.Handle, flags *uint32, outSize *uint32, inSize *uint32, maxInstances *uint32) (err error) = GetNamedPipeInfo
//sys getNamedPipeHandleState(pipe syscall.Handle, state *uint32, curInstances *uint32, maxCollectionCount *uint32, collectDataTimeout *uint32, userName *uint16, maxUserNameSize uint32) (err error) = GetNamedPipeHandleStateW
//sys localAlloc(uFlags uint32, length uint32) (ptr uintptr) = LocalAlloc
//sys ntCreateNamedPipeFile(pipe *syscall.Handle, access uint32, oa *objectAttributes, iosb *ioStatusBlock, share uint32, disposition uint32, options uint32, typ uint32, readMode uint32, completionMode uint32, maxInstances uint32, inboundQuota uint32, outputQuota uint32, timeout *int64) (status ntstatus) = ntdll.NtCreateNamedPipeFile
//sys rtlNtStatusToDosError(status ntstatus) (winerr error) = ntdll.RtlNtStatusToDosErrorNoTeb
//sys rtlDosPathNameToNtPathName(name *uint16, ntName *unicodeString, filePart uintptr, reserved uintptr) (status ntstatus) = ntdll.RtlDosPathNameToNtPathName_U
//sys rtlDefaultNpAcl(dacl *uintptr) (status ntstatus) = ntdll.RtlDefaultNpAcl
//sys ntCreateNamedPipeFile(pipe *syscall.Handle, access uint32, oa *objectAttributes, iosb *ioStatusBlock, share uint32, disposition uint32, options uint32, typ uint32, readMode uint32, completionMode uint32, maxInstances uint32, inboundQuota uint32, outputQuota uint32, timeout *int64) (status ntStatus) = ntdll.NtCreateNamedPipeFile
//sys rtlNtStatusToDosError(status ntStatus) (winerr error) = ntdll.RtlNtStatusToDosErrorNoTeb
//sys rtlDosPathNameToNtPathName(name *uint16, ntName *unicodeString, filePart uintptr, reserved uintptr) (status ntStatus) = ntdll.RtlDosPathNameToNtPathName_U
//sys rtlDefaultNpAcl(dacl *uintptr) (status ntStatus) = ntdll.RtlDefaultNpAcl
type ioStatusBlock struct {
Status, Information uintptr
@@ -51,45 +54,22 @@ type securityDescriptor struct {
Control uint16
Owner uintptr
Group uintptr
Sacl uintptr
Dacl uintptr
Sacl uintptr //revive:disable-line:var-naming SACL, not Sacl
Dacl uintptr //revive:disable-line:var-naming DACL, not Dacl
}
type ntstatus int32
type ntStatus int32
func (status ntstatus) Err() error {
func (status ntStatus) Err() error {
if status >= 0 {
return nil
}
return rtlNtStatusToDosError(status)
}
const (
cERROR_PIPE_BUSY = syscall.Errno(231)
cERROR_NO_DATA = syscall.Errno(232)
cERROR_PIPE_CONNECTED = syscall.Errno(535)
cERROR_SEM_TIMEOUT = syscall.Errno(121)
cSECURITY_SQOS_PRESENT = 0x100000
cSECURITY_ANONYMOUS = 0
cPIPE_TYPE_MESSAGE = 4
cPIPE_READMODE_MESSAGE = 2
cFILE_OPEN = 1
cFILE_CREATE = 2
cFILE_PIPE_MESSAGE_TYPE = 1
cFILE_PIPE_REJECT_REMOTE_CLIENTS = 2
cSE_DACL_PRESENT = 4
)
var (
// ErrPipeListenerClosed is returned for pipe operations on listeners that have been closed.
// This error should match net.errClosing since docker takes a dependency on its text.
ErrPipeListenerClosed = errors.New("use of closed network connection")
ErrPipeListenerClosed = net.ErrClosed
errPipeWriteClosed = errors.New("pipe has been closed for write")
)
@@ -116,9 +96,10 @@ func (f *win32Pipe) RemoteAddr() net.Addr {
}
func (f *win32Pipe) SetDeadline(t time.Time) error {
f.SetReadDeadline(t)
f.SetWriteDeadline(t)
return nil
if err := f.SetReadDeadline(t); err != nil {
return err
}
return f.SetWriteDeadline(t)
}
// CloseWrite closes the write side of a message pipe in byte mode.
@@ -157,14 +138,14 @@ func (f *win32MessageBytePipe) Read(b []byte) (int, error) {
return 0, io.EOF
}
n, err := f.win32File.Read(b)
if err == io.EOF {
if err == io.EOF { //nolint:errorlint
// If this was the result of a zero-byte read, then
// it is possible that the read was due to a zero-size
// message. Since we are simulating CloseWrite with a
// zero-byte message, ensure that all future Read() calls
// also return EOF.
f.readEOF = true
} else if err == syscall.ERROR_MORE_DATA {
} else if err == syscall.ERROR_MORE_DATA { //nolint:errorlint // err is Errno
// ERROR_MORE_DATA indicates that the pipe's read mode is message mode
// and the message still has more bytes. Treat this as a success, since
// this package presents all named pipes as byte streams.
@@ -173,7 +154,7 @@ func (f *win32MessageBytePipe) Read(b []byte) (int, error) {
return n, err
}
func (s pipeAddress) Network() string {
func (pipeAddress) Network() string {
return "pipe"
}
@@ -184,16 +165,21 @@ func (s pipeAddress) String() string {
// tryDialPipe attempts to dial the pipe at `path` until `ctx` cancellation or timeout.
func tryDialPipe(ctx context.Context, path *string, access uint32) (syscall.Handle, error) {
for {
select {
case <-ctx.Done():
return syscall.Handle(0), ctx.Err()
default:
h, err := createFile(*path, access, 0, nil, syscall.OPEN_EXISTING, syscall.FILE_FLAG_OVERLAPPED|cSECURITY_SQOS_PRESENT|cSECURITY_ANONYMOUS, 0)
h, err := createFile(*path,
access,
0,
nil,
syscall.OPEN_EXISTING,
windows.FILE_FLAG_OVERLAPPED|windows.SECURITY_SQOS_PRESENT|windows.SECURITY_ANONYMOUS,
0)
if err == nil {
return h, nil
}
if err != cERROR_PIPE_BUSY {
if err != windows.ERROR_PIPE_BUSY { //nolint:errorlint // err is Errno
return h, &os.PathError{Err: err, Op: "open", Path: *path}
}
// Wait 10 msec and try again. This is a rather simplistic
@@ -213,9 +199,10 @@ func DialPipe(path string, timeout *time.Duration) (net.Conn, error) {
} else {
absTimeout = time.Now().Add(2 * time.Second)
}
ctx, _ := context.WithDeadline(context.Background(), absTimeout)
ctx, cancel := context.WithDeadline(context.Background(), absTimeout)
defer cancel()
conn, err := DialPipeContext(ctx, path)
if err == context.DeadlineExceeded {
if errors.Is(err, context.DeadlineExceeded) {
return nil, ErrTimeout
}
return conn, err
@@ -251,7 +238,7 @@ func DialPipeAccess(ctx context.Context, path string, access uint32) (net.Conn,
// If the pipe is in message mode, return a message byte pipe, which
// supports CloseWrite().
if flags&cPIPE_TYPE_MESSAGE != 0 {
if flags&windows.PIPE_TYPE_MESSAGE != 0 {
return &win32MessageBytePipe{
win32Pipe: win32Pipe{win32File: f, path: path},
}, nil
@@ -283,7 +270,11 @@ func makeServerPipeHandle(path string, sd []byte, c *PipeConfig, first bool) (sy
oa.Length = unsafe.Sizeof(oa)
var ntPath unicodeString
if err := rtlDosPathNameToNtPathName(&path16[0], &ntPath, 0, 0).Err(); err != nil {
if err := rtlDosPathNameToNtPathName(&path16[0],
&ntPath,
0,
0,
).Err(); err != nil {
return 0, &os.PathError{Op: "open", Path: path, Err: err}
}
defer localFree(ntPath.Buffer)
@@ -292,8 +283,8 @@ func makeServerPipeHandle(path string, sd []byte, c *PipeConfig, first bool) (sy
// The security descriptor is only needed for the first pipe.
if first {
if sd != nil {
len := uint32(len(sd))
sdb := localAlloc(0, len)
l := uint32(len(sd))
sdb := localAlloc(0, l)
defer localFree(sdb)
copy((*[0xffff]byte)(unsafe.Pointer(sdb))[:], sd)
oa.SecurityDescriptor = (*securityDescriptor)(unsafe.Pointer(sdb))
@@ -301,28 +292,28 @@ func makeServerPipeHandle(path string, sd []byte, c *PipeConfig, first bool) (sy
// Construct the default named pipe security descriptor.
var dacl uintptr
if err := rtlDefaultNpAcl(&dacl).Err(); err != nil {
return 0, fmt.Errorf("getting default named pipe ACL: %s", err)
return 0, fmt.Errorf("getting default named pipe ACL: %w", err)
}
defer localFree(dacl)
sdb := &securityDescriptor{
Revision: 1,
Control: cSE_DACL_PRESENT,
Control: windows.SE_DACL_PRESENT,
Dacl: dacl,
}
oa.SecurityDescriptor = sdb
}
}
typ := uint32(cFILE_PIPE_REJECT_REMOTE_CLIENTS)
typ := uint32(windows.FILE_PIPE_REJECT_REMOTE_CLIENTS)
if c.MessageMode {
typ |= cFILE_PIPE_MESSAGE_TYPE
typ |= windows.FILE_PIPE_MESSAGE_TYPE
}
disposition := uint32(cFILE_OPEN)
disposition := uint32(windows.FILE_OPEN)
access := uint32(syscall.GENERIC_READ | syscall.GENERIC_WRITE | syscall.SYNCHRONIZE)
if first {
disposition = cFILE_CREATE
disposition = windows.FILE_CREATE
// By not asking for read or write access, the named pipe file system
// will put this pipe into an initially disconnected state, blocking
// client connections until the next call with first == false.
@@ -335,7 +326,20 @@ func makeServerPipeHandle(path string, sd []byte, c *PipeConfig, first bool) (sy
h syscall.Handle
iosb ioStatusBlock
)
err = ntCreateNamedPipeFile(&h, access, &oa, &iosb, syscall.FILE_SHARE_READ|syscall.FILE_SHARE_WRITE, disposition, 0, typ, 0, 0, 0xffffffff, uint32(c.InputBufferSize), uint32(c.OutputBufferSize), &timeout).Err()
err = ntCreateNamedPipeFile(&h,
access,
&oa,
&iosb,
syscall.FILE_SHARE_READ|syscall.FILE_SHARE_WRITE,
disposition,
0,
typ,
0,
0,
0xffffffff,
uint32(c.InputBufferSize),
uint32(c.OutputBufferSize),
&timeout).Err()
if err != nil {
return 0, &os.PathError{Op: "open", Path: path, Err: err}
}
@@ -380,7 +384,7 @@ func (l *win32PipeListener) makeConnectedServerPipe() (*win32File, error) {
p.Close()
p = nil
err = <-ch
if err == nil || err == ErrFileClosed {
if err == nil || err == ErrFileClosed { //nolint:errorlint // err is Errno
err = ErrPipeListenerClosed
}
}
@@ -402,12 +406,12 @@ func (l *win32PipeListener) listenerRoutine() {
p, err = l.makeConnectedServerPipe()
// If the connection was immediately closed by the client, try
// again.
if err != cERROR_NO_DATA {
if err != windows.ERROR_NO_DATA { //nolint:errorlint // err is Errno
break
}
}
responseCh <- acceptResponse{p, err}
closed = err == ErrPipeListenerClosed
closed = err == ErrPipeListenerClosed //nolint:errorlint // err is Errno
}
}
syscall.Close(l.firstHandle)
@@ -469,15 +473,15 @@ func ListenPipe(path string, c *PipeConfig) (net.Listener, error) {
}
func connectPipe(p *win32File) error {
c, err := p.prepareIo()
c, err := p.prepareIO()
if err != nil {
return err
}
defer p.wg.Done()
err = connectNamedPipe(p.handle, &c.o)
_, err = p.asyncIo(c, nil, 0, err)
if err != nil && err != cERROR_PIPE_CONNECTED {
_, err = p.asyncIO(c, nil, 0, err)
if err != nil && err != windows.ERROR_PIPE_CONNECTED { //nolint:errorlint // err is Errno
return err
}
return nil
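
Tying the pipe changes above together, here is an editor-added sketch of a message-mode echo server (not part of the diff; the pipe name and buffer sizes are assumptions). Note that after this change a closed listener surfaces net.ErrClosed from Accept:

```go
//go:build windows

package main

import (
	"io"
	"log"
	"net"

	"github.com/Microsoft/go-winio"
)

func main() {
	const pipePath = `\\.\pipe\winio-echo-demo` // assumed example pipe name

	cfg := &winio.PipeConfig{
		MessageMode:      true, // CloseWrite is only supported in message mode
		InputBufferSize:  64 * 1024,
		OutputBufferSize: 64 * 1024,
	}
	l, err := winio.ListenPipe(pipePath, cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer l.Close()

	for {
		conn, err := l.Accept()
		if err != nil {
			// After this diff, ErrPipeListenerClosed is net.ErrClosed.
			log.Println("listener done:", err)
			return
		}
		go func(c net.Conn) {
			defer c.Close()
			_, _ = io.Copy(c, c) // echo until the client closes
		}(conn)
	}
}
```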


@@ -1,5 +1,3 @@
// +build windows
// Package guid provides a GUID type. The backing structure for a GUID is
// identical to that used by the golang.org/x/sys/windows GUID type.
// There are two main binary encodings used for a GUID, the big-endian encoding,
@@ -9,24 +7,26 @@ package guid
import (
"crypto/rand"
"crypto/sha1"
"crypto/sha1" //nolint:gosec // not used for secure application
"encoding"
"encoding/binary"
"fmt"
"strconv"
)
//go:generate go run golang.org/x/tools/cmd/stringer -type=Variant -trimprefix=Variant -linecomment
// Variant specifies which GUID variant (or "type") of the GUID. It determines
// how the entirety of the rest of the GUID is interpreted.
type Variant uint8
// The variants specified by RFC 4122.
// The variants specified by RFC 4122 section 4.1.1.
const (
// VariantUnknown specifies a GUID variant which does not conform to one of
// the variant encodings specified in RFC 4122.
VariantUnknown Variant = iota
VariantNCS
VariantRFC4122
VariantRFC4122 // RFC 4122
VariantMicrosoft
VariantFuture
)
@@ -36,6 +36,10 @@ const (
// hash of an input string.
type Version uint8
func (v Version) String() string {
return strconv.FormatUint(uint64(v), 10)
}
var _ = (encoding.TextMarshaler)(GUID{})
var _ = (encoding.TextUnmarshaler)(&GUID{})
@@ -61,7 +65,7 @@ func NewV4() (GUID, error) {
// big-endian UTF16 stream of bytes. If that is desired, the string can be
// encoded as such before being passed to this function.
func NewV5(namespace GUID, name []byte) (GUID, error) {
b := sha1.New()
b := sha1.New() //nolint:gosec // not used for secure application
namespaceBytes := namespace.ToArray()
b.Write(namespaceBytes[:])
b.Write(name)


@@ -1,3 +1,4 @@
//go:build !windows
// +build !windows
package guid


@@ -1,3 +1,6 @@
//go:build windows
// +build windows
package guid
import "golang.org/x/sys/windows"


@@ -0,0 +1,27 @@
// Code generated by "stringer -type=Variant -trimprefix=Variant -linecomment"; DO NOT EDIT.
package guid
import "strconv"
func _() {
// An "invalid array index" compiler error signifies that the constant values have changed.
// Re-run the stringer command to generate them again.
var x [1]struct{}
_ = x[VariantUnknown-0]
_ = x[VariantNCS-1]
_ = x[VariantRFC4122-2]
_ = x[VariantMicrosoft-3]
_ = x[VariantFuture-4]
}
const _Variant_name = "UnknownNCSRFC 4122MicrosoftFuture"
var _Variant_index = [...]uint8{0, 7, 10, 18, 27, 33}
func (i Variant) String() string {
if i >= Variant(len(_Variant_index)-1) {
return "Variant(" + strconv.FormatInt(int64(i), 10) + ")"
}
return _Variant_name[_Variant_index[i]:_Variant_index[i+1]]
}
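
The generated stringer above pairs with the guid package roughly as in this editor-added sketch (not part of the diff; the namespace GUID is the VSOCK template that appears elsewhere in this diff, and the name "example" is an assumption):

```go
package main

import (
	"fmt"
	"log"

	"github.com/Microsoft/go-winio/pkg/guid"
)

func main() {
	// Random (version 4) GUID.
	g, err := guid.NewV4()
	if err != nil {
		log.Fatal(err)
	}
	// Variant prints as "RFC 4122" via the generated stringer; Version prints "4".
	fmt.Println(g, g.Variant(), g.Version())

	// Name-based (version 5, SHA-1) GUID, derived from a namespace GUID and a name.
	ns, err := guid.FromString("00000000-facb-11e6-bd58-64006a7986d3")
	if err != nil {
		log.Fatal(err)
	}
	g5, err := guid.NewV5(ns, []byte("example"))
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(g5, g5.Variant(), g5.Version())
}
```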


@@ -1,3 +1,4 @@
//go:build windows
// +build windows
package winio
@@ -24,22 +25,17 @@ import (
//sys lookupPrivilegeDisplayName(systemName string, name *uint16, buffer *uint16, size *uint32, languageId *uint32) (err error) = advapi32.LookupPrivilegeDisplayNameW
const (
SE_PRIVILEGE_ENABLED = 2
//revive:disable-next-line:var-naming ALL_CAPS
SE_PRIVILEGE_ENABLED = windows.SE_PRIVILEGE_ENABLED
ERROR_NOT_ALL_ASSIGNED syscall.Errno = 1300
//revive:disable-next-line:var-naming ALL_CAPS
ERROR_NOT_ALL_ASSIGNED syscall.Errno = windows.ERROR_NOT_ALL_ASSIGNED
SeBackupPrivilege = "SeBackupPrivilege"
SeRestorePrivilege = "SeRestorePrivilege"
SeSecurityPrivilege = "SeSecurityPrivilege"
)
const (
securityAnonymous = iota
securityIdentification
securityImpersonation
securityDelegation
)
var (
privNames = make(map[string]uint64)
privNameMutex sync.Mutex
@@ -51,11 +47,9 @@ type PrivilegeError struct {
}
func (e *PrivilegeError) Error() string {
s := ""
s := "Could not enable privilege "
if len(e.privileges) > 1 {
s = "Could not enable privileges "
} else {
s = "Could not enable privilege "
}
for i, p := range e.privileges {
if i != 0 {
@@ -94,7 +88,7 @@ func RunWithPrivileges(names []string, fn func() error) error {
}
func mapPrivileges(names []string) ([]uint64, error) {
var privileges []uint64
privileges := make([]uint64, 0, len(names))
privNameMutex.Lock()
defer privNameMutex.Unlock()
for _, name := range names {
@@ -127,7 +121,7 @@ func enableDisableProcessPrivilege(names []string, action uint32) error {
return err
}
p, _ := windows.GetCurrentProcess()
p := windows.CurrentProcess()
var token windows.Token
err = windows.OpenProcessToken(p, windows.TOKEN_ADJUST_PRIVILEGES|windows.TOKEN_QUERY, &token)
if err != nil {
@@ -140,10 +134,10 @@ func enableDisableProcessPrivilege(names []string, action uint32) error {
func adjustPrivileges(token windows.Token, privileges []uint64, action uint32) error {
var b bytes.Buffer
binary.Write(&b, binary.LittleEndian, uint32(len(privileges)))
_ = binary.Write(&b, binary.LittleEndian, uint32(len(privileges)))
for _, p := range privileges {
binary.Write(&b, binary.LittleEndian, p)
binary.Write(&b, binary.LittleEndian, action)
_ = binary.Write(&b, binary.LittleEndian, p)
_ = binary.Write(&b, binary.LittleEndian, action)
}
prevState := make([]byte, b.Len())
reqSize := uint32(0)
@@ -151,7 +145,7 @@ func adjustPrivileges(token windows.Token, privileges []uint64, action uint32) e
if !success {
return err
}
if err == ERROR_NOT_ALL_ASSIGNED {
if err == ERROR_NOT_ALL_ASSIGNED { //nolint:errorlint // err is Errno
return &PrivilegeError{privileges}
}
return nil
@@ -177,7 +171,7 @@ func getPrivilegeName(luid uint64) string {
}
func newThreadToken() (windows.Token, error) {
err := impersonateSelf(securityImpersonation)
err := impersonateSelf(windows.SecurityImpersonation)
if err != nil {
return 0, err
}
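
A short editor-added sketch of how the privilege helpers touched above are typically used (not part of the diff; the path is an assumption, and the call only succeeds if the process token already holds the privilege, e.g. when run elevated):

```go
//go:build windows

package main

import (
	"log"
	"os"

	"github.com/Microsoft/go-winio"
)

func main() {
	// Enable SeBackupPrivilege only for the duration of the callback.
	err := winio.RunWithPrivileges([]string{winio.SeBackupPrivilege}, func() error {
		f, err := os.Open(`C:\Windows\System32\config`) // assumed example path
		if err != nil {
			return err
		}
		return f.Close()
	})
	if err != nil {
		log.Fatal(err)
	}
}
```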

Some files were not shown because too many files have changed in this diff Show More