Compare commits

...

133 Commits

Author SHA1 Message Date
Miloslav Trmač
a552909737 Release 1.12.0
More template functions are available in (skopeo inspect --format).
Adds new ways to supply trusted keys to (skopeo standalone-verify).

Now requires Go 1.18.

- [CI:DOCS] Fix up language in README
- Add unit tests for tlsVerifyConfig's yaml.Unmarshaler
- Cirrus: Use human-readable CI VM Images
- [CI:BUILD] copr: fix el8 build and enable debuginfo
- [CI:BUILD] enable debuginfo for el8 copr builds
- Update to use, and benefit from, Go 1.18
- [CI:DOCS] Disable dependabot
- Renovate: c/common rule moved to defaults
- [CI:BUILD] Packit: initial enablement
- Replace gopkg.in/check.v1 by github.com/stretchr/testify/suite/
- Corrected typo in skopeo-sync and updated description
- Fix tabulating output in (skopeo inspect --format)
- Use common library reporter
- Fix formatting of inspect examples
- Use io.WriteString
- Factor out the output of data in (skopeo inspect)
- Simplify inspectOptions.writeOutput a bit more
- Cirrus: Update CI VM images
- Make the installation instructions more prominent in README.md
- [CI:BUILD] Packit: trigger builds on commit to main branch
- systemtests: Fix 040-local-registry-auth about XDG_RUNTIME_DIR
- Verify signatures from a trust store
- Rename argument. Only use any with public key file. Double check fingerprint is in public key file.
- Use multiple fingerprint function; allow comma-separated fingerprint list
- Avoid use of a deprecated capability.NewPid
- Fix error handling of signature.NewEphemeralGPGSigningMechanism
- Cross-link the top-level and subcommand option lists
- Use golangci-lint instead of golint
- Add (make tools) to install (for now only) golangci-lint, use it in Cirrus

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2023-04-12 22:47:03 +02:00
Miloslav Trmač
2363dfeeab Merge pull request #1969 from containers/renovate/github.com-containers-common-0.x
Update module github.com/containers/common to v0.52.0
2023-04-11 20:02:19 +02:00
renovate[bot]
5f0314f342 Update module github.com/containers/common to v0.52.0
Signed-off-by: Renovate Bot <bot@renovateapp.com>
2023-04-11 17:35:08 +00:00
Miloslav Trmač
c1836e1971 Merge pull request #1963 from containers/renovate/github.com-containers-storage-1.x
Update module github.com/containers/storage to v1.46.1
2023-04-11 19:32:23 +02:00
renovate[bot]
66157589c5 Update module github.com/containers/storage to v1.46.1
Signed-off-by: Renovate Bot <bot@renovateapp.com>
2023-04-10 22:42:41 +00:00
Daniel J Walsh
1aa6371db4 Merge pull request #1944 from mtrmac/top-options
Cross-link the top-level and subcommand option lists
2023-04-10 15:07:07 -04:00
Daniel J Walsh
c1d54feac9 Merge pull request #1967 from mtrmac/golangci-lint
Rework hack/make*, and switch from golint to golangci-lint
2023-04-06 16:53:40 -04:00
Miloslav Trmač
7c66b7405a Add (make tools) to install (for now only) golangci-lint, use it in Cirrus
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2023-04-06 21:45:51 +02:00
Miloslav Trmač
d4bd787e5f Use golangci-lint instead of golint
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2023-04-06 21:45:51 +02:00
Miloslav Trmač
c538340e3b Finally, eliminate hack/make.sh
The only thing hack/make.sh still really does is the
warning + sleep when SKOPEO_CONTAINER_TESTS is not set.

So, make that a separate script, and eliminate the
hack/make directory.

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2023-04-06 21:45:51 +02:00
Miloslav Trmač
f8f5a25fe2 Actually fail if (go vet) fails
The errors are printed on stderr, so read that.

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2023-04-06 21:45:51 +02:00
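
A minimal Go rendering of what the fixed check needs to do (the real script is shell; names here are illustrative): run go vet, capture stderr, and fail when it reports anything.

package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

func main() {
	cmd := exec.Command("go", "vet", "./...")
	var stderr bytes.Buffer
	cmd.Stderr = &stderr // go vet prints its findings on stderr, not stdout
	if err := cmd.Run(); err != nil {
		fmt.Print(stderr.String())
		os.Exit(1) // propagate the failure instead of silently succeeding
	}
}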
Miloslav Trmač
aebab49285 Speed up validate-git-marks by about a factor of three
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2023-04-06 21:45:51 +02:00
Miloslav Trmač
4298692dd4 Don't use hack/make.sh for validate-git-marks
hack/make.sh now does not make a difference, so simplify.

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2023-04-06 21:45:51 +02:00
Miloslav Trmač
7e35ad54c3 Test all files by validate-git-marks
This is simpler to do, cheap enough for our repo size, and it
does not require network access to determine which files to check.

And it's the last user of hack/make/.validate, which I wanted to
remove in the first place.

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2023-04-06 21:45:51 +02:00
Miloslav Trmač
789257f767 Simplify the package list of (go vet)
That extra process is an extra 0.5s on macOS at least.

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2023-04-06 21:45:51 +02:00
Miloslav Trmač
bee51e5eed Don't use hack/make.sh for validate-gofmt
hack/make.sh now does not make a difference, so simplify.

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2023-04-06 21:45:51 +02:00
Miloslav Trmač
85fef03670 Run gofmt on all files, not just the changed ones
... so that if we upgrade gofmt, the updates
need to happen immediately.

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2023-04-06 21:45:51 +02:00
Miloslav Trmač
82268ea8bf Don't use hack/make.sh for validate-lint
hack/make.sh now does not make a difference, so simplify.

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2023-04-06 21:45:51 +02:00
Miloslav Trmač
694b1565d1 Lint many more files in validate-lint
- Always lint everything, not just changed files;
  that means that if we upgrade the linter, we will
  need to clean everything up, but that's a good thing
  for contributors who come after that linter upgrade.

- Don't skip linting the integration tests, there's no
  good reason to skip them.

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2023-04-06 21:45:51 +02:00
Miloslav Trmač
43090b2917 Don't use hack/make.sh for validate-vet
hack/make.sh now does not make a difference, so simplify.

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2023-04-06 21:45:51 +02:00
Miloslav Trmač
225f239a69 Remove no-longer-necessary module options
We now require Go 1.18. As of that version:
- GO111MODULE=on is implied by having a go.mod file
- -mod=vendor is implied by having a vendor directory

so just remove both options everywhere.

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2023-04-06 21:45:51 +02:00
Miloslav Trmač
98b01af031 Fix Makefile dependencies
validate-docs requires bin/skopeo; test-unit-local actually doesn't.

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2023-04-06 21:45:51 +02:00
Miloslav Trmač
835d71a3a4 Remove some outright unused code from hack/make*
Should not affect observable behavior.

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2023-04-06 21:45:51 +02:00
Miloslav Trmač
30ecd8f040 Cross-link the top-level and subcommand option lists
... primarily to make the top-level options more discoverable.

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2023-04-06 21:45:36 +02:00
Miloslav Trmač
cd1c43c65d Merge pull request #1965 from mtrmac/verify-error
Fix error handling of signature.NewEphemeralGPGSigningMechanism
2023-04-06 21:44:58 +02:00
Miloslav Trmač
4be583c8a9 Fix error handling of signature.NewEphemeralGPGSigningMechanism
signature.NewEphemeralGPGSigningMechanism is called in an if branch
where the previous err := introduces a "new" err variable, which means
the failure isn't visible after the if.

So, do the dumb thing and just check on both branches explicitly.
(We still need to worry about correctly setting "mech" and
"publicKeyfingerprints" to persist after the if.)

How I hate Go sometimes. And this shows we really should update
the linter.

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2023-04-06 21:23:36 +02:00
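
A minimal, self-contained sketch of the shadowing pitfall described above (illustrative names, not the actual skopeo code): the := inside the if-branch declares a new err, so the outer check never sees the failure.

package main

import (
	"errors"
	"fmt"
)

func newMechanism() (string, error) { return "", errors.New("init failed") }

func main() {
	var mech string
	var err error
	usePublicKeyFile := true
	if usePublicKeyFile {
		// ":=" declares new mech and err variables scoped to this block,
		// shadowing the outer ones; the failure is invisible after the if.
		mech, err := newMechanism()
		_, _ = mech, err
	} else {
		mech, err = newMechanism()
	}
	// Only the outer err is checked; the error from the first branch is lost.
	fmt.Println("mech:", mech, "err:", err) // prints an empty mech and a nil err
}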
Miloslav Trmač
0f3e6e30e4 Merge pull request #1966 from containers/renovate/major-ci-vm-image
chore(deps): update dependency containers/automation_images to v20230405
2023-04-06 21:22:33 +02:00
renovate[bot]
e841409787 chore(deps): update dependency containers/automation_images to v20230405
Signed-off-by: Renovate Bot <bot@renovateapp.com>
2023-04-06 16:29:28 +00:00
Miloslav Trmač
9ffdceb157 Merge pull request #1968 from mtrmac/NewPid2
Avoid use of a deprecated capability.NewPid
2023-04-06 18:24:32 +02:00
Miloslav Trmač
4f5e821436 Avoid use of a deprecated capability.NewPid
This is reported by golangci-lint.

(Absolutely untested.)

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2023-04-05 22:21:26 +02:00
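
Assuming the github.com/syndtr/gocapability package, the non-deprecated pattern is NewPid2 followed by an explicit Load; a hedged sketch:

package main

import (
	"fmt"

	"github.com/syndtr/gocapability/capability"
)

func main() {
	caps, err := capability.NewPid2(0) // 0 selects the current process; does not load state
	if err != nil {
		panic(err)
	}
	if err := caps.Load(); err != nil { // unlike the deprecated NewPid, an explicit Load is required
		panic(err)
	}
	fmt.Println("CAP_SYS_ADMIN effective:", caps.Get(capability.EFFECTIVE, capability.CAP_SYS_ADMIN))
}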
Miloslav Trmač
1e70fee21d Merge pull request #1962 from containers/renovate/github.com-spf13-cobra-1.x
fix(deps): update module github.com/spf13/cobra to v1.7.0
2023-04-05 20:07:27 +02:00
renovate[bot]
ca0f8418e1 fix(deps): update module github.com/spf13/cobra to v1.7.0
Signed-off-by: Renovate Bot <bot@renovateapp.com>
2023-04-05 16:29:43 +00:00
Miloslav Trmač
3ddbfec17c Merge pull request #1903 from containers/renovate/github.com-containers-image-v5-5.x
fix(deps): update module github.com/containers/image/v5 to v5.25.0
2023-04-05 18:28:43 +02:00
renovate[bot]
b0d339f0fd fix(deps): update module github.com/containers/image/v5 to v5.25.0
Signed-off-by: Renovate Bot <bot@renovateapp.com>
2023-04-05 16:14:44 +00:00
Miloslav Trmač
c4dac7632c Merge pull request #1964 from containers/renovate/golang.org-x-term-0.x
fix(deps): update module golang.org/x/term to v0.7.0
2023-04-05 18:13:15 +02:00
renovate[bot]
03ca2871fe fix(deps): update module golang.org/x/term to v0.7.0
Signed-off-by: Renovate Bot <bot@renovateapp.com>
2023-04-05 15:45:36 +00:00
Miloslav Trmač
841eab319f Merge pull request #1950 from Jamstah/easy-verify-options
Verify signatures from a list of public keys
2023-04-05 17:40:52 +02:00
James Hewitt
4ca2058d01 Use multiple fingerprint function
Allow comma separated fingerprint list

To be squashed later

Signed-off-by: James Hewitt <james.hewitt@uk.ibm.com>
2023-04-01 12:07:35 +01:00
James Hewitt
c54f2025d8 Review comments (to be squashed later)
Signed-off-by: James Hewitt <james.hewitt@uk.ibm.com>
2023-04-01 11:51:58 +01:00
James Hewitt
9b1f1fa1a9 Rename argument. Only use any with public key file. Double check fingerprint is in public key file.
Signed-off-by: James Hewitt <james.hewitt@uk.ibm.com>
2023-04-01 11:51:57 +01:00
James Hewitt
3097b7a4e9 Verify signatures from a trust store
Add the ability to use an on-disk trust store to verify signatures. Also allow the user to trust any known fingerprint instead of having to specify one.

Signed-off-by: James Hewitt <james.hewitt@uk.ibm.com>
2023-04-01 11:51:57 +01:00
Miloslav Trmač
d08bf21367 Merge pull request #1959 from mtrmac/update-image
Update c/image from the main branch
2023-04-01 12:41:59 +02:00
Miloslav Trmač
bfe82593c8 Update c/image from the main branch
> go get github.com/containers/image/v5@main
> make vendor

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2023-04-01 12:24:04 +02:00
Miloslav Trmač
4f475bd4d2 Merge pull request #1957 from containers/renovate/github.com-containers-common-0.x
Update module github.com/containers/common to v0.51.2
2023-04-01 12:14:03 +02:00
renovate[bot]
468ac6559e Update module github.com/containers/common to v0.51.2
Signed-off-by: Renovate Bot <bot@renovateapp.com>
2023-04-01 09:30:13 +00:00
Miloslav Trmač
2409c25922 Merge pull request #1955 from containers/renovate/major-ci-vm-image
Update dependency containers/automation_images to v20230330
2023-04-01 11:27:58 +02:00
renovate[bot]
7481aae6ac Update dependency containers/automation_images to v20230330
Signed-off-by: Renovate Bot <bot@renovateapp.com>
2023-03-30 19:02:55 +00:00
Miloslav Trmač
3952227939 Merge pull request #1953 from sstosh/tests-xdg
systemtests: Fix 040-local-registry-auth about XDG_RUNTIME_DIR
2023-03-28 22:36:09 +02:00
Toshiki Sonoda
454f8559c1 systemtests: Fix 040-local-registry-auth about XDG_RUNTIME_DIR
Need to restore XDG_RUNTIME_DIR when podman removes the registry.

Signed-off-by: Toshiki Sonoda <sonoda.toshiki@fujitsu.com>
2023-03-28 15:01:41 +09:00
Miloslav Trmač
0c43e43e36 Merge pull request #1942 from lsm5/packit-build-on-main-commit
[CI:BUILD] Packit: trigger builds on commit to main branch
2023-03-24 21:03:43 +01:00
Lokesh Mandvekar
bbdcb79c03 [CI:BUILD] Packit: trigger builds on commit to main branch
This commit lets Packit trigger builds on the
`rhcontainerbot/podman-next` COPR after a commit to the main branch,
instead of the current GitHub webhook trigger.

The builds triggered via packit also provide more information in their
`version-release`:

Current webhook triggered build:
`101:0.0.git.2460.cfd6f20f-1`.
Ref: https://copr.fedorainfracloud.org/coprs/rhcontainerbot/podman-next/package/skopeo/

Packit triggered build for another package (netavark) on podman-next:
101:1.6.0~dev-1.20230321121647013339.main.61.gd6f0352
Ref: https://copr.fedorainfracloud.org/coprs/rhcontainerbot/podman-next/package/netavark/

The Packit-triggered build correctly shows the upstream branch name,
commit ID, and timestamp, as well as the upstream version.

Signed-off-by: Lokesh Mandvekar <lsm5@fedoraproject.org>
2023-03-24 11:21:54 +05:30
Miloslav Trmač
dd80c87c2c Merge pull request #1947 from containers/renovate/actions-stale-8.x
[skip-ci] Update actions/stale action to v8
2023-03-23 21:33:37 +01:00
renovate[bot]
cd4f2ee554 [skip-ci] Update actions/stale action to v8
Signed-off-by: Renovate Bot <bot@renovateapp.com>
2023-03-23 09:38:29 +00:00
Valentin Rothberg
f588f8bde7 Merge pull request #1943 from mtrmac/promote-install
Make the installation instructions more prominent in README.md
2023-03-23 09:33:11 +01:00
Miloslav Trmač
b2ede999f3 Make the installation instructions more prominent in README.md
... per https://github.com/containers/skopeo/issues/1940 .

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2023-03-22 22:29:47 +01:00
Miloslav Trmač
b43ec279d2 Merge pull request #1945 from containers/renovate/major-ci-vm-image
Update dependency containers/automation_images to v20230320
2023-03-22 22:28:17 +01:00
renovate[bot]
8ea5fd4458 Update dependency containers/automation_images to v20230320
Signed-off-by: Renovate Bot <bot@renovateapp.com>
2023-03-22 20:14:04 +00:00
Miloslav Trmač
81c63c331f Merge pull request #1946 from containers/renovate/github.com-containers-common-0.x
Update module github.com/containers/common to v0.51.1
2023-03-22 21:12:46 +01:00
renovate[bot]
aa9862a718 Update module github.com/containers/common to v0.51.1
Signed-off-by: Renovate Bot <bot@renovateapp.com>
2023-03-22 15:11:42 +00:00
Miloslav Trmač
cfd6f20fbd Merge pull request #1939 from cevich/debian_vms
Cirrus: Update CI VM images
2023-03-16 22:30:48 +01:00
Chris Evich
0ad54d6df3 Cirrus: Update CI VM images
Signed-off-by: Chris Evich <cevich@redhat.com>
2023-03-16 15:34:44 -04:00
Miloslav Trmač
c88f3eab64 Merge pull request #1938 from lsm5/update-golang-org-x-net
bump golang.org/x/net to v0.8.0
2023-03-15 19:17:08 +01:00
Lokesh Mandvekar
20447df139 bump golang.org/x/net to v0.8.0
Resolves: CVE-2022-41723
Ref: https://cve.mitre.org/cgi-bin/cvename.cgi?name=2022-41723

Signed-off-by: Lokesh Mandvekar <lsm5@fedoraproject.org>
2023-03-15 18:58:10 +05:30
Miloslav Trmač
d15ecce269 Merge pull request #1932 from containers/renovate/golang.org-x-term-0.x
Update module golang.org/x/term to v0.6.0
2023-03-06 17:59:46 +01:00
renovate[bot]
3481a5b927 Update module golang.org/x/term to v0.6.0
Signed-off-by: Renovate Bot <bot@renovateapp.com>
2023-03-05 03:40:10 +00:00
Valentin Rothberg
44eff83253 Merge pull request #1927 from mtrmac/WriteString
Use io.WriteString
2023-02-28 09:16:12 +01:00
Valentin Rothberg
a1c9655cff Merge pull request #1928 from mtrmac/inspect-duplicated
Avoid duplicated code in `skopeo inspect`
2023-02-28 09:14:18 +01:00
Miloslav Trmač
bcc0d54e54 Simplify inspectOptions.writeOutput a bit more
Don't maintain a named array variable that we only append to
exactly once.

Should not change behavior.

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2023-02-27 19:36:16 +01:00
Miloslav Trmač
c345785d28 Factor out the output of data in (skopeo inspect)
The two code paths are basically exactly identical, so
share the code.

Should not change behavior.

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2023-02-27 19:35:53 +01:00
Miloslav Trmač
2a6a944c13 Use io.WriteString
... mostly so that I get practice and remember this exists in the future.

(This saves one allocation & copy when the target implements
io.StringWriter. And that makes absolutely no relevant difference
on this path.)

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2023-02-27 18:15:47 +01:00
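
The optimization mentioned above, as a minimal sketch: io.WriteString uses the target's WriteString method when it implements io.StringWriter, skipping the []byte conversion that a plain Write would need.

package main

import (
	"fmt"
	"io"
	"strings"
)

func main() {
	var sb strings.Builder // implements io.StringWriter
	if _, err := io.WriteString(&sb, "hello\n"); err != nil { // no []byte("hello\n") allocation
		panic(err)
	}
	fmt.Print(sb.String())
}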
Miloslav Trmač
ad1b09dea4 Merge pull request #1925 from containers/renovate/github.com-stretchr-testify-1.x
Update module github.com/stretchr/testify to v1.8.2
2023-02-27 15:40:25 +01:00
renovate[bot]
9a02c1eb57 Update module github.com/stretchr/testify to v1.8.2
Signed-off-by: Renovate Bot <bot@renovateapp.com>
2023-02-27 10:52:05 +00:00
Daniel J Walsh
7060b094a7 Merge pull request #1923 from containers/renovate/github.com-containers-storage-1.x
Update module github.com/containers/storage to v1.45.4
2023-02-27 05:48:27 -05:00
renovate[bot]
f1c03ef104 Update module github.com/containers/storage to v1.45.4
Signed-off-by: Renovate Bot <bot@renovateapp.com>
2023-02-23 19:18:39 +00:00
Daniel J Walsh
f0abd60623 Merge pull request #1904 from containers/renovate/golang.org-x-exp-digest
Update golang.org/x/exp digest to 5e25df0
2023-02-23 14:15:50 -05:00
Daniel J Walsh
719ae1d890 Merge pull request #1908 from mtrmac/warnings
Fix some warnings
2023-02-23 14:15:11 -05:00
renovate[bot]
64daedca6c Update golang.org/x/exp digest to 5e25df0
Signed-off-by: Renovate Bot <bot@renovateapp.com>
2023-02-23 18:49:00 +00:00
Daniel J Walsh
508891281c Merge pull request #1822 from cblecker/report
Use common library reporter
2023-02-23 12:32:55 -05:00
Christoph Blecker
c07f20982c Fix formatting of inspect examples
Signed-off-by: Christoph Blecker <cblecker@redhat.com>
2023-02-23 09:06:16 -08:00
Christoph Blecker
313f142c87 Use common library reporter
Signed-off-by: Christoph Blecker <cblecker@redhat.com>
2023-02-23 08:59:13 -08:00
Miloslav Trmač
4beb3f0af9 Fix some warnings
golangci-lint linter: unparam

This primarily fixes TestProxy to test the correct image

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2023-02-23 16:08:45 +01:00
Valentin Rothberg
371604ba27 Merge pull request #1921 from mtrmac/inspect-tabs
Fix tabulating output in (skopeo inspect --format)
2023-02-23 08:26:40 +01:00
Miloslav Trmač
1c3d49f012 Fix tabulating output in (skopeo inspect --format)
tabwriter buffers lines that contain \t in memory, and only
writes them out on a .Flush(). So actually call that.

Without this, things like
> --format 'name\tdigest\tlabels\n{{.Name}}\t{{.Digest}}\t{{.Labels}}\n'
result in no output at all.

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2023-02-23 00:41:09 +01:00
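
A minimal sketch of the tabwriter behavior behind this fix (illustrative data, not the skopeo code):

package main

import (
	"fmt"
	"os"
	"text/tabwriter"
)

func main() {
	w := tabwriter.NewWriter(os.Stdout, 8, 2, 2, ' ', 0)
	// Lines containing '\t' are buffered in memory until Flush();
	// without the Flush, nothing is printed at all.
	fmt.Fprintf(w, "name\tdigest\n")
	fmt.Fprintf(w, "example\tsha256:0000\n")
	w.Flush()
}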
Miloslav Trmač
d833619740 Merge pull request #1920 from dbeezt/main
Corrected Skopeo-Sync Documentation
2023-02-22 18:11:47 +01:00
dbeezt
fb0be61374 Corrected typo in skopeo-sync and updated description
Signed-off-by: dbeezt <dbrereton1995@gmail.com>
2023-02-22 15:30:02 +00:00
Valentin Rothberg
e61893eebd Merge pull request #1913 from mtrmac/testify-suite
Replace gopkg.in/check.v1 by github.com/stretchr/testify/suite/
2023-02-21 10:24:59 +01:00
Miloslav Trmač
2ef9cf6902 Replace gopkg.in/check.v1 by github.com/stretchr/testify/suite/
gopkg.in/check.v1 hasn't had any commit since Nov 2020.
That's not an immediate issue for a test-only dependency, but
because it hides access to the standard library *testing.T,
eventually it will become limiting.

Also, using the same framework for unit and integration tests
seems practical.

This is mostly a batch copy&paste job, with a fairly high risk
of unexpected breakage.

Also, I didn't take much time at all to carefully choose between
assert.* and require.*; we can tune that as failures show up.

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2023-02-17 02:35:51 +01:00
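
For reference, the testify/suite shape the tests migrate to looks roughly like this (hypothetical suite and test names):

package example_test

import (
	"testing"

	"github.com/stretchr/testify/suite"
)

type copySuite struct {
	suite.Suite
}

func (s *copySuite) TestCopy() {
	s.Equal(4, 2+2) // suite.Suite embeds assert.Assertions
}

func TestCopySuite(t *testing.T) {
	// Unlike gopkg.in/check.v1, this keeps direct access to *testing.T via s.T().
	suite.Run(t, &copySuite{})
}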
Valentin Rothberg
c0c7065737 Merge pull request #1911 from mtrmac/c-image-update
Update c/image after https://github.com/containers/image/pull/1842
2023-02-16 09:04:54 +01:00
Miloslav Trmač
0ba164f072 Update c/image after https://github.com/containers/image/pull/1842
... so that Renovate doesn't keep proposing a downgrade.

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2023-02-15 19:20:03 +01:00
Miloslav Trmač
e0893efc48 Merge pull request #1905 from lsm5/packit
[CI:BUILD] Packit: initial enablement
2023-02-14 18:33:12 +01:00
Lokesh Mandvekar
012e1144cf [CI:BUILD] Packit: initial enablement
This commit will run COPR builds on every PR against all active
releases of CentOS Stream and Fedora, thus allowing buildability checks before the
PR merges.

Builds are done on a custom COPR project:
`rhcontainerbot/packit-builds`.
Ref: https://copr.fedorainfracloud.org/coprs/rhcontainerbot/packit-builds/

The build targets are set in the copr itself, so we don't need to
explicitly mention them in `.packit.yaml`, making upstream configuration
a lot simpler.

The `spec.rpkg` file meant for rpm builds post-pr-merge at
`rhcontainerbot/podman-next` copr gets reused for packit builds, so the
packit jobs are independent of Fedora / CentOS dist-git.

NOTE: The Packit copr_build tasks help to check if every commit builds on
supported Fedora and CentOS Stream arches. They do not block the current
Cirrus-based workflow.

Signed-off-by: Lokesh Mandvekar <lsm5@fedoraproject.org>
2023-02-14 16:15:01 +05:30
Miloslav Trmač
3362a1998a Merge pull request #1906 from cevich/update_renovate
[CI:DOCS] Renovate: c/common rule moved to defaults
2023-02-13 16:50:47 +01:00
Chris Evich
5435c80867 Renovate: c/common rule moved to defaults
Ref: https://github.com/containers/automation/pull/128

Signed-off-by: Chris Evich <cevich@redhat.com>
2023-02-13 09:21:34 -05:00
Miloslav Trmač
95680f3c07 Merge pull request #1901 from mtrmac/c-image-eof
Update c/image after https://github.com/containers/image/pull/1816
2023-02-10 18:24:17 +01:00
Miloslav Trmač
643a2359e4 Update c/image after https://github.com/containers/image/pull/1816
... to work around some of the "unexpected EOF" failures.

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2023-02-09 20:36:27 +01:00
Miloslav Trmač
e9f30e5b65 Merge pull request #1900 from rhatdan/codespell
Run codespell on codebase
2023-02-09 20:35:52 +01:00
Daniel J Walsh
2c6e15b5ab Run codespell on codebase
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
2023-02-09 09:13:32 -05:00
Miloslav Trmač
e257db0aa9 Merge pull request #1898 from cevich/disable_dependabot
[CI:DOCS] Disable dependabot
2023-02-08 20:44:34 +01:00
Chris Evich
df708d1652 [CI:DOCS] Disable dependabot
Ref:
https://docs.github.com/en/code-security/dependabot/dependabot-security-updates/configuring-dependabot-security-updates

Disabling it via the WebUI isn't good enough; the configuration file
must also be absent.

Signed-off-by: Chris Evich <cevich@redhat.com>
2023-02-08 14:39:10 -05:00
Miloslav Trmač
0fdd10491e Merge pull request #1897 from containers/renovate/golang.org-x-term-0.x
Update module golang.org/x/term to v0.5.0
2023-02-07 23:30:02 +01:00
renovate[bot]
2acac8a6c2 Update module golang.org/x/term to v0.5.0
Signed-off-by: Renovate Bot <bot@renovateapp.com>
2023-02-07 21:58:55 +00:00
Miloslav Trmač
5d7edf1f7c Merge pull request #1896 from containers/renovate/golang.org-x-exp-digest
Update golang.org/x/exp digest to 46f607a
2023-02-07 03:34:17 +01:00
renovate[bot]
f9e2c67648 Update golang.org/x/exp digest to 46f607a
Signed-off-by: Renovate Bot <bot@renovateapp.com>
2023-02-06 22:14:39 +00:00
Daniel J Walsh
8f5d4fc0dc Merge pull request #1893 from mtrmac/golangci-lint
Fix some golangci-lint-reported nits
2023-02-03 18:39:38 -05:00
Miloslav Trmač
47c7902ea2 Remove unnecessary blank lines
golangci-lint linter: whitespace

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2023-02-03 17:49:01 +01:00
Miloslav Trmač
c1a57ca199 Pre-allocate an array
golangci-lint linter: prealloc

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2023-02-03 17:49:01 +01:00
Miloslav Trmač
2a7b132790 Simplify a condition
golangci-lint linter: ifshort

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2023-02-03 17:49:01 +01:00
Miloslav Trmač
e7ab33e65f Rename a variable to avoid an underscore
golangci-lint linter: golint

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2023-02-03 17:48:59 +01:00
Miloslav Trmač
e90c381a02 Add missing comment punctuation
golangci-lint linter: godot

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2023-02-03 17:48:16 +01:00
Miloslav Trmač
70c06b4ac0 Fix, or remove, comments using lint syntax
golangci-lint linter: gocritic

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2023-02-03 17:48:16 +01:00
Miloslav Trmač
9137ac5697 Simplify an increment
golangci-lint linter: gocritic

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2023-02-03 17:48:16 +01:00
Miloslav Trmač
efc6e837c6 Reformat import statements
golangci-lint linter: gci

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2023-02-03 17:48:16 +01:00
Miloslav Trmač
a8b9e4e385 Use %w when wrapping errors
golangci-lint linter: errorlint

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2023-02-03 17:48:16 +01:00
Miloslav Trmač
99215e40e9 Remove a duplicate word
golangci-lint linter: dupword

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2023-02-03 17:48:16 +01:00
Valentin Rothberg
7afbf30963 Merge pull request #1894 from mtrmac/go1.18
Update to Go 1.18
2023-02-03 09:25:41 +01:00
Miloslav Trmač
afa031e846 Use net/netip.Addr instead of net.IP
This is not really shorter, but more a matter of principle...

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2023-02-02 22:27:35 +01:00
Miloslav Trmač
891ba3d4a6 s/interface{}/any/g
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2023-02-02 22:27:35 +01:00
Miloslav Trmač
f2b3a9c04b Use golang.org/x/exp
... instead of open-coding loops.

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2023-02-02 22:27:35 +01:00
Miloslav Trmač
f1a6d427b3 Use strings.Cut
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2023-02-02 21:43:35 +01:00
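
strings.Cut (new in Go 1.18) replaces the SplitN/Index patterns used before; a minimal example of its semantics:

package main

import (
	"fmt"
	"strings"
)

func main() {
	transport, rest, ok := strings.Cut("docker://nginx", ":")
	fmt.Println(transport, rest, ok) // docker //nginx true
}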
Miloslav Trmač
22955d0506 go mod tidy -go=1.18
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2023-02-02 21:43:12 +01:00
Miloslav Trmač
45dc2071ca Merge pull request #1890 from lsm5/fix-copr-el8-debuginfo
[CI:BUILD] enable debuginfo for el8 copr builds
2023-02-02 17:54:12 +01:00
Lokesh Mandvekar
007f01c656 [CI:BUILD] enable debuginfo for el8 copr builds
Follow up on #1816.

Signed-off-by: Lokesh Mandvekar <lsm5@fedoraproject.org>
2023-02-02 16:52:10 +05:30
Miloslav Trmač
2fc5f2ef60 Merge pull request #1816 from lsm5/fix-copr
[CI:BUILD] copr: fix el8 build
2023-02-01 19:08:47 +01:00
Lokesh Mandvekar
036bf59885 [CI:BUILD] copr: fix el8 build and enable debuginfo
Fedora 35 builds are disabled, so remove fedora 35
conditionals while we're at it.

Bump containers-common dependency to match with that in
podman.spec.rpkg.

TODO: fix debuginfo for rhel8

Signed-off-by: Lokesh Mandvekar <lsm5@fedoraproject.org>
2023-02-01 20:52:38 +05:30
Miloslav Trmač
ce16b5b007 Merge pull request #1888 from cevich/managed_ci_vm_images
Cirrus: Use human-readable CI VM Images
2023-02-01 14:42:01 +01:00
Chris Evich
f9406bb06b Cirrus: Use human-readable CI VM Images
Image content hasn't changed much; the biggest change here is the
`$IMAGE_SUFFIX` value. This new schema is also fully manageable
by Renovate, allowing a tag-push to c/automation_images to create image
update PRs in all repos automatically.

Signed-off-by: Chris Evich <cevich@redhat.com>
2023-01-31 17:03:35 -05:00
Miloslav Trmač
e7f4af0651 Merge pull request #1885 from rhatdan/main
Update module gopkg.in/yaml.v2 to v3
2023-01-31 17:29:05 +01:00
Daniel J Walsh
b41b85abc4 Update module gopkg.in/yaml.v2 to v3
Signed-off-by: Renovate Bot <bot@renovateapp.com>
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
2023-01-31 05:50:34 -06:00
Valentin Rothberg
5e0156e13b Merge pull request #1886 from mtrmac/yaml-unit-test
Add unit tests for tlsVerifyConfig's yaml.Unmarshaler
2023-01-30 08:23:43 +01:00
Miloslav Trmač
d2fbec3508 Add unit tests for tlsVerifyConfig's yaml.Unmarshaler
This is a security-critical piece of code; make sure we detect
if it breaks.

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2023-01-27 22:03:36 +01:00
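
A simplified, self-contained sketch of the behavior those tests need to pin down (hypothetical field names, using gopkg.in/yaml.v3): when the tls-verify key is absent the custom unmarshaler is never invoked, so the zero value must mean "verification enforced".

package main

import (
	"fmt"

	"gopkg.in/yaml.v3"
)

type tlsVerify struct {
	skip bool // true means TLS verification is disabled
}

func (t *tlsVerify) UnmarshalYAML(value *yaml.Node) error {
	var verify bool
	if err := value.Decode(&verify); err != nil {
		return err
	}
	t.skip = !verify
	return nil
}

type registryConfig struct {
	TLSVerify tlsVerify `yaml:"tls-verify"`
}

func main() {
	var explicit, omitted registryConfig
	if err := yaml.Unmarshal([]byte("tls-verify: false"), &explicit); err != nil {
		panic(err)
	}
	if err := yaml.Unmarshal([]byte("{}"), &omitted); err != nil {
		panic(err)
	}
	fmt.Println(explicit.TLSVerify.skip, omitted.TLSVerify.skip) // true false
}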
Miloslav Trmač
601d255bb3 Merge pull request #1884 from TomSweeneyRedHat/dev/tsweeney/langread
[CI:DOCS] Fix up language in README
2023-01-27 17:04:11 +01:00
tomsweeneyredhat
9e24a19543 [CI:DOCS] Fix up language in README
Touch up a few leftover comments from #1871

[NO NEW TESTS NEEDED]

Signed-off-by: tomsweeneyredhat <tsweeney@redhat.com>
2023-01-26 18:11:09 -05:00
Daniel J Walsh
968670116c Merge pull request #1883 from rhatdan/VERSION
Bump to v1.11.0
2023-01-26 16:13:24 -05:00
1084 changed files with 50387 additions and 30661 deletions


@@ -26,7 +26,7 @@ env:
FEDORA_NAME: "fedora-37"
# Google-cloud VM Images
IMAGE_SUFFIX: "c6300530360713216"
IMAGE_SUFFIX: "c20230405t152256z-f37f36d12"
FEDORA_CACHE_IMAGE_NAME: "fedora-${IMAGE_SUFFIX}"
# Container FQIN's
@@ -52,7 +52,9 @@ validate_task:
image: '${SKOPEO_CIDEV_CONTAINER_FQIN}'
cpu: 4
memory: 8
script: |
setup_script: |
make tools
test_script: |
make validate-local
make vendor && hack/tree_status.sh
@@ -91,7 +93,7 @@ osx_task:
export PATH=$GOPATH/bin:$PATH
brew update
brew install gpgme go go-md2man
go install golang.org/x/lint/golint@latest
make tools
test_script: |
export PATH=$GOPATH/bin:$PATH
go version


@@ -1,10 +0,0 @@
version: 2
updates:
- package-ecosystem: gomod
directory: "/"
schedule:
interval: daily
time: "10:00"
timezone: Europe/Berlin
open-pull-requests-limit: 10


@@ -56,19 +56,4 @@
/*************************************************
***** Golang-specific configuration options *****
*************************************************/
"golang": {
// N/B: LAST MATCHING RULE WINS
// https://docs.renovatebot.com/configuration-options/#packagerules
"packageRules": [
// Package version retraction (https://go.dev/ref/mod#go-mod-file-retract)
// is broken in Renovate
// ref: https://github.com/renovatebot/renovate/issues/13012
{
"matchPackageNames": ["github.com/containers/common"],
// Both v1.0.0 and v1.0.1 should be ignored.
"allowedVersions": "!/v((1.0.0)|(1.0.1))$/"
},
],
},
}


@@ -17,7 +17,7 @@ jobs:
pull-requests: write # for actions/stale to close stale PRs
runs-on: ubuntu-latest
steps:
- uses: actions/stale@v7
- uses: actions/stale@v8
with:
repo-token: ${{ secrets.GITHUB_TOKEN }}
stale-issue-message: 'A friendly reminder that this issue had no activity for 30 days.'

.packit.sh Normal file (26 lines)

@@ -0,0 +1,26 @@
#!/usr/bin/env bash
# This script handles any custom processing of the spec file generated using the `post-upstream-clone`
# action and gets used by the fix-spec-file action in .packit.yaml.
set -eo pipefail
# Get Version from HEAD
VERSION=$(grep '^const Version' version/version.go | cut -d\" -f2 | sed -e 's/-/~/')
# Generate source tarball
git archive --prefix=skopeo-$VERSION/ -o skopeo-$VERSION.tar.gz HEAD
# RPM Spec modifications
# Update Version in spec with Version from Cargo.toml
sed -i "s/^Version:.*/Version: $VERSION/" skopeo.spec
# Update Release in spec with Packit's release envvar
sed -i "s/^Release:.*/Release: $PACKIT_RPMSPEC_RELEASE%{?dist}/" skopeo.spec
# Update Source tarball name in spec
sed -i "s/^Source:.*.tar.gz/Source: skopeo-$VERSION.tar.gz/" skopeo.spec
# Update setup macro to use the correct build dir
sed -i "s/^%setup.*/%autosetup -Sgit -n %{name}-$VERSION/" skopeo.spec

.packit.yaml Normal file (34 lines)

@@ -0,0 +1,34 @@
---
# See the documentation for more information:
# https://packit.dev/docs/configuration/
# NOTE: The Packit copr_build tasks help to check if every commit builds on
# supported Fedora and CentOS Stream arches.
# They do not block the current Cirrus-based workflow.
# Build targets can be found at:
# https://copr.fedorainfracloud.org/coprs/rhcontainerbot/packit-builds/
specfile_path: skopeo.spec
jobs:
- &copr
job: copr_build
trigger: pull_request
owner: rhcontainerbot
project: packit-builds
enable_net: true
srpm_build_deps:
- make
- rpkg
actions:
post-upstream-clone:
- "rpkg spec --outdir ./"
fix-spec-file:
- "bash .packit.sh"
- <<: *copr
# Run on commit to main branch
trigger: commit
branch: main
project: podman-next


@@ -38,13 +38,6 @@ endif
export CONTAINER_RUNTIME ?= $(if $(shell command -v podman ;),podman,docker)
GOMD2MAN ?= $(if $(shell command -v go-md2man ;),go-md2man,$(GOBIN)/go-md2man)
# Go module support: set `-mod=vendor` to use the vendored sources.
# See also hack/make.sh.
ifeq ($(shell go help mod >/dev/null 2>&1 && echo true), true)
GO:=GO111MODULE=on $(GO)
MOD_VENDOR=-mod=vendor
endif
ifeq ($(DEBUG), 1)
override GOGCFLAGS += -N -l
endif
@@ -56,17 +49,11 @@ ifeq ($(GOOS), linux)
endif
# If $TESTFLAGS is set, it is passed as extra arguments to 'go test'.
# You can increase test output verbosity with the option '-test.vv'.
# You can select certain tests to run, with `-test.run <regex>` for example:
# You can select certain tests to run, with `-run <regex>` for example:
#
# make test-unit TESTFLAGS='-test.run ^TestManifestDigest$'
#
# For integration test, we use [gocheck](https://labix.org/gocheck).
# You can increase test output verbosity with the option '-check.vv'.
# You can limit test selection with `-check.f <regex>`, for example:
#
# make test-integration TESTFLAGS='-check.f CopySuite.TestCopy.*'
export TESTFLAGS ?= -v -check.v -test.timeout=15m
# make test-unit TESTFLAGS='-run ^TestManifestDigest$'
# make test-integration TESTFLAGS='-run copySuite.TestCopy.*'
export TESTFLAGS ?= -timeout=15m
# This is assumed to be set non-empty when operating inside a CI/automation environment
CI ?=
@@ -139,9 +126,9 @@ binary: cmd/skopeo
# Build w/o using containers
.PHONY: bin/skopeo
bin/skopeo:
$(GO) build $(MOD_VENDOR) ${GO_DYN_FLAGS} ${SKOPEO_LDFLAGS} -gcflags "$(GOGCFLAGS)" -tags "$(BUILDTAGS)" -o $@ ./cmd/skopeo
$(GO) build ${GO_DYN_FLAGS} ${SKOPEO_LDFLAGS} -gcflags "$(GOGCFLAGS)" -tags "$(BUILDTAGS)" -o $@ ./cmd/skopeo
bin/skopeo.%:
GOOS=$(word 2,$(subst ., ,$@)) GOARCH=$(word 3,$(subst ., ,$@)) $(GO) build $(MOD_VENDOR) ${SKOPEO_LDFLAGS} -tags "containers_image_openpgp $(BUILDTAGS)" -o $@ ./cmd/skopeo
GOOS=$(word 2,$(subst ., ,$@)) GOARCH=$(word 3,$(subst ., ,$@)) $(GO) build ${SKOPEO_LDFLAGS} -tags "containers_image_openpgp $(BUILDTAGS)" -o $@ ./cmd/skopeo
local-cross: bin/skopeo.darwin.amd64 bin/skopeo.linux.arm bin/skopeo.linux.arm64 bin/skopeo.windows.386.exe bin/skopeo.windows.amd64.exe
$(MANPAGES): %: %.md
@@ -194,6 +181,11 @@ install-completions: completions
shell:
$(CONTAINER_RUN) bash
tools:
if [ ! -x "$(GOBIN)/golangci-lint" ]; then \
curl -sSfL https://raw.githubusercontent.com/golangci/golangci-lint/master/install.sh | sh -s -- -b $(GOBIN) v1.52.2 ; \
fi
check: validate test-unit test-integration test-system
test-integration:
@@ -206,7 +198,8 @@ test-integration:
# Intended for CI, assumed to be running in quay.io/libpod/skopeo_cidev container.
test-integration-local: bin/skopeo
hack/make.sh test-integration
hack/warn-destructive-tests.sh
hack/test-integration.sh
# complicated set of options needed to run podman-in-podman
test-system:
@@ -222,7 +215,8 @@ test-system:
# Intended for CI, assumed to already be running in quay.io/libpod/skopeo_cidev container.
test-system-local: bin/skopeo
hack/make.sh test-system
hack/warn-destructive-tests.sh
hack/test-system.sh
test-unit:
# Just call (make test unit-local) here instead of worrying about environment differences
@@ -236,16 +230,19 @@ test-all-local: validate-local validate-docs test-unit-local
.PHONY: validate-local
validate-local:
BUILDTAGS="${BUILDTAGS}" hack/make.sh validate-git-marks validate-gofmt validate-lint validate-vet
hack/validate-git-marks.sh
hack/validate-gofmt.sh
GOBIN=$(GOBIN) hack/validate-lint.sh
BUILDTAGS="${BUILDTAGS}" hack/validate-vet.sh
# This invokes bin/skopeo, hence cannot be run as part of validate-local
.PHONY: validate-docs
validate-docs:
validate-docs: bin/skopeo
hack/man-page-checker
hack/xref-helpmsgs-manpages
test-unit-local: bin/skopeo
$(GO) test $(MOD_VENDOR) -tags "$(BUILDTAGS)" $$($(GO) list $(MOD_VENDOR) -tags "$(BUILDTAGS)" -e ./... | grep -v '^github\.com/containers/skopeo/\(integration\|vendor/.*\)$$')
test-unit-local:
$(GO) test -tags "$(BUILDTAGS)" $$($(GO) list -tags "$(BUILDTAGS)" -e ./... | grep -v '^github\.com/containers/skopeo/\(integration\|vendor/.*\)$$')
vendor:
$(GO) mod tidy


@@ -1,8 +1,4 @@
<!--- skopeo [![Build Status](https://travis-ci.org/containers/skopeo.svg?branch=main)](https://travis-ci.org/containers/skopeo)
=
--->
<img src="https://cdn.rawgit.com/containers/skopeo/main/docs/skopeo.svg" width="250">
<img src="https://cdn.rawgit.com/containers/skopeo/main/docs/skopeo.svg" width="250" alt="Skopeo">
----
@@ -43,6 +39,12 @@ Skopeo works with API V2 container image registries such as [docker.io](https://
* oci:path:tag
An image tag in a directory compliant with "Open Container Image Layout Specification" at path.
[Obtaining skopeo](./install.md)
-
For a detailed description how to install or build skopeo, see
[install.md](./install.md).
## Inspecting a repository
`skopeo` is able to _inspect_ a repository on a container registry and fetch images layers.
The _inspect_ command fetches the repository's manifest and it is able to show you a `docker inspect`-like
@@ -193,12 +195,6 @@ $ skopeo inspect --creds=testuser:testpassword docker://myregistrydomain.com:500
$ skopeo copy --src-creds=testuser:testpassword docker://myregistrydomain.com:5000/private oci:local_oci_image
```
[Obtaining skopeo](./install.md)
-
For a detailed description how to install or build skopeo, see
[install.md](./install.md).
Contributing
-


@@ -62,7 +62,7 @@ func TestGenerateSigstoreKey(t *testing.T) {
// we have to trigger a write failure.
// Success
// Just a smoke-test, useability of the keys is tested in the generate implementation.
// Just a smoke-test, usability of the keys is tested in the generate implementation.
dir := t.TempDir()
prefix := filepath.Join(dir, "prefix")
passphraseFile := filepath.Join(dir, "passphrase")


@@ -6,8 +6,6 @@ import (
"fmt"
"io"
"strings"
"text/tabwriter"
"text/template"
"github.com/containers/common/pkg/report"
"github.com/containers/common/pkg/retry"
@@ -53,8 +51,8 @@ See skopeo(1) section "IMAGE NAMES" for the expected format
`, strings.Join(transports.ListNames(), ", ")),
RunE: commandAction(opts.run),
Example: `skopeo inspect docker://registry.fedoraproject.org/fedora
skopeo inspect --config docker://docker.io/alpine
skopeo inspect --format "Name: {{.Name}} Digest: {{.Digest}}" docker://registry.access.redhat.com/ubi8`,
skopeo inspect --config docker://docker.io/alpine
skopeo inspect --format "Name: {{.Name}} Digest: {{.Digest}}" docker://registry.access.redhat.com/ubi8`,
ValidArgsFunction: autocompleteSupportedTransports,
}
adjustUsage(cmd)
@@ -74,7 +72,6 @@ func (opts *inspectOptions) run(args []string, stdout io.Writer) (retErr error)
rawManifest []byte
src types.ImageSource
imgInspect *types.ImageInspectInfo
data []interface{}
)
ctx, cancel := opts.global.commandTimeoutContext()
defer cancel()
@@ -151,18 +148,7 @@ func (opts *inspectOptions) run(args []string, stdout io.Writer) (retErr error)
}, opts.retryOpts); err != nil {
return fmt.Errorf("Error reading OCI-formatted configuration data: %w", err)
}
if report.IsJSON(opts.format) || opts.format == "" {
var out []byte
out, err = json.MarshalIndent(config, "", " ")
if err == nil {
fmt.Fprintf(stdout, "%s\n", string(out))
}
} else {
row := "{{range . }}" + report.NormalizeFormat(opts.format) + "{{end}}"
data = append(data, config)
err = printTmpl(stdout, row, data)
}
if err != nil {
if err := opts.writeOutput(stdout, config); err != nil {
return fmt.Errorf("Error writing OCI-formatted configuration data to standard output: %w", err)
}
return nil
@@ -228,23 +214,23 @@ func (opts *inspectOptions) run(args []string, stdout io.Writer) (retErr error)
logrus.Warnf("Registry disallows tag list retrieval; skipping")
}
}
return opts.writeOutput(stdout, outputData)
}
// writeOutput writes data depending on opts.format to stdout
func (opts *inspectOptions) writeOutput(stdout io.Writer, data any) error {
if report.IsJSON(opts.format) || opts.format == "" {
out, err := json.MarshalIndent(outputData, "", " ")
out, err := json.MarshalIndent(data, "", " ")
if err == nil {
fmt.Fprintf(stdout, "%s\n", string(out))
}
return err
}
row := "{{range . }}" + report.NormalizeFormat(opts.format) + "{{end}}"
data = append(data, outputData)
return printTmpl(stdout, row, data)
}
func printTmpl(stdout io.Writer, row string, data []interface{}) error {
t, err := template.New("skopeo inspect").Parse(row)
rpt, err := report.New(stdout, "skopeo inspect").Parse(report.OriginUser, opts.format)
if err != nil {
return err
}
w := tabwriter.NewWriter(stdout, 8, 2, 2, ' ', 0)
return t.Execute(w, data)
defer rpt.Flush()
return rpt.Execute([]any{data})
}


@@ -16,6 +16,7 @@ import (
"github.com/containers/image/v5/transports/alltransports"
"github.com/containers/image/v5/types"
"github.com/spf13/cobra"
"golang.org/x/exp/maps"
)
// tagListOutput is the output format of (skopeo list-tags), primarily so that we can format it with a simple json.MarshalIndent.
@@ -37,10 +38,7 @@ var transportHandlers = map[string]func(ctx context.Context, sys *types.SystemCo
// supportedTransports returns all the supported transports
func supportedTransports(joinStr string) string {
res := make([]string, 0, len(transportHandlers))
for handlerName := range transportHandlers {
res = append(res, handlerName)
}
res := maps.Keys(transportHandlers)
sort.Strings(res)
return strings.Join(res, joinStr)
}
@@ -84,12 +82,12 @@ func parseDockerRepositoryReference(refString string) (types.ImageReference, err
return nil, fmt.Errorf("docker: image reference %s does not start with %s://", refString, docker.Transport.Name())
}
parts := strings.SplitN(refString, ":", 2)
if len(parts) != 2 {
_, dockerImageName, hasColon := strings.Cut(refString, ":")
if !hasColon {
return nil, fmt.Errorf(`Invalid image name "%s", expected colon-separated transport:reference`, refString)
}
ref, err := reference.ParseNormalizedNamed(strings.TrimPrefix(parts[1], "//"))
ref, err := reference.ParseNormalizedNamed(strings.TrimPrefix(dockerImageName, "//"))
if err != nil {
return nil, err
}
@@ -130,7 +128,7 @@ func listDockerRepoTags(ctx context.Context, sys *types.SystemContext, opts *tag
}
// return the tagLists from a docker archive file
func listDockerArchiveTags(ctx context.Context, sys *types.SystemContext, opts *tagsOptions, userInput string) (repositoryName string, tagListing []string, err error) {
func listDockerArchiveTags(_ context.Context, sys *types.SystemContext, _ *tagsOptions, userInput string) (repositoryName string, tagListing []string, err error) {
ref, err := alltransports.ParseImageName(userInput)
if err != nil {
return


@@ -16,7 +16,6 @@ func TestDockerRepositoryReferenceParser(t *testing.T) {
{"docker://somehost.com"}, // Valid default expansion
{"docker://nginx"}, // Valid default expansion
} {
ref, err := parseDockerRepositoryReference(test[0])
require.NoError(t, err)
expected, err := alltransports.ParseImageName(test[0])
@@ -47,7 +46,6 @@ func TestDockerRepositoryReferenceParserDrift(t *testing.T) {
{"docker://somehost.com", "docker.io/library/somehost.com"}, // Valid default expansion
{"docker://nginx", "docker.io/library/nginx"}, // Valid default expansion
} {
ref, err := parseDockerRepositoryReference(test[0])
ref2, err2 := alltransports.ParseImageName(test[0])


@@ -55,14 +55,12 @@ func createApp() (*cobra.Command, *globalOptions) {
opts := globalOptions{}
rootCommand := &cobra.Command{
Use: "skopeo",
Long: "Various operations with container images and container image registries",
RunE: requireSubcommand,
PersistentPreRunE: func(cmd *cobra.Command, args []string) error {
return opts.before(cmd)
},
SilenceUsage: true,
SilenceErrors: true,
Use: "skopeo",
Long: "Various operations with container images and container image registries",
RunE: requireSubcommand,
PersistentPreRunE: opts.before,
SilenceUsage: true,
SilenceErrors: true,
// Hide the completion command which is provided by cobra
CompletionOptions: cobra.CompletionOptions{HiddenDefaultCmd: true},
// This is documented to parse "local" (non-PersistentFlags) flags of parent commands before
@@ -115,7 +113,7 @@ func createApp() (*cobra.Command, *globalOptions) {
}
// before is run by the cli package for any command, before running the command-specific handler.
func (opts *globalOptions) before(cmd *cobra.Command) error {
func (opts *globalOptions) before(cmd *cobra.Command, args []string) error {
if opts.debug {
logrus.SetLevel(logrus.DebugLevel)
}


@@ -115,7 +115,7 @@ type request struct {
// Method is the name of the function
Method string `json:"method"`
// Args is the arguments (parsed inside the function)
Args []interface{} `json:"args"`
Args []any `json:"args"`
}
// reply is serialized to JSON as the return value from a function call.
@@ -123,7 +123,7 @@ type reply struct {
// Success is true if and only if the call succeeded.
Success bool `json:"success"`
// Value is an arbitrary value (or values, as array/map) returned from the call.
Value interface{} `json:"value"`
Value any `json:"value"`
// PipeID is an index into open pipes, and should be passed to FinishPipe
PipeID uint32 `json:"pipeid"`
// Error should be non-empty if Success == false
@@ -133,7 +133,7 @@ type reply struct {
// replyBuf is our internal deserialization of reply plus optional fd
type replyBuf struct {
// value will be converted to a reply Value
value interface{}
value any
// fd is the read half of a pipe, passed back to the client
fd *os.File
// pipeid will be provided to the client as PipeID, an index into our open pipes
@@ -185,7 +185,7 @@ type convertedLayerInfo struct {
}
// Initialize performs one-time initialization, and returns the protocol version
func (h *proxyHandler) Initialize(args []interface{}) (replyBuf, error) {
func (h *proxyHandler) Initialize(args []any) (replyBuf, error) {
h.lock.Lock()
defer h.lock.Unlock()
@@ -214,7 +214,7 @@ func (h *proxyHandler) Initialize(args []interface{}) (replyBuf, error) {
// OpenImage accepts a string image reference i.e. TRANSPORT:REF - like `skopeo copy`.
// The return value is an opaque integer handle.
func (h *proxyHandler) OpenImage(args []interface{}) (replyBuf, error) {
func (h *proxyHandler) OpenImage(args []any) (replyBuf, error) {
return h.openImageImpl(args, false)
}
@@ -237,7 +237,7 @@ func isNotFoundImageError(err error) bool {
errors.Is(err, ocilayout.ImageNotFoundError{})
}
func (h *proxyHandler) openImageImpl(args []interface{}, allowNotFound bool) (replyBuf, error) {
func (h *proxyHandler) openImageImpl(args []any, allowNotFound bool) (replyBuf, error) {
h.lock.Lock()
defer h.lock.Unlock()
var ret replyBuf
@@ -282,11 +282,11 @@ func (h *proxyHandler) openImageImpl(args []interface{}, allowNotFound bool) (re
// OpenImage accepts a string image reference i.e. TRANSPORT:REF - like `skopeo copy`.
// The return value is an opaque integer handle. If the image does not exist, zero
// is returned.
func (h *proxyHandler) OpenImageOptional(args []interface{}) (replyBuf, error) {
func (h *proxyHandler) OpenImageOptional(args []any) (replyBuf, error) {
return h.openImageImpl(args, true)
}
func (h *proxyHandler) CloseImage(args []interface{}) (replyBuf, error) {
func (h *proxyHandler) CloseImage(args []any) (replyBuf, error) {
h.lock.Lock()
defer h.lock.Unlock()
var ret replyBuf
@@ -307,7 +307,7 @@ func (h *proxyHandler) CloseImage(args []interface{}) (replyBuf, error) {
return ret, nil
}
func parseImageID(v interface{}) (uint32, error) {
func parseImageID(v any) (uint32, error) {
imgidf, ok := v.(float64)
if !ok {
return 0, fmt.Errorf("expecting integer imageid, not %T", v)
@@ -316,7 +316,7 @@ func parseImageID(v interface{}) (uint32, error) {
}
// parseUint64 validates that a number fits inside a JavaScript safe integer
func parseUint64(v interface{}) (uint64, error) {
func parseUint64(v any) (uint64, error) {
f, ok := v.(float64)
if !ok {
return 0, fmt.Errorf("expecting numeric, not %T", v)
@@ -327,7 +327,7 @@ func parseUint64(v interface{}) (uint64, error) {
return uint64(f), nil
}
func (h *proxyHandler) parseImageFromID(v interface{}) (*openImage, error) {
func (h *proxyHandler) parseImageFromID(v any) (*openImage, error) {
imgid, err := parseImageID(v)
if err != nil {
return nil, err
@@ -357,7 +357,7 @@ func (h *proxyHandler) allocPipe() (*os.File, *activePipe, error) {
// returnBytes generates a return pipe() from a byte array
// In the future it might be nicer to return this via memfd_create()
func (h *proxyHandler) returnBytes(retval interface{}, buf []byte) (replyBuf, error) {
func (h *proxyHandler) returnBytes(retval any, buf []byte) (replyBuf, error) {
var ret replyBuf
piper, f, err := h.allocPipe()
if err != nil {
@@ -419,7 +419,7 @@ func (h *proxyHandler) cacheTargetManifest(img *openImage) error {
// GetManifest returns a copy of the manifest, converted to OCI format, along with the original digest.
// Manifest lists are resolved to the current operating system and architecture.
func (h *proxyHandler) GetManifest(args []interface{}) (replyBuf, error) {
func (h *proxyHandler) GetManifest(args []any) (replyBuf, error) {
h.lock.Lock()
defer h.lock.Unlock()
@@ -490,7 +490,7 @@ func (h *proxyHandler) GetManifest(args []interface{}) (replyBuf, error) {
// GetFullConfig returns a copy of the image configuration, converted to OCI format.
// https://github.com/opencontainers/image-spec/blob/main/config.md
func (h *proxyHandler) GetFullConfig(args []interface{}) (replyBuf, error) {
func (h *proxyHandler) GetFullConfig(args []any) (replyBuf, error) {
h.lock.Lock()
defer h.lock.Unlock()
@@ -527,7 +527,7 @@ func (h *proxyHandler) GetFullConfig(args []interface{}) (replyBuf, error) {
// GetConfig returns a copy of the container runtime configuration, converted to OCI format.
// Note that due to a historical mistake, this returns not the full image configuration,
// but just the container runtime configuration. You should use GetFullConfig instead.
func (h *proxyHandler) GetConfig(args []interface{}) (replyBuf, error) {
func (h *proxyHandler) GetConfig(args []any) (replyBuf, error) {
h.lock.Lock()
defer h.lock.Unlock()
@@ -562,7 +562,7 @@ func (h *proxyHandler) GetConfig(args []interface{}) (replyBuf, error) {
}
// GetBlob fetches a blob, performing digest verification.
func (h *proxyHandler) GetBlob(args []interface{}) (replyBuf, error) {
func (h *proxyHandler) GetBlob(args []any) (replyBuf, error) {
h.lock.Lock()
defer h.lock.Unlock()
@@ -632,7 +632,7 @@ func (h *proxyHandler) GetBlob(args []interface{}) (replyBuf, error) {
// This needs to be called since the data returned by GetManifest() does not allow to correctly
// calling GetBlob() for the containers-storage: transport (which doesnt store the original compressed
// representations referenced in the manifest).
func (h *proxyHandler) GetLayerInfo(args []interface{}) (replyBuf, error) {
func (h *proxyHandler) GetLayerInfo(args []any) (replyBuf, error) {
h.lock.Lock()
defer h.lock.Unlock()
@@ -668,7 +668,7 @@ func (h *proxyHandler) GetLayerInfo(args []interface{}) (replyBuf, error) {
layerInfos = img.LayerInfos()
}
var layers []convertedLayerInfo
layers := make([]convertedLayerInfo, 0, len(layerInfos))
for _, layer := range layerInfos {
layers = append(layers, convertedLayerInfo{layer.Digest, layer.Size, layer.MediaType})
}
@@ -678,7 +678,7 @@ func (h *proxyHandler) GetLayerInfo(args []interface{}) (replyBuf, error) {
}
// FinishPipe waits for the worker goroutine to finish, and closes the write side of the pipe.
func (h *proxyHandler) FinishPipe(args []interface{}) (replyBuf, error) {
func (h *proxyHandler) FinishPipe(args []any) (replyBuf, error) {
h.lock.Lock()
defer h.lock.Unlock()


@@ -6,6 +6,7 @@ import (
"fmt"
"io"
"os"
"strings"
"github.com/containers/image/v5/pkg/cli"
"github.com/containers/image/v5/signature"
@@ -41,12 +42,12 @@ func (opts *standaloneSignOptions) run(args []string, stdout io.Writer) error {
manifest, err := os.ReadFile(manifestPath)
if err != nil {
return fmt.Errorf("Error reading %s: %v", manifestPath, err)
return fmt.Errorf("Error reading %s: %w", manifestPath, err)
}
mech, err := signature.NewGPGSigningMechanism()
if err != nil {
return fmt.Errorf("Error initializing GPG: %v", err)
return fmt.Errorf("Error initializing GPG: %w", err)
}
defer mech.Close()
@@ -57,25 +58,31 @@ func (opts *standaloneSignOptions) run(args []string, stdout io.Writer) error {
signature, err := signature.SignDockerManifestWithOptions(manifest, dockerReference, mech, fingerprint, &signature.SignOptions{Passphrase: passphrase})
if err != nil {
return fmt.Errorf("Error creating signature: %v", err)
return fmt.Errorf("Error creating signature: %w", err)
}
if err := os.WriteFile(opts.output, signature, 0644); err != nil {
return fmt.Errorf("Error writing signature to %s: %v", opts.output, err)
return fmt.Errorf("Error writing signature to %s: %w", opts.output, err)
}
return nil
}
type standaloneVerifyOptions struct {
publicKeyFile string
}
func standaloneVerifyCmd() *cobra.Command {
opts := standaloneVerifyOptions{}
cmd := &cobra.Command{
Use: "standalone-verify MANIFEST DOCKER-REFERENCE KEY-FINGERPRINT SIGNATURE",
Use: "standalone-verify MANIFEST DOCKER-REFERENCE KEY-FINGERPRINTS SIGNATURE",
Short: "Verify a signature using local files",
RunE: commandAction(opts.run),
Long: `Verify a signature using local files
KEY-FINGERPRINTS can be a comma separated list of fingerprints, or "any" if you trust all the keys in the public key file.`,
RunE: commandAction(opts.run),
}
flags := cmd.Flags()
flags.StringVar(&opts.publicKeyFile, "public-key-file", "", `File containing public keys. If not specified, will use local GPG keys.`)
adjustUsage(cmd)
return cmd
}
@@ -86,29 +93,51 @@ func (opts *standaloneVerifyOptions) run(args []string, stdout io.Writer) error
}
manifestPath := args[0]
expectedDockerReference := args[1]
expectedFingerprint := args[2]
expectedFingerprints := strings.Split(args[2], ",")
signaturePath := args[3]
if opts.publicKeyFile == "" && len(expectedFingerprints) == 1 && expectedFingerprints[0] == "any" {
return fmt.Errorf("Cannot use any fingerprint without a public key file")
}
unverifiedManifest, err := os.ReadFile(manifestPath)
if err != nil {
return fmt.Errorf("Error reading manifest from %s: %v", manifestPath, err)
return fmt.Errorf("Error reading manifest from %s: %w", manifestPath, err)
}
unverifiedSignature, err := os.ReadFile(signaturePath)
if err != nil {
return fmt.Errorf("Error reading signature from %s: %v", signaturePath, err)
return fmt.Errorf("Error reading signature from %s: %w", signaturePath, err)
}
mech, err := signature.NewGPGSigningMechanism()
if err != nil {
return fmt.Errorf("Error initializing GPG: %v", err)
var mech signature.SigningMechanism
var publicKeyfingerprints []string
if opts.publicKeyFile != "" {
publicKeys, err := os.ReadFile(opts.publicKeyFile)
if err != nil {
return fmt.Errorf("Error reading public keys from %s: %w", opts.publicKeyFile, err)
}
mech, publicKeyfingerprints, err = signature.NewEphemeralGPGSigningMechanism(publicKeys)
if err != nil {
return fmt.Errorf("Error initializing GPG: %w", err)
}
} else {
mech, err = signature.NewGPGSigningMechanism()
if err != nil {
return fmt.Errorf("Error initializing GPG: %w", err)
}
}
defer mech.Close()
sig, err := signature.VerifyDockerManifestSignature(unverifiedSignature, unverifiedManifest, expectedDockerReference, mech, expectedFingerprint)
if err != nil {
return fmt.Errorf("Error verifying signature: %v", err)
if len(expectedFingerprints) == 1 && expectedFingerprints[0] == "any" {
expectedFingerprints = publicKeyfingerprints
}
fmt.Fprintf(stdout, "Signature verified, digest %s\n", sig.DockerManifestDigest)
sig, verificationFingerprint, err := signature.VerifyImageManifestSignatureUsingKeyIdentityList(unverifiedSignature, unverifiedManifest, expectedDockerReference, mech, expectedFingerprints)
if err != nil {
return fmt.Errorf("Error verifying signature: %w", err)
}
fmt.Fprintf(stdout, "Signature verified using fingerprint %s, digest %s\n", verificationFingerprint, sig.DockerManifestDigest)
return nil
}
@@ -141,7 +170,7 @@ func (opts *untrustedSignatureDumpOptions) run(args []string, stdout io.Writer)
untrustedSignature, err := os.ReadFile(untrustedSignaturePath)
if err != nil {
return fmt.Errorf("Error reading untrusted signature from %s: %v", untrustedSignaturePath, err)
return fmt.Errorf("Error reading untrusted signature from %s: %w", untrustedSignaturePath, err)
}
untrustedInfo, err := signature.GetUntrustedSignatureInformationWithoutVerifying(untrustedSignature)


@@ -127,11 +127,36 @@ func TestStandaloneVerify(t *testing.T) {
dockerReference, fixturesTestKeyFingerprint, "fixtures/corrupt.signature")
assertTestFailed(t, out, err, "Error verifying signature")
// Error using any without a public key file
out, err = runSkopeo("standalone-verify", manifestPath,
dockerReference, "any", signaturePath)
assertTestFailed(t, out, err, "Cannot use any fingerprint without a public key file")
// Success
out, err = runSkopeo("standalone-verify", manifestPath,
dockerReference, fixturesTestKeyFingerprint, signaturePath)
assert.NoError(t, err)
assert.Equal(t, "Signature verified, digest "+fixturesTestImageManifestDigest.String()+"\n", out)
assert.Equal(t, "Signature verified using fingerprint "+fixturesTestKeyFingerprint+", digest "+fixturesTestImageManifestDigest.String()+"\n", out)
// Using multiple fingerprints
out, err = runSkopeo("standalone-verify", manifestPath,
dockerReference, "0123456789ABCDEF0123456789ABCDEF01234567,"+fixturesTestKeyFingerprint+",DEADBEEFDEADBEEFDEADBEEFDEADBEEFDEADBEEF", signaturePath)
assert.NoError(t, err)
assert.Equal(t, "Signature verified using fingerprint "+fixturesTestKeyFingerprint+", digest "+fixturesTestImageManifestDigest.String()+"\n", out)
// Using a public key file
t.Setenv("GNUPGHOME", "")
out, err = runSkopeo("standalone-verify", "--public-key-file", "fixtures/pubring.gpg", manifestPath,
dockerReference, fixturesTestKeyFingerprint, signaturePath)
assert.NoError(t, err)
assert.Equal(t, "Signature verified using fingerprint "+fixturesTestKeyFingerprint+", digest "+fixturesTestImageManifestDigest.String()+"\n", out)
// Using a public key file matching any public key
t.Setenv("GNUPGHOME", "")
out, err = runSkopeo("standalone-verify", "--public-key-file", "fixtures/pubring.gpg", manifestPath,
dockerReference, "any", signaturePath)
assert.NoError(t, err)
assert.Equal(t, "Signature verified using fingerprint "+fixturesTestKeyFingerprint+", digest "+fixturesTestImageManifestDigest.String()+"\n", out)
}
func TestUntrustedSignatureDump(t *testing.T) {


@@ -26,7 +26,8 @@ import (
"github.com/opencontainers/go-digest"
"github.com/sirupsen/logrus"
"github.com/spf13/cobra"
"gopkg.in/yaml.v2"
"golang.org/x/exp/slices"
"gopkg.in/yaml.v3"
)
// syncOptions contains information retrieved from the skopeo sync command line.
@@ -131,12 +132,12 @@ See skopeo-sync(1) for details.
}
// UnmarshalYAML is the implementation of the Unmarshaler interface method
// method for the tlsVerifyConfig type.
// for the tlsVerifyConfig type.
// It unmarshals the 'tls-verify' YAML key so that, when the key is not
// specified, tls verification is enforced.
func (tls *tlsVerifyConfig) UnmarshalYAML(unmarshal func(interface{}) error) error {
func (tls *tlsVerifyConfig) UnmarshalYAML(value *yaml.Node) error {
var verify bool
if err := unmarshal(&verify); err != nil {
if err := value.Decode(&verify); err != nil {
return err
}
@@ -523,26 +524,17 @@ func (opts *syncOptions) run(args []string, stdout io.Writer) (retErr error) {
}()
// validate source and destination options
contains := func(val string, list []string) (_ bool) {
for _, l := range list {
if l == val {
return true
}
}
return
}
if len(opts.source) == 0 {
return errors.New("A source transport must be specified")
}
if !contains(opts.source, []string{docker.Transport.Name(), directory.Transport.Name(), "yaml"}) {
if !slices.Contains([]string{docker.Transport.Name(), directory.Transport.Name(), "yaml"}, opts.source) {
return fmt.Errorf("%q is not a valid source transport", opts.source)
}
if len(opts.destination) == 0 {
return errors.New("A destination transport must be specified")
}
if !contains(opts.destination, []string{docker.Transport.Name(), directory.Transport.Name()}) {
if !slices.Contains([]string{docker.Transport.Name(), directory.Transport.Name()}, opts.destination) {
return fmt.Errorf("%q is not a valid destination transport", opts.destination)
}
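
Note: the hand-rolled contains helper goes away in favor of the generic slices.Contains from golang.org/x/exp/slices (the experimental precursor of the Go 1.21 standard-library slices package). A minimal stand-alone illustration, with hypothetical values:

```go
package main

import (
	"fmt"

	"golang.org/x/exp/slices"
)

func main() {
	valid := []string{"docker", "dir", "yaml"} // hypothetical transport names
	fmt.Println(slices.Contains(valid, "docker")) // true
	fmt.Println(slices.Contains(valid, "oci"))    // false
	// ContainsFunc takes a predicate instead of a value:
	fmt.Println(slices.ContainsFunc(valid, func(s string) bool { return len(s) > 5 })) // false
}
```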

cmd/skopeo/sync_test.go Normal file

@@ -0,0 +1,46 @@
package main
import (
"testing"
"github.com/containers/image/v5/types"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"gopkg.in/yaml.v3"
)
var _ yaml.Unmarshaler = (*tlsVerifyConfig)(nil)
func TestTLSVerifyConfig(t *testing.T) {
type container struct { // An example of a larger config file
TLSVerify tlsVerifyConfig `yaml:"tls-verify"`
}
for _, c := range []struct {
input string
expected tlsVerifyConfig
}{
{
input: `tls-verify: true`,
expected: tlsVerifyConfig{skip: types.OptionalBoolFalse},
},
{
input: `tls-verify: false`,
expected: tlsVerifyConfig{skip: types.OptionalBoolTrue},
},
{
input: ``, // No value
expected: tlsVerifyConfig{skip: types.OptionalBoolUndefined},
},
} {
config := container{}
err := yaml.Unmarshal([]byte(c.input), &config)
require.NoError(t, err, c.input)
assert.Equal(t, c.expected, config.TLSVerify, c.input)
}
// Invalid input
config := container{}
err := yaml.Unmarshal([]byte(`tls-verify: "not a valid bool"`), &config)
assert.Error(t, err)
}


@@ -6,6 +6,7 @@ import (
"github.com/containers/image/v5/transports/alltransports"
"github.com/containers/storage/pkg/unshare"
"github.com/syndtr/gocapability/capability"
"golang.org/x/exp/slices"
)
var neededCapabilities = []capability.Cap{
@@ -21,29 +22,32 @@ func maybeReexec() error {
// With Skopeo we need only the subset of the root capabilities necessary
// for pulling an image to the storage. Do not attempt to create a namespace
// if we already have the capabilities we need.
capabilities, err := capability.NewPid(0)
capabilities, err := capability.NewPid2(0)
if err != nil {
return fmt.Errorf("error reading the current capabilities sets: %w", err)
}
for _, cap := range neededCapabilities {
if !capabilities.Get(capability.EFFECTIVE, cap) {
// We miss a capability we need, create a user namespaces
unshare.MaybeReexecUsingUserNamespace(true)
return nil
}
if err := capabilities.Load(); err != nil {
return fmt.Errorf("error loading the current capabilities sets: %w", err)
}
if slices.ContainsFunc(neededCapabilities, func(cap capability.Cap) bool {
return !capabilities.Get(capability.EFFECTIVE, cap)
}) {
// We miss a capability we need, create a user namespaces
unshare.MaybeReexecUsingUserNamespace(true)
return nil
}
return nil
}
func reexecIfNecessaryForImages(imageNames ...string) error {
// Check if container-storage is used before doing unshare
for _, imageName := range imageNames {
if slices.ContainsFunc(imageNames, func(imageName string) bool {
transport := alltransports.TransportFromImageName(imageName)
// Hard-code the storage name to avoid a reference on c/image/storage.
// See https://github.com/containers/skopeo/issues/771#issuecomment-563125006.
if transport != nil && transport.Name() == "containers-storage" {
return maybeReexec()
}
return transport != nil && transport.Name() == "containers-storage"
}) {
return maybeReexec()
}
return nil
}
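
Note: unlike the deprecated capability.NewPid, NewPid2 only allocates the object; the explicit Load() above is what actually reads the process's capability sets, so omitting it would silently report no capabilities. A stand-alone sketch of the same check, assuming the syndtr/gocapability API used in the hunk (hasEffective is a hypothetical helper):

```go
package capcheck

import "github.com/syndtr/gocapability/capability"

// hasEffective reports whether the current process holds `needed` in its effective set.
func hasEffective(needed capability.Cap) (bool, error) {
	caps, err := capability.NewPid2(0) // 0 = current process; does not read anything yet
	if err != nil {
		return false, err
	}
	if err := caps.Load(); err != nil { // fetch the actual capability sets from the kernel
		return false, err
	}
	return caps.Get(capability.EFFECTIVE, needed), nil
}
```

For example, hasEffective(capability.CAP_SYS_ADMIN) would report whether a user namespace is even needed.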


@@ -44,7 +44,7 @@ func noteCloseFailure(err error, description string, closeErr error) error {
if err == nil {
return fmt.Errorf("%s: %w", description, closeErr)
}
// In this case we prioritize the primary error for use with %w; closeErr is usually less relevant, or might be a consequence of the primary erorr.
// In this case we prioritize the primary error for use with %w; closeErr is usually less relevant, or might be a consequence of the primary error.
return fmt.Errorf("%w (%s: %v)", err, description, closeErr)
}
@@ -315,14 +315,11 @@ func parseCreds(creds string) (string, string, error) {
if creds == "" {
return "", "", errors.New("credentials can't be empty")
}
up := strings.SplitN(creds, ":", 2)
if len(up) == 1 {
return up[0], "", nil
}
if up[0] == "" {
username, password, _ := strings.Cut(creds, ":") // Sets password to "" if there is no ":"
if username == "" {
return "", "", errors.New("username can't be empty")
}
return up[0], up[1], nil
return username, password, nil
}
func getDockerAuth(creds string) (*types.DockerAuthConfig, error) {
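
Note: strings.Cut, new in Go 1.18, returns the text before and after the first separator plus whether the separator was present, which is exactly the shape parseCreds needs. A quick stand-alone illustration with hypothetical values:

```go
package main

import (
	"fmt"
	"strings"
)

func main() {
	user, pass, found := strings.Cut("alice:s3cr3t", ":")
	fmt.Printf("%q %q %v\n", user, pass, found) // "alice" "s3cr3t" true

	user, pass, found = strings.Cut("alice", ":")
	fmt.Printf("%q %q %v\n", user, pass, found) // "alice" "" false; password stays empty
}
```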


@@ -385,7 +385,6 @@ func TestParseManifestFormat(t *testing.T) {
// since there is a shared authfile image option and a non-shared (prefixed) one, make sure the override logic
// works correctly.
func TestImageOptionsAuthfileOverride(t *testing.T) {
for _, testCase := range []struct {
flagPrefix string
cmdFlags []string


@@ -1,6 +1,6 @@
[comment]: <> (***ATTENTION*** ***WARNING*** ***ALERT*** ***CAUTION*** ***DANGER***)
[comment]: <> ()
[comment]: <> (ANY changes made to this file, once commited/merged must)
[comment]: <> (ANY changes made to this file, once committed/merged must)
[comment]: <> (be manually copy/pasted -in markdown- into the description)
[comment]: <> (field on Quay at the following locations:)
[comment]: <> ()


@@ -20,6 +20,8 @@ automatically inherit any parts of the source name.
## OPTIONS
See also [skopeo(1)](skopeo.1.md) for options placed before the subcommand name.
**--additional-tag**=_strings_
Additional tags (supports docker-archive).


@@ -31,6 +31,8 @@ $ docker exec -it registry /usr/bin/registry garbage-collect /etc/docker-distrib
## OPTIONS
See also [skopeo(1)](skopeo.1.md) for options placed before the subcommand name.
**--authfile** _path_
Path of the authentication file. Default is ${XDG_RUNTIME\_DIR}/containers/auth.json, which is set using `skopeo login`.


@@ -17,6 +17,8 @@ The private key is written to _prefix_**.pub** .
## OPTIONS
See also [skopeo(1)](skopeo.1.md) for options placed before the subcommand name.
**--help**, **-h**
Print usage statement


@@ -17,6 +17,8 @@ To see values for a different architecture/OS, use the **--override-os** / **--o
## OPTIONS
See also [skopeo(1)](skopeo.1.md) for options placed before the subcommand name.
**--authfile** _path_
Path of the authentication file. Default is ${XDG\_RUNTIME\_DIR}/containers/auth.json, which is set using `skopeo login`.
@@ -42,6 +44,7 @@ Use docker daemon host at _host_ (`docker-daemon:` transport only)
Format the output using the given Go template.
The keys of the returned JSON can be used as the values for the --format flag (see examples below).
Supports the Go templating functions available at https://pkg.go.dev/github.com/containers/common/pkg/report#hdr-Template_Functions
**--help**, **-h**


@@ -12,6 +12,8 @@ Return a list of tags from _source-image_ in a registry or a local docker-archiv
## OPTIONS
See also [skopeo(1)](skopeo.1.md) for options placed before the subcommand name.
**--authfile** _path_
Path of the authentication file. Default is ${XDG\_RUNTIME\_DIR}/containers/auth.json, which is set using `skopeo login`.


@@ -15,6 +15,8 @@ flag. The default path used is **${XDG\_RUNTIME\_DIR}/containers/auth.json**.
## OPTIONS
See also [skopeo(1)](skopeo.1.md) for options placed before the subcommand name.
**--password**, **-p**=*password*
Password for registry


@@ -14,6 +14,8 @@ All the cached credentials can be removed by setting the **all** flag.
## OPTIONS
See also [skopeo(1)](skopeo.1.md) for options placed before the subcommand name.
**--authfile**=*path*
Path of the authentication file. Default is ${XDG\_RUNTIME\_DIR}/containers/auth.json


@@ -17,6 +17,8 @@ This is primarily a debugging tool, useful for special cases, and usually should
## OPTIONS
See also [skopeo(1)](skopeo.1.md) for options placed before the subcommand name.
**--help**, **-h**
Print usage statement


@@ -4,7 +4,7 @@
skopeo\-standalone\-verify - Verify an image signature.
## SYNOPSIS
**skopeo standalone-verify** _manifest_ _docker-reference_ _key-fingerprint_ _signature_
**skopeo standalone-verify** _manifest_ _docker-reference_ _key-fingerprints_ _signature_
## DESCRIPTION
@@ -16,7 +16,7 @@ as per containers-policy.json(5).
_docker-reference_ A docker reference expected to identify the image in the signature
_key-fingerprint_ Expected identity of the signing key
_key-fingerprints_ Identities of trusted signing keys (comma separated), or "any" to trust any known key when using a public key file
_signature_ Path to signature file
@@ -24,10 +24,16 @@ as per containers-policy.json(5).
## OPTIONS
See also [skopeo(1)](skopeo.1.md) for options placed before the subcommand name.
**--help**, **-h**
Print usage statement
**--public-key-file** _public key file_
File containing the public keys to use when verifying signatures. If this is not specified, keys from the GPG homedir are used.
## EXAMPLES
```console


@@ -8,10 +8,7 @@ skopeo\-sync - Synchronize images between registry repositories and local direct
**skopeo sync** [*options*] --src _transport_ --dest _transport_ _source_ _destination_
## DESCRIPTION
Synchronize images between registry repoositories and local directories.
The synchronization is achieved by copying all the images found at _source_ to _destination_.
Useful to synchronize a local container registry mirror, and to to populate registries running inside of air-gapped environments.
Synchronize images between registry repositories and local directories. Synchronization is achieved by copying all the images found at _source_ to _destination_ - useful when synchronizing a local container registry mirror or for populating registries running inside of air-gapped environments.
Differently from other skopeo commands, skopeo sync requires both source and destination transports to be specified separately from _source_ and _destination_.
One of the problems of prefixing a destination with its transport is that, the registry `docker://hostname:port` would be wrongly interpreted as an image reference at a non-fully qualified registry, with `hostname` and `port` the image name and tag.
@@ -32,6 +29,9 @@ When the `--scoped` option is specified, images are prefixed with the source ima
name can be stored at _destination_.
## OPTIONS
See also [skopeo(1)](skopeo.1.md) for options placed before the subcommand name.
**--all**, **-a**
If one of the images in __src__ refers to a list of images, instead of copying just the image which matches the current OS and
architecture (subject to the use of the global --override-os, --override-arch and --override-variant options), attempt to copy all of


@@ -51,6 +51,9 @@ See [containers-transports(5)](https://github.com/containers/image/blob/main/doc
## OPTIONS
These options should be placed before the subcommand name.
Individual subcommands have their own options.
**--command-timeout** _duration_
Timeout for the command execution.

go.mod

@@ -1,86 +1,84 @@
module github.com/containers/skopeo
go 1.17
go 1.18
require (
github.com/containers/common v0.51.0
github.com/containers/image/v5 v5.24.0
github.com/containers/common v0.52.0
github.com/containers/image/v5 v5.25.0
github.com/containers/ocicrypt v1.1.7
github.com/containers/storage v1.45.3
github.com/containers/storage v1.46.1
github.com/docker/distribution v2.8.1+incompatible
github.com/opencontainers/go-digest v1.0.0
github.com/opencontainers/image-spec v1.1.0-rc2
github.com/opencontainers/image-spec v1.1.0-rc2.0.20221005185240-3a7f492d3f1b
github.com/opencontainers/image-tools v1.0.0-rc3
github.com/sirupsen/logrus v1.9.0
github.com/spf13/cobra v1.6.1
github.com/spf13/cobra v1.7.0
github.com/spf13/pflag v1.0.5
github.com/stretchr/testify v1.8.1
github.com/stretchr/testify v1.8.2
github.com/syndtr/gocapability v0.0.0-20200815063812-42c35b437635
golang.org/x/term v0.4.0
gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c
gopkg.in/yaml.v2 v2.4.0
golang.org/x/exp v0.0.0-20230321023759-10a507213a29
golang.org/x/term v0.7.0
gopkg.in/yaml.v3 v3.0.1
)
require (
github.com/BurntSushi/toml v1.2.1 // indirect
github.com/Microsoft/go-winio v0.6.0 // indirect
github.com/Microsoft/hcsshim v0.9.6 // indirect
github.com/Microsoft/hcsshim v0.10.0-rc.7 // indirect
github.com/VividCortex/ewma v1.2.0 // indirect
github.com/acarl005/stripansi v0.0.0-20180116102854-5a71ef0e047d // indirect
github.com/asaskevich/govalidator v0.0.0-20210307081110-f21760c49a8d // indirect
github.com/containerd/cgroups v1.0.4 // indirect
github.com/containerd/stargz-snapshotter/estargz v0.13.0 // indirect
github.com/asaskevich/govalidator v0.0.0-20230301143203-a9d515a09cc2 // indirect
github.com/containerd/cgroups v1.1.0 // indirect
github.com/containerd/stargz-snapshotter/estargz v0.14.3 // indirect
github.com/containers/libtrust v0.0.0-20230121012942-c1716e8a8d01 // indirect
github.com/coreos/go-oidc/v3 v3.5.0 // indirect
github.com/cyberphone/json-canonicalization v0.0.0-20220623050100-57a0ce2678a7 // indirect
github.com/cyphar/filepath-securejoin v0.2.3 // indirect
github.com/davecgh/go-spew v1.1.1 // indirect
github.com/docker/docker v20.10.23+incompatible // indirect
github.com/docker/docker v23.0.3+incompatible // indirect
github.com/docker/docker-credential-helpers v0.7.0 // indirect
github.com/docker/go-connections v0.4.0 // indirect
github.com/docker/go-units v0.5.0 // indirect
github.com/dsnet/compress v0.0.2-0.20210315054119-f66993602bf5 // indirect
github.com/ghodss/yaml v1.0.0 // indirect
github.com/go-jose/go-jose/v3 v3.0.0 // indirect
github.com/go-logr/logr v1.2.3 // indirect
github.com/go-logr/stdr v1.2.2 // indirect
github.com/go-openapi/analysis v0.21.4 // indirect
github.com/go-openapi/errors v0.20.3 // indirect
github.com/go-openapi/jsonpointer v0.19.5 // indirect
github.com/go-openapi/jsonreference v0.20.0 // indirect
github.com/go-openapi/loads v0.21.2 // indirect
github.com/go-openapi/runtime v0.24.1 // indirect
github.com/go-openapi/spec v0.20.7 // indirect
github.com/go-openapi/strfmt v0.21.3 // indirect
github.com/go-openapi/runtime v0.25.0 // indirect
github.com/go-openapi/spec v0.20.8 // indirect
github.com/go-openapi/strfmt v0.21.7 // indirect
github.com/go-openapi/swag v0.22.3 // indirect
github.com/go-openapi/validate v0.22.0 // indirect
github.com/go-playground/locales v0.14.0 // indirect
github.com/go-playground/universal-translator v0.18.0 // indirect
github.com/go-playground/validator/v10 v10.11.1 // indirect
github.com/go-openapi/validate v0.22.1 // indirect
github.com/go-playground/locales v0.14.1 // indirect
github.com/go-playground/universal-translator v0.18.1 // indirect
github.com/go-playground/validator/v10 v10.12.0 // indirect
github.com/gogo/protobuf v1.3.2 // indirect
github.com/golang/groupcache v0.0.0-20210331224755-41bb18bfe9da // indirect
github.com/golang/protobuf v1.5.2 // indirect
github.com/google/go-containerregistry v0.12.1 // indirect
github.com/golang/protobuf v1.5.3 // indirect
github.com/google/go-containerregistry v0.13.0 // indirect
github.com/google/go-intervals v0.0.2 // indirect
github.com/google/trillian v1.5.0 // indirect
github.com/google/trillian v1.5.1 // indirect
github.com/google/uuid v1.3.0 // indirect
github.com/gorilla/mux v1.8.0 // indirect
github.com/hashicorp/errwrap v1.1.0 // indirect
github.com/hashicorp/go-cleanhttp v0.5.2 // indirect
github.com/hashicorp/go-multierror v1.1.1 // indirect
github.com/hashicorp/go-retryablehttp v0.7.2 // indirect
github.com/imdario/mergo v0.3.13 // indirect
github.com/inconshreveable/mousetrap v1.0.1 // indirect
github.com/imdario/mergo v0.3.15 // indirect
github.com/inconshreveable/mousetrap v1.1.0 // indirect
github.com/josharian/intern v1.0.0 // indirect
github.com/json-iterator/go v1.1.12 // indirect
github.com/klauspost/compress v1.15.15 // indirect
github.com/klauspost/compress v1.16.4 // indirect
github.com/klauspost/pgzip v1.2.6-0.20220930104621-17e8dac29df8 // indirect
github.com/kr/pretty v0.3.0 // indirect
github.com/kr/text v0.2.0 // indirect
github.com/leodido/go-urn v1.2.1 // indirect
github.com/letsencrypt/boulder v0.0.0-20221109233200-85aa52084eaf // indirect
github.com/leodido/go-urn v1.2.2 // indirect
github.com/letsencrypt/boulder v0.0.0-20230213213521-fdfea0d469b6 // indirect
github.com/mailru/easyjson v0.7.7 // indirect
github.com/mattn/go-runewidth v0.0.14 // indirect
github.com/mattn/go-shellwords v1.0.12 // indirect
github.com/matttproud/golang_protobuf_extensions v1.0.4 // indirect
github.com/miekg/pkcs11 v1.1.1 // indirect
github.com/mistifyio/go-zfs/v3 v3.0.0 // indirect
github.com/mitchellh/mapstructure v1.5.0 // indirect
@@ -88,49 +86,51 @@ require (
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd // indirect
github.com/modern-go/reflect2 v1.0.2 // indirect
github.com/oklog/ulid v1.3.1 // indirect
github.com/opencontainers/runc v1.1.4 // indirect
github.com/opencontainers/runtime-spec v1.0.3-0.20220825212826-86290f6a00fb // indirect
github.com/opencontainers/selinux v1.10.2 // indirect
github.com/opencontainers/runc v1.1.5 // indirect
github.com/opencontainers/runtime-spec v1.1.0-rc.1 // indirect
github.com/opencontainers/selinux v1.11.0 // indirect
github.com/opentracing/opentracing-go v1.2.0 // indirect
github.com/ostreedev/ostree-go v0.0.0-20210805093236-719684c64e4f // indirect
github.com/pkg/errors v0.9.1 // indirect
github.com/pmezard/go-difflib v1.0.0 // indirect
github.com/proglottis/gpgme v0.1.3 // indirect
github.com/rivo/uniseg v0.4.3 // indirect
github.com/rogpeppe/go-internal v1.8.0 // indirect
github.com/rivo/uniseg v0.4.4 // indirect
github.com/russross/blackfriday v2.0.0+incompatible // indirect
github.com/segmentio/ksuid v1.0.4 // indirect
github.com/sigstore/fulcio v1.0.0 // indirect
github.com/sigstore/rekor v1.0.1 // indirect
github.com/sigstore/sigstore v1.5.1 // indirect
github.com/sigstore/fulcio v1.2.0 // indirect
github.com/sigstore/rekor v1.1.0 // indirect
github.com/sigstore/sigstore v1.6.0 // indirect
github.com/skratchdot/open-golang v0.0.0-20200116055534-eef842397966 // indirect
github.com/stefanberger/go-pkcs11uri v0.0.0-20201008174630-78d3cae3a980 // indirect
github.com/sylabs/sif/v2 v2.9.0 // indirect
github.com/tchap/go-patricia v2.3.0+incompatible // indirect
github.com/theupdateframework/go-tuf v0.5.2-0.20221207161717-9cb61d6e65f5 // indirect
github.com/sylabs/sif/v2 v2.11.1 // indirect
github.com/tchap/go-patricia/v2 v2.3.1 // indirect
github.com/theupdateframework/go-tuf v0.5.2 // indirect
github.com/titanous/rocacheck v0.0.0-20171023193734-afe73141d399 // indirect
github.com/ulikunitz/xz v0.5.11 // indirect
github.com/vbatts/tar-split v0.11.2 // indirect
github.com/vbauerster/mpb/v7 v7.5.3 // indirect
github.com/vbatts/tar-split v0.11.3 // indirect
github.com/vbauerster/mpb/v8 v8.3.0 // indirect
github.com/xeipuuv/gojsonpointer v0.0.0-20190905194746-02993c407bfb // indirect
github.com/xeipuuv/gojsonreference v0.0.0-20180127040603-bd5ef7bd5415 // indirect
github.com/xeipuuv/gojsonschema v1.2.0 // indirect
go.etcd.io/bbolt v1.3.6 // indirect
go.mongodb.org/mongo-driver v1.11.1 // indirect
go.etcd.io/bbolt v1.3.7 // indirect
go.mongodb.org/mongo-driver v1.11.3 // indirect
go.mozilla.org/pkcs7 v0.0.0-20210826202110-33d05740a352 // indirect
go.opencensus.io v0.24.0 // indirect
golang.org/x/crypto v0.5.0 // indirect
golang.org/x/mod v0.7.0 // indirect
golang.org/x/net v0.5.0 // indirect
golang.org/x/oauth2 v0.4.0 // indirect
go.opentelemetry.io/otel v1.13.0 // indirect
go.opentelemetry.io/otel/trace v1.13.0 // indirect
golang.org/x/crypto v0.8.0 // indirect
golang.org/x/mod v0.9.0 // indirect
golang.org/x/net v0.9.0 // indirect
golang.org/x/oauth2 v0.6.0 // indirect
golang.org/x/sync v0.1.0 // indirect
golang.org/x/sys v0.4.0 // indirect
golang.org/x/text v0.6.0 // indirect
golang.org/x/tools v0.4.0 // indirect
golang.org/x/sys v0.7.0 // indirect
golang.org/x/text v0.9.0 // indirect
golang.org/x/tools v0.7.0 // indirect
google.golang.org/appengine v1.6.7 // indirect
google.golang.org/genproto v0.0.0-20221227171554-f9683d7f8bef // indirect
google.golang.org/grpc v1.51.0 // indirect
google.golang.org/protobuf v1.28.1 // indirect
google.golang.org/genproto v0.0.0-20230306155012-7f2fa6fef1f4 // indirect
google.golang.org/grpc v1.54.0 // indirect
google.golang.org/protobuf v1.30.0 // indirect
gopkg.in/go-jose/go-jose.v2 v2.6.1 // indirect
gopkg.in/square/go-jose.v2 v2.6.0 // indirect
gopkg.in/yaml.v3 v3.0.1 // indirect
gopkg.in/yaml.v2 v2.4.0 // indirect
)

go.sum

File diff suppressed because it is too large

@@ -1,92 +0,0 @@
#!/usr/bin/env bash
set -e
# This script builds various binary from a checkout of the skopeo
# source code. DO NOT CALL THIS SCRIPT DIRECTLY.
#
# Requirements:
# - The current directory should be a checkout of the skopeo source code
# (https://github.com/containers/skopeo). Whatever version is checked out
# will be built.
# - The script is intended to be run inside the container specified
# in the output of hack/get_fqin.sh
# - The right way to call this script is to invoke "make" from
# your checkout of the skopeo repository.
# the Makefile will do a "docker build -t skopeo ." and then
# "docker run hack/make.sh" in the resulting image.
#
set -o pipefail
export SKOPEO_PKG='github.com/containers/skopeo'
export SCRIPTDIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
export MAKEDIR="$SCRIPTDIR/make"
# Set this to 1 to enable installation/modification of environment/services
export SKOPEO_CONTAINER_TESTS=${SKOPEO_CONTAINER_TESTS:-0}
if [[ "$SKOPEO_CONTAINER_TESTS" == "0" ]] && [[ "$CI" != "true" ]]; then
(
echo "***************************************************************"
echo "WARNING: Executing tests directly on the local development"
echo " host is highly discouraged. Many important items"
echo " will be skipped. For manual execution, please utilize"
echo " the Makefile targets WITHOUT the '-local' suffix."
echo "***************************************************************"
) > /dev/stderr
sleep 5
fi
echo
# List of bundles to create when no argument is passed
# TODO(runcom): these are the one left from Docker...for now
# test-unit
# validate-dco
# cover
DEFAULT_BUNDLES=(
validate-gofmt
validate-lint
validate-vet
validate-git-marks
test-integration
)
# Go module support: set `-mod=vendor` to use the vendored sources
# See also the top-level Makefile.
mod_vendor=
if go help mod >/dev/null 2>&1; then
export GO111MODULE=on
mod_vendor='-mod=vendor'
fi
go_test_dir() {
dir=$1
(
echo '+ go test' $mod_vendor $TESTFLAGS ${BUILDTAGS:+-tags "$BUILDTAGS"} "${SKOPEO_PKG}${dir#.}"
cd "$dir"
export DEST="$ABS_DEST" # we're in a subshell, so this is safe -- our integration-cli tests need DEST, and "cd" screws it up
go test $mod_vendor $TESTFLAGS ${BUILDTAGS:+-tags "$BUILDTAGS"}
)
}
bundle() {
local bundle="$1"; shift
echo "---> Making bundle: $(basename "$bundle")"
source "$SCRIPTDIR/make/$bundle" "$@"
}
main() {
if [ $# -lt 1 ]; then
bundles=(${DEFAULT_BUNDLES[@]})
else
bundles=($@)
fi
for bundle in ${bundles[@]}; do
bundle "$bundle"
echo
done
}
main "$@"


@@ -1,31 +0,0 @@
#!/bin/bash
if [ -z "$VALIDATE_UPSTREAM" ]; then
# this is kind of an expensive check, so let's not do this twice if we
# are running more than one validate bundlescript
VALIDATE_REPO='https://github.com/containers/skopeo.git'
VALIDATE_BRANCH='main'
if [ "$TRAVIS" = 'true' -a "$TRAVIS_PULL_REQUEST" != 'false' ]; then
VALIDATE_REPO="https://github.com/${TRAVIS_REPO_SLUG}.git"
VALIDATE_BRANCH="${TRAVIS_BRANCH}"
fi
VALIDATE_HEAD="$(git rev-parse --verify HEAD)"
git fetch -q "$VALIDATE_REPO" "refs/heads/$VALIDATE_BRANCH"
VALIDATE_UPSTREAM="$(git rev-parse --verify FETCH_HEAD)"
VALIDATE_COMMIT_LOG="$VALIDATE_UPSTREAM..$VALIDATE_HEAD"
VALIDATE_COMMIT_DIFF="$VALIDATE_UPSTREAM...$VALIDATE_HEAD"
validate_diff() {
git diff "$VALIDATE_UPSTREAM" "$@"
}
validate_log() {
if [ "$VALIDATE_UPSTREAM" != "$VALIDATE_HEAD" ]; then
git log "$VALIDATE_COMMIT_LOG" "$@"
fi
}
fi


@@ -1,12 +0,0 @@
#!/bin/bash
set -e
bundle_test_integration() {
go_test_dir ./integration
}
# subshell so that we can export PATH without breaking other things
(
make PREFIX=/usr install
bundle_test_integration
) 2>&1


@@ -1,44 +0,0 @@
#!/usr/bin/env bash
source "$(dirname "$BASH_SOURCE")/.validate"
# folders=$(find * -type d | egrep -v '^Godeps|bundles|.git')
IFS=$'\n'
files=( $(validate_diff --diff-filter=ACMR --name-only -- '*' | grep -v '^vendor/' || true) )
unset IFS
badFiles=()
for f in "${files[@]}"; do
if [ $(grep -r "^<<<<<<<" $f) ]; then
badFiles+=( "$f" )
continue
fi
if [ $(grep -r "^>>>>>>>" $f) ]; then
badFiles+=( "$f" )
continue
fi
if [ $(grep -r "^=======$" $f) ]; then
badFiles+=( "$f" )
continue
fi
set -e
done
if [ ${#badFiles[@]} -eq 0 ]; then
echo 'Congratulations! There is no conflict.'
else
{
echo "There is trace of conflict(s) in the following files :"
for f in "${badFiles[@]}"; do
echo " - $f"
done
echo
echo 'Please fix the conflict(s) commit the result.'
echo
} >&2
false
fi


@@ -1,33 +0,0 @@
#!/bin/bash
source "$(dirname "$BASH_SOURCE")/.validate"
# We will eventually get to the point where packages should be the complete list
# of subpackages, vendoring excluded, as given by:
#
IFS=$'\n'
files=( $(validate_diff --diff-filter=ACMR --name-only -- '*.go' | grep -v '^vendor/\|^integration' || true) )
unset IFS
errors=()
for f in "${files[@]}"; do
failedLint=$(golint "$f")
if [ "$failedLint" ]; then
errors+=( "$failedLint" )
fi
done
if [ ${#errors[@]} -eq 0 ]; then
echo 'Congratulations! All Go source files have been linted.'
else
{
echo "Errors from golint:"
for err in "${errors[@]}"; do
echo "$err"
done
echo
echo 'Please fix the above errors. You can test via "golint" and commit the result.'
echo
} >&2
false
fi

hack/test-integration.sh Executable file

@@ -0,0 +1,8 @@
#!/bin/bash
set -e
make PREFIX=/usr install
echo "cd ./integration;" go test $TESTFLAGS ${BUILDTAGS:+-tags "$BUILDTAGS"}
cd ./integration
go test $TESTFLAGS ${BUILDTAGS:+-tags "$BUILDTAGS"}

hack/validate-git-marks.sh Executable file

@@ -0,0 +1,30 @@
#!/usr/bin/env bash
IFS=$'\n'
files=( $(git ls-tree -r HEAD --name-only | grep -v '^vendor/' || true) )
unset IFS
badFiles=()
for f in "${files[@]}"; do
if [ $(grep -r "^\(<<<<<<<\|>>>>>>>\|^=======$\)" $f) ]; then
badFiles+=( "$f" )
continue
fi
set -e
done
if [ ${#badFiles[@]} -eq 0 ]; then
echo 'Congratulations! There is no conflict.'
else
{
echo "There is trace of conflict(s) in the following files :"
for f in "${badFiles[@]}"; do
echo " - $f"
done
echo
echo 'Please fix the conflict(s) commit the result.'
echo
} >&2
exit 1
fi


@@ -1,9 +1,7 @@
#!/bin/bash
source "$(dirname "$BASH_SOURCE")/.validate"
IFS=$'\n'
files=( $(validate_diff --diff-filter=ACMR --name-only -- '*.go' | grep -v '^vendor/' || true) )
files=( $(find . -name '*.go' | grep -v '^./vendor/' | sort || true) )
unset IFS
badFiles=()
@@ -25,5 +23,5 @@ else
echo 'Please reformat the above files using "gofmt -s -w" and commit the result.'
echo
} >&2
false
exit 1
fi

hack/validate-lint.sh Executable file

@@ -0,0 +1,16 @@
#!/bin/bash
errors=$($GOBIN/golangci-lint run --build-tags "${BUILDTAGS}" 2>&1)
if [ -z "$errors" ]; then
echo 'Congratulations! All Go source files have been linted.'
else
{
echo "Errors from golangci-lint:"
echo "$errors"
echo
echo 'Please fix the above errors. You can test via "golangci-lint" and commit the result.'
echo
} >&2
exit 1
fi


@@ -1,6 +1,6 @@
#!/bin/bash
errors=$(go vet -tags="${BUILDTAGS}" $mod_vendor $(go list $mod_vendor -e ./...))
errors=$(go vet -tags="${BUILDTAGS}" ./... 2>&1)
if [ -z "$errors" ]; then
echo 'Congratulations! All Go source files have been vetted.'
@@ -12,5 +12,5 @@ else
echo 'Please fix the above errors. You can test via "go vet" and commit the result.'
echo
} >&2
false
exit 1
fi

hack/warn-destructive-tests.sh Executable file

@@ -0,0 +1,17 @@
#!/usr/bin/env bash
set -e
# Set this to 1 to enable installation/modification of environment/services
export SKOPEO_CONTAINER_TESTS=${SKOPEO_CONTAINER_TESTS:-0}
if [[ "$SKOPEO_CONTAINER_TESTS" == "0" ]] && [[ "$CI" != "true" ]]; then
(
echo "***************************************************************"
echo "WARNING: Executing tests directly on the local development"
echo " host is highly discouraged. Many important items"
echo " will be skipped. For manual execution, please utilize"
echo " the Makefile targets WITHOUT the '-local' suffix."
echo "***************************************************************"
) > /dev/stderr
sleep 5
fi


@@ -1,34 +1,34 @@
package main
import (
"gopkg.in/check.v1"
)
const blockedRegistriesConf = "./fixtures/blocked-registries.conf"
const blockedErrorRegex = `.*registry registry-blocked.com is blocked in .*`
func (s *SkopeoSuite) TestCopyBlockedSource(c *check.C) {
assertSkopeoFails(c, blockedErrorRegex,
func (s *skopeoSuite) TestCopyBlockedSource() {
t := s.T()
assertSkopeoFails(t, blockedErrorRegex,
"--registries-conf", blockedRegistriesConf, "copy",
"docker://registry-blocked.com/image:test",
"docker://registry-unblocked.com/image:test")
}
func (s *SkopeoSuite) TestCopyBlockedDestination(c *check.C) {
assertSkopeoFails(c, blockedErrorRegex,
func (s *skopeoSuite) TestCopyBlockedDestination() {
t := s.T()
assertSkopeoFails(t, blockedErrorRegex,
"--registries-conf", blockedRegistriesConf, "copy",
"docker://registry-unblocked.com/image:test",
"docker://registry-blocked.com/image:test")
}
func (s *SkopeoSuite) TestInspectBlocked(c *check.C) {
assertSkopeoFails(c, blockedErrorRegex,
func (s *skopeoSuite) TestInspectBlocked() {
t := s.T()
assertSkopeoFails(t, blockedErrorRegex,
"--registries-conf", blockedRegistriesConf, "inspect",
"docker://registry-blocked.com/image:test")
}
func (s *SkopeoSuite) TestDeleteBlocked(c *check.C) {
assertSkopeoFails(c, blockedErrorRegex,
func (s *skopeoSuite) TestDeleteBlocked() {
t := s.T()
assertSkopeoFails(t, blockedErrorRegex,
"--registries-conf", blockedRegistriesConf, "delete",
"docker://registry-blocked.com/image:test")
}


@@ -6,7 +6,9 @@ import (
"testing"
"github.com/containers/skopeo/version"
"gopkg.in/check.v1"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"github.com/stretchr/testify/suite"
)
const (
@@ -14,98 +16,104 @@ const (
privateRegistryURL1 = "127.0.0.1:5001"
)
func Test(t *testing.T) {
check.TestingT(t)
func TestSkopeo(t *testing.T) {
suite.Run(t, &skopeoSuite{})
}
func init() {
check.Suite(&SkopeoSuite{})
}
type SkopeoSuite struct {
type skopeoSuite struct {
suite.Suite
regV2 *testRegistryV2
regV2WithAuth *testRegistryV2
}
func (s *SkopeoSuite) SetUpSuite(c *check.C) {
var _ = suite.SetupAllSuite(&skopeoSuite{})
var _ = suite.TearDownAllSuite(&skopeoSuite{})
func (s *skopeoSuite) SetupSuite() {
t := s.T()
_, err := exec.LookPath(skopeoBinary)
c.Assert(err, check.IsNil)
s.regV2 = setupRegistryV2At(c, privateRegistryURL0, false, false)
s.regV2WithAuth = setupRegistryV2At(c, privateRegistryURL1, true, false)
require.NoError(t, err)
s.regV2 = setupRegistryV2At(t, privateRegistryURL0, false, false)
s.regV2WithAuth = setupRegistryV2At(t, privateRegistryURL1, true, false)
}
func (s *SkopeoSuite) TearDownSuite(c *check.C) {
func (s *skopeoSuite) TearDownSuite() {
if s.regV2 != nil {
s.regV2.tearDown(c)
s.regV2.tearDown()
}
if s.regV2WithAuth != nil {
//cmd := exec.Command("docker", "logout", s.regV2WithAuth)
//c.Assert(cmd.Run(), check.IsNil)
s.regV2WithAuth.tearDown(c)
// cmd := exec.Command("docker", "logout", s.regV2WithAuth)
// require.Noerror(t, cmd.Run())
s.regV2WithAuth.tearDown()
}
}
// TODO like dockerCmd but much easier, just out,err
//func skopeoCmd()
func (s *SkopeoSuite) TestVersion(c *check.C) {
assertSkopeoSucceeds(c, fmt.Sprintf(".*%s version %s.*", skopeoBinary, version.Version),
func (s *skopeoSuite) TestVersion() {
t := s.T()
assertSkopeoSucceeds(t, fmt.Sprintf(".*%s version %s.*", skopeoBinary, version.Version),
"--version")
}
func (s *SkopeoSuite) TestCanAuthToPrivateRegistryV2WithoutDockerCfg(c *check.C) {
assertSkopeoFails(c, ".*manifest unknown.*",
func (s *skopeoSuite) TestCanAuthToPrivateRegistryV2WithoutDockerCfg() {
t := s.T()
assertSkopeoFails(t, ".*manifest unknown.*",
"--tls-verify=false", "inspect", "--creds="+s.regV2WithAuth.username+":"+s.regV2WithAuth.password, fmt.Sprintf("docker://%s/busybox:latest", s.regV2WithAuth.url))
}
func (s *SkopeoSuite) TestNeedAuthToPrivateRegistryV2WithoutDockerCfg(c *check.C) {
assertSkopeoFails(c, ".*authentication required.*",
func (s *skopeoSuite) TestNeedAuthToPrivateRegistryV2WithoutDockerCfg() {
t := s.T()
assertSkopeoFails(t, ".*authentication required.*",
"--tls-verify=false", "inspect", fmt.Sprintf("docker://%s/busybox:latest", s.regV2WithAuth.url))
}
func (s *SkopeoSuite) TestCertDirInsteadOfCertPath(c *check.C) {
assertSkopeoFails(c, ".*unknown flag: --cert-path.*",
func (s *skopeoSuite) TestCertDirInsteadOfCertPath() {
t := s.T()
assertSkopeoFails(t, ".*unknown flag: --cert-path.*",
"--tls-verify=false", "inspect", fmt.Sprintf("docker://%s/busybox:latest", s.regV2WithAuth.url), "--cert-path=/")
assertSkopeoFails(c, ".*authentication required.*",
assertSkopeoFails(t, ".*authentication required.*",
"--tls-verify=false", "inspect", fmt.Sprintf("docker://%s/busybox:latest", s.regV2WithAuth.url), "--cert-dir=/etc/docker/certs.d/")
}
// TODO(runcom): as soon as we can push to registries ensure you can inspect here
// not just get image not found :)
func (s *SkopeoSuite) TestNoNeedAuthToPrivateRegistryV2ImageNotFound(c *check.C) {
func (s *skopeoSuite) TestNoNeedAuthToPrivateRegistryV2ImageNotFound() {
t := s.T()
out, err := exec.Command(skopeoBinary, "--tls-verify=false", "inspect", fmt.Sprintf("docker://%s/busybox:latest", s.regV2.url)).CombinedOutput()
c.Assert(err, check.NotNil, check.Commentf(string(out)))
c.Assert(string(out), check.Matches, "(?s).*manifest unknown.*") // (?s) : '.' will also match newlines
c.Assert(string(out), check.Not(check.Matches), "(?s).*unauthorized: authentication required.*") // (?s) : '.' will also match newlines
assert.Error(t, err, "%s", string(out))
assert.Regexp(t, "(?s).*manifest unknown.*", string(out)) // (?s) : '.' will also match newlines
assert.NotRegexp(t, "(?s).*unauthorized: authentication required.*", string(out)) // (?s) : '.' will also match newlines
}
func (s *SkopeoSuite) TestInspectFailsWhenReferenceIsInvalid(c *check.C) {
assertSkopeoFails(c, `.*Invalid image name.*`, "inspect", "unknown")
func (s *skopeoSuite) TestInspectFailsWhenReferenceIsInvalid() {
t := s.T()
assertSkopeoFails(t, `.*Invalid image name.*`, "inspect", "unknown")
}
func (s *SkopeoSuite) TestLoginLogout(c *check.C) {
assertSkopeoSucceeds(c, "^Login Succeeded!\n$",
func (s *skopeoSuite) TestLoginLogout() {
t := s.T()
assertSkopeoSucceeds(t, "^Login Succeeded!\n$",
"login", "--tls-verify=false", "--username="+s.regV2WithAuth.username, "--password="+s.regV2WithAuth.password, s.regV2WithAuth.url)
// test --get-login returns username
assertSkopeoSucceeds(c, fmt.Sprintf("^%s\n$", s.regV2WithAuth.username),
assertSkopeoSucceeds(t, fmt.Sprintf("^%s\n$", s.regV2WithAuth.username),
"login", "--tls-verify=false", "--get-login", s.regV2WithAuth.url)
// test logout
assertSkopeoSucceeds(c, fmt.Sprintf("^Removed login credentials for %s\n$", s.regV2WithAuth.url),
assertSkopeoSucceeds(t, fmt.Sprintf("^Removed login credentials for %s\n$", s.regV2WithAuth.url),
"logout", s.regV2WithAuth.url)
}
func (s *SkopeoSuite) TestCopyWithLocalAuth(c *check.C) {
assertSkopeoSucceeds(c, "^Login Succeeded!\n$",
func (s *skopeoSuite) TestCopyWithLocalAuth() {
t := s.T()
assertSkopeoSucceeds(t, "^Login Succeeded!\n$",
"login", "--tls-verify=false", "--username="+s.regV2WithAuth.username, "--password="+s.regV2WithAuth.password, s.regV2WithAuth.url)
// copy to private registry using local authentication
imageName := fmt.Sprintf("docker://%s/busybox:mine", s.regV2WithAuth.url)
assertSkopeoSucceeds(c, "", "copy", "--dest-tls-verify=false", testFQIN+":latest", imageName)
assertSkopeoSucceeds(t, "", "copy", "--dest-tls-verify=false", testFQIN+":latest", imageName)
// inspect from private registry
assertSkopeoSucceeds(c, "", "inspect", "--tls-verify=false", imageName)
assertSkopeoSucceeds(t, "", "inspect", "--tls-verify=false", imageName)
// logout from the registry
assertSkopeoSucceeds(c, fmt.Sprintf("^Removed login credentials for %s\n$", s.regV2WithAuth.url),
assertSkopeoSucceeds(t, fmt.Sprintf("^Removed login credentials for %s\n$", s.regV2WithAuth.url),
"logout", s.regV2WithAuth.url)
// inspect from private registry should fail after logout
assertSkopeoFails(c, ".*authentication required.*",
assertSkopeoFails(t, ".*authentication required.*",
"inspect", "--tls-verify=false", imageName)
}
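
Note: the gocheck-to-testify conversion above repeats one pattern throughout: embed suite.Suite, implement the lifecycle hooks, and fetch the per-test *testing.T via s.T(). A minimal stand-alone sketch, assuming testify's suite package as imported above; exampleSuite and its methods are hypothetical and would normally live in a _test.go file:

```go
package example

import (
	"testing"

	"github.com/stretchr/testify/require"
	"github.com/stretchr/testify/suite"
)

type exampleSuite struct {
	suite.Suite
	workDir string
}

// SetupSuite runs once, before the first test method.
func (s *exampleSuite) SetupSuite() {
	s.workDir = s.T().TempDir()
}

// Any exported method whose name starts with "Test" becomes a test case.
func (s *exampleSuite) TestWorkDirExists() {
	t := s.T()
	require.NotEmpty(t, s.workDir)
}

// TestExample is the entry point that "go test" discovers.
func TestExample(t *testing.T) {
	suite.Run(t, &exampleSuite{})
}
```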

File diff suppressed because it is too large

@@ -9,10 +9,11 @@ import (
"os/exec"
"path/filepath"
"strings"
"testing"
"time"
"github.com/containers/storage/pkg/homedir"
"gopkg.in/check.v1"
"github.com/stretchr/testify/require"
)
var adminKUBECONFIG = map[string]string{
@@ -30,21 +31,21 @@ type openshiftCluster struct {
// startOpenshiftCluster creates a new openshiftCluster.
// WARNING: This affects state in users' home directory! Only run
// in isolated test environment.
func startOpenshiftCluster(c *check.C) *openshiftCluster {
func startOpenshiftCluster(t *testing.T) *openshiftCluster {
cluster := &openshiftCluster{}
cluster.workingDir = c.MkDir()
cluster.workingDir = t.TempDir()
cluster.startMaster(c)
cluster.prepareRegistryConfig(c)
cluster.startRegistry(c)
cluster.ocLoginToProject(c)
cluster.dockerLogin(c)
cluster.relaxImageSignerPermissions(c)
cluster.startMaster(t)
cluster.prepareRegistryConfig(t)
cluster.startRegistry(t)
cluster.ocLoginToProject(t)
cluster.dockerLogin(t)
cluster.relaxImageSignerPermissions(t)
return cluster
}
// clusterCmd creates an exec.Cmd in cluster.workingDir with current environment modified by environment
// clusterCmd creates an exec.Cmd in cluster.workingDir with current environment modified by environment.
func (cluster *openshiftCluster) clusterCmd(env map[string]string, name string, args ...string) *exec.Cmd {
cmd := exec.Command(name, args...)
cmd.Dir = cluster.workingDir
@@ -56,21 +57,20 @@ func (cluster *openshiftCluster) clusterCmd(env map[string]string, name string,
}
// startMaster starts the OpenShift master (etcd+API server) and waits for it to be ready, or terminates on failure.
func (cluster *openshiftCluster) startMaster(c *check.C) {
func (cluster *openshiftCluster) startMaster(t *testing.T) {
cmd := cluster.clusterCmd(nil, "openshift", "start", "master")
cluster.processes = append(cluster.processes, cmd)
stdout, err := cmd.StdoutPipe()
c.Assert(err, check.IsNil)
// Send both to the same pipe. This might cause the two streams to be mixed up,
require.NoError(t, err)
// but logging actually goes only to stderr - this primarily ensure we log any
// unexpected output to stdout.
cmd.Stderr = cmd.Stdout
err = cmd.Start()
c.Assert(err, check.IsNil)
require.NoError(t, err)
portOpen, terminatePortCheck := newPortChecker(c, 8443)
portOpen, terminatePortCheck := newPortChecker(t, 8443)
defer func() {
c.Logf("Terminating port check")
t.Logf("Terminating port check")
terminatePortCheck <- true
}()
@@ -78,12 +78,12 @@ func (cluster *openshiftCluster) startMaster(c *check.C) {
logCheckFound := make(chan bool)
go func() {
defer func() {
c.Logf("Log checker exiting")
t.Logf("Log checker exiting")
}()
scanner := bufio.NewScanner(stdout)
for scanner.Scan() {
line := scanner.Text()
c.Logf("Log line: %s", line)
t.Logf("Log line: %s", line)
if strings.Contains(line, "Started Origin Controllers") {
logCheckFound <- true
return
@@ -92,7 +92,7 @@ func (cluster *openshiftCluster) startMaster(c *check.C) {
// Note: we can block before we get here.
select {
case <-terminateLogCheck:
c.Logf("terminated")
t.Logf("terminated")
return
default:
// Do not block here and read the next line.
@@ -101,7 +101,7 @@ func (cluster *openshiftCluster) startMaster(c *check.C) {
logCheckFound <- false
}()
defer func() {
c.Logf("Terminating log check")
t.Logf("Terminating log check")
terminateLogCheck <- true
}()
@@ -110,26 +110,26 @@ func (cluster *openshiftCluster) startMaster(c *check.C) {
ctx, cancel := context.WithTimeout(context.Background(), 10*time.Minute)
defer cancel()
for !gotPortCheck || !gotLogCheck {
c.Logf("Waiting for master")
t.Logf("Waiting for master")
select {
case <-portOpen:
c.Logf("port check done")
t.Logf("port check done")
gotPortCheck = true
case found := <-logCheckFound:
c.Logf("log check done, found: %t", found)
t.Logf("log check done, found: %t", found)
if !found {
c.Fatal("log check done, success message not found")
t.Fatal("log check done, success message not found")
}
gotLogCheck = true
case <-ctx.Done():
c.Fatalf("Timed out waiting for master: %v", ctx.Err())
t.Fatalf("Timed out waiting for master: %v", ctx.Err())
}
}
c.Logf("OK, master started!")
t.Logf("OK, master started!")
}
// prepareRegistryConfig creates a registry service account and a related k8s client configuration in ${cluster.workingDir}/openshift.local.registry.
func (cluster *openshiftCluster) prepareRegistryConfig(c *check.C) {
func (cluster *openshiftCluster) prepareRegistryConfig(t *testing.T) {
// This partially mimics the objects created by (oadm registry), except that we run the
// server directly as an ordinary process instead of a pod with an implicitly attached service account.
saJSON := `{
@@ -140,93 +140,93 @@ func (cluster *openshiftCluster) prepareRegistryConfig(c *check.C) {
}
}`
cmd := cluster.clusterCmd(adminKUBECONFIG, "oc", "create", "-f", "-")
runExecCmdWithInput(c, cmd, saJSON)
runExecCmdWithInput(t, cmd, saJSON)
cmd = cluster.clusterCmd(adminKUBECONFIG, "oadm", "policy", "add-cluster-role-to-user", "system:registry", "-z", "registry")
out, err := cmd.CombinedOutput()
c.Assert(err, check.IsNil, check.Commentf("%s", string(out)))
c.Assert(string(out), check.Equals, "cluster role \"system:registry\" added: \"registry\"\n")
require.NoError(t, err, "%s", string(out))
require.Equal(t, "cluster role \"system:registry\" added: \"registry\"\n", string(out))
cmd = cluster.clusterCmd(adminKUBECONFIG, "oadm", "create-api-client-config", "--client-dir=openshift.local.registry", "--basename=openshift-registry", "--user=system:serviceaccount:default:registry")
out, err = cmd.CombinedOutput()
c.Assert(err, check.IsNil, check.Commentf("%s", string(out)))
c.Assert(string(out), check.Equals, "")
require.NoError(t, err, "%s", string(out))
require.Equal(t, "", string(out))
}
// startRegistry starts the OpenShift registry with configPart on port, waits for it to be ready, and returns the process object, or terminates on failure.
func (cluster *openshiftCluster) startRegistryProcess(c *check.C, port int, configPath string) *exec.Cmd {
func (cluster *openshiftCluster) startRegistryProcess(t *testing.T, port uint16, configPath string) *exec.Cmd {
cmd := cluster.clusterCmd(map[string]string{
"KUBECONFIG": "openshift.local.registry/openshift-registry.kubeconfig",
"DOCKER_REGISTRY_URL": fmt.Sprintf("127.0.0.1:%d", port),
}, "dockerregistry", configPath)
consumeAndLogOutputs(c, fmt.Sprintf("registry-%d", port), cmd)
consumeAndLogOutputs(t, fmt.Sprintf("registry-%d", port), cmd)
err := cmd.Start()
c.Assert(err, check.IsNil)
require.NoError(t, err, "%s")
portOpen, terminatePortCheck := newPortChecker(c, port)
portOpen, terminatePortCheck := newPortChecker(t, port)
defer func() {
terminatePortCheck <- true
}()
c.Logf("Waiting for registry to start")
t.Logf("Waiting for registry to start")
ctx, cancel := context.WithTimeout(context.Background(), 1*time.Minute)
defer cancel()
select {
case <-portOpen:
c.Logf("OK, Registry port open")
t.Logf("OK, Registry port open")
case <-ctx.Done():
c.Fatalf("Timed out waiting for registry to start: %v", ctx.Err())
t.Fatalf("Timed out waiting for registry to start: %v", ctx.Err())
}
return cmd
}
// startRegistry starts the OpenShift registry and waits for it to be ready, or terminates on failure.
func (cluster *openshiftCluster) startRegistry(c *check.C) {
func (cluster *openshiftCluster) startRegistry(t *testing.T) {
// Our “primary” registry
cluster.processes = append(cluster.processes, cluster.startRegistryProcess(c, 5000, "/atomic-registry-config.yml"))
cluster.processes = append(cluster.processes, cluster.startRegistryProcess(t, 5000, "/atomic-registry-config.yml"))
// A registry configured with acceptschema2:false
schema1Config := fileFromFixture(c, "/atomic-registry-config.yml", map[string]string{
schema1Config := fileFromFixture(t, "/atomic-registry-config.yml", map[string]string{
"addr: :5000": "addr: :5005",
"rootdirectory: /registry": "rootdirectory: /registry-schema1",
// The default configuration currently already contains acceptschema2: false
})
// Make sure the configuration contains "acceptschema2: false", because eventually it will be enabled upstream and this function will need to be updated.
configContents, err := os.ReadFile(schema1Config)
c.Assert(err, check.IsNil)
c.Assert(string(configContents), check.Matches, "(?s).*acceptschema2: false.*")
cluster.processes = append(cluster.processes, cluster.startRegistryProcess(c, 5005, schema1Config))
require.NoError(t, err)
require.Regexp(t, "(?s).*acceptschema2: false.*", string(configContents))
cluster.processes = append(cluster.processes, cluster.startRegistryProcess(t, 5005, schema1Config))
// A registry configured with acceptschema2:true
schema2Config := fileFromFixture(c, "/atomic-registry-config.yml", map[string]string{
schema2Config := fileFromFixture(t, "/atomic-registry-config.yml", map[string]string{
"addr: :5000": "addr: :5006",
"rootdirectory: /registry": "rootdirectory: /registry-schema2",
"acceptschema2: false": "acceptschema2: true",
})
cluster.processes = append(cluster.processes, cluster.startRegistryProcess(c, 5006, schema2Config))
cluster.processes = append(cluster.processes, cluster.startRegistryProcess(t, 5006, schema2Config))
}
// ocLogin runs (oc login) and (oc new-project) on the cluster, or terminates on failure.
func (cluster *openshiftCluster) ocLoginToProject(c *check.C) {
c.Logf("oc login")
func (cluster *openshiftCluster) ocLoginToProject(t *testing.T) {
t.Logf("oc login")
cmd := cluster.clusterCmd(nil, "oc", "login", "--certificate-authority=openshift.local.config/master/ca.crt", "-u", "myuser", "-p", "mypw", "https://localhost:8443")
out, err := cmd.CombinedOutput()
c.Assert(err, check.IsNil, check.Commentf("%s", out))
c.Assert(string(out), check.Matches, "(?s).*Login successful.*") // (?s) : '.' will also match newlines
require.NoError(t, err, "%s", out)
require.Regexp(t, "(?s).*Login successful.*", string(out)) // (?s) : '.' will also match newlines
outString := combinedOutputOfCommand(c, "oc", "new-project", "myns")
c.Assert(outString, check.Matches, `(?s).*Now using project "myns".*`) // (?s) : '.' will also match newlines
outString := combinedOutputOfCommand(t, "oc", "new-project", "myns")
require.Regexp(t, `(?s).*Now using project "myns".*`, outString) // (?s) : '.' will also match newlines
}
// dockerLogin simulates (docker login) to the cluster, or terminates on failure.
// We do not run (docker login) directly, because that requires a running daemon and a docker package.
func (cluster *openshiftCluster) dockerLogin(c *check.C) {
func (cluster *openshiftCluster) dockerLogin(t *testing.T) {
cluster.dockerDir = filepath.Join(homedir.Get(), ".docker")
err := os.Mkdir(cluster.dockerDir, 0700)
c.Assert(err, check.IsNil)
require.NoError(t, err)
out := combinedOutputOfCommand(c, "oc", "config", "view", "-o", "json", "-o", "jsonpath={.users[*].user.token}")
c.Logf("oc config value: %s", out)
out := combinedOutputOfCommand(t, "oc", "config", "view", "-o", "json", "-o", "jsonpath={.users[*].user.token}")
t.Logf("oc config value: %s", out)
authValue := base64.StdEncoding.EncodeToString([]byte("unused:" + out))
auths := []string{}
for _, port := range []int{5000, 5005, 5006} {
@@ -237,22 +237,22 @@ func (cluster *openshiftCluster) dockerLogin(c *check.C) {
}
configJSON := `{"auths": {` + strings.Join(auths, ",") + `}}`
err = os.WriteFile(filepath.Join(cluster.dockerDir, "config.json"), []byte(configJSON), 0600)
c.Assert(err, check.IsNil)
require.NoError(t, err)
}
// relaxImageSignerPermissions opens up the system:image-signer permissions so that
// anyone can work with signatures
// FIXME: This also allows anyone to DoS anyone else; this design is really not all
// that workable, but it is the best we can do for now.
func (cluster *openshiftCluster) relaxImageSignerPermissions(c *check.C) {
func (cluster *openshiftCluster) relaxImageSignerPermissions(t *testing.T) {
cmd := cluster.clusterCmd(adminKUBECONFIG, "oadm", "policy", "add-cluster-role-to-group", "system:image-signer", "system:authenticated")
out, err := cmd.CombinedOutput()
c.Assert(err, check.IsNil, check.Commentf("%s", string(out)))
c.Assert(string(out), check.Equals, "cluster role \"system:image-signer\" added: \"system:authenticated\"\n")
require.NoError(t, err, "%s", string(out))
require.Equal(t, "cluster role \"system:image-signer\" added: \"system:authenticated\"\n", string(out))
}
// tearDown stops the cluster services and deletes (only some!) of the state.
func (cluster *openshiftCluster) tearDown(c *check.C) {
func (cluster *openshiftCluster) tearDown(t *testing.T) {
for i := len(cluster.processes) - 1; i >= 0; i-- {
// It's undocumented what Kill() returns if the process has terminated,
// so we couldn't check just for that. This is running in a container anyway…
@@ -260,6 +260,6 @@ func (cluster *openshiftCluster) tearDown(c *check.C) {
}
if cluster.dockerDir != "" {
err := os.RemoveAll(cluster.dockerDir)
c.Assert(err, check.IsNil)
require.NoError(t, err)
}
}


@@ -7,7 +7,8 @@ import (
"os"
"os/exec"
"gopkg.in/check.v1"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
/*
@@ -20,7 +21,7 @@ To use it, run:
to start a container, then within the container:
SKOPEO_CONTAINER_TESTS=1 PS1='nested> ' go test -tags openshift_shell -timeout=24h ./integration -v -check.v -check.vv -check.f='CopySuite.TestRunShell'
SKOPEO_CONTAINER_TESTS=1 PS1='nested> ' go test -tags openshift_shell -timeout=24h ./integration -v -run='copySuite.TestRunShell'
An example of what can be done within the container:
@@ -33,13 +34,14 @@ An example of what can be done within the container:
curl -L -v 'http://localhost:5000/v2/myns/personal/manifests/personal' --header 'Authorization: Bearer $token_from_oauth'
curl -L -v 'http://localhost:5000/extensions/v2/myns/personal/signatures/$manifest_digest' --header 'Authorization: Bearer $token_from_oauth'
*/
func (s *CopySuite) TestRunShell(c *check.C) {
func (s *copySuite) TestRunShell() {
t := s.T()
cmd := exec.Command("bash", "-i")
tty, err := os.OpenFile("/dev/tty", os.O_RDWR, 0)
c.Assert(err, check.IsNil)
require.NoError(t, err)
cmd.Stdin = tty
cmd.Stdout = tty
cmd.Stderr = tty
err = cmd.Run()
c.Assert(err, check.IsNil)
assert.NoError(t, err)
}


@@ -7,6 +7,6 @@ import (
"os/exec"
)
// cmdLifecycleToParentIfPossible tries to exit if the parent process exits (only works on Linux)
// cmdLifecycleToParentIfPossible tries to exit if the parent process exits (only works on Linux).
func cmdLifecycleToParentIfPossible(c *exec.Cmd) {
}


@@ -9,16 +9,18 @@ import (
"os/exec"
"strings"
"syscall"
"testing"
"time"
"gopkg.in/check.v1"
"github.com/containers/image/v5/manifest"
imgspecv1 "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"github.com/stretchr/testify/suite"
)
// This image is known to be x86_64 only right now
const knownNotManifestListedImage_x8664 = "docker://quay.io/coreos/11bot"
const knownNotManifestListedImageX8664 = "docker://quay.io/coreos/11bot"
// knownNotExtantImage would be very surprising if it did exist
const knownNotExtantImage = "docker://quay.io/centos/centos:opensusewindowsubuntu"
@@ -32,7 +34,7 @@ type request struct {
// Method is the name of the function
Method string `json:"method"`
// Args is the arguments (parsed inside the function)
Args []interface{} `json:"args"`
Args []any `json:"args"`
}
// reply is copied from proxy.go
@@ -40,7 +42,7 @@ type reply struct {
// Success is true if and only if the call succeeded.
Success bool `json:"success"`
// Value is an arbitrary value (or values, as array/map) returned from the call.
Value interface{} `json:"value"`
Value any `json:"value"`
// PipeID is an index into open pipes, and should be passed to FinishPipe
PipeID uint32 `json:"pipeid"`
// Error should be non-empty if Success == false
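
Note: the interface{} to any rewrites in this file are purely cosmetic: any is a predeclared alias for interface{} introduced in Go 1.18, so the JSON wire format of these structs is unchanged. A small sketch (the image reference is hypothetical):

```go
package main

import (
	"encoding/json"
	"fmt"
)

type request struct {
	Method string `json:"method"`
	Args   []any  `json:"args"` // encodes exactly like []interface{}
}

func main() {
	b, _ := json.Marshal(request{
		Method: "OpenImage",
		Args:   []any{"docker://example.com/some/image:latest"}, // hypothetical reference
	})
	fmt.Println(string(b)) // {"method":"OpenImage","args":["docker://example.com/some/image:latest"]}
}
```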
@@ -60,7 +62,7 @@ type pipefd struct {
fd *os.File
}
func (p *proxy) call(method string, args []interface{}) (rval interface{}, fd *pipefd, err error) {
func (p *proxy) call(method string, args []any) (rval any, fd *pipefd, err error) {
req := request{
Method: method,
Args: args,
@@ -81,7 +83,7 @@ func (p *proxy) call(method string, args []interface{}) (rval interface{}, fd *p
replybuf := make([]byte, maxMsgSize)
n, oobn, _, _, err := p.c.ReadMsgUnix(replybuf, oob)
if err != nil {
err = fmt.Errorf("reading reply: %v", err)
err = fmt.Errorf("reading reply: %w", err)
return
}
var reply reply
@@ -99,7 +101,7 @@ func (p *proxy) call(method string, args []interface{}) (rval interface{}, fd *p
var scms []syscall.SocketControlMessage
scms, err = syscall.ParseSocketControlMessage(oob[:oobn])
if err != nil {
err = fmt.Errorf("failed to parse control message: %v", err)
err = fmt.Errorf("failed to parse control message: %w", err)
return
}
if len(scms) != 1 {
@@ -109,7 +111,7 @@ func (p *proxy) call(method string, args []interface{}) (rval interface{}, fd *p
var fds []int
fds, err = syscall.ParseUnixRights(&scms[0])
if err != nil {
err = fmt.Errorf("failed to parse unix rights: %v", err)
err = fmt.Errorf("failed to parse unix rights: %w", err)
return
}
fd = &pipefd{
@@ -122,7 +124,7 @@ func (p *proxy) call(method string, args []interface{}) (rval interface{}, fd *p
return
}
func (p *proxy) callNoFd(method string, args []interface{}) (rval interface{}, err error) {
func (p *proxy) callNoFd(method string, args []any) (rval any, err error) {
var fd *pipefd
rval, fd, err = p.call(method, args)
if err != nil {
@@ -135,7 +137,7 @@ func (p *proxy) callNoFd(method string, args []interface{}) (rval interface{}, e
return rval, nil
}
func (p *proxy) callReadAllBytes(method string, args []interface{}) (rval interface{}, buf []byte, err error) {
func (p *proxy) callReadAllBytes(method string, args []any) (rval any, buf []byte, err error) {
var fd *pipefd
rval, fd, err = p.call(method, args)
if err != nil {
@@ -153,7 +155,7 @@ func (p *proxy) callReadAllBytes(method string, args []interface{}) (rval interf
err: err,
}
}()
_, err = p.callNoFd("FinishPipe", []interface{}{fd.id})
_, err = p.callNoFd("FinishPipe", []any{fd.id})
if err != nil {
return
}
@@ -214,17 +216,12 @@ func newProxy() (*proxy, error) {
return p, nil
}
func init() {
check.Suite(&ProxySuite{})
func TestProxy(t *testing.T) {
suite.Run(t, &proxySuite{})
}
type ProxySuite struct {
}
func (s *ProxySuite) SetUpSuite(c *check.C) {
}
func (s *ProxySuite) TearDownSuite(c *check.C) {
type proxySuite struct {
suite.Suite
}
type byteFetch struct {
@@ -233,7 +230,7 @@ type byteFetch struct {
}
func runTestGetManifestAndConfig(p *proxy, img string) error {
v, err := p.callNoFd("OpenImage", []interface{}{knownNotManifestListedImage_x8664})
v, err := p.callNoFd("OpenImage", []any{img})
if err != nil {
return err
}
@@ -248,7 +245,7 @@ func runTestGetManifestAndConfig(p *proxy, img string) error {
}
// Also verify the optional path
v, err = p.callNoFd("OpenImageOptional", []interface{}{knownNotManifestListedImage_x8664})
v, err = p.callNoFd("OpenImageOptional", []any{img})
if err != nil {
return err
}
@@ -262,12 +259,12 @@ func runTestGetManifestAndConfig(p *proxy, img string) error {
return fmt.Errorf("got zero from expected image")
}
_, err = p.callNoFd("CloseImage", []interface{}{imgid2})
_, err = p.callNoFd("CloseImage", []any{imgid2})
if err != nil {
return err
}
_, manifestBytes, err := p.callReadAllBytes("GetManifest", []interface{}{imgid})
_, manifestBytes, err := p.callReadAllBytes("GetManifest", []any{imgid})
if err != nil {
return err
}
@@ -276,7 +273,7 @@ func runTestGetManifestAndConfig(p *proxy, img string) error {
return err
}
_, configBytes, err := p.callReadAllBytes("GetFullConfig", []interface{}{imgid})
_, configBytes, err := p.callReadAllBytes("GetFullConfig", []any{imgid})
if err != nil {
return err
}
@@ -295,7 +292,7 @@ func runTestGetManifestAndConfig(p *proxy, img string) error {
}
// Also test this legacy interface
_, ctrconfigBytes, err := p.callReadAllBytes("GetConfig", []interface{}{imgid})
_, ctrconfigBytes, err := p.callReadAllBytes("GetConfig", []any{imgid})
if err != nil {
return err
}
@@ -310,7 +307,7 @@ func runTestGetManifestAndConfig(p *proxy, img string) error {
return fmt.Errorf("No CMD or ENTRYPOINT set")
}
_, err = p.callNoFd("CloseImage", []interface{}{imgid})
_, err = p.callNoFd("CloseImage", []any{imgid})
if err != nil {
return err
}
@@ -319,7 +316,7 @@ func runTestGetManifestAndConfig(p *proxy, img string) error {
}
func runTestOpenImageOptionalNotFound(p *proxy, img string) error {
v, err := p.callNoFd("OpenImageOptional", []interface{}{img})
v, err := p.callNoFd("OpenImageOptional", []any{img})
if err != nil {
return err
}
@@ -335,25 +332,26 @@ func runTestOpenImageOptionalNotFound(p *proxy, img string) error {
return nil
}
func (s *ProxySuite) TestProxy(c *check.C) {
func (s *proxySuite) TestProxy() {
t := s.T()
p, err := newProxy()
c.Assert(err, check.IsNil)
require.NoError(t, err)
err = runTestGetManifestAndConfig(p, knownNotManifestListedImage_x8664)
err = runTestGetManifestAndConfig(p, knownNotManifestListedImageX8664)
if err != nil {
err = fmt.Errorf("Testing image %s: %v", knownNotManifestListedImage_x8664, err)
err = fmt.Errorf("Testing image %s: %v", knownNotManifestListedImageX8664, err)
}
c.Assert(err, check.IsNil)
assert.NoError(t, err)
err = runTestGetManifestAndConfig(p, knownListImage)
if err != nil {
err = fmt.Errorf("Testing image %s: %v", knownListImage, err)
}
c.Assert(err, check.IsNil)
assert.NoError(t, err)
err = runTestOpenImageOptionalNotFound(p, knownNotExtantImage)
if err != nil {
err = fmt.Errorf("Testing optional image %s: %v", knownNotExtantImage, err)
}
c.Assert(err, check.IsNil)
assert.NoError(t, err)
}
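For readers following the proxy changes above, this is a minimal sketch of one request/reply round trip implied by the request and reply structs and p.call earlier in this file; the image reference, the returned ID, and the JSON shown are illustrative assumptions, not captured protocol output.

// Hypothetical round trip, matching the request/reply structs above
// (values are made up; the real proxy assigns its own image IDs).
req := request{Method: "OpenImage", Args: []any{"docker://quay.io/libpod/busybox:amd64"}}
// marshals to: {"method":"OpenImage","args":["docker://quay.io/libpod/busybox:amd64"]}
// a successful reply could then look like:
//   {"success":true,"value":42,"pipeid":0}
// where value is the image ID that later calls such as GetManifest and CloseImage
// receive in their args array.
_ = req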


@@ -6,9 +6,10 @@ import (
"os"
"os/exec"
"path/filepath"
"testing"
"time"
"gopkg.in/check.v1"
"github.com/stretchr/testify/require"
)
const (
@@ -24,9 +25,9 @@ type testRegistryV2 struct {
email string
}
func setupRegistryV2At(c *check.C, url string, auth, schema1 bool) *testRegistryV2 {
reg, err := newTestRegistryV2At(c, url, auth, schema1)
c.Assert(err, check.IsNil)
func setupRegistryV2At(t *testing.T, url string, auth, schema1 bool) *testRegistryV2 {
reg, err := newTestRegistryV2At(t, url, auth, schema1)
require.NoError(t, err)
// Wait for registry to be ready to serve requests.
for i := 0; i != 50; i++ {
@@ -37,13 +38,13 @@ func setupRegistryV2At(c *check.C, url string, auth, schema1 bool) *testRegistry
}
if err != nil {
c.Fatal("Timeout waiting for test registry to become available")
t.Fatal("Timeout waiting for test registry to become available")
}
return reg
}
func newTestRegistryV2At(c *check.C, url string, auth, schema1 bool) (*testRegistryV2, error) {
tmp := c.MkDir()
func newTestRegistryV2At(t *testing.T, url string, auth, schema1 bool) (*testRegistryV2, error) {
tmp := t.TempDir()
template := `version: 0.1
loglevel: debug
storage:
@@ -94,10 +95,10 @@ compatibility:
cmd = exec.Command(binaryV2, "serve", confPath)
}
consumeAndLogOutputs(c, fmt.Sprintf("registry-%s", url), cmd)
consumeAndLogOutputs(t, fmt.Sprintf("registry-%s", url), cmd)
if err := cmd.Start(); err != nil {
if os.IsNotExist(err) {
c.Skip(err.Error())
t.Skip(err.Error())
}
return nil, err
}
@@ -110,9 +111,9 @@ compatibility:
}, nil
}
func (t *testRegistryV2) Ping() error {
func (r *testRegistryV2) Ping() error {
// We always ping through HTTP for our test registry.
resp, err := http.Get(fmt.Sprintf("http://%s/v2/", t.url))
resp, err := http.Get(fmt.Sprintf("http://%s/v2/", r.url))
if err != nil {
return err
}
@@ -123,8 +124,8 @@ func (t *testRegistryV2) Ping() error {
return nil
}
func (t *testRegistryV2) tearDown(c *check.C) {
func (r *testRegistryV2) tearDown() {
// It's undocumented what Kill() returns if the process has terminated,
// so we couldn't check just for that. This is running in a container anyway…
_ = t.cmd.Process.Kill()
_ = r.cmd.Process.Kill()
}


@@ -6,23 +6,28 @@ import (
"os"
"os/exec"
"strings"
"testing"
"github.com/containers/image/v5/signature"
"gopkg.in/check.v1"
"github.com/stretchr/testify/require"
"github.com/stretchr/testify/suite"
)
const (
gpgBinary = "gpg"
)
func init() {
check.Suite(&SigningSuite{})
func TestSigning(t *testing.T) {
suite.Run(t, &signingSuite{})
}
type SigningSuite struct {
type signingSuite struct {
suite.Suite
fingerprint string
}
var _ = suite.SetupAllSuite(&signingSuite{})
func findFingerprint(lineBytes []byte) (string, error) {
lines := string(lineBytes)
for _, line := range strings.Split(lines, "\n") {
@@ -34,43 +39,41 @@ func findFingerprint(lineBytes []byte) (string, error) {
return "", errors.New("No fingerprint found")
}
func (s *SigningSuite) SetUpSuite(c *check.C) {
func (s *signingSuite) SetupSuite() {
t := s.T()
_, err := exec.LookPath(skopeoBinary)
c.Assert(err, check.IsNil)
require.NoError(t, err)
gpgHome := c.MkDir()
os.Setenv("GNUPGHOME", gpgHome)
gpgHome := t.TempDir()
t.Setenv("GNUPGHOME", gpgHome)
runCommandWithInput(c, "Key-Type: RSA\nName-Real: Testing user\n%no-protection\n%commit\n", gpgBinary, "--homedir", gpgHome, "--batch", "--gen-key")
runCommandWithInput(t, "Key-Type: RSA\nName-Real: Testing user\n%no-protection\n%commit\n", gpgBinary, "--homedir", gpgHome, "--batch", "--gen-key")
lines, err := exec.Command(gpgBinary, "--homedir", gpgHome, "--with-colons", "--no-permission-warning", "--fingerprint").Output()
c.Assert(err, check.IsNil)
require.NoError(t, err)
s.fingerprint, err = findFingerprint(lines)
c.Assert(err, check.IsNil)
require.NoError(t, err)
}
func (s *SigningSuite) TearDownSuite(c *check.C) {
os.Unsetenv("GNUPGHOME")
}
func (s *SigningSuite) TestSignVerifySmoke(c *check.C) {
func (s *signingSuite) TestSignVerifySmoke() {
t := s.T()
mech, _, err := signature.NewEphemeralGPGSigningMechanism([]byte{})
c.Assert(err, check.IsNil)
require.NoError(t, err)
defer mech.Close()
if err := mech.SupportsSigning(); err != nil { // FIXME? Test that verification and policy enforcement works, using signatures from fixtures
c.Skip(fmt.Sprintf("Signing not supported: %v", err))
t.Skipf("Signing not supported: %v", err)
}
manifestPath := "fixtures/image.manifest.json"
dockerReference := "testing/smoketest"
sigOutput, err := os.CreateTemp("", "sig")
c.Assert(err, check.IsNil)
require.NoError(t, err)
defer os.Remove(sigOutput.Name())
assertSkopeoSucceeds(c, "^$", "standalone-sign", "-o", sigOutput.Name(),
assertSkopeoSucceeds(t, "^$", "standalone-sign", "-o", sigOutput.Name(),
manifestPath, dockerReference, s.fingerprint)
expected := fmt.Sprintf("^Signature verified, digest %s\n$", TestImageManifestDigest)
assertSkopeoSucceeds(c, expected, "standalone-verify", manifestPath,
expected := fmt.Sprintf("^Signature verified using fingerprint %s, digest %s\n$", s.fingerprint, TestImageManifestDigest)
assertSkopeoSucceeds(t, expected, "standalone-verify", manifestPath,
dockerReference, s.fingerprint, sigOutput.Name())
}


@@ -9,13 +9,16 @@ import (
"path/filepath"
"regexp"
"strings"
"testing"
"github.com/containers/image/v5/docker"
"github.com/containers/image/v5/docker/reference"
"github.com/containers/image/v5/manifest"
"github.com/containers/image/v5/types"
imgspecv1 "github.com/opencontainers/image-spec/specs-go/v1"
"gopkg.in/check.v1"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"github.com/stretchr/testify/suite"
)
const (
@@ -33,30 +36,36 @@ const (
pullableRepoWithLatestTag = "k8s.gcr.io/pause"
)
func init() {
check.Suite(&SyncSuite{})
func TestSync(t *testing.T) {
suite.Run(t, &syncSuite{})
}
type SyncSuite struct {
type syncSuite struct {
suite.Suite
cluster *openshiftCluster
registry *testRegistryV2
}
func (s *SyncSuite) SetUpSuite(c *check.C) {
var _ = suite.SetupAllSuite(&syncSuite{})
var _ = suite.TearDownAllSuite(&syncSuite{})
func (s *syncSuite) SetupSuite() {
t := s.T()
const registryAuth = false
const registrySchema1 = false
if os.Getenv("SKOPEO_LOCAL_TESTS") == "1" {
c.Log("Running tests without a container")
t.Log("Running tests without a container")
fmt.Printf("NOTE: tests requires a V2 registry at url=%s, with auth=%t, schema1=%t \n", v2DockerRegistryURL, registryAuth, registrySchema1)
return
}
if os.Getenv("SKOPEO_CONTAINER_TESTS") != "1" {
c.Skip("Not running in a container, refusing to affect user state")
t.Skip("Not running in a container, refusing to affect user state")
}
s.cluster = startOpenshiftCluster(c) // FIXME: Set up TLS for the docker registry port instead of using "--tls-verify=false" all over the place.
s.cluster = startOpenshiftCluster(t) // FIXME: Set up TLS for the docker registry port instead of using "--tls-verify=false" all over the place.
for _, stream := range []string{"unsigned", "personal", "official", "naming", "cosigned", "compression", "schema1", "schema2"} {
isJSON := fmt.Sprintf(`{
@@ -67,41 +76,42 @@ func (s *SyncSuite) SetUpSuite(c *check.C) {
},
"spec": {}
}`, stream)
runCommandWithInput(c, isJSON, "oc", "create", "-f", "-")
runCommandWithInput(t, isJSON, "oc", "create", "-f", "-")
}
// FIXME: Set up TLS for the docker registry port instead of using "--tls-verify=false" all over the place.
s.registry = setupRegistryV2At(c, v2DockerRegistryURL, registryAuth, registrySchema1)
s.registry = setupRegistryV2At(t, v2DockerRegistryURL, registryAuth, registrySchema1)
gpgHome := c.MkDir()
os.Setenv("GNUPGHOME", gpgHome)
gpgHome := t.TempDir()
t.Setenv("GNUPGHOME", gpgHome)
for _, key := range []string{"personal", "official"} {
batchInput := fmt.Sprintf("Key-Type: RSA\nName-Real: Test key - %s\nName-email: %s@example.com\n%%no-protection\n%%commit\n",
key, key)
runCommandWithInput(c, batchInput, gpgBinary, "--batch", "--gen-key")
runCommandWithInput(t, batchInput, gpgBinary, "--batch", "--gen-key")
out := combinedOutputOfCommand(c, gpgBinary, "--armor", "--export", fmt.Sprintf("%s@example.com", key))
out := combinedOutputOfCommand(t, gpgBinary, "--armor", "--export", fmt.Sprintf("%s@example.com", key))
err := os.WriteFile(filepath.Join(gpgHome, fmt.Sprintf("%s-pubkey.gpg", key)),
[]byte(out), 0600)
c.Assert(err, check.IsNil)
require.NoError(t, err)
}
}
func (s *SyncSuite) TearDownSuite(c *check.C) {
func (s *syncSuite) TearDownSuite() {
t := s.T()
if os.Getenv("SKOPEO_LOCAL_TESTS") == "1" {
return
}
if s.registry != nil {
s.registry.tearDown(c)
s.registry.tearDown()
}
if s.cluster != nil {
s.cluster.tearDown(c)
s.cluster.tearDown(t)
}
}
func assertNumberOfManifestsInSubdirs(c *check.C, dir string, expectedCount int) {
func assertNumberOfManifestsInSubdirs(t *testing.T, dir string, expectedCount int) {
nManifests := 0
err := filepath.WalkDir(dir, func(path string, d fs.DirEntry, err error) error {
if err != nil {
@@ -113,156 +123,163 @@ func assertNumberOfManifestsInSubdirs(c *check.C, dir string, expectedCount int)
}
return nil
})
c.Assert(err, check.IsNil)
c.Assert(nManifests, check.Equals, expectedCount)
require.NoError(t, err)
assert.Equal(t, expectedCount, nManifests)
}
func (s *SyncSuite) TestDocker2DirTagged(c *check.C) {
tmpDir := c.MkDir()
func (s *syncSuite) TestDocker2DirTagged() {
t := s.T()
tmpDir := t.TempDir()
// FIXME: It would be nice to use one of the local Docker registries instead of needing an Internet connection.
image := pullableTaggedImage
imageRef, err := docker.ParseReference(fmt.Sprintf("//%s", image))
c.Assert(err, check.IsNil)
require.NoError(t, err)
imagePath := imageRef.DockerReference().String()
dir1 := path.Join(tmpDir, "dir1")
dir2 := path.Join(tmpDir, "dir2")
// sync docker => dir
assertSkopeoSucceeds(c, "", "sync", "--scoped", "--src", "docker", "--dest", "dir", image, dir1)
assertSkopeoSucceeds(t, "", "sync", "--scoped", "--src", "docker", "--dest", "dir", image, dir1)
_, err = os.Stat(path.Join(dir1, imagePath, "manifest.json"))
c.Assert(err, check.IsNil)
require.NoError(t, err)
// copy docker => dir
assertSkopeoSucceeds(c, "", "copy", "docker://"+image, "dir:"+dir2)
assertSkopeoSucceeds(t, "", "copy", "docker://"+image, "dir:"+dir2)
_, err = os.Stat(path.Join(dir2, "manifest.json"))
c.Assert(err, check.IsNil)
require.NoError(t, err)
out := combinedOutputOfCommand(c, "diff", "-urN", path.Join(dir1, imagePath), dir2)
c.Assert(out, check.Equals, "")
out := combinedOutputOfCommand(t, "diff", "-urN", path.Join(dir1, imagePath), dir2)
assert.Equal(t, "", out)
}
func (s *SyncSuite) TestDocker2DirTaggedAll(c *check.C) {
tmpDir := c.MkDir()
func (s *syncSuite) TestDocker2DirTaggedAll() {
t := s.T()
tmpDir := t.TempDir()
// FIXME: It would be nice to use one of the local Docker registries instead of needing an Internet connection.
image := pullableTaggedManifestList
imageRef, err := docker.ParseReference(fmt.Sprintf("//%s", image))
c.Assert(err, check.IsNil)
require.NoError(t, err)
imagePath := imageRef.DockerReference().String()
dir1 := path.Join(tmpDir, "dir1")
dir2 := path.Join(tmpDir, "dir2")
// sync docker => dir
assertSkopeoSucceeds(c, "", "sync", "--all", "--scoped", "--src", "docker", "--dest", "dir", image, dir1)
assertSkopeoSucceeds(t, "", "sync", "--all", "--scoped", "--src", "docker", "--dest", "dir", image, dir1)
_, err = os.Stat(path.Join(dir1, imagePath, "manifest.json"))
c.Assert(err, check.IsNil)
require.NoError(t, err)
// copy docker => dir
assertSkopeoSucceeds(c, "", "copy", "--all", "docker://"+image, "dir:"+dir2)
assertSkopeoSucceeds(t, "", "copy", "--all", "docker://"+image, "dir:"+dir2)
_, err = os.Stat(path.Join(dir2, "manifest.json"))
c.Assert(err, check.IsNil)
require.NoError(t, err)
out := combinedOutputOfCommand(c, "diff", "-urN", path.Join(dir1, imagePath), dir2)
c.Assert(out, check.Equals, "")
out := combinedOutputOfCommand(t, "diff", "-urN", path.Join(dir1, imagePath), dir2)
assert.Equal(t, "", out)
}
func (s *SyncSuite) TestPreserveDigests(c *check.C) {
tmpDir := c.MkDir()
func (s *syncSuite) TestPreserveDigests() {
t := s.T()
tmpDir := t.TempDir()
// FIXME: It would be nice to use one of the local Docker registries instead of needing an Internet connection.
image := pullableTaggedManifestList
// copy docker => dir
assertSkopeoSucceeds(c, "", "copy", "--all", "--preserve-digests", "docker://"+image, "dir:"+tmpDir)
assertSkopeoSucceeds(t, "", "copy", "--all", "--preserve-digests", "docker://"+image, "dir:"+tmpDir)
_, err := os.Stat(path.Join(tmpDir, "manifest.json"))
c.Assert(err, check.IsNil)
require.NoError(t, err)
assertSkopeoFails(c, ".*Instructed to preserve digests.*", "copy", "--all", "--preserve-digests", "--format=oci", "docker://"+image, "dir:"+tmpDir)
assertSkopeoFails(t, ".*Instructed to preserve digests.*", "copy", "--all", "--preserve-digests", "--format=oci", "docker://"+image, "dir:"+tmpDir)
}
func (s *SyncSuite) TestScoped(c *check.C) {
func (s *syncSuite) TestScoped() {
t := s.T()
// FIXME: It would be nice to use one of the local Docker registries instead of needing an Internet connection.
image := pullableTaggedImage
imageRef, err := docker.ParseReference(fmt.Sprintf("//%s", image))
c.Assert(err, check.IsNil)
require.NoError(t, err)
imagePath := imageRef.DockerReference().String()
dir1 := c.MkDir()
assertSkopeoSucceeds(c, "", "sync", "--src", "docker", "--dest", "dir", image, dir1)
dir1 := t.TempDir()
assertSkopeoSucceeds(t, "", "sync", "--src", "docker", "--dest", "dir", image, dir1)
_, err = os.Stat(path.Join(dir1, path.Base(imagePath), "manifest.json"))
c.Assert(err, check.IsNil)
require.NoError(t, err)
assertSkopeoSucceeds(c, "", "sync", "--scoped", "--src", "docker", "--dest", "dir", image, dir1)
assertSkopeoSucceeds(t, "", "sync", "--scoped", "--src", "docker", "--dest", "dir", image, dir1)
_, err = os.Stat(path.Join(dir1, imagePath, "manifest.json"))
c.Assert(err, check.IsNil)
require.NoError(t, err)
}
func (s *SyncSuite) TestDirIsNotOverwritten(c *check.C) {
func (s *syncSuite) TestDirIsNotOverwritten() {
t := s.T()
// FIXME: It would be nice to use one of the local Docker registries instead of needing an Internet connection.
image := pullableRepoWithLatestTag
imageRef, err := docker.ParseReference(fmt.Sprintf("//%s", image))
c.Assert(err, check.IsNil)
require.NoError(t, err)
imagePath := imageRef.DockerReference().String()
// make a copy of the image in the local registry
assertSkopeoSucceeds(c, "", "copy", "--dest-tls-verify=false", "docker://"+image, "docker://"+path.Join(v2DockerRegistryURL, reference.Path(imageRef.DockerReference())))
assertSkopeoSucceeds(t, "", "copy", "--dest-tls-verify=false", "docker://"+image, "docker://"+path.Join(v2DockerRegistryURL, reference.Path(imageRef.DockerReference())))
//sync upstream image to dir, not scoped
dir1 := c.MkDir()
assertSkopeoSucceeds(c, "", "sync", "--src", "docker", "--dest", "dir", image, dir1)
dir1 := t.TempDir()
assertSkopeoSucceeds(t, "", "sync", "--src", "docker", "--dest", "dir", image, dir1)
_, err = os.Stat(path.Join(dir1, path.Base(imagePath), "manifest.json"))
c.Assert(err, check.IsNil)
require.NoError(t, err)
//sync local registry image to dir, not scoped
assertSkopeoFails(c, ".*Refusing to overwrite destination directory.*", "sync", "--src-tls-verify=false", "--src", "docker", "--dest", "dir", path.Join(v2DockerRegistryURL, reference.Path(imageRef.DockerReference())), dir1)
assertSkopeoFails(t, ".*Refusing to overwrite destination directory.*", "sync", "--src-tls-verify=false", "--src", "docker", "--dest", "dir", path.Join(v2DockerRegistryURL, reference.Path(imageRef.DockerReference())), dir1)
//sync local registry image to dir, scoped
imageRef, err = docker.ParseReference(fmt.Sprintf("//%s", path.Join(v2DockerRegistryURL, reference.Path(imageRef.DockerReference()))))
c.Assert(err, check.IsNil)
require.NoError(t, err)
imagePath = imageRef.DockerReference().String()
assertSkopeoSucceeds(c, "", "sync", "--scoped", "--src-tls-verify=false", "--src", "docker", "--dest", "dir", path.Join(v2DockerRegistryURL, reference.Path(imageRef.DockerReference())), dir1)
assertSkopeoSucceeds(t, "", "sync", "--scoped", "--src-tls-verify=false", "--src", "docker", "--dest", "dir", path.Join(v2DockerRegistryURL, reference.Path(imageRef.DockerReference())), dir1)
_, err = os.Stat(path.Join(dir1, imagePath, "manifest.json"))
c.Assert(err, check.IsNil)
require.NoError(t, err)
}
func (s *SyncSuite) TestDocker2DirUntagged(c *check.C) {
tmpDir := c.MkDir()
func (s *syncSuite) TestDocker2DirUntagged() {
t := s.T()
tmpDir := t.TempDir()
// FIXME: It would be nice to use one of the local Docker registries instead of needing an Internet connection.
image := pullableRepo
imageRef, err := docker.ParseReference(fmt.Sprintf("//%s", image))
c.Assert(err, check.IsNil)
require.NoError(t, err)
imagePath := imageRef.DockerReference().String()
dir1 := path.Join(tmpDir, "dir1")
assertSkopeoSucceeds(c, "", "sync", "--scoped", "--src", "docker", "--dest", "dir", image, dir1)
assertSkopeoSucceeds(t, "", "sync", "--scoped", "--src", "docker", "--dest", "dir", image, dir1)
sysCtx := types.SystemContext{}
tags, err := docker.GetRepositoryTags(context.Background(), &sysCtx, imageRef)
c.Assert(err, check.IsNil)
c.Check(len(tags), check.Not(check.Equals), 0)
require.NoError(t, err)
assert.NotZero(t, len(tags))
nManifests, err := filepath.Glob(path.Join(dir1, path.Dir(imagePath), "*", "manifest.json"))
c.Assert(err, check.IsNil)
c.Assert(len(nManifests), check.Equals, len(tags))
require.NoError(t, err)
assert.Len(t, nManifests, len(tags))
}
func (s *SyncSuite) TestYamlUntagged(c *check.C) {
tmpDir := c.MkDir()
func (s *syncSuite) TestYamlUntagged() {
t := s.T()
tmpDir := t.TempDir()
dir1 := path.Join(tmpDir, "dir1")
image := pullableRepo
imageRef, err := docker.ParseReference(fmt.Sprintf("//%s", image))
c.Assert(err, check.IsNil)
require.NoError(t, err)
imagePath := imageRef.DockerReference().Name()
sysCtx := types.SystemContext{}
tags, err := docker.GetRepositoryTags(context.Background(), &sysCtx, imageRef)
c.Assert(err, check.IsNil)
c.Check(len(tags), check.Not(check.Equals), 0)
require.NoError(t, err)
assert.NotZero(t, len(tags))
yamlConfig := fmt.Sprintf(`
%s:
@@ -273,8 +290,8 @@ func (s *SyncSuite) TestYamlUntagged(c *check.C) {
// sync to the local registry
yamlFile := path.Join(tmpDir, "registries.yaml")
err = os.WriteFile(yamlFile, []byte(yamlConfig), 0644)
c.Assert(err, check.IsNil)
assertSkopeoSucceeds(c, "", "sync", "--scoped", "--src", "yaml", "--dest", "docker", "--dest-tls-verify=false", yamlFile, v2DockerRegistryURL)
require.NoError(t, err)
assertSkopeoSucceeds(t, "", "sync", "--scoped", "--src", "yaml", "--dest", "docker", "--dest-tls-verify=false", yamlFile, v2DockerRegistryURL)
// sync back from local registry to a folder
os.Remove(yamlFile)
yamlConfig = fmt.Sprintf(`
@@ -285,23 +302,24 @@ func (s *SyncSuite) TestYamlUntagged(c *check.C) {
`, v2DockerRegistryURL, imagePath)
err = os.WriteFile(yamlFile, []byte(yamlConfig), 0644)
c.Assert(err, check.IsNil)
assertSkopeoSucceeds(c, "", "sync", "--scoped", "--src", "yaml", "--dest", "dir", yamlFile, dir1)
require.NoError(t, err)
assertSkopeoSucceeds(t, "", "sync", "--scoped", "--src", "yaml", "--dest", "dir", yamlFile, dir1)
sysCtx = types.SystemContext{
DockerInsecureSkipTLSVerify: types.NewOptionalBool(true),
}
localImageRef, err := docker.ParseReference(fmt.Sprintf("//%s/%s", v2DockerRegistryURL, imagePath))
c.Assert(err, check.IsNil)
require.NoError(t, err)
localTags, err := docker.GetRepositoryTags(context.Background(), &sysCtx, localImageRef)
c.Assert(err, check.IsNil)
c.Check(len(localTags), check.Not(check.Equals), 0)
c.Assert(len(localTags), check.Equals, len(tags))
assertNumberOfManifestsInSubdirs(c, dir1, len(tags))
require.NoError(t, err)
assert.NotZero(t, len(localTags))
assert.Len(t, localTags, len(tags))
assertNumberOfManifestsInSubdirs(t, dir1, len(tags))
}
func (s *SyncSuite) TestYamlRegex2Dir(c *check.C) {
tmpDir := c.MkDir()
func (s *syncSuite) TestYamlRegex2Dir() {
t := s.T()
tmpDir := t.TempDir()
dir1 := path.Join(tmpDir, "dir1")
yamlConfig := `
@@ -311,17 +329,18 @@ k8s.gcr.io:
`
// the ↑ regex strings always matches only 2 images
var nTags = 2
c.Assert(nTags, check.Not(check.Equals), 0)
assert.NotZero(t, nTags)
yamlFile := path.Join(tmpDir, "registries.yaml")
err := os.WriteFile(yamlFile, []byte(yamlConfig), 0644)
c.Assert(err, check.IsNil)
assertSkopeoSucceeds(c, "", "sync", "--scoped", "--src", "yaml", "--dest", "dir", yamlFile, dir1)
assertNumberOfManifestsInSubdirs(c, dir1, nTags)
require.NoError(t, err)
assertSkopeoSucceeds(t, "", "sync", "--scoped", "--src", "yaml", "--dest", "dir", yamlFile, dir1)
assertNumberOfManifestsInSubdirs(t, dir1, nTags)
}
func (s *SyncSuite) TestYamlDigest2Dir(c *check.C) {
tmpDir := c.MkDir()
func (s *syncSuite) TestYamlDigest2Dir() {
t := s.T()
tmpDir := t.TempDir()
dir1 := path.Join(tmpDir, "dir1")
yamlConfig := `
@@ -332,13 +351,14 @@ k8s.gcr.io:
`
yamlFile := path.Join(tmpDir, "registries.yaml")
err := os.WriteFile(yamlFile, []byte(yamlConfig), 0644)
c.Assert(err, check.IsNil)
assertSkopeoSucceeds(c, "", "sync", "--scoped", "--src", "yaml", "--dest", "dir", yamlFile, dir1)
assertNumberOfManifestsInSubdirs(c, dir1, 1)
require.NoError(t, err)
assertSkopeoSucceeds(t, "", "sync", "--scoped", "--src", "yaml", "--dest", "dir", yamlFile, dir1)
assertNumberOfManifestsInSubdirs(t, dir1, 1)
}
func (s *SyncSuite) TestYaml2Dir(c *check.C) {
tmpDir := c.MkDir()
func (s *syncSuite) TestYaml2Dir() {
t := s.T()
tmpDir := t.TempDir()
dir1 := path.Join(tmpDir, "dir1")
yamlConfig := `
@@ -366,25 +386,26 @@ quay.io:
nTags++
}
}
c.Assert(nTags, check.Not(check.Equals), 0)
assert.NotZero(t, nTags)
yamlFile := path.Join(tmpDir, "registries.yaml")
err := os.WriteFile(yamlFile, []byte(yamlConfig), 0644)
c.Assert(err, check.IsNil)
assertSkopeoSucceeds(c, "", "sync", "--scoped", "--src", "yaml", "--dest", "dir", yamlFile, dir1)
assertNumberOfManifestsInSubdirs(c, dir1, nTags)
require.NoError(t, err)
assertSkopeoSucceeds(t, "", "sync", "--scoped", "--src", "yaml", "--dest", "dir", yamlFile, dir1)
assertNumberOfManifestsInSubdirs(t, dir1, nTags)
}
func (s *SyncSuite) TestYamlTLSVerify(c *check.C) {
func (s *syncSuite) TestYamlTLSVerify() {
t := s.T()
const localRegURL = "docker://" + v2DockerRegistryURL + "/"
tmpDir := c.MkDir()
tmpDir := t.TempDir()
dir1 := path.Join(tmpDir, "dir1")
image := pullableRepoWithLatestTag
tag := "latest"
// FIXME: It would be nice to use one of the local Docker registries instead of needing an Internet connection.
// copy docker => docker
assertSkopeoSucceeds(c, "", "copy", "--dest-tls-verify=false", "docker://"+image+":"+tag, localRegURL+image+":"+tag)
assertSkopeoSucceeds(t, "", "copy", "--dest-tls-verify=false", "docker://"+image+":"+tag, localRegURL+image+":"+tag)
yamlTemplate := `
%s:
@@ -396,7 +417,7 @@ func (s *SyncSuite) TestYamlTLSVerify(c *check.C) {
testCfg := []struct {
tlsVerify string
msg string
checker func(c *check.C, regexp string, args ...string)
checker func(t *testing.T, regexp string, args ...string)
}{
{
tlsVerify: "tls-verify: false",
@@ -420,17 +441,18 @@ func (s *SyncSuite) TestYamlTLSVerify(c *check.C) {
yamlConfig := fmt.Sprintf(yamlTemplate, v2DockerRegistryURL, cfg.tlsVerify, image, tag)
yamlFile := path.Join(tmpDir, "registries.yaml")
err := os.WriteFile(yamlFile, []byte(yamlConfig), 0644)
c.Assert(err, check.IsNil)
require.NoError(t, err)
cfg.checker(c, cfg.msg, "sync", "--scoped", "--src", "yaml", "--dest", "dir", yamlFile, dir1)
cfg.checker(t, cfg.msg, "sync", "--scoped", "--src", "yaml", "--dest", "dir", yamlFile, dir1)
os.Remove(yamlFile)
os.RemoveAll(dir1)
}
}
func (s *SyncSuite) TestSyncManifestOutput(c *check.C) {
tmpDir := c.MkDir()
func (s *syncSuite) TestSyncManifestOutput() {
t := s.T()
tmpDir := t.TempDir()
destDir1 := filepath.Join(tmpDir, "dest1")
destDir2 := filepath.Join(tmpDir, "dest2")
@@ -439,154 +461,162 @@ func (s *SyncSuite) TestSyncManifestOutput(c *check.C) {
//Split image:tag path from image URI for manifest comparison
imageDir := pullableTaggedImage[strings.LastIndex(pullableTaggedImage, "/")+1:]
assertSkopeoSucceeds(c, "", "sync", "--format=oci", "--all", "--src", "docker", "--dest", "dir", pullableTaggedImage, destDir1)
verifyManifestMIMEType(c, filepath.Join(destDir1, imageDir), imgspecv1.MediaTypeImageManifest)
assertSkopeoSucceeds(c, "", "sync", "--format=v2s2", "--all", "--src", "docker", "--dest", "dir", pullableTaggedImage, destDir2)
verifyManifestMIMEType(c, filepath.Join(destDir2, imageDir), manifest.DockerV2Schema2MediaType)
assertSkopeoSucceeds(c, "", "sync", "--format=v2s1", "--all", "--src", "docker", "--dest", "dir", pullableTaggedImage, destDir3)
verifyManifestMIMEType(c, filepath.Join(destDir3, imageDir), manifest.DockerV2Schema1SignedMediaType)
assertSkopeoSucceeds(t, "", "sync", "--format=oci", "--all", "--src", "docker", "--dest", "dir", pullableTaggedImage, destDir1)
verifyManifestMIMEType(t, filepath.Join(destDir1, imageDir), imgspecv1.MediaTypeImageManifest)
assertSkopeoSucceeds(t, "", "sync", "--format=v2s2", "--all", "--src", "docker", "--dest", "dir", pullableTaggedImage, destDir2)
verifyManifestMIMEType(t, filepath.Join(destDir2, imageDir), manifest.DockerV2Schema2MediaType)
assertSkopeoSucceeds(t, "", "sync", "--format=v2s1", "--all", "--src", "docker", "--dest", "dir", pullableTaggedImage, destDir3)
verifyManifestMIMEType(t, filepath.Join(destDir3, imageDir), manifest.DockerV2Schema1SignedMediaType)
}
func (s *SyncSuite) TestDocker2DockerTagged(c *check.C) {
func (s *syncSuite) TestDocker2DockerTagged() {
t := s.T()
const localRegURL = "docker://" + v2DockerRegistryURL + "/"
tmpDir := c.MkDir()
tmpDir := t.TempDir()
// FIXME: It would be nice to use one of the local Docker registries instead of needing an Internet connection.
image := pullableTaggedImage
imageRef, err := docker.ParseReference(fmt.Sprintf("//%s", image))
c.Assert(err, check.IsNil)
require.NoError(t, err)
imagePath := imageRef.DockerReference().String()
dir1 := path.Join(tmpDir, "dir1")
dir2 := path.Join(tmpDir, "dir2")
// sync docker => docker
assertSkopeoSucceeds(c, "", "sync", "--scoped", "--dest-tls-verify=false", "--src", "docker", "--dest", "docker", image, v2DockerRegistryURL)
assertSkopeoSucceeds(t, "", "sync", "--scoped", "--dest-tls-verify=false", "--src", "docker", "--dest", "docker", image, v2DockerRegistryURL)
// copy docker => dir
assertSkopeoSucceeds(c, "", "copy", "docker://"+image, "dir:"+dir1)
assertSkopeoSucceeds(t, "", "copy", "docker://"+image, "dir:"+dir1)
_, err = os.Stat(path.Join(dir1, "manifest.json"))
c.Assert(err, check.IsNil)
require.NoError(t, err)
// copy docker => dir
assertSkopeoSucceeds(c, "", "copy", "--src-tls-verify=false", localRegURL+imagePath, "dir:"+dir2)
assertSkopeoSucceeds(t, "", "copy", "--src-tls-verify=false", localRegURL+imagePath, "dir:"+dir2)
_, err = os.Stat(path.Join(dir2, "manifest.json"))
c.Assert(err, check.IsNil)
require.NoError(t, err)
out := combinedOutputOfCommand(c, "diff", "-urN", dir1, dir2)
c.Assert(out, check.Equals, "")
out := combinedOutputOfCommand(t, "diff", "-urN", dir1, dir2)
assert.Equal(t, "", out)
}
func (s *SyncSuite) TestDir2DockerTagged(c *check.C) {
func (s *syncSuite) TestDir2DockerTagged() {
t := s.T()
const localRegURL = "docker://" + v2DockerRegistryURL + "/"
tmpDir := c.MkDir()
tmpDir := t.TempDir()
// FIXME: It would be nice to use one of the local Docker registries instead of needing an Internet connection.
image := pullableRepoWithLatestTag
dir1 := path.Join(tmpDir, "dir1")
err := os.Mkdir(dir1, 0755)
c.Assert(err, check.IsNil)
require.NoError(t, err)
dir2 := path.Join(tmpDir, "dir2")
err = os.Mkdir(dir2, 0755)
c.Assert(err, check.IsNil)
require.NoError(t, err)
// create leading dirs
err = os.MkdirAll(path.Dir(path.Join(dir1, image)), 0755)
c.Assert(err, check.IsNil)
require.NoError(t, err)
// copy docker => dir
assertSkopeoSucceeds(c, "", "copy", "docker://"+image, "dir:"+path.Join(dir1, image))
assertSkopeoSucceeds(t, "", "copy", "docker://"+image, "dir:"+path.Join(dir1, image))
_, err = os.Stat(path.Join(dir1, image, "manifest.json"))
c.Assert(err, check.IsNil)
require.NoError(t, err)
// sync dir => docker
assertSkopeoSucceeds(c, "", "sync", "--scoped", "--dest-tls-verify=false", "--src", "dir", "--dest", "docker", dir1, v2DockerRegistryURL)
assertSkopeoSucceeds(t, "", "sync", "--scoped", "--dest-tls-verify=false", "--src", "dir", "--dest", "docker", dir1, v2DockerRegistryURL)
// create leading dirs
err = os.MkdirAll(path.Dir(path.Join(dir2, image)), 0755)
c.Assert(err, check.IsNil)
require.NoError(t, err)
// copy docker => dir
assertSkopeoSucceeds(c, "", "copy", "--src-tls-verify=false", localRegURL+image, "dir:"+path.Join(dir2, image))
assertSkopeoSucceeds(t, "", "copy", "--src-tls-verify=false", localRegURL+image, "dir:"+path.Join(dir2, image))
_, err = os.Stat(path.Join(dir2, image, "manifest.json"))
c.Assert(err, check.IsNil)
require.NoError(t, err)
out := combinedOutputOfCommand(c, "diff", "-urN", dir1, dir2)
c.Assert(out, check.Equals, "")
out := combinedOutputOfCommand(t, "diff", "-urN", dir1, dir2)
assert.Equal(t, "", out)
}
func (s *SyncSuite) TestFailsWithDir2Dir(c *check.C) {
tmpDir := c.MkDir()
func (s *syncSuite) TestFailsWithDir2Dir() {
t := s.T()
tmpDir := t.TempDir()
dir1 := path.Join(tmpDir, "dir1")
dir2 := path.Join(tmpDir, "dir2")
// sync dir => dir is not allowed
assertSkopeoFails(c, ".*sync from 'dir' to 'dir' not implemented.*", "sync", "--scoped", "--src", "dir", "--dest", "dir", dir1, dir2)
assertSkopeoFails(t, ".*sync from 'dir' to 'dir' not implemented.*", "sync", "--scoped", "--src", "dir", "--dest", "dir", dir1, dir2)
}
func (s *SyncSuite) TestFailsNoSourceImages(c *check.C) {
tmpDir := c.MkDir()
func (s *syncSuite) TestFailsNoSourceImages() {
t := s.T()
tmpDir := t.TempDir()
assertSkopeoFails(c, ".*No images to sync found in .*",
assertSkopeoFails(t, ".*No images to sync found in .*",
"sync", "--scoped", "--dest-tls-verify=false", "--src", "dir", "--dest", "docker", tmpDir, v2DockerRegistryURL)
assertSkopeoFails(c, ".*Error determining repository tags for repo docker.io/library/hopefully_no_images_will_ever_be_called_like_this: fetching tags list: requested access to the resource is denied.*",
assertSkopeoFails(t, ".*Error determining repository tags for repo docker.io/library/hopefully_no_images_will_ever_be_called_like_this: fetching tags list: requested access to the resource is denied.*",
"sync", "--scoped", "--dest-tls-verify=false", "--src", "docker", "--dest", "docker", "hopefully_no_images_will_ever_be_called_like_this", v2DockerRegistryURL)
}
func (s *SyncSuite) TestFailsWithDockerSourceNoRegistry(c *check.C) {
func (s *syncSuite) TestFailsWithDockerSourceNoRegistry() {
t := s.T()
const regURL = "google.com/namespace/imagename"
tmpDir := c.MkDir()
tmpDir := t.TempDir()
//untagged
assertSkopeoFails(c, ".*StatusCode: 404.*",
assertSkopeoFails(t, ".*StatusCode: 404.*",
"sync", "--scoped", "--src", "docker", "--dest", "dir", regURL, tmpDir)
//tagged
assertSkopeoFails(c, ".*StatusCode: 404.*",
assertSkopeoFails(t, ".*StatusCode: 404.*",
"sync", "--scoped", "--src", "docker", "--dest", "dir", regURL+":thetag", tmpDir)
}
func (s *SyncSuite) TestFailsWithDockerSourceUnauthorized(c *check.C) {
func (s *syncSuite) TestFailsWithDockerSourceUnauthorized() {
t := s.T()
const repo = "privateimagenamethatshouldnotbepublic"
tmpDir := c.MkDir()
tmpDir := t.TempDir()
//untagged
assertSkopeoFails(c, ".*requested access to the resource is denied.*",
assertSkopeoFails(t, ".*requested access to the resource is denied.*",
"sync", "--scoped", "--src", "docker", "--dest", "dir", repo, tmpDir)
//tagged
assertSkopeoFails(c, ".*requested access to the resource is denied.*",
assertSkopeoFails(t, ".*requested access to the resource is denied.*",
"sync", "--scoped", "--src", "docker", "--dest", "dir", repo+":thetag", tmpDir)
}
func (s *SyncSuite) TestFailsWithDockerSourceNotExisting(c *check.C) {
func (s *syncSuite) TestFailsWithDockerSourceNotExisting() {
t := s.T()
repo := path.Join(v2DockerRegistryURL, "imagedoesnotexist")
tmpDir := c.MkDir()
tmpDir := t.TempDir()
//untagged
assertSkopeoFails(c, ".*repository name not known to registry.*",
assertSkopeoFails(t, ".*repository name not known to registry.*",
"sync", "--scoped", "--src-tls-verify=false", "--src", "docker", "--dest", "dir", repo, tmpDir)
//tagged
assertSkopeoFails(c, ".*reading manifest.*",
assertSkopeoFails(t, ".*reading manifest.*",
"sync", "--scoped", "--src-tls-verify=false", "--src", "docker", "--dest", "dir", repo+":thetag", tmpDir)
}
func (s *SyncSuite) TestFailsWithDirSourceNotExisting(c *check.C) {
func (s *syncSuite) TestFailsWithDirSourceNotExisting() {
t := s.T()
// Make sure the dir does not exist!
tmpDir := c.MkDir()
tmpDir := t.TempDir()
tmpDir = filepath.Join(tmpDir, "this-does-not-exist")
err := os.RemoveAll(tmpDir)
c.Assert(err, check.IsNil)
require.NoError(t, err)
_, err = os.Stat(path.Join(tmpDir))
c.Check(os.IsNotExist(err), check.Equals, true)
assert.True(t, os.IsNotExist(err))
assertSkopeoFails(c, ".*no such file or directory.*",
assertSkopeoFails(t, ".*no such file or directory.*",
"sync", "--scoped", "--dest-tls-verify=false", "--src", "dir", "--dest", "docker", tmpDir, v2DockerRegistryURL)
}


@@ -4,14 +4,17 @@ import (
"bytes"
"io"
"net"
"net/netip"
"os"
"os/exec"
"path/filepath"
"strings"
"testing"
"time"
"github.com/containers/image/v5/manifest"
"gopkg.in/check.v1"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
const skopeoBinary = "skopeo"
@@ -21,19 +24,19 @@ const testFQIN = "docker://quay.io/libpod/busybox" // tag left off on purpose, s
const testFQIN64 = "docker://quay.io/libpod/busybox:amd64"
const testFQINMultiLayer = "docker://quay.io/libpod/alpine_nginx:latest" // multi-layer
// consumeAndLogOutputStream takes (f, err) from an exec.*Pipe(), and causes all output to it to be logged to c.
func consumeAndLogOutputStream(c *check.C, id string, f io.ReadCloser, err error) {
c.Assert(err, check.IsNil)
// consumeAndLogOutputStream takes (f, err) from an exec.*Pipe(), and causes all output to it to be logged to t.
func consumeAndLogOutputStream(t *testing.T, id string, f io.ReadCloser, err error) {
require.NoError(t, err)
go func() {
defer func() {
f.Close()
c.Logf("Output %s: Closed", id)
t.Logf("Output %s: Closed", id)
}()
buf := make([]byte, 1024)
for {
c.Logf("Output %s: waiting", id)
t.Logf("Output %s: waiting", id)
n, err := f.Read(buf)
c.Logf("Output %s: got %d,%#v: %s", id, n, err, strings.TrimSuffix(string(buf[:n]), "\n"))
t.Logf("Output %s: got %d,%#v: %s", id, n, err, strings.TrimSuffix(string(buf[:n]), "\n"))
if n <= 0 {
break
}
@@ -41,72 +44,73 @@ func consumeAndLogOutputStream(c *check.C, id string, f io.ReadCloser, err error
}()
}
// consumeAndLogOutputs causes all output to stdout and stderr from an *exec.Cmd to be logged to c
func consumeAndLogOutputs(c *check.C, id string, cmd *exec.Cmd) {
// consumeAndLogOutputs causes all output to stdout and stderr from an *exec.Cmd to be logged to c.
func consumeAndLogOutputs(t *testing.T, id string, cmd *exec.Cmd) {
stdout, err := cmd.StdoutPipe()
consumeAndLogOutputStream(c, id+" stdout", stdout, err)
consumeAndLogOutputStream(t, id+" stdout", stdout, err)
stderr, err := cmd.StderrPipe()
consumeAndLogOutputStream(c, id+" stderr", stderr, err)
consumeAndLogOutputStream(t, id+" stderr", stderr, err)
}
// combinedOutputOfCommand runs a command as if exec.Command().CombinedOutput(), verifies that the exit status is 0, and returns the output,
// or terminates c on failure.
func combinedOutputOfCommand(c *check.C, name string, args ...string) string {
c.Logf("Running %s %s", name, strings.Join(args, " "))
func combinedOutputOfCommand(t *testing.T, name string, args ...string) string {
t.Logf("Running %s %s", name, strings.Join(args, " "))
out, err := exec.Command(name, args...).CombinedOutput()
c.Assert(err, check.IsNil, check.Commentf("%s", out))
require.NoError(t, err, "%s", out)
return string(out)
}
// assertSkopeoSucceeds runs a skopeo command as if exec.Command().CombinedOutput, verifies that the exit status is 0,
// and optionally that the output matches a multi-line regexp if it is nonempty;
// or terminates c on failure
func assertSkopeoSucceeds(c *check.C, regexp string, args ...string) {
c.Logf("Running %s %s", skopeoBinary, strings.Join(args, " "))
func assertSkopeoSucceeds(t *testing.T, regexp string, args ...string) {
t.Logf("Running %s %s", skopeoBinary, strings.Join(args, " "))
out, err := exec.Command(skopeoBinary, args...).CombinedOutput()
c.Assert(err, check.IsNil, check.Commentf("%s", out))
assert.NoError(t, err, "%s", out)
if regexp != "" {
c.Assert(string(out), check.Matches, "(?s)"+regexp) // (?s) : '.' will also match newlines
assert.Regexp(t, "(?s)"+regexp, string(out)) // (?s) : '.' will also match newlines
}
}
// assertSkopeoFails runs a skopeo command as if exec.Command().CombinedOutput, verifies that the exit status is 0,
// and that the output matches a multi-line regexp;
// or terminates c on failure
func assertSkopeoFails(c *check.C, regexp string, args ...string) {
c.Logf("Running %s %s", skopeoBinary, strings.Join(args, " "))
func assertSkopeoFails(t *testing.T, regexp string, args ...string) {
t.Logf("Running %s %s", skopeoBinary, strings.Join(args, " "))
out, err := exec.Command(skopeoBinary, args...).CombinedOutput()
c.Assert(err, check.NotNil, check.Commentf("%s", out))
c.Assert(string(out), check.Matches, "(?s)"+regexp) // (?s) : '.' will also match newlines
assert.Error(t, err, "%s", out)
assert.Regexp(t, "(?s)"+regexp, string(out)) // (?s) : '.' will also match newlines
}
// runCommandWithInput runs a command as if exec.Command(), sending it the input to stdin,
// and verifies that the exit status is 0, or terminates c on failure.
func runCommandWithInput(c *check.C, input string, name string, args ...string) {
func runCommandWithInput(t *testing.T, input string, name string, args ...string) {
cmd := exec.Command(name, args...)
runExecCmdWithInput(c, cmd, input)
runExecCmdWithInput(t, cmd, input)
}
// runExecCmdWithInput runs an exec.Cmd, sending it the input to stdin,
// and verifies that the exit status is 0, or terminates c on failure.
func runExecCmdWithInput(c *check.C, cmd *exec.Cmd, input string) {
c.Logf("Running %s %s", cmd.Path, strings.Join(cmd.Args, " "))
consumeAndLogOutputs(c, cmd.Path+" "+strings.Join(cmd.Args, " "), cmd)
func runExecCmdWithInput(t *testing.T, cmd *exec.Cmd, input string) {
t.Logf("Running %s %s", cmd.Path, strings.Join(cmd.Args, " "))
consumeAndLogOutputs(t, cmd.Path+" "+strings.Join(cmd.Args, " "), cmd)
stdin, err := cmd.StdinPipe()
c.Assert(err, check.IsNil)
require.NoError(t, err)
err = cmd.Start()
c.Assert(err, check.IsNil)
_, err = stdin.Write([]byte(input))
c.Assert(err, check.IsNil)
require.NoError(t, err)
_, err = io.WriteString(stdin, input)
require.NoError(t, err)
err = stdin.Close()
c.Assert(err, check.IsNil)
require.NoError(t, err)
err = cmd.Wait()
c.Assert(err, check.IsNil)
assert.NoError(t, err)
}
// isPortOpen returns true iff the specified port on localhost is open.
func isPortOpen(port int) bool {
conn, err := net.DialTCP("tcp", nil, &net.TCPAddr{IP: net.IPv4(127, 0, 0, 1), Port: port})
func isPortOpen(port uint16) bool {
ap := netip.AddrPortFrom(netip.AddrFrom4([4]byte{127, 0, 0, 1}), port)
conn, err := net.DialTCP("tcp", nil, net.TCPAddrFromAddrPort(ap))
if err != nil {
return false
}
@@ -118,29 +122,29 @@ func isPortOpen(port int) bool {
// The checking can be aborted by sending a value to the terminate channel, which the caller should
// always do using
// defer func() {terminate <- true}()
func newPortChecker(c *check.C, port int) (portOpen <-chan bool, terminate chan<- bool) {
func newPortChecker(t *testing.T, port uint16) (portOpen <-chan bool, terminate chan<- bool) {
portOpenBidi := make(chan bool)
// Buffered, so that sending a terminate request after the goroutine has exited does not block.
terminateBidi := make(chan bool, 1)
go func() {
defer func() {
c.Logf("Port checker for port %d exiting", port)
t.Logf("Port checker for port %d exiting", port)
}()
for {
c.Logf("Checking for port %d...", port)
t.Logf("Checking for port %d...", port)
if isPortOpen(port) {
c.Logf("Port %d open", port)
t.Logf("Port %d open", port)
portOpenBidi <- true
return
}
c.Logf("Sleeping for port %d", port)
t.Logf("Sleeping for port %d", port)
sleepChan := time.After(100 * time.Millisecond)
select {
case <-sleepChan: // Try again
c.Logf("Sleeping for port %d done, will retry", port)
t.Logf("Sleeping for port %d done, will retry", port)
case <-terminateBidi:
c.Logf("Check for port %d terminated", port)
t.Logf("Check for port %d terminated", port)
return
}
}
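A minimal usage sketch for newPortChecker, following the comment above about always terminating the checker through the terminate channel; the helper name, the port value, and the 30-second timeout are assumptions for illustration, and the snippet relies on the testing and time imports already present in this file.

// Hypothetical helper demonstrating the documented newPortChecker pattern.
func waitForLocalPort(t *testing.T, port uint16) {
	portOpen, terminate := newPortChecker(t, port)
	defer func() { terminate <- true }() // always abort the checker, as documented above
	select {
	case <-portOpen:
		t.Logf("port %d is open", port)
	case <-time.After(30 * time.Second):
		t.Fatalf("timed out waiting for port %d to open", port)
	}
}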
@@ -162,54 +166,51 @@ func modifyEnviron(env []string, name, value string) []string {
// fileFromFixtureFixture applies edits to inputPath and returns a path to the temporary file.
// Callers should defer os.Remove(the_returned_path)
func fileFromFixture(c *check.C, inputPath string, edits map[string]string) string {
func fileFromFixture(t *testing.T, inputPath string, edits map[string]string) string {
contents, err := os.ReadFile(inputPath)
c.Assert(err, check.IsNil)
require.NoError(t, err)
for template, value := range edits {
updated := bytes.ReplaceAll(contents, []byte(template), []byte(value))
c.Assert(bytes.Equal(updated, contents), check.Equals, false, check.Commentf("Replacing %s in %#v failed", template, string(contents))) // Verify that the template has matched something and we are not silently ignoring it.
require.NotEqual(t, contents, updated, "Replacing %s in %#v failed", template, string(contents)) // Verify that the template has matched something and we are not silently ignoring it.
contents = updated
}
file, err := os.CreateTemp("", "policy.json")
c.Assert(err, check.IsNil)
require.NoError(t, err)
path := file.Name()
_, err = file.Write(contents)
c.Assert(err, check.IsNil)
require.NoError(t, err)
err = file.Close()
c.Assert(err, check.IsNil)
require.NoError(t, err)
return path
}
// runDecompressDirs runs decompress-dirs.sh using exec.Command().CombinedOutput, verifies that the exit status is 0,
// and optionally that the output matches a multi-line regexp if it is nonempty; or terminates c on failure
func runDecompressDirs(c *check.C, regexp string, args ...string) {
c.Logf("Running %s %s", decompressDirsBinary, strings.Join(args, " "))
func runDecompressDirs(t *testing.T, args ...string) {
t.Logf("Running %s %s", decompressDirsBinary, strings.Join(args, " "))
for i, dir := range args {
m, err := os.ReadFile(filepath.Join(dir, "manifest.json"))
c.Assert(err, check.IsNil)
c.Logf("manifest %d before: %s", i+1, string(m))
require.NoError(t, err)
t.Logf("manifest %d before: %s", i+1, string(m))
}
out, err := exec.Command(decompressDirsBinary, args...).CombinedOutput()
c.Assert(err, check.IsNil, check.Commentf("%s", out))
assert.NoError(t, err, "%s", out)
for i, dir := range args {
if len(out) > 0 {
c.Logf("output: %s", out)
t.Logf("output: %s", out)
}
m, err := os.ReadFile(filepath.Join(dir, "manifest.json"))
c.Assert(err, check.IsNil)
c.Logf("manifest %d after: %s", i+1, string(m))
}
if regexp != "" {
c.Assert(string(out), check.Matches, "(?s)"+regexp) // (?s) : '.' will also match newlines
require.NoError(t, err)
t.Logf("manifest %d after: %s", i+1, string(m))
}
}
// Verify manifest in a dir: image at dir is expectedMIMEType.
func verifyManifestMIMEType(c *check.C, dir string, expectedMIMEType string) {
func verifyManifestMIMEType(t *testing.T, dir string, expectedMIMEType string) {
manifestBlob, err := os.ReadFile(filepath.Join(dir, "manifest.json"))
c.Assert(err, check.IsNil)
require.NoError(t, err)
mimeType := manifest.GuessMIMEType(manifestBlob)
c.Assert(mimeType, check.Equals, expectedMIMEType)
assert.Equal(t, expectedMIMEType, mimeType)
}
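To show how the converted helpers above fit together, here is a hypothetical testify-style test; the fixture path, the @keydir@ edit, and the test name are invented for illustration, and the snippet assumes the imports already present in this file plus the syncSuite type defined earlier.

// Hypothetical example combining fileFromFixture with the assertSkopeo* helpers.
func (s *syncSuite) TestHelpersSketch() {
	t := s.T()
	// fileFromFixture applies the edits and returns a temporary file;
	// remove it afterwards, as its comment instructs.
	policyPath := fileFromFixture(t, "fixtures/policy.json", map[string]string{"@keydir@": t.TempDir()})
	defer os.Remove(policyPath)
	// an empty regexp means only the exit status is checked
	assertSkopeoSucceeds(t, "", "--policy", policyPath, "inspect", "--raw", testFQIN64)
	// a failing invocation must exit non-zero; ".*" accepts any error output
	assertSkopeoFails(t, ".*", "inspect")
}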


@@ -9,8 +9,14 @@
# CAUTION: This is not a replacement for RPMs provided by your distro.
# Only intended to build and test the latest unreleased changes.
%global gomodulesmode GO111MODULE=on
%global with_debug 1
%global gomodulesmode GO111MODULE=on
# RHEL 8's default %%gobuild macro doesn't account for the BUILDTAGS variable, so we
# set it separately here and do not depend on RHEL 8's go-srpm-macros package.
%if !0%{?fedora} && 0%{?rhel} <= 8
%define gobuild(o:) %{gomodulesmode} go build -buildmode pie -compiler gc -tags="rpm_crashtraceback libtrust_openssl ${BUILDTAGS:-}" -ldflags "-linkmode=external -compressdwarf=false ${LDFLAGS:-} -B 0x$(head -c20 /dev/urandom|od -An -tx1|tr -d ' \\n') -extldflags '%__global_ldflags'" -a -v -x %{?**};
%endif
%if 0%{?with_debug}
%global _find_debuginfo_dwz_opts %{nil}
@@ -19,10 +25,6 @@
%global debug_package %{nil}
%endif
%if ! 0%{?gobuild:1}
%define gobuild(o:) go build -buildmode pie -compiler gc -tags="rpm_crashtraceback ${BUILDTAGS:-}" -ldflags "${LDFLAGS:-} -B 0x$(head -c20 /dev/urandom|od -An -tx1|tr -d ' \\n') -extldflags '-Wl,-z,relro -Wl,-z,now -specs=/usr/lib/rpm/redhat/redhat-hardened-ld '" -a -v -x %{?**};
%endif
Name: {{{ git_dir_name }}}
Epoch: 101
Version: {{{ git_dir_version }}}
@@ -48,11 +50,7 @@ BuildRequires: libassuan-devel
BuildRequires: pkgconfig
BuildRequires: make
BuildRequires: ostree-devel
%if 0%{?fedora} <= 35
Requires: containers-common >= 4:1-39
%else
Requires: containers-common >= 4:1-46
%endif
Requires: containers-common >= 4:1-78
%description
Command line utility to inspect images and repositories directly on Docker
@@ -95,11 +93,7 @@ export CGO_CFLAGS+=" -m64 -mtune=generic -fcf-protection=full"
LDFLAGS=""
export BUILDTAGS="$(hack/libdm_tag.sh)"
%if 0%{?rhel}
export BUILDTAGS="$BUILDTAGS exclude_graphdriver_btrfs btrfs_noversion"
%endif
export BUILDTAGS="$(hack/libdm_tag.sh) $(hack/btrfs_installed_tag.sh) $(hack/btrfs_tag.sh)"
%gobuild -o bin/%{name} ./cmd/%{name}
%install


@@ -16,7 +16,8 @@ function setup() {
_cred_dir=$TESTDIR/credentials
# It is important to change XDG_RUNTIME_DIR only after we start the registry, otherwise it affects the path of $XDG_RUNTIME_DIR/netns maintained by Podman,
# making it imposible to clean up after ourselves.
# making it impossible to clean up after ourselves.
export XDG_RUNTIME_DIR_OLD=$XDG_RUNTIME_DIR
export XDG_RUNTIME_DIR=$_cred_dir
mkdir -p $_cred_dir/containers
# Remove old/stale cred file
@@ -111,6 +112,9 @@ function setup() {
}
teardown() {
# Need to restore XDG_RUNTIME_DIR.
XDG_RUNTIME_DIR=$XDG_RUNTIME_DIR_OLD
podman rm -f reg
if [[ -n $_cred_dir ]]; then


@@ -242,7 +242,7 @@ END_TESTS
$fingerprint \
$TESTDIR/busybox.signature
# manifest digest
digest=$(echo "$output" | awk '{print $4;}')
digest=$(echo "$output" | awk '{print $NF;}')
run_skopeo manifest-digest $TESTDIR/busybox/manifest.json
expect_output $digest
}


@@ -0,0 +1,57 @@
/*
mkwinsyscall generates windows system call bodies
It parses all files specified on command line containing function
prototypes (like syscall_windows.go) and prints system call bodies
to standard output.
The prototypes are marked by lines beginning with "//sys" and read
like func declarations if //sys is replaced by func, but:
- The parameter lists must give a name for each argument. This
includes return parameters.
- The parameter lists must give a type for each argument:
the (x, y, z int) shorthand is not allowed.
- If the return parameter is an error number, it must be named err.
- If go func name needs to be different from its winapi dll name,
the winapi name could be specified at the end, after "=" sign, like
//sys LoadLibrary(libname string) (handle uint32, err error) = LoadLibraryA
- Each function that returns err needs to supply a condition, that
return value of winapi will be tested against to detect failure.
This would set err to windows "last-error", otherwise it will be nil.
The value can be provided at end of //sys declaration, like
//sys LoadLibrary(libname string) (handle uint32, err error) [failretval==-1] = LoadLibraryA
and is [failretval==0] by default.
- If the function name ends in a "?", then the function not existing is non-
fatal, and an error will be returned instead of panicking.
Usage:
mkwinsyscall [flags] [path ...]
Flags
-output string
Output file name (standard output if omitted).
-sort
Sort DLL and function declarations (default true).
Intended to help transition from older versions of mkwinsyscall by making diffs
easier to read and understand.
-systemdll
Whether all DLLs should be loaded from the Windows system directory (default true).
-trace
Generate print statement after every syscall.
-utf16
Encode string arguments as UTF-16 for syscalls not ending in 'A' or 'W' (default true).
-winio
Import this package ("github.com/Microsoft/go-winio").
*/
package main
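As a concrete illustration of the //sys prototype format described above, a hypothetical input file could contain the following; the wrapped functions are standard Windows APIs, but the file itself is made up and not part of this change.

// syscall_windows.go, a hypothetical mkwinsyscall input
package sample

//sys	getComputerName(buf *uint16, n *uint32) (err error) = GetComputerNameW
//sys	loadLibrary(libname string) (handle uintptr, err error) [failretval==0] = LoadLibraryW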

File diff suppressed because it is too large.


@@ -1 +1,3 @@
* text=auto eol=lf
* text=auto eol=lf
vendor/** -text
test/vendor/** -text


@@ -6,6 +6,7 @@
# Ignore vscode setting files
.vscode/
.idea/
# Test binary, build with `go test -c`
*.test
@@ -23,16 +24,26 @@ service/pkg/
*.img
*.vhd
*.tar.gz
*.tar
# Make stuff
.rootfs-done
bin/*
rootfs/*
rootfs-conv/*
*.o
/build/
deps/*
out/*
.idea/
.vscode/
# test results
test/results
# go workspace files
go.work
go.work.sum
# keys and related artifacts
*.pem
*.cose


@@ -1,23 +1,51 @@
run:
timeout: 8m
tests: true
build-tags:
- admin
- functional
- integration
skip-dirs:
# paths are relative to module root
- cri-containerd/test-images
linters:
enable:
- stylecheck
# defaults:
# - errcheck
# - gosimple
# - govet
# - ineffassign
# - staticcheck
# - typecheck
# - unused
- gofmt # whether code was gofmt-ed
- nolintlint # ill-formed or insufficient nolint directives
- stylecheck # golint replacement
- thelper # test helpers without t.Helper()
linters-settings:
stylecheck:
# https://staticcheck.io/docs/checks
checks: ["all"]
issues:
# This repo has a LOT of generated schema files, operating system bindings, and other things that ST1003 from stylecheck won't like
# (screaming case Windows api constants for example). There's also some structs that we *could* change the initialisms to be Go
# friendly (Id -> ID) but they're exported and it would be a breaking change. This makes it so that most new code, code that isn't
# supposed to be a pretty faithful mapping to an OS call/constants, or non-generated code still checks if we're following idioms,
# while ignoring the things that are just noise or would be more of a hassle than it'd be worth to change.
exclude-rules:
# path is relative to module root, which is ./test/
- path: cri-containerd
linters:
- stylecheck
text: "^ST1003: should not use underscores in package names$"
source: "^package cri_containerd$"
# This repo has a LOT of generated schema files, operating system bindings, and other
# things that ST1003 from stylecheck won't like (screaming case Windows api constants for example).
# There's also some structs that we *could* change the initialisms to be Go friendly
# (Id -> ID) but they're exported and it would be a breaking change.
# This makes it so that most new code, code that isn't supposed to be a pretty faithful
# mapping to an OS call/constants, or non-generated code still checks if we're following idioms,
# while ignoring the things that are just noise or would be more of a hassle than it'd be worth to change.
- path: layer.go
linters:
- stylecheck
@@ -28,11 +56,21 @@ issues:
- stylecheck
Text: "ST1003:"
- path: internal\\hcs\\schema2\\
- path: cmd\\ncproxy\\nodenetsvc\\
linters:
- stylecheck
Text: "ST1003:"
- path: cmd\\ncproxy_mock\\
linters:
- stylecheck
Text: "ST1003:"
- path: internal\\hcs\\schema2\\
linters:
- stylecheck
- gofmt
- path: internal\\wclayer\\
linters:
- stylecheck
@@ -96,4 +134,4 @@ issues:
- path: internal\\hcserror\\
linters:
- stylecheck
Text: "ST1003:"
Text: "ST1003:"


@@ -1,4 +1,5 @@
BASE:=base.tar.gz
DEV_BUILD:=0
GO:=go
GO_FLAGS:=-ldflags "-s -w" # strip Go binaries
@@ -12,16 +13,31 @@ GO_FLAGS_EXTRA:=
ifeq "$(GOMODVENDOR)" "1"
GO_FLAGS_EXTRA += -mod=vendor
endif
GO_BUILD_TAGS:=
ifneq ($(strip $(GO_BUILD_TAGS)),)
GO_FLAGS_EXTRA += -tags="$(GO_BUILD_TAGS)"
endif
GO_BUILD:=CGO_ENABLED=$(CGO_ENABLED) $(GO) build $(GO_FLAGS) $(GO_FLAGS_EXTRA)
SRCROOT=$(dir $(abspath $(firstword $(MAKEFILE_LIST))))
# additional directories to search for rule prerequisites and targets
VPATH=$(SRCROOT)
DELTA_TARGET=out/delta.tar.gz
ifeq "$(DEV_BUILD)" "1"
DELTA_TARGET=out/delta-dev.tar.gz
endif
# The link aliases for gcstools
GCS_TOOLS=\
generichook
generichook \
install-drivers
.PHONY: all always rootfs test
.DEFAULT_GOAL := all
all: out/initrd.img out/rootfs.tar.gz
clean:
@@ -29,21 +45,13 @@ clean:
rm -rf bin deps rootfs out
test:
cd $(SRCROOT) && go test -v ./internal/guest/...
cd $(SRCROOT) && $(GO) test -v ./internal/guest/...
out/delta.tar.gz: bin/init bin/vsockexec bin/cmd/gcs bin/cmd/gcstools Makefile
@mkdir -p out
rm -rf rootfs
mkdir -p rootfs/bin/
cp bin/init rootfs/
cp bin/vsockexec rootfs/bin/
cp bin/cmd/gcs rootfs/bin/
cp bin/cmd/gcstools rootfs/bin/
for tool in $(GCS_TOOLS); do ln -s gcstools rootfs/bin/$$tool; done
git -C $(SRCROOT) rev-parse HEAD > rootfs/gcs.commit && \
git -C $(SRCROOT) rev-parse --abbrev-ref HEAD > rootfs/gcs.branch
tar -zcf $@ -C rootfs .
rm -rf rootfs
rootfs: out/rootfs.vhd
out/rootfs.vhd: out/rootfs.tar.gz bin/cmd/tar2ext4
gzip -f -d ./out/rootfs.tar.gz
bin/cmd/tar2ext4 -vhd -i ./out/rootfs.tar -o $@
out/rootfs.tar.gz: out/initrd.img
rm -rf rootfs-conv
@@ -52,13 +60,45 @@ out/rootfs.tar.gz: out/initrd.img
tar -zcf $@ -C rootfs-conv .
rm -rf rootfs-conv
out/initrd.img: $(BASE) out/delta.tar.gz $(SRCROOT)/hack/catcpio.sh
$(SRCROOT)/hack/catcpio.sh "$(BASE)" out/delta.tar.gz > out/initrd.img.uncompressed
out/initrd.img: $(BASE) $(DELTA_TARGET) $(SRCROOT)/hack/catcpio.sh
$(SRCROOT)/hack/catcpio.sh "$(BASE)" $(DELTA_TARGET) > out/initrd.img.uncompressed
gzip -c out/initrd.img.uncompressed > $@
rm out/initrd.img.uncompressed
# This target includes utilities which may be useful for testing purposes.
out/delta-dev.tar.gz: out/delta.tar.gz bin/internal/tools/snp-report
rm -rf rootfs-dev
mkdir rootfs-dev
tar -xzf out/delta.tar.gz -C rootfs-dev
cp bin/internal/tools/snp-report rootfs-dev/bin/
tar -zcf $@ -C rootfs-dev .
rm -rf rootfs-dev
out/delta.tar.gz: bin/init bin/vsockexec bin/cmd/gcs bin/cmd/gcstools bin/cmd/hooks/wait-paths Makefile
@mkdir -p out
rm -rf rootfs
mkdir -p rootfs/bin/
mkdir -p rootfs/info/
cp bin/init rootfs/
cp bin/vsockexec rootfs/bin/
cp bin/cmd/gcs rootfs/bin/
cp bin/cmd/gcstools rootfs/bin/
cp bin/cmd/hooks/wait-paths rootfs/bin/
for tool in $(GCS_TOOLS); do ln -s gcstools rootfs/bin/$$tool; done
git -C $(SRCROOT) rev-parse HEAD > rootfs/info/gcs.commit && \
git -C $(SRCROOT) rev-parse --abbrev-ref HEAD > rootfs/info/gcs.branch && \
date --iso-8601=minute --utc > rootfs/info/tar.date
$(if $(and $(realpath $(subst .tar,.testdata.json,$(BASE))), $(shell which jq)), \
jq -r '.IMAGE_NAME' $(subst .tar,.testdata.json,$(BASE)) 2>/dev/null > rootfs/info/image.name && \
jq -r '.DATETIME' $(subst .tar,.testdata.json,$(BASE)) 2>/dev/null > rootfs/info/build.date)
tar -zcf $@ -C rootfs .
rm -rf rootfs
-include deps/cmd/gcs.gomake
-include deps/cmd/gcstools.gomake
-include deps/cmd/hooks/wait-paths.gomake
-include deps/cmd/tar2ext4.gomake
-include deps/internal/tools/snp-report.gomake
# Implicit rule for includes that define Go targets.
%.gomake: $(SRCROOT)/Makefile
@@ -72,8 +112,6 @@ out/initrd.img: $(BASE) out/delta.tar.gz $(SRCROOT)/hack/catcpio.sh
@/bin/echo -e '-include $(@:%.gomake=%.godeps)' >> $@.new
mv $@.new $@
VPATH=$(SRCROOT)
bin/vsockexec: vsockexec/vsockexec.o vsockexec/vsock.o
@mkdir -p bin
$(CC) $(LDFLAGS) -o $@ $^

View File

@@ -1,4 +1,4 @@
version = "unstable"
version = "1"
generator = "gogoctrd"
plugins = ["grpc", "fieldpath"]
@@ -14,11 +14,6 @@ plugins = ["grpc", "fieldpath"]
# target package.
packages = ["github.com/gogo/protobuf"]
# Paths that will be added untouched to the end of the includes. We use
# `/usr/local/include` to pickup the common install location of protobuf.
# This is the default.
after = ["/usr/local/include"]
# This section maps protobuf imports to Go packages. These will become
# `-M` directives in the call to the go protobuf generator.
[packages]
@@ -36,6 +31,10 @@ plugins = ["grpc", "fieldpath"]
prefixes = ["github.com/Microsoft/hcsshim/internal/shimdiag"]
plugins = ["ttrpc"]
[[overrides]]
prefixes = ["github.com/Microsoft/hcsshim/internal/extendedtask"]
plugins = ["ttrpc"]
[[overrides]]
prefixes = ["github.com/Microsoft/hcsshim/internal/computeagent"]
plugins = ["ttrpc"]

View File

@@ -75,24 +75,6 @@ certify they either authored the work themselves or otherwise have permission to
more info, as well as to make sure that you can attest to the rules listed. Our CI uses the [DCO Github app](https://github.com/apps/dco) to ensure
that all commits in a given PR are signed-off.
### Test Directory (Important to note)
This project has tried to trim some dependencies from the root Go modules file that would be cumbersome to get transitively included if this
project is being vendored/used as a library. Some of these dependencies were only being used for tests, so the /test directory in this project also has
its own go.mod file where these are now included to get around this issue. Our tests rely on the code in this project to run, so the test Go modules file
has a relative path replace directive to pull in the latest hcsshim code that the tests actually touch from this project
(which is the repo itself on your disk).
```
replace (
github.com/Microsoft/hcsshim => ../
)
```
Because of this, for most code changes you may need to run `go mod vendor` + `go mod tidy` in the /test directory in this repository, as the
CI in this project will check if the files are out of date and will fail if this is true.
## Code of Conduct
This project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/).
@@ -101,7 +83,7 @@ contact [opencode@microsoft.com](mailto:opencode@microsoft.com) with any additio
## Dependencies
This project requires Golang 1.9 or newer to build.
This project requires Golang 1.17 or newer to build.
For system requirements to run this project, see the Microsoft docs on [Windows Container requirements](https://docs.microsoft.com/en-us/virtualization/windowscontainers/deploy-containers/system-requirements).

41
vendor/github.com/Microsoft/hcsshim/SECURITY.md generated vendored Normal file
View File

@@ -0,0 +1,41 @@
<!-- BEGIN MICROSOFT SECURITY.MD V0.0.7 BLOCK -->
## Security
Microsoft takes the security of our software products and services seriously, which includes all source code repositories managed through our GitHub organizations, which include [Microsoft](https://github.com/Microsoft), [Azure](https://github.com/Azure), [DotNet](https://github.com/dotnet), [AspNet](https://github.com/aspnet), [Xamarin](https://github.com/xamarin), and [our GitHub organizations](https://opensource.microsoft.com/).
If you believe you have found a security vulnerability in any Microsoft-owned repository that meets [Microsoft's definition of a security vulnerability](https://aka.ms/opensource/security/definition), please report it to us as described below.
## Reporting Security Issues
**Please do not report security vulnerabilities through public GitHub issues.**
Instead, please report them to the Microsoft Security Response Center (MSRC) at [https://msrc.microsoft.com/create-report](https://aka.ms/opensource/security/create-report).
If you prefer to submit without logging in, send email to [secure@microsoft.com](mailto:secure@microsoft.com). If possible, encrypt your message with our PGP key; please download it from the [Microsoft Security Response Center PGP Key page](https://aka.ms/opensource/security/pgpkey).
You should receive a response within 24 hours. If for some reason you do not, please follow up via email to ensure we received your original message. Additional information can be found at [microsoft.com/msrc](https://aka.ms/opensource/security/msrc).
Please include the requested information listed below (as much as you can provide) to help us better understand the nature and scope of the possible issue:
* Type of issue (e.g. buffer overflow, SQL injection, cross-site scripting, etc.)
* Full paths of source file(s) related to the manifestation of the issue
* The location of the affected source code (tag/branch/commit or direct URL)
* Any special configuration required to reproduce the issue
* Step-by-step instructions to reproduce the issue
* Proof-of-concept or exploit code (if possible)
* Impact of the issue, including how an attacker might exploit the issue
This information will help us triage your report more quickly.
If you are reporting for a bug bounty, more complete reports can contribute to a higher bounty award. Please visit our [Microsoft Bug Bounty Program](https://aka.ms/opensource/security/bounty) page for more details about our active programs.
## Preferred Languages
We prefer all communications to be in English.
## Policy
Microsoft follows the principle of [Coordinated Vulnerability Disclosure](https://aka.ms/opensource/security/cvd).
<!-- END MICROSOFT SECURITY.MD BLOCK -->

View File

@@ -1,3 +1,5 @@
//go:build windows
package computestorage
import (
@@ -17,8 +19,8 @@ import (
//
// `layerData` is the parent read-only layer data.
func AttachLayerStorageFilter(ctx context.Context, layerPath string, layerData LayerData) (err error) {
title := "hcsshim.AttachLayerStorageFilter"
ctx, span := trace.StartSpan(ctx, title) //nolint:ineffassign,staticcheck
title := "hcsshim::AttachLayerStorageFilter"
ctx, span := oc.StartSpan(ctx, title) //nolint:ineffassign,staticcheck
defer span.End()
defer func() { oc.SetSpanStatus(span, err) }()
span.AddAttributes(

View File

@@ -1,3 +1,5 @@
//go:build windows
package computestorage
import (
@@ -12,8 +14,8 @@ import (
//
// `layerPath` is a path to a directory containing the layer to export.
func DestroyLayer(ctx context.Context, layerPath string) (err error) {
title := "hcsshim.DestroyLayer"
ctx, span := trace.StartSpan(ctx, title) //nolint:ineffassign,staticcheck
title := "hcsshim::DestroyLayer"
ctx, span := oc.StartSpan(ctx, title) //nolint:ineffassign,staticcheck
defer span.End()
defer func() { oc.SetSpanStatus(span, err) }()
span.AddAttributes(trace.StringAttribute("layerPath", layerPath))

View File

@@ -1,3 +1,5 @@
//go:build windows
package computestorage
import (
@@ -12,8 +14,8 @@ import (
//
// `layerPath` is a path to a directory containing the layer to export.
func DetachLayerStorageFilter(ctx context.Context, layerPath string) (err error) {
title := "hcsshim.DetachLayerStorageFilter"
ctx, span := trace.StartSpan(ctx, title) //nolint:ineffassign,staticcheck
title := "hcsshim::DetachLayerStorageFilter"
ctx, span := oc.StartSpan(ctx, title) //nolint:ineffassign,staticcheck
defer span.End()
defer func() { oc.SetSpanStatus(span, err) }()
span.AddAttributes(trace.StringAttribute("layerPath", layerPath))

View File

@@ -1,3 +1,5 @@
//go:build windows
package computestorage
import (
@@ -19,8 +21,8 @@ import (
//
// `options` are the export options applied to the exported layer.
func ExportLayer(ctx context.Context, layerPath, exportFolderPath string, layerData LayerData, options ExportLayerOptions) (err error) {
title := "hcsshim.ExportLayer"
ctx, span := trace.StartSpan(ctx, title) //nolint:ineffassign,staticcheck
title := "hcsshim::ExportLayer"
ctx, span := oc.StartSpan(ctx, title) //nolint:ineffassign,staticcheck
defer span.End()
defer func() { oc.SetSpanStatus(span, err) }()
span.AddAttributes(
@@ -28,17 +30,17 @@ func ExportLayer(ctx context.Context, layerPath, exportFolderPath string, layerD
trace.StringAttribute("exportFolderPath", exportFolderPath),
)
ldbytes, err := json.Marshal(layerData)
ldBytes, err := json.Marshal(layerData)
if err != nil {
return err
}
obytes, err := json.Marshal(options)
oBytes, err := json.Marshal(options)
if err != nil {
return err
}
err = hcsExportLayer(layerPath, exportFolderPath, string(ldbytes), string(obytes))
err = hcsExportLayer(layerPath, exportFolderPath, string(ldBytes), string(oBytes))
if err != nil {
return errors.Wrap(err, "failed to export layer")
}

View File

@@ -1,3 +1,5 @@
//go:build windows
package computestorage
import (
@@ -5,16 +7,20 @@ import (
"github.com/Microsoft/hcsshim/internal/oc"
"github.com/pkg/errors"
"go.opencensus.io/trace"
"golang.org/x/sys/windows"
)
// FormatWritableLayerVhd formats a virtual disk for use as a writable container layer.
//
// If the VHD is not mounted it will be temporarily mounted.
//
// NOTE: This API had a breaking change in the operating system after Windows Server 2019.
// On ws2019 the API expects to get passed a file handle from CreateFile for the vhd that
// the caller wants to format. On > ws2019, its expected that the caller passes a vhd handle
// that can be obtained from the virtdisk APIs.
func FormatWritableLayerVhd(ctx context.Context, vhdHandle windows.Handle) (err error) {
title := "hcsshim.FormatWritableLayerVhd"
ctx, span := trace.StartSpan(ctx, title) //nolint:ineffassign,staticcheck
title := "hcsshim::FormatWritableLayerVhd"
ctx, span := oc.StartSpan(ctx, title) //nolint:ineffassign,staticcheck
defer span.End()
defer func() { oc.SetSpanStatus(span, err) }()

View File

@@ -1,3 +1,5 @@
//go:build windows
package computestorage
import (
@@ -6,10 +8,12 @@ import (
"path/filepath"
"syscall"
"github.com/Microsoft/go-winio/pkg/security"
"github.com/Microsoft/go-winio/vhd"
"github.com/Microsoft/hcsshim/internal/memory"
"github.com/pkg/errors"
"golang.org/x/sys/windows"
"github.com/Microsoft/hcsshim/internal/security"
)
const defaultVHDXBlockSizeInMB = 1
@@ -59,8 +63,8 @@ func SetupContainerBaseLayer(ctx context.Context, layerPath, baseVhdPath, diffVh
createParams := &vhd.CreateVirtualDiskParameters{
Version: 2,
Version2: vhd.CreateVersion2{
MaximumSize: sizeInGB * 1024 * 1024 * 1024,
BlockSizeInBytes: defaultVHDXBlockSizeInMB * 1024 * 1024,
MaximumSize: sizeInGB * memory.GiB,
BlockSizeInBytes: defaultVHDXBlockSizeInMB * memory.MiB,
},
}
handle, err := vhd.CreateVirtualDisk(baseVhdPath, vhd.VirtualDiskAccessNone, vhd.CreateVirtualDiskFlagNone, createParams)
@@ -135,8 +139,8 @@ func SetupUtilityVMBaseLayer(ctx context.Context, uvmPath, baseVhdPath, diffVhdP
createParams := &vhd.CreateVirtualDiskParameters{
Version: 2,
Version2: vhd.CreateVersion2{
MaximumSize: sizeInGB * 1024 * 1024 * 1024,
BlockSizeInBytes: defaultVHDXBlockSizeInMB * 1024 * 1024,
MaximumSize: sizeInGB * memory.GiB,
BlockSizeInBytes: defaultVHDXBlockSizeInMB * memory.MiB,
},
}
handle, err := vhd.CreateVirtualDisk(baseVhdPath, vhd.VirtualDiskAccessNone, vhd.CreateVirtualDiskFlagNone, createParams)
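
The hunks above swap literal byte multipliers (1024 * 1024 * 1024) for named constants from an internal memory package. A minimal sketch, assuming that package simply defines powers of 1024 as the diff implies; the constant values here are the standard ones, not copied from the vendored source:

```
package main

import "fmt"

// Byte-size multipliers assumed to mirror the internal memory package imported
// above (github.com/Microsoft/hcsshim/internal/memory): plain powers of 1024.
const (
	KiB uint64 = 1 << 10
	MiB        = KiB << 10
	GiB        = MiB << 10
)

func main() {
	// Mirrors the VHD sizing above: a size given in GB expressed in bytes.
	var sizeInGB uint64 = 20
	fmt.Println(sizeInGB*GiB, "bytes") // 21474836480 bytes
}
```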

View File

@@ -1,3 +1,5 @@
//go:build windows
package computestorage
import (
@@ -19,8 +21,8 @@ import (
//
// `layerData` is the parent layer data.
func ImportLayer(ctx context.Context, layerPath, sourceFolderPath string, layerData LayerData) (err error) {
title := "hcsshim.ImportLayer"
ctx, span := trace.StartSpan(ctx, title) //nolint:ineffassign,staticcheck
title := "hcsshim::ImportLayer"
ctx, span := oc.StartSpan(ctx, title) //nolint:ineffassign,staticcheck
defer span.End()
defer func() { oc.SetSpanStatus(span, err) }()
span.AddAttributes(

View File

@@ -1,3 +1,5 @@
//go:build windows
package computestorage
import (
@@ -16,8 +18,8 @@ import (
//
// `layerData` is the parent read-only layer data.
func InitializeWritableLayer(ctx context.Context, layerPath string, layerData LayerData) (err error) {
title := "hcsshim.InitializeWritableLayer"
ctx, span := trace.StartSpan(ctx, title) //nolint:ineffassign,staticcheck
title := "hcsshim::InitializeWritableLayer"
ctx, span := oc.StartSpan(ctx, title) //nolint:ineffassign,staticcheck
defer span.End()
defer func() { oc.SetSpanStatus(span, err) }()
span.AddAttributes(

View File

@@ -1,3 +1,5 @@
//go:build windows
package computestorage
import (
@@ -6,14 +8,13 @@ import (
"github.com/Microsoft/hcsshim/internal/interop"
"github.com/Microsoft/hcsshim/internal/oc"
"github.com/pkg/errors"
"go.opencensus.io/trace"
"golang.org/x/sys/windows"
)
// GetLayerVhdMountPath returns the volume path for a virtual disk of a writable container layer.
func GetLayerVhdMountPath(ctx context.Context, vhdHandle windows.Handle) (path string, err error) {
title := "hcsshim.GetLayerVhdMountPath"
ctx, span := trace.StartSpan(ctx, title) //nolint:ineffassign,staticcheck
title := "hcsshim::GetLayerVhdMountPath"
ctx, span := oc.StartSpan(ctx, title) //nolint:ineffassign,staticcheck
defer span.End()
defer func() { oc.SetSpanStatus(span, err) }()

View File

@@ -1,3 +1,5 @@
//go:build windows
package computestorage
import (
@@ -21,8 +23,8 @@ import (
//
// `options` are the options applied while processing the layer.
func SetupBaseOSLayer(ctx context.Context, layerPath string, vhdHandle windows.Handle, options OsLayerOptions) (err error) {
title := "hcsshim.SetupBaseOSLayer"
ctx, span := trace.StartSpan(ctx, title) //nolint:ineffassign,staticcheck
title := "hcsshim::SetupBaseOSLayer"
ctx, span := oc.StartSpan(ctx, title) //nolint:ineffassign,staticcheck
defer span.End()
defer func() { oc.SetSpanStatus(span, err) }()
span.AddAttributes(
@@ -48,12 +50,16 @@ func SetupBaseOSLayer(ctx context.Context, layerPath string, vhdHandle windows.H
// `volumePath` is the path to the volume to be used for setup.
//
// `options` are the options applied while processing the layer.
//
// NOTE: This API is only available on builds of Windows greater than 19645. Inside we
// check if the hosts build has the API available by using 'GetVersion' which requires
// the calling application to be manifested. https://docs.microsoft.com/en-us/windows/win32/sbscs/manifests
func SetupBaseOSVolume(ctx context.Context, layerPath, volumePath string, options OsLayerOptions) (err error) {
if osversion.Build() < 19645 {
return errors.New("SetupBaseOSVolume is not present on builds older than 19645")
}
title := "hcsshim.SetupBaseOSVolume"
ctx, span := trace.StartSpan(ctx, title) //nolint:ineffassign,staticcheck
title := "hcsshim::SetupBaseOSVolume"
ctx, span := oc.StartSpan(ctx, title) //nolint:ineffassign,staticcheck
defer span.End()
defer func() { oc.SetSpanStatus(span, err) }()
span.AddAttributes(

View File

@@ -7,7 +7,7 @@ import (
hcsschema "github.com/Microsoft/hcsshim/internal/hcs/schema2"
)
//go:generate go run ../mksyscall_windows.go -output zsyscall_windows.go storage.go
//go:generate go run github.com/Microsoft/go-winio/tools/mkwinsyscall -output zsyscall_windows.go storage.go
//sys hcsImportLayer(layerPath string, sourceFolderPath string, layerData string) (hr error) = computestorage.HcsImportLayer?
//sys hcsExportLayer(layerPath string, exportFolderPath string, layerData string, options string) (hr error) = computestorage.HcsExportLayer?
@@ -20,10 +20,13 @@ import (
//sys hcsGetLayerVhdMountPath(vhdHandle windows.Handle, mountPath **uint16) (hr error) = computestorage.HcsGetLayerVhdMountPath?
//sys hcsSetupBaseOSVolume(layerPath string, volumePath string, options string) (hr error) = computestorage.HcsSetupBaseOSVolume?
type Version = hcsschema.Version
type Layer = hcsschema.Layer
// LayerData is the data used to describe parent layer information.
type LayerData struct {
SchemaVersion hcsschema.Version `json:"SchemaVersion,omitempty"`
Layers []hcsschema.Layer `json:"Layers,omitempty"`
SchemaVersion Version `json:"SchemaVersion,omitempty"`
Layers []Layer `json:"Layers,omitempty"`
}
// ExportLayerOptions are the set of options that are used with the `computestorage.HcsExportLayer` syscall.

View File

@@ -1,4 +1,6 @@
// Code generated mksyscall_windows.exe DO NOT EDIT
//go:build windows
// Code generated by 'go generate' using "github.com/Microsoft/go-winio/tools/mkwinsyscall"; DO NOT EDIT.
package computestorage
@@ -19,6 +21,7 @@ const (
var (
errERROR_IO_PENDING error = syscall.Errno(errnoERROR_IO_PENDING)
errERROR_EINVAL error = syscall.EINVAL
)
// errnoErr returns common boxed Errno values, to prevent
@@ -26,7 +29,7 @@ var (
func errnoErr(e syscall.Errno) error {
switch e {
case 0:
return nil
return errERROR_EINVAL
case errnoERROR_IO_PENDING:
return errERROR_IO_PENDING
}
@@ -39,42 +42,86 @@ func errnoErr(e syscall.Errno) error {
var (
modcomputestorage = windows.NewLazySystemDLL("computestorage.dll")
procHcsImportLayer = modcomputestorage.NewProc("HcsImportLayer")
procHcsExportLayer = modcomputestorage.NewProc("HcsExportLayer")
procHcsDestoryLayer = modcomputestorage.NewProc("HcsDestoryLayer")
procHcsSetupBaseOSLayer = modcomputestorage.NewProc("HcsSetupBaseOSLayer")
procHcsInitializeWritableLayer = modcomputestorage.NewProc("HcsInitializeWritableLayer")
procHcsAttachLayerStorageFilter = modcomputestorage.NewProc("HcsAttachLayerStorageFilter")
procHcsDestoryLayer = modcomputestorage.NewProc("HcsDestoryLayer")
procHcsDetachLayerStorageFilter = modcomputestorage.NewProc("HcsDetachLayerStorageFilter")
procHcsExportLayer = modcomputestorage.NewProc("HcsExportLayer")
procHcsFormatWritableLayerVhd = modcomputestorage.NewProc("HcsFormatWritableLayerVhd")
procHcsGetLayerVhdMountPath = modcomputestorage.NewProc("HcsGetLayerVhdMountPath")
procHcsImportLayer = modcomputestorage.NewProc("HcsImportLayer")
procHcsInitializeWritableLayer = modcomputestorage.NewProc("HcsInitializeWritableLayer")
procHcsSetupBaseOSLayer = modcomputestorage.NewProc("HcsSetupBaseOSLayer")
procHcsSetupBaseOSVolume = modcomputestorage.NewProc("HcsSetupBaseOSVolume")
)
func hcsImportLayer(layerPath string, sourceFolderPath string, layerData string) (hr error) {
func hcsAttachLayerStorageFilter(layerPath string, layerData string) (hr error) {
var _p0 *uint16
_p0, hr = syscall.UTF16PtrFromString(layerPath)
if hr != nil {
return
}
var _p1 *uint16
_p1, hr = syscall.UTF16PtrFromString(sourceFolderPath)
_p1, hr = syscall.UTF16PtrFromString(layerData)
if hr != nil {
return
}
var _p2 *uint16
_p2, hr = syscall.UTF16PtrFromString(layerData)
if hr != nil {
return
}
return _hcsImportLayer(_p0, _p1, _p2)
return _hcsAttachLayerStorageFilter(_p0, _p1)
}
func _hcsImportLayer(layerPath *uint16, sourceFolderPath *uint16, layerData *uint16) (hr error) {
if hr = procHcsImportLayer.Find(); hr != nil {
func _hcsAttachLayerStorageFilter(layerPath *uint16, layerData *uint16) (hr error) {
hr = procHcsAttachLayerStorageFilter.Find()
if hr != nil {
return
}
r0, _, _ := syscall.Syscall(procHcsImportLayer.Addr(), 3, uintptr(unsafe.Pointer(layerPath)), uintptr(unsafe.Pointer(sourceFolderPath)), uintptr(unsafe.Pointer(layerData)))
r0, _, _ := syscall.Syscall(procHcsAttachLayerStorageFilter.Addr(), 2, uintptr(unsafe.Pointer(layerPath)), uintptr(unsafe.Pointer(layerData)), 0)
if int32(r0) < 0 {
if r0&0x1fff0000 == 0x00070000 {
r0 &= 0xffff
}
hr = syscall.Errno(r0)
}
return
}
func hcsDestroyLayer(layerPath string) (hr error) {
var _p0 *uint16
_p0, hr = syscall.UTF16PtrFromString(layerPath)
if hr != nil {
return
}
return _hcsDestroyLayer(_p0)
}
func _hcsDestroyLayer(layerPath *uint16) (hr error) {
hr = procHcsDestoryLayer.Find()
if hr != nil {
return
}
r0, _, _ := syscall.Syscall(procHcsDestoryLayer.Addr(), 1, uintptr(unsafe.Pointer(layerPath)), 0, 0)
if int32(r0) < 0 {
if r0&0x1fff0000 == 0x00070000 {
r0 &= 0xffff
}
hr = syscall.Errno(r0)
}
return
}
func hcsDetachLayerStorageFilter(layerPath string) (hr error) {
var _p0 *uint16
_p0, hr = syscall.UTF16PtrFromString(layerPath)
if hr != nil {
return
}
return _hcsDetachLayerStorageFilter(_p0)
}
func _hcsDetachLayerStorageFilter(layerPath *uint16) (hr error) {
hr = procHcsDetachLayerStorageFilter.Find()
if hr != nil {
return
}
r0, _, _ := syscall.Syscall(procHcsDetachLayerStorageFilter.Addr(), 1, uintptr(unsafe.Pointer(layerPath)), 0, 0)
if int32(r0) < 0 {
if r0&0x1fff0000 == 0x00070000 {
r0 &= 0xffff
@@ -109,7 +156,8 @@ func hcsExportLayer(layerPath string, exportFolderPath string, layerData string,
}
func _hcsExportLayer(layerPath *uint16, exportFolderPath *uint16, layerData *uint16, options *uint16) (hr error) {
if hr = procHcsExportLayer.Find(); hr != nil {
hr = procHcsExportLayer.Find()
if hr != nil {
return
}
r0, _, _ := syscall.Syscall6(procHcsExportLayer.Addr(), 4, uintptr(unsafe.Pointer(layerPath)), uintptr(unsafe.Pointer(exportFolderPath)), uintptr(unsafe.Pointer(layerData)), uintptr(unsafe.Pointer(options)), 0, 0)
@@ -122,20 +170,12 @@ func _hcsExportLayer(layerPath *uint16, exportFolderPath *uint16, layerData *uin
return
}
func hcsDestroyLayer(layerPath string) (hr error) {
var _p0 *uint16
_p0, hr = syscall.UTF16PtrFromString(layerPath)
func hcsFormatWritableLayerVhd(handle windows.Handle) (hr error) {
hr = procHcsFormatWritableLayerVhd.Find()
if hr != nil {
return
}
return _hcsDestroyLayer(_p0)
}
func _hcsDestroyLayer(layerPath *uint16) (hr error) {
if hr = procHcsDestoryLayer.Find(); hr != nil {
return
}
r0, _, _ := syscall.Syscall(procHcsDestoryLayer.Addr(), 1, uintptr(unsafe.Pointer(layerPath)), 0, 0)
r0, _, _ := syscall.Syscall(procHcsFormatWritableLayerVhd.Addr(), 1, uintptr(handle), 0, 0)
if int32(r0) < 0 {
if r0&0x1fff0000 == 0x00070000 {
r0 &= 0xffff
@@ -145,25 +185,46 @@ func _hcsDestroyLayer(layerPath *uint16) (hr error) {
return
}
func hcsSetupBaseOSLayer(layerPath string, handle windows.Handle, options string) (hr error) {
func hcsGetLayerVhdMountPath(vhdHandle windows.Handle, mountPath **uint16) (hr error) {
hr = procHcsGetLayerVhdMountPath.Find()
if hr != nil {
return
}
r0, _, _ := syscall.Syscall(procHcsGetLayerVhdMountPath.Addr(), 2, uintptr(vhdHandle), uintptr(unsafe.Pointer(mountPath)), 0)
if int32(r0) < 0 {
if r0&0x1fff0000 == 0x00070000 {
r0 &= 0xffff
}
hr = syscall.Errno(r0)
}
return
}
func hcsImportLayer(layerPath string, sourceFolderPath string, layerData string) (hr error) {
var _p0 *uint16
_p0, hr = syscall.UTF16PtrFromString(layerPath)
if hr != nil {
return
}
var _p1 *uint16
_p1, hr = syscall.UTF16PtrFromString(options)
_p1, hr = syscall.UTF16PtrFromString(sourceFolderPath)
if hr != nil {
return
}
return _hcsSetupBaseOSLayer(_p0, handle, _p1)
}
func _hcsSetupBaseOSLayer(layerPath *uint16, handle windows.Handle, options *uint16) (hr error) {
if hr = procHcsSetupBaseOSLayer.Find(); hr != nil {
var _p2 *uint16
_p2, hr = syscall.UTF16PtrFromString(layerData)
if hr != nil {
return
}
r0, _, _ := syscall.Syscall(procHcsSetupBaseOSLayer.Addr(), 3, uintptr(unsafe.Pointer(layerPath)), uintptr(handle), uintptr(unsafe.Pointer(options)))
return _hcsImportLayer(_p0, _p1, _p2)
}
func _hcsImportLayer(layerPath *uint16, sourceFolderPath *uint16, layerData *uint16) (hr error) {
hr = procHcsImportLayer.Find()
if hr != nil {
return
}
r0, _, _ := syscall.Syscall(procHcsImportLayer.Addr(), 3, uintptr(unsafe.Pointer(layerPath)), uintptr(unsafe.Pointer(sourceFolderPath)), uintptr(unsafe.Pointer(layerData)))
if int32(r0) < 0 {
if r0&0x1fff0000 == 0x00070000 {
r0 &= 0xffff
@@ -193,7 +254,8 @@ func hcsInitializeWritableLayer(writableLayerPath string, layerData string, opti
}
func _hcsInitializeWritableLayer(writableLayerPath *uint16, layerData *uint16, options *uint16) (hr error) {
if hr = procHcsInitializeWritableLayer.Find(); hr != nil {
hr = procHcsInitializeWritableLayer.Find()
if hr != nil {
return
}
r0, _, _ := syscall.Syscall(procHcsInitializeWritableLayer.Addr(), 3, uintptr(unsafe.Pointer(writableLayerPath)), uintptr(unsafe.Pointer(layerData)), uintptr(unsafe.Pointer(options)))
@@ -206,76 +268,26 @@ func _hcsInitializeWritableLayer(writableLayerPath *uint16, layerData *uint16, o
return
}
func hcsAttachLayerStorageFilter(layerPath string, layerData string) (hr error) {
func hcsSetupBaseOSLayer(layerPath string, handle windows.Handle, options string) (hr error) {
var _p0 *uint16
_p0, hr = syscall.UTF16PtrFromString(layerPath)
if hr != nil {
return
}
var _p1 *uint16
_p1, hr = syscall.UTF16PtrFromString(layerData)
_p1, hr = syscall.UTF16PtrFromString(options)
if hr != nil {
return
}
return _hcsAttachLayerStorageFilter(_p0, _p1)
return _hcsSetupBaseOSLayer(_p0, handle, _p1)
}
func _hcsAttachLayerStorageFilter(layerPath *uint16, layerData *uint16) (hr error) {
if hr = procHcsAttachLayerStorageFilter.Find(); hr != nil {
return
}
r0, _, _ := syscall.Syscall(procHcsAttachLayerStorageFilter.Addr(), 2, uintptr(unsafe.Pointer(layerPath)), uintptr(unsafe.Pointer(layerData)), 0)
if int32(r0) < 0 {
if r0&0x1fff0000 == 0x00070000 {
r0 &= 0xffff
}
hr = syscall.Errno(r0)
}
return
}
func hcsDetachLayerStorageFilter(layerPath string) (hr error) {
var _p0 *uint16
_p0, hr = syscall.UTF16PtrFromString(layerPath)
func _hcsSetupBaseOSLayer(layerPath *uint16, handle windows.Handle, options *uint16) (hr error) {
hr = procHcsSetupBaseOSLayer.Find()
if hr != nil {
return
}
return _hcsDetachLayerStorageFilter(_p0)
}
func _hcsDetachLayerStorageFilter(layerPath *uint16) (hr error) {
if hr = procHcsDetachLayerStorageFilter.Find(); hr != nil {
return
}
r0, _, _ := syscall.Syscall(procHcsDetachLayerStorageFilter.Addr(), 1, uintptr(unsafe.Pointer(layerPath)), 0, 0)
if int32(r0) < 0 {
if r0&0x1fff0000 == 0x00070000 {
r0 &= 0xffff
}
hr = syscall.Errno(r0)
}
return
}
func hcsFormatWritableLayerVhd(handle windows.Handle) (hr error) {
if hr = procHcsFormatWritableLayerVhd.Find(); hr != nil {
return
}
r0, _, _ := syscall.Syscall(procHcsFormatWritableLayerVhd.Addr(), 1, uintptr(handle), 0, 0)
if int32(r0) < 0 {
if r0&0x1fff0000 == 0x00070000 {
r0 &= 0xffff
}
hr = syscall.Errno(r0)
}
return
}
func hcsGetLayerVhdMountPath(vhdHandle windows.Handle, mountPath **uint16) (hr error) {
if hr = procHcsGetLayerVhdMountPath.Find(); hr != nil {
return
}
r0, _, _ := syscall.Syscall(procHcsGetLayerVhdMountPath.Addr(), 2, uintptr(vhdHandle), uintptr(unsafe.Pointer(mountPath)), 0)
r0, _, _ := syscall.Syscall(procHcsSetupBaseOSLayer.Addr(), 3, uintptr(unsafe.Pointer(layerPath)), uintptr(handle), uintptr(unsafe.Pointer(options)))
if int32(r0) < 0 {
if r0&0x1fff0000 == 0x00070000 {
r0 &= 0xffff
@@ -305,7 +317,8 @@ func hcsSetupBaseOSVolume(layerPath string, volumePath string, options string) (
}
func _hcsSetupBaseOSVolume(layerPath *uint16, volumePath *uint16, options *uint16) (hr error) {
if hr = procHcsSetupBaseOSVolume.Find(); hr != nil {
hr = procHcsSetupBaseOSVolume.Find()
if hr != nil {
return
}
r0, _, _ := syscall.Syscall(procHcsSetupBaseOSVolume.Addr(), 3, uintptr(unsafe.Pointer(layerPath)), uintptr(unsafe.Pointer(volumePath)), uintptr(unsafe.Pointer(options)))
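
Every generated stub above repeats the same result handling: a negative return value is treated as an HRESULT, and HRESULTs carrying the FACILITY_WIN32 facility (0x0007) are unpacked back to the original Win32 error code before being boxed as a syscall.Errno. A standalone sketch of that pattern; the helper name and the sample value are illustrative, not part of the generated file:

```
package main

import (
	"fmt"
	"syscall"
)

// hrToErrno mirrors the check the generated stubs repeat: a non-negative
// return means success; otherwise, if the HRESULT's facility is
// FACILITY_WIN32 (0x0007), keep only the low 16 bits (the Win32 error code).
func hrToErrno(r0 uintptr) error {
	if int32(r0) >= 0 {
		return nil
	}
	if r0&0x1fff0000 == 0x00070000 {
		r0 &= 0xffff
	}
	return syscall.Errno(r0)
}

func main() {
	// 0x80070002 is HRESULT_FROM_WIN32(ERROR_FILE_NOT_FOUND); it unpacks to Errno(2).
	fmt.Println(hrToErrno(0x80070002))
}
```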

View File

@@ -1,3 +1,5 @@
//go:build windows
package hcsshim
import (
@@ -60,7 +62,7 @@ type container struct {
waitCh chan struct{}
}
// createComputeSystemAdditionalJSON is read from the environment at initialisation
// createContainerAdditionalJSON is read from the environment at initialization
// time. It allows an environment variable to define additional JSON which
// is merged in the CreateComputeSystem call to HCS.
var createContainerAdditionalJSON []byte

View File

@@ -1,3 +1,5 @@
//go:build windows
package hcsshim
import (
@@ -50,6 +52,9 @@ var (
// ErrUnexpectedValue is an error encountered when hcs returns an invalid value
ErrUnexpectedValue = hcs.ErrUnexpectedValue
// ErrOperationDenied is an error when hcs attempts an operation that is explicitly denied
ErrOperationDenied = hcs.ErrOperationDenied
// ErrVmcomputeAlreadyStopped is an error encountered when a shutdown or terminate request is made on a stopped container
ErrVmcomputeAlreadyStopped = hcs.ErrVmcomputeAlreadyStopped

View File

@@ -1,12 +0,0 @@
# Requirements so far:
# dockerd running
# - image microsoft/nanoserver (matching host base image) docker load -i c:\baseimages\nanoserver.tar
# - image alpine (linux) docker pull --platform=linux alpine
# TODO: Add this a parameter for debugging. ie "functional-tests -debug=$true"
#$env:HCSSHIM_FUNCTIONAL_TESTS_DEBUG="yes please"
#pushd uvm
go test -v -tags "functional uvmcreate uvmscratch uvmscsi uvmvpmem uvmvsmb uvmp9" ./...
#popd

View File

@@ -1,15 +1,17 @@
//go:build windows
// Shim for the Host Compute Service (HCS) to manage Windows Server
// containers and Hyper-V containers.
package hcsshim
import (
"syscall"
"golang.org/x/sys/windows"
"github.com/Microsoft/hcsshim/internal/hcserror"
)
//go:generate go run mksyscall_windows.go -output zsyscall_windows.go hcsshim.go
//go:generate go run github.com/Microsoft/go-winio/tools/mkwinsyscall -output zsyscall_windows.go hcsshim.go
//sys SetCurrentThreadCompartmentId(compartmentId uint32) (hr error) = iphlpapi.SetCurrentThreadCompartmentId
@@ -17,9 +19,9 @@ const (
// Specific user-visible exit codes
WaitErrExecFailed = 32767
ERROR_GEN_FAILURE = hcserror.ERROR_GEN_FAILURE
ERROR_SHUTDOWN_IN_PROGRESS = syscall.Errno(1115)
WSAEINVAL = syscall.Errno(10022)
ERROR_GEN_FAILURE = windows.ERROR_GEN_FAILURE
ERROR_SHUTDOWN_IN_PROGRESS = windows.ERROR_SHUTDOWN_IN_PROGRESS
WSAEINVAL = windows.WSAEINVAL
// Timeout on wait calls
TimeoutInfinite = 0xFFFFFFFF

View File

@@ -1,3 +1,5 @@
//go:build windows
package hcsshim
import (
@@ -13,7 +15,7 @@ type HNSEndpointStats = hns.EndpointStats
// Namespace represents a Compartment.
type Namespace = hns.Namespace
//SystemType represents the type of the system on which actions are done
// SystemType represents the type of the system on which actions are done
type SystemType string
// SystemType const

View File

@@ -1,3 +1,5 @@
//go:build windows
package hcsshim
import (

View File

@@ -1,14 +1,16 @@
//go:build windows
package hcsshim
import (
"github.com/Microsoft/hcsshim/internal/hns"
)
// Subnet is assoicated with a network and represents a list
// Subnet is associated with a network and represents a list
// of subnets available to the network
type Subnet = hns.Subnet
// MacPool is assoicated with a network and represents a list
// MacPool is associated with a network and represents a list
// of macaddresses available to the network
type MacPool = hns.MacPool

View File

@@ -1,3 +1,5 @@
//go:build windows
package hcsshim
import (

View File

@@ -1,3 +1,5 @@
//go:build windows
package hcsshim
import (

View File

@@ -1,3 +1,5 @@
//go:build windows
package hcsshim
import (

View File

@@ -1,3 +1,5 @@
//go:build windows
package cow
import (

View File

@@ -1,3 +1,5 @@
//go:build windows
package hcs
import (

View File

@@ -0,0 +1 @@
package hcs

View File

@@ -1,3 +1,5 @@
//go:build windows
package hcs
import (
@@ -51,6 +53,9 @@ var (
// ErrUnexpectedValue is an error encountered when hcs returns an invalid value
ErrUnexpectedValue = errors.New("unexpected value returned from hcs")
// ErrOperationDenied is an error when hcs attempts an operation that is explicitly denied
ErrOperationDenied = errors.New("operation denied")
// ErrVmcomputeAlreadyStopped is an error encountered when a shutdown or terminate request is made on a stopped container
ErrVmcomputeAlreadyStopped = syscall.Errno(0xc0370110)
@@ -82,7 +87,7 @@ var (
// ErrProcessAlreadyStopped is returned by hcs if the process we're trying to kill has already been stopped.
ErrProcessAlreadyStopped = syscall.Errno(0x8037011f)
// ErrInvalidHandle is an error that can be encountrered when querying the properties of a compute system when the handle to that
// ErrInvalidHandle is an error that can be encountered when querying the properties of a compute system when the handle to that
// compute system has already been closed.
ErrInvalidHandle = syscall.Errno(0x6)
)
@@ -152,33 +157,38 @@ func (e *HcsError) Error() string {
return s
}
func (e *HcsError) Is(target error) bool {
return errors.Is(e.Err, target)
}
// Unwrap isn't really needed, but is a helpful convenience function
func (e *HcsError) Unwrap() error {
return e.Err
}
// Deprecated: net.Error.Temporary is deprecated.
func (e *HcsError) Temporary() bool {
err, ok := e.Err.(net.Error)
return ok && err.Temporary() //nolint:staticcheck
err := e.netError()
return (err != nil) && err.Temporary()
}
func (e *HcsError) Timeout() bool {
err, ok := e.Err.(net.Error)
return ok && err.Timeout()
err := e.netError()
return (err != nil) && err.Timeout()
}
// ProcessError is an error encountered in HCS during an operation on a Process object
type ProcessError struct {
SystemID string
Pid int
Op string
Err error
Events []ErrorEvent
func (e *HcsError) netError() (err net.Error) {
if errors.As(e.Unwrap(), &err) {
return err
}
return nil
}
var _ net.Error = &ProcessError{}
// SystemError is an error encountered in HCS during an operation on a Container object
type SystemError struct {
ID string
Op string
Err error
Events []ErrorEvent
HcsError
ID string
}
var _ net.Error = &SystemError{}
@@ -191,29 +201,32 @@ func (e *SystemError) Error() string {
return s
}
func (e *SystemError) Temporary() bool {
err, ok := e.Err.(net.Error)
return ok && err.Temporary() //nolint:staticcheck
}
func (e *SystemError) Timeout() bool {
err, ok := e.Err.(net.Error)
return ok && err.Timeout()
}
func makeSystemError(system *System, op string, err error, events []ErrorEvent) error {
// Don't double wrap errors
if _, ok := err.(*SystemError); ok {
var e *SystemError
if errors.As(err, &e) {
return err
}
return &SystemError{
ID: system.ID(),
Op: op,
Err: err,
Events: events,
ID: system.ID(),
HcsError: HcsError{
Op: op,
Err: err,
Events: events,
},
}
}
// ProcessError is an error encountered in HCS during an operation on a Process object
type ProcessError struct {
HcsError
SystemID string
Pid int
}
var _ net.Error = &ProcessError{}
func (e *ProcessError) Error() string {
s := fmt.Sprintf("%s %s:%d: %s", e.Op, e.SystemID, e.Pid, e.Err.Error())
for _, ev := range e.Events {
@@ -222,27 +235,20 @@ func (e *ProcessError) Error() string {
return s
}
func (e *ProcessError) Temporary() bool {
err, ok := e.Err.(net.Error)
return ok && err.Temporary() //nolint:staticcheck
}
func (e *ProcessError) Timeout() bool {
err, ok := e.Err.(net.Error)
return ok && err.Timeout()
}
func makeProcessError(process *Process, op string, err error, events []ErrorEvent) error {
// Don't double wrap errors
if _, ok := err.(*ProcessError); ok {
var e *ProcessError
if errors.As(err, &e) {
return err
}
return &ProcessError{
Pid: process.Pid(),
SystemID: process.SystemID(),
Op: op,
Err: err,
Events: events,
HcsError: HcsError{
Op: op,
Err: err,
Events: events,
},
}
}
@@ -251,41 +257,41 @@ func makeProcessError(process *Process, op string, err error, events []ErrorEven
// already exited, or does not exist. Both IsAlreadyStopped and IsNotExist
// will currently return true when the error is ErrElementNotFound.
func IsNotExist(err error) bool {
err = getInnerError(err)
return err == ErrComputeSystemDoesNotExist ||
err == ErrElementNotFound
return IsAny(err, ErrComputeSystemDoesNotExist, ErrElementNotFound)
}
// IsErrorInvalidHandle checks whether the error is the result of an operation carried
// out on a handle that is invalid/closed. This error popped up while trying to query
// stats on a container in the process of being stopped.
func IsErrorInvalidHandle(err error) bool {
err = getInnerError(err)
return err == ErrInvalidHandle
return errors.Is(err, ErrInvalidHandle)
}
// IsAlreadyClosed checks if an error is caused by the Container or Process having been
// already closed by a call to the Close() method.
func IsAlreadyClosed(err error) bool {
err = getInnerError(err)
return err == ErrAlreadyClosed
return errors.Is(err, ErrAlreadyClosed)
}
// IsPending returns a boolean indicating whether the error is that
// the requested operation is being completed in the background.
func IsPending(err error) bool {
err = getInnerError(err)
return err == ErrVmcomputeOperationPending
return errors.Is(err, ErrVmcomputeOperationPending)
}
// IsTimeout returns a boolean indicating whether the error is caused by
// a timeout waiting for the operation to complete.
func IsTimeout(err error) bool {
if err, ok := err.(net.Error); ok && err.Timeout() {
// HcsError and co. implement Timeout regardless of whether the errors they wrap do,
// so `errors.As(err, net.Error)` will always be true.
// Using `errors.As(err.Unwrap(), net.Error)` won't work for general errors.
// So first check if there is an `ErrTimeout` in the chain, then convert to a net error.
if errors.Is(err, ErrTimeout) {
return true
}
err = getInnerError(err)
return err == ErrTimeout
var nerr net.Error
return errors.As(err, &nerr) && nerr.Timeout()
}
// IsAlreadyStopped returns a boolean indicating whether the error is caused by
@@ -294,10 +300,7 @@ func IsTimeout(err error) bool {
// already exited, or does not exist. Both IsAlreadyStopped and IsNotExist
// will currently return true when the error is ErrElementNotFound.
func IsAlreadyStopped(err error) bool {
err = getInnerError(err)
return err == ErrVmcomputeAlreadyStopped ||
err == ErrProcessAlreadyStopped ||
err == ErrElementNotFound
return IsAny(err, ErrVmcomputeAlreadyStopped, ErrProcessAlreadyStopped, ErrElementNotFound)
}
// IsNotSupported returns a boolean indicating whether the error is caused by
@@ -306,38 +309,28 @@ func IsAlreadyStopped(err error) bool {
// ErrVmcomputeInvalidJSON, ErrInvalidData, ErrNotSupported or ErrVmcomputeUnknownMessage
// is thrown from the Platform
func IsNotSupported(err error) bool {
err = getInnerError(err)
// If Platform doesn't recognize or support the request sent, below errors are seen
return err == ErrVmcomputeInvalidJSON ||
err == ErrInvalidData ||
err == ErrNotSupported ||
err == ErrVmcomputeUnknownMessage
return IsAny(err, ErrVmcomputeInvalidJSON, ErrInvalidData, ErrNotSupported, ErrVmcomputeUnknownMessage)
}
// IsOperationInvalidState returns true when err is caused by
// `ErrVmcomputeOperationInvalidState`.
func IsOperationInvalidState(err error) bool {
err = getInnerError(err)
return err == ErrVmcomputeOperationInvalidState
return errors.Is(err, ErrVmcomputeOperationInvalidState)
}
// IsAccessIsDenied returns true when err is caused by
// `ErrVmcomputeOperationAccessIsDenied`.
func IsAccessIsDenied(err error) bool {
err = getInnerError(err)
return err == ErrVmcomputeOperationAccessIsDenied
return errors.Is(err, ErrVmcomputeOperationAccessIsDenied)
}
func getInnerError(err error) error {
switch pe := err.(type) {
case nil:
return nil
case *HcsError:
err = pe.Err
case *SystemError:
err = pe.Err
case *ProcessError:
err = pe.Err
// IsAny is a vectorized version of [errors.Is], it returns true if err is one of targets.
func IsAny(err error, targets ...error) bool {
for _, e := range targets {
if errors.Is(err, e) {
return true
}
}
return err
return false
}
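
The refactor above replaces chains of `err == ErrX` comparisons with `errors.Is`, and adds `IsAny` as a vectorized form used by helpers such as `IsNotExist` and `IsAlreadyStopped`. A minimal, self-contained sketch of that behavior with wrapped errors; the sentinel names are illustrative:

```
package main

import (
	"errors"
	"fmt"
)

var (
	errNotFound = errors.New("element not found")
	errStopped  = errors.New("already stopped")
)

// isAny reports whether err matches any of the targets, following wrap chains
// the way errors.Is does; this mirrors the IsAny helper added in the diff.
func isAny(err error, targets ...error) bool {
	for _, t := range targets {
		if errors.Is(err, t) {
			return true
		}
	}
	return false
}

func main() {
	wrapped := fmt.Errorf("query failed: %w", errNotFound)
	fmt.Println(isAny(wrapped, errStopped, errNotFound)) // true: errors.Is unwraps the chain
	fmt.Println(isAny(wrapped, errStopped))              // false
}
```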

View File

@@ -1,3 +1,5 @@
//go:build windows
package hcs
import (
@@ -10,6 +12,7 @@ import (
"syscall"
"time"
"github.com/Microsoft/hcsshim/internal/cow"
"github.com/Microsoft/hcsshim/internal/log"
"github.com/Microsoft/hcsshim/internal/oc"
"github.com/Microsoft/hcsshim/internal/vmcompute"
@@ -36,6 +39,8 @@ type Process struct {
waitError error
}
var _ cow.Process = &Process{}
func newProcess(process vmcompute.HcsProcess, processID int, computeSystem *System) *Process {
return &Process{
handle: process,
@@ -89,10 +94,7 @@ func (process *Process) processSignalResult(ctx context.Context, err error) (boo
case nil:
return true, nil
case ErrVmcomputeOperationInvalidState, ErrComputeSystemDoesNotExist, ErrElementNotFound:
select {
case <-process.waitBlock:
// The process exit notification has already arrived.
default:
if !process.stopped() {
// The process should be gone, but we have not received the notification.
// After a second, force unblock the process wait to work around a possible
// deadlock in the HCS.
@@ -114,9 +116,9 @@ func (process *Process) processSignalResult(ctx context.Context, err error) (boo
// Signal signals the process with `options`.
//
// For LCOW `guestrequest.SignalProcessOptionsLCOW`.
// For LCOW `guestresource.SignalProcessOptionsLCOW`.
//
// For WCOW `guestrequest.SignalProcessOptionsWCOW`.
// For WCOW `guestresource.SignalProcessOptionsWCOW`.
func (process *Process) Signal(ctx context.Context, options interface{}) (bool, error) {
process.handleLock.RLock()
defer process.handleLock.RUnlock()
@@ -152,6 +154,10 @@ func (process *Process) Kill(ctx context.Context) (bool, error) {
return false, makeProcessError(process, operation, ErrAlreadyClosed, nil)
}
if process.stopped() {
return false, makeProcessError(process, operation, ErrProcessAlreadyStopped, nil)
}
if process.killSignalDelivered {
// A kill signal has already been sent to this process. Sending a second
// one offers no real benefit, as processes cannot stop themselves from
@@ -161,7 +167,39 @@ func (process *Process) Kill(ctx context.Context) (bool, error) {
return true, nil
}
resultJSON, err := vmcompute.HcsTerminateProcess(ctx, process.handle)
// HCS serializes the signals sent to a target pid per compute system handle.
// To avoid SIGKILL being serialized behind other signals, we open a new compute
// system handle to deliver the kill signal.
// If the calls to opening a new compute system handle fail, we forcefully
// terminate the container itself so that no container is left behind
hcsSystem, err := OpenComputeSystem(ctx, process.system.id)
if err != nil {
// log error and force termination of container
log.G(ctx).WithField("err", err).Error("OpenComputeSystem() call failed")
err = process.system.Terminate(ctx)
// if the Terminate() call itself ever failed, log and return error
if err != nil {
log.G(ctx).WithField("err", err).Error("Terminate() call failed")
return false, err
}
process.system.Close()
return true, nil
}
defer hcsSystem.Close()
newProcessHandle, err := hcsSystem.OpenProcess(ctx, process.Pid())
if err != nil {
// Return true only if the target process has either already
// exited, or does not exist.
if IsAlreadyStopped(err) {
return true, nil
} else {
return false, err
}
}
defer newProcessHandle.Close()
resultJSON, err := vmcompute.HcsTerminateProcess(ctx, newProcessHandle.handle)
if err != nil {
// We still need to check these two cases, as processes may still be killed by an
// external actor (human operator, OOM, random script etc).
@@ -185,9 +223,9 @@ func (process *Process) Kill(ctx context.Context) (bool, error) {
}
}
events := processHcsResult(ctx, resultJSON)
delivered, err := process.processSignalResult(ctx, err)
delivered, err := newProcessHandle.processSignalResult(ctx, err)
if err != nil {
err = makeProcessError(process, operation, err, events)
err = makeProcessError(newProcessHandle, operation, err, events)
}
process.killSignalDelivered = delivered
@@ -201,7 +239,7 @@ func (process *Process) Kill(ctx context.Context) (bool, error) {
// call multiple times.
func (process *Process) waitBackground() {
operation := "hcs::Process::waitBackground"
ctx, span := trace.StartSpan(context.Background(), operation)
ctx, span := oc.StartSpan(context.Background(), operation)
defer span.End()
span.AddAttributes(
trace.StringAttribute("cid", process.SystemID()),
@@ -227,12 +265,12 @@ func (process *Process) waitBackground() {
propertiesJSON, resultJSON, err = vmcompute.HcsGetProcessProperties(ctx, process.handle)
events := processHcsResult(ctx, resultJSON)
if err != nil {
err = makeProcessError(process, operation, err, events) //nolint:ineffassign
err = makeProcessError(process, operation, err, events)
} else {
properties := &processStatus{}
err = json.Unmarshal([]byte(propertiesJSON), properties)
if err != nil {
err = makeProcessError(process, operation, err, nil) //nolint:ineffassign
err = makeProcessError(process, operation, err, nil)
} else {
if properties.LastWaitResult != 0 {
log.G(ctx).WithField("wait-result", properties.LastWaitResult).Warning("non-zero last wait result")
@@ -254,12 +292,22 @@ func (process *Process) waitBackground() {
}
// Wait waits for the process to exit. If the process has already exited returns
// the pervious error (if any).
// the previous error (if any).
func (process *Process) Wait() error {
<-process.waitBlock
return process.waitError
}
// Exited returns if the process has stopped
func (process *Process) stopped() bool {
select {
case <-process.waitBlock:
return true
default:
return false
}
}
// ResizeConsole resizes the console of the process.
func (process *Process) ResizeConsole(ctx context.Context, width, height uint16) error {
process.handleLock.RLock()
@@ -296,15 +344,13 @@ func (process *Process) ResizeConsole(ctx context.Context, width, height uint16)
// ExitCode returns the exit code of the process. The process must have
// already terminated.
func (process *Process) ExitCode() (int, error) {
select {
case <-process.waitBlock:
if process.waitError != nil {
return -1, process.waitError
}
return process.exitCode, nil
default:
if !process.stopped() {
return -1, makeProcessError(process, "hcs::Process::ExitCode", ErrInvalidProcessState, nil)
}
if process.waitError != nil {
return -1, process.waitError
}
return process.exitCode, nil
}
// StdioLegacy returns the stdin, stdout, and stderr pipes, respectively. Closing
@@ -312,7 +358,7 @@ func (process *Process) ExitCode() (int, error) {
// are the responsibility of the caller to close.
func (process *Process) StdioLegacy() (_ io.WriteCloser, _ io.ReadCloser, _ io.ReadCloser, err error) {
operation := "hcs::Process::StdioLegacy"
ctx, span := trace.StartSpan(context.Background(), operation)
ctx, span := oc.StartSpan(context.Background(), operation)
defer span.End()
defer func() { oc.SetSpanStatus(span, err) }()
span.AddAttributes(
@@ -350,7 +396,7 @@ func (process *Process) StdioLegacy() (_ io.WriteCloser, _ io.ReadCloser, _ io.R
}
// Stdio returns the stdin, stdout, and stderr pipes, respectively.
// To close them, close the process handle.
// To close them, close the process handle, or use the `CloseStd*` functions.
func (process *Process) Stdio() (stdin io.Writer, stdout, stderr io.Reader) {
process.stdioLock.Lock()
defer process.stdioLock.Unlock()
@@ -359,46 +405,57 @@ func (process *Process) Stdio() (stdin io.Writer, stdout, stderr io.Reader) {
// CloseStdin closes the write side of the stdin pipe so that the process is
// notified on the read side that there is no more data in stdin.
func (process *Process) CloseStdin(ctx context.Context) error {
func (process *Process) CloseStdin(ctx context.Context) (err error) {
operation := "hcs::Process::CloseStdin"
ctx, span := trace.StartSpan(ctx, operation)
defer span.End()
defer func() { oc.SetSpanStatus(span, err) }()
span.AddAttributes(
trace.StringAttribute("cid", process.SystemID()),
trace.Int64Attribute("pid", int64(process.processID)))
process.handleLock.RLock()
defer process.handleLock.RUnlock()
operation := "hcs::Process::CloseStdin"
if process.handle == 0 {
return makeProcessError(process, operation, ErrAlreadyClosed, nil)
}
modifyRequest := processModifyRequest{
Operation: modifyCloseHandle,
CloseHandle: &closeHandle{
Handle: stdIn,
},
}
modifyRequestb, err := json.Marshal(modifyRequest)
if err != nil {
return err
}
resultJSON, err := vmcompute.HcsModifyProcess(ctx, process.handle, string(modifyRequestb))
events := processHcsResult(ctx, resultJSON)
if err != nil {
return makeProcessError(process, operation, err, events)
}
process.stdioLock.Lock()
if process.stdin != nil {
process.stdin.Close()
process.stdin = nil
defer process.stdioLock.Unlock()
if process.stdin == nil {
return nil
}
process.stdioLock.Unlock()
//HcsModifyProcess request to close stdin will fail if the process has already exited
if !process.stopped() {
modifyRequest := processModifyRequest{
Operation: modifyCloseHandle,
CloseHandle: &closeHandle{
Handle: stdIn,
},
}
modifyRequestb, err := json.Marshal(modifyRequest)
if err != nil {
return err
}
resultJSON, err := vmcompute.HcsModifyProcess(ctx, process.handle, string(modifyRequestb))
events := processHcsResult(ctx, resultJSON)
if err != nil {
return makeProcessError(process, operation, err, events)
}
}
process.stdin.Close()
process.stdin = nil
return nil
}
func (process *Process) CloseStdout(ctx context.Context) (err error) {
ctx, span := trace.StartSpan(ctx, "hcs::Process::CloseStdout") //nolint:ineffassign,staticcheck
ctx, span := oc.StartSpan(ctx, "hcs::Process::CloseStdout") //nolint:ineffassign,staticcheck
defer span.End()
defer func() { oc.SetSpanStatus(span, err) }()
span.AddAttributes(
@@ -422,7 +479,7 @@ func (process *Process) CloseStdout(ctx context.Context) (err error) {
}
func (process *Process) CloseStderr(ctx context.Context) (err error) {
ctx, span := trace.StartSpan(ctx, "hcs::Process::CloseStderr") //nolint:ineffassign,staticcheck
ctx, span := oc.StartSpan(ctx, "hcs::Process::CloseStderr") //nolint:ineffassign,staticcheck
defer span.End()
defer func() { oc.SetSpanStatus(span, err) }()
span.AddAttributes(
@@ -441,7 +498,6 @@ func (process *Process) CloseStderr(ctx context.Context) (err error) {
if process.stderr != nil {
process.stderr.Close()
process.stderr = nil
}
return nil
}
@@ -450,7 +506,7 @@ func (process *Process) CloseStderr(ctx context.Context) (err error) {
// or wait on it.
func (process *Process) Close() (err error) {
operation := "hcs::Process::Close"
ctx, span := trace.StartSpan(context.Background(), operation)
ctx, span := oc.StartSpan(context.Background(), operation)
defer span.End()
defer func() { oc.SetSpanStatus(span, err) }()
span.AddAttributes(
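
The new `stopped()` accessor introduced above is the usual non-blocking check on a channel that is closed exactly once when the process exits; `Kill`, `ExitCode`, and `CloseStdin` all consult it before talking to HCS. A minimal sketch of the pattern in isolation, with illustrative names:

```
package main

import "fmt"

type proc struct {
	waitBlock chan struct{} // closed once, when the process has exited
}

// stopped reports whether waitBlock has been closed, without blocking:
// a receive on a closed channel succeeds immediately; otherwise the
// default case runs and the process is still considered running.
func (p *proc) stopped() bool {
	select {
	case <-p.waitBlock:
		return true
	default:
		return false
	}
}

func main() {
	p := &proc{waitBlock: make(chan struct{})}
	fmt.Println(p.stopped()) // false: still running
	close(p.waitBlock)       // simulate process exit
	fmt.Println(p.stopped()) // true
}
```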

View File

@@ -1,3 +1,5 @@
//go:build windows
package schema1
import (
@@ -101,7 +103,7 @@ type ContainerConfig struct {
HvRuntime *HvRuntime `json:",omitempty"` // Hyper-V container settings. Used by Hyper-V containers only. Format ImagePath=%root%\BaseLayerID\UtilityVM
Servicing bool `json:",omitempty"` // True if this container is for servicing
AllowUnqualifiedDNSQuery bool `json:",omitempty"` // True to allow unqualified DNS name resolution
DNSSearchList string `json:",omitempty"` // Comma seperated list of DNS suffixes to use for name resolution
DNSSearchList string `json:",omitempty"` // Comma separated list of DNS suffixes to use for name resolution
ContainerType string `json:",omitempty"` // "Linux" for Linux containers on Windows. Omitted otherwise.
TerminateOnLastHandleClosed bool `json:",omitempty"` // Should HCS terminate the container once all handles have been closed
MappedVirtualDisks []MappedVirtualDisk `json:",omitempty"` // Array of virtual disks to mount at start

View File

@@ -9,6 +9,14 @@
package hcsschema
type CPUGroupPropertyCode uint32
const (
CPUCapacityProperty = 0x00010000
CPUSchedulingPriorityProperty = 0x00020000
IdleLPReserveProperty = 0x00030000
)
type CpuGroupProperty struct {
PropertyCode uint32 `json:"PropertyCode,omitempty"`
PropertyValue uint32 `json:"PropertyValue,omitempty"`

View File

@@ -0,0 +1,22 @@
/*
* HCS API
*
* No description provided (generated by Swagger Codegen https://github.com/swagger-api/swagger-codegen)
*
* API version: 2.1
* Generated by: Swagger Codegen (https://github.com/swagger-api/swagger-codegen.git)
*/
package hcsschema
type DebugOptions struct {
// BugcheckSavedStateFileName is the path for the file in which the guest VM state will be saved when
// the guest crashes.
BugcheckSavedStateFileName string `json:"BugcheckSavedStateFileName,omitempty"`
// BugcheckNoCrashdumpSavedStateFileName is the path of the file in which the guest VM state will be
// saved when the guest crashes but the guest isn't able to generate the crash dump. This usually
// happens in early boot failures.
BugcheckNoCrashdumpSavedStateFileName string `json:"BugcheckNoCrashdumpSavedStateFileName,omitempty"`
TripleFaultSavedStateFileName string `json:"TripleFaultSavedStateFileName,omitempty"`
FirmwareDumpFileName string `json:"FirmwareDumpFileName,omitempty"`
}

Some files were not shown because too many files have changed in this diff.