Compare commits

...

182 Commits

Author SHA1 Message Date
Miloslav Trmač
51e2e68b45 Merge pull request #2642 from cevich/release-1.14_add_release_test
[release-1.14] Add conditional release-checking system test
2025-07-04 17:12:30 +02:00
Chris Evich
cbd9601215 [release-1.14] Add conditional release-checking system test
Unfortunately on a number of occasions, Skopeo has been released
officially with a `-dev` suffix in the version number.  Assist in
catching this mistake at release time by the addition of a simple
conditional test.  Note that it must be positively enabled by a
magic env. var. before executing the system tests.

Original PR: https://github.com/containers/skopeo/pull/2631

Signed-off-by: Chris Evich <cevich@redhat.com>
2025-07-02 12:31:16 -04:00
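
The check described above is a system test (skopeo's system tests are BATS scripts) and, per the message, has to be switched on explicitly via an environment variable. A rough, hypothetical Go illustration of the same idea; the variable name and the command invocation below are assumptions, not the actual test:

```go
// Hypothetical sketch: fail the release check if the built binary still
// reports a "-dev" version. The real check is a BATS system test gated by
// a magic env. var.; SKOPEO_CHECK_RELEASE is an invented name.
package release_test

import (
	"os"
	"os/exec"
	"strings"
	"testing"
)

func TestReleaseHasNoDevSuffix(t *testing.T) {
	if os.Getenv("SKOPEO_CHECK_RELEASE") == "" {
		t.Skip("release check not explicitly enabled")
	}
	out, err := exec.Command("skopeo", "--version").Output()
	if err != nil {
		t.Fatalf("running skopeo --version: %v", err)
	}
	if strings.Contains(string(out), "-dev") {
		t.Fatalf("release binary still reports a -dev version: %q", out)
	}
}
```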
Miloslav Trmač
f85f25a7d1 Merge pull request #2610 from cevich/release-1.14-multiarch_registry
[release-1.14] Support CI testing on non-x86_64
2025-05-28 20:38:38 +02:00
Chris Evich
8a473becbc Support CI testing on non-x86_64
Previously, internal CI gating tests sometimes failed because the required
registry container image only supports x86_64.  Update to the `2.8.2`
image tag, which supports all primary architectures.

Signed-off-by: Chris Evich <cevich@redhat.com>
2025-05-28 13:58:51 -04:00
Miloslav Trmač
072072bf6e Merge pull request #2381 from TomSweeneyRedHat/dev/tsweeney/jfrog-1.14
[release-1.14] Fixes Listing tags in JFrog Artifactory may fail
2024-07-11 19:12:00 +02:00
tomsweeneyredhat
f9fd4db019 [release-1.14] Bump to v1.14.5
As the title says.  To be used to deliver the JFrog
Artifactory fix.

Signed-off-by: tomsweeneyredhat <tsweeney@redhat.com>
2024-07-10 21:15:20 -04:00
tomsweeneyredhat
3cdc5991f4 [release-1.14] Fixes Listing tags in JFrog Artifactory may fail
Addresses the problem first described in https://github.com/containers/skopeo/issues/2346
in the release-1.14 branch

Also addresses: https://issues.redhat.com/browse/RHEL-40801
https://issues.redhat.com/browse/RHEL-40805

Signed-off-by: tomsweeneyredhat <tsweeney@redhat.com>
2024-07-10 21:12:38 -04:00
Colin Walters
0b1746b371 Merge pull request #2357 from mtrmac/k8s.gcr.io-14
[release-1.14] Refer to registry.k8s.io instead of k8s.gcr.io
2024-06-19 19:41:03 -04:00
Miloslav Trmač
df8ab21405 Refer to registry.k8s.io instead of k8s.gcr.io
... per https://kubernetes.io/blog/2023/02/06/k8s-gcr-io-freeze-announcement/ .

We are seeing intermittent failures (sufficient to reliably cause a test suite failure)
pulling from k8s.gcr.io; let's see if using the newer one improves things.

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2024-06-19 17:42:01 +02:00
Miloslav Trmač
78d9c9a5a9 Merge pull request #2337 from TomSweeneyRedHat/dev/tsweeney/release-1.14-CVE-2024-3727
[release-1.14] CVE-2024-3727
2024-05-29 21:17:01 +02:00
tomsweeneyredhat
ea143562bc [release-1.14] Bump to Skopeo v1.14.4
As the title says, bumping to v1.14.4 to get
a release ready with the CVE-2024-3727 fix.

Signed-off-by: tomsweeneyredhat <tsweeney@redhat.com>
2024-05-29 14:28:10 -04:00
tomsweeneyredhat
8829b42700 [release-1.14] CVE-2024-3727 fix
This addresses CVE-2024-3727
https://issues.redhat.com/browse/OCPBUGS-33267

Signed-off-by: tomsweeneyredhat <tsweeney@redhat.com>
2024-05-29 14:28:10 -04:00
Miloslav Trmač
d3b91fd9ec Merge pull request #2341 from lsm5/release-1.14-packit-update
Packit: update packit targets
2024-05-29 19:53:13 +02:00
Lokesh Mandvekar
3781db8816 Packit: update packit targets
We only care about buildability on epel 9 (RHEL). It won't be shipped to
centos stream or to fedora.

Signed-off-by: Lokesh Mandvekar <lsm5@redhat.com>
2024-05-29 10:34:17 -04:00
Miloslav Trmač
5f2b9afebe Merge pull request #2301 from TomSweeneyRedHat/dev/tsweeney/cve-jose-2-1.14
[release-1.14] Bump gopkg.in/go-jose to v2.6.3
2024-04-16 14:51:24 +02:00
tomsweeneyredhat
c6e147fadc [release-1.14] Bump gopkg.in/go-jose to v2.6.3
Finishes the work for CVE-2024-28180

Bumps gopkg.in/go-jose/go-jose.v2 v2.6.3

Signed-off-by: tomsweeneyredhat <tsweeney@redhat.com>
2024-04-15 17:49:03 -04:00
Miloslav Trmač
004035777a Merge pull request #2290 from TomSweeneyRedHat/dev/tsweeney/cve-jose-1.14
[release-1.14] Bump ocicrypt and go-jose CVE-2024-28180
2024-04-11 22:03:35 +02:00
tomsweeneyredhat
528de2ba55 [release-1.14] Bump ocicrypt and go-jose CVE-2024-28180
Bump github.com/go-jose/go-jose to v3.0.0 and
github.com/containers/ocicrypt to v1.1.10

Addresses: CVE-2024-28180
https://issues.redhat.com/browse/RHEL-28736
https://issues.redhat.com/browse/RHEL-28728
https://issues.redhat.com/browse/OCPBUGS-30723

Signed-off-by: tomsweeneyredhat <tsweeney@redhat.com>
2024-04-11 11:20:22 -04:00
Miloslav Trmač
f14e9809e7 Merge pull request #2284 from mtrmac/integration-update-1.14
[release-1.14] Backport #2280
2024-04-10 20:00:51 +02:00
Miloslav Trmač
e52127ca7e Freeze the fedora-minimal image reference at Fedora 38
... because the tests are assuming a v2s2 image, but
as of Fedora 39, the image uses the OCI format.

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2024-04-08 19:49:15 +02:00
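
The v2s2-vs-OCI distinction that the freeze works around is just a manifest media type. A small sketch of how the two can be told apart with c/image; GuessMIMEType and the media-type constants are real c/image and image-spec identifiers, while the helper itself is only illustrative:

```go
// Illustrative helper: report whether a raw manifest is Docker schema 2
// ("v2s2", what the tests assume) or OCI (what fedora-minimal uses as of
// Fedora 39). The helper itself is hypothetical.
package manifestcheck

import (
	"github.com/containers/image/v5/manifest"
	imgspecv1 "github.com/opencontainers/image-spec/specs-go/v1"
)

func DescribeManifest(raw []byte) string {
	switch manifest.GuessMIMEType(raw) {
	case manifest.DockerV2Schema2MediaType:
		return "Docker schema 2 (v2s2), what the tests assume"
	case imgspecv1.MediaTypeImageManifest:
		return "OCI image manifest, what fedora-minimal uses as of Fedora 39"
	default:
		return "some other manifest type"
	}
}
```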
Miloslav Trmač
4a2bc3a955 Merge pull request #2274 from TomSweeneyRedHat/dev/tsweeney/common_bump0.57.4
[release-1.14] Bump c/common to v0.57.4, Skopeo to v1.14.3
2024-03-28 14:48:53 +01:00
tomsweeneyredhat
5630fe8895 [release-1.14] Bump to v1.14.3
Getting ready for RHEL 8.10/9.4.  We will pull from the
release branch, so we need to drop the `-dev`.

[NO NEW TESTS NEEDED]

Signed-off-by: tomsweeneyredhat <tsweeney@redhat.com>
2024-03-27 17:53:17 -04:00
tomsweeneyredhat
5c2d26f568 [release-1.14] Bump c/common to v0.57.4
I neglected to circle back and bump c/common in the
release-1.14 branch before RHEL 8.10/9.4.  This takes
care of that.

[NO NEW TESTS NEEDED]

Signed-off-by: tomsweeneyredhat <tsweeney@redhat.com>
2024-03-27 17:46:37 -04:00
Tom Sweeney
d0a0f1a9d4 Merge pull request #2260 from TomSweeneyRedHat/dev/tsweeney/protobuf_1.33
[release-1.14] Bump google.golang.org/protobuf to v1.33.0
2024-03-15 19:50:47 -04:00
tomsweeneyredhat
433f7b7ee2 [release-1.14] Bump google.golang.org/protobuf to v1.33.0
As the title says.  Addresses CVE-2024-24786

https://issues.redhat.com/browse/RHEL-28226
https://issues.redhat.com/browse/RHEL-28235

[NO NEW TESTS NEEDED]

Signed-off-by: tomsweeneyredhat <tsweeney@redhat.com>
2024-03-15 19:08:37 -04:00
Tom Sweeney
1c2ab99505 Merge pull request #2210 from TomSweeneyRedHat/dev/tsweeney/ddaemon_fix
[release-1.14] Bump to c/image v5.29.2, c/common to v0.57.3, and Skopeo to v1.14.2
2024-02-01 10:17:29 -05:00
tomsweeneyredhat
c3e474e4ba [release-1.14] Bump Skopeo to v1.14.3-dev
As the title says, set the version back to dev.

[NO NEW TESTS NEEDED]

Signed-off-by: tomsweeneyredhat <tsweeney@redhat.com>
2024-01-31 16:27:01 -05:00
tomsweeneyredhat
b673eb60b7 [release-1.14] Bump Skopeo to v1.14.2
As the title says.  Bumping Skopeo to v1.14.2
to ready for RHEL 8.10/9.4.

[NO NEW TESTS NEEDED]

Signed-off-by: tomsweeneyredhat <tsweeney@redhat.com>
2024-01-31 13:51:46 -05:00
tomsweeneyredhat
18cc81ab7f [release-1.14] Bump c/image to v5.29.2, c/common to v0.57.3
As the title says.  Bumping c/image to v5.29.2 and
c/common to v0.57.3 in preparation of RHEL 8.10/9.4.

This addresses the Docker Daemon version issue.

[NO NEW TESTS NEEDED]

Signed-off-by: tomsweeneyredhat <tsweeney@redhat.com>
2024-01-31 13:39:03 -05:00
tomsweeneyredhat
45b7bf5e4a Bump to v1.14.1
As the title says.  Bumping now in preparation for RHEL 8.10/9.4.

Once merged, I will create release-1.14 branch based on this commit.

[NO NEW TESTS NEEDED]

Signed-off-by: tomsweeneyredhat <tsweeney@redhat.com>
2024-01-18 13:44:26 -05:00
Miloslav Trmač
b8b65769ca Merge pull request #2199 from containers/renovate/github.com-containers-common-0.x
fix(deps): update module github.com/containers/common to v0.57.2
2024-01-18 16:52:50 +01:00
renovate[bot]
b7ec87a1f7 fix(deps): update module github.com/containers/common to v0.57.2
Signed-off-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2024-01-18 14:54:03 +00:00
Miloslav Trmač
a62bb4b5f1 Merge pull request #2197 from containers/renovate/github.com-containers-image-v5-5.x
fix(deps): update module github.com/containers/image/v5 to v5.29.1
2024-01-18 00:12:51 +01:00
renovate[bot]
92edbcb7b9 fix(deps): update module github.com/containers/image/v5 to v5.29.1
Signed-off-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2024-01-17 22:50:25 +00:00
Miloslav Trmač
488de114b8 Merge pull request #2195 from containers/renovate/major-ci-vm-image
chore(deps): update dependency containers/automation_images to v20240102
2024-01-16 19:48:09 +01:00
renovate[bot]
5684cd1290 chore(deps): update dependency containers/automation_images to v20240102
Signed-off-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2024-01-16 17:16:58 +00:00
Miloslav Trmač
3f3be98199 Merge pull request #2190 from lsm5/fix-subid-buildtag
Fix libsubid detection
2024-01-05 17:12:19 +01:00
Lokesh Mandvekar
c705331271 Fix libsubid detection
Currently, `$(hack/libsubid_tag.sh)` produces no buildtag output. This
patch fixes it.

1. Library arguments must be positioned after sources when invoking GCC.
2. Use new function name: `subid_get_uid_ranges`.

Refs:
rhbz: https://bugzilla.redhat.com/show_bug.cgi?id=2254902
podman file: https://github.com/containers/podman/blob/main/hack/libsubid_tag.sh

Signed-off-by: Lokesh Mandvekar <lsm5@redhat.com>
2024-01-05 18:39:21 +05:30
Miloslav Trmač
9646311612 Merge pull request #2191 from containers/renovate/golang.org-x-term-0.x
fix(deps): update module golang.org/x/term to v0.16.0
2024-01-04 18:44:46 +01:00
renovate[bot]
e51dbbd89f fix(deps): update module golang.org/x/term to v0.16.0
Signed-off-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2024-01-04 16:31:25 +00:00
Miloslav Trmač
5122990bf6 Merge pull request #2186 from containers/renovate/golang.org-x-exp-digest
fix(deps): update golang.org/x/exp digest to 02704c9
2024-01-02 19:02:18 +01:00
renovate[bot]
852cca637c fix(deps): update golang.org/x/exp digest to 02704c9
Signed-off-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2024-01-01 02:15:59 +00:00
Miloslav Trmač
1105541c80 Merge pull request #2179 from containers/renovate/major-ci-vm-image
chore(deps): update dependency containers/automation_images to v20231208
2023-12-12 22:51:17 +01:00
renovate[bot]
d1cb2e2c97 chore(deps): update dependency containers/automation_images to v20231208
Signed-off-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2023-12-12 20:47:26 +00:00
Miloslav Trmač
ca87381629 Merge pull request #2177 from containers/renovate/actions-stale-9.x
[skip-ci] Update actions/stale action to v9
2023-12-12 21:46:33 +01:00
renovate[bot]
fe83b6fedc [skip-ci] Update actions/stale action to v9
Signed-off-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2023-12-12 20:26:53 +00:00
Miloslav Trmač
442389eb72 Merge pull request #2176 from containers/renovate/github.com-containers-common-0.x
fix(deps): update module github.com/containers/common to v0.57.1
2023-12-12 20:07:00 +01:00
renovate[bot]
f346045d7b fix(deps): update module github.com/containers/common to v0.57.1
Signed-off-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2023-12-07 17:01:23 +00:00
Miloslav Trmač
a85eaac984 Merge pull request #2169 from containers/renovate/golang.org-x-exp-digest
fix(deps): update golang.org/x/exp digest to 6522937
2023-12-04 16:33:46 +01:00
renovate[bot]
48d11dac3f fix(deps): update golang.org/x/exp digest to 6522937
Signed-off-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2023-12-04 14:05:30 +00:00
Miloslav Trmač
889e3f1ccc Merge pull request #2171 from rahilarious/add-gentoo-install
DOCS: add Gentoo in install.md
2023-12-04 15:04:26 +01:00
Rahil Bhimjiani
87eef310fa DOCS: add Gentoo in install.md
Signed-off-by: Rahil Bhimjiani <me@rahil.website>
2023-12-02 01:08:45 +05:30
Miloslav Trmač
8514ab31ea Merge pull request #2165 from STARRY-S/main
DOCS: Update to add Arch Linux in install.md
2023-11-29 22:21:31 +01:00
Hanxing Wang
f50dc20442 DOCS: Update to add Arch Linux in install.md
Signed-off-by: Hanxing Wang <hxstarrys@gmail.com>
2023-11-29 15:29:31 +08:00
Miloslav Trmač
7941402c12 Merge pull request #2163 from containers/renovate/golang.org-x-term-0.x
fix(deps): update module golang.org/x/term to v0.15.0
2023-11-27 22:18:55 +01:00
renovate[bot]
9f52e728f7 fix(deps): update module golang.org/x/term to v0.15.0
Signed-off-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2023-11-27 19:58:24 +00:00
Miloslav Trmač
89e7a5e4bb Merge pull request #2160 from TomSweeneyRedHat/dev/tsweeney/bump_v1.14
Bump to v1.14.0
2023-11-22 23:35:05 +01:00
TomSweeneyRedHat
efd76e7444 Bump to v1.14.1-dev
As the title says

[NO NEW TESTS NEEDED]

Signed-off-by: TomSweeneyRedHat <tsweeney@redhat.com>
2023-11-22 15:55:26 -05:00
TomSweeneyRedHat
6abf96bb82 Bump to v1.14.0
As the title says.

[NO NEW TESTS NEEDED]

Signed-off-by: TomSweeneyRedHat <tsweeney@redhat.com>
2023-11-22 15:54:08 -05:00
Miloslav Trmač
3978c8dde6 Merge pull request #2158 from containers/renovate/github.com-containers-common-0.x
fix(deps): update module github.com/containers/common to v0.57.0
2023-11-17 16:45:47 +01:00
renovate[bot]
14496ba483 fix(deps): update module github.com/containers/common to v0.57.0
Signed-off-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2023-11-17 04:54:18 +00:00
Miloslav Trmač
ffd687e356 Merge pull request #2157 from containers/renovate/major-ci-vm-image
chore(deps): update dependency containers/automation_images to v20231116
2023-11-17 05:53:28 +01:00
renovate[bot]
fa85e47bc3 chore(deps): update dependency containers/automation_images to v20231116
Signed-off-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2023-11-16 22:01:46 +00:00
Miloslav Trmač
3f5cc0b0b3 Merge pull request #2156 from containers/renovate/github.com-containers-image-v5-5.x
fix(deps): update module github.com/containers/image/v5 to v5.29.0
2023-11-16 21:35:18 +01:00
renovate[bot]
e4b67e78fd fix(deps): update module github.com/containers/image/v5 to v5.29.0
Signed-off-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2023-11-16 20:07:57 +00:00
Miloslav Trmač
143d62bde2 Merge pull request #2151 from mtrmac/docker-compat-login
Add --compat-auth-file to login and logout
2023-11-16 21:07:13 +01:00
Miloslav Trmač
edefdb6611 Add documentation and smoke tests for the new --compat-auth-file options
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2023-11-16 18:23:59 +01:00
Miloslav Trmač
518181e595 Update c/image and c/common to latest
... to include https://github.com/containers/image/pull/2173
and https://github.com/containers/common/pull/1731 .

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2023-11-16 18:21:43 +01:00
Miloslav Trmač
313342bdf8 Merge pull request #2155 from containers/renovate/github.com-containers-storage-1.x
fix(deps): update module github.com/containers/storage to v1.51.0
2023-11-16 18:14:56 +01:00
renovate[bot]
56b96a4d37 fix(deps): update module github.com/containers/storage to v1.51.0
Signed-off-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2023-11-16 16:52:53 +00:00
Miloslav Trmač
925eada5b2 Merge pull request #2152 from containers/renovate/golang.org-x-term-0.x
fix(deps): update module golang.org/x/term to v0.14.0
2023-11-08 16:52:47 +01:00
renovate[bot]
a8e7d94ebe fix(deps): update module golang.org/x/term to v0.14.0
Signed-off-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2023-11-08 08:11:46 +00:00
Miloslav Trmač
ef1fcd4806 Merge pull request #2148 from containers/renovate/github.com-spf13-cobra-1.x
fix(deps): update module github.com/spf13/cobra to v1.8.0
2023-11-06 19:49:08 +01:00
renovate[bot]
50cffa386b fix(deps): update module github.com/spf13/cobra to v1.8.0
Signed-off-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2023-11-04 23:05:35 +00:00
Miloslav Trmač
a1d4a1f5eb Merge pull request #2147 from containers/renovate/golangci-golangci-lint-1.x
[CI:DOCS] Update dependency golangci/golangci-lint to v1.55.2
2023-11-03 16:26:34 +01:00
renovate[bot]
0c2cca9640 [CI:DOCS] Update dependency golangci/golangci-lint to v1.55.2
Signed-off-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2023-11-03 13:49:14 +00:00
Miloslav Trmač
9c1ded8a34 Merge pull request #2139 from containers/renovate/golangci-golangci-lint-1.x
[CI:DOCS] Update dependency golangci/golangci-lint to v1.55.1
2023-11-02 19:58:29 +01:00
renovate[bot]
6b2a26f161 [CI:DOCS] Update dependency golangci/golangci-lint to v1.55.1
Signed-off-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2023-11-01 15:59:37 +00:00
Miloslav Trmač
93fb2c79da Merge pull request #2143 from containers/renovate/github.com-containers-common-digest
fix(deps): update github.com/containers/common digest to 3e5caa0
2023-11-01 16:59:20 +01:00
renovate[bot]
6ef8acff81 fix(deps): update github.com/containers/common digest to 3e5caa0
Signed-off-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2023-11-01 15:29:29 +00:00
Miloslav Trmač
6fbc4c8322 Merge pull request #2141 from containers/renovate/go-google.golang.org/grpc-vulnerability
chore(deps): update module google.golang.org/grpc to v1.57.1 [security]
2023-11-01 16:28:39 +01:00
renovate[bot]
5d4e89ccbe chore(deps): update module google.golang.org/grpc to v1.57.1 [security]
Signed-off-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2023-11-01 09:40:14 +00:00
Miloslav Trmač
b44b5c9032 Merge pull request #2142 from containers/renovate/github.com-containers-ocicrypt-1.x
fix(deps): update module github.com/containers/ocicrypt to v1.1.9
2023-10-31 19:13:40 +01:00
renovate[bot]
5307dd6604 fix(deps): update module github.com/containers/ocicrypt to v1.1.9
Signed-off-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2023-10-31 17:06:37 +00:00
Miloslav Trmač
aec071dd9f Merge pull request #2140 from containers/renovate/go-github.com/docker/docker-vulnerability
chore(deps): update module github.com/docker/docker to v24.0.7+incompatible [security]
2023-10-30 19:40:31 +01:00
Miloslav Trmač
03c9425235 Update github.com/klauspost/compress to v1.17.2
... to be consistent with the just-updated docker/docker's vendor.mod.

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2023-10-30 19:19:54 +01:00
renovate[bot]
91611a3ac9 chore(deps): update module github.com/docker/docker to v24.0.7+incompatible [security]
Signed-off-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2023-10-30 17:39:47 +00:00
Miloslav Trmač
ae60fd8765 Merge pull request #2138 from mtrmac/entrypoint
Fix ENTRYPOINT documentation, drop others.
2023-10-30 18:38:54 +01:00
Miloslav Trmač
a9c7c5051e Fix ENTRYPOINT documentation, drop others.
ENTRYPOINT was incorrectly documented to be set to /
(which doesn't even make sense).

Stop mentioning PATH and WORKDIR in the top-level README; typical
users of the container shouldn't need to care, and it's already
somewhat implied by "built using the latest Fedora".

Fixes #2134.

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2023-10-25 14:45:36 +02:00
Valentin Rothberg
ab61775849 Merge pull request #2137 from mtrmac/fedora-name
Remove unused environment variables in Cirrus
2023-10-24 17:08:03 +02:00
Miloslav Trmač
70551db8ca Remove unused environment variables in Cirrus
Some other containers/* repos use these values in test names;
we don't, so remove them so that we don't have to worry
about keeping them up to date.

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2023-10-24 14:58:11 +02:00
Miloslav Trmač
ca00d96b6b Merge pull request #2132 from containers/renovate/golangci-golangci-lint-1.x
[CI:DOCS] Update dependency golangci/golangci-lint to v1.55.0
2023-10-20 15:46:18 +02:00
renovate[bot]
a2eb508b10 [CI:DOCS] Update dependency golangci/golangci-lint to v1.55.0
Signed-off-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2023-10-20 12:59:53 +00:00
Miloslav Trmač
0749bca7c3 Merge pull request #2131 from containers/renovate/major-ci-vm-image
chore(deps): update dependency containers/automation_images to v20231004
2023-10-18 10:18:19 +02:00
renovate[bot]
1fa360a684 chore(deps): update dependency containers/automation_images to v20231004
Signed-off-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2023-10-17 18:48:38 +00:00
Miloslav Trmač
c375a1e37e Merge pull request #2129 from containers/renovate/go-golang.org/x/net-vulnerability
chore(deps): update module golang.org/x/net to v0.17.0 [security]
2023-10-16 22:41:45 +02:00
renovate[bot]
fa3e62f21b chore(deps): update module golang.org/x/net to v0.17.0 [security]
Signed-off-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2023-10-16 14:21:02 +00:00
Giuseppe Scrivano
1b85889f5f Merge pull request #2128 from cgwalters/dest-compress-zstd
copy: Note support for `zstd:chunked`
2023-10-16 16:19:58 +02:00
Colin Walters
dc4fa67253 copy: Note support for zstd:chunked
Since this comes from the underlying c/image library.

Signed-off-by: Colin Walters <walters@verbum.org>
2023-10-14 14:18:40 -04:00
Miloslav Trmač
d9c2568191 Merge pull request #2122 from containers/renovate/golang.org-x-term-0.x
fix(deps): update module golang.org/x/term to v0.13.0
2023-10-09 22:06:50 +02:00
renovate[bot]
538dd6f3b4 fix(deps): update module golang.org/x/term to v0.13.0
Signed-off-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2023-10-05 19:58:47 +00:00
Miloslav Trmač
a086cc90cd Merge pull request #2120 from containers/renovate/github.com-docker-distribution-2.x
fix(deps): update module github.com/docker/distribution to v2.8.3+incompatible
2023-10-03 00:05:26 +02:00
renovate[bot]
611db7c3dd fix(deps): update module github.com/docker/distribution to v2.8.3+incompatible
Signed-off-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2023-10-02 19:50:28 +00:00
Miloslav Trmač
175c4efc31 Merge pull request #2119 from containers/renovate/github.com-containers-common-digest
fix(deps): update github.com/containers/common digest to 745eaa4
2023-10-02 18:35:56 +02:00
renovate[bot]
43e1a96e67 fix(deps): update github.com/containers/common digest to 745eaa4
Signed-off-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2023-10-02 13:14:32 +00:00
Miloslav Trmač
2d1ae1fdb3 Merge pull request #2117 from lsm5/packit-fail-tag-2
Packit: switch to @containers/packit-build team for copr failure notification comments
2023-09-22 16:30:43 +02:00
Lokesh Mandvekar
5fad766ceb Packit: switch to @containers/packit-build team for copr failure notification comments
Signed-off-by: Lokesh Mandvekar <lsm5@fedoraproject.org>
2023-09-22 09:32:34 -04:00
Lokesh Mandvekar
d8b3a17ff2 Packit: tag @lsm5 on copr build failures
This change will auto-tag @lsm5 in a github comment on every copr build
failure.

Signed-off-by: Lokesh Mandvekar <lsm5@fedoraproject.org>
2023-09-21 14:17:57 -04:00
Miloslav Trmač
4c805b7a63 Merge pull request #2115 from rhatdan/VENDOR
vendor of containers/common
2023-09-20 19:45:48 +02:00
Daniel J Walsh
5703482600 vendor of containers/common
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
2023-09-20 08:18:32 -04:00
Daniel J Walsh
7c7e6000ce Merge pull request #2113 from containers/renovate/github.com-opencontainers-image-spec-1.x
fix(deps): update module github.com/opencontainers/image-spec to v1.1.0-rc5
2023-09-14 15:06:49 -04:00
renovate[bot]
7db8fbde98 fix(deps): update module github.com/opencontainers/image-spec to v1.1.0-rc5
Signed-off-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2023-09-14 18:32:59 +00:00
Miloslav Trmač
6998e7e5bd Merge pull request #2090 from cevich/lock_closed_issues_prs
[skip-ci] GHA: Closed issue/PR comment-lock
2023-09-14 19:02:24 +02:00
Daniel J Walsh
83275c35cb Merge pull request #2112 from containers/renovate/github.com-containers-common-0.x
fix(deps): update module github.com/containers/common to v0.56.0
2023-09-14 10:51:10 -04:00
renovate[bot]
4d921585f3 fix(deps): update module github.com/containers/common to v0.56.0
Signed-off-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2023-09-14 05:09:18 +00:00
Miloslav Trmač
7482b74ac2 Merge pull request #2093 from cevich/remove_multiarch_cron
Cirrus: Remove multi-arch skopeo image builds
2023-09-13 20:42:15 +02:00
Chris Evich
9e89e18f16 Cirrus: Remove multi-arch skopeo image builds
These jobs have been failing since early August due to
technical/scripting problems.  Disable/remove entirely since a fix
is unlikely to be implemented anytime soon.

Signed-off-by: Chris Evich <cevich@redhat.com>
2023-09-13 14:18:22 -04:00
Miloslav Trmač
3b610a75fe Merge pull request #2109 from containers/renovate/github.com-containers-image-v5-5.x
fix(deps): update module github.com/containers/image/v5 to v5.28.0
2023-09-13 19:54:24 +02:00
renovate[bot]
32c8a05a24 fix(deps): update module github.com/containers/image/v5 to v5.28.0
Signed-off-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2023-09-13 19:19:23 +02:00
Miloslav Trmač
679615f5f8 Increase the golangci-lint timeout
We are running into it in GitHub CI.

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2023-09-13 19:19:23 +02:00
Miloslav Trmač
2bb8193522 Merge pull request #2108 from containers/renovate/github.com-containers-storage-1.x
fix(deps): update module github.com/containers/storage to v1.50.2
2023-09-13 16:09:47 +02:00
renovate[bot]
c1e7c974f8 fix(deps): update module github.com/containers/storage to v1.50.2
Signed-off-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2023-09-13 01:35:14 +00:00
Miloslav Trmač
b58ca4062a Merge pull request #2105 from containers/renovate/github.com-containers-storage-1.x
fix(deps): update module github.com/containers/storage to v1.50.1
2023-09-12 18:45:51 +02:00
renovate[bot]
9563e3b84b fix(deps): update module github.com/containers/storage to v1.50.1
Signed-off-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2023-09-12 13:54:25 +00:00
Valentin Rothberg
ea0a627e64 Merge pull request #2100 from containers/renovate/golang.org-x-exp-digest
fix(deps): update golang.org/x/exp digest to 9212866
2023-09-06 10:00:19 +02:00
renovate[bot]
427e58f5f5 fix(deps): update golang.org/x/exp digest to 9212866
Signed-off-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2023-09-06 00:23:35 +00:00
Miloslav Trmač
e3638bbe3c Merge pull request #2103 from mtrmac/registries-link
Fix a man page link
2023-09-05 19:46:38 +02:00
Miloslav Trmač
7c39f363e8 Fix a man page link
Reported in https://github.com/containers/skopeo/issues/2061 .

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2023-09-05 18:42:52 +02:00
Miloslav Trmač
9f78a09395 Merge pull request #2099 from containers/renovate/github.com-containers-image-v5-digest
fix(deps): update github.com/containers/image/v5 digest to 58d5eb6
2023-09-04 21:14:51 +02:00
renovate[bot]
897619f6b5 fix(deps): update github.com/containers/image/v5 digest to 58d5eb6
Signed-off-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2023-09-04 18:19:51 +00:00
Chris Evich
2976f4f84c GHA: Closed issue/PR comment-lock test
Lock discussions on closed PRs and Issues after 90 days of inactivity.

Ref:
     https://github.com/containers/podman/discussions/19012
and
     https://github.com/containers/podman/pull/19691
and
     https://github.com/containers/podman/pull/19700

Signed-off-by: Chris Evich <cevich@redhat.com>
2023-08-28 12:02:59 -04:00
Miloslav Trmač
7f2f46e1b9 Merge pull request #2091 from containers/renovate/github.com-containers-common-0.x
fix(deps): update module github.com/containers/common to v0.55.4
2023-08-24 20:47:18 +02:00
renovate[bot]
4697991430 fix(deps): update module github.com/containers/common to v0.55.4
Signed-off-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2023-08-24 17:21:02 +00:00
Miloslav Trmač
4d4479abbe Merge pull request #2089 from containers/renovate/github.com-containers-storage-1.x
fix(deps): update module github.com/containers/storage to v1.49.0
2023-08-22 18:49:30 +02:00
renovate[bot]
3249973d37 fix(deps): update module github.com/containers/storage to v1.49.0
Signed-off-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2023-08-22 16:31:39 +00:00
Miloslav Trmač
f54415d5b5 Merge pull request #2086 from lsm5/main-rpm-spdx
rpm: spdx compatible license field
2023-08-21 21:02:48 +02:00
Lokesh Mandvekar
b87a1b3e8e rpm: spdx compatible license field
The lowercase `and` in the License field isn't compatible with the SPDX
license format.

This commit replaces all `and` with `AND` in the spec's License field.

Signed-off-by: Lokesh Mandvekar <lsm5@fedoraproject.org>
2023-08-21 14:26:52 -04:00
Miloslav Trmač
599b4e01a9 Merge pull request #2085 from containers/renovate/golangci-golangci-lint-1.x
chore(deps): update dependency golangci/golangci-lint to v1.54.2
2023-08-21 16:11:58 +02:00
renovate[bot]
b0d587a91c chore(deps): update dependency golangci/golangci-lint to v1.54.2
Signed-off-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2023-08-21 13:31:40 +00:00
Miloslav Trmač
85d55e8d5e Merge pull request #2083 from containers/renovate/major-ci-vm-image
chore(deps): update dependency containers/automation_images to v20230816
2023-08-21 12:40:15 +02:00
renovate[bot]
7ced0fb000 chore(deps): update dependency containers/automation_images to v20230816
Signed-off-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2023-08-17 17:42:50 +00:00
Miloslav Trmač
33818b27cc Merge pull request #2081 from michalbiesek/feat-riscv64
Improve the docs with cross-compilation info
2023-08-17 19:42:33 +02:00
Lokesh Mandvekar
4b952d6150 Packit: set eln target correctly
Signed-off-by: Lokesh Mandvekar <lsm5@fedoraproject.org>
2023-08-17 12:06:12 -04:00
Martin Pitt
6b827fa703 packit: Build PRs into default packit COPRs
Building all PRs of all container projects into the same COPR does not
properly isolate PRs from each other.

To avoid that, change the copr_build configuration to use the packit
default COPRs, which are specific to the particular PR, and disappear
after a few weeks. Depending projects should only run against what
landed in skopeo/main i.e. the podman-next COPR.

Signed-off-by: Martin Pitt <mpitt@redhat.com>
2023-08-17 12:06:12 -04:00
Michal Biesek
fec950c24d DOCS: Update Go version requirement info
Ref: 5abce03

Signed-off-by: Michal Biesek <michalbiesek@gmail.com>
2023-08-17 16:23:39 +02:00
Michal Biesek
449ac9bbfb DOCS: Add information about the cross-build
Signed-off-by: Michal Biesek <michalbiesek@gmail.com>
2023-08-17 16:23:11 +02:00
Miloslav Trmač
c19118d46f Merge pull request #2080 from containers/renovate/github.com-containers-ocicrypt-1.x
fix(deps): update module github.com/containers/ocicrypt to v1.1.8
2023-08-15 22:50:58 +02:00
renovate[bot]
78187ca816 fix(deps): update module github.com/containers/ocicrypt to v1.1.8
Signed-off-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2023-08-15 14:13:17 +00:00
Miloslav Trmač
a77743fb25 Merge pull request #2065 from containers/renovate/github.com-containers-common-0.x
fix(deps): update module github.com/containers/common to v0.55.3
2023-08-15 16:12:25 +02:00
renovate[bot]
df117e2838 fix(deps): update module github.com/containers/common to v0.55.3
Signed-off-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2023-08-15 12:52:32 +00:00
Miloslav Trmač
f64f323bb6 Merge pull request #2079 from mtrmac/c-image-after-merge
Update c/image after https://github.com/containers/image/pull/2070
2023-08-15 14:51:38 +02:00
Miloslav Trmač
4ee2946bbc Update c/image after https://github.com/containers/image/pull/2070
> go get github.com/containers/image/v5@main
> make vendor

This moves c/image to a commit that includes both the work on main
that we were already vendoring, and the last tagged version 5.27.0.

That should prevent Renovate from proposing downgrades which fail tests:
- https://github.com/containers/skopeo/pull/2065
- https://github.com/containers/skopeo/pull/2066

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2023-08-14 20:24:51 +02:00
Miloslav Trmač
9c8ed62f91 Merge pull request #2075 from containers/renovate/golangci-golangci-lint-1.x
chore(deps): update dependency golangci/golangci-lint to v1.54.1
2023-08-12 01:08:07 +02:00
renovate[bot]
0e3efc640a chore(deps): update dependency golangci/golangci-lint to v1.54.1
Signed-off-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2023-08-11 21:01:59 +00:00
Miloslav Trmač
1cea666c87 Merge pull request #2078 from containers/renovate/major-ci-vm-image
chore(deps): update dependency containers/automation_images to v20230809
2023-08-11 23:01:08 +02:00
renovate[bot]
46fcbd3af8 chore(deps): update dependency containers/automation_images to v20230809
Signed-off-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2023-08-11 16:44:37 +00:00
Miloslav Trmač
eca8382a55 Merge pull request #2060 from containers/renovate/golang.org-x-exp-digest
fix(deps): update golang.org/x/exp digest to 352e893
2023-08-10 19:24:20 +02:00
renovate[bot]
e98561e243 fix(deps): update golang.org/x/exp digest to 352e893
Signed-off-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2023-08-10 04:08:25 +00:00
Miloslav Trmač
d57bafbe37 Merge pull request #2071 from containers/renovate/major-ci-vm-image
chore(deps): update dependency containers/automation_images to v20230807
2023-08-07 22:02:08 +02:00
renovate[bot]
4f5ba65a6f chore(deps): update dependency containers/automation_images to v20230807
Signed-off-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2023-08-07 18:44:10 +00:00
Valentin Rothberg
3b1cd3aa14 Merge pull request #2069 from mtrmac/go1.19
Update to Go 1.19
2023-08-07 09:59:34 +02:00
Miloslav Trmač
5abce03c66 Update to Go 1.19
We already require it, because docker/credential-helpers uses Go 1.19
os/exec.Cmd.Environ(). So make that official.

> go mod tidy -go=1.19

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2023-08-05 01:06:19 +02:00
Miloslav Trmač
2dd282842f Merge pull request #2067 from containers/renovate/golang.org-x-term-0.x
fix(deps): update module golang.org/x/term to v0.11.0
2023-08-04 21:22:14 +02:00
renovate[bot]
276b80955a fix(deps): update module golang.org/x/term to v0.11.0
Signed-off-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2023-08-04 16:44:37 +00:00
Valentin Rothberg
575f411b86 Merge pull request #2064 from mtrmac/c-image-for-x-exp
Update c/image for golang.org/x/exp
2023-08-03 09:37:52 +02:00
Miloslav Trmač
60ee543f7f Update c/image for golang.org/x/exp
> go get github.com/containers/image/v5@main
> go mod tidy && go mod vendor

This updates c/image with a new version of x/exp.
That package has changed API in an incompatible way,
so just bumping x/exp (as in https://github.com/containers/skopeo/pull/2060 )
would break Skopeo builds.

This updates both c/image and x/exp in lockstep (and nothing
needs updating in Skopeo itself for the x/exp breakage).

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2023-08-02 22:41:44 +02:00
Lokesh Mandvekar
ab89207511 RPM: define gobuild macro for rhel/centos stream
The current gobuild macro doesn't account for build tags on both c9s and
c8s. This is currently causing copr build failures for c9s.

Ref: https://copr.fedorainfracloud.org/coprs/rhcontainerbot/podman-next/build/6220412/

This commit will define gobuild for all those envs until gobuild is
fixed by default.

Refs:
c9s bz: https://bugzilla.redhat.com/show_bug.cgi?id=2227328
c8s bz: https://bugzilla.redhat.com/show_bug.cgi?id=2227331

Signed-off-by: Lokesh Mandvekar <lsm5@fedoraproject.org>
2023-07-31 15:57:59 -04:00
Colin Walters
f2be411b7b Merge pull request #2048 from mtrmac/proxy-policy
Follow-up fixes to #2029
2023-07-19 06:25:58 -04:00
Miloslav Trmač
f236b5efdc Fix handling the unexpected return value combination from IsRunningImageAllowed
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2023-07-18 20:27:27 +02:00
Miloslav Trmač
c40f1485b0 Close the PolicyContext, as required by the API
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2023-07-18 20:27:04 +02:00
Miloslav Trmač
e90ad8614b Use globalOptions.getPolicyContext instead of an image-targeted SystemContext
This automatically honors the global --policy-path and --insecure-policy options,
which don't affect h.sysctx.

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2023-07-18 20:22:22 +02:00
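
All three follow-ups touch the signature-policy path in the proxy (the corresponding proxy.go hunk appears later in this diff). Below is a minimal sketch of the c/image pattern they converge on, using signature.DefaultPolicy directly instead of skopeo's globalOptions.getPolicyContext plumbing, and a plain wrapped error instead of noteCloseFailure:

```go
// Minimal sketch of the policy-check pattern: build a PolicyContext, always
// Destroy() it (folding any failure into the returned error), check the
// error from IsRunningImageAllowed before the boolean, and treat
// "not allowed but no error" as an internal inconsistency.
package policycheck

import (
	"context"
	"fmt"

	"github.com/containers/image/v5/signature"
	"github.com/containers/image/v5/types"
)

func checkPolicy(ctx context.Context, sys *types.SystemContext, img types.UnparsedImage) (retErr error) {
	policy, err := signature.DefaultPolicy(sys)
	if err != nil {
		return err
	}
	policyContext, err := signature.NewPolicyContext(policy)
	if err != nil {
		return err
	}
	defer func() {
		// The API requires closing the PolicyContext; don't lose that error.
		if err := policyContext.Destroy(); err != nil && retErr == nil {
			retErr = fmt.Errorf("tearing down policy context: %w", err)
		}
	}()

	allowed, err := policyContext.IsRunningImageAllowed(ctx, img)
	if err != nil {
		return err
	}
	if !allowed {
		return fmt.Errorf("internal inconsistency: policy verification failed without returning an error")
	}
	return nil
}
```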
Miloslav Trmač
38650252d5 Merge pull request #2046 from lsm5/packit-remove-pre-sync
Packit: remove pre-sync action
2023-07-14 21:41:02 +02:00
Lokesh Mandvekar
a4aa15f4fa Packit: remove pre-sync action
The pre-sync action constantly breaks, and due to limitations in packit
it cannot be reliably tested until the subsequent upstream release.

The lines the action script adds to the downstream Fedora spec were only
meant to keep Fedora happy. They provide no tangible benefit: GitHub
already notifies us of security issues in libraries mentioned in go.mod
and go.sum, and Red Hat ProdSec has its own machinery for creating
security alerts, so there is no point in having the pre-sync action run
and add a layer of uncertainty.

This commit removes the pre-sync action and
`rpm/update-spec-provides.sh`.

Ref: https://github.com/containers/podman/issues/19232

Signed-off-by: Lokesh Mandvekar <lsm5@fedoraproject.org>
2023-07-14 10:14:00 -04:00
Miloslav Trmač
d606b8ad47 Merge pull request #2044 from containers/renovate/github.com-containers-common-0.x
fix(deps): update module github.com/containers/common to v0.55.2
2023-07-13 22:37:25 +02:00
renovate[bot]
a0a340a12e fix(deps): update module github.com/containers/common to v0.55.2
Signed-off-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2023-07-13 10:15:54 +00:00
Miloslav Trmač
fff034fecf Merge pull request #1777 from cgwalters/proxy-imageid-overflow
proxy: Change the imgid to uint64
2023-07-13 00:47:52 +02:00
Colin Walters
f7dc084799 proxy: Change the imgid to uint64
In PR review for a different issue, the question of what happens
if we hit overflow for the imageid serial was hit.  This feels
pretty unlikely; if I did the math right, it'd require opening
an average of 136 images per second to overflow it in a year.
Nevertheless, in practice what we're sending on the wire is just a JSON
number, and if we extend this to the "max safe JSON number" of 2^53,
it'd take 285,616,414 images per second to overflow in a year, going
from implausible to probably impossible.

With a bit more work of course, we could make this a sparse mapping
and reuse freed numbers, but eh.

Signed-off-by: Colin Walters <walters@verbum.org>
2023-07-13 00:24:55 +02:00
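
The two rates quoted above can be reproduced directly. A short Go program that redoes the arithmetic, taking a 365-day year, the 2^32 handle space of a uint32, and 2^53 as the largest integer a JSON number (an IEEE-754 double) represents exactly:

```go
// Reproduce the back-of-the-envelope numbers from the commit message:
// how many openImage handles per second it would take to exhaust the id
// space within one year.
package main

import (
	"fmt"
	"math"
)

func main() {
	const secondsPerYear = 365 * 24 * 3600 // 31,536,000

	uint32Space := float64(math.MaxUint32) + 1 // 2^32 possible uint32 ids
	jsonSafeSpace := math.Pow(2, 53)           // the "max safe JSON number"

	fmt.Printf("uint32 ids: ~%.0f images/second for a year\n", uint32Space/secondsPerYear)   // ~136
	fmt.Printf("2^53 ids:   ~%.0f images/second for a year\n", jsonSafeSpace/secondsPerYear) // ~285,616,414
}
```

Widening the wire format to the full 2^53 range is what the parseUint64 helper in the proxy.go hunk later in this diff enforces.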
Lokesh Mandvekar
a39972ca35 [CI:BUILD] Packit: install golist before updating downstream spec
The default Packit sandbox environment that runs Packit tasks for
downstream Fedora does not have golist installed by default and can't
run superuser tasks.

This commit will download and extract the golist binary from the Fedora
rpm and use it to provide golist.

The GOPATH mention in `rpm/update-spec-provides.sh` is only required for
golist to generate the gopaths and doesn't affect upstream or the rpm spec.

Currently, the only way to reliably test this is on an open github issue by running
`/packit propose-downstream`. This can't be run on an open PR.
The job-specific packit actions can only be tested via the packit
service and not via packit cli.

Signed-off-by: Lokesh Mandvekar <lsm5@fedoraproject.org>
2023-07-11 10:41:34 -04:00
Miloslav Trmač
abf15075d2 Merge pull request #2034 from containers/renovate/golang.org-x-term-0.x
Update module golang.org/x/term to v0.10.0
2023-07-07 05:44:11 +02:00
renovate[bot]
2945e9e039 Update module golang.org/x/term to v0.10.0
Signed-off-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2023-07-06 13:10:13 +00:00
Tom Sweeney
5f87f6abd0 Bump to v1.14.0-dev
As the title says.

[NO NEW TESTS NEEDED]

Signed-off-by: Tom Sweeney <tsweeney@redhat.com>
2023-07-06 09:08:59 -04:00
Tom Sweeney
cb1e90127e Bump to v1.13.0
As the title says.  In preparation of RHEL 8.9/9.3

[NO NEW TESTS NEEDED]

Signed-off-by: Tom Sweeney <tsweeney@redhat.com>
2023-07-06 09:08:59 -04:00
1188 changed files with 320609 additions and 44022 deletions


@@ -20,13 +20,8 @@ env:
# Save a little typing (path relative to $CIRRUS_WORKING_DIR)
SCRIPT_BASE: "./contrib/cirrus"
####
#### Cache-image names to test with (double-quotes around names are critical)
####
FEDORA_NAME: "fedora-38"
# Google-cloud VM Images
IMAGE_SUFFIX: "c20230614t132754z-f38f37d13"
IMAGE_SUFFIX: "c20240102t155643z-f39f38d13"
FEDORA_CACHE_IMAGE_NAME: "fedora-${IMAGE_SUFFIX}"
# Container FQIN's
@@ -77,14 +72,13 @@ doccheck_task:
"${SKOPEO_PATH}/${SCRIPT_BASE}/runner.sh" doccheck
osx_task:
# Don't run for docs-only or multi-arch image builds.
# Don't run for docs-only builds.
# Also don't run on release-branches or their PRs,
# since base container-image is not version-constrained.
only_if: &not_docs_or_release_branch >-
($CIRRUS_BASE_BRANCH == $CIRRUS_DEFAULT_BRANCH ||
$CIRRUS_BRANCH == $CIRRUS_DEFAULT_BRANCH ) &&
$CIRRUS_CHANGE_TITLE !=~ '.*CI:DOCS.*' &&
$CIRRUS_CRON != 'multiarch'
$CIRRUS_CHANGE_TITLE !=~ '.*CI:DOCS.*'
depends_on:
- validate
macos_instance:
@@ -106,8 +100,7 @@ osx_task:
cross_task:
alias: cross
only_if: >-
$CIRRUS_CHANGE_TITLE !=~ '.*CI:DOCS.*' &&
$CIRRUS_CRON != 'multiarch'
$CIRRUS_CHANGE_TITLE !=~ '.*CI:DOCS.*'
depends_on:
- validate
gce_instance: &standardvm
@@ -170,11 +163,10 @@ ostree-rs-ext_task:
#####
test_skopeo_task:
alias: test_skopeo
# Don't test for [CI:DOCS], [CI:BUILD], or 'multiarch' cron.
# Don't test for [CI:DOCS], [CI:BUILD].
only_if: >-
$CIRRUS_CHANGE_TITLE !=~ '.*CI:BUILD.*' &&
$CIRRUS_CHANGE_TITLE !=~ '.*CI:DOCS.*' &&
$CIRRUS_CRON != 'multiarch'
$CIRRUS_CHANGE_TITLE !=~ '.*CI:DOCS.*'
depends_on:
- validate
gce_instance:
@@ -207,49 +199,6 @@ test_skopeo_task:
"${SKOPEO_PATH}/${SCRIPT_BASE}/runner.sh" system
image_build_task: &image-build
name: "Build multi-arch $CTXDIR"
alias: image_build
# Some of these container images take > 1h to build, limit
# this task to a specific Cirrus-Cron entry with this name.
only_if: $CIRRUS_CRON == 'multiarch'
timeout_in: 120m # emulation is sssllllooooowwww
gce_instance:
<<: *standardvm
image_name: build-push-${IMAGE_SUFFIX}
# More muscle required for parallel multi-arch build
type: "n2-standard-4"
matrix:
- env:
CTXDIR: contrib/skopeoimage/upstream
- env:
CTXDIR: contrib/skopeoimage/testing
- env:
CTXDIR: contrib/skopeoimage/stable
env:
SKOPEO_USERNAME: ENCRYPTED[4195884d23b154553f2ddb26a63fc9fbca50ba77b3e447e4da685d8639ed9bc94b9a86a9c77272c8c80d32ead9ca48da]
SKOPEO_PASSWORD: ENCRYPTED[36e06f9befd17e5da2d60260edb9ef0d40e6312e2bba4cf881d383f1b8b5a18c8e5a553aea2fdebf39cebc6bd3b3f9de]
CONTAINERS_USERNAME: ENCRYPTED[dd722c734641f103b394a3a834d51ca5415347e378637cf98ee1f99e64aad2ec3dbd4664c0d94cb0e06b83d89e9bbe91]
CONTAINERS_PASSWORD: ENCRYPTED[d8b0fac87fe251cedd26c864ba800480f9e0570440b9eb264265b67411b253a626fb69d519e188e6c9a7f525860ddb26]
main_script:
- source /etc/automation_environment
- main.sh $CIRRUS_REPO_CLONE_URL $CTXDIR
test_image_build_task:
<<: *image-build
alias: test_image_build
# Allow this to run inside a PR w/ [CI:BUILD] only.
only_if: $CIRRUS_PR != '' && $CIRRUS_CHANGE_TITLE =~ '.*CI:BUILD.*'
# This takes a LONG time, only run when requested. N/B: Any task
# made to depend on this one will block FOREVER unless triggered.
# DO NOT ADD THIS TASK AS DEPENDENCY FOR `success_task`.
trigger_type: manual
# Overwrite all 'env', don't push anything, just do the build.
env:
DRYRUN: 1
# This task is critical. It updates the "last-used by" timestamp stored
# in metadata for all VM images. This mechanism functions in tandem with
# an out-of-band pruning operation to remove disused VM images.
@@ -288,7 +237,6 @@ success_task:
- cross
- proxy_ostree_ext
- test_skopeo
- image_build
- meta
container: *smallcontainer
env:

.github/workflows/discussion_lock.yml (new file, 20 lines added)

@@ -0,0 +1,20 @@
---
# See also:
# https://github.com/containers/podman/blob/main/.github/workflows/discussion_lock.yml
on:
schedule:
- cron: '0 0 * * *'
# Debug: Allow triggering job manually in github-actions WebUI
workflow_dispatch: {}
jobs:
# Ref: https://docs.github.com/en/actions/using-workflows/reusing-workflows
closed_issue_discussion_lock:
uses: containers/podman/.github/workflows/discussion_lock.yml@main
secrets: inherit
permissions:
contents: read
issues: write
pull-requests: write


@@ -17,7 +17,7 @@ jobs:
pull-requests: write # for actions/stale to close stale PRs
runs-on: ubuntu-latest
steps:
- uses: actions/stale@v8
- uses: actions/stale@v9
with:
repo-token: ${{ secrets.GITHUB_TOKEN }}
stale-issue-message: 'A friendly reminder that this issue had no activity for 30 days.'

.golangci.yml (new file, 3 lines added)

@@ -0,0 +1,3 @@
---
run:
timeout: 5m


@@ -6,45 +6,19 @@
# supported Fedora and CentOS Stream arches.
# They do not block the current Cirrus-based workflow.
# Build targets can be found at:
# https://copr.fedorainfracloud.org/coprs/rhcontainerbot/packit-builds/
# and
# https://copr.fedorainfracloud.org/coprs/rhcontainerbot/podman-next/
specfile_path: rpm/skopeo.spec
upstream_tag_template: v{version}
srpm_build_deps:
- make
jobs:
- &copr
job: copr_build
- job: copr_build
trigger: pull_request
owner: rhcontainerbot
project: packit-builds
notifications:
failure_comment:
message: "Ephemeral COPR build failed. @containers/packit-build please check."
enable_net: true
srpm_build_deps:
- make
- <<: *copr
# Run on commit to main branch
trigger: commit
branch: main
project: podman-next
- job: propose_downstream
trigger: release
update_release: false
dist_git_branches:
- fedora-all
actions:
pre-sync:
- "bash rpm/update-spec-provides.sh"
- job: koji_build
trigger: commit
dist_git_branches:
- fedora-all
- job: bodhi_update
trigger: commit
dist_git_branches:
- fedora-branched # rawhide updates are created automatically
targets:
- epel-9-x86_64
- epel-9-aarch64


@@ -27,7 +27,7 @@ GOARCH ?= $(shell go env GOARCH)
# N/B: This value is managed by Renovate, manual changes are
# possible, as long as they don't disturb the formatting
# (i.e. DO NOT ADD A 'v' prefix!)
GOLANGCI_LINT_VERSION := 1.53.3
GOLANGCI_LINT_VERSION := 1.55.2
ifeq ($(GOBIN),)
GOBIN := $(GOPATH)/bin
@@ -117,6 +117,7 @@ help:
@echo " * 'install' - Install binaries and documents to system locations"
@echo " * 'binary' - Build skopeo with a container"
@echo " * 'bin/skopeo' - Build skopeo locally"
@echo " * 'bin/skopeo.OS.ARCH' - Build skopeo for specific OS and ARCH"
@echo " * 'test-unit' - Execute unit tests"
@echo " * 'test-integration' - Execute integration tests"
@echo " * 'validate' - Verify whether there is no conflict and all Go source files have been formatted, linted and vetted"

cmd/skopeo/login_test.go (new file, 18 lines added)

@@ -0,0 +1,18 @@
package main
import (
"path/filepath"
"testing"
)
func TestLogin(t *testing.T) {
dir := t.TempDir()
authFile := filepath.Join(dir, "auth.json")
compatAuthFile := filepath.Join(dir, "config.json")
// Just a trivial smoke-test exercising one error-handling path.
// We can't test full operation without a registry, unit tests should mostly
// exist in c/common/pkg/auth, not here.
out, err := runSkopeo("login", "--authfile", authFile, "--compat-auth-file", compatAuthFile, "example.com")
assertTestFailed(t, out, err, "options for paths to the credential file and to the Docker-compatible credential file can not be set simultaneously")
}

cmd/skopeo/logout_test.go (new file, 25 lines added)

@@ -0,0 +1,25 @@
package main
import (
"os"
"path/filepath"
"testing"
"github.com/stretchr/testify/require"
)
func TestLogout(t *testing.T) {
dir := t.TempDir()
authFile := filepath.Join(dir, "auth.json")
compatAuthFile := filepath.Join(dir, "config.json")
// Just a trivial smoke-test exercising one error-handling path.
// We can't test full operation without a registry, unit tests should mostly
// exist in c/common/pkg/auth, not here.
err := os.WriteFile(authFile, []byte("{}"), 0o700)
require.NoError(t, err)
err = os.WriteFile(compatAuthFile, []byte("{}"), 0o700)
require.NoError(t, err)
out, err := runSkopeo("logout", "--authfile", authFile, "--compat-auth-file", compatAuthFile, "example.com")
assertTestFailed(t, out, err, "options for paths to the credential file and to the Docker-compatible credential file can not be set simultaneously")
}


@@ -75,7 +75,6 @@ import (
"github.com/containers/image/v5/manifest"
ocilayout "github.com/containers/image/v5/oci/layout"
"github.com/containers/image/v5/pkg/blobinfocache"
"github.com/containers/image/v5/signature"
"github.com/containers/image/v5/transports"
"github.com/containers/image/v5/transports/alltransports"
"github.com/containers/image/v5/types"
@@ -156,7 +155,7 @@ type activePipe struct {
// openImage is an opened image reference
type openImage struct {
// id is an opaque integer handle
id uint32
id uint64
src types.ImageSource
cachedimg types.Image
}
@@ -171,9 +170,9 @@ type proxyHandler struct {
cache types.BlobInfoCache
// imageSerial is a counter for open images
imageSerial uint32
imageSerial uint64
// images holds our opened images
images map[uint32]*openImage
images map[uint64]*openImage
// activePipes maps from "pipeid" to a pipe + goroutine pair
activePipes map[uint32]*activePipe
}
@@ -239,7 +238,7 @@ func isNotFoundImageError(err error) bool {
errors.Is(err, ocilayout.ImageNotFoundError{})
}
func (h *proxyHandler) openImageImpl(args []any, allowNotFound bool) (replyBuf, error) {
func (h *proxyHandler) openImageImpl(args []any, allowNotFound bool) (retReplyBuf replyBuf, retErr error) {
h.lock.Lock()
defer h.lock.Unlock()
var ret replyBuf
@@ -268,21 +267,23 @@ func (h *proxyHandler) openImageImpl(args []any, allowNotFound bool) (replyBuf,
return ret, err
}
policyContext, err := h.opts.global.getPolicyContext()
if err != nil {
return ret, err
}
defer func() {
if err := policyContext.Destroy(); err != nil {
retErr = noteCloseFailure(retErr, "tearing down policy context", err)
}
}()
unparsedTopLevel := image.UnparsedInstance(imgsrc, nil)
policy, err := signature.DefaultPolicy(h.sysctx)
if err != nil {
return ret, err
}
policyContext, err := signature.NewPolicyContext(policy)
if err != nil {
return ret, err
}
allowed, err := policyContext.IsRunningImageAllowed(context.Background(), unparsedTopLevel)
if !allowed || err != nil {
if err != nil {
return ret, err
}
if !allowed && err == nil {
return ret, fmt.Errorf("policy verification failed unexpectedly")
if !allowed {
return ret, fmt.Errorf("internal inconsistency: policy verification failed without returning an error")
}
// Note that we never return zero as an imageid; this code doesn't yet
@@ -326,14 +327,6 @@ func (h *proxyHandler) CloseImage(args []any) (replyBuf, error) {
return ret, nil
}
func parseImageID(v any) (uint32, error) {
imgidf, ok := v.(float64)
if !ok {
return 0, fmt.Errorf("expecting integer imageid, not %T", v)
}
return uint32(imgidf), nil
}
// parseUint64 validates that a number fits inside a JavaScript safe integer
func parseUint64(v any) (uint64, error) {
f, ok := v.(float64)
@@ -347,7 +340,7 @@ func parseUint64(v any) (uint64, error) {
}
func (h *proxyHandler) parseImageFromID(v any) (*openImage, error) {
imgid, err := parseImageID(v)
imgid, err := parseUint64(v)
if err != nil {
return nil, err
}
@@ -847,7 +840,7 @@ func (h *proxyHandler) processRequest(readBytes []byte) (rb replyBuf, terminate
func (opts *proxyOptions) run(args []string, stdout io.Writer) error {
handler := &proxyHandler{
opts: opts,
images: make(map[uint32]*openImage),
images: make(map[uint64]*openImage),
activePipes: make(map[uint32]*activePipe),
}
defer handler.close()


@@ -25,9 +25,7 @@ the images live are public and can be pulled without credentials. These contain
resulting containers can run safely with privileges within the container.
The container images are built using the latest Fedora and then Skopeo is installed into them.
The PATH in the container images is set to the default PATH provided by Fedora. Also, the
ENTRYPOINT and the WORKDIR variables are not set within these container images, as such they
default to `/`.
The ENTRYPOINT of the container is set to execute the `skopeo` binary.
The container images are:


@@ -182,7 +182,7 @@ Existing signatures, if any, are preserved as well.
**--dest-compress-format** _format_
Specifies the compression format to use. Supported values are: `gzip` and `zstd`.
Specifies the compression format to use. Supported values are: `gzip`, `zstd` and `zstd:chunked`.
**--dest-compress-level** _format_


@@ -36,6 +36,10 @@ Path of the authentication file. Default is ${XDG\_RUNTIME\_DIR}/containers/auth
Note: You can also override the default path of the authentication file by setting the REGISTRY\_AUTH\_FILE
environment variable. `export REGISTRY_AUTH_FILE=path`
**--compat-auth-file**=*path*
Instead of updating the default credentials file, update the one at *path*, and use a Docker-compatible format.
**--get-login**
Return the logged-in user for the registry. Return error if no login is found.


@@ -23,6 +23,10 @@ Path of the authentication file. Default is ${XDG\_RUNTIME\_DIR}/containers/auth
Note: You can also override the default path of the authentication file by setting the REGISTRY\_AUTH\_FILE
environment variable. `export REGISTRY_AUTH_FILE=path`
**--compat-auth-file**=*path*
Instead of updating the default credentials file, update the one at *path*, and use a Docker-compatible format.
**--all**, **-a**
Remove the cached credentials for all registries in the auth file


@@ -121,7 +121,7 @@ Print the version number
**/etc/containers/registries.d**
Default directory containing registry configuration, if **--registries.d** is not specified.
The contents of this directory are documented in [containers-policy.json(5)](https://github.com/containers/image/blob/main/docs/containers-policy.json.5.md).
The contents of this directory are documented in [containers-registries.d(5)](https://github.com/containers/image/blob/main/docs/containers-registries.d.5.md).
## SEE ALSO
skopeo-login(1), docker-login(1), containers-auth.json(5), containers-storage.conf(5), containers-policy.json(5), containers-transports(5)

go.mod (113 changed lines)

@@ -1,23 +1,23 @@
module github.com/containers/skopeo
go 1.18
go 1.19
require (
github.com/containers/common v0.55.1
github.com/containers/image/v5 v5.26.1
github.com/containers/ocicrypt v1.1.7
github.com/containers/storage v1.48.0
github.com/docker/distribution v2.8.2+incompatible
github.com/containers/common v0.57.6
github.com/containers/image/v5 v5.29.4
github.com/containers/ocicrypt v1.1.10
github.com/containers/storage v1.51.0
github.com/docker/distribution v2.8.3+incompatible
github.com/opencontainers/go-digest v1.0.0
github.com/opencontainers/image-spec v1.1.0-rc4
github.com/opencontainers/image-spec v1.1.0-rc5
github.com/opencontainers/image-tools v1.0.0-rc3
github.com/sirupsen/logrus v1.9.3
github.com/spf13/cobra v1.7.0
github.com/spf13/cobra v1.8.0
github.com/spf13/pflag v1.0.5
github.com/stretchr/testify v1.8.4
github.com/syndtr/gocapability v0.0.0-20200815063812-42c35b437635
golang.org/x/exp v0.0.0-20230522175609-2e198f4a06a1
golang.org/x/term v0.9.0
golang.org/x/exp v0.0.0-20231226003508-02704c960a9b
golang.org/x/term v0.17.0
gopkg.in/yaml.v3 v3.0.1
)
@@ -25,30 +25,31 @@ require (
dario.cat/mergo v1.0.0 // indirect
github.com/BurntSushi/toml v1.3.2 // indirect
github.com/Microsoft/go-winio v0.6.1 // indirect
github.com/Microsoft/hcsshim v0.10.0-rc.8 // indirect
github.com/Microsoft/hcsshim v0.12.0-rc.1 // indirect
github.com/VividCortex/ewma v1.2.0 // indirect
github.com/acarl005/stripansi v0.0.0-20180116102854-5a71ef0e047d // indirect
github.com/asaskevich/govalidator v0.0.0-20230301143203-a9d515a09cc2 // indirect
github.com/containerd/cgroups v1.1.0 // indirect
github.com/containerd/containerd v1.7.2 // indirect
github.com/containerd/stargz-snapshotter/estargz v0.14.3 // indirect
github.com/containerd/cgroups/v3 v3.0.2 // indirect
github.com/containerd/containerd v1.7.9 // indirect
github.com/containerd/stargz-snapshotter/estargz v0.15.1 // indirect
github.com/containers/libtrust v0.0.0-20230121012942-c1716e8a8d01 // indirect
github.com/coreos/go-oidc/v3 v3.6.0 // indirect
github.com/cyberphone/json-canonicalization v0.0.0-20230514072755-504adb8a8af1 // indirect
github.com/cyphar/filepath-securejoin v0.2.3 // indirect
github.com/coreos/go-oidc/v3 v3.7.0 // indirect
github.com/cyberphone/json-canonicalization v0.0.0-20231011164504-785e29786b46 // indirect
github.com/cyphar/filepath-securejoin v0.2.4 // indirect
github.com/davecgh/go-spew v1.1.1 // indirect
github.com/docker/docker v24.0.2+incompatible // indirect
github.com/docker/docker-credential-helpers v0.7.0 // indirect
github.com/distribution/reference v0.5.0 // indirect
github.com/docker/docker v24.0.7+incompatible // indirect
github.com/docker/docker-credential-helpers v0.8.0 // indirect
github.com/docker/go-connections v0.4.0 // indirect
github.com/docker/go-units v0.5.0 // indirect
github.com/dsnet/compress v0.0.2-0.20210315054119-f66993602bf5 // indirect
github.com/go-jose/go-jose/v3 v3.0.0 // indirect
github.com/go-logr/logr v1.2.4 // indirect
github.com/go-jose/go-jose/v3 v3.0.3 // indirect
github.com/go-logr/logr v1.3.0 // indirect
github.com/go-logr/stdr v1.2.2 // indirect
github.com/go-openapi/analysis v0.21.4 // indirect
github.com/go-openapi/errors v0.20.3 // indirect
github.com/go-openapi/jsonpointer v0.19.5 // indirect
github.com/go-openapi/jsonreference v0.20.0 // indirect
github.com/go-openapi/errors v0.20.4 // indirect
github.com/go-openapi/jsonpointer v0.19.6 // indirect
github.com/go-openapi/jsonreference v0.20.2 // indirect
github.com/go-openapi/loads v0.21.2 // indirect
github.com/go-openapi/runtime v0.26.0 // indirect
github.com/go-openapi/spec v0.20.9 // indirect
@@ -58,32 +59,33 @@ require (
github.com/gogo/protobuf v1.3.2 // indirect
github.com/golang/groupcache v0.0.0-20210331224755-41bb18bfe9da // indirect
github.com/golang/protobuf v1.5.3 // indirect
github.com/google/go-containerregistry v0.15.2 // indirect
github.com/google/go-containerregistry v0.16.1 // indirect
github.com/google/go-intervals v0.0.2 // indirect
github.com/google/uuid v1.3.0 // indirect
github.com/google/uuid v1.3.1 // indirect
github.com/gorilla/mux v1.8.0 // indirect
github.com/hashicorp/errwrap v1.1.0 // indirect
github.com/hashicorp/go-cleanhttp v0.5.2 // indirect
github.com/hashicorp/go-multierror v1.1.1 // indirect
github.com/hashicorp/go-retryablehttp v0.7.4 // indirect
github.com/hashicorp/go-retryablehttp v0.7.5 // indirect
github.com/inconshreveable/mousetrap v1.1.0 // indirect
github.com/josharian/intern v1.0.0 // indirect
github.com/json-iterator/go v1.1.12 // indirect
github.com/klauspost/compress v1.16.6 // indirect
github.com/klauspost/compress v1.17.3 // indirect
github.com/klauspost/pgzip v1.2.6 // indirect
github.com/letsencrypt/boulder v0.0.0-20230213213521-fdfea0d469b6 // indirect
github.com/mailru/easyjson v0.7.7 // indirect
github.com/mattn/go-runewidth v0.0.14 // indirect
github.com/mattn/go-runewidth v0.0.15 // indirect
github.com/mattn/go-shellwords v1.0.12 // indirect
github.com/mattn/go-sqlite3 v1.14.18 // indirect
github.com/miekg/pkcs11 v1.1.1 // indirect
github.com/mistifyio/go-zfs/v3 v3.0.1 // indirect
github.com/mitchellh/mapstructure v1.5.0 // indirect
github.com/moby/sys/mountinfo v0.6.2 // indirect
github.com/moby/sys/mountinfo v0.7.1 // indirect
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd // indirect
github.com/modern-go/reflect2 v1.0.2 // indirect
github.com/oklog/ulid v1.3.1 // indirect
github.com/opencontainers/runc v1.1.7 // indirect
github.com/opencontainers/runtime-spec v1.1.0-rc.3 // indirect
github.com/opencontainers/runc v1.1.10 // indirect
github.com/opencontainers/runtime-spec v1.1.0 // indirect
github.com/opencontainers/selinux v1.11.0 // indirect
github.com/opentracing/opentracing-go v1.2.0 // indirect
github.com/ostreedev/ostree-go v0.0.0-20210805093236-719684c64e4f // indirect
@@ -92,41 +94,40 @@ require (
github.com/proglottis/gpgme v0.1.3 // indirect
github.com/rivo/uniseg v0.4.4 // indirect
github.com/russross/blackfriday v2.0.0+incompatible // indirect
github.com/secure-systems-lab/go-securesystemslib v0.7.0 // indirect
github.com/segmentio/ksuid v1.0.4 // indirect
github.com/sigstore/fulcio v1.3.1 // indirect
github.com/sigstore/rekor v1.2.2-0.20230601122533-4c81ff246d12 // indirect
github.com/sigstore/sigstore v1.7.1 // indirect
github.com/sigstore/fulcio v1.4.3 // indirect
github.com/sigstore/rekor v1.2.2 // indirect
github.com/sigstore/sigstore v1.7.5 // indirect
github.com/skratchdot/open-golang v0.0.0-20200116055534-eef842397966 // indirect
github.com/stefanberger/go-pkcs11uri v0.0.0-20201008174630-78d3cae3a980 // indirect
github.com/sylabs/sif/v2 v2.11.5 // indirect
github.com/sylabs/sif/v2 v2.15.0 // indirect
github.com/tchap/go-patricia/v2 v2.3.1 // indirect
github.com/theupdateframework/go-tuf v0.5.2 // indirect
github.com/titanous/rocacheck v0.0.0-20171023193734-afe73141d399 // indirect
github.com/ulikunitz/xz v0.5.11 // indirect
github.com/vbatts/tar-split v0.11.3 // indirect
github.com/vbauerster/mpb/v8 v8.4.0 // indirect
github.com/vbatts/tar-split v0.11.5 // indirect
github.com/vbauerster/mpb/v8 v8.6.2 // indirect
github.com/xeipuuv/gojsonpointer v0.0.0-20190905194746-02993c407bfb // indirect
github.com/xeipuuv/gojsonreference v0.0.0-20180127040603-bd5ef7bd5415 // indirect
github.com/xeipuuv/gojsonschema v1.2.0 // indirect
go.etcd.io/bbolt v1.3.7 // indirect
go.mongodb.org/mongo-driver v1.11.3 // indirect
go.mozilla.org/pkcs7 v0.0.0-20210826202110-33d05740a352 // indirect
go.opencensus.io v0.24.0 // indirect
go.opentelemetry.io/otel v1.15.0 // indirect
go.opentelemetry.io/otel/trace v1.15.0 // indirect
golang.org/x/crypto v0.10.0 // indirect
golang.org/x/mod v0.10.0 // indirect
golang.org/x/net v0.11.0 // indirect
golang.org/x/oauth2 v0.9.0 // indirect
golang.org/x/sync v0.3.0 // indirect
golang.org/x/sys v0.9.0 // indirect
golang.org/x/text v0.10.0 // indirect
golang.org/x/tools v0.9.3 // indirect
google.golang.org/appengine v1.6.7 // indirect
google.golang.org/genproto v0.0.0-20230410155749-daa745c078e1 // indirect
google.golang.org/grpc v1.55.0 // indirect
google.golang.org/protobuf v1.30.0 // indirect
gopkg.in/go-jose/go-jose.v2 v2.6.1 // indirect
gopkg.in/square/go-jose.v2 v2.6.0 // indirect
go.opentelemetry.io/otel v1.19.0 // indirect
go.opentelemetry.io/otel/metric v1.19.0 // indirect
go.opentelemetry.io/otel/trace v1.19.0 // indirect
golang.org/x/crypto v0.19.0 // indirect
golang.org/x/mod v0.14.0 // indirect
golang.org/x/net v0.19.0 // indirect
golang.org/x/oauth2 v0.14.0 // indirect
golang.org/x/sync v0.5.0 // indirect
golang.org/x/sys v0.17.0 // indirect
golang.org/x/text v0.14.0 // indirect
golang.org/x/tools v0.16.0 // indirect
google.golang.org/appengine v1.6.8 // indirect
google.golang.org/genproto/googleapis/rpc v0.0.0-20230920204549-e6e6cdab5c13 // indirect
google.golang.org/grpc v1.58.3 // indirect
google.golang.org/protobuf v1.33.0 // indirect
gopkg.in/go-jose/go-jose.v2 v2.6.3 // indirect
gopkg.in/yaml.v2 v2.4.0 // indirect
)

go.sum

@@ -4,13 +4,12 @@ dario.cat/mergo v1.0.0/go.mod h1:uNxQE+84aUszobStD9th8a29P2fMDhsBdgRYvZOxGmk=
github.com/14rcole/gopopulate v0.0.0-20180821133914-b175b219e774 h1:SCbEWT58NSt7d2mcFdvxC9uyrdcTfvBbPLThhkDmXzg=
github.com/Azure/go-ansiterm v0.0.0-20230124172434-306776ec8161 h1:L/gRVlceqvL25UVaW/CKtUDjefjrs0SPonmDGUVOYP0=
github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU=
github.com/BurntSushi/toml v1.2.1/go.mod h1:CxXYINrC8qIiEnFrOxCa7Jy5BFHlXnUU2pbicEuybxQ=
github.com/BurntSushi/toml v1.3.2 h1:o7IhLm0Msx3BaB+n3Ag7L8EVlByGnpq14C4YWiu/gL8=
github.com/BurntSushi/toml v1.3.2/go.mod h1:CxXYINrC8qIiEnFrOxCa7Jy5BFHlXnUU2pbicEuybxQ=
github.com/Microsoft/go-winio v0.6.1 h1:9/kr64B9VUZrLm5YYwbGtUJnMgqWVOdUAXu6Migciow=
github.com/Microsoft/go-winio v0.6.1/go.mod h1:LRdKpFKfdobln8UmuiYcKPot9D2v6svN5+sAH+4kjUM=
github.com/Microsoft/hcsshim v0.10.0-rc.8 h1:YSZVvlIIDD1UxQpJp0h+dnpLUw+TrY0cx8obKsp3bek=
github.com/Microsoft/hcsshim v0.10.0-rc.8/go.mod h1:OEthFdQv/AD2RAdzR6Mm1N1KPCztGKDurW1Z8b8VGMM=
github.com/Microsoft/hcsshim v0.12.0-rc.1 h1:Hy+xzYujv7urO5wrgcG58SPMOXNLrj4WCJbySs2XX/A=
github.com/Microsoft/hcsshim v0.12.0-rc.1/go.mod h1:Y1a1S0QlYp1mBpyvGiuEdOfZqnao+0uX5AWHXQ5NhZU=
github.com/PuerkitoBio/purell v1.1.1/go.mod h1:c11w/QuzBsJSee3cPx9rAFu61PvFxuPbtSwDGJws/X0=
github.com/PuerkitoBio/urlesc v0.0.0-20170810143723-de5bf2ad4578/go.mod h1:uGdkoq3SwY9Y+13GIhn11/XLaGBb4BfwItxLd5jeuXE=
github.com/VividCortex/ewma v1.2.0 h1:f58SaIzcDXrSy3kWaHNvuJgJ3Nmz59Zji6XoJR/q1ow=
@@ -25,41 +24,45 @@ github.com/census-instrumentation/opencensus-proto v0.2.1/go.mod h1:f6KPmirojxKA
github.com/cespare/xxhash/v2 v2.2.0 h1:DC2CZ1Ep5Y4k3ZQ899DldepgrayRUGE6BBZ/cd9Cj44=
github.com/client9/misspell v0.3.4/go.mod h1:qj6jICC3Q7zFZvVWo7KLAzC3yx5G7kyvSDkc90ppPyw=
github.com/cncf/udpa/go v0.0.0-20191209042840-269d4d468f6f/go.mod h1:M8M6+tZqaGXZJjfX53e64911xZQV5JYwmTeXPW+k8Sc=
github.com/containerd/cgroups v1.1.0 h1:v8rEWFl6EoqHB+swVNjVoCJE8o3jX7e8nqBGPLaDFBM=
github.com/containerd/cgroups v1.1.0/go.mod h1:6ppBcbh/NOOUU+dMKrykgaBnK9lCIBxHqJDGwsa1mIw=
github.com/containerd/containerd v1.7.2 h1:UF2gdONnxO8I6byZXDi5sXWiWvlW3D/sci7dTQimEJo=
github.com/containerd/containerd v1.7.2/go.mod h1:afcz74+K10M/+cjGHIVQrCt3RAQhUSCAjJ9iMYhhkuI=
github.com/containerd/stargz-snapshotter/estargz v0.14.3 h1:OqlDCK3ZVUO6C3B/5FSkDwbkEETK84kQgEeFwDC+62k=
github.com/containerd/stargz-snapshotter/estargz v0.14.3/go.mod h1:KY//uOCIkSuNAHhJogcZtrNHdKrA99/FCCRjE3HD36o=
github.com/containers/common v0.55.1 h1:sOlcIxEYXoR3OSHufew7CuSeOWr7a2jHGYw3r+xKA1k=
github.com/containers/common v0.55.1/go.mod h1:ZKPllYOZ2xj2rgWRdnHHVvWg6ru4BT28En8mO8DMMPk=
github.com/containers/image/v5 v5.26.1 h1:8y3xq8GO/6y8FR+nAedHPsAFiAtOrab9qHTBpbqaX8g=
github.com/containers/image/v5 v5.26.1/go.mod h1:IwlOGzTkGnmfirXxt0hZeJlzv1zVukE03WZQ203Z9GA=
github.com/containerd/cgroups/v3 v3.0.2 h1:f5WFqIVSgo5IZmtTT3qVBo6TzI1ON6sycSBKkymb9L0=
github.com/containerd/cgroups/v3 v3.0.2/go.mod h1:JUgITrzdFqp42uI2ryGA+ge0ap/nxzYgkGmIcetmErE=
github.com/containerd/containerd v1.7.9 h1:KOhK01szQbM80YfW1H6RZKh85PHGqY/9OcEZ35Je8sc=
github.com/containerd/containerd v1.7.9/go.mod h1:0/W44LWEYfSHoxBtsHIiNU/duEkgpMokemafHVCpq9Y=
github.com/containerd/stargz-snapshotter/estargz v0.15.1 h1:eXJjw9RbkLFgioVaTG+G/ZW/0kEe2oEKCdS/ZxIyoCU=
github.com/containerd/stargz-snapshotter/estargz v0.15.1/go.mod h1:gr2RNwukQ/S9Nv33Lt6UC7xEx58C+LHRdoqbEKjz1Kk=
github.com/containers/common v0.57.6 h1:GNK2lsL2gMcmLc+cH749S7I7HxuP80TBWqcr4913bC4=
github.com/containers/common v0.57.6/go.mod h1:GRtgIWNPc8zmo/vcA7VoZfLWpgQRH01/kzQbeNZH8WQ=
github.com/containers/image/v5 v5.29.4 h1:EbYrwOscTvzeCXt4149OtU74T/ZuohEottcs/hz47O4=
github.com/containers/image/v5 v5.29.4/go.mod h1:kQ7qcDsps424ZAz24thD+x7+dJw1vgur3A9tTDsj97E=
github.com/containers/libtrust v0.0.0-20230121012942-c1716e8a8d01 h1:Qzk5C6cYglewc+UyGf6lc8Mj2UaPTHy/iF2De0/77CA=
github.com/containers/libtrust v0.0.0-20230121012942-c1716e8a8d01/go.mod h1:9rfv8iPl1ZP7aqh9YA68wnZv2NUDbXdcdPHVz0pFbPY=
github.com/containers/ocicrypt v1.1.7 h1:thhNr4fu2ltyGz8aMx8u48Ae0Pnbip3ePP9/mzkZ/3U=
github.com/containers/ocicrypt v1.1.7/go.mod h1:7CAhjcj2H8AYp5YvEie7oVSK2AhBY8NscCYRawuDNtw=
github.com/containers/storage v1.48.0 h1:wiPs8J2xiFoOEAhxHDRtP6A90Jzj57VqzLRXOqeizns=
github.com/containers/storage v1.48.0/go.mod h1:pRp3lkRo2qodb/ltpnudoXggrviRmaCmU5a5GhTBae0=
github.com/coreos/go-oidc/v3 v3.6.0 h1:AKVxfYw1Gmkn/w96z0DbT/B/xFnzTd3MkZvWLjF4n/o=
github.com/coreos/go-oidc/v3 v3.6.0/go.mod h1:ZpHUsHBucTUj6WOkrP4E20UPynbLZzhTQ1XKCXkxyPc=
github.com/cpuguy83/go-md2man/v2 v2.0.2/go.mod h1:tgQtvFlXSQOSOSIRvRPT7W67SCa46tRHOmNcaadrF8o=
github.com/containers/ocicrypt v1.1.10 h1:r7UR6o8+lyhkEywetubUUgcKFjOWOaWz8cEBrCPX0ic=
github.com/containers/ocicrypt v1.1.10/go.mod h1:YfzSSr06PTHQwSTUKqDSjish9BeW1E4HUmreluQcMd8=
github.com/containers/storage v1.51.0 h1:AowbcpiWXzAjHosKz7MKvPEqpyX+ryZA/ZurytRrFNA=
github.com/containers/storage v1.51.0/go.mod h1:ybl8a3j1PPtpyaEi/5A6TOFs+5TrEyObeKJzVtkUlfc=
github.com/coreos/go-oidc/v3 v3.7.0 h1:FTdj0uexT4diYIPlF4yoFVI5MRO1r5+SEcIpEw9vC0o=
github.com/coreos/go-oidc/v3 v3.7.0/go.mod h1:yQzSCqBnK3e6Fs5l+f5i0F8Kwf0zpH9bPEsbY00KanM=
github.com/cpuguy83/go-md2man/v2 v2.0.3/go.mod h1:tgQtvFlXSQOSOSIRvRPT7W67SCa46tRHOmNcaadrF8o=
github.com/creack/pty v1.1.9/go.mod h1:oKZEueFk5CKHvIhNR5MUki03XCEU+Q6VDXinZuGJ33E=
github.com/cyberphone/json-canonicalization v0.0.0-20230514072755-504adb8a8af1 h1:8Pq5UNTC+/UfvcOPKQGZoKCkeF+ZaKa4wJ9OS2gsQQM=
github.com/cyberphone/json-canonicalization v0.0.0-20230514072755-504adb8a8af1/go.mod h1:uzvlm1mxhHkdfqitSA92i7Se+S9ksOn3a3qmv/kyOCw=
github.com/cyphar/filepath-securejoin v0.2.3 h1:YX6ebbZCZP7VkM3scTTokDgBL2TY741X51MTk3ycuNI=
github.com/cyphar/filepath-securejoin v0.2.3/go.mod h1:aPGpWjXOXUn2NCNjFvBE6aRxGGx79pTxQpKOJNYHHl4=
github.com/cyberphone/json-canonicalization v0.0.0-20231011164504-785e29786b46 h1:2Dx4IHfC1yHWI12AxQDJM1QbRCDfk6M+blLzlZCXdrc=
github.com/cyberphone/json-canonicalization v0.0.0-20231011164504-785e29786b46/go.mod h1:uzvlm1mxhHkdfqitSA92i7Se+S9ksOn3a3qmv/kyOCw=
github.com/cyphar/filepath-securejoin v0.2.4 h1:Ugdm7cg7i6ZK6x3xDF1oEu1nfkyfH53EtKeQYTC3kyg=
github.com/cyphar/filepath-securejoin v0.2.4/go.mod h1:aPGpWjXOXUn2NCNjFvBE6aRxGGx79pTxQpKOJNYHHl4=
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/docker/distribution v2.8.2+incompatible h1:T3de5rq0dB1j30rp0sA2rER+m322EBzniBPB6ZIzuh8=
github.com/docker/distribution v2.8.2+incompatible/go.mod h1:J2gT2udsDAN96Uj4KfcMRqY0/ypR+oyYUYmja8H+y+w=
github.com/docker/docker v24.0.2+incompatible h1:eATx+oLz9WdNVkQrr0qjQ8HvRJ4bOOxfzEo8R+dA3cg=
github.com/docker/docker v24.0.2+incompatible/go.mod h1:eEKB0N0r5NX/I1kEveEz05bcu8tLC/8azJZsviup8Sk=
github.com/docker/docker-credential-helpers v0.7.0 h1:xtCHsjxogADNZcdv1pKUHXryefjlVRqWqIhk/uXJp0A=
github.com/docker/docker-credential-helpers v0.7.0/go.mod h1:rETQfLdHNT3foU5kuNkFR1R1V12OJRRO5lzt2D1b5X0=
github.com/distribution/reference v0.5.0 h1:/FUIFXtfc/x2gpa5/VGfiGLuOIdYa1t65IKK2OFGvA0=
github.com/distribution/reference v0.5.0/go.mod h1:BbU0aIcezP1/5jX/8MP0YiH4SdvB5Y4f/wlDRiLyi3E=
github.com/docker/cli v24.0.7+incompatible h1:wa/nIwYFW7BVTGa7SWPVyyXU9lgORqUb1xfI36MSkFg=
github.com/docker/distribution v2.8.3+incompatible h1:AtKxIZ36LoNK51+Z6RpzLpddBirtxJnzDrHLEKxTAYk=
github.com/docker/distribution v2.8.3+incompatible/go.mod h1:J2gT2udsDAN96Uj4KfcMRqY0/ypR+oyYUYmja8H+y+w=
github.com/docker/docker v24.0.7+incompatible h1:Wo6l37AuwP3JaMnZa226lzVXGA3F9Ig1seQen0cKYlM=
github.com/docker/docker v24.0.7+incompatible/go.mod h1:eEKB0N0r5NX/I1kEveEz05bcu8tLC/8azJZsviup8Sk=
github.com/docker/docker-credential-helpers v0.8.0 h1:YQFtbBQb4VrpoPxhFuzEBPQ9E16qz5SpHLS+uswaCp8=
github.com/docker/docker-credential-helpers v0.8.0/go.mod h1:UGFXcuoQ5TxPiB54nHOZ32AWRqQdECoh/Mg0AlEYb40=
github.com/docker/go-connections v0.4.0 h1:El9xVISelRB7BuFusrZozjnkIM5YnzCViNKohAFqRJQ=
github.com/docker/go-connections v0.4.0/go.mod h1:Gbd7IOopHjR8Iph03tsViu4nIes5XhDvyHbTtUxmeec=
github.com/docker/go-metrics v0.0.1 h1:AgB/0SvBxihN0X8OR4SjsblXkbMvalQ8cjmtKQ2rQV8=
github.com/docker/go-units v0.5.0 h1:69rxXcBk27SvSaaxTtLh/8llcHD8vYHT7WSdRZ/jvr4=
github.com/docker/go-units v0.5.0/go.mod h1:fgPhTUdO+D/Jk86RDLlptpiXQzgHJF7gydDDbaIK4Dk=
github.com/dsnet/compress v0.0.2-0.20210315054119-f66993602bf5 h1:iFaUwBSo5Svw6L7HYpRu/0lE3e0BaElwnNO1qkNQxBY=
@@ -72,11 +75,11 @@ github.com/envoyproxy/protoc-gen-validate v0.1.0/go.mod h1:iSmxcyjqTsJpI2R4NaDN7
github.com/facebookgo/clock v0.0.0-20150410010913-600d898af40a h1:yDWHCSQ40h88yih2JAcL6Ls/kVkSE8GFACTGVnMPruw=
github.com/facebookgo/limitgroup v0.0.0-20150612190941-6abd8d71ec01 h1:IeaD1VDVBPlx3viJT9Md8if8IxxJnO+x0JCGb054heg=
github.com/facebookgo/muster v0.0.0-20150708232844-fd3d7953fd52 h1:a4DFiKFJiDRGFD1qIcqGLX/WlUMD9dyLSLDt+9QZgt8=
github.com/go-jose/go-jose/v3 v3.0.0 h1:s6rrhirfEP/CGIoc6p+PZAeogN2SxKav6Wp7+dyMWVo=
github.com/go-jose/go-jose/v3 v3.0.0/go.mod h1:RNkWWRld676jZEYoV3+XK8L2ZnNSvIsxFMht0mSX+u8=
github.com/go-jose/go-jose/v3 v3.0.3 h1:fFKWeig/irsp7XD2zBxvnmA/XaRWp5V3CBsZXJF7G7k=
github.com/go-jose/go-jose/v3 v3.0.3/go.mod h1:5b+7YgP7ZICgJDBdfjZaIt+H/9L9T/YQrVfLAMboGkQ=
github.com/go-logr/logr v1.2.2/go.mod h1:jdQByPbusPIv2/zmleS9BjJVeZ6kBagPoEUsqbVz/1A=
github.com/go-logr/logr v1.2.4 h1:g01GSCwiDw2xSZfjJ2/T9M+S6pFdcNtFYsp+Y43HYDQ=
github.com/go-logr/logr v1.2.4/go.mod h1:jdQByPbusPIv2/zmleS9BjJVeZ6kBagPoEUsqbVz/1A=
github.com/go-logr/logr v1.3.0 h1:2y3SDp0ZXuc6/cjLSZ+Q3ir+QB9T/iG5yYRXqsagWSY=
github.com/go-logr/logr v1.3.0/go.mod h1:9T104GzyrTigFIr8wt5mBrctHMim0Nb2HLGrmQ40KvY=
github.com/go-logr/stdr v1.2.2 h1:hSWxHoqTgW2S2qGc0LTAI563KZ5YKYRhT3MFKZMbjag=
github.com/go-logr/stdr v1.2.2/go.mod h1:mMo/vtBO5dYbehREoey6XUKy/eSumjCCveDpRre4VKE=
github.com/go-openapi/analysis v0.21.2/go.mod h1:HZwRk4RRisyG8vx2Oe6aqeSQcoxRp47Xkp3+K6q+LdY=
@@ -85,14 +88,16 @@ github.com/go-openapi/analysis v0.21.4/go.mod h1:4zQ35W4neeZTqh3ol0rv/O8JBbka9Qy
github.com/go-openapi/errors v0.19.8/go.mod h1:cM//ZKUKyO06HSwqAelJ5NsEMMcpa6VpXe8DOa1Mi1M=
github.com/go-openapi/errors v0.19.9/go.mod h1:cM//ZKUKyO06HSwqAelJ5NsEMMcpa6VpXe8DOa1Mi1M=
github.com/go-openapi/errors v0.20.2/go.mod h1:cM//ZKUKyO06HSwqAelJ5NsEMMcpa6VpXe8DOa1Mi1M=
github.com/go-openapi/errors v0.20.3 h1:rz6kiC84sqNQoqrtulzaL/VERgkoCyB6WdEkc2ujzUc=
github.com/go-openapi/errors v0.20.3/go.mod h1:Z3FlZ4I8jEGxjUK+bugx3on2mIAk4txuAOhlsB1FSgk=
github.com/go-openapi/errors v0.20.4 h1:unTcVm6PispJsMECE3zWgvG4xTiKda1LIR5rCRWLG6M=
github.com/go-openapi/errors v0.20.4/go.mod h1:Z3FlZ4I8jEGxjUK+bugx3on2mIAk4txuAOhlsB1FSgk=
github.com/go-openapi/jsonpointer v0.19.3/go.mod h1:Pl9vOtqEWErmShwVjC8pYs9cog34VGT37dQOVbmoatg=
github.com/go-openapi/jsonpointer v0.19.5 h1:gZr+CIYByUqjcgeLXnQu2gHYQC9o73G2XUeOFYEICuY=
github.com/go-openapi/jsonpointer v0.19.5/go.mod h1:Pl9vOtqEWErmShwVjC8pYs9cog34VGT37dQOVbmoatg=
github.com/go-openapi/jsonpointer v0.19.6 h1:eCs3fxoIi3Wh6vtgmLTOjdhSpiqphQ+DaPn38N2ZdrE=
github.com/go-openapi/jsonpointer v0.19.6/go.mod h1:osyAmYz/mB/C3I+WsTTSgw1ONzaLJoLCyoi6/zppojs=
github.com/go-openapi/jsonreference v0.19.6/go.mod h1:diGHMEHg2IqXZGKxqyvWdfWU/aim5Dprw5bqpKkTvns=
github.com/go-openapi/jsonreference v0.20.0 h1:MYlu0sBgChmCfJxxUKZ8g1cPWFOB37YSZqewK7OKeyA=
github.com/go-openapi/jsonreference v0.20.0/go.mod h1:Ag74Ico3lPc+zR+qjn4XBUmXymS4zJbYVCZmcgkasdo=
github.com/go-openapi/jsonreference v0.20.2 h1:3sVjiK66+uXK/6oQ8xgcRKcFgQ5KXa2KvnJRumpMGbE=
github.com/go-openapi/jsonreference v0.20.2/go.mod h1:Bl1zwGIM8/wsvqjsOQLJ/SH+En5Ap4rVB5KVcIDZG2k=
github.com/go-openapi/loads v0.21.1/go.mod h1:/DtAMXXneXFjbQMGEtbamCZb+4x7eGwkvZCvBmwUG+g=
github.com/go-openapi/loads v0.21.2 h1:r2a/xFIYeZ4Qd2TnGpWDIQNcP80dIaZgf704za8enro=
github.com/go-openapi/loads v0.21.2/go.mod h1:Jq58Os6SSGz0rzh62ptiu8Z31I+OTHqmULx5e/gJbNw=
@@ -110,11 +115,12 @@ github.com/go-openapi/strfmt v0.21.7/go.mod h1:adeGTkxE44sPyLk0JV235VQAO/ZXUr8KA
github.com/go-openapi/swag v0.19.5/go.mod h1:POnQmlKehdgb5mhVOsnJFsivZCEZ/vjK9gh66Z9tfKk=
github.com/go-openapi/swag v0.19.15/go.mod h1:QYRuS/SOXUCsnplDa677K7+DxSOj6IPNl/eQntq43wQ=
github.com/go-openapi/swag v0.21.1/go.mod h1:QYRuS/SOXUCsnplDa677K7+DxSOj6IPNl/eQntq43wQ=
github.com/go-openapi/swag v0.22.3/go.mod h1:UzaqsxGiab7freDnrUUra0MwWfN/q7tE4j+VcZ0yl14=
github.com/go-openapi/swag v0.22.4 h1:QLMzNJnMGPRNDCbySlcj1x01tzU8/9LTTL9hZZZogBU=
github.com/go-openapi/swag v0.22.4/go.mod h1:UzaqsxGiab7freDnrUUra0MwWfN/q7tE4j+VcZ0yl14=
github.com/go-openapi/validate v0.22.1 h1:G+c2ub6q47kfX1sOBLwIQwzBVt8qmOAARyo/9Fqs9NU=
github.com/go-openapi/validate v0.22.1/go.mod h1:rjnrwK57VJ7A8xqfpAOEKRH8yQSGUriMu5/zuPSQ1hg=
github.com/go-rod/rod v0.113.3 h1:oLiKZW721CCMwA5g7977cWfcAKQ+FuosP47Zf1QiDrA=
github.com/go-rod/rod v0.114.4 h1:FpkNFukjCuZLwnoLs+S9aCL95o/EMec6M+41UmvQay8=
github.com/go-stack/stack v1.8.0/go.mod h1:v0f6uXyyMGvRgIKkXu+yp6POWl0qKG85gN/melR3HDY=
github.com/go-task/slim-sprig v0.0.0-20230315185526-52ccab3ef572 h1:tfuBGBXKqDEevZMzYi5KSi8KkcZtzBcTgAUUtapy0OI=
github.com/go-test/deep v1.1.0 h1:WOcxcdHcvdgThNXjw0t76K42FXTU7HpNQWHpA2HHNlg=
@@ -150,7 +156,6 @@ github.com/golang/groupcache v0.0.0-20210331224755-41bb18bfe9da h1:oI5xCqsCo564l
github.com/golang/groupcache v0.0.0-20210331224755-41bb18bfe9da/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=
github.com/golang/mock v1.1.1/go.mod h1:oTYuIxOrZwtPieC+H1uAHpcLFnEyAGVDL/k47Jfbm0A=
github.com/golang/protobuf v1.2.0/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/protobuf v1.3.1/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/protobuf v1.3.2/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/protobuf v1.4.0-rc.1/go.mod h1:ceaxUfeHdC40wWswd/P6IGgMaK3YpKi5j83Wpe3EHw8=
github.com/golang/protobuf v1.4.0-rc.1.0.20200221234624-67d41d38c208/go.mod h1:xKAWHe0F5eneWXFV3EuXVDTCmh+JuBKY0li0aMyXATA=
@@ -160,6 +165,7 @@ github.com/golang/protobuf v1.4.0/go.mod h1:jodUvKwWbYaEsadDk5Fwe5c77LiNKVO9IDvq
github.com/golang/protobuf v1.4.1/go.mod h1:U8fpvMrcmy5pZrNK1lt4xCsGvpyWQ/VVv6QDs8UjoX8=
github.com/golang/protobuf v1.4.3/go.mod h1:oDoupMAO8OvCJWAcko0GGGIgR6R6ocIYbsSw735rRwI=
github.com/golang/protobuf v1.5.0/go.mod h1:FsONVRAS9T7sI+LIUmWTfcYkHO4aIWwzhcaSAoJOfIk=
github.com/golang/protobuf v1.5.2/go.mod h1:XVQd3VNwM+JqD3oG2Ue2ip4fOMUkwXdXDdiuN0vRsmY=
github.com/golang/protobuf v1.5.3 h1:KhyjKVUg7Usr/dYsdSqoFveMYd5ko72D+zANwlG1mmg=
github.com/golang/protobuf v1.5.3/go.mod h1:XVQd3VNwM+JqD3oG2Ue2ip4fOMUkwXdXDdiuN0vRsmY=
github.com/golang/snappy v0.0.1/go.mod h1:/XxbfmMg8lxefKM7IXC3fBNl/7bRcc72aCRzEWrmP2Q=
@@ -171,9 +177,10 @@ github.com/google/go-cmp v0.5.0/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/
github.com/google/go-cmp v0.5.2/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.3/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.5/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.9 h1:O2Tfq5qg4qc4AmwVlvv0oLiVAGB7enBSJ2x2DqQFi38=
github.com/google/go-containerregistry v0.15.2 h1:MMkSh+tjSdnmJZO7ljvEqV1DjfekB6VUEAZgy3a+TQE=
github.com/google/go-containerregistry v0.15.2/go.mod h1:wWK+LnOv4jXMM23IT/F1wdYftGWGr47Is8CG+pmHK1Q=
github.com/google/go-cmp v0.5.9/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY=
github.com/google/go-cmp v0.6.0 h1:ofyhxvXcZhMsU5ulbFiLKl/XBFqE1GSq7atu8tAmTRI=
github.com/google/go-containerregistry v0.16.1 h1:rUEt426sR6nyrL3gt+18ibRcvYpKYdpsa5ZW7MA08dQ=
github.com/google/go-containerregistry v0.16.1/go.mod h1:u0qB2l7mvtWVR5kNcbFIhFY1hLbf8eeGapA+vbFDCtQ=
github.com/google/go-intervals v0.0.2 h1:FGrVEiUnTRKR8yE04qzXYaJMtnIYqobR5QbblK3ixcM=
github.com/google/go-intervals v0.0.2/go.mod h1:MkaR3LNRfeKLPmqgJYs4E66z5InYjmCjbbr4TQlcT6Y=
github.com/google/gofuzz v1.0.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg=
@@ -181,8 +188,8 @@ github.com/google/pprof v0.0.0-20230323073829-e72429f035bd h1:r8yyd+DJDmsUhGrRBx
github.com/google/uuid v1.1.1/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/google/uuid v1.1.2/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/google/uuid v1.2.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/google/uuid v1.3.0 h1:t6JiXgmwXMjEs8VusXIJk2BXHsn+wx8BZdTaoZ5fu7I=
github.com/google/uuid v1.3.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/google/uuid v1.3.1 h1:KjJaJ9iWZ3jOFZIf1Lqf4laDRCasjl0BCmnEGxkdLb4=
github.com/google/uuid v1.3.1/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/gorilla/mux v1.8.0 h1:i40aqfkR1h2SlN9hojwV5ZA91wcXFOvkdNIeFDP5koI=
github.com/gorilla/mux v1.8.0/go.mod h1:DVbg23sWSpFRCP0SfiEN6jmj59UnW/n46BH5rLB71So=
github.com/hashicorp/errwrap v1.0.0/go.mod h1:YH+1FKiLXxHSkmPseP+kNlulaMuP3n2brvKWEqk/Jc4=
@@ -194,8 +201,8 @@ github.com/hashicorp/go-hclog v0.9.2 h1:CG6TE5H9/JXsFWJCfoIVpKFIkFe6ysEuHirp4DxC
github.com/hashicorp/go-hclog v0.9.2/go.mod h1:5CU+agLiy3J7N7QjHK5d05KxGsuXiQLrjA0H7acj2lQ=
github.com/hashicorp/go-multierror v1.1.1 h1:H5DkEtf6CXdFp0N0Em5UCwQpXMWke8IA0+lD48awMYo=
github.com/hashicorp/go-multierror v1.1.1/go.mod h1:iw975J/qwKPdAO1clOe2L8331t/9/fmwbPZ6JB6eMoM=
github.com/hashicorp/go-retryablehttp v0.7.4 h1:ZQgVdpTdAL7WpMIwLzCfbalOcSUdkDZnpUv3/+BxzFA=
github.com/hashicorp/go-retryablehttp v0.7.4/go.mod h1:Jy/gPYAdjqffZ/yFGCFV2doI5wjtH1ewM9u8iYVjtX8=
github.com/hashicorp/go-retryablehttp v0.7.5 h1:bJj+Pj19UZMIweq/iie+1u5YCdGrnxCT9yvm0e+Nd5M=
github.com/hashicorp/go-retryablehttp v0.7.5/go.mod h1:Jy/gPYAdjqffZ/yFGCFV2doI5wjtH1ewM9u8iYVjtX8=
github.com/honeycombio/beeline-go v1.10.0 h1:cUDe555oqvw8oD76BQJ8alk7FP0JZ/M/zXpNvOEDLDc=
github.com/honeycombio/libhoney-go v1.16.0 h1:kPpqoz6vbOzgp7jC6SR7SkNj7rua7rgxvznI6M3KdHc=
github.com/inconshreveable/mousetrap v1.0.0/go.mod h1:PxqpIevigyE2G7u3NXJIT2ANytuPF1OarO4DADm73n8=
@@ -213,14 +220,15 @@ github.com/kisielk/errcheck v1.5.0/go.mod h1:pFxgyoBC7bSaBwPgfKdkLd5X25qrDl4LWUI
github.com/kisielk/gotool v1.0.0/go.mod h1:XhKaO+MFFWcvkIS/tQcRk01m1F5IRFswLeQ+oQHNcck=
github.com/klauspost/compress v1.4.1/go.mod h1:RyIbtBH6LamlWaDj8nUwkbUhJ87Yi3uG0guNDohfE1A=
github.com/klauspost/compress v1.13.6/go.mod h1:/3/Vjq9QcHkK5uEr5lBEmyoZ1iFhe47etQ6QUkpK6sk=
github.com/klauspost/compress v1.16.6 h1:91SKEy4K37vkp255cJ8QesJhjyRO0hn9i9G0GoUwLsk=
github.com/klauspost/compress v1.16.6/go.mod h1:ntbaceVETuRiXiv4DpjP66DpAtAGkEQskQzEyD//IeE=
github.com/klauspost/compress v1.17.3 h1:qkRjuerhUU1EmXLYGkSH6EZL+vPSxIrYjLNAK4slzwA=
github.com/klauspost/compress v1.17.3/go.mod h1:/dCuZOvVtNoHsyb+cuJD3itjs3NbnF6KH9zAO4BDxPM=
github.com/klauspost/cpuid v1.2.0/go.mod h1:Pj4uuM528wm8OyEC2QMXAi2YiTZ96dNQPGgoMS4s3ek=
github.com/klauspost/pgzip v1.2.6 h1:8RXeL5crjEUFnR2/Sn6GJNWtSQ3Dk8pq4CL3jvdDyjU=
github.com/klauspost/pgzip v1.2.6/go.mod h1:Ch1tH69qFZu15pkjo5kYi6mth2Zzwzt50oCQKQE9RUs=
github.com/konsorten/go-windows-terminal-sequences v1.0.1/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ=
github.com/konsorten/go-windows-terminal-sequences v1.0.2/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ=
github.com/kr/pretty v0.1.0/go.mod h1:dAy3ld7l9f0ibDNOQOHHMYYIIbhfbHSm3C4ZsoJORNo=
github.com/kr/pretty v0.2.1/go.mod h1:ipq/a2n7PKx3OHsz4KJII5eveXtPO4qwEXGdVfWzfnI=
github.com/kr/pretty v0.3.0 h1:WgNl7dwNpEZ6jJ9k1snq4pZsg7DOEN8hP9Xw0Tsjwk0=
github.com/kr/pty v1.1.1/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ=
github.com/kr/text v0.1.0/go.mod h1:4Jbv+DJW3UT/LiOwJeYQe1efqtUx/iVham/4vfdArNI=
@@ -235,10 +243,12 @@ github.com/mailru/easyjson v0.7.7 h1:UGYAvKxe3sBsEDzO8ZeWOSlIQfWFlxbzLZe7hwFURr0
github.com/mailru/easyjson v0.7.7/go.mod h1:xzfreul335JAWq5oZzymOObrkdz5UnU4kGfJJLY9Nlc=
github.com/markbates/oncer v0.0.0-20181203154359-bf2de49a0be2/go.mod h1:Ld9puTsIW75CHf65OeIOkyKbteujpZVXDpWK6YGZbxE=
github.com/markbates/safe v1.0.1/go.mod h1:nAqgmRi7cY2nqMc92/bSEeQA+R4OheNU2T1kNSCBdG0=
github.com/mattn/go-runewidth v0.0.14 h1:+xnbZSEeDbOIg5/mE6JF0w6n9duR1l3/WmbinWVwUuU=
github.com/mattn/go-runewidth v0.0.14/go.mod h1:Jdepj2loyihRzMpdS35Xk/zdY8IAYHsh153qUoGf23w=
github.com/mattn/go-runewidth v0.0.15 h1:UNAjwbU9l54TA3KzvqLGxwWjHmMgBUVhBiTjelZgg3U=
github.com/mattn/go-runewidth v0.0.15/go.mod h1:Jdepj2loyihRzMpdS35Xk/zdY8IAYHsh153qUoGf23w=
github.com/mattn/go-shellwords v1.0.12 h1:M2zGm7EW6UQJvDeQxo4T51eKPurbeFbe8WtebGE2xrk=
github.com/mattn/go-shellwords v1.0.12/go.mod h1:EZzvwXDESEeg03EKmM+RmDnNOPKG4lLtQsUlTZDWQ8Y=
github.com/mattn/go-sqlite3 v1.14.18 h1:JL0eqdCOq6DJVNPSvArO/bIV9/P7fbGrV00LZHc+5aI=
github.com/mattn/go-sqlite3 v1.14.18/go.mod h1:2eHXhiwb8IkHr+BDWZGa96P6+rkvnG63S2DGjv9HUNg=
github.com/matttproud/golang_protobuf_extensions v1.0.4 h1:mmDVorXM7PCGKw94cs5zkfA9PSy5pEvNWRP0ET0TIVo=
github.com/miekg/pkcs11 v1.1.1 h1:Ugu9pdy6vAYku5DEpVWVFPYnzV+bxB+iRdbuFSu7TvU=
github.com/miekg/pkcs11 v1.1.1/go.mod h1:XsNlhZGX73bx86s2hdc/FuaLm2CPZJemRLMA+WTFxgs=
@@ -248,8 +258,8 @@ github.com/mitchellh/mapstructure v1.3.3/go.mod h1:bFUtVrKA4DC2yAKiSyO/QUcy7e+RR
github.com/mitchellh/mapstructure v1.4.1/go.mod h1:bFUtVrKA4DC2yAKiSyO/QUcy7e+RRV2QTWOzhPopBRo=
github.com/mitchellh/mapstructure v1.5.0 h1:jeMsZIYE/09sWLaz43PL7Gy6RuMjD2eJVyuac5Z2hdY=
github.com/mitchellh/mapstructure v1.5.0/go.mod h1:bFUtVrKA4DC2yAKiSyO/QUcy7e+RRV2QTWOzhPopBRo=
github.com/moby/sys/mountinfo v0.6.2 h1:BzJjoreD5BMFNmD9Rus6gdd1pLuecOFPt8wC+Vygl78=
github.com/moby/sys/mountinfo v0.6.2/go.mod h1:IJb6JQeOklcdMU9F5xQ8ZALD+CUr5VlGpwtX+VE0rpI=
github.com/moby/sys/mountinfo v0.7.1 h1:/tTvQaSJRr2FshkhXiIpux6fQ2Zvc4j7tAhMTStAG2g=
github.com/moby/sys/mountinfo v0.7.1/go.mod h1:IJb6JQeOklcdMU9F5xQ8ZALD+CUr5VlGpwtX+VE0rpI=
github.com/moby/term v0.5.0 h1:xt8Q1nalod/v7BqbG21f8mQPqH+xAaC9C3N3wfWbVP0=
github.com/modern-go/concurrent v0.0.0-20180228061459-e0a39a4cb421/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd h1:TRLaZ9cD/w8PVh93nsPXa1VrQ6jlwL5oN8l14QlcNfg=
@@ -261,25 +271,25 @@ github.com/morikuni/aec v1.0.0 h1:nP9CBfwrvYnBRgY6qfDQkygYDmYwOilePFkwzv4dU8A=
github.com/niemeyer/pretty v0.0.0-20200227124842-a10e7caefd8e/go.mod h1:zD1mROLANZcx1PVRCS0qkT7pwLkGfwJo4zjcN/Tysno=
github.com/oklog/ulid v1.3.1 h1:EGfNDEx6MqHz8B3uNV6QAib1UR2Lm97sHi3ocA6ESJ4=
github.com/oklog/ulid v1.3.1/go.mod h1:CirwcVhetQ6Lv90oh/F+FBtV6XMibvdAFo93nm5qn4U=
github.com/onsi/ginkgo/v2 v2.11.0 h1:WgqUCUt/lT6yXoQ8Wef0fsNn5cAuMK7+KT9UFRz2tcU=
github.com/onsi/gomega v1.27.8 h1:gegWiwZjBsf2DgiSbf5hpokZ98JVDMcWkUiigk6/KXc=
github.com/onsi/ginkgo/v2 v2.13.1 h1:LNGfMbR2OVGBfXjvRZIZ2YCTQdGKtPLvuI1rMCCj3OU=
github.com/onsi/gomega v1.30.0 h1:hvMK7xYz4D3HapigLTeGdId/NcfQx1VHMJc60ew99+8=
github.com/opencontainers/go-digest v1.0.0 h1:apOUWs51W5PlhuyGyz9FCeeBIOUDA/6nW8Oi/yOhh5U=
github.com/opencontainers/go-digest v1.0.0/go.mod h1:0JzlMkj0TRzQZfJkVvzbP0HBR3IKzErnv2BNG4W4MAM=
github.com/opencontainers/image-spec v1.0.2/go.mod h1:BtxoFyWECRxE4U/7sNtV5W15zMzWCbyJoFRP3s7yZA0=
github.com/opencontainers/image-spec v1.1.0-rc4 h1:oOxKUJWnFC4YGHCCMNql1x4YaDfYBTS5Y4x/Cgeo1E0=
github.com/opencontainers/image-spec v1.1.0-rc4/go.mod h1:X4pATf0uXsnn3g5aiGIsVnJBR4mxhKzfwmvK/B2NTm8=
github.com/opencontainers/image-spec v1.1.0-rc5 h1:Ygwkfw9bpDvs+c9E34SdgGOj41dX/cbdlwvlWt0pnFI=
github.com/opencontainers/image-spec v1.1.0-rc5/go.mod h1:X4pATf0uXsnn3g5aiGIsVnJBR4mxhKzfwmvK/B2NTm8=
github.com/opencontainers/image-tools v1.0.0-rc3 h1:ZR837lBIxq6mmwEqfYrbLMuf75eBSHhccVHy6lsBeM4=
github.com/opencontainers/image-tools v1.0.0-rc3/go.mod h1:A9btVpZLzttF4iFaKNychhPyrhfOjJ1OF5KrA8GcLj4=
github.com/opencontainers/runc v1.1.7 h1:y2EZDS8sNng4Ksf0GUYNhKbTShZJPJg1FiXJNH/uoCk=
github.com/opencontainers/runc v1.1.7/go.mod h1:CbUumNnWCuTGFukNXahoo/RFBZvDAgRh/smNYNOhA50=
github.com/opencontainers/runtime-spec v1.1.0-rc.3 h1:l04uafi6kxByhbxev7OWiuUv0LZxEsYUfDWZ6bztAuU=
github.com/opencontainers/runtime-spec v1.1.0-rc.3/go.mod h1:jwyrGlmzljRJv/Fgzds9SsS/C5hL+LL3ko9hs6T5lQ0=
github.com/opencontainers/runc v1.1.10 h1:EaL5WeO9lv9wmS6SASjszOeQdSctvpbu0DdBQBizE40=
github.com/opencontainers/runc v1.1.10/go.mod h1:+/R6+KmDlh+hOO8NkjmgkG9Qzvypzk0yXxAPYYR65+M=
github.com/opencontainers/runtime-spec v1.1.0 h1:HHUyrt9mwHUjtasSbXSMvs4cyFxh+Bll4AjJ9odEGpg=
github.com/opencontainers/runtime-spec v1.1.0/go.mod h1:jwyrGlmzljRJv/Fgzds9SsS/C5hL+LL3ko9hs6T5lQ0=
github.com/opencontainers/selinux v1.11.0 h1:+5Zbo97w3Lbmb3PeqQtpmTkMwsW5nRI3YaLpt7tQ7oU=
github.com/opencontainers/selinux v1.11.0/go.mod h1:E5dMC3VPuVvVHDYmi78qvhJp8+M586T4DlDRYpFkyec=
github.com/opentracing/opentracing-go v1.2.0 h1:uEJPy/1a5RIPAJ0Ov+OIO8OxWu77jEv+1B0VhjKrZUs=
github.com/opentracing/opentracing-go v1.2.0/go.mod h1:GxEUsuufX4nBwe+T+Wl9TAgYrxe9dPLANfrWvHYVTgc=
github.com/ostreedev/ostree-go v0.0.0-20210805093236-719684c64e4f h1:/UDgs8FGMqwnHagNDPGOlts35QkhAZ8by3DR7nMih7M=
github.com/ostreedev/ostree-go v0.0.0-20210805093236-719684c64e4f/go.mod h1:J6OG6YJVEWopen4avK3VNQSnALmmjvniMmni/YFYAwc=
github.com/otiai10/copy v1.14.0 h1:dCI/t1iTdYGtkvCuBG2BgR6KZa83PTclw4U5n2wAllU=
github.com/pelletier/go-toml v1.7.0/go.mod h1:vwGMzjaWMwyfHwgIBhI2YUM4fB6nL6lVAvS1LBMMhTE=
github.com/pkg/errors v0.8.0/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pkg/errors v0.8.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
@@ -289,44 +299,44 @@ github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZb
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/proglottis/gpgme v0.1.3 h1:Crxx0oz4LKB3QXc5Ea0J19K/3ICfy3ftr5exgUK1AU0=
github.com/proglottis/gpgme v0.1.3/go.mod h1:fPbW/EZ0LvwQtH8Hy7eixhp1eF3G39dtx7GUN+0Gmy0=
github.com/prometheus/client_golang v1.15.1 h1:8tXpTmJbyH5lydzFPoxSIJ0J46jdh3tylbvM1xCv0LI=
github.com/prometheus/client_golang v1.17.0 h1:rl2sfwZMtSthVU752MqfjQozy7blglC+1SOtjMAMh+Q=
github.com/prometheus/client_model v0.0.0-20190812154241-14fe0d1b01d4/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA=
github.com/prometheus/client_model v0.4.0 h1:5lQXD3cAg1OXBf4Wq03gTrXHeaV0TQvGfUooCfx1yqY=
github.com/prometheus/common v0.42.0 h1:EKsfXEYo4JpWMHH5cg+KOUWeuJSov1Id8zGR8eeI1YM=
github.com/prometheus/procfs v0.9.0 h1:wzCHvIvM5SxWqYvwgVL7yJY8Lz3PKn49KQtpgMYJfhI=
github.com/prometheus/client_model v0.5.0 h1:VQw1hfvPvk3Uv6Qf29VrPF32JB6rtbgI6cYPYQjL0Qw=
github.com/prometheus/common v0.44.0 h1:+5BrQJwiBB9xsMygAB3TNvpQKOwlkc25LbISbrdOOfY=
github.com/prometheus/procfs v0.11.1 h1:xRC8Iq1yyca5ypa9n1EZnWZkt7dwcoRPQwX/5gwaUuI=
github.com/rivo/uniseg v0.2.0/go.mod h1:J6wj4VEh+S6ZtnVlnTBMWIodfgj8LQOQFoIToxlJtxc=
github.com/rivo/uniseg v0.4.4 h1:8TfxU8dW6PdqD27gjM8MVNuicgxIjxpm4K7x4jp8sis=
github.com/rivo/uniseg v0.4.4/go.mod h1:FN3SvrM+Zdj16jyLfmOkMNblXMcoc8DfTHruCPUcx88=
github.com/rogpeppe/go-internal v1.1.0/go.mod h1:M8bDsm7K2OlrFYOpmOWEs/qY81heoFRclV5y23lUDJ4=
github.com/rogpeppe/go-internal v1.2.2/go.mod h1:M8bDsm7K2OlrFYOpmOWEs/qY81heoFRclV5y23lUDJ4=
github.com/rogpeppe/go-internal v1.3.0/go.mod h1:M8bDsm7K2OlrFYOpmOWEs/qY81heoFRclV5y23lUDJ4=
github.com/rogpeppe/go-internal v1.10.0 h1:TMyTOH3F/DB16zRVcYyreMH6GnZZrwQVAoYjRBZyWFQ=
github.com/rogpeppe/go-internal v1.11.0 h1:cWPaGQEPrBb5/AsnsZesgZZ9yb1OQ+GOISoDNXVBh4M=
github.com/russross/blackfriday v2.0.0+incompatible h1:cBXrhZNUf9C+La9/YpS+UHpUT8YD6Td9ZMSU9APFcsk=
github.com/russross/blackfriday v2.0.0+incompatible/go.mod h1:JO/DiYxRf+HjHt06OyowR9PTA263kcR/rfWxYHBV53g=
github.com/russross/blackfriday/v2 v2.1.0/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM=
github.com/sebdah/goldie/v2 v2.5.3 h1:9ES/mNN+HNUbNWpVAlrzuZ7jE+Nrczbj8uFRjM7624Y=
github.com/secure-systems-lab/go-securesystemslib v0.7.0 h1:OwvJ5jQf9LnIAS83waAjPbcMsODrTQUpJ02eNLUoxBg=
github.com/secure-systems-lab/go-securesystemslib v0.7.0/go.mod h1:/2gYnlnHVQ6xeGtfIqFy7Do03K4cdCY0A/GlJLDKLHI=
github.com/segmentio/ksuid v1.0.4 h1:sBo2BdShXjmcugAMwjugoGUdUV0pcxY5mW4xKRn3v4c=
github.com/segmentio/ksuid v1.0.4/go.mod h1:/XUiZBD3kVx5SmUOl55voK5yeAbBNNIed+2O73XgrPE=
github.com/sergi/go-diff v1.2.0 h1:XU+rvMAioB0UC3q1MFrIQy4Vo5/4VsRDQQXHsEya6xQ=
github.com/shurcooL/sanitized_anchor_name v1.0.0 h1:PdmoCO6wvbs+7yrJyMORt4/BmY5IYyJwS/kOiWx8mHo=
github.com/sigstore/fulcio v1.3.1 h1:0ntW9VbQbt2JytoSs8BOGB84A65eeyvGSavWteYp29Y=
github.com/sigstore/fulcio v1.3.1/go.mod h1:/XfqazOec45ulJZpyL9sq+OsVQ8g2UOVoNVi7abFgqU=
github.com/sigstore/rekor v1.2.2-0.20230601122533-4c81ff246d12 h1:x/WnxasgR40qGY67IHwioakXLuhDxJ10vF8/INuOTiI=
github.com/sigstore/rekor v1.2.2-0.20230601122533-4c81ff246d12/go.mod h1:8c+a8Yo7r8gKuYbIaz+c3oOdw9iMXx+tMdOg2+b+2jQ=
github.com/sigstore/sigstore v1.7.1 h1:fCATemikcBK0cG4+NcM940MfoIgmioY1vC6E66hXxks=
github.com/sigstore/sigstore v1.7.1/go.mod h1:0PmMzfJP2Y9+lugD0wer4e7TihR5tM7NcIs3bQNk5xg=
github.com/sigstore/fulcio v1.4.3 h1:9JcUCZjjVhRF9fmhVuz6i1RyhCc/EGCD7MOl+iqCJLQ=
github.com/sigstore/fulcio v1.4.3/go.mod h1:BQPWo7cfxmJwgaHlphUHUpFkp5+YxeJes82oo39m5og=
github.com/sigstore/rekor v1.2.2 h1:5JK/zKZvcQpL/jBmHvmFj3YbpDMBQnJQ6ygp8xdF3bY=
github.com/sigstore/rekor v1.2.2/go.mod h1:FGnWBGWzeNceJnp0x9eDFd41mI8aQqCjj+Zp0IEs0Qg=
github.com/sigstore/sigstore v1.7.5 h1:ij55dBhLwjICmLTBJZm7SqoQLdsu/oowDanACcJNs48=
github.com/sigstore/sigstore v1.7.5/go.mod h1:9OCmYWhzuq/G4e1cy9m297tuMRJ1LExyrXY3ZC3Zt/s=
github.com/sirupsen/logrus v1.4.0/go.mod h1:LxeOpSwHxABJmUn/MG1IvRgCAasNZTLOkJPxbbu5VWo=
github.com/sirupsen/logrus v1.4.1/go.mod h1:ni0Sbl8bgC9z8RoU9G6nDWqqs/fq4eDPysMBDgk/93Q=
github.com/sirupsen/logrus v1.4.2/go.mod h1:tLMulIdttU9McNUspp0xgXVQah82FyeX6MwdIuYE2rE=
github.com/sirupsen/logrus v1.7.0/go.mod h1:yWOB1SBYBC5VeMP7gHvWumXLIWorT60ONWic61uBYv0=
github.com/sirupsen/logrus v1.9.0/go.mod h1:naHLuLoDiP4jHNo9R0sCBMtWGeIprob74mVsIT4qYEQ=
github.com/sirupsen/logrus v1.9.3 h1:dueUQJ1C2q9oE3F7wvmSGAaVtTmUizReu6fjN8uqzbQ=
github.com/sirupsen/logrus v1.9.3/go.mod h1:naHLuLoDiP4jHNo9R0sCBMtWGeIprob74mVsIT4qYEQ=
github.com/skratchdot/open-golang v0.0.0-20200116055534-eef842397966 h1:JIAuq3EEf9cgbU6AtGPK4CTG3Zf6CKMNqf0MHTggAUA=
github.com/skratchdot/open-golang v0.0.0-20200116055534-eef842397966/go.mod h1:sUM3LWHvSMaG192sy56D9F7CNvL7jUJVXoqM1QKLnog=
github.com/spf13/cobra v0.0.3/go.mod h1:1l0Ry5zgKvJasoi3XT1TypsSe7PqH0Sj9dhYf7v3XqQ=
github.com/spf13/cobra v1.7.0 h1:hyqWnYt1ZQShIddO5kBpj3vu05/++x6tJ6dg8EC572I=
github.com/spf13/cobra v1.7.0/go.mod h1:uLxZILRyS/50WlhOIKD7W6V5bgeIt+4sICxh6uRMrb0=
github.com/spf13/cobra v1.8.0 h1:7aJaZx1B85qltLMc546zn58BxxfZdR/W22ej9CFoEf0=
github.com/spf13/cobra v1.8.0/go.mod h1:WXLWApfZ71AjXPya3WOlMsY9yMs7YeiHhFVlvLyhcho=
github.com/spf13/pflag v1.0.3/go.mod h1:DYY7MBk1bdzusC3SYhjObp+wFpr4gzcvqqNjLnInEg4=
github.com/spf13/pflag v1.0.5 h1:iy+VFUOCP1a+8yFto/drg2CJ5u0yRoB7fZw3DKv/JXA=
github.com/spf13/pflag v1.0.5/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An2Bg=
@@ -345,14 +355,12 @@ github.com/stretchr/testify v1.8.0/go.mod h1:yNjHg4UonilssWZ8iaSj1OCr/vHnekPRkoO
github.com/stretchr/testify v1.8.1/go.mod h1:w2LPCIKwWwSfY2zedu0+kehJoqGctiVI29o6fzry7u4=
github.com/stretchr/testify v1.8.4 h1:CcVxjf3Q8PM0mHUKJCdn+eZZtm5yQwehR5yeSVQQcUk=
github.com/stretchr/testify v1.8.4/go.mod h1:sz/lmYIOXD/1dqDmKjjqLyZ2RngseejIcXlSw2iwfAo=
github.com/sylabs/sif/v2 v2.11.5 h1:7ssPH3epSonsTrzbS1YxeJ9KuqAN7ISlSM61a7j/mQM=
github.com/sylabs/sif/v2 v2.11.5/go.mod h1:GBoZs9LU3e4yJH1dcZ3Akf/jsqYgy5SeguJQC+zd75Y=
github.com/sylabs/sif/v2 v2.15.0 h1:Nv0tzksFnoQiQ2eUwpAis9nVqEu4c3RcNSxX8P3Cecw=
github.com/sylabs/sif/v2 v2.15.0/go.mod h1:X1H7eaPz6BAxA84POMESXoXfTqgAnLQkujyF/CQFWTc=
github.com/syndtr/gocapability v0.0.0-20200815063812-42c35b437635 h1:kdXcSzyDtseVEc4yCz2qF8ZrQvIDBJLl4S1c3GCXmoI=
github.com/syndtr/gocapability v0.0.0-20200815063812-42c35b437635/go.mod h1:hkRG7XYTFWNJGYcbNJQlaLq0fg1yr4J4t/NcTQtrfww=
github.com/tchap/go-patricia/v2 v2.3.1 h1:6rQp39lgIYZ+MHmdEq4xzuk1t7OdC35z/xm0BGhTkes=
github.com/tchap/go-patricia/v2 v2.3.1/go.mod h1:VZRHKAb53DLaG+nA9EaYYiaEx6YztwDlLElMsnSHD4k=
github.com/theupdateframework/go-tuf v0.5.2 h1:habfDzTmpbzBLIFGWa2ZpVhYvFBoK0C1onC3a4zuPRA=
github.com/theupdateframework/go-tuf v0.5.2/go.mod h1:SyMV5kg5n4uEclsyxXJZI2UxPFJNDc4Y+r7wv+MlvTA=
github.com/tidwall/pretty v1.0.0/go.mod h1:XNkn88O1ChpSDQmQeStsy+sBenx6DDtFZJxhVysOjyk=
github.com/tidwall/pretty v1.2.0 h1:RWIZEg2iJ8/g6fDDYzMpobmaoGh5OLl4AXtGUGPcqCs=
github.com/titanous/rocacheck v0.0.0-20171023193734-afe73141d399 h1:e/5i7d4oYZ+C1wj2THlRK+oAhjeS/TRQwMfkIuet3w0=
@@ -360,11 +368,10 @@ github.com/titanous/rocacheck v0.0.0-20171023193734-afe73141d399/go.mod h1:LdwHT
github.com/ulikunitz/xz v0.5.8/go.mod h1:nbz6k7qbPmH4IRqmfOplQw/tblSgqTqBwxkY0oWt/14=
github.com/ulikunitz/xz v0.5.11 h1:kpFauv27b6ynzBNT/Xy+1k+fK4WswhN/6PN5WhFAGw8=
github.com/ulikunitz/xz v0.5.11/go.mod h1:nbz6k7qbPmH4IRqmfOplQw/tblSgqTqBwxkY0oWt/14=
github.com/urfave/cli v1.22.12/go.mod h1:sSBEIC79qR6OvcmsD4U3KABeOTxDqQtdDnaFuUN30b8=
github.com/vbatts/tar-split v0.11.3 h1:hLFqsOLQ1SsppQNTMpkpPXClLDfC2A3Zgy9OUU+RVck=
github.com/vbatts/tar-split v0.11.3/go.mod h1:9QlHN18E+fEH7RdG+QAJJcuya3rqT7eXSTY7wGrAokY=
github.com/vbauerster/mpb/v8 v8.4.0 h1:Jq2iNA7T6SydpMVOwaT+2OBWlXS9Th8KEvBqeu5eeTo=
github.com/vbauerster/mpb/v8 v8.4.0/go.mod h1:vjp3hSTuCtR+x98/+2vW3eZ8XzxvGoP8CPseHMhiPyc=
github.com/vbatts/tar-split v0.11.5 h1:3bHCTIheBm1qFTcgh9oPu+nNBtX+XJIupG/vacinCts=
github.com/vbatts/tar-split v0.11.5/go.mod h1:yZbwRsSeGjusneWgA781EKej9HF8vme8okylkAeNKLk=
github.com/vbauerster/mpb/v8 v8.6.2 h1:9EhnJGQRtvgDVCychJgR96EDCOqgg2NsMuk5JUcX4DA=
github.com/vbauerster/mpb/v8 v8.6.2/go.mod h1:oVJ7T+dib99kZ/VBjoBaC8aPXiSAihnzuKmotuihyFo=
github.com/vmihailenco/msgpack/v5 v5.3.5 h1:5gO0H1iULLWGhs2H5tbAHIZTV8/cYafcFOr9znI5mJU=
github.com/vmihailenco/tagparser/v2 v2.0.0 h1:y09buUbR+b5aycVFQs/g70pqKVZNBmxwAhO7/IwNM9g=
github.com/xdg-go/pbkdf2 v1.0.0/go.mod h1:jrpuAogTd400dnrH08LKmI/xc1MbPOebTwRqcT5RDeI=
@@ -387,51 +394,51 @@ github.com/ysmood/gson v0.7.3 h1:QFkWbTH8MxyUTKPkVWAENJhxqdBa4lYTQWqZCiLG6kE=
github.com/ysmood/leakless v0.8.0 h1:BzLrVoiwxikpgEQR0Lk8NyBN5Cit2b1z+u0mgL4ZJak=
github.com/yuin/goldmark v1.1.27/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
github.com/yuin/goldmark v1.2.1/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
go.etcd.io/bbolt v1.3.7 h1:j+zJOnnEjF/kyHlDDgGnVL/AIqIJPq8UoB2GSNfkUfQ=
go.etcd.io/bbolt v1.3.7/go.mod h1:N9Mkw9X8x5fupy0IKsmuqVtoGDyxsaDlbk4Rd05IAQw=
github.com/yuin/goldmark v1.4.13/go.mod h1:6yULJ656Px+3vBD8DxQVa3kxgyrAnzto9xy5taEt/CY=
go.mongodb.org/mongo-driver v1.7.3/go.mod h1:NqaYOwnXWr5Pm7AOpO5QFxKJ503nbMse/R79oO62zWg=
go.mongodb.org/mongo-driver v1.7.5/go.mod h1:VXEWRZ6URJIkUq2SCAyapmhH0ZLRBP+FT4xhp5Zvxng=
go.mongodb.org/mongo-driver v1.10.0/go.mod h1:wsihk0Kdgv8Kqu1Anit4sfK+22vSFbUrAVEYRhCXrA8=
go.mongodb.org/mongo-driver v1.11.3 h1:Ql6K6qYHEzB6xvu4+AU0BoRoqf9vFPcc4o7MUIdPW8Y=
go.mongodb.org/mongo-driver v1.11.3/go.mod h1:PTSz5yu21bkT/wXpkS7WR5f0ddqw5quethTUn9WM+2g=
go.mozilla.org/pkcs7 v0.0.0-20200128120323-432b2356ecb1/go.mod h1:SNgMg+EgDFwmvSmLRTNKC5fegJjB7v23qTQ0XLGUNHk=
go.mozilla.org/pkcs7 v0.0.0-20210826202110-33d05740a352 h1:CCriYyAfq1Br1aIYettdHZTy8mBTIPo7We18TuO/bak=
go.mozilla.org/pkcs7 v0.0.0-20210826202110-33d05740a352/go.mod h1:SNgMg+EgDFwmvSmLRTNKC5fegJjB7v23qTQ0XLGUNHk=
go.opencensus.io v0.24.0 h1:y73uSU6J157QMP2kn2r30vwW1A2W2WFwSCGnAVxeaD0=
go.opencensus.io v0.24.0/go.mod h1:vNK8G9p7aAivkbmorf4v+7Hgx+Zs0yY+0fOtgBfjQKo=
go.opentelemetry.io/otel v1.15.0 h1:NIl24d4eiLJPM0vKn4HjLYM+UZf6gSfi9Z+NmCxkWbk=
go.opentelemetry.io/otel v1.15.0/go.mod h1:qfwLEbWhLPk5gyWrne4XnF0lC8wtywbuJbgfAE3zbek=
go.opentelemetry.io/otel/sdk v1.15.0 h1:jZTCkRRd08nxD6w7rIaZeDNGZGGQstH3SfLQ3ZsKICk=
go.opentelemetry.io/otel/trace v1.15.0 h1:5Fwje4O2ooOxkfyqI/kJwxWotggDLix4BSAvpE1wlpo=
go.opentelemetry.io/otel/trace v1.15.0/go.mod h1:CUsmE2Ht1CRkvE8OsMESvraoZrrcgD1J2W8GV1ev0Y4=
go.opentelemetry.io/otel v1.19.0 h1:MuS/TNf4/j4IXsZuJegVzI1cwut7Qc00344rgH7p8bs=
go.opentelemetry.io/otel v1.19.0/go.mod h1:i0QyjOq3UPoTzff0PJB2N66fb4S0+rSbSB15/oyH9fY=
go.opentelemetry.io/otel/metric v1.19.0 h1:aTzpGtV0ar9wlV4Sna9sdJyII5jTVJEvKETPiOKwvpE=
go.opentelemetry.io/otel/metric v1.19.0/go.mod h1:L5rUsV9kM1IxCj1MmSdS+JQAcVm319EUrDVLrt7jqt8=
go.opentelemetry.io/otel/sdk v1.19.0 h1:6USY6zH+L8uMH8L3t1enZPR3WFEmSTADlqldyHtJi3o=
go.opentelemetry.io/otel/trace v1.19.0 h1:DFVQmlVbfVeOuBRrwdtaehRrWiL1JoVs9CPIQ1Dzxpg=
go.opentelemetry.io/otel/trace v1.19.0/go.mod h1:mfaSyvGyEJEI0nyV2I4qhNQnbBOUUmYZpYojqMnX2vo=
go.uber.org/goleak v1.2.1 h1:NBol2c7O1ZokfZ0LEU9K6Whx/KnwvepVetCUhtKja4A=
golang.org/x/crypto v0.0.0-20180904163835-0709b304e793/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
golang.org/x/crypto v0.0.0-20190422162423-af44ce270edf/go.mod h1:WFFai1msRO1wXaEeE5yQxYXgSfI8pQAWXbQop6sCtWE=
golang.org/x/crypto v0.0.0-20190911031432-227b76d455e7/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20191011191535-87dc89f01550/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20200302210943-78000ba7a073/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
golang.org/x/crypto v0.0.0-20210322153248-0c34fe9e7dc2/go.mod h1:T9bdIzuCu7OtxOm1hfPfRQxPLYneinmdGuTeoZ9dtd4=
golang.org/x/crypto v0.0.0-20210921155107-089bfa567519/go.mod h1:GvvjBRRGRdwPK5ydBHafDWAxML/pGHZbMvKqRZ5+Abc=
golang.org/x/crypto v0.0.0-20220622213112-05595931fe9d/go.mod h1:IxCIyHEi3zRg3s0A5j5BB6A9Jmi73HwBIUl50j+osU4=
golang.org/x/crypto v0.10.0 h1:LKqV2xt9+kDzSTfOhx4FrkEBcMrAgHSYgzywV9zcGmM=
golang.org/x/crypto v0.10.0/go.mod h1:o4eNf7Ede1fv+hwOwZsTHl9EsPFO6q6ZvYR8vYfY45I=
golang.org/x/crypto v0.19.0 h1:ENy+Az/9Y1vSrlrvBSyna3PITt4tiZLf7sgCjZBX7Wo=
golang.org/x/crypto v0.19.0/go.mod h1:Iy9bg/ha4yyC70EfRS8jz+B6ybOBKMaSxLj6P6oBDfU=
golang.org/x/exp v0.0.0-20190121172915-509febef88a4/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
golang.org/x/exp v0.0.0-20230522175609-2e198f4a06a1 h1:k/i9J1pBpvlfR+9QsetwPyERsqu1GIbi967PQMq3Ivc=
golang.org/x/exp v0.0.0-20230522175609-2e198f4a06a1/go.mod h1:V1LtkGg67GoY2N1AnLN78QLrzxkLyJw7RJb1gzOOz9w=
golang.org/x/exp v0.0.0-20231226003508-02704c960a9b h1:kLiC65FbiHWFAOu+lxwNPujcsl8VYyTYYEZnsOO1WK4=
golang.org/x/exp v0.0.0-20231226003508-02704c960a9b/go.mod h1:iRJReGqOEeBhDZGkGbynYwcHlctCvnjTYIamk7uXpHI=
golang.org/x/lint v0.0.0-20181026193005-c67002cb31c3/go.mod h1:UVdnD1Gm6xHRNCYTkRU2/jEulfH38KcIWyp/GAMgvoE=
golang.org/x/lint v0.0.0-20190227174305-5b3e6a55c961/go.mod h1:wehouNa3lNwaWXcvxsM5YxQ5yQlVC4a0KAMCusXpPoU=
golang.org/x/lint v0.0.0-20190313153728-d0100b6bd8b3/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc=
golang.org/x/mod v0.2.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/mod v0.3.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/mod v0.10.0 h1:lFO9qtOdlre5W1jxS3r/4szv2/6iXxScdzjoBMXNhYk=
golang.org/x/mod v0.10.0/go.mod h1:iBbtSCu2XBx23ZKBPSOrRkjjQPZFPuis4dIYUhu/chs=
golang.org/x/mod v0.6.0-dev.0.20220419223038-86c51ed26bb4/go.mod h1:jJ57K6gSWd91VN4djpZkiMVwK6gcyfeH4XE8wZrZaV4=
golang.org/x/mod v0.8.0/go.mod h1:iBbtSCu2XBx23ZKBPSOrRkjjQPZFPuis4dIYUhu/chs=
golang.org/x/mod v0.14.0 h1:dGoOF9QVLYng8IHTm7BAyWqCqSheQ5pYWGhzW00YJr0=
golang.org/x/mod v0.14.0/go.mod h1:hTbmBsO62+eylJbnUtE2MGJUyE7QWk4xUqPFrRgJ+7c=
golang.org/x/net v0.0.0-20180724234803-3673e40ba225/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20180826012351-8a410e7b638d/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20190213061140-3a22650c66bd/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20190311183353-d8887717615a/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190603091049-60506f45cf65/go.mod h1:HSz+uSET+XFnRR8LxR5pz3Of3rY3CfYBVs4xY44aLks=
golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20200226121028-0de0cce0169b/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20201021035429-f5854403a974/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
@@ -439,11 +446,14 @@ golang.org/x/net v0.0.0-20201110031124-69a78807bb2b/go.mod h1:sp8m0HH+o8qH0wwXwY
golang.org/x/net v0.0.0-20210226172049-e18ecbb05110/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg=
golang.org/x/net v0.0.0-20210421230115-4e50805a0758/go.mod h1:72T/g9IO56b78aLF+1Kcs5dz7/ng1VjMUvfKvpfy+jM=
golang.org/x/net v0.0.0-20211112202133-69e39bad7dc2/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
golang.org/x/net v0.11.0 h1:Gi2tvZIJyBtO9SDr1q9h5hEQCp/4L2RQ+ar0qjx2oNU=
golang.org/x/net v0.11.0/go.mod h1:2L/ixqYpgIVXmeoSA/4Lu7BzTG4KIyPIryS4IsOd1oQ=
golang.org/x/net v0.0.0-20220722155237-a158d28d115b/go.mod h1:XRhObCWvk6IyKnWLug+ECip1KBveYUHfp+8e9klMJ9c=
golang.org/x/net v0.6.0/go.mod h1:2Tu9+aMcznHK/AK1HMvgo6xiTLG5rD5rZLDS+rp2Bjs=
golang.org/x/net v0.10.0/go.mod h1:0qNGK6F8kojg2nk9dLZ2mShWaEBan6FAoqfSigmmuDg=
golang.org/x/net v0.19.0 h1:zTwKpTd2XuCqf8huc7Fo2iSy+4RHPd10s4KzeTnVr1c=
golang.org/x/net v0.19.0/go.mod h1:CfAk/cbD4CthTvqiEl8NpboMuiuOYsAr/7NOjZJtv1U=
golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
golang.org/x/oauth2 v0.9.0 h1:BPpt2kU7oMRq3kCHAA1tbSEshXRw1LpG2ztgDwrzuAs=
golang.org/x/oauth2 v0.9.0/go.mod h1:qYgFZaFiu6Wg24azG8bdV52QJXJGbZzIIsRCdVKzbLw=
golang.org/x/oauth2 v0.14.0 h1:P0Vrf/2538nmC0H+pEQ3MNFRRnVR7RlqyVw+bvm26z0=
golang.org/x/oauth2 v0.14.0/go.mod h1:lAtNWgaWfL4cm7j2OV8TxGi9Qb7ECORx8DktCY74OwM=
golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20181108010431-42b317875d0f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20190227155943-e225da77a7e6/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
@@ -452,8 +462,10 @@ golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJ
golang.org/x/sync v0.0.0-20190911185100-cd5d95a43a6e/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20201020160332-67f06af15bc9/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20210220032951-036812b2e83c/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.3.0 h1:ftCYgMx6zT/asHUrPw8BLLscYtGznsLAnjq5RH9P66E=
golang.org/x/sync v0.3.0/go.mod h1:FU7BRWz2tNW+3quACPkgCx/L+uEAv1htQ0V83Z9Rj+Y=
golang.org/x/sync v0.0.0-20220722155255-886fb9371eb4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.1.0/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.5.0 h1:60k92dhOjHxJkrqnwsfl8KuaHbn/5dl0lUPUklKo3qE=
golang.org/x/sync v0.5.0/go.mod h1:Czt+wKu1gCyEFDUtn0jG5QVvpJ6rzVqr5aXyt9drQfk=
golang.org/x/sys v0.0.0-20180830151530-49385e6e1522/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20180905080454-ebe1bf3edb33/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
@@ -462,7 +474,6 @@ golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7w
golang.org/x/sys v0.0.0-20190419153524-e8e3143a4f4a/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190422165155-953cdadca894/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190531175056-4c3a928424d2/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191026070338-33540a1f6037/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200930185726-fdedc70b468f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20201119102817-f84b799fce68/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210420072515-93ed5bcd2bfe/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
@@ -470,20 +481,27 @@ golang.org/x/sys v0.0.0-20210423082822-04245dca01da/go.mod h1:h1NjWce9XRLGQEsW7w
golang.org/x/sys v0.0.0-20210615035016-665e8c7367d1/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220520151302-bc2c85ada10a/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220715151400-c0bba94af5f8/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220906165534-d0df966e6959/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.9.0 h1:KS/R3tvhPqvJvwcKfnBHJwwthS11LRhmM5D59eEXa0s=
golang.org/x/sys v0.9.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220722155257-8c9f86f7a55f/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.5.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.8.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.17.0 h1:25cE3gD+tdBA7lp7QfhuV+rJiE9YXTcS3VG1SqssI/Y=
golang.org/x/sys v0.17.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
golang.org/x/term v0.9.0 h1:GRRCnKYhdQrD8kfRAdQ6Zcw1P0OcELxGLKJvtjVMZ28=
golang.org/x/term v0.9.0/go.mod h1:M6DEAAIenWoTxdKrOltXcmDY3rSplQUkrvaDU5FcQyo=
golang.org/x/term v0.0.0-20210927222741-03fcf44c2211/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8=
golang.org/x/term v0.5.0/go.mod h1:jMB1sMXY+tzblOD4FWmEbocvup2/aLOaQEp7JmGp78k=
golang.org/x/term v0.8.0/go.mod h1:xPskH00ivmX89bAKVGSKKtLOWNx2+17Eiy94tnKShWo=
golang.org/x/term v0.17.0 h1:mkTF7LCd6WGJNL3K1Ad7kwxNfYAW6a8a8QqtMblp/4U=
golang.org/x/term v0.17.0/go.mod h1:lLRBjIVuehSbZlaOtGMbcMncT+aqLLLmKrsjNrUguwk=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.2/go.mod h1:bEr9sfX3Q8Zfm5fL9x+3itogRgK3+ptLWKqgva+5dAk=
golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.5/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.6/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.7/go.mod h1:u+2+/6zg+i71rQMx5EYifcz6MCKuco9NR6JIITiCfzQ=
golang.org/x/text v0.10.0 h1:UpjohKhiEgNc0CSauXmwYftY1+LlaC75SJwh0SgCX58=
golang.org/x/text v0.10.0/go.mod h1:TvPlkZtksWOMsz7fbANvkp4WM8x/WCo/om8BMLbz+aE=
golang.org/x/text v0.3.8/go.mod h1:E6s5w1FMmriuDzIBO73fBruAKo1PCIq6d2Q6DHfQ8WQ=
golang.org/x/text v0.7.0/go.mod h1:mrYo+phRRbMaCq/xk9113O4dZlRixOauAjOtrjsXDZ8=
golang.org/x/text v0.9.0/go.mod h1:e1OnstbJyHTd6l/uOt8jFFHp6TRDWZR/bV3emEE/zU8=
golang.org/x/text v0.14.0 h1:ScX5w1eTa3QqT8oi6+ziP7dTV1S2+ALU0bI+0zXKWiQ=
golang.org/x/text v0.14.0/go.mod h1:18ZOQIKpY8NJVqYksKHtTdi31H5itFRjB5/qKTNYzSU=
golang.org/x/time v0.3.0 h1:rg5rLMjNzMS1RkNLzCG38eapWhnYLFYXDXj2gOlr8j4=
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20190114222345-bf090417da8b/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
@@ -497,28 +515,30 @@ golang.org/x/tools v0.0.0-20190531172133-b3315ee88b7d/go.mod h1:/rFqwRUd4F7ZHNgw
golang.org/x/tools v0.0.0-20191119224855-298f0cb1881e/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20200619180055-7c47624df98f/go.mod h1:EkVYQZoAsY45+roYkvgYkIh4xh/qjgUK9TdY2XT94GE=
golang.org/x/tools v0.0.0-20210106214847-113979e3529a/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
golang.org/x/tools v0.9.3 h1:Gn1I8+64MsuTb/HpH+LmQtNas23LhUVr3rYZ0eKuaMM=
golang.org/x/tools v0.9.3/go.mod h1:owI94Op576fPu3cIGQeHs3joujW/2Oc6MtlxbF5dfNc=
golang.org/x/tools v0.1.12/go.mod h1:hNGJHUnrk76NpqgfD5Aqm5Crs+Hm0VOH/i9J2+nxYbc=
golang.org/x/tools v0.6.0/go.mod h1:Xwgl3UAJ/d3gWutnCtw505GrjyAbvKui8lOU390QaIU=
golang.org/x/tools v0.16.0 h1:GO788SKMRunPIBCXiQyo2AaexLstOrVhuAL5YwsckQM=
golang.org/x/tools v0.16.0/go.mod h1:kYVVN6I1mBNoB1OX+noeBjbRk4IUEPa7JJ+TJMEooJ0=
golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
google.golang.org/appengine v1.1.0/go.mod h1:EbEs0AVv82hx2wNQdGPgUI5lhzA/G0D9YwlJXL52JkM=
google.golang.org/appengine v1.4.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4=
google.golang.org/appengine v1.6.7 h1:FZR1q0exgwxzPzp/aF+VccGrSfxfPpkBqjIIEq3ru6c=
google.golang.org/appengine v1.6.7/go.mod h1:8WjMMxjGQR8xUklV/ARdw2HLXBOI7O7uCIDZVag1xfc=
google.golang.org/appengine v1.6.8 h1:IhEN5q69dyKagZPYMSdIjS2HqprW324FRQZJcGqPAsM=
google.golang.org/appengine v1.6.8/go.mod h1:1jJ3jBArFh5pcgW8gCtRJnepW8FzD1V44FJffLiz/Ds=
google.golang.org/genproto v0.0.0-20180817151627-c66870c02cf8/go.mod h1:JiN7NxoALGmiZfu7CAH4rXhgtRTLTxftemlI0sWmxmc=
google.golang.org/genproto v0.0.0-20190819201941-24fa4b261c55/go.mod h1:DMBHOl98Agz4BDEuKkezgsaosCRResVns1a3J2ZsMNc=
google.golang.org/genproto v0.0.0-20200526211855-cb27e3aa2013/go.mod h1:NbSheEEYHJ7i3ixzK3sjbqSGDJWnxyFXZblF3eUsNvo=
google.golang.org/genproto v0.0.0-20230410155749-daa745c078e1 h1:KpwkzHKEF7B9Zxg18WzOa7djJ+Ha5DzthMyZYQfEn2A=
google.golang.org/genproto v0.0.0-20230410155749-daa745c078e1/go.mod h1:nKE/iIaLqn2bQwXBg8f1g2Ylh6r5MN5CmZvuzZCgsCU=
google.golang.org/genproto/googleapis/rpc v0.0.0-20230920204549-e6e6cdab5c13 h1:N3bU/SQDCDyD6R528GJ/PwW9KjYcJA3dgyH+MovAkIM=
google.golang.org/genproto/googleapis/rpc v0.0.0-20230920204549-e6e6cdab5c13/go.mod h1:KSqppvjFjtoCI+KGd4PELB0qLNxdJHRGqRI09mB6pQA=
google.golang.org/grpc v1.19.0/go.mod h1:mqu4LbDTu4XGKhr4mRzUsmM4RtVoemTSY81AxZiDr8c=
google.golang.org/grpc v1.23.0/go.mod h1:Y5yQAOtifL1yxbo5wqy6BxZv8vAUGQwXBOALyacEbxg=
google.golang.org/grpc v1.25.1/go.mod h1:c3i+UQWmh7LiEpx4sFZnkU36qjEYZ0imhYfXVyQciAY=
google.golang.org/grpc v1.27.0/go.mod h1:qbnxyOmOxrQa7FizSgH+ReBfzJrCY1pSN7KXBS8abTk=
google.golang.org/grpc v1.33.2/go.mod h1:JMHMWHQWaTccqQQlmk3MJZS+GWXOdAesneDmEnv2fbc=
google.golang.org/grpc v1.55.0 h1:3Oj82/tFSCeUrRTg/5E/7d/W5A1tj6Ky1ABAuZuv5ag=
google.golang.org/grpc v1.55.0/go.mod h1:iYEXKGkEBhg1PjZQvoYEVPTDkHo1/bjTnfwTeGONTY8=
google.golang.org/grpc v1.58.3 h1:BjnpXut1btbtgN/6sp+brB2Kbm2LjNXnidYujAVbSoQ=
google.golang.org/grpc v1.58.3/go.mod h1:tgX3ZQDlNJGU96V6yHh1T/JeoBQ2TXdr43YbYSsCJk0=
google.golang.org/protobuf v0.0.0-20200109180630-ec00e32a8dfd/go.mod h1:DFci5gLYBciE7Vtevhsrf46CRTquxDuWsQurQQe4oz8=
google.golang.org/protobuf v0.0.0-20200221191635-4d8936d0db64/go.mod h1:kwYJMbMJ01Woi6D6+Kah6886xMZcty6N08ah7+eCXa0=
google.golang.org/protobuf v0.0.0-20200228230310-ab0ca4ff8a60/go.mod h1:cfTl7dwQJ+fmap5saPgwCLgHXTUD7jkjRqWcaiX5VyM=
@@ -530,19 +550,17 @@ google.golang.org/protobuf v1.23.1-0.20200526195155-81db48ad09cc/go.mod h1:EGpAD
google.golang.org/protobuf v1.25.0/go.mod h1:9JNX74DMeImyA3h4bdi1ymwjUzf21/xIlbajtzgsN7c=
google.golang.org/protobuf v1.26.0-rc.1/go.mod h1:jlhhOSvTdKEhbULTjvd4ARK9grFBp09yW+WbY/TyQbw=
google.golang.org/protobuf v1.26.0/go.mod h1:9q0QmTI4eRPtz6boOQmLYwt+qCgq0jsYwAQnmE0givc=
google.golang.org/protobuf v1.30.0 h1:kPPoIgf3TsEvrm0PFe15JQ+570QVxYzEvvHqChK+cng=
google.golang.org/protobuf v1.30.0/go.mod h1:HV8QOd/L58Z+nl8r43ehVNZIU/HEI6OcFqwMG9pJV4I=
google.golang.org/protobuf v1.33.0 h1:uNO2rsAINq/JlFpSdYEKIZ0uKD/R9cpdv0T+yoGwGmI=
google.golang.org/protobuf v1.33.0/go.mod h1:c6P6GXX6sHbq/GpV6MGZEdwhWPcYBgnhAHhKbcUYpos=
gopkg.in/alexcesaro/statsd.v2 v2.0.0 h1:FXkZSCZIH17vLCO5sO2UucTHsH9pc+17F6pl3JVCwMc=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20200227125254-8fa46927fb4f/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c h1:Hei/4ADfdWqJk1ZMxUNpqntNwaWcugrBjAiHlqqRiVk=
gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c/go.mod h1:JHkPIbrfpd72SG/EVd6muEfDQjcINNoR0C8j2r3qZ4Q=
gopkg.in/errgo.v2 v2.1.0/go.mod h1:hNsd1EY+bozCKY1Ytp96fpM3vjJbqLJn88ws8XvfDNI=
gopkg.in/go-jose/go-jose.v2 v2.6.1 h1:qEzJlIDmG9q5VO0M/o8tGS65QMHMS1w01TQJB1VPJ4U=
gopkg.in/go-jose/go-jose.v2 v2.6.1/go.mod h1:zzZDPkNNw/c9IE7Z9jr11mBZQhKQTMzoEEIoEdZlFBI=
gopkg.in/square/go-jose.v2 v2.5.1/go.mod h1:M9dMgbHiYLoDGQrXy7OpJDJWiKiU//h+vD76mk0e1AI=
gopkg.in/square/go-jose.v2 v2.6.0 h1:NGk74WTnPKBNUhNzQX7PYcTLUjoq7mzKk2OKbvwk2iI=
gopkg.in/square/go-jose.v2 v2.6.0/go.mod h1:M9dMgbHiYLoDGQrXy7OpJDJWiKiU//h+vD76mk0e1AI=
gopkg.in/go-jose/go-jose.v2 v2.6.3 h1:nt80fvSDlhKWQgSWyHyy5CfmlQr+asih51R8PTWNKKs=
gopkg.in/go-jose/go-jose.v2 v2.6.3/go.mod h1:zzZDPkNNw/c9IE7Z9jr11mBZQhKQTMzoEEIoEdZlFBI=
gopkg.in/yaml.v2 v2.2.2/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.2.8/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.4.0 h1:D8xgwECY7CYvx+Y2n4sBz93Jn9JRvxdiyyo8CTfuKaY=
@@ -551,10 +569,9 @@ gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C
gopkg.in/yaml.v3 v3.0.0-20200605160147-a5ece683394c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
gopkg.in/yaml.v3 v3.0.0-20200615113413-eeeca48fe776/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
gopkg.in/yaml.v3 v3.0.0-20210107192922-496545a6307b/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
gopkg.in/yaml.v3 v3.0.0/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA=
gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
gotest.tools v2.2.0+incompatible h1:VsBPFP1AI068pPrMxtb/S8Zkgf9xEmTLJjfM+P5UIEo=
gotest.tools/v3 v3.4.0 h1:ZazjZUfuVeZGLAmlKKuyv3IKP5orXcwtOwDQH6YVr6o=
gotest.tools/v3 v3.5.0 h1:Ljk6PdHdOhAb5aDMWXjDLMMhph+BpztA4v1QdqEW2eY=
honnef.co/go/tools v0.0.0-20190102054323-c2f93a96b099/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
honnef.co/go/tools v0.0.0-20190523083050-ea95bdfd59fc/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=

View File

@@ -5,11 +5,16 @@ fi
tmpdir="$PWD/tmp.$RANDOM"
mkdir -p "$tmpdir"
trap 'rm -fr "$tmpdir"' EXIT
cc -o "$tmpdir"/libsubid_tag -l subid -x c - > /dev/null 2> /dev/null << EOF
cc -o "$tmpdir"/libsubid_tag -x c - -l subid > /dev/null 2> /dev/null << EOF
#include <shadow/subid.h>
#include <stdlib.h>
int main() {
struct subid_range *ranges = NULL;
#if SUBID_ABI_MAJOR >= 4
subid_get_uid_ranges("root", &ranges);
#else
get_subuid_ranges("root", &ranges);
#endif
free(ranges);
return 0;
}

View File

@@ -55,6 +55,22 @@ sudo apk add skopeo
[Package Info](https://pkgs.alpinelinux.org/packages?name=skopeo)
### Gentoo
```sh
sudo emerge app-containers/skopeo
```
[Package Info](https://packages.gentoo.org/packages/app-containers/skopeo)
### Arch Linux
```sh
sudo pacman -S skopeo
```
[Package Info](https://archlinux.org/packages/extra/x86_64/skopeo/)
### macOS
```sh
@@ -123,7 +139,7 @@ podman run docker://quay.io/skopeo/stable:latest copy --help
Otherwise, read on for building and installing it from source:
To build the `skopeo` binary you need at least Go 1.12.
To build the `skopeo` binary you need at least Go 1.19.
There are two ways to build skopeo: in a container, or locally without a
container. Choose the one which better matches your needs and environment.
@@ -159,6 +175,11 @@ brew install gpgme
sudo zypper install libgpgme-devel device-mapper-devel libbtrfs-devel glib2-devel
```
```bash
# Arch Linux:
sudo pacman -S base-devel gpgme device-mapper btrfs-progs
```
Make sure to clone this repository in your `GOPATH` - otherwise compilation fails.
```bash
@@ -174,6 +195,16 @@ document generation can be skipped by passing `DISABLE_DOCS=1`:
DISABLE_DOCS=1 make
```
### Cross-compilation
For cross-building skopeo, use the command `make bin/skopeo.OS.ARCH`, where OS represents
the target operating system and ARCH stands for the desired architecture. For instance,
to build skopeo for RISC-V 64-bit Linux, execute:
```bash
make bin/skopeo.linux.riscv64
```
### Building documentation
To build the manual you will need go-md2man.

View File

@@ -30,7 +30,8 @@ const (
v2DockerRegistryURL = "localhost:5555" // Update also policy.json
v2s1DockerRegistryURL = "localhost:5556"
knownWindowsOnlyImage = "docker://mcr.microsoft.com/windows/nanoserver:1909"
knownListImage = "docker://registry.fedoraproject.org/fedora-minimal" // could have either ":latest" or "@sha256:..." appended
knownListImageRepo = "docker://registry.fedoraproject.org/fedora-minimal"
knownListImage = knownListImageRepo + ":38"
)
func TestCopy(t *testing.T) {
@@ -215,8 +216,8 @@ func (s *copySuite) TestCopyWithManifestListDigest() {
manifestDigest, err := manifest.Digest([]byte(m))
require.NoError(t, err)
digest := manifestDigest.String()
assertSkopeoSucceeds(t, "", "copy", knownListImage+"@"+digest, "dir:"+dir1)
assertSkopeoSucceeds(t, "", "copy", "--multi-arch=all", knownListImage+"@"+digest, "dir:"+dir2)
assertSkopeoSucceeds(t, "", "copy", knownListImageRepo+"@"+digest, "dir:"+dir1)
assertSkopeoSucceeds(t, "", "copy", "--multi-arch=all", knownListImageRepo+"@"+digest, "dir:"+dir2)
assertSkopeoSucceeds(t, "", "copy", "dir:"+dir1, "oci:"+oci1)
assertSkopeoSucceeds(t, "", "copy", "dir:"+dir2, "oci:"+oci2)
out := combinedOutputOfCommand(t, "diff", "-urN", oci1, oci2)
@@ -245,9 +246,9 @@ func (s *copySuite) TestCopyWithManifestListStorageDigest() {
manifestDigest, err := manifest.Digest([]byte(m))
require.NoError(t, err)
digest := manifestDigest.String()
assertSkopeoSucceeds(t, "", "copy", knownListImage+"@"+digest, "containers-storage:"+storage+"test@"+digest)
assertSkopeoSucceeds(t, "", "copy", knownListImageRepo+"@"+digest, "containers-storage:"+storage+"test@"+digest)
assertSkopeoSucceeds(t, "", "copy", "containers-storage:"+storage+"test@"+digest, "dir:"+dir1)
assertSkopeoSucceeds(t, "", "copy", knownListImage+"@"+digest, "dir:"+dir2)
assertSkopeoSucceeds(t, "", "copy", knownListImageRepo+"@"+digest, "dir:"+dir2)
runDecompressDirs(t, dir1, dir2)
assertDirImagesAreEqual(t, dir1, dir2)
}
@@ -262,9 +263,9 @@ func (s *copySuite) TestCopyWithManifestListStorageDigestMultipleArches() {
manifestDigest, err := manifest.Digest([]byte(m))
require.NoError(t, err)
digest := manifestDigest.String()
assertSkopeoSucceeds(t, "", "copy", knownListImage+"@"+digest, "containers-storage:"+storage+"test@"+digest)
assertSkopeoSucceeds(t, "", "copy", knownListImageRepo+"@"+digest, "containers-storage:"+storage+"test@"+digest)
assertSkopeoSucceeds(t, "", "copy", "containers-storage:"+storage+"test@"+digest, "dir:"+dir1)
assertSkopeoSucceeds(t, "", "copy", knownListImage+"@"+digest, "dir:"+dir2)
assertSkopeoSucceeds(t, "", "copy", knownListImageRepo+"@"+digest, "dir:"+dir2)
runDecompressDirs(t, dir1, dir2)
assertDirImagesAreEqual(t, dir1, dir2)
}
@@ -279,8 +280,8 @@ func (s *copySuite) TestCopyWithManifestListStorageDigestMultipleArchesBothUseLi
digest := manifestDigest.String()
_, err = manifest.ListFromBlob([]byte(m), manifest.GuessMIMEType([]byte(m)))
require.NoError(t, err)
assertSkopeoSucceeds(t, "", "--override-arch=amd64", "copy", knownListImage+"@"+digest, "containers-storage:"+storage+"test@"+digest)
assertSkopeoSucceeds(t, "", "--override-arch=arm64", "copy", knownListImage+"@"+digest, "containers-storage:"+storage+"test@"+digest)
assertSkopeoSucceeds(t, "", "--override-arch=amd64", "copy", knownListImageRepo+"@"+digest, "containers-storage:"+storage+"test@"+digest)
assertSkopeoSucceeds(t, "", "--override-arch=arm64", "copy", knownListImageRepo+"@"+digest, "containers-storage:"+storage+"test@"+digest)
assertSkopeoFails(t, `.*reading manifest for image instance.*does not exist.*`, "--override-arch=amd64", "inspect", "containers-storage:"+storage+"test@"+digest)
assertSkopeoFails(t, `.*reading manifest for image instance.*does not exist.*`, "--override-arch=amd64", "inspect", "--config", "containers-storage:"+storage+"test@"+digest)
i2 := combinedOutputOfCommand(t, skopeoBinary, "--override-arch=arm64", "inspect", "--config", "containers-storage:"+storage+"test@"+digest)
@@ -304,8 +305,8 @@ func (s *copySuite) TestCopyWithManifestListStorageDigestMultipleArchesFirstUses
require.NoError(t, err)
arm64Instance, err := list.ChooseInstance(&types.SystemContext{ArchitectureChoice: "arm64"})
require.NoError(t, err)
assertSkopeoSucceeds(t, "", "--override-arch=amd64", "copy", knownListImage+"@"+digest, "containers-storage:"+storage+"test@"+digest)
assertSkopeoSucceeds(t, "", "--override-arch=arm64", "copy", knownListImage+"@"+arm64Instance.String(), "containers-storage:"+storage+"test@"+arm64Instance.String())
assertSkopeoSucceeds(t, "", "--override-arch=amd64", "copy", knownListImageRepo+"@"+digest, "containers-storage:"+storage+"test@"+digest)
assertSkopeoSucceeds(t, "", "--override-arch=arm64", "copy", knownListImageRepo+"@"+arm64Instance.String(), "containers-storage:"+storage+"test@"+arm64Instance.String())
i1 := combinedOutputOfCommand(t, skopeoBinary, "--override-arch=amd64", "inspect", "--config", "containers-storage:"+storage+"test@"+digest)
var image1 imgspecv1.Image
err = json.Unmarshal([]byte(i1), &image1)
@@ -339,8 +340,8 @@ func (s *copySuite) TestCopyWithManifestListStorageDigestMultipleArchesSecondUse
require.NoError(t, err)
arm64Instance, err := list.ChooseInstance(&types.SystemContext{ArchitectureChoice: "arm64"})
require.NoError(t, err)
assertSkopeoSucceeds(t, "", "--override-arch=amd64", "copy", knownListImage+"@"+amd64Instance.String(), "containers-storage:"+storage+"test@"+amd64Instance.String())
assertSkopeoSucceeds(t, "", "--override-arch=arm64", "copy", knownListImage+"@"+digest, "containers-storage:"+storage+"test@"+digest)
assertSkopeoSucceeds(t, "", "--override-arch=amd64", "copy", knownListImageRepo+"@"+amd64Instance.String(), "containers-storage:"+storage+"test@"+amd64Instance.String())
assertSkopeoSucceeds(t, "", "--override-arch=arm64", "copy", knownListImageRepo+"@"+digest, "containers-storage:"+storage+"test@"+digest)
i1 := combinedOutputOfCommand(t, skopeoBinary, "--override-arch=amd64", "inspect", "--config", "containers-storage:"+storage+"test@"+amd64Instance.String())
var image1 imgspecv1.Image
err = json.Unmarshal([]byte(i1), &image1)
@@ -374,9 +375,9 @@ func (s *copySuite) TestCopyWithManifestListStorageDigestMultipleArchesThirdUses
require.NoError(t, err)
arm64Instance, err := list.ChooseInstance(&types.SystemContext{ArchitectureChoice: "arm64"})
require.NoError(t, err)
assertSkopeoSucceeds(t, "", "--override-arch=amd64", "copy", knownListImage+"@"+amd64Instance.String(), "containers-storage:"+storage+"test@"+amd64Instance.String())
assertSkopeoSucceeds(t, "", "--override-arch=amd64", "copy", knownListImage+"@"+digest, "containers-storage:"+storage+"test@"+digest)
assertSkopeoSucceeds(t, "", "--override-arch=arm64", "copy", knownListImage+"@"+digest, "containers-storage:"+storage+"test@"+digest)
assertSkopeoSucceeds(t, "", "--override-arch=amd64", "copy", knownListImageRepo+"@"+amd64Instance.String(), "containers-storage:"+storage+"test@"+amd64Instance.String())
assertSkopeoSucceeds(t, "", "--override-arch=amd64", "copy", knownListImageRepo+"@"+digest, "containers-storage:"+storage+"test@"+digest)
assertSkopeoSucceeds(t, "", "--override-arch=arm64", "copy", knownListImageRepo+"@"+digest, "containers-storage:"+storage+"test@"+digest)
assertSkopeoFails(t, `.*reading manifest for image instance.*does not exist.*`, "--override-arch=amd64", "inspect", "--config", "containers-storage:"+storage+"test@"+digest)
i1 := combinedOutputOfCommand(t, skopeoBinary, "--override-arch=amd64", "inspect", "--config", "containers-storage:"+storage+"test@"+amd64Instance.String())
var image1 imgspecv1.Image
@@ -410,7 +411,7 @@ func (s *copySuite) TestCopyWithManifestListStorageDigestMultipleArchesTagAndDig
arm64Instance, err := list.ChooseInstance(&types.SystemContext{ArchitectureChoice: "arm64"})
require.NoError(t, err)
assertSkopeoSucceeds(t, "", "--override-arch=amd64", "copy", knownListImage, "containers-storage:"+storage+"test:latest")
assertSkopeoSucceeds(t, "", "--override-arch=arm64", "copy", knownListImage+"@"+digest, "containers-storage:"+storage+"test@"+digest)
assertSkopeoSucceeds(t, "", "--override-arch=arm64", "copy", knownListImageRepo+"@"+digest, "containers-storage:"+storage+"test@"+digest)
assertSkopeoFails(t, `.*reading manifest for image instance.*does not exist.*`, "--override-arch=amd64", "inspect", "--config", "containers-storage:"+storage+"test@"+digest)
i1 := combinedOutputOfCommand(t, skopeoBinary, "--override-arch=arm64", "inspect", "--config", "containers-storage:"+storage+"test:latest")
var image1 imgspecv1.Image
@@ -478,7 +479,7 @@ func (s *copySuite) TestCopySimple() {
// FIXME: It would be nice to use one of the local Docker registries instead of needing an Internet connection.
// "pull": docker: → dir:
assertSkopeoSucceeds(t, "", "copy", "docker://k8s.gcr.io/pause", "dir:"+dir1)
assertSkopeoSucceeds(t, "", "copy", "docker://registry.k8s.io/pause", "dir:"+dir1)
// "push": dir: → docker(v2s2):
assertSkopeoSucceeds(t, "", "--tls-verify=false", "--debug", "copy", "dir:"+dir1, ourRegistry+"pause:unsigned")
// The result of pushing and pulling is an unmodified image.
@@ -492,14 +493,14 @@ func (s *copySuite) TestCopySimple() {
ociDest := "pause-latest-image"
ociImgName := "pause"
defer os.RemoveAll(ociDest)
assertSkopeoSucceeds(t, "", "copy", "docker://k8s.gcr.io/pause:latest", "oci:"+ociDest+":"+ociImgName)
assertSkopeoSucceeds(t, "", "copy", "docker://registry.k8s.io/pause:latest", "oci:"+ociDest+":"+ociImgName)
_, err := os.Stat(ociDest)
require.NoError(t, err)
// docker v2s2 -> OCI image layout without image name
ociDest = "pause-latest-noimage"
defer os.RemoveAll(ociDest)
assertSkopeoSucceeds(t, "", "copy", "docker://k8s.gcr.io/pause:latest", "oci:"+ociDest)
assertSkopeoSucceeds(t, "", "copy", "docker://registry.k8s.io/pause:latest", "oci:"+ociDest)
_, err = os.Stat(ociDest)
require.NoError(t, err)
}
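
The knownListImageRepo/knownListImage split earlier in this file keeps by-tag copies pinned to a fixed tag while by-digest references are assembled from the bare repository. A minimal, self-contained sketch of how the two constants are meant to be combined (the digest value below is a placeholder, not one from the test suite):

```go
package main

import "fmt"

const (
	knownListImageRepo = "docker://registry.fedoraproject.org/fedora-minimal"
	knownListImage     = knownListImageRepo + ":38" // pinned tag, used for by-tag copies
)

func main() {
	// Placeholder digest; the tests compute the real one from the fetched manifest.
	digest := "sha256:0000000000000000000000000000000000000000000000000000000000000000"
	fmt.Println(knownListImage)                    // by-tag reference
	fmt.Println(knownListImageRepo + "@" + digest) // by-digest reference, built from the repo
}
```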

View File

@@ -239,7 +239,7 @@ func runTestGetManifestAndConfig(p *proxy, img string) error {
if !ok {
return fmt.Errorf("OpenImage return value is %T", v)
}
imgid := uint32(imgidv)
imgid := uint64(imgidv)
if imgid == 0 {
return fmt.Errorf("got zero from expected image")
}
@@ -254,7 +254,7 @@ func runTestGetManifestAndConfig(p *proxy, img string) error {
if !ok {
return fmt.Errorf("OpenImageOptional return value is %T", v)
}
imgid2 := uint32(imgidv)
imgid2 := uint64(imgidv)
if imgid2 == 0 {
return fmt.Errorf("got zero from expected image")
}
@@ -325,7 +325,7 @@ func runTestOpenImageOptionalNotFound(p *proxy, img string) error {
if !ok {
return fmt.Errorf("OpenImageOptional return value is %T", v)
}
imgid := uint32(imgidv)
imgid := uint64(imgidv)
if imgid != 0 {
return fmt.Errorf("Unexpected optional image id %v", imgid)
}

View File

@@ -25,15 +25,15 @@ const (
// A repository with a path with multiple components in it which
// contains multiple tags, preferably with some tags pointing to
// manifest lists, and with some tags that don't.
pullableRepo = "k8s.gcr.io/coredns/coredns"
pullableRepo = "registry.k8s.io/coredns/coredns"
// A tagged image in the repository that we can inspect and copy.
pullableTaggedImage = "k8s.gcr.io/coredns/coredns:v1.6.6"
pullableTaggedImage = "registry.k8s.io/coredns/coredns:v1.6.6"
// A tagged manifest list in the repository that we can inspect and copy.
pullableTaggedManifestList = "k8s.gcr.io/coredns/coredns:v1.8.0"
pullableTaggedManifestList = "registry.k8s.io/coredns/coredns:v1.8.0"
// A repository containing multiple tags, some of which are for
// manifest lists, and which includes a "latest" tag. We specify the
// name here without a tag.
pullableRepoWithLatestTag = "k8s.gcr.io/pause"
pullableRepoWithLatestTag = "registry.k8s.io/pause"
)
func TestSync(t *testing.T) {
@@ -323,7 +323,7 @@ func (s *syncSuite) TestYamlRegex2Dir() {
dir1 := path.Join(tmpDir, "dir1")
yamlConfig := `
k8s.gcr.io:
registry.k8s.io:
images-by-tag-regex:
pause: ^[12]\.0$ # regex string test
`
@@ -344,7 +344,7 @@ func (s *syncSuite) TestYamlDigest2Dir() {
dir1 := path.Join(tmpDir, "dir1")
yamlConfig := `
k8s.gcr.io:
registry.k8s.io:
images:
pause:
- sha256:59eec8837a4d942cc19a52b8c09ea75121acc38114a2c68b98983ce9356b8610
@@ -362,7 +362,7 @@ func (s *syncSuite) TestYaml2Dir() {
dir1 := path.Join(tmpDir, "dir1")
yamlConfig := `
k8s.gcr.io:
registry.k8s.io:
images:
coredns/coredns:
- v1.8.0

View File

@@ -7,9 +7,12 @@
%global debug_package %{nil}
%endif
# RHEL 8's default %%gobuild macro doesn't account for the BUILDTAGS variable, so we
# set it separately here and do not depend on RHEL 8's go-srpm-macros package.
%if %{defined rhel} && 0%{?rhel} == 8
# RHEL's default %%gobuild macro doesn't account for the BUILDTAGS variable, so we
# set it separately here and do not depend on RHEL's go-[s]rpm-macros package
# until that's fixed.
# c9s bz: https://bugzilla.redhat.com/show_bug.cgi?id=2227328
# c8s bz: https://bugzilla.redhat.com/show_bug.cgi?id=2227331
%if %{defined rhel}
%define gobuild(o:) go build -buildmode pie -compiler gc -tags="rpm_crashtraceback libtrust_openssl ${BUILDTAGS:-}" -ldflags "-linkmode=external -compressdwarf=false ${LDFLAGS:-} -B 0x$(head -c20 /dev/urandom|od -An -tx1|tr -d ' \\n') -extldflags '%__global_ldflags'" -a -v -x %{?**};
%endif
@@ -41,7 +44,8 @@ Epoch: %{conditional_epoch}
# copr and koji builds.
# If you're reading this on dist-git, the version is automatically filled in by Packit.
Version: 0
License: Apache-2.0 and BSD-2-Clause and BSD-3-Clause and ISC and MIT and MPL-2.0
# The `AND` needs to be uppercase in the License for SPDX compatibility
License: Apache-2.0 AND BSD-2-Clause AND BSD-3-Clause AND ISC AND MIT AND MPL-2.0
Release: %autorelease
%if %{defined golang_arches_future}
ExclusiveArch: %{golang_arches_future}
@@ -69,8 +73,6 @@ BuildRequires: glib2-devel
BuildRequires: make
BuildRequires: shadow-utils-subid-devel
Requires: containers-common >= 4:1-21
# DO NOT DELETE BELOW LINE - used for updating downstream goimports
# vendored libraries
%description
Command line utility to inspect images and repositories directly on Docker

View File

@@ -1,17 +0,0 @@
#!/usr/bin/env bash
# This script will update the goimports in the rpm spec for downstream fedora
# packaging, via the `propose-downstream` packit action.
# The goimports don't need to be present upstream.
set -eo pipefail
PACKAGE=skopeo
# script is run from git root directory
SPEC_FILE=rpm/$PACKAGE.spec
sed -i '/Provides: bundled(golang.*/d' $SPEC_FILE
GO_IMPORTS=$(golist --imported --package-path github.com/containers/$PACKAGE --skip-self | sort -u | xargs "-I{}" echo "Provides: bundled(golang({}))")
awk -v r="$GO_IMPORTS" '/^# vendored libraries/ {print; print r; next} 1' $SPEC_FILE > temp && mv temp $SPEC_FILE

View File

@@ -16,4 +16,29 @@ function setup() {
expect_output --substring "skopeo version [0-9.]+"
}
@test "skopeo release isn't a development version" {
[[ "${RELEASE_TESTING:-false}" == "true" ]] || \
skip "Release testing may be enabled by setting \$RELEASE_TESTING = 'true'."
run_skopeo --version
# expect_output() doesn't support negative matching
if [[ "$output" =~ "dev" ]]; then
# This is a multi-line message, which may in turn contain multi-line
# output, so let's format it ourselves, readably
local -a output_split
readarray -t output_split <<<"$output"
printf "#/vvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvv\n" >&2
printf "#| FAIL: $BATS_TEST_NAME\n" >&2
printf "#| unexpected: 'dev'\n" >&2
printf "#| actual: '%s'\n" "${output_split[0]}" >&2
local line
for line in "${output_split[@]:1}"; do
printf "#| > '%s'\n" "$line" >&2
done
printf "#\\^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n" >&2
false
fi
}
# vim: filetype=sh

View File

@@ -10,7 +10,7 @@ SKOPEO_BINARY=${SKOPEO_BINARY:-${TEST_SOURCE_DIR}/../bin/skopeo}
SKOPEO_TIMEOUT=${SKOPEO_TIMEOUT:-300}
# Default image to run as a local registry
REGISTRY_FQIN=${SKOPEO_TEST_REGISTRY_FQIN:-quay.io/libpod/registry:2}
REGISTRY_FQIN=${SKOPEO_TEST_REGISTRY_FQIN:-quay.io/libpod/registry:2.8.2}
###############################################################################
# BEGIN setup/teardown

View File

@@ -1,57 +0,0 @@
/*
mkwinsyscall generates windows system call bodies
It parses all files specified on command line containing function
prototypes (like syscall_windows.go) and prints system call bodies
to standard output.
The prototypes are marked by lines beginning with "//sys" and read
like func declarations if //sys is replaced by func, but:
- The parameter lists must give a name for each argument. This
includes return parameters.
- The parameter lists must give a type for each argument:
the (x, y, z int) shorthand is not allowed.
- If the return parameter is an error number, it must be named err.
- If go func name needs to be different from its winapi dll name,
the winapi name could be specified at the end, after "=" sign, like
//sys LoadLibrary(libname string) (handle uint32, err error) = LoadLibraryA
- Each function that returns err needs to supply a condition, that
return value of winapi will be tested against to detect failure.
This would set err to windows "last-error", otherwise it will be nil.
The value can be provided at end of //sys declaration, like
//sys LoadLibrary(libname string) (handle uint32, err error) [failretval==-1] = LoadLibraryA
and is [failretval==0] by default.
- If the function name ends in a "?", then the function not existing is non-
fatal, and an error will be returned instead of panicking.
Usage:
mkwinsyscall [flags] [path ...]
Flags
-output string
Output file name (standard output if omitted).
-sort
Sort DLL and function declarations (default true).
Intended to help transition from older versions of mkwinsyscall by making diffs
easier to read and understand.
-systemdll
Whether all DLLs should be loaded from the Windows system directory (default true).
-trace
Generate print statement after every syscall.
-utf16
Encode string arguments as UTF-16 for syscalls not ending in 'A' or 'W' (default true).
-winio
Import this package ("github.com/Microsoft/go-winio").
*/
package main

File diff suppressed because it is too large

View File

@@ -37,6 +37,10 @@ rootfs-conv/*
deps/*
out/*
# protobuf files
# only files at root of the repo, otherwise this will cause issues with vendoring
/protobuf/*
# test results
test/results

View File

@@ -21,17 +21,31 @@ linters:
# - unused
- gofmt # whether code was gofmt-ed
- govet # enabled by default, but just to be sure
- nolintlint # ill-formed or insufficient nolint directives
- stylecheck # golint replacement
- thelper # test helpers without t.Helper()
linters-settings:
govet:
enable-all: true
disable:
# struct order is often for Win32 compat
# also, ignore pointer bytes/GC issues for now until performance becomes an issue
- fieldalignment
check-shadowing: true
stylecheck:
# https://staticcheck.io/docs/checks
checks: ["all"]
issues:
exclude-rules:
# err is very often shadowed in nested scopes
- linters:
- govet
text: '^shadow: declaration of "err" shadows declaration'
# path is relative to module root, which is ./test/
- path: cri-containerd
linters:
@@ -135,3 +149,19 @@ issues:
linters:
- stylecheck
Text: "ST1003:"
# v0 APIs are deprecated, but still retained for backwards compatibility
- path: cmd\\ncproxy\\
linters:
- staticcheck
text: "^SA1019: .*(ncproxygrpc|nodenetsvc)[/]?v0"
- path: internal\\tools\\networkagent
linters:
- staticcheck
text: "^SA1019: .*nodenetsvc[/]?v0"
- path: internal\\vhdx\\info
linters:
- stylecheck
Text: "ST1003:"

View File

@@ -1,48 +1,25 @@
version = "1"
generator = "gogoctrd"
plugins = ["grpc", "fieldpath"]
version = "2"
generators = ["go", "go-grpc"]
# Control protoc include paths. Below are usually some good defaults, but feel
# free to try it without them if it works for your project.
# Control protoc include paths.
[includes]
# Include paths that will be added before all others. Typically, you want to
# treat the root of the project as an include, but this may not be necessary.
before = ["./protobuf"]
# Paths that should be treated as include roots in relation to the vendor
# directory. These will be calculated with the vendor directory nearest the
# target package.
packages = ["github.com/gogo/protobuf"]
# defaults are "/usr/local/include" and "/usr/include", which don't exist on Windows.
# override defaults to suppress errors about non-existent directories.
after = []
# This section maps protobuf imports to Go packages. These will become
# `-M` directives in the call to the go protobuf generator.
# This section maps protobuf imports to Go packages.
[packages]
"gogoproto/gogo.proto" = "github.com/gogo/protobuf/gogoproto"
"google/protobuf/any.proto" = "github.com/gogo/protobuf/types"
"google/protobuf/empty.proto" = "github.com/gogo/protobuf/types"
"google/protobuf/struct.proto" = "github.com/gogo/protobuf/types"
"google/protobuf/descriptor.proto" = "github.com/gogo/protobuf/protoc-gen-gogo/descriptor"
"google/protobuf/field_mask.proto" = "github.com/gogo/protobuf/types"
"google/protobuf/timestamp.proto" = "github.com/gogo/protobuf/types"
"google/protobuf/duration.proto" = "github.com/gogo/protobuf/types"
"github/containerd/cgroups/stats/v1/metrics.proto" = "github.com/containerd/cgroups/stats/v1"
# github.com/containerd/cgroups protofiles still list their go path as "github.com/containerd/cgroups/cgroup1/stats"
"github.com/containerd/cgroups/v3/cgroup1/stats/metrics.proto" = "github.com/containerd/cgroups/v3/cgroup1/stats"
[[overrides]]
prefixes = ["github.com/Microsoft/hcsshim/internal/shimdiag"]
plugins = ["ttrpc"]
[[overrides]]
prefixes = ["github.com/Microsoft/hcsshim/internal/extendedtask"]
plugins = ["ttrpc"]
[[overrides]]
prefixes = ["github.com/Microsoft/hcsshim/internal/computeagent"]
plugins = ["ttrpc"]
[[overrides]]
prefixes = ["github.com/Microsoft/hcsshim/internal/ncproxyttrpc"]
plugins = ["ttrpc"]
[[overrides]]
prefixes = ["github.com/Microsoft/hcsshim/internal/vmservice"]
plugins = ["ttrpc"]
prefixes = [
"github.com/Microsoft/hcsshim/internal/shimdiag",
"github.com/Microsoft/hcsshim/internal/extendedtask",
"github.com/Microsoft/hcsshim/internal/computeagent",
"github.com/Microsoft/hcsshim/internal/ncproxyttrpc",
"github.com/Microsoft/hcsshim/internal/vmservice",
]
generators = ["go", "go-ttrpc"]

View File

@@ -16,7 +16,9 @@ import (
"github.com/Microsoft/hcsshim/internal/security"
)
const defaultVHDXBlockSizeInMB = 1
const (
defaultVHDXBlockSizeInMB = 1
)
// SetupContainerBaseLayer is a helper to setup a containers scratch. It
// will create and format the vhdx's inside and the size is configurable with the sizeInGB

View File

@@ -11,7 +11,7 @@ import (
//sys hcsImportLayer(layerPath string, sourceFolderPath string, layerData string) (hr error) = computestorage.HcsImportLayer?
//sys hcsExportLayer(layerPath string, exportFolderPath string, layerData string, options string) (hr error) = computestorage.HcsExportLayer?
//sys hcsDestroyLayer(layerPath string) (hr error) = computestorage.HcsDestoryLayer?
//sys hcsDestroyLayer(layerPath string) (hr error) = computestorage.HcsDestroyLayer?
//sys hcsSetupBaseOSLayer(layerPath string, handle windows.Handle, options string) (hr error) = computestorage.HcsSetupBaseOSLayer?
//sys hcsInitializeWritableLayer(writableLayerPath string, layerData string, options string) (hr error) = computestorage.HcsInitializeWritableLayer?
//sys hcsAttachLayerStorageFilter(layerPath string, layerData string) (hr error) = computestorage.HcsAttachLayerStorageFilter?

View File

@@ -43,7 +43,7 @@ var (
modcomputestorage = windows.NewLazySystemDLL("computestorage.dll")
procHcsAttachLayerStorageFilter = modcomputestorage.NewProc("HcsAttachLayerStorageFilter")
procHcsDestoryLayer = modcomputestorage.NewProc("HcsDestoryLayer")
procHcsDestroyLayer = modcomputestorage.NewProc("HcsDestroyLayer")
procHcsDetachLayerStorageFilter = modcomputestorage.NewProc("HcsDetachLayerStorageFilter")
procHcsExportLayer = modcomputestorage.NewProc("HcsExportLayer")
procHcsFormatWritableLayerVhd = modcomputestorage.NewProc("HcsFormatWritableLayerVhd")
@@ -93,11 +93,11 @@ func hcsDestroyLayer(layerPath string) (hr error) {
}
func _hcsDestroyLayer(layerPath *uint16) (hr error) {
hr = procHcsDestoryLayer.Find()
hr = procHcsDestroyLayer.Find()
if hr != nil {
return
}
r0, _, _ := syscall.Syscall(procHcsDestoryLayer.Addr(), 1, uintptr(unsafe.Pointer(layerPath)), 0, 0)
r0, _, _ := syscall.Syscall(procHcsDestroyLayer.Addr(), 1, uintptr(unsafe.Pointer(layerPath)), 0, 0)
if int32(r0) < 0 {
if r0&0x1fff0000 == 0x00070000 {
r0 &= 0xffff

View File

@@ -12,14 +12,16 @@ import (
"syscall"
"time"
"go.opencensus.io/trace"
"github.com/Microsoft/hcsshim/internal/cow"
hcsschema "github.com/Microsoft/hcsshim/internal/hcs/schema2"
"github.com/Microsoft/hcsshim/internal/log"
"github.com/Microsoft/hcsshim/internal/oc"
"github.com/Microsoft/hcsshim/internal/protocol/guestrequest"
"github.com/Microsoft/hcsshim/internal/vmcompute"
"go.opencensus.io/trace"
)
// ContainerError is an error encountered in HCS
type Process struct {
handleLock sync.RWMutex
handle vmcompute.HcsProcess
@@ -50,35 +52,6 @@ func newProcess(process vmcompute.HcsProcess, processID int, computeSystem *Syst
}
}
type processModifyRequest struct {
Operation string
ConsoleSize *consoleSize `json:",omitempty"`
CloseHandle *closeHandle `json:",omitempty"`
}
type consoleSize struct {
Height uint16
Width uint16
}
type closeHandle struct {
Handle string
}
type processStatus struct {
ProcessID uint32
Exited bool
ExitCode uint32
LastWaitResult int32
}
const stdIn string = "StdIn"
const (
modifyConsoleSize string = "ConsoleSize"
modifyCloseHandle string = "CloseHandle"
)
// Pid returns the process ID of the process within the container.
func (process *Process) Pid() int {
return process.processID
@@ -260,14 +233,14 @@ func (process *Process) waitBackground() {
process.handleLock.RLock()
defer process.handleLock.RUnlock()
// Make sure we didnt race with Close() here
// Make sure we didn't race with Close() here
if process.handle != 0 {
propertiesJSON, resultJSON, err = vmcompute.HcsGetProcessProperties(ctx, process.handle)
events := processHcsResult(ctx, resultJSON)
if err != nil {
err = makeProcessError(process, operation, err, events)
} else {
properties := &processStatus{}
properties := &hcsschema.ProcessStatus{}
err = json.Unmarshal([]byte(propertiesJSON), properties)
if err != nil {
err = makeProcessError(process, operation, err, nil)
@@ -318,10 +291,9 @@ func (process *Process) ResizeConsole(ctx context.Context, width, height uint16)
if process.handle == 0 {
return makeProcessError(process, operation, ErrAlreadyClosed, nil)
}
modifyRequest := processModifyRequest{
Operation: modifyConsoleSize,
ConsoleSize: &consoleSize{
modifyRequest := hcsschema.ProcessModifyRequest{
Operation: guestrequest.ModifyProcessConsoleSize,
ConsoleSize: &hcsschema.ConsoleSize{
Height: height,
Width: width,
},
@@ -421,18 +393,12 @@ func (process *Process) CloseStdin(ctx context.Context) (err error) {
return makeProcessError(process, operation, ErrAlreadyClosed, nil)
}
process.stdioLock.Lock()
defer process.stdioLock.Unlock()
if process.stdin == nil {
return nil
}
//HcsModifyProcess request to close stdin will fail if the process has already exited
if !process.stopped() {
modifyRequest := processModifyRequest{
Operation: modifyCloseHandle,
CloseHandle: &closeHandle{
Handle: stdIn,
modifyRequest := hcsschema.ProcessModifyRequest{
Operation: guestrequest.CloseProcessHandle,
CloseHandle: &hcsschema.CloseHandle{
Handle: guestrequest.STDInHandle,
},
}
@@ -448,8 +414,12 @@ func (process *Process) CloseStdin(ctx context.Context) (err error) {
}
}
process.stdin.Close()
process.stdin = nil
process.stdioLock.Lock()
defer process.stdioLock.Unlock()
if process.stdin != nil {
process.stdin.Close()
process.stdin = nil
}
return nil
}
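
The rewrite above drops the package-local processModifyRequest/consoleSize/closeHandle types and raw string constants in favor of the shared hcsschema and guestrequest definitions. A self-contained sketch of that typed-constant style (names taken from the guestrequest/hcsschema hunks later in this diff, redeclared locally so the sketch compiles on its own); distinct string types make it a compile-time error to pass a handle name where an operation belongs:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Typed string aliases mirror the guestrequest definitions in this diff.
type ProcessModifyOperation string
type STDIOHandle string

const (
	ModifyProcessConsoleSize ProcessModifyOperation = "ConsoleSize"
	CloseProcessHandle       ProcessModifyOperation = "CloseHandle"
	STDInHandle              STDIOHandle            = "StdIn"
)

type CloseHandle struct {
	Handle STDIOHandle `json:"Handle,omitempty"`
}

type ProcessModifyRequest struct {
	Operation   ProcessModifyOperation `json:"Operation,omitempty"`
	CloseHandle *CloseHandle           `json:",omitempty"`
}

func main() {
	// Close stdin of a process, as CloseStdin above does with the schema types.
	req := ProcessModifyRequest{
		Operation:   CloseProcessHandle,
		CloseHandle: &CloseHandle{Handle: STDInHandle},
	}
	b, _ := json.Marshal(req)
	fmt.Println(string(b)) // {"Operation":"CloseHandle","CloseHandle":{"Handle":"StdIn"}}
}
```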

View File

@@ -0,0 +1,25 @@
/*
* HCS API
*
* No description provided (generated by Swagger Codegen https://github.com/swagger-api/swagger-codegen)
*
* API version: 2.5
* Generated by: Swagger Codegen (https://github.com/swagger-api/swagger-codegen.git)
*/
package hcsschema
const (
CimMountFlagNone uint32 = 0x0
CimMountFlagChildOnly uint32 = 0x1
CimMountFlagEnableDax uint32 = 0x2
CimMountFlagCacheFiles uint32 = 0x4
CimMountFlagCacheRegions uint32 = 0x8
)
type CimMount struct {
ImagePath string `json:"ImagePath,omitempty"`
FileSystemName string `json:"FileSystemName,omitempty"`
VolumeGuid string `json:"VolumeGuid,omitempty"`
MountFlags uint32 `json:"MountFlags,omitempty"`
}

View File

@@ -9,6 +9,8 @@
package hcsschema
import "github.com/Microsoft/hcsshim/internal/protocol/guestrequest"
type CloseHandle struct {
Handle string `json:"Handle,omitempty"`
Handle guestrequest.STDIOHandle `json:"Handle,omitempty"` // NOTE: Swagger generated as string. Locally updated.
}

View File

@@ -9,8 +9,11 @@
package hcsschema
type ConsoleSize struct {
Height int32 `json:"Height,omitempty"`
// NOTE: Swagger generated fields as int32. Locally updated to uint16 to match documentation.
// https://learn.microsoft.com/en-us/virtualization/api/hcs/schemareference#ConsoleSize
Width int32 `json:"Width,omitempty"`
type ConsoleSize struct {
Height uint16 `json:"Height,omitempty"`
Width uint16 `json:"Width,omitempty"`
}

View File

@@ -17,5 +17,5 @@ type IsolationSettings struct {
DebugPort int64 `json:"DebugPort,omitempty"`
// Optional data passed by host on isolated virtual machine start
LaunchData string `json:"LaunchData,omitempty"`
HclEnabled bool `json:"HclEnabled,omitempty"`
HclEnabled *bool `json:"HclEnabled,omitempty"`
}
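
Changing HclEnabled from bool to *bool matters for the JSON round-trip: with `omitempty`, a plain `false` is dropped, so a reader cannot tell "unset" apart from "explicitly disabled", while a pointer keeps the distinction. A small sketch (the field name is reused from the diff; the struct itself is illustrative):

```go
package main

import (
	"encoding/json"
	"fmt"
)

type settings struct {
	HclEnabled *bool `json:"HclEnabled,omitempty"`
}

func main() {
	f := false
	explicitOff, _ := json.Marshal(settings{HclEnabled: &f}) // {"HclEnabled":false}
	unset, _ := json.Marshal(settings{})                     // {}
	fmt.Println(string(explicitOff), string(unset))
}
```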

View File

@@ -9,9 +9,11 @@
package hcsschema
// Passed to HcsRpc_ModifyProcess
import "github.com/Microsoft/hcsshim/internal/protocol/guestrequest"
// Passed to HcsRpc_ModifyProcess
type ProcessModifyRequest struct {
Operation string `json:"Operation,omitempty"`
Operation guestrequest.ProcessModifyOperation `json:"Operation,omitempty"` // NOTE: Swagger generated as string. Locally updated.
ConsoleSize *ConsoleSize `json:"ConsoleSize,omitempty"`

View File

@@ -9,13 +9,16 @@
package hcsschema
// Status of a process running in a container
// NOTE: Swagger generated fields as int32. Locally updated to uint16 to match documentation.
// https://learn.microsoft.com/en-us/virtualization/api/hcs/schemareference#ConsoleSize
// Status of a process running in a container
type ProcessStatus struct {
ProcessId int32 `json:"ProcessId,omitempty"`
ProcessId uint32 `json:"ProcessId,omitempty"` // NOTE: Swagger generated as int32. Locally updated to match documentation.
Exited bool `json:"Exited,omitempty"`
ExitCode int32 `json:"ExitCode,omitempty"`
ExitCode uint32 `json:"ExitCode,omitempty"` // NOTE: Swagger generated as int32. Locally updated to match documentation.
LastWaitResult int32 `json:"LastWaitResult,omitempty"`
}

View File

@@ -10,7 +10,7 @@
package hcsschema
import (
v1 "github.com/containerd/cgroups/stats/v1"
v1 "github.com/containerd/cgroups/v3/cgroup1/stats"
)
type Properties struct {

View File

@@ -304,11 +304,22 @@ func (computeSystem *System) WaitError() error {
return computeSystem.waitError
}
// Wait synchronously waits for the compute system to shutdown or terminate. If
// the compute system has already exited returns the previous error (if any).
// Wait synchronously waits for the compute system to shutdown or terminate.
// If the compute system has already exited returns the previous error (if any).
func (computeSystem *System) Wait() error {
<-computeSystem.WaitChannel()
return computeSystem.WaitError()
return computeSystem.WaitCtx(context.Background())
}
// WaitCtx synchronously waits for the compute system to shutdown or terminate, or the context to be cancelled.
//
// See [System.Wait] for more information.
func (computeSystem *System) WaitCtx(ctx context.Context) error {
select {
case <-computeSystem.WaitChannel():
return computeSystem.WaitError()
case <-ctx.Done():
return ctx.Err()
}
}
// stopped returns true if the compute system stopped.
@@ -735,9 +746,17 @@ func (computeSystem *System) OpenProcess(ctx context.Context, pid int) (*Process
}
// Close cleans up any state associated with the compute system but does not terminate or wait for it.
func (computeSystem *System) Close() (err error) {
func (computeSystem *System) Close() error {
return computeSystem.CloseCtx(context.Background())
}
// CloseCtx is similar to [System.Close], but accepts a context.
//
// The context is used for all operations, including waits, so timeouts/cancellations may prevent
// proper system cleanup.
func (computeSystem *System) CloseCtx(ctx context.Context) (err error) {
operation := "hcs::System::Close"
ctx, span := oc.StartSpan(context.Background(), operation)
ctx, span := oc.StartSpan(ctx, operation)
defer span.End()
defer func() { oc.SetSpanStatus(span, err) }()
span.AddAttributes(trace.StringAttribute("cid", computeSystem.id))
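
The new WaitCtx/CloseCtx pair follows a common Go pattern: the context-free method delegates to the context-aware one, which races the completion channel against ctx.Done(). A minimal, self-contained sketch of the same pattern (the types here are illustrative, not the hcsshim ones):

```go
package main

import (
	"context"
	"errors"
	"fmt"
	"time"
)

type system struct {
	done    chan struct{}
	waitErr error
}

// Wait blocks forever (or until the system exits) by delegating to WaitCtx.
func (s *system) Wait() error { return s.WaitCtx(context.Background()) }

// WaitCtx returns the stored wait error once done closes, or ctx.Err() if the
// context is cancelled first.
func (s *system) WaitCtx(ctx context.Context) error {
	select {
	case <-s.done:
		return s.waitErr
	case <-ctx.Done():
		return ctx.Err()
	}
}

func main() {
	s := &system{done: make(chan struct{})} // never closed in this sketch
	ctx, cancel := context.WithTimeout(context.Background(), 50*time.Millisecond)
	defer cancel()
	err := s.WaitCtx(ctx)
	fmt.Println(errors.Is(err, context.DeadlineExceeded)) // true: the wait was bounded by the context
}
```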

View File

@@ -167,7 +167,7 @@ func Create(ctx context.Context, options *Options) (_ *JobObject, err error) {
//
// Returns a JobObject structure and an error if there is one.
func Open(ctx context.Context, options *Options) (_ *JobObject, err error) {
if options == nil || (options != nil && options.Name == "") {
if options == nil || options.Name == "" {
return nil, errors.New("no job object name specified to open")
}

View File

@@ -9,10 +9,16 @@ import (
"reflect"
"time"
"github.com/containerd/containerd/log"
"github.com/sirupsen/logrus"
"google.golang.org/protobuf/encoding/protojson"
"google.golang.org/protobuf/proto"
)
const TimeFormat = log.RFC3339NanoFixed
// TimeFormat is [time.RFC3339Nano] with nanoseconds padded using
// zeros to ensure the formatted time is always the same number of
// characters.
// Based on RFC3339NanoFixed from github.com/containerd/log
const TimeFormat = "2006-01-02T15:04:05.000000000Z07:00"
func FormatTime(t time.Time) string {
return t.Format(TimeFormat)
@@ -59,25 +65,48 @@ func formatAddr(a net.Addr) string {
func Format(ctx context.Context, v interface{}) string {
b, err := encode(v)
if err != nil {
G(ctx).WithError(err).Warning("could not format value")
// logging errors aren't really warning worthy, and can potentially spam a lot of logs out
G(ctx).WithFields(logrus.Fields{
logrus.ErrorKey: err,
"type": fmt.Sprintf("%T", v),
}).Debug("could not format value")
return ""
}
return string(b)
}
func encode(v interface{}) ([]byte, error) {
return encodeBuffer(&bytes.Buffer{}, v)
}
func encode(v interface{}) (_ []byte, err error) {
if m, ok := v.(proto.Message); ok {
// use canonical JSON encoding for protobufs (instead of [encoding/json])
// https://protobuf.dev/programming-guides/proto3/#json
var b []byte
b, err = protojson.MarshalOptions{
AllowPartial: true,
// protobuf defaults to camel case for JSON encoding; use proto field name instead (snake case)
UseProtoNames: true,
}.Marshal(m)
if err == nil {
// the protojson marshaller tries to unmarshal anypb.Any fields, which can
// fail for types encoded with "github.com/containerd/typeurl/v2"
// we can try creating a dedicated protoregistry.MessageTypeResolver that uses typeurl, but it's
// more robust to fall back on json marshalling for errors in general
return b, nil
}
func encodeBuffer(buf *bytes.Buffer, v interface{}) ([]byte, error) {
}
buf := &bytes.Buffer{}
enc := json.NewEncoder(buf)
enc.SetEscapeHTML(false)
enc.SetIndent("", "")
if err := enc.Encode(v); err != nil {
err = fmt.Errorf("could not marshall %T to JSON for logging: %w", v, err)
return nil, err
if jErr := enc.Encode(v); jErr != nil {
if err != nil {
// TODO (go1.20): use multierror via fmt.Errorf("...: %w; ...: %w", ...)
return nil, fmt.Errorf("protojson encoding: %v; json encoding: %w", err, jErr)
}
return nil, fmt.Errorf("json encoding: %w", jErr)
}
// encoder.Encode appends a newline to the end
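
Because the hunk above interleaves the old and new bodies, here is the resulting encode strategy pulled out as one self-contained sketch (same google.golang.org/protobuf packages; the error aggregation of the original is simplified here): canonical protojson for proto.Message values, with encoding/json as the fallback for everything else and for protojson failures.

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"

	"google.golang.org/protobuf/encoding/protojson"
	"google.golang.org/protobuf/proto"
)

func encodeAny(v interface{}) ([]byte, error) {
	if m, ok := v.(proto.Message); ok {
		// Canonical JSON for protobufs, keeping proto (snake_case) field names.
		b, err := protojson.MarshalOptions{AllowPartial: true, UseProtoNames: true}.Marshal(m)
		if err == nil {
			return b, nil
		}
		// Otherwise fall through to plain JSON below.
	}
	buf := &bytes.Buffer{}
	enc := json.NewEncoder(buf)
	enc.SetEscapeHTML(false)
	if err := enc.Encode(v); err != nil {
		return nil, fmt.Errorf("json encoding: %w", err)
	}
	return buf.Bytes(), nil // note: Encode appends a trailing newline
}

func main() {
	b, _ := encodeAny(map[string]string{"path": "json fallback"})
	fmt.Print(string(b))
}
```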

View File

@@ -6,7 +6,6 @@ import (
"time"
"github.com/Microsoft/hcsshim/internal/logfields"
"github.com/containerd/containerd/log"
"github.com/sirupsen/logrus"
"go.opencensus.io/trace"
)
@@ -30,7 +29,7 @@ type Hook struct {
// An empty string disables formatting.
// When disabled, the fall back will the JSON encoding, if enabled.
//
// Default is [github.com/containerd/containerd/log.RFC3339NanoFixed].
// Default is [TimeFormat].
TimeFormat string
// Duration format converts a [time.Duration] fields to an appropriate encoding.
@@ -49,7 +48,7 @@ var _ logrus.Hook = &Hook{}
func NewHook() *Hook {
return &Hook{
TimeFormat: log.RFC3339NanoFixed,
TimeFormat: TimeFormat,
DurationFormat: DurationFormatString,
AddSpanContext: true,
}

View File

@@ -0,0 +1,12 @@
package log
import (
"github.com/sirupsen/logrus"
)
type NopFormatter struct{}
var _ logrus.Formatter = NopFormatter{}
// Format does nothing and returns a nil slice.
func (NopFormatter) Format(*logrus.Entry) ([]byte, error) { return nil, nil }
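
One plausible use of a formatter like the NopFormatter added above is muting a logrus logger's output entirely, since logrus writes whatever bytes the formatter returns; a nil slice means nothing is written. Sketch only, with the type redefined locally rather than imported from the internal package:

```go
package main

import "github.com/sirupsen/logrus"

// nopFormatter mirrors NopFormatter: it drops every entry.
type nopFormatter struct{}

func (nopFormatter) Format(*logrus.Entry) ([]byte, error) { return nil, nil }

func main() {
	logger := logrus.New()
	logger.SetFormatter(nopFormatter{})
	logger.Info("this produces no output") // formatter returned nil bytes
}
```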

View File

@@ -55,7 +55,7 @@ func ScrubProcessParameters(s string) (string, error) {
}
pp.Environment = map[string]string{_scrubbedReplacement: _scrubbedReplacement}
b, err := encodeBuffer(bytes.NewBuffer(b[:0]), pp)
b, err := encode(pp)
if err != nil {
return "", err
}
@@ -89,11 +89,11 @@ func scrubBridgeCreate(m genMap) error {
}
func scrubLinuxHostedSystem(m genMap) error {
if m, ok := index(m, "OciSpecification"); ok {
if m, ok := index(m, "OciSpecification"); ok { //nolint:govet // shadow
if _, ok := m["annotations"]; ok {
m["annotations"] = map[string]string{_scrubbedReplacement: _scrubbedReplacement}
}
if m, ok := index(m, "process"); ok {
if m, ok := index(m, "process"); ok { //nolint:govet // shadow
if _, ok := m["env"]; ok {
m["env"] = []string{_scrubbedReplacement}
return nil
@@ -113,7 +113,7 @@ func scrubExecuteProcess(m genMap) error {
if !isRequestBase(m) {
return ErrUnknownType
}
if m, ok := index(m, "Settings"); ok {
if m, ok := index(m, "Settings"); ok { //nolint:govet // shadow
if ss, ok := m["ProcessParameters"]; ok {
// ProcessParameters is a json encoded struct passed as a regular string field
s, ok := ss.(string)

View File

@@ -5,7 +5,7 @@ package guestrequest
type RequestType string
type ResourceType string
// RequestType const
// RequestType const.
const (
RequestTypeAdd RequestType = "Add"
RequestTypeRemove RequestType = "Remove"
@@ -54,3 +54,23 @@ var (
"305891a9-b251-5dfe-91a2-c25d9212275b",
}
)
// constants for v2 schema ProcessModifyRequest
// Operation type for [hcsschema.ProcessModifyRequest].
type ProcessModifyOperation string
const (
ModifyProcessConsoleSize ProcessModifyOperation = "ConsoleSize"
CloseProcessHandle ProcessModifyOperation = "CloseHandle"
)
// Standard IO handle(s) to close for [hcsschema.CloseHandle] in [hcsschema.ProcessModifyRequest].
type STDIOHandle string
const (
STDInHandle STDIOHandle = "StdIn"
STDOutHandle STDIOHandle = "StdOut"
STDErrHandle STDIOHandle = "StdErr"
AllHandles STDIOHandle = "All"
)

View File

@@ -276,7 +276,7 @@ func RemoveAllRelative(path string, root *os.File) error {
}
// It is necessary to use os.Open as Readdirnames does not work with
// OpenRelative. This is safe because the above lstatrelative fails
// OpenRelative. This is safe because the above LstatRelative fails
// if the target is outside the root, and we know this is not a
// symlink from the above FILE_ATTRIBUTE_REPARSE_POINT check.
fd, err := os.Open(filepath.Join(root.Name(), path))
@@ -293,12 +293,12 @@ func RemoveAllRelative(path string, root *os.File) error {
for {
names, err1 := fd.Readdirnames(100)
for _, name := range names {
err1 := RemoveAllRelative(path+string(os.PathSeparator)+name, root)
if err == nil {
err = err1
if err2 := RemoveAllRelative(path+string(os.PathSeparator)+name, root); err == nil {
err = err2
}
}
if err1 == io.EOF {
// Readdirnames has no more files to return
break
}
// If Readdirnames returned an error, use it.

View File

@@ -72,8 +72,8 @@ func (r *baseLayerReader) walkUntilCancelled() error {
return err
}
utilityVMAbsPath := filepath.Join(r.root, utilityVMPath)
utilityVMFilesAbsPath := filepath.Join(r.root, utilityVMFilesPath)
utilityVMAbsPath := filepath.Join(r.root, UtilityVMPath)
utilityVMFilesAbsPath := filepath.Join(r.root, UtilityVMFilesPath)
// Ignore a UtilityVM without Files, that's not _really_ a UtilityVM
if _, err = os.Lstat(utilityVMFilesAbsPath); err != nil {

View File

@@ -5,7 +5,6 @@ import (
"fmt"
"os"
"path/filepath"
"syscall"
"github.com/Microsoft/hcsshim/internal/hcserror"
"github.com/Microsoft/hcsshim/internal/longpath"
@@ -37,7 +36,7 @@ func ensureHive(path string, root *os.File) (err error) {
return fmt.Errorf("getting path: %w", err)
}
var key syscall.Handle
var key winapi.ORHKey
err = winapi.ORCreateHive(&key)
if err != nil {
return fmt.Errorf("creating hive: %w", err)
@@ -72,7 +71,7 @@ func ensureBaseLayer(root *os.File) (hasUtilityVM bool, err error) {
}
}
stat, err := safefile.LstatRelative(utilityVMFilesPath, root)
stat, err := safefile.LstatRelative(UtilityVMFilesPath, root)
if os.IsNotExist(err) {
return false, nil
@@ -83,7 +82,7 @@ func ensureBaseLayer(root *os.File) (hasUtilityVM bool, err error) {
}
if !stat.Mode().IsDir() {
fullPath := filepath.Join(root.Name(), utilityVMFilesPath)
fullPath := filepath.Join(root.Name(), UtilityVMFilesPath)
return false, errors.Errorf("%s has unexpected file mode %s", fullPath, stat.Mode().String())
}
@@ -92,7 +91,7 @@ func ensureBaseLayer(root *os.File) (hasUtilityVM bool, err error) {
// Just check that this exists as a regular file. If it exists but is not a valid registry hive,
// ProcessUtilityVMImage will complain:
// "The registry could not read in, or write out, or flush, one of the files that contain the system's image of the registry."
bcdPath := filepath.Join(utilityVMFilesPath, bcdRelativePath)
bcdPath := filepath.Join(UtilityVMFilesPath, bcdRelativePath)
stat, err = safefile.LstatRelative(bcdPath, root)
if err != nil {
@@ -122,12 +121,12 @@ func convertToBaseLayer(ctx context.Context, root *os.File) error {
return nil
}
err = safefile.EnsureNotReparsePointRelative(utilityVMPath, root)
err = safefile.EnsureNotReparsePointRelative(UtilityVMPath, root)
if err != nil {
return err
}
utilityVMPath := filepath.Join(root.Name(), utilityVMPath)
utilityVMPath := filepath.Join(root.Name(), UtilityVMPath)
return ProcessUtilityVMImage(ctx, utilityVMPath)
}

View File

@@ -7,6 +7,10 @@ package wclayer
import (
"context"
"fmt"
"os"
"path/filepath"
"strconv"
"syscall"
"github.com/Microsoft/go-winio/pkg/guid"
@@ -101,3 +105,23 @@ func layerPathsToDescriptors(ctx context.Context, parentLayerPaths []string) ([]
return layers, nil
}
// GetLayerUvmBuild looks for a file named `uvmbuildversion` at `layerPath\uvmbuildversion` and returns the
// build number of the UVM from that file.
func GetLayerUvmBuild(layerPath string) (uint16, error) {
data, err := os.ReadFile(filepath.Join(layerPath, UvmBuildFileName))
if err != nil {
return 0, err
}
ver, err := strconv.ParseUint(string(data), 10, 16)
if err != nil {
return 0, err
}
return uint16(ver), nil
}
// WriteLayerUvmBuildFile writes a file at path `layerPath\uvmbuildversion` that contains the given `build`
// version for future reference.
func WriteLayerUvmBuildFile(layerPath string, build uint16) error {
return os.WriteFile(filepath.Join(layerPath, UvmBuildFileName), []byte(fmt.Sprintf("%d", build)), 0777)
}
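
The two helpers above are a simple file-backed version stamp. A self-contained round-trip of the same idea (reimplemented locally so it runs outside the hcsshim module; the build number is an arbitrary example):

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strconv"
)

const uvmBuildFileName = "uvmbuildversion"

func writeLayerUvmBuildFile(layerPath string, build uint16) error {
	return os.WriteFile(filepath.Join(layerPath, uvmBuildFileName), []byte(strconv.Itoa(int(build))), 0o777)
}

func getLayerUvmBuild(layerPath string) (uint16, error) {
	data, err := os.ReadFile(filepath.Join(layerPath, uvmBuildFileName))
	if err != nil {
		return 0, err
	}
	ver, err := strconv.ParseUint(string(data), 10, 16)
	if err != nil {
		return 0, err
	}
	return uint16(ver), nil
}

func main() {
	dir, _ := os.MkdirTemp("", "layer")
	defer os.RemoveAll(dir)
	_ = writeLayerUvmBuildFile(dir, 20348) // example build number
	build, _ := getLayerUvmBuild(dir)
	fmt.Println(build) // 20348
}
```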

View File

@@ -29,10 +29,19 @@ var mutatedUtilityVMFiles = map[string]bool{
}
const (
filesPath = `Files`
hivesPath = `Hives`
utilityVMPath = `UtilityVM`
utilityVMFilesPath = `UtilityVM\Files`
filesPath = `Files`
HivesPath = `Hives`
UtilityVMPath = `UtilityVM`
UtilityVMFilesPath = `UtilityVM\Files`
RegFilesPath = `Files\Windows\System32\config`
BcdFilePath = `UtilityVM\Files\EFI\Microsoft\Boot\BCD`
BootMgrFilePath = `UtilityVM\Files\EFI\Microsoft\Boot\bootmgfw.efi`
ContainerBaseVhd = `blank-base.vhdx`
ContainerScratchVhd = `blank.vhdx`
UtilityVMBaseVhd = `SystemTemplateBase.vhdx`
UtilityVMScratchVhd = `SystemTemplate.vhdx`
LayoutFileName = `layout`
UvmBuildFileName = `uvmbuildversion`
)
func openFileOrDir(path string, mode uint32, createDisposition uint32) (file *os.File, err error) {
@@ -243,11 +252,11 @@ func (r *legacyLayerReader) Next() (path string, size int64, fileInfo *winio.Fil
if !hasPathPrefix(path, filesPath) {
size = fe.fi.Size()
r.backupReader = winio.NewBackupFileReader(f, false)
if path == hivesPath || path == filesPath {
if path == HivesPath || path == filesPath {
// The Hives directory has a non-deterministic file time because of the
// nature of the import process. Use the times from System_Delta.
var g *os.File
g, err = os.Open(filepath.Join(r.root, hivesPath, `System_Delta`))
g, err = os.Open(filepath.Join(r.root, HivesPath, `System_Delta`))
if err != nil {
return
}
@@ -409,7 +418,7 @@ func (w *legacyLayerWriter) CloseRoots() {
func (w *legacyLayerWriter) initUtilityVM() error {
if !w.HasUtilityVM {
err := safefile.MkdirRelative(utilityVMPath, w.destRoot)
err := safefile.MkdirRelative(UtilityVMPath, w.destRoot)
if err != nil {
return err
}
@@ -417,7 +426,7 @@ func (w *legacyLayerWriter) initUtilityVM() error {
// clone the utility VM from the parent layer into this layer. Use hard
// links to avoid unnecessary copying, since most of the files are
// immutable.
err = cloneTree(w.parentRoots[0], w.destRoot, utilityVMFilesPath, mutatedUtilityVMFiles)
err = cloneTree(w.parentRoots[0], w.destRoot, UtilityVMFilesPath, mutatedUtilityVMFiles)
if err != nil {
return fmt.Errorf("cloning the parent utility VM image failed: %s", err)
}
@@ -592,7 +601,7 @@ func (w *legacyLayerWriter) Add(name string, fileInfo *winio.FileBasicInfo) erro
return err
}
if name == utilityVMPath {
if name == UtilityVMPath {
return w.initUtilityVM()
}
@@ -601,11 +610,11 @@ func (w *legacyLayerWriter) Add(name string, fileInfo *winio.FileBasicInfo) erro
}
name = filepath.Clean(name)
if hasPathPrefix(name, utilityVMPath) {
if hasPathPrefix(name, UtilityVMPath) {
if !w.HasUtilityVM {
return errors.New("missing UtilityVM directory")
}
if !hasPathPrefix(name, utilityVMFilesPath) && name != utilityVMFilesPath {
if !hasPathPrefix(name, UtilityVMFilesPath) && name != UtilityVMFilesPath {
return errors.New("invalid UtilityVM layer")
}
createDisposition := uint32(winapi.FILE_OPEN)
@@ -699,7 +708,7 @@ func (w *legacyLayerWriter) Add(name string, fileInfo *winio.FileBasicInfo) erro
return err
}
if hasPathPrefix(name, hivesPath) {
if hasPathPrefix(name, HivesPath) {
w.backupWriter = winio.NewBackupFileWriter(f, false)
w.bufWriter.Reset(w.backupWriter)
} else {
@@ -731,14 +740,14 @@ func (w *legacyLayerWriter) AddLink(name string, target string) error {
// Look for cross-layer hard link targets in the parent layers, since
// nothing is in the destination path yet.
roots = w.parentRoots
} else if hasPathPrefix(target, utilityVMFilesPath) {
} else if hasPathPrefix(target, UtilityVMFilesPath) {
// Since the utility VM is fully cloned into the destination path
// already, look for cross-layer hard link targets directly in the
// destination path.
roots = []*os.File{w.destRoot}
}
if roots == nil || (!hasPathPrefix(name, filesPath) && !hasPathPrefix(name, utilityVMFilesPath)) {
if roots == nil || (!hasPathPrefix(name, filesPath) && !hasPathPrefix(name, UtilityVMFilesPath)) {
return errors.New("invalid hard link in layer")
}
@@ -777,7 +786,7 @@ func (w *legacyLayerWriter) Remove(name string) error {
name = filepath.Clean(name)
if hasPathPrefix(name, filesPath) {
w.Tombstones = append(w.Tombstones, name)
} else if hasPathPrefix(name, utilityVMFilesPath) {
} else if hasPathPrefix(name, UtilityVMFilesPath) {
err := w.initUtilityVM()
if err != nil {
return err

View File

@@ -0,0 +1,45 @@
package winapi
import (
"unsafe"
"github.com/Microsoft/go-winio/pkg/guid"
"golang.org/x/sys/windows"
)
type g = guid.GUID
type FsHandle uintptr
type StreamHandle uintptr
type CimFsFileMetadata struct {
Attributes uint32
FileSize int64
CreationTime windows.Filetime
LastWriteTime windows.Filetime
ChangeTime windows.Filetime
LastAccessTime windows.Filetime
SecurityDescriptorBuffer unsafe.Pointer
SecurityDescriptorSize uint32
ReparseDataBuffer unsafe.Pointer
ReparseDataSize uint32
ExtendedAttributes unsafe.Pointer
EACount uint32
}
//sys CimMountImage(imagePath string, fsName string, flags uint32, volumeID *g) (hr error) = cimfs.CimMountImage?
//sys CimDismountImage(volumeID *g) (hr error) = cimfs.CimDismountImage?
//sys CimCreateImage(imagePath string, oldFSName *uint16, newFSName *uint16, cimFSHandle *FsHandle) (hr error) = cimfs.CimCreateImage?
//sys CimCloseImage(cimFSHandle FsHandle) (hr error) = cimfs.CimCloseImage?
//sys CimCommitImage(cimFSHandle FsHandle) (hr error) = cimfs.CimCommitImage?
//sys CimCreateFile(cimFSHandle FsHandle, path string, file *CimFsFileMetadata, cimStreamHandle *StreamHandle) (hr error) = cimfs.CimCreateFile?
//sys CimCloseStream(cimStreamHandle StreamHandle) (hr error) = cimfs.CimCloseStream?
//sys CimWriteStream(cimStreamHandle StreamHandle, buffer uintptr, bufferSize uint32) (hr error) = cimfs.CimWriteStream?
//sys CimDeletePath(cimFSHandle FsHandle, path string) (hr error) = cimfs.CimDeletePath?
//sys CimCreateHardLink(cimFSHandle FsHandle, newPath string, oldPath string) (hr error) = cimfs.CimCreateHardLink?
//sys CimCreateAlternateStream(cimFSHandle FsHandle, path string, size uint64, cimStreamHandle *StreamHandle) (hr error) = cimfs.CimCreateAlternateStream?

View File

@@ -0,0 +1,37 @@
package winapi
// Offline registry management API
type ORHKey uintptr
type RegType uint32
const (
// Registry value types: https://docs.microsoft.com/en-us/windows/win32/sysinfo/registry-value-types
REG_TYPE_NONE RegType = 0
REG_TYPE_SZ RegType = 1
REG_TYPE_EXPAND_SZ RegType = 2
REG_TYPE_BINARY RegType = 3
REG_TYPE_DWORD RegType = 4
REG_TYPE_DWORD_LITTLE_ENDIAN RegType = 4
REG_TYPE_DWORD_BIG_ENDIAN RegType = 5
REG_TYPE_LINK RegType = 6
REG_TYPE_MULTI_SZ RegType = 7
REG_TYPE_RESOURCE_LIST RegType = 8
REG_TYPE_FULL_RESOURCE_DESCRIPTOR RegType = 9
REG_TYPE_RESOURCE_REQUIREMENTS_LIST RegType = 10
REG_TYPE_QWORD RegType = 11
REG_TYPE_QWORD_LITTLE_ENDIAN RegType = 11
)
//sys ORCreateHive(key *ORHKey) (win32err error) = offreg.ORCreateHive
//sys ORMergeHives(hiveHandles []ORHKey, result *ORHKey) (win32err error) = offreg.ORMergeHives
//sys OROpenHive(hivePath string, result *ORHKey) (win32err error) = offreg.OROpenHive
//sys ORCloseHive(handle ORHKey) (win32err error) = offreg.ORCloseHive
//sys ORSaveHive(handle ORHKey, hivePath string, osMajorVersion uint32, osMinorVersion uint32) (win32err error) = offreg.ORSaveHive
//sys OROpenKey(handle ORHKey, subKey string, result *ORHKey) (win32err error) = offreg.OROpenKey
//sys ORCloseKey(handle ORHKey) (win32err error) = offreg.ORCloseKey
//sys ORCreateKey(handle ORHKey, subKey string, class uintptr, options uint32, securityDescriptor uintptr, result *ORHKey, disposition *uint32) (win32err error) = offreg.ORCreateKey
//sys ORDeleteKey(handle ORHKey, subKey string) (win32err error) = offreg.ORDeleteKey
//sys ORGetValue(handle ORHKey, subKey string, value string, valueType *uint32, data *byte, dataLen *uint32) (win32err error) = offreg.ORGetValue
//sys ORSetValue(handle ORHKey, valueName string, valueType uint32, data *byte, dataLen uint32) (win32err error) = offreg.ORSetValue
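As with the CimFS block above, these //sys lines feed mkwinsyscall. A hedged sketch of stringing the offline-registry wrappers together, again written as if inside the winapi package (unsafe assumed imported); the subkey name, value layout and the 10/0 version numbers passed to ORSaveHive are illustrative.

func writeExampleHive(hivePath string) error {
	var hive, key ORHKey
	if err := ORCreateHive(&hive); err != nil {
		return err
	}
	defer ORCloseHive(hive)

	// Create a subkey and store a little-endian DWORD value under it.
	var disposition uint32
	if err := ORCreateKey(hive, `Software\Example`, 0, 0, 0, &key, &disposition); err != nil {
		return err
	}
	defer ORCloseKey(key)
	val := uint32(1)
	if err := ORSetValue(key, "Enabled", uint32(REG_TYPE_DWORD), (*byte)(unsafe.Pointer(&val)), 4); err != nil {
		return err
	}

	// ORSaveHive also takes the OS major/minor version the saved hive should target.
	return ORSaveHive(hive, hivePath, 10, 0)
}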

View File

@@ -1,5 +0,0 @@
package winapi
//sys ORCreateHive(key *syscall.Handle) (regerrno error) = offreg.ORCreateHive
//sys ORSaveHive(key syscall.Handle, file string, OsMajorVersion uint32, OsMinorVersion uint32) (regerrno error) = offreg.ORSaveHive
//sys ORCloseHive(key syscall.Handle) (regerrno error) = offreg.ORCloseHive

View File

@@ -80,3 +80,9 @@ func ConvertStringSetToSlice(buf []byte) ([]string, error) {
}
return nil, errors.New("string set malformed: missing null terminator at end of buffer")
}
// ParseUtf16LE parses a UTF-16LE byte array into a string (without passing
// through a uint16 or rune array).
func ParseUtf16LE(b []byte) string {
return windows.UTF16PtrToString((*uint16)(unsafe.Pointer(&b[0])))
}
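A worked example (note that windows.UTF16PtrToString stops at the first NUL code unit, so the buffer is assumed to be NUL-terminated):

// "hi" encoded as UTF-16LE, followed by a NUL terminator.
b := []byte{0x68, 0x00, 0x69, 0x00, 0x00, 0x00}
s := ParseUtf16LE(b) // s == "hi"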

View File

@@ -43,6 +43,7 @@ var (
modadvapi32 = windows.NewLazySystemDLL("advapi32.dll")
modbindfltapi = windows.NewLazySystemDLL("bindfltapi.dll")
modcfgmgr32 = windows.NewLazySystemDLL("cfgmgr32.dll")
modcimfs = windows.NewLazySystemDLL("cimfs.dll")
modiphlpapi = windows.NewLazySystemDLL("iphlpapi.dll")
modkernel32 = windows.NewLazySystemDLL("kernel32.dll")
modnetapi32 = windows.NewLazySystemDLL("netapi32.dll")
@@ -55,6 +56,17 @@ var (
procCM_Get_Device_ID_ListA = modcfgmgr32.NewProc("CM_Get_Device_ID_ListA")
procCM_Get_Device_ID_List_SizeA = modcfgmgr32.NewProc("CM_Get_Device_ID_List_SizeA")
procCM_Locate_DevNodeW = modcfgmgr32.NewProc("CM_Locate_DevNodeW")
procCimCloseImage = modcimfs.NewProc("CimCloseImage")
procCimCloseStream = modcimfs.NewProc("CimCloseStream")
procCimCommitImage = modcimfs.NewProc("CimCommitImage")
procCimCreateAlternateStream = modcimfs.NewProc("CimCreateAlternateStream")
procCimCreateFile = modcimfs.NewProc("CimCreateFile")
procCimCreateHardLink = modcimfs.NewProc("CimCreateHardLink")
procCimCreateImage = modcimfs.NewProc("CimCreateImage")
procCimDeletePath = modcimfs.NewProc("CimDeletePath")
procCimDismountImage = modcimfs.NewProc("CimDismountImage")
procCimMountImage = modcimfs.NewProc("CimMountImage")
procCimWriteStream = modcimfs.NewProc("CimWriteStream")
procSetJobCompartmentId = modiphlpapi.NewProc("SetJobCompartmentId")
procClosePseudoConsole = modkernel32.NewProc("ClosePseudoConsole")
procCopyFileW = modkernel32.NewProc("CopyFileW")
@@ -84,8 +96,16 @@ var (
procNtSetInformationFile = modntdll.NewProc("NtSetInformationFile")
procRtlNtStatusToDosError = modntdll.NewProc("RtlNtStatusToDosError")
procORCloseHive = modoffreg.NewProc("ORCloseHive")
procORCloseKey = modoffreg.NewProc("ORCloseKey")
procORCreateHive = modoffreg.NewProc("ORCreateHive")
procORCreateKey = modoffreg.NewProc("ORCreateKey")
procORDeleteKey = modoffreg.NewProc("ORDeleteKey")
procORGetValue = modoffreg.NewProc("ORGetValue")
procORMergeHives = modoffreg.NewProc("ORMergeHives")
procOROpenHive = modoffreg.NewProc("OROpenHive")
procOROpenKey = modoffreg.NewProc("OROpenKey")
procORSaveHive = modoffreg.NewProc("ORSaveHive")
procORSetValue = modoffreg.NewProc("ORSetValue")
)
func LogonUser(username *uint16, domain *uint16, password *uint16, logonType uint32, logonProvider uint32, token *windows.Token) (err error) {
@@ -164,6 +184,235 @@ func _CMLocateDevNode(pdnDevInst *uint32, pDeviceID *uint16, uFlags uint32) (hr
return
}
func CimCloseImage(cimFSHandle FsHandle) (hr error) {
hr = procCimCloseImage.Find()
if hr != nil {
return
}
r0, _, _ := syscall.Syscall(procCimCloseImage.Addr(), 1, uintptr(cimFSHandle), 0, 0)
if int32(r0) < 0 {
if r0&0x1fff0000 == 0x00070000 {
r0 &= 0xffff
}
hr = syscall.Errno(r0)
}
return
}
func CimCloseStream(cimStreamHandle StreamHandle) (hr error) {
hr = procCimCloseStream.Find()
if hr != nil {
return
}
r0, _, _ := syscall.Syscall(procCimCloseStream.Addr(), 1, uintptr(cimStreamHandle), 0, 0)
if int32(r0) < 0 {
if r0&0x1fff0000 == 0x00070000 {
r0 &= 0xffff
}
hr = syscall.Errno(r0)
}
return
}
func CimCommitImage(cimFSHandle FsHandle) (hr error) {
hr = procCimCommitImage.Find()
if hr != nil {
return
}
r0, _, _ := syscall.Syscall(procCimCommitImage.Addr(), 1, uintptr(cimFSHandle), 0, 0)
if int32(r0) < 0 {
if r0&0x1fff0000 == 0x00070000 {
r0 &= 0xffff
}
hr = syscall.Errno(r0)
}
return
}
func CimCreateAlternateStream(cimFSHandle FsHandle, path string, size uint64, cimStreamHandle *StreamHandle) (hr error) {
var _p0 *uint16
_p0, hr = syscall.UTF16PtrFromString(path)
if hr != nil {
return
}
return _CimCreateAlternateStream(cimFSHandle, _p0, size, cimStreamHandle)
}
func _CimCreateAlternateStream(cimFSHandle FsHandle, path *uint16, size uint64, cimStreamHandle *StreamHandle) (hr error) {
hr = procCimCreateAlternateStream.Find()
if hr != nil {
return
}
r0, _, _ := syscall.Syscall6(procCimCreateAlternateStream.Addr(), 4, uintptr(cimFSHandle), uintptr(unsafe.Pointer(path)), uintptr(size), uintptr(unsafe.Pointer(cimStreamHandle)), 0, 0)
if int32(r0) < 0 {
if r0&0x1fff0000 == 0x00070000 {
r0 &= 0xffff
}
hr = syscall.Errno(r0)
}
return
}
func CimCreateFile(cimFSHandle FsHandle, path string, file *CimFsFileMetadata, cimStreamHandle *StreamHandle) (hr error) {
var _p0 *uint16
_p0, hr = syscall.UTF16PtrFromString(path)
if hr != nil {
return
}
return _CimCreateFile(cimFSHandle, _p0, file, cimStreamHandle)
}
func _CimCreateFile(cimFSHandle FsHandle, path *uint16, file *CimFsFileMetadata, cimStreamHandle *StreamHandle) (hr error) {
hr = procCimCreateFile.Find()
if hr != nil {
return
}
r0, _, _ := syscall.Syscall6(procCimCreateFile.Addr(), 4, uintptr(cimFSHandle), uintptr(unsafe.Pointer(path)), uintptr(unsafe.Pointer(file)), uintptr(unsafe.Pointer(cimStreamHandle)), 0, 0)
if int32(r0) < 0 {
if r0&0x1fff0000 == 0x00070000 {
r0 &= 0xffff
}
hr = syscall.Errno(r0)
}
return
}
func CimCreateHardLink(cimFSHandle FsHandle, newPath string, oldPath string) (hr error) {
var _p0 *uint16
_p0, hr = syscall.UTF16PtrFromString(newPath)
if hr != nil {
return
}
var _p1 *uint16
_p1, hr = syscall.UTF16PtrFromString(oldPath)
if hr != nil {
return
}
return _CimCreateHardLink(cimFSHandle, _p0, _p1)
}
func _CimCreateHardLink(cimFSHandle FsHandle, newPath *uint16, oldPath *uint16) (hr error) {
hr = procCimCreateHardLink.Find()
if hr != nil {
return
}
r0, _, _ := syscall.Syscall(procCimCreateHardLink.Addr(), 3, uintptr(cimFSHandle), uintptr(unsafe.Pointer(newPath)), uintptr(unsafe.Pointer(oldPath)))
if int32(r0) < 0 {
if r0&0x1fff0000 == 0x00070000 {
r0 &= 0xffff
}
hr = syscall.Errno(r0)
}
return
}
func CimCreateImage(imagePath string, oldFSName *uint16, newFSName *uint16, cimFSHandle *FsHandle) (hr error) {
var _p0 *uint16
_p0, hr = syscall.UTF16PtrFromString(imagePath)
if hr != nil {
return
}
return _CimCreateImage(_p0, oldFSName, newFSName, cimFSHandle)
}
func _CimCreateImage(imagePath *uint16, oldFSName *uint16, newFSName *uint16, cimFSHandle *FsHandle) (hr error) {
hr = procCimCreateImage.Find()
if hr != nil {
return
}
r0, _, _ := syscall.Syscall6(procCimCreateImage.Addr(), 4, uintptr(unsafe.Pointer(imagePath)), uintptr(unsafe.Pointer(oldFSName)), uintptr(unsafe.Pointer(newFSName)), uintptr(unsafe.Pointer(cimFSHandle)), 0, 0)
if int32(r0) < 0 {
if r0&0x1fff0000 == 0x00070000 {
r0 &= 0xffff
}
hr = syscall.Errno(r0)
}
return
}
func CimDeletePath(cimFSHandle FsHandle, path string) (hr error) {
var _p0 *uint16
_p0, hr = syscall.UTF16PtrFromString(path)
if hr != nil {
return
}
return _CimDeletePath(cimFSHandle, _p0)
}
func _CimDeletePath(cimFSHandle FsHandle, path *uint16) (hr error) {
hr = procCimDeletePath.Find()
if hr != nil {
return
}
r0, _, _ := syscall.Syscall(procCimDeletePath.Addr(), 2, uintptr(cimFSHandle), uintptr(unsafe.Pointer(path)), 0)
if int32(r0) < 0 {
if r0&0x1fff0000 == 0x00070000 {
r0 &= 0xffff
}
hr = syscall.Errno(r0)
}
return
}
func CimDismountImage(volumeID *g) (hr error) {
hr = procCimDismountImage.Find()
if hr != nil {
return
}
r0, _, _ := syscall.Syscall(procCimDismountImage.Addr(), 1, uintptr(unsafe.Pointer(volumeID)), 0, 0)
if int32(r0) < 0 {
if r0&0x1fff0000 == 0x00070000 {
r0 &= 0xffff
}
hr = syscall.Errno(r0)
}
return
}
func CimMountImage(imagePath string, fsName string, flags uint32, volumeID *g) (hr error) {
var _p0 *uint16
_p0, hr = syscall.UTF16PtrFromString(imagePath)
if hr != nil {
return
}
var _p1 *uint16
_p1, hr = syscall.UTF16PtrFromString(fsName)
if hr != nil {
return
}
return _CimMountImage(_p0, _p1, flags, volumeID)
}
func _CimMountImage(imagePath *uint16, fsName *uint16, flags uint32, volumeID *g) (hr error) {
hr = procCimMountImage.Find()
if hr != nil {
return
}
r0, _, _ := syscall.Syscall6(procCimMountImage.Addr(), 4, uintptr(unsafe.Pointer(imagePath)), uintptr(unsafe.Pointer(fsName)), uintptr(flags), uintptr(unsafe.Pointer(volumeID)), 0, 0)
if int32(r0) < 0 {
if r0&0x1fff0000 == 0x00070000 {
r0 &= 0xffff
}
hr = syscall.Errno(r0)
}
return
}
func CimWriteStream(cimStreamHandle StreamHandle, buffer uintptr, bufferSize uint32) (hr error) {
hr = procCimWriteStream.Find()
if hr != nil {
return
}
r0, _, _ := syscall.Syscall(procCimWriteStream.Addr(), 3, uintptr(cimStreamHandle), uintptr(buffer), uintptr(bufferSize))
if int32(r0) < 0 {
if r0&0x1fff0000 == 0x00070000 {
r0 &= 0xffff
}
hr = syscall.Errno(r0)
}
return
}
func SetJobCompartmentId(handle windows.Handle, compartmentId uint32) (win32Err error) {
r0, _, _ := syscall.Syscall(procSetJobCompartmentId.Addr(), 2, uintptr(handle), uintptr(compartmentId), 0)
if r0 != 0 {
@@ -381,35 +630,162 @@ func RtlNtStatusToDosError(status uint32) (winerr error) {
return
}
func ORCloseHive(key syscall.Handle) (regerrno error) {
r0, _, _ := syscall.Syscall(procORCloseHive.Addr(), 1, uintptr(key), 0, 0)
func ORCloseHive(handle ORHKey) (win32err error) {
r0, _, _ := syscall.Syscall(procORCloseHive.Addr(), 1, uintptr(handle), 0, 0)
if r0 != 0 {
regerrno = syscall.Errno(r0)
win32err = syscall.Errno(r0)
}
return
}
func ORCreateHive(key *syscall.Handle) (regerrno error) {
func ORCloseKey(handle ORHKey) (win32err error) {
r0, _, _ := syscall.Syscall(procORCloseKey.Addr(), 1, uintptr(handle), 0, 0)
if r0 != 0 {
win32err = syscall.Errno(r0)
}
return
}
func ORCreateHive(key *ORHKey) (win32err error) {
r0, _, _ := syscall.Syscall(procORCreateHive.Addr(), 1, uintptr(unsafe.Pointer(key)), 0, 0)
if r0 != 0 {
regerrno = syscall.Errno(r0)
win32err = syscall.Errno(r0)
}
return
}
func ORSaveHive(key syscall.Handle, file string, OsMajorVersion uint32, OsMinorVersion uint32) (regerrno error) {
func ORCreateKey(handle ORHKey, subKey string, class uintptr, options uint32, securityDescriptor uintptr, result *ORHKey, disposition *uint32) (win32err error) {
var _p0 *uint16
_p0, regerrno = syscall.UTF16PtrFromString(file)
if regerrno != nil {
_p0, win32err = syscall.UTF16PtrFromString(subKey)
if win32err != nil {
return
}
return _ORSaveHive(key, _p0, OsMajorVersion, OsMinorVersion)
return _ORCreateKey(handle, _p0, class, options, securityDescriptor, result, disposition)
}
func _ORSaveHive(key syscall.Handle, file *uint16, OsMajorVersion uint32, OsMinorVersion uint32) (regerrno error) {
r0, _, _ := syscall.Syscall6(procORSaveHive.Addr(), 4, uintptr(key), uintptr(unsafe.Pointer(file)), uintptr(OsMajorVersion), uintptr(OsMinorVersion), 0, 0)
func _ORCreateKey(handle ORHKey, subKey *uint16, class uintptr, options uint32, securityDescriptor uintptr, result *ORHKey, disposition *uint32) (win32err error) {
r0, _, _ := syscall.Syscall9(procORCreateKey.Addr(), 7, uintptr(handle), uintptr(unsafe.Pointer(subKey)), uintptr(class), uintptr(options), uintptr(securityDescriptor), uintptr(unsafe.Pointer(result)), uintptr(unsafe.Pointer(disposition)), 0, 0)
if r0 != 0 {
regerrno = syscall.Errno(r0)
win32err = syscall.Errno(r0)
}
return
}
func ORDeleteKey(handle ORHKey, subKey string) (win32err error) {
var _p0 *uint16
_p0, win32err = syscall.UTF16PtrFromString(subKey)
if win32err != nil {
return
}
return _ORDeleteKey(handle, _p0)
}
func _ORDeleteKey(handle ORHKey, subKey *uint16) (win32err error) {
r0, _, _ := syscall.Syscall(procORDeleteKey.Addr(), 2, uintptr(handle), uintptr(unsafe.Pointer(subKey)), 0)
if r0 != 0 {
win32err = syscall.Errno(r0)
}
return
}
func ORGetValue(handle ORHKey, subKey string, value string, valueType *uint32, data *byte, dataLen *uint32) (win32err error) {
var _p0 *uint16
_p0, win32err = syscall.UTF16PtrFromString(subKey)
if win32err != nil {
return
}
var _p1 *uint16
_p1, win32err = syscall.UTF16PtrFromString(value)
if win32err != nil {
return
}
return _ORGetValue(handle, _p0, _p1, valueType, data, dataLen)
}
func _ORGetValue(handle ORHKey, subKey *uint16, value *uint16, valueType *uint32, data *byte, dataLen *uint32) (win32err error) {
r0, _, _ := syscall.Syscall6(procORGetValue.Addr(), 6, uintptr(handle), uintptr(unsafe.Pointer(subKey)), uintptr(unsafe.Pointer(value)), uintptr(unsafe.Pointer(valueType)), uintptr(unsafe.Pointer(data)), uintptr(unsafe.Pointer(dataLen)))
if r0 != 0 {
win32err = syscall.Errno(r0)
}
return
}
func ORMergeHives(hiveHandles []ORHKey, result *ORHKey) (win32err error) {
var _p0 *ORHKey
if len(hiveHandles) > 0 {
_p0 = &hiveHandles[0]
}
r0, _, _ := syscall.Syscall(procORMergeHives.Addr(), 3, uintptr(unsafe.Pointer(_p0)), uintptr(len(hiveHandles)), uintptr(unsafe.Pointer(result)))
if r0 != 0 {
win32err = syscall.Errno(r0)
}
return
}
func OROpenHive(hivePath string, result *ORHKey) (win32err error) {
var _p0 *uint16
_p0, win32err = syscall.UTF16PtrFromString(hivePath)
if win32err != nil {
return
}
return _OROpenHive(_p0, result)
}
func _OROpenHive(hivePath *uint16, result *ORHKey) (win32err error) {
r0, _, _ := syscall.Syscall(procOROpenHive.Addr(), 2, uintptr(unsafe.Pointer(hivePath)), uintptr(unsafe.Pointer(result)), 0)
if r0 != 0 {
win32err = syscall.Errno(r0)
}
return
}
func OROpenKey(handle ORHKey, subKey string, result *ORHKey) (win32err error) {
var _p0 *uint16
_p0, win32err = syscall.UTF16PtrFromString(subKey)
if win32err != nil {
return
}
return _OROpenKey(handle, _p0, result)
}
func _OROpenKey(handle ORHKey, subKey *uint16, result *ORHKey) (win32err error) {
r0, _, _ := syscall.Syscall(procOROpenKey.Addr(), 3, uintptr(handle), uintptr(unsafe.Pointer(subKey)), uintptr(unsafe.Pointer(result)))
if r0 != 0 {
win32err = syscall.Errno(r0)
}
return
}
func ORSaveHive(handle ORHKey, hivePath string, osMajorVersion uint32, osMinorVersion uint32) (win32err error) {
var _p0 *uint16
_p0, win32err = syscall.UTF16PtrFromString(hivePath)
if win32err != nil {
return
}
return _ORSaveHive(handle, _p0, osMajorVersion, osMinorVersion)
}
func _ORSaveHive(handle ORHKey, hivePath *uint16, osMajorVersion uint32, osMinorVersion uint32) (win32err error) {
r0, _, _ := syscall.Syscall6(procORSaveHive.Addr(), 4, uintptr(handle), uintptr(unsafe.Pointer(hivePath)), uintptr(osMajorVersion), uintptr(osMinorVersion), 0, 0)
if r0 != 0 {
win32err = syscall.Errno(r0)
}
return
}
func ORSetValue(handle ORHKey, valueName string, valueType uint32, data *byte, dataLen uint32) (win32err error) {
var _p0 *uint16
_p0, win32err = syscall.UTF16PtrFromString(valueName)
if win32err != nil {
return
}
return _ORSetValue(handle, _p0, valueType, data, dataLen)
}
func _ORSetValue(handle ORHKey, valueName *uint16, valueType uint32, data *byte, dataLen uint32) (win32err error) {
r0, _, _ := syscall.Syscall6(procORSetValue.Addr(), 5, uintptr(handle), uintptr(unsafe.Pointer(valueName)), uintptr(valueType), uintptr(unsafe.Pointer(data)), uintptr(dataLen), 0)
if r0 != 0 {
win32err = syscall.Errno(r0)
}
return
}

View File

@@ -32,6 +32,7 @@ func CreateScratchLayer(info DriverInfo, layerId, parentId string, parentLayerPa
func DeactivateLayer(info DriverInfo, id string) error {
return wclayer.DeactivateLayer(context.Background(), layerPath(&info, id))
}
func DestroyLayer(info DriverInfo, id string) error {
return wclayer.DestroyLayer(context.Background(), layerPath(&info, id))
}

View File

@@ -5,6 +5,7 @@ import (
"sync"
"golang.org/x/sys/windows"
"golang.org/x/sys/windows/registry"
)
// OSVersion is a wrapper for Windows version information
@@ -25,16 +26,15 @@ var (
// The calling application must be manifested to get the correct version information.
func Get() OSVersion {
once.Do(func() {
var err error
v := *windows.RtlGetVersion()
osv = OSVersion{}
osv.Version, err = windows.GetVersion()
if err != nil {
// GetVersion never fails.
panic(err)
}
osv.MajorVersion = uint8(osv.Version & 0xFF)
osv.MinorVersion = uint8(osv.Version >> 8 & 0xFF)
osv.Build = uint16(osv.Version >> 16)
osv.MajorVersion = uint8(v.MajorVersion)
osv.MinorVersion = uint8(v.MinorVersion)
osv.Build = uint16(v.BuildNumber)
// Fill version value so that existing clients don't break
osv.Version = v.BuildNumber << 16
osv.Version = osv.Version | (uint32(v.MinorVersion) << 8)
osv.Version = osv.Version | v.MajorVersion
})
return osv
}
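For example, on a host where RtlGetVersion reports 10.0.20348, the legacy packed Version value works out to (numbers illustrative):

v := uint32(20348)<<16 | uint32(0)<<8 | uint32(10) // 0x4F7C000A: build 20348, minor 0, major 10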
@@ -57,3 +57,18 @@ func (osv OSVersion) String() string {
func (osv OSVersion) ToString() string {
return osv.String()
}
// Running `cmd /c ver` shows something like "10.0.20348.1000". The last component ("1000") is the revision
// number
func BuildRevision() (uint32, error) {
k, err := registry.OpenKey(registry.LOCAL_MACHINE, `SOFTWARE\Microsoft\Windows NT\CurrentVersion`, registry.QUERY_VALUE)
if err != nil {
return 0, fmt.Errorf("open `CurrentVersion` registry key: %w", err)
}
defer k.Close()
s, _, err := k.GetIntegerValue("UBR")
if err != nil {
return 0, fmt.Errorf("read `UBR` from registry: %w", err)
}
return uint32(s), nil
}
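A hedged caller sketch combining Get and BuildRevision to reproduce the full four-part string from the comment above (the import path github.com/Microsoft/hcsshim/osversion and the fallback to 0 on error are assumptions about the caller, not part of this package):

osv := osversion.Get()
rev, err := osversion.BuildRevision()
if err != nil {
	rev = 0 // e.g. the UBR value may be unreadable on some SKUs
}
fmt.Printf("%d.%d.%d.%d\n", osv.MajorVersion, osv.MinorVersion, osv.Build, rev)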

View File

@@ -0,0 +1,35 @@
package osversion
// List of stable ABI compliant ltsc releases
// Note: List must be sorted in ascending order
var compatLTSCReleases = []uint16{
V21H2Server,
}
// CheckHostAndContainerCompat checks if given host and container
// OS versions are compatible.
// It includes support for stable ABI compliant versions as well.
// Every release after WS 2022 will support the previous ltsc
// container image. Stable ABI is in preview mode for Windows 11 client.
// Refer: https://learn.microsoft.com/en-us/virtualization/windowscontainers/deploy-containers/version-compatibility?tabs=windows-server-2022%2Cwindows-10#windows-server-host-os-compatibility
func CheckHostAndContainerCompat(host, ctr OSVersion) bool {
// check major minor versions of host and guest
if host.MajorVersion != ctr.MajorVersion ||
host.MinorVersion != ctr.MinorVersion {
return false
}
// If host is < WS 2022, exact version match is required
if host.Build < V21H2Server {
return host.Build == ctr.Build
}
var supportedLtscRelease uint16
for i := len(compatLTSCReleases) - 1; i >= 0; i-- {
if host.Build >= compatLTSCReleases[i] {
supportedLtscRelease = compatLTSCReleases[i]
break
}
}
return ctr.Build >= supportedLtscRelease && ctr.Build <= host.Build
}
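An illustrative check using the rule above (the build numbers are examples; V21H2Server is the WS 2022 build constant defined elsewhere in this package):

host := osversion.OSVersion{MajorVersion: 10, MinorVersion: 0, Build: 25398}                 // a host newer than WS 2022
ctr := osversion.OSVersion{MajorVersion: 10, MinorVersion: 0, Build: osversion.V21H2Server}  // an ltsc2022 image
fmt.Println(osversion.CheckHostAndContainerCompat(host, ctr)) // true: newer hosts accept the previous ltsc image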

View File

@@ -1,5 +0,0 @@
//go:build tools
package hcsshim
import _ "github.com/Microsoft/go-winio/tools/mkwinsyscall"

File diff suppressed because it is too large.

View File

@@ -14,4 +14,4 @@
limitations under the License.
*/
package v1
package stats

File diff suppressed because it is too large.

View File

@@ -1,7 +1,6 @@
file {
name: "github.com/containerd/cgroups/stats/v1/metrics.proto"
name: "github.com/containerd/cgroups/cgroup1/stats/metrics.proto"
package: "io.containerd.cgroups.v1"
dependency: "gogoproto/gogo.proto"
message_type {
name: "Metrics"
field {
@@ -26,9 +25,6 @@ file {
label: LABEL_OPTIONAL
type: TYPE_MESSAGE
type_name: ".io.containerd.cgroups.v1.CPUStat"
options {
65004: "CPU"
}
json_name: "cpu"
}
field {
@@ -175,9 +171,6 @@ file {
number: 4
label: LABEL_REPEATED
type: TYPE_UINT64
options {
65004: "PerCPU"
}
json_name: "perCpu"
}
}
@@ -219,9 +212,6 @@ file {
number: 2
label: LABEL_OPTIONAL
type: TYPE_UINT64
options {
65004: "RSS"
}
json_name: "rss"
}
field {
@@ -229,9 +219,6 @@ file {
number: 3
label: LABEL_OPTIONAL
type: TYPE_UINT64
options {
65004: "RSSHuge"
}
json_name: "rssHuge"
}
field {
@@ -344,9 +331,6 @@ file {
number: 19
label: LABEL_OPTIONAL
type: TYPE_UINT64
options {
65004: "TotalRSS"
}
json_name: "totalRss"
}
field {
@@ -354,9 +338,6 @@ file {
number: 20
label: LABEL_OPTIONAL
type: TYPE_UINT64
options {
65004: "TotalRSSHuge"
}
json_name: "totalRssHuge"
}
field {
@@ -473,9 +454,6 @@ file {
label: LABEL_OPTIONAL
type: TYPE_MESSAGE
type_name: ".io.containerd.cgroups.v1.MemoryEntry"
options {
65004: "KernelTCP"
}
json_name: "kernelTcp"
}
}
@@ -786,5 +764,8 @@ file {
json_name: "nrIoWait"
}
}
options {
go_package: "github.com/containerd/cgroups/cgroup1/stats"
}
syntax: "proto3"
}

View File

@@ -2,12 +2,12 @@ syntax = "proto3";
package io.containerd.cgroups.v1;
import "gogoproto/gogo.proto";
option go_package = "github.com/containerd/cgroups/cgroup1/stats";
message Metrics {
repeated HugetlbStat hugetlb = 1;
PidsStat pids = 2;
CPUStat cpu = 3 [(gogoproto.customname) = "CPU"];
CPUStat cpu = 3;
MemoryStat memory = 4;
BlkIOStat blkio = 5;
RdmaStat rdma = 6;
@@ -38,7 +38,7 @@ message CPUUsage {
uint64 total = 1;
uint64 kernel = 2;
uint64 user = 3;
repeated uint64 per_cpu = 4 [(gogoproto.customname) = "PerCPU"];
repeated uint64 per_cpu = 4;
}
@@ -50,8 +50,8 @@ message Throttle {
message MemoryStat {
uint64 cache = 1;
uint64 rss = 2 [(gogoproto.customname) = "RSS"];
uint64 rss_huge = 3 [(gogoproto.customname) = "RSSHuge"];
uint64 rss = 2;
uint64 rss_huge = 3;
uint64 mapped_file = 4;
uint64 dirty = 5;
uint64 writeback = 6;
@@ -67,8 +67,8 @@ message MemoryStat {
uint64 hierarchical_memory_limit = 16;
uint64 hierarchical_swap_limit = 17;
uint64 total_cache = 18;
uint64 total_rss = 19 [(gogoproto.customname) = "TotalRSS"];
uint64 total_rss_huge = 20 [(gogoproto.customname) = "TotalRSSHuge"];
uint64 total_rss = 19;
uint64 total_rss_huge = 20;
uint64 total_mapped_file = 21;
uint64 total_dirty = 22;
uint64 total_writeback = 23;
@@ -84,7 +84,7 @@ message MemoryStat {
MemoryEntry usage = 33;
MemoryEntry swap = 34;
MemoryEntry kernel = 35;
MemoryEntry kernel_tcp = 36 [(gogoproto.customname) = "KernelTCP"];
MemoryEntry kernel_tcp = 36;
}
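The practical effect of dropping the gogoproto.customname options above is that regenerated Go structs fall back to protoc-gen-go's default field naming; the before/after names below are the expected defaults, not copied from the regenerated metrics.pb.go:

// before: Metrics.CPU, CPUUsage.PerCPU, MemoryStat.RSS / RSSHuge / TotalRSS / KernelTCP
// after:  Metrics.Cpu, CPUUsage.PerCpu, MemoryStat.Rss / RssHuge / TotalRss / KernelTcp
type MemoryStat struct {
	Rss     uint64 // was RSS
	RssHuge uint64 // was RSSHuge
}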

View File

@@ -1,72 +0,0 @@
/*
Copyright The containerd Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package log
import (
"context"
"github.com/sirupsen/logrus"
)
var (
// G is an alias for GetLogger.
//
// We may want to define this locally to a package to get package tagged log
// messages.
G = GetLogger
// L is an alias for the standard logger.
L = logrus.NewEntry(logrus.StandardLogger())
)
type (
loggerKey struct{}
// Fields type to pass to `WithFields`, alias from `logrus`.
Fields = logrus.Fields
)
const (
// RFC3339NanoFixed is time.RFC3339Nano with nanoseconds padded using zeros to
// ensure the formatted time is always the same number of characters.
RFC3339NanoFixed = "2006-01-02T15:04:05.000000000Z07:00"
// TextFormat represents the text logging format
TextFormat = "text"
// JSONFormat represents the JSON logging format
JSONFormat = "json"
)
// WithLogger returns a new context with the provided logger. Use in
// combination with logger.WithField(s) for great effect.
func WithLogger(ctx context.Context, logger *logrus.Entry) context.Context {
e := logger.WithContext(ctx)
return context.WithValue(ctx, loggerKey{}, e)
}
// GetLogger retrieves the current logger from the context. If no logger is
// available, the default logger is returned.
func GetLogger(ctx context.Context) *logrus.Entry {
logger := ctx.Value(loggerKey{})
if logger == nil {
return L.WithContext(ctx)
}
return logger.(*logrus.Entry)
}

View File

@@ -436,9 +436,8 @@ func importTar(in io.ReaderAt) (*tarFile, error) {
if err != nil {
if err == io.EOF {
break
} else {
return nil, fmt.Errorf("failed to parse tar file, %w", err)
}
return nil, fmt.Errorf("failed to parse tar file, %w", err)
}
switch cleanEntryName(h.Name) {
case PrefetchLandmark, NoPrefetchLandmark:

View File

@@ -10,13 +10,14 @@ import (
"path/filepath"
"strings"
passwd "github.com/containers/common/pkg/password"
"github.com/containers/image/v5/docker"
"github.com/containers/image/v5/docker/reference"
"github.com/containers/image/v5/pkg/docker/config"
"github.com/containers/image/v5/pkg/sysregistriesv2"
"github.com/containers/image/v5/types"
"github.com/containers/storage/pkg/homedir"
"github.com/sirupsen/logrus"
terminal "golang.org/x/term"
)
// ErrNewCredentialsInvalid means that the new user-provided credentials are
@@ -39,33 +40,46 @@ func (e ErrNewCredentialsInvalid) Unwrap() error {
// GetDefaultAuthFile returns env value REGISTRY_AUTH_FILE as default
// --authfile path used in multiple --authfile flag definitions
// Will fail over to DOCKER_CONFIG if REGISTRY_AUTH_FILE environment is not set
//
// WARNINGS:
// - In almost all invocations, expect this function to return ""; so it can not be used
// for directly accessing the file.
// - Use this only for commands that _read_ credentials, not write them.
// The path may refer to github.com/containers auth.json, or to Docker config.json,
// and the distinction is lost; writing auth.json data to config.json may not be consumable by Docker,
// or it may overwrite and discard unrelated Docker configuration set by the user.
func GetDefaultAuthFile() string {
// Keep this in sync with the default logic in systemContextWithOptions!
if authfile := os.Getenv("REGISTRY_AUTH_FILE"); authfile != "" {
return authfile
}
// This pre-existing behavior is not conceptually consistent:
// If users have a ~/.docker/config.json in the default path, and no environment variable
// set, we read auth.json first, falling back to config.json;
// but if DOCKER_CONFIG is set, we read only config.json in that path, and we don't read auth.json at all.
if authEnv := os.Getenv("DOCKER_CONFIG"); authEnv != "" {
return filepath.Join(authEnv, "config.json")
}
return ""
}
// CheckAuthFile validates filepath given by --authfile
// used by command has --authfile flag
func CheckAuthFile(authfile string) error {
if authfile == "" {
// CheckAuthFile validates a path option, failing if the option is set but the referenced file is not accessible.
func CheckAuthFile(pathOption string) error {
if pathOption == "" {
return nil
}
if _, err := os.Stat(authfile); err != nil {
return fmt.Errorf("checking authfile: %w", err)
if _, err := os.Stat(pathOption); err != nil {
return fmt.Errorf("credential file is not accessible: %w", err)
}
return nil
}
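A minimal caller-side sketch consistent with the warnings above, as it might appear inside a CLI command handler (the auth package alias and the explicitAuthFile variable are hypothetical; only an explicitly supplied path is validated, since GetDefaultAuthFile usually returns ""):

if err := auth.CheckAuthFile(explicitAuthFile); err != nil {
	return err // fails only when the user pointed --authfile at an inaccessible file
}
path := explicitAuthFile
if path == "" {
	path = auth.GetDefaultAuthFile() // may still be ""; c/image then applies its own defaults
}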
// systemContextWithOptions returns a version of sys
// updated with authFile and certDir values (if they are not "").
// updated with authFile, dockerCompatAuthFile and certDir values (if they are not "").
// NOTE: this is a shallow copy that can be used and updated, but may share
// data with the original parameter.
func systemContextWithOptions(sys *types.SystemContext, authFile, certDir string) *types.SystemContext {
func systemContextWithOptions(sys *types.SystemContext, authFile, dockerCompatAuthFile, certDir string) (*types.SystemContext, error) {
if sys != nil {
sysCopy := *sys
sys = &sysCopy
@@ -73,24 +87,50 @@ func systemContextWithOptions(sys *types.SystemContext, authFile, certDir string
sys = &types.SystemContext{}
}
if authFile != "" {
defaultDockerConfigPath := filepath.Join(homedir.Get(), ".docker", "config.json")
switch {
case authFile != "" && dockerCompatAuthFile != "":
return nil, errors.New("options for paths to the credential file and to the Docker-compatible credential file can not be set simultaneously")
case authFile != "":
if authFile == defaultDockerConfigPath {
logrus.Warn("saving credentials to ~/.docker/config.json, but not using Docker-compatible file format")
}
sys.AuthFilePath = authFile
case dockerCompatAuthFile != "":
sys.DockerCompatAuthFilePath = dockerCompatAuthFile
default:
// Keep this in sync with GetDefaultAuthFile()!
//
// Note that c/image does not natively implement the REGISTRY_AUTH_FILE
// variable, so not all callers look for credentials in this location.
if authFileVar := os.Getenv("REGISTRY_AUTH_FILE"); authFileVar != "" {
if authFileVar == defaultDockerConfigPath {
logrus.Warn("$REGISTRY_AUTH_FILE points to ~/.docker/config.json, but the file format is not fully compatible; use the Docker-compatible file path option instead")
}
sys.AuthFilePath = authFileVar
} else if dockerConfig := os.Getenv("DOCKER_CONFIG"); dockerConfig != "" {
// This preserves pre-existing _inconsistent_ behavior:
// If the Docker configuration exists in the default ~/.docker/config.json location,
// we DO NOT write to it; instead, we update auth.json in the default path.
// Only if the user explicitly sets DOCKER_CONFIG, we write to that config.json.
sys.DockerCompatAuthFilePath = filepath.Join(dockerConfig, "config.json")
}
}
if certDir != "" {
sys.DockerCertPath = certDir
}
return sys
return sys, nil
}
// Login implements a “log in” command with the provided opts and args
// reading the password from opts.Stdin or the options in opts.
func Login(ctx context.Context, systemContext *types.SystemContext, opts *LoginOptions, args []string) error {
systemContext = systemContextWithOptions(systemContext, opts.AuthFile, opts.CertDir)
systemContext, err := systemContextWithOptions(systemContext, opts.AuthFile, opts.DockerCompatAuthFile, opts.CertDir)
if err != nil {
return err
}
var (
key, registry string
err error
)
var key, registry string
switch len(args) {
case 0:
if !opts.AcceptUnspecifiedRegistry {
@@ -259,7 +299,7 @@ func getUserAndPass(opts *LoginOptions, password, userFromAuthFile string) (user
if err != nil {
return "", "", fmt.Errorf("reading username: %w", err)
}
// If the user just hit enter, use the displayed user from the
// If the user just hit enter, use the displayed user from
// the authentication file. This allows to do a lazy
// `$ buildah login -p $NEW_PASSWORD` without specifying the
// user.
@@ -269,7 +309,7 @@ func getUserAndPass(opts *LoginOptions, password, userFromAuthFile string) (user
}
if password == "" {
fmt.Fprint(opts.Stdout, "Password: ")
pass, err := terminal.ReadPassword(int(os.Stdin.Fd()))
pass, err := passwd.Read(int(os.Stdin.Fd()))
if err != nil {
return "", "", fmt.Errorf("reading password: %w", err)
}
@@ -284,7 +324,13 @@ func Logout(systemContext *types.SystemContext, opts *LogoutOptions, args []stri
if err := CheckAuthFile(opts.AuthFile); err != nil {
return err
}
systemContext = systemContextWithOptions(systemContext, opts.AuthFile, "")
if err := CheckAuthFile(opts.DockerCompatAuthFile); err != nil {
return err
}
systemContext, err := systemContextWithOptions(systemContext, opts.AuthFile, opts.DockerCompatAuthFile, "")
if err != nil {
return err
}
if opts.All {
if len(args) != 0 {
@@ -297,10 +343,7 @@ func Logout(systemContext *types.SystemContext, opts *LogoutOptions, args []stri
return nil
}
var (
key, registry string
err error
)
var key, registry string
switch len(args) {
case 0:
if !opts.AcceptUnspecifiedRegistry {
@@ -336,7 +379,7 @@ func Logout(systemContext *types.SystemContext, opts *LogoutOptions, args []stri
authInvalid := docker.CheckAuth(context.Background(), systemContext, authConfig.Username, authConfig.Password, registry)
if authConfig.Username != "" && authConfig.Password != "" && authInvalid == nil {
fmt.Printf("Not logged into %s with current tool. Existing credentials were established via docker login. Please use docker logout instead.\n", key)
fmt.Printf("Not logged into %s with current tool. Existing credentials were established via docker login. Please use docker logout instead.\n", key) //nolint:forbidigo
return nil
}
return fmt.Errorf("not logged into %s", key)

View File

@@ -14,14 +14,15 @@ type LoginOptions struct {
// CLI flags managed by the FlagSet returned by GetLoginFlags
// Callers that use GetLoginFlags should not need to touch these values at all; callers that use
// other CLI frameworks should set them based on user input.
AuthFile string
CertDir string
Password string
Username string
StdinPassword bool
GetLoginSet bool
Verbose bool // set to true for verbose output
AcceptRepositories bool // set to true to allow namespaces or repositories rather than just registries
AuthFile string
DockerCompatAuthFile string
CertDir string
Password string
Username string
StdinPassword bool
GetLoginSet bool
Verbose bool // set to true for verbose output
AcceptRepositories bool // set to true to allow namespaces or repositories rather than just registries
// Options caller can set
Stdin io.Reader // set to os.Stdin
Stdout io.Writer // set to os.Stdout
@@ -34,9 +35,10 @@ type LogoutOptions struct {
// CLI flags managed by the FlagSet returned by GetLogoutFlags
// Callers that use GetLogoutFlags should not need to touch these values at all; callers that use
// other CLI frameworks should set them based on user input.
AuthFile string
All bool
AcceptRepositories bool // set to true to allow namespaces or repositories rather than just registries
AuthFile string
DockerCompatAuthFile string
All bool
AcceptRepositories bool // set to true to allow namespaces or repositories rather than just registries
// Options caller can set
Stdout io.Writer // set to os.Stdout
AcceptUnspecifiedRegistry bool // set to true if allows logout with unspecified registry
@@ -45,7 +47,8 @@ type LogoutOptions struct {
// GetLoginFlags defines and returns login flags for containers tools
func GetLoginFlags(flags *LoginOptions) *pflag.FlagSet {
fs := pflag.FlagSet{}
fs.StringVar(&flags.AuthFile, "authfile", GetDefaultAuthFile(), "path of the authentication file. Use REGISTRY_AUTH_FILE environment variable to override")
fs.StringVar(&flags.AuthFile, "authfile", "", "path of the authentication file. Use REGISTRY_AUTH_FILE environment variable to override")
fs.StringVar(&flags.DockerCompatAuthFile, "compat-auth-file", "", "path of a Docker-compatible config file to update instead")
fs.StringVar(&flags.CertDir, "cert-dir", "", "use certificates at the specified path to access the registry")
fs.StringVarP(&flags.Password, "password", "p", "", "Password for registry")
fs.StringVarP(&flags.Username, "username", "u", "", "Username for registry")
@@ -59,6 +62,7 @@ func GetLoginFlags(flags *LoginOptions) *pflag.FlagSet {
func GetLoginFlagsCompletions() completion.FlagCompletions {
flagCompletion := completion.FlagCompletions{}
flagCompletion["authfile"] = completion.AutocompleteDefault
flagCompletion["compat-auth-file"] = completion.AutocompleteDefault
flagCompletion["cert-dir"] = completion.AutocompleteDefault
flagCompletion["password"] = completion.AutocompleteNone
flagCompletion["username"] = completion.AutocompleteNone
@@ -68,7 +72,8 @@ func GetLoginFlagsCompletions() completion.FlagCompletions {
// GetLogoutFlags defines and returns logout flags for containers tools
func GetLogoutFlags(flags *LogoutOptions) *pflag.FlagSet {
fs := pflag.FlagSet{}
fs.StringVar(&flags.AuthFile, "authfile", GetDefaultAuthFile(), "path of the authentication file. Use REGISTRY_AUTH_FILE environment variable to override")
fs.StringVar(&flags.AuthFile, "authfile", "", "path of the authentication file. Use REGISTRY_AUTH_FILE environment variable to override")
fs.StringVar(&flags.DockerCompatAuthFile, "compat-auth-file", "", "path of a Docker-compatible config file to update instead")
fs.BoolVarP(&flags.All, "all", "a", false, "Remove the cached credentials for all registries in the auth file")
return &fs
}
@@ -77,5 +82,6 @@ func GetLogoutFlags(flags *LogoutOptions) *pflag.FlagSet {
func GetLogoutFlagsCompletions() completion.FlagCompletions {
flagCompletion := completion.FlagCompletions{}
flagCompletion["authfile"] = completion.AutocompleteDefault
flagCompletion["compat-auth-file"] = completion.AutocompleteDefault
return flagCompletion
}
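A hedged wiring example for the new --compat-auth-file flag (the registry name and file path are placeholders, and parsing is shown against the returned pflag.FlagSet directly rather than a full CLI framework; context, os and the auth package import are assumed):

opts := auth.LoginOptions{Stdin: os.Stdin, Stdout: os.Stdout}
fs := auth.GetLoginFlags(&opts)
// Both --authfile and --compat-auth-file now default to "", so the
// REGISTRY_AUTH_FILE / DOCKER_CONFIG fallback is resolved inside Login,
// not at flag-definition time; setting both flags is rejected there.
_ = fs.Parse([]string{"--compat-auth-file", "/home/user/.docker/config.json"})
err := auth.Login(context.Background(), nil, &opts, []string{"registry.example.com"})
// handle err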

View File

@@ -165,7 +165,7 @@ func (ob *optionalIntValue) String() string {
if !ob.present {
return "" // If the value is not present, just return an empty string, any other value wouldn't make sense.
}
return strconv.Itoa(int(ob.value))
return strconv.Itoa(ob.value)
}
// Type returns the int's type.

View File

@@ -0,0 +1,57 @@
//go:build linux || darwin || freebsd
// +build linux darwin freebsd
package password
import (
"errors"
"os"
"os/signal"
"syscall"
terminal "golang.org/x/term"
)
var ErrInterrupt = errors.New("interrupted")
// Read reads a password from the terminal without echo.
func Read(fd int) ([]byte, error) {
// Store and restore the terminal status on interruptions to
// avoid that the terminal remains in the password state
// This is necessary as for https://github.com/golang/go/issues/31180
oldState, err := terminal.GetState(fd)
if err != nil {
return make([]byte, 0), err
}
type Buffer struct {
Buffer []byte
Error error
}
errorChannel := make(chan Buffer, 1)
// SIGINT and SIGTERM restore the terminal, otherwise the no-echo mode would remain intact
interruptChannel := make(chan os.Signal, 1)
signal.Notify(interruptChannel, syscall.SIGINT, syscall.SIGTERM)
defer func() {
signal.Stop(interruptChannel)
close(interruptChannel)
}()
go func() {
for range interruptChannel {
if oldState != nil {
_ = terminal.Restore(fd, oldState)
}
errorChannel <- Buffer{Buffer: make([]byte, 0), Error: ErrInterrupt}
}
}()
go func() {
buf, err := terminal.ReadPassword(fd)
errorChannel <- Buffer{Buffer: buf, Error: err}
}()
buf := <-errorChannel
return buf.Buffer, buf.Error
}
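A caller sketch mirroring how the Login code above switched from terminal.ReadPassword to this helper (prompt text and error wrapping are illustrative; the package is imported here under its package name, password):

fmt.Fprint(os.Stdout, "Password: ")
pass, err := password.Read(int(os.Stdin.Fd()))
fmt.Fprintln(os.Stdout)
if errors.Is(err, password.ErrInterrupt) {
	err = fmt.Errorf("password prompt interrupted: %w", err)
}
// handle err, then use pass and discard it as soon as possible
_ = pass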

View File

@@ -0,0 +1,21 @@
//go:build windows
// +build windows
package password
import (
terminal "golang.org/x/term"
)
// Read reads a password from the terminal.
func Read(fd int) ([]byte, error) {
oldState, err := terminal.GetState(fd)
if err != nil {
return make([]byte, 0), err
}
buf, err := terminal.ReadPassword(fd)
if oldState != nil {
_ = terminal.Restore(fd, oldState)
}
return buf, err
}

View File

@@ -51,8 +51,7 @@ func Split(src string) (entries []string) {
}
entries = []string{}
var runes [][]rune
lastClass := 0
class := 0
var class, lastClass int
// split into fields based on class of unicode character
for _, r := range src {
switch {

View File

@@ -74,7 +74,6 @@ func IsErrorRetryable(err error) bool {
}
switch e := err.(type) {
case errcode.Error:
switch e.Code {
case errcode.ErrorCodeUnauthorized, errcode.ErrorCodeDenied,

View File

@@ -83,12 +83,12 @@ func (ic *imageCopier) copyBlobFromStream(ctx context.Context, srcReader io.Read
return types.BlobInfo{}, err
}
// === Report progress using the ic.c.progress channel, if required.
if ic.c.progress != nil && ic.c.progressInterval > 0 {
// === Report progress using the ic.c.options.Progress channel, if required.
if ic.c.options.Progress != nil && ic.c.options.ProgressInterval > 0 {
progressReader := newProgressReader(
stream.reader,
ic.c.progress,
ic.c.progressInterval,
ic.c.options.Progress,
ic.c.options.ProgressInterval,
srcInfo,
)
defer progressReader.reportDone()

View File

@@ -284,10 +284,24 @@ func (d *bpCompressionStepData) recordValidatedDigestData(c *copier, uploadedInf
}
}
if d.uploadedCompressorName != "" && d.uploadedCompressorName != internalblobinfocache.UnknownCompression {
c.blobInfoCache.RecordDigestCompressorName(uploadedInfo.Digest, d.uploadedCompressorName)
if d.uploadedCompressorName != compressiontypes.ZstdChunkedAlgorithmName {
// HACK: Don't record zstd:chunked algorithms.
// There is already a similar hack in internal/imagedestination/impl/helpers.BlobMatchesRequiredCompression,
// and that one prevents reusing zstd:chunked blobs, so recording the algorithm here would be mostly harmless.
//
// We skip that here anyway to work around the inability of blobPipelineDetectCompressionStep to differentiate
// between zstd and zstd:chunked; so we could, in varying situations over time, call RecordDigestCompressorName
// with the same digest and both ZstdAlgorithmName and ZstdChunkedAlgorithmName, which causes warnings about
// inconsistent data to be logged.
c.blobInfoCache.RecordDigestCompressorName(uploadedInfo.Digest, d.uploadedCompressorName)
}
}
if srcInfo.Digest != "" && d.srcCompressorName != "" && d.srcCompressorName != internalblobinfocache.UnknownCompression {
c.blobInfoCache.RecordDigestCompressorName(srcInfo.Digest, d.srcCompressorName)
if srcInfo.Digest != "" && srcInfo.Digest != uploadedInfo.Digest &&
d.srcCompressorName != "" && d.srcCompressorName != internalblobinfocache.UnknownCompression {
if d.srcCompressorName != compressiontypes.ZstdChunkedAlgorithmName {
// HACK: Don't record zstd:chunked algorithms, see above.
c.blobInfoCache.RecordDigestCompressorName(srcInfo.Digest, d.srcCompressorName)
}
}
return nil
}

View File

@@ -17,6 +17,7 @@ import (
"github.com/containers/image/v5/internal/private"
"github.com/containers/image/v5/manifest"
"github.com/containers/image/v5/pkg/blobinfocache"
compression "github.com/containers/image/v5/pkg/compression/types"
"github.com/containers/image/v5/signature"
"github.com/containers/image/v5/signature/signer"
"github.com/containers/image/v5/transports"
@@ -126,36 +127,58 @@ type Options struct {
// Download layer contents with "nondistributable" media types ("foreign" layers) and translate the layer media type
// to not indicate "nondistributable".
DownloadForeignLayers bool
// Contains slice of OptionCompressionVariant, where copy will ensure that for each platform
// in the manifest list, a variant with the requested compression will exist.
// Invalid when copying a non-multi-architecture image. That will probably
// change in the future.
EnsureCompressionVariantsExist []OptionCompressionVariant
// ForceCompressionFormat ensures that the compression algorithm set in
// DestinationCtx.CompressionFormat is used exclusively, and blobs of other
// compression algorithms are not reused.
ForceCompressionFormat bool
}
// OptionCompressionVariant allows the end user to supply information about the
// selected compression algorithm and compression level. Refer to
// EnsureCompressionVariantsExist to learn more about its usage.
type OptionCompressionVariant struct {
Algorithm compression.Algorithm
Level *int // Only used when we are creating a new image instance using the specified algorithm, not when the image already contains such an instance
}
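A hedged example of asking copy.Image to guarantee a zstd variant for every platform in a manifest list (ctx, policyContext, destRef and srcRef are assumed to exist in the caller; the compression.Zstd algorithm value from pkg/compression and the level are illustrative, and per the field comment Level only matters when a new instance actually has to be created):

level := 3
manifestBytes, err := copy.Image(ctx, policyContext, destRef, srcRef, &copy.Options{
	ImageListSelection: copy.CopyAllImages,
	EnsureCompressionVariantsExist: []copy.OptionCompressionVariant{
		{Algorithm: compression.Zstd, Level: &level},
	},
})
_ = manifestBytes // the manifest list that was written to destRef
// handle err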
// copier allows us to keep track of diffID values for blobs, and other
// data shared across one or more images in a possible manifest list.
// The owner must call close() when done.
type copier struct {
dest private.ImageDestination
rawSource private.ImageSource
reportWriter io.Writer
progressOutput io.Writer
progressInterval time.Duration
progress chan types.ProgressProperties
policyContext *signature.PolicyContext
dest private.ImageDestination
rawSource private.ImageSource
options *Options // never nil
reportWriter io.Writer
progressOutput io.Writer
unparsedToplevel *image.UnparsedImage // for rawSource
blobInfoCache internalblobinfocache.BlobInfoCache2
ociDecryptConfig *encconfig.DecryptConfig
ociEncryptConfig *encconfig.EncryptConfig
concurrentBlobCopiesSemaphore *semaphore.Weighted // Limits the amount of concurrently copied blobs
downloadForeignLayers bool
signers []*signer.Signer // Signers to use to create new signatures for the image
signersToClose []*signer.Signer // Signers that should be closed when this copier is destroyed.
signers []*signer.Signer // Signers to use to create new signatures for the image
signersToClose []*signer.Signer // Signers that should be closed when this copier is destroyed.
}
// Internal function to validate `requireCompressionFormatMatch` for copySingleImageOptions
func shouldRequireCompressionFormatMatch(options *Options) (bool, error) {
if options.ForceCompressionFormat && (options.DestinationCtx == nil || options.DestinationCtx.CompressionFormat == nil) {
return false, fmt.Errorf("cannot use ForceCompressionFormat with undefined default compression format")
}
return options.ForceCompressionFormat, nil
}
// Image copies image from srcRef to destRef, using policyContext to validate
// source image admissibility. It returns the manifest which was written to
// the new copy of the image.
func Image(ctx context.Context, policyContext *signature.PolicyContext, destRef, srcRef types.ImageReference, options *Options) (copiedManifest []byte, retErr error) {
// NOTE this function uses an output parameter for the error return value.
// Setting this and returning is the ideal way to return an error.
//
// the defers in this routine will wrap the error return with its own errors
// which can be valuable context in the middle of a multi-streamed copy.
if options == nil {
options = &Options{}
}
@@ -209,27 +232,29 @@ func Image(ctx context.Context, policyContext *signature.PolicyContext, destRef,
}
c := &copier{
dest: dest,
rawSource: rawSource,
reportWriter: reportWriter,
progressOutput: progressOutput,
progressInterval: options.ProgressInterval,
progress: options.Progress,
policyContext: policyContext,
dest: dest,
rawSource: rawSource,
options: options,
reportWriter: reportWriter,
progressOutput: progressOutput,
unparsedToplevel: image.UnparsedInstance(rawSource, nil),
// FIXME? The cache is used for sources and destinations equally, but we only have a SourceCtx and DestinationCtx.
// For now, use DestinationCtx (because blob reuse changes the behavior of the destination side more); eventually
// we might want to add a separate CommonCtx — or would that be too confusing?
blobInfoCache: internalblobinfocache.FromBlobInfoCache(blobinfocache.DefaultCache(options.DestinationCtx)),
ociDecryptConfig: options.OciDecryptConfig,
ociEncryptConfig: options.OciEncryptConfig,
downloadForeignLayers: options.DownloadForeignLayers,
// For now, use DestinationCtx (because blob reuse changes the behavior of the destination side more).
// Conceptually the cache settings should be in copy.Options instead.
blobInfoCache: internalblobinfocache.FromBlobInfoCache(blobinfocache.DefaultCache(options.DestinationCtx)),
}
defer c.close()
c.blobInfoCache.Open()
defer c.blobInfoCache.Close()
// Set the concurrentBlobCopiesSemaphore if we can copy layers in parallel.
if dest.HasThreadSafePutBlob() && rawSource.HasThreadSafeGetBlob() {
c.concurrentBlobCopiesSemaphore = options.ConcurrentBlobCopiesSemaphore
c.concurrentBlobCopiesSemaphore = c.options.ConcurrentBlobCopiesSemaphore
if c.concurrentBlobCopiesSemaphore == nil {
max := options.MaxParallelDownloads
max := c.options.MaxParallelDownloads
if max == 0 {
max = maxParallelDownloads
}
@@ -237,33 +262,48 @@ func Image(ctx context.Context, policyContext *signature.PolicyContext, destRef,
}
} else {
c.concurrentBlobCopiesSemaphore = semaphore.NewWeighted(int64(1))
if options.ConcurrentBlobCopiesSemaphore != nil {
if err := options.ConcurrentBlobCopiesSemaphore.Acquire(ctx, 1); err != nil {
if c.options.ConcurrentBlobCopiesSemaphore != nil {
if err := c.options.ConcurrentBlobCopiesSemaphore.Acquire(ctx, 1); err != nil {
return nil, fmt.Errorf("acquiring semaphore for concurrent blob copies: %w", err)
}
defer options.ConcurrentBlobCopiesSemaphore.Release(1)
defer c.options.ConcurrentBlobCopiesSemaphore.Release(1)
}
}
if err := c.setupSigners(options); err != nil {
if err := c.setupSigners(); err != nil {
return nil, err
}
unparsedToplevel := image.UnparsedInstance(rawSource, nil)
multiImage, err := isMultiImage(ctx, unparsedToplevel)
multiImage, err := isMultiImage(ctx, c.unparsedToplevel)
if err != nil {
return nil, fmt.Errorf("determining manifest MIME type for %s: %w", transports.ImageName(srcRef), err)
}
if !multiImage {
// The simple case: just copy a single image.
if copiedManifest, _, _, err = c.copySingleImage(ctx, policyContext, options, unparsedToplevel, unparsedToplevel, nil); err != nil {
if len(options.EnsureCompressionVariantsExist) > 0 {
return nil, fmt.Errorf("EnsureCompressionVariantsExist is not implemented when not creating a multi-architecture image")
}
requireCompressionFormatMatch, err := shouldRequireCompressionFormatMatch(options)
if err != nil {
return nil, err
}
// The simple case: just copy a single image.
single, err := c.copySingleImage(ctx, c.unparsedToplevel, nil, copySingleImageOptions{requireCompressionFormatMatch: requireCompressionFormatMatch})
if err != nil {
return nil, err
}
copiedManifest = single.manifest
} else if c.options.ImageListSelection == CopySystemImage {
if len(options.EnsureCompressionVariantsExist) > 0 {
return nil, fmt.Errorf("EnsureCompressionVariantsExist is not implemented when not creating a multi-architecture image")
}
requireCompressionFormatMatch, err := shouldRequireCompressionFormatMatch(options)
if err != nil {
return nil, err
}
} else if options.ImageListSelection == CopySystemImage {
// This is a manifest list, and we weren't asked to copy multiple images. Choose a single image that
// matches the current system to copy, and copy it.
mfest, manifestType, err := unparsedToplevel.Manifest(ctx)
mfest, manifestType, err := c.unparsedToplevel.Manifest(ctx)
if err != nil {
return nil, fmt.Errorf("reading manifest for %s: %w", transports.ImageName(srcRef), err)
}
@@ -271,34 +311,35 @@ func Image(ctx context.Context, policyContext *signature.PolicyContext, destRef,
if err != nil {
return nil, fmt.Errorf("parsing primary manifest as list for %s: %w", transports.ImageName(srcRef), err)
}
instanceDigest, err := manifestList.ChooseInstanceByCompression(options.SourceCtx, options.PreferGzipInstances) // try to pick one that matches options.SourceCtx
instanceDigest, err := manifestList.ChooseInstanceByCompression(c.options.SourceCtx, c.options.PreferGzipInstances) // try to pick one that matches c.options.SourceCtx
if err != nil {
return nil, fmt.Errorf("choosing an image from manifest list %s: %w", transports.ImageName(srcRef), err)
}
logrus.Debugf("Source is a manifest list; copying (only) instance %s for current system", instanceDigest)
unparsedInstance := image.UnparsedInstance(rawSource, &instanceDigest)
if copiedManifest, _, _, err = c.copySingleImage(ctx, policyContext, options, unparsedToplevel, unparsedInstance, nil); err != nil {
single, err := c.copySingleImage(ctx, unparsedInstance, nil, copySingleImageOptions{requireCompressionFormatMatch: requireCompressionFormatMatch})
if err != nil {
return nil, fmt.Errorf("copying system image from manifest list: %w", err)
}
} else { /* options.ImageListSelection == CopyAllImages or options.ImageListSelection == CopySpecificImages, */
copiedManifest = single.manifest
} else { /* c.options.ImageListSelection == CopyAllImages or c.options.ImageListSelection == CopySpecificImages, */
// If we were asked to copy multiple images and can't, that's an error.
if !supportsMultipleImages(c.dest) {
return nil, fmt.Errorf("copying multiple images: destination transport %q does not support copying multiple images as a group", destRef.Transport().Name())
}
// Copy some or all of the images.
switch options.ImageListSelection {
switch c.options.ImageListSelection {
case CopyAllImages:
logrus.Debugf("Source is a manifest list; copying all instances")
case CopySpecificImages:
logrus.Debugf("Source is a manifest list; copying some instances")
}
if copiedManifest, err = c.copyMultipleImages(ctx, policyContext, options, unparsedToplevel); err != nil {
if copiedManifest, err = c.copyMultipleImages(ctx); err != nil {
return nil, err
}
}
if err := c.dest.Commit(ctx, unparsedToplevel); err != nil {
if err := c.dest.Commit(ctx, c.unparsedToplevel); err != nil {
return nil, fmt.Errorf("committing the finished image: %w", err)
}

View File

@@ -34,7 +34,7 @@ type bpDecryptionStepData struct {
// srcInfo is only used for error messages.
// Returns data for other steps; the caller should eventually use updateCryptoOperation.
func (ic *imageCopier) blobPipelineDecryptionStep(stream *sourceStream, srcInfo types.BlobInfo) (*bpDecryptionStepData, error) {
if !isOciEncrypted(stream.info.MediaType) || ic.c.ociDecryptConfig == nil {
if !isOciEncrypted(stream.info.MediaType) || ic.c.options.OciDecryptConfig == nil {
return &bpDecryptionStepData{
decrypting: false,
}, nil
@@ -47,7 +47,7 @@ func (ic *imageCopier) blobPipelineDecryptionStep(stream *sourceStream, srcInfo
desc := imgspecv1.Descriptor{
Annotations: stream.info.Annotations,
}
reader, decryptedDigest, err := ocicrypt.DecryptLayer(ic.c.ociDecryptConfig, stream.reader, desc, false)
reader, decryptedDigest, err := ocicrypt.DecryptLayer(ic.c.options.OciDecryptConfig, stream.reader, desc, false)
if err != nil {
return nil, fmt.Errorf("decrypting layer %s: %w", srcInfo.Digest, err)
}
@@ -70,7 +70,7 @@ func (d *bpDecryptionStepData) updateCryptoOperation(operation *types.LayerCrypt
}
}
// bpdData contains data that the copy pipeline needs about the encryption step.
// bpEncryptionStepData contains data that the copy pipeline needs about the encryption step.
type bpEncryptionStepData struct {
encrypting bool // We are actually encrypting the stream
finalizer ocicrypt.EncryptLayerFinalizer
@@ -81,7 +81,7 @@ type bpEncryptionStepData struct {
// Returns data for other steps; the caller should eventually call updateCryptoOperationAndAnnotations.
func (ic *imageCopier) blobPipelineEncryptionStep(stream *sourceStream, toEncrypt bool, srcInfo types.BlobInfo,
decryptionStep *bpDecryptionStepData) (*bpEncryptionStepData, error) {
if !toEncrypt || isOciEncrypted(srcInfo.MediaType) || ic.c.ociEncryptConfig == nil {
if !toEncrypt || isOciEncrypted(srcInfo.MediaType) || ic.c.options.OciEncryptConfig == nil {
return &bpEncryptionStepData{
encrypting: false,
}, nil
@@ -101,7 +101,7 @@ func (ic *imageCopier) blobPipelineEncryptionStep(stream *sourceStream, toEncryp
Size: srcInfo.Size,
Annotations: annotations,
}
reader, finalizer, err := ocicrypt.EncryptLayer(ic.c.ociEncryptConfig, stream.reader, desc)
reader, finalizer, err := ocicrypt.EncryptLayer(ic.c.options.OciEncryptConfig, stream.reader, desc)
if err != nil {
return nil, fmt.Errorf("encrypting blob %s: %w", srcInfo.Digest, err)
}

View File

@@ -5,16 +5,19 @@ import (
"context"
"errors"
"fmt"
"sort"
"strings"
"github.com/containers/image/v5/docker/reference"
"github.com/containers/image/v5/internal/image"
internalManifest "github.com/containers/image/v5/internal/manifest"
"github.com/containers/image/v5/internal/set"
"github.com/containers/image/v5/manifest"
"github.com/containers/image/v5/signature"
"github.com/containers/image/v5/pkg/compression"
digest "github.com/opencontainers/go-digest"
imgspecv1 "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/sirupsen/logrus"
"golang.org/x/exp/maps"
"golang.org/x/exp/slices"
)
@@ -28,30 +31,134 @@ const (
type instanceCopy struct {
op instanceCopyKind
sourceDigest digest.Digest
// Fields which can be used by callers when operation
// is `instanceCopyCopy`
copyForceCompressionFormat bool
// Fields which can be used by callers when operation
// is `instanceCopyClone`
cloneCompressionVariant OptionCompressionVariant
clonePlatform *imgspecv1.Platform
cloneAnnotations map[string]string
}
// internal type only to make imgspecv1.Platform comparable
type platformComparable struct {
architecture string
os string
osVersion string
osFeatures string
variant string
}
// Converts imgspecv1.Platform to a comparable format.
func platformV1ToPlatformComparable(platform *imgspecv1.Platform) platformComparable {
if platform == nil {
return platformComparable{}
}
osFeatures := slices.Clone(platform.OSFeatures)
sort.Strings(osFeatures)
return platformComparable{architecture: platform.Architecture,
os: platform.OS,
// This is strictly speaking ambiguous, fields of OSFeatures can contain a ','. Probably good enough for now.
osFeatures: strings.Join(osFeatures, ","),
osVersion: platform.OSVersion,
variant: platform.Variant,
}
}
// platformCompressionMap prepares a mapping of platformComparable -> CompressionAlgorithmNames for given digests
func platformCompressionMap(list internalManifest.List, instanceDigests []digest.Digest) (map[platformComparable]*set.Set[string], error) {
res := make(map[platformComparable]*set.Set[string])
for _, instanceDigest := range instanceDigests {
instanceDetails, err := list.Instance(instanceDigest)
if err != nil {
return nil, fmt.Errorf("getting details for instance %s: %w", instanceDigest, err)
}
platform := platformV1ToPlatformComparable(instanceDetails.ReadOnly.Platform)
platformSet, ok := res[platform]
if !ok {
platformSet = set.New[string]()
res[platform] = platformSet
}
platformSet.AddSlice(instanceDetails.ReadOnly.CompressionAlgorithmNames)
}
return res, nil
}
func validateCompressionVariantExists(input []OptionCompressionVariant) error {
for _, option := range input {
_, err := compression.AlgorithmByName(option.Algorithm.Name())
if err != nil {
return fmt.Errorf("invalid algorithm %q in option.EnsureCompressionVariantsExist: %w", option.Algorithm.Name(), err)
}
}
return nil
}
// prepareInstanceCopies prepares a list of instances which need to be copied to the manifest list.
func prepareInstanceCopies(instanceDigests []digest.Digest, options *Options) []instanceCopy {
func prepareInstanceCopies(list internalManifest.List, instanceDigests []digest.Digest, options *Options) ([]instanceCopy, error) {
res := []instanceCopy{}
if options.ImageListSelection == CopySpecificImages && len(options.EnsureCompressionVariantsExist) > 0 {
// The list can already contain a compressed instance for a compression selected in `EnsureCompressionVariantsExist`.
// It's unclear what it means when `CopySpecificImages` includes an instance in options.Instances,
// EnsureCompressionVariantsExist asks for an instance with some compression,
// an instance with that compression already exists, but is not included in options.Instances.
// We might define the semantics and implement this in the future.
return res, fmt.Errorf("EnsureCompressionVariantsExist is not implemented for CopySpecificImages")
}
err := validateCompressionVariantExists(options.EnsureCompressionVariantsExist)
if err != nil {
return res, err
}
compressionsByPlatform, err := platformCompressionMap(list, instanceDigests)
if err != nil {
return nil, err
}
for i, instanceDigest := range instanceDigests {
if options.ImageListSelection == CopySpecificImages &&
!slices.Contains(options.Instances, instanceDigest) {
logrus.Debugf("Skipping instance %s (%d/%d)", instanceDigest, i+1, len(instanceDigests))
continue
}
instanceDetails, err := list.Instance(instanceDigest)
if err != nil {
return res, fmt.Errorf("getting details for instance %s: %w", instanceDigest, err)
}
forceCompressionFormat, err := shouldRequireCompressionFormatMatch(options)
if err != nil {
return nil, err
}
res = append(res, instanceCopy{
op: instanceCopyCopy,
sourceDigest: instanceDigest,
op: instanceCopyCopy,
sourceDigest: instanceDigest,
copyForceCompressionFormat: forceCompressionFormat,
})
platform := platformV1ToPlatformComparable(instanceDetails.ReadOnly.Platform)
compressionList := compressionsByPlatform[platform]
for _, compressionVariant := range options.EnsureCompressionVariantsExist {
if !compressionList.Contains(compressionVariant.Algorithm.Name()) {
res = append(res, instanceCopy{
op: instanceCopyClone,
sourceDigest: instanceDigest,
cloneCompressionVariant: compressionVariant,
clonePlatform: instanceDetails.ReadOnly.Platform,
cloneAnnotations: maps.Clone(instanceDetails.ReadOnly.Annotations),
})
// add current compression to the list so that we don't create duplicate clones
compressionList.Add(compressionVariant.Algorithm.Name())
}
}
}
return res
return res, nil
}
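// Example (an illustrative sketch, not part of this change): for a list whose only instance
// is gzip-compressed, asking for a zstd variant via EnsureCompressionVariantsExist yields two
// work items, one plain copy and one clone carrying the requested compression variant.
// The function and parameter names below are hypothetical; they only show the result shape.
func examplePrepareInstanceCopiesResultShape(d digest.Digest, platform *imgspecv1.Platform, zstdVariant OptionCompressionVariant) []instanceCopy {
	return []instanceCopy{
		{op: instanceCopyCopy, sourceDigest: d},
		{op: instanceCopyClone, sourceDigest: d, cloneCompressionVariant: zstdVariant, clonePlatform: platform},
	}
}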
// copyMultipleImages copies some or all of an image list's instances, using
// policyContext to validate source image admissibility.
func (c *copier) copyMultipleImages(ctx context.Context, policyContext *signature.PolicyContext, options *Options, unparsedToplevel *image.UnparsedImage) (copiedManifest []byte, retErr error) {
// c.policyContext to validate source image admissibility.
func (c *copier) copyMultipleImages(ctx context.Context) (copiedManifest []byte, retErr error) {
// Parse the list and get a copy of the original value after it's re-encoded.
manifestList, manifestType, err := unparsedToplevel.Manifest(ctx)
manifestList, manifestType, err := c.unparsedToplevel.Manifest(ctx)
if err != nil {
return nil, fmt.Errorf("reading manifest list: %w", err)
}
@@ -61,7 +168,7 @@ func (c *copier) copyMultipleImages(ctx context.Context, policyContext *signatur
}
updatedList := originalList.CloneInternal()
sigs, err := c.sourceSignatures(ctx, unparsedToplevel, options,
sigs, err := c.sourceSignatures(ctx, c.unparsedToplevel,
"Getting image list signatures",
"Checking if image list destination supports signatures")
if err != nil {
@@ -94,12 +201,12 @@ func (c *copier) copyMultipleImages(ctx context.Context, policyContext *signatur
if destIsDigestedReference {
cannotModifyManifestListReason = "Destination specifies a digest"
}
if options.PreserveDigests {
if c.options.PreserveDigests {
cannotModifyManifestListReason = "Instructed to preserve digests"
}
// Determine if we'll need to convert the manifest list to a different format.
forceListMIMEType := options.ForceManifestMIMEType
forceListMIMEType := c.options.ForceManifestMIMEType
switch forceListMIMEType {
case manifest.DockerV2Schema1MediaType, manifest.DockerV2Schema1SignedMediaType, manifest.DockerV2Schema2MediaType:
forceListMIMEType = manifest.DockerV2ListMediaType
@@ -119,8 +226,11 @@ func (c *copier) copyMultipleImages(ctx context.Context, policyContext *signatur
// Copy each image, or just the ones we want to copy, in turn.
instanceDigests := updatedList.Instances()
instanceEdits := []internalManifest.ListEdit{}
instanceCopyList := prepareInstanceCopies(instanceDigests, options)
c.Printf("Copying %d of %d images in list\n", len(instanceCopyList), len(instanceDigests))
instanceCopyList, err := prepareInstanceCopies(updatedList, instanceDigests, c.options)
if err != nil {
return nil, fmt.Errorf("preparing instances for copy: %w", err)
}
c.Printf("Copying %d images generated from %d images in list\n", len(instanceCopyList), len(instanceDigests))
for i, instance := range instanceCopyList {
// Update instances to be edited by their `ListOperation` and
// populate necessary fields.
@@ -129,17 +239,39 @@ func (c *copier) copyMultipleImages(ctx context.Context, policyContext *signatur
logrus.Debugf("Copying instance %s (%d/%d)", instance.sourceDigest, i+1, len(instanceCopyList))
c.Printf("Copying image %s (%d/%d)\n", instance.sourceDigest, i+1, len(instanceCopyList))
unparsedInstance := image.UnparsedInstance(c.rawSource, &instanceCopyList[i].sourceDigest)
updatedManifest, updatedManifestType, updatedManifestDigest, err := c.copySingleImage(ctx, policyContext, options, unparsedToplevel, unparsedInstance, &instanceCopyList[i].sourceDigest)
updated, err := c.copySingleImage(ctx, unparsedInstance, &instanceCopyList[i].sourceDigest, copySingleImageOptions{requireCompressionFormatMatch: instance.copyForceCompressionFormat})
if err != nil {
return nil, fmt.Errorf("copying image %d/%d from manifest list: %w", i+1, len(instanceCopyList), err)
}
// Record the result of a possible conversion here.
instanceEdits = append(instanceEdits, internalManifest.ListEdit{
ListOperation: internalManifest.ListOpUpdate,
UpdateOldDigest: instance.sourceDigest,
UpdateDigest: updatedManifestDigest,
UpdateSize: int64(len(updatedManifest)),
UpdateMediaType: updatedManifestType})
ListOperation: internalManifest.ListOpUpdate,
UpdateOldDigest: instance.sourceDigest,
UpdateDigest: updated.manifestDigest,
UpdateSize: int64(len(updated.manifest)),
UpdateCompressionAlgorithms: updated.compressionAlgorithms,
UpdateMediaType: updated.manifestMIMEType})
case instanceCopyClone:
logrus.Debugf("Replicating instance %s (%d/%d)", instance.sourceDigest, i+1, len(instanceCopyList))
c.Printf("Replicating image %s (%d/%d)\n", instance.sourceDigest, i+1, len(instanceCopyList))
unparsedInstance := image.UnparsedInstance(c.rawSource, &instanceCopyList[i].sourceDigest)
updated, err := c.copySingleImage(ctx, unparsedInstance, &instanceCopyList[i].sourceDigest, copySingleImageOptions{
requireCompressionFormatMatch: true,
compressionFormat: &instance.cloneCompressionVariant.Algorithm,
compressionLevel: instance.cloneCompressionVariant.Level})
if err != nil {
return nil, fmt.Errorf("replicating image %d/%d from manifest list: %w", i+1, len(instanceCopyList), err)
}
// Record the result of a possible conversion here.
instanceEdits = append(instanceEdits, internalManifest.ListEdit{
ListOperation: internalManifest.ListOpAdd,
AddDigest: updated.manifestDigest,
AddSize: int64(len(updated.manifest)),
AddMediaType: updated.manifestMIMEType,
AddPlatform: instance.clonePlatform,
AddAnnotations: instance.cloneAnnotations,
AddCompressionAlgorithms: updated.compressionAlgorithms,
})
default:
return nil, fmt.Errorf("copying image: invalid copy operation %d", instance.op)
}
@@ -204,11 +336,11 @@ func (c *copier) copyMultipleImages(ctx context.Context, policyContext *signatur
}
// Sign the manifest list.
newSigs, err := c.createSignatures(ctx, manifestList, options.SignIdentity)
newSigs, err := c.createSignatures(ctx, manifestList, c.options.SignIdentity)
if err != nil {
return nil, err
}
sigs = append(sigs, newSigs...)
sigs = append(slices.Clone(sigs), newSigs...)
c.Printf("Storing list signatures\n")
if err := c.dest.PutSignaturesWithFormat(ctx, sigs, nil); err != nil {

View File
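The append(slices.Clone(sigs), newSigs...) change above avoids writing into a backing array that may be shared with the caller. A minimal standalone sketch of that hazard, assuming only golang.org/x/exp/slices (the variable names are made up for illustration):

package main

import (
	"fmt"

	"golang.org/x/exp/slices"
)

func main() {
	base := make([]string, 2, 4) // spare capacity, as a caller-owned slice might have
	base[0], base[1] = "sig1", "sig2"

	shared := base[:2] // what a callee might receive

	// Appending in place reuses base's backing array, so a later caller append can clobber it.
	unsafe := append(shared, "new")
	// Cloning first forces a fresh backing array, so the caller's slice is never touched.
	safe := append(slices.Clone(shared), "new")

	base = append(base, "caller-added")
	fmt.Println(unsafe[2]) // "caller-added": the callee's element was overwritten
	fmt.Println(safe[2])   // "new": unaffected by the caller's append
}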

@@ -48,10 +48,13 @@ type progressBar struct {
// As a convention, most users of progress bars should call mark100PercentComplete on full success;
// by convention, we don't leave progress bars in partial state when fully done
// (even if we copied much less data than anticipated).
func (c *copier) createProgressBar(pool *mpb.Progress, partial bool, info types.BlobInfo, kind string, onComplete string) *progressBar {
func (c *copier) createProgressBar(pool *mpb.Progress, partial bool, info types.BlobInfo, kind string, onComplete string) (*progressBar, error) {
// shortDigestLen is the length of the digest used for blobs.
const shortDigestLen = 12
if err := info.Digest.Validate(); err != nil { // digest.Digest.Encoded() panics on failure, so validate explicitly.
return nil, err
}
prefix := fmt.Sprintf("Copying %s %s", kind, info.Digest.Encoded())
// Truncate the prefix (chopping off some part of the digest) to make all progress bars aligned in a column.
maxPrefixLen := len("Copying blob ") + shortDigestLen
@@ -84,6 +87,8 @@ func (c *copier) createProgressBar(pool *mpb.Progress, partial bool, info types.
),
mpb.AppendDecorators(
decor.OnComplete(decor.CountersKibiByte("%.1f / %.1f"), ""),
decor.Name(" | "),
decor.OnComplete(decor.EwmaSpeed(decor.SizeB1024(0), "% .1f", 30), ""),
),
)
}
@@ -94,12 +99,15 @@ func (c *copier) createProgressBar(pool *mpb.Progress, partial bool, info types.
mpb.PrependDecorators(
decor.OnComplete(decor.Name(prefix), onComplete),
),
mpb.AppendDecorators(
decor.OnComplete(decor.EwmaSpeed(decor.SizeB1024(0), "% .1f", 30), ""),
),
)
}
return &progressBar{
Bar: bar,
originalSize: info.Size,
}
}, nil
}
// printCopyInfo prints a "Copying ..." message on the copier if the output is

View File
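createProgressBar above now validates info.Digest before calling Encoded(), which the comment notes panics on malformed input. A minimal standalone sketch of that validate-then-encode pattern, assuming only github.com/opencontainers/go-digest (shortID and the sample values are hypothetical):

package main

import (
	"fmt"
	"strings"

	"github.com/opencontainers/go-digest"
)

// shortID returns the first n characters of the encoded digest, or an error for malformed input.
func shortID(d digest.Digest, n int) (string, error) {
	if err := d.Validate(); err != nil { // Encoded() would panic on input like "not-a-digest"
		return "", err
	}
	enc := d.Encoded()
	if n > len(enc) {
		n = len(enc)
	}
	return enc[:n], nil
}

func main() {
	ok := digest.Digest("sha256:" + strings.Repeat("a", 64))
	fmt.Println(shortID(ok, 12))                            // aaaaaaaaaaaa <nil>
	fmt.Println(shortID(digest.Digest("not-a-digest"), 12)) // error instead of a panic
}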

@@ -13,20 +13,20 @@ import (
"github.com/containers/image/v5/transports"
)
// setupSigners initializes c.signers based on options.
func (c *copier) setupSigners(options *Options) error {
c.signers = append(c.signers, options.Signers...)
// c.signersToClose is intentionally not updated with options.Signers.
// setupSigners initializes c.signers.
func (c *copier) setupSigners() error {
c.signers = append(c.signers, c.options.Signers...)
// c.signersToClose is intentionally not updated with c.options.Signers.
// We immediately append created signers to c.signers, and we rely on c.close() to clean them up; so we don't need
// to clean up any created signers on failure.
if options.SignBy != "" {
if c.options.SignBy != "" {
opts := []simplesigning.Option{
simplesigning.WithKeyFingerprint(options.SignBy),
simplesigning.WithKeyFingerprint(c.options.SignBy),
}
if options.SignPassphrase != "" {
opts = append(opts, simplesigning.WithPassphrase(options.SignPassphrase))
if c.options.SignPassphrase != "" {
opts = append(opts, simplesigning.WithPassphrase(c.options.SignPassphrase))
}
signer, err := simplesigning.NewSigner(opts...)
if err != nil {
@@ -36,9 +36,9 @@ func (c *copier) setupSigners(options *Options) error {
c.signersToClose = append(c.signersToClose, signer)
}
if options.SignBySigstorePrivateKeyFile != "" {
if c.options.SignBySigstorePrivateKeyFile != "" {
signer, err := sigstore.NewSigner(
sigstore.WithPrivateKeyFile(options.SignBySigstorePrivateKeyFile, options.SignSigstorePrivateKeyPassphrase),
sigstore.WithPrivateKeyFile(c.options.SignBySigstorePrivateKeyFile, c.options.SignSigstorePrivateKeyPassphrase),
)
if err != nil {
return err
@@ -50,13 +50,13 @@ func (c *copier) setupSigners(options *Options) error {
return nil
}
// sourceSignatures returns signatures from unparsedSource based on options,
// sourceSignatures returns signatures from unparsedSource,
// and verifies that they can be used (to avoid copying a large image when we
// can tell in advance that it would ultimately fail)
func (c *copier) sourceSignatures(ctx context.Context, unparsed private.UnparsedImage, options *Options,
func (c *copier) sourceSignatures(ctx context.Context, unparsed private.UnparsedImage,
gettingSignaturesMessage, checkingDestMessage string) ([]internalsig.Signature, error) {
var sigs []internalsig.Signature
if options.RemoveSignatures {
if c.options.RemoveSignatures {
sigs = []internalsig.Signature{}
} else {
c.Printf("%s\n", gettingSignaturesMessage)

View File

@@ -18,7 +18,6 @@ import (
"github.com/containers/image/v5/manifest"
"github.com/containers/image/v5/pkg/compression"
compressiontypes "github.com/containers/image/v5/pkg/compression/types"
"github.com/containers/image/v5/signature"
"github.com/containers/image/v5/transports"
"github.com/containers/image/v5/types"
digest "github.com/opencontainers/go-digest"
@@ -30,40 +29,54 @@ import (
// imageCopier tracks state specific to a single image (possibly an item of a manifest list)
type imageCopier struct {
c *copier
manifestUpdates *types.ManifestUpdateOptions
src *image.SourcedImage
diffIDsAreNeeded bool
cannotModifyManifestReason string // The reason the manifest cannot be modified, or an empty string if it can
canSubstituteBlobs bool
compressionFormat *compressiontypes.Algorithm // Compression algorithm to use, if the user explicitly requested one, or nil.
compressionLevel *int
ociEncryptLayers *[]int
c *copier
manifestUpdates *types.ManifestUpdateOptions
src *image.SourcedImage
diffIDsAreNeeded bool
cannotModifyManifestReason string // The reason the manifest cannot be modified, or an empty string if it can
canSubstituteBlobs bool
compressionFormat *compressiontypes.Algorithm // Compression algorithm to use, if the user explicitly requested one, or nil.
compressionLevel *int
requireCompressionFormatMatch bool
}
// copySingleImage copies a single (non-manifest-list) image unparsedImage, using policyContext to validate
type copySingleImageOptions struct {
requireCompressionFormatMatch bool
compressionFormat *compressiontypes.Algorithm // Compression algorithm to use, if the user explicitly requested one, or nil.
compressionLevel *int
}
// copySingleImageResult carries data produced by copySingleImage
type copySingleImageResult struct {
manifest []byte
manifestMIMEType string
manifestDigest digest.Digest
compressionAlgorithms []compressiontypes.Algorithm
}
// copySingleImage copies a single (non-manifest-list) image unparsedImage, using c.policyContext to validate
// source image admissibility.
func (c *copier) copySingleImage(ctx context.Context, policyContext *signature.PolicyContext, options *Options, unparsedToplevel, unparsedImage *image.UnparsedImage, targetInstance *digest.Digest) (retManifest []byte, retManifestType string, retManifestDigest digest.Digest, retErr error) {
func (c *copier) copySingleImage(ctx context.Context, unparsedImage *image.UnparsedImage, targetInstance *digest.Digest, opts copySingleImageOptions) (copySingleImageResult, error) {
// The caller is handling manifest lists; this could happen only if a manifest list contains a manifest list.
// Make sure we fail cleanly in such cases.
multiImage, err := isMultiImage(ctx, unparsedImage)
if err != nil {
// FIXME FIXME: How to name a reference for the sub-image?
return nil, "", "", fmt.Errorf("determining manifest MIME type for %s: %w", transports.ImageName(unparsedImage.Reference()), err)
return copySingleImageResult{}, fmt.Errorf("determining manifest MIME type for %s: %w", transports.ImageName(unparsedImage.Reference()), err)
}
if multiImage {
return nil, "", "", fmt.Errorf("Unexpectedly received a manifest list instead of a manifest for a single image")
return copySingleImageResult{}, fmt.Errorf("Unexpectedly received a manifest list instead of a manifest for a single image")
}
// Please keep this policy check BEFORE reading any other information about the image.
// (The multiImage check above only matches the MIME type, which we have received anyway.
// Actual parsing of anything should be deferred.)
if allowed, err := policyContext.IsRunningImageAllowed(ctx, unparsedImage); !allowed || err != nil { // Be paranoid and fail if either return value indicates so.
return nil, "", "", fmt.Errorf("Source image rejected: %w", err)
if allowed, err := c.policyContext.IsRunningImageAllowed(ctx, unparsedImage); !allowed || err != nil { // Be paranoid and fail if either return value indicates so.
return copySingleImageResult{}, fmt.Errorf("Source image rejected: %w", err)
}
src, err := image.FromUnparsedImage(ctx, options.SourceCtx, unparsedImage)
src, err := image.FromUnparsedImage(ctx, c.options.SourceCtx, unparsedImage)
if err != nil {
return nil, "", "", fmt.Errorf("initializing image from source %s: %w", transports.ImageName(c.rawSource.Reference()), err)
return copySingleImageResult{}, fmt.Errorf("initializing image from source %s: %w", transports.ImageName(c.rawSource.Reference()), err)
}
// If the destination is a digested reference, make a note of that, determine what digest value we're
@@ -75,33 +88,33 @@ func (c *copier) copySingleImage(ctx context.Context, policyContext *signature.P
destIsDigestedReference = true
matches, err := manifest.MatchesDigest(src.ManifestBlob, digested.Digest())
if err != nil {
return nil, "", "", fmt.Errorf("computing digest of source image's manifest: %w", err)
return copySingleImageResult{}, fmt.Errorf("computing digest of source image's manifest: %w", err)
}
if !matches {
manifestList, _, err := unparsedToplevel.Manifest(ctx)
manifestList, _, err := c.unparsedToplevel.Manifest(ctx)
if err != nil {
return nil, "", "", fmt.Errorf("reading manifest from source image: %w", err)
return copySingleImageResult{}, fmt.Errorf("reading manifest from source image: %w", err)
}
matches, err = manifest.MatchesDigest(manifestList, digested.Digest())
if err != nil {
return nil, "", "", fmt.Errorf("computing digest of source image's manifest: %w", err)
return copySingleImageResult{}, fmt.Errorf("computing digest of source image's manifest: %w", err)
}
if !matches {
return nil, "", "", errors.New("Digest of source image's manifest would not match destination reference")
return copySingleImageResult{}, errors.New("Digest of source image's manifest would not match destination reference")
}
}
}
}
if err := checkImageDestinationForCurrentRuntime(ctx, options.DestinationCtx, src, c.dest); err != nil {
return nil, "", "", err
if err := checkImageDestinationForCurrentRuntime(ctx, c.options.DestinationCtx, src, c.dest); err != nil {
return copySingleImageResult{}, err
}
sigs, err := c.sourceSignatures(ctx, src, options,
sigs, err := c.sourceSignatures(ctx, src,
"Getting image source signatures",
"Checking if image destination supports signatures")
if err != nil {
return nil, "", "", err
return copySingleImageResult{}, err
}
// Determine if we're allowed to modify the manifest.
@@ -114,7 +127,7 @@ func (c *copier) copySingleImage(ctx context.Context, policyContext *signature.P
if destIsDigestedReference {
cannotModifyManifestReason = "Destination specifies a digest"
}
if options.PreserveDigests {
if c.options.PreserveDigests {
cannotModifyManifestReason = "Instructed to preserve digests"
}
@@ -123,13 +136,16 @@ func (c *copier) copySingleImage(ctx context.Context, policyContext *signature.P
manifestUpdates: &types.ManifestUpdateOptions{InformationOnly: types.ManifestUpdateInformation{Destination: c.dest}},
src: src,
// diffIDsAreNeeded is computed later
cannotModifyManifestReason: cannotModifyManifestReason,
ociEncryptLayers: options.OciEncryptLayers,
cannotModifyManifestReason: cannotModifyManifestReason,
requireCompressionFormatMatch: opts.requireCompressionFormatMatch,
}
if options.DestinationCtx != nil {
if opts.compressionFormat != nil {
ic.compressionFormat = opts.compressionFormat
ic.compressionLevel = opts.compressionLevel
} else if c.options.DestinationCtx != nil {
// Note that compressionFormat and compressionLevel can be nil.
ic.compressionFormat = options.DestinationCtx.CompressionFormat
ic.compressionLevel = options.DestinationCtx.CompressionLevel
ic.compressionFormat = c.options.DestinationCtx.CompressionFormat
ic.compressionLevel = c.options.DestinationCtx.CompressionLevel
}
// Decide whether we can substitute blobs with semantic equivalents:
// - Don't do that if we can't modify the manifest at all
@@ -142,20 +158,20 @@ func (c *copier) copySingleImage(ctx context.Context, policyContext *signature.P
ic.canSubstituteBlobs = ic.cannotModifyManifestReason == "" && len(c.signers) == 0
if err := ic.updateEmbeddedDockerReference(); err != nil {
return nil, "", "", err
return copySingleImageResult{}, err
}
destRequiresOciEncryption := (isEncrypted(src) && ic.c.ociDecryptConfig != nil) || options.OciEncryptLayers != nil
destRequiresOciEncryption := (isEncrypted(src) && ic.c.options.OciDecryptConfig == nil) || c.options.OciEncryptLayers != nil
manifestConversionPlan, err := determineManifestConversion(determineManifestConversionInputs{
srcMIMEType: ic.src.ManifestMIMEType,
destSupportedManifestMIMETypes: ic.c.dest.SupportedManifestMIMETypes(),
forceManifestMIMEType: options.ForceManifestMIMEType,
forceManifestMIMEType: c.options.ForceManifestMIMEType,
requiresOCIEncryption: destRequiresOciEncryption,
cannotModifyManifestReason: ic.cannotModifyManifestReason,
})
if err != nil {
return nil, "", "", err
return copySingleImageResult{}, err
}
// We set up this part of ic.manifestUpdates quite early, not just around the
// code that calls copyUpdatedConfigAndManifest, so that other parts of the copy code
@@ -169,27 +185,28 @@ func (c *copier) copySingleImage(ctx context.Context, policyContext *signature.P
ic.diffIDsAreNeeded = src.UpdatedImageNeedsLayerDiffIDs(*ic.manifestUpdates)
// If enabled, fetch and compare the destination's manifest. And as an optimization skip updating the destination iff equal
if options.OptimizeDestinationImageAlreadyExists {
if c.options.OptimizeDestinationImageAlreadyExists {
shouldUpdateSigs := len(sigs) > 0 || len(c.signers) != 0 // TODO: Consider allowing signatures updates only and skipping the image's layers/manifest copy if possible
noPendingManifestUpdates := ic.noPendingManifestUpdates()
logrus.Debugf("Checking if we can skip copying: has signatures=%t, OCI encryption=%t, no manifest updates=%t", shouldUpdateSigs, destRequiresOciEncryption, noPendingManifestUpdates)
if !shouldUpdateSigs && !destRequiresOciEncryption && noPendingManifestUpdates {
isSrcDestManifestEqual, retManifest, retManifestType, retManifestDigest, err := compareImageDestinationManifestEqual(ctx, options, src, targetInstance, c.dest)
logrus.Debugf("Checking if we can skip copying: has signatures=%t, OCI encryption=%t, no manifest updates=%t, compression match required for resuing blobs=%t", shouldUpdateSigs, destRequiresOciEncryption, noPendingManifestUpdates, opts.requireCompressionFormatMatch)
if !shouldUpdateSigs && !destRequiresOciEncryption && noPendingManifestUpdates && !ic.requireCompressionFormatMatch {
matchedResult, err := ic.compareImageDestinationManifestEqual(ctx, targetInstance)
if err != nil {
logrus.Warnf("Failed to compare destination image manifest: %v", err)
return nil, "", "", err
return copySingleImageResult{}, err
}
if isSrcDestManifestEqual {
if matchedResult != nil {
c.Printf("Skipping: image already present at destination\n")
return retManifest, retManifestType, retManifestDigest, nil
return *matchedResult, nil
}
}
}
if err := ic.copyLayers(ctx); err != nil {
return nil, "", "", err
compressionAlgos, err := ic.copyLayers(ctx)
if err != nil {
return copySingleImageResult{}, err
}
// With docker/distribution registries we do not know whether the registry accepts schema2 or schema1 only;
@@ -197,8 +214,12 @@ func (c *copier) copySingleImage(ctx context.Context, policyContext *signature.P
// without actually trying to upload something and getting a types.ManifestTypeRejectedError.
// So, try the preferred manifest MIME type with possibly-updated blob digests, media types, and sizes if
// we're altering how they're compressed. If the process succeeds, fine…
manifestBytes, retManifestDigest, err := ic.copyUpdatedConfigAndManifest(ctx, targetInstance)
retManifestType = manifestConversionPlan.preferredMIMEType
manifestBytes, manifestDigest, err := ic.copyUpdatedConfigAndManifest(ctx, targetInstance)
wipResult := copySingleImageResult{
manifest: manifestBytes,
manifestMIMEType: manifestConversionPlan.preferredMIMEType,
manifestDigest: manifestDigest,
}
if err != nil {
logrus.Debugf("Writing manifest using preferred type %s failed: %v", manifestConversionPlan.preferredMIMEType, err)
// … if it fails, and the failure is either because the manifest is rejected by the registry, or
@@ -213,14 +234,14 @@ func (c *copier) copySingleImage(ctx context.Context, policyContext *signature.P
// We don't have other options.
// In principle the code below would handle this as well, but the resulting error message is fairly ugly.
// Don't bother the user with MIME types if we have no choice.
return nil, "", "", err
return copySingleImageResult{}, err
}
// If the original MIME type is acceptable, determineManifestConversion always uses it as manifestConversionPlan.preferredMIMEType.
// So if we are here, we will definitely be trying to convert the manifest.
// With ic.cannotModifyManifestReason != "", that would just be a string of repeated failures for the same reason,
// so lets bail out early and with a better error message.
if ic.cannotModifyManifestReason != "" {
return nil, "", "", fmt.Errorf("writing manifest failed and we cannot try conversions: %q: %w", cannotModifyManifestReason, err)
return copySingleImageResult{}, fmt.Errorf("writing manifest failed and we cannot try conversions: %q: %w", cannotModifyManifestReason, err)
}
// errs is a list of errors when trying various manifest types. Also serves as an "upload succeeded" flag when set to nil.
@@ -236,34 +257,37 @@ func (c *copier) copySingleImage(ctx context.Context, policyContext *signature.P
}
// We have successfully uploaded a manifest.
manifestBytes = attemptedManifest
retManifestDigest = attemptedManifestDigest
retManifestType = manifestMIMEType
wipResult = copySingleImageResult{
manifest: attemptedManifest,
manifestMIMEType: manifestMIMEType,
manifestDigest: attemptedManifestDigest,
}
errs = nil // Mark this as a success so that we don't abort below.
break
}
if errs != nil {
return nil, "", "", fmt.Errorf("Uploading manifest failed, attempted the following formats: %s", strings.Join(errs, ", "))
return copySingleImageResult{}, fmt.Errorf("Uploading manifest failed, attempted the following formats: %s", strings.Join(errs, ", "))
}
}
if targetInstance != nil {
targetInstance = &retManifestDigest
targetInstance = &wipResult.manifestDigest
}
newSigs, err := c.createSignatures(ctx, manifestBytes, options.SignIdentity)
newSigs, err := c.createSignatures(ctx, wipResult.manifest, c.options.SignIdentity)
if err != nil {
return nil, "", "", err
return copySingleImageResult{}, err
}
sigs = append(sigs, newSigs...)
sigs = append(slices.Clone(sigs), newSigs...)
if len(sigs) > 0 {
c.Printf("Storing signatures\n")
if err := c.dest.PutSignaturesWithFormat(ctx, sigs, targetInstance); err != nil {
return nil, "", "", fmt.Errorf("writing signatures: %w", err)
return copySingleImageResult{}, fmt.Errorf("writing signatures: %w", err)
}
}
return manifestBytes, retManifestType, retManifestDigest, nil
wipResult.compressionAlgorithms = compressionAlgos
res := wipResult // We are done
return res, nil
}
// checkImageDestinationForCurrentRuntime enforces dest.MustMatchRuntimeOS, if necessary.
@@ -281,18 +305,18 @@ func checkImageDestinationForCurrentRuntime(ctx context.Context, sys *types.Syst
options := newOrderedSet()
match := false
for _, wantedPlatform := range wantedPlatforms {
// Waiting for https://github.com/opencontainers/image-spec/pull/777 :
// This currently can't use image.MatchesPlatform because we don't know what to use
// for image.Variant.
if wantedPlatform.OS == c.OS && wantedPlatform.Architecture == c.Architecture {
// For a transitional period, this might trigger warnings because the Variant
// field was added to OCI config only recently. If this turns out to be too noisy,
// revert this check to only look for (OS, Architecture).
if platform.MatchesPlatform(c.Platform, wantedPlatform) {
match = true
break
}
options.append(fmt.Sprintf("%s+%s", wantedPlatform.OS, wantedPlatform.Architecture))
options.append(fmt.Sprintf("%s+%s+%q", wantedPlatform.OS, wantedPlatform.Architecture, wantedPlatform.Variant))
}
if !match {
logrus.Infof("Image operating system mismatch: image uses OS %q+architecture %q, expecting one of %q",
c.OS, c.Architecture, strings.Join(options.list, ", "))
logrus.Infof("Image operating system mismatch: image uses OS %q+architecture %q+%q, expecting one of %q",
c.OS, c.Architecture, c.Variant, strings.Join(options.list, ", "))
}
}
return nil
@@ -323,52 +347,70 @@ func (ic *imageCopier) noPendingManifestUpdates() bool {
return reflect.DeepEqual(*ic.manifestUpdates, types.ManifestUpdateOptions{InformationOnly: ic.manifestUpdates.InformationOnly})
}
// compareImageDestinationManifestEqual compares the `src` and `dest` image manifests (reading the manifest from the
// (possibly remote) destination). Returning true and the destination's manifest, type and digest if they compare equal.
func compareImageDestinationManifestEqual(ctx context.Context, options *Options, src *image.SourcedImage, targetInstance *digest.Digest, dest types.ImageDestination) (bool, []byte, string, digest.Digest, error) {
srcManifestDigest, err := manifest.Digest(src.ManifestBlob)
// compareImageDestinationManifestEqual compares the source and destination image manifests (reading the manifest from the
// (possibly remote) destination). If they are equal, it returns a full copySingleImageResult, nil otherwise.
func (ic *imageCopier) compareImageDestinationManifestEqual(ctx context.Context, targetInstance *digest.Digest) (*copySingleImageResult, error) {
srcManifestDigest, err := manifest.Digest(ic.src.ManifestBlob)
if err != nil {
return false, nil, "", "", fmt.Errorf("calculating manifest digest: %w", err)
return nil, fmt.Errorf("calculating manifest digest: %w", err)
}
destImageSource, err := dest.Reference().NewImageSource(ctx, options.DestinationCtx)
destImageSource, err := ic.c.dest.Reference().NewImageSource(ctx, ic.c.options.DestinationCtx)
if err != nil {
logrus.Debugf("Unable to create destination image %s source: %v", dest.Reference(), err)
return false, nil, "", "", nil
logrus.Debugf("Unable to create destination image %s source: %v", ic.c.dest.Reference(), err)
return nil, nil
}
defer destImageSource.Close()
destManifest, destManifestType, err := destImageSource.GetManifest(ctx, targetInstance)
if err != nil {
logrus.Debugf("Unable to get destination image %s/%s manifest: %v", destImageSource, targetInstance, err)
return false, nil, "", "", nil
return nil, nil
}
destManifestDigest, err := manifest.Digest(destManifest)
if err != nil {
return false, nil, "", "", fmt.Errorf("calculating manifest digest: %w", err)
return nil, fmt.Errorf("calculating manifest digest: %w", err)
}
logrus.Debugf("Comparing source and destination manifest digests: %v vs. %v", srcManifestDigest, destManifestDigest)
if srcManifestDigest != destManifestDigest {
return false, nil, "", "", nil
return nil, nil
}
compressionAlgos := set.New[string]()
for _, srcInfo := range ic.src.LayerInfos() {
if c := compressionAlgorithmFromMIMEType(srcInfo); c != nil {
compressionAlgos.Add(c.Name())
}
}
algos, err := algorithmsByNames(compressionAlgos.Values())
if err != nil {
return nil, err
}
// Destination and source manifests, types and digests should all be equivalent
return true, destManifest, destManifestType, destManifestDigest, nil
return &copySingleImageResult{
manifest: destManifest,
manifestMIMEType: destManifestType,
manifestDigest: srcManifestDigest,
compressionAlgorithms: algos,
}, nil
}
// copyLayers copies layers from ic.src/ic.c.rawSource to dest, using and updating ic.manifestUpdates if necessary and ic.cannotModifyManifestReason == "".
func (ic *imageCopier) copyLayers(ctx context.Context) error {
func (ic *imageCopier) copyLayers(ctx context.Context) ([]compressiontypes.Algorithm, error) {
srcInfos := ic.src.LayerInfos()
numLayers := len(srcInfos)
updatedSrcInfos, err := ic.src.LayerInfosForCopy(ctx)
if err != nil {
return err
return nil, err
}
srcInfosUpdated := false
if updatedSrcInfos != nil && !reflect.DeepEqual(srcInfos, updatedSrcInfos) {
if ic.cannotModifyManifestReason != "" {
return fmt.Errorf("Copying this image would require changing layer representation, which we cannot do: %q", ic.cannotModifyManifestReason)
return nil, fmt.Errorf("Copying this image would require changing layer representation, which we cannot do: %q", ic.cannotModifyManifestReason)
}
srcInfos = updatedSrcInfos
srcInfosUpdated = true
@@ -384,7 +426,7 @@ func (ic *imageCopier) copyLayers(ctx context.Context) error {
// layer is empty.
man, err := manifest.FromBlob(ic.src.ManifestBlob, ic.src.ManifestMIMEType)
if err != nil {
return err
return nil, err
}
manifestLayerInfos := man.LayerInfos()
@@ -396,7 +438,7 @@ func (ic *imageCopier) copyLayers(ctx context.Context) error {
defer ic.c.concurrentBlobCopiesSemaphore.Release(1)
defer copyGroup.Done()
cld := copyLayerData{}
if !ic.c.downloadForeignLayers && ic.c.dest.AcceptsForeignLayerURLs() && len(srcLayer.URLs) != 0 {
if !ic.c.options.DownloadForeignLayers && ic.c.dest.AcceptsForeignLayerURLs() && len(srcLayer.URLs) != 0 {
// DiffIDs are, currently, needed only when converting from schema1.
// In which case src.LayerInfos will not have URLs because schema1
// does not support them.
@@ -415,12 +457,18 @@ func (ic *imageCopier) copyLayers(ctx context.Context) error {
// Decide which layers to encrypt
layersToEncrypt := set.New[int]()
var encryptAll bool
if ic.ociEncryptLayers != nil {
encryptAll = len(*ic.ociEncryptLayers) == 0
if ic.c.options.OciEncryptLayers != nil {
encryptAll = len(*ic.c.options.OciEncryptLayers) == 0
totalLayers := len(srcInfos)
for _, l := range *ic.ociEncryptLayers {
// if layer is negative, it is reverse indexed.
layersToEncrypt.Add((totalLayers + l) % totalLayers)
for _, l := range *ic.c.options.OciEncryptLayers {
switch {
case l >= 0 && l < totalLayers:
layersToEncrypt.Add(l)
case l < 0 && l+totalLayers >= 0: // Implies (l + totalLayers) < totalLayers
layersToEncrypt.Add(l + totalLayers) // If l is negative, it is reverse indexed.
default:
return nil, fmt.Errorf("when choosing layers to encrypt, layer index %d out of range (%d layers exist)", l, totalLayers)
}
}
if encryptAll {
@@ -450,14 +498,18 @@ func (ic *imageCopier) copyLayers(ctx context.Context) error {
// A call to copyGroup.Wait() is done at this point by the defer above.
return nil
}(); err != nil {
return err
return nil, err
}
compressionAlgos := set.New[string]()
destInfos := make([]types.BlobInfo, numLayers)
diffIDs := make([]digest.Digest, numLayers)
for i, cld := range data {
if cld.err != nil {
return cld.err
return nil, cld.err
}
if cld.destInfo.CompressionAlgorithm != nil {
compressionAlgos.Add(cld.destInfo.CompressionAlgorithm.Name())
}
destInfos[i] = cld.destInfo
diffIDs[i] = cld.diffID
@@ -472,7 +524,11 @@ func (ic *imageCopier) copyLayers(ctx context.Context) error {
if srcInfosUpdated || layerDigestsDiffer(srcInfos, destInfos) {
ic.manifestUpdates.LayerInfos = destInfos
}
return nil
algos, err := algorithmsByNames(compressionAlgos.Values())
if err != nil {
return nil, err
}
return algos, nil
}
// layerDigestsDiffer returns true iff the digests in a and b differ (ignoring sizes and possible other fields)
@@ -543,7 +599,10 @@ func (ic *imageCopier) copyConfig(ctx context.Context, src types.Image) error {
destInfo, err := func() (types.BlobInfo, error) { // A scope for defer
progressPool := ic.c.newProgressPool()
defer progressPool.Wait()
bar := ic.c.createProgressBar(progressPool, false, srcInfo, "config", "done")
bar, err := ic.c.createProgressBar(progressPool, false, srcInfo, "config", "done")
if err != nil {
return types.BlobInfo{}, err
}
defer bar.Abort(false)
ic.c.printCopyInfo("config", srcInfo)
@@ -577,6 +636,19 @@ type diffIDResult struct {
err error
}
func compressionAlgorithmFromMIMEType(srcInfo types.BlobInfo) *compressiontypes.Algorithm {
// This MIME type → compression mapping belongs in manifest-specific code in our manifest
// package (but we should preferably replace/change UpdatedImage instead of productizing
// this workaround).
switch srcInfo.MediaType {
case manifest.DockerV2Schema2LayerMediaType, imgspecv1.MediaTypeImageLayerGzip:
return &compression.Gzip
case imgspecv1.MediaTypeImageLayerZstd:
return &compression.Zstd
}
return nil
}
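// Example (an illustrative sketch, not part of this change): a gzip layer MIME type maps
// back to the gzip algorithm, while an unknown media type yields nil.
func exampleCompressionAlgorithmFromMIMEType() {
	gzipLayer := types.BlobInfo{MediaType: imgspecv1.MediaTypeImageLayerGzip}
	if algo := compressionAlgorithmFromMIMEType(gzipLayer); algo != nil {
		_ = algo.Name() // "gzip"
	}
	unknown := types.BlobInfo{MediaType: "application/octet-stream"}
	_ = compressionAlgorithmFromMIMEType(unknown) // nil
}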
// copyLayer copies a layer with srcInfo (with known Digest and Annotations and possibly known Size) in src to dest, perhaps (de/re/)compressing it,
// and returns a complete blobInfo of the copied layer, and a value for LayerDiffIDs if diffIDIsNeeded
// srcRef can be used as an additional hint to the destination during checking whether a layer can be reused but srcRef can be nil.
@@ -588,27 +660,22 @@ func (ic *imageCopier) copyLayer(ctx context.Context, srcInfo types.BlobInfo, to
// which uses the compression information to compute the updated MediaType values.
// (Sadly UpdatedImage() is documented to not update MediaTypes from
// ManifestUpdateOptions.LayerInfos[].MediaType, so we are doing it indirectly.)
//
// This MIME type → compression mapping belongs in manifest-specific code in our manifest
// package (but we should preferably replace/change UpdatedImage instead of productizing
// this workaround).
if srcInfo.CompressionAlgorithm == nil {
switch srcInfo.MediaType {
case manifest.DockerV2Schema2LayerMediaType, imgspecv1.MediaTypeImageLayerGzip:
srcInfo.CompressionAlgorithm = &compression.Gzip
case imgspecv1.MediaTypeImageLayerZstd:
srcInfo.CompressionAlgorithm = &compression.Zstd
}
srcInfo.CompressionAlgorithm = compressionAlgorithmFromMIMEType(srcInfo)
}
ic.c.printCopyInfo("blob", srcInfo)
cachedDiffID := ic.c.blobInfoCache.UncompressedDigest(srcInfo.Digest) // May be ""
diffIDIsNeeded := ic.diffIDsAreNeeded && cachedDiffID == ""
diffIDIsNeeded := false
var cachedDiffID digest.Digest = ""
if ic.diffIDsAreNeeded {
cachedDiffID = ic.c.blobInfoCache.UncompressedDigest(srcInfo.Digest) // May be ""
diffIDIsNeeded = cachedDiffID == ""
}
// When encrypting or decrypting, only use the simple code path. We might be able to optimize more
// (e.g. if we know the DiffID of an encrypted compressed layer, it might not be necessary to pull, decrypt and decompress again),
// but it's not trivially safe to do such things, so until someone takes the effort to make a comprehensive argument, let's not.
encryptingOrDecrypting := toEncrypt || (isOciEncrypted(srcInfo.MediaType) && ic.c.ociDecryptConfig != nil)
encryptingOrDecrypting := toEncrypt || (isOciEncrypted(srcInfo.MediaType) && ic.c.options.OciDecryptConfig != nil)
canAvoidProcessingCompleteLayer := !diffIDIsNeeded && !encryptingOrDecrypting
// Don't read the layer from the source if we already have the blob, and optimizations are acceptable.
@@ -623,27 +690,41 @@ func (ic *imageCopier) copyLayer(ctx context.Context, srcInfo types.BlobInfo, to
// a failure when we eventually try to update the manifest with the digest and MIME type of the reused blob.
// Fixing that will probably require passing more information to TryReusingBlob() than the current version of
// the ImageDestination interface lets us pass in.
var requiredCompression *compressiontypes.Algorithm
var originalCompression *compressiontypes.Algorithm
if ic.requireCompressionFormatMatch {
requiredCompression = ic.compressionFormat
originalCompression = srcInfo.CompressionAlgorithm
}
reused, reusedBlob, err := ic.c.dest.TryReusingBlobWithOptions(ctx, srcInfo, private.TryReusingBlobOptions{
Cache: ic.c.blobInfoCache,
CanSubstitute: canSubstitute,
EmptyLayer: emptyLayer,
LayerIndex: &layerIndex,
SrcRef: srcRef,
Cache: ic.c.blobInfoCache,
CanSubstitute: canSubstitute,
EmptyLayer: emptyLayer,
LayerIndex: &layerIndex,
SrcRef: srcRef,
RequiredCompression: requiredCompression,
OriginalCompression: originalCompression,
})
if err != nil {
return types.BlobInfo{}, "", fmt.Errorf("trying to reuse blob %s at destination: %w", srcInfo.Digest, err)
}
if reused {
logrus.Debugf("Skipping blob %s (already present):", srcInfo.Digest)
func() { // A scope for defer
bar := ic.c.createProgressBar(pool, false, types.BlobInfo{Digest: reusedBlob.Digest, Size: 0}, "blob", "skipped: already exists")
if err := func() error { // A scope for defer
bar, err := ic.c.createProgressBar(pool, false, types.BlobInfo{Digest: reusedBlob.Digest, Size: 0}, "blob", "skipped: already exists")
if err != nil {
return err
}
defer bar.Abort(false)
bar.mark100PercentComplete()
}()
return nil
}(); err != nil {
return types.BlobInfo{}, "", err
}
// Throw an event that the layer has been skipped
if ic.c.progress != nil && ic.c.progressInterval > 0 {
ic.c.progress <- types.ProgressProperties{
if ic.c.options.Progress != nil && ic.c.options.ProgressInterval > 0 {
ic.c.options.Progress <- types.ProgressProperties{
Event: types.ProgressEventSkipped,
Artifact: srcInfo,
}
@@ -658,8 +739,11 @@ func (ic *imageCopier) copyLayer(ctx context.Context, srcInfo types.BlobInfo, to
// Attempt a partial copy only when the source allows retrieving a blob partially and
// the destination has support for it.
if canAvoidProcessingCompleteLayer && ic.c.rawSource.SupportsGetBlobAt() && ic.c.dest.SupportsPutBlobPartial() {
if reused, blobInfo := func() (bool, types.BlobInfo) { // A scope for defer
bar := ic.c.createProgressBar(pool, true, srcInfo, "blob", "done")
reused, blobInfo, err := func() (bool, types.BlobInfo, error) { // A scope for defer
bar, err := ic.c.createProgressBar(pool, true, srcInfo, "blob", "done")
if err != nil {
return false, types.BlobInfo{}, err
}
hideProgressBar := true
defer func() { // Note that this is not the same as defer bar.Abort(hideProgressBar); we need hideProgressBar to be evaluated lazily.
bar.Abort(hideProgressBar)
@@ -672,23 +756,32 @@ func (ic *imageCopier) copyLayer(ctx context.Context, srcInfo types.BlobInfo, to
uploadedBlob, err := ic.c.dest.PutBlobPartial(ctx, &proxy, srcInfo, ic.c.blobInfoCache)
if err == nil {
if srcInfo.Size != -1 {
bar.SetRefill(srcInfo.Size - bar.Current())
refill := srcInfo.Size - bar.Current()
bar.SetCurrent(srcInfo.Size)
bar.SetRefill(refill)
}
bar.mark100PercentComplete()
hideProgressBar = false
logrus.Debugf("Retrieved partial blob %v", srcInfo.Digest)
return true, updatedBlobInfoFromUpload(srcInfo, uploadedBlob)
return true, updatedBlobInfoFromUpload(srcInfo, uploadedBlob), nil
}
logrus.Debugf("Failed to retrieve partial blob: %v", err)
return false, types.BlobInfo{}
}(); reused {
return false, types.BlobInfo{}, nil
}()
if err != nil {
return types.BlobInfo{}, "", err
}
if reused {
return blobInfo, cachedDiffID, nil
}
}
// Fallback: copy the layer, computing the diffID if we need to do so
return func() (types.BlobInfo, digest.Digest, error) { // A scope for defer
bar := ic.c.createProgressBar(pool, false, srcInfo, "blob", "done")
bar, err := ic.c.createProgressBar(pool, false, srcInfo, "blob", "done")
if err != nil {
return types.BlobInfo{}, "", err
}
defer bar.Abort(false)
srcStream, srcBlobSize, err := ic.c.rawSource.GetBlob(ctx, srcInfo, ic.c.blobInfoCache)
@@ -818,3 +911,16 @@ func computeDiffID(stream io.Reader, decompressor compressiontypes.DecompressorF
return digest.Canonical.FromReader(stream)
}
// algorithmsByNames returns slice of Algorithms from slice of Algorithm Names
func algorithmsByNames(names []string) ([]compressiontypes.Algorithm, error) {
result := []compressiontypes.Algorithm{}
for _, name := range names {
algo, err := compression.AlgorithmByName(name)
if err != nil {
return nil, err
}
result = append(result, algo)
}
return result, nil
}

View File
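The OciEncryptLayers handling above (in copyLayers) now bounds-checks both positive and negative layer indices instead of relying on modulo arithmetic. A standalone sketch of the same selection rule, assuming only the standard library (the helper name is hypothetical):

package main

import "fmt"

// selectLayersToEncrypt mirrors the index rule above: 0 <= l < total selects layer l,
// -total <= l < 0 selects layer total+l (reverse indexing), anything else is an error.
func selectLayersToEncrypt(indices []int, total int) (map[int]bool, error) {
	selected := map[int]bool{}
	for _, l := range indices {
		switch {
		case l >= 0 && l < total:
			selected[l] = true
		case l < 0 && l+total >= 0:
			selected[l+total] = true
		default:
			return nil, fmt.Errorf("layer index %d out of range (%d layers exist)", l, total)
		}
	}
	return selected, nil
}

func main() {
	fmt.Println(selectLayersToEncrypt([]int{0, -1}, 3)) // map[0:true 2:true] <nil>
	fmt.Println(selectLayersToEncrypt([]int{3}, 3))     // out-of-range error, instead of the old 3%3 == 0
}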

@@ -173,7 +173,10 @@ func (d *dirImageDestination) PutBlobWithOptions(ctx context.Context, stream io.
}
}
blobPath := d.ref.layerPath(blobDigest)
blobPath, err := d.ref.layerPath(blobDigest)
if err != nil {
return private.UploadedBlob{}, err
}
// need to explicitly close the file, since a rename won't otherwise work on Windows
blobFile.Close()
explicitClosed = true
@@ -190,10 +193,16 @@ func (d *dirImageDestination) PutBlobWithOptions(ctx context.Context, stream io.
// If the blob has been successfully reused, returns (true, info, nil).
// If the transport can not reuse the requested blob, TryReusingBlob returns (false, {}, nil); it returns a non-nil error only on an unexpected failure.
func (d *dirImageDestination) TryReusingBlobWithOptions(ctx context.Context, info types.BlobInfo, options private.TryReusingBlobOptions) (bool, private.ReusedBlob, error) {
if !impl.OriginalBlobMatchesRequiredCompression(options) {
return false, private.ReusedBlob{}, nil
}
if info.Digest == "" {
return false, private.ReusedBlob{}, fmt.Errorf("Can not check for a blob with unknown digest")
}
blobPath := d.ref.layerPath(info.Digest)
blobPath, err := d.ref.layerPath(info.Digest)
if err != nil {
return false, private.ReusedBlob{}, err
}
finfo, err := os.Stat(blobPath)
if err != nil && os.IsNotExist(err) {
return false, private.ReusedBlob{}, nil
@@ -213,7 +222,11 @@ func (d *dirImageDestination) TryReusingBlobWithOptions(ctx context.Context, inf
// If the destination is in principle available, refuses this manifest type (e.g. it does not recognize the schema),
// but may accept a different manifest type, the returned error must be an ManifestTypeRejectedError.
func (d *dirImageDestination) PutManifest(ctx context.Context, manifest []byte, instanceDigest *digest.Digest) error {
return os.WriteFile(d.ref.manifestPath(instanceDigest), manifest, 0644)
path, err := d.ref.manifestPath(instanceDigest)
if err != nil {
return err
}
return os.WriteFile(path, manifest, 0644)
}
// PutSignaturesWithFormat writes a set of signatures to the destination.
@@ -226,7 +239,11 @@ func (d *dirImageDestination) PutSignaturesWithFormat(ctx context.Context, signa
if err != nil {
return err
}
if err := os.WriteFile(d.ref.signaturePath(i, instanceDigest), blob, 0644); err != nil {
path, err := d.ref.signaturePath(i, instanceDigest)
if err != nil {
return err
}
if err := os.WriteFile(path, blob, 0644); err != nil {
return err
}
}

View File

@@ -55,7 +55,11 @@ func (s *dirImageSource) Close() error {
// If instanceDigest is not nil, it contains a digest of the specific manifest instance to retrieve (when the primary manifest is a manifest list);
// this never happens if the primary manifest is not a manifest list (e.g. if the source never returns manifest lists).
func (s *dirImageSource) GetManifest(ctx context.Context, instanceDigest *digest.Digest) ([]byte, string, error) {
m, err := os.ReadFile(s.ref.manifestPath(instanceDigest))
path, err := s.ref.manifestPath(instanceDigest)
if err != nil {
return nil, "", err
}
m, err := os.ReadFile(path)
if err != nil {
return nil, "", err
}
@@ -66,7 +70,11 @@ func (s *dirImageSource) GetManifest(ctx context.Context, instanceDigest *digest
// The Digest field in BlobInfo is guaranteed to be provided, Size may be -1 and MediaType may be optionally provided.
// May update BlobInfoCache, preferably after it knows for certain that a blob truly exists at a specific location.
func (s *dirImageSource) GetBlob(ctx context.Context, info types.BlobInfo, cache types.BlobInfoCache) (io.ReadCloser, int64, error) {
r, err := os.Open(s.ref.layerPath(info.Digest))
path, err := s.ref.layerPath(info.Digest)
if err != nil {
return nil, -1, err
}
r, err := os.Open(path)
if err != nil {
return nil, -1, err
}
@@ -84,7 +92,10 @@ func (s *dirImageSource) GetBlob(ctx context.Context, info types.BlobInfo, cache
func (s *dirImageSource) GetSignaturesWithFormat(ctx context.Context, instanceDigest *digest.Digest) ([]signature.Signature, error) {
signatures := []signature.Signature{}
for i := 0; ; i++ {
path := s.ref.signaturePath(i, instanceDigest)
path, err := s.ref.signaturePath(i, instanceDigest)
if err != nil {
return nil, err
}
sigBlob, err := os.ReadFile(path)
if err != nil {
if os.IsNotExist(err) {

View File

@@ -161,25 +161,34 @@ func (ref dirReference) DeleteImage(ctx context.Context, sys *types.SystemContex
}
// manifestPath returns a path for the manifest within a directory using our conventions.
func (ref dirReference) manifestPath(instanceDigest *digest.Digest) string {
func (ref dirReference) manifestPath(instanceDigest *digest.Digest) (string, error) {
if instanceDigest != nil {
return filepath.Join(ref.path, instanceDigest.Encoded()+".manifest.json")
if err := instanceDigest.Validate(); err != nil { // digest.Digest.Encoded() panics on failure, and could possibly result in a path with ../, so validate explicitly.
return "", err
}
return filepath.Join(ref.path, instanceDigest.Encoded()+".manifest.json"), nil
}
return filepath.Join(ref.path, "manifest.json")
return filepath.Join(ref.path, "manifest.json"), nil
}
// layerPath returns a path for a layer tarball within a directory using our conventions.
func (ref dirReference) layerPath(digest digest.Digest) string {
func (ref dirReference) layerPath(digest digest.Digest) (string, error) {
if err := digest.Validate(); err != nil { // digest.Digest.Encoded() panics on failure, and could possibly result in a path with ../, so validate explicitly.
return "", err
}
// FIXME: Should we keep the digest identification?
return filepath.Join(ref.path, digest.Encoded())
return filepath.Join(ref.path, digest.Encoded()), nil
}
// signaturePath returns a path for a signature within a directory using our conventions.
func (ref dirReference) signaturePath(index int, instanceDigest *digest.Digest) string {
func (ref dirReference) signaturePath(index int, instanceDigest *digest.Digest) (string, error) {
if instanceDigest != nil {
return filepath.Join(ref.path, fmt.Sprintf(instanceDigest.Encoded()+".signature-%d", index+1))
if err := instanceDigest.Validate(); err != nil { // digest.Digest.Encoded() panics on failure, and could possibly result in a path with ../, so validate explicitly.
return "", err
}
return filepath.Join(ref.path, fmt.Sprintf(instanceDigest.Encoded()+".signature-%d", index+1)), nil
}
return filepath.Join(ref.path, fmt.Sprintf("signature-%d", index+1))
return filepath.Join(ref.path, fmt.Sprintf("signature-%d", index+1)), nil
}
// versionPath returns a path for the version file within a directory using our conventions.

View File
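manifestPath, layerPath and signaturePath above now validate digests before using Encoded() in a filesystem path. A standalone sketch of why, assuming only github.com/opencontainers/go-digest (safeLayerPath is a hypothetical stand-in for the real helpers):

package main

import (
	"fmt"
	"path/filepath"
	"strings"

	"github.com/opencontainers/go-digest"
)

// safeLayerPath joins a directory with the encoded digest, but only after Validate()
// has rejected anything that is not "algorithm:hex", including "../" payloads.
func safeLayerPath(dir string, d digest.Digest) (string, error) {
	if err := d.Validate(); err != nil {
		return "", err
	}
	return filepath.Join(dir, d.Encoded()), nil
}

func main() {
	good := digest.Digest("sha256:" + strings.Repeat("a", 64))
	fmt.Println(safeLayerPath("/var/lib/images", good)) // /var/lib/images/aaa... <nil>

	// Without validation, Encoded() would return "../../../etc/passwd" here and the
	// join would escape the directory; Validate() turns this into an error instead.
	crafted := digest.Digest("sha256:../../../etc/passwd")
	fmt.Println(safeLayerPath("/var/lib/images", crafted))
}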

@@ -9,11 +9,6 @@ import (
"github.com/docker/go-connections/tlsconfig"
)
const (
// The default API version to be used in case none is explicitly specified
defaultAPIVersion = "1.22"
)
// NewDockerClient initializes a new API client based on the passed SystemContext.
func newDockerClient(sys *types.SystemContext) (*dockerclient.Client, error) {
host := dockerclient.DefaultDockerHost
@@ -23,7 +18,7 @@ func newDockerClient(sys *types.SystemContext) (*dockerclient.Client, error) {
opts := []dockerclient.Opt{
dockerclient.WithHost(host),
dockerclient.WithVersion(defaultAPIVersion),
dockerclient.WithAPIVersionNegotiation(),
}
// We conditionalize building the TLS configuration only to TLS sockets:

View File
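With the pinned defaultAPIVersion removed above, the daemon client relies on API version negotiation. A minimal standalone sketch of constructing such a client, assuming github.com/docker/docker/client is available (error handling and output are illustrative only):

package main

import (
	"fmt"

	dockerclient "github.com/docker/docker/client"
)

func main() {
	// WithAPIVersionNegotiation() lets the client downgrade to whatever API version the
	// daemon reports, instead of pinning a fixed version such as "1.22".
	cli, err := dockerclient.NewClientWithOpts(
		dockerclient.WithHost(dockerclient.DefaultDockerHost),
		dockerclient.WithAPIVersionNegotiation(),
	)
	if err != nil {
		fmt.Println("creating client:", err)
		return
	}
	defer cli.Close()
	fmt.Println("negotiated (or default) API version:", cli.ClientVersion())
}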

@@ -2,6 +2,7 @@ package daemon
import (
"context"
"encoding/json"
"errors"
"fmt"
"io"
@@ -85,12 +86,40 @@ func imageLoadGoroutine(ctx context.Context, c *client.Client, reader *io.PipeRe
}
}()
err = imageLoad(ctx, c, reader)
}
// imageLoad accepts a tar stream on reader and sends it to c
func imageLoad(ctx context.Context, c *client.Client, reader *io.PipeReader) error {
resp, err := c.ImageLoad(ctx, reader, true)
if err != nil {
err = fmt.Errorf("saving image to docker engine: %w", err)
return
return fmt.Errorf("starting a load operation in docker engine: %w", err)
}
defer resp.Body.Close()
// jsonError and jsonMessage are small subsets of docker/docker/pkg/jsonmessage.JSONError and JSONMessage,
// copied here to minimize dependencies.
type jsonError struct {
Message string `json:"message,omitempty"`
}
type jsonMessage struct {
Error *jsonError `json:"errorDetail,omitempty"`
}
dec := json.NewDecoder(resp.Body)
for {
var msg jsonMessage
if err := dec.Decode(&msg); err != nil {
if err == io.EOF {
break
}
return fmt.Errorf("parsing docker load progress: %w", err)
}
if msg.Error != nil {
return fmt.Errorf("docker engine reported: %s", msg.Error.Message)
}
}
return nil // No error reported = success
}
// DesiredLayerCompression indicates if layers must be compressed, decompressed or preserved

View File
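imageLoad above scans the engine's streamed JSON progress messages for an errorDetail entry. A standalone sketch of the same decoding loop against a canned response body, assuming only the standard library (the sample payloads are hypothetical):

package main

import (
	"encoding/json"
	"errors"
	"fmt"
	"io"
	"strings"
)

type jsonError struct {
	Message string `json:"message,omitempty"`
}
type jsonMessage struct {
	Error *jsonError `json:"errorDetail,omitempty"`
}

// checkLoadStream returns nil if the stream ends without an errorDetail message.
func checkLoadStream(body io.Reader) error {
	dec := json.NewDecoder(body)
	for {
		var msg jsonMessage
		if err := dec.Decode(&msg); err != nil {
			if errors.Is(err, io.EOF) {
				return nil // no error reported = success
			}
			return fmt.Errorf("parsing docker load progress: %w", err)
		}
		if msg.Error != nil {
			return fmt.Errorf("docker engine reported: %s", msg.Error.Message)
		}
	}
}

func main() {
	ok := `{"stream":"Loaded image: example:latest\n"}`
	bad := `{"errorDetail":{"message":"open /foo: no such file"},"error":"open /foo: no such file"}`
	fmt.Println(checkLoadStream(strings.NewReader(ok)))  // <nil>
	fmt.Println(checkLoadStream(strings.NewReader(bad))) // docker engine reported: ...
}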

@@ -24,6 +24,7 @@ import (
"github.com/docker/distribution/registry/api/errcode"
dockerChallenge "github.com/docker/distribution/registry/client/auth/challenge"
"golang.org/x/exp/slices"
)
// errNoErrorsInBody is returned when an HTTP response body parses to an empty
@@ -105,7 +106,7 @@ func makeErrorList(err error) []error {
}
func mergeErrors(err1, err2 error) error {
return errcode.Errors(append(makeErrorList(err1), makeErrorList(err2)...))
return errcode.Errors(append(slices.Clone(makeErrorList(err1)), makeErrorList(err2)...))
}
// handleErrorResponse returns error parsed from HTTP response for an

View File

@@ -1,7 +1,6 @@
package docker
import (
"bytes"
"context"
"crypto/tls"
"encoding/json"
@@ -19,6 +18,7 @@ import (
"github.com/containers/image/v5/docker/reference"
"github.com/containers/image/v5/internal/iolimits"
"github.com/containers/image/v5/internal/set"
"github.com/containers/image/v5/internal/useragent"
"github.com/containers/image/v5/manifest"
"github.com/containers/image/v5/pkg/docker/config"
@@ -121,6 +121,9 @@ type dockerClient struct {
// Private state for detectProperties:
detectPropertiesOnce sync.Once // detectPropertiesOnce is used to execute detectProperties() at most once.
detectPropertiesError error // detectPropertiesError caches the initial error.
// Private state for logResponseWarnings
reportedWarningsLock sync.Mutex
reportedWarnings *set.Set[string]
}
type authScope struct {
@@ -281,10 +284,11 @@ func newDockerClient(sys *types.SystemContext, registry, reference string) (*doc
}
return &dockerClient{
sys: sys,
registry: registry,
userAgent: userAgent,
tlsClientConfig: tlsClientConfig,
sys: sys,
registry: registry,
userAgent: userAgent,
tlsClientConfig: tlsClientConfig,
reportedWarnings: set.New[string](),
}, nil
}
@@ -359,6 +363,11 @@ func SearchRegistry(ctx context.Context, sys *types.SystemContext, registry, ima
hostname := registry
if registry == dockerHostname {
hostname = dockerV1Hostname
// A search term of library/foo does not find the library/foo image on the docker.io servers,
// which is surprising; note that Docker itself modifies the search term client-side in this same way,
// so it seems convenient to do the same thing.
// Read more here: https://github.com/containers/image/pull/2133#issue-1928524334
image = strings.TrimPrefix(image, "library/")
}
client, err := newDockerClient(sys, hostname, registry)
@@ -624,9 +633,76 @@ func (c *dockerClient) makeRequestToResolvedURLOnce(ctx context.Context, method
if err != nil {
return nil, err
}
if warnings := res.Header.Values("Warning"); len(warnings) != 0 {
c.logResponseWarnings(res, warnings)
}
return res, nil
}
// logResponseWarnings logs warningHeaders from res, if any.
func (c *dockerClient) logResponseWarnings(res *http.Response, warningHeaders []string) {
c.reportedWarningsLock.Lock()
defer c.reportedWarningsLock.Unlock()
for _, header := range warningHeaders {
warningString := parseRegistryWarningHeader(header)
if warningString == "" {
logrus.Debugf("Ignored Warning: header from registry: %q", header)
} else {
if !c.reportedWarnings.Contains(warningString) {
c.reportedWarnings.Add(warningString)
// Note that reportedWarnings is based only on warningString, so that we don't
// repeat the same warning for every request - but the warning includes the URL;
// so it may not be specific to that URL.
logrus.Warnf("Warning from registry (first encountered at %q): %q", res.Request.URL.Redacted(), warningString)
} else {
logrus.Debugf("Repeated warning from registry at %q: %q", res.Request.URL.Redacted(), warningString)
}
}
}
}
// parseRegistryWarningHeader parses a Warning: header per RFC 7234, limited to the warning
// values allowed by opencontainers/distribution-spec.
// It returns the warning string if the header has the expected format, or "" otherwise.
func parseRegistryWarningHeader(header string) string {
const expectedPrefix = `299 - "`
const expectedSuffix = `"`
// warning-value = warn-code SP warn-agent SP warn-text [ SP warn-date ]
// distribution-spec requires warn-code=299, warn-agent="-", warn-date missing
if !strings.HasPrefix(header, expectedPrefix) || !strings.HasSuffix(header, expectedSuffix) {
return ""
}
header = header[len(expectedPrefix) : len(header)-len(expectedSuffix)]
// “Recipients that process the value of a quoted-string MUST handle a quoted-pair
// as if it were replaced by the octet following the backslash.”, so let's do that…
res := strings.Builder{}
afterBackslash := false
for _, c := range []byte(header) { // []byte because escaping is defined in terms of bytes, not Unicode code points
switch {
case c == 0x7F || (c < ' ' && c != '\t'):
return "" // Control characters are forbidden
case afterBackslash:
res.WriteByte(c)
afterBackslash = false
case c == '"':
// This terminates the warn-text; either a warn-date (which distribution-spec forbids) follows,
// or the input is completely invalid.
return ""
case c == '\\':
afterBackslash = true
default:
res.WriteByte(c)
}
}
if afterBackslash {
return ""
}
return res.String()
}
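For context, a minimal sketch of what this parser accepts and rejects, written as a hypothetical test that is assumed to live in the same package (so it can reach the unexported parseRegistryWarningHeader); all header values below are made up:

```go
package docker

import "testing"

func TestParseRegistryWarningHeaderSketch(t *testing.T) {
	cases := map[string]string{
		`299 - "The \"latest\" tag is deprecated"`: `The "latest" tag is deprecated`, // quoted-pair is unescaped
		`299 - "plain warning"`:                    "plain warning",
		`199 - "wrong warn-code"`:                  "", // distribution-spec requires warn-code 299
		`299 agent "wrong warn-agent"`:             "", // distribution-spec requires warn-agent "-"
	}
	for header, want := range cases {
		if got := parseRegistryWarningHeader(header); got != want {
			t.Errorf("parseRegistryWarningHeader(%q) = %q, want %q", header, got, want)
		}
	}
}
```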
// we're using the challenges from the /v2/ ping response and not the one from the destination
// URL in this request because:
//
@@ -876,6 +952,8 @@ func (c *dockerClient) detectProperties(ctx context.Context) error {
return c.detectPropertiesError
}
// fetchManifest fetches a manifest for (the repo of ref) + tagOrDigest.
// The caller is responsible for ensuring tagOrDigest uses the expected format.
func (c *dockerClient) fetchManifest(ctx context.Context, ref dockerReference, tagOrDigest string) ([]byte, string, error) {
path := fmt.Sprintf(manifestPath, reference.Path(ref.ref), tagOrDigest)
headers := map[string][]string{
@@ -958,6 +1036,9 @@ func (c *dockerClient) getBlob(ctx context.Context, ref dockerReference, info ty
}
}
if err := info.Digest.Validate(); err != nil { // Make sure info.Digest.String() does not contain any unexpected characters
return nil, 0, err
}
path := fmt.Sprintf(blobsPath, reference.Path(ref.ref), info.Digest.String())
logrus.Debugf("Downloading %s", path)
res, err := c.makeRequest(ctx, http.MethodGet, path, nil, nil, v2Auth, nil)
@@ -1008,9 +1089,10 @@ func isManifestUnknownError(err error) bool {
if errors.As(err, &e) && e.ErrorCode() == errcode.ErrorCodeUnknown && e.Message == "Not Found" {
return true
}
// ALSO registry.redhat.io as of October 2022
// opencontainers/distribution-spec does not require the errcode.Error payloads to be used,
// but specifies that the HTTP status must be 404.
var unexpected *unexpectedHTTPResponseError
if errors.As(err, &unexpected) && unexpected.StatusCode == http.StatusNotFound && bytes.Contains(unexpected.Response, []byte("Not found")) {
if errors.As(err, &unexpected) && unexpected.StatusCode == http.StatusNotFound {
return true
}
return false
@@ -1020,7 +1102,10 @@ func isManifestUnknownError(err error) bool {
// digest in ref.
// It returns (nil, nil) if the manifest does not exist.
func (c *dockerClient) getSigstoreAttachmentManifest(ctx context.Context, ref dockerReference, digest digest.Digest) (*manifest.OCI1, error) {
tag := sigstoreAttachmentTag(digest)
tag, err := sigstoreAttachmentTag(digest)
if err != nil {
return nil, err
}
sigstoreRef, err := reference.WithTag(reference.TrimNamed(ref.ref), tag)
if err != nil {
return nil, err
@@ -1053,6 +1138,9 @@ func (c *dockerClient) getSigstoreAttachmentManifest(ctx context.Context, ref do
// getExtensionsSignatures returns signatures from the X-Registry-Supports-Signatures API extension,
// using the original data structures.
func (c *dockerClient) getExtensionsSignatures(ctx context.Context, ref dockerReference, manifestDigest digest.Digest) (*extensionSignatureList, error) {
if err := manifestDigest.Validate(); err != nil { // Make sure manifestDigest.String() does not contain any unexpected characters
return nil, err
}
path := fmt.Sprintf(extensionsSignaturePath, reference.Path(ref.ref), manifestDigest)
res, err := c.makeRequest(ctx, http.MethodGet, path, nil, nil, v2Auth, nil)
if err != nil {
@@ -1076,8 +1164,11 @@ func (c *dockerClient) getExtensionsSignatures(ctx context.Context, ref dockerRe
}
// sigstoreAttachmentTag returns a sigstore attachment tag for the specified digest.
func sigstoreAttachmentTag(d digest.Digest) string {
return strings.Replace(d.String(), ":", "-", 1) + ".sig"
func sigstoreAttachmentTag(d digest.Digest) (string, error) {
if err := d.Validate(); err != nil { // Make sure d.String() doesn't contain any unexpected characters
return "", err
}
return strings.Replace(d.String(), ":", "-", 1) + ".sig", nil
}
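A hedged usage sketch of the new two-value form, assumed to live in the same package so it can call the unexported function; the digest value is made up:

```go
package docker

import (
	"fmt"

	"github.com/opencontainers/go-digest"
)

func sigstoreAttachmentTagSketch() {
	d := digest.Digest("sha256:9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08")
	tag, err := sigstoreAttachmentTag(d)
	fmt.Println(tag, err) // sha256-9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08.sig <nil>

	_, err = sigstoreAttachmentTag("not-a-digest") // rejected by Validate(), no tag is derived
	fmt.Println(err != nil)                        // true
}
```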
// Close removes resources associated with an initialized dockerClient, if any.

View File

@@ -14,6 +14,7 @@ import (
"github.com/containers/image/v5/manifest"
"github.com/containers/image/v5/types"
"github.com/opencontainers/go-digest"
"github.com/sirupsen/logrus"
)
// Image is a Docker-specific implementation of types.ImageCloser with a few extra methods
@@ -88,7 +89,20 @@ func GetRepositoryTags(ctx context.Context, sys *types.SystemContext, ref types.
if err = json.NewDecoder(res.Body).Decode(&tagsHolder); err != nil {
return nil, err
}
tags = append(tags, tagsHolder.Tags...)
for _, tag := range tagsHolder.Tags {
if _, err := reference.WithTag(dr.ref, tag); err != nil { // Ensure the tag does not contain unexpected values
// Per https://github.com/containers/skopeo/issues/2346 , unknown versions of JFrog Artifactory,
// contrary to the tag format specified in
// https://github.com/opencontainers/distribution-spec/blob/8a871c8234977df058f1a14e299fe0a673853da2/spec.md?plain=1#L160 ,
// include digests in the list.
if _, err := digest.Parse(tag); err == nil {
logrus.Debugf("Ignoring invalid tag %q matching a digest format", tag)
continue
}
return nil, fmt.Errorf("registry returned invalid tag %q: %w", tag, err)
}
tags = append(tags, tag)
}
link := res.Header.Get("Link")
if link == "" {
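A standalone, hedged sketch of the same filtering in isolation; filterTagsSketch and the repository/tag values are invented for illustration and are not part of the library:

```go
package main

import (
	"fmt"

	"github.com/containers/image/v5/docker/reference"
	"github.com/opencontainers/go-digest"
)

func filterTagsSketch(repo reference.Named, raw []string) ([]string, error) {
	var tags []string
	for _, tag := range raw {
		if _, err := reference.WithTag(repo, tag); err != nil {
			// Some Artifactory versions list digests among the tags; skip those silently.
			if _, digErr := digest.Parse(tag); digErr == nil {
				continue
			}
			return nil, fmt.Errorf("registry returned invalid tag %q: %w", tag, err)
		}
		tags = append(tags, tag)
	}
	return tags, nil
}

func main() {
	repo, _ := reference.ParseNormalizedNamed("quay.io/example/app")
	tags, err := filterTagsSketch(repo, []string{
		"v1.0",
		"sha256:9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08", // skipped
		"latest",
	})
	fmt.Println(tags, err) // [v1.0 latest] <nil>
}
```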
@@ -123,6 +137,9 @@ func GetDigest(ctx context.Context, sys *types.SystemContext, ref types.ImageRef
if !ok {
return "", errors.New("ref must be a dockerReference")
}
if dr.isUnknownDigest {
return "", fmt.Errorf("docker: reference %q is for unknown digest case; cannot get digest", dr.StringWithinTransport())
}
tagOrDigest, err := dr.tagOrDigest()
if err != nil {

View File

@@ -137,7 +137,7 @@ func (d *dockerImageDestination) PutBlobWithOptions(ctx context.Context, stream
// If requested, precompute the blob digest to prevent uploading layers that already exist on the registry.
// This functionality is particularly useful when BlobInfoCache has not been populated with compressed digests,
// the source blob is uncompressed, and the destination blob is being compressed "on the fly".
if inputInfo.Digest == "" && d.c.sys.DockerRegistryPushPrecomputeDigests {
if inputInfo.Digest == "" && d.c.sys != nil && d.c.sys.DockerRegistryPushPrecomputeDigests {
logrus.Debugf("Precomputing digest layer for %s", reference.Path(d.ref.ref))
streamCopy, cleanup, err := streamdigest.ComputeBlobInfo(d.c.sys, stream, &inputInfo)
if err != nil {
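A hedged sketch of turning on the behavior guarded above; it only shows how the SystemContext field referenced in the diff is set, and the rest of the push pipeline is omitted:

```go
package main

import (
	"fmt"

	"github.com/containers/image/v5/types"
)

func main() {
	sys := &types.SystemContext{
		// Ask the docker destination to precompute blob digests before uploading,
		// so layers already present on the registry are not re-uploaded.
		DockerRegistryPushPrecomputeDigests: true,
	}
	fmt.Println(sys.DockerRegistryPushPrecomputeDigests) // true
}
```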
@@ -229,6 +229,9 @@ func (d *dockerImageDestination) PutBlobWithOptions(ctx context.Context, stream
// If the destination does not contain the blob, or it is unknown, blobExists ordinarily returns (false, -1, nil);
// it returns a non-nil error only on an unexpected failure.
func (d *dockerImageDestination) blobExists(ctx context.Context, repo reference.Named, digest digest.Digest, extraScope *authScope) (bool, int64, error) {
if err := digest.Validate(); err != nil { // Make sure digest.String() does not contain any unexpected characters
return false, -1, err
}
checkPath := fmt.Sprintf(blobsPath, reference.Path(repo), digest.String())
logrus.Debugf("Checking %s", checkPath)
res, err := d.c.makeRequest(ctx, http.MethodHead, checkPath, nil, nil, v2Auth, extraScope)
@@ -321,33 +324,78 @@ func (d *dockerImageDestination) TryReusingBlobWithOptions(ctx context.Context,
return false, private.ReusedBlob{}, errors.New("Can not check for a blob with unknown digest")
}
// First, check whether the blob happens to already exist at the destination.
haveBlob, reusedInfo, err := d.tryReusingExactBlob(ctx, info, options.Cache)
if err != nil {
return false, private.ReusedBlob{}, err
}
if haveBlob {
return true, reusedInfo, nil
if impl.OriginalBlobMatchesRequiredCompression(options) {
// First, check whether the blob happens to already exist at the destination.
haveBlob, reusedInfo, err := d.tryReusingExactBlob(ctx, info, options.Cache)
if err != nil {
return false, private.ReusedBlob{}, err
}
if haveBlob {
return true, reusedInfo, nil
}
} else {
requiredCompression := "nil"
if options.OriginalCompression != nil {
requiredCompression = options.OriginalCompression.Name()
}
logrus.Debugf("Ignoring exact blob match case due to compression mismatch ( %s vs %s )", options.RequiredCompression.Name(), requiredCompression)
}
// Then try reusing blobs from other locations.
candidates := options.Cache.CandidateLocations2(d.ref.Transport(), bicTransportScope(d.ref), info.Digest, options.CanSubstitute)
for _, candidate := range candidates {
candidateRepo, err := parseBICLocationReference(candidate.Location)
var err error
compressionOperation, compressionAlgorithm, err := blobinfocache.OperationAndAlgorithmForCompressor(candidate.CompressorName)
if err != nil {
logrus.Debugf("Error parsing BlobInfoCache location reference: %s", err)
logrus.Debugf("OperationAndAlgorithmForCompressor Failed: %v", err)
continue
}
if candidate.CompressorName != blobinfocache.Uncompressed {
logrus.Debugf("Trying to reuse cached location %s compressed with %s in %s", candidate.Digest.String(), candidate.CompressorName, candidateRepo.Name())
var candidateRepo reference.Named
if !candidate.UnknownLocation {
candidateRepo, err = parseBICLocationReference(candidate.Location)
if err != nil {
logrus.Debugf("Error parsing BlobInfoCache location reference: %s", err)
continue
}
}
if !impl.BlobMatchesRequiredCompression(options, compressionAlgorithm) {
requiredCompression := "nil"
if compressionAlgorithm != nil {
requiredCompression = compressionAlgorithm.Name()
}
if !candidate.UnknownLocation {
logrus.Debugf("Ignoring candidate blob %s as reuse candidate due to compression mismatch ( %s vs %s ) in %s", candidate.Digest.String(), options.RequiredCompression.Name(), requiredCompression, candidateRepo.Name())
} else {
logrus.Debugf("Ignoring candidate blob %s as reuse candidate due to compression mismatch ( %s vs %s ) with no location match, checking current repo", candidate.Digest.String(), options.RequiredCompression.Name(), requiredCompression)
}
continue
}
if !candidate.UnknownLocation {
if candidate.CompressorName != blobinfocache.Uncompressed {
logrus.Debugf("Trying to reuse blob with cached digest %s compressed with %s in destination repo %s", candidate.Digest.String(), candidate.CompressorName, candidateRepo.Name())
} else {
logrus.Debugf("Trying to reuse blob with cached digest %s in destination repo %s", candidate.Digest.String(), candidateRepo.Name())
}
// Sanity checks:
if reference.Domain(candidateRepo) != reference.Domain(d.ref.ref) {
// OCI distribution spec 1.1 allows mounting blobs without specifying the source repo
// (the "from" parameter); in that case we might try to use these candidates as well.
//
// OTOH that would mean we can't do the “blobExists” check, and if there is no match
// we could get an upload request that we would have to cancel.
logrus.Debugf("... Internal error: domain %s does not match destination %s", reference.Domain(candidateRepo), reference.Domain(d.ref.ref))
continue
}
} else {
logrus.Debugf("Trying to reuse cached location %s with no compression in %s", candidate.Digest.String(), candidateRepo.Name())
}
// Sanity checks:
if reference.Domain(candidateRepo) != reference.Domain(d.ref.ref) {
logrus.Debugf("... Internal error: domain %s does not match destination %s", reference.Domain(candidateRepo), reference.Domain(d.ref.ref))
continue
if candidate.CompressorName != blobinfocache.Uncompressed {
logrus.Debugf("Trying to reuse blob with cached digest %s compressed with %s with no location match, checking current repo", candidate.Digest.String(), candidate.CompressorName)
} else {
logrus.Debugf("Trying to reuse blob with cached digest %s in destination repo with no location match, checking current repo", candidate.Digest.String())
}
// This digest is a known variant of this blob but we don't
// have a recorded location in this registry, let's try looking
// for it in the current repo.
candidateRepo = reference.TrimNamed(d.ref.ref)
}
if candidateRepo.Name() == d.ref.ref.Name() && candidate.Digest == info.Digest {
logrus.Debug("... Already tried the primary destination")
@@ -388,12 +436,6 @@ func (d *dockerImageDestination) TryReusingBlobWithOptions(ctx context.Context,
options.Cache.RecordKnownLocation(d.ref.Transport(), bicTransportScope(d.ref), candidate.Digest, newBICLocationReference(d.ref))
compressionOperation, compressionAlgorithm, err := blobinfocache.OperationAndAlgorithmForCompressor(candidate.CompressorName)
if err != nil {
logrus.Debugf("... Failed: %v", err)
continue
}
return true, private.ReusedBlob{
Digest: candidate.Digest,
Size: size,
@@ -413,12 +455,21 @@ func (d *dockerImageDestination) TryReusingBlobWithOptions(ctx context.Context,
// but may accept a different manifest type, the returned error must be an ManifestTypeRejectedError.
func (d *dockerImageDestination) PutManifest(ctx context.Context, m []byte, instanceDigest *digest.Digest) error {
var refTail string
if instanceDigest != nil {
// If d.ref.isUnknownDigest=true, then we push without a tag, so get the
// digest that will be used
if d.ref.isUnknownDigest {
digest, err := manifest.Digest(m)
if err != nil {
return err
}
refTail = digest.String()
} else if instanceDigest != nil {
// If the instanceDigest is provided, then use it as the refTail, because the reference,
// whether it includes a tag or a digest, refers to the list as a whole, and not this
// particular instance.
refTail = instanceDigest.String()
// Double-check that the manifest we've been given matches the digest we've been given.
// This also validates the format of instanceDigest.
matches, err := manifest.MatchesDigest(m, *instanceDigest)
if err != nil {
return fmt.Errorf("digesting manifest in PutManifest: %w", err)
@@ -585,11 +636,13 @@ func (d *dockerImageDestination) putSignaturesToLookaside(signatures []signature
// NOTE: Keep this in sync with docs/signature-protocols.md!
for i, signature := range signatures {
sigURL := lookasideStorageURL(d.c.signatureBase, manifestDigest, i)
err := d.putOneSignature(sigURL, signature)
sigURL, err := lookasideStorageURL(d.c.signatureBase, manifestDigest, i)
if err != nil {
return err
}
if err := d.putOneSignature(sigURL, signature); err != nil {
return err
}
}
// Remove any other signatures, if present.
// We stop at the first missing signature; if a previous deleting loop aborted
@@ -597,7 +650,10 @@ func (d *dockerImageDestination) putSignaturesToLookaside(signatures []signature
// is enough for dockerImageSource to stop looking for other signatures, so that
// is sufficient.
for i := len(signatures); ; i++ {
sigURL := lookasideStorageURL(d.c.signatureBase, manifestDigest, i)
sigURL, err := lookasideStorageURL(d.c.signatureBase, manifestDigest, i)
if err != nil {
return err
}
missing, err := d.c.deleteOneSignature(sigURL)
if err != nil {
return err
@@ -668,6 +724,10 @@ func (d *dockerImageDestination) putSignaturesToSigstoreAttachments(ctx context.
}
}
// To make sure we can safely append to the slices of ociManifest, without adding a remote dependency on the code that creates it.
ociManifest.Layers = slices.Clone(ociManifest.Layers)
// We don't need to ^^^ for ociConfig.RootFS.DiffIDs because we have created it empty ourselves, and json.Unmarshal is documented to append() to
// the slice in the original object (or in a newly allocated object).
for _, sig := range signatures {
mimeType := sig.UntrustedMIMEType()
payloadBlob := sig.UntrustedPayload()
@@ -724,8 +784,12 @@ func (d *dockerImageDestination) putSignaturesToSigstoreAttachments(ctx context.
if err != nil {
return err
}
attachmentTag, err := sigstoreAttachmentTag(manifestDigest)
if err != nil {
return err
}
logrus.Debugf("Uploading sigstore attachment manifest")
return d.uploadManifest(ctx, manifestBlob, sigstoreAttachmentTag(manifestDigest))
return d.uploadManifest(ctx, manifestBlob, attachmentTag)
}
func layerMatchesSigstoreSignature(layer imgspecv1.Descriptor, mimeType string,
@@ -841,6 +905,7 @@ func (d *dockerImageDestination) putSignaturesToAPIExtension(ctx context.Context
return err
}
// manifestDigest is known to be valid because it was not rejected by getExtensionsSignatures above.
path := fmt.Sprintf(extensionsSignaturePath, reference.Path(d.ref.ref), manifestDigest.String())
res, err := d.c.makeRequest(ctx, http.MethodPut, path, nil, bytes.NewReader(body), v2Auth, nil)
if err != nil {

View File

@@ -38,8 +38,8 @@ type dockerImageSource struct {
impl.DoesNotAffectLayerInfosForCopy
stubs.ImplementsGetBlobAt
logicalRef dockerReference // The reference the user requested.
physicalRef dockerReference // The actual reference we are accessing (possibly a mirror)
logicalRef dockerReference // The reference the user requested. This must satisfy !isUnknownDigest
physicalRef dockerReference // The actual reference we are accessing (possibly a mirror). This must satisfy !isUnknownDigest
c *dockerClient
// State
cachedManifest []byte // nil if not loaded yet
@@ -48,7 +48,12 @@ type dockerImageSource struct {
// newImageSource creates a new ImageSource for the specified image reference.
// The caller must call .Close() on the returned ImageSource.
// The caller must ensure !ref.isUnknownDigest.
func newImageSource(ctx context.Context, sys *types.SystemContext, ref dockerReference) (*dockerImageSource, error) {
if ref.isUnknownDigest {
return nil, fmt.Errorf("reading images from docker: reference %q without a tag or digest is not supported", ref.StringWithinTransport())
}
registryConfig, err := loadRegistryConfiguration(sys)
if err != nil {
return nil, err
@@ -121,7 +126,7 @@ func newImageSource(ctx context.Context, sys *types.SystemContext, ref dockerRef
// The caller must call .Close() on the returned ImageSource.
func newImageSourceAttempt(ctx context.Context, sys *types.SystemContext, logicalRef dockerReference, pullSource sysregistriesv2.PullSource,
registryConfig *registryConfiguration) (*dockerImageSource, error) {
physicalRef, err := newReference(pullSource.Reference)
physicalRef, err := newReference(pullSource.Reference, false)
if err != nil {
return nil, err
}
@@ -189,6 +194,9 @@ func simplifyContentType(contentType string) string {
// this never happens if the primary manifest is not a manifest list (e.g. if the source never returns manifest lists).
func (s *dockerImageSource) GetManifest(ctx context.Context, instanceDigest *digest.Digest) ([]byte, string, error) {
if instanceDigest != nil {
if err := instanceDigest.Validate(); err != nil { // Make sure instanceDigest.String() does not contain any unexpected characters
return nil, "", err
}
return s.fetchManifest(ctx, instanceDigest.String())
}
err := s.ensureManifestIsLoaded(ctx)
@@ -198,6 +206,8 @@ func (s *dockerImageSource) GetManifest(ctx context.Context, instanceDigest *dig
return s.cachedManifest, s.cachedManifestMIMEType, nil
}
// fetchManifest fetches a manifest for tagOrDigest.
// The caller is responsible for ensuring tagOrDigest uses the expected format.
func (s *dockerImageSource) fetchManifest(ctx context.Context, tagOrDigest string) ([]byte, string, error) {
return s.c.fetchManifest(ctx, s.physicalRef, tagOrDigest)
}
@@ -347,6 +357,9 @@ func (s *dockerImageSource) GetBlobAt(ctx context.Context, info types.BlobInfo,
return nil, nil, fmt.Errorf("external URLs not supported with GetBlobAt")
}
if err := info.Digest.Validate(); err != nil { // Make sure info.Digest.String() does not contain any unexpected characters
return nil, nil, err
}
path := fmt.Sprintf(blobsPath, reference.Path(s.physicalRef.ref), info.Digest.String())
logrus.Debugf("Downloading %s", path)
res, err := s.c.makeRequest(ctx, http.MethodGet, path, headers, nil, v2Auth, nil)
@@ -457,7 +470,10 @@ func (s *dockerImageSource) getSignaturesFromLookaside(ctx context.Context, inst
return nil, fmt.Errorf("server provided %d signatures, assuming that's unreasonable and a server error", maxLookasideSignatures)
}
sigURL := lookasideStorageURL(s.c.signatureBase, manifestDigest, i)
sigURL, err := lookasideStorageURL(s.c.signatureBase, manifestDigest, i)
if err != nil {
return nil, err
}
signature, missing, err := s.getOneSignature(ctx, sigURL)
if err != nil {
return nil, err
@@ -591,6 +607,10 @@ func (s *dockerImageSource) getSignaturesFromSigstoreAttachments(ctx context.Con
// deleteImage deletes the named image from the registry, if supported.
func deleteImage(ctx context.Context, sys *types.SystemContext, ref dockerReference) error {
if ref.isUnknownDigest {
return fmt.Errorf("Docker reference without a tag or digest cannot be deleted")
}
registryConfig, err := loadRegistryConfiguration(sys)
if err != nil {
return err
@@ -651,7 +671,10 @@ func deleteImage(ctx context.Context, sys *types.SystemContext, ref dockerRefere
}
for i := 0; ; i++ {
sigURL := lookasideStorageURL(c.signatureBase, manifestDigest, i)
sigURL, err := lookasideStorageURL(c.signatureBase, manifestDigest, i)
if err != nil {
return err
}
missing, err := c.deleteOneSignature(sigURL)
if err != nil {
return err

View File

@@ -12,6 +12,11 @@ import (
"github.com/containers/image/v5/types"
)
// UnknownDigestSuffix can be appended to a reference when the caller
// wants to push an image without a tag or digest.
// NewReferenceUnknownDigest() is called when this const is detected.
const UnknownDigestSuffix = "@@unknown-digest@@"
func init() {
transports.Register(Transport)
}
@@ -43,7 +48,8 @@ func (t dockerTransport) ValidatePolicyConfigurationScope(scope string) error {
// dockerReference is an ImageReference for Docker images.
type dockerReference struct {
ref reference.Named // By construction we know that !reference.IsNameOnly(ref)
ref reference.Named // By construction we know that !reference.IsNameOnly(ref) unless isUnknownDigest=true
isUnknownDigest bool
}
// ParseReference converts a string, which should not start with the ImageTransport.Name prefix, into an Docker ImageReference.
@@ -51,23 +57,46 @@ func ParseReference(refString string) (types.ImageReference, error) {
if !strings.HasPrefix(refString, "//") {
return nil, fmt.Errorf("docker: image reference %s does not start with //", refString)
}
// Check if ref has UnknownDigestSuffix suffixed to it
unknownDigest := false
if strings.HasSuffix(refString, UnknownDigestSuffix) {
unknownDigest = true
refString = strings.TrimSuffix(refString, UnknownDigestSuffix)
}
ref, err := reference.ParseNormalizedNamed(strings.TrimPrefix(refString, "//"))
if err != nil {
return nil, err
}
if unknownDigest {
if !reference.IsNameOnly(ref) {
return nil, fmt.Errorf("docker: image reference %q has unknown digest set but it contains either a tag or digest", ref.String()+UnknownDigestSuffix)
}
return NewReferenceUnknownDigest(ref)
}
ref = reference.TagNameOnly(ref)
return NewReference(ref)
}
// NewReference returns a Docker reference for a named reference. The reference must satisfy !reference.IsNameOnly().
func NewReference(ref reference.Named) (types.ImageReference, error) {
return newReference(ref)
return newReference(ref, false)
}
// NewReferenceUnknownDigest returns a Docker reference for a named reference, which can be used to write images without setting
// a tag on the registry. The reference must satisfy reference.IsNameOnly()
func NewReferenceUnknownDigest(ref reference.Named) (types.ImageReference, error) {
return newReference(ref, true)
}
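A hedged usage sketch of the new entry points; the repository name is made up:

```go
package main

import (
	"fmt"

	"github.com/containers/image/v5/docker"
)

func main() {
	// Appending UnknownDigestSuffix to a name-only reference yields a push-only
	// reference that carries neither a tag nor a digest.
	ref, err := docker.ParseReference("//quay.io/example/app" + docker.UnknownDigestSuffix)
	if err != nil {
		panic(err)
	}
	fmt.Println(ref.StringWithinTransport()) // //quay.io/example/app@@unknown-digest@@
}
```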
// newReference returns a dockerReference for a named reference.
func newReference(ref reference.Named) (dockerReference, error) {
if reference.IsNameOnly(ref) {
return dockerReference{}, fmt.Errorf("Docker reference %s has neither a tag nor a digest", reference.FamiliarString(ref))
func newReference(ref reference.Named, unknownDigest bool) (dockerReference, error) {
if reference.IsNameOnly(ref) && !unknownDigest {
return dockerReference{}, fmt.Errorf("Docker reference %s is not for an unknown digest case; tag or digest is needed", reference.FamiliarString(ref))
}
if !reference.IsNameOnly(ref) && unknownDigest {
return dockerReference{}, fmt.Errorf("Docker reference %s is for an unknown digest case but reference has a tag or digest", reference.FamiliarString(ref))
}
// A github.com/distribution/reference value can have a tag and a digest at the same time!
// The docker/distribution API does not really support that (we can't ask for an image with a specific
@@ -81,7 +110,8 @@ func newReference(ref reference.Named) (dockerReference, error) {
}
return dockerReference{
ref: ref,
ref: ref,
isUnknownDigest: unknownDigest,
}, nil
}
@@ -95,7 +125,11 @@ func (ref dockerReference) Transport() types.ImageTransport {
// e.g. default attribute values omitted by the user may be filled in the return value, or vice versa.
// WARNING: Do not use the return value in the UI to describe an image, it does not contain the Transport().Name() prefix.
func (ref dockerReference) StringWithinTransport() string {
return "//" + reference.FamiliarString(ref.ref)
famString := "//" + reference.FamiliarString(ref.ref)
if ref.isUnknownDigest {
return famString + UnknownDigestSuffix
}
return famString
}
// DockerReference returns a Docker reference associated with this reference
@@ -113,6 +147,9 @@ func (ref dockerReference) DockerReference() reference.Named {
// not required/guaranteed that it will be a valid input to Transport().ParseReference().
// Returns "" if configuration identities for these references are not supported.
func (ref dockerReference) PolicyConfigurationIdentity() string {
if ref.isUnknownDigest {
return ref.ref.Name()
}
res, err := policyconfiguration.DockerReferenceIdentity(ref.ref)
if res == "" || err != nil { // Coverage: Should never happen, NewReference above should refuse values which could cause a failure.
panic(fmt.Sprintf("Internal inconsistency: policyconfiguration.DockerReferenceIdentity returned %#v, %v", res, err))
@@ -126,7 +163,13 @@ func (ref dockerReference) PolicyConfigurationIdentity() string {
// It is STRONGLY recommended for the first element, if any, to be a prefix of PolicyConfigurationIdentity(),
// and each following element to be a prefix of the element preceding it.
func (ref dockerReference) PolicyConfigurationNamespaces() []string {
return policyconfiguration.DockerReferenceNamespaces(ref.ref)
namespaces := policyconfiguration.DockerReferenceNamespaces(ref.ref)
if ref.isUnknownDigest {
if len(namespaces) != 0 && namespaces[0] == ref.ref.Name() {
namespaces = namespaces[1:]
}
}
return namespaces
}
// NewImage returns a types.ImageCloser for this reference, possibly specialized for this ImageTransport.
@@ -163,6 +206,10 @@ func (ref dockerReference) tagOrDigest() (string, error) {
if ref, ok := ref.ref.(reference.NamedTagged); ok {
return ref.Tag(), nil
}
if ref.isUnknownDigest {
return "", fmt.Errorf("Docker reference %q is for an unknown digest case, has neither a digest nor a tag", reference.FamiliarString(ref.ref))
}
// This should not happen, NewReference above refuses reference.IsNameOnly values.
return "", fmt.Errorf("Internal inconsistency: Reference %s unexpectedly has neither a digest nor a tag", reference.FamiliarString(ref.ref))
}

View File

@@ -47,7 +47,12 @@ func httpResponseToError(res *http.Response, context string) error {
}
// registryHTTPResponseToError creates a Go error from an HTTP error response of a docker/distribution
// registry
// registry.
//
// WARNING: The OCI distribution spec says
// “A `4XX` response code from the registry MAY return a body in any format.”; but if it is
// JSON, it MUST use the errcode.Error structure.
// So, callers should primarily decide based on HTTP StatusCode, not based on error type here.
func registryHTTPResponseToError(res *http.Response) error {
err := handleErrorResponse(res)
// len(errs) == 0 should never be returned by handleErrorResponse; if it does, we don't modify it and let the caller report it as is.
@@ -83,7 +88,7 @@ func registryHTTPResponseToError(res *http.Response) error {
response = response[:50] + "..."
}
// %.0w makes e visible to error.Unwrap() without including any text
err = fmt.Errorf("StatusCode: %d, %s%.0w", e.StatusCode, response, e)
err = fmt.Errorf("StatusCode: %d, %q%.0w", e.StatusCode, response, e)
case errcode.Error:
// e.Error() is fmt.Sprintf("%s: %s", e.Code.Error(), e.Message), which is usually
// rather redundant. So reword it without using e.Code.Error() if e.Message is the default.

View File

@@ -111,11 +111,19 @@ func (d *Destination) PutBlobWithOptions(ctx context.Context, stream io.Reader,
return private.UploadedBlob{}, fmt.Errorf("reading Config file stream: %w", err)
}
d.config = buf
if err := d.archive.sendFileLocked(d.archive.configPath(inputInfo.Digest), inputInfo.Size, bytes.NewReader(buf)); err != nil {
configPath, err := d.archive.configPath(inputInfo.Digest)
if err != nil {
return private.UploadedBlob{}, err
}
if err := d.archive.sendFileLocked(configPath, inputInfo.Size, bytes.NewReader(buf)); err != nil {
return private.UploadedBlob{}, fmt.Errorf("writing Config file: %w", err)
}
} else {
if err := d.archive.sendFileLocked(d.archive.physicalLayerPath(inputInfo.Digest), inputInfo.Size, stream); err != nil {
layerPath, err := d.archive.physicalLayerPath(inputInfo.Digest)
if err != nil {
return private.UploadedBlob{}, err
}
if err := d.archive.sendFileLocked(layerPath, inputInfo.Size, stream); err != nil {
return private.UploadedBlob{}, err
}
}
@@ -129,6 +137,9 @@ func (d *Destination) PutBlobWithOptions(ctx context.Context, stream io.Reader,
// If the blob has been successfully reused, returns (true, info, nil).
// If the transport can not reuse the requested blob, TryReusingBlob returns (false, {}, nil); it returns a non-nil error only on an unexpected failure.
func (d *Destination) TryReusingBlobWithOptions(ctx context.Context, info types.BlobInfo, options private.TryReusingBlobOptions) (bool, private.ReusedBlob, error) {
if !impl.OriginalBlobMatchesRequiredCompression(options) {
return false, private.ReusedBlob{}, nil
}
if err := d.archive.lock(); err != nil {
return false, private.ReusedBlob{}, err
}

View File

@@ -57,7 +57,7 @@ func NewReaderFromFile(sys *types.SystemContext, path string) (*Reader, error) {
// The caller should call .Close() on the returned archive when done.
func NewReaderFromStream(sys *types.SystemContext, inputStream io.Reader) (*Reader, error) {
// Save inputStream to a temporary file
tarCopyFile, err := os.CreateTemp(tmpdir.TemporaryDirectoryForBigFiles(sys), "docker-tar")
tarCopyFile, err := tmpdir.CreateBigFileTemp(sys, "docker-tar")
if err != nil {
return nil, fmt.Errorf("creating temporary file: %w", err)
}

View File

@@ -95,7 +95,10 @@ func (w *Writer) ensureSingleLegacyLayerLocked(layerID string, layerDigest diges
if !w.legacyLayers.Contains(layerID) {
// Create a symlink for the legacy format, where there is one subdirectory per layer ("image").
// See also the comment in physicalLayerPath.
physicalLayerPath := w.physicalLayerPath(layerDigest)
physicalLayerPath, err := w.physicalLayerPath(layerDigest)
if err != nil {
return err
}
if err := w.sendSymlinkLocked(filepath.Join(layerID, legacyLayerFileName), filepath.Join("..", physicalLayerPath)); err != nil {
return fmt.Errorf("creating layer symbolic link: %w", err)
}
@@ -139,6 +142,9 @@ func (w *Writer) writeLegacyMetadataLocked(layerDescriptors []manifest.Schema2De
}
// This chainID value matches the computation in docker/docker/layer.CreateChainID …
if err := l.Digest.Validate(); err != nil { // This should never fail on this code path, still: make sure the chainID computation is unambiguous.
return err
}
if chainID == "" {
chainID = l.Digest
} else {
@@ -204,12 +210,20 @@ func checkManifestItemsMatch(a, b *ManifestItem) error {
func (w *Writer) ensureManifestItemLocked(layerDescriptors []manifest.Schema2Descriptor, configDigest digest.Digest, repoTags []reference.NamedTagged) error {
layerPaths := []string{}
for _, l := range layerDescriptors {
layerPaths = append(layerPaths, w.physicalLayerPath(l.Digest))
p, err := w.physicalLayerPath(l.Digest)
if err != nil {
return err
}
layerPaths = append(layerPaths, p)
}
var item *ManifestItem
configPath, err := w.configPath(configDigest)
if err != nil {
return err
}
newItem := ManifestItem{
Config: w.configPath(configDigest),
Config: configPath,
RepoTags: []string{},
Layers: layerPaths,
Parent: "", // We dont have this information
@@ -294,21 +308,27 @@ func (w *Writer) Close() error {
// configPath returns a path we choose for storing a config with the specified digest.
// NOTE: This is an internal implementation detail, not a format property, and can change
// any time.
func (w *Writer) configPath(configDigest digest.Digest) string {
return configDigest.Hex() + ".json"
func (w *Writer) configPath(configDigest digest.Digest) (string, error) {
if err := configDigest.Validate(); err != nil { // digest.Digest.Hex() panics on failure, and could possibly result in unexpected paths, so validate explicitly.
return "", err
}
return configDigest.Hex() + ".json", nil
}
// physicalLayerPath returns a path we choose for storing a layer with the specified digest
// (the actual path, i.e. a regular file, not a symlink that may be used in the legacy format).
// NOTE: This is an internal implementation detail, not a format property, and can change
// any time.
func (w *Writer) physicalLayerPath(layerDigest digest.Digest) string {
func (w *Writer) physicalLayerPath(layerDigest digest.Digest) (string, error) {
if err := layerDigest.Validate(); err != nil { // digest.Digest.Hex() panics on failure, and could possibly result in unexpected paths, so validate explicitly.
return "", err
}
// Note that this can't be e.g. filepath.Join(l.Digest.Hex(), legacyLayerFileName); due to the way
// writeLegacyMetadata constructs layer IDs differently from inputinfo.Digest values (as described
// inside it), most of the layers would end up in subdirectories alone without any metadata; (docker load)
// tries to load every subdirectory as an image and fails if the config is missing. So, keep the layers
// in the root of the tarball.
return layerDigest.Hex() + ".tar"
return layerDigest.Hex() + ".tar", nil
}
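A hedged illustration of the resulting in-archive names; the digest value is made up and the snippet recomputes the names directly rather than calling the unexported methods:

```go
package main

import (
	"fmt"

	"github.com/opencontainers/go-digest"
)

func main() {
	d := digest.Digest("sha256:9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08")
	if err := d.Validate(); err != nil { // same guard the writer now applies before using Hex()
		panic(err)
	}
	fmt.Println(d.Hex() + ".json") // where a config blob with this digest is stored in the archive
	fmt.Println(d.Hex() + ".tar")  // where a layer blob with this digest is stored in the archive
}
```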
type tarFI struct {

Some files were not shown because too many files have changed in this diff.