Compare commits

...

40 Commits

Author SHA1 Message Date
Miloslav Trmač
f45ae950aa Release 1.7.0
skopeo list-tags docker-archive:... is now available.

- Improve a comment in the 010-inspect.bats test
- do not recommend upgrading all packages
- Bump github.com/containers/image/v5 from 5.19.1 to 5.20.0
- Update github.com/containerd/containerd
- Bump github.com/docker/docker
- Bump github.com/spf13/cobra from 1.3.0 to 1.4.0
- Add support for docker-archive: to skopeo list-tags
- Rename "self" receiver
- Remove assignments to an unused variable
- Add various missing error handling
- Simplify the proxy server a bit
- Bump github.com/stretchr/testify from 1.7.0 to 1.7.1
- Use assert.ErrorContains
- Update to Go 1.14 and revendor
- Use check.C.MkDir() instead of manual ioutil.TempDir() calls
- Formally record that we require Go 1.15
- Update the command to install golint
- Bump github.com/containers/ocicrypt from 1.1.2 to 1.1.3
- Bump github.com/docker/docker
- Bump github.com/containers/storage from 1.38.2 to 1.39.0
- Bump github.com/containers/common from 0.47.4 to 0.47.5
- Bump github.com/prometheus/client_golang to v1.11.1

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2022-03-24 20:32:24 +01:00
Lokesh Mandvekar
3bc062423e Bump github.com/prometheus/client_golang to v1.11.1
Resolves: CVE-2022-21698

Skopeo isn't actually impacted by the CVE unless a Prometheus listener
is set up, which is not a part of Skopeo's default behavior.

Signed-off-by: Lokesh Mandvekar <lsm5@fedoraproject.org>
2022-03-24 14:57:52 -04:00
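
For context, a "Prometheus listener" here means an HTTP endpoint serving client_golang metrics, roughly as in the hypothetical sketch below (not code that skopeo ships); since skopeo never starts such an endpoint, the vulnerable path is not reachable through normal CLI use.

```go
package main

import (
	"log"
	"net/http"

	"github.com/prometheus/client_golang/prometheus/promhttp"
)

func main() {
	// Hypothetical metrics listener: the kind of setup the commit message refers to.
	// Skopeo does not expose anything like this by default, which is why the bump
	// is precautionary rather than a fix for a reachable code path.
	http.Handle("/metrics", promhttp.Handler())
	log.Fatal(http.ListenAndServe(":9090", nil))
}
```
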
Miloslav Trmač
d0d7d97f9c Merge pull request #1604 from containers/dependabot/go_modules/github.com/containers/common-0.47.5
Bump github.com/containers/common from 0.47.4 to 0.47.5
2022-03-24 19:32:55 +01:00
dependabot[bot]
89cd19519f Bump github.com/containers/common from 0.47.4 to 0.47.5
Bumps [github.com/containers/common](https://github.com/containers/common) from 0.47.4 to 0.47.5.
- [Release notes](https://github.com/containers/common/releases)
- [Commits](https://github.com/containers/common/compare/v0.47.4...v0.47.5)

---
updated-dependencies:
- dependency-name: github.com/containers/common
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2022-03-24 17:41:02 +00:00
Miloslav Trmač
3e973e1aa2 Merge pull request #1603 from containers/dependabot/go_modules/github.com/containers/storage-1.39.0
Bump github.com/containers/storage from 1.38.2 to 1.39.0
2022-03-24 18:39:51 +01:00
dependabot[bot]
7f6b0e39d0 Bump github.com/containers/storage from 1.38.2 to 1.39.0
Bumps [github.com/containers/storage](https://github.com/containers/storage) from 1.38.2 to 1.39.0.
- [Release notes](https://github.com/containers/storage/releases)
- [Changelog](https://github.com/containers/storage/blob/main/docs/containers-storage-changes.md)
- [Commits](https://github.com/containers/storage/compare/v1.38.2...v1.39.0)

---
updated-dependencies:
- dependency-name: github.com/containers/storage
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2022-03-24 17:14:41 +00:00
Miloslav Trmač
cc2445de81 Merge pull request #1602 from containers/dependabot/go_modules/github.com/docker/docker-20.10.14incompatible
Bump github.com/docker/docker from 20.10.13+incompatible to 20.10.14+incompatible
2022-03-24 18:13:38 +01:00
dependabot[bot]
f6bf57460d Bump github.com/docker/docker
Bumps [github.com/docker/docker](https://github.com/docker/docker) from 20.10.13+incompatible to 20.10.14+incompatible.
- [Release notes](https://github.com/docker/docker/releases)
- [Changelog](https://github.com/moby/moby/blob/master/CHANGELOG.md)
- [Commits](https://github.com/docker/docker/compare/v20.10.13...v20.10.14)

---
updated-dependencies:
- dependency-name: github.com/docker/docker
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2022-03-24 09:13:14 +00:00
Miloslav Trmač
91cd3510eb Merge pull request #1600 from containers/dependabot/go_modules/github.com/containers/ocicrypt-1.1.3
Bump github.com/containers/ocicrypt from 1.1.2 to 1.1.3
2022-03-21 15:45:57 +01:00
dependabot[bot]
ac7edc7d10 Bump github.com/containers/ocicrypt from 1.1.2 to 1.1.3
Bumps [github.com/containers/ocicrypt](https://github.com/containers/ocicrypt) from 1.1.2 to 1.1.3.
- [Release notes](https://github.com/containers/ocicrypt/releases)
- [Commits](https://github.com/containers/ocicrypt/compare/v1.1.2...v1.1.3)

---
updated-dependencies:
- dependency-name: github.com/containers/ocicrypt
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2022-03-21 09:10:52 +00:00
Valentin Rothberg
92b1eec64c Merge pull request #1593 from mtrmac/go-1.15
Formally require Go 1.15
2022-03-17 08:55:27 +01:00
Miloslav Trmač
c819bc1754 Update the command to install golint
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2022-03-16 16:05:08 +01:00
Miloslav Trmač
6a2f38d66c Formally record that we require Go 1.15
We already do in practice:
> vendor/golang.org/x/net/http2/transport.go:417:45: undefined: os.ErrDeadlineExceeded

so make that official.

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2022-03-16 16:05:08 +01:00
Miloslav Trmač
2019b79c7f Use check.C.MkDir() instead of manual ioutil.TempDir() calls
This saves us at least 2 lines (error check, and cleanup) on every
instance, or in some cases adds cleanup that we forgot.

This is inspired by, but not directly related to, Go 1.15's addition of
Testing.T.TempDir.

NOTE: This might significantly increase the tests' disk space requirements;
AFAICS the temporary directories are only cleaned up when a whole "suite"
finishes running.

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2022-03-16 16:05:08 +01:00
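
As a rough illustration of the change (a sketch with a hypothetical suite type, not the actual tests): gocheck's check.C.MkDir() replaces the manual ioutil.TempDir() call, its error check, and the deferred cleanup.

```go
package tmpdir_test

import (
	"io/ioutil"
	"os"
	"testing"

	"gopkg.in/check.v1"
)

func Test(t *testing.T) { check.TestingT(t) }

type suite struct{}

var _ = check.Suite(&suite{})

// Old style (what the commit removes): create, check the error, remember to clean up.
func (s *suite) TestOldStyle(c *check.C) {
	dir, err := ioutil.TempDir("", "skopeo-test")
	c.Assert(err, check.IsNil)
	defer os.RemoveAll(dir)
	_ = dir // ... use dir ...
}

// New style: check.C.MkDir() creates the directory and the suite removes it once
// it has finished running -- hence the disk-space note in the commit message.
func (s *suite) TestNewStyle(c *check.C) {
	_ = c.MkDir() // ... use the returned path ...
}
```
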
Miloslav Trmač
f79cc8aeda Update to Go 1.14 and revendor
> go mod tidy -go=1.14
> make vendor

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2022-03-16 16:05:08 +01:00
Miloslav Trmač
0bfe297fc1 Merge pull request #1595 from mtrmac/ErrorContains
Use assert.ErrorContains
2022-03-16 16:04:38 +01:00
Miloslav Trmač
ac4c291f76 Use assert.ErrorContains
...added in github.com/stretchr/testify 1.7.1.

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2022-03-16 15:13:31 +01:00
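
A hedged sketch of the simplification (hypothetical test, assuming the just-vendored testify >= 1.7.1); the assertTestFailed diff further down makes the same substitution.

```go
package errassert_test

import (
	"errors"
	"testing"

	"github.com/stretchr/testify/assert"
)

func TestErrorContainsSketch(t *testing.T) {
	err := errors.New("Invalid source name: does not specify a transport")

	// Before: two assertions, and err.Error() would panic on a nil error.
	assert.Error(t, err)
	assert.Contains(t, err.Error(), "does not specify a transport")

	// After (testify >= 1.7.1): one call that also tolerates a nil error.
	assert.ErrorContains(t, err, "does not specify a transport")
}
```
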
Miloslav Trmač
d2837c9e56 Merge pull request #1594 from containers/dependabot/go_modules/github.com/stretchr/testify-1.7.1
Bump github.com/stretchr/testify from 1.7.0 to 1.7.1
2022-03-16 15:09:29 +01:00
dependabot[bot]
5aaf3a9e4c Bump github.com/stretchr/testify from 1.7.0 to 1.7.1
Bumps [github.com/stretchr/testify](https://github.com/stretchr/testify) from 1.7.0 to 1.7.1.
- [Release notes](https://github.com/stretchr/testify/releases)
- [Commits](https://github.com/stretchr/testify/compare/v1.7.0...v1.7.1)

---
updated-dependencies:
- dependency-name: github.com/stretchr/testify
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2022-03-16 09:19:41 +00:00
Valentin Rothberg
0c4a9cc684 Merge pull request #1592 from mtrmac/lint-1.18
Various lint fixes
2022-03-16 09:08:29 +01:00
Miloslav Trmač
bd524670b1 Simplify the proxy server a bit
Move JSON parsing into the request processing handler
so that we can consolidate the two instances of the response sending code.

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2022-03-15 21:48:51 +01:00
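
In outline (stub types, not the real proxy code), the consolidation looks like this: json.Unmarshal moves into processRequest, so a malformed message and a failed dispatch both travel back through the single reply path in the accept loop, as the proxy diff further down shows.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Stub types standing in for skopeo's proxy types; sketch only.
type request struct{ Method string }
type replyBuf struct{}
type proxyHandler struct{}

// processRequest now owns the JSON parsing, so the caller only has to invoke it
// and send a single reply, whatever went wrong.
func (h *proxyHandler) processRequest(readBytes []byte) (rb replyBuf, terminate bool, err error) {
	var req request
	if err = json.Unmarshal(readBytes, &req); err != nil {
		err = fmt.Errorf("invalid request: %v", err)
		return
	}
	switch req.Method {
	case "Initialize":
		// ... handle the call and fill rb ...
	default:
		err = fmt.Errorf("unknown method %q", req.Method)
	}
	return
}

func main() {
	h := &proxyHandler{}
	_, _, err := h.processRequest([]byte(`{"Method":"Initialize"}`))
	fmt.Println(err)
}
```
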
Miloslav Trmač
693de29e37 Add various missing error handling
... as found by (golangci-lint run).

Note: this does not add (golangci-lint run) to the Makefile
to ensure the coding standard.

(BTW golangci-lint currently fails on structcheck, which doesn't
handle embedded structs, and that's a years-long known unfixed
limitation.)

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2022-03-15 21:48:51 +01:00
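
A standalone, hedged sketch of the main pattern these fixes introduce (the copy and sync diffs below apply it to policyContext.Destroy()): give the function a named error return so a failure from deferred cleanup is reported rather than silently dropped.

```go
package main

import (
	"fmt"
	"os"
)

// writeReport is illustrative only; the real commits apply the same shape to
// policyContext.Destroy() in the copy and sync commands.
func writeReport(path string) (retErr error) {
	f, err := os.Create(path)
	if err != nil {
		return fmt.Errorf("creating %s: %w", path, err)
	}
	defer func() {
		// Without the named return, this error would have nowhere to go.
		if closeErr := f.Close(); closeErr != nil && retErr == nil {
			retErr = fmt.Errorf("closing %s: %w", path, closeErr)
		}
	}()

	_, err = fmt.Fprintln(f, "report complete")
	return err
}

func main() {
	if err := writeReport("/tmp/skopeo-sketch.txt"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```
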
Miloslav Trmač
f44ee2f80a Remove assignments to an unused variable
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2022-03-15 21:48:51 +01:00
Miloslav Trmač
a71900996f Rename "self" receiver
> receiver name should be a reflection of its identity; don't use generic names such as "this" or "self" (ST1006)

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2022-03-15 21:48:51 +01:00
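
For reference, the ST1006 fix in miniature (hypothetical type, for illustration only): the receiver gets a short name derived from its type instead of "self" or "this".

```go
package main

import "fmt"

type tagLister struct{ repo string } // hypothetical type, not from skopeo

// Before (flagged by ST1006):
//   func (self *tagLister) name() string { return self.repo }

// After: a short receiver name derived from the type.
func (l *tagLister) name() string { return l.repo }

func main() {
	fmt.Println((&tagLister{repo: "docker.io/library/busybox"}).name())
}
```
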
Valentin Rothberg
a26578178b Merge pull request #1581 from zhangguanzhang/list-tags
Add support for docker-archive: to skopeo list-tags
2022-03-15 10:11:28 +01:00
zhangguanzhang
7ba56f3f7a Add support for docker-archive: to skopeo list-tags
Signed-off-by: zhangguanzhang <zhangguanzhang@qq.com>
2022-03-15 09:32:05 +08:00
Daniel J Walsh
0f701726bd Merge pull request #1589 from containers/dependabot/go_modules/github.com/docker/docker-20.10.13incompatible
Bump github.com/docker/docker from 20.10.12+incompatible to 20.10.13+incompatible
2022-03-11 05:01:09 -05:00
Daniel J Walsh
91ad8c39c6 Merge pull request #1590 from containers/dependabot/go_modules/github.com/spf13/cobra-1.4.0
Bump github.com/spf13/cobra from 1.3.0 to 1.4.0
2022-03-11 05:00:41 -05:00
dependabot[bot]
ad3e8f407d Bump github.com/spf13/cobra from 1.3.0 to 1.4.0
Bumps [github.com/spf13/cobra](https://github.com/spf13/cobra) from 1.3.0 to 1.4.0.
- [Release notes](https://github.com/spf13/cobra/releases)
- [Changelog](https://github.com/spf13/cobra/blob/master/CHANGELOG.md)
- [Commits](https://github.com/spf13/cobra/compare/v1.3.0...v1.4.0)

---
updated-dependencies:
- dependency-name: github.com/spf13/cobra
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2022-03-11 09:11:45 +00:00
dependabot[bot]
0703ec6ce8 Bump github.com/docker/docker
Bumps [github.com/docker/docker](https://github.com/docker/docker) from 20.10.12+incompatible to 20.10.13+incompatible.
- [Release notes](https://github.com/docker/docker/releases)
- [Changelog](https://github.com/moby/moby/blob/master/CHANGELOG.md)
- [Commits](https://github.com/docker/docker/compare/v20.10.12...v20.10.13)

---
updated-dependencies:
- dependency-name: github.com/docker/docker
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2022-03-11 09:11:37 +00:00
Valentin Rothberg
3e2defd6d3 Merge pull request #1585 from mtrmac/update-containerd
Update github.com/containerd/containerd
2022-03-07 09:47:20 +01:00
Miloslav Trmač
5200272846 Update github.com/containerd/containerd
$ go get -u github.com/containerd/containerd
$ make vendor

... to silence warnings about https://github.com/advisories/GHSA-crp2-qrr5-8pq7 ,
in code we don't use.

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2022-03-04 19:26:12 +01:00
Miloslav Trmač
43eab90b36 Merge pull request #1582 from containers/dependabot/go_modules/github.com/containers/image/v5-5.20.0
Bump github.com/containers/image/v5 from 5.19.1 to 5.20.0
2022-03-04 19:15:24 +01:00
dependabot[bot]
0ad25b2d33 Bump github.com/containers/image/v5 from 5.19.1 to 5.20.0
Bumps [github.com/containers/image/v5](https://github.com/containers/image) from 5.19.1 to 5.20.0.
- [Release notes](https://github.com/containers/image/releases)
- [Commits](https://github.com/containers/image/compare/v5.19.1...v5.20.0)

---
updated-dependencies:
- dependency-name: github.com/containers/image/v5
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2022-03-02 09:21:25 +00:00
Daniel J Walsh
22d187181b Merge pull request #1578 from slhck/patch-1
do not recommend upgrading all packages
2022-02-28 07:45:59 -05:00
Werner Robitza
8cbfcc820a do not recommend upgrading all packages
The command to install skopeo for Ubuntu 20.04 includes a forced upgrade step for all packages.

Installing skopeo does not require the upgrade step, and a blanket upgrade could cause issues completely unrelated to the project.

Signed-off-by: Werner Robitza <werner.robitza@gmail.com>
2022-02-25 11:46:17 +01:00
Miloslav Trmač
8539d21152 Merge pull request #1576 from mtrmac/inspect-test-docs
Improve a comment in the 010-inspect.bats test
2022-02-23 22:48:17 +01:00
Miloslav Trmač
370be7e777 Improve a comment in the 010-inspect.bats test
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2022-02-22 19:44:14 +01:00
Daniel J Walsh
95e17ed1e0 Merge pull request #1573 from rhatdan/main
Bump to v1.6.1
2022-02-16 12:04:39 -05:00
Daniel J Walsh
73edfb8216 Move to v1.7.0-dev
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
2022-02-16 12:03:15 -05:00
264 changed files with 9141 additions and 4950 deletions

View File

@@ -90,7 +90,7 @@ osx_task:
export PATH=$GOPATH/bin:$PATH
brew update
brew install gpgme go go-md2man
go get -u golang.org/x/lint/golint
go install golang.org/x/lint/golint@latest
test_script: |
export PATH=$GOPATH/bin:$PATH
go version

View File

@@ -110,7 +110,7 @@ func parseMultiArch(multiArch string) (copy.ImageListSelection, error) {
}
}
func (opts *copyOptions) run(args []string, stdout io.Writer) error {
func (opts *copyOptions) run(args []string, stdout io.Writer) (retErr error) {
if len(args) != 2 {
return errorShouldDisplayUsage{errors.New("Exactly two arguments expected")}
}
@@ -125,7 +125,11 @@ func (opts *copyOptions) run(args []string, stdout io.Writer) error {
if err != nil {
return fmt.Errorf("Error loading trust policy: %v", err)
}
defer policyContext.Destroy()
defer func() {
if err := policyContext.Destroy(); err != nil {
retErr = fmt.Errorf("(error tearing down policy context: %v): %w", err, retErr)
}
}()
srcRef, err := alltransports.ParseImageName(imageNames[0])
if err != nil {

View File

@@ -5,10 +5,12 @@ import (
"encoding/json"
"fmt"
"io"
"sort"
"strings"
"github.com/containers/common/pkg/retry"
"github.com/containers/image/v5/docker"
"github.com/containers/image/v5/docker/archive"
"github.com/containers/image/v5/docker/reference"
"github.com/containers/image/v5/transports/alltransports"
"github.com/containers/image/v5/types"
@@ -18,7 +20,7 @@ import (
// tagListOutput is the output format of (skopeo list-tags), primarily so that we can format it with a simple json.MarshalIndent.
type tagListOutput struct {
Repository string
Repository string `json:",omitempty"`
Tags []string
}
@@ -28,6 +30,21 @@ type tagsOptions struct {
retryOpts *retry.RetryOptions
}
var transportHandlers = map[string]func(ctx context.Context, sys *types.SystemContext, opts *tagsOptions, userInput string) (repositoryName string, tagListing []string, err error){
docker.Transport.Name(): listDockerRepoTags,
archive.Transport.Name(): listDockerArchiveTags,
}
// supportedTransports returns all the supported transports
func supportedTransports(joinStr string) string {
res := make([]string, 0, len(transportHandlers))
for handlerName := range transportHandlers {
res = append(res, handlerName)
}
sort.Strings(res)
return strings.Join(res, joinStr)
}
func tagsCmd(global *globalOptions) *cobra.Command {
sharedFlags, sharedOpts := sharedImageFlags()
imageFlags, imageOpts := dockerImageFlags(global, sharedOpts, nil, "", "")
@@ -38,13 +55,14 @@ func tagsCmd(global *globalOptions) *cobra.Command {
image: imageOpts,
retryOpts: retryOpts,
}
cmd := &cobra.Command{
Use: "list-tags [command options] REPOSITORY-NAME",
Short: "List tags in the transport/repository specified by the REPOSITORY-NAME",
Long: `Return the list of tags from the transport/repository "REPOSITORY-NAME"
Use: "list-tags [command options] SOURCE-IMAGE",
Short: "List tags in the transport/repository specified by the SOURCE-IMAGE",
Long: `Return the list of tags from the transport/repository "SOURCE-IMAGE"
Supported transports:
docker
` + supportedTransports(" ") + `
See skopeo-list-tags(1) section "REPOSITORY NAMES" for the expected format
`,
@@ -95,6 +113,58 @@ func listDockerTags(ctx context.Context, sys *types.SystemContext, imgRef types.
return repositoryName, tags, nil
}
// return the tagLists from a docker repo
func listDockerRepoTags(ctx context.Context, sys *types.SystemContext, opts *tagsOptions, userInput string) (repositoryName string, tagListing []string, err error) {
// Do transport-specific parsing and validation to get an image reference
imgRef, err := parseDockerRepositoryReference(userInput)
if err != nil {
return
}
if err = retry.RetryIfNecessary(ctx, func() error {
repositoryName, tagListing, err = listDockerTags(ctx, sys, imgRef)
return err
}, opts.retryOpts); err != nil {
return
}
return
}
// return the tagLists from a docker archive file
func listDockerArchiveTags(ctx context.Context, sys *types.SystemContext, opts *tagsOptions, userInput string) (repositoryName string, tagListing []string, err error) {
ref, err := alltransports.ParseImageName(userInput)
if err != nil {
return
}
tarReader, _, err := archive.NewReaderForReference(sys, ref)
if err != nil {
return
}
defer tarReader.Close()
imageRefs, err := tarReader.List()
if err != nil {
return
}
var repoTags []string
for imageIndex, items := range imageRefs {
for _, ref := range items {
repoTags, err = tarReader.ManifestTagsForReference(ref)
if err != nil {
return
}
// handle for each untagged image
if len(repoTags) == 0 {
repoTags = []string{fmt.Sprintf("@%d", imageIndex)}
}
tagListing = append(tagListing, repoTags...)
}
}
return
}
func (opts *tagsOptions) run(args []string, stdout io.Writer) (retErr error) {
ctx, cancel := opts.global.commandTimeoutContext()
defer cancel()
@@ -113,23 +183,17 @@ func (opts *tagsOptions) run(args []string, stdout io.Writer) (retErr error) {
return fmt.Errorf("Invalid %q: does not specify a transport", args[0])
}
if transport.Name() != docker.Transport.Name() {
return fmt.Errorf("Unsupported transport '%v' for tag listing. Only '%v' currently supported", transport.Name(), docker.Transport.Name())
}
// Do transport-specific parsing and validation to get an image reference
imgRef, err := parseDockerRepositoryReference(args[0])
if err != nil {
return err
}
var repositoryName string
var tagListing []string
if err = retry.RetryIfNecessary(ctx, func() error {
repositoryName, tagListing, err = listDockerTags(ctx, sys, imgRef)
return err
}, opts.retryOpts); err != nil {
return err
if val, ok := transportHandlers[transport.Name()]; ok {
repositoryName, tagListing, err = val(ctx, sys, opts, args[0])
if err != nil {
return err
}
} else {
return fmt.Errorf("Unsupported transport '%s' for tag listing. Only supported: %s",
transport.Name(), supportedTransports(", "))
}
outputData := tagListOutput{

View File

@@ -98,7 +98,7 @@ const maxMsgSize = 32 * 1024
// https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Number/MAX_SAFE_INTEGER
// We hard error if the input JSON numbers we expect to be
// integers are above this.
const maxJSONFloat = float64(1<<53 - 1)
const maxJSONFloat = float64(uint64(1)<<53 - 1)
// request is the JSON serialization of a function call
type request struct {
@@ -664,7 +664,14 @@ func proxyCmd(global *globalOptions) *cobra.Command {
// processRequest dispatches a remote request.
// replyBuf is the result of the invocation.
// terminate should be true if processing of requests should halt.
func (h *proxyHandler) processRequest(req request) (rb replyBuf, terminate bool, err error) {
func (h *proxyHandler) processRequest(readBytes []byte) (rb replyBuf, terminate bool, err error) {
var req request
// Parse the request JSON
if err = json.Unmarshal(readBytes, &req); err != nil {
err = fmt.Errorf("invalid request: %v", err)
return
}
// Dispatch on the method
switch req.Method {
case "Initialize":
@@ -717,18 +724,15 @@ func (opts *proxyOptions) run(args []string, stdout io.Writer) error {
}
return fmt.Errorf("reading socket: %v", err)
}
// Parse the request JSON
readbuf := buf[0:n]
var req request
if err := json.Unmarshal(readbuf, &req); err != nil {
rb := replyBuf{}
rb.send(conn, fmt.Errorf("invalid request: %v", err))
}
rb, terminate, err := handler.processRequest(req)
rb, terminate, err := handler.processRequest(readbuf)
if terminate {
return nil
}
rb.send(conn, err)
if err := rb.send(conn, err); err != nil {
return fmt.Errorf("writing to socket: %w", err)
}
}
}

View File

@@ -25,9 +25,8 @@ const (
// Test that results of runSkopeo failed with nothing on stdout, and substring
// within the error message.
func assertTestFailed(t *testing.T, stdout string, err error, substring string) {
assert.Error(t, err)
assert.ErrorContains(t, err, substring)
assert.Empty(t, stdout)
assert.Contains(t, err.Error(), substring)
}
func TestStandaloneSign(t *testing.T) {

View File

@@ -500,7 +500,7 @@ func imagesToCopy(source string, transport string, sourceCtx *types.SystemContex
return descriptors, nil
}
func (opts *syncOptions) run(args []string, stdout io.Writer) error {
func (opts *syncOptions) run(args []string, stdout io.Writer) (retErr error) {
if len(args) != 2 {
return errorShouldDisplayUsage{errors.New("Exactly two arguments expected")}
}
@@ -510,7 +510,11 @@ func (opts *syncOptions) run(args []string, stdout io.Writer) error {
if err != nil {
return errors.Wrapf(err, "Error loading trust policy")
}
defer policyContext.Destroy()
defer func() {
if err := policyContext.Destroy(); err != nil {
retErr = fmt.Errorf("(error tearing down policy context: %v): %w", err, retErr)
}
}()
// validate source and destination options
contains := func(val string, list []string) (_ bool) {

View File

@@ -34,7 +34,7 @@ func commandAction(handler func(args []string, stdout io.Writer) error) func(cmd
return func(c *cobra.Command, args []string) error {
err := handler(args, c.OutOrStdout())
if _, ok := err.(errorShouldDisplayUsage); ok {
c.Help()
return c.Help()
}
return err
}

View File

@@ -1,14 +1,14 @@
% skopeo-list-tags(1)
## NAME
skopeo\-list\-tags - List tags in the transport-specific image repository.
skopeo\-list\-tags - List image names in a transport-specific collection of images.
## SYNOPSIS
**skopeo list-tags** [*options*] _repository-name_
**skopeo list-tags** [*options*] _source-image_
Return a list of tags from _repository-name_ in a registry.
Return a list of tags from _source-image_ in a registry or a local docker-archive file.
_repository-name_ name of repository to retrieve tag listing from
_source-image_ name of the repository to retrieve a tag listing from or a local docker-archive file.
## OPTIONS
@@ -53,7 +53,7 @@ The password to access the registry.
## REPOSITORY NAMES
Repository names are transport-specific references as each transport may have its own concept of a "repository" and "tags". Currently, only the Docker transport is supported.
Repository names are transport-specific references as each transport may have its own concept of a "repository" and "tags".
This commands refers to repositories using a _transport_`:`_details_ format. The following formats are supported:
@@ -72,6 +72,8 @@ This commands refers to repositories using a _transport_`:`_details_ format. The
"docker.io/myuser/myimage:v1.0"
"docker.io/myuser/myimage@sha256:f48c4cc192f4c3c6a069cb5cca6d0a9e34d6076ba7c214fd0cc3ca60e0af76bb"
**docker-archive:path[:docker-reference]
more than one image may be stored in a docker save-formatted file.
## EXAMPLES
@@ -121,8 +123,48 @@ $ skopeo list-tags docker://localhost:5000/fedora
```
### Docker-archive Transport
To list the tags in a local docker-archive file:
```sh
$ skopeo list-tags docker-archive:/tmp/busybox.tar.gz
{
"Tags": [
"busybox:1.28.3"
]
}
```
Also supports more than one tag in an archive:
```sh
$ skopeo list-tags docker-archive:/tmp/docker-two-images.tar.gz
{
"Tags": [
"example.com/empty:latest",
"example.com/empty/but:different"
]
}
```
Will include a source-index entry for each untagged image:
```sh
$ skopeo list-tags docker-archive:/tmp/four-tags-with-an-untag.tar
{
"Tags": [
"image1:tag1",
"image2:tag2",
"@2",
"image4:tag4"
]
}
```
# SEE ALSO
skopeo(1), skopeo-login(1), docker-login(1), containers-auth.json(5)
skopeo(1), skopeo-login(1), docker-login(1), containers-auth.json(5), containers-transports(1)
## AUTHORS

View File

@@ -102,7 +102,7 @@ Print the version number
| [skopeo-copy(1)](skopeo-copy.1.md) | Copy an image (manifest, filesystem layers, signatures) from one location to another. |
| [skopeo-delete(1)](skopeo-delete.1.md) | Mark the _image-name_ for later deletion by the registry's garbage collector. |
| [skopeo-inspect(1)](skopeo-inspect.1.md) | Return low-level information about _image-name_ in a registry. |
| [skopeo-list-tags(1)](skopeo-list-tags.1.md) | List tags in the transport-specific image repository. |
| [skopeo-list-tags(1)](skopeo-list-tags.1.md) | List image names in a transport-specific collection of images.|
| [skopeo-login(1)](skopeo-login.1.md) | Login to a container registry. |
| [skopeo-logout(1)](skopeo-logout.1.md) | Logout of a container registry. |
| [skopeo-manifest-digest(1)](skopeo-manifest-digest.1.md) | Compute a manifest digest for a manifest-file and write it to standard output. |

21
go.mod
View File

@@ -1,24 +1,29 @@
module github.com/containers/skopeo
go 1.12
go 1.15
require (
github.com/containers/common v0.47.4
github.com/containers/image/v5 v5.19.1
github.com/containers/ocicrypt v1.1.2
github.com/containers/storage v1.38.2
github.com/docker/docker v20.10.12+incompatible
github.com/containerd/containerd v1.6.1 // indirect
github.com/containers/common v0.47.5
github.com/containers/image/v5 v5.20.0
github.com/containers/ocicrypt v1.1.3
github.com/containers/storage v1.39.0
github.com/docker/docker v20.10.14+incompatible
github.com/dsnet/compress v0.0.2-0.20210315054119-f66993602bf5 // indirect
github.com/opencontainers/go-digest v1.0.0
github.com/opencontainers/image-spec v1.0.3-0.20211202193544-a5463b7f9c84
github.com/opencontainers/image-tools v1.0.0-rc3
github.com/pkg/errors v0.9.1
github.com/prometheus/client_golang v1.11.1 // indirect
github.com/russross/blackfriday v2.0.0+incompatible // indirect
github.com/sirupsen/logrus v1.8.1
github.com/spf13/cobra v1.3.0
github.com/spf13/cobra v1.4.0
github.com/spf13/pflag v1.0.5
github.com/stretchr/testify v1.7.0
github.com/stretchr/testify v1.7.1
github.com/syndtr/gocapability v0.0.0-20200815063812-42c35b437635
golang.org/x/net v0.0.0-20220225172249-27dd8689420f // indirect
golang.org/x/sys v0.0.0-20220227234510-4e6760a101f9 // indirect
google.golang.org/genproto v0.0.0-20220304144024-325a89244dc8 // indirect
gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c
gopkg.in/yaml.v2 v2.4.0
)

235
go.sum
View File

@@ -1,4 +1,5 @@
bazil.org/fuse v0.0.0-20160811212531-371fbbdaa898/go.mod h1:Xbm+BRKSBEpa4q4hTSxohYNQpsxXPbPry4JJWOB3LB8=
bazil.org/fuse v0.0.0-20200407214033-5883e5a4b512/go.mod h1:FbcW6z/2VytnFDhZfumh8Ss8zxHE6qpMP5sHTRe0EaM=
cloud.google.com/go v0.26.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw=
cloud.google.com/go v0.34.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw=
cloud.google.com/go v0.38.0/go.mod h1:990N+gfupTy94rShfmMCWGDn0LpTmnzTp2qbd1dvSRU=
@@ -36,6 +37,7 @@ cloud.google.com/go/bigquery v1.7.0/go.mod h1://okPTzCYNXSlb24MZs83e2Do+h+VXtc4g
cloud.google.com/go/bigquery v1.8.0/go.mod h1:J5hqkt3O0uAFnINi6JXValWIb1v0goeZM77hZzJN/fQ=
cloud.google.com/go/datastore v1.0.0/go.mod h1:LXYbyblFSglQ5pkeyhO+Qmw7ukd3C+pD7TKLgZqpHYE=
cloud.google.com/go/datastore v1.1.0/go.mod h1:umbIZjpQpHh4hmRpGhH4tLFup+FVzqBi1b3c64qFpCk=
cloud.google.com/go/firestore v1.1.0/go.mod h1:ulACoGHTpvq5r8rxGJ4ddJZBZqakUQqClKRT5SZwBmk=
cloud.google.com/go/firestore v1.6.1/go.mod h1:asNXNOzBdyVQmEU+ggO8UPodTkEVFW5Qx+rwHnAz+EY=
cloud.google.com/go/pubsub v1.0.1/go.mod h1:R0Gpsv3s54REJCy4fxDixWD93lHJMoZTyQ2kNxGRt3I=
cloud.google.com/go/pubsub v1.1.0/go.mod h1:EwwdRX2sKPjnvnqCa270oGRyludottCI76h+R3AArQw=
@@ -49,19 +51,24 @@ cloud.google.com/go/storage v1.10.0/go.mod h1:FLPqc6j+Ki4BU591ie1oL6qBQGu2Bl/tZ9
dmitri.shuralyov.com/gpu/mtl v0.0.0-20190408044501-666a987793e9/go.mod h1:H6x//7gZCb22OMCxBHrMx7a5I7Hp++hsVxbQ4BYO7hU=
github.com/14rcole/gopopulate v0.0.0-20180821133914-b175b219e774 h1:SCbEWT58NSt7d2mcFdvxC9uyrdcTfvBbPLThhkDmXzg=
github.com/14rcole/gopopulate v0.0.0-20180821133914-b175b219e774/go.mod h1:6/0dYRLLXyJjbkIPeeGyoJ/eKOSI0eU6eTlCBYibgd0=
github.com/AdaLogics/go-fuzz-headers v0.0.0-20210715213245-6c3934b029d8/go.mod h1:CzsSbkDixRphAF5hS6wbMKq0eI6ccJRb7/A0M6JBnwg=
github.com/Azure/azure-sdk-for-go v16.2.1+incompatible/go.mod h1:9XXNKU+eRnpl9moKnB4QOLf1HestfXbmab5FXxiDBjc=
github.com/Azure/go-ansiterm v0.0.0-20170929234023-d6e3b3328b78/go.mod h1:LmzpDX56iTiv29bbRTIsUNlaFfuhWRQBWjQdVyAevI8=
github.com/Azure/go-ansiterm v0.0.0-20210608223527-2377c96fe795/go.mod h1:LmzpDX56iTiv29bbRTIsUNlaFfuhWRQBWjQdVyAevI8=
github.com/Azure/go-ansiterm v0.0.0-20210617225240-d185dfc1b5a1 h1:UQHMgLO+TxOElx5B5HZ4hJQsoJ/PvUvKRhJHDQXO8P8=
github.com/Azure/go-ansiterm v0.0.0-20210617225240-d185dfc1b5a1/go.mod h1:xomTg63KZ2rFqZQzSB4Vz2SUXa1BpHTVz9L5PTmPC4E=
github.com/Azure/go-autorest v10.8.1+incompatible/go.mod h1:r+4oMnoxhatjLLJ6zxSWATqVooLgysK6ZNox3g/xq24=
github.com/Azure/go-autorest v14.2.0+incompatible/go.mod h1:r+4oMnoxhatjLLJ6zxSWATqVooLgysK6ZNox3g/xq24=
github.com/Azure/go-autorest/autorest v0.11.1/go.mod h1:JFgpikqFJ/MleTTxwepExTKnFUKKszPS8UavbQYUMuw=
github.com/Azure/go-autorest/autorest v0.11.18/go.mod h1:dSiJPy22c3u0OtOKDNttNgqpNFY/GeWa7GH/Pz56QRA=
github.com/Azure/go-autorest/autorest/adal v0.9.0/go.mod h1:/c022QCutn2P7uY+/oQWWNcK9YU+MH96NgK+jErpbcg=
github.com/Azure/go-autorest/autorest/adal v0.9.5/go.mod h1:B7KF7jKIeC9Mct5spmyCB/A8CG/sEz1vwIRGv/bbw7A=
github.com/Azure/go-autorest/autorest/adal v0.9.13/go.mod h1:W/MM4U6nLxnIskrw4UwWzlHfGjwUS50aOsc/I3yuU8M=
github.com/Azure/go-autorest/autorest/date v0.3.0/go.mod h1:BI0uouVdmngYNUzGWeSYnokU+TrmwEsOqdt8Y6sso74=
github.com/Azure/go-autorest/autorest/mocks v0.4.0/go.mod h1:LTp+uSrOhSkaKrUy935gNZuuIPPVsHlr9DSOxSayd+k=
github.com/Azure/go-autorest/autorest/mocks v0.4.1/go.mod h1:LTp+uSrOhSkaKrUy935gNZuuIPPVsHlr9DSOxSayd+k=
github.com/Azure/go-autorest/logger v0.2.0/go.mod h1:T9E3cAhj2VqvPOtCYAvby9aBXkZmbF5NWuPV8+WeEW8=
github.com/Azure/go-autorest/logger v0.2.1/go.mod h1:T9E3cAhj2VqvPOtCYAvby9aBXkZmbF5NWuPV8+WeEW8=
github.com/Azure/go-autorest/tracing v0.6.0/go.mod h1:+vhtPC754Xsa23ID7GlGsrdKBpUA79WCAKPPZVC2DeU=
github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU=
github.com/BurntSushi/toml v1.0.0 h1:dtDWrepsVPfW9H/4y7dDgFc2MBUSeJhlaDtK13CxFlU=
@@ -77,8 +84,9 @@ github.com/Microsoft/go-winio v0.4.17-0.20210211115548-6eac466e5fa3/go.mod h1:JP
github.com/Microsoft/go-winio v0.4.17-0.20210324224401-5516f17a5958/go.mod h1:JPGBdM1cNvN/6ISo+n8V5iA4v8pBzdOpzfwIujj1a84=
github.com/Microsoft/go-winio v0.4.17/go.mod h1:JPGBdM1cNvN/6ISo+n8V5iA4v8pBzdOpzfwIujj1a84=
github.com/Microsoft/go-winio v0.5.0/go.mod h1:JPGBdM1cNvN/6ISo+n8V5iA4v8pBzdOpzfwIujj1a84=
github.com/Microsoft/go-winio v0.5.1 h1:aPJp2QD7OOrhO5tQXqQoGSJc+DjDtWTGLOmNyAm6FgY=
github.com/Microsoft/go-winio v0.5.1/go.mod h1:JPGBdM1cNvN/6ISo+n8V5iA4v8pBzdOpzfwIujj1a84=
github.com/Microsoft/go-winio v0.5.2 h1:a9IhgEQBCUEk6QCdml9CiJGhAws+YwffDHEMp1VMrpA=
github.com/Microsoft/go-winio v0.5.2/go.mod h1:WpS1mjBmmwHBEWmogvA2mj8546UReBk4v8QkMxJ6pZY=
github.com/Microsoft/hcsshim v0.8.6/go.mod h1:Op3hHsoHPAvb6lceZHDtd9OkTew38wNoXnJs8iY7rUg=
github.com/Microsoft/hcsshim v0.8.7-0.20190325164909-8abdbb8205e4/go.mod h1:Op3hHsoHPAvb6lceZHDtd9OkTew38wNoXnJs8iY7rUg=
github.com/Microsoft/hcsshim v0.8.7/go.mod h1:OHd7sQqRFrYd3RmSgbgji+ctCwkbq2wbEYNSzOYtcBQ=
@@ -94,10 +102,12 @@ github.com/Microsoft/hcsshim v0.9.2/go.mod h1:7pLA8lDk46WKDWlVsENo92gC0XFa8rbKfy
github.com/Microsoft/hcsshim/test v0.0.0-20201218223536-d3e5debf77da/go.mod h1:5hlzMzRKMLyo42nCZ9oml8AdTlq/0cvIaBv6tK1RehU=
github.com/Microsoft/hcsshim/test v0.0.0-20210227013316-43a75bb4edd3/go.mod h1:mw7qgWloBUl75W/gVH3cQszUg1+gUITj7D6NY7ywVnY=
github.com/NYTimes/gziphandler v0.0.0-20170623195520-56545f4a5d46/go.mod h1:3wb06e3pkSAbeQ52E9H9iFoQsEEwGN64994WTCIhntQ=
github.com/NYTimes/gziphandler v1.1.1/go.mod h1:n/CVRwUEOgIxrgPvAQhUUr9oeUtvrhMomdKFjzJNB0c=
github.com/OneOfOne/xxhash v1.2.2/go.mod h1:HSdplMjZKSmBqAxg5vPj2TmRDmfkzw+cTzAElWljhcU=
github.com/ProtonMail/go-crypto v0.0.0-20210428141323-04723f9f07d7/go.mod h1:z4/9nQmJSSwwds7ejkxaJwO37dru3geImFUdJlaLzQo=
github.com/ProtonMail/go-crypto v0.0.0-20210920160938-87db9fbc61c7/go.mod h1:z4/9nQmJSSwwds7ejkxaJwO37dru3geImFUdJlaLzQo=
github.com/ProtonMail/go-crypto v0.0.0-20211112122917-428f8eabeeb3/go.mod h1:z4/9nQmJSSwwds7ejkxaJwO37dru3geImFUdJlaLzQo=
github.com/ProtonMail/go-crypto v0.0.0-20220113124808-70ae35bab23f/go.mod h1:z4/9nQmJSSwwds7ejkxaJwO37dru3geImFUdJlaLzQo=
github.com/PuerkitoBio/purell v1.0.0/go.mod h1:c11w/QuzBsJSee3cPx9rAFu61PvFxuPbtSwDGJws/X0=
github.com/PuerkitoBio/purell v1.1.1/go.mod h1:c11w/QuzBsJSee3cPx9rAFu61PvFxuPbtSwDGJws/X0=
github.com/PuerkitoBio/urlesc v0.0.0-20160726150825-5bd2802263f2/go.mod h1:uGdkoq3SwY9Y+13GIhn11/XLaGBb4BfwItxLd5jeuXE=
@@ -112,6 +122,7 @@ github.com/alecthomas/template v0.0.0-20160405071501-a0175ee3bccc/go.mod h1:LOuy
github.com/alecthomas/template v0.0.0-20190718012654-fb15b899a751/go.mod h1:LOuyumcjzFXgccqObfd/Ljyb9UuFJ6TxHnclSeseNhc=
github.com/alecthomas/units v0.0.0-20151022065526-2efee857e7cf/go.mod h1:ybxpYRFXyAe+OPACYpWeL0wqObRcbAqCMya13uyzqw0=
github.com/alecthomas/units v0.0.0-20190717042225-c3de453c63f4/go.mod h1:ybxpYRFXyAe+OPACYpWeL0wqObRcbAqCMya13uyzqw0=
github.com/alecthomas/units v0.0.0-20190924025748-f65c72e2690d/go.mod h1:rBZYJk541a8SKzHPHnH3zbiI+7dagKZ0cgpgrD7Fyho=
github.com/alexflint/go-filemutex v0.0.0-20171022225611-72bdc8eae2ae/go.mod h1:CgnQgUtFrFz9mxFNtED3jI5tLDjKlOM+oUF/sTk6ps0=
github.com/alexflint/go-filemutex v1.1.0/go.mod h1:7P4iRhttt/nUvUOrYIhcpMzv2G6CY9UnI16Z+UJqRyk=
github.com/anmitsu/go-shlex v0.0.0-20161002113705-648efa622239/go.mod h1:2FmKhYUyUczH0OGQWaF5ceTx0UBShxjsH6f8oGKYe2c=
@@ -125,6 +136,7 @@ github.com/armon/go-radix v1.0.0/go.mod h1:ufUuZ+zHj4x4TnLV4JWEpy2hxWSpsRywHrMgI
github.com/armon/go-socks5 v0.0.0-20160902184237-e75332964ef5/go.mod h1:wHh0iHkYZB8zMSxRWpUBQtwG5a7fFgvEO+odwuTv2gs=
github.com/asaskevich/govalidator v0.0.0-20190424111038-f61b66f89f4a/go.mod h1:lB+ZfQJz7igIIfQNfa7Ml4HSf2uFQQRzpGGRXenZAgY=
github.com/aws/aws-sdk-go v1.15.11/go.mod h1:mFuSZ37Z9YOHbQEwBWztmVzqXrEkub65tZoCYDt7FT0=
github.com/benbjohnson/clock v1.0.3/go.mod h1:bGMdMPoPVvcYyt1gHDf4J2KE153Yf9BuiUKYMaxlTDM=
github.com/beorn7/perks v0.0.0-20160804104726-4c0e84591b9a/go.mod h1:Dwedo/Wpr24TaqPxmxbtue+5NUziq4I4S80YR8gNf3Q=
github.com/beorn7/perks v0.0.0-20180321164747-3a771d992973/go.mod h1:Dwedo/Wpr24TaqPxmxbtue+5NUziq4I4S80YR8gNf3Q=
github.com/beorn7/perks v1.0.0/go.mod h1:KWe93zE9D1o94FZ5RNwFwVgaQK1VOXiVxmqh+CedLV8=
@@ -133,6 +145,7 @@ github.com/beorn7/perks v1.0.1/go.mod h1:G2ZrVWU2WbWT9wwq4/hrbKbnv/1ERSJQ0ibhJ6r
github.com/bgentry/speakeasy v0.1.0/go.mod h1:+zsyZBPWlz7T6j88CTgSN5bM796AkVf0kBD4zp0CCIs=
github.com/bitly/go-simplejson v0.5.0/go.mod h1:cXHtHw4XUPsvGaxgjIAn8PhEWG9NfngEKAMDJEczWVA=
github.com/bits-and-blooms/bitset v1.2.0/go.mod h1:gIdJ4wp64HaoK2YrL1Q5/N7Y16edYb8uY+O0FJTyyDA=
github.com/bketelsen/crypt v0.0.3-0.20200106085610-5cbc8cc4026c/go.mod h1:MKsuJmJgSg28kpZDP6UIiPt0e0Oz0kqKNGyRaWEPv84=
github.com/blang/semver v3.1.0+incompatible/go.mod h1:kRBLl5iJ+tD4TcOOxsy/0fnwebNt5EWlYSAyrTnjyyk=
github.com/blang/semver v3.5.1+incompatible/go.mod h1:kRBLl5iJ+tD4TcOOxsy/0fnwebNt5EWlYSAyrTnjyyk=
github.com/blang/semver/v4 v4.0.0/go.mod h1:IbckMUScFkM3pff0VJDNKRiT6TG/YpiHIM2yvyW5YoQ=
@@ -144,8 +157,11 @@ github.com/bugsnag/bugsnag-go v0.0.0-20141110184014-b1d153021fcd/go.mod h1:2oa8n
github.com/bugsnag/osext v0.0.0-20130617224835-0dd3f918b21b/go.mod h1:obH5gd0BsqsP2LwDJ9aOkm/6J86V6lyAXCoQWGw3K50=
github.com/bugsnag/panicwrap v0.0.0-20151223152923-e2c28503fcd0/go.mod h1:D/8v3kj0zr8ZAKg1AQ6crr+5VwKN5eIywRkfhyM/+dE=
github.com/cenkalti/backoff/v4 v4.1.1/go.mod h1:scbssz8iZGpm3xbr14ovlUdkxfGXNInqkPWOWmG2CLw=
github.com/cenkalti/backoff/v4 v4.1.2/go.mod h1:scbssz8iZGpm3xbr14ovlUdkxfGXNInqkPWOWmG2CLw=
github.com/census-instrumentation/opencensus-proto v0.2.1/go.mod h1:f6KPmirojxKA12rnyqOA5BBL4O983OfeGPqjHWSTneU=
github.com/census-instrumentation/opencensus-proto v0.3.0/go.mod h1:f6KPmirojxKA12rnyqOA5BBL4O983OfeGPqjHWSTneU=
github.com/certifi/gocertifi v0.0.0-20191021191039-0944d244cd40/go.mod h1:sGbDF6GwGcLpkNXPUTkMRoywsNa/ol15pxFe6ERfguA=
github.com/certifi/gocertifi v0.0.0-20200922220541-2c3bb06c6054/go.mod h1:sGbDF6GwGcLpkNXPUTkMRoywsNa/ol15pxFe6ERfguA=
github.com/cespare/xxhash v1.1.0 h1:a6HrQnmkObjyL+Gs60czilIUGqrzKutQD6XZog3p+ko=
github.com/cespare/xxhash v1.1.0/go.mod h1:XrSqR1VqqWfGrhpAt58auRo0WTKS1nRRg3ghfAqPWnc=
github.com/cespare/xxhash/v2 v2.1.1/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs=
@@ -177,6 +193,9 @@ github.com/cncf/xds/go v0.0.0-20211001041855-01bcc9b48dfe/go.mod h1:eXthEFrGJvWH
github.com/cncf/xds/go v0.0.0-20211011173535-cb28da3451f1/go.mod h1:eXthEFrGJvWHgFFCl3hGmgk+/aYT6PnTQLykKQRLhEs=
github.com/cncf/xds/go v0.0.0-20211130200136-a8f946100490/go.mod h1:eXthEFrGJvWHgFFCl3hGmgk+/aYT6PnTQLykKQRLhEs=
github.com/cockroachdb/datadriven v0.0.0-20190809214429-80d97fb3cbaa/go.mod h1:zn76sxSg3SzpJ0PPJaLDCu+Bu0Lg3sKTORVIj19EIF8=
github.com/cockroachdb/datadriven v0.0.0-20200714090401-bf6692d28da5/go.mod h1:h6jFvWxBdQXxjopDMZyH2UVceIRfR84bdzbkoKrsWNo=
github.com/cockroachdb/errors v1.2.4/go.mod h1:rQD95gz6FARkaKkQXUksEje/d9a6wBJoCr5oaCLELYA=
github.com/cockroachdb/logtags v0.0.0-20190617123548-eb05cc24525f/go.mod h1:i/u985jwjWRlyHXQbwatDASoW0RMlZ/3i9yJHE2xLkI=
github.com/containerd/aufs v0.0.0-20200908144142-dab0cbea06f4/go.mod h1:nukgQABAEopAHvB6j7cnP5zJ+/3aVcE7hCYqvIwAHyE=
github.com/containerd/aufs v0.0.0-20201003224125-76a6863f2989/go.mod h1:AkGGQs9NM2vtYHaUen+NljV0/baGCAPELGm2q9ZXpWU=
github.com/containerd/aufs v0.0.0-20210316121734-20793ff83c97/go.mod h1:kL5kd6KM5TzQjR79jljyi4olc1Vrx6XBlcyj3gNv2PU=
@@ -190,8 +209,9 @@ github.com/containerd/cgroups v0.0.0-20200531161412-0dbf7f05ba59/go.mod h1:pA0z1
github.com/containerd/cgroups v0.0.0-20200710171044-318312a37340/go.mod h1:s5q4SojHctfxANBDvMeIaIovkq29IP48TKAxnhYRxvo=
github.com/containerd/cgroups v0.0.0-20200824123100-0b889c03f102/go.mod h1:s5q4SojHctfxANBDvMeIaIovkq29IP48TKAxnhYRxvo=
github.com/containerd/cgroups v0.0.0-20210114181951-8a68de567b68/go.mod h1:ZJeTFisyysqgcCdecO57Dj79RfL0LNeGiFUqLYQRYLE=
github.com/containerd/cgroups v1.0.1 h1:iJnMvco9XGvKUvNQkv88bE4uJXxRQH18efbKo9w5vHQ=
github.com/containerd/cgroups v1.0.1/go.mod h1:0SJrPIenamHDcZhEcJMNBB85rHcUsw4f25ZfBiPYRkU=
github.com/containerd/cgroups v1.0.3 h1:ADZftAkglvCiD44c77s5YmMqaP2pzVCFZvBmAlBdAP4=
github.com/containerd/cgroups v1.0.3/go.mod h1:/ofk34relqNjSGyqPrmEULrO4Sc8LJhvJmWbUCUKqj8=
github.com/containerd/console v0.0.0-20180822173158-c12b1e7919c1/go.mod h1:Tj/on1eG8kiEhd0+fhSDzsPAFESxzBBvdyEgyryXffw=
github.com/containerd/console v0.0.0-20181022165439-0650fd9eeb50/go.mod h1:Tj/on1eG8kiEhd0+fhSDzsPAFESxzBBvdyEgyryXffw=
github.com/containerd/console v0.0.0-20191206165004-02ecf6a7291e/go.mod h1:8Pf4gM6VEbTNRIT26AyyU7hxdQU3MvAvxVI0sc00XBE=
@@ -213,8 +233,10 @@ github.com/containerd/containerd v1.5.0-beta.4/go.mod h1:GmdgZd2zA2GYIBZ0w09Zvgq
github.com/containerd/containerd v1.5.0-rc.0/go.mod h1:V/IXoMqNGgBlabz3tHD2TWDoTJseu1FGOKuoA4nNb2s=
github.com/containerd/containerd v1.5.1/go.mod h1:0DOxVqwDy2iZvrZp2JUx/E+hS0UNTVn7dJnIOwtYR4g=
github.com/containerd/containerd v1.5.7/go.mod h1:gyvv6+ugqY25TiXxcZC3L5yOeYgEw0QMhscqVp1AR9c=
github.com/containerd/containerd v1.5.9 h1:rs6Xg1gtIxaeyG+Smsb/0xaSDu1VgFhOCKBXxMxbsF4=
github.com/containerd/containerd v1.5.8/go.mod h1:YdFSv5bTFLpG2HIYmfqDpSYYTDX+mc5qtSuYx1YUb/s=
github.com/containerd/containerd v1.5.9/go.mod h1:fvQqCfadDGga5HZyn3j4+dx56qj2I9YwBrlSdalvJYQ=
github.com/containerd/containerd v1.6.1 h1:oa2uY0/0G+JX4X7hpGCYvkp9FjUancz56kSNnb1sG3o=
github.com/containerd/containerd v1.6.1/go.mod h1:1nJz5xCZPusx6jJU8Frfct988y0NpumIq9ODB0kLtoE=
github.com/containerd/continuity v0.0.0-20190426062206-aaeac12a7ffc/go.mod h1:GL3xCUCBDV3CZiTSEKksMWbLE66hEyuu9qyDOOqM47Y=
github.com/containerd/continuity v0.0.0-20190815185530-f2a389ac0a02/go.mod h1:GL3xCUCBDV3CZiTSEKksMWbLE66hEyuu9qyDOOqM47Y=
github.com/containerd/continuity v0.0.0-20191127005431-f65d91d395eb/go.mod h1:GL3xCUCBDV3CZiTSEKksMWbLE66hEyuu9qyDOOqM47Y=
@@ -222,6 +244,7 @@ github.com/containerd/continuity v0.0.0-20200710164510-efbc4488d8fe/go.mod h1:cE
github.com/containerd/continuity v0.0.0-20201208142359-180525291bb7/go.mod h1:kR3BEg7bDFaEddKm54WSmrol1fKWDU1nKYkgrcgZT7Y=
github.com/containerd/continuity v0.0.0-20210208174643-50096c924a4e/go.mod h1:EXlVlkqNba9rJe3j7w3Xa924itAMLgZH4UD/Q4PExuQ=
github.com/containerd/continuity v0.1.0/go.mod h1:ICJu0PwR54nI0yPEnJ6jcS+J7CZAUXrLh8lPo2knzsM=
github.com/containerd/continuity v0.2.2/go.mod h1:pWygW9u7LtS1o4N/Tn0FoCFDIXZ7rxcMX7HX1Dmibvk=
github.com/containerd/fifo v0.0.0-20180307165137-3d5202aec260/go.mod h1:ODA38xgv3Kuk8dQz2ZQXpnv/UZZUHUCL7pnLehbXgQI=
github.com/containerd/fifo v0.0.0-20190226154929-a9fb20d87448/go.mod h1:ODA38xgv3Kuk8dQz2ZQXpnv/UZZUHUCL7pnLehbXgQI=
github.com/containerd/fifo v0.0.0-20200410184934-f15a3290365b/go.mod h1:jPQ2IAeZRCYxpS/Cm1495vGFww6ecHmMk1YJH2Q5ln0=
@@ -230,6 +253,8 @@ github.com/containerd/fifo v0.0.0-20210316144830-115abcc95a1d/go.mod h1:ocF/ME1S
github.com/containerd/fifo v1.0.0/go.mod h1:ocF/ME1SX5b1AOlWi9r677YJmCPSwwWnQ9O123vzpE4=
github.com/containerd/go-cni v1.0.1/go.mod h1:+vUpYxKvAF72G9i1WoDOiPGRtQpqsNW/ZHtSlv++smU=
github.com/containerd/go-cni v1.0.2/go.mod h1:nrNABBHzu0ZwCug9Ije8hL2xBCYh/pjfMb1aZGrrohk=
github.com/containerd/go-cni v1.1.0/go.mod h1:Rflh2EJ/++BA2/vY5ao3K6WJRR/bZKsX123aPk+kUtA=
github.com/containerd/go-cni v1.1.3/go.mod h1:Rflh2EJ/++BA2/vY5ao3K6WJRR/bZKsX123aPk+kUtA=
github.com/containerd/go-runc v0.0.0-20180907222934-5a6d9f37cfa3/go.mod h1:IV7qH3hrUgRmyYrtgEeGWJfWbgcHL9CSRruz2Vqcph0=
github.com/containerd/go-runc v0.0.0-20190911050354-e029b79d8cda/go.mod h1:IV7qH3hrUgRmyYrtgEeGWJfWbgcHL9CSRruz2Vqcph0=
github.com/containerd/go-runc v0.0.0-20200220073739-7016d3ce2328/go.mod h1:PpyHrqVs8FTi9vpyHwPwiNEGaACDxT/N/pLcvMSRA9g=
@@ -239,12 +264,14 @@ github.com/containerd/imgcrypt v1.0.1/go.mod h1:mdd8cEPW7TPgNG4FpuP3sGBiQ7Yi/zak
github.com/containerd/imgcrypt v1.0.4-0.20210301171431-0ae5c75f59ba/go.mod h1:6TNsg0ctmizkrOgXRNQjAPFWpMYRWuiB6dSF4Pfa5SA=
github.com/containerd/imgcrypt v1.1.1-0.20210312161619-7ed62a527887/go.mod h1:5AZJNI6sLHJljKuI9IHnw1pWqo/F0nGDOuR9zgTs7ow=
github.com/containerd/imgcrypt v1.1.1/go.mod h1:xpLnwiQmEUJPvQoAapeb2SNCxz7Xr6PJrXQb0Dpc4ms=
github.com/containerd/imgcrypt v1.1.3/go.mod h1:/TPA1GIDXMzbj01yd8pIbQiLdQxed5ue1wb8bP7PQu4=
github.com/containerd/nri v0.0.0-20201007170849-eb1350a75164/go.mod h1:+2wGSDGFYfE5+So4M5syatU0N0f0LbWpuqyMi4/BE8c=
github.com/containerd/nri v0.0.0-20210316161719-dbaa18c31c14/go.mod h1:lmxnXF6oMkbqs39FiCt1s0R2HSMhcLel9vNL3m4AaeY=
github.com/containerd/nri v0.1.0/go.mod h1:lmxnXF6oMkbqs39FiCt1s0R2HSMhcLel9vNL3m4AaeY=
github.com/containerd/stargz-snapshotter/estargz v0.4.1/go.mod h1:x7Q9dg9QYb4+ELgxmo4gBUeJB0tl5dqH1Sdz0nJU1QM=
github.com/containerd/stargz-snapshotter/estargz v0.11.0 h1:t0IW5kOmY7AXDAWRUs2uVzDhijAUOAYVr/dyRhOQvBg=
github.com/containerd/stargz-snapshotter/estargz v0.11.0/go.mod h1:/KsZXsJRllMbTKFfG0miFQWViQKdI9+9aSXs+HN0+ac=
github.com/containerd/stargz-snapshotter/estargz v0.11.3 h1:k2kN16Px6LYuv++qFqK+JTcYqc8bEVxzGpf8/gFBL5M=
github.com/containerd/stargz-snapshotter/estargz v0.11.3/go.mod h1:7vRJIcImfY8bpifnMjt+HTJoQxASq7T28MYbP15/Nf0=
github.com/containerd/ttrpc v0.0.0-20190828154514-0e0f228740de/go.mod h1:PvCDdDGpgqzQIzDW1TphrGLssLDZp2GuS+X5DkEJB8o=
github.com/containerd/ttrpc v0.0.0-20190828172938-92c8520ef9f8/go.mod h1:PvCDdDGpgqzQIzDW1TphrGLssLDZp2GuS+X5DkEJB8o=
github.com/containerd/ttrpc v0.0.0-20191028202541-4f1b8fe65a5c/go.mod h1:LPm1u0xBw8r8NOKoOdNMeVHSawSsltak+Ihv+etqsE8=
@@ -267,21 +294,26 @@ github.com/containernetworking/cni v1.0.1/go.mod h1:AKuhXbN5EzmD4yTNtfSsX3tPcmtr
github.com/containernetworking/plugins v0.8.6/go.mod h1:qnw5mN19D8fIwkqW7oHHYDHVlzhJpcY6TQxn/fUyDDM=
github.com/containernetworking/plugins v0.9.1/go.mod h1:xP/idU2ldlzN6m4p5LmGiwRDjeJr6FLK6vuiUwoH7P8=
github.com/containernetworking/plugins v1.0.1/go.mod h1:QHCfGpaTwYTbbH+nZXKVTxNBDZcxSOplJT5ico8/FLE=
github.com/containers/common v0.47.4 h1:kS202Z/bTQIM/pwyuJ+lF8143Uli6AB9Q9OVR0xa9CM=
github.com/containers/common v0.47.4/go.mod h1:HgX0mFXyB0Tbe2REEIp9x9CxET6iSzmHfwR6S/t2LZc=
github.com/containers/image/v5 v5.19.1 h1:g4/+XIuh1kRoRn2MfLDhfHhkNOIO9JtqhSyo55tjpfE=
github.com/containers/common v0.47.5 h1:Qm9o+wVPO9sbggTKubN3xYMtPRaPv7dmcrJQgongHHw=
github.com/containers/common v0.47.5/go.mod h1:HgX0mFXyB0Tbe2REEIp9x9CxET6iSzmHfwR6S/t2LZc=
github.com/containers/image/v5 v5.19.1/go.mod h1:ewoo3u+TpJvGmsz64XgzbyTHwHtM94q7mgK/pX+v2SE=
github.com/containers/libtrust v0.0.0-20190913040956-14b96171aa3b h1:Q8ePgVfHDplZ7U33NwHZkrVELsZP5fYj9pM5WBZB2GE=
github.com/containers/image/v5 v5.20.0 h1:BYFMRvYqmEHnHo0sjTbnLbj0fzkGLDx6P57lszm30B4=
github.com/containers/image/v5 v5.20.0/go.mod h1:5UL1ooih6+USVYXk19r8ScQNsbTprhlJxrHezAu4OVE=
github.com/containers/libtrust v0.0.0-20190913040956-14b96171aa3b/go.mod h1:9rfv8iPl1ZP7aqh9YA68wnZv2NUDbXdcdPHVz0pFbPY=
github.com/containers/libtrust v0.0.0-20200511145503-9c3a6c22cd9a h1:spAGlqziZjCJL25C6F1zsQY05tfCKE9F5YwtEWWe6hU=
github.com/containers/libtrust v0.0.0-20200511145503-9c3a6c22cd9a/go.mod h1:9rfv8iPl1ZP7aqh9YA68wnZv2NUDbXdcdPHVz0pFbPY=
github.com/containers/ocicrypt v1.0.1/go.mod h1:MeJDzk1RJHv89LjsH0Sp5KTY3ZYkjXO/C+bKAeWFIrc=
github.com/containers/ocicrypt v1.1.0/go.mod h1:b8AOe0YR67uU8OqfVNcznfFpAzu3rdgUV4GP9qXPfu4=
github.com/containers/ocicrypt v1.1.1/go.mod h1:Dm55fwWm1YZAjYRaJ94z2mfZikIyIN4B0oB3dj3jFxY=
github.com/containers/ocicrypt v1.1.2 h1:Ez+GAMP/4GLix5Ywo/fL7O0nY771gsBIigiqUm1aXz0=
github.com/containers/ocicrypt v1.1.2/go.mod h1:Dm55fwWm1YZAjYRaJ94z2mfZikIyIN4B0oB3dj3jFxY=
github.com/containers/storage v1.38.2 h1:8bAIxnVBGKzMw5EWCivVj24bztQT6IkDp4uHiyhnzwE=
github.com/containers/ocicrypt v1.1.3 h1:uMxn2wTb4nDR7GqG3rnZSfpJXqWURfzZ7nKydzIeKpA=
github.com/containers/ocicrypt v1.1.3/go.mod h1:xpdkbVAuaH3WzbEabUd5yDsl9SwJA5pABH85425Es2g=
github.com/containers/storage v1.38.2/go.mod h1:INP0RPLHWBxx+pTsO5uiHlDUGHDFvWZPWprAbAlQWPQ=
github.com/containers/storage v1.39.0 h1:NV93CVx6KAQ04cldeJyqa7uDZivhmO3rXla1cyn75dk=
github.com/containers/storage v1.39.0/go.mod h1:UAD0cKLouN4BOQRgZut/nMjrh/EnTCjSNPgp4ZuGWMs=
github.com/coreos/bbolt v1.3.2/go.mod h1:iRUV2dpdMOn7Bo10OQBFzIJO9kkE559Wcmn+qkEiiKk=
github.com/coreos/etcd v3.3.10+incompatible/go.mod h1:uF7uidLiAD3TWHmW31ZFd/JWoc32PjwdhPthX9715RE=
github.com/coreos/etcd v3.3.13+incompatible/go.mod h1:uF7uidLiAD3TWHmW31ZFd/JWoc32PjwdhPthX9715RE=
github.com/coreos/go-iptables v0.4.5/go.mod h1:/mVI274lEDI2ns62jHCDnCyBF9Iwsmekav8Dbxlm1MU=
github.com/coreos/go-iptables v0.5.0/go.mod h1:/mVI274lEDI2ns62jHCDnCyBF9Iwsmekav8Dbxlm1MU=
github.com/coreos/go-iptables v0.6.0/go.mod h1:Qe8Bv2Xik5FyTXwgIbLAnv2sWSBmvWdFETJConOQ//Q=
@@ -327,8 +359,9 @@ github.com/docker/distribution v2.7.1+incompatible/go.mod h1:J2gT2udsDAN96Uj4Kfc
github.com/docker/distribution v2.8.0+incompatible h1:l9EaZDICImO1ngI+uTifW+ZYvvz7fKISBAKpg+MbWbY=
github.com/docker/distribution v2.8.0+incompatible/go.mod h1:J2gT2udsDAN96Uj4KfcMRqY0/ypR+oyYUYmja8H+y+w=
github.com/docker/docker v1.4.2-0.20190924003213-a8608b5b67c7/go.mod h1:eEKB0N0r5NX/I1kEveEz05bcu8tLC/8azJZsviup8Sk=
github.com/docker/docker v20.10.12+incompatible h1:CEeNmFM0QZIsJCZKMkZx0ZcahTiewkrgiwfYD+dfl1U=
github.com/docker/docker v20.10.12+incompatible/go.mod h1:eEKB0N0r5NX/I1kEveEz05bcu8tLC/8azJZsviup8Sk=
github.com/docker/docker v20.10.14+incompatible h1:+T9/PRYWNDo5SZl5qS1r9Mo/0Q8AwxKKPtu9S1yxM0w=
github.com/docker/docker v20.10.14+incompatible/go.mod h1:eEKB0N0r5NX/I1kEveEz05bcu8tLC/8azJZsviup8Sk=
github.com/docker/docker-credential-helpers v0.6.3/go.mod h1:WRaJzqw3CTB9bk10avuGsjVBZsD05qeibJ1/TYlvc0Y=
github.com/docker/docker-credential-helpers v0.6.4 h1:axCks+yV+2MR3/kZhAmy07yC56WZ2Pwu/fKWtKuZB0o=
github.com/docker/docker-credential-helpers v0.6.4/go.mod h1:ofX3UI0Gz1TteYBjtgs07O36Pyasyp66D2uKT7H8W1c=
@@ -367,11 +400,14 @@ github.com/envoyproxy/go-control-plane v0.10.1/go.mod h1:AY7fTTXNdv/aJ2O5jwpxAPO
github.com/envoyproxy/protoc-gen-validate v0.1.0/go.mod h1:iSmxcyjqTsJpI2R4NaDN7+kN2VEUnK/pcBlmesArF7c=
github.com/envoyproxy/protoc-gen-validate v0.6.2/go.mod h1:2t7qjJNvHPx8IjnBOzl9E9/baC+qXE/TeeyBRzgJDws=
github.com/evanphx/json-patch v4.9.0+incompatible/go.mod h1:50XU6AFN0ol/bzJsmQLiYLvXMP4fmwYFNcr97nuDLSk=
github.com/evanphx/json-patch v4.11.0+incompatible/go.mod h1:50XU6AFN0ol/bzJsmQLiYLvXMP4fmwYFNcr97nuDLSk=
github.com/fatih/color v1.7.0/go.mod h1:Zm6kSWBoL9eyXnKyktHP6abPY2pDugNf5KwzbycvMj4=
github.com/fatih/color v1.9.0/go.mod h1:eQcE1qtQxscV5RaZvpXrrb8Drkc3/DdQ+uUYCNjL+zU=
github.com/fatih/color v1.13.0/go.mod h1:kLAiJbzzSOZDVNGyDpeOxJ47H46qBXwg5ILebYFFOfk=
github.com/felixge/httpsnoop v1.0.1/go.mod h1:m8KPJKqk1gH5J9DgRY2ASl2lWCfGKXixSwevea8zH2U=
github.com/flynn/go-shlex v0.0.0-20150515145356-3f9db97f8568/go.mod h1:xEzjJPgXI435gkrCt3MPfRiAkVrwSbHsst4LCFVfpJc=
github.com/form3tech-oss/jwt-go v3.2.2+incompatible/go.mod h1:pbq4aXjuKjdthFRnoDwaVPLA+WlJuPGy+QneDUgJi2k=
github.com/form3tech-oss/jwt-go v3.2.3+incompatible/go.mod h1:pbq4aXjuKjdthFRnoDwaVPLA+WlJuPGy+QneDUgJi2k=
github.com/frankban/quicktest v1.11.3/go.mod h1:wRf/ReqHper53s+kmmSZizM8NamnL3IM0I9ntUbOk+k=
github.com/fsnotify/fsnotify v1.4.7/go.mod h1:jwhsz4b93w/PPRr/qN1Yymfu8t87LnFCMoQvtojpjFo=
github.com/fsnotify/fsnotify v1.4.9/go.mod h1:znqG4EE+3YCdAaPaxE2ZRY/06pZUdp0tY4IgpuI1SZQ=
@@ -379,6 +415,7 @@ github.com/fsnotify/fsnotify v1.5.1 h1:mZcQUHVQUQWoPXXtuf9yuEXKudkV2sx1E06UadKWp
github.com/fsnotify/fsnotify v1.5.1/go.mod h1:T3375wBYaZdLLcVNkcVbzGHY7f1l/uK5T5Ai1i3InKU=
github.com/fullsailor/pkcs7 v0.0.0-20190404230743-d7302db945fa/go.mod h1:KnogPXtdwXqoenmZCw6S+25EAm2MkxbG0deNDu4cbSA=
github.com/garyburd/redigo v0.0.0-20150301180006-535138d7bcd7/go.mod h1:NR3MbYisc3/PwhQ00EMzDiPmrwpPxAn5GI05/YaO1SY=
github.com/getsentry/raven-go v0.2.0/go.mod h1:KungGk8q33+aIAZUIVWZDr2OfAEBsO49PX4NzFV5kcQ=
github.com/ghodss/yaml v0.0.0-20150909031657-73d445a93680/go.mod h1:4dBDuWmgqj2HViK6kFavaiC9ZROes6MMH2rRYeMEF04=
github.com/ghodss/yaml v1.0.0 h1:wQHKEahhL6wmXdzwWG11gIVCkOv05bNOh+Rxn0yngAk=
github.com/ghodss/yaml v1.0.0/go.mod h1:4dBDuWmgqj2HViK6kFavaiC9ZROes6MMH2rRYeMEF04=
@@ -394,21 +431,32 @@ github.com/go-gl/glfw/v3.3/glfw v0.0.0-20200222043503-6f7a984d4dc4/go.mod h1:tQ2
github.com/go-ini/ini v1.25.4/go.mod h1:ByCAeIL28uOIIG0E3PJtZPDL8WnHpFKFOtgjp+3Ies8=
github.com/go-kit/kit v0.8.0/go.mod h1:xBxKIO96dXMWWy0MnWVtmwkA9/13aqxPnvrjFYMA2as=
github.com/go-kit/kit v0.9.0/go.mod h1:xBxKIO96dXMWWy0MnWVtmwkA9/13aqxPnvrjFYMA2as=
github.com/go-kit/log v0.1.0/go.mod h1:zbhenjAZHb184qTLMA9ZjW7ThYL0H2mk7Q6pNt4vbaY=
github.com/go-logfmt/logfmt v0.3.0/go.mod h1:Qt1PoO58o5twSAckw1HlFXLmHsOX5/0LbT9GBnD5lWE=
github.com/go-logfmt/logfmt v0.4.0/go.mod h1:3RMwSq7FuexP4Kalkev3ejPJsZTpXXBr9+V4qmtdjCk=
github.com/go-logfmt/logfmt v0.5.0/go.mod h1:wCYkCAKZfumFQihp8CzCvQ3paCTfi41vtzG1KdI/P7A=
github.com/go-logr/logr v0.1.0/go.mod h1:ixOQHD9gLJUVQQ2ZOR7zLEifBX6tGkNJF4QyIY7sIas=
github.com/go-logr/logr v0.2.0/go.mod h1:z6/tIYblkpsD+a4lm/fGIIU9mZ+XfAiaFtq7xTgseGU=
github.com/go-logr/logr v0.4.0/go.mod h1:z6/tIYblkpsD+a4lm/fGIIU9mZ+XfAiaFtq7xTgseGU=
github.com/go-logr/logr v1.2.0/go.mod h1:jdQByPbusPIv2/zmleS9BjJVeZ6kBagPoEUsqbVz/1A=
github.com/go-logr/logr v1.2.1/go.mod h1:jdQByPbusPIv2/zmleS9BjJVeZ6kBagPoEUsqbVz/1A=
github.com/go-logr/logr v1.2.2/go.mod h1:jdQByPbusPIv2/zmleS9BjJVeZ6kBagPoEUsqbVz/1A=
github.com/go-logr/stdr v1.2.0/go.mod h1:YkVgnZu1ZjjL7xTxrfm/LLZBfkhTqSR1ydtm6jTKKwI=
github.com/go-logr/stdr v1.2.2/go.mod h1:mMo/vtBO5dYbehREoey6XUKy/eSumjCCveDpRre4VKE=
github.com/go-openapi/jsonpointer v0.0.0-20160704185906-46af16f9f7b1/go.mod h1:+35s3my2LFTysnkMfxsJBAMHj/DoqoB9knIWoYG/Vk0=
github.com/go-openapi/jsonpointer v0.19.2/go.mod h1:3akKfEdA7DF1sugOqz1dVQHBcuDBPKZGEoHC/NkiQRg=
github.com/go-openapi/jsonpointer v0.19.3/go.mod h1:Pl9vOtqEWErmShwVjC8pYs9cog34VGT37dQOVbmoatg=
github.com/go-openapi/jsonpointer v0.19.5/go.mod h1:Pl9vOtqEWErmShwVjC8pYs9cog34VGT37dQOVbmoatg=
github.com/go-openapi/jsonreference v0.0.0-20160704190145-13c6e3589ad9/go.mod h1:W3Z9FmVs9qj+KR4zFKmDPGiLdk1D9Rlm7cyMvf57TTg=
github.com/go-openapi/jsonreference v0.19.2/go.mod h1:jMjeRr2HHw6nAVajTXJ4eiUwohSTlpa0o73RUL1owJc=
github.com/go-openapi/jsonreference v0.19.3/go.mod h1:rjx6GuL8TTa9VaixXglHmQmIL98+wF9xc8zWvFonSJ8=
github.com/go-openapi/jsonreference v0.19.5/go.mod h1:RdybgQwPxbL4UEjuAruzK1x3nE69AqPYEJeo/TWfEeg=
github.com/go-openapi/spec v0.0.0-20160808142527-6aced65f8501/go.mod h1:J8+jY1nAiCcj+friV/PDoE1/3eeccG9LYBs0tYvLOWc=
github.com/go-openapi/spec v0.19.3/go.mod h1:FpwSN1ksY1eteniUU7X0N/BgJ7a4WvBFVA8Lj9mJglo=
github.com/go-openapi/swag v0.0.0-20160704191624-1d0bd113de87/go.mod h1:DXUve3Dpr1UfpPtxFw+EFuQ41HhCWZfha5jSVRG7C7I=
github.com/go-openapi/swag v0.19.2/go.mod h1:POnQmlKehdgb5mhVOsnJFsivZCEZ/vjK9gh66Z9tfKk=
github.com/go-openapi/swag v0.19.5/go.mod h1:POnQmlKehdgb5mhVOsnJFsivZCEZ/vjK9gh66Z9tfKk=
github.com/go-openapi/swag v0.19.14/go.mod h1:QYRuS/SOXUCsnplDa677K7+DxSOj6IPNl/eQntq43wQ=
github.com/go-stack/stack v1.8.0/go.mod h1:v0f6uXyyMGvRgIKkXu+yp6POWl0qKG85gN/melR3HDY=
github.com/go-task/slim-sprig v0.0.0-20210107165309-348f09dbbbc0/go.mod h1:fyg7847qk6SyHyPtNmDHnmrv/HOrqktSC+C9fM+CJOE=
github.com/godbus/dbus v0.0.0-20151105175453-c7fdd8b5cd55/go.mod h1:/YcGZj5zSblfDWMMoOzV4fas9FZnQYTkDnsGvmh2Grw=
@@ -464,6 +512,7 @@ github.com/golang/protobuf v1.5.2/go.mod h1:XVQd3VNwM+JqD3oG2Ue2ip4fOMUkwXdXDdiu
github.com/golang/snappy v0.0.3/go.mod h1:/XxbfmMg8lxefKM7IXC3fBNl/7bRcc72aCRzEWrmP2Q=
github.com/google/btree v0.0.0-20180813153112-4030bb1f1f0c/go.mod h1:lNA+9X1NB3Zf8V7Ke586lFgjr2dZNuvo3lPJSGZ5JPQ=
github.com/google/btree v1.0.0/go.mod h1:lNA+9X1NB3Zf8V7Ke586lFgjr2dZNuvo3lPJSGZ5JPQ=
github.com/google/btree v1.0.1/go.mod h1:xXMiIv4Fb/0kKde4SpL7qlzvu5cMJDRkFDxJfI9uaxA=
github.com/google/go-cmp v0.2.0/go.mod h1:oXzfMopK8JAjlY9xF4vHSVASa0yLyX7SntLO5aqRK0M=
github.com/google/go-cmp v0.3.0/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU=
github.com/google/go-cmp v0.3.1/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU=
@@ -482,6 +531,7 @@ github.com/google/go-intervals v0.0.2 h1:FGrVEiUnTRKR8yE04qzXYaJMtnIYqobR5QbblK3
github.com/google/go-intervals v0.0.2/go.mod h1:MkaR3LNRfeKLPmqgJYs4E66z5InYjmCjbbr4TQlcT6Y=
github.com/google/gofuzz v1.0.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg=
github.com/google/gofuzz v1.1.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg=
github.com/google/gofuzz v1.2.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg=
github.com/google/martian v2.1.0+incompatible/go.mod h1:9I4somxYTbIHy5NJKHRl3wXiIaQGbYVAs8BPL6v8lEs=
github.com/google/martian/v3 v3.0.0/go.mod h1:y5Zk1BBys9G+gd6Jrk0W3cC1+ELVxBWuIGO+w/tUAp0=
github.com/google/martian/v3 v3.1.0/go.mod h1:y5Zk1BBys9G+gd6Jrk0W3cC1+ELVxBWuIGO+w/tUAp0=
@@ -513,6 +563,8 @@ github.com/googleapis/gax-go/v2 v2.0.5/go.mod h1:DWXyrwAJ9X0FpwwEdw+IPEYBICEFu5m
github.com/googleapis/gax-go/v2 v2.1.0/go.mod h1:Q3nei7sK6ybPYH7twZdmQpAd1MKb7pfu6SK+H1/DsU0=
github.com/googleapis/gax-go/v2 v2.1.1/go.mod h1:hddJymUZASv3XPyGkUpKj8pPO47Rmb0eJc8R6ouapiM=
github.com/googleapis/gnostic v0.4.1/go.mod h1:LRhVm6pbyptWbWbuZ38d1eyptfvIytN3ir6b65WBswg=
github.com/googleapis/gnostic v0.5.1/go.mod h1:6U4PtQXGIEt/Z3h5MAT7FNofLnw9vXk2cUuW7uA/OeU=
github.com/googleapis/gnostic v0.5.5/go.mod h1:7+EbHbldMins07ALC74bsA81Ovc97DwqyJO1AENw9kA=
github.com/gopherjs/gopherjs v0.0.0-20181017120253-0766667cb4d1/go.mod h1:wJfORRmW1u3UXTncJ5qlYoELFm8eSnnEO6hX4iZ3EWY=
github.com/gorilla/handlers v0.0.0-20150720190736-60c7bfde3e33/go.mod h1:Qkdc/uu4tH4g6mTK6auzZ766c4CA0Ng8+o/OAirnOIQ=
github.com/gorilla/mux v1.7.2/go.mod h1:1lud6UwP+6orDFRuTfBEV8e9/aOM/c4fVVCaMa2zaAs=
@@ -526,15 +578,19 @@ github.com/gorilla/websocket v1.4.2/go.mod h1:YR8l580nyteQvAITg2hZ9XVh4b55+EU/ad
github.com/gregjones/httpcache v0.0.0-20180305231024-9cad4c3443a7/go.mod h1:FecbI9+v66THATjSRHfNgh1IVFe/9kFxbXtjV0ctIMA=
github.com/grpc-ecosystem/go-grpc-middleware v1.0.0/go.mod h1:FiyG127CGDf3tlThmgyCl78X/SZQqEOJBCDaAfeWzPs=
github.com/grpc-ecosystem/go-grpc-middleware v1.0.1-0.20190118093823-f849b5445de4/go.mod h1:FiyG127CGDf3tlThmgyCl78X/SZQqEOJBCDaAfeWzPs=
github.com/grpc-ecosystem/go-grpc-middleware v1.3.0/go.mod h1:z0ButlSOZa5vEBq9m2m2hlwIgKw+rp3sdCBRoJY+30Y=
github.com/grpc-ecosystem/go-grpc-prometheus v1.2.0/go.mod h1:8NvIoxWQoOIhqOTXgfV/d3M/q6VIi02HzZEHgUlZvzk=
github.com/grpc-ecosystem/grpc-gateway v1.9.0/go.mod h1:vNeuVxBJEsws4ogUvrchl83t/GYV9WGTSLVdBhOQFDY=
github.com/grpc-ecosystem/grpc-gateway v1.9.5/go.mod h1:vNeuVxBJEsws4ogUvrchl83t/GYV9WGTSLVdBhOQFDY=
github.com/grpc-ecosystem/grpc-gateway v1.16.0/go.mod h1:BDjrQk3hbvj6Nolgz8mAMFbcEtjT1g+wF4CSlocrBnw=
github.com/hashicorp/consul/api v1.1.0/go.mod h1:VmuI/Lkw1nC05EYQWNKwWGbkg+FbDBtguAZLlVdkD9Q=
github.com/hashicorp/consul/api v1.11.0/go.mod h1:XjsvQN+RJGWI2TWy1/kqaE16HrR2J/FWgkYjdZQsX9M=
github.com/hashicorp/consul/sdk v0.1.1/go.mod h1:VKf9jXwCTEY1QZP2MOLRhb5i/I/ssyNV1vwHyQBF0x8=
github.com/hashicorp/consul/sdk v0.8.0/go.mod h1:GBvyrGALthsZObzUGsfgHZQDXjg4lOjagTIwIR1vPms=
github.com/hashicorp/errwrap v0.0.0-20141028054710-7554cd9344ce/go.mod h1:YH+1FKiLXxHSkmPseP+kNlulaMuP3n2brvKWEqk/Jc4=
github.com/hashicorp/errwrap v1.0.0 h1:hLrqtEDnRye3+sgx6z4qVLNuviH3MR5aQ0ykNJa/UYA=
github.com/hashicorp/errwrap v1.0.0/go.mod h1:YH+1FKiLXxHSkmPseP+kNlulaMuP3n2brvKWEqk/Jc4=
github.com/hashicorp/errwrap v1.1.0 h1:OxrOeh75EUXMY8TBjag2fzXGZ40LB6IKw45YeGUDY2I=
github.com/hashicorp/errwrap v1.1.0/go.mod h1:YH+1FKiLXxHSkmPseP+kNlulaMuP3n2brvKWEqk/Jc4=
github.com/hashicorp/go-cleanhttp v0.5.0/go.mod h1:JpRdi6/HCYpAwUzNwuwqhbovhLtngrth3wmdIIUrZ80=
github.com/hashicorp/go-cleanhttp v0.5.1/go.mod h1:JpRdi6/HCYpAwUzNwuwqhbovhLtngrth3wmdIIUrZ80=
github.com/hashicorp/go-cleanhttp v0.5.2/go.mod h1:kO/YDlP8L1346E6Sodw+PrpBSV4/SoxCXGY6BqNFT48=
@@ -549,20 +605,25 @@ github.com/hashicorp/go-multierror v1.1.0/go.mod h1:spPvp8C1qA32ftKqdAHm4hHTbPw+
github.com/hashicorp/go-multierror v1.1.1 h1:H5DkEtf6CXdFp0N0Em5UCwQpXMWke8IA0+lD48awMYo=
github.com/hashicorp/go-multierror v1.1.1/go.mod h1:iw975J/qwKPdAO1clOe2L8331t/9/fmwbPZ6JB6eMoM=
github.com/hashicorp/go-retryablehttp v0.5.3/go.mod h1:9B5zBasrRhHXnJnui7y6sL7es7NDiJgTc6Er0maI1Xs=
github.com/hashicorp/go-rootcerts v1.0.0/go.mod h1:K6zTfqpRlCUIjkwsN4Z+hiSfzSTQa6eBIzfwKfwNnHU=
github.com/hashicorp/go-rootcerts v1.0.2/go.mod h1:pqUvnprVnM5bf7AOirdbb01K4ccR319Vf4pU3K5EGc8=
github.com/hashicorp/go-sockaddr v1.0.0/go.mod h1:7Xibr9yA9JjQq1JpNB2Vw7kxv8xerXegt+ozgdvDeDU=
github.com/hashicorp/go-syslog v1.0.0/go.mod h1:qPfqrKkXGihmCqbJM2mZgkZGvKG1dFdvsLplgctolz4=
github.com/hashicorp/go-uuid v1.0.0/go.mod h1:6SBZvOh/SIDV7/2o3Jml5SYk/TvGqwFJ/bN7x4byOro=
github.com/hashicorp/go-uuid v1.0.1/go.mod h1:6SBZvOh/SIDV7/2o3Jml5SYk/TvGqwFJ/bN7x4byOro=
github.com/hashicorp/go.net v0.0.1/go.mod h1:hjKkEWcCURg++eb33jQU7oqQcI9XDCnUzHA0oac0k90=
github.com/hashicorp/golang-lru v0.5.0/go.mod h1:/m3WP610KZHVQ1SGc6re/UDhFvYD7pJ4Ao+sR/qLZy8=
github.com/hashicorp/golang-lru v0.5.1/go.mod h1:/m3WP610KZHVQ1SGc6re/UDhFvYD7pJ4Ao+sR/qLZy8=
github.com/hashicorp/golang-lru v0.5.4/go.mod h1:iADmTwqILo4mZ8BN3D2Q6+9jd8WM5uGBxy+E8yxSoD4=
github.com/hashicorp/hcl v1.0.0/go.mod h1:E5yfLk+7swimpb2L/Alb/PJmXilQ/rhwaUYs4T20WEQ=
github.com/hashicorp/logutils v1.0.0/go.mod h1:QIAnNjmIWmVIIkWDTG1z5v++HQmx9WQRO+LraFDTW64=
github.com/hashicorp/mdns v1.0.0/go.mod h1:tL+uN++7HEJ6SQLQ2/p+z2pH24WQKWjBPkE0mNTz8vQ=
github.com/hashicorp/mdns v1.0.1/go.mod h1:4gW7WsVCke5TE7EPeYliwHlRUyBtfCwuFwuMg2DmyNY=
github.com/hashicorp/mdns v1.0.4/go.mod h1:mtBihi+LeNXGtG8L9dX59gAEa12BDtBQSp4v/YAJqrc=
github.com/hashicorp/memberlist v0.1.3/go.mod h1:ajVTdAv/9Im8oMAAj5G31PhhMCZJV2pPBoIllUwCN7I=
github.com/hashicorp/memberlist v0.2.2/go.mod h1:MS2lj3INKhZjWNqd3N0m3J+Jxf3DAOnAH9VT3Sh9MUE=
github.com/hashicorp/memberlist v0.3.0/go.mod h1:MS2lj3INKhZjWNqd3N0m3J+Jxf3DAOnAH9VT3Sh9MUE=
github.com/hashicorp/serf v0.8.2/go.mod h1:6hOLApaqBFA1NXqRQAsxw9QxuDEvNxSQRwA/JwenrHc=
github.com/hashicorp/serf v0.9.5/go.mod h1:UWDWwZeL5cuWDJdl0C6wrvrUwEqtQ4ZKBKKENpqIUyk=
github.com/hashicorp/serf v0.9.6/go.mod h1:TXZNMjZQijwlDvp+r0b63xZ45H7JmCmgg4gpTwn9UV4=
github.com/hpcloud/tail v1.0.0/go.mod h1:ab1qPbhIpdTxEkNHXyeSf5vhxWSCs/tWer42PpOxQnU=
@@ -577,6 +638,7 @@ github.com/imdario/mergo v0.3.12 h1:b6R2BslTbIEToALKP7LxUvijTsNI9TAe80pLWN2g/HU=
github.com/imdario/mergo v0.3.12/go.mod h1:jmQim1M+e3UYxmgPu/WyfjB3N3VflVyUjjjwH0dnCYA=
github.com/inconshreveable/mousetrap v1.0.0 h1:Z8tu5sraLXCXIcARxBp/8cbvlwVa7Z1NHg9XEKhtSvM=
github.com/inconshreveable/mousetrap v1.0.0/go.mod h1:PxqpIevigyE2G7u3NXJIT2ANytuPF1OarO4DADm73n8=
github.com/intel/goresctrl v0.2.0/go.mod h1:+CZdzouYFn5EsxgqAQTEzMfwKwuc0fVdMrT9FCCAVRQ=
github.com/j-keck/arping v0.0.0-20160618110441-2cf9dc699c56/go.mod h1:ymszkNOg6tORTn+6F6j+Jc8TOr5osrynvN6ivFWZ2GA=
github.com/j-keck/arping v1.0.2/go.mod h1:aJbELhR92bSk7tp79AWM/ftfc90EfEi2bQJrbBFOsPw=
github.com/jbenet/go-context v0.0.0-20150711004518-d14ea06fba99/go.mod h1:1lJo3i6rXxKeerYnT8Nvf0QmHCRC1n8sfWVwXF2Frvo=
@@ -586,6 +648,9 @@ github.com/jmespath/go-jmespath v0.0.0-20160202185014-0b12d6b521d8/go.mod h1:Nht
github.com/jmespath/go-jmespath v0.0.0-20160803190731-bd40a432e4c7/go.mod h1:Nht3zPeWKUH0NzdCt2Blrr5ys8VGpn0CEB0cQHVjt7k=
github.com/joefitzgerald/rainbow-reporter v0.1.0/go.mod h1:481CNgqmVHQZzdIbN52CupLJyoVwB10FQ/IQlF1pdL8=
github.com/jonboulle/clockwork v0.1.0/go.mod h1:Ii8DK3G1RaLaWxj9trq07+26W01tbo22gdxWY5EU2bo=
github.com/jonboulle/clockwork v0.2.2/go.mod h1:Pkfl5aHPm1nk2H9h0bjmnJD/BcgbGXUBGnn1kMkgxc8=
github.com/josharian/intern v1.0.0/go.mod h1:5DoeVV0s6jJacbCEi61lwdGj/aVlrQvzHFFd8Hwg//Y=
github.com/jpillora/backoff v1.0.0/go.mod h1:J/6gKK9jxlEcS3zixgDgUAsiuZ7yrSoa/FX5e0EB2j4=
github.com/json-iterator/go v1.1.6/go.mod h1:+SdeFBvtyEkXs7REEP0seUULqWtbJapLOCVDaaPEHmU=
github.com/json-iterator/go v1.1.7/go.mod h1:KdQUCv79m/52Kvf8AW2vK1V8akMuk1QjK/uOdHXbAo4=
github.com/json-iterator/go v1.1.9/go.mod h1:KdQUCv79m/52Kvf8AW2vK1V8akMuk1QjK/uOdHXbAo4=
@@ -597,6 +662,7 @@ github.com/jstemmer/go-junit-report v0.0.0-20190106144839-af01ea7f8024/go.mod h1
github.com/jstemmer/go-junit-report v0.9.1/go.mod h1:Brl9GWCQeLvo8nXZwPNNblvFj/XSXhF0NWZEnDohbsk=
github.com/jtolds/gls v4.20.0+incompatible/go.mod h1:QJZ7F/aHp+rZTRtaJ1ow/lLfFfVYBRgL+9YlvaHOwJU=
github.com/julienschmidt/httprouter v1.2.0/go.mod h1:SYymIcj16QtmaHHD7aYtjjsJG7VTCxuUUipMqKk8s4w=
github.com/julienschmidt/httprouter v1.3.0/go.mod h1:JR6WtHb+2LUe8TCKY3cZOxFyyO8IZAc4RVcycCCAKdM=
github.com/kevinburke/ssh_config v0.0.0-20201106050909-4977a11b4351/go.mod h1:CT57kijsi8u/K/BOFA39wgDQJ9CxiF4nAY/ojJ6r6mM=
github.com/kevinburke/ssh_config v1.1.0/go.mod h1:CT57kijsi8u/K/BOFA39wgDQJ9CxiF4nAY/ojJ6r6mM=
github.com/kisielk/errcheck v1.1.0/go.mod h1:EZBBE59ingxPouuu3KfxchcWSUPOHkagtvWXihfKN4Q=
@@ -606,8 +672,10 @@ github.com/kisielk/gotool v1.0.0/go.mod h1:XhKaO+MFFWcvkIS/tQcRk01m1F5IRFswLeQ+o
github.com/klauspost/compress v1.4.1/go.mod h1:RyIbtBH6LamlWaDj8nUwkbUhJ87Yi3uG0guNDohfE1A=
github.com/klauspost/compress v1.11.3/go.mod h1:aoV0uJVorq1K+umq18yTdKaF57EivdYsUV+/s2qKfXs=
github.com/klauspost/compress v1.11.13/go.mod h1:aoV0uJVorq1K+umq18yTdKaF57EivdYsUV+/s2qKfXs=
github.com/klauspost/compress v1.14.2 h1:S0OHlFk/Gbon/yauFJ4FfJJF5V0fc5HbBTJazi28pRw=
github.com/klauspost/compress v1.14.2/go.mod h1:/3/Vjq9QcHkK5uEr5lBEmyoZ1iFhe47etQ6QUkpK6sk=
github.com/klauspost/compress v1.14.4/go.mod h1:/3/Vjq9QcHkK5uEr5lBEmyoZ1iFhe47etQ6QUkpK6sk=
github.com/klauspost/compress v1.15.1 h1:y9FcTHGyrebwfP0ZZqFiaxTaiDnUrGkJkI+f583BL1A=
github.com/klauspost/compress v1.15.1/go.mod h1:/3/Vjq9QcHkK5uEr5lBEmyoZ1iFhe47etQ6QUkpK6sk=
github.com/klauspost/cpuid v1.2.0/go.mod h1:Pj4uuM528wm8OyEC2QMXAi2YiTZ96dNQPGgoMS4s3ek=
github.com/klauspost/pgzip v1.2.5 h1:qnWYvvKqedOF2ulHpMG72XQol4ILEJ8k2wwRl/Km8oE=
github.com/klauspost/pgzip v1.2.5/go.mod h1:Ch1tH69qFZu15pkjo5kYi6mth2Zzwzt50oCQKQE9RUs=
@@ -628,12 +696,15 @@ github.com/kr/text v0.2.0/go.mod h1:eLer722TekiGuMkidMxC/pM04lWEeraHUUmBw8l2grE=
github.com/linuxkit/virtsock v0.0.0-20201010232012-f8cee7dfc7a3/go.mod h1:3r6x7q95whyfWQpmGZTu3gk3v2YkMi05HEzl7Tf7YEo=
github.com/lyft/protoc-gen-star v0.5.3/go.mod h1:V0xaHgaf5oCCqmcxYcWiDfTiKsZsRc87/1qhoTACD8w=
github.com/magefile/mage v1.11.0/go.mod h1:z5UZb/iS3GoOSn0JgWuiw7dxlurVYTu+/jHXqQg881A=
github.com/magefile/mage v1.12.1/go.mod h1:z5UZb/iS3GoOSn0JgWuiw7dxlurVYTu+/jHXqQg881A=
github.com/magiconair/properties v1.8.0/go.mod h1:PppfXfuXeibc/6YijjN8zIbojt8czPbwD3XqdrwzmxQ=
github.com/magiconair/properties v1.8.1/go.mod h1:PppfXfuXeibc/6YijjN8zIbojt8czPbwD3XqdrwzmxQ=
github.com/magiconair/properties v1.8.5/go.mod h1:y3VJvCyxH9uVvJTWEGAELF3aiYNyPKd5NZ3oSwXrF60=
github.com/mailru/easyjson v0.0.0-20160728113105-d5b7844b561a/go.mod h1:C1wdFJiN94OJF2b5HbByQZoLdCWB1Yqtg26g4irojpc=
github.com/mailru/easyjson v0.0.0-20190614124828-94de47d64c63/go.mod h1:C1wdFJiN94OJF2b5HbByQZoLdCWB1Yqtg26g4irojpc=
github.com/mailru/easyjson v0.0.0-20190626092158-b2ccc519800e/go.mod h1:C1wdFJiN94OJF2b5HbByQZoLdCWB1Yqtg26g4irojpc=
github.com/mailru/easyjson v0.7.0/go.mod h1:KAzv3t3aY1NaHWoQz1+4F1ccyAH66Jk7yos7ldAVICs=
github.com/mailru/easyjson v0.7.6/go.mod h1:xzfreul335JAWq5oZzymOObrkdz5UnU4kGfJJLY9Nlc=
github.com/manifoldco/promptui v0.9.0/go.mod h1:ka04sppxSGFAtxX0qhlYQjISsg9mR4GWtQEhdbn6Pgg=
github.com/marstr/guid v1.1.0/go.mod h1:74gB1z2wpxxInTG6yaqA7KrtM0NZ+RbrcqDvYHefzho=
github.com/matryer/is v1.2.0/go.mod h1:2fLPjFQM9rhQ15aVEtbuwhJinnOqrmgXPNdZsdwlWXA=
@@ -663,24 +734,34 @@ github.com/maxbrunsfeld/counterfeiter/v6 v6.2.2/go.mod h1:eD9eIE7cdwcMi9rYluz88J
github.com/miekg/dns v1.0.14/go.mod h1:W1PPwlIAgtquWBMBEV9nkV9Cazfe8ScdGz/Lj7v3Nrg=
github.com/miekg/dns v1.1.26/go.mod h1:bPDLeHnStXmXAq1m/Ch/hvfNHr14JKNPMBo3VZKjuso=
github.com/miekg/dns v1.1.41/go.mod h1:p6aan82bvRIyn+zDIv9xYNUpwa73JcSh9BKwknJysuI=
github.com/miekg/pkcs11 v1.0.3 h1:iMwmD7I5225wv84WxIG/bmxz9AXjWvTWIbM/TYHvWtw=
github.com/miekg/pkcs11 v1.0.3/go.mod h1:XsNlhZGX73bx86s2hdc/FuaLm2CPZJemRLMA+WTFxgs=
github.com/miekg/pkcs11 v1.1.1 h1:Ugu9pdy6vAYku5DEpVWVFPYnzV+bxB+iRdbuFSu7TvU=
github.com/miekg/pkcs11 v1.1.1/go.mod h1:XsNlhZGX73bx86s2hdc/FuaLm2CPZJemRLMA+WTFxgs=
github.com/mistifyio/go-zfs v2.1.2-0.20190413222219-f784269be439+incompatible h1:aKW/4cBs+yK6gpqU3K/oIwk9Q/XICqd3zOX/UFuvqmk=
github.com/mistifyio/go-zfs v2.1.2-0.20190413222219-f784269be439+incompatible/go.mod h1:8AuVvqP/mXw1px98n46wfvcGfQ4ci2FwoAjKYxuo3Z4=
github.com/mitchellh/cli v1.0.0/go.mod h1:hNIlj7HEI86fIcpObd7a0FcrxTWetlwJDGcceTlRvqc=
github.com/mitchellh/cli v1.1.0/go.mod h1:xcISNoH86gajksDmfB23e/pu+B+GeFRMYmoHXxx3xhI=
github.com/mitchellh/go-homedir v1.0.0/go.mod h1:SfyaCUpYCn1Vlf4IUYiD9fPX4A5wJrkLzIz1N1q0pr0=
github.com/mitchellh/go-homedir v1.1.0/go.mod h1:SfyaCUpYCn1Vlf4IUYiD9fPX4A5wJrkLzIz1N1q0pr0=
github.com/mitchellh/go-testing-interface v1.0.0/go.mod h1:kRemZodwjscx+RGhAo8eIhFbs2+BFgRtFPeD/KE+zxI=
github.com/mitchellh/gox v0.4.0/go.mod h1:Sd9lOJ0+aimLBi73mGofS1ycjY8lL3uZM3JPS42BGNg=
github.com/mitchellh/iochan v1.0.0/go.mod h1:JwYml1nuB7xOzsp52dPpHFffvOCDupsG0QubkSMEySY=
github.com/mitchellh/mapstructure v0.0.0-20160808181253-ca63d7c062ee/go.mod h1:FVVH3fgwuzCH5S8UJGiWEs2h04kUh9fWfEaFds41c1Y=
github.com/mitchellh/mapstructure v1.1.2/go.mod h1:FVVH3fgwuzCH5S8UJGiWEs2h04kUh9fWfEaFds41c1Y=
github.com/mitchellh/mapstructure v1.4.3/go.mod h1:bFUtVrKA4DC2yAKiSyO/QUcy7e+RRV2QTWOzhPopBRo=
github.com/mitchellh/osext v0.0.0-20151018003038-5e2d6d41470f/go.mod h1:OkQIRizQZAeMln+1tSwduZz7+Af5oFlKirV/MSYes2A=
github.com/moby/locker v1.0.1/go.mod h1:S7SDdo5zpBK84bzzVlKr2V0hz+7x9hWbYC/kq7oQppc=
github.com/moby/spdystream v0.2.0/go.mod h1:f7i0iNDQJ059oMTcWxx8MA/zKFIuD/lY+0GqbN2Wy8c=
github.com/moby/sys/mountinfo v0.4.0/go.mod h1:rEr8tzG/lsIZHBtN/JjGG+LMYx9eXgW2JI+6q0qou+A=
github.com/moby/sys/mountinfo v0.4.1/go.mod h1:rEr8tzG/lsIZHBtN/JjGG+LMYx9eXgW2JI+6q0qou+A=
github.com/moby/sys/mountinfo v0.5.0 h1:2Ks8/r6lopsxWi9m58nlwjaeSzUX9iiL1vj5qB/9ObI=
github.com/moby/sys/mountinfo v0.5.0/go.mod h1:3bMD3Rg+zkqx8MRYPi7Pyb0Ie97QEBmdxbhnCLlSvSU=
github.com/moby/sys/mountinfo v0.6.0 h1:gUDhXQx58YNrpHlK4nSL+7y2pxFZkUcXqzFDKWdC0Oo=
github.com/moby/sys/mountinfo v0.6.0/go.mod h1:3bMD3Rg+zkqx8MRYPi7Pyb0Ie97QEBmdxbhnCLlSvSU=
github.com/moby/sys/signal v0.6.0/go.mod h1:GQ6ObYZfqacOwTtlXvcmh9A26dVRul/hbOZn88Kg8Tg=
github.com/moby/sys/symlink v0.1.0/go.mod h1:GGDODQmbFOjFsXvfLVn3+ZRxkch54RkSiGqsZeMYowQ=
github.com/moby/sys/symlink v0.2.0/go.mod h1:7uZVF2dqJjG/NsClqul95CqKOBRQyYSNnJ6BMgR/gFs=
github.com/moby/term v0.0.0-20200312100748-672ec06f55cd/go.mod h1:DdlQx2hp0Ss5/fLikoLlEeIYiATotOjgB//nb973jeo=
github.com/moby/term v0.0.0-20210610120745-9d4ed1856297/go.mod h1:vgPCkQMyxTZ7IDy8SXRufE172gr8+K/JE/7hHFxHW3A=
github.com/moby/term v0.0.0-20210619224110-3f7ff695adc6 h1:dcztxKSvZ4Id8iPpHERQBbIJfabdt4wUm5qy3wOL2Zc=
github.com/moby/term v0.0.0-20210619224110-3f7ff695adc6/go.mod h1:E2VnQOmVuvZB6UYnnDB0qG5Nq/1tD9acaOpo6xmt0Kw=
github.com/modern-go/concurrent v0.0.0-20180228061459-e0a39a4cb421/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=
@@ -696,6 +777,7 @@ github.com/mrunalp/fileutils v0.5.0/go.mod h1:M1WthSahJixYnrXQl/DFQuteStB1weuxD2
github.com/munnerz/goautoneg v0.0.0-20120707110453-a547fc61f48d/go.mod h1:+n7T8mK8HuQTcFwEeznm/DIxMOiR9yIdICNftLE1DvQ=
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822/go.mod h1:+n7T8mK8HuQTcFwEeznm/DIxMOiR9yIdICNftLE1DvQ=
github.com/mwitkow/go-conntrack v0.0.0-20161129095857-cc309e4a2223/go.mod h1:qRWi+5nqEBWmkhHvq77mSJWrCKwh8bxhgT7d/eI7P4U=
github.com/mwitkow/go-conntrack v0.0.0-20190716064945-2f068394615f/go.mod h1:qRWi+5nqEBWmkhHvq77mSJWrCKwh8bxhgT7d/eI7P4U=
github.com/mxk/go-flowrate v0.0.0-20140419014527-cca7078d478f/go.mod h1:ZdcZmHo+o7JKHSa8/e818NopupXU1YMK5fe1lsApnBw=
github.com/ncw/swift v1.0.47/go.mod h1:23YIA4yWVnGwv2dQlN4bB7egfYX6YLn0Yo/S6zZO/ZM=
github.com/niemeyer/pretty v0.0.0-20200227124842-a10e7caefd8e/go.mod h1:zD1mROLANZcx1PVRCS0qkT7pwLkGfwJo4zjcN/Tysno=
@@ -714,6 +796,7 @@ github.com/onsi/ginkgo v1.11.0/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+
github.com/onsi/ginkgo v1.12.0/go.mod h1:oUhWkIvk5aDxtKvDDuw8gItl8pKl42LzjC9KZE0HfGg=
github.com/onsi/ginkgo v1.12.1/go.mod h1:zj2OWP4+oCPe1qIXoGWkgMRwljMUYCdkwsT2108oapk=
github.com/onsi/ginkgo v1.13.0/go.mod h1:+REjRxOmWfHCjfv9TTWB1jD1Frx4XydAD3zm1lskyM0=
github.com/onsi/ginkgo v1.14.0/go.mod h1:iSB4RoI2tjJc9BBv4NKIKWKya62Rps+oPG/Lv9klQyY=
github.com/onsi/ginkgo v1.16.4/go.mod h1:dX+/inL/fNMqNlz0e9LfyB9TswhZpCVdJM/Z6Vvnwo0=
github.com/onsi/ginkgo v1.16.5 h1:8xi0RTUf59SOSfEtZMvwTvXYMzG4gV23XVHOZiXNtnE=
github.com/onsi/ginkgo v1.16.5/go.mod h1:+E8gABHa3K6zRBolWtd+ROzc/U5bkGt0FwiG042wbpU=
@@ -739,6 +822,7 @@ github.com/opencontainers/go-digest v1.0.0 h1:apOUWs51W5PlhuyGyz9FCeeBIOUDA/6nW8
github.com/opencontainers/go-digest v1.0.0/go.mod h1:0JzlMkj0TRzQZfJkVvzbP0HBR3IKzErnv2BNG4W4MAM=
github.com/opencontainers/image-spec v1.0.0/go.mod h1:BtxoFyWECRxE4U/7sNtV5W15zMzWCbyJoFRP3s7yZA0=
github.com/opencontainers/image-spec v1.0.1/go.mod h1:BtxoFyWECRxE4U/7sNtV5W15zMzWCbyJoFRP3s7yZA0=
github.com/opencontainers/image-spec v1.0.2-0.20211117181255-693428a734f5/go.mod h1:BtxoFyWECRxE4U/7sNtV5W15zMzWCbyJoFRP3s7yZA0=
github.com/opencontainers/image-spec v1.0.2/go.mod h1:BtxoFyWECRxE4U/7sNtV5W15zMzWCbyJoFRP3s7yZA0=
github.com/opencontainers/image-spec v1.0.3-0.20211202193544-a5463b7f9c84 h1:g47eG1u/gw0JB7mZ88TcHKCmsy7sWUNZD8ZS9Jhi0O8=
github.com/opencontainers/image-spec v1.0.3-0.20211202193544-a5463b7f9c84/go.mod h1:Qnt1q4cjDNQI9bT832ziho5Iw2BhK8o1KwLOwW56VP4=
@@ -766,12 +850,15 @@ github.com/opencontainers/selinux v1.8.0/go.mod h1:RScLhm78qiWa2gbVCcGkC7tCGdgk3
github.com/opencontainers/selinux v1.8.2/go.mod h1:MUIHuUEvKB1wtJjQdOyYRgOnLD2xAPP8dBsCoU0KuF8=
github.com/opencontainers/selinux v1.10.0 h1:rAiKF8hTcgLI3w0DHm6i0ylVVcOrlgR1kK99DRLDhyU=
github.com/opencontainers/selinux v1.10.0/go.mod h1:2i0OySw99QjzBBQByd1Gr9gSjvuho1lHsJxIJ3gGbJI=
github.com/ostreedev/ostree-go v0.0.0-20190702140239-759a8c1ac913 h1:TnbXhKzrTOyuvWrjI8W6pcoI9XPbLHFXCdN2dtUw7Rw=
github.com/opentracing/opentracing-go v1.1.0/go.mod h1:UkNAQd3GIcIGf0SeVgPpRdFStlNbqXla1AfSYxPUl2o=
github.com/ostreedev/ostree-go v0.0.0-20190702140239-759a8c1ac913/go.mod h1:J6OG6YJVEWopen4avK3VNQSnALmmjvniMmni/YFYAwc=
github.com/ostreedev/ostree-go v0.0.0-20210805093236-719684c64e4f h1:/UDgs8FGMqwnHagNDPGOlts35QkhAZ8by3DR7nMih7M=
github.com/ostreedev/ostree-go v0.0.0-20210805093236-719684c64e4f/go.mod h1:J6OG6YJVEWopen4avK3VNQSnALmmjvniMmni/YFYAwc=
github.com/pascaldekloe/goe v0.0.0-20180627143212-57f6aae5913c/go.mod h1:lzWF7FIEvWOWxwDKqyGYQf6ZUaNfKdP144TG7ZOy1lc=
github.com/pascaldekloe/goe v0.1.0/go.mod h1:lzWF7FIEvWOWxwDKqyGYQf6ZUaNfKdP144TG7ZOy1lc=
github.com/pelletier/go-toml v1.2.0/go.mod h1:5z9KED0ma1S8pY6P1sdut58dfprrGBbd/94hg7ilaic=
github.com/pelletier/go-toml v1.8.1/go.mod h1:T2/BmBdy8dvIRq1a/8aqjN41wvWlN4lrapLU/GW4pbc=
github.com/pelletier/go-toml v1.9.3/go.mod h1:u1nR/EPcESfeI/szUZKdtJ0xRNbUoANCkoOuaOx1Y+c=
github.com/pelletier/go-toml v1.9.4/go.mod h1:u1nR/EPcESfeI/szUZKdtJ0xRNbUoANCkoOuaOx1Y+c=
github.com/peterbourgon/diskv v2.0.1+incompatible/go.mod h1:uqqh8zWWbv1HBMNONnaR/tNboyR3/BZd58JJSHlUSCU=
github.com/pkg/errors v0.8.0/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
@@ -793,8 +880,10 @@ github.com/prometheus/client_golang v0.9.3/go.mod h1:/TN21ttK/J9q6uSwhBd54HahCDf
github.com/prometheus/client_golang v1.0.0/go.mod h1:db9x61etRT2tGnBNRi70OPL5FsnadC4Ky3P0J6CfImo=
github.com/prometheus/client_golang v1.1.0/go.mod h1:I1FGZT9+L76gKKOs5djB6ezCbFQP1xR9D75/vuwEF3g=
github.com/prometheus/client_golang v1.4.0/go.mod h1:e9GMxYsXl05ICDXkRhurwBS4Q3OK1iX/F2sw+iXX5zU=
github.com/prometheus/client_golang v1.7.1 h1:NTGy1Ja9pByO+xAeH/qiWnLrKtr3hJPNjaVUwnjpdpA=
github.com/prometheus/client_golang v1.7.1/go.mod h1:PY5Wy2awLA44sXw4AOSfFBetzPP4j5+D6mVACh+pe2M=
github.com/prometheus/client_golang v1.11.0/go.mod h1:Z6t4BnS23TR94PD6BsDNk8yVqroYurpAkEiz0P2BEV0=
github.com/prometheus/client_golang v1.11.1 h1:+4eQaD7vAZ6DsfsxB15hbE0odUjGI5ARs9yskGu1v4s=
github.com/prometheus/client_golang v1.11.1/go.mod h1:Z6t4BnS23TR94PD6BsDNk8yVqroYurpAkEiz0P2BEV0=
github.com/prometheus/client_model v0.0.0-20171117100541-99fa1f4be8e5/go.mod h1:MbSGuTsp3dbXC40dX6PRTWyKYBIrTGTE9sqQNg2J8bo=
github.com/prometheus/client_model v0.0.0-20180712105110-5c3871d89910/go.mod h1:MbSGuTsp3dbXC40dX6PRTWyKYBIrTGTE9sqQNg2J8bo=
github.com/prometheus/client_model v0.0.0-20190129233127-fd36f4220a90/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA=
@@ -807,8 +896,10 @@ github.com/prometheus/common v0.4.0/go.mod h1:TNfzLD0ON7rHzMJeJkieUDPYmFC7Snx/y8
github.com/prometheus/common v0.4.1/go.mod h1:TNfzLD0ON7rHzMJeJkieUDPYmFC7Snx/y86RQel1bk4=
github.com/prometheus/common v0.6.0/go.mod h1:eBmuwkDJBwy6iBfxCBob6t6dR6ENT/y+J+Zk0j9GMYc=
github.com/prometheus/common v0.9.1/go.mod h1:yhUN8i9wzaXS3w1O07YhxHEBxD+W35wd8bs7vj7HSQ4=
github.com/prometheus/common v0.10.0 h1:RyRA7RzGXQZiW+tGMr7sxa85G1z0yOpM1qq5c8lNawc=
github.com/prometheus/common v0.10.0/go.mod h1:Tlit/dnDKsSWFlCLTWaA1cyBgKHSMdTB80sz/V91rCo=
github.com/prometheus/common v0.26.0/go.mod h1:M7rCNAaPfAosfx8veZJCuw84e35h3Cfd9VFqTh1DIvc=
github.com/prometheus/common v0.30.0 h1:JEkYlQnpzrzQFxi6gnukFPdQ+ac82oRhzMcIduJu/Ug=
github.com/prometheus/common v0.30.0/go.mod h1:vu+V0TpY+O6vW9J44gczi3Ap/oXXR10b+M/gUGO4Hls=
github.com/prometheus/procfs v0.0.0-20180125133057-cb4147076ac7/go.mod h1:c3At6R/oaqEKCNdg8wHV1ftS6bRYblBhIjjI8uT2IGk=
github.com/prometheus/procfs v0.0.0-20181005140218-185b4288413d/go.mod h1:c3At6R/oaqEKCNdg8wHV1ftS6bRYblBhIjjI8uT2IGk=
github.com/prometheus/procfs v0.0.0-20190507164030-5867b95ac084/go.mod h1:TjEm7ze935MbeOT/UhFTIMYKhuLP4wbCsTZCD3I8kEA=
@@ -819,8 +910,9 @@ github.com/prometheus/procfs v0.0.5/go.mod h1:4A/X28fw3Fc593LaREMrKMqOKvUAntwMDa
github.com/prometheus/procfs v0.0.8/go.mod h1:7Qr8sr6344vo1JqZ6HhLceV9o3AJ1Ff+GxbHq6oeK9A=
github.com/prometheus/procfs v0.1.3/go.mod h1:lV6e/gmhEcM9IjHGsFOCxxuZ+z1YqCvr4OA4YeYWdaU=
github.com/prometheus/procfs v0.2.0/go.mod h1:lV6e/gmhEcM9IjHGsFOCxxuZ+z1YqCvr4OA4YeYWdaU=
github.com/prometheus/procfs v0.6.0 h1:mxy4L2jP6qMonqmq+aTtOx1ifVWUgG/TAmntgbh3xv4=
github.com/prometheus/procfs v0.6.0/go.mod h1:cz+aTbrPOrUb4q7XlbU9ygM+/jj0fzG6c1xBZuNvfVA=
github.com/prometheus/procfs v0.7.3 h1:4jVXhlkAyzOScmCkXBTOLRLTz8EeU+eyjrwB/EPq0VU=
github.com/prometheus/procfs v0.7.3/go.mod h1:cz+aTbrPOrUb4q7XlbU9ygM+/jj0fzG6c1xBZuNvfVA=
github.com/prometheus/tsdb v0.7.1/go.mod h1:qhTCs0VvXwvX/y3TZrWD7rabWM+ijKTux40TwIPHuXU=
github.com/rivo/uniseg v0.2.0 h1:S1pD9weZBuJdFmowNwbpi7BJ8TNftyUImj/0WQi72jY=
github.com/rivo/uniseg v0.2.0/go.mod h1:J6wj4VEh+S6ZtnVlnTBMWIodfgj8LQOQFoIToxlJtxc=
@@ -861,7 +953,9 @@ github.com/sirupsen/logrus v1.8.1 h1:dJKuHgqk1NNQlqoA6BTlM1Wf9DOH3NBjQyu0h9+AZZE
github.com/sirupsen/logrus v1.8.1/go.mod h1:yWOB1SBYBC5VeMP7gHvWumXLIWorT60ONWic61uBYv0=
github.com/smartystreets/assertions v0.0.0-20180927180507-b2de0cb4f26d/go.mod h1:OnSkiWE9lh6wB0YB77sQom3nweQdgAjqCqsofrRNTgc=
github.com/smartystreets/goconvey v0.0.0-20190330032615-68dc04aab96a/go.mod h1:syvi0/a8iFYH4r/RixwvyeAJjdLS9QV7WQ/tjFTllLA=
github.com/smartystreets/goconvey v1.6.4/go.mod h1:syvi0/a8iFYH4r/RixwvyeAJjdLS9QV7WQ/tjFTllLA=
github.com/soheilhy/cmux v0.1.4/go.mod h1:IM3LyeVVIOuxMH7sFAkER9+bJ4dT7Ms6E4xg4kGIyLM=
github.com/soheilhy/cmux v0.1.5/go.mod h1:T7TcVDs9LWfQgPlPsdngu6I6QIoyIFZDDC6sNE1GqG0=
github.com/spaolacci/murmur3 v0.0.0-20180118202830-f09979ecbc72/go.mod h1:JwIasOWyU6f++ZhiEuf87xNszmSA2myDM2Kzu9HwQUA=
github.com/spf13/afero v1.1.2/go.mod h1:j4pytiNVoe2o6bmDsKpLACNPDBIoEAkihy7loJ1B0CQ=
github.com/spf13/afero v1.2.2/go.mod h1:9ZxEEn6pIJ8Rxe320qSDBk6AsU0r9pR7Q4OcevTdifk=
@@ -872,8 +966,10 @@ github.com/spf13/cast v1.4.1/go.mod h1:Qx5cxh0v+4UWYiBimWS+eyWzqEqokIECu5etghLkU
github.com/spf13/cobra v0.0.2-0.20171109065643-2da4a54c5cee/go.mod h1:1l0Ry5zgKvJasoi3XT1TypsSe7PqH0Sj9dhYf7v3XqQ=
github.com/spf13/cobra v0.0.3/go.mod h1:1l0Ry5zgKvJasoi3XT1TypsSe7PqH0Sj9dhYf7v3XqQ=
github.com/spf13/cobra v1.0.0/go.mod h1:/6GTrnGXV9HjY+aR4k0oJ5tcvakLuG6EuKReYlHNrgE=
github.com/spf13/cobra v1.3.0 h1:R7cSvGu+Vv+qX0gW5R/85dx2kmmJT5z5NM8ifdYjdn0=
github.com/spf13/cobra v1.1.3/go.mod h1:pGADOWyqRD/YMrPZigI/zbliZ2wVD/23d+is3pSWzOo=
github.com/spf13/cobra v1.3.0/go.mod h1:BrRVncBjOJa/eUcVVm9CE+oC6as8k+VYr4NY7WCi9V4=
github.com/spf13/cobra v1.4.0 h1:y+wJpx64xcgO1V+RcnwW0LEHxTKRi2ZDPSBjWnrg88Q=
github.com/spf13/cobra v1.4.0/go.mod h1:Wo4iy3BUC+X2Fybo0PDqwJIv3dNRiZLHQymsfxlB84g=
github.com/spf13/jwalterweatherman v1.0.0/go.mod h1:cQK4TGJAtQXfYWX+Ddv3mKDzgVb68N+wFjFa4jdeBTo=
github.com/spf13/jwalterweatherman v1.1.0/go.mod h1:aNWZUN0dPAAO/Ljvb5BEdw96iTZ0EXowPYD95IqWIGo=
github.com/spf13/pflag v0.0.0-20170130214245-9ff6c6923cff/go.mod h1:DYY7MBk1bdzusC3SYhjObp+wFpr4gzcvqqNjLnInEg4=
@@ -883,9 +979,11 @@ github.com/spf13/pflag v1.0.3/go.mod h1:DYY7MBk1bdzusC3SYhjObp+wFpr4gzcvqqNjLnIn
github.com/spf13/pflag v1.0.5 h1:iy+VFUOCP1a+8yFto/drg2CJ5u0yRoB7fZw3DKv/JXA=
github.com/spf13/pflag v1.0.5/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An2Bg=
github.com/spf13/viper v1.4.0/go.mod h1:PTJ7Z/lr49W6bUbkmS1V3by4uWynFiR9p7+dSq/yZzE=
github.com/spf13/viper v1.7.0/go.mod h1:8WkrPz2fc9jxqZNCJI/76HCieCp4Q8HaLFoCha5qpdg=
github.com/spf13/viper v1.10.0/go.mod h1:SoyBPwAtKDzypXNDFKN5kzH7ppppbGZtls1UpIy5AsM=
github.com/stefanberger/go-pkcs11uri v0.0.0-20201008174630-78d3cae3a980 h1:lIOOHPEbXzO3vnmx2gok1Tfs31Q8GQqKLc8vVqyQq/I=
github.com/stefanberger/go-pkcs11uri v0.0.0-20201008174630-78d3cae3a980/go.mod h1:AO3tvPzVZ/ayst6UlUKUv6rcPQInYe3IknH3jYhAKu8=
github.com/stoewer/go-strcase v1.2.0/go.mod h1:IBiWB2sKIp3wVVQ3Y035++gc+knqhUQag1KpM8ahLw8=
github.com/stretchr/objx v0.0.0-20180129172003-8a3f7159479f/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/objx v0.1.1/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
@@ -896,12 +994,14 @@ github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UV
github.com/stretchr/testify v1.4.0/go.mod h1:j7eGeouHqKxXV5pUuKE4zz7dFj8WfuZ+81PSLYec5m4=
github.com/stretchr/testify v1.5.1/go.mod h1:5W2xD1RspED5o8YsWQXVCued0rvSQ+mT+I5cxcmMvtA=
github.com/stretchr/testify v1.6.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/stretchr/testify v1.7.0 h1:nwc3DEeHmmLAfoZucVR881uASk0Mfjw8xYJ99tb5CcY=
github.com/stretchr/testify v1.7.0/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/stretchr/testify v1.7.1 h1:5TQK59W5E3v0r2duFAb7P95B6hEeOyEnHRa8MjYSMTY=
github.com/stretchr/testify v1.7.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/subosito/gotenv v1.2.0/go.mod h1:N0PQaV/YGNqwC0u51sEeR/aUtSLEXKX9iv69rRypqCw=
github.com/sylabs/release-tools v0.1.0/go.mod h1:pqP/z/11/rYMQ0OM/Nn7TxGijw7KfZwW9UolD/J1TUo=
github.com/sylabs/sif/v2 v2.3.1 h1:NHoc/rZpnOS05etmT+j8IJOZP2Cc8zHHG8rKSVosvZs=
github.com/sylabs/sif/v2 v2.3.1/go.mod h1:NnvveH62GiibimL00MrI6YYcZfb7DnZMcRo/40giY+0=
github.com/sylabs/sif/v2 v2.3.2 h1:Kj60dUcE3TSM8Px4TaIbX7PUafB1QGhUi70Fz5Gf7iU=
github.com/sylabs/sif/v2 v2.3.2/go.mod h1:IrLX2pzmQ2O4qgv5iy3HdKJcBNYds9DTMd9Je8A9tX4=
github.com/syndtr/gocapability v0.0.0-20170704070218-db04d3cc01c8/go.mod h1:hkRG7XYTFWNJGYcbNJQlaLq0fg1yr4J4t/NcTQtrfww=
github.com/syndtr/gocapability v0.0.0-20180916011248-d98352740cb2/go.mod h1:hkRG7XYTFWNJGYcbNJQlaLq0fg1yr4J4t/NcTQtrfww=
github.com/syndtr/gocapability v0.0.0-20200815063812-42c35b437635 h1:kdXcSzyDtseVEc4yCz2qF8ZrQvIDBJLl4S1c3GCXmoI=
@@ -911,7 +1011,9 @@ github.com/tchap/go-patricia v2.3.0+incompatible h1:GkY4dP3cEfEASBPPkWd+AmjYxhmD
github.com/tchap/go-patricia v2.3.0+incompatible/go.mod h1:bmLyhP68RS6kStMGxByiQ23RP/odRBOTVjwp2cDyi6I=
github.com/tmc/grpc-websocket-proxy v0.0.0-20170815181823-89b8d40f7ca8/go.mod h1:ncp9v5uamzpCO7NfCPTXjqaC+bZgJeR0sMTm6dMHP7U=
github.com/tmc/grpc-websocket-proxy v0.0.0-20190109142713-0ad062ec5ee5/go.mod h1:ncp9v5uamzpCO7NfCPTXjqaC+bZgJeR0sMTm6dMHP7U=
github.com/tmc/grpc-websocket-proxy v0.0.0-20201229170055-e5319fda7802/go.mod h1:ncp9v5uamzpCO7NfCPTXjqaC+bZgJeR0sMTm6dMHP7U=
github.com/tv42/httpunix v0.0.0-20150427012821-b75d8614f926/go.mod h1:9ESjWnEqriFuLhtthL60Sar/7RFoluCcXsuvEwTV5KM=
github.com/tv42/httpunix v0.0.0-20191220191345-2ba4b9c3382c/go.mod h1:hzIxponao9Kjc7aWznkXaL4U4TWaDSs8zcsY4Ka08nM=
github.com/ugorji/go v1.1.4/go.mod h1:uQMGLiO92mf5W77hV/PUCpI3pbzQx3CRekS0kk+RGrc=
github.com/ulikunitz/xz v0.5.8/go.mod h1:nbz6k7qbPmH4IRqmfOplQw/tblSgqTqBwxkY0oWt/14=
github.com/ulikunitz/xz v0.5.10 h1:t92gobL9l3HE202wg3rlk19F6X+JOxl9BBrCCMYEYd8=
@@ -938,8 +1040,9 @@ github.com/willf/bitset v1.1.11/go.mod h1:83CECat5yLh5zVOf4P1ErAgKA5UDvKtgyUABdr
github.com/xanzy/ssh-agent v0.3.0/go.mod h1:3s9xbODqPuuhK9JV1R321M/FlMZSBvE5aY6eAcqrDh0=
github.com/xanzy/ssh-agent v0.3.1/go.mod h1:QIE4lCeL7nkC25x+yA3LBIYfwCc1TFziCtG7cBAac6w=
github.com/xeipuuv/gojsonpointer v0.0.0-20180127040702-4e3ac2762d5f/go.mod h1:N2zxlSyiKSe5eX1tZViRH5QA0qijqEDrYZiPEAiq3wU=
github.com/xeipuuv/gojsonpointer v0.0.0-20190809123943-df4f5c81cb3b h1:6cLsL+2FW6dRAdl5iMtHgRogVCff0QpRi9653YmdcJA=
github.com/xeipuuv/gojsonpointer v0.0.0-20190809123943-df4f5c81cb3b/go.mod h1:N2zxlSyiKSe5eX1tZViRH5QA0qijqEDrYZiPEAiq3wU=
github.com/xeipuuv/gojsonpointer v0.0.0-20190905194746-02993c407bfb h1:zGWFAtiMcyryUHoUjUJX0/lt1H2+i2Ka2n+D3DImSNo=
github.com/xeipuuv/gojsonpointer v0.0.0-20190905194746-02993c407bfb/go.mod h1:N2zxlSyiKSe5eX1tZViRH5QA0qijqEDrYZiPEAiq3wU=
github.com/xeipuuv/gojsonreference v0.0.0-20180127040603-bd5ef7bd5415 h1:EzJWgHovont7NscjpAxXsDA8S8BMYve8Y5+7cuRE7R0=
github.com/xeipuuv/gojsonreference v0.0.0-20180127040603-bd5ef7bd5415/go.mod h1:GwrjFmJcFw6At/Gs6z4yjiIwzuJ1/+UwLxMQDVQXShQ=
github.com/xeipuuv/gojsonschema v0.0.0-20180618132009-1d523034197f/go.mod h1:5yf86TLmAcydyeJq5YvxkGPE2fm/u4myDekKRoLuqhs=
@@ -961,9 +1064,16 @@ go.etcd.io/bbolt v1.3.5/go.mod h1:G5EMThwa9y8QZGBClrRx5EY+Yw9kAhnjy3bSjsnlVTQ=
go.etcd.io/bbolt v1.3.6 h1:/ecaJf0sk1l4l6V4awd65v2C3ILy7MSj+s/x1ADCIMU=
go.etcd.io/bbolt v1.3.6/go.mod h1:qXsaaIqmgQH0T+OPdb99Bf+PKfBBQVAdyD6TY9G8XM4=
go.etcd.io/etcd v0.5.0-alpha.5.0.20200910180754-dd1b699fc489/go.mod h1:yVHk9ub3CSBatqGNg7GRmsnfLWtoW60w4eDYfh7vHDg=
go.etcd.io/etcd/api/v3 v3.5.0/go.mod h1:cbVKeC6lCfl7j/8jBhAK6aIYO9XOjdptoxU/nLQcPvs=
go.etcd.io/etcd/api/v3 v3.5.1/go.mod h1:cbVKeC6lCfl7j/8jBhAK6aIYO9XOjdptoxU/nLQcPvs=
go.etcd.io/etcd/client/pkg/v3 v3.5.0/go.mod h1:IJHfcCEKxYu1Os13ZdwCwIUTUVGYTSAM3YSwc9/Ac1g=
go.etcd.io/etcd/client/pkg/v3 v3.5.1/go.mod h1:IJHfcCEKxYu1Os13ZdwCwIUTUVGYTSAM3YSwc9/Ac1g=
go.etcd.io/etcd/client/v2 v2.305.0/go.mod h1:h9puh54ZTgAKtEbut2oe9P4L/oqKCVB6xsXlzd7alYQ=
go.etcd.io/etcd/client/v2 v2.305.1/go.mod h1:pMEacxZW7o8pg4CrFE7pquyCJJzZvkvdD2RibOCCCGs=
go.etcd.io/etcd/client/v3 v3.5.0/go.mod h1:AIKXXVX/DQXtfTEqBryiLTUXwON+GuvO6Z7lLS/oTh0=
go.etcd.io/etcd/pkg/v3 v3.5.0/go.mod h1:UzJGatBQ1lXChBkQF0AuAtkRQMYnHubxAEYIrC3MSsE=
go.etcd.io/etcd/raft/v3 v3.5.0/go.mod h1:UFOHSIvO/nKwd4lhkwabrTD3cqW5yVyYYf/KlD00Szc=
go.etcd.io/etcd/server/v3 v3.5.0/go.mod h1:3Ah5ruV+M+7RZr0+Y/5mNLwC+eQlni+mQmOVdCRJoS4=
go.mozilla.org/pkcs7 v0.0.0-20200128120323-432b2356ecb1 h1:A/5uWzF44DlIgdm/PQFwfMkW0JX+cIcQi/SwLAmZP5M=
go.mozilla.org/pkcs7 v0.0.0-20200128120323-432b2356ecb1/go.mod h1:SNgMg+EgDFwmvSmLRTNKC5fegJjB7v23qTQ0XLGUNHk=
go.opencensus.io v0.21.0/go.mod h1:mSImk1erAIZhrmZN+AvHh14ztQfjbGwt4TtuofqLduU=
@@ -974,10 +1084,32 @@ go.opencensus.io v0.22.4/go.mod h1:yxeiOL68Rb0Xd1ddK5vPZ/oVn4vY4Ynel7k9FzqtOIw=
go.opencensus.io v0.22.5/go.mod h1:5pWMHQbX5EPX2/62yrJeAkowc+lfs/XD7Uxpq3pI6kk=
go.opencensus.io v0.23.0 h1:gqCw0LfLxScz8irSi8exQc7fyQ0fKQU/qnC/X8+V/1M=
go.opencensus.io v0.23.0/go.mod h1:XItmlyltB5F7CS4xOC1DcqMoFqwtC6OG2xF7mCv7P7E=
go.opentelemetry.io/contrib v0.20.0/go.mod h1:G/EtFaa6qaN7+LxqfIAT3GiZa7Wv5DTBUzl5H4LY0Kc=
go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.20.0/go.mod h1:oVGt1LRbBOBq1A5BQLlUg9UaU/54aiHw8cgjV3aWZ/E=
go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.28.0/go.mod h1:vEhqr0m4eTc+DWxfsXoXue2GBgV2uUwVznkGIHW/e5w=
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.20.0/go.mod h1:2AboqHi0CiIZU0qwhtUfCYD1GeUzvvIXWNkhDt7ZMG4=
go.opentelemetry.io/otel v0.20.0/go.mod h1:Y3ugLH2oa81t5QO+Lty+zXf8zC9L26ax4Nzoxm/dooo=
go.opentelemetry.io/otel v1.3.0/go.mod h1:PWIKzi6JCp7sM0k9yZ43VX+T345uNbAkDKwHVjb2PTs=
go.opentelemetry.io/otel/exporters/otlp v0.20.0/go.mod h1:YIieizyaN77rtLJra0buKiNBOm9XQfkPEKBeuhoMwAM=
go.opentelemetry.io/otel/exporters/otlp/internal/retry v1.3.0/go.mod h1:VpP4/RMn8bv8gNo9uK7/IMY4mtWLELsS+JIP0inH0h4=
go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.3.0/go.mod h1:hO1KLR7jcKaDDKDkvI9dP/FIhpmna5lkqPUQdEjFAM8=
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.3.0/go.mod h1:keUU7UfnwWTWpJ+FWnyqmogPa82nuU5VUANFq49hlMY=
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.3.0/go.mod h1:QNX1aly8ehqqX1LEa6YniTU7VY9I6R3X/oPxhGdTceE=
go.opentelemetry.io/otel/metric v0.20.0/go.mod h1:598I5tYlH1vzBjn+BTuhzTCSb/9debfNp6R3s7Pr1eU=
go.opentelemetry.io/otel/oteltest v0.20.0/go.mod h1:L7bgKf9ZB7qCwT9Up7i9/pn0PWIa9FqQ2IQ8LoxiGnw=
go.opentelemetry.io/otel/sdk v0.20.0/go.mod h1:g/IcepuwNsoiX5Byy2nNV0ySUF1em498m7hBWC279Yc=
go.opentelemetry.io/otel/sdk v1.3.0/go.mod h1:rIo4suHNhQwBIPg9axF8V9CA72Wz2mKF1teNrup8yzs=
go.opentelemetry.io/otel/sdk/export/metric v0.20.0/go.mod h1:h7RBNMsDJ5pmI1zExLi+bJK+Dr8NQCh0qGhm1KDnNlE=
go.opentelemetry.io/otel/sdk/metric v0.20.0/go.mod h1:knxiS8Xd4E/N+ZqKmUPf3gTTZ4/0TjTXukfxjzSTpHE=
go.opentelemetry.io/otel/trace v0.20.0/go.mod h1:6GjCW8zgDjwGHGa6GkyeB8+/5vjT16gUEi0Nf1iBdgw=
go.opentelemetry.io/otel/trace v1.3.0/go.mod h1:c/VDhno8888bvQYmbYLqe41/Ldmr/KKunbvWM4/fEjk=
go.opentelemetry.io/proto/otlp v0.7.0/go.mod h1:PqfVotwruBrMGOCsRd/89rSnXhoiJIqeYNgFYFoEGnI=
go.opentelemetry.io/proto/otlp v0.11.0/go.mod h1:QpEjXPrNQzrFDZgoTo49dgHR9RYRSrg3NAKnUGl9YpQ=
go.uber.org/atomic v1.3.2/go.mod h1:gD2HeocX3+yG+ygLZcrzQJaqmWj9AIm7n08wl/qW/PE=
go.uber.org/atomic v1.4.0/go.mod h1:gD2HeocX3+yG+ygLZcrzQJaqmWj9AIm7n08wl/qW/PE=
go.uber.org/atomic v1.7.0/go.mod h1:fEN4uk6kAWBTFdckzkM89CLk9XfWZrxpCo0nPH17wJc=
go.uber.org/goleak v1.1.10/go.mod h1:8a7PlsEVH3e/a/GLqe5IIrQx6GzcnRmZEufDUTk4A7A=
go.uber.org/goleak v1.1.12/go.mod h1:cwTWslyiVhfpKIDGSZEM2HlOvcqm+tG4zioyIeLoqMQ=
go.uber.org/multierr v1.1.0/go.mod h1:wR5kodmAFQ0UK8QlbwjlSNy0Z68gJhDJUG5sjR94q/0=
go.uber.org/multierr v1.6.0/go.mod h1:cdWPpRnG4AhwMwsgIHip0KRBQjJy5kYEpYjJxpXp9iU=
go.uber.org/zap v1.10.0/go.mod h1:vwi/ZaCAaUcBkycHslxD9B2zi4UTXhF60s6SWpuDF0Q=
@@ -998,6 +1130,7 @@ golang.org/x/crypto v0.0.0-20191011191535-87dc89f01550/go.mod h1:yigFU9vqHzYiE8U
golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
golang.org/x/crypto v0.0.0-20200728195943-123391ffb6de/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
golang.org/x/crypto v0.0.0-20201002170205-7f63de1d35b0/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
golang.org/x/crypto v0.0.0-20210220033148-5ea612d1eb83/go.mod h1:jdWPYTVW3xRLrWPugEBEK3UY2ZEsg3UU495nc5E+M+I=
golang.org/x/crypto v0.0.0-20210322153248-0c34fe9e7dc2/go.mod h1:T9bdIzuCu7OtxOm1hfPfRQxPLYneinmdGuTeoZ9dtd4=
golang.org/x/crypto v0.0.0-20210421170649-83a5a9bb288b/go.mod h1:T9bdIzuCu7OtxOm1hfPfRQxPLYneinmdGuTeoZ9dtd4=
golang.org/x/crypto v0.0.0-20210711020723-a769d52b0f97/go.mod h1:GvvjBRRGRdwPK5ydBHafDWAxML/pGHZbMvKqRZ5+Abc=
@@ -1047,6 +1180,7 @@ golang.org/x/net v0.0.0-20180906233101-161cd47e91fd/go.mod h1:mL1N/T3taQHkDXs73r
golang.org/x/net v0.0.0-20181011144130-49bb7cea24b1/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20181023162649-9b4f9f5ad519/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20181114220301-adae6a3d119a/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20181201002055-351d144fa1fc/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20181220203305-927f97764cc3/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20190108225652-1e06a53dbb7e/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20190213061140-3a22650c66bd/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
@@ -1084,6 +1218,7 @@ golang.org/x/net v0.0.0-20201006153459-a7d1128ccaa0/go.mod h1:sp8m0HH+o8qH0wwXwY
golang.org/x/net v0.0.0-20201021035429-f5854403a974/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
golang.org/x/net v0.0.0-20201031054903-ff519b6c9102/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
golang.org/x/net v0.0.0-20201110031124-69a78807bb2b/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
golang.org/x/net v0.0.0-20201202161906-c7110b5ffcbb/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
golang.org/x/net v0.0.0-20201209123823-ac852fbbde11/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg=
golang.org/x/net v0.0.0-20201224014010-6772e930b67b/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg=
golang.org/x/net v0.0.0-20210119194325-5f4716e94777/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg=
@@ -1094,11 +1229,16 @@ golang.org/x/net v0.0.0-20210405180319-a5a99cb37ef4/go.mod h1:p54w0d4576C0XHj96b
golang.org/x/net v0.0.0-20210410081132-afb366fc7cd1/go.mod h1:9tjilg8BloeKEkVJvy7fQ90B1CfIiPueXVOjqfkSzI8=
golang.org/x/net v0.0.0-20210428140749-89ef3d95e781/go.mod h1:OJAsFXCWl8Ukc7SiCT/9KSuxbyM7479/AVlXFRxuMCk=
golang.org/x/net v0.0.0-20210503060351-7fd8e65b6420/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
golang.org/x/net v0.0.0-20210520170846-37e1c6afe023/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
golang.org/x/net v0.0.0-20210525063256-abc453219eb5/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
golang.org/x/net v0.0.0-20210813160813-60bc85c4be6d/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
golang.org/x/net v0.0.0-20210825183410-e898025ed96a/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
golang.org/x/net v0.0.0-20210929193557-e81a3d93ecf6/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
golang.org/x/net v0.0.0-20211112202133-69e39bad7dc2 h1:CIJ76btIcR3eFI5EgSo6k1qKw9KJexJuRLI9G7Hp5wE=
golang.org/x/net v0.0.0-20211112202133-69e39bad7dc2/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
golang.org/x/net v0.0.0-20211209124913-491a49abca63/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
golang.org/x/net v0.0.0-20211216030914-fe4d6282115f/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
golang.org/x/net v0.0.0-20220225172249-27dd8689420f h1:oA4XRj0qtSt8Yo1Zms0CUlsT3KG69V2UGQWPBxujDmc=
golang.org/x/net v0.0.0-20220225172249-27dd8689420f/go.mod h1:CfG3xpIq0wQ8r1q4Su4UZFWDARRcnwPjda9FqA0JpMk=
golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
golang.org/x/oauth2 v0.0.0-20190226205417-e64efc72b421/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
golang.org/x/oauth2 v0.0.0-20190604053449-0f29369cfe45/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
@@ -1189,9 +1329,11 @@ golang.org/x/sys v0.0.0-20200519105757-fe76b779f299/go.mod h1:h1NjWce9XRLGQEsW7w
golang.org/x/sys v0.0.0-20200523222454-059865788121/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200615200032-f1bc736245b1/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200622214017-ed371f2e16b4/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200625212154-ddb9806d33ae/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200728102440-3e129f6d46b1/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200803210538-64077c9b5642/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200817155316-9781c653f443/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200831180312-196b9ba8737a/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200905004654-be1d3432aa8f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200909081042-eff7692f9009/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200916030750-2334cc1a136f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
@@ -1216,10 +1358,12 @@ golang.org/x/sys v0.0.0-20210324051608-47abb6519492/go.mod h1:h1NjWce9XRLGQEsW7w
golang.org/x/sys v0.0.0-20210330210617-4fbd30eecc44/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210403161142-5e06dd20ab57/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210423082822-04245dca01da/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210423185535-09eb48e85fd7/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210426230700-d19ff857e887/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210502180810-71e4cd670f79/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210510120138-977fb7262007/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20210514084401-e8d321eab015/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20210603081109-ebe580a85c40/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20210603125802-9665404d3644/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20210615035016-665e8c7367d1/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20210616094352-59db8d763f22/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
@@ -1228,6 +1372,8 @@ golang.org/x/sys v0.0.0-20210806184541-e5e7981a1069/go.mod h1:oPkhp1MJrh7nUepCBc
golang.org/x/sys v0.0.0-20210809222454-d867a43fc93e/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20210816183151-1e6c022a8912/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20210823070655-63515b42dcdf/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20210831042530-f4d43177bf5e/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20210903071746-97244b99971b/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20210906170528-6f6e22806c34/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20210908233432-aa78b53d3365/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20210927094055-39ccf1dd6fa6/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
@@ -1238,11 +1384,16 @@ golang.org/x/sys v0.0.0-20211116061358-0a5406a5449c/go.mod h1:oPkhp1MJrh7nUepCBc
golang.org/x/sys v0.0.0-20211124211545-fe61309f8881/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20211205182925-97ca703d548d/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20211216021012-1d35b9e2eb4e/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220114195835-da31bd327af9 h1:XfKQ4OlFl8okEOr5UvAqFRVj8pY/4yfcXrddB8qAbU0=
golang.org/x/sys v0.0.0-20220114195835-da31bd327af9/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220128215802-99c3d69c2c27/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220227234510-4e6760a101f9 h1:nhht2DYV/Sn3qOayu8lM+cU1ii9sTLUeBQwQQfUHtrs=
golang.org/x/sys v0.0.0-20220227234510-4e6760a101f9/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/term v0.0.0-20201117132131-f5c789dd3221/go.mod h1:Nr5EML6q2oocZ2LXRh80K7BxOlk5/8JxuGnuhpl+muw=
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
golang.org/x/term v0.0.0-20210615171337-6886f2dfbf5b h1:9zKuko04nR4gjZ4+DNjHqRlAJqbJETHwiNKDqTfOjfE=
golang.org/x/term v0.0.0-20210220032956-6a3ed077a48d/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
golang.org/x/term v0.0.0-20210615171337-6886f2dfbf5b/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8=
golang.org/x/term v0.0.0-20210927222741-03fcf44c2211 h1:JGgROgKl9N8DuW20oFS5gxc+lE67/N3FcwmBPMe7ArY=
golang.org/x/term v0.0.0-20210927222741-03fcf44c2211/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8=
golang.org/x/text v0.0.0-20170915032832-14c0d48ead0c/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.1-0.20180807135948-17ff2d5776d2/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
@@ -1258,8 +1409,10 @@ golang.org/x/time v0.0.0-20181108054448-85acf8d2951c/go.mod h1:tRJNPiyCQ0inRvYxb
golang.org/x/time v0.0.0-20190308202827-9d24e82272b4/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.0.0-20191024005414-555d28b269f0/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.0.0-20200416051211-89c76fbcd5d1/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.0.0-20200630173020-3af7569d3a1e h1:EHBhcS0mlXEAVwNyO2dLfjToGsyY4j24pTs2ScHnX7s=
golang.org/x/time v0.0.0-20200630173020-3af7569d3a1e/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.0.0-20210220033141-f8bda1e9f3ba/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.0.0-20210723032227-1f47c861a9ac h1:7zkz7BUtwNFFqcowJ+RIgu2MaV/MapERkDIy+mwPyjs=
golang.org/x/time v0.0.0-20210723032227-1f47c861a9ac/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/tools v0.0.0-20180221164845-07fd8470d635/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20181011042414-1f849cf54d09/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
@@ -1283,6 +1436,8 @@ golang.org/x/tools v0.0.0-20190816200558-6889da9d5479/go.mod h1:b+2E5dAYhXwXZwtn
golang.org/x/tools v0.0.0-20190907020128-2ca718005c18/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20190911174233-4f2ddba30aff/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20191012152004-8de300cfc20a/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20191108193012-7d206e10da11/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20191112195655-aa38f8e97acc/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20191113191852-77e3bb0ad9e7/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20191115202509-3a792d9c32b2/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20191119224855-298f0cb1881e/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
@@ -1394,6 +1549,7 @@ google.golang.org/genproto v0.0.0-20200228133532-8c2c7df3a383/go.mod h1:55QSHmfG
google.golang.org/genproto v0.0.0-20200305110556-506484158171/go.mod h1:55QSHmfGQM9UVYDPBsyGGes0y52j32PQ3BqQfXhyH3c=
google.golang.org/genproto v0.0.0-20200312145019-da6875a35672/go.mod h1:55QSHmfGQM9UVYDPBsyGGes0y52j32PQ3BqQfXhyH3c=
google.golang.org/genproto v0.0.0-20200331122359-1ee6d9798940/go.mod h1:55QSHmfGQM9UVYDPBsyGGes0y52j32PQ3BqQfXhyH3c=
google.golang.org/genproto v0.0.0-20200423170343-7949de9c1215/go.mod h1:55QSHmfGQM9UVYDPBsyGGes0y52j32PQ3BqQfXhyH3c=
google.golang.org/genproto v0.0.0-20200430143042-b979b6f78d84/go.mod h1:55QSHmfGQM9UVYDPBsyGGes0y52j32PQ3BqQfXhyH3c=
google.golang.org/genproto v0.0.0-20200511104702-f5ebc3bea380/go.mod h1:55QSHmfGQM9UVYDPBsyGGes0y52j32PQ3BqQfXhyH3c=
google.golang.org/genproto v0.0.0-20200513103714-09dca8ec2884/go.mod h1:55QSHmfGQM9UVYDPBsyGGes0y52j32PQ3BqQfXhyH3c=
@@ -1405,6 +1561,7 @@ google.golang.org/genproto v0.0.0-20200729003335-053ba62fc06f/go.mod h1:FWY/as6D
google.golang.org/genproto v0.0.0-20200804131852-c06518451d9c/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
google.golang.org/genproto v0.0.0-20200825200019-8632dd797987/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
google.golang.org/genproto v0.0.0-20200904004341-0bd0a958aa1d/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
google.golang.org/genproto v0.0.0-20201019141844-1ed22bb0c154/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
google.golang.org/genproto v0.0.0-20201109203340-2640f1f9cdfb/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
google.golang.org/genproto v0.0.0-20201110150050-8816d57aaa9a/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
google.golang.org/genproto v0.0.0-20201201144952-b05cb90ed32e/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
@@ -1437,8 +1594,9 @@ google.golang.org/genproto v0.0.0-20211118181313-81c1377c94b1/go.mod h1:5CzLGKJ6
google.golang.org/genproto v0.0.0-20211129164237-f09f9a12af12/go.mod h1:5CzLGKJ67TSI2B9POpiiyGha0AjJvZIUgRMt1dSmuhc=
google.golang.org/genproto v0.0.0-20211203200212-54befc351ae9/go.mod h1:5CzLGKJ67TSI2B9POpiiyGha0AjJvZIUgRMt1dSmuhc=
google.golang.org/genproto v0.0.0-20211206160659-862468c7d6e0/go.mod h1:5CzLGKJ67TSI2B9POpiiyGha0AjJvZIUgRMt1dSmuhc=
google.golang.org/genproto v0.0.0-20211208223120-3a66f561d7aa h1:I0YcKz0I7OAhddo7ya8kMnvprhcWM045PmkBdMO9zN0=
google.golang.org/genproto v0.0.0-20211208223120-3a66f561d7aa/go.mod h1:5CzLGKJ67TSI2B9POpiiyGha0AjJvZIUgRMt1dSmuhc=
google.golang.org/genproto v0.0.0-20220304144024-325a89244dc8 h1:U9V52f6rAgINH7kT+musA1qF8kWyVOxzF8eYuOVuFwQ=
google.golang.org/genproto v0.0.0-20220304144024-325a89244dc8/go.mod h1:kGP+zUP2Ddo0ayMi4YuN7C3WZyJvGLZRh8Z5wnAqvEI=
google.golang.org/grpc v0.0.0-20160317175043-d3ddb4469d5a/go.mod h1:yo6s7OP7yaDglbqo1J04qKzAhqBH6lvTonzMVmEdcZw=
google.golang.org/grpc v1.19.0/go.mod h1:mqu4LbDTu4XGKhr4mRzUsmM4RtVoemTSY81AxZiDr8c=
google.golang.org/grpc v1.20.1/go.mod h1:10oTOabMzJvdu6/UiuZezV6QK5dSlG84ov/aaiqXj38=
@@ -1469,8 +1627,10 @@ google.golang.org/grpc v1.39.0/go.mod h1:PImNr+rS9TWYb2O4/emRugxiyHZ5JyHW5F+RPnD
google.golang.org/grpc v1.39.1/go.mod h1:PImNr+rS9TWYb2O4/emRugxiyHZ5JyHW5F+RPnDzfrE=
google.golang.org/grpc v1.40.0/go.mod h1:ogyxbiOoUXAkP+4+xa6PZSE9DZgIHtSpzjDTB9KAK34=
google.golang.org/grpc v1.40.1/go.mod h1:ogyxbiOoUXAkP+4+xa6PZSE9DZgIHtSpzjDTB9KAK34=
google.golang.org/grpc v1.42.0 h1:XT2/MFpuPFsEX2fWh3YQtHkZ+WYZFQRfaUgLZYj/p6A=
google.golang.org/grpc v1.42.0/go.mod h1:k+4IHHFw41K8+bbowsex27ge2rCb65oeWqe4jJ590SU=
google.golang.org/grpc v1.43.0/go.mod h1:k+4IHHFw41K8+bbowsex27ge2rCb65oeWqe4jJ590SU=
google.golang.org/grpc v1.44.0 h1:weqSxi/TMs1SqFRMHCtBgXRs8k3X39QIDEZ0pRcttUg=
google.golang.org/grpc v1.44.0/go.mod h1:k+4IHHFw41K8+bbowsex27ge2rCb65oeWqe4jJ590SU=
google.golang.org/grpc/cmd/protoc-gen-go-grpc v1.1.0/go.mod h1:6Kw0yEErY5E/yWrBtf03jp27GLLJujG4z/JK95pnjjw=
google.golang.org/protobuf v0.0.0-20200109180630-ec00e32a8dfd/go.mod h1:DFci5gLYBciE7Vtevhsrf46CRTquxDuWsQurQQe4oz8=
google.golang.org/protobuf v0.0.0-20200221191635-4d8936d0db64/go.mod h1:kwYJMbMJ01Woi6D6+Kah6886xMZcty6N08ah7+eCXa0=
@@ -1500,6 +1660,7 @@ gopkg.in/errgo.v2 v2.1.0/go.mod h1:hNsd1EY+bozCKY1Ytp96fpM3vjJbqLJn88ws8XvfDNI=
gopkg.in/fsnotify.v1 v1.4.7/go.mod h1:Tz8NjZHkW78fSQdbUxIjBTcgA1z1m8ZHf0WmKUhAMys=
gopkg.in/gemnasium/logrus-airbrake-hook.v2 v2.1.2/go.mod h1:Xk6kEKp8OKb+X14hQBKWaSkCsqBpgog8nAV2xsGOxlo=
gopkg.in/inf.v0 v0.9.1/go.mod h1:cWUDdTG/fYaXco+Dcufb5Vnc6Gp2YChqWtbxRZE0mXw=
gopkg.in/ini.v1 v1.51.0/go.mod h1:pNLf8WUiyNEtQjuu5G5vTm06TEv9tsIgeAvK8hOrP4k=
gopkg.in/ini.v1 v1.66.2/go.mod h1:pNLf8WUiyNEtQjuu5G5vTm06TEv9tsIgeAvK8hOrP4k=
gopkg.in/natefinch/lumberjack.v2 v2.0.0/go.mod h1:l0ndWWf7gzL7RNwBG7wST/UCcT4T24xpD6X8LsfU/+k=
gopkg.in/resty.v1 v1.12.0/go.mod h1:mDo4pnntr5jdWRML875a/NmxYqAlA73dVijT2AXvQQo=
@@ -1521,6 +1682,7 @@ gopkg.in/yaml.v2 v2.3.0/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.4.0 h1:D8xgwECY7CYvx+Y2n4sBz93Jn9JRvxdiyyo8CTfuKaY=
gopkg.in/yaml.v2 v2.4.0/go.mod h1:RDklbk79AGWmwhnvt/jBztapEOGDOx6ZbXqjP6csGnQ=
gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
gopkg.in/yaml.v3 v3.0.0-20200615113413-eeeca48fe776/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
gopkg.in/yaml.v3 v3.0.0-20210107192922-496545a6307b h1:h8qDotaEPuJATrMmW04NCwg7v22aHH28wwpauUhK9Oo=
gopkg.in/yaml.v3 v3.0.0-20210107192922-496545a6307b/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
gotest.tools v2.2.0+incompatible h1:VsBPFP1AI068pPrMxtb/S8Zkgf9xEmTLJjfM+P5UIEo=
@@ -1538,40 +1700,55 @@ honnef.co/go/tools v0.0.1-2020.1.4/go.mod h1:X/FiERA/W4tHapMX5mGpAtMSVEeEUOyHaw9
k8s.io/api v0.20.1/go.mod h1:KqwcCVogGxQY3nBlRpwt+wpAMF/KjaCc7RpywacvqUo=
k8s.io/api v0.20.4/go.mod h1:++lNL1AJMkDymriNniQsWRkMDzRaX2Y/POTUi8yvqYQ=
k8s.io/api v0.20.6/go.mod h1:X9e8Qag6JV/bL5G6bU8sdVRltWKmdHsFUGS3eVndqE8=
k8s.io/api v0.22.5/go.mod h1:mEhXyLaSD1qTOf40rRiKXkc+2iCem09rWLlFwhCEiAs=
k8s.io/apimachinery v0.20.1/go.mod h1:WlLqWAHZGg07AeltaI0MV5uk1Omp8xaN0JGLY6gkRpU=
k8s.io/apimachinery v0.20.4/go.mod h1:WlLqWAHZGg07AeltaI0MV5uk1Omp8xaN0JGLY6gkRpU=
k8s.io/apimachinery v0.20.6/go.mod h1:ejZXtW1Ra6V1O5H8xPBGz+T3+4gfkTCeExAHKU57MAc=
k8s.io/apimachinery v0.22.1/go.mod h1:O3oNtNadZdeOMxHFVxOreoznohCpy0z6mocxbZr7oJ0=
k8s.io/apimachinery v0.22.5/go.mod h1:xziclGKwuuJ2RM5/rSFQSYAj0zdbci3DH8kj+WvyN0U=
k8s.io/apiserver v0.20.1/go.mod h1:ro5QHeQkgMS7ZGpvf4tSMx6bBOgPfE+f52KwvXfScaU=
k8s.io/apiserver v0.20.4/go.mod h1:Mc80thBKOyy7tbvFtB4kJv1kbdD0eIH8k8vianJcbFM=
k8s.io/apiserver v0.20.6/go.mod h1:QIJXNt6i6JB+0YQRNcS0hdRHJlMhflFmsBDeSgT1r8Q=
k8s.io/apiserver v0.22.5/go.mod h1:s2WbtgZAkTKt679sYtSudEQrTGWUSQAPe6MupLnlmaQ=
k8s.io/client-go v0.20.1/go.mod h1:/zcHdt1TeWSd5HoUe6elJmHSQ6uLLgp4bIJHVEuy+/Y=
k8s.io/client-go v0.20.4/go.mod h1:LiMv25ND1gLUdBeYxBIwKpkSC5IsozMMmOOeSJboP+k=
k8s.io/client-go v0.20.6/go.mod h1:nNQMnOvEUEsOzRRFIIkdmYOjAZrC8bgq0ExboWSU1I0=
k8s.io/client-go v0.22.5/go.mod h1:cs6yf/61q2T1SdQL5Rdcjg9J1ElXSwbjSrW2vFImM4Y=
k8s.io/code-generator v0.19.7/go.mod h1:lwEq3YnLYb/7uVXLorOJfxg+cUu2oihFhHZ0n9NIla0=
k8s.io/component-base v0.20.1/go.mod h1:guxkoJnNoh8LNrbtiQOlyp2Y2XFCZQmrcg2n/DeYNLk=
k8s.io/component-base v0.20.4/go.mod h1:t4p9EdiagbVCJKrQ1RsA5/V4rFQNDfRlevJajlGwgjI=
k8s.io/component-base v0.20.6/go.mod h1:6f1MPBAeI+mvuts3sIdtpjljHWBQ2cIy38oBIWMYnrM=
k8s.io/component-base v0.22.5/go.mod h1:VK3I+TjuF9eaa+Ln67dKxhGar5ynVbwnGrUiNF4MqCI=
k8s.io/cri-api v0.17.3/go.mod h1:X1sbHmuXhwaHs9xxYffLqJogVsnI+f6cPRcgPel7ywM=
k8s.io/cri-api v0.20.1/go.mod h1:2JRbKt+BFLTjtrILYVqQK5jqhI+XNdF6UiGMgczeBCI=
k8s.io/cri-api v0.20.4/go.mod h1:2JRbKt+BFLTjtrILYVqQK5jqhI+XNdF6UiGMgczeBCI=
k8s.io/cri-api v0.20.6/go.mod h1:ew44AjNXwyn1s0U4xCKGodU7J1HzBeZ1MpGrpa5r8Yc=
k8s.io/cri-api v0.23.1/go.mod h1:REJE3PSU0h/LOV1APBrupxrEJqnoxZC8KWzkBUHwrK4=
k8s.io/gengo v0.0.0-20200413195148-3a45101e95ac/go.mod h1:ezvh/TsK7cY6rbqRK0oQQ8IAqLxYwwyPxAX1Pzy0ii0=
k8s.io/gengo v0.0.0-20200428234225-8167cfdcfc14/go.mod h1:ezvh/TsK7cY6rbqRK0oQQ8IAqLxYwwyPxAX1Pzy0ii0=
k8s.io/gengo v0.0.0-20201113003025-83324d819ded/go.mod h1:FiNAH4ZV3gBg2Kwh89tzAEV2be7d5xI0vBa/VySYy3E=
k8s.io/klog/v2 v2.0.0/go.mod h1:PBfzABfn139FHAV07az/IF9Wp1bkk3vpT2XSJ76fSDE=
k8s.io/klog/v2 v2.2.0/go.mod h1:Od+F08eJP+W3HUb4pSrPpgp9DGU4GzlpG/TmITuYh/Y=
k8s.io/klog/v2 v2.4.0/go.mod h1:Od+F08eJP+W3HUb4pSrPpgp9DGU4GzlpG/TmITuYh/Y=
k8s.io/klog/v2 v2.9.0/go.mod h1:hy9LJ/NvuK+iVyP4Ehqva4HxZG/oXyIS3n3Jmire4Ec=
k8s.io/klog/v2 v2.30.0/go.mod h1:y1WjHnz7Dj687irZUWR/WLkLc5N1YHtjLdmgWjndZn0=
k8s.io/kube-openapi v0.0.0-20200805222855-6aeccd4b50c6/go.mod h1:UuqjUnNftUyPE5H64/qeyjQoUZhGpeFDVdxjTeEVN2o=
k8s.io/kube-openapi v0.0.0-20201113171705-d219536bb9fd/go.mod h1:WOJ3KddDSol4tAGcJo0Tvi+dK12EcqSLqcWsryKMpfM=
k8s.io/kube-openapi v0.0.0-20210421082810-95288971da7e/go.mod h1:vHXdDvt9+2spS2Rx9ql3I8tycm3H9FDfdUoIuKCefvw=
k8s.io/kube-openapi v0.0.0-20211109043538-20434351676c/go.mod h1:vHXdDvt9+2spS2Rx9ql3I8tycm3H9FDfdUoIuKCefvw=
k8s.io/kubernetes v1.13.0/go.mod h1:ocZa8+6APFNC2tX1DZASIbocyYT5jHzqFVsY5aoB7Jk=
k8s.io/utils v0.0.0-20201110183641-67b214c5f920/go.mod h1:jPW/WVKK9YHAvNhRxK0md/EJ228hCsBRufyofKtW8HA=
k8s.io/utils v0.0.0-20210819203725-bdf08cb9a70a/go.mod h1:jPW/WVKK9YHAvNhRxK0md/EJ228hCsBRufyofKtW8HA=
k8s.io/utils v0.0.0-20210930125809-cb0fa318a74b/go.mod h1:jPW/WVKK9YHAvNhRxK0md/EJ228hCsBRufyofKtW8HA=
rsc.io/binaryregexp v0.2.0/go.mod h1:qTv7/COck+e2FymRvadv62gMdZztPaShugOCi3I+8D8=
rsc.io/quote/v3 v3.1.0/go.mod h1:yEA65RcK8LyAZtP9Kv3t0HmxON59tX3rD+tICJqUlj0=
rsc.io/sampler v1.3.0/go.mod h1:T1hPZKmBbMNahiBKFy5HrXp6adAjACjK9JXDnKaTXpA=
sigs.k8s.io/apiserver-network-proxy/konnectivity-client v0.0.14/go.mod h1:LEScyzhFmoF5pso/YSeBstl57mOzx9xlU9n85RGrDQg=
sigs.k8s.io/apiserver-network-proxy/konnectivity-client v0.0.15/go.mod h1:LEScyzhFmoF5pso/YSeBstl57mOzx9xlU9n85RGrDQg=
sigs.k8s.io/apiserver-network-proxy/konnectivity-client v0.0.22/go.mod h1:LEScyzhFmoF5pso/YSeBstl57mOzx9xlU9n85RGrDQg=
sigs.k8s.io/structured-merge-diff/v4 v4.0.1/go.mod h1:bJZC9H9iH24zzfZ/41RGcq60oK1F7G282QMXDPYydCw=
sigs.k8s.io/structured-merge-diff/v4 v4.0.2/go.mod h1:bJZC9H9iH24zzfZ/41RGcq60oK1F7G282QMXDPYydCw=
sigs.k8s.io/structured-merge-diff/v4 v4.0.3/go.mod h1:bJZC9H9iH24zzfZ/41RGcq60oK1F7G282QMXDPYydCw=
sigs.k8s.io/structured-merge-diff/v4 v4.1.2/go.mod h1:j/nl6xW8vLS49O8YvXW1ocPhZawJtm+Yrr7PPRQ0Vg4=
sigs.k8s.io/yaml v1.1.0/go.mod h1:UJmg0vDUVViEyp3mgSv9WPwZCDxu4rQW1olrI1uml+o=
sigs.k8s.io/yaml v1.2.0/go.mod h1:yfXDCHCao9+ENCvLSE62v9VSji2MKu5jeNfTrofGhJc=

View File

@@ -81,7 +81,6 @@ provides packages for Ubuntu 20.04 (it should also work with direct derivatives
echo "deb https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/xUbuntu_${VERSION_ID}/ /" | sudo tee /etc/apt/sources.list.d/devel:kubic:libcontainers:stable.list
curl -L https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/xUbuntu_${VERSION_ID}/Release.key | sudo apt-key add -
sudo apt-get update
sudo apt-get -y upgrade
sudo apt-get -y install skopeo
```

View File

@@ -36,12 +36,12 @@ func (s *SkopeoSuite) SetUpSuite(c *check.C) {
 func (s *SkopeoSuite) TearDownSuite(c *check.C) {
 	if s.regV2 != nil {
-		s.regV2.Close()
+		s.regV2.tearDown(c)
 	}
 	if s.regV2WithAuth != nil {
 		//cmd := exec.Command("docker", "logout", s.regV2WithAuth)
 		//c.Assert(cmd.Run(), check.IsNil)
-		s.regV2WithAuth.Close()
+		s.regV2WithAuth.tearDown(c)
 	}
 }

View File

@@ -64,9 +64,7 @@ func (s *CopySuite) SetUpSuite(c *check.C) {
 	s.registry = setupRegistryV2At(c, v2DockerRegistryURL, false, false)
 	s.s1Registry = setupRegistryV2At(c, v2s1DockerRegistryURL, false, true)
-	gpgHome, err := ioutil.TempDir("", "skopeo-gpg")
-	c.Assert(err, check.IsNil)
-	s.gpgHome = gpgHome
+	s.gpgHome = c.MkDir()
 	os.Setenv("GNUPGHOME", s.gpgHome)
 	for _, key := range []string{"personal", "official"} {
@@ -82,14 +80,11 @@ func (s *CopySuite) SetUpSuite(c *check.C) {
 }
 func (s *CopySuite) TearDownSuite(c *check.C) {
-	if s.gpgHome != "" {
-		os.RemoveAll(s.gpgHome)
-	}
 	if s.registry != nil {
-		s.registry.Close()
+		s.registry.tearDown(c)
 	}
 	if s.s1Registry != nil {
-		s.s1Registry.Close()
+		s.s1Registry.tearDown(c)
 	}
 	if s.cluster != nil {
 		s.cluster.tearDown(c)
@@ -97,32 +92,20 @@ func (s *CopySuite) TearDownSuite(c *check.C) {
 }
 func (s *CopySuite) TestCopyWithManifestList(c *check.C) {
-	dir, err := ioutil.TempDir("", "copy-manifest-list")
-	c.Assert(err, check.IsNil)
-	defer os.RemoveAll(dir)
+	dir := c.MkDir()
 	assertSkopeoSucceeds(c, "", "copy", knownListImage, "dir:"+dir)
 }
 func (s *CopySuite) TestCopyAllWithManifestList(c *check.C) {
-	dir, err := ioutil.TempDir("", "copy-all-manifest-list")
-	c.Assert(err, check.IsNil)
-	defer os.RemoveAll(dir)
+	dir := c.MkDir()
 	assertSkopeoSucceeds(c, "", "copy", "--all", knownListImage, "dir:"+dir)
 }
 func (s *CopySuite) TestCopyAllWithManifestListRoundTrip(c *check.C) {
-	oci1, err := ioutil.TempDir("", "copy-all-manifest-list-oci")
-	c.Assert(err, check.IsNil)
-	defer os.RemoveAll(oci1)
-	oci2, err := ioutil.TempDir("", "copy-all-manifest-list-oci")
-	c.Assert(err, check.IsNil)
-	defer os.RemoveAll(oci2)
-	dir1, err := ioutil.TempDir("", "copy-all-manifest-list-dir")
-	c.Assert(err, check.IsNil)
-	defer os.RemoveAll(dir1)
-	dir2, err := ioutil.TempDir("", "copy-all-manifest-list-dir")
-	c.Assert(err, check.IsNil)
-	defer os.RemoveAll(dir2)
+	oci1 := c.MkDir()
+	oci2 := c.MkDir()
+	dir1 := c.MkDir()
+	dir2 := c.MkDir()
 	assertSkopeoSucceeds(c, "", "copy", "--multi-arch=all", knownListImage, "oci:"+oci1)
 	assertSkopeoSucceeds(c, "", "copy", "--multi-arch=all", "oci:"+oci1, "dir:"+dir1)
 	assertSkopeoSucceeds(c, "", "copy", "--multi-arch=all", "dir:"+dir1, "oci:"+oci2)
@@ -133,18 +116,10 @@ func (s *CopySuite) TestCopyAllWithManifestListRoundTrip(c *check.C) {
}
func (s *CopySuite) TestCopyAllWithManifestListConverge(c *check.C) {
oci1, err := ioutil.TempDir("", "copy-all-manifest-list-oci")
c.Assert(err, check.IsNil)
defer os.RemoveAll(oci1)
oci2, err := ioutil.TempDir("", "copy-all-manifest-list-oci")
c.Assert(err, check.IsNil)
defer os.RemoveAll(oci2)
dir1, err := ioutil.TempDir("", "copy-all-manifest-list-dir")
c.Assert(err, check.IsNil)
defer os.RemoveAll(dir1)
dir2, err := ioutil.TempDir("", "copy-all-manifest-list-dir")
c.Assert(err, check.IsNil)
defer os.RemoveAll(dir2)
oci1 := c.MkDir()
oci2 := c.MkDir()
dir1 := c.MkDir()
dir2 := c.MkDir()
assertSkopeoSucceeds(c, "", "copy", "--multi-arch=all", knownListImage, "oci:"+oci1)
assertSkopeoSucceeds(c, "", "copy", "--multi-arch=all", "oci:"+oci1, "dir:"+dir1)
assertSkopeoSucceeds(c, "", "copy", "--multi-arch=all", "--format", "oci", knownListImage, "dir:"+dir2)
@@ -155,9 +130,7 @@ func (s *CopySuite) TestCopyAllWithManifestListConverge(c *check.C) {
}
func (s *CopySuite) TestCopyNoneWithManifestList(c *check.C) {
dir1, err := ioutil.TempDir("", "copy-all-manifest-list-dir")
c.Assert(err, check.IsNil)
defer os.RemoveAll(dir1)
dir1 := c.MkDir()
assertSkopeoSucceeds(c, "", "copy", "--multi-arch=index-only", knownListImage, "dir:"+dir1)
manifestPath := filepath.Join(dir1, "manifest.json")
@@ -170,18 +143,10 @@ func (s *CopySuite) TestCopyNoneWithManifestList(c *check.C) {
}
func (s *CopySuite) TestCopyWithManifestListConverge(c *check.C) {
oci1, err := ioutil.TempDir("", "copy-all-manifest-list-oci")
c.Assert(err, check.IsNil)
defer os.RemoveAll(oci1)
oci2, err := ioutil.TempDir("", "copy-all-manifest-list-oci")
c.Assert(err, check.IsNil)
defer os.RemoveAll(oci2)
dir1, err := ioutil.TempDir("", "copy-all-manifest-list-dir")
c.Assert(err, check.IsNil)
defer os.RemoveAll(dir1)
dir2, err := ioutil.TempDir("", "copy-all-manifest-list-dir")
c.Assert(err, check.IsNil)
defer os.RemoveAll(dir2)
oci1 := c.MkDir()
oci2 := c.MkDir()
dir1 := c.MkDir()
dir2 := c.MkDir()
assertSkopeoSucceeds(c, "", "copy", knownListImage, "oci:"+oci1)
assertSkopeoSucceeds(c, "", "copy", "--multi-arch=all", "oci:"+oci1, "dir:"+dir1)
assertSkopeoSucceeds(c, "", "copy", "--format", "oci", knownListImage, "dir:"+dir2)
@@ -192,24 +157,16 @@ func (s *CopySuite) TestCopyWithManifestListConverge(c *check.C) {
}
func (s *CopySuite) TestCopyAllWithManifestListStorageFails(c *check.C) {
storage, err := ioutil.TempDir("", "copy-storage")
c.Assert(err, check.IsNil)
defer os.RemoveAll(storage)
storage := c.MkDir()
storage = fmt.Sprintf("[vfs@%s/root+%s/runroot]", storage, storage)
assertSkopeoFails(c, `.*destination transport .* does not support copying multiple images as a group.*`, "copy", "--multi-arch=all", knownListImage, "containers-storage:"+storage+"test")
}
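
This and the following storage tests build a containers-storage reference of the form `containers-storage:[vfs@<graphroot>+<runroot>]<name>`, which pins the driver and both store directories so the copy never touches the host's default store. A small sketch of driving the same kind of copy through the skopeo binary, assuming skopeo is on PATH; the image and scratch paths are illustrative:

```go
package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"path/filepath"
)

func main() {
	// Illustrative scratch location; the integration tests get theirs from c.MkDir().
	base, err := os.MkdirTemp("", "storage-example")
	if err != nil {
		log.Fatal(err)
	}
	defer os.RemoveAll(base)

	// [driver@graphroot+runroot] selects an isolated store instead of the
	// host's default containers-storage location.
	storeSpec := fmt.Sprintf("[vfs@%s+%s]",
		filepath.Join(base, "root"), filepath.Join(base, "runroot"))
	dest := "containers-storage:" + storeSpec + "test"

	cmd := exec.Command("skopeo", "copy", "docker://quay.io/libpod/alpine_labels:latest", dest)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatal(err)
	}
}
```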
func (s *CopySuite) TestCopyWithManifestListStorage(c *check.C) {
storage, err := ioutil.TempDir("", "copy-manifest-list-storage")
c.Assert(err, check.IsNil)
defer os.RemoveAll(storage)
storage := c.MkDir()
storage = fmt.Sprintf("[vfs@%s/root+%s/runroot]", storage, storage)
dir1, err := ioutil.TempDir("", "copy-manifest-list-storage-dir")
c.Assert(err, check.IsNil)
defer os.RemoveAll(dir1)
dir2, err := ioutil.TempDir("", "copy-manifest-list-storage-dir")
c.Assert(err, check.IsNil)
defer os.RemoveAll(dir2)
dir1 := c.MkDir()
dir2 := c.MkDir()
assertSkopeoSucceeds(c, "", "copy", knownListImage, "containers-storage:"+storage+"test")
assertSkopeoSucceeds(c, "", "copy", knownListImage, "dir:"+dir1)
assertSkopeoSucceeds(c, "", "copy", "containers-storage:"+storage+"test", "dir:"+dir2)
@@ -218,16 +175,10 @@ func (s *CopySuite) TestCopyWithManifestListStorage(c *check.C) {
}
func (s *CopySuite) TestCopyWithManifestListStorageMultiple(c *check.C) {
storage, err := ioutil.TempDir("", "copy-manifest-list-storage-multiple")
c.Assert(err, check.IsNil)
defer os.RemoveAll(storage)
storage := c.MkDir()
storage = fmt.Sprintf("[vfs@%s/root+%s/runroot]", storage, storage)
dir1, err := ioutil.TempDir("", "copy-manifest-list-storage-multiple-dir")
c.Assert(err, check.IsNil)
defer os.RemoveAll(dir1)
dir2, err := ioutil.TempDir("", "copy-manifest-list-storage-multiple-dir")
c.Assert(err, check.IsNil)
defer os.RemoveAll(dir2)
dir1 := c.MkDir()
dir2 := c.MkDir()
assertSkopeoSucceeds(c, "", "--override-arch", "amd64", "copy", knownListImage, "containers-storage:"+storage+"test")
assertSkopeoSucceeds(c, "", "--override-arch", "arm64", "copy", knownListImage, "containers-storage:"+storage+"test")
assertSkopeoSucceeds(c, "", "--override-arch", "arm64", "copy", knownListImage, "dir:"+dir1)
@@ -237,18 +188,10 @@ func (s *CopySuite) TestCopyWithManifestListStorageMultiple(c *check.C) {
}
func (s *CopySuite) TestCopyWithManifestListDigest(c *check.C) {
dir1, err := ioutil.TempDir("", "copy-manifest-list-digest-dir")
c.Assert(err, check.IsNil)
defer os.RemoveAll(dir1)
dir2, err := ioutil.TempDir("", "copy-manifest-list-digest-dir")
c.Assert(err, check.IsNil)
defer os.RemoveAll(dir2)
oci1, err := ioutil.TempDir("", "copy-manifest-list-digest-oci")
c.Assert(err, check.IsNil)
defer os.RemoveAll(oci1)
oci2, err := ioutil.TempDir("", "copy-manifest-list-digest-oci")
c.Assert(err, check.IsNil)
defer os.RemoveAll(oci2)
dir1 := c.MkDir()
dir2 := c.MkDir()
oci1 := c.MkDir()
oci2 := c.MkDir()
m := combinedOutputOfCommand(c, skopeoBinary, "inspect", "--raw", knownListImage)
manifestDigest, err := manifest.Digest([]byte(m))
c.Assert(err, check.IsNil)
@@ -262,12 +205,8 @@ func (s *CopySuite) TestCopyWithManifestListDigest(c *check.C) {
}
func (s *CopySuite) TestCopyWithDigestfileOutput(c *check.C) {
tempdir, err := ioutil.TempDir("", "tempdir")
c.Assert(err, check.IsNil)
defer os.RemoveAll(tempdir)
dir1, err := ioutil.TempDir("", "copy-manifest-list-digest-dir")
c.Assert(err, check.IsNil)
defer os.RemoveAll(dir1)
tempdir := c.MkDir()
dir1 := c.MkDir()
digestOutPath := filepath.Join(tempdir, "digest.txt")
assertSkopeoSucceeds(c, "", "copy", "--digestfile="+digestOutPath, knownListImage, "dir:"+dir1)
readDigest, err := ioutil.ReadFile(digestOutPath)
@@ -277,16 +216,10 @@ func (s *CopySuite) TestCopyWithDigestfileOutput(c *check.C) {
}
func (s *CopySuite) TestCopyWithManifestListStorageDigest(c *check.C) {
storage, err := ioutil.TempDir("", "copy-manifest-list-storage-digest")
c.Assert(err, check.IsNil)
defer os.RemoveAll(storage)
storage := c.MkDir()
storage = fmt.Sprintf("[vfs@%s/root+%s/runroot]", storage, storage)
dir1, err := ioutil.TempDir("", "copy-manifest-list-storage-digest-dir")
c.Assert(err, check.IsNil)
defer os.RemoveAll(dir1)
dir2, err := ioutil.TempDir("", "copy-manifest-list-storage-digest-dir")
c.Assert(err, check.IsNil)
defer os.RemoveAll(dir2)
dir1 := c.MkDir()
dir2 := c.MkDir()
m := combinedOutputOfCommand(c, skopeoBinary, "inspect", "--raw", knownListImage)
manifestDigest, err := manifest.Digest([]byte(m))
c.Assert(err, check.IsNil)
@@ -299,16 +232,10 @@ func (s *CopySuite) TestCopyWithManifestListStorageDigest(c *check.C) {
}
func (s *CopySuite) TestCopyWithManifestListStorageDigestMultipleArches(c *check.C) {
storage, err := ioutil.TempDir("", "copy-manifest-list-storage-digest")
c.Assert(err, check.IsNil)
defer os.RemoveAll(storage)
storage := c.MkDir()
storage = fmt.Sprintf("[vfs@%s/root+%s/runroot]", storage, storage)
dir1, err := ioutil.TempDir("", "copy-manifest-list-storage-digest-dir")
c.Assert(err, check.IsNil)
defer os.RemoveAll(dir1)
dir2, err := ioutil.TempDir("", "copy-manifest-list-storage-digest-dir")
c.Assert(err, check.IsNil)
defer os.RemoveAll(dir2)
dir1 := c.MkDir()
dir2 := c.MkDir()
m := combinedOutputOfCommand(c, skopeoBinary, "inspect", "--raw", knownListImage)
manifestDigest, err := manifest.Digest([]byte(m))
c.Assert(err, check.IsNil)
@@ -321,9 +248,7 @@ func (s *CopySuite) TestCopyWithManifestListStorageDigestMultipleArches(c *check
}
func (s *CopySuite) TestCopyWithManifestListStorageDigestMultipleArchesBothUseListDigest(c *check.C) {
storage, err := ioutil.TempDir("", "copy-manifest-list-storage-digest-multiple-arches-both")
c.Assert(err, check.IsNil)
defer os.RemoveAll(storage)
storage := c.MkDir()
storage = fmt.Sprintf("[vfs@%s/root+%s/runroot]", storage, storage)
m := combinedOutputOfCommand(c, skopeoBinary, "inspect", "--raw", knownListImage)
manifestDigest, err := manifest.Digest([]byte(m))
@@ -343,9 +268,7 @@ func (s *CopySuite) TestCopyWithManifestListStorageDigestMultipleArchesBothUseLi
}
func (s *CopySuite) TestCopyWithManifestListStorageDigestMultipleArchesFirstUsesListDigest(c *check.C) {
storage, err := ioutil.TempDir("", "copy-manifest-list-storage-digest-multiple-arches-first")
c.Assert(err, check.IsNil)
defer os.RemoveAll(storage)
storage := c.MkDir()
storage = fmt.Sprintf("[vfs@%s/root+%s/runroot]", storage, storage)
m := combinedOutputOfCommand(c, skopeoBinary, "inspect", "--raw", knownListImage)
manifestDigest, err := manifest.Digest([]byte(m))
@@ -379,9 +302,7 @@ func (s *CopySuite) TestCopyWithManifestListStorageDigestMultipleArchesFirstUses
}
func (s *CopySuite) TestCopyWithManifestListStorageDigestMultipleArchesSecondUsesListDigest(c *check.C) {
storage, err := ioutil.TempDir("", "copy-manifest-list-storage-digest-multiple-arches-second")
c.Assert(err, check.IsNil)
defer os.RemoveAll(storage)
storage := c.MkDir()
storage = fmt.Sprintf("[vfs@%s/root+%s/runroot]", storage, storage)
m := combinedOutputOfCommand(c, skopeoBinary, "inspect", "--raw", knownListImage)
manifestDigest, err := manifest.Digest([]byte(m))
@@ -415,9 +336,7 @@ func (s *CopySuite) TestCopyWithManifestListStorageDigestMultipleArchesSecondUse
}
func (s *CopySuite) TestCopyWithManifestListStorageDigestMultipleArchesThirdUsesListDigest(c *check.C) {
storage, err := ioutil.TempDir("", "copy-manifest-list-storage-digest-multiple-arches-third")
c.Assert(err, check.IsNil)
defer os.RemoveAll(storage)
storage := c.MkDir()
storage = fmt.Sprintf("[vfs@%s/root+%s/runroot]", storage, storage)
m := combinedOutputOfCommand(c, skopeoBinary, "inspect", "--raw", knownListImage)
manifestDigest, err := manifest.Digest([]byte(m))
@@ -451,9 +370,7 @@ func (s *CopySuite) TestCopyWithManifestListStorageDigestMultipleArchesThirdUses
}
func (s *CopySuite) TestCopyWithManifestListStorageDigestMultipleArchesTagAndDigest(c *check.C) {
storage, err := ioutil.TempDir("", "copy-manifest-list-storage-digest-multiple-arches-tag-digest")
c.Assert(err, check.IsNil)
defer os.RemoveAll(storage)
storage := c.MkDir()
storage = fmt.Sprintf("[vfs@%s/root+%s/runroot]", storage, storage)
m := combinedOutputOfCommand(c, skopeoBinary, "inspect", "--raw", knownListImage)
manifestDigest, err := manifest.Digest([]byte(m))
@@ -496,28 +413,20 @@ func (s *CopySuite) TestCopyWithManifestListStorageDigestMultipleArchesTagAndDig
}
func (s *CopySuite) TestCopyFailsWhenImageOSDoesNotMatchRuntimeOS(c *check.C) {
storage, err := ioutil.TempDir("", "copy-fails-image-does-not-match-runtime")
c.Assert(err, check.IsNil)
defer os.RemoveAll(storage)
storage := c.MkDir()
storage = fmt.Sprintf("[vfs@%s/root+%s/runroot]", storage, storage)
assertSkopeoFails(c, `.*no image found in manifest list for architecture .*, variant .*, OS .*`, "copy", knownWindowsOnlyImage, "containers-storage:"+storage+"test")
}
func (s *CopySuite) TestCopySucceedsWhenImageDoesNotMatchRuntimeButWeOverride(c *check.C) {
storage, err := ioutil.TempDir("", "copy-succeeds-image-does-not-match-runtime-but-override")
c.Assert(err, check.IsNil)
defer os.RemoveAll(storage)
storage := c.MkDir()
storage = fmt.Sprintf("[vfs@%s/root+%s/runroot]", storage, storage)
assertSkopeoSucceeds(c, "", "--override-os=windows", "--override-arch=amd64", "copy", knownWindowsOnlyImage, "containers-storage:"+storage+"test")
}
func (s *CopySuite) TestCopySimpleAtomicRegistry(c *check.C) {
dir1, err := ioutil.TempDir("", "copy-1")
c.Assert(err, check.IsNil)
defer os.RemoveAll(dir1)
dir2, err := ioutil.TempDir("", "copy-2")
c.Assert(err, check.IsNil)
defer os.RemoveAll(dir2)
dir1 := c.MkDir()
dir2 := c.MkDir()
// FIXME: It would be nice to use one of the local Docker registries instead of needing an Internet connection.
// "pull": docker: → dir:
@@ -533,12 +442,8 @@ func (s *CopySuite) TestCopySimpleAtomicRegistry(c *check.C) {
func (s *CopySuite) TestCopySimple(c *check.C) {
const ourRegistry = "docker://" + v2DockerRegistryURL + "/"
dir1, err := ioutil.TempDir("", "copy-1")
c.Assert(err, check.IsNil)
defer os.RemoveAll(dir1)
dir2, err := ioutil.TempDir("", "copy-2")
c.Assert(err, check.IsNil)
defer os.RemoveAll(dir2)
dir1 := c.MkDir()
dir2 := c.MkDir()
// FIXME: It would be nice to use one of the local Docker registries instead of needing an Internet connection.
// "pull": docker: → dir:
@@ -557,7 +462,7 @@ func (s *CopySuite) TestCopySimple(c *check.C) {
ociImgName := "pause"
defer os.RemoveAll(ociDest)
assertSkopeoSucceeds(c, "", "copy", "docker://k8s.gcr.io/pause:latest", "oci:"+ociDest+":"+ociImgName)
_, err = os.Stat(ociDest)
_, err := os.Stat(ociDest)
c.Assert(err, check.IsNil)
// docker v2s2 -> OCI image layout without image name
@@ -569,31 +474,14 @@ func (s *CopySuite) TestCopySimple(c *check.C) {
}
func (s *CopySuite) TestCopyEncryption(c *check.C) {
originalImageDir, err := ioutil.TempDir("", "copy-1")
c.Assert(err, check.IsNil)
defer os.RemoveAll(originalImageDir)
encryptedImgDir, err := ioutil.TempDir("", "copy-2")
c.Assert(err, check.IsNil)
defer os.RemoveAll(encryptedImgDir)
decryptedImgDir, err := ioutil.TempDir("", "copy-3")
c.Assert(err, check.IsNil)
defer os.RemoveAll(decryptedImgDir)
keysDir, err := ioutil.TempDir("", "copy-4")
c.Assert(err, check.IsNil)
defer os.RemoveAll(keysDir)
undecryptedImgDir, err := ioutil.TempDir("", "copy-5")
c.Assert(err, check.IsNil)
defer os.RemoveAll(undecryptedImgDir)
multiLayerImageDir, err := ioutil.TempDir("", "copy-6")
c.Assert(err, check.IsNil)
defer os.RemoveAll(multiLayerImageDir)
partiallyEncryptedImgDir, err := ioutil.TempDir("", "copy-7")
c.Assert(err, check.IsNil)
defer os.RemoveAll(partiallyEncryptedImgDir)
partiallyDecryptedImgDir, err := ioutil.TempDir("", "copy-8")
c.Assert(err, check.IsNil)
defer os.RemoveAll(partiallyDecryptedImgDir)
originalImageDir := c.MkDir()
encryptedImgDir := c.MkDir()
decryptedImgDir := c.MkDir()
keysDir := c.MkDir()
undecryptedImgDir := c.MkDir()
multiLayerImageDir := c.MkDir()
partiallyEncryptedImgDir := c.MkDir()
partiallyDecryptedImgDir := c.MkDir()
// Create RSA key pair
privateKey, err := rsa.GenerateKey(rand.Reader, 4096)
@@ -745,12 +633,8 @@ func assertSchema1DirImagesAreEqualExceptNames(c *check.C, dir1, ref1, dir2, ref
// Streaming (skopeo copy)
func (s *CopySuite) TestCopyStreaming(c *check.C) {
dir1, err := ioutil.TempDir("", "streaming-1")
c.Assert(err, check.IsNil)
defer os.RemoveAll(dir1)
dir2, err := ioutil.TempDir("", "streaming-2")
c.Assert(err, check.IsNil)
defer os.RemoveAll(dir2)
dir1 := c.MkDir()
dir2 := c.MkDir()
// FIXME: It would be nice to use one of the local Docker registries instead of needing an Internet connection.
// streaming: docker: → atomic:
@@ -770,12 +654,8 @@ func (s *CopySuite) TestCopyStreaming(c *check.C) {
func (s *CopySuite) TestCopyOCIRoundTrip(c *check.C) {
const ourRegistry = "docker://" + v2DockerRegistryURL + "/"
oci1, err := ioutil.TempDir("", "oci-1")
c.Assert(err, check.IsNil)
defer os.RemoveAll(oci1)
oci2, err := ioutil.TempDir("", "oci-2")
c.Assert(err, check.IsNil)
defer os.RemoveAll(oci2)
oci1 := c.MkDir()
oci2 := c.MkDir()
// Docker -> OCI
assertSkopeoSucceeds(c, "", "--tls-verify=false", "--debug", "copy", testFQIN, "oci:"+oci1+":latest")
@@ -798,7 +678,7 @@ func (s *CopySuite) TestCopyOCIRoundTrip(c *check.C) {
// Verify using the upstream OCI image validator, this should catch most
// non-compliance errors. DO NOT REMOVE THIS TEST UNLESS IT'S ABSOLUTELY
// NECESSARY.
err = image.ValidateLayout(oci1, nil, logger)
err := image.ValidateLayout(oci1, nil, logger)
c.Assert(err, check.IsNil)
err = image.ValidateLayout(oci2, nil, logger)
c.Assert(err, check.IsNil)
@@ -820,9 +700,7 @@ func (s *CopySuite) TestCopySignatures(c *check.C) {
c.Skip(fmt.Sprintf("Signing not supported: %v", err))
}
dir, err := ioutil.TempDir("", "signatures-dest")
c.Assert(err, check.IsNil)
defer os.RemoveAll(dir)
dir := c.MkDir()
dirDest := "dir:" + dir
policy := fileFromFixture(c, "fixtures/policy.json", map[string]string{"@keydir@": s.gpgHome})
@@ -876,9 +754,7 @@ func (s *CopySuite) TestCopyDirSignatures(c *check.C) {
c.Skip(fmt.Sprintf("Signing not supported: %v", err))
}
topDir, err := ioutil.TempDir("", "dir-signatures-top")
c.Assert(err, check.IsNil)
defer os.RemoveAll(topDir)
topDir := c.MkDir()
topDirDest := "dir:" + topDir
for _, suffix := range []string{"/dir1", "/dir2", "/restricted/personal", "/restricted/official", "/restricted/badidentity", "/dest"} {
@@ -921,9 +797,7 @@ func (s *CopySuite) TestCopyDirSignatures(c *check.C) {
func (s *CopySuite) TestCopyCompression(c *check.C) {
const uncompresssedLayerFile = "160d823fdc48e62f97ba62df31e55424f8f5eb6b679c865eec6e59adfe304710"
topDir, err := ioutil.TempDir("", "compression-top")
c.Assert(err, check.IsNil)
defer os.RemoveAll(topDir)
topDir := c.MkDir()
for i, t := range []struct{ fixture, remote string }{
{"uncompressed-image-s1", "docker://" + v2DockerRegistryURL + "/compression/compression:s1"},
@@ -982,9 +856,7 @@ func (s *CopySuite) TestCopyDockerSigstore(c *check.C) {
const ourRegistry = "docker://" + v2DockerRegistryURL + "/"
tmpDir, err := ioutil.TempDir("", "signatures-sigstore")
c.Assert(err, check.IsNil)
defer os.RemoveAll(tmpDir)
tmpDir := c.MkDir()
copyDest := filepath.Join(tmpDir, "dest")
err = os.Mkdir(copyDest, 0755)
c.Assert(err, check.IsNil)
@@ -1050,9 +922,7 @@ func (s *CopySuite) TestCopyAtomicExtension(c *check.C) {
c.Skip(fmt.Sprintf("Signing not supported: %v", err))
}
topDir, err := ioutil.TempDir("", "atomic-extension")
c.Assert(err, check.IsNil)
defer os.RemoveAll(topDir)
topDir := c.MkDir()
for _, subdir := range []string{"dirAA", "dirAD", "dirDA", "dirDD", "registries.d"} {
err := os.MkdirAll(filepath.Join(topDir, subdir), 0755)
c.Assert(err, check.IsNil)
@@ -1102,9 +972,7 @@ func (s *CopySuite) TestCopyAtomicExtension(c *check.C) {
// copyWithSignedIdentity creates a copy of an unsigned image, adding a signature for an unrelated identity
// This should be easier than using standalone-sign.
func copyWithSignedIdentity(c *check.C, src, dest, signedIdentity, signBy, registriesDir string) {
topDir, err := ioutil.TempDir("", "copyWithSignedIdentity")
c.Assert(err, check.IsNil)
defer os.RemoveAll(topDir)
topDir := c.MkDir()
signingDir := filepath.Join(topDir, "signing-temp")
assertSkopeoSucceeds(c, "", "copy", "--src-tls-verify=false", src, "dir:"+signingDir)
@@ -1126,9 +994,7 @@ func (s *CopySuite) TestCopyVerifyingMirroredSignatures(c *check.C) {
c.Skip(fmt.Sprintf("Signing not supported: %v", err))
}
topDir, err := ioutil.TempDir("", "mirrored-signatures")
c.Assert(err, check.IsNil)
defer os.RemoveAll(topDir)
topDir := c.MkDir()
registriesDir := filepath.Join(topDir, "registries.d") // An empty directory to disable sigstore use
dirDest := "dir:" + filepath.Join(topDir, "unused-dest")
@@ -1189,9 +1055,7 @@ func (s *CopySuite) TestCopyVerifyingMirroredSignatures(c *check.C) {
func (s *SkopeoSuite) TestCopySrcWithAuth(c *check.C) {
assertSkopeoSucceeds(c, "", "--tls-verify=false", "copy", "--dest-creds=testuser:testpassword", testFQIN, fmt.Sprintf("docker://%s/busybox:latest", s.regV2WithAuth.url))
dir1, err := ioutil.TempDir("", "copy-1")
c.Assert(err, check.IsNil)
defer os.RemoveAll(dir1)
dir1 := c.MkDir()
assertSkopeoSucceeds(c, "", "--tls-verify=false", "copy", "--src-creds=testuser:testpassword", fmt.Sprintf("docker://%s/busybox:latest", s.regV2WithAuth.url), "dir:"+dir1)
}
@@ -1205,9 +1069,7 @@ func (s *SkopeoSuite) TestCopySrcAndDestWithAuth(c *check.C) {
}
func (s *CopySuite) TestCopyNoPanicOnHTTPResponseWithoutTLSVerifyFalse(c *check.C) {
topDir, err := ioutil.TempDir("", "no-panic-on-https-response-without-tls-verify-false")
c.Assert(err, check.IsNil)
defer os.RemoveAll(topDir)
topDir := c.MkDir()
const ourRegistry = "docker://" + v2DockerRegistryURL + "/"
@@ -1223,9 +1085,7 @@ func (s *CopySuite) TestCopySchemaConversion(c *check.C) {
}
func (s *CopySuite) TestCopyManifestConversion(c *check.C) {
topDir, err := ioutil.TempDir("", "manifest-conversion")
c.Assert(err, check.IsNil)
defer os.RemoveAll(topDir)
topDir := c.MkDir()
srcDir := filepath.Join(topDir, "source")
destDir1 := filepath.Join(topDir, "dest1")
destDir2 := filepath.Join(topDir, "dest2")
@@ -1249,18 +1109,14 @@ func (s *CopySuite) TestCopyManifestConversion(c *check.C) {
}
func (s *CopySuite) TestCopyPreserveDigests(c *check.C) {
topDir, err := ioutil.TempDir("", "preserve-digests")
c.Assert(err, check.IsNil)
defer os.RemoveAll(topDir)
topDir := c.MkDir()
assertSkopeoSucceeds(c, "", "copy", knownListImage, "--multi-arch=all", "--preserve-digests", "dir:"+topDir)
assertSkopeoFails(c, ".*Instructed to preserve digests.*", "copy", knownListImage, "--multi-arch=all", "--preserve-digests", "--format=oci", "dir:"+topDir)
}
func (s *CopySuite) testCopySchemaConversionRegistries(c *check.C, schema1Registry, schema2Registry string) {
topDir, err := ioutil.TempDir("", "schema-conversion")
c.Assert(err, check.IsNil)
defer os.RemoveAll(topDir)
topDir := c.MkDir()
for _, subdir := range []string{"input1", "input2", "dest2"} {
err := os.MkdirAll(filepath.Join(topDir, subdir), 0755)
c.Assert(err, check.IsNil)
@@ -1294,16 +1150,14 @@ func (s *CopySuite) testCopySchemaConversionRegistries(c *check.C, schema1Regist
const regConfFixture = "./fixtures/registries.conf"
func (s *SkopeoSuite) TestSuccessCopySrcWithMirror(c *check.C) {
dir, err := ioutil.TempDir("", "copy-mirror")
c.Assert(err, check.IsNil)
dir := c.MkDir()
assertSkopeoSucceeds(c, "", "--registries-conf="+regConfFixture, "copy",
"docker://mirror.invalid/busybox", "dir:"+dir)
}
func (s *SkopeoSuite) TestFailureCopySrcWithMirrorsUnavailable(c *check.C) {
dir, err := ioutil.TempDir("", "copy-mirror")
c.Assert(err, check.IsNil)
dir := c.MkDir()
// .invalid domains are, per RFC 6761, supposed to result in NXDOMAIN.
// With systemd-resolved (used only via NSS?), we instead seem to get “Temporary failure in name resolution”
@@ -1312,16 +1166,14 @@ func (s *SkopeoSuite) TestFailureCopySrcWithMirrorsUnavailable(c *check.C) {
}
func (s *SkopeoSuite) TestSuccessCopySrcWithMirrorAndPrefix(c *check.C) {
dir, err := ioutil.TempDir("", "copy-mirror")
c.Assert(err, check.IsNil)
dir := c.MkDir()
assertSkopeoSucceeds(c, "", "--registries-conf="+regConfFixture, "copy",
"docker://gcr.invalid/foo/bar/busybox", "dir:"+dir)
}
func (s *SkopeoSuite) TestFailureCopySrcWithMirrorAndPrefixUnavailable(c *check.C) {
dir, err := ioutil.TempDir("", "copy-mirror")
c.Assert(err, check.IsNil)
dir := c.MkDir()
// .invalid domains are, per RFC 6761, supposed to result in NXDOMAIN.
// With systemd-resolved (used only via NSS?), we instead seem to get “Temporary failure in name resolution”


@@ -33,10 +33,7 @@ type openshiftCluster struct {
// in isolated test environment.
func startOpenshiftCluster(c *check.C) *openshiftCluster {
cluster := &openshiftCluster{}
dir, err := ioutil.TempDir("", "openshift-cluster")
c.Assert(err, check.IsNil)
cluster.workingDir = dir
cluster.workingDir = c.MkDir()
cluster.startMaster(c)
cluster.prepareRegistryConfig(c)
@@ -258,12 +255,12 @@ func (cluster *openshiftCluster) relaxImageSignerPermissions(c *check.C) {
// tearDown stops the cluster services and deletes (only some!) of the state.
func (cluster *openshiftCluster) tearDown(c *check.C) {
for i := len(cluster.processes) - 1; i >= 0; i-- {
cluster.processes[i].Process.Kill()
}
if cluster.workingDir != "" {
os.RemoveAll(cluster.workingDir)
// It's undocumented what Kill() returns if the process has terminated,
// so we couldn't check just for that. This is running in a container anyway…
_ = cluster.processes[i].Process.Kill()
}
if cluster.dockerDir != "" {
os.RemoveAll(cluster.dockerDir)
err := os.RemoveAll(cluster.dockerDir)
c.Assert(err, check.IsNil)
}
}


@@ -57,7 +57,7 @@ type pipefd struct {
fd *os.File
}
func (self *proxy) call(method string, args []interface{}) (rval interface{}, fd *pipefd, err error) {
func (p *proxy) call(method string, args []interface{}) (rval interface{}, fd *pipefd, err error) {
req := request{
Method: method,
Args: args,
@@ -66,7 +66,7 @@ func (self *proxy) call(method string, args []interface{}) (rval interface{}, fd
if err != nil {
return
}
n, err := self.c.Write(reqbuf)
n, err := p.c.Write(reqbuf)
if err != nil {
return
}
@@ -76,7 +76,7 @@ func (self *proxy) call(method string, args []interface{}) (rval interface{}, fd
}
oob := make([]byte, syscall.CmsgSpace(1))
replybuf := make([]byte, maxMsgSize)
n, oobn, _, _, err := self.c.ReadMsgUnix(replybuf, oob)
n, oobn, _, _, err := p.c.ReadMsgUnix(replybuf, oob)
if err != nil {
err = fmt.Errorf("reading reply: %v", err)
return
@@ -119,9 +119,9 @@ func (self *proxy) call(method string, args []interface{}) (rval interface{}, fd
return
}
func (self *proxy) callNoFd(method string, args []interface{}) (rval interface{}, err error) {
func (p *proxy) callNoFd(method string, args []interface{}) (rval interface{}, err error) {
var fd *pipefd
rval, fd, err = self.call(method, args)
rval, fd, err = p.call(method, args)
if err != nil {
return
}
@@ -132,9 +132,9 @@ func (self *proxy) callNoFd(method string, args []interface{}) (rval interface{}
return rval, nil
}
func (self *proxy) callReadAllBytes(method string, args []interface{}) (rval interface{}, buf []byte, err error) {
func (p *proxy) callReadAllBytes(method string, args []interface{}) (rval interface{}, buf []byte, err error) {
var fd *pipefd
rval, fd, err = self.call(method, args)
rval, fd, err = p.call(method, args)
if err != nil {
return
}
@@ -150,7 +150,7 @@ func (self *proxy) callReadAllBytes(method string, args []interface{}) (rval int
err: err,
}
}()
_, err = self.callNoFd("FinishPipe", []interface{}{fd.id})
_, err = p.callNoFd("FinishPipe", []interface{}{fd.id})
if err != nil {
return
}
@@ -241,7 +241,7 @@ func runTestGetManifestAndConfig(p *proxy, img string) error {
}
imgid := uint32(imgidv)
v, manifestBytes, err := p.callReadAllBytes("GetManifest", []interface{}{imgid})
_, manifestBytes, err := p.callReadAllBytes("GetManifest", []interface{}{imgid})
if err != nil {
return err
}
@@ -250,7 +250,7 @@ func runTestGetManifestAndConfig(p *proxy, img string) error {
return err
}
v, configBytes, err := p.callReadAllBytes("GetFullConfig", []interface{}{imgid})
_, configBytes, err := p.callReadAllBytes("GetFullConfig", []interface{}{imgid})
if err != nil {
return err
}
@@ -269,7 +269,7 @@ func runTestGetManifestAndConfig(p *proxy, img string) error {
}
// Also test this legacy interface
v, ctrconfigBytes, err := p.callReadAllBytes("GetConfig", []interface{}{imgid})
_, ctrconfigBytes, err := p.callReadAllBytes("GetConfig", []interface{}{imgid})
if err != nil {
return err
}
@@ -285,6 +285,9 @@ func runTestGetManifestAndConfig(p *proxy, img string) error {
}
_, err = p.callNoFd("CloseImage", []interface{}{imgid})
if err != nil {
return err
}
return nil
}
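
In the proxy client above, the data for GetManifest and friends comes back over a pipe whose file descriptor the server attaches to its reply as SCM_RIGHTS ancillary data; that is what the `oob` buffer passed to ReadMsgUnix receives. A hedged sketch of just that receiving mechanism, independent of the proxy's own message format (buffer sizes and names are illustrative):

```go
package proxyfd

import (
	"net"
	"os"
	"syscall"
)

// recvWithFD reads one reply from a unix-domain socket and, if the sender
// attached a file descriptor as SCM_RIGHTS ancillary data (the way the proxy
// hands back its data pipes), returns that descriptor wrapped in an *os.File.
func recvWithFD(conn *net.UnixConn) ([]byte, *os.File, error) {
	buf := make([]byte, 32*1024)
	oob := make([]byte, syscall.CmsgSpace(4)) // room for a single int fd

	n, oobn, _, _, err := conn.ReadMsgUnix(buf, oob)
	if err != nil {
		return nil, nil, err
	}

	var pipe *os.File
	if oobn > 0 {
		msgs, err := syscall.ParseSocketControlMessage(oob[:oobn])
		if err != nil {
			return nil, nil, err
		}
		if len(msgs) > 0 {
			fds, err := syscall.ParseUnixRights(&msgs[0])
			if err != nil {
				return nil, nil, err
			}
			if len(fds) > 0 {
				pipe = os.NewFile(uintptr(fds[0]), "proxy-pipe")
			}
		}
	}
	return buf[:n], pipe, nil
}
```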


@@ -20,7 +20,6 @@ const (
type testRegistryV2 struct {
cmd *exec.Cmd
url string
dir string
username string
password string
email string
@@ -45,10 +44,7 @@ func setupRegistryV2At(c *check.C, url string, auth, schema1 bool) *testRegistry
}
func newTestRegistryV2At(c *check.C, url string, auth, schema1 bool) (*testRegistryV2, error) {
tmp, err := ioutil.TempDir("", "registry-test-")
if err != nil {
return nil, err
}
tmp := c.MkDir()
template := `version: 0.1
loglevel: debug
storage:
@@ -86,7 +82,6 @@ http:
return nil, err
}
if _, err := fmt.Fprintf(config, template, tmp, url, htpasswd); err != nil {
os.RemoveAll(tmp)
return nil, err
}
@@ -98,7 +93,6 @@ http:
cmd := exec.Command(binary, confPath)
consumeAndLogOutputs(c, fmt.Sprintf("registry-%s", url), cmd)
if err := cmd.Start(); err != nil {
os.RemoveAll(tmp)
if os.IsNotExist(err) {
c.Skip(err.Error())
}
@@ -107,7 +101,6 @@ http:
return &testRegistryV2{
cmd: cmd,
url: url,
dir: tmp,
username: username,
password: password,
email: email,
@@ -126,7 +119,8 @@ func (t *testRegistryV2) Ping() error {
return nil
}
func (t *testRegistryV2) Close() {
t.cmd.Process.Kill()
os.RemoveAll(t.dir)
func (t *testRegistryV2) tearDown(c *check.C) {
// It's undocumented what Kill() returns if the process has terminated,
// so we couldn't check just for that. This is running in a container anyway…
_ = t.cmd.Process.Kill()
}


@@ -21,7 +21,6 @@ func init() {
}
type SigningSuite struct {
gpgHome string
fingerprint string
}
@@ -40,25 +39,18 @@ func (s *SigningSuite) SetUpSuite(c *check.C) {
_, err := exec.LookPath(skopeoBinary)
c.Assert(err, check.IsNil)
s.gpgHome, err = ioutil.TempDir("", "skopeo-gpg")
c.Assert(err, check.IsNil)
os.Setenv("GNUPGHOME", s.gpgHome)
gpgHome := c.MkDir()
os.Setenv("GNUPGHOME", gpgHome)
runCommandWithInput(c, "Key-Type: RSA\nName-Real: Testing user\n%no-protection\n%commit\n", gpgBinary, "--homedir", s.gpgHome, "--batch", "--gen-key")
runCommandWithInput(c, "Key-Type: RSA\nName-Real: Testing user\n%no-protection\n%commit\n", gpgBinary, "--homedir", gpgHome, "--batch", "--gen-key")
lines, err := exec.Command(gpgBinary, "--homedir", s.gpgHome, "--with-colons", "--no-permission-warning", "--fingerprint").Output()
lines, err := exec.Command(gpgBinary, "--homedir", gpgHome, "--with-colons", "--no-permission-warning", "--fingerprint").Output()
c.Assert(err, check.IsNil)
s.fingerprint, err = findFingerprint(lines)
c.Assert(err, check.IsNil)
}
func (s *SigningSuite) TearDownSuite(c *check.C) {
if s.gpgHome != "" {
err := os.RemoveAll(s.gpgHome)
c.Assert(err, check.IsNil)
}
s.gpgHome = ""
os.Unsetenv("GNUPGHOME")
}
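
SetUpSuite above generates a throwaway signing key by feeding a batch script to gpg and then parses the fingerprint out of `--with-colons` output. A standalone sketch of the same two steps using os/exec directly, assuming gpg is installed; the fpr/field-10 parsing stands in for the suite's findFingerprint helper:

```go
package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"strings"
)

func main() {
	home, err := os.MkdirTemp("", "gpg-example")
	if err != nil {
		log.Fatal(err)
	}
	defer os.RemoveAll(home)

	// Same unattended key generation the suite feeds to gpg --batch --gen-key.
	batch := "Key-Type: RSA\nName-Real: Testing user\n%no-protection\n%commit\n"
	gen := exec.Command("gpg", "--homedir", home, "--batch", "--gen-key")
	gen.Stdin = strings.NewReader(batch)
	if out, err := gen.CombinedOutput(); err != nil {
		log.Fatalf("gen-key: %v\n%s", err, out)
	}

	// In --with-colons output the fingerprint is field 10 of "fpr" records.
	out, err := exec.Command("gpg", "--homedir", home, "--with-colons",
		"--no-permission-warning", "--fingerprint").Output()
	if err != nil {
		log.Fatal(err)
	}
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		fields := strings.Split(line, ":")
		if fields[0] == "fpr" && len(fields) > 9 {
			fmt.Println("fingerprint:", fields[9])
			break
		}
	}
}
```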


@@ -40,7 +40,6 @@ func init() {
type SyncSuite struct {
cluster *openshiftCluster
registry *testRegistryV2
gpgHome string
}
func (s *SyncSuite) SetUpSuite(c *check.C) {
@@ -74,10 +73,8 @@ func (s *SyncSuite) SetUpSuite(c *check.C) {
// FIXME: Set up TLS for the docker registry port instead of using "--tls-verify=false" all over the place.
s.registry = setupRegistryV2At(c, v2DockerRegistryURL, registryAuth, registrySchema1)
gpgHome, err := ioutil.TempDir("", "skopeo-gpg")
c.Assert(err, check.IsNil)
s.gpgHome = gpgHome
os.Setenv("GNUPGHOME", s.gpgHome)
gpgHome := c.MkDir()
os.Setenv("GNUPGHOME", gpgHome)
for _, key := range []string{"personal", "official"} {
batchInput := fmt.Sprintf("Key-Type: RSA\nName-Real: Test key - %s\nName-email: %s@example.com\n%%no-protection\n%%commit\n",
@@ -85,7 +82,7 @@ func (s *SyncSuite) SetUpSuite(c *check.C) {
runCommandWithInput(c, batchInput, gpgBinary, "--batch", "--gen-key")
out := combinedOutputOfCommand(c, gpgBinary, "--armor", "--export", fmt.Sprintf("%s@example.com", key))
err := ioutil.WriteFile(filepath.Join(s.gpgHome, fmt.Sprintf("%s-pubkey.gpg", key)),
err := ioutil.WriteFile(filepath.Join(gpgHome, fmt.Sprintf("%s-pubkey.gpg", key)),
[]byte(out), 0600)
c.Assert(err, check.IsNil)
}
@@ -96,11 +93,8 @@ func (s *SyncSuite) TearDownSuite(c *check.C) {
return
}
if s.gpgHome != "" {
os.RemoveAll(s.gpgHome)
}
if s.registry != nil {
s.registry.Close()
s.registry.tearDown(c)
}
if s.cluster != nil {
s.cluster.tearDown(c)
@@ -108,9 +102,7 @@ func (s *SyncSuite) TearDownSuite(c *check.C) {
}
func (s *SyncSuite) TestDocker2DirTagged(c *check.C) {
tmpDir, err := ioutil.TempDir("", "skopeo-sync-test")
c.Assert(err, check.IsNil)
defer os.RemoveAll(tmpDir)
tmpDir := c.MkDir()
// FIXME: It would be nice to use one of the local Docker registries instead of needing an Internet connection.
image := pullableTaggedImage
@@ -136,9 +128,7 @@ func (s *SyncSuite) TestDocker2DirTagged(c *check.C) {
}
func (s *SyncSuite) TestDocker2DirTaggedAll(c *check.C) {
tmpDir, err := ioutil.TempDir("", "skopeo-sync-test")
c.Assert(err, check.IsNil)
defer os.RemoveAll(tmpDir)
tmpDir := c.MkDir()
// FIXME: It would be nice to use one of the local Docker registries instead of needing an Internet connection.
image := pullableTaggedManifestList
@@ -164,16 +154,14 @@ func (s *SyncSuite) TestDocker2DirTaggedAll(c *check.C) {
}
func (s *SyncSuite) TestPreserveDigests(c *check.C) {
tmpDir, err := ioutil.TempDir("", "skopeo-sync-test")
c.Assert(err, check.IsNil)
defer os.RemoveAll(tmpDir)
tmpDir := c.MkDir()
// FIXME: It would be nice to use one of the local Docker registries instead of needing an Internet connection.
image := pullableTaggedManifestList
// copy docker => dir
assertSkopeoSucceeds(c, "", "copy", "--all", "--preserve-digests", "docker://"+image, "dir:"+tmpDir)
_, err = os.Stat(path.Join(tmpDir, "manifest.json"))
_, err := os.Stat(path.Join(tmpDir, "manifest.json"))
c.Assert(err, check.IsNil)
assertSkopeoFails(c, ".*Instructed to preserve digests.*", "copy", "--all", "--preserve-digests", "--format=oci", "docker://"+image, "dir:"+tmpDir)
@@ -186,8 +174,7 @@ func (s *SyncSuite) TestScoped(c *check.C) {
c.Assert(err, check.IsNil)
imagePath := imageRef.DockerReference().String()
dir1, err := ioutil.TempDir("", "skopeo-sync-test")
c.Assert(err, check.IsNil)
dir1 := c.MkDir()
assertSkopeoSucceeds(c, "", "sync", "--src", "docker", "--dest", "dir", image, dir1)
_, err = os.Stat(path.Join(dir1, path.Base(imagePath), "manifest.json"))
c.Assert(err, check.IsNil)
@@ -195,8 +182,6 @@ func (s *SyncSuite) TestScoped(c *check.C) {
assertSkopeoSucceeds(c, "", "sync", "--scoped", "--src", "docker", "--dest", "dir", image, dir1)
_, err = os.Stat(path.Join(dir1, imagePath, "manifest.json"))
c.Assert(err, check.IsNil)
os.RemoveAll(dir1)
}
func (s *SyncSuite) TestDirIsNotOverwritten(c *check.C) {
@@ -210,8 +195,7 @@ func (s *SyncSuite) TestDirIsNotOverwritten(c *check.C) {
assertSkopeoSucceeds(c, "", "copy", "--dest-tls-verify=false", "docker://"+image, "docker://"+path.Join(v2DockerRegistryURL, reference.Path(imageRef.DockerReference())))
//sync upstream image to dir, not scoped
dir1, err := ioutil.TempDir("", "skopeo-sync-test")
c.Assert(err, check.IsNil)
dir1 := c.MkDir()
assertSkopeoSucceeds(c, "", "sync", "--src", "docker", "--dest", "dir", image, dir1)
_, err = os.Stat(path.Join(dir1, path.Base(imagePath), "manifest.json"))
c.Assert(err, check.IsNil)
@@ -226,14 +210,10 @@ func (s *SyncSuite) TestDirIsNotOverwritten(c *check.C) {
assertSkopeoSucceeds(c, "", "sync", "--scoped", "--src-tls-verify=false", "--src", "docker", "--dest", "dir", path.Join(v2DockerRegistryURL, reference.Path(imageRef.DockerReference())), dir1)
_, err = os.Stat(path.Join(dir1, imagePath, "manifest.json"))
c.Assert(err, check.IsNil)
os.RemoveAll(dir1)
}
func (s *SyncSuite) TestDocker2DirUntagged(c *check.C) {
tmpDir, err := ioutil.TempDir("", "skopeo-sync-test")
c.Assert(err, check.IsNil)
defer os.RemoveAll(tmpDir)
tmpDir := c.MkDir()
// FIXME: It would be nice to use one of the local Docker registries instead of needing an Internet connection.
image := pullableRepo
@@ -255,9 +235,7 @@ func (s *SyncSuite) TestDocker2DirUntagged(c *check.C) {
}
func (s *SyncSuite) TestYamlUntagged(c *check.C) {
tmpDir, err := ioutil.TempDir("", "skopeo-sync-test")
c.Assert(err, check.IsNil)
defer os.RemoveAll(tmpDir)
tmpDir := c.MkDir()
dir1 := path.Join(tmpDir, "dir1")
image := pullableRepo
@@ -278,7 +256,8 @@ func (s *SyncSuite) TestYamlUntagged(c *check.C) {
// sync to the local registry
yamlFile := path.Join(tmpDir, "registries.yaml")
ioutil.WriteFile(yamlFile, []byte(yamlConfig), 0644)
err = ioutil.WriteFile(yamlFile, []byte(yamlConfig), 0644)
c.Assert(err, check.IsNil)
assertSkopeoSucceeds(c, "", "sync", "--scoped", "--src", "yaml", "--dest", "docker", "--dest-tls-verify=false", yamlFile, v2DockerRegistryURL)
// sync back from local registry to a folder
os.Remove(yamlFile)
@@ -289,7 +268,8 @@ func (s *SyncSuite) TestYamlUntagged(c *check.C) {
%s: []
`, v2DockerRegistryURL, imagePath)
ioutil.WriteFile(yamlFile, []byte(yamlConfig), 0644)
err = ioutil.WriteFile(yamlFile, []byte(yamlConfig), 0644)
c.Assert(err, check.IsNil)
assertSkopeoSucceeds(c, "", "sync", "--scoped", "--src", "yaml", "--dest", "dir", yamlFile, dir1)
sysCtx = types.SystemContext{
@@ -319,9 +299,7 @@ func (s *SyncSuite) TestYamlUntagged(c *check.C) {
}
func (s *SyncSuite) TestYamlRegex2Dir(c *check.C) {
tmpDir, err := ioutil.TempDir("", "skopeo-sync-test")
c.Assert(err, check.IsNil)
defer os.RemoveAll(tmpDir)
tmpDir := c.MkDir()
dir1 := path.Join(tmpDir, "dir1")
yamlConfig := `
@@ -334,7 +312,8 @@ k8s.gcr.io:
c.Assert(nTags, check.Not(check.Equals), 0)
yamlFile := path.Join(tmpDir, "registries.yaml")
ioutil.WriteFile(yamlFile, []byte(yamlConfig), 0644)
err := ioutil.WriteFile(yamlFile, []byte(yamlConfig), 0644)
c.Assert(err, check.IsNil)
assertSkopeoSucceeds(c, "", "sync", "--scoped", "--src", "yaml", "--dest", "dir", yamlFile, dir1)
nManifests := 0
@@ -353,9 +332,7 @@ k8s.gcr.io:
}
func (s *SyncSuite) TestYamlDigest2Dir(c *check.C) {
tmpDir, err := ioutil.TempDir("", "skopeo-sync-test")
c.Assert(err, check.IsNil)
defer os.RemoveAll(tmpDir)
tmpDir := c.MkDir()
dir1 := path.Join(tmpDir, "dir1")
yamlConfig := `
@@ -365,7 +342,8 @@ k8s.gcr.io:
- sha256:59eec8837a4d942cc19a52b8c09ea75121acc38114a2c68b98983ce9356b8610
`
yamlFile := path.Join(tmpDir, "registries.yaml")
ioutil.WriteFile(yamlFile, []byte(yamlConfig), 0644)
err := ioutil.WriteFile(yamlFile, []byte(yamlConfig), 0644)
c.Assert(err, check.IsNil)
assertSkopeoSucceeds(c, "", "sync", "--scoped", "--src", "yaml", "--dest", "dir", yamlFile, dir1)
nManifests := 0
@@ -384,9 +362,7 @@ k8s.gcr.io:
}
func (s *SyncSuite) TestYaml2Dir(c *check.C) {
tmpDir, err := ioutil.TempDir("", "skopeo-sync-test")
c.Assert(err, check.IsNil)
defer os.RemoveAll(tmpDir)
tmpDir := c.MkDir()
dir1 := path.Join(tmpDir, "dir1")
yamlConfig := `
@@ -417,7 +393,8 @@ quay.io:
c.Assert(nTags, check.Not(check.Equals), 0)
yamlFile := path.Join(tmpDir, "registries.yaml")
ioutil.WriteFile(yamlFile, []byte(yamlConfig), 0644)
err := ioutil.WriteFile(yamlFile, []byte(yamlConfig), 0644)
c.Assert(err, check.IsNil)
assertSkopeoSucceeds(c, "", "sync", "--scoped", "--src", "yaml", "--dest", "dir", yamlFile, dir1)
nManifests := 0
@@ -437,9 +414,7 @@ quay.io:
func (s *SyncSuite) TestYamlTLSVerify(c *check.C) {
const localRegURL = "docker://" + v2DockerRegistryURL + "/"
tmpDir, err := ioutil.TempDir("", "skopeo-sync-test")
c.Assert(err, check.IsNil)
defer os.RemoveAll(tmpDir)
tmpDir := c.MkDir()
dir1 := path.Join(tmpDir, "dir1")
image := pullableRepoWithLatestTag
tag := "latest"
@@ -481,7 +456,8 @@ func (s *SyncSuite) TestYamlTLSVerify(c *check.C) {
for _, cfg := range testCfg {
yamlConfig := fmt.Sprintf(yamlTemplate, v2DockerRegistryURL, cfg.tlsVerify, image, tag)
yamlFile := path.Join(tmpDir, "registries.yaml")
ioutil.WriteFile(yamlFile, []byte(yamlConfig), 0644)
err := ioutil.WriteFile(yamlFile, []byte(yamlConfig), 0644)
c.Assert(err, check.IsNil)
cfg.checker(c, cfg.msg, "sync", "--scoped", "--src", "yaml", "--dest", "dir", yamlFile, dir1)
os.Remove(yamlFile)
@@ -491,9 +467,7 @@ func (s *SyncSuite) TestYamlTLSVerify(c *check.C) {
}
func (s *SyncSuite) TestSyncManifestOutput(c *check.C) {
tmpDir, err := ioutil.TempDir("", "sync-manifest-output")
c.Assert(err, check.IsNil)
defer os.RemoveAll(tmpDir)
tmpDir := c.MkDir()
destDir1 := filepath.Join(tmpDir, "dest1")
destDir2 := filepath.Join(tmpDir, "dest2")
@@ -513,9 +487,7 @@ func (s *SyncSuite) TestSyncManifestOutput(c *check.C) {
func (s *SyncSuite) TestDocker2DockerTagged(c *check.C) {
const localRegURL = "docker://" + v2DockerRegistryURL + "/"
tmpDir, err := ioutil.TempDir("", "skopeo-sync-test")
c.Assert(err, check.IsNil)
defer os.RemoveAll(tmpDir)
tmpDir := c.MkDir()
// FIXME: It would be nice to use one of the local Docker registries instead of needing an Internet connection.
image := pullableTaggedImage
@@ -546,15 +518,13 @@ func (s *SyncSuite) TestDocker2DockerTagged(c *check.C) {
func (s *SyncSuite) TestDir2DockerTagged(c *check.C) {
const localRegURL = "docker://" + v2DockerRegistryURL + "/"
tmpDir, err := ioutil.TempDir("", "skopeo-sync-test")
c.Assert(err, check.IsNil)
defer os.RemoveAll(tmpDir)
tmpDir := c.MkDir()
// FIXME: It would be nice to use one of the local Docker registries instead of needing an Internet connection.
image := pullableRepoWithLatestTag
dir1 := path.Join(tmpDir, "dir1")
err = os.Mkdir(dir1, 0755)
err := os.Mkdir(dir1, 0755)
c.Assert(err, check.IsNil)
dir2 := path.Join(tmpDir, "dir2")
err = os.Mkdir(dir2, 0755)
@@ -586,9 +556,7 @@ func (s *SyncSuite) TestDir2DockerTagged(c *check.C) {
}
func (s *SyncSuite) TestFailsWithDir2Dir(c *check.C) {
tmpDir, err := ioutil.TempDir("", "skopeo-sync-test")
c.Assert(err, check.IsNil)
defer os.RemoveAll(tmpDir)
tmpDir := c.MkDir()
dir1 := path.Join(tmpDir, "dir1")
dir2 := path.Join(tmpDir, "dir2")
@@ -598,9 +566,7 @@ func (s *SyncSuite) TestFailsWithDir2Dir(c *check.C) {
}
func (s *SyncSuite) TestFailsNoSourceImages(c *check.C) {
tmpDir, err := ioutil.TempDir("", "skopeo-sync-test")
c.Assert(err, check.IsNil)
defer os.RemoveAll(tmpDir)
tmpDir := c.MkDir()
assertSkopeoFails(c, ".*No images to sync found in .*",
"sync", "--scoped", "--dest-tls-verify=false", "--src", "dir", "--dest", "docker", tmpDir, v2DockerRegistryURL)
@@ -612,9 +578,7 @@ func (s *SyncSuite) TestFailsNoSourceImages(c *check.C) {
func (s *SyncSuite) TestFailsWithDockerSourceNoRegistry(c *check.C) {
const regURL = "google.com/namespace/imagename"
tmpDir, err := ioutil.TempDir("", "skopeo-sync-test")
c.Assert(err, check.IsNil)
defer os.RemoveAll(tmpDir)
tmpDir := c.MkDir()
//untagged
assertSkopeoFails(c, ".*invalid status code from registry 404.*",
@@ -627,9 +591,7 @@ func (s *SyncSuite) TestFailsWithDockerSourceNoRegistry(c *check.C) {
func (s *SyncSuite) TestFailsWithDockerSourceUnauthorized(c *check.C) {
const repo = "privateimagenamethatshouldnotbepublic"
tmpDir, err := ioutil.TempDir("", "skopeo-sync-test")
c.Assert(err, check.IsNil)
defer os.RemoveAll(tmpDir)
tmpDir := c.MkDir()
//untagged
assertSkopeoFails(c, ".*Registry disallows tag list retrieval.*",
@@ -642,9 +604,7 @@ func (s *SyncSuite) TestFailsWithDockerSourceUnauthorized(c *check.C) {
func (s *SyncSuite) TestFailsWithDockerSourceNotExisting(c *check.C) {
repo := path.Join(v2DockerRegistryURL, "imagedoesnotexist")
tmpDir, err := ioutil.TempDir("", "skopeo-sync-test")
c.Assert(err, check.IsNil)
defer os.RemoveAll(tmpDir)
tmpDir := c.MkDir()
//untagged
assertSkopeoFails(c, ".*invalid status code from registry 404.*",
@@ -657,9 +617,9 @@ func (s *SyncSuite) TestFailsWithDockerSourceNotExisting(c *check.C) {
func (s *SyncSuite) TestFailsWithDirSourceNotExisting(c *check.C) {
// Make sure the dir does not exist!
tmpDir, err := ioutil.TempDir("", "skopeo-sync-test")
c.Assert(err, check.IsNil)
err = os.RemoveAll(tmpDir)
tmpDir := c.MkDir()
tmpDir = filepath.Join(tmpDir, "this-does-not-exist")
err := os.RemoveAll(tmpDir)
c.Assert(err, check.IsNil)
_, err = os.Stat(path.Join(tmpDir))
c.Check(os.IsNotExist(err), check.Equals, true)


@@ -32,7 +32,8 @@ load helpers
config_digest=$(jq -r '.config.digest' <<<"$inspect_local_raw")
# Each SHA-named layer file (but not the config) must be listed in the output of 'inspect'.
# As of Skopeo 1.6, (skopeo inspect)'s output lists layer digests,
# In all existing versions of Skopeo (with 1.6 being the current as of this comment),
# the output of 'inspect' lists layer digests,
# but not the digest of the config blob ($config_digest), if any.
layers=$(jq -r '.Layers' <<<"$inspect_local")
for sha in $(find $workdir -type f | xargs -l1 basename | egrep '^[0-9a-f]{64}$'); do


@@ -0,0 +1,28 @@
#!/usr/bin/env bats
#
# list-tags tests
#
load helpers
# list from registry
@test "list-tags: remote repository on a registry" {
local remote_image=quay.io/libpod/alpine_labels
run_skopeo list-tags "docker://${remote_image}"
expect_output --substring "quay.io/libpod/alpine_labels"
expect_output --substring "latest"
}
# list from a local docker-archive file
@test "list-tags: from a docker-archive file" {
local file_name=${TEST_SOURCE_DIR}/testdata/docker-two-images.tar.xz
run_skopeo list-tags docker-archive:$file_name
expect_output --substring "example.com/empty:latest"
expect_output --substring "example.com/empty/but:different"
}
# vim: filetype=sh
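
The new test exercises `skopeo list-tags` against a registry and against a docker-archive file. A hedged Go sketch of driving the same command from code and decoding its output; the `Tags` field name is assumed from the command's JSON output, and the archive path is illustrative:

```go
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

// tagList mirrors the JSON printed by `skopeo list-tags`; the field name is
// an assumption about the command's output shape ({"Tags": [...]}).
type tagList struct {
	Tags []string `json:"Tags"`
}

func main() {
	// Illustrative archive path; the bats test uses its own testdata tarball.
	out, err := exec.Command("skopeo", "list-tags",
		"docker-archive:./docker-two-images.tar.xz").Output()
	if err != nil {
		log.Fatal(err)
	}
	var tl tagList
	if err := json.Unmarshal(out, &tl); err != nil {
		log.Fatal(err)
	}
	for _, t := range tl.Tags {
		fmt.Println(t)
	}
}
```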

Binary file not shown.


@@ -113,6 +113,69 @@ func BasicInfoHeader(name string, size int64, fileInfo *winio.FileBasicInfo) *ta
return hdr
}
// SecurityDescriptorFromTarHeader reads the SDDL associated with the header of the current file
// from the tar header and returns the security descriptor into a byte slice.
func SecurityDescriptorFromTarHeader(hdr *tar.Header) ([]byte, error) {
// Maintaining old SDDL-based behavior for backward
// compatibility. All new tar headers written by this library
// will have raw binary for the security descriptor.
var sd []byte
var err error
if sddl, ok := hdr.PAXRecords[hdrSecurityDescriptor]; ok {
sd, err = winio.SddlToSecurityDescriptor(sddl)
if err != nil {
return nil, err
}
}
if sdraw, ok := hdr.PAXRecords[hdrRawSecurityDescriptor]; ok {
sd, err = base64.StdEncoding.DecodeString(sdraw)
if err != nil {
return nil, err
}
}
return sd, nil
}
// ExtendedAttributesFromTarHeader reads the EAs associated with the header of the
// current file from the tar header and returns it as a byte slice.
func ExtendedAttributesFromTarHeader(hdr *tar.Header) ([]byte, error) {
var eas []winio.ExtendedAttribute
var eadata []byte
var err error
for k, v := range hdr.PAXRecords {
if !strings.HasPrefix(k, hdrEaPrefix) {
continue
}
data, err := base64.StdEncoding.DecodeString(v)
if err != nil {
return nil, err
}
eas = append(eas, winio.ExtendedAttribute{
Name: k[len(hdrEaPrefix):],
Value: data,
})
}
if len(eas) != 0 {
eadata, err = winio.EncodeExtendedAttributes(eas)
if err != nil {
return nil, err
}
}
return eadata, nil
}
// EncodeReparsePointFromTarHeader reads the ReparsePoint structure from the tar header
// and encodes it into a byte slice. The file for which this function is called must be a
// symlink.
func EncodeReparsePointFromTarHeader(hdr *tar.Header) []byte {
_, isMountPoint := hdr.PAXRecords[hdrMountPoint]
rp := winio.ReparsePoint{
Target: filepath.FromSlash(hdr.Linkname),
IsMountPoint: isMountPoint,
}
return winio.EncodeReparsePoint(&rp)
}
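
The new helpers above (SecurityDescriptorFromTarHeader, ExtendedAttributesFromTarHeader, EncodeReparsePointFromTarHeader) expose the Windows metadata stored in a tar header's PAX records without having to write a full backup stream. A minimal Windows-only sketch of calling them on a header read from a tar stream; the input path is illustrative:

```go
//go:build windows

package main

import (
	"archive/tar"
	"fmt"
	"log"
	"os"

	"github.com/Microsoft/go-winio/backuptar"
)

func main() {
	f, err := os.Open(`C:\tmp\image-layer.tar`) // illustrative path
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	tr := tar.NewReader(f)
	hdr, err := tr.Next()
	if err != nil {
		log.Fatal(err)
	}

	// Security descriptor (raw binary PAX record, or legacy SDDL form).
	sd, err := backuptar.SecurityDescriptorFromTarHeader(hdr)
	if err != nil {
		log.Fatal(err)
	}
	// Extended attributes, re-encoded into the Win32 EA wire format.
	eas, err := backuptar.ExtendedAttributesFromTarHeader(hdr)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("sd=%d bytes, eas=%d bytes\n", len(sd), len(eas))

	// Reparse-point data is only meaningful for symlink entries.
	if hdr.Typeflag == tar.TypeSymlink {
		fmt.Printf("reparse=%d bytes\n", len(backuptar.EncodeReparsePointFromTarHeader(hdr)))
	}
}
```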
// WriteTarFileFromBackupStream writes a file to a tar writer using data from a Win32 backup stream.
//
// This encodes Win32 metadata as tar pax vendor extensions starting with MSWINDOWS.
@@ -358,21 +421,10 @@ func FileInfoFromHeader(hdr *tar.Header) (name string, size int64, fileInfo *win
// tar file that was not processed, or io.EOF is there are no more.
func WriteBackupStreamFromTarFile(w io.Writer, t *tar.Reader, hdr *tar.Header) (*tar.Header, error) {
bw := winio.NewBackupStreamWriter(w)
var sd []byte
var err error
// Maintaining old SDDL-based behavior for backward compatibility. All new tar headers written
// by this library will have raw binary for the security descriptor.
if sddl, ok := hdr.PAXRecords[hdrSecurityDescriptor]; ok {
sd, err = winio.SddlToSecurityDescriptor(sddl)
if err != nil {
return nil, err
}
}
if sdraw, ok := hdr.PAXRecords[hdrRawSecurityDescriptor]; ok {
sd, err = base64.StdEncoding.DecodeString(sdraw)
if err != nil {
return nil, err
}
sd, err := SecurityDescriptorFromTarHeader(hdr)
if err != nil {
return nil, err
}
if len(sd) != 0 {
bhdr := winio.BackupHeader{
@@ -388,25 +440,12 @@ func WriteBackupStreamFromTarFile(w io.Writer, t *tar.Reader, hdr *tar.Header) (
return nil, err
}
}
var eas []winio.ExtendedAttribute
for k, v := range hdr.PAXRecords {
if !strings.HasPrefix(k, hdrEaPrefix) {
continue
}
data, err := base64.StdEncoding.DecodeString(v)
if err != nil {
return nil, err
}
eas = append(eas, winio.ExtendedAttribute{
Name: k[len(hdrEaPrefix):],
Value: data,
})
eadata, err := ExtendedAttributesFromTarHeader(hdr)
if err != nil {
return nil, err
}
if len(eas) != 0 {
eadata, err := winio.EncodeExtendedAttributes(eas)
if err != nil {
return nil, err
}
if len(eadata) != 0 {
bhdr := winio.BackupHeader{
Id: winio.BackupEaData,
Size: int64(len(eadata)),
@@ -420,13 +459,9 @@ func WriteBackupStreamFromTarFile(w io.Writer, t *tar.Reader, hdr *tar.Header) (
return nil, err
}
}
if hdr.Typeflag == tar.TypeSymlink {
_, isMountPoint := hdr.PAXRecords[hdrMountPoint]
rp := winio.ReparsePoint{
Target: filepath.FromSlash(hdr.Linkname),
IsMountPoint: isMountPoint,
}
reparse := winio.EncodeReparsePoint(&rp)
reparse := EncodeReparsePointFromTarHeader(hdr)
bhdr := winio.BackupHeader{
Id: winio.BackupReparseData,
Size: int64(len(reparse)),
@@ -439,7 +474,9 @@ func WriteBackupStreamFromTarFile(w io.Writer, t *tar.Reader, hdr *tar.Header) (
if err != nil {
return nil, err
}
}
if hdr.Typeflag == tar.TypeReg || hdr.Typeflag == tar.TypeRegA {
bhdr := winio.BackupHeader{
Id: winio.BackupData,


@@ -1,3 +1,4 @@
//go:build windows
// +build windows
package winio
@@ -143,6 +144,11 @@ func (f *win32File) Close() error {
return nil
}
// IsClosed checks if the file has been closed
func (f *win32File) IsClosed() bool {
return f.closing.isSet()
}
// prepareIo prepares for a new IO operation.
// The caller must call f.wg.Done() when the IO is finished, prior to Close() returning.
func (f *win32File) prepareIo() (*ioOperation, error) {


@@ -1,9 +1,8 @@
module github.com/Microsoft/go-winio
go 1.12
go 1.13
require (
github.com/pkg/errors v0.9.1
github.com/sirupsen/logrus v1.7.0
golang.org/x/sys v0.0.0-20210124154548-22da62e12c0c
)


@@ -1,14 +1,11 @@
github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/pkg/errors v0.9.1 h1:FEBLx1zS214owpjy7qsBeixbURkuhQAwrK5UwLGTwt4=
github.com/pkg/errors v0.9.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/sirupsen/logrus v1.7.0 h1:ShrD1U9pZB12TX0cVy0DtePoCH97K8EtX+mg7ZARUtM=
github.com/sirupsen/logrus v1.7.0/go.mod h1:yWOB1SBYBC5VeMP7gHvWumXLIWorT60ONWic61uBYv0=
github.com/stretchr/testify v1.2.2 h1:bSDNvY7ZPG5RlJ8otE/7V6gMiyenm9RtJ7IUVIAoJ1w=
github.com/stretchr/testify v1.2.2/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXfy6kDkUVs=
golang.org/x/sys v0.0.0-20191026070338-33540a1f6037 h1:YyJpGZS1sBuBCzLAR1VEpK193GlqGZbnPFnPV/5Rsb4=
golang.org/x/sys v0.0.0-20191026070338-33540a1f6037/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210124154548-22da62e12c0c h1:VwygUrnw9jn88c4u8GD3rZQbqrP/tgas88tPUbBxQrk=
golang.org/x/sys v0.0.0-20210124154548-22da62e12c0c/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=


@@ -1,3 +1,4 @@
//go:build windows
// +build windows
package winio
@@ -252,15 +253,23 @@ func (conn *HvsockConn) Close() error {
return conn.sock.Close()
}
func (conn *HvsockConn) IsClosed() bool {
return conn.sock.IsClosed()
}
func (conn *HvsockConn) shutdown(how int) error {
err := syscall.Shutdown(conn.sock.handle, syscall.SHUT_RD)
if conn.IsClosed() {
return ErrFileClosed
}
err := syscall.Shutdown(conn.sock.handle, how)
if err != nil {
return os.NewSyscallError("shutdown", err)
}
return nil
}
// CloseRead shuts down the read end of the socket.
// CloseRead shuts down the read end of the socket, preventing future read operations.
func (conn *HvsockConn) CloseRead() error {
err := conn.shutdown(syscall.SHUT_RD)
if err != nil {
@@ -269,8 +278,8 @@ func (conn *HvsockConn) CloseRead() error {
return nil
}
// CloseWrite shuts down the write end of the socket, notifying the other endpoint that
// no more data will be written.
// CloseWrite shuts down the write end of the socket, preventing future write operations and
// notifying the other endpoint that no more data will be written.
func (conn *HvsockConn) CloseWrite() error {
err := conn.shutdown(syscall.SHUT_WR)
if err != nil {


@@ -14,8 +14,6 @@ import (
"encoding/binary"
"fmt"
"strconv"
"golang.org/x/sys/windows"
)
// Variant specifies which GUID variant (or "type") of the GUID. It determines
@@ -41,13 +39,6 @@ type Version uint8
var _ = (encoding.TextMarshaler)(GUID{})
var _ = (encoding.TextUnmarshaler)(&GUID{})
// GUID represents a GUID/UUID. It has the same structure as
// golang.org/x/sys/windows.GUID so that it can be used with functions expecting
// that type. It is defined as its own type so that stringification and
// marshaling can be supported. The representation matches that used by native
// Windows code.
type GUID windows.GUID
// NewV4 returns a new version 4 (pseudorandom) GUID, as defined by RFC 4122.
func NewV4() (GUID, error) {
var b [16]byte


@@ -0,0 +1,15 @@
// +build !windows
package guid
// GUID represents a GUID/UUID. It has the same structure as
// golang.org/x/sys/windows.GUID so that it can be used with functions expecting
// that type. It is defined as its own type as that is only available to builds
// targeted at `windows`. The representation matches that used by native Windows
// code.
type GUID struct {
Data1 uint32
Data2 uint16
Data3 uint16
Data4 [8]byte
}


@@ -0,0 +1,10 @@
package guid
import "golang.org/x/sys/windows"
// GUID represents a GUID/UUID. It has the same structure as
// golang.org/x/sys/windows.GUID so that it can be used with functions expecting
// that type. It is defined as its own type so that stringification and
// marshaling can be supported. The representation matches that used by native
// Windows code.
type GUID windows.GUID
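
With the GUID type now declared per platform (an alias of windows.GUID on Windows, an equivalent plain struct elsewhere), the package's behaviour is unchanged for callers. A small sketch of the usual round-trip, which should build on either platform:

```go
package main

import (
	"fmt"
	"log"

	"github.com/Microsoft/go-winio/pkg/guid"
)

func main() {
	// NewV4 returns a pseudorandom (version 4) GUID, as shown in the diff above.
	g, err := guid.NewV4()
	if err != nil {
		log.Fatal(err)
	}

	// GUID implements encoding.TextMarshaler/TextUnmarshaler, so it round-trips
	// through its canonical string form.
	text, err := g.MarshalText()
	if err != nil {
		log.Fatal(err)
	}
	var parsed guid.GUID
	if err := parsed.UnmarshalText(text); err != nil {
		log.Fatal(err)
	}
	fmt.Println(parsed == g) // true
}
```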


@@ -3,11 +3,10 @@
package security
import (
"fmt"
"os"
"syscall"
"unsafe"
"github.com/pkg/errors"
)
type (
@@ -72,7 +71,7 @@ func GrantVmGroupAccess(name string) error {
// Stat (to determine if `name` is a directory).
s, err := os.Stat(name)
if err != nil {
return errors.Wrapf(err, "%s os.Stat %s", gvmga, name)
return fmt.Errorf("%s os.Stat %s: %w", gvmga, name, err)
}
// Get a handle to the file/directory. Must defer Close on success.
@@ -88,7 +87,7 @@ func GrantVmGroupAccess(name string) error {
sd := uintptr(0)
origDACL := uintptr(0)
if err := getSecurityInfo(fd, uint32(ot), uint32(si), nil, nil, &origDACL, nil, &sd); err != nil {
return errors.Wrapf(err, "%s GetSecurityInfo %s", gvmga, name)
return fmt.Errorf("%s GetSecurityInfo %s: %w", gvmga, name, err)
}
defer syscall.LocalFree((syscall.Handle)(unsafe.Pointer(sd)))
@@ -102,7 +101,7 @@ func GrantVmGroupAccess(name string) error {
// And finally use SetSecurityInfo to apply the updated DACL.
if err := setSecurityInfo(fd, uint32(ot), uint32(si), uintptr(0), uintptr(0), newDACL, uintptr(0)); err != nil {
return errors.Wrapf(err, "%s SetSecurityInfo %s", gvmga, name)
return fmt.Errorf("%s SetSecurityInfo %s: %w", gvmga, name, err)
}
return nil
@@ -120,7 +119,7 @@ func createFile(name string, isDir bool) (syscall.Handle, error) {
}
fd, err := syscall.CreateFile(&namep[0], da, sm, nil, syscall.OPEN_EXISTING, fa, 0)
if err != nil {
return 0, errors.Wrapf(err, "%s syscall.CreateFile %s", gvmga, name)
return 0, fmt.Errorf("%s syscall.CreateFile %s: %w", gvmga, name, err)
}
return fd, nil
}
@@ -131,7 +130,7 @@ func generateDACLWithAcesAdded(name string, isDir bool, origDACL uintptr) (uintp
// Generate pointers to the SIDs based on the string SIDs
sid, err := syscall.StringToSid(sidVmGroup)
if err != nil {
return 0, errors.Wrapf(err, "%s syscall.StringToSid %s %s", gvmga, name, sidVmGroup)
return 0, fmt.Errorf("%s syscall.StringToSid %s %s: %w", gvmga, name, sidVmGroup, err)
}
inheritance := inheritModeNoInheritance
@@ -154,7 +153,7 @@ func generateDACLWithAcesAdded(name string, isDir bool, origDACL uintptr) (uintp
modifiedDACL := uintptr(0)
if err := setEntriesInAcl(uintptr(uint32(1)), uintptr(unsafe.Pointer(&eaArray[0])), origDACL, &modifiedDACL); err != nil {
return 0, errors.Wrapf(err, "%s SetEntriesInAcl %s", gvmga, name)
return 0, fmt.Errorf("%s SetEntriesInAcl %s: %w", gvmga, name, err)
}
return modifiedDACL, nil
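
The changes in this file (and in the vhd file below) swap github.com/pkg/errors wrapping for the standard library's `%w` verb, which keeps the wrapped cause reachable through errors.Is and errors.As. A minimal standalone sketch of the same pattern; the message prefix is illustrative:

```go
package main

import (
	"errors"
	"fmt"
	"io/fs"
	"os"
)

func statWrapped(name string) error {
	if _, err := os.Stat(name); err != nil {
		// Same shape as the converted calls: context + ": %w" keeps the cause.
		return fmt.Errorf("example os.Stat %s: %w", name, err)
	}
	return nil
}

func main() {
	err := statWrapped("/does/not/exist")

	// The wrapped error still matches the underlying cause.
	fmt.Println(errors.Is(err, fs.ErrNotExist)) // true

	var pathErr *fs.PathError
	fmt.Println(errors.As(err, &pathErr)) // true, with pathErr.Path preserved
}
```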


@@ -1,3 +1,4 @@
//go:build windows
// +build windows
package vhd
@@ -7,14 +8,13 @@ import (
"syscall"
"github.com/Microsoft/go-winio/pkg/guid"
"github.com/pkg/errors"
"golang.org/x/sys/windows"
)
//go:generate go run mksyscall_windows.go -output zvhd_windows.go vhd.go
//sys createVirtualDisk(virtualStorageType *VirtualStorageType, path string, virtualDiskAccessMask uint32, securityDescriptor *uintptr, createVirtualDiskFlags uint32, providerSpecificFlags uint32, parameters *CreateVirtualDiskParameters, overlapped *syscall.Overlapped, handle *syscall.Handle) (win32err error) = virtdisk.CreateVirtualDisk
//sys openVirtualDisk(virtualStorageType *VirtualStorageType, path string, virtualDiskAccessMask uint32, openVirtualDiskFlags uint32, parameters *OpenVirtualDiskParameters, handle *syscall.Handle) (win32err error) = virtdisk.OpenVirtualDisk
//sys openVirtualDisk(virtualStorageType *VirtualStorageType, path string, virtualDiskAccessMask uint32, openVirtualDiskFlags uint32, parameters *openVirtualDiskParameters, handle *syscall.Handle) (win32err error) = virtdisk.OpenVirtualDisk
//sys attachVirtualDisk(handle syscall.Handle, securityDescriptor *uintptr, attachVirtualDiskFlag uint32, providerSpecificFlags uint32, parameters *AttachVirtualDiskParameters, overlapped *syscall.Overlapped) (win32err error) = virtdisk.AttachVirtualDisk
//sys detachVirtualDisk(handle syscall.Handle, detachVirtualDiskFlags uint32, providerSpecificFlags uint32) (win32err error) = virtdisk.DetachVirtualDisk
//sys getVirtualDiskPhysicalPath(handle syscall.Handle, diskPathSizeInBytes *uint32, buffer *uint16) (win32err error) = virtdisk.GetVirtualDiskPhysicalPath
@@ -62,13 +62,27 @@ type OpenVirtualDiskParameters struct {
Version2 OpenVersion2
}
// The higher level `OpenVersion2` struct uses bools to refer to `GetInfoOnly` and `ReadOnly` for ease of use. However,
// the internal windows structure uses `BOOLS` aka int32s for these types. `openVersion2` is used for translating
// `OpenVersion2` fields to the correct windows internal field types on the `Open____` methods.
type openVersion2 struct {
getInfoOnly int32
readOnly int32
resiliencyGUID guid.GUID
}
type openVirtualDiskParameters struct {
version uint32
version2 openVersion2
}
type AttachVersion2 struct {
RestrictedOffset uint64
RestrictedLength uint64
}
type AttachVirtualDiskParameters struct {
Version uint32 // Must always be set to 2
Version uint32
Version2 AttachVersion2
}
@@ -146,16 +160,13 @@ func CreateVhdx(path string, maxSizeInGb, blockSizeInMb uint32) error {
return err
}
if err := syscall.CloseHandle(handle); err != nil {
return err
}
return nil
return syscall.CloseHandle(handle)
}
// DetachVirtualDisk detaches a virtual hard disk by handle.
func DetachVirtualDisk(handle syscall.Handle) (err error) {
if err := detachVirtualDisk(handle, 0, 0); err != nil {
return errors.Wrap(err, "failed to detach virtual disk")
return fmt.Errorf("failed to detach virtual disk: %w", err)
}
return nil
}
@@ -185,7 +196,7 @@ func AttachVirtualDisk(handle syscall.Handle, attachVirtualDiskFlag AttachVirtua
parameters,
nil,
); err != nil {
return errors.Wrap(err, "failed to attach virtual disk")
return fmt.Errorf("failed to attach virtual disk: %w", err)
}
return nil
}
@@ -209,7 +220,7 @@ func AttachVhd(path string) (err error) {
AttachVirtualDiskFlagNone,
&params,
); err != nil {
return errors.Wrap(err, "failed to attach virtual disk")
return fmt.Errorf("failed to attach virtual disk: %w", err)
}
return nil
}
@@ -234,19 +245,35 @@ func OpenVirtualDiskWithParameters(vhdPath string, virtualDiskAccessMask Virtual
var (
handle syscall.Handle
defaultType VirtualStorageType
getInfoOnly int32
readOnly int32
)
if parameters.Version != 2 {
return handle, fmt.Errorf("only version 2 VHDs are supported, found version: %d", parameters.Version)
}
if parameters.Version2.GetInfoOnly {
getInfoOnly = 1
}
if parameters.Version2.ReadOnly {
readOnly = 1
}
params := &openVirtualDiskParameters{
version: parameters.Version,
version2: openVersion2{
getInfoOnly,
readOnly,
parameters.Version2.ResiliencyGUID,
},
}
if err := openVirtualDisk(
&defaultType,
vhdPath,
uint32(virtualDiskAccessMask),
uint32(openVirtualDiskFlags),
parameters,
params,
&handle,
); err != nil {
return 0, errors.Wrap(err, "failed to open virtual disk")
return 0, fmt.Errorf("failed to open virtual disk: %w", err)
}
return handle, nil
}
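
The function above now translates the exported bool fields into the int32 (Win32 BOOL) representation before handing the structure to openVirtualDisk. A toy, platform-independent sketch of that translation (the names here are illustrative, not the library's):

package main

import "fmt"

// win32Bool mirrors the conversion done above: Go bools in the exported
// parameters, int32 BOOLs in the struct passed to the syscall.
func win32Bool(b bool) int32 {
	if b {
		return 1
	}
	return 0
}

type openVersion2Fields struct {
	getInfoOnly int32
	readOnly    int32
}

func main() {
	params := openVersion2Fields{
		getInfoOnly: win32Bool(true),
		readOnly:    win32Bool(false),
	}
	fmt.Printf("%+v\n", params) // {getInfoOnly:1 readOnly:0}
}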
@@ -272,7 +299,7 @@ func CreateVirtualDisk(path string, virtualDiskAccessMask VirtualDiskAccessMask,
nil,
&handle,
); err != nil {
return handle, errors.Wrap(err, "failed to create virtual disk")
return handle, fmt.Errorf("failed to create virtual disk: %w", err)
}
return handle, nil
}
@@ -290,7 +317,7 @@ func GetVirtualDiskPhysicalPath(handle syscall.Handle) (_ string, err error) {
&diskPathSizeInBytes,
&diskPhysicalPathBuf[0],
); err != nil {
return "", errors.Wrap(err, "failed to get disk physical path")
return "", fmt.Errorf("failed to get disk physical path: %w", err)
}
return windows.UTF16ToString(diskPhysicalPathBuf[:]), nil
}
@@ -314,10 +341,10 @@ func CreateDiffVhd(diffVhdPath, baseVhdPath string, blockSizeInMB uint32) error
createParams,
)
if err != nil {
return fmt.Errorf("failed to create differencing vhd: %s", err)
return fmt.Errorf("failed to create differencing vhd: %w", err)
}
if err := syscall.CloseHandle(vhdHandle); err != nil {
return fmt.Errorf("failed to close differencing vhd handle: %s", err)
return fmt.Errorf("failed to close differencing vhd handle: %w", err)
}
return nil
}

View File

@@ -88,7 +88,7 @@ func getVirtualDiskPhysicalPath(handle syscall.Handle, diskPathSizeInBytes *uint
return
}
func openVirtualDisk(virtualStorageType *VirtualStorageType, path string, virtualDiskAccessMask uint32, openVirtualDiskFlags uint32, parameters *OpenVirtualDiskParameters, handle *syscall.Handle) (win32err error) {
func openVirtualDisk(virtualStorageType *VirtualStorageType, path string, virtualDiskAccessMask uint32, openVirtualDiskFlags uint32, parameters *openVirtualDiskParameters, handle *syscall.Handle) (win32err error) {
var _p0 *uint16
_p0, win32err = syscall.UTF16PtrFromString(path)
if win32err != nil {
@@ -97,7 +97,7 @@ func openVirtualDisk(virtualStorageType *VirtualStorageType, path string, virtua
return _openVirtualDisk(virtualStorageType, _p0, virtualDiskAccessMask, openVirtualDiskFlags, parameters, handle)
}
func _openVirtualDisk(virtualStorageType *VirtualStorageType, path *uint16, virtualDiskAccessMask uint32, openVirtualDiskFlags uint32, parameters *OpenVirtualDiskParameters, handle *syscall.Handle) (win32err error) {
func _openVirtualDisk(virtualStorageType *VirtualStorageType, path *uint16, virtualDiskAccessMask uint32, openVirtualDiskFlags uint32, parameters *openVirtualDiskParameters, handle *syscall.Handle) (win32err error) {
r0, _, _ := syscall.Syscall6(procOpenVirtualDisk.Addr(), 6, uintptr(unsafe.Pointer(virtualStorageType)), uintptr(unsafe.Pointer(path)), uintptr(virtualDiskAccessMask), uintptr(openVirtualDiskFlags), uintptr(unsafe.Pointer(parameters)), uintptr(unsafe.Pointer(handle)))
if r0 != 0 {
win32err = syscall.Errno(r0)

View File

@@ -17,7 +17,7 @@
// Package errdefs defines the common errors used throughout containerd
// packages.
//
// Use with errors.Wrap and error.Wrapf to add context to an error.
// Use with fmt.Errorf to add context to an error.
//
// To detect an error class, use the IsXXX functions to tell whether an error
// is of a certain type.
@@ -28,8 +28,7 @@ package errdefs
import (
"context"
"github.com/pkg/errors"
"errors"
)
// Definitions of common error types used throughout containerd. All containerd

View File

@@ -18,9 +18,9 @@ package errdefs
import (
"context"
"fmt"
"strings"
"github.com/pkg/errors"
"google.golang.org/grpc/codes"
"google.golang.org/grpc/status"
)
@@ -68,9 +68,9 @@ func ToGRPC(err error) error {
// ToGRPCf maps the error to grpc error codes, assembling the formatting string
// and combining it with the target error string.
//
// This is equivalent to errors.ToGRPC(errors.Wrapf(err, format, args...))
// This is equivalent to errdefs.ToGRPC(fmt.Errorf("%s: %w", fmt.Sprintf(format, args...), err))
func ToGRPCf(err error, format string, args ...interface{}) error {
return ToGRPC(errors.Wrapf(err, format, args...))
return ToGRPC(fmt.Errorf("%s: %w", fmt.Sprintf(format, args...), err))
}
// FromGRPC returns the underlying error from a grpc service based on the grpc error code
@@ -104,9 +104,9 @@ func FromGRPC(err error) error {
msg := rebaseMessage(cls, err)
if msg != "" {
err = errors.Wrap(cls, msg)
err = fmt.Errorf("%s: %w", msg, cls)
} else {
err = errors.WithStack(cls)
err = cls
}
return err
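
As the rewritten comments above state, wrapping with %w keeps the errdefs sentinel reachable, so the IsXXX helpers keep working. A short sketch, outside the vendored code:

package main

import (
	"errors"
	"fmt"

	"github.com/containerd/containerd/errdefs"
)

func main() {
	// The same shape ToGRPCf now produces: formatted context first, the
	// errdefs sentinel wrapped at the end.
	err := fmt.Errorf("%s: %w", fmt.Sprintf("image %q", "example"), errdefs.ErrNotFound)
	fmt.Println(errdefs.IsNotFound(err))             // true
	fmt.Println(errors.Is(err, errdefs.ErrNotFound)) // true
}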

View File

@@ -52,7 +52,8 @@ const (
// WithLogger returns a new context with the provided logger. Use in
// combination with logger.WithField(s) for great effect.
func WithLogger(ctx context.Context, logger *logrus.Entry) context.Context {
return context.WithValue(ctx, loggerKey{}, logger)
e := logger.WithContext(ctx)
return context.WithValue(ctx, loggerKey{}, e)
}
// GetLogger retrieves the current logger from the context. If no logger is
@@ -61,7 +62,7 @@ func GetLogger(ctx context.Context) *logrus.Entry {
logger := ctx.Value(loggerKey{})
if logger == nil {
return L
return L.WithContext(ctx)
}
return logger.(*logrus.Entry)
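
A usage sketch for the two functions above; after this change the stored entry also carries the context, and GetLogger re-attaches the context when falling back to the package-level logger:

package main

import (
	"context"

	"github.com/containerd/containerd/log"
	"github.com/sirupsen/logrus"
)

func main() {
	ctx := log.WithLogger(context.Background(), logrus.WithField("component", "example"))
	// The entry returned here carries both the field and the context.
	log.GetLogger(ctx).Info("hello from a context-scoped logger")
}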

View File

@@ -38,12 +38,22 @@ func platformVector(platform specs.Platform) []specs.Platform {
switch platform.Architecture {
case "amd64":
if amd64Version, err := strconv.Atoi(strings.TrimPrefix(platform.Variant, "v")); err == nil && amd64Version > 1 {
for amd64Version--; amd64Version >= 1; amd64Version-- {
vector = append(vector, specs.Platform{
Architecture: platform.Architecture,
OS: platform.OS,
OSVersion: platform.OSVersion,
OSFeatures: platform.OSFeatures,
Variant: "v" + strconv.Itoa(amd64Version),
})
}
}
vector = append(vector, specs.Platform{
Architecture: "386",
OS: platform.OS,
OSVersion: platform.OSVersion,
OSFeatures: platform.OSFeatures,
Variant: platform.Variant,
})
case "arm":
if armVersion, err := strconv.Atoi(strings.TrimPrefix(platform.Variant, "v")); err == nil && armVersion > 5 {

View File

@@ -18,6 +18,7 @@ package platforms
import (
"bufio"
"fmt"
"os"
"runtime"
"strings"
@@ -25,7 +26,6 @@ import (
"github.com/containerd/containerd/errdefs"
"github.com/containerd/containerd/log"
"github.com/pkg/errors"
)
// Present the ARM instruction set architecture, eg: v7, v8
@@ -48,7 +48,7 @@ func cpuVariant() string {
// by ourselves. We can just parse these information from /proc/cpuinfo
func getCPUInfo(pattern string) (info string, err error) {
if !isLinuxOS(runtime.GOOS) {
return "", errors.Wrapf(errdefs.ErrNotImplemented, "getCPUInfo for OS %s", runtime.GOOS)
return "", fmt.Errorf("getCPUInfo for OS %s: %w", runtime.GOOS, errdefs.ErrNotImplemented)
}
cpuinfo, err := os.Open("/proc/cpuinfo")
@@ -75,7 +75,7 @@ func getCPUInfo(pattern string) (info string, err error) {
return "", err
}
return "", errors.Wrapf(errdefs.ErrNotFound, "getCPUInfo for pattern: %s", pattern)
return "", fmt.Errorf("getCPUInfo for pattern: %s: %w", pattern, errdefs.ErrNotFound)
}
func getCPUVariant() string {

View File

@@ -38,7 +38,7 @@ func isLinuxOS(os string) bool {
// The OS value should be normalized before calling this function.
func isKnownOS(os string) bool {
switch os {
case "aix", "android", "darwin", "dragonfly", "freebsd", "hurd", "illumos", "js", "linux", "nacl", "netbsd", "openbsd", "plan9", "solaris", "windows", "zos":
case "aix", "android", "darwin", "dragonfly", "freebsd", "hurd", "illumos", "ios", "js", "linux", "nacl", "netbsd", "openbsd", "plan9", "solaris", "windows", "zos":
return true
}
return false
@@ -60,7 +60,7 @@ func isArmArch(arch string) bool {
// The arch value should be normalized before being passed to this function.
func isKnownArch(arch string) bool {
switch arch {
case "386", "amd64", "amd64p32", "arm", "armbe", "arm64", "arm64be", "ppc64", "ppc64le", "mips", "mipsle", "mips64", "mips64le", "mips64p32", "mips64p32le", "ppc", "riscv", "riscv64", "s390", "s390x", "sparc", "sparc64", "wasm":
case "386", "amd64", "amd64p32", "arm", "armbe", "arm64", "arm64be", "ppc64", "ppc64le", "loong64", "mips", "mipsle", "mips64", "mips64le", "mips64p32", "mips64p32le", "ppc", "riscv", "riscv64", "s390", "s390x", "sparc", "sparc64", "wasm":
return true
}
return false
@@ -86,9 +86,11 @@ func normalizeArch(arch, variant string) (string, string) {
case "i386":
arch = "386"
variant = ""
case "x86_64", "x86-64":
case "x86_64", "x86-64", "amd64":
arch = "amd64"
variant = ""
if variant == "v1" {
variant = ""
}
case "aarch64", "arm64":
arch = "arm64"
switch variant {

View File

@@ -16,27 +16,11 @@
package platforms
import (
"runtime"
specs "github.com/opencontainers/image-spec/specs-go/v1"
)
// DefaultString returns the default string specifier for the platform.
func DefaultString() string {
return Format(DefaultSpec())
}
// DefaultSpec returns the current platform's default platform specification.
func DefaultSpec() specs.Platform {
return specs.Platform{
OS: runtime.GOOS,
Architecture: runtime.GOARCH,
// The Variant field will be empty if arch != ARM.
Variant: cpuVariant(),
}
}
// DefaultStrict returns strict form of Default.
func DefaultStrict() MatchComparer {
return OnlyStrict(DefaultSpec())

View File

@@ -0,0 +1,45 @@
//go:build darwin
// +build darwin
/*
Copyright The containerd Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package platforms
import (
"runtime"
specs "github.com/opencontainers/image-spec/specs-go/v1"
)
// DefaultSpec returns the current platform's default platform specification.
func DefaultSpec() specs.Platform {
return specs.Platform{
OS: runtime.GOOS,
Architecture: runtime.GOARCH,
// The Variant field will be empty if arch != ARM.
Variant: cpuVariant(),
}
}
// Default returns the default matcher for the platform.
func Default() MatchComparer {
return Ordered(DefaultSpec(), specs.Platform{
// darwin runtime also supports Linux binary via runu/LKL
OS: "linux",
Architecture: runtime.GOARCH,
})
}

View File

@@ -1,4 +1,5 @@
// +build !windows
//go:build !windows && !darwin
// +build !windows,!darwin
/*
Copyright The containerd Authors.
@@ -18,6 +19,22 @@
package platforms
import (
"runtime"
specs "github.com/opencontainers/image-spec/specs-go/v1"
)
// DefaultSpec returns the current platform's default platform specification.
func DefaultSpec() specs.Platform {
return specs.Platform{
OS: runtime.GOOS,
Architecture: runtime.GOARCH,
// The Variant field will be empty if arch != ARM.
Variant: cpuVariant(),
}
}
// Default returns the default matcher for the platform.
func Default() MatchComparer {
return Only(DefaultSpec())

View File

@@ -1,5 +1,3 @@
// +build windows
/*
Copyright The containerd Authors.
@@ -29,6 +27,18 @@ import (
"golang.org/x/sys/windows"
)
// DefaultSpec returns the current platform's default platform specification.
func DefaultSpec() specs.Platform {
major, minor, build := windows.RtlGetNtVersionNumbers()
return specs.Platform{
OS: runtime.GOOS,
Architecture: runtime.GOARCH,
OSVersion: fmt.Sprintf("%d.%d.%d", major, minor, build),
// The Variant field will be empty if arch != ARM.
Variant: cpuVariant(),
}
}
type matchComparer struct {
defaults Matcher
osVersionPrefix string

View File

@@ -107,6 +107,8 @@
package platforms
import (
"fmt"
"path"
"regexp"
"runtime"
"strconv"
@@ -114,7 +116,6 @@ import (
"github.com/containerd/containerd/errdefs"
specs "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/pkg/errors"
)
var (
@@ -166,14 +167,14 @@ func (m *matcher) String() string {
func Parse(specifier string) (specs.Platform, error) {
if strings.Contains(specifier, "*") {
// TODO(stevvooe): need to work out exact wildcard handling
return specs.Platform{}, errors.Wrapf(errdefs.ErrInvalidArgument, "%q: wildcards not yet supported", specifier)
return specs.Platform{}, fmt.Errorf("%q: wildcards not yet supported: %w", specifier, errdefs.ErrInvalidArgument)
}
parts := strings.Split(specifier, "/")
for _, part := range parts {
if !specifierRe.MatchString(part) {
return specs.Platform{}, errors.Wrapf(errdefs.ErrInvalidArgument, "%q is an invalid component of %q: platform specifier component must match %q", part, specifier, specifierRe.String())
return specs.Platform{}, fmt.Errorf("%q is an invalid component of %q: platform specifier component must match %q: %w", part, specifier, specifierRe.String(), errdefs.ErrInvalidArgument)
}
}
@@ -205,7 +206,7 @@ func Parse(specifier string) (specs.Platform, error) {
return p, nil
}
return specs.Platform{}, errors.Wrapf(errdefs.ErrInvalidArgument, "%q: unknown operating system or architecture", specifier)
return specs.Platform{}, fmt.Errorf("%q: unknown operating system or architecture: %w", specifier, errdefs.ErrInvalidArgument)
case 2:
// In this case, we treat as a regular os/arch pair. We don't care
// about whether or not we know of the platform.
@@ -227,7 +228,7 @@ func Parse(specifier string) (specs.Platform, error) {
return p, nil
}
return specs.Platform{}, errors.Wrapf(errdefs.ErrInvalidArgument, "%q: cannot parse platform specifier", specifier)
return specs.Platform{}, fmt.Errorf("%q: cannot parse platform specifier: %w", specifier, errdefs.ErrInvalidArgument)
}
// MustParse is like Parse but panics if the specifier cannot be parsed.
@@ -246,20 +247,7 @@ func Format(platform specs.Platform) string {
return "unknown"
}
return joinNotEmpty(platform.OS, platform.Architecture, platform.Variant)
}
func joinNotEmpty(s ...string) string {
var ss []string
for _, s := range s {
if s == "" {
continue
}
ss = append(ss, s)
}
return strings.Join(ss, "/")
return path.Join(platform.OS, platform.Architecture, platform.Variant)
}
// Normalize validates and translates the platform to the canonical value.
@@ -269,10 +257,5 @@ func joinNotEmpty(s ...string) string {
func Normalize(platform specs.Platform) specs.Platform {
platform.OS = normalizeOS(platform.OS)
platform.Architecture, platform.Variant = normalizeArch(platform.Architecture, platform.Variant)
// these fields are deprecated, remove them
platform.OSFeatures = nil
platform.OSVersion = ""
return platform
}
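
Taken together, the normalizeArch and Format changes above mean that an explicit "v1" amd64 variant disappears on parsing and that Format is now a plain path.Join of the non-empty fields. A small sketch of the observable behaviour, assuming the vendored containerd platforms package:

package main

import (
	"fmt"

	"github.com/containerd/containerd/platforms"
)

func main() {
	p, err := platforms.Parse("linux/amd64/v1")
	if err != nil {
		panic(err)
	}
	// normalizeArch drops the redundant "v1" variant, and Format joins the
	// remaining non-empty components with "/".
	fmt.Println(platforms.Format(p)) // linux/amd64
}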

View File

@@ -26,6 +26,7 @@ import (
"archive/tar"
"bytes"
"compress/gzip"
"errors"
"fmt"
"io"
"io/ioutil"
@@ -38,7 +39,6 @@ import (
"github.com/containerd/stargz-snapshotter/estargz/errorutil"
"github.com/klauspost/compress/zstd"
digest "github.com/opencontainers/go-digest"
"github.com/pkg/errors"
"golang.org/x/sync/errgroup"
)
@@ -142,7 +142,7 @@ func Build(tarBlob *io.SectionReader, opt ...Option) (_ *Blob, rErr error) {
defer func() {
if rErr != nil {
if err := layerFiles.CleanupAll(); err != nil {
rErr = errors.Wrapf(rErr, "failed to cleanup tmp files: %v", err)
rErr = fmt.Errorf("failed to cleanup tmp files: %v: %w", err, rErr)
}
}
}()
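
The deferred cleanup above keeps the original build error as the %w-wrapped cause and reports the cleanup failure via %v. A standalone sketch of why that matters for callers using errors.Is:

package main

import (
	"errors"
	"fmt"
)

var errBuild = errors.New("build failed")

func build() (rErr error) {
	defer func() {
		if rErr != nil {
			cleanupErr := errors.New("could not remove temp files")
			// Same shape as above: cleanup error formatted with %v, original
			// error kept as the wrapped cause with %w.
			rErr = fmt.Errorf("failed to cleanup tmp files: %v: %w", cleanupErr, rErr)
		}
	}()
	return errBuild
}

func main() {
	err := build()
	fmt.Println(err)
	fmt.Println(errors.Is(err, errBuild)) // true
}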
@@ -307,7 +307,7 @@ func sortEntries(in io.ReaderAt, prioritized []string, missedPrioritized *[]stri
// Import tar file.
intar, err := importTar(in)
if err != nil {
return nil, errors.Wrap(err, "failed to sort")
return nil, fmt.Errorf("failed to sort: %w", err)
}
// Sort the tar file respecting to the prioritized files list.
@@ -318,7 +318,7 @@ func sortEntries(in io.ReaderAt, prioritized []string, missedPrioritized *[]stri
*missedPrioritized = append(*missedPrioritized, l)
continue // allow not found
}
return nil, errors.Wrap(err, "failed to sort tar entries")
return nil, fmt.Errorf("failed to sort tar entries: %w", err)
}
}
if len(prioritized) == 0 {
@@ -371,7 +371,7 @@ func importTar(in io.ReaderAt) (*tarFile, error) {
tf := &tarFile{}
pw, err := newCountReader(in)
if err != nil {
return nil, errors.Wrap(err, "failed to make position watcher")
return nil, fmt.Errorf("failed to make position watcher: %w", err)
}
tr := tar.NewReader(pw)
@@ -383,7 +383,7 @@ func importTar(in io.ReaderAt) (*tarFile, error) {
if err == io.EOF {
break
} else {
return nil, errors.Wrap(err, "failed to parse tar file")
return nil, fmt.Errorf("failed to parse tar file, %w", err)
}
}
switch cleanEntryName(h.Name) {
@@ -420,7 +420,7 @@ func moveRec(name string, in *tarFile, out *tarFile) error {
_, okIn := in.get(name)
_, okOut := out.get(name)
if !okIn && !okOut {
return errors.Wrapf(errNotFound, "file: %q", name)
return fmt.Errorf("file: %q: %w", name, errNotFound)
}
parent, _ := path.Split(strings.TrimSuffix(name, "/"))

View File

@@ -27,6 +27,7 @@ import (
"bytes"
"compress/gzip"
"crypto/sha256"
"errors"
"fmt"
"hash"
"io"
@@ -40,7 +41,6 @@ import (
"github.com/containerd/stargz-snapshotter/estargz/errorutil"
digest "github.com/opencontainers/go-digest"
"github.com/pkg/errors"
"github.com/vbatts/tar-split/archive/tar"
)
@@ -385,8 +385,7 @@ func (r *Reader) Verifiers() (TOCEntryVerifier, error) {
if e.Digest != "" {
d, err := digest.Parse(e.Digest)
if err != nil {
return nil, errors.Wrapf(err,
"failed to parse regular file digest %q", e.Digest)
return nil, fmt.Errorf("failed to parse regular file digest %q: %w", e.Digest, err)
}
regDigestMap[e.Offset] = d
} else {
@@ -401,8 +400,7 @@ func (r *Reader) Verifiers() (TOCEntryVerifier, error) {
if e.ChunkDigest != "" {
d, err := digest.Parse(e.ChunkDigest)
if err != nil {
return nil, errors.Wrapf(err,
"failed to parse chunk digest %q", e.ChunkDigest)
return nil, fmt.Errorf("failed to parse chunk digest %q: %w", e.ChunkDigest, err)
}
chunkDigestMap[e.Offset] = d
} else {
@@ -647,7 +645,7 @@ func Unpack(sr *io.SectionReader, c Decompressor) (io.ReadCloser, error) {
}
blobPayloadSize, _, _, err := c.ParseFooter(footer)
if err != nil {
return nil, errors.Wrapf(err, "failed to parse footer")
return nil, fmt.Errorf("failed to parse footer: %w", err)
}
return c.Reader(io.LimitReader(sr, blobPayloadSize))
}

View File

@@ -3,9 +3,8 @@ module github.com/containerd/stargz-snapshotter/estargz
go 1.16
require (
github.com/klauspost/compress v1.14.2
github.com/klauspost/compress v1.15.1
github.com/opencontainers/go-digest v1.0.0
github.com/pkg/errors v0.9.1
github.com/vbatts/tar-split v0.11.2
golang.org/x/sync v0.0.0-20201207232520-09787c993a3a
)

View File

@@ -1,12 +1,10 @@
github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU=
github.com/cpuguy83/go-md2man/v2 v2.0.0-20190314233015-f79a8a8ca69d/go.mod h1:maD7wRr/U5Z6m/iR4s+kqSMx2CaBsrgA7czyZG/E6dU=
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/klauspost/compress v1.14.2 h1:S0OHlFk/Gbon/yauFJ4FfJJF5V0fc5HbBTJazi28pRw=
github.com/klauspost/compress v1.14.2/go.mod h1:/3/Vjq9QcHkK5uEr5lBEmyoZ1iFhe47etQ6QUkpK6sk=
github.com/klauspost/compress v1.15.1 h1:y9FcTHGyrebwfP0ZZqFiaxTaiDnUrGkJkI+f583BL1A=
github.com/klauspost/compress v1.15.1/go.mod h1:/3/Vjq9QcHkK5uEr5lBEmyoZ1iFhe47etQ6QUkpK6sk=
github.com/opencontainers/go-digest v1.0.0 h1:apOUWs51W5PlhuyGyz9FCeeBIOUDA/6nW8Oi/yOhh5U=
github.com/opencontainers/go-digest v1.0.0/go.mod h1:0JzlMkj0TRzQZfJkVvzbP0HBR3IKzErnv2BNG4W4MAM=
github.com/pkg/errors v0.9.1 h1:FEBLx1zS214owpjy7qsBeixbURkuhQAwrK5UwLGTwt4=
github.com/pkg/errors v0.9.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/russross/blackfriday/v2 v2.0.1/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM=
github.com/shurcooL/sanitized_anchor_name v1.0.0/go.mod h1:1NzhyTcUVG4SuEtjjoZeVRXNmyL/1OwPU0+IJeTBvfc=

View File

@@ -34,7 +34,6 @@ import (
"strconv"
digest "github.com/opencontainers/go-digest"
"github.com/pkg/errors"
)
type gzipCompression struct {
@@ -150,7 +149,7 @@ func (gz *GzipDecompressor) ParseFooter(p []byte) (blobPayloadSize, tocOffset, t
}
tocOffset, err = strconv.ParseInt(string(subfield[:16]), 16, 64)
if err != nil {
return 0, 0, 0, errors.Wrapf(err, "legacy: failed to parse toc offset")
return 0, 0, 0, fmt.Errorf("legacy: failed to parse toc offset: %w", err)
}
return tocOffset, tocOffset, 0, nil
}
@@ -179,7 +178,7 @@ func (gz *LegacyGzipDecompressor) ParseFooter(p []byte) (blobPayloadSize, tocOff
}
zr, err := gzip.NewReader(bytes.NewReader(p))
if err != nil {
return 0, 0, 0, errors.Wrapf(err, "legacy: failed to get footer gzip reader")
return 0, 0, 0, fmt.Errorf("legacy: failed to get footer gzip reader: %w", err)
}
defer zr.Close()
extra := zr.Header.Extra
@@ -191,7 +190,7 @@ func (gz *LegacyGzipDecompressor) ParseFooter(p []byte) (blobPayloadSize, tocOff
}
tocOffset, err = strconv.ParseInt(string(extra[:16]), 16, 64)
if err != nil {
return 0, 0, 0, errors.Wrapf(err, "legacy: failed to parse toc offset")
return 0, 0, 0, fmt.Errorf("legacy: failed to parse toc offset: %w", err)
}
return tocOffset, tocOffset, 0, nil
}

View File

@@ -28,6 +28,7 @@ import (
"compress/gzip"
"crypto/sha256"
"encoding/json"
"errors"
"fmt"
"io"
"io/ioutil"
@@ -41,7 +42,6 @@ import (
"github.com/containerd/stargz-snapshotter/estargz/errorutil"
"github.com/klauspost/compress/zstd"
digest "github.com/opencontainers/go-digest"
"github.com/pkg/errors"
)
// TestingController is Compression with some helper methods necessary for testing.
@@ -1062,18 +1062,18 @@ func parseStargz(sgz *io.SectionReader, controller TestingController) (decodedJT
fSize := controller.FooterSize()
footer := make([]byte, fSize)
if _, err := sgz.ReadAt(footer, sgz.Size()-fSize); err != nil {
return nil, 0, errors.Wrap(err, "error reading footer")
return nil, 0, fmt.Errorf("error reading footer: %w", err)
}
_, tocOffset, _, err := controller.ParseFooter(footer[positive(int64(len(footer))-fSize):])
if err != nil {
return nil, 0, errors.Wrapf(err, "failed to parse footer")
return nil, 0, fmt.Errorf("failed to parse footer: %w", err)
}
// Decode the TOC JSON
tocReader := io.NewSectionReader(sgz, tocOffset, sgz.Size()-tocOffset-fSize)
decodedJTOC, _, err = controller.ParseTOC(tocReader)
if err != nil {
return nil, 0, errors.Wrap(err, "failed to parse TOC")
return nil, 0, fmt.Errorf("failed to parse TOC: %w", err)
}
return decodedJTOC, tocOffset, nil
}

View File

@@ -4,6 +4,7 @@ import (
"bufio"
"context"
"fmt"
"net/url"
"os"
"path/filepath"
"strings"
@@ -165,20 +166,21 @@ func Login(ctx context.Context, systemContext *types.SystemContext, opts *LoginO
// parseCredentialsKey turns the provided argument into a valid credential key
// and computes the registry part.
func parseCredentialsKey(arg string, acceptRepositories bool) (key, registry string, err error) {
// URL arguments are replaced with their host[:port] parts.
key, err = replaceURLByHostPort(arg)
if err != nil {
return "", "", err
}
split := strings.Split(key, "/")
registry = split[0]
if !acceptRepositories {
registry = getRegistryName(arg)
key = registry
return key, registry, nil
return registry, registry, nil
}
key = trimScheme(arg)
if key != arg {
return "", "", errors.New("credentials key has https[s]:// prefix")
}
registry = getRegistryName(key)
// Return early if the key isn't namespaced or uses an http{s} prefix.
if registry == key {
// The key is not namespaced
return key, registry, nil
}
@@ -202,24 +204,18 @@ func parseCredentialsKey(arg string, acceptRepositories bool) (key, registry str
return key, registry, nil
}
// getRegistryName scrubs and parses the input to get the server name
func getRegistryName(server string) string {
// removes 'http://' or 'https://' from the front of the
// server/registry string if either is there. This will be mostly used
// for user input from 'Buildah login' and 'Buildah logout'.
server = trimScheme(server)
// gets the registry from the input. If the input is of the form
// quay.io/myuser/myimage, it will parse it and just return quay.io
split := strings.Split(server, "/")
return split[0]
}
// trimScheme removes the HTTP(s) scheme from the provided repository.
func trimScheme(repository string) string {
// removes 'http://' or 'https://' from the front of the
// server/registry string if either is there. This will be mostly used
// for user input from 'Buildah login' and 'Buildah logout'.
return strings.TrimPrefix(strings.TrimPrefix(repository, "https://"), "http://")
// If the specified string starts with http{s} it is replaced with its
// host[:port] part (everything else is stripped); otherwise the string is
// returned as is.
func replaceURLByHostPort(repository string) (string, error) {
if !strings.HasPrefix(repository, "https://") && !strings.HasPrefix(repository, "http://") {
return repository, nil
}
u, err := url.Parse(repository)
if err != nil {
return "", fmt.Errorf("trimming http{s} prefix: %v", err)
}
return u.Host, nil
}
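
A self-contained sketch of the behaviour replaceURLByHostPort implements above, re-implemented locally because the helper is unexported:

package main

import (
	"fmt"
	"net/url"
	"strings"
)

// hostPortOf mirrors replaceURLByHostPort: URL-looking input is reduced to its
// host[:port]; anything without an http(s) scheme passes through unchanged.
func hostPortOf(s string) (string, error) {
	if !strings.HasPrefix(s, "https://") && !strings.HasPrefix(s, "http://") {
		return s, nil
	}
	u, err := url.Parse(s)
	if err != nil {
		return "", fmt.Errorf("trimming http{s} prefix: %w", err)
	}
	return u.Host, nil
}

func main() {
	for _, in := range []string{"https://quay.io:8443/myuser/myimage", "quay.io/myuser/myimage"} {
		out, _ := hostPortOf(in)
		fmt.Printf("%q -> %q\n", in, out)
	}
}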
// getUserAndPass gets the username and password from STDIN if not given

View File

@@ -15,8 +15,10 @@ import (
"github.com/containers/image/v5/docker/reference"
"github.com/containers/image/v5/image"
internalblobinfocache "github.com/containers/image/v5/internal/blobinfocache"
"github.com/containers/image/v5/internal/imagedestination"
"github.com/containers/image/v5/internal/imagesource"
"github.com/containers/image/v5/internal/pkg/platform"
internalTypes "github.com/containers/image/v5/internal/types"
"github.com/containers/image/v5/internal/private"
"github.com/containers/image/v5/manifest"
"github.com/containers/image/v5/pkg/blobinfocache"
"github.com/containers/image/v5/pkg/compression"
@@ -63,8 +65,8 @@ var expectedCompressionFormats = map[string]*compressiontypes.Algorithm{
// copier allows us to keep track of diffID values for blobs, and other
// data shared across one or more images in a possible manifest list.
type copier struct {
dest types.ImageDestination
rawSource types.ImageSource
dest private.ImageDestination
rawSource private.ImageSource
reportWriter io.Writer
progressOutput io.Writer
progressInterval time.Duration
@@ -202,20 +204,22 @@ func Image(ctx context.Context, policyContext *signature.PolicyContext, destRef,
reportWriter = options.ReportWriter
}
dest, err := destRef.NewImageDestination(ctx, options.DestinationCtx)
publicDest, err := destRef.NewImageDestination(ctx, options.DestinationCtx)
if err != nil {
return nil, errors.Wrapf(err, "initializing destination %s", transports.ImageName(destRef))
}
dest := imagedestination.FromPublic(publicDest)
defer func() {
if err := dest.Close(); err != nil {
retErr = errors.Wrapf(retErr, " (dest: %v)", err)
}
}()
rawSource, err := srcRef.NewImageSource(ctx, options.SourceCtx)
publicRawSource, err := srcRef.NewImageSource(ctx, options.SourceCtx)
if err != nil {
return nil, errors.Wrapf(err, "initializing source %s", transports.ImageName(srcRef))
}
rawSource := imagesource.FromPublic(publicRawSource)
defer func() {
if err := rawSource.Close(); err != nil {
retErr = errors.Wrapf(retErr, " (src: %v)", err)
@@ -1225,28 +1229,13 @@ func (ic *imageCopier) copyLayer(ctx context.Context, srcInfo types.BlobInfo, to
// a failure when we eventually try to update the manifest with the digest and MIME type of the reused blob.
// Fixing that will probably require passing more information to TryReusingBlob() than the current version of
// the ImageDestination interface lets us pass in.
var (
blobInfo types.BlobInfo
reused bool
err error
)
// Note: the storage destination optimizes the committing of
// layers which requires passing the index of the layer.
// Hence, we need to special case and cast.
dest, ok := ic.c.dest.(internalTypes.ImageDestinationWithOptions)
if ok {
options := internalTypes.TryReusingBlobOptions{
Cache: ic.c.blobInfoCache,
CanSubstitute: ic.canSubstituteBlobs,
SrcRef: srcRef,
EmptyLayer: emptyLayer,
}
options.LayerIndex = &layerIndex
reused, blobInfo, err = dest.TryReusingBlobWithOptions(ctx, srcInfo, options)
} else {
reused, blobInfo, err = ic.c.dest.TryReusingBlob(ctx, srcInfo, ic.c.blobInfoCache, ic.canSubstituteBlobs)
}
reused, blobInfo, err := ic.c.dest.TryReusingBlobWithOptions(ctx, srcInfo, private.TryReusingBlobOptions{
Cache: ic.c.blobInfoCache,
CanSubstitute: ic.canSubstituteBlobs,
EmptyLayer: emptyLayer,
LayerIndex: &layerIndex,
SrcRef: srcRef,
})
if err != nil {
return types.BlobInfo{}, "", errors.Wrapf(err, "trying to reuse blob %s at destination", srcInfo.Digest)
}
@@ -1286,9 +1275,7 @@ func (ic *imageCopier) copyLayer(ctx context.Context, srcInfo types.BlobInfo, to
// of the source file are not known yet and must be fetched.
// Attempt a partial only when the source allows to retrieve a blob partially and
// the destination has support for it.
imgSource, okSource := ic.c.rawSource.(internalTypes.ImageSourceSeekable)
imgDest, okDest := ic.c.dest.(internalTypes.ImageDestinationPartial)
if okSource && okDest && !diffIDIsNeeded {
if ic.c.rawSource.SupportsGetBlobAt() && ic.c.dest.SupportsPutBlobPartial() && !diffIDIsNeeded {
if reused, blobInfo := func() (bool, types.BlobInfo) { // A scope for defer
bar := ic.c.createProgressBar(pool, true, srcInfo, "blob", "done")
hideProgressBar := true
@@ -1296,29 +1283,12 @@ func (ic *imageCopier) copyLayer(ctx context.Context, srcInfo types.BlobInfo, to
bar.Abort(hideProgressBar)
}()
progress := make(chan int64)
terminate := make(chan interface{})
defer close(terminate)
defer close(progress)
proxy := imageSourceSeekableProxy{
source: imgSource,
progress: progress,
proxy := blobChunkAccessorProxy{
wrapped: ic.c.rawSource,
bar: bar,
}
go func() {
for {
select {
case written := <-progress:
bar.IncrInt64(written)
case <-terminate:
return
}
}
}()
bar.SetTotal(srcInfo.Size, false)
info, err := imgDest.PutBlobPartial(ctx, proxy, srcInfo, ic.c.blobInfoCache)
info, err := ic.c.dest.PutBlobPartial(ctx, &proxy, srcInfo, ic.c.blobInfoCache)
if err == nil {
bar.SetRefill(srcInfo.Size - bar.Current())
bar.SetCurrent(srcInfo.Size)
@@ -1658,24 +1628,15 @@ func (c *copier) copyBlobFromStream(ctx context.Context, srcStream io.Reader, sr
}
// === Finally, send the layer stream to dest.
var uploadedInfo types.BlobInfo
// Note: the storage destination optimizes the committing of layers
// which requires passing the index of the layer. Hence, we need to
// special case and cast.
dest, ok := c.dest.(internalTypes.ImageDestinationWithOptions)
if ok {
options := internalTypes.PutBlobOptions{
Cache: c.blobInfoCache,
IsConfig: isConfig,
EmptyLayer: emptyLayer,
}
if !isConfig {
options.LayerIndex = &layerIndex
}
uploadedInfo, err = dest.PutBlobWithOptions(ctx, &errorAnnotationReader{destStream}, inputInfo, options)
} else {
uploadedInfo, err = c.dest.PutBlob(ctx, &errorAnnotationReader{destStream}, inputInfo, c.blobInfoCache, isConfig)
options := private.PutBlobOptions{
Cache: c.blobInfoCache,
IsConfig: isConfig,
EmptyLayer: emptyLayer,
}
if !isConfig {
options.LayerIndex = &layerIndex
}
uploadedInfo, err := c.dest.PutBlobWithOptions(ctx, &errorAnnotationReader{destStream}, inputInfo, options)
if err != nil {
return types.BlobInfo{}, errors.Wrap(err, "writing blob")
}

View File

@@ -5,8 +5,9 @@ import (
"io"
"time"
internalTypes "github.com/containers/image/v5/internal/types"
"github.com/containers/image/v5/internal/private"
"github.com/containers/image/v5/types"
"github.com/vbauerster/mpb/v7"
)
// progressReader is a reader that reports its progress on an interval.
@@ -80,25 +81,26 @@ func (r *progressReader) Read(p []byte) (int, error) {
return n, err
}
// imageSourceSeekableProxy wraps ImageSourceSeekable and keeps track of how many bytes
// blobChunkAccessorProxy wraps a BlobChunkAccessor and keeps track of how many bytes
// are received.
type imageSourceSeekableProxy struct {
// source is the seekable input to read from.
source internalTypes.ImageSourceSeekable
// progress is the chan where the total number of bytes read so far are reported.
progress chan int64
type blobChunkAccessorProxy struct {
wrapped private.BlobChunkAccessor // The underlying BlobChunkAccessor
bar *mpb.Bar // A progress bar updated with the number of bytes read so far
}
// GetBlobAt reads from the ImageSourceSeekable and report how many bytes were received
// to the progress chan.
func (s imageSourceSeekableProxy) GetBlobAt(ctx context.Context, bInfo types.BlobInfo, chunks []internalTypes.ImageSourceChunk) (chan io.ReadCloser, chan error, error) {
rc, errs, err := s.source.GetBlobAt(ctx, bInfo, chunks)
// GetBlobAt returns a sequential channel of readers that contain data for the requested
// blob chunks, and a channel that might get a single error value.
// The specified chunks must be not overlapping and sorted by their offset.
// The readers must be fully consumed, in the order they are returned, before blocking
// to read the next chunk.
func (s *blobChunkAccessorProxy) GetBlobAt(ctx context.Context, info types.BlobInfo, chunks []private.ImageSourceChunk) (chan io.ReadCloser, chan error, error) {
rc, errs, err := s.wrapped.GetBlobAt(ctx, info, chunks)
if err == nil {
total := int64(0)
for _, c := range chunks {
total += int64(c.Length)
}
s.progress <- total
s.bar.IncrInt64(total)
}
return rc, errs, err
}
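
The proxy above replaces the progress channel and its forwarding goroutine with direct bar updates. A minimal mpb/v7 sketch of the single call it now makes per request (IncrInt64), independent of the copy code:

package main

import (
	"time"

	"github.com/vbauerster/mpb/v7"
)

func main() {
	p := mpb.New()
	bar := p.AddBar(100)
	for i := 0; i < 10; i++ {
		// This is the call the proxy performs with the summed chunk lengths.
		bar.IncrInt64(10)
		time.Sleep(20 * time.Millisecond)
	}
	p.Wait()
}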

View File

@@ -16,7 +16,7 @@ import (
"github.com/containers/image/v5/docker/reference"
"github.com/containers/image/v5/internal/iolimits"
internalTypes "github.com/containers/image/v5/internal/types"
"github.com/containers/image/v5/internal/private"
"github.com/containers/image/v5/manifest"
"github.com/containers/image/v5/pkg/sysregistriesv2"
"github.com/containers/image/v5/types"
@@ -147,6 +147,11 @@ func (s *dockerImageSource) Close() error {
return nil
}
// SupportsGetBlobAt() returns true if GetBlobAt (BlobChunkAccessor) is supported.
func (s *dockerImageSource) SupportsGetBlobAt() bool {
return true
}
// LayerInfosForCopy returns either nil (meaning the values in the manifest are fine), or updated values for the layer
// blobsums that are listed in the image's manifest. If values are returned, they should be used when using GetBlob()
// to read the image's layers.
@@ -288,7 +293,7 @@ func (s *dockerImageSource) HasThreadSafeGetBlob() bool {
}
// splitHTTP200ResponseToPartial splits a 200 response in multiple streams as specified by the chunks
func splitHTTP200ResponseToPartial(streams chan io.ReadCloser, errs chan error, body io.ReadCloser, chunks []internalTypes.ImageSourceChunk) {
func splitHTTP200ResponseToPartial(streams chan io.ReadCloser, errs chan error, body io.ReadCloser, chunks []private.ImageSourceChunk) {
defer close(streams)
defer close(errs)
currentOffset := uint64(0)
@@ -322,7 +327,7 @@ func splitHTTP200ResponseToPartial(streams chan io.ReadCloser, errs chan error,
}
// handle206Response reads a 206 response and send each part as a separate ReadCloser to the streams chan.
func handle206Response(streams chan io.ReadCloser, errs chan error, body io.ReadCloser, chunks []internalTypes.ImageSourceChunk, mediaType string, params map[string]string) {
func handle206Response(streams chan io.ReadCloser, errs chan error, body io.ReadCloser, chunks []private.ImageSourceChunk, mediaType string, params map[string]string) {
defer close(streams)
defer close(errs)
if !strings.HasPrefix(mediaType, "multipart/") {
@@ -357,9 +362,12 @@ func handle206Response(streams chan io.ReadCloser, errs chan error, body io.Read
}
}
// GetBlobAt returns a stream for the specified blob.
// GetBlobAt returns a sequential channel of readers that contain data for the requested
// blob chunks, and a channel that might get a single error value.
// The specified chunks must be not overlapping and sorted by their offset.
func (s *dockerImageSource) GetBlobAt(ctx context.Context, info types.BlobInfo, chunks []internalTypes.ImageSourceChunk) (chan io.ReadCloser, chan error, error) {
// The readers must be fully consumed, in the order they are returned, before blocking
// to read the next chunk.
func (s *dockerImageSource) GetBlobAt(ctx context.Context, info types.BlobInfo, chunks []private.ImageSourceChunk) (chan io.ReadCloser, chan error, error) {
headers := make(map[string][]string)
var rangeVals []string
@@ -401,7 +409,7 @@ func (s *dockerImageSource) GetBlobAt(ctx context.Context, info types.BlobInfo,
return streams, errs, nil
case http.StatusBadRequest:
res.Body.Close()
return nil, nil, internalTypes.BadPartialRequestError{Status: res.Status}
return nil, nil, private.BadPartialRequestError{Status: res.Status}
default:
err := httpResponseToError(res, "Error fetching partial blob")
if err == nil {

View File

@@ -0,0 +1,6 @@
package reference
// Return true if the specified string fully matches `IdentifierRegexp`.
func IsFullIdentifier(s string) bool {
return anchoredIdentifierRegexp.MatchString(s)
}
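
A usage sketch for the new helper; the import path assumes the vendored containers/image reference package, and IdentifierRegexp is assumed to match a full 64-character hexadecimal image ID:

package main

import (
	"fmt"
	"strings"

	"github.com/containers/image/v5/docker/reference"
)

func main() {
	full := strings.Repeat("0123456789abcdef", 4) // 64 hex characters
	fmt.Println(reference.IsFullIdentifier(full))      // true
	fmt.Println(reference.IsFullIdentifier(full[:12])) // false: truncated IDs do not match
}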

View File

@@ -0,0 +1,69 @@
package imagedestination
import (
"context"
"fmt"
"io"
"github.com/containers/image/v5/internal/private"
"github.com/containers/image/v5/types"
)
// FromPublic(dest) returns an object that provides the private.ImageDestination API
//
// Eventually, we might want to expose this function, and methods of the returned object,
// as a public API (or rather, a variant that does not include the already-superseded
// methods of types.ImageDestination, and has added more future-proofing), and more strongly
// deprecate direct use of types.ImageDestination.
//
// NOTE: The returned API MUST NOT be a public interface (it can be either just a struct
// with public methods, or perhaps a private interface), so that we can add methods
// without breaking any external implementors of a public interface.
func FromPublic(dest types.ImageDestination) private.ImageDestination {
if dest2, ok := dest.(private.ImageDestination); ok {
return dest2
}
return &wrapped{ImageDestination: dest}
}
// wrapped provides the private.ImageDestination operations
// for a destination that only implements types.ImageDestination
type wrapped struct {
types.ImageDestination
}
// SupportsPutBlobPartial returns true if PutBlobPartial is supported.
func (w *wrapped) SupportsPutBlobPartial() bool {
return false
}
// PutBlobWithOptions writes contents of stream and returns data representing the result.
// inputInfo.Digest can be optionally provided if known; if provided, and stream is read to the end without error, the digest MUST match the stream contents.
// inputInfo.Size is the expected length of stream, if known.
// inputInfo.MediaType describes the blob format, if known.
// WARNING: The contents of stream are being verified on the fly. Until stream.Read() returns io.EOF, the contents of the data SHOULD NOT be available
// to any other readers for download using the supplied digest.
// If stream.Read() at any time, ESPECIALLY at end of input, returns an error, PutBlob MUST 1) fail, and 2) delete any data stored so far.
func (w *wrapped) PutBlobWithOptions(ctx context.Context, stream io.Reader, inputInfo types.BlobInfo, options private.PutBlobOptions) (types.BlobInfo, error) {
return w.PutBlob(ctx, stream, inputInfo, options.Cache, options.IsConfig)
}
// PutBlobPartial attempts to create a blob using the data that is already present
// at the destination. chunkAccessor is accessed in a non-sequential way to retrieve the missing chunks.
// It is available only if SupportsPutBlobPartial().
// Even if SupportsPutBlobPartial() returns true, the call can fail, in which case the caller
// should fall back to PutBlobWithOptions.
func (w *wrapped) PutBlobPartial(ctx context.Context, chunkAccessor private.BlobChunkAccessor, srcInfo types.BlobInfo, cache types.BlobInfoCache) (types.BlobInfo, error) {
return types.BlobInfo{}, fmt.Errorf("internal error: PutBlobPartial is not supported by the %q transport", w.Reference().Transport().Name())
}
// TryReusingBlobWithOptions checks whether the transport already contains, or can efficiently reuse, a blob, and if so, applies it to the current destination
// (e.g. if the blob is a filesystem layer, this signifies that the changes it describes need to be applied again when composing a filesystem tree).
// info.Digest must not be empty.
// If the blob has been successfully reused, returns (true, info, nil); info must contain at least a digest and size, and may
// include CompressionOperation and CompressionAlgorithm fields to indicate that a change to the compression type should be
// reflected in the manifest that will be written.
// If the transport can not reuse the requested blob, TryReusingBlob returns (false, {}, nil); it returns a non-nil error only on an unexpected failure.
func (w *wrapped) TryReusingBlobWithOptions(ctx context.Context, info types.BlobInfo, options private.TryReusingBlobOptions) (bool, types.BlobInfo, error) {
return w.TryReusingBlob(ctx, info, options.Cache, options.CanSubstitute)
}
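
The same wrapping pattern appears in both FromPublic helpers (this one and the imagesource one below): return the value unchanged if it already implements the private interface, otherwise embed it in a struct that supplies conservative defaults. A toy, standalone analogue of that pattern (the interface and type names here are stand-ins, not the real ones):

package main

import "fmt"

// narrowAPI stands in for types.ImageDestination, wideAPI for the internal
// private.ImageDestination extension.
type narrowAPI interface{ Name() string }

type wideAPI interface {
	narrowAPI
	SupportsPutBlobPartial() bool
}

type wrappedDest struct{ narrowAPI }

// The wrapper advertises the conservative default, just like the code above.
func (w *wrappedDest) SupportsPutBlobPartial() bool { return false }

func fromPublic(d narrowAPI) wideAPI {
	if d2, ok := d.(wideAPI); ok {
		return d2 // already implements the extended API
	}
	return &wrappedDest{narrowAPI: d}
}

type plainDest struct{}

func (plainDest) Name() string { return "plain" }

func main() {
	d := fromPublic(plainDest{})
	fmt.Println(d.Name(), d.SupportsPutBlobPartial()) // plain false
}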

View File

@@ -0,0 +1,47 @@
package imagesource
import (
"context"
"fmt"
"io"
"github.com/containers/image/v5/internal/private"
"github.com/containers/image/v5/types"
)
// FromPublic(src) returns an object that provides the private.ImageSource API
//
// Eventually, we might want to expose this function, and methods of the returned object,
// as a public API (or rather, a variant that does not include the already-superseded
// methods of types.ImageSource, and has added more future-proofing), and more strongly
// deprecate direct use of types.ImageSource.
//
// NOTE: The returned API MUST NOT be a public interface (it can be either just a struct
// with public methods, or perhaps a private interface), so that we can add methods
// without breaking any external implementors of a public interface.
func FromPublic(src types.ImageSource) private.ImageSource {
if src2, ok := src.(private.ImageSource); ok {
return src2
}
return &wrapped{ImageSource: src}
}
// wrapped provides the private.ImageSource operations
// for a source that only implements types.ImageSource
type wrapped struct {
types.ImageSource
}
// SupportsGetBlobAt() returns true if GetBlobAt (BlobChunkAccessor) is supported.
func (w *wrapped) SupportsGetBlobAt() bool {
return false
}
// GetBlobAt returns a sequential channel of readers that contain data for the requested
// blob chunks, and a channel that might get a single error value.
// The specified chunks must be not overlapping and sorted by their offset.
// The readers must be fully consumed, in the order they are returned, before blocking
// to read the next chunk.
func (w *wrapped) GetBlobAt(ctx context.Context, info types.BlobInfo, chunks []private.ImageSourceChunk) (chan io.ReadCloser, chan error, error) {
return nil, nil, fmt.Errorf("internal error: GetBlobAt is not supported by the %q transport", w.Reference().Transport().Name())
}

View File

@@ -1,74 +0,0 @@
// Copyright 2015 Jesse Sipprell. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
//go:build linux
// +build linux
package keyctl
import (
"golang.org/x/sys/unix"
)
// Key represents a single key linked to one or more kernel keyrings.
type Key struct {
Name string
id, ring keyID
size int
}
// ID returns the 32-bit kernel identifier for a specific key
func (k *Key) ID() int32 {
return int32(k.id)
}
// Get the key's value as a byte slice
func (k *Key) Get() ([]byte, error) {
var (
b []byte
err error
sizeRead int
)
if k.size == 0 {
k.size = 512
}
size := k.size
b = make([]byte, int(size))
sizeRead = size + 1
for sizeRead > size {
r1, err := unix.KeyctlBuffer(unix.KEYCTL_READ, int(k.id), b, size)
if err != nil {
return nil, err
}
if sizeRead = int(r1); sizeRead > size {
b = make([]byte, sizeRead)
size = sizeRead
sizeRead = size + 1
} else {
k.size = sizeRead
}
}
return b[:k.size], err
}
// Unlink a key from the keyring it was loaded from (or added to). If the key
// is not linked to any other keyrings, it is destroyed.
func (k *Key) Unlink() error {
_, err := unix.KeyctlInt(unix.KEYCTL_UNLINK, int(k.id), int(k.ring), 0, 0)
return err
}
// Describe returns a string describing the attributes of a specified key
func (k *Key) Describe() (string, error) {
keyAttr, err := unix.KeyctlString(unix.KEYCTL_DESCRIBE, int(k.id))
if err != nil {
return "", err
}
return keyAttr, nil
}

View File

@@ -1,118 +0,0 @@
// Copyright 2015 Jesse Sipprell. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
//go:build linux
// +build linux
// Package keyctl is a Go interface to linux kernel keyrings (keyctl interface)
package keyctl
import (
"unsafe"
"golang.org/x/sys/unix"
)
// Keyring is the basic interface to a linux keyctl keyring.
type Keyring interface {
ID
Add(string, []byte) (*Key, error)
Search(string) (*Key, error)
}
type keyring struct {
id keyID
}
// ID is the unique 32-bit serial number identifier that all Keys and Keyrings have.

type ID interface {
ID() int32
}
// Add a new key to a keyring. The key can be searched for later by name.
func (kr *keyring) Add(name string, key []byte) (*Key, error) {
r, err := unix.AddKey("user", name, key, int(kr.id))
if err == nil {
key := &Key{Name: name, id: keyID(r), ring: kr.id}
return key, nil
}
return nil, err
}
// Search for a key by name, this also searches child keyrings linked to this
// one. The key, if found, is linked to the top keyring that Search() was called
// from.
func (kr *keyring) Search(name string) (*Key, error) {
id, err := unix.KeyctlSearch(int(kr.id), "user", name, 0)
if err == nil {
return &Key{Name: name, id: keyID(id), ring: kr.id}, nil
}
return nil, err
}
// ID returns the 32-bit kernel identifier of a keyring
func (kr *keyring) ID() int32 {
return int32(kr.id)
}
// SessionKeyring returns the current login session keyring
func SessionKeyring() (Keyring, error) {
return newKeyring(unix.KEY_SPEC_SESSION_KEYRING)
}
// UserKeyring returns the keyring specific to the current user.
func UserKeyring() (Keyring, error) {
return newKeyring(unix.KEY_SPEC_USER_KEYRING)
}
// Unlink an object from a keyring
func Unlink(parent Keyring, child ID) error {
_, err := unix.KeyctlInt(unix.KEYCTL_UNLINK, int(child.ID()), int(parent.ID()), 0, 0)
return err
}
// Link a key into a keyring
func Link(parent Keyring, child ID) error {
_, err := unix.KeyctlInt(unix.KEYCTL_LINK, int(child.ID()), int(parent.ID()), 0, 0)
return err
}
// ReadUserKeyring reads user keyring and returns slice of key with id(key_serial_t) representing the IDs of all the keys that are linked to it
func ReadUserKeyring() ([]*Key, error) {
var (
b []byte
err error
sizeRead int
)
krSize := 4
size := krSize
b = make([]byte, size)
sizeRead = size + 1
for sizeRead > size {
r1, err := unix.KeyctlBuffer(unix.KEYCTL_READ, unix.KEY_SPEC_USER_KEYRING, b, size)
if err != nil {
return nil, err
}
if sizeRead = int(r1); sizeRead > size {
b = make([]byte, sizeRead)
size = sizeRead
sizeRead = size + 1
} else {
krSize = sizeRead
}
}
keyIDs := getKeyIDsFromByte(b[:krSize])
return keyIDs, err
}
func getKeyIDsFromByte(byteKeyIDs []byte) []*Key {
idSize := 4
var keys []*Key
for idx := 0; idx+idSize <= len(byteKeyIDs); idx = idx + idSize {
tempID := *(*int32)(unsafe.Pointer(&byteKeyIDs[idx]))
keys = append(keys, &Key{id: keyID(tempID)})
}
return keys
}

View File

@@ -1,34 +0,0 @@
// Copyright 2015 Jesse Sipprell. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
//go:build linux
// +build linux
package keyctl
import (
"golang.org/x/sys/unix"
)
// KeyPerm represents in-kernel access control permission to keys and keyrings
// as a 32-bit integer broken up into four permission sets, one per byte.
// In MSB order, the perms are: Processor, User, Group, Other.
type KeyPerm uint32
const (
// PermOtherAll sets all permission for Other
PermOtherAll KeyPerm = 0x3f << (8 * iota)
// PermGroupAll sets all permission for Group
PermGroupAll
// PermUserAll sets all permission for User
PermUserAll
// PermProcessAll sets all permission for Processor
PermProcessAll
)
// SetPerm sets the permissions on a key or keyring.
func SetPerm(k ID, p KeyPerm) error {
err := unix.KeyctlSetperm(int(k.ID()), uint32(p))
return err
}

View File

@@ -1,26 +0,0 @@
// Copyright 2015 Jesse Sipprell. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
//go:build linux
// +build linux
package keyctl
import (
"golang.org/x/sys/unix"
)
type keyID int32
func newKeyring(id keyID) (*keyring, error) {
r1, err := unix.KeyctlGetKeyringID(int(id), true)
if err != nil {
return nil, err
}
if id < 0 {
r1 = int(id)
}
return &keyring{id: keyID(r1)}, nil
}

View File

@@ -0,0 +1,106 @@
package private
import (
"context"
"io"
"github.com/containers/image/v5/docker/reference"
"github.com/containers/image/v5/types"
)
// ImageSource is an internal extension to the types.ImageSource interface.
type ImageSource interface {
types.ImageSource
// SupportsGetBlobAt() returns true if GetBlobAt (BlobChunkAccessor) is supported.
SupportsGetBlobAt() bool
// BlobChunkAccessor.GetBlobAt is available only if SupportsGetBlobAt().
BlobChunkAccessor
}
// ImageDestination is an internal extension to the types.ImageDestination
// interface.
type ImageDestination interface {
types.ImageDestination
// SupportsPutBlobPartial returns true if PutBlobPartial is supported.
SupportsPutBlobPartial() bool
// PutBlobWithOptions writes contents of stream and returns data representing the result.
// inputInfo.Digest can be optionally provided if known; if provided, and stream is read to the end without error, the digest MUST match the stream contents.
// inputInfo.Size is the expected length of stream, if known.
// inputInfo.MediaType describes the blob format, if known.
// WARNING: The contents of stream are being verified on the fly. Until stream.Read() returns io.EOF, the contents of the data SHOULD NOT be available
// to any other readers for download using the supplied digest.
// If stream.Read() at any time, ESPECIALLY at end of input, returns an error, PutBlob MUST 1) fail, and 2) delete any data stored so far.
PutBlobWithOptions(ctx context.Context, stream io.Reader, inputInfo types.BlobInfo, options PutBlobOptions) (types.BlobInfo, error)
// PutBlobPartial attempts to create a blob using the data that is already present
// at the destination. chunkAccessor is accessed in a non-sequential way to retrieve the missing chunks.
// It is available only if SupportsPutBlobPartial().
// Even if SupportsPutBlobPartial() returns true, the call can fail, in which case the caller
// should fall back to PutBlobWithOptions.
PutBlobPartial(ctx context.Context, chunkAccessor BlobChunkAccessor, srcInfo types.BlobInfo, cache types.BlobInfoCache) (types.BlobInfo, error)
// TryReusingBlobWithOptions checks whether the transport already contains, or can efficiently reuse, a blob, and if so, applies it to the current destination
// (e.g. if the blob is a filesystem layer, this signifies that the changes it describes need to be applied again when composing a filesystem tree).
// info.Digest must not be empty.
// If the blob has been successfully reused, returns (true, info, nil); info must contain at least a digest and size, and may
// include CompressionOperation and CompressionAlgorithm fields to indicate that a change to the compression type should be
// reflected in the manifest that will be written.
// If the transport can not reuse the requested blob, TryReusingBlob returns (false, {}, nil); it returns a non-nil error only on an unexpected failure.
TryReusingBlobWithOptions(ctx context.Context, info types.BlobInfo, options TryReusingBlobOptions) (bool, types.BlobInfo, error)
}
// PutBlobOptions are used in PutBlobWithOptions.
type PutBlobOptions struct {
Cache types.BlobInfoCache // Cache to optionally update with the uploaded blob, and to look up blob infos.
IsConfig bool // True if the blob is a config
// The following fields are new to internal/private. Users of internal/private MUST fill them in,
// but they also must expect that they will be ignored by types.ImageDestination transports.
EmptyLayer bool // True if the blob is an "empty"/"throwaway" layer, and may not necessarily be physically represented.
LayerIndex *int // If the blob is a layer, a zero-based index of the layer within the image; nil otherwise.
}
// TryReusingBlobOptions are used in TryReusingBlobWithOptions.
type TryReusingBlobOptions struct {
Cache types.BlobInfoCache // Cache to use and/or update.
// If true, it is allowed to use an equivalent of the desired blob;
// in that case the returned info may not match the input.
CanSubstitute bool
// The following fields are new to internal/private. Users of internal/private MUST fill them in,
// but they also must expect that they will be ignored by types.ImageDestination transports.
EmptyLayer bool // True if the blob is an "empty"/"throwaway" layer, and may not necessarily be physically represented.
LayerIndex *int // If the blob is a layer, a zero-based index of the layer within the image; nil otherwise.
SrcRef reference.Named // A reference to the source image that contains the input blob.
}
// ImageSourceChunk is a portion of a blob.
// This API is experimental and can be changed without bumping the major version number.
type ImageSourceChunk struct {
Offset uint64
Length uint64
}
// BlobChunkAccessor allows fetching discontiguous chunks of a blob.
type BlobChunkAccessor interface {
// GetBlobAt returns a sequential channel of readers that contain data for the requested
// blob chunks, and a channel that might get a single error value.
// The specified chunks must be not overlapping and sorted by their offset.
// The readers must be fully consumed, in the order they are returned, before blocking
// to read the next chunk.
GetBlobAt(ctx context.Context, info types.BlobInfo, chunks []ImageSourceChunk) (chan io.ReadCloser, chan error, error)
}
// BadPartialRequestError is returned by BlobChunkAccessor.GetBlobAt on an invalid request.
type BadPartialRequestError struct {
Status string
}
func (e BadPartialRequestError) Error() string {
return e.Status
}
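
For orientation, the contract in the comments above implies a specific calling pattern: gate PutBlobPartial on SupportsPutBlobPartial(), and still be prepared to fall back to PutBlobWithOptions when the partial path fails. The sketch below is illustrative only, not the actual copy pipeline; it assumes the interface shown above is exported as private.ImageDestination and also exposes SupportsPutBlobPartial() (the storage destination later in this diff implements it), and it could only compile inside the containers/image module, since internal/ packages are not importable from outside.

package example // hypothetical package, for illustration only

import (
	"context"
	"io"

	"github.com/containers/image/v5/internal/private"
	"github.com/containers/image/v5/types"
)

// putLayer is a hypothetical helper showing the documented fallback contract.
func putLayer(ctx context.Context, dest private.ImageDestination, chunkAccessor private.BlobChunkAccessor,
	stream io.Reader, srcInfo types.BlobInfo, cache types.BlobInfoCache, opts private.PutBlobOptions) (types.BlobInfo, error) {
	if dest.SupportsPutBlobPartial() {
		if info, err := dest.PutBlobPartial(ctx, chunkAccessor, srcInfo, cache); err == nil {
			return info, nil
		}
		// Even when SupportsPutBlobPartial() returns true the call may fail;
		// the comments above require falling back to a full PutBlobWithOptions.
	}
	return dest.PutBlobWithOptions(ctx, stream, srcInfo, opts)
}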

View File

@@ -1,91 +0,0 @@
package types
import (
"context"
"io"
"github.com/containers/image/v5/docker/reference"
publicTypes "github.com/containers/image/v5/types"
)
// ImageDestinationWithOptions is an internal extension to the ImageDestination
// interface.
type ImageDestinationWithOptions interface {
publicTypes.ImageDestination
// PutBlobWithOptions is a wrapper around PutBlob. If
// options.LayerIndex is set, the blob will be committed directly.
// Either by the calling goroutine or by another goroutine already
// committing layers.
//
// Please note that TryReusingBlobWithOptions and PutBlobWithOptions
// *must* be used together. Mixing the two with non "WithOptions"
// functions is not supported.
PutBlobWithOptions(ctx context.Context, stream io.Reader, blobinfo publicTypes.BlobInfo, options PutBlobOptions) (publicTypes.BlobInfo, error)
// TryReusingBlobWithOptions is a wrapper around TryReusingBlob. If
// options.LayerIndex is set, the reused blob will be recorded as
// already pulled.
//
// Please note that TryReusingBlobWithOptions and PutBlobWithOptions
// *must* be used together. Mixing the two with non "WithOptions"
// functions is not supported.
TryReusingBlobWithOptions(ctx context.Context, blobinfo publicTypes.BlobInfo, options TryReusingBlobOptions) (bool, publicTypes.BlobInfo, error)
}
// PutBlobOptions are used in PutBlobWithOptions.
type PutBlobOptions struct {
// Cache to look up blob infos.
Cache publicTypes.BlobInfoCache
// Denotes whether the blob is a config or not.
IsConfig bool
// Indicates an empty layer.
EmptyLayer bool
// The corresponding index in the layer slice.
LayerIndex *int
}
// TryReusingBlobOptions are used in TryReusingBlobWithOptions.
type TryReusingBlobOptions struct {
// Cache to look up blob infos.
Cache publicTypes.BlobInfoCache
// Use an equivalent of the desired blob.
CanSubstitute bool
// Indicates an empty layer.
EmptyLayer bool
// The corresponding index in the layer slice.
LayerIndex *int
// The reference of the image that contains the target blob.
SrcRef reference.Named
}
// ImageSourceChunk is a portion of a blob.
// This API is experimental and can be changed without bumping the major version number.
type ImageSourceChunk struct {
Offset uint64
Length uint64
}
// ImageSourceSeekable is an image source that permits to fetch chunks of the entire blob.
// This API is experimental and can be changed without bumping the major version number.
type ImageSourceSeekable interface {
// GetBlobAt returns a stream for the specified blob.
// The specified chunks must be not overlapping and sorted by their offset.
GetBlobAt(context.Context, publicTypes.BlobInfo, []ImageSourceChunk) (chan io.ReadCloser, chan error, error)
}
// ImageDestinationPartial is a service that stores a blob by requesting the missing chunks from an ImageSourceSeekable.
// This API is experimental and can be changed without bumping the major version number.
type ImageDestinationPartial interface {
// PutBlobPartial writes contents of stream and returns data representing the result.
PutBlobPartial(ctx context.Context, stream ImageSourceSeekable, srcInfo publicTypes.BlobInfo, cache publicTypes.BlobInfoCache) (publicTypes.BlobInfo, error)
}
// BadPartialRequestError is returned by ImageSourceSeekable.GetBlobAt on an invalid request.
type BadPartialRequestError struct {
Status string
}
func (e BadPartialRequestError) Error() string {
return e.Status
}

View File

@@ -112,7 +112,7 @@ func (d *ociImageDestination) IgnoresEmbeddedDockerReference() bool {
// HasThreadSafePutBlob indicates whether PutBlob can be executed concurrently.
func (d *ociImageDestination) HasThreadSafePutBlob() bool {
return false
return true
}
// PutBlob writes contents of stream and returns data representing the result.

View File

@@ -1,119 +0,0 @@
package config
import (
"fmt"
"strings"
"github.com/containers/image/v5/internal/pkg/keyctl"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
)
// NOTE: none of the functions here are currently used. If we ever want to
// re-enable keyring support, we should introduce a similar built-in credential
// helpers as for `sysregistriesv2.AuthenticationFileHelper`.
const keyDescribePrefix = "container-registry-login:" //nolint:deadcode,unused
func getAuthFromKernelKeyring(registry string) (string, string, error) { //nolint:deadcode,unused
userkeyring, err := keyctl.UserKeyring()
if err != nil {
return "", "", err
}
key, err := userkeyring.Search(genDescription(registry))
if err != nil {
return "", "", err
}
authData, err := key.Get()
if err != nil {
return "", "", err
}
parts := strings.SplitN(string(authData), "\x00", 2)
if len(parts) != 2 {
return "", "", nil
}
return parts[0], parts[1], nil
}
func deleteAuthFromKernelKeyring(registry string) error { //nolint:deadcode,unused
userkeyring, err := keyctl.UserKeyring()
if err != nil {
return err
}
key, err := userkeyring.Search(genDescription(registry))
if err != nil {
return err
}
return key.Unlink()
}
func removeAllAuthFromKernelKeyring() error { //nolint:deadcode,unused
keys, err := keyctl.ReadUserKeyring()
if err != nil {
return err
}
userkeyring, err := keyctl.UserKeyring()
if err != nil {
return err
}
for _, k := range keys {
keyAttr, err := k.Describe()
if err != nil {
return err
}
// split string "type;uid;gid;perm;description"
keyAttrs := strings.SplitN(keyAttr, ";", 5)
if len(keyAttrs) < 5 {
return errors.Errorf("Key attributes of %d are not available", k.ID())
}
keyDescribe := keyAttrs[4]
if strings.HasPrefix(keyDescribe, keyDescribePrefix) {
err := keyctl.Unlink(userkeyring, k)
if err != nil {
return errors.Wrapf(err, "unlinking key %d", k.ID())
}
logrus.Debugf("unlinked key %d:%s", k.ID(), keyAttr)
}
}
return nil
}
func setAuthToKernelKeyring(registry, username, password string) error { //nolint:deadcode,unused
keyring, err := keyctl.SessionKeyring()
if err != nil {
return err
}
id, err := keyring.Add(genDescription(registry), []byte(fmt.Sprintf("%s\x00%s", username, password)))
if err != nil {
return err
}
// Set all permissions (view, read, write, search, link, set attribute) for the current user.
// This lets the user search the key after it has been linked to the user keyring and unlinked from the session keyring.
err = keyctl.SetPerm(id, keyctl.PermUserAll)
if err != nil {
return err
}
// link the key to userKeyring
userKeyring, err := keyctl.UserKeyring()
if err != nil {
return errors.Wrapf(err, "getting user keyring")
}
err = keyctl.Link(userKeyring, id)
if err != nil {
return errors.Wrapf(err, "linking the key to user keyring")
}
// unlink the key from session keyring
err = keyctl.Unlink(keyring, id)
if err != nil {
return errors.Wrapf(err, "unlinking the key from session keyring")
}
return nil
}
func genDescription(registry string) string { //nolint:deadcode,unused
return fmt.Sprintf("%s%s", keyDescribePrefix, registry)
}

View File

@@ -1,21 +0,0 @@
//go:build !linux && (!386 || !amd64)
// +build !linux
// +build !386 !amd64
package config
func getAuthFromKernelKeyring(registry string) (string, string, error) { //nolint:deadcode,unused
return "", "", ErrNotSupported
}
func deleteAuthFromKernelKeyring(registry string) error { //nolint:deadcode,unused
return ErrNotSupported
}
func setAuthToKernelKeyring(registry, username, password string) error { //nolint:deadcode,unused
return ErrNotSupported
}
func removeAllAuthFromKernelKeyring() error { //nolint:deadcode,unused
return ErrNotSupported
}

View File

@@ -18,9 +18,9 @@ import (
"github.com/containers/image/v5/docker/reference"
"github.com/containers/image/v5/image"
"github.com/containers/image/v5/internal/private"
"github.com/containers/image/v5/internal/putblobdigest"
"github.com/containers/image/v5/internal/tmpdir"
internalTypes "github.com/containers/image/v5/internal/types"
"github.com/containers/image/v5/manifest"
"github.com/containers/image/v5/pkg/blobinfocache/none"
"github.com/containers/image/v5/types"
@@ -446,15 +446,20 @@ func (s *storageImageDestination) computeNextBlobCacheFile() string {
return filepath.Join(s.directory, fmt.Sprintf("%d", atomic.AddInt32(&s.nextTempFileID, 1)))
}
// PutBlobWithOptions is a wrapper around PutBlob. If options.LayerIndex is
// set, the blob will be committed directly. Either by the calling goroutine
// or by another goroutine already committing layers.
//
// Please note that TryReusingBlobWithOptions and PutBlobWithOptions *must* be
// used together. Mixing the two with non "WithOptions" functions is not
// supported.
func (s *storageImageDestination) PutBlobWithOptions(ctx context.Context, stream io.Reader, blobinfo types.BlobInfo, options internalTypes.PutBlobOptions) (types.BlobInfo, error) {
info, err := s.PutBlob(ctx, stream, blobinfo, options.Cache, options.IsConfig)
// HasThreadSafePutBlob indicates whether PutBlob can be executed concurrently.
func (s *storageImageDestination) HasThreadSafePutBlob() bool {
return true
}
// PutBlobWithOptions writes contents of stream and returns data representing the result.
// inputInfo.Digest can be optionally provided if known; if provided, and stream is read to the end without error, the digest MUST match the stream contents.
// inputInfo.Size is the expected length of stream, if known.
// inputInfo.MediaType describes the blob format, if known.
// WARNING: The contents of stream are being verified on the fly. Until stream.Read() returns io.EOF, the contents of the data SHOULD NOT be available
// to any other readers for download using the supplied digest.
// If stream.Read() at any time, ESPECIALLY at end of input, returns an error, PutBlob MUST 1) fail, and 2) delete any data stored so far.
func (s *storageImageDestination) PutBlobWithOptions(ctx context.Context, stream io.Reader, blobinfo types.BlobInfo, options private.PutBlobOptions) (types.BlobInfo, error) {
info, err := s.putBlobToPendingFile(ctx, stream, blobinfo, &options)
if err != nil {
return info, err
}
@@ -466,11 +471,6 @@ func (s *storageImageDestination) PutBlobWithOptions(ctx context.Context, stream
return info, s.queueOrCommit(ctx, info, *options.LayerIndex, options.EmptyLayer)
}
// HasThreadSafePutBlob indicates whether PutBlob can be executed concurrently.
func (s *storageImageDestination) HasThreadSafePutBlob() bool {
return true
}
// PutBlob writes contents of stream and returns data representing the result.
// inputInfo.Digest can be optionally provided if known; if provided, and stream is read to the end without error, the digest MUST match the stream contents.
// inputInfo.Size is the expected length of stream, if known.
@@ -480,6 +480,15 @@ func (s *storageImageDestination) HasThreadSafePutBlob() bool {
// to any other readers for download using the supplied digest.
// If stream.Read() at any time, ESPECIALLY at end of input, returns an error, PutBlob MUST 1) fail, and 2) delete any data stored so far.
func (s *storageImageDestination) PutBlob(ctx context.Context, stream io.Reader, blobinfo types.BlobInfo, cache types.BlobInfoCache, isConfig bool) (types.BlobInfo, error) {
return s.PutBlobWithOptions(ctx, stream, blobinfo, private.PutBlobOptions{
Cache: cache,
IsConfig: isConfig,
})
}
// putBlobToPendingFile implements ImageDestination.PutBlobWithOptions, storing stream into an on-disk file.
// The caller must arrange the blob to be eventually committed using s.commitLayer().
func (s *storageImageDestination) putBlobToPendingFile(ctx context.Context, stream io.Reader, blobinfo types.BlobInfo, options *private.PutBlobOptions) (types.BlobInfo, error) {
// Stores a layer or data blob in our temporary directory, checking that any information
// in the blobinfo matches the incoming data.
errorBlobInfo := types.BlobInfo{
@@ -534,7 +543,7 @@ func (s *storageImageDestination) PutBlob(ctx context.Context, stream io.Reader,
s.lock.Unlock()
// This is safe because we have just computed diffID, and blobDigest was either computed
// by us, or validated by the caller (usually copy.digestingReader).
cache.RecordDigestUncompressedPair(blobDigest, diffID.Digest())
options.Cache.RecordDigestUncompressedPair(blobDigest, diffID.Digest())
return types.BlobInfo{
Digest: blobDigest,
Size: blobSize,
@@ -542,80 +551,40 @@ func (s *storageImageDestination) PutBlob(ctx context.Context, stream io.Reader,
}, nil
}
// TryReusingBlobWithOptions is a wrapper around TryReusingBlob. If
// options.LayerIndex is set, the reused blob will be recorded as already
// pulled.
//
// Please note that TryReusingBlobWithOptions and PutBlobWithOptions *must* be
// used together. Mixing the two with the non "WithOptions" functions
// is not supported.
func (s *storageImageDestination) TryReusingBlobWithOptions(ctx context.Context, blobinfo types.BlobInfo, options internalTypes.TryReusingBlobOptions) (bool, types.BlobInfo, error) {
reused, info, err := s.tryReusingBlobWithSrcRef(ctx, blobinfo, options.Cache, options.CanSubstitute, options.SrcRef)
if err != nil || !reused || options.LayerIndex == nil {
return reused, info, err
}
return reused, info, s.queueOrCommit(ctx, info, *options.LayerIndex, options.EmptyLayer)
}
// tryReusingBlobWithSrcRef is a wrapper around TryReusingBlob.
// If ref is provided, this function first tries to get layer from Additional Layer Store.
func (s *storageImageDestination) tryReusingBlobWithSrcRef(ctx context.Context, blobinfo types.BlobInfo, cache types.BlobInfoCache, canSubstitute bool, ref reference.Named) (bool, types.BlobInfo, error) {
// lock the entire method as it executes fairly quickly
s.lock.Lock()
defer s.lock.Unlock()
if ref != nil {
// Check if we have the layer in the underlying additional layer store.
aLayer, err := s.imageRef.transport.store.LookupAdditionalLayer(blobinfo.Digest, ref.String())
if err != nil && errors.Cause(err) != storage.ErrLayerUnknown {
return false, types.BlobInfo{}, errors.Wrapf(err, `looking for compressed layers with digest %q and labels`, blobinfo.Digest)
} else if err == nil {
// Record the uncompressed value so that we can use it to calculate layer IDs.
s.blobDiffIDs[blobinfo.Digest] = aLayer.UncompressedDigest()
s.blobAdditionalLayer[blobinfo.Digest] = aLayer
return true, types.BlobInfo{
Digest: blobinfo.Digest,
Size: aLayer.CompressedSize(),
MediaType: blobinfo.MediaType,
}, nil
}
}
return s.tryReusingBlobLocked(ctx, blobinfo, cache, canSubstitute)
}
type zstdFetcher struct {
stream internalTypes.ImageSourceSeekable
ctx context.Context
blobInfo types.BlobInfo
chunkAccessor private.BlobChunkAccessor
ctx context.Context
blobInfo types.BlobInfo
}
// GetBlobAt converts from chunked.GetBlobAt to ImageSourceSeekable.GetBlobAt.
// GetBlobAt converts from chunked.GetBlobAt to BlobChunkAccessor.GetBlobAt.
func (f *zstdFetcher) GetBlobAt(chunks []chunked.ImageSourceChunk) (chan io.ReadCloser, chan error, error) {
var newChunks []internalTypes.ImageSourceChunk
var newChunks []private.ImageSourceChunk
for _, v := range chunks {
i := internalTypes.ImageSourceChunk{
i := private.ImageSourceChunk{
Offset: v.Offset,
Length: v.Length,
}
newChunks = append(newChunks, i)
}
rc, errs, err := f.stream.GetBlobAt(f.ctx, f.blobInfo, newChunks)
if _, ok := err.(internalTypes.BadPartialRequestError); ok {
rc, errs, err := f.chunkAccessor.GetBlobAt(f.ctx, f.blobInfo, newChunks)
if _, ok := err.(private.BadPartialRequestError); ok {
err = chunked.ErrBadRequest{}
}
return rc, errs, err
}
// PutBlobPartial attempts to create a blob using the data that is already present at the destination storage. stream is accessed
// in a non-sequential way to retrieve the missing chunks.
func (s *storageImageDestination) PutBlobPartial(ctx context.Context, stream internalTypes.ImageSourceSeekable, srcInfo types.BlobInfo, cache types.BlobInfoCache) (types.BlobInfo, error) {
// PutBlobPartial attempts to create a blob using the data that is already present
// at the destination. chunkAccessor is accessed in a non-sequential way to retrieve the missing chunks.
// It is available only if SupportsPutBlobPartial().
// Even if SupportsPutBlobPartial() returns true, the call can fail, in which case the caller
// should fall back to PutBlobWithOptions.
func (s *storageImageDestination) PutBlobPartial(ctx context.Context, chunkAccessor private.BlobChunkAccessor, srcInfo types.BlobInfo, cache types.BlobInfoCache) (types.BlobInfo, error) {
fetcher := zstdFetcher{
stream: stream,
ctx: ctx,
blobInfo: srcInfo,
chunkAccessor: chunkAccessor,
ctx: ctx,
blobInfo: srcInfo,
}
differ, err := chunked.GetDiffer(ctx, s.imageRef.transport.store, srcInfo.Size, srcInfo.Annotations, &fetcher)
@@ -640,6 +609,22 @@ func (s *storageImageDestination) PutBlobPartial(ctx context.Context, stream int
return srcInfo, nil
}
// TryReusingBlobWithOptions checks whether the transport already contains, or can efficiently reuse, a blob, and if so, applies it to the current destination
// (e.g. if the blob is a filesystem layer, this signifies that the changes it describes need to be applied again when composing a filesystem tree).
// info.Digest must not be empty.
// If the blob has been successfully reused, returns (true, info, nil); info must contain at least a digest and size, and may
// include CompressionOperation and CompressionAlgorithm fields to indicate that a change to the compression type should be
// reflected in the manifest that will be written.
// If the transport can not reuse the requested blob, TryReusingBlob returns (false, {}, nil); it returns a non-nil error only on an unexpected failure.
func (s *storageImageDestination) TryReusingBlobWithOptions(ctx context.Context, blobinfo types.BlobInfo, options private.TryReusingBlobOptions) (bool, types.BlobInfo, error) {
reused, info, err := s.tryReusingBlobAsPending(ctx, blobinfo, &options)
if err != nil || !reused || options.LayerIndex == nil {
return reused, info, err
}
return reused, info, s.queueOrCommit(ctx, info, *options.LayerIndex, options.EmptyLayer)
}
// TryReusingBlob checks whether the transport already contains, or can efficiently reuse, a blob, and if so, applies it to the current destination
// (e.g. if the blob is a filesystem layer, this signifies that the changes it describes need to be applied again when composing a filesystem tree).
// info.Digest must not be empty.
@@ -650,16 +635,36 @@ func (s *storageImageDestination) PutBlobPartial(ctx context.Context, stream int
// If the transport can not reuse the requested blob, TryReusingBlob returns (false, {}, nil); it returns a non-nil error only on an unexpected failure.
// May use and/or update cache.
func (s *storageImageDestination) TryReusingBlob(ctx context.Context, blobinfo types.BlobInfo, cache types.BlobInfoCache, canSubstitute bool) (bool, types.BlobInfo, error) {
return s.TryReusingBlobWithOptions(ctx, blobinfo, private.TryReusingBlobOptions{
Cache: cache,
CanSubstitute: canSubstitute,
})
}
// tryReusingBlobAsPending implements TryReusingBlobWithOptions, filling s.blobDiffIDs and other metadata.
// The caller must arrange the blob to be eventually committed using s.commitLayer().
func (s *storageImageDestination) tryReusingBlobAsPending(ctx context.Context, blobinfo types.BlobInfo, options *private.TryReusingBlobOptions) (bool, types.BlobInfo, error) {
// lock the entire method as it executes fairly quickly
s.lock.Lock()
defer s.lock.Unlock()
return s.tryReusingBlobLocked(ctx, blobinfo, cache, canSubstitute)
}
if options.SrcRef != nil {
// Check if we have the layer in the underlying additional layer store.
aLayer, err := s.imageRef.transport.store.LookupAdditionalLayer(blobinfo.Digest, options.SrcRef.String())
if err != nil && errors.Cause(err) != storage.ErrLayerUnknown {
return false, types.BlobInfo{}, errors.Wrapf(err, `looking for compressed layers with digest %q and labels`, blobinfo.Digest)
} else if err == nil {
// Record the uncompressed value so that we can use it to calculate layer IDs.
s.blobDiffIDs[blobinfo.Digest] = aLayer.UncompressedDigest()
s.blobAdditionalLayer[blobinfo.Digest] = aLayer
return true, types.BlobInfo{
Digest: blobinfo.Digest,
Size: aLayer.CompressedSize(),
MediaType: blobinfo.MediaType,
}, nil
}
}
// tryReusingBlobLocked implements a core functionality of TryReusingBlob.
// This must be called with a lock being held on storageImageDestination.
func (s *storageImageDestination) tryReusingBlobLocked(ctx context.Context, blobinfo types.BlobInfo, cache types.BlobInfoCache, canSubstitute bool) (bool, types.BlobInfo, error) {
if blobinfo.Digest == "" {
return false, types.BlobInfo{}, errors.Errorf(`Can not check for a blob with unknown digest`)
}
@@ -708,9 +713,9 @@ func (s *storageImageDestination) tryReusingBlobLocked(ctx context.Context, blob
// Does the blob correspond to a known DiffID which we already have available?
// Because we must return the size, which is unknown for unavailable compressed blobs, the returned BlobInfo refers to the
// uncompressed layer, and that can happen only if canSubstitute, or if the incoming manifest already specifies the size.
if canSubstitute || blobinfo.Size != -1 {
if uncompressedDigest := cache.UncompressedDigest(blobinfo.Digest); uncompressedDigest != "" && uncompressedDigest != blobinfo.Digest {
// uncompressed layer, and that can happen only if options.CanSubstitute, or if the incoming manifest already specifies the size.
if options.CanSubstitute || blobinfo.Size != -1 {
if uncompressedDigest := options.Cache.UncompressedDigest(blobinfo.Digest); uncompressedDigest != "" && uncompressedDigest != blobinfo.Digest {
layers, err := s.imageRef.transport.store.LayersByUncompressedDigest(uncompressedDigest)
if err != nil && errors.Cause(err) != storage.ErrLayerUnknown {
return false, types.BlobInfo{}, errors.Wrapf(err, `looking for layers with digest %q`, uncompressedDigest)
@@ -720,8 +725,8 @@ func (s *storageImageDestination) tryReusingBlobLocked(ctx context.Context, blob
s.blobDiffIDs[blobinfo.Digest] = layers[0].UncompressedDigest
return true, blobinfo, nil
}
if !canSubstitute {
return false, types.BlobInfo{}, fmt.Errorf("Internal error: canSubstitute was expected to be true for blobInfo %v", blobinfo)
if !options.CanSubstitute {
return false, types.BlobInfo{}, fmt.Errorf("Internal error: options.CanSubstitute was expected to be true for blobInfo %v", blobinfo)
}
s.blobDiffIDs[uncompressedDigest] = layers[0].UncompressedDigest
return true, types.BlobInfo{
@@ -1261,6 +1266,11 @@ func (s *storageImageDestination) IgnoresEmbeddedDockerReference() bool {
return true // Yes, we want the unmodified manifest
}
// SupportsPutBlobPartial returns true if PutBlobPartial is supported.
func (s *storageImageDestination) SupportsPutBlobPartial() bool {
return true
}
// PutSignatures records the image's signatures for committing as a single data blob.
func (s *storageImageDestination) PutSignatures(ctx context.Context, signatures [][]byte, instanceDigest *digest.Digest) error {
sizes := []int{}
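
To make the "must be used together" note above concrete, here is a hedged sketch of the per-layer pattern a caller of this destination is expected to follow: attempt reuse first, then upload, passing the same LayerIndex so queueOrCommit can commit layers in order. The helper name copyOneLayer and its signature are hypothetical; the real logic lives in the copy package.

package example // hypothetical

import (
	"context"
	"io"

	"github.com/containers/image/v5/internal/private"
	"github.com/containers/image/v5/types"
)

// copyOneLayer is a hypothetical caller-side sketch, not the real copy code.
func copyOneLayer(ctx context.Context, dest private.ImageDestination, stream io.Reader,
	layerInfo types.BlobInfo, index int, cache types.BlobInfoCache) (types.BlobInfo, error) {
	reused, info, err := dest.TryReusingBlobWithOptions(ctx, layerInfo, private.TryReusingBlobOptions{
		Cache:         cache,
		CanSubstitute: true,
		LayerIndex:    &index, // lets the destination record the reused layer as already pulled
	})
	if err != nil {
		return types.BlobInfo{}, err
	}
	if reused {
		return info, nil
	}
	return dest.PutBlobWithOptions(ctx, stream, layerInfo, private.PutBlobOptions{
		Cache:      cache,
		LayerIndex: &index, // lets the destination queue or commit the uploaded layer in order
	})
}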

View File

@@ -6,9 +6,9 @@ const (
// VersionMajor is for an API incompatible changes
VersionMajor = 5
// VersionMinor is for functionality in a backwards-compatible manner
VersionMinor = 19
VersionMinor = 20
// VersionPatch is for backwards-compatible bug fixes
VersionPatch = 1
VersionPatch = 0
// VersionDev indicates development branch. Releases will be empty string.
VersionDev = ""

View File

@@ -0,0 +1,3 @@
## The libtrust Project Community Code of Conduct
The libtrust project follows the [Containers Community Code of Conduct](https://github.com/containers/common/blob/master/CODE-OF-CONDUCT.md).

vendor/github.com/containers/libtrust/SECURITY.md generated vendored Normal file
View File

@@ -0,0 +1,3 @@
## Security and Disclosure Information Policy for the libtrust Project
The libtrust Project follows the [Security and Disclosure Information Policy](https://github.com/containers/common/blob/master/SECURITY.md) for the Containers Projects.

View File

@@ -5,9 +5,9 @@ go 1.12
require (
github.com/golang/protobuf v1.4.3
github.com/google/go-cmp v0.5.2 // indirect
github.com/miekg/pkcs11 v1.0.3
github.com/miekg/pkcs11 v1.1.1
github.com/opencontainers/go-digest v1.0.0
github.com/opencontainers/image-spec v1.0.1
github.com/opencontainers/image-spec v1.0.2
github.com/pkg/errors v0.9.1
github.com/sirupsen/logrus v1.7.0
github.com/stefanberger/go-pkcs11uri v0.0.0-20201008174630-78d3cae3a980

View File

@@ -30,12 +30,12 @@ github.com/google/go-cmp v0.5.0/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/
github.com/google/go-cmp v0.5.2 h1:X2ev0eStA3AbceY54o37/0PQ/UWqKEiiO2dKL5OPaFM=
github.com/google/go-cmp v0.5.2/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/uuid v1.1.2/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/miekg/pkcs11 v1.0.3 h1:iMwmD7I5225wv84WxIG/bmxz9AXjWvTWIbM/TYHvWtw=
github.com/miekg/pkcs11 v1.0.3/go.mod h1:XsNlhZGX73bx86s2hdc/FuaLm2CPZJemRLMA+WTFxgs=
github.com/miekg/pkcs11 v1.1.1 h1:Ugu9pdy6vAYku5DEpVWVFPYnzV+bxB+iRdbuFSu7TvU=
github.com/miekg/pkcs11 v1.1.1/go.mod h1:XsNlhZGX73bx86s2hdc/FuaLm2CPZJemRLMA+WTFxgs=
github.com/opencontainers/go-digest v1.0.0 h1:apOUWs51W5PlhuyGyz9FCeeBIOUDA/6nW8Oi/yOhh5U=
github.com/opencontainers/go-digest v1.0.0/go.mod h1:0JzlMkj0TRzQZfJkVvzbP0HBR3IKzErnv2BNG4W4MAM=
github.com/opencontainers/image-spec v1.0.1 h1:JMemWkRwHx4Zj+fVxWoMCFm/8sYGGrUVojFA6h/TRcI=
github.com/opencontainers/image-spec v1.0.1/go.mod h1:BtxoFyWECRxE4U/7sNtV5W15zMzWCbyJoFRP3s7yZA0=
github.com/opencontainers/image-spec v1.0.2 h1:9yCKha/T5XdGtO0q9Q9a6T5NUCsTn/DrBg0D7ufOcFM=
github.com/opencontainers/image-spec v1.0.2/go.mod h1:BtxoFyWECRxE4U/7sNtV5W15zMzWCbyJoFRP3s7yZA0=
github.com/pkg/errors v0.9.1 h1:FEBLx1zS214owpjy7qsBeixbURkuhQAwrK5UwLGTwt4=
github.com/pkg/errors v0.9.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=

View File

@@ -1 +1 @@
1.38.2
1.39.0

View File

@@ -84,8 +84,17 @@ type ContainerStore interface {
// SetNames updates the list of names associated with the container
// with the specified ID.
// Deprecated: Prone to race conditions, suggested alternatives are `AddNames` and `RemoveNames`.
SetNames(id string, names []string) error
// AddNames adds the supplied values to the list of names associated with the container with
// the specified id.
AddNames(id string, names []string) error
// RemoveNames removes the supplied values from the list of names associated with the container with
// the specified id.
RemoveNames(id string, names []string) error
// Get retrieves information about a container given an ID or name.
Get(id string) (*Container, error)
@@ -324,6 +333,12 @@ func (r *containerStore) Create(id string, names []string, image, layer, metadat
fmt.Sprintf("the container name \"%s\" is already in use by \"%s\". You have to remove that container to be able to reuse that name.", name, r.byname[name].ID))
}
}
if err := hasOverlappingRanges(options.UIDMap); err != nil {
return nil, err
}
if err := hasOverlappingRanges(options.GIDMap); err != nil {
return nil, err
}
if err == nil {
container = &Container{
ID: id,
@@ -371,22 +386,40 @@ func (r *containerStore) removeName(container *Container, name string) {
container.Names = stringSliceWithoutValue(container.Names, name)
}
// Deprecated: Prone to race conditions, suggested alternatives are `AddNames` and `RemoveNames`.
func (r *containerStore) SetNames(id string, names []string) error {
names = dedupeNames(names)
if container, ok := r.lookup(id); ok {
for _, name := range container.Names {
delete(r.byname, name)
}
for _, name := range names {
if otherContainer, ok := r.byname[name]; ok {
r.removeName(otherContainer, name)
}
r.byname[name] = container
}
container.Names = names
return r.Save()
return r.updateNames(id, names, setNames)
}
func (r *containerStore) AddNames(id string, names []string) error {
return r.updateNames(id, names, addNames)
}
func (r *containerStore) RemoveNames(id string, names []string) error {
return r.updateNames(id, names, removeNames)
}
func (r *containerStore) updateNames(id string, names []string, op updateNameOperation) error {
container, ok := r.lookup(id)
if !ok {
return ErrContainerUnknown
}
return ErrContainerUnknown
oldNames := container.Names
names, err := applyNameOperation(oldNames, names, op)
if err != nil {
return err
}
for _, name := range oldNames {
delete(r.byname, name)
}
for _, name := range names {
if otherContainer, ok := r.byname[name]; ok {
r.removeName(otherContainer, name)
}
r.byname[name] = container
}
container.Names = names
return r.Save()
}
func (r *containerStore) Delete(id string) error {
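
The deprecation note above is the point of this hunk: SetNames replaces the whole name list, so two concurrent callers can silently drop each other's names, while AddNames/RemoveNames express an increment that updateNames applies against the current state. A hypothetical caller-side sketch (the helper name is made up; it only assumes the ContainerStore interface shown above):

package example // hypothetical

import "github.com/containers/storage"

// renameContainer switches a container from oldName to newName using the new
// incremental API instead of the deprecated, race-prone SetNames.
func renameContainer(containers storage.ContainerStore, id, oldName, newName string) error {
	if err := containers.AddNames(id, []string{newName}); err != nil {
		return err
	}
	return containers.RemoveNames(id, []string{oldName})
}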

View File

@@ -50,11 +50,14 @@ func chownByMapsMain() {
if len(toHost.UIDs()) == 0 && len(toHost.GIDs()) == 0 {
toHost = nil
}
chowner := newLChowner()
chown := func(path string, info os.FileInfo, _ error) error {
if path == "." {
return nil
}
return platformLChown(path, info, toHost, toContainer)
return chowner.LChown(path, info, toHost, toContainer)
}
if err := pwalk.Walk(".", chown); err != nil {
fmt.Fprintf(os.Stderr, "error during chown: %v", err)

View File

@@ -1,3 +1,4 @@
//go:build !windows
// +build !windows
package graphdriver
@@ -6,17 +7,50 @@ import (
"errors"
"fmt"
"os"
"sync"
"syscall"
"github.com/containers/storage/pkg/idtools"
"github.com/containers/storage/pkg/system"
)
func platformLChown(path string, info os.FileInfo, toHost, toContainer *idtools.IDMappings) error {
type inode struct {
Dev uint64
Ino uint64
}
type platformChowner struct {
mutex sync.Mutex
inodes map[inode]bool
}
func newLChowner() *platformChowner {
return &platformChowner{
inodes: make(map[inode]bool),
}
}
func (c *platformChowner) LChown(path string, info os.FileInfo, toHost, toContainer *idtools.IDMappings) error {
st, ok := info.Sys().(*syscall.Stat_t)
if !ok {
return nil
}
i := inode{
Dev: uint64(st.Dev),
Ino: uint64(st.Ino),
}
c.mutex.Lock()
_, found := c.inodes[i]
if !found {
c.inodes[i] = true
}
c.mutex.Unlock()
if found {
return nil
}
// Map an on-disk UID/GID pair from host to container
// using the first map, then back to the host using the
// second map. Skip that first step if they're 0, to
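
The new platformChowner above exists to handle hard links: several paths can point at the same inode, and re-chowning each of them is redundant work (and, with pwalk running callbacks in parallel, racy). A standalone, hypothetical restatement of the same (device, inode) dedup idea, independent of containers/storage:

package example // hypothetical, Unix-only illustration of the dedup-by-inode idea

import (
	"os"
	"sync"
	"syscall"
)

type inodeKey struct{ dev, ino uint64 }

type onceChowner struct {
	mu   sync.Mutex
	seen map[inodeKey]bool
}

func newOnceChowner() *onceChowner {
	return &onceChowner{seen: make(map[inodeKey]bool)}
}

// lchownOnce changes ownership of path unless another path sharing the same
// device/inode pair (i.e. a hard link to the same file) was already handled.
func (c *onceChowner) lchownOnce(path string, info os.FileInfo, uid, gid int) error {
	st, ok := info.Sys().(*syscall.Stat_t)
	if !ok {
		return nil
	}
	key := inodeKey{dev: uint64(st.Dev), ino: uint64(st.Ino)}
	c.mu.Lock()
	done := c.seen[key]
	if !done {
		c.seen[key] = true
	}
	c.mu.Unlock()
	if done {
		return nil
	}
	return os.Lchown(path, uid, gid)
}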

View File

@@ -1,3 +1,4 @@
//go:build windows
// +build windows
package graphdriver
@@ -9,6 +10,13 @@ import (
"github.com/containers/storage/pkg/idtools"
)
func platformLChown(path string, info os.FileInfo, toHost, toContainer *idtools.IDMappings) error {
type platformChowner struct {
}
func newLChowner() *platformChowner {
return &platformChowner{}
}
func (c *platformChowner) LChown(path string, info os.FileInfo, toHost, toContainer *idtools.IDMappings) error {
return &os.PathError{"lchown", path, syscall.EWINDOWS}
}

View File

@@ -1,3 +1,4 @@
//go:build linux
// +build linux
package overlay
@@ -291,6 +292,31 @@ func Init(home string, options graphdriver.Options) (graphdriver.Driver, error)
backingFs = fsName
}
runhome := filepath.Join(options.RunRoot, filepath.Base(home))
rootUID, rootGID, err := idtools.GetRootUIDGID(options.UIDMaps, options.GIDMaps)
if err != nil {
return nil, err
}
// Create the driver home dir
if err := idtools.MkdirAllAs(path.Join(home, linkDir), 0700, rootUID, rootGID); err != nil {
return nil, err
}
if err := idtools.MkdirAllAs(runhome, 0700, rootUID, rootGID); err != nil {
return nil, err
}
if opts.mountProgram == "" {
if supported, err := SupportsNativeOverlay(home, runhome); err != nil {
return nil, err
} else if !supported {
if path, err := exec.LookPath("fuse-overlayfs"); err == nil {
opts.mountProgram = path
}
}
}
if opts.mountProgram != "" {
if unshare.IsRootless() && isNetworkFileSystem(fsMagic) && opts.forceMask == nil {
m := os.FileMode(0700)
@@ -315,20 +341,6 @@ func Init(home string, options graphdriver.Options) (graphdriver.Driver, error)
}
}
rootUID, rootGID, err := idtools.GetRootUIDGID(options.UIDMaps, options.GIDMaps)
if err != nil {
return nil, err
}
// Create the driver home dir
if err := idtools.MkdirAllAs(path.Join(home, linkDir), 0700, rootUID, rootGID); err != nil {
return nil, err
}
runhome := filepath.Join(options.RunRoot, filepath.Base(home))
if err := idtools.MkdirAllAs(runhome, 0700, rootUID, rootGID); err != nil {
return nil, err
}
var usingMetacopy bool
var supportsDType bool
var supportsVolatile *bool
@@ -568,14 +580,11 @@ func cachedFeatureRecord(runhome, feature string, supported bool, text string) (
return err
}
func SupportsNativeOverlay(graphroot, rundir string) (bool, error) {
if os.Geteuid() != 0 || graphroot == "" || rundir == "" {
func SupportsNativeOverlay(home, runhome string) (bool, error) {
if os.Geteuid() != 0 || home == "" || runhome == "" {
return false, nil
}
home := filepath.Join(graphroot, "overlay")
runhome := filepath.Join(rundir, "overlay")
var contents string
flagContent, err := ioutil.ReadFile(getMountProgramFlagFile(home))
if err == nil {
@@ -919,7 +928,9 @@ func (d *Driver) create(id, parent string, opts *graphdriver.CreateOpts, disable
defer func() {
// Clean up on failure
if retErr != nil {
os.RemoveAll(dir)
if err2 := os.RemoveAll(dir); err2 != nil {
logrus.Errorf("While recovering from a failure creating a layer, error deleting %#v: %v", dir, err2)
}
}
}()
@@ -1166,6 +1177,9 @@ func (d *Driver) Remove(id string) error {
// under each layer has a symlink created for it under the linkDir. If the symlink does not
// exist, it creates them
func (d *Driver) recreateSymlinks() error {
// We have at most 3 corrective actions per layer, so 10 iterations is plenty.
const maxIterations = 10
// List all the directories under the home directory
dirs, err := ioutil.ReadDir(d.home)
if err != nil {
@@ -1183,6 +1197,7 @@ func (d *Driver) recreateSymlinks() error {
// Keep looping as long as we take some corrective action in each iteration
var errs *multierror.Error
madeProgress := true
iterations := 0
for madeProgress {
errs = nil
madeProgress = false
@@ -1233,7 +1248,12 @@ func (d *Driver) recreateSymlinks() error {
if len(targetComponents) != 3 || targetComponents[0] != ".." || targetComponents[2] != "diff" {
errs = multierror.Append(errs, errors.Errorf("link target of %q looks weird: %q", link, target))
// force the link to be recreated on the next pass
os.Remove(filepath.Join(linksDir, link.Name()))
if err := os.Remove(filepath.Join(linksDir, link.Name())); err != nil {
if !os.IsNotExist(err) {
errs = multierror.Append(errs, errors.Wrapf(err, "removing link %q", link))
} // else don't report any error, but also don't set madeProgress.
continue
}
madeProgress = true
continue
}
@@ -1243,6 +1263,8 @@ func (d *Driver) recreateSymlinks() error {
linkFile := filepath.Join(d.dir(targetID), "link")
data, err := ioutil.ReadFile(linkFile)
if err != nil || string(data) != link.Name() {
// NOTE: If two or more links point to the same target, we will update linkFile
// with every value of link.Name(), and set madeProgress = true every time.
if err := ioutil.WriteFile(linkFile, []byte(link.Name()), 0644); err != nil {
errs = multierror.Append(errs, errors.Wrapf(err, "correcting link for layer %s", targetID))
continue
@@ -1250,6 +1272,11 @@ func (d *Driver) recreateSymlinks() error {
madeProgress = true
}
}
iterations++
if iterations >= maxIterations {
errs = multierror.Append(errs, fmt.Errorf("Reached %d iterations in overlay graph drivers recreateSymlink, giving up", iterations))
break
}
}
if errs != nil {
return errs.ErrorOrNil()
@@ -1443,6 +1470,21 @@ func (d *Driver) get(id string, disableShifting bool, options graphdriver.MountO
workdir := path.Join(dir, "work")
if d.options.mountProgram == "" && unshare.IsRootless() {
optsList = append(optsList, "userxattr")
}
if options.Volatile && !hasVolatileOption(optsList) {
supported, err := d.getSupportsVolatile()
if err != nil {
return "", err
}
// If "volatile" is not supported by the file system, just ignore the request
if supported {
optsList = append(optsList, "volatile")
}
}
var opts string
if readWrite {
opts = fmt.Sprintf("lowerdir=%s,upperdir=%s,workdir=%s", strings.Join(absLowers, ":"), diffDir, workdir)
@@ -1450,22 +1492,7 @@ func (d *Driver) get(id string, disableShifting bool, options graphdriver.MountO
opts = fmt.Sprintf("lowerdir=%s:%s", diffDir, strings.Join(absLowers, ":"))
}
if len(optsList) > 0 {
opts = fmt.Sprintf("%s,%s", strings.Join(optsList, ","), opts)
}
if d.options.mountProgram == "" && unshare.IsRootless() {
opts = fmt.Sprintf("%s,userxattr", opts)
}
// If "volatile" is not supported by the file system, just ignore the request
if options.Volatile && !hasVolatileOption(strings.Split(opts, ",")) {
supported, err := d.getSupportsVolatile()
if err != nil {
return "", err
}
if supported {
opts = fmt.Sprintf("%s,volatile", opts)
}
opts = fmt.Sprintf("%s,%s", opts, strings.Join(optsList, ","))
}
mountData := label.FormatMountLabel(opts, options.MountLabel)
@@ -1474,10 +1501,6 @@ func (d *Driver) get(id string, disableShifting bool, options graphdriver.MountO
pageSize := unix.Getpagesize()
// Use relative paths and mountFrom when the mount data has exceeded
// the page size. The mount syscall fails if the mount data cannot
// fit within a page and relative links make the mount data much
// smaller at the expense of requiring a fork exec to chroot.
if d.options.mountProgram != "" {
mountFunc = func(source string, target string, mType string, flags uintptr, label string) error {
if !disableShifting {
@@ -1503,18 +1526,26 @@ func (d *Driver) get(id string, disableShifting bool, options graphdriver.MountO
}
return nil
}
} else if len(mountData) > pageSize {
} else if len(mountData) >= pageSize {
// Use relative paths and mountFrom when the mount data has exceeded
// the page size. The mount syscall fails if the mount data cannot
// fit within a page and relative links make the mount data much
// smaller at the expense of requiring a fork exec to chroot.
workdir = path.Join(id, "work")
//FIXME: We need to figure out how to get this to work with additional stores
if readWrite {
diffDir := path.Join(id, "diff")
opts = fmt.Sprintf("lowerdir=%s,upperdir=%s,workdir=%s", strings.Join(relLowers, ":"), diffDir, workdir)
} else {
opts = fmt.Sprintf("lowerdir=%s", strings.Join(absLowers, ":"))
opts = fmt.Sprintf("lowerdir=%s", strings.Join(relLowers, ":"))
}
if len(optsList) > 0 {
opts = fmt.Sprintf("%s,%s", opts, strings.Join(optsList, ","))
}
mountData = label.FormatMountLabel(opts, options.MountLabel)
if len(mountData) > pageSize {
return "", fmt.Errorf("cannot mount layer, mount label %q too large %d > page size %d", options.MountLabel, len(mountData), pageSize)
if len(mountData) >= pageSize {
return "", fmt.Errorf("cannot mount layer, mount label %q too large %d >= page size %d", options.MountLabel, len(mountData), pageSize)
}
mountFunc = func(source string, target string, mType string, flags uintptr, label string) error {
return mountFrom(d.home, source, target, mType, flags, label)

View File

@@ -1,6 +1,8 @@
package storage
import (
"errors"
"github.com/containers/storage/types"
)
@@ -55,4 +57,9 @@ var (
ErrStoreIsReadOnly = types.ErrStoreIsReadOnly
// ErrNotSupported is returned when the requested functionality is not supported.
ErrNotSupported = types.ErrNotSupported
// ErrInvalidMappings is returned when the specified mappings are invalid.
ErrInvalidMappings = types.ErrInvalidMappings
// ErrInvalidNameOperation is returned when updateName is called with invalid operation.
// Internal error
errInvalidUpdateNameOperation = errors.New("invalid update name operation")
)

View File

@@ -4,26 +4,26 @@ module github.com/containers/storage
require (
github.com/BurntSushi/toml v1.0.0
github.com/Microsoft/go-winio v0.5.1
github.com/Microsoft/go-winio v0.5.2
github.com/Microsoft/hcsshim v0.9.2
github.com/containerd/stargz-snapshotter/estargz v0.11.0
github.com/containerd/stargz-snapshotter/estargz v0.11.3
github.com/cyphar/filepath-securejoin v0.2.3
github.com/docker/go-units v0.4.0
github.com/google/go-intervals v0.0.2
github.com/hashicorp/go-multierror v1.1.1
github.com/json-iterator/go v1.1.12
github.com/klauspost/compress v1.14.2
github.com/klauspost/compress v1.15.1
github.com/klauspost/pgzip v1.2.5
github.com/mattn/go-shellwords v1.0.12
github.com/mistifyio/go-zfs v2.1.2-0.20190413222219-f784269be439+incompatible
github.com/moby/sys/mountinfo v0.5.0
github.com/moby/sys/mountinfo v0.6.0
github.com/opencontainers/go-digest v1.0.0
github.com/opencontainers/runc v1.1.0
github.com/opencontainers/runtime-spec v1.0.3-0.20210326190908-1c3f411f0417
github.com/opencontainers/selinux v1.10.0
github.com/pkg/errors v0.9.1
github.com/sirupsen/logrus v1.8.1
github.com/stretchr/testify v1.7.0
github.com/stretchr/testify v1.7.1
github.com/syndtr/gocapability v0.0.0-20200815063812-42c35b437635
github.com/tchap/go-patricia v2.3.0+incompatible
github.com/ulikunitz/xz v0.5.10

View File

@@ -47,8 +47,8 @@ github.com/Microsoft/go-winio v0.4.16/go.mod h1:XB6nPKklQyQ7GC9LdcBEcBl8PF76WugX
github.com/Microsoft/go-winio v0.4.17-0.20210211115548-6eac466e5fa3/go.mod h1:JPGBdM1cNvN/6ISo+n8V5iA4v8pBzdOpzfwIujj1a84=
github.com/Microsoft/go-winio v0.4.17-0.20210324224401-5516f17a5958/go.mod h1:JPGBdM1cNvN/6ISo+n8V5iA4v8pBzdOpzfwIujj1a84=
github.com/Microsoft/go-winio v0.4.17/go.mod h1:JPGBdM1cNvN/6ISo+n8V5iA4v8pBzdOpzfwIujj1a84=
github.com/Microsoft/go-winio v0.5.1 h1:aPJp2QD7OOrhO5tQXqQoGSJc+DjDtWTGLOmNyAm6FgY=
github.com/Microsoft/go-winio v0.5.1/go.mod h1:JPGBdM1cNvN/6ISo+n8V5iA4v8pBzdOpzfwIujj1a84=
github.com/Microsoft/go-winio v0.5.2 h1:a9IhgEQBCUEk6QCdml9CiJGhAws+YwffDHEMp1VMrpA=
github.com/Microsoft/go-winio v0.5.2/go.mod h1:WpS1mjBmmwHBEWmogvA2mj8546UReBk4v8QkMxJ6pZY=
github.com/Microsoft/hcsshim v0.8.6/go.mod h1:Op3hHsoHPAvb6lceZHDtd9OkTew38wNoXnJs8iY7rUg=
github.com/Microsoft/hcsshim v0.8.7-0.20190325164909-8abdbb8205e4/go.mod h1:Op3hHsoHPAvb6lceZHDtd9OkTew38wNoXnJs8iY7rUg=
github.com/Microsoft/hcsshim v0.8.7/go.mod h1:OHd7sQqRFrYd3RmSgbgji+ctCwkbq2wbEYNSzOYtcBQ=
@@ -176,8 +176,8 @@ github.com/containerd/nri v0.0.0-20201007170849-eb1350a75164/go.mod h1:+2wGSDGFY
github.com/containerd/nri v0.0.0-20210316161719-dbaa18c31c14/go.mod h1:lmxnXF6oMkbqs39FiCt1s0R2HSMhcLel9vNL3m4AaeY=
github.com/containerd/nri v0.1.0/go.mod h1:lmxnXF6oMkbqs39FiCt1s0R2HSMhcLel9vNL3m4AaeY=
github.com/containerd/stargz-snapshotter/estargz v0.4.1/go.mod h1:x7Q9dg9QYb4+ELgxmo4gBUeJB0tl5dqH1Sdz0nJU1QM=
github.com/containerd/stargz-snapshotter/estargz v0.11.0 h1:t0IW5kOmY7AXDAWRUs2uVzDhijAUOAYVr/dyRhOQvBg=
github.com/containerd/stargz-snapshotter/estargz v0.11.0/go.mod h1:/KsZXsJRllMbTKFfG0miFQWViQKdI9+9aSXs+HN0+ac=
github.com/containerd/stargz-snapshotter/estargz v0.11.3 h1:k2kN16Px6LYuv++qFqK+JTcYqc8bEVxzGpf8/gFBL5M=
github.com/containerd/stargz-snapshotter/estargz v0.11.3/go.mod h1:7vRJIcImfY8bpifnMjt+HTJoQxASq7T28MYbP15/Nf0=
github.com/containerd/ttrpc v0.0.0-20190828154514-0e0f228740de/go.mod h1:PvCDdDGpgqzQIzDW1TphrGLssLDZp2GuS+X5DkEJB8o=
github.com/containerd/ttrpc v0.0.0-20190828172938-92c8520ef9f8/go.mod h1:PvCDdDGpgqzQIzDW1TphrGLssLDZp2GuS+X5DkEJB8o=
github.com/containerd/ttrpc v0.0.0-20191028202541-4f1b8fe65a5c/go.mod h1:LPm1u0xBw8r8NOKoOdNMeVHSawSsltak+Ihv+etqsE8=
@@ -424,8 +424,8 @@ github.com/kisielk/errcheck v1.5.0/go.mod h1:pFxgyoBC7bSaBwPgfKdkLd5X25qrDl4LWUI
github.com/kisielk/gotool v1.0.0/go.mod h1:XhKaO+MFFWcvkIS/tQcRk01m1F5IRFswLeQ+oQHNcck=
github.com/klauspost/compress v1.11.3/go.mod h1:aoV0uJVorq1K+umq18yTdKaF57EivdYsUV+/s2qKfXs=
github.com/klauspost/compress v1.11.13/go.mod h1:aoV0uJVorq1K+umq18yTdKaF57EivdYsUV+/s2qKfXs=
github.com/klauspost/compress v1.14.2 h1:S0OHlFk/Gbon/yauFJ4FfJJF5V0fc5HbBTJazi28pRw=
github.com/klauspost/compress v1.14.2/go.mod h1:/3/Vjq9QcHkK5uEr5lBEmyoZ1iFhe47etQ6QUkpK6sk=
github.com/klauspost/compress v1.15.1 h1:y9FcTHGyrebwfP0ZZqFiaxTaiDnUrGkJkI+f583BL1A=
github.com/klauspost/compress v1.15.1/go.mod h1:/3/Vjq9QcHkK5uEr5lBEmyoZ1iFhe47etQ6QUkpK6sk=
github.com/klauspost/pgzip v1.2.5 h1:qnWYvvKqedOF2ulHpMG72XQol4ILEJ8k2wwRl/Km8oE=
github.com/klauspost/pgzip v1.2.5/go.mod h1:Ch1tH69qFZu15pkjo5kYi6mth2Zzwzt50oCQKQE9RUs=
github.com/konsorten/go-windows-terminal-sequences v1.0.1/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ=
@@ -466,8 +466,9 @@ github.com/mitchellh/osext v0.0.0-20151018003038-5e2d6d41470f/go.mod h1:OkQIRizQ
github.com/moby/locker v1.0.1/go.mod h1:S7SDdo5zpBK84bzzVlKr2V0hz+7x9hWbYC/kq7oQppc=
github.com/moby/sys/mountinfo v0.4.0/go.mod h1:rEr8tzG/lsIZHBtN/JjGG+LMYx9eXgW2JI+6q0qou+A=
github.com/moby/sys/mountinfo v0.4.1/go.mod h1:rEr8tzG/lsIZHBtN/JjGG+LMYx9eXgW2JI+6q0qou+A=
github.com/moby/sys/mountinfo v0.5.0 h1:2Ks8/r6lopsxWi9m58nlwjaeSzUX9iiL1vj5qB/9ObI=
github.com/moby/sys/mountinfo v0.5.0/go.mod h1:3bMD3Rg+zkqx8MRYPi7Pyb0Ie97QEBmdxbhnCLlSvSU=
github.com/moby/sys/mountinfo v0.6.0 h1:gUDhXQx58YNrpHlK4nSL+7y2pxFZkUcXqzFDKWdC0Oo=
github.com/moby/sys/mountinfo v0.6.0/go.mod h1:3bMD3Rg+zkqx8MRYPi7Pyb0Ie97QEBmdxbhnCLlSvSU=
github.com/moby/sys/symlink v0.1.0/go.mod h1:GGDODQmbFOjFsXvfLVn3+ZRxkch54RkSiGqsZeMYowQ=
github.com/moby/term v0.0.0-20200312100748-672ec06f55cd/go.mod h1:DdlQx2hp0Ss5/fLikoLlEeIYiATotOjgB//nb973jeo=
github.com/modern-go/concurrent v0.0.0-20180228061459-e0a39a4cb421/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=
@@ -621,8 +622,8 @@ github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UV
github.com/stretchr/testify v1.4.0/go.mod h1:j7eGeouHqKxXV5pUuKE4zz7dFj8WfuZ+81PSLYec5m4=
github.com/stretchr/testify v1.5.1/go.mod h1:5W2xD1RspED5o8YsWQXVCued0rvSQ+mT+I5cxcmMvtA=
github.com/stretchr/testify v1.6.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/stretchr/testify v1.7.0 h1:nwc3DEeHmmLAfoZucVR881uASk0Mfjw8xYJ99tb5CcY=
github.com/stretchr/testify v1.7.0/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/stretchr/testify v1.7.1 h1:5TQK59W5E3v0r2duFAb7P95B6hEeOyEnHRa8MjYSMTY=
github.com/stretchr/testify v1.7.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/syndtr/gocapability v0.0.0-20170704070218-db04d3cc01c8/go.mod h1:hkRG7XYTFWNJGYcbNJQlaLq0fg1yr4J4t/NcTQtrfww=
github.com/syndtr/gocapability v0.0.0-20180916011248-d98352740cb2/go.mod h1:hkRG7XYTFWNJGYcbNJQlaLq0fg1yr4J4t/NcTQtrfww=
github.com/syndtr/gocapability v0.0.0-20200815063812-42c35b437635 h1:kdXcSzyDtseVEc4yCz2qF8ZrQvIDBJLl4S1c3GCXmoI=

View File

@@ -1,6 +1,9 @@
package storage
import (
"fmt"
"strings"
"github.com/containers/storage/pkg/idtools"
"github.com/google/go-intervals/intervalset"
"github.com/pkg/errors"
@@ -218,3 +221,45 @@ func maxInt(a, b int) int {
}
return a
}
func hasOverlappingRanges(mappings []idtools.IDMap) error {
hostIntervals := intervalset.Empty()
containerIntervals := intervalset.Empty()
var conflicts []string
for _, m := range mappings {
c := interval{start: m.ContainerID, end: m.ContainerID + m.Size}
h := interval{start: m.HostID, end: m.HostID + m.Size}
added := false
overlaps := false
containerIntervals.IntervalsBetween(c, func(x intervalset.Interval) bool {
overlaps = true
return false
})
if overlaps {
conflicts = append(conflicts, fmt.Sprintf("%v:%v:%v", m.ContainerID, m.HostID, m.Size))
added = true
}
containerIntervals.Add(intervalset.NewSet([]intervalset.Interval{c}))
hostIntervals.IntervalsBetween(h, func(x intervalset.Interval) bool {
overlaps = true
return false
})
if overlaps && !added {
conflicts = append(conflicts, fmt.Sprintf("%v:%v:%v", m.ContainerID, m.HostID, m.Size))
}
hostIntervals.Add(intervalset.NewSet([]intervalset.Interval{h}))
}
if conflicts != nil {
if len(conflicts) == 1 {
return errors.Wrapf(ErrInvalidMappings, "the specified UID and/or GID mapping %s conflicts with other mappings", conflicts[0])
}
return errors.Wrapf(ErrInvalidMappings, "the specified UID and/or GID mappings %s conflict with other mappings", strings.Join(conflicts, ", "))
}
return nil
}
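
hasOverlappingRanges rejects ID mappings whose container-side or host-side ranges intersect, since such a mapping is ambiguous. A hypothetical example of input that the new check (called from the Create path earlier in this diff) turns into ErrInvalidMappings, with field names from pkg/idtools.IDMap:

package example // hypothetical

import "github.com/containers/storage/pkg/idtools"

// badUIDMap maps container IDs 0-9999 and 5000-5999; the second entry overlaps the
// first on the container side, so creation now fails with ErrInvalidMappings
// instead of silently accepting an ambiguous mapping.
var badUIDMap = []idtools.IDMap{
	{ContainerID: 0, HostID: 100000, Size: 10000},
	{ContainerID: 5000, HostID: 200000, Size: 1000},
}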

View File

@@ -136,8 +136,19 @@ type ImageStore interface {
// SetNames replaces the list of names associated with an image with the
// supplied values. The values are expected to be valid normalized
// named image references.
// Deprecated: Prone to race conditions, suggested alternatives are `AddNames` and `RemoveNames`.
SetNames(id string, names []string) error
// AddNames adds the supplied values to the list of names associated with the image with
// the specified id. The values are expected to be valid normalized
// named image references.
AddNames(id string, names []string) error
// RemoveNames removes the supplied values from the list of names associated with the image with
// the specified id. The values are expected to be valid normalized
// named image references.
RemoveNames(id string, names []string) error
// Delete removes the record of the image.
Delete(id string) error
@@ -425,37 +436,36 @@ func (r *imageStore) Create(id string, names []string, layer, metadata string, c
if created.IsZero() {
created = time.Now().UTC()
}
if err == nil {
image = &Image{
ID: id,
Digest: searchableDigest,
Digests: nil,
Names: names,
TopLayer: layer,
Metadata: metadata,
BigDataNames: []string{},
BigDataSizes: make(map[string]int64),
BigDataDigests: make(map[string]digest.Digest),
Created: created,
Flags: make(map[string]interface{}),
}
err := image.recomputeDigests()
if err != nil {
return nil, errors.Wrapf(err, "error validating digests for new image")
}
r.images = append(r.images, image)
r.idindex.Add(id)
r.byid[id] = image
for _, name := range names {
r.byname[name] = image
}
for _, digest := range image.Digests {
list := r.bydigest[digest]
r.bydigest[digest] = append(list, image)
}
err = r.Save()
image = copyImage(image)
image = &Image{
ID: id,
Digest: searchableDigest,
Digests: nil,
Names: names,
TopLayer: layer,
Metadata: metadata,
BigDataNames: []string{},
BigDataSizes: make(map[string]int64),
BigDataDigests: make(map[string]digest.Digest),
Created: created,
Flags: make(map[string]interface{}),
}
err = image.recomputeDigests()
if err != nil {
return nil, errors.Wrapf(err, "error validating digests for new image")
}
r.images = append(r.images, image)
r.idindex.Add(id)
r.byid[id] = image
for _, name := range names {
r.byname[name] = image
}
for _, digest := range image.Digests {
list := r.bydigest[digest]
r.bydigest[digest] = append(list, image)
}
err = r.Save()
image = copyImage(image)
return image, err
}
@@ -506,26 +516,44 @@ func (i *Image) addNameToHistory(name string) {
i.NamesHistory = dedupeNames(append([]string{name}, i.NamesHistory...))
}
// Deprecated: Prone to race conditions, suggested alternatives are `AddNames` and `RemoveNames`.
func (r *imageStore) SetNames(id string, names []string) error {
return r.updateNames(id, names, setNames)
}
func (r *imageStore) AddNames(id string, names []string) error {
return r.updateNames(id, names, addNames)
}
func (r *imageStore) RemoveNames(id string, names []string) error {
return r.updateNames(id, names, removeNames)
}
func (r *imageStore) updateNames(id string, names []string, op updateNameOperation) error {
if !r.IsReadWrite() {
return errors.Wrapf(ErrStoreIsReadOnly, "not allowed to change image name assignments at %q", r.imagespath())
}
names = dedupeNames(names)
if image, ok := r.lookup(id); ok {
for _, name := range image.Names {
delete(r.byname, name)
}
for _, name := range names {
if otherImage, ok := r.byname[name]; ok {
r.removeName(otherImage, name)
}
r.byname[name] = image
image.addNameToHistory(name)
}
image.Names = names
return r.Save()
image, ok := r.lookup(id)
if !ok {
return errors.Wrapf(ErrImageUnknown, "error locating image with ID %q", id)
}
return errors.Wrapf(ErrImageUnknown, "error locating image with ID %q", id)
oldNames := image.Names
names, err := applyNameOperation(oldNames, names, op)
if err != nil {
return err
}
for _, name := range oldNames {
delete(r.byname, name)
}
for _, name := range names {
if otherImage, ok := r.byname[name]; ok {
r.removeName(otherImage, name)
}
r.byname[name] = image
image.addNameToHistory(name)
}
image.Names = names
return r.Save()
}
func (r *imageStore) Delete(id string) error {

View File

@@ -221,8 +221,17 @@ type LayerStore interface {
// SetNames replaces the list of names associated with a layer with the
// supplied values.
// Deprecated: Prone to race conditions, suggested alternatives are `AddNames` and `RemoveNames`.
SetNames(id string, names []string) error
// AddNames adds the supplied values to the list of names associated with the layer with the
// specified id.
AddNames(id string, names []string) error
// RemoveNames removes the supplied values from the list of names associated with the layer with the
// specified id.
RemoveNames(id string, names []string) error
// Delete deletes a layer with the specified name or ID.
Delete(id string) error
@@ -399,14 +408,13 @@ func (r *layerStore) Load() error {
if layer.Flags == nil {
layer.Flags = make(map[string]interface{})
}
if cleanup, ok := layer.Flags[incompleteFlag]; ok {
if b, ok := cleanup.(bool); ok && b {
err = r.deleteInternal(layer.ID)
if err != nil {
break
}
shouldSave = true
if layerHasIncompleteFlag(layer) {
logrus.Warnf("Found incomplete layer %#v, deleting it", layer.ID)
err = r.deleteInternal(layer.ID)
if err != nil {
break
}
shouldSave = true
}
}
}
@@ -742,26 +750,17 @@ func (r *layerStore) Put(id string, parentLayer *Layer, names []string, mountLab
}
if moreOptions.TemplateLayer != "" {
if err = r.driver.CreateFromTemplate(id, moreOptions.TemplateLayer, templateIDMappings, parent, parentMappings, &opts, writeable); err != nil {
if id != "" {
return nil, -1, errors.Wrapf(err, "error creating copy of template layer %q with ID %q", moreOptions.TemplateLayer, id)
}
return nil, -1, errors.Wrapf(err, "error creating copy of template layer %q", moreOptions.TemplateLayer)
return nil, -1, errors.Wrapf(err, "error creating copy of template layer %q with ID %q", moreOptions.TemplateLayer, id)
}
oldMappings = templateIDMappings
} else {
if writeable {
if err = r.driver.CreateReadWrite(id, parent, &opts); err != nil {
if id != "" {
return nil, -1, errors.Wrapf(err, "error creating read-write layer with ID %q", id)
}
return nil, -1, errors.Wrapf(err, "error creating read-write layer")
return nil, -1, errors.Wrapf(err, "error creating read-write layer with ID %q", id)
}
} else {
if err = r.driver.Create(id, parent, &opts); err != nil {
if id != "" {
return nil, -1, errors.Wrapf(err, "error creating layer with ID %q", id)
}
return nil, -1, errors.Wrapf(err, "error creating layer")
return nil, -1, errors.Wrapf(err, "error creating layer with ID %q", id)
}
}
oldMappings = parentMappings
@@ -770,7 +769,9 @@ func (r *layerStore) Put(id string, parentLayer *Layer, names []string, mountLab
if err = r.driver.UpdateLayerIDMap(id, oldMappings, idMappings, mountLabel); err != nil {
// We don't have a record of this layer, but at least
// try to clean it up underneath us.
r.driver.Remove(id)
if err2 := r.driver.Remove(id); err2 != nil {
logrus.Errorf("While recovering from a failure creating in UpdateLayerIDMap, error deleting layer %#v: %v", id, err2)
}
return nil, -1, err
}
}
@@ -795,21 +796,26 @@ func (r *layerStore) Put(id string, parentLayer *Layer, names []string, mountLab
for flag, value := range flags {
layer.Flags[flag] = value
}
savedIncompleteLayer := false
if diff != nil {
layer.Flags[incompleteFlag] = true
err = r.Save()
if err != nil {
// We don't have a record of this layer, but at least
// try to clean it up underneath us.
r.driver.Remove(id)
if err2 := r.driver.Remove(id); err2 != nil {
logrus.Errorf("While recovering from a failure saving incomplete layer metadata, error deleting layer %#v: %v", id, err2)
}
return nil, -1, err
}
savedIncompleteLayer = true
size, err = r.applyDiffWithOptions(layer.ID, moreOptions, diff)
if err != nil {
if r.Delete(layer.ID) != nil {
if err2 := r.Delete(layer.ID); err2 != nil {
// Either a driver error or an error saving.
// We now have a layer that's been marked for
// deletion but which we failed to remove.
logrus.Errorf("While recovering from a failure applying layer diff, error deleting layer %#v: %v", layer.ID, err2)
}
return nil, -1, err
}
@@ -817,9 +823,20 @@ func (r *layerStore) Put(id string, parentLayer *Layer, names []string, mountLab
}
err = r.Save()
if err != nil {
// We don't have a record of this layer, but at least
// try to clean it up underneath us.
r.driver.Remove(id)
if savedIncompleteLayer {
if err2 := r.Delete(layer.ID); err2 != nil {
// Either a driver error or an error saving.
// We now have a layer that's been marked for
// deletion but which we failed to remove.
logrus.Errorf("While recovering from a failure saving finished layer metadata, error deleting layer %#v: %v", layer.ID, err2)
}
} else {
// We don't have a record of this layer, but at least
// try to clean it up underneath us.
if err2 := r.driver.Remove(id); err2 != nil {
logrus.Errorf("While recovering from a failure saving finished layer metadata, error deleting layer %#v in graph driver: %v", id, err2)
}
}
return nil, -1, err
}
layer = copyLayer(layer)
@@ -1032,25 +1049,43 @@ func (r *layerStore) removeName(layer *Layer, name string) {
layer.Names = stringSliceWithoutValue(layer.Names, name)
}
// Deprecated: Prone to race conditions, suggested alternatives are `AddNames` and `RemoveNames`.
func (r *layerStore) SetNames(id string, names []string) error {
return r.updateNames(id, names, setNames)
}
func (r *layerStore) AddNames(id string, names []string) error {
return r.updateNames(id, names, addNames)
}
func (r *layerStore) RemoveNames(id string, names []string) error {
return r.updateNames(id, names, removeNames)
}
func (r *layerStore) updateNames(id string, names []string, op updateNameOperation) error {
if !r.IsReadWrite() {
return errors.Wrapf(ErrStoreIsReadOnly, "not allowed to change layer name assignments at %q", r.layerspath())
}
names = dedupeNames(names)
if layer, ok := r.lookup(id); ok {
for _, name := range layer.Names {
delete(r.byname, name)
}
for _, name := range names {
if otherLayer, ok := r.byname[name]; ok {
r.removeName(otherLayer, name)
}
r.byname[name] = layer
}
layer.Names = names
return r.Save()
layer, ok := r.lookup(id)
if !ok {
return ErrLayerUnknown
}
return ErrLayerUnknown
oldNames := layer.Names
names, err := applyNameOperation(oldNames, names, op)
if err != nil {
return err
}
for _, name := range oldNames {
delete(r.byname, name)
}
for _, name := range names {
if otherLayer, ok := r.byname[name]; ok {
r.removeName(otherLayer, name)
}
r.byname[name] = layer
}
layer.Names = names
return r.Save()
}
func (r *layerStore) datadir(id string) string {
@@ -1149,6 +1184,17 @@ func (r *layerStore) tspath(id string) string {
return filepath.Join(r.layerdir, id+tarSplitSuffix)
}
// layerHasIncompleteFlag returns true if layer.Flags contains an incompleteFlag set to true
func layerHasIncompleteFlag(layer *Layer) bool {
// layer.Flags[…] is defined to succeed and return ok == false if Flags == nil
if flagValue, ok := layer.Flags[incompleteFlag]; ok {
if b, ok := flagValue.(bool); ok && b {
return true
}
}
return false
}
func (r *layerStore) deleteInternal(id string) error {
if !r.IsReadWrite() {
return errors.Wrapf(ErrStoreIsReadOnly, "not allowed to delete layers at %q", r.layerspath())
@@ -1157,6 +1203,18 @@ func (r *layerStore) deleteInternal(id string) error {
if !ok {
return ErrLayerUnknown
}
// Ensure that if we are interrupted, the layer will be cleaned up.
if !layerHasIncompleteFlag(layer) {
if layer.Flags == nil {
layer.Flags = make(map[string]interface{})
}
layer.Flags[incompleteFlag] = true
if err := r.Save(); err != nil {
return err
}
}
// We never unset incompleteFlag; below, we remove the entire object from r.layers.
id = layer.ID
err := r.driver.Remove(id)
if err != nil {
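The comments above describe a mark-before-delete protocol: deleteInternal now sets incompleteFlag and persists the metadata before touching the graph driver, and the load path (first hunk in this diff) deletes any layer still carrying that flag. The following is a minimal, self-contained sketch of that pattern; Record, Store and save are toy stand-ins for illustration, not containers/storage API:

package main

import "fmt"

// Record and Store are hypothetical stand-ins for illustration only.
type Record struct {
	ID    string
	Flags map[string]interface{}
}

type Store struct {
	records map[string]*Record
}

// save is a placeholder for persisting metadata to stable storage.
func (s *Store) save() error { return nil }

func (s *Store) deleteRecord(id string) error {
	rec, ok := s.records[id]
	if !ok {
		return fmt.Errorf("unknown record %q", id)
	}
	// Phase 1: mark the record incomplete and persist the marker first.
	if rec.Flags == nil {
		rec.Flags = make(map[string]interface{})
	}
	rec.Flags["incomplete"] = true
	if err := s.save(); err != nil {
		return err
	}
	// Phase 2: destructive work. If we are interrupted here, the next load
	// still sees the incomplete flag and can retry the deletion.
	delete(s.records, id)
	return s.save()
}

func main() {
	s := &Store{records: map[string]*Record{"a": {ID: "a"}}}
	fmt.Println(s.deleteRecord("a")) // <nil>
}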


@@ -108,35 +108,32 @@ func (c *layersCache) load() error {
}
bigData, err := c.store.LayerBigData(r.ID, cacheKey)
if err != nil {
if errors.Cause(err) == os.ErrNotExist {
// if the cache already exists, read and use it
if err == nil {
defer bigData.Close()
metadata, err := readMetadataFromCache(bigData)
if err == nil {
c.addLayer(r.ID, metadata)
continue
}
logrus.Warningf("Error reading cache file for layer %q: %v", r.ID, err)
} else if errors.Cause(err) != os.ErrNotExist {
return err
}
defer bigData.Close()
metadata, err := readMetadataFromCache(bigData)
if err != nil {
logrus.Warningf("Error reading cache file for layer %q: %v", r.ID, err)
}
if metadata != nil {
c.addLayer(r.ID, metadata)
continue
}
// otherwise create it from the layer TOC.
manifestReader, err := c.store.LayerBigData(r.ID, bigDataKey)
if err != nil {
continue
}
defer manifestReader.Close()
manifest, err := ioutil.ReadAll(manifestReader)
if err != nil {
return fmt.Errorf("open manifest file for layer %q: %w", r.ID, err)
}
metadata, err = writeCache(manifest, r.ID, c.store)
metadata, err := writeCache(manifest, r.ID, c.store)
if err == nil {
c.addLayer(r.ID, metadata)
}


@@ -1248,7 +1248,7 @@ func (d whiteoutHandler) Mknod(path string, mode uint32, dev int) error {
func checkChownErr(err error, name string, uid, gid int) error {
if errors.Is(err, syscall.EINVAL) {
return fmt.Errorf("potentially insufficient UIDs or GIDs available in user namespace (requested %d:%d for %s): Check /etc/subuid and /etc/subgid if configured locally: %w", uid, gid, name, err)
return fmt.Errorf("potentially insufficient UIDs or GIDs available in user namespace (requested %d:%d for %s): Check /etc/subuid and /etc/subgid if configured locally and run podman-system-migrate: %w", uid, gid, name, err)
}
return err
}


@@ -12,109 +12,109 @@ type ThinpoolOptionsConfig struct {
// grown. This is specified in terms of % of pool size. So a value of
// 20 means that when threshold is hit, pool will be grown by 20% of
// existing pool size.
AutoExtendPercent string `toml:"autoextend_percent"`
AutoExtendPercent string `toml:"autoextend_percent,omitempty"`
// AutoExtendThreshold determines the pool extension threshold in terms
// of percentage of pool size. For example, if threshold is 60, that
// means when pool is 60% full, threshold has been hit.
AutoExtendThreshold string `toml:"autoextend_threshold"`
AutoExtendThreshold string `toml:"autoextend_threshold,omitempty"`
// BaseSize specifies the size to use when creating the base device,
// which limits the size of images and containers.
BaseSize string `toml:"basesize"`
BaseSize string `toml:"basesize,omitempty"`
// BlockSize specifies a custom blocksize to use for the thin pool.
BlockSize string `toml:"blocksize"`
BlockSize string `toml:"blocksize,omitempty"`
// DirectLvmDevice specifies a custom block storage device to use for
// the thin pool.
DirectLvmDevice string `toml:"directlvm_device"`
DirectLvmDevice string `toml:"directlvm_device,omitempty"`
// DirectLvmDeviceForce wipes the device even if it already has a
// filesystem
DirectLvmDeviceForce string `toml:"directlvm_device_force"`
DirectLvmDeviceForce string `toml:"directlvm_device_force,omitempty"`
// Fs specifies the filesystem type to use for the base device.
Fs string `toml:"fs"`
Fs string `toml:"fs,omitempty"`
// log_level sets the log level of devicemapper.
LogLevel string `toml:"log_level"`
LogLevel string `toml:"log_level,omitempty"`
// MetadataSize specifies the size of the metadata for the thinpool
// It will be used with the `pvcreate --metadata` option.
MetadataSize string `toml:"metadatasize"`
MetadataSize string `toml:"metadatasize,omitempty"`
// MinFreeSpace specifies the min free space percent in a thin pool
// required for new device creation to succeed.
MinFreeSpace string `toml:"min_free_space"`
MinFreeSpace string `toml:"min_free_space,omitempty"`
// MkfsArg specifies extra mkfs arguments to be used when creating the
// basedevice.
MkfsArg string `toml:"mkfsarg"`
MkfsArg string `toml:"mkfsarg,omitempty"`
// MountOpt specifies extra mount options used when mounting the thin
// devices.
MountOpt string `toml:"mountopt"`
MountOpt string `toml:"mountopt,omitempty"`
// Size
Size string `toml:"size"`
Size string `toml:"size,omitempty"`
// UseDeferredDeletion marks device for deferred deletion
UseDeferredDeletion string `toml:"use_deferred_deletion"`
UseDeferredDeletion string `toml:"use_deferred_deletion,omitempty"`
// UseDeferredRemoval marks device for deferred removal
UseDeferredRemoval string `toml:"use_deferred_removal"`
UseDeferredRemoval string `toml:"use_deferred_removal,omitempty"`
// XfsNoSpaceMaxRetries specifies the maximum number of
// retries XFS should attempt to complete IO when ENOSPC (no space)
// error is returned by underlying storage device.
XfsNoSpaceMaxRetries string `toml:"xfs_nospace_max_retries"`
XfsNoSpaceMaxRetries string `toml:"xfs_nospace_max_retries,omitempty"`
}
type AufsOptionsConfig struct {
// MountOpt specifies extra mount options used when mounting
MountOpt string `toml:"mountopt"`
MountOpt string `toml:"mountopt,omitempty"`
}
type BtrfsOptionsConfig struct {
// MinSpace is the minimal space allocated to the device
MinSpace string `toml:"min_space"`
MinSpace string `toml:"min_space,omitempty"`
// Size
Size string `toml:"size"`
Size string `toml:"size,omitempty"`
}
type OverlayOptionsConfig struct {
// IgnoreChownErrors is a flag for whether chown errors should be
// ignored when building an image.
IgnoreChownErrors string `toml:"ignore_chown_errors"`
IgnoreChownErrors string `toml:"ignore_chown_errors,omitempty"`
// MountOpt specifies extra mount options used when mounting
MountOpt string `toml:"mountopt"`
MountOpt string `toml:"mountopt,omitempty"`
// Alternative program to use for the mount of the file system
MountProgram string `toml:"mount_program"`
MountProgram string `toml:"mount_program,omitempty"`
// Size
Size string `toml:"size"`
Size string `toml:"size,omitempty"`
// Inodes is used to set a maximum inodes of the container image.
Inodes string `toml:"inodes"`
Inodes string `toml:"inodes,omitempty"`
// Do not create a bind mount on the storage home
SkipMountHome string `toml:"skip_mount_home"`
SkipMountHome string `toml:"skip_mount_home,omitempty"`
// ForceMask indicates the permissions mask (e.g. "0755") to use for new
// files and directories
ForceMask string `toml:"force_mask"`
ForceMask string `toml:"force_mask,omitempty"`
}
type VfsOptionsConfig struct {
// IgnoreChownErrors is a flag for whether chown errors should be
// ignored when building an image.
IgnoreChownErrors string `toml:"ignore_chown_errors"`
IgnoreChownErrors string `toml:"ignore_chown_errors,omitempty"`
}
type ZfsOptionsConfig struct {
// MountOpt specifies extra mount options used when mounting
MountOpt string `toml:"mountopt"`
MountOpt string `toml:"mountopt,omitempty"`
// Name is the File System name of the ZFS File system
Name string `toml:"fsname"`
Name string `toml:"fsname,omitempty"`
// Size
Size string `toml:"size"`
Size string `toml:"size,omitempty"`
}
// OptionsConfig represents the "storage.options" TOML config table.
@@ -122,82 +122,82 @@ type OptionsConfig struct {
// AdditionalImageStores is the location of additional read/only
// Image stores. Usually used to access Networked File System
// for shared image content
AdditionalImageStores []string `toml:"additionalimagestores"`
AdditionalImageStores []string `toml:"additionalimagestores,omitempty"`
// AdditionalLayerStores is the location of additional read/only
// Layer stores. Usually used to access Networked File System
// for shared image content
// This API is experimental and can be changed without bumping the
// major version number.
AdditionalLayerStores []string `toml:"additionallayerstores"`
AdditionalLayerStores []string `toml:"additionallayerstores,omitempty"`
// Size
Size string `toml:"size"`
Size string `toml:"size,omitempty"`
// RemapUIDs is a list of default UID mappings to use for layers.
RemapUIDs string `toml:"remap-uids"`
RemapUIDs string `toml:"remap-uids,omitempty"`
// RemapGIDs is a list of default GID mappings to use for layers.
RemapGIDs string `toml:"remap-gids"`
RemapGIDs string `toml:"remap-gids,omitempty"`
// IgnoreChownErrors is a flag for whether chown errors should be
// ignored when building an image.
IgnoreChownErrors string `toml:"ignore_chown_errors"`
IgnoreChownErrors string `toml:"ignore_chown_errors,omitempty"`
// ForceMask indicates the permissions mask (e.g. "0755") to use for new
// files and directories.
ForceMask os.FileMode `toml:"force_mask"`
ForceMask os.FileMode `toml:"force_mask,omitempty"`
// RemapUser is the name of one or more entries in /etc/subuid which
// should be used to set up default UID mappings.
RemapUser string `toml:"remap-user"`
RemapUser string `toml:"remap-user,omitempty"`
// RemapGroup is the name of one or more entries in /etc/subgid which
// should be used to set up default GID mappings.
RemapGroup string `toml:"remap-group"`
RemapGroup string `toml:"remap-group,omitempty"`
// RootAutoUsernsUser is the name of one or more entries in /etc/subuid and
// /etc/subgid which should be used to automatically set up a userns.
RootAutoUsernsUser string `toml:"root-auto-userns-user"`
RootAutoUsernsUser string `toml:"root-auto-userns-user,omitempty"`
// AutoUsernsMinSize is the minimum size for a user namespace that is
// created automatically.
AutoUsernsMinSize uint32 `toml:"auto-userns-min-size"`
AutoUsernsMinSize uint32 `toml:"auto-userns-min-size,omitempty"`
// AutoUsernsMaxSize is the maximum size for a user namespace that is
// created automatically.
AutoUsernsMaxSize uint32 `toml:"auto-userns-max-size"`
AutoUsernsMaxSize uint32 `toml:"auto-userns-max-size,omitempty"`
// Aufs container options to be handed to aufs drivers
Aufs struct{ AufsOptionsConfig } `toml:"aufs"`
Aufs struct{ AufsOptionsConfig } `toml:"aufs,omitempty"`
// Btrfs container options to be handed to btrfs drivers
Btrfs struct{ BtrfsOptionsConfig } `toml:"btrfs"`
Btrfs struct{ BtrfsOptionsConfig } `toml:"btrfs,omitempty"`
// Thinpool container options to be handed to thinpool drivers
Thinpool struct{ ThinpoolOptionsConfig } `toml:"thinpool"`
Thinpool struct{ ThinpoolOptionsConfig } `toml:"thinpool,omitempty"`
// Overlay container options to be handed to overlay drivers
Overlay struct{ OverlayOptionsConfig } `toml:"overlay"`
Overlay struct{ OverlayOptionsConfig } `toml:"overlay,omitempty"`
// Vfs container options to be handed to VFS drivers
Vfs struct{ VfsOptionsConfig } `toml:"vfs"`
Vfs struct{ VfsOptionsConfig } `toml:"vfs,omitempty"`
// Zfs container options to be handed to ZFS drivers
Zfs struct{ ZfsOptionsConfig } `toml:"zfs"`
Zfs struct{ ZfsOptionsConfig } `toml:"zfs,omitempty"`
// Do not create a bind mount on the storage home
SkipMountHome string `toml:"skip_mount_home"`
SkipMountHome string `toml:"skip_mount_home,omitempty"`
// Alternative program to use for the mount of the file system
MountProgram string `toml:"mount_program"`
MountProgram string `toml:"mount_program,omitempty"`
// MountOpt specifies extra mount options used when mounting
MountOpt string `toml:"mountopt"`
MountOpt string `toml:"mountopt,omitempty"`
// PullOptions specifies options to be handed to pull managers
// This API is experimental and can be changed without bumping the major version number.
PullOptions map[string]string `toml:"pull_options"`
PullOptions map[string]string `toml:"pull_options,omitempty"`
// DisableVolatile doesn't allow volatile mounts when it is set.
DisableVolatile bool `toml:"disable-volatile"`
DisableVolatile bool `toml:"disable-volatile,omitempty"`
}
// GetGraphDriverOptions returns the driver specific options
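The ",omitempty" additions above (and in the TomlConfig change further down) keep zero-valued options out of the file when the configuration is written back. A small sketch of the effect, assuming github.com/BurntSushi/toml (the encoder the types package imports) honors omitempty on encode; the struct below is a toy, not the real OptionsConfig:

package main

import (
	"os"

	"github.com/BurntSushi/toml"
)

// toyOptions mirrors the tagging pattern above; it is not the real config type.
type toyOptions struct {
	MountProgram string `toml:"mount_program,omitempty"`
	MountOpt     string `toml:"mountopt,omitempty"`
	Size         string `toml:"size,omitempty"`
}

func main() {
	opts := toyOptions{MountProgram: "/usr/bin/fuse-overlayfs"}
	// Only mount_program is emitted; the empty fields are skipped because of
	// the omitempty tag option.
	if err := toml.NewEncoder(os.Stdout).Encode(opts); err != nil {
		panic(err)
	}
}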


@@ -297,7 +297,7 @@ func parseSubidFile(path, username string) (ranges, error) {
func checkChownErr(err error, name string, uid, gid int) error {
if e, ok := err.(*os.PathError); ok && e.Err == syscall.EINVAL {
return errors.Wrapf(err, "potentially insufficient UIDs or GIDs available in user namespace (requested %d:%d for %s): Check /etc/subuid and /etc/subgid if configured locally", uid, gid, name)
return errors.Wrapf(err, "potentially insufficient UIDs or GIDs available in user namespace (requested %d:%d for %s): Check /etc/subuid and /etc/subgid if configured locally and run podman-system-migrate", uid, gid, name)
}
return err
}
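Both checkChownErr variants above detect EINVAL, but the first uses errors.Is, which follows wrapped errors, while the second type-asserts *os.PathError directly. A stdlib-only sketch of the difference (path and message are made up):

package main

import (
	"errors"
	"fmt"
	"os"
	"syscall"
)

func main() {
	// A chown-style failure as the OS would report it.
	base := &os.PathError{Op: "chown", Path: "/some/file", Err: syscall.EINVAL}
	// The same failure after another layer wrapped it with %w.
	wrapped := fmt.Errorf("copying layer: %w", base)

	// errors.Is unwraps, so both forms are detected.
	fmt.Println(errors.Is(base, syscall.EINVAL))    // true
	fmt.Println(errors.Is(wrapped, syscall.EINVAL)) // true

	// A bare type assertion only matches the unwrapped form.
	_, ok := wrapped.(*os.PathError)
	fmt.Println(ok) // false
}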


@@ -9,6 +9,7 @@ import (
"io"
"os"
"os/exec"
"os/signal"
"os/user"
"runtime"
"strconv"
@@ -484,6 +485,30 @@ func MaybeReexecUsingUserNamespace(evenForRoot bool) {
// Finish up.
logrus.Debugf("Running %+v with environment %+v, UID map %+v, and GID map %+v", cmd.Cmd.Args, os.Environ(), cmd.UidMappings, cmd.GidMappings)
// Forward SIGHUP, SIGINT, and SIGTERM to our child process.
interrupted := make(chan os.Signal, 100)
defer func() {
signal.Stop(interrupted)
close(interrupted)
}()
cmd.Hook = func(int) error {
go func() {
for receivedSignal := range interrupted {
cmd.Cmd.Process.Signal(receivedSignal)
}
}()
return nil
}
signal.Notify(interrupted, syscall.SIGHUP, syscall.SIGINT, syscall.SIGTERM)
// Make sure our child process gets SIGKILLed if we exit, for whatever
// reason, before it does.
if cmd.Cmd.SysProcAttr == nil {
cmd.Cmd.SysProcAttr = &syscall.SysProcAttr{}
}
cmd.Cmd.SysProcAttr.Pdeathsig = syscall.SIGKILL
ExecRunnable(cmd, nil)
}
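The block above combines three mechanisms: a buffered channel registered with signal.Notify, a goroutine that forwards each received signal to the child, and Pdeathsig so the kernel kills the child if the parent dies first. A stripped-down, Linux-only sketch of the same pattern with plain os/exec (the package's Runnable/Hook plumbing is omitted):

package main

import (
	"os"
	"os/exec"
	"os/signal"
	"syscall"
)

func main() {
	child := exec.Command("sleep", "60")
	child.Stdout, child.Stderr = os.Stdout, os.Stderr
	// Ask the kernel to SIGKILL the child if this process exits first
	// (Linux-specific).
	child.SysProcAttr = &syscall.SysProcAttr{Pdeathsig: syscall.SIGKILL}

	// Forward SIGHUP, SIGINT and SIGTERM to the child instead of dying
	// ourselves and orphaning it.
	interrupted := make(chan os.Signal, 100)
	signal.Notify(interrupted, syscall.SIGHUP, syscall.SIGINT, syscall.SIGTERM)
	defer func() {
		signal.Stop(interrupted)
		close(interrupted)
	}()

	if err := child.Start(); err != nil {
		panic(err)
	}
	go func() {
		for sig := range interrupted {
			_ = child.Process.Signal(sig)
		}
	}()
	_ = child.Wait()
}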
@@ -501,11 +526,11 @@ func ExecRunnable(cmd Runnable, cleanup func()) {
if exitError.ProcessState.Exited() {
if waitStatus, ok := exitError.ProcessState.Sys().(syscall.WaitStatus); ok {
if waitStatus.Exited() {
logrus.Errorf("%v", exitError)
logrus.Debugf("%v", exitError)
exit(waitStatus.ExitStatus())
}
if waitStatus.Signaled() {
logrus.Errorf("%v", exitError)
logrus.Debugf("%v", exitError)
exit(int(waitStatus.Signal()) + 128)
}
}


@@ -31,6 +31,14 @@ import (
"github.com/pkg/errors"
)
type updateNameOperation int
const (
setNames updateNameOperation = iota
addNames
removeNames
)
var (
stores []*store
storesLock sync.Mutex
@@ -368,8 +376,17 @@ type Store interface {
// SetNames changes the list of names for a layer, image, or container.
// Duplicate names are removed from the list automatically.
// Deprecated: Prone to race conditions, suggested alternatives are `AddNames` and `RemoveNames`.
SetNames(id string, names []string) error
// AddNames adds the given names to the list of names for a layer, image, or container.
// Duplicate names are removed from the list automatically.
AddNames(id string, names []string) error
// RemoveNames removes the given names from the list of names for a layer, image, or container.
// Duplicate names are removed from the list automatically.
RemoveNames(id string, names []string) error
// ListImageBigData retrieves a list of the (possibly large) chunks of
// named data associated with an image.
ListImageBigData(id string) ([]string, error)
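The new AddNames and RemoveNames methods let callers adjust the name list in one call instead of the read-modify-write cycle that makes SetNames race-prone. A hedged sketch of intended usage, assuming a storage.Store handle has already been obtained (the image ID and tags are made up):

package example

import (
	"github.com/containers/storage"
)

// retagImage is a hypothetical helper: add the new tag, then drop the old
// one, instead of reading the name list, editing it locally and racing other
// writers through the deprecated SetNames.
func retagImage(store storage.Store, imageID string) error {
	if err := store.AddNames(imageID, []string{"example.com/app:1.1"}); err != nil {
		return err
	}
	return store.RemoveNames(imageID, []string{"example.com/app:1.0"})
}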
@@ -2050,7 +2067,20 @@ func dedupeNames(names []string) []string {
return deduped
}
// Deprecated: Prone to race conditions, suggested alternatives are `AddNames` and `RemoveNames`.
func (s *store) SetNames(id string, names []string) error {
return s.updateNames(id, names, setNames)
}
func (s *store) AddNames(id string, names []string) error {
return s.updateNames(id, names, addNames)
}
func (s *store) RemoveNames(id string, names []string) error {
return s.updateNames(id, names, removeNames)
}
func (s *store) updateNames(id string, names []string, op updateNameOperation) error {
deduped := dedupeNames(names)
rlstore, err := s.LayerStore()
@@ -2063,7 +2093,16 @@ func (s *store) SetNames(id string, names []string) error {
return err
}
if rlstore.Exists(id) {
return rlstore.SetNames(id, deduped)
switch op {
case setNames:
return rlstore.SetNames(id, deduped)
case removeNames:
return rlstore.RemoveNames(id, deduped)
case addNames:
return rlstore.AddNames(id, deduped)
default:
return errInvalidUpdateNameOperation
}
}
ristore, err := s.ImageStore()
@@ -2076,7 +2115,16 @@ func (s *store) SetNames(id string, names []string) error {
return err
}
if ristore.Exists(id) {
return ristore.SetNames(id, deduped)
switch op {
case setNames:
return ristore.SetNames(id, deduped)
case removeNames:
return ristore.RemoveNames(id, deduped)
case addNames:
return ristore.AddNames(id, deduped)
default:
return errInvalidUpdateNameOperation
}
}
// Check if id refers to a RO Store
@@ -2114,7 +2162,16 @@ func (s *store) SetNames(id string, names []string) error {
return err
}
if rcstore.Exists(id) {
return rcstore.SetNames(id, deduped)
switch op {
case setNames:
return rcstore.SetNames(id, deduped)
case removeNames:
return rcstore.RemoveNames(id, deduped)
case addNames:
return rcstore.AddNames(id, deduped)
default:
return errInvalidUpdateNameOperation
}
}
return ErrLayerUnknown
}
@@ -2532,17 +2589,12 @@ func (s *store) DeleteContainer(id string) error {
}()
var errors []error
for {
select {
case err, ok := <-errChan:
if !ok {
return multierror.Append(nil, errors...).ErrorOrNil()
}
if err != nil {
errors = append(errors, err)
}
for err := range errChan {
if err != nil {
errors = append(errors, err)
}
}
return multierror.Append(nil, errors...).ErrorOrNil()
}
}
return ErrNotAContainer
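The rewritten DeleteContainer loop above simply ranges over errChan until the sender closes it, then folds whatever accumulated into one error via go-multierror. A small sketch of that collection pattern with made-up work; multierror.Append and ErrorOrNil are the same calls used above:

package main

import (
	"fmt"

	"github.com/hashicorp/go-multierror"
)

func main() {
	errChan := make(chan error)
	go func() {
		defer close(errChan) // ranging below ends when the channel is closed
		for i := 0; i < 3; i++ {
			if i == 1 {
				errChan <- fmt.Errorf("step %d failed", i)
				continue
			}
			errChan <- nil
		}
	}()

	var errs []error
	for err := range errChan {
		if err != nil {
			errs = append(errs, err)
		}
	}
	// ErrorOrNil returns nil when nothing was collected, one combined error
	// otherwise.
	fmt.Println(multierror.Append(nil, errs...).ErrorOrNil())
}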


@@ -55,4 +55,6 @@ var (
ErrStoreIsReadOnly = errors.New("called a write method on a read-only store")
// ErrNotSupported is returned when the requested functionality is not supported.
ErrNotSupported = errors.New("not supported")
// ErrInvalidMappings is returned when the specified mappings are invalid.
ErrInvalidMappings = errors.New("invalid mappings specified")
)


@@ -3,14 +3,12 @@ package types
import (
"fmt"
"os"
"os/exec"
"path/filepath"
"strings"
"sync"
"time"
"github.com/BurntSushi/toml"
"github.com/containers/storage/drivers/overlay"
cfg "github.com/containers/storage/pkg/config"
"github.com/containers/storage/pkg/idtools"
"github.com/sirupsen/logrus"
@@ -19,11 +17,11 @@ import (
// TOML-friendly explicit tables used for conversions.
type TomlConfig struct {
Storage struct {
Driver string `toml:"driver"`
RunRoot string `toml:"runroot"`
GraphRoot string `toml:"graphroot"`
RootlessStoragePath string `toml:"rootless_storage_path"`
Options cfg.OptionsConfig `toml:"options"`
Driver string `toml:"driver,omitempty"`
RunRoot string `toml:"runroot,omitempty"`
GraphRoot string `toml:"graphroot,omitempty"`
RootlessStoragePath string `toml:"rootless_storage_path,omitempty"`
Options cfg.OptionsConfig `toml:"options,omitempty"`
} `toml:"storage"`
}
@@ -225,25 +223,11 @@ func getRootlessStorageOpts(rootlessUID int, systemOpts StoreOptions) (StoreOpti
opts.GraphDriverName = overlayDriver
}
if opts.GraphDriverName == "" || opts.GraphDriverName == overlayDriver {
supported, err := overlay.SupportsNativeOverlay(opts.GraphRoot, rootlessRuntime)
if err != nil {
return opts, err
}
if supported {
opts.GraphDriverName = overlayDriver
} else {
if path, err := exec.LookPath("fuse-overlayfs"); err == nil {
opts.GraphDriverName = overlayDriver
opts.GraphDriverOptions = []string{fmt.Sprintf("overlay.mount_program=%s", path)}
}
}
if opts.GraphDriverName == overlayDriver {
for _, o := range systemOpts.GraphDriverOptions {
if strings.Contains(o, "ignore_chown_errors") {
opts.GraphDriverOptions = append(opts.GraphDriverOptions, o)
break
}
if opts.GraphDriverName == overlayDriver {
for _, o := range systemOpts.GraphDriverOptions {
if strings.Contains(o, "ignore_chown_errors") {
opts.GraphDriverOptions = append(opts.GraphDriverOptions, o)
break
}
}
}
@@ -431,11 +415,12 @@ func Save(conf TomlConfig, rootless bool) error {
if err != nil {
return err
}
if err = os.Remove(configFile); !os.IsNotExist(err) {
if err = os.Remove(configFile); !os.IsNotExist(err) && err != nil {
return err
}
f, err := os.Open(configFile)
f, err := os.Create(configFile)
if err != nil {
return err
}


@@ -40,3 +40,35 @@ func validateMountOptions(mountOptions []string) error {
}
return nil
}
func applyNameOperation(oldNames []string, opParameters []string, op updateNameOperation) ([]string, error) {
result := make([]string, 0)
switch op {
case setNames:
// ignore all old names and just return new names
return dedupeNames(opParameters), nil
case removeNames:
// remove given names from old names
for _, name := range oldNames {
// only keep names in final result which do not intersect with input names
// basically `result = oldNames - opParameters`
nameShouldBeRemoved := false
for _, opName := range opParameters {
if name == opName {
nameShouldBeRemoved = true
}
}
if !nameShouldBeRemoved {
result = append(result, name)
}
}
return dedupeNames(result), nil
case addNames:
result = append(result, opParameters...)
result = append(result, oldNames...)
return dedupeNames(result), nil
default:
return result, errInvalidUpdateNameOperation
}
return dedupeNames(result), nil
}


@@ -5755,7 +5755,6 @@ paths:
property1: "string"
property2: "string"
IpcMode: ""
LxcConf: []
Memory: 0
MemorySwap: 0
MemoryReservation: 0
@@ -8607,12 +8606,20 @@ paths:
if `tty` was specified as part of creating and starting the exec instance.
operationId: "ExecResize"
responses:
201:
200:
description: "No error"
400:
description: "bad parameter"
schema:
$ref: "#/definitions/ErrorResponse"
404:
description: "No such exec instance"
schema:
$ref: "#/definitions/ErrorResponse"
500:
description: "Server error"
schema:
$ref: "#/definitions/ErrorResponse"
parameters:
- name: "id"
in: "path"


@@ -44,6 +44,8 @@ func Wrap(outer, inner error) error {
//
// format is the format of the error message. The string '{{err}}' will
// be replaced with the original error message.
//
// Deprecated: Use fmt.Errorf()
func Wrapf(format string, err error) error {
outerMsg := "<nil>"
if err != nil {
@@ -148,6 +150,9 @@ func Walk(err error, cb WalkFunc) {
for _, err := range e.WrappedErrors() {
Walk(err, cb)
}
case interface{ Unwrap() error }:
cb(err)
Walk(e.Unwrap(), cb)
default:
cb(err)
}
@@ -167,3 +172,7 @@ func (w *wrappedError) Error() string {
func (w *wrappedError) WrappedErrors() []error {
return []error{w.Outer, w.Inner}
}
func (w *wrappedError) Unwrap() error {
return w.Inner
}
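Wrapf is now marked Deprecated in favor of fmt.Errorf, and Walk plus the new Unwrap method make wrappedError cooperate with the standard errors package. A stdlib-only sketch of the replacement idiom, wrapping with %w and inspecting with errors.Is/As/Unwrap:

package main

import (
	"errors"
	"fmt"
	"io/fs"
	"os"
)

func main() {
	_, err := os.Open("/does/not/exist")
	// The modern replacement for errwrap.Wrapf: wrap with the %w verb.
	wrapped := fmt.Errorf("loading config: %w", err)

	fmt.Println(errors.Is(wrapped, fs.ErrNotExist)) // true: Is walks Unwrap()
	var pathErr *fs.PathError
	fmt.Println(errors.As(wrapped, &pathErr))  // true: As finds the *fs.PathError
	fmt.Println(errors.Unwrap(wrapped) == err) // true: one unwrap step back
}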


@@ -17,6 +17,42 @@ This package provides various compression algorithms.
# changelog
* Mar 3, 2022 (v1.15.0)
* zstd: Refactor decoder by @klauspost in [#498](https://github.com/klauspost/compress/pull/498)
* zstd: Add stream encoding without goroutines by @klauspost in [#505](https://github.com/klauspost/compress/pull/505)
* huff0: Prevent single blocks exceeding 16 bits by @klauspost in[#507](https://github.com/klauspost/compress/pull/507)
* flate: Inline literal emission by @klauspost in [#509](https://github.com/klauspost/compress/pull/509)
* gzhttp: Add zstd to transport by @klauspost in [#400](https://github.com/klauspost/compress/pull/400)
* gzhttp: Make content-type optional by @klauspost in [#510](https://github.com/klauspost/compress/pull/510)
<details>
<summary>See Details</summary>
Both compression and decompression now support "synchronous" stream operations. This means that whenever "concurrency" is set to 1, they operate without spawning goroutines.
Asynchronous stream decompression is also faster, since the goroutine allocation splits the workload much more effectively. On typical streams this will use 2 cores fully for decompression. When a stream has finished decoding, no goroutines are left over, so decoders can now safely be pooled and still be garbage collected.
While the release has been extensively tested, thorough testing is recommended when upgrading.
</details>
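The "synchronous" stream operations described above correspond to the encoder/decoder concurrency options: with concurrency 1 the zstd writer and reader work inline and spawn no goroutines, which makes them cheap to pool. A minimal sketch, assuming the current github.com/klauspost/compress/zstd option names:

package main

import (
	"bytes"
	"fmt"
	"io"

	"github.com/klauspost/compress/zstd"
)

func main() {
	var compressed bytes.Buffer

	// Concurrency 1: the encoder runs synchronously and spawns no goroutines.
	enc, err := zstd.NewWriter(&compressed, zstd.WithEncoderConcurrency(1))
	if err != nil {
		panic(err)
	}
	if _, err := enc.Write([]byte("hello zstd")); err != nil {
		panic(err)
	}
	if err := enc.Close(); err != nil {
		panic(err)
	}

	// Same for the decoder.
	dec, err := zstd.NewReader(&compressed, zstd.WithDecoderConcurrency(1))
	if err != nil {
		panic(err)
	}
	defer dec.Close()
	out, err := io.ReadAll(dec)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out)) // hello zstd
}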
* Feb 22, 2022 (v1.14.4)
* flate: Fix rare huffman only (-2) corruption. [#503](https://github.com/klauspost/compress/pull/503)
* zip: Update deprecated CreateHeaderRaw to correctly call CreateRaw by @saracen in [#502](https://github.com/klauspost/compress/pull/502)
* zip: don't read data descriptor early by @saracen in [#501](https://github.com/klauspost/compress/pull/501) #501
* huff0: Use static decompression buffer up to 30% faster by @klauspost in [#499](https://github.com/klauspost/compress/pull/499) [#500](https://github.com/klauspost/compress/pull/500)
* Feb 17, 2022 (v1.14.3)
* flate: Improve fastest levels compression speed ~10% more throughput. [#482](https://github.com/klauspost/compress/pull/482) [#489](https://github.com/klauspost/compress/pull/489) [#490](https://github.com/klauspost/compress/pull/490) [#491](https://github.com/klauspost/compress/pull/491) [#494](https://github.com/klauspost/compress/pull/494) [#478](https://github.com/klauspost/compress/pull/478)
* flate: Faster decompression speed, ~5-10%. [#483](https://github.com/klauspost/compress/pull/483)
* s2: Faster compression with Go v1.18 and amd64 microarch level 3+. [#484](https://github.com/klauspost/compress/pull/484) [#486](https://github.com/klauspost/compress/pull/486)
* Jan 25, 2022 (v1.14.2)
* zstd: improve header decoder by @dsnet [#476](https://github.com/klauspost/compress/pull/476)
* zstd: Add bigger default blocks [#469](https://github.com/klauspost/compress/pull/469)
* zstd: Remove unused decompression buffer [#470](https://github.com/klauspost/compress/pull/470)
* zstd: Fix logically dead code by @ningmingxiao [#472](https://github.com/klauspost/compress/pull/472)
* flate: Improve level 7-9 [#471](https://github.com/klauspost/compress/pull/471) [#473](https://github.com/klauspost/compress/pull/473)
* zstd: Add noasm tag for xxhash [#475](https://github.com/klauspost/compress/pull/475)
* Jan 11, 2022 (v1.14.1)
* s2: Add stream index in [#462](https://github.com/klauspost/compress/pull/462)
* flate: Speed and efficiency improvements in [#439](https://github.com/klauspost/compress/pull/439) [#461](https://github.com/klauspost/compress/pull/461) [#455](https://github.com/klauspost/compress/pull/455) [#452](https://github.com/klauspost/compress/pull/452) [#458](https://github.com/klauspost/compress/pull/458)
@@ -53,6 +89,9 @@ This package provides various compression algorithms.
* zstd: Detect short invalid signatures [#382](https://github.com/klauspost/compress/pull/382)
* zstd: Spawn decoder goroutine only if needed. [#380](https://github.com/klauspost/compress/pull/380)
<details>
<summary>See changes to v1.12.x</summary>
* May 25, 2021 (v1.12.3)
* deflate: Better/faster Huffman encoding [#374](https://github.com/klauspost/compress/pull/374)
* deflate: Allocate less for history. [#375](https://github.com/klauspost/compress/pull/375)
@@ -74,9 +113,10 @@ This package provides various compression algorithms.
* s2c/s2d/s2sx: Always truncate when writing files [#352](https://github.com/klauspost/compress/pull/352)
* zstd: Reduce memory usage further when using [WithLowerEncoderMem](https://pkg.go.dev/github.com/klauspost/compress/zstd#WithLowerEncoderMem) [#346](https://github.com/klauspost/compress/pull/346)
* s2: Fix potential problem with amd64 assembly and profilers [#349](https://github.com/klauspost/compress/pull/349)
</details>
<details>
<summary>See changes prior to v1.12.1</summary>
<summary>See changes to v1.11.x</summary>
* Mar 26, 2021 (v1.11.13)
* zstd: Big speedup on small dictionary encodes [#344](https://github.com/klauspost/compress/pull/344) [#345](https://github.com/klauspost/compress/pull/345)
@@ -135,7 +175,7 @@ This package provides various compression algorithms.
</details>
<details>
<summary>See changes prior to v1.11.0</summary>
<summary>See changes to v1.10.x</summary>
* July 8, 2020 (v1.10.11)
* zstd: Fix extra block when compressing with ReadFrom. [#278](https://github.com/klauspost/compress/pull/278)
@@ -297,11 +337,6 @@ This package provides various compression algorithms.
# deflate usage
* [High Throughput Benchmark](http://blog.klauspost.com/go-gzipdeflate-benchmarks/).
* [Small Payload/Webserver Benchmarks](http://blog.klauspost.com/gzip-performance-for-go-webservers/).
* [Linear Time Compression](http://blog.klauspost.com/constant-time-gzipzip-compression/).
* [Re-balancing Deflate Compression Levels](https://blog.klauspost.com/rebalancing-deflate-compression-levels/)
The packages are drop-in replacements for standard libraries. Simply replace the import path to use them:
| old import | new import | Documentation
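Concretely, the drop-in swap in the table above is only an import-path change; package names and APIs match the standard library. A minimal example using the flate replacement (the other packages work the same way):

package main

import (
	"bytes"

	// was: "compress/flate"
	"github.com/klauspost/compress/flate"
)

func main() {
	var buf bytes.Buffer
	w, err := flate.NewWriter(&buf, flate.BestSpeed)
	if err != nil {
		panic(err)
	}
	if _, err := w.Write([]byte("same API, different import path")); err != nil {
		panic(err)
	}
	if err := w.Close(); err != nil {
		panic(err)
	}
}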
@@ -323,6 +358,8 @@ Memory usage is typically 1MB for a Writer. stdlib is in the same range.
If you expect to have a lot of concurrently allocated Writers consider using
the stateless compress described below.
For compression performance, see: [this spreadsheet](https://docs.google.com/spreadsheets/d/1nuNE2nPfuINCZJRMt6wFWhKpToF95I47XjSsc-1rbPQ/edit?usp=sharing).
# Stateless compression
This package offers stateless compression as a special option for gzip/deflate.


@@ -179,7 +179,7 @@ func (e *fastGen) matchlen(s, t int32, src []byte) int32 {
// matchlenLong will return the match length between offsets s and t in src.
// It is assumed that s > t, that t >=0 and s < len(src).
func (e *fastGen) matchlenLong(s, t int32, src []byte) int32 {
if debugDecode {
if debugDeflate {
if t >= s {
panic(fmt.Sprint("t >=s:", t, s))
}


@@ -8,6 +8,7 @@ import (
"encoding/binary"
"fmt"
"io"
"math"
)
const (
@@ -24,6 +25,10 @@ const (
codegenCodeCount = 19
badCode = 255
// maxPredefinedTokens is the maximum number of tokens
// where we check if fixed size is smaller.
maxPredefinedTokens = 250
// bufferFlushSize indicates the buffer size
// after which bytes are flushed to the writer.
// Should preferably be a multiple of 6, since
@@ -36,8 +41,11 @@ const (
bufferSize = bufferFlushSize + 8
)
// Minimum length code that emits bits.
const lengthExtraBitsMinCode = 8
// The number of extra bits needed by length code X - LENGTH_CODES_START.
var lengthExtraBits = [32]int8{
var lengthExtraBits = [32]uint8{
/* 257 */ 0, 0, 0,
/* 260 */ 0, 0, 0, 0, 0, 1, 1, 1, 1, 2,
/* 270 */ 2, 2, 2, 3, 3, 3, 3, 4, 4, 4,
@@ -51,6 +59,9 @@ var lengthBase = [32]uint8{
64, 80, 96, 112, 128, 160, 192, 224, 255,
}
// Minimum offset code that emits bits.
const offsetExtraBitsMinCode = 4
// offset code word extra bits.
var offsetExtraBits = [32]int8{
0, 0, 0, 0, 1, 1, 2, 2, 3, 3,
@@ -78,10 +89,10 @@ func init() {
for i := range offsetCombined[:] {
// Don't use extended window values...
if offsetBase[i] > 0x006000 {
if offsetExtraBits[i] == 0 || offsetBase[i] > 0x006000 {
continue
}
offsetCombined[i] = uint32(offsetExtraBits[i])<<16 | (offsetBase[i])
offsetCombined[i] = uint32(offsetExtraBits[i]) | (offsetBase[i] << 8)
}
}
@@ -97,7 +108,7 @@ type huffmanBitWriter struct {
// Data waiting to be written is bytes[0:nbytes]
// and then the low nbits of bits.
bits uint64
nbits uint16
nbits uint8
nbytes uint8
lastHuffMan bool
literalEncoding *huffmanEncoder
@@ -215,7 +226,7 @@ func (w *huffmanBitWriter) write(b []byte) {
_, w.err = w.writer.Write(b)
}
func (w *huffmanBitWriter) writeBits(b int32, nb uint16) {
func (w *huffmanBitWriter) writeBits(b int32, nb uint8) {
w.bits |= uint64(b) << (w.nbits & 63)
w.nbits += nb
if w.nbits >= 48 {
@@ -571,7 +582,10 @@ func (w *huffmanBitWriter) writeBlock(tokens *tokens, eof bool, input []byte) {
// Fixed Huffman baseline.
var literalEncoding = fixedLiteralEncoding
var offsetEncoding = fixedOffsetEncoding
var size = w.fixedSize(extraBits)
var size = math.MaxInt32
if tokens.n < maxPredefinedTokens {
size = w.fixedSize(extraBits)
}
// Dynamic Huffman?
var numCodegens int
@@ -672,19 +686,21 @@ func (w *huffmanBitWriter) writeBlockDynamic(tokens *tokens, eof bool, input []b
size = reuseSize
}
if preSize := w.fixedSize(extraBits) + 7; usePrefs && preSize < size {
// Check if we get a reasonable size decrease.
if storable && ssize <= size {
w.writeStoredHeader(len(input), eof)
w.writeBytes(input)
if tokens.n < maxPredefinedTokens {
if preSize := w.fixedSize(extraBits) + 7; usePrefs && preSize < size {
// Check if we get a reasonable size decrease.
if storable && ssize <= size {
w.writeStoredHeader(len(input), eof)
w.writeBytes(input)
return
}
w.writeFixedHeader(eof)
if !sync {
tokens.AddEOB()
}
w.writeTokens(tokens.Slice(), fixedLiteralEncoding.codes, fixedOffsetEncoding.codes)
return
}
w.writeFixedHeader(eof)
if !sync {
tokens.AddEOB()
}
w.writeTokens(tokens.Slice(), fixedLiteralEncoding.codes, fixedOffsetEncoding.codes)
return
}
// Check if we get a reasonable size decrease.
if storable && ssize <= size {
@@ -717,19 +733,21 @@ func (w *huffmanBitWriter) writeBlockDynamic(tokens *tokens, eof bool, input []b
size, numCodegens = w.dynamicSize(w.literalEncoding, w.offsetEncoding, extraBits)
// Store predefined, if we don't get a reasonable improvement.
if preSize := w.fixedSize(extraBits); usePrefs && preSize <= size {
// Store bytes, if we don't get an improvement.
if storable && ssize <= preSize {
w.writeStoredHeader(len(input), eof)
w.writeBytes(input)
if tokens.n < maxPredefinedTokens {
if preSize := w.fixedSize(extraBits); usePrefs && preSize <= size {
// Store bytes, if we don't get an improvement.
if storable && ssize <= preSize {
w.writeStoredHeader(len(input), eof)
w.writeBytes(input)
return
}
w.writeFixedHeader(eof)
if !sync {
tokens.AddEOB()
}
w.writeTokens(tokens.Slice(), fixedLiteralEncoding.codes, fixedOffsetEncoding.codes)
return
}
w.writeFixedHeader(eof)
if !sync {
tokens.AddEOB()
}
w.writeTokens(tokens.Slice(), fixedLiteralEncoding.codes, fixedOffsetEncoding.codes)
return
}
if storable && ssize <= size {
@@ -833,9 +851,9 @@ func (w *huffmanBitWriter) writeTokens(tokens []token, leCodes, oeCodes []hcode)
bits, nbits, nbytes := w.bits, w.nbits, w.nbytes
for _, t := range tokens {
if t < matchType {
if t < 256 {
//w.writeCode(lits[t.literal()])
c := lits[t.literal()]
c := lits[t]
bits |= uint64(c.code) << (nbits & 63)
nbits += c.len
if nbits >= 48 {
@@ -858,12 +876,12 @@ func (w *huffmanBitWriter) writeTokens(tokens []token, leCodes, oeCodes []hcode)
// Write the length
length := t.length()
lengthCode := lengthCode(length)
lengthCode := lengthCode(length) & 31
if false {
w.writeCode(lengths[lengthCode&31])
w.writeCode(lengths[lengthCode])
} else {
// inlined
c := lengths[lengthCode&31]
c := lengths[lengthCode]
bits |= uint64(c.code) << (nbits & 63)
nbits += c.len
if nbits >= 48 {
@@ -883,10 +901,10 @@ func (w *huffmanBitWriter) writeTokens(tokens []token, leCodes, oeCodes []hcode)
}
}
extraLengthBits := uint16(lengthExtraBits[lengthCode&31])
if extraLengthBits > 0 {
if lengthCode >= lengthExtraBitsMinCode {
extraLengthBits := lengthExtraBits[lengthCode]
//w.writeBits(extraLength, extraLengthBits)
extraLength := int32(length - lengthBase[lengthCode&31])
extraLength := int32(length - lengthBase[lengthCode])
bits |= uint64(extraLength) << (nbits & 63)
nbits += extraLengthBits
if nbits >= 48 {
@@ -907,10 +925,9 @@ func (w *huffmanBitWriter) writeTokens(tokens []token, leCodes, oeCodes []hcode)
}
// Write the offset
offset := t.offset()
offsetCode := offset >> 16
offset &= matchOffsetOnlyMask
offsetCode := (offset >> 16) & 31
if false {
w.writeCode(offs[offsetCode&31])
w.writeCode(offs[offsetCode])
} else {
// inlined
c := offs[offsetCode]
@@ -932,11 +949,12 @@ func (w *huffmanBitWriter) writeTokens(tokens []token, leCodes, oeCodes []hcode)
}
}
}
offsetComb := offsetCombined[offsetCode]
if offsetComb > 1<<16 {
if offsetCode >= offsetExtraBitsMinCode {
offsetComb := offsetCombined[offsetCode]
//w.writeBits(extraOffset, extraOffsetBits)
bits |= uint64(offset-(offsetComb&0xffff)) << (nbits & 63)
nbits += uint16(offsetComb >> 16)
bits |= uint64((offset-(offsetComb>>8))&matchOffsetOnlyMask) << (nbits & 63)
nbits += uint8(offsetComb)
if nbits >= 48 {
binary.LittleEndian.PutUint64(w.bytes[nbytes:], bits)
//*(*uint64)(unsafe.Pointer(&w.bytes[nbytes])) = bits
@@ -1002,6 +1020,29 @@ func (w *huffmanBitWriter) writeBlockHuff(eof bool, input []byte, sync bool) {
// https://stackoverflow.com/a/25454430
const guessHeaderSizeBits = 70 * 8
histogram(input, w.literalFreq[:numLiterals], fill)
ssize, storable := w.storedSize(input)
if storable && len(input) > 1024 {
// Quick check for incompressible content.
abs := float64(0)
avg := float64(len(input)) / 256
max := float64(len(input) * 2)
for _, v := range w.literalFreq[:256] {
diff := float64(v) - avg
abs += diff * diff
if abs > max {
break
}
}
if abs < max {
if debugDeflate {
fmt.Println("stored", abs, "<", max)
}
// No chance we can compress this...
w.writeStoredHeader(len(input), eof)
w.writeBytes(input)
return
}
}
w.literalFreq[endBlockMarker] = 1
w.tmpLitEncoding.generate(w.literalFreq[:numLiterals], 15)
if fill {
@@ -1019,8 +1060,10 @@ func (w *huffmanBitWriter) writeBlockHuff(eof bool, input []byte, sync bool) {
estBits += estBits >> w.logNewTablePenalty
// Store bytes, if we don't get a reasonable improvement.
ssize, storable := w.storedSize(input)
if storable && ssize <= estBits {
if debugDeflate {
fmt.Println("stored,", ssize, "<=", estBits)
}
w.writeStoredHeader(len(input), eof)
w.writeBytes(input)
return
@@ -1031,7 +1074,7 @@ func (w *huffmanBitWriter) writeBlockHuff(eof bool, input []byte, sync bool) {
if estBits < reuseSize {
if debugDeflate {
//fmt.Println("not reusing, reuse:", reuseSize/8, "> new:", estBits/8, "- header est:", w.lastHeader/8)
fmt.Println("NOT reusing, reuse:", reuseSize/8, "> new:", estBits/8, "header est:", w.lastHeader/8, "bytes")
}
// We owe an EOB
w.writeCode(w.literalEncoding.codes[endBlockMarker])
@@ -1065,6 +1108,9 @@ func (w *huffmanBitWriter) writeBlockHuff(eof bool, input []byte, sync bool) {
// Go 1.16 LOVES having these on stack. At least 1.5x the speed.
bits, nbits, nbytes := w.bits, w.nbits, w.nbytes
if debugDeflate {
count -= int(nbytes)*8 + int(nbits)
}
// Unroll, write 3 codes/loop.
// Fastest number of unrolls.
for len(input) > 3 {
@@ -1074,13 +1120,16 @@ func (w *huffmanBitWriter) writeBlockHuff(eof bool, input []byte, sync bool) {
binary.LittleEndian.PutUint64(w.bytes[nbytes:], bits)
bits >>= (n * 8) & 63
nbits -= n * 8
nbytes += uint8(n)
nbytes += n
}
if nbytes >= bufferFlushSize {
if w.err != nil {
nbytes = 0
return
}
if debugDeflate {
count += int(nbytes) * 8
}
_, w.err = w.writer.Write(w.bytes[:nbytes])
nbytes = 0
}
@@ -1096,13 +1145,6 @@ func (w *huffmanBitWriter) writeBlockHuff(eof bool, input []byte, sync bool) {
// Remaining...
for _, t := range input {
// Bitwriting inlined, ~30% speedup
c := encoding[t]
bits |= uint64(c.code) << (nbits & 63)
nbits += c.len
if debugDeflate {
count += int(c.len)
}
if nbits >= 48 {
binary.LittleEndian.PutUint64(w.bytes[nbytes:], bits)
//*(*uint64)(unsafe.Pointer(&w.bytes[nbytes])) = bits
@@ -1114,17 +1156,33 @@ func (w *huffmanBitWriter) writeBlockHuff(eof bool, input []byte, sync bool) {
nbytes = 0
return
}
if debugDeflate {
count += int(nbytes) * 8
}
_, w.err = w.writer.Write(w.bytes[:nbytes])
nbytes = 0
}
}
// Bitwriting inlined, ~30% speedup
c := encoding[t]
bits |= uint64(c.code) << (nbits & 63)
nbits += c.len
if debugDeflate {
count += int(c.len)
}
}
// Restore...
w.bits, w.nbits, w.nbytes = bits, nbits, nbytes
if debugDeflate {
fmt.Println("wrote", count/8, "bytes")
nb := count + int(nbytes)*8 + int(nbits)
fmt.Println("wrote", nb, "bits,", nb/8, "bytes.")
}
// Flush if needed to have space.
if w.nbits >= 48 {
w.writeOutBits()
}
if eof || sync {
w.writeCode(w.literalEncoding.codes[endBlockMarker])
w.lastHeader = 0


@@ -17,7 +17,8 @@ const (
// hcode is a huffman code with a bit code and bit length.
type hcode struct {
code, len uint16
code uint16
len uint8
}
type huffmanEncoder struct {
@@ -56,7 +57,7 @@ type levelInfo struct {
}
// set sets the code and length of an hcode.
func (h *hcode) set(code uint16, length uint16) {
func (h *hcode) set(code uint16, length uint8) {
h.len = length
h.code = code
}
@@ -80,7 +81,7 @@ func generateFixedLiteralEncoding() *huffmanEncoder {
var ch uint16
for ch = 0; ch < literalCount; ch++ {
var bits uint16
var size uint16
var size uint8
switch {
case ch < 144:
// size 8, 000110000 .. 10111111
@@ -99,7 +100,7 @@ func generateFixedLiteralEncoding() *huffmanEncoder {
bits = ch + 192 - 280
size = 8
}
codes[ch] = hcode{code: reverseBits(bits, byte(size)), len: size}
codes[ch] = hcode{code: reverseBits(bits, size), len: size}
}
return h
}
@@ -187,14 +188,19 @@ func (h *huffmanEncoder) bitCounts(list []literalNode, maxBits int32) []int32 {
// of the level j ancestor.
var leafCounts [maxBitsLimit][maxBitsLimit]int32
// Descending to only have 1 bounds check.
l2f := int32(list[2].freq)
l1f := int32(list[1].freq)
l0f := int32(list[0].freq) + int32(list[1].freq)
for level := int32(1); level <= maxBits; level++ {
// For every level, the first two items are the first two characters.
// We initialize the levels as if we had already figured this out.
levels[level] = levelInfo{
level: level,
lastFreq: int32(list[1].freq),
nextCharFreq: int32(list[2].freq),
nextPairFreq: int32(list[0].freq) + int32(list[1].freq),
lastFreq: l1f,
nextCharFreq: l2f,
nextPairFreq: l0f,
}
leafCounts[level][level] = 2
if level == 1 {
@@ -205,8 +211,8 @@ func (h *huffmanEncoder) bitCounts(list []literalNode, maxBits int32) []int32 {
// We need a total of 2*n - 2 items at top level and have already generated 2.
levels[maxBits].needed = 2*n - 4
level := maxBits
for {
level := uint32(maxBits)
for level < 16 {
l := &levels[level]
if l.nextPairFreq == math.MaxInt32 && l.nextCharFreq == math.MaxInt32 {
// We've run out of both leafs and pairs.
@@ -238,7 +244,13 @@ func (h *huffmanEncoder) bitCounts(list []literalNode, maxBits int32) []int32 {
// more values in the level below
l.lastFreq = l.nextPairFreq
// Take leaf counts from the lower level, except counts[level] remains the same.
copy(leafCounts[level][:level], leafCounts[level-1][:level])
if true {
save := leafCounts[level][level]
leafCounts[level] = leafCounts[level-1]
leafCounts[level][level] = save
} else {
copy(leafCounts[level][:level], leafCounts[level-1][:level])
}
levels[l.level-1].needed = 2
}
@@ -296,7 +308,7 @@ func (h *huffmanEncoder) assignEncodingAndSize(bitCount []int32, list []literalN
sortByLiteral(chunk)
for _, node := range chunk {
h.codes[node.literal] = hcode{code: reverseBits(code, uint8(n)), len: uint16(n)}
h.codes[node.literal] = hcode{code: reverseBits(code, uint8(n)), len: uint8(n)}
code++
}
list = list[0 : len(list)-int(bits)]
@@ -309,6 +321,7 @@ func (h *huffmanEncoder) assignEncodingAndSize(bitCount []int32, list []literalN
// maxBits The maximum number of bits to use for any literal.
func (h *huffmanEncoder) generate(freq []uint16, maxBits int32) {
list := h.freqcache[:len(freq)+1]
codes := h.codes[:len(freq)]
// Number of non-zero literals
count := 0
// Set list to be the set of all non-zero literals and their frequencies
@@ -317,11 +330,10 @@ func (h *huffmanEncoder) generate(freq []uint16, maxBits int32) {
list[count] = literalNode{uint16(i), f}
count++
} else {
list[count] = literalNode{}
h.codes[i].len = 0
codes[i].len = 0
}
}
list[len(freq)] = literalNode{}
list[count] = literalNode{}
list = list[:count]
if count <= 2 {


@@ -36,6 +36,13 @@ type lengthExtra struct {
var decCodeToLen = [32]lengthExtra{{length: 0x0, extra: 0x0}, {length: 0x1, extra: 0x0}, {length: 0x2, extra: 0x0}, {length: 0x3, extra: 0x0}, {length: 0x4, extra: 0x0}, {length: 0x5, extra: 0x0}, {length: 0x6, extra: 0x0}, {length: 0x7, extra: 0x0}, {length: 0x8, extra: 0x1}, {length: 0xa, extra: 0x1}, {length: 0xc, extra: 0x1}, {length: 0xe, extra: 0x1}, {length: 0x10, extra: 0x2}, {length: 0x14, extra: 0x2}, {length: 0x18, extra: 0x2}, {length: 0x1c, extra: 0x2}, {length: 0x20, extra: 0x3}, {length: 0x28, extra: 0x3}, {length: 0x30, extra: 0x3}, {length: 0x38, extra: 0x3}, {length: 0x40, extra: 0x4}, {length: 0x50, extra: 0x4}, {length: 0x60, extra: 0x4}, {length: 0x70, extra: 0x4}, {length: 0x80, extra: 0x5}, {length: 0xa0, extra: 0x5}, {length: 0xc0, extra: 0x5}, {length: 0xe0, extra: 0x5}, {length: 0xff, extra: 0x0}, {length: 0x0, extra: 0x0}, {length: 0x0, extra: 0x0}, {length: 0x0, extra: 0x0}}
var bitMask32 = [32]uint32{
0, 1, 3, 7, 0xF, 0x1F, 0x3F, 0x7F, 0xFF,
0x1FF, 0x3FF, 0x7FF, 0xFFF, 0x1FFF, 0x3FFF, 0x7FFF, 0xFFFF,
0x1ffff, 0x3ffff, 0x7FFFF, 0xfFFFF, 0x1fFFFF, 0x3fFFFF, 0x7fFFFF, 0xffFFFF,
0x1ffFFFF, 0x3ffFFFF, 0x7ffFFFF, 0xfffFFFF, 0x1fffFFFF, 0x3fffFFFF, 0x7fffFFFF,
} // up to 32 bits
// Initialize the fixedHuffmanDecoder only once upon first use.
var fixedOnce sync.Once
var fixedHuffmanDecoder huffmanDecoder
@@ -559,221 +566,6 @@ func (f *decompressor) readHuffman() error {
return nil
}
// Decode a single Huffman block from f.
// hl and hd are the Huffman states for the lit/length values
// and the distance values, respectively. If hd == nil, using the
// fixed distance encoding associated with fixed Huffman blocks.
func (f *decompressor) huffmanBlockGeneric() {
const (
stateInit = iota // Zero value must be stateInit
stateDict
)
switch f.stepState {
case stateInit:
goto readLiteral
case stateDict:
goto copyHistory
}
readLiteral:
// Read literal and/or (length, distance) according to RFC section 3.2.3.
{
var v int
{
// Inlined v, err := f.huffSym(f.hl)
// Since a huffmanDecoder can be empty or be composed of a degenerate tree
// with single element, huffSym must error on these two edge cases. In both
// cases, the chunks slice will be 0 for the invalid sequence, leading it to
// satisfy the n == 0 check below.
n := uint(f.hl.maxRead)
// Optimization. Compiler isn't smart enough to keep f.b,f.nb in registers,
// but is smart enough to keep local variables in registers, so use nb and b,
// inline call to moreBits and reassign b,nb back to f on return.
nb, b := f.nb, f.b
for {
for nb < n {
c, err := f.r.ReadByte()
if err != nil {
f.b = b
f.nb = nb
f.err = noEOF(err)
return
}
f.roffset++
b |= uint32(c) << (nb & regSizeMaskUint32)
nb += 8
}
chunk := f.hl.chunks[b&(huffmanNumChunks-1)]
n = uint(chunk & huffmanCountMask)
if n > huffmanChunkBits {
chunk = f.hl.links[chunk>>huffmanValueShift][(b>>huffmanChunkBits)&f.hl.linkMask]
n = uint(chunk & huffmanCountMask)
}
if n <= nb {
if n == 0 {
f.b = b
f.nb = nb
if debugDecode {
fmt.Println("huffsym: n==0")
}
f.err = CorruptInputError(f.roffset)
return
}
f.b = b >> (n & regSizeMaskUint32)
f.nb = nb - n
v = int(chunk >> huffmanValueShift)
break
}
}
}
var n uint // number of bits extra
var length int
var err error
switch {
case v < 256:
f.dict.writeByte(byte(v))
if f.dict.availWrite() == 0 {
f.toRead = f.dict.readFlush()
f.step = (*decompressor).huffmanBlockGeneric
f.stepState = stateInit
return
}
goto readLiteral
case v == 256:
f.finishBlock()
return
// otherwise, reference to older data
case v < 265:
length = v - (257 - 3)
n = 0
case v < 269:
length = v*2 - (265*2 - 11)
n = 1
case v < 273:
length = v*4 - (269*4 - 19)
n = 2
case v < 277:
length = v*8 - (273*8 - 35)
n = 3
case v < 281:
length = v*16 - (277*16 - 67)
n = 4
case v < 285:
length = v*32 - (281*32 - 131)
n = 5
case v < maxNumLit:
length = 258
n = 0
default:
if debugDecode {
fmt.Println(v, ">= maxNumLit")
}
f.err = CorruptInputError(f.roffset)
return
}
if n > 0 {
for f.nb < n {
if err = f.moreBits(); err != nil {
if debugDecode {
fmt.Println("morebits n>0:", err)
}
f.err = err
return
}
}
length += int(f.b & uint32(1<<(n&regSizeMaskUint32)-1))
f.b >>= n & regSizeMaskUint32
f.nb -= n
}
var dist uint32
if f.hd == nil {
for f.nb < 5 {
if err = f.moreBits(); err != nil {
if debugDecode {
fmt.Println("morebits f.nb<5:", err)
}
f.err = err
return
}
}
dist = uint32(bits.Reverse8(uint8(f.b & 0x1F << 3)))
f.b >>= 5
f.nb -= 5
} else {
sym, err := f.huffSym(f.hd)
if err != nil {
if debugDecode {
fmt.Println("huffsym:", err)
}
f.err = err
return
}
dist = uint32(sym)
}
switch {
case dist < 4:
dist++
case dist < maxNumDist:
nb := uint(dist-2) >> 1
// have 1 bit in bottom of dist, need nb more.
extra := (dist & 1) << (nb & regSizeMaskUint32)
for f.nb < nb {
if err = f.moreBits(); err != nil {
if debugDecode {
fmt.Println("morebits f.nb<nb:", err)
}
f.err = err
return
}
}
extra |= f.b & uint32(1<<(nb&regSizeMaskUint32)-1)
f.b >>= nb & regSizeMaskUint32
f.nb -= nb
dist = 1<<((nb+1)&regSizeMaskUint32) + 1 + extra
default:
if debugDecode {
fmt.Println("dist too big:", dist, maxNumDist)
}
f.err = CorruptInputError(f.roffset)
return
}
// No check on length; encoding can be prescient.
if dist > uint32(f.dict.histSize()) {
if debugDecode {
fmt.Println("dist > f.dict.histSize():", dist, f.dict.histSize())
}
f.err = CorruptInputError(f.roffset)
return
}
f.copyLen, f.copyDist = length, int(dist)
goto copyHistory
}
copyHistory:
// Perform a backwards copy according to RFC section 3.2.3.
{
cnt := f.dict.tryWriteCopy(f.copyDist, f.copyLen)
if cnt == 0 {
cnt = f.dict.writeCopy(f.copyDist, f.copyLen)
}
f.copyLen -= cnt
if f.dict.availWrite() == 0 || f.copyLen > 0 {
f.toRead = f.dict.readFlush()
f.step = (*decompressor).huffmanBlockGeneric // We need to continue this work
f.stepState = stateDict
return
}
goto readLiteral
}
}
// Copy a single uncompressed data block from input to output.
func (f *decompressor) dataBlock() {
// Uncompressed.
