Mirror of https://github.com/containers/skopeo.git, synced 2026-01-31 06:19:20 +00:00

Compare commits (81 commits)
| SHA1 |
|---|
| ca3bff6a7c |
| 563a4ac523 |
| 14ea9f8bfd |
| 05e38e127e |
| 1ef80d8082 |
| 597b6bd204 |
| 7e9a664764 |
| 79449a358d |
| 2d04db9ac8 |
| 3e7a28481c |
| 79225f2e65 |
| e1c1bbf26d |
| c4808f002e |
| 42203b366d |
| 1f11b8b350 |
| ea23621c70 |
| ab2bc6e8d1 |
| c520041b83 |
| e626fca6a7 |
| 92b6262224 |
| e8dea9e770 |
| 28080c8d5f |
| 0cea6dde02 |
| 22482e099a |
| 7aba888e99 |
| c61482d2cf |
| db941ebd8f |
| 7add6fc80b |
| eb9d74090e |
| 61351d44d7 |
| aa73bd9d0d |
| b08350db15 |
| f63f78225d |
| 60aa4aa82d |
| 37264e21fb |
| fe2591054c |
| fd0c3d7f08 |
| b325cc22b8 |
| 5f754820da |
| 43acc747d5 |
| b3dec98757 |
| b1795a08fb |
| 1307cac0c2 |
| dc1567c8bc |
| 22c524b0e0 |
| 9a225c3968 |
| 0270e5694c |
| 4ff902dab9 |
| 64b3bd28e3 |
| d8e506c648 |
| aa6c809e5a |
| 1c27d6918f |
| 9f2491694d |
| 14245f2e24 |
| 8a1d480274 |
| 78b29a5c2f |
| 20d31daec0 |
| 5a8f212630 |
| 34e77f9897 |
| 93876acc5e |
| 031283efb1 |
| 23c54feddd |
| 04e04edbfe |
| cbedcd967e |
| fa08bd7e91 |
| 874d119dd9 |
| eb43d93b57 |
| c1a0084bb3 |
| e8fb01e1ed |
| 0543f551c7 |
| 27f320b27f |
| c0dffd9b3e |
| 66a97d038e |
| 2e8377a708 |
| a76cfb7dc7 |
| 409dce8a89 |
| 5b14746045 |
| a3d2e8323a |
| 2be4deb980 |
| 5f71547262 |
| 6c791a0559 |
```diff
@@ -32,6 +32,8 @@ RUN set -x \
 RUN set -x \
     && export GOPATH=$(mktemp -d) \
     && git clone --depth 1 -b v1.5.0-alpha.3 git://github.com/openshift/origin "$GOPATH/src/github.com/openshift/origin" \
+    # The sed edits out a "go < 1.5" check which works incorrectly with go ≥ 1.10. \
+    && sed -i -e 's/\[\[ "\${go_version\[2]}" < "go1.5" ]]/false/' "$GOPATH/src/github.com/openshift/origin/hack/common.sh" \
     && (cd "$GOPATH/src/github.com/openshift/origin" && make clean build && make all WHAT=cmd/dockerregistry) \
     && cp -a "$GOPATH/src/github.com/openshift/origin/_output/local/bin/linux"/*/* /usr/local/bin \
     && cp "$GOPATH/src/github.com/openshift/origin/images/dockerregistry/config.yml" /atomic-registry-config.yml \
```

```diff
@@ -1,4 +1,4 @@
-FROM ubuntu:17.04
+FROM ubuntu:17.10

 RUN apt-get update && apt-get install -y \
     golang \
```
Makefile (3 changed lines)

```diff
@@ -135,3 +135,6 @@ validate-local:

 test-unit-local:
 	$(GPGME_ENV) $(GO) test -tags "$(BUILDTAGS)" $$($(GO) list -tags "$(BUILDTAGS)" -e ./... | grep -v '^github\.com/projectatomic/skopeo/\(integration\|vendor/.*\)$$')
+
+vendor: vendor.conf
+	vndr -whitelist '^github.com/containers/image/docs/.*'
```
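For orientation, a minimal sketch of invoking the two targets this hunk touches; both target names come straight from the hunk:

```sh
# run the unit tests locally, outside a container
make test-unit-local

# re-vendor dependencies listed in vendor.conf (wraps the vndr call above)
make vendor
```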
README.md (60 changed lines)

````diff
@@ -1,10 +1,19 @@
 skopeo [![Build Status](https://travis-ci.org/projectatomic/skopeo.svg?branch=master)](https://travis-ci.org/projectatomic/skopeo)
 =
-`skopeo` is a command line utility that performs various operations on container images and image repositories. Skopeo works with API V2 registries such as Docker registries, the Atomic registry, private registries, local directories and local OCI-layout directories. Skopeo does not require a daemon to be running to perform these operations which consist of:
+<img src="https://cdn.rawgit.com/projectatomic/skopeo/master/docs/skopeo.svg" width="250">
+
+----
+
+`skopeo` is a command line utility that performs various operations on container images and image repositories.
+
+`skopeo` can work with [OCI images](https://github.com/opencontainers/image-spec) as well as the original Docker v2 images.
+
+Skopeo works with API V2 registries such as Docker registries, the Atomic registry, private registries, local directories and local OCI-layout directories. Skopeo does not require a daemon to be running to perform these operations which consist of:

-* Inspecting an image showing its properties including its layers.
 * Copying an image from and to various storage mechanisms.
+  For example you can copy images from one registry to another, without requiring priviledge.
+* Inspecting a remote image showing its properties including its layers, without requiring you to pull the image to the host.
 * Deleting an image from an image repository.
 * When required by the repository, skopeo can pass the appropriate credentials and certificates for authentication.
````
````diff
@@ -75,9 +84,20 @@ $ skopeo inspect docker://docker.io/fedora:rawhide | jq '.Digest'

 Copying images
 -
-`skopeo` can copy container images between various storage mechanisms,
-e.g. Docker registries (including the Docker Hub), the Atomic Registry,
-local directories, and local OCI-layout directories:
+`skopeo` can copy container images between various storage mechanisms, including:
+* Docker distribution based registries
+
+  - The Docker Hub, OpenShift, GCR, Artifactory, Quay ...
+
+* Container Storage backends
+
+  - Docker daemon storage
+
+  - github.com/containers/storage (Backend for CRI-O, Buildah and friends)
+
+* Local directories
+
+* Local OCI-layout directories

 ```sh
 $ skopeo copy docker://busybox:1-glibc atomic:myns/unsigned:streaming
````
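To make the storage mechanisms this hunk enumerates concrete, a few illustrative invocations; the image names and local paths here are placeholders, not taken from the diff:

```sh
# registry -> local directory
skopeo copy docker://docker.io/library/busybox:latest dir:/tmp/busybox

# registry -> local OCI-layout directory
skopeo copy docker://docker.io/library/busybox:latest oci:/tmp/busybox-oci:latest

# Docker daemon storage -> registry
skopeo copy docker-daemon:busybox:latest docker://registry.example.com/busybox:latest
```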
````diff
@@ -129,8 +149,16 @@ $ skopeo copy --src-creds=testuser:testpassword docker://myregistrydomain.com:50
 If your cli config is found but it doesn't contain the necessary credentials for the queried registry
 you'll get an error. You can fix this by either logging in (via `docker login`) or providing `--creds` or `--src-creds|--dest-creds`.

-Building
+Obtaining skopeo
 -
+`skopeo` may already be packaged in your distribution, for example on Fedora 23 and later you can install it using
+```sh
+$ sudo dnf install skopeo
+```
+
+Otherwise, read on for building and installing it from source:
+
 To build the `skopeo` binary you need at least Go 1.5 because it uses the latest `GO15VENDOREXPERIMENT` flag.

 There are two ways to build skopeo: in a container, or locally without a container. Choose the one which better matches your needs and environment.
@@ -174,16 +202,12 @@ Then
 $ make docs
 ```

-Installing
--
-If you built from source:
+### Installation
+Finally, after the binary and documentation is built:
 ```sh
 $ sudo make install
 ```
-`skopeo` is also available from Fedora 23 (and later):
-```sh
-$ sudo dnf install skopeo
-```

 TODO
 -
 - list all images on registry?
````
````diff
@@ -200,30 +224,30 @@ CONTRIBUTING

 ### Dependencies management

-`skopeo` uses [`vndr`](https://github.com/LK4D4/vndr) for dependencies management.
+Make sure [`vndr`](https://github.com/LK4D4/vndr) is installed.

 In order to add a new dependency to this project:

 - add a new line to `vendor.conf` according to `vndr` rules (e.g. `github.com/pkg/errors master`)
-- run `vndr github.com/pkg/errors`
+- run `make vendor`

 In order to update an existing dependency:

 - update the relevant dependency line in `vendor.conf`
-- run `vndr github.com/pkg/errors`
+- run `make vendor`

 When new PRs for [containers/image](https://github.com/containers/image) break `skopeo` (i.e. `containers/image` tests fail in `make test-skopeo`):

 - create out a new branch in your `skopeo` checkout and switch to it
 - update `vendor.conf`. Find out the `containers/image` dependency; update it to vendor from your own branch and your own repository fork (e.g. `github.com/containers/image my-branch https://github.com/runcom/image`)
-- run `vndr github.com/containers/image`
+- run `make vendor`
 - make any other necessary changes in the skopeo repo (e.g. add other dependencies now requied by `containers/image`, or update skopeo for changed `containers/image` API)
 - optionally add new integration tests to the skopeo repo
 - submit the resulting branch as a skopeo PR, marked “DO NOT MERGE”
 - iterate until tests pass and the PR is reviewed
 - then the original `containers/image` PR can be merged, disregarding its `make test-skopeo` failure
 - as soon as possible after that, in the skopeo PR, restore the `containers/image` line in `vendor.conf` to use `containers/image:master`
-- run `vndr github.com/containers/image`
+- run `make vendor`
 - update the skopeo PR with the result, drop the “DO NOT MERGE” marking
 - after tests complete succcesfully again, merge the skopeo PR
````
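The add-a-dependency steps documented above reduce to two commands; the package here is the example used in the hunk itself:

```sh
echo 'github.com/pkg/errors master' >> vendor.conf
make vendor   # wraps: vndr -whitelist '^github.com/containers/image/docs/.*'
```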
```diff
@@ -1,15 +1,19 @@
 package main

 import (
+	"context"
 	"errors"
 	"fmt"
 	"os"
 	"strings"

 	"github.com/containers/image/copy"
+	"github.com/containers/image/docker/reference"
+	"github.com/containers/image/manifest"
 	"github.com/containers/image/transports"
 	"github.com/containers/image/transports/alltransports"
 	"github.com/containers/image/types"
+	imgspecv1 "github.com/opencontainers/image-spec/specs-go/v1"
 	"github.com/urfave/cli"
 )
@@ -28,39 +32,68 @@ func contextsFromGlobalOptions(c *cli.Context) (*types.SystemContext, *types.Sys
 	return sourceCtx, destinationCtx, nil
 }

-func copyHandler(context *cli.Context) error {
-	if len(context.Args()) != 2 {
+func copyHandler(c *cli.Context) error {
+	if len(c.Args()) != 2 {
 		return errors.New("Usage: copy source destination")
 	}

-	policyContext, err := getPolicyContext(context)
+	policyContext, err := getPolicyContext(c)
 	if err != nil {
 		return fmt.Errorf("Error loading trust policy: %v", err)
 	}
 	defer policyContext.Destroy()

-	srcRef, err := alltransports.ParseImageName(context.Args()[0])
+	srcRef, err := alltransports.ParseImageName(c.Args()[0])
 	if err != nil {
-		return fmt.Errorf("Invalid source name %s: %v", context.Args()[0], err)
+		return fmt.Errorf("Invalid source name %s: %v", c.Args()[0], err)
 	}
-	destRef, err := alltransports.ParseImageName(context.Args()[1])
+	destRef, err := alltransports.ParseImageName(c.Args()[1])
 	if err != nil {
-		return fmt.Errorf("Invalid destination name %s: %v", context.Args()[1], err)
+		return fmt.Errorf("Invalid destination name %s: %v", c.Args()[1], err)
 	}
-	signBy := context.String("sign-by")
-	removeSignatures := context.Bool("remove-signatures")
+	signBy := c.String("sign-by")
+	removeSignatures := c.Bool("remove-signatures")

-	sourceCtx, destinationCtx, err := contextsFromGlobalOptions(context)
+	sourceCtx, destinationCtx, err := contextsFromGlobalOptions(c)
 	if err != nil {
 		return err
 	}

-	return copy.Image(policyContext, destRef, srcRef, &copy.Options{
-		RemoveSignatures: removeSignatures,
-		SignBy:           signBy,
-		ReportWriter:     os.Stdout,
-		SourceCtx:        sourceCtx,
-		DestinationCtx:   destinationCtx,
+	var manifestType string
+	if c.IsSet("format") {
+		switch c.String("format") {
+		case "oci":
+			manifestType = imgspecv1.MediaTypeImageManifest
+		case "v2s1":
+			manifestType = manifest.DockerV2Schema1SignedMediaType
+		case "v2s2":
+			manifestType = manifest.DockerV2Schema2MediaType
+		default:
+			return fmt.Errorf("unknown format %q. Choose on of the supported formats: 'oci', 'v2s1', or 'v2s2'", c.String("format"))
+		}
+	}
+
+	if c.IsSet("additional-tag") {
+		for _, image := range c.StringSlice("additional-tag") {
+			ref, err := reference.ParseNormalizedNamed(image)
+			if err != nil {
+				return fmt.Errorf("error parsing additional-tag '%s': %v", image, err)
+			}
+			namedTagged, isNamedTagged := ref.(reference.NamedTagged)
+			if !isNamedTagged {
+				return fmt.Errorf("additional-tag '%s' must be a tagged reference", image)
+			}
+			destinationCtx.DockerArchiveAdditionalTags = append(destinationCtx.DockerArchiveAdditionalTags, namedTagged)
+		}
+	}
+
+	return copy.Image(context.Background(), policyContext, destRef, srcRef, &copy.Options{
+		RemoveSignatures:      removeSignatures,
+		SignBy:                signBy,
+		ReportWriter:          os.Stdout,
+		SourceCtx:             sourceCtx,
+		DestinationCtx:        destinationCtx,
+		ForceManifestMIMEType: manifestType,
 	})
 }
@@ -80,6 +113,14 @@ var copyCmd = cli.Command{
 	Action: copyHandler,
 	// FIXME: Do we need to namespace the GPG aspect?
 	Flags: []cli.Flag{
+		cli.StringSliceFlag{
+			Name:  "additional-tag",
+			Usage: "additional tags (supports docker-archive)",
+		},
+		cli.StringFlag{
+			Name:  "authfile",
+			Usage: "path of the authentication file. Default is ${XDG_RUNTIME_DIR}/containers/auth.json",
+		},
 		cli.BoolFlag{
 			Name:  "remove-signatures",
 			Usage: "Do not copy signatures from SOURCE-IMAGE",
@@ -101,20 +142,20 @@ var copyCmd = cli.Command{
 		cli.StringFlag{
 			Name:  "src-cert-dir",
 			Value: "",
-			Usage: "use certificates at `PATH` (*.crt, *.cert, *.key) to connect to the source registry",
+			Usage: "use certificates at `PATH` (*.crt, *.cert, *.key) to connect to the source registry or daemon",
 		},
 		cli.BoolTFlag{
 			Name:  "src-tls-verify",
-			Usage: "require HTTPS and verify certificates when talking to the container source registry (defaults to true)",
+			Usage: "require HTTPS and verify certificates when talking to the container source registry or daemon (defaults to true)",
 		},
 		cli.StringFlag{
 			Name:  "dest-cert-dir",
 			Value: "",
-			Usage: "use certificates at `PATH` (*.crt, *.cert, *.key) to connect to the destination registry",
+			Usage: "use certificates at `PATH` (*.crt, *.cert, *.key) to connect to the destination registry or daemon",
 		},
 		cli.BoolTFlag{
 			Name:  "dest-tls-verify",
-			Usage: "require HTTPS and verify certificates when talking to the container destination registry (defaults to true)",
+			Usage: "require HTTPS and verify certificates when talking to the container destination registry or daemon (defaults to true)",
 		},
 		cli.StringFlag{
 			Name: "dest-ostree-tmp-dir",
@@ -131,5 +172,23 @@ var copyCmd = cli.Command{
 			Value: "",
 			Usage: "`DIRECTORY` to use to store retrieved blobs (OCI layout destinations only)",
 		},
+		cli.StringFlag{
+			Name:  "format, f",
+			Usage: "`MANIFEST TYPE` (oci, v2s1, or v2s2) to use when saving image to directory using the 'dir:' transport (default is manifest type of source)",
+		},
+		cli.BoolFlag{
+			Name:  "dest-compress",
+			Usage: "Compress tarball image layers when saving to directory using the 'dir' transport. (default is same compression type as source)",
+		},
+		cli.StringFlag{
+			Name:  "src-daemon-host",
+			Value: "",
+			Usage: "use docker daemon host at `HOST` (docker-daemon sources only)",
+		},
+		cli.StringFlag{
+			Name:  "dest-daemon-host",
+			Value: "",
+			Usage: "use docker daemon host at `HOST` (docker-daemon destinations only)",
+		},
 	},
 }
```
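A sketch of how the flags added in these hunks surface on the command line; the registry addresses, image names, and paths are placeholders:

```sh
# force the OCI manifest type when writing to a dir: destination
skopeo copy --format oci docker://docker.io/library/alpine:latest dir:/tmp/alpine-oci

# compress tarball layers written by the dir transport
skopeo copy --dest-compress docker://docker.io/library/alpine:latest dir:/tmp/alpine

# read from a remote docker daemon instead of the default unix socket
skopeo copy --src-daemon-host http://10.0.0.1:2375 docker-daemon:alpine:latest dir:/tmp/alpine
```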
```diff
@@ -1,6 +1,7 @@
 package main

 import (
+	"context"
 	"errors"
 	"fmt"
 	"strings"
@@ -10,21 +11,21 @@ import (
 	"github.com/urfave/cli"
 )

-func deleteHandler(context *cli.Context) error {
-	if len(context.Args()) != 1 {
+func deleteHandler(c *cli.Context) error {
+	if len(c.Args()) != 1 {
 		return errors.New("Usage: delete imageReference")
 	}

-	ref, err := alltransports.ParseImageName(context.Args()[0])
+	ref, err := alltransports.ParseImageName(c.Args()[0])
 	if err != nil {
-		return fmt.Errorf("Invalid source name %s: %v", context.Args()[0], err)
+		return fmt.Errorf("Invalid source name %s: %v", c.Args()[0], err)
 	}

-	ctx, err := contextFromGlobalOptions(context, "")
+	sys, err := contextFromGlobalOptions(c, "")
 	if err != nil {
 		return err
 	}
-	return ref.DeleteImage(ctx)
+	return ref.DeleteImage(context.Background(), sys)
 }

 var deleteCmd = cli.Command{
@@ -41,6 +42,10 @@ var deleteCmd = cli.Command{
 	ArgsUsage: "IMAGE-NAME",
 	Action:    deleteHandler,
 	Flags: []cli.Flag{
+		cli.StringFlag{
+			Name:  "authfile",
+			Usage: "path of the authentication file. Default is ${XDG_RUNTIME_DIR}/containers/auth.json",
+		},
 		cli.StringFlag{
 			Name:  "creds",
 			Value: "",
```
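Correspondingly on the CLI, the new flag might be used like this; the registry and image name are placeholders:

```sh
skopeo delete --authfile ${XDG_RUNTIME_DIR}/containers/auth.json \
    docker://registry.example.com/example/busybox:latest
```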
```diff
@@ -1,6 +1,7 @@
 package main

 import (
+	"context"
 	"encoding/json"
 	"fmt"
 	"strings"
@@ -21,7 +22,7 @@ type inspectOutput struct {
 	Tag           string `json:",omitempty"`
 	Digest        digest.Digest
 	RepoTags      []string
-	Created       time.Time
+	Created       *time.Time
 	DockerVersion string
 	Labels        map[string]string
 	Architecture  string
@@ -42,6 +43,10 @@ var inspectCmd = cli.Command{
 `, strings.Join(transports.ListNames(), ", ")),
 	ArgsUsage: "IMAGE-NAME",
 	Flags: []cli.Flag{
+		cli.StringFlag{
+			Name:  "authfile",
+			Usage: "path of the authentication file. Default is ${XDG_RUNTIME_DIR}/containers/auth.json",
+		},
 		cli.StringFlag{
 			Name:  "cert-dir",
 			Value: "",
@@ -62,7 +67,9 @@ var inspectCmd = cli.Command{
 		},
 	},
 	Action: func(c *cli.Context) (retErr error) {
-		img, err := parseImage(c)
+		ctx := context.Background()
+
+		img, err := parseImage(ctx, c)
 		if err != nil {
 			return err
 		}
@@ -73,7 +80,7 @@ var inspectCmd = cli.Command{
 			}
 		}()

-		rawManifest, _, err := img.Manifest()
+		rawManifest, _, err := img.Manifest(ctx)
 		if err != nil {
 			return err
 		}
@@ -84,7 +91,7 @@ var inspectCmd = cli.Command{
 			}
 			return nil
 		}
-		imgInspect, err := img.Inspect()
+		imgInspect, err := img.Inspect(ctx)
 		if err != nil {
 			return err
 		}
@@ -106,7 +113,7 @@ var inspectCmd = cli.Command{
 		}
 		if dockerImg, ok := img.(*docker.Image); ok {
 			outputData.Name = dockerImg.SourceRefFullName()
-			outputData.RepoTags, err = dockerImg.GetRepositoryTags()
+			outputData.RepoTags, err = dockerImg.GetRepositoryTags(ctx)
 			if err != nil {
 				// some registries may decide to block the "list all tags" endpoint
 				// gracefully allow the inspect to continue in this case. Currently
```
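The inspect path exercised here matches the README example earlier in this compare view; `--raw` exits through the `rawManifest` branch above and prints the unprocessed manifest:

```sh
skopeo inspect docker://docker.io/fedora:rawhide | jq '.Digest'
skopeo inspect --raw docker://docker.io/fedora:rawhide
```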
```diff
@@ -1,6 +1,7 @@
 package main

 import (
+	"context"
 	"fmt"
 	"io/ioutil"
 	"os"
@@ -24,11 +25,18 @@ var layersCmd = cli.Command{
 		if c.NArg() == 0 {
 			return errors.New("Usage: layers imageReference [layer...]")
 		}
-		rawSource, err := parseImageSource(c, c.Args()[0])
+
+		ctx := context.Background()
+
+		sys, err := contextFromGlobalOptions(c, "")
+		if err != nil {
+			return err
+		}
+		rawSource, err := parseImageSource(ctx, c, c.Args()[0])
 		if err != nil {
 			return err
 		}
-		src, err := image.FromSource(rawSource)
+		src, err := image.FromSource(ctx, sys, rawSource)
 		if err != nil {
 			if closeErr := rawSource.Close(); closeErr != nil {
 				return errors.Wrapf(err, " (close error: %v)", closeErr)
@@ -42,7 +50,11 @@ var layersCmd = cli.Command{
 			}
 		}()

-		var blobDigests []digest.Digest
+		type blobDigest struct {
+			digest   digest.Digest
+			isConfig bool
+		}
+		var blobDigests []blobDigest
 		for _, dString := range c.Args().Tail() {
 			if !strings.HasPrefix(dString, "sha256:") {
 				dString = "sha256:" + dString
@@ -51,7 +63,7 @@ var layersCmd = cli.Command{
 			if err != nil {
 				return err
 			}
-			blobDigests = append(blobDigests, d)
+			blobDigests = append(blobDigests, blobDigest{digest: d, isConfig: false})
 		}

 		if len(blobDigests) == 0 {
@@ -59,13 +71,13 @@ var layersCmd = cli.Command{
 			seenLayers := map[digest.Digest]struct{}{}
 			for _, info := range layers {
 				if _, ok := seenLayers[info.Digest]; !ok {
-					blobDigests = append(blobDigests, info.Digest)
+					blobDigests = append(blobDigests, blobDigest{digest: info.Digest, isConfig: false})
 					seenLayers[info.Digest] = struct{}{}
 				}
 			}
 			configInfo := src.ConfigInfo()
 			if configInfo.Digest != "" {
-				blobDigests = append(blobDigests, configInfo.Digest)
+				blobDigests = append(blobDigests, blobDigest{digest: configInfo.Digest, isConfig: true})
 			}
 		}

@@ -77,7 +89,7 @@ var layersCmd = cli.Command{
 		if err != nil {
 			return err
 		}
-		dest, err := tmpDirRef.NewImageDestination(nil)
+		dest, err := tmpDirRef.NewImageDestination(ctx, nil)
 		if err != nil {
 			return err
 		}
@@ -88,12 +100,12 @@ var layersCmd = cli.Command{
 			}
 		}()

-		for _, digest := range blobDigests {
-			r, blobSize, err := rawSource.GetBlob(types.BlobInfo{Digest: digest, Size: -1})
+		for _, bd := range blobDigests {
+			r, blobSize, err := rawSource.GetBlob(ctx, types.BlobInfo{Digest: bd.digest, Size: -1})
 			if err != nil {
 				return err
 			}
-			if _, err := dest.PutBlob(r, types.BlobInfo{Digest: digest, Size: blobSize}); err != nil {
+			if _, err := dest.PutBlob(ctx, r, types.BlobInfo{Digest: bd.digest, Size: blobSize}, bd.isConfig); err != nil {
 				if closeErr := r.Close(); closeErr != nil {
 					return errors.Wrapf(err, " (close error: %v)", closeErr)
 				}
@@ -101,14 +113,14 @@ var layersCmd = cli.Command{
 			}
 		}

-		manifest, _, err := src.Manifest()
+		manifest, _, err := src.Manifest(ctx)
 		if err != nil {
 			return err
 		}
-		if err := dest.PutManifest(manifest); err != nil {
+		if err := dest.PutManifest(ctx, manifest); err != nil {
 			return err
 		}

-		return dest.Commit()
+		return dest.Commit(ctx)
 	},
 }
```
```diff
@@ -50,6 +50,16 @@ func createApp() *cli.App {
 			Value: "",
 			Usage: "use registry configuration files in `DIR` (e.g. for container signature storage)",
 		},
+		cli.StringFlag{
+			Name:  "override-arch",
+			Value: "",
+			Usage: "use `ARCH` instead of the architecture of the machine for choosing images",
+		},
+		cli.StringFlag{
+			Name:  "override-os",
+			Value: "",
+			Usage: "use `OS` instead of the running OS for choosing images",
+		},
 	}
 	app.Before = func(c *cli.Context) error {
 		if c.GlobalBool("debug") {
```
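The two new global flags select images for a foreign platform; for example (the image name is a placeholder):

```sh
skopeo --override-arch arm64 --override-os linux inspect docker://docker.io/library/alpine:latest
```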
```diff
@@ -10,14 +10,14 @@ import (
 	"github.com/urfave/cli"
 )

-func standaloneSign(context *cli.Context) error {
-	outputFile := context.String("output")
-	if len(context.Args()) != 3 || outputFile == "" {
+func standaloneSign(c *cli.Context) error {
+	outputFile := c.String("output")
+	if len(c.Args()) != 3 || outputFile == "" {
 		return errors.New("Usage: skopeo standalone-sign manifest docker-reference key-fingerprint -o signature")
 	}
-	manifestPath := context.Args()[0]
-	dockerReference := context.Args()[1]
-	fingerprint := context.Args()[2]
+	manifestPath := c.Args()[0]
+	dockerReference := c.Args()[1]
+	fingerprint := c.Args()[2]

 	manifest, err := ioutil.ReadFile(manifestPath)
 	if err != nil {
@@ -53,14 +53,14 @@ var standaloneSignCmd = cli.Command{
 	},
 }

-func standaloneVerify(context *cli.Context) error {
-	if len(context.Args()) != 4 {
+func standaloneVerify(c *cli.Context) error {
+	if len(c.Args()) != 4 {
 		return errors.New("Usage: skopeo standalone-verify manifest docker-reference key-fingerprint signature")
 	}
-	manifestPath := context.Args()[0]
-	expectedDockerReference := context.Args()[1]
-	expectedFingerprint := context.Args()[2]
-	signaturePath := context.Args()[3]
+	manifestPath := c.Args()[0]
+	expectedDockerReference := c.Args()[1]
+	expectedFingerprint := c.Args()[2]
+	signaturePath := c.Args()[3]

 	unverifiedManifest, err := ioutil.ReadFile(manifestPath)
 	if err != nil {
@@ -81,7 +81,7 @@ func standaloneVerify(context *cli.Context) error {
 		return fmt.Errorf("Error verifying signature: %v", err)
 	}

-	fmt.Fprintf(context.App.Writer, "Signature verified, digest %s\n", sig.DockerManifestDigest)
+	fmt.Fprintf(c.App.Writer, "Signature verified, digest %s\n", sig.DockerManifestDigest)
 	return nil
 }

@@ -92,11 +92,11 @@ var standaloneVerifyCmd = cli.Command{
 	Action: standaloneVerify,
 }

-func untrustedSignatureDump(context *cli.Context) error {
-	if len(context.Args()) != 1 {
+func untrustedSignatureDump(c *cli.Context) error {
+	if len(c.Args()) != 1 {
 		return errors.New("Usage: skopeo untrusted-signature-dump-without-verification signature")
 	}
-	untrustedSignaturePath := context.Args()[0]
+	untrustedSignaturePath := c.Args()[0]

 	untrustedSignature, err := ioutil.ReadFile(untrustedSignaturePath)
 	if err != nil {
@@ -111,7 +111,7 @@ func untrustedSignatureDump(context *cli.Context) error {
 	if err != nil {
 		return err
 	}
-	fmt.Fprintln(context.App.Writer, string(untrustedOut))
+	fmt.Fprintln(c.App.Writer, string(untrustedOut))
 	return nil
 }
```
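The usage strings above correspond to invocations like the following; the manifest file, docker reference, and key fingerprint are placeholders:

```sh
skopeo standalone-sign busybox-manifest.json registry.example.com/example/busybox "$KEY_FINGERPRINT" -o busybox.signature
skopeo standalone-verify busybox-manifest.json registry.example.com/example/busybox "$KEY_FINGERPRINT" busybox.signature
```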
```diff
@@ -1,6 +1,7 @@
 package main

 import (
+	"context"
 	"errors"
 	"strings"

@@ -11,13 +12,20 @@ import (

 func contextFromGlobalOptions(c *cli.Context, flagPrefix string) (*types.SystemContext, error) {
 	ctx := &types.SystemContext{
-		RegistriesDirPath: c.GlobalString("registries.d"),
-		DockerCertPath:    c.String(flagPrefix + "cert-dir"),
+		RegistriesDirPath:  c.GlobalString("registries.d"),
+		ArchitectureChoice: c.GlobalString("override-arch"),
+		OSChoice:           c.GlobalString("override-os"),
+		DockerCertPath:     c.String(flagPrefix + "cert-dir"),
 		// DEPRECATED: keep this here for backward compatibility, but override
 		// them if per subcommand flags are provided (see below).
-		DockerInsecureSkipTLSVerify: !c.GlobalBoolT("tls-verify"),
-		OSTreeTmpDirPath:            c.String(flagPrefix + "ostree-tmp-dir"),
-		OCISharedBlobDirPath:        c.String(flagPrefix + "shared-blob-dir"),
+		DockerInsecureSkipTLSVerify:       !c.GlobalBoolT("tls-verify"),
+		OSTreeTmpDirPath:                  c.String(flagPrefix + "ostree-tmp-dir"),
+		OCISharedBlobDirPath:              c.String(flagPrefix + "shared-blob-dir"),
+		DirForceCompress:                  c.Bool(flagPrefix + "compress"),
+		AuthFilePath:                      c.String("authfile"),
+		DockerDaemonHost:                  c.String(flagPrefix + "daemon-host"),
+		DockerDaemonCertPath:              c.String(flagPrefix + "cert-dir"),
+		DockerDaemonInsecureSkipTLSVerify: !c.BoolT(flagPrefix + "tls-verify"),
 	}
 	if c.IsSet(flagPrefix + "tls-verify") {
 		ctx.DockerInsecureSkipTLSVerify = !c.BoolT(flagPrefix + "tls-verify")
@@ -58,30 +66,30 @@ func getDockerAuth(creds string) (*types.DockerAuthConfig, error) {
 }

 // parseImage converts image URL-like string to an initialized handler for that image.
-// The caller must call .Close() on the returned Image.
-func parseImage(c *cli.Context) (types.Image, error) {
+// The caller must call .Close() on the returned ImageCloser.
+func parseImage(ctx context.Context, c *cli.Context) (types.ImageCloser, error) {
 	imgName := c.Args().First()
 	ref, err := alltransports.ParseImageName(imgName)
 	if err != nil {
 		return nil, err
 	}
-	ctx, err := contextFromGlobalOptions(c, "")
+	sys, err := contextFromGlobalOptions(c, "")
 	if err != nil {
 		return nil, err
 	}
-	return ref.NewImage(ctx)
+	return ref.NewImage(ctx, sys)
 }

 // parseImageSource converts image URL-like string to an ImageSource.
 // The caller must call .Close() on the returned ImageSource.
-func parseImageSource(c *cli.Context, name string) (types.ImageSource, error) {
+func parseImageSource(ctx context.Context, c *cli.Context, name string) (types.ImageSource, error) {
 	ref, err := alltransports.ParseImageName(name)
 	if err != nil {
 		return nil, err
 	}
-	ctx, err := contextFromGlobalOptions(c, "")
+	sys, err := contextFromGlobalOptions(c, "")
 	if err != nil {
 		return nil, err
 	}
-	return ref.NewImageSource(ctx)
+	return ref.NewImageSource(ctx, sys)
 }
```
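The layering in `contextFromGlobalOptions` means the deprecated global `--tls-verify` still works, but a per-subcommand flag wins when set; for example (the registry address is a placeholder):

```sh
# global, deprecated form
skopeo --tls-verify=false copy docker://10.0.0.1:5000/busybox:latest dir:/tmp/busybox

# per-subcommand form, which overrides the global setting
skopeo copy --src-tls-verify=false docker://10.0.0.1:5000/busybox:latest dir:/tmp/busybox
```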
```diff
@@ -20,30 +20,38 @@ _complete_() {
 }

 _skopeo_copy() {
-	local options_with_args="
-	--sign-by
-	--src-creds --screds
-	--src-cert-dir
-	--src-tls-verify
-	--dest-creds --dcreds
-	--dest-cert-dir
-	--dest-ostree-tmp-dir
-	--dest-tls-verify
-	"
-	local boolean_options="
-	--remove-signatures
-	"
-	_complete_ "$options_with_args" "$boolean_options"
+	local options_with_args="
+	--authfile
+	--format -f
+	--sign-by
+	--src-creds --screds
+	--src-cert-dir
+	--src-tls-verify
+	--dest-creds --dcreds
+	--dest-cert-dir
+	--dest-ostree-tmp-dir
+	--dest-tls-verify
+	--src-daemon-host
+	--dest-daemon-host
+	"
+
+	local boolean_options="
+	--dest-compress
+	--remove-signatures
+	"
+
+	_complete_ "$options_with_args" "$boolean_options"
 }

 _skopeo_inspect() {
 	local options_with_args="
-	--creds
-	--cert-dir
+	--authfile
+	--creds
+	--cert-dir
 	"
 	local boolean_options="
-	--raw
-	--tls-verify
+	--raw
+	--tls-verify
 	"
 	_complete_ "$options_with_args" "$boolean_options"
 }
@@ -75,11 +83,12 @@ _skopeo_manifest_digest() {

 _skopeo_delete() {
 	local options_with_args="
-	--creds
-	--cert-dir
+	--authfile
+	--creds
+	--cert-dir
 	"
 	local boolean_options="
-	--tls-verify
+	--tls-verify
 	"
 	_complete_ "$options_with_args" "$boolean_options"
 }
@@ -99,6 +108,8 @@ _skopeo_skopeo() {
 	local options_with_args="
 	--policy
 	--registries.d
+	--override-arch
+	--override-os
 	"
 	local boolean_options="
 	--insecure-policy
```
contrib/containers-storage.conf.5.md (new file, 60 lines)

```diff
@@ -0,0 +1,60 @@
+% storage.conf(5) Container Storage Configuration File
+% Dan Walsh
+% May 2017
+
+# NAME
+storage.conf - Syntax of Container Storage configuration file
+
+# DESCRIPTION
+The STORAGE configuration file specifies all of the available container storage options
+for tools using shared container storage.
+
+# FORMAT
+The [TOML format][toml] is used as the encoding of the configuration file.
+Every option and subtable listed here is nested under a global "storage" table.
+No bare options are used. The format of TOML can be simplified to:
+
+    [table]
+    option = value
+
+    [table.subtable1]
+    option = value
+
+    [table.subtable2]
+    option = value
+
+## STORAGE TABLE
+
+The `storage` table supports the following options:
+
+**graphroot**=""
+  container storage graph dir (default: "/var/lib/containers/storage")
+  Default directory to store all writable content created by container storage programs.
+
+**runroot**=""
+  container storage run dir (default: "/var/run/containers/storage")
+  Default directory to store all temporary writable content created by container storage programs.
+
+**driver**=""
+  container storage driver (default is "overlay")
+  Default Copy On Write (COW) container storage driver.
+
+### STORAGE OPTIONS TABLE
+
+The `storage.options` table supports the following options:
+
+**additionalimagestores**=[]
+  Paths to additional container image stores. Usually these are read-only and stored on remote network shares.
+
+**size**=""
+  Maximum size of a container image. Default is 10GB. This flag can be used to set quota
+  on the size of container images.
+
+**override_kernel_check**=""
+  Tell storage drivers to ignore kernel version checks. Some storage drivers assume that if a kernel is too
+  old, the driver is not supported. But for kernels that have had the drivers backported, this flag
+  allows users to override the checks.
+
+# HISTORY
+May 2017, Originally compiled by Dan Walsh <dwalsh@redhat.com>
+Format copied from crio.conf man page created by Aleksa Sarai <asarai@suse.de>
```
contrib/storage.conf (new file, 28 lines)

```diff
@@ -0,0 +1,28 @@
+# storage.conf is the configuration file for all tools
+# that share the containers/storage libraries
+# See man 5 containers-storage.conf for more information
+
+# The "container storage" table contains all of the server options.
+[storage]
+
+# Default Storage Driver
+driver = "overlay"
+
+# Temporary storage location
+runroot = "/var/run/containers/storage"
+
+# Primary read-write location of container storage
+graphroot = "/var/lib/containers/storage"
+
+[storage.options]
+# AdditionalImageStores is used to pass paths to additional read-only image stores
+# Must be comma separated list.
+additionalimagestores = [
+]
+
+# Size is used to set a maximum size of the container image. Only supported by
+# certain container storage drivers (currently overlay, zfs, vfs, btrfs)
+size = ""
+
+# OverrideKernelCheck tells the driver to ignore kernel checks based on kernel version
+override_kernel_check = "true"
```
````diff
@@ -2,13 +2,25 @@
 % Jhon Honce
 % August 2016
 # NAME
-skopeo -- Various operations with container images and container image registries
+skopeo -- Command line utility used to interact with local and remote container images and container image registries
 # SYNOPSIS
 **skopeo** [_global options_] _command_ [_command options_]
 # DESCRIPTION
-`skopeo` is a command line utility providing various operations with container images and container image registries. For example, it is able to inspect a repository on a container registry and fetch image. It fetches the repository's manifest and it is able to show you a `docker inspect`-like json output about a whole repository or a tag. This tool, in contrast to `docker inspect`, helps you gather useful information about a repository or a tag without requiring you to run `docker pull` - e.g. - which tags are available for the given repository? which labels the image has?
+`skopeo` is a command line utility providing various operations with container images and container image registries.
+
+`skopeo` can copy container images between various containers image stores, converting them as necessary. For example you can use `skopeo` to copy container images from one container registry to another.
+
+`skopeo` can convert a Docker schema 2 or schema 1 container image to an OCI image.
+
+`skopeo` can inspect a repository on a container registry without needlessly pulling the image. Pulling an image from a repository, especially a remote repository, is an expensive network and storage operation. Skopeo fetches the repository's manifest and displays a `docker inspect`-like json output about the repository or a tag. `skopeo`, in contrast to `docker inspect`, helps you gather useful information about a repository or a tag without requiring you to run `docker pull` - e.g. - Which tags are available for the given repository? Which labels does the image have?
+
+`skopeo` can sign and verify container images.
+
+`skopeo` can delete container images from a remote container registry.
+
+Note: `skopeo` does not require any container runtimes to be running, to do most of
+its functionality. It also does not require root, unless you are copying images into a container runtime storage backend, like the docker daemon or github.com/containers/storage.

-It also allows you to copy container images between various registries, possibly converting them as necessary, and to sign and verify images.
 ## IMAGE NAMES
 Most commands refer to container images, using a _transport_`:`_details_ format. The following formats are supported:

@@ -19,13 +31,13 @@ Most commands refer to container images, using a _transport_`:`_details_ format.
 An existing local directory _path_ storing the manifest, layer tarballs and signatures as individual files. This is a non-standardized format, primarily useful for debugging or noninvasive container inspection.

 **docker://**_docker-reference_
-An image in a registry implementing the "Docker Registry HTTP API V2". By default, uses the authorization state in `$HOME/.docker/config.json`, which is set e.g. using `(docker login)`.
+An image in a registry implementing the "Docker Registry HTTP API V2". By default, uses the authorization state in either `$XDG_RUNTIME_DIR/containers/auth.json`, which is set using `(kpod login)`. If the authorization state is not found there, `$HOME/.docker/config.json` is checked, which is set using `(docker login)`.

 **docker-archive:**_path_[**:**_docker-reference_]
-An image is stored in the `docker save` formated file. _docker-reference_ is only used when creating such a file, and it must not contain a digest.
+An image is stored in the `docker save` formatted file. _docker-reference_ is only used when creating such a file, and it must not contain a digest.

 **docker-daemon:**_docker-reference_
-An image _docker-reference_ stored in the docker daemon internal storage. _docker-reference_ must contain either a tag or a digest. Alternatively, when reading images, the format can also be docker-daemon:algo:digest (an image ID).
+An image _docker-reference_ stored in the docker daemon internal storage. _docker-reference_ must contain either a tag or a digest. Alternatively, when reading images, the format can be docker-daemon:algo:digest (an image ID).

 **oci:**_path_**:**_tag_
 An image _tag_ in a directory compliant with "Open Container Image Layout Specification" at _path_.
````
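As a concrete example of the `docker-archive` transport documented above (the tarball path is a placeholder):

```sh
skopeo copy docker://docker.io/library/busybox:latest docker-archive:/tmp/busybox.tar:busybox:latest
```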
````diff
@@ -43,6 +55,10 @@ Most commands refer to container images, using a _transport_`:`_details_ format.

 **--registries.d** _dir_ use registry configuration files in _dir_ (e.g. for container signature storage), overriding the default path.

+**--override-arch** _arch_ Use _arch_ instead of the architecture of the machine for choosing images.
+
+**--override-os** _OS_ Use _OS_ instead of the running OS for choosing images.
+
 **--help**|**-h** Show help

 **--version**|**-v** print the version number
@@ -60,34 +76,59 @@ Uses the system's trust policy to validate images, rejects images not trusted by

 _destination-image_ use the "image name" format described above

+**--authfile** _path_
+
+Path of the authentication file. Default is ${XDG_RUNTIME\_DIR}/containers/auth.json, which is set using `kpod login`.
+If the authorization state is not found there, $HOME/.docker/config.json is checked, which is set using `docker login`.
+
+**--format, -f** _manifest-type_ Manifest type (oci, v2s1, or v2s2) to use when saving image to directory using the 'dir:' transport (default is manifest type of source)
+
 **--remove-signatures** do not copy signatures, if any, from _source-image_. Necessary when copying a signed image to a destination which does not support signatures.

 **--sign-by=**_key-id_ add a signature using that key ID for an image name corresponding to _destination-image_

 **--src-creds** _username[:password]_ for accessing the source registry

+**--dest-compress** _bool-value_ Compress tarball image layers when saving to directory using the 'dir' transport. (default is same compression type as source)
+
 **--dest-creds** _username[:password]_ for accessing the destination registry

-**--src-cert-dir** _path_ Use certificates at _path_ (*.crt, *.cert, *.key) to connect to the source registry
+**--src-cert-dir** _path_ Use certificates at _path_ (*.crt, *.cert, *.key) to connect to the source registry or daemon

-**--src-tls-verify** _bool-value_ Require HTTPS and verify certificates when talking to container source registry (defaults to true)
+**--src-tls-verify** _bool-value_ Require HTTPS and verify certificates when talking to container source registry or daemon (defaults to true)

-**--dest-cert-dir** _path_ Use certificates at _path_ (*.crt, *.cert, *.key) to connect to the destination registry
+**--dest-cert-dir** _path_ Use certificates at _path_ (*.crt, *.cert, *.key) to connect to the destination registry or daemon

 **--dest-ostree-tmp-dir** _path_ Directory to use for OSTree temporary files.

-**--dest-tls-verify** _bool-value_ Require HTTPS and verify certificates when talking to container destination registry (defaults to true)
+**--dest-tls-verify** _bool-value_ Require HTTPS and verify certificates when talking to container destination registry or daemon (defaults to true)
+
+**--src-daemon-host** _host_ Copy from docker daemon at _host_. If _host_ starts with `tcp://`, HTTPS is enabled by default. To use plain HTTP, use the form `http://` (default is `unix:///var/run/docker.sock`).
+
+**--dest-daemon-host** _host_ Copy to docker daemon at _host_. If _host_ starts with `tcp://`, HTTPS is enabled by default. To use plain HTTP, use the form `http://` (default is `unix:///var/run/docker.sock`).

 Existing signatures, if any, are preserved as well.

 ## skopeo delete
 **skopeo delete** _image-name_

-Mark _image-name_ for deletion. To release the allocated disk space, you need to execute the container registry garabage collector. E.g.,
+Mark _image-name_ for deletion. To release the allocated disk space, you must login to the container registry server and execute the container registry garbage collector. E.g.,

 ```sh
-$ docker exec -it registry bin/registry garbage-collect /etc/docker/registry/config.yml
-```
+/usr/bin/registry garbage-collect /etc/docker-distribution/registry/config.yml
+
+Note: sometimes the config.yml is stored in /etc/docker/registry/config.yml
+
+If you are running the container registry inside of a container you would execute something like:
+
+$ docker exec -it registry /usr/bin/registry garbage-collect /etc/docker-distribution/registry/config.yml
+
+```
+
+**--authfile** _path_
+
+Path of the authentication file. Default is ${XDG_RUNTIME\_DIR}/containers/auth.json, which is set using `kpod login`.
+If the authorization state is not found there, $HOME/.docker/config.json is checked, which is set using `docker login`.

 **--creds** _username[:password]_ for accessing the registry
@@ -106,6 +147,11 @@ Return low-level information about _image-name_ in a registry

 _image-name_ name of image to retrieve information about

+**--authfile** _path_
+
+Path of the authentication file. Default is ${XDG_RUNTIME\_DIR}/containers/auth.json, which is set using `kpod login`.
+If the authorization state is not found there, $HOME/.docker/config.json is checked, which is set using `docker login`.
+
 **--creds** _username[:password]_ for accessing the registry

 **--cert-dir** _path_ Use certificates at _path_ (*.crt, *.cert, *.key) to connect to the registry
@@ -237,6 +283,9 @@ $ skopeo standalone-verify busybox-manifest.json registry.example.com/example/bu
 Signature verified, digest sha256:20bf21ed457b390829cdbeec8795a7bea1626991fda603e0d01b4e7f60427e55
 ```

+# SEE ALSO
+kpod-login(1), docker-login(1)
+
 # AUTHORS

 Antonio Murdaca <runcom@redhat.com>, Miloslav Trmac <mitr@redhat.com>, Jhon Honce <jhonce@redhat.com>
````
docs/skopeo.svg (new file, 546 lines): the skopeo logo, an SVG vector graphic created with Inkscape.
||||
height="21.377089"
|
||||
width="4.521956"
|
||||
id="rect84126"
|
||||
style="fill:#ffffff;stroke:#000000;stroke-width:0.79374999;stroke-linecap:round;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:5.99999952"
|
||||
transform="rotate(30)" />
|
||||
<path
|
||||
style="fill:url(#linearGradient84325);fill-opacity:1;stroke:none;stroke-width:0.79375005;stroke-linecap:round;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:5.99999952"
|
||||
d="m 207.24023,252.71811 25.53907,14.74414 8.52539,-14.76953 -25.53711,-14.74415 z"
|
||||
id="rect84313"
|
||||
inkscape:connector-curvature="0" />
|
||||
<path
|
||||
inkscape:connector-curvature="0"
|
||||
id="path84128"
|
||||
d="m 215.3335,241.36799 22.49734,12.98884"
|
||||
style="fill:#ffffff;fill-rule:evenodd;stroke:#000000;stroke-width:0.52916664;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1" />
|
||||
<path
|
||||
inkscape:connector-curvature="0"
|
||||
id="path84130"
|
||||
d="m 246.61693,255.0795 -9.11198,15.78242 a 2.6351497,9.1643514 30 0 0 6.60453,-6.7032 2.6351497,9.1643514 30 0 0 2.50745,-9.07922 z"
|
||||
style="fill:#ffffff;stroke:#000000;stroke-width:0.79375005;stroke-linecap:round;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:5.99999952" />
|
||||
<path
|
||||
style="fill:#ffffff;stroke:#000000;stroke-width:0.79374999;stroke-linecap:round;stroke-linejoin:round;stroke-dashoffset:5.99999952"
|
||||
d="m 195.97877,212.80238 46.0456,26.61429 -3.50256,6.07342 -46.0456,-26.61429 z"
|
||||
id="path84134"
|
||||
inkscape:connector-curvature="0"
|
||||
sodipodi:nodetypes="ccccc" />
|
||||
<path
|
||||
style="fill:#ffffff;stroke:#000000;stroke-width:0.79374999;stroke-linecap:round;stroke-linejoin:round;stroke-dashoffset:5.99999952"
|
||||
d="m 202.36709,199.05917 26.65552,8.43269 21.69622,19.51455 -8.68507,12.39398 -46.04559,-26.61429 z"
|
||||
id="path84136"
|
||||
inkscape:connector-curvature="0"
|
||||
sodipodi:nodetypes="cccccc" />
|
||||
<path
|
||||
style="fill:url(#linearGradient84422);fill-opacity:1;stroke:none;stroke-width:0.79374999;stroke-linecap:round;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:5.99999952"
|
||||
d="m 186.31445,239.41146 1.30078,0.75 7.46485,-12.92968 -1.30078,-0.75 z"
|
||||
id="rect84410"
|
||||
inkscape:connector-curvature="0" />
|
||||
<path
|
||||
style="fill:url(#linearGradient84349);fill-opacity:1;stroke:none;stroke-width:0.79374999;stroke-linecap:round;stroke-linejoin:round;stroke-dashoffset:5.99999952"
|
||||
d="m 193.92188,218.48568 44.21289,25.55469 2.44335,-4.23242 -44.21289,-25.55664 z"
|
||||
id="path84284"
|
||||
inkscape:connector-curvature="0" />
|
||||
<path
|
||||
style="fill:url(#linearGradient84363);fill-opacity:1;stroke:none;stroke-width:0.79375005;stroke-linecap:round;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:5.99999952"
|
||||
d="m 189.98438,240.4935 12.42187,7.16992 6.56641,-11.375 -12.42188,-7.16992 z"
|
||||
id="rect84351"
|
||||
inkscape:connector-curvature="0" />
|
||||
<path
|
||||
style="fill:url(#linearGradient84377);fill-opacity:1;stroke:none;stroke-width:0.79375005;stroke-linecap:round;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:5.99999952"
|
||||
d="m 173.69727,227.99936 12.65234,7.30273 3.88867,-6.73633 -12.65234,-7.30273 z"
|
||||
id="rect84365"
|
||||
inkscape:connector-curvature="0" />
|
||||
<path
|
||||
sodipodi:nodetypes="ccccc"
|
||||
inkscape:connector-curvature="0"
|
||||
id="path84138"
|
||||
d="m 192.47621,218.8758 -11.1013,8.29627 c 0,0 6.16202,4.57403 15.2798,4.67656 9.1178,0.1025 11.46925,-3.93799 11.46925,-3.93799 z"
|
||||
style="fill:#ffffff;fill-rule:evenodd;stroke:#000000;stroke-width:0.79374999;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1" />
|
||||
<ellipse
|
||||
cy="223.01579"
|
||||
cx="207.08998"
|
||||
id="circle84140"
|
||||
style="fill:#ffffff;stroke:#000000;stroke-width:0.79374999;stroke-linecap:round;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:5.99999952"
|
||||
rx="3.8395541"
|
||||
ry="3.8438656" />
|
||||
<path
|
||||
style="fill:url(#linearGradient84333);fill-opacity:1;stroke:none;stroke-width:0.79374999;stroke-linecap:round;stroke-linejoin:round;stroke-dashoffset:5.99999952"
|
||||
d="m 197.35938,212.35287 44.36523,25.64453 7.58984,-10.83203 -20.82617,-18.73242 -25.55078,-8.08399 z"
|
||||
id="path84272"
|
||||
inkscape:connector-curvature="0" />
|
||||
<path
|
||||
inkscape:connector-curvature="0"
|
||||
id="path84142"
|
||||
d="m 200.6837,212.37603 11.49279,-6.98413 -8.11935,-2.73742"
|
||||
style="fill:none;fill-rule:evenodd;stroke:#000000;stroke-width:0.5291667;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1" />
|
||||
<path
|
||||
inkscape:connector-curvature="0"
|
||||
id="path84144"
|
||||
d="m 241.31895,235.3047 -8.04514,-4.75769 10.057,-4.72299"
|
||||
style="fill:none;fill-rule:evenodd;stroke:#000000;stroke-width:0.5291667;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
|
||||
sodipodi:nodetypes="ccc" />
|
||||
<path
|
||||
sodipodi:nodetypes="ccc"
|
||||
style="fill:none;fill-rule:evenodd;stroke:#2a72ac;stroke-width:0.52899998;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
|
||||
d="m 241.06868,235.79543 -8.9307,-5.38071 10.81942,-5.07707"
|
||||
id="path84280"
|
||||
inkscape:connector-curvature="0" />
|
||||
<path
|
||||
style="fill:none;fill-rule:evenodd;stroke:#2a72ac;stroke-width:0.5291667;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
|
||||
d="m 200.60886,211.70589 10.37702,-6.1817 -7.12581,-2.30459"
|
||||
id="path84290"
|
||||
inkscape:connector-curvature="0"
|
||||
sodipodi:nodetypes="ccc" />
|
||||
<path
|
||||
style="fill:url(#radialGradient84471);fill-opacity:1;stroke:none;stroke-width:0.79374999;stroke-linecap:round;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:5.99999952"
|
||||
d="m 206.89258,220.23959 -0.29297,0.0352 -0.23633,0.0527 -0.26953,0.0898 -0.2793,0.125 -0.23437,0.13477 -0.20508,0.14648 -0.2207,0.19532 -0.18946,0.20117 -0.006,0.008 0.004,-0.008 -0.006,0.01 -0.008,0.01 -0.004,0.004 -0.006,0.006 -0.12109,0.1582 -0.002,0.004 -0.002,0.002 -0.16406,0.26758 -0.12109,0.24804 -0.0996,0.28125 -0.0645,0.24219 -0.0371,0.26367 -0.0176,0.31641 0.008,0.18164 0.0332,0.28711 0.0527,0.23437 0.004,0.0117 0.0937,0.28516 0.11133,0.24805 0.13086,0.23046 0.16992,0.23829 0.1836,0.20898 0.21093,0.19727 0.19532,0.14843 0.25586,0.15625 0.24218,0.11719 0.26172,0.0977 0.27344,0.0684 0.27344,0.043 0.29297,0.0137 0.18164,-0.008 0.29687,-0.0351 0.24024,-0.0547 0.27539,-0.0898 0.24218,-0.10938 0.25,-0.14453 0.23047,-0.16406 0.20899,-0.1836 0.20508,-0.21875 0.125,-0.16406 0.004,-0.006 0.1582,-0.25781 0.004,-0.008 0.12695,-0.26172 0.0996,-0.27344 0.002,-0.006 0.0586,-0.24023 0.0391,-0.26563 0.0176,-0.3125 -0.008,-0.17968 -0.0332,-0.28711 -0.0527,-0.23438 -0.004,-0.0117 -0.0937,-0.28515 -0.11132,-0.24805 -0.13086,-0.23047 -0.16993,-0.23828 -0.18554,-0.20899 -0.19922,-0.18945 -0.21875,-0.16406 -0.23828,-0.14844 -0.26563,-0.12695 -0.01,-0.004 -0.21875,-0.0801 -0.28516,-0.0723 -0.27344,-0.043 -0.29492,-0.0137 z"
|
||||
id="ellipse84292"
|
||||
inkscape:connector-curvature="0" />
|
||||
<path
|
||||
style="fill:url(#linearGradient84425);fill-opacity:1;fill-rule:evenodd;stroke:none;stroke-width:0.79374999;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
|
||||
d="m 183.23633,227.10092 c 5.59753,3.20336 12.36881,4.51528 18.71366,3.17108 1.59516,-0.38 3.17489,-0.99021 4.44874,-2.04739 -0.73893,-0.64617 -1.68301,-0.99544 -2.49844,-1.53493 -3.78032,-2.18293 -7.56064,-4.36587 -11.34096,-6.5488 -3.10767,2.32001 -6.21533,4.64003 -9.323,6.96004 z"
|
||||
id="path84298"
|
||||
inkscape:connector-curvature="0"
|
||||
sodipodi:nodetypes="cccccc" />
|
||||
<path
|
||||
style="fill:url(#linearGradient84479);fill-opacity:1;stroke:none;stroke-width:0.79375005;stroke-linecap:round;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:5.99999952"
|
||||
d="m 238.62695,269.97787 0.006,-0.002 0.39453,-0.27735 0.41797,-0.34179 0.002,-0.002 0.45703,-0.42382 0.47851,-0.49219 0.0156,-0.0176 0.47656,-0.53711 0.002,-0.002 0.0117,-0.0137 0.48438,-0.5918 0.0117,-0.0156 0.49023,-0.64257 0.01,-0.0137 0.49609,-0.69726 0.48047,-0.71875 0.01,-0.0137 0.46485,-0.74805 0.004,-0.008 0.002,-0.002 0.30468,-0.51562 0.008,-0.0117 0.4375,-0.78711 0.40625,-0.77734 0.008,-0.0137 0.37109,-0.77149 0.008,-0.0156 0.33789,-0.75977 0.006,-0.0156 0.30078,-0.73829 0.27148,-0.74609 0.21289,-0.66602 0.17969,-0.66796 v -0.002 l 0.12305,-0.58203 0.002,-0.0137 0.0723,-0.51562 0.0176,-0.31836 z"
|
||||
id="path84379"
|
||||
inkscape:connector-curvature="0" />
|
||||
<path
|
||||
style="fill:url(#linearGradient84408);fill-opacity:1;stroke:none;stroke-width:0.79374999;stroke-linecap:round;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:5.99999952"
|
||||
d="m 202.78906,251.42318 2.08399,1.20118 9.6289,-16.67969 -2.08203,-1.20117 z"
|
||||
id="rect84396"
|
||||
inkscape:connector-curvature="0" />
|
||||
<path
|
||||
style="fill:url(#linearGradient84441);fill-opacity:1;stroke:none;stroke-width:0.79374999;stroke-linecap:round;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:5.99999952"
|
||||
d="m 169.0918,226.26889 2.35937,1.36133 4.69336,-8.13086 -2.35937,-1.36133 z"
|
||||
id="rect84429"
|
||||
inkscape:connector-curvature="0" />
|
||||
<path
|
||||
style="fill:url(#linearGradient84455);fill-opacity:1;stroke:none;stroke-width:0.79374999;stroke-linecap:round;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:5.99999952"
|
||||
d="m 234.17188,269.53842 2.08203,1.20312 9.63086,-16.67773 -2.08399,-1.20313 z"
|
||||
id="rect84443"
|
||||
inkscape:connector-curvature="0" />
|
||||
<path
|
||||
style="fill:#ffffff;fill-rule:evenodd;stroke:#f8ead2;stroke-width:0.52916664;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
|
||||
d="m 215.55025,240.82707 22.49734,12.98884"
|
||||
id="path84521"
|
||||
inkscape:connector-curvature="0" />
|
||||
</g>
|
||||
</svg>
|
||||
|
After Width: | Height: | Size: 24 KiB |
@@ -1,28 +1,13 @@
#!/bin/bash

source "$(dirname "$BASH_SOURCE")/.validate"
errors=$(go vet $(go list -e ./... | grep -v "$SKOPEO_PKG"/vendor))

IFS=$'\n'
files=( $(validate_diff --diff-filter=ACMR --name-only -- '*.go' | grep -v '^vendor/' || true) )
unset IFS

errors=()
for f in "${files[@]}"; do
    failedVet=$(go vet "$f")
    if [ "$failedVet" ]; then
        errors+=( "$failedVet" )
    fi
done


if [ ${#errors[@]} -eq 0 ]; then
if [ -z "$errors" ]; then
    echo 'Congratulations! All Go source files have been vetted.'
else
    {
        echo "Errors from go vet:"
        for err in "${errors[@]}"; do
            echo " - $err"
        done
        echo "$errors"
        echo
        echo 'Please fix the above errors. You can test via "go vet" and commit the result.'
        echo

@@ -1,15 +0,0 @@
#!/bin/bash

# This file is just wrapper around vndr (github.com/LK4D4/vndr) tool.
# For updating dependencies you should change `vendor.conf` file in root of the
# project. Please refer to https://github.com/LK4D4/vndr/blob/master/README.md for
# vndr usage.

set -e

if ! hash vndr; then
    echo "Please install vndr with \"go get github.com/LK4D4/vndr\" and put it in your \$GOPATH"
    exit 1
fi

vndr "$@"
@@ -15,6 +15,7 @@ import (
    "github.com/containers/image/signature"
    "github.com/go-check/check"
    "github.com/opencontainers/go-digest"
    imgspecv1 "github.com/opencontainers/image-spec/specs-go/v1"
    "github.com/opencontainers/image-tools/image"
)

@@ -89,9 +90,11 @@ func (s *CopySuite) TearDownSuite(c *check.C) {
    }
}

func (s *CopySuite) TestCopyFailsWithManifestList(c *check.C) {
    c.ExpectFailure("manifest-list-hotfix sacrificed hotfixes for being able to copy images")
    assertSkopeoFails(c, ".*can not copy docker://estesp/busybox:latest: manifest contains multiple images.*", "copy", "docker://estesp/busybox:latest", "dir:somedir")
func (s *CopySuite) TestCopyWithManifestList(c *check.C) {
    dir, err := ioutil.TempDir("", "copy-manifest-list")
    c.Assert(err, check.IsNil)
    defer os.RemoveAll(dir)
    assertSkopeoSucceeds(c, "", "copy", "docker://estesp/busybox:latest", "dir:"+dir)
}

func (s *CopySuite) TestCopyFailsWhenImageOSDoesntMatchRuntimeOS(c *check.C) {
@@ -151,7 +154,10 @@ func (s *CopySuite) TestCopySimple(c *check.C) {
    // docker v2s2 -> OCI image layout without image name
    ociDest = "busybox-latest-noimage"
    defer os.RemoveAll(ociDest)
    assertSkopeoFails(c, ".*Error initializing destination oci:busybox-latest-noimage:: cannot save image with empty image.ref.name.*", "copy", "docker://busybox:latest", "oci:"+ociDest)
    assertSkopeoSucceeds(c, "", "copy", "docker://busybox:latest", "oci:"+ociDest)
    _, err = os.Stat(ociDest)
    c.Assert(err, check.IsNil)

}

// Check whether dir: images in dir1 and dir2 are equal, ignoring schema1 signatures.
@@ -376,7 +382,7 @@ func (s *CopySuite) TestCopyDirSignatures(c *check.C) {

// Compression during copy
func (s *CopySuite) TestCopyCompression(c *check.C) {
    const uncompresssedLayerFile = "160d823fdc48e62f97ba62df31e55424f8f5eb6b679c865eec6e59adfe304710.tar"
    const uncompresssedLayerFile = "160d823fdc48e62f97ba62df31e55424f8f5eb6b679c865eec6e59adfe304710"

    topDir, err := ioutil.TempDir("", "compression-top")
    c.Assert(err, check.IsNil)
@@ -408,9 +414,7 @@ func (s *CopySuite) TestCopyCompression(c *check.C) {
    fis, err := dirf.Readdir(-1)
    c.Assert(err, check.IsNil)
    for _, fi := range fis {
        if strings.HasSuffix(fi.Name(), ".tar") {
            c.Assert(fi.Size() < 2048, check.Equals, true)
        }
        c.Assert(fi.Size() < 2048, check.Equals, true)
    }
}
}
@@ -591,6 +595,32 @@ func (s *CopySuite) TestCopySchemaConversion(c *check.C) {
    s.testCopySchemaConversionRegistries(c, "docker://"+v2s1DockerRegistryURL+"/schema1", "docker://"+v2DockerRegistryURL+"/schema2")
}

func (s *CopySuite) TestCopyManifestConversion(c *check.C) {
    topDir, err := ioutil.TempDir("", "manifest-conversion")
    c.Assert(err, check.IsNil)
    defer os.RemoveAll(topDir)
    srcDir := filepath.Join(topDir, "source")
    destDir1 := filepath.Join(topDir, "dest1")
    destDir2 := filepath.Join(topDir, "dest2")

    // oci to v2s1 and vice-versa not supported yet
    // get v2s2 manifest type
    assertSkopeoSucceeds(c, "", "copy", "docker://busybox", "dir:"+srcDir)
    verifyManifestMIMEType(c, srcDir, manifest.DockerV2Schema2MediaType)
    // convert from v2s2 to oci
    assertSkopeoSucceeds(c, "", "copy", "--format=oci", "dir:"+srcDir, "dir:"+destDir1)
    verifyManifestMIMEType(c, destDir1, imgspecv1.MediaTypeImageManifest)
    // convert from oci to v2s2
    assertSkopeoSucceeds(c, "", "copy", "--format=v2s2", "dir:"+destDir1, "dir:"+destDir2)
    verifyManifestMIMEType(c, destDir2, manifest.DockerV2Schema2MediaType)
    // convert from v2s2 to v2s1
    assertSkopeoSucceeds(c, "", "copy", "--format=v2s1", "dir:"+srcDir, "dir:"+destDir1)
    verifyManifestMIMEType(c, destDir1, manifest.DockerV2Schema1SignedMediaType)
    // convert from v2s1 to v2s2
    assertSkopeoSucceeds(c, "", "copy", "--format=v2s2", "dir:"+destDir1, "dir:"+destDir2)
    verifyManifestMIMEType(c, destDir2, manifest.DockerV2Schema2MediaType)
}

func (s *CopySuite) testCopySchemaConversionRegistries(c *check.C, schema1Registry, schema2Registry string) {
    topDir, err := ioutil.TempDir("", "schema-conversion")
    c.Assert(err, check.IsNil)
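As a side note on the conversions exercised above: the --format values map one-to-one onto the manifest MIME-type constants the test already checks via verifyManifestMIMEType. A minimal sketch printing that mapping, using only identifiers that appear in the test (import paths taken from this repository's vendoring):

package main

import (
    "fmt"

    "github.com/containers/image/manifest"
    imgspecv1 "github.com/opencontainers/image-spec/specs-go/v1"
)

func main() {
    fmt.Println("v2s1:", manifest.DockerV2Schema1SignedMediaType) // --format=v2s1, signed Docker schema 1
    fmt.Println("v2s2:", manifest.DockerV2Schema2MediaType)       // --format=v2s2, Docker schema 2
    fmt.Println("oci:", imgspecv1.MediaTypeImageManifest)         // --format=oci, OCI image manifest
}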
12 vendor.conf
@@ -22,10 +22,20 @@ github.com/gogo/protobuf fcdc5011193ff531a548e9b0301828d5a5b97fd8
# end docker deps
golang.org/x/text master
github.com/docker/distribution master
# docker/distributions dependencies
github.com/docker/go-metrics 399ea8c73916000c64c2c76e8da00ca82f8387ab
github.com/prometheus/client_golang c332b6f63c0658a65eca15c0e5247ded801cf564
github.com/prometheus/client_model 99fa1f4be8e564e8a6b613da7fa6f46c9edafc6c
github.com/prometheus/common 89604d197083d4781071d3c65855d24ecfb0a563
github.com/prometheus/procfs cb4147076ac75738c9a7d279075a253c0cc5acbd
github.com/beorn7/perks 4c0e84591b9aa9e6dcfdf3e020114cd81f89d5f9
github.com/matttproud/golang_protobuf_extensions c12348ce28de40eed0136aa2b644d0ee0650e56c
github.com/golang/protobuf 8d92cf5fc15a4382f8964b08e1f42a75c0591aa3
# end of docker/distribution dependencies
github.com/docker/libtrust master
github.com/docker/docker-credential-helpers d68f9aeca33f5fd3f08eeae5e9d175edf4e731d1
github.com/opencontainers/runc master
github.com/opencontainers/image-spec v1.0.0
github.com/opencontainers/image-spec 149252121d044fddff670adcdc67f33148e16226
# -- start OCI image validation requirements.
github.com/opencontainers/runtime-spec v1.0.0
github.com/opencontainers/image-tools 6d941547fa1df31900990b3fb47ec2468c9c6469
20 vendor/github.com/beorn7/perks/LICENSE generated vendored Normal file
@@ -0,0 +1,20 @@
Copyright (C) 2013 Blake Mizerany

Permission is hereby granted, free of charge, to any person obtaining
a copy of this software and associated documentation files (the
"Software"), to deal in the Software without restriction, including
without limitation the rights to use, copy, modify, merge, publish,
distribute, sublicense, and/or sell copies of the Software, and to
permit persons to whom the Software is furnished to do so, subject to
the following conditions:

The above copyright notice and this permission notice shall be
included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
31 vendor/github.com/beorn7/perks/README.md generated vendored Normal file
@@ -0,0 +1,31 @@
# Perks for Go (golang.org)

Perks contains the Go package quantile that computes approximate quantiles over
an unbounded data stream within low memory and CPU bounds.

For more information and examples, see:
http://godoc.org/github.com/bmizerany/perks

A very special thank you and shout out to Graham Cormode (Rutgers University),
Flip Korn (AT&T Labs–Research), S. Muthukrishnan (Rutgers University), and
Divesh Srivastava (AT&T Labs–Research) for their research and publication of
[Effective Computation of Biased Quantiles over Data Streams](http://www.cs.rutgers.edu/~muthu/bquant.pdf)

Thank you, also:
* Armon Dadgar (@armon)
* Andrew Gerrand (@nf)
* Brad Fitzpatrick (@bradfitz)
* Keith Rarick (@kr)

FAQ:

Q: Why not move the quantile package into the project root?
A: I want to add more packages to perks later.

Copyright (C) 2013 Blake Mizerany

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
292 vendor/github.com/beorn7/perks/quantile/stream.go generated vendored Normal file
@@ -0,0 +1,292 @@
// Package quantile computes approximate quantiles over an unbounded data
// stream within low memory and CPU bounds.
//
// A small amount of accuracy is traded to achieve the above properties.
//
// Multiple streams can be merged before calling Query to generate a single set
// of results. This is meaningful when the streams represent the same type of
// data. See Merge and Samples.
//
// For more detailed information about the algorithm used, see:
//
// Effective Computation of Biased Quantiles over Data Streams
//
// http://www.cs.rutgers.edu/~muthu/bquant.pdf
package quantile

import (
    "math"
    "sort"
)

// Sample holds an observed value and meta information for compression. JSON
// tags have been added for convenience.
type Sample struct {
    Value float64 `json:",string"`
    Width float64 `json:",string"`
    Delta float64 `json:",string"`
}

// Samples represents a slice of samples. It implements sort.Interface.
type Samples []Sample

func (a Samples) Len() int           { return len(a) }
func (a Samples) Less(i, j int) bool { return a[i].Value < a[j].Value }
func (a Samples) Swap(i, j int)      { a[i], a[j] = a[j], a[i] }

type invariant func(s *stream, r float64) float64

// NewLowBiased returns an initialized Stream for low-biased quantiles
// (e.g. 0.01, 0.1, 0.5) where the needed quantiles are not known a priori, but
// error guarantees can still be given even for the lower ranks of the data
// distribution.
//
// The provided epsilon is a relative error, i.e. the true quantile of a value
// returned by a query is guaranteed to be within (1±Epsilon)*Quantile.
//
// See http://www.cs.rutgers.edu/~muthu/bquant.pdf for time, space, and error
// properties.
func NewLowBiased(epsilon float64) *Stream {
    ƒ := func(s *stream, r float64) float64 {
        return 2 * epsilon * r
    }
    return newStream(ƒ)
}

// NewHighBiased returns an initialized Stream for high-biased quantiles
// (e.g. 0.01, 0.1, 0.5) where the needed quantiles are not known a priori, but
// error guarantees can still be given even for the higher ranks of the data
// distribution.
//
// The provided epsilon is a relative error, i.e. the true quantile of a value
// returned by a query is guaranteed to be within 1-(1±Epsilon)*(1-Quantile).
//
// See http://www.cs.rutgers.edu/~muthu/bquant.pdf for time, space, and error
// properties.
func NewHighBiased(epsilon float64) *Stream {
    ƒ := func(s *stream, r float64) float64 {
        return 2 * epsilon * (s.n - r)
    }
    return newStream(ƒ)
}

// NewTargeted returns an initialized Stream concerned with a particular set of
// quantile values that are supplied a priori. Knowing these a priori reduces
// space and computation time. The targets map maps the desired quantiles to
// their absolute errors, i.e. the true quantile of a value returned by a query
// is guaranteed to be within (Quantile±Epsilon).
//
// See http://www.cs.rutgers.edu/~muthu/bquant.pdf for time, space, and error properties.
func NewTargeted(targets map[float64]float64) *Stream {
    ƒ := func(s *stream, r float64) float64 {
        var m = math.MaxFloat64
        var f float64
        for quantile, epsilon := range targets {
            if quantile*s.n <= r {
                f = (2 * epsilon * r) / quantile
            } else {
                f = (2 * epsilon * (s.n - r)) / (1 - quantile)
            }
            if f < m {
                m = f
            }
        }
        return m
    }
    return newStream(ƒ)
}

// Stream computes quantiles for a stream of float64s. It is not thread-safe by
// design. Take care when using across multiple goroutines.
type Stream struct {
    *stream
    b      Samples
    sorted bool
}

func newStream(ƒ invariant) *Stream {
    x := &stream{ƒ: ƒ}
    return &Stream{x, make(Samples, 0, 500), true}
}

// Insert inserts v into the stream.
func (s *Stream) Insert(v float64) {
    s.insert(Sample{Value: v, Width: 1})
}

func (s *Stream) insert(sample Sample) {
    s.b = append(s.b, sample)
    s.sorted = false
    if len(s.b) == cap(s.b) {
        s.flush()
    }
}

// Query returns the computed qth percentiles value. If s was created with
// NewTargeted, and q is not in the set of quantiles provided a priori, Query
// will return an unspecified result.
func (s *Stream) Query(q float64) float64 {
    if !s.flushed() {
        // Fast path when there hasn't been enough data for a flush;
        // this also yields better accuracy for small sets of data.
        l := len(s.b)
        if l == 0 {
            return 0
        }
        i := int(math.Ceil(float64(l) * q))
        if i > 0 {
            i -= 1
        }
        s.maybeSort()
        return s.b[i].Value
    }
    s.flush()
    return s.stream.query(q)
}

// Merge merges samples into the underlying streams samples. This is handy when
// merging multiple streams from separate threads, database shards, etc.
//
// ATTENTION: This method is broken and does not yield correct results. The
// underlying algorithm is not capable of merging streams correctly.
func (s *Stream) Merge(samples Samples) {
    sort.Sort(samples)
    s.stream.merge(samples)
}

// Reset reinitializes and clears the list reusing the samples buffer memory.
func (s *Stream) Reset() {
    s.stream.reset()
    s.b = s.b[:0]
}

// Samples returns stream samples held by s.
func (s *Stream) Samples() Samples {
    if !s.flushed() {
        return s.b
    }
    s.flush()
    return s.stream.samples()
}

// Count returns the total number of samples observed in the stream
// since initialization.
func (s *Stream) Count() int {
    return len(s.b) + s.stream.count()
}

func (s *Stream) flush() {
    s.maybeSort()
    s.stream.merge(s.b)
    s.b = s.b[:0]
}

func (s *Stream) maybeSort() {
    if !s.sorted {
        s.sorted = true
        sort.Sort(s.b)
    }
}

func (s *Stream) flushed() bool {
    return len(s.stream.l) > 0
}

type stream struct {
    n float64
    l []Sample
    ƒ invariant
}

func (s *stream) reset() {
    s.l = s.l[:0]
    s.n = 0
}

func (s *stream) insert(v float64) {
    s.merge(Samples{{v, 1, 0}})
}

func (s *stream) merge(samples Samples) {
    // TODO(beorn7): This tries to merge not only individual samples, but
    // whole summaries. The paper doesn't mention merging summaries at
    // all. Unittests show that the merging is inaccurate. Find out how to
    // do merges properly.
    var r float64
    i := 0
    for _, sample := range samples {
        for ; i < len(s.l); i++ {
            c := s.l[i]
            if c.Value > sample.Value {
                // Insert at position i.
                s.l = append(s.l, Sample{})
                copy(s.l[i+1:], s.l[i:])
                s.l[i] = Sample{
                    sample.Value,
                    sample.Width,
                    math.Max(sample.Delta, math.Floor(s.ƒ(s, r))-1),
                    // TODO(beorn7): How to calculate delta correctly?
                }
                i++
                goto inserted
            }
            r += c.Width
        }
        s.l = append(s.l, Sample{sample.Value, sample.Width, 0})
        i++
    inserted:
        s.n += sample.Width
        r += sample.Width
    }
    s.compress()
}

func (s *stream) count() int {
    return int(s.n)
}

func (s *stream) query(q float64) float64 {
    t := math.Ceil(q * s.n)
    t += math.Ceil(s.ƒ(s, t) / 2)
    p := s.l[0]
    var r float64
    for _, c := range s.l[1:] {
        r += p.Width
        if r+c.Width+c.Delta > t {
            return p.Value
        }
        p = c
    }
    return p.Value
}

func (s *stream) compress() {
    if len(s.l) < 2 {
        return
    }
    x := s.l[len(s.l)-1]
    xi := len(s.l) - 1
    r := s.n - 1 - x.Width

    for i := len(s.l) - 2; i >= 0; i-- {
        c := s.l[i]
        if c.Width+x.Width+x.Delta <= s.ƒ(s, r) {
            x.Width += c.Width
            s.l[xi] = x
            // Remove element at i.
            copy(s.l[i:], s.l[i+1:])
            s.l = s.l[:len(s.l)-1]
            xi -= 1
        } else {
            x = c
            xi = i
        }
        r -= c.Width
    }
}

func (s *stream) samples() Samples {
    samples := make(Samples, len(s.l))
    copy(samples, s.l)
    return samples
}
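For orientation, here is a minimal, hypothetical usage sketch of the quantile package vendored above, relying only on the exported API visible in the file (NewTargeted, Insert, Query, Count); the target quantiles and error bounds are arbitrary example values:

package main

import (
    "fmt"

    "github.com/beorn7/perks/quantile"
)

func main() {
    // Track the median and the 99th percentile, each with an absolute error bound.
    q := quantile.NewTargeted(map[float64]float64{
        0.50: 0.005,
        0.99: 0.001,
    })
    for i := 1; i <= 10000; i++ {
        q.Insert(float64(i)) // observe one value
    }
    fmt.Println("count:", q.Count())   // total observations
    fmt.Println("p50:", q.Query(0.50)) // approximate median
    fmt.Println("p99:", q.Query(0.99)) // approximate 99th percentile
}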
348 vendor/github.com/containers/image/copy/copy.go generated vendored
@@ -12,8 +12,6 @@ import (
    "strings"
    "time"

    pb "gopkg.in/cheggaaa/pb.v1"

    "github.com/containers/image/image"
    "github.com/containers/image/pkg/compression"
    "github.com/containers/image/signature"
@@ -22,6 +20,7 @@ import (
    "github.com/opencontainers/go-digest"
    "github.com/pkg/errors"
    "github.com/sirupsen/logrus"
    pb "gopkg.in/cheggaaa/pb.v1"
)

type digestingReader struct {
@@ -31,23 +30,6 @@ type digestingReader struct {
    validationFailed bool
}

// imageCopier allows us to keep track of diffID values for blobs, and other
// data, that we're copying between images, and cache other information that
// might allow us to take some shortcuts
type imageCopier struct {
    copiedBlobs       map[digest.Digest]digest.Digest
    cachedDiffIDs     map[digest.Digest]digest.Digest
    manifestUpdates   *types.ManifestUpdateOptions
    dest              types.ImageDestination
    src               types.Image
    rawSource         types.ImageSource
    diffIDsAreNeeded  bool
    canModifyManifest bool
    reportWriter      io.Writer
    progressInterval  time.Duration
    progress          chan types.ProgressProperties
}

// newDigestingReader returns an io.Reader implementation with contents of source, which will eventually return a non-EOF error
// and set validationFailed to true if the source stream does not match expectedDigest.
func newDigestingReader(source io.Reader, expectedDigest digest.Digest) (*digestingReader, error) {
@@ -86,6 +68,27 @@ func (d *digestingReader) Read(p []byte) (int, error) {
    return n, err
}

// copier allows us to keep track of diffID values for blobs, and other
// data shared across one or more images in a possible manifest list.
type copier struct {
    copiedBlobs      map[digest.Digest]digest.Digest
    cachedDiffIDs    map[digest.Digest]digest.Digest
    dest             types.ImageDestination
    rawSource        types.ImageSource
    reportWriter     io.Writer
    progressInterval time.Duration
    progress         chan types.ProgressProperties
}

// imageCopier tracks state specific to a single image (possibly an item of a manifest list)
type imageCopier struct {
    c                 *copier
    manifestUpdates   *types.ManifestUpdateOptions
    src               types.Image
    diffIDsAreNeeded  bool
    canModifyManifest bool
}

// Options allows supplying non-default configuration modifying the behavior of CopyImage.
type Options struct {
    RemoveSignatures bool // Remove any pre-existing signatures. SignBy will still add a new signature.
@@ -95,11 +98,13 @@ type Options struct {
    DestinationCtx   *types.SystemContext
    ProgressInterval time.Duration                 // time to wait between reports to signal the progress channel
    Progress         chan types.ProgressProperties // Reported to when ProgressInterval has arrived for a single artifact+offset.
    // manifest MIME type of image set by user. "" is default and means use the autodetection to the the manifest MIME type
    ForceManifestMIMEType string
}

// Image copies image from srcRef to destRef, using policyContext to validate
// source image admissibility.
func Image(policyContext *signature.PolicyContext, destRef, srcRef types.ImageReference, options *Options) (retErr error) {
func Image(ctx context.Context, policyContext *signature.PolicyContext, destRef, srcRef types.ImageReference, options *Options) (retErr error) {
    // NOTE this function uses an output parameter for the error return value.
    // Setting this and returning is the ideal way to return an error.
    //
@@ -115,11 +120,7 @@ func Image(policyContext *signature.PolicyContext, destRef, srcRef types.ImageRe
        reportWriter = options.ReportWriter
    }

    writeReport := func(f string, a ...interface{}) {
        fmt.Fprintf(reportWriter, f, a...)
    }

    dest, err := destRef.NewImageDestination(options.DestinationCtx)
    dest, err := destRef.NewImageDestination(ctx, options.DestinationCtx)
    if err != nil {
        return errors.Wrapf(err, "Error initializing destination %s", transports.ImageName(destRef))
    }
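The hunk above threads a context.Context through the public copy API. A hypothetical caller under the new signature might look like the sketch below; copy.Image and copy.Options are taken from this diff, while alltransports.ParseImageName, signature.NewPolicyContext and signature.NewPRInsecureAcceptAnything are assumed from the wider containers/image API, and the policy shown is deliberately permissive:

package main

import (
    "context"
    "os"

    "github.com/containers/image/copy"
    "github.com/containers/image/signature"
    "github.com/containers/image/transports/alltransports"
)

func main() {
    ctx := context.Background()
    // Assumption: an insecureAcceptAnything policy; a real caller would load the system policy.
    policy := &signature.Policy{Default: []signature.PolicyRequirement{signature.NewPRInsecureAcceptAnything()}}
    policyContext, err := signature.NewPolicyContext(policy)
    if err != nil {
        panic(err)
    }
    defer policyContext.Destroy()

    srcRef, err := alltransports.ParseImageName("docker://busybox:latest")
    if err != nil {
        panic(err)
    }
    destRef, err := alltransports.ParseImageName("dir:/tmp/busybox")
    if err != nil {
        panic(err)
    }
    // ctx is the new leading parameter introduced by this change.
    if err := copy.Image(ctx, policyContext, destRef, srcRef, &copy.Options{ReportWriter: os.Stdout}); err != nil {
        panic(err)
    }
}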
@@ -129,91 +130,129 @@ func Image(policyContext *signature.PolicyContext, destRef, srcRef types.ImageRe
        }
    }()

    rawSource, err := srcRef.NewImageSource(options.SourceCtx)
    rawSource, err := srcRef.NewImageSource(ctx, options.SourceCtx)
    if err != nil {
        return errors.Wrapf(err, "Error initializing source %s", transports.ImageName(srcRef))
    }
    unparsedImage := image.UnparsedFromSource(rawSource)
    defer func() {
        if unparsedImage != nil {
            if err := unparsedImage.Close(); err != nil {
                retErr = errors.Wrapf(retErr, " (unparsed: %v)", err)
            }
        if err := rawSource.Close(); err != nil {
            retErr = errors.Wrapf(retErr, " (src: %v)", err)
        }
    }()

    c := &copier{
        copiedBlobs:      make(map[digest.Digest]digest.Digest),
        cachedDiffIDs:    make(map[digest.Digest]digest.Digest),
        dest:             dest,
        rawSource:        rawSource,
        reportWriter:     reportWriter,
        progressInterval: options.ProgressInterval,
        progress:         options.Progress,
    }

    unparsedToplevel := image.UnparsedInstance(rawSource, nil)
    multiImage, err := isMultiImage(ctx, unparsedToplevel)
    if err != nil {
        return errors.Wrapf(err, "Error determining manifest MIME type for %s", transports.ImageName(srcRef))
    }

    if !multiImage {
        // The simple case: Just copy a single image.
        if err := c.copyOneImage(ctx, policyContext, options, unparsedToplevel); err != nil {
            return err
        }
    } else {
        // This is a manifest list. Choose a single image and copy it.
        // FIXME: Copy to destinations which support manifest lists, one image at a time.
        instanceDigest, err := image.ChooseManifestInstanceFromManifestList(ctx, options.SourceCtx, unparsedToplevel)
        if err != nil {
            return errors.Wrapf(err, "Error choosing an image from manifest list %s", transports.ImageName(srcRef))
        }
        logrus.Debugf("Source is a manifest list; copying (only) instance %s", instanceDigest)
        unparsedInstance := image.UnparsedInstance(rawSource, &instanceDigest)

        if err := c.copyOneImage(ctx, policyContext, options, unparsedInstance); err != nil {
            return err
        }
    }

    if err := c.dest.Commit(ctx); err != nil {
        return errors.Wrap(err, "Error committing the finished image")
    }

    return nil
}

// Image copies a single (on-manifest-list) image unparsedImage, using policyContext to validate
// source image admissibility.
func (c *copier) copyOneImage(ctx context.Context, policyContext *signature.PolicyContext, options *Options, unparsedImage *image.UnparsedImage) (retErr error) {
    // The caller is handling manifest lists; this could happen only if a manifest list contains a manifest list.
    // Make sure we fail cleanly in such cases.
    multiImage, err := isMultiImage(ctx, unparsedImage)
    if err != nil {
        // FIXME FIXME: How to name a reference for the sub-image?
        return errors.Wrapf(err, "Error determining manifest MIME type for %s", transports.ImageName(unparsedImage.Reference()))
    }
    if multiImage {
        return fmt.Errorf("Unexpectedly received a manifest list instead of a manifest for a single image")
    }

    // Please keep this policy check BEFORE reading any other information about the image.
    if allowed, err := policyContext.IsRunningImageAllowed(unparsedImage); !allowed || err != nil { // Be paranoid and fail if either return value indicates so.
    // (the multiImage check above only matches the MIME type, which we have received anyway.
    // Actual parsing of anything should be deferred.)
    if allowed, err := policyContext.IsRunningImageAllowed(ctx, unparsedImage); !allowed || err != nil { // Be paranoid and fail if either return value indicates so.
        return errors.Wrap(err, "Source image rejected")
    }
    src, err := image.FromUnparsedImage(unparsedImage)
    src, err := image.FromUnparsedImage(ctx, options.SourceCtx, unparsedImage)
    if err != nil {
        return errors.Wrapf(err, "Error initializing image from source %s", transports.ImageName(srcRef))
        return errors.Wrapf(err, "Error initializing image from source %s", transports.ImageName(c.rawSource.Reference()))
    }
    unparsedImage = nil
    defer func() {
        if err := src.Close(); err != nil {
            retErr = errors.Wrapf(retErr, " (source: %v)", err)
        }
    }()

    if err := checkImageDestinationForCurrentRuntimeOS(src, dest); err != nil {
    if err := checkImageDestinationForCurrentRuntimeOS(ctx, options.DestinationCtx, src, c.dest); err != nil {
        return err
    }

    if src.IsMultiImage() {
        return errors.Errorf("can not copy %s: manifest contains multiple images", transports.ImageName(srcRef))
    }

    var sigs [][]byte
    if options.RemoveSignatures {
        sigs = [][]byte{}
    } else {
        writeReport("Getting image source signatures\n")
        s, err := src.Signatures(context.TODO())
        c.Printf("Getting image source signatures\n")
        s, err := src.Signatures(ctx)
        if err != nil {
            return errors.Wrap(err, "Error reading signatures")
        }
        sigs = s
    }
    if len(sigs) != 0 {
        writeReport("Checking if image destination supports signatures\n")
        if err := dest.SupportsSignatures(); err != nil {
        c.Printf("Checking if image destination supports signatures\n")
        if err := c.dest.SupportsSignatures(ctx); err != nil {
            return errors.Wrap(err, "Can not copy signatures")
        }
    }

    canModifyManifest := len(sigs) == 0
    manifestUpdates := types.ManifestUpdateOptions{}
    manifestUpdates.InformationOnly.Destination = dest
    ic := imageCopier{
        c:               c,
        manifestUpdates: &types.ManifestUpdateOptions{InformationOnly: types.ManifestUpdateInformation{Destination: c.dest}},
        src:             src,
        // diffIDsAreNeeded is computed later
        canModifyManifest: len(sigs) == 0,
    }

    if err := updateEmbeddedDockerReference(&manifestUpdates, dest, src, canModifyManifest); err != nil {
    if err := ic.updateEmbeddedDockerReference(); err != nil {
        return err
    }

    // We compute preferredManifestMIMEType only to show it in error messages.
    // Without having to add this context in an error message, we would be happy enough to know only that no conversion is needed.
    preferredManifestMIMEType, otherManifestMIMETypeCandidates, err := determineManifestConversion(&manifestUpdates, src, dest.SupportedManifestMIMETypes(), canModifyManifest)
    preferredManifestMIMEType, otherManifestMIMETypeCandidates, err := ic.determineManifestConversion(ctx, c.dest.SupportedManifestMIMETypes(), options.ForceManifestMIMEType)
    if err != nil {
        return err
    }

    // If src.UpdatedImageNeedsLayerDiffIDs(manifestUpdates) will be true, it needs to be true by the time we get here.
    ic := imageCopier{
        copiedBlobs:       make(map[digest.Digest]digest.Digest),
        cachedDiffIDs:     make(map[digest.Digest]digest.Digest),
        manifestUpdates:   &manifestUpdates,
        dest:              dest,
        src:               src,
        rawSource:         rawSource,
        diffIDsAreNeeded:  src.UpdatedImageNeedsLayerDiffIDs(manifestUpdates),
        canModifyManifest: canModifyManifest,
        reportWriter:      reportWriter,
        progressInterval:  options.ProgressInterval,
        progress:          options.Progress,
    }
    // If src.UpdatedImageNeedsLayerDiffIDs(ic.manifestUpdates) will be true, it needs to be true by the time we get here.
    ic.diffIDsAreNeeded = src.UpdatedImageNeedsLayerDiffIDs(*ic.manifestUpdates)

    if err := ic.copyLayers(); err != nil {
    if err := ic.copyLayers(ctx); err != nil {
        return err
    }

@@ -221,7 +260,7 @@ func Image(policyContext *signature.PolicyContext, destRef, srcRef types.ImageRe
    // and at least with the OpenShift registry "acceptschema2" option, there is no way to detect the support
    // without actually trying to upload something and getting a types.ManifestTypeRejectedError.
    // So, try the preferred manifest MIME type. If the process succeeds, fine…
    manifest, err := ic.copyUpdatedConfigAndManifest()
    manifest, err := ic.copyUpdatedConfigAndManifest(ctx)
    if err != nil {
        logrus.Debugf("Writing manifest using preferred type %s failed: %v", preferredManifestMIMEType, err)
        // … if it fails, _and_ the failure is because the manifest is rejected, we may have other options.
@@ -233,9 +272,9 @@ func Image(policyContext *signature.PolicyContext, destRef, srcRef types.ImageRe
        }
        // If the original MIME type is acceptable, determineManifestConversion always uses it as preferredManifestMIMEType.
        // So if we are here, we will definitely be trying to convert the manifest.
        // With !canModifyManifest, that would just be a string of repeated failures for the same reason,
        // With !ic.canModifyManifest, that would just be a string of repeated failures for the same reason,
        // so let’s bail out early and with a better error message.
        if !canModifyManifest {
        if !ic.canModifyManifest {
            return errors.Wrap(err, "Writing manifest failed (and converting it is not possible)")
        }

@@ -243,8 +282,8 @@ func Image(policyContext *signature.PolicyContext, destRef, srcRef types.ImageRe
        errs := []string{fmt.Sprintf("%s(%v)", preferredManifestMIMEType, err)}
        for _, manifestMIMEType := range otherManifestMIMETypeCandidates {
            logrus.Debugf("Trying to use manifest type %s…", manifestMIMEType)
            manifestUpdates.ManifestMIMEType = manifestMIMEType
            attemptedManifest, err := ic.copyUpdatedConfigAndManifest()
            ic.manifestUpdates.ManifestMIMEType = manifestMIMEType
            attemptedManifest, err := ic.copyUpdatedConfigAndManifest(ctx)
            if err != nil {
                logrus.Debugf("Upload of manifest type %s failed: %v", manifestMIMEType, err)
                errs = append(errs, fmt.Sprintf("%s(%v)", manifestMIMEType, err))
@@ -262,35 +301,44 @@ func Image(policyContext *signature.PolicyContext, destRef, srcRef types.ImageRe
    }

    if options.SignBy != "" {
        newSig, err := createSignature(dest, manifest, options.SignBy, reportWriter)
        newSig, err := c.createSignature(manifest, options.SignBy)
        if err != nil {
            return err
        }
        sigs = append(sigs, newSig)
    }

    writeReport("Storing signatures\n")
    if err := dest.PutSignatures(sigs); err != nil {
    c.Printf("Storing signatures\n")
    if err := c.dest.PutSignatures(ctx, sigs); err != nil {
        return errors.Wrap(err, "Error writing signatures")
    }

    if err := dest.Commit(); err != nil {
        return errors.Wrap(err, "Error committing the finished image")
    }

    return nil
}

func checkImageDestinationForCurrentRuntimeOS(src types.Image, dest types.ImageDestination) error {
// Printf writes a formatted string to c.reportWriter.
// Note that the method name Printf is not entirely arbitrary: (go tool vet)
// has a built-in list of functions/methods (whatever object they are for)
// which have their format strings checked; for other names we would have
// to pass a parameter to every (go tool vet) invocation.
func (c *copier) Printf(format string, a ...interface{}) {
    fmt.Fprintf(c.reportWriter, format, a...)
}

func checkImageDestinationForCurrentRuntimeOS(ctx context.Context, sys *types.SystemContext, src types.Image, dest types.ImageDestination) error {
    if dest.MustMatchRuntimeOS() {
        c, err := src.OCIConfig()
        wantedOS := runtime.GOOS
        if sys != nil && sys.OSChoice != "" {
            wantedOS = sys.OSChoice
        }
        c, err := src.OCIConfig(ctx)
        if err != nil {
            return errors.Wrapf(err, "Error parsing image configuration")
        }
        osErr := fmt.Errorf("image operating system %q cannot be used on %q", c.OS, runtime.GOOS)
        if runtime.GOOS == "windows" && c.OS == "linux" {
        osErr := fmt.Errorf("image operating system %q cannot be used on %q", c.OS, wantedOS)
        if wantedOS == "windows" && c.OS == "linux" {
            return osErr
        } else if runtime.GOOS != "windows" && c.OS == "windows" {
        } else if wantedOS != "windows" && c.OS == "windows" {
            return osErr
        }
    }
@@ -298,35 +346,47 @@ func checkImageDestinationForCurrentRuntimeOS(src types.Image, dest types.ImageD
}

// updateEmbeddedDockerReference handles the Docker reference embedded in Docker schema1 manifests.
func updateEmbeddedDockerReference(manifestUpdates *types.ManifestUpdateOptions, dest types.ImageDestination, src types.Image, canModifyManifest bool) error {
    destRef := dest.Reference().DockerReference()
func (ic *imageCopier) updateEmbeddedDockerReference() error {
    destRef := ic.c.dest.Reference().DockerReference()
    if destRef == nil {
        return nil // Destination does not care about Docker references
    }
    if !src.EmbeddedDockerReferenceConflicts(destRef) {
    if !ic.src.EmbeddedDockerReferenceConflicts(destRef) {
        return nil // No reference embedded in the manifest, or it matches destRef already.
    }

    if !canModifyManifest {
    if !ic.canModifyManifest {
        return errors.Errorf("Copying a schema1 image with an embedded Docker reference to %s (Docker reference %s) would invalidate existing signatures. Explicitly enable signature removal to proceed anyway",
            transports.ImageName(dest.Reference()), destRef.String())
            transports.ImageName(ic.c.dest.Reference()), destRef.String())
    }
    manifestUpdates.EmbeddedDockerReference = destRef
    ic.manifestUpdates.EmbeddedDockerReference = destRef
    return nil
}

// copyLayers copies layers from src/rawSource to dest, using and updating ic.manifestUpdates if necessary and ic.canModifyManifest.
func (ic *imageCopier) copyLayers() error {
// copyLayers copies layers from ic.src/ic.c.rawSource to dest, using and updating ic.manifestUpdates if necessary and ic.canModifyManifest.
func (ic *imageCopier) copyLayers(ctx context.Context) error {
    srcInfos := ic.src.LayerInfos()
    destInfos := []types.BlobInfo{}
    diffIDs := []digest.Digest{}
    updatedSrcInfos, err := ic.src.LayerInfosForCopy(ctx)
    if err != nil {
        return err
    }
    srcInfosUpdated := false
    if updatedSrcInfos != nil && !reflect.DeepEqual(srcInfos, updatedSrcInfos) {
        if !ic.canModifyManifest {
            return errors.Errorf("Internal error: copyLayers() needs to use an updated manifest but that was known to be forbidden")
        }
        srcInfos = updatedSrcInfos
        srcInfosUpdated = true
    }
    for _, srcLayer := range srcInfos {
        var (
            destInfo types.BlobInfo
            diffID   digest.Digest
            err      error
        )
        if ic.dest.AcceptsForeignLayerURLs() && len(srcLayer.URLs) != 0 {
        if ic.c.dest.AcceptsForeignLayerURLs() && len(srcLayer.URLs) != 0 {
            // DiffIDs are, currently, needed only when converting from schema1.
            // In which case src.LayerInfos will not have URLs because schema1
            // does not support them.
@@ -334,9 +394,9 @@ func (ic *imageCopier) copyLayers() error {
                return errors.New("getting DiffID for foreign layers is unimplemented")
            }
            destInfo = srcLayer
            fmt.Fprintf(ic.reportWriter, "Skipping foreign layer %q copy to %s\n", destInfo.Digest, ic.dest.Reference().Transport().Name())
            ic.c.Printf("Skipping foreign layer %q copy to %s\n", destInfo.Digest, ic.c.dest.Reference().Transport().Name())
        } else {
            destInfo, diffID, err = ic.copyLayer(srcLayer)
            destInfo, diffID, err = ic.copyLayer(ctx, srcLayer)
            if err != nil {
                return err
            }
@@ -348,7 +408,7 @@ func (ic *imageCopier) copyLayers() error {
    if ic.diffIDsAreNeeded {
        ic.manifestUpdates.InformationOnly.LayerDiffIDs = diffIDs
    }
    if layerDigestsDiffer(srcInfos, destInfos) {
    if srcInfosUpdated || layerDigestsDiffer(srcInfos, destInfos) {
        ic.manifestUpdates.LayerInfos = destInfos
    }
    return nil
@@ -369,7 +429,7 @@ func layerDigestsDiffer(a, b []types.BlobInfo) bool {

// copyUpdatedConfigAndManifest updates the image per ic.manifestUpdates, if necessary,
// stores the resulting config and manifest to the destination, and returns the stored manifest.
func (ic *imageCopier) copyUpdatedConfigAndManifest() ([]byte, error) {
func (ic *imageCopier) copyUpdatedConfigAndManifest(ctx context.Context) ([]byte, error) {
    pendingImage := ic.src
    if !reflect.DeepEqual(*ic.manifestUpdates, types.ManifestUpdateOptions{InformationOnly: ic.manifestUpdates.InformationOnly}) {
        if !ic.canModifyManifest {
@@ -379,43 +439,43 @@ func (ic *imageCopier) copyUpdatedConfigAndManifest() ([]byte, error) {
            // We have set ic.diffIDsAreNeeded based on the preferred MIME type returned by determineManifestConversion.
            // So, this can only happen if we are trying to upload using one of the other MIME type candidates.
            // Because UpdatedImageNeedsLayerDiffIDs is true only when converting from s1 to s2, this case should only arise
            // when ic.dest.SupportedManifestMIMETypes() includes both s1 and s2, the upload using s1 failed, and we are now trying s2.
            // when ic.c.dest.SupportedManifestMIMETypes() includes both s1 and s2, the upload using s1 failed, and we are now trying s2.
            // Supposedly s2-only registries do not exist or are extremely rare, so failing with this error message is good enough for now.
            // If handling such registries turns out to be necessary, we could compute ic.diffIDsAreNeeded based on the full list of manifest MIME type candidates.
            return nil, errors.Errorf("Can not convert image to %s, preparing DiffIDs for this case is not supported", ic.manifestUpdates.ManifestMIMEType)
        }
        pi, err := ic.src.UpdatedImage(*ic.manifestUpdates)
        pi, err := ic.src.UpdatedImage(ctx, *ic.manifestUpdates)
        if err != nil {
            return nil, errors.Wrap(err, "Error creating an updated image manifest")
        }
        pendingImage = pi
    }
    manifest, _, err := pendingImage.Manifest()
    manifest, _, err := pendingImage.Manifest(ctx)
    if err != nil {
        return nil, errors.Wrap(err, "Error reading manifest")
    }

    if err := ic.copyConfig(pendingImage); err != nil {
    if err := ic.c.copyConfig(ctx, pendingImage); err != nil {
        return nil, err
    }

    fmt.Fprintf(ic.reportWriter, "Writing manifest to image destination\n")
    if err := ic.dest.PutManifest(manifest); err != nil {
    ic.c.Printf("Writing manifest to image destination\n")
    if err := ic.c.dest.PutManifest(ctx, manifest); err != nil {
        return nil, errors.Wrap(err, "Error writing manifest")
    }
    return manifest, nil
}

// copyConfig copies config.json, if any, from src to dest.
func (ic *imageCopier) copyConfig(src types.Image) error {
func (c *copier) copyConfig(ctx context.Context, src types.Image) error {
    srcInfo := src.ConfigInfo()
    if srcInfo.Digest != "" {
        fmt.Fprintf(ic.reportWriter, "Copying config %s\n", srcInfo.Digest)
        configBlob, err := src.ConfigBlob()
        c.Printf("Copying config %s\n", srcInfo.Digest)
        configBlob, err := src.ConfigBlob(ctx)
        if err != nil {
            return errors.Wrapf(err, "Error reading config blob %s", srcInfo.Digest)
        }
        destInfo, err := ic.copyBlobFromStream(bytes.NewReader(configBlob), srcInfo, nil, false)
        destInfo, err := c.copyBlobFromStream(ctx, bytes.NewReader(configBlob), srcInfo, nil, false, true)
        if err != nil {
            return err
        }
@@ -435,14 +495,14 @@ type diffIDResult struct {

// copyLayer copies a layer with srcInfo (with known Digest and possibly known Size) in src to dest, perhaps compressing it if canCompress,
// and returns a complete blobInfo of the copied layer, and a value for LayerDiffIDs if diffIDIsNeeded
func (ic *imageCopier) copyLayer(srcInfo types.BlobInfo) (types.BlobInfo, digest.Digest, error) {
func (ic *imageCopier) copyLayer(ctx context.Context, srcInfo types.BlobInfo) (types.BlobInfo, digest.Digest, error) {
    // Check if we already have a blob with this digest
    haveBlob, extantBlobSize, err := ic.dest.HasBlob(srcInfo)
    haveBlob, extantBlobSize, err := ic.c.dest.HasBlob(ctx, srcInfo)
    if err != nil {
        return types.BlobInfo{}, "", errors.Wrapf(err, "Error checking for blob %s at destination", srcInfo.Digest)
|
||||
}
|
||||
// If we already have a cached diffID for this blob, we don't need to compute it
|
||||
diffIDIsNeeded := ic.diffIDsAreNeeded && (ic.cachedDiffIDs[srcInfo.Digest] == "")
|
||||
diffIDIsNeeded := ic.diffIDsAreNeeded && (ic.c.cachedDiffIDs[srcInfo.Digest] == "")
|
||||
// If we already have the blob, and we don't need to recompute the diffID, then we might be able to avoid reading it again
|
||||
if haveBlob && !diffIDIsNeeded {
|
||||
// Check the blob sizes match, if we were given a size this time
|
||||
@@ -451,35 +511,39 @@ func (ic *imageCopier) copyLayer(srcInfo types.BlobInfo) (types.BlobInfo, digest
|
||||
}
|
||||
srcInfo.Size = extantBlobSize
|
||||
// Tell the image destination that this blob's delta is being applied again. For some image destinations, this can be faster than using GetBlob/PutBlob
|
||||
blobinfo, err := ic.dest.ReapplyBlob(srcInfo)
|
||||
blobinfo, err := ic.c.dest.ReapplyBlob(ctx, srcInfo)
|
||||
if err != nil {
|
||||
return types.BlobInfo{}, "", errors.Wrapf(err, "Error reapplying blob %s at destination", srcInfo.Digest)
|
||||
}
|
||||
fmt.Fprintf(ic.reportWriter, "Skipping fetch of repeat blob %s\n", srcInfo.Digest)
|
||||
return blobinfo, ic.cachedDiffIDs[srcInfo.Digest], err
|
||||
ic.c.Printf("Skipping fetch of repeat blob %s\n", srcInfo.Digest)
|
||||
return blobinfo, ic.c.cachedDiffIDs[srcInfo.Digest], err
|
||||
}
|
||||
|
||||
// Fallback: copy the layer, computing the diffID if we need to do so
|
||||
fmt.Fprintf(ic.reportWriter, "Copying blob %s\n", srcInfo.Digest)
|
||||
srcStream, srcBlobSize, err := ic.rawSource.GetBlob(srcInfo)
|
||||
ic.c.Printf("Copying blob %s\n", srcInfo.Digest)
|
||||
srcStream, srcBlobSize, err := ic.c.rawSource.GetBlob(ctx, srcInfo)
|
||||
if err != nil {
|
||||
return types.BlobInfo{}, "", errors.Wrapf(err, "Error reading blob %s", srcInfo.Digest)
|
||||
}
|
||||
defer srcStream.Close()
|
||||
|
||||
blobInfo, diffIDChan, err := ic.copyLayerFromStream(srcStream, types.BlobInfo{Digest: srcInfo.Digest, Size: srcBlobSize},
|
||||
blobInfo, diffIDChan, err := ic.copyLayerFromStream(ctx, srcStream, types.BlobInfo{Digest: srcInfo.Digest, Size: srcBlobSize},
|
||||
diffIDIsNeeded)
|
||||
if err != nil {
|
||||
return types.BlobInfo{}, "", err
|
||||
}
|
||||
var diffIDResult diffIDResult // = {digest:""}
|
||||
if diffIDIsNeeded {
|
||||
diffIDResult = <-diffIDChan
|
||||
if diffIDResult.err != nil {
|
||||
return types.BlobInfo{}, "", errors.Wrap(diffIDResult.err, "Error computing layer DiffID")
|
||||
select {
|
||||
case <-ctx.Done():
|
||||
return types.BlobInfo{}, "", ctx.Err()
|
||||
case diffIDResult = <-diffIDChan:
|
||||
if diffIDResult.err != nil {
|
||||
return types.BlobInfo{}, "", errors.Wrap(diffIDResult.err, "Error computing layer DiffID")
|
||||
}
|
||||
logrus.Debugf("Computed DiffID %s for layer %s", diffIDResult.digest, srcInfo.Digest)
|
||||
ic.c.cachedDiffIDs[srcInfo.Digest] = diffIDResult.digest
|
||||
}
|
||||
logrus.Debugf("Computed DiffID %s for layer %s", diffIDResult.digest, srcInfo.Digest)
|
||||
ic.cachedDiffIDs[srcInfo.Digest] = diffIDResult.digest
|
||||
}
|
||||
return blobInfo, diffIDResult.digest, nil
|
||||
}
|
||||
@@ -488,7 +552,7 @@ func (ic *imageCopier) copyLayer(srcInfo types.BlobInfo) (types.BlobInfo, digest
|
||||
// it copies a blob with srcInfo (with known Digest and possibly known Size) from srcStream to dest,
|
||||
// perhaps compressing the stream if canCompress,
|
||||
// and returns a complete blobInfo of the copied blob and perhaps a <-chan diffIDResult if diffIDIsNeeded, to be read by the caller.
|
||||
func (ic *imageCopier) copyLayerFromStream(srcStream io.Reader, srcInfo types.BlobInfo,
|
||||
func (ic *imageCopier) copyLayerFromStream(ctx context.Context, srcStream io.Reader, srcInfo types.BlobInfo,
|
||||
diffIDIsNeeded bool) (types.BlobInfo, <-chan diffIDResult, error) {
|
||||
var getDiffIDRecorder func(compression.DecompressorFunc) io.Writer // = nil
|
||||
var diffIDChan chan diffIDResult
|
||||
@@ -513,7 +577,7 @@ func (ic *imageCopier) copyLayerFromStream(srcStream io.Reader, srcInfo types.Bl
|
||||
return pipeWriter
|
||||
}
|
||||
}
|
||||
blobInfo, err := ic.copyBlobFromStream(srcStream, srcInfo, getDiffIDRecorder, ic.canModifyManifest) // Sets err to nil on success
|
||||
blobInfo, err := ic.c.copyBlobFromStream(ctx, srcStream, srcInfo, getDiffIDRecorder, ic.canModifyManifest, false) // Sets err to nil on success
|
||||
return blobInfo, diffIDChan, err
|
||||
// We need the defer … pipeWriter.CloseWithError() to happen HERE so that the caller can block on reading from diffIDChan
|
||||
}
|
||||
@@ -547,9 +611,9 @@ func computeDiffID(stream io.Reader, decompressor compression.DecompressorFunc)
|
||||
// perhaps sending a copy to an io.Writer if getOriginalLayerCopyWriter != nil,
|
||||
// perhaps compressing it if canCompress,
|
||||
// and returns a complete blobInfo of the copied blob.
|
||||
func (ic *imageCopier) copyBlobFromStream(srcStream io.Reader, srcInfo types.BlobInfo,
|
||||
func (c *copier) copyBlobFromStream(ctx context.Context, srcStream io.Reader, srcInfo types.BlobInfo,
|
||||
getOriginalLayerCopyWriter func(decompressor compression.DecompressorFunc) io.Writer,
|
||||
canCompress bool) (types.BlobInfo, error) {
|
||||
canModifyBlob bool, isConfig bool) (types.BlobInfo, error) {
|
||||
// The copying happens through a pipeline of connected io.Readers.
|
||||
// === Input: srcStream
|
||||
|
||||
@@ -575,7 +639,7 @@ func (ic *imageCopier) copyBlobFromStream(srcStream io.Reader, srcInfo types.Blo
|
||||
|
||||
// === Report progress using a pb.Reader.
|
||||
bar := pb.New(int(srcInfo.Size)).SetUnits(pb.U_BYTES)
|
||||
bar.Output = ic.reportWriter
|
||||
bar.Output = c.reportWriter
|
||||
bar.SetMaxWidth(80)
|
||||
bar.ShowTimeLeft = false
|
||||
bar.ShowPercent = false
|
||||
@@ -590,12 +654,9 @@ func (ic *imageCopier) copyBlobFromStream(srcStream io.Reader, srcInfo types.Blo
|
||||
originalLayerReader = destStream
|
||||
}
|
||||
|
||||
// === Compress the layer if it is uncompressed and compression is desired
|
||||
// === Deal with layer compression/decompression if necessary
|
||||
var inputInfo types.BlobInfo
|
||||
if !canCompress || isCompressed || !ic.dest.ShouldCompressLayers() {
|
||||
logrus.Debugf("Using original blob without modification")
|
||||
inputInfo = srcInfo
|
||||
} else {
|
||||
if canModifyBlob && c.dest.DesiredLayerCompression() == types.Compress && !isCompressed {
|
||||
logrus.Debugf("Compressing blob on the fly")
|
||||
pipeReader, pipeWriter := io.Pipe()
|
||||
defer pipeReader.Close()
|
||||
@@ -607,21 +668,32 @@ func (ic *imageCopier) copyBlobFromStream(srcStream io.Reader, srcInfo types.Blo
|
||||
destStream = pipeReader
|
||||
inputInfo.Digest = ""
|
||||
inputInfo.Size = -1
|
||||
} else if canModifyBlob && c.dest.DesiredLayerCompression() == types.Decompress && isCompressed {
|
||||
logrus.Debugf("Blob will be decompressed")
|
||||
destStream, err = decompressor(destStream)
|
||||
if err != nil {
|
||||
return types.BlobInfo{}, err
|
||||
}
|
||||
inputInfo.Digest = ""
|
||||
inputInfo.Size = -1
|
||||
} else {
|
||||
logrus.Debugf("Using original blob without modification")
|
||||
inputInfo = srcInfo
|
||||
}
|
||||
|
||||
// === Report progress using the ic.progress channel, if required.
|
||||
if ic.progress != nil && ic.progressInterval > 0 {
|
||||
// === Report progress using the c.progress channel, if required.
|
||||
if c.progress != nil && c.progressInterval > 0 {
|
||||
destStream = &progressReader{
|
||||
source: destStream,
|
||||
channel: ic.progress,
|
||||
interval: ic.progressInterval,
|
||||
channel: c.progress,
|
||||
interval: c.progressInterval,
|
||||
artifact: srcInfo,
|
||||
lastTime: time.Now(),
|
||||
}
|
||||
}
|
||||
|
||||
// === Finally, send the layer stream to dest.
|
||||
uploadedInfo, err := ic.dest.PutBlob(destStream, inputInfo)
|
||||
uploadedInfo, err := c.dest.PutBlob(ctx, destStream, inputInfo, isConfig)
|
||||
if err != nil {
|
||||
return types.BlobInfo{}, errors.Wrap(err, "Error writing blob")
|
||||
}
|
||||
|
||||
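Editor's note: the compression branch above replaces the old boolean ShouldCompressLayers() with the three-valued types.LayerCompression, and only rewrites a blob when canModifyBlob allows it. A minimal, self-contained sketch of that decision table (all names here are local stand-ins, not the vendored API):

package main

import "fmt"

// LayerCompression mirrors the three-valued types.LayerCompression knob.
type LayerCompression int

const (
	PreserveOriginal LayerCompression = iota
	Compress
	Decompress
)

// plan reproduces the decision chain above: a blob is only rewritten when the
// caller may edit the manifest (canModifyBlob) and the destination's wish
// disagrees with the blob's current state; otherwise it passes through as-is.
func plan(canModifyBlob bool, desired LayerCompression, isCompressed bool) string {
	switch {
	case canModifyBlob && desired == Compress && !isCompressed:
		return "compress on the fly"
	case canModifyBlob && desired == Decompress && isCompressed:
		return "decompress on the fly"
	default:
		return "use original blob without modification"
	}
}

func main() {
	fmt.Println(plan(true, Compress, false))  // compress on the fly
	fmt.Println(plan(true, Decompress, true)) // decompress on the fly
	fmt.Println(plan(false, Compress, false)) // use original blob without modification
}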
35 vendor/github.com/containers/image/copy/manifest.go (generated, vendored)
@@ -1,6 +1,7 @@
 package copy

 import (
+	"context"
 	"strings"

 	"github.com/containers/image/manifest"
@@ -37,15 +38,24 @@ func (os *orderedSet) append(s string) {
 	}
 }

-// determineManifestConversion updates manifestUpdates to convert manifest to a supported MIME type, if necessary and canModifyManifest.
-// Note that the conversion will only happen later, through src.UpdatedImage
+// determineManifestConversion updates ic.manifestUpdates to convert manifest to a supported MIME type, if necessary and ic.canModifyManifest.
+// Note that the conversion will only happen later, through ic.src.UpdatedImage
 // Returns the preferred manifest MIME type (whether we are converting to it or using it unmodified),
 // and a list of other possible alternatives, in order.
-func determineManifestConversion(manifestUpdates *types.ManifestUpdateOptions, src types.Image, destSupportedManifestMIMETypes []string, canModifyManifest bool) (string, []string, error) {
-	_, srcType, err := src.Manifest()
+func (ic *imageCopier) determineManifestConversion(ctx context.Context, destSupportedManifestMIMETypes []string, forceManifestMIMEType string) (string, []string, error) {
+	_, srcType, err := ic.src.Manifest(ctx)
 	if err != nil { // This should have been cached?!
 		return "", nil, errors.Wrap(err, "Error reading manifest")
 	}
+	normalizedSrcType := manifest.NormalizedMIMEType(srcType)
+	if srcType != normalizedSrcType {
+		logrus.Debugf("Source manifest MIME type %s, treating it as %s", srcType, normalizedSrcType)
+		srcType = normalizedSrcType
+	}
+
+	if forceManifestMIMEType != "" {
+		destSupportedManifestMIMETypes = []string{forceManifestMIMEType}
+	}
+
 	if len(destSupportedManifestMIMETypes) == 0 {
 		return srcType, []string{}, nil // Anything goes; just use the original as is, do not try any conversions.
@@ -67,10 +77,10 @@ func determineManifestConversion(manifestUpdates *types.ManifestUpdateOptions, s
 	if _, ok := supportedByDest[srcType]; ok {
 		prioritizedTypes.append(srcType)
 	}
-	if !canModifyManifest {
-		// We could also drop the !canModifyManifest parameter and have the caller
+	if !ic.canModifyManifest {
+		// We could also drop the !ic.canModifyManifest check and have the caller
 		// make the choice; it is already doing that to an extent, to improve error
-		// messages. But it is nice to hide the “if !canModifyManifest, do no conversion”
+		// messages. But it is nice to hide the “if !ic.canModifyManifest, do no conversion”
 		// special case in here; the caller can then worry (or not) only about a good UI.
 		logrus.Debugf("We can't modify the manifest, hoping for the best...")
 		return srcType, []string{}, nil // Take our chances - FIXME? Or should we fail without trying?
@@ -94,9 +104,18 @@ func determineManifestConversion(manifestUpdates *types.ManifestUpdateOptions, s
 	}
 	preferredType := prioritizedTypes.list[0]
 	if preferredType != srcType {
-		manifestUpdates.ManifestMIMEType = preferredType
+		ic.manifestUpdates.ManifestMIMEType = preferredType
 	} else {
 		logrus.Debugf("... will first try using the original manifest unmodified")
 	}
 	return preferredType, prioritizedTypes.list[1:], nil
 }
+
+// isMultiImage returns true if img is a list of images
+func isMultiImage(ctx context.Context, img types.UnparsedImage) (bool, error) {
+	_, mt, err := img.Manifest(ctx)
+	if err != nil {
+		return false, err
+	}
+	return manifest.MIMETypeIsMultiImage(mt), nil
+}
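Editor's note: the conversion logic above prefers the source MIME type when the destination supports it, and otherwise falls back to the destination's own preference order. A rough stand-alone sketch of that selection, using plain strings instead of the vendored orderedSet:

package main

import "fmt"

// choosePreferred mirrors the selection above: keep the source MIME type if
// the destination supports it, otherwise take the destination's first choice;
// everything else is returned as ordered alternatives.
func choosePreferred(srcType string, destSupported []string) (string, []string) {
	if len(destSupported) == 0 {
		return srcType, nil // anything goes; use the original unmodified
	}
	prioritized := []string{}
	seen := map[string]bool{}
	appendUnique := func(t string) {
		if !seen[t] {
			seen[t] = true
			prioritized = append(prioritized, t)
		}
	}
	for _, t := range destSupported {
		if t == srcType {
			appendUnique(srcType) // prefer no conversion when possible
		}
	}
	for _, t := range destSupported {
		appendUnique(t)
	}
	return prioritized[0], prioritized[1:]
}

func main() {
	pref, alts := choosePreferred(
		"application/vnd.docker.distribution.manifest.v1+json",
		[]string{"application/vnd.docker.distribution.manifest.v2+json"},
	)
	fmt.Println(pref, alts)
}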
14 vendor/github.com/containers/image/copy/sign.go (generated, vendored)
@@ -1,17 +1,13 @@
 package copy

 import (
-	"fmt"
-	"io"
-
 	"github.com/containers/image/signature"
 	"github.com/containers/image/transports"
-	"github.com/containers/image/types"
 	"github.com/pkg/errors"
 )

-// createSignature creates a new signature of manifest at (identified by) dest using keyIdentity.
-func createSignature(dest types.ImageDestination, manifest []byte, keyIdentity string, reportWriter io.Writer) ([]byte, error) {
+// createSignature creates a new signature of manifest using keyIdentity.
+func (c *copier) createSignature(manifest []byte, keyIdentity string) ([]byte, error) {
 	mech, err := signature.NewGPGSigningMechanism()
 	if err != nil {
 		return nil, errors.Wrap(err, "Error initializing GPG")
@@ -21,12 +17,12 @@ func createSignature(dest types.ImageDestination, manifest []byte, keyIdentity s
 		return nil, errors.Wrap(err, "Signing not supported")
 	}

-	dockerReference := dest.Reference().DockerReference()
+	dockerReference := c.dest.Reference().DockerReference()
 	if dockerReference == nil {
-		return nil, errors.Errorf("Cannot determine canonical Docker reference for destination %s", transports.ImageName(dest.Reference()))
+		return nil, errors.Errorf("Cannot determine canonical Docker reference for destination %s", transports.ImageName(c.dest.Reference()))
 	}

-	fmt.Fprintf(reportWriter, "Signing manifest\n")
+	c.Printf("Signing manifest\n")
 	newSig, err := signature.SignDockerManifest(manifest, dockerReference.String(), mech, keyIdentity)
 	if err != nil {
 		return nil, errors.Wrap(err, "Error creating signature")
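Editor's note: createSignature is a thin wrapper over the signature package. A hypothetical caller, assuming a manifest file on disk and a GPG key fingerprint (both placeholders, not part of the vendored code), might look like:

package main

import (
	"fmt"
	"io/ioutil"

	"github.com/containers/image/signature"
)

func main() {
	// manifest.json and KEY-FINGERPRINT are placeholders for this sketch.
	manifest, err := ioutil.ReadFile("manifest.json")
	if err != nil {
		panic(err)
	}
	mech, err := signature.NewGPGSigningMechanism() // uses the default GPG home
	if err != nil {
		panic(err)
	}
	defer mech.Close()
	sig, err := signature.SignDockerManifest(manifest, "docker.io/library/busybox:latest", mech, "KEY-FINGERPRINT")
	if err != nil {
		panic(err)
	}
	fmt.Printf("signature: %d bytes\n", len(sig))
}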
126 vendor/github.com/containers/image/directory/directory_dest.go (generated, vendored)
@@ -1,22 +1,81 @@
 package directory

 import (
+	"context"
 	"io"
 	"io/ioutil"
 	"os"
 	"path/filepath"

 	"github.com/containers/image/types"
 	"github.com/opencontainers/go-digest"
 	"github.com/pkg/errors"
+	"github.com/sirupsen/logrus"
 )

+const version = "Directory Transport Version: 1.1\n"
+
+// ErrNotContainerImageDir indicates that the directory doesn't match the expected contents of a directory created
+// using the 'dir' transport
+var ErrNotContainerImageDir = errors.New("not a containers image directory, don't want to overwrite important data")
+
 type dirImageDestination struct {
-	ref dirReference
+	ref      dirReference
+	compress bool
 }

-// newImageDestination returns an ImageDestination for writing to an existing directory.
-func newImageDestination(ref dirReference) types.ImageDestination {
-	return &dirImageDestination{ref}
+// newImageDestination returns an ImageDestination for writing to a directory.
+func newImageDestination(ref dirReference, compress bool) (types.ImageDestination, error) {
+	d := &dirImageDestination{ref: ref, compress: compress}
+
+	// If directory exists check if it is empty
+	// if not empty, check whether the contents match that of a container image directory and overwrite the contents
+	// if the contents don't match throw an error
+	dirExists, err := pathExists(d.ref.resolvedPath)
+	if err != nil {
+		return nil, errors.Wrapf(err, "error checking for path %q", d.ref.resolvedPath)
+	}
+	if dirExists {
+		isEmpty, err := isDirEmpty(d.ref.resolvedPath)
+		if err != nil {
+			return nil, err
+		}
+
+		if !isEmpty {
+			versionExists, err := pathExists(d.ref.versionPath())
+			if err != nil {
+				return nil, errors.Wrapf(err, "error checking if path exists %q", d.ref.versionPath())
+			}
+			if versionExists {
+				contents, err := ioutil.ReadFile(d.ref.versionPath())
+				if err != nil {
+					return nil, err
+				}
+				// check if contents of version file is what we expect it to be
+				if string(contents) != version {
+					return nil, ErrNotContainerImageDir
+				}
+			} else {
+				return nil, ErrNotContainerImageDir
+			}
+			// delete directory contents so that only one image is in the directory at a time
+			if err = removeDirContents(d.ref.resolvedPath); err != nil {
+				return nil, errors.Wrapf(err, "error erasing contents in %q", d.ref.resolvedPath)
+			}
+			logrus.Debugf("overwriting existing container image directory %q", d.ref.resolvedPath)
+		}
+	} else {
+		// create directory if it doesn't exist
+		if err := os.MkdirAll(d.ref.resolvedPath, 0755); err != nil {
+			return nil, errors.Wrapf(err, "unable to create directory %q", d.ref.resolvedPath)
+		}
+	}
+	// create version file
+	err = ioutil.WriteFile(d.ref.versionPath(), []byte(version), 0644)
+	if err != nil {
+		return nil, errors.Wrapf(err, "error creating version file %q", d.ref.versionPath())
+	}
+	return d, nil
 }

 // Reference returns the reference used to set up this destination. Note that this should directly correspond to user's intent,
@@ -36,13 +95,15 @@ func (d *dirImageDestination) SupportedManifestMIMETypes() []string {

 // SupportsSignatures returns an error (to be displayed to the user) if the destination certainly can't store signatures.
 // Note: It is still possible for PutSignatures to fail if SupportsSignatures returns nil.
-func (d *dirImageDestination) SupportsSignatures() error {
+func (d *dirImageDestination) SupportsSignatures(ctx context.Context) error {
 	return nil
 }

-// ShouldCompressLayers returns true iff it is desirable to compress layer blobs written to this destination.
-func (d *dirImageDestination) ShouldCompressLayers() bool {
-	return false
+func (d *dirImageDestination) DesiredLayerCompression() types.LayerCompression {
+	if d.compress {
+		return types.Compress
+	}
+	return types.PreserveOriginal
 }

 // AcceptsForeignLayerURLs returns false iff foreign layers in manifest should be actually
@@ -62,7 +123,7 @@ func (d *dirImageDestination) MustMatchRuntimeOS() bool {
 // WARNING: The contents of stream are being verified on the fly. Until stream.Read() returns io.EOF, the contents of the data SHOULD NOT be available
 // to any other readers for download using the supplied digest.
 // If stream.Read() at any time, ESPECIALLY at end of input, returns an error, PutBlob MUST 1) fail, and 2) delete any data stored so far.
-func (d *dirImageDestination) PutBlob(stream io.Reader, inputInfo types.BlobInfo) (types.BlobInfo, error) {
+func (d *dirImageDestination) PutBlob(ctx context.Context, stream io.Reader, inputInfo types.BlobInfo, isConfig bool) (types.BlobInfo, error) {
 	blobFile, err := ioutil.TempFile(d.ref.path, "dir-put-blob")
 	if err != nil {
 		return types.BlobInfo{}, err
@@ -78,6 +139,7 @@ func (d *dirImageDestination) PutBlob(stream io.Reader, inputInfo types.BlobInfo
 	digester := digest.Canonical.Digester()
 	tee := io.TeeReader(stream, digester.Hash())

+	// TODO: This can take quite some time, and should ideally be cancellable using ctx.Done().
 	size, err := io.Copy(blobFile, tee)
 	if err != nil {
 		return types.BlobInfo{}, err
@@ -104,7 +166,7 @@ func (d *dirImageDestination) PutBlob(stream io.Reader, inputInfo types.BlobInfo
 // Unlike PutBlob, the digest can not be empty. If HasBlob returns true, the size of the blob must also be returned.
 // If the destination does not contain the blob, or it is unknown, HasBlob ordinarily returns (false, -1, nil);
 // it returns a non-nil error only on an unexpected failure.
-func (d *dirImageDestination) HasBlob(info types.BlobInfo) (bool, int64, error) {
+func (d *dirImageDestination) HasBlob(ctx context.Context, info types.BlobInfo) (bool, int64, error) {
 	if info.Digest == "" {
 		return false, -1, errors.Errorf(`"Can not check for a blob with unknown digest`)
 	}
@@ -119,7 +181,7 @@ func (d *dirImageDestination) HasBlob(info types.BlobInfo) (bool, int64, error)
 	return true, finfo.Size(), nil
 }

-func (d *dirImageDestination) ReapplyBlob(info types.BlobInfo) (types.BlobInfo, error) {
+func (d *dirImageDestination) ReapplyBlob(ctx context.Context, info types.BlobInfo) (types.BlobInfo, error) {
 	return info, nil
 }

@@ -127,11 +189,11 @@ func (d *dirImageDestination) ReapplyBlob(info types.BlobInfo) (types.BlobInfo,
 // FIXME? This should also receive a MIME type if known, to differentiate between schema versions.
 // If the destination is in principle available, refuses this manifest type (e.g. it does not recognize the schema),
 // but may accept a different manifest type, the returned error must be an ManifestTypeRejectedError.
-func (d *dirImageDestination) PutManifest(manifest []byte) error {
+func (d *dirImageDestination) PutManifest(ctx context.Context, manifest []byte) error {
 	return ioutil.WriteFile(d.ref.manifestPath(), manifest, 0644)
 }

-func (d *dirImageDestination) PutSignatures(signatures [][]byte) error {
+func (d *dirImageDestination) PutSignatures(ctx context.Context, signatures [][]byte) error {
 	for i, sig := range signatures {
 		if err := ioutil.WriteFile(d.ref.signaturePath(i), sig, 0644); err != nil {
 			return err
@@ -144,6 +206,42 @@ func (d *dirImageDestination) PutSignatures(signatures [][]byte) error {
 // WARNING: This does not have any transactional semantics:
 // - Uploaded data MAY be visible to others before Commit() is called
 // - Uploaded data MAY be removed or MAY remain around if Close() is called without Commit() (i.e. rollback is allowed but not guaranteed)
-func (d *dirImageDestination) Commit() error {
+func (d *dirImageDestination) Commit(ctx context.Context) error {
 	return nil
 }
+
+// returns true if path exists
+func pathExists(path string) (bool, error) {
+	_, err := os.Stat(path)
+	if err == nil {
+		return true, nil
+	}
+	if err != nil && os.IsNotExist(err) {
+		return false, nil
+	}
+	return false, err
+}
+
+// returns true if directory is empty
+func isDirEmpty(path string) (bool, error) {
+	files, err := ioutil.ReadDir(path)
+	if err != nil {
+		return false, err
+	}
+	return len(files) == 0, nil
+}
+
+// deletes the contents of a directory
+func removeDirContents(path string) error {
+	files, err := ioutil.ReadDir(path)
+	if err != nil {
+		return err
+	}
+
+	for _, file := range files {
+		if err := os.RemoveAll(filepath.Join(path, file.Name())); err != nil {
+			return err
+		}
+	}
+	return nil
+}
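Editor's note: the new destination constructor refuses to clobber a non-empty directory unless it carries the expected version marker, then wipes it so only one image lives there at a time. A compact sketch of that guard, assuming the same helper semantics as above (the path is a placeholder):

package main

import (
	"errors"
	"fmt"
	"io/ioutil"
	"os"
)

const versionMarker = "Directory Transport Version: 1.1\n"

var errNotImageDir = errors.New("not a containers image directory")

// prepareDir mirrors the guard sketched above: a fresh or empty directory is
// fine; a non-empty one is reused only if its "version" file matches.
func prepareDir(path string) error {
	entries, err := ioutil.ReadDir(path)
	if os.IsNotExist(err) {
		return os.MkdirAll(path, 0755)
	}
	if err != nil {
		return err
	}
	if len(entries) > 0 {
		contents, err := ioutil.ReadFile(path + "/version")
		if err != nil || string(contents) != versionMarker {
			return errNotImageDir // refuse to overwrite unrelated data
		}
		// a real implementation would wipe the old image contents here
	}
	return ioutil.WriteFile(path+"/version", []byte(versionMarker), 0644)
}

func main() {
	fmt.Println(prepareDir("/tmp/dir-transport-demo"))
}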
31 vendor/github.com/containers/image/directory/directory_src.go (generated, vendored)
@@ -35,7 +35,12 @@ func (s *dirImageSource) Close() error {

 // GetManifest returns the image's manifest along with its MIME type (which may be empty when it can't be determined but the manifest is available).
 // It may use a remote (= slow) service.
-func (s *dirImageSource) GetManifest() ([]byte, string, error) {
+// If instanceDigest is not nil, it contains a digest of the specific manifest instance to retrieve (when the primary manifest is a manifest list);
+// this never happens if the primary manifest is not a manifest list (e.g. if the source never returns manifest lists).
+func (s *dirImageSource) GetManifest(ctx context.Context, instanceDigest *digest.Digest) ([]byte, string, error) {
+	if instanceDigest != nil {
+		return nil, "", errors.Errorf(`Getting target manifest not supported by "dir:"`)
+	}
 	m, err := ioutil.ReadFile(s.ref.manifestPath())
 	if err != nil {
 		return nil, "", err
@@ -43,24 +48,27 @@ func (s *dirImageSource) GetManifest() ([]byte, string, error) {
 	return m, manifest.GuessMIMEType(m), err
 }

-func (s *dirImageSource) GetTargetManifest(digest digest.Digest) ([]byte, string, error) {
-	return nil, "", errors.Errorf(`Getting target manifest not supported by "dir:"`)
-}
-
 // GetBlob returns a stream for the specified blob, and the blob’s size (or -1 if unknown).
-func (s *dirImageSource) GetBlob(info types.BlobInfo) (io.ReadCloser, int64, error) {
+func (s *dirImageSource) GetBlob(ctx context.Context, info types.BlobInfo) (io.ReadCloser, int64, error) {
 	r, err := os.Open(s.ref.layerPath(info.Digest))
 	if err != nil {
-		return nil, 0, nil
+		return nil, -1, err
 	}
 	fi, err := r.Stat()
 	if err != nil {
-		return nil, 0, nil
+		return nil, -1, err
 	}
 	return r, fi.Size(), nil
 }

-func (s *dirImageSource) GetSignatures(ctx context.Context) ([][]byte, error) {
+// GetSignatures returns the image's signatures. It may use a remote (= slow) service.
+// If instanceDigest is not nil, it contains a digest of the specific manifest instance to retrieve signatures for
+// (when the primary manifest is a manifest list); this never happens if the primary manifest is not a manifest list
+// (e.g. if the source never returns manifest lists).
+func (s *dirImageSource) GetSignatures(ctx context.Context, instanceDigest *digest.Digest) ([][]byte, error) {
+	if instanceDigest != nil {
+		return nil, errors.Errorf(`Manifests lists are not supported by "dir:"`)
+	}
 	signatures := [][]byte{}
 	for i := 0; ; i++ {
 		signature, err := ioutil.ReadFile(s.ref.signaturePath(i))
@@ -74,3 +82,8 @@ func (s *dirImageSource) GetSignatures(ctx context.Context) ([][]byte, error) {
 	}
 	return signatures, nil
 }
+
+// LayerInfosForCopy() returns updated layer info that should be used when copying, in preference to values in the manifest, if specified.
+func (s *dirImageSource) LayerInfosForCopy(ctx context.Context) ([]types.BlobInfo, error) {
+	return nil, nil
+}
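Editor's note: GetSignatures enumerates signature-1, signature-2, … (signaturePath uses index+1) and stops at the first missing file. The same loop in isolation (the directory path is a placeholder):

package main

import (
	"fmt"
	"io/ioutil"
	"os"
)

// readSignatures mirrors the loop above: signatures are stored as
// signature-1, signature-2, … and enumeration ends at the first gap.
func readSignatures(dir string) ([][]byte, error) {
	signatures := [][]byte{}
	for i := 1; ; i++ {
		sig, err := ioutil.ReadFile(fmt.Sprintf("%s/signature-%d", dir, i))
		if os.IsNotExist(err) {
			break // no more signatures
		}
		if err != nil {
			return nil, err
		}
		signatures = append(signatures, sig)
	}
	return signatures, nil
}

func main() {
	sigs, err := readSignatures("/tmp/dir-transport-demo")
	fmt.Println(len(sigs), err)
}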
32 vendor/github.com/containers/image/directory/directory_transport.go (generated, vendored)
@@ -1,18 +1,18 @@
 package directory

 import (
+	"context"
 	"fmt"
 	"path/filepath"
 	"strings"

-	"github.com/pkg/errors"
-
 	"github.com/containers/image/directory/explicitfilepath"
 	"github.com/containers/image/docker/reference"
 	"github.com/containers/image/image"
 	"github.com/containers/image/transports"
 	"github.com/containers/image/types"
 	"github.com/opencontainers/go-digest"
+	"github.com/pkg/errors"
 )

 func init() {
@@ -134,29 +134,34 @@ func (ref dirReference) PolicyConfigurationNamespaces() []string {
 	return res
 }

-// NewImage returns a types.Image for this reference, possibly specialized for this ImageTransport.
-// The caller must call .Close() on the returned Image.
+// NewImage returns a types.ImageCloser for this reference, possibly specialized for this ImageTransport.
+// The caller must call .Close() on the returned ImageCloser.
 // NOTE: If any kind of signature verification should happen, build an UnparsedImage from the value returned by NewImageSource,
 // verify that UnparsedImage, and convert it into a real Image via image.FromUnparsedImage.
-func (ref dirReference) NewImage(ctx *types.SystemContext) (types.Image, error) {
+// WARNING: This may not do the right thing for a manifest list, see image.FromSource for details.
+func (ref dirReference) NewImage(ctx context.Context, sys *types.SystemContext) (types.ImageCloser, error) {
 	src := newImageSource(ref)
-	return image.FromSource(src)
+	return image.FromSource(ctx, sys, src)
 }

 // NewImageSource returns a types.ImageSource for this reference.
 // The caller must call .Close() on the returned ImageSource.
-func (ref dirReference) NewImageSource(ctx *types.SystemContext) (types.ImageSource, error) {
+func (ref dirReference) NewImageSource(ctx context.Context, sys *types.SystemContext) (types.ImageSource, error) {
 	return newImageSource(ref), nil
 }

 // NewImageDestination returns a types.ImageDestination for this reference.
 // The caller must call .Close() on the returned ImageDestination.
-func (ref dirReference) NewImageDestination(ctx *types.SystemContext) (types.ImageDestination, error) {
-	return newImageDestination(ref), nil
+func (ref dirReference) NewImageDestination(ctx context.Context, sys *types.SystemContext) (types.ImageDestination, error) {
+	compress := false
+	if sys != nil {
+		compress = sys.DirForceCompress
+	}
+	return newImageDestination(ref, compress)
 }

 // DeleteImage deletes the named image from the registry, if supported.
-func (ref dirReference) DeleteImage(ctx *types.SystemContext) error {
+func (ref dirReference) DeleteImage(ctx context.Context, sys *types.SystemContext) error {
 	return errors.Errorf("Deleting images not implemented for dir: images")
 }

@@ -168,10 +173,15 @@ func (ref dirReference) manifestPath() string {
 // layerPath returns a path for a layer tarball within a directory using our conventions.
 func (ref dirReference) layerPath(digest digest.Digest) string {
 	// FIXME: Should we keep the digest identification?
-	return filepath.Join(ref.path, digest.Hex()+".tar")
+	return filepath.Join(ref.path, digest.Hex())
 }

 // signaturePath returns a path for a signature within a directory using our conventions.
 func (ref dirReference) signaturePath(index int) string {
 	return filepath.Join(ref.path, fmt.Sprintf("signature-%d", index+1))
 }
+
+// versionPath returns a path for the version file within a directory using our conventions.
+func (ref dirReference) versionPath() string {
+	return filepath.Join(ref.path, "version")
+}
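Editor's note: the DirForceCompress flag introduced here flows from the SystemContext into newImageDestination. A hedged usage sketch, assuming the usual alltransports entry point at this vendored version (the path is a placeholder):

package main

import (
	"context"

	"github.com/containers/image/transports/alltransports"
	"github.com/containers/image/types"
)

func main() {
	// Ask the dir: transport to gzip layers it writes, via DirForceCompress.
	ref, err := alltransports.ParseImageName("dir:/tmp/dir-transport-demo")
	if err != nil {
		panic(err)
	}
	sys := &types.SystemContext{DirForceCompress: true}
	dest, err := ref.NewImageDestination(context.Background(), sys)
	if err != nil {
		panic(err)
	}
	defer dest.Close()
	_ = dest.DesiredLayerCompression() // types.Compress when DirForceCompress is set
}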
22 vendor/github.com/containers/image/docker/archive/dest.go (generated, vendored)
@@ -1,6 +1,7 @@
 package archive

 import (
+	"context"
 	"io"
 	"os"

@@ -15,11 +16,7 @@ type archiveImageDestination struct {
 	writer io.Closer
 }

-func newImageDestination(ctx *types.SystemContext, ref archiveReference) (types.ImageDestination, error) {
-	if ref.destinationRef == nil {
-		return nil, errors.Errorf("docker-archive: destination reference not supplied (must be of form <path>:<reference:tag>)")
-	}
-
+func newImageDestination(sys *types.SystemContext, ref archiveReference) (types.ImageDestination, error) {
 	// ref.path can be either a pipe or a regular file
 	// in the case of a pipe, we require that we can open it for write
 	// in the case of a regular file, we don't want to overwrite any pre-existing file
@@ -39,13 +36,22 @@ func newImageDestination(ctx *types.SystemContext, ref archiveReference) (types.
 		return nil, errors.New("docker-archive doesn't support modifying existing images")
 	}

+	tarDest := tarfile.NewDestination(fh, ref.destinationRef)
+	if sys != nil && sys.DockerArchiveAdditionalTags != nil {
+		tarDest.AddRepoTags(sys.DockerArchiveAdditionalTags)
+	}
 	return &archiveImageDestination{
-		Destination: tarfile.NewDestination(fh, ref.destinationRef),
+		Destination: tarDest,
 		ref:         ref,
 		writer:      fh,
 	}, nil
 }

+// DesiredLayerCompression indicates if layers must be compressed, decompressed or preserved
+func (d *archiveImageDestination) DesiredLayerCompression() types.LayerCompression {
+	return types.Decompress
+}
+
 // Reference returns the reference used to set up this destination. Note that this should directly correspond to user's intent,
 // e.g. it should use the public hostname instead of the result of resolving CNAMEs or following redirects.
 func (d *archiveImageDestination) Reference() types.ImageReference {
@@ -61,6 +67,6 @@ func (d *archiveImageDestination) Close() error {
 // WARNING: This does not have any transactional semantics:
 // - Uploaded data MAY be visible to others before Commit() is called
 // - Uploaded data MAY be removed or MAY remain around if Close() is called without Commit() (i.e. rollback is allowed but not guaranteed)
-func (d *archiveImageDestination) Commit() error {
-	return d.Destination.Commit()
+func (d *archiveImageDestination) Commit(ctx context.Context) error {
+	return d.Destination.Commit(ctx)
 }
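Editor's note: DockerArchiveAdditionalTags lets a caller attach extra repo tags to the written archive. One plausible way to populate it (the tag names below are placeholders, not from the vendored code):

package main

import (
	"github.com/containers/image/docker/reference"
	"github.com/containers/image/types"
)

// buildContext shows how extra repo tags could be fed to the docker-archive
// destination via the new DockerArchiveAdditionalTags field.
func buildContext() (*types.SystemContext, error) {
	named, err := reference.ParseNormalizedNamed("docker.io/library/busybox:extra")
	if err != nil {
		return nil, err
	}
	tagged, ok := named.(reference.NamedTagged)
	if !ok {
		panic("a name with an explicit tag parses as reference.NamedTagged")
	}
	return &types.SystemContext{
		DockerArchiveAdditionalTags: []reference.NamedTagged{tagged},
	}, nil
}

func main() {
	if _, err := buildContext(); err != nil {
		panic(err)
	}
}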
16 vendor/github.com/containers/image/docker/archive/src.go (generated, vendored)
@@ -1,6 +1,7 @@
 package archive

 import (
+	"context"
 	"github.com/containers/image/docker/tarfile"
 	"github.com/containers/image/types"
 	"github.com/sirupsen/logrus"
@@ -13,15 +14,18 @@ type archiveImageSource struct {

 // newImageSource returns a types.ImageSource for the specified image reference.
 // The caller must call .Close() on the returned ImageSource.
-func newImageSource(ctx *types.SystemContext, ref archiveReference) types.ImageSource {
+func newImageSource(ctx context.Context, ref archiveReference) (types.ImageSource, error) {
 	if ref.destinationRef != nil {
 		logrus.Warnf("docker-archive: references are not supported for sources (ignoring)")
 	}
-	src := tarfile.NewSource(ref.path)
+	src, err := tarfile.NewSourceFromFile(ref.path)
+	if err != nil {
+		return nil, err
+	}
 	return &archiveImageSource{
 		Source: src,
 		ref:    ref,
-	}
+	}, nil
 }

 // Reference returns the reference used to set up this source, _as specified by the user_
@@ -30,7 +34,7 @@ func (s *archiveImageSource) Reference() types.ImageReference {
 	return s.ref
 }

-// Close removes resources associated with an initialized ImageSource, if any.
-func (s *archiveImageSource) Close() error {
-	return nil
+// LayerInfosForCopy() returns updated layer info that should be used when reading, in preference to values in the manifest, if specified.
+func (s *archiveImageSource) LayerInfosForCopy(ctx context.Context) ([]types.BlobInfo, error) {
+	return nil, nil
 }
29 vendor/github.com/containers/image/docker/archive/transport.go (generated, vendored)
@@ -1,6 +1,7 @@
 package archive

 import (
+	"context"
 	"fmt"
 	"strings"

@@ -40,7 +41,9 @@ func (t archiveTransport) ValidatePolicyConfigurationScope(scope string) error {

 // archiveReference is an ImageReference for Docker images.
 type archiveReference struct {
-	destinationRef reference.NamedTagged // only used for destinations
+	// only used for destinations
+	// archiveReference.destinationRef is optional and can be nil for destinations as well.
+	destinationRef reference.NamedTagged
 	path           string
 }

@@ -125,29 +128,33 @@ func (ref archiveReference) PolicyConfigurationNamespaces() []string {
 	return []string{}
 }

-// NewImage returns a types.Image for this reference, possibly specialized for this ImageTransport.
-// The caller must call .Close() on the returned Image.
+// NewImage returns a types.ImageCloser for this reference, possibly specialized for this ImageTransport.
+// The caller must call .Close() on the returned ImageCloser.
 // NOTE: If any kind of signature verification should happen, build an UnparsedImage from the value returned by NewImageSource,
 // verify that UnparsedImage, and convert it into a real Image via image.FromUnparsedImage.
-func (ref archiveReference) NewImage(ctx *types.SystemContext) (types.Image, error) {
-	src := newImageSource(ctx, ref)
-	return ctrImage.FromSource(src)
+// WARNING: This may not do the right thing for a manifest list, see image.FromSource for details.
+func (ref archiveReference) NewImage(ctx context.Context, sys *types.SystemContext) (types.ImageCloser, error) {
+	src, err := newImageSource(ctx, ref)
+	if err != nil {
+		return nil, err
+	}
+	return ctrImage.FromSource(ctx, sys, src)
 }

 // NewImageSource returns a types.ImageSource for this reference.
 // The caller must call .Close() on the returned ImageSource.
-func (ref archiveReference) NewImageSource(ctx *types.SystemContext) (types.ImageSource, error) {
-	return newImageSource(ctx, ref), nil
+func (ref archiveReference) NewImageSource(ctx context.Context, sys *types.SystemContext) (types.ImageSource, error) {
+	return newImageSource(ctx, ref)
 }

 // NewImageDestination returns a types.ImageDestination for this reference.
 // The caller must call .Close() on the returned ImageDestination.
-func (ref archiveReference) NewImageDestination(ctx *types.SystemContext) (types.ImageDestination, error) {
-	return newImageDestination(ctx, ref)
+func (ref archiveReference) NewImageDestination(ctx context.Context, sys *types.SystemContext) (types.ImageDestination, error) {
+	return newImageDestination(sys, ref)
 }

 // DeleteImage deletes the named image from the registry, if supported.
-func (ref archiveReference) DeleteImage(ctx *types.SystemContext) error {
+func (ref archiveReference) DeleteImage(ctx context.Context, sys *types.SystemContext) error {
 	// Not really supported, for safety reasons.
 	return errors.New("Deleting images not implemented for docker-archive: images")
 }
42 vendor/github.com/containers/image/docker/daemon/client.go (generated, vendored)
@@ -15,10 +15,10 @@ const (
 )

 // NewDockerClient initializes a new API client based on the passed SystemContext.
-func newDockerClient(ctx *types.SystemContext) (*dockerclient.Client, error) {
+func newDockerClient(sys *types.SystemContext) (*dockerclient.Client, error) {
 	host := dockerclient.DefaultDockerHost
-	if ctx != nil && ctx.DockerDaemonHost != "" {
-		host = ctx.DockerDaemonHost
+	if sys != nil && sys.DockerDaemonHost != "" {
+		host = sys.DockerDaemonHost
 	}

 	// Sadly, unix:// sockets don't work transparently with dockerclient.NewClient.
@@ -27,32 +27,39 @@ func newDockerClient(ctx *types.SystemContext) (*dockerclient.Client, error) {
 	// regardless of the values in the *tls.Config), and we would have to call sockets.ConfigureTransport.
 	//
 	// We don't really want to configure anything for unix:// sockets, so just pass a nil *http.Client.
+	//
+	// Similarly, if we want to communicate over plain HTTP on a TCP socket, we also need to set
+	// TLSClientConfig to nil. This can be achieved by using the form `http://`
 	proto, _, _, err := dockerclient.ParseHost(host)
 	if err != nil {
 		return nil, err
 	}
 	var httpClient *http.Client
 	if proto != "unix" {
-		hc, err := tlsConfig(ctx)
-		if err != nil {
-			return nil, err
+		if proto == "http" {
+			httpClient = httpConfig()
+		} else {
+			hc, err := tlsConfig(sys)
+			if err != nil {
+				return nil, err
+			}
+			httpClient = hc
 		}
-		httpClient = hc
 	}

 	return dockerclient.NewClient(host, defaultAPIVersion, httpClient, nil)
 }

-func tlsConfig(ctx *types.SystemContext) (*http.Client, error) {
+func tlsConfig(sys *types.SystemContext) (*http.Client, error) {
 	options := tlsconfig.Options{}
-	if ctx != nil && ctx.DockerDaemonInsecureSkipTLSVerify {
+	if sys != nil && sys.DockerDaemonInsecureSkipTLSVerify {
 		options.InsecureSkipVerify = true
 	}

-	if ctx != nil && ctx.DockerDaemonCertPath != "" {
-		options.CAFile = filepath.Join(ctx.DockerDaemonCertPath, "ca.pem")
-		options.CertFile = filepath.Join(ctx.DockerDaemonCertPath, "cert.pem")
-		options.KeyFile = filepath.Join(ctx.DockerDaemonCertPath, "key.pem")
+	if sys != nil && sys.DockerDaemonCertPath != "" {
+		options.CAFile = filepath.Join(sys.DockerDaemonCertPath, "ca.pem")
+		options.CertFile = filepath.Join(sys.DockerDaemonCertPath, "cert.pem")
+		options.KeyFile = filepath.Join(sys.DockerDaemonCertPath, "key.pem")
 	}

 	tlsc, err := tlsconfig.Client(options)
@@ -67,3 +74,12 @@ func tlsConfig(ctx *types.SystemContext) (*http.Client, error) {
 		CheckRedirect: dockerclient.CheckRedirect,
 	}, nil
 }
+
+func httpConfig() *http.Client {
+	return &http.Client{
+		Transport: &http.Transport{
+			TLSClientConfig: nil,
+		},
+		CheckRedirect: dockerclient.CheckRedirect,
+	}
+}
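Editor's note: the client now distinguishes unix://, plain http:// and TLS hosts. Two illustrative SystemContext configurations (all hosts and paths below are placeholders):

package main

import (
	"fmt"

	"github.com/containers/image/types"
)

func main() {
	// Remote daemon over TLS: cert material is read from ca.pem, cert.pem and
	// key.pem under DockerDaemonCertPath, as in tlsConfig() above.
	tlsCtx := &types.SystemContext{
		DockerDaemonHost:     "tcp://docker-host.example.com:2376",
		DockerDaemonCertPath: "/etc/docker/certs",
	}

	// Remote daemon over plain HTTP: the http:// scheme routes through
	// httpConfig() above, which deliberately leaves TLSClientConfig nil.
	plainCtx := &types.SystemContext{
		DockerDaemonHost: "http://127.0.0.1:2375",
	}

	fmt.Println(tlsCtx.DockerDaemonHost, plainCtx.DockerDaemonHost)
}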
46 vendor/github.com/containers/image/docker/daemon/daemon_dest.go (generated, vendored)
@@ -1,6 +1,7 @@
 package daemon

 import (
+	"context"
 	"io"

 	"github.com/containers/image/docker/reference"
@@ -9,11 +10,11 @@ import (
 	"github.com/docker/docker/client"
 	"github.com/pkg/errors"
 	"github.com/sirupsen/logrus"
-	"golang.org/x/net/context"
 )

 type daemonImageDestination struct {
 	ref                daemonReference
+	mustMatchRuntimeOS bool
 	*tarfile.Destination // Implements most of types.ImageDestination
 	// For talking to imageLoadGoroutine
 	goroutineCancel context.CancelFunc
@@ -24,7 +25,7 @@ type daemonImageDestination struct {
 }

 // newImageDestination returns a types.ImageDestination for the specified image reference.
-func newImageDestination(ctx *types.SystemContext, ref daemonReference) (types.ImageDestination, error) {
+func newImageDestination(ctx context.Context, sys *types.SystemContext, ref daemonReference) (types.ImageDestination, error) {
 	if ref.ref == nil {
 		return nil, errors.Errorf("Invalid destination docker-daemon:%s: a destination must be a name:tag", ref.StringWithinTransport())
 	}
@@ -33,7 +34,12 @@ func newImageDestination(ctx *types.SystemContext, ref daemonReference) (types.I
 		return nil, errors.Errorf("Invalid destination docker-daemon:%s: a destination must be a name:tag", ref.StringWithinTransport())
 	}

-	c, err := newDockerClient(ctx)
+	var mustMatchRuntimeOS = true
+	if sys != nil && sys.DockerDaemonHost != client.DefaultDockerHost {
+		mustMatchRuntimeOS = false
+	}
+
+	c, err := newDockerClient(sys)
 	if err != nil {
 		return nil, errors.Wrap(err, "Error initializing docker engine client")
 	}
@@ -42,16 +48,17 @@ func newImageDestination(ctx *types.SystemContext, ref daemonReference) (types.I
 	// Commit() may never be called, so we may never read from this channel; so, make this buffered to allow imageLoadGoroutine to write status and terminate even if we never read it.
 	statusChannel := make(chan error, 1)

-	goroutineContext, goroutineCancel := context.WithCancel(context.Background())
+	goroutineContext, goroutineCancel := context.WithCancel(ctx)
 	go imageLoadGoroutine(goroutineContext, c, reader, statusChannel)

 	return &daemonImageDestination{
-		ref:             ref,
-		Destination:     tarfile.NewDestination(writer, namedTaggedRef),
-		goroutineCancel: goroutineCancel,
-		statusChannel:   statusChannel,
-		writer:          writer,
-		committed:       false,
+		ref:                ref,
+		mustMatchRuntimeOS: mustMatchRuntimeOS,
+		Destination:        tarfile.NewDestination(writer, namedTaggedRef),
+		goroutineCancel:    goroutineCancel,
+		statusChannel:      statusChannel,
+		writer:             writer,
+		committed:          false,
 	}, nil
 }

@@ -78,9 +85,14 @@ func imageLoadGoroutine(ctx context.Context, c *client.Client, reader *io.PipeRe
 	defer resp.Body.Close()
 }

+// DesiredLayerCompression indicates if layers must be compressed, decompressed or preserved
+func (d *daemonImageDestination) DesiredLayerCompression() types.LayerCompression {
+	return types.PreserveOriginal
+}
+
 // MustMatchRuntimeOS returns true iff the destination can store only images targeted for the current runtime OS. False otherwise.
 func (d *daemonImageDestination) MustMatchRuntimeOS() bool {
-	return true
+	return d.mustMatchRuntimeOS
 }

 // Close removes resources associated with an initialized ImageDestination, if any.
@@ -112,9 +124,9 @@ func (d *daemonImageDestination) Reference() types.ImageReference {
 // WARNING: This does not have any transactional semantics:
 // - Uploaded data MAY be visible to others before Commit() is called
 // - Uploaded data MAY be removed or MAY remain around if Close() is called without Commit() (i.e. rollback is allowed but not guaranteed)
-func (d *daemonImageDestination) Commit() error {
+func (d *daemonImageDestination) Commit(ctx context.Context) error {
 	logrus.Debugf("docker-daemon: Closing tar stream")
-	if err := d.Destination.Commit(); err != nil {
+	if err := d.Destination.Commit(ctx); err != nil {
 		return err
 	}
 	if err := d.writer.Close(); err != nil {
@@ -123,6 +135,10 @@ func (d *daemonImageDestination) Commit() error {
 	d.committed = true // We may still fail, but we are done sending to imageLoadGoroutine.

 	logrus.Debugf("docker-daemon: Waiting for status")
-	err := <-d.statusChannel
-	return err
+	select {
+	case <-ctx.Done():
+		return ctx.Err()
+	case err := <-d.statusChannel:
+		return err
+	}
 }
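Editor's note: Commit above waits on a single-use, buffered status channel and races it against ctx.Done(); the buffer lets imageLoadGoroutine finish and exit even if Commit is never called. The same pattern reduced to its essentials (worker and error text are stand-ins):

package main

import (
	"context"
	"errors"
	"fmt"
	"time"
)

// load stands in for imageLoadGoroutine: it reports exactly one status.
// The buffer of 1 lets it terminate even if commit is never called.
func load(status chan<- error) {
	time.Sleep(20 * time.Millisecond)             // stand-in for streaming the tarball
	status <- errors.New("daemon rejected image") // or nil on success
}

// commit mirrors the Commit logic above: wait for the worker's status,
// but give up early if the caller's context is cancelled.
func commit(ctx context.Context, status <-chan error) error {
	select {
	case <-ctx.Done():
		return ctx.Err()
	case err := <-status:
		return err
	}
}

func main() {
	status := make(chan error, 1)
	go load(status)
	fmt.Println(commit(context.Background(), status))
}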
42 vendor/github.com/containers/image/docker/daemon/daemon_src.go (generated, vendored)
@@ -1,22 +1,16 @@
 package daemon

 import (
-	"io"
-	"io/ioutil"
-	"os"
+	"context"

 	"github.com/containers/image/docker/tarfile"
 	"github.com/containers/image/types"
 	"github.com/pkg/errors"
-	"golang.org/x/net/context"
 )

-const temporaryDirectoryForBigFiles = "/var/tmp" // Do not use the system default of os.TempDir(), usually /tmp, because with systemd it could be a tmpfs.
-
 type daemonImageSource struct {
 	ref daemonReference
 	*tarfile.Source // Implements most of types.ImageSource
-	tarCopyPath string
 }

 type layerInfo struct {
@@ -33,42 +27,26 @@ type layerInfo struct {
 // (We could, perhaps, expect an exact sequence, assume that the first plaintext file
 // is the config, and that the following len(RootFS) files are the layers, but that feels
 // way too brittle.)
-func newImageSource(ctx *types.SystemContext, ref daemonReference) (types.ImageSource, error) {
-	c, err := newDockerClient(ctx)
+func newImageSource(ctx context.Context, sys *types.SystemContext, ref daemonReference) (types.ImageSource, error) {
+	c, err := newDockerClient(sys)
 	if err != nil {
 		return nil, errors.Wrap(err, "Error initializing docker engine client")
 	}
 	// Per NewReference(), ref.StringWithinTransport() is either an image ID (config digest), or a !reference.NameOnly() reference.
 	// Either way ImageSave should create a tarball with exactly one image.
-	inputStream, err := c.ImageSave(context.TODO(), []string{ref.StringWithinTransport()})
+	inputStream, err := c.ImageSave(ctx, []string{ref.StringWithinTransport()})
 	if err != nil {
 		return nil, errors.Wrap(err, "Error loading image from docker engine")
 	}
 	defer inputStream.Close()

-	// FIXME: use SystemContext here.
-	tarCopyFile, err := ioutil.TempFile(temporaryDirectoryForBigFiles, "docker-daemon-tar")
+	src, err := tarfile.NewSourceFromStream(inputStream)
 	if err != nil {
 		return nil, err
 	}
-	defer tarCopyFile.Close()
-
-	succeeded := false
-	defer func() {
-		if !succeeded {
-			os.Remove(tarCopyFile.Name())
-		}
-	}()
-
-	if _, err := io.Copy(tarCopyFile, inputStream); err != nil {
-		return nil, err
-	}
-
-	succeeded = true
 	return &daemonImageSource{
-		ref:         ref,
-		Source:      tarfile.NewSource(tarCopyFile.Name()),
-		tarCopyPath: tarCopyFile.Name(),
+		ref:    ref,
+		Source: src,
 	}, nil
 }

@@ -78,7 +56,7 @@ func (s *daemonImageSource) Reference() types.ImageReference {
 	return s.ref
 }

-// Close removes resources associated with an initialized ImageSource, if any.
-func (s *daemonImageSource) Close() error {
-	return os.Remove(s.tarCopyPath)
+// LayerInfosForCopy() returns updated layer info that should be used when reading, in preference to values in the manifest, if specified.
+func (s *daemonImageSource) LayerInfosForCopy(ctx context.Context) ([]types.BlobInfo, error) {
+	return nil, nil
 }
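Editor's note: the temp-file spooling newImageSource used to do by hand, including the remove-on-failure defer, now lives behind tarfile.NewSourceFromStream. The removed idiom is still worth keeping around; in isolation:

package main

import (
	"fmt"
	"io"
	"io/ioutil"
	"os"
	"strings"
)

// spoolToTempFile shows the idiom the old code used and NewSourceFromStream
// now encapsulates: copy a stream to a temp file, removing it on any failure.
func spoolToTempFile(stream io.Reader) (string, error) {
	f, err := ioutil.TempFile("/var/tmp", "docker-daemon-tar") // /var/tmp: big files, not tmpfs
	if err != nil {
		return "", err
	}
	defer f.Close()
	succeeded := false
	defer func() {
		if !succeeded {
			os.Remove(f.Name()) // roll back the partial copy
		}
	}()
	if _, err := io.Copy(f, stream); err != nil {
		return "", err
	}
	succeeded = true
	return f.Name(), nil
}

func main() {
	path, err := spoolToTempFile(strings.NewReader("tar bytes"))
	fmt.Println(path, err)
}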
75 vendor/github.com/containers/image/docker/daemon/daemon_transport.go (generated, vendored)
@@ -1,13 +1,16 @@
package daemon

import (
    "github.com/pkg/errors"
    "context"
    "fmt"

    "github.com/containers/image/docker/policyconfiguration"
    "github.com/containers/image/docker/reference"
    "github.com/containers/image/image"
    "github.com/containers/image/transports"
    "github.com/containers/image/types"
    "github.com/opencontainers/go-digest"
    "github.com/pkg/errors"
)

func init() {
@@ -34,8 +37,15 @@ func (t daemonTransport) ParseReference(reference string) (types.ImageReference,
// It is acceptable to allow an invalid value which will never be matched, it can "only" cause user confusion.
// scope passed to this function will not be "", that value is always allowed.
func (t daemonTransport) ValidatePolicyConfigurationScope(scope string) error {
    // See the explanation in daemonReference.PolicyConfigurationIdentity.
    return errors.New(`docker-daemon: does not support any scopes except the default "" one`)
    // ID values cannot be effectively namespaced, and are clearly invalid host:port values.
    if _, err := digest.Parse(scope); err == nil {
        return errors.Errorf(`docker-daemon: can not use algo:digest value %s as a namespace`, scope)
    }

    // FIXME? We could be verifying the various character set and length restrictions
    // from docker/distribution/reference.regexp.go, but other than that there
    // are few semantically invalid strings.
    return nil
}

// daemonReference is an ImageReference for images managed by a local Docker daemon
@@ -87,6 +97,8 @@ func NewReference(id digest.Digest, ref reference.Named) (types.ImageReference,
    // A github.com/distribution/reference value can have a tag and a digest at the same time!
    // Most versions of docker/reference do not handle that (ignoring the tag), so reject such input.
    // This MAY be accepted in the future.
    // (Even if it were supported, the semantics of policy namespaces are unclear - should we drop
    // the tag or the digest first?)
    _, isTagged := ref.(reference.NamedTagged)
    _, isDigested := ref.(reference.Canonical)
    if isTagged && isDigested {
@@ -136,9 +148,28 @@ func (ref daemonReference) DockerReference() reference.Named {
// Returns "" if configuration identities for these references are not supported.
func (ref daemonReference) PolicyConfigurationIdentity() string {
    // We must allow referring to images in the daemon by image ID, otherwise untagged images would not be accessible.
    // But the existence of image IDs means that we can’t truly well namespace the input; the untagged images would have to fall into the default policy,
    // which can be unexpected. So, punt.
    return "" // This still allows using the default "" scope to define a policy for this transport.
    // But the existence of image IDs means that we can’t truly well namespace the input:
    // a single image can be namespaced either using the name or the ID depending on how it is named.
    //
    // That’s fairly unexpected, but we have to cope somehow.
    //
    // So, use the ordinary docker/policyconfiguration namespacing for named images.
    // image IDs all fall into the root namespace.
    // Users can set up the root namespace to be either untrusted or rejected,
    // and to set up specific trust for named namespaces. This allows verifying image
    // identity when a name is known, and unnamed images would be untrusted or rejected.
    switch {
    case ref.id != "":
        return "" // This still allows using the default "" scope to define a global policy for ID-identified images.
    case ref.ref != nil:
        res, err := policyconfiguration.DockerReferenceIdentity(ref.ref)
        if res == "" || err != nil { // Coverage: Should never happen, NewReference above should refuse values which could cause a failure.
            panic(fmt.Sprintf("Internal inconsistency: policyconfiguration.DockerReferenceIdentity returned %#v, %v", res, err))
        }
        return res
    default: // Coverage: Should never happen, NewReference above should refuse such values.
        panic("Internal inconsistency: daemonReference has empty id and nil ref")
    }
}

// PolicyConfigurationNamespaces returns a list of other policy configuration namespaces to search
@@ -148,33 +179,43 @@ func (ref daemonReference) PolicyConfigurationIdentity() string {
// and each following element to be a prefix of the element preceding it.
func (ref daemonReference) PolicyConfigurationNamespaces() []string {
    // See the explanation in daemonReference.PolicyConfigurationIdentity.
    return []string{}
    switch {
    case ref.id != "":
        return []string{}
    case ref.ref != nil:
        return policyconfiguration.DockerReferenceNamespaces(ref.ref)
    default: // Coverage: Should never happen, NewReference above should refuse such values.
        panic("Internal inconsistency: daemonReference has empty id and nil ref")
    }
}

// NewImage returns a types.Image for this reference.
// The caller must call .Close() on the returned Image.
func (ref daemonReference) NewImage(ctx *types.SystemContext) (types.Image, error) {
    src, err := newImageSource(ctx, ref)
// NewImage returns a types.ImageCloser for this reference, possibly specialized for this ImageTransport.
// The caller must call .Close() on the returned ImageCloser.
// NOTE: If any kind of signature verification should happen, build an UnparsedImage from the value returned by NewImageSource,
// verify that UnparsedImage, and convert it into a real Image via image.FromUnparsedImage.
// WARNING: This may not do the right thing for a manifest list, see image.FromSource for details.
func (ref daemonReference) NewImage(ctx context.Context, sys *types.SystemContext) (types.ImageCloser, error) {
    src, err := newImageSource(ctx, sys, ref)
    if err != nil {
        return nil, err
    }
    return image.FromSource(src)
    return image.FromSource(ctx, sys, src)
}

// NewImageSource returns a types.ImageSource for this reference.
// The caller must call .Close() on the returned ImageSource.
func (ref daemonReference) NewImageSource(ctx *types.SystemContext) (types.ImageSource, error) {
    return newImageSource(ctx, ref)
func (ref daemonReference) NewImageSource(ctx context.Context, sys *types.SystemContext) (types.ImageSource, error) {
    return newImageSource(ctx, sys, ref)
}

// NewImageDestination returns a types.ImageDestination for this reference.
// The caller must call .Close() on the returned ImageDestination.
func (ref daemonReference) NewImageDestination(ctx *types.SystemContext) (types.ImageDestination, error) {
    return newImageDestination(ctx, ref)
func (ref daemonReference) NewImageDestination(ctx context.Context, sys *types.SystemContext) (types.ImageDestination, error) {
    return newImageDestination(ctx, sys, ref)
}

// DeleteImage deletes the named image from the registry, if supported.
func (ref daemonReference) DeleteImage(ctx *types.SystemContext) error {
func (ref daemonReference) DeleteImage(ctx context.Context, sys *types.SystemContext) error {
    // Should this just untag the image? Should this stop running containers?
    // The semantics is not quite as clear as for remote repositories.
    // The user can run (docker rmi) directly anyway, so, for now(?), punt instead of trying to guess what the user meant.
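The net effect of the two hunks above: named docker-daemon references now participate in the ordinary docker policy-configuration namespacing, while ID-addressed images fall into the default "" scope. A small sketch of what a caller would observe, assuming the standard alltransports parser; the printed values are illustrative:

```go
package main

import (
	"fmt"
	"log"

	"github.com/containers/image/transports/alltransports"
)

func main() {
	// A name-addressed daemon reference; the identity below is what a
	// policy.json entry would now have to match.
	ref, err := alltransports.ParseImageName("docker-daemon:busybox:latest")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(ref.PolicyConfigurationIdentity())   // e.g. "docker.io/library/busybox:latest"
	fmt.Println(ref.PolicyConfigurationNamespaces()) // e.g. [docker.io/library/busybox docker.io/library docker.io]
}
```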
228
vendor/github.com/containers/image/docker/docker_client.go
generated
vendored
@@ -8,7 +8,10 @@ import (
    "io"
    "io/ioutil"
    "net/http"
    "net/url"
    "os"
    "path/filepath"
    "strconv"
    "strings"
    "time"

@@ -24,10 +27,9 @@ import (
)

const (
    dockerHostname = "docker.io"
    dockerRegistry = "registry-1.docker.io"

    systemPerHostCertDirPath = "/etc/docker/certs.d"
    dockerHostname   = "docker.io"
    dockerV1Hostname = "index.docker.io"
    dockerRegistry   = "registry-1.docker.io"

    resolvedPingV2URL = "%s://%s/v2/"
    resolvedPingV1URL = "%s://%s/v1/_ping"
@@ -49,6 +51,7 @@ var (
    ErrV1NotSupported = errors.New("can't talk to a V1 docker registry")
    // ErrUnauthorizedForCredentials is returned when the status code returned is 401
    ErrUnauthorizedForCredentials = errors.New("unable to retrieve auth token: invalid username/password")
    systemPerHostCertDirPaths = [2]string{"/etc/containers/certs.d", "/etc/docker/certs.d"}
)

// extensionSignature and extensionSignatureList come from github.com/openshift/origin/pkg/dockerregistry/server/signaturedispatcher.go:
@@ -66,15 +69,16 @@ type extensionSignatureList struct {
}

type bearerToken struct {
    Token     string    `json:"token"`
    ExpiresIn int       `json:"expires_in"`
    IssuedAt  time.Time `json:"issued_at"`
    Token       string    `json:"token"`
    AccessToken string    `json:"access_token"`
    ExpiresIn   int       `json:"expires_in"`
    IssuedAt    time.Time `json:"issued_at"`
}

// dockerClient is configuration for dealing with a single Docker registry.
type dockerClient struct {
    // The following members are set by newDockerClient and do not change afterwards.
    ctx      *types.SystemContext
    sys      *types.SystemContext
    registry string
    username string
    password string
@@ -96,6 +100,24 @@ type authScope struct {
    actions string
}

func newBearerTokenFromJSONBlob(blob []byte) (*bearerToken, error) {
    token := new(bearerToken)
    if err := json.Unmarshal(blob, &token); err != nil {
        return nil, err
    }
    if token.Token == "" {
        token.Token = token.AccessToken
    }
    if token.ExpiresIn < minimumTokenLifetimeSeconds {
        token.ExpiresIn = minimumTokenLifetimeSeconds
        logrus.Debugf("Increasing token expiration to: %d seconds", token.ExpiresIn)
    }
    if token.IssuedAt.IsZero() {
        token.IssuedAt = time.Now().UTC()
    }
    return token, nil
}
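newBearerTokenFromJSONBlob is unexported, so it can only be exercised from inside the package; a sketch of a package-internal test for the access_token fallback introduced above (assertion values are illustrative):

```go
package docker

import "testing"

// Some token servers return "access_token" (plain OAuth2 style) instead of
// "token"; the client now accepts either, clamps short lifetimes to the
// package minimum, and defaults a missing issue time.
func TestNewBearerTokenFromJSONBlobAccessTokenFallback(t *testing.T) {
	tok, err := newBearerTokenFromJSONBlob([]byte(`{"access_token": "abc", "expires_in": 1}`))
	if err != nil {
		t.Fatal(err)
	}
	if tok.Token != "abc" {
		t.Errorf("expected access_token fallback, got %q", tok.Token)
	}
	if tok.ExpiresIn < minimumTokenLifetimeSeconds {
		t.Errorf("ExpiresIn not raised to the package minimum")
	}
	if tok.IssuedAt.IsZero() {
		t.Errorf("IssuedAt not defaulted to the current time")
	}
}
```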

// this is cloned from docker/go-connections because upstream docker has changed
// it and make deps here fails otherwise.
// We'll drop this once we upgrade to docker 1.13.x deps.
@@ -109,40 +131,63 @@ func serverDefault() *tls.Config {
}

// dockerCertDir returns a path to a directory to be consumed by tlsclientconfig.SetupCertificates() depending on ctx and hostPort.
func dockerCertDir(ctx *types.SystemContext, hostPort string) string {
    if ctx != nil && ctx.DockerCertPath != "" {
        return ctx.DockerCertPath
func dockerCertDir(sys *types.SystemContext, hostPort string) (string, error) {
    if sys != nil && sys.DockerCertPath != "" {
        return sys.DockerCertPath, nil
    }
    var hostCertDir string
    if ctx != nil && ctx.DockerPerHostCertDirPath != "" {
        hostCertDir = ctx.DockerPerHostCertDirPath
    } else if ctx != nil && ctx.RootForImplicitAbsolutePaths != "" {
        hostCertDir = filepath.Join(ctx.RootForImplicitAbsolutePaths, systemPerHostCertDirPath)
    } else {
        hostCertDir = systemPerHostCertDirPath
    if sys != nil && sys.DockerPerHostCertDirPath != "" {
        return filepath.Join(sys.DockerPerHostCertDirPath, hostPort), nil
    }
    return filepath.Join(hostCertDir, hostPort)

    var (
        hostCertDir     string
        fullCertDirPath string
    )
    for _, systemPerHostCertDirPath := range systemPerHostCertDirPaths {
        if sys != nil && sys.RootForImplicitAbsolutePaths != "" {
            hostCertDir = filepath.Join(sys.RootForImplicitAbsolutePaths, systemPerHostCertDirPath)
        } else {
            hostCertDir = systemPerHostCertDirPath
        }

        fullCertDirPath = filepath.Join(hostCertDir, hostPort)
        _, err := os.Stat(fullCertDirPath)
        if err == nil {
            break
        }
        if os.IsNotExist(err) {
            continue
        }
        if os.IsPermission(err) {
            logrus.Debugf("error accessing certs directory due to permissions: %v", err)
            continue
        }
        if err != nil {
            return "", err
        }
    }
    return fullCertDirPath, nil
}
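dockerCertDir now probes /etc/containers/certs.d before /etc/docker/certs.d and skips candidates that do not exist. A sketch of a package-internal test of that search order, using a throwaway root via SystemContext.RootForImplicitAbsolutePaths (all paths here are constructed for the test only):

```go
package docker

import (
	"io/ioutil"
	"os"
	"path/filepath"
	"testing"

	"github.com/containers/image/types"
)

func TestDockerCertDirSearchOrder(t *testing.T) {
	root, err := ioutil.TempDir("", "certs")
	if err != nil {
		t.Fatal(err)
	}
	defer os.RemoveAll(root)

	// Create only the containers path; it should win, and the missing
	// docker path should simply be skipped.
	preferred := filepath.Join(root, "etc/containers/certs.d", "registry.example.com:5000")
	if err := os.MkdirAll(preferred, 0755); err != nil {
		t.Fatal(err)
	}

	sys := &types.SystemContext{RootForImplicitAbsolutePaths: root}
	dir, err := dockerCertDir(sys, "registry.example.com:5000")
	if err != nil {
		t.Fatal(err)
	}
	if dir != preferred {
		t.Errorf("expected %q, got %q", preferred, dir)
	}
}
```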

// newDockerClientFromRef returns a new dockerClient instance for refHostname (a host a specified in the Docker image reference, not canonicalized to dockerRegistry)
// “write” specifies whether the client will be used for "write" access (in particular passed to lookaside.go:toplevelFromSection)
func newDockerClientFromRef(ctx *types.SystemContext, ref dockerReference, write bool, actions string) (*dockerClient, error) {
func newDockerClientFromRef(sys *types.SystemContext, ref dockerReference, write bool, actions string) (*dockerClient, error) {
    registry := reference.Domain(ref.ref)
    username, password, err := config.GetAuthentication(ctx, reference.Domain(ref.ref))
    username, password, err := config.GetAuthentication(sys, reference.Domain(ref.ref))
    if err != nil {
        return nil, errors.Wrapf(err, "error getting username and password")
    }
    sigBase, err := configuredSignatureStorageBase(ctx, ref, write)
    sigBase, err := configuredSignatureStorageBase(sys, ref, write)
    if err != nil {
        return nil, err
    }
    remoteName := reference.Path(ref.ref)

    return newDockerClientWithDetails(ctx, registry, username, password, actions, sigBase, remoteName)
    return newDockerClientWithDetails(sys, registry, username, password, actions, sigBase, remoteName)
}

// newDockerClientWithDetails returns a new dockerClient instance for the given parameters
func newDockerClientWithDetails(ctx *types.SystemContext, registry, username, password, actions string, sigBase signatureStorageBase, remoteName string) (*dockerClient, error) {
func newDockerClientWithDetails(sys *types.SystemContext, registry, username, password, actions string, sigBase signatureStorageBase, remoteName string) (*dockerClient, error) {
    hostName := registry
    if registry == dockerHostname {
        registry = dockerRegistry
@@ -155,17 +200,20 @@ func newDockerClientWithDetails(ctx *types.SystemContext, registry, username, pa
    // dockerHostname here, because it is more symmetrical to read the configuration in that case as well, and because
    // generally the UI hides the existence of the different dockerRegistry. But note that this behavior is
    // undocumented and may change if docker/docker changes.
    certDir := dockerCertDir(ctx, hostName)
    certDir, err := dockerCertDir(sys, hostName)
    if err != nil {
        return nil, err
    }
    if err := tlsclientconfig.SetupCertificates(certDir, tr.TLSClientConfig); err != nil {
        return nil, err
    }

    if ctx != nil && ctx.DockerInsecureSkipTLSVerify {
    if sys != nil && sys.DockerInsecureSkipTLSVerify {
        tr.TLSClientConfig.InsecureSkipVerify = true
    }

    return &dockerClient{
        ctx:      ctx,
        sys:      sys,
        registry: registry,
        username: username,
        password: password,
@@ -180,8 +228,8 @@ func newDockerClientWithDetails(ctx *types.SystemContext, registry, username, pa

// CheckAuth validates the credentials by attempting to log into the registry
// returns an error if an error occcured while making the http request or the status code received was 401
func CheckAuth(ctx context.Context, sCtx *types.SystemContext, username, password, registry string) error {
    newLoginClient, err := newDockerClientWithDetails(sCtx, registry, username, password, "", nil, "")
func CheckAuth(ctx context.Context, sys *types.SystemContext, username, password, registry string) error {
    newLoginClient, err := newDockerClientWithDetails(sys, registry, username, password, "", nil, "")
    if err != nil {
        return errors.Wrapf(err, "error creating new docker client")
    }
@@ -202,6 +250,100 @@ func CheckAuth(ctx context.Context, sCtx *types.SystemContext, username, passwor
    }
}

// SearchResult holds the information of each matching image
// It matches the output returned by the v1 endpoint
type SearchResult struct {
    Name        string `json:"name"`
    Description string `json:"description"`
    // StarCount states the number of stars the image has
    StarCount int  `json:"star_count"`
    IsTrusted bool `json:"is_trusted"`
    // IsAutomated states whether the image is an automated build
    IsAutomated bool `json:"is_automated"`
    // IsOfficial states whether the image is an official build
    IsOfficial bool `json:"is_official"`
}

// SearchRegistry queries a registry for images that contain "image" in their name
// The limit is the max number of results desired
// Note: The limit value doesn't work with all registries
// for example registry.access.redhat.com returns all the results without limiting it to the limit value
func SearchRegistry(ctx context.Context, sys *types.SystemContext, registry, image string, limit int) ([]SearchResult, error) {
    type V2Results struct {
        // Repositories holds the results returned by the /v2/_catalog endpoint
        Repositories []string `json:"repositories"`
    }
    type V1Results struct {
        // Results holds the results returned by the /v1/search endpoint
        Results []SearchResult `json:"results"`
    }
    v2Res := &V2Results{}
    v1Res := &V1Results{}

    // The /v2/_catalog endpoint has been disabled for docker.io therefore the call made to that endpoint will fail
    // So using the v1 hostname for docker.io for simplicity of implementation and the fact that it returns search results
    if registry == dockerHostname {
        registry = dockerV1Hostname
    }

    client, err := newDockerClientWithDetails(sys, registry, "", "", "", nil, "")
    if err != nil {
        return nil, errors.Wrapf(err, "error creating new docker client")
    }

    logrus.Debugf("trying to talk to v2 search endpoint\n")
    resp, err := client.makeRequest(ctx, "GET", "/v2/_catalog", nil, nil)
    if err != nil {
        logrus.Debugf("error getting search results from v2 endpoint %q: %v", registry, err)
    } else {
        defer resp.Body.Close()
        if resp.StatusCode != http.StatusOK {
            logrus.Debugf("error getting search results from v2 endpoint %q, status code %q", registry, resp.StatusCode)
        } else {
            if err := json.NewDecoder(resp.Body).Decode(v2Res); err != nil {
                return nil, err
            }
            searchRes := []SearchResult{}
            for _, repo := range v2Res.Repositories {
                if strings.Contains(repo, image) {
                    res := SearchResult{
                        Name: repo,
                    }
                    searchRes = append(searchRes, res)
                }
            }
            return searchRes, nil
        }
    }

    // set up the query values for the v1 endpoint
    u := url.URL{
        Path: "/v1/search",
    }
    q := u.Query()
    q.Set("q", image)
    q.Set("n", strconv.Itoa(limit))
    u.RawQuery = q.Encode()

    logrus.Debugf("trying to talk to v1 search endpoint\n")
    resp, err = client.makeRequest(ctx, "GET", u.String(), nil, nil)
    if err != nil {
        logrus.Debugf("error getting search results from v1 endpoint %q: %v", registry, err)
    } else {
        defer resp.Body.Close()
        if resp.StatusCode != http.StatusOK {
            logrus.Debugf("error getting search results from v1 endpoint %q, status code %q", registry, resp.StatusCode)
        } else {
            if err := json.NewDecoder(resp.Body).Decode(v1Res); err != nil {
                return nil, err
            }
            return v1Res.Results, nil
        }
    }

    return nil, errors.Wrapf(err, "couldn't search registry %q", registry)
}
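A hedged usage sketch for the newly exported SearchRegistry: the registry host, search term, and limit below are illustrative, and as the code's own comment notes, some registries ignore the limit:

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containers/image/docker"
)

func main() {
	// For docker.io the function transparently switches to the v1 search
	// hostname, since the v2 /_catalog endpoint is disabled there.
	results, err := docker.SearchRegistry(context.Background(), nil, "docker.io", "busybox", 10)
	if err != nil {
		log.Fatal(err)
	}
	for _, r := range results {
		fmt.Printf("%s: %s\n", r.Name, r.Description)
	}
}
```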

// makeRequest creates and executes a http.Request with the specified parameters, adding authentication and TLS options for the Docker client.
// The host name and schema is taken from the client or autodetected, and the path is relative to it, i.e. the path usually starts with /v2/.
func (c *dockerClient) makeRequest(ctx context.Context, method, path string, headers map[string][]string, stream io.Reader) (*http.Response, error) {
@@ -232,8 +374,8 @@ func (c *dockerClient) makeRequestToResolvedURL(ctx context.Context, method, url
            req.Header.Add(n, hh)
        }
    }
    if c.ctx != nil && c.ctx.DockerRegistryUserAgent != "" {
        req.Header.Add("User-Agent", c.ctx.DockerRegistryUserAgent)
    if c.sys != nil && c.sys.DockerRegistryUserAgent != "" {
        req.Header.Add("User-Agent", c.sys.DockerRegistryUserAgent)
    }
    if sendAuth {
        if err := c.setupRequestAuth(req); err != nil {
@@ -332,18 +474,8 @@ func (c *dockerClient) getBearerToken(ctx context.Context, realm, service, scope
    if err != nil {
        return nil, err
    }
    var token bearerToken
    if err := json.Unmarshal(tokenBlob, &token); err != nil {
        return nil, err
    }
    if token.ExpiresIn < minimumTokenLifetimeSeconds {
        token.ExpiresIn = minimumTokenLifetimeSeconds
        logrus.Debugf("Increasing token expiration to: %d seconds", token.ExpiresIn)
    }
    if token.IssuedAt.IsZero() {
        token.IssuedAt = time.Now().UTC()
    }
    return &token, nil

    return newBearerTokenFromJSONBlob(tokenBlob)
}

// detectProperties detects various properties of the registry.
@@ -363,7 +495,7 @@ func (c *dockerClient) detectProperties(ctx context.Context) error {
        defer resp.Body.Close()
        logrus.Debugf("Ping %s status %d", url, resp.StatusCode)
        if resp.StatusCode != http.StatusOK && resp.StatusCode != http.StatusUnauthorized {
            return errors.Errorf("error pinging repository, response code %d", resp.StatusCode)
            return errors.Errorf("error pinging registry %s, response code %d", c.registry, resp.StatusCode)
        }
        c.challenges = parseAuthHeader(resp.Header)
        c.scheme = scheme
@@ -371,12 +503,12 @@ func (c *dockerClient) detectProperties(ctx context.Context) error {
        return nil
    }
    err := ping("https")
    if err != nil && c.ctx != nil && c.ctx.DockerInsecureSkipTLSVerify {
    if err != nil && c.sys != nil && c.sys.DockerInsecureSkipTLSVerify {
        err = ping("http")
    }
    if err != nil {
        err = errors.Wrap(err, "pinging docker registry returned")
        if c.ctx != nil && c.ctx.DockerDisableV1Ping {
        if c.sys != nil && c.sys.DockerDisableV1Ping {
            return err
        }
        // best effort to understand if we're talking to a V1 registry
@@ -395,7 +527,7 @@ func (c *dockerClient) detectProperties(ctx context.Context) error {
        return true
    }
    isV1 := pingV1("https")
    if !isV1 && c.ctx != nil && c.ctx.DockerInsecureSkipTLSVerify {
    if !isV1 && c.sys != nil && c.sys.DockerInsecureSkipTLSVerify {
        isV1 = pingV1("http")
    }
    if isV1 {
@@ -415,7 +547,7 @@ func (c *dockerClient) getExtensionsSignatures(ctx context.Context, ref dockerRe
    }
    defer res.Body.Close()
    if res.StatusCode != http.StatusOK {
        return nil, client.HandleErrorResponse(res)
        return nil, errors.Wrapf(client.HandleErrorResponse(res), "Error downloading signatures for %s in %s", manifestDigest, ref.ref.Name())
    }
    body, err := ioutil.ReadAll(res.Body)
    if err != nil {
98
vendor/github.com/containers/image/docker/docker_image.go
generated
vendored
@@ -5,6 +5,8 @@ import (
    "encoding/json"
    "fmt"
    "net/http"
    "net/url"
    "strings"

    "github.com/containers/image/docker/reference"
    "github.com/containers/image/image"
@@ -12,26 +14,26 @@ import (
    "github.com/pkg/errors"
)

// Image is a Docker-specific implementation of types.Image with a few extra methods
// Image is a Docker-specific implementation of types.ImageCloser with a few extra methods
// which are specific to Docker.
type Image struct {
    types.Image
    types.ImageCloser
    src *dockerImageSource
}

// newImage returns a new Image interface type after setting up
// a client to the registry hosting the given image.
// The caller must call .Close() on the returned Image.
func newImage(ctx *types.SystemContext, ref dockerReference) (types.Image, error) {
    s, err := newImageSource(ctx, ref)
func newImage(ctx context.Context, sys *types.SystemContext, ref dockerReference) (types.ImageCloser, error) {
    s, err := newImageSource(sys, ref)
    if err != nil {
        return nil, err
    }
    img, err := image.FromSource(s)
    img, err := image.FromSource(ctx, sys, s)
    if err != nil {
        return nil, err
    }
    return &Image{Image: img, src: s}, nil
    return &Image{ImageCloser: img, src: s}, nil
}

// SourceRefFullName returns a fully expanded name for the repository this image is in.
@@ -39,25 +41,67 @@ func (i *Image) SourceRefFullName() string {
    return i.src.ref.ref.Name()
}

// GetRepositoryTags list all tags available in the repository. Note that this has no connection with the tag(s) used for this specific image, if any.
func (i *Image) GetRepositoryTags() ([]string, error) {
    path := fmt.Sprintf(tagsPath, reference.Path(i.src.ref.ref))
    // FIXME: Pass the context.Context
    res, err := i.src.c.makeRequest(context.TODO(), "GET", path, nil, nil)
    if err != nil {
        return nil, err
    }
    defer res.Body.Close()
    if res.StatusCode != http.StatusOK {
        // print url also
        return nil, errors.Errorf("Invalid status code returned when fetching tags list %d", res.StatusCode)
    }
    type tagsRes struct {
        Tags []string
    }
    tags := &tagsRes{}
    if err := json.NewDecoder(res.Body).Decode(tags); err != nil {
        return nil, err
    }
    return tags.Tags, nil
// GetRepositoryTags list all tags available in the repository. The tag
// provided inside the ImageReference will be ignored. (This is a
// backward-compatible shim method which calls the module-level
// GetRepositoryTags)
func (i *Image) GetRepositoryTags(ctx context.Context) ([]string, error) {
    return GetRepositoryTags(ctx, i.src.c.sys, i.src.ref)
}

// GetRepositoryTags list all tags available in the repository. The tag
// provided inside the ImageReference will be ignored.
func GetRepositoryTags(ctx context.Context, sys *types.SystemContext, ref types.ImageReference) ([]string, error) {
    dr, ok := ref.(dockerReference)
    if !ok {
        return nil, errors.Errorf("ref must be a dockerReference")
    }

    path := fmt.Sprintf(tagsPath, reference.Path(dr.ref))
    client, err := newDockerClientFromRef(sys, dr, false, "pull")
    if err != nil {
        return nil, errors.Wrap(err, "failed to create client")
    }

    tags := make([]string, 0)

    for {
        res, err := client.makeRequest(ctx, "GET", path, nil, nil)
        if err != nil {
            return nil, err
        }
        defer res.Body.Close()
        if res.StatusCode != http.StatusOK {
            // print url also
            return nil, errors.Errorf("Invalid status code returned when fetching tags list %d", res.StatusCode)
        }

        var tagsHolder struct {
            Tags []string
        }
        if err = json.NewDecoder(res.Body).Decode(&tagsHolder); err != nil {
            return nil, err
        }
        tags = append(tags, tagsHolder.Tags...)

        link := res.Header.Get("Link")
        if link == "" {
            break
        }

        linkURLStr := strings.Trim(strings.Split(link, ";")[0], "<>")
        linkURL, err := url.Parse(linkURLStr)
        if err != nil {
            return tags, err
        }

        // can be relative or absolute, but we only want the path (and I
        // guess we're in trouble if it forwards to a new place...)
        path = linkURL.Path
        if linkURL.RawQuery != "" {
            path += "?"
            path += linkURL.RawQuery
        }
    }
    return tags, nil
}
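The module-level GetRepositoryTags added above follows RFC 5988 Link headers, so it keeps requesting pages until the registry stops paginating. A minimal sketch of calling it, assuming a public image (the reference string is illustrative):

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containers/image/docker"
	"github.com/containers/image/transports/alltransports"
)

func main() {
	ref, err := alltransports.ParseImageName("docker://docker.io/library/busybox:latest")
	if err != nil {
		log.Fatal(err)
	}
	// nil SystemContext means default credentials/TLS configuration; the
	// tag in the reference is ignored, only the repository matters.
	tags, err := docker.GetRepositoryTags(context.Background(), nil, ref)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(tags)
}
```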
65
vendor/github.com/containers/image/docker/docker_image_dest.go
generated
vendored
@@ -33,8 +33,8 @@ type dockerImageDestination struct {
}

// newImageDestination creates a new ImageDestination for the specified image reference.
func newImageDestination(ctx *types.SystemContext, ref dockerReference) (types.ImageDestination, error) {
    c, err := newDockerClientFromRef(ctx, ref, true, "pull,push")
func newImageDestination(sys *types.SystemContext, ref dockerReference) (types.ImageDestination, error) {
    c, err := newDockerClientFromRef(sys, ref, true, "pull,push")
    if err != nil {
        return nil, err
    }
@@ -66,8 +66,8 @@ func (d *dockerImageDestination) SupportedManifestMIMETypes() []string {

// SupportsSignatures returns an error (to be displayed to the user) if the destination certainly can't store signatures.
// Note: It is still possible for PutSignatures to fail if SupportsSignatures returns nil.
func (d *dockerImageDestination) SupportsSignatures() error {
    if err := d.c.detectProperties(context.TODO()); err != nil {
func (d *dockerImageDestination) SupportsSignatures(ctx context.Context) error {
    if err := d.c.detectProperties(ctx); err != nil {
        return err
    }
    switch {
@@ -80,9 +80,8 @@ func (d *dockerImageDestination) SupportsSignatures() error {
    }
}

// ShouldCompressLayers returns true iff it is desirable to compress layer blobs written to this destination.
func (d *dockerImageDestination) ShouldCompressLayers() bool {
    return true
func (d *dockerImageDestination) DesiredLayerCompression() types.LayerCompression {
    return types.Compress
}

// AcceptsForeignLayerURLs returns false iff foreign layers in manifest should be actually
@@ -110,9 +109,9 @@ func (c *sizeCounter) Write(p []byte) (n int, err error) {
// WARNING: The contents of stream are being verified on the fly. Until stream.Read() returns io.EOF, the contents of the data SHOULD NOT be available
// to any other readers for download using the supplied digest.
// If stream.Read() at any time, ESPECIALLY at end of input, returns an error, PutBlob MUST 1) fail, and 2) delete any data stored so far.
func (d *dockerImageDestination) PutBlob(stream io.Reader, inputInfo types.BlobInfo) (types.BlobInfo, error) {
func (d *dockerImageDestination) PutBlob(ctx context.Context, stream io.Reader, inputInfo types.BlobInfo, isConfig bool) (types.BlobInfo, error) {
    if inputInfo.Digest.String() != "" {
        haveBlob, size, err := d.HasBlob(inputInfo)
        haveBlob, size, err := d.HasBlob(ctx, inputInfo)
        if err != nil {
            return types.BlobInfo{}, err
        }
@@ -124,14 +123,14 @@ func (d *dockerImageDestination) PutBlob(stream io.Reader, inputInfo types.BlobI
    // FIXME? Chunked upload, progress reporting, etc.
    uploadPath := fmt.Sprintf(blobUploadPath, reference.Path(d.ref.ref))
    logrus.Debugf("Uploading %s", uploadPath)
    res, err := d.c.makeRequest(context.TODO(), "POST", uploadPath, nil, nil)
    res, err := d.c.makeRequest(ctx, "POST", uploadPath, nil, nil)
    if err != nil {
        return types.BlobInfo{}, err
    }
    defer res.Body.Close()
    if res.StatusCode != http.StatusAccepted {
        logrus.Debugf("Error initiating layer upload, response %#v", *res)
        return types.BlobInfo{}, errors.Errorf("Error initiating layer upload to %s, status %d", uploadPath, res.StatusCode)
        return types.BlobInfo{}, errors.Wrapf(client.HandleErrorResponse(res), "Error initiating layer upload to %s in %s", uploadPath, d.c.registry)
    }
    uploadLocation, err := res.Location()
    if err != nil {
@@ -141,7 +140,7 @@ func (d *dockerImageDestination) PutBlob(stream io.Reader, inputInfo types.BlobI
    digester := digest.Canonical.Digester()
    sizeCounter := &sizeCounter{}
    tee := io.TeeReader(stream, io.MultiWriter(digester.Hash(), sizeCounter))
    res, err = d.c.makeRequestToResolvedURL(context.TODO(), "PATCH", uploadLocation.String(), map[string][]string{"Content-Type": {"application/octet-stream"}}, tee, inputInfo.Size, true)
    res, err = d.c.makeRequestToResolvedURL(ctx, "PATCH", uploadLocation.String(), map[string][]string{"Content-Type": {"application/octet-stream"}}, tee, inputInfo.Size, true)
    if err != nil {
        logrus.Debugf("Error uploading layer chunked, response %#v", res)
        return types.BlobInfo{}, err
@@ -160,14 +159,14 @@ func (d *dockerImageDestination) PutBlob(stream io.Reader, inputInfo types.BlobI
    // TODO: check inputInfo.Digest == computedDigest https://github.com/containers/image/pull/70#discussion_r77646717
    locationQuery.Set("digest", computedDigest.String())
    uploadLocation.RawQuery = locationQuery.Encode()
    res, err = d.c.makeRequestToResolvedURL(context.TODO(), "PUT", uploadLocation.String(), map[string][]string{"Content-Type": {"application/octet-stream"}}, nil, -1, true)
    res, err = d.c.makeRequestToResolvedURL(ctx, "PUT", uploadLocation.String(), map[string][]string{"Content-Type": {"application/octet-stream"}}, nil, -1, true)
    if err != nil {
        return types.BlobInfo{}, err
    }
    defer res.Body.Close()
    if res.StatusCode != http.StatusCreated {
        logrus.Debugf("Error uploading layer, response %#v", *res)
        return types.BlobInfo{}, errors.Errorf("Error uploading layer to %s, status %d", uploadLocation, res.StatusCode)
        return types.BlobInfo{}, errors.Wrapf(client.HandleErrorResponse(res), "Error uploading layer to %s", uploadLocation)
    }

    logrus.Debugf("Upload of layer %s complete", computedDigest)
@@ -178,14 +177,14 @@ func (d *dockerImageDestination) PutBlob(stream io.Reader, inputInfo types.BlobI
// Unlike PutBlob, the digest can not be empty. If HasBlob returns true, the size of the blob must also be returned.
// If the destination does not contain the blob, or it is unknown, HasBlob ordinarily returns (false, -1, nil);
// it returns a non-nil error only on an unexpected failure.
func (d *dockerImageDestination) HasBlob(info types.BlobInfo) (bool, int64, error) {
func (d *dockerImageDestination) HasBlob(ctx context.Context, info types.BlobInfo) (bool, int64, error) {
    if info.Digest == "" {
        return false, -1, errors.Errorf(`"Can not check for a blob with unknown digest`)
    }
    checkPath := fmt.Sprintf(blobsPath, reference.Path(d.ref.ref), info.Digest.String())

    logrus.Debugf("Checking %s", checkPath)
    res, err := d.c.makeRequest(context.TODO(), "HEAD", checkPath, nil, nil)
    res, err := d.c.makeRequest(ctx, "HEAD", checkPath, nil, nil)
    if err != nil {
        return false, -1, err
    }
@@ -196,7 +195,7 @@ func (d *dockerImageDestination) HasBlob(info types.BlobInfo) (bool, int64, erro
        return true, getBlobSize(res), nil
    case http.StatusUnauthorized:
        logrus.Debugf("... not authorized")
        return false, -1, errors.Errorf("not authorized to read from destination repository %s", reference.Path(d.ref.ref))
        return false, -1, errors.Wrapf(client.HandleErrorResponse(res), "Error checking whether a blob %s exists in %s", info.Digest, d.ref.ref.Name())
    case http.StatusNotFound:
        logrus.Debugf("... not present")
        return false, -1, nil
@@ -205,7 +204,7 @@ func (d *dockerImageDestination) HasBlob(info types.BlobInfo) (bool, int64, erro
    }
}

func (d *dockerImageDestination) ReapplyBlob(info types.BlobInfo) (types.BlobInfo, error) {
func (d *dockerImageDestination) ReapplyBlob(ctx context.Context, info types.BlobInfo) (types.BlobInfo, error) {
    return info, nil
}

@@ -213,7 +212,7 @@ func (d *dockerImageDestination) ReapplyBlob(info types.BlobInfo) (types.BlobInf
// FIXME? This should also receive a MIME type if known, to differentiate between schema versions.
// If the destination is in principle available, refuses this manifest type (e.g. it does not recognize the schema),
// but may accept a different manifest type, the returned error must be an ManifestTypeRejectedError.
func (d *dockerImageDestination) PutManifest(m []byte) error {
func (d *dockerImageDestination) PutManifest(ctx context.Context, m []byte) error {
    digest, err := manifest.Digest(m)
    if err != nil {
        return err
@@ -231,13 +230,13 @@ func (d *dockerImageDestination) PutManifest(m []byte) error {
    if mimeType != "" {
        headers["Content-Type"] = []string{mimeType}
    }
    res, err := d.c.makeRequest(context.TODO(), "PUT", path, headers, bytes.NewReader(m))
    res, err := d.c.makeRequest(ctx, "PUT", path, headers, bytes.NewReader(m))
    if err != nil {
        return err
    }
    defer res.Body.Close()
    if res.StatusCode != http.StatusCreated {
        err = errors.Wrapf(client.HandleErrorResponse(res), "Error uploading manifest to %s", path)
    if !successStatus(res.StatusCode) {
        err = errors.Wrapf(client.HandleErrorResponse(res), "Error uploading manifest %s to %s", refTail, d.ref.ref.Name())
        if isManifestInvalidError(errors.Cause(err)) {
            err = types.ManifestTypeRejectedError{Err: err}
        }
@@ -246,6 +245,12 @@ func (d *dockerImageDestination) PutManifest(m []byte) error {
    return nil
}

// successStatus returns true if the argument is a successful HTTP response
// code (in the range 200 - 399 inclusive).
func successStatus(status int) bool {
    return status >= 200 && status <= 399
}
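successStatus widens the accepted PutManifest responses from exactly 201 Created to any 2xx/3xx code. A sketch of a package-internal test pinning that range down:

```go
package docker

import "testing"

// 2xx and 3xx responses to a manifest PUT now count as success; anything
// outside that range still fails the push.
func TestSuccessStatus(t *testing.T) {
	cases := map[int]bool{200: true, 201: true, 302: true, 399: true, 400: false, 500: false}
	for code, want := range cases {
		if got := successStatus(code); got != want {
			t.Errorf("successStatus(%d) = %v, want %v", code, got, want)
		}
	}
}
```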

// isManifestInvalidError returns true iff err from client.HandleErrorReponse is a “manifest invalid” error.
func isManifestInvalidError(err error) bool {
    errors, ok := err.(errcode.Errors)
@@ -262,19 +267,19 @@ func isManifestInvalidError(err error) bool {
    return ec.ErrorCode() == v2.ErrorCodeManifestInvalid || ec.ErrorCode() == v2.ErrorCodeTagInvalid
}

func (d *dockerImageDestination) PutSignatures(signatures [][]byte) error {
func (d *dockerImageDestination) PutSignatures(ctx context.Context, signatures [][]byte) error {
    // Do not fail if we don’t really need to support signatures.
    if len(signatures) == 0 {
        return nil
    }
    if err := d.c.detectProperties(context.TODO()); err != nil {
    if err := d.c.detectProperties(ctx); err != nil {
        return err
    }
    switch {
    case d.c.signatureBase != nil:
        return d.putSignaturesToLookaside(signatures)
    case d.c.supportsSignatures:
        return d.putSignaturesToAPIExtension(signatures)
        return d.putSignaturesToAPIExtension(ctx, signatures)
    default:
        return errors.Errorf("X-Registry-Supports-Signatures extension not supported, and lookaside is not configured")
    }
@@ -373,7 +378,7 @@ func (c *dockerClient) deleteOneSignature(url *url.URL) (missing bool, err error
}

// putSignaturesToAPIExtension implements PutSignatures() using the X-Registry-Supports-Signatures API extension.
func (d *dockerImageDestination) putSignaturesToAPIExtension(signatures [][]byte) error {
func (d *dockerImageDestination) putSignaturesToAPIExtension(ctx context.Context, signatures [][]byte) error {
    // Skip dealing with the manifest digest, or reading the old state, if not necessary.
    if len(signatures) == 0 {
        return nil
@@ -388,7 +393,7 @@ func (d *dockerImageDestination) putSignaturesToAPIExtension(signatures [][]byte
    // always adds signatures. Eventually we should also allow removing signatures,
    // but the X-Registry-Supports-Signatures API extension does not support that yet.

    existingSignatures, err := d.c.getExtensionsSignatures(context.TODO(), d.ref, d.manifestDigest)
    existingSignatures, err := d.c.getExtensionsSignatures(ctx, d.ref, d.manifestDigest)
    if err != nil {
        return err
    }
@@ -430,7 +435,7 @@ sigExists:
    }

    path := fmt.Sprintf(extensionsSignaturePath, reference.Path(d.ref.ref), d.manifestDigest.String())
    res, err := d.c.makeRequest(context.TODO(), "PUT", path, nil, bytes.NewReader(body))
    res, err := d.c.makeRequest(ctx, "PUT", path, nil, bytes.NewReader(body))
    if err != nil {
        return err
    }
@@ -441,7 +446,7 @@ sigExists:
        logrus.Debugf("Error body %s", string(body))
    }
    logrus.Debugf("Error uploading signature, status %d, %#v", res.StatusCode, res)
    return errors.Errorf("Error uploading signature to %s, status %d", path, res.StatusCode)
    return errors.Wrapf(client.HandleErrorResponse(res), "Error uploading signature to %s in %s", path, d.c.registry)
    }
}

@@ -452,6 +457,6 @@ sigExists:
// WARNING: This does not have any transactional semantics:
// - Uploaded data MAY be visible to others before Commit() is called
// - Uploaded data MAY be removed or MAY remain around if Close() is called without Commit() (i.e. rollback is allowed but not guaranteed)
func (d *dockerImageDestination) Commit() error {
func (d *dockerImageDestination) Commit(ctx context.Context) error {
    return nil
}
74
vendor/github.com/containers/image/docker/docker_image_src.go
generated
vendored
@@ -30,8 +30,8 @@ type dockerImageSource struct {

// newImageSource creates a new ImageSource for the specified image reference.
// The caller must call .Close() on the returned ImageSource.
func newImageSource(ctx *types.SystemContext, ref dockerReference) (*dockerImageSource, error) {
    c, err := newDockerClientFromRef(ctx, ref, false, "pull")
func newImageSource(sys *types.SystemContext, ref dockerReference) (*dockerImageSource, error) {
    c, err := newDockerClientFromRef(sys, ref, false, "pull")
    if err != nil {
        return nil, err
    }
@@ -52,6 +52,11 @@ func (s *dockerImageSource) Close() error {
    return nil
}

// LayerInfosForCopy() returns updated layer info that should be used when reading, in preference to values in the manifest, if specified.
func (s *dockerImageSource) LayerInfosForCopy(ctx context.Context) ([]types.BlobInfo, error) {
    return nil, nil
}

// simplifyContentType drops parameters from a HTTP media type (see https://tools.ietf.org/html/rfc7231#section-3.1.1.1)
// Alternatively, an empty string is returned unchanged, and invalid values are "simplified" to an empty string.
func simplifyContentType(contentType string) string {
@@ -67,8 +72,13 @@ func simplifyContentType(contentType string) string {

// GetManifest returns the image's manifest along with its MIME type (which may be empty when it can't be determined but the manifest is available).
// It may use a remote (= slow) service.
func (s *dockerImageSource) GetManifest() ([]byte, string, error) {
    err := s.ensureManifestIsLoaded(context.TODO())
// If instanceDigest is not nil, it contains a digest of the specific manifest instance to retrieve (when the primary manifest is a manifest list);
// this never happens if the primary manifest is not a manifest list (e.g. if the source never returns manifest lists).
func (s *dockerImageSource) GetManifest(ctx context.Context, instanceDigest *digest.Digest) ([]byte, string, error) {
    if instanceDigest != nil {
        return s.fetchManifest(ctx, instanceDigest.String())
    }
    err := s.ensureManifestIsLoaded(ctx)
    if err != nil {
        return nil, "", err
    }
@@ -85,7 +95,7 @@ func (s *dockerImageSource) fetchManifest(ctx context.Context, tagOrDigest strin
    }
    defer res.Body.Close()
    if res.StatusCode != http.StatusOK {
        return nil, "", client.HandleErrorResponse(res)
        return nil, "", errors.Wrapf(client.HandleErrorResponse(res), "Error reading manifest %s in %s", tagOrDigest, s.ref.ref.Name())
    }
    manblob, err := ioutil.ReadAll(res.Body)
    if err != nil {
@@ -94,18 +104,12 @@ func (s *dockerImageSource) fetchManifest(ctx context.Context, tagOrDigest strin
    return manblob, simplifyContentType(res.Header.Get("Content-Type")), nil
}

// GetTargetManifest returns an image's manifest given a digest.
// This is mainly used to retrieve a single image's manifest out of a manifest list.
func (s *dockerImageSource) GetTargetManifest(digest digest.Digest) ([]byte, string, error) {
    return s.fetchManifest(context.TODO(), digest.String())
}

// ensureManifestIsLoaded sets s.cachedManifest and s.cachedManifestMIMEType
//
// ImageSource implementations are not required or expected to do any caching,
// but because our signatures are “attached” to the manifest digest,
// we need to ensure that the digest of the manifest returned by GetManifest
// and used by GetSignatures are consistent, otherwise we would get spurious
// we need to ensure that the digest of the manifest returned by GetManifest(ctx, nil)
// and used by GetSignatures(ctx, nil) are consistent, otherwise we would get spurious
// signature verification failures when pulling while a tag is being updated.
func (s *dockerImageSource) ensureManifestIsLoaded(ctx context.Context) error {
    if s.cachedManifest != nil {
@@ -127,13 +131,13 @@ func (s *dockerImageSource) ensureManifestIsLoaded(ctx context.Context) error {
    return nil
}

func (s *dockerImageSource) getExternalBlob(urls []string) (io.ReadCloser, int64, error) {
func (s *dockerImageSource) getExternalBlob(ctx context.Context, urls []string) (io.ReadCloser, int64, error) {
    var (
        resp *http.Response
        err  error
    )
    for _, url := range urls {
        resp, err = s.c.makeRequestToResolvedURL(context.TODO(), "GET", url, nil, nil, -1, false)
        resp, err = s.c.makeRequestToResolvedURL(ctx, "GET", url, nil, nil, -1, false)
        if err == nil {
            if resp.StatusCode != http.StatusOK {
                err = errors.Errorf("error fetching external blob from %q: %d", url, resp.StatusCode)
@@ -158,14 +162,14 @@ func getBlobSize(resp *http.Response) int64 {
}

// GetBlob returns a stream for the specified blob, and the blob’s size (or -1 if unknown).
func (s *dockerImageSource) GetBlob(info types.BlobInfo) (io.ReadCloser, int64, error) {
func (s *dockerImageSource) GetBlob(ctx context.Context, info types.BlobInfo) (io.ReadCloser, int64, error) {
    if len(info.URLs) != 0 {
        return s.getExternalBlob(info.URLs)
        return s.getExternalBlob(ctx, info.URLs)
    }

    path := fmt.Sprintf(blobsPath, reference.Path(s.ref.ref), info.Digest.String())
    logrus.Debugf("Downloading %s", path)
    res, err := s.c.makeRequest(context.TODO(), "GET", path, nil, nil)
    res, err := s.c.makeRequest(ctx, "GET", path, nil, nil)
    if err != nil {
        return nil, 0, err
    }
@@ -176,22 +180,30 @@ func (s *dockerImageSource) GetBlob(info types.BlobInfo) (io.ReadCloser, int64,
    return res.Body, getBlobSize(res), nil
}

func (s *dockerImageSource) GetSignatures(ctx context.Context) ([][]byte, error) {
// GetSignatures returns the image's signatures. It may use a remote (= slow) service.
// If instanceDigest is not nil, it contains a digest of the specific manifest instance to retrieve signatures for
// (when the primary manifest is a manifest list); this never happens if the primary manifest is not a manifest list
// (e.g. if the source never returns manifest lists).
func (s *dockerImageSource) GetSignatures(ctx context.Context, instanceDigest *digest.Digest) ([][]byte, error) {
    if err := s.c.detectProperties(ctx); err != nil {
        return nil, err
    }
    switch {
    case s.c.signatureBase != nil:
        return s.getSignaturesFromLookaside(ctx)
        return s.getSignaturesFromLookaside(ctx, instanceDigest)
    case s.c.supportsSignatures:
        return s.getSignaturesFromAPIExtension(ctx)
        return s.getSignaturesFromAPIExtension(ctx, instanceDigest)
    default:
        return [][]byte{}, nil
    }
}
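GetManifest (and GetSignatures) now accept an optional instanceDigest to address one entry of a manifest list, replacing the removed GetTargetManifest. A sketch of both call shapes; the second call reuses the primary manifest's digest purely for illustration, where a real caller would take the digest from a parsed manifest list:

```go
package main

import (
	"context"
	"log"

	"github.com/containers/image/transports/alltransports"
	"github.com/opencontainers/go-digest"
)

func main() {
	ctx := context.Background()
	ref, err := alltransports.ParseImageName("docker://docker.io/library/busybox:latest")
	if err != nil {
		log.Fatal(err)
	}
	src, err := ref.NewImageSource(ctx, nil)
	if err != nil {
		log.Fatal(err)
	}
	defer src.Close()

	// nil instanceDigest: the manifest the tag points at (possibly a list).
	manifestBlob, mimeType, err := src.GetManifest(ctx, nil)
	if err != nil {
		log.Fatal(err)
	}
	_ = mimeType

	// Non-nil instanceDigest: fetch a specific manifest by digest.
	instance := digest.FromBytes(manifestBlob)
	_, _, _ = src.GetManifest(ctx, &instance)
}
```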

// manifestDigest returns a digest of the manifest, either from the supplied reference or from a fetched manifest.
func (s *dockerImageSource) manifestDigest(ctx context.Context) (digest.Digest, error) {
// manifestDigest returns a digest of the manifest, from instanceDigest if non-nil; or from the supplied reference,
// or finally, from a fetched manifest.
func (s *dockerImageSource) manifestDigest(ctx context.Context, instanceDigest *digest.Digest) (digest.Digest, error) {
    if instanceDigest != nil {
        return *instanceDigest, nil
    }
    if digested, ok := s.ref.ref.(reference.Digested); ok {
        d := digested.Digest()
        if d.Algorithm() == digest.Canonical {
@@ -206,8 +218,8 @@ func (s *dockerImageSource) manifestDigest(ctx context.Context) (digest.Digest,

// getSignaturesFromLookaside implements GetSignatures() from the lookaside location configured in s.c.signatureBase,
// which is not nil.
func (s *dockerImageSource) getSignaturesFromLookaside(ctx context.Context) ([][]byte, error) {
    manifestDigest, err := s.manifestDigest(ctx)
func (s *dockerImageSource) getSignaturesFromLookaside(ctx context.Context, instanceDigest *digest.Digest) ([][]byte, error) {
    manifestDigest, err := s.manifestDigest(ctx, instanceDigest)
    if err != nil {
        return nil, err
    }
@@ -276,8 +288,8 @@ func (s *dockerImageSource) getOneSignature(ctx context.Context, url *url.URL) (
}

// getSignaturesFromAPIExtension implements GetSignatures() using the X-Registry-Supports-Signatures API extension.
func (s *dockerImageSource) getSignaturesFromAPIExtension(ctx context.Context) ([][]byte, error) {
    manifestDigest, err := s.manifestDigest(ctx)
func (s *dockerImageSource) getSignaturesFromAPIExtension(ctx context.Context, instanceDigest *digest.Digest) ([][]byte, error) {
    manifestDigest, err := s.manifestDigest(ctx, instanceDigest)
    if err != nil {
        return nil, err
    }
@@ -297,8 +309,8 @@ func (s *dockerImageSource) getSignaturesFromAPIExtension(ctx context.Context) (
}

// deleteImage deletes the named image from the registry, if supported.
func deleteImage(ctx *types.SystemContext, ref dockerReference) error {
    c, err := newDockerClientFromRef(ctx, ref, true, "push")
func deleteImage(ctx context.Context, sys *types.SystemContext, ref dockerReference) error {
    c, err := newDockerClientFromRef(sys, ref, true, "push")
    if err != nil {
        return err
    }
@@ -313,7 +325,7 @@ func deleteImage(ctx *types.SystemContext, ref dockerReference) error {
        return err
    }
    getPath := fmt.Sprintf(manifestPath, reference.Path(ref.ref), refTail)
    get, err := c.makeRequest(context.TODO(), "GET", getPath, headers, nil)
    get, err := c.makeRequest(ctx, "GET", getPath, headers, nil)
    if err != nil {
        return err
    }
@@ -335,7 +347,7 @@ func deleteImage(ctx *types.SystemContext, ref dockerReference) error {

    // When retrieving the digest from a registry >= 2.3 use the following header:
    // "Accept": "application/vnd.docker.distribution.manifest.v2+json"
    delete, err := c.makeRequest(context.TODO(), "DELETE", deletePath, headers, nil)
    delete, err := c.makeRequest(ctx, "DELETE", deletePath, headers, nil)
    if err != nil {
        return err
    }
22
vendor/github.com/containers/image/docker/docker_transport.go
generated
vendored
@@ -1,6 +1,7 @@
package docker

import (
    "context"
    "fmt"
    "strings"

@@ -122,29 +123,30 @@ func (ref dockerReference) PolicyConfigurationNamespaces() []string {
    return policyconfiguration.DockerReferenceNamespaces(ref.ref)
}

// NewImage returns a types.Image for this reference, possibly specialized for this ImageTransport.
// The caller must call .Close() on the returned Image.
// NewImage returns a types.ImageCloser for this reference, possibly specialized for this ImageTransport.
// The caller must call .Close() on the returned ImageCloser.
// NOTE: If any kind of signature verification should happen, build an UnparsedImage from the value returned by NewImageSource,
// verify that UnparsedImage, and convert it into a real Image via image.FromUnparsedImage.
func (ref dockerReference) NewImage(ctx *types.SystemContext) (types.Image, error) {
    return newImage(ctx, ref)
// WARNING: This may not do the right thing for a manifest list, see image.FromSource for details.
func (ref dockerReference) NewImage(ctx context.Context, sys *types.SystemContext) (types.ImageCloser, error) {
    return newImage(ctx, sys, ref)
}

// NewImageSource returns a types.ImageSource for this reference.
// The caller must call .Close() on the returned ImageSource.
func (ref dockerReference) NewImageSource(ctx *types.SystemContext) (types.ImageSource, error) {
    return newImageSource(ctx, ref)
func (ref dockerReference) NewImageSource(ctx context.Context, sys *types.SystemContext) (types.ImageSource, error) {
    return newImageSource(sys, ref)
}

// NewImageDestination returns a types.ImageDestination for this reference.
// The caller must call .Close() on the returned ImageDestination.
func (ref dockerReference) NewImageDestination(ctx *types.SystemContext) (types.ImageDestination, error) {
    return newImageDestination(ctx, ref)
func (ref dockerReference) NewImageDestination(ctx context.Context, sys *types.SystemContext) (types.ImageDestination, error) {
    return newImageDestination(sys, ref)
}

// DeleteImage deletes the named image from the registry, if supported.
func (ref dockerReference) DeleteImage(ctx *types.SystemContext) error {
    return deleteImage(ctx, ref)
func (ref dockerReference) DeleteImage(ctx context.Context, sys *types.SystemContext) error {
    return deleteImage(ctx, sys, ref)
}

// tagOrDigest returns a tag or digest from the reference.
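The transport methods above all gained a context.Context and return types.ImageCloser instead of types.Image. A sketch of the new caller-side shape, assuming the alltransports parser and a nil (default) system context; whether Inspect takes a context in this exact vendored revision is an assumption:

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containers/image/transports/alltransports"
)

func main() {
	ctx := context.Background()
	ref, err := alltransports.ParseImageName("docker://docker.io/library/busybox:latest")
	if err != nil {
		log.Fatal(err)
	}
	// NewImage now takes (ctx, sys) and returns an ImageCloser which the
	// caller must close, per the interface comment above.
	img, err := ref.NewImage(ctx, nil)
	if err != nil {
		log.Fatal(err)
	}
	defer img.Close()
	fmt.Println(img.Inspect(ctx))
}
```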
16
vendor/github.com/containers/image/docker/lookaside.go
generated
vendored
@@ -45,9 +45,9 @@ type registryNamespace struct {
type signatureStorageBase *url.URL // The only documented value is nil, meaning storage is not supported.

// configuredSignatureStorageBase reads configuration to find an appropriate signature storage URL for ref, for write access if “write”.
func configuredSignatureStorageBase(ctx *types.SystemContext, ref dockerReference, write bool) (signatureStorageBase, error) {
func configuredSignatureStorageBase(sys *types.SystemContext, ref dockerReference, write bool) (signatureStorageBase, error) {
    // FIXME? Loading and parsing the config could be cached across calls.
    dirPath := registriesDirPath(ctx)
    dirPath := registriesDirPath(sys)
    logrus.Debugf(`Using registries.d directory %s for sigstore configuration`, dirPath)
    config, err := loadAndMergeConfig(dirPath)
    if err != nil {
@@ -74,13 +74,13 @@ func configuredSignatureStorageBase(ctx *types.SystemContext, ref dockerReferenc
}

// registriesDirPath returns a path to registries.d
func registriesDirPath(ctx *types.SystemContext) string {
    if ctx != nil {
        if ctx.RegistriesDirPath != "" {
            return ctx.RegistriesDirPath
func registriesDirPath(sys *types.SystemContext) string {
    if sys != nil {
        if sys.RegistriesDirPath != "" {
            return sys.RegistriesDirPath
        }
        if ctx.RootForImplicitAbsolutePaths != "" {
            return filepath.Join(ctx.RootForImplicitAbsolutePaths, systemRegistriesDirPath)
        if sys.RootForImplicitAbsolutePaths != "" {
            return filepath.Join(sys.RootForImplicitAbsolutePaths, systemRegistriesDirPath)
        }
    }
    return systemRegistriesDirPath
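registriesDirPath is unexported; a sketch of a package-internal test of the resolution order shown above (explicit RegistriesDirPath, then RootForImplicitAbsolutePaths, then the built-in default; the paths are illustrative):

```go
package docker

import (
	"testing"

	"github.com/containers/image/types"
)

func TestRegistriesDirPathPrecedence(t *testing.T) {
	// An explicit RegistriesDirPath wins over everything else.
	sys := &types.SystemContext{RegistriesDirPath: "/tmp/registries.d"}
	if got := registriesDirPath(sys); got != "/tmp/registries.d" {
		t.Errorf("explicit path not honored: %q", got)
	}
	// With no SystemContext at all, the built-in default is used.
	if got := registriesDirPath(nil); got != systemRegistriesDirPath {
		t.Errorf("default path not used: %q", got)
	}
}
```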
282
vendor/github.com/containers/image/docker/tarfile/dest.go
generated
vendored
@@ -3,14 +3,17 @@ package tarfile
import (
    "archive/tar"
    "bytes"
    "context"
    "encoding/json"
    "fmt"
    "io"
    "io/ioutil"
    "os"
    "path/filepath"
    "time"

    "github.com/containers/image/docker/reference"
    "github.com/containers/image/internal/tmpdir"
    "github.com/containers/image/manifest"
    "github.com/containers/image/types"
    "github.com/opencontainers/go-digest"
@@ -18,42 +21,33 @@ import (
    "github.com/sirupsen/logrus"
)

const temporaryDirectoryForBigFiles = "/var/tmp" // Do not use the system default of os.TempDir(), usually /tmp, because with systemd it could be a tmpfs.

// Destination is a partial implementation of types.ImageDestination for writing to an io.Writer.
type Destination struct {
    writer io.Writer
    tar *tar.Writer
    repoTag string
    writer   io.Writer
    tar      *tar.Writer
    repoTags []reference.NamedTagged
    // Other state.
    blobs map[digest.Digest]types.BlobInfo // list of already-sent blobs
    blobs  map[digest.Digest]types.BlobInfo // list of already-sent blobs
    config []byte
}

// NewDestination returns a tarfile.Destination for the specified io.Writer.
func NewDestination(dest io.Writer, ref reference.NamedTagged) *Destination {
    // For github.com/docker/docker consumers, this works just as well as
    // refString := ref.String()
    // because when reading the RepoTags strings, github.com/docker/docker/reference
    // normalizes both of them to the same value.
    //
    // Doing it this way to include the normalized-out `docker.io[/library]` does make
    // a difference for github.com/projectatomic/docker consumers, with the
    // “Add --add-registry and --block-registry options to docker daemon” patch.
    // These consumers treat reference strings which include a hostname and reference
    // strings without a hostname differently.
    //
    // Using the host name here is more explicit about the intent, and it has the same
    // effect as (docker pull) in projectatomic/docker, which tags the result using
    // a hostname-qualified reference.
    // See https://github.com/containers/image/issues/72 for a more detailed
    // analysis and explanation.
    refString := fmt.Sprintf("%s:%s", ref.Name(), ref.Tag())
    return &Destination{
        writer: dest,
        tar: tar.NewWriter(dest),
        repoTag: refString,
        blobs: make(map[digest.Digest]types.BlobInfo),
    repoTags := []reference.NamedTagged{}
    if ref != nil {
        repoTags = append(repoTags, ref)
    }
    return &Destination{
        writer:   dest,
        tar:      tar.NewWriter(dest),
        repoTags: repoTags,
        blobs:    make(map[digest.Digest]types.BlobInfo),
    }
}

// AddRepoTags adds the specified tags to the destination's repoTags.
func (d *Destination) AddRepoTags(tags []reference.NamedTagged) {
    d.repoTags = append(d.repoTags, tags...)
}

// SupportedManifestMIMETypes tells which manifest mime types the destination supports
@@ -66,15 +60,10 @@ func (d *Destination) SupportedManifestMIMETypes() []string {

// SupportsSignatures returns an error (to be displayed to the user) if the destination certainly can't store signatures.
// Note: It is still possible for PutSignatures to fail if SupportsSignatures returns nil.
func (d *Destination) SupportsSignatures() error {
func (d *Destination) SupportsSignatures(ctx context.Context) error {
    return errors.Errorf("Storing signatures for docker tar files is not supported")
}

// ShouldCompressLayers returns true iff it is desirable to compress layer blobs written to this destination.
func (d *Destination) ShouldCompressLayers() bool {
    return false
}

// AcceptsForeignLayerURLs returns false iff foreign layers in manifest should be actually
// uploaded to the image destination, true otherwise.
func (d *Destination) AcceptsForeignLayerURLs() bool {
@@ -92,29 +81,22 @@ func (d *Destination) MustMatchRuntimeOS() bool {
// WARNING: The contents of stream are being verified on the fly. Until stream.Read() returns io.EOF, the contents of the data SHOULD NOT be available
// to any other readers for download using the supplied digest.
// If stream.Read() at any time, ESPECIALLY at end of input, returns an error, PutBlob MUST 1) fail, and 2) delete any data stored so far.
func (d *Destination) PutBlob(stream io.Reader, inputInfo types.BlobInfo) (types.BlobInfo, error) {
    if inputInfo.Digest.String() == "" {
        return types.BlobInfo{}, errors.Errorf("Can not stream a blob with unknown digest to docker tarfile")
    }

    ok, size, err := d.HasBlob(inputInfo)
    if err != nil {
        return types.BlobInfo{}, err
    }
    if ok {
        return types.BlobInfo{Digest: inputInfo.Digest, Size: size}, nil
    }

    if inputInfo.Size == -1 { // Ouch, we need to stream the blob into a temporary file just to determine the size.
func (d *Destination) PutBlob(ctx context.Context, stream io.Reader, inputInfo types.BlobInfo, isConfig bool) (types.BlobInfo, error) {
    // Ouch, we need to stream the blob into a temporary file just to determine the size.
    // When the layer is decompressed, we also have to generate the digest on the uncompressed data.
    if inputInfo.Size == -1 || inputInfo.Digest.String() == "" {
        logrus.Debugf("docker tarfile: input with unknown size, streaming to disk first ...")
        streamCopy, err := ioutil.TempFile(temporaryDirectoryForBigFiles, "docker-tarfile-blob")
        streamCopy, err := ioutil.TempFile(tmpdir.TemporaryDirectoryForBigFiles(), "docker-tarfile-blob")
        if err != nil {
            return types.BlobInfo{}, err
        }
        defer os.Remove(streamCopy.Name())
        defer streamCopy.Close()

        size, err := io.Copy(streamCopy, stream)
        digester := digest.Canonical.Digester()
        tee := io.TeeReader(stream, digester.Hash())
        // TODO: This can take quite some time, and should ideally be cancellable using ctx.Done().
        size, err := io.Copy(streamCopy, tee)
        if err != nil {
            return types.BlobInfo{}, err
        }
@@ -123,17 +105,43 @@ func (d *Destination) PutBlob(stream io.Reader, inputInfo types.BlobInfo) (types
            return types.BlobInfo{}, err
        }
        inputInfo.Size = size // inputInfo is a struct, so we are only modifying our copy.
        if inputInfo.Digest == "" {
            inputInfo.Digest = digester.Digest()
        }
        stream = streamCopy
        logrus.Debugf("... streaming done")
    }

    digester := digest.Canonical.Digester()
    tee := io.TeeReader(stream, digester.Hash())
    if err := d.sendFile(inputInfo.Digest.String(), inputInfo.Size, tee); err != nil {
    // Maybe the blob has been already sent
    ok, size, err := d.HasBlob(ctx, inputInfo)
    if err != nil {
        return types.BlobInfo{}, err
    }
    d.blobs[inputInfo.Digest] = types.BlobInfo{Digest: digester.Digest(), Size: inputInfo.Size}
    return types.BlobInfo{Digest: digester.Digest(), Size: inputInfo.Size}, nil
    if ok {
        return types.BlobInfo{Digest: inputInfo.Digest, Size: size}, nil
    }

    if isConfig {
        buf, err := ioutil.ReadAll(stream)
        if err != nil {
            return types.BlobInfo{}, errors.Wrap(err, "Error reading Config file stream")
        }
        d.config = buf
        if err := d.sendFile(inputInfo.Digest.Hex()+".json", inputInfo.Size, bytes.NewReader(buf)); err != nil {
            return types.BlobInfo{}, errors.Wrap(err, "Error writing Config file")
        }
    } else {
        // Note that this can't be e.g. filepath.Join(l.Digest.Hex(), legacyLayerFileName); due to the way
        // writeLegacyLayerMetadata constructs layer IDs differently from inputinfo.Digest values (as described
        // inside it), most of the layers would end up in subdirectories alone without any metadata; (docker load)
        // tries to load every subdirectory as an image and fails if the config is missing. So, keep the layers
        // in the root of the tarball.
        if err := d.sendFile(inputInfo.Digest.Hex()+".tar", inputInfo.Size, stream); err != nil {
            return types.BlobInfo{}, err
        }
    }
    d.blobs[inputInfo.Digest] = types.BlobInfo{Digest: inputInfo.Digest, Size: inputInfo.Size}
    return types.BlobInfo{Digest: inputInfo.Digest, Size: inputInfo.Size}, nil
}

// HasBlob returns true iff the image destination already contains a blob with
@@ -142,7 +150,7 @@ func (d *Destination) PutBlob(stream io.Reader, inputInfo types.BlobInfo) (types
// the blob must also be returned. If the destination does not contain the
// blob, or it is unknown, HasBlob ordinarily returns (false, -1, nil); it
// returns a non-nil error only on an unexpected failure.
func (d *Destination) HasBlob(info types.BlobInfo) (bool, int64, error) {
func (d *Destination) HasBlob(ctx context.Context, info types.BlobInfo) (bool, int64, error) {
    if info.Digest == "" {
        return false, -1, errors.Errorf("Can not check for a blob with unknown digest")
    }
@@ -157,18 +165,38 @@ func (d *Destination) HasBlob(info types.BlobInfo) (bool, int64, error) {
// returned false. Like HasBlob and unlike PutBlob, the digest can not be
// empty. If the blob is a filesystem layer, this signifies that the changes
// it describes need to be applied again when composing a filesystem tree.
func (d *Destination) ReapplyBlob(info types.BlobInfo) (types.BlobInfo, error) {
func (d *Destination) ReapplyBlob(ctx context.Context, info types.BlobInfo) (types.BlobInfo, error) {
    return info, nil
}

func (d *Destination) createRepositoriesFile(rootLayerID string) error {
    repositories := map[string]map[string]string{}
    for _, repoTag := range d.repoTags {
        if val, ok := repositories[repoTag.Name()]; ok {
            val[repoTag.Tag()] = rootLayerID
        } else {
            repositories[repoTag.Name()] = map[string]string{repoTag.Tag(): rootLayerID}
        }
    }

    b, err := json.Marshal(repositories)
    if err != nil {
        return errors.Wrap(err, "Error marshaling repositories")
    }
    if err := d.sendBytes(legacyRepositoriesFileName, b); err != nil {
        return errors.Wrap(err, "Error writing config json file")
    }
    return nil
}

// PutManifest writes manifest to the destination.
// FIXME? This should also receive a MIME type if known, to differentiate between schema versions.
// If the destination is in principle available, refuses this manifest type (e.g. it does not recognize the schema),
// but may accept a different manifest type, the returned error must be an ManifestTypeRejectedError.
func (d *Destination) PutManifest(m []byte) error {
func (d *Destination) PutManifest(ctx context.Context, m []byte) error {
    // We do not bother with types.ManifestTypeRejectedError; our .SupportedManifestMIMETypes() above is already providing only one alternative,
    // so the caller trying a different manifest kind would be pointless.
    var man schema2Manifest
    var man manifest.Schema2
    if err := json.Unmarshal(m, &man); err != nil {
        return errors.Wrap(err, "Error parsing manifest")
    }
@@ -176,14 +204,42 @@ func (d *Destination) PutManifest(m []byte) error {
        return errors.Errorf("Unsupported manifest type, need a Docker schema 2 manifest")
    }

    layerPaths := []string{}
    for _, l := range man.Layers {
        layerPaths = append(layerPaths, l.Digest.String())
    layerPaths, lastLayerID, err := d.writeLegacyLayerMetadata(man.LayersDescriptors)
    if err != nil {
        return err
    }

    if len(man.LayersDescriptors) > 0 {
        if err := d.createRepositoriesFile(lastLayerID); err != nil {
            return err
        }
    }

    repoTags := []string{}
    for _, tag := range d.repoTags {
        // For github.com/docker/docker consumers, this works just as well as
        // refString := ref.String()
        // because when reading the RepoTags strings, github.com/docker/docker/reference
        // normalizes both of them to the same value.
        //
        // Doing it this way to include the normalized-out `docker.io[/library]` does make
        // a difference for github.com/projectatomic/docker consumers, with the
        // “Add --add-registry and --block-registry options to docker daemon” patch.
        // These consumers treat reference strings which include a hostname and reference
        // strings without a hostname differently.
        //
        // Using the host name here is more explicit about the intent, and it has the same
        // effect as (docker pull) in projectatomic/docker, which tags the result using
        // a hostname-qualified reference.
        // See https://github.com/containers/image/issues/72 for a more detailed
        // analysis and explanation.
        refString := fmt.Sprintf("%s:%s", tag.Name(), tag.Tag())
        repoTags = append(repoTags, refString)
    }

    items := []ManifestItem{{
        Config: man.Config.Digest.String(),
        RepoTags: []string{d.repoTag},
        Config:       man.ConfigDescriptor.Digest.Hex() + ".json",
        RepoTags:     repoTags,
        Layers:       layerPaths,
        Parent:       "",
        LayerSources: nil,
@@ -194,12 +250,81 @@ func (d *Destination) PutManifest(m []byte) error {
    }

    // FIXME? Do we also need to support the legacy format?
    return d.sendFile(manifestFileName, int64(len(itemsBytes)), bytes.NewReader(itemsBytes))
    return d.sendBytes(manifestFileName, itemsBytes)
}

// writeLegacyLayerMetadata writes legacy VERSION and configuration files for all layers
func (d *Destination) writeLegacyLayerMetadata(layerDescriptors []manifest.Schema2Descriptor) (layerPaths []string, lastLayerID string, err error) {
    var chainID digest.Digest
    lastLayerID = ""
    for i, l := range layerDescriptors {
        // This chainID value matches the computation in docker/docker/layer.CreateChainID …
        if chainID == "" {
            chainID = l.Digest
        } else {
            chainID = digest.Canonical.FromString(chainID.String() + " " + l.Digest.String())
        }
        // … but note that this image ID does not match docker/docker/image/v1.CreateID. At least recent
        // versions allocate new IDs on load, as long as the IDs we use are unique / cannot loop.
        //
        // Overall, the goal of computing a digest dependent on the full history is to avoid reusing an image ID
        // (and possibly creating a loop in the "parent" links) if a layer with the same DiffID appears two or more
        // times in layersDescriptors. The ChainID values are sufficient for this, the v1.CreateID computation
        // which also mixes in the full image configuration seems unnecessary, at least as long as we are storing
        // only a single image per tarball, i.e. all DiffID prefixes are unique (can’t differ only with
        // configuration).
        layerID := chainID.Hex()

        physicalLayerPath := l.Digest.Hex() + ".tar"
        // The layer itself has been stored into physicalLayerPath in PutManifest.
        // So, use that path for layerPaths used in the non-legacy manifest
        layerPaths = append(layerPaths, physicalLayerPath)
        // ... and create a symlink for the legacy format;
        if err := d.sendSymlink(filepath.Join(layerID, legacyLayerFileName), filepath.Join("..", physicalLayerPath)); err != nil {
            return nil, "", errors.Wrap(err, "Error creating layer symbolic link")
        }

        b := []byte("1.0")
        if err := d.sendBytes(filepath.Join(layerID, legacyVersionFileName), b); err != nil {
            return nil, "", errors.Wrap(err, "Error writing VERSION file")
        }

        // The legacy format requires a config file per layer
        layerConfig := make(map[string]interface{})
        layerConfig["id"] = layerID

        // The root layer doesn't have any parent
        if lastLayerID != "" {
            layerConfig["parent"] = lastLayerID
        }
        // The root layer configuration file is generated by using subpart of the image configuration
        if i == len(layerDescriptors)-1 {
            var config map[string]*json.RawMessage
            err := json.Unmarshal(d.config, &config)
            if err != nil {
                return nil, "", errors.Wrap(err, "Error unmarshaling config")
            }
            for _, attr := range [7]string{"architecture", "config", "container", "container_config", "created", "docker_version", "os"} {
                layerConfig[attr] = config[attr]
            }
        }
        b, err := json.Marshal(layerConfig)
        if err != nil {
            return nil, "", errors.Wrap(err, "Error marshaling layer config")
        }
        if err := d.sendBytes(filepath.Join(layerID, legacyConfigFileName), b); err != nil {
            return nil, "", errors.Wrap(err, "Error writing config json file")
        }

        lastLayerID = layerID
    }
    return layerPaths, lastLayerID, nil
}

type tarFI struct {
    path string
    size int64
    path      string
    size      int64
    isSymlink bool
}

func (t *tarFI) Name() string {
@@ -209,6 +334,9 @@ func (t *tarFI) Size() int64 {
    return t.size
}
func (t *tarFI) Mode() os.FileMode {
    if t.isSymlink {
        return os.ModeSymlink
    }
    return 0444
}
func (t *tarFI) ModTime() time.Time {
@@ -221,6 +349,21 @@ func (t *tarFI) Sys() interface{} {
    return nil
}

// sendSymlink sends a symlink into the tar stream.
func (d *Destination) sendSymlink(path string, target string) error {
    hdr, err := tar.FileInfoHeader(&tarFI{path: path, size: 0, isSymlink: true}, target)
    if err != nil {
        return err
    }
    logrus.Debugf("Sending as tar link %s -> %s", path, target)
    return d.tar.WriteHeader(hdr)
}

// sendBytes sends a path into the tar stream.
func (d *Destination) sendBytes(path string, b []byte) error {
    return d.sendFile(path, int64(len(b)), bytes.NewReader(b))
}

// sendFile sends a file into the tar stream.
func (d *Destination) sendFile(path string, expectedSize int64, stream io.Reader) error {
    hdr, err := tar.FileInfoHeader(&tarFI{path: path, size: expectedSize}, "")
@@ -231,6 +374,7 @@ func (d *Destination) sendFile(path string, expectedSize int64, stream io.Reader
    if err := d.tar.WriteHeader(hdr); err != nil {
        return err
    }
    // TODO: This can take quite some time, and should ideally be cancellable using a context.Context.
    size, err := io.Copy(d.tar, stream)
    if err != nil {
        return err
@@ -244,7 +388,7 @@ func (d *Destination) sendFile(path string, expectedSize int64, stream io.Reader
// PutSignatures adds the given signatures to the docker tarfile (currently not
// supported). MUST be called after PutManifest (signatures reference manifest
// contents)
func (d *Destination) PutSignatures(signatures [][]byte) error {
func (d *Destination) PutSignatures(ctx context.Context, signatures [][]byte) error {
    if len(signatures) != 0 {
        return errors.Errorf("Storing signatures for docker tar files is not supported")
    }
@@ -253,6 +397,6 @@ func (d *Destination) PutSignatures(signatures [][]byte) error {

// Commit finishes writing data to the underlying io.Writer.
// It is the caller's responsibility to close it, if necessary.
func (d *Destination) Commit() error {
func (d *Destination) Commit(ctx context.Context) error {
    return d.tar.Close()
}

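The chainID arithmetic in `writeLegacyLayerMetadata` above is easy to misread in diff form; the following is a minimal standalone sketch of the same scheme (assuming only `github.com/opencontainers/go-digest`), as an illustration rather than a substitute for the vendored code:

```go
package main

import (
	"fmt"

	"github.com/opencontainers/go-digest"
)

// chainIDs mirrors the loop in writeLegacyLayerMetadata: the first ID is the
// layer's own digest, and each subsequent ID is the canonical digest of
// "<previous chainID> <layer digest>", so a DiffID repeated in the layer list
// still produces unique, non-looping legacy layer IDs.
func chainIDs(layerDigests []digest.Digest) []digest.Digest {
	var ids []digest.Digest
	var chainID digest.Digest
	for _, d := range layerDigests {
		if chainID == "" {
			chainID = d
		} else {
			chainID = digest.Canonical.FromString(chainID.String() + " " + d.String())
		}
		ids = append(ids, chainID)
	}
	return ids
}

func main() {
	layers := []digest.Digest{
		digest.Canonical.FromString("layer one"),
		digest.Canonical.FromString("layer two"),
	}
	for i, id := range chainIDs(layers) {
		fmt.Printf("layer %d -> legacy ID %s\n", i, id.Hex())
	}
}
```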
120
vendor/github.com/containers/image/docker/tarfile/src.go
generated vendored
@@ -3,6 +3,7 @@ package tarfile
import (
    "archive/tar"
    "bytes"
    "compress/gzip"
    "context"
    "encoding/json"
    "io"
@@ -10,6 +11,7 @@ import (
    "os"
    "path"

    "github.com/containers/image/internal/tmpdir"
    "github.com/containers/image/manifest"
    "github.com/containers/image/pkg/compression"
    "github.com/containers/image/types"
@@ -19,13 +21,14 @@ import (

// Source is a partial implementation of types.ImageSource for reading from tarPath.
type Source struct {
    tarPath string
    tarPath              string
    removeTarPathOnClose bool // Remove temp file on close if true
    // The following data is only available after ensureCachedDataIsPresent() succeeds
    tarManifest  *ManifestItem // nil if not available yet.
    configBytes  []byte
    configDigest digest.Digest
    orderedDiffIDList []diffID
    knownLayers map[diffID]*layerInfo
    orderedDiffIDList []digest.Digest
    knownLayers       map[digest.Digest]*layerInfo
    // Other state
    generatedManifest []byte // Private cache for GetManifest(), nil if not set yet.
}
@@ -35,14 +38,59 @@ type layerInfo struct {
    size int64
}

// NewSource returns a tarfile.Source for the specified path.
func NewSource(path string) *Source {
    // TODO: We could add support for multiple images in a single archive, so
    // that people could use docker-archive:opensuse.tar:opensuse:leap as
    // the source of an image.
    return &Source{
        tarPath: path,
// TODO: We could add support for multiple images in a single archive, so
// that people could use docker-archive:opensuse.tar:opensuse:leap as
// the source of an image.
// To do for both the NewSourceFromFile and NewSourceFromStream functions

// NewSourceFromFile returns a tarfile.Source for the specified path.
// NewSourceFromFile supports both compressed and uncompressed input.
func NewSourceFromFile(path string) (*Source, error) {
    file, err := os.Open(path)
    if err != nil {
        return nil, errors.Wrapf(err, "error opening file %q", path)
    }
    defer file.Close()

    reader, err := gzip.NewReader(file)
    if err != nil {
        return &Source{
            tarPath: path,
        }, nil
    }
    defer reader.Close()

    return NewSourceFromStream(reader)
}

// NewSourceFromStream returns a tarfile.Source for the specified inputStream, which must be uncompressed.
// The caller can close the inputStream immediately after NewSourceFromStream returns.
func NewSourceFromStream(inputStream io.Reader) (*Source, error) {
    // FIXME: use SystemContext here.
    // Save inputStream to a temporary file
    tarCopyFile, err := ioutil.TempFile(tmpdir.TemporaryDirectoryForBigFiles(), "docker-tar")
    if err != nil {
        return nil, errors.Wrap(err, "error creating temporary file")
    }
    defer tarCopyFile.Close()

    succeeded := false
    defer func() {
        if !succeeded {
            os.Remove(tarCopyFile.Name())
        }
    }()

    // TODO: This can take quite some time, and should ideally be cancellable using a context.Context.
    if _, err := io.Copy(tarCopyFile, inputStream); err != nil {
        return nil, errors.Wrapf(err, "error copying contents to temporary file %q", tarCopyFile.Name())
    }
    succeeded = true

    return &Source{
        tarPath:              tarCopyFile.Name(),
        removeTarPathOnClose: true,
    }, nil
}

// tarReadCloser is a way to close the backing file of a tar.Reader when the user no longer needs the tar component.
@@ -156,7 +204,7 @@ func (s *Source) ensureCachedDataIsPresent() error {
    if err != nil {
        return err
    }
    var parsedConfig image // Most fields omitted, we only care about layer DiffIDs.
    var parsedConfig manifest.Schema2Image // There's a lot of info there, but we only really care about layer DiffIDs.
    if err := json.Unmarshal(configBytes, &parsedConfig); err != nil {
        return errors.Wrapf(err, "Error decoding tar config %s", tarManifest[0].Config)
    }
@@ -189,17 +237,25 @@ func (s *Source) loadTarManifest() ([]ManifestItem, error) {
    return items, nil
}

// Close removes resources associated with an initialized Source, if any.
func (s *Source) Close() error {
    if s.removeTarPathOnClose {
        return os.Remove(s.tarPath)
    }
    return nil
}

// LoadTarManifest loads and decodes the manifest.json
func (s *Source) LoadTarManifest() ([]ManifestItem, error) {
    return s.loadTarManifest()
}

func (s *Source) prepareLayerData(tarManifest *ManifestItem, parsedConfig *image) (map[diffID]*layerInfo, error) {
func (s *Source) prepareLayerData(tarManifest *ManifestItem, parsedConfig *manifest.Schema2Image) (map[digest.Digest]*layerInfo, error) {
    // Collect layer data available in manifest and config.
    if len(tarManifest.Layers) != len(parsedConfig.RootFS.DiffIDs) {
        return nil, errors.Errorf("Inconsistent layer count: %d in manifest, %d in config", len(tarManifest.Layers), len(parsedConfig.RootFS.DiffIDs))
    }
    knownLayers := map[diffID]*layerInfo{}
    knownLayers := map[digest.Digest]*layerInfo{}
    unknownLayerSizes := map[string]*layerInfo{} // Points into knownLayers, a "to do list" of items with unknown sizes.
    for i, diffID := range parsedConfig.RootFS.DiffIDs {
        if _, ok := knownLayers[diffID]; ok {
@@ -249,28 +305,34 @@ func (s *Source) prepareLayerData(tarManifest *ManifestItem, parsedConfig *image

// GetManifest returns the image's manifest along with its MIME type (which may be empty when it can't be determined but the manifest is available).
// It may use a remote (= slow) service.
func (s *Source) GetManifest() ([]byte, string, error) {
// If instanceDigest is not nil, it contains a digest of the specific manifest instance to retrieve (when the primary manifest is a manifest list);
// this never happens if the primary manifest is not a manifest list (e.g. if the source never returns manifest lists).
func (s *Source) GetManifest(ctx context.Context, instanceDigest *digest.Digest) ([]byte, string, error) {
    if instanceDigest != nil {
        // How did we even get here? GetManifest(ctx, nil) has returned a manifest.DockerV2Schema2MediaType.
        return nil, "", errors.Errorf(`Manifest lists are not supported by "docker-daemon:"`)
    }
    if s.generatedManifest == nil {
        if err := s.ensureCachedDataIsPresent(); err != nil {
            return nil, "", err
        }
        m := schema2Manifest{
        m := manifest.Schema2{
            SchemaVersion: 2,
            MediaType:     manifest.DockerV2Schema2MediaType,
            Config: distributionDescriptor{
            ConfigDescriptor: manifest.Schema2Descriptor{
                MediaType: manifest.DockerV2Schema2ConfigMediaType,
                Size:      int64(len(s.configBytes)),
                Digest:    s.configDigest,
            },
            Layers: []distributionDescriptor{},
            LayersDescriptors: []manifest.Schema2Descriptor{},
        }
        for _, diffID := range s.orderedDiffIDList {
            li, ok := s.knownLayers[diffID]
            if !ok {
                return nil, "", errors.Errorf("Internal inconsistency: Information about layer %s missing", diffID)
            }
            m.Layers = append(m.Layers, distributionDescriptor{
                Digest: digest.Digest(diffID), // diffID is a digest of the uncompressed tarball
            m.LayersDescriptors = append(m.LayersDescriptors, manifest.Schema2Descriptor{
                Digest:    diffID, // diffID is a digest of the uncompressed tarball
                MediaType: manifest.DockerV2Schema2LayerMediaType,
                Size:      li.size,
            })
@@ -284,13 +346,6 @@ func (s *Source) GetManifest() ([]byte, string, error) {
    return s.generatedManifest, manifest.DockerV2Schema2MediaType, nil
}

// GetTargetManifest returns an image's manifest given a digest. This is mainly used to retrieve a single image's manifest
// out of a manifest list.
func (s *Source) GetTargetManifest(digest digest.Digest) ([]byte, string, error) {
    // How did we even get here? GetManifest() above has returned a manifest.DockerV2Schema2MediaType.
    return nil, "", errors.Errorf(`Manifest lists are not supported by "docker-daemon:"`)
}

type readCloseWrapper struct {
    io.Reader
    closeFunc func() error
@@ -304,7 +359,7 @@ func (r readCloseWrapper) Close() error {
}

// GetBlob returns a stream for the specified blob, and the blob’s size (or -1 if unknown).
func (s *Source) GetBlob(info types.BlobInfo) (io.ReadCloser, int64, error) {
func (s *Source) GetBlob(ctx context.Context, info types.BlobInfo) (io.ReadCloser, int64, error) {
    if err := s.ensureCachedDataIsPresent(); err != nil {
        return nil, 0, err
    }
@@ -313,7 +368,7 @@ func (s *Source) GetBlob(info types.BlobInfo) (io.ReadCloser, int64, error) {
        return ioutil.NopCloser(bytes.NewReader(s.configBytes)), int64(len(s.configBytes)), nil
    }

    if li, ok := s.knownLayers[diffID(info.Digest)]; ok { // diffID is a digest of the uncompressed tarball,
    if li, ok := s.knownLayers[info.Digest]; ok { // diffID is a digest of the uncompressed tarball,
        stream, err := s.openTarComponent(li.path)
        if err != nil {
            return nil, 0, err
@@ -355,6 +410,13 @@ func (s *Source) GetBlob(info types.BlobInfo) (io.ReadCloser, int64, error) {
}

// GetSignatures returns the image's signatures. It may use a remote (= slow) service.
func (s *Source) GetSignatures(ctx context.Context) ([][]byte, error) {
// If instanceDigest is not nil, it contains a digest of the specific manifest instance to retrieve signatures for
// (when the primary manifest is a manifest list); this never happens if the primary manifest is not a manifest list
// (e.g. if the source never returns manifest lists).
func (s *Source) GetSignatures(ctx context.Context, instanceDigest *digest.Digest) ([][]byte, error) {
    if instanceDigest != nil {
        // How did we even get here? GetManifest(ctx, nil) has returned a manifest.DockerV2Schema2MediaType.
        return nil, errors.Errorf(`Manifest lists are not supported by "docker-daemon:"`)
    }
    return [][]byte{}, nil
}

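Putting the pieces of this file together, a caller might read a `docker save`-style archive roughly as follows. This is a hedged sketch only: the file path is hypothetical, error handling is abbreviated, and the import path is the vendored one shown above.

```go
package main

import (
	"fmt"
	"log"

	"github.com/containers/image/docker/tarfile"
)

func main() {
	// NewSourceFromFile (above) transparently handles gzip-compressed input.
	src, err := tarfile.NewSourceFromFile("/tmp/busybox.tar") // hypothetical path
	if err != nil {
		log.Fatal(err)
	}
	defer src.Close()

	// LoadTarManifest decodes the top-level manifest.json of the archive.
	items, err := src.LoadTarManifest()
	if err != nil {
		log.Fatal(err)
	}
	for _, item := range items {
		fmt.Println(item.Config, item.RepoTags)
	}
}
```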
48
vendor/github.com/containers/image/docker/tarfile/types.go
generated vendored
@@ -1,16 +1,19 @@
package tarfile

import "github.com/opencontainers/go-digest"
import (
    "github.com/containers/image/manifest"
    "github.com/opencontainers/go-digest"
)

// Various data structures.

// Based on github.com/docker/docker/image/tarexport/tarexport.go
const (
    manifestFileName = "manifest.json"
    // legacyLayerFileName = "layer.tar"
    // legacyConfigFileName = "json"
    // legacyVersionFileName = "VERSION"
    // legacyRepositoriesFileName = "repositories"
    manifestFileName           = "manifest.json"
    legacyLayerFileName        = "layer.tar"
    legacyConfigFileName       = "json"
    legacyVersionFileName      = "VERSION"
    legacyRepositoriesFileName = "repositories"
)

// ManifestItem is an element of the array stored in the top-level manifest.json file.
@@ -18,37 +21,8 @@ type ManifestItem struct {
    Config   string
    RepoTags []string
    Layers   []string
    Parent imageID `json:",omitempty"`
    LayerSources map[diffID]distributionDescriptor `json:",omitempty"`
    Parent       imageID                                       `json:",omitempty"`
    LayerSources map[digest.Digest]manifest.Schema2Descriptor `json:",omitempty"`
}

type imageID string
type diffID digest.Digest

// Based on github.com/docker/distribution/blobs.go
type distributionDescriptor struct {
    MediaType string        `json:"mediaType,omitempty"`
    Size      int64         `json:"size,omitempty"`
    Digest    digest.Digest `json:"digest,omitempty"`
    URLs      []string      `json:"urls,omitempty"`
}

// Based on github.com/docker/distribution/manifest/schema2/manifest.go
// FIXME: We are repeating this all over the place; make a public copy?
type schema2Manifest struct {
    SchemaVersion int                      `json:"schemaVersion"`
    MediaType     string                   `json:"mediaType,omitempty"`
    Config        distributionDescriptor   `json:"config"`
    Layers        []distributionDescriptor `json:"layers"`
}

// Based on github.com/docker/docker/image/image.go
// MOST CONTENT OMITTED AS UNNECESSARY
type image struct {
    RootFS *rootFS `json:"rootfs,omitempty"`
}

type rootFS struct {
    Type    string   `json:"type"`
    DiffIDs []diffID `json:"diff_ids,omitempty"`
}

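Since `ManifestItem` simply models the top-level `manifest.json` of a `docker save` archive (a JSON array with one element per image), decoding one takes only a few lines; the sample document below is made up for illustration:

```go
package main

import (
	"encoding/json"
	"fmt"
	"log"
)

// manifestItem is a trimmed-down stand-in for ManifestItem above, covering
// only the always-present fields.
type manifestItem struct {
	Config   string
	RepoTags []string
	Layers   []string
}

func main() {
	// Hypothetical manifest.json contents, matching the layout PutManifest writes.
	data := []byte(`[{"Config":"abc123.json","RepoTags":["docker.io/library/busybox:latest"],"Layers":["def456.tar"]}]`)
	var items []manifestItem
	if err := json.Unmarshal(data, &items); err != nil {
		log.Fatal(err)
	}
	fmt.Println(items[0].RepoTags) // [docker.io/library/busybox:latest]
}
```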
66
vendor/github.com/containers/image/docs/atomic-signature-embedded-json.json
generated vendored Normal file
@@ -0,0 +1,66 @@
{
  "title": "JSON embedded in an atomic container signature",
  "description": "This schema is a supplement to atomic-signature.md in this directory.\n\nConsumers of the JSON MUST use the processing rules documented in atomic-signature.md, especially the requirements for the 'critical' subjobject.\n\nWhenever this schema and atomic-signature.md, or the github.com/containers/image/signature implementation, differ,\nit is the atomic-signature.md document, or the github.com/containers/image/signature implementation, which governs.\n\nUsers are STRONGLY RECOMMENDED to use the github.com/containeres/image/signature implementation instead of writing\ntheir own, ESPECIALLY when consuming signatures, so that the policy.json format can be shared by all image consumers.\n",
  "type": "object",
  "required": [
    "critical",
    "optional"
  ],
  "additionalProperties": false,
  "properties": {
    "critical": {
      "type": "object",
      "required": [
        "type",
        "image",
        "identity"
      ],
      "additionalProperties": false,
      "properties": {
        "type": {
          "type": "string",
          "enum": [
            "atomic container signature"
          ]
        },
        "image": {
          "type": "object",
          "required": [
            "docker-manifest-digest"
          ],
          "additionalProperties": false,
          "properties": {
            "docker-manifest-digest": {
              "type": "string"
            }
          }
        },
        "identity": {
          "type": "object",
          "required": [
            "docker-reference"
          ],
          "additionalProperties": false,
          "properties": {
            "docker-reference": {
              "type": "string"
            }
          }
        }
      }
    },
    "optional": {
      "type": "object",
      "description": "All members are optional, but if they are included, they must be valid.",
      "additionalProperties": true,
      "properties": {
        "creator": {
          "type": "string"
        },
        "timestamp": {
          "type": "integer"
        }
      }
    }
  }
}
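The schema above can be exercised mechanically. A sketch using the third-party `github.com/xeipuuv/gojsonschema` validator follows; the validator choice and the schema file path are assumptions of this example, not dependencies of the vendored code:

```go
package main

import (
	"fmt"
	"log"

	"github.com/xeipuuv/gojsonschema"
)

func main() {
	// Hypothetical location of the schema file shown above.
	schema := gojsonschema.NewReferenceLoader("file:///path/to/atomic-signature-embedded-json.json")
	doc := gojsonschema.NewStringLoader(`{
	  "critical": {
	    "type": "atomic container signature",
	    "image": {"docker-manifest-digest": "sha256:817a12c32a39bbe394944ba49de563e085f1d3c5266eb8e9723256bc4448680e"},
	    "identity": {"docker-reference": "docker.io/library/busybox:latest"}
	  },
	  "optional": {}
	}`)
	result, err := gojsonschema.Validate(schema, doc)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("valid:", result.Valid())
	for _, desc := range result.Errors() {
		fmt.Println("-", desc)
	}
}
```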
241
vendor/github.com/containers/image/docs/atomic-signature.md
generated vendored Normal file
@@ -0,0 +1,241 @@
% atomic-signature(5) Atomic signature format
% Miloslav Trmač
% March 2017

# Atomic signature format

This document describes the format of “atomic” container signatures,
as implemented by the `github.com/containers/image/signature` package.

Most users should be able to consume these signatures by using the `github.com/containers/image/signature` package
(preferably through the higher-level `signature.PolicyContext` interface)
without having to care about the details of the format described below.
This documentation exists primarily for maintainers of the package
and to allow independent reimplementations.

## High-level overview

The signature provides an end-to-end authenticated claim that a container image
has been approved by a specific party (e.g. the creator of the image as their work,
an automated build system as a result of an automated build,
a company IT department approving the image for production) under a specified _identity_
(e.g. an OS base image / specific application, with a specific version).

An atomic container signature consists of a cryptographic signature which identifies
and authenticates who signed the image, and carries as a signed payload a JSON document.
The JSON document identifies the image being signed, claims a specific identity of the
image and, if applicable, contains other information about the image.

The signatures do not modify the container image (the layers, configuration, manifest, …);
e.g. their presence does not change the manifest digest used to identify the image in
docker/distribution servers; rather, the signatures are associated with an immutable image.
An image can have any number of signatures, so signature distribution systems SHOULD support
associating more than one signature with an image.

## The cryptographic signature

As distributed, the atomic container signature is a blob which contains a cryptographic signature
in an industry-standard format, carrying a signed JSON payload (i.e. the blob contains both the
JSON document and a signature of the JSON document; it is not a “detached signature” with
independent blobs containing the JSON document and a cryptographic signature).

Currently the only defined cryptographic signature format is an OpenPGP signature (RFC 4880),
but others may be added in the future. (The blob does not contain metadata identifying the
cryptographic signature format. It is expected that most formats are sufficiently self-describing
that this is not necessary and the configured expected public key provides another indication
of the expected cryptographic signature format. Such metadata may be added in the future for
newly added cryptographic signature formats, if necessary.)

Consumers of atomic container signatures SHOULD verify the cryptographic signature
against one or more trusted public keys
(e.g. defined in a [policy.json signature verification policy file](policy.json.md))
before parsing or processing the JSON payload in _any_ way,
in particular they SHOULD stop processing the container signature
if the cryptographic signature verification fails, without even starting to process the JSON payload.

(Consumers MAY extract identification of the signing key and other metadata from the cryptographic signature,
and the JSON payload, without verifying the signature, if the purpose is to allow managing the signature blobs,
e.g. to list the authors and image identities of signatures associated with a single container image;
if so, they SHOULD design the output of such processing to minimize the risk of users considering the output trusted
or in any way usable for making policy decisions about the image.)

### OpenPGP signature verification

When verifying a cryptographic signature in the OpenPGP format,
the consumer MUST verify at least the following aspects of the signature
(like the `github.com/containers/image/signature` package does):

- The blob MUST be a “Signed Message” as defined in RFC 4880 section 11.3.
  (e.g. it MUST NOT be an unsigned “Literal Message”, or any other non-signature format).
- The signature MUST have been made by an expected key trusted for the purpose (and the specific container image).
- The signature MUST be correctly formed and pass the cryptographic validation.
- The signature MUST correctly authenticate the included JSON payload
  (in particular, the parsing of the JSON payload MUST NOT start before the complete payload has been cryptographically authenticated).
- The signature MUST NOT be expired.

The consumer SHOULD have tests for its verification code which verify that signatures failing any of the above are rejected.

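As a reading aid, here is one possible shape of this verification flow in Go, using `golang.org/x/crypto/openpgp`; a sketch only — the real `github.com/containers/image/signature` package additionally enforces key expiry and restricts the accepted packet formats:

```go
package signaturecheck

import (
	"bytes"
	"errors"
	"io/ioutil"
	"os"

	"golang.org/x/crypto/openpgp"
)

// verifySignedBlob checks that blob is an OpenPGP “Signed Message” made by a
// key in the keyring at keyringPath, and returns the authenticated payload.
func verifySignedBlob(keyringPath string, blob []byte) ([]byte, error) {
	f, err := os.Open(keyringPath)
	if err != nil {
		return nil, err
	}
	defer f.Close()
	keyring, err := openpgp.ReadKeyRing(f)
	if err != nil {
		return nil, err
	}

	md, err := openpgp.ReadMessage(bytes.NewReader(blob), keyring, nil, nil)
	if err != nil {
		return nil, err
	}
	// The payload must be read to EOF before the signature result is known…
	content, err := ioutil.ReadAll(md.UnverifiedBody)
	if err != nil {
		return nil, err
	}
	// …and must not be used unless all of the checks below pass.
	if md.SignatureError != nil {
		return nil, md.SignatureError
	}
	if !md.IsSigned || md.SignedBy == nil {
		return nil, errors.New("blob is not signed by a known key")
	}
	return content, nil
}
```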
## JSON processing and forward compatibility

The payload of the cryptographic signature is a JSON document (RFC 7159).
Consumers SHOULD parse it very strictly,
refusing any signature which violates the expected format (e.g. missing members, incorrect member types)
or can be interpreted ambiguously (e.g. a duplicated member in a JSON object).

Any violations of the JSON format or of other requirements in this document MAY be accepted if the JSON document can be recognized
to have been created by a known-incorrect implementation (see [`optional.creator`](#optionalcreator) below)
and if the semantics of the invalid document, as created by such an implementation, is clear.

The top-level value of the JSON document MUST be a JSON object with exactly two members, `critical` and `optional`,
each a JSON object.

The `critical` object MUST contain a `type` member identifying the document as an atomic container signature
(as defined [below](#criticaltype))
and signature consumers MUST reject signatures which do not have this member or in which this member does not have the expected value.

To ensure forward compatibility (allowing older signature consumers to correctly
accept or reject signatures created at a later date, with possible extensions to this format),
consumers MUST reject the signature if the `critical` object, or _any_ of its subobjects,
contain _any_ member or data value which is unrecognized, unsupported, invalid, or in any other way unexpected.
At a minimum, this includes unrecognized members in a JSON object, or incorrect types of expected members.

For the same reason, consumers SHOULD accept any members with unrecognized names in the `optional` object,
and MAY accept signatures where the object member is recognized but unsupported, or the value of the member is unsupported.
Consumers still SHOULD reject signatures where a member of an `optional` object is supported but the value is recognized as invalid.

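One way to implement this asymmetry — strict for `critical` and the top level, tolerant for `optional` — is sketched below, assuming the cryptographic verification has already succeeded. Note that `encoding/json` silently keeps the last of duplicated members, so a fully conforming consumer needs a more paranoid parser than this:

```go
package signaturecheck

import (
	"encoding/json"
	"errors"
)

type criticalPart struct {
	Type     string          `json:"type"`
	Image    json.RawMessage `json:"image"`
	Identity json.RawMessage `json:"identity"`
}

// parseSignaturePayload rejects unknown members at the top level and inside
// critical, but deliberately ignores unknown members inside optional.
func parseSignaturePayload(payload []byte) (*criticalPart, error) {
	var top map[string]json.RawMessage
	if err := json.Unmarshal(payload, &top); err != nil {
		return nil, err
	}
	if len(top) != 2 || top["critical"] == nil || top["optional"] == nil {
		return nil, errors.New("exactly the members critical and optional are required")
	}

	var members map[string]json.RawMessage
	if err := json.Unmarshal(top["critical"], &members); err != nil {
		return nil, err
	}
	for name := range members {
		if name != "type" && name != "image" && name != "identity" {
			return nil, errors.New("unexpected member in critical: " + name)
		}
	}

	var c criticalPart
	if err := json.Unmarshal(top["critical"], &c); err != nil {
		return nil, err
	}
	if c.Type != "atomic container signature" {
		return nil, errors.New("unrecognized critical.type")
	}
	return &c, nil
}
```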
## JSON data format

An example of the full format follows, with detailed description below.
To reiterate, consumers of the signature SHOULD perform successful cryptographic verification,
and MUST reject unexpected data in the `critical` object, or in the top-level object, as described above.

```json
{
  "critical": {
    "type": "atomic container signature",
    "image": {
      "docker-manifest-digest": "sha256:817a12c32a39bbe394944ba49de563e085f1d3c5266eb8e9723256bc4448680e"
    },
    "identity": {
      "docker-reference": "docker.io/library/busybox:latest"
    }
  },
  "optional": {
    "creator": "some software package v1.0.1-35",
    "timestamp": 1483228800
  }
}
```

### `critical`

This MUST be a JSON object which contains data critical to correctly evaluating the validity of a signature.

Consumers MUST reject any signature where the `critical` object contains any unrecognized, unsupported, invalid or in any other way unexpected member or data.

### `critical.type`

This MUST be a string with a string value exactly equal to `atomic container signature` (three words, including the spaces).

Signature consumers MUST reject signatures which do not have this member or in which this member does not have exactly the expected value.

(The consumers MAY support signatures with a different value of the `type` member, if any is defined in the future;
if so, the rest of the JSON document is interpreted according to rules defining that value of `critical.type`,
not by this document.)

### `critical.image`

This MUST be a JSON object which identifies the container image this signature applies to.

Consumers MUST reject any signature where the `critical.image` object contains any unrecognized, unsupported, invalid or in any other way unexpected member or data.

(Currently only the `docker-manifest-digest` way of identifying a container image is defined;
alternatives to this may be defined in the future,
but existing consumers are required to reject signatures which use formats they do not support.)

### `critical.image.docker-manifest-digest`

This MUST be a JSON string, in the `github.com/opencontainers/go-digest.Digest` string format.

The value of this member MUST match the manifest of the signed container image, as implemented in the docker/distribution manifest addressing system.

The consumer of the signature SHOULD verify the manifest digest against a fully verified signature before processing the contents of the image manifest in any other way
(e.g. parsing the manifest further or downloading layers of the image).

Implementation notes:
* A single container image manifest may have several valid manifest digest values, using different algorithms.
* For “signed” [docker/distribution schema 1](https://github.com/docker/distribution/blob/master/docs/spec/manifest-v2-1.md) manifests,
  the manifest digest applies to the payload of the JSON web signature, not to the raw manifest blob.

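For example, with `github.com/opencontainers/go-digest`, a consumer holding the raw manifest bytes could compare them against the signed value as sketched here (for “signed” schema 1 manifests, the JWS-payload extraction mentioned above has to happen first):

```go
package signaturecheck

import "github.com/opencontainers/go-digest"

// matchesSignedDigest verifies manifestBytes against the string taken from a
// fully verified signature's critical.image.docker-manifest-digest member.
func matchesSignedDigest(manifestBytes []byte, signed string) (bool, error) {
	d, err := digest.Parse(signed) // validates the "algorithm:hex" format
	if err != nil {
		return false, err
	}
	// Recompute using the algorithm named in the signed digest, because the
	// same manifest has different valid digests under different algorithms.
	return d.Algorithm().FromBytes(manifestBytes) == d, nil
}
```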
### `critical.identity`

This MUST be a JSON object which identifies the claimed identity of the image (usually the purpose of the image, or the application, along with version information),
as asserted by the author of the signature.

Consumers MUST reject any signature where the `critical.identity` object contains any unrecognized, unsupported, invalid or in any other way unexpected member or data.

(Currently only the `docker-reference` way of claiming an image identity/purpose is defined;
alternatives to this may be defined in the future,
but existing consumers are required to reject signatures which use formats they do not support.)

### `critical.identity.docker-reference`

This MUST be a JSON string, in the `github.com/docker/distribution/reference` string format,
and using the same normalization semantics (where e.g. `busybox:latest` is equivalent to `docker.io/library/busybox:latest`).
If the normalization semantics allows multiple string representations of the claimed identity with equivalent meaning,
the `critical.identity.docker-reference` member SHOULD use the fully explicit form (including the full host name and namespaces).

The value of this member MUST match the image identity/purpose expected by the consumer of the image signature and the image
(again, accounting for the `docker/distribution/reference` normalization semantics).

In the most common case, this means that the `critical.identity.docker-reference` value must be equal to the docker/distribution reference used to refer to or download the image.
However, depending on the specific application, users or system administrators may accept less specific matches
(e.g. ignoring the tag value in the signature when pulling the `:latest` tag or when referencing an image by digest),
or they may require `critical.identity.docker-reference` values with a completely different namespace to the reference used to refer to/download the image
(e.g. requiring a `critical.identity.docker-reference` value which identifies the image as coming from a supplier when fetching it from a company-internal mirror of approved images).
The software performing this verification SHOULD allow the users to define such a policy using the [policy.json signature verification policy file format](policy.json.md).

The `critical.identity.docker-reference` value SHOULD contain either a tag or digest;
in most cases, it SHOULD use a tag rather than a digest. (See also the default [`matchRepoDigestOrExact` matching semantics in `policy.json`](policy.json.md#signedby).)

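The normalization in question can be reproduced with `github.com/docker/distribution/reference`; a brief sketch:

```go
package main

import (
	"fmt"
	"log"

	"github.com/docker/distribution/reference"
)

func main() {
	// Both spellings normalize to the same fully explicit form, which is the
	// form the critical.identity.docker-reference member SHOULD carry.
	for _, s := range []string{"busybox:latest", "docker.io/library/busybox:latest"} {
		ref, err := reference.ParseNormalizedNamed(s)
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("%-35s -> %s\n", s, ref.String())
	}
}
```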
### `optional`

This MUST be a JSON object.

Consumers SHOULD accept any members with unrecognized names in the `optional` object,
and MAY accept a signature where the object member is recognized but unsupported, or the value of the member is valid but unsupported.
Consumers still SHOULD reject any signature where a member of an `optional` object is supported but the value is recognized as invalid.

### `optional.creator`

If present, this MUST be a JSON string, identifying the name and version of the software which has created the signature.

The contents of this string are not defined in detail; however each implementation creating atomic container signatures:

- SHOULD define the contents to unambiguously identify the software in practice (e.g. it SHOULD contain the name of the software, not only the version number)
- SHOULD use a build and versioning process which ensures that the contents of this string (e.g. an included version number)
  changes whenever the format or semantics of the generated signature changes in any way;
  it SHOULD NOT be possible for two implementations which use a different format or semantics to have the same `optional.creator` value
- SHOULD use a format which is reasonably easy to parse in software (perhaps using a regexp),
  and which makes it easy enough to recognize a range of versions of a specific implementation
  (e.g. the version of the implementation SHOULD NOT be only a git hash, because git hashes don’t have an easily defined ordering;
  the string should contain a version number, or at least a date of the commit).

Consumers of atomic container signatures MAY recognize specific values or sets of values of `optional.creator`
(perhaps augmented with `optional.timestamp`),
and MAY change their processing of the signature based on these values
(usually to accommodate violations of this specification in past versions of the signing software which cannot be fixed retroactively),
as long as the semantics of the invalid document, as created by such an implementation, is clear.

If consumers of signatures do change their behavior based on the `optional.creator` value,
they SHOULD take care that the way they process the signatures is not inconsistent with
strictly validating signature consumers.
(I.e. it is acceptable for a consumer to accept a signature based on a specific `optional.creator` value
if other implementations would completely reject the signature,
but it would be very undesirable for the two kinds of implementations to accept the signature in different
and inconsistent situations.)

### `optional.timestamp`

If present, this MUST be a JSON number, which is representable as a 64-bit integer, and identifies the time when the signature was created
as the number of seconds since the UNIX epoch (Jan 1 1970 00:00 UTC).
281
vendor/github.com/containers/image/docs/policy.json.md
generated vendored Normal file
@@ -0,0 +1,281 @@
% POLICY.JSON(5) policy.json Man Page
% Miloslav Trmač
% September 2016

# NAME
policy.json - syntax for the signature verification policy file

## DESCRIPTION

Signature verification policy files are used to specify policy, e.g. trusted keys,
applicable when deciding whether to accept an image, or individual signatures of that image, as valid.

The default policy is stored (unless overridden at compile-time) at `/etc/containers/policy.json`;
applications performing verification may allow using a different policy instead.

## FORMAT

The signature verification policy file, usually called `policy.json`,
uses a JSON format. Unlike some other JSON files, its parsing is fairly strict:
unrecognized, duplicated or otherwise invalid fields cause the entire file,
and usually the entire operation, to be rejected.

The purpose of the policy file is to define a set of *policy requirements* for a container image,
usually depending on its location (where it is being pulled from) or otherwise defined identity.

Policy requirements can be defined for:

- An individual *scope* in a *transport*.
  The *transport* values are the same as the transport prefixes when pushing/pulling images (e.g. `docker:`, `atomic:`),
  and *scope* values are defined by each transport; see below for more details.

  Usually, a scope can be defined to match a single image, and various prefixes of
  such a most specific scope define namespaces of matching images.
- A default policy for a single transport, expressed using an empty string as a scope
- A global default policy.

If multiple policy requirements match a given image, only the requirements from the most specific match apply;
the more general policy requirement definitions are ignored.

This is expressed in JSON using the top-level syntax
```js
{
    "default": [/* policy requirements: global default */],
    "transports": {
        transport_name: {
            "": [/* policy requirements: default for transport $transport_name */],
            scope_1: [/* policy requirements: default for $scope_1 in $transport_name */],
            scope_2: [/*…*/]
            /*…*/
        },
        transport_name_2: {/*…*/}
        /*…*/
    }
}
```

The global `default` set of policy requirements is mandatory; all of the other fields
(`transports` itself, any specific transport, the transport-specific default, etc.) are optional.

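Applications usually do not interpret this file by hand but load it through the `github.com/containers/image/signature` package; a sketch follows (treat it as illustrative — the exact method signatures have shifted across versions of the library):

```go
package policycheck

import (
	"fmt"
	"log"

	"github.com/containers/image/signature"
	"github.com/containers/image/types"
)

// checkImage evaluates the configured policy.json requirements against one
// image. img is assumed to come from some transport's image-reference helper.
func checkImage(img types.UnparsedImage) {
	policy, err := signature.NewPolicyFromFile("/etc/containers/policy.json")
	if err != nil {
		log.Fatal(err)
	}
	pc, err := signature.NewPolicyContext(policy)
	if err != nil {
		log.Fatal(err)
	}
	defer pc.Destroy()

	allowed, err := pc.IsRunningImageAllowed(img)
	fmt.Println("allowed:", allowed, "err:", err)
}
```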
<!-- NOTE: Keep this in sync with transports/transports.go! -->
|
||||
## Supported transports and their scopes
|
||||
|
||||
### `atomic:`
|
||||
|
||||
The `atomic:` transport refers to images in an Atomic Registry.
|
||||
|
||||
Supported scopes use the form _hostname_[`:`_port_][`/`_namespace_[`/`_imagestream_ [`:`_tag_]]],
|
||||
i.e. either specifying a complete name of a tagged image, or prefix denoting
|
||||
a host/namespace/image stream.
|
||||
|
||||
*Note:* The _hostname_ and _port_ refer to the Docker registry host and port (the one used
|
||||
e.g. for `docker pull`), _not_ to the OpenShift API host and port.
|
||||
|
||||
### `dir:`
|
||||
|
||||
The `dir:` transport refers to images stored in local directories.
|
||||
|
||||
Supported scopes are paths of directories (either containing a single image or
|
||||
subdirectories possibly containing images).
|
||||
|
||||
*Note:* The paths must be absolute and contain no symlinks. Paths violating these requirements may be silently ignored.
|
||||
|
||||
The top-level scope `"/"` is forbidden; use the transport default scope `""`,
|
||||
for consistency with other transports.
|
||||
|
||||
### `docker:`
|
||||
|
||||
The `docker:` transport refers to images in a registry implementing the "Docker Registry HTTP API V2".
|
||||
|
||||
Scopes matching individual images are named Docker references *in the fully expanded form*, either
|
||||
using a tag or digest. For example, `docker.io/library/busybox:latest` (*not* `busybox:latest`).
|
||||
|
||||
More general scopes are prefixes of individual-image scopes, and specify a repository (by omitting the tag or digest),
|
||||
a repository namespace, or a registry host (by only specifying the host name).
|
||||
|
||||
### `oci:`
|
||||
|
||||
The `oci:` transport refers to images in directories compliant with "Open Container Image Layout Specification".
|
||||
|
||||
Supported scopes use the form _directory_`:`_tag_, and _directory_ referring to
|
||||
a directory containing one or more tags, or any of the parent directories.
|
||||
|
||||
*Note:* See `dir:` above for semantics and restrictions on the directory paths, they apply to `oci:` equivalently.
|
||||
|
||||
### `tarball:`
|
||||
|
||||
The `tarball:` transport refers to tarred up container root filesystems.
|
||||
|
||||
Scopes are ignored.
|
||||
|
||||
## Policy Requirements
|
||||
|
||||
Using the mechanisms above, a set of policy requirements is looked up. The policy requirements
|
||||
are represented as a JSON array of individual requirement objects. For an image to be accepted,
|
||||
*all* of the requirements must be satisfied simulatenously.
|
||||
|
||||
The policy requirements can also be used to decide whether an individual signature is accepted (= is signed by a recognized key of a known author);
|
||||
in that case some requirements may apply only to some signatures, but each signature must be accepted by *at least one* requirement object.
|
||||
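
The two acceptance rules differ: an image needs *all* requirements to pass, while a signature needs *any one* of them. A minimal Go sketch of these semantics (illustration only, not the containers/image implementation; the `requirement` interface below is hypothetical):

```go
package policy

// requirement is a hypothetical stand-in for one policy requirement object.
type requirement interface {
	allowsImage(image []byte) bool    // does this requirement accept the image?
	acceptsSignature(sig []byte) bool // does this requirement accept the signature?
}

// imageAllowed: *all* requirements must be satisfied simultaneously.
func imageAllowed(reqs []requirement, image []byte) bool {
	if len(reqs) == 0 {
		return false // an empty requirements array is invalid, never accepted
	}
	for _, r := range reqs {
		if !r.allowsImage(image) {
			return false
		}
	}
	return true
}

// signatureAccepted: *at least one* requirement must accept the signature.
func signatureAccepted(reqs []requirement, sig []byte) bool {
	for _, r := range reqs {
		if r.acceptsSignature(sig) {
			return true
		}
	}
	return false
}
```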

The following requirement objects are supported:

### `insecureAcceptAnything`

A simple requirement with the following syntax:

```json
{"type":"insecureAcceptAnything"}
```

This requirement accepts any image (but note that other requirements in the array still apply).

When deciding to accept an individual signature, this requirement does not have any effect; it does *not* cause the signature to be accepted, though.

This is useful primarily for policy scopes where no signature verification is required;
because the array of policy requirements must not be empty, this requirement is used
to represent the lack of requirements explicitly.

### `reject`

A simple requirement with the following syntax:

```json
{"type":"reject"}
```

This requirement rejects every image, and every signature.

### `signedBy`

This requirement requires an image to be signed with an expected identity, or accepts a signature if it is using an expected identity and key.

```js
{
    "type":    "signedBy",
    "keyType": "GPGKeys", /* The only currently supported value */
    "keyPath": "/path/to/local/keyring/file",
    "keyData": "base64-encoded-keyring-data",
    "signedIdentity": identity_requirement
}
```
<!-- Later: other keyType values -->

Exactly one of `keyPath` and `keyData` must be present, containing a GPG keyring of one or more public keys. Only signatures made by these keys are accepted.
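
For illustration, `keyData` is simply the keyring file content encoded as base64; a minimal sketch of producing it (assuming a keyring already exported with e.g. `gpg --export`; the path is an example, not mandated anywhere):

```go
package main

import (
	"encoding/base64"
	"fmt"
	"io/ioutil"
	"log"
)

func main() {
	// Read an existing GPG keyring file (example path).
	raw, err := ioutil.ReadFile("/path/to/local/keyring/file")
	if err != nil {
		log.Fatal(err)
	}
	// The resulting string is what goes into the "keyData" field.
	fmt.Println(base64.StdEncoding.EncodeToString(raw))
}
```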

The `signedIdentity` field, a JSON object, specifies what image identity the signature claims about the image.
One of the following alternatives is supported:

- The identity in the signature must exactly match the image identity. Note that with this, referencing an image by digest (with a signature claiming a _repository_`:`_tag_ identity) will fail.

  ```json
  {"type":"matchExact"}
  ```
- If the image identity carries a tag, the identity in the signature must exactly match;
  if the image identity uses a digest reference, the identity in the signature must be in the same repository as the image identity (using any tag).

  (Note that with images identified using digest references, the digest from the reference is validated even before signature verification starts.)

  ```json
  {"type":"matchRepoDigestOrExact"}
  ```
- The identity in the signature must be in the same repository as the image identity. This is useful e.g. to pull an image using the `:latest` tag when the image is signed with a tag specifying an exact image version.

  ```json
  {"type":"matchRepository"}
  ```
- The identity in the signature must exactly match a specified identity.
  This is useful e.g. when locally mirroring images signed using their public identity.

  ```js
  {
      "type": "exactReference",
      "dockerReference": docker_reference_value
  }
  ```
- The identity in the signature must be in the same repository as a specified identity.
  This combines the properties of `matchRepository` and `exactReference`.

  ```js
  {
      "type": "exactRepository",
      "dockerRepository": docker_repository_value
  }
  ```

If the `signedIdentity` field is missing, it is treated as `matchRepoDigestOrExact`.

*Note*: `matchExact`, `matchRepoDigestOrExact` and `matchRepository` can only be used if a Docker-like image identity is
provided by the transport. In particular, the `dir:` and `oci:` transports can only be
used with `exactReference` or `exactRepository`.

<!-- ### `signedBaseLayer` -->

## Examples

It is *strongly* recommended to set the `default` policy to `reject`, and then
selectively allow individual transports and scopes as desired.

### A reasonably locked-down system

(Note that the `/*`…`*/` comments are not valid in JSON, and must not be used in real policies.)

```js
{
    "default": [{"type": "reject"}], /* Reject anything not explicitly allowed */
    "transports": {
        "docker": {
            /* Allow installing images from a specific repository namespace, without cryptographic verification.
               This namespace includes images like openshift/hello-openshift and openshift/origin. */
            "docker.io/openshift": [{"type": "insecureAcceptAnything"}],
            /* Similarly, allow installing the “official” busybox images. Note how the fully expanded
               form, with the explicit /library/, must be used. */
            "docker.io/library/busybox": [{"type": "insecureAcceptAnything"}]
            /* Other docker: images use the global default policy and are rejected */
        },
        "dir": {
            "": [{"type": "insecureAcceptAnything"}] /* Allow any images originating in local directories */
        },
        "atomic": {
            /* The common case: using a known key for a repository or set of repositories */
            "hostname:5000/myns/official": [
                {
                    "type": "signedBy",
                    "keyType": "GPGKeys",
                    "keyPath": "/path/to/official-pubkey.gpg"
                }
            ],
            /* A more complex example, for a repository which contains a mirror of a third-party product,
               which must be signed-off by local IT */
            "hostname:5000/vendor/product": [
                { /* Require the image to be signed by the original vendor, using the vendor's repository location. */
                    "type": "signedBy",
                    "keyType": "GPGKeys",
                    "keyPath": "/path/to/vendor-pubkey.gpg",
                    "signedIdentity": {
                        "type": "exactRepository",
                        "dockerRepository": "vendor-hostname/product/repository"
                    }
                },
                { /* Require the image to _also_ be signed by a local reviewer. */
                    "type": "signedBy",
                    "keyType": "GPGKeys",
                    "keyPath": "/path/to/reviewer-pubkey.gpg"
                }
            ]
        }
    }
}
```

### Completely disable security, allow all images, do not trust any signatures

```json
{
    "default": [{"type": "insecureAcceptAnything"}]
}
```
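
As a hedged sketch of how an application consumes this file: the function names below exist in `github.com/containers/image/signature`, but their exact signatures have changed across versions, so treat this as an illustration rather than a pinned API:

```go
package main

import (
	"log"

	"github.com/containers/image/signature"
)

func main() {
	// Load /etc/containers/policy.json (or any other path the application allows).
	policy, err := signature.NewPolicyFromFile("/etc/containers/policy.json")
	if err != nil {
		log.Fatal(err)
	}
	// A PolicyContext carries the policy through one or more image verifications.
	pc, err := signature.NewPolicyContext(policy)
	if err != nil {
		log.Fatal(err)
	}
	defer pc.Destroy()
	// pc.IsRunningImageAllowed(image) would then be called with an unparsed image
	// to evaluate the matching requirements (newer versions also take a context.Context).
}
```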
## SEE ALSO
atomic(1)

## HISTORY
September 2016, Originally compiled by Miloslav Trmač <mitr@redhat.com>

41 vendor/github.com/containers/image/docs/registries.conf.5.md generated vendored Normal file
@@ -0,0 +1,41 @@
% registries.conf(5) System-wide registry configuration file
% Brent Baude
% Aug 2017

# NAME
registries.conf - Syntax of System Registry Configuration File

# DESCRIPTION
The REGISTRIES configuration file is a system-wide configuration file for container image
registries. The file format is TOML.

# FORMAT
The TOML format is used to build simple lists of registries under two
categories: `search` and `insecure`. Multiple registries can be listed
as a comma-separated list.

Search registries are used when the caller of a container runtime does not fully specify the
container image that they want to execute. These registries are prepended onto the front
of the specified container image until the named image is found at a registry.

Insecure registries: by default, container runtimes use TLS when retrieving images
from a registry. If the registry is not set up with TLS, then the container runtime
will fail to pull images from the registry. If you add the registry to the list of
insecure registries, then the container runtime will attempt to use standard web protocols to
pull the image. This also allows you to pull from a registry with self-signed certificates.
Note that insecure registries can be used for any registry, not just the
registries listed under search.

The following example configuration defines two searchable registries and one
insecure registry.

```
[registries.search]
registries = ["registry1.com", "registry2.com"]

[registries.insecure]
registries = ["registry3.com"]
```

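A minimal sketch of reading this file from Go, assuming the `github.com/BurntSushi/toml` parser (the actual consumers of this file may use a different implementation):

```go
package main

import (
	"fmt"
	"log"

	"github.com/BurntSushi/toml"
)

// registryList mirrors one [registries.*] table in the file.
type registryList struct {
	Registries []string `toml:"registries"`
}

// registriesConf mirrors the two documented categories.
type registriesConf struct {
	Search   registryList `toml:"search"`
	Insecure registryList `toml:"insecure"`
}

func main() {
	var conf struct {
		Registries registriesConf `toml:"registries"`
	}
	if _, err := toml.DecodeFile("/etc/containers/registries.conf", &conf); err != nil {
		log.Fatal(err)
	}
	fmt.Println("search:", conf.Registries.Search.Registries)
	fmt.Println("insecure:", conf.Registries.Insecure.Registries)
}
```
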
# HISTORY
Aug 2017, Originally compiled by Brent Baude <bbaude@redhat.com>

124 vendor/github.com/containers/image/docs/registries.d.md generated vendored Normal file
@@ -0,0 +1,124 @@
% REGISTRIES.D(5) Registries.d Man Page
% Miloslav Trmač
% August 2016

# Registries Configuration Directory

The registries configuration directory contains configuration for various registries
(servers storing remote container images), and for content stored in them,
so that the configuration does not have to be provided in command-line options over and over for every command,
and so that it can be shared by all users of containers/image.

By default (unless overridden at compile-time), the registries configuration directory is `/etc/containers/registries.d`;
applications may allow using a different directory instead.

## Directory Structure

The directory may contain any number of files with the extension `.yaml`,
each using the YAML format. Other than the mandatory extension, names of the files
don’t matter.

The contents of these files are merged together; to have a well-defined and easy to understand
behavior, there can be only one configuration section describing a single namespace within a registry
(in particular there can be at most one `default-docker` section across all files,
and there can be at most one instance of any key under the `docker` section;
these sections are documented later).

Thus, it is forbidden to have two conflicting configurations for a single registry or scope,
and it is also forbidden to split a configuration for a single registry or scope across
more than one file (even if they are not semantically in conflict).

## Registries, Scopes and Search Order

Each YAML file must contain a “YAML mapping” (key-value pairs). Two top-level keys are defined:

- `default-docker` is the _configuration section_ (as documented below)
  for registries implementing "Docker Registry HTTP API V2".

  This key is optional.

- `docker` is a mapping, using individual registries implementing "Docker Registry HTTP API V2",
  or namespaces and individual images within these registries, as keys;
  the value assigned to any such key is a _configuration section_.

  This key is optional.

Scopes matching individual images are named Docker references *in the fully expanded form*, either
using a tag or digest. For example, `docker.io/library/busybox:latest` (*not* `busybox:latest`).

More general scopes are prefixes of individual-image scopes, and specify a repository (by omitting the tag or digest),
a repository namespace, or a registry host (and a port if it differs from the default).

Note that if a registry is accessed using a hostname+port configuration, the port-less hostname
is _not_ used as a parent scope.

When searching for a configuration to apply for an individual container image, only
the configuration for the most-precisely matching scope is used; configuration using
more general scopes is ignored. For example, if _any_ configuration exists for
`docker.io/library/busybox`, the configuration for `docker.io` is ignored
(even if some element of the configuration is defined for `docker.io` and not for `docker.io/library/busybox`).
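
A sketch of this most-specific-match lookup (illustration only; the real containers/image implementation also handles digest references, the `default-docker` fallback, and the hostname:port rule above):

```go
package main

import (
	"fmt"
	"strings"
)

// lookupSection returns the configuration for the most specific scope
// matching ref, trimming one path component (or the tag) at a time.
func lookupSection(docker map[string]string, ref string) (string, bool) {
	scope := ref
	for {
		if section, ok := docker[scope]; ok {
			return section, true
		}
		// Drop the tag first (e.g. "…/busybox:latest" -> "…/busybox"),
		// then successive path components, ending with the registry host.
		if i := strings.LastIndexAny(scope, ":/"); i >= 0 {
			scope = scope[:i]
			continue
		}
		return "", false
	}
}

func main() {
	docker := map[string]string{
		"docker.io/library/busybox": "busybox-specific section",
		"docker.io":                 "registry-wide section",
	}
	section, _ := lookupSection(docker, "docker.io/library/busybox:latest")
	fmt.Println(section) // the busybox-specific section wins over docker.io
}
```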

## Individual Configuration Sections

A single configuration section is selected for a container image using the process
described above. The configuration section is a YAML mapping, with the following keys:

- `sigstore-staging` defines a URL of the signature storage, used for editing it (adding or deleting signatures).

  This key is optional; if it is missing, `sigstore` below is used.

- `sigstore` defines a URL of the signature storage.
  This URL is used for reading existing signatures,
  and if `sigstore-staging` does not exist, also for adding or removing them.

  This key is optional; if it is missing, no signature storage is defined (no signatures
  are downloaded along with images, and adding new signatures is possible only if `sigstore-staging` is defined).

## Examples

### Using Containers from Various Origins

The following demonstrates how to consume and run images from various registries and namespaces:

```yaml
docker:
    registry.database-supplier.com:
        sigstore: https://sigstore.database-supplier.com
    distribution.great-middleware.org:
        sigstore: https://security-team.great-middleware.org/sigstore
    docker.io/web-framework:
        sigstore: https://sigstore.web-framework.io:8080
```

### Developing and Signing Containers, Staging Signatures

For developers in `example.com`:

- Consume most container images using the public servers also used by clients.
- Use a separate signature storage for container images in a namespace corresponding to the developers' department, with a staging storage used before publishing signatures.
- Craft an individual exception for a single branch a specific developer is working on locally.

```yaml
docker:
    registry.example.com:
        sigstore: https://registry-sigstore.example.com
    registry.example.com/mydepartment:
        sigstore: https://sigstore.mydepartment.example.com
        sigstore-staging: file:///mnt/mydepartment/sigstore-staging
    registry.example.com/mydepartment/myproject:mybranch:
        sigstore: http://localhost:4242/sigstore
        sigstore-staging: file:///home/useraccount/webroot/sigstore
```

### A Global Default

If a company publishes its products using a different domain, and a different registry hostname for each of them, it is still possible to use a single signature storage server
without listing each domain individually. This is expected to rarely happen, usually only for staging new signatures.

```yaml
default-docker:
    sigstore-staging: file:///mnt/company/common-sigstore-staging
```

# AUTHORS

Miloslav Trmač <mitr@redhat.com>

136 vendor/github.com/containers/image/docs/signature-protocols.md generated vendored Normal file
@@ -0,0 +1,136 @@
# Signature access protocols

The `github.com/containers/image` library supports signatures implemented as blobs “attached to” an image.
Some image transports (local storage formats and remote protocols) implement these signatures natively
or trivially; for others, the protocol extensions described below are necessary.

## docker/distribution registries—separate storage

### Usage

Any existing docker/distribution registry, whether or not it natively supports signatures,
can be augmented with separate signature storage by configuring a signature storage URL in [`registries.d`](registries.d.md).
`registries.d` can be configured to use one storage URL for a whole docker/distribution server,
or also separate URLs for smaller namespaces or individual repositories within the server
(which e.g. allows image authors to manage their own signature storage while publishing
the images on the public `docker.io` server).

The signature storage URL defines a root of a path hierarchy.
It can be either a `file:///…` URL, pointing to a local directory structure,
or a `http`/`https` URL, pointing to a remote server.
`file:///` signature storage can be both read and written; `http`/`https` only supports reading.

The same path hierarchy is used in both cases, so the HTTP/HTTPS server can be
a simple static web server serving a directory structure created by writing to a `file:///` signature storage.
(This of course does not prevent other server implementations,
e.g. a HTTP server reading signatures from a database.)

The usual workflow for producing and distributing images using the separate storage mechanism
is to configure the repository in `registries.d` with a `sigstore-staging` URL pointing to a private
`file:///` staging area, and a `sigstore` URL pointing to a public web server.
To publish an image, the image author would sign the image as necessary (e.g. using `skopeo copy`),
and then copy the created directory structure from the `file:///` staging area
to a subdirectory of a webroot of the public web server so that the signatures are accessible using the public `sigstore` URL.
The author would also instruct consumers of the image to, or provide a `registries.d` configuration file to,
set up a `sigstore` URL pointing to the public web server.

### Path structure

Given a _base_ signature storage URL configured in `registries.d` as mentioned above,
and a container image stored in a docker/distribution registry using the _fully-expanded_ name
_hostname_`/`_namespaces_`/`_name_{`@`_digest_,`:`_tag_} (e.g. for `docker.io/library/busybox:latest`,
_namespaces_ is `library`, even if the user refers to the image using the shorter syntax as `busybox:latest`),
signatures are accessed using URLs of the form
> _base_`/`_namespaces_`/`_name_`@`_digest-algo_`=`_digest-value_`/signature-`_index_

where _digest-algo_`:`_digest-value_ is a manifest digest usable for referencing the relevant image manifest
(i.e. even if the user referenced the image using a tag,
the signature storage is always disambiguated using digest references).
Note that in the URLs used for signatures,
_digest-algo_ and _digest-value_ are separated using the `=` character,
not `:` like when accessing the manifest using the docker/distribution API.

Within the URL, _index_ is a decimal integer (in the canonical form), starting with 1.
Signatures are stored at URLs with successive _index_ values; to read all of them, start with _index_=1,
and continue reading signatures and increasing _index_ as long as signatures with these _index_ values exist.
Similarly, to add one more signature to an image, find the first _index_ which does not exist, and
then store the new signature using that _index_ value.

There is no way to list existing signatures other than iterating through the successive _index_ values,
and no way to download all of the signatures at once.

### Examples

For a docker/distribution image available as `busybox@sha256:817a12c32a39bbe394944ba49de563e085f1d3c5266eb8e9723256bc4448680e`
(or as `busybox:latest` if the `latest` tag points to a manifest with the same digest),
and with a `registries.d` configuration specifying a `sigstore` URL `https://example.com/sigstore` for the same image,
the following URLs would be accessed to download all signatures:
> - `https://example.com/sigstore/library/busybox@sha256=817a12c32a39bbe394944ba49de563e085f1d3c5266eb8e9723256bc4448680e/signature-1`
> - `https://example.com/sigstore/library/busybox@sha256=817a12c32a39bbe394944ba49de563e085f1d3c5266eb8e9723256bc4448680e/signature-2`
> - …

For a docker/distribution image available as `example.com/ns1/ns2/ns3/repo@somedigest:digestvalue` and the same
`sigstore` URL, the signatures would be available at
> `https://example.com/sigstore/ns1/ns2/ns3/repo@somedigest=digestvalue/signature-1`

and so on.
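
A hedged sketch of deriving these URLs in Go (the path rules follow the description above; nothing here is the library's actual API):

```go
package main

import (
	"fmt"
	"strings"
)

// signatureURL builds the URL for the index-th signature of the image
// repoPath@manifestDigest under the given sigstore base URL, per the
// path structure described above.
func signatureURL(base, repoPath, manifestDigest string, index int) string {
	// "sha256:abcd…" becomes "sha256=abcd…" in signature URLs.
	d := strings.Replace(manifestDigest, ":", "=", 1)
	return fmt.Sprintf("%s/%s@%s/signature-%d", base, repoPath, d, index)
}

func main() {
	// repoPath is the fully-expanded name without the hostname ("library/busybox").
	u := signatureURL("https://example.com/sigstore", "library/busybox",
		"sha256:817a12c32a39bbe394944ba49de563e085f1d3c5266eb8e9723256bc4448680e", 1)
	fmt.Println(u) // matches the first example URL above
}
```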

## (OpenShift) docker/distribution API extension

As of https://github.com/openshift/origin/pull/12504/, the OpenShift-embedded registry also provides
an extension of the docker/distribution API which allows simpler access to the signatures,
using only the docker/distribution API endpoint.

This API is not inherently OpenShift-specific (e.g. the client does not need to know the OpenShift API endpoint,
and credentials sufficient to access the docker/distribution API server are sufficient to access signatures as well),
and it is the preferred way to implement signature storage in registries.

See https://github.com/openshift/openshift-docs/pull/3556 for the upstream documentation of the API.

To read the signatures, any user with access to an image can use the `/extensions/v2/…/signatures/…`
path to read an array of signatures. Use only the signature objects
which have `version` equal to `2`, `type` equal to `atomic`, and read the signature from `content`;
ignore the other fields of the signature object.

To add a single signature, `PUT` a new object with `version` set to `2`, `type` set to `atomic`,
and `content` set to the signature. Also set `name` to a unique name with the form
_digest_`@`_per-image-name_, where _digest_ is an image manifest digest (also used in the URL),
and _per-image-name_ is any unique identifier.

To add more than one signature, add them one at a time. This API does not allow deleting signatures.

Note that because signatures are stored within the cluster-wide image objects,
i.e. different namespaces can not associate different sets of signatures to the same image,
updating signatures requires cluster-wide access to the `imagesignatures` resource
(by default available to the `system:image-signer` role).
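
A sketch of reading signatures through this extension; the exact path segments and the response envelope below are assumptions based on the description above, not a verified API contract:

```go
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"net/http"
)

// signatureObject models the fields described above; "content" is
// base64-encoded in the JSON and decodes into []byte automatically.
type signatureObject struct {
	Version int    `json:"version"`
	Type    string `json:"type"`
	Name    string `json:"name"`
	Content []byte `json:"content"`
}

func main() {
	// Assumed URL shape: registry host, image name, and manifest digest.
	url := "https://registry.example.com/extensions/v2/library/busybox/signatures/" +
		"sha256:817a12c32a39bbe394944ba49de563e085f1d3c5266eb8e9723256bc4448680e"
	resp, err := http.Get(url)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	// Assumed envelope: {"signatures": [ … ]}.
	var body struct {
		Signatures []signatureObject `json:"signatures"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&body); err != nil {
		log.Fatal(err)
	}
	for _, s := range body.Signatures {
		if s.Version == 2 && s.Type == "atomic" {
			fmt.Printf("signature %s: %d bytes\n", s.Name, len(s.Content))
		}
	}
}
```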

## OpenShift-embedded registries

The OpenShift-embedded registry implements the ordinary docker/distribution API,
and it also exposes images through the OpenShift REST API (available through the “API master” servers).

Note: OpenShift versions 1.5 and later support the above-described [docker/distribution API extension](#openshift-dockerdistribution-api-extension),
which is easier to set up and should usually be preferred.
Continue reading for details on using older versions of OpenShift.

As of https://github.com/openshift/origin/pull/9181,
signatures are exposed through the OpenShift API
(i.e. to access the complete image, it is necessary to use both APIs,
in particular to know the URLs for both the docker/distribution and the OpenShift API master endpoints).

To read the signatures, any user with access to an image can use the `imagestreamimages` namespaced
resource to read an `Image` object and its `Signatures` array. Use only the `ImageSignature` objects
which have `Type` equal to `atomic`, and read the signature from `Content`; ignore the other fields of
the `ImageSignature` object.

To add or remove signatures, use the cluster-wide (non-namespaced) `imagesignatures` resource,
with `Type` set to `atomic` and `Content` set to the signature. Signature names must have the form
_digest_`@`_per-image-name_, where _digest_ is an image manifest digest (OpenShift “image name”),
and _per-image-name_ is any unique identifier.

Note that because signatures are stored within the cluster-wide image objects,
i.e. different namespaces can not associate different sets of signatures to the same image,
updating signatures requires cluster-wide access to the `imagesignatures` resource
(by default available to the `system:image-signer` role),
and deleting signatures is strongly discouraged
(it deletes the signature from all namespaces which contain the same image).

57 vendor/github.com/containers/image/image/docker_list.go generated vendored
@@ -1,7 +1,9 @@
package image

import (
"context"
"encoding/json"
"fmt"
"runtime"

"github.com/containers/image/manifest"
@@ -21,7 +23,7 @@ type platformSpec struct {

// A manifestDescriptor references a platform-specific manifest.
type manifestDescriptor struct {
descriptor
manifest.Schema2Descriptor
Platform platformSpec `json:"platform"`
}

@@ -31,22 +33,36 @@ type manifestList struct {
Manifests []manifestDescriptor `json:"manifests"`
}

func manifestSchema2FromManifestList(src types.ImageSource, manblob []byte) (genericManifest, error) {
list := manifestList{}
if err := json.Unmarshal(manblob, &list); err != nil {
return nil, err
// chooseDigestFromManifestList parses blob as a schema2 manifest list,
// and returns the digest of the image appropriate for the current environment.
func chooseDigestFromManifestList(sys *types.SystemContext, blob []byte) (digest.Digest, error) {
wantedArch := runtime.GOARCH
if sys != nil && sys.ArchitectureChoice != "" {
wantedArch = sys.ArchitectureChoice
}
wantedOS := runtime.GOOS
if sys != nil && sys.OSChoice != "" {
wantedOS = sys.OSChoice
}

list := manifestList{}
if err := json.Unmarshal(blob, &list); err != nil {
return "", err
}
var targetManifestDigest digest.Digest
for _, d := range list.Manifests {
if d.Platform.Architecture == runtime.GOARCH && d.Platform.OS == runtime.GOOS {
targetManifestDigest = d.Digest
break
if d.Platform.Architecture == wantedArch && d.Platform.OS == wantedOS {
return d.Digest, nil
}
}
if targetManifestDigest == "" {
return nil, errors.New("no supported platform found in manifest list")
return "", fmt.Errorf("no image found in manifest list for architecture %s, OS %s", wantedArch, wantedOS)
}

func manifestSchema2FromManifestList(ctx context.Context, sys *types.SystemContext, src types.ImageSource, manblob []byte) (genericManifest, error) {
targetManifestDigest, err := chooseDigestFromManifestList(sys, manblob)
if err != nil {
return nil, err
}
manblob, mt, err := src.GetTargetManifest(targetManifestDigest)
manblob, mt, err := src.GetManifest(ctx, &targetManifestDigest)
if err != nil {
return nil, err
}
@@ -59,5 +75,20 @@ func manifestSchema2FromManifestList(src types.ImageSource, manblob []byte) (gen
return nil, errors.Errorf("Manifest image does not match selected manifest digest %s", targetManifestDigest)
}

return manifestInstanceFromBlob(src, manblob, mt)
return manifestInstanceFromBlob(ctx, sys, src, manblob, mt)
}

// ChooseManifestInstanceFromManifestList returns a digest of a manifest appropriate
// for the current system from the manifest available from src.
func ChooseManifestInstanceFromManifestList(ctx context.Context, sys *types.SystemContext, src types.UnparsedImage) (digest.Digest, error) {
// For now this only handles manifest.DockerV2ListMediaType; we can generalize it later,
// probably along with manifest list editing.
blob, mt, err := src.Manifest(ctx)
if err != nil {
return "", err
}
if mt != manifest.DockerV2ListMediaType {
return "", fmt.Errorf("Internal error: Trying to select an image from a non-manifest-list manifest type %s", mt)
}
return chooseDigestFromManifestList(sys, blob)
}

309 vendor/github.com/containers/image/image/docker_schema1.go generated vendored
@@ -1,10 +1,8 @@
package image

import (
"context"
"encoding/json"
"regexp"
"strings"
"time"

"github.com/containers/image/docker/reference"
"github.com/containers/image/manifest"
@@ -14,87 +12,25 @@ import (
"github.com/pkg/errors"
)

var (
validHex = regexp.MustCompile(`^([a-f0-9]{64})$`)
)

type fsLayersSchema1 struct {
BlobSum digest.Digest `json:"blobSum"`
}

type historySchema1 struct {
V1Compatibility string `json:"v1Compatibility"`
}

// historySchema1 is a string containing this. It is similar to v1Image but not the same, in particular note the ThrowAway field.
type v1Compatibility struct {
ID string `json:"id"`
Parent string `json:"parent,omitempty"`
Comment string `json:"comment,omitempty"`
Created time.Time `json:"created"`
ContainerConfig struct {
Cmd []string
} `json:"container_config,omitempty"`
Author string `json:"author,omitempty"`
ThrowAway bool `json:"throwaway,omitempty"`
}

type manifestSchema1 struct {
Name string `json:"name"`
Tag string `json:"tag"`
Architecture string `json:"architecture"`
FSLayers []fsLayersSchema1 `json:"fsLayers"`
History []historySchema1 `json:"history"`
SchemaVersion int `json:"schemaVersion"`
m *manifest.Schema1
}

func manifestSchema1FromManifest(manifest []byte) (genericManifest, error) {
mschema1 := &manifestSchema1{}
if err := json.Unmarshal(manifest, mschema1); err != nil {
return nil, err
}
if mschema1.SchemaVersion != 1 {
return nil, errors.Errorf("unsupported schema version %d", mschema1.SchemaVersion)
}
if len(mschema1.FSLayers) != len(mschema1.History) {
return nil, errors.New("length of history not equal to number of layers")
}
if len(mschema1.FSLayers) == 0 {
return nil, errors.New("no FSLayers in manifest")
}

if err := fixManifestLayers(mschema1); err != nil {
return nil, err
}
return mschema1, nil
}

// manifestSchema1FromComponents builds a new manifestSchema1 from the supplied data.
func manifestSchema1FromComponents(ref reference.Named, fsLayers []fsLayersSchema1, history []historySchema1, architecture string) genericManifest {
var name, tag string
if ref != nil { // Well, what to do if it _is_ nil? Most consumers actually don't use these fields nowadays, so we might as well try not supplying them.
name = reference.Path(ref)
if tagged, ok := ref.(reference.NamedTagged); ok {
tag = tagged.Tag()
}
}
return &manifestSchema1{
Name: name,
Tag: tag,
Architecture: architecture,
FSLayers: fsLayers,
History: history,
SchemaVersion: 1,
}
}

func (m *manifestSchema1) serialize() ([]byte, error) {
// docker/distribution requires a signature even if the incoming data uses the nominally unsigned DockerV2Schema1MediaType.
unsigned, err := json.Marshal(*m)
func manifestSchema1FromManifest(manifestBlob []byte) (genericManifest, error) {
m, err := manifest.Schema1FromManifest(manifestBlob)
if err != nil {
return nil, err
}
return manifest.AddDummyV2S1Signature(unsigned)
return &manifestSchema1{m: m}, nil
}

// manifestSchema1FromComponents builds a new manifestSchema1 from the supplied data.
func manifestSchema1FromComponents(ref reference.Named, fsLayers []manifest.Schema1FSLayers, history []manifest.Schema1History, architecture string) genericManifest {
return &manifestSchema1{m: manifest.Schema1FromComponents(ref, fsLayers, history, architecture)}
}

func (m *manifestSchema1) serialize() ([]byte, error) {
return m.m.Serialize()
}

func (m *manifestSchema1) manifestMIMEType() string {
@@ -104,35 +40,31 @@ func (m *manifestSchema1) manifestMIMEType() string {
// ConfigInfo returns a complete BlobInfo for the separate config object, or a BlobInfo{Digest:""} if there isn't a separate object.
// Note that the config object may not exist in the underlying storage in the return value of UpdatedImage! Use ConfigBlob() below.
func (m *manifestSchema1) ConfigInfo() types.BlobInfo {
return types.BlobInfo{}
return m.m.ConfigInfo()
}

// ConfigBlob returns the blob described by ConfigInfo, iff ConfigInfo().Digest != ""; nil otherwise.
// The result is cached; it is OK to call this however often you need.
func (m *manifestSchema1) ConfigBlob() ([]byte, error) {
func (m *manifestSchema1) ConfigBlob(context.Context) ([]byte, error) {
return nil, nil
}

// OCIConfig returns the image configuration as per OCI v1 image-spec. Information about
// layers in the resulting configuration isn't guaranteed to be returned to due how
// old image manifests work (docker v2s1 especially).
func (m *manifestSchema1) OCIConfig() (*imgspecv1.Image, error) {
func (m *manifestSchema1) OCIConfig(ctx context.Context) (*imgspecv1.Image, error) {
v2s2, err := m.convertToManifestSchema2(nil, nil)
if err != nil {
return nil, err
}
return v2s2.OCIConfig()
return v2s2.OCIConfig(ctx)
}

// LayerInfos returns a list of BlobInfos of layers referenced by this image, in order (the root layer first, and then successive layered layers).
// The Digest field is guaranteed to be provided; Size may be -1.
// WARNING: The list may contain duplicates, and they are semantically relevant.
func (m *manifestSchema1) LayerInfos() []types.BlobInfo {
layers := make([]types.BlobInfo, len(m.FSLayers))
for i, layer := range m.FSLayers { // NOTE: This includes empty layers (where m.History.V1Compatibility->ThrowAway)
layers[(len(m.FSLayers)-1)-i] = types.BlobInfo{Digest: layer.BlobSum, Size: -1}
}
return layers
return m.m.LayerInfos()
}

// EmbeddedDockerReferenceConflicts whether a Docker reference embedded in the manifest, if any, conflicts with destination ref.
@@ -153,56 +85,36 @@ func (m *manifestSchema1) EmbeddedDockerReferenceConflicts(ref reference.Named)
} else {
tag = ""
}
return m.Name != name || m.Tag != tag
return m.m.Name != name || m.m.Tag != tag
}

func (m *manifestSchema1) imageInspectInfo() (*types.ImageInspectInfo, error) {
v1 := &v1Image{}
if err := json.Unmarshal([]byte(m.History[0].V1Compatibility), v1); err != nil {
return nil, err
}
i := &types.ImageInspectInfo{
Tag: m.Tag,
DockerVersion: v1.DockerVersion,
Created: v1.Created,
Architecture: v1.Architecture,
Os: v1.OS,
}
if v1.Config != nil {
i.Labels = v1.Config.Labels
}
return i, nil
// Inspect returns various information for (skopeo inspect) parsed from the manifest and configuration.
func (m *manifestSchema1) Inspect(context.Context) (*types.ImageInspectInfo, error) {
return m.m.Inspect(nil)
}

// UpdatedImageNeedsLayerDiffIDs returns true iff UpdatedImage(options) needs InformationOnly.LayerDiffIDs.
// This is a horribly specific interface, but computing InformationOnly.LayerDiffIDs can be very expensive to compute
// (most importantly it forces us to download the full layers even if they are already present at the destination).
func (m *manifestSchema1) UpdatedImageNeedsLayerDiffIDs(options types.ManifestUpdateOptions) bool {
return options.ManifestMIMEType == manifest.DockerV2Schema2MediaType
return (options.ManifestMIMEType == manifest.DockerV2Schema2MediaType || options.ManifestMIMEType == imgspecv1.MediaTypeImageManifest)
}

// UpdatedImage returns a types.Image modified according to options.
// This does not change the state of the original Image object.
func (m *manifestSchema1) UpdatedImage(options types.ManifestUpdateOptions) (types.Image, error) {
copy := *m
func (m *manifestSchema1) UpdatedImage(ctx context.Context, options types.ManifestUpdateOptions) (types.Image, error) {
copy := manifestSchema1{m: manifest.Schema1Clone(m.m)}
if options.LayerInfos != nil {
// Our LayerInfos includes empty layers (where m.History.V1Compatibility->ThrowAway), so expect them to be included here as well.
if len(copy.FSLayers) != len(options.LayerInfos) {
return nil, errors.Errorf("Error preparing updated manifest: layer count changed from %d to %d", len(copy.FSLayers), len(options.LayerInfos))
}
for i, info := range options.LayerInfos {
// (docker push) sets up m.History.V1Compatibility->{Id,Parent} based on values of info.Digest,
// but (docker pull) ignores them in favor of computing DiffIDs from uncompressed data, except verifying the child->parent links and uniqueness.
// So, we don't bother recomputing the IDs in m.History.V1Compatibility.
copy.FSLayers[(len(options.LayerInfos)-1)-i].BlobSum = info.Digest
if err := copy.m.UpdateLayerInfos(options.LayerInfos); err != nil {
return nil, err
}
}
if options.EmbeddedDockerReference != nil {
copy.Name = reference.Path(options.EmbeddedDockerReference)
copy.m.Name = reference.Path(options.EmbeddedDockerReference)
if tagged, isTagged := options.EmbeddedDockerReference.(reference.NamedTagged); isTagged {
copy.Tag = tagged.Tag()
copy.m.Tag = tagged.Tag()
} else {
copy.Tag = ""
copy.m.Tag = ""
}
}

@@ -212,7 +124,21 @@ func (m *manifestSchema1) UpdatedImage(options types.ManifestUpdateOptions) (typ
// We have 2 MIME types for schema 1, which are basically equivalent (even the un-"Signed" MIME type will be rejected if there isn’t a signature; so,
// handle conversions between them by doing nothing.
case manifest.DockerV2Schema2MediaType:
return copy.convertToManifestSchema2(options.InformationOnly.LayerInfos, options.InformationOnly.LayerDiffIDs)
m2, err := copy.convertToManifestSchema2(options.InformationOnly.LayerInfos, options.InformationOnly.LayerDiffIDs)
if err != nil {
return nil, err
}
return memoryImageFromManifest(m2), nil
case imgspecv1.MediaTypeImageManifest:
// We can't directly convert to OCI, but we can transitively convert via a Docker V2.2 Distribution manifest
m2, err := copy.convertToManifestSchema2(options.InformationOnly.LayerInfos, options.InformationOnly.LayerDiffIDs)
if err != nil {
return nil, err
}
return m2.UpdatedImage(ctx, types.ManifestUpdateOptions{
ManifestMIMEType: imgspecv1.MediaTypeImageManifest,
InformationOnly: options.InformationOnly,
})
default:
return nil, errors.Errorf("Conversion of image manifest from %s to %s is not implemented", manifest.DockerV2Schema1SignedMediaType, options.ManifestMIMEType)
}
@@ -220,102 +146,32 @@ func (m *manifestSchema1) UpdatedImage(options types.ManifestUpdateOptions) (typ
return memoryImageFromManifest(&copy), nil
}

// fixManifestLayers, after validating the supplied manifest
// (to use correctly-formatted IDs, and to not have non-consecutive ID collisions in manifest.History),
// modifies manifest to only have one entry for each layer ID in manifest.History (deleting the older duplicates,
// both from manifest.History and manifest.FSLayers).
// Note that even after this succeeds, manifest.FSLayers may contain duplicate entries
// (for Dockerfile operations which change the configuration but not the filesystem).
func fixManifestLayers(manifest *manifestSchema1) error {
type imageV1 struct {
ID string
Parent string
}
// Per the specification, we can assume that len(manifest.FSLayers) == len(manifest.History)
imgs := make([]*imageV1, len(manifest.FSLayers))
for i := range manifest.FSLayers {
img := &imageV1{}

if err := json.Unmarshal([]byte(manifest.History[i].V1Compatibility), img); err != nil {
return err
}

imgs[i] = img
if err := validateV1ID(img.ID); err != nil {
return err
}
}
if imgs[len(imgs)-1].Parent != "" {
return errors.New("Invalid parent ID in the base layer of the image")
}
// check general duplicates to error instead of a deadlock
idmap := make(map[string]struct{})
var lastID string
for _, img := range imgs {
// skip IDs that appear after each other, we handle those later
if _, exists := idmap[img.ID]; img.ID != lastID && exists {
return errors.Errorf("ID %+v appears multiple times in manifest", img.ID)
}
lastID = img.ID
idmap[lastID] = struct{}{}
}
// backwards loop so that we keep the remaining indexes after removing items
for i := len(imgs) - 2; i >= 0; i-- {
if imgs[i].ID == imgs[i+1].ID { // repeated ID. remove and continue
manifest.FSLayers = append(manifest.FSLayers[:i], manifest.FSLayers[i+1:]...)
manifest.History = append(manifest.History[:i], manifest.History[i+1:]...)
} else if imgs[i].Parent != imgs[i+1].ID {
return errors.Errorf("Invalid parent ID. Expected %v, got %v", imgs[i+1].ID, imgs[i].Parent)
}
}
return nil
}

func validateV1ID(id string) error {
if ok := validHex.MatchString(id); !ok {
return errors.Errorf("image ID %q is invalid", id)
}
return nil
}

// Based on github.com/docker/docker/distribution/pull_v2.go
func (m *manifestSchema1) convertToManifestSchema2(uploadedLayerInfos []types.BlobInfo, layerDiffIDs []digest.Digest) (types.Image, error) {
if len(m.History) == 0 {
func (m *manifestSchema1) convertToManifestSchema2(uploadedLayerInfos []types.BlobInfo, layerDiffIDs []digest.Digest) (genericManifest, error) {
if len(m.m.History) == 0 {
// What would this even mean?! Anyhow, the rest of the code depends on fsLayers[0] and history[0] existing.
return nil, errors.Errorf("Cannot convert an image with 0 history entries to %s", manifest.DockerV2Schema2MediaType)
}
if len(m.History) != len(m.FSLayers) {
return nil, errors.Errorf("Inconsistent schema 1 manifest: %d history entries, %d fsLayers entries", len(m.History), len(m.FSLayers))
if len(m.m.History) != len(m.m.FSLayers) {
return nil, errors.Errorf("Inconsistent schema 1 manifest: %d history entries, %d fsLayers entries", len(m.m.History), len(m.m.FSLayers))
}
if uploadedLayerInfos != nil && len(uploadedLayerInfos) != len(m.FSLayers) {
return nil, errors.Errorf("Internal error: uploaded %d blobs, but schema1 manifest has %d fsLayers", len(uploadedLayerInfos), len(m.FSLayers))
if uploadedLayerInfos != nil && len(uploadedLayerInfos) != len(m.m.FSLayers) {
return nil, errors.Errorf("Internal error: uploaded %d blobs, but schema1 manifest has %d fsLayers", len(uploadedLayerInfos), len(m.m.FSLayers))
}
if layerDiffIDs != nil && len(layerDiffIDs) != len(m.FSLayers) {
return nil, errors.Errorf("Internal error: collected %d DiffID values, but schema1 manifest has %d fsLayers", len(layerDiffIDs), len(m.FSLayers))
if layerDiffIDs != nil && len(layerDiffIDs) != len(m.m.FSLayers) {
return nil, errors.Errorf("Internal error: collected %d DiffID values, but schema1 manifest has %d fsLayers", len(layerDiffIDs), len(m.m.FSLayers))
}

rootFS := rootFS{
Type: "layers",
DiffIDs: []digest.Digest{},
BaseLayer: "",
}
var layers []descriptor
history := make([]imageHistory, len(m.History))
for v1Index := len(m.History) - 1; v1Index >= 0; v1Index-- {
v2Index := (len(m.History) - 1) - v1Index
// Build a list of the diffIDs for the non-empty layers.
diffIDs := []digest.Digest{}
var layers []manifest.Schema2Descriptor
for v1Index := len(m.m.History) - 1; v1Index >= 0; v1Index-- {
v2Index := (len(m.m.History) - 1) - v1Index

var v1compat v1Compatibility
if err := json.Unmarshal([]byte(m.History[v1Index].V1Compatibility), &v1compat); err != nil {
var v1compat manifest.Schema1V1Compatibility
if err := json.Unmarshal([]byte(m.m.History[v1Index].V1Compatibility), &v1compat); err != nil {
return nil, errors.Wrapf(err, "Error decoding history entry %d", v1Index)
}
history[v2Index] = imageHistory{
Created: v1compat.Created,
Author: v1compat.Author,
CreatedBy: strings.Join(v1compat.ContainerConfig.Cmd, " "),
Comment: v1compat.Comment,
EmptyLayer: v1compat.ThrowAway,
}

if !v1compat.ThrowAway {
var size int64
if uploadedLayerInfos != nil {
@@ -325,54 +181,23 @@ func (m *manifestSchema1) convertToManifestSchema2(uploadedLayerInfos []types.Bl
if layerDiffIDs != nil {
d = layerDiffIDs[v2Index]
}
layers = append(layers, descriptor{
layers = append(layers, manifest.Schema2Descriptor{
MediaType: "application/vnd.docker.image.rootfs.diff.tar.gzip",
Size: size,
Digest: m.FSLayers[v1Index].BlobSum,
Digest: m.m.FSLayers[v1Index].BlobSum,
})
rootFS.DiffIDs = append(rootFS.DiffIDs, d)
diffIDs = append(diffIDs, d)
}
}
configJSON, err := configJSONFromV1Config([]byte(m.History[0].V1Compatibility), rootFS, history)
configJSON, err := m.m.ToSchema2Config(diffIDs)
if err != nil {
return nil, err
}
configDescriptor := descriptor{
configDescriptor := manifest.Schema2Descriptor{
MediaType: "application/vnd.docker.container.image.v1+json",
Size: int64(len(configJSON)),
Digest: digest.FromBytes(configJSON),
}

m2 := manifestSchema2FromComponents(configDescriptor, nil, configJSON, layers)
return memoryImageFromManifest(m2), nil
}

func configJSONFromV1Config(v1ConfigJSON []byte, rootFS rootFS, history []imageHistory) ([]byte, error) {
// github.com/docker/docker/image/v1/imagev1.go:MakeConfigFromV1Config unmarshals and re-marshals the input if docker_version is < 1.8.3 to remove blank fields;
// we don't do that here. FIXME? Should we? AFAICT it would only affect the digest value of the schema2 manifest, and we don't particularly need that to be
// a consistently reproducible value.

// Preserve everything we don't specifically know about.
// (This must be a *json.RawMessage, even though *[]byte is fairly redundant, because only *RawMessage implements json.Marshaler.)
rawContents := map[string]*json.RawMessage{}
if err := json.Unmarshal(v1ConfigJSON, &rawContents); err != nil { // We have already unmarshaled it before, using a more detailed schema?!
return nil, err
}

delete(rawContents, "id")
delete(rawContents, "parent")
delete(rawContents, "Size")
delete(rawContents, "parent_id")
delete(rawContents, "layer_id")
delete(rawContents, "throwaway")

updates := map[string]interface{}{"rootfs": rootFS, "history": history}
for field, value := range updates {
encoded, err := json.Marshal(value)
if err != nil {
return nil, err
}
rawContents[field] = (*json.RawMessage)(&encoded)
}
return json.Marshal(rawContents)
return manifestSchema2FromComponents(configDescriptor, nil, configJSON, layers), nil
}

180 vendor/github.com/containers/image/image/docker_schema2.go generated vendored
@@ -2,6 +2,7 @@ package image
import (
"bytes"
"context"
"crypto/sha256"
"encoding/hex"
"encoding/json"
@@ -29,61 +30,51 @@ var gzippedEmptyLayer = []byte{
// gzippedEmptyLayerDigest is a digest of gzippedEmptyLayer
const gzippedEmptyLayerDigest = digest.Digest("sha256:a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4")

type descriptor struct {
MediaType string `json:"mediaType"`
Size int64 `json:"size"`
Digest digest.Digest `json:"digest"`
URLs []string `json:"urls,omitempty"`
}

type manifestSchema2 struct {
src types.ImageSource // May be nil if configBlob is not nil
configBlob []byte // If set, corresponds to contents of ConfigDescriptor.
SchemaVersion int `json:"schemaVersion"`
MediaType string `json:"mediaType"`
ConfigDescriptor descriptor `json:"config"`
LayersDescriptors []descriptor `json:"layers"`
src types.ImageSource // May be nil if configBlob is not nil
configBlob []byte // If set, corresponds to contents of ConfigDescriptor.
m *manifest.Schema2
}

func manifestSchema2FromManifest(src types.ImageSource, manifest []byte) (genericManifest, error) {
v2s2 := manifestSchema2{src: src}
if err := json.Unmarshal(manifest, &v2s2); err != nil {
func manifestSchema2FromManifest(src types.ImageSource, manifestBlob []byte) (genericManifest, error) {
m, err := manifest.Schema2FromManifest(manifestBlob)
if err != nil {
return nil, err
}
return &v2s2, nil
return &manifestSchema2{
src: src,
m: m,
}, nil
}

// manifestSchema2FromComponents builds a new manifestSchema2 from the supplied data:
func manifestSchema2FromComponents(config descriptor, src types.ImageSource, configBlob []byte, layers []descriptor) genericManifest {
func manifestSchema2FromComponents(config manifest.Schema2Descriptor, src types.ImageSource, configBlob []byte, layers []manifest.Schema2Descriptor) genericManifest {
return &manifestSchema2{
src: src,
configBlob: configBlob,
SchemaVersion: 2,
MediaType: manifest.DockerV2Schema2MediaType,
ConfigDescriptor: config,
LayersDescriptors: layers,
src: src,
configBlob: configBlob,
m: manifest.Schema2FromComponents(config, layers),
}
}

func (m *manifestSchema2) serialize() ([]byte, error) {
return json.Marshal(*m)
return m.m.Serialize()
}

func (m *manifestSchema2) manifestMIMEType() string {
return m.MediaType
return m.m.MediaType
}

// ConfigInfo returns a complete BlobInfo for the separate config object, or a BlobInfo{Digest:""} if there isn't a separate object.
// Note that the config object may not exist in the underlying storage in the return value of UpdatedImage! Use ConfigBlob() below.
func (m *manifestSchema2) ConfigInfo() types.BlobInfo {
return types.BlobInfo{Digest: m.ConfigDescriptor.Digest, Size: m.ConfigDescriptor.Size}
return m.m.ConfigInfo()
}

// OCIConfig returns the image configuration as per OCI v1 image-spec. Information about
// layers in the resulting configuration isn't guaranteed to be returned to due how
// old image manifests work (docker v2s1 especially).
func (m *manifestSchema2) OCIConfig() (*imgspecv1.Image, error) {
configBlob, err := m.ConfigBlob()
func (m *manifestSchema2) OCIConfig(ctx context.Context) (*imgspecv1.Image, error) {
configBlob, err := m.ConfigBlob(ctx)
if err != nil {
return nil, err
}
@@ -99,15 +90,15 @@ func (m *manifestSchema2) OCIConfig() (*imgspecv1.Image, error) {

// ConfigBlob returns the blob described by ConfigInfo, iff ConfigInfo().Digest != ""; nil otherwise.
// The result is cached; it is OK to call this however often you need.
func (m *manifestSchema2) ConfigBlob() ([]byte, error) {
func (m *manifestSchema2) ConfigBlob(ctx context.Context) ([]byte, error) {
if m.configBlob == nil {
if m.src == nil {
return nil, errors.Errorf("Internal error: neither src nor configBlob set in manifestSchema2")
}
stream, _, err := m.src.GetBlob(types.BlobInfo{
Digest: m.ConfigDescriptor.Digest,
Size: m.ConfigDescriptor.Size,
URLs: m.ConfigDescriptor.URLs,
stream, _, err := m.src.GetBlob(ctx, types.BlobInfo{
Digest: m.m.ConfigDescriptor.Digest,
Size: m.m.ConfigDescriptor.Size,
URLs: m.m.ConfigDescriptor.URLs,
})
if err != nil {
return nil, err
@@ -118,8 +109,8 @@ func (m *manifestSchema2) ConfigBlob() ([]byte, error) {
return nil, err
}
computedDigest := digest.FromBytes(blob)
if computedDigest != m.ConfigDescriptor.Digest {
return nil, errors.Errorf("Download config.json digest %s does not match expected %s", computedDigest, m.ConfigDescriptor.Digest)
if computedDigest != m.m.ConfigDescriptor.Digest {
return nil, errors.Errorf("Download config.json digest %s does not match expected %s", computedDigest, m.m.ConfigDescriptor.Digest)
}
m.configBlob = blob
}
@@ -130,15 +121,7 @@ func (m *manifestSchema2) ConfigBlob() ([]byte, error) {
// The Digest field is guaranteed to be provided; Size may be -1.
// WARNING: The list may contain duplicates, and they are semantically relevant.
func (m *manifestSchema2) LayerInfos() []types.BlobInfo {
blobs := []types.BlobInfo{}
for _, layer := range m.LayersDescriptors {
blobs = append(blobs, types.BlobInfo{
Digest: layer.Digest,
Size: layer.Size,
URLs: layer.URLs,
})
}
return blobs
return m.m.LayerInfos()
}

// EmbeddedDockerReferenceConflicts whether a Docker reference embedded in the manifest, if any, conflicts with destination ref.
@@ -148,25 +131,20 @@ func (m *manifestSchema2) EmbeddedDockerReferenceConflicts(ref reference.Named)
return false
}

func (m *manifestSchema2) imageInspectInfo() (*types.ImageInspectInfo, error) {
config, err := m.ConfigBlob()
if err != nil {
return nil, err
// Inspect returns various information for (skopeo inspect) parsed from the manifest and configuration.
func (m *manifestSchema2) Inspect(ctx context.Context) (*types.ImageInspectInfo, error) {
getter := func(info types.BlobInfo) ([]byte, error) {
if info.Digest != m.ConfigInfo().Digest {
// Shouldn't ever happen
return nil, errors.New("asked for a different config blob")
}
config, err := m.ConfigBlob(ctx)
if err != nil {
return nil, err
}
return config, nil
}
v1 := &v1Image{}
if err := json.Unmarshal(config, v1); err != nil {
return nil, err
}
i := &types.ImageInspectInfo{
DockerVersion: v1.DockerVersion,
Created: v1.Created,
Architecture: v1.Architecture,
Os: v1.OS,
}
if v1.Config != nil {
i.Labels = v1.Config.Labels
}
return i, nil
return m.m.Inspect(getter)
}

// UpdatedImageNeedsLayerDiffIDs returns true iff UpdatedImage(options) needs InformationOnly.LayerDiffIDs.
@@ -178,18 +156,15 @@ func (m *manifestSchema2) UpdatedImageNeedsLayerDiffIDs(options types.ManifestUp

// UpdatedImage returns a types.Image modified according to options.
// This does not change the state of the original Image object.
func (m *manifestSchema2) UpdatedImage(options types.ManifestUpdateOptions) (types.Image, error) {
copy := *m // NOTE: This is not a deep copy, it still shares slices etc.
func (m *manifestSchema2) UpdatedImage(ctx context.Context, options types.ManifestUpdateOptions) (types.Image, error) {
copy := manifestSchema2{ // NOTE: This is not a deep copy, it still shares slices etc.
src: m.src,
configBlob: m.configBlob,
m: manifest.Schema2Clone(m.m),
}
if options.LayerInfos != nil {
if len(copy.LayersDescriptors) != len(options.LayerInfos) {
return nil, errors.Errorf("Error preparing updated manifest: layer count changed from %d to %d", len(copy.LayersDescriptors), len(options.LayerInfos))
}
copy.LayersDescriptors = make([]descriptor, len(options.LayerInfos))
for i, info := range options.LayerInfos {
copy.LayersDescriptors[i].MediaType = m.LayersDescriptors[i].MediaType
copy.LayersDescriptors[i].Digest = info.Digest
copy.LayersDescriptors[i].Size = info.Size
copy.LayersDescriptors[i].URLs = info.URLs
if err := copy.m.UpdateLayerInfos(options.LayerInfos); err != nil {
return nil, err
}
}
// Ignore options.EmbeddedDockerReference: it may be set when converting from schema1 to schema2, but we really don't care.
@@ -197,9 +172,9 @@ func (m *manifestSchema2) UpdatedImage(options types.ManifestUpdateOptions) (typ
switch options.ManifestMIMEType {
||||
case "": // No conversion, OK
|
||||
case manifest.DockerV2Schema1SignedMediaType, manifest.DockerV2Schema1MediaType:
|
||||
return copy.convertToManifestSchema1(options.InformationOnly.Destination)
|
||||
return copy.convertToManifestSchema1(ctx, options.InformationOnly.Destination)
|
||||
case imgspecv1.MediaTypeImageManifest:
|
||||
return copy.convertToManifestOCI1()
|
||||
return copy.convertToManifestOCI1(ctx)
|
||||
default:
|
||||
return nil, errors.Errorf("Conversion of image manifest from %s to %s is not implemented", manifest.DockerV2Schema2MediaType, options.ManifestMIMEType)
|
||||
}
|
||||
@@ -207,8 +182,17 @@ func (m *manifestSchema2) UpdatedImage(options types.ManifestUpdateOptions) (typ
|
||||
return memoryImageFromManifest(©), nil
|
||||
}
|
||||
|
||||
func (m *manifestSchema2) convertToManifestOCI1() (types.Image, error) {
|
||||
configOCI, err := m.OCIConfig()
|
||||
func oci1DescriptorFromSchema2Descriptor(d manifest.Schema2Descriptor) imgspecv1.Descriptor {
|
||||
return imgspecv1.Descriptor{
|
||||
MediaType: d.MediaType,
|
||||
Size: d.Size,
|
||||
Digest: d.Digest,
|
||||
URLs: d.URLs,
|
||||
}
|
||||
}
|
||||
|
||||
func (m *manifestSchema2) convertToManifestOCI1(ctx context.Context) (types.Image, error) {
|
||||
configOCI, err := m.OCIConfig(ctx)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
@@ -217,18 +201,16 @@ func (m *manifestSchema2) convertToManifestOCI1() (types.Image, error) {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
config := descriptorOCI1{
|
||||
descriptor: descriptor{
|
||||
MediaType: imgspecv1.MediaTypeImageConfig,
|
||||
Size: int64(len(configOCIBytes)),
|
||||
Digest: digest.FromBytes(configOCIBytes),
|
||||
},
|
||||
config := imgspecv1.Descriptor{
|
||||
MediaType: imgspecv1.MediaTypeImageConfig,
|
||||
Size: int64(len(configOCIBytes)),
|
||||
Digest: digest.FromBytes(configOCIBytes),
|
||||
}
|
||||
|
||||
layers := make([]descriptorOCI1, len(m.LayersDescriptors))
|
||||
layers := make([]imgspecv1.Descriptor, len(m.m.LayersDescriptors))
|
||||
for idx := range layers {
|
||||
layers[idx] = descriptorOCI1{descriptor: m.LayersDescriptors[idx]}
|
||||
if m.LayersDescriptors[idx].MediaType == manifest.DockerV2Schema2ForeignLayerMediaType {
|
||||
layers[idx] = oci1DescriptorFromSchema2Descriptor(m.m.LayersDescriptors[idx])
|
||||
if m.m.LayersDescriptors[idx].MediaType == manifest.DockerV2Schema2ForeignLayerMediaType {
|
||||
layers[idx].MediaType = imgspecv1.MediaTypeImageLayerNonDistributable
|
||||
} else {
|
||||
// we assume layers are gzip'ed because docker v2s2 only deals with
|
||||
@@ -242,19 +224,19 @@ func (m *manifestSchema2) convertToManifestOCI1() (types.Image, error) {
|
||||
}
|
||||
|
||||
// Based on docker/distribution/manifest/schema1/config_builder.go
|
||||
func (m *manifestSchema2) convertToManifestSchema1(dest types.ImageDestination) (types.Image, error) {
|
||||
configBytes, err := m.ConfigBlob()
|
||||
func (m *manifestSchema2) convertToManifestSchema1(ctx context.Context, dest types.ImageDestination) (types.Image, error) {
|
||||
configBytes, err := m.ConfigBlob(ctx)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
imageConfig := &image{}
|
||||
imageConfig := &manifest.Schema2Image{}
|
||||
if err := json.Unmarshal(configBytes, imageConfig); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
// Build fsLayers and History, discarding all configs. We will patch the top-level config in later.
|
||||
fsLayers := make([]fsLayersSchema1, len(imageConfig.History))
|
||||
history := make([]historySchema1, len(imageConfig.History))
|
||||
fsLayers := make([]manifest.Schema1FSLayers, len(imageConfig.History))
|
||||
history := make([]manifest.Schema1History, len(imageConfig.History))
|
||||
nonemptyLayerIndex := 0
|
||||
var parentV1ID string // Set in the loop
|
||||
v1ID := ""
|
||||
@@ -271,7 +253,7 @@ func (m *manifestSchema2) convertToManifestSchema1(dest types.ImageDestination)
|
||||
if historyEntry.EmptyLayer {
|
||||
if !haveGzippedEmptyLayer {
|
||||
logrus.Debugf("Uploading empty layer during conversion to schema 1")
|
||||
info, err := dest.PutBlob(bytes.NewReader(gzippedEmptyLayer), types.BlobInfo{Digest: gzippedEmptyLayerDigest, Size: int64(len(gzippedEmptyLayer))})
|
||||
info, err := dest.PutBlob(ctx, bytes.NewReader(gzippedEmptyLayer), types.BlobInfo{Digest: gzippedEmptyLayerDigest, Size: int64(len(gzippedEmptyLayer))}, false)
|
||||
if err != nil {
|
||||
return nil, errors.Wrap(err, "Error uploading empty layer")
|
||||
}
|
||||
@@ -282,10 +264,10 @@ func (m *manifestSchema2) convertToManifestSchema1(dest types.ImageDestination)
|
||||
}
|
||||
blobDigest = gzippedEmptyLayerDigest
|
||||
} else {
|
||||
if nonemptyLayerIndex >= len(m.LayersDescriptors) {
|
||||
return nil, errors.Errorf("Invalid image configuration, needs more than the %d distributed layers", len(m.LayersDescriptors))
|
||||
if nonemptyLayerIndex >= len(m.m.LayersDescriptors) {
|
||||
return nil, errors.Errorf("Invalid image configuration, needs more than the %d distributed layers", len(m.m.LayersDescriptors))
|
||||
}
|
||||
blobDigest = m.LayersDescriptors[nonemptyLayerIndex].Digest
|
||||
blobDigest = m.m.LayersDescriptors[nonemptyLayerIndex].Digest
|
||||
nonemptyLayerIndex++
|
||||
}
|
||||
|
||||
@@ -296,7 +278,7 @@ func (m *manifestSchema2) convertToManifestSchema1(dest types.ImageDestination)
|
||||
}
|
||||
v1ID = v
|
||||
|
||||
fakeImage := v1Compatibility{
|
||||
fakeImage := manifest.Schema1V1Compatibility{
|
||||
ID: v1ID,
|
||||
Parent: parentV1ID,
|
||||
Comment: historyEntry.Comment,
|
||||
@@ -310,8 +292,8 @@ func (m *manifestSchema2) convertToManifestSchema1(dest types.ImageDestination)
|
||||
return nil, errors.Errorf("Internal error: Error creating v1compatibility for %#v", fakeImage)
|
||||
}
|
||||
|
||||
fsLayers[v1Index] = fsLayersSchema1{BlobSum: blobDigest}
|
||||
history[v1Index] = historySchema1{V1Compatibility: string(v1CompatibilityBytes)}
|
||||
fsLayers[v1Index] = manifest.Schema1FSLayers{BlobSum: blobDigest}
|
||||
history[v1Index] = manifest.Schema1History{V1Compatibility: string(v1CompatibilityBytes)}
|
||||
// Note that parentV1ID of the top layer is preserved when exiting this loop
|
||||
}
|
||||
|
||||
|
||||
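All of this conversion plumbing is internal to the image package; callers reach it through types.Image.UpdatedImage. Below is a minimal sketch of driving the schema2-to-OCI path from the outside, assuming img is a types.Image obtained elsewhere (for example via image.FromSource); the function name convertToOCI is hypothetical.

package main

import (
	"context"

	"github.com/containers/image/types"
	imgspecv1 "github.com/opencontainers/image-spec/specs-go/v1"
)

// convertToOCI asks for an in-memory copy of img whose manifest uses the OCI
// media type; internally this runs convertToManifestOCI1 shown above.
func convertToOCI(ctx context.Context, img types.Image) ([]byte, string, error) {
	updated, err := img.UpdatedImage(ctx, types.ManifestUpdateOptions{
		ManifestMIMEType: imgspecv1.MediaTypeImageManifest, // request schema2 -> OCI
	})
	if err != nil {
		return nil, "", err
	}
	// Returns the converted manifest bytes and their MIME type.
	return updated.Manifest(ctx)
}

Because the result is an in-memory image, the newly computed config blob exists only in the returned object until it is written to a destination, which is exactly what the ConfigInfo comment above warns about.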
95 vendor/github.com/containers/image/image/manifest.go generated vendored
@@ -1,57 +1,15 @@
 package image

 import (
-	"time"
+	"context"
+	"fmt"

 	"github.com/containers/image/docker/reference"
 	"github.com/containers/image/manifest"
-	"github.com/containers/image/pkg/strslice"
 	"github.com/containers/image/types"
-	"github.com/opencontainers/go-digest"
 	imgspecv1 "github.com/opencontainers/image-spec/specs-go/v1"
 )

-type config struct {
-	Cmd    strslice.StrSlice
-	Labels map[string]string
-}
-
-type v1Image struct {
-	ID              string    `json:"id,omitempty"`
-	Parent          string    `json:"parent,omitempty"`
-	Comment         string    `json:"comment,omitempty"`
-	Created         time.Time `json:"created"`
-	ContainerConfig *config   `json:"container_config,omitempty"`
-	DockerVersion   string    `json:"docker_version,omitempty"`
-	Author          string    `json:"author,omitempty"`
-	// Config is the configuration of the container received from the client
-	Config *config `json:"config,omitempty"`
-	// Architecture is the hardware that the image is built and runs on
-	Architecture string `json:"architecture,omitempty"`
-	// OS is the operating system used to build and run the image
-	OS string `json:"os,omitempty"`
-}
-
-type image struct {
-	v1Image
-	History []imageHistory `json:"history,omitempty"`
-	RootFS  *rootFS        `json:"rootfs,omitempty"`
-}
-
-type imageHistory struct {
-	Created    time.Time `json:"created"`
-	Author     string    `json:"author,omitempty"`
-	CreatedBy  string    `json:"created_by,omitempty"`
-	Comment    string    `json:"comment,omitempty"`
-	EmptyLayer bool      `json:"empty_layer,omitempty"`
-}
-
-type rootFS struct {
-	Type      string          `json:"type"`
-	DiffIDs   []digest.Digest `json:"diff_ids,omitempty"`
-	BaseLayer string          `json:"base_layer,omitempty"`
-}
-
 // genericManifest is an interface for parsing, modifying image manifests and related data.
 // Note that the public methods are intended to be a subset of types.Image
 // so that embedding a genericManifest into structs works.
@@ -64,11 +22,11 @@ type genericManifest interface {
 	ConfigInfo() types.BlobInfo
 	// ConfigBlob returns the blob described by ConfigInfo, iff ConfigInfo().Digest != ""; nil otherwise.
 	// The result is cached; it is OK to call this however often you need.
-	ConfigBlob() ([]byte, error)
+	ConfigBlob(context.Context) ([]byte, error)
 	// OCIConfig returns the image configuration as per OCI v1 image-spec. Information about
 	// layers in the resulting configuration isn't guaranteed to be returned due to how
 	// old image manifests work (docker v2s1 especially).
-	OCIConfig() (*imgspecv1.Image, error)
+	OCIConfig(context.Context) (*imgspecv1.Image, error)
 	// LayerInfos returns a list of BlobInfos of layers referenced by this image, in order (the root layer first, and then successive layered layers).
 	// The Digest field is guaranteed to be provided; Size may be -1.
 	// WARNING: The list may contain duplicates, and they are semantically relevant.
@@ -77,53 +35,30 @@ type genericManifest interface {
 	// It returns false if the manifest does not embed a Docker reference.
 	// (This embedding unfortunately happens for Docker schema1, please do not add support for this in any new formats.)
 	EmbeddedDockerReferenceConflicts(ref reference.Named) bool
-	imageInspectInfo() (*types.ImageInspectInfo, error) // To be called by inspectManifest
+	// Inspect returns various information for (skopeo inspect) parsed from the manifest and configuration.
+	Inspect(context.Context) (*types.ImageInspectInfo, error)
 	// UpdatedImageNeedsLayerDiffIDs returns true iff UpdatedImage(options) needs InformationOnly.LayerDiffIDs.
 	// This is a horribly specific interface, but computing InformationOnly.LayerDiffIDs can be very expensive to compute
 	// (most importantly it forces us to download the full layers even if they are already present at the destination).
 	UpdatedImageNeedsLayerDiffIDs(options types.ManifestUpdateOptions) bool
 	// UpdatedImage returns a types.Image modified according to options.
 	// This does not change the state of the original Image object.
-	UpdatedImage(options types.ManifestUpdateOptions) (types.Image, error)
+	UpdatedImage(ctx context.Context, options types.ManifestUpdateOptions) (types.Image, error)
 }

-func manifestInstanceFromBlob(src types.ImageSource, manblob []byte, mt string) (genericManifest, error) {
-	switch mt {
-	// "application/json" is a valid v2s1 value per https://github.com/docker/distribution/blob/master/docs/spec/manifest-v2-1.md .
-	// This works for now, when nothing else seems to return "application/json"; if that were not true, the mapping/detection might
-	// need to happen within the ImageSource.
-	case manifest.DockerV2Schema1MediaType, manifest.DockerV2Schema1SignedMediaType, "application/json":
+// manifestInstanceFromBlob returns a genericManifest implementation for (manblob, mt) in src.
+// If manblob is a manifest list, it implicitly chooses an appropriate image from the list.
+func manifestInstanceFromBlob(ctx context.Context, sys *types.SystemContext, src types.ImageSource, manblob []byte, mt string) (genericManifest, error) {
+	switch manifest.NormalizedMIMEType(mt) {
+	case manifest.DockerV2Schema1MediaType, manifest.DockerV2Schema1SignedMediaType:
 		return manifestSchema1FromManifest(manblob)
 	case imgspecv1.MediaTypeImageManifest:
 		return manifestOCI1FromManifest(src, manblob)
 	case manifest.DockerV2Schema2MediaType:
 		return manifestSchema2FromManifest(src, manblob)
 	case manifest.DockerV2ListMediaType:
-		return manifestSchema2FromManifestList(src, manblob)
-	default:
-		// If it's not a recognized manifest media type, or we have failed determining the type, we'll try one last time
-		// to deserialize using v2s1 as per https://github.com/docker/distribution/blob/master/manifests.go#L108
-		// and https://github.com/docker/distribution/blob/master/manifest/schema1/manifest.go#L50
-		//
-		// Crane registries can also return "text/plain", or pretty much anything else depending on a file extension “recognized” in the tag.
-		// This makes no real sense, but it happens because requests for manifests are redirected to a content distribution
-		// network which is configured that way. See https://bugzilla.redhat.com/show_bug.cgi?id=1389442
-		return manifestSchema1FromManifest(manblob)
+		return manifestSchema2FromManifestList(ctx, sys, src, manblob)
+	default: // Note that this may not be reachable, manifest.NormalizedMIMEType has a default for unknown values.
+		return nil, fmt.Errorf("Unimplemented manifest MIME type %s", mt)
 	}
 }

-// inspectManifest is an implementation of types.Image.Inspect
-func inspectManifest(m genericManifest) (*types.ImageInspectInfo, error) {
-	info, err := m.imageInspectInfo()
-	if err != nil {
-		return nil, err
-	}
-	layers := m.LayerInfos()
-	info.Layers = make([]string, len(layers))
-	for i, layer := range layers {
-		info.Layers[i] = layer.Digest.String()
-	}
-	return info, nil
-}
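The switch above works because manifest.NormalizedMIMEType folds the odd values registries return in practice ("application/json", "text/plain", an empty string) into one of the known manifest MIME types before dispatch, which is why the default branch notes it may be unreachable. A small illustration; the exact fallback value lives in the manifest package and is not reproduced here:

package main

import (
	"fmt"

	"github.com/containers/image/manifest"
)

// Print what a few raw Content-Type values normalize to; unknown or empty
// values map onto a known manifest type rather than failing outright.
func main() {
	for _, mt := range []string{"", "application/json", manifest.DockerV2Schema2MediaType} {
		fmt.Printf("%q -> %q\n", mt, manifest.NormalizedMIMEType(mt))
	}
}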
20 vendor/github.com/containers/image/image/memory.go generated vendored
@@ -33,18 +33,13 @@ func (i *memoryImage) Reference() types.ImageReference {
 	return nil
 }

-// Close removes resources associated with an initialized UnparsedImage, if any.
-func (i *memoryImage) Close() error {
-	return nil
-}
-
 // Size returns the size of the image as stored, if known, or -1 if not.
 func (i *memoryImage) Size() (int64, error) {
 	return -1, nil
 }

 // Manifest is like ImageSource.GetManifest, but the result is cached; it is OK to call this however often you need.
-func (i *memoryImage) Manifest() ([]byte, string, error) {
+func (i *memoryImage) Manifest(ctx context.Context) ([]byte, string, error) {
 	if i.serializedManifest == nil {
 		m, err := i.genericManifest.serialize()
 		if err != nil {
@@ -62,12 +57,9 @@ func (i *memoryImage) Signatures(ctx context.Context) ([][]byte, error) {
 	return nil, errors.New("Internal error: Image.Signatures() is not supported for images modified in memory")
 }

-// Inspect returns various information for (skopeo inspect) parsed from the manifest and configuration.
-func (i *memoryImage) Inspect() (*types.ImageInspectInfo, error) {
-	return inspectManifest(i.genericManifest)
-}
-
-// IsMultiImage returns true if the image's manifest is a list of images, false otherwise.
-func (i *memoryImage) IsMultiImage() bool {
-	return false
+// LayerInfosForCopy returns an updated set of layer blob information which may not match the manifest.
+// The Digest field is guaranteed to be provided; Size may be -1.
+// WARNING: The list may contain duplicates, and they are semantically relevant.
+func (i *memoryImage) LayerInfosForCopy(ctx context.Context) ([]types.BlobInfo, error) {
+	return nil, nil
 }
140 vendor/github.com/containers/image/image/oci.go generated vendored
@@ -1,6 +1,7 @@
 package image

 import (
+	"context"
 	"encoding/json"
 	"io/ioutil"

@@ -12,41 +13,34 @@ import (
 	"github.com/pkg/errors"
 )

-type descriptorOCI1 struct {
-	descriptor
-	Annotations map[string]string `json:"annotations,omitempty"`
-}
-
 type manifestOCI1 struct {
-	src               types.ImageSource // May be nil if configBlob is not nil
-	configBlob        []byte            // If set, corresponds to contents of ConfigDescriptor.
-	SchemaVersion     int               `json:"schemaVersion"`
-	ConfigDescriptor  descriptorOCI1    `json:"config"`
-	LayersDescriptors []descriptorOCI1  `json:"layers"`
-	Annotations       map[string]string `json:"annotations,omitempty"`
+	src        types.ImageSource // May be nil if configBlob is not nil
+	configBlob []byte            // If set, corresponds to contents of m.Config.
+	m          *manifest.OCI1
 }

-func manifestOCI1FromManifest(src types.ImageSource, manifest []byte) (genericManifest, error) {
-	oci := manifestOCI1{src: src}
-	if err := json.Unmarshal(manifest, &oci); err != nil {
+func manifestOCI1FromManifest(src types.ImageSource, manifestBlob []byte) (genericManifest, error) {
+	m, err := manifest.OCI1FromManifest(manifestBlob)
+	if err != nil {
 		return nil, err
 	}
-	return &oci, nil
+	return &manifestOCI1{
+		src: src,
+		m:   m,
+	}, nil
 }

 // manifestOCI1FromComponents builds a new manifestOCI1 from the supplied data:
-func manifestOCI1FromComponents(config descriptorOCI1, src types.ImageSource, configBlob []byte, layers []descriptorOCI1) genericManifest {
+func manifestOCI1FromComponents(config imgspecv1.Descriptor, src types.ImageSource, configBlob []byte, layers []imgspecv1.Descriptor) genericManifest {
 	return &manifestOCI1{
-		src:               src,
-		configBlob:        configBlob,
-		SchemaVersion:     2,
-		ConfigDescriptor:  config,
-		LayersDescriptors: layers,
+		src:        src,
+		configBlob: configBlob,
+		m:          manifest.OCI1FromComponents(config, layers),
 	}
 }

 func (m *manifestOCI1) serialize() ([]byte, error) {
-	return json.Marshal(*m)
+	return m.m.Serialize()
 }

 func (m *manifestOCI1) manifestMIMEType() string {
@@ -56,20 +50,20 @@ func (m *manifestOCI1) manifestMIMEType() string {
 // ConfigInfo returns a complete BlobInfo for the separate config object, or a BlobInfo{Digest:""} if there isn't a separate object.
 // Note that the config object may not exist in the underlying storage in the return value of UpdatedImage! Use ConfigBlob() below.
 func (m *manifestOCI1) ConfigInfo() types.BlobInfo {
-	return types.BlobInfo{Digest: m.ConfigDescriptor.Digest, Size: m.ConfigDescriptor.Size, Annotations: m.ConfigDescriptor.Annotations}
+	return m.m.ConfigInfo()
 }

 // ConfigBlob returns the blob described by ConfigInfo, iff ConfigInfo().Digest != ""; nil otherwise.
 // The result is cached; it is OK to call this however often you need.
-func (m *manifestOCI1) ConfigBlob() ([]byte, error) {
+func (m *manifestOCI1) ConfigBlob(ctx context.Context) ([]byte, error) {
 	if m.configBlob == nil {
 		if m.src == nil {
 			return nil, errors.Errorf("Internal error: neither src nor configBlob set in manifestOCI1")
 		}
-		stream, _, err := m.src.GetBlob(types.BlobInfo{
-			Digest: m.ConfigDescriptor.Digest,
-			Size:   m.ConfigDescriptor.Size,
-			URLs:   m.ConfigDescriptor.URLs,
+		stream, _, err := m.src.GetBlob(ctx, types.BlobInfo{
+			Digest: m.m.Config.Digest,
+			Size:   m.m.Config.Size,
+			URLs:   m.m.Config.URLs,
 		})
 		if err != nil {
 			return nil, err
@@ -80,8 +74,8 @@ func (m *manifestOCI1) ConfigBlob() ([]byte, error) {
 			return nil, err
 		}
 		computedDigest := digest.FromBytes(blob)
-		if computedDigest != m.ConfigDescriptor.Digest {
-			return nil, errors.Errorf("Download config.json digest %s does not match expected %s", computedDigest, m.ConfigDescriptor.Digest)
+		if computedDigest != m.m.Config.Digest {
+			return nil, errors.Errorf("Download config.json digest %s does not match expected %s", computedDigest, m.m.Config.Digest)
 		}
 		m.configBlob = blob
 	}
@@ -91,8 +85,8 @@ func (m *manifestOCI1) ConfigBlob() ([]byte, error) {
 // OCIConfig returns the image configuration as per OCI v1 image-spec. Information about
 // layers in the resulting configuration isn't guaranteed to be returned due to how
 // old image manifests work (docker v2s1 especially).
-func (m *manifestOCI1) OCIConfig() (*imgspecv1.Image, error) {
-	cb, err := m.ConfigBlob()
+func (m *manifestOCI1) OCIConfig(ctx context.Context) (*imgspecv1.Image, error) {
+	cb, err := m.ConfigBlob(ctx)
 	if err != nil {
 		return nil, err
 	}
@@ -107,11 +101,7 @@ func (m *manifestOCI1) OCIConfig() (*imgspecv1.Image, error) {
 // The Digest field is guaranteed to be provided; Size may be -1.
 // WARNING: The list may contain duplicates, and they are semantically relevant.
 func (m *manifestOCI1) LayerInfos() []types.BlobInfo {
-	blobs := []types.BlobInfo{}
-	for _, layer := range m.LayersDescriptors {
-		blobs = append(blobs, types.BlobInfo{Digest: layer.Digest, Size: layer.Size, Annotations: layer.Annotations, URLs: layer.URLs, MediaType: layer.MediaType})
-	}
-	return blobs
+	return m.m.LayerInfos()
 }

 // EmbeddedDockerReferenceConflicts returns whether a Docker reference embedded in the manifest, if any, conflicts with destination ref.
@@ -121,25 +111,20 @@ func (m *manifestOCI1) EmbeddedDockerReferenceConflicts(ref reference.Named) boo
 	return false
 }

-func (m *manifestOCI1) imageInspectInfo() (*types.ImageInspectInfo, error) {
-	config, err := m.ConfigBlob()
-	if err != nil {
-		return nil, err
-	}
-	v1 := &v1Image{}
-	if err := json.Unmarshal(config, v1); err != nil {
-		return nil, err
-	}
-	i := &types.ImageInspectInfo{
-		DockerVersion: v1.DockerVersion,
-		Created:       v1.Created,
-		Architecture:  v1.Architecture,
-		Os:            v1.OS,
-	}
-	if v1.Config != nil {
-		i.Labels = v1.Config.Labels
-	}
-	return i, nil
+// Inspect returns various information for (skopeo inspect) parsed from the manifest and configuration.
+func (m *manifestOCI1) Inspect(ctx context.Context) (*types.ImageInspectInfo, error) {
+	getter := func(info types.BlobInfo) ([]byte, error) {
+		if info.Digest != m.ConfigInfo().Digest {
+			// Shouldn't ever happen
+			return nil, errors.New("asked for a different config blob")
+		}
+		config, err := m.ConfigBlob(ctx)
+		if err != nil {
+			return nil, err
+		}
+		return config, nil
+	}
+	return m.m.Inspect(getter)
 }

 // UpdatedImageNeedsLayerDiffIDs returns true iff UpdatedImage(options) needs InformationOnly.LayerDiffIDs.
@@ -151,25 +136,31 @@ func (m *manifestOCI1) UpdatedImageNeedsLayerDiffIDs(options types.ManifestUpdat

 // UpdatedImage returns a types.Image modified according to options.
 // This does not change the state of the original Image object.
-func (m *manifestOCI1) UpdatedImage(options types.ManifestUpdateOptions) (types.Image, error) {
-	copy := *m // NOTE: This is not a deep copy, it still shares slices etc.
+func (m *manifestOCI1) UpdatedImage(ctx context.Context, options types.ManifestUpdateOptions) (types.Image, error) {
+	copy := manifestOCI1{ // NOTE: This is not a deep copy, it still shares slices etc.
+		src:        m.src,
+		configBlob: m.configBlob,
+		m:          manifest.OCI1Clone(m.m),
+	}
 	if options.LayerInfos != nil {
-		if len(copy.LayersDescriptors) != len(options.LayerInfos) {
-			return nil, errors.Errorf("Error preparing updated manifest: layer count changed from %d to %d", len(copy.LayersDescriptors), len(options.LayerInfos))
-		}
-		copy.LayersDescriptors = make([]descriptorOCI1, len(options.LayerInfos))
-		for i, info := range options.LayerInfos {
-			copy.LayersDescriptors[i].MediaType = m.LayersDescriptors[i].MediaType
-			copy.LayersDescriptors[i].Digest = info.Digest
-			copy.LayersDescriptors[i].Size = info.Size
-			copy.LayersDescriptors[i].Annotations = info.Annotations
-			copy.LayersDescriptors[i].URLs = info.URLs
+		if err := copy.m.UpdateLayerInfos(options.LayerInfos); err != nil {
+			return nil, err
 		}
 	}
 	// Ignore options.EmbeddedDockerReference: it may be set when converting from schema1, but we really don't care.

 	switch options.ManifestMIMEType {
 	case "": // No conversion, OK
+	case manifest.DockerV2Schema1MediaType, manifest.DockerV2Schema1SignedMediaType:
+		// We can't directly convert to V1, but we can transitively convert via a V2 image
+		m2, err := copy.convertToManifestSchema2()
+		if err != nil {
+			return nil, err
+		}
+		return m2.UpdatedImage(ctx, types.ManifestUpdateOptions{
+			ManifestMIMEType: options.ManifestMIMEType,
+			InformationOnly:  options.InformationOnly,
+		})
 	case manifest.DockerV2Schema2MediaType:
 		return copy.convertToManifestSchema2()
 	default:
@@ -179,17 +170,26 @@ func (m *manifestOCI1) UpdatedImage(options types.ManifestUpdateOptions) (types.
 	return memoryImageFromManifest(&copy), nil
 }

+func schema2DescriptorFromOCI1Descriptor(d imgspecv1.Descriptor) manifest.Schema2Descriptor {
+	return manifest.Schema2Descriptor{
+		MediaType: d.MediaType,
+		Size:      d.Size,
+		Digest:    d.Digest,
+		URLs:      d.URLs,
+	}
+}
+
 func (m *manifestOCI1) convertToManifestSchema2() (types.Image, error) {
 	// Create a copy of the descriptor.
-	config := m.ConfigDescriptor.descriptor
+	config := schema2DescriptorFromOCI1Descriptor(m.m.Config)

 	// The only difference between OCI and DockerSchema2 is the mediatypes. The
 	// media type of the manifest is handled by manifestSchema2FromComponents.
 	config.MediaType = manifest.DockerV2Schema2ConfigMediaType

-	layers := make([]descriptor, len(m.LayersDescriptors))
+	layers := make([]manifest.Schema2Descriptor, len(m.m.Layers))
 	for idx := range layers {
-		layers[idx] = m.LayersDescriptors[idx].descriptor
+		layers[idx] = schema2DescriptorFromOCI1Descriptor(m.m.Layers[idx])
 		layers[idx].MediaType = manifest.DockerV2Schema2LayerMediaType
 	}
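For an OCI image, the schema1 cases above convert transitively (OCI to schema2, then schema2 to schema1). A sketch of requesting that from the outside; the types.ManifestUpdateInformation field layout is an assumption inferred from the options.InformationOnly.Destination access in the code above, and toSchema1 is a hypothetical helper name:

package main

import (
	"context"

	"github.com/containers/image/manifest"
	"github.com/containers/image/types"
)

// toSchema1 requests an OCI -> schema1 conversion; internally this goes
// OCI -> schema2 -> schema1 as in the switch above. dest is needed because
// the schema1 path may upload a gzipped empty layer to the destination.
func toSchema1(ctx context.Context, img types.Image, dest types.ImageDestination) (types.Image, error) {
	return img.UpdatedImage(ctx, types.ManifestUpdateOptions{
		ManifestMIMEType: manifest.DockerV2Schema1SignedMediaType,
		// Assumption: InformationOnly is a types.ManifestUpdateInformation
		// carrying the destination, per the field access shown above.
		InformationOnly: types.ManifestUpdateInformation{Destination: dest},
	})
}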
59 vendor/github.com/containers/image/image/sourced.go generated vendored
@@ -4,12 +4,23 @@
 package image

 import (
-	"github.com/containers/image/manifest"
+	"context"
+
 	"github.com/containers/image/types"
 )

-// FromSource returns a types.Image implementation for source.
-// The caller must call .Close() on the returned Image.
+// imageCloser implements types.ImageCloser, perhaps allowing simple users
+// to use a single object without having to keep a reference to a types.ImageSource
+// only to call types.ImageSource.Close().
+type imageCloser struct {
+	types.Image
+	src types.ImageSource
+}
+
+// FromSource returns a types.ImageCloser implementation for the default instance of source.
+// If source is a manifest list, .Manifest() still returns the manifest list,
+// but other methods transparently return data from an appropriate image instance.
+//
+// The caller must call .Close() on the returned ImageCloser.
 //
 // FromSource “takes ownership” of the input ImageSource and will call src.Close()
 // when the image is closed. (This does not prevent callers from using both the
@@ -18,8 +29,19 @@ import (
 //
 // NOTE: If any kind of signature verification should happen, build an UnparsedImage from the value returned by NewImageSource,
 // verify that UnparsedImage, and convert it into a real Image via image.FromUnparsedImage instead of calling this function.
-func FromSource(src types.ImageSource) (types.Image, error) {
-	return FromUnparsedImage(UnparsedFromSource(src))
+func FromSource(ctx context.Context, sys *types.SystemContext, src types.ImageSource) (types.ImageCloser, error) {
+	img, err := FromUnparsedImage(ctx, sys, UnparsedInstance(src, nil))
+	if err != nil {
+		return nil, err
+	}
+	return &imageCloser{
+		Image: img,
+		src:   src,
+	}, nil
+}
+
+func (ic *imageCloser) Close() error {
+	return ic.src.Close()
 }

 // sourcedImage is a general set of utilities for working with container images,
@@ -38,27 +60,22 @@ type sourcedImage struct {
 }

 // FromUnparsedImage returns a types.Image implementation for unparsed.
-// The caller must call .Close() on the returned Image.
+// If unparsed represents a manifest list, .Manifest() still returns the manifest list,
+// but other methods transparently return data from an appropriate single image.
 //
-// FromSource “takes ownership” of the input UnparsedImage and will call unparsed.Close()
-// when the image is closed. (This does not prevent callers from using both the
-// UnparsedImage and ImageSource objects simultaneously, but it means that they only need to
-// keep a reference to the Image.)
-func FromUnparsedImage(unparsed *UnparsedImage) (types.Image, error) {
+// The Image must not be used after the underlying ImageSource is Close()d.
+func FromUnparsedImage(ctx context.Context, sys *types.SystemContext, unparsed *UnparsedImage) (types.Image, error) {
 	// Note that the input parameter above is specifically *image.UnparsedImage, not types.UnparsedImage:
 	// we want to be able to use unparsed.src. We could make that an explicit interface, but, well,
 	// this is the only UnparsedImage implementation around, anyway.

-	// Also, we do not explicitly implement types.Image.Close; we let the implementation fall through to
-	// unparsed.Close.
-
 	// NOTE: It is essential for signature verification that all parsing done in this object happens on the same manifest which is returned by unparsed.Manifest().
-	manifestBlob, manifestMIMEType, err := unparsed.Manifest()
+	manifestBlob, manifestMIMEType, err := unparsed.Manifest(ctx)
 	if err != nil {
 		return nil, err
 	}

-	parsedManifest, err := manifestInstanceFromBlob(unparsed.src, manifestBlob, manifestMIMEType)
+	parsedManifest, err := manifestInstanceFromBlob(ctx, sys, unparsed.src, manifestBlob, manifestMIMEType)
 	if err != nil {
 		return nil, err
 	}
@@ -77,14 +94,10 @@ func (i *sourcedImage) Size() (int64, error) {
 }

 // Manifest overrides the UnparsedImage.Manifest to always use the fields which we have already fetched.
-func (i *sourcedImage) Manifest() ([]byte, string, error) {
+func (i *sourcedImage) Manifest(ctx context.Context) ([]byte, string, error) {
 	return i.manifestBlob, i.manifestMIMEType, nil
 }

-func (i *sourcedImage) Inspect() (*types.ImageInspectInfo, error) {
-	return inspectManifest(i.genericManifest)
-}
-
-func (i *sourcedImage) IsMultiImage() bool {
-	return i.manifestMIMEType == manifest.DockerV2ListMediaType
+func (i *sourcedImage) LayerInfosForCopy(ctx context.Context) ([]types.BlobInfo, error) {
+	return i.UnparsedImage.src.LayerInfosForCopy(ctx)
 }
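The ownership contract spelled out in the comments above is easiest to see from the caller's side. A minimal sketch, with alltransports reference parsing used purely for illustration (any ImageReference works) and the helper name inspectOne being hypothetical:

package main

import (
	"context"
	"fmt"

	"github.com/containers/image/image"
	"github.com/containers/image/transports/alltransports"
	"github.com/containers/image/types"
)

// inspectOne opens an image source, wraps it in an ImageCloser via FromSource,
// and keeps only the ImageCloser around: closing it also closes the source.
func inspectOne(ctx context.Context, sys *types.SystemContext, name string) error {
	ref, err := alltransports.ParseImageName(name) // e.g. "docker://busybox:latest"
	if err != nil {
		return err
	}
	src, err := ref.NewImageSource(ctx, sys)
	if err != nil {
		return err
	}
	img, err := image.FromSource(ctx, sys, src)
	if err != nil {
		src.Close() // ownership transfers only on success
		return err
	}
	defer img.Close() // this closes src too
	info, err := img.Inspect(ctx)
	if err != nil {
		return err
	}
	fmt.Println(info.Architecture, info.Os)
	return nil
}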
64 vendor/github.com/containers/image/image/unparsed.go generated vendored
@@ -11,8 +11,10 @@ import (
 )

 // UnparsedImage implements types.UnparsedImage .
+// An UnparsedImage is a pair of (ImageSource, instance digest); it can represent either a manifest list or a single image instance.
 type UnparsedImage struct {
 	src            types.ImageSource
+	instanceDigest *digest.Digest
 	cachedManifest []byte // A private cache for Manifest(); nil if not yet known.
 	// A private cache for Manifest(), may be the empty string if guessing failed.
 	// Valid iff cachedManifest is not nil.
@@ -20,49 +22,41 @@ type UnparsedImage struct {
 	cachedSignatures [][]byte // A private cache for Signatures(); nil if not yet known.
 }

-// UnparsedFromSource returns a types.UnparsedImage implementation for source.
-// The caller must call .Close() on the returned UnparsedImage.
+// UnparsedInstance returns a types.UnparsedImage implementation for (source, instanceDigest).
+// If instanceDigest is not nil, it contains a digest of the specific manifest instance to retrieve (when the primary manifest is a manifest list).
 //
-// UnparsedFromSource “takes ownership” of the input ImageSource and will call src.Close()
-// when the image is closed. (This does not prevent callers from using both the
-// UnparsedImage and ImageSource objects simultaneously, but it means that they only need to
-// keep a reference to the UnparsedImage.)
-func UnparsedFromSource(src types.ImageSource) *UnparsedImage {
-	return &UnparsedImage{src: src}
+// The UnparsedImage must not be used after the underlying ImageSource is Close()d.
+func UnparsedInstance(src types.ImageSource, instanceDigest *digest.Digest) *UnparsedImage {
+	return &UnparsedImage{
+		src:            src,
+		instanceDigest: instanceDigest,
+	}
 }

 // Reference returns the reference used to set up this source, _as specified by the user_
 // (not as the image itself, or its underlying storage, claims). This can be used e.g. to determine which public keys are trusted for this image.
 func (i *UnparsedImage) Reference() types.ImageReference {
+	// Note that this does not depend on instanceDigest; e.g. all instances within a manifest list need to be signed with the manifest list identity.
 	return i.src.Reference()
 }

-// Close removes resources associated with an initialized UnparsedImage, if any.
-func (i *UnparsedImage) Close() error {
-	return i.src.Close()
-}
-
 // Manifest is like ImageSource.GetManifest, but the result is cached; it is OK to call this however often you need.
-func (i *UnparsedImage) Manifest() ([]byte, string, error) {
+func (i *UnparsedImage) Manifest(ctx context.Context) ([]byte, string, error) {
 	if i.cachedManifest == nil {
-		m, mt, err := i.src.GetManifest()
+		m, mt, err := i.src.GetManifest(ctx, i.instanceDigest)
 		if err != nil {
 			return nil, "", err
 		}

 		// ImageSource.GetManifest does not do digest verification, but we do;
 		// this immediately protects also any user of types.Image.
-		ref := i.Reference().DockerReference()
-		if ref != nil {
-			if canonical, ok := ref.(reference.Canonical); ok {
-				digest := digest.Digest(canonical.Digest())
-				matches, err := manifest.MatchesDigest(m, digest)
-				if err != nil {
-					return nil, "", errors.Wrap(err, "Error computing manifest digest")
-				}
-				if !matches {
-					return nil, "", errors.Errorf("Manifest does not match provided manifest digest %s", digest)
-				}
-			}
+		if digest, haveDigest := i.expectedManifestDigest(); haveDigest {
+			matches, err := manifest.MatchesDigest(m, digest)
+			if err != nil {
+				return nil, "", errors.Wrap(err, "Error computing manifest digest")
+			}
+			if !matches {
+				return nil, "", errors.Errorf("Manifest does not match provided manifest digest %s", digest)
+			}
 		}

@@ -72,10 +66,26 @@ func (i *UnparsedImage) Manifest() ([]byte, string, error) {
 	return i.cachedManifest, i.cachedManifestMIMEType, nil
 }

+// expectedManifestDigest returns the expected value of the manifest digest, and an indicator whether it is known.
+// The bool return value seems redundant with digest != ""; it is used explicitly
+// to refuse (unexpected) situations when the digest exists but is "".
+func (i *UnparsedImage) expectedManifestDigest() (digest.Digest, bool) {
+	if i.instanceDigest != nil {
+		return *i.instanceDigest, true
+	}
+	ref := i.Reference().DockerReference()
+	if ref != nil {
+		if canonical, ok := ref.(reference.Canonical); ok {
+			return canonical.Digest(), true
+		}
+	}
+	return "", false
+}
+
 // Signatures is like ImageSource.GetSignatures, but the result is cached; it is OK to call this however often you need.
 func (i *UnparsedImage) Signatures(ctx context.Context) ([][]byte, error) {
 	if i.cachedSignatures == nil {
-		sigs, err := i.src.GetSignatures(ctx)
+		sigs, err := i.src.GetSignatures(ctx, i.instanceDigest)
 		if err != nil {
 			return nil, err
 		}
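Seen from a caller, the digest pinning above means Manifest() refuses any payload that does not match the expected digest, whether that comes from a digested reference or an explicit manifest-list instance digest. A minimal sketch (fetchVerified is a hypothetical helper name):

package main

import (
	"context"

	"github.com/containers/image/image"
	"github.com/containers/image/types"
)

// fetchVerified fetches and caches the manifest. With a nil instance digest
// the expected digest comes from the reference when it is a
// reference.Canonical; passing a non-nil digest instead pins one instance
// of a manifest list.
func fetchVerified(ctx context.Context, src types.ImageSource) ([]byte, string, error) {
	unparsed := image.UnparsedInstance(src, nil)
	// Fails if the downloaded bytes do not match the digest computed by
	// expectedManifestDigest above.
	return unparsed.Manifest(ctx)
}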
29 vendor/github.com/containers/image/internal/tmpdir/tmpdir.go generated vendored Normal file
@@ -0,0 +1,29 @@
+package tmpdir
+
+import (
+	"os"
+	"runtime"
+)
+
+// unixTempDirForBigFiles is the directory path to store big files on non Windows systems.
+// You can override this at build time with
+// -ldflags '-X github.com/containers/image/internal/tmpdir.unixTempDirForBigFiles=$your_path'
+var unixTempDirForBigFiles = builtinUnixTempDirForBigFiles
+
+// builtinUnixTempDirForBigFiles is the directory path to store big files.
+// Do not use the system default of os.TempDir(), usually /tmp, because with systemd it could be a tmpfs.
+// DO NOT change this, instead see unixTempDirForBigFiles above.
+const builtinUnixTempDirForBigFiles = "/var/tmp"
+
+// TemporaryDirectoryForBigFiles returns a directory for temporary (big) files.
+// On non Windows systems it avoids the use of os.TempDir(), because the default temporary directory usually falls under /tmp
+// which on systemd based systems could be the unsuitable tmpfs filesystem.
+func TemporaryDirectoryForBigFiles() string {
+	var temporaryDirectoryForBigFiles string
+	if runtime.GOOS == "windows" {
+		temporaryDirectoryForBigFiles = os.TempDir()
+	} else {
+		temporaryDirectoryForBigFiles = unixTempDirForBigFiles
+	}
+	return temporaryDirectoryForBigFiles
+}
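A short usage sketch for the helper above; note that internal/... packages are importable only from within containers/image itself, so the import here is purely illustrative:

package main

import (
	"fmt"
	"io/ioutil"

	"github.com/containers/image/internal/tmpdir"
)

// Create scratch files under the directory chosen above instead of
// os.TempDir(), so multi-gigabyte layer downloads do not land on a
// RAM-backed /tmp.
func main() {
	f, err := ioutil.TempFile(tmpdir.TemporaryDirectoryForBigFiles(), "skopeo-layer-")
	if err != nil {
		panic(err)
	}
	defer f.Close()
	fmt.Println("big scratch file at:", f.Name())
}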
306 vendor/github.com/containers/image/manifest/docker_schema1.go generated vendored Normal file
@@ -0,0 +1,306 @@
+package manifest
+
+import (
+	"encoding/json"
+	"regexp"
+	"strings"
+	"time"
+
+	"github.com/containers/image/docker/reference"
+	"github.com/containers/image/types"
+	"github.com/docker/docker/api/types/versions"
+	"github.com/opencontainers/go-digest"
+	"github.com/pkg/errors"
+)
+
+// Schema1FSLayers is an entry of the "fsLayers" array in docker/distribution schema 1.
+type Schema1FSLayers struct {
+	BlobSum digest.Digest `json:"blobSum"`
+}
+
+// Schema1History is an entry of the "history" array in docker/distribution schema 1.
+type Schema1History struct {
+	V1Compatibility string `json:"v1Compatibility"`
+}
+
+// Schema1 is a manifest in docker/distribution schema 1.
+type Schema1 struct {
+	Name          string            `json:"name"`
+	Tag           string            `json:"tag"`
+	Architecture  string            `json:"architecture"`
+	FSLayers      []Schema1FSLayers `json:"fsLayers"`
+	History       []Schema1History  `json:"history"`
+	SchemaVersion int               `json:"schemaVersion"`
+}
+
+// Schema1V1Compatibility is a v1Compatibility in docker/distribution schema 1.
+type Schema1V1Compatibility struct {
+	ID              string    `json:"id"`
+	Parent          string    `json:"parent,omitempty"`
+	Comment         string    `json:"comment,omitempty"`
+	Created         time.Time `json:"created"`
+	ContainerConfig struct {
+		Cmd []string
+	} `json:"container_config,omitempty"`
+	Author    string `json:"author,omitempty"`
+	ThrowAway bool   `json:"throwaway,omitempty"`
+}
+
+// Schema1FromManifest creates a Schema1 manifest instance from a manifest blob.
+// (NOTE: The instance is not necessarily a literal representation of the original blob,
+// layers with duplicate IDs are eliminated.)
+func Schema1FromManifest(manifest []byte) (*Schema1, error) {
+	s1 := Schema1{}
+	if err := json.Unmarshal(manifest, &s1); err != nil {
+		return nil, err
+	}
+	if s1.SchemaVersion != 1 {
+		return nil, errors.Errorf("unsupported schema version %d", s1.SchemaVersion)
+	}
+	if len(s1.FSLayers) != len(s1.History) {
+		return nil, errors.New("length of history not equal to number of layers")
+	}
+	if len(s1.FSLayers) == 0 {
+		return nil, errors.New("no FSLayers in manifest")
+	}
+	if err := s1.fixManifestLayers(); err != nil {
+		return nil, err
+	}
+	return &s1, nil
+}
+
+// Schema1FromComponents creates a Schema1 manifest instance from the supplied data.
+func Schema1FromComponents(ref reference.Named, fsLayers []Schema1FSLayers, history []Schema1History, architecture string) *Schema1 {
+	var name, tag string
+	if ref != nil { // Well, what to do if it _is_ nil? Most consumers actually don't use these fields nowadays, so we might as well try not supplying them.
+		name = reference.Path(ref)
+		if tagged, ok := ref.(reference.NamedTagged); ok {
+			tag = tagged.Tag()
+		}
+	}
+	return &Schema1{
+		Name:          name,
+		Tag:           tag,
+		Architecture:  architecture,
+		FSLayers:      fsLayers,
+		History:       history,
+		SchemaVersion: 1,
+	}
+}
+
+// Schema1Clone creates a copy of the supplied Schema1 manifest.
+func Schema1Clone(src *Schema1) *Schema1 {
+	copy := *src
+	return &copy
+}
+
+// ConfigInfo returns a complete BlobInfo for the separate config object, or a BlobInfo{Digest:""} if there isn't a separate object.
+func (m *Schema1) ConfigInfo() types.BlobInfo {
+	return types.BlobInfo{}
+}
+
+// LayerInfos returns a list of BlobInfos of layers referenced by this image, in order (the root layer first, and then successive layered layers).
+// The Digest field is guaranteed to be provided; Size may be -1.
+// WARNING: The list may contain duplicates, and they are semantically relevant.
+func (m *Schema1) LayerInfos() []types.BlobInfo {
+	layers := make([]types.BlobInfo, len(m.FSLayers))
+	for i, layer := range m.FSLayers { // NOTE: This includes empty layers (where m.History.V1Compatibility->ThrowAway)
+		layers[(len(m.FSLayers)-1)-i] = types.BlobInfo{Digest: layer.BlobSum, Size: -1}
+	}
+	return layers
+}
+
+// UpdateLayerInfos replaces the original layers with the specified BlobInfos (size+digest+urls), in order (the root layer first, and then successive layered layers)
+func (m *Schema1) UpdateLayerInfos(layerInfos []types.BlobInfo) error {
+	// Our LayerInfos includes empty layers (where m.History.V1Compatibility->ThrowAway), so expect them to be included here as well.
+	if len(m.FSLayers) != len(layerInfos) {
+		return errors.Errorf("Error preparing updated manifest: layer count changed from %d to %d", len(m.FSLayers), len(layerInfos))
+	}
+	m.FSLayers = make([]Schema1FSLayers, len(layerInfos))
+	for i, info := range layerInfos {
+		// (docker push) sets up m.History.V1Compatibility->{Id,Parent} based on values of info.Digest,
+		// but (docker pull) ignores them in favor of computing DiffIDs from uncompressed data, except verifying the child->parent links and uniqueness.
+		// So, we don't bother recomputing the IDs in m.History.V1Compatibility.
+		m.FSLayers[(len(layerInfos)-1)-i].BlobSum = info.Digest
+	}
+	return nil
+}
+
+// Serialize returns the manifest in a blob format.
+// NOTE: Serialize() does not in general reproduce the original blob if this object was loaded from one, even if no modifications were made!
+func (m *Schema1) Serialize() ([]byte, error) {
+	// docker/distribution requires a signature even if the incoming data uses the nominally unsigned DockerV2Schema1MediaType.
+	unsigned, err := json.Marshal(*m)
+	if err != nil {
+		return nil, err
+	}
+	return AddDummyV2S1Signature(unsigned)
+}
+
+// fixManifestLayers, after validating the supplied manifest
+// (to use correctly-formatted IDs, and to not have non-consecutive ID collisions in m.History),
+// modifies manifest to only have one entry for each layer ID in m.History (deleting the older duplicates,
+// both from m.History and m.FSLayers).
+// Note that even after this succeeds, m.FSLayers may contain duplicate entries
+// (for Dockerfile operations which change the configuration but not the filesystem).
+func (m *Schema1) fixManifestLayers() error {
+	type imageV1 struct {
+		ID     string
+		Parent string
+	}
+	// Per the specification, we can assume that len(m.FSLayers) == len(m.History)
+	imgs := make([]*imageV1, len(m.FSLayers))
+	for i := range m.FSLayers {
+		img := &imageV1{}
+
+		if err := json.Unmarshal([]byte(m.History[i].V1Compatibility), img); err != nil {
+			return err
+		}
+
+		imgs[i] = img
+		if err := validateV1ID(img.ID); err != nil {
+			return err
+		}
+	}
+	if imgs[len(imgs)-1].Parent != "" {
+		return errors.New("Invalid parent ID in the base layer of the image")
+	}
+	// check general duplicates to error instead of a deadlock
+	idmap := make(map[string]struct{})
+	var lastID string
+	for _, img := range imgs {
+		// skip IDs that appear after each other, we handle those later
+		if _, exists := idmap[img.ID]; img.ID != lastID && exists {
+			return errors.Errorf("ID %+v appears multiple times in manifest", img.ID)
+		}
+		lastID = img.ID
+		idmap[lastID] = struct{}{}
+	}
+	// backwards loop so that we keep the remaining indexes after removing items
+	for i := len(imgs) - 2; i >= 0; i-- {
+		if imgs[i].ID == imgs[i+1].ID { // repeated ID. remove and continue
+			m.FSLayers = append(m.FSLayers[:i], m.FSLayers[i+1:]...)
+			m.History = append(m.History[:i], m.History[i+1:]...)
+		} else if imgs[i].Parent != imgs[i+1].ID {
+			return errors.Errorf("Invalid parent ID. Expected %v, got %v", imgs[i+1].ID, imgs[i].Parent)
+		}
+	}
+	return nil
+}
+
+var validHex = regexp.MustCompile(`^([a-f0-9]{64})$`)
+
+func validateV1ID(id string) error {
+	if ok := validHex.MatchString(id); !ok {
+		return errors.Errorf("image ID %q is invalid", id)
+	}
+	return nil
+}
+
+// Inspect returns various information for (skopeo inspect) parsed from the manifest and configuration.
+func (m *Schema1) Inspect(_ func(types.BlobInfo) ([]byte, error)) (*types.ImageInspectInfo, error) {
+	s1 := &Schema2V1Image{}
+	if err := json.Unmarshal([]byte(m.History[0].V1Compatibility), s1); err != nil {
+		return nil, err
+	}
+	i := &types.ImageInspectInfo{
+		Tag:           m.Tag,
+		Created:       &s1.Created,
+		DockerVersion: s1.DockerVersion,
+		Architecture:  s1.Architecture,
+		Os:            s1.OS,
+		Layers:        LayerInfosToStrings(m.LayerInfos()),
+	}
+	if s1.Config != nil {
+		i.Labels = s1.Config.Labels
+	}
+	return i, nil
+}
+
+// ToSchema2Config builds a schema2-style configuration blob using the supplied diffIDs.
+func (m *Schema1) ToSchema2Config(diffIDs []digest.Digest) ([]byte, error) {
+	// Convert the schema 1 compat info into a schema 2 config, constructing some of the fields
+	// that aren't directly comparable using info from the manifest.
+	if len(m.History) == 0 {
+		return nil, errors.New("image has no layers")
+	}
+	s1 := Schema2V1Image{}
+	config := []byte(m.History[0].V1Compatibility)
+	err := json.Unmarshal(config, &s1)
+	if err != nil {
+		return nil, errors.Wrapf(err, "error decoding configuration")
+	}
+	// Images created with versions prior to 1.8.3 require us to re-encode the encoded object,
+	// adding some fields that aren't "omitempty".
+	if s1.DockerVersion != "" && versions.LessThan(s1.DockerVersion, "1.8.3") {
+		config, err = json.Marshal(&s1)
+		if err != nil {
+			return nil, errors.Wrapf(err, "error re-encoding compat image config %#v", s1)
+		}
+	}
+	// Build the history.
+	convertedHistory := []Schema2History{}
+	for _, h := range m.History {
+		compat := Schema1V1Compatibility{}
+		if err := json.Unmarshal([]byte(h.V1Compatibility), &compat); err != nil {
+			return nil, errors.Wrapf(err, "error decoding history information")
+		}
+		hitem := Schema2History{
+			Created:    compat.Created,
+			CreatedBy:  strings.Join(compat.ContainerConfig.Cmd, " "),
+			Author:     compat.Author,
+			Comment:    compat.Comment,
+			EmptyLayer: compat.ThrowAway,
+		}
+		convertedHistory = append([]Schema2History{hitem}, convertedHistory...)
+	}
+	// Build the rootfs information. We need the decompressed sums that we've been
+	// calculating to fill in the DiffIDs. It's expected (but not enforced by us)
+	// that the number of diffIDs corresponds to the number of non-EmptyLayer
+	// entries in the history.
+	rootFS := &Schema2RootFS{
+		Type:    "layers",
+		DiffIDs: diffIDs,
+	}
+	// And now for some raw manipulation.
+	raw := make(map[string]*json.RawMessage)
+	err = json.Unmarshal(config, &raw)
+	if err != nil {
+		return nil, errors.Wrapf(err, "error re-decoding compat image config %#v", s1)
+	}
+	// Drop some fields.
+	delete(raw, "id")
+	delete(raw, "parent")
+	delete(raw, "parent_id")
+	delete(raw, "layer_id")
+	delete(raw, "throwaway")
+	delete(raw, "Size")
+	// Add the history and rootfs information.
+	rootfs, err := json.Marshal(rootFS)
+	if err != nil {
+		return nil, errors.Errorf("error encoding rootfs information %#v: %v", rootFS, err)
+	}
+	rawRootfs := json.RawMessage(rootfs)
+	raw["rootfs"] = &rawRootfs
+	history, err := json.Marshal(convertedHistory)
+	if err != nil {
+		return nil, errors.Errorf("error encoding history information %#v: %v", convertedHistory, err)
+	}
+	rawHistory := json.RawMessage(history)
+	raw["history"] = &rawHistory
+	// Encode the result.
+	config, err = json.Marshal(raw)
+	if err != nil {
+		return nil, errors.Errorf("error re-encoding compat image config %#v: %v", s1, err)
+	}
+	return config, nil
+}
+
+// ImageID computes an ID which can uniquely identify this image by its contents.
+func (m *Schema1) ImageID(diffIDs []digest.Digest) (string, error) {
+	image, err := m.ToSchema2Config(diffIDs)
+	if err != nil {
+		return "", err
+	}
+	return digest.FromBytes(image).Hex(), nil
+}
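A sketch tying the pieces above together: parse a schema1 manifest blob (Schema1FromManifest also deduplicates repeated layer IDs via fixManifestLayers), then derive a schema2-style image ID from it. The input path and the diffIDs slice are placeholders; in real use the diffIDs are the uncompressed layer digests computed while copying layers.

package main

import (
	"fmt"
	"io/ioutil"

	"github.com/containers/image/manifest"
	"github.com/opencontainers/go-digest"
)

func main() {
	blob, err := ioutil.ReadFile("manifest-v2s1.json") // placeholder input file
	if err != nil {
		panic(err)
	}
	s1, err := manifest.Schema1FromManifest(blob)
	if err != nil {
		panic(err)
	}
	var diffIDs []digest.Digest // placeholder; normally one per non-empty layer
	id, err := s1.ImageID(diffIDs)
	if err != nil {
		panic(err)
	}
	fmt.Println("image ID:", id)
}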
244 vendor/github.com/containers/image/manifest/docker_schema2.go generated vendored Normal file
@@ -0,0 +1,244 @@
package manifest

import (
	"encoding/json"
	"time"

	"github.com/containers/image/pkg/strslice"
	"github.com/containers/image/types"
	"github.com/opencontainers/go-digest"
	"github.com/pkg/errors"
)

// Schema2Descriptor is a “descriptor” in docker/distribution schema 2.
type Schema2Descriptor struct {
	MediaType string        `json:"mediaType"`
	Size      int64         `json:"size"`
	Digest    digest.Digest `json:"digest"`
	URLs      []string      `json:"urls,omitempty"`
}

// Schema2 is a manifest in docker/distribution schema 2.
type Schema2 struct {
	SchemaVersion     int                 `json:"schemaVersion"`
	MediaType         string              `json:"mediaType"`
	ConfigDescriptor  Schema2Descriptor   `json:"config"`
	LayersDescriptors []Schema2Descriptor `json:"layers"`
}

// Schema2Port is a Port, a string containing port number and protocol in the
// format "80/tcp", from docker/go-connections/nat.
type Schema2Port string

// Schema2PortSet is a PortSet, a collection of structs indexed by Port, from
// docker/go-connections/nat.
type Schema2PortSet map[Schema2Port]struct{}

// Schema2HealthConfig is a HealthConfig, which holds configuration settings
// for the HEALTHCHECK feature, from docker/docker/api/types/container.
type Schema2HealthConfig struct {
	// Test is the test to perform to check that the container is healthy.
	// An empty slice means to inherit the default.
	// The options are:
	// {}                      : inherit healthcheck
	// {"NONE"}                : disable healthcheck
	// {"CMD", args...}        : exec arguments directly
	// {"CMD-SHELL", command}  : run command with system's default shell
	Test []string `json:",omitempty"`

	// Zero means to inherit. Durations are expressed as integer nanoseconds.
	Interval time.Duration `json:",omitempty"` // Interval is the time to wait between checks.
	Timeout  time.Duration `json:",omitempty"` // Timeout is the time to wait before considering the check to have hung.

	// Retries is the number of consecutive failures needed to consider a container as unhealthy.
	// Zero means inherit.
	Retries int `json:",omitempty"`
}

// Schema2Config is a Config in docker/docker/api/types/container.
type Schema2Config struct {
	Hostname        string               // Hostname
	Domainname      string               // Domainname
	User            string               // User that will run the command(s) inside the container; also supports user:group
	AttachStdin     bool                 // Attach the standard input, makes possible user interaction
	AttachStdout    bool                 // Attach the standard output
	AttachStderr    bool                 // Attach the standard error
	ExposedPorts    Schema2PortSet       `json:",omitempty"` // List of exposed ports
	Tty             bool                 // Attach standard streams to a tty, including stdin if it is not closed.
	OpenStdin       bool                 // Open stdin
	StdinOnce       bool                 // If true, close stdin after the first attached client disconnects.
	Env             []string             // List of environment variables to set in the container
	Cmd             strslice.StrSlice    // Command to run when starting the container
	Healthcheck     *Schema2HealthConfig `json:",omitempty"` // Healthcheck describes how to check the container is healthy
	ArgsEscaped     bool                 `json:",omitempty"` // True if command is already escaped (Windows specific)
	Image           string               // Name of the image as it was passed by the operator (e.g. could be symbolic)
	Volumes         map[string]struct{}  // List of volumes (mounts) used for the container
	WorkingDir      string               // Current directory (PWD) in which the command will be launched
	Entrypoint      strslice.StrSlice    // Entrypoint to run when starting the container
	NetworkDisabled bool                 `json:",omitempty"` // Is network disabled
	MacAddress      string               `json:",omitempty"` // Mac Address of the container
	OnBuild         []string             // ONBUILD metadata that were defined on the image Dockerfile
	Labels          map[string]string    // List of labels set to this container
	StopSignal      string               `json:",omitempty"` // Signal to stop a container
	StopTimeout     *int                 `json:",omitempty"` // Timeout (in seconds) to stop a container
	Shell           strslice.StrSlice    `json:",omitempty"` // Shell for shell-form of RUN, CMD, ENTRYPOINT
}

// Schema2V1Image is a V1Image in docker/docker/image.
type Schema2V1Image struct {
	// ID is a unique 64 character identifier of the image
	ID string `json:"id,omitempty"`
	// Parent is the ID of the parent image
	Parent string `json:"parent,omitempty"`
	// Comment is the commit message that was set when committing the image
	Comment string `json:"comment,omitempty"`
	// Created is the timestamp at which the image was created
	Created time.Time `json:"created"`
	// Container is the id of the container used to commit
	Container string `json:"container,omitempty"`
	// ContainerConfig is the configuration of the container that is committed into the image
	ContainerConfig Schema2Config `json:"container_config,omitempty"`
	// DockerVersion specifies the version of Docker that was used to build the image
	DockerVersion string `json:"docker_version,omitempty"`
	// Author is the name of the author that was specified when committing the image
	Author string `json:"author,omitempty"`
	// Config is the configuration of the container received from the client
	Config *Schema2Config `json:"config,omitempty"`
	// Architecture is the hardware that the image is built on and runs on
	Architecture string `json:"architecture,omitempty"`
	// OS is the operating system used to build and run the image
	OS string `json:"os,omitempty"`
	// Size is the total size of the image including all layers it is composed of
	Size int64 `json:",omitempty"`
}

// Schema2RootFS is a description of how to build up an image's root filesystem, from docker/docker/image.
type Schema2RootFS struct {
	Type    string          `json:"type"`
	DiffIDs []digest.Digest `json:"diff_ids,omitempty"`
}

// Schema2History stores build commands that were used to create an image, from docker/docker/image.
type Schema2History struct {
	// Created is the timestamp at which the image was created
	Created time.Time `json:"created"`
	// Author is the name of the author that was specified when committing the image
	Author string `json:"author,omitempty"`
	// CreatedBy keeps the Dockerfile command used while building the image
	CreatedBy string `json:"created_by,omitempty"`
	// Comment is the commit message that was set when committing the image
	Comment string `json:"comment,omitempty"`
	// EmptyLayer is set to true if this history item did not generate a
	// layer. Otherwise, the history item is associated with the next
	// layer in the RootFS section.
	EmptyLayer bool `json:"empty_layer,omitempty"`
}

// Schema2Image is an Image in docker/docker/image.
type Schema2Image struct {
	Schema2V1Image
	Parent     digest.Digest    `json:"parent,omitempty"`
	RootFS     *Schema2RootFS   `json:"rootfs,omitempty"`
	History    []Schema2History `json:"history,omitempty"`
	OSVersion  string           `json:"os.version,omitempty"`
	OSFeatures []string         `json:"os.features,omitempty"`
}

// Schema2FromManifest creates a Schema2 manifest instance from a manifest blob.
func Schema2FromManifest(manifest []byte) (*Schema2, error) {
	s2 := Schema2{}
	if err := json.Unmarshal(manifest, &s2); err != nil {
		return nil, err
	}
	return &s2, nil
}

// Schema2FromComponents creates a Schema2 manifest instance from the supplied data.
func Schema2FromComponents(config Schema2Descriptor, layers []Schema2Descriptor) *Schema2 {
	return &Schema2{
		SchemaVersion:     2,
		MediaType:         DockerV2Schema2MediaType,
		ConfigDescriptor:  config,
		LayersDescriptors: layers,
	}
}

// Schema2Clone creates a copy of the supplied Schema2 manifest.
func Schema2Clone(src *Schema2) *Schema2 {
	copy := *src
	return &copy
}

// ConfigInfo returns a complete BlobInfo for the separate config object, or a BlobInfo{Digest:""} if there isn't a separate object.
func (m *Schema2) ConfigInfo() types.BlobInfo {
	return types.BlobInfo{Digest: m.ConfigDescriptor.Digest, Size: m.ConfigDescriptor.Size}
}

// LayerInfos returns a list of BlobInfos of layers referenced by this image, in order (the root layer first, and then successive layered layers).
// The Digest field is guaranteed to be provided; Size may be -1.
// WARNING: The list may contain duplicates, and they are semantically relevant.
func (m *Schema2) LayerInfos() []types.BlobInfo {
	blobs := []types.BlobInfo{}
	for _, layer := range m.LayersDescriptors {
		blobs = append(blobs, types.BlobInfo{
			Digest: layer.Digest,
			Size:   layer.Size,
			URLs:   layer.URLs,
		})
	}
	return blobs
}

// UpdateLayerInfos replaces the original layers with the specified BlobInfos (size+digest+urls), in order (the root layer first, and then successive layered layers)
func (m *Schema2) UpdateLayerInfos(layerInfos []types.BlobInfo) error {
	if len(m.LayersDescriptors) != len(layerInfos) {
		return errors.Errorf("Error preparing updated manifest: layer count changed from %d to %d", len(m.LayersDescriptors), len(layerInfos))
	}
	original := m.LayersDescriptors
	m.LayersDescriptors = make([]Schema2Descriptor, len(layerInfos))
	for i, info := range layerInfos {
		m.LayersDescriptors[i].MediaType = original[i].MediaType
		m.LayersDescriptors[i].Digest = info.Digest
		m.LayersDescriptors[i].Size = info.Size
		m.LayersDescriptors[i].URLs = info.URLs
	}
	return nil
}

// Serialize returns the manifest in a blob format.
// NOTE: Serialize() does not in general reproduce the original blob if this object was loaded from one, even if no modifications were made!
func (m *Schema2) Serialize() ([]byte, error) {
	return json.Marshal(*m)
}

// Inspect returns various information for (skopeo inspect) parsed from the manifest and configuration.
func (m *Schema2) Inspect(configGetter func(types.BlobInfo) ([]byte, error)) (*types.ImageInspectInfo, error) {
	config, err := configGetter(m.ConfigInfo())
	if err != nil {
		return nil, err
	}
	s2 := &Schema2Image{}
	if err := json.Unmarshal(config, s2); err != nil {
		return nil, err
	}
	i := &types.ImageInspectInfo{
		Tag:           "",
		Created:       &s2.Created,
		DockerVersion: s2.DockerVersion,
		Architecture:  s2.Architecture,
		Os:            s2.OS,
		Layers:        LayerInfosToStrings(m.LayerInfos()),
	}
	if s2.Config != nil {
		i.Labels = s2.Config.Labels
	}
	return i, nil
}

// ImageID computes an ID which can uniquely identify this image by its contents.
func (m *Schema2) ImageID([]digest.Digest) (string, error) {
	if err := m.ConfigDescriptor.Digest.Validate(); err != nil {
		return "", err
	}
	return m.ConfigDescriptor.Digest.Hex(), nil
}
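As a usage sketch for the API above: a caller can assemble a Schema2 manifest from descriptors, swap in updated layer information after, say, recompression, and serialize the result. All digests and sizes below are arbitrary example values:

```go
package main

import (
	"fmt"
	"log"

	"github.com/containers/image/manifest"
	"github.com/containers/image/types"
)

func main() {
	config := manifest.Schema2Descriptor{
		MediaType: "application/vnd.docker.container.image.v1+json",
		Size:      1510,
		Digest:    "sha256:e692418e4cbaf90ca69d05a66403747baa33ee08806650b51fab815ad7fc331f",
	}
	layers := []manifest.Schema2Descriptor{{
		MediaType: "application/vnd.docker.image.rootfs.diff.tar.gzip",
		Size:      32654,
		Digest:    "sha256:3c3a4604a545cdc127456d94e421cd355bca5b528f4a9c1905b15da2eb4a4c6b",
	}}
	m := manifest.Schema2FromComponents(config, layers)

	// Replace the layer descriptors; the count must match or an error is returned.
	if err := m.UpdateLayerInfos([]types.BlobInfo{{
		Digest: "sha256:a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4",
		Size:   31000,
	}}); err != nil {
		log.Fatal(err)
	}

	blob, err := m.Serialize()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%s\n", blob)
}
```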
96  vendor/github.com/containers/image/manifest/manifest.go  (generated, vendored)
@@ -2,7 +2,9 @@ package manifest

import (
	"encoding/json"
	"fmt"

	"github.com/containers/image/types"
	"github.com/docker/libtrust"
	"github.com/opencontainers/go-digest"
	imgspecv1 "github.com/opencontainers/image-spec/specs-go/v1"
@@ -35,7 +37,40 @@ var DefaultRequestedManifestMIMETypes = []string{
	DockerV2Schema2MediaType,
	DockerV2Schema1SignedMediaType,
	DockerV2Schema1MediaType,
	// DockerV2ListMediaType, // FIXME: Restore this ASAP
	DockerV2ListMediaType,
}

// Manifest is an interface for parsing, modifying image manifests in isolation.
// Callers can either use this abstract interface without understanding the details of the formats,
// or instantiate a specific implementation (e.g. manifest.OCI1) and access the public members
// directly.
//
// See types.Image for functionality not limited to manifests, including format conversions and config parsing.
// This interface is similar to, but not strictly equivalent to, the equivalent methods in types.Image.
type Manifest interface {
	// ConfigInfo returns a complete BlobInfo for the separate config object, or a BlobInfo{Digest:""} if there isn't a separate object.
	ConfigInfo() types.BlobInfo
	// LayerInfos returns a list of BlobInfos of layers referenced by this image, in order (the root layer first, and then successive layered layers).
	// The Digest field is guaranteed to be provided; Size may be -1.
	// WARNING: The list may contain duplicates, and they are semantically relevant.
	LayerInfos() []types.BlobInfo
	// UpdateLayerInfos replaces the original layers with the specified BlobInfos (size+digest+urls), in order (the root layer first, and then successive layered layers)
	UpdateLayerInfos(layerInfos []types.BlobInfo) error

	// ImageID computes an ID which can uniquely identify this image by its contents, irrespective
	// of which (of possibly more than one simultaneously valid) reference was used to locate the
	// image, and unchanged by whether or how the layers are compressed. The result takes the form
	// of the hexadecimal portion of a digest.Digest.
	ImageID(diffIDs []digest.Digest) (string, error)

	// Inspect returns various information for (skopeo inspect) parsed from the manifest,
	// incorporating information from a configuration blob returned by configGetter, if
	// the underlying image format is expected to include a configuration blob.
	Inspect(configGetter func(types.BlobInfo) ([]byte, error)) (*types.ImageInspectInfo, error)

	// Serialize returns the manifest in a blob format.
	// NOTE: Serialize() does not in general reproduce the original blob if this object was loaded from one, even if no modifications were made!
	Serialize() ([]byte, error)
}

// GuessMIMEType guesses MIME type of a manifest and returns it _if it is recognized_, or "" if unknown or unrecognized.
@@ -142,3 +177,62 @@ func AddDummyV2S1Signature(manifest []byte) ([]byte, error) {
	}
	return js.PrettySignature("signatures")
}

// MIMETypeIsMultiImage returns true if mimeType is a list of images
func MIMETypeIsMultiImage(mimeType string) bool {
	return mimeType == DockerV2ListMediaType
}

// NormalizedMIMEType returns the effective MIME type of a manifest MIME type returned by a server,
// centralizing various workarounds.
func NormalizedMIMEType(input string) string {
	switch input {
	// "application/json" is a valid v2s1 value per https://github.com/docker/distribution/blob/master/docs/spec/manifest-v2-1.md .
	// This works for now, when nothing else seems to return "application/json"; if that were not true, the mapping/detection might
	// need to happen within the ImageSource.
	case "application/json":
		return DockerV2Schema1SignedMediaType
	case DockerV2Schema1MediaType, DockerV2Schema1SignedMediaType,
		imgspecv1.MediaTypeImageManifest,
		DockerV2Schema2MediaType,
		DockerV2ListMediaType:
		return input
	default:
		// If it's not a recognized manifest media type, or we have failed determining the type, we'll try one last time
		// to deserialize using v2s1 as per https://github.com/docker/distribution/blob/master/manifests.go#L108
		// and https://github.com/docker/distribution/blob/master/manifest/schema1/manifest.go#L50
		//
		// Crane registries can also return "text/plain", or pretty much anything else depending on a file extension “recognized” in the tag.
		// This makes no real sense, but it happens because requests for manifests are redirected to a content
		// distribution network which is configured that way. See https://bugzilla.redhat.com/show_bug.cgi?id=1389442
		return DockerV2Schema1SignedMediaType
	}
}

// FromBlob returns a Manifest instance for the specified manifest blob and the corresponding MIME type
func FromBlob(manblob []byte, mt string) (Manifest, error) {
	switch NormalizedMIMEType(mt) {
	case DockerV2Schema1MediaType, DockerV2Schema1SignedMediaType:
		return Schema1FromManifest(manblob)
	case imgspecv1.MediaTypeImageManifest:
		return OCI1FromManifest(manblob)
	case DockerV2Schema2MediaType:
		return Schema2FromManifest(manblob)
	case DockerV2ListMediaType:
		return nil, fmt.Errorf("Treating manifest lists as individual manifests is not implemented")
	default: // Note that this may not be reachable, NormalizedMIMEType has a default for unknown values.
		return nil, fmt.Errorf("Unimplemented manifest MIME type %s", mt)
	}
}

// LayerInfosToStrings converts a list of layer infos, presumably obtained from a Manifest.LayerInfos()
// method call, into a format suitable for inclusion in a types.ImageInspectInfo structure.
func LayerInfosToStrings(infos []types.BlobInfo) []string {
	layers := make([]string, len(infos))
	for i, info := range infos {
		layers[i] = info.Digest.String()
	}
	return layers
}
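Together, GuessMIMEType, NormalizedMIMEType and FromBlob let a caller handle a manifest blob without knowing its schema in advance. A hedged sketch of that dispatch (the function name is mine, not from the library):

```go
package example

import (
	"fmt"

	"github.com/containers/image/manifest"
)

// ParseAnyManifest parses a manifest blob of unknown schema and returns
// the parsed manifest together with its layer digests as strings.
func ParseAnyManifest(blob []byte) (manifest.Manifest, []string, error) {
	mt := manifest.GuessMIMEType(blob) // "" if unrecognized; FromBlob then falls back to v2s1
	if manifest.MIMETypeIsMultiImage(mt) {
		return nil, nil, fmt.Errorf("manifest list: resolve to a single instance first")
	}
	m, err := manifest.FromBlob(blob, mt)
	if err != nil {
		return nil, nil, err
	}
	return m, manifest.LayerInfosToStrings(m.LayerInfos()), nil
}
```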
115  vendor/github.com/containers/image/manifest/oci.go  (generated, vendored, new file)
@@ -0,0 +1,115 @@
package manifest

import (
	"encoding/json"

	"github.com/containers/image/types"
	"github.com/opencontainers/go-digest"
	"github.com/opencontainers/image-spec/specs-go"
	imgspecv1 "github.com/opencontainers/image-spec/specs-go/v1"
	"github.com/pkg/errors"
)

// OCI1 is a manifest.Manifest implementation for OCI images.
// The underlying data from imgspecv1.Manifest is also available.
type OCI1 struct {
	imgspecv1.Manifest
}

// OCI1FromManifest creates an OCI1 manifest instance from a manifest blob.
func OCI1FromManifest(manifest []byte) (*OCI1, error) {
	oci1 := OCI1{}
	if err := json.Unmarshal(manifest, &oci1); err != nil {
		return nil, err
	}
	return &oci1, nil
}

// OCI1FromComponents creates an OCI1 manifest instance from the supplied data.
func OCI1FromComponents(config imgspecv1.Descriptor, layers []imgspecv1.Descriptor) *OCI1 {
	return &OCI1{
		imgspecv1.Manifest{
			Versioned: specs.Versioned{SchemaVersion: 2},
			Config:    config,
			Layers:    layers,
		},
	}
}

// OCI1Clone creates a copy of the supplied OCI1 manifest.
func OCI1Clone(src *OCI1) *OCI1 {
	return &OCI1{
		Manifest: src.Manifest,
	}
}

// ConfigInfo returns a complete BlobInfo for the separate config object, or a BlobInfo{Digest:""} if there isn't a separate object.
func (m *OCI1) ConfigInfo() types.BlobInfo {
	return types.BlobInfo{Digest: m.Config.Digest, Size: m.Config.Size, Annotations: m.Config.Annotations}
}

// LayerInfos returns a list of BlobInfos of layers referenced by this image, in order (the root layer first, and then successive layered layers).
// The Digest field is guaranteed to be provided; Size may be -1.
// WARNING: The list may contain duplicates, and they are semantically relevant.
func (m *OCI1) LayerInfos() []types.BlobInfo {
	blobs := []types.BlobInfo{}
	for _, layer := range m.Layers {
		blobs = append(blobs, types.BlobInfo{Digest: layer.Digest, Size: layer.Size, Annotations: layer.Annotations, URLs: layer.URLs, MediaType: layer.MediaType})
	}
	return blobs
}

// UpdateLayerInfos replaces the original layers with the specified BlobInfos (size+digest+urls), in order (the root layer first, and then successive layered layers)
func (m *OCI1) UpdateLayerInfos(layerInfos []types.BlobInfo) error {
	if len(m.Layers) != len(layerInfos) {
		return errors.Errorf("Error preparing updated manifest: layer count changed from %d to %d", len(m.Layers), len(layerInfos))
	}
	original := m.Layers
	m.Layers = make([]imgspecv1.Descriptor, len(layerInfos))
	for i, info := range layerInfos {
		m.Layers[i].MediaType = original[i].MediaType
		m.Layers[i].Digest = info.Digest
		m.Layers[i].Size = info.Size
		m.Layers[i].Annotations = info.Annotations
		m.Layers[i].URLs = info.URLs
	}
	return nil
}

// Serialize returns the manifest in a blob format.
// NOTE: Serialize() does not in general reproduce the original blob if this object was loaded from one, even if no modifications were made!
func (m *OCI1) Serialize() ([]byte, error) {
	return json.Marshal(*m)
}

// Inspect returns various information for (skopeo inspect) parsed from the manifest and configuration.
func (m *OCI1) Inspect(configGetter func(types.BlobInfo) ([]byte, error)) (*types.ImageInspectInfo, error) {
	config, err := configGetter(m.ConfigInfo())
	if err != nil {
		return nil, err
	}
	v1 := &imgspecv1.Image{}
	if err := json.Unmarshal(config, v1); err != nil {
		return nil, err
	}
	d1 := &Schema2V1Image{}
	json.Unmarshal(config, d1)
	i := &types.ImageInspectInfo{
		Tag:           "",
		Created:       v1.Created,
		DockerVersion: d1.DockerVersion,
		Labels:        v1.Config.Labels,
		Architecture:  v1.Architecture,
		Os:            v1.OS,
		Layers:        LayerInfosToStrings(m.LayerInfos()),
	}
	return i, nil
}

// ImageID computes an ID which can uniquely identify this image by its contents.
func (m *OCI1) ImageID([]digest.Digest) (string, error) {
	if err := m.Config.Digest.Validate(); err != nil {
		return "", err
	}
	return m.Config.Digest.Hex(), nil
}
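The OCI1 implementation mirrors Schema2, with the image ID again being the hex part of the config digest. A minimal sketch of assembling an OCI manifest from descriptors (digest values are arbitrary examples):

```go
package main

import (
	"fmt"
	"log"

	"github.com/containers/image/manifest"
	imgspecv1 "github.com/opencontainers/image-spec/specs-go/v1"
)

func main() {
	m := manifest.OCI1FromComponents(
		imgspecv1.Descriptor{
			MediaType: imgspecv1.MediaTypeImageConfig,
			Size:      1470,
			Digest:    "sha256:e692418e4cbaf90ca69d05a66403747baa33ee08806650b51fab815ad7fc331f",
		},
		[]imgspecv1.Descriptor{{
			MediaType: imgspecv1.MediaTypeImageLayerGzip,
			Size:      32654,
			Digest:    "sha256:3c3a4604a545cdc127456d94e421cd355bca5b528f4a9c1905b15da2eb4a4c6b",
		}},
	)
	// For OCI, ImageID ignores diffIDs; the ID is derived from the config digest.
	id, err := m.ImageID(nil)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("image ID:", id)
}
```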
39  vendor/github.com/containers/image/oci/archive/oci_dest.go  (generated, vendored)
@@ -1,6 +1,7 @@
package archive

import (
	"context"
	"io"
	"os"

@@ -16,12 +17,12 @@ type ociArchiveImageDestination struct {
}

// newImageDestination returns an ImageDestination for writing to an existing directory.
func newImageDestination(ctx *types.SystemContext, ref ociArchiveReference) (types.ImageDestination, error) {
func newImageDestination(ctx context.Context, sys *types.SystemContext, ref ociArchiveReference) (types.ImageDestination, error) {
	tempDirRef, err := createOCIRef(ref.image)
	if err != nil {
		return nil, errors.Wrapf(err, "error creating oci reference")
	}
	unpackedDest, err := tempDirRef.ociRefExtracted.NewImageDestination(ctx)
	unpackedDest, err := tempDirRef.ociRefExtracted.NewImageDestination(ctx, sys)
	if err != nil {
		if err := tempDirRef.deleteTempDir(); err != nil {
			return nil, errors.Wrapf(err, "error deleting temp directory %q", tempDirRef.tempDirectory)
@@ -50,13 +51,12 @@ func (d *ociArchiveImageDestination) SupportedManifestMIMETypes() []string {
}

// SupportsSignatures returns an error (to be displayed to the user) if the destination certainly can't store signatures
func (d *ociArchiveImageDestination) SupportsSignatures() error {
	return d.unpackedDest.SupportsSignatures()
func (d *ociArchiveImageDestination) SupportsSignatures(ctx context.Context) error {
	return d.unpackedDest.SupportsSignatures(ctx)
}

// ShouldCompressLayers returns true iff it is desirable to compress layer blobs written to this destination
func (d *ociArchiveImageDestination) ShouldCompressLayers() bool {
	return d.unpackedDest.ShouldCompressLayers()
func (d *ociArchiveImageDestination) DesiredLayerCompression() types.LayerCompression {
	return d.unpackedDest.DesiredLayerCompression()
}

// AcceptsForeignLayerURLs returns false iff foreign layers in manifest should be actually
@@ -73,32 +73,32 @@ func (d *ociArchiveImageDestination) MustMatchRuntimeOS() bool {
// PutBlob writes contents of stream and returns data representing the result (with all data filled in).
// inputInfo.Digest can be optionally provided if known; it is not mandatory for the implementation to verify it.
// inputInfo.Size is the expected length of stream, if known.
func (d *ociArchiveImageDestination) PutBlob(stream io.Reader, inputInfo types.BlobInfo) (types.BlobInfo, error) {
	return d.unpackedDest.PutBlob(stream, inputInfo)
func (d *ociArchiveImageDestination) PutBlob(ctx context.Context, stream io.Reader, inputInfo types.BlobInfo, isConfig bool) (types.BlobInfo, error) {
	return d.unpackedDest.PutBlob(ctx, stream, inputInfo, isConfig)
}

// HasBlob returns true iff the image destination already contains a blob with the matching digest which can be reapplied using ReapplyBlob
func (d *ociArchiveImageDestination) HasBlob(info types.BlobInfo) (bool, int64, error) {
	return d.unpackedDest.HasBlob(info)
func (d *ociArchiveImageDestination) HasBlob(ctx context.Context, info types.BlobInfo) (bool, int64, error) {
	return d.unpackedDest.HasBlob(ctx, info)
}

func (d *ociArchiveImageDestination) ReapplyBlob(info types.BlobInfo) (types.BlobInfo, error) {
	return d.unpackedDest.ReapplyBlob(info)
func (d *ociArchiveImageDestination) ReapplyBlob(ctx context.Context, info types.BlobInfo) (types.BlobInfo, error) {
	return d.unpackedDest.ReapplyBlob(ctx, info)
}

// PutManifest writes manifest to the destination
func (d *ociArchiveImageDestination) PutManifest(m []byte) error {
	return d.unpackedDest.PutManifest(m)
func (d *ociArchiveImageDestination) PutManifest(ctx context.Context, m []byte) error {
	return d.unpackedDest.PutManifest(ctx, m)
}

func (d *ociArchiveImageDestination) PutSignatures(signatures [][]byte) error {
	return d.unpackedDest.PutSignatures(signatures)
func (d *ociArchiveImageDestination) PutSignatures(ctx context.Context, signatures [][]byte) error {
	return d.unpackedDest.PutSignatures(ctx, signatures)
}

// Commit marks the process of storing the image as successful and asks for the image to be persisted;
// after the directory is made, it is tarred up into a file and the directory is deleted
func (d *ociArchiveImageDestination) Commit() error {
	if err := d.unpackedDest.Commit(); err != nil {
func (d *ociArchiveImageDestination) Commit(ctx context.Context) error {
	if err := d.unpackedDest.Commit(ctx); err != nil {
		return errors.Wrapf(err, "error storing image %q", d.ref.image)
	}

@@ -125,6 +125,7 @@ func tarDirectory(src, dst string) error {
	defer outFile.Close()

	// copies the contents of the directory to the tar file
	// TODO: This can take quite some time, and should ideally be cancellable using a context.Context.
	_, err = io.Copy(outFile, input)

	return err
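The point of this refactoring is that every potentially slow ImageDestination call now takes a context.Context, so callers can impose deadlines or cancellation from the outside. A sketch of a caller doing so; the function, the five-minute deadline, and the idea that ref and sys come from the caller's transport setup are all my assumptions:

```go
package example

import (
	"context"
	"io"
	"time"

	"github.com/containers/image/types"
)

// PushOneBlob writes a single blob into a destination, bounded by a deadline.
func PushOneBlob(ref types.ImageReference, sys *types.SystemContext, blob io.Reader, info types.BlobInfo) error {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Minute)
	defer cancel()

	dest, err := ref.NewImageDestination(ctx, sys)
	if err != nil {
		return err
	}
	defer dest.Close()

	// isConfig is false here; pass true when uploading a config blob.
	if _, err := dest.PutBlob(ctx, blob, info, false); err != nil {
		return err
	}
	return dest.Commit(ctx)
}
```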
35  vendor/github.com/containers/image/oci/archive/oci_src.go  (generated, vendored)
@@ -19,13 +19,13 @@ type ociArchiveImageSource struct {

// newImageSource returns an ImageSource for reading from an existing directory.
// newImageSource untars the file and saves it in a temp directory
func newImageSource(ctx *types.SystemContext, ref ociArchiveReference) (types.ImageSource, error) {
func newImageSource(ctx context.Context, sys *types.SystemContext, ref ociArchiveReference) (types.ImageSource, error) {
	tempDirRef, err := createUntarTempDir(ref)
	if err != nil {
		return nil, errors.Wrap(err, "error creating temp directory")
	}

	unpackedSrc, err := tempDirRef.ociRefExtracted.NewImageSource(ctx)
	unpackedSrc, err := tempDirRef.ociRefExtracted.NewImageSource(ctx, sys)
	if err != nil {
		if err := tempDirRef.deleteTempDir(); err != nil {
			return nil, errors.Wrapf(err, "error deleting temp directory %q", tempDirRef.tempDirectory)
@@ -68,21 +68,28 @@ func (s *ociArchiveImageSource) Close() error {
	return s.unpackedSrc.Close()
}

// GetManifest returns the image's manifest along with its MIME type
// (which may be empty when it can't be determined but the manifest is available).
func (s *ociArchiveImageSource) GetManifest() ([]byte, string, error) {
	return s.unpackedSrc.GetManifest()
}

func (s *ociArchiveImageSource) GetTargetManifest(digest digest.Digest) ([]byte, string, error) {
	return s.unpackedSrc.GetTargetManifest(digest)
// GetManifest returns the image's manifest along with its MIME type (which may be empty when it can't be determined but the manifest is available).
// It may use a remote (= slow) service.
// If instanceDigest is not nil, it contains a digest of the specific manifest instance to retrieve (when the primary manifest is a manifest list);
// this never happens if the primary manifest is not a manifest list (e.g. if the source never returns manifest lists).
func (s *ociArchiveImageSource) GetManifest(ctx context.Context, instanceDigest *digest.Digest) ([]byte, string, error) {
	return s.unpackedSrc.GetManifest(ctx, instanceDigest)
}

// GetBlob returns a stream for the specified blob, and the blob's size.
func (s *ociArchiveImageSource) GetBlob(info types.BlobInfo) (io.ReadCloser, int64, error) {
	return s.unpackedSrc.GetBlob(info)
func (s *ociArchiveImageSource) GetBlob(ctx context.Context, info types.BlobInfo) (io.ReadCloser, int64, error) {
	return s.unpackedSrc.GetBlob(ctx, info)
}

func (s *ociArchiveImageSource) GetSignatures(c context.Context) ([][]byte, error) {
	return s.unpackedSrc.GetSignatures(c)
// GetSignatures returns the image's signatures. It may use a remote (= slow) service.
// If instanceDigest is not nil, it contains a digest of the specific manifest instance to retrieve signatures for
// (when the primary manifest is a manifest list); this never happens if the primary manifest is not a manifest list
// (e.g. if the source never returns manifest lists).
func (s *ociArchiveImageSource) GetSignatures(ctx context.Context, instanceDigest *digest.Digest) ([][]byte, error) {
	return s.unpackedSrc.GetSignatures(ctx, instanceDigest)
}

// LayerInfosForCopy() returns updated layer info that should be used when reading, in preference to values in the manifest, if specified.
func (s *ociArchiveImageSource) LayerInfosForCopy(ctx context.Context) ([]types.BlobInfo, error) {
	return nil, nil
}
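The instanceDigest parameter replaces the old GetTargetManifest: pass nil for the primary manifest, or the digest of one manifest-list entry to fetch that instance. A sketch under that assumption (the helper and its name are mine):

```go
package example

import (
	"context"
	"fmt"

	"github.com/containers/image/types"
	"github.com/opencontainers/go-digest"
)

// FetchBoth shows the two calling conventions; src comes from some transport's
// NewImageSource, and inst is one entry of a manifest list obtained elsewhere.
func FetchBoth(ctx context.Context, src types.ImageSource, inst digest.Digest) error {
	// Primary manifest (or the whole list, if the source returns one): pass nil.
	primary, mt, err := src.GetManifest(ctx, nil)
	if err != nil {
		return err
	}
	fmt.Printf("primary: %d bytes, MIME %q\n", len(primary), mt)

	// One specific instance of a manifest list: pass its digest.
	member, mt, err := src.GetManifest(ctx, &inst)
	if err != nil {
		return err
	}
	fmt.Printf("instance %s: %d bytes, MIME %q\n", inst, len(member), mt)
	return nil
}
```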
87  vendor/github.com/containers/image/oci/archive/oci_transport.go  (generated, vendored)
@@ -1,16 +1,17 @@
package archive

import (
	"context"
	"fmt"
	"io/ioutil"
	"os"
	"path/filepath"
	"regexp"
	"strings"

	"github.com/containers/image/directory/explicitfilepath"
	"github.com/containers/image/docker/reference"
	"github.com/containers/image/image"
	"github.com/containers/image/internal/tmpdir"
	"github.com/containers/image/oci/internal"
	ocilayout "github.com/containers/image/oci/layout"
	"github.com/containers/image/transports"
	"github.com/containers/image/types"
@@ -48,51 +49,12 @@ func (t ociArchiveTransport) ParseReference(reference string) (types.ImageRefere

// ValidatePolicyConfigurationScope checks that scope is a valid name for a signature.PolicyTransportScopes keys
func (t ociArchiveTransport) ValidatePolicyConfigurationScope(scope string) error {
	var file string
	sep := strings.SplitN(scope, ":", 2)
	file = sep[0]

	if len(sep) == 2 {
		image := sep[1]
		if !refRegexp.MatchString(image) {
			return errors.Errorf("Invalid image %s", image)
		}
	}

	if !strings.HasPrefix(file, "/") {
		return errors.Errorf("Invalid scope %s: must be an absolute path", scope)
	}
	// Refuse also "/", otherwise "/" and "" would have the same semantics,
	// and "" could be unexpectedly shadowed by the "/" entry.
	// (Note: we do allow "/:someimage", a bit ridiculous but why refuse it?)
	if scope == "/" {
		return errors.New(`Invalid scope "/": Use the generic default scope ""`)
	}
	cleaned := filepath.Clean(file)
	if cleaned != file {
		return errors.Errorf(`Invalid scope %s: Uses non-canonical path format, perhaps try with path %s`, scope, cleaned)
	}
	return nil
	return internal.ValidateScope(scope)
}

// annotation spec from https://github.com/opencontainers/image-spec/blob/master/annotations.md#pre-defined-annotation-keys
const (
	separator = `(?:[-._:@+]|--)`
	alphanum  = `(?:[A-Za-z0-9]+)`
	component = `(?:` + alphanum + `(?:` + separator + alphanum + `)*)`
)

var refRegexp = regexp.MustCompile(`^` + component + `(?:/` + component + `)*$`)

// ParseReference converts a string, which should not start with the ImageTransport.Name prefix, into an OCI ImageReference.
func ParseReference(reference string) (types.ImageReference, error) {
	var file, image string
	sep := strings.SplitN(reference, ":", 2)
	file = sep[0]

	if len(sep) == 2 {
		image = sep[1]
	}
	file, image := internal.SplitPathAndImage(reference)
	return NewReference(file, image)
}

@@ -102,14 +64,15 @@ func NewReference(file, image string) (types.ImageReference, error) {
	if err != nil {
		return nil, err
	}
	// This is necessary to prevent directory paths returned by PolicyConfigurationNamespaces
	// from being ambiguous with values of PolicyConfigurationIdentity.
	if strings.Contains(resolved, ":") {
		return nil, errors.Errorf("Invalid OCI reference %s:%s: path %s contains a colon", file, image, resolved)

	if err := internal.ValidateOCIPath(file); err != nil {
		return nil, err
	}
	if len(image) > 0 && !refRegexp.MatchString(image) {
		return nil, errors.Errorf("Invalid image %s", image)

	if err := internal.ValidateImageName(image); err != nil {
		return nil, err
	}

	return ociArchiveReference{file: file, resolvedFile: resolved, image: image}, nil
}

@@ -154,30 +117,33 @@ func (ref ociArchiveReference) PolicyConfigurationNamespaces() []string {
	return res
}

// NewImage returns a types.Image for this reference, possibly specialized for this ImageTransport.
// The caller must call .Close() on the returned Image.
func (ref ociArchiveReference) NewImage(ctx *types.SystemContext) (types.Image, error) {
	src, err := newImageSource(ctx, ref)
// NewImage returns a types.ImageCloser for this reference, possibly specialized for this ImageTransport.
// The caller must call .Close() on the returned ImageCloser.
// NOTE: If any kind of signature verification should happen, build an UnparsedImage from the value returned by NewImageSource,
// verify that UnparsedImage, and convert it into a real Image via image.FromUnparsedImage.
// WARNING: This may not do the right thing for a manifest list, see image.FromSource for details.
func (ref ociArchiveReference) NewImage(ctx context.Context, sys *types.SystemContext) (types.ImageCloser, error) {
	src, err := newImageSource(ctx, sys, ref)
	if err != nil {
		return nil, err
	}
	return image.FromSource(src)
	return image.FromSource(ctx, sys, src)
}

// NewImageSource returns a types.ImageSource for this reference.
// The caller must call .Close() on the returned ImageSource.
func (ref ociArchiveReference) NewImageSource(ctx *types.SystemContext) (types.ImageSource, error) {
	return newImageSource(ctx, ref)
func (ref ociArchiveReference) NewImageSource(ctx context.Context, sys *types.SystemContext) (types.ImageSource, error) {
	return newImageSource(ctx, sys, ref)
}

// NewImageDestination returns a types.ImageDestination for this reference.
// The caller must call .Close() on the returned ImageDestination.
func (ref ociArchiveReference) NewImageDestination(ctx *types.SystemContext) (types.ImageDestination, error) {
	return newImageDestination(ctx, ref)
func (ref ociArchiveReference) NewImageDestination(ctx context.Context, sys *types.SystemContext) (types.ImageDestination, error) {
	return newImageDestination(ctx, sys, ref)
}

// DeleteImage deletes the named image from the registry, if supported.
func (ref ociArchiveReference) DeleteImage(ctx *types.SystemContext) error {
func (ref ociArchiveReference) DeleteImage(ctx context.Context, sys *types.SystemContext) error {
	return errors.Errorf("Deleting images not implemented for oci: images")
}

@@ -194,7 +160,7 @@ func (t *tempDirOCIRef) deleteTempDir() error {

// createOCIRef creates the oci reference of the image
func createOCIRef(image string) (tempDirOCIRef, error) {
	dir, err := ioutil.TempDir("/var/tmp", "oci")
	dir, err := ioutil.TempDir(tmpdir.TemporaryDirectoryForBigFiles(), "oci")
	if err != nil {
		return tempDirOCIRef{}, errors.Wrapf(err, "error creating temp directory")
	}
@@ -215,6 +181,7 @@ func createUntarTempDir(ref ociArchiveReference) (tempDirOCIRef, error) {
	}
	src := ref.resolvedFile
	dst := tempDirRef.tempDirectory
	// TODO: This can take quite some time, and should ideally be cancellable using a context.Context.
	if err := archive.UntarPath(src, dst); err != nil {
		if err := tempDirRef.deleteTempDir(); err != nil {
			return tempDirOCIRef{}, errors.Wrapf(err, "error deleting temp directory %q", tempDirRef.tempDirectory)
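With the validation delegated to oci/internal, a reference such as /var/tmp/busybox.tar:latest splits into a file path and an image name. A small parsing sketch (the path is illustrative; NewReference resolves the directory part, so the parent directory must exist):

```go
package main

import (
	"fmt"
	"log"

	ociarchive "github.com/containers/image/oci/archive"
)

func main() {
	// Splits on the first colon into file path and image name, then validates both.
	ref, err := ociarchive.ParseReference("/var/tmp/busybox.tar:latest")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(ref.StringWithinTransport())
}
```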
126  vendor/github.com/containers/image/oci/internal/oci_util.go  (generated, vendored, new file)
@@ -0,0 +1,126 @@
package internal

import (
	"path/filepath"
	"regexp"
	"runtime"
	"strings"

	"github.com/pkg/errors"
)

// annotation spec from https://github.com/opencontainers/image-spec/blob/master/annotations.md#pre-defined-annotation-keys
const (
	separator = `(?:[-._:@+]|--)`
	alphanum  = `(?:[A-Za-z0-9]+)`
	component = `(?:` + alphanum + `(?:` + separator + alphanum + `)*)`
)

var refRegexp = regexp.MustCompile(`^` + component + `(?:/` + component + `)*$`)
var windowsRefRegexp = regexp.MustCompile(`^([a-zA-Z]:\\.+?):(.*)$`)

// ValidateImageName returns nil if the image name is empty or matches the open-containers image name specs.
// In any other case an error is returned.
func ValidateImageName(image string) error {
	if len(image) == 0 {
		return nil
	}

	var err error
	if !refRegexp.MatchString(image) {
		err = errors.Errorf("Invalid image %s", image)
	}
	return err
}

// SplitPathAndImage tries to split the provided OCI reference into the OCI path and image.
// Neither path nor image parts are validated at this stage.
func SplitPathAndImage(reference string) (string, string) {
	if runtime.GOOS == "windows" {
		return splitPathAndImageWindows(reference)
	}
	return splitPathAndImageNonWindows(reference)
}

func splitPathAndImageWindows(reference string) (string, string) {
	groups := windowsRefRegexp.FindStringSubmatch(reference)
	// nil group means no match
	if groups == nil {
		return reference, ""
	}

	// we expect three elements: the full match, the capture group for the path,
	// and the capture group for the image
	if len(groups) != 3 {
		return reference, ""
	}
	return groups[1], groups[2]
}

func splitPathAndImageNonWindows(reference string) (string, string) {
	sep := strings.SplitN(reference, ":", 2)
	path := sep[0]

	var image string
	if len(sep) == 2 {
		image = sep[1]
	}
	return path, image
}

// ValidateOCIPath takes the OCI path and validates it.
func ValidateOCIPath(path string) error {
	if runtime.GOOS == "windows" {
		// On Windows we must allow for a ':' as part of the path
		if strings.Count(path, ":") > 1 {
			return errors.Errorf("Invalid OCI reference: path %s contains more than one colon", path)
		}
	} else {
		if strings.Contains(path, ":") {
			return errors.Errorf("Invalid OCI reference: path %s contains a colon", path)
		}
	}
	return nil
}

// ValidateScope validates a policy configuration scope for an OCI transport.
func ValidateScope(scope string) error {
	var err error
	if runtime.GOOS == "windows" {
		err = validateScopeWindows(scope)
	} else {
		err = validateScopeNonWindows(scope)
	}
	if err != nil {
		return err
	}

	cleaned := filepath.Clean(scope)
	if cleaned != scope {
		return errors.Errorf(`Invalid scope %s: Uses non-canonical path format, perhaps try with path %s`, scope, cleaned)
	}

	return nil
}

func validateScopeWindows(scope string) error {
	matched, _ := regexp.Match(`^[a-zA-Z]:\\`, []byte(scope))
	if !matched {
		return errors.Errorf("Invalid scope '%s'. Must be an absolute path", scope)
	}

	return nil
}

func validateScopeNonWindows(scope string) error {
	if !strings.HasPrefix(scope, "/") {
		return errors.Errorf("Invalid scope %s: must be an absolute path", scope)
	}

	// Refuse also "/", otherwise "/" and "" would have the same semantics,
	// and "" could be unexpectedly shadowed by the "/" entry.
	if scope == "/" {
		return errors.New(`Invalid scope "/": Use the generic default scope ""`)
	}

	return nil
}
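A sketch of the splitting behavior, written as a test because Go "internal" packages cannot be imported from outside the containers/image module (the paths are made up):

```go
package internal

import "testing"

func TestSplitPathAndImageSketch(t *testing.T) {
	// On non-Windows systems the reference splits on the first colon.
	path, image := SplitPathAndImage("/var/lib/oci/busybox:latest")
	if path != "/var/lib/oci/busybox" || image != "latest" {
		t.Fatalf("got %q, %q", path, image)
	}
	// On Windows the drive-letter colon stays with the path:
	// SplitPathAndImage(`C:\oci\busybox:latest`) == `C:\oci\busybox`, "latest"
}
```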
66  vendor/github.com/containers/image/oci/layout/oci_dest.go  (generated, vendored)
@@ -1,6 +1,7 @@
package layout

import (
	"context"
	"encoding/json"
	"io"
	"io/ioutil"
@@ -8,13 +9,12 @@ import (
	"path/filepath"
	"runtime"

	"github.com/pkg/errors"

	"github.com/containers/image/manifest"
	"github.com/containers/image/types"
	"github.com/opencontainers/go-digest"
	digest "github.com/opencontainers/go-digest"
	imgspec "github.com/opencontainers/image-spec/specs-go"
	imgspecv1 "github.com/opencontainers/image-spec/specs-go/v1"
	"github.com/pkg/errors"
)

type ociImageDestination struct {
@@ -24,11 +24,7 @@ type ociImageDestination struct {
}

// newImageDestination returns an ImageDestination for writing to an existing directory.
func newImageDestination(ctx *types.SystemContext, ref ociReference) (types.ImageDestination, error) {
	if ref.image == "" {
		return nil, errors.Errorf("cannot save image with empty image.ref.name")
	}

func newImageDestination(sys *types.SystemContext, ref ociReference) (types.ImageDestination, error) {
	var index *imgspecv1.Index
	if indexExists(ref) {
		var err error
@@ -45,8 +41,8 @@ func newImageDestination(ctx *types.SystemContext, ref ociReference) (types.Imag
	}

	d := &ociImageDestination{ref: ref, index: *index}
	if ctx != nil {
		d.sharedBlobDir = ctx.OCISharedBlobDirPath
	if sys != nil {
		d.sharedBlobDir = sys.OCISharedBlobDirPath
	}

	if err := ensureDirectoryExists(d.ref.dir); err != nil {
@@ -80,13 +76,12 @@ func (d *ociImageDestination) SupportedManifestMIMETypes() []string {

// SupportsSignatures returns an error (to be displayed to the user) if the destination certainly can't store signatures.
// Note: It is still possible for PutSignatures to fail if SupportsSignatures returns nil.
func (d *ociImageDestination) SupportsSignatures() error {
func (d *ociImageDestination) SupportsSignatures(ctx context.Context) error {
	return errors.Errorf("Pushing signatures for OCI images is not supported")
}

// ShouldCompressLayers returns true iff it is desirable to compress layer blobs written to this destination.
func (d *ociImageDestination) ShouldCompressLayers() bool {
	return true
func (d *ociImageDestination) DesiredLayerCompression() types.LayerCompression {
	return types.Compress
}

// AcceptsForeignLayerURLs returns false iff foreign layers in manifest should be actually
@@ -106,14 +101,17 @@ func (d *ociImageDestination) MustMatchRuntimeOS() bool {
// WARNING: The contents of stream are being verified on the fly. Until stream.Read() returns io.EOF, the contents of the data SHOULD NOT be available
// to any other readers for download using the supplied digest.
// If stream.Read() at any time, ESPECIALLY at end of input, returns an error, PutBlob MUST 1) fail, and 2) delete any data stored so far.
func (d *ociImageDestination) PutBlob(stream io.Reader, inputInfo types.BlobInfo) (types.BlobInfo, error) {
func (d *ociImageDestination) PutBlob(ctx context.Context, stream io.Reader, inputInfo types.BlobInfo, isConfig bool) (types.BlobInfo, error) {
	blobFile, err := ioutil.TempFile(d.ref.dir, "oci-put-blob")
	if err != nil {
		return types.BlobInfo{}, err
	}
	succeeded := false
	explicitClosed := false
	defer func() {
		blobFile.Close()
		if !explicitClosed {
			blobFile.Close()
		}
		if !succeeded {
			os.Remove(blobFile.Name())
		}
@@ -122,6 +120,7 @@ func (d *ociImageDestination) PutBlob(stream io.Reader, inputInfo types.BlobInfo
	digester := digest.Canonical.Digester()
	tee := io.TeeReader(stream, digester.Hash())

	// TODO: This can take quite some time, and should ideally be cancellable using ctx.Done().
	size, err := io.Copy(blobFile, tee)
	if err != nil {
		return types.BlobInfo{}, err
@@ -133,8 +132,15 @@ func (d *ociImageDestination) PutBlob(stream io.Reader, inputInfo types.BlobInfo
	if err := blobFile.Sync(); err != nil {
		return types.BlobInfo{}, err
	}
	if err := blobFile.Chmod(0644); err != nil {
		return types.BlobInfo{}, err

	// On POSIX systems, blobFile was created with mode 0600, so we need to make it readable.
	// On Windows, the “permissions of newly created files” argument to syscall.Open is
	// ignored and the file is already readable; besides, blobFile.Chmod, i.e. syscall.Fchmod,
	// always fails on Windows.
	if runtime.GOOS != "windows" {
		if err := blobFile.Chmod(0644); err != nil {
			return types.BlobInfo{}, err
		}
	}

	blobPath, err := d.ref.blobPath(computedDigest, d.sharedBlobDir)
@@ -144,6 +150,10 @@ func (d *ociImageDestination) PutBlob(stream io.Reader, inputInfo types.BlobInfo
	if err := ensureParentDirectoryExists(blobPath); err != nil {
		return types.BlobInfo{}, err
	}

	// need to explicitly close the file, since a rename won't otherwise work on Windows
	blobFile.Close()
	explicitClosed = true
	if err := os.Rename(blobFile.Name(), blobPath); err != nil {
		return types.BlobInfo{}, err
	}
@@ -155,7 +165,7 @@ func (d *ociImageDestination) PutBlob(stream io.Reader, inputInfo types.BlobInfo
// Unlike PutBlob, the digest can not be empty. If HasBlob returns true, the size of the blob must also be returned.
// If the destination does not contain the blob, or it is unknown, HasBlob ordinarily returns (false, -1, nil);
// it returns a non-nil error only on an unexpected failure.
func (d *ociImageDestination) HasBlob(info types.BlobInfo) (bool, int64, error) {
func (d *ociImageDestination) HasBlob(ctx context.Context, info types.BlobInfo) (bool, int64, error) {
	if info.Digest == "" {
		return false, -1, errors.Errorf("Can not check for a blob with unknown digest")
	}
@@ -173,7 +183,7 @@ func (d *ociImageDestination) HasBlob(info types.BlobInfo) (bool, int64, error)
	return true, finfo.Size(), nil
}

func (d *ociImageDestination) ReapplyBlob(info types.BlobInfo) (types.BlobInfo, error) {
func (d *ociImageDestination) ReapplyBlob(ctx context.Context, info types.BlobInfo) (types.BlobInfo, error) {
	return info, nil
}

@@ -181,7 +191,7 @@ func (d *ociImageDestination) ReapplyBlob(info types.BlobInfo) (types.BlobInfo,
// FIXME? This should also receive a MIME type if known, to differentiate between schema versions.
// If the destination is in principle available, refuses this manifest type (e.g. it does not recognize the schema),
// but may accept a different manifest type, the returned error must be an ManifestTypeRejectedError.
func (d *ociImageDestination) PutManifest(m []byte) error {
func (d *ociImageDestination) PutManifest(ctx context.Context, m []byte) error {
	digest, err := manifest.Digest(m)
	if err != nil {
		return err
@@ -203,13 +213,11 @@ func (d *ociImageDestination) PutManifest(m []byte) error {
		return err
	}

	if d.ref.image == "" {
		return errors.Errorf("cannot save image with empty image.ref.name")
	if d.ref.image != "" {
		annotations := make(map[string]string)
		annotations["org.opencontainers.image.ref.name"] = d.ref.image
		desc.Annotations = annotations
	}

	annotations := make(map[string]string)
	annotations["org.opencontainers.image.ref.name"] = d.ref.image
	desc.Annotations = annotations
	desc.Platform = &imgspecv1.Platform{
		Architecture: runtime.GOARCH,
		OS:           runtime.GOOS,
@@ -230,7 +238,7 @@ func (d *ociImageDestination) addManifest(desc *imgspecv1.Descriptor) {
	d.index.Manifests = append(d.index.Manifests, *desc)
}

func (d *ociImageDestination) PutSignatures(signatures [][]byte) error {
func (d *ociImageDestination) PutSignatures(ctx context.Context, signatures [][]byte) error {
	if len(signatures) != 0 {
		return errors.Errorf("Pushing signatures for OCI images is not supported")
	}
@@ -241,7 +249,7 @@ func (d *ociImageDestination) PutSignatures(signatures [][]byte) error {
// WARNING: This does not have any transactional semantics:
// - Uploaded data MAY be visible to others before Commit() is called
// - Uploaded data MAY be removed or MAY remain around if Close() is called without Commit() (i.e. rollback is allowed but not guaranteed)
func (d *ociImageDestination) Commit() error {
func (d *ociImageDestination) Commit(ctx context.Context) error {
	if err := ioutil.WriteFile(d.ref.ociLayoutPath(), []byte(`{"imageLayoutVersion": "1.0.0"}`), 0644); err != nil {
		return err
	}
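The PutBlob implementation above streams the blob to a temporary file while computing its digest in a single pass. The same TeeReader idiom in isolation, as a minimal runnable sketch:

```go
package main

import (
	"bytes"
	"fmt"
	"io"
	"io/ioutil"
	"log"

	"github.com/opencontainers/go-digest"
)

func main() {
	stream := bytes.NewReader([]byte("example blob contents"))

	digester := digest.Canonical.Digester()
	tee := io.TeeReader(stream, digester.Hash()) // every byte read is also hashed

	n, err := io.Copy(ioutil.Discard, tee) // stand-in for the temp-file copy
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("copied %d bytes, digest %s\n", n, digester.Digest())
}
```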
80  vendor/github.com/containers/image/oci/layout/oci_src.go  (generated, vendored)
@@ -24,15 +24,15 @@ type ociImageSource struct {
|
||||
}
|
||||
|
||||
// newImageSource returns an ImageSource for reading from an existing directory.
-func newImageSource(ctx *types.SystemContext, ref ociReference) (types.ImageSource, error) {
+func newImageSource(sys *types.SystemContext, ref ociReference) (types.ImageSource, error) {
 	tr := tlsclientconfig.NewTransport()
 	tr.TLSClientConfig = tlsconfig.ServerDefault()
 
-	if ctx != nil && ctx.OCICertPath != "" {
-		if err := tlsclientconfig.SetupCertificates(ctx.OCICertPath, tr.TLSClientConfig); err != nil {
+	if sys != nil && sys.OCICertPath != "" {
+		if err := tlsclientconfig.SetupCertificates(sys.OCICertPath, tr.TLSClientConfig); err != nil {
 			return nil, err
 		}
-		tr.TLSClientConfig.InsecureSkipVerify = ctx.OCIInsecureSkipTLSVerify
+		tr.TLSClientConfig.InsecureSkipVerify = sys.OCIInsecureSkipTLSVerify
 	}
 
 	client := &http.Client{}
@@ -42,9 +42,9 @@ func newImageSource(ctx *types.SystemContext, ref ociReference) (types.ImageSour
 		return nil, err
 	}
 	d := &ociImageSource{ref: ref, descriptor: descriptor, client: client}
-	if ctx != nil {
+	if sys != nil {
 		// TODO(jonboulle): check dir existence?
-		d.sharedBlobDir = ctx.OCISharedBlobDirPath
+		d.sharedBlobDir = sys.OCISharedBlobDirPath
 	}
 	return d, nil
 }
@@ -61,8 +61,26 @@ func (s *ociImageSource) Close() error {
 
 // GetManifest returns the image's manifest along with its MIME type (which may be empty when it can't be determined but the manifest is available).
 // It may use a remote (= slow) service.
-func (s *ociImageSource) GetManifest() ([]byte, string, error) {
-	manifestPath, err := s.ref.blobPath(digest.Digest(s.descriptor.Digest), s.sharedBlobDir)
+// If instanceDigest is not nil, it contains a digest of the specific manifest instance to retrieve (when the primary manifest is a manifest list);
+// this never happens if the primary manifest is not a manifest list (e.g. if the source never returns manifest lists).
+func (s *ociImageSource) GetManifest(ctx context.Context, instanceDigest *digest.Digest) ([]byte, string, error) {
+	var dig digest.Digest
+	var mimeType string
+	if instanceDigest == nil {
+		dig = digest.Digest(s.descriptor.Digest)
+		mimeType = s.descriptor.MediaType
+	} else {
+		dig = *instanceDigest
+		// XXX: instanceDigest means that we don't immediately have the context of what
+		// mediaType the manifest has. In OCI this means that we don't know
+		// what reference it came from, so we just *assume* that its
+		// MediaTypeImageManifest.
+		// FIXME: We should actually be able to look up the manifest in the index,
+		// and see the MIME type there.
+		mimeType = imgspecv1.MediaTypeImageManifest
+	}
+
+	manifestPath, err := s.ref.blobPath(dig, s.sharedBlobDir)
 	if err != nil {
 		return nil, "", err
 	}
@@ -71,31 +89,13 @@ func (s *ociImageSource) GetManifest() ([]byte, string, error) {
 		return nil, "", err
 	}
 
-	return m, s.descriptor.MediaType, nil
-}
-
-func (s *ociImageSource) GetTargetManifest(digest digest.Digest) ([]byte, string, error) {
-	manifestPath, err := s.ref.blobPath(digest, s.sharedBlobDir)
-	if err != nil {
-		return nil, "", err
-	}
-
-	m, err := ioutil.ReadFile(manifestPath)
-	if err != nil {
-		return nil, "", err
-	}
-
-	// XXX: GetTargetManifest means that we don't have the context of what
-	// mediaType the manifest has. In OCI this means that we don't know
-	// what reference it came from, so we just *assume* that its
-	// MediaTypeImageManifest.
-	return m, imgspecv1.MediaTypeImageManifest, nil
+	return m, mimeType, nil
 }
 
 // GetBlob returns a stream for the specified blob, and the blob's size.
-func (s *ociImageSource) GetBlob(info types.BlobInfo) (io.ReadCloser, int64, error) {
+func (s *ociImageSource) GetBlob(ctx context.Context, info types.BlobInfo) (io.ReadCloser, int64, error) {
 	if len(info.URLs) != 0 {
-		return s.getExternalBlob(info.URLs)
+		return s.getExternalBlob(ctx, info.URLs)
 	}
 
 	path, err := s.ref.blobPath(info.Digest, s.sharedBlobDir)
@@ -114,14 +114,25 @@ func (s *ociImageSource) GetBlob(info types.BlobInfo) (io.ReadCloser, int64, err
 	return r, fi.Size(), nil
 }
 
-func (s *ociImageSource) GetSignatures(context.Context) ([][]byte, error) {
+// GetSignatures returns the image's signatures. It may use a remote (= slow) service.
+// If instanceDigest is not nil, it contains a digest of the specific manifest instance to retrieve signatures for
+// (when the primary manifest is a manifest list); this never happens if the primary manifest is not a manifest list
+// (e.g. if the source never returns manifest lists).
+func (s *ociImageSource) GetSignatures(ctx context.Context, instanceDigest *digest.Digest) ([][]byte, error) {
 	return [][]byte{}, nil
 }
 
-func (s *ociImageSource) getExternalBlob(urls []string) (io.ReadCloser, int64, error) {
+func (s *ociImageSource) getExternalBlob(ctx context.Context, urls []string) (io.ReadCloser, int64, error) {
 	errWrap := errors.New("failed fetching external blob from all urls")
 	for _, url := range urls {
-		resp, err := s.client.Get(url)
+
+		req, err := http.NewRequest("GET", url, nil)
+		if err != nil {
+			errWrap = errors.Wrapf(errWrap, "fetching %s failed %s", url, err.Error())
+			continue
+		}
+
+		resp, err := s.client.Do(req.WithContext(ctx))
 		if err != nil {
 			errWrap = errors.Wrapf(errWrap, "fetching %s failed %s", url, err.Error())
 			continue
@@ -139,6 +150,11 @@ func (s *ociImageSource) getExternalBlob(urls []string) (io.ReadCloser, int64, e
 	return nil, 0, errWrap
 }
 
+// LayerInfosForCopy() returns updated layer info that should be used when reading, in preference to values in the manifest, if specified.
+func (s *ociImageSource) LayerInfosForCopy(ctx context.Context) ([]types.BlobInfo, error) {
+	return nil, nil
+}
+
 func getBlobSize(resp *http.Response) int64 {
 	size, err := strconv.ParseInt(resp.Header.Get("Content-Length"), 10, 64)
 	if err != nil {
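Note: for orientation, here is a minimal sketch (not part of the diff) of how a caller drives the reworked, context-aware ImageSource API above. The layout path and tag are invented for illustration; error handling is reduced to panics.

package main

import (
	"context"
	"fmt"

	"github.com/containers/image/oci/layout"
	"github.com/containers/image/types"
)

func main() {
	ctx := context.Background()
	// Parse a hypothetical "dir:image" string into an OCI layout reference.
	ref, err := layout.ParseReference("/tmp/oci-layout:latest")
	if err != nil {
		panic(err)
	}
	var sys *types.SystemContext // nil is accepted; OCICertPath etc. are optional
	src, err := ref.NewImageSource(ctx, sys)
	if err != nil {
		panic(err)
	}
	defer src.Close()
	// A nil instanceDigest selects the primary manifest; a non-nil digest
	// would select one instance out of a manifest list.
	m, mime, err := src.GetManifest(ctx, nil)
	if err != nil {
		panic(err)
	}
	fmt.Printf("manifest is %d bytes, MIME type %q\n", len(m), mime)
}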
vendor/github.com/containers/image/oci/layout/oci_transport.go (generated, vendored; 84 changed lines)
@@ -1,16 +1,17 @@
 package layout
 
 import (
+	"context"
 	"encoding/json"
 	"fmt"
 	"os"
 	"path/filepath"
-	"regexp"
 	"strings"
 
 	"github.com/containers/image/directory/explicitfilepath"
 	"github.com/containers/image/docker/reference"
 	"github.com/containers/image/image"
+	"github.com/containers/image/oci/internal"
 	"github.com/containers/image/transports"
 	"github.com/containers/image/types"
 	"github.com/opencontainers/go-digest"
@@ -36,45 +37,12 @@ func (t ociTransport) ParseReference(reference string) (types.ImageReference, er
 	return ParseReference(reference)
 }
 
-// annotation spex from https://github.com/opencontainers/image-spec/blob/master/annotations.md#pre-defined-annotation-keys
-const (
-	separator = `(?:[-._:@+]|--)`
-	alphanum  = `(?:[A-Za-z0-9]+)`
-	component = `(?:` + alphanum + `(?:` + separator + alphanum + `)*)`
-)
-
-var refRegexp = regexp.MustCompile(`^` + component + `(?:/` + component + `)*$`)
-
 // ValidatePolicyConfigurationScope checks that scope is a valid name for a signature.PolicyTransportScopes keys
 // (i.e. a valid PolicyConfigurationIdentity() or PolicyConfigurationNamespaces() return value).
 // It is acceptable to allow an invalid value which will never be matched, it can "only" cause user confusion.
 // scope passed to this function will not be "", that value is always allowed.
 func (t ociTransport) ValidatePolicyConfigurationScope(scope string) error {
-	var dir string
-	sep := strings.SplitN(scope, ":", 2)
-	dir = sep[0]
-
-	if len(sep) == 2 {
-		image := sep[1]
-		if !refRegexp.MatchString(image) {
-			return errors.Errorf("Invalid image %s", image)
-		}
-	}
-
-	if !strings.HasPrefix(dir, "/") {
-		return errors.Errorf("Invalid scope %s: must be an absolute path", scope)
-	}
-	// Refuse also "/", otherwise "/" and "" would have the same semantics,
-	// and "" could be unexpectedly shadowed by the "/" entry.
-	// (Note: we do allow "/:someimage", a bit ridiculous but why refuse it?)
-	if scope == "/" {
-		return errors.New(`Invalid scope "/": Use the generic default scope ""`)
-	}
-	cleaned := filepath.Clean(dir)
-	if cleaned != dir {
-		return errors.Errorf(`Invalid scope %s: Uses non-canonical path format, perhaps try with path %s`, scope, cleaned)
-	}
-	return nil
+	return internal.ValidateScope(scope)
 }
 
 // ociReference is an ImageReference for OCI directory paths.
@@ -87,18 +55,14 @@ type ociReference struct {
 	// (But in general, we make no attempt to be completely safe against concurrent hostile filesystem modifications.)
 	dir         string // As specified by the user. May be relative, contain symlinks, etc.
 	resolvedDir string // Absolute path with no symlinks, at least at the time of its creation. Primarily used for policy namespaces.
-	image string // If image=="", it means the only image in the index.json is used
+	// If image=="", it means the "only image" in the index.json is used in the case it is a source
+	// for destinations, the image name annotation "image.ref.name" is not added to the index.json
+	image string
 }
 
 // ParseReference converts a string, which should not start with the ImageTransport.Name prefix, into an OCI ImageReference.
 func ParseReference(reference string) (types.ImageReference, error) {
-	var dir, image string
-	sep := strings.SplitN(reference, ":", 2)
-	dir = sep[0]
-
-	if len(sep) == 2 {
-		image = sep[1]
-	}
+	dir, image := internal.SplitPathAndImage(reference)
 	return NewReference(dir, image)
 }
 
@@ -111,14 +75,15 @@ func NewReference(dir, image string) (types.ImageReference, error) {
 	if err != nil {
 		return nil, err
 	}
-	// This is necessary to prevent directory paths returned by PolicyConfigurationNamespaces
-	// from being ambiguous with values of PolicyConfigurationIdentity.
-	if strings.Contains(resolved, ":") {
-		return nil, errors.Errorf("Invalid OCI reference %s:%s: path %s contains a colon", dir, image, resolved)
+
+	if err := internal.ValidateOCIPath(dir); err != nil {
+		return nil, err
 	}
-	if len(image) > 0 && !refRegexp.MatchString(image) {
-		return nil, errors.Errorf("Invalid image %s", image)
+
+	if err = internal.ValidateImageName(image); err != nil {
+		return nil, err
 	}
+
 	return ociReference{dir: dir, resolvedDir: resolved, image: image}, nil
 }
 
@@ -177,16 +142,17 @@ func (ref ociReference) PolicyConfigurationNamespaces() []string {
 	return res
 }
 
-// NewImage returns a types.Image for this reference, possibly specialized for this ImageTransport.
-// The caller must call .Close() on the returned Image.
+// NewImage returns a types.ImageCloser for this reference, possibly specialized for this ImageTransport.
+// The caller must call .Close() on the returned ImageCloser.
 // NOTE: If any kind of signature verification should happen, build an UnparsedImage from the value returned by NewImageSource,
 // verify that UnparsedImage, and convert it into a real Image via image.FromUnparsedImage.
-func (ref ociReference) NewImage(ctx *types.SystemContext) (types.Image, error) {
-	src, err := newImageSource(ctx, ref)
+// WARNING: This may not do the right thing for a manifest list, see image.FromSource for details.
+func (ref ociReference) NewImage(ctx context.Context, sys *types.SystemContext) (types.ImageCloser, error) {
+	src, err := newImageSource(sys, ref)
 	if err != nil {
 		return nil, err
 	}
-	return image.FromSource(src)
+	return image.FromSource(ctx, sys, src)
 }
 
 // getIndex returns a pointer to the index references by this ociReference. If an error occurs opening an index nil is returned together
@@ -254,18 +220,18 @@ func LoadManifestDescriptor(imgRef types.ImageReference) (imgspecv1.Descriptor,
 
 // NewImageSource returns a types.ImageSource for this reference.
 // The caller must call .Close() on the returned ImageSource.
-func (ref ociReference) NewImageSource(ctx *types.SystemContext) (types.ImageSource, error) {
-	return newImageSource(ctx, ref)
+func (ref ociReference) NewImageSource(ctx context.Context, sys *types.SystemContext) (types.ImageSource, error) {
+	return newImageSource(sys, ref)
 }
 
 // NewImageDestination returns a types.ImageDestination for this reference.
 // The caller must call .Close() on the returned ImageDestination.
-func (ref ociReference) NewImageDestination(ctx *types.SystemContext) (types.ImageDestination, error) {
-	return newImageDestination(ctx, ref)
+func (ref ociReference) NewImageDestination(ctx context.Context, sys *types.SystemContext) (types.ImageDestination, error) {
	return newImageDestination(sys, ref)
 }
 
 // DeleteImage deletes the named image from the registry, if supported.
-func (ref ociReference) DeleteImage(ctx *types.SystemContext) error {
+func (ref ociReference) DeleteImage(ctx context.Context, sys *types.SystemContext) error {
 	return errors.Errorf("Deleting images not implemented for oci: images")
 }
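Note: the internal helpers that replace the inline parsing above (internal.SplitPathAndImage and friends) are not shown in this diff. The sketch below reimplements the "dir:image" split they are expected to perform, inferred directly from the code that was removed from ParseReference; splitPathAndImage is a hypothetical local name, not the vendored function.

package main

import "strings"

// splitPathAndImage separates an "oci:" reference of the form "dir:image"
// into its directory and optional image parts; everything before the first
// colon is the layout directory, the rest (if any) is the image name.
func splitPathAndImage(reference string) (dir, image string) {
	sep := strings.SplitN(reference, ":", 2)
	dir = sep[0]
	if len(sep) == 2 {
		image = sep[1]
	}
	return dir, image
}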
vendor/github.com/containers/image/openshift/openshift.go (generated, vendored; 90 changed lines)
@@ -162,7 +162,7 @@ func (c *openshiftClient) convertDockerImageReference(ref string) (string, error
 type openshiftImageSource struct {
 	client *openshiftClient
 	// Values specific to this image
-	ctx *types.SystemContext
+	sys *types.SystemContext
 	// State
 	docker               types.ImageSource // The Docker Registry endpoint, or nil if not resolved yet
 	imageStreamImageName string            // Resolved image identifier, or "" if not known yet
@@ -170,7 +170,7 @@ type openshiftImageSource struct {
 
 // newImageSource creates a new ImageSource for the specified reference.
 // The caller must call .Close() on the returned ImageSource.
-func newImageSource(ctx *types.SystemContext, ref openshiftReference) (types.ImageSource, error) {
+func newImageSource(sys *types.SystemContext, ref openshiftReference) (types.ImageSource, error) {
 	client, err := newOpenshiftClient(ref)
 	if err != nil {
 		return nil, err
@@ -178,7 +178,7 @@ func newImageSource(ctx *types.SystemContext, ref openshiftReference) (types.Ima
 
 	return &openshiftImageSource{
 		client: client,
-		ctx:    ctx,
+		sys:    sys,
 	}, nil
 }
 
@@ -200,36 +200,40 @@ func (s *openshiftImageSource) Close() error {
 	return nil
 }
 
-func (s *openshiftImageSource) GetTargetManifest(digest digest.Digest) ([]byte, string, error) {
-	if err := s.ensureImageIsResolved(context.TODO()); err != nil {
-		return nil, "", err
-	}
-	return s.docker.GetTargetManifest(digest)
-}
-
 // GetManifest returns the image's manifest along with its MIME type (which may be empty when it can't be determined but the manifest is available).
 // It may use a remote (= slow) service.
-func (s *openshiftImageSource) GetManifest() ([]byte, string, error) {
-	if err := s.ensureImageIsResolved(context.TODO()); err != nil {
+// If instanceDigest is not nil, it contains a digest of the specific manifest instance to retrieve (when the primary manifest is a manifest list);
+// this never happens if the primary manifest is not a manifest list (e.g. if the source never returns manifest lists).
+func (s *openshiftImageSource) GetManifest(ctx context.Context, instanceDigest *digest.Digest) ([]byte, string, error) {
+	if err := s.ensureImageIsResolved(ctx); err != nil {
 		return nil, "", err
 	}
-	return s.docker.GetManifest()
+	return s.docker.GetManifest(ctx, instanceDigest)
 }
 
 // GetBlob returns a stream for the specified blob, and the blob’s size (or -1 if unknown).
-func (s *openshiftImageSource) GetBlob(info types.BlobInfo) (io.ReadCloser, int64, error) {
-	if err := s.ensureImageIsResolved(context.TODO()); err != nil {
+func (s *openshiftImageSource) GetBlob(ctx context.Context, info types.BlobInfo) (io.ReadCloser, int64, error) {
+	if err := s.ensureImageIsResolved(ctx); err != nil {
 		return nil, 0, err
 	}
-	return s.docker.GetBlob(info)
+	return s.docker.GetBlob(ctx, info)
 }
 
-func (s *openshiftImageSource) GetSignatures(ctx context.Context) ([][]byte, error) {
-	if err := s.ensureImageIsResolved(ctx); err != nil {
-		return nil, err
+// GetSignatures returns the image's signatures. It may use a remote (= slow) service.
+// If instanceDigest is not nil, it contains a digest of the specific manifest instance to retrieve signatures for
+// (when the primary manifest is a manifest list); this never happens if the primary manifest is not a manifest list
+// (e.g. if the source never returns manifest lists).
+func (s *openshiftImageSource) GetSignatures(ctx context.Context, instanceDigest *digest.Digest) ([][]byte, error) {
+	var imageName string
+	if instanceDigest == nil {
+		if err := s.ensureImageIsResolved(ctx); err != nil {
+			return nil, err
+		}
+		imageName = s.imageStreamImageName
+	} else {
+		imageName = instanceDigest.String()
 	}
 
-	image, err := s.client.getImage(ctx, s.imageStreamImageName)
+	image, err := s.client.getImage(ctx, imageName)
 	if err != nil {
 		return nil, err
 	}
@@ -242,6 +246,11 @@ func (s *openshiftImageSource) GetSignatures(ctx context.Context) ([][]byte, err
 	return sigs, nil
 }
 
+// LayerInfosForCopy() returns updated layer info that should be used when reading, in preference to values in the manifest, if specified.
+func (s *openshiftImageSource) LayerInfosForCopy(ctx context.Context) ([]types.BlobInfo, error) {
+	return nil, nil
+}
+
 // ensureImageIsResolved sets up s.docker and s.imageStreamImageName
 func (s *openshiftImageSource) ensureImageIsResolved(ctx context.Context) error {
 	if s.docker != nil {
@@ -282,7 +291,7 @@ func (s *openshiftImageSource) ensureImageIsResolved(ctx context.Context) error
 	if err != nil {
 		return err
 	}
-	d, err := dockerRef.NewImageSource(s.ctx)
+	d, err := dockerRef.NewImageSource(ctx, s.sys)
 	if err != nil {
 		return err
 	}
@@ -299,7 +308,7 @@ type openshiftImageDestination struct {
 }
 
 // newImageDestination creates a new ImageDestination for the specified reference.
-func newImageDestination(ctx *types.SystemContext, ref openshiftReference) (types.ImageDestination, error) {
+func newImageDestination(ctx context.Context, sys *types.SystemContext, ref openshiftReference) (types.ImageDestination, error) {
 	client, err := newOpenshiftClient(ref)
 	if err != nil {
 		return nil, err
@@ -313,7 +322,7 @@ func newImageDestination(ctx *types.SystemContext, ref openshiftReference) (type
 	if err != nil {
 		return nil, err
 	}
-	docker, err := dockerRef.NewImageDestination(ctx)
+	docker, err := dockerRef.NewImageDestination(ctx, sys)
 	if err != nil {
 		return nil, err
 	}
@@ -341,13 +350,12 @@ func (d *openshiftImageDestination) SupportedManifestMIMETypes() []string {
 
 // SupportsSignatures returns an error (to be displayed to the user) if the destination certainly can't store signatures.
 // Note: It is still possible for PutSignatures to fail if SupportsSignatures returns nil.
-func (d *openshiftImageDestination) SupportsSignatures() error {
+func (d *openshiftImageDestination) SupportsSignatures(ctx context.Context) error {
 	return nil
 }
 
-// ShouldCompressLayers returns true iff it is desirable to compress layer blobs written to this destination.
-func (d *openshiftImageDestination) ShouldCompressLayers() bool {
-	return true
+func (d *openshiftImageDestination) DesiredLayerCompression() types.LayerCompression {
+	return types.Compress
 }
 
 // AcceptsForeignLayerURLs returns false iff foreign layers in manifest should be actually
@@ -367,37 +375,37 @@ func (d *openshiftImageDestination) MustMatchRuntimeOS() bool {
 // WARNING: The contents of stream are being verified on the fly. Until stream.Read() returns io.EOF, the contents of the data SHOULD NOT be available
 // to any other readers for download using the supplied digest.
 // If stream.Read() at any time, ESPECIALLY at end of input, returns an error, PutBlob MUST 1) fail, and 2) delete any data stored so far.
-func (d *openshiftImageDestination) PutBlob(stream io.Reader, inputInfo types.BlobInfo) (types.BlobInfo, error) {
-	return d.docker.PutBlob(stream, inputInfo)
+func (d *openshiftImageDestination) PutBlob(ctx context.Context, stream io.Reader, inputInfo types.BlobInfo, isConfig bool) (types.BlobInfo, error) {
+	return d.docker.PutBlob(ctx, stream, inputInfo, isConfig)
 }
 
 // HasBlob returns true iff the image destination already contains a blob with the matching digest which can be reapplied using ReapplyBlob.
 // Unlike PutBlob, the digest can not be empty. If HasBlob returns true, the size of the blob must also be returned.
 // If the destination does not contain the blob, or it is unknown, HasBlob ordinarily returns (false, -1, nil);
 // it returns a non-nil error only on an unexpected failure.
-func (d *openshiftImageDestination) HasBlob(info types.BlobInfo) (bool, int64, error) {
-	return d.docker.HasBlob(info)
+func (d *openshiftImageDestination) HasBlob(ctx context.Context, info types.BlobInfo) (bool, int64, error) {
+	return d.docker.HasBlob(ctx, info)
 }
 
-func (d *openshiftImageDestination) ReapplyBlob(info types.BlobInfo) (types.BlobInfo, error) {
-	return d.docker.ReapplyBlob(info)
+func (d *openshiftImageDestination) ReapplyBlob(ctx context.Context, info types.BlobInfo) (types.BlobInfo, error) {
+	return d.docker.ReapplyBlob(ctx, info)
 }
 
 // PutManifest writes manifest to the destination.
 // FIXME? This should also receive a MIME type if known, to differentiate between schema versions.
 // If the destination is in principle available, refuses this manifest type (e.g. it does not recognize the schema),
 // but may accept a different manifest type, the returned error must be an ManifestTypeRejectedError.
-func (d *openshiftImageDestination) PutManifest(m []byte) error {
+func (d *openshiftImageDestination) PutManifest(ctx context.Context, m []byte) error {
 	manifestDigest, err := manifest.Digest(m)
 	if err != nil {
 		return err
 	}
 	d.imageStreamImageName = manifestDigest.String()
 
-	return d.docker.PutManifest(m)
+	return d.docker.PutManifest(ctx, m)
 }
 
-func (d *openshiftImageDestination) PutSignatures(signatures [][]byte) error {
+func (d *openshiftImageDestination) PutSignatures(ctx context.Context, signatures [][]byte) error {
 	if d.imageStreamImageName == "" {
 		return errors.Errorf("Internal error: Unknown manifest digest, can't add signatures")
 	}
@@ -408,7 +416,7 @@ func (d *openshiftImageDestination) PutSignatures(signatures [][]byte) error {
 		return nil // No need to even read the old state.
 	}
 
-	image, err := d.client.getImage(context.TODO(), d.imageStreamImageName)
+	image, err := d.client.getImage(ctx, d.imageStreamImageName)
 	if err != nil {
 		return err
 	}
@@ -449,7 +457,7 @@ sigExists:
 		Content: newSig,
 	}
 	body, err := json.Marshal(sig)
-	_, err = d.client.doRequest(context.TODO(), "POST", "/oapi/v1/imagesignatures", body)
+	_, err = d.client.doRequest(ctx, "POST", "/oapi/v1/imagesignatures", body)
 	if err != nil {
 		return err
 	}
@@ -462,8 +470,8 @@ sigExists:
 // WARNING: This does not have any transactional semantics:
 // - Uploaded data MAY be visible to others before Commit() is called
 // - Uploaded data MAY be removed or MAY remain around if Close() is called without Commit() (i.e. rollback is allowed but not guaranteed)
-func (d *openshiftImageDestination) Commit() error {
-	return d.docker.Commit()
+func (d *openshiftImageDestination) Commit(ctx context.Context) error {
+	return d.docker.Commit(ctx)
 }
 
 // These structs are subsets of github.com/openshift/origin/pkg/image/api/v1 and its dependencies.
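Note: a brief sketch of the practical effect of the context plumbing above. The helper is hypothetical; it only illustrates that a caller's deadline now reaches the OpenShift REST calls that previously ran under context.TODO().

package main

import (
	"context"
	"time"

	"github.com/containers/image/types"
)

// fetchSignatures shows the new calling convention: with a nil instanceDigest
// the image stream is resolved and the primary manifest's signatures are read;
// a non-nil digest is used directly as the imageStreamImage name, skipping
// resolution, as in the GetSignatures body above.
func fetchSignatures(src types.ImageSource) ([][]byte, error) {
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()
	return src.GetSignatures(ctx, nil)
}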
vendor/github.com/containers/image/openshift/openshift_transport.go (generated, vendored; 22 changed lines)
@@ -1,6 +1,7 @@
 package openshift
 
 import (
+	"context"
 	"fmt"
 	"regexp"
 	"strings"
@@ -125,31 +126,32 @@ func (ref openshiftReference) PolicyConfigurationNamespaces() []string {
 	return policyconfiguration.DockerReferenceNamespaces(ref.dockerReference)
 }
 
-// NewImage returns a types.Image for this reference, possibly specialized for this ImageTransport.
-// The caller must call .Close() on the returned Image.
+// NewImage returns a types.ImageCloser for this reference, possibly specialized for this ImageTransport.
+// The caller must call .Close() on the returned ImageCloser.
 // NOTE: If any kind of signature verification should happen, build an UnparsedImage from the value returned by NewImageSource,
 // verify that UnparsedImage, and convert it into a real Image via image.FromUnparsedImage.
-func (ref openshiftReference) NewImage(ctx *types.SystemContext) (types.Image, error) {
-	src, err := newImageSource(ctx, ref)
+// WARNING: This may not do the right thing for a manifest list, see image.FromSource for details.
+func (ref openshiftReference) NewImage(ctx context.Context, sys *types.SystemContext) (types.ImageCloser, error) {
+	src, err := newImageSource(sys, ref)
 	if err != nil {
 		return nil, err
 	}
-	return genericImage.FromSource(src)
+	return genericImage.FromSource(ctx, sys, src)
 }
 
 // NewImageSource returns a types.ImageSource for this reference.
 // The caller must call .Close() on the returned ImageSource.
-func (ref openshiftReference) NewImageSource(ctx *types.SystemContext) (types.ImageSource, error) {
-	return newImageSource(ctx, ref)
+func (ref openshiftReference) NewImageSource(ctx context.Context, sys *types.SystemContext) (types.ImageSource, error) {
+	return newImageSource(sys, ref)
 }
 
 // NewImageDestination returns a types.ImageDestination for this reference.
 // The caller must call .Close() on the returned ImageDestination.
-func (ref openshiftReference) NewImageDestination(ctx *types.SystemContext) (types.ImageDestination, error) {
-	return newImageDestination(ctx, ref)
+func (ref openshiftReference) NewImageDestination(ctx context.Context, sys *types.SystemContext) (types.ImageDestination, error) {
+	return newImageDestination(ctx, sys, ref)
 }
 
 // DeleteImage deletes the named image from the registry, if supported.
-func (ref openshiftReference) DeleteImage(ctx *types.SystemContext) error {
+func (ref openshiftReference) DeleteImage(ctx context.Context, sys *types.SystemContext) error {
 	return errors.Errorf("Deleting images not implemented for atomic: images")
 }
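Note: unlike the source side, the destination constructor here gains a context parameter. A hypothetical caller, sketched under the assumption that construction may already perform network setup against the backing Docker registry and so must be cancellable:

package main

import (
	"context"

	"github.com/containers/image/types"
)

// openDestination wraps the new constructor shape; the caller remains
// responsible for calling dest.Close() when finished.
func openDestination(ctx context.Context, ref types.ImageReference, sys *types.SystemContext) (types.ImageDestination, error) {
	dest, err := ref.NewImageDestination(ctx, sys)
	if err != nil {
		return nil, err
	}
	return dest, nil
}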
vendor/github.com/containers/image/ostree/ostree_dest.go (generated, vendored; 242 changed lines)
@@ -4,6 +4,9 @@ package ostree
 
 import (
 	"bytes"
+	"compress/gzip"
+	"context"
+	"encoding/base64"
 	"encoding/json"
 	"fmt"
 	"io"
@@ -13,17 +16,32 @@ import (
 	"path/filepath"
 	"strconv"
 	"strings"
+	"syscall"
 	"time"
+	"unsafe"
 
 	"github.com/containers/image/manifest"
 	"github.com/containers/image/types"
 	"github.com/containers/storage/pkg/archive"
 	"github.com/opencontainers/go-digest"
-	"github.com/pkg/errors"
 
+	selinux "github.com/opencontainers/selinux/go-selinux"
 	"github.com/ostreedev/ostree-go/pkg/otbuiltin"
+	"github.com/pkg/errors"
+	"github.com/vbatts/tar-split/tar/asm"
+	"github.com/vbatts/tar-split/tar/storage"
 )
 
+// #cgo pkg-config: glib-2.0 gobject-2.0 ostree-1 libselinux
+// #include <glib.h>
+// #include <glib-object.h>
+// #include <gio/gio.h>
+// #include <stdlib.h>
+// #include <ostree.h>
+// #include <gio/ginputstream.h>
+// #include <selinux/selinux.h>
+// #include <selinux/label.h>
+import "C"
+
 type blobToImport struct {
 	Size   int64
 	Digest digest.Digest
@@ -35,18 +53,24 @@ type descriptor struct {
 	Digest digest.Digest `json:"digest"`
 }
 
+type fsLayersSchema1 struct {
+	BlobSum digest.Digest `json:"blobSum"`
+}
+
 type manifestSchema struct {
-	ConfigDescriptor  descriptor   `json:"config"`
-	LayersDescriptors []descriptor `json:"layers"`
+	LayersDescriptors []descriptor      `json:"layers"`
+	FSLayers          []fsLayersSchema1 `json:"fsLayers"`
 }
 
 type ostreeImageDestination struct {
-	ref        ostreeReference
-	manifest   string
-	schema     manifestSchema
-	tmpDirPath string
-	blobs      map[string]*blobToImport
-	digest     digest.Digest
+	ref           ostreeReference
+	manifest      string
+	schema        manifestSchema
+	tmpDirPath    string
+	blobs         map[string]*blobToImport
+	digest        digest.Digest
+	signaturesLen int
+	repo          *C.struct_OstreeRepo
 }
 
 // newImageDestination returns an ImageDestination for writing to an existing ostree.
@@ -55,7 +79,7 @@ func newImageDestination(ref ostreeReference, tmpDirPath string) (types.ImageDes
 	if err := ensureDirectoryExists(tmpDirPath); err != nil {
 		return nil, err
 	}
-	return &ostreeImageDestination{ref, "", manifestSchema{}, tmpDirPath, map[string]*blobToImport{}, ""}, nil
+	return &ostreeImageDestination{ref, "", manifestSchema{}, tmpDirPath, map[string]*blobToImport{}, "", 0, nil}, nil
 }
 
 // Reference returns the reference used to set up this destination. Note that this should directly correspond to user's intent,
@@ -66,6 +90,9 @@ func (d *ostreeImageDestination) Reference() types.ImageReference {
 
 // Close removes resources associated with an initialized ImageDestination, if any.
 func (d *ostreeImageDestination) Close() error {
+	if d.repo != nil {
+		C.g_object_unref(C.gpointer(d.repo))
+	}
 	return os.RemoveAll(d.tmpDirPath)
 }
 
@@ -77,13 +104,13 @@ func (d *ostreeImageDestination) SupportedManifestMIMETypes() []string {
 
 // SupportsSignatures returns an error (to be displayed to the user) if the destination certainly can't store signatures.
 // Note: It is still possible for PutSignatures to fail if SupportsSignatures returns nil.
-func (d *ostreeImageDestination) SupportsSignatures() error {
+func (d *ostreeImageDestination) SupportsSignatures(ctx context.Context) error {
 	return nil
 }
 
-// ShouldCompressLayers returns true iff it is desirable to compress layer blobs written to this destination.
-func (d *ostreeImageDestination) ShouldCompressLayers() bool {
-	return false
+func (d *ostreeImageDestination) DesiredLayerCompression() types.LayerCompression {
+	return types.PreserveOriginal
 }
 
 // AcceptsForeignLayerURLs returns false iff foreign layers in manifest should be actually
@@ -97,7 +124,7 @@ func (d *ostreeImageDestination) MustMatchRuntimeOS() bool {
 	return true
 }
 
-func (d *ostreeImageDestination) PutBlob(stream io.Reader, inputInfo types.BlobInfo) (types.BlobInfo, error) {
+func (d *ostreeImageDestination) PutBlob(ctx context.Context, stream io.Reader, inputInfo types.BlobInfo, isConfig bool) (types.BlobInfo, error) {
 	tmpDir, err := ioutil.TempDir(d.tmpDirPath, "blob")
 	if err != nil {
 		return types.BlobInfo{}, err
@@ -113,6 +140,7 @@ func (d *ostreeImageDestination) PutBlob(stream io.Reader, inputInfo types.BlobI
 	digester := digest.Canonical.Digester()
 	tee := io.TeeReader(stream, digester.Hash())
 
+	// TODO: This can take quite some time, and should ideally be cancellable using ctx.Done().
 	size, err := io.Copy(blobFile, tee)
 	if err != nil {
 		return types.BlobInfo{}, err
@@ -130,7 +158,7 @@ func (d *ostreeImageDestination) PutBlob(stream io.Reader, inputInfo types.BlobI
 	return types.BlobInfo{Digest: computedDigest, Size: size}, nil
 }
 
-func fixFiles(dir string, usermode bool) error {
+func fixFiles(selinuxHnd *C.struct_selabel_handle, root string, dir string, usermode bool) error {
 	entries, err := ioutil.ReadDir(dir)
 	if err != nil {
 		return err
@@ -144,13 +172,43 @@ func fixFiles(dir string, usermode bool) error {
 			}
 			continue
 		}
 
+		if selinuxHnd != nil {
+			relPath, err := filepath.Rel(root, fullpath)
+			if err != nil {
+				return err
+			}
+			// Handle /exports/hostfs as a special case. Files under this directory are copied to the host,
+			// thus we benefit from maintaining the same SELinux label they would have on the host as we could
+			// use hard links instead of copying the files.
+			relPath = fmt.Sprintf("/%s", strings.TrimPrefix(relPath, "exports/hostfs/"))
+
+			relPathC := C.CString(relPath)
+			defer C.free(unsafe.Pointer(relPathC))
+			var context *C.char
+
+			res, err := C.selabel_lookup_raw(selinuxHnd, &context, relPathC, C.int(info.Mode()&os.ModePerm))
+			if int(res) < 0 && err != syscall.ENOENT {
+				return errors.Wrapf(err, "cannot selabel_lookup_raw %s", relPath)
+			}
+			if int(res) == 0 {
+				defer C.freecon(context)
+				fullpathC := C.CString(fullpath)
+				defer C.free(unsafe.Pointer(fullpathC))
+				res, err = C.lsetfilecon_raw(fullpathC, context)
+				if int(res) < 0 {
+					return errors.Wrapf(err, "cannot setfilecon_raw %s", fullpath)
+				}
+			}
+		}
+
 		if info.IsDir() {
 			if usermode {
 				if err := os.Chmod(fullpath, info.Mode()|0700); err != nil {
 					return err
 				}
 			}
-			err = fixFiles(fullpath, usermode)
+			err = fixFiles(selinuxHnd, root, fullpath, usermode)
 			if err != nil {
 				return err
 			}
@@ -174,7 +232,41 @@ func (d *ostreeImageDestination) ostreeCommit(repo *otbuiltin.Repo, branch strin
 	return err
 }
 
-func (d *ostreeImageDestination) importBlob(repo *otbuiltin.Repo, blob *blobToImport) error {
+func generateTarSplitMetadata(output *bytes.Buffer, file string) (digest.Digest, int64, error) {
+	mfz := gzip.NewWriter(output)
+	defer mfz.Close()
+	metaPacker := storage.NewJSONPacker(mfz)
+
+	stream, err := os.OpenFile(file, os.O_RDONLY, 0)
+	if err != nil {
+		return "", -1, err
+	}
+	defer stream.Close()
+
+	gzReader, err := archive.DecompressStream(stream)
+	if err != nil {
+		return "", -1, err
+	}
+	defer gzReader.Close()
+
+	its, err := asm.NewInputTarStream(gzReader, metaPacker, nil)
+	if err != nil {
+		return "", -1, err
+	}
+
+	digester := digest.Canonical.Digester()
+
+	written, err := io.Copy(digester.Hash(), its)
+	if err != nil {
+		return "", -1, err
+	}
+
+	return digester.Digest(), written, nil
+}
+
+func (d *ostreeImageDestination) importBlob(selinuxHnd *C.struct_selabel_handle, repo *otbuiltin.Repo, blob *blobToImport) error {
+	// TODO: This can take quite some time, and should ideally be cancellable using a context.Context.
+
 	ostreeBranch := fmt.Sprintf("ociimage/%s", blob.Digest.Hex())
 	destinationPath := filepath.Join(d.tmpDirPath, blob.Digest.Hex(), "root")
 	if err := ensureDirectoryExists(destinationPath); err != nil {
@@ -185,11 +277,17 @@ func (d *ostreeImageDestination) importBlob(repo *otbuiltin.Repo, blob *blobToIm
 		os.RemoveAll(destinationPath)
 	}()
 
+	var tarSplitOutput bytes.Buffer
+	uncompressedDigest, uncompressedSize, err := generateTarSplitMetadata(&tarSplitOutput, blob.BlobPath)
+	if err != nil {
+		return err
+	}
+
 	if os.Getuid() == 0 {
 		if err := archive.UntarPath(blob.BlobPath, destinationPath); err != nil {
 			return err
 		}
-		if err := fixFiles(destinationPath, false); err != nil {
+		if err := fixFiles(selinuxHnd, destinationPath, destinationPath, false); err != nil {
 			return err
 		}
 	} else {
@@ -198,32 +296,51 @@ func (d *ostreeImageDestination) importBlob(repo *otbuiltin.Repo, blob *blobToIm
 			return err
 		}
 
-		if err := fixFiles(destinationPath, true); err != nil {
+		if err := fixFiles(selinuxHnd, destinationPath, destinationPath, true); err != nil {
 			return err
 		}
 	}
+	return d.ostreeCommit(repo, ostreeBranch, destinationPath, []string{fmt.Sprintf("docker.size=%d", blob.Size),
+		fmt.Sprintf("docker.uncompressed_size=%d", uncompressedSize),
+		fmt.Sprintf("docker.uncompressed_digest=%s", uncompressedDigest.String()),
+		fmt.Sprintf("tarsplit.output=%s", base64.StdEncoding.EncodeToString(tarSplitOutput.Bytes()))})
+
+}
+
+func (d *ostreeImageDestination) importConfig(repo *otbuiltin.Repo, blob *blobToImport) error {
+	ostreeBranch := fmt.Sprintf("ociimage/%s", blob.Digest.Hex())
+	destinationPath := filepath.Dir(blob.BlobPath)
+
+	return d.ostreeCommit(repo, ostreeBranch, destinationPath, []string{fmt.Sprintf("docker.size=%d", blob.Size)})
+}
+
-func (d *ostreeImageDestination) importConfig(blob *blobToImport) error {
-	ostreeBranch := fmt.Sprintf("ociimage/%s", blob.Digest.Hex())
-
-	return exec.Command("ostree", "commit",
-		"--repo", d.ref.repo,
-		fmt.Sprintf("--add-metadata-string=docker.size=%d", blob.Size),
-		"--branch", ostreeBranch, filepath.Dir(blob.BlobPath)).Run()
-}
-
-func (d *ostreeImageDestination) HasBlob(info types.BlobInfo) (bool, int64, error) {
-	branch := fmt.Sprintf("ociimage/%s", info.Digest.Hex())
-	output, err := exec.Command("ostree", "show", "--repo", d.ref.repo, "--print-metadata-key=docker.size", branch).CombinedOutput()
-	if err != nil {
-		if bytes.Index(output, []byte("not found")) >= 0 || bytes.Index(output, []byte("No such")) >= 0 {
-			return false, -1, nil
-		}
-		return false, -1, err
-	}
-	size, err := strconv.ParseInt(strings.Trim(string(output), "'\n"), 10, 64)
+func (d *ostreeImageDestination) HasBlob(ctx context.Context, info types.BlobInfo) (bool, int64, error) {
+	if d.repo == nil {
+		repo, err := openRepo(d.ref.repo)
+		if err != nil {
+			return false, 0, err
+		}
+		d.repo = repo
+	}
+	branch := fmt.Sprintf("ociimage/%s", info.Digest.Hex())
+
+	found, data, err := readMetadata(d.repo, branch, "docker.uncompressed_digest")
+	if err != nil || !found {
+		return found, -1, err
+	}
+
+	found, data, err = readMetadata(d.repo, branch, "docker.uncompressed_size")
+	if err != nil || !found {
+		return found, -1, err
+	}
+
+	found, data, err = readMetadata(d.repo, branch, "docker.size")
+	if err != nil || !found {
+		return found, -1, err
+	}
+
+	size, err := strconv.ParseInt(data, 10, 64)
 	if err != nil {
 		return false, -1, err
 	}
@@ -231,7 +348,7 @@ func (d *ostreeImageDestination) HasBlob(info types.BlobInfo) (bool, int64, erro
 	return true, size, nil
 }
 
-func (d *ostreeImageDestination) ReapplyBlob(info types.BlobInfo) (types.BlobInfo, error) {
+func (d *ostreeImageDestination) ReapplyBlob(ctx context.Context, info types.BlobInfo) (types.BlobInfo, error) {
 	return info, nil
 }
 
@@ -239,7 +356,7 @@ func (d *ostreeImageDestination) ReapplyBlob(info types.BlobInf
 // FIXME? This should also receive a MIME type if known, to differentiate between schema versions.
 // If the destination is in principle available, refuses this manifest type (e.g. it does not recognize the schema),
 // but may accept a different manifest type, the returned error must be an ManifestTypeRejectedError.
-func (d *ostreeImageDestination) PutManifest(manifestBlob []byte) error {
+func (d *ostreeImageDestination) PutManifest(ctx context.Context, manifestBlob []byte) error {
 	d.manifest = string(manifestBlob)
 
 	if err := json.Unmarshal(manifestBlob, &d.schema); err != nil {
@@ -260,7 +377,7 @@ func (d *ostreeImageDestination) PutManifest(manifestBlob []byte) error {
 	return ioutil.WriteFile(manifestPath, manifestBlob, 0644)
 }
 
-func (d *ostreeImageDestination) PutSignatures(signatures [][]byte) error {
+func (d *ostreeImageDestination) PutSignatures(ctx context.Context, signatures [][]byte) error {
 	path := filepath.Join(d.tmpDirPath, d.ref.signaturePath(0))
 	if err := ensureParentDirectoryExists(path); err != nil {
 		return err
@@ -272,10 +389,11 @@ func (d *ostreeImageDestination) PutSignatures(signatures [][]byte) error {
 			return err
 		}
 	}
+	d.signaturesLen = len(signatures)
 	return nil
 }
 
-func (d *ostreeImageDestination) Commit() error {
+func (d *ostreeImageDestination) Commit(ctx context.Context) error {
 	repo, err := otbuiltin.OpenRepo(d.ref.repo)
 	if err != nil {
 		return err
@@ -286,24 +404,48 @@ func (d *ostreeImageDestination) Commit() error {
 		return err
 	}
 
-	for _, layer := range d.schema.LayersDescriptors {
-		hash := layer.Digest.Hex()
+	var selinuxHnd *C.struct_selabel_handle
+
+	if os.Getuid() == 0 && selinux.GetEnabled() {
+		selinuxHnd, err = C.selabel_open(C.SELABEL_CTX_FILE, nil, 0)
+		if selinuxHnd == nil {
+			return errors.Wrapf(err, "cannot open the SELinux DB")
+		}
+
+		defer C.selabel_close(selinuxHnd)
+	}
+
+	checkLayer := func(hash string) error {
 		blob := d.blobs[hash]
 		// if the blob is not present in d.blobs then it is already stored in OSTree,
 		// and we don't need to import it.
 		if blob == nil {
-			continue
+			return nil
 		}
-		err := d.importBlob(repo, blob)
+		err := d.importBlob(selinuxHnd, repo, blob)
 		if err != nil {
 			return err
 		}
 
 		delete(d.blobs, hash)
+		return nil
+	}
+	for _, layer := range d.schema.LayersDescriptors {
+		hash := layer.Digest.Hex()
+		if err = checkLayer(hash); err != nil {
+			return err
+		}
+	}
+	for _, layer := range d.schema.FSLayers {
+		hash := layer.BlobSum.Hex()
+		if err = checkLayer(hash); err != nil {
+			return err
+		}
 	}
 
-	hash := d.schema.ConfigDescriptor.Digest.Hex()
-	blob := d.blobs[hash]
-	if blob != nil {
-		err := d.importConfig(blob)
+	// Import the other blobs that are not layers
+	for _, blob := range d.blobs {
+		err := d.importConfig(repo, blob)
 		if err != nil {
 			return err
 		}
@@ -311,7 +453,9 @@ func (d *ostreeImageDestination) Commit() error {
 
 	manifestPath := filepath.Join(d.tmpDirPath, "manifest")
 
-	metadata := []string{fmt.Sprintf("docker.manifest=%s", string(d.manifest)), fmt.Sprintf("docker.digest=%s", string(d.digest))}
+	metadata := []string{fmt.Sprintf("docker.manifest=%s", string(d.manifest)),
+		fmt.Sprintf("signatures=%d", d.signaturesLen),
+		fmt.Sprintf("docker.digest=%s", string(d.digest))}
 	err = d.ostreeCommit(repo, fmt.Sprintf("ociimage/%s", d.ref.branchName), manifestPath, metadata)
 
 	_, err = repo.CommitTransaction()
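Note: the tar-split technique applied by generateTarSplitMetadata above can be shown in isolation. This standalone sketch reads a layer tar, records everything needed to later rebuild the byte-identical archive, and computes its digest in one pass; the file names are assumptions, and unlike the vendored code (which decompresses with archive.DecompressStream first) the input here is assumed to be an already-uncompressed tar.

package main

import (
	"compress/gzip"
	"fmt"
	"io"
	"os"

	"github.com/opencontainers/go-digest"
	"github.com/vbatts/tar-split/tar/asm"
	"github.com/vbatts/tar-split/tar/storage"
)

func main() {
	in, err := os.Open("layer.tar") // hypothetical uncompressed layer
	if err != nil {
		panic(err)
	}
	defer in.Close()

	meta, err := os.Create("layer.tar-split.gz") // hypothetical metadata file
	if err != nil {
		panic(err)
	}
	defer meta.Close()
	mfz := gzip.NewWriter(meta)
	defer mfz.Close()

	// Every tar entry flows through the JSON packer as it is read, so the
	// metadata is captured as a side effect of the digest computation.
	its, err := asm.NewInputTarStream(in, storage.NewJSONPacker(mfz), nil)
	if err != nil {
		panic(err)
	}
	digester := digest.Canonical.Digester()
	if _, err := io.Copy(digester.Hash(), its); err != nil {
		panic(err)
	}
	fmt.Println("uncompressed digest:", digester.Digest().String())
}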
vendor/github.com/containers/image/ostree/ostree_src.go (new file, generated, vendored; 408 lines)
@@ -0,0 +1,408 @@
+// +build !containers_image_ostree_stub
+
+package ostree
+
+import (
+	"bytes"
+	"compress/gzip"
+	"context"
+	"encoding/base64"
+	"fmt"
+	"io"
+	"io/ioutil"
+	"strconv"
+	"strings"
+	"unsafe"
+
+	"github.com/containers/image/manifest"
+	"github.com/containers/image/types"
+	"github.com/containers/storage/pkg/ioutils"
+	"github.com/opencontainers/go-digest"
+	glib "github.com/ostreedev/ostree-go/pkg/glibobject"
+	"github.com/pkg/errors"
+	"github.com/vbatts/tar-split/tar/asm"
+	"github.com/vbatts/tar-split/tar/storage"
+)
+
+// #cgo pkg-config: glib-2.0 gobject-2.0 ostree-1
+// #include <glib.h>
+// #include <glib-object.h>
+// #include <gio/gio.h>
+// #include <stdlib.h>
+// #include <ostree.h>
+// #include <gio/ginputstream.h>
+import "C"
+
+type ostreeImageSource struct {
+	ref    ostreeReference
+	tmpDir string
+	repo   *C.struct_OstreeRepo
+	// get the compressed layer by its uncompressed checksum
+	compressed map[digest.Digest]digest.Digest
+}
+
+// newImageSource returns an ImageSource for reading from an existing directory.
+func newImageSource(tmpDir string, ref ostreeReference) (types.ImageSource, error) {
+	return &ostreeImageSource{ref: ref, tmpDir: tmpDir, compressed: nil}, nil
+}
+
+// Reference returns the reference used to set up this source.
+func (s *ostreeImageSource) Reference() types.ImageReference {
+	return s.ref
+}
+
+// Close removes resources associated with an initialized ImageSource, if any.
+func (s *ostreeImageSource) Close() error {
+	if s.repo != nil {
+		C.g_object_unref(C.gpointer(s.repo))
+	}
+	return nil
+}
+
+func (s *ostreeImageSource) getLayerSize(blob string) (int64, error) {
+	b := fmt.Sprintf("ociimage/%s", blob)
+	found, data, err := readMetadata(s.repo, b, "docker.size")
+	if err != nil || !found {
+		return 0, err
+	}
+	return strconv.ParseInt(data, 10, 64)
+}
+
+func (s *ostreeImageSource) getLenSignatures() (int64, error) {
+	b := fmt.Sprintf("ociimage/%s", s.ref.branchName)
+	found, data, err := readMetadata(s.repo, b, "signatures")
+	if err != nil {
+		return -1, err
+	}
+	if !found {
+		// if 'signatures' is not present, just return 0 signatures.
+		return 0, nil
+	}
+	return strconv.ParseInt(data, 10, 64)
+}
+
+func (s *ostreeImageSource) getTarSplitData(blob string) ([]byte, error) {
+	b := fmt.Sprintf("ociimage/%s", blob)
+	found, out, err := readMetadata(s.repo, b, "tarsplit.output")
+	if err != nil || !found {
+		return nil, err
+	}
+	return base64.StdEncoding.DecodeString(out)
+}
+
+// GetManifest returns the image's manifest along with its MIME type (which may be empty when it can't be determined but the manifest is available).
+// It may use a remote (= slow) service.
+func (s *ostreeImageSource) GetManifest(ctx context.Context, instanceDigest *digest.Digest) ([]byte, string, error) {
+	if instanceDigest != nil {
+		return nil, "", errors.Errorf(`Manifest lists are not supported by "ostree:"`)
+	}
+	if s.repo == nil {
+		repo, err := openRepo(s.ref.repo)
+		if err != nil {
+			return nil, "", err
+		}
+		s.repo = repo
+	}
+
+	b := fmt.Sprintf("ociimage/%s", s.ref.branchName)
+	found, out, err := readMetadata(s.repo, b, "docker.manifest")
+	if err != nil {
+		return nil, "", err
+	}
+	if !found {
+		return nil, "", errors.New("manifest not found")
+	}
+	m := []byte(out)
+	return m, manifest.GuessMIMEType(m), nil
+}
+
+func (s *ostreeImageSource) GetTargetManifest(digest digest.Digest) ([]byte, string, error) {
+	return nil, "", errors.New("manifest lists are not supported by this transport")
+}
+
+func openRepo(path string) (*C.struct_OstreeRepo, error) {
+	var cerr *C.GError
+	cpath := C.CString(path)
+	defer C.free(unsafe.Pointer(cpath))
+	pathc := C.g_file_new_for_path(cpath)
+	defer C.g_object_unref(C.gpointer(pathc))
+	repo := C.ostree_repo_new(pathc)
+	r := glib.GoBool(glib.GBoolean(C.ostree_repo_open(repo, nil, &cerr)))
+	if !r {
+		C.g_object_unref(C.gpointer(repo))
+		return nil, glib.ConvertGError(glib.ToGError(unsafe.Pointer(cerr)))
+	}
+	return repo, nil
+}
+
+type ostreePathFileGetter struct {
+	repo       *C.struct_OstreeRepo
+	parentRoot *C.GFile
+}
+
+type ostreeReader struct {
+	stream *C.GFileInputStream
+}
+
+func (o ostreeReader) Close() error {
+	C.g_object_unref(C.gpointer(o.stream))
+	return nil
+}
+func (o ostreeReader) Read(p []byte) (int, error) {
+	var cerr *C.GError
+	instanceCast := C.g_type_check_instance_cast((*C.GTypeInstance)(unsafe.Pointer(o.stream)), C.g_input_stream_get_type())
+	stream := (*C.GInputStream)(unsafe.Pointer(instanceCast))
+
+	b := C.g_input_stream_read_bytes(stream, (C.gsize)(cap(p)), nil, &cerr)
+	if b == nil {
+		return 0, glib.ConvertGError(glib.ToGError(unsafe.Pointer(cerr)))
+	}
+	defer C.g_bytes_unref(b)
+
+	count := int(C.g_bytes_get_size(b))
+	if count == 0 {
+		return 0, io.EOF
+	}
+	data := (*[1 << 30]byte)(unsafe.Pointer(C.g_bytes_get_data(b, nil)))[:count:count]
+	copy(p, data)
+	return count, nil
+}
+
+func readMetadata(repo *C.struct_OstreeRepo, commit, key string) (bool, string, error) {
+	var cerr *C.GError
+	var ref *C.char
+	defer C.free(unsafe.Pointer(ref))
+
+	cCommit := C.CString(commit)
+	defer C.free(unsafe.Pointer(cCommit))
+
+	if !glib.GoBool(glib.GBoolean(C.ostree_repo_resolve_rev(repo, cCommit, C.gboolean(1), &ref, &cerr))) {
+		return false, "", glib.ConvertGError(glib.ToGError(unsafe.Pointer(cerr)))
+	}
+
+	if ref == nil {
+		return false, "", nil
+	}
+
+	var variant *C.GVariant
+	if !glib.GoBool(glib.GBoolean(C.ostree_repo_load_variant(repo, C.OSTREE_OBJECT_TYPE_COMMIT, ref, &variant, &cerr))) {
+		return false, "", glib.ConvertGError(glib.ToGError(unsafe.Pointer(cerr)))
+	}
+	defer C.g_variant_unref(variant)
+	if variant != nil {
+		cKey := C.CString(key)
+		defer C.free(unsafe.Pointer(cKey))
+
+		metadata := C.g_variant_get_child_value(variant, 0)
+		defer C.g_variant_unref(metadata)
+
+		data := C.g_variant_lookup_value(metadata, (*C.gchar)(cKey), nil)
+		if data != nil {
+			defer C.g_variant_unref(data)
+			ptr := (*C.char)(C.g_variant_get_string(data, nil))
+			val := C.GoString(ptr)
+			return true, val, nil
+		}
+	}
+	return false, "", nil
+}
+
+func newOSTreePathFileGetter(repo *C.struct_OstreeRepo, commit string) (*ostreePathFileGetter, error) {
+	var cerr *C.GError
+	var parentRoot *C.GFile
+	cCommit := C.CString(commit)
+	defer C.free(unsafe.Pointer(cCommit))
+	if !glib.GoBool(glib.GBoolean(C.ostree_repo_read_commit(repo, cCommit, &parentRoot, nil, nil, &cerr))) {
+		return &ostreePathFileGetter{}, glib.ConvertGError(glib.ToGError(unsafe.Pointer(cerr)))
+	}
+
+	C.g_object_ref(C.gpointer(repo))
+
+	return &ostreePathFileGetter{repo: repo, parentRoot: parentRoot}, nil
+}
+
+func (o ostreePathFileGetter) Get(filename string) (io.ReadCloser, error) {
+	var file *C.GFile
+	if strings.HasPrefix(filename, "./") {
+		filename = filename[2:]
+	}
+	cfilename := C.CString(filename)
+	defer C.free(unsafe.Pointer(cfilename))
+
+	file = (*C.GFile)(C.g_file_resolve_relative_path(o.parentRoot, cfilename))
+
+	var cerr *C.GError
+	stream := C.g_file_read(file, nil, &cerr)
+	if stream == nil {
+		return nil, glib.ConvertGError(glib.ToGError(unsafe.Pointer(cerr)))
+	}
+
+	return &ostreeReader{stream: stream}, nil
+}
+
+func (o ostreePathFileGetter) Close() {
+	C.g_object_unref(C.gpointer(o.repo))
+	C.g_object_unref(C.gpointer(o.parentRoot))
+}
+
+func (s *ostreeImageSource) readSingleFile(commit, path string) (io.ReadCloser, error) {
+	getter, err := newOSTreePathFileGetter(s.repo, commit)
+	if err != nil {
+		return nil, err
+	}
+	defer getter.Close()
+
+	return getter.Get(path)
+}
+
+// GetBlob returns a stream for the specified blob, and the blob's size.
+func (s *ostreeImageSource) GetBlob(ctx context.Context, info types.BlobInfo) (io.ReadCloser, int64, error) {
+
+	blob := info.Digest.Hex()
+
+	// Ensure s.compressed is initialized. It is build by LayerInfosForCopy.
+	if s.compressed == nil {
+		_, err := s.LayerInfosForCopy(ctx)
+		if err != nil {
+			return nil, -1, err
+		}
+
+	}
+	compressedBlob, found := s.compressed[info.Digest]
+	if found {
+		blob = compressedBlob.Hex()
+	}
+	branch := fmt.Sprintf("ociimage/%s", blob)
+
+	if s.repo == nil {
+		repo, err := openRepo(s.ref.repo)
+		if err != nil {
+			return nil, 0, err
+		}
+		s.repo = repo
+	}
+
+	layerSize, err := s.getLayerSize(blob)
+	if err != nil {
+		return nil, 0, err
+	}
+
+	tarsplit, err := s.getTarSplitData(blob)
+	if err != nil {
+		return nil, 0, err
+	}
+
+	// if tarsplit is nil we are looking at the manifest. Return directly the file in /content
+	if tarsplit == nil {
+		file, err := s.readSingleFile(branch, "/content")
+		if err != nil {
+			return nil, 0, err
+		}
+		return file, layerSize, nil
+	}
+
+	mf := bytes.NewReader(tarsplit)
+	mfz, err := gzip.NewReader(mf)
+	if err != nil {
+		return nil, 0, err
+	}
+	defer mfz.Close()
+	metaUnpacker := storage.NewJSONUnpacker(mfz)
+
+	getter, err := newOSTreePathFileGetter(s.repo, branch)
+	if err != nil {
+		return nil, 0, err
+	}
+
+	ots := asm.NewOutputTarStream(getter, metaUnpacker)
+
+	pipeReader, pipeWriter := io.Pipe()
+	go func() {
+		io.Copy(pipeWriter, ots)
+		pipeWriter.Close()
+	}()
+
+	rc := ioutils.NewReadCloserWrapper(pipeReader, func() error {
+		getter.Close()
+		return ots.Close()
+	})
+	return rc, layerSize, nil
+}
+
+func (s *ostreeImageSource) GetSignatures(ctx context.Context, instanceDigest *digest.Digest) ([][]byte, error) {
+	if instanceDigest != nil {
+		return nil, errors.New("manifest lists are not supported by this transport")
+	}
+	lenSignatures, err := s.getLenSignatures()
+	if err != nil {
+		return nil, err
+	}
+	branch := fmt.Sprintf("ociimage/%s", s.ref.branchName)
+
+	if s.repo == nil {
+		repo, err := openRepo(s.ref.repo)
+		if err != nil {
+			return nil, err
+		}
+		s.repo = repo
+	}
+
+	signatures := [][]byte{}
+	for i := int64(1); i <= lenSignatures; i++ {
+		sigReader, err := s.readSingleFile(branch, fmt.Sprintf("/signature-%d", i))
+		if err != nil {
+			return nil, err
+		}
+		defer sigReader.Close()
+
+		sig, err := ioutil.ReadAll(sigReader)
+		if err != nil {
+			return nil, err
+		}
+		signatures = append(signatures, sig)
+	}
+	return signatures, nil
+}
+
+// LayerInfosForCopy() returns the list of layer blobs that make up the root filesystem of
+// the image, after they've been decompressed.
+func (s *ostreeImageSource) LayerInfosForCopy(ctx context.Context) ([]types.BlobInfo, error) {
+	updatedBlobInfos := []types.BlobInfo{}
+	manifestBlob, manifestType, err := s.GetManifest(ctx, nil)
+	if err != nil {
+		return nil, err
+	}
+
+	man, err := manifest.FromBlob(manifestBlob, manifestType)
+
+	s.compressed = make(map[digest.Digest]digest.Digest)
+
+	layerBlobs := man.LayerInfos()
+
+	for _, layerBlob := range layerBlobs {
+		branch := fmt.Sprintf("ociimage/%s", layerBlob.Digest.Hex())
+		found, uncompressedDigestStr, err := readMetadata(s.repo, branch, "docker.uncompressed_digest")
+		if err != nil || !found {
+			return nil, err
+		}
+
+		found, uncompressedSizeStr, err := readMetadata(s.repo, branch, "docker.uncompressed_size")
+		if err != nil || !found {
+			return nil, err
+		}
+
+		uncompressedSize, err := strconv.ParseInt(uncompressedSizeStr, 10, 64)
+		if err != nil {
+			return nil, err
+		}
+		uncompressedDigest := digest.Digest(uncompressedDigestStr)
+		blobInfo := types.BlobInfo{
+			Digest:    uncompressedDigest,
+			Size:      uncompressedSize,
+			MediaType: layerBlob.MediaType,
+		}
+		s.compressed[uncompressedDigest] = layerBlob.Digest
+		updatedBlobInfos = append(updatedBlobInfos, blobInfo)
+	}
+	return updatedBlobInfos, nil
+}
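Note: a sketch of how a copy pipeline is expected to consume LayerInfosForCopy from the new source above; the helper shown is hypothetical. The OSTree source advertises uncompressed digests and sizes, and its GetBlob maps them back to the stored compressed blobs through the s.compressed table it builds.

package main

import (
	"context"

	"github.com/containers/image/manifest"
	"github.com/containers/image/types"
)

// layersToCopy prefers the substituted layer list when the source provides
// one, and falls back to the manifest's own layers otherwise.
func layersToCopy(ctx context.Context, src types.ImageSource, man manifest.Manifest) ([]types.BlobInfo, error) {
	updated, err := src.LayerInfosForCopy(ctx)
	if err != nil {
		return nil, err
	}
	if updated == nil {
		// nil means "no substitution": use the manifest's layer list as-is.
		return man.LayerInfos(), nil
	}
	return updated, nil
}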
vendor/github.com/containers/image/ostree/ostree_transport.go (generated, vendored; 52 changed lines)
@@ -4,18 +4,19 @@ package ostree
 
 import (
 	"bytes"
+	"context"
 	"fmt"
 	"os"
 	"path/filepath"
 	"regexp"
 	"strings"
 
-	"github.com/pkg/errors"
-
 	"github.com/containers/image/directory/explicitfilepath"
 	"github.com/containers/image/docker/reference"
 	"github.com/containers/image/image"
 	"github.com/containers/image/transports"
 	"github.com/containers/image/types"
+	"github.com/pkg/errors"
 )
 
 const defaultOSTreeRepo = "/ostree/repo"
@@ -66,6 +67,11 @@ type ostreeReference struct {
 	repo string
 }
 
+type ostreeImageCloser struct {
+	types.ImageCloser
+	size int64
+}
+
 func (t ostreeTransport) ParseReference(ref string) (types.ImageReference, error) {
 	var repo = ""
 	var image = ""
@@ -110,7 +116,7 @@ func NewReference(image string, repo string) (types.ImageReference, error) {
 	// This is necessary to prevent directory paths returned by PolicyConfigurationNamespaces
 	// from being ambiguous with values of PolicyConfigurationIdentity.
 	if strings.Contains(resolved, ":") {
-		return nil, errors.Errorf("Invalid OSTreeCI reference %s@%s: path %s contains a colon", image, repo, resolved)
+		return nil, errors.Errorf("Invalid OSTree reference %s@%s: path %s contains a colon", image, repo, resolved)
 	}
 
 	return ostreeReference{
@@ -168,34 +174,54 @@ func (ref ostreeReference) PolicyConfigurationNamespaces() []string {
 	return res
 }
 
-// NewImage returns a types.Image for this reference, possibly specialized for this ImageTransport.
-// The caller must call .Close() on the returned Image.
+func (s *ostreeImageCloser) Size() (int64, error) {
+	return s.size, nil
+}
+
+// NewImage returns a types.ImageCloser for this reference, possibly specialized for this ImageTransport.
+// The caller must call .Close() on the returned ImageCloser.
 // NOTE: If any kind of signature verification should happen, build an UnparsedImage from the value returned by NewImageSource,
 // verify that UnparsedImage, and convert it into a real Image via image.FromUnparsedImage.
-func (ref ostreeReference) NewImage(ctx *types.SystemContext) (types.Image, error) {
-	return nil, errors.New("Reading ostree: images is currently not supported")
+func (ref ostreeReference) NewImage(ctx context.Context, sys *types.SystemContext) (types.ImageCloser, error) {
+	var tmpDir string
+	if sys == nil || sys.OSTreeTmpDirPath == "" {
+		tmpDir = os.TempDir()
+	} else {
+		tmpDir = sys.OSTreeTmpDirPath
+	}
+	src, err := newImageSource(tmpDir, ref)
+	if err != nil {
+		return nil, err
+	}
+	return image.FromSource(ctx, sys, src)
 }
 
 // NewImageSource returns a types.ImageSource for this reference.
 // The caller must call .Close() on the returned ImageSource.
-func (ref ostreeReference) NewImageSource(ctx *types.SystemContext) (types.ImageSource, error) {
-	return nil, errors.New("Reading ostree: images is currently not supported")
+func (ref ostreeReference) NewImageSource(ctx context.Context, sys *types.SystemContext) (types.ImageSource, error) {
+	var tmpDir string
+	if sys == nil || sys.OSTreeTmpDirPath == "" {
+		tmpDir = os.TempDir()
+	} else {
+		tmpDir = sys.OSTreeTmpDirPath
+	}
+	return newImageSource(tmpDir, ref)
 }
 
 // NewImageDestination returns a types.ImageDestination for this reference.
 // The caller must call .Close() on the returned ImageDestination.
-func (ref ostreeReference) NewImageDestination(ctx *types.SystemContext) (types.ImageDestination, error) {
+func (ref ostreeReference) NewImageDestination(ctx context.Context, sys *types.SystemContext) (types.ImageDestination, error) {
|
||||
var tmpDir string
|
||||
if ctx == nil || ctx.OSTreeTmpDirPath == "" {
|
||||
if sys == nil || sys.OSTreeTmpDirPath == "" {
|
||||
tmpDir = os.TempDir()
|
||||
} else {
|
||||
tmpDir = ctx.OSTreeTmpDirPath
|
||||
tmpDir = sys.OSTreeTmpDirPath
|
||||
}
|
||||
return newImageDestination(ref, tmpDir)
|
||||
}
|
||||
|
||||
// DeleteImage deletes the named image from the registry, if supported.
|
||||
func (ref ostreeReference) DeleteImage(ctx *types.SystemContext) error {
|
||||
func (ref ostreeReference) DeleteImage(ctx context.Context, sys *types.SystemContext) error {
|
||||
return errors.Errorf("Deleting images not implemented for ostree: images")
|
||||
}
|
||||
|
||||
|
||||
vendor/github.com/containers/image/pkg/docker/config/config.go (generated, vendored): 84 lines changed

@@ -15,6 +15,7 @@ import (
 	"github.com/docker/docker-credential-helpers/credentials"
 	"github.com/docker/docker/pkg/homedir"
 	"github.com/pkg/errors"
+	"github.com/sirupsen/logrus"
 )

 type dockerAuthConfig struct {
@@ -27,7 +28,7 @@ type dockerConfigFile struct {
 }

 const (
-	defaultPath = "/run/user"
+	defaultPath = "/run"
 	authCfg = "containers"
 	authCfgFileName = "auth.json"
 	dockerCfg = ".docker"
@@ -42,8 +43,8 @@
 )

 // SetAuthentication stores the username and password in the auth.json file
-func SetAuthentication(ctx *types.SystemContext, registry, username, password string) error {
-	return modifyJSON(ctx, func(auths *dockerConfigFile) (bool, error) {
+func SetAuthentication(sys *types.SystemContext, registry, username, password string) error {
+	return modifyJSON(sys, func(auths *dockerConfigFile) (bool, error) {
 		if ch, exists := auths.CredHelpers[registry]; exists {
 			return false, setAuthToCredHelper(ch, registry, username, password)
 		}
@@ -58,13 +59,23 @@ func SetAuthentication(ctx *types.SystemContext, registry, username, password st
 // GetAuthentication returns the registry credentials stored in
 // either auth.json file or .docker/config.json
 // If an entry is not found empty strings are returned for the username and password
-func GetAuthentication(ctx *types.SystemContext, registry string) (string, string, error) {
-	if ctx != nil && ctx.DockerAuthConfig != nil {
-		return ctx.DockerAuthConfig.Username, ctx.DockerAuthConfig.Password, nil
+func GetAuthentication(sys *types.SystemContext, registry string) (string, string, error) {
+	if sys != nil && sys.DockerAuthConfig != nil {
+		return sys.DockerAuthConfig.Username, sys.DockerAuthConfig.Password, nil
 	}

 	dockerLegacyPath := filepath.Join(homedir.Get(), dockerLegacyCfg)
-	paths := [3]string{getPathToAuth(ctx), filepath.Join(homedir.Get(), dockerCfg, dockerCfgFileName), dockerLegacyPath}
+	var paths []string
+	pathToAuth, err := getPathToAuth(sys)
+	if err == nil {
+		paths = append(paths, pathToAuth)
+	} else {
+		// Error means that the path set for XDG_RUNTIME_DIR does not exist
+		// but we don't want to completely fail in the case that the user is pulling a public image
+		// Logging the error as a warning instead and moving on to pulling the image
+		logrus.Warnf("%v: Trying to pull image in the event that it is a public image.", err)
+	}
+	paths = append(paths, filepath.Join(homedir.Get(), dockerCfg, dockerCfgFileName), dockerLegacyPath)

 	for _, path := range paths {
 		legacyFormat := path == dockerLegacyPath
@@ -82,18 +93,21 @@ func GetAuthentication(ctx *types.SystemContext, registry string) (string, strin
 // GetUserLoggedIn returns the username logged in to registry from either
 // auth.json or XDG_RUNTIME_DIR
 // Used to tell the user if someone is logged in to the registry when logging in
-func GetUserLoggedIn(ctx *types.SystemContext, registry string) string {
-	path := getPathToAuth(ctx)
+func GetUserLoggedIn(sys *types.SystemContext, registry string) (string, error) {
+	path, err := getPathToAuth(sys)
+	if err != nil {
+		return "", err
+	}
 	username, _, _ := findAuthentication(registry, path, false)
 	if username != "" {
-		return username
+		return username, nil
 	}
-	return ""
+	return "", nil
 }

 // RemoveAuthentication deletes the credentials stored in auth.json
-func RemoveAuthentication(ctx *types.SystemContext, registry string) error {
-	return modifyJSON(ctx, func(auths *dockerConfigFile) (bool, error) {
+func RemoveAuthentication(sys *types.SystemContext, registry string) error {
+	return modifyJSON(sys, func(auths *dockerConfigFile) (bool, error) {
 		// First try cred helpers.
 		if ch, exists := auths.CredHelpers[registry]; exists {
 			return false, deleteAuthFromCredHelper(ch, registry)
@@ -111,8 +125,8 @@ func RemoveAuthentication(ctx *types.SystemContext, registry string) error {
 }

 // RemoveAllAuthentication deletes all the credentials stored in auth.json
-func RemoveAllAuthentication(ctx *types.SystemContext) error {
-	return modifyJSON(ctx, func(auths *dockerConfigFile) (bool, error) {
+func RemoveAllAuthentication(sys *types.SystemContext) error {
+	return modifyJSON(sys, func(auths *dockerConfigFile) (bool, error) {
 		auths.CredHelpers = make(map[string]string)
 		auths.AuthConfigs = make(map[string]dockerAuthConfig)
 		return true, nil
@@ -123,20 +137,32 @@ func RemoveAllAuthentication(ctx *types.SystemContext) error {
 // The path can be overriden by the user if the overwrite-path flag is set
 // If the flag is not set and XDG_RUNTIME_DIR is ser, the auth.json file is saved in XDG_RUNTIME_DIR/containers
 // Otherwise, the auth.json file is stored in /run/user/UID/containers
-func getPathToAuth(ctx *types.SystemContext) string {
-	if ctx != nil {
-		if ctx.AuthFilePath != "" {
-			return ctx.AuthFilePath
+func getPathToAuth(sys *types.SystemContext) (string, error) {
+	if sys != nil {
+		if sys.AuthFilePath != "" {
+			return sys.AuthFilePath, nil
 		}
-		if ctx.RootForImplicitAbsolutePaths != "" {
-			return filepath.Join(ctx.RootForImplicitAbsolutePaths, defaultPath, strconv.Itoa(os.Getuid()), authCfg, authCfgFileName)
+		if sys.RootForImplicitAbsolutePaths != "" {
+			return filepath.Join(sys.RootForImplicitAbsolutePaths, defaultPath, strconv.Itoa(os.Getuid()), authCfg, authCfgFileName), nil
 		}
 	}

 	runtimeDir := os.Getenv("XDG_RUNTIME_DIR")
-	if runtimeDir == "" {
-		runtimeDir = filepath.Join(defaultPath, strconv.Itoa(os.Getuid()))
+	if runtimeDir != "" {
+		// This function does not in general need to separately check that the returned path exists; that’s racy, and callers will fail accessing the file anyway.
+		// We are checking for os.IsNotExist here only to give the user better guidance what to do in this special case.
+		_, err := os.Stat(runtimeDir)
+		if os.IsNotExist(err) {
+			// This means the user set the XDG_RUNTIME_DIR variable and either forgot to create the directory
+			// or made a typo while setting the environment variable,
+			// so return an error referring to $XDG_RUNTIME_DIR instead of …/authCfgFileName inside.
+			return "", errors.Wrapf(err, "%q directory set by $XDG_RUNTIME_DIR does not exist. Either create the directory or unset $XDG_RUNTIME_DIR.", runtimeDir)
+		} // else ignore err and let the caller fail accessing …/authCfgFileName.
+		runtimeDir = filepath.Join(runtimeDir, authCfg)
+	} else {
+		runtimeDir = filepath.Join(defaultPath, authCfg, strconv.Itoa(os.Getuid()))
 	}
-	return filepath.Join(runtimeDir, authCfg, authCfgFileName)
+	return filepath.Join(runtimeDir, authCfgFileName), nil
 }

 // readJSONFile unmarshals the authentications stored in the auth.json file and returns it
@@ -166,11 +192,15 @@ func readJSONFile(path string, legacyFormat bool) (dockerConfigFile, error) {
 }

 // modifyJSON writes to auth.json if the dockerConfigFile has been updated
-func modifyJSON(ctx *types.SystemContext, editor func(auths *dockerConfigFile) (bool, error)) error {
-	path := getPathToAuth(ctx)
+func modifyJSON(sys *types.SystemContext, editor func(auths *dockerConfigFile) (bool, error)) error {
+	path, err := getPathToAuth(sys)
+	if err != nil {
+		return err
+	}
+
 	dir := filepath.Dir(path)
 	if _, err := os.Stat(dir); os.IsNotExist(err) {
-		if err = os.Mkdir(dir, 0700); err != nil {
+		if err = os.MkdirAll(dir, 0700); err != nil {
 			return errors.Wrapf(err, "error creating directory %q", dir)
 		}
 	}
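The upshot of the getPathToAuth rework is a fixed lookup order for the credentials file, and a hard error only when $XDG_RUNTIME_DIR points at a missing directory. A standalone sketch of that order, assuming the constants from the diff above (this is not the vendored function itself):

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strconv"
)

// authFilePath mirrors the resolution order of getPathToAuth above:
// an explicit override wins; then $XDG_RUNTIME_DIR/containers/auth.json,
// failing loudly if $XDG_RUNTIME_DIR names a missing directory; finally
// /run/containers/<uid>/auth.json (defaultPath changed from /run/user to
// /run, with the UID moved after the "containers" segment).
func authFilePath(override string) (string, error) {
	if override != "" {
		return override, nil
	}
	if runtimeDir := os.Getenv("XDG_RUNTIME_DIR"); runtimeDir != "" {
		if _, err := os.Stat(runtimeDir); os.IsNotExist(err) {
			return "", fmt.Errorf("%q set by $XDG_RUNTIME_DIR does not exist", runtimeDir)
		}
		return filepath.Join(runtimeDir, "containers", "auth.json"), nil
	}
	return filepath.Join("/run", "containers", strconv.Itoa(os.Getuid()), "auth.json"), nil
}

func main() {
	fmt.Println(authFilePath(""))
}
```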
vendor/github.com/containers/image/signature/policy_config.go (generated, vendored): 18 lines changed

@@ -44,21 +44,21 @@ func (err InvalidPolicyFormatError) Error() string {
 // DefaultPolicy returns the default policy of the system.
 // Most applications should be using this method to get the policy configured
 // by the system administrator.
-// ctx should usually be nil, can be set to override the default.
+// sys should usually be nil, can be set to override the default.
 // NOTE: When this function returns an error, report it to the user and abort.
 // DO NOT hard-code fallback policies in your application.
-func DefaultPolicy(ctx *types.SystemContext) (*Policy, error) {
-	return NewPolicyFromFile(defaultPolicyPath(ctx))
+func DefaultPolicy(sys *types.SystemContext) (*Policy, error) {
+	return NewPolicyFromFile(defaultPolicyPath(sys))
 }

 // defaultPolicyPath returns a path to the default policy of the system.
-func defaultPolicyPath(ctx *types.SystemContext) string {
-	if ctx != nil {
-		if ctx.SignaturePolicyPath != "" {
-			return ctx.SignaturePolicyPath
+func defaultPolicyPath(sys *types.SystemContext) string {
+	if sys != nil {
+		if sys.SignaturePolicyPath != "" {
+			return sys.SignaturePolicyPath
 		}
-		if ctx.RootForImplicitAbsolutePaths != "" {
-			return filepath.Join(ctx.RootForImplicitAbsolutePaths, systemDefaultPolicyPath)
+		if sys.RootForImplicitAbsolutePaths != "" {
+			return filepath.Join(sys.RootForImplicitAbsolutePaths, systemDefaultPolicyPath)
 		}
 	}
 	return systemDefaultPolicyPath
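Seen from the caller's side the ctx-to-sys rename is mechanical, but the override precedence (SignaturePolicyPath first, then RootForImplicitAbsolutePaths, then the system default) deserves a usage sketch; the policy path below is a placeholder:

```go
package main

import (
	"fmt"

	"github.com/containers/image/signature"
	"github.com/containers/image/types"
)

func main() {
	// With a nil SystemContext the system-wide policy file is used;
	// SignaturePolicyPath overrides it, exactly as defaultPolicyPath resolves it.
	sys := &types.SystemContext{SignaturePolicyPath: "/tmp/policy.json"} // placeholder path
	policy, err := signature.DefaultPolicy(sys)
	if err != nil {
		panic(err) // per the NOTE above: report and abort, do not fall back
	}
	fmt.Printf("loaded policy with %d transport scopes\n", len(policy.Transports))
}
```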
vendor/github.com/containers/image/signature/policy_eval.go (generated, vendored): 14 lines changed

@@ -55,14 +55,14 @@ type PolicyRequirement interface {
 	// a container based on this image; use IsRunningImageAllowed instead.
 	// - Just because a signature is accepted does not automatically mean the contents of the
 	// signature are authorized to run code as root, or to affect system or cluster configuration.
-	isSignatureAuthorAccepted(image types.UnparsedImage, sig []byte) (signatureAcceptanceResult, *Signature, error)
+	isSignatureAuthorAccepted(ctx context.Context, image types.UnparsedImage, sig []byte) (signatureAcceptanceResult, *Signature, error)

 	// isRunningImageAllowed returns true if the requirement allows running an image.
 	// If it returns false, err must be non-nil, and should be an PolicyRequirementError if evaluation
 	// succeeded but the result was rejection.
 	// WARNING: This validates signatures and the manifest, but does not download or validate the
 	// layers. Users must validate that the layers match their expected digests.
-	isRunningImageAllowed(image types.UnparsedImage) (bool, error)
+	isRunningImageAllowed(ctx context.Context, image types.UnparsedImage) (bool, error)
 }

 // PolicyReferenceMatch specifies a set of image identities accepted in PolicyRequirement.
@@ -175,7 +175,7 @@ func (pc *PolicyContext) requirementsForImageRef(ref types.ImageReference) Polic
 // a container based on this image; use IsRunningImageAllowed instead.
 // - Just because a signature is accepted does not automatically mean the contents of the
 // signature are authorized to run code as root, or to affect system or cluster configuration.
-func (pc *PolicyContext) GetSignaturesWithAcceptedAuthor(image types.UnparsedImage) (sigs []*Signature, finalErr error) {
+func (pc *PolicyContext) GetSignaturesWithAcceptedAuthor(ctx context.Context, image types.UnparsedImage) (sigs []*Signature, finalErr error) {
 	if err := pc.changeState(pcReady, pcInUse); err != nil {
 		return nil, err
 	}
@@ -191,7 +191,7 @@ func (pc *PolicyContext) GetSignaturesWithAcceptedAuthor(image types.UnparsedIma

 	// FIXME: rename Signatures to UnverifiedSignatures
-	// FIXME: pass context.Context
-	unverifiedSignatures, err := image.Signatures(context.TODO())
+	unverifiedSignatures, err := image.Signatures(ctx)
 	if err != nil {
 		return nil, err
 	}
@@ -206,7 +206,7 @@ func (pc *PolicyContext) GetSignaturesWithAcceptedAuthor(image types.UnparsedIma
 	for reqNumber, req := range reqs {
 		// FIXME: Log the requirement itself? For now, we use just the number.
 		// FIXME: supply state
-		switch res, as, err := req.isSignatureAuthorAccepted(image, sig); res {
+		switch res, as, err := req.isSignatureAuthorAccepted(ctx, image, sig); res {
 		case sarAccepted:
 			if as == nil { // Coverage: this should never happen
 				logrus.Debugf(" Requirement %d: internal inconsistency: sarAccepted but no parsed contents", reqNumber)
@@ -256,7 +256,7 @@ func (pc *PolicyContext) GetSignaturesWithAcceptedAuthor(image types.UnparsedIma
 // succeeded but the result was rejection.
 // WARNING: This validates signatures and the manifest, but does not download or validate the
 // layers. Users must validate that the layers match their expected digests.
-func (pc *PolicyContext) IsRunningImageAllowed(image types.UnparsedImage) (res bool, finalErr error) {
+func (pc *PolicyContext) IsRunningImageAllowed(ctx context.Context, image types.UnparsedImage) (res bool, finalErr error) {
 	if err := pc.changeState(pcReady, pcInUse); err != nil {
 		return false, err
 	}
@@ -276,7 +276,7 @@

 	for reqNumber, req := range reqs {
 		// FIXME: supply state
-		allowed, err := req.isRunningImageAllowed(image)
+		allowed, err := req.isRunningImageAllowed(ctx, image)
 		if !allowed {
 			logrus.Debugf("Requirement %d: denied, done", reqNumber)
 			return false, err
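A caller-side sketch of the new signatures: the context now flows from the caller down to image.Signatures instead of being stubbed with context.TODO() inside the library. The names below follow the exported API in this diff:

```go
package imagepolicy

import (
	"context"

	"github.com/containers/image/signature"
	"github.com/containers/image/types"
)

// checkImage shows the caller-side effect of the API change: the caller's
// context is threaded through the whole policy evaluation.
func checkImage(ctx context.Context, policy *signature.Policy, img types.UnparsedImage) (bool, error) {
	pc, err := signature.NewPolicyContext(policy)
	if err != nil {
		return false, err
	}
	defer pc.Destroy()
	return pc.IsRunningImageAllowed(ctx, img)
}
```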
vendor/github.com/containers/image/signature/policy_eval_baselayer.go (generated, vendored): 6 lines changed

@@ -3,15 +3,17 @@
 package signature

 import (
+	"context"
+
 	"github.com/containers/image/types"
 	"github.com/sirupsen/logrus"
 )

-func (pr *prSignedBaseLayer) isSignatureAuthorAccepted(image types.UnparsedImage, sig []byte) (signatureAcceptanceResult, *Signature, error) {
+func (pr *prSignedBaseLayer) isSignatureAuthorAccepted(ctx context.Context, image types.UnparsedImage, sig []byte) (signatureAcceptanceResult, *Signature, error) {
 	return sarUnknown, nil, nil
 }

-func (pr *prSignedBaseLayer) isRunningImageAllowed(image types.UnparsedImage) (bool, error) {
+func (pr *prSignedBaseLayer) isRunningImageAllowed(ctx context.Context, image types.UnparsedImage) (bool, error) {
 	// FIXME? Reject this at policy parsing time already?
 	logrus.Errorf("signedBaseLayer not implemented yet!")
 	return false, PolicyRequirementError("signedBaseLayer not implemented yet!")
vendor/github.com/containers/image/signature/policy_eval_signedby.go (generated, vendored): 10 lines changed

@@ -15,7 +15,7 @@ import (
 	"github.com/opencontainers/go-digest"
 )

-func (pr *prSignedBy) isSignatureAuthorAccepted(image types.UnparsedImage, sig []byte) (signatureAcceptanceResult, *Signature, error) {
+func (pr *prSignedBy) isSignatureAuthorAccepted(ctx context.Context, image types.UnparsedImage, sig []byte) (signatureAcceptanceResult, *Signature, error) {
 	switch pr.KeyType {
 	case SBKeyTypeGPGKeys:
 	case SBKeyTypeSignedByGPGKeys, SBKeyTypeX509Certificates, SBKeyTypeSignedByX509CAs:
@@ -69,7 +69,7 @@ func (pr *prSignedBy) isSignatureAuthorAccepted(image types.UnparsedImage, sig [
 			return nil
 		},
 		validateSignedDockerManifestDigest: func(digest digest.Digest) error {
-			m, _, err := image.Manifest()
+			m, _, err := image.Manifest(ctx)
 			if err != nil {
 				return err
 			}
@@ -90,16 +90,16 @@ func (pr *prSignedBy) isSignatureAuthorAccepted(image types.UnparsedImage, sig [
 	return sarAccepted, signature, nil
 }

-func (pr *prSignedBy) isRunningImageAllowed(image types.UnparsedImage) (bool, error) {
-	// FIXME: pass context.Context
-	sigs, err := image.Signatures(context.TODO())
+func (pr *prSignedBy) isRunningImageAllowed(ctx context.Context, image types.UnparsedImage) (bool, error) {
+	sigs, err := image.Signatures(ctx)
 	if err != nil {
 		return false, err
 	}
 	var rejections []error
 	for _, s := range sigs {
 		var reason error
-		switch res, _, err := pr.isSignatureAuthorAccepted(image, s); res {
+		switch res, _, err := pr.isSignatureAuthorAccepted(ctx, image, s); res {
 		case sarAccepted:
 			// One accepted signature is enough.
 			return true, nil
vendor/github.com/containers/image/signature/policy_eval_simple.go (generated, vendored): 9 lines changed

@@ -3,26 +3,27 @@
 package signature

 import (
+	"context"
 	"fmt"

 	"github.com/containers/image/transports"
 	"github.com/containers/image/types"
 )

-func (pr *prInsecureAcceptAnything) isSignatureAuthorAccepted(image types.UnparsedImage, sig []byte) (signatureAcceptanceResult, *Signature, error) {
+func (pr *prInsecureAcceptAnything) isSignatureAuthorAccepted(ctx context.Context, image types.UnparsedImage, sig []byte) (signatureAcceptanceResult, *Signature, error) {
 	// prInsecureAcceptAnything semantics: Every image is allowed to run,
 	// but this does not consider the signature as verified.
 	return sarUnknown, nil, nil
 }

-func (pr *prInsecureAcceptAnything) isRunningImageAllowed(image types.UnparsedImage) (bool, error) {
+func (pr *prInsecureAcceptAnything) isRunningImageAllowed(ctx context.Context, image types.UnparsedImage) (bool, error) {
 	return true, nil
 }

-func (pr *prReject) isSignatureAuthorAccepted(image types.UnparsedImage, sig []byte) (signatureAcceptanceResult, *Signature, error) {
+func (pr *prReject) isSignatureAuthorAccepted(ctx context.Context, image types.UnparsedImage, sig []byte) (signatureAcceptanceResult, *Signature, error) {
 	return sarRejected, nil, PolicyRequirementError(fmt.Sprintf("Any signatures for image %s are rejected by policy.", transports.ImageName(image.Reference())))
 }

-func (pr *prReject) isRunningImageAllowed(image types.UnparsedImage) (bool, error) {
+func (pr *prReject) isRunningImageAllowed(ctx context.Context, image types.UnparsedImage) (bool, error) {
 	return false, PolicyRequirementError(fmt.Sprintf("Running image %s is rejected by policy.", transports.ImageName(image.Reference())))
 }
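For orientation, the two trivial requirement types above are what "insecureAcceptAnything" and "reject" entries in policy.json parse into. A small sketch parsing a minimal policy; the transport scope shown is illustrative:

```go
package main

import (
	"fmt"

	"github.com/containers/image/signature"
)

func main() {
	// Rejects everything by default, but accepts anything coming from
	// the docker-daemon transport (the empty scope matches all of it).
	policyJSON := []byte(`{
	    "default": [{"type": "reject"}],
	    "transports": {
	        "docker-daemon": {"": [{"type": "insecureAcceptAnything"}]}
	    }
	}`)
	policy, err := signature.NewPolicyFromBytes(policyJSON)
	if err != nil {
		panic(err)
	}
	fmt.Printf("policy covers %d transport(s)\n", len(policy.Transports))
}
```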
vendor/github.com/containers/image/storage/storage_image.go (generated, vendored): 963 lines changed
(File diff suppressed because it is too large.)
vendor/github.com/containers/image/storage/storage_reference.go (generated, vendored): 83 lines changed

@@ -3,11 +3,13 @@
 package storage

 import (
+	"context"
 	"strings"

 	"github.com/containers/image/docker/reference"
 	"github.com/containers/image/types"
 	"github.com/containers/storage"
+	digest "github.com/opencontainers/go-digest"
 	"github.com/pkg/errors"
 	"github.com/sirupsen/logrus"
 )
@@ -20,9 +22,11 @@ type storageReference struct {
 	reference string
 	id string
 	name reference.Named
+	tag string
+	digest digest.Digest
 }

-func newReference(transport storageTransport, reference, id string, name reference.Named) *storageReference {
+func newReference(transport storageTransport, reference, id string, name reference.Named, tag string, digest digest.Digest) *storageReference {
 	// We take a copy of the transport, which contains a pointer to the
 	// store that it used for resolving this reference, so that the
 	// transport that we'll return from Transport() won't be affected by
@@ -32,6 +36,8 @@ func newReference(transport storageTransport, reference, id string, name referen
 		reference: reference,
 		id: id,
 		name: name,
+		tag: tag,
+		digest: digest,
 	}
 }

@@ -39,25 +45,49 @@ func newReference(transport storageTransport, reference, id string, name referen
 // one present with the same name or ID, and return the image.
 func (s *storageReference) resolveImage() (*storage.Image, error) {
 	if s.id == "" {
+		// Look for an image that has the expanded reference name as an explicit Name value.
 		image, err := s.transport.store.Image(s.reference)
 		if image != nil && err == nil {
 			s.id = image.ID
 		}
 	}
+	if s.id == "" && s.name != nil && s.digest != "" {
+		// Look for an image with the specified digest that has the same name,
+		// though possibly with a different tag or digest, as a Name value, so
+		// that the canonical reference can be implicitly resolved to the image.
+		images, err := s.transport.store.ImagesByDigest(s.digest)
+		if images != nil && err == nil {
+			repo := reference.FamiliarName(reference.TrimNamed(s.name))
+		search:
+			for _, image := range images {
+				for _, name := range image.Names {
+					if named, err := reference.ParseNormalizedNamed(name); err == nil {
+						if reference.FamiliarName(reference.TrimNamed(named)) == repo {
+							s.id = image.ID
+							break search
+						}
+					}
+				}
+			}
+		}
+	}
 	if s.id == "" {
-		logrus.Errorf("reference %q does not resolve to an image ID", s.StringWithinTransport())
-		return nil, ErrNoSuchImage
+		logrus.Debugf("reference %q does not resolve to an image ID", s.StringWithinTransport())
+		return nil, errors.Wrapf(ErrNoSuchImage, "reference %q does not resolve to an image ID", s.StringWithinTransport())
 	}
 	img, err := s.transport.store.Image(s.id)
 	if err != nil {
 		return nil, errors.Wrapf(err, "error reading image %q", s.id)
 	}
-	if s.reference != "" {
+	if s.name != nil {
+		repo := reference.FamiliarName(reference.TrimNamed(s.name))
 		nameMatch := false
 		for _, name := range img.Names {
-			if name == s.reference {
-				nameMatch = true
-				break
+			if named, err := reference.ParseNormalizedNamed(name); err == nil {
+				if reference.FamiliarName(reference.TrimNamed(named)) == repo {
+					nameMatch = true
+					break
+				}
 			}
 		}
 		if !nameMatch {
@@ -78,8 +108,21 @@ func (s storageReference) Transport() types.ImageTransport {
 	}
 }

-// Return a name with a tag, if we have a name to base them on.
+// Return a name with a tag or digest, if we have either, else return it bare.
 func (s storageReference) DockerReference() reference.Named {
+	if s.name == nil {
+		return nil
+	}
+	if s.tag != "" {
+		if namedTagged, err := reference.WithTag(s.name, s.tag); err == nil {
+			return namedTagged
+		}
+	}
+	if s.digest != "" {
+		if canonical, err := reference.WithDigest(s.name, s.digest); err == nil {
+			return canonical
+		}
+	}
 	return s.name
 }

@@ -93,7 +136,7 @@ func (s storageReference) StringWithinTransport() string {
 		optionsList = ":" + strings.Join(options, ",")
 	}
 	storeSpec := "[" + s.transport.store.GraphDriverName() + "@" + s.transport.store.GraphRoot() + "+" + s.transport.store.RunRoot() + optionsList + "]"
-	if s.name == nil {
+	if s.reference == "" {
 		return storeSpec + "@" + s.id
 	}
 	if s.id == "" {
@@ -122,11 +165,8 @@ func (s storageReference) PolicyConfigurationNamespaces() []string {
 	driverlessStoreSpec := "[" + s.transport.store.GraphRoot() + "]"
 	namespaces := []string{}
 	if s.name != nil {
-		if s.id != "" {
-			// The reference without the ID is also a valid namespace.
-			namespaces = append(namespaces, storeSpec+s.reference)
-		}
-		components := strings.Split(s.name.Name(), "/")
+		name := reference.TrimNamed(s.name)
+		components := strings.Split(name.String(), "/")
 		for len(components) > 0 {
 			namespaces = append(namespaces, storeSpec+strings.Join(components, "/"))
 			components = components[:len(components)-1]
@@ -137,11 +177,16 @@ func (s storageReference) PolicyConfigurationNamespaces() []string {
 	return namespaces
 }

-func (s storageReference) NewImage(ctx *types.SystemContext) (types.Image, error) {
-	return newImage(s)
+// NewImage returns a types.ImageCloser for this reference, possibly specialized for this ImageTransport.
+// The caller must call .Close() on the returned ImageCloser.
+// NOTE: If any kind of signature verification should happen, build an UnparsedImage from the value returned by NewImageSource,
+// verify that UnparsedImage, and convert it into a real Image via image.FromUnparsedImage.
+// WARNING: This may not do the right thing for a manifest list, see image.FromSource for details.
+func (s storageReference) NewImage(ctx context.Context, sys *types.SystemContext) (types.ImageCloser, error) {
+	return newImage(ctx, sys, s)
 }

-func (s storageReference) DeleteImage(ctx *types.SystemContext) error {
+func (s storageReference) DeleteImage(ctx context.Context, sys *types.SystemContext) error {
 	img, err := s.resolveImage()
 	if err != nil {
 		return err
@@ -156,10 +201,10 @@ func (s storageReference) DeleteImage(ctx *types.SystemContext) error {
 	return err
 }

-func (s storageReference) NewImageSource(ctx *types.SystemContext) (types.ImageSource, error) {
+func (s storageReference) NewImageSource(ctx context.Context, sys *types.SystemContext) (types.ImageSource, error) {
 	return newImageSource(s)
 }

-func (s storageReference) NewImageDestination(ctx *types.SystemContext) (types.ImageDestination, error) {
+func (s storageReference) NewImageDestination(ctx context.Context, sys *types.SystemContext) (types.ImageDestination, error) {
 	return newImageDestination(s)
 }
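The effect of the new tag and digest fields is easiest to see through the docker/reference helpers the code now calls; this standalone sketch composes the same kinds of references DockerReference can now return (the repository name and digest are placeholders):

```go
package main

import (
	"fmt"

	"github.com/containers/image/docker/reference"
)

func main() {
	name, _ := reference.ParseNormalizedNamed("docker.io/library/busybox")

	// A stored tag yields a tagged name, as DockerReference now prefers.
	tagged, _ := reference.WithTag(name, "latest")
	fmt.Println(tagged.String()) // docker.io/library/busybox:latest

	// Failing that, a stored digest yields a canonical name@digest.
	canonical, _ := reference.WithDigest(name,
		"sha256:aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa") // placeholder digest
	fmt.Println(canonical.String())
}
```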
vendor/github.com/containers/image/storage/storage_transport.go (generated, vendored): 207 lines changed

@@ -13,11 +13,14 @@ import (
 	"github.com/containers/image/types"
 	"github.com/containers/storage"
 	"github.com/containers/storage/pkg/idtools"
-	"github.com/opencontainers/go-digest"
-	ddigest "github.com/opencontainers/go-digest"
+	digest "github.com/opencontainers/go-digest"
 	"github.com/sirupsen/logrus"
 )

+const (
+	minimumTruncatedIDLength = 3
+)
+
 func init() {
 	transports.Register(Transport)
 }
@@ -103,69 +106,133 @@ func (s *storageTransport) DefaultGIDMap() []idtools.IDMap {
 // relative to the given store, and returns it in a reference object.
 func (s storageTransport) ParseStoreReference(store storage.Store, ref string) (*storageReference, error) {
 	var name reference.Named
-	var sum digest.Digest
-	var err error
 	if ref == "" {
-		return nil, ErrInvalidReference
+		return nil, errors.Wrapf(ErrInvalidReference, "%q is an empty reference")
 	}
 	if ref[0] == '[' {
 		// Ignore the store specifier.
 		closeIndex := strings.IndexRune(ref, ']')
 		if closeIndex < 1 {
-			return nil, ErrInvalidReference
+			return nil, errors.Wrapf(ErrInvalidReference, "store specifier in %q did not end", ref)
 		}
 		ref = ref[closeIndex+1:]
 	}
-	refInfo := strings.SplitN(ref, "@", 2)
-	if len(refInfo) == 1 {
-		// A name.
-		name, err = reference.ParseNormalizedNamed(refInfo[0])
-		if err != nil {
-			return nil, err
-		}
-	} else if len(refInfo) == 2 {
-		// An ID, possibly preceded by a name.
-		if refInfo[0] != "" {
-			name, err = reference.ParseNormalizedNamed(refInfo[0])
-			if err != nil {
-				return nil, err
-			}
-		}
-		sum, err = digest.Parse(refInfo[1])
-		if err != nil || sum.Validate() != nil {
-			sum, err = digest.Parse("sha256:" + refInfo[1])
-			if err != nil || sum.Validate() != nil {
-				return nil, err
-			}
-		}
-	} else { // Coverage: len(refInfo) is always 1 or 2
-		// Anything else: store specified in a form we don't
-		// recognize.
-		return nil, ErrInvalidReference
+
+	// The last segment, if there's more than one, is either a digest from a reference, or an image ID.
+	split := strings.LastIndex(ref, "@")
+	idOrDigest := ""
+	if split != -1 {
+		// Peel off that last bit so that we can work on the rest.
+		idOrDigest = ref[split+1:]
+		if idOrDigest == "" {
+			return nil, errors.Wrapf(ErrInvalidReference, "%q does not look like a digest or image ID", idOrDigest)
+		}
+		ref = ref[:split]
 	}
+
+	// The middle segment (now the last segment), if there is one, is a digest.
+	split = strings.LastIndex(ref, "@")
+	sum := digest.Digest("")
+	if split != -1 {
+		sum = digest.Digest(ref[split+1:])
+		if sum == "" {
+			return nil, errors.Wrapf(ErrInvalidReference, "%q does not look like an image digest", sum)
+		}
+		ref = ref[:split]
+	}
+
+	// If we have something that unambiguously should be a digest, validate it, and then the third part,
+	// if we have one, as an ID.
+	id := ""
+	if sum != "" {
+		if idSum, err := digest.Parse("sha256:" + idOrDigest); err != nil || idSum.Validate() != nil {
+			return nil, errors.Wrapf(ErrInvalidReference, "%q does not look like an image ID", idOrDigest)
+		}
+		if err := sum.Validate(); err != nil {
+			return nil, errors.Wrapf(ErrInvalidReference, "%q does not look like an image digest", sum)
+		}
+		id = idOrDigest
+		if img, err := store.Image(idOrDigest); err == nil && img != nil && len(idOrDigest) >= minimumTruncatedIDLength && strings.HasPrefix(img.ID, idOrDigest) {
+			// The ID is a truncated version of the ID of an image that's present in local storage,
+			// so we might as well use the expanded value.
+			id = img.ID
+		}
+	} else if idOrDigest != "" {
+		// There was no middle portion, so the final portion could be either a digest or an ID.
+		if idSum, err := digest.Parse("sha256:" + idOrDigest); err == nil && idSum.Validate() == nil {
+			// It's an ID.
+			id = idOrDigest
+		} else if idSum, err := digest.Parse(idOrDigest); err == nil && idSum.Validate() == nil {
+			// It's a digest.
+			sum = idSum
+		} else if img, err := store.Image(idOrDigest); err == nil && img != nil && len(idOrDigest) >= minimumTruncatedIDLength && strings.HasPrefix(img.ID, idOrDigest) {
+			// It's a truncated version of the ID of an image that's present in local storage,
+			// and we may need the expanded value.
+			id = img.ID
+		} else {
+			return nil, errors.Wrapf(ErrInvalidReference, "%q does not look like a digest or image ID", idOrDigest)
+		}
+	}
+
+	// If we only had one portion, then _maybe_ it's a truncated image ID. Only check on that if it's
+	// at least of what we guess is a reasonable minimum length, because we don't want a really short value
+	// like "a" matching an image by ID prefix when the input was actually meant to specify an image name.
+	if len(ref) >= minimumTruncatedIDLength && sum == "" && id == "" {
+		if img, err := store.Image(ref); err == nil && img != nil && strings.HasPrefix(img.ID, ref) {
+			// It's a truncated version of the ID of an image that's present in local storage;
+			// we need to expand it.
+			id = img.ID
+			ref = ""
+		}
+	}
+
+	// The initial portion is probably a name, possibly with a tag.
+	if ref != "" {
+		var err error
+		if name, err = reference.ParseNormalizedNamed(ref); err != nil {
+			return nil, errors.Wrapf(err, "error parsing named reference %q", ref)
+		}
+	}
+	if name == nil && sum == "" && id == "" {
+		return nil, errors.Errorf("error parsing reference")
+	}
+
 	// Construct a copy of the store spec.
 	optionsList := ""
 	options := store.GraphOptions()
 	if len(options) > 0 {
 		optionsList = ":" + strings.Join(options, ",")
 	}
 	storeSpec := "[" + store.GraphDriverName() + "@" + store.GraphRoot() + "+" + store.RunRoot() + optionsList + "]"
-	id := ""
-	if sum.Validate() == nil {
-		id = sum.Hex()
-	}
+
 	// Convert the name back into a reference string, if we got a name.
 	refname := ""
+	tag := ""
 	if name != nil {
-		name = reference.TagNameOnly(name)
-		refname = verboseName(name)
+		if sum.Validate() == nil {
+			canonical, err := reference.WithDigest(name, sum)
+			if err != nil {
+				return nil, errors.Wrapf(err, "error mixing name %q with digest %q", name, sum)
+			}
+			refname = verboseName(canonical)
+		} else {
+			name = reference.TagNameOnly(name)
+			tagged, ok := name.(reference.Tagged)
+			if !ok {
+				return nil, errors.Errorf("error parsing possibly-tagless name %q", ref)
+			}
+			refname = verboseName(name)
+			tag = tagged.Tag()
+		}
 	}
 	if refname == "" {
-		logrus.Debugf("parsed reference into %q", storeSpec+"@"+id)
+		logrus.Debugf("parsed reference to id into %q", storeSpec+"@"+id)
 	} else if id == "" {
-		logrus.Debugf("parsed reference into %q", storeSpec+refname)
+		logrus.Debugf("parsed reference to refname into %q", storeSpec+refname)
 	} else {
-		logrus.Debugf("parsed reference into %q", storeSpec+refname+"@"+id)
+		logrus.Debugf("parsed reference to refname@id into %q", storeSpec+refname+"@"+id)
 	}
-	return newReference(storageTransport{store: store, defaultUIDMap: s.defaultUIDMap, defaultGIDMap: s.defaultGIDMap}, refname, id, name), nil
+	return newReference(storageTransport{store: store, defaultUIDMap: s.defaultUIDMap, defaultGIDMap: s.defaultGIDMap}, refname, id, name, tag, sum), nil
 }

 func (s *storageTransport) GetStore() (storage.Store, error) {
@@ -184,11 +251,14 @@ func (s *storageTransport) GetStore() (storage.Store, error) {
 	return s.store, nil
 }

-// ParseReference takes a name and/or an ID ("_name_"/"@_id_"/"_name_@_id_"),
+// ParseReference takes a name and a tag or digest and/or ID
+// ("_name_"/"@_id_"/"_name_:_tag_"/"_name_:_tag_@_id_"/"_name_@_digest_"/"_name_@_digest_@_id_"),
 // possibly prefixed with a store specifier in the form "[_graphroot_]" or
 // "[_driver_@_graphroot_]" or "[_driver_@_graphroot_+_runroot_]" or
 // "[_driver_@_graphroot_:_options_]" or "[_driver_@_graphroot_+_runroot_:_options_]",
 // tries to figure out which it is, and returns it in a reference object.
+// If _id_ is the ID of an image that's present in local storage, it can be truncated, and
+// even be specified as if it were a _name_, value.
 func (s *storageTransport) ParseReference(reference string) (types.ImageReference, error) {
 	var store storage.Store
 	// Check if there's a store location prefix. If there is, then it
@@ -267,17 +337,23 @@ func (s *storageTransport) ParseReference(reference string) (types.ImageReferenc

 func (s storageTransport) GetStoreImage(store storage.Store, ref types.ImageReference) (*storage.Image, error) {
 	dref := ref.DockerReference()
-	if dref == nil {
-		if sref, ok := ref.(*storageReference); ok {
-			if sref.id != "" {
-				if img, err := store.Image(sref.id); err == nil {
-					return img, nil
-				}
+	if dref != nil {
+		if img, err := store.Image(verboseName(dref)); err == nil {
+			return img, nil
+		}
+	}
+	if sref, ok := ref.(*storageReference); ok {
+		if sref.id != "" {
+			if img, err := store.Image(sref.id); err == nil {
+				return img, nil
 			}
 		}
-		return nil, ErrInvalidReference
+		tmpRef := *sref
+		if img, err := tmpRef.resolveImage(); err == nil {
+			return img, nil
+		}
 	}
-	return store.Image(verboseName(dref))
+	return nil, storage.ErrImageUnknown
 }

 func (s *storageTransport) GetImage(ref types.ImageReference) (*storage.Image, error) {
@@ -337,7 +413,7 @@ func (s storageTransport) ValidatePolicyConfigurationScope(scope string) error {
 		if err != nil {
 			return err
 		}
-		_, err = ddigest.Parse("sha256:" + scopeInfo[1])
+		_, err = digest.Parse("sha256:" + scopeInfo[1])
 		if err != nil {
 			return err
 		}
@@ -347,11 +423,28 @@ func (s storageTransport) ValidatePolicyConfigurationScope(scope string) error {
 	return nil
 }

-func verboseName(name reference.Named) string {
-	name = reference.TagNameOnly(name)
-	tag := ""
-	if tagged, ok := name.(reference.NamedTagged); ok {
-		tag = ":" + tagged.Tag()
+func verboseName(r reference.Reference) string {
+	if r == nil {
+		return ""
 	}
-	return name.Name() + tag
+	named, isNamed := r.(reference.Named)
+	digested, isDigested := r.(reference.Digested)
+	tagged, isTagged := r.(reference.Tagged)
+	name := ""
+	tag := ""
+	sum := ""
+	if isNamed {
+		name = (reference.TrimNamed(named)).String()
+	}
+	if isTagged {
+		if tagged.Tag() != "" {
+			tag = ":" + tagged.Tag()
+		}
+	}
+	if isDigested {
+		if digested.Digest().Validate() == nil {
+			sum = "@" + digested.Digest().String()
+		}
+	}
+	return name + tag + sum
 }
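The rewritten verboseName accepts any reference.Reference and concatenates whichever of name, tag, and digest are present, producing the "name[:tag][@digest]" strings used as storage keys. A small reimplementation of that spirit for illustration; this is not the vendored function, and the input name is a placeholder:

```go
package main

import (
	"fmt"

	"github.com/containers/image/docker/reference"
)

// verbose renders "name[:tag][@digest]", with every part optional,
// mirroring what the new verboseName does in the hunk above.
func verbose(r reference.Reference) string {
	out := ""
	if named, ok := r.(reference.Named); ok {
		out = reference.TrimNamed(named).String()
	}
	if tagged, ok := r.(reference.Tagged); ok && tagged.Tag() != "" {
		out += ":" + tagged.Tag()
	}
	if digested, ok := r.(reference.Digested); ok && digested.Digest().Validate() == nil {
		out += "@" + digested.Digest().String()
	}
	return out
}

func main() {
	name, _ := reference.ParseNormalizedNamed("registry.example.com/app/web:v1") // placeholder name
	fmt.Println(verbose(name))
}
```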
vendor/github.com/containers/image/tarball/tarball_reference.go (generated, vendored): 18 lines changed

@@ -1,6 +1,7 @@
 package tarball

 import (
+	"context"
 	"fmt"
 	"os"
 	"strings"
@@ -61,12 +62,17 @@ func (r *tarballReference) PolicyConfigurationNamespaces() []string {
 	return nil
 }

-func (r *tarballReference) NewImage(ctx *types.SystemContext) (types.Image, error) {
-	src, err := r.NewImageSource(ctx)
+// NewImage returns a types.ImageCloser for this reference, possibly specialized for this ImageTransport.
+// The caller must call .Close() on the returned ImageCloser.
+// NOTE: If any kind of signature verification should happen, build an UnparsedImage from the value returned by NewImageSource,
+// verify that UnparsedImage, and convert it into a real Image via image.FromUnparsedImage.
+// WARNING: This may not do the right thing for a manifest list, see image.FromSource for details.
+func (r *tarballReference) NewImage(ctx context.Context, sys *types.SystemContext) (types.ImageCloser, error) {
+	src, err := r.NewImageSource(ctx, sys)
 	if err != nil {
 		return nil, err
 	}
-	img, err := image.FromSource(src)
+	img, err := image.FromSource(ctx, sys, src)
 	if err != nil {
 		src.Close()
 		return nil, err
@@ -74,7 +80,7 @@ func (r *tarballReference) NewImage(ctx *types.SystemContext) (types.Image, erro
 	return img, nil
 }

-func (r *tarballReference) DeleteImage(ctx *types.SystemContext) error {
+func (r *tarballReference) DeleteImage(ctx context.Context, sys *types.SystemContext) error {
 	for _, filename := range r.filenames {
 		if err := os.Remove(filename); err != nil && !os.IsNotExist(err) {
 			return fmt.Errorf("error removing %q: %v", filename, err)
@@ -83,6 +89,6 @@ func (r *tarballReference) DeleteImage(ctx *types.SystemContext) error {
 	return nil
 }

-func (r *tarballReference) NewImageDestination(ctx *types.SystemContext) (types.ImageDestination, error) {
-	return nil, fmt.Errorf("destination not implemented yet")
+func (r *tarballReference) NewImageDestination(ctx context.Context, sys *types.SystemContext) (types.ImageDestination, error) {
+	return nil, fmt.Errorf(`"tarball:" locations can only be read from, not written to`)
 }
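A caller-side sketch of the updated tarball entry points: every step now takes a context.Context plus a *types.SystemContext, with nil selecting the defaults. The file path below is a placeholder, so expect a parse error unless it exists:

```go
package main

import (
	"context"
	"fmt"

	"github.com/containers/image/tarball"
)

func main() {
	ctx := context.Background()
	ref, err := tarball.Transport.ParseReference("/tmp/layer.tar") // placeholder path
	if err != nil {
		fmt.Println("parse:", err)
		return
	}
	img, err := ref.NewImage(ctx, nil) // nil SystemContext = defaults
	if err != nil {
		fmt.Println("open:", err)
		return
	}
	defer img.Close()
	manifest, mimeType, err := img.Manifest(ctx)
	fmt.Println(len(manifest), mimeType, err)
}
```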
vendor/github.com/containers/image/tarball/tarball_src.go (generated, vendored): 33 lines changed

@@ -34,7 +34,7 @@ type tarballImageSource struct {
 	manifest []byte
 }

-func (r *tarballReference) NewImageSource(ctx *types.SystemContext) (types.ImageSource, error) {
+func (r *tarballReference) NewImageSource(ctx context.Context, sys *types.SystemContext) (types.ImageSource, error) {
 	// Gather up the digests, sizes, and date information for all of the files.
 	filenames := []string{}
 	diffIDs := []digest.Digest{}
@@ -87,6 +87,7 @@ func (r *tarballReference) NewImageSource(ctx *types.SystemContext) (types.Image
 			layerType = imgspecv1.MediaTypeImageLayer
 			uncompressed = nil
 		}
+		// TODO: This can take quite some time, and should ideally be cancellable using ctx.Done().
 		n, err := io.Copy(ioutil.Discard, reader)
 		if err != nil {
 			return nil, fmt.Errorf("error reading %q: %v", filename, err)
@@ -206,7 +207,7 @@ func (is *tarballImageSource) Close() error {
 	return nil
 }

-func (is *tarballImageSource) GetBlob(blobinfo types.BlobInfo) (io.ReadCloser, int64, error) {
+func (is *tarballImageSource) GetBlob(ctx context.Context, blobinfo types.BlobInfo) (io.ReadCloser, int64, error) {
 	// We should only be asked about things in the manifest. Maybe the configuration blob.
 	if blobinfo.Digest == is.configID {
 		return ioutil.NopCloser(bytes.NewBuffer(is.config)), is.configSize, nil
@@ -228,23 +229,33 @@ func (is *tarballImageSource) GetBlob(blobinfo types.BlobInfo) (io.ReadCloser, i
 	return nil, -1, fmt.Errorf("no blob with digest %q found", blobinfo.Digest.String())
 }

-func (is *tarballImageSource) GetManifest() ([]byte, string, error) {
+// GetManifest returns the image's manifest along with its MIME type (which may be empty when it can't be determined but the manifest is available).
+// It may use a remote (= slow) service.
+// If instanceDigest is not nil, it contains a digest of the specific manifest instance to retrieve (when the primary manifest is a manifest list);
+// this never happens if the primary manifest is not a manifest list (e.g. if the source never returns manifest lists).
+func (is *tarballImageSource) GetManifest(ctx context.Context, instanceDigest *digest.Digest) ([]byte, string, error) {
+	if instanceDigest != nil {
+		return nil, "", fmt.Errorf("manifest lists are not supported by the %q transport", transportName)
+	}
 	return is.manifest, imgspecv1.MediaTypeImageManifest, nil
 }

-func (*tarballImageSource) GetSignatures(context.Context) ([][]byte, error) {
+// GetSignatures returns the image's signatures. It may use a remote (= slow) service.
+// If instanceDigest is not nil, it contains a digest of the specific manifest instance to retrieve signatures for
+// (when the primary manifest is a manifest list); this never happens if the primary manifest is not a manifest list
+// (e.g. if the source never returns manifest lists).
+func (*tarballImageSource) GetSignatures(ctx context.Context, instanceDigest *digest.Digest) ([][]byte, error) {
+	if instanceDigest != nil {
+		return nil, fmt.Errorf("manifest lists are not supported by the %q transport", transportName)
+	}
 	return nil, nil
 }

-func (*tarballImageSource) GetTargetManifest(digest.Digest) ([]byte, string, error) {
-	return nil, "", fmt.Errorf("manifest lists are not supported by the %q transport", transportName)
-}
-
 func (is *tarballImageSource) Reference() types.ImageReference {
 	return &is.reference
 }

-// UpdatedLayerInfos() returns updated layer info that should be used when reading, in preference to values in the manifest, if specified.
-func (*tarballImageSource) UpdatedLayerInfos() []types.BlobInfo {
-	return nil
+// LayerInfosForCopy() returns updated layer info that should be used when reading, in preference to values in the manifest, if specified.
+func (*tarballImageSource) LayerInfosForCopy(ctx context.Context) ([]types.BlobInfo, error) {
+	return nil, nil
 }
vendor/github.com/containers/image/transports/alltransports/ostree.go (generated, vendored): 2 lines changed

@@ -1,4 +1,4 @@
-// +build !containers_image_ostree_stub
+// +build !containers_image_ostree_stub,linux

 package alltransports
vendor/github.com/containers/image/transports/alltransports/ostree_stub.go (generated, vendored): 2 lines changed

@@ -1,4 +1,4 @@
-// +build containers_image_ostree_stub
+// +build containers_image_ostree_stub !linux

 package alltransports
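These two build lines gate the real ostree transport to Linux: within a +build line a comma means AND and a space means OR, so the stub is compiled whenever the tag is set or the target is not Linux, exactly complementing the real file. The same pattern with a hypothetical package:

```go
// feature_real.go
// +build !mylib_feature_stub,linux

package mylib

// Built only when the mylib_feature_stub tag is absent AND GOOS=linux.
func featureAvailable() bool { return true }
```

```go
// feature_stub.go
// +build mylib_feature_stub !linux

package mylib

// Built when the mylib_feature_stub tag is set OR GOOS != linux.
func featureAvailable() bool { return false }
```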
vendor/github.com/containers/image/types/types.go (generated, vendored): 113 lines changed

@@ -73,20 +73,21 @@ type ImageReference interface {
 	// and each following element to be a prefix of the element preceding it.
 	PolicyConfigurationNamespaces() []string

-	// NewImage returns a types.Image for this reference, possibly specialized for this ImageTransport.
-	// The caller must call .Close() on the returned Image.
+	// NewImage returns a types.ImageCloser for this reference, possibly specialized for this ImageTransport.
+	// The caller must call .Close() on the returned ImageCloser.
 	// NOTE: If any kind of signature verification should happen, build an UnparsedImage from the value returned by NewImageSource,
 	// verify that UnparsedImage, and convert it into a real Image via image.FromUnparsedImage.
-	NewImage(ctx *SystemContext) (Image, error)
+	// WARNING: This may not do the right thing for a manifest list, see image.FromSource for details.
+	NewImage(ctx context.Context, sys *SystemContext) (ImageCloser, error)
 	// NewImageSource returns a types.ImageSource for this reference.
 	// The caller must call .Close() on the returned ImageSource.
-	NewImageSource(ctx *SystemContext) (ImageSource, error)
+	NewImageSource(ctx context.Context, sys *SystemContext) (ImageSource, error)
 	// NewImageDestination returns a types.ImageDestination for this reference.
 	// The caller must call .Close() on the returned ImageDestination.
-	NewImageDestination(ctx *SystemContext) (ImageDestination, error)
+	NewImageDestination(ctx context.Context, sys *SystemContext) (ImageDestination, error)

 	// DeleteImage deletes the named image from the registry, if supported.
-	DeleteImage(ctx *SystemContext) error
+	DeleteImage(ctx context.Context, sys *SystemContext) error
 }

 // BlobInfo collects known information about a blob (layer/config).
@@ -99,7 +100,7 @@ type BlobInfo struct {
 	MediaType string
 }

-// ImageSource is a service, possibly remote (= slow), to download components of a single image.
+// ImageSource is a service, possibly remote (= slow), to download components of a single image or a named image set (manifest list).
 // This is primarily useful for copying images around; for examining their properties, Image (below)
 // is usually more useful.
 // Each ImageSource should eventually be closed by calling Close().
@@ -114,17 +115,36 @@ type ImageSource interface {
 	Close() error
 	// GetManifest returns the image's manifest along with its MIME type (which may be empty when it can't be determined but the manifest is available).
 	// It may use a remote (= slow) service.
-	GetManifest() ([]byte, string, error)
-	// GetTargetManifest returns an image's manifest given a digest. This is mainly used to retrieve a single image's manifest
-	// out of a manifest list.
-	GetTargetManifest(digest digest.Digest) ([]byte, string, error)
+	// If instanceDigest is not nil, it contains a digest of the specific manifest instance to retrieve (when the primary manifest is a manifest list);
+	// this never happens if the primary manifest is not a manifest list (e.g. if the source never returns manifest lists).
+	GetManifest(ctx context.Context, instanceDigest *digest.Digest) ([]byte, string, error)
 	// GetBlob returns a stream for the specified blob, and the blob’s size (or -1 if unknown).
 	// The Digest field in BlobInfo is guaranteed to be provided, Size may be -1 and MediaType may be optionally provided.
-	GetBlob(BlobInfo) (io.ReadCloser, int64, error)
+	GetBlob(context.Context, BlobInfo) (io.ReadCloser, int64, error)
 	// GetSignatures returns the image's signatures. It may use a remote (= slow) service.
-	GetSignatures(context.Context) ([][]byte, error)
+	// If instanceDigest is not nil, it contains a digest of the specific manifest instance to retrieve signatures for
+	// (when the primary manifest is a manifest list); this never happens if the primary manifest is not a manifest list
+	// (e.g. if the source never returns manifest lists).
+	GetSignatures(ctx context.Context, instanceDigest *digest.Digest) ([][]byte, error)
+	// LayerInfosForCopy returns either nil (meaning the values in the manifest are fine), or updated values for the layer blobsums that are listed in the image's manifest.
+	// The Digest field is guaranteed to be provided; Size may be -1.
+	// WARNING: The list may contain duplicates, and they are semantically relevant.
+	LayerInfosForCopy(ctx context.Context) ([]BlobInfo, error)
 }

+// LayerCompression indicates if layers must be compressed, decompressed or preserved
+type LayerCompression int
+
+const (
+	// PreserveOriginal indicates the layer must be preserved, ie
+	// no compression or decompression.
+	PreserveOriginal LayerCompression = iota
+	// Decompress indicates the layer must be decompressed
+	Decompress
+	// Compress indicates the layer must be compressed
+	Compress
+)
+
 // ImageDestination is a service, possibly remote (= slow), to store components of a single image.
 //
 // There is a specific required order for some of the calls:
@@ -146,9 +166,9 @@ type ImageDestination interface {
 	SupportedManifestMIMETypes() []string
 	// SupportsSignatures returns an error (to be displayed to the user) if the destination certainly can't store signatures.
 	// Note: It is still possible for PutSignatures to fail if SupportsSignatures returns nil.
-	SupportsSignatures() error
-	// ShouldCompressLayers returns true iff it is desirable to compress layer blobs written to this destination.
-	ShouldCompressLayers() bool
+	SupportsSignatures(ctx context.Context) error
+	// DesiredLayerCompression indicates the kind of compression to apply on layers
+	DesiredLayerCompression() LayerCompression
 	// AcceptsForeignLayerURLs returns false iff foreign layers in manifest should be actually
 	// uploaded to the image destination, true otherwise.
 	AcceptsForeignLayerURLs() bool
@@ -161,25 +181,25 @@ type ImageDestination interface {
 	// WARNING: The contents of stream are being verified on the fly. Until stream.Read() returns io.EOF, the contents of the data SHOULD NOT be available
 	// to any other readers for download using the supplied digest.
 	// If stream.Read() at any time, ESPECIALLY at end of input, returns an error, PutBlob MUST 1) fail, and 2) delete any data stored so far.
-	PutBlob(stream io.Reader, inputInfo BlobInfo) (BlobInfo, error)
+	PutBlob(ctx context.Context, stream io.Reader, inputInfo BlobInfo, isConfig bool) (BlobInfo, error)
 	// HasBlob returns true iff the image destination already contains a blob with the matching digest which can be reapplied using ReapplyBlob.
 	// Unlike PutBlob, the digest can not be empty. If HasBlob returns true, the size of the blob must also be returned.
 	// If the destination does not contain the blob, or it is unknown, HasBlob ordinarily returns (false, -1, nil);
 	// it returns a non-nil error only on an unexpected failure.
-	HasBlob(info BlobInfo) (bool, int64, error)
+	HasBlob(ctx context.Context, info BlobInfo) (bool, int64, error)
 	// ReapplyBlob informs the image destination that a blob for which HasBlob previously returned true would have been passed to PutBlob if it had returned false. Like HasBlob and unlike PutBlob, the digest can not be empty. If the blob is a filesystem layer, this signifies that the changes it describes need to be applied again when composing a filesystem tree.
-	ReapplyBlob(info BlobInfo) (BlobInfo, error)
+	ReapplyBlob(ctx context.Context, info BlobInfo) (BlobInfo, error)
 	// PutManifest writes manifest to the destination.
 	// FIXME? This should also receive a MIME type if known, to differentiate between schema versions.
 	// If the destination is in principle available, refuses this manifest type (e.g. it does not recognize the schema),
 	// but may accept a different manifest type, the returned error must be an ManifestTypeRejectedError.
-	PutManifest(manifest []byte) error
-	PutSignatures(signatures [][]byte) error
+	PutManifest(ctx context.Context, manifest []byte) error
+	PutSignatures(ctx context.Context, signatures [][]byte) error
 	// Commit marks the process of storing the image as successful and asks for the image to be persisted.
 	// WARNING: This does not have any transactional semantics:
 	// - Uploaded data MAY be visible to others before Commit() is called
 	// - Uploaded data MAY be removed or MAY remain around if Close() is called without Commit() (i.e. rollback is allowed but not guaranteed)
||||
Commit() error
|
||||
Commit(ctx context.Context) error
|
||||
}
|
||||
|
||||
// ManifestTypeRejectedError is returned by ImageDestination.PutManifest if the destination is in principle available,
|
||||
@@ -196,21 +216,24 @@ func (e ManifestTypeRejectedError) Error() string {
|
||||
// Thus, an UnparsedImage can be created from an ImageSource simply by fetching blobs without interpreting them,
|
||||
// allowing cryptographic signature verification to happen first, before even fetching the manifest, or parsing anything else.
|
||||
// This also makes the UnparsedImage→Image conversion an explicitly visible step.
|
||||
// Each UnparsedImage should eventually be closed by calling Close().
|
||||
//
|
||||
// An UnparsedImage is a pair of (ImageSource, instance digest); it can represent either a manifest list or a single image instance.
|
||||
//
|
||||
// The UnparsedImage must not be used after the underlying ImageSource is Close()d.
|
||||
type UnparsedImage interface {
|
||||
// Reference returns the reference used to set up this source, _as specified by the user_
|
||||
// (not as the image itself, or its underlying storage, claims). This can be used e.g. to determine which public keys are trusted for this image.
|
||||
Reference() ImageReference
|
||||
// Close removes resources associated with an initialized UnparsedImage, if any.
|
||||
Close() error
|
||||
// Manifest is like ImageSource.GetManifest, but the result is cached; it is OK to call this however often you need.
|
||||
Manifest() ([]byte, string, error)
|
||||
Manifest(ctx context.Context) ([]byte, string, error)
|
||||
// Signatures is like ImageSource.GetSignatures, but the result is cached; it is OK to call this however often you need.
|
||||
Signatures(ctx context.Context) ([][]byte, error)
|
||||
}
|
||||
|
||||
// Image is the primary API for inspecting properties of images.
|
||||
// Each Image should eventually be closed by calling Close().
|
||||
// An Image is based on a pair of (ImageSource, instance digest); it can represent either a manifest list or a single image instance.
|
||||
//
|
||||
// The Image must not be used after the underlying ImageSource is Close()d.
|
||||
type Image interface {
|
||||
// Note that Reference may return nil in the return value of UpdatedImage!
|
||||
UnparsedImage
|
||||
@@ -219,21 +242,25 @@ type Image interface {
|
||||
ConfigInfo() BlobInfo
|
||||
// ConfigBlob returns the blob described by ConfigInfo, if ConfigInfo().Digest != ""; nil otherwise.
|
||||
// The result is cached; it is OK to call this however often you need.
|
||||
ConfigBlob() ([]byte, error)
|
||||
ConfigBlob(context.Context) ([]byte, error)
|
||||
// OCIConfig returns the image configuration as per OCI v1 image-spec. Information about
|
||||
// layers in the resulting configuration isn't guaranteed to be returned to due how
|
||||
// old image manifests work (docker v2s1 especially).
|
||||
OCIConfig() (*v1.Image, error)
|
||||
OCIConfig(context.Context) (*v1.Image, error)
|
||||
// LayerInfos returns a list of BlobInfos of layers referenced by this image, in order (the root layer first, and then successive layered layers).
|
||||
// The Digest field is guaranteed to be provided, Size may be -1 and MediaType may be optionally provided.
|
||||
// WARNING: The list may contain duplicates, and they are semantically relevant.
|
||||
LayerInfos() []BlobInfo
|
||||
// LayerInfosForCopy returns either nil (meaning the values in the manifest are fine), or updated values for the layer blobsums that are listed in the image's manifest.
|
||||
// The Digest field is guaranteed to be provided, Size may be -1 and MediaType may be optionally provided.
|
||||
// WARNING: The list may contain duplicates, and they are semantically relevant.
|
||||
LayerInfosForCopy(context.Context) ([]BlobInfo, error)
|
||||
// EmbeddedDockerReferenceConflicts whether a Docker reference embedded in the manifest, if any, conflicts with destination ref.
|
||||
// It returns false if the manifest does not embed a Docker reference.
|
||||
// (This embedding unfortunately happens for Docker schema1, please do not add support for this in any new formats.)
|
||||
EmbeddedDockerReferenceConflicts(ref reference.Named) bool
|
||||
// Inspect returns various information for (skopeo inspect) parsed from the manifest and configuration.
|
||||
Inspect() (*ImageInspectInfo, error)
|
||||
Inspect(context.Context) (*ImageInspectInfo, error)
|
||||
// UpdatedImageNeedsLayerDiffIDs returns true iff UpdatedImage(options) needs InformationOnly.LayerDiffIDs.
|
||||
// This is a horribly specific interface, but computing InformationOnly.LayerDiffIDs can be very expensive to compute
|
||||
// (most importantly it forces us to download the full layers even if they are already present at the destination).
|
||||
@@ -241,14 +268,21 @@ type Image interface {
|
||||
// UpdatedImage returns a types.Image modified according to options.
|
||||
// Everything in options.InformationOnly should be provided, other fields should be set only if a modification is desired.
|
||||
// This does not change the state of the original Image object.
|
||||
UpdatedImage(options ManifestUpdateOptions) (Image, error)
|
||||
// IsMultiImage returns true if the image's manifest is a list of images, false otherwise.
|
||||
IsMultiImage() bool
|
||||
UpdatedImage(ctx context.Context, options ManifestUpdateOptions) (Image, error)
|
||||
// Size returns an approximation of the amount of disk space which is consumed by the image in its current
|
||||
// location. If the size is not known, -1 will be returned.
|
||||
Size() (int64, error)
|
||||
}
|
||||
|
||||
// ImageCloser is an Image with a Close() method which must be called by the user.
|
||||
// This is returned by ImageReference.NewImage, which transparently instantiates a types.ImageSource,
|
||||
// to ensure that the ImageSource is closed.
|
||||
type ImageCloser interface {
|
||||
Image
|
||||
// Close removes resources associated with an initialized ImageCloser.
|
||||
Close() error
|
||||
}
|
||||
|
||||
// ManifestUpdateOptions is a way to pass named optional arguments to Image.UpdatedManifest
|
||||
type ManifestUpdateOptions struct {
|
||||
LayerInfos []BlobInfo // Complete BlobInfos (size+digest+urls+annotations) which should replace the originals, in order (the root layer first, and then successive layered layers). BlobInfos' MediaType fields are ignored.
|
||||
@@ -271,7 +305,7 @@ type ManifestUpdateInformation struct {
|
||||
// for other manifest types.
|
||||
type ImageInspectInfo struct {
|
||||
Tag string
|
||||
Created time.Time
|
||||
Created *time.Time
|
||||
DockerVersion string
|
||||
Labels map[string]string
|
||||
Architecture string
|
||||
@@ -308,6 +342,13 @@ type SystemContext struct {
|
||||
SystemRegistriesConfPath string
|
||||
// If not "", overrides the default path for the authentication file
|
||||
AuthFilePath string
|
||||
// If not "", overrides the use of platform.GOARCH when choosing an image or verifying architecture match.
|
||||
ArchitectureChoice string
|
||||
// If not "", overrides the use of platform.GOOS when choosing an image or verifying OS match.
|
||||
OSChoice string
|
||||
|
||||
// Additional tags when creating or copying a docker-archive.
|
||||
DockerArchiveAdditionalTags []reference.NamedTagged
|
||||
|
||||
// === OCI.Transport overrides ===
|
||||
// If not "", a directory containing a CA certificate (ending with ".crt"),
|
||||
@@ -349,6 +390,10 @@ type SystemContext struct {
|
||||
DockerDaemonHost string
|
||||
// Used to skip TLS verification, off by default. To take effect DockerDaemonCertPath needs to be specified as well.
|
||||
DockerDaemonInsecureSkipTLSVerify bool
|
||||
|
||||
// === dir.Transport overrides ===
|
||||
// DirForceCompress compresses the image layers if set to true
|
||||
DirForceCompress bool
|
||||
}
|
||||
|
||||
// ProgressProperties is used to pass information from the copy code to a monitor which
|
||||
|
||||
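The change above threads a context.Context (and, for manifest-list handling, an optional instance digest) through every ImageSource method. Below is a minimal, self-contained sketch of the resulting calling pattern; toySource is a hypothetical stand-in for illustration, not the real containers/image interface:

package main

import (
	"context"
	"fmt"
	"time"
)

// toySource mirrors only the shape of the new GetManifest signature: a nil
// instanceDigest means "the primary manifest"; a non-nil one selects a
// single entry of a manifest list.
type toySource interface {
	GetManifest(ctx context.Context, instanceDigest *string) ([]byte, string, error)
}

type staticSource struct{}

func (staticSource) GetManifest(ctx context.Context, instanceDigest *string) ([]byte, string, error) {
	// Honoring cancellation is the point of the added ctx parameter.
	if err := ctx.Err(); err != nil {
		return nil, "", err
	}
	if instanceDigest != nil {
		return []byte(`{}`), "application/vnd.docker.distribution.manifest.v2+json", nil
	}
	return []byte(`{}`), "application/vnd.docker.distribution.manifest.list.v2+json", nil
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), time.Second)
	defer cancel()
	_, mimeType, err := staticSource{}.GetManifest(ctx, nil)
	fmt.Println(mimeType, err)
}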
9
vendor/github.com/containers/image/vendor.conf
generated
vendored
@@ -1,5 +1,7 @@
github.com/containers/image

github.com/sirupsen/logrus v1.0.0
github.com/containers/storage 47536c89fcc545a87745e1a1573addc439409165
github.com/containers/storage master
github.com/davecgh/go-spew 346938d642f2ec3594ed81d874461961cd0faa76
github.com/docker/docker-credential-helpers d68f9aeca33f5fd3f08eeae5e9d175edf4e731d1
github.com/docker/distribution 5f6282db7d65e6d72ad7c2cc66310724a57be716
@@ -8,11 +10,9 @@ github.com/docker/go-connections 3ede32e2033de7505e6500d6c868c2b9ed9f169d
github.com/docker/go-units 0dadbb0345b35ec7ef35e228dabb8de89a65bf52
github.com/docker/libtrust aabc10ec26b754e797f9028f4589c5b7bd90dc20
github.com/ghodss/yaml 04f313413ffd65ce25f2541bfd2b2ceec5c0908c
github.com/gorilla/context 08b5f424b9271eedf6f9f0ce86cb9396ed337a42
github.com/gorilla/mux 94e7d24fd285520f3d12ae998f7fdd6b5393d453
github.com/imdario/mergo 50d4dbd4eb0e84778abe37cefef140271d96fade
github.com/mattn/go-runewidth 14207d285c6c197daabb5c9793d63e7af9ab2d50
github.com/mattn/go-shellwords 005a0944d84452842197c2108bd9168ced206f78
github.com/mistifyio/go-zfs c0224de804d438efd11ea6e52ada8014537d6062
github.com/mtrmac/gpgme b2432428689ca58c2b8e8dea9449d3295cf96fc9
github.com/opencontainers/go-digest aa2ec055abd10d26d539eb630a92241b781ce4bc
@@ -36,4 +36,5 @@ github.com/tchap/go-patricia v2.2.6
github.com/opencontainers/selinux ba1aefe8057f1d0cfb8e88d0ec1dc85925ef987d
github.com/BurntSushi/toml b26d9c308763d68093482582cea63d69be07a0f0
github.com/ostreedev/ostree-go aeb02c6b6aa2889db3ef62f7855650755befd460
github.com/gogo/protobuf/proto fcdc5011193ff531a548e9b0301828d5a5b97fd8
github.com/gogo/protobuf fcdc5011193ff531a548e9b0301828d5a5b97fd8
github.com/pquerna/ffjson master
39
vendor/github.com/containers/storage/containers.go
generated
vendored
@@ -7,6 +7,7 @@ import (
    "path/filepath"
    "time"

    "github.com/containers/storage/pkg/idtools"
    "github.com/containers/storage/pkg/ioutils"
    "github.com/containers/storage/pkg/stringid"
    "github.com/containers/storage/pkg/truncindex"
@@ -56,6 +57,12 @@ type Container struct {
    // is set before using it.
    Created time.Time `json:"created,omitempty"`

    // UIDMap and GIDMap are used for setting up a container's root
    // filesystem for use inside of a user namespace where UID mapping is
    // being used.
    UIDMap []idtools.IDMap `json:"uidmap,omitempty"`
    GIDMap []idtools.IDMap `json:"gidmap,omitempty"`

    Flags map[string]interface{} `json:"flags,omitempty"`
}

@@ -70,7 +77,9 @@ type ContainerStore interface {
    // random one if an empty value is supplied) and optional names,
    // based on the specified image, using the specified layer as its
    // read-write layer.
    Create(id string, names []string, image, layer, metadata string) (*Container, error)
    // The maps in the container's options structure are recorded for the
    // convenience of the caller, nothing more.
    Create(id string, names []string, image, layer, metadata string, options *ContainerOptions) (*Container, error)

    // SetNames updates the list of names associated with the container
    // with the specified ID.
@@ -106,10 +115,27 @@ type containerStore struct {
    byname map[string]*Container
}

func copyContainer(c *Container) *Container {
    return &Container{
        ID:             c.ID,
        Names:          copyStringSlice(c.Names),
        ImageID:        c.ImageID,
        LayerID:        c.LayerID,
        Metadata:       c.Metadata,
        BigDataNames:   copyStringSlice(c.BigDataNames),
        BigDataSizes:   copyStringInt64Map(c.BigDataSizes),
        BigDataDigests: copyStringDigestMap(c.BigDataDigests),
        Created:        c.Created,
        UIDMap:         copyIDMap(c.UIDMap),
        GIDMap:         copyIDMap(c.GIDMap),
        Flags:          copyStringInterfaceMap(c.Flags),
    }
}

func (r *containerStore) Containers() ([]Container, error) {
    containers := make([]Container, len(r.containers))
    for i := range r.containers {
        containers[i] = *(r.containers[i])
        containers[i] = *copyContainer(r.containers[i])
    }
    return containers, nil
}
@@ -237,7 +263,7 @@ func (r *containerStore) SetFlag(id string, flag string, value interface{}) erro
    return r.Save()
}

func (r *containerStore) Create(id string, names []string, image, layer, metadata string) (container *Container, err error) {
func (r *containerStore) Create(id string, names []string, image, layer, metadata string, options *ContainerOptions) (container *Container, err error) {
    if id == "" {
        id = stringid.GenerateRandomID()
        _, idInUse := r.byid[id]
@@ -267,6 +293,8 @@ func (r *containerStore) Create(id string, names []string, image, layer, metadat
            BigDataDigests: make(map[string]digest.Digest),
            Created:        time.Now().UTC(),
            Flags:          make(map[string]interface{}),
            UIDMap:         copyIDMap(options.UIDMap),
            GIDMap:         copyIDMap(options.GIDMap),
        }
        r.containers = append(r.containers, container)
        r.byid[id] = container
@@ -276,6 +304,7 @@ func (r *containerStore) Create(id string, names []string, image, layer, metadat
            r.byname[name] = container
        }
        err = r.Save()
        container = copyContainer(container)
    }
    return container, err
}
@@ -355,7 +384,7 @@ func (r *containerStore) Delete(id string) error {

func (r *containerStore) Get(id string) (*Container, error) {
    if container, ok := r.lookup(id); ok {
        return container, nil
        return copyContainer(container), nil
    }
    return nil, ErrContainerUnknown
}
@@ -444,7 +473,7 @@ func (r *containerStore) BigDataNames(id string) ([]string, error) {
    if !ok {
        return nil, ErrContainerUnknown
    }
    return c.BigDataNames, nil
    return copyStringSlice(c.BigDataNames), nil
}

func (r *containerStore) SetBigData(id, key string, data []byte) error {
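The recurring pattern in this hunk is defensive deep copying: Containers(), Get(), and BigDataNames() now hand out copies, so callers cannot mutate the store's internal records through shared slices and maps. A small, hypothetical sketch of why the copy matters (stand-in types, not the containers/storage API):

package main

import "fmt"

type record struct {
	Names []string
}

type store struct {
	recs []*record
}

// getCopy mirrors copyContainer: duplicate every reference-typed field so
// a caller's mutations cannot reach the store's record.
func (s *store) getCopy(i int) *record {
	return &record{Names: append([]string(nil), s.recs[i].Names...)}
}

func main() {
	s := &store{recs: []*record{{Names: []string{"original"}}}}
	c := s.getCopy(0)
	c.Names[0] = "mutated"
	fmt.Println(s.recs[0].Names[0], c.Names[0]) // original mutated
}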
221
vendor/github.com/containers/storage/containers_ffjson.go
generated
vendored
@@ -1,5 +1,5 @@
// Code generated by ffjson <https://github.com/pquerna/ffjson>. DO NOT EDIT.
// source: containers.go
// source: ./containers.go

package storage

@@ -7,6 +7,7 @@ import (
    "bytes"
    "encoding/json"
    "fmt"
    "github.com/containers/storage/pkg/idtools"
    "github.com/opencontainers/go-digest"
    fflib "github.com/pquerna/ffjson/fflib/v1"
)
@@ -126,6 +127,46 @@ func (j *Container) MarshalJSONBuf(buf fflib.EncodingBuffer) error {
        }
        buf.WriteByte(',')
    }
    if len(j.UIDMap) != 0 {
        buf.WriteString(`"uidmap":`)
        if j.UIDMap != nil {
            buf.WriteString(`[`)
            for i, v := range j.UIDMap {
                if i != 0 {
                    buf.WriteString(`,`)
                }
                /* Struct fall back. type=idtools.IDMap kind=struct */
                err = buf.Encode(&v)
                if err != nil {
                    return err
                }
            }
            buf.WriteString(`]`)
        } else {
            buf.WriteString(`null`)
        }
        buf.WriteByte(',')
    }
    if len(j.GIDMap) != 0 {
        buf.WriteString(`"gidmap":`)
        if j.GIDMap != nil {
            buf.WriteString(`[`)
            for i, v := range j.GIDMap {
                if i != 0 {
                    buf.WriteString(`,`)
                }
                /* Struct fall back. type=idtools.IDMap kind=struct */
                err = buf.Encode(&v)
                if err != nil {
                    return err
                }
            }
            buf.WriteString(`]`)
        } else {
            buf.WriteString(`null`)
        }
        buf.WriteByte(',')
    }
    if len(j.Flags) != 0 {
        buf.WriteString(`"flags":`)
        /* Falling back. type=map[string]interface {} kind=map */
@@ -162,6 +203,10 @@ const (

    ffjtContainerCreated

    ffjtContainerUIDMap

    ffjtContainerGIDMap

    ffjtContainerFlags
)

@@ -183,6 +228,10 @@ var ffjKeyContainerBigDataDigests = []byte("big-data-digests")

var ffjKeyContainerCreated = []byte("created")

var ffjKeyContainerUIDMap = []byte("uidmap")

var ffjKeyContainerGIDMap = []byte("gidmap")

var ffjKeyContainerFlags = []byte("flags")

// UnmarshalJSON umarshall json - template of ffjson
@@ -280,6 +329,14 @@ mainparse:
                goto mainparse
            }

        case 'g':

            if bytes.Equal(ffjKeyContainerGIDMap, kn) {
                currentKey = ffjtContainerGIDMap
                state = fflib.FFParse_want_colon
                goto mainparse
            }

        case 'i':

            if bytes.Equal(ffjKeyContainerID, kn) {
@@ -317,6 +374,14 @@ mainparse:
                goto mainparse
            }

        case 'u':

            if bytes.Equal(ffjKeyContainerUIDMap, kn) {
                currentKey = ffjtContainerUIDMap
                state = fflib.FFParse_want_colon
                goto mainparse
            }

        }

        if fflib.EqualFoldRight(ffjKeyContainerFlags, kn) {
@@ -325,6 +390,18 @@ mainparse:
            goto mainparse
        }

        if fflib.SimpleLetterEqualFold(ffjKeyContainerGIDMap, kn) {
            currentKey = ffjtContainerGIDMap
            state = fflib.FFParse_want_colon
            goto mainparse
        }

        if fflib.SimpleLetterEqualFold(ffjKeyContainerUIDMap, kn) {
            currentKey = ffjtContainerUIDMap
            state = fflib.FFParse_want_colon
            goto mainparse
        }

        if fflib.SimpleLetterEqualFold(ffjKeyContainerCreated, kn) {
            currentKey = ffjtContainerCreated
            state = fflib.FFParse_want_colon
@@ -423,6 +500,12 @@ mainparse:
        case ffjtContainerCreated:
            goto handle_Created

        case ffjtContainerUIDMap:
            goto handle_UIDMap

        case ffjtContainerGIDMap:
            goto handle_GIDMap

        case ffjtContainerFlags:
            goto handle_Flags

@@ -931,6 +1014,142 @@ handle_Created:
    state = fflib.FFParse_after_value
    goto mainparse

handle_UIDMap:

    /* handler: j.UIDMap type=[]idtools.IDMap kind=slice quoted=false*/

    {

        {
            if tok != fflib.FFTok_left_brace && tok != fflib.FFTok_null {
                return fs.WrapErr(fmt.Errorf("cannot unmarshal %s into Go value for ", tok))
            }
        }

        if tok == fflib.FFTok_null {
            j.UIDMap = nil
        } else {

            j.UIDMap = []idtools.IDMap{}

            wantVal := true

            for {

                var tmpJUIDMap idtools.IDMap

                tok = fs.Scan()
                if tok == fflib.FFTok_error {
                    goto tokerror
                }
                if tok == fflib.FFTok_right_brace {
                    break
                }

                if tok == fflib.FFTok_comma {
                    if wantVal == true {
                        // TODO(pquerna): this isn't an ideal error message, this handles
                        // things like [,,,] as an array value.
                        return fs.WrapErr(fmt.Errorf("wanted value token, but got token: %v", tok))
                    }
                    continue
                } else {
                    wantVal = true
                }

                /* handler: tmpJUIDMap type=idtools.IDMap kind=struct quoted=false*/

                {
                    /* Falling back. type=idtools.IDMap kind=struct */
                    tbuf, err := fs.CaptureField(tok)
                    if err != nil {
                        return fs.WrapErr(err)
                    }

                    err = json.Unmarshal(tbuf, &tmpJUIDMap)
                    if err != nil {
                        return fs.WrapErr(err)
                    }
                }

                j.UIDMap = append(j.UIDMap, tmpJUIDMap)

                wantVal = false
            }
        }
    }

    state = fflib.FFParse_after_value
    goto mainparse

handle_GIDMap:

    /* handler: j.GIDMap type=[]idtools.IDMap kind=slice quoted=false*/

    {

        {
            if tok != fflib.FFTok_left_brace && tok != fflib.FFTok_null {
                return fs.WrapErr(fmt.Errorf("cannot unmarshal %s into Go value for ", tok))
            }
        }

        if tok == fflib.FFTok_null {
            j.GIDMap = nil
        } else {

            j.GIDMap = []idtools.IDMap{}

            wantVal := true

            for {

                var tmpJGIDMap idtools.IDMap

                tok = fs.Scan()
                if tok == fflib.FFTok_error {
                    goto tokerror
                }
                if tok == fflib.FFTok_right_brace {
                    break
                }

                if tok == fflib.FFTok_comma {
                    if wantVal == true {
                        // TODO(pquerna): this isn't an ideal error message, this handles
                        // things like [,,,] as an array value.
                        return fs.WrapErr(fmt.Errorf("wanted value token, but got token: %v", tok))
                    }
                    continue
                } else {
                    wantVal = true
                }

                /* handler: tmpJGIDMap type=idtools.IDMap kind=struct quoted=false*/

                {
                    /* Falling back. type=idtools.IDMap kind=struct */
                    tbuf, err := fs.CaptureField(tok)
                    if err != nil {
                        return fs.WrapErr(err)
                    }

                    err = json.Unmarshal(tbuf, &tmpJGIDMap)
                    if err != nil {
                        return fs.WrapErr(err)
                    }
                }

                j.GIDMap = append(j.GIDMap, tmpJGIDMap)

                wantVal = false
            }
        }
    }

    state = fflib.FFParse_after_value
    goto mainparse

handle_Flags:

    /* handler: j.Flags type=map[string]interface {} kind=map quoted=false*/
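The regenerated marshaller above is verbose because ffjson writes JSON by hand into a buffer, but the wire format it produces for the new fields is ordinary. A sketch of the expected shape using plain encoding/json on stand-in types (the field names match the uidmap/gidmap keys above; the IDMap field layout here is an assumption for illustration):

package main

import (
	"encoding/json"
	"fmt"
)

// idMap stands in for idtools.IDMap; the JSON tags are assumed for the demo.
type idMap struct {
	ContainerID int `json:"container_id"`
	HostID      int `json:"host_id"`
	Size        int `json:"size"`
}

type container struct {
	ID     string  `json:"id"`
	UIDMap []idMap `json:"uidmap,omitempty"`
	GIDMap []idMap `json:"gidmap,omitempty"`
}

func main() {
	c := container{ID: "c1", UIDMap: []idMap{{ContainerID: 0, HostID: 100000, Size: 65536}}}
	b, err := json.Marshal(c)
	// Prints {"id":"c1","uidmap":[{"container_id":0,"host_id":100000,"size":65536}]} <nil>.
	// gidmap is dropped entirely via omitempty, matching the len(j.GIDMap) != 0 guard above.
	fmt.Println(string(b), err)
}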
64
vendor/github.com/containers/storage/drivers/aufs/aufs.go
generated
vendored
@@ -167,7 +167,7 @@ func Init(root string, options []string, uidMaps, gidMaps []idtools.IDMap) (grap
        }
    }

    a.naiveDiff = graphdriver.NewNaiveDiffDriver(a, uidMaps, gidMaps)
    a.naiveDiff = graphdriver.NewNaiveDiffDriver(a, a)
    return a, nil
}

@@ -250,7 +250,7 @@ func (a *Driver) Create(id, parent string, opts *graphdriver.CreateOpts) error {
        return fmt.Errorf("--storage-opt is not supported for aufs")
    }

    if err := a.createDirsFor(id); err != nil {
    if err := a.createDirsFor(id, parent); err != nil {
        return err
    }
    // Write the layers metadata
@@ -281,21 +281,26 @@ func (a *Driver) Create(id, parent string, opts *graphdriver.CreateOpts) error {

// createDirsFor creates two directories for the given id.
// mnt and diff
func (a *Driver) createDirsFor(id string) error {
func (a *Driver) createDirsFor(id, parent string) error {
    paths := []string{
        "mnt",
        "diff",
    }

    rootUID, rootGID, err := idtools.GetRootUIDGID(a.uidMaps, a.gidMaps)
    if err != nil {
        return err
    }
    // Directory permission is 0755.
    // The path of directories are <aufs_root_path>/mnt/<image_id>
    // and <aufs_root_path>/diff/<image_id>
    for _, p := range paths {
        if err := idtools.MkdirAllAs(path.Join(a.rootPath(), p, id), 0755, rootUID, rootGID); err != nil {
        rootPair := idtools.NewIDMappingsFromMaps(a.uidMaps, a.gidMaps).RootPair()
        if parent != "" {
            st, err := system.Stat(path.Join(a.rootPath(), p, parent))
            if err != nil {
                return err
            }
            rootPair.UID = int(st.UID())
            rootPair.GID = int(st.GID())
        }
        if err := idtools.MkdirAllAndChownNew(path.Join(a.rootPath(), p, id), os.FileMode(0755), rootPair); err != nil {
            return err
        }
    }
@@ -463,17 +468,21 @@ func (a *Driver) isParent(id, parent string) bool {

// Diff produces an archive of the changes between the specified
// layer and its parent layer which may be "".
func (a *Driver) Diff(id, parent string) (io.ReadCloser, error) {
func (a *Driver) Diff(id string, idMappings *idtools.IDMappings, parent string, parentMappings *idtools.IDMappings, mountLabel string) (io.ReadCloser, error) {
    if !a.isParent(id, parent) {
        return a.naiveDiff.Diff(id, parent)
        return a.naiveDiff.Diff(id, idMappings, parent, parentMappings, mountLabel)
    }

    if idMappings == nil {
        idMappings = &idtools.IDMappings{}
    }

    // AUFS doesn't need the parent layer to produce a diff.
    return archive.TarWithOptions(path.Join(a.rootPath(), "diff", id), &archive.TarOptions{
        Compression:     archive.Uncompressed,
        ExcludePatterns: []string{archive.WhiteoutMetaPrefix + "*", "!" + archive.WhiteoutOpaqueDir},
        UIDMaps: a.uidMaps,
        GIDMaps: a.gidMaps,
        UIDMaps: idMappings.UIDs(),
        GIDMaps: idMappings.GIDs(),
    })
}

@@ -492,19 +501,22 @@ func (a *Driver) DiffGetter(id string) (graphdriver.FileGetCloser, error) {
    return fileGetNilCloser{storage.NewPathFileGetter(p)}, nil
}

func (a *Driver) applyDiff(id string, diff io.Reader) error {
func (a *Driver) applyDiff(id string, idMappings *idtools.IDMappings, diff io.Reader) error {
    if idMappings == nil {
        idMappings = &idtools.IDMappings{}
    }
    return chrootarchive.UntarUncompressed(diff, path.Join(a.rootPath(), "diff", id), &archive.TarOptions{
        UIDMaps: a.uidMaps,
        GIDMaps: a.gidMaps,
        UIDMaps: idMappings.UIDs(),
        GIDMaps: idMappings.GIDs(),
    })
}

// DiffSize calculates the changes between the specified id
// and its parent and returns the size in bytes of the changes
// relative to its base filesystem directory.
func (a *Driver) DiffSize(id, parent string) (size int64, err error) {
func (a *Driver) DiffSize(id string, idMappings *idtools.IDMappings, parent string, parentMappings *idtools.IDMappings, mountLabel string) (size int64, err error) {
    if !a.isParent(id, parent) {
        return a.naiveDiff.DiffSize(id, parent)
        return a.naiveDiff.DiffSize(id, idMappings, parent, parentMappings, mountLabel)
    }
    // AUFS doesn't need the parent layer to calculate the diff size.
    return directory.Size(path.Join(a.rootPath(), "diff", id))
@@ -513,24 +525,24 @@ func (a *Driver) DiffSize(id, parent string) (size int64, err error) {
// ApplyDiff extracts the changeset from the given diff into the
// layer with the specified id and parent, returning the size of the
// new layer in bytes.
func (a *Driver) ApplyDiff(id, parent string, diff io.Reader) (size int64, err error) {
func (a *Driver) ApplyDiff(id string, idMappings *idtools.IDMappings, parent, mountLabel string, diff io.Reader) (size int64, err error) {
    if !a.isParent(id, parent) {
        return a.naiveDiff.ApplyDiff(id, parent, diff)
        return a.naiveDiff.ApplyDiff(id, idMappings, parent, mountLabel, diff)
    }

    // AUFS doesn't need the parent id to apply the diff if it is the direct parent.
    if err = a.applyDiff(id, diff); err != nil {
    if err = a.applyDiff(id, idMappings, diff); err != nil {
        return
    }

    return a.DiffSize(id, parent)
    return directory.Size(path.Join(a.rootPath(), "diff", id))
}

// Changes produces a list of changes between the specified layer
// and its parent layer. If parent is "", then all changes will be ADD changes.
func (a *Driver) Changes(id, parent string) ([]archive.Change, error) {
func (a *Driver) Changes(id string, idMappings *idtools.IDMappings, parent string, parentMappings *idtools.IDMappings, mountLabel string) ([]archive.Change, error) {
    if !a.isParent(id, parent) {
        return a.naiveDiff.Changes(id, parent)
        return a.naiveDiff.Changes(id, idMappings, parent, parentMappings, mountLabel)
    }

    // AUFS doesn't have snapshots, so we need to get changes from all parent
@@ -689,3 +701,9 @@ func useDirperm() bool {
    })
    return enableDirperm
}

// UpdateLayerIDMap updates ID mappings in a layer from matching the ones
// specified by toContainer to those specified by toHost.
func (a *Driver) UpdateLayerIDMap(id string, toContainer, toHost *idtools.IDMappings, mountLabel string) error {
    return fmt.Errorf("aufs doesn't support changing ID mappings")
}
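A detail worth noting in the aufs hunks: Diff, applyDiff, DiffSize, ApplyDiff and Changes now take per-call *idtools.IDMappings instead of reading the driver-wide uidMaps/gidMaps, and each guards against nil by substituting a zero value whose UID/GID lists are empty, i.e. "no remapping". A toy sketch of that guard (stand-in type, not the real idtools package):

package main

import "fmt"

// mappings stands in for idtools.IDMappings.
type mappings struct {
	uids []int
}

func (m *mappings) UIDs() []int { return m.uids }

// diff mirrors the guard used by aufs.Driver.Diff: a nil argument becomes a
// zero value, so UIDs() yields an empty slice and no remapping is applied.
func diff(m *mappings) []int {
	if m == nil {
		m = &mappings{}
	}
	return m.UIDs()
}

func main() {
	fmt.Println(diff(nil))                       // []
	fmt.Println(diff(&mappings{uids: []int{1}})) // [1]
}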
2
vendor/github.com/containers/storage/drivers/btrfs/btrfs.go
generated
vendored
@@ -90,7 +90,7 @@ func Init(home string, options []string, uidMaps, gidMaps []idtools.IDMap) (grap
        }
    }

    return graphdriver.NewNaiveDiffDriver(driver, uidMaps, gidMaps), nil
    return graphdriver.NewNaiveDiffDriver(driver, graphdriver.NewNaiveLayerIDMapUpdater(driver)), nil
}

func parseOptions(opt []string) (btrfsOptions, bool, error) {
167
vendor/github.com/containers/storage/drivers/chown.go
generated
vendored
Normal file
@@ -0,0 +1,167 @@
package graphdriver

import (
    "bytes"
    "encoding/json"
    "fmt"
    "os"
    "path/filepath"
    "syscall"

    "github.com/containers/storage/pkg/idtools"
    "github.com/containers/storage/pkg/reexec"
)

const (
    chownByMapsCmd = "storage-chown-by-maps"
)

func init() {
    reexec.Register(chownByMapsCmd, chownByMapsMain)
}

func chownByMapsMain() {
    if len(os.Args) < 2 {
        fmt.Fprintf(os.Stderr, "requires mapping configuration on stdin and directory path")
        os.Exit(1)
    }
    // Read and decode our configuration.
    discreteMaps := [4][]idtools.IDMap{}
    config := bytes.Buffer{}
    if _, err := config.ReadFrom(os.Stdin); err != nil {
        fmt.Fprintf(os.Stderr, "error reading configuration: %v", err)
        os.Exit(1)
    }
    if err := json.Unmarshal(config.Bytes(), &discreteMaps); err != nil {
        fmt.Fprintf(os.Stderr, "error decoding configuration: %v", err)
        os.Exit(1)
    }
    // Try to chroot. This may not be possible, and on some systems that
    // means we just Chdir() to the directory, so from here on we should be
    // using relative paths.
    if err := chrootOrChdir(os.Args[1]); err != nil {
        fmt.Fprintf(os.Stderr, "error chrooting to %q: %v", os.Args[1], err)
        os.Exit(1)
    }
    // Build the mapping objects.
    toContainer := idtools.NewIDMappingsFromMaps(discreteMaps[0], discreteMaps[1])
    if len(toContainer.UIDs()) == 0 && len(toContainer.GIDs()) == 0 {
        toContainer = nil
    }
    toHost := idtools.NewIDMappingsFromMaps(discreteMaps[2], discreteMaps[3])
    if len(toHost.UIDs()) == 0 && len(toHost.GIDs()) == 0 {
        toHost = nil
    }
    chown := func(path string, info os.FileInfo, err error) error {
        if err != nil {
            return fmt.Errorf("error walking to %q: %v", path, err)
        }
        sysinfo := info.Sys()
        if st, ok := sysinfo.(*syscall.Stat_t); ok {
            // Map an on-disk UID/GID pair from host to container
            // using the first map, then back to the host using the
            // second map. Skip that first step if they're 0, to
            // compensate for cases where a parent layer should
            // have had a mapped value, but didn't.
            uid, gid := int(st.Uid), int(st.Gid)
            if toContainer != nil {
                pair := idtools.IDPair{
                    UID: uid,
                    GID: gid,
                }
                mappedUid, mappedGid, err := toContainer.ToContainer(pair)
                if err != nil {
                    if (uid != 0) || (gid != 0) {
                        return fmt.Errorf("error mapping host ID pair %#v for %q to container: %v", pair, path, err)
                    }
                    mappedUid, mappedGid = uid, gid
                }
                uid, gid = mappedUid, mappedGid
            }
            if toHost != nil {
                pair := idtools.IDPair{
                    UID: uid,
                    GID: gid,
                }
                mappedPair, err := toHost.ToHost(pair)
                if err != nil {
                    return fmt.Errorf("error mapping container ID pair %#v for %q to host: %v", pair, path, err)
                }
                uid, gid = mappedPair.UID, mappedPair.GID
            }
            if uid != int(st.Uid) || gid != int(st.Gid) {
                // Make the change.
                if err := syscall.Lchown(path, uid, gid); err != nil {
                    return fmt.Errorf("%s: chown(%q): %v", os.Args[0], path, err)
                }
            }
        }
        return nil
    }
    if err := filepath.Walk(".", chown); err != nil {
        fmt.Fprintf(os.Stderr, "error during chown: %v", err)
        os.Exit(1)
    }
    os.Exit(0)
}

// ChownPathByMaps walks the filesystem tree, changing the ownership
// information using the toContainer and toHost mappings, using them to replace
// on-disk owner UIDs and GIDs which are "host" values in the first map with
// UIDs and GIDs for "host" values from the second map which correspond to the
// same "container" IDs.
func ChownPathByMaps(path string, toContainer, toHost *idtools.IDMappings) error {
    if toContainer == nil {
        toContainer = &idtools.IDMappings{}
    }
    if toHost == nil {
        toHost = &idtools.IDMappings{}
    }

    config, err := json.Marshal([4][]idtools.IDMap{toContainer.UIDs(), toContainer.GIDs(), toHost.UIDs(), toHost.GIDs()})
    if err != nil {
        return err
    }
    cmd := reexec.Command(chownByMapsCmd, path)
    cmd.Stdin = bytes.NewReader(config)
    output, err := cmd.CombinedOutput()
    if len(output) > 0 && err != nil {
        return fmt.Errorf("%v: %s", err, string(output))
    }
    if err != nil {
        return err
    }
    if len(output) > 0 {
        return fmt.Errorf("%s", string(output))
    }

    return nil
}

type naiveLayerIDMapUpdater struct {
    ProtoDriver
}

// NewNaiveLayerIDMapUpdater wraps the ProtoDriver in a LayerIDMapUpdater that
// uses ChownPathByMaps to update the ownerships in a layer's filesystem tree.
func NewNaiveLayerIDMapUpdater(driver ProtoDriver) LayerIDMapUpdater {
    return &naiveLayerIDMapUpdater{ProtoDriver: driver}
}

// UpdateLayerIDMap walks the layer's filesystem tree, changing the ownership
// information using the toContainer and toHost mappings, using them to replace
// on-disk owner UIDs and GIDs which are "host" values in the first map with
// UIDs and GIDs for "host" values from the second map which correspond to the
// same "container" IDs.
func (n *naiveLayerIDMapUpdater) UpdateLayerIDMap(id string, toContainer, toHost *idtools.IDMappings, mountLabel string) error {
    driver := n.ProtoDriver
    layerFs, err := driver.Get(id, mountLabel)
    if err != nil {
        return err
    }
    defer func() {
        driver.Put(id)
    }()

    return ChownPathByMaps(layerFs, toContainer, toHost)
}
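chown.go relies on the reexec pattern: ChownPathByMaps serializes the four ID maps to JSON, re-executes the current binary under the registered name "storage-chown-by-maps", and the child chroots into the layer before walking it, so a symlink inside the layer cannot redirect the chown onto host paths. A minimal sketch of the pattern itself, using the same vendored reexec package (the "demo-child" name is made up for the example):

package main

import (
	"fmt"
	"os"

	"github.com/containers/storage/pkg/reexec"
)

func init() {
	// Registered handlers run when the binary is re-executed with this
	// name as argv[0].
	reexec.Register("demo-child", func() {
		fmt.Println("child sees args:", os.Args[1:])
		os.Exit(0)
	})
}

func main() {
	if reexec.Init() { // dispatches to a registered handler in the child
		return
	}
	cmd := reexec.Command("demo-child", "/some/layer/path")
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		fmt.Fprintln(os.Stderr, "re-exec failed:", err)
	}
}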
21
vendor/github.com/containers/storage/drivers/chroot_unix.go
generated
vendored
Normal file
@@ -0,0 +1,21 @@
// +build linux darwin freebsd solaris

package graphdriver

import (
    "fmt"
    "os"
    "syscall"
)

// chrootOrChdir() is either a chdir() to the specified path, or a chroot() to the
// specified path followed by chdir() to the new root directory
func chrootOrChdir(path string) error {
    if err := syscall.Chroot(path); err != nil {
        return fmt.Errorf("error chrooting to %q: %v", path, err)
    }
    if err := syscall.Chdir(string(os.PathSeparator)); err != nil {
        return fmt.Errorf("error changing to %q: %v", path, err)
    }
    return nil
}
Some files were not shown because too many files have changed in this diff.