Mirror of https://github.com/containers/skopeo.git (synced 2026-01-31 06:19:20 +00:00)

Compare commits: v0.2.0...release-1. (88 commits)
| SHA1 |
|---|
| fcc57b1fa7 |
| 13c69d48fa |
| 67abbb3cef |
| 747abd054f |
| 8a664856cd |
| 3f81ee46c3 |
| 14ba92b2f6 |
| 63085f5bef |
| 091f9248dc |
| dd7dd75334 |
| b70dfae2ae |
| 0bd78a0604 |
| 9e0839c33f |
| 9bafa7e80d |
| 827293a13b |
| 6198daeb2c |
| 161ef5a224 |
| 9e99ad99d4 |
| c36502ce31 |
| f9b0d93ee0 |
| 4eaaf31249 |
| c6b488a82c |
| 7cfc62922f |
| 5284f6d832 |
| ae97c667e3 |
| a2c1d46302 |
| 8b4b954332 |
| c103d65284 |
| c5183d0e34 |
| 16b435257b |
| 35f3595d02 |
| 0ee81dc9fe |
| 805885091f |
| 97ec6873fa |
| d16cd39939 |
| 7439f94e22 |
| 443380731e |
| 56c6325ba0 |
| 0ae9db5dd6 |
| 677c29bf24 |
| 72376c4144 |
| 322625eeca |
| 9c1936fd07 |
| 3a94432e42 |
| ce1f807aa0 |
| a51af64dd9 |
| a31d6069dc |
| 96353f2b64 |
| 2330455c8d |
| 91a88de6a1 |
| 2afe7a3e1e |
| bec7f6977e |
| 60ecaffbe8 |
| dcaee948d3 |
| 2fe7087d52 |
| bd162028cd |
| a214a305fd |
| 5093d5b5f6 |
| 0d9939dcd4 |
| 1b2de8ec5d |
| ab2300500a |
| fbf061260c |
| 4244d68240 |
| dda31b3d4b |
| 2af172653c |
| 3247c0d229 |
| eb024319de |
| 4ca9b139bb |
| b79a37ead9 |
| 0ec2610f04 |
| 71a14d7df6 |
| 8936e76316 |
| e21d6b3687 |
| a6ab2291ba |
| 8f845aac23 |
| 439ea83081 |
| 42f68c1c76 |
| 8d252f82fd |
| 1ddb736b5a |
| 46fbbbd282 |
| e7a7f018bd |
| 311fc89548 |
| a6abdb8547 |
| 02407d98a5 |
| b230a507e7 |
| 116add9d00 |
| 2415f3fa4d |
| 5f8d3fc639 |
@@ -7,6 +7,8 @@ RUN dnf -y update && dnf install -y make git golang golang-github-cpuguy83-md2ma
# gpgme bindings deps
libassuan-devel gpgme-devel \
gnupg \
# htpasswd for system tests
httpd-tools \
# OpenShift deps
which tar wget hostname util-linux bsdtar socat ethtool device-mapper iptables tree findutils nmap-ncat e2fsprogs xfsprogs lsof docker iproute \
bats jq podman runc \

README.md (188 changed lines)
@@ -7,9 +7,13 @@ skopeo [ as well as the original Docker v2 images.

Skopeo works with API V2 registries such as Docker registries, the Atomic registry, private registries, local directories and local OCI-layout directories. Skopeo does not require a daemon to be running to perform these operations which consist of:
Skopeo works with API V2 container image registries such as [docker.io](https://docker.io) and [quay.io](https://quay.io) registries, private registries, local directories and local OCI-layout directories. Skopeo can perform operations which consist of:

* Copying an image from and to various storage mechanisms.
For example you can copy images from one registry to another, without requiring privilege.
@@ -20,16 +24,16 @@ Skopeo works with API V2 registries such as Docker registries, the Atomic regist
Skopeo operates on the following image and repository types:

* containers-storage:docker-reference
An image located in a local containers/storage image store. Location and image store specified in /etc/containers/storage.conf
An image located in a local containers/storage image store. Both the location and image store are specified in /etc/containers/storage.conf. (This is the backend for [Podman](https://podman.io), [CRI-O](https://cri-o.io), [Buildah](https://buildah.io) and friends)

* dir:path
An existing local directory path storing the manifest, layer tarballs and signatures as individual files. This is a non-standardized format, primarily useful for debugging or noninvasive container inspection.

* docker://docker-reference
An image in a registry implementing the "Docker Registry HTTP API V2". By default, uses the authorization state in $HOME/.docker/config.json, which is set e.g. using (docker login).
An image in a registry implementing the "Docker Registry HTTP API V2". By default, uses the authorization state in `$XDG_RUNTIME_DIR/containers/auth.json`, which is set using `skopeo login`.

* docker-archive:path[:docker-reference]
An image is stored in the `docker save` formated file. docker-reference is only used when creating such a file, and it must not contain a digest.
An image is stored in a `docker save`-formatted file. docker-reference is only used when creating such a file, and it must not contain a digest.

* docker-daemon:docker-reference
An image docker-reference stored in the docker daemon internal storage. docker-reference must contain either a tag or a digest. Alternatively, when reading images, the format can also be docker-daemon:algo:digest (an image ID).
@@ -37,134 +41,150 @@ Skopeo works with API V2 registries such as Docker registries, the Atomic regist
* oci:path:tag
An image tag in a directory compliant with "Open Container Image Layout Specification" at path.

Inspecting a repository
-
`skopeo` is able to _inspect_ a repository on a Docker registry and fetch images layers.
## Inspecting a repository
`skopeo` is able to _inspect_ a repository on a container registry and fetch images layers.
The _inspect_ command fetches the repository's manifest and it is able to show you a `docker inspect`-like
json output about a whole repository or a tag. This tool, in contrast to `docker inspect`, helps you gather useful information about
a repository or a tag before pulling it (using disk space). The inspect command can show you which tags are available for the given
repository, the labels the image has, the creation date and operating system of the image and more.


Examples:
```sh
# show properties of fedora:latest
$ skopeo inspect docker://docker.io/fedora

#### Show properties of fedora:latest
```console
$ skopeo inspect docker://registry.fedoraproject.org/fedora:latest
{
"Name": "docker.io/library/fedora",
"Tag": "latest",
"Digest": "sha256:cfd8f071bf8da7a466748f522406f7ae5908d002af1b1a1c0dcf893e183e5b32",
"Name": "registry.fedoraproject.org/fedora",
"Digest": "sha256:655721ff613ee766a4126cb5e0d5ae81598e1b0c3bcf7017c36c4d72cb092fe9",
"RepoTags": [
"20",
"21",
"22",
"23",
"heisenbug",
"latest",
"rawhide"
"24",
"25",
"26-modular",
...
],
"Created": "2016-03-04T18:40:02.92155334Z",
"DockerVersion": "1.9.1",
"Labels": {},
"Created": "2020-04-29T06:48:16Z",
"DockerVersion": "1.10.1",
"Labels": {
"license": "MIT",
"name": "fedora",
"vendor": "Fedora Project",
"version": "32"
},
"Architecture": "amd64",
"Os": "linux",
"Layers": [
"sha256:236608c7b546e2f4e7223526c74fc71470ba06d46ec82aeb402e704bfdee02a2",
"sha256:a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4"
"sha256:3088721d7dbf674fc0be64cd3cf00c25aab921cacf35fa0e7b1578500a3e1653"
],
"Env": [
"DISTTAG=f32container",
"FGC=f32",
"container=oci"
]
}

# show unverifed image's digest
$ skopeo inspect docker://docker.io/fedora:rawhide | jq '.Digest'
"sha256:905b4846938c8aef94f52f3e41a11398ae5b40f5855fb0e40ed9c157e721d7f8"
```

Copying images
-
`skopeo` can copy container images between various storage mechanisms, including:
* Docker distribution based registries
#### Show container configuration from `fedora:latest`

- The Docker Hub, OpenShift, GCR, Artifactory, Quay ...
```console
$ skopeo inspect --config docker://registry.fedoraproject.org/fedora:latest | jq
{
"created": "2020-04-29T06:48:16Z",
"architecture": "amd64",
"os": "linux",
"config": {
"Env": [
"DISTTAG=f32container",
"FGC=f32",
"container=oci"
],
"Cmd": [
"/bin/bash"
],
"Labels": {
"license": "MIT",
"name": "fedora",
"vendor": "Fedora Project",
"version": "32"
}
},
"rootfs": {
"type": "layers",
"diff_ids": [
"sha256:a4c0fa2b217d3fd63d51e55a6fd59432e543d499c0df2b1acd48fbe424f2ddd1"
]
},
"history": [
{
"created": "2020-04-29T06:48:16Z",
"comment": "Created by Image Factory"
}
]
}
```
#### Show unverifed image's digest
```console
$ skopeo inspect docker://registry.fedoraproject.org/fedora:latest | jq '.Digest'
"sha256:655721ff613ee766a4126cb5e0d5ae81598e1b0c3bcf7017c36c4d72cb092fe9"
```

## Copying images

`skopeo` can copy container images between various storage mechanisms, including:
* Container registries

- The Quay, Docker Hub, OpenShift, GCR, Artifactory ...

* Container Storage backends

- Docker daemon storage
- github.com/containers/storage (Backend for [Podman](https://podman.io), [CRI-O](https://cri-o.io), [Buildah](https://buildah.io) and friends)

- github.com/containers/storage (Backend for CRI-O, Buildah and friends)
- Docker daemon storage

* Local directories

* Local OCI-layout directories

```sh
$ skopeo copy docker://busybox:1-glibc atomic:myns/unsigned:streaming
$ skopeo copy docker://busybox:latest dir:existingemptydirectory
$ skopeo copy docker://busybox:latest oci:busybox_ocilayout:latest
```console
$ skopeo copy docker://quay.io/buildah/stable docker://registry.internal.company.com/buildah
$ skopeo copy oci:busybox_ocilayout:latest dir:existingemptydirectory
```

Deleting images
-
For example,
```sh
## Deleting images
```console
$ skopeo delete docker://localhost:5000/imagename:latest
```

Private registries with authentication
-
When interacting with private registries, `skopeo` first looks for `--creds` (for `skopeo inspect|delete`) or `--src-creds|--dest-creds` (for `skopeo copy`) flags. If those aren't provided, it looks for the Docker's cli config file (usually located at `$HOME/.docker/config.json`) to get the credentials needed to authenticate. The ultimate fallback, as Docker does, is to provide an empty authentication when interacting with those registries.
## Authenticating to a registry

Examples:
```sh
$ cat /home/runcom/.docker/config.json
{
"auths": {
"myregistrydomain.com:5000": {
"auth": "dGVzdHVzZXI6dGVzdHBhc3N3b3Jk",
"email": "stuf@ex.cm"
}
}
}
#### Private registries with authentication
skopeo uses credentials from the --creds (for skopeo inspect|delete) or --src-creds|--dest-creds (for skopeo copy) flags, if set; otherwise it uses configuration set by skopeo login, podman login, buildah login, or docker login.

# we can see I'm already authenticated via docker login so everything will be fine
```console
$ skopeo login --user USER docker://myregistrydomain.com:5000
Password:
$ skopeo inspect docker://myregistrydomain.com:5000/busybox
{"Tag":"latest","Digest":"sha256:473bb2189d7b913ed7187a33d11e743fdc2f88931122a44d91a301b64419f092","RepoTags":["latest"],"Comment":"","Created":"2016-01-15T18:06:41.282540103Z","ContainerConfig":{"Hostname":"aded96b43f48","Domainname":"","User":"","AttachStdin":false,"AttachStdout":false,"AttachStderr":false,"Tty":false,"OpenStdin":false,"StdinOnce":false,"Env":null,"Cmd":["/bin/sh","-c","#(nop) CMD [\"sh\"]"],"Image":"9e77fef7a1c9f989988c06620dabc4020c607885b959a2cbd7c2283c91da3e33","Volumes":null,"WorkingDir":"","Entrypoint":null,"OnBuild":null,"Labels":null},"DockerVersion":"1.8.3","Author":"","Config":{"Hostname":"aded96b43f48","Domainname":"","User":"","AttachStdin":false,"AttachStdout":false,"AttachStderr":false,"Tty":false,"OpenStdin":false,"StdinOnce":false,"Env":null,"Cmd":["sh"],"Image":"9e77fef7a1c9f989988c06620dabc4020c607885b959a2cbd7c2283c91da3e33","Volumes":null,"WorkingDir":"","Entrypoint":null,"OnBuild":null,"Labels":null},"Architecture":"amd64","Os":"linux"}
$ skopeo logout docker://myregistrydomain.com:5000
```

# let's try now to fake a non existent Docker's config file
$ cat /home/runcom/.docker/config.json
{}
#### Using --creds directly

$ skopeo inspect docker://myregistrydomain.com:5000/busybox
FATA[0000] unauthorized: authentication required

# passing --creds - we can see that everything goes fine
```console
$ skopeo inspect --creds=testuser:testpassword docker://myregistrydomain.com:5000/busybox
{"Tag":"latest","Digest":"sha256:473bb2189d7b913ed7187a33d11e743fdc2f88931122a44d91a301b64419f092","RepoTags":["latest"],"Comment":"","Created":"2016-01-15T18:06:41.282540103Z","ContainerConfig":{"Hostname":"aded96b43f48","Domainname":"","User":"","AttachStdin":false,"AttachStdout":false,"AttachStderr":false,"Tty":false,"OpenStdin":false,"StdinOnce":false,"Env":null,"Cmd":["/bin/sh","-c","#(nop) CMD [\"sh\"]"],"Image":"9e77fef7a1c9f989988c06620dabc4020c607885b959a2cbd7c2283c91da3e33","Volumes":null,"WorkingDir":"","Entrypoint":null,"OnBuild":null,"Labels":null},"DockerVersion":"1.8.3","Author":"","Config":{"Hostname":"aded96b43f48","Domainname":"","User":"","AttachStdin":false,"AttachStdout":false,"AttachStderr":false,"Tty":false,"OpenStdin":false,"StdinOnce":false,"Env":null,"Cmd":["sh"],"Image":"9e77fef7a1c9f989988c06620dabc4020c607885b959a2cbd7c2283c91da3e33","Volumes":null,"WorkingDir":"","Entrypoint":null,"OnBuild":null,"Labels":null},"Architecture":"amd64","Os":"linux"}
```

# skopeo copy example:
```console
$ skopeo copy --src-creds=testuser:testpassword docker://myregistrydomain.com:5000/private oci:local_oci_image
```
If your cli config is found but it doesn't contain the necessary credentials for the queried registry
you'll get an error. You can fix this by either logging in (via `docker login`) or providing `--creds` or `--src-creds|--dest-creds`.

Obtaining skopeo
[Obtaining skopeo](./install.md)
-

For a detailed description how to install or build skopeo, see
[install.md](./install.md).

TODO
-
- list all images on registry?
- registry v2 search?
- show repo tags via flag or when reference isn't tagged or digested
- support rkt/appc image spec

NOT TODO
-
- provide a _format_ flag - just use the awesome [jq](https://stedolan.github.io/jq/)

CONTRIBUTING
Contributing
-

Please read the [contribution guide](CONTRIBUTING.md) if you want to collaborate in the project.

SECURITY.md (new file, 3 lines)
@@ -0,0 +1,3 @@
## Security and Disclosure Information Policy for the skopeo Project

The skopeo Project follows the [Security and Disclosure Information Policy](https://github.com/containers/common/blob/master/SECURITY.md) for the Containers Projects.

@@ -11,29 +11,29 @@ import (
"github.com/containers/image/v5/manifest"
"github.com/containers/image/v5/transports"
"github.com/containers/image/v5/transports/alltransports"
"github.com/spf13/cobra"

encconfig "github.com/containers/ocicrypt/config"
enchelpers "github.com/containers/ocicrypt/helpers"
imgspecv1 "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/urfave/cli"
)

type copyOptions struct {
global *globalOptions
srcImage *imageOptions
destImage *imageDestOptions
additionalTags cli.StringSlice // For docker-archive: destinations, in addition to the name:tag specified as destination, also add these
removeSignatures bool // Do not copy signatures from the source image
signByFingerprint string // Sign the image using a GPG key with the specified fingerprint
format optionalString // Force conversion of the image to a specified format
quiet bool // Suppress output information when copying images
all bool // Copy all of the images if the source is a list
encryptLayer cli.IntSlice // The list of layers to encrypt
encryptionKeys cli.StringSlice // Keys needed to encrypt the image
decryptionKeys cli.StringSlice // Keys needed to decrypt the image
additionalTags []string // For docker-archive: destinations, in addition to the name:tag specified as destination, also add these
removeSignatures bool // Do not copy signatures from the source image
signByFingerprint string // Sign the image using a GPG key with the specified fingerprint
format optionalString // Force conversion of the image to a specified format
quiet bool // Suppress output information when copying images
all bool // Copy all of the images if the source is a list
encryptLayer []int // The list of layers to encrypt
encryptionKeys []string // Keys needed to encrypt the image
decryptionKeys []string // Keys needed to decrypt the image
}

func copyCmd(global *globalOptions) cli.Command {
|
||||
func copyCmd(global *globalOptions) *cobra.Command {
|
||||
sharedFlags, sharedOpts := sharedImageFlags()
|
||||
srcFlags, srcOpts := imageFlags(global, sharedOpts, "src-", "screds")
|
||||
destFlags, destOpts := imageDestFlags(global, sharedOpts, "dest-", "dcreds")
|
||||
@@ -41,70 +41,34 @@ func copyCmd(global *globalOptions) cli.Command {
|
||||
srcImage: srcOpts,
|
||||
destImage: destOpts,
|
||||
}
|
||||
cmd := &cobra.Command{
|
||||
Use: "copy [command options] SOURCE-IMAGE DESTINATION-IMAGE",
|
||||
Short: "Copy an IMAGE-NAME from one location to another",
|
||||
Long: fmt.Sprintf(`Container "IMAGE-NAME" uses a "transport":"details" format.
|
||||
|
||||
return cli.Command{
|
||||
Name: "copy",
|
||||
Usage: "Copy an IMAGE-NAME from one location to another",
|
||||
Description: fmt.Sprintf(`
|
||||
Supported transports:
|
||||
%s
|
||||
|
||||
Container "IMAGE-NAME" uses a "transport":"details" format.
|
||||
|
||||
Supported transports:
|
||||
%s
|
||||
|
||||
See skopeo(1) section "IMAGE NAMES" for the expected format
|
||||
`, strings.Join(transports.ListNames(), ", ")),
|
||||
ArgsUsage: "SOURCE-IMAGE DESTINATION-IMAGE",
|
||||
Action: commandAction(opts.run),
|
||||
// FIXME: Do we need to namespace the GPG aspect?
|
||||
Flags: append(append(append([]cli.Flag{
|
||||
cli.StringSliceFlag{
|
||||
Name: "additional-tag",
|
||||
Usage: "additional tags (supports docker-archive)",
|
||||
Value: &opts.additionalTags, // Surprisingly StringSliceFlag does not support Destination:, but modifies Value: in place.
|
||||
},
|
||||
cli.BoolFlag{
|
||||
Name: "quiet, q",
|
||||
Usage: "Suppress output information when copying images",
|
||||
Destination: &opts.quiet,
|
||||
},
|
||||
cli.BoolFlag{
|
||||
Name: "all, a",
|
||||
Usage: "Copy all images if SOURCE-IMAGE is a list",
|
||||
Destination: &opts.all,
|
||||
},
|
||||
cli.BoolFlag{
|
||||
Name: "remove-signatures",
|
||||
Usage: "Do not copy signatures from SOURCE-IMAGE",
|
||||
Destination: &opts.removeSignatures,
|
||||
},
|
||||
cli.StringFlag{
|
||||
Name: "sign-by",
|
||||
Usage: "Sign the image using a GPG key with the specified `FINGERPRINT`",
|
||||
Destination: &opts.signByFingerprint,
|
||||
},
|
||||
cli.GenericFlag{
|
||||
Name: "format, f",
|
||||
Usage: "`MANIFEST TYPE` (oci, v2s1, or v2s2) to use when saving image to directory using the 'dir:' transport (default is manifest type of source)",
|
||||
Value: newOptionalStringValue(&opts.format),
|
||||
},
|
||||
cli.StringSliceFlag{
|
||||
Name: "encryption-key",
|
||||
Usage: "*Experimental* key with the encryption protocol to use needed to encrypt the image (e.g. jwe:/path/to/key.pem)",
|
||||
Value: &opts.encryptionKeys,
|
||||
},
|
||||
cli.IntSliceFlag{
|
||||
Name: "encrypt-layer",
|
||||
Usage: "*Experimental* the 0-indexed layer indices, with support for negative indexing (e.g. 0 is the first layer, -1 is the last layer)",
|
||||
Value: &opts.encryptLayer,
|
||||
},
|
||||
cli.StringSliceFlag{
|
||||
Name: "decryption-key",
|
||||
Usage: "*Experimental* key needed to decrypt the image",
|
||||
Value: &opts.decryptionKeys,
|
||||
},
|
||||
}, sharedFlags...), srcFlags...), destFlags...),
|
||||
See skopeo(1) section "IMAGE NAMES" for the expected format
|
||||
`, strings.Join(transports.ListNames(), ", ")),
|
||||
RunE: commandAction(opts.run),
|
||||
Example: `skopeo copy --sign-by dev@example.com container-storage:example/busybox:streaming docker://example/busybox:gold`,
|
||||
}
|
||||
adjustUsage(cmd)
|
||||
flags := cmd.Flags()
|
||||
flags.AddFlagSet(&sharedFlags)
|
||||
flags.AddFlagSet(&srcFlags)
|
||||
flags.AddFlagSet(&destFlags)
|
||||
flags.StringSliceVar(&opts.additionalTags, "additional-tag", []string{}, "additional tags (supports docker-archive)")
|
||||
flags.BoolVarP(&opts.quiet, "quiet", "q", false, "Suppress output information when copying images")
|
||||
flags.BoolVarP(&opts.all, "all", "a", false, "Copy all images if SOURCE-IMAGE is a list")
|
||||
flags.BoolVar(&opts.removeSignatures, "remove-signatures", false, "Do not copy signatures from SOURCE-IMAGE")
|
||||
flags.StringVar(&opts.signByFingerprint, "sign-by", "", "Sign the image using a GPG key with the specified `FINGERPRINT`")
|
||||
flags.VarP(newOptionalStringValue(&opts.format), "format", "f", `MANIFEST TYPE (oci, v2s1, or v2s2) to use when saving image to directory using the 'dir:' transport (default is manifest type of source)`)
|
||||
flags.StringSliceVar(&opts.encryptionKeys, "encryption-key", []string{}, "*Experimental* key with the encryption protocol to use needed to encrypt the image (e.g. jwe:/path/to/key.pem)")
|
||||
flags.IntSliceVar(&opts.encryptLayer, "encrypt-layer", []int{}, "*Experimental* the 0-indexed layer indices, with support for negative indexing (e.g. 0 is the first layer, -1 is the last layer)")
|
||||
flags.StringSliceVar(&opts.decryptionKeys, "decryption-key", []string{}, "*Experimental* key needed to decrypt the image")
|
||||
return cmd
|
||||
}
|
||||
|
||||
func (opts *copyOptions) run(args []string, stdout io.Writer) error {
|
||||
@@ -178,7 +142,7 @@ func (opts *copyOptions) run(args []string, stdout io.Writer) error {
|
||||
imageListSelection = copy.CopyAllImages
|
||||
}
|
||||
|
||||
if len(opts.encryptionKeys.Value()) > 0 && len(opts.decryptionKeys.Value()) > 0 {
|
||||
if len(opts.encryptionKeys) > 0 && len(opts.decryptionKeys) > 0 {
|
||||
return fmt.Errorf("--encryption-key and --decryption-key cannot be specified together")
|
||||
}
|
||||
|
||||
@@ -186,15 +150,15 @@ func (opts *copyOptions) run(args []string, stdout io.Writer) error {
|
||||
var encConfig *encconfig.EncryptConfig
|
||||
var decConfig *encconfig.DecryptConfig
|
||||
|
||||
if len(opts.encryptLayer.Value()) > 0 && len(opts.encryptionKeys.Value()) == 0 {
|
||||
if len(opts.encryptLayer) > 0 && len(opts.encryptionKeys) == 0 {
|
||||
return fmt.Errorf("--encrypt-layer can only be used with --encryption-key")
|
||||
}
|
||||
|
||||
if len(opts.encryptionKeys.Value()) > 0 {
|
||||
if len(opts.encryptionKeys) > 0 {
|
||||
// encryption
|
||||
p := opts.encryptLayer.Value()
|
||||
p := opts.encryptLayer
|
||||
encLayers = &p
|
||||
encryptionKeys := opts.encryptionKeys.Value()
|
||||
encryptionKeys := opts.encryptionKeys
|
||||
ecc, err := enchelpers.CreateCryptoConfig(encryptionKeys, []string{})
|
||||
if err != nil {
|
||||
return fmt.Errorf("Invalid encryption keys: %v", err)
|
||||
@@ -203,9 +167,9 @@ func (opts *copyOptions) run(args []string, stdout io.Writer) error {
|
||||
encConfig = cc.EncryptConfig
|
||||
}
|
||||
|
||||
if len(opts.decryptionKeys.Value()) > 0 {
|
||||
if len(opts.decryptionKeys) > 0 {
|
||||
// decryption
|
||||
decryptionKeys := opts.decryptionKeys.Value()
|
||||
decryptionKeys := opts.decryptionKeys
|
||||
dcc, err := enchelpers.CreateCryptoConfig([]string{}, decryptionKeys)
|
||||
if err != nil {
|
||||
return fmt.Errorf("Invalid decryption keys: %v", err)
|
||||
|
||||
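Throughout this migration the commands keep their `run(args []string, stdout io.Writer) error` methods and hand them to cobra via `RunE: commandAction(opts.run)`. `commandAction` itself is not part of this diff; the following is only an assumed sketch of an adapter consistent with those signatures, not the project's actual helper:

```go
// Assumed sketch only: an adapter in the spirit of commandAction (the real
// helper is not shown in this diff), bridging a skopeo-style handler
// signature to cobra's RunE signature.
package main

import (
	"fmt"
	"io"

	"github.com/spf13/cobra"
)

// handlerFunc is the signature the per-command run methods above use.
type handlerFunc func(args []string, stdout io.Writer) error

// commandActionSketch wraps a handler so it can be assigned to cobra.Command.RunE.
func commandActionSketch(handler handlerFunc) func(cmd *cobra.Command, args []string) error {
	return func(cmd *cobra.Command, args []string) error {
		// cobra hands over the positional arguments; the writer defaults to
		// os.Stdout unless a test overrides it via cmd.SetOut.
		return handler(args, cmd.OutOrStdout())
	}
}

func main() {
	cmd := &cobra.Command{
		Use: "demo",
		RunE: commandActionSketch(func(args []string, stdout io.Writer) error {
			_, err := fmt.Fprintln(stdout, "args:", args)
			return err
		}),
	}
	_ = cmd.Execute()
}
```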
@@ -8,7 +8,7 @@ import (
|
||||
|
||||
"github.com/containers/image/v5/transports"
|
||||
"github.com/containers/image/v5/transports/alltransports"
|
||||
"github.com/urfave/cli"
|
||||
"github.com/spf13/cobra"
|
||||
)
|
||||
|
||||
type deleteOptions struct {
|
||||
@@ -16,28 +16,29 @@ type deleteOptions struct {
|
||||
image *imageOptions
|
||||
}
|
||||
|
||||
func deleteCmd(global *globalOptions) cli.Command {
|
||||
func deleteCmd(global *globalOptions) *cobra.Command {
|
||||
sharedFlags, sharedOpts := sharedImageFlags()
|
||||
imageFlags, imageOpts := imageFlags(global, sharedOpts, "", "")
|
||||
opts := deleteOptions{
|
||||
global: global,
|
||||
image: imageOpts,
|
||||
}
|
||||
return cli.Command{
|
||||
Name: "delete",
|
||||
Usage: "Delete image IMAGE-NAME",
|
||||
Description: fmt.Sprintf(`
|
||||
Delete an "IMAGE_NAME" from a transport
|
||||
|
||||
Supported transports:
|
||||
%s
|
||||
|
||||
See skopeo(1) section "IMAGE NAMES" for the expected format
|
||||
`, strings.Join(transports.ListNames(), ", ")),
|
||||
ArgsUsage: "IMAGE-NAME",
|
||||
Action: commandAction(opts.run),
|
||||
Flags: append(sharedFlags, imageFlags...),
|
||||
cmd := &cobra.Command{
|
||||
Use: "delete [command options] IMAGE-NAME",
|
||||
Short: "Delete image IMAGE-NAME",
|
||||
Long: fmt.Sprintf(`Delete an "IMAGE_NAME" from a transport
|
||||
Supported transports:
|
||||
%s
|
||||
See skopeo(1) section "IMAGE NAMES" for the expected format
|
||||
`, strings.Join(transports.ListNames(), ", ")),
|
||||
RunE: commandAction(opts.run),
|
||||
Example: `skopeo delete docker://registry.example.com/example/pause:latest`,
|
||||
}
|
||||
adjustUsage(cmd)
|
||||
flags := cmd.Flags()
|
||||
flags.AddFlagSet(&sharedFlags)
|
||||
flags.AddFlagSet(&imageFlags)
|
||||
return cmd
|
||||
}
|
||||
|
||||
func (opts *deleteOptions) run(args []string, stdout io.Writer) error {
|
||||
|
||||
@@ -3,7 +3,7 @@ package main
import (
"strconv"

"github.com/urfave/cli"
"github.com/spf13/pflag"
)

// optionalBool is a boolean with a separate presence flag.
@@ -15,10 +15,18 @@ type optionalBool struct {
// optionalBool is a cli.Generic == flag.Value implementation equivalent to
// the one underlying flag.Bool, except that it records whether the flag has been set.
// This is distinct from optionalBool to (pretend to) force callers to use
// newOptionalBool
// optionalBoolFlag
type optionalBoolValue optionalBool

func newOptionalBoolValue(p *optionalBool) cli.Generic {
func optionalBoolFlag(fs *pflag.FlagSet, p *optionalBool, name, usage string) *pflag.Flag {
flag := fs.VarPF(internalNewOptionalBoolValue(p), name, "", usage)
flag.NoOptDefVal = "true"
return flag
}

// WARNING: Do not directly use this method to define optionalBool flag.
// Caller should use optionalBoolFlag
func internalNewOptionalBoolValue(p *optionalBool) pflag.Value {
p.present = false
return (*optionalBoolValue)(p)
}
@@ -40,6 +48,10 @@ func (ob *optionalBoolValue) String() string {
return strconv.FormatBool(ob.value)
}

func (ob *optionalBoolValue) Type() string {
return "bool"
}

func (ob *optionalBoolValue) IsBoolFlag() bool {
return true
}
@@ -56,7 +68,7 @@ type optionalString struct {
// newoptionalString
type optionalStringValue optionalString

func newOptionalStringValue(p *optionalString) cli.Generic {
func newOptionalStringValue(p *optionalString) pflag.Value {
p.present = false
return (*optionalStringValue)(p)
}
@@ -74,6 +86,10 @@ func (ob *optionalStringValue) String() string {
return ob.value
}

func (ob *optionalStringValue) Type() string {
return "string"
}

// optionalInt is a int with a separate presence flag.
type optionalInt struct {
present bool
@@ -86,7 +102,7 @@ type optionalInt struct {
// newoptionalIntValue
type optionalIntValue optionalInt

func newOptionalIntValue(p *optionalInt) cli.Generic {
func newOptionalIntValue(p *optionalInt) pflag.Value {
p.present = false
return (*optionalIntValue)(p)
}
@@ -107,3 +123,7 @@ func (ob *optionalIntValue) String() string {
}
return strconv.Itoa(int(ob.value))
}

func (ob *optionalIntValue) Type() string {
return "int"
}
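The pflag-based helpers above implement a tri-state boolean: the value records whether the flag was set at all, and `NoOptDefVal = "true"` lets a bare `--tls-verify` behave like `--tls-verify=true` while omitting the flag leaves it unset. A minimal, self-contained illustration of that technique, using invented `demo`/`verify` names rather than skopeo code:

```go
// Tri-state boolean flag sketch: present tracks whether the flag was given,
// and NoOptDefVal makes a bare "--verify" equivalent to "--verify=true".
package main

import (
	"fmt"
	"strconv"

	"github.com/spf13/pflag"
)

type triBool struct {
	present bool
	value   bool
}

func (t *triBool) Set(s string) error {
	v, err := strconv.ParseBool(s)
	if err != nil {
		return err
	}
	t.value = v
	t.present = true
	return nil
}

func (t *triBool) String() string { return strconv.FormatBool(t.value) }
func (t *triBool) Type() string   { return "bool" }

func main() {
	var verify triBool
	fs := pflag.NewFlagSet("demo", pflag.ExitOnError)
	f := fs.VarPF(&verify, "verify", "", "require verification")
	f.NoOptDefVal = "true" // bare --verify behaves like --verify=true

	_ = fs.Parse([]string{"--verify=false"})
	fmt.Println(verify.present, verify.value) // true false
}
```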
@@ -3,9 +3,9 @@ package main
|
||||
import (
|
||||
"testing"
|
||||
|
||||
"github.com/spf13/cobra"
|
||||
"github.com/stretchr/testify/assert"
|
||||
"github.com/stretchr/testify/require"
|
||||
"github.com/urfave/cli"
|
||||
)
|
||||
|
||||
func TestOptionalBoolSet(t *testing.T) {
|
||||
@@ -34,7 +34,7 @@ func TestOptionalBoolSet(t *testing.T) {
|
||||
{"2", false, false},
|
||||
} {
|
||||
var ob optionalBool
|
||||
v := newOptionalBoolValue(&ob)
|
||||
v := internalNewOptionalBoolValue(&ob)
|
||||
require.False(t, ob.present)
|
||||
err := v.Set(c.input)
|
||||
if c.accepted {
|
||||
@@ -51,30 +51,23 @@ func TestOptionalBoolSet(t *testing.T) {
|
||||
// is not called in any possible situation).
|
||||
var globalOB, commandOB optionalBool
|
||||
actionRun := false
|
||||
app := cli.NewApp()
|
||||
app.EnableBashCompletion = true
|
||||
app.Flags = []cli.Flag{
|
||||
cli.GenericFlag{
|
||||
Name: "global-OB",
|
||||
Value: newOptionalBoolValue(&globalOB),
|
||||
},
|
||||
app := &cobra.Command{
|
||||
Use: "app",
|
||||
}
|
||||
app.Commands = []cli.Command{{
|
||||
Name: "cmd",
|
||||
Flags: []cli.Flag{
|
||||
cli.GenericFlag{
|
||||
Name: "command-OB",
|
||||
Value: newOptionalBoolValue(&commandOB),
|
||||
},
|
||||
},
|
||||
Action: func(*cli.Context) error {
|
||||
optionalBoolFlag(app.PersistentFlags(), &globalOB, "global-OB", "")
|
||||
cmd := &cobra.Command{
|
||||
Use: "cmd",
|
||||
RunE: func(cmd *cobra.Command, args []string) error {
|
||||
assert.False(t, globalOB.present)
|
||||
assert.False(t, commandOB.present)
|
||||
actionRun = true
|
||||
return nil
|
||||
},
|
||||
}}
|
||||
err := app.Run([]string{"app", "cmd"})
|
||||
}
|
||||
optionalBoolFlag(cmd.Flags(), &commandOB, "command-OB", "")
|
||||
app.AddCommand(cmd)
|
||||
app.SetArgs([]string{"cmd"})
|
||||
err := app.Execute()
|
||||
require.NoError(t, err)
|
||||
assert.True(t, actionRun)
|
||||
}
|
||||
@@ -90,7 +83,7 @@ func TestOptionalBoolString(t *testing.T) {
|
||||
{optionalBool{present: false, value: false}, ""},
|
||||
} {
|
||||
var ob optionalBool
|
||||
v := newOptionalBoolValue(&ob)
|
||||
v := internalNewOptionalBoolValue(&ob)
|
||||
ob = c.input
|
||||
res := v.String()
|
||||
assert.Equal(t, c.expected, res)
|
||||
@@ -114,23 +107,21 @@ func TestOptionalBoolIsBoolFlag(t *testing.T) {
|
||||
} {
|
||||
var ob optionalBool
|
||||
actionRun := false
|
||||
app := cli.NewApp()
|
||||
app.Commands = []cli.Command{{
|
||||
Name: "cmd",
|
||||
Flags: []cli.Flag{
|
||||
cli.GenericFlag{
|
||||
Name: "OB",
|
||||
Value: newOptionalBoolValue(&ob),
|
||||
},
|
||||
},
|
||||
Action: func(ctx *cli.Context) error {
|
||||
app := &cobra.Command{Use: "app"}
|
||||
cmd := &cobra.Command{
|
||||
Use: "cmd",
|
||||
RunE: func(cmd *cobra.Command, args []string) error {
|
||||
assert.Equal(t, c.expectedOB, ob)
|
||||
assert.Equal(t, c.expectedArgs, ([]string)(ctx.Args()))
|
||||
assert.Equal(t, c.expectedArgs, args)
|
||||
actionRun = true
|
||||
return nil
|
||||
},
|
||||
}}
|
||||
err := app.Run(append([]string{"app", "cmd"}, c.input...))
|
||||
}
|
||||
optionalBoolFlag(cmd.Flags(), &ob, "OB", "")
|
||||
app.AddCommand(cmd)
|
||||
|
||||
app.SetArgs(append([]string{"cmd"}, c.input...))
|
||||
err := app.Execute()
|
||||
require.NoError(t, err)
|
||||
assert.True(t, actionRun)
|
||||
}
|
||||
@@ -152,30 +143,23 @@ func TestOptionalStringSet(t *testing.T) {
|
||||
// is not called in any possible situation).
|
||||
var globalOS, commandOS optionalString
|
||||
actionRun := false
|
||||
app := cli.NewApp()
|
||||
app.EnableBashCompletion = true
|
||||
app.Flags = []cli.Flag{
|
||||
cli.GenericFlag{
|
||||
Name: "global-OS",
|
||||
Value: newOptionalStringValue(&globalOS),
|
||||
},
|
||||
app := &cobra.Command{
|
||||
Use: "app",
|
||||
}
|
||||
app.Commands = []cli.Command{{
|
||||
Name: "cmd",
|
||||
Flags: []cli.Flag{
|
||||
cli.GenericFlag{
|
||||
Name: "command-OS",
|
||||
Value: newOptionalStringValue(&commandOS),
|
||||
},
|
||||
},
|
||||
Action: func(*cli.Context) error {
|
||||
app.PersistentFlags().Var(newOptionalStringValue(&globalOS), "global-OS", "")
|
||||
cmd := &cobra.Command{
|
||||
Use: "cmd",
|
||||
RunE: func(cmd *cobra.Command, args []string) error {
|
||||
assert.False(t, globalOS.present)
|
||||
assert.False(t, commandOS.present)
|
||||
actionRun = true
|
||||
return nil
|
||||
},
|
||||
}}
|
||||
err := app.Run([]string{"app", "cmd"})
|
||||
}
|
||||
cmd.Flags().Var(newOptionalStringValue(&commandOS), "command-OS", "")
|
||||
app.AddCommand(cmd)
|
||||
app.SetArgs([]string{"cmd"})
|
||||
err := app.Execute()
|
||||
require.NoError(t, err)
|
||||
assert.True(t, actionRun)
|
||||
}
|
||||
@@ -216,23 +200,22 @@ func TestOptionalStringIsBoolFlag(t *testing.T) {
|
||||
} {
|
||||
var os optionalString
|
||||
actionRun := false
|
||||
app := cli.NewApp()
|
||||
app.Commands = []cli.Command{{
|
||||
Name: "cmd",
|
||||
Flags: []cli.Flag{
|
||||
cli.GenericFlag{
|
||||
Name: "OS",
|
||||
Value: newOptionalStringValue(&os),
|
||||
},
|
||||
},
|
||||
Action: func(ctx *cli.Context) error {
|
||||
app := &cobra.Command{
|
||||
Use: "app",
|
||||
}
|
||||
cmd := &cobra.Command{
|
||||
Use: "cmd",
|
||||
RunE: func(cmd *cobra.Command, args []string) error {
|
||||
assert.Equal(t, c.expectedOS, os)
|
||||
assert.Equal(t, c.expectedArgs, ([]string)(ctx.Args()))
|
||||
assert.Equal(t, c.expectedArgs, args)
|
||||
actionRun = true
|
||||
return nil
|
||||
},
|
||||
}}
|
||||
err := app.Run(append([]string{"app", "cmd"}, c.input...))
|
||||
}
|
||||
cmd.Flags().Var(newOptionalStringValue(&os), "OS", "")
|
||||
app.AddCommand(cmd)
|
||||
app.SetArgs(append([]string{"cmd"}, c.input...))
|
||||
err := app.Execute()
|
||||
require.NoError(t, err)
|
||||
assert.True(t, actionRun)
|
||||
}
|
||||
|
||||
@@ -14,7 +14,7 @@ import (
|
||||
digest "github.com/opencontainers/go-digest"
|
||||
"github.com/pkg/errors"
|
||||
"github.com/sirupsen/logrus"
|
||||
"github.com/urfave/cli"
|
||||
"github.com/spf13/cobra"
|
||||
)
|
||||
|
||||
// inspectOutput is the output format of (skopeo inspect), primarily so that we can format it with a simple json.MarshalIndent.
|
||||
@@ -39,39 +39,32 @@ type inspectOptions struct {
|
||||
config bool // Output the raw config blob instead of parsing information about the image
|
||||
}
|
||||
|
||||
func inspectCmd(global *globalOptions) cli.Command {
|
||||
func inspectCmd(global *globalOptions) *cobra.Command {
|
||||
sharedFlags, sharedOpts := sharedImageFlags()
|
||||
imageFlags, imageOpts := imageFlags(global, sharedOpts, "", "")
|
||||
opts := inspectOptions{
|
||||
global: global,
|
||||
image: imageOpts,
|
||||
}
|
||||
return cli.Command{
|
||||
Name: "inspect",
|
||||
Usage: "Inspect image IMAGE-NAME",
|
||||
Description: fmt.Sprintf(`
|
||||
Return low-level information about "IMAGE-NAME" in a registry/transport
|
||||
cmd := &cobra.Command{
|
||||
Use: "inspect [command options] IMAGE-NAME",
|
||||
Short: "Inspect image IMAGE-NAME",
|
||||
Long: fmt.Sprintf(`Return low-level information about "IMAGE-NAME" in a registry/transport
|
||||
Supported transports:
|
||||
%s
|
||||
|
||||
Supported transports:
|
||||
%s
|
||||
|
||||
See skopeo(1) section "IMAGE NAMES" for the expected format
|
||||
`, strings.Join(transports.ListNames(), ", ")),
|
||||
ArgsUsage: "IMAGE-NAME",
|
||||
Flags: append(append([]cli.Flag{
|
||||
cli.BoolFlag{
|
||||
Name: "raw",
|
||||
Usage: "output raw manifest or configuration",
|
||||
Destination: &opts.raw,
|
||||
},
|
||||
cli.BoolFlag{
|
||||
Name: "config",
|
||||
Usage: "output configuration",
|
||||
Destination: &opts.config,
|
||||
},
|
||||
}, sharedFlags...), imageFlags...),
|
||||
Action: commandAction(opts.run),
|
||||
See skopeo(1) section "IMAGE NAMES" for the expected format
|
||||
`, strings.Join(transports.ListNames(), ", ")),
|
||||
RunE: commandAction(opts.run),
|
||||
Example: `skopeo inspect docker://docker.io/fedora`,
|
||||
}
|
||||
adjustUsage(cmd)
|
||||
flags := cmd.Flags()
|
||||
flags.BoolVar(&opts.raw, "raw", false, "output raw manifest or configuration")
|
||||
flags.BoolVar(&opts.config, "config", false, "output configuration")
|
||||
flags.AddFlagSet(&sharedFlags)
|
||||
flags.AddFlagSet(&imageFlags)
|
||||
return cmd
|
||||
}
|
||||
|
||||
func (opts *inspectOptions) run(args []string, stdout io.Writer) (retErr error) {
|
||||
|
||||
@@ -13,7 +13,7 @@ import (
|
||||
"github.com/containers/image/v5/types"
|
||||
"github.com/opencontainers/go-digest"
|
||||
"github.com/pkg/errors"
|
||||
"github.com/urfave/cli"
|
||||
"github.com/spf13/cobra"
|
||||
)
|
||||
|
||||
type layersOptions struct {
|
||||
@@ -21,21 +21,24 @@ type layersOptions struct {
|
||||
image *imageOptions
|
||||
}
|
||||
|
||||
func layersCmd(global *globalOptions) cli.Command {
|
||||
func layersCmd(global *globalOptions) *cobra.Command {
|
||||
sharedFlags, sharedOpts := sharedImageFlags()
|
||||
imageFlags, imageOpts := imageFlags(global, sharedOpts, "", "")
|
||||
opts := layersOptions{
|
||||
global: global,
|
||||
image: imageOpts,
|
||||
}
|
||||
return cli.Command{
|
||||
Name: "layers",
|
||||
Usage: "Get layers of IMAGE-NAME",
|
||||
ArgsUsage: "IMAGE-NAME [LAYER...]",
|
||||
Hidden: true,
|
||||
Action: commandAction(opts.run),
|
||||
Flags: append(sharedFlags, imageFlags...),
|
||||
cmd := &cobra.Command{
|
||||
Hidden: true,
|
||||
Use: "layers [command options] IMAGE-NAME [LAYER...]",
|
||||
Short: "Get layers of IMAGE-NAME",
|
||||
RunE: commandAction(opts.run),
|
||||
}
|
||||
adjustUsage(cmd)
|
||||
flags := cmd.Flags()
|
||||
flags.AddFlagSet(&sharedFlags)
|
||||
flags.AddFlagSet(&imageFlags)
|
||||
return cmd
|
||||
}
|
||||
|
||||
func (opts *layersOptions) run(args []string, stdout io.Writer) (retErr error) {
|
||||
|
||||
@@ -4,15 +4,15 @@ import (
|
||||
"context"
|
||||
"encoding/json"
|
||||
"fmt"
|
||||
"io"
|
||||
"strings"
|
||||
|
||||
"github.com/containers/image/v5/docker"
|
||||
"github.com/containers/image/v5/docker/reference"
|
||||
"github.com/containers/image/v5/transports/alltransports"
|
||||
"github.com/containers/image/v5/types"
|
||||
"github.com/pkg/errors"
|
||||
"github.com/urfave/cli"
|
||||
"strings"
|
||||
|
||||
"io"
|
||||
"github.com/spf13/cobra"
|
||||
)
|
||||
|
||||
// tagListOutput is the output format of (skopeo list-tags), primarily so that we can format it with a simple json.MarshalIndent.
|
||||
@@ -26,7 +26,7 @@ type tagsOptions struct {
|
||||
image *imageOptions
|
||||
}
|
||||
|
||||
func tagsCmd(global *globalOptions) cli.Command {
|
||||
func tagsCmd(global *globalOptions) *cobra.Command {
|
||||
sharedFlags, sharedOpts := sharedImageFlags()
|
||||
imageFlags, imageOpts := dockerImageFlags(global, sharedOpts, "", "")
|
||||
|
||||
@@ -34,22 +34,24 @@ func tagsCmd(global *globalOptions) cli.Command {
|
||||
global: global,
|
||||
image: imageOpts,
|
||||
}
|
||||
cmd := &cobra.Command{
|
||||
Use: "list-tags [command options] REPOSITORY-NAME",
|
||||
Short: "List tags in the transport/repository specified by the REPOSITORY-NAME",
|
||||
Long: `Return the list of tags from the transport/repository "REPOSITORY-NAME"
|
||||
|
||||
return cli.Command{
|
||||
Name: "list-tags",
|
||||
Usage: "List tags in the transport/repository specified by the REPOSITORY-NAME",
|
||||
Description: `
|
||||
Return the list of tags from the transport/repository "REPOSITORY-NAME"
|
||||
|
||||
Supported transports:
|
||||
docker
|
||||
Supported transports:
|
||||
docker
|
||||
|
||||
See skopeo-list-tags(1) section "REPOSITORY NAMES" for the expected format
|
||||
`,
|
||||
ArgsUsage: "REPOSITORY-NAME",
|
||||
Flags: append(sharedFlags, imageFlags...),
|
||||
Action: commandAction(opts.run),
|
||||
See skopeo-list-tags(1) section "REPOSITORY NAMES" for the expected format
|
||||
`,
|
||||
RunE: commandAction(opts.run),
|
||||
Example: `skopeo list-tags docker://docker.io/fedora`,
|
||||
}
|
||||
adjustUsage(cmd)
|
||||
flags := cmd.Flags()
|
||||
flags.AddFlagSet(&sharedFlags)
|
||||
flags.AddFlagSet(&imageFlags)
|
||||
return cmd
|
||||
}
|
||||
|
||||
// Customized version of the alltransports.ParseImageName and docker.ParseReference that does not place a default tag in the reference
|
||||
|
||||
@@ -1,9 +1,10 @@
package main

import (
"testing"

"github.com/containers/image/v5/transports/alltransports"
"github.com/stretchr/testify/assert"
"testing"
)

// Tests the kinds of inputs allowed and expected to the command

cmd/skopeo/login.go (new file, 47 lines)
@@ -0,0 +1,47 @@
package main

import (
"io"
"os"

"github.com/containers/common/pkg/auth"
"github.com/containers/image/v5/types"
"github.com/spf13/cobra"
)

type loginOptions struct {
global *globalOptions
loginOpts auth.LoginOptions
getLogin optionalBool
tlsVerify optionalBool
}

func loginCmd(global *globalOptions) *cobra.Command {
opts := loginOptions{
global: global,
}
cmd := &cobra.Command{
Use: "login",
Short: "Login to a container registry",
Long: "Login to a container registry on a specified server.",
RunE: commandAction(opts.run),
Example: `skopeo login quay.io`,
}
adjustUsage(cmd)
flags := cmd.Flags()
optionalBoolFlag(flags, &opts.tlsVerify, "tls-verify", "require HTTPS and verify certificates when accessing the registry")
flags.AddFlagSet(auth.GetLoginFlags(&opts.loginOpts))
return cmd
}

func (opts *loginOptions) run(args []string, stdout io.Writer) error {
ctx, cancel := opts.global.commandTimeoutContext()
defer cancel()
opts.loginOpts.Stdout = stdout
opts.loginOpts.Stdin = os.Stdin
sys := opts.global.newSystemContext()
if opts.tlsVerify.present {
sys.DockerInsecureSkipTLSVerify = types.NewOptionalBool(!opts.tlsVerify.value)
}
return auth.Login(ctx, sys, &opts.loginOpts, args)
}
cmd/skopeo/logout.go (new file, 35 lines)
@@ -0,0 +1,35 @@
package main

import (
"io"

"github.com/containers/common/pkg/auth"
"github.com/spf13/cobra"
)

type logoutOptions struct {
global *globalOptions
logoutOpts auth.LogoutOptions
}

func logoutCmd(global *globalOptions) *cobra.Command {
opts := logoutOptions{
global: global,
}
cmd := &cobra.Command{
Use: "logout",
Short: "Logout of a container registry",
Long: "Logout of a container registry on a specified server.",
RunE: commandAction(opts.run),
Example: `skopeo logout quay.io`,
}
adjustUsage(cmd)
cmd.Flags().AddFlagSet(auth.GetLogoutFlags(&opts.logoutOpts))
return cmd
}

func (opts *logoutOptions) run(args []string, stdout io.Writer) error {
opts.logoutOpts.Stdout = stdout
sys := opts.global.newSystemContext()
return auth.Logout(sys, &opts.logoutOpts, args)
}
@@ -3,14 +3,14 @@ package main
|
||||
import (
|
||||
"context"
|
||||
"fmt"
|
||||
"os"
|
||||
"time"
|
||||
|
||||
"github.com/containers/image/v5/signature"
|
||||
"github.com/containers/image/v5/types"
|
||||
"github.com/containers/skopeo/version"
|
||||
"github.com/containers/storage/pkg/reexec"
|
||||
"github.com/sirupsen/logrus"
|
||||
"github.com/urfave/cli"
|
||||
"github.com/spf13/cobra"
|
||||
)
|
||||
|
||||
// gitCommit will be the hash that the binary was built from
|
||||
@@ -31,96 +31,61 @@ type globalOptions struct {
|
||||
tmpDir string // Path to use for big temporary files
|
||||
}
|
||||
|
||||
// createApp returns a cli.App, and the underlying globalOptions object, to be run or tested.
|
||||
func createApp() (*cli.App, *globalOptions) {
|
||||
// createApp returns a cobra.Command, and the underlying globalOptions object, to be run or tested.
|
||||
func createApp() (*cobra.Command, *globalOptions) {
|
||||
opts := globalOptions{}
|
||||
|
||||
app := cli.NewApp()
|
||||
app.EnableBashCompletion = true
|
||||
app.Name = "skopeo"
|
||||
rootCommand := &cobra.Command{
|
||||
Use: "skopeo",
|
||||
Long: "Various operations with container images and container image registries",
|
||||
PersistentPreRunE: func(cmd *cobra.Command, args []string) error {
|
||||
return opts.before(cmd)
|
||||
},
|
||||
SilenceUsage: true,
|
||||
SilenceErrors: true,
|
||||
}
|
||||
if gitCommit != "" {
|
||||
app.Version = fmt.Sprintf("%s commit: %s", version.Version, gitCommit)
|
||||
rootCommand.Version = fmt.Sprintf("%s commit: %s", version.Version, gitCommit)
|
||||
} else {
|
||||
app.Version = version.Version
|
||||
rootCommand.Version = version.Version
|
||||
}
|
||||
app.Usage = "Various operations with container images and container image registries"
|
||||
app.Flags = []cli.Flag{
|
||||
cli.DurationFlag{
|
||||
Name: "command-timeout",
|
||||
Usage: "timeout for the command execution",
|
||||
Destination: &opts.commandTimeout,
|
||||
},
|
||||
cli.BoolFlag{
|
||||
Name: "debug",
|
||||
Usage: "enable debug output",
|
||||
Destination: &opts.debug,
|
||||
},
|
||||
cli.BoolFlag{
|
||||
Name: "insecure-policy",
|
||||
Usage: "run the tool without any policy check",
|
||||
Destination: &opts.insecurePolicy,
|
||||
},
|
||||
cli.StringFlag{
|
||||
Name: "override-arch",
|
||||
Usage: "use `ARCH` instead of the architecture of the machine for choosing images",
|
||||
Destination: &opts.overrideArch,
|
||||
},
|
||||
cli.StringFlag{
|
||||
Name: "override-os",
|
||||
Usage: "use `OS` instead of the running OS for choosing images",
|
||||
Destination: &opts.overrideOS,
|
||||
},
|
||||
cli.StringFlag{
|
||||
Name: "override-variant",
|
||||
Usage: "use `VARIANT` instead of the running architecture variant for choosing images",
|
||||
Destination: &opts.overrideVariant,
|
||||
},
|
||||
cli.StringFlag{
|
||||
Name: "policy",
|
||||
Usage: "Path to a trust policy file",
|
||||
Destination: &opts.policyPath,
|
||||
},
|
||||
cli.StringFlag{
|
||||
Name: "registries-conf",
|
||||
Usage: "path to the registries.conf file",
|
||||
Destination: &opts.registriesConfPath,
|
||||
Hidden: true,
|
||||
},
|
||||
cli.StringFlag{
|
||||
Name: "registries.d",
|
||||
Usage: "use registry configuration files in `DIR` (e.g. for container signature storage)",
|
||||
Destination: &opts.registriesDirPath,
|
||||
},
|
||||
cli.GenericFlag{
|
||||
Name: "tls-verify",
|
||||
Usage: "require HTTPS and verify certificates when talking to container registries (defaults to true)",
|
||||
Hidden: true,
|
||||
Value: newOptionalBoolValue(&opts.tlsVerify),
|
||||
},
|
||||
cli.StringFlag{
|
||||
Name: "tmpdir",
|
||||
Usage: "directory used to store temporary files",
|
||||
Destination: &opts.tmpDir,
|
||||
},
|
||||
// Override default `--version` global flag to enable `-v` shorthand
|
||||
var dummyVersion bool
|
||||
rootCommand.Flags().BoolVarP(&dummyVersion, "version", "v", false, "Version for Skopeo")
|
||||
rootCommand.PersistentFlags().BoolVar(&opts.debug, "debug", false, "enable debug output")
|
||||
flag := optionalBoolFlag(rootCommand.PersistentFlags(), &opts.tlsVerify, "tls-verify", "Require HTTPS and verify certificates when accessing the registry")
|
||||
flag.Hidden = true
|
||||
rootCommand.PersistentFlags().StringVar(&opts.policyPath, "policy", "", "Path to a trust policy file")
|
||||
rootCommand.PersistentFlags().BoolVar(&opts.insecurePolicy, "insecure-policy", false, "run the tool without any policy check")
|
||||
rootCommand.PersistentFlags().StringVar(&opts.registriesDirPath, "registries.d", "", "use registry configuration files in `DIR` (e.g. for container signature storage)")
|
||||
rootCommand.PersistentFlags().StringVar(&opts.overrideArch, "override-arch", "", "use `ARCH` instead of the architecture of the machine for choosing images")
|
||||
rootCommand.PersistentFlags().StringVar(&opts.overrideOS, "override-os", "", "use `OS` instead of the running OS for choosing images")
|
||||
rootCommand.PersistentFlags().StringVar(&opts.overrideVariant, "override-variant", "", "use `VARIANT` instead of the running architecture variant for choosing images")
|
||||
rootCommand.PersistentFlags().DurationVar(&opts.commandTimeout, "command-timeout", 0, "timeout for the command execution")
|
||||
rootCommand.PersistentFlags().StringVar(&opts.registriesConfPath, "registries-conf", "", "path to the registries.conf file")
|
||||
if err := rootCommand.PersistentFlags().MarkHidden("registries-conf"); err != nil {
|
||||
logrus.Fatal("unable to mark registries-conf flag as hidden")
|
||||
}
|
||||
app.Before = opts.before
|
||||
app.Commands = []cli.Command{
|
||||
rootCommand.PersistentFlags().StringVar(&opts.tmpDir, "tmpdir", "", "directory used to store temporary files")
|
||||
rootCommand.AddCommand(
|
||||
copyCmd(&opts),
|
||||
deleteCmd(&opts),
|
||||
inspectCmd(&opts),
|
||||
layersCmd(&opts),
|
||||
tagsCmd(&opts),
|
||||
loginCmd(&opts),
|
||||
logoutCmd(&opts),
|
||||
manifestDigestCmd(),
|
||||
syncCmd(&opts),
|
||||
standaloneSignCmd(),
|
||||
standaloneVerifyCmd(),
|
||||
syncCmd(&opts),
|
||||
tagsCmd(&opts),
|
||||
untrustedSignatureDumpCmd(),
|
||||
}
|
||||
return app, &opts
|
||||
)
|
||||
return rootCommand, &opts
|
||||
}
|
||||
|
||||
// before is run by the cli package for any command, before running the command-specific handler.
|
||||
func (opts *globalOptions) before(ctx *cli.Context) error {
|
||||
func (opts *globalOptions) before(cmd *cobra.Command) error {
|
||||
if opts.debug {
|
||||
logrus.SetLevel(logrus.DebugLevel)
|
||||
}
|
||||
@@ -134,8 +99,8 @@ func main() {
|
||||
if reexec.Init() {
|
||||
return
|
||||
}
|
||||
app, _ := createApp()
|
||||
if err := app.Run(os.Args); err != nil {
|
||||
rootCmd, _ := createApp()
|
||||
if err := rootCmd.Execute(); err != nil {
|
||||
logrus.Fatal(err)
|
||||
}
|
||||
}
|
||||
@@ -167,3 +132,21 @@ func (opts *globalOptions) commandTimeoutContext() (context.Context, context.Can
|
||||
}
|
||||
return ctx, cancel
|
||||
}
|
||||
|
||||
// newSystemContext returns a *types.SystemContext corresponding to opts.
|
||||
// It is guaranteed to return a fresh instance, so it is safe to make additional updates to it.
|
||||
func (opts *globalOptions) newSystemContext() *types.SystemContext {
|
||||
ctx := &types.SystemContext{
|
||||
RegistriesDirPath: opts.registriesDirPath,
|
||||
ArchitectureChoice: opts.overrideArch,
|
||||
OSChoice: opts.overrideOS,
|
||||
VariantChoice: opts.overrideVariant,
|
||||
SystemRegistriesConfPath: opts.registriesConfPath,
|
||||
BigFilesTemporaryDir: opts.tmpDir,
|
||||
}
|
||||
// DEPRECATED: We support this for backward compatibility, but override it if a per-image flag is provided.
|
||||
if opts.tlsVerify.present {
|
||||
ctx.DockerInsecureSkipTLSVerify = types.NewOptionalBool(!opts.tlsVerify.value)
|
||||
}
|
||||
return ctx
|
||||
}
|
||||
|
||||
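The `createApp` rewrite above moves the global flags onto the root command's persistent flag set and replaces urfave/cli's `app.Before` with `PersistentPreRunE`, which cobra runs before every subcommand. A stripped-down, assumed sketch of that wiring pattern (the `tool` and `hello` names are placeholders, not skopeo commands):

```go
// Sketch of the cobra wiring pattern adopted above: persistent flags on the
// root command are visible to all subcommands, and PersistentPreRunE runs
// before any subcommand's RunE.
package main

import (
	"fmt"
	"os"

	"github.com/spf13/cobra"
)

func main() {
	var debug bool

	root := &cobra.Command{
		Use:           "tool",
		SilenceUsage:  true,
		SilenceErrors: true,
		PersistentPreRunE: func(cmd *cobra.Command, args []string) error {
			if debug {
				fmt.Fprintln(os.Stderr, "debug output enabled")
			}
			return nil
		},
	}
	root.PersistentFlags().BoolVar(&debug, "debug", false, "enable debug output")

	root.AddCommand(&cobra.Command{
		Use: "hello",
		RunE: func(cmd *cobra.Command, args []string) error {
			_, err := fmt.Fprintln(cmd.OutOrStdout(), "hello")
			return err
		},
	})

	if err := root.Execute(); err != nil {
		os.Exit(1)
	}
}
```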
@@ -1,14 +1,47 @@
|
||||
package main
|
||||
|
||||
import "bytes"
|
||||
import (
|
||||
"bytes"
|
||||
"testing"
|
||||
|
||||
"github.com/containers/image/v5/types"
|
||||
"github.com/stretchr/testify/assert"
|
||||
)
|
||||
|
||||
// runSkopeo creates an app object and runs it with args, with an implied first "skopeo".
|
||||
// Returns output intended for stdout and the returned error, if any.
|
||||
func runSkopeo(args ...string) (string, error) {
|
||||
app, _ := createApp()
|
||||
stdout := bytes.Buffer{}
|
||||
app.Writer = &stdout
|
||||
args = append([]string{"skopeo"}, args...)
|
||||
err := app.Run(args)
|
||||
app.SetOut(&stdout)
|
||||
app.SetArgs(args)
|
||||
err := app.Execute()
|
||||
return stdout.String(), err
|
||||
}
|
||||
|
||||
func TestGlobalOptionsNewSystemContext(t *testing.T) {
|
||||
// Default state
|
||||
opts, _ := fakeGlobalOptions(t, []string{})
|
||||
res := opts.newSystemContext()
|
||||
assert.Equal(t, &types.SystemContext{}, res)
|
||||
// Set everything to non-default values.
|
||||
opts, _ = fakeGlobalOptions(t, []string{
|
||||
"--registries.d", "/srv/registries.d",
|
||||
"--override-arch", "overridden-arch",
|
||||
"--override-os", "overridden-os",
|
||||
"--override-variant", "overridden-variant",
|
||||
"--tmpdir", "/srv",
|
||||
"--registries-conf", "/srv/registries.conf",
|
||||
"--tls-verify=false",
|
||||
})
|
||||
res = opts.newSystemContext()
|
||||
assert.Equal(t, &types.SystemContext{
|
||||
RegistriesDirPath: "/srv/registries.d",
|
||||
ArchitectureChoice: "overridden-arch",
|
||||
OSChoice: "overridden-os",
|
||||
VariantChoice: "overridden-variant",
|
||||
BigFilesTemporaryDir: "/srv",
|
||||
SystemRegistriesConfPath: "/srv/registries.conf",
|
||||
DockerInsecureSkipTLSVerify: types.OptionalBoolTrue,
|
||||
}, res)
|
||||
}
|
||||
|
||||
@@ -7,20 +7,22 @@ import (
"io/ioutil"

"github.com/containers/image/v5/manifest"
"github.com/urfave/cli"
"github.com/spf13/cobra"
)

type manifestDigestOptions struct {
}

func manifestDigestCmd() cli.Command {
opts := manifestDigestOptions{}
return cli.Command{
Name: "manifest-digest",
Usage: "Compute a manifest digest of a file",
ArgsUsage: "MANIFEST",
Action: commandAction(opts.run),
func manifestDigestCmd() *cobra.Command {
var opts manifestDigestOptions
cmd := &cobra.Command{
Use: "manifest-digest MANIFEST",
Short: "Compute a manifest digest of a file",
RunE: commandAction(opts.run),
Example: "skopeo manifest-digest manifest.json",
}
adjustUsage(cmd)
return cmd
}

func (opts *manifestDigestOptions) run(args []string, stdout io.Writer) error {

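The body of `manifestDigestOptions.run` is not included in this hunk; given the `containers/image/v5/manifest` import above, computing the digest of a manifest file presumably reduces to something like the following standalone sketch (the file name is illustrative only):

```go
// Assumed illustration, not skopeo code: read a manifest blob from disk and
// print its digest using the containers/image manifest package imported above.
package main

import (
	"fmt"
	"io/ioutil"

	"github.com/containers/image/v5/manifest"
)

func main() {
	blob, err := ioutil.ReadFile("manifest.json")
	if err != nil {
		panic(err)
	}
	d, err := manifest.Digest(blob) // returns an OCI go-digest value such as sha256:...
	if err != nil {
		panic(err)
	}
	fmt.Println(d.String())
}
```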
@@ -8,28 +8,24 @@ import (
|
||||
"io/ioutil"
|
||||
|
||||
"github.com/containers/image/v5/signature"
|
||||
"github.com/urfave/cli"
|
||||
"github.com/spf13/cobra"
|
||||
)
|
||||
|
||||
type standaloneSignOptions struct {
|
||||
output string // Output file path
|
||||
}
|
||||
|
||||
func standaloneSignCmd() cli.Command {
|
||||
func standaloneSignCmd() *cobra.Command {
|
||||
opts := standaloneSignOptions{}
|
||||
return cli.Command{
|
||||
Name: "standalone-sign",
|
||||
Usage: "Create a signature using local files",
|
||||
ArgsUsage: "MANIFEST DOCKER-REFERENCE KEY-FINGERPRINT",
|
||||
Action: commandAction(opts.run),
|
||||
Flags: []cli.Flag{
|
||||
cli.StringFlag{
|
||||
Name: "output, o",
|
||||
Usage: "output the signature to `SIGNATURE`",
|
||||
Destination: &opts.output,
|
||||
},
|
||||
},
|
||||
cmd := &cobra.Command{
|
||||
Use: "standalone-sign [command options] MANIFEST DOCKER-REFERENCE KEY-FINGERPRINT",
|
||||
Short: "Create a signature using local files",
|
||||
RunE: commandAction(opts.run),
|
||||
}
|
||||
adjustUsage(cmd)
|
||||
flags := cmd.Flags()
|
||||
flags.StringVarP(&opts.output, "output", "o", "", "output the signature to `SIGNATURE`")
|
||||
return cmd
|
||||
}
|
||||
|
||||
func (opts *standaloneSignOptions) run(args []string, stdout io.Writer) error {
|
||||
@@ -64,14 +60,15 @@ func (opts *standaloneSignOptions) run(args []string, stdout io.Writer) error {
|
||||
type standaloneVerifyOptions struct {
|
||||
}
|
||||
|
||||
func standaloneVerifyCmd() cli.Command {
|
||||
func standaloneVerifyCmd() *cobra.Command {
|
||||
opts := standaloneVerifyOptions{}
|
||||
return cli.Command{
|
||||
Name: "standalone-verify",
|
||||
Usage: "Verify a signature using local files",
|
||||
ArgsUsage: "MANIFEST DOCKER-REFERENCE KEY-FINGERPRINT SIGNATURE",
|
||||
Action: commandAction(opts.run),
|
||||
cmd := &cobra.Command{
|
||||
Use: "standalone-verify MANIFEST DOCKER-REFERENCE KEY-FINGERPRINT SIGNATURE",
|
||||
Short: "Verify a signature using local files",
|
||||
RunE: commandAction(opts.run),
|
||||
}
|
||||
adjustUsage(cmd)
|
||||
return cmd
|
||||
}
|
||||
|
||||
func (opts *standaloneVerifyOptions) run(args []string, stdout io.Writer) error {
|
||||
@@ -115,15 +112,16 @@ func (opts *standaloneVerifyOptions) run(args []string, stdout io.Writer) error
|
||||
type untrustedSignatureDumpOptions struct {
|
||||
}
|
||||
|
||||
func untrustedSignatureDumpCmd() cli.Command {
|
||||
func untrustedSignatureDumpCmd() *cobra.Command {
|
||||
opts := untrustedSignatureDumpOptions{}
|
||||
return cli.Command{
|
||||
Name: "untrusted-signature-dump-without-verification",
|
||||
Usage: "Dump contents of a signature WITHOUT VERIFYING IT",
|
||||
ArgsUsage: "SIGNATURE",
|
||||
Hidden: true,
|
||||
Action: commandAction(opts.run),
|
||||
cmd := &cobra.Command{
|
||||
Use: "untrusted-signature-dump-without-verification SIGNATURE",
|
||||
Short: "Dump contents of a signature WITHOUT VERIFYING IT",
|
||||
RunE: commandAction(opts.run),
|
||||
Hidden: true,
|
||||
}
|
||||
adjustUsage(cmd)
|
||||
return cmd
|
||||
}
|
||||
|
||||
func (opts *untrustedSignatureDumpOptions) run(args []string, stdout io.Writer) error {
|
||||
|
||||
@@ -8,6 +8,7 @@ import (
|
||||
"os"
|
||||
"path"
|
||||
"path/filepath"
|
||||
"regexp"
|
||||
"strings"
|
||||
|
||||
"github.com/containers/image/v5/copy"
|
||||
@@ -18,7 +19,7 @@ import (
|
||||
"github.com/containers/image/v5/types"
|
||||
"github.com/pkg/errors"
|
||||
"github.com/sirupsen/logrus"
|
||||
"github.com/urfave/cli"
|
||||
"github.com/spf13/cobra"
|
||||
"gopkg.in/yaml.v2"
|
||||
)
|
||||
|
||||
@@ -50,16 +51,17 @@ type tlsVerifyConfig struct {
|
||||
// registrySyncConfig contains information about a single registry, read from
|
||||
// the source YAML file
|
||||
type registrySyncConfig struct {
|
||||
Images map[string][]string // Images map images name to slices with the images' tags
|
||||
Credentials types.DockerAuthConfig // Username and password used to authenticate with the registry
|
||||
TLSVerify tlsVerifyConfig `yaml:"tls-verify"` // TLS verification mode (enabled by default)
|
||||
CertDir string `yaml:"cert-dir"` // Path to the TLS certificates of the registry
|
||||
Images map[string][]string // Images maps image names to slices of the images' tags
|
||||
ImagesByTagRegex map[string]string `yaml:"images-by-tag-regex"` // ImagesByTagRegex maps image names to a regular expression matching the images' tags
|
||||
Credentials types.DockerAuthConfig // Username and password used to authenticate with the registry
|
||||
TLSVerify tlsVerifyConfig `yaml:"tls-verify"` // TLS verification mode (enabled by default)
|
||||
CertDir string `yaml:"cert-dir"` // Path to the TLS certificates of the registry
|
||||
}
|
||||
|
||||
// sourceConfig contains all registries information read from the source YAML file
|
||||
type sourceConfig map[string]registrySyncConfig
|
||||
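For orientation, the struct tags above imply a sync-source YAML of roughly the following shape. The snippet below is a minimal, self-contained sketch (simplified stand-in types, hypothetical registry and repository names) of unmarshalling such a document the way newSourceConfig does, using the same gopkg.in/yaml.v2 dependency; the real tlsVerifyConfig custom unmarshaller and credentials handling are omitted.

```go
// Minimal sketch, not the skopeo code: simplified stand-ins for the structs
// above, with the same YAML tags, fed an example sync-source document.
package main

import (
	"fmt"

	"gopkg.in/yaml.v2"
)

type registrySyncConfig struct {
	Images           map[string][]string `yaml:"images"`
	ImagesByTagRegex map[string]string   `yaml:"images-by-tag-regex"`
	TLSVerify        bool                `yaml:"tls-verify"` // real code: tlsVerifyConfig with a custom unmarshaller
	CertDir          string              `yaml:"cert-dir"`
}

type sourceConfig map[string]registrySyncConfig

func main() {
	// Hypothetical registry and repositories, for illustration only.
	doc := `
registry.example.com:
  images:
    busybox: []
    alpine: ["3.12", "edge"]
  images-by-tag-regex:
    nginx: ^1\.14\.[0-9]+$
  tls-verify: true
  cert-dir: /etc/containers/certs.d/registry.example.com
`
	var cfg sourceConfig
	if err := yaml.Unmarshal([]byte(doc), &cfg); err != nil {
		panic(err)
	}
	fmt.Printf("%+v\n", cfg)
}
```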
|
||||
func syncCmd(global *globalOptions) cli.Command {
|
||||
func syncCmd(global *globalOptions) *cobra.Command {
|
||||
sharedFlags, sharedOpts := sharedImageFlags()
|
||||
srcFlags, srcOpts := dockerImageFlags(global, sharedOpts, "src-", "screds")
|
||||
destFlags, destOpts := dockerImageFlags(global, sharedOpts, "dest-", "dcreds")
|
||||
@@ -70,49 +72,30 @@ func syncCmd(global *globalOptions) cli.Command {
|
||||
destImage: &imageDestOptions{imageOptions: destOpts},
|
||||
}
|
||||
|
||||
return cli.Command{
|
||||
Name: "sync",
|
||||
Usage: "Synchronize one or more images from one location to another",
|
||||
Description: fmt.Sprint(`
|
||||
cmd := &cobra.Command{
|
||||
Use: "sync [command options] --src SOURCE-LOCATION --dest DESTINATION-LOCATION SOURCE DESTINATION",
|
||||
Short: "Synchronize one or more images from one location to another",
|
||||
Long: fmt.Sprint(`Copy all the images from a SOURCE to a DESTINATION.
|
||||
|
||||
Copy all the images from a SOURCE to a DESTINATION.
|
||||
Allowed SOURCE transports (specified with --src): docker, dir, yaml.
|
||||
Allowed DESTINATION transports (specified with --dest): docker, dir.
|
||||
|
||||
Allowed SOURCE transports (specified with --src): docker, dir, yaml.
|
||||
Allowed DESTINATION transports (specified with --dest): docker, dir.
|
||||
|
||||
See skopeo-sync(1) for details.
|
||||
`),
|
||||
ArgsUsage: "--src SOURCE-LOCATION --dest DESTINATION-LOCATION SOURCE DESTINATION",
|
||||
Action: commandAction(opts.run),
|
||||
// FIXME: Do we need to namespace the GPG aspect?
|
||||
Flags: append(append(append([]cli.Flag{
|
||||
cli.BoolFlag{
|
||||
Name: "remove-signatures",
|
||||
Usage: "Do not copy signatures from SOURCE images",
|
||||
Destination: &opts.removeSignatures,
|
||||
},
|
||||
cli.StringFlag{
|
||||
Name: "sign-by",
|
||||
Usage: "Sign the image using a GPG key with the specified `FINGERPRINT`",
|
||||
Destination: &opts.signByFingerprint,
|
||||
},
|
||||
cli.StringFlag{
|
||||
Name: "src, s",
|
||||
Usage: "SOURCE transport type",
|
||||
Destination: &opts.source,
|
||||
},
|
||||
cli.StringFlag{
|
||||
Name: "dest, d",
|
||||
Usage: "DESTINATION transport type",
|
||||
Destination: &opts.destination,
|
||||
},
|
||||
cli.BoolFlag{
|
||||
Name: "scoped",
|
||||
Usage: "Images at DESTINATION are prefix using the full source image path as scope",
|
||||
Destination: &opts.scoped,
|
||||
},
|
||||
}, sharedFlags...), srcFlags...), destFlags...),
|
||||
See skopeo-sync(1) for details.
|
||||
`),
|
||||
RunE: commandAction(opts.run),
|
||||
Example: `skopeo sync --src docker --dest dir --scoped registry.example.com/busybox /media/usb`,
|
||||
}
|
||||
adjustUsage(cmd)
|
||||
flags := cmd.Flags()
|
||||
flags.BoolVar(&opts.removeSignatures, "remove-signatures", false, "Do not copy signatures from SOURCE images")
|
||||
flags.StringVar(&opts.signByFingerprint, "sign-by", "", "Sign the image using a GPG key with the specified `FINGERPRINT`")
|
||||
flags.StringVarP(&opts.source, "src", "s", "", "SOURCE transport type")
|
||||
flags.StringVarP(&opts.destination, "dest", "d", "", "DESTINATION transport type")
|
||||
flags.BoolVar(&opts.scoped, "scoped", false, "Images at DESTINATION are prefixed using the full source image path as scope")
|
||||
flags.AddFlagSet(&sharedFlags)
|
||||
flags.AddFlagSet(&srcFlags)
|
||||
flags.AddFlagSet(&destFlags)
|
||||
return cmd
|
||||
}
|
||||
|
||||
// unmarshalYAML is the implementation of the Unmarshaler interface method
|
||||
@@ -144,6 +127,18 @@ func newSourceConfig(yamlFile string) (sourceConfig, error) {
|
||||
return cfg, nil
|
||||
}
|
||||
|
||||
// parseRepositoryReference parses input into a reference.Named, and verifies that it names a repository, not an image.
|
||||
func parseRepositoryReference(input string) (reference.Named, error) {
|
||||
ref, err := reference.ParseNormalizedNamed(input)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
if !reference.IsNameOnly(ref) {
|
||||
return nil, errors.Errorf("input names a reference, not a repository")
|
||||
}
|
||||
return ref, nil
|
||||
}
|
||||
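As a quick illustration of the repository-versus-reference distinction that parseRepositoryReference enforces, here is a standalone sketch using the same reference helpers; the image names are made up for the example.

```go
package main

import (
	"fmt"

	"github.com/containers/image/v5/docker/reference"
)

func main() {
	for _, in := range []string{"registry.example.com/busybox", "busybox:1.31"} {
		ref, err := reference.ParseNormalizedNamed(in)
		if err != nil {
			fmt.Println(in, "->", err)
			continue
		}
		// parseRepositoryReference accepts only name-only references;
		// "busybox:1.31" parses but would be rejected because it carries a tag.
		fmt.Printf("%s -> name %q, name-only: %v\n", in, ref.Name(), reference.IsNameOnly(ref))
	}
}
```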
|
||||
// destinationReference creates an image reference using the provided transport.
|
||||
// It returns a image reference to be used as destination of an image copy and
|
||||
// any error encountered.
|
||||
@@ -157,15 +152,14 @@ func destinationReference(destination string, transport string) (types.ImageRefe
|
||||
case directory.Transport.Name():
|
||||
_, err := os.Stat(destination)
|
||||
if err == nil {
|
||||
return nil, errors.Errorf(fmt.Sprintf("Refusing to overwrite destination directory %q", destination))
|
||||
return nil, errors.Errorf("Refusing to overwrite destination directory %q", destination)
|
||||
}
|
||||
if !os.IsNotExist(err) {
|
||||
return nil, errors.Wrap(err, "Destination directory could not be used")
|
||||
}
|
||||
// the directory holding the image must be created here
|
||||
if err = os.MkdirAll(destination, 0755); err != nil {
|
||||
return nil, errors.Wrapf(err, fmt.Sprintf("Error creating directory for image %s",
|
||||
destination))
|
||||
return nil, errors.Wrapf(err, "Error creating directory for image %s", destination)
|
||||
}
|
||||
imageTransport = directory.Transport
|
||||
default:
|
||||
@@ -175,21 +169,26 @@ func destinationReference(destination string, transport string) (types.ImageRefe
|
||||
|
||||
destRef, err := imageTransport.ParseReference(destination)
|
||||
if err != nil {
|
||||
return nil, errors.Wrapf(err, fmt.Sprintf("Cannot obtain a valid image reference for transport %q and reference %q", imageTransport.Name(), destination))
|
||||
return nil, errors.Wrapf(err, "Cannot obtain a valid image reference for transport %q and reference %q", imageTransport.Name(), destination)
|
||||
}
|
||||
|
||||
return destRef, nil
|
||||
}
|
||||
|
||||
// getImageTags retrieves all the tags associated to an image hosted on a
|
||||
// container registry.
|
||||
// getImageTags lists all tags in a repository.
|
||||
// It returns a string slice of tags and any error encountered.
|
||||
func getImageTags(ctx context.Context, sysCtx *types.SystemContext, imgRef types.ImageReference) ([]string, error) {
|
||||
name := imgRef.DockerReference().Name()
|
||||
func getImageTags(ctx context.Context, sysCtx *types.SystemContext, repoRef reference.Named) ([]string, error) {
|
||||
name := repoRef.Name()
|
||||
logrus.WithFields(logrus.Fields{
|
||||
"image": name,
|
||||
}).Info("Getting tags")
|
||||
tags, err := docker.GetRepositoryTags(ctx, sysCtx, imgRef)
|
||||
// Ugly: NewReference rejects IsNameOnly references, and GetRepositoryTags ignores the tag/digest.
|
||||
// So, we use TagNameOnly here only to shut up NewReference
|
||||
dockerRef, err := docker.NewReference(reference.TagNameOnly(repoRef))
|
||||
if err != nil {
|
||||
return nil, err // Should never happen for a reference with tag and no digest
|
||||
}
|
||||
tags, err := docker.GetRepositoryTags(ctx, sysCtx, dockerRef)
|
||||
|
||||
switch err := err.(type) {
|
||||
case nil:
|
||||
@@ -200,44 +199,31 @@ func getImageTags(ctx context.Context, sysCtx *types.SystemContext, imgRef types
|
||||
logrus.Warnf("Registry disallows tag list retrieval: %s", err)
|
||||
break
|
||||
default:
|
||||
return tags, errors.Wrapf(err, fmt.Sprintf("Error determining repository tags for image %s", name))
|
||||
return tags, errors.Wrapf(err, "Error determining repository tags for image %s", name)
|
||||
}
|
||||
|
||||
return tags, nil
|
||||
}
|
||||
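The TagNameOnly workaround mentioned above can be seen in isolation in this small sketch (hypothetical repository name); it demonstrates only the reference manipulation, not the registry call itself.

```go
package main

import (
	"fmt"

	"github.com/containers/image/v5/docker/reference"
)

func main() {
	repo, err := reference.ParseNormalizedNamed("registry.example.com/busybox")
	if err != nil {
		panic(err)
	}
	// TagNameOnly adds ":latest" to a name-only reference, which is just enough
	// to satisfy docker.NewReference; GetRepositoryTags ignores the tag anyway.
	fmt.Println(reference.TagNameOnly(repo).String()) // registry.example.com/busybox:latest
}
```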
|
||||
// isTagSpecified checks if an image name includes a tag and returns any errors
|
||||
// encountered.
|
||||
func isTagSpecified(imageName string) (bool, error) {
|
||||
normNamed, err := reference.ParseNormalizedNamed(imageName)
|
||||
if err != nil {
|
||||
return false, err
|
||||
}
|
||||
|
||||
tagged := !reference.IsNameOnly(normNamed)
|
||||
logrus.WithFields(logrus.Fields{
|
||||
"imagename": imageName,
|
||||
"tagged": tagged,
|
||||
}).Info("Tag presence check")
|
||||
return tagged, nil
|
||||
}
|
||||
|
||||
// imagesTopCopyFromRepo builds a list of image references from the tags
|
||||
// found in the source repository.
|
||||
// imagesToCopyFromRepo builds a list of image references from the tags
|
||||
// found in a source repository.
|
||||
// It returns an image reference slice with as many elements as the tags found
|
||||
// and any error encountered.
|
||||
func imagesToCopyFromRepo(repoReference types.ImageReference, repoName string, sourceCtx *types.SystemContext) ([]types.ImageReference, error) {
|
||||
var sourceReferences []types.ImageReference
|
||||
tags, err := getImageTags(context.Background(), sourceCtx, repoReference)
|
||||
func imagesToCopyFromRepo(sys *types.SystemContext, repoRef reference.Named) ([]types.ImageReference, error) {
|
||||
tags, err := getImageTags(context.Background(), sys, repoRef)
|
||||
if err != nil {
|
||||
return sourceReferences, err
|
||||
return nil, err
|
||||
}
|
||||
|
||||
var sourceReferences []types.ImageReference
|
||||
for _, tag := range tags {
|
||||
imageAndTag := fmt.Sprintf("%s:%s", repoName, tag)
|
||||
ref, err := docker.ParseReference(imageAndTag)
|
||||
taggedRef, err := reference.WithTag(repoRef, tag)
|
||||
if err != nil {
|
||||
return nil, errors.Wrapf(err, fmt.Sprintf("Cannot obtain a valid image reference for transport %q and reference %q", docker.Transport.Name(), imageAndTag))
|
||||
return nil, errors.Wrapf(err, "Error creating a reference for repository %s and tag %q", repoRef.Name(), tag)
|
||||
}
|
||||
ref, err := docker.NewReference(taggedRef)
|
||||
if err != nil {
|
||||
return nil, errors.Wrapf(err, "Cannot obtain a valid image reference for transport %q and reference %s", docker.Transport.Name(), taggedRef.String())
|
||||
}
|
||||
sourceReferences = append(sourceReferences, ref)
|
||||
}
|
||||
@@ -258,7 +244,7 @@ func imagesToCopyFromDir(dirPath string) ([]types.ImageReference, error) {
|
||||
dirname := filepath.Dir(path)
|
||||
ref, err := directory.Transport.ParseReference(dirname)
|
||||
if err != nil {
|
||||
return errors.Wrapf(err, fmt.Sprintf("Cannot obtain a valid image reference for transport %q and reference %q", directory.Transport.Name(), dirname))
|
||||
return errors.Wrapf(err, "Cannot obtain a valid image reference for transport %q and reference %q", directory.Transport.Name(), dirname)
|
||||
}
|
||||
sourceReferences = append(sourceReferences, ref)
|
||||
return filepath.SkipDir
|
||||
@@ -268,7 +254,7 @@ func imagesToCopyFromDir(dirPath string) ([]types.ImageReference, error) {
|
||||
|
||||
if err != nil {
|
||||
return sourceReferences,
|
||||
errors.Wrapf(err, fmt.Sprintf("Error walking the path %q", dirPath))
|
||||
errors.Wrapf(err, "Error walking the path %q", dirPath)
|
||||
}
|
||||
|
||||
return sourceReferences, nil
|
||||
@@ -280,69 +266,113 @@ func imagesToCopyFromDir(dirPath string) ([]types.ImageReference, error) {
|
||||
// found and any error encountered. Each element of the slice is a list of
|
||||
// tagged image references, to be used as sync source.
|
||||
func imagesToCopyFromRegistry(registryName string, cfg registrySyncConfig, sourceCtx types.SystemContext) ([]repoDescriptor, error) {
|
||||
serverCtx := &sourceCtx
|
||||
// override ctx with per-registryName options
|
||||
serverCtx.DockerCertPath = cfg.CertDir
|
||||
serverCtx.DockerDaemonCertPath = cfg.CertDir
|
||||
serverCtx.DockerDaemonInsecureSkipTLSVerify = (cfg.TLSVerify.skip == types.OptionalBoolTrue)
|
||||
serverCtx.DockerInsecureSkipTLSVerify = cfg.TLSVerify.skip
|
||||
serverCtx.DockerAuthConfig = &cfg.Credentials
|
||||
|
||||
var repoDescList []repoDescriptor
|
||||
for imageName, tags := range cfg.Images {
|
||||
repoName := fmt.Sprintf("//%s", path.Join(registryName, imageName))
|
||||
logrus.WithFields(logrus.Fields{
|
||||
repoLogger := logrus.WithFields(logrus.Fields{
|
||||
"repo": imageName,
|
||||
"registry": registryName,
|
||||
}).Info("Processing repo")
|
||||
|
||||
serverCtx := &sourceCtx
|
||||
// override ctx with per-registryName options
|
||||
serverCtx.DockerCertPath = cfg.CertDir
|
||||
serverCtx.DockerDaemonCertPath = cfg.CertDir
|
||||
serverCtx.DockerDaemonInsecureSkipTLSVerify = (cfg.TLSVerify.skip == types.OptionalBoolTrue)
|
||||
serverCtx.DockerInsecureSkipTLSVerify = cfg.TLSVerify.skip
|
||||
serverCtx.DockerAuthConfig = &cfg.Credentials
|
||||
|
||||
var sourceReferences []types.ImageReference
|
||||
for _, tag := range tags {
|
||||
source := fmt.Sprintf("%s:%s", repoName, tag)
|
||||
|
||||
imageRef, err := docker.ParseReference(source)
|
||||
if err != nil {
|
||||
logrus.WithFields(logrus.Fields{
|
||||
"tag": source,
|
||||
}).Error("Error processing tag, skipping")
|
||||
logrus.Errorf("Error getting image reference: %s", err)
|
||||
continue
|
||||
}
|
||||
sourceReferences = append(sourceReferences, imageRef)
|
||||
})
|
||||
repoRef, err := parseRepositoryReference(fmt.Sprintf("%s/%s", registryName, imageName))
|
||||
if err != nil {
|
||||
repoLogger.Error("Error parsing repository name, skipping")
|
||||
logrus.Error(err)
|
||||
continue
|
||||
}
|
||||
|
||||
if len(tags) == 0 {
|
||||
logrus.WithFields(logrus.Fields{
|
||||
"repo": imageName,
|
||||
"registry": registryName,
|
||||
}).Info("Querying registry for image tags")
|
||||
repoLogger.Info("Processing repo")
|
||||
|
||||
imageRef, err := docker.ParseReference(repoName)
|
||||
if err != nil {
|
||||
logrus.WithFields(logrus.Fields{
|
||||
"repo": imageName,
|
||||
"registry": registryName,
|
||||
}).Error("Error processing repo, skipping")
|
||||
logrus.Error(err)
|
||||
continue
|
||||
var sourceReferences []types.ImageReference
|
||||
if len(tags) != 0 {
|
||||
for _, tag := range tags {
|
||||
tagLogger := logrus.WithFields(logrus.Fields{"tag": tag})
|
||||
taggedRef, err := reference.WithTag(repoRef, tag)
|
||||
if err != nil {
|
||||
tagLogger.Error("Error parsing tag, skipping")
|
||||
logrus.Error(err)
|
||||
continue
|
||||
}
|
||||
imageRef, err := docker.NewReference(taggedRef)
|
||||
if err != nil {
|
||||
tagLogger.Error("Error processing tag, skipping")
|
||||
logrus.Errorf("Error getting image reference: %s", err)
|
||||
continue
|
||||
}
|
||||
sourceReferences = append(sourceReferences, imageRef)
|
||||
}
|
||||
|
||||
sourceReferences, err = imagesToCopyFromRepo(imageRef, repoName, serverCtx)
|
||||
} else { // len(tags) == 0
|
||||
repoLogger.Info("Querying registry for image tags")
|
||||
sourceReferences, err = imagesToCopyFromRepo(serverCtx, repoRef)
|
||||
if err != nil {
|
||||
logrus.WithFields(logrus.Fields{
|
||||
"repo": imageName,
|
||||
"registry": registryName,
|
||||
}).Error("Error processing repo, skipping")
|
||||
repoLogger.Error("Error processing repo, skipping")
|
||||
logrus.Error(err)
|
||||
continue
|
||||
}
|
||||
}
|
||||
|
||||
if len(sourceReferences) == 0 {
|
||||
logrus.WithFields(logrus.Fields{
|
||||
"repo": imageName,
|
||||
"registry": registryName,
|
||||
}).Warnf("No tags to sync found")
|
||||
repoLogger.Warnf("No tags to sync found")
|
||||
continue
|
||||
}
|
||||
repoDescList = append(repoDescList, repoDescriptor{
|
||||
TaggedImages: sourceReferences,
|
||||
Context: serverCtx})
|
||||
}
|
||||
|
||||
for imageName, tagRegex := range cfg.ImagesByTagRegex {
|
||||
repoLogger := logrus.WithFields(logrus.Fields{
|
||||
"repo": imageName,
|
||||
"registry": registryName,
|
||||
})
|
||||
repoRef, err := parseRepositoryReference(fmt.Sprintf("%s/%s", registryName, imageName))
|
||||
if err != nil {
|
||||
repoLogger.Error("Error parsing repository name, skipping")
|
||||
logrus.Error(err)
|
||||
continue
|
||||
}
|
||||
|
||||
repoLogger.Info("Processing repo")
|
||||
|
||||
var sourceReferences []types.ImageReference
|
||||
|
||||
tagReg, err := regexp.Compile(tagRegex)
|
||||
if err != nil {
|
||||
repoLogger.WithFields(logrus.Fields{
|
||||
"regex": tagRegex,
|
||||
}).Error("Error parsing regex, skipping")
|
||||
logrus.Error(err)
|
||||
continue
|
||||
}
|
||||
|
||||
repoLogger.Info("Querying registry for image tags")
|
||||
allSourceReferences, err := imagesToCopyFromRepo(serverCtx, repoRef)
|
||||
if err != nil {
|
||||
repoLogger.Error("Error processing repo, skipping")
|
||||
logrus.Error(err)
|
||||
continue
|
||||
}
|
||||
|
||||
repoLogger.Infof("Start filtering using the regular expression: %v", tagRegex)
|
||||
for _, sReference := range allSourceReferences {
|
||||
tagged, isTagged := sReference.DockerReference().(reference.Tagged)
|
||||
if !isTagged {
|
||||
repoLogger.Errorf("Internal error, reference %s does not have a tag, skipping", sReference.DockerReference())
|
||||
continue
|
||||
}
|
||||
if tagReg.MatchString(tagged.Tag()) {
|
||||
sourceReferences = append(sourceReferences, sReference)
|
||||
}
|
||||
}
|
||||
|
||||
if len(sourceReferences) == 0 {
|
||||
repoLogger.Warnf("No tags to sync found")
|
||||
continue
|
||||
}
|
||||
repoDescList = append(repoDescList, repoDescriptor{
|
||||
@@ -366,32 +396,29 @@ func imagesToCopy(source string, transport string, sourceCtx *types.SystemContex
|
||||
desc := repoDescriptor{
|
||||
Context: sourceCtx,
|
||||
}
|
||||
refName := fmt.Sprintf("//%s", source)
|
||||
srcRef, err := docker.ParseReference(refName)
|
||||
named, err := reference.ParseNormalizedNamed(source) // May be a repository or an image.
|
||||
if err != nil {
|
||||
return nil, errors.Wrapf(err, fmt.Sprintf("Cannot obtain a valid image reference for transport %q and reference %q", docker.Transport.Name(), refName))
|
||||
return nil, errors.Wrapf(err, "Cannot obtain a valid image reference for transport %q and reference %q", docker.Transport.Name(), source)
|
||||
}
|
||||
imageTagged, err := isTagSpecified(source)
|
||||
if err != nil {
|
||||
return descriptors, err
|
||||
}
|
||||
|
||||
imageTagged := !reference.IsNameOnly(named)
|
||||
logrus.WithFields(logrus.Fields{
|
||||
"imagename": source,
|
||||
"tagged": imageTagged,
|
||||
}).Info("Tag presence check")
|
||||
if imageTagged {
|
||||
desc.TaggedImages = append(desc.TaggedImages, srcRef)
|
||||
descriptors = append(descriptors, desc)
|
||||
break
|
||||
}
|
||||
|
||||
desc.TaggedImages, err = imagesToCopyFromRepo(
|
||||
srcRef,
|
||||
fmt.Sprintf("//%s", source),
|
||||
sourceCtx)
|
||||
|
||||
if err != nil {
|
||||
return descriptors, err
|
||||
}
|
||||
if len(desc.TaggedImages) == 0 {
|
||||
return descriptors, errors.Errorf("No images to sync found in %q", source)
|
||||
srcRef, err := docker.NewReference(named)
|
||||
if err != nil {
|
||||
return nil, errors.Wrapf(err, "Cannot obtain a valid image reference for transport %q and reference %q", docker.Transport.Name(), named.String())
|
||||
}
|
||||
desc.TaggedImages = []types.ImageReference{srcRef}
|
||||
} else {
|
||||
desc.TaggedImages, err = imagesToCopyFromRepo(sourceCtx, named)
|
||||
if err != nil {
|
||||
return descriptors, err
|
||||
}
|
||||
if len(desc.TaggedImages) == 0 {
|
||||
return descriptors, errors.Errorf("No images to sync found in %q", source)
|
||||
}
|
||||
}
|
||||
descriptors = append(descriptors, desc)
|
||||
|
||||
@@ -420,7 +447,7 @@ func imagesToCopy(source string, transport string, sourceCtx *types.SystemContex
|
||||
return descriptors, err
|
||||
}
|
||||
for registryName, registryConfig := range cfg {
|
||||
if len(registryConfig.Images) == 0 {
|
||||
if len(registryConfig.Images) == 0 && len(registryConfig.ImagesByTagRegex) == 0 {
|
||||
logrus.WithFields(logrus.Fields{
|
||||
"registry": registryName,
|
||||
}).Warn("No images specified for registry")
|
||||
@@ -538,7 +565,7 @@ func (opts *syncOptions) run(args []string, stdout io.Writer) error {
|
||||
|
||||
_, err = copy.Image(ctx, policyContext, destRef, ref, &options)
|
||||
if err != nil {
|
||||
return errors.Wrapf(err, fmt.Sprintf("Error copying tag %q", transports.ImageName(ref)))
|
||||
return errors.Wrapf(err, "Error copying tag %q", transports.ImageName(ref))
|
||||
}
|
||||
imagesNumber++
|
||||
}
|
||||
|
||||
@@ -10,7 +10,8 @@ import (
|
||||
"github.com/containers/image/v5/transports/alltransports"
|
||||
"github.com/containers/image/v5/types"
|
||||
"github.com/pkg/errors"
|
||||
"github.com/urfave/cli"
|
||||
"github.com/spf13/cobra"
|
||||
"github.com/spf13/pflag"
|
||||
)
|
||||
|
||||
// errorShouldDisplayUsage is a subtype of error used by command handlers to indicate that cli.ShowSubcommandHelp should be called.
|
||||
@@ -18,16 +19,16 @@ type errorShouldDisplayUsage struct {
|
||||
error
|
||||
}
|
||||
|
||||
// commandAction intermediates between the cli.ActionFunc interface and the real handler,
|
||||
// primarily to ensure that cli.Context is not available to the handler, which in turn
|
||||
// makes sure that the cli.String() etc. flag access functions are not used,
|
||||
// and everything is done using the *Options structures and the Destination: members of cli.Flag.
|
||||
// handler may return errorShouldDisplayUsage to cause cli.ShowSubcommandHelp to be called.
|
||||
func commandAction(handler func(args []string, stdout io.Writer) error) cli.ActionFunc {
|
||||
return func(c *cli.Context) error {
|
||||
err := handler(([]string)(c.Args()), c.App.Writer)
|
||||
// commandAction intermediates between the RunE interface and the real handler,
|
||||
// primarily to ensure that cobra.Command is not available to the handler, which in turn
|
||||
// makes sure that the cmd.Flags() etc. flag access functions are not used,
|
||||
// and everything is done using the *Options structures and the *Var() methods of cmd.Flag().
|
||||
// handler may return errorShouldDisplayUsage to cause c.Help to be called.
|
||||
func commandAction(handler func(args []string, stdout io.Writer) error) func(cmd *cobra.Command, args []string) error {
|
||||
return func(c *cobra.Command, args []string) error {
|
||||
err := handler(args, c.OutOrStdout())
|
||||
if _, ok := err.(errorShouldDisplayUsage); ok {
|
||||
cli.ShowSubcommandHelp(c)
|
||||
c.Help()
|
||||
}
|
||||
return err
|
||||
}
|
||||
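To show how the new RunE adapter behaves end to end, here is a self-contained toy command (command and handler names invented) that reuses the wrapper pattern above; a handler returning errorShouldDisplayUsage triggers cobra's Help().

```go
package main

import (
	"errors"
	"fmt"
	"io"

	"github.com/spf13/cobra"
)

type errorShouldDisplayUsage struct{ error }

func commandAction(handler func(args []string, stdout io.Writer) error) func(*cobra.Command, []string) error {
	return func(c *cobra.Command, args []string) error {
		err := handler(args, c.OutOrStdout())
		if _, ok := err.(errorShouldDisplayUsage); ok {
			c.Help() // show usage when the handler signals a usage error
		}
		return err
	}
}

func main() {
	cmd := &cobra.Command{
		Use: "greet NAME",
		RunE: commandAction(func(args []string, stdout io.Writer) error {
			if len(args) != 1 {
				return errorShouldDisplayUsage{errors.New("exactly one argument expected")}
			}
			fmt.Fprintln(stdout, "hello", args[0])
			return nil
		}),
	}
	cmd.SetArgs([]string{"world"})
	_ = cmd.Execute() // prints "hello world"
}
```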
@@ -39,20 +40,15 @@ type sharedImageOptions struct {
|
||||
authFilePath string // Path to a */containers/auth.json
|
||||
}
|
||||
|
||||
// imageFlags prepares a collection of CLI flags writing into sharedImageOptions, and the managed sharedImageOptions structure.
|
||||
func sharedImageFlags() ([]cli.Flag, *sharedImageOptions) {
|
||||
// sharedImageFlags prepares a collection of CLI flags writing into sharedImageOptions, and the managed sharedImageOptions structure.
|
||||
func sharedImageFlags() (pflag.FlagSet, *sharedImageOptions) {
|
||||
opts := sharedImageOptions{}
|
||||
return []cli.Flag{
|
||||
cli.StringFlag{
|
||||
Name: "authfile",
|
||||
Usage: "path of the authentication file. Example: ${XDG_RUNTIME_DIR}/containers/auth.json",
|
||||
Value: os.Getenv("REGISTRY_AUTH_FILE"),
|
||||
Destination: &opts.authFilePath,
|
||||
},
|
||||
}, &opts
|
||||
fs := pflag.FlagSet{}
|
||||
fs.StringVar(&opts.authFilePath, "authfile", os.Getenv("REGISTRY_AUTH_FILE"), "path of the authentication file. Default is ${XDG_RUNTIME_DIR}/containers/auth.json")
|
||||
return fs, &opts
|
||||
}
|
||||
|
||||
// imageOptions collects CLI flags specific to the "docker" transport, which are
|
||||
// dockerImageOptions collects CLI flags specific to the "docker" transport, which are
|
||||
// the same across subcommands, but may be different for each image
|
||||
// (e.g. may differ between the source and destination of a copy)
|
||||
type dockerImageOptions struct {
|
||||
@@ -75,101 +71,60 @@ type imageOptions struct {
|
||||
|
||||
// dockerImageFlags prepares a collection of docker-transport specific CLI flags
|
||||
// writing into imageOptions, and the managed imageOptions structure.
|
||||
func dockerImageFlags(global *globalOptions, shared *sharedImageOptions, flagPrefix, credsOptionAlias string) ([]cli.Flag, *imageOptions) {
|
||||
opts := imageOptions{
|
||||
func dockerImageFlags(global *globalOptions, shared *sharedImageOptions, flagPrefix, credsOptionAlias string) (pflag.FlagSet, *imageOptions) {
|
||||
flags := imageOptions{
|
||||
dockerImageOptions: dockerImageOptions{
|
||||
global: global,
|
||||
shared: shared,
|
||||
},
|
||||
}
|
||||
|
||||
// This is horribly ugly, but we need to support the old option forms of (skopeo copy) for compatibility.
|
||||
// Don't add any more cases like this.
|
||||
credsOptionExtra := ""
|
||||
if credsOptionAlias != "" {
|
||||
credsOptionExtra += "," + credsOptionAlias
|
||||
}
|
||||
|
||||
var flags []cli.Flag
|
||||
fs := pflag.FlagSet{}
|
||||
if flagPrefix != "" {
|
||||
// the non-prefixed flag is handled by a shared flag.
|
||||
flags = append(flags,
|
||||
cli.GenericFlag{
|
||||
Name: flagPrefix + "authfile",
|
||||
Usage: "path of the authentication file. Example: ${XDG_RUNTIME_DIR}/containers/auth.json",
|
||||
Value: newOptionalStringValue(&opts.authFilePath),
|
||||
},
|
||||
)
|
||||
fs.Var(newOptionalStringValue(&flags.authFilePath), flagPrefix+"authfile", "path of the authentication file. Default is ${XDG_RUNTIME_DIR}/containers/auth.json")
|
||||
}
|
||||
flags = append(flags,
|
||||
cli.GenericFlag{
|
||||
Name: flagPrefix + "creds" + credsOptionExtra,
|
||||
Usage: "Use `USERNAME[:PASSWORD]` for accessing the registry",
|
||||
Value: newOptionalStringValue(&opts.credsOption),
|
||||
},
|
||||
cli.StringFlag{
|
||||
Name: flagPrefix + "cert-dir",
|
||||
Usage: "use certificates at `PATH` (*.crt, *.cert, *.key) to connect to the registry or daemon",
|
||||
Destination: &opts.dockerCertPath,
|
||||
},
|
||||
cli.GenericFlag{
|
||||
Name: flagPrefix + "tls-verify",
|
||||
Usage: "require HTTPS and verify certificates when talking to the container registry or daemon (defaults to true)",
|
||||
Value: newOptionalBoolValue(&opts.tlsVerify),
|
||||
},
|
||||
cli.BoolFlag{
|
||||
Name: flagPrefix + "no-creds",
|
||||
Usage: "Access the registry anonymously",
|
||||
Destination: &opts.noCreds,
|
||||
},
|
||||
)
|
||||
return flags, &opts
|
||||
fs.Var(newOptionalStringValue(&flags.credsOption), flagPrefix+"creds", "Use `USERNAME[:PASSWORD]` for accessing the registry")
|
||||
if credsOptionAlias != "" {
|
||||
// This is horribly ugly, but we need to support the old option forms of (skopeo copy) for compatibility.
|
||||
// Don't add any more cases like this.
|
||||
f := fs.VarPF(newOptionalStringValue(&flags.credsOption), credsOptionAlias, "", "Use `USERNAME[:PASSWORD]` for accessing the registry")
|
||||
f.Hidden = true
|
||||
}
|
||||
fs.StringVar(&flags.dockerCertPath, flagPrefix+"cert-dir", "", "use certificates at `PATH` (*.crt, *.cert, *.key) to connect to the registry or daemon")
|
||||
optionalBoolFlag(&fs, &flags.tlsVerify, flagPrefix+"tls-verify", "require HTTPS and verify certificates when talking to the container registry or daemon (defaults to true)")
|
||||
fs.BoolVar(&flags.noCreds, flagPrefix+"no-creds", false, "Access the registry anonymously")
|
||||
return fs, &flags
|
||||
}
|
||||
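The hidden-alias trick used for the legacy `screds`/`dcreds` options can be reproduced with plain pflag calls. The sketch below (flag names hypothetical) uses MarkHidden instead of the optionalStringValue/VarPF combination from the diff, but the effect is the same: the alias keeps working without showing up in help output.

```go
package main

import (
	"fmt"

	"github.com/spf13/pflag"
)

func main() {
	var creds string
	fs := pflag.NewFlagSet("demo", pflag.ContinueOnError)
	// Visible flag and a hidden legacy alias, both writing to the same variable.
	fs.StringVar(&creds, "dest-creds", "", "Use `USERNAME[:PASSWORD]` for accessing the registry")
	fs.StringVar(&creds, "dcreds", "", "Use `USERNAME[:PASSWORD]` for accessing the registry")
	_ = fs.MarkHidden("dcreds")

	_ = fs.Parse([]string{"--dcreds", "user:secret"})
	fmt.Println(creds) // user:secret — the alias still works but is hidden from --help
}
```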
|
||||
// imageFlags prepares a collection of CLI flags writing into imageOptions, and the managed imageOptions structure.
|
||||
func imageFlags(global *globalOptions, shared *sharedImageOptions, flagPrefix, credsOptionAlias string) ([]cli.Flag, *imageOptions) {
|
||||
func imageFlags(global *globalOptions, shared *sharedImageOptions, flagPrefix, credsOptionAlias string) (pflag.FlagSet, *imageOptions) {
|
||||
dockerFlags, opts := dockerImageFlags(global, shared, flagPrefix, credsOptionAlias)
|
||||
|
||||
return append(dockerFlags, []cli.Flag{
|
||||
cli.StringFlag{
|
||||
Name: flagPrefix + "shared-blob-dir",
|
||||
Usage: "`DIRECTORY` to use to share blobs across OCI repositories",
|
||||
Destination: &opts.sharedBlobDir,
|
||||
},
|
||||
cli.StringFlag{
|
||||
Name: flagPrefix + "daemon-host",
|
||||
Usage: "use docker daemon host at `HOST` (docker-daemon: only)",
|
||||
Destination: &opts.dockerDaemonHost,
|
||||
},
|
||||
}...), opts
|
||||
fs := pflag.FlagSet{}
|
||||
fs.StringVar(&opts.sharedBlobDir, flagPrefix+"shared-blob-dir", "", "`DIRECTORY` to use to share blobs across OCI repositories")
|
||||
fs.StringVar(&opts.dockerDaemonHost, flagPrefix+"daemon-host", "", "use docker daemon host at `HOST` (docker-daemon: only)")
|
||||
fs.AddFlagSet(&dockerFlags)
|
||||
return fs, opts
|
||||
}
|
||||
|
||||
// newSystemContext returns a *types.SystemContext corresponding to opts.
|
||||
// It is guaranteed to return a fresh instance, so it is safe to make additional updates to it.
|
||||
func (opts *imageOptions) newSystemContext() (*types.SystemContext, error) {
|
||||
ctx := &types.SystemContext{
|
||||
RegistriesDirPath: opts.global.registriesDirPath,
|
||||
ArchitectureChoice: opts.global.overrideArch,
|
||||
OSChoice: opts.global.overrideOS,
|
||||
VariantChoice: opts.global.overrideVariant,
|
||||
DockerCertPath: opts.dockerCertPath,
|
||||
OCISharedBlobDirPath: opts.sharedBlobDir,
|
||||
AuthFilePath: opts.shared.authFilePath,
|
||||
DockerDaemonHost: opts.dockerDaemonHost,
|
||||
DockerDaemonCertPath: opts.dockerCertPath,
|
||||
SystemRegistriesConfPath: opts.global.registriesConfPath,
|
||||
BigFilesTemporaryDir: opts.global.tmpDir,
|
||||
}
|
||||
// Start from the *types.SystemContext instance produced by globalOptions;
|
||||
// the imageOptions values below override it where both are present.
|
||||
ctx := opts.global.newSystemContext()
|
||||
ctx.DockerCertPath = opts.dockerCertPath
|
||||
ctx.OCISharedBlobDirPath = opts.sharedBlobDir
|
||||
ctx.AuthFilePath = opts.shared.authFilePath
|
||||
ctx.DockerDaemonHost = opts.dockerDaemonHost
|
||||
ctx.DockerDaemonCertPath = opts.dockerCertPath
|
||||
if opts.dockerImageOptions.authFilePath.present {
|
||||
ctx.AuthFilePath = opts.dockerImageOptions.authFilePath.value
|
||||
}
|
||||
if opts.tlsVerify.present {
|
||||
ctx.DockerDaemonInsecureSkipTLSVerify = !opts.tlsVerify.value
|
||||
}
|
||||
// DEPRECATED: We support this for backward compatibility, but override it if a per-image flag is provided.
|
||||
if opts.global.tlsVerify.present {
|
||||
ctx.DockerInsecureSkipTLSVerify = types.NewOptionalBool(!opts.global.tlsVerify.value)
|
||||
}
|
||||
if opts.tlsVerify.present {
|
||||
ctx.DockerInsecureSkipTLSVerify = types.NewOptionalBool(!opts.tlsVerify.value)
|
||||
}
|
||||
@@ -200,32 +155,16 @@ type imageDestOptions struct {
|
||||
}
|
||||
|
||||
// imageDestFlags prepares a collection of CLI flags writing into imageDestOptions, and the managed imageDestOptions structure.
|
||||
func imageDestFlags(global *globalOptions, shared *sharedImageOptions, flagPrefix, credsOptionAlias string) ([]cli.Flag, *imageDestOptions) {
|
||||
func imageDestFlags(global *globalOptions, shared *sharedImageOptions, flagPrefix, credsOptionAlias string) (pflag.FlagSet, *imageDestOptions) {
|
||||
genericFlags, genericOptions := imageFlags(global, shared, flagPrefix, credsOptionAlias)
|
||||
opts := imageDestOptions{imageOptions: genericOptions}
|
||||
|
||||
return append(genericFlags, []cli.Flag{
|
||||
cli.BoolFlag{
|
||||
Name: flagPrefix + "compress",
|
||||
Usage: "Compress tarball image layers when saving to directory using the 'dir' transport. (default is same compression type as source)",
|
||||
Destination: &opts.dirForceCompression,
|
||||
},
|
||||
cli.BoolFlag{
|
||||
Name: flagPrefix + "oci-accept-uncompressed-layers",
|
||||
Usage: "Allow uncompressed image layers when saving to an OCI image using the 'oci' transport. (default is to compress things that aren't compressed)",
|
||||
Destination: &opts.ociAcceptUncompressedLayers,
|
||||
},
|
||||
cli.StringFlag{
|
||||
Name: flagPrefix + "compress-format",
|
||||
Usage: "`FORMAT` to use for the compression",
|
||||
Destination: &opts.compressionFormat,
|
||||
},
|
||||
cli.GenericFlag{
|
||||
Name: flagPrefix + "compress-level",
|
||||
Usage: "`LEVEL` to use for the compression",
|
||||
Value: newOptionalIntValue(&opts.compressionLevel),
|
||||
},
|
||||
}...), &opts
|
||||
fs := pflag.FlagSet{}
|
||||
fs.AddFlagSet(&genericFlags)
|
||||
fs.BoolVar(&opts.dirForceCompression, flagPrefix+"compress", false, "Compress tarball image layers when saving to directory using the 'dir' transport. (default is same compression type as source)")
|
||||
fs.BoolVar(&opts.ociAcceptUncompressedLayers, flagPrefix+"oci-accept-uncompressed-layers", false, "Allow uncompressed image layers when saving to an OCI image using the 'oci' transport. (default is to compress things that aren't compressed)")
|
||||
fs.StringVar(&opts.compressionFormat, flagPrefix+"compress-format", "", "`FORMAT` to use for the compression")
|
||||
fs.Var(newOptionalIntValue(&opts.compressionLevel), flagPrefix+"compress-level", "`LEVEL` to use for the compression")
|
||||
return fs, &opts
|
||||
}
|
||||
|
||||
// newSystemContext returns a *types.SystemContext corresponding to opts.
|
||||
@@ -276,20 +215,6 @@ func getDockerAuth(creds string) (*types.DockerAuthConfig, error) {
|
||||
}, nil
|
||||
}
|
||||
|
||||
// parseImage converts image URL-like string to an initialized handler for that image.
|
||||
// The caller must call .Close() on the returned ImageCloser.
|
||||
func parseImage(ctx context.Context, opts *imageOptions, name string) (types.ImageCloser, error) {
|
||||
ref, err := alltransports.ParseImageName(name)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
sys, err := opts.newSystemContext()
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return ref.NewImage(ctx, sys)
|
||||
}
|
||||
|
||||
// parseImageSource converts image URL-like string to an ImageSource.
|
||||
// The caller must call .Close() on the returned ImageSource.
|
||||
func parseImageSource(ctx context.Context, opts *imageOptions, name string) (types.ImageSource, error) {
|
||||
@@ -303,3 +228,32 @@ func parseImageSource(ctx context.Context, opts *imageOptions, name string) (typ
|
||||
}
|
||||
return ref.NewImageSource(ctx, sys)
|
||||
}
|
||||
|
||||
// usageTemplate is the usage template for skopeo commands
|
||||
// It prevents the global options from being displayed. The main skopeo
|
||||
// command should not use this.
|
||||
const usageTemplate = `Usage:{{if .Runnable}}
|
||||
{{.UseLine}}{{end}}{{if .HasAvailableSubCommands}}
|
||||
|
||||
{{.CommandPath}} [command]{{end}}{{if gt (len .Aliases) 0}}
|
||||
|
||||
Aliases:
|
||||
{{.NameAndAliases}}{{end}}{{if .HasExample}}
|
||||
|
||||
Examples:
|
||||
{{.Example}}{{end}}{{if .HasAvailableSubCommands}}
|
||||
|
||||
Available Commands:{{range .Commands}}{{if (or .IsAvailableCommand (eq .Name "help"))}}
|
||||
{{rpad .Name .NamePadding }} {{.Short}}{{end}}{{end}}{{end}}{{if .HasAvailableLocalFlags}}
|
||||
|
||||
Flags:
|
||||
{{.LocalFlags.FlagUsages | trimTrailingWhitespaces}}{{end}}{{if .HasAvailableInheritedFlags}}
|
||||
{{end}}
|
||||
`
|
||||
|
||||
// adjustUsage applies usageTemplate to remove the global options from the usage output
|
||||
// and disable [flag] at the end of command usage
|
||||
func adjustUsage(c *cobra.Command) {
|
||||
c.SetUsageTemplate(usageTemplate)
|
||||
c.DisableFlagsInUseLine = true
|
||||
}
|
||||
|
||||
@@ -1,42 +1,33 @@
|
||||
package main
|
||||
|
||||
import (
|
||||
"flag"
|
||||
"os"
|
||||
"testing"
|
||||
|
||||
"github.com/containers/image/v5/types"
|
||||
"github.com/spf13/cobra"
|
||||
"github.com/stretchr/testify/assert"
|
||||
"github.com/stretchr/testify/require"
|
||||
)
|
||||
|
||||
// fakeGlobalOptions creates globalOptions and sets it according to flags.
|
||||
// NOTE: This is QUITE FAKE; none of the urfave/cli normalization and the like happens.
|
||||
func fakeGlobalOptions(t *testing.T, flags []string) *globalOptions {
|
||||
func fakeGlobalOptions(t *testing.T, flags []string) (*globalOptions, *cobra.Command) {
|
||||
app, opts := createApp()
|
||||
|
||||
flagSet := flag.NewFlagSet(app.Name, flag.ContinueOnError)
|
||||
for _, f := range app.Flags {
|
||||
f.Apply(flagSet)
|
||||
}
|
||||
err := flagSet.Parse(flags)
|
||||
cmd := &cobra.Command{}
|
||||
app.AddCommand(cmd)
|
||||
err := cmd.ParseFlags(flags)
|
||||
require.NoError(t, err)
|
||||
|
||||
return opts
|
||||
return opts, cmd
|
||||
}
|
||||
|
||||
// fakeImageOptions creates imageOptions and sets it according to globalFlags/cmdFlags.
|
||||
// NOTE: This is QUITE FAKE; none of the urfave/cli normalization and the like happens.
|
||||
func fakeImageOptions(t *testing.T, flagPrefix string, globalFlags []string, cmdFlags []string) *imageOptions {
|
||||
globalOpts := fakeGlobalOptions(t, globalFlags)
|
||||
|
||||
globalOpts, cmd := fakeGlobalOptions(t, globalFlags)
|
||||
sharedFlags, sharedOpts := sharedImageFlags()
|
||||
imageFlags, imageOpts := imageFlags(globalOpts, sharedOpts, flagPrefix, "")
|
||||
flagSet := flag.NewFlagSet("fakeImageOptions", flag.ContinueOnError)
|
||||
for _, f := range append(sharedFlags, imageFlags...) {
|
||||
f.Apply(flagSet)
|
||||
}
|
||||
err := flagSet.Parse(cmdFlags)
|
||||
cmd.Flags().AddFlagSet(&sharedFlags)
|
||||
cmd.Flags().AddFlagSet(&imageFlags)
|
||||
err := cmd.ParseFlags(cmdFlags)
|
||||
require.NoError(t, err)
|
||||
return imageOpts
|
||||
}
|
||||
@@ -120,17 +111,13 @@ func TestImageOptionsNewSystemContext(t *testing.T) {
|
||||
}
|
||||
|
||||
// fakeImageDestOptions creates imageDestOptions and sets it according to globalFlags/cmdFlags.
|
||||
// NOTE: This is QUITE FAKE; none of the urfave/cli normalization and the like happens.
|
||||
func fakeImageDestOptions(t *testing.T, flagPrefix string, globalFlags []string, cmdFlags []string) *imageDestOptions {
|
||||
globalOpts := fakeGlobalOptions(t, globalFlags)
|
||||
|
||||
globalOpts, cmd := fakeGlobalOptions(t, globalFlags)
|
||||
sharedFlags, sharedOpts := sharedImageFlags()
|
||||
imageFlags, imageOpts := imageDestFlags(globalOpts, sharedOpts, flagPrefix, "")
|
||||
flagSet := flag.NewFlagSet("fakeImageDestOptions", flag.ContinueOnError)
|
||||
for _, f := range append(sharedFlags, imageFlags...) {
|
||||
f.Apply(flagSet)
|
||||
}
|
||||
err := flagSet.Parse(cmdFlags)
|
||||
cmd.Flags().AddFlagSet(&sharedFlags)
|
||||
cmd.Flags().AddFlagSet(&imageFlags)
|
||||
err := cmd.ParseFlags(cmdFlags)
|
||||
require.NoError(t, err)
|
||||
return imageOpts
|
||||
}
|
||||
|
||||
@@ -158,6 +158,33 @@ _skopeo_list_repository_tags() {
|
||||
_complete_ "$options_with_args" "$boolean_options"
|
||||
}
|
||||
|
||||
_skopeo_login() {
|
||||
local options_with_args="
|
||||
--authfile
|
||||
--cert-dir
|
||||
--password -p
|
||||
--username -u
|
||||
"
|
||||
|
||||
local boolean_options="
|
||||
--get-login
|
||||
--tls-verify
|
||||
--password-stdin
|
||||
"
|
||||
_complete_ "$options_with_args" "$boolean_options"
|
||||
}
|
||||
|
||||
_skopeo_logout() {
|
||||
local options_with_args="
|
||||
--authfile
|
||||
"
|
||||
|
||||
local boolean_options="
|
||||
--all -a
|
||||
"
|
||||
_complete_ "$options_with_args" "$boolean_options"
|
||||
}
|
||||
|
||||
_skopeo_skopeo() {
|
||||
# XXX: Changes here need to be reflected in the manually expanded
|
||||
# string in the `case` statement below as well.
|
||||
@@ -177,6 +204,21 @@ _skopeo_skopeo() {
|
||||
--help -h
|
||||
"
|
||||
|
||||
local commands=(
|
||||
copy
|
||||
delete
|
||||
inspect
|
||||
list-tags
|
||||
login
|
||||
logout
|
||||
manifest-digest
|
||||
standalone-sign
|
||||
standalone-verify
|
||||
sync
|
||||
help
|
||||
h
|
||||
)
|
||||
|
||||
case "$prev" in
|
||||
# XXX: Changes here need to be reflected in $options_with_args as well.
|
||||
--policy|--registries.d|--override-arch|--override-os|--override-variant|--command-timeout)
|
||||
@@ -189,8 +231,6 @@ _skopeo_skopeo() {
|
||||
while IFS='' read -r line; do COMPREPLY+=("$line"); done < <(compgen -W "$boolean_options $options_with_args" -- "$cur")
|
||||
;;
|
||||
*)
|
||||
commands=$( "${COMP_WORDS[@]:0:$COMP_CWORD}" --generate-bash-completion )
|
||||
|
||||
while IFS='' read -r line; do COMPREPLY+=("$line"); done < <(compgen -W "${commands[*]} help" -- "$cur")
|
||||
;;
|
||||
esac
|
||||
|
||||
36
contrib/skopeoimage/README.md
Normal file
@@ -0,0 +1,36 @@
|
||||
<img src="https://cdn.rawgit.com/containers/skopeo/master/docs/skopeo.svg" width="250">
|
||||
|
||||
----
|
||||
|
||||
# skopeoimage
|
||||
|
||||
## Overview
|
||||
|
||||
This directory contains the Dockerfiles necessary to create the three skopeoimage container
|
||||
images that are housed on quay.io under the skopeo account. All three repositories where
|
||||
the images live are public and can be pulled without credentials. These container images
|
||||
are secured and the resulting containers can run safely. The container images are built
|
||||
using the latest Fedora and then Skopeo is installed into them:
|
||||
|
||||
* quay.io/skopeo/stable - This image is built using the latest stable version of Skopeo in a Fedora based container. Built with skopeoimage/stable/Dockerfile.
|
||||
* quay.io/skopeo/upstream - This image is built using the latest code found in this GitHub repository. Each commit pushed to the repository triggers a new image build, so the image changes frequently and is not guaranteed to be stable. Built with skopeoimage/upstream/Dockerfile.
|
||||
* quay.io/skopeo/testing - This image is built using the latest version of Skopeo that is or was in updates testing for Fedora. At times this may be the same as the stable image. This container image will primarily be used by the development teams for verification testing when a new package is created. Built with skopeoimage/testing/Dockerfile.
|
||||
|
||||
## Sample Usage
|
||||
|
||||
Although not required, it is suggested that [Podman](https://github.com/containers/libpod) be used with these container images.
|
||||
|
||||
```
|
||||
# Get Help on Skopeo
|
||||
podman run docker://quay.io/skopeo/stable:latest --help
|
||||
|
||||
# Get help on the Skopeo Copy command
|
||||
podman run docker://quay.io/skopeo/stable:latest copy --help
|
||||
|
||||
# Copy the Skopeo container image from quay.io to
|
||||
# a private registry
|
||||
podman run docker://quay.io/skopeo/stable:latest copy docker://quay.io/skopeo/stable docker://registry.internal.company.com/skopeo
|
||||
|
||||
# Inspect the fedora:latest image
|
||||
podman run docker://quay.io/skopeo/stable:latest inspect --config docker://registry.fedoraproject.org/fedora:latest | jq
|
||||
```
|
||||
33
contrib/skopeoimage/stable/Dockerfile
Normal file
@@ -0,0 +1,33 @@
|
||||
# stable/Dockerfile
|
||||
#
|
||||
# Build a Skopeo container image from the latest
|
||||
# stable version of Skopeo in the Fedora Updates System.
|
||||
# https://bodhi.fedoraproject.org/updates/?search=skopeo
|
||||
# This image can be used to create a secured container
|
||||
# that runs safely with privileges within the container.
|
||||
#
|
||||
FROM registry.fedoraproject.org/fedora:32
|
||||
|
||||
# Don't include container-selinux and remove
|
||||
# directories used by yum that are just taking
|
||||
# up space. Also reinstall shadow-utils as without
|
||||
# doing so, the setuid/setgid bits on newuidmap
|
||||
# and newgidmap are lost in the Fedora images.
|
||||
RUN useradd skopeo; yum -y update; yum -y reinstall shadow-utils; yum -y install skopeo fuse-overlayfs --exclude container-selinux; yum clean all; rm -rf /var/cache /var/log/dnf* /var/log/yum.*;
|
||||
|
||||
# Adjust storage.conf to enable Fuse storage.
|
||||
RUN sed -i -e 's|^#mount_program|mount_program|g' -e '/additionalimage.*/a "/var/lib/shared",' -e 's|^mountopt[[:space:]]*=.*$|mountopt = "nodev,fsync=0"|g' /etc/containers/storage.conf
|
||||
|
||||
# Setup the ability to use additional stores
|
||||
# with this container image.
|
||||
RUN mkdir -p /var/lib/shared/overlay-images /var/lib/shared/overlay-layers; touch /var/lib/shared/overlay-images/images.lock; touch /var/lib/shared/overlay-layers/layers.lock
|
||||
|
||||
# Set up skopeo's uid/gid entries
|
||||
RUN echo skopeo:100000:65536 > /etc/subuid
|
||||
RUN echo skopeo:100000:65536 > /etc/subgid
|
||||
|
||||
# Point to the Authorization file
|
||||
ENV REGISTRY_AUTH_FILE=/auth.json
|
||||
|
||||
# Set the entrypoint
|
||||
ENTRYPOINT ["/usr/bin/skopeo"]
|
||||
34
contrib/skopeoimage/testing/Dockerfile
Normal file
@@ -0,0 +1,34 @@
|
||||
# testing/Dockerfile
|
||||
#
|
||||
# Build a Skopeo container image from the latest
|
||||
# version of Skopeo that is in updates-testing
|
||||
# in the Fedora Updates System.
|
||||
# https://bodhi.fedoraproject.org/updates/?search=skopeo
|
||||
# This image can be used to create a secured container
|
||||
# that runs safely with privileges within the container.
|
||||
#
|
||||
FROM registry.fedoraproject.org/fedora:32
|
||||
|
||||
# Don't include container-selinux and remove
|
||||
# directories used by yum that are just taking
|
||||
# up space. Also reinstall shadow-utils as without
|
||||
# doing so, the setuid/setgid bits on newuidmap
|
||||
# and newgidmap are lost in the Fedora images.
|
||||
RUN useradd skopeo; yum -y update; yum -y reinstall shadow-utils; yum -y install skopeo fuse-overlayfs --enablerepo updates-testing --exclude container-selinux; yum clean all; rm -rf /var/cache /var/log/dnf* /var/log/yum.*;
|
||||
|
||||
# Adjust storage.conf to enable Fuse storage.
|
||||
RUN sed -i -e 's|^#mount_program|mount_program|g' -e '/additionalimage.*/a "/var/lib/shared",' -e 's|^mountopt[[:space:]]*=.*$|mountopt = "nodev,fsync=0"|g' /etc/containers/storage.conf
|
||||
|
||||
# Setup the ability to use additional stores
|
||||
# with this container image.
|
||||
RUN mkdir -p /var/lib/shared/overlay-images /var/lib/shared/overlay-layers; touch /var/lib/shared/overlay-images/images.lock; touch /var/lib/shared/overlay-layers/layers.lock
|
||||
|
||||
# Set up skopeo's uid/gid entries
|
||||
RUN echo skopeo:100000:65536 > /etc/subuid
|
||||
RUN echo skopeo:100000:65536 > /etc/subgid
|
||||
|
||||
# Point to the Authorization file
|
||||
ENV REGISTRY_AUTH_FILE=/auth.json
|
||||
|
||||
# Set the entrypoint
|
||||
ENTRYPOINT ["/usr/bin/skopeo"]
|
||||
53
contrib/skopeoimage/upstream/Dockerfile
Normal file
@@ -0,0 +1,53 @@
|
||||
# upstream/Dockerfile
|
||||
#
|
||||
# Build a Skopeo container image from the latest
|
||||
# upstream version of Skopeo on GitHub.
|
||||
# https://github.com/containers/skopeo
|
||||
# This image can be used to create a secured container
|
||||
# that runs safely with privileges within the container.
|
||||
#
|
||||
FROM registry.fedoraproject.org/fedora:32
|
||||
|
||||
# Don't include container-selinux and remove
|
||||
# directories used by yum that are just taking
|
||||
# up space. Also reinstall shadow-utils as without
|
||||
# doing so, the setuid/setgid bits on newuidmap
|
||||
# and newgidmap are lost in the Fedora images.
|
||||
RUN useradd skopeo; yum -y update; yum -y reinstall shadow-utils; \
|
||||
yum -y install make \
|
||||
golang \
|
||||
git \
|
||||
go-md2man \
|
||||
fuse-overlayfs \
|
||||
fuse3 \
|
||||
containers-common \
|
||||
gpgme-devel \
|
||||
libassuan-devel \
|
||||
btrfs-progs-devel \
|
||||
device-mapper-devel --enablerepo updates-testing --exclude container-selinux; \
|
||||
mkdir /root/skopeo; \
|
||||
git clone https://github.com/containers/skopeo /root/skopeo/src/github.com/containers/skopeo; \
|
||||
export GOPATH=/root/skopeo; \
|
||||
cd /root/skopeo/src/github.com/containers/skopeo; \
|
||||
make binary-local;\
|
||||
make install;\
|
||||
rm -rf /root/skopeo/*; \
|
||||
yum -y remove git golang go-md2man make; \
|
||||
yum clean all; rm -rf /var/cache /var/log/dnf* /var/log/yum.*;
|
||||
|
||||
# Adjust storage.conf to enable Fuse storage.
|
||||
RUN sed -i -e 's|^#mount_program|mount_program|g' -e '/additionalimage.*/a "/var/lib/shared",' -e 's|^mountopt[[:space:]]*=.*$|mountopt = "nodev,fsync=0"|g' /etc/containers/storage.conf
|
||||
|
||||
# Setup the ability to use additional stores
|
||||
# with this container image.
|
||||
RUN mkdir -p /var/lib/shared/overlay-images /var/lib/shared/overlay-layers; touch /var/lib/shared/overlay-images/images.lock; touch /var/lib/shared/overlay-layers/layers.lock
|
||||
|
||||
# Set up skopeo's uid/gid entries
|
||||
RUN echo skopeo:100000:65536 > /etc/subuid
|
||||
RUN echo skopeo:100000:65536 > /etc/subgid
|
||||
|
||||
# Point to the Authorization file
|
||||
ENV REGISTRY_AUTH_FILE=/auth.json
|
||||
|
||||
# Set the entrypoint
|
||||
ENTRYPOINT ["/usr/bin/skopeo"]
|
||||
@@ -25,7 +25,7 @@ the images in the list, and the list itself.
|
||||
|
||||
**--authfile** _path_
|
||||
|
||||
Path of the authentication file. Default is ${XDG_RUNTIME\_DIR}/containers/auth.json, which is set using `podman login`.
|
||||
Path of the authentication file. Default is ${XDG_RUNTIME\_DIR}/containers/auth.json, which is set using `skopeo login`.
|
||||
If the authorization state is not found there, $HOME/.docker/config.json is checked, which is set using `docker login`.
|
||||
|
||||
Note: You can also override the default path of the authentication file by setting the REGISTRY\_AUTH\_FILE
|
||||
@@ -96,7 +96,7 @@ $ ls /var/lib/images/busybox/*
|
||||
To copy and sign an image:
|
||||
|
||||
```sh
|
||||
# skopeo copy --sign-by dev@example.com container-storage:example/busybox:streaming docker://example/busybox:gold
|
||||
# skopeo copy --sign-by dev@example.com containers-storage:example/busybox:streaming docker://example/busybox:gold
|
||||
```
|
||||
|
||||
To encrypt an image:
|
||||
@@ -132,7 +132,7 @@ skopeo copy --encryption-key jwe:./public.key --encrypt-layer 1 oci:local_nginx
|
||||
```
|
||||
|
||||
## SEE ALSO
|
||||
skopeo(1), podman-login(1), docker-login(1)
|
||||
skopeo(1), skopeo-login(1), docker-login(1), containers-auth.json(5), containers-policy.json(5), containers-transports(5)
|
||||
|
||||
## AUTHORS
|
||||
|
||||
|
||||
@@ -21,7 +21,7 @@ $ docker exec -it registry /usr/bin/registry garbage-collect /etc/docker-distrib
|
||||
|
||||
**--authfile** _path_
|
||||
|
||||
Path of the authentication file. Default is ${XDG_RUNTIME\_DIR}/containers/auth.json, which is set using `podman login`.
|
||||
Path of the authentication file. Default is ${XDG_RUNTIME\_DIR}/containers/auth.json, which is set using `skopeo login`.
|
||||
If the authorization state is not found there, $HOME/.docker/config.json is checked, which is set using `docker login`.
|
||||
|
||||
**--creds** _username[:password]_ for accessing the registry
|
||||
@@ -44,7 +44,7 @@ See above for additional details on using the command **delete**.
|
||||
|
||||
|
||||
## SEE ALSO
|
||||
skopeo(1), podman-login(1), docker-login(1)
|
||||
skopeo(1), skopeo-login(1), docker-login(1), containers-auth.json(5)
|
||||
|
||||
## AUTHORS
|
||||
|
||||
|
||||
@@ -22,7 +22,7 @@ Return low-level information about _image-name_ in a registry
|
||||
|
||||
**--authfile** _path_
|
||||
|
||||
Path of the authentication file. Default is ${XDG\_RUNTIME\_DIR}/containers/auth.json, which is set using `podman login`.
|
||||
Path of the authentication file. Default is ${XDG\_RUNTIME\_DIR}/containers/auth.json, which is set using `skopeo login`.
|
||||
If the authorization state is not found there, $HOME/.docker/config.json is checked, which is set using `docker login`.
|
||||
|
||||
**--creds** _username[:password]_ for accessing the registry
|
||||
@@ -63,7 +63,7 @@ $ skopeo inspect docker://docker.io/fedora
|
||||
```
|
||||
|
||||
# SEE ALSO
|
||||
skopeo(1), podman-login(1), docker-login(1)
|
||||
skopeo(1), skopeo-login(1), docker-login(1), containers-auth.json(5)
|
||||
|
||||
## AUTHORS
|
||||
|
||||
|
||||
@@ -12,7 +12,7 @@ Return a list of tags from _repository-name_ in a registry.
|
||||
|
||||
**--authfile** _path_
|
||||
|
||||
Path of the authentication file. Default is ${XDG\_RUNTIME\_DIR}/containers/auth.json, which is set using `podman login`.
|
||||
Path of the authentication file. Default is ${XDG\_RUNTIME\_DIR}/containers/auth.json, which is set using `skopeo login`.
|
||||
If the authorization state is not found there, $HOME/.docker/config.json is checked, which is set using `docker login`.
|
||||
|
||||
**--creds** _username[:password]_ for accessing the registry
|
||||
@@ -30,7 +30,7 @@ Repository names are transport-specific references as each transport may have it
|
||||
This commands refers to repositories using a _transport_`:`_details_ format. The following formats are supported:
|
||||
|
||||
**docker://**_docker-repository-reference_
|
||||
A repository in a registry implementing the "Docker Registry HTTP API V2". By default, uses the authorization state in either `$XDG_RUNTIME_DIR/containers/auth.json`, which is set using `(podman login)`. If the authorization state is not found there, `$HOME/.docker/config.json` is checked, which is set using `(docker login)`.
|
||||
A repository in a registry implementing the "Docker Registry HTTP API V2". By default, uses the authorization state in either `$XDG_RUNTIME_DIR/containers/auth.json`, which is set using `(skopeo login)`. If the authorization state is not found there, `$HOME/.docker/config.json` is checked, which is set using `(docker login)`.
|
||||
A _docker-repository-reference_ is of the form: **registryhost:port/repositoryname** which is similar to an _image-reference_ but with no tag or digest allowed as the last component (e.g no `:latest` or `@sha256:xyz`)
|
||||
|
||||
Examples of valid docker-repository-references:
|
||||
@@ -94,7 +94,7 @@ $ skopeo list-tags docker://localhost:5000/fedora
|
||||
```
|
||||
|
||||
# SEE ALSO
|
||||
skopeo(1), podman-login(1), docker-login(1)
|
||||
skopeo(1), skopeo-login(1), docker-login(1), containers-auth.json(5)
|
||||
|
||||
## AUTHORS
|
||||
|
||||
|
||||
101
docs/skopeo-login.1.md
Normal file
@@ -0,0 +1,101 @@
|
||||
% skopeo-login(1)
|
||||
|
||||
## NAME
|
||||
skopeo\-login - Login to a container registry
|
||||
|
||||
## SYNOPSIS
|
||||
**skopeo login** [*options*] *registry*
|
||||
|
||||
## DESCRIPTION
|
||||
**skopeo login** logs into a specified registry server with the correct username
|
||||
and password. **skopeo login** reads in the username and password from STDIN.
|
||||
The username and password can also be set using the **username** and **password** flags.
|
||||
The path of the authentication file can be specified by the user by setting the **authfile**
|
||||
flag. The default path used is **${XDG\_RUNTIME\_DIR}/containers/auth.json**.
|
||||
|
||||
## OPTIONS
|
||||
|
||||
**--password**, **-p**=*password*
|
||||
|
||||
Password for registry
|
||||
|
||||
**--password-stdin**
|
||||
|
||||
Take the password from stdin
|
||||
|
||||
**--username**, **-u**=*username*
|
||||
|
||||
Username for registry
|
||||
|
||||
**--authfile**=*path*
|
||||
|
||||
Path of the authentication file. Default is ${XDG\_RUNTIME\_DIR}/containers/auth.json
|
||||
|
||||
Note: You can also override the default path of the authentication file by setting the REGISTRY\_AUTH\_FILE
|
||||
environment variable. `export REGISTRY_AUTH_FILE=path`
|
||||
|
||||
**--get-login**
|
||||
|
||||
Return the logged-in user for the registry. Return an error if no login is found.
|
||||
|
||||
**--cert-dir**=*path*
|
||||
|
||||
Use certificates at *path* (\*.crt, \*.cert, \*.key) to connect to the registry.
|
||||
Default certificates directory is _/etc/containers/certs.d_.
|
||||
|
||||
**--tls-verify**=*true|false*
|
||||
|
||||
Require HTTPS and verify certificates when contacting registries (default: true). If explicitly set to true,
|
||||
then TLS verification will be used. If set to false, then TLS verification will not be used. If not specified,
|
||||
TLS verification will be used unless the target registry is listed as an insecure registry in registries.conf.
|
||||
|
||||
**--help**, **-h**
|
||||
|
||||
Print usage statement
|
||||
|
||||
## EXAMPLES
|
||||
|
||||
```
|
||||
$ skopeo login docker.io
|
||||
Username: testuser
|
||||
Password:
|
||||
Login Succeeded!
|
||||
```
|
||||
|
||||
```
|
||||
$ skopeo login -u testuser -p testpassword localhost:5000
|
||||
Login Succeeded!
|
||||
```
|
||||
|
||||
```
|
||||
$ skopeo login --authfile authdir/myauths.json docker.io
|
||||
Username: testuser
|
||||
Password:
|
||||
Login Succeeded!
|
||||
```
|
||||
|
||||
```
|
||||
$ skopeo login --tls-verify=false -u test -p test localhost:5000
|
||||
Login Succeeded!
|
||||
```
|
||||
|
||||
```
|
||||
$ skopeo login --cert-dir /etc/containers/certs.d/ -u foo -p bar localhost:5000
|
||||
Login Succeeded!
|
||||
```
|
||||
|
||||
```
|
||||
$ skopeo login -u testuser --password-stdin < testpassword.txt docker.io
|
||||
Login Succeeded!
|
||||
```
|
||||
|
||||
```
|
||||
$ echo $testpassword | skopeo login -u testuser --password-stdin docker.io
|
||||
Login Succeeded!
|
||||
```
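
The **--get-login** flag described above can also be used to check which account, if any, is currently logged in to a registry. A minimal sketch, assuming a user named `testuser` has already logged in (the names are illustrative only):

```
$ skopeo login --get-login docker.io
testuser
```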
|
||||
|
||||
## SEE ALSO
|
||||
skopeo(1), skopeo-logout(1), containers-auth.json(5), containers-registries.conf(5), containers-certs.d(5)
|
||||
|
||||
## HISTORY
|
||||
May 2020, Originally compiled by Qi Wang <qiwan@redhat.com>
|
||||
53
docs/skopeo-logout.1.md
Normal file
@@ -0,0 +1,53 @@
|
||||
% skopeo-logout(1)
|
||||
|
||||
## NAME
|
||||
skopeo\-logout - Logout of a container registry
|
||||
|
||||
## SYNOPSIS
|
||||
**skopeo logout** [*options*] *registry*
|
||||
|
||||
## DESCRIPTION
|
||||
**skopeo logout** logs out of a specified registry server by deleting the cached credentials
|
||||
stored in the **auth.json** file. The path of the authentication file can be overridden by the user by setting the **authfile** flag.
|
||||
The default path used is **${XDG\_RUNTIME\_DIR}/containers/auth.json**.
|
||||
All the cached credentials can be removed by setting the **all** flag.
|
||||
|
||||
## OPTIONS
|
||||
|
||||
**--authfile**=*path*
|
||||
|
||||
Path of the authentication file. Default is ${XDG\_RUNTIME\_DIR}/containers/auth.json
|
||||
|
||||
Note: You can also override the default path of the authentication file by setting the REGISTRY\_AUTH\_FILE
|
||||
environment variable. `export REGISTRY_AUTH_FILE=path`
|
||||
|
||||
**--all**, **-a**
|
||||
|
||||
Remove the cached credentials for all registries in the auth file
|
||||
|
||||
**--help**, **-h**
|
||||
|
||||
Print usage statement
|
||||
|
||||
## EXAMPLES
|
||||
|
||||
```
|
||||
$ skopeo logout docker.io
|
||||
Remove login credentials for docker.io
|
||||
```
|
||||
|
||||
```
|
||||
$ skopeo logout --authfile authdir/myauths.json docker.io
|
||||
Remove login credentials for docker.io
|
||||
```
|
||||
|
||||
```
|
||||
$ skopeo logout --all
|
||||
Remove login credentials for all registries
|
||||
```
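
As noted under **--authfile**, the authentication file consulted by **skopeo logout** can also be chosen through the REGISTRY\_AUTH\_FILE environment variable instead of a flag. A hypothetical session, with a purely illustrative path:

```
$ export REGISTRY_AUTH_FILE=$HOME/auth.json
$ skopeo logout docker.io
Remove login credentials for docker.io
```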
|
||||
|
||||
## SEE ALSO
|
||||
skopeo(1), skopeo-login(1), containers-auth.json(5)
|
||||
|
||||
## HISTORY
|
||||
May 2020, Originally compiled by Qi Wang <qiwan@redhat.com>
|
||||
@@ -26,7 +26,7 @@ $
```

## SEE ALSO
skopeo(1), skopeo-copy(1)
skopeo(1), skopeo-copy(1), containers-signature(5)

## AUTHORS

@@ -28,7 +28,7 @@ Signature verified, digest sha256:20bf21ed457b390829cdbeec8795a7bea1626991fda603
```

## SEE ALSO
skopeo(1)
skopeo(1), containers-signature(5)

## AUTHORS

@@ -34,7 +34,7 @@ name can be stored at _destination_.
|
||||
## OPTIONS
|
||||
**--authfile** _path_
|
||||
|
||||
Path of the authentication file. Default is ${XDG\_RUNTIME\_DIR}/containers/auth.json, which is set using `podman login`.
|
||||
Path of the authentication file. Default is ${XDG\_RUNTIME\_DIR}/containers/auth.json, which is set using `skopeo login`.
|
||||
If the authorization state is not found there, $HOME/.docker/config.json is checked, which is set using `docker login`.
|
||||
|
||||
**--src-authfile** _path_
|
||||
@@ -86,6 +86,21 @@ Images are located at:
|
||||
/media/usb/busybox:latest
|
||||
```
|
||||
|
||||
### Synchronizing to a container registry from local
|
||||
Images are located at:
|
||||
```
|
||||
/media/usb/busybox:1-glibc
|
||||
```
|
||||
Sync run
|
||||
```
|
||||
$ skopeo sync --src dir --dest docker /media/usb/busybox:1-glibc my-registry.local.lan/test/
|
||||
```
|
||||
Destination registry content:
|
||||
```
|
||||
REPO TAGS
|
||||
my-registry.local.lan/test/busybox 1-glibc
|
||||
```
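
One way to confirm the copy is to list the tags that now exist in the destination repository shown above. A sketch of such a check (add `--tls-verify=false` if the local registry is not served over TLS; the output shape is abridged and illustrative):

```
$ skopeo list-tags docker://my-registry.local.lan/test/busybox
{
    "Repository": "my-registry.local.lan/test/busybox",
    "Tags": [
        "1-glibc"
    ]
}
```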
|
||||
|
||||
### Synchronizing to a local directory, scoped
|
||||
```
|
||||
$ skopeo sync --src docker --dest dir --scoped registry.example.com/busybox /media/usb
|
||||
@@ -128,6 +143,8 @@ registry.example.com:
|
||||
redis:
|
||||
- "1.0"
|
||||
- "2.0"
|
||||
images-by-tag-regex:
|
||||
nginx: ^1\.13\.[12]-alpine-perl$
|
||||
credentials:
|
||||
username: john
|
||||
password: this is a secret
|
||||
@@ -139,22 +156,25 @@ quay.io:
|
||||
coreos/etcd:
|
||||
- latest
|
||||
```
|
||||
|
||||
If the YAML file is named `sync.yml`, run:
|
||||
```
|
||||
skopeo sync --src yaml --dest docker sync.yml my-registry.local.lan/repo/
|
||||
```
|
||||
This will copy the following images:
|
||||
- Repository `registry.example.com/busybox`: all images, as no tags are specified.
|
||||
- Repository `registry.example.com/redis`: images tagged "1.0" and "2.0".
|
||||
- Repository `registry.example.com/nginx`: images tagged "1.13.1-alpine-perl" and "1.13.2-alpine-perl".
|
||||
- Repository `quay.io/coreos/etcd`: images tagged "latest".
|
||||
|
||||
For the registry `registry.example.com`, the "john"/"this is a secret" credentials are used, with server TLS certificates located at `/home/john/certs`.
|
||||
|
||||
TLS verification is normally enabled, and it can be disabled setting `tls-verify` to `true`.
|
||||
TLS verification is normally enabled, and it can be disabled setting `tls-verify` to `false`.
|
||||
In the above example, TLS verification is enabled for `registry.example.com`, while it is
disabled for `quay.io`.
|
||||
|
||||
## SEE ALSO
|
||||
skopeo(1), podman-login(1), docker-login(1)
|
||||
skopeo(1), skopeo-login(1), docker-login(1), containers-auth.json(5), containers-policy.json(5), containers-transports(5)
|
||||
|
||||
## AUTHORS
|
||||
|
||||
Flavio Castelli <fcastelli@suse.com>, Marco Vedovati <mvedovati@suse.com>
|
||||
|
||||
|
||||
@@ -27,13 +27,13 @@ its functionality. It also does not require root, unless you are copying images

Most commands refer to container images, using a _transport_`:`_details_ format. The following formats are supported:

**containers-storage:**_docker-reference_
An image located in a local containers/storage image store. Location and image store specified in /etc/containers/storage.conf
An image located in a local containers/storage image store. Both the location and image store are specified in /etc/containers/storage.conf. (Backend for Podman, CRI-O, Buildah and friends)

**dir:**_path_
An existing local directory _path_ storing the manifest, layer tarballs and signatures as individual files. This is a non-standardized format, primarily useful for debugging or noninvasive container inspection.

**docker://**_docker-reference_
An image in a registry implementing the "Docker Registry HTTP API V2". By default, uses the authorization state in either `$XDG_RUNTIME_DIR/containers/auth.json`, which is set using `(podman login)`. If the authorization state is not found there, `$HOME/.docker/config.json` is checked, which is set using `(docker login)`.
An image in a registry implementing the "Docker Registry HTTP API V2". By default, uses the authorization state in either `$XDG_RUNTIME_DIR/containers/auth.json`, which is set using `(skopeo login)`. If the authorization state is not found there, `$HOME/.docker/config.json` is checked, which is set using `(docker login)`.

**docker-archive:**_path_[**:**_docker-reference_]
An image is stored in the `docker save` formatted file. _docker-reference_ is only used when creating such a file, and it must not contain a digest.
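
Combining two of the transports just listed, an image can be pulled from a registry straight into a `docker save`-style archive. A sketch, with purely illustrative image and path names:

```
$ skopeo copy docker://docker.io/library/busybox:latest docker-archive:/tmp/busybox.tar:busybox:latest
```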
@@ -76,6 +76,8 @@ Most commands refer to container images, using a _transport_`:`_details_ format.

| [skopeo-delete(1)](skopeo-delete.1.md) | Mark image-name for deletion. |
| [skopeo-inspect(1)](skopeo-inspect.1.md) | Return low-level information about image-name in a registry. |
| [skopeo-list-tags(1)](skopeo-list-tags.1.md) | List the tags for the given transport/repository. |
| [skopeo-login(1)](skopeo-login.1.md) | Login to a container registry. |
| [skopeo-logout(1)](skopeo-logout.1.md) | Logout of a container registry. |
| [skopeo-manifest-digest(1)](skopeo-manifest-digest.1.md) | Compute a manifest digest of manifest-file and write it to standard output.|
| [skopeo-standalone-sign(1)](skopeo-standalone-sign.1.md) | Sign an image. |
| [skopeo-standalone-verify(1)](skopeo-standalone-verify.1.md)| Verify an image. |

@@ -84,14 +86,14 @@ Most commands refer to container images, using a _transport_`:`_details_ format.

## FILES
**/etc/containers/policy.json**
Default trust policy file, if **--policy** is not specified.
The policy format is documented in https://github.com/containers/image/blob/master/docs/containers-policy.json.5.md .
The policy format is documented in [containers-policy.json(5)](https://github.com/containers/image/blob/master/docs/containers-policy.json.5.md) .
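
An alternate trust policy can also be supplied at run time through the global **--policy** flag, which is convenient for trying out a policy before installing it system-wide. A sketch, with illustrative image and file names:

```
$ skopeo --policy ./policy.json copy docker://docker.io/library/busybox:latest dir:/tmp/busybox
```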

**/etc/containers/registries.d**
Default directory containing registry configuration, if **--registries.d** is not specified.
The contents of this directory are documented in https://github.com/containers/image/blob/master/docs/containers-policy.json.5.md .
The contents of this directory are documented in [containers-policy.json(5)](https://github.com/containers/image/blob/master/docs/containers-policy.json.5.md).

## SEE ALSO
podman-login(1), docker-login(1)
skopeo-login(1), docker-login(1), containers-auth.json(5), containers-storage.conf(5), containers-policy.json(5), containers-transports(5)

## AUTHORS

19
go.mod
@@ -3,24 +3,25 @@ module github.com/containers/skopeo
|
||||
go 1.12
|
||||
|
||||
require (
|
||||
github.com/containerd/containerd v1.3.0 // indirect
|
||||
github.com/containers/image/v5 v5.4.3
|
||||
github.com/containers/common v0.14.0
|
||||
github.com/containers/image/v5 v5.5.1
|
||||
github.com/containers/ocicrypt v1.0.2
|
||||
github.com/containers/storage v1.18.2
|
||||
github.com/containers/storage v1.20.2
|
||||
github.com/docker/docker v1.4.2-0.20191219165747-a9416c67da9f
|
||||
github.com/dsnet/compress v0.0.1 // indirect
|
||||
github.com/go-check/check v0.0.0-20180628173108-788fd7840127
|
||||
github.com/google/go-cmp v0.3.1 // indirect
|
||||
github.com/opencontainers/go-digest v1.0.0-rc1
|
||||
github.com/opencontainers/go-digest v1.0.0
|
||||
github.com/opencontainers/image-spec v1.0.2-0.20190823105129-775207bd45b6
|
||||
github.com/opencontainers/image-tools v0.0.0-20170926011501-6d941547fa1d
|
||||
github.com/opencontainers/runtime-spec v1.0.0 // indirect
|
||||
github.com/pkg/errors v0.9.1
|
||||
github.com/russross/blackfriday v2.0.0+incompatible // indirect
|
||||
github.com/sirupsen/logrus v1.5.0
|
||||
github.com/stretchr/testify v1.5.1
|
||||
github.com/sirupsen/logrus v1.6.0
|
||||
github.com/spf13/cobra v1.0.0
|
||||
github.com/spf13/pflag v1.0.5
|
||||
github.com/stretchr/testify v1.6.1
|
||||
github.com/syndtr/gocapability v0.0.0-20180916011248-d98352740cb2
|
||||
github.com/urfave/cli v1.22.1
|
||||
go4.org v0.0.0-20190218023631-ce4c26f7be8e // indirect
|
||||
gopkg.in/yaml.v2 v2.2.8
|
||||
golang.org/x/text v0.3.3 // indirect
|
||||
gopkg.in/yaml.v2 v2.3.0
|
||||
)
|
||||
|
||||
188
go.sum
@@ -9,17 +9,22 @@ github.com/Microsoft/go-winio v0.4.15-0.20190919025122-fc70bd9a86b5 h1:ygIc8M6tr
|
||||
github.com/Microsoft/go-winio v0.4.15-0.20190919025122-fc70bd9a86b5/go.mod h1:tTuCMEN+UleMWgg9dVx4Hu52b1bJo+59jBh3ajtinzw=
|
||||
github.com/Microsoft/hcsshim v0.8.7 h1:ptnOoufxGSzauVTsdE+wMYnCWA301PdoN4xg5oRdZpg=
|
||||
github.com/Microsoft/hcsshim v0.8.7/go.mod h1:OHd7sQqRFrYd3RmSgbgji+ctCwkbq2wbEYNSzOYtcBQ=
|
||||
github.com/Microsoft/hcsshim v0.8.9 h1:VrfodqvztU8YSOvygU+DN1BGaSGxmrNfqOv5oOuX2Bk=
|
||||
github.com/Microsoft/hcsshim v0.8.9/go.mod h1:5692vkUqntj1idxauYlpoINNKeqCiG6Sg38RRsjT5y8=
|
||||
github.com/OneOfOne/xxhash v1.2.2/go.mod h1:HSdplMjZKSmBqAxg5vPj2TmRDmfkzw+cTzAElWljhcU=
|
||||
github.com/VividCortex/ewma v1.1.1 h1:MnEK4VOv6n0RSY4vtRe3h11qjxL3+t0B8yOL8iMXdcM=
|
||||
github.com/VividCortex/ewma v1.1.1/go.mod h1:2Tkkvm3sRDVXaiyucHiACn4cqf7DpdyLvmxzcbUokwA=
|
||||
github.com/acarl005/stripansi v0.0.0-20180116102854-5a71ef0e047d h1:licZJFw2RwpHMqeKTCYkitsPqHNxTmd4SNR5r94FGM8=
|
||||
github.com/acarl005/stripansi v0.0.0-20180116102854-5a71ef0e047d/go.mod h1:asat636LX7Bqt5lYEZ27JNDcqxfjdBQuJ/MM4CN/Lzo=
|
||||
github.com/alecthomas/template v0.0.0-20160405071501-a0175ee3bccc/go.mod h1:LOuyumcjzFXgccqObfd/Ljyb9UuFJ6TxHnclSeseNhc=
|
||||
github.com/alecthomas/units v0.0.0-20151022065526-2efee857e7cf/go.mod h1:ybxpYRFXyAe+OPACYpWeL0wqObRcbAqCMya13uyzqw0=
|
||||
github.com/armon/consul-api v0.0.0-20180202201655-eb2c6b5be1b6/go.mod h1:grANhF5doyWs3UAsr3K4I6qtAmlQcZDesFNEHPZAzj8=
|
||||
github.com/beorn7/perks v0.0.0-20180321164747-3a771d992973/go.mod h1:Dwedo/Wpr24TaqPxmxbtue+5NUziq4I4S80YR8gNf3Q=
|
||||
github.com/beorn7/perks v1.0.0/go.mod h1:KWe93zE9D1o94FZ5RNwFwVgaQK1VOXiVxmqh+CedLV8=
|
||||
github.com/beorn7/perks v1.0.1 h1:VlbKKnNfV8bJzeqoa4cOKqO6bYr3WgKZxO8Z16+hsOM=
|
||||
github.com/beorn7/perks v1.0.1/go.mod h1:G2ZrVWU2WbWT9wwq4/hrbKbnv/1ERSJQ0ibhJ6rlkpw=
|
||||
github.com/blang/semver v3.1.0+incompatible/go.mod h1:kRBLl5iJ+tD4TcOOxsy/0fnwebNt5EWlYSAyrTnjyyk=
|
||||
github.com/cespare/xxhash v1.1.0/go.mod h1:XrSqR1VqqWfGrhpAt58auRo0WTKS1nRRg3ghfAqPWnc=
|
||||
github.com/client9/misspell v0.3.4/go.mod h1:qj6jICC3Q7zFZvVWo7KLAzC3yx5G7kyvSDkc90ppPyw=
|
||||
github.com/containerd/cgroups v0.0.0-20190919134610-bf292b21730f h1:tSNMc+rJDfmYntojat8lljbt1mgKNpTxUZJsSzJ9Y1s=
|
||||
github.com/containerd/cgroups v0.0.0-20190919134610-bf292b21730f/go.mod h1:OApqhQ4XNSNC13gXIwDjhOQxjWa/NxkwZXJ1EvqT0ko=
|
||||
@@ -27,28 +32,39 @@ github.com/containerd/console v0.0.0-20180822173158-c12b1e7919c1/go.mod h1:Tj/on
|
||||
github.com/containerd/containerd v1.2.10/go.mod h1:bC6axHOhabU15QhwfG7w5PipXdVtMXFTttgp+kVtyUA=
|
||||
github.com/containerd/containerd v1.3.0-beta.2.0.20190828155532-0293cbd26c69 h1:rG1clvJbgsUcmb50J82YUJhUMopWNtZvyMZjb+4fqGw=
|
||||
github.com/containerd/containerd v1.3.0-beta.2.0.20190828155532-0293cbd26c69/go.mod h1:bC6axHOhabU15QhwfG7w5PipXdVtMXFTttgp+kVtyUA=
|
||||
github.com/containerd/containerd v1.3.0 h1:xjvXQWABwS2uiv3TWgQt5Uth60Gu86LTGZXMJkjc7rY=
|
||||
github.com/containerd/containerd v1.3.0/go.mod h1:bC6axHOhabU15QhwfG7w5PipXdVtMXFTttgp+kVtyUA=
|
||||
github.com/containerd/containerd v1.3.2 h1:ForxmXkA6tPIvffbrDAcPUIB32QgXkt2XFj+F0UxetA=
|
||||
github.com/containerd/containerd v1.3.2/go.mod h1:bC6axHOhabU15QhwfG7w5PipXdVtMXFTttgp+kVtyUA=
|
||||
github.com/containerd/continuity v0.0.0-20190426062206-aaeac12a7ffc h1:TP+534wVlf61smEIq1nwLLAjQVEK2EADoW3CX9AuT+8=
|
||||
github.com/containerd/continuity v0.0.0-20190426062206-aaeac12a7ffc/go.mod h1:GL3xCUCBDV3CZiTSEKksMWbLE66hEyuu9qyDOOqM47Y=
|
||||
github.com/containerd/fifo v0.0.0-20190226154929-a9fb20d87448/go.mod h1:ODA38xgv3Kuk8dQz2ZQXpnv/UZZUHUCL7pnLehbXgQI=
|
||||
github.com/containerd/go-runc v0.0.0-20180907222934-5a6d9f37cfa3/go.mod h1:IV7qH3hrUgRmyYrtgEeGWJfWbgcHL9CSRruz2Vqcph0=
|
||||
github.com/containerd/ttrpc v0.0.0-20190828154514-0e0f228740de/go.mod h1:PvCDdDGpgqzQIzDW1TphrGLssLDZp2GuS+X5DkEJB8o=
|
||||
github.com/containerd/typeurl v0.0.0-20180627222232-a93fcdb778cd/go.mod h1:Cm3kwCdlkCfMSHURc+r6fwoGH6/F1hH3S4sg0rLFWPc=
|
||||
github.com/containers/image/v5 v5.4.3 h1:zn2HR7uu4hpvT5QQHgjqonOzKDuM1I1UHUEmzZT5sbs=
|
||||
github.com/containers/image/v5 v5.4.3/go.mod h1:pN0tvp3YbDd7BWavK2aE0mvJUqVd2HmhPjekyWSFm0U=
|
||||
github.com/containers/common v0.14.0 h1:hiZFDPf6ajKiDmojN5f5X3gboKPO73NLrYb0RXfrQiA=
|
||||
github.com/containers/common v0.14.0/go.mod h1:9olhlE+WhYof1npnMJdyRMX14/yIUint6zyHzcyRVAg=
|
||||
github.com/containers/image/v5 v5.4.4 h1:JSanNn3v/BMd3o0MEvO4R4OKNuoJUSzVGQAI1+0FMXE=
|
||||
github.com/containers/image/v5 v5.4.4/go.mod h1:g7cxNXitiLi6pEr9/L9n/0wfazRuhDKXU15kV86N8h8=
|
||||
github.com/containers/image/v5 v5.5.1 h1:h1FCOXH6Ux9/p/E4rndsQOC4yAdRU0msRTfLVeQ7FDQ=
|
||||
github.com/containers/image/v5 v5.5.1/go.mod h1:4PyNYR0nwlGq/ybVJD9hWlhmIsNra4Q8uOQX2s6E2uM=
|
||||
github.com/containers/libtrust v0.0.0-20190913040956-14b96171aa3b h1:Q8ePgVfHDplZ7U33NwHZkrVELsZP5fYj9pM5WBZB2GE=
|
||||
github.com/containers/libtrust v0.0.0-20190913040956-14b96171aa3b/go.mod h1:9rfv8iPl1ZP7aqh9YA68wnZv2NUDbXdcdPHVz0pFbPY=
|
||||
github.com/containers/ocicrypt v1.0.2 h1:Q0/IPs8ohfbXNxEfyJ2pFVmvJu5BhqJUAmc6ES9NKbo=
|
||||
github.com/containers/ocicrypt v1.0.2/go.mod h1:nsOhbP19flrX6rE7ieGFvBlr7modwmNjsqWarIUce4M=
|
||||
github.com/containers/storage v1.18.2 h1:4cgFbrrgr9nR9xCeOmfpyxk1MtXYZGr7XGPJfAVkGmc=
|
||||
github.com/containers/storage v1.18.2/go.mod h1:WTBMf+a9ZZ/LbmEVeLHH2TX4CikWbO1Bt+/m58ZHVPg=
|
||||
github.com/containers/storage v1.19.1 h1:YKIzOO12iaD5Ra0PKFS6emcygbHLmwmQOCQRU/19YAQ=
|
||||
github.com/containers/storage v1.19.1/go.mod h1:KbXjSwKnx17ejOsjFcCXSf78mCgZkQSLPBNTMRc3XrQ=
|
||||
github.com/containers/storage v1.20.2 h1:tw/uKRPDnmVrluIzer3dawTFG/bTJLP8IEUyHFhltYk=
|
||||
github.com/containers/storage v1.20.2/go.mod h1:oOB9Ie8OVPojvoaKWEGSEtHbXUAs+tSyr7RO7ZGteMc=
|
||||
github.com/coreos/bbolt v1.3.2/go.mod h1:iRUV2dpdMOn7Bo10OQBFzIJO9kkE559Wcmn+qkEiiKk=
|
||||
github.com/coreos/etcd v3.3.10+incompatible/go.mod h1:uF7uidLiAD3TWHmW31ZFd/JWoc32PjwdhPthX9715RE=
|
||||
github.com/coreos/go-semver v0.2.0/go.mod h1:nnelYz7RCh+5ahJtPPxZlU+153eP4D4r3EedlOD2RNk=
|
||||
github.com/coreos/go-systemd v0.0.0-20190321100706-95778dfbb74e/go.mod h1:F5haX7vjVVG0kc13fIWeqUViNPyEJxv/OmvnBo0Yme4=
|
||||
github.com/cpuguy83/go-md2man/v2 v2.0.0-20190314233015-f79a8a8ca69d h1:U+s90UTSYgptZMwQh2aRr3LuazLJIa+Pg3Kc1ylSYVY=
|
||||
github.com/cpuguy83/go-md2man/v2 v2.0.0-20190314233015-f79a8a8ca69d/go.mod h1:maD7wRr/U5Z6m/iR4s+kqSMx2CaBsrgA7czyZG/E6dU=
|
||||
github.com/coreos/pkg v0.0.0-20180928190104-399ea9e2e55f/go.mod h1:E3G3o1h8I7cfcXa63jLwjI0eiQQMgzzUDFVpN/nH/eA=
|
||||
github.com/cpuguy83/go-md2man/v2 v2.0.0/go.mod h1:maD7wRr/U5Z6m/iR4s+kqSMx2CaBsrgA7czyZG/E6dU=
|
||||
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
|
||||
github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
|
||||
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
|
||||
github.com/dgrijalva/jwt-go v3.2.0+incompatible/go.mod h1:E3ru+11k8xSBh+hMPgOLZmtrrCbhqsmaPHjLKYnJCaQ=
|
||||
github.com/dgryski/go-sip13 v0.0.0-20181026042036-e10d5fee7954/go.mod h1:vAd38F8PWV+bWy6jNmig1y/TA+kYO4g3RSRF0IAv0no=
|
||||
github.com/docker/distribution v2.7.1+incompatible h1:a5mlkVzth6W5A4fOsS3D2EO5BUmsJpcB+cRlLU7cSug=
|
||||
github.com/docker/distribution v2.7.1+incompatible/go.mod h1:J2gT2udsDAN96Uj4KfcMRqY0/ypR+oyYUYmja8H+y+w=
|
||||
github.com/docker/docker v1.4.2-0.20191219165747-a9416c67da9f h1:Sm8iD2lifO31DwXfkGzq8VgA7rwxPjRsYmeo0K/dF9Y=
|
||||
@@ -66,6 +82,8 @@ github.com/docker/libtrust v0.0.0-20160708172513-aabc10ec26b7/go.mod h1:cyGadeNE
|
||||
github.com/dsnet/compress v0.0.1 h1:PlZu0n3Tuv04TzpfPbrnI0HW/YwodEXDS+oPKahKF0Q=
|
||||
github.com/dsnet/compress v0.0.1/go.mod h1:Aw8dCMJ7RioblQeTqt88akK31OvO8Dhf5JflhBbQEHo=
|
||||
github.com/dsnet/golib v0.0.0-20171103203638-1ea166775780/go.mod h1:Lj+Z9rebOhdfkVLjJ8T6VcRQv3SXugXy999NBtR9aFY=
|
||||
github.com/fsnotify/fsnotify v1.4.7/go.mod h1:jwhsz4b93w/PPRr/qN1Yymfu8t87LnFCMoQvtojpjFo=
|
||||
github.com/fsnotify/fsnotify v1.4.9/go.mod h1:znqG4EE+3YCdAaPaxE2ZRY/06pZUdp0tY4IgpuI1SZQ=
|
||||
github.com/fullsailor/pkcs7 v0.0.0-20190404230743-d7302db945fa h1:RDBNVkRviHZtvDvId8XSGPu3rmpmSe+wKRcEWNgsfWU=
|
||||
github.com/fullsailor/pkcs7 v0.0.0-20190404230743-d7302db945fa/go.mod h1:KnogPXtdwXqoenmZCw6S+25EAm2MkxbG0deNDu4cbSA=
|
||||
github.com/ghodss/yaml v1.0.0 h1:wQHKEahhL6wmXdzwWG11gIVCkOv05bNOh+Rxn0yngAk=
|
||||
@@ -83,18 +101,33 @@ github.com/gogo/protobuf v1.3.1 h1:DqDEcV5aeaTmdFBePNpYsp3FlcVH/2ISVVM9Qf8PSls=
|
||||
github.com/gogo/protobuf v1.3.1/go.mod h1:SlYgWuQ5SjCEi6WLHjHCa1yvBfUnHcTbrrZtXPKa29o=
|
||||
github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b h1:VKtxabqXZkF25pY9ekfRL6a582T4P37/31XEstQ5p58=
|
||||
github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b/go.mod h1:SBH7ygxi8pfUlaOkMMuAQtPIUF8ecWP5IEl/CR7VP2Q=
|
||||
github.com/golang/groupcache v0.0.0-20190129154638-5b532d6fd5ef/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=
|
||||
github.com/golang/mock v1.1.1/go.mod h1:oTYuIxOrZwtPieC+H1uAHpcLFnEyAGVDL/k47Jfbm0A=
|
||||
github.com/golang/protobuf v1.2.0/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
|
||||
github.com/golang/protobuf v1.3.1/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
|
||||
github.com/golang/protobuf v1.3.2 h1:6nsPYzhq5kReh6QImI3k5qWzO4PEbvbIW2cwSfR/6xs=
|
||||
github.com/golang/protobuf v1.3.2/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
|
||||
github.com/golang/protobuf v1.4.0-rc.1/go.mod h1:ceaxUfeHdC40wWswd/P6IGgMaK3YpKi5j83Wpe3EHw8=
|
||||
github.com/golang/protobuf v1.4.0-rc.1.0.20200221234624-67d41d38c208/go.mod h1:xKAWHe0F5eneWXFV3EuXVDTCmh+JuBKY0li0aMyXATA=
|
||||
github.com/golang/protobuf v1.4.0-rc.2/go.mod h1:LlEzMj4AhA7rCAGe4KMBDvJI+AwstrUpVNzEA03Pprs=
|
||||
github.com/golang/protobuf v1.4.0-rc.4.0.20200313231945-b860323f09d0/go.mod h1:WU3c8KckQ9AFe+yFwt9sWVRKCVIyN9cPHBJSNnbL67w=
|
||||
github.com/golang/protobuf v1.4.0/go.mod h1:jodUvKwWbYaEsadDk5Fwe5c77LiNKVO9IDvqG2KuDX0=
|
||||
github.com/golang/protobuf v1.4.2 h1:+Z5KGCizgyZCbGh1KZqA0fcLLkwbsjIzS4aV2v7wJX0=
|
||||
github.com/golang/protobuf v1.4.2/go.mod h1:oDoupMAO8OvCJWAcko0GGGIgR6R6ocIYbsSw735rRwI=
|
||||
github.com/google/btree v1.0.0/go.mod h1:lNA+9X1NB3Zf8V7Ke586lFgjr2dZNuvo3lPJSGZ5JPQ=
|
||||
github.com/google/go-cmp v0.2.0/go.mod h1:oXzfMopK8JAjlY9xF4vHSVASa0yLyX7SntLO5aqRK0M=
|
||||
github.com/google/go-cmp v0.3.0/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU=
|
||||
github.com/google/go-cmp v0.3.1 h1:Xye71clBPdm5HgqGwUkwhbynsUJZhDbS20FvLhQ2izg=
|
||||
github.com/google/go-cmp v0.3.1/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU=
|
||||
github.com/google/go-cmp v0.4.0 h1:xsAVV57WRhGj6kEIi8ReJzQlHHqcBYCElAvkovg3B/4=
|
||||
github.com/google/go-cmp v0.4.0/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
|
||||
github.com/google/gofuzz v1.0.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg=
|
||||
github.com/gorilla/mux v1.7.4 h1:VuZ8uybHlWmqV03+zRzdwKL4tUnIp1MAQtp1mIFE1bc=
|
||||
github.com/gorilla/mux v1.7.4/go.mod h1:DVbg23sWSpFRCP0SfiEN6jmj59UnW/n46BH5rLB71So=
|
||||
github.com/gorilla/websocket v1.4.0/go.mod h1:E7qHFY5m1UJ88s3WnNqhKjPHQ0heANvMoAMk2YaljkQ=
|
||||
github.com/grpc-ecosystem/go-grpc-middleware v1.0.0/go.mod h1:FiyG127CGDf3tlThmgyCl78X/SZQqEOJBCDaAfeWzPs=
|
||||
github.com/grpc-ecosystem/go-grpc-prometheus v1.2.0/go.mod h1:8NvIoxWQoOIhqOTXgfV/d3M/q6VIi02HzZEHgUlZvzk=
|
||||
github.com/grpc-ecosystem/grpc-gateway v1.9.0/go.mod h1:vNeuVxBJEsws4ogUvrchl83t/GYV9WGTSLVdBhOQFDY=
|
||||
github.com/hashicorp/errwrap v0.0.0-20141028054710-7554cd9344ce/go.mod h1:YH+1FKiLXxHSkmPseP+kNlulaMuP3n2brvKWEqk/Jc4=
|
||||
github.com/hashicorp/errwrap v1.0.0 h1:hLrqtEDnRye3+sgx6z4qVLNuviH3MR5aQ0ykNJa/UYA=
|
||||
github.com/hashicorp/errwrap v1.0.0/go.mod h1:YH+1FKiLXxHSkmPseP+kNlulaMuP3n2brvKWEqk/Jc4=
|
||||
@@ -103,8 +136,13 @@ github.com/hashicorp/go-multierror v1.0.0 h1:iVjPR7a6H0tWELX5NxNe7bYopibicUzc7uP
|
||||
github.com/hashicorp/go-multierror v1.0.0/go.mod h1:dHtQlpGsu+cZNNAkkCN/P3hoUDHhCYQXV3UM06sGGrk=
|
||||
github.com/hashicorp/golang-lru v0.5.1 h1:0hERBMJE1eitiLkihrMvRVBYAkpHzc/J3QdDN+dAcgU=
|
||||
github.com/hashicorp/golang-lru v0.5.1/go.mod h1:/m3WP610KZHVQ1SGc6re/UDhFvYD7pJ4Ao+sR/qLZy8=
|
||||
github.com/hashicorp/hcl v1.0.0/go.mod h1:E5yfLk+7swimpb2L/Alb/PJmXilQ/rhwaUYs4T20WEQ=
|
||||
github.com/hpcloud/tail v1.0.0/go.mod h1:ab1qPbhIpdTxEkNHXyeSf5vhxWSCs/tWer42PpOxQnU=
|
||||
github.com/imdario/mergo v0.3.9 h1:UauaLniWCFHWd+Jp9oCEkTBj8VO/9DKg3PV3VCNMDIg=
|
||||
github.com/imdario/mergo v0.3.9/go.mod h1:2EnlNZ0deacrJVfApfmtdGgDfMuh/nq6Ok1EcJh5FfA=
|
||||
github.com/inconshreveable/mousetrap v1.0.0 h1:Z8tu5sraLXCXIcARxBp/8cbvlwVa7Z1NHg9XEKhtSvM=
|
||||
github.com/inconshreveable/mousetrap v1.0.0/go.mod h1:PxqpIevigyE2G7u3NXJIT2ANytuPF1OarO4DADm73n8=
|
||||
github.com/jonboulle/clockwork v0.1.0/go.mod h1:Ii8DK3G1RaLaWxj9trq07+26W01tbo22gdxWY5EU2bo=
|
||||
github.com/json-iterator/go v1.1.6/go.mod h1:+SdeFBvtyEkXs7REEP0seUULqWtbJapLOCVDaaPEHmU=
|
||||
github.com/json-iterator/go v1.1.7/go.mod h1:KdQUCv79m/52Kvf8AW2vK1V8akMuk1QjK/uOdHXbAo4=
|
||||
github.com/julienschmidt/httprouter v1.2.0/go.mod h1:SYymIcj16QtmaHHD7aYtjjsJG7VTCxuUUipMqKk8s4w=
|
||||
@@ -112,26 +150,39 @@ github.com/kisielk/errcheck v1.1.0/go.mod h1:EZBBE59ingxPouuu3KfxchcWSUPOHkagtvW
|
||||
github.com/kisielk/errcheck v1.2.0/go.mod h1:/BMXB+zMLi60iA8Vv6Ksmxu/1UDYcXs4uQLJ+jE2L00=
|
||||
github.com/kisielk/gotool v1.0.0/go.mod h1:XhKaO+MFFWcvkIS/tQcRk01m1F5IRFswLeQ+oQHNcck=
|
||||
github.com/klauspost/compress v1.4.1/go.mod h1:RyIbtBH6LamlWaDj8nUwkbUhJ87Yi3uG0guNDohfE1A=
|
||||
github.com/klauspost/compress v1.10.3 h1:OP96hzwJVBIHYU52pVTI6CczrxPvrGfgqF9N5eTO0Q8=
|
||||
github.com/klauspost/compress v1.10.3/go.mod h1:aoV0uJVorq1K+umq18yTdKaF57EivdYsUV+/s2qKfXs=
|
||||
github.com/klauspost/compress v1.10.5 h1:7q6vHIqubShURwQz8cQK6yIe/xC3IF0Vm7TGfqjewrc=
|
||||
github.com/klauspost/compress v1.10.5/go.mod h1:aoV0uJVorq1K+umq18yTdKaF57EivdYsUV+/s2qKfXs=
|
||||
github.com/klauspost/compress v1.10.7 h1:7rix8v8GpI3ZBb0nSozFRgbtXKv+hOe+qfEpZqybrAg=
|
||||
github.com/klauspost/compress v1.10.7/go.mod h1:aoV0uJVorq1K+umq18yTdKaF57EivdYsUV+/s2qKfXs=
|
||||
github.com/klauspost/compress v1.10.8 h1:eLeJ3dr/Y9+XRfJT4l+8ZjmtB5RPJhucH2HeCV5+IZY=
|
||||
github.com/klauspost/compress v1.10.8/go.mod h1:aoV0uJVorq1K+umq18yTdKaF57EivdYsUV+/s2qKfXs=
|
||||
github.com/klauspost/cpuid v1.2.0/go.mod h1:Pj4uuM528wm8OyEC2QMXAi2YiTZ96dNQPGgoMS4s3ek=
|
||||
github.com/klauspost/pgzip v1.2.3 h1:Ce2to9wvs/cuJ2b86/CKQoTYr9VHfpanYosZ0UBJqdw=
|
||||
github.com/klauspost/pgzip v1.2.3/go.mod h1:Ch1tH69qFZu15pkjo5kYi6mth2Zzwzt50oCQKQE9RUs=
|
||||
github.com/klauspost/pgzip v1.2.4 h1:TQ7CNpYKovDOmqzRHKxJh0BeaBI7UdQZYc6p7pMQh1A=
|
||||
github.com/klauspost/pgzip v1.2.4/go.mod h1:Ch1tH69qFZu15pkjo5kYi6mth2Zzwzt50oCQKQE9RUs=
|
||||
github.com/konsorten/go-windows-terminal-sequences v1.0.1/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ=
|
||||
github.com/konsorten/go-windows-terminal-sequences v1.0.2 h1:DB17ag19krx9CFsz4o3enTrPXyIXCl+2iCXH/aMAp9s=
|
||||
github.com/konsorten/go-windows-terminal-sequences v1.0.2/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ=
|
||||
github.com/konsorten/go-windows-terminal-sequences v1.0.3 h1:CE8S1cTafDpPvMhIxNJKvHsGVBgn1xWYf1NbHQhywc8=
|
||||
github.com/konsorten/go-windows-terminal-sequences v1.0.3/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ=
|
||||
github.com/kr/logfmt v0.0.0-20140226030751-b84e30acd515/go.mod h1:+0opPa2QZZtGFBFZlji/RkVcI2GknAs/DXo4wKdlNEc=
|
||||
github.com/kr/pretty v0.1.0 h1:L/CwN0zerZDmRFUapSPitk6f+Q3+0za1rQkzVuMiMFI=
|
||||
github.com/kr/pretty v0.1.0/go.mod h1:dAy3ld7l9f0ibDNOQOHHMYYIIbhfbHSm3C4ZsoJORNo=
|
||||
github.com/kr/pty v1.1.1/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ=
|
||||
github.com/kr/text v0.1.0 h1:45sCR5RtlFHMR4UwH9sdQ5TC8v0qDQCHnXt+kaKSTVE=
|
||||
github.com/kr/text v0.1.0/go.mod h1:4Jbv+DJW3UT/LiOwJeYQe1efqtUx/iVham/4vfdArNI=
|
||||
github.com/magiconair/properties v1.8.0/go.mod h1:PppfXfuXeibc/6YijjN8zIbojt8czPbwD3XqdrwzmxQ=
|
||||
github.com/mattn/go-isatty v0.0.12 h1:wuysRhFDzyxgEmMf5xjvJ2M9dZoWAXNNr5LSBS7uHXY=
|
||||
github.com/mattn/go-isatty v0.0.12/go.mod h1:cbi8OIDigv2wuxKPP5vlRcQ1OAZbq2CE4Kysco4FUpU=
|
||||
github.com/mattn/go-runewidth v0.0.9 h1:Lm995f3rfxdpd6TSmuVCHVb/QhupuXlYr8sCI/QdE+0=
|
||||
github.com/mattn/go-runewidth v0.0.9/go.mod h1:H031xJmbD/WCDINGzjvQ9THkh0rPKHF+m2gUSrubnMI=
|
||||
github.com/mattn/go-shellwords v1.0.10 h1:Y7Xqm8piKOO3v10Thp7Z36h4FYFjt5xB//6XvOrs2Gw=
|
||||
github.com/mattn/go-shellwords v1.0.10/go.mod h1:EZzvwXDESEeg03EKmM+RmDnNOPKG4lLtQsUlTZDWQ8Y=
|
||||
github.com/matttproud/golang_protobuf_extensions v1.0.1 h1:4hp9jkHxhMHkqkrB3Ix0jegS5sx/RkqARlsWZ6pIwiU=
|
||||
github.com/matttproud/golang_protobuf_extensions v1.0.1/go.mod h1:D8He9yQNgCq6Z5Ld7szi9bcBfOoFv/3dc6xSMkL2PC0=
|
||||
github.com/mistifyio/go-zfs v2.1.1+incompatible h1:gAMO1HM9xBRONLHHYnu5iFsOJUiJdNZo6oqSENd4eW8=
|
||||
github.com/mistifyio/go-zfs v2.1.1+incompatible/go.mod h1:8AuVvqP/mXw1px98n46wfvcGfQ4ci2FwoAjKYxuo3Z4=
|
||||
github.com/mitchellh/go-homedir v1.1.0/go.mod h1:SfyaCUpYCn1Vlf4IUYiD9fPX4A5wJrkLzIz1N1q0pr0=
|
||||
github.com/mitchellh/mapstructure v1.1.2/go.mod h1:FVVH3fgwuzCH5S8UJGiWEs2h04kUh9fWfEaFds41c1Y=
|
||||
github.com/modern-go/concurrent v0.0.0-20180228061459-e0a39a4cb421/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=
|
||||
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=
|
||||
github.com/modern-go/reflect2 v0.0.0-20180701023420-4b7aa43c6742/go.mod h1:bx2lNnkwVCuqBIxFjflWJWanXIb3RllmbCylyMrvgv0=
|
||||
@@ -141,9 +192,18 @@ github.com/morikuni/aec v1.0.0/go.mod h1:BbKIizmSmc5MMPqRYbxO4ZU0S0+P200+tUnFx7P
|
||||
github.com/mtrmac/gpgme v0.1.2 h1:dNOmvYmsrakgW7LcgiprD0yfRuQQe8/C8F6Z+zogO3s=
|
||||
github.com/mtrmac/gpgme v0.1.2/go.mod h1:GYYHnGSuS7HK3zVS2n3y73y0okK/BeKzwnn5jgiVFNI=
|
||||
github.com/mwitkow/go-conntrack v0.0.0-20161129095857-cc309e4a2223/go.mod h1:qRWi+5nqEBWmkhHvq77mSJWrCKwh8bxhgT7d/eI7P4U=
|
||||
github.com/nxadm/tail v1.4.4/go.mod h1:kenIhsEOeOJmVchQTgglprH7qJGnHDVpk1VPCcaMI8A=
|
||||
github.com/oklog/ulid v1.3.1/go.mod h1:CirwcVhetQ6Lv90oh/F+FBtV6XMibvdAFo93nm5qn4U=
|
||||
github.com/onsi/ginkgo v1.6.0/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE=
|
||||
github.com/onsi/ginkgo v1.12.1/go.mod h1:zj2OWP4+oCPe1qIXoGWkgMRwljMUYCdkwsT2108oapk=
|
||||
github.com/onsi/ginkgo v1.13.0/go.mod h1:+REjRxOmWfHCjfv9TTWB1jD1Frx4XydAD3zm1lskyM0=
|
||||
github.com/onsi/gomega v1.7.1/go.mod h1:XdKZgCCFLUoM/7CFJVPcG8C1xQ1AJ0vpAezJrB7JYyY=
|
||||
github.com/onsi/gomega v1.10.1/go.mod h1:iN09h71vgCQne3DLsj+A5owkum+a2tYe+TOCB1ybHNo=
|
||||
github.com/opencontainers/go-digest v0.0.0-20180430190053-c9281466c8b2/go.mod h1:cMLVZDEM3+U2I4VmLI6N8jQYUd2OVphdqWwCJHrFt2s=
|
||||
github.com/opencontainers/go-digest v1.0.0-rc1 h1:WzifXhOVOEOuFYOJAW6aQqW0TooG2iki3E3Ii+WN7gQ=
|
||||
github.com/opencontainers/go-digest v1.0.0-rc1/go.mod h1:cMLVZDEM3+U2I4VmLI6N8jQYUd2OVphdqWwCJHrFt2s=
|
||||
github.com/opencontainers/go-digest v1.0.0 h1:apOUWs51W5PlhuyGyz9FCeeBIOUDA/6nW8Oi/yOhh5U=
|
||||
github.com/opencontainers/go-digest v1.0.0/go.mod h1:0JzlMkj0TRzQZfJkVvzbP0HBR3IKzErnv2BNG4W4MAM=
|
||||
github.com/opencontainers/image-spec v1.0.1/go.mod h1:BtxoFyWECRxE4U/7sNtV5W15zMzWCbyJoFRP3s7yZA0=
|
||||
github.com/opencontainers/image-spec v1.0.2-0.20190823105129-775207bd45b6 h1:yN8BPXVwMBAm3Cuvh1L5XE8XpvYRMdsVLd82ILprhUU=
|
||||
github.com/opencontainers/image-spec v1.0.2-0.20190823105129-775207bd45b6/go.mod h1:BtxoFyWECRxE4U/7sNtV5W15zMzWCbyJoFRP3s7yZA0=
|
||||
@@ -152,15 +212,20 @@ github.com/opencontainers/image-tools v0.0.0-20170926011501-6d941547fa1d/go.mod
|
||||
github.com/opencontainers/runc v0.0.0-20190115041553-12f6a991201f/go.mod h1:qT5XzbpPznkRYVz/mWwUaVBUv2rmF59PVA73FjuZG0U=
|
||||
github.com/opencontainers/runc v1.0.0-rc9 h1:/k06BMULKF5hidyoZymkoDCzdJzltZpz/UU4LguQVtc=
|
||||
github.com/opencontainers/runc v1.0.0-rc9/go.mod h1:qT5XzbpPznkRYVz/mWwUaVBUv2rmF59PVA73FjuZG0U=
|
||||
github.com/opencontainers/runc v1.0.0-rc90 h1:4+xo8mtWixbHoEm451+WJNUrq12o2/tDsyK9Vgc/NcA=
|
||||
github.com/opencontainers/runc v1.0.0-rc90/go.mod h1:qT5XzbpPznkRYVz/mWwUaVBUv2rmF59PVA73FjuZG0U=
|
||||
github.com/opencontainers/runtime-spec v0.1.2-0.20190507144316-5b71a03e2700/go.mod h1:jwyrGlmzljRJv/Fgzds9SsS/C5hL+LL3ko9hs6T5lQ0=
|
||||
github.com/opencontainers/runtime-spec v0.1.2-0.20190618234442-a950415649c7/go.mod h1:jwyrGlmzljRJv/Fgzds9SsS/C5hL+LL3ko9hs6T5lQ0=
|
||||
github.com/opencontainers/runtime-spec v1.0.0 h1:O6L965K88AilqnxeYPks/75HLpp4IG+FjeSCI3cVdRg=
|
||||
github.com/opencontainers/runtime-spec v1.0.0/go.mod h1:jwyrGlmzljRJv/Fgzds9SsS/C5hL+LL3ko9hs6T5lQ0=
|
||||
github.com/opencontainers/runtime-tools v0.0.0-20181011054405-1d69bd0f9c39/go.mod h1:r3f7wjNzSs2extwzU3Y+6pKfobzPh+kKFJ3ofN+3nfs=
|
||||
github.com/opencontainers/selinux v1.4.0/go.mod h1:yTcKuYAh6R95iDpefGLQaPaRwJFwyzAJufJyiTt7s0g=
|
||||
github.com/opencontainers/selinux v1.5.1 h1:jskKwSMFYqyTrHEuJgQoUlTcId0av64S6EWObrIfn5Y=
|
||||
github.com/opencontainers/selinux v1.5.1/go.mod h1:yTcKuYAh6R95iDpefGLQaPaRwJFwyzAJufJyiTt7s0g=
|
||||
github.com/opencontainers/selinux v1.5.2 h1:F6DgIsjgBIcDksLW4D5RG9bXok6oqZ3nvMwj4ZoFu/Q=
|
||||
github.com/opencontainers/selinux v1.5.2/go.mod h1:yTcKuYAh6R95iDpefGLQaPaRwJFwyzAJufJyiTt7s0g=
|
||||
github.com/ostreedev/ostree-go v0.0.0-20190702140239-759a8c1ac913 h1:TnbXhKzrTOyuvWrjI8W6pcoI9XPbLHFXCdN2dtUw7Rw=
|
||||
github.com/ostreedev/ostree-go v0.0.0-20190702140239-759a8c1ac913/go.mod h1:J6OG6YJVEWopen4avK3VNQSnALmmjvniMmni/YFYAwc=
|
||||
github.com/pelletier/go-toml v1.2.0/go.mod h1:5z9KED0ma1S8pY6P1sdut58dfprrGBbd/94hg7ilaic=
|
||||
github.com/pkg/errors v0.8.0/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
|
||||
github.com/pkg/errors v0.8.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
|
||||
github.com/pkg/errors v0.9.1 h1:FEBLx1zS214owpjy7qsBeixbURkuhQAwrK5UwLGTwt4=
|
||||
@@ -171,52 +236,78 @@ github.com/pquerna/ffjson v0.0.0-20181028064349-e517b90714f7/go.mod h1:YARuvh7BU
|
||||
github.com/pquerna/ffjson v0.0.0-20190813045741-dac163c6c0a9 h1:kyf9snWXHvQc+yxE9imhdI8YAm4oKeZISlaAR+x73zs=
|
||||
github.com/pquerna/ffjson v0.0.0-20190813045741-dac163c6c0a9/go.mod h1:YARuvh7BUWHNhzDq2OM5tzR2RiCcN2D7sapiKyCel/M=
|
||||
github.com/prometheus/client_golang v0.9.1/go.mod h1:7SWBe2y4D6OKWSNQJUaRYU/AaXPKyh/dDVn+NZz0KFw=
|
||||
github.com/prometheus/client_golang v0.9.3/go.mod h1:/TN21ttK/J9q6uSwhBd54HahCDft0ttaMvbicHlPoso=
|
||||
github.com/prometheus/client_golang v1.0.0/go.mod h1:db9x61etRT2tGnBNRi70OPL5FsnadC4Ky3P0J6CfImo=
|
||||
github.com/prometheus/client_golang v1.1.0 h1:BQ53HtBmfOitExawJ6LokA4x8ov/z0SYYb0+HxJfRI8=
|
||||
github.com/prometheus/client_golang v1.1.0/go.mod h1:I1FGZT9+L76gKKOs5djB6ezCbFQP1xR9D75/vuwEF3g=
|
||||
github.com/prometheus/client_model v0.0.0-20180712105110-5c3871d89910/go.mod h1:MbSGuTsp3dbXC40dX6PRTWyKYBIrTGTE9sqQNg2J8bo=
|
||||
github.com/prometheus/client_model v0.0.0-20190129233127-fd36f4220a90 h1:S/YWwWx/RA8rT8tKFRuGUZhuA90OyIBpPCXkcbwU8DE=
|
||||
github.com/prometheus/client_model v0.0.0-20190129233127-fd36f4220a90/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA=
|
||||
github.com/prometheus/common v0.0.0-20181113130724-41aa239b4cce/go.mod h1:daVV7qP5qjZbuso7PdcryaAu0sAZbrN9i7WWcTMWvro=
|
||||
github.com/prometheus/common v0.4.0/go.mod h1:TNfzLD0ON7rHzMJeJkieUDPYmFC7Snx/y86RQel1bk4=
|
||||
github.com/prometheus/common v0.4.1/go.mod h1:TNfzLD0ON7rHzMJeJkieUDPYmFC7Snx/y86RQel1bk4=
|
||||
github.com/prometheus/common v0.6.0 h1:kRhiuYSXR3+uv2IbVbZhUxK5zVD/2pp3Gd2PpvPkpEo=
|
||||
github.com/prometheus/common v0.6.0/go.mod h1:eBmuwkDJBwy6iBfxCBob6t6dR6ENT/y+J+Zk0j9GMYc=
|
||||
github.com/prometheus/procfs v0.0.0-20180125133057-cb4147076ac7/go.mod h1:c3At6R/oaqEKCNdg8wHV1ftS6bRYblBhIjjI8uT2IGk=
|
||||
github.com/prometheus/procfs v0.0.0-20181005140218-185b4288413d/go.mod h1:c3At6R/oaqEKCNdg8wHV1ftS6bRYblBhIjjI8uT2IGk=
|
||||
github.com/prometheus/procfs v0.0.0-20190507164030-5867b95ac084/go.mod h1:TjEm7ze935MbeOT/UhFTIMYKhuLP4wbCsTZCD3I8kEA=
|
||||
github.com/prometheus/procfs v0.0.2/go.mod h1:TjEm7ze935MbeOT/UhFTIMYKhuLP4wbCsTZCD3I8kEA=
|
||||
github.com/prometheus/procfs v0.0.3/go.mod h1:4A/X28fw3Fc593LaREMrKMqOKvUAntwMDaekg4FpcdQ=
|
||||
github.com/prometheus/procfs v0.0.5 h1:3+auTFlqw+ZaQYJARz6ArODtkaIwtvBTx3N2NehQlL8=
|
||||
github.com/prometheus/procfs v0.0.5/go.mod h1:4A/X28fw3Fc593LaREMrKMqOKvUAntwMDaekg4FpcdQ=
|
||||
github.com/prometheus/tsdb v0.7.1/go.mod h1:qhTCs0VvXwvX/y3TZrWD7rabWM+ijKTux40TwIPHuXU=
|
||||
github.com/rogpeppe/fastuuid v0.0.0-20150106093220-6724a57986af/go.mod h1:XWv6SoW27p1b0cqNHllgS5HIMJraePCO15w5zCzIWYg=
|
||||
github.com/russross/blackfriday v2.0.0+incompatible h1:cBXrhZNUf9C+La9/YpS+UHpUT8YD6Td9ZMSU9APFcsk=
|
||||
github.com/russross/blackfriday v2.0.0+incompatible/go.mod h1:JO/DiYxRf+HjHt06OyowR9PTA263kcR/rfWxYHBV53g=
|
||||
github.com/russross/blackfriday/v2 v2.0.1 h1:lPqVAte+HuHNfhJ/0LC98ESWRz8afy9tM/0RK8m9o+Q=
|
||||
github.com/russross/blackfriday/v2 v2.0.1/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM=
|
||||
github.com/sclevine/agouti v3.0.0+incompatible/go.mod h1:b4WX9W9L1sfQKXeJf1mUTLZKJ48R1S7H23Ji7oFO5Bw=
|
||||
github.com/shurcooL/sanitized_anchor_name v1.0.0 h1:PdmoCO6wvbs+7yrJyMORt4/BmY5IYyJwS/kOiWx8mHo=
|
||||
github.com/shurcooL/sanitized_anchor_name v1.0.0/go.mod h1:1NzhyTcUVG4SuEtjjoZeVRXNmyL/1OwPU0+IJeTBvfc=
|
||||
github.com/sirupsen/logrus v1.2.0/go.mod h1:LxeOpSwHxABJmUn/MG1IvRgCAasNZTLOkJPxbbu5VWo=
|
||||
github.com/sirupsen/logrus v1.4.1/go.mod h1:ni0Sbl8bgC9z8RoU9G6nDWqqs/fq4eDPysMBDgk/93Q=
|
||||
github.com/sirupsen/logrus v1.4.2/go.mod h1:tLMulIdttU9McNUspp0xgXVQah82FyeX6MwdIuYE2rE=
|
||||
github.com/sirupsen/logrus v1.5.0 h1:1N5EYkVAPEywqZRJd7cwnRtCb6xJx7NH3T3WUTF980Q=
|
||||
github.com/sirupsen/logrus v1.5.0/go.mod h1:+F7Ogzej0PZc/94MaYx/nvG9jOFMD2osvC3s+Squfpo=
|
||||
github.com/sirupsen/logrus v1.6.0 h1:UBcNElsrwanuuMsnGSlYmtmgbb23qDR5dG+6X6Oo89I=
|
||||
github.com/sirupsen/logrus v1.6.0/go.mod h1:7uNnSEd1DgxDLC74fIahvMZmmYsHGZGEOFrfsX/uA88=
|
||||
github.com/soheilhy/cmux v0.1.4/go.mod h1:IM3LyeVVIOuxMH7sFAkER9+bJ4dT7Ms6E4xg4kGIyLM=
|
||||
github.com/spaolacci/murmur3 v0.0.0-20180118202830-f09979ecbc72/go.mod h1:JwIasOWyU6f++ZhiEuf87xNszmSA2myDM2Kzu9HwQUA=
|
||||
github.com/spf13/afero v1.1.2/go.mod h1:j4pytiNVoe2o6bmDsKpLACNPDBIoEAkihy7loJ1B0CQ=
|
||||
github.com/spf13/cast v1.3.0/go.mod h1:Qx5cxh0v+4UWYiBimWS+eyWzqEqokIECu5etghLkUJE=
|
||||
github.com/spf13/cobra v1.0.0 h1:6m/oheQuQ13N9ks4hubMG6BnvwOeaJrqSPLahSnczz8=
|
||||
github.com/spf13/cobra v1.0.0/go.mod h1:/6GTrnGXV9HjY+aR4k0oJ5tcvakLuG6EuKReYlHNrgE=
|
||||
github.com/spf13/jwalterweatherman v1.0.0/go.mod h1:cQK4TGJAtQXfYWX+Ddv3mKDzgVb68N+wFjFa4jdeBTo=
|
||||
github.com/spf13/pflag v1.0.3/go.mod h1:DYY7MBk1bdzusC3SYhjObp+wFpr4gzcvqqNjLnInEg4=
|
||||
github.com/spf13/pflag v1.0.5 h1:iy+VFUOCP1a+8yFto/drg2CJ5u0yRoB7fZw3DKv/JXA=
|
||||
github.com/spf13/pflag v1.0.5/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An2Bg=
|
||||
github.com/spf13/viper v1.4.0/go.mod h1:PTJ7Z/lr49W6bUbkmS1V3by4uWynFiR9p7+dSq/yZzE=
|
||||
github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
|
||||
github.com/stretchr/objx v0.1.1 h1:2vfRuCMp5sSVIDSqO8oNnWJq7mPa6KVP3iPIwFBuy8A=
|
||||
github.com/stretchr/objx v0.1.1/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
|
||||
github.com/stretchr/testify v1.2.2/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXfy6kDkUVs=
|
||||
github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI=
|
||||
github.com/stretchr/testify v1.4.0/go.mod h1:j7eGeouHqKxXV5pUuKE4zz7dFj8WfuZ+81PSLYec5m4=
|
||||
github.com/stretchr/testify v1.5.1 h1:nOGnQDM7FYENwehXlg/kFVnos3rEvtKTjRvOWSzb6H4=
|
||||
github.com/stretchr/testify v1.5.1/go.mod h1:5W2xD1RspED5o8YsWQXVCued0rvSQ+mT+I5cxcmMvtA=
|
||||
github.com/stretchr/testify v1.6.0 h1:jlIyCplCJFULU/01vCkhKuTyc3OorI3bJFuw6obfgho=
|
||||
github.com/stretchr/testify v1.6.0/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
|
||||
github.com/stretchr/testify v1.6.1 h1:hDPOHmpOpP40lSULcqw7IrRb/u7w6RpDC9399XyoNd0=
|
||||
github.com/stretchr/testify v1.6.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
|
||||
github.com/syndtr/gocapability v0.0.0-20170704070218-db04d3cc01c8/go.mod h1:hkRG7XYTFWNJGYcbNJQlaLq0fg1yr4J4t/NcTQtrfww=
|
||||
github.com/syndtr/gocapability v0.0.0-20180916011248-d98352740cb2 h1:b6uOv7YOFK0TYG7HtkIgExQo+2RdLuwRft63jn2HWj8=
|
||||
github.com/syndtr/gocapability v0.0.0-20180916011248-d98352740cb2/go.mod h1:hkRG7XYTFWNJGYcbNJQlaLq0fg1yr4J4t/NcTQtrfww=
|
||||
github.com/tchap/go-patricia v2.3.0+incompatible h1:GkY4dP3cEfEASBPPkWd+AmjYxhmDkqO9/zg7R0lSQRs=
|
||||
github.com/tchap/go-patricia v2.3.0+incompatible/go.mod h1:bmLyhP68RS6kStMGxByiQ23RP/odRBOTVjwp2cDyi6I=
|
||||
github.com/tmc/grpc-websocket-proxy v0.0.0-20190109142713-0ad062ec5ee5/go.mod h1:ncp9v5uamzpCO7NfCPTXjqaC+bZgJeR0sMTm6dMHP7U=
|
||||
github.com/ugorji/go v1.1.4/go.mod h1:uQMGLiO92mf5W77hV/PUCpI3pbzQx3CRekS0kk+RGrc=
|
||||
github.com/ulikunitz/xz v0.5.6/go.mod h1:2bypXElzHzzJZwzH67Y6wb67pO62Rzfn7BSiF4ABRW8=
|
||||
github.com/ulikunitz/xz v0.5.7 h1:YvTNdFzX6+W5m9msiYg/zpkSURPPtOlzbqYjrFn7Yt4=
|
||||
github.com/ulikunitz/xz v0.5.7/go.mod h1:nbz6k7qbPmH4IRqmfOplQw/tblSgqTqBwxkY0oWt/14=
|
||||
github.com/urfave/cli v0.0.0-20171014202726-7bc6a0acffa5/go.mod h1:70zkFmudgCuE/ngEzBv17Jvp/497gISqfk5gWijbERA=
|
||||
github.com/urfave/cli v1.22.1 h1:+mkCCcOFKPnCmVYVcURKps1Xe+3zP90gSYGNfRkjoIY=
|
||||
github.com/urfave/cli v1.22.1/go.mod h1:Gos4lmkARVdJ6EkW0WaNv/tZAAMe9V7XWyB60NtXRu0=
|
||||
github.com/vbatts/tar-split v0.11.1 h1:0Odu65rhcZ3JZaPHxl7tCI3V/C/Q9Zf82UFravl02dE=
|
||||
github.com/vbatts/tar-split v0.11.1/go.mod h1:LEuURwDEiWjRjwu46yU3KVGuUdVv/dcnpcEPSzR8z6g=
|
||||
github.com/vbauerster/mpb/v5 v5.0.3 h1:Ldt/azOkbThTk2loi6FrBd/3fhxGFQ24MxFAS88PoNY=
|
||||
github.com/vbauerster/mpb/v5 v5.0.3/go.mod h1:h3YxU5CSr8rZP4Q3xZPVB3jJLhWPou63lHEdr9ytH4Y=
|
||||
github.com/vbauerster/mpb/v5 v5.0.4 h1:w7l/tJfHmtIOKZkU+bhbDZOUxj1kln9jy4DUOp3Tl14=
|
||||
github.com/vbauerster/mpb/v5 v5.0.4/go.mod h1:fvzasBUyuo35UyuA6sSOlVhpLoNQsp2nBdHw7OiSUU8=
|
||||
github.com/vbauerster/mpb/v5 v5.2.2 h1:zIICVOm+XD+uV6crpSORaL6I0Q1WqOdvxZTp+r3L9cw=
|
||||
github.com/vbauerster/mpb/v5 v5.2.2/go.mod h1:W5Fvgw4dm3/0NhqzV8j6EacfuTe5SvnzBRwiXxDR9ww=
|
||||
github.com/xeipuuv/gojsonpointer v0.0.0-20180127040702-4e3ac2762d5f/go.mod h1:N2zxlSyiKSe5eX1tZViRH5QA0qijqEDrYZiPEAiq3wU=
|
||||
github.com/xeipuuv/gojsonpointer v0.0.0-20190809123943-df4f5c81cb3b h1:6cLsL+2FW6dRAdl5iMtHgRogVCff0QpRi9653YmdcJA=
|
||||
github.com/xeipuuv/gojsonpointer v0.0.0-20190809123943-df4f5c81cb3b/go.mod h1:N2zxlSyiKSe5eX1tZViRH5QA0qijqEDrYZiPEAiq3wU=
|
||||
@@ -225,33 +316,44 @@ github.com/xeipuuv/gojsonreference v0.0.0-20180127040603-bd5ef7bd5415/go.mod h1:
|
||||
github.com/xeipuuv/gojsonschema v0.0.0-20180618132009-1d523034197f/go.mod h1:5yf86TLmAcydyeJq5YvxkGPE2fm/u4myDekKRoLuqhs=
|
||||
github.com/xeipuuv/gojsonschema v1.2.0 h1:LhYJRs+L4fBtjZUfuSZIKGeVu0QRy8e5Xi7D17UxZ74=
|
||||
github.com/xeipuuv/gojsonschema v1.2.0/go.mod h1:anYRn/JVcOK2ZgGU+IjEV4nwlhoK5sQluxsYJ78Id3Y=
|
||||
github.com/xiang90/probing v0.0.0-20190116061207-43a291ad63a2/go.mod h1:UETIi67q53MR2AWcXfiuqkDkRtnGDLqkBTpCHuJHxtU=
|
||||
github.com/xordataexchange/crypt v0.0.3-0.20170626215501-b2862e3d0a77/go.mod h1:aYKd//L2LvnjZzWKhF00oedf4jCCReLcmhLdhm1A27Q=
|
||||
go.etcd.io/bbolt v1.3.2/go.mod h1:IbVyRI1SCnLcuJnV2u8VeU0CEYM7e686BmAb1XKL+uU=
|
||||
go.etcd.io/bbolt v1.3.4 h1:hi1bXHMVrlQh6WwxAy+qZCV/SYIlqo+Ushwdpa4tAKg=
|
||||
go.etcd.io/bbolt v1.3.4/go.mod h1:G5EMThwa9y8QZGBClrRx5EY+Yw9kAhnjy3bSjsnlVTQ=
|
||||
go.opencensus.io v0.22.0 h1:C9hSCOW830chIVkdja34wa6Ky+IzWllkUinR+BtRZd4=
|
||||
go.opencensus.io v0.22.0/go.mod h1:+kGneAE2xo2IficOXnaByMWTGM9T73dGwxeWcUqIpI8=
|
||||
go.uber.org/atomic v1.4.0/go.mod h1:gD2HeocX3+yG+ygLZcrzQJaqmWj9AIm7n08wl/qW/PE=
|
||||
go.uber.org/multierr v1.1.0/go.mod h1:wR5kodmAFQ0UK8QlbwjlSNy0Z68gJhDJUG5sjR94q/0=
|
||||
go.uber.org/zap v1.10.0/go.mod h1:vwi/ZaCAaUcBkycHslxD9B2zi4UTXhF60s6SWpuDF0Q=
|
||||
go4.org v0.0.0-20190218023631-ce4c26f7be8e h1:m9LfARr2VIOW0vsV19kEKp/sWQvZnGobA8JHui/XJoY=
|
||||
go4.org v0.0.0-20190218023631-ce4c26f7be8e/go.mod h1:MkTOUMDaeVYJUOUsaDXIhWPZYa1yOyC1qaOBpL57BhE=
|
||||
golang.org/x/crypto v0.0.0-20180904163835-0709b304e793/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
|
||||
golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
|
||||
golang.org/x/crypto v0.0.0-20190701094942-4def268fd1a4/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
|
||||
golang.org/x/crypto v0.0.0-20200311171314-f7b00557c8c4/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
|
||||
golang.org/x/crypto v0.0.0-20200323165209-0ec3e9974c59 h1:3zb4D3T4G8jdExgVU/95+vQXfpEPiMdCaZgmGVxjNHM=
|
||||
golang.org/x/crypto v0.0.0-20200323165209-0ec3e9974c59/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
|
||||
golang.org/x/crypto v0.0.0-20200423211502-4bdfaf469ed5 h1:Q7tZBpemrlsc2I7IyODzhtallWRSm4Q0d09pL6XbQtU=
|
||||
golang.org/x/crypto v0.0.0-20200423211502-4bdfaf469ed5/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
|
||||
golang.org/x/exp v0.0.0-20190121172915-509febef88a4/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
|
||||
golang.org/x/lint v0.0.0-20181026193005-c67002cb31c3/go.mod h1:UVdnD1Gm6xHRNCYTkRU2/jEulfH38KcIWyp/GAMgvoE=
|
||||
golang.org/x/lint v0.0.0-20190227174305-5b3e6a55c961/go.mod h1:wehouNa3lNwaWXcvxsM5YxQ5yQlVC4a0KAMCusXpPoU=
|
||||
golang.org/x/lint v0.0.0-20190313153728-d0100b6bd8b3/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc=
|
||||
golang.org/x/net v0.0.0-20180724234803-3673e40ba225/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
|
||||
golang.org/x/net v0.0.0-20180826012351-8a410e7b638d/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
|
||||
golang.org/x/net v0.0.0-20180906233101-161cd47e91fd/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
|
||||
golang.org/x/net v0.0.0-20181114220301-adae6a3d119a/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
|
||||
golang.org/x/net v0.0.0-20181220203305-927f97764cc3/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
|
||||
golang.org/x/net v0.0.0-20190213061140-3a22650c66bd/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
|
||||
golang.org/x/net v0.0.0-20190311183353-d8887717615a/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
|
||||
golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
|
||||
golang.org/x/net v0.0.0-20190501004415-9ce7a6920f09/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
|
||||
golang.org/x/net v0.0.0-20190522155817-f3200d17e092/go.mod h1:HSz+uSET+XFnRR8LxR5pz3Of3rY3CfYBVs4xY44aLks=
|
||||
golang.org/x/net v0.0.0-20190613194153-d28f0bde5980/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
|
||||
golang.org/x/net v0.0.0-20190628185345-da137c7871d7/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
|
||||
golang.org/x/net v0.0.0-20191004110552-13f9640d40b9/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
|
||||
golang.org/x/net v0.0.0-20200324143707-d3edc9973b7e h1:3G+cUijn7XD+S4eJFddp53Pv7+slrESplyjG25HgL+k=
|
||||
golang.org/x/net v0.0.0-20200324143707-d3edc9973b7e/go.mod h1:qpuaurCH72eLCgpAm/N6yyVIVM9cpaDIP3A8BGJEC5A=
|
||||
golang.org/x/net v0.0.0-20200520004742-59133d7f0dd7 h1:AeiKBIuRw3UomYXSbLy0Mc2dDLfdtbT/IVn4keq83P0=
|
||||
golang.org/x/net v0.0.0-20200520004742-59133d7f0dd7/go.mod h1:qpuaurCH72eLCgpAm/N6yyVIVM9cpaDIP3A8BGJEC5A=
|
||||
golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
|
||||
golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
|
||||
golang.org/x/sync v0.0.0-20181108010431-42b317875d0f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
|
||||
@@ -262,6 +364,8 @@ golang.org/x/sync v0.0.0-20200317015054-43a5402ce75a h1:WXEvlFVvvGxCJLG6REjsT03i
|
||||
golang.org/x/sync v0.0.0-20200317015054-43a5402ce75a/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
|
||||
golang.org/x/sys v0.0.0-20180830151530-49385e6e1522/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
|
||||
golang.org/x/sys v0.0.0-20180905080454-ebe1bf3edb33/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
|
||||
golang.org/x/sys v0.0.0-20180909124046-d0be0721c37e/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
|
||||
golang.org/x/sys v0.0.0-20181107165924-66b7b1311ac8/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
|
||||
golang.org/x/sys v0.0.0-20181116152217-5ac8a444bdc5/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
|
||||
golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
|
||||
golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
|
||||
@@ -269,17 +373,26 @@ golang.org/x/sys v0.0.0-20190422165155-953cdadca894/go.mod h1:h1NjWce9XRLGQEsW7w
|
||||
golang.org/x/sys v0.0.0-20190502145724-3ef323f4f1fd/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
|
||||
golang.org/x/sys v0.0.0-20190514135907-3a4b5fb9f71f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
|
||||
golang.org/x/sys v0.0.0-20190801041406-cbf593c0f2f3/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
|
||||
golang.org/x/sys v0.0.0-20190904154756-749cb33beabd/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
|
||||
golang.org/x/sys v0.0.0-20190916202348-b4ddaad3f8a3/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
|
||||
golang.org/x/sys v0.0.0-20191005200804-aed5e4c7ecf9/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
|
||||
golang.org/x/sys v0.0.0-20191115151921-52ab43148777/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
|
||||
golang.org/x/sys v0.0.0-20191120155948-bd437916bb0e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
|
||||
golang.org/x/sys v0.0.0-20191127021746-63cb32ae39b2/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
|
||||
golang.org/x/sys v0.0.0-20200116001909-b77594299b42/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
|
||||
golang.org/x/sys v0.0.0-20200124204421-9fbb57f87de9/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
|
||||
golang.org/x/sys v0.0.0-20200202164722-d101bd2416d5/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
|
||||
golang.org/x/sys v0.0.0-20200302150141-5c8b2ff67527/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
|
||||
golang.org/x/sys v0.0.0-20200323222414-85ca7c5b95cd/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
|
||||
golang.org/x/sys v0.0.0-20200327173247-9dae0f8f5775 h1:TC0v2RSO1u2kn1ZugjrFXkRZAEaqMN/RW+OTZkBzmLE=
|
||||
golang.org/x/sys v0.0.0-20200327173247-9dae0f8f5775/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
|
||||
golang.org/x/sys v0.0.0-20200420163511-1957bb5e6d1f h1:gWF768j/LaZugp8dyS4UwsslYCYz9XgFxvlgsn0n9H8=
|
||||
golang.org/x/sys v0.0.0-20200420163511-1957bb5e6d1f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
|
||||
golang.org/x/sys v0.0.0-20200519105757-fe76b779f299 h1:DYfZAGf2WMFjMxbgTjaC+2HC7NkNAQs+6Q8b9WEB/F4=
|
||||
golang.org/x/sys v0.0.0-20200519105757-fe76b779f299/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
|
||||
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
|
||||
golang.org/x/text v0.3.2 h1:tW2bmiBqwgJj/UpqtC8EpXEZVYOwU0yG4iWbprSVAcs=
|
||||
golang.org/x/text v0.3.2/go.mod h1:bEr9sfX3Q8Zfm5fL9x+3itogRgK3+ptLWKqgva+5dAk=
|
||||
golang.org/x/text v0.3.3 h1:cokOdA+Jmi5PJGXLlLllQSgYigAEfHXJAERHVMaCc2k=
|
||||
golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
|
||||
golang.org/x/time v0.0.0-20190308202827-9d24e82272b4/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
|
||||
golang.org/x/time v0.0.0-20191024005414-555d28b269f0 h1:/5xXl8Y5W96D+TtHSlonuFqGHIWVuyCkGJLwGh9JJFs=
|
||||
golang.org/x/time v0.0.0-20191024005414-555d28b269f0/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
|
||||
golang.org/x/tools v0.0.0-20180221164845-07fd8470d635/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
|
||||
@@ -289,25 +402,48 @@ golang.org/x/tools v0.0.0-20190114222345-bf090417da8b/go.mod h1:n7NCudcB/nEzxVGm
|
||||
golang.org/x/tools v0.0.0-20190226205152-f727befe758c/go.mod h1:9Yl7xja0Znq3iFh3HoIrodX9oNMXvdceNzlUR8zjMvY=
|
||||
golang.org/x/tools v0.0.0-20190311212946-11955173bddd/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs=
|
||||
golang.org/x/tools v0.0.0-20190524140312-2c0ae7006135/go.mod h1:RgjU9mgBXZiqYHBnxXauZ1Gv1EHHAz9KjViQ78xBX0Q=
|
||||
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543 h1:E7g+9GITq07hpfrRu66IVDexMakfv52eLZ2CXBWiKr4=
|
||||
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
|
||||
google.golang.org/appengine v1.1.0/go.mod h1:EbEs0AVv82hx2wNQdGPgUI5lhzA/G0D9YwlJXL52JkM=
|
||||
google.golang.org/appengine v1.4.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4=
|
||||
google.golang.org/genproto v0.0.0-20180817151627-c66870c02cf8/go.mod h1:JiN7NxoALGmiZfu7CAH4rXhgtRTLTxftemlI0sWmxmc=
|
||||
google.golang.org/genproto v0.0.0-20190425155659-357c62f0e4bb h1:i1Ppqkc3WQXikh8bXiwHqAN5Rv3/qDCcRk0/Otx73BY=
|
||||
google.golang.org/genproto v0.0.0-20190425155659-357c62f0e4bb/go.mod h1:VzzqZJRnGkLBvHegQrXjBqPurQTc5/KpmUdxsrq26oE=
|
||||
google.golang.org/genproto v0.0.0-20190502173448-54afdca5d873 h1:nfPFGzJkUDX6uBmpN/pSw7MbOAWegH5QDQuoXFHedLg=
|
||||
google.golang.org/genproto v0.0.0-20190502173448-54afdca5d873/go.mod h1:VzzqZJRnGkLBvHegQrXjBqPurQTc5/KpmUdxsrq26oE=
|
||||
google.golang.org/grpc v1.19.0/go.mod h1:mqu4LbDTu4XGKhr4mRzUsmM4RtVoemTSY81AxZiDr8c=
|
||||
google.golang.org/grpc v1.20.1/go.mod h1:10oTOabMzJvdu6/UiuZezV6QK5dSlG84ov/aaiqXj38=
|
||||
google.golang.org/grpc v1.21.0/go.mod h1:oYelfM1adQP15Ek0mdvEgi9Df8B9CZIaU1084ijfRaM=
|
||||
google.golang.org/grpc v1.23.1/go.mod h1:Y5yQAOtifL1yxbo5wqy6BxZv8vAUGQwXBOALyacEbxg=
|
||||
google.golang.org/grpc v1.24.0 h1:vb/1TCsVn3DcJlQ0Gs1yB1pKI6Do2/QNwxdKqmc/b0s=
|
||||
google.golang.org/grpc v1.24.0/go.mod h1:XDChyiUovWa60DnaeDeZmSW86xtLtjtZbwvSiRnRtcA=
|
||||
google.golang.org/protobuf v0.0.0-20200109180630-ec00e32a8dfd/go.mod h1:DFci5gLYBciE7Vtevhsrf46CRTquxDuWsQurQQe4oz8=
|
||||
google.golang.org/protobuf v0.0.0-20200221191635-4d8936d0db64/go.mod h1:kwYJMbMJ01Woi6D6+Kah6886xMZcty6N08ah7+eCXa0=
|
||||
google.golang.org/protobuf v0.0.0-20200228230310-ab0ca4ff8a60/go.mod h1:cfTl7dwQJ+fmap5saPgwCLgHXTUD7jkjRqWcaiX5VyM=
|
||||
google.golang.org/protobuf v1.20.1-0.20200309200217-e05f789c0967/go.mod h1:A+miEFZTKqfCUM6K7xSMQL9OKL/b6hQv+e19PK+JZNE=
|
||||
google.golang.org/protobuf v1.21.0/go.mod h1:47Nbq4nVaFHyn7ilMalzfO3qCViNmqZ2kzikPIcrTAo=
|
||||
google.golang.org/protobuf v1.23.0 h1:4MY060fB1DLGMB/7MBTLnwQUY6+F09GEiz6SsrNqyzM=
|
||||
google.golang.org/protobuf v1.23.0/go.mod h1:EGpADcykh3NcUnDUJcl1+ZksZNG86OlYog2l/sGQquU=
|
||||
gopkg.in/alecthomas/kingpin.v2 v2.2.6/go.mod h1:FMv+mEhP44yOT+4EoQTLFTRgOQ1FBLkstjWtayDeSgw=
|
||||
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
|
||||
gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
|
||||
gopkg.in/check.v1 v1.0.0-20190902080502-41f04d3bba15 h1:YR8cESwS4TdDjEe65xsg0ogRM/Nc3DYOhEAlW+xobZo=
|
||||
gopkg.in/check.v1 v1.0.0-20190902080502-41f04d3bba15/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
|
||||
gopkg.in/fsnotify.v1 v1.4.7/go.mod h1:Tz8NjZHkW78fSQdbUxIjBTcgA1z1m8ZHf0WmKUhAMys=
|
||||
gopkg.in/resty.v1 v1.12.0/go.mod h1:mDo4pnntr5jdWRML875a/NmxYqAlA73dVijT2AXvQQo=
|
||||
gopkg.in/square/go-jose.v2 v2.3.1 h1:SK5KegNXmKmqE342YYN2qPHEnUYeoMiXXl1poUlI+o4=
|
||||
gopkg.in/square/go-jose.v2 v2.3.1/go.mod h1:M9dMgbHiYLoDGQrXy7OpJDJWiKiU//h+vD76mk0e1AI=
|
||||
gopkg.in/tomb.v1 v1.0.0-20141024135613-dd632973f1e7/go.mod h1:dt/ZhP58zS4L8KSrWDmTeBkI65Dw0HsyUHuEVlX15mw=
|
||||
gopkg.in/yaml.v2 v2.0.0-20170812160011-eb3733d160e7/go.mod h1:JAlM8MvJe8wmxCU4Bli9HhUf9+ttbYbLASfIpnQbh74=
|
||||
gopkg.in/yaml.v2 v2.2.1/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
|
||||
gopkg.in/yaml.v2 v2.2.2/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
|
||||
gopkg.in/yaml.v2 v2.2.4/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
|
||||
gopkg.in/yaml.v2 v2.2.8 h1:obN1ZagJSUGI0Ek/LBmuj4SNLPfIny3KsKFopxRdj10=
|
||||
gopkg.in/yaml.v2 v2.2.8/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
|
||||
gopkg.in/yaml.v2 v2.3.0 h1:clyUAQHOM3G0M3f5vQj7LuJrETvjVot3Z5el9nffUtU=
|
||||
gopkg.in/yaml.v2 v2.3.0/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
|
||||
gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c h1:dUUwHk2QECo/6vqA44rthZ8ie2QXMNeKRTHCNY2nXvo=
|
||||
gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
|
||||
gotest.tools v2.2.0+incompatible h1:VsBPFP1AI068pPrMxtb/S8Zkgf9xEmTLJjfM+P5UIEo=
|
||||
gotest.tools v2.2.0+incompatible/go.mod h1:DsYFclhRJ6vuDpmuTbkuFWG+y2sxOXAzmJt81HFBacw=
|
||||
honnef.co/go/tools v0.0.0-20190102054323-c2f93a96b099/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
|
||||
|
||||
@@ -25,6 +25,12 @@ on alpine:
$ sudo apk add skopeo
```

on macOS:

```sh
$ brew install skopeo
```

Debian (10 and newer including Raspbian) and Ubuntu (18.04 and newer): Packages
are available via the [Kubic][0] project repositories:

@@ -30,18 +30,11 @@ type SkopeoSuite struct {
|
||||
func (s *SkopeoSuite) SetUpSuite(c *check.C) {
|
||||
_, err := exec.LookPath(skopeoBinary)
|
||||
c.Assert(err, check.IsNil)
|
||||
}
|
||||
|
||||
func (s *SkopeoSuite) TearDownSuite(c *check.C) {
|
||||
|
||||
}
|
||||
|
||||
func (s *SkopeoSuite) SetUpTest(c *check.C) {
|
||||
s.regV2 = setupRegistryV2At(c, privateRegistryURL0, false, false)
|
||||
s.regV2WithAuth = setupRegistryV2At(c, privateRegistryURL1, true, false)
|
||||
}
|
||||
|
||||
func (s *SkopeoSuite) TearDownTest(c *check.C) {
|
||||
func (s *SkopeoSuite) TearDownSuite(c *check.C) {
|
||||
if s.regV2 != nil {
|
||||
s.regV2.Close()
|
||||
}
|
||||
@@ -71,7 +64,7 @@ func (s *SkopeoSuite) TestNeedAuthToPrivateRegistryV2WithoutDockerCfg(c *check.C
|
||||
}
|
||||
|
||||
func (s *SkopeoSuite) TestCertDirInsteadOfCertPath(c *check.C) {
|
||||
wanted := ".*flag provided but not defined: -cert-path.*"
|
||||
wanted := ".*unknown flag: --cert-path.*"
|
||||
assertSkopeoFails(c, wanted, "--tls-verify=false", "inspect", fmt.Sprintf("docker://%s/busybox:latest", s.regV2WithAuth.url), "--cert-path=/")
|
||||
wanted = ".*unauthorized: authentication required.*"
|
||||
assertSkopeoFails(c, wanted, "--tls-verify=false", "inspect", fmt.Sprintf("docker://%s/busybox:latest", s.regV2WithAuth.url), "--cert-dir=/etc/docker/certs.d/")
|
||||
@@ -91,3 +84,30 @@ func (s *SkopeoSuite) TestNoNeedAuthToPrivateRegistryV2ImageNotFound(c *check.C)
|
||||
func (s *SkopeoSuite) TestInspectFailsWhenReferenceIsInvalid(c *check.C) {
|
||||
assertSkopeoFails(c, `.*Invalid image name.*`, "inspect", "unknown")
|
||||
}
|
||||
|
||||
func (s *SkopeoSuite) TestLoginLogout(c *check.C) {
|
||||
wanted := "^Login Succeeded!\n$"
|
||||
assertSkopeoSucceeds(c, wanted, "login", "--tls-verify=false", "--username="+s.regV2WithAuth.username, "--password="+s.regV2WithAuth.password, s.regV2WithAuth.url)
|
||||
// test --get-login returns username
|
||||
wanted = fmt.Sprintf("^%s\n$", s.regV2WithAuth.username)
|
||||
assertSkopeoSucceeds(c, wanted, "login", "--tls-verify=false", "--get-login", s.regV2WithAuth.url)
|
||||
// test logout
|
||||
wanted = fmt.Sprintf("^Removed login credentials for %s\n$", s.regV2WithAuth.url)
|
||||
assertSkopeoSucceeds(c, wanted, "logout", s.regV2WithAuth.url)
|
||||
}
|
||||
|
||||
func (s *SkopeoSuite) TestCopyWithLocalAuth(c *check.C) {
|
||||
wanted := "^Login Succeeded!\n$"
|
||||
assertSkopeoSucceeds(c, wanted, "login", "--tls-verify=false", "--username="+s.regV2WithAuth.username, "--password="+s.regV2WithAuth.password, s.regV2WithAuth.url)
|
||||
// copy to private registry using local authentication
|
||||
imageName := fmt.Sprintf("docker://%s/busybox:mine", s.regV2WithAuth.url)
|
||||
assertSkopeoSucceeds(c, "", "copy", "--dest-tls-verify=false", "docker://docker.io/library/busybox:latest", imageName)
|
||||
// inspect from private registry
|
||||
assertSkopeoSucceeds(c, "", "inspect", "--tls-verify=false", imageName)
|
||||
// logout from the registry
|
||||
wanted = fmt.Sprintf("^Removed login credentials for %s\n$", s.regV2WithAuth.url)
|
||||
assertSkopeoSucceeds(c, wanted, "logout", s.regV2WithAuth.url)
|
||||
// inspect from private registry should fail after logout
|
||||
wanted = ".*unauthorized: authentication required.*"
|
||||
assertSkopeoFails(c, wanted, "inspect", "--tls-verify=false", imageName)
|
||||
}
|
||||
|
||||
@@ -1055,7 +1055,7 @@ func (s *CopySuite) TestCopyAtomicExtension(c *check.C) {
|
||||
|
||||
// Get another image (different so that they don't share signatures, and sign it using docker://)
|
||||
assertSkopeoSucceeds(c, "", "--tls-verify=false", "--registries.d", registriesDir,
|
||||
"copy", "--sign-by", "personal@example.com", "docker://estesp/busybox:ppc64le", "atomic:localhost:5000/myns/extension:extension")
|
||||
"copy", "--sign-by", "personal@example.com", "docker://estesp/busybox:ppc64le", "docker://localhost:5000/myns/extension:extension")
|
||||
c.Logf("%s", combinedOutputOfCommand(c, "oc", "get", "istag", "extension:extension", "-o", "json"))
|
||||
// Pulling the image using atomic: succeeds.
|
||||
assertSkopeoSucceeds(c, "", "--debug", "--tls-verify=false", "--policy", policy,
|
||||
@@ -1067,6 +1067,67 @@ func (s *CopySuite) TestCopyAtomicExtension(c *check.C) {
|
||||
assertDirImagesAreEqual(c, filepath.Join(topDir, "dirDA"), filepath.Join(topDir, "dirDD"))
|
||||
}
|
||||
|
||||
func (s *CopySuite) TestCopyVerifyingMirroredSignatures(c *check.C) {
|
||||
const regPrefix = "docker://localhost:5006/myns/mirroring-"
|
||||
|
||||
mech, _, err := signature.NewEphemeralGPGSigningMechanism([]byte{})
|
||||
c.Assert(err, check.IsNil)
|
||||
defer mech.Close()
|
||||
if err := mech.SupportsSigning(); err != nil { // FIXME? Test that verification and policy enforcement works, using signatures from fixtures
|
||||
c.Skip(fmt.Sprintf("Signing not supported: %v", err))
|
||||
}
|
||||
|
||||
topDir, err := ioutil.TempDir("", "mirrored-signatures") // FIXME: Will this be used?
|
||||
c.Assert(err, check.IsNil)
|
||||
defer os.RemoveAll(topDir)
|
||||
registriesDir := filepath.Join(topDir, "registries.d") // An empty directory to disable sigstore use
|
||||
dirDest := "dir:" + filepath.Join(topDir, "unused-dest")
|
||||
|
||||
policy := fileFromFixture(c, "fixtures/policy.json", map[string]string{"@keydir@": s.gpgHome})
|
||||
defer os.Remove(policy)
|
||||
|
||||
// We use X-R-S-S for this testing to avoid having to deal with the sigstores.
|
||||
// A downside is that OpenShift records signatures per image, so the error messages below
|
||||
// list all signatures for other tags used for the same image as well.
|
||||
// So, make sure to never create a signature that could be considered valid in a different part of the test (i.e. don't reuse tags).
|
||||
|
||||
// Get an image to work with.
|
||||
assertSkopeoSucceeds(c, "", "copy", "--dest-tls-verify=false", "docker://busybox", regPrefix+"primary:unsigned")
|
||||
// Verify that unsigned images are rejected
|
||||
assertSkopeoFails(c, ".*Source image rejected: A signature was required, but no signature exists.*",
|
||||
"--policy", policy, "--registries.d", registriesDir, "--registries-conf", "fixtures/registries.conf", "copy", "--src-tls-verify=false", regPrefix+"primary:unsigned", dirDest)
|
||||
// Sign the image for the primary location
|
||||
assertSkopeoSucceeds(c, "", "--registries.d", registriesDir, "copy", "--src-tls-verify=false", "--dest-tls-verify=false", "--sign-by", "personal@example.com", regPrefix+"primary:unsigned", regPrefix+"primary:direct")
|
||||
// Verify that a correctly signed image in the primary location is usable.
|
||||
assertSkopeoSucceeds(c, "", "--policy", policy, "--registries.d", registriesDir, "--registries-conf", "fixtures/registries.conf", "copy", "--src-tls-verify=false", regPrefix+"primary:direct", dirDest)
|
||||
|
||||
// Sign the image for the mirror
|
||||
assertSkopeoSucceeds(c, "", "--registries.d", registriesDir, "copy", "--src-tls-verify=false", "--dest-tls-verify=false", "--sign-by", "personal@example.com", regPrefix+"primary:unsigned", regPrefix+"mirror:mirror-signed")
|
||||
// Verify that a correctly signed image for the mirror is accessible using the mirror's reference
|
||||
assertSkopeoSucceeds(c, "", "--policy", policy, "--registries.d", registriesDir, "--registries-conf", "fixtures/registries.conf", "copy", "--src-tls-verify=false", regPrefix+"mirror:mirror-signed", dirDest)
|
||||
// … but verify that while it is accessible using the primary location redirecting to the mirror, …
|
||||
assertSkopeoSucceeds(c, "" /* no --policy */, "--registries-conf", "fixtures/registries.conf", "copy", "--src-tls-verify=false", regPrefix+"primary:mirror-signed", dirDest)
|
||||
// … verify it is NOT accessible when requiring a signature.
|
||||
assertSkopeoFails(c, ".*Source image rejected: None of the signatures were accepted, reasons: Signature for identity localhost:5006/myns/mirroring-primary:direct is not accepted; Signature for identity localhost:5006/myns/mirroring-mirror:mirror-signed is not accepted.*",
|
||||
"--policy", policy, "--registries.d", registriesDir, "--registries-conf", "fixtures/registries.conf", "copy", "--src-tls-verify=false", regPrefix+"primary:mirror-signed", dirDest)
|
||||
|
||||
// Create a signature for mirroring-primary:primary-signed without pushing there. This should be easier than using standalone-sign.
|
||||
signingDir := filepath.Join(topDir, "signing-temp")
|
||||
assertSkopeoSucceeds(c, "", "copy", "--src-tls-verify=false", regPrefix+"primary:unsigned", "dir:"+signingDir)
|
||||
c.Logf("%s", combinedOutputOfCommand(c, "ls", "-laR", signingDir))
|
||||
assertSkopeoSucceeds(c, "^$", "standalone-sign", "-o", filepath.Join(signingDir, "signature-1"),
|
||||
filepath.Join(signingDir, "manifest.json"), "localhost:5006/myns/mirroring-primary:primary-signed", "personal@example.com")
|
||||
c.Logf("%s", combinedOutputOfCommand(c, "ls", "-laR", signingDir))
|
||||
assertSkopeoSucceeds(c, "", "--registries.d", registriesDir, "copy", "--dest-tls-verify=false", "dir:"+signingDir, regPrefix+"mirror:primary-signed")
|
||||
// Verify that a correctly signed image for the primary is accessible using the primary's reference
|
||||
assertSkopeoSucceeds(c, "", "--policy", policy, "--registries.d", registriesDir, "--registries-conf", "fixtures/registries.conf", "copy", "--src-tls-verify=false", regPrefix+"primary:primary-signed", dirDest)
|
||||
// … but verify that while it is accessible using the mirror location
|
||||
assertSkopeoSucceeds(c, "" /* no --policy */, "--registries-conf", "fixtures/registries.conf", "copy", "--src-tls-verify=false", regPrefix+"mirror:primary-signed", dirDest)
|
||||
// … verify it is NOT accessible when requiring a signature.
|
||||
assertSkopeoFails(c, ".*Source image rejected: None of the signatures were accepted, reasons: Signature for identity localhost:5006/myns/mirroring-primary:direct is not accepted; Signature for identity localhost:5006/myns/mirroring-mirror:mirror-signed is not accepted; Signature for identity localhost:5006/myns/mirroring-primary:primary-signed is not accepted.*",
|
||||
"--policy", policy, "--registries.d", registriesDir, "--registries-conf", "fixtures/registries.conf", "copy", "--src-tls-verify=false", regPrefix+"mirror:primary-signed", dirDest)
|
||||
}
|
||||
|
||||
func (s *SkopeoSuite) TestCopySrcWithAuth(c *check.C) {
|
||||
assertSkopeoSucceeds(c, "", "--tls-verify=false", "copy", "--dest-creds=testuser:testpassword", "docker://busybox", fmt.Sprintf("docker://%s/busybox:latest", s.regV2WithAuth.url))
|
||||
dir1, err := ioutil.TempDir("", "copy-1")
|
||||
|
||||
@@ -20,6 +20,20 @@
                "keyPath": "@keydir@/personal-pubkey.gpg"
            }
        ],
        "localhost:5006/myns/mirroring-primary": [
            {
                "type": "signedBy",
                "keyType": "GPGKeys",
                "keyPath": "@keydir@/personal-pubkey.gpg"
            }
        ],
        "localhost:5006/myns/mirroring-mirror": [
            {
                "type": "signedBy",
                "keyType": "GPGKeys",
                "keyPath": "@keydir@/personal-pubkey.gpg"
            }
        ],
        "docker.io/openshift": [
            {
                "type": "insecureAcceptAnything"

@@ -26,3 +26,9 @@ mirror = [
  { location = "wrong-mirror-0.invalid" },
  { location = "gcr.io/google-containers" },
]

[[registry]]
location = "localhost:5006/myns/mirroring-primary"
mirror = [
  { location = "localhost:5006/myns/mirroring-mirror"},
]

@@ -255,6 +255,40 @@ docker.io:
|
||||
c.Assert(nManifests, check.Equals, len(tags))
|
||||
}
|
||||
|
||||
func (s *SyncSuite) TestYamlRegex2Dir(c *check.C) {
|
||||
tmpDir, err := ioutil.TempDir("", "skopeo-sync-test")
|
||||
c.Assert(err, check.IsNil)
|
||||
defer os.RemoveAll(tmpDir)
|
||||
dir1 := path.Join(tmpDir, "dir1")
|
||||
|
||||
yamlConfig := `
|
||||
docker.io:
|
||||
images-by-tag-regex:
|
||||
nginx: ^1\.13\.[12]-alpine-perl$ # regex string test
|
||||
`
|
||||
// the ↑ regex string always matches only 2 images
|
||||
var nTags = 2
|
||||
c.Assert(nTags, check.Not(check.Equals), 0)
|
||||
|
||||
yamlFile := path.Join(tmpDir, "registries.yaml")
|
||||
ioutil.WriteFile(yamlFile, []byte(yamlConfig), 0644)
|
||||
assertSkopeoSucceeds(c, "", "sync", "--scoped", "--src", "yaml", "--dest", "dir", yamlFile, dir1)
|
||||
|
||||
nManifests := 0
|
||||
err = filepath.Walk(dir1, func(path string, info os.FileInfo, err error) error {
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
if !info.IsDir() && info.Name() == "manifest.json" {
|
||||
nManifests++
|
||||
return filepath.SkipDir
|
||||
}
|
||||
return nil
|
||||
})
|
||||
c.Assert(err, check.IsNil)
|
||||
c.Assert(nManifests, check.Equals, nTags)
|
||||
}
|
||||
|
||||
func (s *SyncSuite) TestYaml2Dir(c *check.C) {
|
||||
tmpDir, err := ioutil.TempDir("", "skopeo-sync-test")
|
||||
c.Assert(err, check.IsNil)
|
||||
@@ -270,7 +304,6 @@ docker.io:
|
||||
alpine:
|
||||
- edge
|
||||
- 3.8
|
||||
|
||||
opensuse/leap:
|
||||
- latest
|
||||
|
||||
|
||||
@@ -314,8 +314,7 @@ start_registry() {
|
||||
fi
|
||||
|
||||
if ! egrep -q "^$testuser:" $AUTHDIR/htpasswd; then
|
||||
log_and_run $PODMAN run --rm --entrypoint htpasswd $REGISTRY_FQIN \
|
||||
-Bbn $testuser $testpassword >> $AUTHDIR/htpasswd
|
||||
htpasswd -Bbn $testuser $testpassword >> $AUTHDIR/htpasswd
|
||||
fi
|
||||
|
||||
reg_args+=(
|
||||
|
||||
151
vendor/github.com/Microsoft/go-winio/vhd/vhd.go
generated
vendored
Normal file
151
vendor/github.com/Microsoft/go-winio/vhd/vhd.go
generated
vendored
Normal file
@@ -0,0 +1,151 @@
|
||||
// +build windows
|
||||
|
||||
package vhd
|
||||
|
||||
import "syscall"
|
||||
|
||||
//go:generate go run mksyscall_windows.go -output zvhd.go vhd.go
|
||||
|
||||
//sys createVirtualDisk(virtualStorageType *virtualStorageType, path string, virtualDiskAccessMask uint32, securityDescriptor *uintptr, flags uint32, providerSpecificFlags uint32, parameters *createVirtualDiskParameters, o *syscall.Overlapped, handle *syscall.Handle) (err error) [failretval != 0] = VirtDisk.CreateVirtualDisk
|
||||
//sys openVirtualDisk(virtualStorageType *virtualStorageType, path string, virtualDiskAccessMask uint32, flags uint32, parameters *openVirtualDiskParameters, handle *syscall.Handle) (err error) [failretval != 0] = VirtDisk.OpenVirtualDisk
|
||||
//sys detachVirtualDisk(handle syscall.Handle, flags uint32, providerSpecificFlags uint32) (err error) [failretval != 0] = VirtDisk.DetachVirtualDisk
|
||||
|
||||
type virtualStorageType struct {
|
||||
DeviceID uint32
|
||||
VendorID [16]byte
|
||||
}
|
||||
|
||||
type (
|
||||
createVirtualDiskFlag uint32
|
||||
VirtualDiskAccessMask uint32
|
||||
VirtualDiskFlag uint32
|
||||
)
|
||||
|
||||
const (
|
||||
// Flags for creating a VHD (not exported)
|
||||
createVirtualDiskFlagNone createVirtualDiskFlag = 0
|
||||
createVirtualDiskFlagFullPhysicalAllocation createVirtualDiskFlag = 1
|
||||
createVirtualDiskFlagPreventWritesToSourceDisk createVirtualDiskFlag = 2
|
||||
createVirtualDiskFlagDoNotCopyMetadataFromParent createVirtualDiskFlag = 4
|
||||
|
||||
// Access Mask for opening a VHD
|
||||
VirtualDiskAccessNone VirtualDiskAccessMask = 0
|
||||
VirtualDiskAccessAttachRO VirtualDiskAccessMask = 65536
|
||||
VirtualDiskAccessAttachRW VirtualDiskAccessMask = 131072
|
||||
VirtualDiskAccessDetach VirtualDiskAccessMask = 262144
|
||||
VirtualDiskAccessGetInfo VirtualDiskAccessMask = 524288
|
||||
VirtualDiskAccessCreate VirtualDiskAccessMask = 1048576
|
||||
VirtualDiskAccessMetaOps VirtualDiskAccessMask = 2097152
|
||||
VirtualDiskAccessRead VirtualDiskAccessMask = 851968
|
||||
VirtualDiskAccessAll VirtualDiskAccessMask = 4128768
|
||||
VirtualDiskAccessWritable VirtualDiskAccessMask = 3276800
|
||||
|
||||
// Flags for opening a VHD
|
||||
OpenVirtualDiskFlagNone VirtualDiskFlag = 0
|
||||
OpenVirtualDiskFlagNoParents VirtualDiskFlag = 0x1
|
||||
OpenVirtualDiskFlagBlankFile VirtualDiskFlag = 0x2
|
||||
OpenVirtualDiskFlagBootDrive VirtualDiskFlag = 0x4
|
||||
OpenVirtualDiskFlagCachedIO VirtualDiskFlag = 0x8
|
||||
OpenVirtualDiskFlagCustomDiffChain VirtualDiskFlag = 0x10
|
||||
OpenVirtualDiskFlagParentCachedIO VirtualDiskFlag = 0x20
|
||||
OpenVirtualDiskFlagVhdSetFileOnly VirtualDiskFlag = 0x40
|
||||
OpenVirtualDiskFlagIgnoreRelativeParentLocator VirtualDiskFlag = 0x80
|
||||
OpenVirtualDiskFlagNoWriteHardening VirtualDiskFlag = 0x100
|
||||
)
|
||||
|
||||
type createVersion2 struct {
|
||||
UniqueID [16]byte // GUID
|
||||
MaximumSize uint64
|
||||
BlockSizeInBytes uint32
|
||||
SectorSizeInBytes uint32
|
||||
ParentPath *uint16 // string
|
||||
SourcePath *uint16 // string
|
||||
OpenFlags uint32
|
||||
ParentVirtualStorageType virtualStorageType
|
||||
SourceVirtualStorageType virtualStorageType
|
||||
ResiliencyGUID [16]byte // GUID
|
||||
}
|
||||
|
||||
type createVirtualDiskParameters struct {
|
||||
Version uint32 // Must always be set to 2
|
||||
Version2 createVersion2
|
||||
}
|
||||
|
||||
type openVersion2 struct {
|
||||
GetInfoOnly int32 // bool but 4-byte aligned
|
||||
ReadOnly int32 // bool but 4-byte aligned
|
||||
ResiliencyGUID [16]byte // GUID
|
||||
}
|
||||
|
||||
type openVirtualDiskParameters struct {
|
||||
Version uint32 // Must always be set to 2
|
||||
Version2 openVersion2
|
||||
}
|
||||
|
||||
// CreateVhdx will create a simple vhdx file at the given path using default values.
|
||||
func CreateVhdx(path string, maxSizeInGb, blockSizeInMb uint32) error {
|
||||
var (
|
||||
defaultType virtualStorageType
|
||||
handle syscall.Handle
|
||||
)
|
||||
|
||||
parameters := createVirtualDiskParameters{
|
||||
Version: 2,
|
||||
Version2: createVersion2{
|
||||
MaximumSize: uint64(maxSizeInGb) * 1024 * 1024 * 1024,
|
||||
BlockSizeInBytes: blockSizeInMb * 1024 * 1024,
|
||||
},
|
||||
}
|
||||
|
||||
if err := createVirtualDisk(
|
||||
&defaultType,
|
||||
path,
|
||||
uint32(VirtualDiskAccessNone),
|
||||
nil,
|
||||
uint32(createVirtualDiskFlagNone),
|
||||
0,
|
||||
¶meters,
|
||||
nil,
|
||||
&handle); err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
if err := syscall.CloseHandle(handle); err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
// DetachVhd detaches a mounted container layer vhd found at `path`.
|
||||
func DetachVhd(path string) error {
|
||||
handle, err := OpenVirtualDisk(
|
||||
path,
|
||||
VirtualDiskAccessNone,
|
||||
OpenVirtualDiskFlagCachedIO|OpenVirtualDiskFlagIgnoreRelativeParentLocator)
|
||||
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
defer syscall.CloseHandle(handle)
|
||||
return detachVirtualDisk(handle, 0, 0)
|
||||
}
|
||||
|
||||
// OpenVirtualDisk obtains a handle to a VHD opened with supplied access mask and flags.
|
||||
func OpenVirtualDisk(path string, accessMask VirtualDiskAccessMask, flag VirtualDiskFlag) (syscall.Handle, error) {
|
||||
var (
|
||||
defaultType virtualStorageType
|
||||
handle syscall.Handle
|
||||
)
|
||||
parameters := openVirtualDiskParameters{Version: 2}
|
||||
if err := openVirtualDisk(
|
||||
&defaultType,
|
||||
path,
|
||||
uint32(accessMask),
|
||||
uint32(flag),
|
||||
¶meters,
|
||||
&handle); err != nil {
|
||||
return 0, err
|
||||
}
|
||||
return handle, nil
|
||||
}
|
||||
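The new `vendor/github.com/Microsoft/go-winio/vhd` package above exposes `CreateVhdx`, `OpenVirtualDisk` and `DetachVhd` as thin wrappers around the VirtDisk.dll syscalls. A minimal, Windows-only sketch of how a caller might combine them follows; the file path and sizes are illustrative assumptions, not code from this repository:

```go
// +build windows

// Minimal sketch (not part of this repository) exercising the new go-winio vhd API.
package main

import (
	"log"
	"syscall"

	"github.com/Microsoft/go-winio/vhd"
)

func main() {
	const path = `C:\temp\example.vhdx` // illustrative path

	// CreateVhdx takes the maximum size in GB and the block size in MB.
	if err := vhd.CreateVhdx(path, 1, 1); err != nil {
		log.Fatalf("creating vhdx: %v", err)
	}

	// Open it the same way CreateNTFSVHD does further down in this changeset.
	handle, err := vhd.OpenVirtualDisk(path, vhd.VirtualDiskAccessNone, vhd.OpenVirtualDiskFlagNone)
	if err != nil {
		log.Fatalf("opening vhdx: %v", err)
	}
	defer syscall.CloseHandle(handle)
}
```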
99
vendor/github.com/Microsoft/go-winio/vhd/zvhd.go
generated
vendored
Normal file
99
vendor/github.com/Microsoft/go-winio/vhd/zvhd.go
generated
vendored
Normal file
@@ -0,0 +1,99 @@
|
||||
// MACHINE GENERATED BY 'go generate' COMMAND; DO NOT EDIT
|
||||
|
||||
package vhd
|
||||
|
||||
import (
|
||||
"syscall"
|
||||
"unsafe"
|
||||
|
||||
"golang.org/x/sys/windows"
|
||||
)
|
||||
|
||||
var _ unsafe.Pointer
|
||||
|
||||
// Do the interface allocations only once for common
|
||||
// Errno values.
|
||||
const (
|
||||
errnoERROR_IO_PENDING = 997
|
||||
)
|
||||
|
||||
var (
|
||||
errERROR_IO_PENDING error = syscall.Errno(errnoERROR_IO_PENDING)
|
||||
)
|
||||
|
||||
// errnoErr returns common boxed Errno values, to prevent
|
||||
// allocations at runtime.
|
||||
func errnoErr(e syscall.Errno) error {
|
||||
switch e {
|
||||
case 0:
|
||||
return nil
|
||||
case errnoERROR_IO_PENDING:
|
||||
return errERROR_IO_PENDING
|
||||
}
|
||||
// TODO: add more here, after collecting data on the common
|
||||
// error values see on Windows. (perhaps when running
|
||||
// all.bat?)
|
||||
return e
|
||||
}
|
||||
|
||||
var (
|
||||
modVirtDisk = windows.NewLazySystemDLL("VirtDisk.dll")
|
||||
|
||||
procCreateVirtualDisk = modVirtDisk.NewProc("CreateVirtualDisk")
|
||||
procOpenVirtualDisk = modVirtDisk.NewProc("OpenVirtualDisk")
|
||||
procDetachVirtualDisk = modVirtDisk.NewProc("DetachVirtualDisk")
|
||||
)
|
||||
|
||||
func createVirtualDisk(virtualStorageType *virtualStorageType, path string, virtualDiskAccessMask uint32, securityDescriptor *uintptr, flags uint32, providerSpecificFlags uint32, parameters *createVirtualDiskParameters, o *syscall.Overlapped, handle *syscall.Handle) (err error) {
|
||||
var _p0 *uint16
|
||||
_p0, err = syscall.UTF16PtrFromString(path)
|
||||
if err != nil {
|
||||
return
|
||||
}
|
||||
return _createVirtualDisk(virtualStorageType, _p0, virtualDiskAccessMask, securityDescriptor, flags, providerSpecificFlags, parameters, o, handle)
|
||||
}
|
||||
|
||||
func _createVirtualDisk(virtualStorageType *virtualStorageType, path *uint16, virtualDiskAccessMask uint32, securityDescriptor *uintptr, flags uint32, providerSpecificFlags uint32, parameters *createVirtualDiskParameters, o *syscall.Overlapped, handle *syscall.Handle) (err error) {
|
||||
r1, _, e1 := syscall.Syscall9(procCreateVirtualDisk.Addr(), 9, uintptr(unsafe.Pointer(virtualStorageType)), uintptr(unsafe.Pointer(path)), uintptr(virtualDiskAccessMask), uintptr(unsafe.Pointer(securityDescriptor)), uintptr(flags), uintptr(providerSpecificFlags), uintptr(unsafe.Pointer(parameters)), uintptr(unsafe.Pointer(o)), uintptr(unsafe.Pointer(handle)))
|
||||
if r1 != 0 {
|
||||
if e1 != 0 {
|
||||
err = errnoErr(e1)
|
||||
} else {
|
||||
err = syscall.EINVAL
|
||||
}
|
||||
}
|
||||
return
|
||||
}
|
||||
|
||||
func openVirtualDisk(virtualStorageType *virtualStorageType, path string, virtualDiskAccessMask uint32, flags uint32, parameters *openVirtualDiskParameters, handle *syscall.Handle) (err error) {
|
||||
var _p0 *uint16
|
||||
_p0, err = syscall.UTF16PtrFromString(path)
|
||||
if err != nil {
|
||||
return
|
||||
}
|
||||
return _openVirtualDisk(virtualStorageType, _p0, virtualDiskAccessMask, flags, parameters, handle)
|
||||
}
|
||||
|
||||
func _openVirtualDisk(virtualStorageType *virtualStorageType, path *uint16, virtualDiskAccessMask uint32, flags uint32, parameters *openVirtualDiskParameters, handle *syscall.Handle) (err error) {
|
||||
r1, _, e1 := syscall.Syscall6(procOpenVirtualDisk.Addr(), 6, uintptr(unsafe.Pointer(virtualStorageType)), uintptr(unsafe.Pointer(path)), uintptr(virtualDiskAccessMask), uintptr(flags), uintptr(unsafe.Pointer(parameters)), uintptr(unsafe.Pointer(handle)))
|
||||
if r1 != 0 {
|
||||
if e1 != 0 {
|
||||
err = errnoErr(e1)
|
||||
} else {
|
||||
err = syscall.EINVAL
|
||||
}
|
||||
}
|
||||
return
|
||||
}
|
||||
|
||||
func detachVirtualDisk(handle syscall.Handle, flags uint32, providerSpecificFlags uint32) (err error) {
|
||||
r1, _, e1 := syscall.Syscall(procDetachVirtualDisk.Addr(), 3, uintptr(handle), uintptr(flags), uintptr(providerSpecificFlags))
|
||||
if r1 != 0 {
|
||||
if e1 != 0 {
|
||||
err = errnoErr(e1)
|
||||
} else {
|
||||
err = syscall.EINVAL
|
||||
}
|
||||
}
|
||||
return
|
||||
}
|
||||
3
vendor/github.com/Microsoft/hcsshim/CODEOWNERS
generated
vendored
Normal file
3
vendor/github.com/Microsoft/hcsshim/CODEOWNERS
generated
vendored
Normal file
@@ -0,0 +1,3 @@
|
||||
* @microsoft/containerplat
|
||||
|
||||
/hcn/* @nagiesek
|
||||
7
vendor/github.com/Microsoft/hcsshim/README.md
generated
vendored
7
vendor/github.com/Microsoft/hcsshim/README.md
generated
vendored
@@ -2,7 +2,7 @@
|
||||
|
||||
[](https://ci.appveyor.com/project/WindowsVirtualization/hcsshim/branch/master)
|
||||
|
||||
This package contains the Golang interface for using the Windows [Host Compute Service](https://blogs.technet.microsoft.com/virtualization/2017/01/27/introducing-the-host-compute-service-hcs/) (HCS) to launch and manage [Windows Containers](https://docs.microsoft.com/en-us/virtualization/windowscontainers/about/). It also contains other helpers and functions for managing Windows Containers such as the Golang interface for the Host Network Service (HNS).
|
||||
This package contains the Golang interface for using the Windows [Host Compute Service](https://techcommunity.microsoft.com/t5/containers/introducing-the-host-compute-service-hcs/ba-p/382332) (HCS) to launch and manage [Windows Containers](https://docs.microsoft.com/en-us/virtualization/windowscontainers/about/). It also contains other helpers and functions for managing Windows Containers such as the Golang interface for the Host Network Service (HNS).
|
||||
|
||||
It is primarily used in the [Moby Project](https://github.com/moby/moby), but it can be freely used by other projects as well.
|
||||
|
||||
@@ -16,6 +16,11 @@ When you submit a pull request, a CLA-bot will automatically determine whether y
|
||||
a CLA and decorate the PR appropriately (e.g., label, comment). Simply follow the instructions
|
||||
provided by the bot. You will only need to do this once across all repos using our CLA.
|
||||
|
||||
We also ask that contributors [sign their commits](https://git-scm.com/docs/git-commit) using `git commit -s` or `git commit --signoff` to certify they either authored the work themselves or otherwise have permission to use it in this project.
|
||||
|
||||
|
||||
## Code of Conduct
|
||||
|
||||
This project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/).
|
||||
For more information see the [Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/) or
|
||||
contact [opencode@microsoft.com](mailto:opencode@microsoft.com) with any additional questions or comments.
|
||||
|
||||
20
vendor/github.com/Microsoft/hcsshim/appveyor.yml
generated
vendored
20
vendor/github.com/Microsoft/hcsshim/appveyor.yml
generated
vendored
@@ -6,7 +6,7 @@ clone_folder: c:\gopath\src\github.com\Microsoft\hcsshim
|
||||
|
||||
environment:
|
||||
GOPATH: c:\gopath
|
||||
PATH: C:\mingw-w64\x86_64-7.2.0-posix-seh-rt_v5-rev1\mingw64\bin;%GOPATH%\bin;C:\gometalinter-2.0.12-windows-amd64;%PATH%
|
||||
PATH: "%GOPATH%\\bin;C:\\gometalinter-2.0.12-windows-amd64;%PATH%"
|
||||
|
||||
stack: go 1.13.4
|
||||
|
||||
@@ -22,10 +22,12 @@ build_script:
|
||||
- go build ./internal/tools/uvmboot
|
||||
- go build ./internal/tools/zapdir
|
||||
- go test -v ./... -tags admin
|
||||
- go test -c ./test/containerd-shim-runhcs-v1/ -tags functional
|
||||
- go test -c ./test/cri-containerd/ -tags functional
|
||||
- go test -c ./test/functional/ -tags functional
|
||||
- go test -c ./test/runhcs/ -tags functional
|
||||
- cd test
|
||||
- go test -v ./internal -tags admin
|
||||
- go test -c ./containerd-shim-runhcs-v1/ -tags functional
|
||||
- go test -c ./cri-containerd/ -tags functional
|
||||
- go test -c ./functional/ -tags functional
|
||||
- go test -c ./runhcs/ -tags functional
|
||||
|
||||
artifacts:
|
||||
- path: 'containerd-shim-runhcs-v1.exe'
|
||||
@@ -35,7 +37,7 @@ artifacts:
|
||||
- path: 'grantvmgroupaccess.exe'
|
||||
- path: 'uvmboot.exe'
|
||||
- path: 'zapdir.exe'
|
||||
- path: 'containerd-shim-runhcs-v1.test.exe'
|
||||
- path: 'cri-containerd.test.exe'
|
||||
- path: 'functional.test.exe'
|
||||
- path: 'runhcs.test.exe'
|
||||
- path: './test/containerd-shim-runhcs-v1.test.exe'
|
||||
- path: './test/cri-containerd.test.exe'
|
||||
- path: './test/functional.test.exe'
|
||||
- path: './test/runhcs.test.exe'
|
||||
|
||||
28
vendor/github.com/Microsoft/hcsshim/go.mod
generated
vendored
28
vendor/github.com/Microsoft/hcsshim/go.mod
generated
vendored
@@ -4,34 +4,32 @@ go 1.13
|
||||
|
||||
require (
|
||||
github.com/Microsoft/go-winio v0.4.15-0.20190919025122-fc70bd9a86b5
|
||||
github.com/blang/semver v3.1.0+incompatible // indirect
|
||||
github.com/containerd/cgroups v0.0.0-20190919134610-bf292b21730f
|
||||
github.com/containerd/console v0.0.0-20180822173158-c12b1e7919c1
|
||||
github.com/containerd/containerd v1.3.0-beta.2.0.20190828155532-0293cbd26c69
|
||||
github.com/containerd/containerd v1.3.2
|
||||
github.com/containerd/continuity v0.0.0-20190426062206-aaeac12a7ffc // indirect
|
||||
github.com/containerd/fifo v0.0.0-20190226154929-a9fb20d87448 // indirect
|
||||
github.com/containerd/go-runc v0.0.0-20180907222934-5a6d9f37cfa3
|
||||
github.com/containerd/ttrpc v0.0.0-20190828154514-0e0f228740de
|
||||
github.com/containerd/typeurl v0.0.0-20180627222232-a93fcdb778cd
|
||||
github.com/gogo/protobuf v1.2.1
|
||||
github.com/hashicorp/errwrap v0.0.0-20141028054710-7554cd9344ce // indirect
|
||||
github.com/hashicorp/go-multierror v0.0.0-20161216184304-ed905158d874 // indirect
|
||||
github.com/gogo/protobuf v1.3.1
|
||||
github.com/golang/protobuf v1.3.2 // indirect
|
||||
github.com/kr/pretty v0.1.0 // indirect
|
||||
github.com/opencontainers/go-digest v0.0.0-20180430190053-c9281466c8b2 // indirect
|
||||
github.com/opencontainers/runc v0.0.0-20190115041553-12f6a991201f // indirect
|
||||
github.com/opencontainers/runtime-spec v0.1.2-0.20190507144316-5b71a03e2700
|
||||
github.com/opencontainers/runtime-tools v0.0.0-20181011054405-1d69bd0f9c39
|
||||
github.com/pkg/errors v0.8.1
|
||||
github.com/prometheus/procfs v0.0.5 // indirect
|
||||
github.com/sirupsen/logrus v1.4.1
|
||||
github.com/syndtr/gocapability v0.0.0-20170704070218-db04d3cc01c8 // indirect
|
||||
github.com/prometheus/procfs v0.0.0-20180125133057-cb4147076ac7 // indirect
|
||||
github.com/sirupsen/logrus v1.4.2
|
||||
github.com/stretchr/testify v1.4.0 // indirect
|
||||
github.com/urfave/cli v0.0.0-20171014202726-7bc6a0acffa5
|
||||
github.com/xeipuuv/gojsonpointer v0.0.0-20180127040702-4e3ac2762d5f // indirect
|
||||
github.com/xeipuuv/gojsonreference v0.0.0-20180127040603-bd5ef7bd5415 // indirect
|
||||
github.com/xeipuuv/gojsonschema v0.0.0-20180618132009-1d523034197f // indirect
|
||||
go.opencensus.io v0.22.0
|
||||
golang.org/x/sync v0.0.0-20190227155943-e225da77a7e6
|
||||
golang.org/x/net v0.0.0-20191004110552-13f9640d40b9 // indirect
|
||||
golang.org/x/sync v0.0.0-20190423024810-112230192c58
|
||||
golang.org/x/sys v0.0.0-20190916202348-b4ddaad3f8a3
|
||||
google.golang.org/grpc v1.20.1
|
||||
google.golang.org/genproto v0.0.0-20190502173448-54afdca5d873 // indirect
|
||||
google.golang.org/grpc v1.23.1
|
||||
gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127 // indirect
|
||||
gopkg.in/yaml.v2 v2.2.8 // indirect
|
||||
gotest.tools v2.2.0+incompatible // indirect
|
||||
k8s.io/kubernetes v1.13.0
|
||||
)
|
||||
|
||||
64
vendor/github.com/Microsoft/hcsshim/go.sum
generated
vendored
64
vendor/github.com/Microsoft/hcsshim/go.sum
generated
vendored
@@ -1,16 +1,15 @@
|
||||
cloud.google.com/go v0.26.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw=
|
||||
github.com/BurntSushi/toml v0.3.1 h1:WXkYYl6Yr3qBf1K79EBnL4mak0OimBfB0XUf9Vl28OQ=
|
||||
github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU=
|
||||
github.com/Microsoft/go-winio v0.4.15-0.20190919025122-fc70bd9a86b5 h1:ygIc8M6trr62pF5DucadTWGdEB4mEyvzi0e2nbcmcyA=
|
||||
github.com/Microsoft/go-winio v0.4.15-0.20190919025122-fc70bd9a86b5/go.mod h1:tTuCMEN+UleMWgg9dVx4Hu52b1bJo+59jBh3ajtinzw=
|
||||
github.com/blang/semver v3.1.0+incompatible h1:7hqmJYuaEK3qwVjWubYiht3j93YI0WQBuysxHIfUriU=
|
||||
github.com/blang/semver v3.1.0+incompatible/go.mod h1:kRBLl5iJ+tD4TcOOxsy/0fnwebNt5EWlYSAyrTnjyyk=
|
||||
github.com/client9/misspell v0.3.4/go.mod h1:qj6jICC3Q7zFZvVWo7KLAzC3yx5G7kyvSDkc90ppPyw=
|
||||
github.com/containerd/cgroups v0.0.0-20190919134610-bf292b21730f h1:tSNMc+rJDfmYntojat8lljbt1mgKNpTxUZJsSzJ9Y1s=
|
||||
github.com/containerd/cgroups v0.0.0-20190919134610-bf292b21730f/go.mod h1:OApqhQ4XNSNC13gXIwDjhOQxjWa/NxkwZXJ1EvqT0ko=
|
||||
github.com/containerd/console v0.0.0-20180822173158-c12b1e7919c1 h1:uict5mhHFTzKLUCufdSLym7z/J0CbBJT59lYbP9wtbg=
|
||||
github.com/containerd/console v0.0.0-20180822173158-c12b1e7919c1/go.mod h1:Tj/on1eG8kiEhd0+fhSDzsPAFESxzBBvdyEgyryXffw=
|
||||
github.com/containerd/containerd v1.3.0-beta.2.0.20190828155532-0293cbd26c69 h1:rG1clvJbgsUcmb50J82YUJhUMopWNtZvyMZjb+4fqGw=
|
||||
github.com/containerd/containerd v1.3.0-beta.2.0.20190828155532-0293cbd26c69/go.mod h1:bC6axHOhabU15QhwfG7w5PipXdVtMXFTttgp+kVtyUA=
|
||||
github.com/containerd/containerd v1.3.2 h1:ForxmXkA6tPIvffbrDAcPUIB32QgXkt2XFj+F0UxetA=
|
||||
github.com/containerd/containerd v1.3.2/go.mod h1:bC6axHOhabU15QhwfG7w5PipXdVtMXFTttgp+kVtyUA=
|
||||
github.com/containerd/continuity v0.0.0-20190426062206-aaeac12a7ffc h1:TP+534wVlf61smEIq1nwLLAjQVEK2EADoW3CX9AuT+8=
|
||||
github.com/containerd/continuity v0.0.0-20190426062206-aaeac12a7ffc/go.mod h1:GL3xCUCBDV3CZiTSEKksMWbLE66hEyuu9qyDOOqM47Y=
|
||||
github.com/containerd/fifo v0.0.0-20190226154929-a9fb20d87448 h1:PUD50EuOMkXVcpBIA/R95d56duJR9VxhwncsFbNnxW4=
|
||||
@@ -23,6 +22,7 @@ github.com/containerd/typeurl v0.0.0-20180627222232-a93fcdb778cd h1:JNn81o/xG+8N
|
||||
github.com/containerd/typeurl v0.0.0-20180627222232-a93fcdb778cd/go.mod h1:Cm3kwCdlkCfMSHURc+r6fwoGH6/F1hH3S4sg0rLFWPc=
|
||||
github.com/coreos/go-systemd v0.0.0-20190321100706-95778dfbb74e h1:Wf6HqHfScWJN9/ZjdUKyjop4mf3Qdd+1TvvltAvM3m8=
|
||||
github.com/coreos/go-systemd v0.0.0-20190321100706-95778dfbb74e/go.mod h1:F5haX7vjVVG0kc13fIWeqUViNPyEJxv/OmvnBo0Yme4=
|
||||
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
|
||||
github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
|
||||
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
|
||||
github.com/docker/go-units v0.4.0 h1:3uh0PgVws3nIA0Q+MwDC8yjEPf9zjRfZZWXZYDct3Tw=
|
||||
@@ -31,6 +31,8 @@ github.com/godbus/dbus v0.0.0-20190422162347-ade71ed3457e h1:BWhy2j3IXJhjCbC68Fp
|
||||
github.com/godbus/dbus v0.0.0-20190422162347-ade71ed3457e/go.mod h1:bBOAhwG1umN6/6ZUMtDFBMQR8jRg9O75tm9K00oMsK4=
|
||||
github.com/gogo/protobuf v1.2.1 h1:/s5zKNz0uPFCZ5hddgPdo2TK2TVrUNMn0OOX8/aZMTE=
|
||||
github.com/gogo/protobuf v1.2.1/go.mod h1:hp+jE20tsWTFYpLwKvXlhS1hjn+gTNwPg2I6zVXpSg4=
|
||||
github.com/gogo/protobuf v1.3.1 h1:DqDEcV5aeaTmdFBePNpYsp3FlcVH/2ISVVM9Qf8PSls=
|
||||
github.com/gogo/protobuf v1.3.1/go.mod h1:SlYgWuQ5SjCEi6WLHjHCa1yvBfUnHcTbrrZtXPKa29o=
|
||||
github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b h1:VKtxabqXZkF25pY9ekfRL6a582T4P37/31XEstQ5p58=
|
||||
github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b/go.mod h1:SBH7ygxi8pfUlaOkMMuAQtPIUF8ecWP5IEl/CR7VP2Q=
|
||||
github.com/golang/mock v1.1.1/go.mod h1:oTYuIxOrZwtPieC+H1uAHpcLFnEyAGVDL/k47Jfbm0A=
|
||||
@@ -38,47 +40,47 @@ github.com/golang/protobuf v1.2.0 h1:P3YflyNX/ehuJFLhxviNdFxQPkGK5cDcApsge1SqnvM
|
||||
github.com/golang/protobuf v1.2.0/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
|
||||
github.com/golang/protobuf v1.3.1 h1:YF8+flBXS5eO826T4nzqPrxfhQThhXl0YzfuUPu4SBg=
|
||||
github.com/golang/protobuf v1.3.1/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
|
||||
github.com/golang/protobuf v1.3.2 h1:6nsPYzhq5kReh6QImI3k5qWzO4PEbvbIW2cwSfR/6xs=
|
||||
github.com/golang/protobuf v1.3.2/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
|
||||
github.com/google/go-cmp v0.2.0/go.mod h1:oXzfMopK8JAjlY9xF4vHSVASa0yLyX7SntLO5aqRK0M=
|
||||
github.com/google/go-cmp v0.3.0 h1:crn/baboCvb5fXaQ0IJ1SGTsTVrWpDsCWC8EGETZijY=
|
||||
github.com/google/go-cmp v0.3.0/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU=
|
||||
github.com/hashicorp/errwrap v0.0.0-20141028054710-7554cd9344ce h1:prjrVgOk2Yg6w+PflHoszQNLTUh4kaByUcEWM/9uin4=
|
||||
github.com/hashicorp/errwrap v0.0.0-20141028054710-7554cd9344ce/go.mod h1:YH+1FKiLXxHSkmPseP+kNlulaMuP3n2brvKWEqk/Jc4=
|
||||
github.com/hashicorp/go-multierror v0.0.0-20161216184304-ed905158d874 h1:cAv7ZbSmyb1wjn6T4TIiyFCkpcfgpbcNNC3bM2srLaI=
|
||||
github.com/hashicorp/go-multierror v0.0.0-20161216184304-ed905158d874/go.mod h1:JMRHfdO9jKNzS/+BTlxCjKNQHg/jZAft8U7LloJvN7I=
|
||||
github.com/hashicorp/golang-lru v0.5.1 h1:0hERBMJE1eitiLkihrMvRVBYAkpHzc/J3QdDN+dAcgU=
|
||||
github.com/hashicorp/golang-lru v0.5.1/go.mod h1:/m3WP610KZHVQ1SGc6re/UDhFvYD7pJ4Ao+sR/qLZy8=
|
||||
github.com/kisielk/errcheck v1.1.0/go.mod h1:EZBBE59ingxPouuu3KfxchcWSUPOHkagtvWXihfKN4Q=
|
||||
github.com/kisielk/errcheck v1.2.0/go.mod h1:/BMXB+zMLi60iA8Vv6Ksmxu/1UDYcXs4uQLJ+jE2L00=
|
||||
github.com/kisielk/gotool v1.0.0/go.mod h1:XhKaO+MFFWcvkIS/tQcRk01m1F5IRFswLeQ+oQHNcck=
|
||||
github.com/konsorten/go-windows-terminal-sequences v1.0.1 h1:mweAR1A6xJ3oS2pRaGiHgQ4OO8tzTaLawm8vnODuwDk=
|
||||
github.com/konsorten/go-windows-terminal-sequences v1.0.1/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ=
|
||||
github.com/kr/pretty v0.1.0 h1:L/CwN0zerZDmRFUapSPitk6f+Q3+0za1rQkzVuMiMFI=
|
||||
github.com/kr/pretty v0.1.0/go.mod h1:dAy3ld7l9f0ibDNOQOHHMYYIIbhfbHSm3C4ZsoJORNo=
|
||||
github.com/kr/pty v1.1.1/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ=
|
||||
github.com/kr/text v0.1.0 h1:45sCR5RtlFHMR4UwH9sdQ5TC8v0qDQCHnXt+kaKSTVE=
|
||||
github.com/kr/text v0.1.0/go.mod h1:4Jbv+DJW3UT/LiOwJeYQe1efqtUx/iVham/4vfdArNI=
|
||||
github.com/opencontainers/go-digest v0.0.0-20180430190053-c9281466c8b2 h1:QhPf3A2AZW3tTGvHPg0TA+CR3oHbVLlXUhlghqISp1I=
|
||||
github.com/opencontainers/go-digest v0.0.0-20180430190053-c9281466c8b2/go.mod h1:cMLVZDEM3+U2I4VmLI6N8jQYUd2OVphdqWwCJHrFt2s=
|
||||
github.com/opencontainers/runc v0.0.0-20190115041553-12f6a991201f h1:a969LJ4IQFwRHYqonHtUDMSh9i54WcKggeEkQ3fZMl4=
|
||||
github.com/opencontainers/runc v0.0.0-20190115041553-12f6a991201f/go.mod h1:qT5XzbpPznkRYVz/mWwUaVBUv2rmF59PVA73FjuZG0U=
|
||||
github.com/opencontainers/runtime-spec v0.1.2-0.20190507144316-5b71a03e2700 h1:eNUVfm/RFLIi1G7flU5/ZRTHvd4kcVuzfRnL6OFlzCI=
|
||||
github.com/opencontainers/runtime-spec v0.1.2-0.20190507144316-5b71a03e2700/go.mod h1:jwyrGlmzljRJv/Fgzds9SsS/C5hL+LL3ko9hs6T5lQ0=
|
||||
github.com/opencontainers/runtime-tools v0.0.0-20181011054405-1d69bd0f9c39 h1:H7DMc6FAjgwZZi8BRqjrAAHWoqEr5e5L6pS4V0ezet4=
|
||||
github.com/opencontainers/runtime-tools v0.0.0-20181011054405-1d69bd0f9c39/go.mod h1:r3f7wjNzSs2extwzU3Y+6pKfobzPh+kKFJ3ofN+3nfs=
|
||||
github.com/pkg/errors v0.8.1 h1:iURUrRGxPUNPdy5/HRSm+Yj6okJ6UtLINN0Q9M4+h3I=
|
||||
github.com/pkg/errors v0.8.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
|
||||
github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
|
||||
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
|
||||
github.com/prometheus/procfs v0.0.5 h1:3+auTFlqw+ZaQYJARz6ArODtkaIwtvBTx3N2NehQlL8=
|
||||
github.com/prometheus/procfs v0.0.5/go.mod h1:4A/X28fw3Fc593LaREMrKMqOKvUAntwMDaekg4FpcdQ=
|
||||
github.com/prometheus/procfs v0.0.0-20180125133057-cb4147076ac7 h1:hhvfGDVThBnd4kYisSFmYuHYeUhglxcwag7FhVPH9zM=
|
||||
github.com/prometheus/procfs v0.0.0-20180125133057-cb4147076ac7/go.mod h1:c3At6R/oaqEKCNdg8wHV1ftS6bRYblBhIjjI8uT2IGk=
|
||||
github.com/sirupsen/logrus v1.4.1 h1:GL2rEmy6nsikmW0r8opw9JIRScdMF5hA8cOYLH7In1k=
|
||||
github.com/sirupsen/logrus v1.4.1/go.mod h1:ni0Sbl8bgC9z8RoU9G6nDWqqs/fq4eDPysMBDgk/93Q=
|
||||
github.com/sirupsen/logrus v1.4.2 h1:SPIRibHv4MatM3XXNO2BJeFLZwZ2LvZgfQ5+UNI2im4=
|
||||
github.com/sirupsen/logrus v1.4.2/go.mod h1:tLMulIdttU9McNUspp0xgXVQah82FyeX6MwdIuYE2rE=
|
||||
github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
|
||||
github.com/stretchr/objx v0.1.1/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
|
||||
github.com/stretchr/testify v1.2.2 h1:bSDNvY7ZPG5RlJ8otE/7V6gMiyenm9RtJ7IUVIAoJ1w=
|
||||
github.com/stretchr/testify v1.2.2/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXfy6kDkUVs=
|
||||
github.com/syndtr/gocapability v0.0.0-20170704070218-db04d3cc01c8 h1:zLV6q4e8Jv9EHjNg/iHfzwDkCve6Ua5jCygptrtXHvI=
|
||||
github.com/syndtr/gocapability v0.0.0-20170704070218-db04d3cc01c8/go.mod h1:hkRG7XYTFWNJGYcbNJQlaLq0fg1yr4J4t/NcTQtrfww=
|
||||
github.com/stretchr/testify v1.4.0 h1:2E4SXV/wtOkTonXsotYi4li6zVWxYlZuYNCXe9XRJyk=
|
||||
github.com/stretchr/testify v1.4.0/go.mod h1:j7eGeouHqKxXV5pUuKE4zz7dFj8WfuZ+81PSLYec5m4=
|
||||
github.com/urfave/cli v0.0.0-20171014202726-7bc6a0acffa5 h1:MCfT24H3f//U5+UCrZp1/riVO3B50BovxtDiNn0XKkk=
|
||||
github.com/urfave/cli v0.0.0-20171014202726-7bc6a0acffa5/go.mod h1:70zkFmudgCuE/ngEzBv17Jvp/497gISqfk5gWijbERA=
|
||||
github.com/xeipuuv/gojsonpointer v0.0.0-20180127040702-4e3ac2762d5f h1:J9EGpcZtP0E/raorCMxlFGSTBrsSlaDGf3jU/qvAE2c=
|
||||
github.com/xeipuuv/gojsonpointer v0.0.0-20180127040702-4e3ac2762d5f/go.mod h1:N2zxlSyiKSe5eX1tZViRH5QA0qijqEDrYZiPEAiq3wU=
|
||||
github.com/xeipuuv/gojsonreference v0.0.0-20180127040603-bd5ef7bd5415 h1:EzJWgHovont7NscjpAxXsDA8S8BMYve8Y5+7cuRE7R0=
|
||||
github.com/xeipuuv/gojsonreference v0.0.0-20180127040603-bd5ef7bd5415/go.mod h1:GwrjFmJcFw6At/Gs6z4yjiIwzuJ1/+UwLxMQDVQXShQ=
|
||||
github.com/xeipuuv/gojsonschema v0.0.0-20180618132009-1d523034197f h1:mvXjJIHRZyhNuGassLTcXTwjiWq7NmjdavZsUnmFybQ=
|
||||
github.com/xeipuuv/gojsonschema v0.0.0-20180618132009-1d523034197f/go.mod h1:5yf86TLmAcydyeJq5YvxkGPE2fm/u4myDekKRoLuqhs=
|
||||
go.opencensus.io v0.22.0 h1:C9hSCOW830chIVkdja34wa6Ky+IzWllkUinR+BtRZd4=
|
||||
go.opencensus.io v0.22.0/go.mod h1:+kGneAE2xo2IficOXnaByMWTGM9T73dGwxeWcUqIpI8=
|
||||
golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
|
||||
@@ -93,15 +95,19 @@ golang.org/x/net v0.0.0-20190311183353-d8887717615a h1:oWX7TPOiFAMXLq8o0ikBYfCJV
|
||||
golang.org/x/net v0.0.0-20190311183353-d8887717615a/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
|
||||
golang.org/x/net v0.0.0-20190501004415-9ce7a6920f09 h1:KaQtG+aDELoNmXYas3TVkGNYRuq8JQ1aa7LJt8EXVyo=
|
||||
golang.org/x/net v0.0.0-20190501004415-9ce7a6920f09/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
|
||||
golang.org/x/net v0.0.0-20191004110552-13f9640d40b9 h1:rjwSpXsdiK0dV8/Naq3kAw9ymfAeJIyd0upUIElB+lI=
|
||||
golang.org/x/net v0.0.0-20191004110552-13f9640d40b9/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
|
||||
golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
|
||||
golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
|
||||
golang.org/x/sync v0.0.0-20181108010431-42b317875d0f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
|
||||
golang.org/x/sync v0.0.0-20181221193216-37e7f081c4d4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
|
||||
golang.org/x/sync v0.0.0-20190227155943-e225da77a7e6 h1:bjcUS9ztw9kFmmIxJInhon/0Is3p+EHBKNgquIzo1OI=
|
||||
golang.org/x/sync v0.0.0-20190227155943-e225da77a7e6/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
|
||||
golang.org/x/sync v0.0.0-20190423024810-112230192c58 h1:8gQV6CLnAEikrhgkHFbMAEhagSSnXWGV915qUMm9mrU=
|
||||
golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
|
||||
golang.org/x/sys v0.0.0-20180830151530-49385e6e1522/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
|
||||
golang.org/x/sys v0.0.0-20180905080454-ebe1bf3edb33/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
|
||||
golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
|
||||
golang.org/x/sys v0.0.0-20190422165155-953cdadca894/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
|
||||
golang.org/x/sys v0.0.0-20190502145724-3ef323f4f1fd/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
|
||||
golang.org/x/sys v0.0.0-20190514135907-3a4b5fb9f71f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
|
||||
golang.org/x/sys v0.0.0-20190916202348-b4ddaad3f8a3 h1:7TYNF4UdlohbFwpNH04CoPMp1cHUZgO1Ebq5r2hIjfo=
|
||||
@@ -112,20 +118,32 @@ golang.org/x/text v0.3.2 h1:tW2bmiBqwgJj/UpqtC8EpXEZVYOwU0yG4iWbprSVAcs=
|
||||
golang.org/x/text v0.3.2/go.mod h1:bEr9sfX3Q8Zfm5fL9x+3itogRgK3+ptLWKqgva+5dAk=
|
||||
golang.org/x/tools v0.0.0-20180221164845-07fd8470d635/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
|
||||
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
|
||||
golang.org/x/tools v0.0.0-20181030221726-6c7e314b6563/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
|
||||
golang.org/x/tools v0.0.0-20190114222345-bf090417da8b/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
|
||||
golang.org/x/tools v0.0.0-20190226205152-f727befe758c/go.mod h1:9Yl7xja0Znq3iFh3HoIrodX9oNMXvdceNzlUR8zjMvY=
|
||||
golang.org/x/tools v0.0.0-20190311212946-11955173bddd/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs=
|
||||
golang.org/x/tools v0.0.0-20190524140312-2c0ae7006135/go.mod h1:RgjU9mgBXZiqYHBnxXauZ1Gv1EHHAz9KjViQ78xBX0Q=
|
||||
google.golang.org/appengine v1.1.0/go.mod h1:EbEs0AVv82hx2wNQdGPgUI5lhzA/G0D9YwlJXL52JkM=
|
||||
google.golang.org/appengine v1.4.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4=
|
||||
google.golang.org/genproto v0.0.0-20180817151627-c66870c02cf8 h1:Nw54tB0rB7hY/N0NQvRW8DG4Yk3Q6T9cu9RcFQDu1tc=
|
||||
google.golang.org/genproto v0.0.0-20180817151627-c66870c02cf8/go.mod h1:JiN7NxoALGmiZfu7CAH4rXhgtRTLTxftemlI0sWmxmc=
|
||||
google.golang.org/genproto v0.0.0-20190425155659-357c62f0e4bb h1:i1Ppqkc3WQXikh8bXiwHqAN5Rv3/qDCcRk0/Otx73BY=
|
||||
google.golang.org/genproto v0.0.0-20190425155659-357c62f0e4bb/go.mod h1:VzzqZJRnGkLBvHegQrXjBqPurQTc5/KpmUdxsrq26oE=
|
||||
google.golang.org/genproto v0.0.0-20190502173448-54afdca5d873 h1:nfPFGzJkUDX6uBmpN/pSw7MbOAWegH5QDQuoXFHedLg=
|
||||
google.golang.org/genproto v0.0.0-20190502173448-54afdca5d873/go.mod h1:VzzqZJRnGkLBvHegQrXjBqPurQTc5/KpmUdxsrq26oE=
|
||||
google.golang.org/grpc v1.19.0/go.mod h1:mqu4LbDTu4XGKhr4mRzUsmM4RtVoemTSY81AxZiDr8c=
|
||||
google.golang.org/grpc v1.20.1 h1:Hz2g2wirWK7H0qIIhGIqRGTuMwTE8HEKFnDZZ7lm9NU=
|
||||
google.golang.org/grpc v1.20.1/go.mod h1:10oTOabMzJvdu6/UiuZezV6QK5dSlG84ov/aaiqXj38=
|
||||
google.golang.org/grpc v1.23.1 h1:q4XQuHFC6I28BKZpo6IYyb3mNO+l7lSOxRuYTCiDfXk=
|
||||
google.golang.org/grpc v1.23.1/go.mod h1:Y5yQAOtifL1yxbo5wqy6BxZv8vAUGQwXBOALyacEbxg=
|
||||
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
|
||||
gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127 h1:qIbj1fsPNlZgppZ+VLlY7N33q108Sa+fhmuc+sWQYwY=
|
||||
gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
|
||||
gopkg.in/yaml.v2 v2.2.2 h1:ZCJp+EgiOT7lHqUV2J862kp8Qj64Jo6az82+3Td9dZw=
|
||||
gopkg.in/yaml.v2 v2.2.2/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
|
||||
gopkg.in/yaml.v2 v2.2.8 h1:obN1ZagJSUGI0Ek/LBmuj4SNLPfIny3KsKFopxRdj10=
|
||||
gopkg.in/yaml.v2 v2.2.8/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
|
||||
gotest.tools v2.2.0+incompatible h1:VsBPFP1AI068pPrMxtb/S8Zkgf9xEmTLJjfM+P5UIEo=
|
||||
gotest.tools v2.2.0+incompatible/go.mod h1:DsYFclhRJ6vuDpmuTbkuFWG+y2sxOXAzmJt81HFBacw=
|
||||
honnef.co/go/tools v0.0.0-20190102054323-c2f93a96b099/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
|
||||
k8s.io/kubernetes v1.13.0 h1:qTfB+u5M92k2fCCCVP2iuhgwwSOv1EkAkvQY1tQODD8=
|
||||
k8s.io/kubernetes v1.13.0/go.mod h1:ocZa8+6APFNC2tX1DZASIbocyYT5jHzqFVsY5aoB7Jk=
|
||||
honnef.co/go/tools v0.0.0-20190523083050-ea95bdfd59fc/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
|
||||
|
||||
3
vendor/github.com/Microsoft/hcsshim/hnspolicy.go
generated
vendored
3
vendor/github.com/Microsoft/hcsshim/hnspolicy.go
generated
vendored
@@ -21,8 +21,11 @@ const (
|
||||
OutboundNat = hns.OutboundNat
|
||||
ExternalLoadBalancer = hns.ExternalLoadBalancer
|
||||
Route = hns.Route
|
||||
Proxy = hns.Proxy
|
||||
)
|
||||
|
||||
type ProxyPolicy = hns.ProxyPolicy
|
||||
|
||||
type NatPolicy = hns.NatPolicy
|
||||
|
||||
type QosPolicy = hns.QosPolicy
|
||||
|
||||
7
vendor/github.com/Microsoft/hcsshim/internal/hcs/cgo.go
generated
vendored
7
vendor/github.com/Microsoft/hcsshim/internal/hcs/cgo.go
generated
vendored
@@ -1,7 +0,0 @@
|
||||
package hcs
|
||||
|
||||
import "C"
|
||||
|
||||
// This import is needed to make the library compile as CGO because HCSSHIM
|
||||
// only works with CGO due to callbacks from HCS comming back from a C thread
|
||||
// which is not supported without CGO. See https://github.com/golang/go/issues/10973
|
||||
5
vendor/github.com/Microsoft/hcsshim/internal/hcs/syscall.go
generated
vendored
Normal file
5
vendor/github.com/Microsoft/hcsshim/internal/hcs/syscall.go
generated
vendored
Normal file
@@ -0,0 +1,5 @@
|
||||
package hcs
|
||||
|
||||
//go:generate go run ../../mksyscall_windows.go -output zsyscall_windows.go syscall.go
|
||||
|
||||
//sys hcsFormatWritableLayerVhd(handle uintptr) (hr error) = computestorage.HcsFormatWritableLayerVhd
|
||||
50
vendor/github.com/Microsoft/hcsshim/internal/hcs/system.go
generated
vendored
50
vendor/github.com/Microsoft/hcsshim/internal/hcs/system.go
generated
vendored
@@ -4,12 +4,9 @@ import (
|
||||
"context"
|
||||
"encoding/json"
|
||||
"errors"
|
||||
"os"
|
||||
"strconv"
|
||||
"strings"
|
||||
"sync"
|
||||
"syscall"
|
||||
"time"
|
||||
|
||||
"github.com/Microsoft/hcsshim/internal/cow"
|
||||
"github.com/Microsoft/hcsshim/internal/log"
|
||||
@@ -21,27 +18,6 @@ import (
|
||||
"go.opencensus.io/trace"
|
||||
)
|
||||
|
||||
// currentContainerStarts is used to limit the number of concurrent container
|
||||
// starts.
|
||||
var currentContainerStarts containerStarts
|
||||
|
||||
type containerStarts struct {
|
||||
maxParallel int
|
||||
inProgress int
|
||||
sync.Mutex
|
||||
}
|
||||
|
||||
func init() {
|
||||
mpsS := os.Getenv("HCSSHIM_MAX_PARALLEL_START")
|
||||
if len(mpsS) > 0 {
|
||||
mpsI, err := strconv.Atoi(mpsS)
|
||||
if err != nil || mpsI < 0 {
|
||||
return
|
||||
}
|
||||
currentContainerStarts.maxParallel = mpsI
|
||||
}
|
||||
}
|
||||
|
||||
type System struct {
|
||||
handleLock sync.RWMutex
|
||||
handle vmcompute.HcsSystem
|
||||
@@ -215,32 +191,6 @@ func (computeSystem *System) Start(ctx context.Context) (err error) {
|
||||
return makeSystemError(computeSystem, operation, "", ErrAlreadyClosed, nil)
|
||||
}
|
||||
|
||||
// This is a very simple backoff-retry loop to limit the number
|
||||
// of parallel container starts if environment variable
|
||||
// HCSSHIM_MAX_PARALLEL_START is set to a positive integer.
|
||||
// It should generally only be used as a workaround to various
|
||||
// platform issues that exist between RS1 and RS4 as of Aug 2018
|
||||
if currentContainerStarts.maxParallel > 0 {
|
||||
for {
|
||||
currentContainerStarts.Lock()
|
||||
if currentContainerStarts.inProgress < currentContainerStarts.maxParallel {
|
||||
currentContainerStarts.inProgress++
|
||||
currentContainerStarts.Unlock()
|
||||
break
|
||||
}
|
||||
if currentContainerStarts.inProgress == currentContainerStarts.maxParallel {
|
||||
currentContainerStarts.Unlock()
|
||||
time.Sleep(100 * time.Millisecond)
|
||||
}
|
||||
}
|
||||
// Make sure we decrement the count when we are done.
|
||||
defer func() {
|
||||
currentContainerStarts.Lock()
|
||||
currentContainerStarts.inProgress--
|
||||
currentContainerStarts.Unlock()
|
||||
}()
|
||||
}
|
||||
|
||||
resultJSON, err := vmcompute.HcsStartComputeSystem(ctx, computeSystem.handle, "")
|
||||
events, err := processAsyncHcsResult(ctx, err, resultJSON, computeSystem.callbackNumber, hcsNotificationSystemStartCompleted, &timeout.SystemStart)
|
||||
if err != nil {
|
||||
|
||||
28
vendor/github.com/Microsoft/hcsshim/internal/hcs/utils.go
generated
vendored
28
vendor/github.com/Microsoft/hcsshim/internal/hcs/utils.go
generated
vendored
@@ -1,10 +1,14 @@
|
||||
package hcs
|
||||
|
||||
import (
|
||||
"context"
|
||||
"io"
|
||||
"syscall"
|
||||
|
||||
"github.com/Microsoft/go-winio"
|
||||
diskutil "github.com/Microsoft/go-winio/vhd"
|
||||
"github.com/pkg/errors"
|
||||
"golang.org/x/sys/windows"
|
||||
)
|
||||
|
||||
// makeOpenFiles calls winio.MakeOpenFile for each handle in a slice but closes all the handles
|
||||
@@ -31,3 +35,27 @@ func makeOpenFiles(hs []syscall.Handle) (_ []io.ReadWriteCloser, err error) {
|
||||
}
|
||||
return fs, nil
|
||||
}
|
||||
|
||||
// creates a VHD formatted with NTFS of size `sizeGB` at the given `vhdPath`.
|
||||
func CreateNTFSVHD(ctx context.Context, vhdPath string, sizeGB uint32) (err error) {
|
||||
if err := diskutil.CreateVhdx(vhdPath, sizeGB, 1); err != nil {
|
||||
return errors.Wrap(err, "failed to create VHD")
|
||||
}
|
||||
|
||||
vhd, err := diskutil.OpenVirtualDisk(vhdPath, diskutil.VirtualDiskAccessNone, diskutil.OpenVirtualDiskFlagNone)
|
||||
if err != nil {
|
||||
return errors.Wrap(err, "failed to open VHD")
|
||||
}
|
||||
defer func() {
|
||||
err2 := windows.CloseHandle(windows.Handle(vhd))
|
||||
if err == nil {
|
||||
err = errors.Wrap(err2, "failed to close VHD")
|
||||
}
|
||||
}()
|
||||
|
||||
if err := hcsFormatWritableLayerVhd(uintptr(vhd)); err != nil {
|
||||
return errors.Wrap(err, "failed to format VHD")
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
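CreateNTFSVHD above relies on a named `err` return plus a deferred close so that a failure while closing the VHD handle is not silently swallowed. Below is a minimal sketch of that defer/named-return idiom using a plain file instead of a VHD; the `writeGreeting` helper and the file name are made up for illustration.

```go
package main

import (
	"fmt"
	"os"

	"github.com/pkg/errors"
)

// writeGreeting demonstrates the named-return/deferred-close idiom used by
// CreateNTFSVHD: if the body succeeds but Close fails, the Close error is
// surfaced instead of being lost.
func writeGreeting(path string) (err error) {
	f, err := os.Create(path)
	if err != nil {
		return errors.Wrap(err, "failed to create file")
	}
	defer func() {
		if cerr := f.Close(); err == nil && cerr != nil {
			err = errors.Wrap(cerr, "failed to close file")
		}
	}()

	_, err = f.WriteString("hello\n")
	return err
}

func main() {
	// "greeting.txt" is just an example path.
	if err := writeGreeting("greeting.txt"); err != nil {
		fmt.Println("error:", err)
	}
}
```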
54 vendor/github.com/Microsoft/hcsshim/internal/hcs/zsyscall_windows.go (generated, vendored, new file)
@@ -0,0 +1,54 @@
|
||||
// Code generated mksyscall_windows.exe DO NOT EDIT
|
||||
|
||||
package hcs
|
||||
|
||||
import (
|
||||
"syscall"
|
||||
"unsafe"
|
||||
|
||||
"golang.org/x/sys/windows"
|
||||
)
|
||||
|
||||
var _ unsafe.Pointer
|
||||
|
||||
// Do the interface allocations only once for common
|
||||
// Errno values.
|
||||
const (
|
||||
errnoERROR_IO_PENDING = 997
|
||||
)
|
||||
|
||||
var (
|
||||
errERROR_IO_PENDING error = syscall.Errno(errnoERROR_IO_PENDING)
|
||||
)
|
||||
|
||||
// errnoErr returns common boxed Errno values, to prevent
|
||||
// allocations at runtime.
|
||||
func errnoErr(e syscall.Errno) error {
|
||||
switch e {
|
||||
case 0:
|
||||
return nil
|
||||
case errnoERROR_IO_PENDING:
|
||||
return errERROR_IO_PENDING
|
||||
}
|
||||
// TODO: add more here, after collecting data on the common
|
||||
// error values see on Windows. (perhaps when running
|
||||
// all.bat?)
|
||||
return e
|
||||
}
|
||||
|
||||
var (
|
||||
modcomputestorage = windows.NewLazySystemDLL("computestorage.dll")
|
||||
|
||||
procHcsFormatWritableLayerVhd = modcomputestorage.NewProc("HcsFormatWritableLayerVhd")
|
||||
)
|
||||
|
||||
func hcsFormatWritableLayerVhd(handle uintptr) (hr error) {
|
||||
r0, _, _ := syscall.Syscall(procHcsFormatWritableLayerVhd.Addr(), 1, uintptr(handle), 0, 0)
|
||||
if int32(r0) < 0 {
|
||||
if r0&0x1fff0000 == 0x00070000 {
|
||||
r0 &= 0xffff
|
||||
}
|
||||
hr = syscall.Errno(r0)
|
||||
}
|
||||
return
|
||||
}
|
||||
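The generated `hcsFormatWritableLayerVhd` stub checks `r0&0x1fff0000 == 0x00070000` before masking with `0xffff`: that test asks whether a failing HRESULT carries the Win32 facility (0x0007), in which case the low 16 bits are the original Win32 error code. A small, platform-neutral sketch of that masking; the `win32FromHRESULT` helper is invented here and is not part of hcsshim.

```go
package main

import "fmt"

// win32FromHRESULT mirrors the masking in the generated syscall stub: if the
// HRESULT encodes a Win32 error (facility 0x0007), strip it down to the
// original 16-bit error code; otherwise return the value unchanged.
func win32FromHRESULT(r0 uintptr) uintptr {
	if int32(r0) < 0 { // a negative HRESULT means failure
		if r0&0x1fff0000 == 0x00070000 {
			r0 &= 0xffff
		}
	}
	return r0
}

func main() {
	// 0x80070005 is E_ACCESSDENIED, which wraps Win32 code 5 (ERROR_ACCESS_DENIED).
	fmt.Printf("0x%x\n", win32FromHRESULT(0x80070005)) // prints 0x5
}
```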
21 vendor/github.com/Microsoft/hcsshim/internal/hns/hnsendpoint.go (generated, vendored)
@@ -173,6 +173,27 @@ func (endpoint *HNSEndpoint) ApplyACLPolicy(policies ...*ACLPolicy) error {
|
||||
return err
|
||||
}
|
||||
|
||||
// ApplyProxyPolicy applies a set of Proxy Policies on the Endpoint
|
||||
func (endpoint *HNSEndpoint) ApplyProxyPolicy(policies ...*ProxyPolicy) error {
|
||||
operation := "ApplyProxyPolicy"
|
||||
title := "hcsshim::HNSEndpoint::" + operation
|
||||
logrus.Debugf(title+" id=%s", endpoint.Id)
|
||||
|
||||
for _, policy := range policies {
|
||||
if policy == nil {
|
||||
continue
|
||||
}
|
||||
jsonString, err := json.Marshal(policy)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
endpoint.Policies = append(endpoint.Policies, jsonString)
|
||||
}
|
||||
|
||||
_, err := endpoint.Update()
|
||||
return err
|
||||
}
|
||||
|
||||
// ContainerAttach attaches an endpoint to container
|
||||
func (endpoint *HNSEndpoint) ContainerAttach(containerID string, compartmentID uint16) error {
|
||||
operation := "ContainerAttach"
|
||||
|
||||
10 vendor/github.com/Microsoft/hcsshim/internal/hns/hnspolicy.go (generated, vendored)
@@ -17,6 +17,7 @@ const (
|
||||
OutboundNat PolicyType = "OutBoundNAT"
|
||||
ExternalLoadBalancer PolicyType = "ELB"
|
||||
Route PolicyType = "ROUTE"
|
||||
Proxy PolicyType = "PROXY"
|
||||
)
|
||||
|
||||
type NatPolicy struct {
|
||||
@@ -60,6 +61,15 @@ type OutboundNatPolicy struct {
|
||||
Destinations []string `json:",omitempty"`
|
||||
}
|
||||
|
||||
type ProxyPolicy struct {
|
||||
Type PolicyType `json:"Type"`
|
||||
IP string `json:",omitempty"`
|
||||
Port string `json:",omitempty"`
|
||||
ExceptionList []string `json:",omitempty"`
|
||||
Destination string `json:",omitempty"`
|
||||
OutboundNat bool `json:",omitempty"`
|
||||
}
|
||||
|
||||
type ActionType string
|
||||
type DirectionType string
|
||||
type RuleType string
|
||||
|
||||
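`ApplyProxyPolicy` above simply JSON-encodes each `ProxyPolicy` and appends the raw bytes to `endpoint.Policies` before calling `Update()`. A standalone sketch of just the encoding step; the struct is redeclared locally because the real one lives in hcsshim's internal `hns` package.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// ProxyPolicy copies the field layout added in hnspolicy.go; it is declared
// locally here only because the real type is in an internal package.
type ProxyPolicy struct {
	Type          string   `json:"Type"`
	IP            string   `json:",omitempty"`
	Port          string   `json:",omitempty"`
	ExceptionList []string `json:",omitempty"`
	Destination   string   `json:",omitempty"`
	OutboundNat   bool     `json:",omitempty"`
}

func main() {
	// Example values; in ApplyProxyPolicy the encoded JSON is appended to
	// endpoint.Policies and then endpoint.Update() is called.
	policy := ProxyPolicy{
		Type: "PROXY",
		IP:   "10.0.0.1",
		Port: "8080",
	}
	raw, err := json.Marshal(policy)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(raw))
}
```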
7 vendor/github.com/Microsoft/hcsshim/internal/schema1/schema1.go (generated, vendored)
@@ -214,9 +214,10 @@ type MappedVirtualDiskController struct {
|
||||
|
||||
// GuestDefinedCapabilities is part of the GuestConnectionInfo returned by a GuestConnection call on a utility VM
|
||||
type GuestDefinedCapabilities struct {
|
||||
NamespaceAddRequestSupported bool `json:",omitempty"`
|
||||
SignalProcessSupported bool `json:",omitempty"`
|
||||
DumpStacksSupported bool `json:",omitempty"`
|
||||
NamespaceAddRequestSupported bool `json:",omitempty"`
|
||||
SignalProcessSupported bool `json:",omitempty"`
|
||||
DumpStacksSupported bool `json:",omitempty"`
|
||||
DeleteContainerStateSupported bool `json:",omitempty"`
|
||||
}
|
||||
|
||||
// GuestConnectionInfo is the structure of an iterm return by a GuestConnection call on a utility VM
|
||||
|
||||
4 vendor/github.com/Microsoft/hcsshim/internal/schema2/devices.go (generated, vendored)
@@ -39,4 +39,8 @@ type Devices struct {
|
||||
FlexibleIov map[string]FlexibleIoDevice `json:"FlexibleIov,omitempty"`
|
||||
|
||||
SharedMemory *SharedMemoryConfiguration `json:"SharedMemory,omitempty"`
|
||||
|
||||
// TODO: This is pre-release support in schema 2.3. Need to add build number
|
||||
// docs when a public build with this is out.
|
||||
VirtualPci map[string]VirtualPciDevice `json:",omitempty"`
|
||||
}
|
||||
|
||||
19 vendor/github.com/Microsoft/hcsshim/internal/schema2/memory_2.go (generated, vendored)
@@ -27,4 +27,23 @@ type Memory2 struct {
|
||||
// to the VM, allowing it to trim non-zeroed pages from the working set (if supported by
|
||||
// the guest operating system).
|
||||
EnableColdDiscardHint bool `json:"EnableColdDiscardHint,omitempty"`
|
||||
|
||||
// LowMmioGapInMB is the low MMIO region allocated below 4GB.
|
||||
//
|
||||
// TODO: This is pre-release support in schema 2.3. Need to add build number
|
||||
// docs when a public build with this is out.
|
||||
LowMMIOGapInMB uint64 `json:"LowMmioGapInMB,omitempty"`
|
||||
|
||||
// HighMmioBaseInMB is the high MMIO region allocated above 4GB (base and
|
||||
// size).
|
||||
//
|
||||
// TODO: This is pre-release support in schema 2.3. Need to add build number
|
||||
// docs when a public build with this is out.
|
||||
HighMMIOBaseInMB uint64 `json:"HighMmioBaseInMB,omitempty"`
|
||||
|
||||
// HighMmioGapInMB is the high MMIO region.
|
||||
//
|
||||
// TODO: This is pre-release support in schema 2.3. Need to add build number
|
||||
// docs when a public build with this is out.
|
||||
HighMMIOGapInMB uint64 `json:"HighMmioGapInMB,omitempty"`
|
||||
}
|
||||
|
||||
16 vendor/github.com/Microsoft/hcsshim/internal/schema2/virtual_pci_device.go (generated, vendored, new file)
@@ -0,0 +1,16 @@
|
||||
/*
|
||||
* HCS API
|
||||
*
|
||||
* No description provided (generated by Swagger Codegen https://github.com/swagger-api/swagger-codegen)
|
||||
*
|
||||
* API version: 2.3
|
||||
* Generated by: Swagger Codegen (https://github.com/swagger-api/swagger-codegen.git)
|
||||
*/
|
||||
|
||||
package hcsschema
|
||||
|
||||
// TODO: This is pre-release support in schema 2.3. Need to add build number
|
||||
// docs when a public build with this is out.
|
||||
type VirtualPciDevice struct {
|
||||
Functions []VirtualPciFunction `json:",omitempty"`
|
||||
}
|
||||
18 vendor/github.com/Microsoft/hcsshim/internal/schema2/virtual_pci_function.go (generated, vendored, new file)
@@ -0,0 +1,18 @@
|
||||
/*
|
||||
* HCS API
|
||||
*
|
||||
* No description provided (generated by Swagger Codegen https://github.com/swagger-api/swagger-codegen)
|
||||
*
|
||||
* API version: 2.3
|
||||
* Generated by: Swagger Codegen (https://github.com/swagger-api/swagger-codegen.git)
|
||||
*/
|
||||
|
||||
package hcsschema
|
||||
|
||||
// TODO: This is pre-release support in schema 2.3. Need to add build number
|
||||
// docs when a public build with this is out.
|
||||
type VirtualPciFunction struct {
|
||||
DeviceInstancePath string `json:",omitempty"`
|
||||
|
||||
VirtualFunction uint16 `json:",omitempty"`
|
||||
}
|
||||
23 vendor/github.com/Microsoft/hcsshim/internal/wclayer/activatelayer.go (generated, vendored)
@@ -1,28 +1,23 @@
|
||||
package wclayer
|
||||
|
||||
import (
|
||||
"context"
|
||||
|
||||
"github.com/Microsoft/hcsshim/internal/hcserror"
|
||||
"github.com/sirupsen/logrus"
|
||||
"github.com/Microsoft/hcsshim/internal/oc"
|
||||
"go.opencensus.io/trace"
|
||||
)
|
||||
|
||||
// ActivateLayer will find the layer with the given id and mount it's filesystem.
|
||||
// For a read/write layer, the mounted filesystem will appear as a volume on the
|
||||
// host, while a read-only layer is generally expected to be a no-op.
|
||||
// An activated layer must later be deactivated via DeactivateLayer.
|
||||
func ActivateLayer(path string) (err error) {
|
||||
func ActivateLayer(ctx context.Context, path string) (err error) {
|
||||
title := "hcsshim::ActivateLayer"
|
||||
fields := logrus.Fields{
|
||||
"path": path,
|
||||
}
|
||||
logrus.WithFields(fields).Debug(title)
|
||||
defer func() {
|
||||
if err != nil {
|
||||
fields[logrus.ErrorKey] = err
|
||||
logrus.WithFields(fields).Error(err)
|
||||
} else {
|
||||
logrus.WithFields(fields).Debug(title + " - succeeded")
|
||||
}
|
||||
}()
|
||||
ctx, span := trace.StartSpan(ctx, title)
|
||||
defer span.End()
|
||||
defer func() { oc.SetSpanStatus(span, err) }()
|
||||
span.AddAttributes(trace.StringAttribute("path", path))
|
||||
|
||||
err = activateLayer(&stdDriverInfo, path)
|
||||
if err != nil {
|
||||
|
||||
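From here on, most of the wclayer changes follow one pattern: the logrus field logging is swapped for an OpenCensus span that is ended on return, annotated with the call's arguments, and marked failed through the internal `oc.SetSpanStatus` helper. A generic, hedged sketch of that shape; `ActivateLayerLike`, `doSomething` and `setSpanStatus` are stand-in names, not hcsshim APIs.

```go
package main

import (
	"context"
	"errors"
	"fmt"

	"go.opencensus.io/trace"
)

// setSpanStatus stands in for hcsshim's internal oc.SetSpanStatus helper.
func setSpanStatus(span *trace.Span, err error) {
	if err != nil {
		span.SetStatus(trace.Status{Code: trace.StatusCodeUnknown, Message: err.Error()})
	}
}

// doSomething is a placeholder for a driver call such as activateLayer.
func doSomething(path string) error {
	if path == "" {
		return errors.New("empty path")
	}
	return nil
}

// ActivateLayerLike shows the tracing shape used throughout the wclayer diff:
// start a span, record the arguments as attributes, and set the span status
// from the named error return.
func ActivateLayerLike(ctx context.Context, path string) (err error) {
	ctx, span := trace.StartSpan(ctx, "example::ActivateLayerLike")
	defer span.End()
	defer func() { setSpanStatus(span, err) }()
	span.AddAttributes(trace.StringAttribute("path", path))

	_ = ctx // the real code passes ctx down to the next layer
	return doSomething(path)
}

func main() {
	fmt.Println(ActivateLayerLike(context.Background(), `C:\layer`))
}
```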
17 vendor/github.com/Microsoft/hcsshim/internal/wclayer/baselayer.go (generated, vendored)
@@ -1,6 +1,7 @@
|
||||
package wclayer
|
||||
|
||||
import (
|
||||
"context"
|
||||
"errors"
|
||||
"os"
|
||||
"path/filepath"
|
||||
@@ -8,10 +9,15 @@ import (
|
||||
|
||||
"github.com/Microsoft/go-winio"
|
||||
"github.com/Microsoft/hcsshim/internal/hcserror"
|
||||
"github.com/Microsoft/hcsshim/internal/oc"
|
||||
"github.com/Microsoft/hcsshim/internal/safefile"
|
||||
"go.opencensus.io/trace"
|
||||
)
|
||||
|
||||
type baseLayerWriter struct {
|
||||
ctx context.Context
|
||||
s *trace.Span
|
||||
|
||||
root *os.File
|
||||
f *os.File
|
||||
bw *winio.BackupFileWriter
|
||||
@@ -136,12 +142,15 @@ func (w *baseLayerWriter) Write(b []byte) (int, error) {
|
||||
return n, err
|
||||
}
|
||||
|
||||
func (w *baseLayerWriter) Close() error {
|
||||
func (w *baseLayerWriter) Close() (err error) {
|
||||
defer w.s.End()
|
||||
defer func() { oc.SetSpanStatus(w.s, err) }()
|
||||
defer func() {
|
||||
w.root.Close()
|
||||
w.root = nil
|
||||
}()
|
||||
err := w.closeCurrentFile()
|
||||
|
||||
err = w.closeCurrentFile()
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
@@ -153,7 +162,7 @@ func (w *baseLayerWriter) Close() error {
|
||||
return err
|
||||
}
|
||||
|
||||
err = ProcessBaseLayer(w.root.Name())
|
||||
err = ProcessBaseLayer(w.ctx, w.root.Name())
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
@@ -163,7 +172,7 @@ func (w *baseLayerWriter) Close() error {
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
err = ProcessUtilityVMImage(filepath.Join(w.root.Name(), "UtilityVM"))
|
||||
err = ProcessUtilityVMImage(w.ctx, filepath.Join(w.root.Name(), "UtilityVM"))
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
26 vendor/github.com/Microsoft/hcsshim/internal/wclayer/createlayer.go (generated, vendored)
@@ -1,27 +1,23 @@
|
||||
package wclayer
|
||||
|
||||
import (
|
||||
"context"
|
||||
|
||||
"github.com/Microsoft/hcsshim/internal/hcserror"
|
||||
"github.com/sirupsen/logrus"
|
||||
"github.com/Microsoft/hcsshim/internal/oc"
|
||||
"go.opencensus.io/trace"
|
||||
)
|
||||
|
||||
// CreateLayer creates a new, empty, read-only layer on the filesystem based on
|
||||
// the parent layer provided.
|
||||
func CreateLayer(path, parent string) (err error) {
|
||||
func CreateLayer(ctx context.Context, path, parent string) (err error) {
|
||||
title := "hcsshim::CreateLayer"
|
||||
fields := logrus.Fields{
|
||||
"parent": parent,
|
||||
"path": path,
|
||||
}
|
||||
logrus.WithFields(fields).Debug(title)
|
||||
defer func() {
|
||||
if err != nil {
|
||||
fields[logrus.ErrorKey] = err
|
||||
logrus.WithFields(fields).Error(err)
|
||||
} else {
|
||||
logrus.WithFields(fields).Debug(title + " - succeeded")
|
||||
}
|
||||
}()
|
||||
ctx, span := trace.StartSpan(ctx, title)
|
||||
defer span.End()
|
||||
defer func() { oc.SetSpanStatus(span, err) }()
|
||||
span.AddAttributes(
|
||||
trace.StringAttribute("path", path),
|
||||
trace.StringAttribute("parent", parent))
|
||||
|
||||
err = createLayer(&stdDriverInfo, path, parent)
|
||||
if err != nil {
|
||||
|
||||
28 vendor/github.com/Microsoft/hcsshim/internal/wclayer/createscratchlayer.go (generated, vendored)
@@ -1,31 +1,29 @@
|
||||
package wclayer
|
||||
|
||||
import (
|
||||
"context"
|
||||
"strings"
|
||||
|
||||
"github.com/Microsoft/hcsshim/internal/hcserror"
|
||||
"github.com/sirupsen/logrus"
|
||||
"github.com/Microsoft/hcsshim/internal/oc"
|
||||
"go.opencensus.io/trace"
|
||||
)
|
||||
|
||||
// CreateScratchLayer creates and populates new read-write layer for use by a container.
|
||||
// This requires both the id of the direct parent layer, as well as the full list
|
||||
// of paths to all parent layers up to the base (and including the direct parent
|
||||
// whose id was provided).
|
||||
func CreateScratchLayer(path string, parentLayerPaths []string) (err error) {
|
||||
func CreateScratchLayer(ctx context.Context, path string, parentLayerPaths []string) (err error) {
|
||||
title := "hcsshim::CreateScratchLayer"
|
||||
fields := logrus.Fields{
|
||||
"path": path,
|
||||
}
|
||||
logrus.WithFields(fields).Debug(title)
|
||||
defer func() {
|
||||
if err != nil {
|
||||
fields[logrus.ErrorKey] = err
|
||||
logrus.WithFields(fields).Error(err)
|
||||
} else {
|
||||
logrus.WithFields(fields).Debug(title + " - succeeded")
|
||||
}
|
||||
}()
|
||||
ctx, span := trace.StartSpan(ctx, title)
|
||||
defer span.End()
|
||||
defer func() { oc.SetSpanStatus(span, err) }()
|
||||
span.AddAttributes(
|
||||
trace.StringAttribute("path", path),
|
||||
trace.StringAttribute("parentLayerPaths", strings.Join(parentLayerPaths, ", ")))
|
||||
|
||||
// Generate layer descriptors
|
||||
layers, err := layerPathsToDescriptors(parentLayerPaths)
|
||||
layers, err := layerPathsToDescriptors(ctx, parentLayerPaths)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
23 vendor/github.com/Microsoft/hcsshim/internal/wclayer/deactivatelayer.go (generated, vendored)
@@ -1,25 +1,20 @@
|
||||
package wclayer
|
||||
|
||||
import (
|
||||
"context"
|
||||
|
||||
"github.com/Microsoft/hcsshim/internal/hcserror"
|
||||
"github.com/sirupsen/logrus"
|
||||
"github.com/Microsoft/hcsshim/internal/oc"
|
||||
"go.opencensus.io/trace"
|
||||
)
|
||||
|
||||
// DeactivateLayer will dismount a layer that was mounted via ActivateLayer.
|
||||
func DeactivateLayer(path string) (err error) {
|
||||
func DeactivateLayer(ctx context.Context, path string) (err error) {
|
||||
title := "hcsshim::DeactivateLayer"
|
||||
fields := logrus.Fields{
|
||||
"path": path,
|
||||
}
|
||||
logrus.WithFields(fields).Debug(title)
|
||||
defer func() {
|
||||
if err != nil {
|
||||
fields[logrus.ErrorKey] = err
|
||||
logrus.WithFields(fields).Error(err)
|
||||
} else {
|
||||
logrus.WithFields(fields).Debug(title + " - succeeded")
|
||||
}
|
||||
}()
|
||||
ctx, span := trace.StartSpan(ctx, title)
|
||||
defer span.End()
|
||||
defer func() { oc.SetSpanStatus(span, err) }()
|
||||
span.AddAttributes(trace.StringAttribute("path", path))
|
||||
|
||||
err = deactivateLayer(&stdDriverInfo, path)
|
||||
if err != nil {
|
||||
|
||||
23 vendor/github.com/Microsoft/hcsshim/internal/wclayer/destroylayer.go (generated, vendored)
@@ -1,26 +1,21 @@
|
||||
package wclayer
|
||||
|
||||
import (
|
||||
"context"
|
||||
|
||||
"github.com/Microsoft/hcsshim/internal/hcserror"
|
||||
"github.com/sirupsen/logrus"
|
||||
"github.com/Microsoft/hcsshim/internal/oc"
|
||||
"go.opencensus.io/trace"
|
||||
)
|
||||
|
||||
// DestroyLayer will remove the on-disk files representing the layer with the given
|
||||
// path, including that layer's containing folder, if any.
|
||||
func DestroyLayer(path string) (err error) {
|
||||
func DestroyLayer(ctx context.Context, path string) (err error) {
|
||||
title := "hcsshim::DestroyLayer"
|
||||
fields := logrus.Fields{
|
||||
"path": path,
|
||||
}
|
||||
logrus.WithFields(fields).Debug(title)
|
||||
defer func() {
|
||||
if err != nil {
|
||||
fields[logrus.ErrorKey] = err
|
||||
logrus.WithFields(fields).Error(err)
|
||||
} else {
|
||||
logrus.WithFields(fields).Debug(title + " - succeeded")
|
||||
}
|
||||
}()
|
||||
ctx, span := trace.StartSpan(ctx, title)
|
||||
defer span.End()
|
||||
defer func() { oc.SetSpanStatus(span, err) }()
|
||||
span.AddAttributes(trace.StringAttribute("path", path))
|
||||
|
||||
err = destroyLayer(&stdDriverInfo, path)
|
||||
if err != nil {
|
||||
|
||||
31 vendor/github.com/Microsoft/hcsshim/internal/wclayer/expandscratchsize.go (generated, vendored)
@@ -1,32 +1,27 @@
|
||||
package wclayer
|
||||
|
||||
import (
|
||||
"context"
|
||||
"os"
|
||||
"path/filepath"
|
||||
"syscall"
|
||||
"unsafe"
|
||||
|
||||
"github.com/Microsoft/hcsshim/internal/hcserror"
|
||||
"github.com/Microsoft/hcsshim/internal/oc"
|
||||
"github.com/Microsoft/hcsshim/osversion"
|
||||
"github.com/sirupsen/logrus"
|
||||
"go.opencensus.io/trace"
|
||||
)
|
||||
|
||||
// ExpandScratchSize expands the size of a layer to at least size bytes.
|
||||
func ExpandScratchSize(path string, size uint64) (err error) {
|
||||
func ExpandScratchSize(ctx context.Context, path string, size uint64) (err error) {
|
||||
title := "hcsshim::ExpandScratchSize"
|
||||
fields := logrus.Fields{
|
||||
"path": path,
|
||||
"size": size,
|
||||
}
|
||||
logrus.WithFields(fields).Debug(title)
|
||||
defer func() {
|
||||
if err != nil {
|
||||
fields[logrus.ErrorKey] = err
|
||||
logrus.WithFields(fields).Error(err)
|
||||
} else {
|
||||
logrus.WithFields(fields).Debug(title + " - succeeded")
|
||||
}
|
||||
}()
|
||||
ctx, span := trace.StartSpan(ctx, title)
|
||||
defer span.End()
|
||||
defer func() { oc.SetSpanStatus(span, err) }()
|
||||
span.AddAttributes(
|
||||
trace.StringAttribute("path", path),
|
||||
trace.Int64Attribute("size", int64(size)))
|
||||
|
||||
err = expandSandboxSize(&stdDriverInfo, path, size)
|
||||
if err != nil {
|
||||
@@ -36,7 +31,7 @@ func ExpandScratchSize(path string, size uint64) (err error) {
|
||||
// Manually expand the volume now in order to work around bugs in 19H1 and
|
||||
// prerelease versions of Vb. Remove once this is fixed in Windows.
|
||||
if build := osversion.Get().Build; build >= osversion.V19H1 && build < 19020 {
|
||||
err = expandSandboxVolume(path)
|
||||
err = expandSandboxVolume(ctx, path)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
@@ -84,7 +79,7 @@ func attachVhd(path string) (syscall.Handle, error) {
|
||||
return handle, nil
|
||||
}
|
||||
|
||||
func expandSandboxVolume(path string) error {
|
||||
func expandSandboxVolume(ctx context.Context, path string) error {
|
||||
// Mount the sandbox VHD temporarily.
|
||||
vhdPath := filepath.Join(path, "sandbox.vhdx")
|
||||
vhd, err := attachVhd(vhdPath)
|
||||
@@ -94,7 +89,7 @@ func expandSandboxVolume(path string) error {
|
||||
defer syscall.Close(vhd)
|
||||
|
||||
// Open the volume.
|
||||
volumePath, err := GetLayerMountPath(path)
|
||||
volumePath, err := GetLayerMountPath(ctx, path)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
60 vendor/github.com/Microsoft/hcsshim/internal/wclayer/exportlayer.go (generated, vendored)
@@ -1,12 +1,15 @@
|
||||
package wclayer
|
||||
|
||||
import (
|
||||
"context"
|
||||
"io/ioutil"
|
||||
"os"
|
||||
"strings"
|
||||
|
||||
"github.com/Microsoft/go-winio"
|
||||
"github.com/Microsoft/hcsshim/internal/hcserror"
|
||||
"github.com/sirupsen/logrus"
|
||||
"github.com/Microsoft/hcsshim/internal/oc"
|
||||
"go.opencensus.io/trace"
|
||||
)
|
||||
|
||||
// ExportLayer will create a folder at exportFolderPath and fill that folder with
|
||||
@@ -14,24 +17,18 @@ import (
|
||||
// format includes any metadata required for later importing the layer (using
|
||||
// ImportLayer), and requires the full list of parent layer paths in order to
|
||||
// perform the export.
|
||||
func ExportLayer(path string, exportFolderPath string, parentLayerPaths []string) (err error) {
|
||||
func ExportLayer(ctx context.Context, path string, exportFolderPath string, parentLayerPaths []string) (err error) {
|
||||
title := "hcsshim::ExportLayer"
|
||||
fields := logrus.Fields{
|
||||
"path": path,
|
||||
"exportFolderPath": exportFolderPath,
|
||||
}
|
||||
logrus.WithFields(fields).Debug(title)
|
||||
defer func() {
|
||||
if err != nil {
|
||||
fields[logrus.ErrorKey] = err
|
||||
logrus.WithFields(fields).Error(err)
|
||||
} else {
|
||||
logrus.WithFields(fields).Debug(title + " - succeeded")
|
||||
}
|
||||
}()
|
||||
ctx, span := trace.StartSpan(ctx, title)
|
||||
defer span.End()
|
||||
defer func() { oc.SetSpanStatus(span, err) }()
|
||||
span.AddAttributes(
|
||||
trace.StringAttribute("path", path),
|
||||
trace.StringAttribute("exportFolderPath", exportFolderPath),
|
||||
trace.StringAttribute("parentLayerPaths", strings.Join(parentLayerPaths, ", ")))
|
||||
|
||||
// Generate layer descriptors
|
||||
layers, err := layerPathsToDescriptors(parentLayerPaths)
|
||||
layers, err := layerPathsToDescriptors(ctx, parentLayerPaths)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
@@ -52,25 +49,46 @@ type LayerReader interface {
|
||||
// NewLayerReader returns a new layer reader for reading the contents of an on-disk layer.
|
||||
// The caller must have taken the SeBackupPrivilege privilege
|
||||
// to call this and any methods on the resulting LayerReader.
|
||||
func NewLayerReader(path string, parentLayerPaths []string) (LayerReader, error) {
|
||||
func NewLayerReader(ctx context.Context, path string, parentLayerPaths []string) (_ LayerReader, err error) {
|
||||
ctx, span := trace.StartSpan(ctx, "hcsshim::NewLayerReader")
|
||||
defer func() {
|
||||
if err != nil {
|
||||
oc.SetSpanStatus(span, err)
|
||||
span.End()
|
||||
}
|
||||
}()
|
||||
span.AddAttributes(
|
||||
trace.StringAttribute("path", path),
|
||||
trace.StringAttribute("parentLayerPaths", strings.Join(parentLayerPaths, ", ")))
|
||||
|
||||
exportPath, err := ioutil.TempDir("", "hcs")
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
err = ExportLayer(path, exportPath, parentLayerPaths)
|
||||
err = ExportLayer(ctx, path, exportPath, parentLayerPaths)
|
||||
if err != nil {
|
||||
os.RemoveAll(exportPath)
|
||||
return nil, err
|
||||
}
|
||||
return &legacyLayerReaderWrapper{newLegacyLayerReader(exportPath)}, nil
|
||||
return &legacyLayerReaderWrapper{
|
||||
ctx: ctx,
|
||||
s: span,
|
||||
legacyLayerReader: newLegacyLayerReader(exportPath),
|
||||
}, nil
|
||||
}
|
||||
|
||||
type legacyLayerReaderWrapper struct {
|
||||
ctx context.Context
|
||||
s *trace.Span
|
||||
|
||||
*legacyLayerReader
|
||||
}
|
||||
|
||||
func (r *legacyLayerReaderWrapper) Close() error {
|
||||
err := r.legacyLayerReader.Close()
|
||||
func (r *legacyLayerReaderWrapper) Close() (err error) {
|
||||
defer r.s.End()
|
||||
defer func() { oc.SetSpanStatus(r.s, err) }()
|
||||
|
||||
err = r.legacyLayerReader.Close()
|
||||
os.RemoveAll(r.root)
|
||||
return err
|
||||
}
|
||||
|
||||
29 vendor/github.com/Microsoft/hcsshim/internal/wclayer/getlayermountpath.go (generated, vendored)
@@ -1,36 +1,31 @@
|
||||
package wclayer
|
||||
|
||||
import (
|
||||
"context"
|
||||
"syscall"
|
||||
|
||||
"github.com/Microsoft/hcsshim/internal/hcserror"
|
||||
"github.com/sirupsen/logrus"
|
||||
"github.com/Microsoft/hcsshim/internal/log"
|
||||
"github.com/Microsoft/hcsshim/internal/oc"
|
||||
"go.opencensus.io/trace"
|
||||
)
|
||||
|
||||
// GetLayerMountPath will look for a mounted layer with the given path and return
|
||||
// the path at which that layer can be accessed. This path may be a volume path
|
||||
// if the layer is a mounted read-write layer, otherwise it is expected to be the
|
||||
// folder path at which the layer is stored.
|
||||
func GetLayerMountPath(path string) (_ string, err error) {
|
||||
func GetLayerMountPath(ctx context.Context, path string) (_ string, err error) {
|
||||
title := "hcsshim::GetLayerMountPath"
|
||||
fields := logrus.Fields{
|
||||
"path": path,
|
||||
}
|
||||
logrus.WithFields(fields).Debug(title)
|
||||
defer func() {
|
||||
if err != nil {
|
||||
fields[logrus.ErrorKey] = err
|
||||
logrus.WithFields(fields).Error(err)
|
||||
} else {
|
||||
logrus.WithFields(fields).Debug(title + " - succeeded")
|
||||
}
|
||||
}()
|
||||
ctx, span := trace.StartSpan(ctx, title)
|
||||
defer span.End()
|
||||
defer func() { oc.SetSpanStatus(span, err) }()
|
||||
span.AddAttributes(trace.StringAttribute("path", path))
|
||||
|
||||
var mountPathLength uintptr
|
||||
mountPathLength = 0
|
||||
|
||||
// Call the procedure itself.
|
||||
logrus.WithFields(fields).Debug("Calling proc (1)")
|
||||
log.G(ctx).Debug("Calling proc (1)")
|
||||
err = getLayerMountPath(&stdDriverInfo, path, &mountPathLength, nil)
|
||||
if err != nil {
|
||||
return "", hcserror.New(err, title+" - failed", "(first call)")
|
||||
@@ -44,13 +39,13 @@ func GetLayerMountPath(path string) (_ string, err error) {
|
||||
mountPathp[0] = 0
|
||||
|
||||
// Call the procedure again
|
||||
logrus.WithFields(fields).Debug("Calling proc (2)")
|
||||
log.G(ctx).Debug("Calling proc (2)")
|
||||
err = getLayerMountPath(&stdDriverInfo, path, &mountPathLength, &mountPathp[0])
|
||||
if err != nil {
|
||||
return "", hcserror.New(err, title+" - failed", "(second call)")
|
||||
}
|
||||
|
||||
mountPath := syscall.UTF16ToString(mountPathp[0:])
|
||||
fields["mountPath"] = mountPath
|
||||
span.AddAttributes(trace.StringAttribute("mountPath", mountPath))
|
||||
return mountPath, nil
|
||||
}
|
||||
|
||||
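`GetLayerMountPath` above uses the classic Win32 two-call protocol: the first call passes a nil buffer just to learn the required length, the second call fills a buffer of that size, and the result is decoded with `syscall.UTF16ToString`. A platform-neutral sketch of the same protocol, with a fake `queryMountPath` standing in for the real driver call and a local UTF-16 decoder standing in for the syscall helper.

```go
package main

import (
	"fmt"
	"unicode/utf16"
)

// queryMountPath fakes a Win32-style API: when buf is nil it only reports the
// required length (in UTF-16 code units, including the terminating NUL);
// otherwise it fills buf with the path.
func queryMountPath(length *uintptr, buf []uint16) {
	encoded := utf16.Encode([]rune(`C:\mounted\volume`))
	if buf == nil {
		*length = uintptr(len(encoded) + 1)
		return
	}
	copy(buf, encoded)
	buf[len(encoded)] = 0
}

// utf16ToString trims at the first NUL, like syscall.UTF16ToString on Windows.
func utf16ToString(s []uint16) string {
	for i, v := range s {
		if v == 0 {
			s = s[:i]
			break
		}
	}
	return string(utf16.Decode(s))
}

func main() {
	// First call: discover how large the buffer has to be.
	var mountPathLength uintptr
	queryMountPath(&mountPathLength, nil)

	// Second call: allocate and fill the buffer, then decode it.
	buf := make([]uint16, mountPathLength)
	queryMountPath(&mountPathLength, buf)
	fmt.Println(utf16ToString(buf))
}
```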
22 vendor/github.com/Microsoft/hcsshim/internal/wclayer/getsharedbaseimages.go (generated, vendored)
@@ -1,29 +1,29 @@
|
||||
package wclayer
|
||||
|
||||
import (
|
||||
"context"
|
||||
|
||||
"github.com/Microsoft/hcsshim/internal/hcserror"
|
||||
"github.com/Microsoft/hcsshim/internal/interop"
|
||||
"github.com/sirupsen/logrus"
|
||||
"github.com/Microsoft/hcsshim/internal/oc"
|
||||
"go.opencensus.io/trace"
|
||||
)
|
||||
|
||||
// GetSharedBaseImages will enumerate the images stored in the common central
|
||||
// image store and return descriptive info about those images for the purpose
|
||||
// of registering them with the graphdriver, graph, and tagstore.
|
||||
func GetSharedBaseImages() (imageData string, err error) {
|
||||
func GetSharedBaseImages(ctx context.Context) (_ string, err error) {
|
||||
title := "hcsshim::GetSharedBaseImages"
|
||||
logrus.Debug(title)
|
||||
defer func() {
|
||||
if err != nil {
|
||||
logrus.WithError(err).Error(err)
|
||||
} else {
|
||||
logrus.WithField("imageData", imageData).Debug(title + " - succeeded")
|
||||
}
|
||||
}()
|
||||
ctx, span := trace.StartSpan(ctx, title)
|
||||
defer span.End()
|
||||
defer func() { oc.SetSpanStatus(span, err) }()
|
||||
|
||||
var buffer *uint16
|
||||
err = getBaseImages(&buffer)
|
||||
if err != nil {
|
||||
return "", hcserror.New(err, title+" - failed", "")
|
||||
}
|
||||
return interop.ConvertAndFreeCoTaskMemString(buffer), nil
|
||||
imageData := interop.ConvertAndFreeCoTaskMemString(buffer)
|
||||
span.AddAttributes(trace.StringAttribute("imageData", imageData))
|
||||
return imageData, nil
|
||||
}
|
||||
|
||||
26 vendor/github.com/Microsoft/hcsshim/internal/wclayer/grantvmaccess.go (generated, vendored)
@@ -1,26 +1,22 @@
|
||||
package wclayer
|
||||
|
||||
import (
|
||||
"context"
|
||||
|
||||
"github.com/Microsoft/hcsshim/internal/hcserror"
|
||||
"github.com/sirupsen/logrus"
|
||||
"github.com/Microsoft/hcsshim/internal/oc"
|
||||
"go.opencensus.io/trace"
|
||||
)
|
||||
|
||||
// GrantVmAccess adds access to a file for a given VM
|
||||
func GrantVmAccess(vmid string, filepath string) (err error) {
|
||||
func GrantVmAccess(ctx context.Context, vmid string, filepath string) (err error) {
|
||||
title := "hcsshim::GrantVmAccess"
|
||||
fields := logrus.Fields{
|
||||
"vm-id": vmid,
|
||||
"path": filepath,
|
||||
}
|
||||
logrus.WithFields(fields).Debug(title)
|
||||
defer func() {
|
||||
if err != nil {
|
||||
fields[logrus.ErrorKey] = err
|
||||
logrus.WithFields(fields).Error(err)
|
||||
} else {
|
||||
logrus.WithFields(fields).Debug(title + " - succeeded")
|
||||
}
|
||||
}()
|
||||
ctx, span := trace.StartSpan(ctx, title)
|
||||
defer span.End()
|
||||
defer func() { oc.SetSpanStatus(span, err) }()
|
||||
span.AddAttributes(
|
||||
trace.StringAttribute("vm-id", vmid),
|
||||
trace.StringAttribute("path", filepath))
|
||||
|
||||
err = grantVmAccess(vmid, filepath)
|
||||
if err != nil {
|
||||
|
||||
60 vendor/github.com/Microsoft/hcsshim/internal/wclayer/importlayer.go (generated, vendored)
@@ -1,38 +1,35 @@
|
||||
package wclayer
|
||||
|
||||
import (
|
||||
"context"
|
||||
"io/ioutil"
|
||||
"os"
|
||||
"path/filepath"
|
||||
"strings"
|
||||
|
||||
"github.com/Microsoft/go-winio"
|
||||
"github.com/Microsoft/hcsshim/internal/hcserror"
|
||||
"github.com/Microsoft/hcsshim/internal/oc"
|
||||
"github.com/Microsoft/hcsshim/internal/safefile"
|
||||
"github.com/sirupsen/logrus"
|
||||
"go.opencensus.io/trace"
|
||||
)
|
||||
|
||||
// ImportLayer will take the contents of the folder at importFolderPath and import
|
||||
// that into a layer with the id layerId. Note that in order to correctly populate
|
||||
// the layer and interperet the transport format, all parent layers must already
|
||||
// be present on the system at the paths provided in parentLayerPaths.
|
||||
func ImportLayer(path string, importFolderPath string, parentLayerPaths []string) (err error) {
|
||||
func ImportLayer(ctx context.Context, path string, importFolderPath string, parentLayerPaths []string) (err error) {
|
||||
title := "hcsshim::ImportLayer"
|
||||
fields := logrus.Fields{
|
||||
"path": path,
|
||||
"importFolderPath": importFolderPath,
|
||||
}
|
||||
logrus.WithFields(fields).Debug(title)
|
||||
defer func() {
|
||||
if err != nil {
|
||||
fields[logrus.ErrorKey] = err
|
||||
logrus.WithFields(fields).Error(err)
|
||||
} else {
|
||||
logrus.WithFields(fields).Debug(title + " - succeeded")
|
||||
}
|
||||
}()
|
||||
ctx, span := trace.StartSpan(ctx, title)
|
||||
defer span.End()
|
||||
defer func() { oc.SetSpanStatus(span, err) }()
|
||||
span.AddAttributes(
|
||||
trace.StringAttribute("path", path),
|
||||
trace.StringAttribute("importFolderPath", importFolderPath),
|
||||
trace.StringAttribute("parentLayerPaths", strings.Join(parentLayerPaths, ", ")))
|
||||
|
||||
// Generate layer descriptors
|
||||
layers, err := layerPathsToDescriptors(parentLayerPaths)
|
||||
layers, err := layerPathsToDescriptors(ctx, parentLayerPaths)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
@@ -60,20 +57,26 @@ type LayerWriter interface {
|
||||
}
|
||||
|
||||
type legacyLayerWriterWrapper struct {
|
||||
ctx context.Context
|
||||
s *trace.Span
|
||||
|
||||
*legacyLayerWriter
|
||||
path string
|
||||
parentLayerPaths []string
|
||||
}
|
||||
|
||||
func (r *legacyLayerWriterWrapper) Close() error {
|
||||
func (r *legacyLayerWriterWrapper) Close() (err error) {
|
||||
defer r.s.End()
|
||||
defer func() { oc.SetSpanStatus(r.s, err) }()
|
||||
defer os.RemoveAll(r.root.Name())
|
||||
defer r.legacyLayerWriter.CloseRoots()
|
||||
err := r.legacyLayerWriter.Close()
|
||||
|
||||
err = r.legacyLayerWriter.Close()
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
if err = ImportLayer(r.destRoot.Name(), r.path, r.parentLayerPaths); err != nil {
|
||||
if err = ImportLayer(r.ctx, r.destRoot.Name(), r.path, r.parentLayerPaths); err != nil {
|
||||
return err
|
||||
}
|
||||
for _, name := range r.Tombstones {
|
||||
@@ -96,7 +99,7 @@ func (r *legacyLayerWriterWrapper) Close() error {
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
err = ProcessUtilityVMImage(filepath.Join(r.destRoot.Name(), "UtilityVM"))
|
||||
err = ProcessUtilityVMImage(r.ctx, filepath.Join(r.destRoot.Name(), "UtilityVM"))
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
@@ -107,7 +110,18 @@ func (r *legacyLayerWriterWrapper) Close() error {
|
||||
// NewLayerWriter returns a new layer writer for creating a layer on disk.
|
||||
// The caller must have taken the SeBackupPrivilege and SeRestorePrivilege privileges
|
||||
// to call this and any methods on the resulting LayerWriter.
|
||||
func NewLayerWriter(path string, parentLayerPaths []string) (LayerWriter, error) {
|
||||
func NewLayerWriter(ctx context.Context, path string, parentLayerPaths []string) (_ LayerWriter, err error) {
|
||||
ctx, span := trace.StartSpan(ctx, "hcsshim::NewLayerWriter")
|
||||
defer func() {
|
||||
if err != nil {
|
||||
oc.SetSpanStatus(span, err)
|
||||
span.End()
|
||||
}
|
||||
}()
|
||||
span.AddAttributes(
|
||||
trace.StringAttribute("path", path),
|
||||
trace.StringAttribute("parentLayerPaths", strings.Join(parentLayerPaths, ", ")))
|
||||
|
||||
if len(parentLayerPaths) == 0 {
|
||||
// This is a base layer. It gets imported differently.
|
||||
f, err := safefile.OpenRoot(path)
|
||||
@@ -115,6 +129,8 @@ func NewLayerWriter(path string, parentLayerPaths []string) (LayerWriter, error)
|
||||
return nil, err
|
||||
}
|
||||
return &baseLayerWriter{
|
||||
ctx: ctx,
|
||||
s: span,
|
||||
root: f,
|
||||
}, nil
|
||||
}
|
||||
@@ -128,6 +144,8 @@ func NewLayerWriter(path string, parentLayerPaths []string) (LayerWriter, error)
|
||||
return nil, err
|
||||
}
|
||||
return &legacyLayerWriterWrapper{
|
||||
ctx: ctx,
|
||||
s: span,
|
||||
legacyLayerWriter: w,
|
||||
path: importPath,
|
||||
parentLayerPaths: parentLayerPaths,
|
||||
|
||||
25 vendor/github.com/Microsoft/hcsshim/internal/wclayer/layerexists.go (generated, vendored)
@@ -1,26 +1,21 @@
|
||||
package wclayer
|
||||
|
||||
import (
|
||||
"context"
|
||||
|
||||
"github.com/Microsoft/hcsshim/internal/hcserror"
|
||||
"github.com/sirupsen/logrus"
|
||||
"github.com/Microsoft/hcsshim/internal/oc"
|
||||
"go.opencensus.io/trace"
|
||||
)
|
||||
|
||||
// LayerExists will return true if a layer with the given id exists and is known
|
||||
// to the system.
|
||||
func LayerExists(path string) (_ bool, err error) {
|
||||
func LayerExists(ctx context.Context, path string) (_ bool, err error) {
|
||||
title := "hcsshim::LayerExists"
|
||||
fields := logrus.Fields{
|
||||
"path": path,
|
||||
}
|
||||
logrus.WithFields(fields).Debug(title)
|
||||
defer func() {
|
||||
if err != nil {
|
||||
fields[logrus.ErrorKey] = err
|
||||
logrus.WithFields(fields).Error(err)
|
||||
} else {
|
||||
logrus.WithFields(fields).Debug(title + " - succeeded")
|
||||
}
|
||||
}()
|
||||
ctx, span := trace.StartSpan(ctx, title)
|
||||
defer span.End()
|
||||
defer func() { oc.SetSpanStatus(span, err) }()
|
||||
span.AddAttributes(trace.StringAttribute("path", path))
|
||||
|
||||
// Call the procedure itself.
|
||||
var exists uint32
|
||||
@@ -28,6 +23,6 @@ func LayerExists(path string) (_ bool, err error) {
|
||||
if err != nil {
|
||||
return false, hcserror.New(err, title+" - failed", "")
|
||||
}
|
||||
fields["layer-exists"] = exists != 0
|
||||
span.AddAttributes(trace.BoolAttribute("layer-exists", exists != 0))
|
||||
return exists != 0, nil
|
||||
}
|
||||
|
||||
13 vendor/github.com/Microsoft/hcsshim/internal/wclayer/layerid.go (generated, vendored)
@@ -1,13 +1,22 @@
|
||||
package wclayer
|
||||
|
||||
import (
|
||||
"context"
|
||||
"path/filepath"
|
||||
|
||||
"github.com/Microsoft/go-winio/pkg/guid"
|
||||
"github.com/Microsoft/hcsshim/internal/oc"
|
||||
"go.opencensus.io/trace"
|
||||
)
|
||||
|
||||
// LayerID returns the layer ID of a layer on disk.
|
||||
func LayerID(path string) (guid.GUID, error) {
|
||||
func LayerID(ctx context.Context, path string) (_ guid.GUID, err error) {
|
||||
title := "hcsshim::LayerID"
|
||||
ctx, span := trace.StartSpan(ctx, title)
|
||||
defer span.End()
|
||||
defer func() { oc.SetSpanStatus(span, err) }()
|
||||
span.AddAttributes(trace.StringAttribute("path", path))
|
||||
|
||||
_, file := filepath.Split(path)
|
||||
return NameToGuid(file)
|
||||
return NameToGuid(ctx, file)
|
||||
}
|
||||
|
||||
5 vendor/github.com/Microsoft/hcsshim/internal/wclayer/layerutils.go (generated, vendored)
@@ -4,6 +4,7 @@ package wclayer
|
||||
// functionality.
|
||||
|
||||
import (
|
||||
"context"
|
||||
"syscall"
|
||||
|
||||
"github.com/Microsoft/go-winio/pkg/guid"
|
||||
@@ -68,12 +69,12 @@ type WC_LAYER_DESCRIPTOR struct {
|
||||
Pathp *uint16
|
||||
}
|
||||
|
||||
func layerPathsToDescriptors(parentLayerPaths []string) ([]WC_LAYER_DESCRIPTOR, error) {
|
||||
func layerPathsToDescriptors(ctx context.Context, parentLayerPaths []string) ([]WC_LAYER_DESCRIPTOR, error) {
|
||||
// Array of descriptors that gets constructed.
|
||||
var layers []WC_LAYER_DESCRIPTOR
|
||||
|
||||
for i := 0; i < len(parentLayerPaths); i++ {
|
||||
g, err := LayerID(parentLayerPaths[i])
|
||||
g, err := LayerID(ctx, parentLayerPaths[i])
|
||||
if err != nil {
|
||||
logrus.WithError(err).Debug("Failed to convert name to guid")
|
||||
return nil, err
|
||||
|
||||
31 vendor/github.com/Microsoft/hcsshim/internal/wclayer/nametoguid.go (generated, vendored)
@@ -1,34 +1,29 @@
|
||||
package wclayer
|
||||
|
||||
import (
|
||||
"context"
|
||||
|
||||
"github.com/Microsoft/go-winio/pkg/guid"
|
||||
"github.com/Microsoft/hcsshim/internal/hcserror"
|
||||
"github.com/sirupsen/logrus"
|
||||
"github.com/Microsoft/hcsshim/internal/oc"
|
||||
"go.opencensus.io/trace"
|
||||
)
|
||||
|
||||
// NameToGuid converts the given string into a GUID using the algorithm in the
|
||||
// Host Compute Service, ensuring GUIDs generated with the same string are common
|
||||
// across all clients.
|
||||
func NameToGuid(name string) (id guid.GUID, err error) {
|
||||
func NameToGuid(ctx context.Context, name string) (_ guid.GUID, err error) {
|
||||
title := "hcsshim::NameToGuid"
|
||||
fields := logrus.Fields{
|
||||
"name": name,
|
||||
}
|
||||
logrus.WithFields(fields).Debug(title)
|
||||
defer func() {
|
||||
if err != nil {
|
||||
fields[logrus.ErrorKey] = err
|
||||
logrus.WithFields(fields).Error(err)
|
||||
} else {
|
||||
logrus.WithFields(fields).Debug(title + " - succeeded")
|
||||
}
|
||||
}()
|
||||
ctx, span := trace.StartSpan(ctx, title)
|
||||
defer span.End()
|
||||
defer func() { oc.SetSpanStatus(span, err) }()
|
||||
span.AddAttributes(trace.StringAttribute("name", name))
|
||||
|
||||
var id guid.GUID
|
||||
err = nameToGuid(name, &id)
|
||||
if err != nil {
|
||||
err = hcserror.New(err, title+" - failed", "")
|
||||
return
|
||||
return guid.GUID{}, hcserror.New(err, title+" - failed", "")
|
||||
}
|
||||
fields["guid"] = id.String()
|
||||
return
|
||||
span.AddAttributes(trace.StringAttribute("guid", id.String()))
|
||||
return id, nil
|
||||
}
|
||||
|
||||
27 vendor/github.com/Microsoft/hcsshim/internal/wclayer/preparelayer.go (generated, vendored)
@@ -1,10 +1,13 @@
|
||||
package wclayer
|
||||
|
||||
import (
|
||||
"context"
|
||||
"strings"
|
||||
"sync"
|
||||
|
||||
"github.com/Microsoft/hcsshim/internal/hcserror"
|
||||
"github.com/sirupsen/logrus"
|
||||
"github.com/Microsoft/hcsshim/internal/oc"
|
||||
"go.opencensus.io/trace"
|
||||
)
|
||||
|
||||
var prepareLayerLock sync.Mutex
|
||||
@@ -14,23 +17,17 @@ var prepareLayerLock sync.Mutex
|
||||
// parent layers, and is necessary in order to view or interact with the layer
|
||||
// as an actual filesystem (reading and writing files, creating directories, etc).
|
||||
// Disabling the filter must be done via UnprepareLayer.
|
||||
func PrepareLayer(path string, parentLayerPaths []string) (err error) {
|
||||
func PrepareLayer(ctx context.Context, path string, parentLayerPaths []string) (err error) {
|
||||
title := "hcsshim::PrepareLayer"
|
||||
fields := logrus.Fields{
|
||||
"path": path,
|
||||
}
|
||||
logrus.WithFields(fields).Debug(title)
|
||||
defer func() {
|
||||
if err != nil {
|
||||
fields[logrus.ErrorKey] = err
|
||||
logrus.WithFields(fields).Error(err)
|
||||
} else {
|
||||
logrus.WithFields(fields).Debug(title + " - succeeded")
|
||||
}
|
||||
}()
|
||||
ctx, span := trace.StartSpan(ctx, title)
|
||||
defer span.End()
|
||||
defer func() { oc.SetSpanStatus(span, err) }()
|
||||
span.AddAttributes(
|
||||
trace.StringAttribute("path", path),
|
||||
trace.StringAttribute("parentLayerPaths", strings.Join(parentLayerPaths, ", ")))
|
||||
|
||||
// Generate layer descriptors
|
||||
layers, err := layerPathsToDescriptors(parentLayerPaths)
|
||||
layers, err := layerPathsToDescriptors(ctx, parentLayerPaths)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
32 vendor/github.com/Microsoft/hcsshim/internal/wclayer/processimage.go (generated, vendored)
@@ -1,23 +1,41 @@
|
||||
package wclayer
|
||||
|
||||
import "os"
|
||||
import (
|
||||
"context"
|
||||
"os"
|
||||
|
||||
"github.com/Microsoft/hcsshim/internal/oc"
|
||||
"go.opencensus.io/trace"
|
||||
)
|
||||
|
||||
// ProcessBaseLayer post-processes a base layer that has had its files extracted.
|
||||
// The files should have been extracted to <path>\Files.
|
||||
func ProcessBaseLayer(path string) error {
|
||||
err := processBaseImage(path)
|
||||
func ProcessBaseLayer(ctx context.Context, path string) (err error) {
|
||||
title := "hcsshim::ProcessBaseLayer"
|
||||
ctx, span := trace.StartSpan(ctx, title)
|
||||
defer span.End()
|
||||
defer func() { oc.SetSpanStatus(span, err) }()
|
||||
span.AddAttributes(trace.StringAttribute("path", path))
|
||||
|
||||
err = processBaseImage(path)
|
||||
if err != nil {
|
||||
return &os.PathError{Op: "ProcessBaseLayer", Path: path, Err: err}
|
||||
return &os.PathError{Op: title, Path: path, Err: err}
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
// ProcessUtilityVMImage post-processes a utility VM image that has had its files extracted.
|
||||
// The files should have been extracted to <path>\Files.
|
||||
func ProcessUtilityVMImage(path string) error {
|
||||
err := processUtilityImage(path)
|
||||
func ProcessUtilityVMImage(ctx context.Context, path string) (err error) {
|
||||
title := "hcsshim::ProcessUtilityVMImage"
|
||||
ctx, span := trace.StartSpan(ctx, title)
|
||||
defer span.End()
|
||||
defer func() { oc.SetSpanStatus(span, err) }()
|
||||
span.AddAttributes(trace.StringAttribute("path", path))
|
||||
|
||||
err = processUtilityImage(path)
|
||||
if err != nil {
|
||||
return &os.PathError{Op: "ProcessUtilityVMImage", Path: path, Err: err}
|
||||
return &os.PathError{Op: title, Path: path, Err: err}
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
23 vendor/github.com/Microsoft/hcsshim/internal/wclayer/unpreparelayer.go (generated, vendored)
@@ -1,26 +1,21 @@
|
||||
package wclayer
|
||||
|
||||
import (
|
||||
"context"
|
||||
|
||||
"github.com/Microsoft/hcsshim/internal/hcserror"
|
||||
"github.com/sirupsen/logrus"
|
||||
"github.com/Microsoft/hcsshim/internal/oc"
|
||||
"go.opencensus.io/trace"
|
||||
)
|
||||
|
||||
// UnprepareLayer disables the filesystem filter for the read-write layer with
|
||||
// the given id.
|
||||
func UnprepareLayer(path string) (err error) {
|
||||
func UnprepareLayer(ctx context.Context, path string) (err error) {
|
||||
title := "hcsshim::UnprepareLayer"
|
||||
fields := logrus.Fields{
|
||||
"path": path,
|
||||
}
|
||||
logrus.WithFields(fields).Debug(title)
|
||||
defer func() {
|
||||
if err != nil {
|
||||
fields[logrus.ErrorKey] = err
|
||||
logrus.WithFields(fields).Error(err)
|
||||
} else {
|
||||
logrus.WithFields(fields).Debug(title + " - succeeded")
|
||||
}
|
||||
}()
|
||||
ctx, span := trace.StartSpan(ctx, title)
|
||||
defer span.End()
|
||||
defer func() { oc.SetSpanStatus(span, err) }()
|
||||
span.AddAttributes(trace.StringAttribute("path", path))
|
||||
|
||||
err = unprepareLayer(&stdDriverInfo, path)
|
||||
if err != nil {
|
||||
|
||||
41 vendor/github.com/Microsoft/hcsshim/layer.go (generated, vendored)
@@ -1,6 +1,7 @@
|
||||
package hcsshim
|
||||
|
||||
import (
|
||||
"context"
|
||||
"crypto/sha1"
|
||||
"path/filepath"
|
||||
|
||||
@@ -13,59 +14,59 @@ func layerPath(info *DriverInfo, id string) string {
|
||||
}
|
||||
|
||||
func ActivateLayer(info DriverInfo, id string) error {
|
||||
return wclayer.ActivateLayer(layerPath(&info, id))
|
||||
return wclayer.ActivateLayer(context.Background(), layerPath(&info, id))
|
||||
}
|
||||
func CreateLayer(info DriverInfo, id, parent string) error {
|
||||
return wclayer.CreateLayer(layerPath(&info, id), parent)
|
||||
return wclayer.CreateLayer(context.Background(), layerPath(&info, id), parent)
|
||||
}
|
||||
|
||||
// New clients should use CreateScratchLayer instead. Kept in to preserve API compatibility.
|
||||
func CreateSandboxLayer(info DriverInfo, layerId, parentId string, parentLayerPaths []string) error {
|
||||
return wclayer.CreateScratchLayer(layerPath(&info, layerId), parentLayerPaths)
|
||||
return wclayer.CreateScratchLayer(context.Background(), layerPath(&info, layerId), parentLayerPaths)
|
||||
}
|
||||
func CreateScratchLayer(info DriverInfo, layerId, parentId string, parentLayerPaths []string) error {
|
||||
return wclayer.CreateScratchLayer(layerPath(&info, layerId), parentLayerPaths)
|
||||
return wclayer.CreateScratchLayer(context.Background(), layerPath(&info, layerId), parentLayerPaths)
|
||||
}
|
||||
func DeactivateLayer(info DriverInfo, id string) error {
|
||||
return wclayer.DeactivateLayer(layerPath(&info, id))
|
||||
return wclayer.DeactivateLayer(context.Background(), layerPath(&info, id))
|
||||
}
|
||||
func DestroyLayer(info DriverInfo, id string) error {
|
||||
return wclayer.DestroyLayer(layerPath(&info, id))
|
||||
return wclayer.DestroyLayer(context.Background(), layerPath(&info, id))
|
||||
}
|
||||
|
||||
// New clients should use ExpandScratchSize instead. Kept in to preserve API compatibility.
|
||||
func ExpandSandboxSize(info DriverInfo, layerId string, size uint64) error {
|
||||
return wclayer.ExpandScratchSize(layerPath(&info, layerId), size)
|
||||
return wclayer.ExpandScratchSize(context.Background(), layerPath(&info, layerId), size)
|
||||
}
|
||||
func ExpandScratchSize(info DriverInfo, layerId string, size uint64) error {
|
||||
return wclayer.ExpandScratchSize(layerPath(&info, layerId), size)
|
||||
return wclayer.ExpandScratchSize(context.Background(), layerPath(&info, layerId), size)
|
||||
}
|
||||
func ExportLayer(info DriverInfo, layerId string, exportFolderPath string, parentLayerPaths []string) error {
|
||||
return wclayer.ExportLayer(layerPath(&info, layerId), exportFolderPath, parentLayerPaths)
|
||||
return wclayer.ExportLayer(context.Background(), layerPath(&info, layerId), exportFolderPath, parentLayerPaths)
|
||||
}
|
||||
func GetLayerMountPath(info DriverInfo, id string) (string, error) {
|
||||
return wclayer.GetLayerMountPath(layerPath(&info, id))
|
||||
return wclayer.GetLayerMountPath(context.Background(), layerPath(&info, id))
|
||||
}
|
||||
func GetSharedBaseImages() (imageData string, err error) {
|
||||
return wclayer.GetSharedBaseImages()
|
||||
return wclayer.GetSharedBaseImages(context.Background())
|
||||
}
|
||||
func ImportLayer(info DriverInfo, layerID string, importFolderPath string, parentLayerPaths []string) error {
|
||||
return wclayer.ImportLayer(layerPath(&info, layerID), importFolderPath, parentLayerPaths)
|
||||
return wclayer.ImportLayer(context.Background(), layerPath(&info, layerID), importFolderPath, parentLayerPaths)
|
||||
}
|
||||
func LayerExists(info DriverInfo, id string) (bool, error) {
|
||||
return wclayer.LayerExists(layerPath(&info, id))
|
||||
return wclayer.LayerExists(context.Background(), layerPath(&info, id))
|
||||
}
|
||||
func PrepareLayer(info DriverInfo, layerId string, parentLayerPaths []string) error {
|
||||
return wclayer.PrepareLayer(layerPath(&info, layerId), parentLayerPaths)
|
||||
return wclayer.PrepareLayer(context.Background(), layerPath(&info, layerId), parentLayerPaths)
|
||||
}
|
||||
func ProcessBaseLayer(path string) error {
|
||||
return wclayer.ProcessBaseLayer(path)
|
||||
return wclayer.ProcessBaseLayer(context.Background(), path)
|
||||
}
|
||||
func ProcessUtilityVMImage(path string) error {
|
||||
return wclayer.ProcessUtilityVMImage(path)
|
||||
return wclayer.ProcessUtilityVMImage(context.Background(), path)
|
||||
}
|
||||
func UnprepareLayer(info DriverInfo, layerId string) error {
|
||||
return wclayer.UnprepareLayer(layerPath(&info, layerId))
|
||||
return wclayer.UnprepareLayer(context.Background(), layerPath(&info, layerId))
|
||||
}
|
||||
|
||||
type DriverInfo struct {
|
||||
@@ -76,7 +77,7 @@ type DriverInfo struct {
|
||||
type GUID [16]byte
|
||||
|
||||
func NameToGuid(name string) (id GUID, err error) {
|
||||
g, err := wclayer.NameToGuid(name)
|
||||
g, err := wclayer.NameToGuid(context.Background(), name)
|
||||
return g.ToWindowsArray(), err
|
||||
}
|
||||
|
||||
@@ -94,13 +95,13 @@ func (g *GUID) ToString() string {
|
||||
type LayerReader = wclayer.LayerReader
|
||||
|
||||
func NewLayerReader(info DriverInfo, layerID string, parentLayerPaths []string) (LayerReader, error) {
|
||||
return wclayer.NewLayerReader(layerPath(&info, layerID), parentLayerPaths)
|
||||
return wclayer.NewLayerReader(context.Background(), layerPath(&info, layerID), parentLayerPaths)
|
||||
}
|
||||
|
||||
type LayerWriter = wclayer.LayerWriter
|
||||
|
||||
func NewLayerWriter(info DriverInfo, layerID string, parentLayerPaths []string) (LayerWriter, error) {
|
||||
return wclayer.NewLayerWriter(layerPath(&info, layerID), parentLayerPaths)
|
||||
return wclayer.NewLayerWriter(context.Background(), layerPath(&info, layerID), parentLayerPaths)
|
||||
}
|
||||
|
||||
type WC_LAYER_DESCRIPTOR = wclayer.WC_LAYER_DESCRIPTOR
|
||||
|
||||
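The top-level layer.go wrappers above keep their old, context-free signatures and forward to the new context-aware wclayer functions with `context.Background()`, so existing callers compile unchanged but get no cancellation or tracing. Below is a hedged sketch of that compatibility-shim shape using local stand-in functions; wclayer itself is an internal package, so outside code cannot import it directly.

```go
package main

import (
	"context"
	"fmt"
	"time"
)

// activateLayerCtx stands in for the context-aware wclayer.ActivateLayer; the
// real function additionally starts a trace span and calls into the driver.
func activateLayerCtx(ctx context.Context, path string) error {
	select {
	case <-ctx.Done():
		return ctx.Err()
	default:
		fmt.Println("activating", path)
		return nil
	}
}

// ActivateLayer mirrors the compatibility shim in layer.go: the legacy,
// context-free signature is preserved by passing context.Background() through.
func ActivateLayer(path string) error {
	return activateLayerCtx(context.Background(), path)
}

func main() {
	// Legacy callers keep working unchanged.
	_ = ActivateLayer(`C:\layers\example`)

	// New-style callers can supply their own deadline instead.
	ctx, cancel := context.WithTimeout(context.Background(), time.Second)
	defer cancel()
	_ = activateLayerCtx(ctx, `C:\layers\example`)
}
```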
201 vendor/github.com/containers/common/LICENSE (generated, vendored, new file)
@@ -0,0 +1,201 @@
|
||||
Apache License
|
||||
Version 2.0, January 2004
|
||||
http://www.apache.org/licenses/
|
||||
|
||||
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
|
||||
|
||||
1. Definitions.
|
||||
|
||||
"License" shall mean the terms and conditions for use, reproduction,
|
||||
and distribution as defined by Sections 1 through 9 of this document.
|
||||
|
||||
"Licensor" shall mean the copyright owner or entity authorized by
|
||||
the copyright owner that is granting the License.
|
||||
|
||||
"Legal Entity" shall mean the union of the acting entity and all
|
||||
other entities that control, are controlled by, or are under common
|
||||
control with that entity. For the purposes of this definition,
|
||||
"control" means (i) the power, direct or indirect, to cause the
|
||||
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.

"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.

"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.

"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.

"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).

"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.

"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."

"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.

2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.

3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.

4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:

(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and

(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and

(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and

(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.

You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.

5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.

6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.

7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.

8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.

9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.

END OF TERMS AND CONDITIONS

APPENDIX: How to apply the Apache License to your work.

To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "{}"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.

Copyright {yyyy} {name of copyright owner}

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
266 vendor/github.com/containers/common/pkg/auth/auth.go generated vendored Normal file
@@ -0,0 +1,266 @@
package auth

import (
	"bufio"
	"context"
	"fmt"
	"os"
	"strings"

	"github.com/containers/image/v5/docker"
	"github.com/containers/image/v5/pkg/docker/config"
	"github.com/containers/image/v5/pkg/sysregistriesv2"
	"github.com/containers/image/v5/types"
	"github.com/pkg/errors"
	"github.com/sirupsen/logrus"
	"golang.org/x/crypto/ssh/terminal"
)

// GetDefaultAuthFile returns env value REGISTRY_AUTH_FILE as default --authfile path
// used in multiple --authfile flag definitions
func GetDefaultAuthFile() string {
	return os.Getenv("REGISTRY_AUTH_FILE")
}

// CheckAuthFile validates filepath given by --authfile
// used by command has --authfile flag
func CheckAuthFile(authfile string) error {
	if authfile == "" {
		return nil
	}
	if _, err := os.Stat(authfile); err != nil {
		return errors.Wrapf(err, "error checking authfile path %s", authfile)
	}
	return nil
}

// systemContextWithOptions returns a version of sys
// updated with authFile and certDir values (if they are not "").
// NOTE: this is a shallow copy that can be used and updated, but may share
// data with the original parameter.
func systemContextWithOptions(sys *types.SystemContext, authFile, certDir string) *types.SystemContext {
	if sys != nil {
		copy := *sys
		sys = &copy
	} else {
		sys = &types.SystemContext{}
	}

	if authFile != "" {
		sys.AuthFilePath = authFile
	}
	if certDir != "" {
		sys.DockerCertPath = certDir
	}
	return sys
}

// Login implements a “log in” command with the provided opts and args
// reading the password from opts.Stdin or the options in opts.
func Login(ctx context.Context, systemContext *types.SystemContext, opts *LoginOptions, args []string) error {
	systemContext = systemContextWithOptions(systemContext, opts.AuthFile, opts.CertDir)

	var (
		server string
		err    error
	)
	if len(args) > 1 {
		return errors.Errorf("login accepts only one registry to login to")
	}
	if len(args) == 0 {
		if !opts.AcceptUnspecifiedRegistry {
			return errors.Errorf("please provide a registry to login to")
		}
		if server, err = defaultRegistryWhenUnspecified(systemContext); err != nil {
			return err
		}
		logrus.Debugf("registry not specified, default to the first registry %q from registries.conf", server)
	} else {
		server = getRegistryName(args[0])
	}
	authConfig, err := config.GetCredentials(systemContext, server)
	if err != nil {
		return errors.Wrapf(err, "error reading auth file")
	}
	if opts.GetLoginSet {
		if authConfig.Username == "" {
			return errors.Errorf("not logged into %s", server)
		}
		fmt.Fprintf(opts.Stdout, "%s\n", authConfig.Username)
		return nil
	}
	if authConfig.IdentityToken != "" {
		return errors.Errorf("currently logged in, auth file contains an Identity token")
	}

	password := opts.Password
	if opts.StdinPassword {
		var stdinPasswordStrBuilder strings.Builder
		if opts.Password != "" {
			return errors.Errorf("Can't specify both --password-stdin and --password")
		}
		if opts.Username == "" {
			return errors.Errorf("Must provide --username with --password-stdin")
		}
		scanner := bufio.NewScanner(opts.Stdin)
		for scanner.Scan() {
			fmt.Fprint(&stdinPasswordStrBuilder, scanner.Text())
		}
		password = stdinPasswordStrBuilder.String()
	}

	// If no username and no password is specified, try to use existing ones.
	if opts.Username == "" && password == "" && authConfig.Username != "" && authConfig.Password != "" {
		fmt.Println("Authenticating with existing credentials...")
		if err := docker.CheckAuth(ctx, systemContext, authConfig.Username, authConfig.Password, server); err == nil {
			fmt.Fprintln(opts.Stdout, "Existing credentials are valid. Already logged in to", server)
			return nil
		}
		fmt.Fprintln(opts.Stdout, "Existing credentials are invalid, please enter valid username and password")
	}

	username, password, err := getUserAndPass(opts, password, authConfig.Username)
	if err != nil {
		return errors.Wrapf(err, "error getting username and password")
	}

	if err = docker.CheckAuth(ctx, systemContext, username, password, server); err == nil {
		// Write the new credentials to the authfile
		if err = config.SetAuthentication(systemContext, server, username, password); err != nil {
			return err
		}
	}
	if err == nil {
		fmt.Fprintln(opts.Stdout, "Login Succeeded!")
		return nil
	}
	if unauthorized, ok := err.(docker.ErrUnauthorizedForCredentials); ok {
		logrus.Debugf("error logging into %q: %v", server, unauthorized)
		return errors.Errorf("error logging into %q: invalid username/password", server)
	}
	return errors.Wrapf(err, "error authenticating creds for %q", server)
}

// getRegistryName scrubs and parses the input to get the server name
func getRegistryName(server string) string {
	// removes 'http://' or 'https://' from the front of the
	// server/registry string if either is there. This will be mostly used
	// for user input from 'Buildah login' and 'Buildah logout'.
	server = strings.TrimPrefix(strings.TrimPrefix(server, "https://"), "http://")
	// gets the registry from the input. If the input is of the form
	// quay.io/myuser/myimage, it will parse it and just return quay.io
	split := strings.Split(server, "/")
	if len(split) > 1 {
		return split[0]
	}
	return split[0]
}

// getUserAndPass gets the username and password from STDIN if not given
// using the -u and -p flags. If the username prompt is left empty, the
// displayed userFromAuthFile will be used instead.
func getUserAndPass(opts *LoginOptions, password, userFromAuthFile string) (string, string, error) {
	var err error
	reader := bufio.NewReader(opts.Stdin)
	username := opts.Username
	if username == "" {
		if userFromAuthFile != "" {
			fmt.Fprintf(opts.Stdout, "Username (%s): ", userFromAuthFile)
		} else {
			fmt.Fprint(opts.Stdout, "Username: ")
		}
		username, err = reader.ReadString('\n')
		if err != nil {
			return "", "", errors.Wrapf(err, "error reading username")
		}
		// If the user just hit enter, use the displayed user from the
		// the authentication file. This allows to do a lazy
		// `$ buildah login -p $NEW_PASSWORD` without specifying the
		// user.
		if strings.TrimSpace(username) == "" {
			username = userFromAuthFile
		}
	}
	if password == "" {
		fmt.Fprint(opts.Stdout, "Password: ")
		pass, err := terminal.ReadPassword(0)
		if err != nil {
			return "", "", errors.Wrapf(err, "error reading password")
		}
		password = string(pass)
		fmt.Fprintln(opts.Stdout)
	}
	return strings.TrimSpace(username), password, err
}

// Logout implements a “log out” command with the provided opts and args
func Logout(systemContext *types.SystemContext, opts *LogoutOptions, args []string) error {
	if err := CheckAuthFile(opts.AuthFile); err != nil {
		return err
	}
	systemContext = systemContextWithOptions(systemContext, opts.AuthFile, "")

	var (
		server string
		err    error
	)
	if len(args) > 1 {
		return errors.Errorf("logout accepts only one registry to logout from")
	}
	if len(args) == 0 && !opts.All {
		if !opts.AcceptUnspecifiedRegistry {
			return errors.Errorf("please provide a registry to logout from")
		}
		if server, err = defaultRegistryWhenUnspecified(systemContext); err != nil {
			return err
		}
		logrus.Debugf("registry not specified, default to the first registry %q from registries.conf", server)
	}
	if len(args) != 0 {
		if opts.All {
			return errors.Errorf("--all takes no arguments")
		}
		server = getRegistryName(args[0])
	}

	if opts.All {
		if err := config.RemoveAllAuthentication(systemContext); err != nil {
			return err
		}
		fmt.Fprintln(opts.Stdout, "Removed login credentials for all registries")
		return nil
	}

	err = config.RemoveAuthentication(systemContext, server)
	switch errors.Cause(err) {
	case nil:
		fmt.Fprintf(opts.Stdout, "Removed login credentials for %s\n", server)
		return nil
	case config.ErrNotLoggedIn:
		authConfig, err := config.GetCredentials(systemContext, server)
		if err != nil {
			return errors.Wrapf(err, "error reading auth file")
		}
		authInvalid := docker.CheckAuth(context.Background(), systemContext, authConfig.Username, authConfig.Password, server)
		if authConfig.Username != "" && authConfig.Password != "" && authInvalid == nil {
			fmt.Printf("Not logged into %s with current tool. Existing credentials were established via docker login. Please use docker logout instead.\n", server)
			return nil
		}
		return errors.Errorf("Not logged into %s\n", server)
	default:
		return errors.Wrapf(err, "error logging out of %q", server)
	}
}

// defaultRegistryWhenUnspecified returns first registry from search list of registry.conf
// used by login/logout when registry argument is not specified
func defaultRegistryWhenUnspecified(systemContext *types.SystemContext) (string, error) {
	registriesFromFile, err := sysregistriesv2.UnqualifiedSearchRegistries(systemContext)
	if err != nil {
		return "", errors.Wrapf(err, "error getting registry from registry.conf, please specify a registry")
	}
	if len(registriesFromFile) == 0 {
		return "", errors.Errorf("no registries found in registries.conf, a registry must be provided")
	}
	return registriesFromFile[0], nil
}
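The vendored helpers above are what the containers tools' `login`/`logout` subcommands build on. As a rough, hedged sketch of the call pattern only (the registry name and option values are illustrative and not taken from these commits; `LoginOptions` itself is defined in `cli.go`, shown next):

```go
package main

import (
	"context"
	"os"

	"github.com/containers/common/pkg/auth"
	"github.com/containers/image/v5/types"
)

func main() {
	opts := &auth.LoginOptions{
		AuthFile:                  auth.GetDefaultAuthFile(), // honors REGISTRY_AUTH_FILE
		Stdin:                     os.Stdin,
		Stdout:                    os.Stdout,
		AcceptUnspecifiedRegistry: true, // fall back to the first registries.conf search entry
	}
	// Prompts for any missing credentials, verifies them with docker.CheckAuth,
	// and on success persists them into the auth file.
	if err := auth.Login(context.Background(), &types.SystemContext{}, opts, []string{"quay.io"}); err != nil {
		os.Exit(1)
	}
}
```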
58 vendor/github.com/containers/common/pkg/auth/cli.go generated vendored Normal file
@@ -0,0 +1,58 @@
package auth

import (
	"io"

	"github.com/spf13/pflag"
)

// LoginOptions represents common flags in login
// In addition, the caller should probably provide a --tls-verify flag (that affects the provided
// *types.SystemContest)
type LoginOptions struct {
	// CLI flags managed by the FlagSet returned by GetLoginFlags
	// Callers that use GetLoginFlags should not need to touch these values at all; callers that use
	// other CLI frameworks should set them based on user input.
	AuthFile      string
	CertDir       string
	Password      string
	Username      string
	StdinPassword bool
	GetLoginSet   bool
	// Options caller can set
	Stdin                     io.Reader // set to os.Stdin
	Stdout                    io.Writer // set to os.Stdout
	AcceptUnspecifiedRegistry bool      // set to true if allows login with unspecified registry
}

// LogoutOptions represents the results for flags in logout
type LogoutOptions struct {
	// CLI flags managed by the FlagSet returned by GetLogoutFlags
	// Callers that use GetLogoutFlags should not need to touch these values at all; callers that use
	// other CLI frameworks should set them based on user input.
	AuthFile string
	All      bool
	// Options caller can set
	Stdout                    io.Writer // set to os.Stdout
	AcceptUnspecifiedRegistry bool      // set to true if allows logout with unspecified registry
}

// GetLoginFlags defines and returns login flags for containers tools
func GetLoginFlags(flags *LoginOptions) *pflag.FlagSet {
	fs := pflag.FlagSet{}
	fs.StringVar(&flags.AuthFile, "authfile", GetDefaultAuthFile(), "path of the authentication file. Use REGISTRY_AUTH_FILE environment variable to override")
	fs.StringVar(&flags.CertDir, "cert-dir", "", "use certificates at the specified path to access the registry")
	fs.StringVarP(&flags.Password, "password", "p", "", "Password for registry")
	fs.StringVarP(&flags.Username, "username", "u", "", "Username for registry")
	fs.BoolVar(&flags.StdinPassword, "password-stdin", false, "Take the password from stdin")
	fs.BoolVar(&flags.GetLoginSet, "get-login", false, "Return the current login user for the registry")
	return &fs
}

// GetLogoutFlags defines and returns logout flags for containers tools
func GetLogoutFlags(flags *LogoutOptions) *pflag.FlagSet {
	fs := pflag.FlagSet{}
	fs.StringVar(&flags.AuthFile, "authfile", GetDefaultAuthFile(), "path of the authentication file. Use REGISTRY_AUTH_FILE environment variable to override")
	fs.BoolVarP(&flags.All, "all", "a", false, "Remove the cached credentials for all registries in the auth file")
	return &fs
}
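How a containers tool wires these flag helpers into its own command is not part of this diff; a minimal pflag-only sketch of one possible wiring (command name and error handling are illustrative):

```go
package main

import (
	"context"
	"fmt"
	"os"

	"github.com/containers/common/pkg/auth"
	"github.com/containers/image/v5/types"
	"github.com/spf13/pflag"
)

func main() {
	opts := auth.LoginOptions{Stdin: os.Stdin, Stdout: os.Stdout}
	fs := pflag.NewFlagSet("login", pflag.ExitOnError)
	// Adds --authfile, --cert-dir, -u/--username, -p/--password, --password-stdin, --get-login.
	fs.AddFlagSet(auth.GetLoginFlags(&opts))
	_ = fs.Parse(os.Args[1:]) // ExitOnError makes Parse exit on bad flags
	if err := auth.Login(context.Background(), &types.SystemContext{}, &opts, fs.Args()); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```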
25 vendor/github.com/containers/image/v5/copy/copy.go generated vendored
@@ -659,7 +659,7 @@ func (c *copier) copyOneImage(ctx context.Context, policyContext *signature.Poli
// With !ic.canModifyManifest, that would just be a string of repeated failures for the same reason,
// so let’s bail out early and with a better error message.
if !ic.canModifyManifest {
return nil, "", "", errors.Wrap(err, "Writing manifest failed (and converting it is not possible)")
return nil, "", "", errors.Wrap(err, "Writing manifest failed (and converting it is not possible, image is signed or the destination specifies a digest)")
}

// errs is a list of errors when trying various manifest types. Also serves as an "upload succeeded" flag when set to nil.
@@ -757,7 +757,7 @@ func (ic *imageCopier) updateEmbeddedDockerReference() error {
}

if !ic.canModifyManifest {
return errors.Errorf("Copying a schema1 image with an embedded Docker reference to %s (Docker reference %s) would invalidate existing signatures. Explicitly enable signature removal to proceed anyway",
return errors.Errorf("Copying a schema1 image with an embedded Docker reference to %s (Docker reference %s) would change the manifest, which is not possible (image is signed or the destination specifies a digest)",
transports.ImageName(ic.c.dest.Reference()), destRef.String())
}
ic.manifestUpdates.EmbeddedDockerReference = destRef
@@ -784,7 +784,7 @@ func (ic *imageCopier) copyLayers(ctx context.Context) error {
// If we only need to check authorization, no updates required.
if updatedSrcInfos != nil && !reflect.DeepEqual(srcInfos, updatedSrcInfos) {
if !ic.canModifyManifest {
return errors.Errorf("Internal error: copyLayers() needs to use an updated manifest but that was known to be forbidden")
return errors.Errorf("Copying this image requires changing layer representation, which is not possible (image is signed or the destination specifies a digest)")
}
srcInfos = updatedSrcInfos
srcInfosUpdated = true
@@ -798,7 +798,6 @@ func (ic *imageCopier) copyLayers(ctx context.Context) error {

// copyGroup is used to determine if all layers are copied
copyGroup := sync.WaitGroup{}
copyGroup.Add(numLayers)

// copySemaphore is used to limit the number of parallel downloads to
// avoid malicious images causing troubles and to be nice to servers.
@@ -850,18 +849,22 @@ func (ic *imageCopier) copyLayers(ctx context.Context) error {

if err := func() error { // A scope for defer
progressPool, progressCleanup := ic.c.newProgressPool(ctx)
defer progressCleanup()
defer func() {
// Wait for all layers to be copied. progressCleanup() must not be called while any of the copyLayerHelpers interact with the progressPool.
copyGroup.Wait()
progressCleanup()
}()

for i, srcLayer := range srcInfos {
err = copySemaphore.Acquire(ctx, 1)
if err != nil {
return errors.Wrapf(err, "Can't acquire semaphore")
}
copyGroup.Add(1)
go copyLayerHelper(i, srcLayer, encLayerBitmap[i], progressPool)
}

// Wait for all layers to be copied
copyGroup.Wait()
// A call to copyGroup.Wait() is done at this point by the defer above.
return nil
}(); err != nil {
return err
@@ -1057,6 +1060,14 @@ func (ic *imageCopier) copyLayer(ctx context.Context, srcInfo types.BlobInfo, to
logrus.Debugf("Skipping blob %s (already present):", srcInfo.Digest)
bar := ic.c.createProgressBar(pool, srcInfo, "blob", "skipped: already exists")
bar.SetTotal(0, true)

// Throw an event that the layer has been skipped
if ic.c.progress != nil && ic.c.progressInterval > 0 {
ic.c.progress <- types.ProgressProperties{
Event: types.ProgressEventSkipped,
Artifact: srcInfo,
}
}
return blobInfo, cachedDiffID, nil
}
}
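The copyLayers hunks above change two things: the semaphore is now acquired before each worker is registered with the WaitGroup, and the wait happens in a defer so the progress pool cannot be torn down while workers still use it. The general shape of that pattern, with illustrative names rather than the actual copy.go internals:

```go
package main

import (
	"context"
	"sync"

	"golang.org/x/sync/semaphore"
)

// copyAll runs worker over items with bounded concurrency. Cleanup that
// depends on the workers must be deferred *after* wg.Wait() (here, the
// deferred Wait guarantees that before this function returns).
func copyAll(ctx context.Context, items []int, worker func(int)) error {
	sem := semaphore.NewWeighted(3) // at most 3 concurrent copies
	var wg sync.WaitGroup
	defer wg.Wait() // drained before any caller-side teardown runs

	for _, it := range items {
		if err := sem.Acquire(ctx, 1); err != nil {
			return err // stop scheduling; the defer still waits for started workers
		}
		wg.Add(1) // only after the semaphore is held, mirroring the hunk above
		go func(it int) {
			defer wg.Done()
			defer sem.Release(1)
			worker(it)
		}(it)
	}
	return nil
}

func main() {
	copyAll(context.Background(), []int{1, 2, 3, 4, 5}, func(int) {})
}
```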
6 vendor/github.com/containers/image/v5/docker/docker_client.go generated vendored
@@ -613,6 +613,9 @@ func (c *dockerClient) getBearerTokenOAuth2(ctx context.Context, challenge chall
params.Add("client_id", "containers/image")

authReq.Body = ioutil.NopCloser(bytes.NewBufferString(params.Encode()))
if c.sys != nil && c.sys.DockerRegistryUserAgent != "" {
authReq.Header.Add("User-Agent", c.sys.DockerRegistryUserAgent)
}
authReq.Header.Add("Content-Type", "application/x-www-form-urlencoded")
logrus.Debugf("%s %s", authReq.Method, authReq.URL.String())
res, err := c.client.Do(authReq)
@@ -665,6 +668,9 @@ func (c *dockerClient) getBearerToken(ctx context.Context, challenge challenge,
if c.auth.Username != "" && c.auth.Password != "" {
authReq.SetBasicAuth(c.auth.Username, c.auth.Password)
}
if c.sys != nil && c.sys.DockerRegistryUserAgent != "" {
authReq.Header.Add("User-Agent", c.sys.DockerRegistryUserAgent)
}

logrus.Debugf("%s %s", authReq.Method, authReq.URL.String())
res, err := c.client.Do(authReq)
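The two hunks above make the bearer-token (auth) requests carry the same custom User-Agent as ordinary registry requests. Callers opt in through the existing `types.SystemContext` field; the value shown is only an example:

```go
package main

import "github.com/containers/image/v5/types"

func main() {
	// After this change the User-Agent is also sent when fetching bearer tokens,
	// not just on manifest/blob requests made with this SystemContext.
	sys := &types.SystemContext{DockerRegistryUserAgent: "skopeo/1.1.0"}
	_ = sys // pass this SystemContext to any docker:// operation
}
```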
4 vendor/github.com/containers/image/v5/docker/docker_image.go generated vendored
@@ -37,7 +37,7 @@ func newImage(ctx context.Context, sys *types.SystemContext, ref dockerReference

// SourceRefFullName returns a fully expanded name for the repository this image is in.
func (i *Image) SourceRefFullName() string {
return i.src.ref.ref.Name()
return i.src.logicalRef.ref.Name()
}

// GetRepositoryTags list all tags available in the repository. The tag
@@ -45,7 +45,7 @@ func (i *Image) SourceRefFullName() string {
// backward-compatible shim method which calls the module-level
// GetRepositoryTags)
func (i *Image) GetRepositoryTags(ctx context.Context) ([]string, error) {
return GetRepositoryTags(ctx, i.src.c.sys, i.src.ref)
return GetRepositoryTags(ctx, i.src.c.sys, i.src.logicalRef)
}

// GetRepositoryTags list all tags available in the repository. The tag
53 vendor/github.com/containers/image/v5/docker/docker_image_dest.go generated vendored
@@ -16,6 +16,7 @@ import (

"github.com/containers/image/v5/docker/reference"
"github.com/containers/image/v5/internal/iolimits"
"github.com/containers/image/v5/internal/uploadreader"
"github.com/containers/image/v5/manifest"
"github.com/containers/image/v5/pkg/blobinfocache/none"
"github.com/containers/image/v5/types"
@@ -162,20 +163,31 @@ func (d *dockerImageDestination) PutBlob(ctx context.Context, stream io.Reader,

digester := digest.Canonical.Digester()
sizeCounter := &sizeCounter{}
tee := io.TeeReader(stream, io.MultiWriter(digester.Hash(), sizeCounter))
res, err = d.c.makeRequestToResolvedURL(ctx, "PATCH", uploadLocation.String(), map[string][]string{"Content-Type": {"application/octet-stream"}}, tee, inputInfo.Size, v2Auth, nil)
uploadLocation, err = func() (*url.URL, error) { // A scope for defer
uploadReader := uploadreader.NewUploadReader(io.TeeReader(stream, io.MultiWriter(digester.Hash(), sizeCounter)))
// This error text should never be user-visible, we terminate only after makeRequestToResolvedURL
// returns, so there isn’t a way for the error text to be provided to any of our callers.
defer uploadReader.Terminate(errors.New("Reading data from an already terminated upload"))
res, err = d.c.makeRequestToResolvedURL(ctx, "PATCH", uploadLocation.String(), map[string][]string{"Content-Type": {"application/octet-stream"}}, uploadReader, inputInfo.Size, v2Auth, nil)
if err != nil {
logrus.Debugf("Error uploading layer chunked %v", err)
return nil, err
}
defer res.Body.Close()
if !successStatus(res.StatusCode) {
return nil, errors.Wrapf(client.HandleErrorResponse(res), "Error uploading layer chunked")
}
uploadLocation, err := res.Location()
if err != nil {
return nil, errors.Wrap(err, "Error determining upload URL")
}
return uploadLocation, nil
}()
if err != nil {
logrus.Debugf("Error uploading layer chunked, response %#v", res)
return types.BlobInfo{}, err
}
defer res.Body.Close()
computedDigest := digester.Digest()

uploadLocation, err = res.Location()
if err != nil {
return types.BlobInfo{}, errors.Wrap(err, "Error determining upload URL")
}

// FIXME: DELETE uploadLocation on failure (does not really work in docker/distribution servers, which incorrectly require the "delete" action in the token's scope)

locationQuery := uploadLocation.Query()
@@ -469,17 +481,17 @@ func (d *dockerImageDestination) PutSignatures(ctx context.Context, signatures [
}
switch {
case d.c.signatureBase != nil:
return d.putSignaturesToLookaside(signatures, instanceDigest)
return d.putSignaturesToLookaside(signatures, *instanceDigest)
case d.c.supportsSignatures:
return d.putSignaturesToAPIExtension(ctx, signatures, instanceDigest)
return d.putSignaturesToAPIExtension(ctx, signatures, *instanceDigest)
default:
return errors.Errorf("X-Registry-Supports-Signatures extension not supported, and lookaside is not configured")
}
}

// putSignaturesToLookaside implements PutSignatures() from the lookaside location configured in s.c.signatureBase,
// which is not nil.
func (d *dockerImageDestination) putSignaturesToLookaside(signatures [][]byte, instanceDigest *digest.Digest) error {
// which is not nil, for a manifest with manifestDigest.
func (d *dockerImageDestination) putSignaturesToLookaside(signatures [][]byte, manifestDigest digest.Digest) error {
// FIXME? This overwrites files one at a time, definitely not atomic.
// A failure when updating signatures with a reordered copy could lose some of them.

@@ -490,7 +502,7 @@ func (d *dockerImageDestination) putSignaturesToLookaside(signatures [][]byte, i

// NOTE: Keep this in sync with docs/signature-protocols.md!
for i, signature := range signatures {
url := signatureStorageURL(d.c.signatureBase, *instanceDigest, i)
url := signatureStorageURL(d.c.signatureBase, manifestDigest, i)
if url == nil {
return errors.Errorf("Internal error: signatureStorageURL with non-nil base returned nil")
}
@@ -505,7 +517,7 @@ func (d *dockerImageDestination) putSignaturesToLookaside(signatures [][]byte, i
// is enough for dockerImageSource to stop looking for other signatures, so that
// is sufficient.
for i := len(signatures); ; i++ {
url := signatureStorageURL(d.c.signatureBase, *instanceDigest, i)
url := signatureStorageURL(d.c.signatureBase, manifestDigest, i)
if url == nil {
return errors.Errorf("Internal error: signatureStorageURL with non-nil base returned nil")
}
@@ -564,8 +576,9 @@ func (c *dockerClient) deleteOneSignature(url *url.URL) (missing bool, err error
}
}

// putSignaturesToAPIExtension implements PutSignatures() using the X-Registry-Supports-Signatures API extension.
func (d *dockerImageDestination) putSignaturesToAPIExtension(ctx context.Context, signatures [][]byte, instanceDigest *digest.Digest) error {
// putSignaturesToAPIExtension implements PutSignatures() using the X-Registry-Supports-Signatures API extension,
// for a manifest with manifestDigest.
func (d *dockerImageDestination) putSignaturesToAPIExtension(ctx context.Context, signatures [][]byte, manifestDigest digest.Digest) error {
// Skip dealing with the manifest digest, or reading the old state, if not necessary.
if len(signatures) == 0 {
return nil
@@ -575,7 +588,7 @@ func (d *dockerImageDestination) putSignaturesToAPIExtension(ctx context.Context
// always adds signatures. Eventually we should also allow removing signatures,
// but the X-Registry-Supports-Signatures API extension does not support that yet.

existingSignatures, err := d.c.getExtensionsSignatures(ctx, d.ref, *instanceDigest)
existingSignatures, err := d.c.getExtensionsSignatures(ctx, d.ref, manifestDigest)
if err != nil {
return err
}
@@ -600,7 +613,7 @@ sigExists:
if err != nil || n != 16 {
return errors.Wrapf(err, "Error generating random signature len %d", n)
}
signatureName = fmt.Sprintf("%s@%032x", instanceDigest.String(), randBytes)
signatureName = fmt.Sprintf("%s@%032x", manifestDigest.String(), randBytes)
if _, ok := existingSigNames[signatureName]; !ok {
break
}
@@ -616,7 +629,7 @@ sigExists:
return err
}

path := fmt.Sprintf(extensionsSignaturePath, reference.Path(d.ref.ref), d.manifestDigest.String())
path := fmt.Sprintf(extensionsSignaturePath, reference.Path(d.ref.ref), manifestDigest.String())
res, err := d.c.makeRequest(ctx, "PUT", path, nil, bytes.NewReader(body), v2Auth, nil)
if err != nil {
return err
40 vendor/github.com/containers/image/v5/docker/docker_image_src.go generated vendored
@@ -24,8 +24,9 @@ import (
)

type dockerImageSource struct {
ref dockerReference
c *dockerClient
logicalRef dockerReference // The reference the user requested.
physicalRef dockerReference // The actual reference we are accessing (possibly a mirror)
c *dockerClient
// State
cachedManifest []byte // nil if not loaded yet
cachedManifestMIMEType string // Only valid if cachedManifest != nil
@@ -49,7 +50,6 @@ func newImageSource(ctx context.Context, sys *types.SystemContext, ref dockerRef
}
}

primaryDomain := reference.Domain(ref.ref)
// Check all endpoints for the manifest availability. If we find one that does
// contain the image, it will be used for all future pull actions. Always try the
// non-mirror original location last; this both transparently handles the case
@@ -66,7 +66,7 @@ func newImageSource(ctx context.Context, sys *types.SystemContext, ref dockerRef
attempts := []attempt{}
for _, pullSource := range pullSources {
logrus.Debugf("Trying to access %q", pullSource.Reference)
s, err := newImageSourceAttempt(ctx, sys, pullSource, primaryDomain)
s, err := newImageSourceAttempt(ctx, sys, ref, pullSource)
if err == nil {
return s, nil
}
@@ -95,32 +95,33 @@ func newImageSource(ctx context.Context, sys *types.SystemContext, ref dockerRef
}

// newImageSourceAttempt is an internal helper for newImageSource. Everyone else must call newImageSource.
// Given a pullSource and primaryDomain, return a dockerImageSource if it is reachable.
// Given a logicalReference and a pullSource, return a dockerImageSource if it is reachable.
// The caller must call .Close() on the returned ImageSource.
func newImageSourceAttempt(ctx context.Context, sys *types.SystemContext, pullSource sysregistriesv2.PullSource, primaryDomain string) (*dockerImageSource, error) {
ref, err := newReference(pullSource.Reference)
func newImageSourceAttempt(ctx context.Context, sys *types.SystemContext, logicalRef dockerReference, pullSource sysregistriesv2.PullSource) (*dockerImageSource, error) {
physicalRef, err := newReference(pullSource.Reference)
if err != nil {
return nil, err
}

endpointSys := sys
// sys.DockerAuthConfig does not explicitly specify a registry; we must not blindly send the credentials intended for the primary endpoint to mirrors.
if endpointSys != nil && endpointSys.DockerAuthConfig != nil && reference.Domain(ref.ref) != primaryDomain {
if endpointSys != nil && endpointSys.DockerAuthConfig != nil && reference.Domain(physicalRef.ref) != reference.Domain(logicalRef.ref) {
copy := *endpointSys
copy.DockerAuthConfig = nil
copy.DockerBearerRegistryToken = ""
endpointSys = &copy
}

client, err := newDockerClientFromRef(endpointSys, ref, false, "pull")
client, err := newDockerClientFromRef(endpointSys, physicalRef, false, "pull")
if err != nil {
return nil, err
}
client.tlsClientConfig.InsecureSkipVerify = pullSource.Endpoint.Insecure

s := &dockerImageSource{
ref: ref,
c: client,
logicalRef: logicalRef,
physicalRef: physicalRef,
c: client,
}

if err := s.ensureManifestIsLoaded(ctx); err != nil {
@@ -132,7 +133,7 @@ func newImageSourceAttempt(ctx context.Context, sys *types.SystemContext, pullSo
// Reference returns the reference used to set up this source, _as specified by the user_
// (not as the image itself, or its underlying storage, claims). This can be used e.g. to determine which public keys are trusted for this image.
func (s *dockerImageSource) Reference() types.ImageReference {
return s.ref
return s.logicalRef
}

// Close removes resources associated with an initialized ImageSource, if any.
@@ -181,7 +182,7 @@ func (s *dockerImageSource) GetManifest(ctx context.Context, instanceDigest *dig
}

func (s *dockerImageSource) fetchManifest(ctx context.Context, tagOrDigest string) ([]byte, string, error) {
path := fmt.Sprintf(manifestPath, reference.Path(s.ref.ref), tagOrDigest)
path := fmt.Sprintf(manifestPath, reference.Path(s.physicalRef.ref), tagOrDigest)
headers := map[string][]string{
"Accept": manifest.DefaultRequestedManifestMIMETypes,
}
@@ -189,9 +190,10 @@ func (s *dockerImageSource) fetchManifest(ctx context.Context, tagOrDigest strin
if err != nil {
return nil, "", err
}
logrus.Debugf("Content-Type from manifest GET is %q", res.Header.Get("Content-Type"))
defer res.Body.Close()
if res.StatusCode != http.StatusOK {
return nil, "", errors.Wrapf(client.HandleErrorResponse(res), "Error reading manifest %s in %s", tagOrDigest, s.ref.ref.Name())
return nil, "", errors.Wrapf(client.HandleErrorResponse(res), "Error reading manifest %s in %s", tagOrDigest, s.physicalRef.ref.Name())
}

manblob, err := iolimits.ReadAtMost(res.Body, iolimits.MaxManifestBodySize)
@@ -213,7 +215,7 @@ func (s *dockerImageSource) ensureManifestIsLoaded(ctx context.Context) error {
return nil
}

reference, err := s.ref.tagOrDigest()
reference, err := s.physicalRef.tagOrDigest()
if err != nil {
return err
}
@@ -271,7 +273,7 @@ func (s *dockerImageSource) GetBlob(ctx context.Context, info types.BlobInfo, ca
return s.getExternalBlob(ctx, info.URLs)
}

path := fmt.Sprintf(blobsPath, reference.Path(s.ref.ref), info.Digest.String())
path := fmt.Sprintf(blobsPath, reference.Path(s.physicalRef.ref), info.Digest.String())
logrus.Debugf("Downloading %s", path)
res, err := s.c.makeRequest(ctx, "GET", path, nil, nil, v2Auth, nil)
if err != nil {
@@ -280,7 +282,7 @@ func (s *dockerImageSource) GetBlob(ctx context.Context, info types.BlobInfo, ca
if err := httpResponseToError(res, "Error fetching blob"); err != nil {
return nil, 0, err
}
cache.RecordKnownLocation(s.ref.Transport(), bicTransportScope(s.ref), info.Digest, newBICLocationReference(s.ref))
cache.RecordKnownLocation(s.physicalRef.Transport(), bicTransportScope(s.physicalRef), info.Digest, newBICLocationReference(s.physicalRef))
return res.Body, getBlobSize(res), nil
}

@@ -308,7 +310,7 @@ func (s *dockerImageSource) manifestDigest(ctx context.Context, instanceDigest *
if instanceDigest != nil {
return *instanceDigest, nil
}
if digested, ok := s.ref.ref.(reference.Digested); ok {
if digested, ok := s.physicalRef.ref.(reference.Digested); ok {
d := digested.Digest()
if d.Algorithm() == digest.Canonical {
return d, nil
@@ -398,7 +400,7 @@ func (s *dockerImageSource) getSignaturesFromAPIExtension(ctx context.Context, i
return nil, err
}

parsedBody, err := s.c.getExtensionsSignatures(ctx, s.ref, manifestDigest)
parsedBody, err := s.c.getExtensionsSignatures(ctx, s.physicalRef, manifestDigest)
if err != nil {
return nil, err
}
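The mirror handling above now distinguishes the logical (user-requested) reference from the physical one actually contacted, and keys the credential decision off their registry domains. The rule, distilled into a standalone sketch (function and names are illustrative, not the library API):

```go
package main

import "fmt"

// useStaticCredentials reports whether credentials supplied for the registry
// the user named may also be sent to the endpoint actually contacted, which
// can be a configured mirror on a different host.
func useStaticCredentials(logicalDomain, physicalDomain string) bool {
	return physicalDomain == logicalDomain
}

func main() {
	fmt.Println(useStaticCredentials("docker.io", "docker.io"))           // true: primary endpoint
	fmt.Println(useStaticCredentials("docker.io", "mirror.example:5000")) // false: DockerAuthConfig is scrubbed
}
```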
91 vendor/github.com/containers/image/v5/internal/pkg/platform/platform_matcher.go generated vendored
@@ -115,12 +115,23 @@ func getCPUVariant(os string, arch string) string {
return ""
}

// compatibility contains, for a specified architecture, a list of known variants, in the
// order from most capable (most restrictive) to least capable (most compatible).
// Architectures that don’t have variants should not have an entry here.
var compatibility = map[string][]string{
"arm": {"v7", "v6", "v5"},
"arm": {"v8", "v7", "v6", "v5"},
"arm64": {"v8"},
}

// Returns all compatible platforms with the platform specifics possibly overriden by user,
// baseVariants contains, for a specified architecture, a variant that is known to be
// supported by _all_ machines using that architecture.
// Architectures that don’t have variants, or where there are possible versions without
// an established variant name, should not have an entry here.
var baseVariants = map[string]string{
"arm64": "v8",
}

// WantedPlatforms returns all compatible platforms with the platform specifics possibly overriden by user,
// the most compatible platform is first.
// If some option (arch, os, variant) is not present, a value from current platform is detected.
func WantedPlatforms(ctx *types.SystemContext) ([]imgspecv1.Platform, error) {
@@ -145,59 +156,45 @@ func WantedPlatforms(ctx *types.SystemContext) ([]imgspecv1.Platform, error) {
wantedOS = ctx.OSChoice
}

var wantedPlatforms []imgspecv1.Platform
if wantedVariant != "" && compatibility[wantedArch] != nil {
wantedPlatforms = make([]imgspecv1.Platform, 0, len(compatibility[wantedArch]))
wantedIndex := -1
for i, v := range compatibility[wantedArch] {
if wantedVariant == v {
wantedIndex = i
break
var variants []string = nil
if wantedVariant != "" {
if compatibility[wantedArch] != nil {
variantOrder := compatibility[wantedArch]
for i, v := range variantOrder {
if wantedVariant == v {
variants = variantOrder[i:]
break
}
}
}
// user wants a variant which we know nothing about - not even compatibility
if wantedIndex == -1 {
wantedPlatforms = []imgspecv1.Platform{
{
OS: wantedOS,
Architecture: wantedArch,
Variant: wantedVariant,
},
}
} else {
for i := wantedIndex; i < len(compatibility[wantedArch]); i++ {
v := compatibility[wantedArch][i]
wantedPlatforms = append(wantedPlatforms, imgspecv1.Platform{
OS: wantedOS,
Architecture: wantedArch,
Variant: v,
})
}
if variants == nil {
// user wants a variant which we know nothing about - not even compatibility
variants = []string{wantedVariant}
}
variants = append(variants, "")
} else {
wantedPlatforms = []imgspecv1.Platform{
{
OS: wantedOS,
Architecture: wantedArch,
Variant: wantedVariant,
},
variants = append(variants, "") // No variant specified, use a “no variant specified” image if present
if baseVariant, ok := baseVariants[wantedArch]; ok {
// But also accept an image with the “base” variant for the architecture, if it exists.
variants = append(variants, baseVariant)
}
}

return wantedPlatforms, nil
res := make([]imgspecv1.Platform, 0, len(variants))
for _, v := range variants {
res = append(res, imgspecv1.Platform{
OS: wantedOS,
Architecture: wantedArch,
Variant: v,
})
}
return res, nil
}

// MatchesPlatform returns true if a platform descriptor from a multi-arch image matches
// an item from the return value of WantedPlatforms.
func MatchesPlatform(image imgspecv1.Platform, wanted imgspecv1.Platform) bool {
if image.Architecture != wanted.Architecture {
return false
}
if image.OS != wanted.OS {
return false
}

if wanted.Variant == "" || image.Variant == wanted.Variant {
return true
}

return false
return image.Architecture == wanted.Architecture &&
image.OS == wanted.OS &&
image.Variant == wanted.Variant
}
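The rewritten WantedPlatforms builds an ordered fallback list of variants instead of a single platform. A self-contained sketch of that ordering for arm/v7; it mirrors the compatibility table above rather than importing the internal package, which cannot be used outside the containers/image module:

```go
package main

import "fmt"

var compatibility = map[string][]string{
	"arm":   {"v8", "v7", "v6", "v5"},
	"arm64": {"v8"},
}

// wantedVariants reproduces the ordering: the requested variant first, then
// progressively more compatible ones, then "" (images with no variant set).
func wantedVariants(arch, variant string) []string {
	if order, ok := compatibility[arch]; ok {
		for i, v := range order {
			if v == variant {
				return append(append([]string{}, order[i:]...), "")
			}
		}
	}
	// Unknown variant: take it literally, then fall back to "no variant".
	return []string{variant, ""}
}

func main() {
	fmt.Println(wantedVariants("arm", "v7")) // [v7 v6 v5 ]
}
```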
61 vendor/github.com/containers/image/v5/internal/uploadreader/upload_reader.go generated vendored Normal file
@@ -0,0 +1,61 @@
package uploadreader

import (
	"io"
	"sync"
)

// UploadReader is a pass-through reader for use in sending non-trivial data using the net/http
// package (http.NewRequest, http.Post and the like).
//
// The net/http package uses a separate goroutine to upload data to a HTTP connection,
// and it is possible for the server to return a response (typically an error) before consuming
// the full body of the request. In that case http.Client.Do can return with an error while
// the body is still being read — regardless of of the cancellation, if any, of http.Request.Context().
//
// As a result, any data used/updated by the io.Reader() provided as the request body may be
// used/updated even after http.Client.Do returns, causing races.
//
// To fix this, UploadReader provides a synchronized Terminate() method, which can block for
// a not-completely-negligible time (for a duration of the underlying Read()), but guarantees that
// after Terminate() returns, the underlying reader is never used any more (unlike calling
// the cancellation callback of context.WithCancel, which returns before any recipients may have
// reacted to the cancellation).
type UploadReader struct {
	mutex sync.Mutex
	// The following members can only be used with mutex held
	reader           io.Reader
	terminationError error // nil if not terminated yet
}

// NewUploadReader returns an UploadReader for an "underlying" reader.
func NewUploadReader(underlying io.Reader) *UploadReader {
	return &UploadReader{
		reader:           underlying,
		terminationError: nil,
	}
}

// Read returns the error set by Terminate, if any, or calls the underlying reader.
// It is safe to call this from a different goroutine than Terminate.
func (ur *UploadReader) Read(p []byte) (int, error) {
	ur.mutex.Lock()
	defer ur.mutex.Unlock()

	if ur.terminationError != nil {
		return 0, ur.terminationError
	}
	return ur.reader.Read(p)
}

// Terminate waits for in-progress Read calls, if any, to finish, and ensures that after
// this function returns, any Read calls will fail with the provided error, and the underlying
// reader will never be used any more.
//
// It is safe to call this from a different goroutine than Read.
func (ur *UploadReader) Terminate(err error) {
	ur.mutex.Lock() // May block for some time if ur.reader.Read() is in progress
	defer ur.mutex.Unlock()

	ur.terminationError = err
}
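docker_image_dest.go (earlier in this diff) wraps the PATCH body in this reader and terminates it only after the round trip returns. A condensed sketch of that ordering; note the package is internal to containers/image, so this only compiles inside that module, and the URL and payload here are placeholders:

```go
package docker // a sketch placed inside containers/image, where internal/uploadreader is importable

import (
	"errors"
	"net/http"
	"strings"

	"github.com/containers/image/v5/internal/uploadreader"
)

func uploadSketch(client *http.Client, url, payload string) error {
	ur := uploadreader.NewUploadReader(strings.NewReader(payload))
	// Safe: Terminate runs only after client.Do has returned, so this error
	// text is never seen by the server, and the underlying reader (plus any
	// digest/size accounting wrapped around it) is never touched afterwards.
	defer ur.Terminate(errors.New("reading data from an already terminated upload"))

	req, err := http.NewRequest(http.MethodPatch, url, ur)
	if err != nil {
		return err
	}
	res, err := client.Do(req)
	if err != nil {
		return err
	}
	return res.Body.Close()
}
```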
118 vendor/github.com/containers/image/v5/manifest/common.go generated vendored Normal file
@@ -0,0 +1,118 @@
package manifest

import (
	"fmt"

	"github.com/containers/image/v5/pkg/compression"
	"github.com/containers/image/v5/types"
	"github.com/pkg/errors"
	"github.com/sirupsen/logrus"
)

// dupStringSlice returns a deep copy of a slice of strings, or nil if the
// source slice is empty.
func dupStringSlice(list []string) []string {
	if len(list) == 0 {
		return nil
	}
	dup := make([]string, len(list))
	copy(dup, list)
	return dup
}

// dupStringStringMap returns a deep copy of a map[string]string, or nil if the
// passed-in map is nil or has no keys.
func dupStringStringMap(m map[string]string) map[string]string {
	if len(m) == 0 {
		return nil
	}
	result := make(map[string]string)
	for k, v := range m {
		result[k] = v
	}
	return result
}

// layerInfosToStrings converts a list of layer infos, presumably obtained from a Manifest.LayerInfos()
// method call, into a format suitable for inclusion in a types.ImageInspectInfo structure.
func layerInfosToStrings(infos []LayerInfo) []string {
	layers := make([]string, len(infos))
	for i, info := range infos {
		layers[i] = info.Digest.String()
	}
	return layers
}

// compressionMIMETypeSet describes a set of MIME type “variants” that represent differently-compressed
// versions of “the same kind of content”.
// The map key is the return value of compression.Algorithm.Name(), or mtsUncompressed;
// the map value is a MIME type, or mtsUnsupportedMIMEType to mean "recognized but unsupported".
type compressionMIMETypeSet map[string]string

const mtsUncompressed = ""        // A key in compressionMIMETypeSet for the uncompressed variant
const mtsUnsupportedMIMEType = "" // A value in compressionMIMETypeSet that means “recognized but unsupported”

// compressionVariantMIMEType returns a variant of mimeType for the specified algorithm (which may be nil
// to mean "no compression"), based on variantTable.
func compressionVariantMIMEType(variantTable []compressionMIMETypeSet, mimeType string, algorithm *compression.Algorithm) (string, error) {
	if mimeType == mtsUnsupportedMIMEType { // Prevent matching against the {algo:mtsUnsupportedMIMEType} entries
		return "", fmt.Errorf("cannot update unknown MIME type")
	}
	for _, variants := range variantTable {
		for _, mt := range variants {
			if mt == mimeType { // Found the variant
				name := mtsUncompressed
				if algorithm != nil {
					name = algorithm.Name()
				}
				if res, ok := variants[name]; ok {
					if res != mtsUnsupportedMIMEType {
						return res, nil
					}
					if name != mtsUncompressed {
						return "", fmt.Errorf("%s compression is not supported", name)
					}
					return "", errors.New("uncompressed variant is not supported")
				}
				if name != mtsUncompressed {
					return "", fmt.Errorf("unknown compression algorithm %s", name)
				}
				// We can't very well say “the idea of no compression is unknown”
				return "", errors.New("uncompressed variant is not supported")
			}
		}
	}
	if algorithm != nil {
		return "", fmt.Errorf("unsupported MIME type for compression: %s", mimeType)
	}
	return "", fmt.Errorf("unsupported MIME type for decompression: %s", mimeType)
}

// updatedMIMEType returns the result of applying edits in updated (MediaType, CompressionOperation) to
// mimeType, based on variantTable. It may use updated.Digest for error messages.
func updatedMIMEType(variantTable []compressionMIMETypeSet, mimeType string, updated types.BlobInfo) (string, error) {
	// Note that manifests in containers-storage might be reporting the
	// wrong media type since the original manifests are stored while layers
	// are decompressed in storage. Hence, we need to consider the case
	// that an already {de}compressed layer should be {de}compressed;
	// compressionVariantMIMEType does that by not caring whether the original is
	// {de}compressed.
	switch updated.CompressionOperation {
	case types.PreserveOriginal:
		// Keep the original media type.
		return mimeType, nil

	case types.Decompress:
		return compressionVariantMIMEType(variantTable, mimeType, nil)

	case types.Compress:
		if updated.CompressionAlgorithm == nil {
			logrus.Debugf("Error preparing updated manifest: blob %q was compressed but does not specify by which algorithm: falling back to use the original blob", updated.Digest)
			return mimeType, nil
		}
		return compressionVariantMIMEType(variantTable, mimeType, updated.CompressionAlgorithm)

	default:
		return "", fmt.Errorf("unknown compression operation (%d)", updated.CompressionOperation)
	}
}
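The net effect of compressionVariantMIMEType/updatedMIMEType for the schema 2 layer types (consumed by docker_schema2.go, next) can be summarised in a standalone sketch; this re-implements the decision table for illustration only and is not the unexported helper itself:

```go
package main

import "fmt"

const (
	layerUncompressed = "application/vnd.docker.image.rootfs.diff.tar"
	layerGzip         = "application/vnd.docker.image.rootfs.diff.tar.gzip"
)

// schema2LayerMIMEType mirrors the variant table: preserve keeps the original,
// decompress maps to the uncompressed MIME type, gzip maps to the gzip one,
// and anything else (e.g. zstd) is recognized but rejected for schema 2.
func schema2LayerMIMEType(original, op string) (string, error) {
	switch op {
	case "preserve":
		return original, nil
	case "decompress":
		return layerUncompressed, nil
	case "gzip":
		return layerGzip, nil
	default:
		return "", fmt.Errorf("%s compression is not supported for schema 2 layers", op)
	}
}

func main() {
	mt, _ := schema2LayerMIMEType(layerGzip, "decompress")
	fmt.Println(mt) // application/vnd.docker.image.rootfs.diff.tar
}
```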
97
vendor/github.com/containers/image/v5/manifest/docker_schema2.go
generated
vendored
97
vendor/github.com/containers/image/v5/manifest/docker_schema2.go
generated
vendored
@@ -10,7 +10,6 @@ import (
	"github.com/containers/image/v5/types"
	"github.com/opencontainers/go-digest"
	"github.com/pkg/errors"
	"github.com/sirupsen/logrus"
)

// Schema2Descriptor is a “descriptor” in docker/distribution schema 2.
@@ -213,26 +212,17 @@ func (m *Schema2) LayerInfos() []LayerInfo {
	return blobs
}

// isSchema2ForeignLayer is a convenience wrapper to check if a given mime type
// is a compressed or decompressed schema 2 foreign layer.
func isSchema2ForeignLayer(mimeType string) bool {
	switch mimeType {
	case DockerV2Schema2ForeignLayerMediaType, DockerV2Schema2ForeignLayerMediaTypeGzip:
		return true
	default:
		return false
	}
}

// isSchema2Layer is a convenience wrapper to check if a given mime type is a
// compressed or decompressed schema 2 layer.
func isSchema2Layer(mimeType string) bool {
	switch mimeType {
	case DockerV2SchemaLayerMediaTypeUncompressed, DockerV2Schema2LayerMediaType:
		return true
	default:
		return false
	}
var schema2CompressionMIMETypeSets = []compressionMIMETypeSet{
	{
		mtsUncompressed: DockerV2Schema2ForeignLayerMediaType,
		compression.Gzip.Name(): DockerV2Schema2ForeignLayerMediaTypeGzip,
		compression.Zstd.Name(): mtsUnsupportedMIMEType,
	},
	{
		mtsUncompressed: DockerV2SchemaLayerMediaTypeUncompressed,
		compression.Gzip.Name(): DockerV2Schema2LayerMediaType,
		compression.Zstd.Name(): mtsUnsupportedMIMEType,
	},
}

// UpdateLayerInfos replaces the original layers with the specified BlobInfos (size+digest+urls), in order (the root layer first, and then successive layered layers)
@@ -243,67 +233,16 @@ func (m *Schema2) UpdateLayerInfos(layerInfos []types.BlobInfo) error {
	original := m.LayersDescriptors
	m.LayersDescriptors = make([]Schema2Descriptor, len(layerInfos))
	for i, info := range layerInfos {
		mimeType := original[i].MediaType
		// First make sure we support the media type of the original layer.
		if err := SupportedSchema2MediaType(original[i].MediaType); err != nil {
			return fmt.Errorf("Error preparing updated manifest: unknown media type of original layer: %q", original[i].MediaType)
		if err := SupportedSchema2MediaType(mimeType); err != nil {
			return fmt.Errorf("Error preparing updated manifest: unknown media type of original layer %q: %q", info.Digest, mimeType)
		}

		// Set the correct media types based on the specified compression
		// operation, the desired compression algorithm AND the original media
		// type.
		//
		// Note that manifests in containers-storage might be reporting the
		// wrong media type since the original manifests are stored while layers
		// are decompressed in storage. Hence, we need to consider the case
		// that an already {de}compressed layer should be {de}compressed, which
		// is being addressed in `isSchema2{Foreign}Layer`.
		switch info.CompressionOperation {
		case types.PreserveOriginal:
			// Keep the original media type.
			m.LayersDescriptors[i].MediaType = original[i].MediaType

		case types.Decompress:
			// Decompress the original media type and check if it was
			// non-distributable one or not.
			mimeType := original[i].MediaType
			switch {
			case isSchema2ForeignLayer(mimeType):
				m.LayersDescriptors[i].MediaType = DockerV2Schema2ForeignLayerMediaType
			case isSchema2Layer(mimeType):
				m.LayersDescriptors[i].MediaType = DockerV2SchemaLayerMediaTypeUncompressed
			default:
				return fmt.Errorf("Error preparing updated manifest: unsupported media type for decompression: %q", original[i].MediaType)
			}

		case types.Compress:
			if info.CompressionAlgorithm == nil {
				logrus.Debugf("Preparing updated manifest: blob %q was compressed but does not specify by which algorithm: falling back to use the original blob", info.Digest)
				m.LayersDescriptors[i].MediaType = original[i].MediaType
				break
			}
			// Compress the original media type and set the new one based on
			// that type (distributable or not) and the specified compression
			// algorithm. Throw an error if the algorithm is not supported.
			switch info.CompressionAlgorithm.Name() {
			case compression.Gzip.Name():
				mimeType := original[i].MediaType
				switch {
				case isSchema2ForeignLayer(mimeType):
					m.LayersDescriptors[i].MediaType = DockerV2Schema2ForeignLayerMediaTypeGzip
				case isSchema2Layer(mimeType):
					m.LayersDescriptors[i].MediaType = DockerV2Schema2LayerMediaType
				default:
					return fmt.Errorf("Error preparing updated manifest: unsupported media type for compression: %q", original[i].MediaType)
				}
			case compression.Zstd.Name():
				return fmt.Errorf("Error preparing updated manifest: zstd compression is not supported for docker images")
			default:
				return fmt.Errorf("Error preparing updated manifest: unknown compression algorithm %q for layer %q", info.CompressionAlgorithm.Name(), info.Digest)
			}

		default:
			return fmt.Errorf("Error preparing updated manifest: unknown compression operation (%d) for layer %q", info.CompressionOperation, info.Digest)
		mimeType, err := updatedMIMEType(schema2CompressionMIMETypeSets, mimeType, info)
		if err != nil {
			return errors.Wrapf(err, "Error preparing updated manifest, layer %q", info.Digest)
		}
		m.LayersDescriptors[i].MediaType = mimeType
		m.LayersDescriptors[i].Digest = info.Digest
		m.LayersDescriptors[i].Size = info.Size
		m.LayersDescriptors[i].URLs = info.URLs

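For orientation, the refactored (*Schema2).UpdateLayerInfos above is driven by types.BlobInfo values whose CompressionOperation and CompressionAlgorithm fields describe the desired layer representation. The following is a minimal usage sketch, not part of this change: it assumes a module that already depends on github.com/containers/image/v5, uses only identifiers visible in the diff (Schema2, LayersDescriptors, types.Decompress), and deliberately leaves digests and sizes unchanged, which a real caller would also have to update.

package main

import (
	"fmt"

	"github.com/containers/image/v5/manifest"
	"github.com/containers/image/v5/types"
)

// decompressAllLayers rewrites a schema2 manifest so that every layer is
// recorded as its uncompressed variant. Digests, sizes and URLs are copied
// through unchanged here for brevity; a real caller would point them at the
// uncompressed blobs.
func decompressAllLayers(m *manifest.Schema2) error {
	updated := make([]types.BlobInfo, 0, len(m.LayersDescriptors))
	for _, l := range m.LayersDescriptors {
		updated = append(updated, types.BlobInfo{
			Digest:               l.Digest,
			Size:                 l.Size,
			URLs:                 l.URLs,
			CompressionOperation: types.Decompress,
		})
	}
	return m.UpdateLayerInfos(updated)
}

func main() {
	fmt.Println("see decompressAllLayers for the sketch")
}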
2 vendor/github.com/containers/image/v5/manifest/docker_schema2_list.go (generated, vendored)
@@ -107,7 +107,7 @@ func (list *Schema2List) ChooseInstance(ctx *types.SystemContext) (digest.Digest
			}
		}
	}
	return "", fmt.Errorf("no image found in manifest list for architecture %s, variant %s, OS %s", wantedPlatforms[0].Architecture, wantedPlatforms[0].Variant, wantedPlatforms[0].OS)
	return "", fmt.Errorf("no image found in manifest list for architecture %s, variant %q, OS %s", wantedPlatforms[0].Architecture, wantedPlatforms[0].Variant, wantedPlatforms[0].OS)
}

// Serialize returns the list in a blob format.

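The hunk above only tweaks the error formatting: the variant is now printed with %q, so an empty variant remains visible in the "no image found" message. For context, ChooseInstance is normally reached through the List interface; a hypothetical, self-contained sketch follows, using an invented single-entry manifest list blob and assuming the ArchitectureChoice/OSChoice fields of types.SystemContext.

package main

import (
	"fmt"

	"github.com/containers/image/v5/manifest"
	"github.com/containers/image/v5/types"
)

func main() {
	// A hand-written, single-entry docker manifest list (digest invented for
	// this illustration); real callers would fetch this blob from a registry.
	raw := []byte(`{
	  "schemaVersion": 2,
	  "mediaType": "application/vnd.docker.distribution.manifest.list.v2+json",
	  "manifests": [{
	    "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
	    "size": 7143,
	    "digest": "sha256:0000000000000000000000000000000000000000000000000000000000000000",
	    "platform": {"architecture": "amd64", "os": "linux"}
	  }]
	}`)

	list, err := manifest.ListFromBlob(raw, manifest.DockerV2ListMediaType)
	if err != nil {
		panic(err)
	}
	// Ask for a specific platform; with no matching entry, the error message
	// shown in the hunk above would be returned instead of a digest.
	instanceDigest, err := list.ChooseInstance(&types.SystemContext{
		ArchitectureChoice: "amd64",
		OSChoice:           "linux",
	})
	fmt.Println(instanceDigest, err)
}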
24 vendor/github.com/containers/image/v5/manifest/list.go (generated, vendored)
@@ -59,30 +59,6 @@ type ListUpdate struct {
	MediaType string
}

// dupStringSlice returns a deep copy of a slice of strings, or nil if the
// source slice is empty.
func dupStringSlice(list []string) []string {
	if len(list) == 0 {
		return nil
	}
	dup := make([]string, len(list))
	copy(dup, list)
	return dup
}

// dupStringStringMap returns a deep copy of a map[string]string, or nil if the
// passed-in map is nil or has no keys.
func dupStringStringMap(m map[string]string) map[string]string {
	if len(m) == 0 {
		return nil
	}
	result := make(map[string]string)
	for k, v := range m {
		result[k] = v
	}
	return result
}

// ListFromBlob parses a list of manifests.
func ListFromBlob(manifest []byte, manifestMIMEType string) (List, error) {
	normalized := NormalizedMIMEType(manifestMIMEType)

10 vendor/github.com/containers/image/v5/manifest/manifest.go (generated, vendored)
@@ -256,13 +256,3 @@ func FromBlob(manblob []byte, mt string) (Manifest, error) {
	// Note that this may not be reachable, NormalizedMIMEType has a default for unknown values.
	return nil, fmt.Errorf("Unimplemented manifest MIME type %s (normalized as %s)", mt, nmt)
}

// layerInfosToStrings converts a list of layer infos, presumably obtained from a Manifest.LayerInfos()
// method call, into a format suitable for inclusion in a types.ImageInspectInfo structure.
func layerInfosToStrings(infos []LayerInfo) []string {
	layers := make([]string, len(infos))
	for i, info := range infos {
		layers[i] = info.Digest.String()
	}
	return layers
}

108 vendor/github.com/containers/image/v5/manifest/oci.go (generated, vendored)
@@ -12,7 +12,6 @@ import (
	"github.com/opencontainers/image-spec/specs-go"
	imgspecv1 "github.com/opencontainers/image-spec/specs-go/v1"
	"github.com/pkg/errors"
	"github.com/sirupsen/logrus"
)

// BlobInfoFromOCI1Descriptor returns a types.BlobInfo based on the input OCI1 descriptor.
@@ -95,26 +94,17 @@ func (m *OCI1) LayerInfos() []LayerInfo {
	return blobs
}

// isOCI1NonDistributableLayer is a convenience wrapper to check if a given mime
// type is a compressed or decompressed OCI v1 non-distributable layer.
func isOCI1NonDistributableLayer(mimeType string) bool {
	switch mimeType {
	case imgspecv1.MediaTypeImageLayerNonDistributable, imgspecv1.MediaTypeImageLayerNonDistributableGzip, imgspecv1.MediaTypeImageLayerNonDistributableZstd:
		return true
	default:
		return false
	}
}

// isOCI1Layer is a convenience wrapper to check if a given mime type is a
// compressed or decompressed OCI v1 layer.
func isOCI1Layer(mimeType string) bool {
	switch mimeType {
	case imgspecv1.MediaTypeImageLayer, imgspecv1.MediaTypeImageLayerGzip, imgspecv1.MediaTypeImageLayerZstd:
		return true
	default:
		return false
	}
var oci1CompressionMIMETypeSets = []compressionMIMETypeSet{
	{
		mtsUncompressed: imgspecv1.MediaTypeImageLayerNonDistributable,
		compression.Gzip.Name(): imgspecv1.MediaTypeImageLayerNonDistributableGzip,
		compression.Zstd.Name(): imgspecv1.MediaTypeImageLayerNonDistributableZstd,
	},
	{
		mtsUncompressed: imgspecv1.MediaTypeImageLayer,
		compression.Gzip.Name(): imgspecv1.MediaTypeImageLayerGzip,
		compression.Zstd.Name(): imgspecv1.MediaTypeImageLayerZstd,
	},
}

// UpdateLayerInfos replaces the original layers with the specified BlobInfos (size+digest+urls+mediatype), in order (the root layer first, and then successive layered layers)
@@ -133,79 +123,19 @@ func (m *OCI1) UpdateLayerInfos(layerInfos []types.BlobInfo) error {
			}
			mimeType = decMimeType
		}

		// Set the correct media types based on the specified compression
		// operation, the desired compression algorithm AND the original media
		// type.
		//
		// Note that manifests in containers-storage might be reporting the
		// wrong media type since the original manifests are stored while layers
		// are decompressed in storage. Hence, we need to consider the case
		// that an already {de}compressed layer should be {de}compressed, which
		// is being addressed in `isSchema2{Foreign}Layer`.
		switch info.CompressionOperation {
		case types.PreserveOriginal:
			// Keep the original media type.
			m.Layers[i].MediaType = mimeType

		case types.Decompress:
			// Decompress the original media type and check if it was
			// non-distributable one or not.
			switch {
			case isOCI1NonDistributableLayer(mimeType):
				m.Layers[i].MediaType = imgspecv1.MediaTypeImageLayerNonDistributable
			case isOCI1Layer(mimeType):
				m.Layers[i].MediaType = imgspecv1.MediaTypeImageLayer
			default:
				return fmt.Errorf("Error preparing updated manifest: unsupported media type for decompression: %q", mimeType)
			}

		case types.Compress:
			if info.CompressionAlgorithm == nil {
				logrus.Debugf("Error preparing updated manifest: blob %q was compressed but does not specify by which algorithm: falling back to use the original blob", info.Digest)
				m.Layers[i].MediaType = mimeType
				break
			}
			// Compress the original media type and set the new one based on
			// that type (distributable or not) and the specified compression
			// algorithm. Throw an error if the algorithm is not supported.
			switch info.CompressionAlgorithm.Name() {
			case compression.Gzip.Name():
				switch {
				case isOCI1NonDistributableLayer(mimeType):
					m.Layers[i].MediaType = imgspecv1.MediaTypeImageLayerNonDistributableGzip
				case isOCI1Layer(mimeType):
					m.Layers[i].MediaType = imgspecv1.MediaTypeImageLayerGzip
				default:
					return fmt.Errorf("Error preparing updated manifest: unsupported media type for compression: %q", mimeType)
				}

			case compression.Zstd.Name():
				switch {
				case isOCI1NonDistributableLayer(mimeType):
					m.Layers[i].MediaType = imgspecv1.MediaTypeImageLayerNonDistributableZstd
				case isOCI1Layer(mimeType):
					m.Layers[i].MediaType = imgspecv1.MediaTypeImageLayerZstd
				default:
					return fmt.Errorf("Error preparing updated manifest: unsupported media type for compression: %q", mimeType)
				}

			default:
				return fmt.Errorf("Error preparing updated manifest: unknown compression algorithm %q for layer %q", info.CompressionAlgorithm.Name(), info.Digest)
			}

		default:
			return fmt.Errorf("Error preparing updated manifest: unknown compression operation (%d) for layer %q", info.CompressionOperation, info.Digest)
		mimeType, err := updatedMIMEType(oci1CompressionMIMETypeSets, mimeType, info)
		if err != nil {
			return errors.Wrapf(err, "Error preparing updated manifest, layer %q", info.Digest)
		}

		if info.CryptoOperation == types.Encrypt {
			encMediaType, err := getEncryptedMediaType(m.Layers[i].MediaType)
			encMediaType, err := getEncryptedMediaType(mimeType)
			if err != nil {
				return fmt.Errorf("error preparing updated manifest: encryption specified but no counterpart for mediatype: %q", m.Layers[i].MediaType)
				return fmt.Errorf("error preparing updated manifest: encryption specified but no counterpart for mediatype: %q", mimeType)
			}
			m.Layers[i].MediaType = encMediaType
			mimeType = encMediaType
		}

		m.Layers[i].MediaType = mimeType
		m.Layers[i].Digest = info.Digest
		m.Layers[i].Size = info.Size
		m.Layers[i].Annotations = info.Annotations
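An analogous sketch for the OCI path: per oci1CompressionMIMETypeSets above, zstd has a defined media-type variant for OCI layers (unlike docker schema2, where it is rejected). This usage sketch is illustrative only and not part of this change; it assumes, as in the vendored types package, that BlobInfo.CompressionAlgorithm is a pointer to a pkg/compression Algorithm value.

package main

import (
	"fmt"

	"github.com/containers/image/v5/manifest"
	"github.com/containers/image/v5/pkg/compression"
	"github.com/containers/image/v5/types"
)

// requestZstdLayers asks an OCI manifest to record every layer as its
// zstd-compressed variant; digests and sizes are copied through unchanged for
// brevity, though a real caller would update them to the recompressed blobs.
func requestZstdLayers(m *manifest.OCI1) error {
	updated := make([]types.BlobInfo, 0, len(m.Layers))
	for _, l := range m.Layers {
		updated = append(updated, types.BlobInfo{
			Digest:               l.Digest,
			Size:                 l.Size,
			Annotations:          l.Annotations,
			CompressionOperation: types.Compress,
			// Assumption: CompressionAlgorithm is a *compression.Algorithm,
			// so the package-level Zstd value can be passed by address.
			CompressionAlgorithm: &compression.Zstd,
		})
	}
	return m.UpdateLayerInfos(updated)
}

func main() {
	fmt.Println("see requestZstdLayers for the sketch")
}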
@@ -242,7 +172,7 @@ func (m *OCI1) Inspect(configGetter func(types.BlobInfo) ([]byte, error)) (*type
		Architecture: v1.Architecture,
		Os:           v1.OS,
		Layers:       layerInfosToStrings(m.LayerInfos()),
		Env:          d1.Config.Env,
		Env:          v1.Config.Env,
	}
	return i, nil
}

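The last hunk fixes (*OCI1).Inspect to read Env from the OCI config (v1.Config.Env) rather than the docker-schema2 view of the same blob. A contrived, self-contained sketch of calling Inspect follows; the empty OCI1 value and the inline config JSON are invented for illustration, and a real caller would fetch the config blob via the image's config descriptor.

package main

import (
	"fmt"
	"log"

	"github.com/containers/image/v5/manifest"
	"github.com/containers/image/v5/types"
)

func main() {
	// An empty OCI manifest is enough to exercise Inspect; the configGetter
	// below returns a hand-written, minimal OCI config blob instead of
	// fetching a real one.
	m := &manifest.OCI1{}
	info, err := m.Inspect(func(_ types.BlobInfo) ([]byte, error) {
		return []byte(`{"architecture":"amd64","os":"linux","config":{"Env":["PATH=/usr/bin"]}}`), nil
	})
	if err != nil {
		log.Fatal(err)
	}
	// With the fix above, Env now comes from the OCI config's Config.Env.
	fmt.Println(info.Architecture, info.Os, info.Env)
}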
Some files were not shown because too many files have changed in this diff.