Compare commits

...

17 Commits

Author SHA1 Message Date
Antonio Murdaca
b0b750dfa1 version: bump to v0.1.31
Signed-off-by: Antonio Murdaca <runcom@linux.com>
2018-07-29 19:02:33 +02:00
Miloslav Trmač
e3034e1d91 Merge pull request #525 from mtrmac/docker-archive-auto-compression
Vendor after merging mtrmac/image:docker-archive-auto-compression
2018-07-18 01:19:09 +02:00
Miloslav Trmač
1a259b76da Vendor after merging mtrmac/image:docker-archive-auto-compression
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2018-07-18 01:02:26 +02:00
Antonio Murdaca
ae64ff7084 Merge pull request #522 from AkihiroSuda/minimal
Makefile: add binary-minimal and binary-static-minimal
2018-07-09 15:11:02 +02:00
Akihiro Suda
d67d3a4620 Makefile: add binary-minimal and binary-static-minimal
These targets produce a pure-Go binary, without the following features:
* ostree
* devicemapper
* btrfs
* gpgme

Signed-off-by: Akihiro Suda <suda.akihiro@lab.ntt.co.jp>
2018-07-09 18:35:58 +09:00
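These targets work through Go build constraints: each cgo-backed feature has a pure-Go stub guarded by a tag (the Makefile hunk below sets `containers_image_ostree_stub`, `exclude_graphdriver_devicemapper`, `exclude_graphdriver_btrfs`, and `containers_image_openpgp`). A hedged sketch of the pattern — file contents and names are hypothetical, not the real containers/image sources:

```go
// +build containers_image_openpgp

// With the containers_image_openpgp tag set, a pure-Go file like this one is
// compiled in place of its cgo/gpgme counterpart (which carries the inverse
// !containers_image_openpgp constraint), so the resulting binary never links
// against the gpgme C library.
package signature

import "golang.org/x/crypto/openpgp"

// verify is illustrative only; the real signing API differs.
func verify(keyring openpgp.EntityList, signed, sig []byte) error {
	// ... pure-Go OpenPGP verification via golang.org/x/crypto/openpgp ...
	return nil
}
```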
Daniel J Walsh
196bc48723 Merge pull request #520 from mtrmac/copy-error-handling
Ensure that we still return 1, and print something to stderr, on (skopeo copy) failure
2018-07-02 07:57:19 -04:00
Miloslav Trmač
1c6c7bc481 Ensure that we still return 1, and print something to stderr, on (skopeo copy) failure
https://github.com/projectatomic/skopeo/pull/519 made (skopeo copy)
succeed and print nothing to stderr; that could lead to hard-to-diagnose
failures in rare corner cases, e.g. shell scripts which do
(skopeo copy $src $dst) (as opposed to the correct
(skopeo copy "$src" "$dst") ) if $src and $dst are empty due to
a previous failure.
2018-06-30 13:05:23 +02:00
Daniel J Walsh
6e23a32282 Merge pull request #519 from vbatts/copy_help_output
cmd/skopeo: show full help on 'copy' with no args
2018-06-29 11:17:03 -04:00
Vincent Batts
f398c9c035 cmd/skopeo: show full help on 'copy' with no args
Signed-off-by: Vincent Batts <vbatts@hashbangbash.com>
2018-06-29 10:13:13 -04:00
Miloslav Trmač
0144aa8dc5 Merge pull request #514 from giuseppe/vndr-c-image
vendor: update containers/image
2018-05-30 20:50:35 +02:00
Giuseppe Scrivano
0df5dcf09c vendor: update containers/image
Needed to pick up this change:

ostree: use the same thread for ostree operations

Since https://github.com/ostreedev/ostree/pull/1555, locking is
enabled by default in OSTree.  Unfortunately it uses thread-private
data and it breaks the Golang bindings.  Force the same thread for the
write operations to the OSTree repository.

Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
2018-05-30 19:00:34 +02:00
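Forcing one thread in Go means pinning a goroutine with `runtime.LockOSThread` and funnelling all repository writes through it. An illustrative sketch of that pattern (not the actual containers/image ostree code):

```go
package main

import (
	"fmt"
	"runtime"
)

func main() {
	// OSTree's lock uses thread-private data, so every repo operation must
	// happen on the same OS thread. A dedicated, locked worker goroutine
	// guarantees that.
	jobs := make(chan func())
	done := make(chan struct{})
	go func() {
		runtime.LockOSThread() // pin this goroutine to a single OS thread
		defer runtime.UnlockOSThread()
		for job := range jobs {
			job()
		}
		close(done)
	}()

	jobs <- func() { fmt.Println("write to the OSTree repo from a fixed thread") }
	close(jobs)
	<-done
}
```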
Miloslav Trmač
f9baaa6b87 Merge pull request #512 from mtrmac/docker_vendor_update
Update docker/docker dependencies.
2018-05-26 05:56:42 +02:00
Max Goltzsche
67ff78925b Update docker/docker dependencies.
Required to update those dependencies in containers/image.
See https://github.com/containers/image/pull/446.

Updated by mitr@redhat.com to vendor from containers/image master again,
which brought in a few more dependency updates.

Signed-off-by: Max Goltzsche <max.goltzsche@gmail.com>
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2018-05-26 05:41:06 +02:00
Miloslav Trmač
5c611083f2 Merge pull request #510 from rhatdan/vendor
Vendor in latest go-selinux and containers/storage
2018-05-22 17:47:23 +02:00
Daniel J Walsh
976d57ea45 Vendor in latest go-selinux and containers/storage
skopeo is failing to build now on 32 bit systems.  go-selinux update
should fix this.  Also containers/storage has had some cleanup fixes
to devicemapper support.

Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
2018-05-22 11:09:34 -04:00
Antonio Murdaca
63569fcd63 Merge pull request #509 from runcom/bump-0.1.30
Bump 0.1.30
2018-05-20 11:11:19 +02:00
Antonio Murdaca
98b3a13b46 version: bump v0.1.31-dev
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2018-05-20 10:56:32 +02:00
501 changed files with 22950 additions and 4625 deletions

View File

@@ -55,6 +55,12 @@ LIBDM_BUILD_TAG = $(shell hack/libdm_tag.sh)
LOCAL_BUILD_TAGS = $(BTRFS_BUILD_TAG) $(LIBDM_BUILD_TAG) $(DARWIN_BUILD_TAG)
BUILDTAGS += $(LOCAL_BUILD_TAGS)
DOCKER_BUILD_IMAGE := skopeobuildimage
ifeq ($(DISABLE_CGO), 1)
override DOCKER_BUILD_IMAGE = golang:1.10
override BUILDTAGS = containers_image_ostree_stub exclude_graphdriver_devicemapper exclude_graphdriver_btrfs containers_image_openpgp
endif
# make all DEBUG=1
# Note: Uses the -N -l go compiler options to disable compiler optimizations
# and inlining. Using these build options allows you to subsequently
@@ -64,14 +70,18 @@ all: binary docs
# Build a docker image (skopeobuild) that has everything we need to build.
# Then do the build and the output (skopeo) should appear in current dir
binary: cmd/skopeo
docker build ${DOCKER_BUILD_ARGS} -f Dockerfile.build -t skopeobuildimage .
ifneq ($(DISABLE_CGO), 1)
docker build ${DOCKER_BUILD_ARGS} -f Dockerfile.build -t $(DOCKER_BUILD_IMAGE) .
endif
docker run --rm --security-opt label:disable -v $$(pwd):/src/github.com/projectatomic/skopeo \
skopeobuildimage make binary-local $(if $(DEBUG),DEBUG=$(DEBUG)) BUILDTAGS='$(BUILDTAGS)'
$(DOCKER_BUILD_IMAGE) make binary-local $(if $(DEBUG),DEBUG=$(DEBUG)) BUILDTAGS='$(BUILDTAGS)'
binary-static: cmd/skopeo
docker build ${DOCKER_BUILD_ARGS} -f Dockerfile.build -t skopeobuildimage .
ifneq ($(DISABLE_CGO), 1)
docker build ${DOCKER_BUILD_ARGS} -f Dockerfile.build -t $(DOCKER_BUILD_IMAGE) .
endif
docker run --rm --security-opt label:disable -v $$(pwd):/src/github.com/projectatomic/skopeo \
skopeobuildimage make binary-local-static $(if $(DEBUG),DEBUG=$(DEBUG)) BUILDTAGS='$(BUILDTAGS)'
$(DOCKER_BUILD_IMAGE) make binary-local-static $(if $(DEBUG),DEBUG=$(DEBUG)) BUILDTAGS='$(BUILDTAGS)'
# Build w/o using Docker containers
binary-local:

View File

@@ -191,6 +191,12 @@ Building in a container is simpler, but more restrictive:
$ make binary # Or (make all) to also build documentation, see below.
```
To build a pure-Go static binary (disables ostree, devicemapper, btrfs, and gpgme):
```sh
$ make binary-static DISABLE_CGO=1
```
### Building documentation
To build the manual you will need go-md2man.
```sh

View File

@@ -34,7 +34,8 @@ func contextsFromGlobalOptions(c *cli.Context) (*types.SystemContext, *types.Sys
func copyHandler(c *cli.Context) error {
if len(c.Args()) != 2 {
return errors.New("Usage: copy source destination")
cli.ShowCommandHelp(c, "copy")
return errors.New("Exactly two arguments expected")
}
policyContext, err := getPolicyContext(c)
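Read together, the hunk leaves the handler looking roughly like this minimal sketch (urfave/cli v1 as vendored here; surrounding details elided):

```go
package main

import (
	"github.com/pkg/errors"
	"github.com/urfave/cli"
)

func copyHandler(c *cli.Context) error {
	if len(c.Args()) != 2 {
		cli.ShowCommandHelp(c, "copy") // restore the full help output from #519
		// Still return a non-nil error so the process exits 1 and prints to
		// stderr — the regression this PR fixes for scripts that pass empty,
		// unquoted arguments.
		return errors.New("Exactly two arguments expected")
	}
	// ... proceed with the actual copy ...
	return nil
}

func main() {
	app := cli.NewApp()
	app.Commands = []cli.Command{{Name: "copy", Action: copyHandler}}
	_ = app.Run([]string{"skopeo", "copy"}) // no args → help text plus error
}
```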

View File

@@ -1,4 +1,6 @@
github.com/urfave/cli v1.17.0
github.com/kr/pretty v0.1.0
github.com/kr/text v0.1.0
github.com/containers/image master
github.com/opencontainers/go-digest master
gopkg.in/cheggaaa/pb.v1 ad4efe000aa550bb54918c06ebbadc0ff17687b9 https://github.com/cheggaaa/pb
@@ -10,9 +12,11 @@ github.com/davecgh/go-spew master
github.com/pmezard/go-difflib master
github.com/pkg/errors master
golang.org/x/crypto master
github.com/ulikunitz/xz v0.5.4
# docker deps from https://github.com/docker/docker/blob/v1.11.2/hack/vendor.sh
github.com/docker/docker 30eb4d8cdc422b023d5f11f29a82ecb73554183b
github.com/docker/go-connections 3ede32e2033de7505e6500d6c868c2b9ed9f169d
github.com/docker/docker da99009bbb1165d1ac5688b5c81d2f589d418341
github.com/docker/go-connections 7beb39f0b969b075d1325fecb092faf27fd357b6
github.com/containerd/continuity d8fb8589b0e8e85b8c8bbaa8840226d0dfeb7371
github.com/vbatts/tar-split v0.10.2
github.com/gorilla/context 14f550f51a
github.com/gorilla/mux e444e69cbd
@@ -60,3 +64,4 @@ golang.org/x/sys master
github.com/tchap/go-patricia v2.2.6
github.com/BurntSushi/toml master
github.com/pquerna/ffjson d49c2bc1aa135aad0c6f4fc2056623ec78f5d5ac
github.com/syndtr/gocapability master

202
vendor/github.com/containerd/continuity/LICENSE generated vendored Normal file
View File

@@ -0,0 +1,202 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "{}"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright {yyyy} {name of copyright owner}
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

74
vendor/github.com/containerd/continuity/README.md generated vendored Normal file
View File

@@ -0,0 +1,74 @@
# continuity
[![GoDoc](https://godoc.org/github.com/containerd/continuity?status.svg)](https://godoc.org/github.com/containerd/continuity)
[![Build Status](https://travis-ci.org/containerd/continuity.svg?branch=master)](https://travis-ci.org/containerd/continuity)
A transport-agnostic, filesystem metadata manifest system
This project is a staging area for experiments in providing transport agnostic
metadata storage.
Please see https://github.com/opencontainers/specs/issues/11 for more details.
## Manifest Format
A continuity manifest encodes filesystem metadata in Protocol Buffers.
Please refer to [proto/manifest.proto](proto/manifest.proto).
## Usage
Build:
```console
$ make
```
Create a manifest (of this repo itself):
```console
$ ./bin/continuity build . > /tmp/a.pb
```
Dump a manifest:
```console
$ ./bin/continuity ls /tmp/a.pb
...
-rw-rw-r-- 270 B /.gitignore
-rw-rw-r-- 88 B /.mailmap
-rw-rw-r-- 187 B /.travis.yml
-rw-rw-r-- 359 B /AUTHORS
-rw-rw-r-- 11 kB /LICENSE
-rw-rw-r-- 1.5 kB /Makefile
...
-rw-rw-r-- 986 B /testutil_test.go
drwxrwxr-x 0 B /version
-rw-rw-r-- 478 B /version/version.go
```
Verify a manifest:
```console
$ ./bin/continuity verify . /tmp/a.pb
```
Break the directory and restore using the manifest:
```console
$ chmod 777 Makefile
$ ./bin/continuity verify . /tmp/a.pb
2017/06/23 08:00:34 error verifying manifest: resource "/Makefile" has incorrect mode: -rwxrwxrwx != -rw-rw-r--
$ ./bin/continuity apply . /tmp/a.pb
$ stat -c %a Makefile
664
$ ./bin/continuity verify . /tmp/a.pb
```
## Contribution Guide
### Building Proto Package
If you change the proto file you will need to rebuild the generated Go with `go generate`.
```console
$ go generate ./proto
```

View File

@@ -0,0 +1,85 @@
package pathdriver
import (
"path/filepath"
)
// PathDriver provides all of the path manipulation functions in a common
// interface. The context should call these and never use the `filepath`
// package or any other package to manipulate paths.
type PathDriver interface {
Join(paths ...string) string
IsAbs(path string) bool
Rel(base, target string) (string, error)
Base(path string) string
Dir(path string) string
Clean(path string) string
Split(path string) (dir, file string)
Separator() byte
Abs(path string) (string, error)
Walk(string, filepath.WalkFunc) error
FromSlash(path string) string
ToSlash(path string) string
Match(pattern, name string) (matched bool, err error)
}
// pathDriver is a simple default implementation calls the filepath package.
type pathDriver struct{}
// LocalPathDriver is the exported pathDriver struct for convenience.
var LocalPathDriver PathDriver = &pathDriver{}
func (*pathDriver) Join(paths ...string) string {
return filepath.Join(paths...)
}
func (*pathDriver) IsAbs(path string) bool {
return filepath.IsAbs(path)
}
func (*pathDriver) Rel(base, target string) (string, error) {
return filepath.Rel(base, target)
}
func (*pathDriver) Base(path string) string {
return filepath.Base(path)
}
func (*pathDriver) Dir(path string) string {
return filepath.Dir(path)
}
func (*pathDriver) Clean(path string) string {
return filepath.Clean(path)
}
func (*pathDriver) Split(path string) (dir, file string) {
return filepath.Split(path)
}
func (*pathDriver) Separator() byte {
return filepath.Separator
}
func (*pathDriver) Abs(path string) (string, error) {
return filepath.Abs(path)
}
// Note that filepath.Walk calls os.Stat, so if the context wants to
// to call Driver.Stat() for Walk, they need to create a new struct that
// overrides this method.
func (*pathDriver) Walk(root string, walkFn filepath.WalkFunc) error {
return filepath.Walk(root, walkFn)
}
func (*pathDriver) FromSlash(path string) string {
return filepath.FromSlash(path)
}
func (*pathDriver) ToSlash(path string) string {
return filepath.ToSlash(path)
}
func (*pathDriver) Match(pattern, name string) (bool, error) {
return filepath.Match(pattern, name)
}
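A short usage note, assuming the vendored import path: callers hold the interface type, so the filepath-backed default can later be swapped for, say, a case-insensitive or remote driver without touching call sites.

```go
package main

import (
	"fmt"

	"github.com/containerd/continuity/pathdriver"
)

func main() {
	// LocalPathDriver simply delegates to path/filepath.
	var d pathdriver.PathDriver = pathdriver.LocalPathDriver
	p := d.Join("a", "b", "c.txt")
	fmt.Println(p, d.IsAbs(p), d.Base(p), d.Dir(p))
}
```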

13
vendor/github.com/containerd/continuity/vendor.conf generated vendored Normal file
View File

@@ -0,0 +1,13 @@
bazil.org/fuse 371fbbdaa8987b715bdd21d6adc4c9b20155f748
github.com/dustin/go-humanize bb3d318650d48840a39aa21a027c6630e198e626
github.com/golang/protobuf 1e59b77b52bf8e4b449a57e6f79f21226d571845
github.com/inconshreveable/mousetrap 76626ae9c91c4f2a10f34cad8ce83ea42c93bb75
github.com/opencontainers/go-digest 279bed98673dd5bef374d3b6e4b09e2af76183bf
github.com/pkg/errors f15c970de5b76fac0b59abb32d62c17cc7bed265
github.com/sirupsen/logrus 89742aefa4b206dcf400792f3bd35b542998eb3b
github.com/spf13/cobra 2da4a54c5ceefcee7ca5dd0eea1e18a3b6366489
github.com/spf13/pflag 4c012f6dcd9546820e378d0bdda4d8fc772cdfea
golang.org/x/crypto 9f005a07e0d31d45e6656d241bb5c0f2efd4bc94
golang.org/x/net a337091b0525af65de94df2eb7e98bd9962dcbe2
golang.org/x/sync 450f422ab23cf9881c94e2db30cac0eb1b7cf80c
golang.org/x/sys 665f6529cca930e27b831a0d1dafffbe1c172924

View File

@@ -65,7 +65,9 @@ the primary downside is that creating new signatures with the Golang-only implem
- `containers_image_ostree_stub`: Instead of importing `ostree:` transport in `github.com/containers/image/transports/alltransports`, use a stub which reports that the transport is not supported. This allows building the library without requiring the `libostree` development libraries. The `github.com/containers/image/ostree` package is completely disabled
and impossible to import when this build tag is in use.
## Contributing
## [Contributing](CONTRIBUTING.md)
Information about contributing to this project.
When developing this library, please use `make` (or `make … BUILDTAGS=…`) to take advantage of the tests and validation.

View File

@@ -347,6 +347,9 @@ func checkImageDestinationForCurrentRuntimeOS(ctx context.Context, sys *types.Sy
// updateEmbeddedDockerReference handles the Docker reference embedded in Docker schema1 manifests.
func (ic *imageCopier) updateEmbeddedDockerReference() error {
if ic.c.dest.IgnoresEmbeddedDockerReference() {
return nil // Destination would prefer us not to update the embedded reference.
}
destRef := ic.c.dest.Reference().DockerReference()
if destRef == nil {
return nil // Destination does not care about Docker references
@@ -601,6 +604,7 @@ func computeDiffID(stream io.Reader, decompressor compression.DecompressorFunc)
if err != nil {
return "", err
}
defer s.Close()
stream = s
}
@@ -670,10 +674,12 @@ func (c *copier) copyBlobFromStream(ctx context.Context, srcStream io.Reader, sr
inputInfo.Size = -1
} else if canModifyBlob && c.dest.DesiredLayerCompression() == types.Decompress && isCompressed {
logrus.Debugf("Blob will be decompressed")
destStream, err = decompressor(destStream)
s, err := decompressor(destStream)
if err != nil {
return types.BlobInfo{}, err
}
defer s.Close()
destStream = s
inputInfo.Digest = ""
inputInfo.Size = -1
} else {

View File

@@ -117,6 +117,13 @@ func (d *dirImageDestination) MustMatchRuntimeOS() bool {
return false
}
// IgnoresEmbeddedDockerReference returns true iff the destination does not care about Image.EmbeddedDockerReferenceConflicts(),
// and would prefer to receive an unmodified manifest instead of one modified for the destination.
// Does not make a difference if Reference().DockerReference() is nil.
func (d *dirImageDestination) IgnoresEmbeddedDockerReference() bool {
return false // N/A, DockerReference() returns nil.
}
// PutBlob writes contents of stream and returns data representing the result (with all data filled in).
// inputInfo.Digest can be optionally provided if known; it is not mandatory for the implementation to verify it.
// inputInfo.Size is the expected length of stream, if known.

View File

@@ -30,13 +30,13 @@ func newDockerClient(sys *types.SystemContext) (*dockerclient.Client, error) {
//
// Similarly, if we want to communicate over plain HTTP on a TCP socket, we also need to set
// TLSClientConfig to nil. This can be achieved by using the form `http://`
proto, _, _, err := dockerclient.ParseHost(host)
url, err := dockerclient.ParseHostURL(host)
if err != nil {
return nil, err
}
var httpClient *http.Client
if proto != "unix" {
if proto == "http" {
if url.Scheme != "unix" {
if url.Scheme == "http" {
httpClient = httpConfig()
} else {
hc, err := tlsConfig(sys)
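The switch from `ParseHost` to `ParseHostURL` means the scheme arrives as a field of a parsed `*url.URL`. A small sketch of the new call (import path as vendored; behavior per docker/docker's client package):

```go
package main

import (
	"fmt"

	dockerclient "github.com/docker/docker/client"
)

func main() {
	for _, host := range []string{
		"unix:///var/run/docker.sock",
		"http://localhost:2375",
		"tcp://localhost:2376",
	} {
		u, err := dockerclient.ParseHostURL(host)
		if err != nil {
			fmt.Println(host, "->", err)
			continue
		}
		fmt.Println(host, "-> scheme:", u.Scheme)
	}
}
```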

View File

@@ -280,13 +280,19 @@ func SearchRegistry(ctx context.Context, sys *types.SystemContext, registry, ima
v2Res := &V2Results{}
v1Res := &V1Results{}
// Get credentials from authfile for the underlying hostname
username, password, err := config.GetAuthentication(sys, registry)
if err != nil {
return nil, errors.Wrapf(err, "error getting username and password")
}
// The /v2/_catalog endpoint has been disabled for docker.io therefore the call made to that endpoint will fail
// So using the v1 hostname for docker.io for simplicity of implementation and the fact that it returns search results
if registry == dockerHostname {
registry = dockerV1Hostname
}
client, err := newDockerClientWithDetails(sys, registry, "", "", "", nil, "")
client, err := newDockerClientWithDetails(sys, registry, username, password, "", nil, "")
if err != nil {
return nil, errors.Wrapf(err, "error creating new docker client")
}
@@ -443,6 +449,9 @@ func (c *dockerClient) getBearerToken(ctx context.Context, realm, service, scope
}
authReq = authReq.WithContext(ctx)
getParams := authReq.URL.Query()
if c.username != "" {
getParams.Add("account", c.username)
}
if service != "" {
getParams.Add("service", service)
}

View File

@@ -95,6 +95,13 @@ func (d *dockerImageDestination) MustMatchRuntimeOS() bool {
return false
}
// IgnoresEmbeddedDockerReference returns true iff the destination does not care about Image.EmbeddedDockerReferenceConflicts(),
// and would prefer to receive an unmodified manifest instead of one modified for the destination.
// Does not make a difference if Reference().DockerReference() is nil.
func (d *dockerImageDestination) IgnoresEmbeddedDockerReference() bool {
return false // We do want the manifest updated; older registry versions refuse manifests if the embedded reference does not match.
}
// sizeCounter is an io.Writer which only counts the total size of its input.
type sizeCounter struct{ size int64 }

View File

@@ -75,6 +75,13 @@ func (d *Destination) MustMatchRuntimeOS() bool {
return false
}
// IgnoresEmbeddedDockerReference returns true iff the destination does not care about Image.EmbeddedDockerReferenceConflicts(),
// and would prefer to receive an unmodified manifest instead of one modified for the destination.
// Does not make a difference if Reference().DockerReference() is nil.
func (d *Destination) IgnoresEmbeddedDockerReference() bool {
return false // N/A, we only accept schema2 images where EmbeddedDockerReferenceConflicts() is always false.
}
// PutBlob writes contents of stream and returns data representing the result (with all data filled in).
// inputInfo.Digest can be optionally provided if known; it is not mandatory for the implementation to verify it.
// inputInfo.Size is the expected length of stream, if known.

View File

@@ -3,7 +3,6 @@ package tarfile
import (
"archive/tar"
"bytes"
"compress/gzip"
"context"
"encoding/json"
"io"
@@ -43,8 +42,7 @@ type layerInfo struct {
// the source of an image.
// To do for both the NewSourceFromFile and NewSourceFromStream functions
// NewSourceFromFile returns a tarfile.Source for the specified path
// NewSourceFromFile supports both conpressed and uncompressed input
// NewSourceFromFile returns a tarfile.Source for the specified path.
func NewSourceFromFile(path string) (*Source, error) {
file, err := os.Open(path)
if err != nil {
@@ -52,19 +50,24 @@ func NewSourceFromFile(path string) (*Source, error) {
}
defer file.Close()
reader, err := gzip.NewReader(file)
// If the file is already not compressed we can just return the file itself
// as a source. Otherwise we pass the stream to NewSourceFromStream.
stream, isCompressed, err := compression.AutoDecompress(file)
if err != nil {
return nil, errors.Wrapf(err, "Error detecting compression for file %q", path)
}
defer stream.Close()
if !isCompressed {
return &Source{
tarPath: path,
}, nil
}
defer reader.Close()
return NewSourceFromStream(reader)
return NewSourceFromStream(stream)
}
// NewSourceFromStream returns a tarfile.Source for the specified inputStream, which must be uncompressed.
// The caller can close the inputStream immediately after NewSourceFromFile returns.
// NewSourceFromStream returns a tarfile.Source for the specified inputStream,
// which can be either compressed or uncompressed. The caller can close the
// inputStream immediately after NewSourceFromFile returns.
func NewSourceFromStream(inputStream io.Reader) (*Source, error) {
// FIXME: use SystemContext here.
// Save inputStream to a temporary file
@@ -81,8 +84,20 @@ func NewSourceFromStream(inputStream io.Reader) (*Source, error) {
}
}()
// TODO: This can take quite some time, and should ideally be cancellable using a context.Context.
if _, err := io.Copy(tarCopyFile, inputStream); err != nil {
// In order to be compatible with docker-load, we need to support
// auto-decompression (it's also a nice quality-of-life thing to avoid
// giving users really confusing "invalid tar header" errors).
uncompressedStream, _, err := compression.AutoDecompress(inputStream)
if err != nil {
return nil, errors.Wrap(err, "Error auto-decompressing input")
}
defer uncompressedStream.Close()
// Copy the plain archive to the temporary file.
//
// TODO: This can take quite some time, and should ideally be cancellable
// using a context.Context.
if _, err := io.Copy(tarCopyFile, uncompressedStream); err != nil {
return nil, errors.Wrapf(err, "error copying contents to temporary file %q", tarCopyFile.Name())
}
succeeded = true
@@ -292,7 +307,25 @@ func (s *Source) prepareLayerData(tarManifest *ManifestItem, parsedConfig *manif
return nil, err
}
if li, ok := unknownLayerSizes[h.Name]; ok {
li.size = h.Size
// Since GetBlob will decompress layers that are compressed we need
// to do the decompression here as well, otherwise we will
// incorrectly report the size. Pretty critical, since tools like
// umoci always compress layer blobs. Obviously we only bother with
// the slower method of checking if it's compressed.
uncompressedStream, isCompressed, err := compression.AutoDecompress(t)
if err != nil {
return nil, errors.Wrapf(err, "Error auto-decompressing %s to determine its size", h.Name)
}
defer uncompressedStream.Close()
uncompressedSize := h.Size
if isCompressed {
uncompressedSize, err = io.Copy(ioutil.Discard, uncompressedStream)
if err != nil {
return nil, errors.Wrapf(err, "Error reading %s to find its size", h.Name)
}
}
li.size = uncompressedSize
delete(unknownLayerSizes, h.Name)
}
}
@@ -346,16 +379,22 @@ func (s *Source) GetManifest(ctx context.Context, instanceDigest *digest.Digest)
return s.generatedManifest, manifest.DockerV2Schema2MediaType, nil
}
type readCloseWrapper struct {
// uncompressedReadCloser is an io.ReadCloser that closes both the uncompressed stream and the underlying input.
type uncompressedReadCloser struct {
io.Reader
closeFunc func() error
underlyingCloser func() error
uncompressedCloser func() error
}
func (r readCloseWrapper) Close() error {
if r.closeFunc != nil {
return r.closeFunc()
func (r uncompressedReadCloser) Close() error {
var res error
if err := r.uncompressedCloser(); err != nil {
res = err
}
return nil
if err := r.underlyingCloser(); err != nil && res == nil {
res = err
}
return res
}
// GetBlob returns a stream for the specified blob, and the blobs size (or -1 if unknown).
@@ -369,10 +408,16 @@ func (s *Source) GetBlob(ctx context.Context, info types.BlobInfo) (io.ReadClose
}
if li, ok := s.knownLayers[info.Digest]; ok { // diffID is a digest of the uncompressed tarball,
stream, err := s.openTarComponent(li.path)
underlyingStream, err := s.openTarComponent(li.path)
if err != nil {
return nil, 0, err
}
closeUnderlyingStream := true
defer func() {
if closeUnderlyingStream {
underlyingStream.Close()
}
}()
// In order to handle the fact that digests != diffIDs (and thus that a
// caller which is trying to verify the blob will run into problems),
@@ -386,22 +431,17 @@ func (s *Source) GetBlob(ctx context.Context, info types.BlobInfo) (io.ReadClose
// be verifing a "digest" which is not the actual layer's digest (but
// is instead the DiffID).
decompressFunc, reader, err := compression.DetectCompression(stream)
uncompressedStream, _, err := compression.AutoDecompress(underlyingStream)
if err != nil {
return nil, 0, errors.Wrapf(err, "Detecting compression in blob %s", info.Digest)
return nil, 0, errors.Wrapf(err, "Error auto-decompressing blob %s", info.Digest)
}
if decompressFunc != nil {
reader, err = decompressFunc(reader)
if err != nil {
return nil, 0, errors.Wrapf(err, "Decompressing blob %s stream", info.Digest)
}
}
newStream := readCloseWrapper{
Reader: reader,
closeFunc: stream.Close,
newStream := uncompressedReadCloser{
Reader: uncompressedStream,
underlyingCloser: underlyingStream.Close,
uncompressedCloser: uncompressedStream.Close,
}
closeUnderlyingStream = false
return newStream, li.size, nil
}
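The recurring building block in this file is `compression.AutoDecompress`, which detects any known compression format and otherwise passes the stream through unchanged. A minimal sketch of the pattern (assuming the containers/image vendored in this PR):

```go
package main

import (
	"fmt"
	"io"
	"io/ioutil"
	"os"

	"github.com/containers/image/pkg/compression"
)

// uncompressedSize mirrors the size-fixup logic above: decompress if needed,
// then count the plain bytes.
func uncompressedSize(path string) (int64, bool, error) {
	f, err := os.Open(path)
	if err != nil {
		return 0, false, err
	}
	defer f.Close()

	stream, isCompressed, err := compression.AutoDecompress(f)
	if err != nil {
		return 0, false, err
	}
	defer stream.Close()

	n, err := io.Copy(ioutil.Discard, stream)
	return n, isCompressed, err
}

func main() {
	n, wasCompressed, err := uncompressedSize("layer.tar")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("uncompressed size:", n, "was compressed:", wasCompressed)
}
```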

View File

@@ -7,35 +7,44 @@ registries.conf - Syntax of System Registry Configuration File
# DESCRIPTION
The REGISTRIES configuration file is a system-wide configuration file for container image
registries. The file format is TOML.
registries. The file format is TOML. The valid categories are: 'registries.search',
'registries.insecure', and 'registries.block'.
# FORMAT
The TOML_format is used to build simple list format for registries under two
categories: `search` and `insecure`. You can list multiple registries using
as a comma separated list.
The TOML_format is used to build a simple list format for registries under three
categories: `registries.search`, `registries.insecure`, and `registries.block`.
You can list multiple registries using a comma separated list.
Search registries are used when the caller of a container runtime does not fully specify the
container image that they want to execute. These registries are prepended onto the front
of the specified container image until the named image is found at a registry.
of the specified container image until the named image is found at a registry.
Insecure Registries. By default container runtimes use TLS when retrieving images
from a registry. If the registry is not setup with TLS, then the container runtime
will fail to pull images from the registry. If you add the registry to the list of
insecure registries then the container runtime will attempt use standard web protocols to
pull the image. It also allows you to pull from a registry with self-signed certificates.
Note insecure registries can be used for any registry, not just the
registries listed under search.
Note insecure registries can be used for any registry, not just the registries listed
under search.
The following example configuration defines two searchable registries and one
insecure registry.
Block Registries. The registries in this category are not pulled from when
retrieving images.
# EXAMPLE
The following example configuration defines two searchable registries, one
insecure registry, and two blocked registries.
```
[registries.search]
registries = ["registry1.com", "registry2.com"]
registries = ['registry1.com', 'registry2.com']
[registries.insecure]
registries = ["registry3.com"]
registries = ['registry3.com']
[registries.block]
registries = ['registry.untrusted.com', 'registry.unsafe.com']
```
# HISTORY
Aug 2017, Originally compiled by Brent Baude <bbaude@redhat.com>
Jun 2018, Updated by Tom Sweeney <tsweeney@redhat.com>
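Since `github.com/BurntSushi/toml` is already in the vendor list, a consumer can decode the example above with struct tags mirroring the three categories — a hedged sketch, not the actual containers/image sysregistries code:

```go
package main

import (
	"fmt"

	"github.com/BurntSushi/toml"
)

// registryList mirrors one [registries.*] table.
type registryList struct {
	Registries []string `toml:"registries"`
}

type registriesConf struct {
	Registries struct {
		Search   registryList `toml:"search"`
		Insecure registryList `toml:"insecure"`
		Block    registryList `toml:"block"`
	} `toml:"registries"`
}

func main() {
	const example = `
[registries.search]
registries = ['registry1.com', 'registry2.com']
[registries.insecure]
registries = ['registry3.com']
[registries.block]
registries = ['registry.untrusted.com', 'registry.unsafe.com']
`
	var conf registriesConf
	if _, err := toml.Decode(example, &conf); err != nil {
		panic(err)
	}
	fmt.Println(conf.Registries.Search.Registries)
	fmt.Println(conf.Registries.Insecure.Registries)
	fmt.Println(conf.Registries.Block.Registries)
}
```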

View File

@@ -2,7 +2,6 @@ package image
import (
"context"
"encoding/json"
"github.com/containers/image/docker/reference"
"github.com/containers/image/manifest"
@@ -25,8 +24,12 @@ func manifestSchema1FromManifest(manifestBlob []byte) (genericManifest, error) {
}
// manifestSchema1FromComponents builds a new manifestSchema1 from the supplied data.
func manifestSchema1FromComponents(ref reference.Named, fsLayers []manifest.Schema1FSLayers, history []manifest.Schema1History, architecture string) genericManifest {
return &manifestSchema1{m: manifest.Schema1FromComponents(ref, fsLayers, history, architecture)}
func manifestSchema1FromComponents(ref reference.Named, fsLayers []manifest.Schema1FSLayers, history []manifest.Schema1History, architecture string) (genericManifest, error) {
m, err := manifest.Schema1FromComponents(ref, fsLayers, history, architecture)
if err != nil {
return nil, err
}
return &manifestSchema1{m: m}, nil
}
func (m *manifestSchema1) serialize() ([]byte, error) {
@@ -64,7 +67,7 @@ func (m *manifestSchema1) OCIConfig(ctx context.Context) (*imgspecv1.Image, erro
// The Digest field is guaranteed to be provided; Size may be -1.
// WARNING: The list may contain duplicates, and they are semantically relevant.
func (m *manifestSchema1) LayerInfos() []types.BlobInfo {
return m.m.LayerInfos()
return manifestLayerInfosToBlobInfos(m.m.LayerInfos())
}
// EmbeddedDockerReferenceConflicts whether a Docker reference embedded in the manifest, if any, conflicts with destination ref.
@@ -148,12 +151,12 @@ func (m *manifestSchema1) UpdatedImage(ctx context.Context, options types.Manife
// Based on github.com/docker/docker/distribution/pull_v2.go
func (m *manifestSchema1) convertToManifestSchema2(uploadedLayerInfos []types.BlobInfo, layerDiffIDs []digest.Digest) (genericManifest, error) {
if len(m.m.History) == 0 {
// What would this even mean?! Anyhow, the rest of the code depends on fsLayers[0] and history[0] existing.
if len(m.m.ExtractedV1Compatibility) == 0 {
// What would this even mean?! Anyhow, the rest of the code depends on FSLayers[0] and ExtractedV1Compatibility[0] existing.
return nil, errors.Errorf("Cannot convert an image with 0 history entries to %s", manifest.DockerV2Schema2MediaType)
}
if len(m.m.History) != len(m.m.FSLayers) {
return nil, errors.Errorf("Inconsistent schema 1 manifest: %d history entries, %d fsLayers entries", len(m.m.History), len(m.m.FSLayers))
if len(m.m.ExtractedV1Compatibility) != len(m.m.FSLayers) {
return nil, errors.Errorf("Inconsistent schema 1 manifest: %d history entries, %d fsLayers entries", len(m.m.ExtractedV1Compatibility), len(m.m.FSLayers))
}
if uploadedLayerInfos != nil && len(uploadedLayerInfos) != len(m.m.FSLayers) {
return nil, errors.Errorf("Internal error: uploaded %d blobs, but schema1 manifest has %d fsLayers", len(uploadedLayerInfos), len(m.m.FSLayers))
@@ -165,14 +168,10 @@ func (m *manifestSchema1) convertToManifestSchema2(uploadedLayerInfos []types.Bl
// Build a list of the diffIDs for the non-empty layers.
diffIDs := []digest.Digest{}
var layers []manifest.Schema2Descriptor
for v1Index := len(m.m.History) - 1; v1Index >= 0; v1Index-- {
v2Index := (len(m.m.History) - 1) - v1Index
for v1Index := len(m.m.ExtractedV1Compatibility) - 1; v1Index >= 0; v1Index-- {
v2Index := (len(m.m.ExtractedV1Compatibility) - 1) - v1Index
var v1compat manifest.Schema1V1Compatibility
if err := json.Unmarshal([]byte(m.m.History[v1Index].V1Compatibility), &v1compat); err != nil {
return nil, errors.Wrapf(err, "Error decoding history entry %d", v1Index)
}
if !v1compat.ThrowAway {
if !m.m.ExtractedV1Compatibility[v1Index].ThrowAway {
var size int64
if uploadedLayerInfos != nil {
size = uploadedLayerInfos[v2Index].Size

View File

@@ -18,17 +18,17 @@ import (
"github.com/sirupsen/logrus"
)
// gzippedEmptyLayer is a gzip-compressed version of an empty tar file (1024 NULL bytes)
// GzippedEmptyLayer is a gzip-compressed version of an empty tar file (1024 NULL bytes)
// This comes from github.com/docker/distribution/manifest/schema1/config_builder.go; there is
// a non-zero embedded timestamp; we could zero that, but that would just waste storage space
// in registries, so lets use the same values.
var gzippedEmptyLayer = []byte{
var GzippedEmptyLayer = []byte{
31, 139, 8, 0, 0, 9, 110, 136, 0, 255, 98, 24, 5, 163, 96, 20, 140, 88,
0, 8, 0, 0, 255, 255, 46, 175, 181, 239, 0, 4, 0, 0,
}
// gzippedEmptyLayerDigest is a digest of gzippedEmptyLayer
const gzippedEmptyLayerDigest = digest.Digest("sha256:a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4")
// GzippedEmptyLayerDigest is a digest of GzippedEmptyLayer
const GzippedEmptyLayerDigest = digest.Digest("sha256:a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4")
type manifestSchema2 struct {
src types.ImageSource // May be nil if configBlob is not nil
@@ -95,11 +95,7 @@ func (m *manifestSchema2) ConfigBlob(ctx context.Context) ([]byte, error) {
if m.src == nil {
return nil, errors.Errorf("Internal error: neither src nor configBlob set in manifestSchema2")
}
stream, _, err := m.src.GetBlob(ctx, types.BlobInfo{
Digest: m.m.ConfigDescriptor.Digest,
Size: m.m.ConfigDescriptor.Size,
URLs: m.m.ConfigDescriptor.URLs,
})
stream, _, err := m.src.GetBlob(ctx, manifest.BlobInfoFromSchema2Descriptor(m.m.ConfigDescriptor))
if err != nil {
return nil, err
}
@@ -121,7 +117,7 @@ func (m *manifestSchema2) ConfigBlob(ctx context.Context) ([]byte, error) {
// The Digest field is guaranteed to be provided; Size may be -1.
// WARNING: The list may contain duplicates, and they are semantically relevant.
func (m *manifestSchema2) LayerInfos() []types.BlobInfo {
return m.m.LayerInfos()
return manifestLayerInfosToBlobInfos(m.m.LayerInfos())
}
// EmbeddedDockerReferenceConflicts whether a Docker reference embedded in the manifest, if any, conflicts with destination ref.
@@ -253,16 +249,16 @@ func (m *manifestSchema2) convertToManifestSchema1(ctx context.Context, dest typ
if historyEntry.EmptyLayer {
if !haveGzippedEmptyLayer {
logrus.Debugf("Uploading empty layer during conversion to schema 1")
info, err := dest.PutBlob(ctx, bytes.NewReader(gzippedEmptyLayer), types.BlobInfo{Digest: gzippedEmptyLayerDigest, Size: int64(len(gzippedEmptyLayer))}, false)
info, err := dest.PutBlob(ctx, bytes.NewReader(GzippedEmptyLayer), types.BlobInfo{Digest: GzippedEmptyLayerDigest, Size: int64(len(GzippedEmptyLayer))}, false)
if err != nil {
return nil, errors.Wrap(err, "Error uploading empty layer")
}
if info.Digest != gzippedEmptyLayerDigest {
return nil, errors.Errorf("Internal error: Uploaded empty layer has digest %#v instead of %s", info.Digest, gzippedEmptyLayerDigest)
if info.Digest != GzippedEmptyLayerDigest {
return nil, errors.Errorf("Internal error: Uploaded empty layer has digest %#v instead of %s", info.Digest, GzippedEmptyLayerDigest)
}
haveGzippedEmptyLayer = true
}
blobDigest = gzippedEmptyLayerDigest
blobDigest = GzippedEmptyLayerDigest
} else {
if nonemptyLayerIndex >= len(m.m.LayersDescriptors) {
return nil, errors.Errorf("Invalid image configuration, needs more than the %d distributed layers", len(m.m.LayersDescriptors))
@@ -308,7 +304,10 @@ func (m *manifestSchema2) convertToManifestSchema1(ctx context.Context, dest typ
}
history[0].V1Compatibility = string(v1Config)
m1 := manifestSchema1FromComponents(dest.Reference().DockerReference(), fsLayers, history, imageConfig.Architecture)
m1, err := manifestSchema1FromComponents(dest.Reference().DockerReference(), fsLayers, history, imageConfig.Architecture)
if err != nil {
return nil, err // This should never happen, we should have created all the components correctly.
}
return memoryImageFromManifest(m1), nil
}
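Both newly exported names are easy to sanity-check: the digest is just the SHA-256 of the compressed bytes, and gunzipping them yields the 1024 NUL bytes of an empty tar file. A standalone sketch using the byte values from the hunk above:

```go
package main

import (
	"bytes"
	"compress/gzip"
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io/ioutil"
)

var gzippedEmptyLayer = []byte{
	31, 139, 8, 0, 0, 9, 110, 136, 0, 255, 98, 24, 5, 163, 96, 20, 140, 88,
	0, 8, 0, 0, 255, 255, 46, 175, 181, 239, 0, 4, 0, 0,
}

func main() {
	// The digest is the SHA-256 of the compressed bytes themselves.
	sum := sha256.Sum256(gzippedEmptyLayer)
	fmt.Println("sha256:" + hex.EncodeToString(sum[:])) // ...a3ed95caeb02...

	// Decompressing yields the 1024 NUL bytes of an empty tar file.
	zr, err := gzip.NewReader(bytes.NewReader(gzippedEmptyLayer))
	if err != nil {
		panic(err)
	}
	plain, err := ioutil.ReadAll(zr)
	if err != nil {
		panic(err)
	}
	fmt.Println(len(plain), bytes.Equal(plain, make([]byte, 1024))) // 1024 true
}
```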

View File

@@ -62,3 +62,12 @@ func manifestInstanceFromBlob(ctx context.Context, sys *types.SystemContext, src
return nil, fmt.Errorf("Unimplemented manifest MIME type %s", mt)
}
}
// manifestLayerInfosToBlobInfos extracts a []types.BlobInfo from a []manifest.LayerInfo.
func manifestLayerInfosToBlobInfos(layers []manifest.LayerInfo) []types.BlobInfo {
blobs := make([]types.BlobInfo, len(layers))
for i, layer := range layers {
blobs[i] = layer.BlobInfo
}
return blobs
}

View File

@@ -60,11 +60,7 @@ func (m *manifestOCI1) ConfigBlob(ctx context.Context) ([]byte, error) {
if m.src == nil {
return nil, errors.Errorf("Internal error: neither src nor configBlob set in manifestOCI1")
}
stream, _, err := m.src.GetBlob(ctx, types.BlobInfo{
Digest: m.m.Config.Digest,
Size: m.m.Config.Size,
URLs: m.m.Config.URLs,
})
stream, _, err := m.src.GetBlob(ctx, manifest.BlobInfoFromOCI1Descriptor(m.m.Config))
if err != nil {
return nil, err
}
@@ -101,7 +97,7 @@ func (m *manifestOCI1) OCIConfig(ctx context.Context) (*imgspecv1.Image, error)
// The Digest field is guaranteed to be provided; Size may be -1.
// WARNING: The list may contain duplicates, and they are semantically relevant.
func (m *manifestOCI1) LayerInfos() []types.BlobInfo {
return m.m.LayerInfos()
return manifestLayerInfosToBlobInfos(m.m.LayerInfos())
}
// EmbeddedDockerReferenceConflicts whether a Docker reference embedded in the manifest, if any, conflicts with destination ref.

View File

@@ -25,25 +25,28 @@ type Schema1History struct {
// Schema1 is a manifest in docker/distribution schema 1.
type Schema1 struct {
Name string `json:"name"`
Tag string `json:"tag"`
Architecture string `json:"architecture"`
FSLayers []Schema1FSLayers `json:"fsLayers"`
History []Schema1History `json:"history"`
SchemaVersion int `json:"schemaVersion"`
Name string `json:"name"`
Tag string `json:"tag"`
Architecture string `json:"architecture"`
FSLayers []Schema1FSLayers `json:"fsLayers"`
History []Schema1History `json:"history"` // Keep this in sync with ExtractedV1Compatibility!
ExtractedV1Compatibility []Schema1V1Compatibility `json:"-"` // Keep this in sync with History! Does not contain the full config (Schema2V1Image)
SchemaVersion int `json:"schemaVersion"`
}
type schema1V1CompatibilityContainerConfig struct {
Cmd []string
}
// Schema1V1Compatibility is a v1Compatibility in docker/distribution schema 1.
type Schema1V1Compatibility struct {
ID string `json:"id"`
Parent string `json:"parent,omitempty"`
Comment string `json:"comment,omitempty"`
Created time.Time `json:"created"`
ContainerConfig struct {
Cmd []string
} `json:"container_config,omitempty"`
Author string `json:"author,omitempty"`
ThrowAway bool `json:"throwaway,omitempty"`
ID string `json:"id"`
Parent string `json:"parent,omitempty"`
Comment string `json:"comment,omitempty"`
Created time.Time `json:"created"`
ContainerConfig schema1V1CompatibilityContainerConfig `json:"container_config,omitempty"`
Author string `json:"author,omitempty"`
ThrowAway bool `json:"throwaway,omitempty"`
}
// Schema1FromManifest creates a Schema1 manifest instance from a manifest blob.
@@ -57,11 +60,8 @@ func Schema1FromManifest(manifest []byte) (*Schema1, error) {
if s1.SchemaVersion != 1 {
return nil, errors.Errorf("unsupported schema version %d", s1.SchemaVersion)
}
if len(s1.FSLayers) != len(s1.History) {
return nil, errors.New("length of history not equal to number of layers")
}
if len(s1.FSLayers) == 0 {
return nil, errors.New("no FSLayers in manifest")
if err := s1.initialize(); err != nil {
return nil, err
}
if err := s1.fixManifestLayers(); err != nil {
return nil, err
@@ -70,7 +70,7 @@ func Schema1FromManifest(manifest []byte) (*Schema1, error) {
}
// Schema1FromComponents creates an Schema1 manifest instance from the supplied data.
func Schema1FromComponents(ref reference.Named, fsLayers []Schema1FSLayers, history []Schema1History, architecture string) *Schema1 {
func Schema1FromComponents(ref reference.Named, fsLayers []Schema1FSLayers, history []Schema1History, architecture string) (*Schema1, error) {
var name, tag string
if ref != nil { // Well, what to do if it _is_ nil? Most consumers actually don't use these fields nowadays, so we might as well try not supplying them.
name = reference.Path(ref)
@@ -78,7 +78,7 @@ func Schema1FromComponents(ref reference.Named, fsLayers []Schema1FSLayers, hist
tag = tagged.Tag()
}
}
return &Schema1{
s1 := Schema1{
Name: name,
Tag: tag,
Architecture: architecture,
@@ -86,6 +86,10 @@ func Schema1FromComponents(ref reference.Named, fsLayers []Schema1FSLayers, hist
History: history,
SchemaVersion: 1,
}
if err := s1.initialize(); err != nil {
return nil, err
}
return &s1, nil
}
// Schema1Clone creates a copy of the supplied Schema1 manifest.
@@ -94,31 +98,51 @@ func Schema1Clone(src *Schema1) *Schema1 {
return &copy
}
// initialize initializes ExtractedV1Compatibility and verifies invariants, so that the rest of this code can assume a minimally healthy manifest.
func (m *Schema1) initialize() error {
if len(m.FSLayers) != len(m.History) {
return errors.New("length of history not equal to number of layers")
}
if len(m.FSLayers) == 0 {
return errors.New("no FSLayers in manifest")
}
m.ExtractedV1Compatibility = make([]Schema1V1Compatibility, len(m.History))
for i, h := range m.History {
if err := json.Unmarshal([]byte(h.V1Compatibility), &m.ExtractedV1Compatibility[i]); err != nil {
return errors.Wrapf(err, "Error parsing v2s1 history entry %d", i)
}
}
return nil
}
// ConfigInfo returns a complete BlobInfo for the separate config object, or a BlobInfo{Digest:""} if there isn't a separate object.
func (m *Schema1) ConfigInfo() types.BlobInfo {
return types.BlobInfo{}
}
// LayerInfos returns a list of BlobInfos of layers referenced by this image, in order (the root layer first, and then successive layered layers).
// LayerInfos returns a list of LayerInfos of layers referenced by this image, in order (the root layer first, and then successive layered layers).
// The Digest field is guaranteed to be provided; Size may be -1.
// WARNING: The list may contain duplicates, and they are semantically relevant.
func (m *Schema1) LayerInfos() []types.BlobInfo {
layers := make([]types.BlobInfo, len(m.FSLayers))
func (m *Schema1) LayerInfos() []LayerInfo {
layers := make([]LayerInfo, len(m.FSLayers))
for i, layer := range m.FSLayers { // NOTE: This includes empty layers (where m.History.V1Compatibility->ThrowAway)
layers[(len(m.FSLayers)-1)-i] = types.BlobInfo{Digest: layer.BlobSum, Size: -1}
layers[(len(m.FSLayers)-1)-i] = LayerInfo{
BlobInfo: types.BlobInfo{Digest: layer.BlobSum, Size: -1},
EmptyLayer: m.ExtractedV1Compatibility[i].ThrowAway,
}
}
return layers
}
// UpdateLayerInfos replaces the original layers with the specified BlobInfos (size+digest+urls), in order (the root layer first, and then successive layered layers)
func (m *Schema1) UpdateLayerInfos(layerInfos []types.BlobInfo) error {
// Our LayerInfos includes empty layers (where m.History.V1Compatibility->ThrowAway), so expect them to be included here as well.
// Our LayerInfos includes empty layers (where m.ExtractedV1Compatibility[].ThrowAway), so expect them to be included here as well.
if len(m.FSLayers) != len(layerInfos) {
return errors.Errorf("Error preparing updated manifest: layer count changed from %d to %d", len(m.FSLayers), len(layerInfos))
}
m.FSLayers = make([]Schema1FSLayers, len(layerInfos))
for i, info := range layerInfos {
// (docker push) sets up m.History.V1Compatibility->{Id,Parent} based on values of info.Digest,
// (docker push) sets up m.ExtractedV1Compatibility[].{Id,Parent} based on values of info.Digest,
// but (docker pull) ignores them in favor of computing DiffIDs from uncompressed data, except verifying the child->parent links and uniqueness.
// So, we don't bother recomputing the IDs in m.History.V1Compatibility.
m.FSLayers[(len(layerInfos)-1)-i].BlobSum = info.Digest
@@ -144,31 +168,19 @@ func (m *Schema1) Serialize() ([]byte, error) {
// Note that even after this succeeds, m.FSLayers may contain duplicate entries
// (for Dockerfile operations which change the configuration but not the filesystem).
func (m *Schema1) fixManifestLayers() error {
type imageV1 struct {
ID string
Parent string
}
// Per the specification, we can assume that len(m.FSLayers) == len(m.History)
imgs := make([]*imageV1, len(m.FSLayers))
for i := range m.FSLayers {
img := &imageV1{}
if err := json.Unmarshal([]byte(m.History[i].V1Compatibility), img); err != nil {
return err
}
imgs[i] = img
if err := validateV1ID(img.ID); err != nil {
// m.initialize() has verified that len(m.FSLayers) == len(m.History)
for _, compat := range m.ExtractedV1Compatibility {
if err := validateV1ID(compat.ID); err != nil {
return err
}
}
if imgs[len(imgs)-1].Parent != "" {
if m.ExtractedV1Compatibility[len(m.ExtractedV1Compatibility)-1].Parent != "" {
return errors.New("Invalid parent ID in the base layer of the image")
}
// check general duplicates to error instead of a deadlock
idmap := make(map[string]struct{})
var lastID string
for _, img := range imgs {
for _, img := range m.ExtractedV1Compatibility {
// skip IDs that appear after each other, we handle those later
if _, exists := idmap[img.ID]; img.ID != lastID && exists {
return errors.Errorf("ID %+v appears multiple times in manifest", img.ID)
@@ -177,12 +189,13 @@ func (m *Schema1) fixManifestLayers() error {
idmap[lastID] = struct{}{}
}
// backwards loop so that we keep the remaining indexes after removing items
for i := len(imgs) - 2; i >= 0; i-- {
if imgs[i].ID == imgs[i+1].ID { // repeated ID. remove and continue
for i := len(m.ExtractedV1Compatibility) - 2; i >= 0; i-- {
if m.ExtractedV1Compatibility[i].ID == m.ExtractedV1Compatibility[i+1].ID { // repeated ID. remove and continue
m.FSLayers = append(m.FSLayers[:i], m.FSLayers[i+1:]...)
m.History = append(m.History[:i], m.History[i+1:]...)
} else if imgs[i].Parent != imgs[i+1].ID {
return errors.Errorf("Invalid parent ID. Expected %v, got %v", imgs[i+1].ID, imgs[i].Parent)
m.ExtractedV1Compatibility = append(m.ExtractedV1Compatibility[:i], m.ExtractedV1Compatibility[i+1:]...)
} else if m.ExtractedV1Compatibility[i].Parent != m.ExtractedV1Compatibility[i+1].ID {
return errors.Errorf("Invalid parent ID. Expected %v, got %v", m.ExtractedV1Compatibility[i+1].ID, m.ExtractedV1Compatibility[i].Parent)
}
}
return nil
@@ -209,7 +222,7 @@ func (m *Schema1) Inspect(_ func(types.BlobInfo) ([]byte, error)) (*types.ImageI
DockerVersion: s1.DockerVersion,
Architecture: s1.Architecture,
Os: s1.OS,
Layers: LayerInfosToStrings(m.LayerInfos()),
Layers: layerInfosToStrings(m.LayerInfos()),
}
if s1.Config != nil {
i.Labels = s1.Config.Labels
@@ -240,11 +253,7 @@ func (m *Schema1) ToSchema2Config(diffIDs []digest.Digest) ([]byte, error) {
}
// Build the history.
convertedHistory := []Schema2History{}
for _, h := range m.History {
compat := Schema1V1Compatibility{}
if err := json.Unmarshal([]byte(h.V1Compatibility), &compat); err != nil {
return nil, errors.Wrapf(err, "error decoding history information")
}
for _, compat := range m.ExtractedV1Compatibility {
hitem := Schema2History{
Created: compat.Created,
CreatedBy: strings.Join(compat.ContainerConfig.Cmd, " "),
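The motivation for `ExtractedV1Compatibility` is that each schema 1 history entry embeds its metadata as a JSON document serialized inside a string, which the old code re-parsed at every use. A self-contained sketch of that one-time extraction (trimmed-down mirror types, not the real manifest package):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// v1Compat mirrors a subset of Schema1V1Compatibility.
type v1Compat struct {
	ID        string `json:"id"`
	Parent    string `json:"parent,omitempty"`
	ThrowAway bool   `json:"throwaway,omitempty"`
}

func main() {
	// Each history entry's v1Compatibility field is JSON-in-a-string.
	history := []struct {
		V1Compatibility string `json:"v1Compatibility"`
	}{
		{V1Compatibility: `{"id":"abc","throwaway":true}`},
		{V1Compatibility: `{"id":"def","parent":"abc"}`},
	}
	// initialize()-style pass: parse once, keep alongside History.
	extracted := make([]v1Compat, len(history))
	for i, h := range history {
		if err := json.Unmarshal([]byte(h.V1Compatibility), &extracted[i]); err != nil {
			panic(err)
		}
	}
	fmt.Printf("%+v\n", extracted)
}
```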

View File

@@ -18,6 +18,16 @@ type Schema2Descriptor struct {
URLs []string `json:"urls,omitempty"`
}
// BlobInfoFromSchema2Descriptor returns a types.BlobInfo based on the input schema 2 descriptor.
func BlobInfoFromSchema2Descriptor(desc Schema2Descriptor) types.BlobInfo {
return types.BlobInfo{
Digest: desc.Digest,
Size: desc.Size,
URLs: desc.URLs,
MediaType: desc.MediaType,
}
}
// Schema2 is a manifest in docker/distribution schema 2.
type Schema2 struct {
SchemaVersion int `json:"schemaVersion"`
@@ -171,19 +181,18 @@ func Schema2Clone(src *Schema2) *Schema2 {
// ConfigInfo returns a complete BlobInfo for the separate config object, or a BlobInfo{Digest:""} if there isn't a separate object.
func (m *Schema2) ConfigInfo() types.BlobInfo {
return types.BlobInfo{Digest: m.ConfigDescriptor.Digest, Size: m.ConfigDescriptor.Size}
return BlobInfoFromSchema2Descriptor(m.ConfigDescriptor)
}
// LayerInfos returns a list of BlobInfos of layers referenced by this image, in order (the root layer first, and then successive layered layers).
// LayerInfos returns a list of LayerInfos of layers referenced by this image, in order (the root layer first, and then successive layered layers).
// The Digest field is guaranteed to be provided; Size may be -1.
// WARNING: The list may contain duplicates, and they are semantically relevant.
func (m *Schema2) LayerInfos() []types.BlobInfo {
blobs := []types.BlobInfo{}
func (m *Schema2) LayerInfos() []LayerInfo {
blobs := []LayerInfo{}
for _, layer := range m.LayersDescriptors {
blobs = append(blobs, types.BlobInfo{
Digest: layer.Digest,
Size: layer.Size,
URLs: layer.URLs,
blobs = append(blobs, LayerInfo{
BlobInfo: BlobInfoFromSchema2Descriptor(layer),
EmptyLayer: false,
})
}
return blobs
@@ -227,7 +236,7 @@ func (m *Schema2) Inspect(configGetter func(types.BlobInfo) ([]byte, error)) (*t
DockerVersion: s2.DockerVersion,
Architecture: s2.Architecture,
Os: s2.OS,
Layers: LayerInfosToStrings(m.LayerInfos()),
Layers: layerInfosToStrings(m.LayerInfos()),
}
if s2.Config != nil {
i.Labels = s2.Config.Labels


@@ -50,10 +50,10 @@ var DefaultRequestedManifestMIMETypes = []string{
type Manifest interface {
// ConfigInfo returns a complete BlobInfo for the separate config object, or a BlobInfo{Digest:""} if there isn't a separate object.
ConfigInfo() types.BlobInfo
// LayerInfos returns a list of BlobInfos of layers referenced by this image, in order (the root layer first, and then successive layered layers).
// LayerInfos returns a list of LayerInfos of layers referenced by this image, in order (the root layer first, and then successive layered layers).
// The Digest field is guaranteed to be provided; Size may be -1.
// WARNING: The list may contain duplicates, and they are semantically relevant.
LayerInfos() []types.BlobInfo
LayerInfos() []LayerInfo
// UpdateLayerInfos replaces the original layers with the specified BlobInfos (size+digest+urls), in order (the root layer first, and then successive layered layers)
UpdateLayerInfos(layerInfos []types.BlobInfo) error
@@ -73,6 +73,12 @@ type Manifest interface {
Serialize() ([]byte, error)
}
// LayerInfo is an extended version of types.BlobInfo for low-level users of Manifest.LayerInfos.
type LayerInfo struct {
types.BlobInfo
EmptyLayer bool // The layer is an “empty”/“throwaway” one, and may or may not be physically represented in various transport / storage systems. false if the manifest type does not have the concept.
}
// GuessMIMEType guesses MIME type of a manifest and returns it _if it is recognized_, or "" if unknown or unrecognized.
// FIXME? We should, in general, prefer out-of-band MIME type instead of blindly parsing the manifest,
// but we may not have such metadata available (e.g. when the manifest is a local file).
@@ -227,9 +233,9 @@ func FromBlob(manblob []byte, mt string) (Manifest, error) {
}
}
// LayerInfosToStrings converts a list of layer infos, presumably obtained from a Manifest.LayerInfos()
// layerInfosToStrings converts a list of layer infos, presumably obtained from a Manifest.LayerInfos()
// method call, into a format suitable for inclusion in a types.ImageInspectInfo structure.
func LayerInfosToStrings(infos []types.BlobInfo) []string {
func layerInfosToStrings(infos []LayerInfo) []string {
layers := make([]string, len(infos))
for i, info := range infos {
layers[i] = info.Digest.String()
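
Because LayerInfo embeds types.BlobInfo, the blob fields stay promoted (info.Digest works unchanged) and the embedded value can be handed to APIs that still take a plain types.BlobInfo, as the storage destination's HasBlob call further below does. A consumer-side sketch against the post-change interface:

package example

import (
	"github.com/containers/image/manifest"
	"github.com/containers/image/types"
)

// physicalLayers keeps only layers that have a real blob behind them,
// skipping "empty"/"throwaway" history-only entries.
func physicalLayers(m manifest.Manifest) []types.BlobInfo {
	var blobs []types.BlobInfo
	for _, info := range m.LayerInfos() {
		if info.EmptyLayer {
			continue // nothing to fetch or store for this entry
		}
		blobs = append(blobs, info.BlobInfo) // the embedded plain BlobInfo
	}
	return blobs
}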


@@ -10,6 +10,17 @@ import (
"github.com/pkg/errors"
)
// BlobInfoFromOCI1Descriptor returns a types.BlobInfo based on the input OCI1 descriptor.
func BlobInfoFromOCI1Descriptor(desc imgspecv1.Descriptor) types.BlobInfo {
return types.BlobInfo{
Digest: desc.Digest,
Size: desc.Size,
URLs: desc.URLs,
Annotations: desc.Annotations,
MediaType: desc.MediaType,
}
}
// OCI1 is a manifest.Manifest implementation for OCI images.
// The underlying data from imgspecv1.Manifest is also available.
type OCI1 struct {
@@ -45,16 +56,19 @@ func OCI1Clone(src *OCI1) *OCI1 {
// ConfigInfo returns a complete BlobInfo for the separate config object, or a BlobInfo{Digest:""} if there isn't a separate object.
func (m *OCI1) ConfigInfo() types.BlobInfo {
return types.BlobInfo{Digest: m.Config.Digest, Size: m.Config.Size, Annotations: m.Config.Annotations}
return BlobInfoFromOCI1Descriptor(m.Config)
}
// LayerInfos returns a list of BlobInfos of layers referenced by this image, in order (the root layer first, and then successive layered layers).
// LayerInfos returns a list of LayerInfos of layers referenced by this image, in order (the root layer first, and then successive layered layers).
// The Digest field is guaranteed to be provided; Size may be -1.
// WARNING: The list may contain duplicates, and they are semantically relevant.
func (m *OCI1) LayerInfos() []types.BlobInfo {
blobs := []types.BlobInfo{}
func (m *OCI1) LayerInfos() []LayerInfo {
blobs := []LayerInfo{}
for _, layer := range m.Layers {
blobs = append(blobs, types.BlobInfo{Digest: layer.Digest, Size: layer.Size, Annotations: layer.Annotations, URLs: layer.URLs, MediaType: layer.MediaType})
blobs = append(blobs, LayerInfo{
BlobInfo: BlobInfoFromOCI1Descriptor(layer),
EmptyLayer: false,
})
}
return blobs
}
@@ -101,7 +115,7 @@ func (m *OCI1) Inspect(configGetter func(types.BlobInfo) ([]byte, error)) (*type
Labels: v1.Config.Labels,
Architecture: v1.Architecture,
Os: v1.OS,
Layers: LayerInfosToStrings(m.LayerInfos()),
Layers: layerInfosToStrings(m.LayerInfos()),
}
return i, nil
}


@@ -70,6 +70,13 @@ func (d *ociArchiveImageDestination) MustMatchRuntimeOS() bool {
return d.unpackedDest.MustMatchRuntimeOS()
}
// IgnoresEmbeddedDockerReference returns true iff the destination does not care about Image.EmbeddedDockerReferenceConflicts(),
// and would prefer to receive an unmodified manifest instead of one modified for the destination.
// Does not make a difference if Reference().DockerReference() is nil.
func (d *ociArchiveImageDestination) IgnoresEmbeddedDockerReference() bool {
return d.unpackedDest.IgnoresEmbeddedDockerReference()
}
// PutBlob writes contents of stream and returns data representing the result (with all data filled in).
// inputInfo.Digest can be optionally provided if known; it is not mandatory for the implementation to verify it.
// inputInfo.Size is the expected length of stream, if known.


@@ -18,9 +18,10 @@ import (
)
type ociImageDestination struct {
ref ociReference
index imgspecv1.Index
sharedBlobDir string
ref ociReference
index imgspecv1.Index
sharedBlobDir string
acceptUncompressedLayers bool
}
// newImageDestination returns an ImageDestination for writing to an existing directory.
@@ -43,6 +44,7 @@ func newImageDestination(sys *types.SystemContext, ref ociReference) (types.Imag
d := &ociImageDestination{ref: ref, index: *index}
if sys != nil {
d.sharedBlobDir = sys.OCISharedBlobDirPath
d.acceptUncompressedLayers = sys.OCIAcceptUncompressedLayers
}
if err := ensureDirectoryExists(d.ref.dir); err != nil {
@@ -81,6 +83,9 @@ func (d *ociImageDestination) SupportsSignatures(ctx context.Context) error {
}
func (d *ociImageDestination) DesiredLayerCompression() types.LayerCompression {
if d.acceptUncompressedLayers {
return types.PreserveOriginal
}
return types.Compress
}
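
The flag consulted here is the SystemContext field introduced in the types.go hunk near the end of this change set; a minimal sketch of a caller opting in:

package example

import "github.com/containers/image/types"

// uncompressedOCIContext asks OCI layout destinations to preserve the
// original layer compression: DesiredLayerCompression then returns
// types.PreserveOriginal instead of types.Compress.
func uncompressedOCIContext() *types.SystemContext {
	return &types.SystemContext{OCIAcceptUncompressedLayers: true}
}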
@@ -95,6 +100,13 @@ func (d *ociImageDestination) MustMatchRuntimeOS() bool {
return false
}
// IgnoresEmbeddedDockerReference returns true iff the destination does not care about Image.EmbeddedDockerReferenceConflicts(),
// and would prefer to receive an unmodified manifest instead of one modified for the destination.
// Does not make a difference if Reference().DockerReference() is nil.
func (d *ociImageDestination) IgnoresEmbeddedDockerReference() bool {
return false // N/A, DockerReference() returns nil.
}
// PutBlob writes contents of stream and returns data representing the result (with all data filled in).
// inputInfo.Digest can be optionally provided if known; it is not mandatory for the implementation to verify it.
// inputInfo.Size is the expected length of stream, if known.


@@ -23,8 +23,14 @@ func init() {
transports.Register(Transport)
}
// Transport is an ImageTransport for OCI directories.
var Transport = ociTransport{}
var (
// Transport is an ImageTransport for OCI directories.
Transport = ociTransport{}
// ErrMoreThanOneImage is an error returned when the manifest includes
// more than one image and the user should choose which one to use.
ErrMoreThanOneImage = errors.New("more than one image in oci, choose an image")
)
type ociTransport struct{}
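
Exporting the sentinel lets callers branch on the condition instead of matching message text; plain equality works because the hunk below returns it unwrapped. A hedged sketch (helper name and message are illustrative):

package example

import (
	"fmt"

	"github.com/containers/image/oci/layout"
)

// friendlyOCIError rewrites the sentinel into actionable advice for users.
func friendlyOCIError(err error) error {
	if err == layout.ErrMoreThanOneImage {
		return fmt.Errorf("the OCI directory contains several images; name one explicitly, e.g. oci:/path/to/dir:tag")
	}
	return err
}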
@@ -184,7 +190,7 @@ func (ref ociReference) getManifestDescriptor() (imgspecv1.Descriptor, error) {
d = &index.Manifests[0]
} else {
// ask user to choose image when more than one image in the oci directory
return imgspecv1.Descriptor{}, errors.Wrapf(err, "more than one image in oci, choose an image")
return imgspecv1.Descriptor{}, ErrMoreThanOneImage
}
} else {
// if image specified, look through all manifests for a match


@@ -369,6 +369,13 @@ func (d *openshiftImageDestination) MustMatchRuntimeOS() bool {
return false
}
// IgnoresEmbeddedDockerReference returns true iff the destination does not care about Image.EmbeddedDockerReferenceConflicts(),
// and would prefer to receive an unmodified manifest instead of one modified for the destination.
// Does not make a difference if Reference().DockerReference() is nil.
func (d *openshiftImageDestination) IgnoresEmbeddedDockerReference() bool {
return d.docker.IgnoresEmbeddedDockerReference()
}
// PutBlob writes contents of stream and returns data representing the result (with all data filled in).
// inputInfo.Digest can be optionally provided if known; it is not mandatory for the implementation to verify it.
// inputInfo.Size is the expected length of stream, if known.


@@ -14,6 +14,7 @@ import (
"os"
"os/exec"
"path/filepath"
"runtime"
"strconv"
"strings"
"syscall"
@@ -124,6 +125,13 @@ func (d *ostreeImageDestination) MustMatchRuntimeOS() bool {
return true
}
// IgnoresEmbeddedDockerReference returns true iff the destination does not care about Image.EmbeddedDockerReferenceConflicts(),
// and would prefer to receive an unmodified manifest instead of one modified for the destination.
// Does not make a difference if Reference().DockerReference() is nil.
func (d *ostreeImageDestination) IgnoresEmbeddedDockerReference() bool {
return false // N/A, DockerReference() returns nil.
}
func (d *ostreeImageDestination) PutBlob(ctx context.Context, stream io.Reader, inputInfo types.BlobInfo, isConfig bool) (types.BlobInfo, error) {
tmpDir, err := ioutil.TempDir(d.tmpDirPath, "blob")
if err != nil {
@@ -394,6 +402,9 @@ func (d *ostreeImageDestination) PutSignatures(ctx context.Context, signatures [
}
func (d *ostreeImageDestination) Commit(ctx context.Context) error {
runtime.LockOSThread()
defer runtime.UnlockOSThread()
repo, err := otbuiltin.OpenRepo(d.ref.repo)
if err != nil {
return err
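
Pinning with runtime.LockOSThread is the standard remedy when a C library keeps per-thread state (locks, handles, errno-style context), since the Go scheduler is otherwise free to migrate a goroutine between OS threads from one call to the next. The pattern in isolation, with a placeholder for the C-backed work:

package example

import "runtime"

// withPinnedThread runs fn with the calling goroutine wired to one OS thread,
// as required by C libraries that rely on thread-local state.
func withPinnedThread(fn func() error) error {
	runtime.LockOSThread()
	defer runtime.UnlockOSThread()
	return fn() // e.g. a sequence of cgo calls that must share a thread
}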


@@ -5,28 +5,34 @@ import (
"compress/bzip2"
"compress/gzip"
"io"
"io/ioutil"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
"github.com/ulikunitz/xz"
)
// DecompressorFunc returns the decompressed stream, given a compressed stream.
type DecompressorFunc func(io.Reader) (io.Reader, error)
// The caller must call Close() on the decompressed stream (even if the compressed input stream does not need closing!).
type DecompressorFunc func(io.Reader) (io.ReadCloser, error)
// GzipDecompressor is a DecompressorFunc for the gzip compression algorithm.
func GzipDecompressor(r io.Reader) (io.Reader, error) {
func GzipDecompressor(r io.Reader) (io.ReadCloser, error) {
return gzip.NewReader(r)
}
// Bzip2Decompressor is a DecompressorFunc for the bzip2 compression algorithm.
func Bzip2Decompressor(r io.Reader) (io.Reader, error) {
return bzip2.NewReader(r), nil
func Bzip2Decompressor(r io.Reader) (io.ReadCloser, error) {
return ioutil.NopCloser(bzip2.NewReader(r)), nil
}
// XzDecompressor is a DecompressorFunc for the xz compression algorithm.
func XzDecompressor(r io.Reader) (io.Reader, error) {
return nil, errors.New("Decompressing xz streams is not supported")
func XzDecompressor(r io.Reader) (io.ReadCloser, error) {
r, err := xz.NewReader(r)
if err != nil {
return nil, err
}
return ioutil.NopCloser(r), nil
}
// compressionAlgos is an internal implementation detail of DetectCompression
@@ -65,3 +71,24 @@ func DetectCompression(input io.Reader) (DecompressorFunc, io.Reader, error) {
return decompressor, io.MultiReader(bytes.NewReader(buffer[:n]), input), nil
}
// AutoDecompress takes a stream and returns an uncompressed version of the
// same stream.
// The caller must call Close() on the returned stream (even if the input does not need,
// or does not even support, closing!).
func AutoDecompress(stream io.Reader) (io.ReadCloser, bool, error) {
decompressor, stream, err := DetectCompression(stream)
if err != nil {
return nil, false, errors.Wrapf(err, "Error detecting compression")
}
var res io.ReadCloser
if decompressor != nil {
res, err = decompressor(stream)
if err != nil {
return nil, false, errors.Wrapf(err, "Error initializing decompression")
}
} else {
res = ioutil.NopCloser(stream)
}
return res, decompressor != nil, nil
}
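
A usage sketch for AutoDecompress; note that, per the contract above, the returned stream must be closed even when the input turned out not to be compressed (the file path stands in for any blob reader):

package example

import (
	"io"
	"os"

	"github.com/containers/image/pkg/compression"
)

func catDecompressed(path string) error {
	f, err := os.Open(path) // stand-in input blob
	if err != nil {
		return err
	}
	defer f.Close()

	stream, wasCompressed, err := compression.AutoDecompress(f)
	if err != nil {
		return err
	}
	defer stream.Close() // required even if wasCompressed is false

	_ = wasCompressed // callers may log or branch on this
	_, err = io.Copy(os.Stdout, stream)
	return err
}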


@@ -7,7 +7,6 @@ import (
"io/ioutil"
"os"
"path/filepath"
"strconv"
"strings"
"github.com/containers/image/types"
@@ -27,16 +26,12 @@ type dockerConfigFile struct {
CredHelpers map[string]string `json:"credHelpers,omitempty"`
}
const (
defaultPath = "/run"
authCfg = "containers"
authCfgFileName = "auth.json"
dockerCfg = ".docker"
dockerCfgFileName = "config.json"
dockerLegacyCfg = ".dockercfg"
)
var (
defaultPerUIDPathFormat = filepath.FromSlash("/run/containers/%d/auth.json")
xdgRuntimeDirPath = filepath.FromSlash("containers/auth.json")
dockerHomePath = filepath.FromSlash(".docker/config.json")
dockerLegacyHomePath = ".dockercfg"
// ErrNotLoggedIn is returned for users not logged into a registry
// that they are trying to logout of
ErrNotLoggedIn = errors.New("not logged in")
@@ -64,7 +59,7 @@ func GetAuthentication(sys *types.SystemContext, registry string) (string, strin
return sys.DockerAuthConfig.Username, sys.DockerAuthConfig.Password, nil
}
dockerLegacyPath := filepath.Join(homedir.Get(), dockerLegacyCfg)
dockerLegacyPath := filepath.Join(homedir.Get(), dockerLegacyHomePath)
var paths []string
pathToAuth, err := getPathToAuth(sys)
if err == nil {
@@ -75,7 +70,7 @@ func GetAuthentication(sys *types.SystemContext, registry string) (string, strin
// Logging the error as a warning instead and moving on to pulling the image
logrus.Warnf("%v: Trying to pull image in the event that it is a public image.", err)
}
paths = append(paths, filepath.Join(homedir.Get(), dockerCfg, dockerCfgFileName), dockerLegacyPath)
paths = append(paths, filepath.Join(homedir.Get(), dockerHomePath), dockerLegacyPath)
for _, path := range paths {
legacyFormat := path == dockerLegacyPath
@@ -135,15 +130,15 @@ func RemoveAllAuthentication(sys *types.SystemContext) error {
// getPath gets the path of the auth.json file
// The path can be overridden by the user if the overwrite-path flag is set
// If the flag is not set and XDG_RUNTIME_DIR is ser, the auth.json file is saved in XDG_RUNTIME_DIR/containers
// Otherwise, the auth.json file is stored in /run/user/UID/containers
// If the flag is not set and XDG_RUNTIME_DIR is set, the auth.json file is saved in XDG_RUNTIME_DIR/containers
// Otherwise, the auth.json file is stored in /run/containers/UID
func getPathToAuth(sys *types.SystemContext) (string, error) {
if sys != nil {
if sys.AuthFilePath != "" {
return sys.AuthFilePath, nil
}
if sys.RootForImplicitAbsolutePaths != "" {
return filepath.Join(sys.RootForImplicitAbsolutePaths, defaultPath, strconv.Itoa(os.Getuid()), authCfg, authCfgFileName), nil
return filepath.Join(sys.RootForImplicitAbsolutePaths, fmt.Sprintf(defaultPerUIDPathFormat, os.Getuid())), nil
}
}
@@ -155,14 +150,12 @@ func getPathToAuth(sys *types.SystemContext) (string, error) {
if os.IsNotExist(err) {
// This means the user set the XDG_RUNTIME_DIR variable and either forgot to create the directory
// or made a typo while setting the environment variable,
// so return an error referring to $XDG_RUNTIME_DIR instead of …/authCfgFileName inside.
// so return an error referring to $XDG_RUNTIME_DIR instead of xdgRuntimeDirPath inside.
return "", errors.Wrapf(err, "%q directory set by $XDG_RUNTIME_DIR does not exist. Either create the directory or unset $XDG_RUNTIME_DIR.", runtimeDir)
} // else ignore err and let the caller fail accessing …/authCfgFileName.
runtimeDir = filepath.Join(runtimeDir, authCfg)
} else {
runtimeDir = filepath.Join(defaultPath, authCfg, strconv.Itoa(os.Getuid()))
} // else ignore err and let the caller fail accessing xdgRuntimeDirPath.
return filepath.Join(runtimeDir, xdgRuntimeDirPath), nil
}
return filepath.Join(runtimeDir, authCfgFileName), nil
return fmt.Sprintf(defaultPerUIDPathFormat, os.Getuid()), nil
}
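
Spelled out, the lookup fallback implemented above: an explicit AuthFilePath override wins, then $XDG_RUNTIME_DIR/containers/auth.json, then the per-UID default. A sketch of the last two steps, mirroring the constants above (paths illustrative):

package example

import (
	"fmt"
	"os"
	"path/filepath"
)

// defaultAuthPath mirrors getPathToAuth once no explicit AuthFilePath is set:
// $XDG_RUNTIME_DIR/containers/auth.json when the variable is present,
// otherwise /run/containers/<uid>/auth.json.
func defaultAuthPath() string {
	if runtimeDir := os.Getenv("XDG_RUNTIME_DIR"); runtimeDir != "" {
		return filepath.Join(runtimeDir, "containers/auth.json")
	}
	return fmt.Sprintf("/run/containers/%d/auth.json", os.Getuid())
}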
// readJSONFile unmarshals the authentications stored in the auth.json file and returns it


@@ -50,8 +50,7 @@ type storageImageSource struct {
}
type storageImageDestination struct {
imageRef storageReference // The reference we'll use to name the image
publicRef storageReference // The reference we return when asked about the name we'll give to the image
imageRef storageReference
directory string // Temporary directory where we store blobs until Commit() time
nextTempFileID int32 // A counter that we use for computing filenames to assign to blobs
manifest []byte // Manifest contents, temporary
@@ -102,6 +101,9 @@ func (s storageImageSource) Close() error {
// GetBlob reads the data blob or filesystem layer which matches the digest and size, if given.
func (s *storageImageSource) GetBlob(ctx context.Context, info types.BlobInfo) (rc io.ReadCloser, n int64, err error) {
if info.Digest == image.GzippedEmptyLayerDigest {
return ioutil.NopCloser(bytes.NewReader(image.GzippedEmptyLayer)), int64(len(image.GzippedEmptyLayer)), nil
}
rc, n, _, err = s.getBlobAndLayerID(info)
return rc, n, err
}
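
image.GzippedEmptyLayer is the well-known gzip-compressed empty tar archive that schema2 uses for "throwaway" layers, so the source can serve such requests straight from memory. A sketch of how a blob of that shape is built (the library ships it precomputed; the exact digest value is not asserted here):

package example

import (
	"archive/tar"
	"bytes"
	"compress/gzip"

	"github.com/opencontainers/go-digest"
)

// gzippedEmptyTar builds a gzip-compressed empty tar stream and its digest,
// i.e. the same kind of blob image.GzippedEmptyLayer provides precomputed.
func gzippedEmptyTar() ([]byte, digest.Digest, error) {
	var buf bytes.Buffer
	gz := gzip.NewWriter(&buf)
	tw := tar.NewWriter(gz)
	if err := tw.Close(); err != nil { // writes the empty-archive trailer
		return nil, "", err
	}
	if err := gz.Close(); err != nil {
		return nil, "", err
	}
	return buf.Bytes(), digest.FromBytes(buf.Bytes()), nil
}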
@@ -175,11 +177,15 @@ func (s *storageImageSource) GetManifest(ctx context.Context, instanceDigest *di
// LayerInfosForCopy() returns the list of layer blobs that make up the root filesystem of
// the image, after they've been decompressed.
func (s *storageImageSource) LayerInfosForCopy(ctx context.Context) ([]types.BlobInfo, error) {
updatedBlobInfos := []types.BlobInfo{}
_, manifestType, err := s.GetManifest(ctx, nil)
manifestBlob, manifestType, err := s.GetManifest(ctx, nil)
if err != nil {
return nil, errors.Wrapf(err, "error reading image manifest for %q", s.image.ID)
}
man, err := manifest.FromBlob(manifestBlob, manifestType)
if err != nil {
return nil, errors.Wrapf(err, "error parsing image manifest for %q", s.image.ID)
}
uncompressedLayerType := ""
switch manifestType {
case imgspecv1.MediaTypeImageManifest:
@@ -188,6 +194,8 @@ func (s *storageImageSource) LayerInfosForCopy(ctx context.Context) ([]types.Blo
// This is actually a compressed type, but there's no uncompressed type defined
uncompressedLayerType = manifest.DockerV2Schema2LayerMediaType
}
physicalBlobInfos := []types.BlobInfo{}
layerID := s.image.TopLayer
for layerID != "" {
layer, err := s.imageRef.transport.store.Layer(layerID)
@@ -205,10 +213,43 @@ func (s *storageImageSource) LayerInfosForCopy(ctx context.Context) ([]types.Blo
Size: layer.UncompressedSize,
MediaType: uncompressedLayerType,
}
updatedBlobInfos = append([]types.BlobInfo{blobInfo}, updatedBlobInfos...)
physicalBlobInfos = append([]types.BlobInfo{blobInfo}, physicalBlobInfos...)
layerID = layer.Parent
}
return updatedBlobInfos, nil
res, err := buildLayerInfosForCopy(man.LayerInfos(), physicalBlobInfos)
if err != nil {
return nil, errors.Wrapf(err, "error creating LayerInfosForCopy of image %q", s.image.ID)
}
return res, nil
}
// buildLayerInfosForCopy builds a LayerInfosForCopy return value based on manifestInfos from the original manifest,
// but using layer data which we can actually produce — physicalInfos for non-empty layers,
// and image.GzippedEmptyLayer for empty ones.
// (This is split basically only to allow easily unit-testing the part that has no dependencies on the external environment.)
func buildLayerInfosForCopy(manifestInfos []manifest.LayerInfo, physicalInfos []types.BlobInfo) ([]types.BlobInfo, error) {
nextPhysical := 0
res := make([]types.BlobInfo, len(manifestInfos))
for i, mi := range manifestInfos {
if mi.EmptyLayer {
res[i] = types.BlobInfo{
Digest: image.GzippedEmptyLayerDigest,
Size: int64(len(image.GzippedEmptyLayer)),
MediaType: mi.MediaType,
}
} else {
if nextPhysical >= len(physicalInfos) {
return nil, fmt.Errorf("expected more than %d physical layers to exist", len(physicalInfos))
}
res[i] = physicalInfos[nextPhysical]
nextPhysical++
}
}
if nextPhysical != len(physicalInfos) {
return nil, fmt.Errorf("used only %d out of %d physical layers", nextPhysical, len(physicalInfos))
}
return res, nil
}
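
A worked pairing for buildLayerInfosForCopy, written as it might appear in this package's tests (it must live beside the unexported function; digests are toy strings):

package storage // sketch: must sit next to the unexported function

import (
	"fmt"

	"github.com/containers/image/manifest"
	"github.com/containers/image/types"
)

func ExampleBuildLayerInfosForCopy() {
	manifestInfos := []manifest.LayerInfo{
		{EmptyLayer: true}, // e.g. an ENV-only history entry
		{BlobInfo: types.BlobInfo{Digest: "sha256:aaa", Size: -1}},
		{EmptyLayer: true},
		{BlobInfo: types.BlobInfo{Digest: "sha256:bbb", Size: -1}},
	}
	physicalInfos := []types.BlobInfo{
		{Digest: "sha256:111", Size: 10}, // decompressed data for aaa
		{Digest: "sha256:222", Size: 20}, // decompressed data for bbb
	}
	res, err := buildLayerInfosForCopy(manifestInfos, physicalInfos)
	fmt.Println(err) // <nil>
	for _, r := range res {
		fmt.Println(r.Digest, r.Size)
	}
	// Order: GzippedEmptyLayerDigest, sha256:111, GzippedEmptyLayerDigest,
	// sha256:222. One physical blob too few or too many trips the two
	// error checks above.
}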
// GetSignatures() parses the image's signatures blob into a slice of byte slices.
@@ -243,15 +284,8 @@ func newImageDestination(imageRef storageReference) (*storageImageDestination, e
if err != nil {
return nil, errors.Wrapf(err, "error creating a temporary directory")
}
// Break reading of the reference we're writing, so that copy.Image() won't try to rewrite
// schema1 image manifests to remove embedded references, since that changes the manifest's
// digest, and that makes the image unusable if we subsequently try to access it using a
// reference that mentions the no-longer-correct digest.
publicRef := imageRef
publicRef.name = nil
image := &storageImageDestination{
imageRef: imageRef,
publicRef: publicRef,
directory: directory,
blobDiffIDs: make(map[digest.Digest]digest.Digest),
fileSizes: make(map[digest.Digest]int64),
@@ -261,11 +295,10 @@ func newImageDestination(imageRef storageReference) (*storageImageDestination, e
return image, nil
}
// Reference returns a mostly-usable image reference that can't return a DockerReference, to
// avoid triggering logic in copy.Image() that rewrites schema 1 image manifests in order to
// remove image names that they contain which don't match the value we're using.
// Reference returns the reference used to set up this destination. Note that this should directly correspond to user's intent,
// e.g. it should use the public hostname instead of the result of resolving CNAMEs or following redirects.
func (s storageImageDestination) Reference() types.ImageReference {
return s.publicRef
return s.imageRef
}
// Close cleans up the temporary directory.
@@ -408,12 +441,7 @@ func (s *storageImageDestination) computeID(m manifest.Manifest) string {
case *manifest.Schema1:
// Build a list of the diffIDs we've generated for the non-throwaway FS layers,
// in reverse of the order in which they were originally listed.
for i, history := range m.History {
compat := manifest.Schema1V1Compatibility{}
if err := json.Unmarshal([]byte(history.V1Compatibility), &compat); err != nil {
logrus.Debugf("internal error reading schema 1 history: %v", err)
return ""
}
for i, compat := range m.ExtractedV1Compatibility {
if compat.ThrowAway {
continue
}
@@ -471,8 +499,11 @@ func (s *storageImageDestination) Commit(ctx context.Context) error {
layerBlobs := man.LayerInfos()
// Extract or find the layers.
lastLayer := ""
addedLayers := []string{}
for _, blob := range layerBlobs {
if blob.EmptyLayer {
continue
}
var diff io.ReadCloser
// Check if there's already a layer with the ID that we'd give to the result of applying
// this layer blob to its parent, if it has one, or the blob's hex value otherwise.
@@ -481,7 +512,7 @@ func (s *storageImageDestination) Commit(ctx context.Context) error {
// Check if it's elsewhere and the caller just forgot to pass it to us in a PutBlob(),
// or to even check if we had it.
logrus.Debugf("looking for diffID for blob %+v", blob.Digest)
has, _, err := s.HasBlob(ctx, blob)
has, _, err := s.HasBlob(ctx, blob.BlobInfo)
if err != nil {
return errors.Wrapf(err, "error checking for a layer based on blob %q", blob.Digest.String())
}
@@ -550,7 +581,6 @@ func (s *storageImageDestination) Commit(ctx context.Context) error {
return errors.Wrapf(err, "error adding layer with blob %q", blob.Digest)
}
lastLayer = layer.ID
addedLayers = append([]string{lastLayer}, addedLayers...)
}
// If one of those blobs was a configuration blob, then we can try to dig out the date when the image
// was originally created, in case we're just copying it. If not, no harm done.
@@ -613,7 +643,7 @@ func (s *storageImageDestination) Commit(ctx context.Context) error {
if name := s.imageRef.DockerReference(); len(oldNames) > 0 || name != nil {
names := []string{}
if name != nil {
names = append(names, verboseName(name))
names = append(names, name.String())
}
if len(oldNames) > 0 {
names = append(names, oldNames...)
@@ -703,6 +733,13 @@ func (s *storageImageDestination) MustMatchRuntimeOS() bool {
return true
}
// IgnoresEmbeddedDockerReference returns true iff the destination does not care about Image.EmbeddedDockerReferenceConflicts(),
// and would prefer to receive an unmodified manifest instead of one modified for the destination.
// Does not make a difference if Reference().DockerReference() is nil.
func (s *storageImageDestination) IgnoresEmbeddedDockerReference() bool {
return true // Yes, we want the unmodified manifest
}
// PutSignatures records the image's signatures for committing as a single data blob.
func (s *storageImageDestination) PutSignatures(ctx context.Context, signatures [][]byte) error {
sizes := []int{}


@@ -9,7 +9,6 @@ import (
"github.com/containers/image/docker/reference"
"github.com/containers/image/types"
"github.com/containers/storage"
digest "github.com/opencontainers/go-digest"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
)
@@ -17,55 +16,65 @@ import (
// A storageReference holds an arbitrary name and/or an ID, which is a 32-byte
// value hex-encoded into a 64-character string, and a reference to a Store
// where an image is, or would be, kept.
// Either "named" or "id" must be set.
type storageReference struct {
transport storageTransport
reference string
named reference.Named // may include a tag and/or a digest
id string
name reference.Named
tag string
digest digest.Digest
}
func newReference(transport storageTransport, reference, id string, name reference.Named, tag string, digest digest.Digest) *storageReference {
func newReference(transport storageTransport, named reference.Named, id string) (*storageReference, error) {
if named == nil && id == "" {
return nil, ErrInvalidReference
}
// We take a copy of the transport, which contains a pointer to the
// store that it used for resolving this reference, so that the
// transport that we'll return from Transport() won't be affected by
// further calls to the original transport's SetStore() method.
return &storageReference{
transport: transport,
reference: reference,
named: named,
id: id,
name: name,
tag: tag,
digest: digest,
}, nil
}
// imageMatchesRepo returns true iff image.Names contains an element with the same repo as ref
func imageMatchesRepo(image *storage.Image, ref reference.Named) bool {
repo := ref.Name()
for _, name := range image.Names {
if named, err := reference.ParseNormalizedNamed(name); err == nil {
if named.Name() == repo {
return true
}
}
}
return false
}
// Resolve the reference's name to an image ID in the store, if there's already
// one present with the same name or ID, and return the image.
func (s *storageReference) resolveImage() (*storage.Image, error) {
var loadedImage *storage.Image
if s.id == "" {
// Look for an image that has the expanded reference name as an explicit Name value.
image, err := s.transport.store.Image(s.reference)
image, err := s.transport.store.Image(s.named.String())
if image != nil && err == nil {
loadedImage = image
s.id = image.ID
}
}
if s.id == "" && s.name != nil && s.digest != "" {
// Look for an image with the specified digest that has the same name,
// though possibly with a different tag or digest, as a Name value, so
// that the canonical reference can be implicitly resolved to the image.
images, err := s.transport.store.ImagesByDigest(s.digest)
if images != nil && err == nil {
repo := reference.FamiliarName(reference.TrimNamed(s.name))
search:
for _, image := range images {
for _, name := range image.Names {
if named, err := reference.ParseNormalizedNamed(name); err == nil {
if reference.FamiliarName(reference.TrimNamed(named)) == repo {
s.id = image.ID
break search
}
if s.id == "" && s.named != nil {
if digested, ok := s.named.(reference.Digested); ok {
// Look for an image with the specified digest that has the same name,
// though possibly with a different tag or digest, as a Name value, so
// that the canonical reference can be implicitly resolved to the image.
images, err := s.transport.store.ImagesByDigest(digested.Digest())
if images != nil && err == nil {
for _, image := range images {
if imageMatchesRepo(image, s.named) {
loadedImage = image
s.id = image.ID
break
}
}
}
@@ -75,27 +84,20 @@ func (s *storageReference) resolveImage() (*storage.Image, error) {
logrus.Debugf("reference %q does not resolve to an image ID", s.StringWithinTransport())
return nil, errors.Wrapf(ErrNoSuchImage, "reference %q does not resolve to an image ID", s.StringWithinTransport())
}
img, err := s.transport.store.Image(s.id)
if err != nil {
return nil, errors.Wrapf(err, "error reading image %q", s.id)
}
if s.name != nil {
repo := reference.FamiliarName(reference.TrimNamed(s.name))
nameMatch := false
for _, name := range img.Names {
if named, err := reference.ParseNormalizedNamed(name); err == nil {
if reference.FamiliarName(reference.TrimNamed(named)) == repo {
nameMatch = true
break
}
}
if loadedImage == nil {
img, err := s.transport.store.Image(s.id)
if err != nil {
return nil, errors.Wrapf(err, "error reading image %q", s.id)
}
if !nameMatch {
loadedImage = img
}
if s.named != nil {
if !imageMatchesRepo(loadedImage, s.named) {
logrus.Errorf("no image matching reference %q found", s.StringWithinTransport())
return nil, ErrNoSuchImage
}
}
return img, nil
return loadedImage, nil
}
// Return a Transport object that defaults to using the same store that we used
@@ -110,20 +112,7 @@ func (s storageReference) Transport() types.ImageTransport {
// Return a name with a tag or digest, if we have either, else return it bare.
func (s storageReference) DockerReference() reference.Named {
if s.name == nil {
return nil
}
if s.tag != "" {
if namedTagged, err := reference.WithTag(s.name, s.tag); err == nil {
return namedTagged
}
}
if s.digest != "" {
if canonical, err := reference.WithDigest(s.name, s.digest); err == nil {
return canonical
}
}
return s.name
return s.named
}
// Return a name with a tag, prefixed with the graph root and driver name, to
@@ -135,25 +124,25 @@ func (s storageReference) StringWithinTransport() string {
if len(options) > 0 {
optionsList = ":" + strings.Join(options, ",")
}
storeSpec := "[" + s.transport.store.GraphDriverName() + "@" + s.transport.store.GraphRoot() + "+" + s.transport.store.RunRoot() + optionsList + "]"
if s.reference == "" {
return storeSpec + "@" + s.id
res := "[" + s.transport.store.GraphDriverName() + "@" + s.transport.store.GraphRoot() + "+" + s.transport.store.RunRoot() + optionsList + "]"
if s.named != nil {
res = res + s.named.String()
}
if s.id == "" {
return storeSpec + s.reference
if s.id != "" {
res = res + "@" + s.id
}
return storeSpec + s.reference + "@" + s.id
return res
}
func (s storageReference) PolicyConfigurationIdentity() string {
storeSpec := "[" + s.transport.store.GraphDriverName() + "@" + s.transport.store.GraphRoot() + "]"
if s.name == nil {
return storeSpec + "@" + s.id
res := "[" + s.transport.store.GraphDriverName() + "@" + s.transport.store.GraphRoot() + "]"
if s.named != nil {
res = res + s.named.String()
}
if s.id == "" {
return storeSpec + s.reference
if s.id != "" {
res = res + "@" + s.id
}
return storeSpec + s.reference + "@" + s.id
return res
}
// Also accept policy that's tied to the combination of the graph root and
@@ -164,9 +153,17 @@ func (s storageReference) PolicyConfigurationNamespaces() []string {
storeSpec := "[" + s.transport.store.GraphDriverName() + "@" + s.transport.store.GraphRoot() + "]"
driverlessStoreSpec := "[" + s.transport.store.GraphRoot() + "]"
namespaces := []string{}
if s.name != nil {
name := reference.TrimNamed(s.name)
components := strings.Split(name.String(), "/")
if s.named != nil {
if s.id != "" {
// The reference without the ID is also a valid namespace.
namespaces = append(namespaces, storeSpec+s.named.String())
}
tagged, isTagged := s.named.(reference.Tagged)
_, isDigested := s.named.(reference.Digested)
if isTagged && isDigested { // s.named is "name:tag@digest"; add a "name:tag" parent namespace.
namespaces = append(namespaces, storeSpec+s.named.Name()+":"+tagged.Tag())
}
components := strings.Split(s.named.Name(), "/")
for len(components) > 0 {
namespaces = append(namespaces, storeSpec+strings.Join(components, "/"))
components = components[:len(components)-1]
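
Concretely, for a tagged-and-digested reference that also carries an ID, say [overlay@/var/lib/containers/storage]docker.io/library/busybox:1.30@sha256:…@0123abcd…, the loop above produces roughly:

    [overlay@/var/lib/containers/storage]docker.io/library/busybox:1.30@sha256:…   (the reference without the ID)
    [overlay@/var/lib/containers/storage]docker.io/library/busybox:1.30            (the name:tag parent of name:tag@digest)
    [overlay@/var/lib/containers/storage]docker.io/library/busybox
    [overlay@/var/lib/containers/storage]docker.io/library
    [overlay@/var/lib/containers/storage]docker.io

(Digest and ID are abbreviated for readability; the hunk is cut off before its final lines, which in the full source also append the bare store specifiers as the broadest namespaces.)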


@@ -3,6 +3,7 @@
package storage
import (
"fmt"
"path/filepath"
"strings"
@@ -105,7 +106,6 @@ func (s *storageTransport) DefaultGIDMap() []idtools.IDMap {
// ParseStoreReference takes a name or an ID, tries to figure out which it is
// relative to the given store, and returns it in a reference object.
func (s storageTransport) ParseStoreReference(store storage.Store, ref string) (*storageReference, error) {
var name reference.Named
if ref == "" {
return nil, errors.Wrapf(ErrInvalidReference, "%q is an empty reference", ref)
}
@@ -118,66 +118,36 @@ func (s storageTransport) ParseStoreReference(store storage.Store, ref string) (
ref = ref[closeIndex+1:]
}
// The last segment, if there's more than one, is either a digest from a reference, or an image ID.
// The reference may end with an image ID. Image IDs and digests use the same "@" separator;
// here we only peel away an image ID, and leave digests alone.
split := strings.LastIndex(ref, "@")
idOrDigest := ""
if split != -1 {
// Peel off that last bit so that we can work on the rest.
idOrDigest = ref[split+1:]
if idOrDigest == "" {
return nil, errors.Wrapf(ErrInvalidReference, "%q does not look like a digest or image ID", idOrDigest)
}
ref = ref[:split]
}
// The middle segment (now the last segment), if there is one, is a digest.
split = strings.LastIndex(ref, "@")
sum := digest.Digest("")
if split != -1 {
sum = digest.Digest(ref[split+1:])
if sum == "" {
return nil, errors.Wrapf(ErrInvalidReference, "%q does not look like an image digest", sum)
}
ref = ref[:split]
}
// If we have something that unambiguously should be a digest, validate it, and then the third part,
// if we have one, as an ID.
id := ""
if sum != "" {
if idSum, err := digest.Parse("sha256:" + idOrDigest); err != nil || idSum.Validate() != nil {
return nil, errors.Wrapf(ErrInvalidReference, "%q does not look like an image ID", idOrDigest)
if split != -1 {
possibleID := ref[split+1:]
if possibleID == "" {
return nil, errors.Wrapf(ErrInvalidReference, "empty trailing digest or ID in %q", ref)
}
if err := sum.Validate(); err != nil {
return nil, errors.Wrapf(ErrInvalidReference, "%q does not look like an image digest", sum)
}
id = idOrDigest
if img, err := store.Image(idOrDigest); err == nil && img != nil && len(idOrDigest) >= minimumTruncatedIDLength && strings.HasPrefix(img.ID, idOrDigest) {
// The ID is a truncated version of the ID of an image that's present in local storage,
// so we might as well use the expanded value.
id = img.ID
}
} else if idOrDigest != "" {
// There was no middle portion, so the final portion could be either a digest or an ID.
if idSum, err := digest.Parse("sha256:" + idOrDigest); err == nil && idSum.Validate() == nil {
// It's an ID.
id = idOrDigest
} else if idSum, err := digest.Parse(idOrDigest); err == nil && idSum.Validate() == nil {
// It's a digest.
sum = idSum
} else if img, err := store.Image(idOrDigest); err == nil && img != nil && len(idOrDigest) >= minimumTruncatedIDLength && strings.HasPrefix(img.ID, idOrDigest) {
// It's a truncated version of the ID of an image that's present in local storage,
// and we may need the expanded value.
id = img.ID
} else {
return nil, errors.Wrapf(ErrInvalidReference, "%q does not look like a digest or image ID", idOrDigest)
// If it looks like a digest, leave it alone for now.
if _, err := digest.Parse(possibleID); err != nil {
// Otherwise…
if idSum, err := digest.Parse("sha256:" + possibleID); err == nil && idSum.Validate() == nil {
id = possibleID // … it is a full ID
} else if img, err := store.Image(possibleID); err == nil && img != nil && len(possibleID) >= minimumTruncatedIDLength && strings.HasPrefix(img.ID, possibleID) {
// … it is a truncated version of the ID of an image that's present in local storage,
// so we might as well use the expanded value.
id = img.ID
} else {
return nil, errors.Wrapf(ErrInvalidReference, "%q does not look like an image ID or digest", possibleID)
}
// We have recognized an image ID; peel it off.
ref = ref[:split]
}
}
// If we only had one portion, then _maybe_ it's a truncated image ID. Only check on that if it's
// If we only have one @-delimited portion, then _maybe_ it's a truncated image ID. Only check on that if it's
// at least of what we guess is a reasonable minimum length, because we don't want a really short value
// like "a" matching an image by ID prefix when the input was actually meant to specify an image name.
if len(ref) >= minimumTruncatedIDLength && sum == "" && id == "" {
if id == "" && len(ref) >= minimumTruncatedIDLength && !strings.ContainsAny(ref, "@:") {
if img, err := store.Image(ref); err == nil && img != nil && strings.HasPrefix(img.ID, ref) {
// It's a truncated version of the ID of an image that's present in local storage;
// we need to expand it.
@@ -186,53 +156,24 @@ func (s storageTransport) ParseStoreReference(store storage.Store, ref string) (
}
}
// The initial portion is probably a name, possibly with a tag.
var named reference.Named
// Unless we have an un-named "ID" or "@ID" reference (where ID might only have been a prefix), which has been
// completely parsed above, the initial portion should be a name, possibly with a tag and/or a digest.
if ref != "" {
var err error
if name, err = reference.ParseNormalizedNamed(ref); err != nil {
named, err = reference.ParseNormalizedNamed(ref)
if err != nil {
return nil, errors.Wrapf(err, "error parsing named reference %q", ref)
}
}
if name == nil && sum == "" && id == "" {
return nil, errors.Errorf("error parsing reference")
named = reference.TagNameOnly(named)
}
// Construct a copy of the store spec.
optionsList := ""
options := store.GraphOptions()
if len(options) > 0 {
optionsList = ":" + strings.Join(options, ",")
result, err := newReference(storageTransport{store: store, defaultUIDMap: s.defaultUIDMap, defaultGIDMap: s.defaultGIDMap}, named, id)
if err != nil {
return nil, err
}
storeSpec := "[" + store.GraphDriverName() + "@" + store.GraphRoot() + "+" + store.RunRoot() + optionsList + "]"
// Convert the name back into a reference string, if we got a name.
refname := ""
tag := ""
if name != nil {
if sum.Validate() == nil {
canonical, err := reference.WithDigest(name, sum)
if err != nil {
return nil, errors.Wrapf(err, "error mixing name %q with digest %q", name, sum)
}
refname = verboseName(canonical)
} else {
name = reference.TagNameOnly(name)
tagged, ok := name.(reference.Tagged)
if !ok {
return nil, errors.Errorf("error parsing possibly-tagless name %q", ref)
}
refname = verboseName(name)
tag = tagged.Tag()
}
}
if refname == "" {
logrus.Debugf("parsed reference to id into %q", storeSpec+"@"+id)
} else if id == "" {
logrus.Debugf("parsed reference to refname into %q", storeSpec+refname)
} else {
logrus.Debugf("parsed reference to refname@id into %q", storeSpec+refname+"@"+id)
}
return newReference(storageTransport{store: store, defaultUIDMap: s.defaultUIDMap, defaultGIDMap: s.defaultGIDMap}, refname, id, name, tag, sum), nil
logrus.Debugf("parsed reference into %q", result.StringWithinTransport())
return result, nil
}
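
Examples of reference strings the rewritten parser accepts, matching the expanded list in the ParseReference comment below (values illustrative; an ID may also be an unambiguous prefix of at least the minimum truncated length):

    busybox                                    name only (tag defaulted to :latest)
    @0123456789abcdef…                         image ID only
    busybox:1.30@0123456789abcdef…             name:tag plus image ID
    busybox@sha256:…                           digested name; the digest stays on the name
    busybox:1.30@sha256:…@0123456789abcdef…    name:tag@digest plus image ID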
func (s *storageTransport) GetStore() (storage.Store, error) {
@@ -252,7 +193,7 @@ func (s *storageTransport) GetStore() (storage.Store, error) {
}
// ParseReference takes a name and a tag or digest and/or ID
// ("_name_"/"@_id_"/"_name_:_tag_"/"_name_:_tag_@_id_"/"_name_@_digest_"/"_name_@_digest_@_id_"),
// ("_name_"/"@_id_"/"_name_:_tag_"/"_name_:_tag_@_id_"/"_name_@_digest_"/"_name_@_digest_@_id_"/"_name_:_tag_@_digest_"/"_name_:_tag_@_digest_@_id_"),
// possibly prefixed with a store specifier in the form "[_graphroot_]" or
// "[_driver_@_graphroot_]" or "[_driver_@_graphroot_+_runroot_]" or
// "[_driver_@_graphroot_:_options_]" or "[_driver_@_graphroot_+_runroot_:_options_]",
@@ -338,7 +279,7 @@ func (s *storageTransport) ParseReference(reference string) (types.ImageReferenc
func (s storageTransport) GetStoreImage(store storage.Store, ref types.ImageReference) (*storage.Image, error) {
dref := ref.DockerReference()
if dref != nil {
if img, err := store.Image(verboseName(dref)); err == nil {
if img, err := store.Image(dref.String()); err == nil {
return img, nil
}
}
@@ -399,52 +340,29 @@ func (s storageTransport) ValidatePolicyConfigurationScope(scope string) error {
if scope == "" {
return nil
}
// But if there is anything left, it has to be a name, with or without
// a tag, with or without an ID, since we don't return namespace values
// that are just bare IDs.
scopeInfo := strings.SplitN(scope, "@", 2)
if len(scopeInfo) == 1 && scopeInfo[0] != "" {
_, err := reference.ParseNormalizedNamed(scopeInfo[0])
if err != nil {
fields := strings.SplitN(scope, "@", 3)
switch len(fields) {
case 1: // name only
case 2: // name:tag@ID or name[:tag]@digest
if _, idErr := digest.Parse("sha256:" + fields[1]); idErr != nil {
if _, digestErr := digest.Parse(fields[1]); digestErr != nil {
return fmt.Errorf("%v is neither a valid digest(%s) nor a valid ID(%s)", fields[1], digestErr.Error(), idErr.Error())
}
}
case 3: // name[:tag]@digest@ID
if _, err := digest.Parse(fields[1]); err != nil {
return err
}
} else if len(scopeInfo) == 2 && scopeInfo[0] != "" && scopeInfo[1] != "" {
_, err := reference.ParseNormalizedNamed(scopeInfo[0])
if err != nil {
if _, err := digest.Parse("sha256:" + fields[2]); err != nil {
return err
}
_, err = digest.Parse("sha256:" + scopeInfo[1])
if err != nil {
return err
}
} else {
return ErrInvalidReference
default: // Coverage: This should never happen
return errors.New("Internal error: unexpected number of fields form strings.SplitN")
}
// As for field[0], if it is non-empty at all:
// FIXME? We could be verifying the various character set and length restrictions
// from docker/distribution/reference.regexp.go, but other than that there
// are few semantically invalid strings.
return nil
}
func verboseName(r reference.Reference) string {
if r == nil {
return ""
}
named, isNamed := r.(reference.Named)
digested, isDigested := r.(reference.Digested)
tagged, isTagged := r.(reference.Tagged)
name := ""
tag := ""
sum := ""
if isNamed {
name = (reference.TrimNamed(named)).String()
}
if isTagged {
if tagged.Tag() != "" {
tag = ":" + tagged.Tag()
}
}
if isDigested {
if digested.Digest().Validate() == nil {
sum = "@" + digested.Digest().String()
}
}
return name + tag + sum
}


@@ -174,6 +174,11 @@ type ImageDestination interface {
AcceptsForeignLayerURLs() bool
// MustMatchRuntimeOS returns true iff the destination can store only images targeted for the current runtime OS. False otherwise.
MustMatchRuntimeOS() bool
// IgnoresEmbeddedDockerReference() returns true iff the destination does not care about Image.EmbeddedDockerReferenceConflicts(),
// and would prefer to receive an unmodified manifest instead of one modified for the destination.
// Does not make a difference if Reference().DockerReference() is nil.
IgnoresEmbeddedDockerReference() bool
// PutBlob writes contents of stream and returns data representing the result.
// inputInfo.Digest can be optionally provided if known; it is not mandatory for the implementation to verify it.
// inputInfo.Size is the expected length of stream, if known.
@@ -359,6 +364,8 @@ type SystemContext struct {
OCIInsecureSkipTLSVerify bool
// If not "", use a shared directory for storing blobs rather than within OCI layouts
OCISharedBlobDirPath string
// Allow uncompressed image layers when writing OCI images
OCIAcceptUncompressedLayers bool
// === docker.Transport overrides ===
// If not "", a directory containing a CA certificate (ending with ".crt"),


@@ -5,10 +5,11 @@ github.com/containers/storage master
github.com/davecgh/go-spew 346938d642f2ec3594ed81d874461961cd0faa76
github.com/docker/docker-credential-helpers d68f9aeca33f5fd3f08eeae5e9d175edf4e731d1
github.com/docker/distribution 5f6282db7d65e6d72ad7c2cc66310724a57be716
github.com/docker/docker 30eb4d8cdc422b023d5f11f29a82ecb73554183b
github.com/docker/go-connections 3ede32e2033de7505e6500d6c868c2b9ed9f169d
github.com/docker/docker da99009bbb1165d1ac5688b5c81d2f589d418341
github.com/docker/go-connections 7beb39f0b969b075d1325fecb092faf27fd357b6
github.com/docker/go-units 0dadbb0345b35ec7ef35e228dabb8de89a65bf52
github.com/docker/libtrust aabc10ec26b754e797f9028f4589c5b7bd90dc20
github.com/containerd/continuity d8fb8589b0e8e85b8c8bbaa8840226d0dfeb7371
github.com/ghodss/yaml 04f313413ffd65ce25f2541bfd2b2ceec5c0908c
github.com/gorilla/mux 94e7d24fd285520f3d12ae998f7fdd6b5393d453
github.com/imdario/mergo 50d4dbd4eb0e84778abe37cefef140271d96fade
@@ -38,3 +39,7 @@ github.com/BurntSushi/toml b26d9c308763d68093482582cea63d69be07a0f0
github.com/ostreedev/ostree-go aeb02c6b6aa2889db3ef62f7855650755befd460
github.com/gogo/protobuf fcdc5011193ff531a548e9b0301828d5a5b97fd8
github.com/pquerna/ffjson master
github.com/syndtr/gocapability master
github.com/Microsoft/go-winio ab35fc04b6365e8fcb18e6e9e41ea4a02b10b175
github.com/Microsoft/hcsshim eca7177590cdcbd25bbc5df27e3b693a54b53a6a
github.com/ulikunitz/xz v0.5.4


@@ -41,3 +41,6 @@ memory and stored along with the library's own bookkeeping information.
Additionally, the library can store one or more of what it calls *big data* for
images and containers. This is a named chunk of larger data, which is only in
memory when it is being read from or being written to its own disk file.
**[Contributing](CONTRIBUTING.md)**
Information about contributing to this project.


@@ -2,6 +2,7 @@ package storage
import (
"encoding/json"
"fmt"
"io/ioutil"
"os"
"path/filepath"
@@ -278,7 +279,8 @@ func (r *containerStore) Create(id string, names []string, image, layer, metadat
names = dedupeNames(names)
for _, name := range names {
if _, nameInUse := r.byname[name]; nameInUse {
return nil, ErrDuplicateName
return nil, errors.Wrapf(ErrDuplicateName,
fmt.Sprintf("the container name \"%s\" is already in use by \"%s\". You have to remove that container to be able to reuse that name.", name, r.byname[name].ID))
}
}
if err == nil {


@@ -1,5 +1,6 @@
// Code generated by ffjson <https://github.com/pquerna/ffjson>. DO NOT EDIT.
// source: ./containers.go
// source: containers.go
//
package storage


@@ -6,7 +6,6 @@ import (
"fmt"
"os"
"path/filepath"
"syscall"
"github.com/containers/storage/pkg/idtools"
"github.com/containers/storage/pkg/reexec"
@@ -56,47 +55,7 @@ func chownByMapsMain() {
if err != nil {
return fmt.Errorf("error walking to %q: %v", path, err)
}
sysinfo := info.Sys()
if st, ok := sysinfo.(*syscall.Stat_t); ok {
// Map an on-disk UID/GID pair from host to container
// using the first map, then back to the host using the
// second map. Skip that first step if they're 0, to
// compensate for cases where a parent layer should
// have had a mapped value, but didn't.
uid, gid := int(st.Uid), int(st.Gid)
if toContainer != nil {
pair := idtools.IDPair{
UID: uid,
GID: gid,
}
mappedUid, mappedGid, err := toContainer.ToContainer(pair)
if err != nil {
if (uid != 0) || (gid != 0) {
return fmt.Errorf("error mapping host ID pair %#v for %q to container: %v", pair, path, err)
}
mappedUid, mappedGid = uid, gid
}
uid, gid = mappedUid, mappedGid
}
if toHost != nil {
pair := idtools.IDPair{
UID: uid,
GID: gid,
}
mappedPair, err := toHost.ToHost(pair)
if err != nil {
return fmt.Errorf("error mapping container ID pair %#v for %q to host: %v", pair, path, err)
}
uid, gid = mappedPair.UID, mappedPair.GID
}
if uid != int(st.Uid) || gid != int(st.Gid) {
// Make the change.
if err := syscall.Lchown(path, uid, gid); err != nil {
return fmt.Errorf("%s: chown(%q): %v", os.Args[0], path, err)
}
}
}
return nil
return platformLChown(path, info, toHost, toContainer)
}
if err := filepath.Walk(".", chown); err != nil {
fmt.Fprintf(os.Stderr, "error during chown: %v", err)


@@ -0,0 +1,55 @@
// +build !windows
package graphdriver
import (
"fmt"
"os"
"syscall"
"github.com/containers/storage/pkg/idtools"
)
func platformLChown(path string, info os.FileInfo, toHost, toContainer *idtools.IDMappings) error {
sysinfo := info.Sys()
if st, ok := sysinfo.(*syscall.Stat_t); ok {
// Map an on-disk UID/GID pair from host to container
// using the first map, then back to the host using the
// second map. Skip that first step if they're 0, to
// compensate for cases where a parent layer should
// have had a mapped value, but didn't.
uid, gid := int(st.Uid), int(st.Gid)
if toContainer != nil {
pair := idtools.IDPair{
UID: uid,
GID: gid,
}
mappedUid, mappedGid, err := toContainer.ToContainer(pair)
if err != nil {
if (uid != 0) || (gid != 0) {
return fmt.Errorf("error mapping host ID pair %#v for %q to container: %v", pair, path, err)
}
mappedUid, mappedGid = uid, gid
}
uid, gid = mappedUid, mappedGid
}
if toHost != nil {
pair := idtools.IDPair{
UID: uid,
GID: gid,
}
mappedPair, err := toHost.ToHost(pair)
if err != nil {
return fmt.Errorf("error mapping container ID pair %#v for %q to host: %v", pair, path, err)
}
uid, gid = mappedPair.UID, mappedPair.GID
}
if uid != int(st.Uid) || gid != int(st.Gid) {
// Make the change.
if err := syscall.Lchown(path, uid, gid); err != nil {
return fmt.Errorf("%s: chown(%q): %v", os.Args[0], path, err)
}
}
}
return nil
}
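
The two-step mapping in the comment above is easier to see with numbers: a file owned by host UID 100000, in a container whose mapping roots UID 0 at 100000, maps toContainer to 0, and then toHost, against a target mapping rooted at 200000, back out to 200000. A small sketch with the idtools API used above:

package example

import (
	"fmt"

	"github.com/containers/storage/pkg/idtools"
)

func demoRemap() {
	// Host UIDs/GIDs 100000-100999 appear as 0-999 inside the source layer.
	src := idtools.NewIDMappingsFromMaps(
		[]idtools.IDMap{{ContainerID: 0, HostID: 100000, Size: 1000}},
		[]idtools.IDMap{{ContainerID: 0, HostID: 100000, Size: 1000}},
	)
	// The destination roots the same container IDs at 200000 instead.
	dst := idtools.NewIDMappingsFromMaps(
		[]idtools.IDMap{{ContainerID: 0, HostID: 200000, Size: 1000}},
		[]idtools.IDMap{{ContainerID: 0, HostID: 200000, Size: 1000}},
	)
	uid, gid, _ := src.ToContainer(idtools.IDPair{UID: 100000, GID: 100000})
	pair, _ := dst.ToHost(idtools.IDPair{UID: uid, GID: gid})
	fmt.Println(uid, gid, pair.UID, pair.GID) // 0 0 200000 200000
}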


@@ -0,0 +1,14 @@
// +build windows
package graphdriver
import (
"os"
"syscall"
"github.com/containers/storage/pkg/idtools"
)
func platformLChown(path string, info os.FileInfo, toHost, toContainer *idtools.IDMappings) error {
return &os.PathError{"lchown", path, syscall.EWINDOWS}
}


@@ -1,7 +1,7 @@
package graphdriver
import (
"os"
"fmt"
"syscall"
)


@@ -54,7 +54,8 @@ var (
// Slice of drivers that should be used in an order
priority = []string{
"overlay",
"devicemapper",
// We don't support devicemapper without configuration
// "devicemapper",
"aufs",
"btrfs",
"zfs",


@@ -163,6 +163,7 @@ func (gdw *NaiveDiffDriver) ApplyDiff(id string, applyMappings *idtools.IDMappin
start := time.Now().UTC()
logrus.Debug("Start untar layer")
if size, err = ApplyUncompressedLayer(layerFs, diff, options); err != nil {
logrus.Errorf("Error while applying layer: %s", err)
return
}
logrus.Debugf("Untar time: %vs", time.Now().UTC().Sub(start).Seconds())


@@ -24,6 +24,7 @@ import (
"github.com/containers/storage/pkg/idtools"
"github.com/containers/storage/pkg/locker"
"github.com/containers/storage/pkg/mount"
"github.com/containers/storage/pkg/ostree"
"github.com/containers/storage/pkg/parsers"
"github.com/containers/storage/pkg/system"
units "github.com/docker/go-units"
@@ -84,6 +85,9 @@ type overlayOptions struct {
overrideKernelCheck bool
imageStores []string
quota quota.Quota
mountProgram string
ostreeRepo string
skipMountHome bool
}
// Driver contains information about the home directory and the list of active mounts that are created using this driver.
@@ -98,6 +102,7 @@ type Driver struct {
naiveDiff graphdriver.DiffDriver
supportsDType bool
locker *locker.Locker
convert map[string]bool
}
var (
@@ -147,15 +152,28 @@ func Init(home string, options []string, uidMaps, gidMaps []idtools.IDMap) (grap
return nil, err
}
supportsDType, err := supportsOverlay(home, fsMagic, rootUID, rootGID)
if err != nil {
os.Remove(filepath.Join(home, linkDir))
os.Remove(home)
return nil, errors.Wrap(err, "kernel does not support overlay fs")
var supportsDType bool
if opts.mountProgram != "" {
supportsDType = true
} else {
supportsDType, err = supportsOverlay(home, fsMagic, rootUID, rootGID)
if err != nil {
os.Remove(filepath.Join(home, linkDir))
os.Remove(home)
return nil, errors.Wrap(err, "kernel does not support overlay fs")
}
}
if err := mount.MakePrivate(home); err != nil {
return nil, err
if !opts.skipMountHome {
if err := mount.MakePrivate(home); err != nil {
return nil, err
}
}
if opts.ostreeRepo != "" {
if err := ostree.CreateOSTreeRepository(opts.ostreeRepo, rootUID, rootGID); err != nil {
return nil, err
}
}
d := &Driver{
@@ -167,6 +185,7 @@ func Init(home string, options []string, uidMaps, gidMaps []idtools.IDMap) (grap
supportsDType: supportsDType,
locker: locker.New(),
options: *opts,
convert: make(map[string]bool),
}
d.naiveDiff = graphdriver.NewNaiveDiffDriver(d, d)
@@ -227,6 +246,25 @@ func parseOptions(options []string) (*overlayOptions, error) {
}
o.imageStores = append(o.imageStores, store)
}
case ".mount_program", "overlay.mount_program", "overlay2.mount_program":
logrus.Debugf("overlay: mount_program=%s", val)
_, err := os.Stat(val)
if err != nil {
return nil, fmt.Errorf("overlay: can't stat program %s: %v", val, err)
}
o.mountProgram = val
case "overlay2.ostree_repo", "overlay.ostree_repo", ".ostree_repo":
logrus.Debugf("overlay: ostree_repo=%s", val)
if !ostree.OstreeSupport() {
return nil, fmt.Errorf("overlay: ostree_repo specified but support for ostree is missing")
}
o.ostreeRepo = val
case "overlay2.skip_mount_home", "overlay.skip_mount_home", ".skip_mount_home":
logrus.Debugf("overlay: skip_mount_home=%s", val)
o.skipMountHome, err = strconv.ParseBool(val)
if err != nil {
return nil, err
}
default:
return nil, fmt.Errorf("overlay: Unknown option %s", key)
}
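
These keys arrive as plain key=value strings in the graph driver options; a sketch of wiring the new knobs at store creation, assuming StoreOptions fields by these names and using fuse-overlayfs purely as an illustrative helper binary:

package example

import "github.com/containers/storage"

// overlayStoreWithHelpers builds a store that mounts overlay layers through a
// user-space helper and skips the private remount of the home directory.
func overlayStoreWithHelpers() (storage.Store, error) {
	opts := storage.DefaultStoreOptions // assumption: package-level defaults
	opts.GraphDriverName = "overlay"
	opts.GraphDriverOptions = append(opts.GraphDriverOptions,
		"overlay.mount_program=/usr/bin/fuse-overlayfs", // illustrative path
		"overlay.skip_mount_home=true",
	)
	return storage.GetStore(opts)
}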
@@ -236,6 +274,7 @@ func parseOptions(options []string) (*overlayOptions, error) {
func supportsOverlay(home string, homeMagic graphdriver.FsMagic, rootUID, rootGID int) (supportsDType bool, err error) {
// We can try to modprobe overlay first
exec.Command("modprobe", "overlay").Run()
layerDir, err := ioutil.TempDir(home, "compat")
@@ -380,6 +419,11 @@ func (d *Driver) Create(id, parent string, opts *graphdriver.CreateOpts) (retErr
return fmt.Errorf("--storage-opt size is only supported for ReadWrite Layers")
}
}
if d.options.ostreeRepo != "" {
d.convert[id] = true
}
return d.create(id, parent, opts)
}
@@ -547,6 +591,12 @@ func (d *Driver) getLowerDirs(id string) ([]string, error) {
func (d *Driver) Remove(id string) error {
d.locker.Lock(id)
defer d.locker.Unlock(id)
// Ignore errors, we don't want to fail if the ostree branch doesn't exist,
if d.options.ostreeRepo != "" {
ostree.DeleteOSTree(d.options.ostreeRepo, id)
}
dir := d.dir(id)
lid, err := ioutil.ReadFile(path.Join(dir, "link"))
if err == nil {
@@ -663,7 +713,7 @@ func (d *Driver) Get(id, mountLabel string) (_ string, retErr error) {
// the page size. The mount syscall fails if the mount data cannot
// fit within a page and relative links make the mount data much
// smaller at the expense of requiring a fork exec to chroot.
if len(mountData) > pageSize {
if len(mountData) > pageSize || d.options.mountProgram != "" {
//FIXME: We need to figure out to get this to work with additional stores
opts = fmt.Sprintf("lowerdir=%s,upperdir=%s,workdir=%s", strings.Join(relLowers, ":"), path.Join(id, "diff"), path.Join(id, "work"))
mountData = label.FormatMountLabel(opts, mountLabel)
@@ -671,8 +721,16 @@ func (d *Driver) Get(id, mountLabel string) (_ string, retErr error) {
return "", fmt.Errorf("cannot mount layer, mount label too large %d", len(mountData))
}
mount = func(source string, target string, mType string, flags uintptr, label string) error {
return mountFrom(d.home, source, target, mType, flags, label)
if d.options.mountProgram != "" {
mount = func(source string, target string, mType string, flags uintptr, label string) error {
mountProgram := exec.Command(d.options.mountProgram, "-o", label, target)
mountProgram.Dir = d.home
return mountProgram.Run()
}
} else {
mount = func(source string, target string, mType string, flags uintptr, label string) error {
return mountFrom(d.home, source, target, mType, flags, label)
}
}
mountTarget = path.Join(id, "merged")
}
@@ -764,6 +822,13 @@ func (d *Driver) ApplyDiff(id string, idMappings *idtools.IDMappings, parent str
return 0, err
}
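// Layers flagged at Create time (via d.convert[id]) are handed to OSTree
// for deduplication once their diff content has been written out.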
_, convert := d.convert[id]
if convert {
if err := ostree.ConvertToOSTree(d.options.ostreeRepo, applyDir, id); err != nil {
return 0, err
}
}
return directory.Size(applyDir)
}

View File

@@ -9,6 +9,7 @@ import (
"github.com/containers/storage/drivers"
"github.com/containers/storage/pkg/chrootarchive"
"github.com/containers/storage/pkg/idtools"
"github.com/containers/storage/pkg/ostree"
"github.com/containers/storage/pkg/system"
"github.com/opencontainers/selinux/go-selinux/label"
)
@@ -42,6 +43,27 @@ func Init(home string, options []string, uidMaps, gidMaps []idtools.IDMap) (grap
d.homes = append(d.homes, strings.Split(option[12:], ",")...)
continue
}
if strings.HasPrefix(option, "vfs.ostree_repo=") {
if !ostree.OstreeSupport() {
return nil, fmt.Errorf("vfs: ostree_repo specified but support for ostree is missing")
}
d.ostreeRepo = option[16:]
}
if strings.HasPrefix(option, ".ostree_repo=") {
if !ostree.OstreeSupport() {
return nil, fmt.Errorf("vfs: ostree_repo specified but support for ostree is missing")
}
d.ostreeRepo = option[13:]
}
}
if d.ostreeRepo != "" {
rootUID, rootGID, err := idtools.GetRootUIDGID(uidMaps, gidMaps)
if err != nil {
return nil, err
}
if err := ostree.CreateOSTreeRepository(d.ostreeRepo, rootUID, rootGID); err != nil {
return nil, err
}
}
return graphdriver.NewNaiveDiffDriver(d, graphdriver.NewNaiveLayerIDMapUpdater(d)), nil
}
@@ -53,6 +75,7 @@ func Init(home string, options []string, uidMaps, gidMaps []idtools.IDMap) (grap
type Driver struct {
homes []string
idMappings *idtools.IDMappings
ostreeRepo string
}
func (d *Driver) String() string {
@@ -77,11 +100,15 @@ func (d *Driver) Cleanup() error {
// CreateReadWrite creates a layer that is writable for use as a container
// file system.
func (d *Driver) CreateReadWrite(id, parent string, opts *graphdriver.CreateOpts) error {
return d.Create(id, parent, opts)
return d.create(id, parent, opts, false)
}
// Create prepares the filesystem for the VFS driver and copies the directory for the given id under the parent.
func (d *Driver) Create(id, parent string, opts *graphdriver.CreateOpts) error {
return d.create(id, parent, opts, true)
}
func (d *Driver) create(id, parent string, opts *graphdriver.CreateOpts, ro bool) error {
if opts != nil && len(opts.StorageOpt) != 0 {
return fmt.Errorf("--storage-opt is not supported for vfs")
}
@@ -106,14 +133,23 @@ func (d *Driver) Create(id, parent string, opts *graphdriver.CreateOpts) error {
if _, mountLabel, err := label.InitLabels(labelOpts); err == nil {
label.SetFileLabel(dir, mountLabel)
}
if parent == "" {
return nil
if parent != "" {
parentDir, err := d.Get(parent, "")
if err != nil {
return fmt.Errorf("%s: %s", parent, err)
}
if err := CopyWithTar(parentDir, dir); err != nil {
return err
}
}
parentDir, err := d.Get(parent, "")
if err != nil {
return fmt.Errorf("%s: %s", parent, err)
if ro && d.ostreeRepo != "" {
if err := ostree.ConvertToOSTree(d.ostreeRepo, dir, id); err != nil {
return err
}
}
return CopyWithTar(parentDir, dir)
return nil
}
func (d *Driver) dir(id string) string {
@@ -132,6 +168,10 @@ func (d *Driver) dir(id string) string {
// Remove deletes the content from the directory for a given id.
func (d *Driver) Remove(id string) error {
if d.ostreeRepo != "" {
// Ignore errors: we don't want to fail if the ostree branch doesn't exist.
ostree.DeleteOSTree(d.ostreeRepo, id)
}
return system.EnsureRemoveAll(d.dir(id))
}

View File

@@ -940,11 +940,6 @@ func (d *Driver) AdditionalImageStores() []string {
return nil
}
// AdditionalImageStores returns additional image stores supported by the driver
func (d *Driver) AdditionalImageStores() []string {
return nil
}
// UpdateLayerIDMap changes ownerships in the layer's filesystem tree from
// matching those in toContainer to matching those in toHost.
func (d *Driver) UpdateLayerIDMap(id string, toContainer, toHost *idtools.IDMappings, mountLabel string) error {

View File

@@ -40,6 +40,11 @@ type Image struct {
// same top layer.
TopLayer string `json:"layer,omitempty"`
// MappedTopLayers are the IDs of alternate versions of the top layer
// which have the same contents and parent, and which differ from
// TopLayer only in which ID mappings they use.
MappedTopLayers []string `json:"mapped-layers,omitempty"`
// Metadata is data we keep for the convenience of the caller. It is not
// expected to be large, since it is kept in memory.
Metadata string `json:"metadata,omitempty"`
@@ -126,16 +131,17 @@ type imageStore struct {
func copyImage(i *Image) *Image {
return &Image{
ID: i.ID,
Digest: i.Digest,
Names: copyStringSlice(i.Names),
TopLayer: i.TopLayer,
Metadata: i.Metadata,
BigDataNames: copyStringSlice(i.BigDataNames),
BigDataSizes: copyStringInt64Map(i.BigDataSizes),
BigDataDigests: copyStringDigestMap(i.BigDataDigests),
Created: i.Created,
Flags: copyStringInterfaceMap(i.Flags),
ID: i.ID,
Digest: i.Digest,
Names: copyStringSlice(i.Names),
TopLayer: i.TopLayer,
MappedTopLayers: copyStringSlice(i.MappedTopLayers),
Metadata: i.Metadata,
BigDataNames: copyStringSlice(i.BigDataNames),
BigDataSizes: copyStringInt64Map(i.BigDataSizes),
BigDataDigests: copyStringDigestMap(i.BigDataDigests),
Created: i.Created,
Flags: copyStringInterfaceMap(i.Flags),
}
}
@@ -362,6 +368,14 @@ func (r *imageStore) Create(id string, names []string, layer, metadata string, c
return image, err
}
func (r *imageStore) addMappedTopLayer(id, layer string) error {
if image, ok := r.lookup(id); ok {
image.MappedTopLayers = append(image.MappedTopLayers, layer)
return r.Save()
}
return ErrImageUnknown
}
func (r *imageStore) Metadata(id string) (string, error) {
if image, ok := r.lookup(id); ok {
return image.Metadata, nil

View File

@@ -1,5 +1,5 @@
// Code generated by ffjson <https://github.com/pquerna/ffjson>. DO NOT EDIT.
// source: ./images.go
// source: images.go
package storage
@@ -64,6 +64,22 @@ func (j *Image) MarshalJSONBuf(buf fflib.EncodingBuffer) error {
fflib.WriteJsonString(buf, string(j.TopLayer))
buf.WriteByte(',')
}
if len(j.MappedTopLayers) != 0 {
buf.WriteString(`"mapped-layers":`)
if j.MappedTopLayers != nil {
buf.WriteString(`[`)
for i, v := range j.MappedTopLayers {
if i != 0 {
buf.WriteString(`,`)
}
fflib.WriteJsonString(buf, string(v))
}
buf.WriteString(`]`)
} else {
buf.WriteString(`null`)
}
buf.WriteByte(',')
}
if len(j.Metadata) != 0 {
buf.WriteString(`"metadata":`)
fflib.WriteJsonString(buf, string(j.Metadata))
@@ -157,6 +173,8 @@ const (
ffjtImageTopLayer
ffjtImageMappedTopLayers
ffjtImageMetadata
ffjtImageBigDataNames
@@ -178,6 +196,8 @@ var ffjKeyImageNames = []byte("names")
var ffjKeyImageTopLayer = []byte("layer")
var ffjKeyImageMappedTopLayers = []byte("mapped-layers")
var ffjKeyImageMetadata = []byte("metadata")
var ffjKeyImageBigDataNames = []byte("big-data-names")
@@ -311,7 +331,12 @@ mainparse:
case 'm':
if bytes.Equal(ffjKeyImageMetadata, kn) {
if bytes.Equal(ffjKeyImageMappedTopLayers, kn) {
currentKey = ffjtImageMappedTopLayers
state = fflib.FFParse_want_colon
goto mainparse
} else if bytes.Equal(ffjKeyImageMetadata, kn) {
currentKey = ffjtImageMetadata
state = fflib.FFParse_want_colon
goto mainparse
@@ -363,6 +388,12 @@ mainparse:
goto mainparse
}
if fflib.EqualFoldRight(ffjKeyImageMappedTopLayers, kn) {
currentKey = ffjtImageMappedTopLayers
state = fflib.FFParse_want_colon
goto mainparse
}
if fflib.SimpleLetterEqualFold(ffjKeyImageTopLayer, kn) {
currentKey = ffjtImageTopLayer
state = fflib.FFParse_want_colon
@@ -416,6 +447,9 @@ mainparse:
case ffjtImageTopLayer:
goto handle_TopLayer
case ffjtImageMappedTopLayers:
goto handle_MappedTopLayers
case ffjtImageMetadata:
goto handle_Metadata
@@ -600,6 +634,80 @@ handle_TopLayer:
state = fflib.FFParse_after_value
goto mainparse
handle_MappedTopLayers:
/* handler: j.MappedTopLayers type=[]string kind=slice quoted=false*/
{
{
if tok != fflib.FFTok_left_brace && tok != fflib.FFTok_null {
return fs.WrapErr(fmt.Errorf("cannot unmarshal %s into Go value for ", tok))
}
}
if tok == fflib.FFTok_null {
j.MappedTopLayers = nil
} else {
j.MappedTopLayers = []string{}
wantVal := true
for {
var tmpJMappedTopLayers string
tok = fs.Scan()
if tok == fflib.FFTok_error {
goto tokerror
}
if tok == fflib.FFTok_right_brace {
break
}
if tok == fflib.FFTok_comma {
if wantVal == true {
// TODO(pquerna): this isn't an ideal error message, this handles
// things like [,,,] as an array value.
return fs.WrapErr(fmt.Errorf("wanted value token, but got token: %v", tok))
}
continue
} else {
wantVal = true
}
/* handler: tmpJMappedTopLayers type=string kind=string quoted=false*/
{
{
if tok != fflib.FFTok_string && tok != fflib.FFTok_null {
return fs.WrapErr(fmt.Errorf("cannot unmarshal %s into Go value for string", tok))
}
}
if tok == fflib.FFTok_null {
} else {
outBuf := fs.Output.Bytes()
tmpJMappedTopLayers = string(string(outBuf))
}
}
j.MappedTopLayers = append(j.MappedTopLayers, tmpJMappedTopLayers)
wantVal = false
}
}
}
state = fflib.FFParse_after_value
goto mainparse
handle_Metadata:
/* handler: j.Metadata type=string kind=string quoted=false*/

View File

@@ -211,7 +211,10 @@ type LayerStore interface {
Mount(id, mountLabel string) (string, error)
// Unmount unmounts a layer when it is no longer in use.
Unmount(id string) error
Unmount(id string, force bool) (bool, error)
// Mounted returns whether or not the layer is mounted.
Mounted(id string) (bool, error)
// ParentOwners returns the UIDs and GIDs of parents of the layer's mountpoint
// for which the layer's UID and GID maps don't contain corresponding entries.
@@ -624,6 +627,14 @@ func (r *layerStore) Create(id string, parent *Layer, names []string, mountLabel
return r.CreateWithFlags(id, parent, names, mountLabel, options, moreOptions, writeable, nil)
}
func (r *layerStore) Mounted(id string) (bool, error) {
layer, ok := r.lookup(id)
if !ok {
return false, ErrLayerUnknown
}
return layer.MountCount > 0, nil
}
func (r *layerStore) Mount(id, mountLabel string) (string, error) {
if !r.IsReadWrite() {
return "", errors.Wrapf(ErrStoreIsReadOnly, "not allowed to update mount locations for layers at %q", r.mountspath())
@@ -652,21 +663,24 @@ func (r *layerStore) Mount(id, mountLabel string) (string, error) {
return mountpoint, err
}
func (r *layerStore) Unmount(id string) error {
func (r *layerStore) Unmount(id string, force bool) (bool, error) {
if !r.IsReadWrite() {
return errors.Wrapf(ErrStoreIsReadOnly, "not allowed to update mount locations for layers at %q", r.mountspath())
return false, errors.Wrapf(ErrStoreIsReadOnly, "not allowed to update mount locations for layers at %q", r.mountspath())
}
layer, ok := r.lookup(id)
if !ok {
layerByMount, ok := r.bymount[filepath.Clean(id)]
if !ok {
return ErrLayerUnknown
return false, ErrLayerUnknown
}
layer = layerByMount
}
if force {
layer.MountCount = 1
}
if layer.MountCount > 1 {
layer.MountCount--
return r.Save()
return true, r.Save()
}
err := r.driver.Put(id)
if err == nil || os.IsNotExist(err) {
@@ -675,9 +689,9 @@ func (r *layerStore) Unmount(id string) error {
}
layer.MountCount--
layer.MountPoint = ""
err = r.Save()
return false, r.Save()
}
return err
return true, err
}
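The new boolean tells callers whether the layer is still mounted after the call, and force collapses the whole reference count at once. A caller-side sketch, assuming a storage.LayerStore obtained elsewhere:

package layerutil

import (
	"fmt"

	"github.com/containers/storage"
)

// unmountFully tears down a layer's mountpoint no matter how many times it
// was mounted: force=true resets the mount count so one call reaches the
// real unmount.
func unmountFully(ls storage.LayerStore, id string) error {
	stillMounted, err := ls.Unmount(id, true)
	if err != nil {
		return err
	}
	if stillMounted {
		return fmt.Errorf("layer %s is unexpectedly still mounted", id)
	}
	return nil
}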
func (r *layerStore) ParentOwners(id string) (uids, gids []int, err error) {
@@ -797,10 +811,8 @@ func (r *layerStore) Delete(id string) error {
return ErrLayerUnknown
}
id = layer.ID
for layer.MountCount > 0 {
if err := r.Unmount(id); err != nil {
return err
}
if _, err := r.Unmount(id, true); err != nil {
return err
}
err := r.driver.Remove(id)
if err == nil {
@@ -917,7 +929,8 @@ func (s *simpleGetCloser) Get(path string) (io.ReadCloser, error) {
}
func (s *simpleGetCloser) Close() error {
return s.r.Unmount(s.id)
_, err := s.r.Unmount(s.id, false)
return err
}
func (r *layerStore) newFileGetter(id string) (drivers.FileGetCloser, error) {

View File

@@ -56,6 +56,11 @@ type (
// replaced with the matching name from this map.
RebaseNames map[string]string
InUserNS bool
// CopyPass indicates that the contents of any archive we're creating
// will instantly be extracted and written to disk, so we can deviate
// from the traditional behavior/format to get features like subsecond
// precision in timestamps.
CopyPass bool
}
)
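The deviation hinted at here is the tar header format: PAX extended headers can carry nanosecond timestamps, while the traditional USTAR/GNU encodings truncate to whole seconds. A minimal sketch of the difference, assuming Go 1.10 or newer, where archive/tar exposes the Format field that the copyPassHeader helper (further down) sets:

package main

import (
	"archive/tar"
	"bytes"
	"fmt"
	"time"
)

func main() {
	var buf bytes.Buffer
	tw := tar.NewWriter(&buf)
	hdr := &tar.Header{
		Name:    "file.txt",
		Mode:    0644,
		ModTime: time.Unix(1530000000, 123456789), // sub-second precision
		Format:  tar.FormatPAX,                    // what copyPassHeader selects
	}
	if err := tw.WriteHeader(hdr); err != nil {
		panic(err)
	}
	tw.Close()

	tr := tar.NewReader(&buf)
	out, _ := tr.Next()
	fmt.Println(out.ModTime.Nanosecond()) // 123456789 with PAX; 0 without
}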
@@ -396,6 +401,11 @@ type tarAppender struct {
// by the AUFS standard are used as the tar whiteout
// standard.
WhiteoutConverter tarWhiteoutConverter
// CopyPass indicates that the contents of any archive we're creating
// will instantly be extracted and written to disk, so we can deviate
// from the traditional behavior/format to get features like subsecond
// precision in timestamps.
CopyPass bool
}
func newTarAppender(idMapping *idtools.IDMappings, writer io.Writer, chownOpts *idtools.IDPair) *tarAppender {
@@ -446,6 +456,9 @@ func (ta *tarAppender) addTarFile(path, name string) error {
if err := ReadSecurityXattrToTarHeader(path, hdr); err != nil {
return err
}
if ta.CopyPass {
copyPassHeader(hdr)
}
// if it's not a directory and has more than 1 link,
// it's hard linked, so set the type flag accordingly
@@ -710,6 +723,7 @@ func TarWithOptions(srcPath string, options *TarOptions) (io.ReadCloser, error)
options.ChownOpts,
)
ta.WhiteoutConverter = getWhiteoutConverter(options.WhiteoutFormat, options.WhiteoutData)
ta.CopyPass = options.CopyPass
defer func() {
// Make sure to check the error on Close.
@@ -1039,6 +1053,7 @@ func (archiver *Archiver) TarUntar(src, dst string) error {
UIDMaps: tarMappings.UIDs(),
GIDMaps: tarMappings.GIDs(),
Compression: Uncompressed,
CopyPass: true,
}
archive, err := TarWithOptions(src, options)
if err != nil {
@@ -1145,6 +1160,7 @@ func (archiver *Archiver) CopyFileWithTar(src, dst string) (err error) {
}
hdr.Name = filepath.Base(dst)
hdr.Mode = int64(chmodTarEntry(os.FileMode(hdr.Mode)))
copyPassHeader(hdr)
if err := remapIDs(archiver.TarIDMappings, nil, archiver.ChownOpts, hdr); err != nil {
return err

View File

@@ -0,0 +1,11 @@
// +build go1.10
package archive
import (
"archive/tar"
)
func copyPassHeader(hdr *tar.Header) {
hdr.Format = tar.FormatPAX
}

View File

@@ -0,0 +1,10 @@
// +build !go1.10
package archive
import (
"archive/tar"
)
func copyPassHeader(hdr *tar.Header) {
}

View File

@@ -1,5 +1,5 @@
// Code generated by ffjson <https://github.com/pquerna/ffjson>. DO NOT EDIT.
// source: ./pkg/archive/archive.go
// source: pkg/archive/archive.go
package archive
@@ -491,6 +491,11 @@ func (j *TarOptions) MarshalJSONBuf(buf fflib.EncodingBuffer) error {
} else {
buf.WriteString(`,"InUserNS":false`)
}
if j.CopyPass {
buf.WriteString(`,"CopyPass":true`)
} else {
buf.WriteString(`,"CopyPass":false`)
}
buf.WriteByte('}')
return nil
}
@@ -524,6 +529,8 @@ const (
ffjtTarOptionsRebaseNames
ffjtTarOptionsInUserNS
ffjtTarOptionsCopyPass
)
var ffjKeyTarOptionsIncludeFiles = []byte("IncludeFiles")
@@ -552,6 +559,8 @@ var ffjKeyTarOptionsRebaseNames = []byte("RebaseNames")
var ffjKeyTarOptionsInUserNS = []byte("InUserNS")
var ffjKeyTarOptionsCopyPass = []byte("CopyPass")
// UnmarshalJSON unmarshals json - template of ffjson
func (j *TarOptions) UnmarshalJSON(input []byte) error {
fs := fflib.NewFFLexer(input)
@@ -624,6 +633,11 @@ mainparse:
currentKey = ffjtTarOptionsChownOpts
state = fflib.FFParse_want_colon
goto mainparse
} else if bytes.Equal(ffjKeyTarOptionsCopyPass, kn) {
currentKey = ffjtTarOptionsCopyPass
state = fflib.FFParse_want_colon
goto mainparse
}
case 'E':
@@ -704,6 +718,12 @@ mainparse:
}
if fflib.EqualFoldRight(ffjKeyTarOptionsCopyPass, kn) {
currentKey = ffjtTarOptionsCopyPass
state = fflib.FFParse_want_colon
goto mainparse
}
if fflib.EqualFoldRight(ffjKeyTarOptionsInUserNS, kn) {
currentKey = ffjtTarOptionsInUserNS
state = fflib.FFParse_want_colon
@@ -838,6 +858,9 @@ mainparse:
case ffjtTarOptionsInUserNS:
goto handle_InUserNS
case ffjtTarOptionsCopyPass:
goto handle_CopyPass
case ffjtTarOptionsnosuchkey:
err = fs.SkipField(tok)
if err != nil {
@@ -1481,6 +1504,41 @@ handle_InUserNS:
state = fflib.FFParse_after_value
goto mainparse
handle_CopyPass:
/* handler: j.CopyPass type=bool kind=bool quoted=false*/
{
if tok != fflib.FFTok_bool && tok != fflib.FFTok_null {
return fs.WrapErr(fmt.Errorf("cannot unmarshal %s into Go value for bool", tok))
}
}
{
if tok == fflib.FFTok_null {
} else {
tmpb := fs.Output.Bytes()
if bytes.Compare([]byte{'t', 'r', 'u', 'e'}, tmpb) == 0 {
j.CopyPass = true
} else if bytes.Compare([]byte{'f', 'a', 'l', 's', 'e'}, tmpb) == 0 {
j.CopyPass = false
} else {
err = errors.New("unexpected bytes for true/false value")
return fs.WrapErr(err)
}
}
}
state = fflib.FFParse_after_value
goto mainparse
wantedvalue:
return fs.WrapErr(fmt.Errorf("wanted value token, but got token: %v", tok))
wrongtokenerror:
@@ -1773,6 +1831,11 @@ func (j *tarAppender) MarshalJSONBuf(buf fflib.EncodingBuffer) error {
if err != nil {
return err
}
if j.CopyPass {
buf.WriteString(`,"CopyPass":true`)
} else {
buf.WriteString(`,"CopyPass":false`)
}
buf.WriteByte('}')
return nil
}
@@ -1792,6 +1855,8 @@ const (
ffjttarAppenderChownOpts
ffjttarAppenderWhiteoutConverter
ffjttarAppenderCopyPass
)
var ffjKeytarAppenderTarWriter = []byte("TarWriter")
@@ -1806,6 +1871,8 @@ var ffjKeytarAppenderChownOpts = []byte("ChownOpts")
var ffjKeytarAppenderWhiteoutConverter = []byte("WhiteoutConverter")
var ffjKeytarAppenderCopyPass = []byte("CopyPass")
// UnmarshalJSON unmarshals json - template of ffjson
func (j *tarAppender) UnmarshalJSON(input []byte) error {
fs := fflib.NewFFLexer(input)
@@ -1881,6 +1948,11 @@ mainparse:
currentKey = ffjttarAppenderChownOpts
state = fflib.FFParse_want_colon
goto mainparse
} else if bytes.Equal(ffjKeytarAppenderCopyPass, kn) {
currentKey = ffjttarAppenderCopyPass
state = fflib.FFParse_want_colon
goto mainparse
}
case 'I':
@@ -1917,6 +1989,12 @@ mainparse:
}
if fflib.EqualFoldRight(ffjKeytarAppenderCopyPass, kn) {
currentKey = ffjttarAppenderCopyPass
state = fflib.FFParse_want_colon
goto mainparse
}
if fflib.SimpleLetterEqualFold(ffjKeytarAppenderWhiteoutConverter, kn) {
currentKey = ffjttarAppenderWhiteoutConverter
state = fflib.FFParse_want_colon
@@ -1988,6 +2066,9 @@ mainparse:
case ffjttarAppenderWhiteoutConverter:
goto handle_WhiteoutConverter
case ffjttarAppenderCopyPass:
goto handle_CopyPass
case ffjttarAppendernosuchkey:
err = fs.SkipField(tok)
if err != nil {
@@ -2211,6 +2292,41 @@ handle_WhiteoutConverter:
state = fflib.FFParse_after_value
goto mainparse
handle_CopyPass:
/* handler: j.CopyPass type=bool kind=bool quoted=false*/
{
if tok != fflib.FFTok_bool && tok != fflib.FFTok_null {
return fs.WrapErr(fmt.Errorf("cannot unmarshal %s into Go value for bool", tok))
}
}
{
if tok == fflib.FFTok_null {
} else {
tmpb := fs.Output.Bytes()
if bytes.Compare([]byte{'t', 'r', 'u', 'e'}, tmpb) == 0 {
j.CopyPass = true
} else if bytes.Compare([]byte{'f', 'a', 'l', 's', 'e'}, tmpb) == 0 {
j.CopyPass = false
} else {
err = errors.New("unexpected bytes for true/false value")
return fs.WrapErr(err)
}
}
}
state = fflib.FFParse_after_value
goto mainparse
wantedvalue:
return fs.WrapErr(fmt.Errorf("wanted value token, but got token: %v", tok))
wrongtokenerror:

View File

@@ -7,7 +7,7 @@ import (
"path/filepath"
"github.com/containers/storage/pkg/mount"
rsystem "github.com/opencontainers/runc/libcontainer/system"
"github.com/syndtr/gocapability/capability"
"golang.org/x/sys/unix"
)
@@ -18,10 +18,16 @@ import (
// Old root is removed after the call to pivot_root so it is no longer available under the new root.
// This is similar to how libcontainer sets up a container's rootfs
func chroot(path string) (err error) {
// if the engine is running in a user namespace we need to use actual chroot
if rsystem.RunningInUserNS() {
caps, err := capability.NewPid(0)
if err != nil {
return err
}
// if the process doesn't have CAP_SYS_ADMIN, but does have CAP_SYS_CHROOT, we need to use the actual chroot
if !caps.Get(capability.EFFECTIVE, capability.CAP_SYS_ADMIN) && caps.Get(capability.EFFECTIVE, capability.CAP_SYS_CHROOT) {
return realChroot(path)
}
if err := unix.Unshare(unix.CLONE_NEWNS); err != nil {
return fmt.Errorf("Error creating mount namespace before pivot: %v", err)
}
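The decision reduces to a capability check on the current process. The same test as a standalone sketch, using the syndtr/gocapability package this file now imports:

package main

import (
	"fmt"

	"github.com/syndtr/gocapability/capability"
)

// mustUseRealChroot reports whether the process lacks CAP_SYS_ADMIN (needed
// for the mount-namespace/pivot_root path) but still holds CAP_SYS_CHROOT.
func mustUseRealChroot() (bool, error) {
	caps, err := capability.NewPid(0) // 0 means the current process
	if err != nil {
		return false, err
	}
	return !caps.Get(capability.EFFECTIVE, capability.CAP_SYS_ADMIN) &&
		caps.Get(capability.EFFECTIVE, capability.CAP_SYS_CHROOT), nil
}

func main() {
	useChroot, err := mustUseRealChroot()
	if err != nil {
		panic(err)
	}
	fmt.Println("fall back to chroot(2):", useChroot)
}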

View File

@@ -1,4 +1,4 @@
// +build linux freebsd solaris
// +build linux darwin freebsd solaris
package directory

View File

@@ -0,0 +1,19 @@
// +build !ostree
package ostree
func OstreeSupport() bool {
return false
}
func DeleteOSTree(repoLocation, id string) error {
return nil
}
func CreateOSTreeRepository(repoLocation string, rootUID int, rootGID int) error {
return nil
}
func ConvertToOSTree(repoLocation, root, id string) error {
return nil
}

View File

@@ -0,0 +1,198 @@
// +build ostree
package ostree
import (
"fmt"
"golang.org/x/sys/unix"
"os"
"path/filepath"
"runtime"
"syscall"
"time"
"unsafe"
"github.com/containers/storage/pkg/idtools"
"github.com/containers/storage/pkg/system"
glib "github.com/ostreedev/ostree-go/pkg/glibobject"
"github.com/ostreedev/ostree-go/pkg/otbuiltin"
"github.com/pkg/errors"
)
// #cgo pkg-config: glib-2.0 gobject-2.0 ostree-1
// #include <glib.h>
// #include <glib-object.h>
// #include <gio/gio.h>
// #include <stdlib.h>
// #include <ostree.h>
// #include <gio/ginputstream.h>
import "C"
func OstreeSupport() bool {
return true
}
func fixFiles(dir string, usermode bool) (bool, []string, error) {
var SkipOstree = errors.New("skip ostree deduplication")
var whiteouts []string
err := filepath.Walk(dir, func(path string, info os.FileInfo, err error) error {
if info.Mode()&(os.ModeNamedPipe|os.ModeSocket|os.ModeDevice) != 0 {
if !usermode {
stat, ok := info.Sys().(*syscall.Stat_t)
if !ok {
return errors.New("not syscall.Stat_t")
}
if stat.Rdev == 0 && (stat.Mode&unix.S_IFCHR) != 0 {
whiteouts = append(whiteouts, path)
return nil
}
}
// Skip the ostree deduplication if we encounter a file type that
// ostree does not manage.
return SkipOstree
}
if info.IsDir() {
if usermode {
if err := os.Chmod(path, info.Mode()|0700); err != nil {
return err
}
}
} else if usermode && (info.Mode().IsRegular()) {
if err := os.Chmod(path, info.Mode()|0600); err != nil {
return err
}
}
return nil
})
if err == SkipOstree {
return true, nil, nil
}
if err != nil {
return false, nil, err
}
return false, whiteouts, nil
}
// ConvertToOSTree commits the contents of root to the OSTree repository under the
// branch containers-storage/<id> and checks them back out in place as hardlinks.
func ConvertToOSTree(repoLocation, root, id string) error {
runtime.LockOSThread()
defer runtime.UnlockOSThread()
repo, err := otbuiltin.OpenRepo(repoLocation)
if err != nil {
return errors.Wrap(err, "could not open the OSTree repository")
}
skip, whiteouts, err := fixFiles(root, os.Getuid() != 0)
if err != nil {
return errors.Wrap(err, "could not prepare the OSTree directory")
}
if skip {
return nil
}
if _, err := repo.PrepareTransaction(); err != nil {
return errors.Wrap(err, "could not prepare the OSTree transaction")
}
commitOpts := otbuiltin.NewCommitOptions()
commitOpts.Timestamp = time.Now()
commitOpts.LinkCheckoutSpeedup = true
commitOpts.Parent = "0000000000000000000000000000000000000000000000000000000000000000"
branch := fmt.Sprintf("containers-storage/%s", id)
for _, w := range whiteouts {
if err := os.Remove(w); err != nil {
return errors.Wrap(err, "could not delete whiteout file")
}
}
if _, err := repo.Commit(root, branch, commitOpts); err != nil {
return errors.Wrap(err, "could not commit the layer")
}
if _, err := repo.CommitTransaction(); err != nil {
return errors.Wrap(err, "could not complete the OSTree transaction")
}
if err := system.EnsureRemoveAll(root); err != nil {
return errors.Wrap(err, "could not delete layer")
}
checkoutOpts := otbuiltin.NewCheckoutOptions()
checkoutOpts.RequireHardlinks = true
checkoutOpts.Whiteouts = false
if err := otbuiltin.Checkout(repoLocation, root, branch, checkoutOpts); err != nil {
return errors.Wrap(err, "could not checkout from OSTree")
}
for _, w := range whiteouts {
if err := unix.Mknod(w, unix.S_IFCHR, 0); err != nil {
return errors.Wrap(err, "could not recreate whiteout file")
}
}
return nil
}
func CreateOSTreeRepository(repoLocation string, rootUID int, rootGID int) error {
runtime.LockOSThread()
defer runtime.UnlockOSThread()
_, err := os.Stat(repoLocation)
if err != nil && !os.IsNotExist(err) {
return err
} else if err != nil {
if err := idtools.MkdirAllAs(repoLocation, 0700, rootUID, rootGID); err != nil {
return errors.Wrap(err, "could not create OSTree repository directory: %v")
}
if _, err := otbuiltin.Init(repoLocation, otbuiltin.NewInitOptions()); err != nil {
return errors.Wrap(err, "could not create OSTree repository")
}
}
return nil
}
func openRepo(path string) (*C.struct_OstreeRepo, error) {
var cerr *C.GError
cpath := C.CString(path)
defer C.free(unsafe.Pointer(cpath))
pathc := C.g_file_new_for_path(cpath)
defer C.g_object_unref(C.gpointer(pathc))
repo := C.ostree_repo_new(pathc)
r := glib.GoBool(glib.GBoolean(C.ostree_repo_open(repo, nil, &cerr)))
if !r {
C.g_object_unref(C.gpointer(repo))
return nil, glib.ConvertGError(glib.ToGError(unsafe.Pointer(cerr)))
}
return repo, nil
}
func DeleteOSTree(repoLocation, id string) error {
runtime.LockOSThread()
defer runtime.UnlockOSThread()
repo, err := openRepo(repoLocation)
if err != nil {
return err
}
defer C.g_object_unref(C.gpointer(repo))
branch := fmt.Sprintf("containers-storage/%s", id)
cbranch := C.CString(branch)
defer C.free(unsafe.Pointer(cbranch))
var cerr *C.GError
r := glib.GoBool(glib.GBoolean(C.ostree_repo_set_ref_immediate(repo, nil, cbranch, nil, nil, &cerr)))
if !r {
return glib.ConvertGError(glib.ToGError(unsafe.Pointer(cerr)))
}
return nil
}
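End to end, the driver-facing surface of this package is just the four exported functions. A usage sketch under the ostree build tag; the repository path and layer directory are hypothetical:

// +build ostree

package main

import "github.com/containers/storage/pkg/ostree"

func main() {
	repo := "/var/lib/containers/ostree" // hypothetical repository location
	if err := ostree.CreateOSTreeRepository(repo, 0, 0); err != nil {
		panic(err)
	}
	// Commit a populated layer directory, then check it back out in place
	// as hardlinks against the repository's object store.
	if err := ostree.ConvertToOSTree(repo, "/var/lib/containers/layers/abc", "abc"); err != nil {
		panic(err)
	}
	// On layer removal the drivers drop the branch, ignoring errors.
	_ = ostree.DeleteOSTree(repo, "abc")
}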

View File

@@ -0,0 +1 @@
This package provides helper functions for dealing with strings.

View File

@@ -0,0 +1,99 @@
// Package stringutils provides helper functions for dealing with strings.
package stringutils
import (
"bytes"
"math/rand"
"strings"
)
// GenerateRandomAlphaOnlyString generates an alphabetical random string with length n.
func GenerateRandomAlphaOnlyString(n int) string {
// draw only from upper- and lower-case ASCII letters
letters := []byte("abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ")
b := make([]byte, n)
for i := range b {
b[i] = letters[rand.Intn(len(letters))]
}
return string(b)
}
// GenerateRandomASCIIString generates an ASCII random string with length n.
func GenerateRandomASCIIString(n int) string {
chars := "abcdefghijklmnopqrstuvwxyz" +
"ABCDEFGHIJKLMNOPQRSTUVWXYZ" +
"~!@#$%^&*()-_+={}[]\\|<,>.?/\"';:` "
res := make([]byte, n)
for i := 0; i < n; i++ {
res[i] = chars[rand.Intn(len(chars))]
}
return string(res)
}
// Ellipsis truncates a string to fit within maxlen, and appends ellipsis (...).
// For maxlen of 3 and lower, no ellipsis is appended.
func Ellipsis(s string, maxlen int) string {
r := []rune(s)
if len(r) <= maxlen {
return s
}
if maxlen <= 3 {
return string(r[:maxlen])
}
return string(r[:maxlen-3]) + "..."
}
// Truncate truncates a string to maxlen.
func Truncate(s string, maxlen int) string {
r := []rune(s)
if len(r) <= maxlen {
return s
}
return string(r[:maxlen])
}
// InSlice tests whether a string is contained in a slice of strings or not.
// Comparison is case-insensitive.
func InSlice(slice []string, s string) bool {
for _, ss := range slice {
if strings.ToLower(s) == strings.ToLower(ss) {
return true
}
}
return false
}
func quote(word string, buf *bytes.Buffer) {
// Bail out early for "simple" strings
if word != "" && !strings.ContainsAny(word, "\\'\"`${[|&;<>()~*?! \t\n") {
buf.WriteString(word)
return
}
buf.WriteString("'")
for i := 0; i < len(word); i++ {
b := word[i]
if b == '\'' {
// Replace literal ' with a close ', a \', and an open '
buf.WriteString("'\\''")
} else {
buf.WriteByte(b)
}
}
buf.WriteString("'")
}
// ShellQuoteArguments takes a list of strings and escapes them so they will be
// handled right when passed as arguments to a program via a shell
func ShellQuoteArguments(args []string) string {
var buf bytes.Buffer
for i, arg := range args {
if i != 0 {
buf.WriteByte(' ')
}
quote(arg, &buf)
}
return buf.String()
}
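This vendored package is consumed below for MappedTopLayers membership tests, but the helpers are general purpose. A quick sketch of their behavior:

package main

import (
	"fmt"

	"github.com/containers/storage/pkg/stringutils"
)

func main() {
	fmt.Println(stringutils.Ellipsis("containers-storage", 10))        // "contain..."
	fmt.Println(stringutils.Truncate("containers-storage", 10))        // "containers"
	fmt.Println(stringutils.InSlice([]string{"A", "b"}, "a"))          // true (case-insensitive)
	fmt.Println(stringutils.ShellQuoteArguments([]string{"a b", "'"})) // 'a b' ''\'''
}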

View File

@@ -28,6 +28,20 @@ func (s StatT) Mtim() time.Time {
return time.Time(s.mtim)
}
// UID returns file's user id of owner.
//
// on windows this is always 0 because there is no concept of UID
func (s StatT) UID() uint32 {
return 0
}
// GID returns file's group id of owner.
//
// on windows this is always 0 because there is no concept of GID
func (s StatT) GID() uint32 {
return 0
}
// Stat takes a path to a file and returns
// a system.StatT type pertaining to that file.
//

View File

@@ -7,6 +7,7 @@ import (
"io/ioutil"
"os"
"path/filepath"
"reflect"
"strconv"
"strings"
"sync"
@@ -18,9 +19,11 @@ import (
"github.com/BurntSushi/toml"
drivers "github.com/containers/storage/drivers"
"github.com/containers/storage/pkg/archive"
"github.com/containers/storage/pkg/directory"
"github.com/containers/storage/pkg/idtools"
"github.com/containers/storage/pkg/ioutils"
"github.com/containers/storage/pkg/stringid"
"github.com/containers/storage/pkg/stringutils"
digest "github.com/opencontainers/go-digest"
"github.com/pkg/errors"
)
@@ -259,8 +262,11 @@ type Store interface {
Mount(id, mountLabel string) (string, error)
// Unmount attempts to unmount a layer, image, or container, given an ID, a
// name, or a mount path.
Unmount(id string) error
// name, or a mount path. Returns whether or not the layer is still mounted.
Unmount(id string, force bool) (bool, error)
// Mounted attempts to discover whether the specified id is mounted.
Mounted(id string) (bool, error)
// Changes returns a summary of the changes which would need to be made
// to one layer to make its contents the same as a second layer. If
@@ -346,6 +352,9 @@ type Store interface {
// with an image.
SetImageBigData(id, key string, data []byte) error
// ImageSize computes the size of the image's layers and ancillary data.
ImageSize(id string) (int64, error)
// ListContainerBigData retrieves a list of the (possibly large) chunks of
// named data associated with a container.
ListContainerBigData(id string) ([]string, error)
@@ -366,6 +375,10 @@ type Store interface {
// associated with a container.
SetContainerBigData(id, key string, data []byte) error
// ContainerSize computes the size of the container's layer and ancillary
// data. Warning: this is a potentially expensive operation.
ContainerSize(id string) (int64, error)
// Layer returns a specific layer.
Layer(id string) (*Layer, error)
@@ -949,6 +962,106 @@ func (s *store) CreateImage(id string, names []string, layer, metadata string, o
return ristore.Create(id, names, layer, metadata, creationDate, options.Digest)
}
func (s *store) imageTopLayerForMapping(image *Image, ristore ROImageStore, readWrite bool, rlstore LayerStore, lstores []ROLayerStore, options IDMappingOptions) (*Layer, error) {
layerMatchesMappingOptions := func(layer *Layer, options IDMappingOptions) bool {
// If we want host mapping, and the layer uses mappings, it's not the best match.
if options.HostUIDMapping && len(layer.UIDMap) != 0 {
return false
}
if options.HostGIDMapping && len(layer.GIDMap) != 0 {
return false
}
// If we don't care about the mapping, it's fine.
if len(options.UIDMap) == 0 && len(options.GIDMap) == 0 {
return true
}
// Compare the maps.
return reflect.DeepEqual(layer.UIDMap, options.UIDMap) && reflect.DeepEqual(layer.GIDMap, options.GIDMap)
}
var layer, parentLayer *Layer
var layerHomeStore ROLayerStore
// Locate the image's top layer and its parent, if it has one.
for _, store := range append([]ROLayerStore{rlstore}, lstores...) {
if store != rlstore {
store.Lock()
defer store.Unlock()
if modified, err := store.Modified(); modified || err != nil {
store.Load()
}
}
// Walk the top layer list.
for _, candidate := range append([]string{image.TopLayer}, image.MappedTopLayers...) {
if cLayer, err := store.Get(candidate); err == nil {
// We want the layer's parent, too, if it has one.
var cParentLayer *Layer
if cLayer.Parent != "" {
// Its parent should be around here, somewhere.
if cParentLayer, err = store.Get(cLayer.Parent); err != nil {
// Nope, couldn't find it. We're not going to be able
// to diff this one properly.
continue
}
}
// If the layer matches the desired mappings, it's a perfect match,
// so we're actually done here.
if layerMatchesMappingOptions(cLayer, options) {
return cLayer, nil
}
// Record the first one that we found, even if it's not ideal, so that
// we have a starting point.
if layer == nil {
layer = cLayer
parentLayer = cParentLayer
layerHomeStore = store
}
}
}
}
if layer == nil {
return nil, ErrLayerUnknown
}
// The top layer's mappings don't match the ones we want, but it's in a read-only
// image store, so we can't create and add a mapped copy of the layer to the image.
if !readWrite {
return layer, nil
}
// The top layer's mappings don't match the ones we want, and it's in an image store
// that lets us edit image metadata...
if istore, ok := ristore.(*imageStore); ok {
// ... so extract the layer's contents, create a new copy of it with the
// desired mappings, and register it as an alternate top layer in the image.
noCompression := archive.Uncompressed
diffOptions := DiffOptions{
Compression: &noCompression,
}
rc, err := layerHomeStore.Diff("", layer.ID, &diffOptions)
if err != nil {
return nil, errors.Wrapf(err, "error reading layer %q to create an ID-mapped version of it")
}
defer rc.Close()
layerOptions := LayerOptions{
IDMappingOptions: IDMappingOptions{
HostUIDMapping: options.HostUIDMapping,
HostGIDMapping: options.HostGIDMapping,
UIDMap: copyIDMap(options.UIDMap),
GIDMap: copyIDMap(options.GIDMap),
},
}
mappedLayer, _, err := rlstore.Put("", parentLayer, nil, layer.MountLabel, nil, &layerOptions, false, nil, rc)
if err != nil {
return nil, errors.Wrapf(err, "error creating ID-mapped copy of layer %q")
}
if err = istore.addMappedTopLayer(image.ID, mappedLayer.ID); err != nil {
if err2 := rlstore.Delete(mappedLayer.ID); err2 != nil {
err = errors.WithMessage(err, fmt.Sprintf("error deleting layer %q: %v", mappedLayer.ID, err2))
}
return nil, errors.Wrapf(err, "error registering ID-mapped layer with image %q", image.ID)
}
layer = mappedLayer
}
return layer, nil
}
func (s *store) CreateContainer(id string, names []string, image, layer, metadata string, options *ContainerOptions) (*Container, error) {
if options == nil {
options = &ContainerOptions{}
@@ -977,6 +1090,7 @@ func (s *store) CreateContainer(id string, names []string, image, layer, metadat
uidMap := options.UIDMap
gidMap := options.GIDMap
if image != "" {
var imageHomeStore ROImageStore
istore, err := s.ImageStore()
if err != nil {
return nil, err
@@ -994,6 +1108,7 @@ func (s *store) CreateContainer(id string, names []string, image, layer, metadat
}
cimage, err = store.Get(image)
if err == nil {
imageHomeStore = store
break
}
}
@@ -1006,22 +1121,9 @@ func (s *store) CreateContainer(id string, names []string, image, layer, metadat
if err != nil {
return nil, err
}
var ilayer *Layer
for _, store := range append([]ROLayerStore{rlstore}, lstores...) {
if store != rlstore {
store.Lock()
defer store.Unlock()
if modified, err := store.Modified(); modified || err != nil {
store.Load()
}
}
ilayer, err = store.Get(cimage.TopLayer)
if err == nil {
break
}
}
if ilayer == nil {
return nil, ErrLayerUnknown
ilayer, err := s.imageTopLayerForMapping(cimage, imageHomeStore, imageHomeStore == istore, rlstore, lstores, options.IDMappingOptions)
if err != nil {
return nil, err
}
imageTopLayer = ilayer
if !options.HostUIDMapping && len(options.UIDMap) == 0 {
@@ -1279,6 +1381,200 @@ func (s *store) SetImageBigData(id, key string, data []byte) error {
return ristore.SetBigData(id, key, data)
}
func (s *store) ImageSize(id string) (int64, error) {
var image *Image
lstore, err := s.LayerStore()
if err != nil {
return -1, errors.Wrapf(err, "error loading primary layer store data")
}
lstores, err := s.ROLayerStores()
if err != nil {
return -1, errors.Wrapf(err, "error loading additional layer stores")
}
for _, store := range append([]ROLayerStore{lstore}, lstores...) {
store.Lock()
defer store.Unlock()
if modified, err := store.Modified(); modified || err != nil {
store.Load()
}
}
var imageStore ROBigDataStore
istore, err := s.ImageStore()
if err != nil {
return -1, errors.Wrapf(err, "error loading primary image store data")
}
istores, err := s.ROImageStores()
if err != nil {
return -1, errors.Wrapf(err, "error loading additional image stores")
}
// Look for the image's record.
for _, store := range append([]ROImageStore{istore}, istores...) {
store.Lock()
defer store.Unlock()
if modified, err := store.Modified(); modified || err != nil {
store.Load()
}
if image, err = store.Get(id); err == nil {
imageStore = store
break
}
}
if image == nil {
return -1, errors.Wrapf(ErrImageUnknown, "error locating image with ID %q", id)
}
// Start with a list of the image's top layers.
queue := make(map[string]struct{})
for _, layerID := range append([]string{image.TopLayer}, image.MappedTopLayers...) {
queue[layerID] = struct{}{}
}
visited := make(map[string]struct{})
// Walk all of the layers.
var size int64
for len(visited) < len(queue) {
for layerID := range queue {
// Visit each layer only once.
if _, ok := visited[layerID]; ok {
continue
}
visited[layerID] = struct{}{}
// Look for the layer and the store that knows about it.
var layerStore ROLayerStore
var layer *Layer
for _, store := range append([]ROLayerStore{lstore}, lstores...) {
if layer, err = store.Get(layerID); err == nil {
layerStore = store
break
}
}
if layer == nil {
return -1, errors.Wrapf(ErrLayerUnknown, "error locating layer with ID %q", layerID)
}
// The UncompressedSize is only valid if there's a digest to go with it.
n := layer.UncompressedSize
if layer.UncompressedDigest == "" {
// Compute the size.
n, err = layerStore.DiffSize("", layer.ID)
if err != nil {
return -1, errors.Wrapf(err, "size/digest of layer with ID %q could not be calculated", layerID)
}
}
// Count this layer.
size += n
// Make a note to visit the layer's parent if we haven't already.
if layer.Parent != "" {
queue[layer.Parent] = struct{}{}
}
}
}
// Count big data items.
names, err := imageStore.BigDataNames(id)
if err != nil {
return -1, errors.Wrapf(err, "error reading list of big data items for image %q", id)
}
for _, name := range names {
n, err := imageStore.BigDataSize(id, name)
if err != nil {
return -1, errors.Wrapf(err, "error reading size of big data item %q for image %q", name, id)
}
size += n
}
return size, nil
}
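For API consumers the walk above is invisible; the whole computation is one Store method. A caller sketch, with a hypothetical image ID:

package imagesize

import (
	"fmt"

	"github.com/containers/storage"
)

func printImageSize(store storage.Store, id string) error {
	// Walks TopLayer, MappedTopLayers and their parents, then adds big-data items.
	size, err := store.ImageSize(id)
	if err != nil {
		return err // the method returns -1 alongside any error
	}
	fmt.Printf("image %s occupies %d bytes\n", id, size)
	return nil
}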
func (s *store) ContainerSize(id string) (int64, error) {
lstore, err := s.LayerStore()
if err != nil {
return -1, err
}
lstores, err := s.ROLayerStores()
if err != nil {
return -1, err
}
for _, store := range append([]ROLayerStore{lstore}, lstores...) {
store.Lock()
defer store.Unlock()
if modified, err := store.Modified(); modified || err != nil {
store.Load()
}
}
// Get the location of the container directory and container run directory.
// Do it before we lock the container store because they do, too.
cdir, err := s.ContainerDirectory(id)
if err != nil {
return -1, err
}
rdir, err := s.ContainerRunDirectory(id)
if err != nil {
return -1, err
}
rcstore, err := s.ContainerStore()
if err != nil {
return -1, err
}
rcstore.Lock()
defer rcstore.Unlock()
if modified, err := rcstore.Modified(); modified || err != nil {
rcstore.Load()
}
// Read the container record.
container, err := rcstore.Get(id)
if err != nil {
return -1, err
}
// Read the container's layer's size.
var layer *Layer
var size int64
for _, store := range append([]ROLayerStore{lstore}, lstores...) {
if layer, err = store.Get(container.LayerID); err == nil {
size, err = store.DiffSize("", layer.ID)
if err != nil {
return -1, errors.Wrapf(err, "error determining size of layer with ID %q", layer.ID)
}
break
}
}
if layer == nil {
return -1, errors.Wrapf(ErrLayerUnknown, "error locating layer with ID %q", container.LayerID)
}
// Count big data items.
names, err := rcstore.BigDataNames(id)
if err != nil {
return -1, errors.Wrapf(err, "error reading list of big data items for container %q", container.ID)
}
for _, name := range names {
n, err := rcstore.BigDataSize(id, name)
if err != nil {
return -1, errors.Wrapf(err, "error reading size of big data item %q for container %q", name, id)
}
size += n
}
// Count the size of our container directory and container run directory.
n, err := directory.Size(cdir)
if err != nil {
return -1, err
}
size += n
n, err = directory.Size(rdir)
if err != nil {
return -1, err
}
size += n
return size, nil
}
func (s *store) ListContainerBigData(id string) ([]string, error) {
rcstore, err := s.ContainerStore()
if err != nil {
@@ -1614,7 +1910,7 @@ func (s *store) DeleteLayer(id string) error {
return err
}
for _, image := range images {
if image.TopLayer == id {
if image.TopLayer == id || stringutils.InSlice(image.MappedTopLayers, id) {
return errors.Wrapf(ErrLayerUsedByImage, "Layer %v used by image %v", id, image.ID)
}
}
@@ -1697,10 +1993,13 @@ func (s *store) DeleteImage(id string, commit bool) (layers []string, err error)
childrenByParent[parent] = &([]string{layer.ID})
}
}
anyImageByTopLayer := make(map[string]string)
otherImagesByTopLayer := make(map[string]string)
for _, img := range images {
if img.ID != id {
anyImageByTopLayer[img.TopLayer] = img.ID
otherImagesByTopLayer[img.TopLayer] = img.ID
for _, layerID := range img.MappedTopLayers {
otherImagesByTopLayer[layerID] = img.ID
}
}
}
if commit {
@@ -1714,27 +2013,38 @@ func (s *store) DeleteImage(id string, commit bool) (layers []string, err error)
if rcstore.Exists(layer) {
break
}
if _, ok := anyImageByTopLayer[layer]; ok {
if _, ok := otherImagesByTopLayer[layer]; ok {
break
}
parent := ""
if l, err := rlstore.Get(layer); err == nil {
parent = l.Parent
}
otherRefs := 0
if childList, ok := childrenByParent[layer]; ok && childList != nil {
children := *childList
for _, child := range children {
if child != lastRemoved {
otherRefs++
hasOtherRefs := func() bool {
layersToCheck := []string{layer}
if layer == image.TopLayer {
layersToCheck = append(layersToCheck, image.MappedTopLayers...)
}
for _, layer := range layersToCheck {
if childList, ok := childrenByParent[layer]; ok && childList != nil {
children := *childList
for _, child := range children {
if child != lastRemoved {
return true
}
}
}
}
return false
}
if otherRefs != 0 {
if hasOtherRefs() {
break
}
lastRemoved = layer
layersToRemove = append(layersToRemove, lastRemoved)
if layer == image.TopLayer {
layersToRemove = append(layersToRemove, image.MappedTopLayers...)
}
layer = parent
}
} else {
@@ -1938,13 +2248,30 @@ func (s *store) Mount(id, mountLabel string) (string, error) {
return "", ErrLayerUnknown
}
func (s *store) Unmount(id string) error {
func (s *store) Mounted(id string) (bool, error) {
if layerID, err := s.ContainerLayerID(id); err == nil {
id = layerID
}
rlstore, err := s.LayerStore()
if err != nil {
return err
return false, err
}
rlstore.Lock()
defer rlstore.Unlock()
if modified, err := rlstore.Modified(); modified || err != nil {
rlstore.Load()
}
return rlstore.Mounted(id)
}
func (s *store) Unmount(id string, force bool) (bool, error) {
if layerID, err := s.ContainerLayerID(id); err == nil {
id = layerID
}
rlstore, err := s.LayerStore()
if err != nil {
return false, err
}
rlstore.Lock()
defer rlstore.Unlock()
@@ -1952,9 +2279,9 @@ func (s *store) Unmount(id string) error {
rlstore.Load()
}
if rlstore.Exists(id) {
return rlstore.Unmount(id)
return rlstore.Unmount(id, force)
}
return ErrLayerUnknown
return false, ErrLayerUnknown
}
func (s *store) Changes(from, to string) ([]archive.Change, error) {
@@ -2293,7 +2620,7 @@ func (s *store) ImagesByTopLayer(id string) ([]*Image, error) {
return nil, err
}
for _, image := range imageList {
if image.TopLayer == layer.ID {
if image.TopLayer == layer.ID || stringutils.InSlice(image.MappedTopLayers, layer.ID) {
images = append(images, &image)
}
}
@@ -2504,7 +2831,7 @@ func (s *store) Shutdown(force bool) ([]string, error) {
mounted = append(mounted, layer.ID)
if force {
for layer.MountCount > 0 {
err2 := rlstore.Unmount(layer.ID)
_, err2 := rlstore.Unmount(layer.ID, force)
if err2 != nil {
if err == nil {
err = err2
@@ -2593,7 +2920,7 @@ func copyStringInterfaceMap(m map[string]interface{}) map[string]interface{} {
return ret
}
const configFile = "/etc/containers/storage.conf"
const defaultConfigFile = "/etc/containers/storage.conf"
// ThinpoolOptionsConfig represents the "storage.options.thinpool"
// TOML config table.
@@ -2680,6 +3007,14 @@ type OptionsConfig struct {
RemapGroup string `toml:"remap-group"`
// Thinpool container options to be handed to thinpool drivers
Thinpool struct{ ThinpoolOptionsConfig } `toml:"thinpool"`
// OSTree repository
OstreeRepo string `toml:"ostree_repo"`
// Do not create a bind mount on the storage home
SkipMountHome string `toml:"skip_mount_home"`
// Alternative program to use for the mount of the file system
MountProgram string `toml:"mount_program"`
}
// TOML-friendly explicit tables used for conversions.
@@ -2692,11 +3027,9 @@ type tomlConfig struct {
} `toml:"storage"`
}
func init() {
DefaultStoreOptions.RunRoot = "/var/run/containers/storage"
DefaultStoreOptions.GraphRoot = "/var/lib/containers/storage"
DefaultStoreOptions.GraphDriverName = ""
// ReloadConfigurationFile parses the specified configuration file and overrides
// the configuration in storeOptions.
func ReloadConfigurationFile(configFile string, storeOptions *StoreOptions) {
data, err := ioutil.ReadFile(configFile)
if err != nil {
if !os.IsNotExist(err) {
@@ -2712,66 +3045,75 @@ func init() {
return
}
if config.Storage.Driver != "" {
DefaultStoreOptions.GraphDriverName = config.Storage.Driver
storeOptions.GraphDriverName = config.Storage.Driver
}
if config.Storage.RunRoot != "" {
DefaultStoreOptions.RunRoot = config.Storage.RunRoot
storeOptions.RunRoot = config.Storage.RunRoot
}
if config.Storage.GraphRoot != "" {
DefaultStoreOptions.GraphRoot = config.Storage.GraphRoot
storeOptions.GraphRoot = config.Storage.GraphRoot
}
if config.Storage.Options.Thinpool.AutoExtendPercent != "" {
DefaultStoreOptions.GraphDriverOptions = append(DefaultStoreOptions.GraphDriverOptions, fmt.Sprintf("dm.thinp_autoextend_percent=%s", config.Storage.Options.Thinpool.AutoExtendPercent))
storeOptions.GraphDriverOptions = append(storeOptions.GraphDriverOptions, fmt.Sprintf("dm.thinp_autoextend_percent=%s", config.Storage.Options.Thinpool.AutoExtendPercent))
}
if config.Storage.Options.Thinpool.AutoExtendThreshold != "" {
DefaultStoreOptions.GraphDriverOptions = append(DefaultStoreOptions.GraphDriverOptions, fmt.Sprintf("dm.thinp_autoextend_threshold=%s", config.Storage.Options.Thinpool.AutoExtendThreshold))
storeOptions.GraphDriverOptions = append(storeOptions.GraphDriverOptions, fmt.Sprintf("dm.thinp_autoextend_threshold=%s", config.Storage.Options.Thinpool.AutoExtendThreshold))
}
if config.Storage.Options.Thinpool.BaseSize != "" {
DefaultStoreOptions.GraphDriverOptions = append(DefaultStoreOptions.GraphDriverOptions, fmt.Sprintf("dm.basesize=%s", config.Storage.Options.Thinpool.BaseSize))
storeOptions.GraphDriverOptions = append(storeOptions.GraphDriverOptions, fmt.Sprintf("dm.basesize=%s", config.Storage.Options.Thinpool.BaseSize))
}
if config.Storage.Options.Thinpool.BlockSize != "" {
DefaultStoreOptions.GraphDriverOptions = append(DefaultStoreOptions.GraphDriverOptions, fmt.Sprintf("dm.blocksize=%s", config.Storage.Options.Thinpool.BlockSize))
storeOptions.GraphDriverOptions = append(storeOptions.GraphDriverOptions, fmt.Sprintf("dm.blocksize=%s", config.Storage.Options.Thinpool.BlockSize))
}
if config.Storage.Options.Thinpool.DirectLvmDevice != "" {
DefaultStoreOptions.GraphDriverOptions = append(DefaultStoreOptions.GraphDriverOptions, fmt.Sprintf("dm.directlvm_device=%s", config.Storage.Options.Thinpool.DirectLvmDevice))
storeOptions.GraphDriverOptions = append(storeOptions.GraphDriverOptions, fmt.Sprintf("dm.directlvm_device=%s", config.Storage.Options.Thinpool.DirectLvmDevice))
}
if config.Storage.Options.Thinpool.DirectLvmDeviceForce != "" {
DefaultStoreOptions.GraphDriverOptions = append(DefaultStoreOptions.GraphDriverOptions, fmt.Sprintf("dm.directlvm_device_force=%s", config.Storage.Options.Thinpool.DirectLvmDeviceForce))
storeOptions.GraphDriverOptions = append(storeOptions.GraphDriverOptions, fmt.Sprintf("dm.directlvm_device_force=%s", config.Storage.Options.Thinpool.DirectLvmDeviceForce))
}
if config.Storage.Options.Thinpool.Fs != "" {
DefaultStoreOptions.GraphDriverOptions = append(DefaultStoreOptions.GraphDriverOptions, fmt.Sprintf("dm.fs=%s", config.Storage.Options.Thinpool.Fs))
storeOptions.GraphDriverOptions = append(storeOptions.GraphDriverOptions, fmt.Sprintf("dm.fs=%s", config.Storage.Options.Thinpool.Fs))
}
if config.Storage.Options.Thinpool.LogLevel != "" {
DefaultStoreOptions.GraphDriverOptions = append(DefaultStoreOptions.GraphDriverOptions, fmt.Sprintf("dm.libdm_log_level=%s", config.Storage.Options.Thinpool.LogLevel))
storeOptions.GraphDriverOptions = append(storeOptions.GraphDriverOptions, fmt.Sprintf("dm.libdm_log_level=%s", config.Storage.Options.Thinpool.LogLevel))
}
if config.Storage.Options.Thinpool.MinFreeSpace != "" {
DefaultStoreOptions.GraphDriverOptions = append(DefaultStoreOptions.GraphDriverOptions, fmt.Sprintf("dm.min_free_space=%s", config.Storage.Options.Thinpool.MinFreeSpace))
storeOptions.GraphDriverOptions = append(storeOptions.GraphDriverOptions, fmt.Sprintf("dm.min_free_space=%s", config.Storage.Options.Thinpool.MinFreeSpace))
}
if config.Storage.Options.Thinpool.MkfsArg != "" {
DefaultStoreOptions.GraphDriverOptions = append(DefaultStoreOptions.GraphDriverOptions, fmt.Sprintf("dm.mkfsarg=%s", config.Storage.Options.Thinpool.MkfsArg))
storeOptions.GraphDriverOptions = append(storeOptions.GraphDriverOptions, fmt.Sprintf("dm.mkfsarg=%s", config.Storage.Options.Thinpool.MkfsArg))
}
if config.Storage.Options.Thinpool.MountOpt != "" {
DefaultStoreOptions.GraphDriverOptions = append(DefaultStoreOptions.GraphDriverOptions, fmt.Sprintf("dm.mountopt=%s", config.Storage.Options.Thinpool.MountOpt))
storeOptions.GraphDriverOptions = append(storeOptions.GraphDriverOptions, fmt.Sprintf("dm.mountopt=%s", config.Storage.Options.Thinpool.MountOpt))
}
if config.Storage.Options.Thinpool.UseDeferredDeletion != "" {
DefaultStoreOptions.GraphDriverOptions = append(DefaultStoreOptions.GraphDriverOptions, fmt.Sprintf("dm.use_deferred_deletion=%s", config.Storage.Options.Thinpool.UseDeferredDeletion))
storeOptions.GraphDriverOptions = append(storeOptions.GraphDriverOptions, fmt.Sprintf("dm.use_deferred_deletion=%s", config.Storage.Options.Thinpool.UseDeferredDeletion))
}
if config.Storage.Options.Thinpool.UseDeferredRemoval != "" {
DefaultStoreOptions.GraphDriverOptions = append(DefaultStoreOptions.GraphDriverOptions, fmt.Sprintf("dm.use_deferred_removal=%s", config.Storage.Options.Thinpool.UseDeferredRemoval))
storeOptions.GraphDriverOptions = append(storeOptions.GraphDriverOptions, fmt.Sprintf("dm.use_deferred_removal=%s", config.Storage.Options.Thinpool.UseDeferredRemoval))
}
if config.Storage.Options.Thinpool.XfsNoSpaceMaxRetries != "" {
DefaultStoreOptions.GraphDriverOptions = append(DefaultStoreOptions.GraphDriverOptions, fmt.Sprintf("dm.xfs_nospace_max_retries=%s", config.Storage.Options.Thinpool.XfsNoSpaceMaxRetries))
storeOptions.GraphDriverOptions = append(storeOptions.GraphDriverOptions, fmt.Sprintf("dm.xfs_nospace_max_retries=%s", config.Storage.Options.Thinpool.XfsNoSpaceMaxRetries))
}
for _, s := range config.Storage.Options.AdditionalImageStores {
DefaultStoreOptions.GraphDriverOptions = append(DefaultStoreOptions.GraphDriverOptions, fmt.Sprintf("%s.imagestore=%s", config.Storage.Driver, s))
storeOptions.GraphDriverOptions = append(storeOptions.GraphDriverOptions, fmt.Sprintf("%s.imagestore=%s", config.Storage.Driver, s))
}
if config.Storage.Options.Size != "" {
DefaultStoreOptions.GraphDriverOptions = append(DefaultStoreOptions.GraphDriverOptions, fmt.Sprintf("%s.size=%s", config.Storage.Driver, config.Storage.Options.Size))
storeOptions.GraphDriverOptions = append(storeOptions.GraphDriverOptions, fmt.Sprintf("%s.size=%s", config.Storage.Driver, config.Storage.Options.Size))
}
if config.Storage.Options.OstreeRepo != "" {
storeOptions.GraphDriverOptions = append(storeOptions.GraphDriverOptions, fmt.Sprintf("%s.ostree_repo=%s", config.Storage.Driver, config.Storage.Options.OstreeRepo))
}
if config.Storage.Options.SkipMountHome != "" {
storeOptions.GraphDriverOptions = append(storeOptions.GraphDriverOptions, fmt.Sprintf("%s.skip_mount_home=%s", config.Storage.Driver, config.Storage.Options.SkipMountHome))
}
if config.Storage.Options.MountProgram != "" {
storeOptions.GraphDriverOptions = append(storeOptions.GraphDriverOptions, fmt.Sprintf("%s.mount_program=%s", config.Storage.Driver, config.Storage.Options.MountProgram))
}
if config.Storage.Options.OverrideKernelCheck != "" {
DefaultStoreOptions.GraphDriverOptions = append(DefaultStoreOptions.GraphDriverOptions, fmt.Sprintf("%s.override_kernel_check=%s", config.Storage.Driver, config.Storage.Options.OverrideKernelCheck))
storeOptions.GraphDriverOptions = append(storeOptions.GraphDriverOptions, fmt.Sprintf("%s.override_kernel_check=%s", config.Storage.Driver, config.Storage.Options.OverrideKernelCheck))
}
if config.Storage.Options.RemapUser != "" && config.Storage.Options.RemapGroup == "" {
config.Storage.Options.RemapGroup = config.Storage.Options.RemapUser
@@ -2785,8 +3127,8 @@ func init() {
fmt.Printf("Error initializing ID mappings for %s:%s %v\n", config.Storage.Options.RemapUser, config.Storage.Options.RemapGroup, err)
return
}
DefaultStoreOptions.UIDMap = mappings.UIDs()
DefaultStoreOptions.GIDMap = mappings.GIDs()
storeOptions.UIDMap = mappings.UIDs()
storeOptions.GIDMap = mappings.GIDs()
}
nonDigitsToWhitespace := func(r rune) rune {
if strings.IndexRune("0123456789", r) == -1 {
@@ -2836,15 +3178,23 @@ func init() {
}
return idmap
}
DefaultStoreOptions.UIDMap = append(DefaultStoreOptions.UIDMap, parseIDMap(config.Storage.Options.RemapUIDs, "remap-uids")...)
DefaultStoreOptions.GIDMap = append(DefaultStoreOptions.GIDMap, parseIDMap(config.Storage.Options.RemapGIDs, "remap-gids")...)
storeOptions.UIDMap = append(storeOptions.UIDMap, parseIDMap(config.Storage.Options.RemapUIDs, "remap-uids")...)
storeOptions.GIDMap = append(storeOptions.GIDMap, parseIDMap(config.Storage.Options.RemapGIDs, "remap-gids")...)
if os.Getenv("STORAGE_DRIVER") != "" {
DefaultStoreOptions.GraphDriverName = os.Getenv("STORAGE_DRIVER")
storeOptions.GraphDriverName = os.Getenv("STORAGE_DRIVER")
}
if os.Getenv("STORAGE_OPTS") != "" {
DefaultStoreOptions.GraphDriverOptions = append(DefaultStoreOptions.GraphDriverOptions, strings.Split(os.Getenv("STORAGE_OPTS"), ",")...)
storeOptions.GraphDriverOptions = append(storeOptions.GraphDriverOptions, strings.Split(os.Getenv("STORAGE_OPTS"), ",")...)
}
if len(DefaultStoreOptions.GraphDriverOptions) == 1 && DefaultStoreOptions.GraphDriverOptions[0] == "" {
DefaultStoreOptions.GraphDriverOptions = nil
if len(storeOptions.GraphDriverOptions) == 1 && storeOptions.GraphDriverOptions[0] == "" {
storeOptions.GraphDriverOptions = nil
}
}
func init() {
DefaultStoreOptions.RunRoot = "/var/run/containers/storage"
DefaultStoreOptions.GraphRoot = "/var/lib/containers/storage"
DefaultStoreOptions.GraphDriverName = ""
ReloadConfigurationFile(defaultConfigFile, &DefaultStoreOptions)
}
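The refactor turns the old init-only parsing into a reusable entry point: callers can now overlay any configuration file onto their own options instead of mutating the package defaults. A minimal sketch, with a hypothetical config path:

package main

import (
	"fmt"

	"github.com/containers/storage"
)

func main() {
	// Start from the defaults (already seeded from /etc/containers/storage.conf
	// by init above), then layer a hypothetical override file on top.
	opts := storage.DefaultStoreOptions
	storage.ReloadConfigurationFile("/etc/containers/storage-test.conf", &opts)
	fmt.Println(opts.GraphDriverName, opts.GraphRoot)
}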

View File

@@ -12,10 +12,12 @@ github.com/opencontainers/selinux ba1aefe8057f1d0cfb8e88d0ec1dc85925ef987d
github.com/pborman/uuid 1b00554d822231195d1babd97ff4a781231955c9
github.com/pkg/errors master
github.com/pmezard/go-difflib v1.0.0
github.com/pquerna/ffjson d49c2bc1aa135aad0c6f4fc2056623ec78f5d5ac
github.com/sirupsen/logrus v1.0.0
github.com/stretchr/testify 4d4bfba8f1d1027c4fdbe371823030df51419987
github.com/syndtr/gocapability master
github.com/tchap/go-patricia v2.2.6
github.com/vbatts/tar-split v0.10.2
golang.org/x/net 7dcfb8076726a3fdd9353b6b8a1f1b6be6811bd6
golang.org/x/sys 07c182904dbd53199946ba614a412c61d3c548f5
github.com/pquerna/ffjson d49c2bc1aa135aad0c6f4fc2056623ec78f5d5ac
github.com/ostreedev/ostree-go aeb02c6b6aa2889db3ef62f7855650755befd460

View File

@@ -1,5 +1,5 @@
github.com/Azure/azure-sdk-for-go 088007b3b08cc02b27f2eadfdcd870958460ce7e
github.com/Azure/go-autorest ec5f4903f77ed9927ac95b19ab8e44ada64c1356
github.com/Azure/azure-sdk-for-go 4650843026a7fdec254a8d9cf893693a254edd0b
github.com/Azure/go-autorest eaa7994b2278094c904d31993d26f56324db3052
github.com/sirupsen/logrus 3d4380f53a34dcdc95f0c1db702615992b38d9a4
github.com/aws/aws-sdk-go 5bcc0a238d880469f949fc7cd24e35f32ab80cbd
github.com/bshuster-repo/logrus-logstash-hook d2c0ecc1836d91814e15e23bb5dc309c3ef51f4a
@@ -19,6 +19,8 @@ github.com/gorilla/handlers 60c7bfde3e33c201519a200a4507a158cc03a17b
github.com/gorilla/mux 599cba5e7b6137d46ddf58fb1765f5d928e69604
github.com/inconshreveable/mousetrap 76626ae9c91c4f2a10f34cad8ce83ea42c93bb75
github.com/jmespath/go-jmespath bd40a432e4c76585ef6b72d3fd96fb9b6dc7b68d
+github.com/marstr/guid 8bd9a64bf37eb297b492a4101fb28e80ac0b290f
+github.com/satori/go.uuid f58768cc1a7a7e77a3bd49e98cdd21419399b6a3
github.com/matttproud/golang_protobuf_extensions c12348ce28de40eed0136aa2b644d0ee0650e56c
github.com/miekg/dns 271c58e0c14f552178ea321a545ff9af38930f39
github.com/mitchellh/mapstructure 482a9fd5fa83e8c4e7817413b80f3eb8feec03ef


@@ -1,70 +1,38 @@
-### Docker users, see [Moby and Docker](https://mobyproject.org/#moby-and-docker) to clarify the relationship between the projects
-### Docker maintainers and contributors, see [Transitioning to Moby](#transitioning-to-moby) for more details
The Moby Project
================
![Moby Project logo](docs/static_files/moby-project-logo.png "The Moby Project")
-Moby is an open-source project created by Docker to advance the software containerization movement.
-It provides a “Lego set” of dozens of components, the framework for assembling them into custom container-based systems, and a place for all container enthusiasts to experiment and exchange ideas.
+Moby is an open-source project created by Docker to enable and accelerate software containerization.
-# Moby
-## Overview
-At the core of Moby is a framework to assemble specialized container systems.
-It provides:
-- A library of containerized components for all vital aspects of a container system: OS, container runtime, orchestration, infrastructure management, networking, storage, security, build, image distribution, etc.
-- Tools to assemble the components into runnable artifacts for a variety of platforms and architectures: bare metal (both x86 and Arm); executables for Linux, Mac and Windows; VM images for popular cloud and virtualization providers.
-- A set of reference assemblies which can be used as-is, modified, or used as inspiration to create your own.
-All Moby components are containers, so creating new components is as easy as building a new OCI-compatible container.
+It provides a "Lego set" of toolkit components, the framework for assembling them into custom container-based systems, and a place for all container enthusiasts and professionals to experiment and exchange ideas.
+Components include container build tools, a container registry, orchestration tools, a runtime and more, and these can be used as building blocks in conjunction with other tools and projects.
## Principles
-Moby is an open project guided by strong principles, but modular, flexible and without too strong an opinion on user experience, so it is open to the community to help set its direction.
-The guiding principles are:
+Moby is an open project guided by strong principles, aiming to be modular, flexible and without too strong an opinion on user experience.
+It is open to the community to help set its direction.
- Modular: the project includes lots of components that have well-defined functions and APIs that work together.
- Batteries included but swappable: Moby includes enough components to build a fully featured container system, but its modular architecture ensures that most of the components can be swapped by different implementations.
-- Usable security: Moby will provide secure defaults without compromising usability.
-- Container centric: Moby is built with containers, for running containers.
-With Moby, you should be able to describe all the components of your distributed application, from the high-level configuration files down to the kernel you would like to use and build and deploy it easily.
-Moby uses [containerd](https://github.com/containerd/containerd) as the default container runtime.
+- Usable security: Moby provides secure defaults without compromising usability.
+- Developer focused: The APIs are intended to be functional and useful to build powerful tools.
+They are not necessarily intended as end user tools but as components aimed at developers.
+Documentation and UX is aimed at developers not end users.
## Audience
-Moby is recommended for anyone who wants to assemble a container-based system. This includes:
+The Moby Project is intended for engineers, integrators and enthusiasts looking to modify, hack, fix, experiment, invent and build systems based on containers.
+It is not for people looking for a commercially supported system, but for people who want to work and learn with open source code.
-- Hackers who want to customize or patch their Docker build
-- System engineers or integrators building a container system
-- Infrastructure providers looking to adapt existing container systems to their environment
-- Container enthusiasts who want to experiment with the latest container tech
-- Open-source developers looking to test their project in a variety of different systems
-- Anyone curious about Docker internals and how it's built
+## Relationship with Docker
-Moby is NOT recommended for:
+The components and tools in the Moby Project are initially the open source components that Docker and the community have built for the Docker Project.
+New projects can be added if they fit with the community goals. Docker is committed to using Moby as the upstream for the Docker Product.
+However, other projects are also encouraged to use Moby as an upstream, and to reuse the components in diverse ways, and all these uses will be treated in the same way. External maintainers and contributors are welcomed.
-- Application developers looking for an easy way to run their applications in containers. We recommend Docker CE instead.
-- Enterprise IT and development teams looking for a ready-to-use, commercially supported container platform. We recommend Docker EE instead.
-- Anyone curious about containers and looking for an easy way to learn. We recommend the [docker.com](https://www.docker.com/) website instead.
-# Transitioning to Moby
-Docker is transitioning all of its open source collaborations to the Moby project going forward.
-During the transition, all open source activity should continue as usual.
-We are proposing the following list of changes:
-- splitting up the engine into more open components
-- removing the docker UI, SDK etc to keep them in the Docker org
-- clarifying that the project is not limited to the engine, but to the assembly of all the individual components of the Docker platform
-- open-source new tools & components which we currently use to assemble the Docker product, but could benefit the community
-- defining an open, community-centric governance inspired by the Fedora project (a very successful example of balancing the needs of the community with the constraints of the primary corporate sponsor)
+The Moby project is not intended as a location for support or feature requests for Docker products, but as a place for contributors to work on open source code, fix bugs, and make the code more useful.
+The releases are supported by the maintainers, community and users, on a best efforts basis only, and are not intended for customers who want enterprise or commercial support; Docker EE is the appropriate product for these use cases.
-----
@@ -82,7 +50,6 @@ violate applicable laws.
For more information, please see https://www.bis.doc.gov
Licensing
=========
Moby is licensed under the Apache License, Version 2.0. See


@@ -10,7 +10,7 @@ It consists of various components in this repository:
- `client/` The Go client used by the command-line client. It can also be used by third-party Go programs.
- `daemon/` The daemon, which serves the API.
## Swagger definition
The API is defined by the [Swagger](http://swagger.io/specification/) definition in `api/swagger.yaml`. This definition can be used to:
@@ -20,7 +20,7 @@ The API is defined by the [Swagger](http://swagger.io/specification/) definition
## Updating the API documentation
-The API documentation is generated entirely from `api/swagger.yaml`. If you make updates to the API, you'll need to edit this file to represent the change in the documentation.
+The API documentation is generated entirely from `api/swagger.yaml`. If you make updates to the API, edit this file to represent the change in the documentation.
The file is split into two main sections:
@@ -29,9 +29,9 @@ The file is split into two main sections:
To make an edit, first look for the endpoint you want to edit under `paths`, then make the required edits. Endpoints may reference reusable objects with `$ref`, which can be found in the `definitions` section.
-There is hopefully enough example material in the file for you to copy a similar pattern from elsewhere in the file (e.g. adding new fields or endpoints), but for the full reference, see the [Swagger specification](https://github.com/docker/docker/issues/27919)
+There is hopefully enough example material in the file for you to copy a similar pattern from elsewhere in the file (e.g. adding new fields or endpoints), but for the full reference, see the [Swagger specification](https://github.com/docker/docker/issues/27919).
-`swagger.yaml` is validated by `hack/validate/swagger` to ensure it is a valid Swagger definition. This is useful for when you are making edits to ensure you are doing the right thing.
+`swagger.yaml` is validated by `hack/validate/swagger` to ensure it is a valid Swagger definition. This is useful when making edits to ensure you are doing the right thing.
## Viewing the API documentation


@@ -1,65 +1,11 @@
-package api
-import (
-"encoding/json"
-"encoding/pem"
-"fmt"
-"os"
-"path/filepath"
-"github.com/docker/docker/pkg/ioutils"
-"github.com/docker/docker/pkg/system"
-"github.com/docker/libtrust"
-)
+package api // import "github.com/docker/docker/api"
// Common constants for daemon and client.
const (
// DefaultVersion of Current REST API
-DefaultVersion string = "1.32"
+DefaultVersion string = "1.37"
// NoBaseImageSpecifier is the symbol used by the FROM
// command to specify that no base image is to be used.
NoBaseImageSpecifier string = "scratch"
)
-// LoadOrCreateTrustKey attempts to load the libtrust key at the given path,
-// otherwise generates a new one
-func LoadOrCreateTrustKey(trustKeyPath string) (libtrust.PrivateKey, error) {
-err := system.MkdirAll(filepath.Dir(trustKeyPath), 0700, "")
-if err != nil {
-return nil, err
-}
-trustKey, err := libtrust.LoadKeyFile(trustKeyPath)
-if err == libtrust.ErrKeyFileDoesNotExist {
-trustKey, err = libtrust.GenerateECP256PrivateKey()
-if err != nil {
-return nil, fmt.Errorf("Error generating key: %s", err)
-}
-encodedKey, err := serializePrivateKey(trustKey, filepath.Ext(trustKeyPath))
-if err != nil {
-return nil, fmt.Errorf("Error serializing key: %s", err)
-}
-if err := ioutils.AtomicWriteFile(trustKeyPath, encodedKey, os.FileMode(0600)); err != nil {
-return nil, fmt.Errorf("Error saving key file: %s", err)
-}
-} else if err != nil {
-return nil, fmt.Errorf("Error loading key file %s: %s", trustKeyPath, err)
-}
-return trustKey, nil
-}
-func serializePrivateKey(key libtrust.PrivateKey, ext string) (encoded []byte, err error) {
-if ext == ".json" || ext == ".jwk" {
-encoded, err = json.Marshal(key)
-if err != nil {
-return nil, fmt.Errorf("unable to encode private key JWK: %s", err)
-}
-} else {
-pemBlock, err := key.PEMBlock()
-if err != nil {
-return nil, fmt.Errorf("unable to encode private key PEM: %s", err)
-}
-encoded = pem.EncodeToMemory(pemBlock)
-}
-return
-}
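Since the hunk above bumps DefaultVersion from 1.32 to 1.37, here is a short hedged sketch (not from this diff) of how a caller might gate a requested API version against these constants; the versions helper package is the same one used elsewhere in this diff, and the requested value is illustrative.

```go
package main

import (
	"fmt"

	"github.com/docker/docker/api"
	"github.com/docker/docker/api/types/versions"
)

func main() {
	requested := "1.25" // hypothetical client-requested version
	if versions.LessThan(requested, api.MinVersion) {
		fmt.Printf("version %s is below the daemon minimum %s\n", requested, api.MinVersion)
		return
	}
	fmt.Printf("accepting %s (daemon default is %s)\n", requested, api.DefaultVersion)
}
```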


@@ -1,6 +1,6 @@
// +build !windows
-package api
+package api // import "github.com/docker/docker/api"
// MinVersion represents Minimum REST API version supported
const MinVersion string = "1.12"


@@ -1,4 +1,4 @@
-package api
+package api // import "github.com/docker/docker/api"
// MinVersion represents Minimum REST API version supported
// Technically the first daemon API version released on Windows is v1.25 in


@@ -1,9 +0,0 @@
-package api
-import "regexp"
-// RestrictedNameChars collects the characters allowed to represent a name, normally used to validate container and volume names.
-const RestrictedNameChars = `[a-zA-Z0-9][a-zA-Z0-9_.-]`
-// RestrictedNamePattern is a regular expression to validate names against the collection of restricted characters.
-var RestrictedNamePattern = regexp.MustCompile(`^` + RestrictedNameChars + `+$`)


@@ -1,4 +1,4 @@
-package types
+package types // import "github.com/docker/docker/api/types"
// AuthConfig contains authorization information for connecting to a Registry
type AuthConfig struct {


@@ -1,4 +1,4 @@
-package blkiodev
+package blkiodev // import "github.com/docker/docker/api/types/blkiodev"
import "fmt"


@@ -1,4 +1,4 @@
-package types
+package types // import "github.com/docker/docker/api/types"
import (
"bufio"
@@ -74,6 +74,7 @@ type ContainerLogsOptions struct {
ShowStdout bool
ShowStderr bool
Since string
+Until string
Timestamps bool
Follow bool
Tail string
@@ -179,10 +180,7 @@ type ImageBuildOptions struct {
ExtraHosts []string // List of extra hosts
Target string
SessionID string
-// TODO @jhowardmsft LCOW Support: This will require extending to include
-// `Platform string`, but is omitted for now as it's hard-coded temporarily
-// to avoid API changes.
+Platform string
}
// ImageBuildResponse holds information
@@ -195,7 +193,8 @@ type ImageBuildResponse struct {
// ImageCreateOptions holds information to create images.
type ImageCreateOptions struct {
-RegistryAuth string // RegistryAuth is the base64 encoded credentials for the registry
+RegistryAuth string // RegistryAuth is the base64 encoded credentials for the registry.
+Platform string // Platform is the target platform of the image if it needs to be pulled from the registry.
}
// ImageImportSource holds source information for ImageImport
@@ -206,9 +205,10 @@ type ImageImportSource struct {
// ImageImportOptions holds information to import images from the client host.
type ImageImportOptions struct {
-Tag string // Tag is the name to tag this image with. This attribute is deprecated.
-Message string // Message is the message to tag the image with
-Changes []string // Changes are the raw changes to apply to this image
+Tag string // Tag is the name to tag this image with. This attribute is deprecated.
+Message string // Message is the message to tag the image with
+Changes []string // Changes are the raw changes to apply to this image
+Platform string // Platform is the target platform of the image
}
// ImageListOptions holds parameters to filter the list of images with.
@@ -229,6 +229,7 @@ type ImagePullOptions struct {
All bool
RegistryAuth string // RegistryAuth is the base64 encoded credentials for the registry
PrivilegeFunc RequestPrivilegeFunc
+Platform string
}
// RequestPrivilegeFunc is a function interface that
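The Platform fields added above thread a multi-platform target through the image option structs; a minimal sketch of constructing the new shape follows (the platform value itself is illustrative).

```go
package main

import (
	"fmt"

	"github.com/docker/docker/api/types"
)

func main() {
	// Platform is the field added in this diff; the value follows the
	// usual os[/arch] convention, shown here with an assumed target.
	opts := types.ImagePullOptions{
		All:      false,
		Platform: "linux/arm64",
	}
	fmt.Printf("%+v\n", opts)
}
```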


@@ -1,4 +1,4 @@
-package types
+package types // import "github.com/docker/docker/api/types"
import (
"github.com/docker/docker/api/types/container"
@@ -16,7 +16,6 @@ type ContainerCreateConfig struct {
HostConfig *container.HostConfig
NetworkingConfig *network.NetworkingConfig
AdjustCPUShares bool
-Platform string
}
// ContainerRmConfig holds arguments for the container remove
@@ -26,19 +25,6 @@ type ContainerRmConfig struct {
ForceRemove, RemoveVolume, RemoveLink bool
}
-// ContainerCommitConfig contains build configs for commit operation,
-// and is used when making a commit with the current state of the container.
-type ContainerCommitConfig struct {
-Pause bool
-Repo string
-Tag string
-Author string
-Comment string
-// merge container config into commit config before commit
-MergeConfigs bool
-Config *container.Config
-}
// ExecConfig is a small subset of the Config struct that holds the configuration
// for the exec feature of docker.
type ExecConfig struct {
@@ -51,6 +37,7 @@ type ExecConfig struct {
Detach bool // Execute in detach mode
DetachKeys string // Escape keys for detach
Env []string // Environment variables
+WorkingDir string // Working directory
Cmd []string // Execution commands and args
}
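A small sketch of ExecConfig with the new WorkingDir field populated; the values are illustrative.

```go
package main

import (
	"fmt"

	"github.com/docker/docker/api/types"
)

func main() {
	exec := types.ExecConfig{
		Cmd:        []string{"ls", "-l"},
		Env:        []string{"FOO=bar"},
		WorkingDir: "/srv", // the field added in this diff
	}
	fmt.Println(exec.Cmd, "in", exec.WorkingDir)
}
```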


@@ -1,4 +1,4 @@
-package container
+package container // import "github.com/docker/docker/api/types/container"
import (
"time"


@@ -7,7 +7,7 @@ package container
// See hack/generate-swagger-api.sh
// ----------------------------------------------------------------------------
-// ContainerChangeResponseItem container change response item
+// ContainerChangeResponseItem change item in response to ContainerChanges operation
// swagger:model ContainerChangeResponseItem
type ContainerChangeResponseItem struct {


@@ -7,7 +7,7 @@ package container
// See hack/generate-swagger-api.sh
// ----------------------------------------------------------------------------
-// ContainerCreateCreatedBody container create created body
+// ContainerCreateCreatedBody OK response to ContainerCreate operation
// swagger:model ContainerCreateCreatedBody
type ContainerCreateCreatedBody struct {


@@ -7,7 +7,7 @@ package container
// See hack/generate-swagger-api.sh
// ----------------------------------------------------------------------------
-// ContainerTopOKBody container top o k body
+// ContainerTopOKBody OK response to ContainerTop operation
// swagger:model ContainerTopOKBody
type ContainerTopOKBody struct {


@@ -7,7 +7,7 @@ package container
// See hack/generate-swagger-api.sh
// ----------------------------------------------------------------------------
-// ContainerUpdateOKBody container update o k body
+// ContainerUpdateOKBody OK response to ContainerUpdate operation
// swagger:model ContainerUpdateOKBody
type ContainerUpdateOKBody struct {


@@ -7,10 +7,22 @@ package container
// See hack/generate-swagger-api.sh
// ----------------------------------------------------------------------------
-// ContainerWaitOKBody container wait o k body
+// ContainerWaitOKBodyError container waiting error, if any
+// swagger:model ContainerWaitOKBodyError
+type ContainerWaitOKBodyError struct {
+// Details of an error
+Message string `json:"Message,omitempty"`
+}
+// ContainerWaitOKBody OK response to ContainerWait operation
// swagger:model ContainerWaitOKBody
type ContainerWaitOKBody struct {
+// error
+// Required: true
+Error *ContainerWaitOKBodyError `json:"Error"`
// Exit code of the container
// Required: true
StatusCode int64 `json:"StatusCode"`
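The wait body now carries an in-band error alongside the status code; a hedged sketch of consuming the new shape follows (the helper name is made up for illustration).

```go
package main

import (
	"errors"
	"fmt"

	"github.com/docker/docker/api/types/container"
)

// exitStatus is a hypothetical helper: prefer the in-band Error message
// when the daemon set one, otherwise report the exit code.
func exitStatus(body container.ContainerWaitOKBody) (int64, error) {
	if body.Error != nil && body.Error.Message != "" {
		return 0, errors.New(body.Error.Message)
	}
	return body.StatusCode, nil
}

func main() {
	code, err := exitStatus(container.ContainerWaitOKBody{StatusCode: 0})
	fmt.Println(code, err)
}
```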


@@ -1,4 +1,4 @@
-package container
+package container // import "github.com/docker/docker/api/types/container"
import (
"strings"
@@ -20,6 +20,27 @@ func (i Isolation) IsDefault() bool {
return strings.ToLower(string(i)) == "default" || string(i) == ""
}
+// IsHyperV indicates the use of a Hyper-V partition for isolation
+func (i Isolation) IsHyperV() bool {
+return strings.ToLower(string(i)) == "hyperv"
+}
+// IsProcess indicates the use of process isolation
+func (i Isolation) IsProcess() bool {
+return strings.ToLower(string(i)) == "process"
+}
+const (
+// IsolationEmpty is unspecified (same behavior as default)
+IsolationEmpty = Isolation("")
+// IsolationDefault is the default isolation mode on current daemon
+IsolationDefault = Isolation("default")
+// IsolationProcess is process isolation mode
+IsolationProcess = Isolation("process")
+// IsolationHyperV is HyperV isolation mode
+IsolationHyperV = Isolation("hyperv")
+)
// IpcMode represents the container ipc stack.
type IpcMode string
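With IsHyperV/IsProcess and the Isolation constants now in the shared file above, a quick sketch of exercising them (using only names visible in this diff):

```go
package main

import (
	"fmt"

	"github.com/docker/docker/api/types/container"
)

func main() {
	for _, i := range []container.Isolation{
		container.IsolationEmpty,
		container.IsolationDefault,
		container.IsolationProcess,
		container.IsolationHyperV,
	} {
		fmt.Println(i, "default:", i.IsDefault(), "process:", i.IsProcess(), "hyperv:", i.IsHyperV())
	}
}
```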


@@ -1,6 +1,6 @@
// +build !windows
-package container
+package container // import "github.com/docker/docker/api/types/container"
// IsValid indicates if an isolation technology is valid
func (i Isolation) IsValid() bool {


@@ -1,8 +1,4 @@
-package container
-import (
-"strings"
-)
+package container // import "github.com/docker/docker/api/types/container"
// IsBridge indicates whether container uses the bridge network stack
// in windows it is given the name NAT
@@ -21,16 +17,6 @@ func (n NetworkMode) IsUserDefined() bool {
return !n.IsDefault() && !n.IsNone() && !n.IsBridge() && !n.IsContainer()
}
-// IsHyperV indicates the use of a Hyper-V partition for isolation
-func (i Isolation) IsHyperV() bool {
-return strings.ToLower(string(i)) == "hyperv"
-}
-// IsProcess indicates the use of process isolation
-func (i Isolation) IsProcess() bool {
-return strings.ToLower(string(i)) == "process"
-}
// IsValid indicates if an isolation technology is valid
func (i Isolation) IsValid() bool {
return i.IsDefault() || i.IsHyperV() || i.IsProcess()


@@ -1,4 +1,4 @@
-package container
+package container // import "github.com/docker/docker/api/types/container"
// WaitCondition is a type used to specify a container state for which
// to wait.


@@ -1,4 +1,4 @@
-package events
+package events // import "github.com/docker/docker/api/types/events"
const (
// ContainerEventType is the event type that containers generate


@@ -1,6 +1,7 @@
-// Package filters provides helper function to parse and handle command line
-// filter, used for example in docker ps or docker images commands.
-package filters
+/*Package filters provides tools for encoding a mapping of keys to a set of
+multiple values.
+*/
+package filters // import "github.com/docker/docker/api/types/filters"
import (
"encoding/json"
@@ -11,27 +12,34 @@ import (
"github.com/docker/docker/api/types/versions"
)
-// Args stores filter arguments as map key:{map key: bool}.
-// It contains an aggregation of the map of arguments (which are in the form
-// of -f 'key=value') based on the key, and stores values for the same key
-// in a map with string keys and boolean values.
-// e.g given -f 'label=label1=1' -f 'label=label2=2' -f 'image.name=ubuntu'
-// the args will be {"image.name":{"ubuntu":true},"label":{"label1=1":true,"label2=2":true}}
+// Args stores a mapping of keys to a set of multiple values.
type Args struct {
fields map[string]map[string]bool
}
-// NewArgs initializes a new Args struct.
-func NewArgs() Args {
-return Args{fields: map[string]map[string]bool{}}
+// KeyValuePair are used to initialize a new Args
+type KeyValuePair struct {
+Key string
+Value string
}
-// ParseFlag parses the argument to the filter flag. Like
+// Arg creates a new KeyValuePair for initializing Args
+func Arg(key, value string) KeyValuePair {
+return KeyValuePair{Key: key, Value: value}
+}
+// NewArgs returns a new Args populated with the initial args
+func NewArgs(initialArgs ...KeyValuePair) Args {
+args := Args{fields: map[string]map[string]bool{}}
+for _, arg := range initialArgs {
+args.Add(arg.Key, arg.Value)
+}
+return args
+}
+// ParseFlag parses a key=value string and adds it to an Args.
//
// `docker ps -f 'created=today' -f 'image.name=ubuntu*'`
//
-// If prev map is provided, then it is appended to, and returned. By default a new
-// map is created.
+// Deprecated: Use Args.Add()
func ParseFlag(arg string, prev Args) (Args, error) {
filters := prev
if len(arg) == 0 {
@@ -52,74 +60,95 @@ func ParseFlag(arg string, prev Args) (Args, error) {
return filters, nil
}
-// ErrBadFormat is an error returned in case of bad format for a filter.
+// ErrBadFormat is an error returned when a filter is not in the form key=value
+//
+// Deprecated: this error will be removed in a future version
var ErrBadFormat = errors.New("bad format of filter (expected name=value)")
-// ToParam packs the Args into a string for easy transport from client to server.
+// ToParam encodes the Args as args JSON encoded string
+//
+// Deprecated: use ToJSON
func ToParam(a Args) (string, error) {
-// this way we don't URL encode {}, just empty space
+return ToJSON(a)
}
+// MarshalJSON returns a JSON byte representation of the Args
+func (args Args) MarshalJSON() ([]byte, error) {
+if len(args.fields) == 0 {
+return []byte{}, nil
+}
+return json.Marshal(args.fields)
+}
+// ToJSON returns the Args as a JSON encoded string
+func ToJSON(a Args) (string, error) {
if a.Len() == 0 {
return "", nil
}
-buf, err := json.Marshal(a.fields)
-if err != nil {
-return "", err
-}
-return string(buf), nil
+buf, err := json.Marshal(a)
+return string(buf), err
}
-// ToParamWithVersion packs the Args into a string for easy transport from client to server.
-// The generated string will depend on the specified version (corresponding to the API version).
+// ToParamWithVersion encodes Args as a JSON string. If version is less than 1.22
+// then the encoded format will use an older legacy format where the values are a
+// list of strings, instead of a set.
+//
+// Deprecated: Use ToJSON
func ToParamWithVersion(version string, a Args) (string, error) {
-// this way we don't URL encode {}, just empty space
if a.Len() == 0 {
return "", nil
}
-// for daemons older than v1.10, filter must be of the form map[string][]string
-var buf []byte
-var err error
if version != "" && versions.LessThan(version, "1.22") {
-buf, err = json.Marshal(convertArgsToSlice(a.fields))
-} else {
-buf, err = json.Marshal(a.fields)
+buf, err := json.Marshal(convertArgsToSlice(a.fields))
+return string(buf), err
}
-if err != nil {
-return "", err
-}
-return string(buf), nil
+return ToJSON(a)
}
-// FromParam unpacks the filter Args.
+// FromParam decodes a JSON encoded string into Args
+//
+// Deprecated: use FromJSON
func FromParam(p string) (Args, error) {
-if len(p) == 0 {
-return NewArgs(), nil
-}
-r := strings.NewReader(p)
-d := json.NewDecoder(r)
-m := map[string]map[string]bool{}
-if err := d.Decode(&m); err != nil {
-r.Seek(0, 0)
-// Allow parsing old arguments in slice format.
-// Because other libraries might be sending them in this format.
-deprecated := map[string][]string{}
-if deprecatedErr := d.Decode(&deprecated); deprecatedErr == nil {
-m = deprecatedArgs(deprecated)
-} else {
-return NewArgs(), err
-}
-}
-return Args{m}, nil
+return FromJSON(p)
}
-// Get returns the list of values associates with a field.
-// It returns a slice of strings to keep backwards compatibility with old code.
-func (filters Args) Get(field string) []string {
-values := filters.fields[field]
+// FromJSON decodes a JSON encoded string into Args
+func FromJSON(p string) (Args, error) {
+args := NewArgs()
+if p == "" {
+return args, nil
+}
+raw := []byte(p)
+err := json.Unmarshal(raw, &args)
+if err == nil {
+return args, nil
+}
+// Fallback to parsing arguments in the legacy slice format
+deprecated := map[string][]string{}
+if legacyErr := json.Unmarshal(raw, &deprecated); legacyErr != nil {
+return args, err
+}
+args.fields = deprecatedArgs(deprecated)
+return args, nil
+}
+// UnmarshalJSON populates the Args from JSON encode bytes
+func (args Args) UnmarshalJSON(raw []byte) error {
+if len(raw) == 0 {
+return nil
+}
+return json.Unmarshal(raw, &args.fields)
+}
+// Get returns the list of values associated with the key
+func (args Args) Get(key string) []string {
+values := args.fields[key]
if values == nil {
return make([]string, 0)
}
@@ -130,37 +159,34 @@ func (filters Args) Get(field string) []string {
return slice
}
-// Add adds a new value to a filter field.
-func (filters Args) Add(name, value string) {
-if _, ok := filters.fields[name]; ok {
-filters.fields[name][value] = true
+// Add a new value to the set of values
+func (args Args) Add(key, value string) {
+if _, ok := args.fields[key]; ok {
+args.fields[key][value] = true
} else {
-filters.fields[name] = map[string]bool{value: true}
+args.fields[key] = map[string]bool{value: true}
}
}
-// Del removes a value from a filter field.
-func (filters Args) Del(name, value string) {
-if _, ok := filters.fields[name]; ok {
-delete(filters.fields[name], value)
-if len(filters.fields[name]) == 0 {
-delete(filters.fields, name)
+// Del removes a value from the set
+func (args Args) Del(key, value string) {
+if _, ok := args.fields[key]; ok {
+delete(args.fields[key], value)
+if len(args.fields[key]) == 0 {
+delete(args.fields, key)
}
}
}
-// Len returns the number of fields in the arguments.
-func (filters Args) Len() int {
-return len(filters.fields)
+// Len returns the number of keys in the mapping
+func (args Args) Len() int {
+return len(args.fields)
}
-// MatchKVList returns true if the values for the specified field matches the ones
-// from the sources.
-// e.g. given Args are {'label': {'label1=1','label2=1'}, 'image.name', {'ubuntu'}},
-// field is 'label' and sources are {'label1': '1', 'label2': '2'}
-// it returns true.
-func (filters Args) MatchKVList(field string, sources map[string]string) bool {
-fieldValues := filters.fields[field]
+// MatchKVList returns true if all the pairs in sources exist as key=value
+// pairs in the mapping at key, or if there are no values at key.
+func (args Args) MatchKVList(key string, sources map[string]string) bool {
+fieldValues := args.fields[key]
//do not filter if there is no filter set or cannot determine filter
if len(fieldValues) == 0 {
@@ -171,8 +197,8 @@ func (filters Args) MatchKVList(field string, sources map[string]string) bool {
return false
}
-for name2match := range fieldValues {
-testKV := strings.SplitN(name2match, "=", 2)
+for value := range fieldValues {
+testKV := strings.SplitN(value, "=", 2)
v, ok := sources[testKV[0]]
if !ok {
@@ -186,16 +212,13 @@ func (filters Args) MatchKVList(field string, sources map[string]string) bool {
return true
}
-// Match returns true if the values for the specified field matches the source string
-// e.g. given Args are {'label': {'label1=1','label2=1'}, 'image.name', {'ubuntu'}},
-// field is 'image.name' and source is 'ubuntu'
-// it returns true.
-func (filters Args) Match(field, source string) bool {
-if filters.ExactMatch(field, source) {
+// Match returns true if any of the values at key match the source string
+func (args Args) Match(field, source string) bool {
+if args.ExactMatch(field, source) {
return true
}
-fieldValues := filters.fields[field]
+fieldValues := args.fields[field]
for name2match := range fieldValues {
match, err := regexp.MatchString(name2match, source)
if err != nil {
@@ -208,9 +231,9 @@ func (filters Args) Match(field, source string) bool {
return false
}
-// ExactMatch returns true if the source matches exactly one of the filters.
-func (filters Args) ExactMatch(field, source string) bool {
-fieldValues, ok := filters.fields[field]
+// ExactMatch returns true if the source matches exactly one of the values.
+func (args Args) ExactMatch(key, source string) bool {
+fieldValues, ok := args.fields[key]
//do not filter if there is no filter set or cannot determine filter
if !ok || len(fieldValues) == 0 {
return true
@@ -220,14 +243,15 @@ func (filters Args) ExactMatch(field, source string) bool {
return fieldValues[source]
}
-// UniqueExactMatch returns true if there is only one filter and the source matches exactly this one.
-func (filters Args) UniqueExactMatch(field, source string) bool {
-fieldValues := filters.fields[field]
+// UniqueExactMatch returns true if there is only one value and the source
+// matches exactly the value.
+func (args Args) UniqueExactMatch(key, source string) bool {
+fieldValues := args.fields[key]
//do not filter if there is no filter set or cannot determine filter
if len(fieldValues) == 0 {
return true
}
-if len(filters.fields[field]) != 1 {
+if len(args.fields[key]) != 1 {
return false
}
@@ -235,14 +259,14 @@ func (filters Args) UniqueExactMatch(field, source string) bool {
return fieldValues[source]
}
-// FuzzyMatch returns true if the source matches exactly one of the filters,
-// or the source has one of the filters as a prefix.
-func (filters Args) FuzzyMatch(field, source string) bool {
-if filters.ExactMatch(field, source) {
+// FuzzyMatch returns true if the source matches exactly one value, or the
+// source has one of the values as a prefix.
+func (args Args) FuzzyMatch(key, source string) bool {
+if args.ExactMatch(key, source) {
return true
}
-fieldValues := filters.fields[field]
+fieldValues := args.fields[key]
for prefix := range fieldValues {
if strings.HasPrefix(source, prefix) {
return true
@@ -251,9 +275,17 @@ func (filters Args) FuzzyMatch(field, source string) bool {
return false
}
-// Include returns true if the name of the field to filter is in the filters.
-func (filters Args) Include(field string) bool {
-_, ok := filters.fields[field]
+// Include returns true if the key exists in the mapping
+//
+// Deprecated: use Contains
+func (args Args) Include(field string) bool {
+_, ok := args.fields[field]
return ok
}
+// Contains returns true if the key exists in the mapping
+func (args Args) Contains(field string) bool {
+_, ok := args.fields[field]
+return ok
+}
@@ -265,10 +297,10 @@ func (e invalidFilter) Error() string {
func (invalidFilter) InvalidParameter() {}
-// Validate ensures that all the fields in the filter are valid.
-// It returns an error as soon as it finds an invalid field.
-func (filters Args) Validate(accepted map[string]bool) error {
-for name := range filters.fields {
+// Validate compared the set of accepted keys against the keys in the mapping.
+// An error is returned if any mapping keys are not in the accepted set.
+func (args Args) Validate(accepted map[string]bool) error {
+for name := range args.fields {
if !accepted[name] {
return invalidFilter(name)
}
@@ -276,21 +308,14 @@ func (filters Args) Validate(accepted map[string]bool) error {
return nil
}
-type invalidFilterError string
-func (e invalidFilterError) Error() string {
-return "Invalid filter: '" + string(e) + "'"
-}
-func (invalidFilterError) InvalidParameter() {}
-// WalkValues iterates over the list of filtered values for a field.
-// It stops the iteration if it finds an error and it returns that error.
-func (filters Args) WalkValues(field string, op func(value string) error) error {
-if _, ok := filters.fields[field]; !ok {
+// WalkValues iterates over the list of values for a key in the mapping and calls
+// op() for each value. If op returns an error the iteration stops and the
+// error is returned.
+func (args Args) WalkValues(field string, op func(value string) error) error {
+if _, ok := args.fields[field]; !ok {
return nil
}
-for v := range filters.fields[field] {
+for v := range args.fields[field] {
if err := op(v); err != nil {
return err
}
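Putting the reworked filters API together, here is a minimal usage sketch built only from the functions visible in this diff (Arg, NewArgs, Add, ToJSON, FromJSON, Get, Contains); the filter keys and values are illustrative.

```go
package main

import (
	"fmt"

	"github.com/docker/docker/api/types/filters"
)

func main() {
	// Initialize with key/value pairs via the new Arg/NewArgs helpers.
	args := filters.NewArgs(
		filters.Arg("label", "env=prod"),
		filters.Arg("image.name", "ubuntu"),
	)
	args.Add("label", "tier=web")

	// ToJSON replaces the deprecated ToParam for client->server transport.
	encoded, err := filters.ToJSON(args)
	if err != nil {
		fmt.Println("encode error:", err)
		return
	}

	// FromJSON round-trips, falling back to the legacy slice format.
	decoded, err := filters.FromJSON(encoded)
	if err != nil {
		fmt.Println("decode error:", err)
		return
	}
	fmt.Println(decoded.Get("label"), decoded.Contains("image.name"))
}
```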


@@ -7,7 +7,7 @@ package image
// See hack/generate-swagger-api.sh
// ----------------------------------------------------------------------------
-// HistoryResponseItem history response item
+// HistoryResponseItem individual image layer information in response to ImageHistory operation
// swagger:model HistoryResponseItem
type HistoryResponseItem struct {


@@ -1,4 +1,4 @@
-package mount
+package mount // import "github.com/docker/docker/api/types/mount"
import (
"os"
@@ -67,7 +67,7 @@ var Propagations = []Propagation{
type Consistency string
const (
-// ConsistencyFull guarantees bind-mount-like consistency
+// ConsistencyFull guarantees bind mount-like consistency
ConsistencyFull Consistency = "consistent"
// ConsistencyCached mounts can cache read data and FS structure
ConsistencyCached Consistency = "cached"


@@ -1,4 +1,4 @@
-package network
+package network // import "github.com/docker/docker/api/types/network"
// Address represents an IP address
type Address struct {


@@ -1,4 +1,4 @@
-package types
+package types // import "github.com/docker/docker/api/types"
import (
"encoding/json"


@@ -7,7 +7,7 @@ package types
// swagger:model Port
type Port struct {
-// IP
+// Host IP address that the container's port is mapped to
IP string `json:"IP,omitempty"`
// Port on the container


@@ -1,4 +1,4 @@
-package registry
+package registry // import "github.com/docker/docker/api/types/registry"
// ----------------------------------------------------------------------------
// DO NOT EDIT THIS FILE


@@ -1,4 +1,4 @@
-package registry
+package registry // import "github.com/docker/docker/api/types/registry"
import (
"encoding/json"


@@ -1,4 +1,4 @@
-package types
+package types // import "github.com/docker/docker/api/types"
// Seccomp represents the config for a seccomp profile for syscall restriction.
type Seccomp struct {


@@ -1,6 +1,6 @@
// Package types is used for API stability in the types and response to the
// consumers of the API stats endpoint.
-package types
+package types // import "github.com/docker/docker/api/types"
import "time"


@@ -1,4 +1,4 @@
-package strslice
+package strslice // import "github.com/docker/docker/api/types/strslice"
import "encoding/json"


@@ -1,4 +1,4 @@
-package swarm
+package swarm // import "github.com/docker/docker/api/types/swarm"
import "time"


@@ -1,4 +1,4 @@
-package swarm
+package swarm // import "github.com/docker/docker/api/types/swarm"
import "os"
@@ -13,6 +13,10 @@ type Config struct {
type ConfigSpec struct {
Annotations
Data []byte `json:",omitempty"`
+// Templating controls whether and how to evaluate the config payload as
+// a template. If it is not set, no templating is used.
+Templating *Driver `json:",omitempty"`
}
// ConfigReferenceFileTarget is a file target in a config reference
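A short sketch of the new Templating field on ConfigSpec; the swarm.Driver literal with a Name field is assumed from the surrounding swarm types package, and the driver name is illustrative.

```go
package main

import (
	"fmt"

	"github.com/docker/docker/api/types/swarm"
)

func main() {
	spec := swarm.ConfigSpec{
		Data: []byte("rendered for {{ .Service.Name }}\n"),
		// Templating opts the payload into template evaluation (new field).
		Templating: &swarm.Driver{Name: "golang"},
	}
	fmt.Printf("%+v\n", spec)
}
```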

Some files were not shown because too many files have changed in this diff.