Mirror of https://github.com/containers/skopeo.git, synced 2025-09-06 17:20:57 +00:00

Bump containers/storage and containers/image

Update containers/storage and containers/image to the current-as-of-this-writing versions, 105f7c77aef0c797429e41552743bf5b03b63263 and 23bddaa64cc6bf3f3077cda0dbf1cdd7007434df respectively.

Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>
vendor/github.com/containers/image/README.md (generated, vendored): 74 lines deleted
@@ -1,74 +0,0 @@
[GoDoc](https://godoc.org/github.com/containers/image) [Build Status](https://travis-ci.org/containers/image)
=

`image` is a set of Go libraries aimed at working in various way with
containers' images and container image registries.

The containers/image library allows application to pull and push images from
container image registries, like the upstream docker registry. It also
implements "simple image signing".

The containers/image library also allows you to inspect a repository on a
container registry without pulling down the image. This means it fetches the
repository's manifest and it is able to show you a `docker inspect`-like json
output about a whole repository or a tag. This library, in contrast to `docker
inspect`, helps you gather useful information about a repository or a tag
without requiring you to run `docker pull`.

The containers/image library also allows you to translate from one image format
to another, for example docker container images to OCI images. It also allows
you to copy container images between various registries, possibly converting
them as necessary, and to sign and verify images.

## Command-line usage

The containers/image project is only a library with no user interface;
you can either incorporate it into your Go programs, or use the `skopeo` tool:

The [skopeo](https://github.com/projectatomic/skopeo) tool uses the
containers/image library and takes advantage of many of its features,
e.g. `skopeo copy` exposes the `containers/image/copy.Image` functionality.

## Dependencies

This library does not ship a committed version of its dependencies in a `vendor`
subdirectory. This is so you can make well-informed decisions about which
libraries you should use with this package in your own projects, and because
types defined in the `vendor` directory would be impossible to use from your projects.

What this project tests against dependencies-wise is located
[in vendor.conf](https://github.com/containers/image/blob/master/vendor.conf).

## Building

If you want to see what the library can do, or an example of how it is called,
consider starting with the [skopeo](https://github.com/projectatomic/skopeo) tool
instead.

To integrate this library into your project, put it into `$GOPATH` or use
your preferred vendoring tool to include a copy in your project.
Ensure that the dependencies documented [in vendor.conf](https://github.com/containers/image/blob/master/vendor.conf)
are also available
(using those exact versions or different versions of your choosing).

This library, by default, also depends on the GpgME C library. Either install it:
```sh
Fedora$ dnf install gpgme-devel libassuan-devel
macOS$ brew install gpgme
```
or use the `containers_image_openpgp` build tag (e.g. using `go build -tags …`)
This will use a Golang-only OpenPGP implementation for signature verification instead of the default cgo/gpgme-based implementation;
the primary downside is that creating new signatures with the Golang-only implementation is not supported.

## Contributing

When developing this library, please use `make` (or `make … BUILDTAGS=…`) to take advantage of the tests and validation.

## License

ASL 2.0

## Contact

- Mailing list: [containers-dev](https://groups.google.com/forum/?hl=en#!forum/containers-dev)
- IRC: #[container-projects](irc://irc.freenode.net:6667/#container-projects) on freenode.net

vendor/github.com/containers/image/docker/reference/README.md (generated, vendored): 2 lines deleted
@@ -1,2 +0,0 @@
This is a copy of github.com/docker/distribution/reference as of commit fb0bebc4b64e3881cc52a2478d749845ed76d2a8,
except that ParseAnyReferenceWithSet has been removed to drop the dependency on github.com/docker/distribution/digestset.

vendor/github.com/containers/image/pkg/strslice/README.md (generated, vendored): 1 line deleted
@@ -1 +0,0 @@
This package was replicated from [github.com/docker/docker v17.04.0-ce](https://github.com/docker/docker/tree/v17.04.0-ce/api/types/strslice).

vendor/github.com/containers/image/storage/storage_image.go (generated, vendored): 2 lines changed
@@ -521,7 +521,7 @@ func diffLayer(store storage.Store, layerID string) (rc io.ReadCloser, n int64,
 	} else {
 		n = layerMeta.CompressedSize
 	}
-	diff, err := store.Diff("", layer.ID)
+	diff, err := store.Diff("", layer.ID, nil)
 	if err != nil {
 		return nil, -1, err
 	}

vendor/github.com/containers/image/transports/alltransports/alltransports.go (generated, vendored): 2 lines changed
@@ -12,7 +12,7 @@ import (
 	_ "github.com/containers/image/docker/daemon"
 	_ "github.com/containers/image/oci/layout"
 	_ "github.com/containers/image/openshift"
-	_ "github.com/containers/image/ostree"
+	// The ostree transport is registered by ostree*.go
 	_ "github.com/containers/image/storage"
 	"github.com/containers/image/transports"
 	"github.com/containers/image/types"

vendor/github.com/containers/image/transports/alltransports/ostree.go (generated, vendored, new file): 8 lines added
@@ -0,0 +1,8 @@
+// +build !containers_image_ostree_stub
+
+package alltransports
+
+import (
+	// Register the ostree transport
+	_ "github.com/containers/image/ostree"
+)

vendor/github.com/containers/image/transports/alltransports/ostree_stub.go (generated, vendored, new file): 9 lines added
@@ -0,0 +1,9 @@
+// +build containers_image_ostree_stub
+
+package alltransports
+
+import "github.com/containers/image/transports"
+
+func init() {
+	transports.Register(transports.NewStubTransport("ostree"))
+}

vendor/github.com/containers/image/transports/stub.go (generated, vendored, new file): 36 lines added
@@ -0,0 +1,36 @@
+package transports
+
+import (
+	"fmt"
+
+	"github.com/containers/image/types"
+)
+
+// stubTransport is an implementation of types.ImageTransport which has a name, but rejects any references with “the transport $name: is not supported in this build”.
+type stubTransport string
+
+// NewStubTransport returns an implementation of types.ImageTransport which has a name, but rejects any references with “the transport $name: is not supported in this build”.
+func NewStubTransport(name string) types.ImageTransport {
+	return stubTransport(name)
+}
+
+// Name returns the name of the transport, which must be unique among other transports.
+func (s stubTransport) Name() string {
+	return string(s)
+}
+
+// ParseReference converts a string, which should not start with the ImageTransport.Name prefix, into an ImageReference.
+func (s stubTransport) ParseReference(reference string) (types.ImageReference, error) {
+	return nil, fmt.Errorf(`The transport "%s:" is not supported in this build`, string(s))
+}
+
+// ValidatePolicyConfigurationScope checks that scope is a valid name for a signature.PolicyTransportScopes keys
+// (i.e. a valid PolicyConfigurationIdentity() or PolicyConfigurationNamespaces() return value).
+// It is acceptable to allow an invalid value which will never be matched, it can "only" cause user confusion.
+// scope passed to this function will not be "", that value is always allowed.
+func (s stubTransport) ValidatePolicyConfigurationScope(scope string) error {
+	// Allowing any reference in here allows tools with some transports stubbed-out to still
+	// use signature verification policies which refer to these stubbed-out transports.
+	// See also the treatment of unknown transports in policyTransportScopesWithTransport.UnmarshalJSON .
+	return nil
+}

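Taken together, the changes above move the ostree transport behind a build tag: alltransports.go no longer imports it unconditionally, ostree.go registers the real transport in default builds, and ostree_stub.go registers a stub that only knows its own name. A minimal sketch of what a caller observes, assuming the usual alltransports.ParseImageName entry point; the ostree reference string is purely illustrative:

```go
package main

import (
	"fmt"

	"github.com/containers/image/transports/alltransports"
)

func main() {
	// In a default build the real ostree transport parses this reference.
	// In a build made with `go build -tags containers_image_ostree_stub`,
	// the stub registered in ostree_stub.go is consulted instead, and this
	// fails with `The transport "ostree:" is not supported in this build`.
	ref, err := alltransports.ParseImageName("ostree:fedora/26/x86_64/runtime")
	if err != nil {
		fmt.Println("ostree transport unavailable:", err)
		return
	}
	fmt.Println("parsed reference for transport:", ref.Transport().Name())
}
```

Either way the program compiles and runs; the stub trades ostree support for not needing ostree's cgo dependencies, while signature policies that mention ostree scopes continue to validate.
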
vendor/github.com/containers/image/vendor.conf (generated, vendored): 37 lines deleted
@@ -1,37 +0,0 @@
github.com/Sirupsen/logrus 7f4b1adc791766938c29457bed0703fb9134421a
github.com/containers/storage 989b1c1d85f5dfe2076c67b54289cc13dc836c8c
github.com/davecgh/go-spew 346938d642f2ec3594ed81d874461961cd0faa76
github.com/docker/distribution df5327f76fb6468b84a87771e361762b8be23fdb
github.com/docker/docker 75843d36aa5c3eaade50da005f9e0ff2602f3d5e
github.com/docker/go-connections 7da10c8c50cad14494ec818dcdfb6506265c0086
github.com/docker/go-units 0dadbb0345b35ec7ef35e228dabb8de89a65bf52
github.com/docker/libtrust aabc10ec26b754e797f9028f4589c5b7bd90dc20
github.com/ghodss/yaml 04f313413ffd65ce25f2541bfd2b2ceec5c0908c
github.com/gorilla/context 08b5f424b9271eedf6f9f0ce86cb9396ed337a42
github.com/gorilla/mux 94e7d24fd285520f3d12ae998f7fdd6b5393d453
github.com/imdario/mergo 50d4dbd4eb0e84778abe37cefef140271d96fade
github.com/mattn/go-runewidth 14207d285c6c197daabb5c9793d63e7af9ab2d50
github.com/mattn/go-shellwords 005a0944d84452842197c2108bd9168ced206f78
github.com/mistifyio/go-zfs c0224de804d438efd11ea6e52ada8014537d6062
github.com/mtrmac/gpgme b2432428689ca58c2b8e8dea9449d3295cf96fc9
github.com/opencontainers/go-digest aa2ec055abd10d26d539eb630a92241b781ce4bc
github.com/opencontainers/image-spec v1.0.0-rc6
github.com/opencontainers/runc 6b1d0e76f239ffb435445e5ae316d2676c07c6e3
github.com/pborman/uuid 1b00554d822231195d1babd97ff4a781231955c9
github.com/pkg/errors 248dadf4e9068a0b3e79f02ed0a610d935de5302
github.com/pmezard/go-difflib 792786c7400a136282c1664665ae0a8db921c6c2
github.com/stretchr/testify 4d4bfba8f1d1027c4fdbe371823030df51419987
github.com/vbatts/tar-split bd4c5d64c3e9297f410025a3b1bd0c58f659e721
golang.org/x/crypto 453249f01cfeb54c3d549ddb75ff152ca243f9d8
golang.org/x/net 6b27048ae5e6ad1ef927e72e437531493de612fe
golang.org/x/sys 075e574b89e4c2d22f2286a7e2b919519c6f3547
gopkg.in/cheggaaa/pb.v1 d7e6ca3010b6f084d8056847f55d7f572f180678
gopkg.in/yaml.v2 a3f3340b5840cee44f372bddb5880fcbc419b46a
k8s.io/client-go bcde30fb7eaed76fd98a36b4120321b94995ffb6
github.com/xeipuuv/gojsonschema master
github.com/xeipuuv/gojsonreference master
github.com/xeipuuv/gojsonpointer master
github.com/tchap/go-patricia v2.2.6
github.com/opencontainers/selinux ba1aefe8057f1d0cfb8e88d0ec1dc85925ef987d
github.com/BurntSushi/toml b26d9c308763d68093482582cea63d69be07a0f0
github.com/ostreedev/ostree-go 61532f383f1f48e5c27080b0b9c8b022c3706a97

vendor/github.com/containers/storage/README.md (generated, vendored): 43 lines deleted
@@ -1,43 +0,0 @@
`storage` is a Go library which aims to provide methods for storing filesystem
layers, container images, and containers. An `oci-storage` CLI wrapper is also
included for manual and scripting use.

To build the CLI wrapper, use 'make build-binary'.

Operations which use VMs expect to launch them using 'vagrant', defaulting to
using its 'libvirt' provider. The boxes used are also available for the
'virtualbox' provider, and can be selected by setting $VAGRANT_PROVIDER to
'virtualbox' before kicking off the build.

The library manages three types of items: layers, images, and containers.

A *layer* is a copy-on-write filesystem which is notionally stored as a set of
changes relative to its *parent* layer, if it has one. A given layer can only
have one parent, but any layer can be the parent of multiple layers. Layers
which are parents of other layers should be treated as read-only.

An *image* is a reference to a particular layer (its _top_ layer), along with
other information which the library can manage for the convenience of its
caller. This information typically includes configuration templates for
running a binary contained within the image's layers, and may include
cryptographic signatures. Multiple images can reference the same layer, as the
differences between two images may not be in their layer contents.

A *container* is a read-write layer which is a child of an image's top layer,
along with information which the library can manage for the convenience of its
caller. This information typically includes configuration information for
running the specific container. Multiple containers can be derived from a
single image.

Layers, images, and containers are represented primarily by 32 character
hexadecimal IDs, but items of each kind can also have one or more arbitrary
names attached to them, which the library will automatically resolve to IDs
when they are passed in to API calls which expect IDs.

The library can store what it calls *metadata* for each of these types of
items. This is expected to be a small piece of data, since it is cached in
memory and stored along with the library's own bookkeeping information.

Additionally, the library can store one or more of what it calls *big data* for
images and containers. This is a named chunk of larger data, which is only in
memory when it is being read from or being written to its own disk file.

vendor/github.com/containers/storage/containers.go (generated, vendored): 25 lines changed
@@ -50,6 +50,12 @@ type Container struct {
 	// that has been stored, if they're known.
 	BigDataSizes map[string]int64 `json:"big-data-sizes,omitempty"`
 
+	// Created is the datestamp for when this container was created. Older
+	// versions of the library did not track this information, so callers
+	// will likely want to use the IsZero() method to verify that a value
+	// is set before using it.
+	Created time.Time `json:"created,omitempty"`
+
 	Flags map[string]interface{} `json:"flags,omitempty"`
 }
 
@@ -253,6 +259,7 @@ func (r *containerStore) Create(id string, names []string, image, layer, metadat
 		Metadata:     metadata,
 		BigDataNames: []string{},
 		BigDataSizes: make(map[string]int64),
+		Created:      time.Now().UTC(),
 		Flags:        make(map[string]interface{}),
 	}
 	r.containers = append(r.containers, container)
@@ -309,10 +316,11 @@ func (r *containerStore) Delete(id string) error {
 		return ErrContainerUnknown
 	}
 	id = container.ID
-	newContainers := []*Container{}
-	for _, candidate := range r.containers {
-		if candidate.ID != id {
-			newContainers = append(newContainers, candidate)
+	toDeleteIndex := -1
+	for i, candidate := range r.containers {
+		if candidate.ID == id {
+			toDeleteIndex = i
+			break
 		}
 	}
 	delete(r.byid, id)
@@ -321,7 +329,14 @@ func (r *containerStore) Delete(id string) error {
 	for _, name := range container.Names {
 		delete(r.byname, name)
 	}
-	r.containers = newContainers
+	if toDeleteIndex != -1 {
+		// delete the container at toDeleteIndex
+		if toDeleteIndex == len(r.containers)-1 {
+			r.containers = r.containers[:len(r.containers)-1]
+		} else {
+			r.containers = append(r.containers[:toDeleteIndex], r.containers[toDeleteIndex+1:]...)
+		}
+	}
 	if err := r.Save(); err != nil {
 		return err
 	}

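The Delete rewrite above stops rebuilding the containers slice from scratch and instead finds the index of the matching entry and splices it out. A standalone sketch of the same idiom; removeAt is a hypothetical helper, not part of the library:

```go
package main

import "fmt"

// removeAt splices out the element at index i: trim the tail when it is the
// last element, otherwise append the remainder over the gap, mirroring the
// updated Delete methods.
func removeAt(items []string, i int) []string {
	if i < 0 || i >= len(items) {
		return items
	}
	if i == len(items)-1 {
		return items[:len(items)-1]
	}
	return append(items[:i], items[i+1:]...)
}

func main() {
	containers := []string{"a", "b", "c"}
	fmt.Println(removeAt(containers, 1)) // [a c]
}
```
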
vendor/github.com/containers/storage/images.go (generated, vendored): 31 lines changed
@@ -46,6 +46,12 @@ type Image struct {
 	// that has been stored, if they're known.
 	BigDataSizes map[string]int64 `json:"big-data-sizes,omitempty"`
 
+	// Created is the datestamp for when this image was created. Older
+	// versions of the library did not track this information, so callers
+	// will likely want to use the IsZero() method to verify that a value
+	// is set before using it.
+	Created time.Time `json:"created,omitempty"`
+
 	Flags map[string]interface{} `json:"flags,omitempty"`
 }
 
@@ -80,7 +86,7 @@ type ImageStore interface {
 	// Create creates an image that has a specified ID (or a random one) and
 	// optional names, using the specified layer as its topmost (hopefully
 	// read-only) layer. That layer can be referenced by multiple images.
-	Create(id string, names []string, layer, metadata string) (*Image, error)
+	Create(id string, names []string, layer, metadata string, created time.Time) (*Image, error)
 
 	// SetNames replaces the list of names associated with an image with the
 	// supplied values.
@@ -254,7 +260,7 @@ func (r *imageStore) SetFlag(id string, flag string, value interface{}) error {
 	return r.Save()
 }
 
-func (r *imageStore) Create(id string, names []string, layer, metadata string) (image *Image, err error) {
+func (r *imageStore) Create(id string, names []string, layer, metadata string, created time.Time) (image *Image, err error) {
 	if !r.IsReadWrite() {
 		return nil, errors.Wrapf(ErrStoreIsReadOnly, "not allowed to create new images at %q", r.imagespath())
 	}
@@ -274,6 +280,9 @@ func (r *imageStore) Create(id string, names []string, layer, metadata string) (
 			return nil, ErrDuplicateName
 		}
 	}
+	if created.IsZero() {
+		created = time.Now().UTC()
+	}
 	if err == nil {
 		image = &Image{
 			ID:           id,
@@ -282,6 +291,7 @@ func (r *imageStore) Create(id string, names []string, layer, metadata string) (
 			Metadata:     metadata,
 			BigDataNames: []string{},
 			BigDataSizes: make(map[string]int64),
+			Created:      created,
 			Flags:        make(map[string]interface{}),
 		}
 		r.images = append(r.images, image)
@@ -346,10 +356,10 @@ func (r *imageStore) Delete(id string) error {
 		return ErrImageUnknown
 	}
 	id = image.ID
-	newImages := []*Image{}
-	for _, candidate := range r.images {
-		if candidate.ID != id {
-			newImages = append(newImages, candidate)
+	toDeleteIndex := -1
+	for i, candidate := range r.images {
+		if candidate.ID == id {
+			toDeleteIndex = i
 		}
 	}
 	delete(r.byid, id)
@@ -357,7 +367,14 @@ func (r *imageStore) Delete(id string) error {
 	for _, name := range image.Names {
 		delete(r.byname, name)
 	}
-	r.images = newImages
+	if toDeleteIndex != -1 {
+		// delete the image at toDeleteIndex
+		if toDeleteIndex == len(r.images)-1 {
+			r.images = r.images[:len(r.images)-1]
+		} else {
+			r.images = append(r.images[:toDeleteIndex], r.images[toDeleteIndex+1:]...)
+		}
+	}
 	if err := r.Save(); err != nil {
 		return err
 	}

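With the extra parameter, imageStore.Create records a caller-supplied creation time and only falls back to time.Now().UTC() when the zero value is passed, which is what lets CreateImage honor ImageOptions.CreationDate (see the store.go hunks below). A small compile-only sketch of a caller that carries an image's original timestamp across an import; the store value, layer ID, and image name are assumptions for illustration:

```go
package example

import (
	"time"

	"github.com/containers/storage"
)

// importImage records an image whose creation time is preserved from its
// source rather than being stamped with the time of the import. Passing a
// zero time.Time would let the store default to "now".
func importImage(store storage.Store, topLayerID string, created time.Time) (*storage.Image, error) {
	return store.CreateImage("", []string{"example.com/imported:latest"}, topLayerID, "", &storage.ImageOptions{
		CreationDate: created,
	})
}
```
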
vendor/github.com/containers/storage/layers.go (generated, vendored): 207 lines changed
@@ -15,6 +15,7 @@ import (
 	"github.com/containers/storage/pkg/ioutils"
 	"github.com/containers/storage/pkg/stringid"
 	"github.com/containers/storage/pkg/truncindex"
+	digest "github.com/opencontainers/go-digest"
 	"github.com/pkg/errors"
 	"github.com/vbatts/tar-split/tar/asm"
 	"github.com/vbatts/tar-split/tar/storage"
@@ -66,6 +67,38 @@ type Layer struct {
 	// mounted at the mount point.
 	MountCount int `json:"-"`
 
+	// Created is the datestamp for when this layer was created. Older
+	// versions of the library did not track this information, so callers
+	// will likely want to use the IsZero() method to verify that a value
+	// is set before using it.
+	Created time.Time `json:"created,omitempty"`
+
+	// CompressedDigest is the digest of the blob that was last passed to
+	// ApplyDiff() or Put(), as it was presented to us.
+	CompressedDigest digest.Digest `json:"compressed-diff-digest,omitempty"`
+
+	// CompressedSize is the length of the blob that was last passed to
+	// ApplyDiff() or Put(), as it was presented to us. If
+	// CompressedDigest is not set, this should be treated as if it were an
+	// uninitialized value.
+	CompressedSize int64 `json:"compressed-size,omitempty"`
+
+	// UncompressedDigest is the digest of the blob that was last passed to
+	// ApplyDiff() or Put(), after we decompressed it. Often referred to
+	// as a DiffID.
+	UncompressedDigest digest.Digest `json:"diff-digest,omitempty"`
+
+	// UncompressedSize is the length of the blob that was last passed to
+	// ApplyDiff() or Put(), after we decompressed it. If
+	// UncompressedDigest is not set, this should be treated as if it were
+	// an uninitialized value.
+	UncompressedSize int64 `json:"diff-size,omitempty"`
+
+	// CompressionType is the type of compression which we detected on the blob
+	// that was last passed to ApplyDiff() or Put().
+	CompressionType archive.Compression `json:"compression,omitempty"`
+
+	// Flags is arbitrary data about the layer.
 	Flags map[string]interface{} `json:"flags,omitempty"`
 }
 
@@ -75,6 +108,12 @@ type layerMountPoint struct {
 	MountCount int `json:"count"`
 }
 
+// DiffOptions override the default behavior of Diff() methods.
+type DiffOptions struct {
+	// Compression, if set overrides the default compressor when generating a diff.
+	Compression *archive.Compression
+}
+
 // ROLayerStore wraps a graph driver, adding the ability to refer to layers by
 // name, and keeping track of parent-child relationships, along with a list of
 // all known layers.
@@ -101,17 +140,31 @@ type ROLayerStore interface {
 	// Diff produces a tarstream which can be applied to a layer with the contents
 	// of the first layer to produce a layer with the contents of the second layer.
 	// By default, the parent of the second layer is used as the first
-	// layer, so it need not be specified.
-	Diff(from, to string) (io.ReadCloser, error)
+	// layer, so it need not be specified. Options can be used to override
+	// default behavior, but are also not required.
+	Diff(from, to string, options *DiffOptions) (io.ReadCloser, error)
 
 	// DiffSize produces an estimate of the length of the tarstream which would be
 	// produced by Diff.
 	DiffSize(from, to string) (int64, error)
 
+	// Size produces a cached value for the uncompressed size of the layer,
+	// if one is known, or -1 if it is not known. If the layer can not be
+	// found, it returns an error.
+	Size(name string) (int64, error)
+
 	// Lookup attempts to translate a name to an ID. Most methods do this
 	// implicitly.
 	Lookup(name string) (string, error)
 
+	// LayersByCompressedDigest returns a slice of the layers with the
+	// specified compressed digest value recorded for them.
+	LayersByCompressedDigest(d digest.Digest) ([]Layer, error)
+
+	// LayersByUncompressedDigest returns a slice of the layers with the
+	// specified uncompressed digest value recorded for them.
+	LayersByUncompressedDigest(d digest.Digest) ([]Layer, error)
+
 	// Layers returns a slice of the known layers.
 	Layers() ([]Layer, error)
 }
@@ -164,15 +217,17 @@ type LayerStore interface {
 }
 
 type layerStore struct {
 	lockfile          Locker
 	rundir            string
 	driver            drivers.Driver
 	layerdir          string
 	layers            []*Layer
 	idindex           *truncindex.TruncIndex
 	byid              map[string]*Layer
 	byname            map[string]*Layer
 	bymount           map[string]*Layer
+	bycompressedsum   map[digest.Digest][]string
+	byuncompressedsum map[digest.Digest][]string
 }
 
 func (r *layerStore) Layers() ([]Layer, error) {
@@ -203,7 +258,8 @@ func (r *layerStore) Load() error {
 	ids := make(map[string]*Layer)
 	names := make(map[string]*Layer)
 	mounts := make(map[string]*Layer)
-	parents := make(map[string][]*Layer)
+	compressedsums := make(map[digest.Digest][]string)
+	uncompressedsums := make(map[digest.Digest][]string)
 	if err = json.Unmarshal(data, &layers); len(data) == 0 || err == nil {
 		for n, layer := range layers {
 			ids[layer.ID] = layers[n]
@@ -215,10 +271,11 @@ func (r *layerStore) Load() error {
 				}
 				names[name] = layers[n]
 			}
-			if pslice, ok := parents[layer.Parent]; ok {
-				parents[layer.Parent] = append(pslice, layers[n])
-			} else {
-				parents[layer.Parent] = []*Layer{layers[n]}
+			if layer.CompressedDigest != "" {
+				compressedsums[layer.CompressedDigest] = append(compressedsums[layer.CompressedDigest], layer.ID)
+			}
+			if layer.UncompressedDigest != "" {
+				uncompressedsums[layer.UncompressedDigest] = append(uncompressedsums[layer.UncompressedDigest], layer.ID)
 			}
 		}
 	}
@@ -247,6 +304,8 @@ func (r *layerStore) Load() error {
 	r.byid = ids
 	r.byname = names
 	r.bymount = mounts
+	r.bycompressedsum = compressedsums
+	r.byuncompressedsum = uncompressedsums
 	err = nil
 	// Last step: if we're writable, try to remove anything that a previous
 	// user of this storage area marked for deletion but didn't manage to
@@ -369,6 +428,20 @@ func (r *layerStore) lookup(id string) (*Layer, bool) {
 	return nil, false
 }
 
+func (r *layerStore) Size(name string) (int64, error) {
+	layer, ok := r.lookup(name)
+	if !ok {
+		return -1, ErrLayerUnknown
+	}
+	// We use the presence of a non-empty digest as an indicator that the size value was intentionally set, and that
+	// a zero value is not just present because it was never set to anything else (which can happen if the layer was
+	// created by a version of this library that didn't keep track of digest and size information).
+	if layer.UncompressedDigest != "" {
+		return layer.UncompressedSize, nil
+	}
+	return -1, nil
+}
+
 func (r *layerStore) ClearFlag(id string, flag string) error {
 	if !r.IsReadWrite() {
 		return errors.Wrapf(ErrStoreIsReadOnly, "not allowed to clear flags on layers at %q", r.layerspath())
@@ -440,6 +513,7 @@ func (r *layerStore) Put(id, parent string, names []string, mountLabel string, o
 		Parent:     parent,
 		Names:      names,
 		MountLabel: mountLabel,
+		Created:    time.Now().UTC(),
 		Flags:      make(map[string]interface{}),
 	}
 	r.layers = append(r.layers, layer)
@@ -615,13 +689,21 @@ func (r *layerStore) Delete(id string) error {
 	if layer.MountPoint != "" {
 		delete(r.bymount, layer.MountPoint)
 	}
-	newLayers := []*Layer{}
-	for _, candidate := range r.layers {
-		if candidate.ID != id {
-			newLayers = append(newLayers, candidate)
+	toDeleteIndex := -1
+	for i, candidate := range r.layers {
+		if candidate.ID == id {
+			toDeleteIndex = i
+			break
+		}
+	}
+	if toDeleteIndex != -1 {
+		// delete the layer at toDeleteIndex
+		if toDeleteIndex == len(r.layers)-1 {
+			r.layers = r.layers[:len(r.layers)-1]
+		} else {
+			r.layers = append(r.layers[:toDeleteIndex], r.layers[toDeleteIndex+1:]...)
 		}
 	}
-	r.layers = newLayers
 	if err = r.Save(); err != nil {
 		return err
 	}
@@ -726,20 +808,20 @@ func (r *layerStore) newFileGetter(id string) (drivers.FileGetCloser, error) {
 	}, nil
 }
 
-func (r *layerStore) Diff(from, to string) (io.ReadCloser, error) {
+func (r *layerStore) Diff(from, to string, options *DiffOptions) (io.ReadCloser, error) {
 	var metadata storage.Unpacker
 
 	from, to, toLayer, err := r.findParentAndLayer(from, to)
 	if err != nil {
 		return nil, ErrLayerUnknown
 	}
-	compression := archive.Uncompressed
-	if cflag, ok := toLayer.Flags[compressionFlag]; ok {
-		if ctype, ok := cflag.(float64); ok {
-			compression = archive.Compression(ctype)
-		} else if ctype, ok := cflag.(archive.Compression); ok {
-			compression = archive.Compression(ctype)
-		}
+	// Default to applying the type of encryption that we noted was used
+	// for the layerdiff when it was applied.
+	compression := toLayer.CompressionType
+	// If a particular compression type (or no compression) was selected,
+	// use that instead.
+	if options != nil && options.Compression != nil {
+		compression = *options.Compression
 	}
 	if from != toLayer.Parent {
 		diff, err := r.driver.Diff(to, from)
@@ -827,6 +909,10 @@ func (r *layerStore) DiffSize(from, to string) (size int64, err error) {
 }
 
 func (r *layerStore) ApplyDiff(to string, diff archive.Reader) (size int64, err error) {
+	if !r.IsReadWrite() {
+		return -1, errors.Wrapf(ErrStoreIsReadOnly, "not allowed to modify layer contents at %q", r.layerspath())
+	}
+
 	layer, ok := r.lookup(to)
 	if !ok {
 		return -1, ErrLayerUnknown
@@ -839,7 +925,9 @@ func (r *layerStore) ApplyDiff(to string, diff archive.Reader) (size int64, err
 	}
 
 	compression := archive.DetectCompression(header[:n])
-	defragmented := io.MultiReader(bytes.NewBuffer(header[:n]), diff)
+	compressedDigest := digest.Canonical.Digester()
+	compressedCounter := ioutils.NewWriteCounter(compressedDigest.Hash())
+	defragmented := io.TeeReader(io.MultiReader(bytes.NewBuffer(header[:n]), diff), compressedCounter)
 
 	tsdata := bytes.Buffer{}
 	compressor, err := gzip.NewWriterLevel(&tsdata, gzip.BestSpeed)
@@ -847,15 +935,20 @@ func (r *layerStore) ApplyDiff(to string, diff archive.Reader) (size int64, err
 		compressor = gzip.NewWriter(&tsdata)
 	}
 	metadata := storage.NewJSONPacker(compressor)
-	decompressed, err := archive.DecompressStream(defragmented)
+	uncompressed, err := archive.DecompressStream(defragmented)
 	if err != nil {
 		return -1, err
 	}
-	payload, err := asm.NewInputTarStream(decompressed, metadata, storage.NewDiscardFilePutter())
+	uncompressedDigest := digest.Canonical.Digester()
+	uncompressedCounter := ioutils.NewWriteCounter(uncompressedDigest.Hash())
+	payload, err := asm.NewInputTarStream(io.TeeReader(uncompressed, uncompressedCounter), metadata, storage.NewDiscardFilePutter())
 	if err != nil {
 		return -1, err
 	}
 	size, err = r.driver.ApplyDiff(layer.ID, layer.Parent, payload)
+	if err != nil {
+		return -1, err
+	}
 	compressor.Close()
 	if err == nil {
 		if err := os.MkdirAll(filepath.Dir(r.tspath(layer.ID)), 0700); err != nil {
@@ -866,15 +959,57 @@ func (r *layerStore) ApplyDiff(to string, diff archive.Reader) (size int64, err
 		}
 	}
 
-	if compression != archive.Uncompressed {
-		layer.Flags[compressionFlag] = compression
-	} else {
-		delete(layer.Flags, compressionFlag)
+	updateDigestMap := func(m *map[digest.Digest][]string, oldvalue, newvalue digest.Digest, id string) {
+		var newList []string
+		if oldvalue != "" {
+			for _, value := range (*m)[oldvalue] {
+				if value != id {
+					newList = append(newList, value)
+				}
+			}
+			if len(newList) > 0 {
+				(*m)[oldvalue] = newList
+			} else {
+				delete(*m, oldvalue)
+			}
+		}
+		if newvalue != "" {
+			(*m)[newvalue] = append((*m)[newvalue], id)
+		}
 	}
+	updateDigestMap(&r.bycompressedsum, layer.CompressedDigest, compressedDigest.Digest(), layer.ID)
+	layer.CompressedDigest = compressedDigest.Digest()
+	layer.CompressedSize = compressedCounter.Count
+	updateDigestMap(&r.byuncompressedsum, layer.UncompressedDigest, uncompressedDigest.Digest(), layer.ID)
+	layer.UncompressedDigest = uncompressedDigest.Digest()
+	layer.UncompressedSize = uncompressedCounter.Count
+	layer.CompressionType = compression
+
+	err = r.Save()
+
 	return size, err
 }
 
+func (r *layerStore) layersByDigestMap(m map[digest.Digest][]string, d digest.Digest) ([]Layer, error) {
+	var layers []Layer
+	for _, layerID := range m[d] {
+		layer, ok := r.lookup(layerID)
+		if !ok {
+			return nil, ErrLayerUnknown
+		}
+		layers = append(layers, *layer)
+	}
+	return layers, nil
+}
+
+func (r *layerStore) LayersByCompressedDigest(d digest.Digest) ([]Layer, error) {
+	return r.layersByDigestMap(r.bycompressedsum, d)
+}
+
+func (r *layerStore) LayersByUncompressedDigest(d digest.Digest) ([]Layer, error) {
+	return r.layersByDigestMap(r.byuncompressedsum, d)
+}
+
 func (r *layerStore) Lock() {
 	r.lockfile.Lock()
 }

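The ApplyDiff changes above thread the incoming blob through io.TeeReader and write counters so that the compressed digest/size and the uncompressed digest/size are all captured in the single pass that also feeds the graph driver. A self-contained sketch of that technique using the same go-digest package; compress/gzip stands in here for the library's archive.DetectCompression/DecompressStream handling, and the in-memory blob is only illustrative:

```go
package main

import (
	"compress/gzip"
	"fmt"
	"io"
	"strings"

	digest "github.com/opencontainers/go-digest"
)

func main() {
	// Build a small gzip-compressed "layer diff" to stand in for the stream
	// that ApplyDiff receives.
	var blob strings.Builder
	gz := gzip.NewWriter(&blob)
	gz.Write([]byte("layer contents"))
	gz.Close()

	// Hash the compressed bytes as they are read, without buffering the stream.
	compressedDigester := digest.Canonical.Digester()
	tee := io.TeeReader(strings.NewReader(blob.String()), compressedDigester.Hash())

	// Decompress and hash the uncompressed bytes in the same pass.
	uncompressedDigester := digest.Canonical.Digester()
	gzr, err := gzip.NewReader(tee)
	if err != nil {
		panic(err)
	}
	size, err := io.Copy(uncompressedDigester.Hash(), gzr)
	if err != nil {
		panic(err)
	}

	fmt.Println("compressed digest:  ", compressedDigester.Digest())
	fmt.Println("uncompressed digest:", uncompressedDigester.Digest(), "uncompressed size:", size)
}
```

Recording both digests is what makes the new LayersByCompressedDigest and LayersByUncompressedDigest lookups possible, since layers can then be found by the blob they were populated from.
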
vendor/github.com/containers/storage/store.go (generated, vendored): 91 lines changed
@@ -20,6 +20,7 @@ import (
 	"github.com/containers/storage/pkg/idtools"
 	"github.com/containers/storage/pkg/ioutils"
 	"github.com/containers/storage/pkg/stringid"
+	"github.com/opencontainers/go-digest"
 	"github.com/pkg/errors"
 )
 
@@ -298,8 +299,8 @@ type Store interface {
 	DiffSize(from, to string) (int64, error)
 
 	// Diff returns the tarstream which would specify the changes returned by
-	// Changes.
-	Diff(from, to string) (io.ReadCloser, error)
+	// Changes. If options are passed in, they can override default behaviors.
+	Diff(from, to string, options *DiffOptions) (io.ReadCloser, error)
 
 	// ApplyDiff applies a tarstream to a layer. Information about the tarstream
 	// is cached with the layer. Typically, a layer which is populated using a
@@ -307,6 +308,18 @@ type Store interface {
 	// before or after the diff is applied.
 	ApplyDiff(to string, diff archive.Reader) (int64, error)
 
+	// LayersByCompressedDigest returns a slice of the layers with the
+	// specified compressed digest value recorded for them.
+	LayersByCompressedDigest(d digest.Digest) ([]Layer, error)
+
+	// LayersByUncompressedDigest returns a slice of the layers with the
+	// specified uncompressed digest value recorded for them.
+	LayersByUncompressedDigest(d digest.Digest) ([]Layer, error)
+
+	// LayerSize returns a cached approximation of the layer's size, or -1
+	// if we don't have a value on hand.
+	LayerSize(id string) (int64, error)
+
 	// Layers returns a list of the currently known layers.
 	Layers() ([]Layer, error)
 
@@ -422,6 +435,9 @@ type Store interface {
 
 // ImageOptions is used for passing options to a Store's CreateImage() method.
 type ImageOptions struct {
+	// CreationDate, if not zero, will override the default behavior of marking the image as having been
+	// created when CreateImage() was called, recording CreationDate instead.
+	CreationDate time.Time
 }
 
 // ContainerOptions is used for passing options to a Store's CreateContainer() method.
@@ -793,7 +809,13 @@ func (s *store) CreateImage(id string, names []string, layer, metadata string, o
 	if modified, err := ristore.Modified(); modified || err != nil {
 		ristore.Load()
 	}
-	return ristore.Create(id, names, layer, metadata)
+
+	creationDate := time.Now().UTC()
+	if options != nil {
+		creationDate = options.CreationDate
+	}
+
+	return ristore.Create(id, names, layer, metadata, creationDate)
 }
 
 func (s *store) CreateContainer(id string, names []string, image, layer, metadata string, options *ContainerOptions) (*Container, error) {
@@ -1747,7 +1769,7 @@ func (s *store) DiffSize(from, to string) (int64, error) {
 	return -1, ErrLayerUnknown
 }
 
-func (s *store) Diff(from, to string) (io.ReadCloser, error) {
+func (s *store) Diff(from, to string, options *DiffOptions) (io.ReadCloser, error) {
 	rlstore, err := s.LayerStore()
 	if err != nil {
 		return nil, err
@@ -1764,7 +1786,7 @@ func (s *store) Diff(from, to string) (io.ReadCloser, error) {
 			rlstore.Load()
 		}
 		if rlstore.Exists(to) {
-			return rlstore.Diff(from, to)
+			return rlstore.Diff(from, to, options)
 		}
 	}
 	return nil, ErrLayerUnknown
@@ -1786,6 +1808,65 @@ func (s *store) ApplyDiff(to string, diff archive.Reader) (int64, error) {
 	return -1, ErrLayerUnknown
 }
 
+func (s *store) layersByMappedDigest(m func(ROLayerStore, digest.Digest) ([]Layer, error), d digest.Digest) ([]Layer, error) {
+	var layers []Layer
+	rlstore, err := s.LayerStore()
+	if err != nil {
+		return nil, err
+	}
+
+	stores, err := s.ROLayerStores()
+	if err != nil {
+		return nil, err
+	}
+	stores = append([]ROLayerStore{rlstore}, stores...)
+
+	for _, rlstore := range stores {
+		rlstore.Lock()
+		defer rlstore.Unlock()
+		if modified, err := rlstore.Modified(); modified || err != nil {
+			rlstore.Load()
+		}
+		slayers, err := m(rlstore, d)
+		if err != nil {
+			return nil, err
+		}
+		layers = append(layers, slayers...)
+	}
+	return layers, nil
+}
+
+func (s *store) LayersByCompressedDigest(d digest.Digest) ([]Layer, error) {
+	return s.layersByMappedDigest(func(r ROLayerStore, d digest.Digest) ([]Layer, error) { return r.LayersByCompressedDigest(d) }, d)
+}
+
+func (s *store) LayersByUncompressedDigest(d digest.Digest) ([]Layer, error) {
+	return s.layersByMappedDigest(func(r ROLayerStore, d digest.Digest) ([]Layer, error) { return r.LayersByUncompressedDigest(d) }, d)
+}
+
+func (s *store) LayerSize(id string) (int64, error) {
+	lstore, err := s.LayerStore()
+	if err != nil {
+		return -1, err
+	}
+	lstores, err := s.ROLayerStores()
+	if err != nil {
+		return -1, err
+	}
+	lstores = append([]ROLayerStore{lstore}, lstores...)
+	for _, rlstore := range lstores {
+		rlstore.Lock()
+		defer rlstore.Unlock()
+		if modified, err := rlstore.Modified(); modified || err != nil {
+			rlstore.Load()
+		}
+		if rlstore.Exists(id) {
+			return rlstore.Size(id)
+		}
+	}
+	return -1, ErrLayerUnknown
+}
+
 func (s *store) Layers() ([]Layer, error) {
 	var layers []Layer
 	rlstore, err := s.LayerStore()

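Store.Diff now forwards a *DiffOptions, so a caller can ask for a specific compression type instead of whatever was recorded when the layer was applied. A hedged sketch of such a caller; the storage.GetStore/DefaultStoreOptions setup and the layer name are assumptions for illustration, and real code would configure store options for its environment:

```go
package main

import (
	"io"
	"os"

	"github.com/containers/storage"
	"github.com/containers/storage/pkg/archive"
)

func main() {
	// Assumed setup; adjust StoreOptions (graph root, run root, driver) as needed.
	store, err := storage.GetStore(storage.DefaultStoreOptions)
	if err != nil {
		panic(err)
	}

	// Request a gzip-compressed diff of a layer against its parent ("" = parent).
	// Passing nil options, or a nil Compression, keeps the layer's recorded type.
	gz := archive.Gzip
	diff, err := store.Diff("", "layer-id-or-name", &storage.DiffOptions{Compression: &gz})
	if err != nil {
		panic(err)
	}
	defer diff.Close()
	io.Copy(os.Stdout, diff)
}
```
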
vendor/github.com/containers/storage/vendor.conf (generated, vendored): 19 lines deleted
@@ -1,19 +0,0 @@
github.com/BurntSushi/toml master
github.com/Microsoft/go-winio 307e919c663683a9000576fdc855acaf9534c165
github.com/Microsoft/hcsshim 0f615c198a84e0344b4ed49c464d8833d4648dfc
github.com/Sirupsen/logrus 61e43dc76f7ee59a82bdf3d71033dc12bea4c77d
github.com/docker/engine-api 4290f40c056686fcaa5c9caf02eac1dde9315adf
github.com/docker/go-connections eb315e36415380e7c2fdee175262560ff42359da
github.com/docker/go-units 0dadbb0345b35ec7ef35e228dabb8de89a65bf52
github.com/go-check/check 20d25e2804050c1cd24a7eea1e7a6447dd0e74ec
github.com/mattn/go-shellwords 753a2322a99f87c0eff284980e77f53041555bc6
github.com/mistifyio/go-zfs c0224de804d438efd11ea6e52ada8014537d6062
github.com/opencontainers/runc 6c22e77604689db8725fa866f0f2ec0b3e8c3a07
github.com/opencontainers/selinux ba1aefe8057f1d0cfb8e88d0ec1dc85925ef987d
github.com/pborman/uuid 1b00554d822231195d1babd97ff4a781231955c9
github.com/pkg/errors master
github.com/tchap/go-patricia v2.2.6
github.com/vbatts/tar-split bd4c5d64c3e9297f410025a3b1bd0c58f659e721
github.com/vdemeester/shakers 24d7f1d6a71aa5d9cbe7390e4afb66b7eef9e1b3
golang.org/x/net f2499483f923065a842d38eb4c7f1927e6fc6e6d
golang.org/x/sys d75a52659825e75fff6158388dddc6a5b04f9ba5