-
-| Channel       | Location                                                                  |
-|---------------|---------------------------------------------------------------------------|
-| IRC           | #docker-distribution on FreeNode                                          |
-| Issue Tracker | github.com/docker/distribution/issues                                     |
-| Google Groups | https://groups.google.com/a/dockerproject.org/forum/#!forum/distribution  |
-| Mailing List  | docker@dockerproject.org                                                  |
-
-## License
-
-This project is distributed under the [Apache License, Version 2.0](LICENSE.md).
diff --git a/vendor/github.com/docker/distribution/ROADMAP.md b/vendor/github.com/docker/distribution/ROADMAP.md
deleted file mode 100644
index 9cdfa36c..00000000
--- a/vendor/github.com/docker/distribution/ROADMAP.md
+++ /dev/null
@@ -1,267 +0,0 @@
-# Roadmap
-
-The Distribution Project consists of several components, some of which are
-still being defined. This document defines the high-level goals of the
-project, identifies the current components, and defines the release
-relationship to the Docker Platform.
-
-* [Distribution Goals](#distribution-goals)
-* [Distribution Components](#distribution-components)
-* [Project Planning](#project-planning): the release relationship to the Docker Platform.
-
-This roadmap is a living document, providing an overview of the goals and
-considerations made with respect to the future of the project.
-
-## Distribution Goals
-
-- Replace the existing [docker registry](https://github.com/docker/docker-registry)
- implementation as the primary implementation.
-- Replace the existing push and pull code in the docker engine with the
- distribution package.
-- Define a strong data model for distributing docker images.
-- Provide a flexible distribution toolkit for use in the docker platform.
-- Unlock new distribution models.
-
-## Distribution Components
-
-Components of the Distribution Project are managed via github [milestones](https://github.com/docker/distribution/milestones). Upcoming
-features and bugfixes for a component will be added to the relevant milestone. If a feature or
-bugfix is not part of a milestone, it is currently unscheduled for
-implementation.
-
-* [Registry](#registry)
-* [Distribution Package](#distribution-package)
-
-***
-
-### Registry
-
-The new Docker registry is the main portion of the distribution repository.
-Registry 2.0 was the first release of the next-generation registry, focused
-primarily on implementing the [new registry
-API](https://github.com/docker/distribution/blob/master/docs/spec/api.md)
-with an emphasis on security and performance.
-
-Building on the Distribution project goals above, we have a set of goals for
-registry v2 that guide its design. New features should be compared against
-these goals.
-
-#### Data Storage and Distribution First
-
-The registry's first goal is to provide a reliable, consistent storage
-location for Docker images. The registry should provide only the minimal
-amount of indexing required to fetch image data, and no more.
-
-This means we should be selective about new features and API additions,
-including those that may require expensive, ever-growing indexes. Requests
-should be servable in "constant time".
-
-#### Content Addressability
-
-All data objects used in the registry API should be content addressable.
-Content identifiers should be secure and verifiable. This provides a secure,
-reliable base from which to build more advanced content distribution systems.
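-
-As a minimal sketch of the idea (using only the Go standard library rather
-than the registry's own digest package), a content address is just a
-collision-resistant hash of the bytes, so any party holding the content can
-recompute and verify its identifier:
-
-```go
-package main
-
-import (
-	"crypto/sha256"
-	"fmt"
-)
-
-func main() {
-	content := []byte("manifest or layer bytes")
-
-	// The identifier is derived from the content itself; renaming or
-	// moving the data cannot change it, and tampering is detectable.
-	sum := sha256.Sum256(content)
-	fmt.Printf("sha256:%x\n", sum)
-}
-```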
-
-#### Content Agnostic
-
-In the past, changes to the image format would require large changes in Docker
-and the Registry. By decoupling the distribution and image format, we can
-allow the formats to progress without having to coordinate between the two.
-This means that we should be focused on decoupling Docker from the registry
-just as much as decoupling the registry from Docker. Such an approach will
-allow us to unlock new distribution models that haven't been possible before.
-
-We can take this further by saying that the new registry should be content
-agnostic. The registry provides a model of names, tags, manifests and content
-addresses, and that model can be used to work with arbitrary content.
-
-#### Simplicity
-
-The new registry should be closer to a microservice component than its
-predecessor. This means it should have a narrower API and a low number of
-service dependencies. It should be easy to deploy.
-
-This means that other solutions should be explored before changing the API or
-adding extra dependencies. If functionality is required, ask whether it can be
-added as an extension or companion service.
-
-#### Extensibility
-
-The registry should provide extension points to add functionality, keeping
-the core scope narrow while still allowing capabilities to grow.
-
-Features like search, indexing, synchronization and registry explorers fall
-into this category. No such feature should be added unless we've found it
-impossible to do through an extension.
-
-#### Active Feature Discussions
-
-The following are feature discussions that are currently active.
-
-If you don't see your favorite, unimplemented feature, feel free to contact us
-via IRC or the mailing list and we can talk about adding it. The goal here is
-to make sure that new features go through a rigorous design process before
-landing in the registry.
-
-##### Proxying to other Registries
-
-A _pull-through caching_ mode exists for the registry, but the docker client
-restricts it to mirroring only the official Docker Hub. This functionality
-can be expanded once image provenance has been specified and implemented in the
-distribution project.
-
-##### Metadata storage
-
-Metadata for the registry is currently stored with the manifest and layer data on
-the storage backend. While this is a big win for simplicity and reliably maintaining
-state, it comes at the cost of consistency and higher latency. The mutable registry
-metadata operations should be abstracted behind an API which will allow ACID-compliant
-storage systems to handle metadata.
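-
-One hypothetical shape for such an abstraction is sketched below; the
-interface and method names are illustrative only and not part of any actual
-API in this project:
-
-```go
-package metadata
-
-import (
-	"github.com/docker/distribution/context"
-	"github.com/docker/distribution/digest"
-)
-
-// TagStore is a hypothetical abstraction over mutable tag metadata. An
-// ACID-compliant backend (e.g. an RDBMS) could implement it, while the
-// default implementation would continue to write to the storage backend.
-type TagStore interface {
-	// Resolve returns the manifest digest a tag currently points to.
-	Resolve(ctx context.Context, repo, tag string) (digest.Digest, error)
-
-	// Set atomically points a tag at a manifest digest.
-	Set(ctx context.Context, repo, tag string, dgst digest.Digest) error
-
-	// Delete removes a tag without touching the underlying blobs.
-	Delete(ctx context.Context, repo, tag string) error
-}
-```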
-
-##### Peer to Peer transfer
-
-Discussion has started here: https://docs.google.com/document/d/1rYDpSpJiQWmCQy8Cuiaa3NH-Co33oK_SC9HeXYo87QA/edit
-
-##### Indexing, Search and Discovery
-
-The original registry provided some implementation of search for use with
-private registries. Support has been elided from V2 since we'd like to
-decouple search functionality from the registry. This makes the registry
-simpler to deploy, especially in use cases where search is not needed, and
-lets us decouple the image format from the registry.
-
-There are explorations into using the catalog API and notification system to
-build external indexes. The current line of thought is that we will define a
-common search API to index and query docker images. Such a system could be run
-as a companion to a registry or set of registries to power discovery.
-
-The main issue with search and discovery is that there are so many ways to
-accomplish it. There are two aspects to this project. The first is deciding on
-how it will be done, including an API definition that can work with changing
-data formats. The second is the process of integrating with `docker search`.
-We expect that someone will attempt to address the problem with the existing
-tools and propose the result as a standard search API, or use it to inform a
-standardization process. Once this has been explored, we will integrate with
-the docker client.
-
-Please see the following for more detail:
-
-- https://github.com/docker/distribution/issues/206
-
-##### Deletes
-
-> __NOTE:__ Deletes are a frequently requested feature. Before requesting this
-feature or participating in discussion, we ask that you read this section in
-full and understand the problems behind deletes.
-
-While, at first glance, implementing deletes seems simple, there are a number
-of mitigating factors that make many solutions not ideal or even pathological
-in the context of a registry. The following paragraphs discuss the background
-and approaches that could be applied to arrive at a solution.
-
-The goal of deletes in any system is to remove unused or unneeded data. Only
-data requested for deletion should be removed, and no other data. Removing
-unintended data is worse than _not_ removing data that was requested for
-removal, but ideally both properties hold. Generally, according to this rule,
-we err on the side of holding data longer than needed, ensuring that it is
-only removed when we can be certain that it is safe to remove. With the
-current behavior, we opt to hold onto the data forever, ensuring that data
-cannot be incorrectly removed.
-
-To understand the problems with implementing deletes, one must understand the
-data model. All registry data is stored in a filesystem layout, implemented on
-a "storage driver", effectively a _virtual file system_ (VFS). The storage
-system must assume that this VFS layer will be eventually consistent and has
-poor read-after-write consistency, since this is the lowest common denominator
-among the storage drivers. This is mitigated by writing values in
-reverse-dependent order, but makes wider transactional operations unsafe.
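-
-To make "reverse-dependent order" concrete, below is a sketch in which a
-hypothetical one-method interface stands in for the real storage driver and
-the paths are invented for illustration:
-
-```go
-package sketch
-
-import "fmt"
-
-// putter is a stand-in for the PutContent half of a storage driver.
-type putter interface {
-	PutContent(path string, content []byte) error
-}
-
-// writeBlob stores the blob data strictly before the link that makes it
-// visible. If the process dies between the two writes, we leak an
-// unreferenced blob, which is safe; the reverse order could expose a
-// dangling reference on an eventually consistent backend.
-func writeBlob(d putter, dgst string, data []byte) error {
-	if err := d.PutContent(fmt.Sprintf("/blobs/%s/data", dgst), data); err != nil {
-		return err
-	}
-	return d.PutContent(fmt.Sprintf("/repositories/app/_layers/%s/link", dgst), []byte(dgst))
-}
-```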
-
-Layered on the VFS model is a content-addressable _directed, acyclic graph_
-(DAG) made up of blobs. Manifests reference layers. Tags reference manifests.
-Since the same data can be referenced by multiple manifests, we only store
-data once, even if it is in different repositories. Thus, we have a set of
-blobs, referenced by tags and manifests. If we want to delete a blob we need
-to be certain that it is no longer referenced by another manifest or tag. When
-we delete a manifest, we can also try to delete the referenced blobs. Deciding
-whether or not a blob has an active reference is the crux of the problem.
-
-Conceptually, deleting a manifest and its resources is quite simple. Just find
-all the manifests, enumerate the referenced blobs and delete the blobs not in
-that set. An astute observer will recognize this as a garbage collection
-problem. As with garbage collection in programming languages, this is very
-simple when one always has a consistent view. When one adds parallelism and an
-inconsistent view of data, it becomes very challenging.
-
-A simple example can demonstrate this. Let's say we are deleting a manifest
-_A_ in one process. We scan the manifest and decide that all the blobs are
-ready for deletion. Concurrently, we have another process accepting a new
-manifest _B_ referencing one or more blobs from the manifest _A_. Manifest _B_
-is accepted and all the blobs are considered present, so the operation
-proceeds. The original process then deletes the referenced blobs, assuming
-they were unreferenced. The manifest _B_, which we thought had all of its data
-present, can no longer be served by the registry, since the dependent data has
-been deleted.
-
-Deleting data from the registry safely requires some way to coordinate this
-operation. The following approaches are being considered:
-
-- _Reference Counting_ - Maintain a count of references to each blob. This is
-  challenging for a number of reasons: (1) maintaining consistent reference
-  counts across a set of registries and (2) building the initial list of
-  reference counts for an existing registry. These challenges can be met with
-  a consensus protocol like Paxos or Raft in the first case and a necessary
-  but simple scan in the second.
-- _Lock the World GC_ - Halt all writes to the data store. Walk the data store
-  and find all blob references. Delete all unreferenced blobs. This approach
-  is very simple but requires disabling writes for a period of time while the
-  service reads all data. This is slow and expensive but very accurate and
-  effective; a sketch of this approach appears after this list.
-- _Generational GC_ - Do something similar to above but instead of blocking
- writes, writes are sent to another storage backend while reads are broadcast
- to the new and old backends. GC is then performed on the read-only portion.
- Because writes land in the new backend, the data in the read-only section
- can be safely deleted. The main drawbacks of this approach are complexity
- and coordination.
-- _Centralized Oracle_ - Using a centralized, transactional database, we can
-  know exactly which data is referenced at any given time. This avoids the
-  coordination problem by managing this data in a single location. We trade
-  off metadata scalability for simplicity and performance. This is a very good
-  option for most registry deployments, but it would create a bottleneck for
-  registry metadata. However, metadata is generally not the main bottleneck
-  when serving images.
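-
-For illustration, a minimal sketch of the lock-the-world approach follows.
-The store interface is hypothetical and stands in for walking the real
-storage layout; it assumes all writes are halted while the collector runs:
-
-```go
-package sketch
-
-// store is a hypothetical read/delete view of registry storage.
-type store interface {
-	Manifests() [][]string // each manifest as the blob digests it references
-	Blobs() []string       // every blob digest present in the store
-	DeleteBlob(dgst string) error
-}
-
-func lockTheWorldGC(s store) error {
-	// Mark: record every blob referenced by any manifest.
-	marked := make(map[string]struct{})
-	for _, refs := range s.Manifests() {
-		for _, dgst := range refs {
-			marked[dgst] = struct{}{}
-		}
-	}
-
-	// Sweep: delete every blob that was never marked.
-	for _, dgst := range s.Blobs() {
-		if _, ok := marked[dgst]; !ok {
-			if err := s.DeleteBlob(dgst); err != nil {
-				return err
-			}
-		}
-	}
-	return nil
-}
-```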
-
-Please let us know if other solutions exist that we have yet to enumerate.
-Note that for any approach, implementation is a massive consideration. For
-example, a mark-sweep based solution may seem simple, but the amount of work
-in coordination may offset the extra work it would take to build a
-_Centralized Oracle_. We'll accept proposals for any solution but please
-coordinate with us before dropping code.
-
-At this time, we have traded off simplicity and ease of deployment for disk
-space. Simplicity and ease of deployment tend to reduce developer involvement,
-which is currently the most expensive resource in software engineering. Taking
-on any solution for deletes will greatly affect these factors, trading off
-very cheap disk space for a complex deployment and operational story.
-
-Please see the following issues for more detail:
-
-- https://github.com/docker/distribution/issues/422
-- https://github.com/docker/distribution/issues/461
-- https://github.com/docker/distribution/issues/462
-
-### Distribution Package
-
-At its core, the Distribution Project is a set of Go packages that make up
-Distribution Components. At this time, most of these packages make up the
-Registry implementation.
-
-The package itself is considered unstable. If you're using it, please take care to vendor the version you depend on.
-
-For feature additions, please see the Registry section. In the future, we may break out a
-separate Roadmap for distribution-specific features that apply to more than
-just the registry.
-
-***
-
-### Project Planning
-
-An [Open-Source Planning Process](https://github.com/docker/distribution/wiki/Open-Source-Planning-Process) is used to define the Roadmap. [Project Pages](https://github.com/docker/distribution/wiki) define the goals for each Milestone and identify current progress.
-
diff --git a/vendor/github.com/docker/distribution/blobs.go b/vendor/github.com/docker/distribution/blobs.go
deleted file mode 100644
index ce43ea2e..00000000
--- a/vendor/github.com/docker/distribution/blobs.go
+++ /dev/null
@@ -1,233 +0,0 @@
-package distribution
-
-import (
- "errors"
- "fmt"
- "io"
- "net/http"
- "time"
-
- "github.com/docker/distribution/context"
- "github.com/docker/distribution/digest"
- "github.com/docker/distribution/reference"
-)
-
-var (
- // ErrBlobExists is returned when the blob already exists.
- ErrBlobExists = errors.New("blob exists")
-
- // ErrBlobDigestUnsupported is returned when the blob digest is an
- // unsupported version.
- ErrBlobDigestUnsupported = errors.New("unsupported blob digest")
-
- // ErrBlobUnknown is returned when the blob is not found.
- ErrBlobUnknown = errors.New("unknown blob")
-
- // ErrBlobUploadUnknown is returned when the upload is not found.
- ErrBlobUploadUnknown = errors.New("blob upload unknown")
-
- // ErrBlobInvalidLength is returned when the blob has an unexpected length
- // on commit, mismatched with the descriptor or an invalid value.
- ErrBlobInvalidLength = errors.New("blob invalid length")
-)
-
-// ErrBlobInvalidDigest is returned when a digest check fails.
-type ErrBlobInvalidDigest struct {
- Digest digest.Digest
- Reason error
-}
-
-func (err ErrBlobInvalidDigest) Error() string {
- return fmt.Sprintf("invalid digest for referenced layer: %v, %v",
- err.Digest, err.Reason)
-}
-
-// ErrBlobMounted returned when a blob is mounted from another repository
-// instead of initiating an upload session.
-type ErrBlobMounted struct {
- From reference.Canonical
- Descriptor Descriptor
-}
-
-func (err ErrBlobMounted) Error() string {
- return fmt.Sprintf("blob mounted from: %v to: %v",
- err.From, err.Descriptor)
-}
-
-// Descriptor describes targeted content. Used in conjunction with a blob
-// store, a descriptor can be used to fetch, store and target any kind of
-// blob. The struct also describes the wire protocol format. Fields should
-// only be added but never changed.
-type Descriptor struct {
- // MediaType describes the type of the content. All text-based formats
- // are encoded as utf-8.
- MediaType string `json:"mediaType,omitempty"`
-
- // Size in bytes of content.
- Size int64 `json:"size,omitempty"`
-
- // Digest uniquely identifies the content. A byte stream can be verified
- // against this digest.
- Digest digest.Digest `json:"digest,omitempty"`
-
- // NOTE: Before adding a field here, please ensure that all
- // other options have been exhausted. Much of the type relationships
- // depend on the simplicity of this type.
-}
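-
-// For illustration, a Descriptor might serialize to the wire like the
-// following (field values here are hypothetical):
-//
-//	{
-//		"mediaType": "application/vnd.docker.container.image.v1+json",
-//		"size": 1459,
-//		"digest": "sha256:a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4"
-//	}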
-
-// Descriptor returns the descriptor, to make it satisfy the Describable
-// interface. Note that implementations of Describable are generally objects
-// which can be described, not simply descriptors; this exception is in place
-// to make it more convenient to pass actual descriptors to functions that
-// expect Describable objects.
-func (d Descriptor) Descriptor() Descriptor {
- return d
-}
-
-// BlobStatter makes blob descriptors available by digest. The service may
-// provide a descriptor of a different digest if the provided digest is not
-// canonical.
-type BlobStatter interface {
- // Stat provides metadata about a blob identified by the digest. If the
- // blob is unknown to the describer, ErrBlobUnknown will be returned.
- Stat(ctx context.Context, dgst digest.Digest) (Descriptor, error)
-}
-
-// BlobDeleter enables deleting blobs from storage.
-type BlobDeleter interface {
- Delete(ctx context.Context, dgst digest.Digest) error
-}
-
-// BlobDescriptorService manages metadata about a blob by digest. Most
-// implementations will not expose such an interface explicitly. Such mappings
-// should be maintained by interacting with the BlobIngester. Hence, this is
-// left off of BlobService and BlobStore.
-type BlobDescriptorService interface {
- BlobStatter
-
- // SetDescriptor assigns the descriptor to the digest. The provided digest and
- // the digest in the descriptor must map to identical content but they may
- // differ on their algorithm. The descriptor must have the canonical
- // digest of the content and the digest algorithm must match the
- // annotator's canonical algorithm.
- //
- // Such a facility can be used to map blobs between digest domains, with
- // the restriction that the algorithm of the descriptor must match the
- // canonical algorithm (ie sha256) of the annotator.
- SetDescriptor(ctx context.Context, dgst digest.Digest, desc Descriptor) error
-
- // Clear enables descriptors to be unlinked
- Clear(ctx context.Context, dgst digest.Digest) error
-}
-
-// ReadSeekCloser is the primary reader type for blob data, combining
-// io.ReadSeeker with io.Closer.
-type ReadSeekCloser interface {
- io.ReadSeeker
- io.Closer
-}
-
-// BlobProvider describes operations for getting blob data.
-type BlobProvider interface {
- // Get returns the entire blob identified by digest along with the descriptor.
- Get(ctx context.Context, dgst digest.Digest) ([]byte, error)
-
- // Open provides a ReadSeekCloser to the blob identified by the provided
- // descriptor. If the blob is not known to the service, an error will be
- // returned.
- Open(ctx context.Context, dgst digest.Digest) (ReadSeekCloser, error)
-}
-
-// BlobServer can serve blobs via http.
-type BlobServer interface {
- // ServeBlob attempts to serve the blob, identified by dgst, via http. The
- // service may decide to redirect the client elsewhere or serve the data
- // directly.
- //
- // This handler only issues successful responses, such as 2xx or 3xx,
- // meaning it serves data or issues a redirect. If the blob is not
- // available, an error will be returned and the caller may still issue a
- // response.
- //
- // The implementation may serve the same blob from a different digest
- // domain. The appropriate headers will be set for the blob, unless they
- // have already been set by the caller.
- ServeBlob(ctx context.Context, w http.ResponseWriter, r *http.Request, dgst digest.Digest) error
-}
-
-// BlobIngester ingests blob data.
-type BlobIngester interface {
- // Put inserts the content p into the blob service, returning a descriptor
- // or an error.
- Put(ctx context.Context, mediaType string, p []byte) (Descriptor, error)
-
- // Create allocates a new blob writer to add a blob to this service. The
- // returned handle can be written to and later resumed using an opaque
- // identifier. With this approach, one can Close and Resume a BlobWriter
- // multiple times until the BlobWriter is committed or cancelled.
- Create(ctx context.Context, options ...BlobCreateOption) (BlobWriter, error)
-
- // Resume attempts to resume a write to a blob, identified by an id.
- Resume(ctx context.Context, id string) (BlobWriter, error)
-}
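-
-// For illustration, a typical upload pairs Create with Commit (hypothetical
-// caller code; error handling elided):
-//
-//	wr, err := ingester.Create(ctx)
-//	// handle err
-//	wr.Write(p)
-//	desc, err := wr.Commit(ctx, Descriptor{
-//		Size:   int64(len(p)),
-//		Digest: dgst,
-//	})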
-
-// BlobCreateOption is a general extensible function argument for blob creation
-// methods. A BlobIngester may choose to honor any or none of the given
-// BlobCreateOptions, which can be specific to the implementation of the
-// BlobIngester receiving them.
-// TODO (brianbland): unify this with ManifestServiceOption in the future
-type BlobCreateOption interface {
- Apply(interface{}) error
-}
-
-// BlobWriter provides a handle for inserting data into a blob store.
-// Instances should be obtained from BlobWriteService.Writer and
-// BlobWriteService.Resume. If supported by the store, a writer can be
-// recovered with the id.
-type BlobWriter interface {
- io.WriteSeeker
- io.ReaderFrom
- io.Closer
-
- // ID returns the identifier for this writer. The ID can be used with the
- // Blob service to later resume the write.
- ID() string
-
- // StartedAt returns the time this blob write was started.
- StartedAt() time.Time
-
- // Commit completes the blob writer process. The content is verified
- // against the provided provisional descriptor, which may result in an
- // error. Depending on the implementation, written data may be validated
- // against the provisional descriptor fields. If MediaType is not present,
- // the implementation may reject the commit or assign
- // "application/octet-stream" to the blob. The returned descriptor may
- // have a different digest depending on the blob store; this is referred
- // to as the canonical descriptor.
- Commit(ctx context.Context, provisional Descriptor) (canonical Descriptor, err error)
-
- // Cancel ends the blob write without storing any data and frees any
- // associated resources. Any data written thus far will be lost. Cancel
- // implementations should allow multiple calls, treating any call after a
- // commit as a no-op. This allows use of Cancel in a defer statement,
- // increasing the assurance that it is correctly called.
- Cancel(ctx context.Context) error
-
- // Reader returns a reader for the blob being written by this BlobWriter.
- Reader() (io.ReadCloser, error)
-}
-
-// BlobService combines the operations to access, read and write blobs. This
-// can be used to describe remote blob services.
-type BlobService interface {
- BlobStatter
- BlobProvider
- BlobIngester
-}
-
-// BlobStore represents the entire suite of blob-related operations. Such an
-// implementation can access, read, write, delete and serve blobs.
-type BlobStore interface {
- BlobService
- BlobServer
- BlobDeleter
-}
diff --git a/vendor/github.com/docker/distribution/circle.yml b/vendor/github.com/docker/distribution/circle.yml
deleted file mode 100644
index e1995d4b..00000000
--- a/vendor/github.com/docker/distribution/circle.yml
+++ /dev/null
@@ -1,90 +0,0 @@
-# Pony-up!
-machine:
- pre:
- # Install gvm
- - bash < <(curl -s -S -L https://raw.githubusercontent.com/moovweb/gvm/1.0.22/binscripts/gvm-installer)
- # Install ceph to test rados driver & create pool
- - sudo -i ~/distribution/contrib/ceph/ci-setup.sh
- - ceph osd pool create docker-distribution 1
- # Install codecov for coverage
- - pip install --user codecov
-
- post:
- # go
- - gvm install go1.5.3 --prefer-binary --name=stable
-
- environment:
- # Convenient shortcuts to "common" locations
- CHECKOUT: /home/ubuntu/$CIRCLE_PROJECT_REPONAME
- BASE_DIR: src/github.com/$CIRCLE_PROJECT_USERNAME/$CIRCLE_PROJECT_REPONAME
- # Trick circle brainflat "no absolute path" behavior
- BASE_STABLE: ../../../$HOME/.gvm/pkgsets/stable/global/$BASE_DIR
- DOCKER_BUILDTAGS: "include_rados include_oss include_gcs"
- # Workaround Circle parsing dumb bugs and/or YAML wonkiness
- CIRCLE_PAIN: "mode: set"
- # Ceph config
- RADOS_POOL: "docker-distribution"
-
- hosts:
- # Not used yet
- fancy: 127.0.0.1
-
-dependencies:
- pre:
- # Copy the code to the gopath of all go versions
- - >
- gvm use stable &&
- mkdir -p "$(dirname $BASE_STABLE)" &&
- cp -R "$CHECKOUT" "$BASE_STABLE"
-
- override:
- # Install dependencies for every copied clone/go version
- - gvm use stable && go get github.com/tools/godep:
- pwd: $BASE_STABLE
-
- post:
- # For the stable go version, additionally install linting tools
- - >
- gvm use stable &&
- go get github.com/axw/gocov/gocov github.com/golang/lint/golint
-
-test:
- pre:
- # Output the go versions we are going to test
- # - gvm use old && go version
- - gvm use stable && go version
-
- # First thing: build everything. This will catch compile errors, and it's
- # also necessary for go vet to work properly (see #807).
- - gvm use stable && godep go install ./...:
- pwd: $BASE_STABLE
-
- # FMT
- - gvm use stable && test -z "$(gofmt -s -l . | grep -v Godeps/_workspace/src/ | tee /dev/stderr)":
- pwd: $BASE_STABLE
-
- # VET
- - gvm use stable && go vet ./...:
- pwd: $BASE_STABLE
-
- # LINT
- - gvm use stable && test -z "$(golint ./... | grep -v Godeps/_workspace/src/ | tee /dev/stderr)":
- pwd: $BASE_STABLE
-
- override:
- # Test stable, and report
- - gvm use stable; export ROOT_PACKAGE=$(go list .); go list -tags "$DOCKER_BUILDTAGS" ./... | xargs -L 1 -I{} bash -c 'export PACKAGE={}; godep go test -tags "$DOCKER_BUILDTAGS" -test.short -coverprofile=$GOPATH/src/$PACKAGE/coverage.out -coverpkg=$(./coverpkg.sh $PACKAGE $ROOT_PACKAGE) $PACKAGE':
- timeout: 600
- pwd: $BASE_STABLE
-
- post:
- # Report to codecov
- - bash <(curl -s https://codecov.io/bash):
- pwd: $BASE_STABLE
-
- ## Notes
- # Disabled the -race detector due to massive memory usage.
- # Do we want these as well?
- # - go get code.google.com/p/go.tools/cmd/goimports
- # - test -z "$(goimports -l -w ./... | tee /dev/stderr)"
- # http://labix.org/gocheck
diff --git a/vendor/github.com/docker/distribution/context/context.go b/vendor/github.com/docker/distribution/context/context.go
deleted file mode 100644
index 23cbf5b5..00000000
--- a/vendor/github.com/docker/distribution/context/context.go
+++ /dev/null
@@ -1,85 +0,0 @@
-package context
-
-import (
- "sync"
-
- "github.com/docker/distribution/uuid"
- "golang.org/x/net/context"
-)
-
-// Context is a copy of Context from the golang.org/x/net/context package.
-type Context interface {
- context.Context
-}
-
-// instanceContext is a context that provides only an instance id. It is
-// provided as the main background context.
-type instanceContext struct {
- Context
- id string // id of context, logged as "instance.id"
- once sync.Once // once protects generation of the id
-}
-
-func (ic *instanceContext) Value(key interface{}) interface{} {
- if key == "instance.id" {
- ic.once.Do(func() {
- // We want to lazily initialize the UUID so that we don't
- // call a random generator from the package initialization
- // code. For various reasons, randomness may not be available
- // here: https://github.com/docker/distribution/issues/782
- ic.id = uuid.Generate().String()
- })
- return ic.id
- }
-
- return ic.Context.Value(key)
-}
-
-var background = &instanceContext{
- Context: context.Background(),
-}
-
-// Background returns a non-nil, empty Context. The background context
-// provides a single key, "instance.id" that is globally unique to the
-// process.
-func Background() Context {
- return background
-}
-
-// WithValue returns a copy of parent in which the value associated with key is
-// val. Use context Values only for request-scoped data that transits processes
-// and APIs, not for passing optional parameters to functions.
-func WithValue(parent Context, key, val interface{}) Context {
- return context.WithValue(parent, key, val)
-}
-
-// stringMapContext is a simple context implementation that checks a map for a
-// key, falling back to a parent if not present.
-type stringMapContext struct {
- context.Context
- m map[string]interface{}
-}
-
-// WithValues returns a context that proxies lookups through a map. Only
-// supports string keys.
-func WithValues(ctx context.Context, m map[string]interface{}) context.Context {
- mo := make(map[string]interface{}, len(m)) // make our own copy.
- for k, v := range m {
- mo[k] = v
- }
-
- return stringMapContext{
- Context: ctx,
- m: mo,
- }
-}
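-
-// For illustration, WithValues can seed a context with static, process-wide
-// fields (the keys and values here are hypothetical):
-//
-//	ctx := WithValues(Background(), map[string]interface{}{
-//		"version":     "v2.0.0",
-//		"environment": "production",
-//	})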
-
-func (smc stringMapContext) Value(key interface{}) interface{} {
- if ks, ok := key.(string); ok {
- if v, ok := smc.m[ks]; ok {
- return v
- }
- }
-
- return smc.Context.Value(key)
-}
diff --git a/vendor/github.com/docker/distribution/context/doc.go b/vendor/github.com/docker/distribution/context/doc.go
deleted file mode 100644
index 3b4ab888..00000000
--- a/vendor/github.com/docker/distribution/context/doc.go
+++ /dev/null
@@ -1,89 +0,0 @@
-// Package context provides several utilities for working with
-// golang.org/x/net/context in http requests. Primarily, the focus is on
-// logging relevant request information but this package is not limited to
-// that purpose.
-//
-// The easiest way to get started is to get the background context:
-//
-// ctx := context.Background()
-//
-// The returned context should be passed around your application and be the
-// root of all other context instances. If the application has a version, this
-// line should be called before anything else:
-//
-// ctx := context.WithVersion(context.Background(), version)
-//
-// The above will store the version in the context and will be available to
-// the logger.
-//
-// Logging
-//
-// The most useful aspect of this package is GetLogger. This function takes
-// any context.Context interface and returns the current logger from the
-// context. Canonical usage looks like this:
-//
-// GetLogger(ctx).Infof("something interesting happened")
-//
-// GetLogger also takes optional key arguments. The keys will be looked up in
-// the context and reported with the logger. The following example would
-// return a logger that prints the version with each log message:
-//
-// ctx := context.WithValue(context.Background(), "version", version)
-// GetLogger(ctx, "version").Infof("this log message has a version field")
-//
-// The above would print out a log message like this:
-//
-// INFO[0000] this log message has a version field version=v2.0.0-alpha.2.m
-//
-// When used with WithLogger, we gain the ability to decorate the context with
-// loggers that have information from disparate parts of the call stack.
-// Following from the version example, we can build a new context with the
-// configured logger such that we always print the version field:
-//
-// ctx = WithLogger(ctx, GetLogger(ctx, "version"))
-//
-// Since the logger has been pushed to the context, we can now get the version
-// field for free with our log messages. Future calls to GetLogger on the new
-// context will have the version field:
-//
-// GetLogger(ctx).Infof("this log message has a version field")
-//
-// This becomes more powerful when we start stacking loggers. Let's say we
-// have the version logger from above but also want a request id. Using the
-// context above, in our request scoped function, we place another logger in
-// the context:
-//
-// ctx = context.WithValue(ctx, "http.request.id", "unique id") // called when building request context
-// ctx = WithLogger(ctx, GetLogger(ctx, "http.request.id"))
-//
-// When GetLogger is called on the new context, "http.request.id" will be
-// included as a logger field, along with the original "version" field:
-//
-// INFO[0000] this log message has a version field http.request.id=unique id version=v2.0.0-alpha.2.m
-//
-// Note that this only affects the new context; the previous context, with the
-// version field, can be used independently. Put another way, the new logger,
-// added to the request context, is unique to that context and can have
-// request-scoped variables.
-//
-// HTTP Requests
-//
-// This package also contains several methods for working with http requests.
-// The concepts are very similar to those described above. We simply place the
-// request in the context using WithRequest. This makes the request variables
-// available. GetRequestLogger can then be called to get request specific
-// variables in a log line:
-//
-// ctx = WithRequest(ctx, req)
-// GetRequestLogger(ctx).Infof("request variables")
-//
-// Like above, if we want to include the request data in all log messages in
-// the context, we push the logger to a new context and use that one:
-//
-// ctx = WithLogger(ctx, GetRequestLogger(ctx))
-//
-// The concept is fairly powerful and ensures that calls throughout the stack
-// can be traced in log messages. Using the fields like "http.request.id", one
-// can analyze call flow for a particular request with a simple grep of the
-// logs.
-package context
diff --git a/vendor/github.com/docker/distribution/context/http.go b/vendor/github.com/docker/distribution/context/http.go
deleted file mode 100644
index 2cb1d041..00000000
--- a/vendor/github.com/docker/distribution/context/http.go
+++ /dev/null
@@ -1,364 +0,0 @@
-package context
-
-import (
- "errors"
- "net"
- "net/http"
- "strings"
- "sync"
- "time"
-
- log "github.com/Sirupsen/logrus"
- "github.com/docker/distribution/uuid"
- "github.com/gorilla/mux"
-)
-
-// Common errors used with this package.
-var (
- ErrNoRequestContext = errors.New("no http request in context")
- ErrNoResponseWriterContext = errors.New("no http response in context")
-)
-
-func parseIP(ipStr string) net.IP {
- ip := net.ParseIP(ipStr)
- if ip == nil {
- log.Warnf("invalid remote IP address: %q", ipStr)
- }
- return ip
-}
-
-// RemoteAddr extracts the remote address of the request, taking into
-// account proxy headers.
-func RemoteAddr(r *http.Request) string {
- if prior := r.Header.Get("X-Forwarded-For"); prior != "" {
- proxies := strings.Split(prior, ",")
- if len(proxies) > 0 {
- remoteAddr := strings.Trim(proxies[0], " ")
- if parseIP(remoteAddr) != nil {
- return remoteAddr
- }
- }
- }
- // X-Real-Ip is less supported, but worth checking in the
- // absence of X-Forwarded-For
- if realIP := r.Header.Get("X-Real-Ip"); realIP != "" {
- if parseIP(realIP) != nil {
- return realIP
- }
- }
-
- return r.RemoteAddr
-}
-
-// RemoteIP extracts the remote IP of the request, taking into
-// account proxy headers.
-func RemoteIP(r *http.Request) string {
- addr := RemoteAddr(r)
-
- // Try parsing it as "IP:port"
- if ip, _, err := net.SplitHostPort(addr); err == nil {
- return ip
- }
-
- return addr
-}
-
-// WithRequest places the request on the context. The context of the request
-// is assigned a unique id, available at "http.request.id". The request itself
-// is available at "http.request". Other common attributes are available under
-// the prefix "http.request.". If a request is already present on the context,
-// this method will panic.
-func WithRequest(ctx Context, r *http.Request) Context {
- if ctx.Value("http.request") != nil {
- // NOTE(stevvooe): This needs to be considered a programming error. It
- // is unlikely that we'd want to have more than one request in
- // context.
- panic("only one request per context")
- }
-
- return &httpRequestContext{
- Context: ctx,
- startedAt: time.Now(),
- id: uuid.Generate().String(),
- r: r,
- }
-}
-
-// GetRequest returns the http request in the given context. Returns
-// ErrNoRequestContext if the context does not have an http request associated
-// with it.
-func GetRequest(ctx Context) (*http.Request, error) {
- if r, ok := ctx.Value("http.request").(*http.Request); r != nil && ok {
- return r, nil
- }
- return nil, ErrNoRequestContext
-}
-
-// GetRequestID attempts to resolve the current request id, if possible. An
-// empty string is returned if it is not available on the context.
-func GetRequestID(ctx Context) string {
- return GetStringValue(ctx, "http.request.id")
-}
-
-// WithResponseWriter returns a new context and response writer that makes
-// interesting response statistics available within the context.
-func WithResponseWriter(ctx Context, w http.ResponseWriter) (Context, http.ResponseWriter) {
- irw := instrumentedResponseWriter{
- ResponseWriter: w,
- Context: ctx,
- }
-
- if closeNotifier, ok := w.(http.CloseNotifier); ok {
- irwCN := &instrumentedResponseWriterCN{
- instrumentedResponseWriter: irw,
- CloseNotifier: closeNotifier,
- }
-
- return irwCN, irwCN
- }
-
- return &irw, &irw
-}
-
-// GetResponseWriter returns the http.ResponseWriter from the provided
-// context. If not present, ErrNoResponseWriterContext is returned. The
-// returned instance provides instrumentation in the context.
-func GetResponseWriter(ctx Context) (http.ResponseWriter, error) {
- v := ctx.Value("http.response")
-
- rw, ok := v.(http.ResponseWriter)
- if !ok || rw == nil {
- return nil, ErrNoResponseWriterContext
- }
-
- return rw, nil
-}
-
-// getVarsFromRequest lets us change the request vars implementation for testing
-// and maybe future changes.
-var getVarsFromRequest = mux.Vars
-
-// WithVars extracts gorilla/mux vars and makes them available on the returned
-// context. Variables are available at keys with the prefix "vars.". For
-// example, if looking for the variable "name", it can be accessed as
-// "vars.name". Implementations that are accessing values need not know that
-// the underlying context is implemented with gorilla/mux vars.
-func WithVars(ctx Context, r *http.Request) Context {
- return &muxVarsContext{
- Context: ctx,
- vars: getVarsFromRequest(r),
- }
-}
-
-// GetRequestLogger returns a logger that contains fields from the request in
-// the current context. If the request is not available in the context, no
-// fields will display. Request loggers can safely be pushed onto the context.
-func GetRequestLogger(ctx Context) Logger {
- return GetLogger(ctx,
- "http.request.id",
- "http.request.method",
- "http.request.host",
- "http.request.uri",
- "http.request.referer",
- "http.request.useragent",
- "http.request.remoteaddr",
- "http.request.contenttype")
-}
-
-// GetResponseLogger reads the current response stats and builds a logger.
-// Because the values are read at call time, pushing a logger returned from
-// this function on the context will lead to missing or invalid data. Only
-// call this at the end of a request, after the response has been written.
-func GetResponseLogger(ctx Context) Logger {
- l := getLogrusLogger(ctx,
- "http.response.written",
- "http.response.status",
- "http.response.contenttype")
-
- duration := Since(ctx, "http.request.startedat")
-
- if duration > 0 {
- l = l.WithField("http.response.duration", duration.String())
- }
-
- return l
-}
-
-// httpRequestContext makes information about a request available to the context.
-type httpRequestContext struct {
- Context
-
- startedAt time.Time
- id string
- r *http.Request
-}
-
-// Value returns a keyed element of the request for use in the context. To get
-// the request itself, query "request". For other components, access them as
-// "request.