Compare commits

...

68 Commits

Author SHA1 Message Date
Antonio Murdaca
0b73154601 bump to v0.1.21
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2017-06-15 16:58:24 +02:00
Miloslav Trmač
da48399f89 Merge pull request #357 from runcom/up-cstorage
*: update c/storage
2017-06-15 15:12:33 +02:00
Antonio Murdaca
08504d913c *: update c/storage
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2017-06-15 14:29:56 +02:00
Miloslav Trmač
fe67466701 Merge pull request #351 from giuseppe/add-ostree-dep
vendor.conf: add ostree-go
2017-06-14 17:18:51 +02:00
Giuseppe Scrivano
47e5d0cd9e vendor.conf: add ostree-go
it is used by containers/image for pulling images to the OSTree storage.

Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
2017-06-14 15:35:34 +02:00
Antonio Murdaca
f0d830a8ca Merge pull request #355 from runcom/fail-on-diff-os
fail early when image os doesn't match host os
2017-06-06 13:55:26 +02:00
Antonio Murdaca
6d3f523c57 fail early when image os doesn't match host os
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2017-06-06 13:40:16 +02:00
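The check itself comes from the vendored containers/image code; a minimal sketch of the idea, with hypothetical names rather than the actual API (the error text mirrors what the integration test in the diff below expects):

```go
package sketch

import (
	"fmt"
	"runtime"
)

// checkImageOS is a hypothetical stand-in for the containers/image check;
// the real function and its signature differ.
func checkImageOS(imageOS string) error {
	if imageOS != runtime.GOOS {
		return fmt.Errorf("image operating system %q cannot be used on %q", imageOS, runtime.GOOS)
	}
	return nil
}
```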
Miloslav Trmač
87a36f9d5b Merge pull request #343 from mtrmac/readme-updates
README.md updates
2017-06-05 17:50:15 +02:00
Miloslav Trmač
9d12d72fb7 Improve documentation on what to do with containers/image failures in test-skopeo 2017-06-05 15:16:43 +02:00
Miloslav Trmač
fc88117065 Drop finished work from TODO
- We now have the docker-archive: transport
- Integration tests with built registries also exist
2017-06-05 15:16:43 +02:00
Miloslav Trmač
0debda0bbb Start all command examples with $
That has mostly been the case, with one outlier.  Fix it.
2017-06-05 15:16:43 +02:00
Miloslav Trmač
49f10736d1 Restructure the “Building without a container” version
Consolidate the Fedora and macOS instructions to prevent duplication,
and to suggest using $GOPATH for both.

Start with installing dependencies.
2017-06-05 15:15:04 +02:00
Miloslav Trmač
3d0d2ea6bb Document that there is a choice in using containers, and what the trade-off is 2017-06-05 15:15:02 +02:00
Miloslav Trmač
a2499d3451 Create “Building {without a container,in a container}” subsections
To make it clearer that the two are alternatives.

Document that a docker command is needed for the in-container build.

Also move the “checkout in $GOPATH” warning into the “without a
container” section, where it belongs.
2017-06-05 15:13:25 +02:00
Miloslav Trmač
7db0aab330 Move documentation build instructions to the end, with a separate header
We want to start with the Go 1.5 dependency and build/checkout
instructions.

Also create a separate subsection, to match the future “Building
in/without a container” subsections
2017-06-05 14:42:19 +02:00
Miloslav Trmač
150eb5bf18 Add Fedora instructions for installing go-md2man
It would be nice to have macOS instructions as well.
2017-06-05 14:40:58 +02:00
Miloslav Trmač
bcc0de69d4 Merge pull request #353 from surajssd/add-fedora-dependency
docs(README): add build dependencies for fedora
2017-06-05 14:39:58 +02:00
Suraj Deshmukh
cab89b9b9c docs(README): add build dependencies for fedora
Two more packages, btrfs-progs-devel and device-mapper-devel, are
needed to build skopeo locally on Fedora, so add them to the README.

Signed-off-by: Suraj Deshmukh <surajssd009005@gmail.com>
2017-06-04 16:03:23 +05:30
Miloslav Trmač
5b95a21401 Merge pull request #350 from mhrivnak/patch-1
Fixes a typo in the Name field
2017-05-31 20:49:28 +02:00
Michael Hrivnak
78c83dbcff Fixes a typo in the Name field 2017-05-31 14:22:22 -04:00
Miloslav Trmač
98ced5196c Merge pull request #347 from mtrmac/docker-certs.d
Support /etc/docker/certs.d
2017-05-30 19:40:36 +02:00
Miloslav Trmač
63272a10d7 Vendor after merging mtrmac/image:docker-certs.d 2017-05-30 18:26:43 +02:00
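For context, this adopts docker's per-registry certificate layout: certificates for a registry are looked up under /etc/docker/certs.d in a directory named after the registry's host:port (*.crt files for CA roots, *.cert/*.key pairs for client authentication, as the man page change below notes). A small sketch of the convention, not of the containers/image implementation:

```go
package sketch

import "path/filepath"

// certsDirForRegistry illustrates the docker-compatible lookup convention:
// certificates for a registry live under /etc/docker/certs.d in a directory
// named after the registry's host:port.
func certsDirForRegistry(hostPort string) string {
	return filepath.Join("/etc/docker/certs.d", hostPort)
}
```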
Antonio Murdaca
07c798ff82 Merge pull request #346 from jingqiuELE/master
Always combine RUN apt-get update with apt-get install in the same RUN statement.
2017-05-25 14:54:31 +02:00
Jing Qiu
750d72873d Always combine RUN apt-get update with apt-get install in the same RUN
statement.

From the [docs](https://docs.docker.com/engine/userguide/eng-image/dockerfile_best-practices/#build-cache) in March 2017:

Always combine RUN apt-get update with apt-get install in the same RUN statement, for example

RUN apt-get update && apt-get install -y package-bar

Using apt-get update alone in a RUN statement causes caching issues and subsequent apt-get install instructions fail.

Signed-off-by: Jing Qiu <aqiu0720@gmail.com>
2017-05-25 11:38:35 +08:00
Antonio Murdaca
81dddac7d6 Merge pull request #345 from runcom/image-spec-rc6
update image-spec to v1.0.0-rc6
2017-05-24 16:55:18 +02:00
Antonio Murdaca
405b912f7e update image-spec to v1.0.0-rc6
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2017-05-24 16:35:57 +02:00
Miloslav Trmač
e1884aa8a8 Merge pull request #344 from mtrmac/storage-rebase
Pull in rhatdan/image:master
2017-05-17 22:20:50 +02:00
Miloslav Trmač
ffb01385dd Vendor after merging https://github.com/containers/image/pull/275 2017-05-17 17:12:23 +02:00
Miloslav Trmač
c5adc4b580 Merge pull request #331 from mtrmac/storage-rebase
Re-vendor, primarily for https://github.com/containers/storage/pull/11
2017-05-12 18:50:40 +02:00
Miloslav Trmač
69b9106646 Re-vendor, primarily for https://github.com/containers/storage/pull/11
containers/storage got new dependencies, so we will need to re-vendor
eventually anyway, and having this separate from other major work is
cleaner.

But the primary goal of this commit is to see whether it makes skopeo
buildable on OS X.
2017-05-11 13:07:14 +02:00
Antonio Murdaca
565688a963 Merge pull request #341 from projectatomic/jzb-patch-1
Update README.md
2017-05-10 21:21:00 +02:00
Joe Brockmeier
5205f3646d Update README.md
Just clarifying that Skopeo is available in later versions of Fedora as well.
2017-05-10 13:53:36 -04:00
Miloslav Trmač
3c57a0f084 Merge pull request #340 from mtrmac/setup-test
Simplify infrastructure setup, and make registry timeouts less strict
2017-05-10 15:05:52 +02:00
Miloslav Trmač
ed2088a4e5 Increase the time we wait for a registry from 0.5 to 5 seconds
We are not testing registry start-up performance, and killing the test
suite just because Travis is a bit busy doesn’t help; we’re much better
off with a test run which gives the registry a bit more time.
2017-05-10 14:46:28 +02:00
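A sketch of the resulting readiness loop; the 100 ms interval between pings is an assumption inferred from the 0.5s→5s description, since the diff below only shows the iteration count changing from 5 to 50:

```go
package sketch

import (
	"fmt"
	"time"
)

// waitForRegistry pings until the registry answers, for up to ~5 seconds.
// The 100 ms interval is an assumption; the diff only shows the iteration
// count changing from 5 to 50.
func waitForRegistry(ping func() error) error {
	var err error
	for i := 0; i != 50; i++ { // was: i != 5, i.e. ~0.5 seconds
		if err = ping(); err == nil {
			return nil
		}
		time.Sleep(100 * time.Millisecond)
	}
	return fmt.Errorf("registry did not become ready: %v", err)
}
```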
Miloslav Trmač
8b36001c0e Do not build docker/registry and remove remaining helpers 2017-05-10 14:46:28 +02:00
Miloslav Trmač
cd300805d1 Remove registry instances which we don’t use at all 2017-05-10 14:46:12 +02:00
Miloslav Trmač
8d3d0404fe Clean up SigningSuite test initialization
Move "skip if signing is not available" into the test; there may be
tests which only need verification.

Move GNUPGHOME creation from SetUpTest to SetUpSuite; sharing a single
key is fine.  We don’t change the GNUPGHOME contents at test runtime.
2017-05-10 14:45:10 +02:00
Miloslav Trmač
9985f12cd4 Move skopeoBinary check from *Suite.SetUpTest to SetUpSuite
The results are not going to vary across individual tests, so let’s only
check once.
2017-05-10 14:44:20 +02:00
Antonio Murdaca
5a42657cdb Merge pull request #339 from runcom/new-release-v0.1.20
New release v0.1.20
2017-05-10 13:01:27 +02:00
Antonio Murdaca
43d6128036 bump back to v0.1.21-dev
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2017-05-10 12:37:37 +02:00
Antonio Murdaca
e802625b7c bump to v0.1.20
- support image-spec v1.0.0-rc5

Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2017-05-10 12:37:07 +02:00
Miloslav Trmač
105be6a0ab Merge pull request #337 from mtrmac/docker-push-to-tag
Update name/tag embedded into schema1 manifests
2017-05-10 12:33:18 +02:00
Miloslav Trmač
8ec2a142c9 Enable tests for schema2→schema1 conversion with docker/distribution registries
Now that we can update the embedded name:tag, the test no longer fails
on a schema1→schema1 copy with the old schema1 server which verifies the
name:tag value.
2017-05-10 12:13:39 +02:00
Miloslav Trmač
c4ec970bb2 Uncomment TestCopyCompression test cases
We have been able to push to Docker tags for a long time, and recently
implemented s2→s1 autodetection.
2017-05-10 12:13:38 +02:00
Miloslav Trmač
cf7e58a297 Test that names are updated as expected when pushing to schema1
Before the update, we have loosened the equality check to ignore the
name/tag; now that we are generating them correctly, test for the
expected values.
2017-05-10 12:13:38 +02:00
Miloslav Trmač
03233a5ca7 Vendor after merging mtrmac/image:docker-push-to-tag 2017-05-10 12:13:24 +02:00
Miloslav Trmač
9bc847e656 Use a schema2 server in TestCopySignatures
TestCopySignatures, among other things, tests copying a correctly
signed image to a different name without breaking the signature, which
will be impossible with schema1 after we start updating the names
embedded in the schema1 manifest.  So, use the schema2 server binary,
and docker://busybox image versions which use schema2.
2017-05-10 12:01:09 +02:00
Miloslav Trmač
22965c443f Allow updated names when comparing schema1 images
The new version of containers/image will update the name and tag fields
when pushing to schema1; so accept that before we update, so that tests
keep working.

For now, just ignore the name/tag fields, so that both the current and
updated versions of containers/image are acceptable; we will tighten
that after the update.
2017-05-10 12:01:09 +02:00
Miloslav Trmač
6f23c88e84 Make schema1 dir: comparisons nondestructive
Use (diff -x manifest.json) instead of removing the manifest.json files.
Also rename the helper from destructiveCheckDirImageAreEqual to
assertDirImagesAreEqual.
2017-05-10 12:01:09 +02:00
Miloslav Trmač
4afafe9538 Log output of docker/distribution registries instead of sending it to /dev/null
This is useful for diagnosing failures which are logged locally but not
reported to the clients.
2017-05-10 12:01:09 +02:00
Antonio Murdaca
50dda3492c Merge pull request #313 from runcom/update-imgspec-rc5
oci: update to image-spec v1 RC5
2017-05-10 11:57:20 +02:00
Antonio Murdaca
f4a44f00b8 integration: disable check with image-tools for image-spec RC5
We need https://github.com/opencontainers/image-tools/pull/144 first

Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2017-05-10 11:35:19 +02:00
Antonio Murdaca
dd13a0d60b oci: update to image-spec v1 RC5
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2017-05-10 11:35:16 +02:00
Antonio Murdaca
80b751a225 Merge pull request #332 from mtrmac/schema12
Run-time manifest type support (e.g. schema1/schema2) autodetection re-vendor + integration tests
2017-05-10 11:04:28 +02:00
Miloslav Trmač
8556dd1aa1 Add tests for automatic schema conversion depending on registry
In addition to the default registry in the OpenShift cluster, start two
more (one known to support s1 only, one known to support s1+s2), and
also a docker/distribution s1-only registry.

Then test that copying images around works as expected.

NOTE: The docker/distribution s1-only tests currently fail and are
disabled.  See the added comment for details.
2017-05-09 15:08:11 +02:00
Miloslav Trmač
c43e4cffaf Collect "processes" to kill in openshiftCluster instead of named members
We don’t really need to differentiate between the master and the
registry; we just want to terminate them, maybe in the right order.  So,
collect them in an array instead of using separate members.

This will make it easier to have more registry instances in the near
future.

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2017-05-09 15:08:11 +02:00
Miloslav Trmač
44ee5be6db Do not store a *check.C in openshiftCluster
The *check.C object can not be reused across tests, so storing it in
openshiftCluster is incorrect (and leads to weird behavior like
assertion failures being silently ignored).  So far this hasn't really
been an issue because we have been using the *check.C only in SetUpSuite
and TearDownSuite, and the changes to this have turned out to be
unnecessary after all, but this is still the right thing to do.

This is more or less
> s/c\./cluster\./g; s/cluster\.c/c/g
(paying more attention to the syntax) and corresponding modifications
to the method declarations.

Does not change behavior, apart from using the correct *check.C in
CopySuite.TearDownSuite.
2017-05-09 15:08:11 +02:00
Miloslav Trmač
cfd1cf6def Abort in fileFromFixture if a replacement template is not found
This makes the fixture editing more robust against typos or unexpected
changes (if the “fixture” comes from third parties, like the OpenShift
registry configuration file).
2017-05-09 15:08:11 +02:00
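A minimal sketch of such a guard, assuming fileFromFixture keeps iterating over an `edits` map as shown later in the diff; the exact assertion used in the commit may differ:

```go
package sketch

import (
	"bytes"

	"github.com/go-check/check"
)

// applyEdits is a hypothetical extract of fileFromFixture's replacement loop,
// aborting the test when a template string is missing from the fixture.
func applyEdits(c *check.C, contents []byte, edits map[string]string) []byte {
	for template, value := range edits {
		if !bytes.Contains(contents, []byte(template)) {
			c.Fatalf("template %q not found in fixture", template)
		}
		contents = bytes.Replace(contents, []byte(template), []byte(value), -1)
	}
	return contents
}
```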
Miloslav Trmač
15ce6488dd Move fileFromFixture from copy_test.go to utils.go
… to make it possible to call it from openshift.go.

Does not change behavior.
2017-05-09 15:08:11 +02:00
Miloslav Trmač
4f199f86f7 Split a startRegistryProcess from openshiftCluster.startRegistry
The helper will be reused in the future.  For now, this does not change
behavior.
2017-05-09 15:08:11 +02:00
Miloslav Trmač
1f1f6801bb Split openshiftCluster.startRegistry into prepareRegistryConfig and startRegistry
This separates creation of the account and configuration, which can be
shared across service instances, from actually starting the registry; we
will soon start several of them.

Only splits a function, does not change behavior.
2017-05-09 15:08:11 +02:00
Miloslav Trmač
1ee776b09b Vendor after merging mtrmac/image:schema12 2017-05-09 15:07:49 +02:00
Antonio Murdaca
8bb9d5f0f2 Merge pull request #335 from runcom/fix-crio-openshift
Fix docker-daemon to containers-storage copy
2017-05-06 12:29:16 +02:00
Antonio Murdaca
88ea901938 vendor c/image to fix docker-daemon -> containers-storage copy
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2017-05-06 12:09:29 +02:00
Antonio Murdaca
fd41f20bb8 Merge pull request #314 from giuseppe/ostree
skopeo: support copy to OSTree storage
2017-04-24 22:58:51 +02:00
Giuseppe Scrivano
1d5c681f0f skopeo: support copy to OSTree storage
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
2017-04-24 20:23:45 +02:00
Antonio Murdaca
c467afa37c Merge pull request #328 from runcom/new-release
New release: v0.1.19
2017-04-14 12:45:56 +02:00
Antonio Murdaca
53a90e51d4 bump again to v0.1.20-dev
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2017-04-14 12:21:28 +02:00
435 changed files with 138321 additions and 12260 deletions

View File

@@ -6,20 +6,13 @@ RUN dnf -y update && dnf install -y make git golang golang-github-cpuguy83-go-md
device-mapper-devel \
# gpgme bindings deps
libassuan-devel gpgme-devel \
gnupg \
# registry v1 deps
xz-devel \
python-devel \
python-pip \
swig \
redhat-rpm-config \
openssl-devel \
patch
ostree-devel \
gnupg
# Install three versions of the registry. The first is an older version that
# Install two versions of the registry. The first is an older version that
# only supports schema1 manifests. The second is a newer version that supports
# both. This allows integration-cli tests to cover push/pull with both schema1
# and schema2 manifests. Install registry v1 also.
# and schema2 manifests.
ENV REGISTRY_COMMIT_SCHEMA1 ec87e9b6971d831f0eff752ddb54fb64693e51cd
ENV REGISTRY_COMMIT 47a064d4195a9b56133891bbb13620c3ac83a827
RUN set -x \
@@ -31,17 +24,7 @@ RUN set -x \
&& (cd "$GOPATH/src/github.com/docker/distribution" && git checkout -q "$REGISTRY_COMMIT_SCHEMA1") \
&& GOPATH="$GOPATH/src/github.com/docker/distribution/Godeps/_workspace:$GOPATH" \
go build -o /usr/local/bin/registry-v2-schema1 github.com/docker/distribution/cmd/registry \
&& rm -rf "$GOPATH" \
&& export DRV1="$(mktemp -d)" \
&& git clone https://github.com/docker/docker-registry.git "$DRV1" \
# no need for setuptools since we have a version conflict with fedora
&& sed -i.bak s/setuptools==5.8//g "$DRV1/requirements/main.txt" \
&& sed -i.bak s/setuptools==5.8//g "$DRV1/depends/docker-registry-core/requirements/main.txt" \
&& pip install "$DRV1/depends/docker-registry-core" \
&& pip install file://"$DRV1#egg=docker-registry[bugsnag,newrelic,cors]" \
&& patch $(python -c 'import boto; import os; print os.path.dirname(boto.__file__)')/connection.py \
< "$DRV1/contrib/boto_header_patch.diff" \
&& dnf -y update && dnf install -y m2crypto
&& rm -rf "$GOPATH"
RUN set -x \
&& yum install -y which git tar wget hostname util-linux bsdtar socat ethtool device-mapper iptables tree findutils nmap-ncat e2fsprogs xfsprogs lsof docker iproute \

View File

@@ -1,5 +1,12 @@
FROM ubuntu:16.04
RUN apt-get update
RUN apt-get install -y golang btrfs-tools git-core libdevmapper-dev libgpgme11-dev go-md2man
RUN apt-get update && apt-get install -y \
golang \
btrfs-tools \
git-core \
libdevmapper-dev \
libgpgme11-dev \
go-md2man
ENV GOPATH=/
WORKDIR /src/github.com/projectatomic/skopeo

View File

@@ -103,43 +103,64 @@ you'll get an error. You can fix this by either logging in (via `docker login`)
Building
-
To build the manual you will need go-md2man.
To build the `skopeo` binary you need at least Go 1.5 because it uses the latest `GO15VENDOREXPERIMENT` flag.
There are two ways to build skopeo: in a container, or locally without a container. Choose the one which better matches your needs and environment.
### Building without a container
Building without a container requires a bit more manual work and setup in your environment, but it is more flexible:
- It should work in more environments (e.g. for native macOS builds)
- It does not require root privileges (after dependencies are installed)
- It is faster, therefore more convenient for developing `skopeo`.
Install the necessary dependencies:
```sh
$ sudo apt-get install go-md2man
Fedora$ sudo dnf install gpgme-devel libassuan-devel btrfs-progs-devel device-mapper-devel
macOS$ brew install gpgme
```
To build the `skopeo` binary you need at least Go 1.5 because it uses the latest `GO15VENDOREXPERIMENT` flag. Also, make sure to clone the repository in your `GOPATH` - otherwise compilation fails.
Make sure to clone this repository in your `GOPATH` - otherwise compilation fails.
```sh
$ git clone https://github.com/projectatomic/skopeo $GOPATH/src/github.com/projectatomic/skopeo
$ cd $GOPATH/src/github.com/projectatomic/skopeo && make all
$ cd $GOPATH/src/github.com/projectatomic/skopeo && make binary-local
```
To build locally on OSX:
### Building in a container
Building in a container is simpler, but more restrictive:
- It requires the `docker` command and the ability to run Linux containers
- The created executable is a Linux executable, and depends on dynamic libraries which may be available only in a container of a similar Linux distribution.
```sh
$ brew install gpgme
$ make binary-local
$ make binary # Or (make all) to also build documentation, see below.
```
You may need to install additional development packages: `gpgme-devel` and `libassuan-devel`
### Building documentation
To build the manual you will need go-md2man.
```sh
$ dnf install gpgme-devel libassuan-devel
Debian$ sudo apt-get install go-md2man
Fedora$ sudo dnf install go-md2man
```
Then
```sh
$ make docs
```
Installing
-
If you built from source:
```sh
$ sudo make install
```
`skopeo` is also available from Fedora 23:
`skopeo` is also available from Fedora 23 (and later):
```sh
sudo dnf install skopeo
$ sudo dnf install skopeo
```
TODO
-
- list all images on registry?
- registry v2 search?
- support output to docker load tar(s)
- show repo tags via flag or when reference isn't tagged or digested
- add tests (integration with deployed registries in container - Docker-like)
- support rkt/appc image spec
NOT TODO
@@ -163,10 +184,20 @@ In order to update an existing dependency:
- update the relevant dependency line in `vendor.conf`
- run `vndr github.com/pkg/errors`
In order to test out new PRs from [containers/image](https://github.com/containers/image) to not break `skopeo`:
When new PRs for [containers/image](https://github.com/containers/image) break `skopeo` (i.e. `containers/image` tests fail in `make test-skopeo`):
- create a new branch in your `skopeo` checkout and switch to it
- update `vendor.conf`: find the `containers/image` dependency and update it to vendor from your own branch and your own repository fork (e.g. `github.com/containers/image my-branch https://github.com/runcom/image`)
- run `vndr github.com/containers/image`
- make any other necessary changes in the skopeo repo (e.g. add other dependencies now required by `containers/image`, or update skopeo for the changed `containers/image` API)
- optionally add new integration tests to the skopeo repo
- submit the resulting branch as a skopeo PR, marked “DO NOT MERGE”
- iterate until tests pass and the PR is reviewed
- then the original `containers/image` PR can be merged, disregarding its `make test-skopeo` failure
- as soon as possible after that, in the skopeo PR, restore the `containers/image` line in `vendor.conf` to use `containers/image:master`
- run `vndr github.com/containers/image`
- update the skopeo PR with the result, drop the “DO NOT MERGE” marking
- after tests complete successfully again, merge the skopeo PR
License
-

View File

@@ -105,5 +105,10 @@ var copyCmd = cli.Command{
Name: "dest-tls-verify",
Usage: "require HTTPS and verify certificates when talking to the docker destination registry (defaults to true)",
},
cli.StringFlag{
Name: "dest-ostree-tmp-dir",
Value: "",
Usage: "`DIRECTORY` to use for OSTree temporary files",
},
},
}

View File

@@ -16,6 +16,7 @@ func contextFromGlobalOptions(c *cli.Context, flagPrefix string) (*types.SystemC
// DEPRECATED: keep this here for backward compatibility, but override
// them if per subcommand flags are provided (see below).
DockerInsecureSkipTLSVerify: !c.GlobalBoolT("tls-verify"),
OSTreeTmpDirPath: c.String(flagPrefix + "ostree-tmp-dir"),
}
if c.IsSet(flagPrefix + "tls-verify") {
ctx.DockerInsecureSkipTLSVerify = !c.BoolT(flagPrefix + "tls-verify")

View File

@@ -27,6 +27,7 @@ _skopeo_copy() {
--src-tls-verify
--dest-creds --dcreds
--dest-cert-dir
--dest-ostree-tmp-dir
--dest-tls-verify
"
local boolean_options="

View File

@@ -2,7 +2,7 @@
% Jhon Honce
% August 2016
# NAME
skopeo -- Various operations with container images images and container image registries
skopeo -- Various operations with container images and container image registries
# SYNOPSIS
**skopeo** [_global options_] _command_ [_command options_]
# DESCRIPTION
@@ -74,6 +74,8 @@ Uses the system's trust policy to validate images, rejects images not trusted by
**--dest-cert-dir** _path_ Use certificates at _path_ (*.crt, *.cert, *.key) to connect to the destination registry
**--dest-ostree-tmp-dir** _path_ Directory to use for OSTree temporary files.
**--dest-tls-verify** _bool-value_ Require HTTPS and verify certificates when talking to docker destination registry (defaults to true)
Existing signatures, if any, are preserved as well.

View File

@@ -12,9 +12,6 @@ import (
const (
privateRegistryURL0 = "127.0.0.1:5000"
privateRegistryURL1 = "127.0.0.1:5001"
privateRegistryURL2 = "127.0.0.1:5002"
privateRegistryURL3 = "127.0.0.1:5003"
privateRegistryURL4 = "127.0.0.1:5004"
)
func Test(t *testing.T) {
@@ -26,15 +23,13 @@ func init() {
}
type SkopeoSuite struct {
regV1 *testRegistryV1
regV2 *testRegistryV2
regV2Shema1 *testRegistryV2
regV1WithAuth *testRegistryV1 // does v1 support auth?
regV2WithAuth *testRegistryV2
}
func (s *SkopeoSuite) SetUpSuite(c *check.C) {
_, err := exec.LookPath(skopeoBinary)
c.Assert(err, check.IsNil)
}
func (s *SkopeoSuite) TearDownSuite(c *check.C) {
@@ -42,24 +37,14 @@ func (s *SkopeoSuite) TearDownSuite(c *check.C) {
}
func (s *SkopeoSuite) SetUpTest(c *check.C) {
_, err := exec.LookPath(skopeoBinary)
c.Assert(err, check.IsNil)
s.regV1 = setupRegistryV1At(c, privateRegistryURL0, false) // TODO:(runcom)
s.regV2 = setupRegistryV2At(c, privateRegistryURL1, false, false)
s.regV2Shema1 = setupRegistryV2At(c, privateRegistryURL2, false, true)
s.regV1WithAuth = setupRegistryV1At(c, privateRegistryURL3, true) // not used
s.regV2WithAuth = setupRegistryV2At(c, privateRegistryURL4, true, false)
s.regV2 = setupRegistryV2At(c, privateRegistryURL0, false, false)
s.regV2WithAuth = setupRegistryV2At(c, privateRegistryURL1, true, false)
}
func (s *SkopeoSuite) TearDownTest(c *check.C) {
// not checking V1 registries now...
if s.regV2 != nil {
s.regV2.Close()
}
if s.regV2Shema1 != nil {
s.regV2Shema1.Close()
}
if s.regV2WithAuth != nil {
//cmd := exec.Command("docker", "logout", s.regV2WithAuth)
//c.Assert(cmd.Run(), check.IsNil)

View File

@@ -1,10 +1,9 @@
package main
import (
"bytes"
"encoding/json"
"fmt"
"io/ioutil"
"log"
"net/http"
"net/http/httptest"
"os"
@@ -15,19 +14,22 @@ import (
"github.com/containers/image/signature"
"github.com/go-check/check"
"github.com/opencontainers/go-digest"
"github.com/opencontainers/image-tools/image"
)
func init() {
check.Suite(&CopySuite{})
}
const v2DockerRegistryURL = "localhost:5555" // Update also policy.json
const (
v2DockerRegistryURL = "localhost:5555" // Update also policy.json
v2s1DockerRegistryURL = "localhost:5556"
)
type CopySuite struct {
cluster *openshiftCluster
registry *testRegistryV2
gpgHome string
cluster *openshiftCluster
registry *testRegistryV2
s1Registry *testRegistryV2
gpgHome string
}
func (s *CopySuite) SetUpSuite(c *check.C) {
@@ -37,7 +39,7 @@ func (s *CopySuite) SetUpSuite(c *check.C) {
s.cluster = startOpenshiftCluster(c) // FIXME: Set up TLS for the docker registry port instead of using "--tls-verify=false" all over the place.
for _, stream := range []string{"unsigned", "personal", "official", "naming", "cosigned", "compression"} {
for _, stream := range []string{"unsigned", "personal", "official", "naming", "cosigned", "compression", "schema1", "schema2"} {
isJSON := fmt.Sprintf(`{
"kind": "ImageStream",
"apiVersion": "v1",
@@ -49,7 +51,9 @@ func (s *CopySuite) SetUpSuite(c *check.C) {
runCommandWithInput(c, isJSON, "oc", "create", "-f", "-")
}
s.registry = setupRegistryV2At(c, v2DockerRegistryURL, false, false) // FIXME: Set up TLS for the docker registry port instead of using "--tls-verify=false" all over the place.
// FIXME: Set up TLS for the docker registry port instead of using "--tls-verify=false" all over the place.
s.registry = setupRegistryV2At(c, v2DockerRegistryURL, false, false)
s.s1Registry = setupRegistryV2At(c, v2s1DockerRegistryURL, false, true)
gpgHome, err := ioutil.TempDir("", "skopeo-gpg")
c.Assert(err, check.IsNil)
@@ -75,35 +79,23 @@ func (s *CopySuite) TearDownSuite(c *check.C) {
if s.registry != nil {
s.registry.Close()
}
if s.s1Registry != nil {
s.s1Registry.Close()
}
if s.cluster != nil {
s.cluster.tearDown()
s.cluster.tearDown(c)
}
}
// fileFromFixture applies edits to inputPath and returns a path to the temporary file.
// Callers should defer os.Remove(the_returned_path)
func fileFromFixture(c *check.C, inputPath string, edits map[string]string) string {
contents, err := ioutil.ReadFile(inputPath)
c.Assert(err, check.IsNil)
for template, value := range edits {
contents = bytes.Replace(contents, []byte(template), []byte(value), -1)
}
file, err := ioutil.TempFile("", "policy.json")
c.Assert(err, check.IsNil)
path := file.Name()
_, err = file.Write(contents)
c.Assert(err, check.IsNil)
err = file.Close()
c.Assert(err, check.IsNil)
return path
}
func (s *CopySuite) TestCopyFailsWithManifestList(c *check.C) {
assertSkopeoFails(c, ".*can not copy docker://estesp/busybox:latest: manifest contains multiple images.*", "copy", "docker://estesp/busybox:latest", "dir:somedir")
}
func (s *CopySuite) TestCopyFailsWhenImageOSDoesntMatchRuntimeOS(c *check.C) {
c.Skip("can't run this on Travis")
assertSkopeoFails(c, `.*image operating system "windows" cannot be used on "linux".*`, "copy", "docker://microsoft/windowsservercore", "containers-storage:test")
}
func (s *CopySuite) TestCopySimpleAtomicRegistry(c *check.C) {
dir1, err := ioutil.TempDir("", "copy-1")
c.Assert(err, check.IsNil)
@@ -117,9 +109,9 @@ func (s *CopySuite) TestCopySimpleAtomicRegistry(c *check.C) {
assertSkopeoSucceeds(c, "", "copy", "docker://estesp/busybox:amd64", "dir:"+dir1)
// "push": dir: → atomic:
assertSkopeoSucceeds(c, "", "--tls-verify=false", "--debug", "copy", "dir:"+dir1, "atomic:localhost:5000/myns/unsigned:unsigned")
// The result of pushing and pulling is an unmodified image.
// The result of pushing and pulling is an equivalent image, except for schema1 embedded names.
assertSkopeoSucceeds(c, "", "--tls-verify=false", "copy", "atomic:localhost:5000/myns/unsigned:unsigned", "dir:"+dir2)
destructiveCheckDirImagesAreEqual(c, dir1, dir2)
assertSchema1DirImagesAreEqualExceptNames(c, dir1, "estesp/busybox:amd64", dir2, "myns/unsigned:unsigned")
}
// The most basic (skopeo copy) use:
@@ -153,11 +145,10 @@ func (s *CopySuite) TestCopySimple(c *check.C) {
c.Assert(err, check.IsNil)
}
// Check whether dir: images in dir1 and dir2 are equal.
// WARNING: This modifies the contents of dir1 and dir2!
func destructiveCheckDirImagesAreEqual(c *check.C, dir1, dir2 string) {
// Check whether dir: images in dir1 and dir2 are equal, ignoring schema1 signatures.
func assertDirImagesAreEqual(c *check.C, dir1, dir2 string) {
// The manifests may have different JWS signatures; so, compare the manifests by digests, which
// strips the signatures, and remove them, comparing the rest file by file.
// strips the signatures.
digests := []digest.Digest{}
for _, dir := range []string{dir1, dir2} {
manifestPath := filepath.Join(dir, "manifest.json")
@@ -166,12 +157,37 @@ func destructiveCheckDirImagesAreEqual(c *check.C, dir1, dir2 string) {
digest, err := manifest.Digest(m)
c.Assert(err, check.IsNil)
digests = append(digests, digest)
err = os.Remove(manifestPath)
c.Assert(err, check.IsNil)
c.Logf("Manifest file %s (digest %s) removed", manifestPath, digest)
}
c.Assert(digests[0], check.Equals, digests[1])
out := combinedOutputOfCommand(c, "diff", "-urN", dir1, dir2)
// Then compare the rest file by file.
out := combinedOutputOfCommand(c, "diff", "-urN", "-x", "manifest.json", dir1, dir2)
c.Assert(out, check.Equals, "")
}
// Check whether schema1 dir: images in dir1 and dir2 are equal, ignoring schema1 signatures and the embedded path/tag values, which should have the expected values.
func assertSchema1DirImagesAreEqualExceptNames(c *check.C, dir1, ref1, dir2, ref2 string) {
// The manifests may have different JWS signatures and names; so, unmarshal and delete these elements.
manifests := []map[string]interface{}{}
for dir, ref := range map[string]string{dir1: ref1, dir2: ref2} {
manifestPath := filepath.Join(dir, "manifest.json")
m, err := ioutil.ReadFile(manifestPath)
c.Assert(err, check.IsNil)
data := map[string]interface{}{}
err = json.Unmarshal(m, &data)
c.Assert(err, check.IsNil)
c.Assert(data["schemaVersion"], check.Equals, float64(1))
colon := strings.LastIndex(ref, ":")
c.Assert(colon, check.Not(check.Equals), -1)
c.Assert(data["name"], check.Equals, ref[:colon])
c.Assert(data["tag"], check.Equals, ref[colon+1:])
for _, key := range []string{"signatures", "name", "tag"} {
delete(data, key)
}
manifests = append(manifests, data)
}
c.Assert(manifests[0], check.DeepEquals, manifests[1])
// Then compare the rest file by file.
out := combinedOutputOfCommand(c, "diff", "-urN", "-x", "manifest.json", dir1, dir2)
c.Assert(out, check.Equals, "")
}
@@ -190,7 +206,7 @@ func (s *CopySuite) TestCopyStreaming(c *check.C) {
// Compare (copies of) the original and the copy:
assertSkopeoSucceeds(c, "", "copy", "docker://estesp/busybox:amd64", "dir:"+dir1)
assertSkopeoSucceeds(c, "", "--tls-verify=false", "copy", "atomic:localhost:5000/myns/unsigned:streaming", "dir:"+dir2)
destructiveCheckDirImagesAreEqual(c, dir1, dir2)
assertSchema1DirImagesAreEqualExceptNames(c, dir1, "estesp/busybox:amd64", dir2, "myns/unsigned:streaming")
// FIXME: Also check pushing to docker://
}
@@ -225,13 +241,13 @@ func (s *CopySuite) TestCopyOCIRoundTrip(c *check.C) {
c.Assert(out, check.Equals, "")
// For some silly reason we pass a logger to the OCI library here...
logger := log.New(os.Stderr, "", 0)
//logger := log.New(os.Stderr, "", 0)
// TODO: Verify using the upstream OCI image validator.
err = image.ValidateLayout(oci1, nil, logger)
c.Assert(err, check.IsNil)
err = image.ValidateLayout(oci2, nil, logger)
c.Assert(err, check.IsNil)
//err = image.ValidateLayout(oci1, nil, logger)
//c.Assert(err, check.IsNil)
//err = image.ValidateLayout(oci2, nil, logger)
//c.Assert(err, check.IsNil)
// Now verify that everything is identical. Currently this is true, but
// because we recompute the manifests on-the-fly this doesn't necessarily
@@ -267,34 +283,34 @@ func (s *CopySuite) TestCopySignatures(c *check.C) {
// type: signedBy
// Sign the images
assertSkopeoSucceeds(c, "", "--tls-verify=false", "copy", "--sign-by", "personal@example.com", "docker://busybox:1.23", "atomic:localhost:5000/myns/personal:personal")
assertSkopeoSucceeds(c, "", "--tls-verify=false", "copy", "--sign-by", "official@example.com", "docker://busybox:1.23.2", "atomic:localhost:5000/myns/official:official")
assertSkopeoSucceeds(c, "", "--tls-verify=false", "copy", "--sign-by", "personal@example.com", "docker://busybox:1.26", "atomic:localhost:5006/myns/personal:personal")
assertSkopeoSucceeds(c, "", "--tls-verify=false", "copy", "--sign-by", "official@example.com", "docker://busybox:1.26.1", "atomic:localhost:5006/myns/official:official")
// Verify that we can pull them
assertSkopeoSucceeds(c, "", "--tls-verify=false", "--policy", policy, "copy", "atomic:localhost:5000/myns/personal:personal", dirDest)
assertSkopeoSucceeds(c, "", "--tls-verify=false", "--policy", policy, "copy", "atomic:localhost:5000/myns/official:official", dirDest)
assertSkopeoSucceeds(c, "", "--tls-verify=false", "--policy", policy, "copy", "atomic:localhost:5006/myns/personal:personal", dirDest)
assertSkopeoSucceeds(c, "", "--tls-verify=false", "--policy", policy, "copy", "atomic:localhost:5006/myns/official:official", dirDest)
// Verify that mis-signed images are rejected
assertSkopeoSucceeds(c, "", "--tls-verify=false", "copy", "atomic:localhost:5000/myns/personal:personal", "atomic:localhost:5000/myns/official:attack")
assertSkopeoSucceeds(c, "", "--tls-verify=false", "copy", "atomic:localhost:5000/myns/official:official", "atomic:localhost:5000/myns/personal:attack")
assertSkopeoSucceeds(c, "", "--tls-verify=false", "copy", "atomic:localhost:5006/myns/personal:personal", "atomic:localhost:5006/myns/official:attack")
assertSkopeoSucceeds(c, "", "--tls-verify=false", "copy", "atomic:localhost:5006/myns/official:official", "atomic:localhost:5006/myns/personal:attack")
assertSkopeoFails(c, ".*Source image rejected: Invalid GPG signature.*",
"--tls-verify=false", "--policy", policy, "copy", "atomic:localhost:5000/myns/personal:attack", dirDest)
"--tls-verify=false", "--policy", policy, "copy", "atomic:localhost:5006/myns/personal:attack", dirDest)
assertSkopeoFails(c, ".*Source image rejected: Invalid GPG signature.*",
"--tls-verify=false", "--policy", policy, "copy", "atomic:localhost:5000/myns/official:attack", dirDest)
"--tls-verify=false", "--policy", policy, "copy", "atomic:localhost:5006/myns/official:attack", dirDest)
// Verify that signed identity is verified.
assertSkopeoSucceeds(c, "", "--tls-verify=false", "copy", "atomic:localhost:5000/myns/official:official", "atomic:localhost:5000/myns/naming:test1")
assertSkopeoFails(c, ".*Source image rejected: Signature for identity localhost:5000/myns/official:official is not accepted.*",
"--tls-verify=false", "--policy", policy, "copy", "atomic:localhost:5000/myns/naming:test1", dirDest)
assertSkopeoSucceeds(c, "", "--tls-verify=false", "copy", "atomic:localhost:5006/myns/official:official", "atomic:localhost:5006/myns/naming:test1")
assertSkopeoFails(c, ".*Source image rejected: Signature for identity localhost:5006/myns/official:official is not accepted.*",
"--tls-verify=false", "--policy", policy, "copy", "atomic:localhost:5006/myns/naming:test1", dirDest)
// signedIdentity works
assertSkopeoSucceeds(c, "", "--tls-verify=false", "copy", "atomic:localhost:5000/myns/official:official", "atomic:localhost:5000/myns/naming:naming")
assertSkopeoSucceeds(c, "", "--tls-verify=false", "--policy", policy, "copy", "atomic:localhost:5000/myns/naming:naming", dirDest)
assertSkopeoSucceeds(c, "", "--tls-verify=false", "copy", "atomic:localhost:5006/myns/official:official", "atomic:localhost:5006/myns/naming:naming")
assertSkopeoSucceeds(c, "", "--tls-verify=false", "--policy", policy, "copy", "atomic:localhost:5006/myns/naming:naming", dirDest)
// Verify that cosigning requirements are enforced
assertSkopeoSucceeds(c, "", "--tls-verify=false", "copy", "atomic:localhost:5000/myns/official:official", "atomic:localhost:5000/myns/cosigned:cosigned")
assertSkopeoSucceeds(c, "", "--tls-verify=false", "copy", "atomic:localhost:5006/myns/official:official", "atomic:localhost:5006/myns/cosigned:cosigned")
assertSkopeoFails(c, ".*Source image rejected: Invalid GPG signature.*",
"--tls-verify=false", "--policy", policy, "copy", "atomic:localhost:5000/myns/cosigned:cosigned", dirDest)
"--tls-verify=false", "--policy", policy, "copy", "atomic:localhost:5006/myns/cosigned:cosigned", dirDest)
assertSkopeoSucceeds(c, "", "--tls-verify=false", "copy", "--sign-by", "personal@example.com", "atomic:localhost:5000/myns/official:official", "atomic:localhost:5000/myns/cosigned:cosigned")
assertSkopeoSucceeds(c, "", "--tls-verify=false", "--policy", policy, "copy", "atomic:localhost:5000/myns/cosigned:cosigned", dirDest)
assertSkopeoSucceeds(c, "", "--tls-verify=false", "copy", "--sign-by", "personal@example.com", "atomic:localhost:5006/myns/official:official", "atomic:localhost:5006/myns/cosigned:cosigned")
assertSkopeoSucceeds(c, "", "--tls-verify=false", "--policy", policy, "copy", "atomic:localhost:5006/myns/cosigned:cosigned", dirDest)
}
// --policy copy for dir: sources
@@ -356,10 +372,10 @@ func (s *CopySuite) TestCopyCompression(c *check.C) {
defer os.RemoveAll(topDir)
for i, t := range []struct{ fixture, remote string }{
//{"uncompressed-image-s1", "docker://" + v2DockerRegistryURL + "/compression/compression:s1"}, // FIXME: depends on push to tag working
//{"uncompressed-image-s2", "docker://" + v2DockerRegistryURL + "/compression/compression:s2"}, // FIXME: depends on push to tag working
{"uncompressed-image-s1", "docker://" + v2DockerRegistryURL + "/compression/compression:s1"},
{"uncompressed-image-s2", "docker://" + v2DockerRegistryURL + "/compression/compression:s2"},
{"uncompressed-image-s1", "atomic:localhost:5000/myns/compression:s1"},
//{"uncompressed-image-s2", "atomic:localhost:5000/myns/compression:s2"}, // FIXME: The unresolved "MANIFEST_UNKNOWN"/"unexpected end of JSON input" failure
{"uncompressed-image-s2", "atomic:localhost:5000/myns/compression:s2"},
} {
dir := filepath.Join(topDir, fmt.Sprintf("case%d", i))
err := os.MkdirAll(dir, 0755)
@@ -515,7 +531,7 @@ func (s *CopySuite) TestCopyAtomicExtension(c *check.C) {
assertSkopeoSucceeds(c, "", "--tls-verify=false", "--policy", policy, "--registries.d", registriesDir,
"copy", "docker://localhost:5000/myns/extension:atomic", dirDest+"/dirAD")
// Both access methods result in the same data.
destructiveCheckDirImagesAreEqual(c, filepath.Join(topDir, "dirAA"), filepath.Join(topDir, "dirAD"))
assertDirImagesAreEqual(c, filepath.Join(topDir, "dirAA"), filepath.Join(topDir, "dirAD"))
// Get another image (different so that they don't share signatures, and sign it using docker://)
assertSkopeoSucceeds(c, "", "--tls-verify=false", "--registries.d", registriesDir,
@@ -528,7 +544,7 @@ func (s *CopySuite) TestCopyAtomicExtension(c *check.C) {
assertSkopeoSucceeds(c, "", "--debug", "--tls-verify=false", "--policy", policy, "--registries.d", registriesDir,
"copy", "docker://localhost:5000/myns/extension:extension", dirDest+"/dirDD")
// Both access methods result in the same data.
destructiveCheckDirImagesAreEqual(c, filepath.Join(topDir, "dirDA"), filepath.Join(topDir, "dirDD"))
assertDirImagesAreEqual(c, filepath.Join(topDir, "dirDA"), filepath.Join(topDir, "dirDD"))
}
func (s *SkopeoSuite) TestCopySrcWithAuth(c *check.C) {
@@ -556,3 +572,52 @@ func (s *CopySuite) TestCopyNoPanicOnHTTPResponseWOTLSVerifyFalse(c *check.C) {
assertSkopeoFails(c, ".*server gave HTTP response to HTTPS client.*",
"copy", ourRegistry+"foobar", "dir:test")
}
func (s *CopySuite) TestCopySchemaConversion(c *check.C) {
// Test conversion / schema autodetection both for the OpenShift embedded registry…
s.testCopySchemaConversionRegistries(c, "docker://localhost:5005/myns/schema1", "docker://localhost:5006/myns/schema2")
// … and for various docker/distribution registry versions.
s.testCopySchemaConversionRegistries(c, "docker://"+v2s1DockerRegistryURL+"/schema1", "docker://"+v2DockerRegistryURL+"/schema2")
}
func (s *CopySuite) testCopySchemaConversionRegistries(c *check.C, schema1Registry, schema2Registry string) {
topDir, err := ioutil.TempDir("", "schema-conversion")
c.Assert(err, check.IsNil)
defer os.RemoveAll(topDir)
for _, subdir := range []string{"input1", "input2", "dest2"} {
err := os.MkdirAll(filepath.Join(topDir, subdir), 0755)
c.Assert(err, check.IsNil)
}
input1Dir := filepath.Join(topDir, "input1")
input2Dir := filepath.Join(topDir, "input2")
destDir := filepath.Join(topDir, "dest2")
// Ensure we are working with a schema2 image.
// dir: accepts any manifest format, i.e. this makes …/input2 a schema2 source which cannot be asked to produce schema1 like ordinary docker: registries can.
assertSkopeoSucceeds(c, "", "copy", "docker://busybox", "dir:"+input2Dir)
verifyManifestMIMEType(c, input2Dir, manifest.DockerV2Schema2MediaType)
// 2→2 (the "f2t2" in tag means "from 2 to 2")
assertSkopeoSucceeds(c, "", "copy", "--dest-tls-verify=false", "dir:"+input2Dir, schema2Registry+":f2t2")
assertSkopeoSucceeds(c, "", "copy", "--src-tls-verify=false", schema2Registry+":f2t2", "dir:"+destDir)
verifyManifestMIMEType(c, destDir, manifest.DockerV2Schema2MediaType)
// 2→1; we will use the result as a schema1 image for further tests.
assertSkopeoSucceeds(c, "", "copy", "--dest-tls-verify=false", "dir:"+input2Dir, schema1Registry+":f2t1")
assertSkopeoSucceeds(c, "", "copy", "--src-tls-verify=false", schema1Registry+":f2t1", "dir:"+input1Dir)
verifyManifestMIMEType(c, input1Dir, manifest.DockerV2Schema1SignedMediaType)
// 1→1
assertSkopeoSucceeds(c, "", "copy", "--dest-tls-verify=false", "dir:"+input1Dir, schema1Registry+":f1t1")
assertSkopeoSucceeds(c, "", "copy", "--src-tls-verify=false", schema1Registry+":f1t1", "dir:"+destDir)
verifyManifestMIMEType(c, destDir, manifest.DockerV2Schema1SignedMediaType)
// 1→2: image stays unmodified schema1
assertSkopeoSucceeds(c, "", "copy", "--dest-tls-verify=false", "dir:"+input1Dir, schema2Registry+":f1t2")
assertSkopeoSucceeds(c, "", "copy", "--src-tls-verify=false", schema2Registry+":f1t2", "dir:"+destDir)
verifyManifestMIMEType(c, destDir, manifest.DockerV2Schema1SignedMediaType)
}
// Verify manifest in a dir: image at dir is expectedMIMEType.
func verifyManifestMIMEType(c *check.C, dir string, expectedMIMEType string) {
manifestBlob, err := ioutil.ReadFile(filepath.Join(dir, "manifest.json"))
c.Assert(err, check.IsNil)
mimeType := manifest.GuessMIMEType(manifestBlob)
c.Assert(mimeType, check.Equals, expectedMIMEType)
}

View File

@@ -45,46 +45,46 @@
]
},
"atomic": {
"localhost:5000/myns/personal": [
"localhost:5006/myns/personal": [
{
"type": "signedBy",
"keyType": "GPGKeys",
"keyPath": "@keydir@/personal-pubkey.gpg"
}
],
"localhost:5000/myns/official": [
"localhost:5006/myns/official": [
{
"type": "signedBy",
"keyType": "GPGKeys",
"keyPath": "@keydir@/official-pubkey.gpg"
}
],
"localhost:5000/myns/naming:test1": [
"localhost:5006/myns/naming:test1": [
{
"type": "signedBy",
"keyType": "GPGKeys",
"keyPath": "@keydir@/official-pubkey.gpg"
}
],
"localhost:5000/myns/naming:naming": [
"localhost:5006/myns/naming:naming": [
{
"type": "signedBy",
"keyType": "GPGKeys",
"keyPath": "@keydir@/official-pubkey.gpg",
"signedIdentity": {
"type": "exactRepository",
"dockerRepository": "localhost:5000/myns/official"
"dockerRepository": "localhost:5006/myns/official"
}
}
],
"localhost:5000/myns/cosigned:cosigned": [
"localhost:5006/myns/cosigned:cosigned": [
{
"type": "signedBy",
"keyType": "GPGKeys",
"keyPath": "@keydir@/official-pubkey.gpg",
"signedIdentity": {
"type": "exactRepository",
"dockerRepository": "localhost:5000/myns/official"
"dockerRepository": "localhost:5006/myns/official"
}
},
{

View File

@@ -21,35 +21,34 @@ var adminKUBECONFIG = map[string]string{
// openshiftCluster is an OpenShift API master and integrated registry
// running on localhost.
type openshiftCluster struct {
c *check.C
workingDir string
master *exec.Cmd
registry *exec.Cmd
processes []*exec.Cmd // Processes to terminate on teardown; append to the end, terminate from end to the start.
}
// startOpenshiftCluster creates a new openshiftCluster.
// WARNING: This affects state in users' home directory! Only run
// in isolated test environment.
func startOpenshiftCluster(c *check.C) *openshiftCluster {
cluster := &openshiftCluster{c: c}
cluster := &openshiftCluster{}
dir, err := ioutil.TempDir("", "openshift-cluster")
cluster.c.Assert(err, check.IsNil)
c.Assert(err, check.IsNil)
cluster.workingDir = dir
cluster.startMaster()
cluster.startRegistry()
cluster.ocLoginToProject()
cluster.dockerLogin()
cluster.relaxImageSignerPermissions()
cluster.startMaster(c)
cluster.prepareRegistryConfig(c)
cluster.startRegistry(c)
cluster.ocLoginToProject(c)
cluster.dockerLogin(c)
cluster.relaxImageSignerPermissions(c)
return cluster
}
// clusterCmd creates an exec.Cmd in c.workingDir with current environment modified by environment
func (c *openshiftCluster) clusterCmd(env map[string]string, name string, args ...string) *exec.Cmd {
// clusterCmd creates an exec.Cmd in cluster.workingDir with current environment modified by environment
func (cluster *openshiftCluster) clusterCmd(env map[string]string, name string, args ...string) *exec.Cmd {
cmd := exec.Command(name, args...)
cmd.Dir = c.workingDir
cmd.Dir = cluster.workingDir
cmd.Env = os.Environ()
for key, value := range env {
cmd.Env = modifyEnviron(cmd.Env, key, value)
@@ -58,19 +57,20 @@ func (c *openshiftCluster) clusterCmd(env map[string]string, name string, args .
}
// startMaster starts the OpenShift master (etcd+API server) and waits for it to be ready, or terminates on failure.
func (c *openshiftCluster) startMaster() {
c.master = c.clusterCmd(nil, "openshift", "start", "master")
stdout, err := c.master.StdoutPipe()
func (cluster *openshiftCluster) startMaster(c *check.C) {
cmd := cluster.clusterCmd(nil, "openshift", "start", "master")
cluster.processes = append(cluster.processes, cmd)
stdout, err := cmd.StdoutPipe()
// Send both to the same pipe. This might cause the two streams to be mixed up,
// but logging actually goes only to stderr - this primarily ensures we log any
// unexpected output to stdout.
c.master.Stderr = c.master.Stdout
err = c.master.Start()
c.c.Assert(err, check.IsNil)
cmd.Stderr = cmd.Stdout
err = cmd.Start()
c.Assert(err, check.IsNil)
portOpen, terminatePortCheck := newPortChecker(c.c, 8443)
portOpen, terminatePortCheck := newPortChecker(c, 8443)
defer func() {
c.c.Logf("Terminating port check")
c.Logf("Terminating port check")
terminatePortCheck <- true
}()
@@ -78,12 +78,12 @@ func (c *openshiftCluster) startMaster() {
logCheckFound := make(chan bool)
go func() {
defer func() {
c.c.Logf("Log checker exiting")
c.Logf("Log checker exiting")
}()
scanner := bufio.NewScanner(stdout)
for scanner.Scan() {
line := scanner.Text()
c.c.Logf("Log line: %s", line)
c.Logf("Log line: %s", line)
if strings.Contains(line, "Started Origin Controllers") {
logCheckFound <- true
return
@@ -92,7 +92,7 @@ func (c *openshiftCluster) startMaster() {
// Note: we can block before we get here.
select {
case <-terminateLogCheck:
c.c.Logf("terminated")
c.Logf("terminated")
return
default:
// Do not block here and read the next line.
@@ -101,31 +101,31 @@ func (c *openshiftCluster) startMaster() {
logCheckFound <- false
}()
defer func() {
c.c.Logf("Terminating log check")
c.Logf("Terminating log check")
terminateLogCheck <- true
}()
gotPortCheck := false
gotLogCheck := false
for !gotPortCheck || !gotLogCheck {
c.c.Logf("Waiting for master")
c.Logf("Waiting for master")
select {
case <-portOpen:
c.c.Logf("port check done")
c.Logf("port check done")
gotPortCheck = true
case found := <-logCheckFound:
c.c.Logf("log check done, found: %t", found)
c.Logf("log check done, found: %t", found)
if !found {
c.c.Fatal("log check done, success message not found")
c.Fatal("log check done, success message not found")
}
gotLogCheck = true
}
}
c.c.Logf("OK, master started!")
c.Logf("OK, master started!")
}
// startRegistry starts the OpenShift registry and waits for it to be ready, or terminates on failure.
func (c *openshiftCluster) startRegistry() {
// prepareRegistryConfig creates a registry service account and a related k8s client configuration in ${cluster.workingDir}/openshift.local.registry.
func (cluster *openshiftCluster) prepareRegistryConfig(c *check.C) {
// This partially mimics the objects created by (oadm registry), except that we run the
// server directly as an ordinary process instead of a pod with an implicitly attached service account.
saJSON := `{
@@ -135,90 +135,118 @@ func (c *openshiftCluster) startRegistry() {
"name": "registry"
}
}`
cmd := c.clusterCmd(adminKUBECONFIG, "oc", "create", "-f", "-")
runExecCmdWithInput(c.c, cmd, saJSON)
cmd := cluster.clusterCmd(adminKUBECONFIG, "oc", "create", "-f", "-")
runExecCmdWithInput(c, cmd, saJSON)
cmd = c.clusterCmd(adminKUBECONFIG, "oadm", "policy", "add-cluster-role-to-user", "system:registry", "-z", "registry")
cmd = cluster.clusterCmd(adminKUBECONFIG, "oadm", "policy", "add-cluster-role-to-user", "system:registry", "-z", "registry")
out, err := cmd.CombinedOutput()
c.c.Assert(err, check.IsNil, check.Commentf("%s", string(out)))
c.c.Assert(string(out), check.Equals, "cluster role \"system:registry\" added: \"registry\"\n")
c.Assert(err, check.IsNil, check.Commentf("%s", string(out)))
c.Assert(string(out), check.Equals, "cluster role \"system:registry\" added: \"registry\"\n")
cmd = c.clusterCmd(adminKUBECONFIG, "oadm", "create-api-client-config", "--client-dir=openshift.local.registry", "--basename=openshift-registry", "--user=system:serviceaccount:default:registry")
cmd = cluster.clusterCmd(adminKUBECONFIG, "oadm", "create-api-client-config", "--client-dir=openshift.local.registry", "--basename=openshift-registry", "--user=system:serviceaccount:default:registry")
out, err = cmd.CombinedOutput()
c.c.Assert(err, check.IsNil, check.Commentf("%s", string(out)))
c.c.Assert(string(out), check.Equals, "")
c.Assert(err, check.IsNil, check.Commentf("%s", string(out)))
c.Assert(string(out), check.Equals, "")
}
//KUBECONFIG=openshift.local.registry/openshift-registry.kubeconfig DOCKER_REGISTRY_URL=127.0.0.1:5000
c.registry = c.clusterCmd(map[string]string{
// startRegistryProcess starts the OpenShift registry with configPath on port, waits for it to be ready, and returns the process object, or terminates on failure.
func (cluster *openshiftCluster) startRegistryProcess(c *check.C, port int, configPath string) *exec.Cmd {
cmd := cluster.clusterCmd(map[string]string{
"KUBECONFIG": "openshift.local.registry/openshift-registry.kubeconfig",
"DOCKER_REGISTRY_URL": "127.0.0.1:5000",
}, "dockerregistry", "/atomic-registry-config.yml")
consumeAndLogOutputs(c.c, "registry", c.registry)
err = c.registry.Start()
c.c.Assert(err, check.IsNil)
"DOCKER_REGISTRY_URL": fmt.Sprintf("127.0.0.1:%d", port),
}, "dockerregistry", configPath)
consumeAndLogOutputs(c, fmt.Sprintf("registry-%d", port), cmd)
err := cmd.Start()
c.Assert(err, check.IsNil)
portOpen, terminatePortCheck := newPortChecker(c.c, 5000)
portOpen, terminatePortCheck := newPortChecker(c, port)
defer func() {
terminatePortCheck <- true
}()
c.c.Logf("Waiting for registry to start")
c.Logf("Waiting for registry to start")
<-portOpen
c.c.Logf("OK, Registry port open")
c.Logf("OK, Registry port open")
return cmd
}
// startRegistry starts the OpenShift registry and waits for it to be ready, or terminates on failure.
func (cluster *openshiftCluster) startRegistry(c *check.C) {
// Our “primary” registry
cluster.processes = append(cluster.processes, cluster.startRegistryProcess(c, 5000, "/atomic-registry-config.yml"))
// A registry configured with acceptschema2:false
schema1Config := fileFromFixture(c, "/atomic-registry-config.yml", map[string]string{
"addr: :5000": "addr: :5005",
"rootdirectory: /registry": "rootdirectory: /registry-schema1",
// The default configuration currently already contains acceptschema2: false
})
// Make sure the configuration contains "acceptschema2: false", because eventually it will be enabled upstream and this function will need to be updated.
configContents, err := ioutil.ReadFile(schema1Config)
c.Assert(err, check.IsNil)
c.Assert(string(configContents), check.Matches, "(?s).*acceptschema2: false.*")
cluster.processes = append(cluster.processes, cluster.startRegistryProcess(c, 5005, schema1Config))
// A registry configured with acceptschema2:true
schema2Config := fileFromFixture(c, "/atomic-registry-config.yml", map[string]string{
"addr: :5000": "addr: :5006",
"rootdirectory: /registry": "rootdirectory: /registry-schema2",
"acceptschema2: false": "acceptschema2: true",
})
cluster.processes = append(cluster.processes, cluster.startRegistryProcess(c, 5006, schema2Config))
}
// ocLogin runs (oc login) and (oc new-project) on the cluster, or terminates on failure.
func (c *openshiftCluster) ocLoginToProject() {
c.c.Logf("oc login")
cmd := c.clusterCmd(nil, "oc", "login", "--certificate-authority=openshift.local.config/master/ca.crt", "-u", "myuser", "-p", "mypw", "https://localhost:8443")
func (cluster *openshiftCluster) ocLoginToProject(c *check.C) {
c.Logf("oc login")
cmd := cluster.clusterCmd(nil, "oc", "login", "--certificate-authority=openshift.local.config/master/ca.crt", "-u", "myuser", "-p", "mypw", "https://localhost:8443")
out, err := cmd.CombinedOutput()
c.c.Assert(err, check.IsNil, check.Commentf("%s", out))
c.c.Assert(string(out), check.Matches, "(?s).*Login successful.*") // (?s) : '.' will also match newlines
c.Assert(err, check.IsNil, check.Commentf("%s", out))
c.Assert(string(out), check.Matches, "(?s).*Login successful.*") // (?s) : '.' will also match newlines
outString := combinedOutputOfCommand(c.c, "oc", "new-project", "myns")
c.c.Assert(outString, check.Matches, `(?s).*Now using project "myns".*`) // (?s) : '.' will also match newlines
outString := combinedOutputOfCommand(c, "oc", "new-project", "myns")
c.Assert(outString, check.Matches, `(?s).*Now using project "myns".*`) // (?s) : '.' will also match newlines
}
// dockerLogin simulates (docker login) to the cluster, or terminates on failure.
// We do not run (docker login) directly, because that requires a running daemon and a docker package.
func (c *openshiftCluster) dockerLogin() {
func (cluster *openshiftCluster) dockerLogin(c *check.C) {
dockerDir := filepath.Join(homedir.Get(), ".docker")
err := os.Mkdir(dockerDir, 0700)
c.c.Assert(err, check.IsNil)
c.Assert(err, check.IsNil)
out := combinedOutputOfCommand(c.c, "oc", "config", "view", "-o", "json", "-o", "jsonpath={.users[*].user.token}")
c.c.Logf("oc config value: %s", out)
configJSON := fmt.Sprintf(`{
"auths": {
"localhost:5000": {
out := combinedOutputOfCommand(c, "oc", "config", "view", "-o", "json", "-o", "jsonpath={.users[*].user.token}")
c.Logf("oc config value: %s", out)
authValue := base64.StdEncoding.EncodeToString([]byte("unused:" + out))
auths := []string{}
for _, port := range []int{5000, 5005, 5006} {
auths = append(auths, fmt.Sprintf(`"localhost:%d": {
"auth": "%s",
"email": "unused"
}
}
}`, base64.StdEncoding.EncodeToString([]byte("unused:"+out)))
}`, port, authValue))
}
configJSON := `{"auths": {` + strings.Join(auths, ",") + `}}`
err = ioutil.WriteFile(filepath.Join(dockerDir, "config.json"), []byte(configJSON), 0600)
c.c.Assert(err, check.IsNil)
c.Assert(err, check.IsNil)
}
// relaxImageSignerPermissions opens up the system:image-signer permissions so that
// anyone can work with signatures
// FIXME: This also allows anyone to DoS anyone else; this design is really not all
// that workable, but it is the best we can do for now.
func (c *openshiftCluster) relaxImageSignerPermissions() {
cmd := c.clusterCmd(adminKUBECONFIG, "oadm", "policy", "add-cluster-role-to-group", "system:image-signer", "system:authenticated")
func (cluster *openshiftCluster) relaxImageSignerPermissions(c *check.C) {
cmd := cluster.clusterCmd(adminKUBECONFIG, "oadm", "policy", "add-cluster-role-to-group", "system:image-signer", "system:authenticated")
out, err := cmd.CombinedOutput()
-c.c.Assert(err, check.IsNil, check.Commentf("%s", string(out)))
-c.c.Assert(string(out), check.Equals, "cluster role \"system:image-signer\" added: \"system:authenticated\"\n")
+c.Assert(err, check.IsNil, check.Commentf("%s", string(out)))
+c.Assert(string(out), check.Equals, "cluster role \"system:image-signer\" added: \"system:authenticated\"\n")
}
// tearDown stops the cluster services and deletes (only some!) of the state.
-func (c *openshiftCluster) tearDown() {
-if c.registry != nil && c.registry.Process != nil {
-c.registry.Process.Kill()
+func (cluster *openshiftCluster) tearDown(c *check.C) {
+for i := len(cluster.processes) - 1; i >= 0; i-- {
+cluster.processes[i].Process.Kill()
}
-if c.master != nil && c.master.Process != nil {
-c.master.Process.Kill()
-}
-if c.workingDir != "" {
-os.RemoveAll(c.workingDir)
+if cluster.workingDir != "" {
+os.RemoveAll(cluster.workingDir)
}
}


@@ -13,23 +13,10 @@ import (
)
const (
-binaryV1 = "docker-registry"
binaryV2 = "registry-v2"
binaryV2Schema1 = "registry-v2-schema1"
)
-type testRegistryV1 struct {
-cmd *exec.Cmd
-url string
-dir string
-}
-func setupRegistryV1At(c *check.C, url string, auth bool) *testRegistryV1 {
-return &testRegistryV1{
-url: url,
-}
-}
type testRegistryV2 struct {
cmd *exec.Cmd
url string
@@ -44,7 +31,7 @@ func setupRegistryV2At(c *check.C, url string, auth, schema1 bool) *testRegistry
c.Assert(err, check.IsNil)
// Wait for registry to be ready to serve requests.
-for i := 0; i != 5; i++ {
+for i := 0; i != 50; i++ {
if err = reg.Ping(); err == nil {
break
}
@@ -109,6 +96,7 @@ http:
}
cmd := exec.Command(binary, confPath)
+consumeAndLogOutputs(c, fmt.Sprintf("registry-%s", url), cmd)
if err := cmd.Start(); err != nil {
os.RemoveAll(tmp)
if os.IsNotExist(err) {


@@ -36,15 +36,8 @@ func findFingerprint(lineBytes []byte) (string, error) {
return "", errors.New("No fingerprint found")
}
-func (s *SigningSuite) SetUpTest(c *check.C) {
-mech, _, err := signature.NewEphemeralGPGSigningMechanism([]byte{})
-c.Assert(err, check.IsNil)
-defer mech.Close()
-if err := mech.SupportsSigning(); err != nil { // FIXME? Test that verification and policy enforcement works, using signatures from fixtures
-c.Skip(fmt.Sprintf("Signing not supported: %v", err))
-}
-_, err = exec.LookPath(skopeoBinary)
+func (s *SigningSuite) SetUpSuite(c *check.C) {
+_, err := exec.LookPath(skopeoBinary)
c.Assert(err, check.IsNil)
s.gpgHome, err = ioutil.TempDir("", "skopeo-gpg")
@@ -59,7 +52,7 @@ func (s *SigningSuite) SetUpTest(c *check.C) {
c.Assert(err, check.IsNil)
}
-func (s *SigningSuite) TearDownTest(c *check.C) {
+func (s *SigningSuite) TearDownSuite(c *check.C) {
if s.gpgHome != "" {
err := os.RemoveAll(s.gpgHome)
c.Assert(err, check.IsNil)
@@ -70,6 +63,13 @@ func (s *SigningSuite) TearDownTest(c *check.C) {
}
func (s *SigningSuite) TestSignVerifySmoke(c *check.C) {
+mech, _, err := signature.NewEphemeralGPGSigningMechanism([]byte{})
+c.Assert(err, check.IsNil)
+defer mech.Close()
+if err := mech.SupportsSigning(); err != nil { // FIXME? Test that verification and policy enforcement works, using signatures from fixtures
+c.Skip(fmt.Sprintf("Signing not supported: %v", err))
+}
manifestPath := "fixtures/image.manifest.json"
dockerReference := "testing/smoketest"


@@ -1,7 +1,9 @@
package main
import (
"bytes"
"io"
"io/ioutil"
"net"
"os/exec"
"strings"
@@ -150,3 +152,25 @@ func modifyEnviron(env []string, name, value string) []string {
}
return append(res, prefix+value)
}
+// fileFromFixture applies edits to inputPath and returns a path to the temporary file.
+// Callers should defer os.Remove(the_returned_path)
+func fileFromFixture(c *check.C, inputPath string, edits map[string]string) string {
+contents, err := ioutil.ReadFile(inputPath)
+c.Assert(err, check.IsNil)
+for template, value := range edits {
+updated := bytes.Replace(contents, []byte(template), []byte(value), -1)
+c.Assert(bytes.Equal(updated, contents), check.Equals, false, check.Commentf("Replacing %s in %#v failed", template, string(contents))) // Verify that the template has matched something and we are not silently ignoring it.
+contents = updated
+}
+file, err := ioutil.TempFile("", "policy.json")
+c.Assert(err, check.IsNil)
+path := file.Name()
+_, err = file.Write(contents)
+c.Assert(err, check.IsNil)
+err = file.Close()
+c.Assert(err, check.IsNil)
+return path
+}
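A hypothetical caller, to show the intended pattern — the test name and the `@keydir@` placeholder are invented here, not taken from the suite:

```go
func (s *SigningSuite) TestWithEditedPolicy(c *check.C) {
	// Substitute a placeholder in the fixture, then clean up the temp file.
	policyPath := fileFromFixture(c, "fixtures/policy.json", map[string]string{"@keydir@": s.gpgHome})
	defer os.Remove(policyPath)
	// ... run skopeo with --policy=policyPath ...
}
```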


@@ -23,7 +23,7 @@ golang.org/x/text master
github.com/docker/distribution master
github.com/docker/libtrust master
github.com/opencontainers/runc master
-github.com/opencontainers/image-spec v1.0.0-rc4
+github.com/opencontainers/image-spec v1.0.0-rc6
# -- start OCI image validation requirements.
github.com/opencontainers/runtime-spec v1.0.0-rc4
github.com/opencontainers/image-tools v0.1.0
@@ -31,6 +31,7 @@ github.com/xeipuuv/gojsonschema master
github.com/xeipuuv/gojsonreference master
github.com/xeipuuv/gojsonpointer master
go4.org master https://github.com/camlistore/go4
+github.com/ostreedev/ostree-go 61532f383f1f48e5c27080b0b9c8b022c3706a97
# -- end OCI image validation requirements
github.com/mtrmac/gpgme master
# openshift/origin' k8s dependencies as of OpenShift v1.1.5
@@ -43,3 +44,6 @@ github.com/imdario/mergo 6633656539c1639d9d78127b7d47c622b5d7b6dc
github.com/mistifyio/go-zfs 22c9b32c84eb0d0c6f4043b6e90fc94073de92fa
github.com/pborman/uuid v1.0
github.com/opencontainers/selinux master
+golang.org/x/sys master
+github.com/tchap/go-patricia v2.2.6
+github.com/BurntSushi/toml master

vendor/github.com/BurntSushi/toml/COPYING generated vendored Normal file

@@ -0,0 +1,14 @@
DO WHAT THE FUCK YOU WANT TO PUBLIC LICENSE
Version 2, December 2004
Copyright (C) 2004 Sam Hocevar <sam@hocevar.net>
Everyone is permitted to copy and distribute verbatim or modified
copies of this license document, and changing it is allowed as long
as the name is changed.
DO WHAT THE FUCK YOU WANT TO PUBLIC LICENSE
TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION
0. You just DO WHAT THE FUCK YOU WANT TO.

vendor/github.com/BurntSushi/toml/README.md generated vendored Normal file

@@ -0,0 +1,218 @@
## TOML parser and encoder for Go with reflection
TOML stands for Tom's Obvious, Minimal Language. This Go package provides a
reflection interface similar to Go's standard library `json` and `xml`
packages. This package also supports the `encoding.TextUnmarshaler` and
`encoding.TextMarshaler` interfaces so that you can define custom data
representations. (There is an example of this below.)
Spec: https://github.com/toml-lang/toml
Compatible with TOML version
[v0.4.0](https://github.com/toml-lang/toml/blob/master/versions/en/toml-v0.4.0.md)
Documentation: https://godoc.org/github.com/BurntSushi/toml
Installation:
```bash
go get github.com/BurntSushi/toml
```
Try the toml validator:
```bash
go get github.com/BurntSushi/toml/cmd/tomlv
tomlv some-toml-file.toml
```
[![Build Status](https://travis-ci.org/BurntSushi/toml.svg?branch=master)](https://travis-ci.org/BurntSushi/toml) [![GoDoc](https://godoc.org/github.com/BurntSushi/toml?status.svg)](https://godoc.org/github.com/BurntSushi/toml)
### Testing
This package passes all tests in
[toml-test](https://github.com/BurntSushi/toml-test) for both the decoder
and the encoder.
### Examples
This package works similarly to how the Go standard library handles `XML`
and `JSON`. Namely, data is loaded into Go values via reflection.
For the simplest example, consider some TOML file as just a list of keys
and values:
```toml
Age = 25
Cats = [ "Cauchy", "Plato" ]
Pi = 3.14
Perfection = [ 6, 28, 496, 8128 ]
DOB = 1987-07-05T05:45:00Z
```
Which could be defined in Go as:
```go
type Config struct {
Age int
Cats []string
Pi float64
Perfection []int
DOB time.Time // requires `import time`
}
```
And then decoded with:
```go
var conf Config
if _, err := toml.Decode(tomlData, &conf); err != nil {
// handle error
}
```
You can also use struct tags if your struct field name doesn't map to a TOML
key value directly:
```toml
some_key_NAME = "wat"
```
```go
type TOML struct {
ObscureKey string `toml:"some_key_NAME"`
}
```
### Using the `encoding.TextUnmarshaler` interface
Here's an example that automatically parses duration strings into
`time.Duration` values:
```toml
[[song]]
name = "Thunder Road"
duration = "4m49s"
[[song]]
name = "Stairway to Heaven"
duration = "8m03s"
```
Which can be decoded with:
```go
type song struct {
Name string
Duration duration
}
type songs struct {
Song []song
}
var favorites songs
if _, err := toml.Decode(blob, &favorites); err != nil {
log.Fatal(err)
}
for _, s := range favorites.Song {
fmt.Printf("%s (%s)\n", s.Name, s.Duration)
}
```
And you'll also need a `duration` type that satisfies the
`encoding.TextUnmarshaler` interface:
```go
type duration struct {
time.Duration
}
func (d *duration) UnmarshalText(text []byte) error {
var err error
d.Duration, err = time.ParseDuration(string(text))
return err
}
```
### More complex usage
Here's an example of how to load the example from the official spec page:
```toml
# This is a TOML document. Boom.
title = "TOML Example"
[owner]
name = "Tom Preston-Werner"
organization = "GitHub"
bio = "GitHub Cofounder & CEO\nLikes tater tots and beer."
dob = 1979-05-27T07:32:00Z # First class dates? Why not?
[database]
server = "192.168.1.1"
ports = [ 8001, 8001, 8002 ]
connection_max = 5000
enabled = true
[servers]
# You can indent as you please. Tabs or spaces. TOML don't care.
[servers.alpha]
ip = "10.0.0.1"
dc = "eqdc10"
[servers.beta]
ip = "10.0.0.2"
dc = "eqdc10"
[clients]
data = [ ["gamma", "delta"], [1, 2] ] # just an update to make sure parsers support it
# Line breaks are OK when inside arrays
hosts = [
"alpha",
"omega"
]
```
And the corresponding Go types are:
```go
type tomlConfig struct {
Title string
Owner ownerInfo
DB database `toml:"database"`
Servers map[string]server
Clients clients
}
type ownerInfo struct {
Name string
Org string `toml:"organization"`
Bio string
DOB time.Time
}
type database struct {
Server string
Ports []int
ConnMax int `toml:"connection_max"`
Enabled bool
}
type server struct {
IP string
DC string
}
type clients struct {
Data [][]interface{}
Hosts []string
}
```
Note that a case insensitive match will be tried if an exact match can't be
found.
A working example of the above can be found in `_examples/example.{go,toml}`.

vendor/github.com/BurntSushi/toml/decode.go generated vendored Normal file

@@ -0,0 +1,509 @@
package toml
import (
"fmt"
"io"
"io/ioutil"
"math"
"reflect"
"strings"
"time"
)
func e(format string, args ...interface{}) error {
return fmt.Errorf("toml: "+format, args...)
}
// Unmarshaler is the interface implemented by objects that can unmarshal a
// TOML description of themselves.
type Unmarshaler interface {
UnmarshalTOML(interface{}) error
}
// Unmarshal decodes the contents of `p` in TOML format into a pointer `v`.
func Unmarshal(p []byte, v interface{}) error {
_, err := Decode(string(p), v)
return err
}
// Primitive is a TOML value that hasn't been decoded into a Go value.
// When using the various `Decode*` functions, the type `Primitive` may
// be given to any value, and its decoding will be delayed.
//
// A `Primitive` value can be decoded using the `PrimitiveDecode` function.
//
// The underlying representation of a `Primitive` value is subject to change.
// Do not rely on it.
//
// N.B. Primitive values are still parsed, so using them will only avoid
// the overhead of reflection. They can be useful when you don't know the
// exact type of TOML data until run time.
type Primitive struct {
undecoded interface{}
context Key
}
// DEPRECATED!
//
// Use MetaData.PrimitiveDecode instead.
func PrimitiveDecode(primValue Primitive, v interface{}) error {
md := MetaData{decoded: make(map[string]bool)}
return md.unify(primValue.undecoded, rvalue(v))
}
// PrimitiveDecode is just like the other `Decode*` functions, except it
// decodes a TOML value that has already been parsed. Valid primitive values
// can *only* be obtained from values filled by the decoder functions,
// including this method. (i.e., `v` may contain more `Primitive`
// values.)
//
// Meta data for primitive values is included in the meta data returned by
// the `Decode*` functions with one exception: keys returned by the Undecoded
// method will only reflect keys that were decoded. Namely, any keys hidden
// behind a Primitive will be considered undecoded. Executing this method will
// update the undecoded keys in the meta data. (See the example.)
func (md *MetaData) PrimitiveDecode(primValue Primitive, v interface{}) error {
md.context = primValue.context
defer func() { md.context = nil }()
return md.unify(primValue.undecoded, rvalue(v))
}
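A usage sketch for delayed decoding — the document and field names below are invented; the point is choosing the concrete Go type only after inspecting another key:

```go
package main

import (
	"fmt"
	"log"

	"github.com/BurntSushi/toml"
)

const blob = `
kind = "server"

[config]
addr = ":8080"
`

func main() {
	var doc struct {
		Kind   string
		Config toml.Primitive // left undecoded for now
	}
	md, err := toml.Decode(blob, &doc)
	if err != nil {
		log.Fatal(err)
	}
	if doc.Kind == "server" {
		var cfg struct{ Addr string }
		// Decode the delayed section once the concrete type is known.
		if err := md.PrimitiveDecode(doc.Config, &cfg); err != nil {
			log.Fatal(err)
		}
		fmt.Println(cfg.Addr) // ":8080"
	}
}
```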
// Decode will decode the contents of `data` in TOML format into a pointer
// `v`.
//
// TOML hashes correspond to Go structs or maps. (Dealer's choice. They can be
// used interchangeably.)
//
// TOML arrays of tables correspond to either a slice of structs or a slice
// of maps.
//
// TOML datetimes correspond to Go `time.Time` values.
//
// All other TOML types (float, string, int, bool and array) correspond
// to the obvious Go types.
//
// An exception to the above rules is if a type implements the
// encoding.TextUnmarshaler interface. In this case, any primitive TOML value
// (floats, strings, integers, booleans and datetimes) will be converted to
// a byte string and given to the value's UnmarshalText method. See the
// Unmarshaler example for a demonstration with time duration strings.
//
// Key mapping
//
// TOML keys can map to either keys in a Go map or field names in a Go
// struct. The special `toml` struct tag may be used to map TOML keys to
// struct fields that don't match the key name exactly. (See the example.)
// A case insensitive match to struct names will be tried if an exact match
// can't be found.
//
// The mapping between TOML values and Go values is loose. That is, there
// may exist TOML values that cannot be placed into your representation, and
// there may be parts of your representation that do not correspond to
// TOML values. This loose mapping can be made stricter by using the IsDefined
// and/or Undecoded methods on the MetaData returned.
//
// This decoder will not handle cyclic types. If a cyclic type is passed,
// `Decode` will not terminate.
func Decode(data string, v interface{}) (MetaData, error) {
rv := reflect.ValueOf(v)
if rv.Kind() != reflect.Ptr {
return MetaData{}, e("Decode of non-pointer %s", reflect.TypeOf(v))
}
if rv.IsNil() {
return MetaData{}, e("Decode of nil %s", reflect.TypeOf(v))
}
p, err := parse(data)
if err != nil {
return MetaData{}, err
}
md := MetaData{
p.mapping, p.types, p.ordered,
make(map[string]bool, len(p.ordered)), nil,
}
return md, md.unify(p.mapping, indirect(rv))
}
// DecodeFile is just like Decode, except it will automatically read the
// contents of the file at `fpath` and decode it for you.
func DecodeFile(fpath string, v interface{}) (MetaData, error) {
bs, err := ioutil.ReadFile(fpath)
if err != nil {
return MetaData{}, err
}
return Decode(string(bs), v)
}
// DecodeReader is just like Decode, except it will consume all bytes
// from the reader and decode it for you.
func DecodeReader(r io.Reader, v interface{}) (MetaData, error) {
bs, err := ioutil.ReadAll(r)
if err != nil {
return MetaData{}, err
}
return Decode(string(bs), v)
}
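A quick sketch of the three entry points — they fill the same destination and differ only in where the TOML text comes from:

```go
package main

import (
	"fmt"
	"log"
	"strings"

	"github.com/BurntSushi/toml"
)

type config struct {
	Name string
	Port int
}

func main() {
	src := "name = \"demo\"\nport = 8080\n"

	var a, b config
	if _, err := toml.Decode(src, &a); err != nil { // from a string
		log.Fatal(err)
	}
	if _, err := toml.DecodeReader(strings.NewReader(src), &b); err != nil { // from any io.Reader
		log.Fatal(err)
	}
	// toml.DecodeFile("app.toml", &conf) works the same way for a file on disk.
	fmt.Println(a.Name, b.Port) // demo 8080
}
```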
// unify performs a sort of type unification based on the structure of `rv`,
// which is the client representation.
//
// Any type mismatch produces an error. Finding a type that we don't know
// how to handle produces an unsupported type error.
func (md *MetaData) unify(data interface{}, rv reflect.Value) error {
// Special case. Look for a `Primitive` value.
if rv.Type() == reflect.TypeOf((*Primitive)(nil)).Elem() {
// Save the undecoded data and the key context into the primitive
// value.
context := make(Key, len(md.context))
copy(context, md.context)
rv.Set(reflect.ValueOf(Primitive{
undecoded: data,
context: context,
}))
return nil
}
// Special case. Unmarshaler Interface support.
if rv.CanAddr() {
if v, ok := rv.Addr().Interface().(Unmarshaler); ok {
return v.UnmarshalTOML(data)
}
}
// Special case. Handle time.Time values specifically.
// TODO: Remove this code when we decide to drop support for Go 1.1.
// This isn't necessary in Go 1.2 because time.Time satisfies the encoding
// interfaces.
if rv.Type().AssignableTo(rvalue(time.Time{}).Type()) {
return md.unifyDatetime(data, rv)
}
// Special case. Look for a value satisfying the TextUnmarshaler interface.
if v, ok := rv.Interface().(TextUnmarshaler); ok {
return md.unifyText(data, v)
}
// BUG(burntsushi)
// The behavior here is incorrect whenever a Go type satisfies the
// encoding.TextUnmarshaler interface but also corresponds to a TOML
// hash or array. In particular, the unmarshaler should only be applied
// to primitive TOML values. But at this point, it will be applied to
// all kinds of values and produce an incorrect error whenever those values
// are hashes or arrays (including arrays of tables).
k := rv.Kind()
// laziness
if k >= reflect.Int && k <= reflect.Uint64 {
return md.unifyInt(data, rv)
}
switch k {
case reflect.Ptr:
elem := reflect.New(rv.Type().Elem())
err := md.unify(data, reflect.Indirect(elem))
if err != nil {
return err
}
rv.Set(elem)
return nil
case reflect.Struct:
return md.unifyStruct(data, rv)
case reflect.Map:
return md.unifyMap(data, rv)
case reflect.Array:
return md.unifyArray(data, rv)
case reflect.Slice:
return md.unifySlice(data, rv)
case reflect.String:
return md.unifyString(data, rv)
case reflect.Bool:
return md.unifyBool(data, rv)
case reflect.Interface:
// we only support empty interfaces.
if rv.NumMethod() > 0 {
return e("unsupported type %s", rv.Type())
}
return md.unifyAnything(data, rv)
case reflect.Float32:
fallthrough
case reflect.Float64:
return md.unifyFloat64(data, rv)
}
return e("unsupported type %s", rv.Kind())
}
func (md *MetaData) unifyStruct(mapping interface{}, rv reflect.Value) error {
tmap, ok := mapping.(map[string]interface{})
if !ok {
if mapping == nil {
return nil
}
return e("type mismatch for %s: expected table but found %T",
rv.Type().String(), mapping)
}
for key, datum := range tmap {
var f *field
fields := cachedTypeFields(rv.Type())
for i := range fields {
ff := &fields[i]
if ff.name == key {
f = ff
break
}
if f == nil && strings.EqualFold(ff.name, key) {
f = ff
}
}
if f != nil {
subv := rv
for _, i := range f.index {
subv = indirect(subv.Field(i))
}
if isUnifiable(subv) {
md.decoded[md.context.add(key).String()] = true
md.context = append(md.context, key)
if err := md.unify(datum, subv); err != nil {
return err
}
md.context = md.context[0 : len(md.context)-1]
} else if f.name != "" {
// Bad user! No soup for you!
return e("cannot write unexported field %s.%s",
rv.Type().String(), f.name)
}
}
}
return nil
}
func (md *MetaData) unifyMap(mapping interface{}, rv reflect.Value) error {
tmap, ok := mapping.(map[string]interface{})
if !ok {
if tmap == nil {
return nil
}
return badtype("map", mapping)
}
if rv.IsNil() {
rv.Set(reflect.MakeMap(rv.Type()))
}
for k, v := range tmap {
md.decoded[md.context.add(k).String()] = true
md.context = append(md.context, k)
rvkey := indirect(reflect.New(rv.Type().Key()))
rvval := reflect.Indirect(reflect.New(rv.Type().Elem()))
if err := md.unify(v, rvval); err != nil {
return err
}
md.context = md.context[0 : len(md.context)-1]
rvkey.SetString(k)
rv.SetMapIndex(rvkey, rvval)
}
return nil
}
func (md *MetaData) unifyArray(data interface{}, rv reflect.Value) error {
datav := reflect.ValueOf(data)
if datav.Kind() != reflect.Slice {
if !datav.IsValid() {
return nil
}
return badtype("slice", data)
}
sliceLen := datav.Len()
if sliceLen != rv.Len() {
return e("expected array length %d; got TOML array of length %d",
rv.Len(), sliceLen)
}
return md.unifySliceArray(datav, rv)
}
func (md *MetaData) unifySlice(data interface{}, rv reflect.Value) error {
datav := reflect.ValueOf(data)
if datav.Kind() != reflect.Slice {
if !datav.IsValid() {
return nil
}
return badtype("slice", data)
}
n := datav.Len()
if rv.IsNil() || rv.Cap() < n {
rv.Set(reflect.MakeSlice(rv.Type(), n, n))
}
rv.SetLen(n)
return md.unifySliceArray(datav, rv)
}
func (md *MetaData) unifySliceArray(data, rv reflect.Value) error {
sliceLen := data.Len()
for i := 0; i < sliceLen; i++ {
v := data.Index(i).Interface()
sliceval := indirect(rv.Index(i))
if err := md.unify(v, sliceval); err != nil {
return err
}
}
return nil
}
func (md *MetaData) unifyDatetime(data interface{}, rv reflect.Value) error {
if _, ok := data.(time.Time); ok {
rv.Set(reflect.ValueOf(data))
return nil
}
return badtype("time.Time", data)
}
func (md *MetaData) unifyString(data interface{}, rv reflect.Value) error {
if s, ok := data.(string); ok {
rv.SetString(s)
return nil
}
return badtype("string", data)
}
func (md *MetaData) unifyFloat64(data interface{}, rv reflect.Value) error {
if num, ok := data.(float64); ok {
switch rv.Kind() {
case reflect.Float32:
fallthrough
case reflect.Float64:
rv.SetFloat(num)
default:
panic("bug")
}
return nil
}
return badtype("float", data)
}
func (md *MetaData) unifyInt(data interface{}, rv reflect.Value) error {
if num, ok := data.(int64); ok {
if rv.Kind() >= reflect.Int && rv.Kind() <= reflect.Int64 {
switch rv.Kind() {
case reflect.Int, reflect.Int64:
// No bounds checking necessary.
case reflect.Int8:
if num < math.MinInt8 || num > math.MaxInt8 {
return e("value %d is out of range for int8", num)
}
case reflect.Int16:
if num < math.MinInt16 || num > math.MaxInt16 {
return e("value %d is out of range for int16", num)
}
case reflect.Int32:
if num < math.MinInt32 || num > math.MaxInt32 {
return e("value %d is out of range for int32", num)
}
}
rv.SetInt(num)
} else if rv.Kind() >= reflect.Uint && rv.Kind() <= reflect.Uint64 {
unum := uint64(num)
switch rv.Kind() {
case reflect.Uint, reflect.Uint64:
// No bounds checking necessary.
case reflect.Uint8:
if num < 0 || unum > math.MaxUint8 {
return e("value %d is out of range for uint8", num)
}
case reflect.Uint16:
if num < 0 || unum > math.MaxUint16 {
return e("value %d is out of range for uint16", num)
}
case reflect.Uint32:
if num < 0 || unum > math.MaxUint32 {
return e("value %d is out of range for uint32", num)
}
}
rv.SetUint(unum)
} else {
panic("unreachable")
}
return nil
}
return badtype("integer", data)
}
func (md *MetaData) unifyBool(data interface{}, rv reflect.Value) error {
if b, ok := data.(bool); ok {
rv.SetBool(b)
return nil
}
return badtype("boolean", data)
}
func (md *MetaData) unifyAnything(data interface{}, rv reflect.Value) error {
rv.Set(reflect.ValueOf(data))
return nil
}
func (md *MetaData) unifyText(data interface{}, v TextUnmarshaler) error {
var s string
switch sdata := data.(type) {
case TextMarshaler:
text, err := sdata.MarshalText()
if err != nil {
return err
}
s = string(text)
case fmt.Stringer:
s = sdata.String()
case string:
s = sdata
case bool:
s = fmt.Sprintf("%v", sdata)
case int64:
s = fmt.Sprintf("%d", sdata)
case float64:
s = fmt.Sprintf("%f", sdata)
default:
return badtype("primitive (string-like)", data)
}
if err := v.UnmarshalText([]byte(s)); err != nil {
return err
}
return nil
}
// rvalue returns a reflect.Value of `v`. All pointers are resolved.
func rvalue(v interface{}) reflect.Value {
return indirect(reflect.ValueOf(v))
}
// indirect returns the value pointed to by a pointer.
// Pointers are followed until the value is not a pointer.
// New values are allocated for each nil pointer.
//
// An exception to this rule is if the value satisfies an interface of
// interest to us (like encoding.TextUnmarshaler).
func indirect(v reflect.Value) reflect.Value {
if v.Kind() != reflect.Ptr {
if v.CanSet() {
pv := v.Addr()
if _, ok := pv.Interface().(TextUnmarshaler); ok {
return pv
}
}
return v
}
if v.IsNil() {
v.Set(reflect.New(v.Type().Elem()))
}
return indirect(reflect.Indirect(v))
}
func isUnifiable(rv reflect.Value) bool {
if rv.CanSet() {
return true
}
if _, ok := rv.Interface().(TextUnmarshaler); ok {
return true
}
return false
}
func badtype(expected string, data interface{}) error {
return e("cannot load TOML value of type %T into a Go %s", data, expected)
}

vendor/github.com/BurntSushi/toml/decode_meta.go generated vendored Normal file

@@ -0,0 +1,121 @@
package toml
import "strings"
// MetaData allows access to meta information about TOML data that may not
// be inferrable via reflection. In particular, whether a key has been defined
// and the TOML type of a key.
type MetaData struct {
mapping map[string]interface{}
types map[string]tomlType
keys []Key
decoded map[string]bool
context Key // Used only during decoding.
}
// IsDefined returns true if the key given exists in the TOML data. The key
// should be specified hierarchically. e.g.,
//
// // access the TOML key 'a.b.c'
// IsDefined("a", "b", "c")
//
// IsDefined will return false if an empty key is given. Keys are case sensitive.
func (md *MetaData) IsDefined(key ...string) bool {
if len(key) == 0 {
return false
}
var hash map[string]interface{}
var ok bool
var hashOrVal interface{} = md.mapping
for _, k := range key {
if hash, ok = hashOrVal.(map[string]interface{}); !ok {
return false
}
if hashOrVal, ok = hash[k]; !ok {
return false
}
}
return true
}
// Type returns a string representation of the type of the key specified.
//
// Type will return the empty string if given an empty key or a key that
// does not exist. Keys are case sensitive.
func (md *MetaData) Type(key ...string) string {
fullkey := strings.Join(key, ".")
if typ, ok := md.types[fullkey]; ok {
return typ.typeString()
}
return ""
}
// Key is the type of any TOML key, including key groups. Use (MetaData).Keys
// to get values of this type.
type Key []string
func (k Key) String() string {
return strings.Join(k, ".")
}
func (k Key) maybeQuotedAll() string {
var ss []string
for i := range k {
ss = append(ss, k.maybeQuoted(i))
}
return strings.Join(ss, ".")
}
func (k Key) maybeQuoted(i int) string {
quote := false
for _, c := range k[i] {
if !isBareKeyChar(c) {
quote = true
break
}
}
if quote {
return "\"" + strings.Replace(k[i], "\"", "\\\"", -1) + "\""
}
return k[i]
}
func (k Key) add(piece string) Key {
newKey := make(Key, len(k)+1)
copy(newKey, k)
newKey[len(k)] = piece
return newKey
}
// Keys returns a slice of every key in the TOML data, including key groups.
// Each key is itself a slice, where the first element is the top of the
// hierarchy and the last is the most specific.
//
// The list will have the same order as the keys appeared in the TOML data.
//
// All keys returned are non-empty.
func (md *MetaData) Keys() []Key {
return md.keys
}
// Undecoded returns all keys that have not been decoded in the order in which
// they appear in the original TOML document.
//
// This includes keys that haven't been decoded because of a Primitive value.
// Once the Primitive value is decoded, the keys will be considered decoded.
//
// Also note that decoding into an empty interface will result in no decoding,
// and so no keys will be considered decoded.
//
// In this sense, the Undecoded keys correspond to keys in the TOML document
// that do not have a concrete type in your representation.
func (md *MetaData) Undecoded() []Key {
undecoded := make([]Key, 0, len(md.keys))
for _, key := range md.keys {
if !md.decoded[key.String()] {
undecoded = append(undecoded, key)
}
}
return undecoded
}
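One common use of Undecoded is rejecting configuration keys the program does not understand — a sketch with an invented key name:

```go
package main

import (
	"log"

	"github.com/BurntSushi/toml"
)

func main() {
	var conf struct{ Port int }
	md, err := toml.Decode("port = 1\ntypo_key = true\n", &conf)
	if err != nil {
		log.Fatal(err)
	}
	if un := md.Undecoded(); len(un) > 0 {
		log.Fatalf("unknown keys in config: %v", un) // reports [typo_key]
	}
}
```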

vendor/github.com/BurntSushi/toml/doc.go generated vendored Normal file

@@ -0,0 +1,27 @@
/*
Package toml provides facilities for decoding and encoding TOML configuration
files via reflection. There is also support for delaying decoding with
the Primitive type, and querying the set of keys in a TOML document with the
MetaData type.
The specification implemented: https://github.com/toml-lang/toml
The sub-command github.com/BurntSushi/toml/cmd/tomlv can be used to verify
whether a file is a valid TOML document. It can also be used to print the
type of each key in a TOML document.
Testing
There are two important types of tests used for this package. The first is
contained inside '*_test.go' files and uses the standard Go unit testing
framework. These tests are primarily devoted to holistically testing the
decoder and encoder.
The second type of testing is used to verify the implementation's adherence
to the TOML specification. These tests have been factored into their own
project: https://github.com/BurntSushi/toml-test
The reason the tests are in a separate project is so that they can be used by
any implementation of TOML. Namely, it is language agnostic.
*/
package toml

vendor/github.com/BurntSushi/toml/encode.go generated vendored Normal file

@@ -0,0 +1,568 @@
package toml
import (
"bufio"
"errors"
"fmt"
"io"
"reflect"
"sort"
"strconv"
"strings"
"time"
)
type tomlEncodeError struct{ error }
var (
errArrayMixedElementTypes = errors.New(
"toml: cannot encode array with mixed element types")
errArrayNilElement = errors.New(
"toml: cannot encode array with nil element")
errNonString = errors.New(
"toml: cannot encode a map with non-string key type")
errAnonNonStruct = errors.New(
"toml: cannot encode an anonymous field that is not a struct")
errArrayNoTable = errors.New(
"toml: TOML array element cannot contain a table")
errNoKey = errors.New(
"toml: top-level values must be Go maps or structs")
errAnything = errors.New("") // used in testing
)
var quotedReplacer = strings.NewReplacer(
"\t", "\\t",
"\n", "\\n",
"\r", "\\r",
"\"", "\\\"",
"\\", "\\\\",
)
// Encoder controls the encoding of Go values to a TOML document to some
// io.Writer.
//
// The indentation level can be controlled with the Indent field.
type Encoder struct {
// A single indentation level. By default it is two spaces.
Indent string
// hasWritten is whether we have written any output to w yet.
hasWritten bool
w *bufio.Writer
}
// NewEncoder returns a TOML encoder that encodes Go values to the io.Writer
// given. By default, a single indentation level is 2 spaces.
func NewEncoder(w io.Writer) *Encoder {
return &Encoder{
w: bufio.NewWriter(w),
Indent: " ",
}
}
// Encode writes a TOML representation of the Go value to the underlying
// io.Writer. If the value given cannot be encoded to a valid TOML document,
// then an error is returned.
//
// The mapping between Go values and TOML values should be precisely the same
// as for the Decode* functions. Similarly, the TextMarshaler interface is
// supported by encoding the resulting bytes as strings. (If you want to write
// arbitrary binary data then you will need to use something like base64 since
// TOML does not have any binary types.)
//
// When encoding TOML hashes (i.e., Go maps or structs), keys without any
// sub-hashes are encoded first.
//
// If a Go map is encoded, then its keys are sorted alphabetically for
// deterministic output. More control over this behavior may be provided if
// there is demand for it.
//
// Encoding Go values without a corresponding TOML representation---like map
// types with non-string keys---will cause an error to be returned. Similarly
// for mixed arrays/slices, arrays/slices with nil elements, embedded
// non-struct types and nested slices containing maps or structs.
// (e.g., [][]map[string]string is not allowed but []map[string]string is OK
// and so is []map[string][]string.)
func (enc *Encoder) Encode(v interface{}) error {
rv := eindirect(reflect.ValueOf(v))
if err := enc.safeEncode(Key([]string{}), rv); err != nil {
return err
}
return enc.w.Flush()
}
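An encoding sketch — the struct is invented; note that field names are used verbatim as keys unless a `toml` tag renames them:

```go
package main

import (
	"log"
	"os"

	"github.com/BurntSushi/toml"
)

func main() {
	cfg := struct {
		Title string
		Owner struct{ Name string }
	}{Title: "demo"}
	cfg.Owner.Name = "someone"

	enc := toml.NewEncoder(os.Stdout)
	enc.Indent = "    " // four spaces instead of the default two
	if err := enc.Encode(cfg); err != nil {
		log.Fatal(err)
	}
	// Prints roughly:
	//   Title = "demo"
	//
	//   [Owner]
	//       Name = "someone"
}
```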
func (enc *Encoder) safeEncode(key Key, rv reflect.Value) (err error) {
defer func() {
if r := recover(); r != nil {
if terr, ok := r.(tomlEncodeError); ok {
err = terr.error
return
}
panic(r)
}
}()
enc.encode(key, rv)
return nil
}
func (enc *Encoder) encode(key Key, rv reflect.Value) {
// Special case. Time needs to be in ISO8601 format.
// Special case. If we can marshal the type to text, then we use that.
// Basically, this prevents the encoder from handling these types as
// generic structs (or whatever the underlying type of a TextMarshaler is).
switch rv.Interface().(type) {
case time.Time, TextMarshaler:
enc.keyEqElement(key, rv)
return
}
k := rv.Kind()
switch k {
case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32,
reflect.Int64,
reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32,
reflect.Uint64,
reflect.Float32, reflect.Float64, reflect.String, reflect.Bool:
enc.keyEqElement(key, rv)
case reflect.Array, reflect.Slice:
if typeEqual(tomlArrayHash, tomlTypeOfGo(rv)) {
enc.eArrayOfTables(key, rv)
} else {
enc.keyEqElement(key, rv)
}
case reflect.Interface:
if rv.IsNil() {
return
}
enc.encode(key, rv.Elem())
case reflect.Map:
if rv.IsNil() {
return
}
enc.eTable(key, rv)
case reflect.Ptr:
if rv.IsNil() {
return
}
enc.encode(key, rv.Elem())
case reflect.Struct:
enc.eTable(key, rv)
default:
panic(e("unsupported type for key '%s': %s", key, k))
}
}
// eElement encodes any value that can be an array element (primitives and
// arrays).
func (enc *Encoder) eElement(rv reflect.Value) {
switch v := rv.Interface().(type) {
case time.Time:
// Special case time.Time as a primitive. Has to come before
// TextMarshaler below because time.Time implements
// encoding.TextMarshaler, but we need to always use UTC.
enc.wf(v.UTC().Format("2006-01-02T15:04:05Z"))
return
case TextMarshaler:
// Special case. Use text marshaler if it's available for this value.
if s, err := v.MarshalText(); err != nil {
encPanic(err)
} else {
enc.writeQuoted(string(s))
}
return
}
switch rv.Kind() {
case reflect.Bool:
enc.wf(strconv.FormatBool(rv.Bool()))
case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32,
reflect.Int64:
enc.wf(strconv.FormatInt(rv.Int(), 10))
case reflect.Uint, reflect.Uint8, reflect.Uint16,
reflect.Uint32, reflect.Uint64:
enc.wf(strconv.FormatUint(rv.Uint(), 10))
case reflect.Float32:
enc.wf(floatAddDecimal(strconv.FormatFloat(rv.Float(), 'f', -1, 32)))
case reflect.Float64:
enc.wf(floatAddDecimal(strconv.FormatFloat(rv.Float(), 'f', -1, 64)))
case reflect.Array, reflect.Slice:
enc.eArrayOrSliceElement(rv)
case reflect.Interface:
enc.eElement(rv.Elem())
case reflect.String:
enc.writeQuoted(rv.String())
default:
panic(e("unexpected primitive type: %s", rv.Kind()))
}
}
// By the TOML spec, all floats must have a decimal with at least one
// number on either side.
func floatAddDecimal(fstr string) string {
if !strings.Contains(fstr, ".") {
return fstr + ".0"
}
return fstr
}
func (enc *Encoder) writeQuoted(s string) {
enc.wf("\"%s\"", quotedReplacer.Replace(s))
}
func (enc *Encoder) eArrayOrSliceElement(rv reflect.Value) {
length := rv.Len()
enc.wf("[")
for i := 0; i < length; i++ {
elem := rv.Index(i)
enc.eElement(elem)
if i != length-1 {
enc.wf(", ")
}
}
enc.wf("]")
}
func (enc *Encoder) eArrayOfTables(key Key, rv reflect.Value) {
if len(key) == 0 {
encPanic(errNoKey)
}
for i := 0; i < rv.Len(); i++ {
trv := rv.Index(i)
if isNil(trv) {
continue
}
panicIfInvalidKey(key)
enc.newline()
enc.wf("%s[[%s]]", enc.indentStr(key), key.maybeQuotedAll())
enc.newline()
enc.eMapOrStruct(key, trv)
}
}
func (enc *Encoder) eTable(key Key, rv reflect.Value) {
panicIfInvalidKey(key)
if len(key) == 1 {
// Output an extra newline between top-level tables.
// (The newline isn't written if nothing else has been written though.)
enc.newline()
}
if len(key) > 0 {
enc.wf("%s[%s]", enc.indentStr(key), key.maybeQuotedAll())
enc.newline()
}
enc.eMapOrStruct(key, rv)
}
func (enc *Encoder) eMapOrStruct(key Key, rv reflect.Value) {
switch rv := eindirect(rv); rv.Kind() {
case reflect.Map:
enc.eMap(key, rv)
case reflect.Struct:
enc.eStruct(key, rv)
default:
panic("eTable: unhandled reflect.Value Kind: " + rv.Kind().String())
}
}
func (enc *Encoder) eMap(key Key, rv reflect.Value) {
rt := rv.Type()
if rt.Key().Kind() != reflect.String {
encPanic(errNonString)
}
// Sort keys so that we have deterministic output. And write keys directly
// underneath this key first, before writing sub-structs or sub-maps.
var mapKeysDirect, mapKeysSub []string
for _, mapKey := range rv.MapKeys() {
k := mapKey.String()
if typeIsHash(tomlTypeOfGo(rv.MapIndex(mapKey))) {
mapKeysSub = append(mapKeysSub, k)
} else {
mapKeysDirect = append(mapKeysDirect, k)
}
}
var writeMapKeys = func(mapKeys []string) {
sort.Strings(mapKeys)
for _, mapKey := range mapKeys {
mrv := rv.MapIndex(reflect.ValueOf(mapKey))
if isNil(mrv) {
// Don't write anything for nil fields.
continue
}
enc.encode(key.add(mapKey), mrv)
}
}
writeMapKeys(mapKeysDirect)
writeMapKeys(mapKeysSub)
}
func (enc *Encoder) eStruct(key Key, rv reflect.Value) {
// Write keys for fields directly under this key first, because if we write
// a field that creates a new table, then all keys under it will be in that
// table (not the one we're writing here).
rt := rv.Type()
var fieldsDirect, fieldsSub [][]int
var addFields func(rt reflect.Type, rv reflect.Value, start []int)
addFields = func(rt reflect.Type, rv reflect.Value, start []int) {
for i := 0; i < rt.NumField(); i++ {
f := rt.Field(i)
// skip unexported fields
if f.PkgPath != "" && !f.Anonymous {
continue
}
frv := rv.Field(i)
if f.Anonymous {
t := f.Type
switch t.Kind() {
case reflect.Struct:
// Treat anonymous struct fields with
// tag names as though they are not
// anonymous, like encoding/json does.
if getOptions(f.Tag).name == "" {
addFields(t, frv, f.Index)
continue
}
case reflect.Ptr:
if t.Elem().Kind() == reflect.Struct &&
getOptions(f.Tag).name == "" {
if !frv.IsNil() {
addFields(t.Elem(), frv.Elem(), f.Index)
}
continue
}
// Fall through to the normal field encoding logic below
// for non-struct anonymous fields.
}
}
if typeIsHash(tomlTypeOfGo(frv)) {
fieldsSub = append(fieldsSub, append(start, f.Index...))
} else {
fieldsDirect = append(fieldsDirect, append(start, f.Index...))
}
}
}
addFields(rt, rv, nil)
var writeFields = func(fields [][]int) {
for _, fieldIndex := range fields {
sft := rt.FieldByIndex(fieldIndex)
sf := rv.FieldByIndex(fieldIndex)
if isNil(sf) {
// Don't write anything for nil fields.
continue
}
opts := getOptions(sft.Tag)
if opts.skip {
continue
}
keyName := sft.Name
if opts.name != "" {
keyName = opts.name
}
if opts.omitempty && isEmpty(sf) {
continue
}
if opts.omitzero && isZero(sf) {
continue
}
enc.encode(key.add(keyName), sf)
}
}
writeFields(fieldsDirect)
writeFields(fieldsSub)
}
// tomlTypeOfGo returns the TOML type of a Go value. The type may be `nil`,
// which means no concrete TOML type could be found (e.g., for a nil value).
// It is used to determine whether the types of array elements are mixed
// (which is forbidden).
func tomlTypeOfGo(rv reflect.Value) tomlType {
if isNil(rv) || !rv.IsValid() {
return nil
}
switch rv.Kind() {
case reflect.Bool:
return tomlBool
case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32,
reflect.Int64,
reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32,
reflect.Uint64:
return tomlInteger
case reflect.Float32, reflect.Float64:
return tomlFloat
case reflect.Array, reflect.Slice:
if typeEqual(tomlHash, tomlArrayType(rv)) {
return tomlArrayHash
}
return tomlArray
case reflect.Ptr, reflect.Interface:
return tomlTypeOfGo(rv.Elem())
case reflect.String:
return tomlString
case reflect.Map:
return tomlHash
case reflect.Struct:
switch rv.Interface().(type) {
case time.Time:
return tomlDatetime
case TextMarshaler:
return tomlString
default:
return tomlHash
}
default:
panic("unexpected reflect.Kind: " + rv.Kind().String())
}
}
// tomlArrayType returns the element type of a TOML array. The type returned
// may be nil if it cannot be determined (e.g., a nil slice or a zero length
// slice). This function may also panic if it finds a type that cannot be
// expressed in TOML (such as nil elements, heterogeneous arrays or directly
// nested arrays of tables).
func tomlArrayType(rv reflect.Value) tomlType {
if isNil(rv) || !rv.IsValid() || rv.Len() == 0 {
return nil
}
firstType := tomlTypeOfGo(rv.Index(0))
if firstType == nil {
encPanic(errArrayNilElement)
}
rvlen := rv.Len()
for i := 1; i < rvlen; i++ {
elem := rv.Index(i)
switch elemType := tomlTypeOfGo(elem); {
case elemType == nil:
encPanic(errArrayNilElement)
case !typeEqual(firstType, elemType):
encPanic(errArrayMixedElementTypes)
}
}
// If we have a nested array, then we must make sure that the nested
// array contains ONLY primitives.
// This checks arbitrarily nested arrays.
if typeEqual(firstType, tomlArray) || typeEqual(firstType, tomlArrayHash) {
nest := tomlArrayType(eindirect(rv.Index(0)))
if typeEqual(nest, tomlHash) || typeEqual(nest, tomlArrayHash) {
encPanic(errArrayNoTable)
}
}
return firstType
}
type tagOptions struct {
skip bool // "-"
name string
omitempty bool
omitzero bool
}
func getOptions(tag reflect.StructTag) tagOptions {
t := tag.Get("toml")
if t == "-" {
return tagOptions{skip: true}
}
var opts tagOptions
parts := strings.Split(t, ",")
opts.name = parts[0]
for _, s := range parts[1:] {
switch s {
case "omitempty":
opts.omitempty = true
case "omitzero":
opts.omitzero = true
}
}
return opts
}
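The tag grammar is easiest to see on a struct; this one is invented for illustration:

```go
type account struct {
	Name    string `toml:"name"`             // rename the TOML key
	Token   string `toml:"-"`                // skip this field entirely
	Note    string `toml:"note,omitempty"`   // omitted when empty (encoding)
	Retries int    `toml:"retries,omitzero"` // omitted when zero (encoding)
}
```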
func isZero(rv reflect.Value) bool {
switch rv.Kind() {
case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
return rv.Int() == 0
case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64:
return rv.Uint() == 0
case reflect.Float32, reflect.Float64:
return rv.Float() == 0.0
}
return false
}
func isEmpty(rv reflect.Value) bool {
switch rv.Kind() {
case reflect.Array, reflect.Slice, reflect.Map, reflect.String:
return rv.Len() == 0
case reflect.Bool:
return !rv.Bool()
}
return false
}
func (enc *Encoder) newline() {
if enc.hasWritten {
enc.wf("\n")
}
}
func (enc *Encoder) keyEqElement(key Key, val reflect.Value) {
if len(key) == 0 {
encPanic(errNoKey)
}
panicIfInvalidKey(key)
enc.wf("%s%s = ", enc.indentStr(key), key.maybeQuoted(len(key)-1))
enc.eElement(val)
enc.newline()
}
func (enc *Encoder) wf(format string, v ...interface{}) {
if _, err := fmt.Fprintf(enc.w, format, v...); err != nil {
encPanic(err)
}
enc.hasWritten = true
}
func (enc *Encoder) indentStr(key Key) string {
return strings.Repeat(enc.Indent, len(key)-1)
}
func encPanic(err error) {
panic(tomlEncodeError{err})
}
func eindirect(v reflect.Value) reflect.Value {
switch v.Kind() {
case reflect.Ptr, reflect.Interface:
return eindirect(v.Elem())
default:
return v
}
}
func isNil(rv reflect.Value) bool {
switch rv.Kind() {
case reflect.Interface, reflect.Map, reflect.Ptr, reflect.Slice:
return rv.IsNil()
default:
return false
}
}
func panicIfInvalidKey(key Key) {
for _, k := range key {
if len(k) == 0 {
encPanic(e("Key '%s' is not a valid table name. Key names "+
"cannot be empty.", key.maybeQuotedAll()))
}
}
}
func isValidKeyName(s string) bool {
return len(s) != 0
}

vendor/github.com/BurntSushi/toml/encoding_types.go generated vendored Normal file

@@ -0,0 +1,19 @@
// +build go1.2
package toml
// In order to support Go 1.1, we define our own TextMarshaler and
// TextUnmarshaler types. For Go 1.2+, we just alias them with the
// standard library interfaces.
import (
"encoding"
)
// TextMarshaler is a synonym for encoding.TextMarshaler. It is defined here
// so that Go 1.1 can be supported.
type TextMarshaler encoding.TextMarshaler
// TextUnmarshaler is a synonym for encoding.TextUnmarshaler. It is defined
// here so that Go 1.1 can be supported.
type TextUnmarshaler encoding.TextUnmarshaler


@@ -0,0 +1,18 @@
// +build !go1.2
package toml
// These interfaces were introduced in Go 1.2, so we add them manually when
// compiling for Go 1.1.
// TextMarshaler is a synonym for encoding.TextMarshaler. It is defined here
// so that Go 1.1 can be supported.
type TextMarshaler interface {
MarshalText() (text []byte, err error)
}
// TextUnmarshaler is a synonym for encoding.TextUnmarshaler. It is defined
// here so that Go 1.1 can be supported.
type TextUnmarshaler interface {
UnmarshalText(text []byte) error
}

vendor/github.com/BurntSushi/toml/lex.go generated vendored Normal file

@@ -0,0 +1,953 @@
package toml
import (
"fmt"
"strings"
"unicode"
"unicode/utf8"
)
type itemType int
const (
itemError itemType = iota
itemNIL // used in the parser to indicate no type
itemEOF
itemText
itemString
itemRawString
itemMultilineString
itemRawMultilineString
itemBool
itemInteger
itemFloat
itemDatetime
itemArray // the start of an array
itemArrayEnd
itemTableStart
itemTableEnd
itemArrayTableStart
itemArrayTableEnd
itemKeyStart
itemCommentStart
itemInlineTableStart
itemInlineTableEnd
)
const (
eof = 0
comma = ','
tableStart = '['
tableEnd = ']'
arrayTableStart = '['
arrayTableEnd = ']'
tableSep = '.'
keySep = '='
arrayStart = '['
arrayEnd = ']'
commentStart = '#'
stringStart = '"'
stringEnd = '"'
rawStringStart = '\''
rawStringEnd = '\''
inlineTableStart = '{'
inlineTableEnd = '}'
)
type stateFn func(lx *lexer) stateFn
type lexer struct {
input string
start int
pos int
line int
state stateFn
items chan item
// Allow for backing up up to three runes.
// This is necessary because TOML contains 3-rune tokens (""" and ''').
prevWidths [3]int
nprev int // how many of prevWidths are in use
// If we emit an eof, we can still back up, but it is not OK to call
// next again.
atEOF bool
// A stack of state functions used to maintain context.
// The idea is to reuse parts of the state machine in various places.
// For example, values can appear at the top level or within arbitrarily
// nested arrays. The last state on the stack is used after a value has
// been lexed. Similarly for comments.
stack []stateFn
}
type item struct {
typ itemType
val string
line int
}
func (lx *lexer) nextItem() item {
for {
select {
case item := <-lx.items:
return item
default:
lx.state = lx.state(lx)
}
}
}
func lex(input string) *lexer {
lx := &lexer{
input: input,
state: lexTop,
line: 1,
items: make(chan item, 10),
stack: make([]stateFn, 0, 10),
}
return lx
}
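A sketch of how the parser drives this lexer; it must live in package toml because the identifiers are unexported, and dumpItems is an invented helper, not part of the package:

```go
package toml

import "fmt"

// dumpItems drains the lexer the same way the parser does, printing each
// item until an EOF or error item arrives.
func dumpItems(input string) {
	lx := lex(input)
	for {
		it := lx.nextItem()
		fmt.Printf("line %d: type %d %q\n", it.line, it.typ, it.val)
		if it.typ == itemEOF || it.typ == itemError {
			return
		}
	}
}
```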
func (lx *lexer) push(state stateFn) {
lx.stack = append(lx.stack, state)
}
func (lx *lexer) pop() stateFn {
if len(lx.stack) == 0 {
return lx.errorf("BUG in lexer: no states to pop")
}
last := lx.stack[len(lx.stack)-1]
lx.stack = lx.stack[0 : len(lx.stack)-1]
return last
}
func (lx *lexer) current() string {
return lx.input[lx.start:lx.pos]
}
func (lx *lexer) emit(typ itemType) {
lx.items <- item{typ, lx.current(), lx.line}
lx.start = lx.pos
}
func (lx *lexer) emitTrim(typ itemType) {
lx.items <- item{typ, strings.TrimSpace(lx.current()), lx.line}
lx.start = lx.pos
}
func (lx *lexer) next() (r rune) {
if lx.atEOF {
panic("next called after EOF")
}
if lx.pos >= len(lx.input) {
lx.atEOF = true
return eof
}
if lx.input[lx.pos] == '\n' {
lx.line++
}
lx.prevWidths[2] = lx.prevWidths[1]
lx.prevWidths[1] = lx.prevWidths[0]
if lx.nprev < 3 {
lx.nprev++
}
r, w := utf8.DecodeRuneInString(lx.input[lx.pos:])
lx.prevWidths[0] = w
lx.pos += w
return r
}
// ignore skips over the pending input before this point.
func (lx *lexer) ignore() {
lx.start = lx.pos
}
// backup steps back one rune. Can be called only twice between calls to next.
func (lx *lexer) backup() {
if lx.atEOF {
lx.atEOF = false
return
}
if lx.nprev < 1 {
panic("backed up too far")
}
w := lx.prevWidths[0]
lx.prevWidths[0] = lx.prevWidths[1]
lx.prevWidths[1] = lx.prevWidths[2]
lx.nprev--
lx.pos -= w
if lx.pos < len(lx.input) && lx.input[lx.pos] == '\n' {
lx.line--
}
}
// accept consumes the next rune if it's equal to `valid`.
func (lx *lexer) accept(valid rune) bool {
if lx.next() == valid {
return true
}
lx.backup()
return false
}
// peek returns but does not consume the next rune in the input.
func (lx *lexer) peek() rune {
r := lx.next()
lx.backup()
return r
}
// skip ignores all input that matches the given predicate.
func (lx *lexer) skip(pred func(rune) bool) {
for {
r := lx.next()
if pred(r) {
continue
}
lx.backup()
lx.ignore()
return
}
}
// errorf stops all lexing by emitting an error and returning `nil`.
// Note that any value that is a character is escaped if it's a special
// character (newlines, tabs, etc.).
func (lx *lexer) errorf(format string, values ...interface{}) stateFn {
lx.items <- item{
itemError,
fmt.Sprintf(format, values...),
lx.line,
}
return nil
}
// lexTop consumes elements at the top level of TOML data.
func lexTop(lx *lexer) stateFn {
r := lx.next()
if isWhitespace(r) || isNL(r) {
return lexSkip(lx, lexTop)
}
switch r {
case commentStart:
lx.push(lexTop)
return lexCommentStart
case tableStart:
return lexTableStart
case eof:
if lx.pos > lx.start {
return lx.errorf("unexpected EOF")
}
lx.emit(itemEOF)
return nil
}
// At this point, the only valid item can be a key, so we back up
// and let the key lexer do the rest.
lx.backup()
lx.push(lexTopEnd)
return lexKeyStart
}
// lexTopEnd is entered whenever a top-level item has been consumed. (A value
// or a table.) It must see only whitespace, and will turn back to lexTop
// upon a newline. If it sees EOF, it will quit the lexer successfully.
func lexTopEnd(lx *lexer) stateFn {
r := lx.next()
switch {
case r == commentStart:
// a comment will read to a newline for us.
lx.push(lexTop)
return lexCommentStart
case isWhitespace(r):
return lexTopEnd
case isNL(r):
lx.ignore()
return lexTop
case r == eof:
lx.emit(itemEOF)
return nil
}
return lx.errorf("expected a top-level item to end with a newline, "+
"comment, or EOF, but got %q instead", r)
}
// lexTable lexes the beginning of a table. Namely, it makes sure that
// it starts with a character other than '.' and ']'.
// It assumes that '[' has already been consumed.
// It also handles the case that this is an item in an array of tables.
// e.g., '[[name]]'.
func lexTableStart(lx *lexer) stateFn {
if lx.peek() == arrayTableStart {
lx.next()
lx.emit(itemArrayTableStart)
lx.push(lexArrayTableEnd)
} else {
lx.emit(itemTableStart)
lx.push(lexTableEnd)
}
return lexTableNameStart
}
func lexTableEnd(lx *lexer) stateFn {
lx.emit(itemTableEnd)
return lexTopEnd
}
func lexArrayTableEnd(lx *lexer) stateFn {
if r := lx.next(); r != arrayTableEnd {
return lx.errorf("expected end of table array name delimiter %q, "+
"but got %q instead", arrayTableEnd, r)
}
lx.emit(itemArrayTableEnd)
return lexTopEnd
}
func lexTableNameStart(lx *lexer) stateFn {
lx.skip(isWhitespace)
switch r := lx.peek(); {
case r == tableEnd || r == eof:
return lx.errorf("unexpected end of table name " +
"(table names cannot be empty)")
case r == tableSep:
return lx.errorf("unexpected table separator " +
"(table names cannot be empty)")
case r == stringStart || r == rawStringStart:
lx.ignore()
lx.push(lexTableNameEnd)
return lexValue // reuse string lexing
default:
return lexBareTableName
}
}
// lexBareTableName lexes the name of a table. It assumes that at least one
// valid character for the table has already been read.
func lexBareTableName(lx *lexer) stateFn {
r := lx.next()
if isBareKeyChar(r) {
return lexBareTableName
}
lx.backup()
lx.emit(itemText)
return lexTableNameEnd
}
// lexTableNameEnd reads the end of a piece of a table name, optionally
// consuming whitespace.
func lexTableNameEnd(lx *lexer) stateFn {
lx.skip(isWhitespace)
switch r := lx.next(); {
case isWhitespace(r):
return lexTableNameEnd
case r == tableSep:
lx.ignore()
return lexTableNameStart
case r == tableEnd:
return lx.pop()
default:
return lx.errorf("expected '.' or ']' to end table name, "+
"but got %q instead", r)
}
}
// lexKeyStart consumes a key name up until the first non-whitespace character.
// lexKeyStart will ignore whitespace.
func lexKeyStart(lx *lexer) stateFn {
r := lx.peek()
switch {
case r == keySep:
return lx.errorf("unexpected key separator %q", keySep)
case isWhitespace(r) || isNL(r):
lx.next()
return lexSkip(lx, lexKeyStart)
case r == stringStart || r == rawStringStart:
lx.ignore()
lx.emit(itemKeyStart)
lx.push(lexKeyEnd)
return lexValue // reuse string lexing
default:
lx.ignore()
lx.emit(itemKeyStart)
return lexBareKey
}
}
// lexBareKey consumes the text of a bare key. Assumes that the first character
// (which is not whitespace) has not yet been consumed.
func lexBareKey(lx *lexer) stateFn {
switch r := lx.next(); {
case isBareKeyChar(r):
return lexBareKey
case isWhitespace(r):
lx.backup()
lx.emit(itemText)
return lexKeyEnd
case r == keySep:
lx.backup()
lx.emit(itemText)
return lexKeyEnd
default:
return lx.errorf("bare keys cannot contain %q", r)
}
}
// lexKeyEnd consumes the end of a key and trims whitespace (up to the key
// separator).
func lexKeyEnd(lx *lexer) stateFn {
switch r := lx.next(); {
case r == keySep:
return lexSkip(lx, lexValue)
case isWhitespace(r):
return lexSkip(lx, lexKeyEnd)
default:
return lx.errorf("expected key separator %q, but got %q instead",
keySep, r)
}
}
// lexValue starts the consumption of a value anywhere a value is expected.
// lexValue will ignore whitespace.
// After a value is lexed, the last state on the stack is popped and returned.
func lexValue(lx *lexer) stateFn {
// We allow whitespace to precede a value, but NOT newlines.
// In array syntax, the array states are responsible for ignoring newlines.
r := lx.next()
switch {
case isWhitespace(r):
return lexSkip(lx, lexValue)
case isDigit(r):
lx.backup() // avoid an extra state and use the same as above
return lexNumberOrDateStart
}
switch r {
case arrayStart:
lx.ignore()
lx.emit(itemArray)
return lexArrayValue
case inlineTableStart:
lx.ignore()
lx.emit(itemInlineTableStart)
return lexInlineTableValue
case stringStart:
if lx.accept(stringStart) {
if lx.accept(stringStart) {
lx.ignore() // Ignore """
return lexMultilineString
}
lx.backup()
}
lx.ignore() // ignore the '"'
return lexString
case rawStringStart:
if lx.accept(rawStringStart) {
if lx.accept(rawStringStart) {
lx.ignore() // Ignore '''
return lexMultilineRawString
}
lx.backup()
}
lx.ignore() // ignore the "'"
return lexRawString
case '+', '-':
return lexNumberStart
case '.': // special error case, be kind to users
return lx.errorf("floats must start with a digit, not '.'")
}
if unicode.IsLetter(r) {
// Be permissive here; lexBool will give a nice error if the
// user wrote something like
// x = foo
// (i.e. not 'true' or 'false' but is something else word-like.)
lx.backup()
return lexBool
}
return lx.errorf("expected value but found %q instead", r)
}
// lexArrayValue consumes one value in an array. It assumes that '[' or ','
// have already been consumed. All whitespace and newlines are ignored.
func lexArrayValue(lx *lexer) stateFn {
r := lx.next()
switch {
case isWhitespace(r) || isNL(r):
return lexSkip(lx, lexArrayValue)
case r == commentStart:
lx.push(lexArrayValue)
return lexCommentStart
case r == comma:
return lx.errorf("unexpected comma")
case r == arrayEnd:
// NOTE(caleb): The spec isn't clear about whether you can have
// a trailing comma or not, so we'll allow it.
return lexArrayEnd
}
lx.backup()
lx.push(lexArrayValueEnd)
return lexValue
}
// lexArrayValueEnd consumes everything between the end of an array value and
// the next value (or the end of the array): it ignores whitespace and newlines
// and expects either a ',' or a ']'.
func lexArrayValueEnd(lx *lexer) stateFn {
r := lx.next()
switch {
case isWhitespace(r) || isNL(r):
return lexSkip(lx, lexArrayValueEnd)
case r == commentStart:
lx.push(lexArrayValueEnd)
return lexCommentStart
case r == comma:
lx.ignore()
return lexArrayValue // move on to the next value
case r == arrayEnd:
return lexArrayEnd
}
return lx.errorf(
"expected a comma or array terminator %q, but got %q instead",
arrayEnd, r,
)
}
// lexArrayEnd finishes the lexing of an array.
// It assumes that a ']' has just been consumed.
func lexArrayEnd(lx *lexer) stateFn {
lx.ignore()
lx.emit(itemArrayEnd)
return lx.pop()
}
// lexInlineTableValue consumes one key/value pair in an inline table.
// It assumes that '{' or ',' have already been consumed. Whitespace is ignored.
func lexInlineTableValue(lx *lexer) stateFn {
r := lx.next()
switch {
case isWhitespace(r):
return lexSkip(lx, lexInlineTableValue)
case isNL(r):
return lx.errorf("newlines not allowed within inline tables")
case r == commentStart:
lx.push(lexInlineTableValue)
return lexCommentStart
case r == comma:
return lx.errorf("unexpected comma")
case r == inlineTableEnd:
return lexInlineTableEnd
}
lx.backup()
lx.push(lexInlineTableValueEnd)
return lexKeyStart
}
// lexInlineTableValueEnd consumes everything between the end of an inline table
// key/value pair and the next pair (or the end of the table):
// it ignores whitespace and expects either a ',' or a '}'.
func lexInlineTableValueEnd(lx *lexer) stateFn {
r := lx.next()
switch {
case isWhitespace(r):
return lexSkip(lx, lexInlineTableValueEnd)
case isNL(r):
return lx.errorf("newlines not allowed within inline tables")
case r == commentStart:
lx.push(lexInlineTableValueEnd)
return lexCommentStart
case r == comma:
lx.ignore()
return lexInlineTableValue
case r == inlineTableEnd:
return lexInlineTableEnd
}
return lx.errorf("expected a comma or an inline table terminator %q, "+
"but got %q instead", inlineTableEnd, r)
}
// lexInlineTableEnd finishes the lexing of an inline table.
// It assumes that a '}' has just been consumed.
func lexInlineTableEnd(lx *lexer) stateFn {
lx.ignore()
lx.emit(itemInlineTableEnd)
return lx.pop()
}
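As a quick illustration of the inline-table rules above, a hedged sketch using the package's public `toml.Decode` API (assuming this vendored copy behaves like upstream BurntSushi/toml):

```go
package main

import (
	"fmt"

	"github.com/BurntSushi/toml"
)

func main() {
	var v struct {
		P struct{ X, Y int64 }
	}
	// Inline tables must stay on one line; lexInlineTableValue rejects newlines.
	if _, err := toml.Decode(`P = { X = 1, Y = 2 }`, &v); err != nil {
		panic(err)
	}
	fmt.Println(v.P.X, v.P.Y) // 1 2

	_, err := toml.Decode("P = { X = 1,\n Y = 2 }", &v)
	fmt.Println(err != nil) // true: newlines not allowed within inline tables
}
```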
// lexString consumes the inner contents of a string. It assumes that the
// beginning '"' has already been consumed and ignored.
func lexString(lx *lexer) stateFn {
r := lx.next()
switch {
case r == eof:
return lx.errorf("unexpected EOF")
case isNL(r):
return lx.errorf("strings cannot contain newlines")
case r == '\\':
lx.push(lexString)
return lexStringEscape
case r == stringEnd:
lx.backup()
lx.emit(itemString)
lx.next()
lx.ignore()
return lx.pop()
}
return lexString
}
// lexMultilineString consumes the inner contents of a string. It assumes that
// the beginning '"""' has already been consumed and ignored.
func lexMultilineString(lx *lexer) stateFn {
switch lx.next() {
case eof:
return lx.errorf("unexpected EOF")
case '\\':
return lexMultilineStringEscape
case stringEnd:
if lx.accept(stringEnd) {
if lx.accept(stringEnd) {
lx.backup()
lx.backup()
lx.backup()
lx.emit(itemMultilineString)
lx.next()
lx.next()
lx.next()
lx.ignore()
return lx.pop()
}
lx.backup()
}
}
return lexMultilineString
}
// lexRawString consumes a raw string. Nothing can be escaped in such a string.
// It assumes that the beginning "'" has already been consumed and ignored.
func lexRawString(lx *lexer) stateFn {
r := lx.next()
switch {
case r == eof:
return lx.errorf("unexpected EOF")
case isNL(r):
return lx.errorf("strings cannot contain newlines")
case r == rawStringEnd:
lx.backup()
lx.emit(itemRawString)
lx.next()
lx.ignore()
return lx.pop()
}
return lexRawString
}
// lexMultilineRawString consumes a raw string. Nothing can be escaped in such
// a string. It assumes that the beginning "'''" has already been consumed and
// ignored.
func lexMultilineRawString(lx *lexer) stateFn {
switch lx.next() {
case eof:
return lx.errorf("unexpected EOF")
case rawStringEnd:
if lx.accept(rawStringEnd) {
if lx.accept(rawStringEnd) {
lx.backup()
lx.backup()
lx.backup()
lx.emit(itemRawMultilineString)
lx.next()
lx.next()
lx.next()
lx.ignore()
return lx.pop()
}
lx.backup()
}
}
return lexMultilineRawString
}
// lexMultilineStringEscape consumes an escaped character. It assumes that the
// preceding '\\' has already been consumed.
func lexMultilineStringEscape(lx *lexer) stateFn {
// Handle the special case first:
if isNL(lx.next()) {
return lexMultilineString
}
lx.backup()
lx.push(lexMultilineString)
return lexStringEscape(lx)
}
func lexStringEscape(lx *lexer) stateFn {
r := lx.next()
switch r {
case 'b':
fallthrough
case 't':
fallthrough
case 'n':
fallthrough
case 'f':
fallthrough
case 'r':
fallthrough
case '"':
fallthrough
case '\\':
return lx.pop()
case 'u':
return lexShortUnicodeEscape
case 'U':
return lexLongUnicodeEscape
}
return lx.errorf("invalid escape character %q; only the following "+
"escape characters are allowed: "+
`\b, \t, \n, \f, \r, \", \\, \uXXXX, and \UXXXXXXXX`, r)
}
func lexShortUnicodeEscape(lx *lexer) stateFn {
var r rune
for i := 0; i < 4; i++ {
r = lx.next()
if !isHexadecimal(r) {
return lx.errorf(`expected four hexadecimal digits after '\u', `+
"but got %q instead", lx.current())
}
}
return lx.pop()
}
func lexLongUnicodeEscape(lx *lexer) stateFn {
var r rune
for i := 0; i < 8; i++ {
r = lx.next()
if !isHexadecimal(r) {
return lx.errorf(`expected eight hexadecimal digits after '\U', `+
"but got %q instead", lx.current())
}
}
return lx.pop()
}
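To see the escape whitelist above from the caller's side, a small sketch via the public `toml.Decode` entry point (again assuming the vendored copy behaves like upstream):

```go
package main

import (
	"fmt"

	"github.com/BurntSushi/toml"
)

func main() {
	var v struct{ S string }
	// \t and \u00E9 are in the allowed escape set handled by lexStringEscape.
	if _, err := toml.Decode(`S = "tab:\t e:\u00E9"`, &v); err != nil {
		panic(err)
	}
	fmt.Printf("%q\n", v.S) // "tab:\t e:é"

	// \x is not in the set, so the lexer reports an invalid escape character.
	_, err := toml.Decode(`S = "\x41"`, &v)
	fmt.Println(err != nil) // true
}
```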
// lexNumberOrDateStart consumes either an integer, a float, or datetime.
func lexNumberOrDateStart(lx *lexer) stateFn {
r := lx.next()
if isDigit(r) {
return lexNumberOrDate
}
switch r {
case '_':
return lexNumber
case 'e', 'E':
return lexFloat
case '.':
return lx.errorf("floats must start with a digit, not '.'")
}
return lx.errorf("expected a digit but got %q", r)
}
// lexNumberOrDate consumes either an integer, float or datetime.
func lexNumberOrDate(lx *lexer) stateFn {
r := lx.next()
if isDigit(r) {
return lexNumberOrDate
}
switch r {
case '-':
return lexDatetime
case '_':
return lexNumber
case '.', 'e', 'E':
return lexFloat
}
lx.backup()
lx.emit(itemInteger)
return lx.pop()
}
// lexDatetime consumes a Datetime, to a first approximation.
// The parser validates that it matches one of the accepted formats.
func lexDatetime(lx *lexer) stateFn {
r := lx.next()
if isDigit(r) {
return lexDatetime
}
switch r {
case '-', 'T', ':', '.', 'Z':
return lexDatetime
}
lx.backup()
lx.emit(itemDatetime)
return lx.pop()
}
// lexNumberStart consumes either an integer or a float. It assumes that a sign
// has already been read, but that *no* digits have been consumed.
// lexNumberStart will move to the appropriate integer or float states.
func lexNumberStart(lx *lexer) stateFn {
// We MUST see a digit. Even floats have to start with a digit.
r := lx.next()
if !isDigit(r) {
if r == '.' {
return lx.errorf("floats must start with a digit, not '.'")
}
return lx.errorf("expected a digit but got %q", r)
}
return lexNumber
}
// lexNumber consumes an integer or a float after seeing the first digit.
func lexNumber(lx *lexer) stateFn {
r := lx.next()
if isDigit(r) {
return lexNumber
}
switch r {
case '_':
return lexNumber
case '.', 'e', 'E':
return lexFloat
}
lx.backup()
lx.emit(itemInteger)
return lx.pop()
}
// lexFloat consumes the elements of a float. It allows any sequence of
// float-like characters, so floats emitted by the lexer are only a first
// approximation and must be validated by the parser.
func lexFloat(lx *lexer) stateFn {
r := lx.next()
if isDigit(r) {
return lexFloat
}
switch r {
case '_', '.', '-', '+', 'e', 'E':
return lexFloat
}
lx.backup()
lx.emit(itemFloat)
return lx.pop()
}
// lexBool consumes a bool string: 'true' or 'false'.
func lexBool(lx *lexer) stateFn {
var rs []rune
for {
r := lx.next()
if !unicode.IsLetter(r) {
lx.backup()
break
}
rs = append(rs, r)
}
s := string(rs)
switch s {
case "true", "false":
lx.emit(itemBool)
return lx.pop()
}
return lx.errorf("expected value but found %q instead", s)
}
// lexCommentStart begins the lexing of a comment. It will emit
// itemCommentStart and consume no characters, passing control to lexComment.
func lexCommentStart(lx *lexer) stateFn {
lx.ignore()
lx.emit(itemCommentStart)
return lexComment
}
// lexComment lexes an entire comment. It assumes that '#' has been consumed.
// It will consume *up to* the first newline character, and pass control
// back to the last state on the stack.
func lexComment(lx *lexer) stateFn {
r := lx.peek()
if isNL(r) || r == eof {
lx.emit(itemText)
return lx.pop()
}
lx.next()
return lexComment
}
// lexSkip ignores all slurped input and moves on to the next state.
func lexSkip(lx *lexer, nextState stateFn) stateFn {
return func(lx *lexer) stateFn {
lx.ignore()
return nextState
}
}
// isWhitespace returns true if `r` is a whitespace character according
// to the spec.
func isWhitespace(r rune) bool {
return r == '\t' || r == ' '
}
func isNL(r rune) bool {
return r == '\n' || r == '\r'
}
func isDigit(r rune) bool {
return r >= '0' && r <= '9'
}
func isHexadecimal(r rune) bool {
return (r >= '0' && r <= '9') ||
(r >= 'a' && r <= 'f') ||
(r >= 'A' && r <= 'F')
}
func isBareKeyChar(r rune) bool {
return (r >= 'A' && r <= 'Z') ||
(r >= 'a' && r <= 'z') ||
(r >= '0' && r <= '9') ||
r == '_' ||
r == '-'
}
func (itype itemType) String() string {
switch itype {
case itemError:
return "Error"
case itemNIL:
return "NIL"
case itemEOF:
return "EOF"
case itemText:
return "Text"
case itemString, itemRawString, itemMultilineString, itemRawMultilineString:
return "String"
case itemBool:
return "Bool"
case itemInteger:
return "Integer"
case itemFloat:
return "Float"
case itemDatetime:
return "DateTime"
case itemTableStart:
return "TableStart"
case itemTableEnd:
return "TableEnd"
case itemKeyStart:
return "KeyStart"
case itemArray:
return "Array"
case itemArrayEnd:
return "ArrayEnd"
case itemCommentStart:
return "CommentStart"
}
panic(fmt.Sprintf("BUG: Unknown type '%d'.", int(itype)))
}
func (item item) String() string {
return fmt.Sprintf("(%s, %s)", item.typ.String(), item.val)
}
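The whole lexer is built on the state-function pattern: each `stateFn` consumes input and returns the next state. A self-contained toy version of that pattern, reduced to a number/separator scanner (all names here are illustrative, not part of the package):

```go
package main

import (
	"fmt"
	"unicode"
)

// stateFn mirrors the lexer above: each state consumes runes and returns
// the next state; returning nil ends the loop.
type stateFn func(l *miniLexer) stateFn

type miniLexer struct {
	input []rune
	pos   int
	items []string
}

func (l *miniLexer) run(start stateFn) {
	for state := start; state != nil; {
		state = state(l)
	}
}

// lexNumber collects a run of digits and emits it as one item.
func lexNumber(l *miniLexer) stateFn {
	start := l.pos
	for l.pos < len(l.input) && unicode.IsDigit(l.input[l.pos]) {
		l.pos++
	}
	l.items = append(l.items, string(l.input[start:l.pos]))
	return lexSep
}

// lexSep skips a single ',' separator, or stops at end of input.
func lexSep(l *miniLexer) stateFn {
	if l.pos >= len(l.input) {
		return nil
	}
	l.pos++
	return lexNumber
}

func main() {
	l := &miniLexer{input: []rune("12,345,6")}
	l.run(lexNumber)
	fmt.Println(l.items) // [12 345 6]
}
```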

vendor/github.com/BurntSushi/toml/parse.go generated vendored Normal file

@@ -0,0 +1,592 @@
package toml
import (
"fmt"
"strconv"
"strings"
"time"
"unicode"
"unicode/utf8"
)
type parser struct {
mapping map[string]interface{}
types map[string]tomlType
lx *lexer
// A list of keys in the order that they appear in the TOML data.
ordered []Key
// the full key for the current hash in scope
context Key
// the base key name for everything except hashes
currentKey string
// rough approximation of line number
approxLine int
// A map of 'key.group.names' to whether they were created implicitly.
implicits map[string]bool
}
type parseError string
func (pe parseError) Error() string {
return string(pe)
}
func parse(data string) (p *parser, err error) {
defer func() {
if r := recover(); r != nil {
var ok bool
if err, ok = r.(parseError); ok {
return
}
panic(r)
}
}()
p = &parser{
mapping: make(map[string]interface{}),
types: make(map[string]tomlType),
lx: lex(data),
ordered: make([]Key, 0),
implicits: make(map[string]bool),
}
for {
item := p.next()
if item.typ == itemEOF {
break
}
p.topLevel(item)
}
return p, nil
}
func (p *parser) panicf(format string, v ...interface{}) {
msg := fmt.Sprintf("Near line %d (last key parsed '%s'): %s",
p.approxLine, p.current(), fmt.Sprintf(format, v...))
panic(parseError(msg))
}
func (p *parser) next() item {
it := p.lx.nextItem()
if it.typ == itemError {
p.panicf("%s", it.val)
}
return it
}
func (p *parser) bug(format string, v ...interface{}) {
panic(fmt.Sprintf("BUG: "+format+"\n\n", v...))
}
func (p *parser) expect(typ itemType) item {
it := p.next()
p.assertEqual(typ, it.typ)
return it
}
func (p *parser) assertEqual(expected, got itemType) {
if expected != got {
p.bug("Expected '%s' but got '%s'.", expected, got)
}
}
func (p *parser) topLevel(item item) {
switch item.typ {
case itemCommentStart:
p.approxLine = item.line
p.expect(itemText)
case itemTableStart:
kg := p.next()
p.approxLine = kg.line
var key Key
for ; kg.typ != itemTableEnd && kg.typ != itemEOF; kg = p.next() {
key = append(key, p.keyString(kg))
}
p.assertEqual(itemTableEnd, kg.typ)
p.establishContext(key, false)
p.setType("", tomlHash)
p.ordered = append(p.ordered, key)
case itemArrayTableStart:
kg := p.next()
p.approxLine = kg.line
var key Key
for ; kg.typ != itemArrayTableEnd && kg.typ != itemEOF; kg = p.next() {
key = append(key, p.keyString(kg))
}
p.assertEqual(itemArrayTableEnd, kg.typ)
p.establishContext(key, true)
p.setType("", tomlArrayHash)
p.ordered = append(p.ordered, key)
case itemKeyStart:
kname := p.next()
p.approxLine = kname.line
p.currentKey = p.keyString(kname)
val, typ := p.value(p.next())
p.setValue(p.currentKey, val)
p.setType(p.currentKey, typ)
p.ordered = append(p.ordered, p.context.add(p.currentKey))
p.currentKey = ""
default:
p.bug("Unexpected type at top level: %s", item.typ)
}
}
// Gets a string for a key (or part of a key in a table name).
func (p *parser) keyString(it item) string {
switch it.typ {
case itemText:
return it.val
case itemString, itemMultilineString,
itemRawString, itemRawMultilineString:
s, _ := p.value(it)
return s.(string)
default:
p.bug("Unexpected key type: %s", it.typ)
panic("unreachable")
}
}
// value translates an expected value from the lexer into a Go value wrapped
// as an empty interface.
func (p *parser) value(it item) (interface{}, tomlType) {
switch it.typ {
case itemString:
return p.replaceEscapes(it.val), p.typeOfPrimitive(it)
case itemMultilineString:
trimmed := stripFirstNewline(stripEscapedWhitespace(it.val))
return p.replaceEscapes(trimmed), p.typeOfPrimitive(it)
case itemRawString:
return it.val, p.typeOfPrimitive(it)
case itemRawMultilineString:
return stripFirstNewline(it.val), p.typeOfPrimitive(it)
case itemBool:
switch it.val {
case "true":
return true, p.typeOfPrimitive(it)
case "false":
return false, p.typeOfPrimitive(it)
}
p.bug("Expected boolean value, but got '%s'.", it.val)
case itemInteger:
if !numUnderscoresOK(it.val) {
p.panicf("Invalid integer %q: underscores must be surrounded by digits",
it.val)
}
val := strings.Replace(it.val, "_", "", -1)
num, err := strconv.ParseInt(val, 10, 64)
if err != nil {
// Distinguish integer values. Normally, it'd be a bug if the lexer
// provides an invalid integer, but it's possible that the number is
// out of range of valid values (which the lexer cannot determine).
// So mark the former as a bug but the latter as a legitimate user
// error.
if e, ok := err.(*strconv.NumError); ok &&
e.Err == strconv.ErrRange {
p.panicf("Integer '%s' is out of the range of 64-bit "+
"signed integers.", it.val)
} else {
p.bug("Expected integer value, but got '%s'.", it.val)
}
}
return num, p.typeOfPrimitive(it)
case itemFloat:
parts := strings.FieldsFunc(it.val, func(r rune) bool {
switch r {
case '.', 'e', 'E':
return true
}
return false
})
for _, part := range parts {
if !numUnderscoresOK(part) {
p.panicf("Invalid float %q: underscores must be "+
"surrounded by digits", it.val)
}
}
if !numPeriodsOK(it.val) {
// As a special case, numbers like '123.' or '1.e2',
// which are valid as far as Go/strconv are concerned,
// must be rejected because TOML says that a fractional
// part consists of '.' followed by 1+ digits.
p.panicf("Invalid float %q: '.' must be followed "+
"by one or more digits", it.val)
}
val := strings.Replace(it.val, "_", "", -1)
num, err := strconv.ParseFloat(val, 64)
if err != nil {
if e, ok := err.(*strconv.NumError); ok &&
e.Err == strconv.ErrRange {
p.panicf("Float '%s' is out of the range of 64-bit "+
"IEEE-754 floating-point numbers.", it.val)
} else {
p.panicf("Invalid float value: %q", it.val)
}
}
return num, p.typeOfPrimitive(it)
case itemDatetime:
var t time.Time
var ok bool
var err error
for _, format := range []string{
"2006-01-02T15:04:05Z07:00",
"2006-01-02T15:04:05",
"2006-01-02",
} {
t, err = time.ParseInLocation(format, it.val, time.Local)
if err == nil {
ok = true
break
}
}
if !ok {
p.panicf("Invalid TOML Datetime: %q.", it.val)
}
return t, p.typeOfPrimitive(it)
case itemArray:
array := make([]interface{}, 0)
types := make([]tomlType, 0)
for it = p.next(); it.typ != itemArrayEnd; it = p.next() {
if it.typ == itemCommentStart {
p.expect(itemText)
continue
}
val, typ := p.value(it)
array = append(array, val)
types = append(types, typ)
}
return array, p.typeOfArray(types)
case itemInlineTableStart:
var (
hash = make(map[string]interface{})
outerContext = p.context
outerKey = p.currentKey
)
p.context = append(p.context, p.currentKey)
p.currentKey = ""
for it := p.next(); it.typ != itemInlineTableEnd; it = p.next() {
if it.typ != itemKeyStart {
p.bug("Expected key start but instead found %q, around line %d",
it.val, p.approxLine)
}
if it.typ == itemCommentStart {
p.expect(itemText)
continue
}
// retrieve key
k := p.next()
p.approxLine = k.line
kname := p.keyString(k)
// retrieve value
p.currentKey = kname
val, typ := p.value(p.next())
// make sure we keep metadata up to date
p.setType(kname, typ)
p.ordered = append(p.ordered, p.context.add(p.currentKey))
hash[kname] = val
}
p.context = outerContext
p.currentKey = outerKey
return hash, tomlHash
}
p.bug("Unexpected value type: %s", it.typ)
panic("unreachable")
}
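The three layouts tried for `itemDatetime` above can be exercised through `toml.Decode`; a hedged sketch (assuming upstream-compatible behavior of this vendored copy):

```go
package main

import (
	"fmt"
	"time"

	"github.com/BurntSushi/toml"
)

func main() {
	var v struct{ T time.Time }
	for _, doc := range []string{
		`T = 1979-05-27T07:32:00Z`, // full RFC 3339 with offset
		`T = 1979-05-27T07:32:00`,  // no offset: interpreted in time.Local
		`T = 1979-05-27`,           // date only
	} {
		if _, err := toml.Decode(doc, &v); err != nil {
			panic(err)
		}
		fmt.Println(v.T)
	}
}
```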
// numUnderscoresOK checks whether each underscore in s is surrounded by
// characters that are not underscores.
func numUnderscoresOK(s string) bool {
accept := false
for _, r := range s {
if r == '_' {
if !accept {
return false
}
accept = false
continue
}
accept = true
}
return accept
}
// numPeriodsOK checks whether every period in s is followed by a digit.
func numPeriodsOK(s string) bool {
period := false
for _, r := range s {
if period && !isDigit(r) {
return false
}
period = r == '.'
}
return !period
}
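numUnderscoresOK and numPeriodsOK are the parser-side guards behind the number rules; a sketch of how they surface through `toml.Decode` (assuming upstream-compatible behavior):

```go
package main

import (
	"fmt"

	"github.com/BurntSushi/toml"
)

func main() {
	var v struct{ N int64 }
	_, err := toml.Decode(`N = 1_000_000`, &v) // every '_' surrounded by digits
	fmt.Println(v.N, err)                      // 1000000 <nil>

	_, err = toml.Decode(`N = 1__000`, &v) // consecutive '_': numUnderscoresOK rejects
	fmt.Println(err != nil)                // true

	var f struct{ F float64 }
	_, err = toml.Decode(`F = 123.`, &f) // numPeriodsOK: '.' needs a digit after it
	fmt.Println(err != nil)              // true
}
```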
// establishContext sets the current context of the parser,
// where the context is either a hash or an array of hashes. Which one is
// set depends on the value of the `array` parameter.
//
// Establishing the context also makes sure that the key isn't a duplicate, and
// will create implicit hashes automatically.
func (p *parser) establishContext(key Key, array bool) {
var ok bool
// Always start at the top level and drill down for our context.
hashContext := p.mapping
keyContext := make(Key, 0)
// We only need implicit hashes for key[0:-1]
for _, k := range key[0 : len(key)-1] {
_, ok = hashContext[k]
keyContext = append(keyContext, k)
// No key? Make an implicit hash and move on.
if !ok {
p.addImplicit(keyContext)
hashContext[k] = make(map[string]interface{})
}
// If the hash context is actually an array of tables, then set
// the hash context to the last element in that array.
//
// Otherwise, it had better be a table, since this MUST be a key group (by
// virtue of it not being the last element in a key).
switch t := hashContext[k].(type) {
case []map[string]interface{}:
hashContext = t[len(t)-1]
case map[string]interface{}:
hashContext = t
default:
p.panicf("Key '%s' was already created as a hash.", keyContext)
}
}
p.context = keyContext
if array {
// If this is the first element for this array, then allocate a new
// list of tables for it.
k := key[len(key)-1]
if _, ok := hashContext[k]; !ok {
hashContext[k] = make([]map[string]interface{}, 0, 5)
}
// Add a new table. But make sure the key hasn't already been used
// for something else.
if hash, ok := hashContext[k].([]map[string]interface{}); ok {
hashContext[k] = append(hash, make(map[string]interface{}))
} else {
p.panicf("Key '%s' was already created and cannot be used as "+
"an array.", keyContext)
}
} else {
p.setValue(key[len(key)-1], make(map[string]interface{}))
}
p.context = append(p.context, key[len(key)-1])
}
// setValue sets the given key to the given value in the current context.
// It makes sure that the key hasn't already been defined, and accounts for
// implicit key groups.
func (p *parser) setValue(key string, value interface{}) {
var tmpHash interface{}
var ok bool
hash := p.mapping
keyContext := make(Key, 0)
for _, k := range p.context {
keyContext = append(keyContext, k)
if tmpHash, ok = hash[k]; !ok {
p.bug("Context for key '%s' has not been established.", keyContext)
}
switch t := tmpHash.(type) {
case []map[string]interface{}:
// The context is a table of hashes. Pick the most recent table
// defined as the current hash.
hash = t[len(t)-1]
case map[string]interface{}:
hash = t
default:
p.bug("Expected hash to have type 'map[string]interface{}', but "+
"it has '%T' instead.", tmpHash)
}
}
keyContext = append(keyContext, key)
if _, ok := hash[key]; ok {
// Typically, if the given key has already been set, then we have
// to raise an error since duplicate keys are disallowed. However,
// it's possible that a key was previously defined implicitly. In this
// case, it is allowed to be redefined concretely. (See the
// `tests/valid/implicit-and-explicit-after.toml` test in `toml-test`.)
//
// But we have to make sure to stop marking it as an implicit. (So that
// another redefinition provokes an error.)
//
// Note that since it has already been defined (as a hash), we don't
// want to overwrite it. So our business is done.
if p.isImplicit(keyContext) {
p.removeImplicit(keyContext)
return
}
// Otherwise, we have a concrete key trying to override a previous
// key, which is *always* wrong.
p.panicf("Key '%s' has already been defined.", keyContext)
}
hash[key] = value
}
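The implicit-key bookkeeping above is what allows `[a.b]` to precede `[a]` exactly once; a sketch of both the allowed and the rejected case (assuming upstream-compatible behavior):

```go
package main

import (
	"fmt"

	"github.com/BurntSushi/toml"
)

func main() {
	var v map[string]interface{}
	// [a.b] implicitly creates table "a"; defining [a] afterwards is
	// allowed exactly once, because setValue un-marks the implicit key.
	_, err := toml.Decode("[a.b]\nx = 1\n[a]\ny = 2\n", &v)
	fmt.Println(err) // <nil>

	// Redefining a concretely defined table is always an error.
	_, err = toml.Decode("[a]\nx = 1\n[a]\ny = 2\n", &v)
	fmt.Println(err != nil) // true
}
```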
// setType sets the type of a particular value at a given key.
// It should be called immediately AFTER setValue.
//
// Note that if `key` is empty, then the type given will be applied to the
// current context (which is either a table or an array of tables).
func (p *parser) setType(key string, typ tomlType) {
keyContext := make(Key, 0, len(p.context)+1)
for _, k := range p.context {
keyContext = append(keyContext, k)
}
if len(key) > 0 { // allow type setting for hashes
keyContext = append(keyContext, key)
}
p.types[keyContext.String()] = typ
}
// addImplicit sets the given Key as having been created implicitly.
func (p *parser) addImplicit(key Key) {
p.implicits[key.String()] = true
}
// removeImplicit stops tagging the given key as having been implicitly
// created.
func (p *parser) removeImplicit(key Key) {
p.implicits[key.String()] = false
}
// isImplicit returns true if the key group pointed to by the key was created
// implicitly.
func (p *parser) isImplicit(key Key) bool {
return p.implicits[key.String()]
}
// current returns the full key name of the current context.
func (p *parser) current() string {
if len(p.currentKey) == 0 {
return p.context.String()
}
if len(p.context) == 0 {
return p.currentKey
}
return fmt.Sprintf("%s.%s", p.context, p.currentKey)
}
func stripFirstNewline(s string) string {
if len(s) == 0 || s[0] != '\n' {
return s
}
return s[1:]
}
func stripEscapedWhitespace(s string) string {
esc := strings.Split(s, "\\\n")
if len(esc) > 1 {
for i := 1; i < len(esc); i++ {
esc[i] = strings.TrimLeftFunc(esc[i], unicode.IsSpace)
}
}
return strings.Join(esc, "")
}
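stripFirstNewline and stripEscapedWhitespace combine so that a multiline string drops its leading newline and collapses backslash-newline continuations; a sketch, assuming upstream-compatible behavior:

```go
package main

import (
	"fmt"

	"github.com/BurntSushi/toml"
)

func main() {
	var v struct{ S string }
	// TOML input:
	// S = """
	// line one \
	//     continued"""
	doc := "S = \"\"\"\nline one \\\n    continued\"\"\""
	if _, err := toml.Decode(doc, &v); err != nil {
		panic(err)
	}
	// The leading newline is stripped and the backslash-newline plus the
	// following indentation collapse away.
	fmt.Printf("%q\n", v.S) // "line one continued"
}
```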
func (p *parser) replaceEscapes(str string) string {
var replaced []rune
s := []byte(str)
r := 0
for r < len(s) {
if s[r] != '\\' {
c, size := utf8.DecodeRune(s[r:])
r += size
replaced = append(replaced, c)
continue
}
r += 1
if r >= len(s) {
p.bug("Escape sequence at end of string.")
return ""
}
switch s[r] {
default:
p.bug("Expected valid escape code after \\, but got %q.", s[r])
return ""
case 'b':
replaced = append(replaced, rune(0x0008))
r += 1
case 't':
replaced = append(replaced, rune(0x0009))
r += 1
case 'n':
replaced = append(replaced, rune(0x000A))
r += 1
case 'f':
replaced = append(replaced, rune(0x000C))
r += 1
case 'r':
replaced = append(replaced, rune(0x000D))
r += 1
case '"':
replaced = append(replaced, rune(0x0022))
r += 1
case '\\':
replaced = append(replaced, rune(0x005C))
r += 1
case 'u':
// At this point, we know we have a Unicode escape of the form
// `uXXXX` at [r, r+5). (Because the lexer guarantees this
// for us.)
escaped := p.asciiEscapeToUnicode(s[r+1 : r+5])
replaced = append(replaced, escaped)
r += 5
case 'U':
// At this point, we know we have a Unicode escape of the form
// `UXXXXXXXX` at [r, r+9). (Because the lexer guarantees this
// for us.)
escaped := p.asciiEscapeToUnicode(s[r+1 : r+9])
replaced = append(replaced, escaped)
r += 9
}
}
return string(replaced)
}
func (p *parser) asciiEscapeToUnicode(bs []byte) rune {
s := string(bs)
hex, err := strconv.ParseUint(strings.ToLower(s), 16, 32)
if err != nil {
p.bug("Could not parse '%s' as a hexadecimal number, but the "+
"lexer claims it's OK: %s", s, err)
}
if !utf8.ValidRune(rune(hex)) {
p.panicf("Escaped character '\\u%s' is not valid UTF-8.", s)
}
return rune(hex)
}
func isStringType(ty itemType) bool {
return ty == itemString || ty == itemMultilineString ||
ty == itemRawString || ty == itemRawMultilineString
}

vendor/github.com/BurntSushi/toml/type_check.go generated vendored Normal file

@@ -0,0 +1,91 @@
package toml
// tomlType represents any Go type that corresponds to a TOML type.
// While the first draft of the TOML spec has a simplistic type system that
// probably doesn't need this level of sophistication, we seem to be moving
// toward adding real composite types.
type tomlType interface {
typeString() string
}
// typeEqual accepts any two types and returns true if they are equal.
func typeEqual(t1, t2 tomlType) bool {
if t1 == nil || t2 == nil {
return false
}
return t1.typeString() == t2.typeString()
}
func typeIsHash(t tomlType) bool {
return typeEqual(t, tomlHash) || typeEqual(t, tomlArrayHash)
}
type tomlBaseType string
func (btype tomlBaseType) typeString() string {
return string(btype)
}
func (btype tomlBaseType) String() string {
return btype.typeString()
}
var (
tomlInteger tomlBaseType = "Integer"
tomlFloat tomlBaseType = "Float"
tomlDatetime tomlBaseType = "Datetime"
tomlString tomlBaseType = "String"
tomlBool tomlBaseType = "Bool"
tomlArray tomlBaseType = "Array"
tomlHash tomlBaseType = "Hash"
tomlArrayHash tomlBaseType = "ArrayHash"
)
// typeOfPrimitive returns a tomlType of any primitive value in TOML.
// Primitive values are: Integer, Float, Datetime, String and Bool.
//
// Passing a lexer item other than the following will cause a BUG message
// to occur: itemString, itemBool, itemInteger, itemFloat, itemDatetime.
func (p *parser) typeOfPrimitive(lexItem item) tomlType {
switch lexItem.typ {
case itemInteger:
return tomlInteger
case itemFloat:
return tomlFloat
case itemDatetime:
return tomlDatetime
case itemString:
return tomlString
case itemMultilineString:
return tomlString
case itemRawString:
return tomlString
case itemRawMultilineString:
return tomlString
case itemBool:
return tomlBool
}
p.bug("Cannot infer primitive type of lex item '%s'.", lexItem)
panic("unreachable")
}
// typeOfArray returns a tomlType for an array given a list of types of its
// values.
//
// In the current spec, if an array is homogeneous, then its type is always
// "Array". If the array is not homogeneous, an error is generated.
func (p *parser) typeOfArray(types []tomlType) tomlType {
// Empty arrays are cool.
if len(types) == 0 {
return tomlArray
}
theType := types[0]
for _, t := range types[1:] {
if !typeEqual(theType, t) {
p.panicf("Array contains values of type '%s' and '%s', but "+
"arrays must be homogeneous.", theType, t)
}
}
return tomlArray
}
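In other words, arrays may be empty or homogeneous, and anything else fails at decode time; a short sketch (assuming upstream-compatible behavior):

```go
package main

import (
	"fmt"

	"github.com/BurntSushi/toml"
)

func main() {
	var v struct{ A []interface{} }
	_, err := toml.Decode(`A = [1, 2, 3]`, &v) // homogeneous: OK
	fmt.Println(v.A, err)                      // [1 2 3] <nil>

	_, err = toml.Decode(`A = [1, "two"]`, &v) // mixed types: typeOfArray errors
	fmt.Println(err != nil)                    // true
}
```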

vendor/github.com/BurntSushi/toml/type_fields.go generated vendored Normal file

@@ -0,0 +1,242 @@
package toml
// Struct field handling is adapted from code in encoding/json:
//
// Copyright 2010 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the Go distribution.
import (
"reflect"
"sort"
"sync"
)
// A field represents a single field found in a struct.
type field struct {
name string // the name of the field (`toml` tag included)
tag bool // whether field has a `toml` tag
index []int // represents the depth of an anonymous field
typ reflect.Type // the type of the field
}
// byName sorts field by name, breaking ties with depth,
// then breaking ties with "name came from toml tag", then
// breaking ties with index sequence.
type byName []field
func (x byName) Len() int { return len(x) }
func (x byName) Swap(i, j int) { x[i], x[j] = x[j], x[i] }
func (x byName) Less(i, j int) bool {
if x[i].name != x[j].name {
return x[i].name < x[j].name
}
if len(x[i].index) != len(x[j].index) {
return len(x[i].index) < len(x[j].index)
}
if x[i].tag != x[j].tag {
return x[i].tag
}
return byIndex(x).Less(i, j)
}
// byIndex sorts field by index sequence.
type byIndex []field
func (x byIndex) Len() int { return len(x) }
func (x byIndex) Swap(i, j int) { x[i], x[j] = x[j], x[i] }
func (x byIndex) Less(i, j int) bool {
for k, xik := range x[i].index {
if k >= len(x[j].index) {
return false
}
if xik != x[j].index[k] {
return xik < x[j].index[k]
}
}
return len(x[i].index) < len(x[j].index)
}
// typeFields returns a list of fields that TOML should recognize for the given
// type. The algorithm is breadth-first search over the set of structs to
// include - the top struct and then any reachable anonymous structs.
func typeFields(t reflect.Type) []field {
// Anonymous fields to explore at the current level and the next.
current := []field{}
next := []field{{typ: t}}
// Count of queued names for current level and the next.
count := map[reflect.Type]int{}
nextCount := map[reflect.Type]int{}
// Types already visited at an earlier level.
visited := map[reflect.Type]bool{}
// Fields found.
var fields []field
for len(next) > 0 {
current, next = next, current[:0]
count, nextCount = nextCount, map[reflect.Type]int{}
for _, f := range current {
if visited[f.typ] {
continue
}
visited[f.typ] = true
// Scan f.typ for fields to include.
for i := 0; i < f.typ.NumField(); i++ {
sf := f.typ.Field(i)
if sf.PkgPath != "" && !sf.Anonymous { // unexported
continue
}
opts := getOptions(sf.Tag)
if opts.skip {
continue
}
index := make([]int, len(f.index)+1)
copy(index, f.index)
index[len(f.index)] = i
ft := sf.Type
if ft.Name() == "" && ft.Kind() == reflect.Ptr {
// Follow pointer.
ft = ft.Elem()
}
// Record found field and index sequence.
if opts.name != "" || !sf.Anonymous || ft.Kind() != reflect.Struct {
tagged := opts.name != ""
name := opts.name
if name == "" {
name = sf.Name
}
fields = append(fields, field{name, tagged, index, ft})
if count[f.typ] > 1 {
// If there were multiple instances, add a second,
// so that the annihilation code will see a duplicate.
// It only cares about the distinction between 1 or 2,
// so don't bother generating any more copies.
fields = append(fields, fields[len(fields)-1])
}
continue
}
// Record new anonymous struct to explore in next round.
nextCount[ft]++
if nextCount[ft] == 1 {
f := field{name: ft.Name(), index: index, typ: ft}
next = append(next, f)
}
}
}
}
sort.Sort(byName(fields))
// Delete all fields that are hidden by the Go rules for embedded fields,
// except that fields with TOML tags are promoted.
// The fields are sorted in primary order of name, secondary order
// of field index length. Loop over names; for each name, delete
// hidden fields by choosing the one dominant field that survives.
out := fields[:0]
for advance, i := 0, 0; i < len(fields); i += advance {
// One iteration per name.
// Find the sequence of fields with the name of this first field.
fi := fields[i]
name := fi.name
for advance = 1; i+advance < len(fields); advance++ {
fj := fields[i+advance]
if fj.name != name {
break
}
}
if advance == 1 { // Only one field with this name
out = append(out, fi)
continue
}
dominant, ok := dominantField(fields[i : i+advance])
if ok {
out = append(out, dominant)
}
}
fields = out
sort.Sort(byIndex(fields))
return fields
}
// dominantField looks through the fields, all of which are known to
// have the same name, to find the single field that dominates the
// others using Go's embedding rules, modified by the presence of
// TOML tags. If there are multiple top-level fields, the boolean
// will be false: This condition is an error in Go and we skip all
// the fields.
func dominantField(fields []field) (field, bool) {
// The fields are sorted in increasing index-length order. The winner
// must therefore be one with the shortest index length. Drop all
// longer entries, which is easy: just truncate the slice.
length := len(fields[0].index)
tagged := -1 // Index of first tagged field.
for i, f := range fields {
if len(f.index) > length {
fields = fields[:i]
break
}
if f.tag {
if tagged >= 0 {
// Multiple tagged fields at the same level: conflict.
// Return no field.
return field{}, false
}
tagged = i
}
}
if tagged >= 0 {
return fields[tagged], true
}
// All remaining fields have the same length. If there's more than one,
// we have a conflict (two fields named "X" at the same level) and we
// return no field.
if len(fields) > 1 {
return field{}, false
}
return fields[0], true
}
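A concrete instance of these dominance rules: a field at a shallower embedding depth wins over an embedded field of the same name. A hedged sketch (the struct names are illustrative):

```go
package main

import (
	"fmt"

	"github.com/BurntSushi/toml"
)

type Base struct{ ID string }

type Doc struct {
	Base        // embedded: its ID field sits at depth 1
	ID   string // depth 0: dominates Base.ID per dominantField
}

func main() {
	var d Doc
	if _, err := toml.Decode(`ID = "top"`, &d); err != nil {
		panic(err)
	}
	fmt.Printf("%q %q\n", d.ID, d.Base.ID) // "top" ""
}
```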
var fieldCache struct {
sync.RWMutex
m map[reflect.Type][]field
}
// cachedTypeFields is like typeFields but uses a cache to avoid repeated work.
func cachedTypeFields(t reflect.Type) []field {
fieldCache.RLock()
f := fieldCache.m[t]
fieldCache.RUnlock()
if f != nil {
return f
}
// Compute fields without lock.
// Might duplicate effort but won't hold other computations back.
f = typeFields(t)
if f == nil {
f = []field{}
}
fieldCache.Lock()
if fieldCache.m == nil {
fieldCache.m = map[reflect.Type][]field{}
}
fieldCache.m[t] = f
fieldCache.Unlock()
return f
}


@@ -20,28 +20,50 @@ to another, for example docker container images to OCI images. It also allows
you to copy container images between various registries, possibly converting
them as necessary, and to sign and verify images.
## Command-line usage
The containers/image project is only a library with no user interface;
you can either incorporate it into your Go programs, or use the `skopeo` tool:
The [skopeo](https://github.com/projectatomic/skopeo) tool uses the
containers/image library and takes advantage of its many features.
containers/image library and takes advantage of many of its features,
e.g. `skopeo copy` exposes the `containers/image/copy.Image` functionality.
## Dependencies
Dependencies that this library prefers will not be found in the `vendor`
directory. This is so you can make well-informed decisions about which
libraries you should use with this package in your own projects.
This library does not ship a committed version of its dependencies in a `vendor`
subdirectory. This is so you can make well-informed decisions about which
libraries you should use with this package in your own projects, and because
types defined in the `vendor` directory would be impossible to use from your projects.
What this project tests against dependencies-wise is located
[here](https://github.com/containers/image/blob/master/vendor.conf).
[in vendor.conf](https://github.com/containers/image/blob/master/vendor.conf).
## Building
For ordinary use, `go build ./...` is sufficient.
If you want to see what the library can do, or an example of how it is called,
consider starting with the [skopeo](https://github.com/projectatomic/skopeo) tool
instead.
When developing this library, please use `make` to take advantage of the tests and validation.
To integrate this library into your project, put it into `$GOPATH` or use
your preferred vendoring tool to include a copy in your project.
Ensure that the dependencies documented [in vendor.conf](https://github.com/containers/image/blob/master/vendor.conf)
are also available
(using those exact versions or different versions of your choosing).
Optionally, you can use the `containers_image_openpgp` build tag (using `go build -tags …`, or `make … BUILDTAGS=…`).
This library, by default, also depends on the GpgME C library. Either install it:
```sh
Fedora$ dnf install gpgme-devel libassuan-devel
macOS$ brew install gpgme
```
or use the `containers_image_openpgp` build tag (e.g. using `go build -tags …`)
This will use a Golang-only OpenPGP implementation for signature verification instead of the default cgo/gpgme-based implementation;
the primary downside is that creating new signatures with the Golang-only implementation is not supported.
## Contributing
When developing this library, please use `make` (or `make … BUILDTAGS=…`) to take advantage of the tests and validation.
## License
ASL 2.0


@@ -7,13 +7,14 @@ import (
"io"
"io/ioutil"
"reflect"
"runtime"
"strings"
"time"
pb "gopkg.in/cheggaaa/pb.v1"
"github.com/Sirupsen/logrus"
"github.com/containers/image/image"
"github.com/containers/image/manifest"
"github.com/containers/image/pkg/compression"
"github.com/containers/image/signature"
"github.com/containers/image/transports"
@@ -22,11 +23,6 @@ import (
"github.com/pkg/errors"
)
// preferredManifestMIMETypes lists manifest MIME types in order of our preference, if we can't use the original manifest and need to convert.
// Prefer v2s2 to v2s1 because v2s2 does not need to be changed when uploading to a different location.
// Include v2s1 signed but not v2s1 unsigned, because docker/distribution requires a signature even if the unsigned MIME type is used.
var preferredManifestMIMETypes = []string{manifest.DockerV2Schema2MediaType, manifest.DockerV2Schema1SignedMediaType}
type digestingReader struct {
source io.Reader
digester digest.Digester
@@ -162,6 +158,10 @@ func Image(policyContext *signature.PolicyContext, destRef, srcRef types.ImageRe
}
}()
if err := checkImageDestinationForCurrentRuntimeOS(src, dest); err != nil {
return err
}
if src.IsMultiImage() {
return errors.Errorf("can not copy %s: manifest contains multiple images", transports.ImageName(srcRef))
}
@@ -186,8 +186,16 @@ func Image(policyContext *signature.PolicyContext, destRef, srcRef types.ImageRe
canModifyManifest := len(sigs) == 0
manifestUpdates := types.ManifestUpdateOptions{}
manifestUpdates.InformationOnly.Destination = dest
if err := determineManifestConversion(&manifestUpdates, src, destSupportedManifestMIMETypes, canModifyManifest); err != nil {
if err := updateEmbeddedDockerReference(&manifestUpdates, dest, src, canModifyManifest); err != nil {
return err
}
// We compute preferredManifestMIMEType only to show it in error messages.
// Without having to add this context in an error message, we would be happy enough to know only that no conversion is needed.
preferredManifestMIMEType, otherManifestMIMETypeCandidates, err := determineManifestConversion(&manifestUpdates, src, destSupportedManifestMIMETypes, canModifyManifest)
if err != nil {
return err
}
@@ -210,54 +218,58 @@ func Image(policyContext *signature.PolicyContext, destRef, srcRef types.ImageRe
return err
}
pendingImage := src
if !reflect.DeepEqual(manifestUpdates, types.ManifestUpdateOptions{InformationOnly: manifestUpdates.InformationOnly}) {
if !canModifyManifest {
return errors.Errorf("Internal error: copy needs an updated manifest but that was known to be forbidden")
}
manifestUpdates.InformationOnly.Destination = dest
pendingImage, err = src.UpdatedImage(manifestUpdates)
if err != nil {
return errors.Wrap(err, "Error creating an updated image manifest")
}
}
manifest, _, err := pendingImage.Manifest()
// With docker/distribution registries we do not know whether the registry accepts schema2 or schema1 only;
// and at least with the OpenShift registry "acceptschema2" option, there is no way to detect the support
// without actually trying to upload something and getting a types.ManifestTypeRejectedError.
// So, try the preferred manifest MIME type. If the process succeeds, fine…
manifest, err := ic.copyUpdatedConfigAndManifest()
if err != nil {
return errors.Wrap(err, "Error reading manifest")
}
logrus.Debugf("Writing manifest using preferred type %s failed: %v", preferredManifestMIMEType, err)
// … if it fails, _and_ the failure is because the manifest is rejected, we may have other options.
if _, isManifestRejected := errors.Cause(err).(types.ManifestTypeRejectedError); !isManifestRejected || len(otherManifestMIMETypeCandidates) == 0 {
// We don't have other options.
// In principle the code below would handle this as well, but the resulting error message is fairly ugly.
// Don't bother the user with MIME types if we have no choice.
return err
}
// If the original MIME type is acceptable, determineManifestConversion always uses it as preferredManifestMIMEType.
// So if we are here, we will definitely be trying to convert the manifest.
// With !canModifyManifest, that would just be a string of repeated failures for the same reason,
// so let's bail out early and with a better error message.
if !canModifyManifest {
return errors.Wrap(err, "Writing manifest failed (and converting it is not possible)")
}
if err := ic.copyConfig(pendingImage); err != nil {
return err
// errs is a list of errors when trying various manifest types. Also serves as an "upload succeeded" flag when set to nil.
errs := []string{fmt.Sprintf("%s(%v)", preferredManifestMIMEType, err)}
for _, manifestMIMEType := range otherManifestMIMETypeCandidates {
logrus.Debugf("Trying to use manifest type %s…", manifestMIMEType)
manifestUpdates.ManifestMIMEType = manifestMIMEType
attemptedManifest, err := ic.copyUpdatedConfigAndManifest()
if err != nil {
logrus.Debugf("Upload of manifest type %s failed: %v", manifestMIMEType, err)
errs = append(errs, fmt.Sprintf("%s(%v)", manifestMIMEType, err))
continue
}
// We have successfully uploaded a manifest.
manifest = attemptedManifest
errs = nil // Mark this as a success so that we don't abort below.
break
}
if errs != nil {
return fmt.Errorf("Uploading manifest failed, attempted the following formats: %s", strings.Join(errs, ", "))
}
}
if options.SignBy != "" {
mech, err := signature.NewGPGSigningMechanism()
newSig, err := createSignature(dest, manifest, options.SignBy, reportWriter)
if err != nil {
return errors.Wrap(err, "Error initializing GPG")
}
defer mech.Close()
if err := mech.SupportsSigning(); err != nil {
return errors.Wrap(err, "Signing not supported")
}
dockerReference := dest.Reference().DockerReference()
if dockerReference == nil {
return errors.Errorf("Cannot determine canonical Docker reference for destination %s", transports.ImageName(dest.Reference()))
}
writeReport("Signing manifest\n")
newSig, err := signature.SignDockerManifest(manifest, dockerReference.String(), mech, options.SignBy)
if err != nil {
return errors.Wrap(err, "Error creating signature")
return err
}
sigs = append(sigs, newSig)
}
writeReport("Writing manifest to image destination\n")
if err := dest.PutManifest(manifest); err != nil {
return errors.Wrap(err, "Error writing manifest")
}
writeReport("Storing signatures\n")
if err := dest.PutSignatures(sigs); err != nil {
return errors.Wrap(err, "Error writing signatures")
@@ -270,6 +282,40 @@ func Image(policyContext *signature.PolicyContext, destRef, srcRef types.ImageRe
return nil
}
func checkImageDestinationForCurrentRuntimeOS(src types.Image, dest types.ImageDestination) error {
if dest.MustMatchRuntimeOS() {
c, err := src.OCIConfig()
if err != nil {
return errors.Wrapf(err, "Error parsing image configuration")
}
osErr := fmt.Errorf("image operating system %q cannot be used on %q", c.OS, runtime.GOOS)
if runtime.GOOS == "windows" && c.OS == "linux" {
return osErr
} else if runtime.GOOS != "windows" && c.OS == "windows" {
return osErr
}
}
return nil
}
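Note that the gate above only rejects the two cross-OS pairs involving Windows; a standalone mirror of that predicate for illustration (the function name is a stand-in):

```go
package main

import "fmt"

// osCompatible mirrors checkImageDestinationForCurrentRuntimeOS above:
// only linux-on-windows and windows-on-non-windows pairs are rejected.
func osCompatible(imageOS, runtimeOS string) bool {
	if runtimeOS == "windows" && imageOS == "linux" {
		return false
	}
	if runtimeOS != "windows" && imageOS == "windows" {
		return false
	}
	return true
}

func main() {
	fmt.Println(osCompatible("linux", "linux"))   // true
	fmt.Println(osCompatible("windows", "linux")) // false
	fmt.Println(osCompatible("linux", "windows")) // false
}
```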
// updateEmbeddedDockerReference handles the Docker reference embedded in Docker schema1 manifests.
func updateEmbeddedDockerReference(manifestUpdates *types.ManifestUpdateOptions, dest types.ImageDestination, src types.Image, canModifyManifest bool) error {
destRef := dest.Reference().DockerReference()
if destRef == nil {
return nil // Destination does not care about Docker references
}
if !src.EmbeddedDockerReferenceConflicts(destRef) {
return nil // No reference embedded in the manifest, or it matches destRef already.
}
if !canModifyManifest {
return errors.Errorf("Copying a schema1 image with an embedded Docker reference to %s (Docker reference %s) would invalidate existing signatures. Explicitly enable signature removal to proceed anyway",
transports.ImageName(dest.Reference()), destRef.String())
}
manifestUpdates.EmbeddedDockerReference = destRef
return nil
}
// copyLayers copies layers from src/rawSource to dest, using and updating ic.manifestUpdates if necessary and ic.canModifyManifest.
func (ic *imageCopier) copyLayers() error {
srcInfos := ic.src.LayerInfos()
@@ -322,6 +368,45 @@ func layerDigestsDiffer(a, b []types.BlobInfo) bool {
return false
}
// copyUpdatedConfigAndManifest updates the image per ic.manifestUpdates, if necessary,
// stores the resulting config and manifest to the destination, and returns the stored manifest.
func (ic *imageCopier) copyUpdatedConfigAndManifest() ([]byte, error) {
pendingImage := ic.src
if !reflect.DeepEqual(*ic.manifestUpdates, types.ManifestUpdateOptions{InformationOnly: ic.manifestUpdates.InformationOnly}) {
if !ic.canModifyManifest {
return nil, errors.Errorf("Internal error: copy needs an updated manifest but that was known to be forbidden")
}
if !ic.diffIDsAreNeeded && ic.src.UpdatedImageNeedsLayerDiffIDs(*ic.manifestUpdates) {
// We have set ic.diffIDsAreNeeded based on the preferred MIME type returned by determineManifestConversion.
// So, this can only happen if we are trying to upload using one of the other MIME type candidates.
// Because UpdatedImageNeedsLayerDiffIDs is true only when converting from s1 to s2, this case should only arise
// when ic.dest.SupportedManifestMIMETypes() includes both s1 and s2, the upload using s1 failed, and we are now trying s2.
// Supposedly s2-only registries do not exist or are extremely rare, so failing with this error message is good enough for now.
// If handling such registries turns out to be necessary, we could compute ic.diffIDsAreNeeded based on the full list of manifest MIME type candidates.
return nil, errors.Errorf("Can not convert image to %s, preparing DiffIDs for this case is not supported", ic.manifestUpdates.ManifestMIMEType)
}
pi, err := ic.src.UpdatedImage(*ic.manifestUpdates)
if err != nil {
return nil, errors.Wrap(err, "Error creating an updated image manifest")
}
pendingImage = pi
}
manifest, _, err := pendingImage.Manifest()
if err != nil {
return nil, errors.Wrap(err, "Error reading manifest")
}
if err := ic.copyConfig(pendingImage); err != nil {
return nil, err
}
fmt.Fprintf(ic.reportWriter, "Writing manifest to image destination\n")
if err := ic.dest.PutManifest(manifest); err != nil {
return nil, errors.Wrap(err, "Error writing manifest")
}
return manifest, nil
}
// copyConfig copies config.json, if any, from src to dest.
func (ic *imageCopier) copyConfig(src types.Image) error {
srcInfo := src.ConfigInfo()
@@ -575,41 +660,3 @@ func compressGoroutine(dest *io.PipeWriter, src io.Reader) {
_, err = io.Copy(zipper, src) // Sets err to nil, i.e. causes dest.Close()
}
// determineManifestConversion updates manifestUpdates to convert manifest to a supported MIME type, if necessary and canModifyManifest.
// Note that the conversion will only happen later, through src.UpdatedImage
func determineManifestConversion(manifestUpdates *types.ManifestUpdateOptions, src types.Image, destSupportedManifestMIMETypes []string, canModifyManifest bool) error {
if len(destSupportedManifestMIMETypes) == 0 {
return nil // Anything goes
}
supportedByDest := map[string]struct{}{}
for _, t := range destSupportedManifestMIMETypes {
supportedByDest[t] = struct{}{}
}
_, srcType, err := src.Manifest()
if err != nil { // This should have been cached?!
return errors.Wrap(err, "Error reading manifest")
}
if _, ok := supportedByDest[srcType]; ok {
logrus.Debugf("Manifest MIME type %s is declared supported by the destination", srcType)
return nil
}
// OK, we should convert the manifest.
if !canModifyManifest {
logrus.Debugf("Manifest MIME type %s is not supported by the destination, but we can't modify the manifest, hoping for the best...")
return nil // Take our chances - FIXME? Or should we fail without trying?
}
var chosenType = destSupportedManifestMIMETypes[0] // This one is known to be supported.
for _, t := range preferredManifestMIMETypes {
if _, ok := supportedByDest[t]; ok {
chosenType = t
break
}
}
logrus.Debugf("Will convert manifest from MIME type %s to %s", srcType, chosenType)
manifestUpdates.ManifestMIMEType = chosenType
return nil
}
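The new upload path in this diff boils down to: try the preferred MIME type, and only when the failure is a type rejection walk the remaining candidates, remembering every failure. A standalone sketch of that control flow (the `upload` stub and its accepted MIME string are illustrative, not the real registry API):

```go
package main

import (
	"errors"
	"fmt"
	"strings"
)

// rejected stands in for the condition types.ManifestTypeRejectedError
// signals in the code above.
var rejected = errors.New("manifest type rejected")

// upload is a hypothetical registry that only accepts schema1.
func upload(mimeType string) error {
	if mimeType != "application/vnd.docker.distribution.manifest.v1+prettyjws" {
		return rejected
	}
	return nil
}

// tryCandidates mirrors the fallback loop: attempt the preferred type
// first, then each remaining candidate, collecting failures as it goes.
func tryCandidates(preferred string, others []string) error {
	err := upload(preferred)
	if err == nil {
		return nil
	}
	if !errors.Is(err, rejected) || len(others) == 0 {
		return err // not a rejection, or nothing left to try
	}
	errs := []string{fmt.Sprintf("%s(%v)", preferred, err)}
	for _, t := range others {
		if err := upload(t); err != nil {
			errs = append(errs, fmt.Sprintf("%s(%v)", t, err))
			continue
		}
		return nil // a candidate succeeded
	}
	return fmt.Errorf("uploading manifest failed, attempted: %s", strings.Join(errs, ", "))
}

func main() {
	err := tryCandidates(
		"application/vnd.docker.distribution.manifest.v2+json",
		[]string{"application/vnd.docker.distribution.manifest.v1+prettyjws"},
	)
	fmt.Println(err) // <nil>: the schema1 fallback succeeded
}
```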

vendor/github.com/containers/image/copy/manifest.go generated vendored Normal file

@@ -0,0 +1,102 @@
package copy
import (
"strings"
"github.com/Sirupsen/logrus"
"github.com/containers/image/manifest"
"github.com/containers/image/types"
"github.com/pkg/errors"
)
// preferredManifestMIMETypes lists manifest MIME types in order of our preference, if we can't use the original manifest and need to convert.
// Prefer v2s2 to v2s1 because v2s2 does not need to be changed when uploading to a different location.
// Include v2s1 signed but not v2s1 unsigned, because docker/distribution requires a signature even if the unsigned MIME type is used.
var preferredManifestMIMETypes = []string{manifest.DockerV2Schema2MediaType, manifest.DockerV2Schema1SignedMediaType}
// orderedSet is a list of strings (MIME types in our case), with each string appearing at most once.
type orderedSet struct {
list []string
included map[string]struct{}
}
// newOrderedSet creates a correctly initialized orderedSet.
// [Sometimes it would be really nice if Golang had constructors…]
func newOrderedSet() *orderedSet {
return &orderedSet{
list: []string{},
included: map[string]struct{}{},
}
}
// append adds s to the end of os, only if it is not included already.
func (os *orderedSet) append(s string) {
if _, ok := os.included[s]; !ok {
os.list = append(os.list, s)
os.included[s] = struct{}{}
}
}
// determineManifestConversion updates manifestUpdates to convert manifest to a supported MIME type, if necessary and canModifyManifest.
// Note that the conversion will only happen later, through src.UpdatedImage
// Returns the preferred manifest MIME type (whether we are converting to it or using it unmodified),
// and a list of other possible alternatives, in order.
func determineManifestConversion(manifestUpdates *types.ManifestUpdateOptions, src types.Image, destSupportedManifestMIMETypes []string, canModifyManifest bool) (string, []string, error) {
_, srcType, err := src.Manifest()
if err != nil { // This should have been cached?!
return "", nil, errors.Wrap(err, "Error reading manifest")
}
if len(destSupportedManifestMIMETypes) == 0 {
return srcType, []string{}, nil // Anything goes; just use the original as is, do not try any conversions.
}
supportedByDest := map[string]struct{}{}
for _, t := range destSupportedManifestMIMETypes {
supportedByDest[t] = struct{}{}
}
// destSupportedManifestMIMETypes is a static guess; a particular registry may still only support a subset of the types.
// So, build a list of types to try in order of decreasing preference.
// FIXME? This treats manifest.DockerV2Schema1SignedMediaType and manifest.DockerV2Schema1MediaType as distinct,
// although we are not really making any conversion, and it is very unlikely that a destination would support one but not the other.
// In practice, schema1 is probably the lowest common denominator, so we would expect to try the first one of the MIME types
// and never attempt the other one.
prioritizedTypes := newOrderedSet()
// First of all, prefer to keep the original manifest unmodified.
if _, ok := supportedByDest[srcType]; ok {
prioritizedTypes.append(srcType)
}
if !canModifyManifest {
// We could also drop the !canModifyManifest parameter and have the caller
// make the choice; it is already doing that to an extent, to improve error
// messages. But it is nice to hide the “if !canModifyManifest, do no conversion”
// special case in here; the caller can then worry (or not) only about a good UI.
logrus.Debugf("We can't modify the manifest, hoping for the best...")
return srcType, []string{}, nil // Take our chances - FIXME? Or should we fail without trying?
}
// Then use our list of preferred types.
for _, t := range preferredManifestMIMETypes {
if _, ok := supportedByDest[t]; ok {
prioritizedTypes.append(t)
}
}
// Finally, try anything else the destination supports.
for _, t := range destSupportedManifestMIMETypes {
prioritizedTypes.append(t)
}
logrus.Debugf("Manifest has MIME type %s, ordered candidate list [%s]", srcType, strings.Join(prioritizedTypes.list, ", "))
if len(prioritizedTypes.list) == 0 { // Coverage: destSupportedManifestMIMETypes is not empty (or we would have exited in the “Anything goes” case above), so this should never happen.
return "", nil, errors.New("Internal error: no candidate MIME types")
}
preferredType := prioritizedTypes.list[0]
if preferredType != srcType {
manifestUpdates.ManifestMIMEType = preferredType
} else {
logrus.Debugf("... will first try using the original manifest unmodified")
}
return preferredType, prioritizedTypes.list[1:], nil
}
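Because `append` drops duplicates while preserving first-seen order, the prioritized candidate list stays stable no matter how often a type is re-added; a miniature demonstration of the same structure:

```go
package main

import "fmt"

// A miniature of the orderedSet above: first-seen order, no duplicates.
type orderedSet struct {
	list     []string
	included map[string]struct{}
}

func (os *orderedSet) append(s string) {
	if _, ok := os.included[s]; !ok {
		os.list = append(os.list, s)
		os.included[s] = struct{}{}
	}
}

func main() {
	prioritized := &orderedSet{included: map[string]struct{}{}}
	// source type first, then the static preferences, then everything else
	for _, t := range []string{"v2s2", "v2s2", "v2s1", "oci", "v2s1"} {
		prioritized.append(t)
	}
	fmt.Println(prioritized.list) // [v2s2 v2s1 oci]
}
```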

vendor/github.com/containers/image/copy/sign.go generated vendored Normal file

@@ -0,0 +1,35 @@
package copy
import (
"fmt"
"io"
"github.com/containers/image/signature"
"github.com/containers/image/transports"
"github.com/containers/image/types"
"github.com/pkg/errors"
)
// createSignature creates a new signature of manifest at (identified by) dest using keyIdentity.
func createSignature(dest types.ImageDestination, manifest []byte, keyIdentity string, reportWriter io.Writer) ([]byte, error) {
mech, err := signature.NewGPGSigningMechanism()
if err != nil {
return nil, errors.Wrap(err, "Error initializing GPG")
}
defer mech.Close()
if err := mech.SupportsSigning(); err != nil {
return nil, errors.Wrap(err, "Signing not supported")
}
dockerReference := dest.Reference().DockerReference()
if dockerReference == nil {
return nil, errors.Errorf("Cannot determine canonical Docker reference for destination %s", transports.ImageName(dest.Reference()))
}
fmt.Fprintf(reportWriter, "Signing manifest\n")
newSig, err := signature.SignDockerManifest(manifest, dockerReference.String(), mech, keyIdentity)
if err != nil {
return nil, errors.Wrap(err, "Error creating signature")
}
return newSig, nil
}


@@ -51,6 +51,11 @@ func (d *dirImageDestination) AcceptsForeignLayerURLs() bool {
return false
}
// MustMatchRuntimeOS returns true iff the destination can store only images targeted for the current runtime OS. False otherwise.
func (d *dirImageDestination) MustMatchRuntimeOS() bool {
return false
}
// PutBlob writes contents of stream and returns data representing the result (with all data filled in).
// inputInfo.Digest can be optionally provided if known; it is not mandatory for the implementation to verify it.
// inputInfo.Size is the expected length of stream, if known.
@@ -118,6 +123,10 @@ func (d *dirImageDestination) ReapplyBlob(info types.BlobInfo) (types.BlobInfo,
return info, nil
}
// PutManifest writes manifest to the destination.
// FIXME? This should also receive a MIME type if known, to differentiate between schema versions.
// If the destination is in principle available but refuses this manifest type (e.g. it does not recognize the schema),
// while possibly accepting a different manifest type, the returned error must be a ManifestTypeRejectedError.
func (d *dirImageDestination) PutManifest(manifest []byte) error {
return ioutil.WriteFile(d.ref.manifestPath(), manifest, 0644)
}


@@ -78,6 +78,11 @@ func imageLoadGoroutine(ctx context.Context, c *client.Client, reader *io.PipeRe
defer resp.Body.Close()
}
// MustMatchRuntimeOS returns true iff the destination can store only images targeted for the current runtime OS. False otherwise.
func (d *daemonImageDestination) MustMatchRuntimeOS() bool {
return true
}
// Close removes resources associated with an initialized ImageDestination, if any.
func (d *daemonImageDestination) Close() error {
if !d.committed {


@@ -34,6 +34,8 @@ const (
dockerCfgFileName = "config.json"
dockerCfgObsolete = ".dockercfg"
systemPerHostCertDirPath = "/etc/docker/certs.d"
resolvedPingV2URL = "%s://%s/v2/"
resolvedPingV1URL = "%s://%s/v1/_ping"
tagsPath = "/v2/%s/tags/list"
@@ -129,12 +131,29 @@ func newTransport() *http.Transport {
return tr
}
func setupCertificates(dir string, tlsc *tls.Config) error {
if dir == "" {
return nil
// dockerCertDir returns a path to a directory to be consumed by setupCertificates() depending on ctx and hostPort.
func dockerCertDir(ctx *types.SystemContext, hostPort string) string {
if ctx != nil && ctx.DockerCertPath != "" {
return ctx.DockerCertPath
}
var hostCertDir string
if ctx != nil && ctx.DockerPerHostCertDirPath != "" {
hostCertDir = ctx.DockerPerHostCertDirPath
} else if ctx != nil && ctx.RootForImplicitAbsolutePaths != "" {
hostCertDir = filepath.Join(ctx.RootForImplicitAbsolutePaths, systemPerHostCertDirPath)
} else {
hostCertDir = systemPerHostCertDirPath
}
return filepath.Join(hostCertDir, hostPort)
}
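The precedence implemented by dockerCertDir is: explicit DockerCertPath, then DockerPerHostCertDirPath, then RootForImplicitAbsolutePaths prefixed to the system default, each finally joined with host[:port]. A standalone mirror of that lookup (the parameter names are stand-ins for the SystemContext fields):

```go
package main

import (
	"fmt"
	"path/filepath"
)

// certDir mirrors dockerCertDir above for illustration.
func certDir(explicit, perHost, root, hostPort string) string {
	if explicit != "" {
		return explicit // DockerCertPath wins outright, no hostPort suffix
	}
	base := "/etc/docker/certs.d" // systemPerHostCertDirPath
	switch {
	case perHost != "":
		base = perHost
	case root != "":
		base = filepath.Join(root, "etc/docker/certs.d")
	}
	return filepath.Join(base, hostPort)
}

func main() {
	fmt.Println(certDir("", "", "", "registry.example.com:5000"))
	// /etc/docker/certs.d/registry.example.com:5000
	fmt.Println(certDir("", "", "/sysroot", "registry.example.com:5000"))
	// /sysroot/etc/docker/certs.d/registry.example.com:5000
}
```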
func setupCertificates(dir string, tlsc *tls.Config) error {
logrus.Debugf("Looking for TLS certificates and private keys in %s", dir)
fs, err := ioutil.ReadDir(dir)
if err != nil && !os.IsNotExist(err) {
if err != nil {
if os.IsNotExist(err) {
return nil
}
return err
}
@@ -146,7 +165,7 @@ func setupCertificates(dir string, tlsc *tls.Config) error {
return errors.Wrap(err, "unable to get system cert pool")
}
tlsc.RootCAs = systemPool
logrus.Debugf("crt: %s", fullPath)
logrus.Debugf(" crt: %s", fullPath)
data, err := ioutil.ReadFile(fullPath)
if err != nil {
return err
@@ -156,7 +175,7 @@ func setupCertificates(dir string, tlsc *tls.Config) error {
if strings.HasSuffix(f.Name(), ".cert") {
certName := f.Name()
keyName := certName[:len(certName)-5] + ".key"
logrus.Debugf("cert: %s", fullPath)
logrus.Debugf(" cert: %s", fullPath)
if !hasFile(fs, keyName) {
return errors.Errorf("missing key %s for client certificate %s. Note that CA certificates should use the extension .crt", keyName, certName)
}
@@ -169,7 +188,7 @@ func setupCertificates(dir string, tlsc *tls.Config) error {
if strings.HasSuffix(f.Name(), ".key") {
keyName := f.Name()
certName := keyName[:len(keyName)-4] + ".cert"
logrus.Debugf("key: %s", fullPath)
logrus.Debugf(" key: %s", fullPath)
if !hasFile(fs, certName) {
return errors.Errorf("missing client certificate %s for key %s", certName, keyName)
}
@@ -199,18 +218,18 @@ func newDockerClient(ctx *types.SystemContext, ref dockerReference, write bool,
return nil, err
}
tr := newTransport()
if ctx != nil && (ctx.DockerCertPath != "" || ctx.DockerInsecureSkipTLSVerify) {
tlsc := &tls.Config{}
if err := setupCertificates(ctx.DockerCertPath, tlsc); err != nil {
return nil, err
}
tlsc.InsecureSkipVerify = ctx.DockerInsecureSkipTLSVerify
tr.TLSClientConfig = tlsc
tr.TLSClientConfig = serverDefault()
// It is undefined whether the host[:port] string for dockerHostname should be dockerHostname or dockerRegistry,
// because docker/docker does not read the certs.d subdirectory at all in that case. We use the user-visible
// dockerHostname here, because it is more symmetrical to read the configuration in that case as well, and because
// generally the UI hides the existence of the different dockerRegistry. But note that this behavior is
// undocumented and may change if docker/docker changes.
certDir := dockerCertDir(ctx, reference.Domain(ref.ref))
if err := setupCertificates(certDir, tr.TLSClientConfig); err != nil {
return nil, err
}
if tr.TLSClientConfig == nil {
tr.TLSClientConfig = serverDefault()
if ctx != nil && ctx.DockerInsecureSkipTLSVerify {
tr.TLSClientConfig.InsecureSkipVerify = true
}
client := &http.Client{Transport: tr}

View File

@@ -16,6 +16,9 @@ import (
"github.com/containers/image/docker/reference"
"github.com/containers/image/manifest"
"github.com/containers/image/types"
"github.com/docker/distribution/registry/api/errcode"
"github.com/docker/distribution/registry/api/v2"
"github.com/docker/distribution/registry/client"
"github.com/opencontainers/go-digest"
"github.com/pkg/errors"
)
@@ -96,6 +99,11 @@ func (d *dockerImageDestination) AcceptsForeignLayerURLs() bool {
return true
}
// MustMatchRuntimeOS returns true iff the destination can store only images targeted for the current runtime OS. False otherwise.
func (d *dockerImageDestination) MustMatchRuntimeOS() bool {
return false
}
// sizeCounter is an io.Writer which only counts the total size of its input.
type sizeCounter struct{ size int64 }
@@ -209,6 +217,10 @@ func (d *dockerImageDestination) ReapplyBlob(info types.BlobInfo) (types.BlobInf
return info, nil
}
// PutManifest writes manifest to the destination.
// FIXME? This should also receive a MIME type if known, to differentiate between schema versions.
// If the destination is in principle available, refuses this manifest type (e.g. it does not recognize the schema),
// but may accept a different manifest type, the returned error must be a ManifestTypeRejectedError.
func (d *dockerImageDestination) PutManifest(m []byte) error {
digest, err := manifest.Digest(m)
if err != nil {
@@ -233,16 +245,31 @@ func (d *dockerImageDestination) PutManifest(m []byte) error {
}
defer res.Body.Close()
if res.StatusCode != http.StatusCreated {
body, err := ioutil.ReadAll(res.Body)
if err == nil {
logrus.Debugf("Error body %s", string(body))
err = errors.Wrapf(client.HandleErrorResponse(res), "Error uploading manifest to %s", path)
if isManifestInvalidError(errors.Cause(err)) {
err = types.ManifestTypeRejectedError{Err: err}
}
logrus.Debugf("Error uploading manifest, status %d, %#v", res.StatusCode, res)
return errors.Errorf("Error uploading manifest to %s, status %d", path, res.StatusCode)
return err
}
return nil
}
// isManifestInvalidError returns true iff err from client.HandleErrorResponse is a “manifest invalid” error.
func isManifestInvalidError(err error) bool {
errors, ok := err.(errcode.Errors)
if !ok || len(errors) == 0 {
return false
}
ec, ok := errors[0].(errcode.ErrorCoder)
if !ok {
return false
}
// ErrorCodeManifestInvalid is returned by OpenShift with acceptschema2=false.
// ErrorCodeTagInvalid is returned by docker/distribution (at least as of commit ec87e9b6971d831f0eff752ddb54fb64693e51cd)
// when uploading to a tag (because it can't find a matching tag inside the manifest)
return ec.ErrorCode() == v2.ErrorCodeManifestInvalid || ec.ErrorCode() == v2.ErrorCodeTagInvalid
}
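A hedged sketch of the caller side (putWithFallback and convertAndRetry are hypothetical; only PutManifest and types.ManifestTypeRejectedError come from this change): the rejection can be detected with a type assertion on the error's cause, and only then is retrying with a converted manifest worthwhile.

package copysketch

import (
	"github.com/containers/image/types"
	"github.com/pkg/errors"
)

// putWithFallback uploads a manifest and retries through a (stubbed)
// conversion path only when the destination rejected this manifest type.
func putWithFallback(dest types.ImageDestination, manifestBytes []byte) error {
	err := dest.PutManifest(manifestBytes)
	if err == nil {
		return nil
	}
	if _, rejected := errors.Cause(err).(types.ManifestTypeRejectedError); rejected {
		// The destination is reachable but refuses this schema; one of
		// dest.SupportedManifestMIMETypes() may still be accepted.
		return convertAndRetry(dest, manifestBytes)
	}
	return err // any other failure is fatal
}

// convertAndRetry stands in for the conversion logic in the generic copy code.
func convertAndRetry(dest types.ImageDestination, m []byte) error { return nil }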
func (d *dockerImageDestination) PutSignatures(signatures [][]byte) error {
// Do not fail if we don't really need to support signatures.
if len(signatures) == 0 {

View File

@@ -81,6 +81,11 @@ func (d *Destination) AcceptsForeignLayerURLs() bool {
return false
}
// MustMatchRuntimeOS returns true iff the destination can store only images targeted for the current runtime OS. False otherwise.
func (d *Destination) MustMatchRuntimeOS() bool {
return false
}
// PutBlob writes contents of stream and returns data representing the result (with all data filled in).
// inputInfo.Digest can be optionally provided if known; it is not mandatory for the implementation to verify it.
// inputInfo.Size is the expected length of stream, if known.
@@ -156,10 +161,13 @@ func (d *Destination) ReapplyBlob(info types.BlobInfo) (types.BlobInfo, error) {
return info, nil
}
// PutManifest sends the given manifest blob to the destination.
// FIXME? This should also receive a MIME type if known, to differentiate
// between schema versions.
// PutManifest writes manifest to the destination.
// FIXME? This should also receive a MIME type if known, to differentiate between schema versions.
// If the destination is in principle available, refuses this manifest type (e.g. it does not recognize the schema),
// but may accept a different manifest type, the returned error must be a ManifestTypeRejectedError.
func (d *Destination) PutManifest(m []byte) error {
// We do not bother with types.ManifestTypeRejectedError; our .SupportedManifestMIMETypes() above is already providing only one alternative,
// so the caller trying a different manifest kind would be pointless.
var man schema2Manifest
if err := json.Unmarshal(m, &man); err != nil {
return errors.Wrap(err, "Error parsing manifest")

View File

@@ -135,6 +135,27 @@ func (m *manifestSchema1) LayerInfos() []types.BlobInfo {
return layers
}
// EmbeddedDockerReferenceConflicts returns whether a Docker reference embedded in the manifest, if any, conflicts with destination ref.
// It returns false if the manifest does not embed a Docker reference.
// (This embedding unfortunately happens for Docker schema1, please do not add support for this in any new formats.)
func (m *manifestSchema1) EmbeddedDockerReferenceConflicts(ref reference.Named) bool {
// This is a bit convoluted: We can't just have a "get embedded docker reference" method
// and have the “does it conflict” logic in the generic copy code, because the manifest does not actually
// embed a full docker/distribution reference, but only the repo name and tag (without the host name).
// So we would have to provide a “return repo without host name, and tag” getter for the generic code,
// which would be very awkward. Instead, we do the matching here in schema1-specific code, and all the
// generic copy code needs to know about is reference.Named and that a manifest may need updating
// for some destinations.
name := reference.Path(ref)
var tag string
if tagged, isTagged := ref.(reference.NamedTagged); isTagged {
tag = tagged.Tag()
} else {
tag = ""
}
return m.Name != name || m.Tag != tag
}
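A self-contained sketch of the comparison, assuming the manifest's Name and Tag fields have already been extracted; the real method derives the right-hand side from reference.Path and a reference.NamedTagged assertion on the destination reference:

package main

import "fmt"

// conflicts mirrors the schema1 check: the manifest embeds only the repo path
// (no registry host) and the tag, so only those two components are compared.
func conflicts(manifestName, manifestTag, destRepoPath, destTag string) bool {
	return manifestName != destRepoPath || manifestTag != destTag
}

func main() {
	// Copying to the same repo:tag does not conflict; retagging does.
	fmt.Println(conflicts("library/busybox", "latest", "library/busybox", "latest")) // false
	fmt.Println(conflicts("library/busybox", "latest", "library/busybox", "stable")) // true
}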
func (m *manifestSchema1) imageInspectInfo() (*types.ImageInspectInfo, error) {
v1 := &v1Image{}
if err := json.Unmarshal([]byte(m.History[0].V1Compatibility), v1); err != nil {
@@ -173,6 +194,14 @@ func (m *manifestSchema1) UpdatedImage(options types.ManifestUpdateOptions) (typ
copy.FSLayers[(len(options.LayerInfos)-1)-i].BlobSum = info.Digest
}
}
if options.EmbeddedDockerReference != nil {
copy.Name = reference.Path(options.EmbeddedDockerReference)
if tagged, isTagged := options.EmbeddedDockerReference.(reference.NamedTagged); isTagged {
copy.Tag = tagged.Tag()
} else {
copy.Tag = ""
}
}
switch options.ManifestMIMEType {
case "": // No conversion, OK

View File

@@ -9,6 +9,7 @@ import (
"strings"
"github.com/Sirupsen/logrus"
"github.com/containers/image/docker/reference"
"github.com/containers/image/manifest"
"github.com/containers/image/types"
"github.com/opencontainers/go-digest"
@@ -140,6 +141,13 @@ func (m *manifestSchema2) LayerInfos() []types.BlobInfo {
return blobs
}
// EmbeddedDockerReferenceConflicts returns whether a Docker reference embedded in the manifest, if any, conflicts with destination ref.
// It returns false if the manifest does not embed a Docker reference.
// (This embedding unfortunately happens for Docker schema1, please do not add support for this in any new formats.)
func (m *manifestSchema2) EmbeddedDockerReferenceConflicts(ref reference.Named) bool {
return false
}
func (m *manifestSchema2) imageInspectInfo() (*types.ImageInspectInfo, error) {
config, err := m.ConfigBlob()
if err != nil {
@@ -180,6 +188,7 @@ func (m *manifestSchema2) UpdatedImage(options types.ManifestUpdateOptions) (typ
copy.LayersDescriptors[i].URLs = info.URLs
}
}
// Ignore options.EmbeddedDockerReference: it may be set when converting from schema1 to schema2, but we really don't care.
switch options.ManifestMIMEType {
case "": // No conversion, OK

View File

@@ -3,6 +3,7 @@ package image
import (
"time"
"github.com/containers/image/docker/reference"
"github.com/containers/image/manifest"
"github.com/containers/image/pkg/strslice"
"github.com/containers/image/types"
@@ -72,6 +73,10 @@ type genericManifest interface {
// The Digest field is guaranteed to be provided; Size may be -1.
// WARNING: The list may contain duplicates, and they are semantically relevant.
LayerInfos() []types.BlobInfo
// EmbeddedDockerReferenceConflicts returns whether a Docker reference embedded in the manifest, if any, conflicts with destination ref.
// It returns false if the manifest does not embed a Docker reference.
// (This embedding unfortunately happens for Docker schema1, please do not add support for this in any new formats.)
EmbeddedDockerReferenceConflicts(ref reference.Named) bool
imageInspectInfo() (*types.ImageInspectInfo, error) // To be called by inspectManifest
// UpdatedImageNeedsLayerDiffIDs returns true iff UpdatedImage(options) needs InformationOnly.LayerDiffIDs.
// This is a horribly specific interface, but computing InformationOnly.LayerDiffIDs can be very expensive to compute

View File

@@ -4,6 +4,7 @@ import (
"encoding/json"
"io/ioutil"
"github.com/containers/image/docker/reference"
"github.com/containers/image/manifest"
"github.com/containers/image/types"
"github.com/opencontainers/go-digest"
@@ -107,6 +108,13 @@ func (m *manifestOCI1) LayerInfos() []types.BlobInfo {
return blobs
}
// EmbeddedDockerReferenceConflicts returns whether a Docker reference embedded in the manifest, if any, conflicts with destination ref.
// It returns false if the manifest does not embed a Docker reference.
// (This embedding unfortunately happens for Docker schema1, please do not add support for this in any new formats.)
func (m *manifestOCI1) EmbeddedDockerReferenceConflicts(ref reference.Named) bool {
return false
}
func (m *manifestOCI1) imageInspectInfo() (*types.ImageInspectInfo, error) {
config, err := m.ConfigBlob()
if err != nil {
@@ -146,6 +154,7 @@ func (m *manifestOCI1) UpdatedImage(options types.ManifestUpdateOptions) (types.
copy.LayersDescriptors[i].Size = info.Size
}
}
// Ignore options.EmbeddedDockerReference: it may be set when converting from schema1, but we really don't care.
switch options.ManifestMIMEType {
case "": // No conversion, OK

View File

@@ -54,7 +54,7 @@ func GuessMIMEType(manifest []byte) string {
}
switch meta.MediaType {
case DockerV2Schema2MediaType, DockerV2ListMediaType, imgspecv1.MediaTypeImageManifest, imgspecv1.MediaTypeImageManifestList: // A recognized type.
case DockerV2Schema2MediaType, DockerV2ListMediaType: // A recognized type.
return meta.MediaType
}
// this is the only way the function can return DockerV2Schema1MediaType, and recognizing that is essential for stripping the JWS signatures = computing the correct manifest digest.
@@ -64,7 +64,31 @@ func GuessMIMEType(manifest []byte) string {
return DockerV2Schema1SignedMediaType
}
return DockerV2Schema1MediaType
case 2: // Really should not happen, meta.MediaType should have been set. But given the data, this is our best guess.
case 2:
// best effort to understand if this is an OCI image since mediaType
// isn't in the manifest for OCI anymore
// for docker v2s2 meta.MediaType should have been set. But given the data, this is our best guess.
ociMan := struct {
Config struct {
MediaType string `json:"mediaType"`
} `json:"config"`
Layers []imgspecv1.Descriptor `json:"layers"`
}{}
if err := json.Unmarshal(manifest, &ociMan); err != nil {
return ""
}
if ociMan.Config.MediaType == imgspecv1.MediaTypeImageConfig && len(ociMan.Layers) != 0 {
return imgspecv1.MediaTypeImageManifest
}
ociIndex := struct {
Manifests []imgspecv1.Descriptor `json:"manifests"`
}{}
if err := json.Unmarshal(manifest, &ociIndex); err != nil {
return ""
}
if len(ociIndex.Manifests) != 0 && ociIndex.Manifests[0].MediaType == imgspecv1.MediaTypeImageManifest {
return imgspecv1.MediaTypeImageIndex
}
return DockerV2Schema2MediaType
}
return ""

View File

@@ -6,22 +6,30 @@ import (
"io/ioutil"
"os"
"path/filepath"
"runtime"
"github.com/pkg/errors"
"github.com/containers/image/manifest"
"github.com/containers/image/types"
"github.com/opencontainers/go-digest"
imgspec "github.com/opencontainers/image-spec/specs-go"
imgspecv1 "github.com/opencontainers/image-spec/specs-go/v1"
)
type ociImageDestination struct {
ref ociReference
ref ociReference
index imgspecv1.Index
}
// newImageDestination returns an ImageDestination for writing to an existing directory.
func newImageDestination(ref ociReference) types.ImageDestination {
return &ociImageDestination{ref: ref}
index := imgspecv1.Index{
Versioned: imgspec.Versioned{
SchemaVersion: 2,
},
}
return &ociImageDestination{ref: ref, index: index}
}
// Reference returns the reference used to set up this destination. Note that this should directly correspond to user's intent,
@@ -58,6 +66,11 @@ func (d *ociImageDestination) AcceptsForeignLayerURLs() bool {
return false
}
// MustMatchRuntimeOS returns true iff the destination can store only images targeted for the current runtime OS. False otherwise.
func (d *ociImageDestination) MustMatchRuntimeOS() bool {
return false
}
// PutBlob writes contents of stream and returns data representing the result (with all data filled in).
// inputInfo.Digest can be optionally provided if known; it is not mandatory for the implementation to verify it.
// inputInfo.Size is the expected length of stream, if known.
@@ -138,6 +151,10 @@ func (d *ociImageDestination) ReapplyBlob(info types.BlobInfo) (types.BlobInfo,
return info, nil
}
// PutManifest writes manifest to the destination.
// FIXME? This should also receive a MIME type if known, to differentiate between schema versions.
// If the destination is in principle available, refuses this manifest type (e.g. it does not recognize the schema),
// but may accept a different manifest type, the returned error must be a ManifestTypeRejectedError.
func (d *ociImageDestination) PutManifest(m []byte) error {
digest, err := manifest.Digest(m)
if err != nil {
@@ -148,10 +165,6 @@ func (d *ociImageDestination) PutManifest(m []byte) error {
// TODO(runcom): be aware and add support for OCI manifest list
desc.MediaType = imgspecv1.MediaTypeImageManifest
desc.Size = int64(len(m))
data, err := json.Marshal(desc)
if err != nil {
return err
}
blobPath, err := d.ref.blobPath(digest)
if err != nil {
@@ -163,15 +176,17 @@ func (d *ociImageDestination) PutManifest(m []byte) error {
if err := ioutil.WriteFile(blobPath, m, 0644); err != nil {
return err
}
// TODO(runcom): ugly here?
if err := ioutil.WriteFile(d.ref.ociLayoutPath(), []byte(`{"imageLayoutVersion": "1.0.0"}`), 0644); err != nil {
return err
annotations := make(map[string]string)
annotations["org.opencontainers.image.ref.name"] = d.ref.tag
desc.Annotations = annotations
desc.Platform = &imgspecv1.Platform{
Architecture: runtime.GOARCH,
OS: runtime.GOOS,
}
descriptorPath := d.ref.descriptorPath(d.ref.tag)
if err := ensureParentDirectoryExists(descriptorPath); err != nil {
return err
}
return ioutil.WriteFile(descriptorPath, data, 0644)
d.index.Manifests = append(d.index.Manifests, desc)
return nil
}
func ensureDirectoryExists(path string) error {
@@ -200,5 +215,12 @@ func (d *ociImageDestination) PutSignatures(signatures [][]byte) error {
// - Uploaded data MAY be visible to others before Commit() is called
// - Uploaded data MAY be removed or MAY remain around if Close() is called without Commit() (i.e. rollback is allowed but not guaranteed)
func (d *ociImageDestination) Commit() error {
return nil
if err := ioutil.WriteFile(d.ref.ociLayoutPath(), []byte(`{"imageLayoutVersion": "1.0.0"}`), 0644); err != nil {
return err
}
indexJSON, err := json.Marshal(d.index)
if err != nil {
return err
}
return ioutil.WriteFile(d.ref.indexPath(), indexJSON, 0644)
}
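After Commit() the layout directory looks, schematically, like this (digest, size, and tag are made up for illustration); the per-tag refs/<tag> descriptor file is gone, and the manifest is found through index.json instead:

oci-layout: {"imageLayoutVersion": "1.0.0"}
index.json:
{
  "schemaVersion": 2,
  "manifests": [{
    "mediaType": "application/vnd.oci.image.manifest.v1+json",
    "digest": "sha256:<manifest digest>",
    "size": 1234,
    "annotations": {"org.opencontainers.image.ref.name": "latest"},
    "platform": {"architecture": "amd64", "os": "linux"}
  }]
}
blobs/sha256/<manifest digest>: the manifest itself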

View File

@@ -1,7 +1,6 @@
package layout
import (
"encoding/json"
"io"
"io/ioutil"
"os"
@@ -12,12 +11,17 @@ import (
)
type ociImageSource struct {
ref ociReference
ref ociReference
descriptor imgspecv1.Descriptor
}
// newImageSource returns an ImageSource for reading from an existing directory.
func newImageSource(ref ociReference) types.ImageSource {
return &ociImageSource{ref: ref}
func newImageSource(ref ociReference) (types.ImageSource, error) {
descriptor, err := ref.getManifestDescriptor()
if err != nil {
return nil, err
}
return &ociImageSource{ref: ref, descriptor: descriptor}, nil
}
// Reference returns the reference used to set up this source.
@@ -33,19 +37,7 @@ func (s *ociImageSource) Close() error {
// GetManifest returns the image's manifest along with its MIME type (which may be empty when it can't be determined but the manifest is available).
// It may use a remote (= slow) service.
func (s *ociImageSource) GetManifest() ([]byte, string, error) {
descriptorPath := s.ref.descriptorPath(s.ref.tag)
data, err := ioutil.ReadFile(descriptorPath)
if err != nil {
return nil, "", err
}
desc := imgspecv1.Descriptor{}
err = json.Unmarshal(data, &desc)
if err != nil {
return nil, "", err
}
manifestPath, err := s.ref.blobPath(digest.Digest(desc.Digest))
manifestPath, err := s.ref.blobPath(digest.Digest(s.descriptor.Digest))
if err != nil {
return nil, "", err
}
@@ -54,7 +46,7 @@ func (s *ociImageSource) GetManifest() ([]byte, string, error) {
return nil, "", err
}
return m, desc.MediaType, nil
return m, s.descriptor.MediaType, nil
}
func (s *ociImageSource) GetTargetManifest(digest digest.Digest) ([]byte, string, error) {

View File

@@ -1,7 +1,9 @@
package layout
import (
"encoding/json"
"fmt"
"os"
"path/filepath"
"regexp"
"strings"
@@ -12,6 +14,7 @@ import (
"github.com/containers/image/transports"
"github.com/containers/image/types"
"github.com/opencontainers/go-digest"
imgspecv1 "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/pkg/errors"
)
@@ -176,16 +179,49 @@ func (ref ociReference) PolicyConfigurationNamespaces() []string {
// NOTE: If any kind of signature verification should happen, build an UnparsedImage from the value returned by NewImageSource,
// verify that UnparsedImage, and convert it into a real Image via image.FromUnparsedImage.
func (ref ociReference) NewImage(ctx *types.SystemContext) (types.Image, error) {
src := newImageSource(ref)
src, err := newImageSource(ref)
if err != nil {
return nil, err
}
return image.FromSource(src)
}
func (ref ociReference) getManifestDescriptor() (imgspecv1.Descriptor, error) {
indexJSON, err := os.Open(ref.indexPath())
if err != nil {
return imgspecv1.Descriptor{}, err
}
defer indexJSON.Close()
index := imgspecv1.Index{}
if err := json.NewDecoder(indexJSON).Decode(&index); err != nil {
return imgspecv1.Descriptor{}, err
}
var d *imgspecv1.Descriptor
for _, md := range index.Manifests {
if md.MediaType != imgspecv1.MediaTypeImageManifest {
continue
}
refName, ok := md.Annotations["org.opencontainers.image.ref.name"]
if !ok {
continue
}
if refName == ref.tag {
d = &md
break
}
}
if d == nil {
return imgspecv1.Descriptor{}, fmt.Errorf("no descriptor found for reference %q", ref.tag)
}
return *d, nil
}
// NewImageSource returns a types.ImageSource for this reference,
// asking the backend to use a manifest from requestedManifestMIMETypes if possible.
// nil requestedManifestMIMETypes means manifest.DefaultRequestedManifestMIMETypes.
// The caller must call .Close() on the returned ImageSource.
func (ref ociReference) NewImageSource(ctx *types.SystemContext, requestedManifestMIMETypes []string) (types.ImageSource, error) {
return newImageSource(ref), nil
return newImageSource(ref)
}
// NewImageDestination returns a types.ImageDestination for this reference.
@@ -199,11 +235,16 @@ func (ref ociReference) DeleteImage(ctx *types.SystemContext) error {
return errors.Errorf("Deleting images not implemented for oci: images")
}
// ociLayoutPathPath returns a path for the oci-layout within a directory using OCI conventions.
// ociLayoutPath returns a path for the oci-layout within a directory using OCI conventions.
func (ref ociReference) ociLayoutPath() string {
return filepath.Join(ref.dir, "oci-layout")
}
// indexPath returns a path for the index.json within a directory using OCI conventions.
func (ref ociReference) indexPath() string {
return filepath.Join(ref.dir, "index.json")
}
// blobPath returns a path for a blob within a directory using OCI image-layout conventions.
func (ref ociReference) blobPath(digest digest.Digest) (string, error) {
if err := digest.Validate(); err != nil {
@@ -211,8 +252,3 @@ func (ref ociReference) blobPath(digest digest.Digest) (string, error) {
}
return filepath.Join(ref.dir, "blobs", digest.Algorithm().String(), digest.Hex()), nil
}
// descriptorPath returns a path for the manifest within a directory using OCI conventions.
func (ref ociReference) descriptorPath(digest string) string {
return filepath.Join(ref.dir, "refs", digest)
}

View File

@@ -338,10 +338,7 @@ func (d *openshiftImageDestination) Close() error {
}
func (d *openshiftImageDestination) SupportedManifestMIMETypes() []string {
return []string{
manifest.DockerV2Schema1SignedMediaType,
manifest.DockerV2Schema1MediaType,
}
return d.docker.SupportedManifestMIMETypes()
}
// SupportsSignatures returns an error (to be displayed to the user) if the destination certainly can't store signatures.
@@ -361,6 +358,11 @@ func (d *openshiftImageDestination) AcceptsForeignLayerURLs() bool {
return true
}
// MustMatchRuntimeOS returns true iff the destination can store only images targeted for the current runtime OS. False otherwise.
func (d *openshiftImageDestination) MustMatchRuntimeOS() bool {
return false
}
// PutBlob writes contents of stream and returns data representing the result (with all data filled in).
// inputInfo.Digest can be optionally provided if known; it is not mandatory for the implementation to verify it.
// inputInfo.Size is the expected length of stream, if known.
@@ -383,6 +385,10 @@ func (d *openshiftImageDestination) ReapplyBlob(info types.BlobInfo) (types.Blob
return d.docker.ReapplyBlob(info)
}
// PutManifest writes manifest to the destination.
// FIXME? This should also receive a MIME type if known, to differentiate between schema versions.
// If the destination is in principle available, refuses this manifest type (e.g. it does not recognize the schema),
// but may accept a different manifest type, the returned error must be a ManifestTypeRejectedError.
func (d *openshiftImageDestination) PutManifest(m []byte) error {
manifestDigest, err := manifest.Digest(m)
if err != nil {

View File

@@ -0,0 +1,323 @@
package ostree
import (
"bytes"
"encoding/json"
"fmt"
"io"
"io/ioutil"
"os"
"os/exec"
"path/filepath"
"strconv"
"strings"
"time"
"github.com/containers/image/manifest"
"github.com/containers/image/types"
"github.com/containers/storage/pkg/archive"
"github.com/opencontainers/go-digest"
"github.com/pkg/errors"
"github.com/ostreedev/ostree-go/pkg/otbuiltin"
)
type blobToImport struct {
Size int64
Digest digest.Digest
BlobPath string
}
type descriptor struct {
Size int64 `json:"size"`
Digest digest.Digest `json:"digest"`
}
type manifestSchema struct {
ConfigDescriptor descriptor `json:"config"`
LayersDescriptors []descriptor `json:"layers"`
}
type ostreeImageDestination struct {
ref ostreeReference
manifest string
schema manifestSchema
tmpDirPath string
blobs map[string]*blobToImport
}
// newImageDestination returns an ImageDestination for writing to an existing OSTree repository.
func newImageDestination(ref ostreeReference, tmpDirPath string) (types.ImageDestination, error) {
tmpDirPath = filepath.Join(tmpDirPath, ref.branchName)
if err := ensureDirectoryExists(tmpDirPath); err != nil {
return nil, err
}
return &ostreeImageDestination{ref, "", manifestSchema{}, tmpDirPath, map[string]*blobToImport{}}, nil
}
// Reference returns the reference used to set up this destination. Note that this should directly correspond to user's intent,
// e.g. it should use the public hostname instead of the result of resolving CNAMEs or following redirects.
func (d *ostreeImageDestination) Reference() types.ImageReference {
return d.ref
}
// Close removes resources associated with an initialized ImageDestination, if any.
func (d *ostreeImageDestination) Close() error {
return os.RemoveAll(d.tmpDirPath)
}
func (d *ostreeImageDestination) SupportedManifestMIMETypes() []string {
return []string{
manifest.DockerV2Schema2MediaType,
}
}
// SupportsSignatures returns an error (to be displayed to the user) if the destination certainly can't store signatures.
// Note: It is still possible for PutSignatures to fail if SupportsSignatures returns nil.
func (d *ostreeImageDestination) SupportsSignatures() error {
return nil
}
// ShouldCompressLayers returns true iff it is desirable to compress layer blobs written to this destination.
func (d *ostreeImageDestination) ShouldCompressLayers() bool {
return false
}
// AcceptsForeignLayerURLs returns false iff foreign layers in manifest should be actually
// uploaded to the image destination, true otherwise.
func (d *ostreeImageDestination) AcceptsForeignLayerURLs() bool {
return false
}
// MustMatchRuntimeOS returns true iff the destination can store only images targeted for the current runtime OS. False otherwise.
func (d *ostreeImageDestination) MustMatchRuntimeOS() bool {
return true
}
func (d *ostreeImageDestination) PutBlob(stream io.Reader, inputInfo types.BlobInfo) (types.BlobInfo, error) {
tmpDir, err := ioutil.TempDir(d.tmpDirPath, "blob")
if err != nil {
return types.BlobInfo{}, err
}
blobPath := filepath.Join(tmpDir, "content")
blobFile, err := os.Create(blobPath)
if err != nil {
return types.BlobInfo{}, err
}
defer blobFile.Close()
digester := digest.Canonical.Digester()
tee := io.TeeReader(stream, digester.Hash())
size, err := io.Copy(blobFile, tee)
if err != nil {
return types.BlobInfo{}, err
}
computedDigest := digester.Digest()
if inputInfo.Size != -1 && size != inputInfo.Size {
return types.BlobInfo{}, errors.Errorf("Size mismatch when copying %s, expected %d, got %d", computedDigest, inputInfo.Size, size)
}
if err := blobFile.Sync(); err != nil {
return types.BlobInfo{}, err
}
hash := computedDigest.Hex()
d.blobs[hash] = &blobToImport{Size: size, Digest: computedDigest, BlobPath: blobPath}
return types.BlobInfo{Digest: computedDigest, Size: size}, nil
}
func fixFiles(dir string, usermode bool) error {
entries, err := ioutil.ReadDir(dir)
if err != nil {
return err
}
for _, info := range entries {
fullpath := filepath.Join(dir, info.Name())
if info.Mode()&(os.ModeNamedPipe|os.ModeSocket|os.ModeDevice) != 0 {
if err := os.Remove(fullpath); err != nil {
return err
}
continue
}
if info.IsDir() {
if usermode {
if err := os.Chmod(fullpath, info.Mode()|0700); err != nil {
return err
}
}
err = fixFiles(fullpath, usermode)
if err != nil {
return err
}
} else if usermode && (info.Mode().IsRegular() || (info.Mode()&os.ModeSymlink) != 0) {
if err := os.Chmod(fullpath, info.Mode()|0600); err != nil {
return err
}
}
}
return nil
}
func (d *ostreeImageDestination) ostreeCommit(repo *otbuiltin.Repo, branch string, root string, metadata []string) error {
opts := otbuiltin.NewCommitOptions()
opts.AddMetadataString = metadata
opts.Timestamp = time.Now()
// OCI layers have no parent OSTree commit
opts.Parent = "0000000000000000000000000000000000000000000000000000000000000000"
_, err := repo.Commit(root, branch, opts)
return err
}
func (d *ostreeImageDestination) importBlob(repo *otbuiltin.Repo, blob *blobToImport) error {
ostreeBranch := fmt.Sprintf("ociimage/%s", blob.Digest.Hex())
destinationPath := filepath.Join(d.tmpDirPath, blob.Digest.Hex(), "root")
if err := ensureDirectoryExists(destinationPath); err != nil {
return err
}
defer func() {
os.Remove(blob.BlobPath)
os.RemoveAll(destinationPath)
}()
if os.Getuid() == 0 {
if err := archive.UntarPath(blob.BlobPath, destinationPath); err != nil {
return err
}
if err := fixFiles(destinationPath, false); err != nil {
return err
}
} else {
os.MkdirAll(destinationPath, 0755)
if err := exec.Command("tar", "-C", destinationPath, "--no-same-owner", "--no-same-permissions", "--delay-directory-restore", "-xf", blob.BlobPath).Run(); err != nil {
return err
}
if err := fixFiles(destinationPath, true); err != nil {
return err
}
}
return d.ostreeCommit(repo, ostreeBranch, destinationPath, []string{fmt.Sprintf("docker.size=%d", blob.Size)})
}
func (d *ostreeImageDestination) importConfig(blob *blobToImport) error {
ostreeBranch := fmt.Sprintf("ociimage/%s", blob.Digest.Hex())
return exec.Command("ostree", "commit",
"--repo", d.ref.repo,
fmt.Sprintf("--add-metadata-string=docker.size=%d", blob.Size),
"--branch", ostreeBranch, filepath.Dir(blob.BlobPath)).Run()
}
func (d *ostreeImageDestination) HasBlob(info types.BlobInfo) (bool, int64, error) {
branch := fmt.Sprintf("ociimage/%s", info.Digest.Hex())
output, err := exec.Command("ostree", "show", "--repo", d.ref.repo, "--print-metadata-key=docker.size", branch).CombinedOutput()
if err != nil {
if bytes.Index(output, []byte("not found")) >= 0 || bytes.Index(output, []byte("No such")) >= 0 {
return false, -1, nil
}
return false, -1, err
}
size, err := strconv.ParseInt(strings.Trim(string(output), "'\n"), 10, 64)
if err != nil {
return false, -1, err
}
return true, size, nil
}
func (d *ostreeImageDestination) ReapplyBlob(info types.BlobInfo) (types.BlobInfo, error) {
return info, nil
}
// PutManifest writes manifest to the destination.
// FIXME? This should also receive a MIME type if known, to differentiate between schema versions.
// If the destination is in principle available, refuses this manifest type (e.g. it does not recognize the schema),
// but may accept a different manifest type, the returned error must be a ManifestTypeRejectedError.
func (d *ostreeImageDestination) PutManifest(manifest []byte) error {
d.manifest = string(manifest)
if err := json.Unmarshal(manifest, &d.schema); err != nil {
return err
}
manifestPath := filepath.Join(d.tmpDirPath, d.ref.manifestPath())
if err := ensureParentDirectoryExists(manifestPath); err != nil {
return err
}
return ioutil.WriteFile(manifestPath, manifest, 0644)
}
func (d *ostreeImageDestination) PutSignatures(signatures [][]byte) error {
path := filepath.Join(d.tmpDirPath, d.ref.signaturePath(0))
if err := ensureParentDirectoryExists(path); err != nil {
return err
}
for i, sig := range signatures {
signaturePath := filepath.Join(d.tmpDirPath, d.ref.signaturePath(i))
if err := ioutil.WriteFile(signaturePath, sig, 0644); err != nil {
return err
}
}
return nil
}
func (d *ostreeImageDestination) Commit() error {
repo, err := otbuiltin.OpenRepo(d.ref.repo)
if err != nil {
return err
}
_, err = repo.PrepareTransaction()
if err != nil {
return err
}
for _, layer := range d.schema.LayersDescriptors {
hash := layer.Digest.Hex()
blob := d.blobs[hash]
// if the blob is not present in d.blobs then it is already stored in OSTree,
// and we don't need to import it.
if blob == nil {
continue
}
err := d.importBlob(repo, blob)
if err != nil {
return err
}
}
hash := d.schema.ConfigDescriptor.Digest.Hex()
blob := d.blobs[hash]
if blob != nil {
err := d.importConfig(blob)
if err != nil {
return err
}
}
manifestPath := filepath.Join(d.tmpDirPath, "manifest")
metadata := []string{fmt.Sprintf("docker.manifest=%s", string(d.manifest))}
err = d.ostreeCommit(repo, fmt.Sprintf("ociimage/%s", d.ref.branchName), manifestPath, metadata)
if err != nil {
// do not let CommitTransaction hide a failed manifest commit
return err
}
_, err = repo.CommitTransaction()
return err
}
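Schematically, the repository then contains one ociimage/ branch per blob plus one for the manifest tree (placeholders below, not real values):

ociimage/<layer digest hex> metadata: docker.size=<bytes> (one branch per layer)
ociimage/<config digest hex> metadata: docker.size=<bytes>
ociimage/<encoded image reference> metadata: docker.manifest=<manifest JSON> (manifest and signatures)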
func ensureDirectoryExists(path string) error {
if _, err := os.Stat(path); err != nil && os.IsNotExist(err) {
if err := os.MkdirAll(path, 0755); err != nil {
return err
}
}
return nil
}
func ensureParentDirectoryExists(path string) error {
return ensureDirectoryExists(filepath.Dir(path))
}

View File

@@ -0,0 +1,235 @@
package ostree
import (
"bytes"
"fmt"
"os"
"path/filepath"
"regexp"
"strings"
"github.com/pkg/errors"
"github.com/containers/image/directory/explicitfilepath"
"github.com/containers/image/docker/reference"
"github.com/containers/image/transports"
"github.com/containers/image/types"
)
const defaultOSTreeRepo = "/ostree/repo"
// Transport is an ImageTransport for ostree paths.
var Transport = ostreeTransport{}
type ostreeTransport struct{}
func (t ostreeTransport) Name() string {
return "ostree"
}
func init() {
transports.Register(Transport)
}
// ValidatePolicyConfigurationScope checks that scope is a valid name for signature.PolicyTransportScopes keys
// (i.e. a valid PolicyConfigurationIdentity() or PolicyConfigurationNamespaces() return value).
// It is acceptable to allow an invalid value which will never be matched, it can "only" cause user confusion.
// scope passed to this function will not be "", that value is always allowed.
func (t ostreeTransport) ValidatePolicyConfigurationScope(scope string) error {
sep := strings.Index(scope, ":")
if sep < 0 {
return errors.Errorf("Invalid ostree: scope %s: Must include a repo", scope)
}
repo := scope[:sep]
if !strings.HasPrefix(repo, "/") {
return errors.Errorf("Invalid ostree: scope %s: repository must be an absolute path", scope)
}
cleaned := filepath.Clean(repo)
if cleaned != repo {
return errors.Errorf(`Invalid ostree: scope %s: Uses non-canonical path format, perhaps try with path %s`, scope, cleaned)
}
// FIXME? In the namespaces within a repo,
// we could be verifying the various character set and length restrictions
// from docker/distribution/reference.regexp.go, but other than that there
// are few semantically invalid strings.
return nil
}
// ostreeReference is an ImageReference for ostree paths.
type ostreeReference struct {
image string
branchName string
repo string
}
func (t ostreeTransport) ParseReference(ref string) (types.ImageReference, error) {
var repo = ""
var image = ""
s := strings.SplitN(ref, "@/", 2)
if len(s) == 1 {
image, repo = s[0], defaultOSTreeRepo
} else {
image, repo = s[0], "/"+s[1]
}
return NewReference(image, repo)
}
// NewReference returns an OSTree reference for a specified repo and image.
func NewReference(image string, repo string) (types.ImageReference, error) {
// image is not _really_ in a containers/image/docker/reference format;
// as far as the libOSTree ociimage/* namespace is concerned, it is more or
// less an arbitrary string with an implied tag.
// We use the reference.* parsers basically for the default tag name in
// reference.TagNameOnly, and incidentally for some character set and length
// restrictions.
var ostreeImage reference.Named
s := strings.SplitN(image, ":", 2)
named, err := reference.WithName(s[0])
if err != nil {
return nil, err
}
if len(s) == 1 {
ostreeImage = reference.TagNameOnly(named)
} else {
ostreeImage, err = reference.WithTag(named, s[1])
if err != nil {
return nil, err
}
}
resolved, err := explicitfilepath.ResolvePathToFullyExplicit(repo)
if err != nil {
// With os.IsNotExist(err), the parent directory of repo is also not existent;
// that should ordinarily not happen, but it would be a bit weird to reject
// references which do not specify a repo just because the implicit defaultOSTreeRepo
// does not exist.
if os.IsNotExist(err) && repo == defaultOSTreeRepo {
resolved = repo
} else {
return nil, err
}
}
// This is necessary to prevent directory paths returned by PolicyConfigurationNamespaces
// from being ambiguous with values of PolicyConfigurationIdentity.
if strings.Contains(resolved, ":") {
return nil, errors.Errorf("Invalid OSTreeCI reference %s@%s: path %s contains a colon", image, repo, resolved)
}
return ostreeReference{
image: ostreeImage.String(),
branchName: encodeOStreeRef(ostreeImage.String()),
repo: resolved,
}, nil
}
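Worked examples of the resulting fields (illustrative; repo resolution assumes the paths exist):

"busybox" → image "busybox:latest", branchName "busybox_3Alatest", repo "/ostree/repo"
"busybox:1.26@/tmp/repo" → image "busybox:1.26", branchName "busybox_3A1.26", repo "/tmp/repo"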
func (ref ostreeReference) Transport() types.ImageTransport {
return Transport
}
// StringWithinTransport returns a string representation of the reference, which MUST be such that
// reference.Transport().ParseReference(reference.StringWithinTransport()) returns an equivalent reference.
// NOTE: The returned string is not promised to be equal to the original input to ParseReference;
// e.g. default attribute values omitted by the user may be filled in in the return value, or vice versa.
// WARNING: Do not use the return value in the UI to describe an image, it does not contain the Transport().Name() prefix.
func (ref ostreeReference) StringWithinTransport() string {
return fmt.Sprintf("%s@%s", ref.image, ref.repo)
}
// DockerReference returns a Docker reference associated with this reference
// (fully explicit, i.e. !reference.IsNameOnly, but reflecting user intent,
// not e.g. after redirect or alias processing), or nil if unknown/not applicable.
func (ref ostreeReference) DockerReference() reference.Named {
return nil
}
func (ref ostreeReference) PolicyConfigurationIdentity() string {
return fmt.Sprintf("%s:%s", ref.repo, ref.image)
}
// PolicyConfigurationNamespaces returns a list of other policy configuration namespaces to search
// for if explicit configuration for PolicyConfigurationIdentity() is not set. The list will be processed
// in order, terminating on first match, and an implicit "" is always checked at the end.
// It is STRONGLY recommended for the first element, if any, to be a prefix of PolicyConfigurationIdentity(),
// and each following element to be a prefix of the element preceding it.
func (ref ostreeReference) PolicyConfigurationNamespaces() []string {
s := strings.SplitN(ref.image, ":", 2)
if len(s) != 2 { // Coverage: Should never happen, NewReference above ensures ref.image has a :tag.
panic(fmt.Sprintf("Internal inconsistency: ref.image value %q does not have a :tag", ref.image))
}
name := s[0]
res := []string{}
for {
res = append(res, fmt.Sprintf("%s:%s", ref.repo, name))
lastSlash := strings.LastIndex(name, "/")
if lastSlash == -1 {
break
}
name = name[:lastSlash]
}
return res
}
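For example (a worked expansion, not program output), image "docker.io/library/busybox:latest" in repo "/ostree/repo" yields, in order:

/ostree/repo:docker.io/library/busybox
/ostree/repo:docker.io/library
/ostree/repo:docker.io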
// NewImage returns a types.Image for this reference, possibly specialized for this ImageTransport.
// The caller must call .Close() on the returned Image.
// NOTE: If any kind of signature verification should happen, build an UnparsedImage from the value returned by NewImageSource,
// verify that UnparsedImage, and convert it into a real Image via image.FromUnparsedImage.
func (ref ostreeReference) NewImage(ctx *types.SystemContext) (types.Image, error) {
return nil, errors.New("Reading ostree: images is currently not supported")
}
// NewImageSource returns a types.ImageSource for this reference,
// asking the backend to use a manifest from requestedManifestMIMETypes if possible.
// nil requestedManifestMIMETypes means manifest.DefaultRequestedManifestMIMETypes.
// The caller must call .Close() on the returned ImageSource.
func (ref ostreeReference) NewImageSource(ctx *types.SystemContext, requestedManifestMIMETypes []string) (types.ImageSource, error) {
return nil, errors.New("Reading ostree: images is currently not supported")
}
// NewImageDestination returns a types.ImageDestination for this reference.
// The caller must call .Close() on the returned ImageDestination.
func (ref ostreeReference) NewImageDestination(ctx *types.SystemContext) (types.ImageDestination, error) {
var tmpDir string
if ctx == nil || ctx.OSTreeTmpDirPath == "" {
tmpDir = os.TempDir()
} else {
tmpDir = ctx.OSTreeTmpDirPath
}
return newImageDestination(ref, tmpDir)
}
// DeleteImage deletes the named image from the registry, if supported.
func (ref ostreeReference) DeleteImage(ctx *types.SystemContext) error {
return errors.Errorf("Deleting images not implemented for ostree: images")
}
var ostreeRefRegexp = regexp.MustCompile(`^[A-Za-z0-9.-]$`)
func encodeOStreeRef(in string) string {
var buffer bytes.Buffer
for i := range in {
sub := in[i : i+1]
if ostreeRefRegexp.MatchString(sub) {
buffer.WriteString(sub)
} else {
buffer.WriteString(fmt.Sprintf("_%02X", sub[0]))
}
}
return buffer.String()
}
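A standalone copy of the escaping scheme for illustration: any byte outside [A-Za-z0-9.-] becomes a _XX hex escape, keeping the branch name a valid OSTree ref:

package main

import (
	"bytes"
	"fmt"
	"regexp"
)

var refByte = regexp.MustCompile(`^[A-Za-z0-9.-]$`)

func encode(in string) string {
	var buf bytes.Buffer
	for i := range in {
		sub := in[i : i+1]
		if refByte.MatchString(sub) {
			buf.WriteString(sub)
		} else {
			fmt.Fprintf(&buf, "_%02X", sub[0]) // e.g. "/" → "_2F", ":" → "_3A"
		}
	}
	return buf.String()
}

func main() {
	fmt.Println(encode("docker.io/library/busybox:latest"))
	// Output: docker.io_2Flibrary_2Fbusybox_3Alatest
}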
// manifestPath returns a path for the manifest within an OSTree repository using our conventions.
func (ref ostreeReference) manifestPath() string {
return filepath.Join("manifest", "manifest.json")
}
// signaturePath returns a path for a signature within an OSTree repository using our conventions.
func (ref ostreeReference) signaturePath(index int) string {
return filepath.Join("manifest", fmt.Sprintf("signature-%d", index+1))
}

View File

@@ -1,5 +1,7 @@
// Note: Consider the API unstable until the code supports at least three different image formats or transports.
// NOTE: Keep this in sync with docs/atomic-signature.md and docs/atomic-signature-embedded.json!
package signature
import (

View File

@@ -13,9 +13,9 @@ import (
"github.com/containers/image/image"
"github.com/containers/image/manifest"
"github.com/containers/image/types"
"github.com/containers/storage"
"github.com/containers/storage/pkg/archive"
"github.com/containers/storage/pkg/ioutils"
"github.com/containers/storage/storage"
ddigest "github.com/opencontainers/go-digest"
)
@@ -71,14 +71,9 @@ type storageImage struct {
// newImageSource sets us up to read out an image, which needs to already exist.
func newImageSource(imageRef storageReference) (*storageImageSource, error) {
id := imageRef.resolveID()
if id == "" {
logrus.Errorf("no image matching reference %q found", imageRef.StringWithinTransport())
return nil, ErrNoSuchImage
}
img, err := imageRef.transport.store.GetImage(id)
img, err := imageRef.resolveImage()
if err != nil {
return nil, errors.Wrapf(err, "error reading image %q", id)
return nil, err
}
image := &storageImageSource{
imageRef: imageRef,
@@ -143,9 +138,9 @@ func (s *storageImageDestination) putBlob(stream io.Reader, blobinfo types.BlobI
Size: -1,
}
// Try to read an initial snippet of the blob.
header := make([]byte, 10240)
n, err := stream.Read(header)
if err != nil && err != io.EOF {
buf := [archive.HeaderSize]byte{}
n, err := io.ReadAtLeast(stream, buf[:], len(buf))
if err != nil && err != io.EOF && err != io.ErrUnexpectedEOF {
return errorBlobInfo, err
}
// Set up to read the whole blob (the initial snippet, plus the rest)
@@ -159,9 +154,9 @@ func (s *storageImageDestination) putBlob(stream io.Reader, blobinfo types.BlobI
}
hash := ""
counter := ioutils.NewWriteCounter(hasher.Hash())
defragmented := io.MultiReader(bytes.NewBuffer(header[:n]), stream)
defragmented := io.MultiReader(bytes.NewBuffer(buf[:n]), stream)
multi := io.TeeReader(defragmented, counter)
if (n > 0) && archive.IsArchive(header[:n]) {
if (n > 0) && archive.IsArchive(buf[:n]) {
// It's a filesystem layer. If it's not the first one in the
// image, we assume that the most recently added layer is its
// parent.
@@ -312,7 +307,7 @@ func (s *storageImageDestination) ReapplyBlob(blobinfo types.BlobInfo) (types.Bl
return types.BlobInfo{}, err
}
if layerList, ok := s.Layers[blobinfo.Digest]; !ok || len(layerList) < 1 {
b, err := s.imageRef.transport.store.GetImageBigData(s.ID, blobinfo.Digest.String())
b, err := s.imageRef.transport.store.ImageBigData(s.ID, blobinfo.Digest.String())
if err != nil {
return types.BlobInfo{}, err
}
@@ -336,21 +331,37 @@ func (s *storageImageDestination) Commit() error {
}
img, err := s.imageRef.transport.store.CreateImage(s.ID, nil, lastLayer, "", nil)
if err != nil {
logrus.Debugf("error creating image: %q", err)
return err
if err != storage.ErrDuplicateID {
logrus.Debugf("error creating image: %q", err)
return errors.Wrapf(err, "error creating image %q", s.ID)
}
img, err = s.imageRef.transport.store.Image(s.ID)
if err != nil {
return errors.Wrapf(err, "error reading image %q", s.ID)
}
if img.TopLayer != lastLayer {
logrus.Debugf("error creating image: image with ID %q exists, but uses different layers", err)
return errors.Wrapf(err, "image with ID %q already exists, but uses a different top layer", s.ID)
}
logrus.Debugf("reusing image ID %q", img.ID)
} else {
logrus.Debugf("created new image ID %q", img.ID)
}
logrus.Debugf("created new image ID %q", img.ID)
s.ID = img.ID
names := img.Names
if s.Tag != "" {
// We have a name to set, so move the name to this image.
if err := s.imageRef.transport.store.SetNames(img.ID, []string{s.Tag}); err != nil {
names = append(names, s.Tag)
}
// We have names to set, so move those names to this image.
if len(names) > 0 {
if err := s.imageRef.transport.store.SetNames(img.ID, names); err != nil {
if _, err2 := s.imageRef.transport.store.DeleteImage(img.ID, true); err2 != nil {
logrus.Debugf("error deleting incomplete image %q: %v", img.ID, err2)
}
logrus.Debugf("error setting names on image %q: %v", img.ID, err)
return err
}
logrus.Debugf("set name of image %q to %q", img.ID, s.Tag)
logrus.Debugf("set names of image %q to %v", img.ID, names)
}
// Save the data blobs to disk, and drop their contents from memory.
keys := []ddigest.Digest{}
@@ -405,10 +416,21 @@ func (s *storageImageDestination) Commit() error {
return nil
}
func (s *storageImageDestination) SupportedManifestMIMETypes() []string {
return nil
var manifestMIMETypes = []string{
// TODO(runcom): we'll add OCI as part of another PR here
manifest.DockerV2Schema2MediaType,
manifest.DockerV2Schema1SignedMediaType,
manifest.DockerV2Schema1MediaType,
}
func (s *storageImageDestination) SupportedManifestMIMETypes() []string {
return manifestMIMETypes
}
// PutManifest writes manifest to the destination.
// FIXME? This should also receive a MIME type if known, to differentiate between schema versions.
// If the destination is in principle available, refuses this manifest type (e.g. it does not recognize the schema),
// but may accept a different manifest type, the returned error must be a ManifestTypeRejectedError.
func (s *storageImageDestination) PutManifest(manifest []byte) error {
s.Manifest = make([]byte, len(manifest))
copy(s.Manifest, manifest)
@@ -427,6 +449,11 @@ func (s *storageImageDestination) AcceptsForeignLayerURLs() bool {
return false
}
// MustMatchRuntimeOS returns true iff the destination can store only images targeted for the current runtime OS. False otherwise.
func (s *storageImageDestination) MustMatchRuntimeOS() bool {
return true
}
func (s *storageImageDestination) PutSignatures(signatures [][]byte) error {
sizes := []int{}
sigblob := []byte{}
@@ -453,7 +480,7 @@ func (s *storageImageSource) getBlobAndLayerID(info types.BlobInfo) (rc io.ReadC
return nil, -1, "", err
}
if layerList, ok := s.Layers[info.Digest]; !ok || len(layerList) < 1 {
b, err := s.imageRef.transport.store.GetImageBigData(s.ID, info.Digest.String())
b, err := s.imageRef.transport.store.ImageBigData(s.ID, info.Digest.String())
if err != nil {
return nil, -1, "", err
}
@@ -477,7 +504,7 @@ func (s *storageImageSource) getBlobAndLayerID(info types.BlobInfo) (rc io.ReadC
}
func diffLayer(store storage.Store, layerID string) (rc io.ReadCloser, n int64, err error) {
layer, err := store.GetLayer(layerID)
layer, err := store.Layer(layerID)
if err != nil {
return nil, -1, err
}
@@ -502,7 +529,7 @@ func diffLayer(store storage.Store, layerID string) (rc io.ReadCloser, n int64,
}
func (s *storageImageSource) GetManifest() (manifestBlob []byte, MIMEType string, err error) {
manifestBlob, err = s.imageRef.transport.store.GetImageBigData(s.ID, "manifest")
manifestBlob, err = s.imageRef.transport.store.ImageBigData(s.ID, "manifest")
return manifestBlob, manifest.GuessMIMEType(manifestBlob), err
}
@@ -512,7 +539,7 @@ func (s *storageImageSource) GetTargetManifest(digest ddigest.Digest) (manifestB
func (s *storageImageSource) GetSignatures() (signatures [][]byte, err error) {
var offset int
signature, err := s.imageRef.transport.store.GetImageBigData(s.ID, "signatures")
signature, err := s.imageRef.transport.store.ImageBigData(s.ID, "signatures")
if err != nil {
return nil, err
}
@@ -534,7 +561,7 @@ func (s *storageImageSource) getSize() (int64, error) {
return -1, errors.Wrapf(err, "error reading image %q", s.imageRef.id)
}
for _, name := range names {
bigSize, err := s.imageRef.transport.store.GetImageBigDataSize(s.imageRef.id, name)
bigSize, err := s.imageRef.transport.store.ImageBigDataSize(s.imageRef.id, name)
if err != nil {
return -1, errors.Wrapf(err, "error reading data blob size %q for %q", name, s.imageRef.id)
}
@@ -545,7 +572,7 @@ func (s *storageImageSource) getSize() (int64, error) {
}
for _, layerList := range s.Layers {
for _, layerID := range layerList {
layer, err := s.imageRef.transport.store.GetLayer(layerID)
layer, err := s.imageRef.transport.store.Layer(layerID)
if err != nil {
return -1, err
}

View File

@@ -6,6 +6,8 @@ import (
"github.com/Sirupsen/logrus"
"github.com/containers/image/docker/reference"
"github.com/containers/image/types"
"github.com/containers/storage"
"github.com/pkg/errors"
)
// A storageReference holds an arbitrary name and/or an ID, which is a 32-byte
@@ -32,15 +34,36 @@ func newReference(transport storageTransport, reference, id string, name referen
}
// Resolve the reference's name to an image ID in the store, if there's already
// one present with the same name or ID.
func (s *storageReference) resolveID() string {
// one present with the same name or ID, and return the image.
func (s *storageReference) resolveImage() (*storage.Image, error) {
if s.id == "" {
image, err := s.transport.store.GetImage(s.reference)
image, err := s.transport.store.Image(s.reference)
if image != nil && err == nil {
s.id = image.ID
}
}
return s.id
if s.id == "" {
logrus.Errorf("reference %q does not resolve to an image ID", s.StringWithinTransport())
return nil, ErrNoSuchImage
}
img, err := s.transport.store.Image(s.id)
if err != nil {
return nil, errors.Wrapf(err, "error reading image %q", s.id)
}
if s.reference != "" {
nameMatch := false
for _, name := range img.Names {
if name == s.reference {
nameMatch = true
break
}
}
if !nameMatch {
logrus.Errorf("no image matching reference %q found", s.StringWithinTransport())
return nil, ErrNoSuchImage
}
}
return img, nil
}
// Return a Transport object that defaults to using the same store that we used
@@ -60,7 +83,7 @@ func (s storageReference) DockerReference() reference.Named {
// disambiguate between images which may be present in multiple stores and
// share only their names.
func (s storageReference) StringWithinTransport() string {
storeSpec := "[" + s.transport.store.GetGraphDriverName() + "@" + s.transport.store.GetGraphRoot() + "]"
storeSpec := "[" + s.transport.store.GraphDriverName() + "@" + s.transport.store.GraphRoot() + "]"
if s.name == nil {
return storeSpec + "@" + s.id
}
@@ -79,8 +102,8 @@ func (s storageReference) PolicyConfigurationIdentity() string {
// graph root, in case we're using multiple drivers in the same directory for
// some reason.
func (s storageReference) PolicyConfigurationNamespaces() []string {
storeSpec := "[" + s.transport.store.GetGraphDriverName() + "@" + s.transport.store.GetGraphRoot() + "]"
driverlessStoreSpec := "[" + s.transport.store.GetGraphRoot() + "]"
storeSpec := "[" + s.transport.store.GraphDriverName() + "@" + s.transport.store.GraphRoot() + "]"
driverlessStoreSpec := "[" + s.transport.store.GraphRoot() + "]"
namespaces := []string{}
if s.name != nil {
if s.id != "" {
@@ -103,14 +126,13 @@ func (s storageReference) NewImage(ctx *types.SystemContext) (types.Image, error
}
func (s storageReference) DeleteImage(ctx *types.SystemContext) error {
id := s.resolveID()
if id == "" {
logrus.Errorf("reference %q does not resolve to an image ID", s.StringWithinTransport())
return ErrNoSuchImage
img, err := s.resolveImage()
if err != nil {
return err
}
layers, err := s.transport.store.DeleteImage(id, true)
layers, err := s.transport.store.DeleteImage(img.ID, true)
if err == nil {
logrus.Debugf("deleted image %q", id)
logrus.Debugf("deleted image %q", img.ID)
for _, layer := range layers {
logrus.Debugf("deleted layer %q", layer)
}

View File

@@ -2,7 +2,6 @@ package storage
import (
"path/filepath"
"regexp"
"strings"
"github.com/pkg/errors"
@@ -11,7 +10,7 @@ import (
"github.com/containers/image/docker/reference"
"github.com/containers/image/transports"
"github.com/containers/image/types"
"github.com/containers/storage/storage"
"github.com/containers/storage"
"github.com/opencontainers/go-digest"
ddigest "github.com/opencontainers/go-digest"
)
@@ -30,7 +29,6 @@ var (
// ErrPathNotAbsolute is returned when a graph root is not an absolute
// path name.
ErrPathNotAbsolute = errors.New("path name is not absolute")
idRegexp = regexp.MustCompile("^(sha256:)?([0-9a-fA-F]{64})$")
)
// StoreTransport is an ImageTransport that uses a storage.Store to parse
@@ -100,16 +98,19 @@ func (s storageTransport) ParseStoreReference(store storage.Store, ref string) (
return nil, err
}
}
sum, err = digest.Parse("sha256:" + refInfo[1])
if err != nil {
return nil, err
sum, err = digest.Parse(refInfo[1])
if err != nil || sum.Validate() != nil {
sum, err = digest.Parse("sha256:" + refInfo[1])
if err != nil || sum.Validate() != nil {
return nil, err
}
}
} else { // Coverage: len(refInfo) is always 1 or 2
// Anything else: store specified in a form we don't
// recognize.
return nil, ErrInvalidReference
}
storeSpec := "[" + store.GetGraphDriverName() + "@" + store.GetGraphRoot() + "]"
storeSpec := "[" + store.GraphDriverName() + "@" + store.GraphRoot() + "]"
id := ""
if sum.Validate() == nil {
id = sum.Hex()
@@ -204,14 +205,14 @@ func (s storageTransport) GetStoreImage(store storage.Store, ref types.ImageRefe
if dref == nil {
if sref, ok := ref.(*storageReference); ok {
if sref.id != "" {
if img, err := store.GetImage(sref.id); err == nil {
if img, err := store.Image(sref.id); err == nil {
return img, nil
}
}
}
return nil, ErrInvalidReference
}
return store.GetImage(verboseName(dref))
return store.Image(verboseName(dref))
}
func (s *storageTransport) GetImage(ref types.ImageReference) (*storage.Image, error) {
@@ -285,7 +286,7 @@ func verboseName(name reference.Named) string {
name = reference.TagNameOnly(name)
tag := ""
if tagged, ok := name.(reference.NamedTagged); ok {
tag = tagged.Tag()
tag = ":" + tagged.Tag()
}
return name.Name() + ":" + tag
return name.Name() + tag
}
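A worked example of the fix (illustrative): for a digest-only reference the NamedTagged assertion fails, so the old code returned a name with a trailing colon ("docker.io/library/busybox:"), while the new code returns the bare name; tagged names are unchanged ("docker.io/library/busybox:latest").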

View File

@@ -12,6 +12,7 @@ import (
_ "github.com/containers/image/docker/daemon"
_ "github.com/containers/image/oci/layout"
_ "github.com/containers/image/openshift"
_ "github.com/containers/image/ostree"
_ "github.com/containers/image/storage"
"github.com/containers/image/transports"
"github.com/containers/image/types"

View File

@@ -148,11 +148,11 @@ type ImageDestination interface {
SupportsSignatures() error
// ShouldCompressLayers returns true iff it is desirable to compress layer blobs written to this destination.
ShouldCompressLayers() bool
// AcceptsForeignLayerURLs returns false iff foreign layers in manifest should be actually
// uploaded to the image destination, true otherwise.
AcceptsForeignLayerURLs() bool
// MustMatchRuntimeOS returns true iff the destination can store only images targeted for the current runtime OS. False otherwise.
MustMatchRuntimeOS() bool
// PutBlob writes contents of stream and returns data representing the result (with all data filled in).
// inputInfo.Digest can be optionally provided if known; it is not mandatory for the implementation to verify it.
// inputInfo.Size is the expected length of stream, if known.
@@ -167,8 +167,11 @@ type ImageDestination interface {
HasBlob(info BlobInfo) (bool, int64, error)
// ReapplyBlob informs the image destination that a blob for which HasBlob previously returned true would have been passed to PutBlob if it had returned false. Like HasBlob and unlike PutBlob, the digest can not be empty. If the blob is a filesystem layer, this signifies that the changes it describes need to be applied again when composing a filesystem tree.
ReapplyBlob(info BlobInfo) (BlobInfo, error)
// PutManifest writes manifest to the destination.
// FIXME? This should also receive a MIME type if known, to differentiate between schema versions.
PutManifest([]byte) error
// If the destination is in principle available, refuses this manifest type (e.g. it does not recognize the schema),
// but may accept a different manifest type, the returned error must be a ManifestTypeRejectedError.
PutManifest(manifest []byte) error
PutSignatures(signatures [][]byte) error
// Commit marks the process of storing the image as successful and asks for the image to be persisted.
// WARNING: This does not have any transactional semantics:
@@ -177,6 +180,16 @@ type ImageDestination interface {
Commit() error
}
// ManifestTypeRejectedError is returned by ImageDestination.PutManifest if the destination is in principle available,
// refuses specifically this manifest type, but may accept a different manifest type.
type ManifestTypeRejectedError struct { // We only use a struct to allow a type assertion, without limiting the contents of the error otherwise.
Err error
}
func (e ManifestTypeRejectedError) Error() string {
return e.Err.Error()
}
// UnparsedImage is an Image-to-be; until it is verified and accepted, it only carries its identity and caches manifest and signature blobs.
// Thus, an UnparsedImage can be created from an ImageSource simply by fetching blobs without interpreting them,
// allowing cryptographic signature verification to happen first, before even fetching the manifest, or parsing anything else.
@@ -213,6 +226,10 @@ type Image interface {
// The Digest field is guaranteed to be provided; Size may be -1.
// WARNING: The list may contain duplicates, and they are semantically relevant.
LayerInfos() []BlobInfo
// EmbeddedDockerReferenceConflicts returns whether a Docker reference embedded in the manifest, if any, conflicts with the destination ref.
// It returns false if the manifest does not embed a Docker reference.
// (This embedding unfortunately happens for Docker schema1, please do not add support for this in any new formats.)
EmbeddedDockerReferenceConflicts(ref reference.Named) bool
// Inspect returns various information for (skopeo inspect) parsed from the manifest and configuration.
Inspect() (*ImageInspectInfo, error)
// UpdatedImageNeedsLayerDiffIDs returns true iff UpdatedImage(options) needs InformationOnly.LayerDiffIDs.
@@ -232,8 +249,9 @@ type Image interface {
// ManifestUpdateOptions is a way to pass named optional arguments to Image.UpdatedManifest
type ManifestUpdateOptions struct {
LayerInfos []BlobInfo // Complete BlobInfos (size+digest+urls) which should replace the originals, in order (the root layer first, and then successive layered layers)
ManifestMIMEType string
LayerInfos []BlobInfo // Complete BlobInfos (size+digest+urls) which should replace the originals, in order (the root layer first, and then successive layered layers)
EmbeddedDockerReference reference.Named
ManifestMIMEType string
// The values below are NOT requests to modify the image; they provide optional context which may or may not be used.
InformationOnly ManifestUpdateInformation
}
@@ -289,7 +307,10 @@ type SystemContext struct {
// If not "", a directory containing a CA certificate (ending with ".crt"),
// a client certificate (ending with ".cert") and a client certificate key
// (ending with ".key") used when talking to a Docker Registry.
DockerCertPath string
DockerCertPath string
// If not "", overrides the systems default path for a directory containing host[:port] subdirectories with the same structure as DockerCertPath above.
// Ignored if DockerCertPath is non-empty.
DockerPerHostCertDirPath string
DockerInsecureSkipTLSVerify bool // Allow contacting docker registries over HTTP, or HTTPS with failed TLS verification. Note that this does not affect other TLS connections.
// if nil, the library tries to parse ~/.docker/config.json to retrieve credentials
DockerAuthConfig *DockerAuthConfig
@@ -299,6 +320,8 @@ type SystemContext struct {
// Note that this field is used mainly to integrate containers/image into projectatomic/docker
// in order to not break any existing docker's integration tests.
DockerDisableV1Ping bool
// Directory to use for OSTree temporary files
OSTreeTmpDirPath string
}
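
A usage sketch for the two new SystemContext fields (the helper and both paths are illustrative; the directory layout follows the comment above, host[:port] subdirectories holding *.crt, *.cert and *.key files):

package sketch

import "github.com/containers/image/types"

// newCtx is a hypothetical helper showing the new knobs side by side.
func newCtx() *types.SystemContext {
	return &types.SystemContext{
		// Per-registry TLS material, e.g.:
		//   /etc/containers/certs.d/myregistry.example.com:5000/ca.crt
		//   /etc/containers/certs.d/myregistry.example.com:5000/client.cert
		//   /etc/containers/certs.d/myregistry.example.com:5000/client.key
		DockerPerHostCertDirPath: "/etc/containers/certs.d",
		// Scratch directory for the newly added ostree transport.
		OSTreeTmpDirPath: "/var/tmp/ostree-scratch",
	}
}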
// ProgressProperties is used to pass information from the copy code to a monitor which


@@ -1,5 +1,5 @@
github.com/Sirupsen/logrus 7f4b1adc791766938c29457bed0703fb9134421a
github.com/containers/storage 5cbbc6bafb45bd7ef10486b673deb3b81bb3b787
github.com/containers/storage 29d2c86eadb88a0cbfbbedec8762126a3987d4c3
github.com/davecgh/go-spew 346938d642f2ec3594ed81d874461961cd0faa76
github.com/docker/distribution df5327f76fb6468b84a87771e361762b8be23fdb
github.com/docker/docker 75843d36aa5c3eaade50da005f9e0ff2602f3d5e
@@ -15,7 +15,7 @@ github.com/mattn/go-shellwords 005a0944d84452842197c2108bd9168ced206f78
github.com/mistifyio/go-zfs c0224de804d438efd11ea6e52ada8014537d6062
github.com/mtrmac/gpgme b2432428689ca58c2b8e8dea9449d3295cf96fc9
github.com/opencontainers/go-digest aa2ec055abd10d26d539eb630a92241b781ce4bc
github.com/opencontainers/image-spec v1.0.0-rc4
github.com/opencontainers/image-spec v1.0.0-rc6
github.com/opencontainers/runc 6b1d0e76f239ffb435445e5ae316d2676c07c6e3
github.com/pborman/uuid 1b00554d822231195d1babd97ff4a781231955c9
github.com/pkg/errors 248dadf4e9068a0b3e79f02ed0a610d935de5302
@@ -29,3 +29,8 @@ gopkg.in/cheggaaa/pb.v1 d7e6ca3010b6f084d8056847f55d7f572f180678
gopkg.in/yaml.v2 a3f3340b5840cee44f372bddb5880fcbc419b46a
k8s.io/client-go bcde30fb7eaed76fd98a36b4120321b94995ffb6
github.com/xeipuuv/gojsonschema master
github.com/xeipuuv/gojsonreference master
github.com/xeipuuv/gojsonpointer master
github.com/tchap/go-patricia v2.2.6
github.com/opencontainers/selinux ba1aefe8057f1d0cfb8e88d0ec1dc85925ef987d
github.com/BurntSushi/toml b26d9c308763d68093482582cea63d69be07a0f0


@@ -10,6 +10,7 @@ import (
"github.com/containers/storage/pkg/ioutils"
"github.com/containers/storage/pkg/stringid"
"github.com/containers/storage/pkg/truncindex"
)
var (
@@ -92,14 +93,19 @@ type ContainerStore interface {
type containerStore struct {
lockfile Locker
dir string
containers []Container
containers []*Container
idindex *truncindex.TruncIndex
byid map[string]*Container
bylayer map[string]*Container
byname map[string]*Container
}
func (r *containerStore) Containers() ([]Container, error) {
return r.containers, nil
containers := make([]Container, len(r.containers))
for i := range r.containers {
containers[i] = *(r.containers[i])
}
return containers, nil
}
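
Switching the store from []Container to []*Container (and handing value copies to callers) removes a classic Go hazard the old code carried: the byid/byname/bylayer maps held pointers into the slice's backing array, and the old Create even took container = &r.containers[len(r.containers)-1], so any later append could reallocate the array and leave those pointers at stale copies. A standalone demonstration of the hazard:

package main

import "fmt"

type Container struct{ ID string }

func main() {
	// Old pattern: a map of pointers into a []Container slice.
	containers := []Container{{ID: "a"}}
	byid := map[string]*Container{"a": &containers[0]}

	// append may reallocate the backing array...
	for i := 0; i < 100; i++ {
		containers = append(containers, Container{ID: fmt.Sprint(i)})
	}

	// ...so the map now points at a stale copy of element 0.
	containers[0].ID = "changed"
	fmt.Println(byid["a"].ID) // prints "a", not "changed"
}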
func (r *containerStore) containerspath() string {
@@ -121,29 +127,31 @@ func (r *containerStore) Load() error {
if err != nil && !os.IsNotExist(err) {
return err
}
containers := []Container{}
containers := []*Container{}
layers := make(map[string]*Container)
idlist := []string{}
ids := make(map[string]*Container)
names := make(map[string]*Container)
if err = json.Unmarshal(data, &containers); len(data) == 0 || err == nil {
for n, container := range containers {
ids[container.ID] = &containers[n]
layers[container.LayerID] = &containers[n]
idlist = append(idlist, container.ID)
ids[container.ID] = containers[n]
layers[container.LayerID] = containers[n]
for _, name := range container.Names {
if conflict, ok := names[name]; ok {
r.removeName(conflict, name)
needSave = true
}
names[name] = &containers[n]
names[name] = containers[n]
}
}
}
r.containers = containers
r.idindex = truncindex.NewTruncIndex(idlist)
r.byid = ids
r.bylayer = layers
r.byname = names
if needSave {
r.Touch()
return r.Save()
}
return nil
@@ -158,6 +166,7 @@ func (r *containerStore) Save() error {
if err != nil {
return err
}
defer r.Touch()
return ioutils.AtomicWriteFile(rpath, jdata, 0600)
}
@@ -174,7 +183,7 @@ func newContainerStore(dir string) (ContainerStore, error) {
cstore := containerStore{
lockfile: lockfile,
dir: dir,
containers: []Container{},
containers: []*Container{},
byid: make(map[string]*Container),
bylayer: make(map[string]*Container),
byname: make(map[string]*Container),
@@ -185,30 +194,35 @@ func newContainerStore(dir string) (ContainerStore, error) {
return &cstore, nil
}
func (r *containerStore) ClearFlag(id string, flag string) error {
if container, ok := r.byname[id]; ok {
id = container.ID
func (r *containerStore) lookup(id string) (*Container, bool) {
if container, ok := r.byid[id]; ok {
return container, ok
} else if container, ok := r.byname[id]; ok {
return container, ok
} else if container, ok := r.bylayer[id]; ok {
id = container.ID
return container, ok
} else if longid, err := r.idindex.Get(id); err == nil {
if container, ok := r.byid[longid]; ok {
return container, ok
}
}
if _, ok := r.byid[id]; !ok {
return nil, false
}
func (r *containerStore) ClearFlag(id string, flag string) error {
container, ok := r.lookup(id)
if !ok {
return ErrContainerUnknown
}
container := r.byid[id]
delete(container.Flags, flag)
return r.Save()
}
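
The new lookup helper replaces the copy-pasted byname/bylayer/byid preambles in every method and adds short-ID resolution through the truncindex. A reduced, self-contained sketch of the same precedence — full ID, then name, then layer ID, then unambiguous ID prefix — with a linear scan standing in for truncindex.TruncIndex:

package sketch

import "strings"

type Container struct {
	ID, LayerID string
	Names       []string
}

type store struct {
	byid, byname, bylayer map[string]*Container
	ids                   []string // known full IDs; stand-in for the truncindex
}

func (s *store) lookup(key string) (*Container, bool) {
	if c, ok := s.byid[key]; ok {
		return c, true
	}
	if c, ok := s.byname[key]; ok {
		return c, true
	}
	if c, ok := s.bylayer[key]; ok {
		return c, true
	}
	var match *Container
	for _, id := range s.ids {
		if strings.HasPrefix(id, key) {
			if match != nil {
				return nil, false // ambiguous prefix resolves to nothing
			}
			match = s.byid[id]
		}
	}
	return match, match != nil
}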
func (r *containerStore) SetFlag(id string, flag string, value interface{}) error {
if container, ok := r.byname[id]; ok {
id = container.ID
} else if container, ok := r.bylayer[id]; ok {
id = container.ID
}
if _, ok := r.byid[id]; !ok {
container, ok := r.lookup(id)
if !ok {
return ErrContainerUnknown
}
container := r.byid[id]
container.Flags[flag] = value
return r.Save()
}
@@ -231,7 +245,7 @@ func (r *containerStore) Create(id string, names []string, image, layer, metadat
}
}
if err == nil {
newContainer := Container{
container = &Container{
ID: id,
Names: names,
ImageID: image,
@@ -241,9 +255,9 @@ func (r *containerStore) Create(id string, names []string, image, layer, metadat
BigDataSizes: make(map[string]int64),
Flags: make(map[string]interface{}),
}
r.containers = append(r.containers, newContainer)
container = &r.containers[len(r.containers)-1]
r.containers = append(r.containers, container)
r.byid[id] = container
r.idindex.Add(id)
r.bylayer[layer] = container
for _, name := range names {
r.byname[name] = container
@@ -253,25 +267,15 @@ func (r *containerStore) Create(id string, names []string, image, layer, metadat
return container, err
}
func (r *containerStore) GetMetadata(id string) (string, error) {
if container, ok := r.byname[id]; ok {
id = container.ID
} else if container, ok := r.bylayer[id]; ok {
id = container.ID
}
if container, ok := r.byid[id]; ok {
func (r *containerStore) Metadata(id string) (string, error) {
if container, ok := r.lookup(id); ok {
return container.Metadata, nil
}
return "", ErrContainerUnknown
}
func (r *containerStore) SetMetadata(id, metadata string) error {
if container, ok := r.byname[id]; ok {
id = container.ID
} else if container, ok := r.bylayer[id]; ok {
id = container.ID
}
if container, ok := r.byid[id]; ok {
if container, ok := r.lookup(id); ok {
container.Metadata = metadata
return r.Save()
}
@@ -279,22 +283,11 @@ func (r *containerStore) SetMetadata(id, metadata string) error {
}
func (r *containerStore) removeName(container *Container, name string) {
newNames := []string{}
for _, oldName := range container.Names {
if oldName != name {
newNames = append(newNames, oldName)
}
}
container.Names = newNames
container.Names = stringSliceWithoutValue(container.Names, name)
}
func (r *containerStore) SetNames(id string, names []string) error {
if container, ok := r.byname[id]; ok {
id = container.ID
} else if container, ok := r.bylayer[id]; ok {
id = container.ID
}
if container, ok := r.byid[id]; ok {
if container, ok := r.lookup(id); ok {
for _, name := range container.Names {
delete(r.byname, name)
}
@@ -311,133 +304,104 @@ func (r *containerStore) SetNames(id string, names []string) error {
}
func (r *containerStore) Delete(id string) error {
if container, ok := r.byname[id]; ok {
id = container.ID
} else if container, ok := r.bylayer[id]; ok {
id = container.ID
}
if _, ok := r.byid[id]; !ok {
container, ok := r.lookup(id)
if !ok {
return ErrContainerUnknown
}
if container, ok := r.byid[id]; ok {
newContainers := []Container{}
for _, candidate := range r.containers {
if candidate.ID != id {
newContainers = append(newContainers, candidate)
}
}
delete(r.byid, container.ID)
delete(r.bylayer, container.LayerID)
for _, name := range container.Names {
delete(r.byname, name)
}
r.containers = newContainers
if err := r.Save(); err != nil {
return err
}
if err := os.RemoveAll(r.datadir(id)); err != nil {
return err
id = container.ID
newContainers := []*Container{}
for _, candidate := range r.containers {
if candidate.ID != id {
newContainers = append(newContainers, candidate)
}
}
delete(r.byid, id)
r.idindex.Delete(id)
delete(r.bylayer, container.LayerID)
for _, name := range container.Names {
delete(r.byname, name)
}
r.containers = newContainers
if err := r.Save(); err != nil {
return err
}
if err := os.RemoveAll(r.datadir(id)); err != nil {
return err
}
return nil
}
func (r *containerStore) Get(id string) (*Container, error) {
if c, ok := r.byname[id]; ok {
return c, nil
} else if c, ok := r.bylayer[id]; ok {
return c, nil
}
if c, ok := r.byid[id]; ok {
return c, nil
if container, ok := r.lookup(id); ok {
return container, nil
}
return nil, ErrContainerUnknown
}
func (r *containerStore) Lookup(name string) (id string, err error) {
container, ok := r.byname[name]
if !ok {
container, ok = r.byid[name]
if !ok {
return "", ErrContainerUnknown
}
if container, ok := r.lookup(name); ok {
return container.ID, nil
}
return container.ID, nil
return "", ErrContainerUnknown
}
func (r *containerStore) Exists(id string) bool {
if _, ok := r.byname[id]; ok {
return true
}
if _, ok := r.bylayer[id]; ok {
return true
}
if _, ok := r.byid[id]; ok {
return true
}
return false
_, ok := r.lookup(id)
return ok
}
func (r *containerStore) GetBigData(id, key string) ([]byte, error) {
if img, ok := r.byname[id]; ok {
id = img.ID
}
if _, ok := r.byid[id]; !ok {
func (r *containerStore) BigData(id, key string) ([]byte, error) {
c, ok := r.lookup(id)
if !ok {
return nil, ErrContainerUnknown
}
return ioutil.ReadFile(r.datapath(id, key))
return ioutil.ReadFile(r.datapath(c.ID, key))
}
func (r *containerStore) GetBigDataSize(id, key string) (int64, error) {
if img, ok := r.byname[id]; ok {
id = img.ID
}
if _, ok := r.byid[id]; !ok {
func (r *containerStore) BigDataSize(id, key string) (int64, error) {
c, ok := r.lookup(id)
if !ok {
return -1, ErrContainerUnknown
}
if size, ok := r.byid[id].BigDataSizes[key]; ok {
if size, ok := c.BigDataSizes[key]; ok {
return size, nil
}
return -1, ErrSizeUnknown
}
func (r *containerStore) GetBigDataNames(id string) ([]string, error) {
if img, ok := r.byname[id]; ok {
id = img.ID
}
if _, ok := r.byid[id]; !ok {
func (r *containerStore) BigDataNames(id string) ([]string, error) {
c, ok := r.lookup(id)
if !ok {
return nil, ErrContainerUnknown
}
return r.byid[id].BigDataNames, nil
return c.BigDataNames, nil
}
func (r *containerStore) SetBigData(id, key string, data []byte) error {
if img, ok := r.byname[id]; ok {
id = img.ID
}
if _, ok := r.byid[id]; !ok {
c, ok := r.lookup(id)
if !ok {
return ErrContainerUnknown
}
if err := os.MkdirAll(r.datadir(id), 0700); err != nil {
if err := os.MkdirAll(r.datadir(c.ID), 0700); err != nil {
return err
}
err := ioutils.AtomicWriteFile(r.datapath(id, key), data, 0600)
err := ioutils.AtomicWriteFile(r.datapath(c.ID, key), data, 0600)
if err == nil {
save := false
oldSize, ok := r.byid[id].BigDataSizes[key]
r.byid[id].BigDataSizes[key] = int64(len(data))
if !ok || oldSize != r.byid[id].BigDataSizes[key] {
oldSize, ok := c.BigDataSizes[key]
c.BigDataSizes[key] = int64(len(data))
if !ok || oldSize != c.BigDataSizes[key] {
save = true
}
add := true
for _, name := range r.byid[id].BigDataNames {
for _, name := range c.BigDataNames {
if name == key {
add = false
break
}
}
if add {
r.byid[id].BigDataNames = append(r.byid[id].BigDataNames, key)
c.BigDataNames = append(c.BigDataNames, key)
save = true
}
if save {
@@ -476,6 +440,10 @@ func (r *containerStore) Modified() (bool, error) {
return r.lockfile.Modified()
}
func (r *containerStore) IsReadWrite() bool {
return r.lockfile.IsReadWrite()
}
func (r *containerStore) TouchedSince(when time.Time) bool {
return r.lockfile.TouchedSince(when)
}


@@ -185,8 +185,8 @@ func (a *Driver) Status() [][2]string {
}
}
// GetMetadata not implemented
func (a *Driver) GetMetadata(id string) (map[string]string, error) {
// Metadata not implemented
func (a *Driver) Metadata(id string) (map[string]string, error) {
return nil, nil
}
@@ -372,6 +372,12 @@ func (a *Driver) Diff(id, parent string) (archive.Archive, error) {
})
}
// AdditionalImageStores returns additional image stores supported by the driver
func (a *Driver) AdditionalImageStores() []string {
var imageStores []string
return imageStores
}
type fileGetNilCloser struct {
storage.FileGetter
}

View File

@@ -143,8 +143,8 @@ func (d *Driver) Status() [][2]string {
return status
}
// GetMetadata returns empty metadata for this driver.
func (d *Driver) GetMetadata(id string) (map[string]string, error) {
// Metadata returns empty metadata for this driver.
func (d *Driver) Metadata(id string) (map[string]string, error) {
return nil, nil
}
@@ -518,3 +518,9 @@ func (d *Driver) Exists(id string) bool {
_, err := os.Stat(dir)
return err == nil
}
// AdditionalImageStores returns additional image stores supported by the driver
func (d *Driver) AdditionalImageStores() []string {
var imageStores []string
return imageStores
}


@@ -28,7 +28,6 @@ import (
"github.com/containers/storage/pkg/loopback"
"github.com/containers/storage/pkg/mount"
"github.com/containers/storage/pkg/parsers"
"github.com/containers/storage/storageversion"
"github.com/docker/go-units"
"github.com/opencontainers/selinux/go-selinux/label"
@@ -1668,17 +1667,17 @@ func (devices *DeviceSet) initDevmapper(doInit bool) error {
}
// https://github.com/docker/docker/issues/4036
if supported := devicemapper.UdevSetSyncSupport(true); !supported {
if storageversion.IAmStatic == "true" {
logrus.Errorf("devmapper: Udev sync is not supported. This will lead to data loss and unexpected behavior. Install a dynamic binary to use devicemapper or select a different storage driver. For more information, see https://docs.docker.com/engine/reference/commandline/daemon/#daemon-storage-driver-option")
} else {
logrus.Errorf("devmapper: Udev sync is not supported. This will lead to data loss and unexpected behavior. Install a more recent version of libdevmapper or select a different storage driver. For more information, see https://docs.docker.com/engine/reference/commandline/daemon/#daemon-storage-driver-option")
}
if !devices.overrideUdevSyncCheck {
return graphdriver.ErrNotSupported
}
}
// if supported := devicemapper.UdevSetSyncSupport(true); !supported {
// if storageversion.IAmStatic == "true" {
// logrus.Errorf("devmapper: Udev sync is not supported. This will lead to data loss and unexpected behavior. Install a dynamic binary to use devicemapper or select a different storage driver. For more information, see https://docs.docker.com/engine/reference/commandline/daemon/#daemon-storage-driver-option")
// } else {
// logrus.Errorf("devmapper: Udev sync is not supported. This will lead to data loss and unexpected behavior. Install a more recent version of libdevmapper or select a different storage driver. For more information, see https://docs.docker.com/engine/reference/commandline/daemon/#daemon-storage-driver-option")
// }
//
// if !devices.overrideUdevSyncCheck {
// return graphdriver.ErrNotSupported
// }
// }
// create the root dir of the devmapper driver with ownership matching this
// daemon's remapped root uid/gid so containers can start properly


@@ -94,8 +94,8 @@ func (d *Driver) Status() [][2]string {
return status
}
// GetMetadata returns a map of information about the device.
func (d *Driver) GetMetadata(id string) (map[string]string, error) {
// Metadata returns a map of information about the device.
func (d *Driver) Metadata(id string) (map[string]string, error) {
m, err := d.DeviceSet.exportDeviceMetadata(id)
if err != nil {
@@ -224,3 +224,9 @@ func (d *Driver) Put(id string) error {
func (d *Driver) Exists(id string) bool {
return d.DeviceSet.HasDevice(id)
}
// AdditionalImageStores returns additional image stores supported by the driver
func (d *Driver) AdditionalImageStores() []string {
var imageStores []string
return imageStores
}


@@ -69,11 +69,13 @@ type ProtoDriver interface {
Status() [][2]string
// Returns a set of key-value pairs which give low level information
// about the image/container driver is managing.
GetMetadata(id string) (map[string]string, error)
Metadata(id string) (map[string]string, error)
// Cleanup performs necessary tasks to release resources
// held by the driver, e.g., unmounting all layered filesystems
// known to this driver.
Cleanup() error
// AdditionalImageStores returns additional image stores supported by the driver
AdditionalImageStores() []string
}
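
A small caller-side sketch of the revised interface (describe is hypothetical): the getters lose their Get prefix, matching Go's naming convention, and every driver now reports whatever extra read-only stores it is configured to search.

package sketch

// ProtoDriver excerpt, reduced to the two methods this sketch uses.
type ProtoDriver interface {
	Metadata(id string) (map[string]string, error)
	AdditionalImageStores() []string
}

// describe gathers a layer's driver metadata plus every additional
// read-only store the driver can consult.
func describe(d ProtoDriver, id string) (map[string]string, []string, error) {
	meta, err := d.Metadata(id)
	if err != nil {
		return nil, nil, err
	}
	return meta, d.AdditionalImageStores(), nil
}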
// Driver is the interface for layered/snapshot file system drivers.


@@ -1,6 +1,6 @@
// +build linux
package overlay2
package overlay
import (
"bytes"


@@ -1,6 +1,6 @@
// +build linux
package overlay2
package overlay
import (
"bufio"
@@ -10,6 +10,7 @@ import (
"os"
"os/exec"
"path"
"path/filepath"
"strconv"
"strings"
"syscall"
@@ -61,10 +62,9 @@ var (
// that mounts do not fail due to length.
const (
driverName = "overlay2"
linkDir = "l"
lowerFile = "lower"
maxDepth = 128
linkDir = "l"
lowerFile = "lower"
maxDepth = 128
// idLength represents the number of random characters
// which can be used to create the unique link identifier
@@ -78,22 +78,24 @@ const (
// Driver contains information about the home directory and the list of active mounts that are created using this driver.
type Driver struct {
name string
home string
uidMaps []idtools.IDMap
gidMaps []idtools.IDMap
ctr *graphdriver.RefCounter
opts *overlayOptions
}
var backingFs = "<unknown>"
func init() {
graphdriver.Register(driverName, Init)
graphdriver.Register("overlay", InitAsOverlay)
graphdriver.Register("overlay2", InitAsOverlay2)
}
// Init returns a native diff driver for the overlay filesystem.
// If the overlay filesystem is not supported on the host, graphdriver.ErrNotSupported is returned.
// If the overlay filesystem is not supported over the existing filesystem, graphdriver.ErrIncompatibleFS is returned.
func Init(home string, options []string, uidMaps, gidMaps []idtools.IDMap) (graphdriver.Driver, error) {
// InitWithName returns a naive diff driver for the overlay filesystem,
// which returns the passed-in name when asked which driver it is.
func InitWithName(name, home string, options []string, uidMaps, gidMaps []idtools.IDMap) (graphdriver.Driver, error) {
opts, err := parseOptions(options)
if err != nil {
return nil, err
@@ -112,7 +114,7 @@ func Init(home string, options []string, uidMaps, gidMaps []idtools.IDMap) (grap
if !opts.overrideKernelCheck {
return nil, graphdriver.ErrNotSupported
}
logrus.Warnf("Using pre-4.0.0 kernel for overlay2, mount failures may require kernel update")
logrus.Warnf("Using pre-4.0.0 kernel for overlay, mount failures may require kernel update")
}
fsMagic, err := graphdriver.GetFSMagic(home)
@@ -126,7 +128,7 @@ func Init(home string, options []string, uidMaps, gidMaps []idtools.IDMap) (grap
// check if they are running over btrfs, aufs, zfs, overlay, or ecryptfs
switch fsMagic {
case graphdriver.FsMagicBtrfs, graphdriver.FsMagicAufs, graphdriver.FsMagicZfs, graphdriver.FsMagicOverlay, graphdriver.FsMagicEcryptfs:
logrus.Errorf("'overlay2' is not supported over %s", backingFs)
logrus.Errorf("'overlay' is not supported over %s", backingFs)
return nil, graphdriver.ErrIncompatibleFS
}
@@ -144,17 +146,34 @@ func Init(home string, options []string, uidMaps, gidMaps []idtools.IDMap) (grap
}
d := &Driver{
name: name,
home: home,
uidMaps: uidMaps,
gidMaps: gidMaps,
ctr: graphdriver.NewRefCounter(graphdriver.NewFsChecker(graphdriver.FsMagicOverlay)),
opts: opts,
}
return d, nil
}
// InitAsOverlay returns a naive diff driver for the overlay filesystem.
// If the overlay filesystem is not supported on the host, graphdriver.ErrNotSupported is returned.
// If the overlay filesystem is not supported over the existing filesystem, graphdriver.ErrIncompatibleFS is returned.
func InitAsOverlay(home string, options []string, uidMaps, gidMaps []idtools.IDMap) (graphdriver.Driver, error) {
return InitWithName("overlay", home, options, uidMaps, gidMaps)
}
// InitAsOverlay2 returns a naive diff driver for the overlay filesystem.
// If the overlay filesystem is not supported on the host, graphdriver.ErrNotSupported is returned.
// If the overlay filesystem is not supported over the existing filesystem, graphdriver.ErrIncompatibleFS is returned.
func InitAsOverlay2(home string, options []string, uidMaps, gidMaps []idtools.IDMap) (graphdriver.Driver, error) {
return InitWithName("overlay2", home, options, uidMaps, gidMaps)
}
type overlayOptions struct {
overrideKernelCheck bool
imageStores []string
}
func parseOptions(options []string) (*overlayOptions, error) {
@@ -166,13 +185,29 @@ func parseOptions(options []string) (*overlayOptions, error) {
}
key = strings.ToLower(key)
switch key {
case "overlay2.override_kernel_check":
case "overlay.override_kernel_check", "overlay2.override_kernel_check":
o.overrideKernelCheck, err = strconv.ParseBool(val)
if err != nil {
return nil, err
}
case "overlay.imagestore":
// Additional read only image stores to use for lower paths
for _, store := range strings.Split(val, ",") {
store = filepath.Clean(store)
if !filepath.IsAbs(store) {
return nil, fmt.Errorf("overlay: image path %q is not absolute. Can not be relative", store)
}
st, err := os.Stat(store)
if err != nil {
return nil, fmt.Errorf("overlay: Can't stat imageStore dir %s: %v", store, err)
}
if !st.IsDir() {
return nil, fmt.Errorf("overlay: image path %q must be a directory", store)
}
o.imageStores = append(o.imageStores, store)
}
default:
return nil, fmt.Errorf("overlay2: Unknown option %s", key)
return nil, fmt.Errorf("overlay: Unknown option %s", key)
}
}
return o, nil
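
A hedged initialization sketch (the helper, home directory and store path are all illustrative): the new option takes a comma-separated list of absolute directories, and either registered name now reaches the same implementation.

// newOverlayForTest is a hypothetical helper inside this package.
func newOverlayForTest() (graphdriver.Driver, error) {
	opts := []string{
		"overlay.imagestore=/usr/share/containers/storage",
		"overlay.override_kernel_check=true",
	}
	return InitAsOverlay("/var/lib/containers/storage/overlay", opts, nil, nil)
}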
@@ -200,7 +235,7 @@ func supportsOverlay() error {
}
func (d *Driver) String() string {
return driverName
return d.name
}
// Status returns current driver information in a two dimensional string array.
@@ -211,9 +246,9 @@ func (d *Driver) Status() [][2]string {
}
}
// GetMetadata returns meta data about the overlay driver such as
// Metadata returns meta data about the overlay driver such as
// LowerDir, UpperDir, WorkDir and MergeDir used to store data.
func (d *Driver) GetMetadata(id string) (map[string]string, error) {
func (d *Driver) Metadata(id string) (map[string]string, error) {
dir := d.dir(id)
if _, err := os.Stat(dir); err != nil {
return nil, err
@@ -342,8 +377,18 @@ func (d *Driver) getLower(parent string) (string, error) {
return strings.Join(lowers, ":"), nil
}
func (d *Driver) dir(id string) string {
return path.Join(d.home, id)
func (d *Driver) dir(val string) string {
newpath := path.Join(d.home, val)
if _, err := os.Stat(newpath); err != nil {
for _, p := range d.AdditionalImageStores() {
l := path.Join(p, d.name, val)
_, err = os.Stat(l)
if err == nil {
return l
}
}
}
return newpath
}
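
The rewritten dir() effectively gives the driver a search path: the primary home directory first, then <store>/<driver-name>/ under each additional image store. An isolated sketch of the same resolution order (names are hypothetical):

package sketch

import (
	"os"
	"path/filepath"
)

// resolveDir returns the first existing candidate, falling back to the
// primary path even when nothing exists, mirroring dir() above.
func resolveDir(home, driverName, id string, extraStores []string) string {
	primary := filepath.Join(home, id)
	if _, err := os.Stat(primary); err == nil {
		return primary
	}
	for _, store := range extraStores {
		candidate := filepath.Join(store, driverName, id)
		if _, err := os.Stat(candidate); err == nil {
			return candidate
		}
	}
	return primary
}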
func (d *Driver) getLowerDirs(id string) ([]string, error) {
@@ -351,11 +396,12 @@ func (d *Driver) getLowerDirs(id string) ([]string, error) {
lowers, err := ioutil.ReadFile(path.Join(d.dir(id), lowerFile))
if err == nil {
for _, s := range strings.Split(string(lowers), ":") {
lp, err := os.Readlink(path.Join(d.home, s))
lower := d.dir(s)
lp, err := os.Readlink(lower)
if err != nil {
return nil, err
}
lowersArray = append(lowersArray, path.Clean(path.Join(d.home, "link", lp)))
lowersArray = append(lowersArray, path.Clean(d.dir(path.Join("link", lp))))
}
} else if !os.IsNotExist(err) {
return nil, err
@@ -396,6 +442,31 @@ func (d *Driver) Get(id string, mountLabel string) (s string, err error) {
return "", err
}
newlowers := ""
for _, l := range strings.Split(string(lowers), ":") {
lower := ""
newpath := path.Join(d.home, l)
if _, err := os.Stat(newpath); err != nil {
for _, p := range d.AdditionalImageStores() {
lower = path.Join(p, d.name, l)
if _, err2 := os.Stat(lower); err2 == nil {
break
}
lower = ""
}
if lower == "" {
return "", fmt.Errorf("Can't stat lower layer %q: %v", newpath, err)
}
} else {
lower = l
}
if newlowers == "" {
newlowers = lower
} else {
newlowers = newlowers + ":" + lower
}
}
mergedDir := path.Join(dir, "merged")
if count := d.ctr.Increment(mergedDir); count > 1 {
return mergedDir, nil
@@ -409,7 +480,7 @@ func (d *Driver) Get(id string, mountLabel string) (s string, err error) {
}()
workDir := path.Join(dir, "work")
opts := fmt.Sprintf("lowerdir=%s,upperdir=%s,workdir=%s", string(lowers), path.Join(id, "diff"), path.Join(id, "work"))
opts := fmt.Sprintf("lowerdir=%s,upperdir=%s,workdir=%s", newlowers, path.Join(id, "diff"), path.Join(id, "work"))
mountLabel = label.FormatMountLabel(opts, mountLabel)
if len(mountLabel) > syscall.Getpagesize() {
return "", fmt.Errorf("cannot mount layer, mount label too large %d", len(mountLabel))
@@ -512,3 +583,8 @@ func (d *Driver) Changes(id, parent string) ([]archive.Change, error) {
return archive.OverlayChanges(layers, diffPath)
}
// AdditionalImageStores returns additional image stores supported by the driver
func (d *Driver) AdditionalImageStores() []string {
return d.opts.imageStores
}


@@ -1,6 +1,6 @@
// +build linux
package overlay2
package overlay
import (
"crypto/rand"


@@ -144,12 +144,12 @@ func (d *graphDriverProxy) Status() [][2]string {
return ret.Status
}
func (d *graphDriverProxy) GetMetadata(id string) (map[string]string, error) {
func (d *graphDriverProxy) Metadata(id string) (map[string]string, error) {
args := &graphDriverRequest{
ID: id,
}
var ret graphDriverResponse
if err := d.client.Call("GraphDriver.GetMetadata", args, &ret); err != nil {
if err := d.client.Call("GraphDriver.Metadata", args, &ret); err != nil {
return nil, err
}
if ret.Err != "" {


@@ -3,6 +3,6 @@
package register
import (
// register the overlay2 graphdriver
_ "github.com/containers/storage/drivers/overlay2"
// register the overlay graphdriver
_ "github.com/containers/storage/drivers/overlay"
)


@@ -58,8 +58,8 @@ func (d *Driver) Status() [][2]string {
return nil
}
// GetMetadata is used for implementing the graphdriver.ProtoDriver interface. VFS does not currently have any meta data.
func (d *Driver) GetMetadata(id string) (map[string]string, error) {
// Metadata is used for implementing the graphdriver.ProtoDriver interface. VFS does not currently have any meta data.
func (d *Driver) Metadata(id string) (map[string]string, error) {
return nil, nil
}
@@ -143,3 +143,9 @@ func (d *Driver) Exists(id string) bool {
_, err := os.Stat(d.dir(id))
return err == nil
}
// AdditionalImageStores returns additional image stores supported by the driver
func (d *Driver) AdditionalImageStores() []string {
var imageStores []string
return imageStores
}


@@ -133,7 +133,7 @@ func (d *Driver) create(id, parent, mountLabel string, readOnly bool, storageOpt
var layerChain []string
if rPId != "" {
parentPath, err := hcsshim.GetLayerMountPath(d.info, rPId)
parentPath, err := hcsshim.LayerMountPath(d.info, rPId)
if err != nil {
return err
}
@@ -248,7 +248,7 @@ func (d *Driver) Get(id, mountLabel string) (string, error) {
return "", err
}
mountPath, err := hcsshim.GetLayerMountPath(d.info, rID)
mountPath, err := hcsshim.LayerMountPath(d.info, rID)
if err != nil {
d.ctr.Decrement(rID)
if err2 := hcsshim.DeactivateLayer(d.info, rID); err2 != nil {
@@ -403,7 +403,7 @@ func (d *Driver) ApplyDiff(id, parent string, diff archive.Reader) (int64, error
if err != nil {
return 0, err
}
parentPath, err := hcsshim.GetLayerMountPath(d.info, rPId)
parentPath, err := hcsshim.LayerMountPath(d.info, rPId)
if err != nil {
return 0, err
}
@@ -446,8 +446,8 @@ func (d *Driver) DiffSize(id, parent string) (size int64, err error) {
return archive.ChangesSize(layerFs, changes), nil
}
// GetMetadata returns custom driver information.
func (d *Driver) GetMetadata(id string) (map[string]string, error) {
// Metadata returns custom driver information.
func (d *Driver) Metadata(id string) (map[string]string, error) {
m := make(map[string]string)
m["dir"] = d.dir(id)
return m, nil


@@ -210,8 +210,8 @@ func (d *Driver) Status() [][2]string {
}
}
// GetMetadata returns image/container metadata related to graph driver
func (d *Driver) GetMetadata(id string) (map[string]string, error) {
// Metadata returns image/container metadata related to graph driver
func (d *Driver) Metadata(id string) (map[string]string, error) {
return nil, nil
}
@@ -403,3 +403,9 @@ func (d *Driver) Exists(id string) bool {
defer d.Unlock()
return d.filesystemsCache[d.zfsPath(id)] == true
}
// AdditionalImageStores returns additional image stores supported by the driver
func (d *Driver) AdditionalImageStores() []string {
var imageStores []string
return imageStores
}


@@ -2,7 +2,6 @@ package storage
import (
"encoding/json"
"errors"
"io/ioutil"
"os"
"path/filepath"
@@ -10,6 +9,8 @@ import (
"github.com/containers/storage/pkg/ioutils"
"github.com/containers/storage/pkg/stringid"
"github.com/containers/storage/pkg/truncindex"
"github.com/pkg/errors"
)
var (
@@ -48,11 +49,32 @@ type Image struct {
Flags map[string]interface{} `json:"flags,omitempty"`
}
// ROImageStore provides bookkeeping for information about Images.
type ROImageStore interface {
ROFileBasedStore
ROMetadataStore
ROBigDataStore
// Exists checks if there is an image with the given ID or name.
Exists(id string) bool
// Get retrieves information about an image given an ID or name.
Get(id string) (*Image, error)
// Lookup attempts to translate a name to an ID. Most methods do this
// implicitly.
Lookup(name string) (string, error)
// Images returns a slice enumerating the known images.
Images() ([]Image, error)
}
// ImageStore provides bookkeeping for information about Images.
type ImageStore interface {
FileBasedStore
MetadataStore
BigDataStore
ROImageStore
RWFileBasedStore
RWMetadataStore
RWBigDataStore
FlaggableStore
// Create creates an image that has a specified ID (or a random one) and
@@ -64,36 +86,28 @@ type ImageStore interface {
// supplied values.
SetNames(id string, names []string) error
// Exists checks if there is an image with the given ID or name.
Exists(id string) bool
// Get retrieves information about an image given an ID or name.
Get(id string) (*Image, error)
// Delete removes the record of the image.
Delete(id string) error
// Wipe removes records of all images.
Wipe() error
// Lookup attempts to translate a name to an ID. Most methods do this
// implicitly.
Lookup(name string) (string, error)
// Images returns a slice enumerating the known images.
Images() ([]Image, error)
}
type imageStore struct {
lockfile Locker
dir string
images []Image
images []*Image
idindex *truncindex.TruncIndex
byid map[string]*Image
byname map[string]*Image
}
func (r *imageStore) Images() ([]Image, error) {
return r.images, nil
images := make([]Image, len(r.images))
for i := range r.images {
images[i] = *(r.images[i])
}
return images, nil
}
func (r *imageStore) imagespath() string {
@@ -109,38 +123,46 @@ func (r *imageStore) datapath(id, key string) string {
}
func (r *imageStore) Load() error {
needSave := false
shouldSave := false
rpath := r.imagespath()
data, err := ioutil.ReadFile(rpath)
if err != nil && !os.IsNotExist(err) {
return err
}
images := []Image{}
images := []*Image{}
idlist := []string{}
ids := make(map[string]*Image)
names := make(map[string]*Image)
if err = json.Unmarshal(data, &images); len(data) == 0 || err == nil {
for n, image := range images {
ids[image.ID] = &images[n]
ids[image.ID] = images[n]
idlist = append(idlist, image.ID)
for _, name := range image.Names {
if conflict, ok := names[name]; ok {
r.removeName(conflict, name)
needSave = true
shouldSave = true
}
names[name] = &images[n]
names[name] = images[n]
}
}
}
if shouldSave && !r.IsReadWrite() {
return errors.New("image store assigns the same name to multiple images")
}
r.images = images
r.idindex = truncindex.NewTruncIndex(idlist)
r.byid = ids
r.byname = names
if needSave {
r.Touch()
if shouldSave {
return r.Save()
}
return nil
}
func (r *imageStore) Save() error {
if !r.IsReadWrite() {
return errors.Wrapf(ErrStoreIsReadOnly, "not allowed to modify the image store at %q", r.imagespath())
}
rpath := r.imagespath()
if err := os.MkdirAll(filepath.Dir(rpath), 0700); err != nil {
return err
@@ -149,6 +171,7 @@ func (r *imageStore) Save() error {
if err != nil {
return err
}
defer r.Touch()
return ioutils.AtomicWriteFile(rpath, jdata, 0600)
}
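
Every mutating method now opens with the same writability guard, wrapping a sentinel with context via github.com/pkg/errors (the import swapped in at the top of this file). A reduced sketch of the pattern, plus the caller side that still matches the sentinel through the wrap:

package sketch

import "github.com/pkg/errors"

var ErrStoreIsReadOnly = errors.New("called a write method on a read-only store")

// guardWrite mirrors the check each mutating method performs up front.
func guardWrite(writable bool, path string) error {
	if !writable {
		return errors.Wrapf(ErrStoreIsReadOnly, "not allowed to modify the store at %q", path)
	}
	return nil
}

// isReadOnlyErr shows why Wrapf is used: it preserves the cause.
func isReadOnlyErr(err error) bool {
	return errors.Cause(err) == ErrStoreIsReadOnly
}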
@@ -165,7 +188,7 @@ func newImageStore(dir string) (ImageStore, error) {
istore := imageStore{
lockfile: lockfile,
dir: dir,
images: []Image{},
images: []*Image{},
byid: make(map[string]*Image),
byname: make(map[string]*Image),
}
@@ -175,31 +198,66 @@ func newImageStore(dir string) (ImageStore, error) {
return &istore, nil
}
func (r *imageStore) ClearFlag(id string, flag string) error {
if image, ok := r.byname[id]; ok {
id = image.ID
func newROImageStore(dir string) (ROImageStore, error) {
lockfile, err := GetROLockfile(filepath.Join(dir, "images.lock"))
if err != nil {
return nil, err
}
if _, ok := r.byid[id]; !ok {
lockfile.Lock()
defer lockfile.Unlock()
istore := imageStore{
lockfile: lockfile,
dir: dir,
images: []*Image{},
byid: make(map[string]*Image),
byname: make(map[string]*Image),
}
if err := istore.Load(); err != nil {
return nil, err
}
return &istore, nil
}
func (r *imageStore) lookup(id string) (*Image, bool) {
if image, ok := r.byid[id]; ok {
return image, ok
} else if image, ok := r.byname[id]; ok {
return image, ok
} else if longid, err := r.idindex.Get(id); err == nil {
image, ok := r.byid[longid]
return image, ok
}
return nil, false
}
func (r *imageStore) ClearFlag(id string, flag string) error {
if !r.IsReadWrite() {
return errors.Wrapf(ErrStoreIsReadOnly, "not allowed to clear flags on images at %q", r.imagespath())
}
image, ok := r.lookup(id)
if !ok {
return ErrImageUnknown
}
image := r.byid[id]
delete(image.Flags, flag)
return r.Save()
}
func (r *imageStore) SetFlag(id string, flag string, value interface{}) error {
if image, ok := r.byname[id]; ok {
id = image.ID
if !r.IsReadWrite() {
return errors.Wrapf(ErrStoreIsReadOnly, "not allowed to set flags on images at %q", r.imagespath())
}
if _, ok := r.byid[id]; !ok {
image, ok := r.lookup(id)
if !ok {
return ErrImageUnknown
}
image := r.byid[id]
image.Flags[flag] = value
return r.Save()
}
func (r *imageStore) Create(id string, names []string, layer, metadata string) (image *Image, err error) {
if !r.IsReadWrite() {
return nil, errors.Wrapf(ErrStoreIsReadOnly, "not allowed to create new images at %q", r.imagespath())
}
if id == "" {
id = stringid.GenerateRandomID()
_, idInUse := r.byid[id]
@@ -217,7 +275,7 @@ func (r *imageStore) Create(id string, names []string, layer, metadata string) (
}
}
if err == nil {
newImage := Image{
image = &Image{
ID: id,
Names: names,
TopLayer: layer,
@@ -226,8 +284,8 @@ func (r *imageStore) Create(id string, names []string, layer, metadata string) (
BigDataSizes: make(map[string]int64),
Flags: make(map[string]interface{}),
}
r.images = append(r.images, newImage)
image = &r.images[len(r.images)-1]
r.images = append(r.images, image)
r.idindex.Add(id)
r.byid[id] = image
for _, name := range names {
r.byname[name] = image
@@ -237,21 +295,18 @@ func (r *imageStore) Create(id string, names []string, layer, metadata string) (
return image, err
}
func (r *imageStore) GetMetadata(id string) (string, error) {
if image, ok := r.byname[id]; ok {
id = image.ID
}
if image, ok := r.byid[id]; ok {
func (r *imageStore) Metadata(id string) (string, error) {
if image, ok := r.lookup(id); ok {
return image.Metadata, nil
}
return "", ErrImageUnknown
}
func (r *imageStore) SetMetadata(id, metadata string) error {
if image, ok := r.byname[id]; ok {
id = image.ID
if !r.IsReadWrite() {
return errors.Wrapf(ErrStoreIsReadOnly, "not allowed to modify image metadata at %q", r.imagespath())
}
if image, ok := r.byid[id]; ok {
if image, ok := r.lookup(id); ok {
image.Metadata = metadata
return r.Save()
}
@@ -259,20 +314,14 @@ func (r *imageStore) SetMetadata(id, metadata string) error {
}
func (r *imageStore) removeName(image *Image, name string) {
newNames := []string{}
for _, oldName := range image.Names {
if oldName != name {
newNames = append(newNames, oldName)
}
}
image.Names = newNames
image.Names = stringSliceWithoutValue(image.Names, name)
}
func (r *imageStore) SetNames(id string, names []string) error {
if image, ok := r.byname[id]; ok {
id = image.ID
if !r.IsReadWrite() {
return errors.Wrapf(ErrStoreIsReadOnly, "not allowed to change image name assignments at %q", r.imagespath())
}
if image, ok := r.byid[id]; ok {
if image, ok := r.lookup(id); ok {
for _, name := range image.Names {
delete(r.byname, name)
}
@@ -289,125 +338,109 @@ func (r *imageStore) SetNames(id string, names []string) error {
}
func (r *imageStore) Delete(id string) error {
if image, ok := r.byname[id]; ok {
id = image.ID
if !r.IsReadWrite() {
return errors.Wrapf(ErrStoreIsReadOnly, "not allowed to delete images at %q", r.imagespath())
}
if _, ok := r.byid[id]; !ok {
image, ok := r.lookup(id)
if !ok {
return ErrImageUnknown
}
if image, ok := r.byid[id]; ok {
newImages := []Image{}
for _, candidate := range r.images {
if candidate.ID != id {
newImages = append(newImages, candidate)
}
}
delete(r.byid, image.ID)
for _, name := range image.Names {
delete(r.byname, name)
}
r.images = newImages
if err := r.Save(); err != nil {
return err
}
if err := os.RemoveAll(r.datadir(id)); err != nil {
return err
id = image.ID
newImages := []*Image{}
for _, candidate := range r.images {
if candidate.ID != id {
newImages = append(newImages, candidate)
}
}
delete(r.byid, id)
r.idindex.Delete(id)
for _, name := range image.Names {
delete(r.byname, name)
}
r.images = newImages
if err := r.Save(); err != nil {
return err
}
if err := os.RemoveAll(r.datadir(id)); err != nil {
return err
}
return nil
}
func (r *imageStore) Get(id string) (*Image, error) {
if image, ok := r.byname[id]; ok {
return image, nil
}
if image, ok := r.byid[id]; ok {
if image, ok := r.lookup(id); ok {
return image, nil
}
return nil, ErrImageUnknown
}
func (r *imageStore) Lookup(name string) (id string, err error) {
image, ok := r.byname[name]
if !ok {
image, ok = r.byid[name]
if !ok {
return "", ErrImageUnknown
}
if image, ok := r.lookup(name); ok {
return image.ID, nil
}
return image.ID, nil
return "", ErrImageUnknown
}
func (r *imageStore) Exists(id string) bool {
if _, ok := r.byname[id]; ok {
return true
}
if _, ok := r.byid[id]; ok {
return true
}
return false
_, ok := r.lookup(id)
return ok
}
func (r *imageStore) GetBigData(id, key string) ([]byte, error) {
if img, ok := r.byname[id]; ok {
id = img.ID
}
if _, ok := r.byid[id]; !ok {
func (r *imageStore) BigData(id, key string) ([]byte, error) {
image, ok := r.lookup(id)
if !ok {
return nil, ErrImageUnknown
}
return ioutil.ReadFile(r.datapath(id, key))
return ioutil.ReadFile(r.datapath(image.ID, key))
}
func (r *imageStore) GetBigDataSize(id, key string) (int64, error) {
if img, ok := r.byname[id]; ok {
id = img.ID
}
if _, ok := r.byid[id]; !ok {
func (r *imageStore) BigDataSize(id, key string) (int64, error) {
image, ok := r.lookup(id)
if !ok {
return -1, ErrImageUnknown
}
if size, ok := r.byid[id].BigDataSizes[key]; ok {
if size, ok := image.BigDataSizes[key]; ok {
return size, nil
}
return -1, ErrSizeUnknown
}
func (r *imageStore) GetBigDataNames(id string) ([]string, error) {
if img, ok := r.byname[id]; ok {
id = img.ID
}
if _, ok := r.byid[id]; !ok {
func (r *imageStore) BigDataNames(id string) ([]string, error) {
image, ok := r.lookup(id)
if !ok {
return nil, ErrImageUnknown
}
return r.byid[id].BigDataNames, nil
return image.BigDataNames, nil
}
func (r *imageStore) SetBigData(id, key string, data []byte) error {
if img, ok := r.byname[id]; ok {
id = img.ID
if !r.IsReadWrite() {
return errors.Wrapf(ErrStoreIsReadOnly, "not allowed to save data items associated with images at %q", r.imagespath())
}
if _, ok := r.byid[id]; !ok {
image, ok := r.lookup(id)
if !ok {
return ErrImageUnknown
}
if err := os.MkdirAll(r.datadir(id), 0700); err != nil {
if err := os.MkdirAll(r.datadir(image.ID), 0700); err != nil {
return err
}
err := ioutils.AtomicWriteFile(r.datapath(id, key), data, 0600)
err := ioutils.AtomicWriteFile(r.datapath(image.ID, key), data, 0600)
if err == nil {
add := true
save := false
oldSize, ok := r.byid[id].BigDataSizes[key]
r.byid[id].BigDataSizes[key] = int64(len(data))
if !ok || oldSize != r.byid[id].BigDataSizes[key] {
oldSize, ok := image.BigDataSizes[key]
image.BigDataSizes[key] = int64(len(data))
if !ok || oldSize != image.BigDataSizes[key] {
save = true
}
for _, name := range r.byid[id].BigDataNames {
for _, name := range image.BigDataNames {
if name == key {
add = false
break
}
}
if add {
r.byid[id].BigDataNames = append(r.byid[id].BigDataNames, key)
image.BigDataNames = append(image.BigDataNames, key)
save = true
}
if save {
@@ -418,6 +451,9 @@ func (r *imageStore) SetBigData(id, key string, data []byte) error {
}
func (r *imageStore) Wipe() error {
if !r.IsReadWrite() {
return errors.Wrapf(ErrStoreIsReadOnly, "not allowed to delete images at %q", r.imagespath())
}
ids := []string{}
for id := range r.byid {
ids = append(ids, id)
@@ -446,6 +482,10 @@ func (r *imageStore) Modified() (bool, error) {
return r.lockfile.Modified()
}
func (r *imageStore) IsReadWrite() bool {
return r.lockfile.IsReadWrite()
}
func (r *imageStore) TouchedSince(when time.Time) bool {
return r.lockfile.TouchedSince(when)
}


@@ -4,7 +4,6 @@ import (
"bytes"
"compress/gzip"
"encoding/json"
"errors"
"io"
"io/ioutil"
"os"
@@ -15,6 +14,8 @@ import (
"github.com/containers/storage/pkg/archive"
"github.com/containers/storage/pkg/ioutils"
"github.com/containers/storage/pkg/stringid"
"github.com/containers/storage/pkg/truncindex"
"github.com/pkg/errors"
"github.com/vbatts/tar-split/tar/asm"
"github.com/vbatts/tar-split/tar/storage"
)
@@ -74,28 +75,12 @@ type layerMountPoint struct {
MountCount int `json:"count"`
}
// LayerStore wraps a graph driver, adding the ability to refer to layers by
// ROLayerStore wraps a graph driver, adding the ability to refer to layers by
// name, and keeping track of parent-child relationships, along with a list of
// all known layers.
type LayerStore interface {
FileBasedStore
MetadataStore
FlaggableStore
// Create creates a new layer, optionally giving it a specified ID rather than
// a randomly-generated one, either inheriting data from another specified
// layer or the empty base layer. The new layer can optionally be given names
// and have an SELinux label specified for use when mounting it. Some
// underlying drivers can accept a "size" option. At this time, most
// underlying drivers do not themselves distinguish between writeable
// and read-only layers.
Create(id, parent string, names []string, mountLabel string, options map[string]string, writeable bool) (*Layer, error)
// CreateWithFlags combines the functions of Create and SetFlag.
CreateWithFlags(id, parent string, names []string, mountLabel string, options map[string]string, writeable bool, flags map[string]interface{}) (layer *Layer, err error)
// Put combines the functions of CreateWithFlags and ApplyDiff.
Put(id, parent string, names []string, mountLabel string, options map[string]string, writeable bool, flags map[string]interface{}, diff archive.Reader) (*Layer, int64, error)
type ROLayerStore interface {
ROFileBasedStore
ROMetadataStore
// Exists checks if a layer with the specified name or ID is known.
Exists(id string) bool
@@ -103,28 +88,10 @@ type LayerStore interface {
// Get retrieves information about a layer given an ID or name.
Get(id string) (*Layer, error)
// SetNames replaces the list of names associated with a layer with the
// supplied values.
SetNames(id string, names []string) error
// Status returns a slice of key-value pairs, suitable for human consumption,
// relaying whatever status information the underlying driver can share.
Status() ([][2]string, error)
// Delete deletes a layer with the specified name or ID.
Delete(id string) error
// Wipe deletes all layers.
Wipe() error
// Mount mounts a layer for use. If the specified layer is the parent of other
// layers, it should not be written to. An SELinux label to be applied to the
// mount can be specified to override the one configured for the layer.
Mount(id, mountLabel string) (string, error)
// Unmount unmounts a layer when it is no longer in use.
Unmount(id string) error
// Changes returns a slice of Change structures, which contain a pathname
// (Path) and a description of what sort of change (Kind) was made by the
// layer (either ChangeModify, ChangeAdd, or ChangeDelete), relative to a
@@ -141,10 +108,6 @@ type LayerStore interface {
// produced by Diff.
DiffSize(from, to string) (int64, error)
// ApplyDiff reads a tarstream which was created by a previous call to Diff and
// applies its changes to a specified layer.
ApplyDiff(to string, diff archive.Reader) (int64, error)
// Lookup attempts to translate a name to an ID. Most methods do this
// implicitly.
Lookup(name string) (string, error)
@@ -153,20 +116,71 @@ type LayerStore interface {
Layers() ([]Layer, error)
}
// LayerStore wraps a graph driver, adding the ability to refer to layers by
// name, and keeping track of parent-child relationships, along with a list of
// all known layers.
type LayerStore interface {
ROLayerStore
RWFileBasedStore
RWMetadataStore
FlaggableStore
// Create creates a new layer, optionally giving it a specified ID rather than
// a randomly-generated one, either inheriting data from another specified
// layer or the empty base layer. The new layer can optionally be given names
// and have an SELinux label specified for use when mounting it. Some
// underlying drivers can accept a "size" option. At this time, most
// underlying drivers do not themselves distinguish between writeable
// and read-only layers.
Create(id, parent string, names []string, mountLabel string, options map[string]string, writeable bool) (*Layer, error)
// CreateWithFlags combines the functions of Create and SetFlag.
CreateWithFlags(id, parent string, names []string, mountLabel string, options map[string]string, writeable bool, flags map[string]interface{}) (layer *Layer, err error)
// Put combines the functions of CreateWithFlags and ApplyDiff.
Put(id, parent string, names []string, mountLabel string, options map[string]string, writeable bool, flags map[string]interface{}, diff archive.Reader) (*Layer, int64, error)
// SetNames replaces the list of names associated with a layer with the
// supplied values.
SetNames(id string, names []string) error
// Delete deletes a layer with the specified name or ID.
Delete(id string) error
// Wipe deletes all layers.
Wipe() error
// Mount mounts a layer for use. If the specified layer is the parent of other
// layers, it should not be written to. An SELinux label to be applied to the
// mount can be specified to override the one configured for the layer.
Mount(id, mountLabel string) (string, error)
// Unmount unmounts a layer when it is no longer in use.
Unmount(id string) error
// ApplyDiff reads a tarstream which was created by a previous call to Diff and
// applies its changes to a specified layer.
ApplyDiff(to string, diff archive.Reader) (int64, error)
}
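
The layer store now follows the same split as the image store above: read-only capabilities live in the RO* interfaces, and the writable interface embeds them alongside the mutating methods. In miniature:

package sketch

type Layer struct{ ID string }

// ROStore holds everything a read-only consumer needs.
type ROStore interface {
	Exists(id string) bool
	Get(id string) (*Layer, error)
}

// RWStore embeds the read-only surface and adds mutation.
type RWStore interface {
	ROStore
	SetNames(id string, names []string) error
	Delete(id string) error
}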
type layerStore struct {
lockfile Locker
rundir string
driver drivers.Driver
layerdir string
layers []Layer
layers []*Layer
idindex *truncindex.TruncIndex
byid map[string]*Layer
byname map[string]*Layer
byparent map[string][]*Layer
bymount map[string]*Layer
}
func (r *layerStore) Layers() ([]Layer, error) {
return r.layers, nil
layers := make([]Layer, len(r.layers))
for i := range r.layers {
layers[i] = *(r.layers[i])
}
return layers, nil
}
func (r *layerStore) mountspath() string {
@@ -178,34 +192,39 @@ func (r *layerStore) layerspath() string {
}
func (r *layerStore) Load() error {
needSave := false
shouldSave := false
rpath := r.layerspath()
data, err := ioutil.ReadFile(rpath)
if err != nil && !os.IsNotExist(err) {
return err
}
layers := []Layer{}
layers := []*Layer{}
idlist := []string{}
ids := make(map[string]*Layer)
names := make(map[string]*Layer)
mounts := make(map[string]*Layer)
parents := make(map[string][]*Layer)
if err = json.Unmarshal(data, &layers); len(data) == 0 || err == nil {
for n, layer := range layers {
ids[layer.ID] = &layers[n]
ids[layer.ID] = layers[n]
idlist = append(idlist, layer.ID)
for _, name := range layer.Names {
if conflict, ok := names[name]; ok {
r.removeName(conflict, name)
needSave = true
shouldSave = true
}
names[name] = &layers[n]
names[name] = layers[n]
}
if pslice, ok := parents[layer.Parent]; ok {
parents[layer.Parent] = append(pslice, &layers[n])
parents[layer.Parent] = append(pslice, layers[n])
} else {
parents[layer.Parent] = []*Layer{&layers[n]}
parents[layer.Parent] = []*Layer{layers[n]}
}
}
}
if shouldSave && !r.IsReadWrite() {
return errors.New("layer store assigns the same name to multiple layers")
}
mpath := r.mountspath()
data, err = ioutil.ReadFile(mpath)
if err != nil && !os.IsNotExist(err) {
@@ -224,33 +243,37 @@ func (r *layerStore) Load() error {
}
}
r.layers = layers
r.idindex = truncindex.NewTruncIndex(idlist)
r.byid = ids
r.byname = names
r.byparent = parents
r.bymount = mounts
err = nil
// Last step: try to remove anything that a previous user of this
// storage area marked for deletion but didn't manage to actually
// delete.
for _, layer := range r.layers {
if cleanup, ok := layer.Flags[incompleteFlag]; ok {
if b, ok := cleanup.(bool); ok && b {
err = r.Delete(layer.ID)
if err != nil {
break
// Last step: if we're writable, try to remove anything that a previous
// user of this storage area marked for deletion but didn't manage to
// actually delete.
if r.IsReadWrite() {
for _, layer := range r.layers {
if cleanup, ok := layer.Flags[incompleteFlag]; ok {
if b, ok := cleanup.(bool); ok && b {
err = r.Delete(layer.ID)
if err != nil {
break
}
shouldSave = true
}
needSave = true
}
}
}
if needSave {
r.Touch()
return r.Save()
if shouldSave {
return r.Save()
}
}
return err
}
func (r *layerStore) Save() error {
if !r.IsReadWrite() {
return errors.Wrapf(ErrStoreIsReadOnly, "not allowed to modify the layer store at %q", r.layerspath())
}
rpath := r.layerspath()
if err := os.MkdirAll(filepath.Dir(rpath), 0700); err != nil {
return err
@@ -280,6 +303,7 @@ func (r *layerStore) Save() error {
if err := ioutils.AtomicWriteFile(rpath, jldata, 0600); err != nil {
return err
}
defer r.Touch()
return ioutils.AtomicWriteFile(mpath, jmdata, 0600)
}
@@ -304,7 +328,6 @@ func newLayerStore(rundir string, layerdir string, driver drivers.Driver) (Layer
byid: make(map[string]*Layer),
bymount: make(map[string]*Layer),
byname: make(map[string]*Layer),
byparent: make(map[string][]*Layer),
}
if err := rlstore.Load(); err != nil {
return nil, err
@@ -312,26 +335,60 @@ func newLayerStore(rundir string, layerdir string, driver drivers.Driver) (Layer
return &rlstore, nil
}
func (r *layerStore) ClearFlag(id string, flag string) error {
if layer, ok := r.byname[id]; ok {
id = layer.ID
func newROLayerStore(rundir string, layerdir string, driver drivers.Driver) (ROLayerStore, error) {
lockfile, err := GetROLockfile(filepath.Join(layerdir, "layers.lock"))
if err != nil {
return nil, err
}
if _, ok := r.byid[id]; !ok {
lockfile.Lock()
defer lockfile.Unlock()
rlstore := layerStore{
lockfile: lockfile,
driver: driver,
rundir: rundir,
layerdir: layerdir,
byid: make(map[string]*Layer),
bymount: make(map[string]*Layer),
byname: make(map[string]*Layer),
}
if err := rlstore.Load(); err != nil {
return nil, err
}
return &rlstore, nil
}
func (r *layerStore) lookup(id string) (*Layer, bool) {
if layer, ok := r.byid[id]; ok {
return layer, ok
} else if layer, ok := r.byname[id]; ok {
return layer, ok
} else if longid, err := r.idindex.Get(id); err == nil {
layer, ok := r.byid[longid]
return layer, ok
}
return nil, false
}
func (r *layerStore) ClearFlag(id string, flag string) error {
if !r.IsReadWrite() {
return errors.Wrapf(ErrStoreIsReadOnly, "not allowed to clear flags on layers at %q", r.layerspath())
}
layer, ok := r.lookup(id)
if !ok {
return ErrLayerUnknown
}
layer := r.byid[id]
delete(layer.Flags, flag)
return r.Save()
}
func (r *layerStore) SetFlag(id string, flag string, value interface{}) error {
if layer, ok := r.byname[id]; ok {
id = layer.ID
if !r.IsReadWrite() {
return errors.Wrapf(ErrStoreIsReadOnly, "not allowed to set flags on layers at %q", r.layerspath())
}
if _, ok := r.byid[id]; !ok {
layer, ok := r.lookup(id)
if !ok {
return ErrLayerUnknown
}
layer := r.byid[id]
layer.Flags[flag] = value
return r.Save()
}
@@ -341,6 +398,9 @@ func (r *layerStore) Status() ([][2]string, error) {
}
func (r *layerStore) Put(id, parent string, names []string, mountLabel string, options map[string]string, writeable bool, flags map[string]interface{}, diff archive.Reader) (layer *Layer, size int64, err error) {
if !r.IsReadWrite() {
return nil, -1, errors.Wrapf(ErrStoreIsReadOnly, "not allowed to create new layers at %q", r.layerspath())
}
size = -1
if err := os.MkdirAll(r.rundir, 0700); err != nil {
return nil, -1, err
@@ -348,8 +408,10 @@ func (r *layerStore) Put(id, parent string, names []string, mountLabel string, o
if err := os.MkdirAll(r.layerdir, 0700); err != nil {
return nil, -1, err
}
if parentLayer, ok := r.byname[parent]; ok {
parent = parentLayer.ID
if parent != "" {
if parentLayer, ok := r.lookup(parent); ok {
parent = parentLayer.ID
}
}
if id == "" {
id = stringid.GenerateRandomID()
@@ -373,25 +435,19 @@ func (r *layerStore) Put(id, parent string, names []string, mountLabel string, o
err = r.driver.Create(id, parent, mountLabel, options)
}
if err == nil {
newLayer := Layer{
layer = &Layer{
ID: id,
Parent: parent,
Names: names,
MountLabel: mountLabel,
Flags: make(map[string]interface{}),
}
r.layers = append(r.layers, newLayer)
layer = &r.layers[len(r.layers)-1]
r.layers = append(r.layers, layer)
r.idindex.Add(id)
r.byid[id] = layer
for _, name := range names {
r.byname[name] = layer
}
if pslice, ok := r.byparent[parent]; ok {
pslice = append(pslice, layer)
r.byparent[parent] = pslice
} else {
r.byparent[parent] = []*Layer{layer}
}
for flag, value := range flags {
layer.Flags[flag] = value
}
@@ -436,48 +492,45 @@ func (r *layerStore) Create(id, parent string, names []string, mountLabel string
}
func (r *layerStore) Mount(id, mountLabel string) (string, error) {
if layer, ok := r.byname[id]; ok {
id = layer.ID
if !r.IsReadWrite() {
return "", errors.Wrapf(ErrStoreIsReadOnly, "not allowed to update mount locations for layers at %q", r.mountspath())
}
if _, ok := r.byid[id]; !ok {
layer, ok := r.lookup(id)
if !ok {
return "", ErrLayerUnknown
}
layer := r.byid[id]
if layer.MountCount > 0 {
layer.MountCount++
return layer.MountPoint, r.Save()
}
if mountLabel == "" {
if layer, ok := r.byid[id]; ok {
mountLabel = layer.MountLabel
}
mountLabel = layer.MountLabel
}
mountpoint, err := r.driver.Get(id, mountLabel)
if mountpoint != "" && err == nil {
if layer, ok := r.byid[id]; ok {
if layer.MountPoint != "" {
delete(r.bymount, layer.MountPoint)
}
layer.MountPoint = filepath.Clean(mountpoint)
layer.MountCount++
r.bymount[layer.MountPoint] = layer
err = r.Save()
if layer.MountPoint != "" {
delete(r.bymount, layer.MountPoint)
}
layer.MountPoint = filepath.Clean(mountpoint)
layer.MountCount++
r.bymount[layer.MountPoint] = layer
err = r.Save()
}
return mountpoint, err
}
func (r *layerStore) Unmount(id string) error {
if layer, ok := r.bymount[filepath.Clean(id)]; ok {
id = layer.ID
if !r.IsReadWrite() {
return errors.Wrapf(ErrStoreIsReadOnly, "not allowed to update mount locations for layers at %q", r.mountspath())
}
if layer, ok := r.byname[id]; ok {
id = layer.ID
layer, ok := r.lookup(id)
if !ok {
layerByMount, ok := r.bymount[filepath.Clean(id)]
if !ok {
return ErrLayerUnknown
}
layer = layerByMount
}
if _, ok := r.byid[id]; !ok {
return ErrLayerUnknown
}
layer := r.byid[id]
if layer.MountCount > 1 {
layer.MountCount--
return r.Save()
@@ -495,20 +548,14 @@ func (r *layerStore) Unmount(id string) error {
}
func (r *layerStore) removeName(layer *Layer, name string) {
newNames := []string{}
for _, oldName := range layer.Names {
if oldName != name {
newNames = append(newNames, oldName)
}
}
layer.Names = newNames
layer.Names = stringSliceWithoutValue(layer.Names, name)
}
func (r *layerStore) SetNames(id string, names []string) error {
if layer, ok := r.byname[id]; ok {
id = layer.ID
if !r.IsReadWrite() {
return errors.Wrapf(ErrStoreIsReadOnly, "not allowed to change layer name assignments at %q", r.layerspath())
}
if layer, ok := r.byid[id]; ok {
if layer, ok := r.lookup(id); ok {
for _, name := range layer.Names {
delete(r.byname, name)
}
@@ -524,21 +571,18 @@ func (r *layerStore) SetNames(id string, names []string) error {
return ErrLayerUnknown
}
func (r *layerStore) GetMetadata(id string) (string, error) {
if layer, ok := r.byname[id]; ok {
id = layer.ID
}
if layer, ok := r.byid[id]; ok {
func (r *layerStore) Metadata(id string) (string, error) {
if layer, ok := r.lookup(id); ok {
return layer.Metadata, nil
}
return "", ErrLayerUnknown
}
func (r *layerStore) SetMetadata(id, metadata string) error {
if layer, ok := r.byname[id]; ok {
id = layer.ID
if !r.IsReadWrite() {
return errors.Wrapf(ErrStoreIsReadOnly, "not allowed to modify layer metadata at %q", r.layerspath())
}
if layer, ok := r.byid[id]; ok {
if layer, ok := r.lookup(id); ok {
layer.Metadata = metadata
return r.Save()
}
@@ -550,13 +594,15 @@ func (r *layerStore) tspath(id string) string {
}
func (r *layerStore) Delete(id string) error {
if layer, ok := r.byname[id]; ok {
id = layer.ID
if !r.IsReadWrite() {
return errors.Wrapf(ErrStoreIsReadOnly, "not allowed to delete layers at %q", r.layerspath())
}
if _, ok := r.byid[id]; !ok {
layer, ok := r.lookup(id)
if !ok {
return ErrLayerUnknown
}
for r.byid[id].MountCount > 0 {
id = layer.ID
for layer.MountCount > 0 {
if err := r.Unmount(id); err != nil {
return err
}
@@ -564,71 +610,48 @@ func (r *layerStore) Delete(id string) error {
err := r.driver.Remove(id)
if err == nil {
os.Remove(r.tspath(id))
if layer, ok := r.byid[id]; ok {
pslice := r.byparent[layer.Parent]
newPslice := []*Layer{}
for _, candidate := range pslice {
if candidate.ID != id {
newPslice = append(newPslice, candidate)
}
}
delete(r.byid, layer.ID)
if len(newPslice) > 0 {
r.byparent[layer.Parent] = newPslice
} else {
delete(r.byparent, layer.Parent)
}
for _, name := range layer.Names {
delete(r.byname, name)
}
if layer.MountPoint != "" {
delete(r.bymount, layer.MountPoint)
}
newLayers := []Layer{}
for _, candidate := range r.layers {
if candidate.ID != id {
newLayers = append(newLayers, candidate)
}
}
r.layers = newLayers
if err = r.Save(); err != nil {
return err
delete(r.byid, id)
r.idindex.Delete(id)
if layer.MountPoint != "" {
delete(r.bymount, layer.MountPoint)
}
newLayers := []*Layer{}
for _, candidate := range r.layers {
if candidate.ID != id {
newLayers = append(newLayers, candidate)
}
}
r.layers = newLayers
if err = r.Save(); err != nil {
return err
}
}
return err
}
func (r *layerStore) Lookup(name string) (id string, err error) {
layer, ok := r.byname[name]
if !ok {
layer, ok = r.byid[name]
if !ok {
return "", ErrLayerUnknown
}
if layer, ok := r.lookup(name); ok {
return layer.ID, nil
}
return layer.ID, nil
return "", ErrLayerUnknown
}
func (r *layerStore) Exists(id string) bool {
if layer, ok := r.byname[id]; ok {
id = layer.ID
}
l, exists := r.byid[id]
return l != nil && exists
_, ok := r.lookup(id)
return ok
}
func (r *layerStore) Get(id string) (*Layer, error) {
if l, ok := r.byname[id]; ok {
return l, nil
}
if l, ok := r.byid[id]; ok {
return l, nil
if layer, ok := r.lookup(id); ok {
return layer, nil
}
return nil, ErrLayerUnknown
}
func (r *layerStore) Wipe() error {
if !r.IsReadWrite() {
return errors.Wrapf(ErrStoreIsReadOnly, "not allowed to delete layers at %q", r.layerspath())
}
ids := []string{}
for id := range r.byid {
ids = append(ids, id)
@@ -641,22 +664,34 @@ func (r *layerStore) Wipe() error {
return nil
}
func (r *layerStore) Changes(from, to string) ([]archive.Change, error) {
if layer, ok := r.byname[from]; ok {
from = layer.ID
}
if layer, ok := r.byname[to]; ok {
to = layer.ID
func (r *layerStore) findParentAndLayer(from, to string) (fromID string, toID string, toLayer *Layer, err error) {
var ok bool
var fromLayer *Layer
toLayer, ok = r.lookup(to)
if !ok {
return "", "", nil, ErrLayerUnknown
}
to = toLayer.ID
if from == "" {
if layer, ok := r.byid[to]; ok {
from = layer.Parent
from = toLayer.Parent
}
if from != "" {
fromLayer, ok = r.lookup(from)
if ok {
from = fromLayer.ID
} else {
fromLayer, ok = r.lookup(toLayer.Parent)
if ok {
from = fromLayer.ID
}
}
}
if to == "" {
return nil, ErrLayerUnknown
}
if _, ok := r.byid[to]; !ok {
return from, to, toLayer, nil
}
func (r *layerStore) Changes(from, to string) ([]archive.Change, error) {
from, to, _, err := r.findParentAndLayer(from, to)
if err != nil {
return nil, ErrLayerUnknown
}
return r.driver.Changes(to, from)
@@ -694,32 +729,19 @@ func (r *layerStore) newFileGetter(id string) (drivers.FileGetCloser, error) {
func (r *layerStore) Diff(from, to string) (io.ReadCloser, error) {
var metadata storage.Unpacker
if layer, ok := r.byname[from]; ok {
from = layer.ID
}
if layer, ok := r.byname[to]; ok {
to = layer.ID
}
if from == "" {
if layer, ok := r.byid[to]; ok {
from = layer.Parent
}
}
if to == "" {
return nil, ErrParentUnknown
}
if _, ok := r.byid[to]; !ok {
from, to, toLayer, err := r.findParentAndLayer(from, to)
if err != nil {
return nil, ErrLayerUnknown
}
compression := archive.Uncompressed
if cflag, ok := r.byid[to].Flags[compressionFlag]; ok {
if cflag, ok := toLayer.Flags[compressionFlag]; ok {
if ctype, ok := cflag.(float64); ok {
compression = archive.Compression(ctype)
} else if ctype, ok := cflag.(archive.Compression); ok {
compression = archive.Compression(ctype)
}
}
if from != r.byid[to].Parent {
if from != toLayer.Parent {
diff, err := r.driver.Diff(to, from)
if err == nil && (compression != archive.Uncompressed) {
preader, pwriter := io.Pipe()
@@ -797,31 +819,15 @@ func (r *layerStore) Diff(from, to string) (io.ReadCloser, error) {
}
func (r *layerStore) DiffSize(from, to string) (size int64, err error) {
if layer, ok := r.byname[from]; ok {
from = layer.ID
}
if layer, ok := r.byname[to]; ok {
to = layer.ID
}
if from == "" {
if layer, ok := r.byid[to]; ok {
from = layer.Parent
}
}
if to == "" {
return -1, ErrParentUnknown
}
if _, ok := r.byid[to]; !ok {
from, to, _, err = r.findParentAndLayer(from, to)
if err != nil {
return -1, ErrLayerUnknown
}
return r.driver.DiffSize(to, from)
}
func (r *layerStore) ApplyDiff(to string, diff archive.Reader) (size int64, err error) {
if layer, ok := r.byname[to]; ok {
to = layer.ID
}
layer, ok := r.byid[to]
layer, ok := r.lookup(to)
if !ok {
return -1, ErrLayerUnknown
}
@@ -885,6 +891,10 @@ func (r *layerStore) Modified() (bool, error) {
return r.lockfile.Modified()
}
func (r *layerStore) IsReadWrite() bool {
return r.lockfile.IsReadWrite()
}
func (r *layerStore) TouchedSince(when time.Time) bool {
return r.lockfile.TouchedSince(when)
}

vendor/github.com/containers/storage/lockfile.go generated vendored Normal file

@@ -0,0 +1,186 @@
package storage
import (
"fmt"
"os"
"path/filepath"
"sync"
"time"
"github.com/containers/storage/pkg/stringid"
"github.com/pkg/errors"
"golang.org/x/sys/unix"
)
// A Locker represents a file lock where the file is used to cache an
// identifier of the last party that made changes to whatever's being protected
// by the lock.
type Locker interface {
sync.Locker
// Touch records, for others sharing the lock, that the caller was the
// last writer. It should only be called with the lock held.
Touch() error
// Modified() checks if the most recent writer was a party other than the
// last recorded writer. It should only be called with the lock held.
Modified() (bool, error)
// TouchedSince() checks if the most recent writer modified the file (likely using Touch()) after the specified time.
TouchedSince(when time.Time) bool
// IsReadWrite() checks if the lock file is read-write
IsReadWrite() bool
}
type lockfile struct {
mu sync.Mutex
file string
fd uintptr
lw string
locktype int16
}
var (
lockfiles map[string]*lockfile
lockfilesLock sync.Mutex
// ErrLockReadOnly indicates that the caller only took a read-only lock, and is not allowed to write
ErrLockReadOnly = errors.New("lock is not a read-write lock")
)
// GetLockfile opens a read-write lock file, creating it if necessary. The
// Locker object it returns will be returned unlocked.
func GetLockfile(path string) (Locker, error) {
lockfilesLock.Lock()
defer lockfilesLock.Unlock()
if lockfiles == nil {
lockfiles = make(map[string]*lockfile)
}
cleanPath := filepath.Clean(path)
if locker, ok := lockfiles[cleanPath]; ok {
if !locker.IsReadWrite() {
return nil, errors.Wrapf(ErrLockReadOnly, "lock %q is a read-only lock", cleanPath)
}
return locker, nil
}
fd, err := unix.Open(cleanPath, os.O_RDWR|os.O_CREATE, unix.S_IRUSR|unix.S_IWUSR)
if err != nil {
return nil, errors.Wrapf(err, "error opening %q", cleanPath)
}
unix.CloseOnExec(fd)
locker := &lockfile{file: path, fd: uintptr(fd), lw: stringid.GenerateRandomID(), locktype: unix.F_WRLCK}
lockfiles[filepath.Clean(path)] = locker
return locker, nil
}
// GetROLockfile opens a read-only lock file. The Locker object it returns
// will be returned unlocked.
func GetROLockfile(path string) (Locker, error) {
lockfilesLock.Lock()
defer lockfilesLock.Unlock()
if lockfiles == nil {
lockfiles = make(map[string]*lockfile)
}
cleanPath := filepath.Clean(path)
if locker, ok := lockfiles[cleanPath]; ok {
if locker.IsReadWrite() {
return nil, fmt.Errorf("lock %q is a read-write lock", cleanPath)
}
return locker, nil
}
fd, err := unix.Open(cleanPath, os.O_RDONLY, 0)
if err != nil {
return nil, errors.Wrapf(err, "error opening %q", cleanPath)
}
unix.CloseOnExec(fd)
locker := &lockfile{file: path, fd: uintptr(fd), lw: stringid.GenerateRandomID(), locktype: unix.F_RDLCK}
lockfiles[filepath.Clean(path)] = locker
return locker, nil
}
// Lock locks the lock file
func (l *lockfile) Lock() {
lk := unix.Flock_t{
Type: l.locktype,
Whence: int16(os.SEEK_SET),
Start: 0,
Len: 0,
Pid: int32(os.Getpid()),
}
l.mu.Lock()
for unix.FcntlFlock(l.fd, unix.F_SETLKW, &lk) != nil {
time.Sleep(10 * time.Millisecond)
}
}
// Unlock unlocks the lock file
func (l *lockfile) Unlock() {
lk := unix.Flock_t{
Type: unix.F_UNLCK,
Whence: int16(os.SEEK_SET),
Start: 0,
Len: 0,
Pid: int32(os.Getpid()),
}
for unix.FcntlFlock(l.fd, unix.F_SETLKW, &lk) != nil {
time.Sleep(10 * time.Millisecond)
}
l.mu.Unlock()
}
// Touch updates the lock file with a new unique ID, marking the caller as the last writer
func (l *lockfile) Touch() error {
l.lw = stringid.GenerateRandomID()
id := []byte(l.lw)
_, err := unix.Seek(int(l.fd), 0, os.SEEK_SET)
if err != nil {
return err
}
n, err := unix.Write(int(l.fd), id)
if err != nil {
return err
}
if n != len(id) {
return unix.ENOSPC
}
err = unix.Fsync(int(l.fd))
if err != nil {
return err
}
return nil
}
// Modified indicates if the lock file has been updated since the last time it was loaded
func (l *lockfile) Modified() (bool, error) {
id := []byte(l.lw)
_, err := unix.Seek(int(l.fd), 0, os.SEEK_SET)
if err != nil {
return true, err
}
n, err := unix.Read(int(l.fd), id)
if err != nil {
return true, err
}
if n != len(id) {
return true, unix.ENOSPC
}
lw := l.lw
l.lw = string(id)
return l.lw != lw, nil
}
// TouchedSince indicates if the lock file has been touched since the specified time
func (l *lockfile) TouchedSince(when time.Time) bool {
st := unix.Stat_t{}
err := unix.Fstat(int(l.fd), &st)
if err != nil {
return true
}
touched := time.Unix(statTMtimeUnix(st))
return when.Before(touched)
}
// IsRWLock indicates if the lock file is a read-write lock
func (l *lockfile) IsReadWrite() bool {
return (l.locktype == unix.F_WRLCK)
}


@@ -0,0 +1,137 @@
// Package truncindex provides a general 'index tree', used by Docker
// to reference containers by just a few unambiguous characters of
// their ID.
package truncindex
import (
"errors"
"fmt"
"strings"
"sync"
"github.com/tchap/go-patricia/patricia"
)
var (
// ErrEmptyPrefix is an error returned if the prefix was empty.
ErrEmptyPrefix = errors.New("Prefix can't be empty")
// ErrIllegalChar is returned when a space is in the ID
ErrIllegalChar = errors.New("illegal character: ' '")
// ErrNotExist is returned when an ID or its prefix is not found in the index.
ErrNotExist = errors.New("ID does not exist")
)
// ErrAmbiguousPrefix is returned if the prefix was ambiguous
// (multiple ids for the prefix).
type ErrAmbiguousPrefix struct {
prefix string
}
func (e ErrAmbiguousPrefix) Error() string {
return fmt.Sprintf("Multiple IDs found with provided prefix: %s", e.prefix)
}
// TruncIndex allows the retrieval of string identifiers by any of their unique prefixes.
// This is used to retrieve image and container IDs by more convenient shorthand prefixes.
type TruncIndex struct {
sync.RWMutex
trie *patricia.Trie
ids map[string]struct{}
}
// NewTruncIndex creates a new TruncIndex and initializes with a list of IDs.
func NewTruncIndex(ids []string) (idx *TruncIndex) {
idx = &TruncIndex{
ids: make(map[string]struct{}),
// Change the patricia max prefix-per-node length,
// because our len(ID) is always 64
trie: patricia.NewTrie(patricia.MaxPrefixPerNode(64)),
}
for _, id := range ids {
idx.addID(id)
}
return
}
func (idx *TruncIndex) addID(id string) error {
if strings.Contains(id, " ") {
return ErrIllegalChar
}
if id == "" {
return ErrEmptyPrefix
}
if _, exists := idx.ids[id]; exists {
return fmt.Errorf("id already exists: '%s'", id)
}
idx.ids[id] = struct{}{}
if inserted := idx.trie.Insert(patricia.Prefix(id), struct{}{}); !inserted {
return fmt.Errorf("failed to insert id: %s", id)
}
return nil
}
// Add adds a new ID to the TruncIndex.
func (idx *TruncIndex) Add(id string) error {
idx.Lock()
defer idx.Unlock()
if err := idx.addID(id); err != nil {
return err
}
return nil
}
// Delete removes an ID from the TruncIndex. If there are multiple IDs
// with the given prefix, an error is returned.
func (idx *TruncIndex) Delete(id string) error {
idx.Lock()
defer idx.Unlock()
if _, exists := idx.ids[id]; !exists || id == "" {
return fmt.Errorf("no such id: '%s'", id)
}
delete(idx.ids, id)
if deleted := idx.trie.Delete(patricia.Prefix(id)); !deleted {
return fmt.Errorf("no such id: '%s'", id)
}
return nil
}
// Get retrieves an ID from the TruncIndex. If there are multiple IDs
// with the given prefix, an error is returned.
func (idx *TruncIndex) Get(s string) (string, error) {
if s == "" {
return "", ErrEmptyPrefix
}
var (
id string
)
subTreeVisitFunc := func(prefix patricia.Prefix, item patricia.Item) error {
if id != "" {
// the prefix is ambiguous if two or more IDs match it
id = ""
return ErrAmbiguousPrefix{prefix: string(prefix)}
}
id = string(prefix)
return nil
}
idx.RLock()
defer idx.RUnlock()
if err := idx.trie.VisitSubtree(patricia.Prefix(s), subTreeVisitFunc); err != nil {
return "", err
}
if id != "" {
return id, nil
}
return "", ErrNotExist
}
// Iterate iterates over all stored IDs, and passes each of them to the given handler.
func (idx *TruncIndex) Iterate(handler func(id string)) {
idx.trie.Visit(func(prefix patricia.Prefix, item patricia.Item) error {
handler(string(prefix))
return nil
})
}

vendor/github.com/containers/storage/stat_mtim.go generated vendored Normal file

@@ -0,0 +1,11 @@
// +build linux solaris
package storage
import (
"golang.org/x/sys/unix"
)
func statTMtimeUnix(st unix.Stat_t) (int64, int64) {
return st.Mtim.Unix()
}

vendor/github.com/containers/storage/stat_mtimespec.go generated vendored Normal file

@@ -0,0 +1,11 @@
// +build !linux,!solaris
package storage
import (
"golang.org/x/sys/unix"
)
func statTMtimeUnix(st unix.Stat_t) (int64, int64) {
return st.Mtimespec.Unix()
}
}


@@ -1,138 +0,0 @@
package storage
import (
"os"
"path/filepath"
"sync"
"syscall"
"time"
"github.com/containers/storage/pkg/stringid"
)
// A Locker represents a file lock where the file is used to cache an
// identifier of the last party that made changes to whatever's being protected
// by the lock.
type Locker interface {
sync.Locker
// Touch records, for others sharing the lock, that the caller was the
// last writer. It should only be called with the lock held.
Touch() error
// Modified() checks if the most recent writer was a party other than the
// last recorded writer. It should only be called with the lock held.
Modified() (bool, error)
// TouchedSince() checks if the most recent writer modified the file (likely using Touch()) after the specified time.
TouchedSince(when time.Time) bool
}
type lockfile struct {
mu sync.Mutex
file string
fd uintptr
lw string
}
var (
lockfiles map[string]*lockfile
lockfilesLock sync.Mutex
)
// GetLockfile opens a lock file, creating it if necessary. The Locker object
// it returns will be returned unlocked.
func GetLockfile(path string) (Locker, error) {
lockfilesLock.Lock()
defer lockfilesLock.Unlock()
if lockfiles == nil {
lockfiles = make(map[string]*lockfile)
}
if locker, ok := lockfiles[filepath.Clean(path)]; ok {
return locker, nil
}
fd, err := syscall.Open(filepath.Clean(path), os.O_RDWR|os.O_CREATE, syscall.S_IRUSR|syscall.S_IWUSR)
if err != nil {
return nil, err
}
locker := &lockfile{file: path, fd: uintptr(fd), lw: stringid.GenerateRandomID()}
lockfiles[filepath.Clean(path)] = locker
return locker, nil
}
func (l *lockfile) Lock() {
lk := syscall.Flock_t{
Type: syscall.F_WRLCK,
Whence: int16(os.SEEK_SET),
Start: 0,
Len: 0,
Pid: int32(os.Getpid()),
}
l.mu.Lock()
for syscall.FcntlFlock(l.fd, syscall.F_SETLKW, &lk) != nil {
time.Sleep(10 * time.Millisecond)
}
}
func (l *lockfile) Unlock() {
lk := syscall.Flock_t{
Type: syscall.F_UNLCK,
Whence: int16(os.SEEK_SET),
Start: 0,
Len: 0,
Pid: int32(os.Getpid()),
}
for syscall.FcntlFlock(l.fd, syscall.F_SETLKW, &lk) != nil {
time.Sleep(10 * time.Millisecond)
}
l.mu.Unlock()
}
func (l *lockfile) Touch() error {
l.lw = stringid.GenerateRandomID()
id := []byte(l.lw)
_, err := syscall.Seek(int(l.fd), 0, os.SEEK_SET)
if err != nil {
return err
}
n, err := syscall.Write(int(l.fd), id)
if err != nil {
return err
}
if n != len(id) {
return syscall.ENOSPC
}
err = syscall.Fsync(int(l.fd))
if err != nil {
return err
}
return nil
}
func (l *lockfile) Modified() (bool, error) {
id := []byte(l.lw)
_, err := syscall.Seek(int(l.fd), 0, os.SEEK_SET)
if err != nil {
return true, err
}
n, err := syscall.Read(int(l.fd), id)
if err != nil {
return true, err
}
if n != len(id) {
return true, syscall.ENOSPC
}
lw := l.lw
l.lw = string(id)
return l.lw != lw, nil
}
func (l *lockfile) TouchedSince(when time.Time) bool {
st := syscall.Stat_t{}
err := syscall.Fstat(int(l.fd), &st)
if err != nil {
return true
}
touched := time.Unix(st.Mtim.Unix())
return when.Before(touched)
}


@@ -1,13 +0,0 @@
// +build !containersstorageautogen
// Package storageversion is auto-generated at build-time
package storageversion
// Default build-time variable for library-import.
// This file is overridden on build with build-time informations.
const (
GitCommit string = "library-import"
Version string = "library-import"
BuildTime string = "library-import"
IAmStatic string = "library-import"
)

vendor/github.com/containers/storage/vendor.conf generated vendored Normal file

@@ -0,0 +1,19 @@
github.com/BurntSushi/toml master
github.com/Microsoft/go-winio 307e919c663683a9000576fdc855acaf9534c165
github.com/Microsoft/hcsshim 0f615c198a84e0344b4ed49c464d8833d4648dfc
github.com/Sirupsen/logrus 61e43dc76f7ee59a82bdf3d71033dc12bea4c77d
github.com/docker/engine-api 4290f40c056686fcaa5c9caf02eac1dde9315adf
github.com/docker/go-connections eb315e36415380e7c2fdee175262560ff42359da
github.com/docker/go-units 0dadbb0345b35ec7ef35e228dabb8de89a65bf52
github.com/go-check/check 20d25e2804050c1cd24a7eea1e7a6447dd0e74ec
github.com/mattn/go-shellwords 753a2322a99f87c0eff284980e77f53041555bc6
github.com/mistifyio/go-zfs c0224de804d438efd11ea6e52ada8014537d6062
github.com/opencontainers/runc 6c22e77604689db8725fa866f0f2ec0b3e8c3a07
github.com/opencontainers/selinux ba1aefe8057f1d0cfb8e88d0ec1dc85925ef987d
github.com/pborman/uuid 1b00554d822231195d1babd97ff4a781231955c9
github.com/pkg/errors master
github.com/tchap/go-patricia v2.2.6
github.com/vbatts/tar-split bd4c5d64c3e9297f410025a3b1bd0c58f659e721
github.com/vdemeester/shakers 24d7f1d6a71aa5d9cbe7390e4afb66b7eef9e1b3
golang.org/x/net f2499483f923065a842d38eb4c7f1927e6fc6e6d
golang.org/x/sys d75a52659825e75fff6158388dddc6a5b04f9ba5


@@ -76,8 +76,7 @@ may be the better choice.
For those who have previously deployed their own registry based on the Registry
1.0 implementation and wish to deploy a Registry 2.0 while retaining images,
data migration is required. A tool to assist with migration efforts has been
created. For more information see [docker/migrator]
(https://github.com/docker/migrator).
created. For more information see [docker/migrator](https://github.com/docker/migrator).
## Contribute


@@ -15,7 +15,7 @@
// tag := /[\w][\w.-]{0,127}/
//
// digest := digest-algorithm ":" digest-hex
// digest-algorithm := digest-algorithm-component [ digest-algorithm-separator digest-algorithm-component ]
// digest-algorithm := digest-algorithm-component [ digest-algorithm-separator digest-algorithm-component ]*
// digest-algorithm-separator := /[+.-_]/
// digest-algorithm-component := /[A-Za-z][A-Za-z0-9]*/
// digest-hex := /[0-9a-fA-F]{32,}/ ; At least 128 bit digest value
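
Per this grammar, a digest string splits at the first ':' into an algorithm and a hex value of at least 32 characters. A trivial illustration in Go:

package main

import (
	"fmt"
	"strings"
)

func main() {
	d := "sha256:6c3c624b58dbbcd3c0dd82b4c53f04194d1247c6eebdaab7c610cf7d66709b3b"
	parts := strings.SplitN(d, ":", 2) // algorithm, then the hex value
	fmt.Println(parts[0])              // sha256
	fmt.Println(len(parts[1]) >= 32)   // true: at least a 128-bit value
}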


@@ -125,6 +125,14 @@ func (a Algorithm) Hash() hash.Hash {
return algorithms[a].New()
}
// Encode encodes the raw bytes of a digest, typically from a hash.Hash, into
// the encoded portion of the digest.
func (a Algorithm) Encode(d []byte) string {
// TODO(stevvooe): Currently, all algorithms use a hex encoding. When we
// add support for back registration, we can modify this accordingly.
return fmt.Sprintf("%x", d)
}
// FromReader returns the digest of the reader using the algorithm.
func (a Algorithm) FromReader(rd io.Reader) (Digest, error) {
digester := a.Digester()


@@ -45,16 +45,21 @@ func NewDigest(alg Algorithm, h hash.Hash) Digest {
// functions. This is also useful for rebuilding digests from binary
// serializations.
func NewDigestFromBytes(alg Algorithm, p []byte) Digest {
return Digest(fmt.Sprintf("%s:%x", alg, p))
return NewDigestFromEncoded(alg, alg.Encode(p))
}
// NewDigestFromHex returns a Digest from alg and the hex-encoded digest.
// NewDigestFromHex is deprecated. Please use NewDigestFromEncoded.
func NewDigestFromHex(alg, hex string) Digest {
return Digest(fmt.Sprintf("%s:%s", alg, hex))
return NewDigestFromEncoded(Algorithm(alg), hex)
}
// NewDigestFromEncoded returns a Digest from alg and the encoded digest.
func NewDigestFromEncoded(alg Algorithm, encoded string) Digest {
return Digest(fmt.Sprintf("%s:%s", alg, encoded))
}
// DigestRegexp matches valid digest types.
var DigestRegexp = regexp.MustCompile(`[a-zA-Z0-9-_+.]+:[a-fA-F0-9]+`)
var DigestRegexp = regexp.MustCompile(`[a-z0-9]+(?:[.+_-][a-z0-9]+)*:[a-zA-Z0-9=_-]+`)
// DigestRegexpAnchored matches valid digest types, anchored to the start and end of the match.
var DigestRegexpAnchored = regexp.MustCompile(`^` + DigestRegexp.String() + `$`)
@@ -133,12 +138,17 @@ func (d Digest) Verifier() Verifier {
}
}
// Hex returns the hex digest portion of the digest. This will panic if the
// Encoded returns the encoded portion of the digest. This will panic if the
// underlying digest is not in a valid format.
func (d Digest) Hex() string {
func (d Digest) Encoded() string {
return string(d[d.sepIndex()+1:])
}
// Hex is deprecated. Please use Digest.Encoded.
func (d Digest) Hex() string {
return d.Encoded()
}
func (d Digest) String() string {
return string(d)
}
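
The change above is a rename with a compatibility shim: Encoded() is the new accessor and Hex() forwards to it. A small sketch, assuming the package is importable as github.com/opencontainers/go-digest and that Digest also exposes the existing Algorithm() accessor:

package main

import (
	"fmt"
	"strings"

	digest "github.com/opencontainers/go-digest"
)

func main() {
	d, err := digest.SHA256.FromReader(strings.NewReader("hello"))
	if err != nil {
		panic(err)
	}
	fmt.Println(d.Algorithm())          // sha256
	fmt.Println(d.Encoded() == d.Hex()) // true: Hex is now a deprecated alias
}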


@@ -7,7 +7,7 @@
The OCI Image Format project creates and maintains the software shipping container image format spec (OCI Image Format).
The specification can be found [here](spec.md).
**[The specification can be found here](spec.md).**
This repository also provides [Go types](specs-go), [intra-blob validation tooling, and JSON Schema](schema).
The Go types and validation should be compatible with the current Go release; earlier Go releases are not supported.
@@ -19,7 +19,7 @@ Additional documentation about how this group operates:
- [Releases](RELEASES.md)
- [Project Documentation](project.md)
The _optional_ and _base_ layers of all OCI projects are tracked in the [OCI Scope Table](https://www.opencontainers.org/governance/oci-scope-table).
The _optional_ and _base_ layers of all OCI projects are tracked in the [OCI Scope Table](https://www.opencontainers.org/about/oci-scope-table).
## Running an OCI Image
@@ -39,25 +39,16 @@ To support this UX the OCI Image Format contains sufficient information to launc
**Q: Why doesn't this project mention distribution?**
A: Distribution, for example using HTTP as both Docker v2.2 and AppC do today, is currently out of scope on the [OCI Scope Table](https://www.opencontainers.org/governance/oci-scope-table).
A: Distribution, for example using HTTP as both Docker v2.2 and AppC do today, is currently out of scope on the [OCI Scope Table](https://www.opencontainers.org/about/oci-scope-table).
There has been [some discussion on the TOB mailing list](https://groups.google.com/a/opencontainers.org/d/msg/tob/A3JnmI-D-6Y/tLuptPDHAgAJ) to make distribution an optional layer, but this topic is a work in progress.
**Q: Why a new project?**
A: The [first OCI spec](https://github.com/opencontainers/runtime-spec) centered around defining the run side of a container.
This is generally seen to be an orthogonal concern to the shipping container component.
As practical examples of this separation you see many organizations separating these concerns into different teams and organizations: the Docker Distribution project and the Docker containerd project; Amazon ECS and Amazon EC2 Container Registry, etc.
**Q: Why work on this?**
A: We are seeing many independent implementations of container image handling including build systems, registries, and image analysis tools.
As an organization we would like to encourage this growth and bring people together to ensure a technically correct and open specification continues to evolve reflecting the OCI values.
**Q: What happens to AppC or Docker Image Formats?**
A: Existing formats can continue to be a proving ground for technologies, as needed.
The OCI Image Format project strives to provide a dependable open specification that can be shared between different tools and be evolved for years or decades of compatibility; as the deb and rpm format have.
Find more [FAQ on the OCI site](https://www.opencontainers.org/faq).
## Roadmap
The [GitHub milestones](https://github.com/opencontainers/image-spec/milestones) lay out the path to the OCI v1.0.0 release in late 2016.
@@ -85,7 +76,7 @@ When in doubt, start on the [mailing-list](#mailing-list).
The contributors and maintainers of all OCI projects have a weekly meeting Wednesdays at 2:00 PM (USA Pacific).
Everyone is welcome to participate via [UberConference web][UberConference] or audio-only: +1-415-968-0849 (no PIN needed).
An initial agenda will be posted to the [mailing list](#mailing-list) earlier in the week, and everyone is welcome to propose additional topics or suggest other agenda alterations there.
Minutes are posted to the [mailing list](#mailing-list) and minutes from past calls are archived to the [wiki](https://github.com/opencontainers/runtime-spec/wiki) for those who are unable to join the call.
Minutes are posted to the [mailing list](#mailing-list) and minutes from past calls are archived [here][minutes].
## Mailing List
@@ -173,3 +164,4 @@ Read more on [How to Write a Git Commit Message](http://chris.beams.io/posts/git
[UberConference]: https://www.uberconference.com/opencontainers
[irc-logs]: http://ircbot.wl.linuxfoundation.org/eavesdrop/%23opencontainers/
[minutes]: http://ircbot.wl.linuxfoundation.org/meetings/opencontainers/


@@ -1,16 +0,0 @@
// Copyright 2016 The Linux Foundation
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// Package schema defines the OCI image media types, schema definitions and validation functions.
package schema


@@ -1,44 +0,0 @@
// Copyright 2016 The Linux Foundation
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package schema
import (
"encoding/json"
"io"
"go4.org/errorutil"
)
// A SyntaxError is a description of a JSON syntax error
// including line, column and offset in the JSON file.
type SyntaxError struct {
msg string
Line, Col int
Offset int64
}
func (e *SyntaxError) Error() string { return e.msg }
// WrapSyntaxError checks whether the given error is a *json.SyntaxError
// and converts it into a *schema.SyntaxError containing line/col information using the given reader.
// If the given error is not a *json.SyntaxError it is returned unchanged.
func WrapSyntaxError(r io.Reader, err error) error {
if serr, ok := err.(*json.SyntaxError); ok {
line, col, _ := errorutil.HighlightBytePosition(r, serr.Offset)
return &SyntaxError{serr.Error(), line, col, serr.Offset}
}
return err
}


@@ -1,335 +0,0 @@
// Copyright 2016 The Linux Foundation
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package schema
import (
"bytes"
"compress/gzip"
"encoding/base64"
"io/ioutil"
"net/http"
"os"
"path"
"sync"
"time"
)
type _escLocalFS struct{}
var _escLocal _escLocalFS
type _escStaticFS struct{}
var _escStatic _escStaticFS
type _escDirectory struct {
fs http.FileSystem
name string
}
type _escFile struct {
compressed string
size int64
modtime int64
local string
isDir bool
once sync.Once
data []byte
name string
}
func (_escLocalFS) Open(name string) (http.File, error) {
f, present := _escData[path.Clean(name)]
if !present {
return nil, os.ErrNotExist
}
return os.Open(f.local)
}
func (_escStaticFS) prepare(name string) (*_escFile, error) {
f, present := _escData[path.Clean(name)]
if !present {
return nil, os.ErrNotExist
}
var err error
f.once.Do(func() {
f.name = path.Base(name)
if f.size == 0 {
return
}
var gr *gzip.Reader
b64 := base64.NewDecoder(base64.StdEncoding, bytes.NewBufferString(f.compressed))
gr, err = gzip.NewReader(b64)
if err != nil {
return
}
f.data, err = ioutil.ReadAll(gr)
})
if err != nil {
return nil, err
}
return f, nil
}
func (fs _escStaticFS) Open(name string) (http.File, error) {
f, err := fs.prepare(name)
if err != nil {
return nil, err
}
return f.File()
}
func (dir _escDirectory) Open(name string) (http.File, error) {
return dir.fs.Open(dir.name + name)
}
func (f *_escFile) File() (http.File, error) {
type httpFile struct {
*bytes.Reader
*_escFile
}
return &httpFile{
Reader: bytes.NewReader(f.data),
_escFile: f,
}, nil
}
func (f *_escFile) Close() error {
return nil
}
func (f *_escFile) Readdir(count int) ([]os.FileInfo, error) {
return nil, nil
}
func (f *_escFile) Stat() (os.FileInfo, error) {
return f, nil
}
func (f *_escFile) Name() string {
return f.name
}
func (f *_escFile) Size() int64 {
return f.size
}
func (f *_escFile) Mode() os.FileMode {
return 0
}
func (f *_escFile) ModTime() time.Time {
return time.Unix(f.modtime, 0)
}
func (f *_escFile) IsDir() bool {
return f.isDir
}
func (f *_escFile) Sys() interface{} {
return f
}
// _escFS returns a http.Filesystem for the embedded assets. If useLocal is true,
// the filesystem's contents are instead used.
func _escFS(useLocal bool) http.FileSystem {
if useLocal {
return _escLocal
}
return _escStatic
}
// _escDir returns a http.Filesystem for the embedded assets on a given prefix dir.
// If useLocal is true, the filesystem's contents are instead used.
func _escDir(useLocal bool, name string) http.FileSystem {
if useLocal {
return _escDirectory{fs: _escLocal, name: name}
}
return _escDirectory{fs: _escStatic, name: name}
}
// _escFSByte returns the named file from the embedded assets. If useLocal is
// true, the filesystem's contents are instead used.
func _escFSByte(useLocal bool, name string) ([]byte, error) {
if useLocal {
f, err := _escLocal.Open(name)
if err != nil {
return nil, err
}
b, err := ioutil.ReadAll(f)
f.Close()
return b, err
}
f, err := _escStatic.prepare(name)
if err != nil {
return nil, err
}
return f.data, nil
}
// _escFSMustByte is the same as _escFSByte, but panics if name is not present.
func _escFSMustByte(useLocal bool, name string) []byte {
b, err := _escFSByte(useLocal, name)
if err != nil {
panic(err)
}
return b
}
// _escFSString is the string version of _escFSByte.
func _escFSString(useLocal bool, name string) (string, error) {
b, err := _escFSByte(useLocal, name)
return string(b), err
}
// _escFSMustString is the string version of _escFSMustByte.
func _escFSMustString(useLocal bool, name string) string {
return string(_escFSMustByte(useLocal, name))
}
var _escData = map[string]*_escFile{
"/config-schema.json": {
local: "config-schema.json",
size: 774,
modtime: 1485388791,
compressed: `
H4sIAAAJbogA/5SRvW7rMAyFdz+F4WS8ju7QKWsfoEPHooMqUzEDWFRJZgiKvHv1EzcxEBTuEsSH/M4R
ya+mbbsBxDFGRQrdvu1eIoRnCmoxALfpn8dD+xrBoUdnS9e/jG3FjTDZjIyqcW/MUSj0Vd0RH8zA1mv/
/8lUbVM5HGZEEkMpzc1pUrDabXCyBzCu5FdSzxEySx9HcFq1yMmBFUFSJY+TNMdgFYYf4Q4VZQzVruie
eLKaK0NCesUJulK71JbOnnQk/sVq2c1uRE2POzGsZUjWdl53cde9ZfDl8eClr+VdvsLGJAUD5mvJvMOF
FxOpl797XbmF14iixOdHY1hme76tO+1mug9dHTtHXLlLM/+WN3QMnyfkcvK3B5e4bXo5ffp4by7NdwAA
AP//XlvgsQYDAAA=
`,
},
"/content-descriptor.json": {
local: "content-descriptor.json",
size: 956,
modtime: 1485388791,
compressed: `
H4sIAAAJbogA/5STP2/bMBDFd3+KgxIgSxx1CDoYQZZ279BuRQeaPEmXmn96PCNVC3/3HkUpttsigRfD
fHyP9zvy9HsF0DjMlikJxdBsoPmUMHyIQQwFZCj/MAh8nE2R4XNCSx1ZMyVuyxHX2Q7oTYkPImnTtk85
hnVV7yL3rWPTyfrdfVu1q5ojt0SyZqJWtkvlPMWqu3Uv1WtOxoQlGbdPaKVqiTXPQph1pzSmmkdH5ks1
V+nffmVAmHzlUIgdFIGxQ1YadHBSY4pf617JOezymrzp8a40e6WQHQUqx+b2WHiKHWq6yfTrLZRiAQqw
HQXzhTj/AaEg7+/PIRz1mOUNDMujXnfPJg1kQV/Bfs97DzW7YFWW24JblsmIIAe4eRhMHh43DwP+NE6H
xZvdnHy8ufAiZ9izBva8y6/gG9hRZSxG6Dh6eNYuBoWkPEODNyNsEVx8DruolO4ItkyXYTbjUSZBf1r3
xJmFKfQvVt3pIntTLllpqZn1w2r5nVppGH/sibF8BF//HtjTiTl/OF18Wx1WfwIAAP//z1UgVLwDAAA=
`,
},
"/defs-config.json": {
local: "defs-config.json",
size: 2236,
modtime: 1485388791,
compressed: `
H4sIAAAJbogA/+RVzY7TMBC+5yksw7Gwd6673JCKVAEHhCo3Ge/OEnvMeIKI0L47TrZ0888qpac9VG0m
/n7m89j9nSmlC4g5YxAkr98pfQMWPTZPUQXDgnlVGlZCahvAX5MXgx5YpV8Wb9UuQI4Wc9PiN4+EJ4ZE
2GikYt4uPz2nitQBGkE63EMuLbStB6YASRdiZ3Wqf4rAvUqHIwqjv9WnVw+bJ9z7X4EiFB+JJQ7xrxls
g0+W49v7SP7VVcf9lTNh1zJvHz1O8/ufc7YMs6n1pvsKBdzQxkIjSWpGVLgOhF6G2uRh2/T0tSfQl1u0
uGDzH1b7dgeWF134qiz7TF2eb5MRXLvixfb+mcrKwWicn9n/2qm/dFdfiL8n2Rtcdc4/mAOUl45kN7Hx
/z2SrPt9ZNdMJDaec4EWaO0ei1FEl7+tjuuXtrQnC75yoz3TpamBo17O7JQCw48KGYoez1MGQ3dZl/Fv
5ncYhbg+J/ScwQiMbqql7i2xM9JOY4K+EXQwPfGmkjtadVaOrvaHehWanIPxP89zoOCC1Pt2J+fgB6IS
jNdz5yFrPg/ZnwAAAP//3oH4m7wIAAA=
`,
},
"/defs-image.json": {
local: "defs-image.json",
size: 2916,
modtime: 1485388791,
compressed: `
H4sIAAAJbogA/8SWTU/bTBDH7/kU+xj0cMiLaVUhNUJIVbn0xIGeigIa1uN4qL3r7m6gIfJ3767t+D3Q
lKg9wc6u//P/zc443owY8wLUXFFqSApvzrxLDEmQW2mWgjLEVzEoZiS7SlF8lsIACVTsSwJLZNcpcgqJ
Q/74pNCrBKyeS2GDCQYEX9cpViEbpMAljIxJ9dz3pZXnW3k9k2rpax5hAj65VH4tMdkKmELQ00aRWNbx
FIxBlePc3nyafoPp8+n046L+97+j4/+nt3ez8WJzOnn3/izzf+/YsZenyIpMXkBL1KaJ1CmmiZBxtU6N
XCpII+LMEvHvepWw4lkmQ+YOyfsH5GbCSOTLEoCdnEego4v5eYQ/IbClTiAun7w42bMOBdLdeDZdjOd2
FTrWcYcoAUGhVb8sOaR6w4X1tXqOC+46rvDHihS6PDdlrNU9kzqo6bm1Li+jEUljMKFUiVeGFnVhlDVv
ext1A29Hm+661/ys49jeocIQlS0JBqyDlUsc23337JHfmJBGV1dnsy7k617cMdc792uDek8/1o2ePWgp
2sZImLMPw6Z6bf/vWv+FypYuBwlWKtav+AcWU2HSHWahkgl7shiRdUm6dM0SWLN7ZIF8ErG0NoO2s22b
g1Kwbm+RwaTrYfcol7uum8FV3hKQ19jLBjGrAeig7jfHlcog2lBrDU5xvgOKR5acm5XCLpzUTaJFS3HH
wPY1u7t/TOu/YLV/T63trA92OFtW7I1mZo82Q9HlhzNVib7VXIjgKn7YktWqO+31R7RIOTimr4I1J3II
9BEUgei+Q/eu10vF+ktgwy+hUfPv9uMChJAG2l+Ge99qU6T6PZcCr8LW22azx09dAul1jnrdAW6UezP0
7hOrOPZ60IvRdpWNstGvAAAA//83ffekZAsAAA==
`,
},
"/defs.json": {
local: "defs.json",
size: 3193,
modtime: 1485388791,
compressed: `
H4sIAAAJbogA/7RWTXPaMBC98ys8tEfa2PIX9NYp/cghAzOZnjo9uGYBtSCpstxpmuG/VzLGWPZiMKWH
JPau9r23T6tYzwPHGS4gSyUVinI2fOMMp7CkjJq3zMkzWDhqLXm+WvNc6UdwZgLYO85UQhlI51FASpc0
TYry0R6vAtB4hkIHKVPj6k2/qycBhk3HYQWyqCwSW127zbc698oj42M4+V2GPRIXwd2oQvaivtA+iSMM
3MRb8D7pC0+8IA7GfhRgHFWyRRQFfYkmhPh+TFw/GodBHEeu6yKMyCqLOr9idzAeEoYt3N57gwFHYei3
oXvvCwYdkEkwiWIyaeP33g4M3xsHQRQHgRv7sTsJQ4KZ70VzbnBlnZAzmC114EsZcKpUkX4pwWSHL+5q
B+6utLxauBvh1YduWL7Z1FaXT18RL26pUDt7U4WZkpSt+is8cOzrb6tpm4jHAnb/Gxsl/u07pOo4SSJR
Wj+bSy5AKgpZrUinXz97o50V6mphUP/bEjXbU/9nUSXWGVGf76d1IafHRh94q/DjtYVvpUyeZktdn2EW
JCZ9dIAq2Da6xqmMHrTDkm9v/ZWU+EbbPB/oBuaJWmMM9brD+vfs13kDG+ItgE+c/6gjiBNTImxRHWxV
C9hh1DatsstwMNVNNLDa7wAzPp0Z4pLPGHLTmSocRhnvpw+JEI1/Lac2YM0zZZ2WDsr6iWlalh5ufrcA
y+gfuBIFdeSB50xd4kbGc5leSN09kPryrChLysvzP8NxYd+bO6EuGfFy/Pp9MqoplfAzpxIW1hfU6nnU
MrVJbn8cB+ZnN/gbAAD//0JyEpx5DAAA
`,
},
"/image-layout-schema.json": {
local: "image-layout-schema.json",
size: 414,
modtime: 1485388791,
compressed: `
H4sIAAAJbogA/2yPsU7EMAyG9z6FFRhpUySmW286CekGJBbEEFpfmxNNQuIinVDfHSduYbhbWvmPP3/2
TwWgekxdtIGsd2oH6hjQ7b0jYx1GOExmQHg2Fz8TvHQjTkY9ZOo+ScHESBR2Wp+Td7WkjY+D7qM5Ud0+
acnuhLP9hiRmPMu6TZYKJt3aZnH9WcRC0iVgZv3HGbs1C5EnRLKY+CVfkw2ZlI1feaicJW/X135LB/gT
0Ihw3B/gyly4zZ4oWjf85+jmifO3tebksWmbVq31e/kv/F3KwhG/Zhux/0NurVtlbql+AwAA//8bwMuB
ngEAAA==
`,
},
"/image-manifest-schema.json": {
local: "image-manifest-schema.json",
size: 921,
modtime: 1485389045,
compressed: `
H4sIAAAJbogA/5ySMU/rMBSF9/yKq7RjU7/39KauTB0QA4gFMZjkJrlVbQfbRVSo/51ru6Y1ZUAdc+xz
7ndP/FEB1B261tLkyeh6BfXdhPrGaC9Jo4W1kgPCrdTUo/NwP2FLPbUy3l4E+9y1IyoZrKP300qIjTO6
SerS2EF0Vva++fNfJG2WfNRli2OP4altnuqiLd0WFAiEOhIkr99PGNzmZYPtUZssZ1hP6PgkLMZainjk
xLRcki93fhjJQU+47cClDdGBHxHicMjDIeXBWwoE6UBqIO1xQBspYvh1m4kS9ist73oxRpEmtVN89u+k
yfesRemQTmoG6Gk4b2BusQ+xAQ21b3Ijxi7D/6sL+1buGevcnqmktXJfMK09qnD176mPo5LNv53O8wsK
qbXx8eUVKNfV3WyJOz+PXHyvpsPeNdEVoWaCBe483i6cVWaNpLXF1x1ZDFhPP73D8p+UFfPHc3WoPgMA
AP//UcoRdpkDAAA=
`,
},
"/manifest-list-schema.json": {
local: "manifest-list-schema.json",
size: 873,
modtime: 1485389045,
compressed: `
H4sIAAAJbogA/6SSv0/7MBDF9/wVp/Q7flMjxNQVFiQQA4gFMZjk0lzV2OHORVSo/zv+EZdEZUDqUqnP
fu8+7+KvAqBsUGqmwZE15QrKhwHNtTVOk0GG216vEe61oRbFwR35n8cBa2qp1tHyP2T8k7rDXgd/59yw
Umoj1lRJXVpeq4Z166qLK5W0RfJRky3iPdaPrvNoibZ0W1HAUP2IUW09Rgpw+wFDhH3bYD1qA/sgdoTi
T0JFr6WcZx+baib5tP1TRwIt4bYBSTVRwHUIkQBmBJBC4SOlghbQBsg4XCNHlDjhjI5qjn2MzK1PZvVk
qN/1/uzyR9OfWYvSIZ2UeZJM15GTNbPeTzo47Kf3widnbMPNBlupIvsyfPOF8oKnCAuVY5ubccuWyzHh
MGPRxlgX39OM5pzVTSOPPf4EPXUWmTWSlozvO2IMWC+/PayT1fr/r8Wh+A4AAP//b2/SMmkDAAA=
`,
},
"/": {
isDir: true,
local: "/",
},
}


@@ -1,21 +0,0 @@
// Copyright 2016 The Linux Foundation
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package schema
// Generates an embedded http.FileSystem for all schema files
// using esc (https://github.com/mjibson/esc).
// This should generally be invoked with `make schema-fs`
//go:generate esc -private -pkg=schema -ignore=.*go .


@@ -1,50 +0,0 @@
// Copyright 2016 The Linux Foundation
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package schema
import (
"net/http"
"github.com/opencontainers/image-spec/specs-go/v1"
)
// Media types for the OCI image formats
const (
MediaTypeDescriptor Validator = v1.MediaTypeDescriptor
MediaTypeManifest Validator = v1.MediaTypeImageManifest
MediaTypeManifestList Validator = v1.MediaTypeImageManifestList
MediaTypeImageConfig Validator = v1.MediaTypeImageConfig
MediaTypeImageLayer unimplemented = v1.MediaTypeImageLayer
)
var (
// fs stores the embedded http.FileSystem
// having the OCI JSON schema files in root "/".
fs = _escFS(false)
// specs maps OCI schema media types to schema files.
specs = map[Validator]string{
MediaTypeDescriptor: "content-descriptor.json",
MediaTypeManifest: "image-manifest-schema.json",
MediaTypeManifestList: "manifest-list-schema.json",
MediaTypeImageConfig: "config-schema.json",
}
)
// FileSystem returns an in-memory file system including the schema files.
// The schema files are located at the root directory.
func FileSystem() http.FileSystem {
return fs
}


@@ -1,119 +0,0 @@
// Copyright 2016 The Linux Foundation
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package schema
import (
"bytes"
"encoding/json"
"fmt"
"io"
"io/ioutil"
"github.com/opencontainers/image-spec/specs-go/v1"
"github.com/pkg/errors"
"github.com/xeipuuv/gojsonschema"
)
// Validator wraps a media type string identifier
// and implements validation against a JSON schema.
type Validator string
type validateDescendantsFunc func(r io.Reader) error
var mapValidateDescendants = map[Validator]validateDescendantsFunc{
MediaTypeManifest: validateManifestDescendants,
}
// ValidationError contains all the errors that happened during validation.
type ValidationError struct {
Errs []error
}
func (e ValidationError) Error() string {
return fmt.Sprintf("%v", e.Errs)
}
// Validate validates the given reader against the schema of the wrapped media type.
func (v Validator) Validate(src io.Reader) error {
buf, err := ioutil.ReadAll(src)
if err != nil {
return errors.Wrap(err, "unable to read the document file")
}
if f, ok := mapValidateDescendants[v]; ok {
if f == nil {
return fmt.Errorf("internal error: mapValidateDescendents[%q] is nil", v)
}
err = f(bytes.NewReader(buf))
if err != nil {
return err
}
}
sl := gojsonschema.NewReferenceLoaderFileSystem("file:///"+specs[v], fs)
ml := gojsonschema.NewStringLoader(string(buf))
result, err := gojsonschema.Validate(sl, ml)
if err != nil {
return errors.Wrapf(
WrapSyntaxError(bytes.NewReader(buf), err),
"schema %s: unable to validate", v)
}
if result.Valid() {
return nil
}
errs := make([]error, 0, len(result.Errors()))
for _, desc := range result.Errors() {
errs = append(errs, fmt.Errorf("%s", desc))
}
return ValidationError{
Errs: errs,
}
}
type unimplemented string
func (v unimplemented) Validate(src io.Reader) error {
return fmt.Errorf("%s: unimplemented", v)
}
func validateManifestDescendants(r io.Reader) error {
header := v1.Manifest{}
buf, err := ioutil.ReadAll(r)
if err != nil {
return errors.Wrapf(err, "error reading the io stream")
}
err = json.Unmarshal(buf, &header)
if err != nil {
return errors.Wrap(err, "manifest format mismatch")
}
if header.Config.MediaType != string(v1.MediaTypeImageConfig) {
fmt.Printf("warning: config %s has an unknown media type: %s\n", header.Config.Digest, header.Config.MediaType)
}
for _, layer := range header.Layers {
if layer.MediaType != string(v1.MediaTypeImageLayer) &&
layer.MediaType != string(v1.MediaTypeImageLayerNonDistributable) {
fmt.Printf("warning: layer %s has an unknown media type: %s\n", layer.Digest, layer.MediaType)
}
}
return nil
}


@@ -14,7 +14,11 @@
package v1
import "time"
import (
"time"
digest "github.com/opencontainers/go-digest"
)
// ImageConfig defines the execution parameters which should be used as a base when running a container using an image.
type ImageConfig struct {
@@ -40,7 +44,10 @@ type ImageConfig struct {
WorkingDir string `json:"WorkingDir,omitempty"`
// Labels contains arbitrary metadata for the container.
Labels map[string]string `json:"labels,omitempty"`
Labels map[string]string `json:"Labels,omitempty"`
// StopSignal contains the system call signal that will be sent to the container to exit.
StopSignal string `json:"StopSignal,omitempty"`
}
// RootFS describes a layer's content addresses
@@ -49,13 +56,13 @@ type RootFS struct {
Type string `json:"type"`
// DiffIDs is an array of layer content hashes (DiffIDs), in order from bottom-most to top-most.
DiffIDs []string `json:"diff_ids"`
DiffIDs []digest.Digest `json:"diff_ids"`
}
// History describes the history of a layer.
type History struct {
// Created is the combined date and time at which the layer was created, formatted as defined by RFC 3339, section 5.6.
Created time.Time `json:"created,omitempty"`
Created *time.Time `json:"created,omitempty"`
// CreatedBy is the command which created the layer.
CreatedBy string `json:"created_by,omitempty"`
@@ -74,7 +81,7 @@ type History struct {
// This provides the `application/vnd.oci.image.config.v1+json` mediatype when marshalled to JSON.
type Image struct {
// Created is the combined date and time at which the image was created, formatted as defined by RFC 3339, section 5.6.
Created time.Time `json:"created,omitempty"`
Created *time.Time `json:"created,omitempty"`
// Author defines the name and/or email address of the person or entity which created and is responsible for maintaining the image.
Author string `json:"author,omitempty"`
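
The switch from time.Time to *time.Time is what makes omitempty effective here: encoding/json never treats a zero struct value as empty, so only a nil pointer actually drops the field. A self-contained illustration:

package main

import (
	"encoding/json"
	"fmt"
	"time"
)

type byValue struct {
	Created time.Time `json:"created,omitempty"`
}

type byPointer struct {
	Created *time.Time `json:"created,omitempty"`
}

func main() {
	v, _ := json.Marshal(byValue{})
	p, _ := json.Marshal(byPointer{})
	fmt.Println(string(v)) // {"created":"0001-01-01T00:00:00Z"}; the zero time is still emitted
	fmt.Println(string(p)) // {}; the nil pointer is omitted
}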


@@ -17,7 +17,8 @@ package v1
import digest "github.com/opencontainers/go-digest"
// Descriptor describes the disposition of targeted content.
// This structure provides `application/vnd.oci.descriptor.v1+json` mediatype when marshalled to JSON
// This structure provides `application/vnd.oci.descriptor.v1+json` mediatype
// when marshalled to JSON.
type Descriptor struct {
// MediaType is the media type of the object this schema refers to.
MediaType string `json:"mediaType,omitempty"`
@@ -30,4 +31,34 @@ type Descriptor struct {
// URLs specifies a list of URLs from which this object MAY be downloaded
URLs []string `json:"urls,omitempty"`
// Annotations contains arbitrary metadata relating to the targeted content.
Annotations map[string]string `json:"annotations,omitempty"`
// Platform describes the platform which the image in the manifest runs on.
//
// This should only be used when referring to a manifest.
Platform *Platform `json:"platform,omitempty"`
}
// Platform describes the platform which the image in the manifest runs on.
type Platform struct {
// Architecture field specifies the CPU architecture, for example
// `amd64` or `ppc64`.
Architecture string `json:"architecture"`
// OS specifies the operating system, for example `linux` or `windows`.
OS string `json:"os"`
// OSVersion is an optional field specifying the operating system
// version, for example on Windows `10.0.14393.1066`.
OSVersion string `json:"os.version,omitempty"`
// OSFeatures is an optional field specifying an array of strings,
// each listing a required OS feature (for example on Windows `win32k`).
OSFeatures []string `json:"os.features,omitempty"`
// Variant is an optional field specifying a variant of the CPU, for
// example `v7` to specify ARMv7 when architecture is `arm`.
Variant string `json:"variant,omitempty"`
}


@@ -12,10 +12,18 @@
// See the License for the specific language governing permissions and
// limitations under the License.
package image
package v1
// SpecURL is the URL for the image-spec repository
var SpecURL = "https://github.com/opencontainers/image-spec"
import "github.com/opencontainers/image-spec/specs-go"
// IssuesURL is the URL for the issues of image-tools
var IssuesURL = "https://github.com/opencontainers/image-tools/issues"
// Index references manifests for various platforms.
// This structure provides `application/vnd.oci.image.index.v1+json` mediatype when marshalled to JSON.
type Index struct {
specs.Versioned
// Manifests references platform specific manifests.
Manifests []Descriptor `json:"manifests"`
// Annotations contains arbitrary metadata for the image index.
Annotations map[string]string `json:"annotations,omitempty"`
}


@@ -14,18 +14,15 @@
package v1
import "regexp"
// ImageLayoutVersion is the version of ImageLayout
const ImageLayoutVersion = "1.0.0"
const (
// ImageLayoutFile is the file name of oci image layout file
ImageLayoutFile = "oci-layout"
// ImageLayoutVersion is the version of ImageLayout
ImageLayoutVersion = "1.0.0"
)
// ImageLayout is the structure in the "oci-layout" file, found in the root
// of an OCI Image-layout directory.
type ImageLayout struct {
Version string `json:"imageLayoutVersion"`
}
var (
// RefsRegexp matches requirement of image-layout 'refs' charset.
RefsRegexp = regexp.MustCompile(`^[a-zA-Z0-9-._]+$`)
)


@@ -16,7 +16,7 @@ package v1
import "github.com/opencontainers/image-spec/specs-go"
// Manifest provides `application/vnd.oci.image.manifest.list.v1+json` mediatype structure when marshalled to JSON.
// Manifest provides `application/vnd.oci.image.manifest.v1+json` mediatype structure when marshalled to JSON.
type Manifest struct {
specs.Versioned
@@ -27,6 +27,6 @@ type Manifest struct {
// Layers is an indexed list of layers referenced by the manifest.
Layers []Descriptor `json:"layers"`
// Annotations contains arbitrary metadata for the manifest list.
// Annotations contains arbitrary metadata for the image manifest.
Annotations map[string]string `json:"annotations,omitempty"`
}


@@ -1,63 +0,0 @@
// Copyright 2016 The Linux Foundation
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package v1
import "github.com/opencontainers/image-spec/specs-go"
// Platform describes the platform which the image in the manifest runs on.
type Platform struct {
// Architecture field specifies the CPU architecture, for example
// `amd64` or `ppc64`.
Architecture string `json:"architecture"`
// OS specifies the operating system, for example `linux` or `windows`.
OS string `json:"os"`
// OSVersion is an optional field specifying the operating system
// version, for example `10.0.10586`.
OSVersion string `json:"os.version,omitempty"`
// OSFeatures is an optional field specifying an array of strings,
// each listing a required OS feature (for example on Windows `win32k`).
OSFeatures []string `json:"os.features,omitempty"`
// Variant is an optional field specifying a variant of the CPU, for
// example `ppc64le` to specify a little-endian version of a PowerPC CPU.
Variant string `json:"variant,omitempty"`
// Features is an optional field specifying an array of strings, each
// listing a required CPU feature (for example `sse4` or `aes`).
Features []string `json:"features,omitempty"`
}
// ManifestDescriptor describes a platform specific manifest.
type ManifestDescriptor struct {
Descriptor
// Platform describes the platform which the image in the manifest runs on.
Platform Platform `json:"platform"`
}
// ManifestList references manifests for various platforms.
// This structure provides `application/vnd.oci.image.manifest.list.v1+json` mediatype when marshalled to JSON.
type ManifestList struct {
specs.Versioned
// Manifests references platform specific manifests.
Manifests []ManifestDescriptor `json:"manifests"`
// Annotations contains arbitrary metadata for the manifest list.
Annotations map[string]string `json:"annotations,omitempty"`
}


@@ -18,11 +18,14 @@ const (
// MediaTypeDescriptor specifies the media type for a content descriptor.
MediaTypeDescriptor = "application/vnd.oci.descriptor.v1+json"
// MediaTypeLayoutHeader specifies the media type for the oci-layout.
MediaTypeLayoutHeader = "application/vnd.oci.layout.header.v1+json"
// MediaTypeImageManifest specifies the media type for an image manifest.
MediaTypeImageManifest = "application/vnd.oci.image.manifest.v1+json"
// MediaTypeImageManifestList specifies the media type for an image manifest list.
MediaTypeImageManifestList = "application/vnd.oci.image.manifest.list.v1+json"
// MediaTypeImageIndex specifies the media type for an image index.
MediaTypeImageIndex = "application/vnd.oci.image.index.v1+json"
// MediaTypeImageLayer is the media type used for layers referenced by the manifest.
MediaTypeImageLayer = "application/vnd.oci.image.layer.v1.tar"

View File

@@ -25,7 +25,7 @@ const (
VersionPatch = 0
// VersionDev indicates development branch. Releases will be empty string.
VersionDev = "-rc4"
VersionDev = "-rc6-dev"
)
// Version is the specification version that the package types support.
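The `Version` value itself is assembled from these constants; roughly as in the self-contained sketch below, with the major/minor values assumed since only `VersionPatch` and `VersionDev` appear in this hunk:

```
package specs

import "fmt"

const (
	VersionMajor = 1 // assumed for this sketch; not shown in the hunk
	VersionMinor = 0 // assumed for this sketch; not shown in the hunk
	VersionPatch = 0
	VersionDev   = "-rc6-dev"
)

// Version is the specification version that the package types support,
// e.g. "1.0.0-rc6-dev" with the values above.
var Version = fmt.Sprintf("%d.%d.%d%s", VersionMajor, VersionMinor, VersionPatch, VersionDev)
```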

View File

@@ -1,201 +0,0 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "{}"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright {yyyy} {name of copyright owner}
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

View File

@@ -1,177 +0,0 @@
# image-tools [![Build Status](https://travis-ci.org/opencontainers/image-tools.svg?branch=master)](https://travis-ci.org/opencontainers/image-tools)
`image-tools` is a collection of tools for working with the [OCI image format specification](https://github.com/opencontainers/image-spec).
## Install
It is recommended to use `go get` to download the individual command-line tools.
```
$ go get -d github.com/opencontainers/image-tools/cmd/oci-unpack
$ cd $GOPATH/src/github.com/opencontainers/image-tools/
$ make all
$ sudo make install
```
## Uninstall
```
$ sudo make uninstall
```
## Example
### Obtaining an image
The following examples assume you have an [image-layout](https://github.com/opencontainers/image-spec/blob/v1.0.0-rc2/image-layout.md) tar archive at `busybox-oci`.
One way to acquire that image is with [skopeo](https://github.com/projectatomic/skopeo#installing):
```
$ skopeo copy docker://busybox oci:busybox-oci
```
### oci-create-runtime-bundle
More information about `oci-create-runtime-bundle` can be found in its [man page](./cmd/oci-create-runtime-bundle/oci-create-runtime-bundle.1.md).
```
$ mkdir busybox-bundle
$ oci-create-runtime-bundle --ref latest busybox-oci busybox-bundle
$ cd busybox-bundle && sudo runc run busybox
```
### oci-image-validate
More information about `oci-image-validate` can be found in its [man page](./cmd/oci-image-validate/oci-image-validate.1.md).
```
$ oci-image-validate --type imageLayout --ref latest busybox-oci
busybox-oci: OK
```
### oci-unpack
More information about `oci-unpack` can be found in its [man page](./cmd/oci-unpack/oci-unpack.1.md).
```
$ mkdir busybox-bundle
$ oci-unpack --ref latest busybox-oci busybox-bundle
$ tree busybox-bundle
busybox-bundle
├── bin
│   ├── [
│   ├── [[
│   ├── acpid
│   ├── addgroup
│   ├── add-shell
[...]
```
# Contributing
Development happens on GitHub. Issues are used for bugs and actionable items; longer discussions can happen on the [mailing list](#mailing-list).
The code is licensed under the Apache 2.0 license found in the `LICENSE` file of this repository.
## Code of Conduct
Participation in the OpenContainers community is governed by [OpenContainer's Code of Conduct](https://github.com/opencontainers/tob/blob/d2f9d68c1332870e40693fe077d311e0742bc73d/code-of-conduct.md).
## Discuss your design
The project welcomes submissions, but please let everyone know what you are working on.
Before undertaking a nontrivial change to this repository, send mail to the [mailing list](#mailing-list) to discuss what you plan to do.
This gives everyone a chance to validate the design, helps prevent duplication of effort, and ensures that the idea fits.
It also guarantees that the design is sound before code is written; a GitHub pull-request is not the place for high-level discussions.
Typos and grammatical errors can go straight to a pull-request.
When in doubt, start on the [mailing-list](#mailing-list).
## Weekly Call
The contributors and maintainers of all OCI projects have a weekly meeting on Wednesdays at 2:00 PM (USA Pacific).
Everyone is welcome to participate via [UberConference web][UberConference] or audio-only: 888-587-9088 or 860-706-8529 (no PIN needed).
An initial agenda will be posted to the [mailing list](#mailing-list) earlier in the week, and everyone is welcome to propose additional topics or suggest other agenda alterations there.
Minutes are posted to the [mailing list](#mailing-list) and minutes from past calls are archived to the [wiki](https://github.com/opencontainers/runtime-spec/wiki) for those who are unable to join the call.
## Mailing List
You can subscribe and join the mailing list on [Google Groups](https://groups.google.com/a/opencontainers.org/forum/#!forum/dev).
## IRC
OCI discussion happens on #opencontainers on Freenode ([logs][irc-logs]).
## Git commit
### Sign your work
The sign-off is a simple line at the end of the explanation for the patch, which certifies that you wrote it or otherwise have the right to pass it on as an open-source patch.
The rules are pretty simple: if you can certify the below (from [developercertificate.org](http://developercertificate.org/)):
```
Developer Certificate of Origin
Version 1.1
Copyright (C) 2004, 2006 The Linux Foundation and its contributors.
660 York Street, Suite 102,
San Francisco, CA 94110 USA
Everyone is permitted to copy and distribute verbatim copies of this
license document, but changing it is not allowed.
Developer's Certificate of Origin 1.1
By making a contribution to this project, I certify that:
(a) The contribution was created in whole or in part by me and I
have the right to submit it under the open source license
indicated in the file; or
(b) The contribution is based upon previous work that, to the best
of my knowledge, is covered under an appropriate open source
license and I have the right under that license to submit that
work with modifications, whether created in whole or in part
by me, under the same open source license (unless I am
permitted to submit under a different license), as indicated
in the file; or
(c) The contribution was provided directly to me by some other
person who certified (a), (b) or (c) and I have not modified
it.
(d) I understand and agree that this project and the contribution
are public and that a record of the contribution (including all
personal information I submit with it, including my sign-off) is
maintained indefinitely and may be redistributed consistent with
this project or the open source license(s) involved.
```
then you just add a line to every git commit message:
Signed-off-by: Joe Smith <joe@gmail.com>
using your real name (sorry, no pseudonyms or anonymous contributions).
You can add the sign off when creating the git commit via `git commit -s`.
### Commit Style
Simple house-keeping for clean git history.
Read more on [How to Write a Git Commit Message](http://chris.beams.io/posts/git-commit/) or the Discussion section of [`git-commit(1)`](http://git-scm.com/docs/git-commit); a sample message following these rules appears after the list below.
1. Separate the subject from body with a blank line
2. Limit the subject line to 50 characters
3. Capitalize the subject line
4. Do not end the subject line with a period
5. Use the imperative mood in the subject line
6. Wrap the body at 72 characters
7. Use the body to explain what and why vs. how
* If there was important/useful/essential conversation or information, copy or include a reference
8. When possible, use one keyword to scope the change in the subject (e.g. "README: ...", "runtime: ...")
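Putting these rules together, a hypothetical commit message (invented for illustration, not from this repository) might read:

```
unpack: Validate layer digests while extracting

Layers were extracted without checking their digests, so a corrupted
blob only surfaced later as a broken rootfs. Hash each layer as it is
unpacked and fail fast with the offending digest.

Signed-off-by: Joe Smith <joe@gmail.com>
```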
[UberConference]: https://www.uberconference.com/opencontainers
[irc-logs]: http://ircbot.wl.linuxfoundation.org/eavesdrop/%23opencontainers/

Some files were not shown because too many files have changed in this diff.