Compare commits

...

52 Commits

Author SHA1 Message Date
Antonio Murdaca
5d24b67f5e bump to v0.1.22
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2017-06-21 10:22:43 +02:00
Antonio Murdaca
3b9ee4f322 Merge pull request #366 from rhatdan/transports
Give more useful help when explaining usage
2017-06-20 17:02:22 +02:00
Daniel J Walsh
0ca26cce94 Give more useful help when explaining usage
Also specify container-storage as a valid transport

Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
2017-06-20 14:28:02 +00:00
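The help-text change works by listing the registered transports at startup. A minimal sketch of the mechanism (not the full command definition), assuming the `ListNames` helper vendored by the commits below:

```go
package main

import (
	"fmt"
	"strings"

	"github.com/containers/image/transports"
	// Imported only for its side effect of registering all transports.
	_ "github.com/containers/image/transports/alltransports"
)

func main() {
	// The same expression the CLI descriptions use to enumerate the
	// supported transports, e.g. "containers-storage, dir, docker, ...".
	fmt.Println(strings.Join(transports.ListNames(), ", "))
}
```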
Antonio Murdaca
29528d00ec Merge pull request #367 from runcom/vendor-c/image-list-names
vendor c/image for ListNames in transports pkg
2017-06-17 00:26:40 +02:00
Antonio Murdaca
af34f50b8c bump ostree-go
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2017-06-17 00:08:54 +02:00
Antonio Murdaca
e7b32b1e6a vendor c/image for ListNames in transports pkg
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2017-06-16 23:39:06 +02:00
Antonio Murdaca
4049bf2801 Merge pull request #365 from rhatdan/docker
Remove docker references wherever possible
2017-06-16 19:15:33 +02:00
Daniel J Walsh
ad33537769 Remove docker references wherever possible
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
2017-06-16 10:51:42 -04:00
Antonio Murdaca
d5e34c1b5e Merge pull request #362 from rhatdan/master
Vendor in ostree fixes
2017-06-16 11:58:52 +02:00
Dan Walsh
5e586f3781 Vendor in ostree fixes
This will fix the compiler issues.

Signed-off-by: Dan Walsh <dwalsh@redhat.com>
2017-06-16 05:42:47 -04:00
Antonio Murdaca
455177e749 Merge pull request #358 from runcom/bump-release-0.1.21
Bump release 0.1.21
2017-06-15 17:16:46 +02:00
Antonio Murdaca
b85b7319aa bump back to v0.1.22
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2017-06-15 16:58:43 +02:00
Antonio Murdaca
0b73154601 bump to v0.1.21
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2017-06-15 16:58:24 +02:00
Miloslav Trmač
da48399f89 Merge pull request #357 from runcom/up-cstorage
*: update c/storage
2017-06-15 15:12:33 +02:00
Antonio Murdaca
08504d913c *: update c/storage
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2017-06-15 14:29:56 +02:00
Miloslav Trmač
fe67466701 Merge pull request #351 from giuseppe/add-ostree-dep
vendor.conf: add ostree-go
2017-06-14 17:18:51 +02:00
Giuseppe Scrivano
47e5d0cd9e vendor.conf: add ostree-go
it is used by containers/image for pulling images to the OSTree storage.

Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
2017-06-14 15:35:34 +02:00
Antonio Murdaca
f0d830a8ca Merge pull request #355 from runcom/fail-on-diff-os
fail early when image os doesn't match host os
2017-06-06 13:55:26 +02:00
Antonio Murdaca
6d3f523c57 fail early when image os doesn't match host os
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2017-06-06 13:40:16 +02:00
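The integration test added alongside this change (visible later in this diff) expects the error `image operating system "windows" cannot be used on "linux"`. A minimal sketch of the idea, with `imageOS` as a hypothetical stand-in for the OS field parsed from the image's configuration by containers/image:

```go
package main

import (
	"fmt"
	"runtime"
)

// checkImageOS fails early when the image's declared OS differs from
// the host OS; imageOS is a hypothetical stand-in for the value read
// from the image configuration.
func checkImageOS(imageOS string) error {
	if imageOS != runtime.GOOS {
		return fmt.Errorf("image operating system %q cannot be used on %q", imageOS, runtime.GOOS)
	}
	return nil
}

func main() {
	// On a linux host this reports the same error the test expects.
	fmt.Println(checkImageOS("windows"))
}
```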
Miloslav Trmač
87a36f9d5b Merge pull request #343 from mtrmac/readme-updates
README.md updates
2017-06-05 17:50:15 +02:00
Miloslav Trmač
9d12d72fb7 Improve documentation on what to do with containers/image failures in test-skopeo
2017-06-05 15:16:43 +02:00
Miloslav Trmač
fc88117065 Drop finished work from TODO
- We now have the docker-archive: transport
- Integration tests with built registries also exist
2017-06-05 15:16:43 +02:00
Miloslav Trmač
0debda0bbb Start all command examples with $
That has mostly been the case, with one outlier.  Fix it.
2017-06-05 15:16:43 +02:00
Miloslav Trmač
49f10736d1 Restructure the “Building without a container” version
Consolidate the Fedora and macOS instructions to prevent duplication,
and to suggest using $GOPATH for both.

Start with installing dependencies.
2017-06-05 15:15:04 +02:00
Miloslav Trmač
3d0d2ea6bb Document that there is a choice in using containers, and what the trade-off is
2017-06-05 15:15:02 +02:00
Miloslav Trmač
a2499d3451 Create “Building {without a container,in a container}” subsections
To make it clearer that the two are alternatives.

Document that a docker command is needed for the in-container build.

Also move the “checkout in $GOPATH” warning into the “without a
container” section, where it belongs.
2017-06-05 15:13:25 +02:00
Miloslav Trmač
7db0aab330 Move documentation build instructions to the end, with a separate header
We want to start with the Go 1.5 dependency and build/checkout
instructions.

Also create a separate subsection, to match the future “Building
in/without a container” subsections
2017-06-05 14:42:19 +02:00
Miloslav Trmač
150eb5bf18 Add Fedora instructions for installing go-md2man
It would be nice to have macOS instructions as well.
2017-06-05 14:40:58 +02:00
Miloslav Trmač
bcc0de69d4 Merge pull request #353 from surajssd/add-fedora-dependency
docs(README): add build dependencies for fedora
2017-06-05 14:39:58 +02:00
Suraj Deshmukh
cab89b9b9c docs(README): add build dependencies for fedora
Two more packages are needed to locally build skopeo
on fedora viz. btrfs-progs-devel & device-mapper-devel,
so added them in README.

Signed-off-by: Suraj Deshmukh <surajssd009005@gmail.com>
2017-06-04 16:03:23 +05:30
Miloslav Trmač
5b95a21401 Merge pull request #350 from mhrivnak/patch-1
Fixes a typo in the Name field
2017-05-31 20:49:28 +02:00
Michael Hrivnak
78c83dbcff Fixes a typo in the Name field
2017-05-31 14:22:22 -04:00
Miloslav Trmač
98ced5196c Merge pull request #347 from mtrmac/docker-certs.d
Support /etc/docker/certs.d
2017-05-30 19:40:36 +02:00
Miloslav Trmač
63272a10d7 Vendor after merging mtrmac/image:docker-certs.d
2017-05-30 18:26:43 +02:00
Antonio Murdaca
07c798ff82 Merge pull request #346 from jingqiuELE/master
Always combine RUN apt-get update with apt-get install in the same RUN statement.
2017-05-25 14:54:31 +02:00
Jing Qiu
750d72873d Always combine RUN apt-get update with apt-get install in the same RUN
statement.

From the [docs](https://docs.docker.com/engine/userguide/eng-image/dockerfile_best-practices/#build-cache) in March 2017:

Always combine RUN apt-get update with apt-get install in the same RUN statement, for example

RUN apt-get update && apt-get install -y package-bar

Using apt-get update alone in a RUN statement causes caching issues and subsequent apt-get install instructions fail.

Signed-off-by: Jing Qiu <aqiu0720@gmail.com>
2017-05-25 11:38:35 +08:00
Antonio Murdaca
81dddac7d6 Merge pull request #345 from runcom/image-spec-rc6
update image-spec to v1.0.0-rc6
2017-05-24 16:55:18 +02:00
Antonio Murdaca
405b912f7e update image-spec to v1.0.0-rc6
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2017-05-24 16:35:57 +02:00
Miloslav Trmač
e1884aa8a8 Merge pull request #344 from mtrmac/storage-rebase
Pull in rhatdan/image:master
2017-05-17 22:20:50 +02:00
Miloslav Trmač
ffb01385dd Vendor after merging https://github.com/containers/image/pull/275
2017-05-17 17:12:23 +02:00
Miloslav Trmač
c5adc4b580 Merge pull request #331 from mtrmac/storage-rebase
Re-vendor, primarily for https://github.com/containers/storage/pull/11
2017-05-12 18:50:40 +02:00
Miloslav Trmač
69b9106646 Re-vendor, primarily for https://github.com/containers/storage/pull/11
containers/storage got new dependencies, so we will need to re-vendor
eventually anyway, and having this separate from other major work is
cleaner.

But the primary goal of this commit is to see whether it makes skopeo
buildable on OS X.
2017-05-11 13:07:14 +02:00
Antonio Murdaca
565688a963 Merge pull request #341 from projectatomic/jzb-patch-1
Update README.md
2017-05-10 21:21:00 +02:00
Joe Brockmeier
5205f3646d Update README.md
Just clarifying that Skopeo is available in later versions of Fedora as well.
2017-05-10 13:53:36 -04:00
Miloslav Trmač
3c57a0f084 Merge pull request #340 from mtrmac/setup-test
Simplify infrastructure setup, and make registry timeouts less strict
2017-05-10 15:05:52 +02:00
Miloslav Trmač
ed2088a4e5 Increase the time we wait for a registry from 0.5 to 5 seconds
We are not testing registry start-up performance, and killing the test
suite just because Travis is a bit busy doesn’t help; we’re much better
off with a test run which gives the registry a bit more time.
2017-05-10 14:46:28 +02:00
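The change itself appears in the test-helper hunk below (`for i := 0; i != 5` becomes `for i := 0; i != 50`). A self-contained sketch of the loop, assuming the 100ms poll interval implied by the 0.5s-to-5s description; `ping` is a stand-in for the helper's `reg.Ping`:

```go
package main

import (
	"errors"
	"time"
)

// ping is a stand-in for the test registry's readiness probe.
func ping() error { return errors.New("registry not up yet") }

// waitForRegistry polls up to 50 times, i.e. roughly 5 seconds at a
// 100ms interval, instead of the previous 5 attempts (~0.5 seconds).
func waitForRegistry() error {
	var err error
	for i := 0; i != 50; i++ {
		if err = ping(); err == nil {
			return nil
		}
		time.Sleep(100 * time.Millisecond)
	}
	return err
}

func main() {
	_ = waitForRegistry() // gives a busy CI host time to start the registry
}
```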
Miloslav Trmač
8b36001c0e Do not build docker/registry and remove remaining helpers
2017-05-10 14:46:28 +02:00
Miloslav Trmač
cd300805d1 Remove registry instances which we don’t use at all
2017-05-10 14:46:12 +02:00
Miloslav Trmač
8d3d0404fe Clean up SigningSuite test initialization
Move "skip if signing is not available" into the test, there may be
tests which only need verification.

Move GNUPGHOME creation from SetUpTest to SetUpSuite, sharing a single
key is fine.  We don’t change the GNUPGHOME contents at test runtime.
2017-05-10 14:45:10 +02:00
Miloslav Trmač
9985f12cd4 Move skopeoBinary check from *Suite.SetUpTest to SetUpSuite
The results are not going to vary across individual tests, so let’s only
check once.
2017-05-10 14:44:20 +02:00
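Both test-suite cleanups above rely on gocheck's lifecycle: `SetUpSuite` runs once per suite, `SetUpTest` before every test. A sketch of the resulting pattern as a `_test.go` file, with `skopeoBinary` assumed to be the suite's existing constant:

```go
package main

import (
	"io/ioutil"
	"os/exec"
	"testing"

	"gopkg.in/check.v1"
)

const skopeoBinary = "skopeo" // assumption: the integration suite's constant

// Test hooks gocheck into "go test", as the suite already does.
func Test(t *testing.T) { check.TestingT(t) }

type SigningSuite struct {
	gpgHome string
}

var _ = check.Suite(&SigningSuite{})

// SetUpSuite runs once per suite: one-time work such as locating the
// binary and creating a shared GNUPGHOME belongs here rather than in
// SetUpTest, since its outcome cannot vary between individual tests.
func (s *SigningSuite) SetUpSuite(c *check.C) {
	_, err := exec.LookPath(skopeoBinary)
	c.Assert(err, check.IsNil)
	s.gpgHome, err = ioutil.TempDir("", "skopeo-gpg")
	c.Assert(err, check.IsNil)
}
```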
Antonio Murdaca
5a42657cdb Merge pull request #339 from runcom/new-release-v0.1.20
New release v0.1.20
2017-05-10 13:01:27 +02:00
Antonio Murdaca
43d6128036 bump back to v0.1.21-dev
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2017-05-10 12:37:37 +02:00
377 changed files with 137103 additions and 3619 deletions


@@ -6,20 +6,13 @@ RUN dnf -y update && dnf install -y make git golang golang-github-cpuguy83-go-md
device-mapper-devel \
# gpgme bindings deps
libassuan-devel gpgme-devel \
gnupg \
# registry v1 deps
xz-devel \
python-devel \
python-pip \
swig \
redhat-rpm-config \
openssl-devel \
patch
ostree-devel \
gnupg
# Install three versions of the registry. The first is an older version that
# Install two versions of the registry. The first is an older version that
# only supports schema1 manifests. The second is a newer version that supports
# both. This allows integration-cli tests to cover push/pull with both schema1
# and schema2 manifests. Install registry v1 also.
# and schema2 manifests.
ENV REGISTRY_COMMIT_SCHEMA1 ec87e9b6971d831f0eff752ddb54fb64693e51cd
ENV REGISTRY_COMMIT 47a064d4195a9b56133891bbb13620c3ac83a827
RUN set -x \
@@ -31,17 +24,7 @@ RUN set -x \
&& (cd "$GOPATH/src/github.com/docker/distribution" && git checkout -q "$REGISTRY_COMMIT_SCHEMA1") \
&& GOPATH="$GOPATH/src/github.com/docker/distribution/Godeps/_workspace:$GOPATH" \
go build -o /usr/local/bin/registry-v2-schema1 github.com/docker/distribution/cmd/registry \
&& rm -rf "$GOPATH" \
&& export DRV1="$(mktemp -d)" \
&& git clone https://github.com/docker/docker-registry.git "$DRV1" \
# no need for setuptools since we have a version conflict with fedora
&& sed -i.bak s/setuptools==5.8//g "$DRV1/requirements/main.txt" \
&& sed -i.bak s/setuptools==5.8//g "$DRV1/depends/docker-registry-core/requirements/main.txt" \
&& pip install "$DRV1/depends/docker-registry-core" \
&& pip install file://"$DRV1#egg=docker-registry[bugsnag,newrelic,cors]" \
&& patch $(python -c 'import boto; import os; print os.path.dirname(boto.__file__)')/connection.py \
< "$DRV1/contrib/boto_header_patch.diff" \
&& dnf -y update && dnf install -y m2crypto
&& rm -rf "$GOPATH"
RUN set -x \
&& yum install -y which git tar wget hostname util-linux bsdtar socat ethtool device-mapper iptables tree findutils nmap-ncat e2fsprogs xfsprogs lsof docker iproute \


@@ -1,5 +1,12 @@
FROM ubuntu:16.04
RUN apt-get update
RUN apt-get install -y golang btrfs-tools git-core libdevmapper-dev libgpgme11-dev go-md2man
RUN apt-get update && apt-get install -y \
golang \
btrfs-tools \
git-core \
libdevmapper-dev \
libgpgme11-dev \
go-md2man
ENV GOPATH=/
WORKDIR /src/github.com/projectatomic/skopeo


@@ -103,43 +103,64 @@ you'll get an error. You can fix this by either logging in (via `docker login`)
Building
-
To build the manual you will need go-md2man.
To build the `skopeo` binary you need at least Go 1.5 because it uses the latest `GO15VENDOREXPERIMENT` flag.
There are two ways to build skopeo: in a container, or locally without a container. Choose the one which better matches your needs and environment.
### Building without a container
Building without a container requires a bit more manual work and setup in your environment, but it is more flexible:
- It should work in more environments (e.g. for native macOS builds)
- It does not require root privileges (after dependencies are installed)
- It is faster, therefore more convenient for developing `skopeo`.
Install the necessary dependencies:
```sh
$ sudo apt-get install go-md2man
Fedora$ sudo dnf install gpgme-devel libassuan-devel btrfs-progs-devel device-mapper-devel
macOS$ brew install gpgme
```
To build the `skopeo` binary you need at least Go 1.5 because it uses the latest `GO15VENDOREXPERIMENT` flag. Also, make sure to clone the repository in your `GOPATH` - otherwise compilation fails.
Make sure to clone this repository in your `GOPATH` - otherwise compilation fails.
```sh
$ git clone https://github.com/projectatomic/skopeo $GOPATH/src/github.com/projectatomic/skopeo
$ cd $GOPATH/src/github.com/projectatomic/skopeo && make all
$ cd $GOPATH/src/github.com/projectatomic/skopeo && make binary-local
```
To build locally on OSX:
### Building in a container
Building in a container is simpler, but more restrictive:
- It requires the `docker` command and the ability to run Linux containers
- The created executable is a Linux executable, and depends on dynamic libraries which may be available only in a container of a similar Linux distribution.
```sh
$ brew install gpgme
$ make binary-local
$ make binary # Or (make all) to also build documentation, see below.
```
You may need to install additional development packages: `gpgme-devel` and `libassuan-devel`
### Building documentation
To build the manual you will need go-md2man.
```sh
$ dnf install gpgme-devel libassuan-devel
Debian$ sudo apt-get install go-md2man
Fedora$ sudo dnf install go-md2man
```
Then
```sh
$ make docs
```
Installing
-
If you built from source:
```sh
$ sudo make install
```
`skopeo` is also available from Fedora 23:
`skopeo` is also available from Fedora 23 (and later):
```sh
sudo dnf install skopeo
$ sudo dnf install skopeo
```
TODO
-
- list all images on registry?
- registry v2 search?
- support output to docker load tar(s)
- show repo tags via flag or when reference isn't tagged or digested
- add tests (integration with deployed registries in container - Docker-like)
- support rkt/appc image spec
NOT TODO
@@ -163,10 +184,20 @@ In order to update an existing dependency:
- update the relevant dependency line in `vendor.conf`
- run `vndr github.com/pkg/errors`
In order to test out new PRs from [containers/image](https://github.com/containers/image) to not break `skopeo`:
When new PRs for [containers/image](https://github.com/containers/image) break `skopeo` (i.e. `containers/image` tests fail in `make test-skopeo`):
- create a new branch in your `skopeo` checkout and switch to it
- update `vendor.conf`. Find out the `containers/image` dependency; update it to vendor from your own branch and your own repository fork (e.g. `github.com/containers/image my-branch https://github.com/runcom/image`)
- run `vndr github.com/containers/image`
- make any other necessary changes in the skopeo repo (e.g. add other dependencies now required by `containers/image`, or update skopeo for changed `containers/image` API)
- optionally add new integration tests to the skopeo repo
- submit the resulting branch as a skopeo PR, marked “DO NOT MERGE”
- iterate until tests pass and the PR is reviewed
- then the original `containers/image` PR can be merged, disregarding its `make test-skopeo` failure
- as soon as possible after that, in the skopeo PR, restore the `containers/image` line in `vendor.conf` to use `containers/image:master`
- run `vndr github.com/containers/image`
- update the skopeo PR with the result, drop the “DO NOT MERGE” marking
- after tests complete successfully again, merge the skopeo PR
License
-


@@ -4,8 +4,10 @@ import (
"errors"
"fmt"
"os"
"strings"
"github.com/containers/image/copy"
"github.com/containers/image/transports"
"github.com/containers/image/transports/alltransports"
"github.com/containers/image/types"
"github.com/urfave/cli"
@@ -63,8 +65,17 @@ func copyHandler(context *cli.Context) error {
}
var copyCmd = cli.Command{
Name: "copy",
Usage: "Copy an image from one location to another",
Name: "copy",
Usage: "Copy an IMAGE-NAME from one location to another",
Description: fmt.Sprintf(`
Container "IMAGE-NAME" uses a "transport":"details" format.
Supported transports:
%s
See skopeo(1) section "IMAGE NAMES" for the expected format
`, strings.Join(transports.ListNames(), ", ")),
ArgsUsage: "SOURCE-IMAGE DESTINATION-IMAGE",
Action: copyHandler,
// FIXME: Do we need to namespace the GPG aspect?
@@ -94,7 +105,7 @@ var copyCmd = cli.Command{
},
cli.BoolTFlag{
Name: "src-tls-verify",
Usage: "require HTTPS and verify certificates when talking to the docker source registry (defaults to true)",
Usage: "require HTTPS and verify certificates when talking to the container source registry (defaults to true)",
},
cli.StringFlag{
Name: "dest-cert-dir",
@@ -103,7 +114,7 @@ var copyCmd = cli.Command{
},
cli.BoolTFlag{
Name: "dest-tls-verify",
Usage: "require HTTPS and verify certificates when talking to the docker destination registry (defaults to true)",
Usage: "require HTTPS and verify certificates when talking to the container destination registry (defaults to true)",
},
cli.StringFlag{
Name: "dest-ostree-tmp-dir",


@@ -3,7 +3,9 @@ package main
import (
"errors"
"fmt"
"strings"
"github.com/containers/image/transports"
"github.com/containers/image/transports/alltransports"
"github.com/urfave/cli"
)
@@ -29,8 +31,16 @@ func deleteHandler(context *cli.Context) error {
}
var deleteCmd = cli.Command{
Name: "delete",
Usage: "Delete image IMAGE-NAME",
Name: "delete",
Usage: "Delete image IMAGE-NAME",
Description: fmt.Sprintf(`
Delete an "IMAGE_NAME" from a transport
Supported transports:
%s
See skopeo(1) section "IMAGE NAMES" for the expected format
`, strings.Join(transports.ListNames(), ", ")),
ArgsUsage: "IMAGE-NAME",
Action: deleteHandler,
Flags: []cli.Flag{
@@ -46,7 +56,7 @@ var deleteCmd = cli.Command{
},
cli.BoolTFlag{
Name: "tls-verify",
Usage: "require HTTPS and verify certificates when talking to docker registries (defaults to true)",
Usage: "require HTTPS and verify certificates when talking to container registries (defaults to true)",
},
},
}


@@ -9,6 +9,7 @@ import (
"github.com/Sirupsen/logrus"
"github.com/containers/image/docker"
"github.com/containers/image/manifest"
"github.com/containers/image/transports"
"github.com/opencontainers/go-digest"
"github.com/pkg/errors"
"github.com/urfave/cli"
@@ -29,8 +30,16 @@ type inspectOutput struct {
}
var inspectCmd = cli.Command{
Name: "inspect",
Usage: "Inspect image IMAGE-NAME",
Name: "inspect",
Usage: "Inspect image IMAGE-NAME",
Description: fmt.Sprintf(`
Return low-level information about "IMAGE-NAME" in a registry/transport
Supported transports:
%s
See skopeo(1) section "IMAGE NAMES" for the expected format
`, strings.Join(transports.ListNames(), ", ")),
ArgsUsage: "IMAGE-NAME",
Flags: []cli.Flag{
cli.StringFlag{
@@ -40,7 +49,7 @@ var inspectCmd = cli.Command{
},
cli.BoolTFlag{
Name: "tls-verify",
Usage: "require HTTPS and verify certificates when talking to docker registries (defaults to true)",
Usage: "require HTTPS and verify certificates when talking to container registries (defaults to true)",
},
cli.BoolFlag{
Name: "raw",


@@ -33,7 +33,7 @@ func createApp() *cli.App {
},
cli.BoolTFlag{
Name: "tls-verify",
Usage: "require HTTPS and verify certificates when talking to docker registries (defaults to true)",
Usage: "require HTTPS and verify certificates when talking to container registries (defaults to true)",
Hidden: true,
},
cli.StringFlag{
@@ -48,7 +48,7 @@ func createApp() *cli.App {
cli.StringFlag{
Name: "registries.d",
Value: "",
Usage: "use registry configuration files in `DIR` (e.g. for docker signature storage)",
Usage: "use registry configuration files in `DIR` (e.g. for container signature storage)",
},
}
app.Before = func(c *cli.Context) error {


@@ -2,36 +2,39 @@
% Jhon Honce
% August 2016
# NAME
skopeo -- Various operations with container images images and container image registries
skopeo -- Various operations with container images and container image registries
# SYNOPSIS
**skopeo** [_global options_] _command_ [_command options_]
# DESCRIPTION
`skopeo` is a command line utility providing various operations with container images and container image registries. For example, it is able to inspect a repository on a Docker registry and fetch image. It fetches the repository's manifest and it is able to show you a `docker inspect`-like json output about a whole repository or a tag. This tool, in contrast to `docker inspect`, helps you gather useful information about a repository or a tag without requiring you to run `docker pull` - e.g. - which tags are available for the given repository? which labels the image has?
`skopeo` is a command line utility providing various operations with container images and container image registries. For example, it is able to inspect a repository on a container registry and fetch images. It fetches the repository's manifest and it is able to show you a `docker inspect`-like json output about a whole repository or a tag. This tool, in contrast to `docker inspect`, helps you gather useful information about a repository or a tag without requiring you to run `docker pull` - e.g. - which tags are available for the given repository? which labels does the image have?
It also allows you to copy container images between various registries, possibly converting them as necessary, and to sign and verify images.
## IMAGE NAMES
Most commands refer to container images, using a _transport_`:`_details_ format. The following formats are supported:
**atomic:**_namespace_**/**_stream_**:**_tag_
An image in the current project of the current default Atomic
Registry. The current project and Atomic Registry instance are by
default read from `$HOME/.kube/config`, which is set e.g. using
`(oc login)`.
**atomic:**_hostname_**/**_namespace_**/**_stream_**:**_tag_
An image served by an OpenShift(Atomic) Registry server. The current OpenShift project and OpenShift Registry instance are by default read from `$HOME/.kube/config`, which is set e.g. using `(oc login)`.
**containers-storage://**_docker-reference_
An image located in a local containers/storage image store. Location and image store are specified in /etc/containers/storage.conf
**dir:**_path_
An existing local directory _path_ storing the manifest, layer
tarballs and signatures as individual files. This is a
non-standardized format, primarily useful for debugging or
noninvasive container inspection.
An existing local directory _path_ storing the manifest, layer tarballs and signatures as individual files. This is a non-standardized format, primarily useful for debugging or noninvasive container inspection.
**docker://**_docker-reference_
An image in a registry implementing the "Docker Registry HTTP API V2".
By default, uses the authorization state in `$HOME/.docker/config.json`,
which is set e.g. using `(docker login)`.
An image in a registry implementing the "Docker Registry HTTP API V2". By default, uses the authorization state in `$HOME/.docker/config.json`, which is set e.g. using `(docker login)`.
**docker-archive:**_path_[**:**_docker-reference_]
An image stored in a `docker save`-formatted file. _docker-reference_ is only used when creating such a file, and it must not contain a digest.
**docker-daemon:**_docker-reference_
An image _docker-reference_ stored in the docker daemon internal storage. _docker-reference_ must contain either a tag or a digest. Alternatively, when reading images, the format can also be docker-daemon:algo:digest (an image ID).
**oci:**_path_**:**_tag_
An image _tag_ in a directory compliant with "Open Container Image
Layout Specification" at _path_.
An image _tag_ in a directory compliant with "Open Container Image Layout Specification" at _path_.
**ostree:**_image_[**@**_/absolute/repo/path_]
An image in a local OSTree repository. _/absolute/repo/path_ defaults to _/ostree/repo_.
# OPTIONS
@@ -41,7 +44,7 @@ Most commands refer to container images, using a _transport_`:`_details_ format.
**--insecure-policy** Adopt an insecure, permissive policy that allows anything. This obviates the need for a policy file.
**--registries.d** _dir_ use registry configuration files in _dir_ (e.g. for docker signature storage), overriding the default path.
**--registries.d** _dir_ use registry configuration files in _dir_ (e.g. for container signature storage), overriding the default path.
**--help**|**-h** Show help
@@ -70,20 +73,20 @@ Uses the system's trust policy to validate images, rejects images not trusted by
**--src-cert-dir** _path_ Use certificates at _path_ (*.crt, *.cert, *.key) to connect to the source registry
**--src-tls-verify** _bool-value_ Require HTTPS and verify certificates when talking to docker source registry (defaults to true)
**--src-tls-verify** _bool-value_ Require HTTPS and verify certificates when talking to container source registry (defaults to true)
**--dest-cert-dir** _path_ Use certificates at _path_ (*.crt, *.cert, *.key) to connect to the destination registry
**--dest-ostree-tmp-dir** _path_ Directory to use for OSTree temporary files.
**--dest-tls-verify** _bool-value_ Require HTTPS and verify certificates when talking to docker destination registry (defaults to true)
**--dest-tls-verify** _bool-value_ Require HTTPS and verify certificates when talking to container destination registry (defaults to true)
Existing signatures, if any, are preserved as well.
## skopeo delete
**skopeo delete** _image-name_
Mark _image-name_ for deletion. To release the allocated disk space, you need to execute the docker registry garabage collector. E.g.,
Mark _image-name_ for deletion. To release the allocated disk space, you need to execute the container registry garbage collector. E.g.,
```sh
$ docker exec -it registry bin/registry garbage-collect /etc/docker/registry/config.yml
@@ -93,7 +96,7 @@ $ docker exec -it registry bin/registry garbage-collect /etc/docker/registry/con
**--cert-dir** _path_ Use certificates at _path_ (*.crt, *.cert, *.key) to connect to the registry
**--tls-verify** _bool-value_ Require HTTPS and verify certificates when talking to docker registries (defaults to true)
**--tls-verify** _bool-value_ Require HTTPS and verify certificates when talking to container registries (defaults to true)
Additionally, the registry must allow deletions by setting `REGISTRY_STORAGE_DELETE_ENABLED=true` for the registry daemon.
@@ -110,7 +113,7 @@ Return low-level information about _image-name_ in a registry
**--cert-dir** _path_ Use certificates at _path_ (*.crt, *.cert, *.key) to connect to the registry
**--tls-verify** _bool-value_ Require HTTPS and verify certificates when talking to docker registries (defaults to true)
**--tls-verify** _bool-value_ Require HTTPS and verify certificates when talking to container registries (defaults to true)
## skopeo manifest-digest
**skopeo manifest-digest** _manifest-file_
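The _transport_`:`_details_ names documented in the IMAGE NAMES hunk above are parsed for skopeo by containers/image. A minimal sketch using the `alltransports` parser this release vendors; the image name is a hypothetical example:

```go
package main

import (
	"fmt"
	"os"

	"github.com/containers/image/transports/alltransports"
)

func main() {
	// Any documented transport prefix works here; "docker://" selects
	// the Docker Registry HTTP API V2 transport described above.
	ref, err := alltransports.ParseImageName("docker://fedora:latest")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println(ref.Transport().Name()) // prints "docker"
}
```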


@@ -12,9 +12,6 @@ import (
const (
privateRegistryURL0 = "127.0.0.1:5000"
privateRegistryURL1 = "127.0.0.1:5001"
privateRegistryURL2 = "127.0.0.1:5002"
privateRegistryURL3 = "127.0.0.1:5003"
privateRegistryURL4 = "127.0.0.1:5004"
)
func Test(t *testing.T) {
@@ -26,15 +23,13 @@ func init() {
}
type SkopeoSuite struct {
regV1 *testRegistryV1
regV2 *testRegistryV2
regV2Shema1 *testRegistryV2
regV1WithAuth *testRegistryV1 // does v1 support auth?
regV2WithAuth *testRegistryV2
}
func (s *SkopeoSuite) SetUpSuite(c *check.C) {
_, err := exec.LookPath(skopeoBinary)
c.Assert(err, check.IsNil)
}
func (s *SkopeoSuite) TearDownSuite(c *check.C) {
@@ -42,24 +37,14 @@ func (s *SkopeoSuite) TearDownSuite(c *check.C) {
}
func (s *SkopeoSuite) SetUpTest(c *check.C) {
_, err := exec.LookPath(skopeoBinary)
c.Assert(err, check.IsNil)
s.regV1 = setupRegistryV1At(c, privateRegistryURL0, false) // TODO:(runcom)
s.regV2 = setupRegistryV2At(c, privateRegistryURL1, false, false)
s.regV2Shema1 = setupRegistryV2At(c, privateRegistryURL2, false, true)
s.regV1WithAuth = setupRegistryV1At(c, privateRegistryURL3, true) // not used
s.regV2WithAuth = setupRegistryV2At(c, privateRegistryURL4, true, false)
s.regV2 = setupRegistryV2At(c, privateRegistryURL0, false, false)
s.regV2WithAuth = setupRegistryV2At(c, privateRegistryURL1, true, false)
}
func (s *SkopeoSuite) TearDownTest(c *check.C) {
// not checking V1 registries now...
if s.regV2 != nil {
s.regV2.Close()
}
if s.regV2Shema1 != nil {
s.regV2Shema1.Close()
}
if s.regV2WithAuth != nil {
//cmd := exec.Command("docker", "logout", s.regV2WithAuth)
//c.Assert(cmd.Run(), check.IsNil)


@@ -91,6 +91,11 @@ func (s *CopySuite) TestCopyFailsWithManifestList(c *check.C) {
assertSkopeoFails(c, ".*can not copy docker://estesp/busybox:latest: manifest contains multiple images.*", "copy", "docker://estesp/busybox:latest", "dir:somedir")
}
func (s *CopySuite) TestCopyFailsWhenImageOSDoesntMatchRuntimeOS(c *check.C) {
c.Skip("can't run this on Travis")
assertSkopeoFails(c, `.*image operating system "windows" cannot be used on "linux".*`, "copy", "docker://microsoft/windowsservercore", "containers-storage:test")
}
func (s *CopySuite) TestCopySimpleAtomicRegistry(c *check.C) {
dir1, err := ioutil.TempDir("", "copy-1")
c.Assert(err, check.IsNil)


@@ -13,23 +13,10 @@ import (
)
const (
binaryV1 = "docker-registry"
binaryV2 = "registry-v2"
binaryV2Schema1 = "registry-v2-schema1"
)
type testRegistryV1 struct {
cmd *exec.Cmd
url string
dir string
}
func setupRegistryV1At(c *check.C, url string, auth bool) *testRegistryV1 {
return &testRegistryV1{
url: url,
}
}
type testRegistryV2 struct {
cmd *exec.Cmd
url string
@@ -44,7 +31,7 @@ func setupRegistryV2At(c *check.C, url string, auth, schema1 bool) *testRegistry
c.Assert(err, check.IsNil)
// Wait for registry to be ready to serve requests.
for i := 0; i != 5; i++ {
for i := 0; i != 50; i++ {
if err = reg.Ping(); err == nil {
break
}


@@ -36,15 +36,8 @@ func findFingerprint(lineBytes []byte) (string, error) {
return "", errors.New("No fingerprint found")
}
func (s *SigningSuite) SetUpTest(c *check.C) {
mech, _, err := signature.NewEphemeralGPGSigningMechanism([]byte{})
c.Assert(err, check.IsNil)
defer mech.Close()
if err := mech.SupportsSigning(); err != nil { // FIXME? Test that verification and policy enforcement works, using signatures from fixtures
c.Skip(fmt.Sprintf("Signing not supported: %v", err))
}
_, err = exec.LookPath(skopeoBinary)
func (s *SigningSuite) SetUpSuite(c *check.C) {
_, err := exec.LookPath(skopeoBinary)
c.Assert(err, check.IsNil)
s.gpgHome, err = ioutil.TempDir("", "skopeo-gpg")
@@ -59,7 +52,7 @@ func (s *SigningSuite) SetUpTest(c *check.C) {
c.Assert(err, check.IsNil)
}
func (s *SigningSuite) TearDownTest(c *check.C) {
func (s *SigningSuite) TearDownSuite(c *check.C) {
if s.gpgHome != "" {
err := os.RemoveAll(s.gpgHome)
c.Assert(err, check.IsNil)
@@ -70,6 +63,13 @@ func (s *SigningSuite) TearDownTest(c *check.C) {
}
func (s *SigningSuite) TestSignVerifySmoke(c *check.C) {
mech, _, err := signature.NewEphemeralGPGSigningMechanism([]byte{})
c.Assert(err, check.IsNil)
defer mech.Close()
if err := mech.SupportsSigning(); err != nil { // FIXME? Test that verification and policy enforcement works, using signatures from fixtures
c.Skip(fmt.Sprintf("Signing not supported: %v", err))
}
manifestPath := "fixtures/image.manifest.json"
dockerReference := "testing/smoketest"


@@ -23,7 +23,7 @@ golang.org/x/text master
github.com/docker/distribution master
github.com/docker/libtrust master
github.com/opencontainers/runc master
github.com/opencontainers/image-spec v1.0.0-rc5
github.com/opencontainers/image-spec v1.0.0-rc6
# -- start OCI image validation requirements.
github.com/opencontainers/runtime-spec v1.0.0-rc4
github.com/opencontainers/image-tools v0.1.0
@@ -31,6 +31,7 @@ github.com/xeipuuv/gojsonschema master
github.com/xeipuuv/gojsonreference master
github.com/xeipuuv/gojsonpointer master
go4.org master https://github.com/camlistore/go4
github.com/ostreedev/ostree-go aeb02c6b6aa2889db3ef62f7855650755befd460
# -- end OCI image validation requirements
github.com/mtrmac/gpgme master
# openshift/origin' k8s dependencies as of OpenShift v1.1.5
@@ -43,3 +44,6 @@ github.com/imdario/mergo 6633656539c1639d9d78127b7d47c622b5d7b6dc
github.com/mistifyio/go-zfs 22c9b32c84eb0d0c6f4043b6e90fc94073de92fa
github.com/pborman/uuid v1.0
github.com/opencontainers/selinux master
golang.org/x/sys master
github.com/tchap/go-patricia v2.2.6
github.com/BurntSushi/toml master

vendor/github.com/BurntSushi/toml/COPYING

@@ -0,0 +1,14 @@
DO WHAT THE FUCK YOU WANT TO PUBLIC LICENSE
Version 2, December 2004
Copyright (C) 2004 Sam Hocevar <sam@hocevar.net>
Everyone is permitted to copy and distribute verbatim or modified
copies of this license document, and changing it is allowed as long
as the name is changed.
DO WHAT THE FUCK YOU WANT TO PUBLIC LICENSE
TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION
0. You just DO WHAT THE FUCK YOU WANT TO.

vendor/github.com/BurntSushi/toml/README.md

@@ -0,0 +1,218 @@
## TOML parser and encoder for Go with reflection
TOML stands for Tom's Obvious, Minimal Language. This Go package provides a
reflection interface similar to Go's standard library `json` and `xml`
packages. This package also supports the `encoding.TextUnmarshaler` and
`encoding.TextMarshaler` interfaces so that you can define custom data
representations. (There is an example of this below.)
Spec: https://github.com/toml-lang/toml
Compatible with TOML version
[v0.4.0](https://github.com/toml-lang/toml/blob/master/versions/en/toml-v0.4.0.md)
Documentation: https://godoc.org/github.com/BurntSushi/toml
Installation:
```bash
go get github.com/BurntSushi/toml
```
Try the toml validator:
```bash
go get github.com/BurntSushi/toml/cmd/tomlv
tomlv some-toml-file.toml
```
[![Build Status](https://travis-ci.org/BurntSushi/toml.svg?branch=master)](https://travis-ci.org/BurntSushi/toml) [![GoDoc](https://godoc.org/github.com/BurntSushi/toml?status.svg)](https://godoc.org/github.com/BurntSushi/toml)
### Testing
This package passes all tests in
[toml-test](https://github.com/BurntSushi/toml-test) for both the decoder
and the encoder.
### Examples
This package works similarly to how the Go standard library handles `XML`
and `JSON`. Namely, data is loaded into Go values via reflection.
For the simplest example, consider some TOML file as just a list of keys
and values:
```toml
Age = 25
Cats = [ "Cauchy", "Plato" ]
Pi = 3.14
Perfection = [ 6, 28, 496, 8128 ]
DOB = 1987-07-05T05:45:00Z
```
Which could be defined in Go as:
```go
type Config struct {
Age int
Cats []string
Pi float64
Perfection []int
DOB time.Time // requires `import time`
}
```
And then decoded with:
```go
var conf Config
if _, err := toml.Decode(tomlData, &conf); err != nil {
// handle error
}
```
You can also use struct tags if your struct field name doesn't map to a TOML
key value directly:
```toml
some_key_NAME = "wat"
```
```go
type TOML struct {
ObscureKey string `toml:"some_key_NAME"`
}
```
### Using the `encoding.TextUnmarshaler` interface
Here's an example that automatically parses duration strings into
`time.Duration` values:
```toml
[[song]]
name = "Thunder Road"
duration = "4m49s"
[[song]]
name = "Stairway to Heaven"
duration = "8m03s"
```
Which can be decoded with:
```go
type song struct {
Name string
Duration duration
}
type songs struct {
Song []song
}
var favorites songs
if _, err := toml.Decode(blob, &favorites); err != nil {
log.Fatal(err)
}
for _, s := range favorites.Song {
fmt.Printf("%s (%s)\n", s.Name, s.Duration)
}
```
And you'll also need a `duration` type that satisfies the
`encoding.TextUnmarshaler` interface:
```go
type duration struct {
time.Duration
}
func (d *duration) UnmarshalText(text []byte) error {
var err error
d.Duration, err = time.ParseDuration(string(text))
return err
}
```
### More complex usage
Here's an example of how to load the example from the official spec page:
```toml
# This is a TOML document. Boom.
title = "TOML Example"
[owner]
name = "Tom Preston-Werner"
organization = "GitHub"
bio = "GitHub Cofounder & CEO\nLikes tater tots and beer."
dob = 1979-05-27T07:32:00Z # First class dates? Why not?
[database]
server = "192.168.1.1"
ports = [ 8001, 8001, 8002 ]
connection_max = 5000
enabled = true
[servers]
# You can indent as you please. Tabs or spaces. TOML don't care.
[servers.alpha]
ip = "10.0.0.1"
dc = "eqdc10"
[servers.beta]
ip = "10.0.0.2"
dc = "eqdc10"
[clients]
data = [ ["gamma", "delta"], [1, 2] ] # just an update to make sure parsers support it
# Line breaks are OK when inside arrays
hosts = [
"alpha",
"omega"
]
```
And the corresponding Go types are:
```go
type tomlConfig struct {
Title string
Owner ownerInfo
DB database `toml:"database"`
Servers map[string]server
Clients clients
}
type ownerInfo struct {
Name string
Org string `toml:"organization"`
Bio string
DOB time.Time
}
type database struct {
Server string
Ports []int
ConnMax int `toml:"connection_max"`
Enabled bool
}
type server struct {
IP string
DC string
}
type clients struct {
Data [][]interface{}
Hosts []string
}
```
Note that a case insensitive match will be tried if an exact match can't be
found.
A working example of the above can be found in `_examples/example.{go,toml}`.

vendor/github.com/BurntSushi/toml/decode.go

@@ -0,0 +1,509 @@
package toml
import (
"fmt"
"io"
"io/ioutil"
"math"
"reflect"
"strings"
"time"
)
func e(format string, args ...interface{}) error {
return fmt.Errorf("toml: "+format, args...)
}
// Unmarshaler is the interface implemented by objects that can unmarshal a
// TOML description of themselves.
type Unmarshaler interface {
UnmarshalTOML(interface{}) error
}
// Unmarshal decodes the contents of `p` in TOML format into a pointer `v`.
func Unmarshal(p []byte, v interface{}) error {
_, err := Decode(string(p), v)
return err
}
// Primitive is a TOML value that hasn't been decoded into a Go value.
// When using the various `Decode*` functions, the type `Primitive` may
// be given to any value, and its decoding will be delayed.
//
// A `Primitive` value can be decoded using the `PrimitiveDecode` function.
//
// The underlying representation of a `Primitive` value is subject to change.
// Do not rely on it.
//
// N.B. Primitive values are still parsed, so using them will only avoid
// the overhead of reflection. They can be useful when you don't know the
// exact type of TOML data until run time.
type Primitive struct {
undecoded interface{}
context Key
}
// DEPRECATED!
//
// Use MetaData.PrimitiveDecode instead.
func PrimitiveDecode(primValue Primitive, v interface{}) error {
md := MetaData{decoded: make(map[string]bool)}
return md.unify(primValue.undecoded, rvalue(v))
}
// PrimitiveDecode is just like the other `Decode*` functions, except it
// decodes a TOML value that has already been parsed. Valid primitive values
// can *only* be obtained from values filled by the decoder functions,
// including this method. (i.e., `v` may contain more `Primitive`
// values.)
//
// Meta data for primitive values is included in the meta data returned by
// the `Decode*` functions with one exception: keys returned by the Undecoded
// method will only reflect keys that were decoded. Namely, any keys hidden
// behind a Primitive will be considered undecoded. Executing this method will
// update the undecoded keys in the meta data. (See the example.)
func (md *MetaData) PrimitiveDecode(primValue Primitive, v interface{}) error {
md.context = primValue.context
defer func() { md.context = nil }()
return md.unify(primValue.undecoded, rvalue(v))
}
// Decode will decode the contents of `data` in TOML format into a pointer
// `v`.
//
// TOML hashes correspond to Go structs or maps. (Dealer's choice. They can be
// used interchangeably.)
//
// TOML arrays of tables correspond to either a slice of structs or a slice
// of maps.
//
// TOML datetimes correspond to Go `time.Time` values.
//
// All other TOML types (float, string, int, bool and array) correspond
// to the obvious Go types.
//
// An exception to the above rules is if a type implements the
// encoding.TextUnmarshaler interface. In this case, any primitive TOML value
// (floats, strings, integers, booleans and datetimes) will be converted to
// a byte string and given to the value's UnmarshalText method. See the
// Unmarshaler example for a demonstration with time duration strings.
//
// Key mapping
//
// TOML keys can map to either keys in a Go map or field names in a Go
// struct. The special `toml` struct tag may be used to map TOML keys to
// struct fields that don't match the key name exactly. (See the example.)
// A case insensitive match to struct names will be tried if an exact match
// can't be found.
//
// The mapping between TOML values and Go values is loose. That is, there
// may exist TOML values that cannot be placed into your representation, and
// there may be parts of your representation that do not correspond to
// TOML values. This loose mapping can be made stricter by using the IsDefined
// and/or Undecoded methods on the MetaData returned.
//
// This decoder will not handle cyclic types. If a cyclic type is passed,
// `Decode` will not terminate.
func Decode(data string, v interface{}) (MetaData, error) {
rv := reflect.ValueOf(v)
if rv.Kind() != reflect.Ptr {
return MetaData{}, e("Decode of non-pointer %s", reflect.TypeOf(v))
}
if rv.IsNil() {
return MetaData{}, e("Decode of nil %s", reflect.TypeOf(v))
}
p, err := parse(data)
if err != nil {
return MetaData{}, err
}
md := MetaData{
p.mapping, p.types, p.ordered,
make(map[string]bool, len(p.ordered)), nil,
}
return md, md.unify(p.mapping, indirect(rv))
}
// DecodeFile is just like Decode, except it will automatically read the
// contents of the file at `fpath` and decode it for you.
func DecodeFile(fpath string, v interface{}) (MetaData, error) {
bs, err := ioutil.ReadFile(fpath)
if err != nil {
return MetaData{}, err
}
return Decode(string(bs), v)
}
// DecodeReader is just like Decode, except it will consume all bytes
// from the reader and decode it for you.
func DecodeReader(r io.Reader, v interface{}) (MetaData, error) {
bs, err := ioutil.ReadAll(r)
if err != nil {
return MetaData{}, err
}
return Decode(string(bs), v)
}
// unify performs a sort of type unification based on the structure of `rv`,
// which is the client representation.
//
// Any type mismatch produces an error. Finding a type that we don't know
// how to handle produces an unsupported type error.
func (md *MetaData) unify(data interface{}, rv reflect.Value) error {
// Special case. Look for a `Primitive` value.
if rv.Type() == reflect.TypeOf((*Primitive)(nil)).Elem() {
// Save the undecoded data and the key context into the primitive
// value.
context := make(Key, len(md.context))
copy(context, md.context)
rv.Set(reflect.ValueOf(Primitive{
undecoded: data,
context: context,
}))
return nil
}
// Special case. Unmarshaler Interface support.
if rv.CanAddr() {
if v, ok := rv.Addr().Interface().(Unmarshaler); ok {
return v.UnmarshalTOML(data)
}
}
// Special case. Handle time.Time values specifically.
// TODO: Remove this code when we decide to drop support for Go 1.1.
// This isn't necessary in Go 1.2 because time.Time satisfies the encoding
// interfaces.
if rv.Type().AssignableTo(rvalue(time.Time{}).Type()) {
return md.unifyDatetime(data, rv)
}
// Special case. Look for a value satisfying the TextUnmarshaler interface.
if v, ok := rv.Interface().(TextUnmarshaler); ok {
return md.unifyText(data, v)
}
// BUG(burntsushi)
// The behavior here is incorrect whenever a Go type satisfies the
// encoding.TextUnmarshaler interface but also corresponds to a TOML
// hash or array. In particular, the unmarshaler should only be applied
// to primitive TOML values. But at this point, it will be applied to
// all kinds of values and produce an incorrect error whenever those values
// are hashes or arrays (including arrays of tables).
k := rv.Kind()
// laziness
if k >= reflect.Int && k <= reflect.Uint64 {
return md.unifyInt(data, rv)
}
switch k {
case reflect.Ptr:
elem := reflect.New(rv.Type().Elem())
err := md.unify(data, reflect.Indirect(elem))
if err != nil {
return err
}
rv.Set(elem)
return nil
case reflect.Struct:
return md.unifyStruct(data, rv)
case reflect.Map:
return md.unifyMap(data, rv)
case reflect.Array:
return md.unifyArray(data, rv)
case reflect.Slice:
return md.unifySlice(data, rv)
case reflect.String:
return md.unifyString(data, rv)
case reflect.Bool:
return md.unifyBool(data, rv)
case reflect.Interface:
// we only support empty interfaces.
if rv.NumMethod() > 0 {
return e("unsupported type %s", rv.Type())
}
return md.unifyAnything(data, rv)
case reflect.Float32:
fallthrough
case reflect.Float64:
return md.unifyFloat64(data, rv)
}
return e("unsupported type %s", rv.Kind())
}
func (md *MetaData) unifyStruct(mapping interface{}, rv reflect.Value) error {
tmap, ok := mapping.(map[string]interface{})
if !ok {
if mapping == nil {
return nil
}
return e("type mismatch for %s: expected table but found %T",
rv.Type().String(), mapping)
}
for key, datum := range tmap {
var f *field
fields := cachedTypeFields(rv.Type())
for i := range fields {
ff := &fields[i]
if ff.name == key {
f = ff
break
}
if f == nil && strings.EqualFold(ff.name, key) {
f = ff
}
}
if f != nil {
subv := rv
for _, i := range f.index {
subv = indirect(subv.Field(i))
}
if isUnifiable(subv) {
md.decoded[md.context.add(key).String()] = true
md.context = append(md.context, key)
if err := md.unify(datum, subv); err != nil {
return err
}
md.context = md.context[0 : len(md.context)-1]
} else if f.name != "" {
// Bad user! No soup for you!
return e("cannot write unexported field %s.%s",
rv.Type().String(), f.name)
}
}
}
return nil
}
func (md *MetaData) unifyMap(mapping interface{}, rv reflect.Value) error {
tmap, ok := mapping.(map[string]interface{})
if !ok {
if tmap == nil {
return nil
}
return badtype("map", mapping)
}
if rv.IsNil() {
rv.Set(reflect.MakeMap(rv.Type()))
}
for k, v := range tmap {
md.decoded[md.context.add(k).String()] = true
md.context = append(md.context, k)
rvkey := indirect(reflect.New(rv.Type().Key()))
rvval := reflect.Indirect(reflect.New(rv.Type().Elem()))
if err := md.unify(v, rvval); err != nil {
return err
}
md.context = md.context[0 : len(md.context)-1]
rvkey.SetString(k)
rv.SetMapIndex(rvkey, rvval)
}
return nil
}
func (md *MetaData) unifyArray(data interface{}, rv reflect.Value) error {
datav := reflect.ValueOf(data)
if datav.Kind() != reflect.Slice {
if !datav.IsValid() {
return nil
}
return badtype("slice", data)
}
sliceLen := datav.Len()
if sliceLen != rv.Len() {
return e("expected array length %d; got TOML array of length %d",
rv.Len(), sliceLen)
}
return md.unifySliceArray(datav, rv)
}
func (md *MetaData) unifySlice(data interface{}, rv reflect.Value) error {
datav := reflect.ValueOf(data)
if datav.Kind() != reflect.Slice {
if !datav.IsValid() {
return nil
}
return badtype("slice", data)
}
n := datav.Len()
if rv.IsNil() || rv.Cap() < n {
rv.Set(reflect.MakeSlice(rv.Type(), n, n))
}
rv.SetLen(n)
return md.unifySliceArray(datav, rv)
}
func (md *MetaData) unifySliceArray(data, rv reflect.Value) error {
sliceLen := data.Len()
for i := 0; i < sliceLen; i++ {
v := data.Index(i).Interface()
sliceval := indirect(rv.Index(i))
if err := md.unify(v, sliceval); err != nil {
return err
}
}
return nil
}
func (md *MetaData) unifyDatetime(data interface{}, rv reflect.Value) error {
if _, ok := data.(time.Time); ok {
rv.Set(reflect.ValueOf(data))
return nil
}
return badtype("time.Time", data)
}
func (md *MetaData) unifyString(data interface{}, rv reflect.Value) error {
if s, ok := data.(string); ok {
rv.SetString(s)
return nil
}
return badtype("string", data)
}
func (md *MetaData) unifyFloat64(data interface{}, rv reflect.Value) error {
if num, ok := data.(float64); ok {
switch rv.Kind() {
case reflect.Float32:
fallthrough
case reflect.Float64:
rv.SetFloat(num)
default:
panic("bug")
}
return nil
}
return badtype("float", data)
}
func (md *MetaData) unifyInt(data interface{}, rv reflect.Value) error {
if num, ok := data.(int64); ok {
if rv.Kind() >= reflect.Int && rv.Kind() <= reflect.Int64 {
switch rv.Kind() {
case reflect.Int, reflect.Int64:
// No bounds checking necessary.
case reflect.Int8:
if num < math.MinInt8 || num > math.MaxInt8 {
return e("value %d is out of range for int8", num)
}
case reflect.Int16:
if num < math.MinInt16 || num > math.MaxInt16 {
return e("value %d is out of range for int16", num)
}
case reflect.Int32:
if num < math.MinInt32 || num > math.MaxInt32 {
return e("value %d is out of range for int32", num)
}
}
rv.SetInt(num)
} else if rv.Kind() >= reflect.Uint && rv.Kind() <= reflect.Uint64 {
unum := uint64(num)
switch rv.Kind() {
case reflect.Uint, reflect.Uint64:
// No bounds checking necessary.
case reflect.Uint8:
if num < 0 || unum > math.MaxUint8 {
return e("value %d is out of range for uint8", num)
}
case reflect.Uint16:
if num < 0 || unum > math.MaxUint16 {
return e("value %d is out of range for uint16", num)
}
case reflect.Uint32:
if num < 0 || unum > math.MaxUint32 {
return e("value %d is out of range for uint32", num)
}
}
rv.SetUint(unum)
} else {
panic("unreachable")
}
return nil
}
return badtype("integer", data)
}
func (md *MetaData) unifyBool(data interface{}, rv reflect.Value) error {
if b, ok := data.(bool); ok {
rv.SetBool(b)
return nil
}
return badtype("boolean", data)
}
func (md *MetaData) unifyAnything(data interface{}, rv reflect.Value) error {
rv.Set(reflect.ValueOf(data))
return nil
}
func (md *MetaData) unifyText(data interface{}, v TextUnmarshaler) error {
var s string
switch sdata := data.(type) {
case TextMarshaler:
text, err := sdata.MarshalText()
if err != nil {
return err
}
s = string(text)
case fmt.Stringer:
s = sdata.String()
case string:
s = sdata
case bool:
s = fmt.Sprintf("%v", sdata)
case int64:
s = fmt.Sprintf("%d", sdata)
case float64:
s = fmt.Sprintf("%f", sdata)
default:
return badtype("primitive (string-like)", data)
}
if err := v.UnmarshalText([]byte(s)); err != nil {
return err
}
return nil
}
// rvalue returns a reflect.Value of `v`. All pointers are resolved.
func rvalue(v interface{}) reflect.Value {
return indirect(reflect.ValueOf(v))
}
// indirect returns the value pointed to by a pointer.
// Pointers are followed until the value is not a pointer.
// New values are allocated for each nil pointer.
//
// An exception to this rule is if the value satisfies an interface of
// interest to us (like encoding.TextUnmarshaler).
func indirect(v reflect.Value) reflect.Value {
if v.Kind() != reflect.Ptr {
if v.CanSet() {
pv := v.Addr()
if _, ok := pv.Interface().(TextUnmarshaler); ok {
return pv
}
}
return v
}
if v.IsNil() {
v.Set(reflect.New(v.Type().Elem()))
}
return indirect(reflect.Indirect(v))
}
func isUnifiable(rv reflect.Value) bool {
if rv.CanSet() {
return true
}
if _, ok := rv.Interface().(TextUnmarshaler); ok {
return true
}
return false
}
func badtype(expected string, data interface{}) error {
return e("cannot load TOML value of type %T into a Go %s", data, expected)
}

vendor/github.com/BurntSushi/toml/decode_meta.go

@@ -0,0 +1,121 @@
package toml
import "strings"
// MetaData allows access to meta information about TOML data that may not
// be inferrable via reflection. In particular, whether a key has been defined
// and the TOML type of a key.
type MetaData struct {
mapping map[string]interface{}
types map[string]tomlType
keys []Key
decoded map[string]bool
context Key // Used only during decoding.
}
// IsDefined returns true if the key given exists in the TOML data. The key
// should be specified hierarchially. e.g.,
//
// // access the TOML key 'a.b.c'
// IsDefined("a", "b", "c")
//
// IsDefined will return false if an empty key given. Keys are case sensitive.
func (md *MetaData) IsDefined(key ...string) bool {
if len(key) == 0 {
return false
}
var hash map[string]interface{}
var ok bool
var hashOrVal interface{} = md.mapping
for _, k := range key {
if hash, ok = hashOrVal.(map[string]interface{}); !ok {
return false
}
if hashOrVal, ok = hash[k]; !ok {
return false
}
}
return true
}
// Type returns a string representation of the type of the key specified.
//
// Type will return the empty string if given an empty key or a key that
// does not exist. Keys are case sensitive.
func (md *MetaData) Type(key ...string) string {
fullkey := strings.Join(key, ".")
if typ, ok := md.types[fullkey]; ok {
return typ.typeString()
}
return ""
}
// Key is the type of any TOML key, including key groups. Use (MetaData).Keys
// to get values of this type.
type Key []string
func (k Key) String() string {
return strings.Join(k, ".")
}
func (k Key) maybeQuotedAll() string {
var ss []string
for i := range k {
ss = append(ss, k.maybeQuoted(i))
}
return strings.Join(ss, ".")
}
func (k Key) maybeQuoted(i int) string {
quote := false
for _, c := range k[i] {
if !isBareKeyChar(c) {
quote = true
break
}
}
if quote {
return "\"" + strings.Replace(k[i], "\"", "\\\"", -1) + "\""
}
return k[i]
}
func (k Key) add(piece string) Key {
newKey := make(Key, len(k)+1)
copy(newKey, k)
newKey[len(k)] = piece
return newKey
}
// Keys returns a slice of every key in the TOML data, including key groups.
// Each key is itself a slice, where the first element is the top of the
// hierarchy and the last is the most specific.
//
// The list will have the same order as the keys appeared in the TOML data.
//
// All keys returned are non-empty.
func (md *MetaData) Keys() []Key {
return md.keys
}
// Undecoded returns all keys that have not been decoded in the order in which
// they appear in the original TOML document.
//
// This includes keys that haven't been decoded because of a Primitive value.
// Once the Primitive value is decoded, the keys will be considered decoded.
//
// Also note that decoding into an empty interface will result in no decoding,
// and so no keys will be considered decoded.
//
// In this sense, the Undecoded keys correspond to keys in the TOML document
// that do not have a concrete type in your representation.
func (md *MetaData) Undecoded() []Key {
undecoded := make([]Key, 0, len(md.keys))
for _, key := range md.keys {
if !md.decoded[key.String()] {
undecoded = append(undecoded, key)
}
}
return undecoded
}

vendor/github.com/BurntSushi/toml/doc.go

@@ -0,0 +1,27 @@
/*
Package toml provides facilities for decoding and encoding TOML configuration
files via reflection. There is also support for delaying decoding with
the Primitive type, and querying the set of keys in a TOML document with the
MetaData type.
The specification implemented: https://github.com/toml-lang/toml
The sub-command github.com/BurntSushi/toml/cmd/tomlv can be used to verify
whether a file is a valid TOML document. It can also be used to print the
type of each key in a TOML document.
Testing
There are two important types of tests used for this package. The first is
contained inside '*_test.go' files and uses the standard Go unit testing
framework. These tests are primarily devoted to holistically testing the
decoder and encoder.
The second type of testing is used to verify the implementation's adherence
to the TOML specification. These tests have been factored into their own
project: https://github.com/BurntSushi/toml-test
The reason the tests are in a separate project is so that they can be used by
any implementation of TOML. Namely, it is language agnostic.
*/
package toml

vendor/github.com/BurntSushi/toml/encode.go

@@ -0,0 +1,568 @@
package toml
import (
"bufio"
"errors"
"fmt"
"io"
"reflect"
"sort"
"strconv"
"strings"
"time"
)
type tomlEncodeError struct{ error }
var (
errArrayMixedElementTypes = errors.New(
"toml: cannot encode array with mixed element types")
errArrayNilElement = errors.New(
"toml: cannot encode array with nil element")
errNonString = errors.New(
"toml: cannot encode a map with non-string key type")
errAnonNonStruct = errors.New(
"toml: cannot encode an anonymous field that is not a struct")
errArrayNoTable = errors.New(
"toml: TOML array element cannot contain a table")
errNoKey = errors.New(
"toml: top-level values must be Go maps or structs")
errAnything = errors.New("") // used in testing
)
var quotedReplacer = strings.NewReplacer(
"\t", "\\t",
"\n", "\\n",
"\r", "\\r",
"\"", "\\\"",
"\\", "\\\\",
)
// Encoder controls the encoding of Go values to a TOML document to some
// io.Writer.
//
// The indentation level can be controlled with the Indent field.
type Encoder struct {
// A single indentation level. By default it is two spaces.
Indent string
// hasWritten is whether we have written any output to w yet.
hasWritten bool
w *bufio.Writer
}
// NewEncoder returns a TOML encoder that encodes Go values to the io.Writer
// given. By default, a single indentation level is 2 spaces.
func NewEncoder(w io.Writer) *Encoder {
return &Encoder{
w: bufio.NewWriter(w),
Indent: " ",
}
}
// Encode writes a TOML representation of the Go value to the underlying
// io.Writer. If the value given cannot be encoded to a valid TOML document,
// then an error is returned.
//
// The mapping between Go values and TOML values should be precisely the same
// as for the Decode* functions. Similarly, the TextMarshaler interface is
// supported by encoding the resulting bytes as strings. (If you want to write
// arbitrary binary data then you will need to use something like base64 since
// TOML does not have any binary types.)
//
// When encoding TOML hashes (i.e., Go maps or structs), keys without any
// sub-hashes are encoded first.
//
// If a Go map is encoded, then its keys are sorted alphabetically for
// deterministic output. More control over this behavior may be provided if
// there is demand for it.
//
// Encoding Go values without a corresponding TOML representation---like map
// types with non-string keys---will cause an error to be returned. Similarly
// for mixed arrays/slices, arrays/slices with nil elements, embedded
// non-struct types and nested slices containing maps or structs.
// (e.g., [][]map[string]string is not allowed but []map[string]string is OK
// and so is []map[string][]string.)
func (enc *Encoder) Encode(v interface{}) error {
rv := eindirect(reflect.ValueOf(v))
if err := enc.safeEncode(Key([]string{}), rv); err != nil {
return err
}
return enc.w.Flush()
}
func (enc *Encoder) safeEncode(key Key, rv reflect.Value) (err error) {
defer func() {
if r := recover(); r != nil {
if terr, ok := r.(tomlEncodeError); ok {
err = terr.error
return
}
panic(r)
}
}()
enc.encode(key, rv)
return nil
}
func (enc *Encoder) encode(key Key, rv reflect.Value) {
// Special case: time.Time needs to be in ISO8601 format.
// Special case: if we can marshal the type to text, then we use that.
// Basically, this prevents the encoder from handling these types as
// generic structs (or whatever the underlying type of a TextMarshaler is).
switch rv.Interface().(type) {
case time.Time, TextMarshaler:
enc.keyEqElement(key, rv)
return
}
k := rv.Kind()
switch k {
case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32,
reflect.Int64,
reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32,
reflect.Uint64,
reflect.Float32, reflect.Float64, reflect.String, reflect.Bool:
enc.keyEqElement(key, rv)
case reflect.Array, reflect.Slice:
if typeEqual(tomlArrayHash, tomlTypeOfGo(rv)) {
enc.eArrayOfTables(key, rv)
} else {
enc.keyEqElement(key, rv)
}
case reflect.Interface:
if rv.IsNil() {
return
}
enc.encode(key, rv.Elem())
case reflect.Map:
if rv.IsNil() {
return
}
enc.eTable(key, rv)
case reflect.Ptr:
if rv.IsNil() {
return
}
enc.encode(key, rv.Elem())
case reflect.Struct:
enc.eTable(key, rv)
default:
panic(e("unsupported type for key '%s': %s", key, k))
}
}
// eElement encodes any value that can be an array element (primitives and
// arrays).
func (enc *Encoder) eElement(rv reflect.Value) {
switch v := rv.Interface().(type) {
case time.Time:
// Special case time.Time as a primitive. Has to come before
// TextMarshaler below because time.Time implements
// encoding.TextMarshaler, but we need to always use UTC.
enc.wf(v.UTC().Format("2006-01-02T15:04:05Z"))
return
case TextMarshaler:
// Special case. Use text marshaler if it's available for this value.
if s, err := v.MarshalText(); err != nil {
encPanic(err)
} else {
enc.writeQuoted(string(s))
}
return
}
switch rv.Kind() {
case reflect.Bool:
enc.wf(strconv.FormatBool(rv.Bool()))
case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32,
reflect.Int64:
enc.wf(strconv.FormatInt(rv.Int(), 10))
case reflect.Uint, reflect.Uint8, reflect.Uint16,
reflect.Uint32, reflect.Uint64:
enc.wf(strconv.FormatUint(rv.Uint(), 10))
case reflect.Float32:
enc.wf(floatAddDecimal(strconv.FormatFloat(rv.Float(), 'f', -1, 32)))
case reflect.Float64:
enc.wf(floatAddDecimal(strconv.FormatFloat(rv.Float(), 'f', -1, 64)))
case reflect.Array, reflect.Slice:
enc.eArrayOrSliceElement(rv)
case reflect.Interface:
enc.eElement(rv.Elem())
case reflect.String:
enc.writeQuoted(rv.String())
default:
panic(e("unexpected primitive type: %s", rv.Kind()))
}
}
// By the TOML spec, all floats must have a decimal with at least one
// number on either side.
func floatAddDecimal(fstr string) string {
if !strings.Contains(fstr, ".") {
return fstr + ".0"
}
return fstr
}
func (enc *Encoder) writeQuoted(s string) {
enc.wf("\"%s\"", quotedReplacer.Replace(s))
}
func (enc *Encoder) eArrayOrSliceElement(rv reflect.Value) {
length := rv.Len()
enc.wf("[")
for i := 0; i < length; i++ {
elem := rv.Index(i)
enc.eElement(elem)
if i != length-1 {
enc.wf(", ")
}
}
enc.wf("]")
}
func (enc *Encoder) eArrayOfTables(key Key, rv reflect.Value) {
if len(key) == 0 {
encPanic(errNoKey)
}
for i := 0; i < rv.Len(); i++ {
trv := rv.Index(i)
if isNil(trv) {
continue
}
panicIfInvalidKey(key)
enc.newline()
enc.wf("%s[[%s]]", enc.indentStr(key), key.maybeQuotedAll())
enc.newline()
enc.eMapOrStruct(key, trv)
}
}
func (enc *Encoder) eTable(key Key, rv reflect.Value) {
panicIfInvalidKey(key)
if len(key) == 1 {
// Output an extra newline between top-level tables.
// (The newline isn't written if nothing else has been written though.)
enc.newline()
}
if len(key) > 0 {
enc.wf("%s[%s]", enc.indentStr(key), key.maybeQuotedAll())
enc.newline()
}
enc.eMapOrStruct(key, rv)
}
func (enc *Encoder) eMapOrStruct(key Key, rv reflect.Value) {
switch rv := eindirect(rv); rv.Kind() {
case reflect.Map:
enc.eMap(key, rv)
case reflect.Struct:
enc.eStruct(key, rv)
default:
panic("eTable: unhandled reflect.Value Kind: " + rv.Kind().String())
}
}
func (enc *Encoder) eMap(key Key, rv reflect.Value) {
rt := rv.Type()
if rt.Key().Kind() != reflect.String {
encPanic(errNonString)
}
// Sort keys so that we have deterministic output. And write keys directly
// underneath this key first, before writing sub-structs or sub-maps.
var mapKeysDirect, mapKeysSub []string
for _, mapKey := range rv.MapKeys() {
k := mapKey.String()
if typeIsHash(tomlTypeOfGo(rv.MapIndex(mapKey))) {
mapKeysSub = append(mapKeysSub, k)
} else {
mapKeysDirect = append(mapKeysDirect, k)
}
}
var writeMapKeys = func(mapKeys []string) {
sort.Strings(mapKeys)
for _, mapKey := range mapKeys {
mrv := rv.MapIndex(reflect.ValueOf(mapKey))
if isNil(mrv) {
// Don't write anything for nil fields.
continue
}
enc.encode(key.add(mapKey), mrv)
}
}
writeMapKeys(mapKeysDirect)
writeMapKeys(mapKeysSub)
}
func (enc *Encoder) eStruct(key Key, rv reflect.Value) {
// Write keys for fields directly under this key first, because if we write
// a field that creates a new table, then all keys under it will be in that
// table (not the one we're writing here).
rt := rv.Type()
var fieldsDirect, fieldsSub [][]int
var addFields func(rt reflect.Type, rv reflect.Value, start []int)
addFields = func(rt reflect.Type, rv reflect.Value, start []int) {
for i := 0; i < rt.NumField(); i++ {
f := rt.Field(i)
// skip unexported fields
if f.PkgPath != "" && !f.Anonymous {
continue
}
frv := rv.Field(i)
if f.Anonymous {
t := f.Type
switch t.Kind() {
case reflect.Struct:
// Treat anonymous struct fields with
// tag names as though they are not
// anonymous, like encoding/json does.
if getOptions(f.Tag).name == "" {
addFields(t, frv, f.Index)
continue
}
case reflect.Ptr:
if t.Elem().Kind() == reflect.Struct &&
getOptions(f.Tag).name == "" {
if !frv.IsNil() {
addFields(t.Elem(), frv.Elem(), f.Index)
}
continue
}
// Fall through to the normal field encoding logic below
// for non-struct anonymous fields.
}
}
if typeIsHash(tomlTypeOfGo(frv)) {
fieldsSub = append(fieldsSub, append(start, f.Index...))
} else {
fieldsDirect = append(fieldsDirect, append(start, f.Index...))
}
}
}
addFields(rt, rv, nil)
var writeFields = func(fields [][]int) {
for _, fieldIndex := range fields {
sft := rt.FieldByIndex(fieldIndex)
sf := rv.FieldByIndex(fieldIndex)
if isNil(sf) {
// Don't write anything for nil fields.
continue
}
opts := getOptions(sft.Tag)
if opts.skip {
continue
}
keyName := sft.Name
if opts.name != "" {
keyName = opts.name
}
if opts.omitempty && isEmpty(sf) {
continue
}
if opts.omitzero && isZero(sf) {
continue
}
enc.encode(key.add(keyName), sf)
}
}
writeFields(fieldsDirect)
writeFields(fieldsSub)
}
// tomlTypeOfGo returns the TOML type of a Go value. The returned type may be
// nil, which means no concrete TOML type could be found; in particular, a nil
// value has no TOML type, which makes it illegal as an array element. This is
// used to determine whether the types of array elements are mixed (which is
// forbidden).
func tomlTypeOfGo(rv reflect.Value) tomlType {
if isNil(rv) || !rv.IsValid() {
return nil
}
switch rv.Kind() {
case reflect.Bool:
return tomlBool
case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32,
reflect.Int64,
reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32,
reflect.Uint64:
return tomlInteger
case reflect.Float32, reflect.Float64:
return tomlFloat
case reflect.Array, reflect.Slice:
if typeEqual(tomlHash, tomlArrayType(rv)) {
return tomlArrayHash
}
return tomlArray
case reflect.Ptr, reflect.Interface:
return tomlTypeOfGo(rv.Elem())
case reflect.String:
return tomlString
case reflect.Map:
return tomlHash
case reflect.Struct:
switch rv.Interface().(type) {
case time.Time:
return tomlDatetime
case TextMarshaler:
return tomlString
default:
return tomlHash
}
default:
panic("unexpected reflect.Kind: " + rv.Kind().String())
}
}
// tomlArrayType returns the element type of a TOML array. The type returned
// may be nil if it cannot be determined (e.g., a nil slice or a zero-length
// slice). This function may also panic if it finds a type that cannot be
// expressed in TOML (such as nil elements, heterogeneous arrays or directly
// nested arrays of tables).
func tomlArrayType(rv reflect.Value) tomlType {
if isNil(rv) || !rv.IsValid() || rv.Len() == 0 {
return nil
}
firstType := tomlTypeOfGo(rv.Index(0))
if firstType == nil {
encPanic(errArrayNilElement)
}
rvlen := rv.Len()
for i := 1; i < rvlen; i++ {
elem := rv.Index(i)
switch elemType := tomlTypeOfGo(elem); {
case elemType == nil:
encPanic(errArrayNilElement)
case !typeEqual(firstType, elemType):
encPanic(errArrayMixedElementTypes)
}
}
// If we have a nested array, then we must make sure that the nested
// array contains ONLY primitives.
// This checks arbitrarily nested arrays.
if typeEqual(firstType, tomlArray) || typeEqual(firstType, tomlArrayHash) {
nest := tomlArrayType(eindirect(rv.Index(0)))
if typeEqual(nest, tomlHash) || typeEqual(nest, tomlArrayHash) {
encPanic(errArrayNoTable)
}
}
return firstType
}
type tagOptions struct {
skip bool // "-"
name string
omitempty bool
omitzero bool
}
func getOptions(tag reflect.StructTag) tagOptions {
t := tag.Get("toml")
if t == "-" {
return tagOptions{skip: true}
}
var opts tagOptions
parts := strings.Split(t, ",")
opts.name = parts[0]
for _, s := range parts[1:] {
switch s {
case "omitempty":
opts.omitempty = true
case "omitzero":
opts.omitzero = true
}
}
return opts
}
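// Illustration (not part of the upstream file): for a hypothetical field
//
//     Addr string `toml:"address,omitempty"`
//
// getOptions returns tagOptions{name: "address", omitempty: true}. A tag of
// just "-" yields tagOptions{skip: true}, and an empty tag leaves name ""
// so the encoder falls back to the Go field name.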
func isZero(rv reflect.Value) bool {
switch rv.Kind() {
case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
return rv.Int() == 0
case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64:
return rv.Uint() == 0
case reflect.Float32, reflect.Float64:
return rv.Float() == 0.0
}
return false
}
func isEmpty(rv reflect.Value) bool {
switch rv.Kind() {
case reflect.Array, reflect.Slice, reflect.Map, reflect.String:
return rv.Len() == 0
case reflect.Bool:
return !rv.Bool()
}
return false
}
func (enc *Encoder) newline() {
if enc.hasWritten {
enc.wf("\n")
}
}
func (enc *Encoder) keyEqElement(key Key, val reflect.Value) {
if len(key) == 0 {
encPanic(errNoKey)
}
panicIfInvalidKey(key)
enc.wf("%s%s = ", enc.indentStr(key), key.maybeQuoted(len(key)-1))
enc.eElement(val)
enc.newline()
}
func (enc *Encoder) wf(format string, v ...interface{}) {
if _, err := fmt.Fprintf(enc.w, format, v...); err != nil {
encPanic(err)
}
enc.hasWritten = true
}
func (enc *Encoder) indentStr(key Key) string {
return strings.Repeat(enc.Indent, len(key)-1)
}
func encPanic(err error) {
panic(tomlEncodeError{err})
}
func eindirect(v reflect.Value) reflect.Value {
switch v.Kind() {
case reflect.Ptr, reflect.Interface:
return eindirect(v.Elem())
default:
return v
}
}
func isNil(rv reflect.Value) bool {
switch rv.Kind() {
case reflect.Interface, reflect.Map, reflect.Ptr, reflect.Slice:
return rv.IsNil()
default:
return false
}
}
func panicIfInvalidKey(key Key) {
for _, k := range key {
if len(k) == 0 {
encPanic(e("Key '%s' is not a valid table name. Key names "+
"cannot be empty.", key.maybeQuotedAll()))
}
}
}
func isValidKeyName(s string) bool {
return len(s) != 0
}
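
To tie the encoder pieces above together, a small hedged sketch (all names are made up): keys without sub-tables are written first, map keys come out sorted, and the struct-tag options parsed by getOptions control the emitted key names.

package main

import (
    "log"
    "os"

    "github.com/BurntSushi/toml"
)

type server struct {
    Host string `toml:"host"`
    Note string `toml:"note,omitempty"` // elided when empty
}

func main() {
    cfg := struct {
        Title   string            `toml:"title"`
        Servers map[string]server `toml:"servers"`
    }{
        Title: "example",
        Servers: map[string]server{
            "beta":  {Host: "10.0.0.2"},
            "alpha": {Host: "10.0.0.1", Note: "primary"},
        },
    }
    // "title" is emitted before the [servers.*] tables, and the map keys
    // are sorted, so alpha precedes beta.
    if err := toml.NewEncoder(os.Stdout).Encode(cfg); err != nil {
        log.Fatal(err)
    }
}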

vendor/github.com/BurntSushi/toml/encoding_types.go generated vendored Normal file

@@ -0,0 +1,19 @@
// +build go1.2
package toml
// In order to support Go 1.1, we define our own TextMarshaler and
// TextUnmarshaler types. For Go 1.2+, we just alias them with the
// standard library interfaces.
import (
"encoding"
)
// TextMarshaler is a synonym for encoding.TextMarshaler. It is defined here
// so that Go 1.1 can be supported.
type TextMarshaler encoding.TextMarshaler
// TextUnmarshaler is a synonym for encoding.TextUnmarshaler. It is defined
// here so that Go 1.1 can be supported.
type TextUnmarshaler encoding.TextUnmarshaler
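
Because TextMarshaler aliases encoding.TextMarshaler on Go 1.2+, any type implementing MarshalText is encoded as a TOML string; a hedged sketch with a made-up type:

package main

import (
    "log"
    "os"

    "github.com/BurntSushi/toml"
)

// level implements encoding.TextMarshaler, so the encoder writes it as a
// quoted string instead of treating it as an integer.
type level int

func (l level) MarshalText() ([]byte, error) {
    return []byte([]string{"debug", "info", "warn"}[l]), nil
}

func main() {
    v := struct {
        LogLevel level `toml:"log_level"`
    }{LogLevel: 1}
    // Prints: log_level = "info"
    if err := toml.NewEncoder(os.Stdout).Encode(v); err != nil {
        log.Fatal(err)
    }
}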

vendor/github.com/BurntSushi/toml/encoding_types_1.1.go generated vendored Normal file

@@ -0,0 +1,18 @@
// +build !go1.2
package toml
// These interfaces were introduced in Go 1.2, so we add them manually when
// compiling for Go 1.1.
// TextMarshaler is a synonym for encoding.TextMarshaler. It is defined here
// so that Go 1.1 can be supported.
type TextMarshaler interface {
MarshalText() (text []byte, err error)
}
// TextUnmarshaler is a synonym for encoding.TextUnmarshaler. It is defined
// here so that Go 1.1 can be supported.
type TextUnmarshaler interface {
UnmarshalText(text []byte) error
}

vendor/github.com/BurntSushi/toml/lex.go generated vendored Normal file

@@ -0,0 +1,953 @@
package toml
import (
"fmt"
"strings"
"unicode"
"unicode/utf8"
)
type itemType int
const (
itemError itemType = iota
itemNIL // used in the parser to indicate no type
itemEOF
itemText
itemString
itemRawString
itemMultilineString
itemRawMultilineString
itemBool
itemInteger
itemFloat
itemDatetime
itemArray // the start of an array
itemArrayEnd
itemTableStart
itemTableEnd
itemArrayTableStart
itemArrayTableEnd
itemKeyStart
itemCommentStart
itemInlineTableStart
itemInlineTableEnd
)
const (
eof = 0
comma = ','
tableStart = '['
tableEnd = ']'
arrayTableStart = '['
arrayTableEnd = ']'
tableSep = '.'
keySep = '='
arrayStart = '['
arrayEnd = ']'
commentStart = '#'
stringStart = '"'
stringEnd = '"'
rawStringStart = '\''
rawStringEnd = '\''
inlineTableStart = '{'
inlineTableEnd = '}'
)
type stateFn func(lx *lexer) stateFn
type lexer struct {
input string
start int
pos int
line int
state stateFn
items chan item
// Allow for backing up up to three runes.
// This is necessary because TOML contains 3-rune tokens (""" and ''').
prevWidths [3]int
nprev int // how many of prevWidths are in use
// If we emit an eof, we can still back up, but it is not OK to call
// next again.
atEOF bool
// A stack of state functions used to maintain context.
// The idea is to reuse parts of the state machine in various places.
// For example, values can appear at the top level or within arbitrarily
// nested arrays. The last state on the stack is used after a value has
// been lexed. Similarly for comments.
stack []stateFn
}
type item struct {
typ itemType
val string
line int
}
func (lx *lexer) nextItem() item {
for {
select {
case item := <-lx.items:
return item
default:
lx.state = lx.state(lx)
}
}
}
func lex(input string) *lexer {
lx := &lexer{
input: input,
state: lexTop,
line: 1,
items: make(chan item, 10),
stack: make([]stateFn, 0, 10),
}
return lx
}
func (lx *lexer) push(state stateFn) {
lx.stack = append(lx.stack, state)
}
func (lx *lexer) pop() stateFn {
if len(lx.stack) == 0 {
return lx.errorf("BUG in lexer: no states to pop")
}
last := lx.stack[len(lx.stack)-1]
lx.stack = lx.stack[0 : len(lx.stack)-1]
return last
}
func (lx *lexer) current() string {
return lx.input[lx.start:lx.pos]
}
func (lx *lexer) emit(typ itemType) {
lx.items <- item{typ, lx.current(), lx.line}
lx.start = lx.pos
}
func (lx *lexer) emitTrim(typ itemType) {
lx.items <- item{typ, strings.TrimSpace(lx.current()), lx.line}
lx.start = lx.pos
}
func (lx *lexer) next() (r rune) {
if lx.atEOF {
panic("next called after EOF")
}
if lx.pos >= len(lx.input) {
lx.atEOF = true
return eof
}
if lx.input[lx.pos] == '\n' {
lx.line++
}
lx.prevWidths[2] = lx.prevWidths[1]
lx.prevWidths[1] = lx.prevWidths[0]
if lx.nprev < 3 {
lx.nprev++
}
r, w := utf8.DecodeRuneInString(lx.input[lx.pos:])
lx.prevWidths[0] = w
lx.pos += w
return r
}
// ignore skips over the pending input before this point.
func (lx *lexer) ignore() {
lx.start = lx.pos
}
// backup steps back one rune. Can be called at most three times between
// calls to next (matching the three rune widths the lexer remembers).
func (lx *lexer) backup() {
if lx.atEOF {
lx.atEOF = false
return
}
if lx.nprev < 1 {
panic("backed up too far")
}
w := lx.prevWidths[0]
lx.prevWidths[0] = lx.prevWidths[1]
lx.prevWidths[1] = lx.prevWidths[2]
lx.nprev--
lx.pos -= w
if lx.pos < len(lx.input) && lx.input[lx.pos] == '\n' {
lx.line--
}
}
// accept consumes the next rune if it's equal to `valid`.
func (lx *lexer) accept(valid rune) bool {
if lx.next() == valid {
return true
}
lx.backup()
return false
}
// peek returns but does not consume the next rune in the input.
func (lx *lexer) peek() rune {
r := lx.next()
lx.backup()
return r
}
// skip ignores all input that matches the given predicate.
func (lx *lexer) skip(pred func(rune) bool) {
for {
r := lx.next()
if pred(r) {
continue
}
lx.backup()
lx.ignore()
return
}
}
// errorf stops all lexing by emitting an error and returning `nil`.
// Note that any value that is a character is escaped if it's a special
// character (newlines, tabs, etc.).
func (lx *lexer) errorf(format string, values ...interface{}) stateFn {
lx.items <- item{
itemError,
fmt.Sprintf(format, values...),
lx.line,
}
return nil
}
// lexTop consumes elements at the top level of TOML data.
func lexTop(lx *lexer) stateFn {
r := lx.next()
if isWhitespace(r) || isNL(r) {
return lexSkip(lx, lexTop)
}
switch r {
case commentStart:
lx.push(lexTop)
return lexCommentStart
case tableStart:
return lexTableStart
case eof:
if lx.pos > lx.start {
return lx.errorf("unexpected EOF")
}
lx.emit(itemEOF)
return nil
}
// At this point, the only valid item can be a key, so we back up
// and let the key lexer do the rest.
lx.backup()
lx.push(lexTopEnd)
return lexKeyStart
}
// lexTopEnd is entered whenever a top-level item has been consumed. (A value
// or a table.) It must see only whitespace, and will turn back to lexTop
// upon a newline. If it sees EOF, it will quit the lexer successfully.
func lexTopEnd(lx *lexer) stateFn {
r := lx.next()
switch {
case r == commentStart:
// a comment will read to a newline for us.
lx.push(lexTop)
return lexCommentStart
case isWhitespace(r):
return lexTopEnd
case isNL(r):
lx.ignore()
return lexTop
case r == eof:
lx.emit(itemEOF)
return nil
}
return lx.errorf("expected a top-level item to end with a newline, "+
"comment, or EOF, but got %q instead", r)
}
// lexTableStart lexes the beginning of a table. Namely, it makes sure that
// it starts with a character other than '.' and ']'.
// It assumes that '[' has already been consumed.
// It also handles the case that this is an item in an array of tables.
// e.g., '[[name]]'.
func lexTableStart(lx *lexer) stateFn {
if lx.peek() == arrayTableStart {
lx.next()
lx.emit(itemArrayTableStart)
lx.push(lexArrayTableEnd)
} else {
lx.emit(itemTableStart)
lx.push(lexTableEnd)
}
return lexTableNameStart
}
func lexTableEnd(lx *lexer) stateFn {
lx.emit(itemTableEnd)
return lexTopEnd
}
func lexArrayTableEnd(lx *lexer) stateFn {
if r := lx.next(); r != arrayTableEnd {
return lx.errorf("expected end of table array name delimiter %q, "+
"but got %q instead", arrayTableEnd, r)
}
lx.emit(itemArrayTableEnd)
return lexTopEnd
}
func lexTableNameStart(lx *lexer) stateFn {
lx.skip(isWhitespace)
switch r := lx.peek(); {
case r == tableEnd || r == eof:
return lx.errorf("unexpected end of table name " +
"(table names cannot be empty)")
case r == tableSep:
return lx.errorf("unexpected table separator " +
"(table names cannot be empty)")
case r == stringStart || r == rawStringStart:
lx.ignore()
lx.push(lexTableNameEnd)
return lexValue // reuse string lexing
default:
return lexBareTableName
}
}
// lexBareTableName lexes the name of a bare table. The caller has verified
// (via peek) that the first character is valid, but has not consumed it yet.
func lexBareTableName(lx *lexer) stateFn {
r := lx.next()
if isBareKeyChar(r) {
return lexBareTableName
}
lx.backup()
lx.emit(itemText)
return lexTableNameEnd
}
// lexTableNameEnd reads the end of a piece of a table name, optionally
// consuming whitespace.
func lexTableNameEnd(lx *lexer) stateFn {
lx.skip(isWhitespace)
switch r := lx.next(); {
case isWhitespace(r):
return lexTableNameEnd
case r == tableSep:
lx.ignore()
return lexTableNameStart
case r == tableEnd:
return lx.pop()
default:
return lx.errorf("expected '.' or ']' to end table name, "+
"but got %q instead", r)
}
}
// lexKeyStart skips leading whitespace, emits itemKeyStart, and dispatches to
// the quoted-string or bare-key lexer for the key name itself.
func lexKeyStart(lx *lexer) stateFn {
r := lx.peek()
switch {
case r == keySep:
return lx.errorf("unexpected key separator %q", keySep)
case isWhitespace(r) || isNL(r):
lx.next()
return lexSkip(lx, lexKeyStart)
case r == stringStart || r == rawStringStart:
lx.ignore()
lx.emit(itemKeyStart)
lx.push(lexKeyEnd)
return lexValue // reuse string lexing
default:
lx.ignore()
lx.emit(itemKeyStart)
return lexBareKey
}
}
// lexBareKey consumes the text of a bare key. Assumes that the first character
// (which is not whitespace) has not yet been consumed.
func lexBareKey(lx *lexer) stateFn {
switch r := lx.next(); {
case isBareKeyChar(r):
return lexBareKey
case isWhitespace(r):
lx.backup()
lx.emit(itemText)
return lexKeyEnd
case r == keySep:
lx.backup()
lx.emit(itemText)
return lexKeyEnd
default:
return lx.errorf("bare keys cannot contain %q", r)
}
}
// lexKeyEnd consumes the end of a key and trims whitespace (up to the key
// separator).
func lexKeyEnd(lx *lexer) stateFn {
switch r := lx.next(); {
case r == keySep:
return lexSkip(lx, lexValue)
case isWhitespace(r):
return lexSkip(lx, lexKeyEnd)
default:
return lx.errorf("expected key separator %q, but got %q instead",
keySep, r)
}
}
// lexValue starts the consumption of a value anywhere a value is expected.
// lexValue will ignore whitespace.
// After a value is lexed, the last state on the stack is popped and returned.
func lexValue(lx *lexer) stateFn {
// We allow whitespace to precede a value, but NOT newlines.
// In array syntax, the array states are responsible for ignoring newlines.
r := lx.next()
switch {
case isWhitespace(r):
return lexSkip(lx, lexValue)
case isDigit(r):
lx.backup() // avoid an extra state and use the same as above
return lexNumberOrDateStart
}
switch r {
case arrayStart:
lx.ignore()
lx.emit(itemArray)
return lexArrayValue
case inlineTableStart:
lx.ignore()
lx.emit(itemInlineTableStart)
return lexInlineTableValue
case stringStart:
if lx.accept(stringStart) {
if lx.accept(stringStart) {
lx.ignore() // Ignore """
return lexMultilineString
}
lx.backup()
}
lx.ignore() // ignore the '"'
return lexString
case rawStringStart:
if lx.accept(rawStringStart) {
if lx.accept(rawStringStart) {
lx.ignore() // Ignore '''
return lexMultilineRawString
}
lx.backup()
}
lx.ignore() // ignore the "'"
return lexRawString
case '+', '-':
return lexNumberStart
case '.': // special error case, be kind to users
return lx.errorf("floats must start with a digit, not '.'")
}
if unicode.IsLetter(r) {
// Be permissive here; lexBool will give a nice error if the
// user wrote something like
// x = foo
// (i.e. not 'true' or 'false' but is something else word-like.)
lx.backup()
return lexBool
}
return lx.errorf("expected value but found %q instead", r)
}
// lexArrayValue consumes one value in an array. It assumes that '[' or ','
// have already been consumed. All whitespace and newlines are ignored.
func lexArrayValue(lx *lexer) stateFn {
r := lx.next()
switch {
case isWhitespace(r) || isNL(r):
return lexSkip(lx, lexArrayValue)
case r == commentStart:
lx.push(lexArrayValue)
return lexCommentStart
case r == comma:
return lx.errorf("unexpected comma")
case r == arrayEnd:
// NOTE(caleb): The spec isn't clear about whether you can have
// a trailing comma or not, so we'll allow it.
return lexArrayEnd
}
lx.backup()
lx.push(lexArrayValueEnd)
return lexValue
}
// lexArrayValueEnd consumes everything between the end of an array value and
// the next value (or the end of the array): it ignores whitespace and newlines
// and expects either a ',' or a ']'.
func lexArrayValueEnd(lx *lexer) stateFn {
r := lx.next()
switch {
case isWhitespace(r) || isNL(r):
return lexSkip(lx, lexArrayValueEnd)
case r == commentStart:
lx.push(lexArrayValueEnd)
return lexCommentStart
case r == comma:
lx.ignore()
return lexArrayValue // move on to the next value
case r == arrayEnd:
return lexArrayEnd
}
return lx.errorf(
"expected a comma or array terminator %q, but got %q instead",
arrayEnd, r,
)
}
// lexArrayEnd finishes the lexing of an array.
// It assumes that a ']' has just been consumed.
func lexArrayEnd(lx *lexer) stateFn {
lx.ignore()
lx.emit(itemArrayEnd)
return lx.pop()
}
// lexInlineTableValue consumes one key/value pair in an inline table.
// It assumes that '{' or ',' have already been consumed. Whitespace is ignored.
func lexInlineTableValue(lx *lexer) stateFn {
r := lx.next()
switch {
case isWhitespace(r):
return lexSkip(lx, lexInlineTableValue)
case isNL(r):
return lx.errorf("newlines not allowed within inline tables")
case r == commentStart:
lx.push(lexInlineTableValue)
return lexCommentStart
case r == comma:
return lx.errorf("unexpected comma")
case r == inlineTableEnd:
return lexInlineTableEnd
}
lx.backup()
lx.push(lexInlineTableValueEnd)
return lexKeyStart
}
// lexInlineTableValueEnd consumes everything between the end of an inline table
// key/value pair and the next pair (or the end of the table):
// it ignores whitespace and expects either a ',' or a '}'.
func lexInlineTableValueEnd(lx *lexer) stateFn {
r := lx.next()
switch {
case isWhitespace(r):
return lexSkip(lx, lexInlineTableValueEnd)
case isNL(r):
return lx.errorf("newlines not allowed within inline tables")
case r == commentStart:
lx.push(lexInlineTableValueEnd)
return lexCommentStart
case r == comma:
lx.ignore()
return lexInlineTableValue
case r == inlineTableEnd:
return lexInlineTableEnd
}
return lx.errorf("expected a comma or an inline table terminator %q, "+
"but got %q instead", inlineTableEnd, r)
}
// lexInlineTableEnd finishes the lexing of an inline table.
// It assumes that a '}' has just been consumed.
func lexInlineTableEnd(lx *lexer) stateFn {
lx.ignore()
lx.emit(itemInlineTableEnd)
return lx.pop()
}
// lexString consumes the inner contents of a string. It assumes that the
// beginning '"' has already been consumed and ignored.
func lexString(lx *lexer) stateFn {
r := lx.next()
switch {
case r == eof:
return lx.errorf("unexpected EOF")
case isNL(r):
return lx.errorf("strings cannot contain newlines")
case r == '\\':
lx.push(lexString)
return lexStringEscape
case r == stringEnd:
lx.backup()
lx.emit(itemString)
lx.next()
lx.ignore()
return lx.pop()
}
return lexString
}
// lexMultilineString consumes the inner contents of a string. It assumes that
// the beginning '"""' has already been consumed and ignored.
func lexMultilineString(lx *lexer) stateFn {
switch lx.next() {
case eof:
return lx.errorf("unexpected EOF")
case '\\':
return lexMultilineStringEscape
case stringEnd:
if lx.accept(stringEnd) {
if lx.accept(stringEnd) {
lx.backup()
lx.backup()
lx.backup()
lx.emit(itemMultilineString)
lx.next()
lx.next()
lx.next()
lx.ignore()
return lx.pop()
}
lx.backup()
}
}
return lexMultilineString
}
// lexRawString consumes a raw string. Nothing can be escaped in such a string.
// It assumes that the beginning "'" has already been consumed and ignored.
func lexRawString(lx *lexer) stateFn {
r := lx.next()
switch {
case r == eof:
return lx.errorf("unexpected EOF")
case isNL(r):
return lx.errorf("strings cannot contain newlines")
case r == rawStringEnd:
lx.backup()
lx.emit(itemRawString)
lx.next()
lx.ignore()
return lx.pop()
}
return lexRawString
}
// lexMultilineRawString consumes a raw string. Nothing can be escaped in such
// a string. It assumes that the beginning "'''" has already been consumed and
// ignored.
func lexMultilineRawString(lx *lexer) stateFn {
switch lx.next() {
case eof:
return lx.errorf("unexpected EOF")
case rawStringEnd:
if lx.accept(rawStringEnd) {
if lx.accept(rawStringEnd) {
lx.backup()
lx.backup()
lx.backup()
lx.emit(itemRawMultilineString)
lx.next()
lx.next()
lx.next()
lx.ignore()
return lx.pop()
}
lx.backup()
}
}
return lexMultilineRawString
}
// lexMultilineStringEscape consumes an escaped character. It assumes that the
// preceding '\\' has already been consumed.
func lexMultilineStringEscape(lx *lexer) stateFn {
// Handle the special case first:
if isNL(lx.next()) {
return lexMultilineString
}
lx.backup()
lx.push(lexMultilineString)
return lexStringEscape(lx)
}
func lexStringEscape(lx *lexer) stateFn {
r := lx.next()
switch r {
case 'b':
fallthrough
case 't':
fallthrough
case 'n':
fallthrough
case 'f':
fallthrough
case 'r':
fallthrough
case '"':
fallthrough
case '\\':
return lx.pop()
case 'u':
return lexShortUnicodeEscape
case 'U':
return lexLongUnicodeEscape
}
return lx.errorf("invalid escape character %q; only the following "+
"escape characters are allowed: "+
`\b, \t, \n, \f, \r, \", \\, \uXXXX, and \UXXXXXXXX`, r)
}
func lexShortUnicodeEscape(lx *lexer) stateFn {
var r rune
for i := 0; i < 4; i++ {
r = lx.next()
if !isHexadecimal(r) {
return lx.errorf(`expected four hexadecimal digits after '\u', `+
"but got %q instead", lx.current())
}
}
return lx.pop()
}
func lexLongUnicodeEscape(lx *lexer) stateFn {
var r rune
for i := 0; i < 8; i++ {
r = lx.next()
if !isHexadecimal(r) {
return lx.errorf(`expected eight hexadecimal digits after '\U', `+
"but got %q instead", lx.current())
}
}
return lx.pop()
}
// lexNumberOrDateStart consumes either an integer, a float, or datetime.
func lexNumberOrDateStart(lx *lexer) stateFn {
r := lx.next()
if isDigit(r) {
return lexNumberOrDate
}
switch r {
case '_':
return lexNumber
case 'e', 'E':
return lexFloat
case '.':
return lx.errorf("floats must start with a digit, not '.'")
}
return lx.errorf("expected a digit but got %q", r)
}
// lexNumberOrDate consumes either an integer, float or datetime.
func lexNumberOrDate(lx *lexer) stateFn {
r := lx.next()
if isDigit(r) {
return lexNumberOrDate
}
switch r {
case '-':
return lexDatetime
case '_':
return lexNumber
case '.', 'e', 'E':
return lexFloat
}
lx.backup()
lx.emit(itemInteger)
return lx.pop()
}
// lexDatetime consumes a Datetime, to a first approximation.
// The parser validates that it matches one of the accepted formats.
func lexDatetime(lx *lexer) stateFn {
r := lx.next()
if isDigit(r) {
return lexDatetime
}
switch r {
case '-', 'T', ':', '.', 'Z':
return lexDatetime
}
lx.backup()
lx.emit(itemDatetime)
return lx.pop()
}
// lexNumberStart consumes either an integer or a float. It assumes that a sign
// has already been read, but that *no* digits have been consumed.
// lexNumberStart will move to the appropriate integer or float states.
func lexNumberStart(lx *lexer) stateFn {
// We MUST see a digit. Even floats have to start with a digit.
r := lx.next()
if !isDigit(r) {
if r == '.' {
return lx.errorf("floats must start with a digit, not '.'")
}
return lx.errorf("expected a digit but got %q", r)
}
return lexNumber
}
// lexNumber consumes an integer or a float after seeing the first digit.
func lexNumber(lx *lexer) stateFn {
r := lx.next()
if isDigit(r) {
return lexNumber
}
switch r {
case '_':
return lexNumber
case '.', 'e', 'E':
return lexFloat
}
lx.backup()
lx.emit(itemInteger)
return lx.pop()
}
// lexFloat consumes the elements of a float. It allows any sequence of
// float-like characters, so floats emitted by the lexer are only a first
// approximation and must be validated by the parser.
func lexFloat(lx *lexer) stateFn {
r := lx.next()
if isDigit(r) {
return lexFloat
}
switch r {
case '_', '.', '-', '+', 'e', 'E':
return lexFloat
}
lx.backup()
lx.emit(itemFloat)
return lx.pop()
}
// lexBool consumes a bool string: 'true' or 'false'.
func lexBool(lx *lexer) stateFn {
var rs []rune
for {
r := lx.next()
if !unicode.IsLetter(r) {
lx.backup()
break
}
rs = append(rs, r)
}
s := string(rs)
switch s {
case "true", "false":
lx.emit(itemBool)
return lx.pop()
}
return lx.errorf("expected value but found %q instead", s)
}
// lexCommentStart begins the lexing of a comment. It will emit
// itemCommentStart and consume no characters, passing control to lexComment.
func lexCommentStart(lx *lexer) stateFn {
lx.ignore()
lx.emit(itemCommentStart)
return lexComment
}
// lexComment lexes an entire comment. It assumes that '#' has been consumed.
// It will consume *up to* the first newline character, and pass control
// back to the last state on the stack.
func lexComment(lx *lexer) stateFn {
r := lx.peek()
if isNL(r) || r == eof {
lx.emit(itemText)
return lx.pop()
}
lx.next()
return lexComment
}
// lexSkip ignores all slurped input and moves on to the next state.
func lexSkip(lx *lexer, nextState stateFn) stateFn {
return func(lx *lexer) stateFn {
lx.ignore()
return nextState
}
}
// isWhitespace returns true if `r` is a whitespace character according
// to the spec.
func isWhitespace(r rune) bool {
return r == '\t' || r == ' '
}
func isNL(r rune) bool {
return r == '\n' || r == '\r'
}
func isDigit(r rune) bool {
return r >= '0' && r <= '9'
}
func isHexadecimal(r rune) bool {
return (r >= '0' && r <= '9') ||
(r >= 'a' && r <= 'f') ||
(r >= 'A' && r <= 'F')
}
func isBareKeyChar(r rune) bool {
return (r >= 'A' && r <= 'Z') ||
(r >= 'a' && r <= 'z') ||
(r >= '0' && r <= '9') ||
r == '_' ||
r == '-'
}
func (itype itemType) String() string {
switch itype {
case itemError:
return "Error"
case itemNIL:
return "NIL"
case itemEOF:
return "EOF"
case itemText:
return "Text"
case itemString, itemRawString, itemMultilineString, itemRawMultilineString:
return "String"
case itemBool:
return "Bool"
case itemInteger:
return "Integer"
case itemFloat:
return "Float"
case itemDatetime:
return "DateTime"
case itemTableStart:
return "TableStart"
case itemTableEnd:
return "TableEnd"
case itemKeyStart:
return "KeyStart"
case itemArray:
return "Array"
case itemArrayEnd:
return "ArrayEnd"
case itemCommentStart:
return "CommentStart"
}
panic(fmt.Sprintf("BUG: Unknown type '%d'.", int(itype)))
}
func (item item) String() string {
return fmt.Sprintf("(%s, %s)", item.typ.String(), item.val)
}
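
The lexer's types are unexported, so it can only be driven from inside package toml; as a sketch of the intended usage (this mirrors how parse() consumes it, and assumes being compiled into the package, where fmt is already imported), a hypothetical debugging helper might look like:

// dumpItems pumps the state machine one item at a time, the same way the
// parser does, stopping at EOF or on the first error item.
func dumpItems(input string) {
    lx := lex(input)
    for {
        it := lx.nextItem()
        fmt.Printf("line %d: %s\n", it.line, it) // item implements Stringer
        if it.typ == itemEOF || it.typ == itemError {
            return
        }
    }
}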

vendor/github.com/BurntSushi/toml/parse.go generated vendored Normal file

@@ -0,0 +1,592 @@
package toml
import (
"fmt"
"strconv"
"strings"
"time"
"unicode"
"unicode/utf8"
)
type parser struct {
mapping map[string]interface{}
types map[string]tomlType
lx *lexer
// A list of keys in the order that they appear in the TOML data.
ordered []Key
// the full key for the current hash in scope
context Key
// the base key name for everything except hashes
currentKey string
// rough approximation of line number
approxLine int
// A map of 'key.group.names' to whether they were created implicitly.
implicits map[string]bool
}
type parseError string
func (pe parseError) Error() string {
return string(pe)
}
func parse(data string) (p *parser, err error) {
defer func() {
if r := recover(); r != nil {
var ok bool
if err, ok = r.(parseError); ok {
return
}
panic(r)
}
}()
p = &parser{
mapping: make(map[string]interface{}),
types: make(map[string]tomlType),
lx: lex(data),
ordered: make([]Key, 0),
implicits: make(map[string]bool),
}
for {
item := p.next()
if item.typ == itemEOF {
break
}
p.topLevel(item)
}
return p, nil
}
func (p *parser) panicf(format string, v ...interface{}) {
msg := fmt.Sprintf("Near line %d (last key parsed '%s'): %s",
p.approxLine, p.current(), fmt.Sprintf(format, v...))
panic(parseError(msg))
}
func (p *parser) next() item {
it := p.lx.nextItem()
if it.typ == itemError {
p.panicf("%s", it.val)
}
return it
}
func (p *parser) bug(format string, v ...interface{}) {
panic(fmt.Sprintf("BUG: "+format+"\n\n", v...))
}
func (p *parser) expect(typ itemType) item {
it := p.next()
p.assertEqual(typ, it.typ)
return it
}
func (p *parser) assertEqual(expected, got itemType) {
if expected != got {
p.bug("Expected '%s' but got '%s'.", expected, got)
}
}
func (p *parser) topLevel(item item) {
switch item.typ {
case itemCommentStart:
p.approxLine = item.line
p.expect(itemText)
case itemTableStart:
kg := p.next()
p.approxLine = kg.line
var key Key
for ; kg.typ != itemTableEnd && kg.typ != itemEOF; kg = p.next() {
key = append(key, p.keyString(kg))
}
p.assertEqual(itemTableEnd, kg.typ)
p.establishContext(key, false)
p.setType("", tomlHash)
p.ordered = append(p.ordered, key)
case itemArrayTableStart:
kg := p.next()
p.approxLine = kg.line
var key Key
for ; kg.typ != itemArrayTableEnd && kg.typ != itemEOF; kg = p.next() {
key = append(key, p.keyString(kg))
}
p.assertEqual(itemArrayTableEnd, kg.typ)
p.establishContext(key, true)
p.setType("", tomlArrayHash)
p.ordered = append(p.ordered, key)
case itemKeyStart:
kname := p.next()
p.approxLine = kname.line
p.currentKey = p.keyString(kname)
val, typ := p.value(p.next())
p.setValue(p.currentKey, val)
p.setType(p.currentKey, typ)
p.ordered = append(p.ordered, p.context.add(p.currentKey))
p.currentKey = ""
default:
p.bug("Unexpected type at top level: %s", item.typ)
}
}
// Gets a string for a key (or part of a key in a table name).
func (p *parser) keyString(it item) string {
switch it.typ {
case itemText:
return it.val
case itemString, itemMultilineString,
itemRawString, itemRawMultilineString:
s, _ := p.value(it)
return s.(string)
default:
p.bug("Unexpected key type: %s", it.typ)
panic("unreachable")
}
}
// value translates an expected value from the lexer into a Go value wrapped
// as an empty interface.
func (p *parser) value(it item) (interface{}, tomlType) {
switch it.typ {
case itemString:
return p.replaceEscapes(it.val), p.typeOfPrimitive(it)
case itemMultilineString:
trimmed := stripFirstNewline(stripEscapedWhitespace(it.val))
return p.replaceEscapes(trimmed), p.typeOfPrimitive(it)
case itemRawString:
return it.val, p.typeOfPrimitive(it)
case itemRawMultilineString:
return stripFirstNewline(it.val), p.typeOfPrimitive(it)
case itemBool:
switch it.val {
case "true":
return true, p.typeOfPrimitive(it)
case "false":
return false, p.typeOfPrimitive(it)
}
p.bug("Expected boolean value, but got '%s'.", it.val)
case itemInteger:
if !numUnderscoresOK(it.val) {
p.panicf("Invalid integer %q: underscores must be surrounded by digits",
it.val)
}
val := strings.Replace(it.val, "_", "", -1)
num, err := strconv.ParseInt(val, 10, 64)
if err != nil {
// Distinguish integer values. Normally, it'd be a bug if the lexer
// provides an invalid integer, but it's possible that the number is
// out of range of valid values (which the lexer cannot determine).
// So mark the former as a bug but the latter as a legitimate user
// error.
if e, ok := err.(*strconv.NumError); ok &&
e.Err == strconv.ErrRange {
p.panicf("Integer '%s' is out of the range of 64-bit "+
"signed integers.", it.val)
} else {
p.bug("Expected integer value, but got '%s'.", it.val)
}
}
return num, p.typeOfPrimitive(it)
case itemFloat:
parts := strings.FieldsFunc(it.val, func(r rune) bool {
switch r {
case '.', 'e', 'E':
return true
}
return false
})
for _, part := range parts {
if !numUnderscoresOK(part) {
p.panicf("Invalid float %q: underscores must be "+
"surrounded by digits", it.val)
}
}
if !numPeriodsOK(it.val) {
// As a special case, numbers like '123.' or '1.e2',
// which are valid as far as Go/strconv are concerned,
// must be rejected because TOML says that a fractional
// part consists of '.' followed by 1+ digits.
p.panicf("Invalid float %q: '.' must be followed "+
"by one or more digits", it.val)
}
val := strings.Replace(it.val, "_", "", -1)
num, err := strconv.ParseFloat(val, 64)
if err != nil {
if e, ok := err.(*strconv.NumError); ok &&
e.Err == strconv.ErrRange {
p.panicf("Float '%s' is out of the range of 64-bit "+
"IEEE-754 floating-point numbers.", it.val)
} else {
p.panicf("Invalid float value: %q", it.val)
}
}
return num, p.typeOfPrimitive(it)
case itemDatetime:
var t time.Time
var ok bool
var err error
for _, format := range []string{
"2006-01-02T15:04:05Z07:00",
"2006-01-02T15:04:05",
"2006-01-02",
} {
t, err = time.ParseInLocation(format, it.val, time.Local)
if err == nil {
ok = true
break
}
}
if !ok {
p.panicf("Invalid TOML Datetime: %q.", it.val)
}
return t, p.typeOfPrimitive(it)
case itemArray:
array := make([]interface{}, 0)
types := make([]tomlType, 0)
for it = p.next(); it.typ != itemArrayEnd; it = p.next() {
if it.typ == itemCommentStart {
p.expect(itemText)
continue
}
val, typ := p.value(it)
array = append(array, val)
types = append(types, typ)
}
return array, p.typeOfArray(types)
case itemInlineTableStart:
var (
hash = make(map[string]interface{})
outerContext = p.context
outerKey = p.currentKey
)
p.context = append(p.context, p.currentKey)
p.currentKey = ""
for it := p.next(); it.typ != itemInlineTableEnd; it = p.next() {
if it.typ == itemCommentStart {
// Skip comments before the key check below; otherwise a comment
// item would be misreported as a bug.
p.expect(itemText)
continue
}
if it.typ != itemKeyStart {
p.bug("Expected key start but instead found %q, around line %d",
it.val, p.approxLine)
}
// retrieve key
k := p.next()
p.approxLine = k.line
kname := p.keyString(k)
// retrieve value
p.currentKey = kname
val, typ := p.value(p.next())
// make sure we keep metadata up to date
p.setType(kname, typ)
p.ordered = append(p.ordered, p.context.add(p.currentKey))
hash[kname] = val
}
p.context = outerContext
p.currentKey = outerKey
return hash, tomlHash
}
p.bug("Unexpected value type: %s", it.typ)
panic("unreachable")
}
// numUnderscoresOK checks whether each underscore in s is surrounded by
// characters that are not underscores.
func numUnderscoresOK(s string) bool {
accept := false
for _, r := range s {
if r == '_' {
if !accept {
return false
}
accept = false
continue
}
accept = true
}
return accept
}
// numPeriodsOK checks whether every period in s is followed by a digit.
func numPeriodsOK(s string) bool {
period := false
for _, r := range s {
if period && !isDigit(r) {
return false
}
period = r == '.'
}
return !period
}
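// Illustration (not part of the upstream file) of the two validators above:
//
//     numUnderscoresOK("1_000")  == true  // underscore between digits
//     numUnderscoresOK("_1000")  == false // leading underscore
//     numUnderscoresOK("1__000") == false // doubled underscore
//     numPeriodsOK("3.14")       == true
//     numPeriodsOK("3.")         == false // '.' must be followed by a digit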
// establishContext sets the current context of the parser,
// where the context is either a hash or an array of hashes. Which one is
// set depends on the value of the `array` parameter.
//
// Establishing the context also makes sure that the key isn't a duplicate, and
// will create implicit hashes automatically.
func (p *parser) establishContext(key Key, array bool) {
var ok bool
// Always start at the top level and drill down for our context.
hashContext := p.mapping
keyContext := make(Key, 0)
// We only need implicit hashes for key[0:-1]
for _, k := range key[0 : len(key)-1] {
_, ok = hashContext[k]
keyContext = append(keyContext, k)
// No key? Make an implicit hash and move on.
if !ok {
p.addImplicit(keyContext)
hashContext[k] = make(map[string]interface{})
}
// If the hash context is actually an array of tables, then set
// the hash context to the last element in that array.
//
// Otherwise, it better be a table, since this MUST be a key group (by
// virtue of it not being the last element in a key).
switch t := hashContext[k].(type) {
case []map[string]interface{}:
hashContext = t[len(t)-1]
case map[string]interface{}:
hashContext = t
default:
p.panicf("Key '%s' was already created as a hash.", keyContext)
}
}
p.context = keyContext
if array {
// If this is the first element for this array, then allocate a new
// list of tables for it.
k := key[len(key)-1]
if _, ok := hashContext[k]; !ok {
hashContext[k] = make([]map[string]interface{}, 0, 5)
}
// Add a new table. But make sure the key hasn't already been used
// for something else.
if hash, ok := hashContext[k].([]map[string]interface{}); ok {
hashContext[k] = append(hash, make(map[string]interface{}))
} else {
p.panicf("Key '%s' was already created and cannot be used as "+
"an array.", keyContext)
}
} else {
p.setValue(key[len(key)-1], make(map[string]interface{}))
}
p.context = append(p.context, key[len(key)-1])
}
// setValue sets the given key to the given value in the current context.
// It will make sure that the key hasn't already been defined, and will
// account for implicit key groups.
func (p *parser) setValue(key string, value interface{}) {
var tmpHash interface{}
var ok bool
hash := p.mapping
keyContext := make(Key, 0)
for _, k := range p.context {
keyContext = append(keyContext, k)
if tmpHash, ok = hash[k]; !ok {
p.bug("Context for key '%s' has not been established.", keyContext)
}
switch t := tmpHash.(type) {
case []map[string]interface{}:
// The context is a table of hashes. Pick the most recent table
// defined as the current hash.
hash = t[len(t)-1]
case map[string]interface{}:
hash = t
default:
p.bug("Expected hash to have type 'map[string]interface{}', but "+
"it has '%T' instead.", tmpHash)
}
}
keyContext = append(keyContext, key)
if _, ok := hash[key]; ok {
// Typically, if the given key has already been set, then we have
// to raise an error since duplicate keys are disallowed. However,
// it's possible that a key was previously defined implicitly. In this
// case, it is allowed to be redefined concretely. (See the
// `tests/valid/implicit-and-explicit-after.toml` test in `toml-test`.)
//
// But we have to make sure to stop marking it as an implicit. (So that
// another redefinition provokes an error.)
//
// Note that since it has already been defined (as a hash), we don't
// want to overwrite it. So our business is done.
if p.isImplicit(keyContext) {
p.removeImplicit(keyContext)
return
}
// Otherwise, we have a concrete key trying to override a previous
// key, which is *always* wrong.
p.panicf("Key '%s' has already been defined.", keyContext)
}
hash[key] = value
}
// setType sets the type of a particular value at a given key.
// It should be called immediately AFTER setValue.
//
// Note that if `key` is empty, then the type given will be applied to the
// current context (which is either a table or an array of tables).
func (p *parser) setType(key string, typ tomlType) {
keyContext := make(Key, 0, len(p.context)+1)
for _, k := range p.context {
keyContext = append(keyContext, k)
}
if len(key) > 0 { // allow type setting for hashes
keyContext = append(keyContext, key)
}
p.types[keyContext.String()] = typ
}
// addImplicit sets the given Key as having been created implicitly.
func (p *parser) addImplicit(key Key) {
p.implicits[key.String()] = true
}
// removeImplicit stops tagging the given key as having been implicitly
// created.
func (p *parser) removeImplicit(key Key) {
p.implicits[key.String()] = false
}
// isImplicit returns true if the key group pointed to by the key was created
// implicitly.
func (p *parser) isImplicit(key Key) bool {
return p.implicits[key.String()]
}
// current returns the full key name of the current context.
func (p *parser) current() string {
if len(p.currentKey) == 0 {
return p.context.String()
}
if len(p.context) == 0 {
return p.currentKey
}
return fmt.Sprintf("%s.%s", p.context, p.currentKey)
}
func stripFirstNewline(s string) string {
if len(s) == 0 || s[0] != '\n' {
return s
}
return s[1:]
}
func stripEscapedWhitespace(s string) string {
esc := strings.Split(s, "\\\n")
if len(esc) > 1 {
for i := 1; i < len(esc); i++ {
esc[i] = strings.TrimLeftFunc(esc[i], unicode.IsSpace)
}
}
return strings.Join(esc, "")
}
func (p *parser) replaceEscapes(str string) string {
var replaced []rune
s := []byte(str)
r := 0
for r < len(s) {
if s[r] != '\\' {
c, size := utf8.DecodeRune(s[r:])
r += size
replaced = append(replaced, c)
continue
}
r += 1
if r >= len(s) {
p.bug("Escape sequence at end of string.")
return ""
}
switch s[r] {
default:
p.bug("Expected valid escape code after \\, but got %q.", s[r])
return ""
case 'b':
replaced = append(replaced, rune(0x0008))
r += 1
case 't':
replaced = append(replaced, rune(0x0009))
r += 1
case 'n':
replaced = append(replaced, rune(0x000A))
r += 1
case 'f':
replaced = append(replaced, rune(0x000C))
r += 1
case 'r':
replaced = append(replaced, rune(0x000D))
r += 1
case '"':
replaced = append(replaced, rune(0x0022))
r += 1
case '\\':
replaced = append(replaced, rune(0x005C))
r += 1
case 'u':
// At this point, we know we have a Unicode escape of the form
// `uXXXX` at [r, r+5). (Because the lexer guarantees this
// for us.)
escaped := p.asciiEscapeToUnicode(s[r+1 : r+5])
replaced = append(replaced, escaped)
r += 5
case 'U':
// At this point, we know we have a Unicode escape of the form
// `UXXXXXXXX` at [r, r+9). (Because the lexer guarantees this
// for us.)
escaped := p.asciiEscapeToUnicode(s[r+1 : r+9])
replaced = append(replaced, escaped)
r += 9
}
}
return string(replaced)
}
func (p *parser) asciiEscapeToUnicode(bs []byte) rune {
s := string(bs)
hex, err := strconv.ParseUint(strings.ToLower(s), 16, 32)
if err != nil {
p.bug("Could not parse '%s' as a hexadecimal number, but the "+
"lexer claims it's OK: %s", s, err)
}
if !utf8.ValidRune(rune(hex)) {
p.panicf("Escaped character '\\u%s' is not valid UTF-8.", s)
}
return rune(hex)
}
func isStringType(ty itemType) bool {
return ty == itemString || ty == itemMultilineString ||
ty == itemRawString || ty == itemRawMultilineString
}
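
A hedged sketch of the datetime handling above: the parser tries the three layouts in order, so all of the following decode into time.Time fields (the key and field names are made up):

package main

import (
    "fmt"
    "log"
    "time"

    "github.com/BurntSushi/toml"
)

func main() {
    blob := "full = 1979-05-27T07:32:00Z\nlocal = 1979-05-27T07:32:00\nday = 1979-05-27\n"
    var v struct {
        Full  time.Time `toml:"full"`
        Local time.Time `toml:"local"`
        Day   time.Time `toml:"day"`
    }
    if _, err := toml.Decode(blob, &v); err != nil {
        log.Fatal(err)
    }
    // "local" and "day" are interpreted in time.Local, per the code above.
    fmt.Println(v.Full, v.Local, v.Day)
}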

vendor/github.com/BurntSushi/toml/type_check.go generated vendored Normal file

@@ -0,0 +1,91 @@
package toml
// tomlType represents any Go type that corresponds to a TOML type.
// While the first draft of the TOML spec has a simplistic type system that
// probably doesn't need this level of sophistication, we seem to be moving
// toward adding real composite types.
type tomlType interface {
typeString() string
}
// typeEqual accepts any two types and returns true if they are equal.
func typeEqual(t1, t2 tomlType) bool {
if t1 == nil || t2 == nil {
return false
}
return t1.typeString() == t2.typeString()
}
func typeIsHash(t tomlType) bool {
return typeEqual(t, tomlHash) || typeEqual(t, tomlArrayHash)
}
type tomlBaseType string
func (btype tomlBaseType) typeString() string {
return string(btype)
}
func (btype tomlBaseType) String() string {
return btype.typeString()
}
var (
tomlInteger tomlBaseType = "Integer"
tomlFloat tomlBaseType = "Float"
tomlDatetime tomlBaseType = "Datetime"
tomlString tomlBaseType = "String"
tomlBool tomlBaseType = "Bool"
tomlArray tomlBaseType = "Array"
tomlHash tomlBaseType = "Hash"
tomlArrayHash tomlBaseType = "ArrayHash"
)
// typeOfPrimitive returns a tomlType of any primitive value in TOML.
// Primitive values are: Integer, Float, Datetime, String and Bool.
//
// Passing a lexer item other than the following will cause a BUG message
// to occur: itemString, itemBool, itemInteger, itemFloat, itemDatetime.
func (p *parser) typeOfPrimitive(lexItem item) tomlType {
switch lexItem.typ {
case itemInteger:
return tomlInteger
case itemFloat:
return tomlFloat
case itemDatetime:
return tomlDatetime
case itemString:
return tomlString
case itemMultilineString:
return tomlString
case itemRawString:
return tomlString
case itemRawMultilineString:
return tomlString
case itemBool:
return tomlBool
}
p.bug("Cannot infer primitive type of lex item '%s'.", lexItem)
panic("unreachable")
}
// typeOfArray returns a tomlType for an array given a list of types of its
// values.
//
// In the current spec, if an array is homogeneous, then its type is always
// "Array". If the array is not homogeneous, an error is generated.
func (p *parser) typeOfArray(types []tomlType) tomlType {
// Empty arrays are cool.
if len(types) == 0 {
return tomlArray
}
theType := types[0]
for _, t := range types[1:] {
if !typeEqual(theType, t) {
p.panicf("Array contains values of type '%s' and '%s', but "+
"arrays must be homogeneous.", theType, t)
}
}
return tomlArray
}
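
The homogeneity rule enforced by typeOfArray is user-visible as an ordinary parse error; a quick hedged sketch (the struct shape is hypothetical):

package main

import (
    "fmt"
    "log"

    "github.com/BurntSushi/toml"
)

func main() {
    var v struct {
        Xs []interface{} `toml:"xs"`
    }
    // Homogeneous array: decodes fine.
    if _, err := toml.Decode("xs = [1, 2, 3]", &v); err != nil {
        log.Fatal(err)
    }
    // Mixed element types: typeOfArray panics internally, which surfaces
    // as a regular error from Decode.
    if _, err := toml.Decode(`xs = [1, "two"]`, &v); err != nil {
        fmt.Println("error:", err)
    }
}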

vendor/github.com/BurntSushi/toml/type_fields.go generated vendored Normal file

@@ -0,0 +1,242 @@
package toml
// Struct field handling is adapted from code in encoding/json:
//
// Copyright 2010 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the Go distribution.
import (
"reflect"
"sort"
"sync"
)
// A field represents a single field found in a struct.
type field struct {
name string // the name of the field (`toml` tag included)
tag bool // whether field has a `toml` tag
index []int // represents the depth of an anonymous field
typ reflect.Type // the type of the field
}
// byName sorts field by name, breaking ties with depth,
// then breaking ties with "name came from toml tag", then
// breaking ties with index sequence.
type byName []field
func (x byName) Len() int { return len(x) }
func (x byName) Swap(i, j int) { x[i], x[j] = x[j], x[i] }
func (x byName) Less(i, j int) bool {
if x[i].name != x[j].name {
return x[i].name < x[j].name
}
if len(x[i].index) != len(x[j].index) {
return len(x[i].index) < len(x[j].index)
}
if x[i].tag != x[j].tag {
return x[i].tag
}
return byIndex(x).Less(i, j)
}
// byIndex sorts field by index sequence.
type byIndex []field
func (x byIndex) Len() int { return len(x) }
func (x byIndex) Swap(i, j int) { x[i], x[j] = x[j], x[i] }
func (x byIndex) Less(i, j int) bool {
for k, xik := range x[i].index {
if k >= len(x[j].index) {
return false
}
if xik != x[j].index[k] {
return xik < x[j].index[k]
}
}
return len(x[i].index) < len(x[j].index)
}
// typeFields returns a list of fields that TOML should recognize for the given
// type. The algorithm is breadth-first search over the set of structs to
// include - the top struct and then any reachable anonymous structs.
func typeFields(t reflect.Type) []field {
// Anonymous fields to explore at the current level and the next.
current := []field{}
next := []field{{typ: t}}
// Count of queued names for current level and the next.
count := map[reflect.Type]int{}
nextCount := map[reflect.Type]int{}
// Types already visited at an earlier level.
visited := map[reflect.Type]bool{}
// Fields found.
var fields []field
for len(next) > 0 {
current, next = next, current[:0]
count, nextCount = nextCount, map[reflect.Type]int{}
for _, f := range current {
if visited[f.typ] {
continue
}
visited[f.typ] = true
// Scan f.typ for fields to include.
for i := 0; i < f.typ.NumField(); i++ {
sf := f.typ.Field(i)
if sf.PkgPath != "" && !sf.Anonymous { // unexported
continue
}
opts := getOptions(sf.Tag)
if opts.skip {
continue
}
index := make([]int, len(f.index)+1)
copy(index, f.index)
index[len(f.index)] = i
ft := sf.Type
if ft.Name() == "" && ft.Kind() == reflect.Ptr {
// Follow pointer.
ft = ft.Elem()
}
// Record found field and index sequence.
if opts.name != "" || !sf.Anonymous || ft.Kind() != reflect.Struct {
tagged := opts.name != ""
name := opts.name
if name == "" {
name = sf.Name
}
fields = append(fields, field{name, tagged, index, ft})
if count[f.typ] > 1 {
// If there were multiple instances, add a second,
// so that the annihilation code will see a duplicate.
// It only cares about the distinction between 1 or 2,
// so don't bother generating any more copies.
fields = append(fields, fields[len(fields)-1])
}
continue
}
// Record new anonymous struct to explore in next round.
nextCount[ft]++
if nextCount[ft] == 1 {
f := field{name: ft.Name(), index: index, typ: ft}
next = append(next, f)
}
}
}
}
sort.Sort(byName(fields))
// Delete all fields that are hidden by the Go rules for embedded fields,
// except that fields with TOML tags are promoted.
// The fields are sorted in primary order of name, secondary order
// of field index length. Loop over names; for each name, delete
// hidden fields by choosing the one dominant field that survives.
out := fields[:0]
for advance, i := 0, 0; i < len(fields); i += advance {
// One iteration per name.
// Find the sequence of fields with the name of this first field.
fi := fields[i]
name := fi.name
for advance = 1; i+advance < len(fields); advance++ {
fj := fields[i+advance]
if fj.name != name {
break
}
}
if advance == 1 { // Only one field with this name
out = append(out, fi)
continue
}
dominant, ok := dominantField(fields[i : i+advance])
if ok {
out = append(out, dominant)
}
}
fields = out
sort.Sort(byIndex(fields))
return fields
}
// dominantField looks through the fields, all of which are known to
// have the same name, to find the single field that dominates the
// others using Go's embedding rules, modified by the presence of
// TOML tags. If there are multiple top-level fields, the boolean
// will be false: This condition is an error in Go and we skip all
// the fields.
func dominantField(fields []field) (field, bool) {
// The fields are sorted in increasing index-length order. The winner
// must therefore be one with the shortest index length. Drop all
// longer entries, which is easy: just truncate the slice.
length := len(fields[0].index)
tagged := -1 // Index of first tagged field.
for i, f := range fields {
if len(f.index) > length {
fields = fields[:i]
break
}
if f.tag {
if tagged >= 0 {
// Multiple tagged fields at the same level: conflict.
// Return no field.
return field{}, false
}
tagged = i
}
}
if tagged >= 0 {
return fields[tagged], true
}
// All remaining fields have the same length. If there's more than one,
// we have a conflict (two fields named "X" at the same level) and we
// return no field.
if len(fields) > 1 {
return field{}, false
}
return fields[0], true
}
var fieldCache struct {
sync.RWMutex
m map[reflect.Type][]field
}
// cachedTypeFields is like typeFields but uses a cache to avoid repeated work.
func cachedTypeFields(t reflect.Type) []field {
fieldCache.RLock()
f := fieldCache.m[t]
fieldCache.RUnlock()
if f != nil {
return f
}
// Compute fields without lock.
// Might duplicate effort but won't hold other computations back.
f = typeFields(t)
if f == nil {
f = []field{}
}
fieldCache.Lock()
if fieldCache.m == nil {
fieldCache.m = map[reflect.Type][]field{}
}
fieldCache.m[t] = f
fieldCache.Unlock()
return f
}
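These rules mirror encoding/json's handling of embedded fields. A minimal sketch of the resulting behavior, assuming the vendored BurntSushi/toml package's public `toml.Decode` API (struct and key names are illustrative):

```go
package main

import (
	"fmt"

	"github.com/BurntSushi/toml"
)

type Inner struct {
	Name string // depth 1: hidden by Outer.Name under the dominance rules above
}

type Outer struct {
	Inner
	Name string // depth 0: the dominant field for the key "Name"
}

func main() {
	var o Outer
	// The shallower field wins, so only Outer.Name is populated.
	if _, err := toml.Decode(`Name = "top-level"`, &o); err != nil {
		panic(err)
	}
	fmt.Printf("%q %q\n", o.Name, o.Inner.Name) // "top-level" ""
}
```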

View File

@@ -20,28 +20,50 @@ to another, for example docker container images to OCI images. It also allows
you to copy container images between various registries, possibly converting
them as necessary, and to sign and verify images.
## Command-line usage
The containers/image project is only a library with no user interface;
you can either incorporate it into your Go programs, or use the `skopeo` tool:
The [skopeo](https://github.com/projectatomic/skopeo) tool uses the
containers/image library and takes advantage of its many features.
containers/image library and takes advantage of many of its features,
e.g. `skopeo copy` exposes the `containers/image/copy.Image` functionality.
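For instance (image and layout names are illustrative):
```sh
$ skopeo copy docker://docker.io/library/busybox:latest oci:busybox-oci:latest
```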
## Dependencies
Dependencies that this library prefers will not be found in the `vendor`
directory. This is so you can make well-informed decisions about which
libraries you should use with this package in your own projects.
This library does not ship a committed version of its dependencies in a `vendor`
subdirectory. This is so you can make well-informed decisions about which
libraries you should use with this package in your own projects, and because
types defined in the `vendor` directory would be impossible to use from your projects.
What this project tests against dependencies-wise is located
[here](https://github.com/containers/image/blob/master/vendor.conf).
[in vendor.conf](https://github.com/containers/image/blob/master/vendor.conf).
## Building
For ordinary use, `go build ./...` is sufficient.
If you want to see what the library can do, or an example of how it is called,
consider starting with the [skopeo](https://github.com/projectatomic/skopeo) tool
instead.
When developing this library, please use `make` to take advantage of the tests and validation.
To integrate this library into your project, put it into `$GOPATH` or use
your preferred vendoring tool to include a copy in your project.
Ensure that the dependencies documented [in vendor.conf](https://github.com/containers/image/blob/master/vendor.conf)
are also available
(using those exact versions or different versions of your choosing).
Optionally, you can use the `containers_image_openpgp` build tag (using `go build -tags …`, or `make … BUILDTAGS=…`).
This library, by default, also depends on the GpgME C library. Either install it:
```sh
Fedora$ dnf install gpgme-devel libassuan-devel
macOS$ brew install gpgme
```
or use the `containers_image_openpgp` build tag (e.g. using `go build -tags …`).
This will use a Golang-only OpenPGP implementation for signature verification instead of the default cgo/gpgme-based implementation;
the primary downside is that creating new signatures with the Golang-only implementation is not supported.
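For example, a pure-Go build of a program using this library might be invoked as:
```sh
$ go build -tags containers_image_openpgp ./...
```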
## Contributing
When developing this library, please use `make` (or `make … BUILDTAGS=…`) to take advantage of the tests and validation.
## License
ASL 2.0

View File

@@ -7,6 +7,7 @@ import (
"io"
"io/ioutil"
"reflect"
"runtime"
"strings"
"time"
@@ -157,6 +158,10 @@ func Image(policyContext *signature.PolicyContext, destRef, srcRef types.ImageRe
}
}()
if err := checkImageDestinationForCurrentRuntimeOS(src, dest); err != nil {
return err
}
if src.IsMultiImage() {
return errors.Errorf("can not copy %s: manifest contains multiple images", transports.ImageName(srcRef))
}
@@ -277,6 +282,22 @@ func Image(policyContext *signature.PolicyContext, destRef, srcRef types.ImageRe
return nil
}
func checkImageDestinationForCurrentRuntimeOS(src types.Image, dest types.ImageDestination) error {
if dest.MustMatchRuntimeOS() {
c, err := src.OCIConfig()
if err != nil {
return errors.Wrapf(err, "Error parsing image configuration")
}
osErr := fmt.Errorf("image operating system %q cannot be used on %q", c.OS, runtime.GOOS)
if runtime.GOOS == "windows" && c.OS == "linux" {
return osErr
} else if runtime.GOOS != "windows" && c.OS == "windows" {
return osErr
}
}
return nil
}
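A sketch of the early failure this adds, using the `copy.Image` entry point from the hunk above; the image references and policy are illustrative, and `transports.ParseImageName` plus the side-effect imports are assumed to be the registration mechanism available at this version:

```go
package main

import (
	"fmt"

	"github.com/containers/image/copy"
	_ "github.com/containers/image/docker"        // registers docker:// (assumed)
	_ "github.com/containers/image/docker/daemon" // registers docker-daemon: (assumed)
	"github.com/containers/image/signature"
	"github.com/containers/image/transports"
)

func main() {
	// Hypothetical references: a Windows-only image copied into the local
	// Docker daemon, a destination whose MustMatchRuntimeOS() is true.
	srcRef, err := transports.ParseImageName("docker://microsoft/nanoserver:latest")
	if err != nil {
		panic(err)
	}
	destRef, err := transports.ParseImageName("docker-daemon:nanoserver:latest")
	if err != nil {
		panic(err)
	}
	policy := &signature.Policy{
		Default: signature.PolicyRequirements{signature.NewPRInsecureAcceptAnything()},
	}
	pc, err := signature.NewPolicyContext(policy)
	if err != nil {
		panic(err)
	}
	defer pc.Destroy()
	// On a Linux host this now fails before any blob is transferred:
	//   image operating system "windows" cannot be used on "linux"
	fmt.Println(copy.Image(pc, destRef, srcRef, nil))
}
```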
// updateEmbeddedDockerReference handles the Docker reference embedded in Docker schema1 manifests.
func updateEmbeddedDockerReference(manifestUpdates *types.ManifestUpdateOptions, dest types.ImageDestination, src types.Image, canModifyManifest bool) error {
destRef := dest.Reference().DockerReference()

View File

@@ -51,6 +51,11 @@ func (d *dirImageDestination) AcceptsForeignLayerURLs() bool {
return false
}
// MustMatchRuntimeOS returns true iff the destination can store only images targeted for the current runtime OS. False otherwise.
func (d *dirImageDestination) MustMatchRuntimeOS() bool {
return false
}
// PutBlob writes contents of stream and returns data representing the result (with all data filled in).
// inputInfo.Digest can be optionally provided if known; it is not mandatory for the implementation to verify it.
// inputInfo.Size is the expected length of stream, if known.

View File

@@ -78,6 +78,11 @@ func imageLoadGoroutine(ctx context.Context, c *client.Client, reader *io.PipeRe
defer resp.Body.Close()
}
// MustMatchRuntimeOS returns true iff the destination can store only images targeted for the current runtime OS. False otherwise.
func (d *daemonImageDestination) MustMatchRuntimeOS() bool {
return true
}
// Close removes resources associated with an initialized ImageDestination, if any.
func (d *daemonImageDestination) Close() error {
if !d.committed {

View File

@@ -34,6 +34,8 @@ const (
dockerCfgFileName = "config.json"
dockerCfgObsolete = ".dockercfg"
systemPerHostCertDirPath = "/etc/docker/certs.d"
resolvedPingV2URL = "%s://%s/v2/"
resolvedPingV1URL = "%s://%s/v1/_ping"
tagsPath = "/v2/%s/tags/list"
@@ -129,12 +131,29 @@ func newTransport() *http.Transport {
return tr
}
func setupCertificates(dir string, tlsc *tls.Config) error {
if dir == "" {
return nil
// dockerCertDir returns a path to a directory to be consumed by setupCertificates() depending on ctx and hostPort.
func dockerCertDir(ctx *types.SystemContext, hostPort string) string {
if ctx != nil && ctx.DockerCertPath != "" {
return ctx.DockerCertPath
}
var hostCertDir string
if ctx != nil && ctx.DockerPerHostCertDirPath != "" {
hostCertDir = ctx.DockerPerHostCertDirPath
} else if ctx != nil && ctx.RootForImplicitAbsolutePaths != "" {
hostCertDir = filepath.Join(ctx.RootForImplicitAbsolutePaths, systemPerHostCertDirPath)
} else {
hostCertDir = systemPerHostCertDirPath
}
return filepath.Join(hostCertDir, hostPort)
}
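The lookup order is: an explicit `DockerCertPath`, then a per-host directory rooted at `DockerPerHostCertDirPath` (or at the system default `/etc/docker/certs.d`), with the host[:port] appended. A runnable sketch of that precedence, omitting the `RootForImplicitAbsolutePaths` case (paths are illustrative, and this is not the library API):

```go
package main

import (
	"fmt"
	"path/filepath"
)

// certDir mirrors the precedence implemented by dockerCertDir above,
// reduced to plain parameters.
func certDir(dockerCertPath, perHostDir, hostPort string) string {
	if dockerCertPath != "" {
		return dockerCertPath // DockerCertPath wins outright
	}
	base := "/etc/docker/certs.d" // systemPerHostCertDirPath
	if perHostDir != "" {
		base = perHostDir // DockerPerHostCertDirPath overrides the default base
	}
	return filepath.Join(base, hostPort)
}

func main() {
	fmt.Println(certDir("", "", "registry.example.com:5000"))
	// /etc/docker/certs.d/registry.example.com:5000
	fmt.Println(certDir("", "/etc/containers/certs.d", "registry.example.com:5000"))
	// /etc/containers/certs.d/registry.example.com:5000
}
```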
func setupCertificates(dir string, tlsc *tls.Config) error {
logrus.Debugf("Looking for TLS certificates and private keys in %s", dir)
fs, err := ioutil.ReadDir(dir)
if err != nil && !os.IsNotExist(err) {
if err != nil {
if os.IsNotExist(err) {
return nil
}
return err
}
@@ -146,7 +165,7 @@ func setupCertificates(dir string, tlsc *tls.Config) error {
return errors.Wrap(err, "unable to get system cert pool")
}
tlsc.RootCAs = systemPool
logrus.Debugf("crt: %s", fullPath)
logrus.Debugf(" crt: %s", fullPath)
data, err := ioutil.ReadFile(fullPath)
if err != nil {
return err
@@ -156,7 +175,7 @@ func setupCertificates(dir string, tlsc *tls.Config) error {
if strings.HasSuffix(f.Name(), ".cert") {
certName := f.Name()
keyName := certName[:len(certName)-5] + ".key"
logrus.Debugf("cert: %s", fullPath)
logrus.Debugf(" cert: %s", fullPath)
if !hasFile(fs, keyName) {
return errors.Errorf("missing key %s for client certificate %s. Note that CA certificates should use the extension .crt", keyName, certName)
}
@@ -169,7 +188,7 @@ func setupCertificates(dir string, tlsc *tls.Config) error {
if strings.HasSuffix(f.Name(), ".key") {
keyName := f.Name()
certName := keyName[:len(keyName)-4] + ".cert"
logrus.Debugf("key: %s", fullPath)
logrus.Debugf(" key: %s", fullPath)
if !hasFile(fs, certName) {
return errors.Errorf("missing client certificate %s for key %s", certName, keyName)
}
@@ -199,18 +218,18 @@ func newDockerClient(ctx *types.SystemContext, ref dockerReference, write bool,
return nil, err
}
tr := newTransport()
if ctx != nil && (ctx.DockerCertPath != "" || ctx.DockerInsecureSkipTLSVerify) {
tlsc := &tls.Config{}
if err := setupCertificates(ctx.DockerCertPath, tlsc); err != nil {
return nil, err
}
tlsc.InsecureSkipVerify = ctx.DockerInsecureSkipTLSVerify
tr.TLSClientConfig = tlsc
tr.TLSClientConfig = serverDefault()
// It is undefined whether the host[:port] string for dockerHostname should be dockerHostname or dockerRegistry,
// because docker/docker does not read the certs.d subdirectory at all in that case. We use the user-visible
// dockerHostname here, because it is more symmetrical to read the configuration in that case as well, and because
// generally the UI hides the existence of the different dockerRegistry. But note that this behavior is
// undocumented and may change if docker/docker changes.
certDir := dockerCertDir(ctx, reference.Domain(ref.ref))
if err := setupCertificates(certDir, tr.TLSClientConfig); err != nil {
return nil, err
}
if tr.TLSClientConfig == nil {
tr.TLSClientConfig = serverDefault()
if ctx != nil && ctx.DockerInsecureSkipTLSVerify {
tr.TLSClientConfig.InsecureSkipVerify = true
}
client := &http.Client{Transport: tr}

View File

@@ -99,6 +99,11 @@ func (d *dockerImageDestination) AcceptsForeignLayerURLs() bool {
return true
}
// MustMatchRuntimeOS returns true iff the destination can store only images targeted for the current runtime OS. False otherwise.
func (d *dockerImageDestination) MustMatchRuntimeOS() bool {
return false
}
// sizeCounter is an io.Writer which only counts the total size of its input.
type sizeCounter struct{ size int64 }

View File

@@ -81,6 +81,11 @@ func (d *Destination) AcceptsForeignLayerURLs() bool {
return false
}
// MustMatchRuntimeOS returns true iff the destination can store only images targeted for the current runtime OS. False otherwise.
func (d *Destination) MustMatchRuntimeOS() bool {
return false
}
// PutBlob writes contents of stream and returns data representing the result (with all data filled in).
// inputInfo.Digest can be optionally provided if known; it is not mandatory for the implementation to verify it.
// inputInfo.Size is the expected length of stream, if known.

View File

@@ -19,12 +19,12 @@ import (
type ociImageDestination struct {
ref ociReference
index imgspecv1.ImageIndex
index imgspecv1.Index
}
// newImageDestination returns an ImageDestination for writing to an existing directory.
func newImageDestination(ref ociReference) types.ImageDestination {
index := imgspecv1.ImageIndex{
index := imgspecv1.Index{
Versioned: imgspec.Versioned{
SchemaVersion: 2,
},
@@ -66,6 +66,11 @@ func (d *ociImageDestination) AcceptsForeignLayerURLs() bool {
return false
}
// MustMatchRuntimeOS returns true iff the destination can store only images targeted for the current runtime OS. False otherwise.
func (d *ociImageDestination) MustMatchRuntimeOS() bool {
return false
}
// PutBlob writes contents of stream and returns data representing the result (with all data filled in).
// inputInfo.Digest can be optionally provided if known; it is not mandatory for the implementation to verify it.
// inputInfo.Size is the expected length of stream, if known.
@@ -173,15 +178,13 @@ func (d *ociImageDestination) PutManifest(m []byte) error {
}
annotations := make(map[string]string)
annotations["org.opencontainers.ref.name"] = d.ref.tag
annotations["org.opencontainers.image.ref.name"] = d.ref.tag
desc.Annotations = annotations
d.index.Manifests = append(d.index.Manifests, imgspecv1.ManifestDescriptor{
Descriptor: desc,
Platform: imgspecv1.Platform{
Architecture: runtime.GOARCH,
OS: runtime.GOOS,
},
})
desc.Platform = &imgspecv1.Platform{
Architecture: runtime.GOARCH,
OS: runtime.GOOS,
}
d.index.Manifests = append(d.index.Manifests, desc)
return nil
}
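Note the annotation key migration: tags are now recorded under `org.opencontainers.image.ref.name`, so index entries written under the older `org.opencontainers.ref.name` key will no longer be found by the updated reader below. A sketch of the descriptor now appended, with an illustrative digest and size (the empty-string sha256 stands in for a real manifest digest):

```go
package main

import (
	"fmt"
	"runtime"

	imgspecv1 "github.com/opencontainers/image-spec/specs-go/v1"
)

func main() {
	desc := imgspecv1.Descriptor{
		MediaType: imgspecv1.MediaTypeImageManifest,
		Digest:    "sha256:e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
		Size:      1234,
		Annotations: map[string]string{
			"org.opencontainers.image.ref.name": "latest",
		},
		Platform: &imgspecv1.Platform{
			Architecture: runtime.GOARCH,
			OS:           runtime.GOOS,
		},
	}
	fmt.Printf("%+v\n", desc)
}
```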

View File

@@ -12,7 +12,7 @@ import (
type ociImageSource struct {
ref ociReference
descriptor imgspecv1.ManifestDescriptor
descriptor imgspecv1.Descriptor
}
// newImageSource returns an ImageSource for reading from an existing directory.

View File

@@ -186,22 +186,22 @@ func (ref ociReference) NewImage(ctx *types.SystemContext) (types.Image, error)
return image.FromSource(src)
}
func (ref ociReference) getManifestDescriptor() (imgspecv1.ManifestDescriptor, error) {
func (ref ociReference) getManifestDescriptor() (imgspecv1.Descriptor, error) {
indexJSON, err := os.Open(ref.indexPath())
if err != nil {
return imgspecv1.ManifestDescriptor{}, err
return imgspecv1.Descriptor{}, err
}
defer indexJSON.Close()
index := imgspecv1.ImageIndex{}
index := imgspecv1.Index{}
if err := json.NewDecoder(indexJSON).Decode(&index); err != nil {
return imgspecv1.ManifestDescriptor{}, err
return imgspecv1.Descriptor{}, err
}
var d *imgspecv1.ManifestDescriptor
var d *imgspecv1.Descriptor
for _, md := range index.Manifests {
if md.MediaType != imgspecv1.MediaTypeImageManifest {
continue
}
refName, ok := md.Annotations["org.opencontainers.ref.name"]
refName, ok := md.Annotations["org.opencontainers.image.ref.name"]
if !ok {
continue
}
@@ -211,7 +211,7 @@ func (ref ociReference) getManifestDescriptor() (imgspecv1.ManifestDescriptor, e
}
}
if d == nil {
return imgspecv1.ManifestDescriptor{}, fmt.Errorf("no descriptor found for reference %q", ref.tag)
return imgspecv1.Descriptor{}, fmt.Errorf("no descriptor found for reference %q", ref.tag)
}
return *d, nil
}

View File

@@ -358,6 +358,11 @@ func (d *openshiftImageDestination) AcceptsForeignLayerURLs() bool {
return true
}
// MustMatchRuntimeOS returns true iff the destination can store only images targeted for the current runtime OS. False otherwise.
func (d *openshiftImageDestination) MustMatchRuntimeOS() bool {
return false
}
// PutBlob writes contents of stream and returns data representing the result (with all data filled in).
// inputInfo.Digest can be optionally provided if known; it is not mandatory for the implementation to verify it.
// inputInfo.Size is the expected length of stream, if known.

View File

@@ -11,12 +11,15 @@ import (
"path/filepath"
"strconv"
"strings"
"time"
"github.com/containers/image/manifest"
"github.com/containers/image/types"
"github.com/containers/storage/pkg/archive"
"github.com/opencontainers/go-digest"
"github.com/pkg/errors"
"github.com/ostreedev/ostree-go/pkg/otbuiltin"
)
type blobToImport struct {
@@ -86,6 +89,11 @@ func (d *ostreeImageDestination) AcceptsForeignLayerURLs() bool {
return false
}
// MustMatchRuntimeOS returns true iff the destination can store only images targeted for the current runtime OS. False otherwise.
func (d *ostreeImageDestination) MustMatchRuntimeOS() bool {
return true
}
func (d *ostreeImageDestination) PutBlob(stream io.Reader, inputInfo types.BlobInfo) (types.BlobInfo, error) {
tmpDir, err := ioutil.TempDir(d.tmpDirPath, "blob")
if err != nil {
@@ -153,7 +161,17 @@ func fixFiles(dir string, usermode bool) error {
return nil
}
func (d *ostreeImageDestination) importBlob(blob *blobToImport) error {
func (d *ostreeImageDestination) ostreeCommit(repo *otbuiltin.Repo, branch string, root string, metadata []string) error {
opts := otbuiltin.NewCommitOptions()
opts.AddMetadataString = metadata
opts.Timestamp = time.Now()
// OCI layers have no parent OSTree commit
opts.Parent = "0000000000000000000000000000000000000000000000000000000000000000"
_, err := repo.Commit(root, branch, opts)
return err
}
func (d *ostreeImageDestination) importBlob(repo *otbuiltin.Repo, blob *blobToImport) error {
ostreeBranch := fmt.Sprintf("ociimage/%s", blob.Digest.Hex())
destinationPath := filepath.Join(d.tmpDirPath, blob.Digest.Hex(), "root")
if err := ensureDirectoryExists(destinationPath); err != nil {
@@ -181,11 +199,7 @@ func (d *ostreeImageDestination) importBlob(blob *blobToImport) error {
return err
}
}
return exec.Command("ostree", "commit",
"--repo", d.ref.repo,
fmt.Sprintf("--add-metadata-string=docker.size=%d", blob.Size),
"--branch", ostreeBranch,
fmt.Sprintf("--tree=dir=%s", destinationPath)).Run()
return d.ostreeCommit(repo, ostreeBranch, destinationPath, []string{fmt.Sprintf("docker.size=%d", blob.Size)})
}
func (d *ostreeImageDestination) importConfig(blob *blobToImport) error {
@@ -253,6 +267,16 @@ func (d *ostreeImageDestination) PutSignatures(signatures [][]byte) error {
}
func (d *ostreeImageDestination) Commit() error {
repo, err := otbuiltin.OpenRepo(d.ref.repo)
if err != nil {
return err
}
_, err = repo.PrepareTransaction()
if err != nil {
return err
}
for _, layer := range d.schema.LayersDescriptors {
hash := layer.Digest.Hex()
blob := d.blobs[hash]
@@ -261,7 +285,7 @@ func (d *ostreeImageDestination) Commit() error {
if blob == nil {
continue
}
err := d.importBlob(blob)
err := d.importBlob(repo, blob)
if err != nil {
return err
}
@@ -277,11 +301,11 @@ func (d *ostreeImageDestination) Commit() error {
}
manifestPath := filepath.Join(d.tmpDirPath, "manifest")
err := exec.Command("ostree", "commit",
"--repo", d.ref.repo,
fmt.Sprintf("--add-metadata-string=docker.manifest=%s", string(d.manifest)),
fmt.Sprintf("--branch=ociimage/%s", d.ref.branchName),
manifestPath).Run()
metadata := []string{fmt.Sprintf("docker.manifest=%s", string(d.manifest))}
if err := d.ostreeCommit(repo, fmt.Sprintf("ociimage/%s", d.ref.branchName), manifestPath, metadata); err != nil {
return err
}
_, err = repo.CommitTransaction()
return err
}

View File

@@ -1,5 +1,7 @@
// Note: Consider the API unstable until the code supports at least three different image formats or transports.
// NOTE: Keep this in sync with docs/atomic-signature.md and docs/atomic-signature-embedded.json!
package signature
import (

View File

@@ -13,9 +13,9 @@ import (
"github.com/containers/image/image"
"github.com/containers/image/manifest"
"github.com/containers/image/types"
"github.com/containers/storage"
"github.com/containers/storage/pkg/archive"
"github.com/containers/storage/pkg/ioutils"
"github.com/containers/storage/storage"
ddigest "github.com/opencontainers/go-digest"
)
@@ -307,7 +307,7 @@ func (s *storageImageDestination) ReapplyBlob(blobinfo types.BlobInfo) (types.Bl
return types.BlobInfo{}, err
}
if layerList, ok := s.Layers[blobinfo.Digest]; !ok || len(layerList) < 1 {
b, err := s.imageRef.transport.store.GetImageBigData(s.ID, blobinfo.Digest.String())
b, err := s.imageRef.transport.store.ImageBigData(s.ID, blobinfo.Digest.String())
if err != nil {
return types.BlobInfo{}, err
}
@@ -335,7 +335,7 @@ func (s *storageImageDestination) Commit() error {
logrus.Debugf("error creating image: %q", err)
return errors.Wrapf(err, "error creating image %q", s.ID)
}
img, err = s.imageRef.transport.store.GetImage(s.ID)
img, err = s.imageRef.transport.store.Image(s.ID)
if err != nil {
return errors.Wrapf(err, "error reading image %q", s.ID)
}
@@ -416,8 +416,15 @@ func (s *storageImageDestination) Commit() error {
return nil
}
var manifestMIMETypes = []string{
// TODO(runcom): we'll add OCI as part of another PR here
manifest.DockerV2Schema2MediaType,
manifest.DockerV2Schema1SignedMediaType,
manifest.DockerV2Schema1MediaType,
}
func (s *storageImageDestination) SupportedManifestMIMETypes() []string {
return nil
return manifestMIMETypes
}
// PutManifest writes manifest to the destination.
@@ -442,6 +449,11 @@ func (s *storageImageDestination) AcceptsForeignLayerURLs() bool {
return false
}
// MustMatchRuntimeOS returns true iff the destination can store only images targeted for the current runtime OS. False otherwise.
func (s *storageImageDestination) MustMatchRuntimeOS() bool {
return true
}
func (s *storageImageDestination) PutSignatures(signatures [][]byte) error {
sizes := []int{}
sigblob := []byte{}
@@ -468,7 +480,7 @@ func (s *storageImageSource) getBlobAndLayerID(info types.BlobInfo) (rc io.ReadC
return nil, -1, "", err
}
if layerList, ok := s.Layers[info.Digest]; !ok || len(layerList) < 1 {
b, err := s.imageRef.transport.store.GetImageBigData(s.ID, info.Digest.String())
b, err := s.imageRef.transport.store.ImageBigData(s.ID, info.Digest.String())
if err != nil {
return nil, -1, "", err
}
@@ -492,7 +504,7 @@ func (s *storageImageSource) getBlobAndLayerID(info types.BlobInfo) (rc io.ReadC
}
func diffLayer(store storage.Store, layerID string) (rc io.ReadCloser, n int64, err error) {
layer, err := store.GetLayer(layerID)
layer, err := store.Layer(layerID)
if err != nil {
return nil, -1, err
}
@@ -517,7 +529,7 @@ func diffLayer(store storage.Store, layerID string) (rc io.ReadCloser, n int64,
}
func (s *storageImageSource) GetManifest() (manifestBlob []byte, MIMEType string, err error) {
manifestBlob, err = s.imageRef.transport.store.GetImageBigData(s.ID, "manifest")
manifestBlob, err = s.imageRef.transport.store.ImageBigData(s.ID, "manifest")
return manifestBlob, manifest.GuessMIMEType(manifestBlob), err
}
@@ -527,7 +539,7 @@ func (s *storageImageSource) GetTargetManifest(digest ddigest.Digest) (manifestB
func (s *storageImageSource) GetSignatures() (signatures [][]byte, err error) {
var offset int
signature, err := s.imageRef.transport.store.GetImageBigData(s.ID, "signatures")
signature, err := s.imageRef.transport.store.ImageBigData(s.ID, "signatures")
if err != nil {
return nil, err
}
@@ -549,7 +561,7 @@ func (s *storageImageSource) getSize() (int64, error) {
return -1, errors.Wrapf(err, "error reading image %q", s.imageRef.id)
}
for _, name := range names {
bigSize, err := s.imageRef.transport.store.GetImageBigDataSize(s.imageRef.id, name)
bigSize, err := s.imageRef.transport.store.ImageBigDataSize(s.imageRef.id, name)
if err != nil {
return -1, errors.Wrapf(err, "error reading data blob size %q for %q", name, s.imageRef.id)
}
@@ -560,7 +572,7 @@ func (s *storageImageSource) getSize() (int64, error) {
}
for _, layerList := range s.Layers {
for _, layerID := range layerList {
layer, err := s.imageRef.transport.store.GetLayer(layerID)
layer, err := s.imageRef.transport.store.Layer(layerID)
if err != nil {
return -1, err
}

View File

@@ -6,7 +6,7 @@ import (
"github.com/Sirupsen/logrus"
"github.com/containers/image/docker/reference"
"github.com/containers/image/types"
"github.com/containers/storage/storage"
"github.com/containers/storage"
"github.com/pkg/errors"
)
@@ -37,7 +37,7 @@ func newReference(transport storageTransport, reference, id string, name referen
// one present with the same name or ID, and return the image.
func (s *storageReference) resolveImage() (*storage.Image, error) {
if s.id == "" {
image, err := s.transport.store.GetImage(s.reference)
image, err := s.transport.store.Image(s.reference)
if image != nil && err == nil {
s.id = image.ID
}
@@ -46,7 +46,7 @@ func (s *storageReference) resolveImage() (*storage.Image, error) {
logrus.Errorf("reference %q does not resolve to an image ID", s.StringWithinTransport())
return nil, ErrNoSuchImage
}
img, err := s.transport.store.GetImage(s.id)
img, err := s.transport.store.Image(s.id)
if err != nil {
return nil, errors.Wrapf(err, "error reading image %q", s.id)
}
@@ -83,7 +83,7 @@ func (s storageReference) DockerReference() reference.Named {
// disambiguate between images which may be present in multiple stores and
// share only their names.
func (s storageReference) StringWithinTransport() string {
storeSpec := "[" + s.transport.store.GetGraphDriverName() + "@" + s.transport.store.GetGraphRoot() + "]"
storeSpec := "[" + s.transport.store.GraphDriverName() + "@" + s.transport.store.GraphRoot() + "]"
if s.name == nil {
return storeSpec + "@" + s.id
}
@@ -102,8 +102,8 @@ func (s storageReference) PolicyConfigurationIdentity() string {
// graph root, in case we're using multiple drivers in the same directory for
// some reason.
func (s storageReference) PolicyConfigurationNamespaces() []string {
storeSpec := "[" + s.transport.store.GetGraphDriverName() + "@" + s.transport.store.GetGraphRoot() + "]"
driverlessStoreSpec := "[" + s.transport.store.GetGraphRoot() + "]"
storeSpec := "[" + s.transport.store.GraphDriverName() + "@" + s.transport.store.GraphRoot() + "]"
driverlessStoreSpec := "[" + s.transport.store.GraphRoot() + "]"
namespaces := []string{}
if s.name != nil {
if s.id != "" {

View File

@@ -10,7 +10,7 @@ import (
"github.com/containers/image/docker/reference"
"github.com/containers/image/transports"
"github.com/containers/image/types"
"github.com/containers/storage/storage"
"github.com/containers/storage"
"github.com/opencontainers/go-digest"
ddigest "github.com/opencontainers/go-digest"
)
@@ -110,7 +110,7 @@ func (s storageTransport) ParseStoreReference(store storage.Store, ref string) (
// recognize.
return nil, ErrInvalidReference
}
storeSpec := "[" + store.GetGraphDriverName() + "@" + store.GetGraphRoot() + "]"
storeSpec := "[" + store.GraphDriverName() + "@" + store.GraphRoot() + "]"
id := ""
if sum.Validate() == nil {
id = sum.Hex()
@@ -205,14 +205,14 @@ func (s storageTransport) GetStoreImage(store storage.Store, ref types.ImageRefe
if dref == nil {
if sref, ok := ref.(*storageReference); ok {
if sref.id != "" {
if img, err := store.GetImage(sref.id); err == nil {
if img, err := store.Image(sref.id); err == nil {
return img, nil
}
}
}
return nil, ErrInvalidReference
}
return store.GetImage(verboseName(dref))
return store.Image(verboseName(dref))
}
func (s *storageTransport) GetImage(ref types.ImageReference) (*storage.Image, error) {

View File

@@ -2,6 +2,7 @@ package transports
import (
"fmt"
"sort"
"sync"
"github.com/containers/image/types"
@@ -69,3 +70,15 @@ func Register(t types.ImageTransport) {
func ImageName(ref types.ImageReference) string {
return ref.Transport().Name() + ":" + ref.StringWithinTransport()
}
// ListNames returns a list of transport names
func ListNames() []string {
kt.mu.Lock()
defer kt.mu.Unlock()
var names []string
for _, transport := range kt.transports {
names = append(names, transport.Name())
}
sort.Strings(names)
return names
}
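A minimal consumer of the new helper (a sketch; only transports that have been linked into the binary, each registering itself in an `init` function, will appear):

```go
package main

import (
	"fmt"

	"github.com/containers/image/transports"
)

func main() {
	// Prints the registered transport names in sorted order.
	fmt.Println(transports.ListNames())
}
```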

View File

@@ -148,11 +148,11 @@ type ImageDestination interface {
SupportsSignatures() error
// ShouldCompressLayers returns true iff it is desirable to compress layer blobs written to this destination.
ShouldCompressLayers() bool
// AcceptsForeignLayerURLs returns false iff foreign layers in the manifest should actually be
// uploaded to the image destination, true otherwise.
AcceptsForeignLayerURLs() bool
// MustMatchRuntimeOS returns true iff the destination can store only images targeted for the current runtime OS. False otherwise.
MustMatchRuntimeOS() bool
// PutBlob writes contents of stream and returns data representing the result (with all data filled in).
// inputInfo.Digest can be optionally provided if known; it is not mandatory for the implementation to verify it.
// inputInfo.Size is the expected length of stream, if known.
@@ -307,7 +307,10 @@ type SystemContext struct {
// If not "", a directory containing a CA certificate (ending with ".crt"),
// a client certificate (ending with ".cert") and a client certificate key
// (ending with ".key") used when talking to a Docker Registry.
DockerCertPath string
DockerCertPath string
// If not "", overrides the systems default path for a directory containing host[:port] subdirectories with the same structure as DockerCertPath above.
// Ignored if DockerCertPath is non-empty.
DockerPerHostCertDirPath string
DockerInsecureSkipTLSVerify bool // Allow contacting docker registries over HTTP, or HTTPS with failed TLS verification. Note that this does not affect other TLS connections.
// if nil, the library tries to parse ~/.docker/config.json to retrieve credentials
DockerAuthConfig *DockerAuthConfig

vendor/github.com/containers/image/vendor.conf (new vendored file, 37 lines)
View File

@@ -0,0 +1,37 @@
github.com/Sirupsen/logrus 7f4b1adc791766938c29457bed0703fb9134421a
github.com/containers/storage 989b1c1d85f5dfe2076c67b54289cc13dc836c8c
github.com/davecgh/go-spew 346938d642f2ec3594ed81d874461961cd0faa76
github.com/docker/distribution df5327f76fb6468b84a87771e361762b8be23fdb
github.com/docker/docker 75843d36aa5c3eaade50da005f9e0ff2602f3d5e
github.com/docker/go-connections 7da10c8c50cad14494ec818dcdfb6506265c0086
github.com/docker/go-units 0dadbb0345b35ec7ef35e228dabb8de89a65bf52
github.com/docker/libtrust aabc10ec26b754e797f9028f4589c5b7bd90dc20
github.com/ghodss/yaml 04f313413ffd65ce25f2541bfd2b2ceec5c0908c
github.com/gorilla/context 08b5f424b9271eedf6f9f0ce86cb9396ed337a42
github.com/gorilla/mux 94e7d24fd285520f3d12ae998f7fdd6b5393d453
github.com/imdario/mergo 50d4dbd4eb0e84778abe37cefef140271d96fade
github.com/mattn/go-runewidth 14207d285c6c197daabb5c9793d63e7af9ab2d50
github.com/mattn/go-shellwords 005a0944d84452842197c2108bd9168ced206f78
github.com/mistifyio/go-zfs c0224de804d438efd11ea6e52ada8014537d6062
github.com/mtrmac/gpgme b2432428689ca58c2b8e8dea9449d3295cf96fc9
github.com/opencontainers/go-digest aa2ec055abd10d26d539eb630a92241b781ce4bc
github.com/opencontainers/image-spec v1.0.0-rc6
github.com/opencontainers/runc 6b1d0e76f239ffb435445e5ae316d2676c07c6e3
github.com/pborman/uuid 1b00554d822231195d1babd97ff4a781231955c9
github.com/pkg/errors 248dadf4e9068a0b3e79f02ed0a610d935de5302
github.com/pmezard/go-difflib 792786c7400a136282c1664665ae0a8db921c6c2
github.com/stretchr/testify 4d4bfba8f1d1027c4fdbe371823030df51419987
github.com/vbatts/tar-split bd4c5d64c3e9297f410025a3b1bd0c58f659e721
golang.org/x/crypto 453249f01cfeb54c3d549ddb75ff152ca243f9d8
golang.org/x/net 6b27048ae5e6ad1ef927e72e437531493de612fe
golang.org/x/sys 075e574b89e4c2d22f2286a7e2b919519c6f3547
gopkg.in/cheggaaa/pb.v1 d7e6ca3010b6f084d8056847f55d7f572f180678
gopkg.in/yaml.v2 a3f3340b5840cee44f372bddb5880fcbc419b46a
k8s.io/client-go bcde30fb7eaed76fd98a36b4120321b94995ffb6
github.com/xeipuuv/gojsonschema master
github.com/xeipuuv/gojsonreference master
github.com/xeipuuv/gojsonpointer master
github.com/tchap/go-patricia v2.2.6
github.com/opencontainers/selinux ba1aefe8057f1d0cfb8e88d0ec1dc85925ef987d
github.com/BurntSushi/toml b26d9c308763d68093482582cea63d69be07a0f0
github.com/ostreedev/ostree-go 61532f383f1f48e5c27080b0b9c8b022c3706a97

View File

@@ -10,6 +10,7 @@ import (
"github.com/containers/storage/pkg/ioutils"
"github.com/containers/storage/pkg/stringid"
"github.com/containers/storage/pkg/truncindex"
)
var (
@@ -92,14 +93,19 @@ type ContainerStore interface {
type containerStore struct {
lockfile Locker
dir string
containers []Container
containers []*Container
idindex *truncindex.TruncIndex
byid map[string]*Container
bylayer map[string]*Container
byname map[string]*Container
}
func (r *containerStore) Containers() ([]Container, error) {
return r.containers, nil
containers := make([]Container, len(r.containers))
for i := range r.containers {
containers[i] = *(r.containers[i])
}
return containers, nil
}
func (r *containerStore) containerspath() string {
@@ -121,29 +127,31 @@ func (r *containerStore) Load() error {
if err != nil && !os.IsNotExist(err) {
return err
}
containers := []Container{}
containers := []*Container{}
layers := make(map[string]*Container)
idlist := []string{}
ids := make(map[string]*Container)
names := make(map[string]*Container)
if err = json.Unmarshal(data, &containers); len(data) == 0 || err == nil {
for n, container := range containers {
ids[container.ID] = &containers[n]
layers[container.LayerID] = &containers[n]
idlist = append(idlist, container.ID)
ids[container.ID] = containers[n]
layers[container.LayerID] = containers[n]
for _, name := range container.Names {
if conflict, ok := names[name]; ok {
r.removeName(conflict, name)
needSave = true
}
names[name] = &containers[n]
names[name] = containers[n]
}
}
}
r.containers = containers
r.idindex = truncindex.NewTruncIndex(idlist)
r.byid = ids
r.bylayer = layers
r.byname = names
if needSave {
r.Touch()
return r.Save()
}
return nil
@@ -158,6 +166,7 @@ func (r *containerStore) Save() error {
if err != nil {
return err
}
defer r.Touch()
return ioutils.AtomicWriteFile(rpath, jdata, 0600)
}
@@ -174,7 +183,7 @@ func newContainerStore(dir string) (ContainerStore, error) {
cstore := containerStore{
lockfile: lockfile,
dir: dir,
containers: []Container{},
containers: []*Container{},
byid: make(map[string]*Container),
bylayer: make(map[string]*Container),
byname: make(map[string]*Container),
@@ -185,30 +194,35 @@ func newContainerStore(dir string) (ContainerStore, error) {
return &cstore, nil
}
func (r *containerStore) ClearFlag(id string, flag string) error {
if container, ok := r.byname[id]; ok {
id = container.ID
func (r *containerStore) lookup(id string) (*Container, bool) {
if container, ok := r.byid[id]; ok {
return container, ok
} else if container, ok := r.byname[id]; ok {
return container, ok
} else if container, ok := r.bylayer[id]; ok {
id = container.ID
return container, ok
} else if longid, err := r.idindex.Get(id); err == nil {
if container, ok := r.byid[longid]; ok {
return container, ok
}
}
if _, ok := r.byid[id]; !ok {
return nil, false
}
func (r *containerStore) ClearFlag(id string, flag string) error {
container, ok := r.lookup(id)
if !ok {
return ErrContainerUnknown
}
container := r.byid[id]
delete(container.Flags, flag)
return r.Save()
}
func (r *containerStore) SetFlag(id string, flag string, value interface{}) error {
if container, ok := r.byname[id]; ok {
id = container.ID
} else if container, ok := r.bylayer[id]; ok {
id = container.ID
}
if _, ok := r.byid[id]; !ok {
container, ok := r.lookup(id)
if !ok {
return ErrContainerUnknown
}
container := r.byid[id]
container.Flags[flag] = value
return r.Save()
}
@@ -231,7 +245,7 @@ func (r *containerStore) Create(id string, names []string, image, layer, metadat
}
}
if err == nil {
newContainer := Container{
container = &Container{
ID: id,
Names: names,
ImageID: image,
@@ -241,9 +255,9 @@ func (r *containerStore) Create(id string, names []string, image, layer, metadat
BigDataSizes: make(map[string]int64),
Flags: make(map[string]interface{}),
}
r.containers = append(r.containers, newContainer)
container = &r.containers[len(r.containers)-1]
r.containers = append(r.containers, container)
r.byid[id] = container
r.idindex.Add(id)
r.bylayer[layer] = container
for _, name := range names {
r.byname[name] = container
@@ -253,25 +267,15 @@ func (r *containerStore) Create(id string, names []string, image, layer, metadat
return container, err
}
func (r *containerStore) GetMetadata(id string) (string, error) {
if container, ok := r.byname[id]; ok {
id = container.ID
} else if container, ok := r.bylayer[id]; ok {
id = container.ID
}
if container, ok := r.byid[id]; ok {
func (r *containerStore) Metadata(id string) (string, error) {
if container, ok := r.lookup(id); ok {
return container.Metadata, nil
}
return "", ErrContainerUnknown
}
func (r *containerStore) SetMetadata(id, metadata string) error {
if container, ok := r.byname[id]; ok {
id = container.ID
} else if container, ok := r.bylayer[id]; ok {
id = container.ID
}
if container, ok := r.byid[id]; ok {
if container, ok := r.lookup(id); ok {
container.Metadata = metadata
return r.Save()
}
@@ -279,22 +283,11 @@ func (r *containerStore) SetMetadata(id, metadata string) error {
}
func (r *containerStore) removeName(container *Container, name string) {
newNames := []string{}
for _, oldName := range container.Names {
if oldName != name {
newNames = append(newNames, oldName)
}
}
container.Names = newNames
container.Names = stringSliceWithoutValue(container.Names, name)
}
func (r *containerStore) SetNames(id string, names []string) error {
if container, ok := r.byname[id]; ok {
id = container.ID
} else if container, ok := r.bylayer[id]; ok {
id = container.ID
}
if container, ok := r.byid[id]; ok {
if container, ok := r.lookup(id); ok {
for _, name := range container.Names {
delete(r.byname, name)
}
@@ -311,133 +304,104 @@ func (r *containerStore) SetNames(id string, names []string) error {
}
func (r *containerStore) Delete(id string) error {
if container, ok := r.byname[id]; ok {
id = container.ID
} else if container, ok := r.bylayer[id]; ok {
id = container.ID
}
if _, ok := r.byid[id]; !ok {
container, ok := r.lookup(id)
if !ok {
return ErrContainerUnknown
}
if container, ok := r.byid[id]; ok {
newContainers := []Container{}
for _, candidate := range r.containers {
if candidate.ID != id {
newContainers = append(newContainers, candidate)
}
}
delete(r.byid, container.ID)
delete(r.bylayer, container.LayerID)
for _, name := range container.Names {
delete(r.byname, name)
}
r.containers = newContainers
if err := r.Save(); err != nil {
return err
}
if err := os.RemoveAll(r.datadir(id)); err != nil {
return err
id = container.ID
newContainers := []*Container{}
for _, candidate := range r.containers {
if candidate.ID != id {
newContainers = append(newContainers, candidate)
}
}
delete(r.byid, id)
r.idindex.Delete(id)
delete(r.bylayer, container.LayerID)
for _, name := range container.Names {
delete(r.byname, name)
}
r.containers = newContainers
if err := r.Save(); err != nil {
return err
}
if err := os.RemoveAll(r.datadir(id)); err != nil {
return err
}
return nil
}
func (r *containerStore) Get(id string) (*Container, error) {
if c, ok := r.byname[id]; ok {
return c, nil
} else if c, ok := r.bylayer[id]; ok {
return c, nil
}
if c, ok := r.byid[id]; ok {
return c, nil
if container, ok := r.lookup(id); ok {
return container, nil
}
return nil, ErrContainerUnknown
}
func (r *containerStore) Lookup(name string) (id string, err error) {
container, ok := r.byname[name]
if !ok {
container, ok = r.byid[name]
if !ok {
return "", ErrContainerUnknown
}
if container, ok := r.lookup(name); ok {
return container.ID, nil
}
return container.ID, nil
return "", ErrContainerUnknown
}
func (r *containerStore) Exists(id string) bool {
if _, ok := r.byname[id]; ok {
return true
}
if _, ok := r.bylayer[id]; ok {
return true
}
if _, ok := r.byid[id]; ok {
return true
}
return false
_, ok := r.lookup(id)
return ok
}
func (r *containerStore) GetBigData(id, key string) ([]byte, error) {
if img, ok := r.byname[id]; ok {
id = img.ID
}
if _, ok := r.byid[id]; !ok {
func (r *containerStore) BigData(id, key string) ([]byte, error) {
c, ok := r.lookup(id)
if !ok {
return nil, ErrContainerUnknown
}
return ioutil.ReadFile(r.datapath(id, key))
return ioutil.ReadFile(r.datapath(c.ID, key))
}
func (r *containerStore) GetBigDataSize(id, key string) (int64, error) {
if img, ok := r.byname[id]; ok {
id = img.ID
}
if _, ok := r.byid[id]; !ok {
func (r *containerStore) BigDataSize(id, key string) (int64, error) {
c, ok := r.lookup(id)
if !ok {
return -1, ErrContainerUnknown
}
if size, ok := r.byid[id].BigDataSizes[key]; ok {
if size, ok := c.BigDataSizes[key]; ok {
return size, nil
}
return -1, ErrSizeUnknown
}
func (r *containerStore) GetBigDataNames(id string) ([]string, error) {
if img, ok := r.byname[id]; ok {
id = img.ID
}
if _, ok := r.byid[id]; !ok {
func (r *containerStore) BigDataNames(id string) ([]string, error) {
c, ok := r.lookup(id)
if !ok {
return nil, ErrContainerUnknown
}
return r.byid[id].BigDataNames, nil
return c.BigDataNames, nil
}
func (r *containerStore) SetBigData(id, key string, data []byte) error {
if img, ok := r.byname[id]; ok {
id = img.ID
}
if _, ok := r.byid[id]; !ok {
c, ok := r.lookup(id)
if !ok {
return ErrContainerUnknown
}
if err := os.MkdirAll(r.datadir(id), 0700); err != nil {
if err := os.MkdirAll(r.datadir(c.ID), 0700); err != nil {
return err
}
err := ioutils.AtomicWriteFile(r.datapath(id, key), data, 0600)
err := ioutils.AtomicWriteFile(r.datapath(c.ID, key), data, 0600)
if err == nil {
save := false
oldSize, ok := r.byid[id].BigDataSizes[key]
r.byid[id].BigDataSizes[key] = int64(len(data))
if !ok || oldSize != r.byid[id].BigDataSizes[key] {
oldSize, ok := c.BigDataSizes[key]
c.BigDataSizes[key] = int64(len(data))
if !ok || oldSize != c.BigDataSizes[key] {
save = true
}
add := true
for _, name := range r.byid[id].BigDataNames {
for _, name := range c.BigDataNames {
if name == key {
add = false
break
}
}
if add {
r.byid[id].BigDataNames = append(r.byid[id].BigDataNames, key)
c.BigDataNames = append(c.BigDataNames, key)
save = true
}
if save {
@@ -476,6 +440,10 @@ func (r *containerStore) Modified() (bool, error) {
return r.lockfile.Modified()
}
func (r *containerStore) IsReadWrite() bool {
return r.lockfile.IsReadWrite()
}
func (r *containerStore) TouchedSince(when time.Time) bool {
return r.lockfile.TouchedSince(when)
}

View File

@@ -185,8 +185,8 @@ func (a *Driver) Status() [][2]string {
}
}
// GetMetadata not implemented
func (a *Driver) GetMetadata(id string) (map[string]string, error) {
// Metadata not implemented
func (a *Driver) Metadata(id string) (map[string]string, error) {
return nil, nil
}
@@ -372,6 +372,12 @@ func (a *Driver) Diff(id, parent string) (archive.Archive, error) {
})
}
// AdditionalImageStores returns additional image stores supported by the driver
func (a *Driver) AdditionalImageStores() []string {
var imageStores []string
return imageStores
}
type fileGetNilCloser struct {
storage.FileGetter
}

View File

@@ -143,8 +143,8 @@ func (d *Driver) Status() [][2]string {
return status
}
// GetMetadata returns empty metadata for this driver.
func (d *Driver) GetMetadata(id string) (map[string]string, error) {
// Metadata returns empty metadata for this driver.
func (d *Driver) Metadata(id string) (map[string]string, error) {
return nil, nil
}
@@ -518,3 +518,9 @@ func (d *Driver) Exists(id string) bool {
_, err := os.Stat(dir)
return err == nil
}
// AdditionalImageStores returns additional image stores supported by the driver
func (d *Driver) AdditionalImageStores() []string {
var imageStores []string
return imageStores
}

View File

@@ -28,7 +28,6 @@ import (
"github.com/containers/storage/pkg/loopback"
"github.com/containers/storage/pkg/mount"
"github.com/containers/storage/pkg/parsers"
"github.com/containers/storage/storageversion"
"github.com/docker/go-units"
"github.com/opencontainers/selinux/go-selinux/label"
@@ -1668,17 +1667,17 @@ func (devices *DeviceSet) initDevmapper(doInit bool) error {
}
// https://github.com/docker/docker/issues/4036
if supported := devicemapper.UdevSetSyncSupport(true); !supported {
if storageversion.IAmStatic == "true" {
logrus.Errorf("devmapper: Udev sync is not supported. This will lead to data loss and unexpected behavior. Install a dynamic binary to use devicemapper or select a different storage driver. For more information, see https://docs.docker.com/engine/reference/commandline/daemon/#daemon-storage-driver-option")
} else {
logrus.Errorf("devmapper: Udev sync is not supported. This will lead to data loss and unexpected behavior. Install a more recent version of libdevmapper or select a different storage driver. For more information, see https://docs.docker.com/engine/reference/commandline/daemon/#daemon-storage-driver-option")
}
if !devices.overrideUdevSyncCheck {
return graphdriver.ErrNotSupported
}
}
// if supported := devicemapper.UdevSetSyncSupport(true); !supported {
// if storageversion.IAmStatic == "true" {
// logrus.Errorf("devmapper: Udev sync is not supported. This will lead to data loss and unexpected behavior. Install a dynamic binary to use devicemapper or select a different storage driver. For more information, see https://docs.docker.com/engine/reference/commandline/daemon/#daemon-storage-driver-option")
// } else {
// logrus.Errorf("devmapper: Udev sync is not supported. This will lead to data loss and unexpected behavior. Install a more recent version of libdevmapper or select a different storage driver. For more information, see https://docs.docker.com/engine/reference/commandline/daemon/#daemon-storage-driver-option")
// }
//
// if !devices.overrideUdevSyncCheck {
// return graphdriver.ErrNotSupported
// }
// }
//create the root dir of the devmapper driver ownership to match this
//daemon's remapped root uid/gid so containers can start properly

View File

@@ -94,8 +94,8 @@ func (d *Driver) Status() [][2]string {
return status
}
// GetMetadata returns a map of information about the device.
func (d *Driver) GetMetadata(id string) (map[string]string, error) {
// Metadata returns a map of information about the device.
func (d *Driver) Metadata(id string) (map[string]string, error) {
m, err := d.DeviceSet.exportDeviceMetadata(id)
if err != nil {
@@ -224,3 +224,9 @@ func (d *Driver) Put(id string) error {
func (d *Driver) Exists(id string) bool {
return d.DeviceSet.HasDevice(id)
}
// AdditionalImageStores returns additional image stores supported by the driver
func (d *Driver) AdditionalImageStores() []string {
var imageStores []string
return imageStores
}

View File

@@ -69,11 +69,13 @@ type ProtoDriver interface {
Status() [][2]string
// Returns a set of key-value pairs which give low level information
// about the image/container driver is managing.
GetMetadata(id string) (map[string]string, error)
Metadata(id string) (map[string]string, error)
// Cleanup performs necessary tasks to release resources
// held by the driver, e.g., unmounting all layered filesystems
// known to this driver.
Cleanup() error
// AdditionalImageStores returns additional image stores supported by the driver
AdditionalImageStores() []string
}
// Driver is the interface for layered/snapshot file system drivers.

View File

@@ -1,6 +1,6 @@
// +build linux
package overlay2
package overlay
import (
"bytes"

View File

@@ -1,6 +1,6 @@
// +build linux
package overlay2
package overlay
import (
"bufio"
@@ -10,6 +10,7 @@ import (
"os"
"os/exec"
"path"
"path/filepath"
"strconv"
"strings"
"syscall"
@@ -61,10 +62,9 @@ var (
// that mounts do not fail due to length.
const (
driverName = "overlay2"
linkDir = "l"
lowerFile = "lower"
maxDepth = 128
linkDir = "l"
lowerFile = "lower"
maxDepth = 128
// idLength represents the number of random characters
// which can be used to create the unique link identifier
@@ -78,22 +78,24 @@ const (
// Driver contains information about the home directory and the list of active mounts that are created using this driver.
type Driver struct {
name string
home string
uidMaps []idtools.IDMap
gidMaps []idtools.IDMap
ctr *graphdriver.RefCounter
opts *overlayOptions
}
var backingFs = "<unknown>"
func init() {
graphdriver.Register(driverName, Init)
graphdriver.Register("overlay", InitAsOverlay)
graphdriver.Register("overlay2", InitAsOverlay2)
}
// Init returns a native diff driver for the overlay filesystem.
// If the overlay filesystem is not supported on the host, graphdriver.ErrNotSupported is returned as error.
// If an overlay filesystem is not supported over an existing filesystem then error graphdriver.ErrIncompatibleFS is returned.
func Init(home string, options []string, uidMaps, gidMaps []idtools.IDMap) (graphdriver.Driver, error) {
// InitWithName returns a naive diff driver for the overlay filesystem,
// which returns the passed-in name when asked which driver it is.
func InitWithName(name, home string, options []string, uidMaps, gidMaps []idtools.IDMap) (graphdriver.Driver, error) {
opts, err := parseOptions(options)
if err != nil {
return nil, err
@@ -112,7 +114,7 @@ func Init(home string, options []string, uidMaps, gidMaps []idtools.IDMap) (grap
if !opts.overrideKernelCheck {
return nil, graphdriver.ErrNotSupported
}
logrus.Warnf("Using pre-4.0.0 kernel for overlay2, mount failures may require kernel update")
logrus.Warnf("Using pre-4.0.0 kernel for overlay, mount failures may require kernel update")
}
fsMagic, err := graphdriver.GetFSMagic(home)
@@ -126,7 +128,7 @@ func Init(home string, options []string, uidMaps, gidMaps []idtools.IDMap) (grap
// check if they are running over btrfs, aufs, zfs, overlay, or ecryptfs
switch fsMagic {
case graphdriver.FsMagicBtrfs, graphdriver.FsMagicAufs, graphdriver.FsMagicZfs, graphdriver.FsMagicOverlay, graphdriver.FsMagicEcryptfs:
logrus.Errorf("'overlay2' is not supported over %s", backingFs)
logrus.Errorf("'overlay' is not supported over %s", backingFs)
return nil, graphdriver.ErrIncompatibleFS
}
@@ -144,17 +146,34 @@ func Init(home string, options []string, uidMaps, gidMaps []idtools.IDMap) (grap
}
d := &Driver{
name: name,
home: home,
uidMaps: uidMaps,
gidMaps: gidMaps,
ctr: graphdriver.NewRefCounter(graphdriver.NewFsChecker(graphdriver.FsMagicOverlay)),
opts: opts,
}
return d, nil
}
// InitAsOverlay returns a naive diff driver for the overlay filesystem.
// If the overlay filesystem is not supported on the host, graphdriver.ErrNotSupported is returned as error.
// If an overlay filesystem is not supported over an existing filesystem then error graphdriver.ErrIncompatibleFS is returned.
func InitAsOverlay(home string, options []string, uidMaps, gidMaps []idtools.IDMap) (graphdriver.Driver, error) {
return InitWithName("overlay", home, options, uidMaps, gidMaps)
}
// InitAsOverlay2 returns a naive diff driver for the overlay filesystem.
// If the overlay filesystem is not supported on the host, graphdriver.ErrNotSupported is returned as error.
// If an overlay filesystem is not supported over an existing filesystem then error graphdriver.ErrIncompatibleFS is returned.
func InitAsOverlay2(home string, options []string, uidMaps, gidMaps []idtools.IDMap) (graphdriver.Driver, error) {
return InitWithName("overlay2", home, options, uidMaps, gidMaps)
}
type overlayOptions struct {
overrideKernelCheck bool
imageStores []string
}
func parseOptions(options []string) (*overlayOptions, error) {
@@ -166,13 +185,29 @@ func parseOptions(options []string) (*overlayOptions, error) {
}
key = strings.ToLower(key)
switch key {
case "overlay2.override_kernel_check":
case "overlay.override_kernel_check", "overlay2.override_kernel_check":
o.overrideKernelCheck, err = strconv.ParseBool(val)
if err != nil {
return nil, err
}
case "overlay.imagestore":
// Additional read only image stores to use for lower paths
for _, store := range strings.Split(val, ",") {
store = filepath.Clean(store)
if !filepath.IsAbs(store) {
return nil, fmt.Errorf("overlay: image path %q is not absolute. Can not be relative", store)
}
st, err := os.Stat(store)
if err != nil {
return nil, fmt.Errorf("overlay: Can't stat imageStore dir %s: %v", store, err)
}
if !st.IsDir() {
return nil, fmt.Errorf("overlay: image path %q must be a directory", store)
}
o.imageStores = append(o.imageStores, store)
}
default:
return nil, fmt.Errorf("overlay2: Unknown option %s", key)
return nil, fmt.Errorf("overlay: Unknown option %s", key)
}
}
return o, nil
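For illustration, option strings as they would reach `parseOptions` above (values are hypothetical; imagestore paths must be absolute, existing directories, and several stores may be separated by commas):

```go
package main

import "fmt"

func main() {
	// Hypothetical graph-driver options; both the overlay.* and overlay2.*
	// spellings of the kernel-check key are accepted.
	options := []string{
		"overlay.override_kernel_check=true",
		"overlay.imagestore=/usr/share/containers/storage",
	}
	fmt.Println(options)
}
```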
@@ -200,7 +235,7 @@ func supportsOverlay() error {
}
func (d *Driver) String() string {
return driverName
return d.name
}
// Status returns current driver information in a two dimensional string array.
@@ -211,9 +246,9 @@ func (d *Driver) Status() [][2]string {
}
}
// GetMetadata returns meta data about the overlay driver such as
// Metadata returns meta data about the overlay driver such as
// LowerDir, UpperDir, WorkDir and MergeDir used to store data.
func (d *Driver) GetMetadata(id string) (map[string]string, error) {
func (d *Driver) Metadata(id string) (map[string]string, error) {
dir := d.dir(id)
if _, err := os.Stat(dir); err != nil {
return nil, err
@@ -342,8 +377,18 @@ func (d *Driver) getLower(parent string) (string, error) {
return strings.Join(lowers, ":"), nil
}
func (d *Driver) dir(id string) string {
return path.Join(d.home, id)
func (d *Driver) dir(val string) string {
newpath := path.Join(d.home, val)
if _, err := os.Stat(newpath); err != nil {
for _, p := range d.AdditionalImageStores() {
l := path.Join(p, d.name, val)
_, err = os.Stat(l)
if err == nil {
return l
}
}
}
return newpath
}
func (d *Driver) getLowerDirs(id string) ([]string, error) {
@@ -351,11 +396,12 @@ func (d *Driver) getLowerDirs(id string) ([]string, error) {
lowers, err := ioutil.ReadFile(path.Join(d.dir(id), lowerFile))
if err == nil {
for _, s := range strings.Split(string(lowers), ":") {
lp, err := os.Readlink(path.Join(d.home, s))
lower := d.dir(s)
lp, err := os.Readlink(lower)
if err != nil {
return nil, err
}
lowersArray = append(lowersArray, path.Clean(path.Join(d.home, "link", lp)))
lowersArray = append(lowersArray, path.Clean(d.dir(path.Join("link", lp))))
}
} else if !os.IsNotExist(err) {
return nil, err
@@ -396,6 +442,31 @@ func (d *Driver) Get(id string, mountLabel string) (s string, err error) {
return "", err
}
newlowers := ""
for _, l := range strings.Split(string(lowers), ":") {
lower := ""
newpath := path.Join(d.home, l)
if _, err := os.Stat(newpath); err != nil {
for _, p := range d.AdditionalImageStores() {
lower = path.Join(p, d.name, l)
if _, err2 := os.Stat(lower); err2 == nil {
break
}
lower = ""
}
if lower == "" {
return "", fmt.Errorf("Can't stat lower layer %q: %v", newpath, err)
}
} else {
lower = l
}
if newlowers == "" {
newlowers = lower
} else {
newlowers = newlowers + ":" + lower
}
}
mergedDir := path.Join(dir, "merged")
if count := d.ctr.Increment(mergedDir); count > 1 {
return mergedDir, nil
@@ -409,7 +480,7 @@ func (d *Driver) Get(id string, mountLabel string) (s string, err error) {
}()
workDir := path.Join(dir, "work")
opts := fmt.Sprintf("lowerdir=%s,upperdir=%s,workdir=%s", string(lowers), path.Join(id, "diff"), path.Join(id, "work"))
opts := fmt.Sprintf("lowerdir=%s,upperdir=%s,workdir=%s", newlowers, path.Join(id, "diff"), path.Join(id, "work"))
mountLabel = label.FormatMountLabel(opts, mountLabel)
if len(mountLabel) > syscall.Getpagesize() {
return "", fmt.Errorf("cannot mount layer, mount label too large %d", len(mountLabel))
@@ -512,3 +583,8 @@ func (d *Driver) Changes(id, parent string) ([]archive.Change, error) {
return archive.OverlayChanges(layers, diffPath)
}
// AdditionalImageStores returns additional image stores supported by the driver
func (d *Driver) AdditionalImageStores() []string {
return d.opts.imageStores
}

View File

@@ -1,6 +1,6 @@
// +build linux
package overlay2
package overlay
import (
"crypto/rand"

View File

@@ -144,12 +144,12 @@ func (d *graphDriverProxy) Status() [][2]string {
return ret.Status
}
func (d *graphDriverProxy) GetMetadata(id string) (map[string]string, error) {
func (d *graphDriverProxy) Metadata(id string) (map[string]string, error) {
args := &graphDriverRequest{
ID: id,
}
var ret graphDriverResponse
if err := d.client.Call("GraphDriver.GetMetadata", args, &ret); err != nil {
if err := d.client.Call("GraphDriver.Metadata", args, &ret); err != nil {
return nil, err
}
if ret.Err != "" {

View File

@@ -3,6 +3,6 @@
package register
import (
// register the overlay2 graphdriver
_ "github.com/containers/storage/drivers/overlay2"
// register the overlay graphdriver
_ "github.com/containers/storage/drivers/overlay"
)

View File

@@ -58,8 +58,8 @@ func (d *Driver) Status() [][2]string {
return nil
}
// GetMetadata is used for implementing the graphdriver.ProtoDriver interface. VFS does not currently have any meta data.
func (d *Driver) GetMetadata(id string) (map[string]string, error) {
// Metadata is used for implementing the graphdriver.ProtoDriver interface. VFS does not currently have any meta data.
func (d *Driver) Metadata(id string) (map[string]string, error) {
return nil, nil
}
@@ -143,3 +143,9 @@ func (d *Driver) Exists(id string) bool {
_, err := os.Stat(d.dir(id))
return err == nil
}
// AdditionalImageStores returns additional image stores supported by the driver
func (d *Driver) AdditionalImageStores() []string {
var imageStores []string
return imageStores
}

View File

@@ -133,7 +133,7 @@ func (d *Driver) create(id, parent, mountLabel string, readOnly bool, storageOpt
var layerChain []string
if rPId != "" {
parentPath, err := hcsshim.GetLayerMountPath(d.info, rPId)
parentPath, err := hcsshim.LayerMountPath(d.info, rPId)
if err != nil {
return err
}
@@ -248,7 +248,7 @@ func (d *Driver) Get(id, mountLabel string) (string, error) {
return "", err
}
mountPath, err := hcsshim.GetLayerMountPath(d.info, rID)
mountPath, err := hcsshim.LayerMountPath(d.info, rID)
if err != nil {
d.ctr.Decrement(rID)
if err2 := hcsshim.DeactivateLayer(d.info, rID); err2 != nil {
@@ -403,7 +403,7 @@ func (d *Driver) ApplyDiff(id, parent string, diff archive.Reader) (int64, error
if err != nil {
return 0, err
}
parentPath, err := hcsshim.GetLayerMountPath(d.info, rPId)
parentPath, err := hcsshim.LayerMountPath(d.info, rPId)
if err != nil {
return 0, err
}
@@ -446,8 +446,8 @@ func (d *Driver) DiffSize(id, parent string) (size int64, err error) {
return archive.ChangesSize(layerFs, changes), nil
}
// GetMetadata returns custom driver information.
func (d *Driver) GetMetadata(id string) (map[string]string, error) {
// Metadata returns custom driver information.
func (d *Driver) Metadata(id string) (map[string]string, error) {
m := make(map[string]string)
m["dir"] = d.dir(id)
return m, nil


@@ -210,8 +210,8 @@ func (d *Driver) Status() [][2]string {
}
}
// GetMetadata returns image/container metadata related to graph driver
func (d *Driver) GetMetadata(id string) (map[string]string, error) {
// Metadata returns image/container metadata related to graph driver
func (d *Driver) Metadata(id string) (map[string]string, error) {
return nil, nil
}
@@ -403,3 +403,9 @@ func (d *Driver) Exists(id string) bool {
defer d.Unlock()
return d.filesystemsCache[d.zfsPath(id)] == true
}
// AdditionalImageStores returns additional image stores supported by the driver
func (d *Driver) AdditionalImageStores() []string {
var imageStores []string
return imageStores
}


@@ -2,7 +2,6 @@ package storage
import (
"encoding/json"
"errors"
"io/ioutil"
"os"
"path/filepath"
@@ -10,6 +9,8 @@ import (
"github.com/containers/storage/pkg/ioutils"
"github.com/containers/storage/pkg/stringid"
"github.com/containers/storage/pkg/truncindex"
"github.com/pkg/errors"
)
var (
@@ -48,11 +49,32 @@ type Image struct {
Flags map[string]interface{} `json:"flags,omitempty"`
}
// ROImageStore provides bookkeeping for information about Images.
type ROImageStore interface {
ROFileBasedStore
ROMetadataStore
ROBigDataStore
// Exists checks if there is an image with the given ID or name.
Exists(id string) bool
// Get retrieves information about an image given an ID or name.
Get(id string) (*Image, error)
// Lookup attempts to translate a name to an ID. Most methods do this
// implicitly.
Lookup(name string) (string, error)
// Images returns a slice enumerating the known images.
Images() ([]Image, error)
}
// ImageStore provides bookkeeping for information about Images.
type ImageStore interface {
FileBasedStore
MetadataStore
BigDataStore
ROImageStore
RWFileBasedStore
RWMetadataStore
RWBigDataStore
FlaggableStore
// Create creates an image that has a specified ID (or a random one) and
@@ -64,36 +86,28 @@ type ImageStore interface {
// supplied values.
SetNames(id string, names []string) error
// Exists checks if there is an image with the given ID or name.
Exists(id string) bool
// Get retrieves information about an image given an ID or name.
Get(id string) (*Image, error)
// Delete removes the record of the image.
Delete(id string) error
// Wipe removes records of all images.
Wipe() error
// Lookup attempts to translate a name to an ID. Most methods do this
// implicitly.
Lookup(name string) (string, error)
// Images returns a slice enumerating the known images.
Images() ([]Image, error)
}
type imageStore struct {
lockfile Locker
dir string
images []Image
images []*Image
idindex *truncindex.TruncIndex
byid map[string]*Image
byname map[string]*Image
}
func (r *imageStore) Images() ([]Image, error) {
return r.images, nil
images := make([]Image, len(r.images))
for i := range r.images {
images[i] = *(r.images[i])
}
return images, nil
}
func (r *imageStore) imagespath() string {
@@ -109,38 +123,46 @@ func (r *imageStore) datapath(id, key string) string {
}
func (r *imageStore) Load() error {
needSave := false
shouldSave := false
rpath := r.imagespath()
data, err := ioutil.ReadFile(rpath)
if err != nil && !os.IsNotExist(err) {
return err
}
images := []Image{}
images := []*Image{}
idlist := []string{}
ids := make(map[string]*Image)
names := make(map[string]*Image)
if err = json.Unmarshal(data, &images); len(data) == 0 || err == nil {
for n, image := range images {
ids[image.ID] = &images[n]
ids[image.ID] = images[n]
idlist = append(idlist, image.ID)
for _, name := range image.Names {
if conflict, ok := names[name]; ok {
r.removeName(conflict, name)
needSave = true
shouldSave = true
}
names[name] = &images[n]
names[name] = images[n]
}
}
}
if shouldSave && !r.IsReadWrite() {
return errors.New("image store assigns the same name to multiple images")
}
r.images = images
r.idindex = truncindex.NewTruncIndex(idlist)
r.byid = ids
r.byname = names
if needSave {
r.Touch()
if shouldSave {
return r.Save()
}
return nil
}
func (r *imageStore) Save() error {
if !r.IsReadWrite() {
return errors.Wrapf(ErrStoreIsReadOnly, "not allowed to modify the image store at %q", r.imagespath())
}
rpath := r.imagespath()
if err := os.MkdirAll(filepath.Dir(rpath), 0700); err != nil {
return err
@@ -149,6 +171,7 @@ func (r *imageStore) Save() error {
if err != nil {
return err
}
defer r.Touch()
return ioutils.AtomicWriteFile(rpath, jdata, 0600)
}
@@ -165,7 +188,7 @@ func newImageStore(dir string) (ImageStore, error) {
istore := imageStore{
lockfile: lockfile,
dir: dir,
images: []Image{},
images: []*Image{},
byid: make(map[string]*Image),
byname: make(map[string]*Image),
}
@@ -175,31 +198,66 @@ func newImageStore(dir string) (ImageStore, error) {
return &istore, nil
}
func (r *imageStore) ClearFlag(id string, flag string) error {
if image, ok := r.byname[id]; ok {
id = image.ID
func newROImageStore(dir string) (ROImageStore, error) {
lockfile, err := GetROLockfile(filepath.Join(dir, "images.lock"))
if err != nil {
return nil, err
}
if _, ok := r.byid[id]; !ok {
lockfile.Lock()
defer lockfile.Unlock()
istore := imageStore{
lockfile: lockfile,
dir: dir,
images: []*Image{},
byid: make(map[string]*Image),
byname: make(map[string]*Image),
}
if err := istore.Load(); err != nil {
return nil, err
}
return &istore, nil
}
func (r *imageStore) lookup(id string) (*Image, bool) {
if image, ok := r.byid[id]; ok {
return image, ok
} else if image, ok := r.byname[id]; ok {
return image, ok
} else if longid, err := r.idindex.Get(id); err == nil {
image, ok := r.byid[longid]
return image, ok
}
return nil, false
}
func (r *imageStore) ClearFlag(id string, flag string) error {
if !r.IsReadWrite() {
return errors.Wrapf(ErrStoreIsReadOnly, "not allowed to clear flags on images at %q", r.imagespath())
}
image, ok := r.lookup(id)
if !ok {
return ErrImageUnknown
}
image := r.byid[id]
delete(image.Flags, flag)
return r.Save()
}
func (r *imageStore) SetFlag(id string, flag string, value interface{}) error {
if image, ok := r.byname[id]; ok {
id = image.ID
if !r.IsReadWrite() {
return errors.Wrapf(ErrStoreIsReadOnly, "not allowed to set flags on images at %q", r.imagespath())
}
if _, ok := r.byid[id]; !ok {
image, ok := r.lookup(id)
if !ok {
return ErrImageUnknown
}
image := r.byid[id]
image.Flags[flag] = value
return r.Save()
}
func (r *imageStore) Create(id string, names []string, layer, metadata string) (image *Image, err error) {
if !r.IsReadWrite() {
return nil, errors.Wrapf(ErrStoreIsReadOnly, "not allowed to create new images at %q", r.imagespath())
}
if id == "" {
id = stringid.GenerateRandomID()
_, idInUse := r.byid[id]
@@ -217,7 +275,7 @@ func (r *imageStore) Create(id string, names []string, layer, metadata string) (
}
}
if err == nil {
newImage := Image{
image = &Image{
ID: id,
Names: names,
TopLayer: layer,
@@ -226,8 +284,8 @@ func (r *imageStore) Create(id string, names []string, layer, metadata string) (
BigDataSizes: make(map[string]int64),
Flags: make(map[string]interface{}),
}
r.images = append(r.images, newImage)
image = &r.images[len(r.images)-1]
r.images = append(r.images, image)
r.idindex.Add(id)
r.byid[id] = image
for _, name := range names {
r.byname[name] = image
@@ -237,21 +295,18 @@ func (r *imageStore) Create(id string, names []string, layer, metadata string) (
return image, err
}
func (r *imageStore) GetMetadata(id string) (string, error) {
if image, ok := r.byname[id]; ok {
id = image.ID
}
if image, ok := r.byid[id]; ok {
func (r *imageStore) Metadata(id string) (string, error) {
if image, ok := r.lookup(id); ok {
return image.Metadata, nil
}
return "", ErrImageUnknown
}
func (r *imageStore) SetMetadata(id, metadata string) error {
if image, ok := r.byname[id]; ok {
id = image.ID
if !r.IsReadWrite() {
return errors.Wrapf(ErrStoreIsReadOnly, "not allowed to modify image metadata at %q", r.imagespath())
}
if image, ok := r.byid[id]; ok {
if image, ok := r.lookup(id); ok {
image.Metadata = metadata
return r.Save()
}
@@ -259,20 +314,14 @@ func (r *imageStore) SetMetadata(id, metadata string) error {
}
func (r *imageStore) removeName(image *Image, name string) {
newNames := []string{}
for _, oldName := range image.Names {
if oldName != name {
newNames = append(newNames, oldName)
}
}
image.Names = newNames
image.Names = stringSliceWithoutValue(image.Names, name)
}
func (r *imageStore) SetNames(id string, names []string) error {
if image, ok := r.byname[id]; ok {
id = image.ID
if !r.IsReadWrite() {
return errors.Wrapf(ErrStoreIsReadOnly, "not allowed to change image name assignments at %q", r.imagespath())
}
if image, ok := r.byid[id]; ok {
if image, ok := r.lookup(id); ok {
for _, name := range image.Names {
delete(r.byname, name)
}
@@ -289,125 +338,109 @@ func (r *imageStore) SetNames(id string, names []string) error {
}
func (r *imageStore) Delete(id string) error {
if image, ok := r.byname[id]; ok {
id = image.ID
if !r.IsReadWrite() {
return errors.Wrapf(ErrStoreIsReadOnly, "not allowed to delete images at %q", r.imagespath())
}
if _, ok := r.byid[id]; !ok {
image, ok := r.lookup(id)
if !ok {
return ErrImageUnknown
}
if image, ok := r.byid[id]; ok {
newImages := []Image{}
for _, candidate := range r.images {
if candidate.ID != id {
newImages = append(newImages, candidate)
}
}
delete(r.byid, image.ID)
for _, name := range image.Names {
delete(r.byname, name)
}
r.images = newImages
if err := r.Save(); err != nil {
return err
}
if err := os.RemoveAll(r.datadir(id)); err != nil {
return err
id = image.ID
newImages := []*Image{}
for _, candidate := range r.images {
if candidate.ID != id {
newImages = append(newImages, candidate)
}
}
delete(r.byid, id)
r.idindex.Delete(id)
for _, name := range image.Names {
delete(r.byname, name)
}
r.images = newImages
if err := r.Save(); err != nil {
return err
}
if err := os.RemoveAll(r.datadir(id)); err != nil {
return err
}
return nil
}
func (r *imageStore) Get(id string) (*Image, error) {
if image, ok := r.byname[id]; ok {
return image, nil
}
if image, ok := r.byid[id]; ok {
if image, ok := r.lookup(id); ok {
return image, nil
}
return nil, ErrImageUnknown
}
func (r *imageStore) Lookup(name string) (id string, err error) {
image, ok := r.byname[name]
if !ok {
image, ok = r.byid[name]
if !ok {
return "", ErrImageUnknown
}
if image, ok := r.lookup(name); ok {
return image.ID, nil
}
return image.ID, nil
return "", ErrImageUnknown
}
func (r *imageStore) Exists(id string) bool {
if _, ok := r.byname[id]; ok {
return true
}
if _, ok := r.byid[id]; ok {
return true
}
return false
_, ok := r.lookup(id)
return ok
}
func (r *imageStore) GetBigData(id, key string) ([]byte, error) {
if img, ok := r.byname[id]; ok {
id = img.ID
}
if _, ok := r.byid[id]; !ok {
func (r *imageStore) BigData(id, key string) ([]byte, error) {
image, ok := r.lookup(id)
if !ok {
return nil, ErrImageUnknown
}
return ioutil.ReadFile(r.datapath(id, key))
return ioutil.ReadFile(r.datapath(image.ID, key))
}
func (r *imageStore) GetBigDataSize(id, key string) (int64, error) {
if img, ok := r.byname[id]; ok {
id = img.ID
}
if _, ok := r.byid[id]; !ok {
func (r *imageStore) BigDataSize(id, key string) (int64, error) {
image, ok := r.lookup(id)
if !ok {
return -1, ErrImageUnknown
}
if size, ok := r.byid[id].BigDataSizes[key]; ok {
if size, ok := image.BigDataSizes[key]; ok {
return size, nil
}
return -1, ErrSizeUnknown
}
func (r *imageStore) GetBigDataNames(id string) ([]string, error) {
if img, ok := r.byname[id]; ok {
id = img.ID
}
if _, ok := r.byid[id]; !ok {
func (r *imageStore) BigDataNames(id string) ([]string, error) {
image, ok := r.lookup(id)
if !ok {
return nil, ErrImageUnknown
}
return r.byid[id].BigDataNames, nil
return image.BigDataNames, nil
}
func (r *imageStore) SetBigData(id, key string, data []byte) error {
if img, ok := r.byname[id]; ok {
id = img.ID
if !r.IsReadWrite() {
return errors.Wrapf(ErrStoreIsReadOnly, "not allowed to save data items associated with images at %q", r.imagespath())
}
if _, ok := r.byid[id]; !ok {
image, ok := r.lookup(id)
if !ok {
return ErrImageUnknown
}
if err := os.MkdirAll(r.datadir(id), 0700); err != nil {
if err := os.MkdirAll(r.datadir(image.ID), 0700); err != nil {
return err
}
err := ioutils.AtomicWriteFile(r.datapath(id, key), data, 0600)
err := ioutils.AtomicWriteFile(r.datapath(image.ID, key), data, 0600)
if err == nil {
add := true
save := false
oldSize, ok := r.byid[id].BigDataSizes[key]
r.byid[id].BigDataSizes[key] = int64(len(data))
if !ok || oldSize != r.byid[id].BigDataSizes[key] {
oldSize, ok := image.BigDataSizes[key]
image.BigDataSizes[key] = int64(len(data))
if !ok || oldSize != image.BigDataSizes[key] {
save = true
}
for _, name := range r.byid[id].BigDataNames {
for _, name := range image.BigDataNames {
if name == key {
add = false
break
}
}
if add {
r.byid[id].BigDataNames = append(r.byid[id].BigDataNames, key)
image.BigDataNames = append(image.BigDataNames, key)
save = true
}
if save {
@@ -418,6 +451,9 @@ func (r *imageStore) SetBigData(id, key string, data []byte) error {
}
func (r *imageStore) Wipe() error {
if !r.IsReadWrite() {
return errors.Wrapf(ErrStoreIsReadOnly, "not allowed to delete images at %q", r.imagespath())
}
ids := []string{}
for id := range r.byid {
ids = append(ids, id)
@@ -446,6 +482,10 @@ func (r *imageStore) Modified() (bool, error) {
return r.lockfile.Modified()
}
func (r *imageStore) IsReadWrite() bool {
return r.lockfile.IsReadWrite()
}
func (r *imageStore) TouchedSince(when time.Time) bool {
return r.lockfile.TouchedSince(when)
}
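The rewritten image store funnels every resolution through one lookup helper: exact ID first, then name, then unique short-ID prefix via the truncindex. A simplified, hedged sketch of that order (mapIndex is a toy stand-in for truncindex.TruncIndex):

```go
package main

import "fmt"

type Image struct{ ID string }

// mapIndex is a toy stand-in for truncindex.TruncIndex: Get resolves a
// unique short prefix to a full ID, or returns an error.
type mapIndex map[string]string

func (m mapIndex) Get(prefix string) (string, error) {
	if id, ok := m[prefix]; ok {
		return id, nil
	}
	return "", fmt.Errorf("no such prefix %q", prefix)
}

type imageStore struct {
	byid    map[string]*Image
	byname  map[string]*Image
	idindex mapIndex
}

// lookup mirrors the resolution order in the diff: exact ID first,
// then name, then unambiguous ID prefix.
func (r *imageStore) lookup(id string) (*Image, bool) {
	if image, ok := r.byid[id]; ok {
		return image, ok
	}
	if image, ok := r.byname[id]; ok {
		return image, ok
	}
	if longid, err := r.idindex.Get(id); err == nil {
		image, ok := r.byid[longid]
		return image, ok
	}
	return nil, false
}

func main() {
	img := &Image{ID: "deadbeef0123"}
	r := &imageStore{
		byid:    map[string]*Image{img.ID: img},
		byname:  map[string]*Image{"fedora": img},
		idindex: mapIndex{"dead": img.ID},
	}
	for _, key := range []string{"deadbeef0123", "fedora", "dead", "nope"} {
		image, ok := r.lookup(key)
		fmt.Println(key, ok, image != nil)
	}
}
```

This is also why Exists, Get, Lookup, and the BigData accessors in the diff collapse into short wrappers around lookup.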


@@ -4,7 +4,6 @@ import (
"bytes"
"compress/gzip"
"encoding/json"
"errors"
"io"
"io/ioutil"
"os"
@@ -15,6 +14,8 @@ import (
"github.com/containers/storage/pkg/archive"
"github.com/containers/storage/pkg/ioutils"
"github.com/containers/storage/pkg/stringid"
"github.com/containers/storage/pkg/truncindex"
"github.com/pkg/errors"
"github.com/vbatts/tar-split/tar/asm"
"github.com/vbatts/tar-split/tar/storage"
)
@@ -74,28 +75,12 @@ type layerMountPoint struct {
MountCount int `json:"count"`
}
// LayerStore wraps a graph driver, adding the ability to refer to layers by
// ROLayerStore wraps a graph driver, adding the ability to refer to layers by
// name, and keeping track of parent-child relationships, along with a list of
// all known layers.
type LayerStore interface {
FileBasedStore
MetadataStore
FlaggableStore
// Create creates a new layer, optionally giving it a specified ID rather than
// a randomly-generated one, either inheriting data from another specified
// layer or the empty base layer. The new layer can optionally be given names
// and have an SELinux label specified for use when mounting it. Some
// underlying drivers can accept a "size" option. At this time, most
// underlying drivers do not themselves distinguish between writeable
// and read-only layers.
Create(id, parent string, names []string, mountLabel string, options map[string]string, writeable bool) (*Layer, error)
// CreateWithFlags combines the functions of Create and SetFlag.
CreateWithFlags(id, parent string, names []string, mountLabel string, options map[string]string, writeable bool, flags map[string]interface{}) (layer *Layer, err error)
// Put combines the functions of CreateWithFlags and ApplyDiff.
Put(id, parent string, names []string, mountLabel string, options map[string]string, writeable bool, flags map[string]interface{}, diff archive.Reader) (*Layer, int64, error)
type ROLayerStore interface {
ROFileBasedStore
ROMetadataStore
// Exists checks if a layer with the specified name or ID is known.
Exists(id string) bool
@@ -103,28 +88,10 @@ type LayerStore interface {
// Get retrieves information about a layer given an ID or name.
Get(id string) (*Layer, error)
// SetNames replaces the list of names associated with a layer with the
// supplied values.
SetNames(id string, names []string) error
// Status returns a slice of key-value pairs, suitable for human consumption,
// relaying whatever status information the underlying driver can share.
Status() ([][2]string, error)
// Delete deletes a layer with the specified name or ID.
Delete(id string) error
// Wipe deletes all layers.
Wipe() error
// Mount mounts a layer for use. If the specified layer is the parent of other
// layers, it should not be written to. An SELinux label to be applied to the
// mount can be specified to override the one configured for the layer.
Mount(id, mountLabel string) (string, error)
// Unmount unmounts a layer when it is no longer in use.
Unmount(id string) error
// Changes returns a slice of Change structures, which contain a pathname
// (Path) and a description of what sort of change (Kind) was made by the
// layer (either ChangeModify, ChangeAdd, or ChangeDelete), relative to a
@@ -141,10 +108,6 @@ type LayerStore interface {
// produced by Diff.
DiffSize(from, to string) (int64, error)
// ApplyDiff reads a tarstream which was created by a previous call to Diff and
// applies its changes to a specified layer.
ApplyDiff(to string, diff archive.Reader) (int64, error)
// Lookup attempts to translate a name to an ID. Most methods do this
// implicitly.
Lookup(name string) (string, error)
@@ -153,20 +116,71 @@ type LayerStore interface {
Layers() ([]Layer, error)
}
// LayerStore wraps a graph driver, adding the ability to refer to layers by
// name, and keeping track of parent-child relationships, along with a list of
// all known layers.
type LayerStore interface {
ROLayerStore
RWFileBasedStore
RWMetadataStore
FlaggableStore
// Create creates a new layer, optionally giving it a specified ID rather than
// a randomly-generated one, either inheriting data from another specified
// layer or the empty base layer. The new layer can optionally be given names
// and have an SELinux label specified for use when mounting it. Some
// underlying drivers can accept a "size" option. At this time, most
// underlying drivers do not themselves distinguish between writeable
// and read-only layers.
Create(id, parent string, names []string, mountLabel string, options map[string]string, writeable bool) (*Layer, error)
// CreateWithFlags combines the functions of Create and SetFlag.
CreateWithFlags(id, parent string, names []string, mountLabel string, options map[string]string, writeable bool, flags map[string]interface{}) (layer *Layer, err error)
// Put combines the functions of CreateWithFlags and ApplyDiff.
Put(id, parent string, names []string, mountLabel string, options map[string]string, writeable bool, flags map[string]interface{}, diff archive.Reader) (*Layer, int64, error)
// SetNames replaces the list of names associated with a layer with the
// supplied values.
SetNames(id string, names []string) error
// Delete deletes a layer with the specified name or ID.
Delete(id string) error
// Wipe deletes all layers.
Wipe() error
// Mount mounts a layer for use. If the specified layer is the parent of other
// layers, it should not be written to. An SELinux label to be applied to the
// mount can be specified to override the one configured for the layer.
Mount(id, mountLabel string) (string, error)
// Unmount unmounts a layer when it is no longer in use.
Unmount(id string) error
// ApplyDiff reads a tarstream which was created by a previous call to Diff and
// applies its changes to a specified layer.
ApplyDiff(to string, diff archive.Reader) (int64, error)
}
type layerStore struct {
lockfile Locker
rundir string
driver drivers.Driver
layerdir string
layers []Layer
layers []*Layer
idindex *truncindex.TruncIndex
byid map[string]*Layer
byname map[string]*Layer
byparent map[string][]*Layer
bymount map[string]*Layer
}
func (r *layerStore) Layers() ([]Layer, error) {
return r.layers, nil
layers := make([]Layer, len(r.layers))
for i := range r.layers {
layers[i] = *(r.layers[i])
}
return layers, nil
}
func (r *layerStore) mountspath() string {
@@ -178,34 +192,39 @@ func (r *layerStore) layerspath() string {
}
func (r *layerStore) Load() error {
needSave := false
shouldSave := false
rpath := r.layerspath()
data, err := ioutil.ReadFile(rpath)
if err != nil && !os.IsNotExist(err) {
return err
}
layers := []Layer{}
layers := []*Layer{}
idlist := []string{}
ids := make(map[string]*Layer)
names := make(map[string]*Layer)
mounts := make(map[string]*Layer)
parents := make(map[string][]*Layer)
if err = json.Unmarshal(data, &layers); len(data) == 0 || err == nil {
for n, layer := range layers {
ids[layer.ID] = &layers[n]
ids[layer.ID] = layers[n]
idlist = append(idlist, layer.ID)
for _, name := range layer.Names {
if conflict, ok := names[name]; ok {
r.removeName(conflict, name)
needSave = true
shouldSave = true
}
names[name] = &layers[n]
names[name] = layers[n]
}
if pslice, ok := parents[layer.Parent]; ok {
parents[layer.Parent] = append(pslice, &layers[n])
parents[layer.Parent] = append(pslice, layers[n])
} else {
parents[layer.Parent] = []*Layer{&layers[n]}
parents[layer.Parent] = []*Layer{layers[n]}
}
}
}
if shouldSave && !r.IsReadWrite() {
return errors.New("layer store assigns the same name to multiple layers")
}
mpath := r.mountspath()
data, err = ioutil.ReadFile(mpath)
if err != nil && !os.IsNotExist(err) {
@@ -224,33 +243,37 @@ func (r *layerStore) Load() error {
}
}
r.layers = layers
r.idindex = truncindex.NewTruncIndex(idlist)
r.byid = ids
r.byname = names
r.byparent = parents
r.bymount = mounts
err = nil
// Last step: try to remove anything that a previous user of this
// storage area marked for deletion but didn't manage to actually
// delete.
for _, layer := range r.layers {
if cleanup, ok := layer.Flags[incompleteFlag]; ok {
if b, ok := cleanup.(bool); ok && b {
err = r.Delete(layer.ID)
if err != nil {
break
// Last step: if we're writable, try to remove anything that a previous
// user of this storage area marked for deletion but didn't manage to
// actually delete.
if r.IsReadWrite() {
for _, layer := range r.layers {
if cleanup, ok := layer.Flags[incompleteFlag]; ok {
if b, ok := cleanup.(bool); ok && b {
err = r.Delete(layer.ID)
if err != nil {
break
}
shouldSave = true
}
needSave = true
}
}
}
if needSave {
r.Touch()
return r.Save()
if shouldSave {
return r.Save()
}
}
return err
}
func (r *layerStore) Save() error {
if !r.IsReadWrite() {
return errors.Wrapf(ErrStoreIsReadOnly, "not allowed to modify the layer store at %q", r.layerspath())
}
rpath := r.layerspath()
if err := os.MkdirAll(filepath.Dir(rpath), 0700); err != nil {
return err
@@ -280,6 +303,7 @@ func (r *layerStore) Save() error {
if err := ioutils.AtomicWriteFile(rpath, jldata, 0600); err != nil {
return err
}
defer r.Touch()
return ioutils.AtomicWriteFile(mpath, jmdata, 0600)
}
@@ -304,7 +328,6 @@ func newLayerStore(rundir string, layerdir string, driver drivers.Driver) (Layer
byid: make(map[string]*Layer),
bymount: make(map[string]*Layer),
byname: make(map[string]*Layer),
byparent: make(map[string][]*Layer),
}
if err := rlstore.Load(); err != nil {
return nil, err
@@ -312,26 +335,60 @@ func newLayerStore(rundir string, layerdir string, driver drivers.Driver) (Layer
return &rlstore, nil
}
func (r *layerStore) ClearFlag(id string, flag string) error {
if layer, ok := r.byname[id]; ok {
id = layer.ID
func newROLayerStore(rundir string, layerdir string, driver drivers.Driver) (ROLayerStore, error) {
lockfile, err := GetROLockfile(filepath.Join(layerdir, "layers.lock"))
if err != nil {
return nil, err
}
if _, ok := r.byid[id]; !ok {
lockfile.Lock()
defer lockfile.Unlock()
rlstore := layerStore{
lockfile: lockfile,
driver: driver,
rundir: rundir,
layerdir: layerdir,
byid: make(map[string]*Layer),
bymount: make(map[string]*Layer),
byname: make(map[string]*Layer),
}
if err := rlstore.Load(); err != nil {
return nil, err
}
return &rlstore, nil
}
func (r *layerStore) lookup(id string) (*Layer, bool) {
if layer, ok := r.byid[id]; ok {
return layer, ok
} else if layer, ok := r.byname[id]; ok {
return layer, ok
} else if longid, err := r.idindex.Get(id); err == nil {
layer, ok := r.byid[longid]
return layer, ok
}
return nil, false
}
func (r *layerStore) ClearFlag(id string, flag string) error {
if !r.IsReadWrite() {
return errors.Wrapf(ErrStoreIsReadOnly, "not allowed to clear flags on layers at %q", r.layerspath())
}
layer, ok := r.lookup(id)
if !ok {
return ErrLayerUnknown
}
layer := r.byid[id]
delete(layer.Flags, flag)
return r.Save()
}
func (r *layerStore) SetFlag(id string, flag string, value interface{}) error {
if layer, ok := r.byname[id]; ok {
id = layer.ID
if !r.IsReadWrite() {
return errors.Wrapf(ErrStoreIsReadOnly, "not allowed to set flags on layers at %q", r.layerspath())
}
if _, ok := r.byid[id]; !ok {
layer, ok := r.lookup(id)
if !ok {
return ErrLayerUnknown
}
layer := r.byid[id]
layer.Flags[flag] = value
return r.Save()
}
@@ -341,6 +398,9 @@ func (r *layerStore) Status() ([][2]string, error) {
}
func (r *layerStore) Put(id, parent string, names []string, mountLabel string, options map[string]string, writeable bool, flags map[string]interface{}, diff archive.Reader) (layer *Layer, size int64, err error) {
if !r.IsReadWrite() {
return nil, -1, errors.Wrapf(ErrStoreIsReadOnly, "not allowed to create new layers at %q", r.layerspath())
}
size = -1
if err := os.MkdirAll(r.rundir, 0700); err != nil {
return nil, -1, err
@@ -348,8 +408,10 @@ func (r *layerStore) Put(id, parent string, names []string, mountLabel string, o
if err := os.MkdirAll(r.layerdir, 0700); err != nil {
return nil, -1, err
}
if parentLayer, ok := r.byname[parent]; ok {
parent = parentLayer.ID
if parent != "" {
if parentLayer, ok := r.lookup(parent); ok {
parent = parentLayer.ID
}
}
if id == "" {
id = stringid.GenerateRandomID()
@@ -373,25 +435,19 @@ func (r *layerStore) Put(id, parent string, names []string, mountLabel string, o
err = r.driver.Create(id, parent, mountLabel, options)
}
if err == nil {
newLayer := Layer{
layer = &Layer{
ID: id,
Parent: parent,
Names: names,
MountLabel: mountLabel,
Flags: make(map[string]interface{}),
}
r.layers = append(r.layers, newLayer)
layer = &r.layers[len(r.layers)-1]
r.layers = append(r.layers, layer)
r.idindex.Add(id)
r.byid[id] = layer
for _, name := range names {
r.byname[name] = layer
}
if pslice, ok := r.byparent[parent]; ok {
pslice = append(pslice, layer)
r.byparent[parent] = pslice
} else {
r.byparent[parent] = []*Layer{layer}
}
for flag, value := range flags {
layer.Flags[flag] = value
}
@@ -436,48 +492,45 @@ func (r *layerStore) Create(id, parent string, names []string, mountLabel string
}
func (r *layerStore) Mount(id, mountLabel string) (string, error) {
if layer, ok := r.byname[id]; ok {
id = layer.ID
if !r.IsReadWrite() {
return "", errors.Wrapf(ErrStoreIsReadOnly, "not allowed to update mount locations for layers at %q", r.mountspath())
}
if _, ok := r.byid[id]; !ok {
layer, ok := r.lookup(id)
if !ok {
return "", ErrLayerUnknown
}
layer := r.byid[id]
if layer.MountCount > 0 {
layer.MountCount++
return layer.MountPoint, r.Save()
}
if mountLabel == "" {
if layer, ok := r.byid[id]; ok {
mountLabel = layer.MountLabel
}
mountLabel = layer.MountLabel
}
mountpoint, err := r.driver.Get(id, mountLabel)
if mountpoint != "" && err == nil {
if layer, ok := r.byid[id]; ok {
if layer.MountPoint != "" {
delete(r.bymount, layer.MountPoint)
}
layer.MountPoint = filepath.Clean(mountpoint)
layer.MountCount++
r.bymount[layer.MountPoint] = layer
err = r.Save()
if layer.MountPoint != "" {
delete(r.bymount, layer.MountPoint)
}
layer.MountPoint = filepath.Clean(mountpoint)
layer.MountCount++
r.bymount[layer.MountPoint] = layer
err = r.Save()
}
return mountpoint, err
}
func (r *layerStore) Unmount(id string) error {
if layer, ok := r.bymount[filepath.Clean(id)]; ok {
id = layer.ID
if !r.IsReadWrite() {
return errors.Wrapf(ErrStoreIsReadOnly, "not allowed to update mount locations for layers at %q", r.mountspath())
}
if layer, ok := r.byname[id]; ok {
id = layer.ID
layer, ok := r.lookup(id)
if !ok {
layerByMount, ok := r.bymount[filepath.Clean(id)]
if !ok {
return ErrLayerUnknown
}
layer = layerByMount
}
if _, ok := r.byid[id]; !ok {
return ErrLayerUnknown
}
layer := r.byid[id]
if layer.MountCount > 1 {
layer.MountCount--
return r.Save()
@@ -495,20 +548,14 @@ func (r *layerStore) Unmount(id string) error {
}
func (r *layerStore) removeName(layer *Layer, name string) {
newNames := []string{}
for _, oldName := range layer.Names {
if oldName != name {
newNames = append(newNames, oldName)
}
}
layer.Names = newNames
layer.Names = stringSliceWithoutValue(layer.Names, name)
}
func (r *layerStore) SetNames(id string, names []string) error {
if layer, ok := r.byname[id]; ok {
id = layer.ID
if !r.IsReadWrite() {
return errors.Wrapf(ErrStoreIsReadOnly, "not allowed to change layer name assignments at %q", r.layerspath())
}
if layer, ok := r.byid[id]; ok {
if layer, ok := r.lookup(id); ok {
for _, name := range layer.Names {
delete(r.byname, name)
}
@@ -524,21 +571,18 @@ func (r *layerStore) SetNames(id string, names []string) error {
return ErrLayerUnknown
}
func (r *layerStore) GetMetadata(id string) (string, error) {
if layer, ok := r.byname[id]; ok {
id = layer.ID
}
if layer, ok := r.byid[id]; ok {
func (r *layerStore) Metadata(id string) (string, error) {
if layer, ok := r.lookup(id); ok {
return layer.Metadata, nil
}
return "", ErrLayerUnknown
}
func (r *layerStore) SetMetadata(id, metadata string) error {
if layer, ok := r.byname[id]; ok {
id = layer.ID
if !r.IsReadWrite() {
return errors.Wrapf(ErrStoreIsReadOnly, "not allowed to modify layer metadata at %q", r.layerspath())
}
if layer, ok := r.byid[id]; ok {
if layer, ok := r.lookup(id); ok {
layer.Metadata = metadata
return r.Save()
}
@@ -550,13 +594,15 @@ func (r *layerStore) tspath(id string) string {
}
func (r *layerStore) Delete(id string) error {
if layer, ok := r.byname[id]; ok {
id = layer.ID
if !r.IsReadWrite() {
return errors.Wrapf(ErrStoreIsReadOnly, "not allowed to delete layers at %q", r.layerspath())
}
if _, ok := r.byid[id]; !ok {
layer, ok := r.lookup(id)
if !ok {
return ErrLayerUnknown
}
for r.byid[id].MountCount > 0 {
id = layer.ID
for layer.MountCount > 0 {
if err := r.Unmount(id); err != nil {
return err
}
@@ -564,71 +610,48 @@ func (r *layerStore) Delete(id string) error {
err := r.driver.Remove(id)
if err == nil {
os.Remove(r.tspath(id))
if layer, ok := r.byid[id]; ok {
pslice := r.byparent[layer.Parent]
newPslice := []*Layer{}
for _, candidate := range pslice {
if candidate.ID != id {
newPslice = append(newPslice, candidate)
}
}
delete(r.byid, layer.ID)
if len(newPslice) > 0 {
r.byparent[layer.Parent] = newPslice
} else {
delete(r.byparent, layer.Parent)
}
for _, name := range layer.Names {
delete(r.byname, name)
}
if layer.MountPoint != "" {
delete(r.bymount, layer.MountPoint)
}
newLayers := []Layer{}
for _, candidate := range r.layers {
if candidate.ID != id {
newLayers = append(newLayers, candidate)
}
}
r.layers = newLayers
if err = r.Save(); err != nil {
return err
delete(r.byid, id)
r.idindex.Delete(id)
if layer.MountPoint != "" {
delete(r.bymount, layer.MountPoint)
}
newLayers := []*Layer{}
for _, candidate := range r.layers {
if candidate.ID != id {
newLayers = append(newLayers, candidate)
}
}
r.layers = newLayers
if err = r.Save(); err != nil {
return err
}
}
return err
}
func (r *layerStore) Lookup(name string) (id string, err error) {
layer, ok := r.byname[name]
if !ok {
layer, ok = r.byid[name]
if !ok {
return "", ErrLayerUnknown
}
if layer, ok := r.lookup(name); ok {
return layer.ID, nil
}
return layer.ID, nil
return "", ErrLayerUnknown
}
func (r *layerStore) Exists(id string) bool {
if layer, ok := r.byname[id]; ok {
id = layer.ID
}
l, exists := r.byid[id]
return l != nil && exists
_, ok := r.lookup(id)
return ok
}
func (r *layerStore) Get(id string) (*Layer, error) {
if l, ok := r.byname[id]; ok {
return l, nil
}
if l, ok := r.byid[id]; ok {
return l, nil
if layer, ok := r.lookup(id); ok {
return layer, nil
}
return nil, ErrLayerUnknown
}
func (r *layerStore) Wipe() error {
if !r.IsReadWrite() {
return errors.Wrapf(ErrStoreIsReadOnly, "not allowed to delete layers at %q", r.layerspath())
}
ids := []string{}
for id := range r.byid {
ids = append(ids, id)
@@ -641,22 +664,34 @@ func (r *layerStore) Wipe() error {
return nil
}
func (r *layerStore) Changes(from, to string) ([]archive.Change, error) {
if layer, ok := r.byname[from]; ok {
from = layer.ID
}
if layer, ok := r.byname[to]; ok {
to = layer.ID
func (r *layerStore) findParentAndLayer(from, to string) (fromID string, toID string, toLayer *Layer, err error) {
var ok bool
var fromLayer *Layer
toLayer, ok = r.lookup(to)
if !ok {
return "", "", nil, ErrLayerUnknown
}
to = toLayer.ID
if from == "" {
if layer, ok := r.byid[to]; ok {
from = layer.Parent
from = toLayer.Parent
}
if from != "" {
fromLayer, ok = r.lookup(from)
if ok {
from = fromLayer.ID
} else {
fromLayer, ok = r.lookup(toLayer.Parent)
if ok {
from = fromLayer.ID
}
}
}
if to == "" {
return nil, ErrLayerUnknown
}
if _, ok := r.byid[to]; !ok {
return from, to, toLayer, nil
}
func (r *layerStore) Changes(from, to string) ([]archive.Change, error) {
from, to, _, err := r.findParentAndLayer(from, to)
if err != nil {
return nil, ErrLayerUnknown
}
return r.driver.Changes(to, from)
@@ -694,32 +729,19 @@ func (r *layerStore) newFileGetter(id string) (drivers.FileGetCloser, error) {
func (r *layerStore) Diff(from, to string) (io.ReadCloser, error) {
var metadata storage.Unpacker
if layer, ok := r.byname[from]; ok {
from = layer.ID
}
if layer, ok := r.byname[to]; ok {
to = layer.ID
}
if from == "" {
if layer, ok := r.byid[to]; ok {
from = layer.Parent
}
}
if to == "" {
return nil, ErrParentUnknown
}
if _, ok := r.byid[to]; !ok {
from, to, toLayer, err := r.findParentAndLayer(from, to)
if err != nil {
return nil, ErrLayerUnknown
}
compression := archive.Uncompressed
if cflag, ok := r.byid[to].Flags[compressionFlag]; ok {
if cflag, ok := toLayer.Flags[compressionFlag]; ok {
if ctype, ok := cflag.(float64); ok {
compression = archive.Compression(ctype)
} else if ctype, ok := cflag.(archive.Compression); ok {
compression = archive.Compression(ctype)
}
}
if from != r.byid[to].Parent {
if from != toLayer.Parent {
diff, err := r.driver.Diff(to, from)
if err == nil && (compression != archive.Uncompressed) {
preader, pwriter := io.Pipe()
@@ -797,31 +819,15 @@ func (r *layerStore) Diff(from, to string) (io.ReadCloser, error) {
}
func (r *layerStore) DiffSize(from, to string) (size int64, err error) {
if layer, ok := r.byname[from]; ok {
from = layer.ID
}
if layer, ok := r.byname[to]; ok {
to = layer.ID
}
if from == "" {
if layer, ok := r.byid[to]; ok {
from = layer.Parent
}
}
if to == "" {
return -1, ErrParentUnknown
}
if _, ok := r.byid[to]; !ok {
from, to, _, err = r.findParentAndLayer(from, to)
if err != nil {
return -1, ErrLayerUnknown
}
return r.driver.DiffSize(to, from)
}
func (r *layerStore) ApplyDiff(to string, diff archive.Reader) (size int64, err error) {
if layer, ok := r.byname[to]; ok {
to = layer.ID
}
layer, ok := r.byid[to]
layer, ok := r.lookup(to)
if !ok {
return -1, ErrLayerUnknown
}
@@ -885,6 +891,10 @@ func (r *layerStore) Modified() (bool, error) {
return r.lockfile.Modified()
}
func (r *layerStore) IsReadWrite() bool {
return r.lockfile.IsReadWrite()
}
func (r *layerStore) TouchedSince(when time.Time) bool {
return r.lockfile.TouchedSince(when)
}
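Every mutating method in the reworked stores now opens with the same read-write guard before touching state. A sketch of the pattern using github.com/pkg/errors as the diff does (the store type and sentinel error are simplified stand-ins):

```go
package main

import (
	"fmt"

	"github.com/pkg/errors"
)

// errStoreIsReadOnly is a stand-in for the package's ErrStoreIsReadOnly sentinel.
var errStoreIsReadOnly = errors.New("called a write method on a read-only store")

type layerStore struct {
	readWrite bool
	path      string
	metadata  map[string]string
}

func (r *layerStore) IsReadWrite() bool { return r.readWrite }

// SetMetadata shows the guard every read-write method in the diff now uses:
// refuse early, wrapping the sentinel error with the store's location.
func (r *layerStore) SetMetadata(id, metadata string) error {
	if !r.IsReadWrite() {
		return errors.Wrapf(errStoreIsReadOnly, "not allowed to modify layer metadata at %q", r.path)
	}
	r.metadata[id] = metadata
	return nil
}

func main() {
	ro := &layerStore{readWrite: false, path: "/usr/share/containers/storage"}
	fmt.Println(ro.SetMetadata("deadbeef", "x")) // wrapped read-only error
}
```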

vendor/github.com/containers/storage/lockfile.go (new vendored file, 186 lines)

@@ -0,0 +1,186 @@
package storage
import (
"fmt"
"os"
"path/filepath"
"sync"
"time"
"github.com/containers/storage/pkg/stringid"
"github.com/pkg/errors"
"golang.org/x/sys/unix"
)
// A Locker represents a file lock where the file is used to cache an
// identifier of the last party that made changes to whatever's being protected
// by the lock.
type Locker interface {
sync.Locker
// Touch records, for others sharing the lock, that the caller was the
// last writer. It should only be called with the lock held.
Touch() error
// Modified() checks if the most recent writer was a party other than the
// last recorded writer. It should only be called with the lock held.
Modified() (bool, error)
// TouchedSince() checks if the most recent writer modified the file (likely using Touch()) after the specified time.
TouchedSince(when time.Time) bool
// IsReadWrite() checks if the lock file is read-write
IsReadWrite() bool
}
type lockfile struct {
mu sync.Mutex
file string
fd uintptr
lw string
locktype int16
}
var (
lockfiles map[string]*lockfile
lockfilesLock sync.Mutex
// ErrLockReadOnly indicates that the caller only took a read-only lock, and is not allowed to write
ErrLockReadOnly = errors.New("lock is not a read-write lock")
)
// GetLockfile opens a read-write lock file, creating it if necessary. The
// Locker object it returns will be returned unlocked.
func GetLockfile(path string) (Locker, error) {
lockfilesLock.Lock()
defer lockfilesLock.Unlock()
if lockfiles == nil {
lockfiles = make(map[string]*lockfile)
}
cleanPath := filepath.Clean(path)
if locker, ok := lockfiles[cleanPath]; ok {
if !locker.IsReadWrite() {
return nil, errors.Wrapf(ErrLockReadOnly, "lock %q is a read-only lock", cleanPath)
}
return locker, nil
}
fd, err := unix.Open(cleanPath, os.O_RDWR|os.O_CREATE, unix.S_IRUSR|unix.S_IWUSR)
if err != nil {
return nil, errors.Wrapf(err, "error opening %q", cleanPath)
}
unix.CloseOnExec(fd)
locker := &lockfile{file: path, fd: uintptr(fd), lw: stringid.GenerateRandomID(), locktype: unix.F_WRLCK}
lockfiles[filepath.Clean(path)] = locker
return locker, nil
}
// GetROLockfile opens a read-only lock file. The Locker object it returns
// will be returned unlocked.
func GetROLockfile(path string) (Locker, error) {
lockfilesLock.Lock()
defer lockfilesLock.Unlock()
if lockfiles == nil {
lockfiles = make(map[string]*lockfile)
}
cleanPath := filepath.Clean(path)
if locker, ok := lockfiles[cleanPath]; ok {
if locker.IsReadWrite() {
return nil, fmt.Errorf("lock %q is a read-write lock", cleanPath)
}
return locker, nil
}
fd, err := unix.Open(cleanPath, os.O_RDONLY, 0)
if err != nil {
return nil, errors.Wrapf(err, "error opening %q", cleanPath)
}
unix.CloseOnExec(fd)
locker := &lockfile{file: path, fd: uintptr(fd), lw: stringid.GenerateRandomID(), locktype: unix.F_RDLCK}
lockfiles[filepath.Clean(path)] = locker
return locker, nil
}
// Lock locks the lock file
func (l *lockfile) Lock() {
lk := unix.Flock_t{
Type: l.locktype,
Whence: int16(os.SEEK_SET),
Start: 0,
Len: 0,
Pid: int32(os.Getpid()),
}
l.mu.Lock()
for unix.FcntlFlock(l.fd, unix.F_SETLKW, &lk) != nil {
time.Sleep(10 * time.Millisecond)
}
}
// Unlock unlocks the lock file
func (l *lockfile) Unlock() {
lk := unix.Flock_t{
Type: unix.F_UNLCK,
Whence: int16(os.SEEK_SET),
Start: 0,
Len: 0,
Pid: int32(os.Getpid()),
}
for unix.FcntlFlock(l.fd, unix.F_SETLKW, &lk) != nil {
time.Sleep(10 * time.Millisecond)
}
l.mu.Unlock()
}
// Touch updates the lock file with a new last-writer ID
func (l *lockfile) Touch() error {
l.lw = stringid.GenerateRandomID()
id := []byte(l.lw)
_, err := unix.Seek(int(l.fd), 0, os.SEEK_SET)
if err != nil {
return err
}
n, err := unix.Write(int(l.fd), id)
if err != nil {
return err
}
if n != len(id) {
return unix.ENOSPC
}
err = unix.Fsync(int(l.fd))
if err != nil {
return err
}
return nil
}
// Modified indicates if the lock file has been updated since the last time it was loaded
func (l *lockfile) Modified() (bool, error) {
id := []byte(l.lw)
_, err := unix.Seek(int(l.fd), 0, os.SEEK_SET)
if err != nil {
return true, err
}
n, err := unix.Read(int(l.fd), id)
if err != nil {
return true, err
}
if n != len(id) {
return true, unix.ENOSPC
}
lw := l.lw
l.lw = string(id)
return l.lw != lw, nil
}
// TouchedSince indicates if the lock file has been touched since the specified time
func (l *lockfile) TouchedSince(when time.Time) bool {
st := unix.Stat_t{}
err := unix.Fstat(int(l.fd), &st)
if err != nil {
return true
}
touched := time.Unix(statTMtimeUnix(st))
return when.Before(touched)
}
// IsRWLock indicates if the lock file is a read-write lock
func (l *lockfile) IsReadWrite() bool {
return (l.locktype == unix.F_WRLCK)
}
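Tying the new Locker API together, here is a hedged usage sketch of the Touch/Modified protocol for detecting writes by other parties sharing the lock file (the path is illustrative; a second process would hold its own Locker for the same path):

```go
package main

import (
	"fmt"

	storage "github.com/containers/storage"
)

func main() {
	// Read-write lock: a writer records itself with Touch() after changing
	// whatever shared state the lock protects.
	lock, err := storage.GetLockfile("/tmp/example-layers.lock") // illustrative path
	if err != nil {
		panic(err)
	}

	lock.Lock()
	// ... mutate the protected state ...
	if err := lock.Touch(); err != nil { // record "we were the last writer"
		panic(err)
	}
	lock.Unlock()

	// Later (or in another process sharing the same lock file):
	lock.Lock()
	modified, err := lock.Modified() // true if a different party wrote since our last read
	lock.Unlock()
	fmt.Println(modified, err)
}
```

Because GetROLockfile opens the same kind of object with F_RDLCK, multiple readers can share the lock concurrently while Save and the other write paths refuse to run (IsReadWrite reports false).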


@@ -0,0 +1,137 @@
// Package truncindex provides a general 'index tree', used by Docker
// in order to be able to reference containers by only a few unambiguous
// characters of their id.
package truncindex
import (
"errors"
"fmt"
"strings"
"sync"
"github.com/tchap/go-patricia/patricia"
)
var (
// ErrEmptyPrefix is an error returned if the prefix was empty.
ErrEmptyPrefix = errors.New("Prefix can't be empty")
// ErrIllegalChar is returned when a space is in the ID
ErrIllegalChar = errors.New("illegal character: ' '")
// ErrNotExist is returned when ID or its prefix not found in index.
ErrNotExist = errors.New("ID does not exist")
)
// ErrAmbiguousPrefix is returned if the prefix was ambiguous
// (multiple ids for the prefix).
type ErrAmbiguousPrefix struct {
prefix string
}
func (e ErrAmbiguousPrefix) Error() string {
return fmt.Sprintf("Multiple IDs found with provided prefix: %s", e.prefix)
}
// TruncIndex allows the retrieval of string identifiers by any of their unique prefixes.
// This is used to retrieve image and container IDs by more convenient shorthand prefixes.
type TruncIndex struct {
sync.RWMutex
trie *patricia.Trie
ids map[string]struct{}
}
// NewTruncIndex creates a new TruncIndex and initializes with a list of IDs.
func NewTruncIndex(ids []string) (idx *TruncIndex) {
idx = &TruncIndex{
ids: make(map[string]struct{}),
// Change patricia max prefix per node length,
// because our len(ID) is always 64
trie: patricia.NewTrie(patricia.MaxPrefixPerNode(64)),
}
for _, id := range ids {
idx.addID(id)
}
return
}
func (idx *TruncIndex) addID(id string) error {
if strings.Contains(id, " ") {
return ErrIllegalChar
}
if id == "" {
return ErrEmptyPrefix
}
if _, exists := idx.ids[id]; exists {
return fmt.Errorf("id already exists: '%s'", id)
}
idx.ids[id] = struct{}{}
if inserted := idx.trie.Insert(patricia.Prefix(id), struct{}{}); !inserted {
return fmt.Errorf("failed to insert id: %s", id)
}
return nil
}
// Add adds a new ID to the TruncIndex.
func (idx *TruncIndex) Add(id string) error {
idx.Lock()
defer idx.Unlock()
if err := idx.addID(id); err != nil {
return err
}
return nil
}
// Delete removes an exact ID from the TruncIndex. An error is returned
// if the ID is empty or not present in the index.
func (idx *TruncIndex) Delete(id string) error {
idx.Lock()
defer idx.Unlock()
if _, exists := idx.ids[id]; !exists || id == "" {
return fmt.Errorf("no such id: '%s'", id)
}
delete(idx.ids, id)
if deleted := idx.trie.Delete(patricia.Prefix(id)); !deleted {
return fmt.Errorf("no such id: '%s'", id)
}
return nil
}
// Get retrieves an ID from the TruncIndex. If there are multiple IDs
// with the given prefix, an error is thrown.
func (idx *TruncIndex) Get(s string) (string, error) {
if s == "" {
return "", ErrEmptyPrefix
}
var (
id string
)
subTreeVisitFunc := func(prefix patricia.Prefix, item patricia.Item) error {
if id != "" {
// we haven't found the ID if there are two or more IDs
id = ""
return ErrAmbiguousPrefix{prefix: string(prefix)}
}
id = string(prefix)
return nil
}
idx.RLock()
defer idx.RUnlock()
if err := idx.trie.VisitSubtree(patricia.Prefix(s), subTreeVisitFunc); err != nil {
return "", err
}
if id != "" {
return id, nil
}
return "", ErrNotExist
}
// Iterate iterates over all stored IDs, and passes each of them to the given handler.
func (idx *TruncIndex) Iterate(handler func(id string)) {
idx.trie.Visit(func(prefix patricia.Prefix, item patricia.Item) error {
handler(string(prefix))
return nil
})
}
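A short usage sketch of the vendored truncindex API shown above, including the ambiguous-prefix error path (the IDs are made up):

```go
package main

import (
	"fmt"

	"github.com/containers/storage/pkg/truncindex"
)

func main() {
	idx := truncindex.NewTruncIndex([]string{
		"8b3e11112222333344445555666677778888999900001111222233334444aaaa",
		"8b7fccccddddeeeeffff0000111122223333444455556666777788889999bbbb",
	})

	// A unique prefix resolves to the full ID.
	id, err := idx.Get("8b3e")
	fmt.Println(id, err)

	// "8b" matches both IDs, so Get reports an ambiguous prefix.
	_, err = idx.Get("8b")
	fmt.Println(err)
}
```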

vendor/github.com/containers/storage/stat_mtim.go (new vendored file, 11 lines)

@@ -0,0 +1,11 @@
// +build linux solaris
package storage
import (
"golang.org/x/sys/unix"
)
func statTMtimeUnix(st unix.Stat_t) (int64, int64) {
return st.Mtim.Unix()
}

vendor/github.com/containers/storage/stat_mtimespec.go (new vendored file, 11 lines)

@@ -0,0 +1,11 @@
// +build !linux,!solaris
package storage
import (
"golang.org/x/sys/unix"
)
func statTMtimeUnix(st unix.Stat_t) (int64, int64) {
return st.Mtimespec.Unix()
}
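These two one-function files exist so lockfile.go can stay platform-neutral: unix.Stat_t names its modification-time field Mtim on Linux/Solaris and Mtimespec elsewhere, and both variants yield a (seconds, nanoseconds) pair that feeds time.Unix directly. A tiny sketch of that call shape (secNsec stands in for statTMtimeUnix):

```go
package main

import (
	"fmt"
	"time"
)

// secNsec stands in for statTMtimeUnix: a per-platform shim returning
// (seconds, nanoseconds) from whichever Stat_t field the OS provides.
func secNsec() (int64, int64) { return 1497540000, 0 } // illustrative timestamp

func main() {
	// A two-value return can feed a two-argument call directly, which is
	// exactly how TouchedSince consumes statTMtimeUnix above.
	touched := time.Unix(secNsec())
	when := time.Date(2017, 6, 1, 0, 0, 0, 0, time.UTC)
	fmt.Println(when.Before(touched)) // true: the file was touched after `when`
}
```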


@@ -1,138 +0,0 @@
package storage
import (
"os"
"path/filepath"
"sync"
"syscall"
"time"
"github.com/containers/storage/pkg/stringid"
)
// A Locker represents a file lock where the file is used to cache an
// identifier of the last party that made changes to whatever's being protected
// by the lock.
type Locker interface {
sync.Locker
// Touch records, for others sharing the lock, that the caller was the
// last writer. It should only be called with the lock held.
Touch() error
// Modified() checks if the most recent writer was a party other than the
// last recorded writer. It should only be called with the lock held.
Modified() (bool, error)
// TouchedSince() checks if the most recent writer modified the file (likely using Touch()) after the specified time.
TouchedSince(when time.Time) bool
}
type lockfile struct {
mu sync.Mutex
file string
fd uintptr
lw string
}
var (
lockfiles map[string]*lockfile
lockfilesLock sync.Mutex
)
// GetLockfile opens a lock file, creating it if necessary. The Locker object
// returned will be unlocked.
func GetLockfile(path string) (Locker, error) {
lockfilesLock.Lock()
defer lockfilesLock.Unlock()
if lockfiles == nil {
lockfiles = make(map[string]*lockfile)
}
if locker, ok := lockfiles[filepath.Clean(path)]; ok {
return locker, nil
}
fd, err := syscall.Open(filepath.Clean(path), os.O_RDWR|os.O_CREATE, syscall.S_IRUSR|syscall.S_IWUSR)
if err != nil {
return nil, err
}
locker := &lockfile{file: path, fd: uintptr(fd), lw: stringid.GenerateRandomID()}
lockfiles[filepath.Clean(path)] = locker
return locker, nil
}
func (l *lockfile) Lock() {
lk := syscall.Flock_t{
Type: syscall.F_WRLCK,
Whence: int16(os.SEEK_SET),
Start: 0,
Len: 0,
Pid: int32(os.Getpid()),
}
l.mu.Lock()
for syscall.FcntlFlock(l.fd, syscall.F_SETLKW, &lk) != nil {
time.Sleep(10 * time.Millisecond)
}
}
func (l *lockfile) Unlock() {
lk := syscall.Flock_t{
Type: syscall.F_UNLCK,
Whence: int16(os.SEEK_SET),
Start: 0,
Len: 0,
Pid: int32(os.Getpid()),
}
for syscall.FcntlFlock(l.fd, syscall.F_SETLKW, &lk) != nil {
time.Sleep(10 * time.Millisecond)
}
l.mu.Unlock()
}
func (l *lockfile) Touch() error {
l.lw = stringid.GenerateRandomID()
id := []byte(l.lw)
_, err := syscall.Seek(int(l.fd), 0, os.SEEK_SET)
if err != nil {
return err
}
n, err := syscall.Write(int(l.fd), id)
if err != nil {
return err
}
if n != len(id) {
return syscall.ENOSPC
}
err = syscall.Fsync(int(l.fd))
if err != nil {
return err
}
return nil
}
func (l *lockfile) Modified() (bool, error) {
id := []byte(l.lw)
_, err := syscall.Seek(int(l.fd), 0, os.SEEK_SET)
if err != nil {
return true, err
}
n, err := syscall.Read(int(l.fd), id)
if err != nil {
return true, err
}
if n != len(id) {
return true, syscall.ENOSPC
}
lw := l.lw
l.lw = string(id)
return l.lw != lw, nil
}
func (l *lockfile) TouchedSince(when time.Time) bool {
st := syscall.Stat_t{}
err := syscall.Fstat(int(l.fd), &st)
if err != nil {
return true
}
touched := time.Unix(st.Mtim.Unix())
return when.Before(touched)
}


@@ -1,13 +0,0 @@
// +build !containersstorageautogen
// Package storageversion is auto-generated at build-time
package storageversion
// Default build-time variable for library-import.
// This file is overridden on build with build-time information.
const (
GitCommit string = "library-import"
Version string = "library-import"
BuildTime string = "library-import"
IAmStatic string = "library-import"
)

vendor/github.com/containers/storage/vendor.conf (new vendored file, 19 lines)

@@ -0,0 +1,19 @@
github.com/BurntSushi/toml master
github.com/Microsoft/go-winio 307e919c663683a9000576fdc855acaf9534c165
github.com/Microsoft/hcsshim 0f615c198a84e0344b4ed49c464d8833d4648dfc
github.com/Sirupsen/logrus 61e43dc76f7ee59a82bdf3d71033dc12bea4c77d
github.com/docker/engine-api 4290f40c056686fcaa5c9caf02eac1dde9315adf
github.com/docker/go-connections eb315e36415380e7c2fdee175262560ff42359da
github.com/docker/go-units 0dadbb0345b35ec7ef35e228dabb8de89a65bf52
github.com/go-check/check 20d25e2804050c1cd24a7eea1e7a6447dd0e74ec
github.com/mattn/go-shellwords 753a2322a99f87c0eff284980e77f53041555bc6
github.com/mistifyio/go-zfs c0224de804d438efd11ea6e52ada8014537d6062
github.com/opencontainers/runc 6c22e77604689db8725fa866f0f2ec0b3e8c3a07
github.com/opencontainers/selinux ba1aefe8057f1d0cfb8e88d0ec1dc85925ef987d
github.com/pborman/uuid 1b00554d822231195d1babd97ff4a781231955c9
github.com/pkg/errors master
github.com/tchap/go-patricia v2.2.6
github.com/vbatts/tar-split bd4c5d64c3e9297f410025a3b1bd0c58f659e721
github.com/vdemeester/shakers 24d7f1d6a71aa5d9cbe7390e4afb66b7eef9e1b3
golang.org/x/net f2499483f923065a842d38eb4c7f1927e6fc6e6d
golang.org/x/sys d75a52659825e75fff6158388dddc6a5b04f9ba5


@@ -76,8 +76,7 @@ may be the better choice.
For those who have previously deployed their own registry based on the Registry
1.0 implementation and wish to deploy a Registry 2.0 while retaining images,
data migration is required. A tool to assist with migration efforts has been
created. For more information see [docker/migrator]
(https://github.com/docker/migrator).
created. For more information see [docker/migrator](https://github.com/docker/migrator).
## Contribute


@@ -15,7 +15,7 @@
// tag := /[\w][\w.-]{0,127}/
//
// digest := digest-algorithm ":" digest-hex
// digest-algorithm := digest-algorithm-component [ digest-algorithm-separator digest-algorithm-component ]
// digest-algorithm := digest-algorithm-component [ digest-algorithm-separator digest-algorithm-component ]*
// digest-algorithm-separator := /[+.-_]/
// digest-algorithm-component := /[A-Za-z][A-Za-z0-9]*/
// digest-hex := /[0-9a-fA-F]{32,}/ ; At least 128 bit digest value


@@ -125,6 +125,14 @@ func (a Algorithm) Hash() hash.Hash {
return algorithms[a].New()
}
// Encode encodes the raw bytes of a digest, typically from a hash.Hash, into
// the encoded portion of the digest.
func (a Algorithm) Encode(d []byte) string {
// TODO(stevvooe): Currently, all algorithms use a hex encoding. When we
// add support for back registration, we can modify this accordingly.
return fmt.Sprintf("%x", d)
}
// FromReader returns the digest of the reader using the algorithm.
func (a Algorithm) FromReader(rd io.Reader) (Digest, error) {
digester := a.Digester()


@@ -45,16 +45,21 @@ func NewDigest(alg Algorithm, h hash.Hash) Digest {
// functions. This is also useful for rebuilding digests from binary
// serializations.
func NewDigestFromBytes(alg Algorithm, p []byte) Digest {
return Digest(fmt.Sprintf("%s:%x", alg, p))
return NewDigestFromEncoded(alg, alg.Encode(p))
}
// NewDigestFromHex returns a Digest from alg and the hex encoded digest.
// NewDigestFromHex is deprecated. Please use NewDigestFromEncoded.
func NewDigestFromHex(alg, hex string) Digest {
return Digest(fmt.Sprintf("%s:%s", alg, hex))
return NewDigestFromEncoded(Algorithm(alg), hex)
}
// NewDigestFromEncoded returns a Digest from alg and the encoded digest.
func NewDigestFromEncoded(alg Algorithm, encoded string) Digest {
return Digest(fmt.Sprintf("%s:%s", alg, encoded))
}
// DigestRegexp matches valid digest types.
var DigestRegexp = regexp.MustCompile(`[a-zA-Z0-9-_+.]+:[a-fA-F0-9]+`)
var DigestRegexp = regexp.MustCompile(`[a-z0-9]+(?:[.+_-][a-z0-9]+)*:[a-zA-Z0-9=_-]+`)
// DigestRegexpAnchored matches valid digest types, anchored to the start and end of the match.
var DigestRegexpAnchored = regexp.MustCompile(`^` + DigestRegexp.String() + `$`)
@@ -133,12 +138,17 @@ func (d Digest) Verifier() Verifier {
}
}
// Hex returns the hex digest portion of the digest. This will panic if the
// Encoded returns the encoded portion of the digest. This will panic if the
// underlying digest is not in a valid format.
func (d Digest) Hex() string {
func (d Digest) Encoded() string {
return string(d[d.sepIndex()+1:])
}
// Hex is deprecated. Please use Digest.Encoded.
func (d Digest) Hex() string {
return d.Encoded()
}
func (d Digest) String() string {
return string(d)
}
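The Hex-to-Encoded rename (with Hex kept as a deprecated alias) reflects that the portion after the separator is no longer assumed to be hex, as the relaxed DigestRegexp above also shows. A quick sketch of the constructors and accessors from this hunk (the raw bytes are illustrative; a real sha256 digest is 32 bytes):

```go
package main

import (
	"fmt"

	digest "github.com/opencontainers/go-digest"
)

func main() {
	alg := digest.Algorithm("sha256")
	raw := []byte{0xde, 0xad, 0xbe, 0xef} // illustrative; a real sha256 is 32 bytes

	// NewDigestFromBytes now routes through Algorithm.Encode (hex today)
	// and NewDigestFromEncoded.
	d := digest.NewDigestFromBytes(alg, raw)
	fmt.Println(d)           // sha256:deadbeef
	fmt.Println(d.Encoded()) // deadbeef
	fmt.Println(d.Hex())     // same value via the deprecated alias
}
```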


@@ -7,7 +7,7 @@
The OCI Image Format project creates and maintains the software shipping container image format spec (OCI Image Format).
The specification can be found [here](spec.md).
**[The specification can be found here](spec.md).**
This repository also provides [Go types](specs-go), [intra-blob validation tooling, and JSON Schema](schema).
The Go types and validation should be compatible with the current Go release; earlier Go releases are not supported.
@@ -42,22 +42,13 @@ To support this UX the OCI Image Format contains sufficient information to launc
A: Distribution, for example using HTTP as both Docker v2.2 and AppC do today, is currently out of scope on the [OCI Scope Table](https://www.opencontainers.org/about/oci-scope-table).
There has been [some discussion on the TOB mailing list](https://groups.google.com/a/opencontainers.org/d/msg/tob/A3JnmI-D-6Y/tLuptPDHAgAJ) to make distribution an optional layer, but this topic is a work in progress.
**Q: Why a new project?**
A: The [first OCI spec](https://github.com/opencontainers/runtime-spec) centered around defining the run side of a container.
This is generally seen to be an orthogonal concern to the shipping container component.
As practical examples of this separation you see many organizations separating these concerns into different teams and organizations: the Docker Distribution project and the Docker containerd project; Amazon ECS and Amazon EC2 Container Registry, etc.
**Q: Why work on this?**
A: We are seeing many independent implementations of container image handling including build systems, registries, and image analysis tools.
As an organization we would like to encourage this growth and bring people together to ensure a technically correct and open specification continues to evolve reflecting the OCI values.
**Q: What happens to AppC or Docker Image Formats?**
A: Existing formats can continue to be a proving ground for technologies, as needed.
The OCI Image Format project strives to provide a dependable open specification that can be shared between different tools and be evolved for years or decades of compatibility; as the deb and rpm format have.
Find more [FAQ on the OCI site](https://www.opencontainers.org/faq).
## Roadmap
The [GitHub milestones](https://github.com/opencontainers/image-spec/milestones) lay out the path to the OCI v1.0.0 release in late 2016.
@@ -85,7 +76,7 @@ When in doubt, start on the [mailing-list](#mailing-list).
The contributors and maintainers of all OCI projects have a weekly meeting Wednesdays at 2:00 PM (USA Pacific).
Everyone is welcome to participate via [UberConference web][UberConference] or audio-only: +1-415-968-0849 (no PIN needed).
An initial agenda will be posted to the [mailing list](#mailing-list) earlier in the week, and everyone is welcome to propose additional topics or suggest other agenda alterations there.
Minutes are posted to the [mailing list](#mailing-list) and minutes from past calls are archived to the [wiki](https://github.com/opencontainers/runtime-spec/wiki) for those who are unable to join the call.
Minutes are posted to the [mailing list](#mailing-list) and minutes from past calls are archived [here][minutes].
## Mailing List
@@ -173,3 +164,4 @@ Read more on [How to Write a Git Commit Message](http://chris.beams.io/posts/git
[UberConference]: https://www.uberconference.com/opencontainers
[irc-logs]: http://ircbot.wl.linuxfoundation.org/eavesdrop/%23opencontainers/
[minutes]: http://ircbot.wl.linuxfoundation.org/meetings/opencontainers/


@@ -14,7 +14,11 @@
package v1
import "time"
import (
"time"
digest "github.com/opencontainers/go-digest"
)
// ImageConfig defines the execution parameters which should be used as a base when running a container using an image.
type ImageConfig struct {
@@ -40,7 +44,10 @@ type ImageConfig struct {
WorkingDir string `json:"WorkingDir,omitempty"`
// Labels contains arbitrary metadata for the container.
Labels map[string]string `json:"labels,omitempty"`
Labels map[string]string `json:"Labels,omitempty"`
// StopSignal contains the system call signal that will be sent to the container to exit.
StopSignal string `json:"StopSignal,omitempty"`
}
// RootFS describes the layer content addresses.
@@ -49,13 +56,13 @@ type RootFS struct {
Type string `json:"type"`
// DiffIDs is an array of layer content hashes (DiffIDs), in order from bottom-most to top-most.
DiffIDs []string `json:"diff_ids"`
DiffIDs []digest.Digest `json:"diff_ids"`
}
// History describes the history of a layer.
type History struct {
// Created is the combined date and time at which the layer was created, formatted as defined by RFC 3339, section 5.6.
Created time.Time `json:"created,omitempty"`
Created *time.Time `json:"created,omitempty"`
// CreatedBy is the command which created the layer.
CreatedBy string `json:"created_by,omitempty"`
@@ -74,7 +81,7 @@ type History struct {
// This provides the `application/vnd.oci.image.config.v1+json` mediatype when marshalled to JSON.
type Image struct {
// Created is the combined date and time at which the image was created, formatted as defined by RFC 3339, section 5.6.
Created time.Time `json:"created,omitempty"`
Created *time.Time `json:"created,omitempty"`
// Author defines the name and/or email address of the person or entity which created and is responsible for maintaining the image.
Author string `json:"author,omitempty"`
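The switch from `time.Time` to `*time.Time` interacts with the `omitempty` tag: encoding/json never treats a non-pointer struct value as empty, so a zero `created` timestamp would otherwise always be serialized. A minimal sketch of the difference, using a local stand-in struct rather than the vendored type:

```go
package main

import (
	"encoding/json"
	"fmt"
	"time"
)

// image mirrors just the relevant fields of the vendored v1.Image type.
type image struct {
	Created *time.Time `json:"created,omitempty"`
	Author  string     `json:"author,omitempty"`
}

func main() {
	// With a pointer, a nil Created is omitted from the output entirely.
	b, _ := json.Marshal(image{Author: "someone"})
	fmt.Println(string(b)) // {"author":"someone"}

	// A plain time.Time field would instead always emit the zero
	// timestamp, because omitempty never considers struct values empty.
	now := time.Now()
	b, _ = json.Marshal(image{Created: &now, Author: "someone"})
	fmt.Println(string(b))
}
```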


@@ -17,7 +17,8 @@ package v1
import digest "github.com/opencontainers/go-digest"
// Descriptor describes the disposition of targeted content.
// This structure provides `application/vnd.oci.descriptor.v1+json` mediatype when marshalled to JSON
// This structure provides `application/vnd.oci.descriptor.v1+json` mediatype
// when marshalled to JSON.
type Descriptor struct {
// MediaType is the media type of the object this schema refers to.
MediaType string `json:"mediaType,omitempty"`
@@ -33,4 +34,31 @@ type Descriptor struct {
// Annotations contains arbitrary metadata relating to the targeted content.
Annotations map[string]string `json:"annotations,omitempty"`
// Platform describes the platform which the image in the manifest runs on.
//
// This should only be used when referring to a manifest.
Platform *Platform `json:"platform,omitempty"`
}
// Platform describes the platform which the image in the manifest runs on.
type Platform struct {
// Architecture field specifies the CPU architecture, for example
// `amd64` or `ppc64`.
Architecture string `json:"architecture"`
// OS specifies the operating system, for example `linux` or `windows`.
OS string `json:"os"`
// OSVersion is an optional field specifying the operating system
// version, for example on Windows `10.0.14393.1066`.
OSVersion string `json:"os.version,omitempty"`
// OSFeatures is an optional field specifying an array of strings,
// each listing a required OS feature (for example on Windows `win32k`).
OSFeatures []string `json:"os.features,omitempty"`
// Variant is an optional field specifying a variant of the CPU, for
// example `v7` to specify ARMv7 when architecture is `arm`.
Variant string `json:"variant,omitempty"`
}


@@ -1,63 +0,0 @@
// Copyright 2016 The Linux Foundation
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package v1
import "github.com/opencontainers/image-spec/specs-go"
// Platform describes the platform which the image in the manifest runs on.
type Platform struct {
// Architecture field specifies the CPU architecture, for example
// `amd64` or `ppc64`.
Architecture string `json:"architecture"`
// OS specifies the operating system, for example `linux` or `windows`.
OS string `json:"os"`
// OSVersion is an optional field specifying the operating system
// version, for example `10.0.10586`.
OSVersion string `json:"os.version,omitempty"`
// OSFeatures is an optional field specifying an array of strings,
// each listing a required OS feature (for example on Windows `win32k`).
OSFeatures []string `json:"os.features,omitempty"`
// Variant is an optional field specifying a variant of the CPU, for
// example `ppc64le` to specify a little-endian version of a PowerPC CPU.
Variant string `json:"variant,omitempty"`
// Features is an optional field specifying an array of strings, each
// listing a required CPU feature (for example `sse4` or `aes`).
Features []string `json:"features,omitempty"`
}
// ManifestDescriptor describes a platform specific manifest.
type ManifestDescriptor struct {
Descriptor
// Platform describes the platform which the image in the manifest runs on.
Platform Platform `json:"platform"`
}
// ImageIndex references manifests for various platforms.
// This structure provides `application/vnd.oci.image.index.v1+json` mediatype when marshalled to JSON.
type ImageIndex struct {
specs.Versioned
// Manifests references platform specific manifests.
Manifests []ManifestDescriptor `json:"manifests"`
// Annotations contains arbitrary metadata for the image index.
Annotations map[string]string `json:"annotations,omitempty"`
}


@@ -0,0 +1,29 @@
// Copyright 2016 The Linux Foundation
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package v1
import "github.com/opencontainers/image-spec/specs-go"
// Index references manifests for various platforms.
// This structure provides `application/vnd.oci.image.index.v1+json` mediatype when marshalled to JSON.
type Index struct {
specs.Versioned
// Manifests references platform specific manifests.
Manifests []Descriptor `json:"manifests"`
// Annotations contains arbitrary metadata for the image index.
Annotations map[string]string `json:"annotations,omitempty"`
}
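Together with the `Descriptor` change above, this replaces the removed `ManifestDescriptor` (which embedded a mandatory `Platform`) with plain `Descriptor` entries carrying an optional `*Platform`. A sketch of what an index entry now looks like, using local stand-in types; the digest and size below are placeholder values:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Local stand-ins for the vendored types, not the real structs.
type Platform struct {
	Architecture string `json:"architecture"`
	OS           string `json:"os"`
	Variant      string `json:"variant,omitempty"`
}

type Descriptor struct {
	MediaType string    `json:"mediaType,omitempty"`
	Digest    string    `json:"digest"`
	Size      int64     `json:"size"`
	Platform  *Platform `json:"platform,omitempty"`
}

type Index struct {
	SchemaVersion int          `json:"schemaVersion"`
	Manifests     []Descriptor `json:"manifests"`
}

func main() {
	// One linux/arm v7 entry; digest and size are made-up placeholders.
	idx := Index{
		SchemaVersion: 2,
		Manifests: []Descriptor{{
			MediaType: "application/vnd.oci.image.manifest.v1+json",
			Digest:    "sha256:e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
			Size:      7143,
			Platform:  &Platform{Architecture: "arm", OS: "linux", Variant: "v7"},
		}},
	}
	out, _ := json.MarshalIndent(idx, "", "  ")
	fmt.Println(string(out))
}
```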


@@ -27,6 +27,6 @@ type Manifest struct {
// Layers is an indexed list of layers referenced by the manifest.
Layers []Descriptor `json:"layers"`
// Annotations contains arbitrary metadata for the manifest.
// Annotations contains arbitrary metadata for the image manifest.
Annotations map[string]string `json:"annotations,omitempty"`
}


@@ -18,6 +18,9 @@ const (
// MediaTypeDescriptor specifies the media type for a content descriptor.
MediaTypeDescriptor = "application/vnd.oci.descriptor.v1+json"
// MediaTypeLayoutHeader specifies the media type for the oci-layout.
MediaTypeLayoutHeader = "application/vnd.oci.layout.header.v1+json"
// MediaTypeImageManifest specifies the media type for an image manifest.
MediaTypeImageManifest = "application/vnd.oci.image.manifest.v1+json"


@@ -25,7 +25,7 @@ const (
VersionPatch = 0
// VersionDev indicates development branch. Releases will be empty string.
VersionDev = "-rc5"
VersionDev = "-rc6-dev"
)
// Version is the specification version that the package types support.


@@ -117,8 +117,8 @@ Assuming you have an OCI bundle from the previous step you can execute the conta
The first way is to use the convenience command `run` that will handle creating, starting, and deleting the container after it exits.
```bash
# run as root
cd /mycontainer
runc run mycontainerid
```
@@ -165,8 +165,8 @@ Now we can go though the lifecycle operations in your shell.
```bash
# run as root
cd /mycontainer
runc create mycontainerid
# view the container is created and in the "created" state
@@ -185,6 +185,22 @@ runc delete mycontainerid
This adds more complexity, but it allows higher-level systems to manage runc and provides hook points during the container's creation to apply settings after the container has been created and/or before it is deleted.
This is commonly used to set up the container's network stack after `create` but before `start`, where the user-defined process will be running.
#### Rootless containers
`runc` has the ability to run containers without root privileges. This is called `rootless`. You need to pass some parameters to `runc` in order to run rootless containers. See below and compare with the previous version. Run the following commands as an ordinary user:
```bash
# Same as the first example
mkdir ~/mycontainer
cd ~/mycontainer
mkdir rootfs
docker export $(docker create busybox) | tar -C rootfs -xvf -
# The --rootless parameter instructs runc spec to generate a configuration for a rootless container, which will allow you to run the container as a non-root user.
runc spec --rootless
# The --root parameter tells runc where to store the container state. It must be writable by the user.
runc --root /tmp/runc run mycontainerid
```
#### Supervisors
`runc` can be used with process supervisors and init systems to ensure that containers are restarted when they exit.


@@ -56,7 +56,7 @@ Once you have an instance of the factory created we can create a configuration
struct describing how the container is to be created. A sample would look similar to this:
```go
defaultMountFlags := syscall.MS_NOEXEC | syscall.MS_NOSUID | syscall.MS_NODEV
defaultMountFlags := unix.MS_NOEXEC | unix.MS_NOSUID | unix.MS_NODEV
config := &configs.Config{
Rootfs: "/your/path/to/rootfs",
Capabilities: []string{
@@ -112,14 +112,14 @@ config := &configs.Config{
Source: "tmpfs",
Destination: "/dev",
Device: "tmpfs",
Flags: syscall.MS_NOSUID | syscall.MS_STRICTATIME,
Flags: unix.MS_NOSUID | unix.MS_STRICTATIME,
Data: "mode=755",
},
{
Source: "devpts",
Destination: "/dev/pts",
Device: "devpts",
Flags: syscall.MS_NOSUID | syscall.MS_NOEXEC,
Flags: unix.MS_NOSUID | unix.MS_NOEXEC,
Data: "newinstance,ptmxmode=0666,mode=0620,gid=5",
},
{
@@ -139,7 +139,7 @@ config := &configs.Config{
Source: "sysfs",
Destination: "/sys",
Device: "sysfs",
Flags: defaultMountFlags | syscall.MS_RDONLY,
Flags: defaultMountFlags | unix.MS_RDONLY,
},
},
UidMappings: []configs.IDMap{
@@ -165,7 +165,7 @@ config := &configs.Config{
},
Rlimits: []configs.Rlimit{
{
Type: syscall.RLIMIT_NOFILE,
Type: unix.RLIMIT_NOFILE,
Hard: uint64(1025),
Soft: uint64(1025),
},


@@ -7,8 +7,10 @@ import (
"fmt"
"os"
"os/exec"
"syscall"
"syscall" // only for exec
"unsafe"
"golang.org/x/sys/unix"
)
// If arg2 is nonzero, set the "child subreaper" attribute of the
@@ -53,8 +55,8 @@ func Execv(cmd string, args []string, env []string) error {
return syscall.Exec(name, args, env)
}
func Prlimit(pid, resource int, limit syscall.Rlimit) error {
_, _, err := syscall.RawSyscall6(syscall.SYS_PRLIMIT64, uintptr(pid), uintptr(resource), uintptr(unsafe.Pointer(&limit)), uintptr(unsafe.Pointer(&limit)), 0, 0)
func Prlimit(pid, resource int, limit unix.Rlimit) error {
_, _, err := unix.RawSyscall6(unix.SYS_PRLIMIT64, uintptr(pid), uintptr(resource), uintptr(unsafe.Pointer(&limit)), uintptr(unsafe.Pointer(&limit)), 0, 0)
if err != 0 {
return err
}
@@ -62,7 +64,7 @@ func Prlimit(pid, resource int, limit syscall.Rlimit) error {
}
func SetParentDeathSignal(sig uintptr) error {
if _, _, err := syscall.RawSyscall(syscall.SYS_PRCTL, syscall.PR_SET_PDEATHSIG, sig, 0); err != 0 {
if _, _, err := unix.RawSyscall(unix.SYS_PRCTL, unix.PR_SET_PDEATHSIG, sig, 0); err != 0 {
return err
}
return nil
@@ -70,7 +72,7 @@ func SetParentDeathSignal(sig uintptr) error {
func GetParentDeathSignal() (ParentDeathSignal, error) {
var sig int
_, _, err := syscall.RawSyscall(syscall.SYS_PRCTL, syscall.PR_GET_PDEATHSIG, uintptr(unsafe.Pointer(&sig)), 0)
_, _, err := unix.RawSyscall(unix.SYS_PRCTL, unix.PR_GET_PDEATHSIG, uintptr(unsafe.Pointer(&sig)), 0)
if err != 0 {
return -1, err
}
@@ -78,7 +80,7 @@ func GetParentDeathSignal() (ParentDeathSignal, error) {
}
func SetKeepCaps() error {
if _, _, err := syscall.RawSyscall(syscall.SYS_PRCTL, syscall.PR_SET_KEEPCAPS, 1, 0); err != 0 {
if _, _, err := unix.RawSyscall(unix.SYS_PRCTL, unix.PR_SET_KEEPCAPS, 1, 0); err != 0 {
return err
}
@@ -86,7 +88,7 @@ func SetKeepCaps() error {
}
func ClearKeepCaps() error {
if _, _, err := syscall.RawSyscall(syscall.SYS_PRCTL, syscall.PR_SET_KEEPCAPS, 0, 0); err != 0 {
if _, _, err := unix.RawSyscall(unix.SYS_PRCTL, unix.PR_SET_KEEPCAPS, 0, 0); err != 0 {
return err
}
@@ -94,7 +96,7 @@ func ClearKeepCaps() error {
}
func Setctty() error {
if _, _, err := syscall.RawSyscall(syscall.SYS_IOCTL, 0, uintptr(syscall.TIOCSCTTY), 0); err != 0 {
if _, _, err := unix.RawSyscall(unix.SYS_IOCTL, 0, uintptr(unix.TIOCSCTTY), 0); err != 0 {
return err
}
return nil
@@ -135,7 +137,7 @@ func SetSubreaper(i int) error {
}
func Prctl(option int, arg2, arg3, arg4, arg5 uintptr) (err error) {
_, _, e1 := syscall.Syscall6(syscall.SYS_PRCTL, uintptr(option), arg2, arg3, arg4, arg5, 0)
_, _, e1 := unix.Syscall6(unix.SYS_PRCTL, uintptr(option), arg2, arg3, arg4, arg5, 0)
if e1 != 0 {
err = e1
}
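These call sites swap the `syscall` package for `golang.org/x/sys/unix` while keeping the raw `Syscall`/`RawSyscall` plumbing. As an aside (not something this diff does), x/sys/unix also exposes typed wrappers such as `unix.Prlimit`, so raw syscalls are often avoidable. A Linux-only sketch:

```go
package main

import (
	"fmt"

	"golang.org/x/sys/unix"
)

func main() {
	// Query the current process's RLIMIT_NOFILE via the typed wrapper
	// instead of RawSyscall6(unix.SYS_PRLIMIT64, ...); pid 0 means self.
	var lim unix.Rlimit
	if err := unix.Prlimit(0, unix.RLIMIT_NOFILE, nil, &lim); err != nil {
		panic(err)
	}
	fmt.Printf("NOFILE soft=%d hard=%d\n", lim.Cur, lim.Max)
}
```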


@@ -1,40 +0,0 @@
package system
import (
"fmt"
"runtime"
"syscall"
)
// Via http://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/?id=7b21fddd087678a70ad64afc0f632e0f1071b092
//
// We need different setns syscall numbers for the different platforms and architectures.
// We declare the mapping here because the SETNS syscall number is not exposed by the stdlib.
var setNsMap = map[string]uintptr{
"linux/386": 346,
"linux/arm64": 268,
"linux/amd64": 308,
"linux/arm": 375,
"linux/ppc": 350,
"linux/ppc64": 350,
"linux/ppc64le": 350,
"linux/s390x": 339,
}
var sysSetns = setNsMap[fmt.Sprintf("%s/%s", runtime.GOOS, runtime.GOARCH)]
func SysSetns() uint32 {
return uint32(sysSetns)
}
func Setns(fd uintptr, flags uintptr) error {
ns, exists := setNsMap[fmt.Sprintf("%s/%s", runtime.GOOS, runtime.GOARCH)]
if !exists {
return fmt.Errorf("unsupported platform %s/%s", runtime.GOOS, runtime.GOARCH)
}
_, _, err := syscall.RawSyscall(ns, fd, flags, 0)
if err != 0 {
return err
}
return nil
}
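The hand-maintained table of setns syscall numbers is dropped; presumably callers can rely on `golang.org/x/sys/unix`, which exposes `Setns` and the `CLONE_NEW*` constants directly on Linux. A hedged sketch, with a placeholder pid:

```go
package main

import (
	"fmt"
	"os"

	"golang.org/x/sys/unix"
)

func main() {
	// Join the network namespace of a hypothetical process; pid 1234 is
	// a placeholder, and CAP_SYS_ADMIN is required for this to succeed.
	f, err := os.Open("/proc/1234/ns/net")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	if err := unix.Setns(int(f.Fd()), unix.CLONE_NEWNET); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("joined network namespace")
}
```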


@@ -3,12 +3,12 @@
package system
import (
"syscall"
"golang.org/x/sys/unix"
)
// Setuid sets the uid of the calling thread to the specified uid.
func Setuid(uid int) (err error) {
_, _, e1 := syscall.RawSyscall(syscall.SYS_SETUID32, uintptr(uid), 0, 0)
_, _, e1 := unix.RawSyscall(unix.SYS_SETUID32, uintptr(uid), 0, 0)
if e1 != 0 {
err = e1
}
@@ -17,7 +17,7 @@ func Setuid(uid int) (err error) {
// Setgid sets the gid of the calling thread to the specified gid.
func Setgid(gid int) (err error) {
_, _, e1 := syscall.RawSyscall(syscall.SYS_SETGID32, uintptr(gid), 0, 0)
_, _, e1 := unix.RawSyscall(unix.SYS_SETGID32, uintptr(gid), 0, 0)
if e1 != 0 {
err = e1
}


@@ -3,12 +3,12 @@
package system
import (
"syscall"
"golang.org/x/sys/unix"
)
// Setuid sets the uid of the calling thread to the specified uid.
func Setuid(uid int) (err error) {
_, _, e1 := syscall.RawSyscall(syscall.SYS_SETUID, uintptr(uid), 0, 0)
_, _, e1 := unix.RawSyscall(unix.SYS_SETUID, uintptr(uid), 0, 0)
if e1 != 0 {
err = e1
}
@@ -17,7 +17,7 @@ func Setuid(uid int) (err error) {
// Setgid sets the gid of the calling thread to the specified gid.
func Setgid(gid int) (err error) {
_, _, e1 := syscall.RawSyscall(syscall.SYS_SETGID, uintptr(gid), 0, 0)
_, _, e1 := unix.RawSyscall(unix.SYS_SETGID, uintptr(gid), 0, 0)
if e1 != 0 {
err = e1
}


@@ -3,12 +3,12 @@
package system
import (
"syscall"
"golang.org/x/sys/unix"
)
// Setuid sets the uid of the calling thread to the specified uid.
func Setuid(uid int) (err error) {
_, _, e1 := syscall.RawSyscall(syscall.SYS_SETUID32, uintptr(uid), 0, 0)
_, _, e1 := unix.RawSyscall(unix.SYS_SETUID32, uintptr(uid), 0, 0)
if e1 != 0 {
err = e1
}
@@ -17,7 +17,7 @@ func Setuid(uid int) (err error) {
// Setgid sets the gid of the calling thread to the specified gid.
func Setgid(gid int) (err error) {
_, _, e1 := syscall.RawSyscall(syscall.SYS_SETGID32, uintptr(gid), 0, 0)
_, _, e1 := unix.RawSyscall(unix.SYS_SETGID32, uintptr(gid), 0, 0)
if e1 != 0 {
err = e1
}


@@ -1,8 +1,9 @@
package system
import (
"syscall"
"unsafe"
"golang.org/x/sys/unix"
)
var _zero uintptr
@@ -10,7 +11,7 @@ var _zero uintptr
// Returns the size of the xattrs and a nil error.
// Requires a path; takes an allocated []byte or nil as the last argument.
func Llistxattr(path string, dest []byte) (size int, err error) {
pathBytes, err := syscall.BytePtrFromString(path)
pathBytes, err := unix.BytePtrFromString(path)
if err != nil {
return -1, err
}
@@ -21,7 +22,7 @@ func Llistxattr(path string, dest []byte) (size int, err error) {
newpathBytes = unsafe.Pointer(&_zero)
}
_size, _, errno := syscall.Syscall6(syscall.SYS_LLISTXATTR, uintptr(unsafe.Pointer(pathBytes)), uintptr(newpathBytes), uintptr(len(dest)), 0, 0, 0)
_size, _, errno := unix.Syscall6(unix.SYS_LLISTXATTR, uintptr(unsafe.Pointer(pathBytes)), uintptr(newpathBytes), uintptr(len(dest)), 0, 0, 0)
size = int(_size)
if errno != 0 {
return -1, errno
@@ -34,11 +35,11 @@ func Llistxattr(path string, dest []byte) (size int, err error) {
// Requires path and its attribute as arguments
func Lgetxattr(path string, attr string) ([]byte, error) {
var sz int
pathBytes, err := syscall.BytePtrFromString(path)
pathBytes, err := unix.BytePtrFromString(path)
if err != nil {
return nil, err
}
attrBytes, err := syscall.BytePtrFromString(attr)
attrBytes, err := unix.BytePtrFromString(attr)
if err != nil {
return nil, err
}
@@ -47,25 +48,25 @@ func Lgetxattr(path string, attr string) ([]byte, error) {
sz = 128
dest := make([]byte, sz)
destBytes := unsafe.Pointer(&dest[0])
_sz, _, errno := syscall.Syscall6(syscall.SYS_LGETXATTR, uintptr(unsafe.Pointer(pathBytes)), uintptr(unsafe.Pointer(attrBytes)), uintptr(destBytes), uintptr(len(dest)), 0, 0)
_sz, _, errno := unix.Syscall6(unix.SYS_LGETXATTR, uintptr(unsafe.Pointer(pathBytes)), uintptr(unsafe.Pointer(attrBytes)), uintptr(destBytes), uintptr(len(dest)), 0, 0)
switch {
case errno == syscall.ENODATA:
case errno == unix.ENODATA:
return nil, errno
case errno == syscall.ENOTSUP:
case errno == unix.ENOTSUP:
return nil, errno
case errno == syscall.ERANGE:
case errno == unix.ERANGE:
// A 128-byte array might not be large enough; a dummy buffer
// (``uintptr(0)``) is used to query the real size of the xattrs on disk
_sz, _, errno = syscall.Syscall6(syscall.SYS_LGETXATTR, uintptr(unsafe.Pointer(pathBytes)), uintptr(unsafe.Pointer(attrBytes)), uintptr(unsafe.Pointer(nil)), uintptr(0), 0, 0)
_sz, _, errno = unix.Syscall6(unix.SYS_LGETXATTR, uintptr(unsafe.Pointer(pathBytes)), uintptr(unsafe.Pointer(attrBytes)), uintptr(unsafe.Pointer(nil)), uintptr(0), 0, 0)
sz = int(_sz)
if sz < 0 {
return nil, errno
}
dest = make([]byte, sz)
destBytes := unsafe.Pointer(&dest[0])
_sz, _, errno = syscall.Syscall6(syscall.SYS_LGETXATTR, uintptr(unsafe.Pointer(pathBytes)), uintptr(unsafe.Pointer(attrBytes)), uintptr(destBytes), uintptr(len(dest)), 0, 0)
_sz, _, errno = unix.Syscall6(unix.SYS_LGETXATTR, uintptr(unsafe.Pointer(pathBytes)), uintptr(unsafe.Pointer(attrBytes)), uintptr(destBytes), uintptr(len(dest)), 0, 0)
if errno != 0 {
return nil, errno
}
@@ -77,11 +78,11 @@ func Lgetxattr(path string, attr string) ([]byte, error) {
}
func Lsetxattr(path string, attr string, data []byte, flags int) error {
pathBytes, err := syscall.BytePtrFromString(path)
pathBytes, err := unix.BytePtrFromString(path)
if err != nil {
return err
}
attrBytes, err := syscall.BytePtrFromString(attr)
attrBytes, err := unix.BytePtrFromString(attr)
if err != nil {
return err
}
@@ -91,7 +92,7 @@ func Lsetxattr(path string, attr string, data []byte, flags int) error {
} else {
dataBytes = unsafe.Pointer(&_zero)
}
_, _, errno := syscall.Syscall6(syscall.SYS_LSETXATTR, uintptr(unsafe.Pointer(pathBytes)), uintptr(unsafe.Pointer(attrBytes)), uintptr(dataBytes), uintptr(len(data)), uintptr(flags), 0)
_, _, errno := unix.Syscall6(unix.SYS_LSETXATTR, uintptr(unsafe.Pointer(pathBytes)), uintptr(unsafe.Pointer(attrBytes)), uintptr(dataBytes), uintptr(len(data)), uintptr(flags), 0)
if errno != 0 {
return errno
}
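The same `syscall`-to-`unix` migration applies here, with the hand-rolled `Syscall6` plumbing retained. As an aside, `golang.org/x/sys/unix` also ships ready-made `Llistxattr`/`Lgetxattr`/`Lsetxattr` wrappers; a sketch of the equivalent lookup, with placeholder path and attribute values:

```go
package main

import (
	"fmt"

	"golang.org/x/sys/unix"
)

func main() {
	buf := make([]byte, 128)
	sz, err := unix.Lgetxattr("/tmp", "security.selinux", buf)
	if err == unix.ERANGE {
		// The 128-byte guess was too small: query the real size with a
		// nil buffer, then retry with one that is large enough.
		sz, _ = unix.Lgetxattr("/tmp", "security.selinux", nil)
		buf = make([]byte, sz)
		sz, err = unix.Lgetxattr("/tmp", "security.selinux", buf)
	}
	if err != nil {
		fmt.Println("lgetxattr:", err)
		return
	}
	fmt.Printf("value: %q\n", buf[:sz])
}
```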


@@ -2,7 +2,8 @@ package user
import (
"errors"
"syscall"
"golang.org/x/sys/unix"
)
var (
@@ -40,7 +41,7 @@ func lookupUser(filter func(u User) bool) (User, error) {
// user cannot be found (or there is no /etc/passwd file on the filesystem),
// then CurrentUser returns an error.
func CurrentUser() (User, error) {
return LookupUid(syscall.Getuid())
return LookupUid(unix.Getuid())
}
// LookupUser looks up a user by their username in /etc/passwd. If the user
@@ -88,7 +89,7 @@ func lookupGroup(filter func(g Group) bool) (Group, error) {
// entry in /etc/passwd. If the group cannot be found (or there is no
// /etc/group file on the filesystem), then CurrentGroup returns an error.
func CurrentGroup() (Group, error) {
return LookupGid(syscall.Getgid())
return LookupGid(unix.Getgid())
}
// LookupGroup looks up a group by its name in /etc/group. If the group cannot


@@ -1,148 +0,0 @@
/*
* Copyright 2016 SUSE LLC
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <unistd.h>
#include "cmsg.h"
#define error(fmt, ...) \
({ \
fprintf(stderr, "nsenter: " fmt ": %m\n", ##__VA_ARGS__); \
errno = ECOMM; \
goto err; /* return value */ \
})
/*
* Sends a file descriptor along the sockfd provided. Returns the return
* value of sendmsg(2). Any synchronisation and preparation of state
* should be done external to this (we expect the other side to be in
* recvfd() in the code).
*/
ssize_t sendfd(int sockfd, struct file_t file)
{
struct msghdr msg = {0};
struct iovec iov[1] = {0};
struct cmsghdr *cmsg;
int *fdptr;
int ret;
union {
char buf[CMSG_SPACE(sizeof(file.fd))];
struct cmsghdr align;
} u;
/*
* We need to send some other data along with the ancillary data,
* otherwise the other side won't receive any data. This is very
* well-hidden in the documentation (and only applies to
* SOCK_STREAM). See the bottom part of unix(7).
*/
iov[0].iov_base = file.name;
iov[0].iov_len = strlen(file.name) + 1;
msg.msg_name = NULL;
msg.msg_namelen = 0;
msg.msg_iov = iov;
msg.msg_iovlen = 1;
msg.msg_control = u.buf;
msg.msg_controllen = sizeof(u.buf);
cmsg = CMSG_FIRSTHDR(&msg);
cmsg->cmsg_level = SOL_SOCKET;
cmsg->cmsg_type = SCM_RIGHTS;
cmsg->cmsg_len = CMSG_LEN(sizeof(int));
fdptr = (int *) CMSG_DATA(cmsg);
memcpy(fdptr, &file.fd, sizeof(int));
return sendmsg(sockfd, &msg, 0);
}
/*
* Receives a file descriptor from the sockfd provided. Returns the file
* descriptor as sent from sendfd(). It will return the file descriptor
* or die (literally) trying. Any synchronisation and preparation of
* state should be done external to this (we expect the other side to be
* in sendfd() in the code).
*/
struct file_t recvfd(int sockfd)
{
struct msghdr msg = {0};
struct iovec iov[1] = {0};
struct cmsghdr *cmsg;
struct file_t file = {0};
int *fdptr;
int olderrno;
union {
char buf[CMSG_SPACE(sizeof(file.fd))];
struct cmsghdr align;
} u;
/* Allocate a buffer. */
/* TODO: Make this dynamic with MSG_PEEK. */
file.name = malloc(TAG_BUFFER);
if (!file.name)
error("recvfd: failed to allocate file.tag buffer\n");
/*
* We need to "receive" the non-ancillary data even though we don't
* plan to use it at all. Otherwise, things won't work as expected.
* See unix(7) and other well-hidden documentation.
*/
iov[0].iov_base = file.name;
iov[0].iov_len = TAG_BUFFER;
msg.msg_name = NULL;
msg.msg_namelen = 0;
msg.msg_iov = iov;
msg.msg_iovlen = 1;
msg.msg_control = u.buf;
msg.msg_controllen = sizeof(u.buf);
ssize_t ret = recvmsg(sockfd, &msg, 0);
if (ret < 0)
goto err;
cmsg = CMSG_FIRSTHDR(&msg);
if (!cmsg)
error("recvfd: got NULL from CMSG_FIRSTHDR");
if (cmsg->cmsg_level != SOL_SOCKET)
error("recvfd: expected SOL_SOCKET in cmsg: %d", cmsg->cmsg_level);
if (cmsg->cmsg_type != SCM_RIGHTS)
error("recvfd: expected SCM_RIGHTS in cmsg: %d", cmsg->cmsg_type);
if (cmsg->cmsg_len != CMSG_LEN(sizeof(int)))
error("recvfd: expected correct CMSG_LEN in cmsg: %lu", (unsigned long)cmsg->cmsg_len);
fdptr = (int *) CMSG_DATA(cmsg);
if (!fdptr || *fdptr < 0)
error("recvfd: recieved invalid pointer");
file.fd = *fdptr;
return file;
err:
olderrno = errno;
free(file.name);
errno = olderrno;
return (struct file_t){0};
}
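The removed C helpers implement SCM_RIGHTS file-descriptor passing over an AF_UNIX socket. For reference, the same technique in pure Go with `golang.org/x/sys/unix` might look like the sketch below; the helper names mirror the C functions but are illustrative, not necessarily the replacement actually used:

```go
package main

import (
	"fmt"
	"os"

	"golang.org/x/sys/unix"
)

// sendFd passes file's descriptor over sock, with the file name as the
// non-ancillary payload (mirroring the C code above).
func sendFd(sock, file *os.File) error {
	oob := unix.UnixRights(int(file.Fd()))
	return unix.Sendmsg(int(sock.Fd()), []byte(file.Name()), oob, nil, 0)
}

// recvFd receives a descriptor sent by sendFd and rebuilds an *os.File.
func recvFd(sock *os.File) (*os.File, error) {
	name := make([]byte, 4096)
	oob := make([]byte, unix.CmsgSpace(4)) // room for one int-sized fd
	n, oobn, _, _, err := unix.Recvmsg(int(sock.Fd()), name, oob, 0)
	if err != nil {
		return nil, err
	}
	msgs, err := unix.ParseSocketControlMessage(oob[:oobn])
	if err != nil {
		return nil, err
	}
	if len(msgs) != 1 {
		return nil, fmt.Errorf("expected 1 control message, got %d", len(msgs))
	}
	fds, err := unix.ParseUnixRights(&msgs[0])
	if err != nil {
		return nil, err
	}
	return os.NewFile(uintptr(fds[0]), string(name[:n])), nil
}

func main() {
	pair, err := unix.Socketpair(unix.AF_UNIX, unix.SOCK_STREAM, 0)
	if err != nil {
		panic(err)
	}
	parent := os.NewFile(uintptr(pair[0]), "parent")
	child := os.NewFile(uintptr(pair[1]), "child")

	f, err := os.Open("/etc/hostname") // any open file works
	if err != nil {
		panic(err)
	}
	if err := sendFd(parent, f); err != nil {
		panic(err)
	}
	got, err := recvFd(child)
	if err != nil {
		panic(err)
	}
	fmt.Println("received fd for", got.Name())
}
```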


@@ -1,57 +0,0 @@
// +build linux
package utils
/*
* Copyright 2016 SUSE LLC
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
/*
#include <errno.h>
#include <stdlib.h>
#include "cmsg.h"
*/
import "C"
import (
"os"
"unsafe"
)
// RecvFd waits for a file descriptor to be sent over the given AF_UNIX
// socket. The file name of the remote file descriptor will be recreated
// locally (it is sent as non-auxiliary data in the same payload).
func RecvFd(socket *os.File) (*os.File, error) {
file, err := C.recvfd(C.int(socket.Fd()))
if err != nil {
return nil, err
}
defer C.free(unsafe.Pointer(file.name))
return os.NewFile(uintptr(file.fd), C.GoString(file.name)), nil
}
// SendFd sends a file descriptor over the given AF_UNIX socket. In
// addition, the file.Name() of the given file will also be sent as
// non-auxiliary data in the same payload (allowing to send contextual
// information for a file descriptor).
func SendFd(socket, file *os.File) error {
var cfile C.struct_file_t
cfile.fd = C.int(file.Fd())
cfile.name = C.CString(file.Name())
defer C.free(unsafe.Pointer(cfile.name))
_, err := C.sendfd(C.int(socket.Fd()), cfile)
return err
}


@@ -1,36 +0,0 @@
/*
* Copyright 2016 SUSE LLC
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
#pragma once
#if !defined(CMSG_H)
#define CMSG_H
#include <sys/types.h>
/* TODO: Implement this properly with MSG_PEEK. */
#define TAG_BUFFER 4096
/* This mirrors Go's (*os.File). */
struct file_t {
char *name;
int fd;
};
struct file_t recvfd(int sockfd);
ssize_t sendfd(int sockfd, struct file_t file);
#endif /* !defined(CMSG_H) */


@@ -1,126 +0,0 @@
package utils
import (
"crypto/rand"
"encoding/hex"
"encoding/json"
"io"
"os"
"path/filepath"
"strings"
"syscall"
"unsafe"
)
const (
exitSignalOffset = 128
)
// GenerateRandomName returns a new name joined with a prefix. The specified
// size is used to truncate the randomly generated value
func GenerateRandomName(prefix string, size int) (string, error) {
id := make([]byte, 32)
if _, err := io.ReadFull(rand.Reader, id); err != nil {
return "", err
}
if size > 64 {
size = 64
}
return prefix + hex.EncodeToString(id)[:size], nil
}
// ResolveRootfs resolves any symlinks in the given rootfs path and
// returns its absolute path
func ResolveRootfs(uncleanRootfs string) (string, error) {
rootfs, err := filepath.Abs(uncleanRootfs)
if err != nil {
return "", err
}
return filepath.EvalSymlinks(rootfs)
}
// ExitStatus returns the correct exit status for a process based on if it
// was signaled or exited cleanly
func ExitStatus(status syscall.WaitStatus) int {
if status.Signaled() {
return exitSignalOffset + int(status.Signal())
}
return status.ExitStatus()
}
// WriteJSON writes the provided struct v to w using standard json marshaling
func WriteJSON(w io.Writer, v interface{}) error {
data, err := json.Marshal(v)
if err != nil {
return err
}
_, err = w.Write(data)
return err
}
// CleanPath makes a path safe for use with filepath.Join. This is done by not
// only cleaning the path, but also (if the path is relative) adding a leading
// '/' and cleaning it (then removing the leading '/'). This ensures that a
// path resulting from prepending another path will always resolve to lexically
// be a subdirectory of the prefixed path. This is all done lexically, so paths
// that include symlinks won't be safe as a result of using CleanPath.
func CleanPath(path string) string {
// Deal with empty strings nicely.
if path == "" {
return ""
}
// Ensure that all paths are cleaned (especially problematic ones like
// "/../../../../../" which can cause lots of issues).
path = filepath.Clean(path)
// If the path isn't absolute, we need to do more processing to fix paths
// such as "../../../../<etc>/some/path". We also shouldn't convert absolute
// paths to relative ones.
if !filepath.IsAbs(path) {
path = filepath.Clean(string(os.PathSeparator) + path)
// This can't fail, as (by definition) all paths are relative to root.
path, _ = filepath.Rel(string(os.PathSeparator), path)
}
// Clean the path again for good measure.
return filepath.Clean(path)
}
// SearchLabels searches a list of key-value pairs for the provided key and
// returns the corresponding value. The pairs must be separated with '='.
func SearchLabels(labels []string, query string) string {
for _, l := range labels {
parts := strings.SplitN(l, "=", 2)
if len(parts) < 2 {
continue
}
if parts[0] == query {
return parts[1]
}
}
return ""
}
// Annotations returns the bundle path and user defined annotations from the
// libcontainer state. We need to remove the bundle because that is a label
// added by libcontainer.
func Annotations(labels []string) (bundle string, userAnnotations map[string]string) {
userAnnotations = make(map[string]string)
for _, l := range labels {
parts := strings.SplitN(l, "=", 2)
if len(parts) < 2 {
continue
}
if parts[0] == "bundle" {
bundle = parts[1]
} else {
userAnnotations[parts[0]] = parts[1]
}
}
return
}
func GetIntSize() int {
return int(unsafe.Sizeof(1))
}
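Of the helpers removed here, `CleanPath` is the subtle one: it is purely lexical, so relative escapes are clipped but symlinks are left untouched. Expected behavior, shown with a local copy so the example runs standalone:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// cleanPath is a local copy of the CleanPath helper above.
func cleanPath(path string) string {
	if path == "" {
		return ""
	}
	path = filepath.Clean(path)
	if !filepath.IsAbs(path) {
		path = filepath.Clean(string(os.PathSeparator) + path)
		path, _ = filepath.Rel(string(os.PathSeparator), path)
	}
	return filepath.Clean(path)
}

func main() {
	fmt.Println(cleanPath(""))                    // ""
	fmt.Println(cleanPath("../../../etc/passwd")) // "etc/passwd"
	fmt.Println(cleanPath("/rootfs/../etc"))      // "/etc"
}
```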


@@ -1,43 +0,0 @@
// +build !windows
package utils
import (
"io/ioutil"
"os"
"strconv"
"syscall"
)
func CloseExecFrom(minFd int) error {
fdList, err := ioutil.ReadDir("/proc/self/fd")
if err != nil {
return err
}
for _, fi := range fdList {
fd, err := strconv.Atoi(fi.Name())
if err != nil {
// ignore non-numeric file names
continue
}
if fd < minFd {
// ignore descriptors lower than our specified minimum
continue
}
// intentionally ignore errors from syscall.CloseOnExec
syscall.CloseOnExec(fd)
// the cases where this might fail are basically file descriptors that have already been closed (including and especially the one that was created when ioutil.ReadDir did the "opendir" syscall)
}
return nil
}
// NewSockPair returns a new unix socket pair
func NewSockPair(name string) (parent *os.File, child *os.File, err error) {
fds, err := syscall.Socketpair(syscall.AF_LOCAL, syscall.SOCK_STREAM|syscall.SOCK_CLOEXEC, 0)
if err != nil {
return nil, nil, err
}
return os.NewFile(uintptr(fds[1]), name+"-p"), os.NewFile(uintptr(fds[0]), name+"-c"), nil
}


@@ -213,7 +213,7 @@ func SetFileLabel(path string, label string) error {
return lsetxattr(path, xattrNameSelinux, []byte(label), 0)
}
// Filecon returns the SELinux label for this path or returns an error.
// FileLabel returns the SELinux label for this path or returns an error.
func FileLabel(path string) (string, error) {
label, err := lgetxattr(path, xattrNameSelinux)
if err != nil {
@@ -331,7 +331,7 @@ func EnforceMode() int {
}
/*
SetEnforce sets the current SELinux mode Enforcing, Permissive.
SetEnforceMode sets the current SELinux mode Enforcing, Permissive.
Disabled is not valid, since this needs to be set at boot time.
*/
func SetEnforceMode(mode int) error {

vendor/github.com/ostreedev/ostree-go/LICENSE (new vendored file)

@@ -0,0 +1,17 @@
Portions of this code are derived from:
https://github.com/dradtke/gotk3
Copyright (c) 2013 Conformal Systems LLC.
Permission to use, copy, modify, and distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.

vendor/github.com/ostreedev/ostree-go/README.md (new vendored file)

@@ -0,0 +1,4 @@
OSTree-Go
=========
Go bindings for OSTree. Find out more about OSTree [here](https://github.com/ostreedev/ostree).


@@ -0,0 +1,60 @@
/*
* Copyright (c) 2013 Conformal Systems <info@conformal.com>
*
* This file originated from: http://opensource.conformal.com/
*
* Permission to use, copy, modify, and distribute this software for any
* purpose with or without fee is hereby granted, provided that the above
* copyright notice and this permission notice appear in all copies.
*
* THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
* WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
* MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
* ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
* WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
* ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
* OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
package glibobject
import (
"unsafe"
)
// #cgo pkg-config: glib-2.0 gobject-2.0
// #include <glib.h>
// #include <glib-object.h>
// #include <gio/gio.h>
// #include "glibobject.go.h"
// #include <stdlib.h>
import "C"
/*
* GBoolean
*/
// GBoolean is a Go representation of glib's gboolean
type GBoolean C.gboolean
func NewGBoolean() GBoolean {
return GBoolean(0)
}
func GBool(b bool) GBoolean {
if b {
return GBoolean(1)
}
return GBoolean(0)
}
func (b GBoolean) Ptr() unsafe.Pointer {
return unsafe.Pointer(&b)
}
func GoBool(b GBoolean) bool {
if b != 0 {
return true
}
return false
}


@@ -0,0 +1,47 @@
/*
* Copyright (c) 2013 Conformal Systems <info@conformal.com>
*
* This file originated from: http://opensource.conformal.com/
*
* Permission to use, copy, modify, and distribute this software for any
* purpose with or without fee is hereby granted, provided that the above
* copyright notice and this permission notice appear in all copies.
*
* THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
* WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
* MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
* ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
* WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
* ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
* OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
package glibobject
// #cgo pkg-config: glib-2.0 gobject-2.0
// #include <glib.h>
// #include <glib-object.h>
// #include <gio/gio.h>
// #include "glibobject.go.h"
// #include <stdlib.h>
import "C"
import (
"unsafe"
)
// GIO types
type GCancellable struct {
*GObject
}
func (self *GCancellable) native() *C.GCancellable {
return (*C.GCancellable)(unsafe.Pointer(self))
}
func (self *GCancellable) Ptr() unsafe.Pointer {
return unsafe.Pointer(self)
}
// At the moment, no cancellable API, just pass nil

Some files were not shown because too many files have changed in this diff.