Compare commits

43 Commits
v0.2 ... v0.4

Author SHA1 Message Date
Daniel J Walsh
9cbccf88cf Merge pull request #268 from rhatdan/master
Bump to version 0.4
2017-09-22 06:10:31 -04:00
Daniel J Walsh
de0d8cbdcf Bump to version 0.4
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
2017-09-22 10:05:03 +00:00
Daniel J Walsh
333899deb6 Merge pull request #264 from lsm5/fix-readme-ubuntu
update README.md to remove Debian mentions
2017-09-22 05:59:07 -04:00
TomSweeneyRedHat
1d0b48d7da Add default transport to push if not provided
Signed-off-by: TomSweeneyRedHat <tsweeney@redhat.com>

Closes: #260
Approved by: rhatdan
2017-09-21 21:02:23 +00:00
Lokesh Mandvekar
a2765bb1be update README.md to remove Debian mentions
The PPA install steps mentioned in README.md work only for Ubuntu xenial and
zesty so far.

Signed-off-by: Lokesh Mandvekar <lsm5@fedoraproject.org>
2017-09-20 02:47:36 -04:00
Daniel J Walsh
c19c8f9503 Merge pull request #261 from vbatts/example
examples: adding a basic lighttpd example
2017-09-18 10:42:04 -04:00
Vincent Batts
f17bfb937f examples: adding a basic lighttpd example
Signed-off-by: Vincent Batts <vbatts@hashbangbash.com>
2017-09-18 08:43:52 -04:00
Daniel J Walsh
4e4ceff6cf Merge pull request #244 from rhatdan/unbuntu
Add build information for Ubuntu
2017-09-08 07:56:11 -04:00
Daniel J Walsh
ef532adb2f Add build information for Ubuntu
We should document the required packages for installing on Ubuntu and Debian,
to match what we already document for Fedora.

Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
2017-09-06 06:30:27 -04:00
Nalin Dahyabhai
9327431e97 Avoid trying to print a nil ImageReference
When we fail to pull an image, don't try to include the name of the
image that pullImage() returned in the error text - it will have
returned nil for the pulled reference in most cases. Instead, use the
image name as it was given to us.

Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>

Closes: #255
Approved by: nalind
2017-08-31 21:24:35 +00:00
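
A minimal sketch of the corrected pattern, reusing names from this codebase (pullImage, options.FromImage) purely for illustration:

```go
// On failure, pullImage usually returns a nil reference, so report the
// image name we were asked to pull instead of formatting the nil result.
ref, err := pullImage(store, options, systemContext)
if err != nil {
	return nil, errors.Wrapf(err, "error pulling image %q", options.FromImage)
}
// dereference ref only on the success path
```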
TomSweeneyRedHat
c9c735e20d Add authentication to commit and push
Signed-off-by: TomSweeneyRedHat <tsweeney@redhat.com>

Closes: #250
Approved by: rhatdan
2017-08-29 15:20:19 +00:00
Nalin Dahyabhai
f28dcb3751 Auto-set build tags for ostree and selinux
Try to use pkg-config to check for 'ostree-1' and 'libselinux'.

If ostree-1 is not found, use the containers_image_ostree_stub build tag
to not require it, at the cost of not being able to use or write images
to the 'ostree' transport.

If libselinux is found, build with the 'selinux' tag.

Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>

Closes: #252
Approved by: rhatdan
2017-08-29 13:22:53 +00:00
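
The repository wires these checks into the Makefile through small helper scripts (see the ostree_tag.sh and selinux_tag.sh references in the Makefile hunk below); a self-contained Go sketch of the same probing logic, assuming only that pkg-config is on $PATH:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	var tags []string
	// No ostree-1 development files: stub out the ostree transport.
	if exec.Command("pkg-config", "ostree-1").Run() != nil {
		tags = append(tags, "containers_image_ostree_stub")
	}
	// libselinux is available: enable SELinux support.
	if exec.Command("pkg-config", "libselinux").Run() == nil {
		tags = append(tags, "selinux")
	}
	fmt.Println(strings.Join(tags, " "))
}
```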
Daniel J Walsh
9e088bd41d Add information on buildah from man page on transports
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>

Closes: #234
Approved by: rhatdan
2017-08-29 10:37:54 +00:00
Daniel J Walsh
52087ca1c5 Remove --transport flag
This is no simpler than putting the transport in the image name; we
should default to the registry specified in containers/image and not
override it. People are confused by this option, and I see no value
in it.

Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>

Closes: #234
Approved by: rhatdan
2017-08-29 10:37:54 +00:00
Nalin Dahyabhai
0de0d23df4 Run: don't complain about missing volume locations
Don't treat a failure to populate a temporary volume from the location
in the image where it's expected to be mounted as an error when the
failure happens because that location doesn't exist.

Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>

Closes: #248
Approved by: rhatdan
2017-08-24 10:41:29 +00:00
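
A sketch of the tolerated-failure pattern described above; copyWithTar, mountPoint, volPath, and tempVolumeDir are hypothetical stand-ins, not names from this commit:

```go
// Populating the volume from the image is best-effort: a missing source
// location is silently skipped, while any other failure is still reported.
src := filepath.Join(mountPoint, volPath) // volPath, mountPoint: hypothetical
if err := copyWithTar(src, tempVolumeDir); err != nil && !os.IsNotExist(errors.Cause(err)) {
	return errors.Wrapf(err, "error populating volume %q", volPath)
}
```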
TomSweeneyRedHat
498f0ae9d7 Add credentials to buildah from
Signed-off-by: TomSweeneyRedHat <tsweeney@redhat.com>


Closes: #204
Approved by: nalind
2017-08-22 18:55:38 +00:00
Daniel J Walsh
ee91e6b981 Remove export command
We have implemented most of this code in kpod export, and we now
have kpod import/load/save.  No reason to implement them in both
commands.

Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>

Closes: #245
Approved by: nalind
2017-08-17 19:40:47 +00:00
Nalin Dahyabhai
265d2da6cf Always free signature.PolicyContexts
Whenever we create a containers/image/signature.PolicyContext, make sure
we don't forget to destroy it.

Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>

Closes: #231
Approved by: rhatdan
2017-08-14 12:02:07 +00:00
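
The pattern this commit enforces is visible in the commit.go hunks further down; in outline:

```go
policy, err := signature.DefaultPolicy(systemContext)
if err != nil {
	return errors.Wrapf(err, "error obtaining default signature policy")
}
policyContext, err := signature.NewPolicyContext(policy)
if err != nil {
	return errors.Wrapf(err, "error creating new signature policy context")
}
// Destroy the policy context on every exit path, not just the happy one.
defer func() {
	if err2 := policyContext.Destroy(); err2 != nil {
		logrus.Debugf("error destroying signature policy context: %v", err2)
	}
}()
```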
Nalin Dahyabhai
8eb7d6d610 Run(): create the right working directory
When ensuring that the working directory exists before running a
command, make sure we create the location that we set in the
configuration file that we pass to runc.

Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>

Closes: #241
Approved by: rhatdan
2017-08-10 20:14:54 +00:00
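
A sketch of the fix, assuming a runtime-spec configuration in spec and the container root mounted at mountPoint:

```go
// Create the working directory at the same path that the runc
// configuration (spec.Process.Cwd) will ask the runtime to use.
if spec.Process.Cwd != "" {
	cwd := filepath.Join(mountPoint, spec.Process.Cwd)
	if err := os.MkdirAll(cwd, 0755); err != nil {
		return errors.Wrapf(err, "error ensuring working directory %q exists", cwd)
	}
}
```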
Nalin Dahyabhai
94f2bf025a Replace --registry with --transport
Replace --registry command line flags with --transport.  For backward
compatibility, add Transport as an additional setting that we prepend to
the still-optional Registry setting if the Transport and image name
alone don't provide a parseable image reference.

Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>

Closes: #235
Approved by: rhatdan
2017-08-03 15:55:13 +00:00
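
The resolution order described above can be sketched as follows; the exact candidate list is an assumption drawn from the commit message, not code from this change:

```go
// Try the name as given, then with the transport prepended, then with the
// transport and registry both prepended. No separator is implicitly added.
for _, spec := range []string{name, transport + name, transport + registry + name} {
	if ref, err := alltransports.ParseImageName(spec); err == nil {
		return ref, nil
	}
}
return nil, errors.Errorf("error parsing %q as an image reference", name)
```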
Nalin Dahyabhai
262b43a866 Improve "from" behavior with unnamed references
Fix our instantiation behavior when the source image reference is not a
named reference.

Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>

Closes: #235
Approved by: rhatdan
2017-08-03 15:55:13 +00:00
Daniel J Walsh
5259a84b7a atomic transport is being deprecated, so we should not document it.
containers/image now fully supports pushing images and signatures to an
openshift/atomic registry using the docker:// transport.

Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>

Closes: #238
Approved by: rhatdan
2017-08-02 12:01:26 +00:00
TomSweeneyRedHat
bf83bc208d Add quiet description and touch ups
Signed-off-by: TomSweeneyRedHat <tsweeney@redhat.com>

Closes: #233
Approved by: rhatdan
2017-07-30 12:34:36 +00:00
Nalin Dahyabhai
e616dc116a Avoid parsing image metadata for dates and layers
Avoid parsing metadata that the image library keeps in order to find an
image's digest and creation date; instead, compute the digest from the
manifest, and read the creation date value by inspecting the image,
logging a debug-level diagnostic if it doesn't match the value that the
storage library has on record.

Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>

Closes: #218
Approved by: rhatdan
2017-07-28 12:23:07 +00:00
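
Computing the digest from the raw manifest bytes uses the go-digest call that also appears in the common.go hunk below; a self-contained example:

```go
package main

import (
	_ "crypto/sha256" // register the hash implementation go-digest relies on
	"fmt"

	digest "github.com/opencontainers/go-digest"
)

func main() {
	manifest := []byte(`{"schemaVersion": 2}`) // stand-in manifest bytes
	// Canonical (sha256) digest of the manifest, as registries compute it.
	fmt.Println(digest.Canonical.FromBytes(manifest).String())
}
```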
Nalin Dahyabhai
933c18f2ad Read the image's creation date from public API
Use the storage library's new public field for retrieving an image's
creation date.

Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>

Closes: #218
Approved by: rhatdan
2017-07-28 12:23:07 +00:00
Nalin Dahyabhai
be5bcd549d Bump containers/storage and containers/image
Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>

Closes: #227
Approved by: rhatdan
2017-07-28 12:10:46 +00:00
Jonathan Lebon
c845d7a5fe ci: use Fedora registry and mount repos
Use the official Fedora 26 image from the Fedora registry rather than
from the Docker Hub.

Also mount yum repos from the host. This will speed up provisioning
because PAPR injects mirror repos that are much closer and faster.

Signed-off-by: Jonathan Lebon <jlebon@redhat.com>

Closes: #225
Approved by: rhatdan
2017-07-27 18:39:31 +00:00
Jonathan Lebon
16d9d97d8c ci: rename files to the new PAPR name
Rename the YAML file and its auxiliary files to the newly supported
name.

Signed-off-by: Jonathan Lebon <jlebon@redhat.com>

Closes: #225
Approved by: rhatdan
2017-07-27 18:39:31 +00:00
Nalin Dahyabhai
8e36b22a71 Makefile: "clean" should also remove test helpers
Make "clean" also remove the imgtype helper.

Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>

Closes: #216
Approved by: rhatdan
2017-07-26 20:59:38 +00:00
Nalin Dahyabhai
98c4e0d970 Don't panic if an image's ID can't be parsed
Return a "doesn't match" result if an image's ID can't be turned into a
valid reference for any reason.

Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>

Closes: #217
Approved by: rhatdan
2017-07-26 20:35:47 +00:00
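
In outline; the same guard appears in the matchesLabel hunk below:

```go
// An image ID that can't be turned into a valid store reference means
// "doesn't match", not a panic.
storeRef, err := is.Transport.ParseStoreReference(store, "@"+image.ID)
if err != nil {
	return false
}
img, err := storeRef.NewImage(nil)
if err != nil {
	return false
}
defer img.Close()
```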
Nalin Dahyabhai
83fe25ca4e Turn on --enable-gc when running gometalinter
It looks like the metalinter is running out of memory while running
tests under PAPR, so give this a try.

Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>

Closes: #221
Approved by: rhatdan
2017-07-26 17:43:05 +00:00
Nalin Dahyabhai
b7e9966fb2 Make sure that we can build an RPM
Add a CI test that ensures that we can build an RPM package on the
current version (as of this writing, 26) of Fedora, using the .spec file
under contrib.

Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>

Closes: #208
Approved by: jlebon
2017-07-25 21:03:38 +00:00
Nalin Dahyabhai
8a3ccb53c4 rmi: handle truncated image IDs
Have storageImageID() use a lower-level image lookup to let it handle
truncated IDs correctly.  Wrap errors in getImage().  When reporting
that an image is in use, report its ID correctly.

Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>

Closes: #211
Approved by: rhatdan
2017-07-25 20:52:33 +00:00
Nalin Dahyabhai
3e9a075b48 Don't leak containers/image Image references
In-memory image objects created using an ImageReference's NewImage()
method need to be Close()d.

Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>

Closes: #211
Approved by: rhatdan
2017-07-25 20:52:33 +00:00
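
In outline, as the rmi.go hunks below show:

```go
// NewImage allocates resources that are only released by Close().
img, err := ref.NewImage(nil)
if err != nil {
	return err
}
defer img.Close()
```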
Nalin Dahyabhai
95d9d22949 Prefer higher-level storage APIs
Prefer higher-level storage APIs (Store) over lower-level storage APIs
(LayerStore, ImageStore, and ContainerStore objects), so that we don't
bypass synchronization and locking.

Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>

Closes: #211
Approved by: rhatdan
2017-07-25 20:52:33 +00:00
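
A before/after sketch using calls that appear in the rmi.go hunk below:

```go
// Before: grab the ImageStore directly, bypassing the Store's locking.
//   imgStore, _ := store.ImageStore()
//   imgStore.SetNames(image.ID, newNames)
//   imgStore.Save()

// After: go through the Store, which synchronizes and locks for us.
if err := store.SetNames(image.ID, newNames); err != nil {
	return errors.Wrapf(err, "error updating names for image %q", image.ID)
}
```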
TomSweeneyRedHat
728f641179 Update README.md with buildah pronunciation
Signed-off-by: TomSweeneyRedHat <tsweeney@redhat.com>

Closes: #210
Approved by: rhatdan
2017-07-25 17:17:54 +00:00
Nalin Dahyabhai
8ce683f4fe Build our own libseccomp in Travis
The libseccomp2 in Travis (or rather, in the default repositories that
we have) is too old to support conditional filtering, so we need to
supply our own in order to not get "conditional filtering requires
libseccomp version >= 2.2.1" errors from runc.

That version also appears to be happy to translate the syscall name
_llseek from a name to a number that it doesn't recognize, triggering
"unrecognized syscall" errors.

Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>

Closes: #202
Approved by: rhatdan
2017-07-24 13:03:42 +00:00
Nalin Dahyabhai
fd7762b7e2 Try to enable "run" tests in CI
Try to ensure that we have runc, so that we can test the "run" command
in CI.  In the absence of a compatible packaged version of runc, we may
have to build our own.

Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>

Closes: #202
Approved by: rhatdan
2017-07-24 13:03:42 +00:00
Nalin Dahyabhai
9333e5369d Switch to running PAPR tests on Fedora 26
Run PAPR tests using Fedora 26 instead of Fedora 25.

Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>

Closes: #202
Approved by: rhatdan
2017-07-24 13:03:42 +00:00
Nalin Dahyabhai
e92020a4db Keep the version in the .spec file current
Add a test to compare the version we claim to be with the version
recorded in the RPM .spec file.

Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>

Closes: #202
Approved by: rhatdan
2017-07-24 13:03:42 +00:00
Dan Walsh
b9b2a8a7ef Vendor in latest containers/image and bump to version 0.3
OCI 1.0 Image specification is now released, so we want to bump the
buildah version to support 1.0 images.

Bump to version 0.3

Signed-off-by: Dan Walsh <dwalsh@redhat.com>

Closes: #203
Approved by: rhatdan
2017-07-20 20:23:32 +00:00
Nalin Dahyabhai
b37a981500 Stop trying to set the Platform in runtime specs
run: The latest version of runtime-spec dropped the Platform field, so
stop trying to set it when generating a configuration for a runtime.

Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>

Closes: #201
Approved by: rhatdan
2017-07-20 18:38:19 +00:00
Nalin Dahyabhai
a500e22104 Update image-spec and runtime-spec to v1.0.0
Update to just-released versions of image-spec and runtime-spec, and the
latest version of runtime-tools.

Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>

Closes: #201
Approved by: rhatdan
2017-07-20 18:38:19 +00:00
94 changed files with 2322 additions and 1079 deletions


@@ -18,13 +18,15 @@ dnf install -y \
golang \
gpgme-devel \
libassuan-devel \
libseccomp-devel \
libselinux-devel \
make \
ostree-devel \
which
# Red Hat CI adds a merge commit, for testing, which fails the
# PAPR adds a merge commit, for testing, which fails the
# short-commit-subject validation test, so tell git-validate.sh to only check
# up to, but not including, the merge commit.
export GITVALIDATE_TIP=$(cd $GOSRC; git log -2 --pretty='%H' | tail -n 1)
make -C $GOSRC install.tools all validate
make -C $GOSRC install.tools runc all validate TAGS="seccomp"
$GOSRC/tests/test_runner.sh

.papr.yml Normal file

@@ -0,0 +1,15 @@
branches:
- master
- auto
- try
host:
distro: fedora/26/atomic
required: true
tests:
# mount yum repos to inherit injected mirrors from PAPR
- docker run --privileged -v /etc/yum.repos.d:/etc/yum.repos.d.host:ro
-v $PWD:/code registry.fedoraproject.org/fedora:26 sh -c
"cp -fv /etc/yum.repos.d{.host/*.repo,} && /code/.papr.sh"


@@ -1,12 +0,0 @@
branches:
- master
- auto
- try
host:
distro: fedora/25/atomic
required: true
tests:
- docker run --privileged -v $PWD:/code fedora:25 /code/.redhat-ci.sh


@@ -5,10 +5,13 @@ go:
- tip
dist: trusty
sudo: required
services:
- docker
before_install:
- sudo add-apt-repository -y ppa:duggan/bats
- sudo apt-get -qq update
- sudo apt-get -qq install bats btrfs-tools git libdevmapper-dev libglib2.0-dev libgpgme11-dev
- sudo apt-get -qq install bats btrfs-tools git libapparmor-dev libdevmapper-dev libglib2.0-dev libgpgme11-dev libselinux1-dev
- sudo apt-get -qq remove libseccomp2
script:
- make install.tools all validate TAGS=containers_image_ostree_stub
- make install.tools install.libseccomp.sudo all runc validate TAGS="apparmor seccomp"
- cd tests; sudo PATH="$PATH" ./test_runner.sh


@@ -1,4 +1,5 @@
AUTOTAGS := $(shell ./btrfs_tag.sh) $(shell ./libdm_tag.sh)
AUTOTAGS := $(shell ./btrfs_tag.sh) $(shell ./libdm_tag.sh) $(shell ./ostree_tag.sh) $(shell ./selinux_tag.sh)
TAGS := seccomp
PREFIX := /usr/local
BINDIR := $(PREFIX)/bin
BASHINSTALLDIR=${PREFIX}/share/bash-completion/completions
@@ -7,6 +8,9 @@ BUILDFLAGS := -tags "$(AUTOTAGS) $(TAGS)"
GIT_COMMIT := $(shell git rev-parse --short HEAD)
BUILD_INFO := $(shell date +%s)
RUNC_COMMIT := c5ec25487693612aed95673800863e134785f946
LIBSECCOMP_COMMIT := release-2.3
LDFLAGS := -ldflags '-X main.gitCommit=${GIT_COMMIT} -X main.buildInfo=${BUILD_INFO}'
all: buildah imgtype docs
@@ -19,7 +23,7 @@ imgtype: *.go docker/*.go util/*.go tests/imgtype.go
.PHONY: clean
clean:
$(RM) buildah
$(RM) buildah imgtype
$(MAKE) -C docs clean
.PHONY: docs
@@ -51,6 +55,19 @@ install.tools:
go get -u $(BUILDFLAGS) gopkg.in/alecthomas/gometalinter.v1
gometalinter.v1 -i
.PHONY: runc
runc: gopath
rm -rf ../../opencontainers/runc
git clone https://github.com/opencontainers/runc ../../opencontainers/runc
cd ../../opencontainers/runc && git checkout $(RUNC_COMMIT) && go build -tags "$(AUTOTAGS) $(TAGS)"
ln -sf ../../opencontainers/runc/runc
.PHONY: install.libseccomp.sudo
install.libseccomp.sudo: gopath
rm -rf ../../seccomp/libseccomp
git clone https://github.com/seccomp/libseccomp ../../seccomp/libseccomp
cd ../../seccomp/libseccomp && git checkout $(LIBSECCOMP_COMMIT) && ./autogen.sh && ./configure --prefix=/usr && make all && sudo make install
.PHONY: install
install:
install -D -m0755 buildah $(DESTDIR)/$(BINDIR)/buildah


@@ -1,4 +1,4 @@
buildah - a tool which facilitates building OCI container images
[buildah](https://www.youtube.com/embed/YVk5NgSiUw8) - a tool which facilitates building OCI container images
================================================================
[![Go Report Card](https://goreportcard.com/badge/github.com/projectatomic/buildah)](https://goreportcard.com/report/github.com/projectatomic/buildah)
@@ -53,7 +53,8 @@ In Fedora, you can use this command:
skopeo-containers
```
Then to install buildah follow the steps in this example:
Then to install buildah on Fedora follow the steps in this example:
```
mkdir ~/buildah
@@ -66,6 +67,29 @@ Then to install buildah follow the steps in this example:
buildah --help
```
In Ubuntu zesty and xenial, you can use this command:
```
apt-get -y install software-properties-common
add-apt-repository -y ppa:alexlarsson/flatpak
add-apt-repository -y ppa:gophers/archive
apt-add-repository -y ppa:projectatomic/ppa
apt-get -y -qq update
apt-get -y install bats btrfs-tools git libapparmor-dev libdevmapper-dev libglib2.0-dev libgpgme11-dev libostree-dev libseccomp-dev libselinux1-dev skopeo-containers go-md2man
apt-get -y install golang-1.8
```
Then to install buildah on Ubuntu follow the steps in this example:
```
mkdir ~/buildah
cd ~/buildah
export GOPATH=`pwd`
git clone https://github.com/projectatomic/buildah ./src/github.com/projectatomic/buildah
cd ./src/github.com/projectatomic/buildah
PATH=/usr/lib/go-1.8/bin:$PATH make runc all TAGS="apparmor seccomp"
make install
buildah --help
```
buildah uses `runc` to run commands when `buildah run` is used, or when `buildah build-using-dockerfile`
encounters a `RUN` instruction, so you'll also need to build and install a compatible version of
[runc](https://github.com/opencontainers/runc) for buildah to call for those cases.
@@ -79,7 +103,6 @@ encounters a `RUN` instruction, so you'll also need to build and install a compa
| [buildah-config(1)](/docs/buildah-config.md) | Update image configuration settings. |
| [buildah-containers(1)](/docs/buildah-containers.md) | List the working containers and their base images. |
| [buildah-copy(1)](/docs/buildah-copy.md) | Copies the contents of a file, URL, or directory into a container's working directory. |
| [buildah-export(1)](/docs/buildah-export.md) | Export the contents of a container's filesystem as a tar archive |
| [buildah-from(1)](/docs/buildah-from.md) | Creates a new working container, either from scratch or using a specified image as a starting point. |
| [buildah-images(1)](/docs/buildah-images.md) | List images in local storage. |
| [buildah-inspect(1)](/docs/buildah-inspect.md) | Inspects the configuration of a container or image. |


@@ -7,6 +7,7 @@ import (
"os"
"path/filepath"
"github.com/containers/image/types"
"github.com/containers/storage"
"github.com/containers/storage/pkg/ioutils"
"github.com/opencontainers/image-spec/specs-go/v1"
@@ -19,7 +20,7 @@ const (
// identify working containers.
Package = "buildah"
// Version for the Package
Version = "0.2"
Version = "0.4"
// The value we use to identify what type of information, currently a
// serialized Builder structure, we are using as per-container state.
// This should only be changed when we make incompatible changes to
@@ -103,8 +104,13 @@ type BuilderOptions struct {
PullPolicy int
// Registry is a value which is prepended to the image's name, if it
// needs to be pulled and the image name alone can not be resolved to a
// reference to a source image.
// reference to a source image. No separator is implicitly added.
Registry string
// Transport is a value which is prepended to the image's name, if it
// needs to be pulled and the image name alone, or the image name and
// the registry together, can not be resolved to a reference to a
// source image. No separator is implicitly added.
Transport string
// Mount signals to NewBuilder() that the container should be mounted
// immediately.
Mount bool
@@ -117,6 +123,9 @@ type BuilderOptions struct {
// ReportWriter is an io.Writer which will be used to log the reading
// of the source image from a registry, if we end up pulling the image.
ReportWriter io.Writer
// github.com/containers/image/types SystemContext to hold credentials
// and other authentication/authorization information.
SystemContext *types.SystemContext
}
// ImportOptions are used to initialize a Builder from an existing container


@@ -17,11 +17,6 @@ var (
Name: "quiet, q",
Usage: "refrain from announcing build instructions and image read/write progress",
},
cli.StringFlag{
Name: "registry",
Usage: "prefix to prepend to the image name in order to pull the image",
Value: DefaultRegistry,
},
cli.BoolTFlag{
Name: "pull",
Usage: "pull the image if not present",
@@ -82,10 +77,6 @@ func budCmd(c *cli.Context) error {
tags = tags[1:]
}
}
registry := DefaultRegistry
if c.IsSet("registry") {
registry = c.String("registry")
}
pull := true
if c.IsSet("pull") {
pull = c.BoolT("pull")
@@ -208,7 +199,6 @@ func budCmd(c *cli.Context) error {
options := imagebuildah.BuildOptions{
ContextDirectory: contextDir,
PullPolicy: pullPolicy,
Registry: registry,
Compression: imagebuildah.Gzip,
Quiet: quiet,
SignaturePolicyPath: signaturePolicy,


@@ -19,6 +19,20 @@ var (
Name: "disable-compression, D",
Usage: "don't compress layers",
},
cli.StringFlag{
Name: "cert-dir",
Value: "",
Usage: "use certificates at the specified path to access the registry",
},
cli.StringFlag{
Name: "creds",
Value: "",
Usage: "use `username[:password]` for accessing the registry",
},
cli.BoolTFlag{
Name: "tls-verify",
Usage: "Require HTTPS and verify certificates when accessing the registry",
},
cli.StringFlag{
Name: "signature-policy",
Usage: "`pathname` of signature policy file (not usually used)",
@@ -118,11 +132,17 @@ func commitCmd(c *cli.Context) error {
dest = dest2
}
systemContext, err := systemContextFromOptions(c)
if err != nil {
return errors.Wrapf(err, "error building system context")
}
options := buildah.CommitOptions{
PreferredManifestType: format,
Compression: compress,
SignaturePolicyPath: signaturePolicy,
HistoryTimestamp: &timestamp,
SystemContext: systemContext,
}
if !quiet {
options.ReportWriter = os.Stderr


@@ -1,28 +1,20 @@
package main
import (
"encoding/json"
"os"
"strings"
"syscall"
"time"
is "github.com/containers/image/storage"
"github.com/containers/image/types"
"github.com/containers/storage"
digest "github.com/opencontainers/go-digest"
"github.com/pkg/errors"
"github.com/projectatomic/buildah"
"github.com/urfave/cli"
)
type imageMetadata struct {
Tag string `json:"tag"`
CreatedTime time.Time `json:"created-time"`
ID string `json:"id"`
Blobs []types.BlobInfo `json:"blob-list"`
Layers map[string][]string `json:"layers"`
SignatureSizes []string `json:"signature-sizes"`
}
var needToShutdownStore = false
func getStore(c *cli.Context) (storage.Store, error) {
@@ -85,30 +77,84 @@ func openImage(store storage.Store, name string) (builder *buildah.Builder, err
return builder, nil
}
func parseMetadata(image storage.Image) (imageMetadata, error) {
var im imageMetadata
dec := json.NewDecoder(strings.NewReader(image.Metadata))
if err := dec.Decode(&im); err != nil {
return imageMetadata{}, err
}
return im, nil
}
func getSize(image storage.Image, store storage.Store) (int64, error) {
func getDateAndDigestAndSize(image storage.Image, store storage.Store) (time.Time, string, int64, error) {
created := time.Time{}
is.Transport.SetStore(store)
storeRef, err := is.Transport.ParseStoreReference(store, "@"+image.ID)
if err != nil {
return -1, err
return created, "", -1, err
}
img, err := storeRef.NewImage(nil)
if err != nil {
return -1, err
return created, "", -1, err
}
imgSize, err := img.Size()
if err != nil {
return -1, err
defer img.Close()
imgSize, sizeErr := img.Size()
if sizeErr != nil {
imgSize = -1
}
return imgSize, nil
manifest, _, manifestErr := img.Manifest()
manifestDigest := ""
if manifestErr == nil && len(manifest) > 0 {
manifestDigest = digest.Canonical.FromBytes(manifest).String()
}
inspectInfo, inspectErr := img.Inspect()
if inspectErr == nil && inspectInfo != nil {
created = inspectInfo.Created
}
if sizeErr != nil {
err = sizeErr
} else if manifestErr != nil {
err = manifestErr
} else if inspectErr != nil {
err = inspectErr
}
return created, manifestDigest, imgSize, err
}
// systemContextFromOptions returns a SystemContext populated with values
// per the input parameters provided by the caller for use in authentication.
func systemContextFromOptions(c *cli.Context) (*types.SystemContext, error) {
ctx := &types.SystemContext{
DockerCertPath: c.String("cert-dir"),
}
if c.IsSet("tls-verify") {
ctx.DockerInsecureSkipTLSVerify = !c.BoolT("tls-verify")
}
if c.IsSet("creds") {
var err error
ctx.DockerAuthConfig, err = getDockerAuth(c.String("creds"))
if err != nil {
return nil, err
}
}
if c.IsSet("signature-policy") {
ctx.SignaturePolicyPath = c.String("signature-policy")
}
return ctx, nil
}
func parseCreds(creds string) (string, string, error) {
if creds == "" {
return "", "", errors.Wrapf(syscall.EINVAL, "credentials can't be empty")
}
up := strings.SplitN(creds, ":", 2)
if len(up) == 1 {
return up[0], "", nil
}
if up[0] == "" {
return "", "", errors.Wrapf(syscall.EINVAL, "username can't be empty")
}
return up[0], up[1], nil
}
func getDockerAuth(creds string) (*types.DockerAuthConfig, error) {
username, password, err := parseCreds(creds)
if err != nil {
return nil, err
}
return &types.DockerAuthConfig{
Username: username,
Password: password,
}, nil
}


@@ -29,30 +29,6 @@ func TestGetStore(t *testing.T) {
}
}
func TestParseMetadata(t *testing.T) {
// Make sure the tests are running as root
failTestIfNotRoot(t)
store, err := storage.GetStore(storage.DefaultStoreOptions)
if err != nil {
t.Fatal(err)
} else if store != nil {
is.Transport.SetStore(store)
}
images, err := store.Images()
if err != nil {
t.Fatalf("Error reading images: %v", err)
} else if len(images) == 0 {
t.Fatalf("no images with metadata to parse")
}
_, err = parseMetadata(images[0])
if err != nil {
t.Error(err)
}
}
func TestGetSize(t *testing.T) {
// Make sure the tests are running as root
failTestIfNotRoot(t)
@@ -69,7 +45,7 @@ func TestGetSize(t *testing.T) {
t.Fatalf("Error reading images: %v", err)
}
_, err = getSize(images[0], store)
_, _, _, err = getDateAndDigestAndSize(images[0], store)
if err != nil {
t.Error(err)
}


@@ -1,87 +0,0 @@
package main
import (
"fmt"
"io"
"os"
"github.com/Sirupsen/logrus"
"github.com/containers/storage/pkg/archive"
"github.com/pkg/errors"
"github.com/projectatomic/buildah"
"github.com/urfave/cli"
)
var (
exportFlags = []cli.Flag{
cli.StringFlag{
Name: "output, o",
Usage: "write to a file, instead of STDOUT",
},
}
exportCommand = cli.Command{
Name: "export",
Usage: "Export container's filesystem contents as a tar archive",
Description: `This command exports the full or shortened container ID or container name to
STDOUT and should be redirected to a tar file.
`,
Flags: exportFlags,
Action: exportCmd,
ArgsUsage: "CONTAINER",
}
)
func exportCmd(c *cli.Context) error {
var builder *buildah.Builder
args := c.Args()
if len(args) == 0 {
return errors.Errorf("container name must be specified")
}
if len(args) > 1 {
return errors.Errorf("too many arguments specified")
}
name := args[0]
store, err := getStore(c)
if err != nil {
return err
}
builder, err = openBuilder(store, name)
if err != nil {
return errors.Wrapf(err, "error reading build container %q", name)
}
mountPoint, err := builder.Mount("")
if err != nil {
return errors.Wrapf(err, "error mounting %q container %q", name, builder.Container)
}
defer func() {
if err := builder.Unmount(); err != nil {
fmt.Printf("Failed to umount %q: %v\n", builder.Container, err)
}
}()
input, err := archive.Tar(mountPoint, 0)
if err != nil {
return errors.Wrapf(err, "error reading directory %q", name)
}
outFile := os.Stdout
if c.IsSet("output") {
outfile := c.String("output")
outFile, err = os.Create(outfile)
if err != nil {
return errors.Wrapf(err, "error creating file %q", outfile)
}
defer outFile.Close()
}
if logrus.IsTerminal(outFile) {
return errors.Errorf("Refusing to save to a terminal. Use the -o flag or redirect.")
}
_, err = io.Copy(outFile, input)
return err
}


@@ -9,13 +9,6 @@ import (
"github.com/urfave/cli"
)
const (
// DefaultRegistry is a prefix that we apply to an image name if we
// can't find one in the local Store, in order to generate a source
// reference for the image that we can then copy to the local Store.
DefaultRegistry = "docker://"
)
var (
fromFlags = []cli.Flag{
cli.StringFlag{
@@ -31,9 +24,18 @@ var (
Usage: "pull the image even if one with the same name is already present",
},
cli.StringFlag{
Name: "registry",
Usage: "`prefix` to prepend to the image name in order to pull the image",
Value: DefaultRegistry,
Name: "cert-dir",
Value: "",
Usage: "use certificates at the specified path to access the registry",
},
cli.StringFlag{
Name: "creds",
Value: "",
Usage: "use `username[:password]` for accessing the registry",
},
cli.BoolTFlag{
Name: "tls-verify",
Usage: "Require HTTPS and verify certificates when accessing the registry",
},
cli.StringFlag{
Name: "signature-policy",
@@ -65,12 +67,14 @@ func fromCmd(c *cli.Context) error {
if len(args) > 1 {
return errors.Errorf("too many arguments specified")
}
image := args[0]
registry := DefaultRegistry
if c.IsSet("registry") {
registry = c.String("registry")
systemContext, err := systemContextFromOptions(c)
if err != nil {
return errors.Wrapf(err, "error building system context")
}
pull := true
if c.IsSet("pull") {
pull = c.BoolT("pull")
@@ -111,8 +115,8 @@ func fromCmd(c *cli.Context) error {
FromImage: image,
Container: name,
PullPolicy: pullPolicy,
Registry: registry,
SignaturePolicyPath: signaturePolicy,
SystemContext: systemContext,
}
if !quiet {
options.ReportWriter = os.Stderr


@@ -9,6 +9,7 @@ import (
"encoding/json"
"github.com/Sirupsen/logrus"
is "github.com/containers/image/storage"
"github.com/containers/storage"
"github.com/pkg/errors"
@@ -190,11 +191,7 @@ func setFilterDate(images []storage.Image, imgName string) (time.Time, error) {
for _, name := range image.Names {
if matchesReference(name, imgName) {
// Set the date to this image
im, err := parseMetadata(image)
if err != nil {
return time.Time{}, errors.Wrapf(err, "could not get creation date for image %q", imgName)
}
date := im.CreatedTime
date := image.Created
return date, nil
}
}
@@ -218,16 +215,15 @@ func outputHeader(truncate, digests bool) {
func outputImages(images []storage.Image, format string, store storage.Store, filters *filterParams, argName string, hasTemplate, truncate, digests, quiet bool) error {
for _, image := range images {
imageMetadata, err := parseMetadata(image)
if err != nil {
fmt.Println(err)
createdTime := image.Created
inspectedTime, digest, size, _ := getDateAndDigestAndSize(image, store)
if !inspectedTime.IsZero() {
if createdTime != inspectedTime {
logrus.Debugf("image record and configuration disagree on the image's creation time for %q, using the one from the configuration", image)
createdTime = inspectedTime
}
}
createdTime := imageMetadata.CreatedTime.Format("Jan 2, 2006 15:04")
digest := ""
if len(imageMetadata.Blobs) > 0 {
digest = string(imageMetadata.Blobs[0].Digest)
}
size, _ := getSize(image, store)
names := []string{""}
if len(image.Names) > 0 {
@@ -250,12 +246,11 @@ func outputImages(images []storage.Image, format string, store storage.Store, fi
ID: image.ID,
Name: name,
Digest: digest,
CreatedAt: createdTime,
CreatedAt: createdTime.Format("Jan 2, 2006 15:04"),
Size: formattedSize(size),
}
if hasTemplate {
err = outputUsingTemplate(format, params)
if err != nil {
if err := outputUsingTemplate(format, params); err != nil {
return err
}
continue
@@ -297,12 +292,13 @@ func matchesDangling(name string, dangling string) bool {
func matchesLabel(image storage.Image, store storage.Store, label string) bool {
storeRef, err := is.Transport.ParseStoreReference(store, "@"+image.ID)
if err != nil {
return false
}
img, err := storeRef.NewImage(nil)
if err != nil {
return false
}
defer img.Close()
info, err := img.Inspect()
if err != nil {
return false
@@ -326,11 +322,7 @@ func matchesLabel(image storage.Image, store storage.Store, label string) bool {
// Returns true if the image was created since the filter image. Returns
// false otherwise
func matchesBeforeImage(image storage.Image, name string, params *filterParams) bool {
im, err := parseMetadata(image)
if err != nil {
return false
}
if im.CreatedTime.Before(params.beforeDate) {
if image.Created.IsZero() || image.Created.Before(params.beforeDate) {
return true
}
return false
@@ -339,18 +331,14 @@ func matchesBeforeImage(image storage.Image, name string, params *filterParams)
// Returns true if the image was created since the filter image. Returns
// false otherwise
func matchesSinceImage(image storage.Image, name string, params *filterParams) bool {
im, err := parseMetadata(image)
if err != nil {
return false
}
if im.CreatedTime.After(params.sinceDate) {
if image.Created.IsZero() || image.Created.After(params.sinceDate) {
return true
}
return false
}
func matchesID(id, argID string) bool {
return strings.HasPrefix(argID, id)
func matchesID(imageID, argID string) bool {
return strings.HasPrefix(imageID, argID)
}
func matchesReference(name, argName string) bool {


@@ -78,7 +78,6 @@ func main() {
configCommand,
containersCommand,
copyCommand,
exportCommand,
fromCommand,
imagesCommand,
inspectCommand,


@@ -19,6 +19,20 @@ var (
Name: "disable-compression, D",
Usage: "don't compress layers",
},
cli.StringFlag{
Name: "cert-dir",
Value: "",
Usage: "use certificates at the specified path to access the registry",
},
cli.StringFlag{
Name: "creds",
Value: "",
Usage: "use `username[:password]` for accessing the registry",
},
cli.BoolTFlag{
Name: "tls-verify",
Usage: "Require HTTPS and verify certificates when accessing the registry",
},
cli.StringFlag{
Name: "signature-policy",
Usage: "`pathname` of signature policy file (not usually used)",
@@ -74,15 +88,32 @@ func pushCmd(c *cli.Context) error {
if err != nil {
return err
}
dest, err := alltransports.ParseImageName(destSpec)
// add the docker:// transport to see if they neglected it.
if err != nil {
return err
if strings.Contains(destSpec, "://") {
return err
}
destSpec = "docker://" + destSpec
dest2, err2 := alltransports.ParseImageName(destSpec)
if err2 != nil {
return err
}
dest = dest2
}
systemContext, err := systemContextFromOptions(c)
if err != nil {
return errors.Wrapf(err, "error building system context")
}
options := buildah.PushOptions{
Compression: compress,
SignaturePolicyPath: signaturePolicy,
Store: store,
SystemContext: systemContext,
}
if !quiet {
options.ReportWriter = os.Stderr


@@ -31,7 +31,6 @@ var (
)
func rmiCmd(c *cli.Context) error {
force := false
if c.IsSet("force") {
force = c.Bool("force")
@@ -61,7 +60,7 @@ func rmiCmd(c *cli.Context) error {
if force {
removeContainers(ctrIDs, store)
} else {
for ctrID := range ctrIDs {
for _, ctrID := range ctrIDs {
return fmt.Errorf("Could not remove image %q (must force) - container %q is using its reference image", id, ctrID)
}
}
@@ -76,7 +75,7 @@ func rmiCmd(c *cli.Context) error {
} else {
name, err2 := untagImage(id, image, store)
if err2 != nil {
return err
return errors.Wrapf(err, "error removing tag %q from image %q", id, image.ID)
}
fmt.Printf("untagged: %s\n", name)
}
@@ -86,7 +85,7 @@ func rmiCmd(c *cli.Context) error {
}
id, err := removeImage(image, store)
if err != nil {
return err
return errors.Wrapf(err, "error removing image %q", image.ID)
}
fmt.Printf("%s\n", id)
}
@@ -114,7 +113,7 @@ func getImage(id string, store storage.Store) (*storage.Image, error) {
if ref != nil {
image, err2 := is.Transport.GetStoreImage(store, ref)
if err2 != nil {
return nil, err2
return nil, errors.Wrapf(err2, "error reading image using reference %q", transports.ImageName(ref))
}
return image, nil
}
@@ -122,11 +121,6 @@ func getImage(id string, store storage.Store) (*storage.Image, error) {
}
func untagImage(imgArg string, image *storage.Image, store storage.Store) (string, error) {
// Remove name from image.Names and set the new name in the ImageStore
imgStore, err := store.ImageStore()
if err != nil {
return "", errors.Wrap(err, "could not untag image")
}
newNames := []string{}
removedName := ""
for _, name := range image.Names {
@@ -136,23 +130,17 @@ func untagImage(imgArg string, image *storage.Image, store storage.Store) (strin
}
newNames = append(newNames, name)
}
imgStore.SetNames(image.ID, newNames)
err = imgStore.Save()
return removedName, err
if removedName != "" {
if err := store.SetNames(image.ID, newNames); err != nil {
return "", errors.Wrapf(err, "error removing name %q from image %q", removedName, image.ID)
}
}
return removedName, nil
}
func removeImage(image *storage.Image, store storage.Store) (string, error) {
imgStore, err := store.ImageStore()
if err != nil {
return "", errors.Wrapf(err, "could not open image store")
}
err = imgStore.Delete(image.ID)
if err != nil {
return "", errors.Wrapf(err, "could not remove image")
}
err = imgStore.Save()
if err != nil {
return "", errors.Wrapf(err, "could not save image store")
if _, err := store.DeleteImage(image.ID, true); err != nil {
return "", errors.Wrapf(err, "could not remove image %q", image.ID)
}
return image.ID, nil
}
@@ -160,12 +148,7 @@ func removeImage(image *storage.Image, store storage.Store) (string, error) {
// Returns a list of running containers associated with the given ImageReference
func runningContainers(image *storage.Image, store storage.Store) ([]string, error) {
ctrIDs := []string{}
ctrStore, err := store.ContainerStore()
if err != nil {
return nil, err
}
containers, err := ctrStore.Containers()
containers, err := store.Containers()
if err != nil {
return nil, err
}
@@ -178,12 +161,8 @@ func runningContainers(image *storage.Image, store storage.Store) ([]string, err
}
func removeContainers(ctrIDs []string, store storage.Store) error {
ctrStore, err := store.ContainerStore()
if err != nil {
return err
}
for _, ctrID := range ctrIDs {
if err = ctrStore.Delete(ctrID); err != nil {
if err := store.DeleteContainer(ctrID); err != nil {
return errors.Wrapf(err, "could not remove container %q", ctrID)
}
}
@@ -193,46 +172,46 @@ func removeContainers(ctrIDs []string, store storage.Store) error {
// If it looks like a proper image reference, parse it and check if it
// corresponds to an image that actually exists.
func properImageRef(id string) (types.ImageReference, error) {
var ref types.ImageReference
var err error
if ref, err = alltransports.ParseImageName(id); err == nil {
if ref, err := alltransports.ParseImageName(id); err == nil {
if img, err2 := ref.NewImage(nil); err2 == nil {
img.Close()
return ref, nil
}
return nil, fmt.Errorf("error confirming presence of image reference %q: %v", transports.ImageName(ref), err)
return nil, errors.Wrapf(err, "error confirming presence of image reference %q", transports.ImageName(ref))
}
return nil, fmt.Errorf("error parsing %q as an image reference: %v", id, err)
return nil, errors.Wrapf(err, "error parsing %q as an image reference", id)
}
// If it looks like an image reference that's relative to our storage, parse
// it and check if it corresponds to an image that actually exists.
func storageImageRef(store storage.Store, id string) (types.ImageReference, error) {
var ref types.ImageReference
var err error
if ref, err = is.Transport.ParseStoreReference(store, id); err == nil {
if ref, err := is.Transport.ParseStoreReference(store, id); err == nil {
if img, err2 := ref.NewImage(nil); err2 == nil {
img.Close()
return ref, nil
}
return nil, fmt.Errorf("error confirming presence of storage image reference %q: %v", transports.ImageName(ref), err)
return nil, errors.Wrapf(err, "error confirming presence of storage image reference %q", transports.ImageName(ref))
}
return nil, fmt.Errorf("error parsing %q as a storage image reference: %v", id, err)
return nil, errors.Wrapf(err, "error parsing %q as a storage image reference", id)
}
// If it might be an ID that's relative to our storage, parse it and check if it
// corresponds to an image that actually exists. This _should_ be redundant,
// since we already tried deleting the image using the ID directly above, but it
// can't hurt either.
// It might be an ID that's relative to our storage, truncated or not, so
// parse it and check if it corresponds to an image that we have stored
// locally.
func storageImageID(store storage.Store, id string) (types.ImageReference, error) {
var ref types.ImageReference
var err error
if ref, err = is.Transport.ParseStoreReference(store, "@"+id); err == nil {
imageID := id
if img, err := store.Image(id); err == nil && img != nil {
imageID = img.ID
}
if ref, err := is.Transport.ParseStoreReference(store, "@"+imageID); err == nil {
if img, err2 := ref.NewImage(nil); err2 == nil {
img.Close()
return ref, nil
}
return nil, fmt.Errorf("error confirming presence of storage image reference %q: %v", transports.ImageName(ref), err)
return nil, errors.Wrapf(err, "error confirming presence of storage image reference %q", transports.ImageName(ref))
}
return nil, fmt.Errorf("error parsing %q as a storage image reference: %v", "@"+id, err)
return nil, errors.Wrapf(err, "error parsing %q as a storage image reference: %v", "@"+id)
}


@@ -54,6 +54,9 @@ type CommitOptions struct {
// HistoryTimestamp is the timestamp used when creating new items in the
// image's history. If unset, the current time will be used.
HistoryTimestamp *time.Time
// github.com/containers/image/types SystemContext to hold credentials
// and other authentication/authorization information.
SystemContext *types.SystemContext
}
// PushOptions can be used to alter how an image is copied somewhere.
@@ -73,6 +76,9 @@ type PushOptions struct {
ReportWriter io.Writer
// Store is the local storage store which holds the source image.
Store storage.Store
// github.com/containers/image/types SystemContext to hold credentials
// and other authentication/authorization information.
SystemContext *types.SystemContext
}
// shallowCopy copies the most recent layer, the configuration, and the manifest from one image to another.
@@ -229,12 +235,17 @@ func (b *Builder) shallowCopy(dest types.ImageReference, src types.ImageReferenc
func (b *Builder) Commit(dest types.ImageReference, options CommitOptions) error {
policy, err := signature.DefaultPolicy(getSystemContext(options.SignaturePolicyPath))
if err != nil {
return err
return errors.Wrapf(err, "error obtaining default signature policy")
}
policyContext, err := signature.NewPolicyContext(policy)
if err != nil {
return err
return errors.Wrapf(err, "error creating new signature policy context")
}
defer func() {
if err2 := policyContext.Destroy(); err2 != nil {
logrus.Debugf("error destroying signature polcy context: %v", err2)
}
}()
// Check if we're keeping everything in local storage. If so, we can take certain shortcuts.
_, destIsStorage := dest.Transport().(is.StoreTransport)
exporting := !destIsStorage
@@ -244,7 +255,7 @@ func (b *Builder) Commit(dest types.ImageReference, options CommitOptions) error
}
if exporting {
// Copy everything.
err = cp.Image(policyContext, dest, src, getCopyOptions(options.ReportWriter))
err = cp.Image(policyContext, dest, src, getCopyOptions(options.ReportWriter, nil, options.SystemContext))
if err != nil {
return errors.Wrapf(err, "error copying layers and metadata")
}
@@ -279,12 +290,17 @@ func Push(image string, dest types.ImageReference, options PushOptions) error {
systemContext := getSystemContext(options.SignaturePolicyPath)
policy, err := signature.DefaultPolicy(systemContext)
if err != nil {
return err
return errors.Wrapf(err, "error obtaining default signature policy")
}
policyContext, err := signature.NewPolicyContext(policy)
if err != nil {
return err
return errors.Wrapf(err, "error creating new signature policy context")
}
defer func() {
if err2 := policyContext.Destroy(); err2 != nil {
logrus.Debugf("error destroying signature polcy context: %v", err2)
}
}()
importOptions := ImportFromImageOptions{
Image: image,
SignaturePolicyPath: options.SignaturePolicyPath,
@@ -311,7 +327,7 @@ func Push(image string, dest types.ImageReference, options PushOptions) error {
return errors.Wrapf(err, "error recomputing layer digests and building metadata")
}
// Copy everything.
err = cp.Image(policyContext, dest, src, getCopyOptions(options.ReportWriter))
err = cp.Image(policyContext, dest, src, getCopyOptions(options.ReportWriter, nil, options.SystemContext))
if err != nil {
return errors.Wrapf(err, "error copying layers and metadata")
}


@@ -7,9 +7,11 @@ import (
"github.com/containers/image/types"
)
func getCopyOptions(reportWriter io.Writer) *cp.Options {
func getCopyOptions(reportWriter io.Writer, sourceSystemContext *types.SystemContext, destinationSystemContext *types.SystemContext) *cp.Options {
return &cp.Options{
ReportWriter: reportWriter,
ReportWriter: reportWriter,
SourceCtx: sourceSystemContext,
DestinationCtx: destinationSystemContext,
}
}


@@ -291,9 +291,12 @@ return 1
--quiet
-q
--rm
--tls-verify
"
local options_with_args="
--cert-dir
--creds
--signature-policy
--format
-f
@@ -340,7 +343,6 @@ return 1
"
local options_with_args="
--registry
--signature-policy
--runtime
--runtime-flag
@@ -472,9 +474,12 @@ return 1
-D
--quiet
-q
--tls-verify
"
local options_with_args="
--cert-dir
--creds
--signature-policy
"
@@ -615,11 +620,13 @@ return 1
--pull-always
--quiet
-q
--tls-verify
"
local options_with_args="
--cert-dir
--creds
--name
--registry
--signature-policy
"
@@ -644,18 +651,6 @@ return 1
"
}
_buildah_export() {
local boolean_options="
--help
-h
"
local options_with_args="
-o
--output
"
}
_buildah() {
local previous_extglob_setting=$(shopt -p extglob)
shopt -s extglob
@@ -668,7 +663,6 @@ return 1
config
containers
copy
export
from
images
inspect


@@ -25,7 +25,7 @@
%global shortcommit %(c=%{commit}; echo ${c:0:7})
Name: buildah
Version: 0.2
Version: 0.3
Release: 1.git%{shortcommit}%{?dist}
Summary: A command line tool used for creating OCI Images
License: ASL 2.0
@@ -69,7 +69,7 @@ popd
mv vendor src
export GOPATH=$(pwd)/_build:$(pwd):%{gopath}
make all
make all GIT_COMMIT=%{shortcommit}
%install
export GOPATH=$(pwd)/_build:$(pwd):%{gopath}


@@ -35,11 +35,6 @@ Defaults to *true*.
Pull the image even if a version of the image is already present.
**--registry** *registry*
A prefix to prepend to the image name in order to pull the image. Default
value is "docker://"
**--signature-policy** *signaturepolicy*
Pathname of a signature policy file to use. It is not recommended that this


@@ -13,6 +13,14 @@ specified, an ID is assigned, but no name is assigned to the image.
## OPTIONS
**--cert-dir** *path*
Use certificates at *path* (*.crt, *.cert, *.key) to connect to the registry
**--creds** *creds*
The username[:password] to use to authenticate with the registry if required.
**--disable-compression, -D**
Don't compress filesystem layers when building the image.
@@ -23,6 +31,10 @@ Pathname of a signature policy file to use. It is not recommended that this
option be used, as the default behavior of using the system-wide default policy
(frequently */etc/containers/policy.json*) is most often preferred.
**--tls-verify** *bool-value*
Require HTTPS and verify certificates when talking to container registries (defaults to true)
**--quiet**
When writing the output image, suppress progress output.
@@ -39,13 +51,24 @@ Default leaves the container and its content in place.
## EXAMPLE
buildah commit containerID
This example saves an image based on the container.
`buildah commit containerID`
buildah commit --rm containerID newImageName
This example saves an image named newImageName based on the container.
`buildah commit --rm containerID newImageName`
buildah commit --disable-compression --signature-policy '/etc/containers/policy.json' containerID
buildah commit --disable-compression --signature-policy '/etc/containers/policy.json' containerID newImageName
This example saves an image based on the container disabling compression.
`buildah commit --disable-compression containerID`
This example saves an image named newImageName based on the container disabling compression.
`buildah commit --disable-compression containerID newImageName`
This example commits the container to the image on the local registry while turning off tls verification.
`buildah commit --tls-verify=false containerID docker://localhost:5000/imageId`
This example commits the container to the image on the local registry using credentials and certificates for authentication.
`buildah commit --cert-dir ~/auth --tls-verify=true --creds=username:password containerID docker://localhost:5000/imageId`
## SEE ALSO
buildah(1)


@@ -1,40 +0,0 @@
% BUILDAH(1) Buildah User Manuals
% Buildah Community
% JUNE 2017
# NAME
buildah-export - Export container's filesystem content as a tar archive
# SYNOPSIS
**buildah export**
[**--help**]
[**-o**|**--output**[=*""*]]
CONTAINER
# DESCRIPTION
**buildah export** exports the full or shortened container ID or container name
to STDOUT and should be redirected to a tar file.
# OPTIONS
**--help**
Print usage statement
**-o**, **--output**=""
Write to a file, instead of STDOUT
# EXAMPLES
Export the contents of the container called angry_bell to a tar file
called angry_bell.tar:
# buildah export angry_bell > angry_bell.tar
# buildah export --output=angry_bell-latest.tar angry_bell
# ls -sh angry_bell.tar
321M angry_bell.tar
# ls -sh angry_bell-latest.tar
321M angry_bell-latest.tar
# See also
**buildah-import(1)** to create an empty filesystem image
and import the contents of the tarball into it, then optionally tag it.
# HISTORY
July 2017, Originally copied from docker project docker-export.1.md


@@ -8,13 +8,42 @@ buildah from - Creates a new working container, either from scratch or using a s
## DESCRIPTION
Creates a working container based upon the specified image name. If the
supplied image name is "scratch" a new empty container is created.
supplied image name is "scratch" a new empty container is created. Image names
uses a "transport":"details" format.
Multiple transports are supported:
**dir:**_path_
An existing local directory _path_ retrieving the manifest, layer tarballs and signatures as individual files. This is a non-standardized format, primarily useful for debugging or noninvasive container inspection.
**docker://**_docker-reference_ (Default)
An image in a registry implementing the "Docker Registry HTTP API V2". By default, uses the authorization state in `$HOME/.docker/config.json`, which is set e.g. using `(docker login)`.
**docker-archive:**_path_
An image is retrieved as a `docker load` formatted file.
**docker-daemon:**_docker-reference_
An image _docker-reference_ stored in the docker daemon internal storage. _docker-reference_ must contain either a tag or a digest. Alternatively, when reading images, the format can also be docker-daemon:algo:digest (an image ID).
**oci:**_path_**:**_tag_
An image _tag_ in a directory compliant with "Open Container Image Layout Specification" at _path_.
**ostree:**_image_[**@**_/absolute/repo/path_]
An image in local OSTree repository. _/absolute/repo/path_ defaults to _/ostree/repo_.
## RETURN VALUE
The container ID of the container that was created. On error, -1 is returned and errno is set.
## OPTIONS
**--cert-dir** *path*
Use certificates at *path* (*.crt, *.cert, *.key) to connect to the registry
**--creds** *creds*
The username[:password] to use to authenticate with the registry if required.
**--name** *name*
A *name* for the working container
@@ -29,30 +58,33 @@ Defaults to *true*.
Pull the image even if a version of the image is already present.
**--registry** *registry*
A prefix to prepend to the image name in order to pull the image. Default
value is "docker://"
**--signature-policy** *signaturepolicy*
Pathname of a signature policy file to use. It is not recommended that this
option be used, as the default behavior of using the system-wide default policy
(frequently */etc/containers/policy.json*) is most often preferred.
**--tls-verify** *bool-value*
Require HTTPS and verify certificates when talking to container registries (defaults to true)
**--quiet**
If an image needs to be pulled from the registry, suppress progress output.
## EXAMPLE
buildah from imagename --pull --registry "myregistry://"
buildah from imagename --pull
buildah from myregistry://imagename --pull
buildah from docker://myregistry.example.com/imagename --pull
buildah from imagename --signature-policy /etc/containers/policy.json
buildah from imagename --pull-always --registry "myregistry://" --name "mycontainer"
buildah from docker://myregistry.example.com/imagename --pull-always --name "mycontainer"
buildah from myregistry/myrepository/imagename:imagetag --tls-verify=false
buildah from myregistry/myrepository/imagename:imagetag --creds=myusername:mypassword --cert-dir ~/auth
## SEE ALSO
buildah(1)


@@ -12,15 +12,17 @@ Displays locally stored images, their names, and their IDs.
## OPTIONS
**--json**
Display the output in JSON format.
**--digests**
Show image digests
Show the image digests.
**--filter, -f=[]**
Filter output based on conditions provided (default [])
Filter output based on conditions provided (default []). Valid
keywords are 'dangling', 'label', 'before' and 'since'.
**--format="TEMPLATE"**
@@ -36,6 +38,7 @@ Do not truncate output.
**--quiet, -q**
Displays only the image IDs.
## EXAMPLE
@@ -47,5 +50,7 @@ buildah images --quiet
buildah images -q --noheading --notruncate
buildah images --filter dangling=true
## SEE ALSO
buildah(1)


@@ -20,9 +20,6 @@ Image stored in local container/storage
Multiple transports are supported:
**atomic:**_hostname_**/**_namespace_**/**_stream_**:**_tag_
An image served by an OpenShift(Atomic) Registry server. The current OpenShift project and OpenShift Registry instance are by default read from `$HOME/.kube/config`, which is set e.g. using `(oc login)`.
**dir:**_path_
An existing local directory _path_ storing the manifest, layer tarballs and signatures as individual files. This is a non-standardized format, primarily useful for debugging or noninvasive container inspection.
@@ -43,6 +40,14 @@ Image stored in local container/storage
## OPTIONS
**--cert-dir** *path*
Use certificates at *path* (*.crt, *.cert, *.key) to connect to the registry
**--creds** *creds*
The username[:password] to use to authenticate with the registry if required.
**--disable-compression, -D**
Don't compress copies of filesystem layers which will be pushed.
@@ -53,6 +58,10 @@ Pathname of a signature policy file to use. It is not recommended that this
option be used, as the default behavior of using the system-wide default policy
(frequently */etc/containers/policy.json*) is most often preferred.
**--tls-verify** *bool-value*
Require HTTPS and verify certificates when talking to container registries (defaults to true)
**--quiet**
When writing the output image, suppress progress output.
@@ -67,17 +76,19 @@ This example extracts the imageID image to a local directory in oci format.
`# buildah push imageID oci:/path/to/layout`
This example extracts the imageID image to a container registry named registry.example.com
This example extracts the imageID image to a container registry named registry.example.com.
`# buildah push imageID docker://registry.example.com/repository:tag`
This example extracts the imageID image and puts it into the local docker container store
This example extracts the imageID image and puts it into the local docker container store.
`# buildah push imageID docker-daemon:image:tag`
This example extracts the imageID image and pushes it to an OpenShift(Atomic) registry
`# buildah push imageID atomic:registry.example.com/company/image:tag`
This example extracts the imageID image and puts it into the registry on the localhost while turning off tls verification.
`# buildah push --tls-verify=false imageID docker://localhost:5000/my-imageID`
This example extracts the imageID image and puts it into the registry on the localhost using credentials and certificates for authentication.
`# buildah push --cert-dir ~/auth --tls-verify=true --creds=username:password imageID docker://localhost:5000/my-imageID`
## SEE ALSO
buildah(1)


@@ -58,7 +58,6 @@ Print the version
| buildah-config(1) | Update image configuration settings. |
| buildah-containers(1) | List the working containers and their base images. |
| buildah-copy(1) | Copies the contents of a file, URL, or directory into a container's working directory. |
| buildah-export(1) | Export the contents of a container's filesystem as a tar archive |
| buildah-from(1) | Creates a new working container, either from scratch or using a specified image as a starting point. |
| buildah-images(1) | List images in local storage. |
| buildah-inspect(1) | Inspects the configuration of a container or image |

examples/lighttpd.sh Executable file

@@ -0,0 +1,17 @@
#!/bin/bash -x
ctr1=`buildah from ${1:-fedora}`
## Get all updates and install our minimal httpd server
buildah run $ctr1 -- dnf update -y
buildah run $ctr1 -- dnf install -y lighttpd
## Include some buildtime annotations
buildah config --annotation "com.example.build.host=$(uname -n)" $ctr1
## Run our server and expose the port
buildah config $ctr1 --cmd "/usr/sbin/lighttpd -D -f /etc/lighttpd/lighttpd.conf"
buildah config $ctr1 --port 80
## Commit this container to an image name
buildah commit $ctr1 ${2:-$USER/lighttpd}
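For reference, the script's first argument selects the base image (defaulting to fedora) and its second the committed image name (defaulting to $USER/lighttpd); an illustrative invocation, assuming buildah is installed and the script is executable:
```sh
# Build from the default fedora base and commit as $USER/lighttpd:
./lighttpd.sh
# Or name both the base image and the target image explicitly:
./lighttpd.sh centos mynamespace/lighttpd
```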


@@ -51,8 +51,13 @@ type BuildOptions struct {
PullPolicy int
// Registry is a value which is prepended to the image's name, if it
// needs to be pulled and the image name alone can not be resolved to a
// reference to a source image.
// reference to a source image. No separator is implicitly added.
Registry string
// Transport is a value which is prepended to the image's name, if it
// needs to be pulled and the image name alone, or the image name and
// the registry together, can not be resolved to a reference to a
// source image. No separator is implicitly added.
Transport string
// IgnoreUnrecognizedInstructions tells us to just log instructions we
// don't recognize, and try to keep going.
IgnoreUnrecognizedInstructions bool
@@ -108,6 +113,7 @@ type Executor struct {
builder *buildah.Builder
pullPolicy int
registry string
transport string
ignoreUnrecognizedInstructions bool
quiet bool
runtime string
@@ -403,6 +409,7 @@ func NewExecutor(store storage.Store, options BuildOptions) (*Executor, error) {
contextDir: options.ContextDirectory,
pullPolicy: options.PullPolicy,
registry: options.Registry,
transport: options.Transport,
ignoreUnrecognizedInstructions: options.IgnoreUnrecognizedInstructions,
quiet: options.Quiet,
runtime: options.Runtime,
@@ -458,6 +465,7 @@ func (b *Executor) Prepare(ib *imagebuilder.Builder, node *parser.Node, from str
FromImage: from,
PullPolicy: b.pullPolicy,
Registry: b.registry,
Transport: b.transport,
SignaturePolicyPath: b.signaturePolicyPath,
ReportWriter: b.reportWriter,
}
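Since no separator is added implicitly, the Registry and Transport values above are expected to carry their own trailing separators. A sketch of how the prefixes compose during name resolution, using assumed values:
```sh
# Assuming Registry="registry.example.com/" and Transport="docker://",
# a bare name such as "fedora" is tried in progressively longer forms:
#   fedora                                 # the name as given
#   registry.example.com/fedora            # Registry + name
#   docker://registry.example.com/fedora   # Transport + Registry + name
```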

new.go

@@ -6,6 +6,9 @@ import (
"github.com/Sirupsen/logrus"
is "github.com/containers/image/storage"
"github.com/containers/image/transports"
"github.com/containers/image/transports/alltransports"
"github.com/containers/image/types"
"github.com/containers/storage"
"github.com/openshift/imagebuilder"
"github.com/pkg/errors"
@@ -15,21 +18,105 @@ const (
// BaseImageFakeName is the "name" of a source image which we interpret
// as "no image".
BaseImageFakeName = imagebuilder.NoBaseImageSpecifier
// DefaultTransport is a prefix that we apply to an image name if we
// can't find one in the local Store, in order to generate a source
// reference for the image that we can then copy to the local Store.
DefaultTransport = "docker://"
)
func newBuilder(store storage.Store, options BuilderOptions) (*Builder, error) {
var err error
var ref types.ImageReference
var img *storage.Image
manifest := []byte{}
config := []byte{}
name := "working-container"
if options.FromImage == BaseImageFakeName {
options.FromImage = ""
}
image := options.FromImage
if options.Transport == "" {
options.Transport = DefaultTransport
}
systemContext := getSystemContext(options.SignaturePolicyPath)
imageID := ""
if image != "" {
if options.PullPolicy == PullAlways {
pulledReference, err2 := pullImage(store, options, systemContext)
if err2 != nil {
return nil, errors.Wrapf(err2, "error pulling image %q", image)
}
ref = pulledReference
}
if ref == nil {
srcRef, err2 := alltransports.ParseImageName(image)
if err2 != nil {
srcRef2, err3 := alltransports.ParseImageName(options.Registry + image)
if err3 != nil {
srcRef3, err4 := alltransports.ParseImageName(options.Transport + options.Registry + image)
if err4 != nil {
return nil, errors.Wrapf(err4, "error parsing image name %q", options.Transport+options.Registry+image)
}
srcRef2 = srcRef3
}
srcRef = srcRef2
}
destImage, err2 := localImageNameForReference(store, srcRef)
if err2 != nil {
return nil, errors.Wrapf(err2, "error computing local image name for %q", transports.ImageName(srcRef))
}
if destImage == "" {
return nil, errors.Errorf("error computing local image name for %q", transports.ImageName(srcRef))
}
ref, err = is.Transport.ParseStoreReference(store, destImage)
if err != nil {
return nil, errors.Wrapf(err, "error parsing reference to image %q", destImage)
}
image = destImage
}
img, err = is.Transport.GetStoreImage(store, ref)
if err != nil {
if errors.Cause(err) == storage.ErrImageUnknown && options.PullPolicy != PullIfMissing {
return nil, errors.Wrapf(err, "no such image %q", transports.ImageName(ref))
}
ref2, err2 := pullImage(store, options, systemContext)
if err2 != nil {
return nil, errors.Wrapf(err2, "error pulling image %q", image)
}
ref = ref2
img, err = is.Transport.GetStoreImage(store, ref)
}
if err != nil {
return nil, errors.Wrapf(err, "no such image %q", transports.ImageName(ref))
}
imageID = img.ID
src, err := ref.NewImage(systemContext)
if err != nil {
return nil, errors.Wrapf(err, "error instantiating image for %q", transports.ImageName(ref))
}
defer src.Close()
config, err = src.ConfigBlob()
if err != nil {
return nil, errors.Wrapf(err, "error reading image configuration for %q", transports.ImageName(ref))
}
manifest, _, err = src.Manifest()
if err != nil {
return nil, errors.Wrapf(err, "error reading image manifest for %q", transports.ImageName(ref))
}
}
name := "working-container"
if options.Container != "" {
name = options.Container
} else {
var err2 error
if image != "" {
prefix := image
s := strings.Split(prefix, "/")
@@ -46,69 +133,17 @@ func newBuilder(store storage.Store, options BuilderOptions) (*Builder, error) {
}
name = prefix + "-" + name
}
}
if name != "" {
var err error
suffix := 1
tmpName := name
for err != storage.ErrContainerUnknown {
_, err = store.Container(tmpName)
if err == nil {
for errors.Cause(err2) != storage.ErrContainerUnknown {
_, err2 = store.Container(tmpName)
if err2 == nil {
suffix++
tmpName = fmt.Sprintf("%s-%d", name, suffix)
}
}
name = tmpName
}
systemContext := getSystemContext(options.SignaturePolicyPath)
imageID := ""
if image != "" {
if options.PullPolicy == PullAlways {
err := pullImage(store, options, systemContext)
if err != nil {
return nil, errors.Wrapf(err, "error pulling image %q", image)
}
}
ref, err := is.Transport.ParseStoreReference(store, image)
if err != nil {
return nil, errors.Wrapf(err, "error parsing reference to image %q", image)
}
img, err = is.Transport.GetStoreImage(store, ref)
if err != nil {
if err == storage.ErrImageUnknown && options.PullPolicy != PullIfMissing {
return nil, errors.Wrapf(err, "no such image %q", image)
}
err = pullImage(store, options, systemContext)
if err != nil {
return nil, errors.Wrapf(err, "error pulling image %q", image)
}
ref, err = is.Transport.ParseStoreReference(store, image)
if err != nil {
return nil, errors.Wrapf(err, "error parsing reference to image %q", image)
}
img, err = is.Transport.GetStoreImage(store, ref)
}
if err != nil {
return nil, errors.Wrapf(err, "no such image %q", image)
}
imageID = img.ID
src, err := ref.NewImage(systemContext)
if err != nil {
return nil, errors.Wrapf(err, "error instantiating image")
}
defer src.Close()
config, err = src.ConfigBlob()
if err != nil {
return nil, errors.Wrapf(err, "error reading image configuration")
}
manifest, _, err = src.Manifest()
if err != nil {
return nil, errors.Wrapf(err, "error reading image manifest")
}
}
coptions := storage.ContainerOptions{}
container, err := store.CreateContainer("", []string{name}, imageID, "", "", &coptions)
if err != nil {

ostree_tag.sh Executable file

@@ -0,0 +1,4 @@
#!/bin/bash
if ! pkg-config ostree-1 2> /dev/null ; then
echo containers_image_ostree_stub
fi

pull.go

@@ -1,58 +1,114 @@
package buildah
import (
"strings"
"github.com/Sirupsen/logrus"
cp "github.com/containers/image/copy"
"github.com/containers/image/docker/reference"
"github.com/containers/image/signature"
is "github.com/containers/image/storage"
"github.com/containers/image/transports"
"github.com/containers/image/transports/alltransports"
"github.com/containers/image/types"
"github.com/containers/storage"
"github.com/pkg/errors"
)
func pullImage(store storage.Store, options BuilderOptions, sc *types.SystemContext) error {
func localImageNameForReference(store storage.Store, srcRef types.ImageReference) (string, error) {
if srcRef == nil {
return "", errors.Errorf("reference to image is empty")
}
ref := srcRef.DockerReference()
if ref == nil {
name := srcRef.StringWithinTransport()
_, err := is.Transport.ParseStoreReference(store, name)
if err == nil {
return name, nil
}
if strings.LastIndex(name, "/") != -1 {
name = name[strings.LastIndex(name, "/")+1:]
_, err = is.Transport.ParseStoreReference(store, name)
if err == nil {
return name, nil
}
}
return "", errors.Errorf("reference to image %q is not a named reference", transports.ImageName(srcRef))
}
name := ""
if named, ok := ref.(reference.Named); ok {
name = named.Name()
if namedTagged, ok := ref.(reference.NamedTagged); ok {
name = name + ":" + namedTagged.Tag()
}
if canonical, ok := ref.(reference.Canonical); ok {
name = name + "@" + canonical.Digest().String()
}
}
if _, err := is.Transport.ParseStoreReference(store, name); err != nil {
return "", errors.Wrapf(err, "error parsing computed local image name %q", name)
}
return name, nil
}
func pullImage(store storage.Store, options BuilderOptions, sc *types.SystemContext) (types.ImageReference, error) {
name := options.FromImage
spec := name
if options.Registry != "" {
spec = options.Registry + spec
}
spec2 := spec
if options.Transport != "" {
spec2 = options.Transport + spec
}
srcRef, err := alltransports.ParseImageName(name)
if err != nil {
srcRef2, err2 := alltransports.ParseImageName(spec)
if err2 != nil {
return errors.Wrapf(err2, "error parsing image name %q", spec)
srcRef3, err3 := alltransports.ParseImageName(spec2)
if err3 != nil {
return nil, errors.Wrapf(err3, "error parsing image name %q", spec2)
}
srcRef2 = srcRef3
}
srcRef = srcRef2
}
if ref := srcRef.DockerReference(); ref != nil {
name = srcRef.DockerReference().Name()
if tagged, ok := srcRef.DockerReference().(reference.NamedTagged); ok {
name = name + ":" + tagged.Tag()
}
destName, err := localImageNameForReference(store, srcRef)
if err != nil {
return nil, errors.Wrapf(err, "error computing local image name for %q", transports.ImageName(srcRef))
}
if destName == "" {
return nil, errors.Errorf("error computing local image name for %q", transports.ImageName(srcRef))
}
destRef, err := is.Transport.ParseStoreReference(store, name)
destRef, err := is.Transport.ParseStoreReference(store, destName)
if err != nil {
return errors.Wrapf(err, "error parsing full image name %q", name)
return nil, errors.Wrapf(err, "error parsing image name %q", destName)
}
policy, err := signature.DefaultPolicy(sc)
if err != nil {
return err
return nil, errors.Wrapf(err, "error obtaining default signature policy")
}
policyContext, err := signature.NewPolicyContext(policy)
if err != nil {
return err
return nil, errors.Wrapf(err, "error creating new signature policy context")
}
defer func() {
if err2 := policyContext.Destroy(); err2 != nil {
logrus.Debugf("error destroying signature polcy context: %v", err2)
}
}()
logrus.Debugf("copying %q to %q", spec, name)
err = cp.Image(policyContext, destRef, srcRef, getCopyOptions(options.ReportWriter))
return err
err = cp.Image(policyContext, destRef, srcRef, getCopyOptions(options.ReportWriter, options.SystemContext, nil))
return destRef, err
}
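Since DefaultTransport is "docker://", a bare image name that cannot be parsed as-is ends up being pulled through the docker transport; from the command line, the two forms below should resolve to the same source image (an illustrative pairing, not taken from the diff):
```sh
buildah from fedora            # falls back to docker://fedora when no transport matches
buildah from docker://fedora   # names the transport explicitly
```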

run.go

@@ -120,7 +120,7 @@ func (b *Builder) setupMounts(mountPoint string, spec *specs.Spec, optionMounts
return errors.Wrapf(err, "error creating directory %q for volume %q in container %q", volumePath, volume, b.ContainerID)
}
srcPath := filepath.Join(mountPoint, volume)
if err = archive.CopyWithTar(srcPath, volumePath); err != nil {
if err = archive.CopyWithTar(srcPath, volumePath); err != nil && !os.IsNotExist(err) {
return errors.Wrapf(err, "error populating directory %q for volume %q in container %q using contents of %q", volumePath, volume, b.ContainerID, srcPath)
}
@@ -153,12 +153,6 @@ func (b *Builder) Run(command []string, options RunOptions) error {
}()
g := generate.New()
if b.OS() != "" {
g.SetPlatformOS(b.OS())
}
if b.Architecture() != "" {
g.SetPlatformArch(b.Architecture())
}
for _, envSpec := range append(b.Env(), options.Env...) {
env := strings.SplitN(envSpec, "=", 2)
if len(env) > 1 {
@@ -225,8 +219,8 @@ func (b *Builder) Run(command []string, options RunOptions) error {
if spec.Process.Cwd == "" {
spec.Process.Cwd = DefaultWorkingDir
}
if err = os.MkdirAll(filepath.Join(mountPoint, b.WorkDir()), 0755); err != nil {
return errors.Wrapf(err, "error ensuring working directory %q exists", b.WorkDir())
if err = os.MkdirAll(filepath.Join(mountPoint, spec.Process.Cwd), 0755); err != nil {
return errors.Wrapf(err, "error ensuring working directory %q exists", spec.Process.Cwd)
}
bindFiles := []string{"/etc/hosts", "/etc/resolv.conf"}

selinux_tag.sh Executable file

@@ -0,0 +1,4 @@
#!/bin/bash
if pkg-config libselinux 2> /dev/null ; then
echo selinux
fi
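Both ostree_tag.sh and selinux_tag.sh emit build tags according to what pkg-config can find; a plausible way to consume them when building by hand (the actual Makefile wiring is not shown in this diff):
```sh
# Collect auto-detected build tags and pass them to go build:
tags="$(./ostree_tag.sh) $(./selinux_tag.sh)"
go build -tags "$tags" .
```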


@@ -1,22 +0,0 @@
#!/usr/bin/env bats
load helpers
@test "extract" {
touch ${TESTDIR}/reference-time-file
for source in scratch alpine; do
cid=$(buildah from --pull=true --signature-policy ${TESTSDIR}/policy.json ${source})
mnt=$(buildah mount $cid)
touch ${mnt}/export.file
tar -cf - --transform s,^./,,g -C ${mnt} . | tar tf - | grep -v "^./$" | sort > ${TESTDIR}/tar.output
buildah umount $cid
buildah export "$cid" > ${TESTDIR}/${source}.tar
buildah export -o ${TESTDIR}/${source}1.tar "$cid"
diff ${TESTDIR}/${source}.tar ${TESTDIR}/${source}1.tar
tar -tf ${TESTDIR}/${source}.tar | sort > ${TESTDIR}/export.output
diff ${TESTDIR}/tar.output ${TESTDIR}/export.output
rm -f ${TESTDIR}/tar.output ${TESTDIR}/export.output
rm -f ${TESTDIR}/${source}1.tar ${TESTDIR}/${source}.tar
buildah rm "$cid"
done
}

tests/from.bats Normal file

@@ -0,0 +1,114 @@
#!/usr/bin/env bats
load helpers
@test "commit-to-from-elsewhere" {
elsewhere=${TESTDIR}/elsewhere-img
mkdir -p ${elsewhere}
cid=$(buildah from --pull --signature-policy ${TESTSDIR}/policy.json scratch)
buildah commit --signature-policy ${TESTSDIR}/policy.json $cid dir:${elsewhere}
buildah rm $cid
cid=$(buildah from --pull --signature-policy ${TESTSDIR}/policy.json dir:${elsewhere})
buildah rm $cid
buildah rmi ${elsewhere}
[ "$cid" = elsewhere-img-working-container ]
cid=$(buildah from --pull-always --signature-policy ${TESTSDIR}/policy.json dir:${elsewhere})
buildah rm $cid
buildah rmi ${elsewhere}
[ "$cid" = `basename ${elsewhere}`-working-container ]
cid=$(buildah from --pull --signature-policy ${TESTSDIR}/policy.json scratch)
buildah commit --signature-policy ${TESTSDIR}/policy.json $cid dir:${elsewhere}
buildah rm $cid
cid=$(buildah from --pull --signature-policy ${TESTSDIR}/policy.json dir:${elsewhere})
buildah rm $cid
buildah rmi ${elsewhere}
[ "$cid" = elsewhere-img-working-container ]
cid=$(buildah from --pull-always --signature-policy ${TESTSDIR}/policy.json dir:${elsewhere})
buildah rm $cid
buildah rmi ${elsewhere}
[ "$cid" = `basename ${elsewhere}`-working-container ]
}
@test "from-authenticate-cert" {
mkdir -p ${TESTDIR}/auth
# Create certificate via openssl
openssl req -newkey rsa:4096 -nodes -sha256 -keyout ${TESTDIR}/auth/domain.key -x509 -days 2 -out ${TESTDIR}/auth/domain.crt -subj "/C=US/ST=Foo/L=Bar/O=Red Hat, Inc./CN=localhost"
# Skopeo and buildah both require a *.cert file
cp ${TESTDIR}/auth/domain.crt ${TESTDIR}/auth/domain.cert
# Create a private registry that uses certificate and creds file
# docker run -d -p 5000:5000 --name registry -v ${TESTDIR}/auth:${TESTDIR}/auth:Z -e REGISTRY_HTTP_TLS_CERTIFICATE=${TESTDIR}/auth/domain.crt -e REGISTRY_HTTP_TLS_KEY=${TESTDIR}/auth/domain.key registry:2
# When more buildah auth is in place convert the below.
# docker pull alpine
# docker tag alpine localhost:5000/my-alpine
# docker push localhost:5000/my-alpine
# ctrid=$(buildah from localhost:5000/my-alpine --cert-dir ${TESTDIR}/auth)
# buildah rm $ctrid
# buildah rmi -f $(buildah --debug=false images -q)
# This should work
# ctrid=$(buildah from localhost:5000/my-alpine --cert-dir ${TESTDIR}/auth --tls-verify true)
rm -rf ${TESTDIR}/auth
# This should fail
run ctrid=$(buildah from localhost:5000/my-alpine --cert-dir ${TESTDIR}/auth --tls-verify true)
[ "$status" -ne 0 ]
# Clean up
# docker rm -f $(docker ps --all -q)
# docker rmi -f localhost:5000/my-alpine
# docker rmi -f $(docker images -q)
# buildah rm $ctrid
# buildah rmi -f $(buildah --debug=false images -q)
}
@test "from-authenticate-cert-and-creds" {
mkdir -p ${TESTDIR}/auth
# Create creds and store in ${TESTDIR}/auth/htpasswd
# docker run --entrypoint htpasswd registry:2 -Bbn testuser testpassword > ${TESTDIR}/auth/htpasswd
# Create certificate via openssl
openssl req -newkey rsa:4096 -nodes -sha256 -keyout ${TESTDIR}/auth/domain.key -x509 -days 2 -out ${TESTDIR}/auth/domain.crt -subj "/C=US/ST=Foo/L=Bar/O=Red Hat, Inc./CN=localhost"
# Skopeo and buildah both require a *.cert file
cp ${TESTDIR}/auth/domain.crt ${TESTDIR}/auth/domain.cert
# Create a private registry that uses certificate and creds file
# docker run -d -p 5000:5000 --name registry -v ${TESTDIR}/auth:${TESTDIR}/auth:Z -e "REGISTRY_AUTH=htpasswd" -e "REGISTRY_AUTH_HTPASSWD_REALM=Registry Realm" -e REGISTRY_AUTH_HTPASSWD_PATH=${TESTDIR}/auth/htpasswd -e REGISTRY_HTTP_TLS_CERTIFICATE=${TESTDIR}/auth/domain.crt -e REGISTRY_HTTP_TLS_KEY=${TESTDIR}/auth/domain.key registry:2
# When more buildah auth is in place convert the below.
# docker pull alpine
# docker login localhost:5000 --username testuser --password testpassword
# docker tag alpine localhost:5000/my-alpine
# docker push localhost:5000/my-alpine
# ctrid=$(buildah from localhost:5000/my-alpine --cert-dir ${TESTDIR}/auth)
# buildah rm $ctrid
# buildah rmi -f $(buildah --debug=false images -q)
# docker logout localhost:5000
# This should fail
run ctrid=$(buildah from localhost:5000/my-alpine --cert-dir ${TESTDIR}/auth --tls-verify true)
[ "$status" -ne 0 ]
# This should work
# ctrid=$(buildah from localhost:5000/my-alpine --cert-dir ${TESTDIR}/auth --tls-verify true --creds=testuser:testpassword)
# Clean up
rm -rf ${TESTDIR}/auth
# docker rm -f $(docker ps --all -q)
# docker rmi -f localhost:5000/my-alpine
# docker rmi -f $(docker images -q)
# buildah rm $ctrid
# buildah rmi -f $(buildah --debug=false images -q)
}


@@ -4,9 +4,10 @@ BUILDAH_BINARY=${BUILDAH_BINARY:-$(dirname ${BASH_SOURCE})/../buildah}
IMGTYPE_BINARY=${IMGTYPE_BINARY:-$(dirname ${BASH_SOURCE})/../imgtype}
TESTSDIR=${TESTSDIR:-$(dirname ${BASH_SOURCE})}
STORAGE_DRIVER=${STORAGE_DRIVER:-vfs}
PATH=$(dirname ${BASH_SOURCE})/..:${PATH}
function setup() {
suffix=$(dd if=/dev/urandom bs=12 count=1 status=none | base64 | tr +/ _.)
suffix=$(dd if=/dev/urandom bs=12 count=1 status=none | base64 | tr +/ABCDEFGHIJKLMNOPQRSTUVWXYZ _.abcdefghijklmnopqrstuvwxyz)
TESTDIR=${BATS_TMPDIR}/tmp.${suffix}
rm -fr ${TESTDIR}
mkdir -p ${TESTDIR}/{root,runroot}

tests/rpm.bats Normal file

@@ -0,0 +1,58 @@
#!/usr/bin/env bats
load helpers
@test "rpm-build" {
if ! which runc ; then
skip
fi
# Build a container to use for building the binaries.
image=fedora:26
cid=$(buildah --debug=false from --pull --signature-policy ${TESTSDIR}/policy.json $image)
root=$(buildah --debug=false mount $cid)
commit=$(git log --format=%H -n 1)
shortcommit=$(echo ${commit} | cut -c-7)
mkdir -p ${root}/rpmbuild/{SOURCES,SPECS}
# Build the tarball.
(cd ..; git archive --format tar.gz --prefix=buildah-${commit}/ ${commit}) > ${root}/rpmbuild/SOURCES/buildah-${shortcommit}.tar.gz
# Update the .spec file with the commit ID.
sed s:REPLACEWITHCOMMITID:${commit}:g ${TESTSDIR}/../contrib/rpm/buildah.spec > ${root}/rpmbuild/SPECS/buildah.spec
# Install build dependencies and build binary packages.
buildah --debug=false run $cid -- dnf -y install 'dnf-command(builddep)' rpm-build
buildah --debug=false run $cid -- dnf -y builddep --spec rpmbuild/SPECS/buildah.spec
buildah --debug=false run $cid -- rpmbuild --define "_topdir /rpmbuild" -ba /rpmbuild/SPECS/buildah.spec
# Build a second new container.
cid2=$(buildah --debug=false from --pull --signature-policy ${TESTSDIR}/policy.json fedora:26)
root2=$(buildah --debug=false mount $cid2)
# Copy the binary packages from the first container to the second one, and build a list of
# their filenames relative to the root of the second container.
rpms=
mkdir -p ${root2}/packages
for rpm in ${root}/rpmbuild/RPMS/*/*.rpm ; do
cp $rpm ${root2}/packages/
rpms="$rpms "/packages/$(basename $rpm)
done
# Install the binary packages into the second container.
buildah --debug=false run $cid2 -- dnf -y install $rpms
# Run the binary package and compare its self-identified version to the one we tried to build.
id=$(buildah --debug=false run $cid2 -- buildah version | awk '/^Git Commit:/ { print $NF }')
bv=$(buildah --debug=false run $cid2 -- buildah version | awk '/^Version:/ { print $NF }')
rv=$(buildah --debug=false run $cid2 -- rpm -q --queryformat '%{version}' buildah)
echo "short commit: $shortcommit"
echo "id: $id"
echo "buildah version: $bv"
echo "buildah rpm version: $rv"
test $shortcommit = $id
test $bv = $rv
# Clean up.
buildah --debug=false rm $cid $cid2
}
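To run this test file on its own, assuming the bats test runner (and runc, which the test checks for) is installed:
```sh
bats tests/rpm.bats
```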


@@ -6,34 +6,64 @@ load helpers
if ! which runc ; then
skip
fi
runc --version
createrandom ${TESTDIR}/randomfile
cid=$(buildah from --pull --signature-policy ${TESTSDIR}/policy.json alpine)
root=$(buildah mount $cid)
buildah config $cid --workingdir /tmp
run buildah --debug=false run $cid pwd
[ "$status" -eq 0 ]
[ "$output" = /tmp ]
buildah config $cid --workingdir /root
run buildah --debug=false run $cid pwd
[ "$status" -eq 0 ]
[ "$output" = /root ]
cp ${TESTDIR}/randomfile $root/tmp/
buildah run $cid cp /tmp/randomfile /tmp/other-randomfile
test -s $root/tmp/other-randomfile
cmp ${TESTDIR}/randomfile $root/tmp/other-randomfile
run buildah run $cid echo -n test
[ $status != 0 ]
run buildah run $cid echo -- -n test
[ $status != 0 ]
run buildah run $cid -- echo -n -- test
[ "$output" = "-- test" ]
run buildah run $cid -- echo -- -n test --
[ "$output" = "-- -n -- test --" ]
run buildah run $cid -- echo -n "test"
[ "$output" = "test" ]
buildah unmount $cid
buildah rm $cid
}
@test "run--args" {
if ! which runc ; then
skip
fi
cid=$(buildah from --pull --signature-policy ${TESTSDIR}/policy.json alpine)
# This should fail, because buildah run doesn't have a -n flag.
run buildah --debug=false run $cid echo -n test
[ "$status" -ne 0 ]
# This should succeed, because buildah run stops caring at the --, which is preserved as part of the command.
run buildah --debug=false run $cid echo -- -n test
[ "$status" -eq 0 ]
echo :"$output":
[ "$output" = "-- -n test" ]
# This should succeed, because buildah run stops caring at the --, which is not part of the command.
run buildah --debug=false run $cid -- echo -n -- test
[ "$status" -eq 0 ]
echo :"$output":
[ "$output" = "-- test" ]
# This should succeed, because buildah run stops caring at the --.
run buildah --debug=false run $cid -- echo -- -n test --
[ "$status" -eq 0 ]
echo :"$output":
[ "$output" = "-- -n test --" ]
# This should succeed, because buildah run stops caring at the --.
run buildah --debug=false run $cid -- echo -n "test"
[ "$status" -eq 0 ]
echo :"$output":
[ "$output" = "test" ]
buildah rm $cid
}
@test "run-cmd" {
if ! which runc ; then
skip


@@ -0,0 +1,181 @@
#!/bin/bash
# test_buildah_authentication
# A script to be run at the command line with Buildah installed.
# This will test the code and should be run with this command:
#
# /bin/bash -v test_buildah_authentication.sh
########
# Create creds and store in /root/auth/htpasswd
########
registry=$(buildah from registry:2)
buildah run $registry -- htpasswd -Bbn testuser testpassword > /root/auth/htpasswd
########
# Create certificate via openssl
########
openssl req -newkey rsa:4096 -nodes -sha256 -keyout /root/auth/domain.key -x509 -days 2 -out /root/auth/domain.crt -subj "/C=US/ST=Foo/L=Bar/O=Red Hat, Inc./CN=localhost"
########
# Skopeo and buildah both require a *.cert file
########
cp /root/auth/domain.crt /root/auth/domain.cert
########
# Create a private registry that uses certificate and creds file
########
docker run -d -p 5000:5000 --name registry -v /root/auth:/root/auth:Z -e "REGISTRY_AUTH=htpasswd" -e "REGISTRY_AUTH_HTPASSWD_REALM=Registry Realm" -e REGISTRY_AUTH_HTPASSWD_PATH=/root/auth/htpasswd -e REGISTRY_HTTP_TLS_CERTIFICATE=/root/auth/domain.crt -e REGISTRY_HTTP_TLS_KEY=/root/auth/domain.key registry:2
########
# Pull alpine
########
buildah from alpine
buildah containers
buildah images
########
# Log into docker on local repo
########
docker login localhost:5000 --username testuser --password testpassword
########
# Push to the local repo using cached Docker creds.
########
buildah push --cert-dir /root/auth alpine docker://localhost:5000/my-alpine
########
# Show stuff
########
docker ps --all
docker images
buildah containers
buildah images
########
# Buildah pulls using certs and cached Docker creds.
# Should show two alpine images and containers when done.
########
ctrid=$(buildah from localhost:5000/my-alpine --cert-dir /root/auth)
buildah containers
buildah images
########
# Clean up Buildah
########
buildah rm $ctrid
buildah rmi -f localhost:5000/my-alpine:latest
########
# Show stuff
########
docker ps --all
docker images
buildah containers
buildah images
########
# Log out of local repo
########
docker logout localhost:5000
########
# Push using only certs, this should fail.
########
buildah push --cert-dir /root/auth --tls-verify=true alpine docker://localhost:5000/my-alpine
########
# Push using creds, certs and no transport, this should work.
########
buildah push --cert-dir ~/auth --tls-verify=true --creds=testuser:testpassword alpine localhost:5000/my-alpine
########
# No creds anywhere, only the certificate, this should fail.
########
buildah from localhost:5000/my-alpine --cert-dir /root/auth --tls-verify=true
########
# Log in with creds, this should work
########
ctrid=$(buildah from localhost:5000/my-alpine --cert-dir /root/auth --tls-verify=true --creds=testuser:testpassword)
########
# Show stuff
########
docker ps --all
docker images
buildah containers
buildah images
########
# Clean up Buildah
########
buildah rm $ctrid
buildah rmi -f $(buildah --debug=false images -q)
########
# Pull alpine
########
buildah from alpine
########
# Show stuff
########
docker ps --all
docker images
buildah containers
buildah images
########
# Let's test commit
########
########
# No credentials, this should fail.
########
buildah commit --cert-dir /root/auth --tls-verify=true alpine-working-container docker://localhost:5000/my-commit-alpine
########
# This should work, writing the image to the registry. It will not create an image locally.
########
buildah commit --cert-dir /root/auth --tls-verify=true --creds=testuser:testpassword alpine-working-container docker://localhost:5000/my-commit-alpine
########
# Pull the new image that we just committed
########
buildah from localhost:5000/my-commit-alpine --cert-dir /root/auth --tls-verify=true --creds=testuser:testpassword
########
# Show stuff
########
docker ps --all
docker images
buildah containers
buildah images
########
# Clean up
########
rm -rf ${TESTDIR}/auth
docker rm -f $(docker ps --all -q)
docker rmi -f $(docker images -q)
buildah rm $(buildah containers -q)
buildah rmi -f $(buildah --debug=false images -q)


@@ -8,6 +8,7 @@ if ! which gometalinter.v1 > /dev/null 2> /dev/null ; then
exit 1
fi
exec gometalinter.v1 \
--enable-gc \
--exclude='error return value not checked.*(Close|Log|Print).*\(errcheck\)$' \
--exclude='.*_test\.go:.*error return value not checked.*\(errcheck\)$' \
--exclude='duplicate of.*_test.go.*\(dupl\)$' \


@@ -2,10 +2,15 @@
load helpers
@test "buildah version test" {
run buildah version
echo "$output"
[ "$status" -eq 0 ]
}
@test "buildah version up to date in .spec file" {
run buildah version
bversion=$(echo "$output" | awk '/^Version:/ { print $NF }')
rversion=$(cat ${TESTSDIR}/../contrib/rpm/buildah.spec | awk '/^Version:/ { print $NF }')
test "$bversion" = "$rversion"
}


@@ -1,8 +1,8 @@
github.com/BurntSushi/toml master
github.com/Nvveen/Gotty master
github.com/blang/semver master
github.com/containers/image 23bddaa64cc6bf3f3077cda0dbf1cdd7007434df
github.com/containers/storage 105f7c77aef0c797429e41552743bf5b03b63263
github.com/containers/image 106607808da3cff168be56821e994611c919d283
github.com/containers/storage 5d8c2f87387fa5be9fa526ae39fbd79b8bdf27be
github.com/docker/distribution master
github.com/docker/docker 0f9ec7e47072b0c2e954b5b821bde5c1fe81bfa7
github.com/docker/engine-api master
@@ -22,10 +22,10 @@ github.com/mistifyio/go-zfs master
github.com/moby/moby 0f9ec7e47072b0c2e954b5b821bde5c1fe81bfa7
github.com/mtrmac/gpgme master
github.com/opencontainers/go-digest aa2ec055abd10d26d539eb630a92241b781ce4bc
github.com/opencontainers/image-spec v1.0.0-rc6
github.com/opencontainers/image-spec v1.0.0
github.com/opencontainers/runc master
github.com/opencontainers/runtime-spec v1.0.0-rc5
github.com/opencontainers/runtime-tools 8addcc695096a0fc61010af8766952546bba7cd0
github.com/opencontainers/runtime-spec v1.0.0
github.com/opencontainers/runtime-tools 2d270b8764c02228eeb13e36f076f5ce6f2e3591
github.com/opencontainers/selinux ba1aefe8057f1d0cfb8e88d0ec1dc85925ef987d
github.com/openshift/imagebuilder master
github.com/ostreedev/ostree-go aeb02c6b6aa2889db3ef62f7855650755befd460

vendor/github.com/containers/image/README.md generated vendored Normal file

@@ -0,0 +1,80 @@
[![GoDoc](https://godoc.org/github.com/containers/image?status.svg)](https://godoc.org/github.com/containers/image) [![Build Status](https://travis-ci.org/containers/image.svg?branch=master)](https://travis-ci.org/containers/image)
=
`image` is a set of Go libraries aimed at working in various ways with
containers' images and container image registries.
The containers/image library allows applications to pull and push images from
container image registries, like the upstream docker registry. It also
implements "simple image signing".
The containers/image library also allows you to inspect a repository on a
container registry without pulling down the image. This means it fetches the
repository's manifest and it is able to show you a `docker inspect`-like json
output about a whole repository or a tag. This library, in contrast to `docker
inspect`, helps you gather useful information about a repository or a tag
without requiring you to run `docker pull`.
The containers/image library also allows you to translate from one image format
to another, for example docker container images to OCI images. It also allows
you to copy container images between various registries, possibly converting
them as necessary, and to sign and verify images.
## Command-line usage
The containers/image project is only a library with no user interface;
you can either incorporate it into your Go programs, or use the `skopeo` tool:
The [skopeo](https://github.com/projectatomic/skopeo) tool uses the
containers/image library and takes advantage of many of its features,
e.g. `skopeo copy` exposes the `containers/image/copy.Image` functionality.
## Dependencies
This library does not ship a committed version of its dependencies in a `vendor`
subdirectory. This is so you can make well-informed decisions about which
libraries you should use with this package in your own projects, and because
types defined in the `vendor` directory would be impossible to use from your projects.
What this project tests against dependencies-wise is located
[in vendor.conf](https://github.com/containers/image/blob/master/vendor.conf).
## Building
If you want to see what the library can do, or an example of how it is called,
consider starting with the [skopeo](https://github.com/projectatomic/skopeo) tool
instead.
To integrate this library into your project, put it into `$GOPATH` or use
your preferred vendoring tool to include a copy in your project.
Ensure that the dependencies documented [in vendor.conf](https://github.com/containers/image/blob/master/vendor.conf)
are also available
(using those exact versions or different versions of your choosing).
This library, by default, also depends on the GpgME and libostree C libraries. Either install them:
```sh
Fedora$ dnf install gpgme-devel libassuan-devel libostree-devel
macOS$ brew install gpgme
```
or use the build tags described below to avoid the dependencies (e.g. using `go build -tags …`)
### Supported build tags
- `containers_image_openpgp`: Use a Golang-only OpenPGP implementation for signature verification instead of the default cgo/gpgme-based implementation;
the primary downside is that creating new signatures with the Golang-only implementation is not supported.
- `containers_image_ostree_stub`: Instead of importing `ostree:` transport in `github.com/containers/image/transports/alltransports`, use a stub which reports that the transport is not supported. This allows building the library without requiring the `libostree` development libraries.
(Note that explicitly importing `github.com/containers/image/ostree` will still depend on the `libostree` library, this build tag only affects generic users of …`/alltransports`.)
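As an illustration, a build avoiding both the GpgME and libostree dependencies could combine the two tags:
```sh
go build -tags "containers_image_openpgp containers_image_ostree_stub" ./...
```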
## Contributing
When developing this library, please use `make` (or `make … BUILDTAGS=…`) to take advantage of the tests and validation.
## License
ASL 2.0
## Contact
- Mailing list: [containers-dev](https://groups.google.com/forum/?hl=en#!forum/containers-dev)
- IRC: #[container-projects](irc://irc.freenode.net:6667/#container-projects) on freenode.net


@@ -19,15 +19,24 @@ func newImageDestination(ctx *types.SystemContext, ref archiveReference) (types.
if ref.destinationRef == nil {
return nil, errors.Errorf("docker-archive: destination reference not supplied (must be of form <path>:<reference:tag>)")
}
fh, err := os.OpenFile(ref.path, os.O_WRONLY|os.O_EXCL|os.O_CREATE, 0644)
// ref.path can be either a pipe or a regular file
// in the case of a pipe, we require that we can open it for write
// in the case of a regular file, we don't want to overwrite any pre-existing file
// so we check for Size() == 0 below (This is racy, but using O_EXCL would also be racy,
// only in a different way. Either way, it's up to the user to not have two writers to the same path.)
fh, err := os.OpenFile(ref.path, os.O_WRONLY|os.O_CREATE, 0644)
if err != nil {
// FIXME: It should be possible to modify archives, but the only really
// sane way of doing it is to create a copy of the image, modify
// it and then do a rename(2).
if os.IsExist(err) {
err = errors.New("docker-archive doesn't support modifying existing images")
}
return nil, err
return nil, errors.Wrapf(err, "error opening file %q", ref.path)
}
fhStat, err := fh.Stat()
if err != nil {
return nil, errors.Wrapf(err, "error statting file %q", ref.path)
}
if fhStat.Mode().IsRegular() && fhStat.Size() != 0 {
return nil, errors.New("docker-archive doesn't support modifying existing images")
}
return &archiveImageDestination{


@@ -308,31 +308,36 @@ func (c *dockerClient) setupRequestAuth(req *http.Request) error {
if len(c.challenges) == 0 {
return nil
}
// assume just one...
challenge := c.challenges[0]
switch challenge.Scheme {
case "basic":
req.SetBasicAuth(c.username, c.password)
return nil
case "bearer":
if c.token == nil || time.Now().After(c.tokenExpiration) {
realm, ok := challenge.Parameters["realm"]
if !ok {
return errors.Errorf("missing realm in bearer auth challenge")
schemeNames := make([]string, 0, len(c.challenges))
for _, challenge := range c.challenges {
schemeNames = append(schemeNames, challenge.Scheme)
switch challenge.Scheme {
case "basic":
req.SetBasicAuth(c.username, c.password)
return nil
case "bearer":
if c.token == nil || time.Now().After(c.tokenExpiration) {
realm, ok := challenge.Parameters["realm"]
if !ok {
return errors.Errorf("missing realm in bearer auth challenge")
}
service, _ := challenge.Parameters["service"] // Will be "" if not present
scope := fmt.Sprintf("repository:%s:%s", c.scope.remoteName, c.scope.actions)
token, err := c.getBearerToken(realm, service, scope)
if err != nil {
return err
}
c.token = token
c.tokenExpiration = token.IssuedAt.Add(time.Duration(token.ExpiresIn) * time.Second)
}
service, _ := challenge.Parameters["service"] // Will be "" if not present
scope := fmt.Sprintf("repository:%s:%s", c.scope.remoteName, c.scope.actions)
token, err := c.getBearerToken(realm, service, scope)
if err != nil {
return err
}
c.token = token
c.tokenExpiration = token.IssuedAt.Add(time.Duration(token.ExpiresIn) * time.Second)
req.Header.Set("Authorization", fmt.Sprintf("Bearer %s", c.token.Token))
return nil
default:
logrus.Debugf("no handler for %s authentication", challenge.Scheme)
}
req.Header.Set("Authorization", fmt.Sprintf("Bearer %s", c.token.Token))
return nil
}
return errors.Errorf("no handler for %s authentication", challenge.Scheme)
logrus.Infof("None of the challenges sent by server (%s) are supported, trying an unauthenticated request anyway", strings.Join(schemeNames, ", "))
return nil
}
func (c *dockerClient) getBearerToken(realm, service, scope string) (*bearerToken, error) {


@@ -0,0 +1,2 @@
This is a copy of github.com/docker/distribution/reference as of commit fb0bebc4b64e3881cc52a2478d749845ed76d2a8,
except that ParseAnyReferenceWithSet has been removed to drop the dependency on github.com/docker/distribution/digestset.


@@ -181,7 +181,7 @@ func (d *Destination) PutManifest(m []byte) error {
layerPaths = append(layerPaths, l.Digest.String())
}
items := []manifestItem{{
items := []ManifestItem{{
Config: man.Config.Digest.String(),
RepoTags: []string{d.repoTag},
Layers: layerPaths,


@@ -20,7 +20,7 @@ import (
type Source struct {
tarPath string
// The following data is only available after ensureCachedDataIsPresent() succeeds
tarManifest *manifestItem // nil if not available yet.
tarManifest *ManifestItem // nil if not available yet.
configBytes []byte
configDigest digest.Digest
orderedDiffIDList []diffID
@@ -145,23 +145,28 @@ func (s *Source) ensureCachedDataIsPresent() error {
return err
}
// Check to make sure length is 1
if len(tarManifest) != 1 {
return errors.Errorf("Unexpected tar manifest.json: expected 1 item, got %d", len(tarManifest))
}
// Read and parse config.
configBytes, err := s.readTarComponent(tarManifest.Config)
configBytes, err := s.readTarComponent(tarManifest[0].Config)
if err != nil {
return err
}
var parsedConfig image // Most fields omitted, we only care about layer DiffIDs.
if err := json.Unmarshal(configBytes, &parsedConfig); err != nil {
return errors.Wrapf(err, "Error decoding tar config %s", tarManifest.Config)
return errors.Wrapf(err, "Error decoding tar config %s", tarManifest[0].Config)
}
knownLayers, err := s.prepareLayerData(tarManifest, &parsedConfig)
knownLayers, err := s.prepareLayerData(&tarManifest[0], &parsedConfig)
if err != nil {
return err
}
// Success; commit.
s.tarManifest = tarManifest
s.tarManifest = &tarManifest[0]
s.configBytes = configBytes
s.configDigest = digest.FromBytes(configBytes)
s.orderedDiffIDList = parsedConfig.RootFS.DiffIDs
@@ -170,23 +175,25 @@ func (s *Source) ensureCachedDataIsPresent() error {
}
// loadTarManifest loads and decodes the manifest.json.
func (s *Source) loadTarManifest() (*manifestItem, error) {
func (s *Source) loadTarManifest() ([]ManifestItem, error) {
// FIXME? Do we need to deal with the legacy format?
bytes, err := s.readTarComponent(manifestFileName)
if err != nil {
return nil, err
}
var items []manifestItem
var items []ManifestItem
if err := json.Unmarshal(bytes, &items); err != nil {
return nil, errors.Wrap(err, "Error decoding tar manifest.json")
}
if len(items) != 1 {
return nil, errors.Errorf("Unexpected tar manifest.json: expected 1 item, got %d", len(items))
}
return &items[0], nil
return items, nil
}
func (s *Source) prepareLayerData(tarManifest *manifestItem, parsedConfig *image) (map[diffID]*layerInfo, error) {
// LoadTarManifest loads and decodes the manifest.json
func (s *Source) LoadTarManifest() ([]ManifestItem, error) {
return s.loadTarManifest()
}
func (s *Source) prepareLayerData(tarManifest *ManifestItem, parsedConfig *image) (map[diffID]*layerInfo, error) {
// Collect layer data available in manifest and config.
if len(tarManifest.Layers) != len(parsedConfig.RootFS.DiffIDs) {
return nil, errors.Errorf("Inconsistent layer count: %d in manifest, %d in config", len(tarManifest.Layers), len(parsedConfig.RootFS.DiffIDs))


@@ -13,7 +13,8 @@ const (
// legacyRepositoriesFileName = "repositories"
)
type manifestItem struct {
// ManifestItem is an element of the array stored in the top-level manifest.json file.
type ManifestItem struct {
Config string
RepoTags []string
Layers []string


@@ -16,7 +16,7 @@ type platformSpec struct {
OSVersion string `json:"os.version,omitempty"`
OSFeatures []string `json:"os.features,omitempty"`
Variant string `json:"variant,omitempty"`
Features []string `json:"features,omitempty"`
Features []string `json:"features,omitempty"` // removed in OCI
}
// A manifestDescriptor references a platform-specific manifest.


@@ -183,6 +183,7 @@ func (m *manifestSchema2) UpdatedImage(options types.ManifestUpdateOptions) (typ
}
copy.LayersDescriptors = make([]descriptor, len(options.LayerInfos))
for i, info := range options.LayerInfos {
copy.LayersDescriptors[i].MediaType = m.LayersDescriptors[i].MediaType
copy.LayersDescriptors[i].Digest = info.Digest
copy.LayersDescriptors[i].Size = info.Size
copy.LayersDescriptors[i].URLs = info.URLs
@@ -213,15 +214,17 @@ func (m *manifestSchema2) convertToManifestOCI1() (types.Image, error) {
return nil, err
}
config := descriptor{
MediaType: imgspecv1.MediaTypeImageConfig,
Size: int64(len(configOCIBytes)),
Digest: digest.FromBytes(configOCIBytes),
config := descriptorOCI1{
descriptor: descriptor{
MediaType: imgspecv1.MediaTypeImageConfig,
Size: int64(len(configOCIBytes)),
Digest: digest.FromBytes(configOCIBytes),
},
}
layers := make([]descriptor, len(m.LayersDescriptors))
layers := make([]descriptorOCI1, len(m.LayersDescriptors))
for idx := range layers {
layers[idx] = m.LayersDescriptors[idx]
layers[idx] = descriptorOCI1{descriptor: m.LayersDescriptors[idx]}
if m.LayersDescriptors[idx].MediaType == manifest.DockerV2Schema2ForeignLayerMediaType {
layers[idx].MediaType = imgspecv1.MediaTypeImageLayerNonDistributable
} else {


@@ -12,12 +12,18 @@ import (
"github.com/pkg/errors"
)
type descriptorOCI1 struct {
descriptor
Annotations map[string]string `json:"annotations,omitempty"`
}
type manifestOCI1 struct {
src types.ImageSource // May be nil if configBlob is not nil
configBlob []byte // If set, corresponds to contents of ConfigDescriptor.
SchemaVersion int `json:"schemaVersion"`
ConfigDescriptor descriptor `json:"config"`
LayersDescriptors []descriptor `json:"layers"`
ConfigDescriptor descriptorOCI1 `json:"config"`
LayersDescriptors []descriptorOCI1 `json:"layers"`
Annotations map[string]string `json:"annotations,omitempty"`
}
func manifestOCI1FromManifest(src types.ImageSource, manifest []byte) (genericManifest, error) {
@@ -29,7 +35,7 @@ func manifestOCI1FromManifest(src types.ImageSource, manifest []byte) (genericMa
}
// manifestOCI1FromComponents builds a new manifestOCI1 from the supplied data:
func manifestOCI1FromComponents(config descriptor, src types.ImageSource, configBlob []byte, layers []descriptor) genericManifest {
func manifestOCI1FromComponents(config descriptorOCI1, src types.ImageSource, configBlob []byte, layers []descriptorOCI1) genericManifest {
return &manifestOCI1{
src: src,
configBlob: configBlob,
@@ -148,8 +154,9 @@ func (m *manifestOCI1) UpdatedImage(options types.ManifestUpdateOptions) (types.
if len(copy.LayersDescriptors) != len(options.LayerInfos) {
return nil, errors.Errorf("Error preparing updated manifest: layer count changed from %d to %d", len(copy.LayersDescriptors), len(options.LayerInfos))
}
copy.LayersDescriptors = make([]descriptor, len(options.LayerInfos))
copy.LayersDescriptors = make([]descriptorOCI1, len(options.LayerInfos))
for i, info := range options.LayerInfos {
copy.LayersDescriptors[i].MediaType = m.LayersDescriptors[i].MediaType
copy.LayersDescriptors[i].Digest = info.Digest
copy.LayersDescriptors[i].Size = info.Size
}
@@ -169,7 +176,7 @@ func (m *manifestOCI1) UpdatedImage(options types.ManifestUpdateOptions) (types.
func (m *manifestOCI1) convertToManifestSchema2() (types.Image, error) {
// Create a copy of the descriptor.
config := m.ConfigDescriptor
config := m.ConfigDescriptor.descriptor
// The only difference between OCI and DockerSchema2 is the mediatypes. The
// media type of the manifest is handled by manifestSchema2FromComponents.
@@ -177,7 +184,7 @@ func (m *manifestOCI1) convertToManifestSchema2() (types.Image, error) {
layers := make([]descriptor, len(m.LayersDescriptors))
for idx := range layers {
layers[idx] = m.LayersDescriptors[idx]
layers[idx] = m.LayersDescriptors[idx].descriptor
layers[idx].MediaType = manifest.DockerV2Schema2LayerMediaType
}


@@ -0,0 +1 @@
This package was replicated from [github.com/docker/docker v17.04.0-ce](https://github.com/docker/docker/tree/v17.04.0-ce/api/types/strslice).


@@ -174,11 +174,11 @@ func (s *storageImageDestination) putBlob(stream io.Reader, blobinfo types.BlobI
}
// Attempt to create the identified layer and import its contents.
layer, uncompressedSize, err := s.imageRef.transport.store.PutLayer(id, parentLayer, nil, "", true, multi)
if err != nil && err != storage.ErrDuplicateID {
if err != nil && errors.Cause(err) != storage.ErrDuplicateID {
logrus.Debugf("error importing layer blob %q as %q: %v", blobinfo.Digest, id, err)
return errorBlobInfo, err
}
if err == storage.ErrDuplicateID {
if errors.Cause(err) == storage.ErrDuplicateID {
// We specified an ID, and there's already a layer with
// the same ID. Drain the input so that we can look at
// its length and digest.
@@ -291,7 +291,7 @@ func (s *storageImageDestination) PutBlob(stream io.Reader, blobinfo types.BlobI
// it returns a non-nil error only on an unexpected failure.
func (s *storageImageDestination) HasBlob(blobinfo types.BlobInfo) (bool, int64, error) {
if blobinfo.Digest == "" {
return false, -1, errors.Errorf(`"Can not check for a blob with unknown digest`)
return false, -1, errors.Errorf(`Can not check for a blob with unknown digest`)
}
for _, blob := range s.BlobList {
if blob.Digest == blobinfo.Digest {
@@ -331,7 +331,7 @@ func (s *storageImageDestination) Commit() error {
}
img, err := s.imageRef.transport.store.CreateImage(s.ID, nil, lastLayer, "", nil)
if err != nil {
if err != storage.ErrDuplicateID {
if errors.Cause(err) != storage.ErrDuplicateID {
logrus.Debugf("error creating image: %q", err)
return errors.Wrapf(err, "error creating image %q", s.ID)
}
@@ -340,8 +340,8 @@ func (s *storageImageDestination) Commit() error {
return errors.Wrapf(err, "error reading image %q", s.ID)
}
if img.TopLayer != lastLayer {
logrus.Debugf("error creating image: image with ID %q exists, but uses different layers", err)
return errors.Wrapf(err, "image with ID %q already exists, but uses a different top layer", s.ID)
logrus.Debugf("error creating image: image with ID %q exists, but uses different layers", s.ID)
return errors.Wrapf(storage.ErrDuplicateID, "image with ID %q already exists, but uses a different top layer", s.ID)
}
logrus.Debugf("reusing image ID %q", img.ID)
} else {


@@ -70,7 +70,9 @@ func (s *storageReference) resolveImage() (*storage.Image, error) {
// to build this reference object.
func (s storageReference) Transport() types.ImageTransport {
return &storageTransport{
store: s.transport.store,
store: s.transport.store,
defaultUIDMap: s.transport.defaultUIDMap,
defaultGIDMap: s.transport.defaultGIDMap,
}
}
@@ -83,7 +85,12 @@ func (s storageReference) DockerReference() reference.Named {
// disambiguate between images which may be present in multiple stores and
// share only their names.
func (s storageReference) StringWithinTransport() string {
storeSpec := "[" + s.transport.store.GraphDriverName() + "@" + s.transport.store.GraphRoot() + "]"
optionsList := ""
options := s.transport.store.GraphOptions()
if len(options) > 0 {
optionsList = ":" + strings.Join(options, ",")
}
storeSpec := "[" + s.transport.store.GraphDriverName() + "@" + s.transport.store.GraphRoot() + "+" + s.transport.store.RunRoot() + optionsList + "]"
if s.name == nil {
return storeSpec + "@" + s.id
}
@@ -94,7 +101,14 @@ func (s storageReference) StringWithinTransport() string {
}
func (s storageReference) PolicyConfigurationIdentity() string {
return s.StringWithinTransport()
storeSpec := "[" + s.transport.store.GraphDriverName() + "@" + s.transport.store.GraphRoot() + "]"
if s.name == nil {
return storeSpec + "@" + s.id
}
if s.id == "" {
return storeSpec + s.reference
}
return storeSpec + s.reference + "@" + s.id
}
// Also accept policy that's tied to the combination of the graph root and


@@ -11,6 +11,7 @@ import (
"github.com/containers/image/transports"
"github.com/containers/image/types"
"github.com/containers/storage"
"github.com/containers/storage/pkg/idtools"
"github.com/opencontainers/go-digest"
ddigest "github.com/opencontainers/go-digest"
)
@@ -46,10 +47,20 @@ type StoreTransport interface {
// ParseStoreReference parses a reference, overriding any store
// specification that it may contain.
ParseStoreReference(store storage.Store, reference string) (*storageReference, error)
// SetDefaultUIDMap sets the default UID map to use when opening stores.
SetDefaultUIDMap(idmap []idtools.IDMap)
// SetDefaultGIDMap sets the default GID map to use when opening stores.
SetDefaultGIDMap(idmap []idtools.IDMap)
// DefaultUIDMap returns the default UID map used when opening stores.
DefaultUIDMap() []idtools.IDMap
// DefaultGIDMap returns the default GID map used when opening stores.
DefaultGIDMap() []idtools.IDMap
}
type storageTransport struct {
store storage.Store
store storage.Store
defaultUIDMap []idtools.IDMap
defaultGIDMap []idtools.IDMap
}
func (s *storageTransport) Name() string {
@@ -66,6 +77,26 @@ func (s *storageTransport) SetStore(store storage.Store) {
s.store = store
}
// SetDefaultUIDMap sets the default UID map to use when opening stores.
func (s *storageTransport) SetDefaultUIDMap(idmap []idtools.IDMap) {
s.defaultUIDMap = idmap
}
// SetDefaultGIDMap sets the default GID map to use when opening stores.
func (s *storageTransport) SetDefaultGIDMap(idmap []idtools.IDMap) {
s.defaultGIDMap = idmap
}
// DefaultUIDMap returns the default UID map used when opening stores.
func (s *storageTransport) DefaultUIDMap() []idtools.IDMap {
return s.defaultUIDMap
}
// DefaultGIDMap returns the default GID map used when opening stores.
func (s *storageTransport) DefaultGIDMap() []idtools.IDMap {
return s.defaultGIDMap
}
// ParseStoreReference takes a name or an ID, tries to figure out which it is
// relative to the given store, and returns it in a reference object.
func (s storageTransport) ParseStoreReference(store storage.Store, ref string) (*storageReference, error) {
@@ -110,7 +141,12 @@ func (s storageTransport) ParseStoreReference(store storage.Store, ref string) (
// recognize.
return nil, ErrInvalidReference
}
storeSpec := "[" + store.GraphDriverName() + "@" + store.GraphRoot() + "]"
optionsList := ""
options := store.GraphOptions()
if len(options) > 0 {
optionsList = ":" + strings.Join(options, ",")
}
storeSpec := "[" + store.GraphDriverName() + "@" + store.GraphRoot() + "+" + store.RunRoot() + optionsList + "]"
id := ""
if sum.Validate() == nil {
id = sum.Hex()
@@ -127,14 +163,17 @@ func (s storageTransport) ParseStoreReference(store storage.Store, ref string) (
} else {
logrus.Debugf("parsed reference into %q", storeSpec+refname+"@"+id)
}
return newReference(storageTransport{store: store}, refname, id, name), nil
return newReference(storageTransport{store: store, defaultUIDMap: s.defaultUIDMap, defaultGIDMap: s.defaultGIDMap}, refname, id, name), nil
}
func (s *storageTransport) GetStore() (storage.Store, error) {
// Return the transport's previously-set store. If we don't have one
// of those, initialize one now.
if s.store == nil {
store, err := storage.GetStore(storage.DefaultStoreOptions)
options := storage.DefaultStoreOptions
options.UIDMap = s.defaultUIDMap
options.GIDMap = s.defaultGIDMap
store, err := storage.GetStore(options)
if err != nil {
return nil, err
}
@@ -145,15 +184,11 @@ func (s *storageTransport) GetStore() (storage.Store, error) {
// ParseReference takes a name and/or an ID ("_name_"/"@_id_"/"_name_@_id_"),
// possibly prefixed with a store specifier in the form "[_graphroot_]" or
// "[_driver_@_graphroot_]", tries to figure out which it is, and returns it in
// a reference object. If the _graphroot_ is a location other than the default,
// it needs to have been previously opened using storage.GetStore(), so that it
// can figure out which run root goes with the graph root.
// "[_driver_@_graphroot_]" or "[_driver_@_graphroot_+_runroot_]" or
// "[_driver_@_graphroot_:_options_]" or "[_driver_@_graphroot_+_runroot_:_options_]",
// tries to figure out which it is, and returns it in a reference object.
func (s *storageTransport) ParseReference(reference string) (types.ImageReference, error) {
store, err := s.GetStore()
if err != nil {
return nil, err
}
var store storage.Store
// Check if there's a store location prefix. If there is, then it
// needs to match a store that was previously initialized using
// storage.GetStore(), or be enough to let the storage library fill out
@@ -165,37 +200,65 @@ func (s *storageTransport) ParseReference(reference string) (types.ImageReferenc
}
storeSpec := reference[1:closeIndex]
reference = reference[closeIndex+1:]
storeInfo := strings.SplitN(storeSpec, "@", 2)
if len(storeInfo) == 1 && storeInfo[0] != "" {
// One component: the graph root.
if !filepath.IsAbs(storeInfo[0]) {
return nil, ErrPathNotAbsolute
// Peel off a "driver@" from the start.
driverInfo := ""
driverSplit := strings.SplitN(storeSpec, "@", 2)
if len(driverSplit) != 2 {
if storeSpec == "" {
return nil, ErrInvalidReference
}
store2, err := storage.GetStore(storage.StoreOptions{
GraphRoot: storeInfo[0],
})
if err != nil {
return nil, err
}
store = store2
} else if len(storeInfo) == 2 && storeInfo[0] != "" && storeInfo[1] != "" {
// Two components: the driver type and the graph root.
if !filepath.IsAbs(storeInfo[1]) {
return nil, ErrPathNotAbsolute
}
store2, err := storage.GetStore(storage.StoreOptions{
GraphDriverName: storeInfo[0],
GraphRoot: storeInfo[1],
})
if err != nil {
return nil, err
}
store = store2
} else {
// Anything else: store specified in a form we don't
// recognize.
return nil, ErrInvalidReference
driverInfo = driverSplit[0]
if driverInfo == "" {
return nil, ErrInvalidReference
}
storeSpec = driverSplit[1]
if storeSpec == "" {
return nil, ErrInvalidReference
}
}
// Peel off a ":options" from the end.
var options []string
optionsSplit := strings.SplitN(storeSpec, ":", 2)
if len(optionsSplit) == 2 {
options = strings.Split(optionsSplit[1], ",")
storeSpec = optionsSplit[0]
}
// Peel off a "+runroot" from the new end.
runRootInfo := ""
runRootSplit := strings.SplitN(storeSpec, "+", 2)
if len(runRootSplit) == 2 {
runRootInfo = runRootSplit[1]
storeSpec = runRootSplit[0]
}
// The rest is our graph root.
rootInfo := storeSpec
// Check that any paths are absolute paths.
if rootInfo != "" && !filepath.IsAbs(rootInfo) {
return nil, ErrPathNotAbsolute
}
if runRootInfo != "" && !filepath.IsAbs(runRootInfo) {
return nil, ErrPathNotAbsolute
}
store2, err := storage.GetStore(storage.StoreOptions{
GraphDriverName: driverInfo,
GraphRoot: rootInfo,
RunRoot: runRootInfo,
GraphDriverOptions: options,
UIDMap: s.defaultUIDMap,
GIDMap: s.defaultGIDMap,
})
if err != nil {
return nil, err
}
store = store2
} else {
// We didn't have a store spec, so use the default.
store2, err := s.GetStore()
if err != nil {
return nil, err
}
store = store2
}
return s.ParseStoreReference(store, reference)
}
@@ -250,7 +313,7 @@ func (s storageTransport) ValidatePolicyConfigurationScope(scope string) error {
return ErrPathNotAbsolute
}
} else {
// Anything else: store specified in a form we don't
// Anything else: scope specified in a form we don't
// recognize.
return ErrInvalidReference
}
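Taken together, the parsing above means a reference's store specifier can now carry the driver, graph root, run root, and graph driver options directly. The strings below are assumed examples of the extended forms (the paths and the overlay.mountopt option are placeholders):
```sh
# [driver@graphroot+runroot] and [driver@graphroot+runroot:options] forms:
containers-storage:[overlay@/var/lib/containers/storage+/run/containers/storage]fedora:latest
containers-storage:[overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev]fedora:latest
```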

37
vendor/github.com/containers/image/vendor.conf generated vendored Normal file

@@ -0,0 +1,37 @@
github.com/Sirupsen/logrus 7f4b1adc791766938c29457bed0703fb9134421a
github.com/containers/storage 105f7c77aef0c797429e41552743bf5b03b63263
github.com/davecgh/go-spew 346938d642f2ec3594ed81d874461961cd0faa76
github.com/docker/distribution df5327f76fb6468b84a87771e361762b8be23fdb
github.com/docker/docker 75843d36aa5c3eaade50da005f9e0ff2602f3d5e
github.com/docker/go-connections 7da10c8c50cad14494ec818dcdfb6506265c0086
github.com/docker/go-units 0dadbb0345b35ec7ef35e228dabb8de89a65bf52
github.com/docker/libtrust aabc10ec26b754e797f9028f4589c5b7bd90dc20
github.com/ghodss/yaml 04f313413ffd65ce25f2541bfd2b2ceec5c0908c
github.com/gorilla/context 08b5f424b9271eedf6f9f0ce86cb9396ed337a42
github.com/gorilla/mux 94e7d24fd285520f3d12ae998f7fdd6b5393d453
github.com/imdario/mergo 50d4dbd4eb0e84778abe37cefef140271d96fade
github.com/mattn/go-runewidth 14207d285c6c197daabb5c9793d63e7af9ab2d50
github.com/mattn/go-shellwords 005a0944d84452842197c2108bd9168ced206f78
github.com/mistifyio/go-zfs c0224de804d438efd11ea6e52ada8014537d6062
github.com/mtrmac/gpgme b2432428689ca58c2b8e8dea9449d3295cf96fc9
github.com/opencontainers/go-digest aa2ec055abd10d26d539eb630a92241b781ce4bc
github.com/opencontainers/image-spec v1.0.0
github.com/opencontainers/runc 6b1d0e76f239ffb435445e5ae316d2676c07c6e3
github.com/pborman/uuid 1b00554d822231195d1babd97ff4a781231955c9
github.com/pkg/errors 248dadf4e9068a0b3e79f02ed0a610d935de5302
github.com/pmezard/go-difflib 792786c7400a136282c1664665ae0a8db921c6c2
github.com/stretchr/testify 4d4bfba8f1d1027c4fdbe371823030df51419987
github.com/vbatts/tar-split bd4c5d64c3e9297f410025a3b1bd0c58f659e721
golang.org/x/crypto 453249f01cfeb54c3d549ddb75ff152ca243f9d8
golang.org/x/net 6b27048ae5e6ad1ef927e72e437531493de612fe
golang.org/x/sys 075e574b89e4c2d22f2286a7e2b919519c6f3547
gopkg.in/cheggaaa/pb.v1 d7e6ca3010b6f084d8056847f55d7f572f180678
gopkg.in/yaml.v2 a3f3340b5840cee44f372bddb5880fcbc419b46a
k8s.io/client-go bcde30fb7eaed76fd98a36b4120321b94995ffb6
github.com/xeipuuv/gojsonschema master
github.com/xeipuuv/gojsonreference master
github.com/xeipuuv/gojsonpointer master
github.com/tchap/go-patricia v2.2.6
github.com/opencontainers/selinux ba1aefe8057f1d0cfb8e88d0ec1dc85925ef987d
github.com/BurntSushi/toml b26d9c308763d68093482582cea63d69be07a0f0
github.com/ostreedev/ostree-go aeb02c6b6aa2889db3ef62f7855650755befd460

vendor/github.com/containers/storage/README.md generated vendored Normal file

@@ -0,0 +1,43 @@
`storage` is a Go library which aims to provide methods for storing filesystem
layers, container images, and containers. A `containers-storage` CLI wrapper
is also included for manual and scripting use.

To build the CLI wrapper, use 'make build-binary'.

Operations which use VMs expect to launch them using 'vagrant', defaulting to
using its 'libvirt' provider. The boxes used are also available for the
'virtualbox' provider, and can be selected by setting $VAGRANT_PROVIDER to
'virtualbox' before kicking off the build.

The library manages three types of items: layers, images, and containers.

A *layer* is a copy-on-write filesystem which is notionally stored as a set of
changes relative to its *parent* layer, if it has one. A given layer can only
have one parent, but any layer can be the parent of multiple layers. Layers
which are parents of other layers should be treated as read-only.

An *image* is a reference to a particular layer (its _top_ layer), along with
other information which the library can manage for the convenience of its
caller. This information typically includes configuration templates for
running a binary contained within the image's layers, and may include
cryptographic signatures. Multiple images can reference the same layer, as the
differences between two images may not be in their layer contents.

A *container* is a read-write layer which is a child of an image's top layer,
along with information which the library can manage for the convenience of its
caller. This information typically includes configuration information for
running the specific container. Multiple containers can be derived from a
single image.

Layers, images, and containers are represented primarily by 32 character
hexadecimal IDs, but items of each kind can also have one or more arbitrary
names attached to them, which the library will automatically resolve to IDs
when they are passed in to API calls which expect IDs.

The library can store what it calls *metadata* for each of these types of
items. This is expected to be a small piece of data, since it is cached in
memory and stored along with the library's own bookkeeping information.

Additionally, the library can store one or more of what it calls *big data* for
images and containers. This is a named chunk of larger data, which is only in
memory when it is being read from or being written to its own disk file.
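Putting the concepts above together, a hypothetical caller might drive the library like this (constructor and Create* signatures as shown in the store.go diff later on this page; the paths, names, and metadata are made up, and a real run needs privileges and a writable store):

```go
// Hypothetical sketch of driving the library as the README describes:
// create a layer, wrap it in an image, then derive a container from it.
package main

import (
	"fmt"

	"github.com/containers/storage"
)

func main() {
	store, err := storage.GetStore(storage.StoreOptions{
		GraphRoot: "/var/lib/mystore", // hypothetical graph root
		RunRoot:   "/run/mystore",     // hypothetical run root
	})
	if err != nil {
		panic(err)
	}
	// An empty ID asks the library to generate one; "" parent means a base layer.
	layer, err := store.CreateLayer("", "", []string{"base-layer"}, "", true)
	if err != nil {
		panic(err)
	}
	image, err := store.CreateImage("", []string{"example-image"}, layer.ID, "", nil)
	if err != nil {
		panic(err)
	}
	container, err := store.CreateContainer("", []string{"example-container"}, image.ID, "", "", nil)
	if err != nil {
		panic(err)
	}
	// Metadata is the small, cached per-item data the README mentions.
	if err := store.SetMetadata(container.ID, "example metadata"); err != nil {
		panic(err)
	}
	fmt.Println(layer.ID, image.ID, container.ID)
}
```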


@@ -47,6 +47,7 @@ import (
rsystem "github.com/opencontainers/runc/libcontainer/system"
"github.com/opencontainers/selinux/go-selinux/label"
"github.com/pkg/errors"
)
var (
@@ -81,7 +82,7 @@ func Init(root string, options []string, uidMaps, gidMaps []idtools.IDMap) (grap
// Try to load the aufs kernel module
if err := supportsAufs(); err != nil {
return nil, graphdriver.ErrNotSupported
return nil, errors.Wrap(graphdriver.ErrNotSupported, "kernel does not support aufs")
}
fsMagic, err := graphdriver.GetFSMagic(root)
@@ -95,7 +96,7 @@ func Init(root string, options []string, uidMaps, gidMaps []idtools.IDMap) (grap
switch fsMagic {
case graphdriver.FsMagicAufs, graphdriver.FsMagicBtrfs, graphdriver.FsMagicEcryptfs:
logrus.Errorf("AUFS is not supported over %s", backingFs)
return nil, graphdriver.ErrIncompatibleFS
return nil, errors.Wrapf(graphdriver.ErrIncompatibleFS, "AUFS is not supported over %q", backingFs)
}
paths := []string{


@@ -2,7 +2,7 @@
package aufs
import "errors"
import "github.com/pkg/errors"
// MsRemount declared to specify a non-linux system mount.
const MsRemount = 0


@@ -29,6 +29,7 @@ import (
"github.com/containers/storage/pkg/parsers"
"github.com/docker/go-units"
"github.com/opencontainers/selinux/go-selinux/label"
"github.com/pkg/errors"
)
func init() {
@@ -55,7 +56,7 @@ func Init(home string, options []string, uidMaps, gidMaps []idtools.IDMap) (grap
}
if fsMagic != graphdriver.FsMagicBtrfs {
return nil, graphdriver.ErrPrerequisites
return nil, errors.Wrapf(graphdriver.ErrPrerequisites, "%q is not on a btrfs filesystem", home)
}
rootUID, rootGID, err := idtools.GetRootUIDGID(uidMaps, gidMaps)


@@ -5,7 +5,6 @@ package devmapper
import (
"bufio"
"encoding/json"
"errors"
"fmt"
"io"
"io/ioutil"
@@ -31,6 +30,7 @@ import (
"github.com/docker/go-units"
"github.com/opencontainers/selinux/go-selinux/label"
"github.com/pkg/errors"
)
var (
@@ -1474,7 +1474,7 @@ func determineDriverCapabilities(version string) error {
versionSplit := strings.Split(version, ".")
major, err := strconv.Atoi(versionSplit[0])
if err != nil {
return graphdriver.ErrNotSupported
return errors.Wrapf(graphdriver.ErrNotSupported, "unable to parse driver major version %q as a number", versionSplit[0])
}
if major > 4 {
@@ -1488,7 +1488,7 @@ func determineDriverCapabilities(version string) error {
minor, err := strconv.Atoi(versionSplit[1])
if err != nil {
return graphdriver.ErrNotSupported
return errors.Wrapf(graphdriver.ErrNotSupported, "unable to parse driver minor version %q as a number", versionSplit[1])
}
/*
@@ -1655,11 +1655,11 @@ func (devices *DeviceSet) initDevmapper(doInit bool) error {
version, err := devicemapper.GetDriverVersion()
if err != nil {
// Can't even get driver version, assume not supported
return graphdriver.ErrNotSupported
return errors.Wrap(graphdriver.ErrNotSupported, "unable to determine version of device mapper")
}
if err := determineDriverCapabilities(version); err != nil {
return graphdriver.ErrNotSupported
return errors.Wrap(graphdriver.ErrNotSupported, "unable to determine device mapper driver capabilities")
}
if err := devices.enableDeferredRemovalDeletion(); err != nil {
@@ -1733,6 +1733,15 @@ func (devices *DeviceSet) initDevmapper(doInit bool) error {
metadataFile *os.File
)
fsMagic, err := graphdriver.GetFSMagic(devices.loopbackDir())
if err != nil {
return err
}
switch fsMagic {
case graphdriver.FsMagicAufs:
return errors.Errorf("devmapper: Loopback devices can not be created on AUFS filesystems")
}
if devices.dataDevice == "" {
// Make sure the sparse images exist in <root>/devicemapper/data
@@ -1959,7 +1968,7 @@ func (devices *DeviceSet) deleteTransaction(info *devInfo, syncDelete bool) erro
// If syncDelete is true, we want to return error. If deferred
// deletion is not enabled, we return an error. If error is
// something other than EBUSY, return an error.
if syncDelete || !devices.deferredDelete || err != devicemapper.ErrBusy {
if syncDelete || !devices.deferredDelete || errors.Cause(err) != devicemapper.ErrBusy {
logrus.Debugf("devmapper: Error deleting device: %s", err)
return err
}
@@ -2114,7 +2123,7 @@ func (devices *DeviceSet) removeDevice(devname string) error {
if err == nil {
break
}
if err != devicemapper.ErrBusy {
if errors.Cause(err) != devicemapper.ErrBusy {
return err
}
@@ -2149,12 +2158,12 @@ func (devices *DeviceSet) cancelDeferredRemoval(info *devInfo) error {
break
}
if err == devicemapper.ErrEnxio {
if errors.Cause(err) == devicemapper.ErrEnxio {
// Device is probably already gone. Return success.
return nil
}
if err != devicemapper.ErrBusy {
if errors.Cause(err) != devicemapper.ErrBusy {
return err
}


@@ -1,13 +1,13 @@
package graphdriver
import (
"errors"
"fmt"
"os"
"path/filepath"
"strings"
"github.com/Sirupsen/logrus"
"github.com/pkg/errors"
"github.com/vbatts/tar-split/tar/storage"
"github.com/containers/storage/pkg/archive"
@@ -144,7 +144,7 @@ func GetDriver(name, home string, options []string, uidMaps, gidMaps []idtools.I
return pluginDriver, nil
}
logrus.Errorf("Failed to GetDriver graph %s %s", name, home)
return nil, ErrNotSupported
return nil, errors.Wrapf(ErrNotSupported, "failed to GetDriver graph %s %s", name, home)
}
// getBuiltinDriver initializes and returns the registered driver, but does not try to load from plugins
@@ -153,7 +153,7 @@ func getBuiltinDriver(name, home string, options []string, uidMaps, gidMaps []id
return initFunc(filepath.Join(home, name), options, uidMaps, gidMaps)
}
logrus.Errorf("Failed to built-in GetDriver graph %s %s", name, home)
return nil, ErrNotSupported
return nil, errors.Wrapf(ErrNotSupported, "failed to built-in GetDriver graph %s %s", name, home)
}
// New creates the driver and initializes it at the specified root.
@@ -228,7 +228,8 @@ func New(root string, name string, options []string, uidMaps, gidMaps []idtools.
// isDriverNotSupported returns true if the error initializing
// the graph driver is a non-supported error.
func isDriverNotSupported(err error) bool {
return err == ErrNotSupported || err == ErrPrerequisites || err == ErrIncompatibleFS
cause := errors.Cause(err)
return cause == ErrNotSupported || cause == ErrPrerequisites || cause == ErrIncompatibleFS
}
// scanPriorDrivers returns an un-ordered scan of directories of prior storage drivers
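The switch to errors.Cause in this hunk matters because wrapping a sentinel error changes its identity; a minimal sketch of the pattern, with a stand-in sentinel rather than the real graphdriver errors:

```go
// Sketch of the sentinel-plus-Cause pattern used throughout these drivers.
package main

import (
	"fmt"

	"github.com/pkg/errors"
)

var ErrNotSupported = errors.New("driver not supported") // stand-in sentinel

func initDriver() error {
	// Wrapping preserves the sentinel as the cause while adding context.
	return errors.Wrap(ErrNotSupported, "kernel does not support this driver")
}

func main() {
	err := initDriver()
	fmt.Println(err == ErrNotSupported)               // false: wrapping changed identity
	fmt.Println(errors.Cause(err) == ErrNotSupported) // true: Cause unwraps it
}
```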


@@ -53,7 +53,7 @@ const (
var (
// Slice of drivers that should be used in an order
priority = []string{
"overlay2",
"overlay",
"devicemapper",
"aufs",
"btrfs",


@@ -20,6 +20,7 @@ import (
"unsafe"
log "github.com/Sirupsen/logrus"
"github.com/pkg/errors"
)
const (
@@ -56,7 +57,7 @@ func Mounted(fsType FsMagic, mountPath string) (bool, error) {
(buf.f_basetype[3] != 0) {
log.Debugf("[zfs] no zfs dataset found for rootdir '%s'", mountPath)
C.free(unsafe.Pointer(buf))
return false, ErrPrerequisites
return false, errors.Wrapf(graphdriver.ErrPrerequisites, "no zfs dataset found for rootdir '%s'", mountPath)
}
C.free(unsafe.Pointer(buf))


@@ -4,7 +4,6 @@ package overlay
import (
"bufio"
"errors"
"fmt"
"io/ioutil"
"os"
@@ -27,6 +26,7 @@ import (
"github.com/containers/storage/pkg/parsers/kernel"
"github.com/opencontainers/selinux/go-selinux/label"
"github.com/pkg/errors"
)
var (
@@ -102,7 +102,7 @@ func InitWithName(name, home string, options []string, uidMaps, gidMaps []idtool
}
if err := supportsOverlay(); err != nil {
return nil, graphdriver.ErrNotSupported
return nil, errors.Wrap(graphdriver.ErrNotSupported, "kernel does not support overlay fs")
}
// require kernel 4.0.0 to ensure multiple lower dirs are supported
@@ -112,7 +112,7 @@ func InitWithName(name, home string, options []string, uidMaps, gidMaps []idtool
}
if kernel.CompareKernelVersion(*v, kernel.VersionInfo{Kernel: 4, Major: 0, Minor: 0}) < 0 {
if !opts.overrideKernelCheck {
return nil, graphdriver.ErrNotSupported
return nil, errors.Wrap(graphdriver.ErrNotSupported, "kernel too old to provide multiple lowers feature for overlay")
}
logrus.Warnf("Using pre-4.0.0 kernel for overlay, mount failures may require kernel update")
}
@@ -129,7 +129,7 @@ func InitWithName(name, home string, options []string, uidMaps, gidMaps []idtool
switch fsMagic {
case graphdriver.FsMagicBtrfs, graphdriver.FsMagicAufs, graphdriver.FsMagicZfs, graphdriver.FsMagicOverlay, graphdriver.FsMagicEcryptfs:
logrus.Errorf("'overlay' is not supported over %s", backingFs)
return nil, graphdriver.ErrIncompatibleFS
return nil, errors.Wrapf(graphdriver.ErrIncompatibleFS, "'overlay' is not supported over %s", backingFs)
}
rootUID, rootGID, err := idtools.GetRootUIDGID(uidMaps, gidMaps)
@@ -231,7 +231,7 @@ func supportsOverlay() error {
}
}
logrus.Error("'overlay' not found as a supported filesystem on this host. Please ensure kernel is new enough and has overlay support loaded.")
return graphdriver.ErrNotSupported
return errors.Wrap(graphdriver.ErrNotSupported, "'overlay' not found as a supported filesystem on this host. Please ensure kernel is new enough and has overlay support loaded.")
}
func (d *Driver) String() string {


@@ -3,10 +3,10 @@
package graphdriver
import (
"errors"
"fmt"
"github.com/containers/storage/pkg/archive"
"github.com/pkg/errors"
)
type graphDriverProxy struct {


@@ -20,6 +20,7 @@ import (
"github.com/containers/storage/pkg/parsers"
zfs "github.com/mistifyio/go-zfs"
"github.com/opencontainers/selinux/go-selinux/label"
"github.com/pkg/errors"
)
type zfsOptions struct {
@@ -47,13 +48,13 @@ func Init(base string, opt []string, uidMaps, gidMaps []idtools.IDMap) (graphdri
if _, err := exec.LookPath("zfs"); err != nil {
logrus.Debugf("[zfs] zfs command is not available: %v", err)
return nil, graphdriver.ErrPrerequisites
return nil, errors.Wrap(graphdriver.ErrPrerequisites, "the 'zfs' command is not available")
}
file, err := os.OpenFile("/dev/zfs", os.O_RDWR, 600)
if err != nil {
logrus.Debugf("[zfs] cannot open /dev/zfs: %v", err)
return nil, graphdriver.ErrPrerequisites
return nil, errors.Wrapf(graphdriver.ErrPrerequisites, "could not open /dev/zfs: %v", err)
}
defer file.Close()


@@ -7,6 +7,7 @@ import (
"github.com/Sirupsen/logrus"
"github.com/containers/storage/drivers"
"github.com/pkg/errors"
)
func checkRootdirFs(rootdir string) error {
@@ -18,7 +19,7 @@ func checkRootdirFs(rootdir string) error {
// on FreeBSD buf.Fstypename contains ['z', 'f', 's', 0 ... ]
if (buf.Fstypename[0] != 122) || (buf.Fstypename[1] != 102) || (buf.Fstypename[2] != 115) || (buf.Fstypename[3] != 0) {
logrus.Debugf("[zfs] no zfs dataset found for rootdir '%s'", rootdir)
return graphdriver.ErrPrerequisites
return errors.Wrapf(graphdriver.ErrPrerequisites, "no zfs dataset found for rootdir '%s'", rootdir)
}
return nil


@@ -6,6 +6,7 @@ import (
"github.com/Sirupsen/logrus"
"github.com/containers/storage/drivers"
"github.com/pkg/errors"
)
func checkRootdirFs(rootdir string) error {
@@ -16,7 +17,7 @@ func checkRootdirFs(rootdir string) error {
if graphdriver.FsMagic(buf.Type) != graphdriver.FsMagicZfs {
logrus.Debugf("[zfs] no zfs dataset found for rootdir '%s'", rootdir)
return graphdriver.ErrPrerequisites
return errors.Wrapf(graphdriver.ErrPrerequisites, "no zfs dataset found for rootdir '%s'", rootdir)
}
return nil


@@ -22,6 +22,7 @@ import (
log "github.com/Sirupsen/logrus"
"github.com/containers/storage/drivers"
"github.com/pkg/errors"
)
func checkRootdirFs(rootdir string) error {
@@ -34,7 +35,7 @@ func checkRootdirFs(rootdir string) error {
(buf.f_basetype[3] != 0) {
log.Debugf("[zfs] no zfs dataset found for rootdir '%s'", rootdir)
C.free(unsafe.Pointer(buf))
return graphdriver.ErrPrerequisites
return errors.Wrapf(graphdriver.ErrPrerequisites, "no zfs dataset found for rootdir '%s'", rootdir)
}
C.free(unsafe.Pointer(buf))


@@ -815,7 +815,7 @@ func (r *layerStore) Diff(from, to string, options *DiffOptions) (io.ReadCloser,
if err != nil {
return nil, ErrLayerUnknown
}
// Default to applying the type of encryption that we noted was used
// Default to applying the type of compression that we noted was used
// for the layerdiff when it was applied.
compression := toLayer.CompressionType
// If a particular compression type (or no compression) was selected,
@@ -823,33 +823,49 @@ func (r *layerStore) Diff(from, to string, options *DiffOptions) (io.ReadCloser,
if options != nil && options.Compression != nil {
compression = *options.Compression
}
maybeCompressReadCloser := func(rc io.ReadCloser) (io.ReadCloser, error) {
// Depending on whether or not compression is desired, return either the
// passed-in ReadCloser, or a new one that provides its readers with a
// compressed version of the data that the original would have provided
// to its readers.
if compression == archive.Uncompressed {
return rc, nil
}
preader, pwriter := io.Pipe()
compressor, err := archive.CompressStream(pwriter, compression)
if err != nil {
rc.Close()
pwriter.Close()
preader.Close()
return nil, err
}
go func() {
defer pwriter.Close()
defer compressor.Close()
defer rc.Close()
io.Copy(compressor, rc)
}()
return preader, nil
}
if from != toLayer.Parent {
diff, err := r.driver.Diff(to, from)
if err == nil && (compression != archive.Uncompressed) {
preader, pwriter := io.Pipe()
compressor, err := archive.CompressStream(pwriter, compression)
if err != nil {
diff.Close()
pwriter.Close()
return nil, err
}
go func() {
io.Copy(compressor, diff)
diff.Close()
compressor.Close()
pwriter.Close()
}()
diff = preader
if err != nil {
return nil, err
}
return diff, err
return maybeCompressReadCloser(diff)
}
tsfile, err := os.Open(r.tspath(to))
if err != nil {
if os.IsNotExist(err) {
return r.driver.Diff(to, from)
if !os.IsNotExist(err) {
return nil, err
}
return nil, err
diff, err := r.driver.Diff(to, from)
if err != nil {
return nil, err
}
return maybeCompressReadCloser(diff)
}
defer tsfile.Close()
@@ -871,33 +887,16 @@ func (r *layerStore) Diff(from, to string, options *DiffOptions) (io.ReadCloser,
return nil, err
}
var stream io.ReadCloser
if compression != archive.Uncompressed {
preader, pwriter := io.Pipe()
compressor, err := archive.CompressStream(pwriter, compression)
if err != nil {
fgetter.Close()
pwriter.Close()
preader.Close()
return nil, err
}
go func() {
asm.WriteOutputTarStream(fgetter, metadata, compressor)
compressor.Close()
pwriter.Close()
}()
stream = preader
} else {
stream = asm.NewOutputTarStream(fgetter, metadata)
}
return ioutils.NewReadCloserWrapper(stream, func() error {
err1 := stream.Close()
tarstream := asm.NewOutputTarStream(fgetter, metadata)
rc := ioutils.NewReadCloserWrapper(tarstream, func() error {
err1 := tarstream.Close()
err2 := fgetter.Close()
if err2 == nil {
return err1
}
return err2
}), nil
})
return maybeCompressReadCloser(rc)
}
func (r *layerStore) DiffSize(from, to string) (size int64, err error) {
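The maybeCompressReadCloser helper introduced above is an instance of the io.Pipe-plus-goroutine wrapping pattern; a self-contained sketch of the same idea, substituting stdlib gzip for the archive package's CompressStream:

```go
// Generic sketch of the pipe-and-goroutine compression wrapper, using
// compress/gzip in place of the archive package's CompressStream.
package main

import (
	"compress/gzip"
	"fmt"
	"io"
	"io/ioutil"
	"strings"
)

func compressReadCloser(rc io.ReadCloser) io.ReadCloser {
	preader, pwriter := io.Pipe()
	go func() {
		compressor := gzip.NewWriter(pwriter)
		// Copy the original stream through the compressor; readers of
		// preader see the compressed bytes.
		_, err := io.Copy(compressor, rc)
		compressor.Close()
		rc.Close()
		pwriter.CloseWithError(err) // nil err means readers just see EOF
	}()
	return preader
}

func main() {
	rc := compressReadCloser(ioutil.NopCloser(strings.NewReader("hello, layer diff")))
	defer rc.Close()
	compressed, _ := ioutil.ReadAll(rc)
	fmt.Println(len(compressed), "compressed bytes")
}
```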


@@ -1,97 +0,0 @@
// +build ignore
// Simple tool to create an archive stream from an old and new directory
//
// By default it will stream the comparison of two temporary directories with junk files
package main
import (
"flag"
"fmt"
"io"
"io/ioutil"
"os"
"path"
"github.com/Sirupsen/logrus"
"github.com/containers/storage/pkg/archive"
)
var (
flDebug = flag.Bool("D", false, "debugging output")
flNewDir = flag.String("newdir", "", "")
flOldDir = flag.String("olddir", "", "")
log = logrus.New()
)
func main() {
flag.Usage = func() {
fmt.Println("Produce a tar from comparing two directory paths. By default a demo tar is created of around 200 files (including hardlinks)")
fmt.Printf("%s [OPTIONS]\n", os.Args[0])
flag.PrintDefaults()
}
flag.Parse()
log.Out = os.Stderr
if (len(os.Getenv("DEBUG")) > 0) || *flDebug {
logrus.SetLevel(logrus.DebugLevel)
}
var newDir, oldDir string
if len(*flNewDir) == 0 {
var err error
newDir, err = ioutil.TempDir("", "storage-test-newDir")
if err != nil {
log.Fatal(err)
}
defer os.RemoveAll(newDir)
if _, err := prepareUntarSourceDirectory(100, newDir, true); err != nil {
log.Fatal(err)
}
} else {
newDir = *flNewDir
}
if len(*flOldDir) == 0 {
oldDir, err := ioutil.TempDir("", "storage-test-oldDir")
if err != nil {
log.Fatal(err)
}
defer os.RemoveAll(oldDir)
} else {
oldDir = *flOldDir
}
changes, err := archive.ChangesDirs(newDir, oldDir)
if err != nil {
log.Fatal(err)
}
a, err := archive.ExportChanges(newDir, changes)
if err != nil {
log.Fatal(err)
}
defer a.Close()
i, err := io.Copy(os.Stdout, a)
if err != nil && err != io.EOF {
log.Fatal(err)
}
fmt.Fprintf(os.Stderr, "wrote archive of %d bytes", i)
}
func prepareUntarSourceDirectory(numberOfFiles int, targetPath string, makeLinks bool) (int, error) {
fileData := []byte("fooo")
for n := 0; n < numberOfFiles; n++ {
fileName := fmt.Sprintf("file-%d", n)
if err := ioutil.WriteFile(path.Join(targetPath, fileName), fileData, 0700); err != nil {
return 0, err
}
if makeLinks {
if err := os.Link(path.Join(targetPath, fileName), path.Join(targetPath, fileName+"-link")); err != nil {
return 0, err
}
}
}
totalSize := numberOfFiles * len(fileData)
return totalSize, nil
}


@@ -38,7 +38,15 @@ func getNextFreeLoopbackIndex() (int, error) {
return index, err
}
func openNextAvailableLoopback(index int, sparseFile *os.File) (loopFile *os.File, err error) {
func openNextAvailableLoopback(index int, sparseName string, sparseFile *os.File) (loopFile *os.File, err error) {
// Read information about the loopback file.
var st syscall.Stat_t
err = syscall.Fstat(int(sparseFile.Fd()), &st)
if err != nil {
logrus.Errorf("Error reading information about loopback file %s: %v", sparseName, err)
return nil, ErrAttachLoopbackDevice
}
// Start looking for a free /dev/loop
for {
target := fmt.Sprintf("/dev/loop%d", index)
@@ -77,6 +85,18 @@ func openNextAvailableLoopback(index int, sparseFile *os.File) (loopFile *os.Fil
// Otherwise, we keep going with the loop
continue
}
// Check if the loopback driver and underlying filesystem agree on the loopback file's
// device and inode numbers.
dev, ino, err := getLoopbackBackingFile(loopFile)
if err != nil {
logrus.Errorf("Error getting loopback backing file: %s", err)
return nil, ErrGetLoopbackBackingFile
}
if dev != st.Dev || ino != st.Ino {
logrus.Errorf("Loopback device and filesystem disagree on device/inode for %q: %#x(%d):%#x(%d) vs %#x(%d):%#x(%d)", sparseName, dev, dev, ino, ino, st.Dev, st.Dev, st.Ino, st.Ino)
}
// In case of success, we finished. Break the loop.
break
}
@@ -110,7 +130,7 @@ func AttachLoopDevice(sparseName string) (loop *os.File, err error) {
}
defer sparseFile.Close()
loopFile, err := openNextAvailableLoopback(startIndex, sparseFile)
loopFile, err := openNextAvailableLoopback(startIndex, sparseName, sparseFile)
if err != nil {
return nil, err
}
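The new sanity check compares the loopback driver's notion of the backing file against an fstat of the file that was opened; the stat half of that comparison, in isolation (Linux-only, with a hypothetical path):

```go
// Sketch of reading the device/inode pair the loopback check compares against.
package main

import (
	"fmt"
	"os"
	"syscall"
)

func main() {
	f, err := os.Open("/tmp/sparse.img") // hypothetical backing file
	if err != nil {
		panic(err)
	}
	defer f.Close()
	var st syscall.Stat_t
	if err := syscall.Fstat(int(f.Fd()), &st); err != nil {
		panic(err)
	}
	// These are the values compared against getLoopbackBackingFile's answer.
	fmt.Printf("dev=%#x ino=%d\n", st.Dev, st.Ino)
}
```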


@@ -15,8 +15,8 @@ import (
var (
// ErrNotFound plugin not found
ErrNotFound = errors.New("plugin not found")
socketsPath = "/run/oci-storage/plugins"
specsPaths = []string{"/etc/oci-storage/plugins", "/usr/lib/oci-storage/plugins"}
socketsPath = "/run/containers/storage/plugins"
specsPaths = []string{"/etc/containers/storage/plugins", "/usr/lib/containers/storage/plugins"}
)
// localRegistry defines a registry that is local (using unix socket).


@@ -3,10 +3,11 @@
//
// Storage discovers plugins by looking for them in the plugin directory whenever
// a user or container tries to use one by name. UNIX domain socket files must
// be located under /run/oci-storage/plugins, whereas spec files can be located
// either under /etc/oci-storage/plugins or /usr/lib/oci-storage/plugins. This
// is handled by the Registry interface, which lets you list all plugins or get
// a plugin by its name if it exists.
// be located under /run/containers/storage/plugins, whereas spec files can be
// located either under /etc/containers/storage/plugins or
// /usr/lib/containers/storage/plugins. This is handled by the Registry
// interface, which lets you list all plugins or get a plugin by its name if it
// exists.
//
// The plugins need to implement an HTTP server and bind this to the UNIX socket
// or the address specified in the spec files.
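For reference, this package inherits Docker's plugin discovery convention: a plugin either creates a .sock file under the sockets path or ships a .spec text file whose entire content is its address. A hypothetical /etc/containers/storage/plugins/myplugin.spec would then contain just:

```
unix:///run/containers/storage/plugins/myplugin.sock
```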


@@ -183,34 +183,48 @@ type Store interface {
// by the Store.
GraphDriver() (drivers.Driver, error)
// LayerStore obtains and returns a handle to the writeable layer store object used by
// the Store.
// LayerStore obtains and returns a handle to the writeable layer store
// object used by the Store. Accessing this store directly will bypass
// locking and synchronization, so use it with care.
LayerStore() (LayerStore, error)
// ROLayerStore obtains additional read/only layer store objects used by
// the Store.
// ROLayerStore obtains additional read/only layer store objects used
// by the Store. Accessing these stores directly will bypass locking
// and synchronization, so use them with care.
ROLayerStores() ([]ROLayerStore, error)
// ImageStore obtains and returns a handle to the writable image store object used by
// the Store.
// ImageStore obtains and returns a handle to the writable image store
// object used by the Store. Accessing this store directly will bypass
// locking and synchronization, so use it with care.
ImageStore() (ImageStore, error)
// ROImageStores obtains additional read/only image store objects used by
// the Store.
// ROImageStores obtains additional read/only image store objects used
// by the Store. Accessing these stores directly will bypass locking
// and synchronization, so use them with care.
ROImageStores() ([]ROImageStore, error)
// ContainerStore obtains and returns a handle to the container store object
// used by the Store.
// ContainerStore obtains and returns a handle to the container store
// object used by the Store. Accessing this store directly will bypass
// locking and synchronization, so use it with care.
ContainerStore() (ContainerStore, error)
// CreateLayer creates a new layer in the underlying storage driver, optionally
// having the specified ID (one will be assigned if none is specified), with
// the specified layer (or no layer) as its parent, and with optional names.
// (The writeable flag is ignored.)
// CreateLayer creates a new layer in the underlying storage driver,
// optionally having the specified ID (one will be assigned if none is
// specified), with the specified layer (or no layer) as its parent,
// and with optional names. (The writeable flag is ignored.)
CreateLayer(id, parent string, names []string, mountLabel string, writeable bool) (*Layer, error)
// PutLayer combines the functions of CreateLayer and ApplyDiff, marking the
// layer for automatic removal if applying the diff fails for any reason.
// PutLayer combines the functions of CreateLayer and ApplyDiff,
// marking the layer for automatic removal if applying the diff fails
// for any reason.
//
// Note that we do some of this work in a child process. The calling
// process's main() function needs to import our pkg/reexec package and
// should begin with something like this in order to allow us to
// properly start that child process:
// if reexec.Init() {
// return
// }
PutLayer(id, parent string, names []string, mountLabel string, writeable bool, diff archive.Reader) (*Layer, int64, error)
// CreateImage creates a new image, optionally with the specified ID
@@ -221,37 +235,39 @@ type Store interface {
// convenience of its caller.
CreateImage(id string, names []string, layer, metadata string, options *ImageOptions) (*Image, error)
// CreateContainer creates a new container, optionally with the specified ID
// (one will be assigned if none is specified), with optional names,
// using the specified image's top layer as the basis for the
// container's layer, and assigning the specified ID to that layer (one
// will be created if none is specified). A container is a layer which
// is associated with additional bookkeeping information which the
// library stores for the convenience of its caller.
// CreateContainer creates a new container, optionally with the
// specified ID (one will be assigned if none is specified), with
// optional names, using the specified image's top layer as the basis
// for the container's layer, and assigning the specified ID to that
// layer (one will be created if none is specified). A container is a
// layer which is associated with additional bookkeeping information
// which the library stores for the convenience of its caller.
CreateContainer(id string, names []string, image, layer, metadata string, options *ContainerOptions) (*Container, error)
// Metadata retrieves the metadata which is associated with a layer, image,
// or container (whichever the passed-in ID refers to).
// Metadata retrieves the metadata which is associated with a layer,
// image, or container (whichever the passed-in ID refers to).
Metadata(id string) (string, error)
// SetMetadata updates the metadata which is associated with a layer, image, or
// container (whichever the passed-in ID refers to) to match the specified
// value. The metadata value can be retrieved at any time using Metadata,
// or using Layer, Image, or Container and reading the object directly.
// SetMetadata updates the metadata which is associated with a layer,
// image, or container (whichever the passed-in ID refers to) to match
// the specified value. The metadata value can be retrieved at any
// time using Metadata, or using Layer, Image, or Container and reading
// the object directly.
SetMetadata(id, metadata string) error
// Exists checks if there is a layer, image, or container which has the
// passed-in ID or name.
Exists(id string) bool
// Status asks for a status report, in the form of key-value pairs, from the
// underlying storage driver. The contents vary from driver to driver.
// Status asks for a status report, in the form of key-value pairs,
// from the underlying storage driver. The contents vary from driver
// to driver.
Status() ([][2]string, error)
// Delete removes the layer, image, or container which has the passed-in ID or
// name. Note that no safety checks are performed, so this can leave images
// with references to layers which do not exist, and layers with references to
// parents which no longer exist.
// Delete removes the layer, image, or container which has the
// passed-in ID or name. Note that no safety checks are performed, so
// this can leave images with references to layers which do not exist,
// and layers with references to parents which no longer exist.
Delete(id string) error
// DeleteLayer attempts to remove the specified layer. If the layer is the
@@ -271,41 +287,59 @@ type Store interface {
// but the list of layers which would be removed is still returned.
DeleteImage(id string, commit bool) (layers []string, err error)
// DeleteContainer removes the specified container and its layer. If there is
// no matching container, or if the container exists but its layer does not, an
// error will be returned.
// DeleteContainer removes the specified container and its layer. If
// there is no matching container, or if the container exists but its
// layer does not, an error will be returned.
DeleteContainer(id string) error
// Wipe removes all known layers, images, and containers.
Wipe() error
// Mount attempts to mount a layer, image, or container for access, and returns
// the pathname if it succeeds.
// Mount attempts to mount a layer, image, or container for access, and
// returns the pathname if it succeeds.
//
// Note that we do some of this work in a child process. The calling
// process's main() function needs to import our pkg/reexec package and
// should begin with something like this in order to allow us to
// properly start that child process:
// if reexec.Init() {
// return
// }
Mount(id, mountLabel string) (string, error)
// Unmount attempts to unmount a layer, image, or container, given an ID, a
// name, or a mount path.
Unmount(id string) error
// Changes returns a summary of the changes which would need to be made to one
// layer to make its contents the same as a second layer. If the first layer
// is not specified, the second layer's parent is assumed. Each Change
// structure contains a Path relative to the layer's root directory, and a Kind
// which is either ChangeAdd, ChangeModify, or ChangeDelete.
// Changes returns a summary of the changes which would need to be made
// to one layer to make its contents the same as a second layer. If
// the first layer is not specified, the second layer's parent is
// assumed. Each Change structure contains a Path relative to the
// layer's root directory, and a Kind which is either ChangeAdd,
// ChangeModify, or ChangeDelete.
Changes(from, to string) ([]archive.Change, error)
// DiffSize returns a count of the size of the tarstream which would specify
// the changes returned by Changes.
// DiffSize returns a count of the size of the tarstream which would
// specify the changes returned by Changes.
DiffSize(from, to string) (int64, error)
// Diff returns the tarstream which would specify the changes returned by
// Changes. If options are passed in, they can override default behaviors.
// Diff returns the tarstream which would specify the changes returned
// by Changes. If options are passed in, they can override default
// behaviors.
Diff(from, to string, options *DiffOptions) (io.ReadCloser, error)
// ApplyDiff applies a tarstream to a layer. Information about the tarstream
// is cached with the layer. Typically, a layer which is populated using a
// tarstream will be expected to not be modified in any other way, either
// before or after the diff is applied.
// ApplyDiff applies a tarstream to a layer. Information about the
// tarstream is cached with the layer. Typically, a layer which is
// populated using a tarstream will be expected to not be modified in
// any other way, either before or after the diff is applied.
//
// Note that we do some of this work in a child process. The calling
// process's main() function needs to import our pkg/reexec package and
// should begin with something like this in order to allow us to
// properly start that child process:
// if reexec.Init() {
// return
// }
ApplyDiff(to string, diff archive.Reader) (int64, error)
// LayersByCompressedDigest returns a slice of the layers with the
@@ -335,12 +369,12 @@ type Store interface {
// SetNames changes the list of names for a layer, image, or container.
SetNames(id string, names []string) error
// ListImageBigData retrieves a list of the (possibly large) chunks of named
// data associated with an image.
// ListImageBigData retrieves a list of the (possibly large) chunks of
// named data associated with an image.
ListImageBigData(id string) ([]string, error)
// ImageBigData retrieves a (possibly large) chunk of named data associated
// with an image.
// ImageBigData retrieves a (possibly large) chunk of named data
// associated with an image.
ImageBigData(id, key string) ([]byte, error)
// ImageBigDataSize retrieves the size of a (possibly large) chunk
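Several methods above repeat the reexec requirement; a minimal calling program that follows it might look like this (the import path is the library's pkg/reexec package mentioned in those comments):

```go
// Minimal sketch of the reexec handshake the PutLayer/Mount/ApplyDiff
// comments above require of the calling process.
package main

import (
	"fmt"

	"github.com/containers/storage/pkg/reexec"
)

func main() {
	// If this invocation is actually a re-executed child helper, run it
	// and exit instead of continuing with normal startup.
	if reexec.Init() {
		return
	}
	fmt.Println("normal startup continues here")
}
```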

vendor/github.com/containers/storage/vendor.conf generated vendored Normal file

@@ -0,0 +1,20 @@
github.com/BurntSushi/toml master
github.com/Microsoft/go-winio 307e919c663683a9000576fdc855acaf9534c165
github.com/Microsoft/hcsshim 0f615c198a84e0344b4ed49c464d8833d4648dfc
github.com/Sirupsen/logrus 61e43dc76f7ee59a82bdf3d71033dc12bea4c77d
github.com/docker/engine-api 4290f40c056686fcaa5c9caf02eac1dde9315adf
github.com/docker/go-connections eb315e36415380e7c2fdee175262560ff42359da
github.com/docker/go-units 0dadbb0345b35ec7ef35e228dabb8de89a65bf52
github.com/go-check/check 20d25e2804050c1cd24a7eea1e7a6447dd0e74ec
github.com/mattn/go-shellwords 753a2322a99f87c0eff284980e77f53041555bc6
github.com/mistifyio/go-zfs c0224de804d438efd11ea6e52ada8014537d6062
github.com/opencontainers/go-digest master
github.com/opencontainers/runc 6c22e77604689db8725fa866f0f2ec0b3e8c3a07
github.com/opencontainers/selinux ba1aefe8057f1d0cfb8e88d0ec1dc85925ef987d
github.com/pborman/uuid 1b00554d822231195d1babd97ff4a781231955c9
github.com/pkg/errors master
github.com/tchap/go-patricia v2.2.6
github.com/vbatts/tar-split bd4c5d64c3e9297f410025a3b1bd0c58f659e721
github.com/vdemeester/shakers 24d7f1d6a71aa5d9cbe7390e4afb66b7eef9e1b3
golang.org/x/net f2499483f923065a842d38eb4c7f1927e6fc6e6d
golang.org/x/sys d75a52659825e75fff6158388dddc6a5b04f9ba5

vendor/github.com/opencontainers/image-spec/README.md generated vendored Normal file

@@ -0,0 +1,167 @@
# OCI Image Format Specification
<div>
<a href="https://travis-ci.org/opencontainers/image-spec">
<img src="https://travis-ci.org/opencontainers/image-spec.svg?branch=master"></img>
</a>
</div>
The OCI Image Format project creates and maintains the software shipping container image format spec (OCI Image Format).
**[The specification can be found here](spec.md).**
This repository also provides [Go types](specs-go), [intra-blob validation tooling, and JSON Schema](schema).
The Go types and validation should be compatible with the current Go release; earlier Go releases are not supported.
Additional documentation about how this group operates:
- [Code of Conduct](https://github.com/opencontainers/tob/blob/d2f9d68c1332870e40693fe077d311e0742bc73d/code-of-conduct.md)
- [Roadmap](#roadmap)
- [Releases](RELEASES.md)
- [Project Documentation](project.md)
The _optional_ and _base_ layers of all OCI projects are tracked in the [OCI Scope Table](https://www.opencontainers.org/about/oci-scope-table).
## Running an OCI Image
The OCI Image Format partner project is the [OCI Runtime Spec project](https://github.com/opencontainers/runtime-spec).
The Runtime Specification outlines how to run a "[filesystem bundle](https://github.com/opencontainers/runtime-spec/blob/master/bundle.md)" that is unpacked on disk.
At a high-level an OCI implementation would download an OCI Image then unpack that image into an OCI Runtime filesystem bundle.
At this point the OCI Runtime Bundle would be run by an OCI Runtime.
This entire workflow supports the UX that users have come to expect from container engines like Docker and rkt: primarily, the ability to run an image with no additional arguments:
* docker run example.com/org/app:v1.0.0
* rkt run example.com/org/app,version=v1.0.0
To support this UX the OCI Image Format contains sufficient information to launch the application on the target platform (e.g. command, arguments, environment variables, etc).
## FAQ
**Q: Why doesn't this project mention distribution?**
A: Distribution, for example using HTTP as both Docker v2.2 and AppC do today, is currently out of scope on the [OCI Scope Table](https://www.opencontainers.org/about/oci-scope-table).
There has been [some discussion on the TOB mailing list](https://groups.google.com/a/opencontainers.org/d/msg/tob/A3JnmI-D-6Y/tLuptPDHAgAJ) to make distribution an optional layer, but this topic is a work in progress.
**Q: What happens to AppC or Docker Image Formats?**
A: Existing formats can continue to be a proving ground for technologies, as needed.
The OCI Image Format project strives to provide a dependable open specification that can be shared between different tools and be evolved for years or decades of compatibility; as the deb and rpm format have.
Find more [FAQ on the OCI site](https://www.opencontainers.org/faq).
## Roadmap
The [GitHub milestones](https://github.com/opencontainers/image-spec/milestones) lay out the path to the OCI v1.0.0 release in late 2016.
# Contributing
Development happens on GitHub for the spec.
Issues are used for bugs and actionable items and longer discussions can happen on the [mailing list](#mailing-list).
The specification and code is licensed under the Apache 2.0 license found in the `LICENSE` file of this repository.
## Discuss your design
The project welcomes submissions, but please let everyone know what you are working on.
Before undertaking a nontrivial change to this specification, send mail to the [mailing list](#mailing-list) to discuss what you plan to do.
This gives everyone a chance to validate the design, helps prevent duplication of effort, and ensures that the idea fits.
It also guarantees that the design is sound before code is written; a GitHub pull-request is not the place for high-level discussions.
Typos and grammatical errors can go straight to a pull-request.
When in doubt, start on the [mailing-list](#mailing-list).
## Weekly Call
The contributors and maintainers of all OCI projects have a weekly meeting Wednesdays at 2:00 PM (USA Pacific).
Everyone is welcome to participate via [UberConference web][UberConference] or audio-only: +1-415-968-0849 (no PIN needed).
An initial agenda will be posted to the [mailing list](#mailing-list) earlier in the week, and everyone is welcome to propose additional topics or suggest other agenda alterations there.
Minutes are posted to the [mailing list](#mailing-list) and minutes from past calls are archived [here][minutes].
## Mailing List
You can subscribe and join the mailing list on [Google Groups](https://groups.google.com/a/opencontainers.org/forum/#!forum/dev).
## IRC
OCI discussion happens on #opencontainers on Freenode ([logs][irc-logs]).
## Markdown style
To keep consistency throughout the Markdown files in the Open Container spec all files should be formatted one sentence per line.
This fixes two things: it makes diffing easier with git and it resolves fights about line wrapping length.
For example, this paragraph will span three lines in the Markdown source.
## Git commit
### Sign your work
The sign-off is a simple line at the end of the explanation for the patch, which certifies that you wrote it or otherwise have the right to pass it on as an open-source patch.
The rules are pretty simple: if you can certify the below (from [developercertificate.org](http://developercertificate.org/)):
```
Developer Certificate of Origin
Version 1.1
Copyright (C) 2004, 2006 The Linux Foundation and its contributors.
660 York Street, Suite 102,
San Francisco, CA 94110 USA
Everyone is permitted to copy and distribute verbatim copies of this
license document, but changing it is not allowed.
Developer's Certificate of Origin 1.1
By making a contribution to this project, I certify that:
(a) The contribution was created in whole or in part by me and I
have the right to submit it under the open source license
indicated in the file; or
(b) The contribution is based upon previous work that, to the best
of my knowledge, is covered under an appropriate open source
license and I have the right under that license to submit that
work with modifications, whether created in whole or in part
by me, under the same open source license (unless I am
permitted to submit under a different license), as indicated
in the file; or
(c) The contribution was provided directly to me by some other
person who certified (a), (b) or (c) and I have not modified
it.
(d) I understand and agree that this project and the contribution
are public and that a record of the contribution (including all
personal information I submit with it, including my sign-off) is
maintained indefinitely and may be redistributed consistent with
this project or the open source license(s) involved.
```
then you just add a line to every git commit message:
Signed-off-by: Joe Smith <joe@gmail.com>
using your real name (sorry, no pseudonyms or anonymous contributions.)
You can add the sign off when creating the git commit via `git commit -s`.
### Commit Style
Simple house-keeping for clean git history.
Read more on [How to Write a Git Commit Message](http://chris.beams.io/posts/git-commit/) or the Discussion section of [`git-commit(1)`](http://git-scm.com/docs/git-commit).
1. Separate the subject from body with a blank line
2. Limit the subject line to 50 characters
3. Capitalize the subject line
4. Do not end the subject line with a period
5. Use the imperative mood in the subject line
6. Wrap the body at 72 characters
7. Use the body to explain what and why vs. how
* If there was important/useful/essential conversation or information, copy or include a reference
8. When possible, one keyword to scope the change in the subject (i.e. "README: ...", "runtime: ...")
[UberConference]: https://www.uberconference.com/opencontainers
[irc-logs]: http://ircbot.wl.linuxfoundation.org/eavesdrop/%23opencontainers/
[minutes]: http://ircbot.wl.linuxfoundation.org/meetings/opencontainers/


@@ -0,0 +1,56 @@
// Copyright 2016 The Linux Foundation
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package v1
const (
// AnnotationCreated is the annotation key for the date and time on which the image was built (date-time string as defined by RFC 3339).
AnnotationCreated = "org.opencontainers.image.created"
// AnnotationAuthors is the annotation key for the contact details of the people or organization responsible for the image (freeform string).
AnnotationAuthors = "org.opencontainers.image.authors"
// AnnotationURL is the annotation key for the URL to find more information on the image.
AnnotationURL = "org.opencontainers.image.url"
// AnnotationDocumentation is the annotation key for the URL to get documentation on the image.
AnnotationDocumentation = "org.opencontainers.image.documentation"
// AnnotationSource is the annotation key for the URL to get source code for building the image.
AnnotationSource = "org.opencontainers.image.source"
// AnnotationVersion is the annotation key for the version of the packaged software.
// The version MAY match a label or tag in the source code repository.
// The version MAY be Semantic versioning-compatible.
AnnotationVersion = "org.opencontainers.image.version"
// AnnotationRevision is the annotation key for the source control revision identifier for the packaged software.
AnnotationRevision = "org.opencontainers.image.revision"
// AnnotationVendor is the annotation key for the name of the distributing entity, organization or individual.
AnnotationVendor = "org.opencontainers.image.vendor"
// AnnotationLicenses is the annotation key for the license(s) under which contained software is distributed as an SPDX License Expression.
AnnotationLicenses = "org.opencontainers.image.licenses"
// AnnotationRefName is the annotation key for the name of the reference for a target.
// SHOULD only be considered valid when on descriptors on `index.json` within image layout.
AnnotationRefName = "org.opencontainers.image.ref.name"
// AnnotationTitle is the annotation key for the human-readable title of the image.
AnnotationTitle = "org.opencontainers.image.title"
// AnnotationDescription is the annotation key for the human-readable description of the software packaged in the image.
AnnotationDescription = "org.opencontainers.image.description"
)
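These constants are plain map keys; as a quick, hypothetical illustration of attaching them to a manifest or descriptor:

```go
// Hypothetical use of the annotation keys defined above.
package main

import (
	"fmt"

	v1 "github.com/opencontainers/image-spec/specs-go/v1"
)

func main() {
	annotations := map[string]string{
		v1.AnnotationCreated: "2017-09-22T10:05:03Z",
		v1.AnnotationAuthors: "Example Maintainers <maint@example.com>",
		v1.AnnotationVersion: "0.4",
	}
	for k, val := range annotations {
		fmt.Printf("%s=%s\n", k, val)
	}
}
```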


@@ -37,7 +37,7 @@ type ImageConfig struct {
// Cmd defines the default arguments to the entrypoint of the container.
Cmd []string `json:"Cmd,omitempty"`
// Volumes is a set of directories which should be created as data volumes in a container running this image.
// Volumes is a set of directories describing where the process is likely write data specific to a container instance.
Volumes map[string]struct{} `json:"Volumes,omitempty"`
// WorkingDir sets the current working directory of the entrypoint process in the container.


@@ -25,7 +25,7 @@ const (
VersionPatch = 0
// VersionDev indicates development branch. Releases will be empty string.
VersionDev = "-rc6-dev"
VersionDev = ""
)
// Version is the specification version that the package types support.

vendor/github.com/opencontainers/runtime-spec/README.md generated vendored Normal file

@@ -0,0 +1,158 @@
# Open Container Initiative Runtime Specification
The [Open Container Initiative][oci] develops specifications for standards on Operating System process and application containers.
The specification can be found [here](spec.md).
## Table of Contents
Additional documentation about how this group operates:
- [Code of Conduct][code-of-conduct]
- [Style and Conventions](style.md)
- [Implementations](implementations.md)
- [Releases](RELEASES.md)
- [project](project.md)
- [charter][charter]
## Use Cases
To provide context for users the following section gives example use cases for each part of the spec.
### Application Bundle Builders
Application bundle builders can create a [bundle](bundle.md) directory that includes all of the files required for launching an application as a container.
The bundle contains an OCI [configuration file](config.md) where the builder can specify host-independent details such as [which executable to launch](config.md#process) and host-specific settings such as [mount](config.md#mounts) locations, [hook](config.md#hooks) paths, Linux [namespaces](config-linux.md#namespaces) and [cgroups](config-linux.md#control-groups).
Because the configuration includes host-specific settings, application bundle directories copied between two hosts may require configuration adjustments.
### Hook Developers
[Hook](config.md#hooks) developers can extend the functionality of an OCI-compliant runtime by hooking into a container's lifecycle with an external application.
Example use cases include sophisticated network configuration, volume garbage collection, etc.
### Runtime Developers
Runtime developers can build runtime implementations that run OCI-compliant bundles and container configuration, containing low-level OS and host-specific details, on a particular platform.
## Contributing
Development happens on GitHub for the spec.
Issues are used for bugs and actionable items and longer discussions can happen on the [mailing list](#mailing-list).
The specification and code is licensed under the Apache 2.0 license found in the [LICENSE](./LICENSE) file.
### Discuss your design
The project welcomes submissions, but please let everyone know what you are working on.
Before undertaking a nontrivial change to this specification, send mail to the [mailing list](#mailing-list) to discuss what you plan to do.
This gives everyone a chance to validate the design, helps prevent duplication of effort, and ensures that the idea fits.
It also guarantees that the design is sound before code is written; a GitHub pull-request is not the place for high-level discussions.
Typos and grammatical errors can go straight to a pull-request.
When in doubt, start on the [mailing-list](#mailing-list).
### Weekly Call
The contributors and maintainers of all OCI projects have a weekly meeting on Wednesdays at:
* 8:00 AM (USA Pacific), during [odd weeks][iso-week].
* 2:00 PM (USA Pacific), during [even weeks][iso-week].
There is an [iCalendar][rfc5545] format for the meetings [here](meeting.ics).
Everyone is welcome to participate via [UberConference web][uberconference] or audio-only: +1 415 968 0849 (no PIN needed).
An initial agenda will be posted to the [mailing list](#mailing-list) earlier in the week, and everyone is welcome to propose additional topics or suggest other agenda alterations there.
Minutes are posted to the [mailing list](#mailing-list) and minutes from past calls are archived [here][minutes], with minutes from especially old meetings (September 2015 and earlier) archived [here][runtime-wiki].
### Mailing List
You can subscribe and join the mailing list on [Google Groups][dev-list].
### IRC
OCI discussion happens on #opencontainers on Freenode ([logs][irc-logs]).
### Git commit
#### Sign your work
The sign-off is a simple line at the end of the explanation for the patch, which certifies that you wrote it or otherwise have the right to pass it on as an open-source patch.
The rules are pretty simple: if you can certify the below (from http://developercertificate.org):
```
Developer Certificate of Origin
Version 1.1
Copyright (C) 2004, 2006 The Linux Foundation and its contributors.
660 York Street, Suite 102,
San Francisco, CA 94110 USA
Everyone is permitted to copy and distribute verbatim copies of this
license document, but changing it is not allowed.
Developer's Certificate of Origin 1.1
By making a contribution to this project, I certify that:
(a) The contribution was created in whole or in part by me and I
have the right to submit it under the open source license
indicated in the file; or
(b) The contribution is based upon previous work that, to the best
of my knowledge, is covered under an appropriate open source
license and I have the right under that license to submit that
work with modifications, whether created in whole or in part
by me, under the same open source license (unless I am
permitted to submit under a different license), as indicated
in the file; or
(c) The contribution was provided directly to me by some other
person who certified (a), (b) or (c) and I have not modified
it.
(d) I understand and agree that this project and the contribution
are public and that a record of the contribution (including all
personal information I submit with it, including my sign-off) is
maintained indefinitely and may be redistributed consistent with
this project or the open source license(s) involved.
```
then you just add a line to every git commit message:
Signed-off-by: Joe Smith <joe@gmail.com>
using your real name (sorry, no pseudonyms or anonymous contributions.)
You can add the sign off when creating the git commit via `git commit -s`.
#### Commit Style
Simple house-keeping for clean git history.
Read more on [How to Write a Git Commit Message][how-to-git-commit] or the Discussion section of [git-commit(1)][git-commit.1].
1. Separate the subject from body with a blank line
2. Limit the subject line to 50 characters
3. Capitalize the subject line
4. Do not end the subject line with a period
5. Use the imperative mood in the subject line
6. Wrap the body at 72 characters
7. Use the body to explain what and why vs. how
* If there was important/useful/essential conversation or information, copy or include a reference
8. When possible, one keyword to scope the change in the subject (i.e. "README: ...", "runtime: ...")
[charter]: https://www.opencontainers.org/about/governance
[code-of-conduct]: https://github.com/opencontainers/tob/blob/master/code-of-conduct.md
[dev-list]: https://groups.google.com/a/opencontainers.org/forum/#!forum/dev
[how-to-git-commit]: http://chris.beams.io/posts/git-commit
[irc-logs]: http://ircbot.wl.linuxfoundation.org/eavesdrop/%23opencontainers/
[iso-week]: https://en.wikipedia.org/wiki/ISO_week_date#Calculating_the_week_number_of_a_given_date
[minutes]: http://ircbot.wl.linuxfoundation.org/meetings/opencontainers/
[oci]: https://www.opencontainers.org
[rfc5545]: https://tools.ietf.org/html/rfc5545
[runtime-wiki]: https://github.com/opencontainers/runtime-spec/wiki
[uberconference]: https://www.uberconference.com/opencontainers
[git-commit.1]: http://git-scm.com/docs/git-commit


@@ -6,26 +6,24 @@ import "os"
type Spec struct {
// Version of the Open Container Runtime Specification with which the bundle complies.
Version string `json:"ociVersion"`
// Platform specifies the configuration's target platform.
Platform Platform `json:"platform"`
// Process configures the container process.
Process Process `json:"process"`
Process *Process `json:"process,omitempty"`
// Root configures the container's root filesystem.
Root Root `json:"root"`
Root *Root `json:"root,omitempty"`
// Hostname configures the container's hostname.
Hostname string `json:"hostname,omitempty"`
// Mounts configures additional mounts (on top of Root).
Mounts []Mount `json:"mounts,omitempty"`
// Hooks configures callbacks for container lifecycle events.
Hooks *Hooks `json:"hooks,omitempty"`
Hooks *Hooks `json:"hooks,omitempty" platform:"linux,solaris"`
// Annotations contains arbitrary metadata for the container.
Annotations map[string]string `json:"annotations,omitempty"`
// Linux is platform specific configuration for Linux based containers.
// Linux is platform-specific configuration for Linux based containers.
Linux *Linux `json:"linux,omitempty" platform:"linux"`
// Solaris is platform specific configuration for Solaris containers.
// Solaris is platform-specific configuration for Solaris based containers.
Solaris *Solaris `json:"solaris,omitempty" platform:"solaris"`
// Windows is platform specific configuration for Windows based containers, including Hyper-V containers.
// Windows is platform-specific configuration for Windows based containers.
Windows *Windows `json:"windows,omitempty" platform:"windows"`
}
@@ -34,7 +32,7 @@ type Process struct {
// Terminal creates an interactive terminal for the container.
Terminal bool `json:"terminal,omitempty"`
// ConsoleSize specifies the size of the console.
ConsoleSize Box `json:"consoleSize,omitempty"`
ConsoleSize *Box `json:"consoleSize,omitempty"`
// User specifies user information for the process.
User User `json:"user"`
// Args specifies the binary and arguments for the application to execute.
@@ -47,11 +45,13 @@ type Process struct {
// Capabilities are Linux capabilities that are kept for the process.
Capabilities *LinuxCapabilities `json:"capabilities,omitempty" platform:"linux"`
// Rlimits specifies rlimit options to apply to the process.
Rlimits []LinuxRlimit `json:"rlimits,omitempty" platform:"linux"`
Rlimits []POSIXRlimit `json:"rlimits,omitempty" platform:"linux,solaris"`
// NoNewPrivileges controls whether additional privileges could be gained by processes in the container.
NoNewPrivileges bool `json:"noNewPrivileges,omitempty" platform:"linux"`
// ApparmorProfile specifies the apparmor profile for the container.
ApparmorProfile string `json:"apparmorProfile,omitempty" platform:"linux"`
// Specify an oom_score_adj for the container.
OOMScoreAdj *int `json:"oomScoreAdj,omitempty" platform:"linux"`
// SelinuxLabel specifies the selinux context that the container process is run as.
SelinuxLabel string `json:"selinuxLabel,omitempty" platform:"linux"`
}
@@ -99,23 +99,13 @@ type Root struct {
Readonly bool `json:"readonly,omitempty"`
}
// Platform specifies OS and arch information for the host system that the container
// is created for.
type Platform struct {
// OS is the operating system.
OS string `json:"os"`
// Arch is the architecture
Arch string `json:"arch"`
}
// Mount specifies a mount for a container.
type Mount struct {
// Destination is the path where the mount will be placed relative to the container's root. The path and child directories MUST exist, a runtime MUST NOT create directories automatically to a mount point.
// Destination is the absolute path where the mount will be placed in the container.
Destination string `json:"destination"`
// Type specifies the mount kind.
Type string `json:"type,omitempty"`
// Source specifies the source path of the mount. In the case of bind mounts on
// Linux based systems this would be the file on the host.
Type string `json:"type,omitempty" platform:"linux,solaris"`
// Source specifies the source path of the mount.
Source string `json:"source,omitempty"`
// Options are fstab style mount options.
Options []string `json:"options,omitempty"`
@@ -132,7 +122,6 @@ type Hook struct {
// Hooks for container setup and teardown
type Hooks struct {
// Prestart is a list of hooks to be run before the container process is executed.
// On Linux, they are run after the container namespaces are created.
Prestart []Hook `json:"prestart,omitempty"`
// Poststart is a list of hooks to be run after the container process is started.
Poststart []Hook `json:"poststart,omitempty"`
@@ -140,11 +129,11 @@ type Hooks struct {
Poststop []Hook `json:"poststop,omitempty"`
}
// Linux contains platform specific configuration for Linux based containers.
// Linux contains platform-specific configuration for Linux based containers.
type Linux struct {
// UIDMapping specifies user mappings for supporting user namespaces on Linux.
// UIDMapping specifies user mappings for supporting user namespaces.
UIDMappings []LinuxIDMapping `json:"uidMappings,omitempty"`
// GIDMapping specifies group mappings for supporting user namespaces on Linux.
// GIDMapping specifies group mappings for supporting user namespaces.
GIDMappings []LinuxIDMapping `json:"gidMappings,omitempty"`
// Sysctl are a set of key value pairs that are set for the container on start
Sysctl map[string]string `json:"sysctl,omitempty"`
@@ -169,11 +158,14 @@ type Linux struct {
ReadonlyPaths []string `json:"readonlyPaths,omitempty"`
// MountLabel specifies the selinux context for the mounts in the container.
MountLabel string `json:"mountLabel,omitempty"`
// IntelRdt contains Intel Resource Director Technology (RDT) information
// for handling resource constraints (e.g., L3 cache) for the container
IntelRdt *LinuxIntelRdt `json:"intelRdt,omitempty"`
}
// LinuxNamespace is the configuration for a Linux namespace
type LinuxNamespace struct {
// Type is the type of Linux namespace
// Type is the type of namespace
Type LinuxNamespaceType `json:"type"`
// Path is a path to an existing namespace persisted on disk that can be joined
// and is of the same type
@@ -210,8 +202,8 @@ type LinuxIDMapping struct {
Size uint32 `json:"size"`
}
// LinuxRlimit type and restrictions
type LinuxRlimit struct {
// POSIXRlimit type and restrictions
type POSIXRlimit struct {
// Type of the rlimit to set
Type string `json:"type"`
// Hard is the hard limit for the specified type
@@ -244,12 +236,12 @@ type linuxBlockIODevice struct {
Minor int64 `json:"minor"`
}
// LinuxWeightDevice struct holds a `major:minor weight` pair for blkioWeightDevice
// LinuxWeightDevice struct holds a `major:minor weight` pair for weightDevice
type LinuxWeightDevice struct {
linuxBlockIODevice
// Weight is the bandwidth rate for the device, range is from 10 to 1000
// Weight is the bandwidth rate for the device.
Weight *uint16 `json:"weight,omitempty"`
// LeafWeight is the bandwidth rate for the device while competing with the cgroup's child cgroups, range is from 10 to 1000, CFQ scheduler only
// LeafWeight is the bandwidth rate for the device while competing with the cgroup's child cgroups, CFQ scheduler only
LeafWeight *uint16 `json:"leafWeight,omitempty"`
}
@@ -262,36 +254,38 @@ type LinuxThrottleDevice struct {
// LinuxBlockIO for Linux cgroup 'blkio' resource management
type LinuxBlockIO struct {
// Specifies per cgroup weight, range is from 10 to 1000
Weight *uint16 `json:"blkioWeight,omitempty"`
// Specifies tasks' weight in the given cgroup while competing with the cgroup's child cgroups, range is from 10 to 1000, CFQ scheduler only
LeafWeight *uint16 `json:"blkioLeafWeight,omitempty"`
// Specifies per cgroup weight
Weight *uint16 `json:"weight,omitempty"`
// Specifies tasks' weight in the given cgroup while competing with the cgroup's child cgroups, CFQ scheduler only
LeafWeight *uint16 `json:"leafWeight,omitempty"`
// Weight per cgroup per device, can override BlkioWeight
WeightDevice []LinuxWeightDevice `json:"blkioWeightDevice,omitempty"`
WeightDevice []LinuxWeightDevice `json:"weightDevice,omitempty"`
// IO read rate limit per cgroup per device, bytes per second
ThrottleReadBpsDevice []LinuxThrottleDevice `json:"blkioThrottleReadBpsDevice,omitempty"`
ThrottleReadBpsDevice []LinuxThrottleDevice `json:"throttleReadBpsDevice,omitempty"`
// IO write rate limit per cgroup per device, bytes per second
ThrottleWriteBpsDevice []LinuxThrottleDevice `json:"blkioThrottleWriteBpsDevice,omitempty"`
ThrottleWriteBpsDevice []LinuxThrottleDevice `json:"throttleWriteBpsDevice,omitempty"`
// IO read rate limit per cgroup per device, IO per second
ThrottleReadIOPSDevice []LinuxThrottleDevice `json:"blkioThrottleReadIOPSDevice,omitempty"`
ThrottleReadIOPSDevice []LinuxThrottleDevice `json:"throttleReadIOPSDevice,omitempty"`
// IO write rate limit per cgroup per device, IO per second
ThrottleWriteIOPSDevice []LinuxThrottleDevice `json:"blkioThrottleWriteIOPSDevice,omitempty"`
ThrottleWriteIOPSDevice []LinuxThrottleDevice `json:"throttleWriteIOPSDevice,omitempty"`
}
// LinuxMemory for Linux cgroup 'memory' resource management
type LinuxMemory struct {
// Memory limit (in bytes).
Limit *uint64 `json:"limit,omitempty"`
Limit *int64 `json:"limit,omitempty"`
// Memory reservation or soft_limit (in bytes).
Reservation *uint64 `json:"reservation,omitempty"`
Reservation *int64 `json:"reservation,omitempty"`
// Total memory limit (memory + swap).
Swap *uint64 `json:"swap,omitempty"`
Swap *int64 `json:"swap,omitempty"`
// Kernel memory limit (in bytes).
Kernel *uint64 `json:"kernel,omitempty"`
Kernel *int64 `json:"kernel,omitempty"`
// Kernel memory limit for tcp (in bytes)
KernelTCP *uint64 `json:"kernelTCP,omitempty"`
// How aggressive the kernel will swap memory pages. Range from 0 to 100.
KernelTCP *int64 `json:"kernelTCP,omitempty"`
// How aggressive the kernel will swap memory pages.
Swappiness *uint64 `json:"swappiness,omitempty"`
// DisableOOMKiller disables the OOM killer for out of memory conditions
DisableOOMKiller *bool `json:"disableOOMKiller,omitempty"`
}
// LinuxCPU for Linux cgroup 'cpu' resource management
@@ -330,10 +324,6 @@ type LinuxNetwork struct {
type LinuxResources struct {
// Devices configures the device whitelist.
Devices []LinuxDeviceCgroup `json:"devices,omitempty"`
// DisableOOMKiller disables the OOM killer for out of memory conditions
DisableOOMKiller *bool `json:"disableOOMKiller,omitempty"`
// Specify an oom_score_adj for the container.
OOMScoreAdj *int `json:"oomScoreAdj,omitempty"`
// Memory restriction configuration
Memory *LinuxMemory `json:"memory,omitempty"`
// CPU resource restriction configuration
@@ -380,7 +370,7 @@ type LinuxDeviceCgroup struct {
Access string `json:"access,omitempty"`
}
// Solaris contains platform specific configuration for Solaris application containers.
// Solaris contains platform-specific configuration for Solaris application containers.
type Solaris struct {
// SMF FMRI which should go "online" before we start the container process.
Milestone string `json:"milestone,omitempty"`
@@ -427,8 +417,20 @@ type SolarisAnet struct {
// Windows defines the runtime configuration for Windows based containers, including Hyper-V containers.
type Windows struct {
// LayerFolders contains a list of absolute paths to directories containing image layers.
LayerFolders []string `json:"layerFolders"`
// Resources contains information for handling resource constraints for the container.
Resources *WindowsResources `json:"resources,omitempty"`
// CredentialSpec contains a JSON object describing a group Managed Service Account (gMSA) specification.
CredentialSpec interface{} `json:"credentialSpec,omitempty"`
// Servicing indicates if the container is being started in a mode to apply a Windows Update servicing operation.
Servicing bool `json:"servicing,omitempty"`
// IgnoreFlushesDuringBoot indicates if the container is being started in a mode where disk writes are not flushed during its boot process.
IgnoreFlushesDuringBoot bool `json:"ignoreFlushesDuringBoot,omitempty"`
// HyperV contains information for running a container with Hyper-V isolation.
HyperV *WindowsHyperV `json:"hyperv,omitempty"`
// Network restriction configuration.
Network *WindowsNetwork `json:"network,omitempty"`
}
// WindowsResources has container runtime resource constraints for containers running on Windows.
@@ -439,26 +441,22 @@ type WindowsResources struct {
CPU *WindowsCPUResources `json:"cpu,omitempty"`
// Storage restriction configuration.
Storage *WindowsStorageResources `json:"storage,omitempty"`
// Network restriction configuration.
Network *WindowsNetworkResources `json:"network,omitempty"`
}
// WindowsMemoryResources contains memory resource management settings.
type WindowsMemoryResources struct {
// Memory limit in bytes.
Limit *uint64 `json:"limit,omitempty"`
// Memory reservation in bytes.
Reservation *uint64 `json:"reservation,omitempty"`
}
// WindowsCPUResources contains CPU resource management settings.
type WindowsCPUResources struct {
// Number of CPUs available to the container.
Count *uint64 `json:"count,omitempty"`
// CPU shares (relative weight to other containers with cpu shares). Range is from 1 to 10000.
// CPU shares (relative weight to other containers with cpu shares).
Shares *uint16 `json:"shares,omitempty"`
// Percent of available CPUs usable by the container.
Percent *uint8 `json:"percent,omitempty"`
// Specifies the portion of processor cycles that this container can use as a percentage times 100.
Maximum *uint16 `json:"maximum,omitempty"`
}
// WindowsStorageResources contains storage resource management settings.
@@ -471,17 +469,29 @@ type WindowsStorageResources struct {
SandboxSize *uint64 `json:"sandboxSize,omitempty"`
}
// WindowsNetworkResources contains network resource management settings.
type WindowsNetworkResources struct {
// EgressBandwidth is the maximum egress bandwidth in bytes per second.
EgressBandwidth *uint64 `json:"egressBandwidth,omitempty"`
// WindowsNetwork contains network settings for Windows containers.
type WindowsNetwork struct {
// List of HNS endpoints that the container should connect to.
EndpointList []string `json:"endpointList,omitempty"`
// Specifies if unqualified DNS name resolution is allowed.
AllowUnqualifiedDNSQuery bool `json:"allowUnqualifiedDNSQuery,omitempty"`
// Comma separated list of DNS suffixes to use for name resolution.
DNSSearchList []string `json:"DNSSearchList,omitempty"`
// Name (ID) of the container that we will share with the network stack.
NetworkSharedContainerName string `json:"networkSharedContainerName,omitempty"`
}
// WindowsHyperV contains information for configuring a container to run with Hyper-V isolation.
type WindowsHyperV struct {
// UtilityVMPath is an optional path to the image used for the Utility VM.
UtilityVMPath string `json:"utilityVMPath,omitempty"`
}
// LinuxSeccomp represents syscall restrictions
type LinuxSeccomp struct {
DefaultAction LinuxSeccompAction `json:"defaultAction"`
Architectures []Arch `json:"architectures,omitempty"`
Syscalls []LinuxSyscall `json:"syscalls"`
Syscalls []LinuxSyscall `json:"syscalls,omitempty"`
}
// Arch used for additional architectures
@@ -540,14 +550,21 @@ const (
type LinuxSeccompArg struct {
Index uint `json:"index"`
Value uint64 `json:"value"`
ValueTwo uint64 `json:"valueTwo"`
ValueTwo uint64 `json:"valueTwo,omitempty"`
Op LinuxSeccompOperator `json:"op"`
}
// LinuxSyscall is used to match a syscall in Seccomp
type LinuxSyscall struct {
Names []string `json:"names"`
Action LinuxSeccompAction `json:"action"`
Args []LinuxSeccompArg `json:"args"`
Comment string `json:"comment"`
Names []string `json:"names"`
Action LinuxSeccompAction `json:"action"`
Args []LinuxSeccompArg `json:"args,omitempty"`
}
// LinuxIntelRdt has container runtime resource constraints
// for Intel RDT/CAT, which was introduced in the Linux 4.10 kernel
type LinuxIntelRdt struct {
// The schema for L3 cache id and capacity bitmask (CBM)
// Format: "L3:<cache_id0>=<cbm0>;<cache_id1>=<cbm1>;..."
L3CacheSchema string `json:"l3CacheSchema,omitempty"`
}
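Taken together, the changes above move Process, Root, and ConsoleSize behind pointers with `omitempty`, drop the Platform struct, and rename LinuxRlimit to POSIXRlimit. A minimal sketch of consuming code after this change (field values are made up; it assumes only the types shown in this diff):
```
package main

import (
	"encoding/json"
	"fmt"

	rspec "github.com/opencontainers/runtime-spec/specs-go"
)

func main() {
	// Process and Root are pointers now, so an empty Spec marshals
	// without them instead of emitting zero-valued structs.
	spec := rspec.Spec{
		Version: rspec.Version,
		Root:    &rspec.Root{Path: "rootfs"},
		Process: &rspec.Process{Args: []string{"sh"}},
	}

	// Consumers must nil-check optional sections before dereferencing.
	if spec.Process != nil {
		spec.Process.Rlimits = []rspec.POSIXRlimit{ // renamed from LinuxRlimit
			{Type: "RLIMIT_NOFILE", Hard: 1024, Soft: 1024},
		}
	}

	out, _ := json.Marshal(spec)
	fmt.Println(string(out))
}
```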

View File

@@ -9,7 +9,7 @@ type State struct {
// Status is the runtime status of the container.
Status string `json:"status"`
// Pid is the process ID for the container process.
Pid int `json:"pid"`
Pid int `json:"pid,omitempty"`
// Bundle is the path to the container's bundle directory.
Bundle string `json:"bundle"`
// Annotations are key values associated with the container.

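With `omitempty` on Pid, a state whose container process has no meaningful PID now serializes without the field rather than as `"pid": 0`. A small sketch (it assumes the Version and ID fields from earlier in the same file):
```
package main

import (
	"encoding/json"
	"fmt"

	rspec "github.com/opencontainers/runtime-spec/specs-go"
)

func main() {
	st := rspec.State{
		Version: rspec.Version,
		ID:      "example",
		Status:  "created",
		Bundle:  "/run/bundle", // illustrative path
	}
	out, _ := json.Marshal(st)
	fmt.Println(string(out)) // no "pid" key when Pid is zero
}
```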
View File

@@ -11,7 +11,7 @@ const (
VersionPatch = 0
// VersionDev indicates development branch. Releases will be empty string.
VersionDev = "-rc5"
VersionDev = ""
)
// Version is the specification version that the package types support.

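Clearing VersionDev changes the reported specification version from "1.0.0-rc5" to the final "1.0.0" (assuming VersionMajor and VersionMinor remain 1 and 0 above this hunk). A one-line check:
```
package main

import (
	"fmt"

	rspec "github.com/opencontainers/runtime-spec/specs-go"
)

func main() {
	// After this change the composed version string has no -rc suffix.
	fmt.Println(rspec.Version) // expected: 1.0.0
}
```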
View File

@@ -6,7 +6,6 @@ import (
"fmt"
"io"
"os"
"runtime"
"strings"
rspec "github.com/opencontainers/runtime-spec/specs-go"
@@ -35,15 +34,11 @@ type ExportOptions struct {
func New() Generator {
spec := rspec.Spec{
Version: rspec.Version,
Platform: rspec.Platform{
OS: runtime.GOOS,
Arch: runtime.GOARCH,
},
Root: rspec.Root{
Root: &rspec.Root{
Path: "",
Readonly: false,
},
Process: rspec.Process{
Process: &rspec.Process{
Terminal: false,
User: rspec.User{},
Args: []string{
@@ -136,7 +131,7 @@ func New() Generator {
"CAP_AUDIT_WRITE",
},
},
Rlimits: []rspec.LinuxRlimit{
Rlimits: []rspec.POSIXRlimit{
{
Type: "RLIMIT_NOFILE",
Hard: uint64(1024),
@@ -308,13 +303,13 @@ func (g *Generator) SetVersion(version string) {
// SetRootPath sets g.spec.Root.Path.
func (g *Generator) SetRootPath(path string) {
g.initSpec()
g.initSpecRoot()
g.spec.Root.Path = path
}
// SetRootReadonly sets g.spec.Root.Readonly.
func (g *Generator) SetRootReadonly(b bool) {
g.initSpec()
g.initSpecRoot()
g.spec.Root.Readonly = b
}
@@ -346,57 +341,45 @@ func (g *Generator) RemoveAnnotation(key string) {
delete(g.spec.Annotations, key)
}
// SetPlatformOS sets g.spec.Process.OS.
func (g *Generator) SetPlatformOS(os string) {
g.initSpec()
g.spec.Platform.OS = os
}
// SetPlatformArch sets g.spec.Platform.Arch.
func (g *Generator) SetPlatformArch(arch string) {
g.initSpec()
g.spec.Platform.Arch = arch
}
// SetProcessUID sets g.spec.Process.User.UID.
func (g *Generator) SetProcessUID(uid uint32) {
g.initSpec()
g.initSpecProcess()
g.spec.Process.User.UID = uid
}
// SetProcessGID sets g.spec.Process.User.GID.
func (g *Generator) SetProcessGID(gid uint32) {
g.initSpec()
g.initSpecProcess()
g.spec.Process.User.GID = gid
}
// SetProcessCwd sets g.spec.Process.Cwd.
func (g *Generator) SetProcessCwd(cwd string) {
g.initSpec()
g.initSpecProcess()
g.spec.Process.Cwd = cwd
}
// SetProcessNoNewPrivileges sets g.spec.Process.NoNewPrivileges.
func (g *Generator) SetProcessNoNewPrivileges(b bool) {
g.initSpec()
g.initSpecProcess()
g.spec.Process.NoNewPrivileges = b
}
// SetProcessTerminal sets g.spec.Process.Terminal.
func (g *Generator) SetProcessTerminal(b bool) {
g.initSpec()
g.initSpecProcess()
g.spec.Process.Terminal = b
}
// SetProcessApparmorProfile sets g.spec.Process.ApparmorProfile.
func (g *Generator) SetProcessApparmorProfile(prof string) {
g.initSpec()
g.initSpecProcess()
g.spec.Process.ApparmorProfile = prof
}
// SetProcessArgs sets g.spec.Process.Args.
func (g *Generator) SetProcessArgs(args []string) {
g.initSpec()
g.initSpecProcess()
g.spec.Process.Args = args
}
@@ -411,7 +394,7 @@ func (g *Generator) ClearProcessEnv() {
// AddProcessEnv adds name=value into g.spec.Process.Env, or replaces an
// existing entry with the given name.
func (g *Generator) AddProcessEnv(name, value string) {
g.initSpec()
g.initSpecProcess()
env := fmt.Sprintf("%s=%s", name, value)
for idx := range g.spec.Process.Env {
@@ -425,7 +408,7 @@ func (g *Generator) AddProcessEnv(name, value string) {
// AddProcessRlimits adds rlimit into g.spec.Process.Rlimits.
func (g *Generator) AddProcessRlimits(rType string, rHard uint64, rSoft uint64) {
g.initSpec()
g.initSpecProcess()
for i, rlimit := range g.spec.Process.Rlimits {
if rlimit.Type == rType {
g.spec.Process.Rlimits[i].Hard = rHard
@@ -434,7 +417,7 @@ func (g *Generator) AddProcessRlimits(rType string, rHard uint64, rSoft uint64)
}
}
newRlimit := rspec.LinuxRlimit{
newRlimit := rspec.POSIXRlimit{
Type: rType,
Hard: rHard,
Soft: rSoft,
@@ -461,7 +444,7 @@ func (g *Generator) ClearProcessRlimits() {
if g.spec == nil {
return
}
g.spec.Process.Rlimits = []rspec.LinuxRlimit{}
g.spec.Process.Rlimits = []rspec.POSIXRlimit{}
}
// ClearProcessAdditionalGids clear g.spec.Process.AdditionalGids.
@@ -474,7 +457,7 @@ func (g *Generator) ClearProcessAdditionalGids() {
// AddProcessAdditionalGid adds an additional gid into g.spec.Process.AdditionalGids.
func (g *Generator) AddProcessAdditionalGid(gid uint32) {
g.initSpec()
g.initSpecProcess()
for _, group := range g.spec.Process.User.AdditionalGids {
if group == gid {
return
@@ -485,7 +468,7 @@ func (g *Generator) AddProcessAdditionalGid(gid uint32) {
// SetProcessSelinuxLabel sets g.spec.Process.SelinuxLabel.
func (g *Generator) SetProcessSelinuxLabel(label string) {
g.initSpec()
g.initSpecProcess()
g.spec.Process.SelinuxLabel = label
}
@@ -501,16 +484,10 @@ func (g *Generator) SetLinuxMountLabel(label string) {
g.spec.Linux.MountLabel = label
}
// SetLinuxResourcesDisableOOMKiller sets g.spec.Linux.Resources.DisableOOMKiller.
func (g *Generator) SetLinuxResourcesDisableOOMKiller(disable bool) {
g.initSpecLinuxResources()
g.spec.Linux.Resources.DisableOOMKiller = &disable
}
// SetLinuxResourcesOOMScoreAdj sets g.spec.Linux.Resources.OOMScoreAdj.
func (g *Generator) SetLinuxResourcesOOMScoreAdj(adj int) {
g.initSpecLinuxResources()
g.spec.Linux.Resources.OOMScoreAdj = &adj
// SetProcessOOMScoreAdj sets g.spec.Process.OOMScoreAdj.
func (g *Generator) SetProcessOOMScoreAdj(adj int) {
g.initSpecProcess()
g.spec.Process.OOMScoreAdj = &adj
}
// SetLinuxResourcesCPUShares sets g.spec.Linux.Resources.CPU.Shares.
@@ -555,32 +532,62 @@ func (g *Generator) SetLinuxResourcesCPUMems(mems string) {
g.spec.Linux.Resources.CPU.Mems = mems
}
// AddLinuxResourcesHugepageLimit adds or sets g.spec.Linux.Resources.HugepageLimits.
func (g *Generator) AddLinuxResourcesHugepageLimit(pageSize string, limit uint64) {
hugepageLimit := rspec.LinuxHugepageLimit{
Pagesize: pageSize,
Limit: limit,
}
g.initSpecLinuxResources()
for i, pageLimit := range g.spec.Linux.Resources.HugepageLimits {
if pageLimit.Pagesize == pageSize {
g.spec.Linux.Resources.HugepageLimits[i].Limit = limit
return
}
}
g.spec.Linux.Resources.HugepageLimits = append(g.spec.Linux.Resources.HugepageLimits, hugepageLimit)
}
// DropLinuxResourcesHugepageLimit drops a hugepage limit from g.spec.Linux.Resources.HugepageLimits.
func (g *Generator) DropLinuxResourcesHugepageLimit(pageSize string) error {
g.initSpecLinuxResources()
for i, pageLimit := range g.spec.Linux.Resources.HugepageLimits {
if pageLimit.Pagesize == pageSize {
g.spec.Linux.Resources.HugepageLimits = append(g.spec.Linux.Resources.HugepageLimits[:i], g.spec.Linux.Resources.HugepageLimits[i+1:]...)
return nil
}
}
return nil
}
// SetLinuxResourcesMemoryLimit sets g.spec.Linux.Resources.Memory.Limit.
func (g *Generator) SetLinuxResourcesMemoryLimit(limit uint64) {
func (g *Generator) SetLinuxResourcesMemoryLimit(limit int64) {
g.initSpecLinuxResourcesMemory()
g.spec.Linux.Resources.Memory.Limit = &limit
}
// SetLinuxResourcesMemoryReservation sets g.spec.Linux.Resources.Memory.Reservation.
func (g *Generator) SetLinuxResourcesMemoryReservation(reservation uint64) {
func (g *Generator) SetLinuxResourcesMemoryReservation(reservation int64) {
g.initSpecLinuxResourcesMemory()
g.spec.Linux.Resources.Memory.Reservation = &reservation
}
// SetLinuxResourcesMemorySwap sets g.spec.Linux.Resources.Memory.Swap.
func (g *Generator) SetLinuxResourcesMemorySwap(swap uint64) {
func (g *Generator) SetLinuxResourcesMemorySwap(swap int64) {
g.initSpecLinuxResourcesMemory()
g.spec.Linux.Resources.Memory.Swap = &swap
}
// SetLinuxResourcesMemoryKernel sets g.spec.Linux.Resources.Memory.Kernel.
func (g *Generator) SetLinuxResourcesMemoryKernel(kernel uint64) {
func (g *Generator) SetLinuxResourcesMemoryKernel(kernel int64) {
g.initSpecLinuxResourcesMemory()
g.spec.Linux.Resources.Memory.Kernel = &kernel
}
// SetLinuxResourcesMemoryKernelTCP sets g.spec.Linux.Resources.Memory.KernelTCP.
func (g *Generator) SetLinuxResourcesMemoryKernelTCP(kernelTCP uint64) {
func (g *Generator) SetLinuxResourcesMemoryKernelTCP(kernelTCP int64) {
g.initSpecLinuxResourcesMemory()
g.spec.Linux.Resources.Memory.KernelTCP = &kernelTCP
}
@@ -591,6 +598,12 @@ func (g *Generator) SetLinuxResourcesMemorySwappiness(swappiness uint64) {
g.spec.Linux.Resources.Memory.Swappiness = &swappiness
}
// SetLinuxResourcesMemoryDisableOOMKiller sets g.spec.Linux.Resources.Memory.DisableOOMKiller.
func (g *Generator) SetLinuxResourcesMemoryDisableOOMKiller(disable bool) {
g.initSpecLinuxResourcesMemory()
g.spec.Linux.Resources.Memory.DisableOOMKiller = &disable
}
// SetLinuxResourcesNetworkClassID sets g.spec.Linux.Resources.Network.ClassID.
func (g *Generator) SetLinuxResourcesNetworkClassID(classid uint32) {
g.initSpecLinuxResourcesNetwork()
@@ -714,12 +727,15 @@ func (g *Generator) ClearPreStartHooks() {
if g.spec == nil {
return
}
if g.spec.Hooks == nil {
return
}
g.spec.Hooks.Prestart = []rspec.Hook{}
}
// AddPreStartHook add a prestart hook into g.spec.Hooks.Prestart.
func (g *Generator) AddPreStartHook(path string, args []string) {
g.initSpec()
g.initSpecHooks()
hook := rspec.Hook{Path: path, Args: args}
g.spec.Hooks.Prestart = append(g.spec.Hooks.Prestart, hook)
}
@@ -729,12 +745,15 @@ func (g *Generator) ClearPostStopHooks() {
if g.spec == nil {
return
}
if g.spec.Hooks == nil {
return
}
g.spec.Hooks.Poststop = []rspec.Hook{}
}
// AddPostStopHook adds a poststop hook into g.spec.Hooks.Poststop.
func (g *Generator) AddPostStopHook(path string, args []string) {
g.initSpec()
g.initSpecHooks()
hook := rspec.Hook{Path: path, Args: args}
g.spec.Hooks.Poststop = append(g.spec.Hooks.Poststop, hook)
}
@@ -744,12 +763,15 @@ func (g *Generator) ClearPostStartHooks() {
if g.spec == nil {
return
}
if g.spec.Hooks == nil {
return
}
g.spec.Hooks.Poststart = []rspec.Hook{}
}
// AddPostStartHook adds a poststart hook into g.spec.Hooks.Poststart.
func (g *Generator) AddPostStartHook(path string, args []string) {
g.initSpec()
g.initSpecHooks()
hook := rspec.Hook{Path: path, Args: args}
g.spec.Hooks.Poststart = append(g.spec.Hooks.Poststart, hook)
}
@@ -830,6 +852,7 @@ func (g *Generator) SetupPrivileged(privileged bool) {
finalCapList = append(finalCapList, fmt.Sprintf("CAP_%s", strings.ToUpper(cap.String())))
}
g.initSpecLinux()
g.initSpecProcessCapabilities()
g.spec.Process.Capabilities.Bounding = finalCapList
g.spec.Process.Capabilities.Effective = finalCapList
g.spec.Process.Capabilities.Inheritable = finalCapList
@@ -860,7 +883,7 @@ func (g *Generator) AddProcessCapability(c string) error {
return err
}
g.initSpec()
g.initSpecProcessCapabilities()
for _, cap := range g.spec.Process.Capabilities.Bounding {
if strings.ToUpper(cap) == cp {
@@ -907,40 +930,35 @@ func (g *Generator) DropProcessCapability(c string) error {
return err
}
g.initSpec()
g.initSpecProcessCapabilities()
for i, cap := range g.spec.Process.Capabilities.Bounding {
if strings.ToUpper(cap) == cp {
g.spec.Process.Capabilities.Bounding = append(g.spec.Process.Capabilities.Bounding[:i], g.spec.Process.Capabilities.Bounding[i+1:]...)
return nil
}
}
for i, cap := range g.spec.Process.Capabilities.Effective {
if strings.ToUpper(cap) == cp {
g.spec.Process.Capabilities.Effective = append(g.spec.Process.Capabilities.Effective[:i], g.spec.Process.Capabilities.Effective[i+1:]...)
return nil
}
}
for i, cap := range g.spec.Process.Capabilities.Inheritable {
if strings.ToUpper(cap) == cp {
g.spec.Process.Capabilities.Inheritable = append(g.spec.Process.Capabilities.Inheritable[:i], g.spec.Process.Capabilities.Inheritable[i+1:]...)
return nil
}
}
for i, cap := range g.spec.Process.Capabilities.Permitted {
if strings.ToUpper(cap) == cp {
g.spec.Process.Capabilities.Permitted = append(g.spec.Process.Capabilities.Permitted[:i], g.spec.Process.Capabilities.Permitted[i+1:]...)
return nil
}
}
for i, cap := range g.spec.Process.Capabilities.Ambient {
if strings.ToUpper(cap) == cp {
g.spec.Process.Capabilities.Ambient = append(g.spec.Process.Capabilities.Ambient[:i], g.spec.Process.Capabilities.Ambient[i+1:]...)
return nil
}
}
@@ -964,7 +982,7 @@ func mapStrToNamespace(ns string, path string) (rspec.LinuxNamespace, error) {
case "cgroup":
return rspec.LinuxNamespace{Type: rspec.CgroupNamespace, Path: path}, nil
default:
return rspec.LinuxNamespace{}, fmt.Errorf("Should not reach here!")
return rspec.LinuxNamespace{}, fmt.Errorf("unrecognized namespace %q", ns)
}
}
@@ -1031,7 +1049,7 @@ func (g *Generator) AddDevice(device rspec.LinuxDevice) {
g.spec.Linux.Devices = append(g.spec.Linux.Devices, device)
}
//RemoveDevice remove a device from g.spec.Linux.Devices
// RemoveDevice remove a device from g.spec.Linux.Devices
func (g *Generator) RemoveDevice(path string) error {
if g.spec == nil || g.spec.Linux == nil || g.spec.Linux.Devices == nil {
return nil
@@ -1046,6 +1064,7 @@ func (g *Generator) RemoveDevice(path string) error {
return nil
}
// ClearLinuxDevices clears g.spec.Linux.Devices
func (g *Generator) ClearLinuxDevices() {
if g.spec == nil || g.spec.Linux == nil || g.spec.Linux.Devices == nil {
return

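The generator no longer seeds Platform and now allocates the pointer-backed sections (Process, Root, Hooks, Capabilities) lazily through the initSpec* helpers, so setters work even when those sections start out nil. A usage sketch, assuming the runtime-tools `generate` package whose methods appear in this diff:
```
package main

import (
	"fmt"

	"github.com/opencontainers/runtime-tools/generate"
)

func main() {
	g := generate.New()

	// Each setter calls an initSpec* helper first, so it is safe
	// even though Process and Root are nil-able pointers now.
	g.SetRootPath("rootfs")
	g.SetProcessArgs([]string{"sh", "-c", "echo hello"})
	g.AddProcessRlimits("RLIMIT_NOFILE", 2048, 1024) // stored as a POSIXRlimit

	fmt.Println("spec populated") // the package's export helpers can then write it out
}
```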
View File

@@ -30,8 +30,9 @@ func ParseSyscallFlag(args SyscallOpts, config *rspec.LinuxSeccomp) error {
}
action, _ := parseAction(arguments[0])
if action == config.DefaultAction {
return fmt.Errorf("default action already set as %s", action)
if action == config.DefaultAction && args.argsAreEmpty() {
// default already set, no need to make changes
return nil
}
var newSyscall rspec.LinuxSyscall
@@ -96,7 +97,7 @@ func ParseDefaultAction(action string, config *rspec.LinuxSeccomp) error {
return err
}
config.DefaultAction = defaultAction
err = RemoveAllMatchingRules(config, action)
err = RemoveAllMatchingRules(config, defaultAction)
if err != nil {
return err
}
@@ -125,3 +126,10 @@ func newSyscallStruct(name string, action rspec.LinuxSeccompAction, args []rspec
}
return syscallStruct
}
func (s SyscallOpts) argsAreEmpty() bool {
return (s.Index == "" &&
s.Value == "" &&
s.ValueTwo == "" &&
s.Operator == "")
}

View File

@@ -15,12 +15,7 @@ func RemoveAction(arguments string, config *rspec.LinuxSeccomp) error {
return fmt.Errorf("Cannot remove action from nil Seccomp pointer")
}
var syscallsToRemove []string
if strings.Contains(arguments, ",") {
syscallsToRemove = strings.Split(arguments, ",")
} else {
syscallsToRemove = append(syscallsToRemove, arguments)
}
syscallsToRemove := strings.Split(arguments, ",")
for counter, syscallStruct := range config.Syscalls {
if reflect.DeepEqual(syscallsToRemove, syscallStruct.Names) {
@@ -42,16 +37,11 @@ func RemoveAllSeccompRules(config *rspec.LinuxSeccomp) error {
}
// RemoveAllMatchingRules will remove any syscall rules that match the specified action
func RemoveAllMatchingRules(config *rspec.LinuxSeccomp, action string) error {
func RemoveAllMatchingRules(config *rspec.LinuxSeccomp, seccompAction rspec.LinuxSeccompAction) error {
if config == nil {
return fmt.Errorf("Cannot remove action from nil Seccomp pointer")
}
seccompAction, err := parseAction(action)
if err != nil {
return err
}
for _, syscall := range config.Syscalls {
if reflect.DeepEqual(syscall.Action, seccompAction) {
RemoveAction(strings.Join(syscall.Names, ","), config)

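RemoveAllMatchingRules now takes a typed rspec.LinuxSeccompAction instead of parsing a string itself, which is why ParseDefaultAction above passes the already-parsed defaultAction. A sketch of the new call, assuming the `generate/seccomp` package these files belong to:
```
package main

import (
	"fmt"

	rspec "github.com/opencontainers/runtime-spec/specs-go"
	"github.com/opencontainers/runtime-tools/generate/seccomp"
)

func main() {
	config := &rspec.LinuxSeccomp{
		DefaultAction: rspec.ActErrno,
		Syscalls: []rspec.LinuxSyscall{
			{Names: []string{"mkdir"}, Action: rspec.ActAllow},
		},
	}

	// Callers now parse action strings themselves and pass the
	// typed value; here the constant is used directly.
	if err := seccomp.RemoveAllMatchingRules(config, rspec.ActAllow); err != nil {
		fmt.Println("remove failed:", err)
		return
	}
	fmt.Println(len(config.Syscalls), "rules remain")
}
```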
View File

@@ -370,26 +370,25 @@ func DefaultProfile(rs *specs.Spec) *rspec.LinuxSeccomp {
var sysCloneFlagsIndex uint
capSysAdmin := false
var cap string
var caps []string
caps := make(map[string]bool)
for _, cap = range rs.Process.Capabilities.Bounding {
caps = append(caps, cap)
for _, cap := range rs.Process.Capabilities.Bounding {
caps[cap] = true
}
for _, cap = range rs.Process.Capabilities.Effective {
caps = append(caps, cap)
for _, cap := range rs.Process.Capabilities.Effective {
caps[cap] = true
}
for _, cap = range rs.Process.Capabilities.Inheritable {
caps = append(caps, cap)
for _, cap := range rs.Process.Capabilities.Inheritable {
caps[cap] = true
}
for _, cap = range rs.Process.Capabilities.Permitted {
caps = append(caps, cap)
for _, cap := range rs.Process.Capabilities.Permitted {
caps[cap] = true
}
for _, cap = range rs.Process.Capabilities.Ambient {
caps = append(caps, cap)
for _, cap := range rs.Process.Capabilities.Ambient {
caps[cap] = true
}
for _, cap = range caps {
for cap := range caps {
switch cap {
case "CAP_DAC_READ_SEARCH":
syscalls = append(syscalls, []rspec.LinuxSyscall{

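Collecting the capabilities into a map instead of a slice deduplicates entries that appear in several capability sets, so the switch below handles each capability once. The pattern in isolation (a standalone sketch, not the DefaultProfile code itself):
```
package main

import "fmt"

func main() {
	// Capabilities gathered from the bounding/effective/inheritable/
	// permitted/ambient sets, with one duplicate across sets.
	sets := [][]string{
		{"CAP_SYS_ADMIN", "CAP_CHOWN"},
		{"CAP_SYS_ADMIN"},
	}
	caps := make(map[string]bool)
	for _, set := range sets {
		for _, c := range set {
			caps[c] = true // a map used as a set drops duplicates
		}
	}
	for c := range caps {
		fmt.Println(c) // each capability visited exactly once
	}
}
```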
View File

@@ -10,6 +10,27 @@ func (g *Generator) initSpec() {
}
}
func (g *Generator) initSpecProcess() {
g.initSpec()
if g.spec.Process == nil {
g.spec.Process = &rspec.Process{}
}
}
func (g *Generator) initSpecProcessCapabilities() {
g.initSpecProcess()
if g.spec.Process.Capabilities == nil {
g.spec.Process.Capabilities = &rspec.LinuxCapabilities{}
}
}
func (g *Generator) initSpecRoot() {
g.initSpec()
if g.spec.Root == nil {
g.spec.Root = &rspec.Root{}
}
}
func (g *Generator) initSpecAnnotations() {
g.initSpec()
if g.spec.Annotations == nil {
@@ -17,6 +38,13 @@ func (g *Generator) initSpecAnnotations() {
}
}
func (g *Generator) initSpecHooks() {
g.initSpec()
if g.spec.Hooks == nil {
g.spec.Hooks = &rspec.Hooks{}
}
}
func (g *Generator) initSpecLinux() {
g.initSpec()
if g.spec.Linux == nil {

View File

@@ -9,6 +9,7 @@ import (
"os"
"path/filepath"
"reflect"
"runtime"
"strings"
"unicode"
"unicode/utf8"
@@ -47,15 +48,27 @@ type Validator struct {
spec *rspec.Spec
bundlePath string
HostSpecific bool
platform string
}
// NewValidator creates a Validator
func NewValidator(spec *rspec.Spec, bundlePath string, hostSpecific bool) Validator {
return Validator{spec: spec, bundlePath: bundlePath, HostSpecific: hostSpecific}
func NewValidator(spec *rspec.Spec, bundlePath string, hostSpecific bool, platform string) Validator {
if hostSpecific && platform != runtime.GOOS {
platform = runtime.GOOS
}
return Validator{
spec: spec,
bundlePath: bundlePath,
HostSpecific: hostSpecific,
platform: platform,
}
}
// NewValidatorFromPath creates a Validator with specified bundle path
func NewValidatorFromPath(bundlePath string, hostSpecific bool) (Validator, error) {
func NewValidatorFromPath(bundlePath string, hostSpecific bool, platform string) (Validator, error) {
if hostSpecific && platform != runtime.GOOS {
platform = runtime.GOOS
}
if bundlePath == "" {
return Validator{}, fmt.Errorf("Bundle path shouldn't be empty")
}
@@ -77,20 +90,21 @@ func NewValidatorFromPath(bundlePath string, hostSpecific bool) (Validator, erro
return Validator{}, err
}
return NewValidator(&spec, bundlePath, hostSpecific), nil
return NewValidator(&spec, bundlePath, hostSpecific, platform), nil
}
// CheckAll checks all parts of runtime bundle
func (v *Validator) CheckAll() (msgs []string) {
msgs = append(msgs, v.CheckPlatform()...)
msgs = append(msgs, v.CheckRootfsPath()...)
msgs = append(msgs, v.CheckMandatoryFields()...)
msgs = append(msgs, v.CheckSemVer()...)
msgs = append(msgs, v.CheckMounts()...)
msgs = append(msgs, v.CheckPlatform()...)
msgs = append(msgs, v.CheckProcess()...)
msgs = append(msgs, v.CheckOS()...)
msgs = append(msgs, v.CheckLinux()...)
msgs = append(msgs, v.CheckHooks()...)
if v.spec.Linux != nil {
msgs = append(msgs, v.CheckLinux()...)
}
return
}
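Both constructors now take an explicit platform, which host-specific validation overrides with runtime.GOOS, and CheckAll only descends into the Linux checks when a linux section exists. Calling the updated constructor might look like this (bundle path and platform are illustrative; it assumes the runtime-tools `validate` package these functions live in):
```
package main

import (
	"fmt"

	"github.com/opencontainers/runtime-tools/validate"
)

func main() {
	// platform is explicit now; with hostSpecific=true it would be
	// replaced by runtime.GOOS inside the constructor.
	v, err := validate.NewValidatorFromPath("/path/to/bundle", false, "linux")
	if err != nil {
		fmt.Println("load failed:", err)
		return
	}
	for _, msg := range v.CheckAll() {
		fmt.Println(msg)
	}
}
```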
@@ -129,8 +143,13 @@ func (v *Validator) CheckRootfsPath() (msgs []string) {
msgs = append(msgs, fmt.Sprintf("root.path is %q, but it MUST be a child of %q", v.spec.Root.Path, absBundlePath))
}
return
if v.platform == "windows" {
if v.spec.Root.Readonly {
msgs = append(msgs, "root.readonly field MUST be omitted or false when target platform is windows")
}
}
return
}
// CheckSemVer checks v.spec.Version
@@ -149,37 +168,6 @@ func (v *Validator) CheckSemVer() (msgs []string) {
return
}
// CheckPlatform checks v.spec.Platform
func (v *Validator) CheckPlatform() (msgs []string) {
logrus.Debugf("check platform")
validCombins := map[string][]string{
"android": {"arm"},
"darwin": {"386", "amd64", "arm", "arm64"},
"dragonfly": {"amd64"},
"freebsd": {"386", "amd64", "arm"},
"linux": {"386", "amd64", "arm", "arm64", "ppc64", "ppc64le", "mips64", "mips64le", "s390x"},
"netbsd": {"386", "amd64", "arm"},
"openbsd": {"386", "amd64", "arm"},
"plan9": {"386", "amd64"},
"solaris": {"amd64"},
"windows": {"386", "amd64"}}
platform := v.spec.Platform
for os, archs := range validCombins {
if os == platform.OS {
for _, arch := range archs {
if arch == platform.Arch {
return nil
}
}
msgs = append(msgs, fmt.Sprintf("Combination of %q and %q is invalid.", platform.OS, platform.Arch))
}
}
msgs = append(msgs, fmt.Sprintf("Operating system %q of the bundle is not supported yet.", platform.OS))
return
}
// CheckHooks check v.spec.Hooks
func (v *Validator) CheckHooks() (msgs []string) {
logrus.Debugf("check hooks")
@@ -259,11 +247,12 @@ func (v *Validator) CheckProcess() (msgs []string) {
}
}
msgs = append(msgs, v.CheckCapablities()...)
if v.spec.Process.Capabilities != nil {
msgs = append(msgs, v.CheckCapabilities()...)
}
msgs = append(msgs, v.CheckRlimits()...)
if v.spec.Platform.OS == "linux" {
if v.platform == "linux" {
if len(process.ApparmorProfile) > 0 {
profilePath := filepath.Join(v.bundlePath, v.spec.Root.Path, "/etc/apparmor.d", process.ApparmorProfile)
_, err := os.Stat(profilePath)
@@ -276,39 +265,64 @@ func (v *Validator) CheckProcess() (msgs []string) {
return
}
func (v *Validator) CheckCapablities() (msgs []string) {
// CheckCapabilities checks v.spec.Process.Capabilities
func (v *Validator) CheckCapabilities() (msgs []string) {
process := v.spec.Process
if v.spec.Platform.OS == "linux" {
var caps []string
if v.platform == "linux" {
var effective, permitted, inheritable, ambient bool
caps := make(map[string][]string)
for _, cap := range process.Capabilities.Bounding {
caps = append(caps, cap)
caps[cap] = append(caps[cap], "bounding")
}
for _, cap := range process.Capabilities.Effective {
caps = append(caps, cap)
caps[cap] = append(caps[cap], "effective")
}
for _, cap := range process.Capabilities.Inheritable {
caps = append(caps, cap)
caps[cap] = append(caps[cap], "inheritable")
}
for _, cap := range process.Capabilities.Permitted {
caps = append(caps, cap)
caps[cap] = append(caps[cap], "permitted")
}
for _, cap := range process.Capabilities.Ambient {
caps = append(caps, cap)
caps[cap] = append(caps[cap], "ambient")
}
for _, capability := range caps {
for capability, owns := range caps {
if err := CapValid(capability, v.HostSpecific); err != nil {
msgs = append(msgs, fmt.Sprintf("capability %q is not valid, man capabilities(7)", capability))
}
effective, permitted, ambient, inheritable = false, false, false, false
for _, set := range owns {
if set == "effective" {
effective = true
}
if set == "inheritable" {
inheritable = true
}
if set == "permitted" {
permitted = true
}
if set == "ambient" {
ambient = true
}
}
if effective && !permitted {
msgs = append(msgs, fmt.Sprintf("effective capability %q is not allowed, as it's not permitted", capability))
}
if ambient && !(effective && inheritable) {
msgs = append(msgs, fmt.Sprintf("ambient capability %q is not allowed, as it's not both effective and inheritable", capability))
}
}
} else {
logrus.Warnf("process.capabilities validation not yet implemented for OS %q", v.spec.Platform.OS)
logrus.Warnf("process.capabilities validation not yet implemented for OS %q", v.platform)
}
return
}
// CheckRlimits checks v.spec.Process.Rlimits
func (v *Validator) CheckRlimits() (msgs []string) {
process := v.spec.Process
for index, rlimit := range process.Rlimits {
@@ -317,14 +331,7 @@ func (v *Validator) CheckRlimits() (msgs []string) {
msgs = append(msgs, fmt.Sprintf("rlimit can not contain the same type %q.", process.Rlimits[index].Type))
}
}
if v.spec.Platform.OS == "linux" {
if err := rlimitValid(rlimit); err != nil {
msgs = append(msgs, err.Error())
}
} else {
logrus.Warnf("process.rlimits validation not yet implemented for OS %q", v.spec.Platform.OS)
}
msgs = append(msgs, v.rlimitValid(rlimit)...)
}
return
@@ -375,46 +382,39 @@ func supportedMountTypes(OS string, hostSpecific bool) (map[string]bool, error)
func (v *Validator) CheckMounts() (msgs []string) {
logrus.Debugf("check mounts")
supportedTypes, err := supportedMountTypes(v.spec.Platform.OS, v.HostSpecific)
supportedTypes, err := supportedMountTypes(v.platform, v.HostSpecific)
if err != nil {
msgs = append(msgs, err.Error())
return
}
if supportedTypes != nil {
for _, mount := range v.spec.Mounts {
for _, mount := range v.spec.Mounts {
if supportedTypes != nil {
if !supportedTypes[mount.Type] {
msgs = append(msgs, fmt.Sprintf("Unsupported mount type %q", mount.Type))
}
}
if !filepath.IsAbs(mount.Destination) {
msgs = append(msgs, fmt.Sprintf("destination %v is not an absolute path", mount.Destination))
}
if !filepath.IsAbs(mount.Destination) {
msgs = append(msgs, fmt.Sprintf("destination %v is not an absolute path", mount.Destination))
}
}
return
}
// CheckOS checks v.spec.Platform.OS
func (v *Validator) CheckOS() (msgs []string) {
logrus.Debugf("check os")
// CheckPlatform checks v.platform
func (v *Validator) CheckPlatform() (msgs []string) {
logrus.Debugf("check platform")
if v.spec.Platform.OS != "linux" {
if v.spec.Linux != nil {
msgs = append(msgs, fmt.Sprintf("'linux' MUST NOT be set when platform.os is %q", v.spec.Platform.OS))
}
if v.platform != "linux" && v.platform != "solaris" && v.platform != "windows" {
msgs = append(msgs, fmt.Sprintf("platform %q is not supported", v.platform))
return
}
if v.spec.Platform.OS != "solaris" {
if v.spec.Solaris != nil {
msgs = append(msgs, fmt.Sprintf("'solaris' MUST NOT be set when platform.os is %q", v.spec.Platform.OS))
}
}
if v.spec.Platform.OS != "windows" {
if v.spec.Windows != nil {
msgs = append(msgs, fmt.Sprintf("'windows' MUST NOT be set when platform.os is %q", v.spec.Platform.OS))
if v.platform == "windows" {
if v.spec.Windows == nil {
msgs = append(msgs, "'windows' MUST be set when platform is `windows`")
}
}
@@ -475,7 +475,7 @@ func (v *Validator) CheckLinux() (msgs []string) {
}
}
if v.spec.Platform.OS == "linux" && !typeList[rspec.UTSNamespace].newExist && v.spec.Hostname != "" {
if v.platform == "linux" && !typeList[rspec.UTSNamespace].newExist && v.spec.Hostname != "" {
msgs = append(msgs, fmt.Sprintf("On Linux, hostname requires a new UTS namespace to be specified as well"))
}
@@ -503,19 +503,21 @@ func (v *Validator) CheckLinux() (msgs []string) {
case "rslave":
case "shared":
case "rshared":
case "unbindable":
case "runbindable":
default:
msgs = append(msgs, "rootfsPropagation must be empty or one of \"private|rprivate|slave|rslave|shared|rshared\"")
msgs = append(msgs, "rootfsPropagation must be empty or one of \"private|rprivate|slave|rslave|shared|rshared|unbindable|runbindable\"")
}
for _, maskedPath := range v.spec.Linux.MaskedPaths {
if !strings.HasPrefix(maskedPath, "/") {
msgs = append(msgs, "maskedPath %v is not an absolute path", maskedPath)
msgs = append(msgs, fmt.Sprintf("maskedPath %v is not an absolute path", maskedPath))
}
}
for _, readonlyPath := range v.spec.Linux.ReadonlyPaths {
if !strings.HasPrefix(readonlyPath, "/") {
msgs = append(msgs, "readonlyPath %v is not an absolute path", readonlyPath)
msgs = append(msgs, fmt.Sprintf("readonlyPath %v is not an absolute path", readonlyPath))
}
}
@@ -650,16 +652,23 @@ func envValid(env string) bool {
return true
}
func rlimitValid(rlimit rspec.LinuxRlimit) error {
func (v *Validator) rlimitValid(rlimit rspec.POSIXRlimit) (msgs []string) {
if rlimit.Hard < rlimit.Soft {
return fmt.Errorf("hard limit of rlimit %s should not be less than soft limit", rlimit.Type)
msgs = append(msgs, fmt.Sprintf("hard limit of rlimit %s should not be less than soft limit", rlimit.Type))
}
for _, val := range defaultRlimits {
if val == rlimit.Type {
return nil
if v.platform == "linux" {
for _, val := range defaultRlimits {
if val == rlimit.Type {
return
}
}
msgs = append(msgs, fmt.Sprintf("rlimit type %q is invalid", rlimit.Type))
} else {
logrus.Warnf("process.rlimits validation not yet implemented for platform %q", v.platform)
}
return fmt.Errorf("rlimit type %q is invalid", rlimit.Type)
return
}
func namespaceValid(ns rspec.LinuxNamespace) bool {